it won't remember what you asked or how it answered your earlier question. To get a large language model such as Llama to act like a chatbot and remember your conversations, you'll practice prompting for multi-turn conversations. You'll ask the model to suggest fun things you can do on the weekend, and then you'll ask follow-up questions based on the activities it proposes. Let's try it out.

As we have seen in previous lessons, we'll first import llama from our utils package. So let's do that. Now we're going to ask our Llama model a simple question: what are some fun activities I can do this weekend? Let's see the response from our model. Llama has responded with a bunch of nice activities I could do this weekend. My favorite would be spending a day at a spa with a massage. As you can see, the response is pretty good.

Now let's add another prompt, which I'll number as prompt 2. I'm going to ask Llama which of these would be good for my health, and run it. What do you think will happen? As you can see in the output, the model talks about caffeine, it talks about alcohol, and it gives me a very generic answer about what is good for my health. What it did not do is take "these" into account: "these" referred to the fun activities I was going to do this weekend.

So here's what we did. You asked the model for some fun ideas for the weekend. It generated a response with lots of good ideas, including hiking and a spa day. Then you asked which of these would be good for your health, but the model isn't referring back to its previous list of ideas; instead it talks about how caffeine and alcohol are bad for your health. Why did Llama change the topic? Because it doesn't remember the earlier exchange. So what do you need to do to get Llama to stay on topic? To help Llama keep track, we need to build up the context of the conversation, passing the first prompt and its response along with the new question, which we'll call prompt 2. With all that context, Llama generates a sensible answer that stays on topic.

You can see that the overall prompt is built from a set of prompt-response pairs. The chat prompt always finishes with the latest prompt from the user, letting the model know it should respond. Note that you're always passing a single prompt consisting of multiple parts, and with each turn you'll add a new prompt-response pair.

For Llama chat prompts, you have to use a specific set of tags. Here you see the instruction tags, which we learned about in a previous lesson, in square brackets. Remember that a chat prompt ends with the latest input from the user, so you'll wrap that with instruction tags too. Then you wrap each prompt-response pair with a new set of characters called start and end tags. You open the last prompt with a start tag, but this time you don't close it with an end tag. That's because the turn isn't over: you want the model to respond.

Let's go back to the notebook and try this chat prompt format out in code. Here's our prompt 1 and our response to prompt 1; let's run this. Here's our prompt 2. Now let's create our chat prompt, making sure we put the right tags in place. We'll put prompt 1 here, end it with our instruction tags, and then add the response to prompt 1. Let's print this prompt and see what we get. So we got the full prompt, which we will now pass to the LLM. You can see our prompt has instruction tags, and it has a start tag.
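Putting that together, here's a minimal sketch of what those notebook cells might look like, assuming the course's llama helper from utils (the add_inst and verbose parameters follow this lesson's description, but the exact signature is an assumption):

```python
from utils import llama  # course-provided helper (interface assumed)

prompt_1 = "What are some fun activities I can do this weekend?"
response_1 = llama(prompt_1)

prompt_2 = "Which of these would be good for my health?"

# Wrap the completed turn in start/end tags (<s> ... </s>) and each
# user prompt in instruction tags ([INST] ... [/INST]). The final
# prompt is opened with <s>[INST] but not closed, because it is now
# the model's turn to respond.
chat_prompt = f"""
<s>[INST] {prompt_1} [/INST]
{response_1}
</s>
<s>[INST] {prompt_2} [/INST]
"""
print(chat_prompt)

# add_inst=False stops the helper from adding its own [INST] tags,
# since we've written them ourselves; verbose=True prints the final
# prompt that is sent to the model.
response_2 = llama(chat_prompt, add_inst=False, verbose=True)
print(response_2)
```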
This is the response we get from the model, and this is the end tag, marking the end of our first turn. Then comes the start of our second turn, where we send a new prompt to our Llama model. The reason we pass add_inst=False is that we are constructing the multi-turn chat prompt ourselves, so we want to turn off the instruction-tag addition in our helper function. And we pass verbose=True so that we can see our prompt. What we see printed is our well-formatted prompt: here's the input prompt to the model, here's the response, here's the end of our first turn, and then here's our second prompt to the model.

Okay, so now let's print out the response and see what happens. Here's the response, and you can see that it's pretty good. It relates back to our first prompt and keeps the previous context. Some of these activities look really good; maybe you can try them out with your friends over the weekend.

Now let's move on to a helper function for our multi-turn chats. For this lesson, we have provided a second helper function that will format your chat history and prompt for you. We are calling it llama_chat. So let's import llama_chat from utils and try to use it here. We'll create the same prompt as before, and then create our prompt 2. Now let's get our response, setting verbose=True so that we can see the full prompt. Here's the prompt, which looks right based on everything we have learned so far. Great, now we can see that we get a similar response using our llama_chat helper function.

Okay, as a next step I would like you to try this yourself. Try adding a follow-up query to this conversation: you can add an additional prompt right here. You could ask which of these activities would be fun with friends, or any other question you might want to. I'm going to give you some starter code, sketched below, so you can try it by yourself. Here's our prompt 3, "Which of these activities would be fun with friends?" As you can see, I've added prompt 3 to my list of prompts, and I've added the response we got when we called the model before running this cell to the list of responses. I want you to try this, run it, and see what response you get.

In the next lesson, we will go over prompt engineering best practices that will help you prompt the LLM to perform a range of tasks, including summarization and much more. So let's go on to the next lesson.
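For reference, here is a rough sketch of that starter code, continuing from the variables in the earlier sketch (the llama_chat interface shown here, a list of prompts plus a list of the responses so far, is an assumption based on this lesson's description):

```python
from utils import llama_chat  # course-provided helper (interface assumed)

prompt_3 = "Which of these activities would be fun with friends?"

# prompts holds every user turn, including the new one;
# responses holds the model's answers from the earlier calls.
prompts = [prompt_1, prompt_2, prompt_3]
responses = [response_1, response_2]

# llama_chat formats the chat history into one multi-turn prompt;
# verbose=True prints that prompt so you can inspect it.
response_3 = llama_chat(prompts, responses, verbose=True)
print(response_3)
```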