In this lesson, you will learn how to access and prompt Mistral models via API calls and perform various tasks like classification, information extraction, personalization, and summarization. Let's start coding. In the classroom, the libraries are already installed for you, but if you're running this on your own machine, you will want to install the following: pip install mistralai. Let's just comment this out, because we don't need to run it in this session. We have a helper function here to help you load the Mistral API key, and another helper function to load the Mistral models, so you can get started running the Mistral API easily. Okay, let's ask the model: "Hello, what can you do?" Feel free to pause the video and change the prompt however you like. At the end of this lesson, I will walk you through the code in the helper function, so that you can see how the API calls work and use the API outside of this classroom environment.

First, let's take a look at how you can use our models to classify bank customer inquiries. In this prompt: you are a bank customer service bot; your task is to assess customer intent and categorize the customer inquiry. We have a list of predefined categories, and if the text doesn't fit any of them, the model should classify it as "customer service". Then you can see here that we're providing some examples, so the model knows exactly what we're expecting.

Okay. If we want to make sure our prompt doesn't have any spelling or grammar errors, we can ask the model to correct the spelling and grammar first. Let's run this, and then print the response. We can see that it made some grammar corrections; for example, "customer inquiry" is now "the customer inquiry". Now we can use this corrected prompt and replace the placeholder with an actual inquiry: "I am inquiring about the availability of your cards in the EU." Let's run the cell, and we get "country support", which is what we expect. Now let's run another inquiry: "What is the weather today?" Because this doesn't fit any of the predefined categories, the model correctly categorizes it as "customer service".

Now let's come back and take a closer look at this prompt to see which prompting techniques we used. First, we used role-play: we gave the model a role, a bank customer service bot, which provides persona and context. Second, we used few-shot learning, where we give a few examples in the prompt. Few-shot learning can often improve model performance, especially when the task is difficult or when we want the model to respond in a specific manner. Third, we used delimiters like hashes or angle brackets to mark the boundaries between different sections of the text. In our example, triple hashes indicate the examples and angle brackets indicate the customer inquiry. Finally, in case the model is verbose, we can add "do not provide explanations or notes" to make sure the output is concise. If you're wondering which delimiter to use, it doesn't matter; choose whichever you prefer.

Next, I would like to show you an example of information extraction. We have seen many cases where information extraction can be useful. In this example, let's say you have some medical notes and you would like to extract information from this text. In the prompt, we provide the medical notes and ask the model to return JSON that follows a JSON schema, where we define what we want to extract, the type of each variable, and the list of allowed output options. For example, for diagnosis, the model should output one of four predefined options.
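To make this concrete, here is a minimal sketch of what such an extraction prompt could look like. It assumes the classroom's mistral() helper (whose code we'll walk through at the end of the lesson) returns the model's reply as a string; the notes, field names, and the specific diagnosis options shown are illustrative stand-ins, not necessarily the exact ones from the notebook.

```python
import json

# Illustrative medical notes, not the exact ones from the lesson.
medical_notes = """
A 60-year-old male patient presented with increased thirst and
frequent urination. Blood tests confirmed diabetes. The patient
weighs 210 lbs and is a smoker.
"""

# Explicitly ask for JSON and spell out the schema in the prompt.
prompt = f"""
Extract information from the following medical notes:
{medical_notes}

Return a JSON object with the following JSON schema:
{{
    "age": {{"type": "integer"}},
    "gender": {{"type": "string", "enum": ["male", "female", "other"]}},
    "diagnosis": {{"type": "string",
                   "enum": ["migraine", "diabetes", "arthritis", "acne"]}},
    "smoking": {{"type": "string", "enum": ["yes", "no"]}}
}}
"""

# is_json=True enables JSON mode, so the response parses reliably.
response = mistral(prompt, is_json=True)
print(json.loads(response))
```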
Let's run this. When we run the model, we get exactly the format we defined. Here in the mistral helper function, we set is_json to true to enable JSON mode; we'll go through the Python API calls at the end of the lesson. Let's take a look at this prompt again. One strategy we used here is to explicitly ask in the prompt for JSON output; it's important to ask for JSON when we enable JSON mode. Another strategy is to define a JSON schema, which we include in the prompt to ensure the consistency and structure of the JSON output. Note that even without is_json set to true, the output may still be valid JSON, but we recommend enabling JSON mode to reliably get JSON back.

Next, let's take a look at how our models can create personalized email responses to address customer questions, because large language models are really good at personalization tasks. Here's an email where the customer, Anna, is asking the mortgage lender about the mortgage rate, and here is our prompt: you are a mortgage lender customer service bot, and your task is to create personalized email responses to address customer questions; answer the customer's inquiry using the provided facts below. Then we have some numbers about the interest rates in the prompt. Similar to what we have seen before, we use string formatting to insert the actual email content through this email variable here. Let's run the cell, and then run the Mistral model. As you can see, we get a personalized email to Anna answering her questions based on the facts provided. With this kind of prompt, you can imagine how easily you could create your own customer service bot that answers questions about your product. It's important to use clear and concise language when presenting these facts or your product information; this helps the model provide accurate and quick responses to customer queries.

Finally, we have summarization. Summarization is a common task for large language models, and our models do a really good job at it. Let's say you want to summarize this newsletter from The Batch. Here's the prompt I tried: you are a commentator; your task is to write a report on the newsletter; when presented with the newsletter, come up with interesting questions to ask and answer each question; afterward, combine all the information and write a report in markdown format. Then I have a section to insert the content of the newsletter and a section for instructions: first, summarize the key points; second, generate three distinct and thought-provoking questions; third, write an analysis report. Let's run our Mistral model. And this is exactly what we asked for: we get a summary, we get interesting questions, and we get the analysis report.

Of course, you can always ask the model to summarize the newsletter without these instructions. But when you have a complex task, providing step-by-step instructions usually helps the model use a series of intermediate reasoning steps to solve it. In our example, these steps might help the model think through each part and generate a more comprehensive report. One interesting strategy here is that we ask the model to guide its own reasoning and understanding by generating examples with explanations and steps. Another strategy that's used often is to ask the model to output in a certain format, for example, markdown.
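Put together, the summarization prompt might look like the following sketch. The wording mirrors what's described above but may differ slightly from the notebook, newsletter_text is a stand-in for the content of The Batch newsletter, and it reuses the same assumed mistral() helper:

```python
newsletter_text = "..."  # paste the newsletter content here

prompt = f"""
You are a commentator. Your task is to write a report on a newsletter.
When presented with the newsletter, come up with interesting questions
to ask, and answer each question. Afterward, combine all the information
and write a report in markdown format.

# Newsletter:
{newsletter_text}

# Instructions:
## Summarize:
In clear and concise language, summarize the key points and themes
presented in the newsletter.

## Interesting Questions:
Generate three distinct and thought-provoking questions about the
content of the newsletter, and answer each one.

## Write an analysis report:
Using the summary and the answers above, write a comprehensive
analysis report in markdown format.
"""

print(mistral(prompt))
```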
So that's all the prompts I want to show you in this lesson. Throughout, we used a helper function to load the Mistral models; here's how the API call works. We first need to define the Mistral client. If you are running Mistral models outside the classroom environment, you will want to replace this api_key variable with your own API key. We also need to define the chat messages. The chat messages can start with a user message or with a system message. A system message usually sets the behavior and context for the AI assistant, but it is optional. You can have both a system message and a user message, or put everything in the user message; experiment and see which kind of message produces better results. In this lesson, we'll just have everything in the user message. Then we define how we get the model response, where we need to specify the model and the messages. If we enable JSON mode, we need to add a line here, response_format with type json_object, to specify that we want the response formatted as JSON. There are several other optional arguments we can change here; you can check the API specs to see all the details. A minimal end-to-end sketch of this call appears after the wrap-up below.

Okay, so that's it for this lesson. We learned how to prompt Mistral models to do various tasks. In the next lesson, we'll take a look at how to choose which Mistral model to use for which use case. See you in the next lesson.
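For reference, here is a minimal end-to-end sketch of the API call walked through above. It uses the mistralai Python client as it existed when this lesson was recorded (the 0.x interface with MistralClient and ChatMessage; newer versions of the library expose a different interface), and the model name and the way the API key is loaded are illustrative choices.

```python
import os

from mistralai.client import MistralClient
from mistralai.models.chat_completion import ChatMessage

# Outside the classroom, supply your own API key, e.g. via an
# environment variable.
api_key = os.environ["MISTRAL_API_KEY"]

# First, define the Mistral client.
client = MistralClient(api_key=api_key)

# Next, define the chat messages. A system message is optional; in
# this lesson, everything goes in the user message.
messages = [ChatMessage(role="user", content="Hello, what can you do?")]

# Get the model response by specifying the model and the messages.
chat_response = client.chat(
    model="mistral-small-latest",
    messages=messages,
    # To enable JSON mode, also pass the line below (and remember to
    # ask for JSON in the prompt itself):
    # response_format={"type": "json_object"},
)

print(chat_response.choices[0].message.content)
```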