Now you will reuse the BLIP model for visual question answering on an image. This means you can give the model a picture of a dog and a woman on the beach and ask a question such as "Who is at the beach?", and the model will answer your question based on that image. Let's do that now. For the visual question answering task, you ask the model a question about the image, and the model should return an answer. For example, if you ask the model what the dog is wearing, it should answer "a pair of glasses." Now let's code this.

In this classroom, the libraries have already been installed. If you're running this on your own machine, you can install the Transformers library by running the following. Since the libraries are already installed in this classroom, I won't run this and I'll just comment it out.

Just like the last two lessons, to perform this specific task we need a few things: the model and the processor. So let's import the BlipForQuestionAnswering class from the Transformers library. Now that the class is loaded, let's load the model by using the from_pretrained method and passing the related checkpoint for question answering. Let's do the same for the processor. We will load the AutoProcessor class from the Transformers library, load the processor using from_pretrained as well, and pass the same checkpoint. And that's it.

Now let's load the image that we need to pass to the processor to get the inputs; then we will pass those inputs to the model to generate the answer. We will open the image with the Image class from the PIL library, passing the path to the image that we want to open. Let's check the image. And here we are: you can see that we have a picture of a dog and a woman on the beach.

Now let's ask our model a question about this specific image. For example, let's ask it how many dogs are in the picture. We need to pass the inputs to the model, so we will use the processor to process both the image and the question, and we will ask it to return PyTorch tensors. Just like the previous lab, we will use the generate method from the model to get the outputs, and we will use the processor to decode the output. And you can see that the model was able to answer the question correctly: there is indeed only one dog in the picture.

Now, I invite you to stop the video and ask other questions about this specific picture, or you can even upload your own pictures and ask whatever question comes to your mind. In the next lesson, we will learn about zero-shot image classification with the CLIP model.
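Here is a minimal sketch of the steps walked through in this lesson, put together in one place. The checkpoint name "Salesforce/blip-vqa-base" and the file name "beach.jpeg" are assumptions, since the transcript does not spell them out; substitute whatever checkpoint and image path your own notebook uses.

```python
# In the classroom the libraries are pre-installed; on your own machine,
# uncomment the line below to install Transformers first.
# !pip install transformers

from transformers import BlipForQuestionAnswering, AutoProcessor
from PIL import Image

# Load the BLIP model fine-tuned for visual question answering, together
# with its matching processor, from the same checkpoint.
# NOTE: the checkpoint name is an assumption, not confirmed by the lesson.
checkpoint = "Salesforce/blip-vqa-base"
model = BlipForQuestionAnswering.from_pretrained(checkpoint)
processor = AutoProcessor.from_pretrained(checkpoint)

# Open the image; the file name here is a placeholder for your own image.
image = Image.open("beach.jpeg")

# The processor turns the image and the question into model inputs,
# returned as PyTorch tensors.
question = "How many dogs are in the picture?"
inputs = processor(image, question, return_tensors="pt")

# Generate the answer tokens, then decode them back into text.
out = model.generate(**inputs)
answer = processor.decode(out[0], skip_special_tokens=True)
print(answer)  # in the lesson, the model answers that there is 1 dog
```

To try other questions, as the lesson suggests, just change the `question` string (or open a different image) and rerun the last four lines; the same processor-then-generate-then-decode pattern applies unchanged.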