This lesson introduces the concept of a conversable agent, a built-in agent class in AutoGen that can be used to construct multi-agent conversations. You will learn about its basic functionality, build your first two-agent chat, and watch a fun conversation between two stand-up comedians. Let's dive in. Let's begin with the concept of an agent. In AutoGen, an agent is an entity that can act on behalf of human intent: send messages, receive messages, perform actions, generate replies, and interact with other agents. AutoGen has a built-in agent class called ConversableAgent. It unifies different types of agents in the same programming abstraction, and it comes with a lot of built-in functionality. For example, you can use a list of LLM configurations to generate replies, or you can perform code execution or function and tool execution. It also provides a component for keeping a human in the loop and for checking whether to stop responding. You can switch each component on and off and customize it to suit the needs of your application. Using these different capabilities, you can create agents with different roles using the same interface. To begin, let's import the OpenAI API key from the environment. We will import the get_openai_api_key utility function and run it to get the OpenAI API key. Then we define an LLM configuration. In this course we will use GPT-3.5 Turbo as the model. Next, let's import the ConversableAgent class from AutoGen and create our first ConversableAgent object. Here we use the ConversableAgent class to define an agent named "chatbot". We pass it the LLM configuration we defined above, so the agent will be able to use that large language model to generate replies. We also set the human input mode to "NEVER". This means the agent will never ask for human input; it will only use the large language model to generate a reply. In general, you could switch the input mode to other modes.
For example, you could set it to "ALWAYS". Then the agent will always ask for human input before it tries to generate a reply on its own. These are only the basic settings for this agent. In general, you could also add a code execution configuration, function execution, and other settings, but let's begin with this simple setup. The first thing you can do is ask this agent to generate a response to a question using the generate_reply method. Here we call the agent's generate_reply function and give it a list of messages. The message has the content "Tell me a joke." and the role "user". If we run this, you should get a reply from the agent. The agent says, "Sure, here's a joke for you: Why did the scarecrow win an award? Because he was outstanding in his field." So this is the most basic thing you can do: ask a question and get a reply from the agent. Now, if you call this function again, what happens? Let's call the generate_reply function again, but replace the content with "Repeat the joke." Do we expect the agent to repeat the joke? Actually, no. When we call generate_reply, it doesn't alter the internal state of the agent. So when we call it again, the agent doesn't know that it already generated a reply before; it's a fresh call to generate_reply, and it generates a new reply without knowing that it replied once before. You could certainly use this in an application for generating independent replies, but if we want to keep and maintain state and have the agent perform a series of tasks, we need a different approach. In the next part, let's look at how to create a conversation between multiple agents, using a stand-up comedy example. We want to create an application where two stand-up comedians talk to each other and make fun of each other. The first agent we'll create is a ConversableAgent named Cathy.
In this case, we give it a system message to let the agent know: "Your name is Cathy and you are a stand-up comedian." We pass the same LLM configuration and again set the human input mode to "NEVER". When you don't specify a system message, the agent has an empty one and behaves as a general-purpose assistant agent. With this message, we customize the behavior of the agent. Okay, that's one comedian created. How about adding another one? Let's create another ConversableAgent named Joe and give it the system message "Your name is Joe and you are a stand-up comedian," and we add another instruction after that: "Start the next joke from the punchline of the previous joke." This gives a more specific instruction about how to carry the conversation forward. So we have two comedians. Now we're ready to put them to work and create a conversation. To initiate the conversation, we call the initiate_chat function on one of the agents. For example, if we want Joe to start the conversation, we call Joe's initiate_chat function, set the recipient to Cathy, and give it the initial message: "I'm Joe. Cathy, let's keep the jokes rolling." We set max_turns to two, so there will be two turns of conversation and then it finishes. Let's see what happens. The first message is the same one we set: "I'm Joe. Cathy, let's keep the jokes rolling." The next message is from Cathy: "Hey Joe, great to meet another comedy enthusiast. Let's dive right in with some jokes. Why did the math book look sad? Because it had too many problems." In the next turn, Joe says, "Well, Cathy, at least now we know why the math book was always so negative." You can see that Joe follows our earlier instruction and starts the next joke from the last punchline, and Cathy follows up: "Exactly. I just couldn't subtract the sadness from its pages."
So she continues that joke and finally proposes another one. After these two turns of exchange, the conversation stops. Once the conversation finishes, we can inspect the chat history in the chat result. We import the pprint library and print the chat history, and you can see all the messages that were exchanged: first from Joe, second from Cathy, third from Joe, then Cathy again. You can also inspect the token usage in the chat result by checking the chat result's cost. We see that, using the cheap GPT-3.5 Turbo model, we consumed 97 completion tokens and 219 prompt tokens, 316 tokens in total, along with the total cost in dollars. In general, we can summarize the conversation in different ways. You can check the summary of the chat result via the chat result's summary. By default, the last message is used as the summary of the chat result, so in this case the summary is the long final message from Cathy. If you want to change the summary method, you can configure a different one. For example, we can run the conversation again with a different summary method called "reflection_with_llm" and give it a summary prompt such as "Summarize the conversation." What happens is that after the conversation finishes, we call the large language model with this prompt, and the large language model reflects on the conversation and produces a new summary. The same conversation and the same result occur, because by default we use caching to generate the same messages for the same input. Now if we check the summary again, this time it becomes "The conversation was focused on sharing jokes and puns between Joe and Cathy. They playfully exchanged math- and character-related jokes to keep the laughs flowing." That's a better summary.
You'll notice that I set max_turns equal to two to control how many turns happen in this conversation. What if you don't know the right number of turns before the conversation finishes? What can you do? We can change the termination condition by providing an additional configuration called is_termination_msg. This is a Boolean function: it takes a message as input and returns true or false, meaning whether the message indicates that the conversation should be terminated. For example, you'll notice that I changed the system message here: "When you're ready to end the conversation, say 'I gotta go.'" We also pass a stopping condition that checks whether "I gotta go" appears in the message. If we detect "I gotta go", we consider the conversation finished. This condition is given to each agent, so each agent checks it on the message received from the other agent, and if they see "I gotta go" contained in a received message, they stop replying. Let's run that, initialize the conversation again with the new stopping condition, and see what happens. The first few messages are similar, but this time you can see there are more turns of conversation. Cathy made a joke, Joe responded, Cathy asked Joe for a different joke, and Joe responded with other jokes. Eventually the last message from Joe is "Glad you enjoyed it, Cathy. Puns are always a hit. Thanks for the laughs. I gotta go." So Joe ended the conversation with "I gotta go", Cathy saw that phrase, and she stopped replying. This is a different, more flexible way of stopping the conversation. Now, after the conversation finishes, what if you want to continue the conversation? Or what if you want to see whether this time the agents preserve their state? We can give them a test, with questions similar to the ones we asked before.
This time, we let Cathy send another message, "What's the last joke we talked about?", and set the recipient to Joe. Will they remember the last joke? Let's check. Bingo. Joe responded: "The last joke we talked about was the scarecrow winning an award because he was outstanding in his field." They also follow the same termination condition: this time Cathy said "I gotta go", so Joe knew that was the signal to stop the conversation, and they stopped replying. This demonstrates a way to make agents work in conversations: start a conversation, continue the conversation, and remember where it left off. And this is just a very basic demonstration of how to use the ConversableAgent to construct a conversation between two agents. In the next few lessons, we will learn many other conversation patterns and agent design patterns, including tool use, reflection, planning, code execution, and more.