I'm so excited about this use case, because we will use performance optimization to improve our crew's ability to give us consistent results. We just talked about speed and quality and how you want to stay consistent no matter what. Here, we're going to use agents to load support data and understand what happened: what were the customers saying, who from our team was helping them, what was the sentiment, and everything else that can be done with this data.

For this use case we're going to start with tabular data. This is data from support, so it includes a lot of information about who the customers with problems were and what problems they had, and we're going to look into this data to understand patterns. Our agents are going to parse out the different details in all these data points to understand who these customers are, who is helping them from our team, and what issue types they have. We're going to go even beyond that and also look at the issue descriptions themselves and the frustrations these customers might have. So the agents will parse all of this data in order to create not only suggestions, but also charts and visualizations, so that we can understand the data better and even share it with others.

To start, we're going to have three agents: a suggestion engine agent, a report generator agent, and a chart specialist agent. These three agents are going to perform a series of tasks, starting with two tasks in parallel: a generate-suggestions task and a generate-tables task. One task is going to look through all the issue reports that we have and come up with suggestions on how we can do better. The second one is going to look at those same data points and organize them into a tabular view, grouping the data in ways that make sense. After that, we want to plot some visualizations as charts so that we can share this report with others and understand better what the data looks like. This is going to be so fun, because agents that can code and actually plot charts are so powerful. Then our agents are going to put together a final report that encapsulates everything they have done, from the suggestions to the tables to the charts.

This is a very special use case: it's the first time that we're going to see an agent that can code. Our chart specialist is allowed to execute code, and it's going to write code on the fly and execute it for us in order to create the visualizations that we need. This is such a nice use case, because agents that can code can do so much more. At the end of the day, the support data analysis use case will do a few different things: go over the support data, generate suggestions for improvements based on it, organize the data into tabular insights with groupings that make sense, plot charts so that we can visualize any trends, and then wrap things up with a full final report on the analysis. This is so good. We're going to build this and learn how you can use agents that code to create very complex use cases that were not possible before. So let's dive into it right now and see how you can build it yourself. In order to build this use case, we're going to start by loading our regular imports.
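As a point of reference, here's a minimal sketch of what that boilerplate usually looks like, assuming a standard CrewAI notebook setup; the exact imports in your notebook may differ.

```python
# Minimal sketch of the usual notebook boilerplate, assuming a standard
# CrewAI setup; the exact imports in your notebook may differ.
import yaml  # for loading the agents/tasks YAML config files

from crewai import Agent, Crew, Task   # core CrewAI classes
from crewai_tools import FileReadTool  # used later to read the support CSV
```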
This is the same boilerplate as in the other use cases: we're importing all the different libraries that we need, including our main CrewAI classes. For this use case, we're going to be using GPT-4o. We don't even need to set this model, because it's the default for CrewAI. So let's go ahead and load our agents and tasks with our classic snippet that just loads the YAML files. The reason we are using GPT-4o for these agents and tasks is that some of them are going to be fairly complex. Remember, we have agents here that are capable of writing code that actually plots charts, so we need those agents to use a heavier model in order for them to reach a certain level of quality. That's why we're using GPT-4o.

Now let's take a look at our agents and tasks before we keep going. There are three agents for this use case: the suggestion engine agent, the report generator agent, and the chart specialist agent. If you look at them, all of these agents have their roles, goals, and backstories, and again, you're invited to change them in any way you want in order to get different results. If you look at the tasks YAML, you can see that we have four different tasks. We have a suggestion generation task that goes over the historical ticket data, customer satisfaction, and customer feedback. Then we have our table generation task that goes through all the data and tries to summarize key metrics and trends, looking into issue classification results and agent performance in order to put together a tabular view of how our agents and issues are doing. The third task is the chart generation task. This task is going to plot charts around issue distribution, priority levels, resolution times, and customer satisfaction, including agent performance, so that we know who from our support team is doing the best work helping our customers. And as a final task, we have our final report assembly, which basically brings everything together into one final, singular report as markdown.

Now that we know what our agents and tasks are, let's dive into the code and actually build this crew. This crew is going to pull the support data from a local CSV file, so here we're going to use the FileReadTool from the crewai_tools package to read the support tickets data CSV file. From it we're going to get all the information about every single support ticket that we have data on. So let's look into those tickets right now. Here is a quick glimpse of our tickets data: you have ticket ID, customer ID, issue type, and so much more. This is the file that our agents are going to use to extract information from, and it's a bigger file than usual.

So let's start by creating our agents, tasks, and crew. Here you can see that we are pulling the agents' config from the YAML data in order to create our agents, and we're giving our suggestion generation agent and our report generator agent access to the CSV tool that's able to load the ticket information. Then, for our chart generation agent, we're setting a new attribute that we haven't used before: allow_code_execution. With this attribute set to true, the agent is now able to write and execute code in a protected environment using Docker. So whatever this agent does is isolated from your local machine, and it can execute code only in that sandbox.
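To make that concrete, here's a rough sketch of the agent setup just described. The config keys, the variable `agents_config` (standing for the dictionary loaded from the agents YAML file), and the CSV file name are assumptions based on the walkthrough, not the exact course code.

```python
# Rough sketch of the agent setup; the YAML config keys and the CSV file name
# are assumptions based on the walkthrough, not the exact course code.
csv_tool = FileReadTool(file_path="./support_tickets_data.csv")

suggestion_generation_agent = Agent(
    config=agents_config["suggestion_generation_agent"],  # role/goal/backstory from agents.yaml
    tools=[csv_tool],
)

reporting_agent = Agent(
    config=agents_config["reporting_agent"],
    tools=[csv_tool],
)

chart_generation_agent = Agent(
    config=agents_config["chart_generation_agent"],
    allow_code_execution=True,  # write and run code inside an isolated Docker sandbox
)
```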
Now that we have our agents, let's make sure that we create our tasks. For our tasks, we have four different tasks, as we said: the suggestion generation, the table generation, the chart generation, and then the final report. You can see that in the final report task we're also using the context attribute, because we want this final report to pull information from all the other tasks in order to build the final one. So it's going to pull the results from the other three tasks before it actually writes the final report. Now that we have both our agents and our tasks, we can just put them all together into a crew. And the crew is super straightforward: it just brings all the agents and all the tasks into one.

Now, before we kick off this crew, let's test it by running CrewAI test to make sure that we understand how good this crew and its agents actually are. So let's test our crew. During the lesson we mentioned that you have a command line interface with the crewai test command, but here, because we're using a Jupyter notebook, we're going to call the test method directly. You can just take your crew and call test on it, and you can pass a few arguments: in this case, how many iterations you want to test with, and what model you want to act as the LLM judge. Now let's execute this.
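Putting the pieces together before we look at the output, here's an illustrative sketch of the tasks, the crew, and the test call. Here `tasks_config` stands for the dictionary loaded from the tasks YAML file; the config keys, the agent-to-task assignments, and the name of the evaluator-model parameter on test() are assumptions and can vary between CrewAI versions.

```python
# Illustrative sketch; names and parameters are assumptions that may differ
# from the course code and from your CrewAI version.
suggestion_generation = Task(
    config=tasks_config["suggestion_generation"],
    agent=suggestion_generation_agent,
)
table_generation = Task(
    config=tasks_config["table_generation"],
    agent=reporting_agent,
)
chart_generation = Task(
    config=tasks_config["chart_generation"],
    agent=chart_generation_agent,
)
final_report_assembly = Task(
    config=tasks_config["final_report_assembly"],
    agent=reporting_agent,
    # Feed the outputs of the other three tasks into the final report.
    context=[suggestion_generation, table_generation, chart_generation],
)

support_report_crew = Crew(
    agents=[suggestion_generation_agent, reporting_agent, chart_generation_agent],
    tasks=[suggestion_generation, table_generation, chart_generation, final_report_assembly],
    verbose=True,
)

# Notebook equivalent of `crewai test`: run the crew and have an LLM judge
# score every task. The evaluator parameter name varies across versions.
support_report_crew.test(n_iterations=1, openai_model_name="gpt-4o")
```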
Here you can see that our crew starts executing as usual. Our first agent, the suggestion engine agent, starts to work on generating actionable suggestions based on the data. You can also see that it's using our file read tool and loading that CSV right there. It loads all the data, then starts to parse and understand it. Let's tag along and see how this agent execution goes. And here you can see the final answer from our agent: it went over all the tickets and gave pretty actionable suggestions for every single one of them.

Now let's see how our next agent picks up this work. Here you can see the report generator starting to generate tables that summarize some of the key metrics and trends observed in the support data. You can see that it's loading that data again, and then parsing it in order to create the right tables to show in the final report. And here you can see some of the final results for this agent, which include the issue classification results and even agent performance. Agent here means the person who actually helped with the support ticket, and we got good summary numbers across everything.

Now let's see our chart generation agent, which is going to write code in order to plot some visuals. Here you can see our chart specialist agent going through the data, trying to plot visuals around priority levels, resolution times, customer satisfaction, and agent performance. You can see that it has a plan for loading the data and creating the charts, and it actually writes the code. You can see the code here importing pandas and matplotlib, and loading the data in order to create these charts. The images were generated, and we're going to look at what those files look like in a second.

Now let's go to our final agent, which puts together the entire report. Here you can see our final agent, the report generator, and it's going to use all the other tasks' results in order to put together the final report. This is so interesting, because it's now going to bring the charts, the tables, the suggestions, everything into this one final, single report. So let's see what that will look like.

Here you can see the final result, where you can see the overall issues, the charts being displayed, the agent performance, another visual about agent performance, and then customer satisfaction and everything else that we asked about, including suggestions on how to make some of these issues better and key trends that were observed during the execution. Now, before we actually render this markdown so that we can follow along with what exactly it looks like and all these visuals, let's look at how our testing came out. You can see here a table that shows a score for every single task of our crew: from the first task that loads the data and comes up with suggestions, to the second one that groups this data into tabular views, to the third one that writes the charts and creates visuals for us, to the final one that puts the entire report together. We got different scores for every run, and we have a final score for the whole crew. You can also see the total execution time, showing how fast it was. This is a pretty interesting report, and we can run it multiple times as well. As a matter of fact, I recommend you run this for multiple iterations, at least 2 or 3, so that you can look at averages and have a better understanding of how well your agents are actually behaving.

But now that we have this baseline, let's do some training so that we can get our agents to perform better from here on, and then we can run the test again to make sure that we get better results overall. So let's train this crew. To train your crew and its agents, we mentioned in the lesson that you have a CLI where you can use the crewai train command to train your crew directly from your terminal. As we're running this in a Jupyter notebook, we're going to use the train method directly, but it works the same way. You can pass how many iterations you actually want to train this crew for, and where you want to save the training data; as long as it's a .pkl file, you should be good. So let's run the training and see how we can train these agents to do better.
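As a reference, here's a minimal sketch of that training call, the notebook equivalent of `crewai train`; the filename is just an example.

```python
# Minimal sketch of the training call; any .pkl path works for the filename.
support_report_crew.train(
    n_iterations=1,                # how many human-feedback training passes to run
    filename="training_data.pkl",  # where the collected feedback is stored
)
```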
I'm so excited about this, because you can see that our agents are executing the same way they did before, but there's one major difference. Whenever one finishes a task, and let's scroll to the bottom of this, it's going to give us the final answer, but now it's going to ask us for feedback, so that we can tell it how good the answer is, or what we believe it should have done differently. So let's actually give it some feedback here. In here we can type whatever feedback we want. This is very useful if you're trying to get your agents to do better work in general, to comply with a very specific format, to produce longer results, or anything in between. In this case, I'm going to say that I want better suggestions, so let me type that out. You can see that the feedback I'm providing is that I want this agent to make better suggestions: I want them to be more thoughtful and meaningful so that we can improve the overall support quality. So I can just press enter, and my agent is going to do the same task again, but now taking that feedback into account. This is going to repeat for every task of the crew, so let's tag along for a second.

Now our second agent just wrapped up the second task. You can see that the final results include the three tables that we asked for: issue classification results, agent performance, and customer satisfaction. Let's give it some feedback that we want an extra table with actual comparisons. So the feedback we are providing for this agent is that it needs to include an extra table with useful comparisons that we can actually use. Let's send this feedback and see how the agent updates its answer. Here you can see how the agent updated its answer to incorporate our feedback. It still offers the initial three tables, including issue classification results, agent performance, and customer satisfaction, but now it has an extra comparative analysis table that it didn't have before. So here you can see in real time how the agent picked up on the feedback and updated its task output. And the best thing is that your agents are learning from this. It's not only updating its results right now; it's learning to always give these improved results in the future. It's going to learn from this and never forget. You're essentially onboarding your agents and teaching them exactly how you want them to do the job you're giving them.

Now let's look at our next agent's results. This agent generated all the images: it executed the code, and it has every single PNG that we need. So, honestly, I don't have much feedback on this one. I'm just going to let it know that this is great. You can see my feedback is that this is great and that the agent generated all the necessary images, so we should move along.

Now our final agent finished up its work. It gave the full summary, including the tables, the visuals, the suggestions, and everything that you need. But I didn't like that it put all the charts towards the end instead of placing them alongside the related tables, so I'm going to give it some feedback about that. You can see that the feedback I'm providing here is that this agent needs to make sure to always include the chart visuals close to the information related to them, like the tables, and to make sure that the report is complete and not missing any necessary information or great suggestions that we might need. So let me send this feedback to the agent. Now you can see that our final agent updated its response so that the charts sit close to the tables, and the report itself is now a little longer, including more suggestions and information that make for a better report overall.

Now that we have this in and our agents are trained, let's run the test again so that we can compare the test results from before training with the results after training. This is going to be the exact command that we executed before, where we're testing this crew just once, using the same model that we used before. So let's kick this off. Now you can see the test results from running our crew test after running crewai train. We're going to go over it in a second and compare it with the previous tests that we had. But one thing I want to call out is that we're running this only once. Ideally, you run this for multiple iterations so that you can have a more accurate, average-based view of how well your agents and your crew are behaving.
If we compare this with our previous results, what we're going to see is that there was a bump on task number one and task number two, but tasks number three and four fell a little short. Execution time remains virtually the same. So ideally we would run this multiple times, and even do a few different numbers of training iterations, to make sure that our agents are getting closer and closer to what we expect them to do. But this is a great example of how you can optimize a few tasks to get them to perform better over time.

Okay, so we just finished running tests, and you can see that this time we ran multiple iterations. We actually have three iterations of testing our agents, with scores for every single task in each iteration, and we also have averages at the end. We can see how our agents have been doing throughout all the iterations and how much time it took them to get this work done. Now let's compare this with our old crew test execution. I have taken a screenshot behind the scenes so that we can see what this looks like. For the use case we were running here, you can see that task number one had quite a bump in its average, task number two was roughly the same, and task number three was also roughly the same. But task number four and the overall crew score both got big bumps. So on average we have better results for the tasks and for the crew, and the time is roughly the same.

So here you can see how you can make a difference in getting very consistent results every time you run your crew, and in having your agents behave exactly the way you want them to, whether that's following a specific format or doing specific things whenever you need. CrewAI training and testing are such great features; they allow you to really fine-tune your agents and understand everything they are doing. This is so exciting, and it unlocks so many use cases out there. You can use this to make sure your agents follow a specific format, or to make sure that your agents never leave data out of their final reports. So there are a lot of use cases, and guess what? Now you can build them yourself.

Now, before we wrap this up, let's run this crew one final time and render the final result so we can see what this report actually looks like. Okay, to kick off this crew, it's our usual thing where we just call kickoff on the crew. Because we have no variables being interpolated into the tasks or the agents, we don't need to pass any inputs here. So let's kick this off.
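Here's a quick sketch of that final run and the markdown rendering in the notebook; depending on your CrewAI version, the kickoff result may be a plain string or an object with the report text inside it.

```python
from IPython.display import Markdown, display

# No placeholders are interpolated into the agents or tasks, so no inputs are needed.
result = support_report_crew.kickoff()

# Render the crew's final markdown report inline in the notebook.
# str() covers both a plain-string result and a result object that wraps the text.
display(Markdown(str(result)))
```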
Now let's check out the result of our final execution; I'm just going to render it as markdown. If you look into it, we got our whole final report. You can see a tabular view of our issue types, their frequency, and the priority levels, and even a chart with the distribution of all the different issues that we have. You can see that everything seems to be fairly balanced. When you look at priority levels, you can see that API issues have a lot of high-priority problems, and billing issues do as well. When you look at agent performance and which agents are doing better, we have a tabular view of which agents are helping with which kinds of issues and how well they're doing. But by looking at the chart, you get an even better understanding. You can see that agent number one, for example, is handling a lot of tickets but is not getting a very good satisfaction rating. Agent number two, on the other hand, takes as much time as agent number one but has a way higher satisfaction rating. So this is a very interesting way for you to explore the data, thanks to these agents actually writing this code for you.

If we keep scrolling down, we can see the distribution of satisfaction across the months. You can see how things had been trending up, but have since taken a small dip. Then we have all the different suggestions for how we can actually improve our support and our features based on all the support tickets that we got. This is a very interesting report, and you can definitely see how useful it could be in a company setting. You could be using this in your company. You could be using this with your team. And this is just a showcase: you could be doing this for HR, for coding, or for so many other use cases within your company. I'm so excited about this one, because it shows you a practical use case for how you can actually use AI agents out there. I hope you really enjoyed this one, but hey, don't go anywhere. Let's jump into the next lesson right now.