Welcome to AI Agents in LangGraph, built in partnership with LangChain and Tavily, and taught by Harrison Chase, co-founder and CEO of LangChain, as well as Rotem Weiss, co-founder and CEO of Tavily. Harrison, it was not quite a year ago that we were working on our first course on LLM frameworks. Back then, we had an agent example, but it was a bit of a struggle to get it to work. Since then, I've seen many teams successfully build AI agents. What's your view on that? Yeah, that's exactly right, Andrew. We've seen many people successfully building agentic applications now, and I think there were a couple of key improvements over the last year. First, function-calling LLMs have made tool use much more predictable and stable. And also, specific tools like search have been adapted for agentic use. Yes. If you think about it, when you do a query on a search engine, it returns multiple links that you can then follow to find answers. However, what an agent really wants are answers that it can then reference back to the source links. Also, the agent needs predictable formats for its results. This is exactly what agentic search provides. That makes sense. And by the way, I'm personally a user of LangChain and Tavily, which is part of why I'm excited to have both of you here. Let me start by describing agents and agentic workflows. Let's say that the three of us are going to write a paper together. Maybe I would start with some planning and try to put together an initial outline of the paper. Yeah. And I can take it from there and do some research, maybe run some queries and compile documents related to our topic. Okay, I guess that leaves me to write the first draft. I hope it's not a long paper. Yeah, yeah. Short paper, hopefully. And then after that, I might review the paper, try to make some constructive suggestions, and hand it back to Harrison to see if he wants to make some changes, or to Rotem to see if he wants to do some more research.
And so on. This would be an example of an agentic, or agent-like, workflow, in which we iterate to produce a work product. In contrast, when people use LLMs to write essays today, they often supply a prompt and have the LLM write the essay from the first word to the last in one shot. This type of iterative workflow gives a much better work product. In fact, as a person, I don't write that well if you force me to just write from start to finish with no backspacing allowed. So concretely, you can similarly prompt an LLM to write an outline, prompt an LLM to write or revise a draft, prompt an LLM to do research, and so on. But to dive a bit deeper, let me share what I think are some of the key design patterns of agentic workflows. First, I think there's planning, which is thinking through the steps to take, like the outline and what to do after that. Then tool use: knowing what tools are available and how to use them, like our search tool. Reflection, which refers to iteratively improving a result, possibly with multiple LLMs critiquing and making useful suggestions to drive that type of editing cycle. Multi-agent communication: you can think of each of us as playing a role, with each agent being an LLM given a unique prompt to play a unique part in this process. And then memory, that is, tracking the progress and results over the multiple steps. Now, some of these capabilities are related to the LLM itself, like function calling for tool use. But many of these capabilities are actually implemented outside the LLM, by the framework the agents operate in. Exactly. And the LangChain framework has had many of these elements for some time. Memory is available in multiple forms, and support for function-calling LLMs and tool execution is available. We talked about this in the last DeepLearning.AI course we did, the Functions, Tools and Agents course. But LangChain has recently updated its support for agentic workflows.
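The reflection and memory patterns described above can be sketched in a few lines of plain Python. This is a minimal illustration only, with the LLM calls stubbed out as hypothetical placeholder functions; in a real agent, each `llm_*` function would call an actual model:

```python
# A minimal sketch of the reflection pattern: draft, critique, revise.
# The llm_* functions are hypothetical placeholders standing in for real LLM calls.

def llm_draft(topic: str) -> str:
    return f"Draft of an essay about {topic}."

def llm_critique(draft: str) -> str:
    # A real critic LLM would return suggestions, or "DONE" when satisfied.
    return "Add a concrete example." if "revised" not in draft else "DONE"

def llm_revise(draft: str, critique: str) -> str:
    return f"{draft} (revised to address: {critique})"

def reflect(topic: str, max_rounds: int = 3) -> str:
    """Iteratively improve a draft until the critic is satisfied."""
    draft = llm_draft(topic)              # planning/drafting step
    for _ in range(max_rounds):           # memory: the draft is carried across steps
        critique = llm_critique(draft)    # reflection: a second LLM role critiques
        if critique == "DONE":
            break
        draft = llm_revise(draft, critique)
    return draft

print(reflect("agentic workflows"))
```

The point of the sketch is the editing cycle itself: the result of one step is fed back in as input to the next, which is exactly what one-shot essay generation lacks.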
Here are some examples of agents that we now better support. ReAct was an early paradigm for building agents; ReAct stands for reasoning and acting. Another example that we recently added support for is the one presented in the Self-Refine paper, which does the iterative refinement you were talking about. And most recently, you can see the AlphaCodium example, which creates a coding agent using flow engineering. From all these diagrams, you can see that these agents and their behavior are defined by a cyclical graph. And to support building agents like this directly, LangChain has been extended with LangGraph. In this course, you will start by building an agent from scratch with just an LLM and Python. Then, you'll learn about the components of LangGraph by rebuilding that same agent using the LangGraph components directly. Since search tools are such an important part of many agent applications, you will learn the capabilities of agentic search and how to use it. There are two additional capabilities that are helpful when building agents. First is being able to receive human input; this allows you to guide an agent at critical points. The second is persistence: the ability to store the current state of information so that you can return to it later. This is great both for debugging agents and for productionizing them. We will build a project using LangGraph, but you'll have to reach the last lesson to figure out what it is. And at the end, I'll describe some of the cool applications and future directions I see for this technology. Many people have worked to create this course. From LangChain, I'd like to thank Lance Martin and Nuno Campos; from Tavily, Assaf Elovic; and from DeepLearning.AI, Geoff Ladwig. All right, we don't want to keep those agents waiting. Let's go on to the next video to get started.
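As a footnote before the next video: the "agent as a cyclical graph" idea above can be made concrete with a pure-Python sketch. This is not the LangGraph API, just an illustration of the shape it formalizes: nodes are functions that read and update a shared state, each node names the next node to run, and a conditional edge can loop back, so the graph may contain cycles:

```python
# Hypothetical sketch of an agent as a cyclical graph, in the spirit of
# LangGraph (this is NOT the LangGraph API, just plain Python).

END = "__end__"

def plan(state):          # node: decide what to search for
    state["query"] = "latest agent frameworks"
    return "search"

def search(state):        # node: stand-in for an agentic search tool
    state["results"].append(f"result for: {state['query']}")
    return "decide"

def decide(state):        # conditional edge: loop back to search, or finish
    return "search" if len(state["results"]) < 2 else END

NODES = {"plan": plan, "search": search, "decide": decide}

def run_graph(entry: str = "plan") -> dict:
    state = {"results": []}        # persistence would mean saving this state
    node = entry
    while node != END:             # cycles allowed: search -> decide -> search
        node = NODES[node](state)  # each node returns the name of the next node
    return state

final = run_graph()
print(final["results"])
```

Because the whole agent reduces to "a state plus a pointer to the current node," it is easy to see where persistence and human input slot in: save the state between steps, and pause at a chosen node to ask a person before continuing.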