Welcome to this short course on LangChain for large language model application development. By prompting an LLM, or large language model, it is now possible to develop AI applications much faster than ever before. But an application can require prompting an LLM multiple times and parsing its output, so there's a lot of glue code that needs to be written. LangChain, created by Harrison Chase, makes this development process much easier. I'm thrilled to have Harrison here, who has built this short course in collaboration with DeepLearning.AI to teach how to use this amazing tool.

Thanks for having me. I'm really excited to be here. LangChain started as an open-source framework for building LLM applications. It came about when I was talking to a bunch of folks in the field who were building more complex applications and saw some common abstractions in terms of how they were being developed. We've been really thrilled at the community adoption of LangChain so far, so I look forward to sharing it with everyone here and to seeing what people build with it.

And in fact, as a sign of LangChain's momentum, not only does it have numerous users, but there are also many hundreds of contributors to the open-source project, which has been instrumental to its rapid rate of development. This team really ships code and features at an amazing pace. So hopefully, after this short course, you'll be able to quickly put together some really cool applications using LangChain, and who knows, maybe you'll even decide to contribute back to the open-source LangChain effort.

LangChain is an open-source development framework for building LLM applications. We have two different packages, a Python one and a JavaScript one. They're focused on composition and modularity, so they have a lot of individual components that can be used in conjunction with each other or by themselves, and that's one of the key value adds. The other key value add is a bunch of different use cases: chains are ways of combining these modular components into more end-to-end applications, making it very easy to get started with those use cases.

In this class, we'll cover the common components of LangChain. We'll talk about models. We'll talk about prompts, which are how you get models to do useful and interesting things. We'll talk about indexes, which are ways of ingesting data so that you can combine it with models. And then we'll talk about chains, which are more end-to-end use cases, along with agents, which are a very exciting type of end-to-end use case that uses the model as a reasoning engine. (A short code sketch of how these pieces compose appears at the end of this introduction.)

We're also grateful to Ankush Gola, who is a co-founder of LangChain alongside Harrison Chase, for putting a lot of thought into these materials and helping with the creation of this short course. And on the DeepLearning.AI side, Geoff Ludwig, Eddy Shyu, and Diala Ezzeddine have also contributed to these materials.

And so with that, let's go on to the next video, where we'll learn about LangChain's models, prompts, and parsers.
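
To make the idea of composing modular components concrete, here is a minimal sketch of a prompt, a model, and a chain wired together. It assumes the classic LangChain Python package and an OpenAI API key available in the environment; the product string and template text are illustrative, and import paths and class names have shifted across LangChain releases, so treat this as a sketch rather than the definitive API.

```python
# Minimal sketch (assumes the classic LangChain Python API and that
# OPENAI_API_KEY is set in the environment; details may vary by version).
from langchain.chat_models import ChatOpenAI        # a model
from langchain.prompts import ChatPromptTemplate    # a prompt
from langchain.chains import LLMChain               # a chain composing the two

# A prompt template with a single input variable.
prompt = ChatPromptTemplate.from_template(
    "What is a good name for a company that makes {product}?"
)

# A chat model wrapper; temperature 0 keeps the output more deterministic.
llm = ChatOpenAI(temperature=0.0)

# The chain wires the prompt and the model into one callable unit.
chain = LLMChain(llm=llm, prompt=prompt)

print(chain.run(product="colorful socks"))
```

The point of the sketch is the composition: each piece (prompt, model, chain) can be swapped out or reused on its own, which is the modularity described above.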