Embeddings to Applications, built in partnership with Weaviate.

Large language models have enabled many new and exciting applications. But a known shortcoming of LLMs is that a trained language model has no knowledge of recent events, or of knowledge available only in proprietary documents it did not get to train on. To tackle this problem, you can use retrieval augmented generation, or RAG, and a key component of RAG is a vector database. Proprietary or recent data is first stored in this vector database. Then, when a query concerns that information, the query is sent to the vector database, which retrieves the related text. Finally, this retrieved text can be included in the prompt to the LLM, giving it context with which to answer your question.

Vector databases actually preceded this recent generative AI explosion. They have long been a core part of semantic search applications, which search on the meaning of words or phrases rather than on exact keyword matches, as well as of recommender systems, where they are used to find related items to recommend to a user.

I think it would be really useful for you as an AI developer to understand how a vector database works, what really goes on under the hood, and I think this will allow you to use vector databases more effectively in your own projects. For example, you'll know how to decide when to apply sparse search, such as keyword search; dense search, which is what you get with vector similarities; or hybrid search, which combines both. Understanding how different similarity calculations work will also help you choose the best distance metric, and understanding the challenge of scaling vector databases and search will help you choose between different embedding search algorithms.
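To make the dense-search idea concrete, here is a minimal sketch of linear (brute-force) vector search ranked by cosine similarity. The three-dimensional "embeddings" and document texts below are made up purely for illustration; a real system would use high-dimensional vectors produced by an embedding model.

```python
import math

def dot(a, b):
    # Dot product: one of the distance metrics discussed in this course.
    return sum(x * y for x, y in zip(a, b))

def cosine_similarity(a, b):
    # Cosine similarity: dot product normalized by vector lengths,
    # so it compares direction (meaning) rather than magnitude.
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

# Toy "vector database": each document mapped to a made-up embedding.
documents = {
    "cat on a mat":        [0.9, 0.1, 0.0],
    "kitten on a rug":     [0.8, 0.2, 0.1],
    "stock market report": [0.0, 0.1, 0.9],
}

# Made-up query embedding, e.g. for "a cat sitting down".
query = [0.85, 0.15, 0.05]

# Linear search: score every entry, then rank by similarity.
# Approximate search methods exist to avoid this full scan at scale.
ranked = sorted(documents,
                key=lambda d: cosine_similarity(query, documents[d]),
                reverse=True)
print(ranked[0])  # the semantically closest document
```

Note that "kitten on a rug" shares no keywords with the query, yet still ranks near the top because its (made-up) embedding points in a similar direction; that is exactly what dense search adds over sparse keyword matching.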
I'm thrilled that our instructor for this course is Sebastian Witalec, who is Head of Developer Relations at Weaviate and has deep experience teaching users how to build and use vector databases.

Thanks, Andrew. It's a real privilege to be working with you on this course. By the end of this course, you'll understand and implement many of the elements that make up vector databases: embeddings, the dense vectors that represent the meaning of a phrase; distance metrics, like dot product or cosine distance; different kinds of vector search, such as linear search, where you look at all the entries in a database, and approximate search, where you speed things up by allowing results that are merely close; and different search paradigms, namely sparse, dense, and hybrid search. Finally, you'll build real-world applications of vector databases, creating a RAG system with hybrid and multilingual search functionality.

That's a lot of great stuff, and hopefully some of the ideas we cover will inspire you to continue your own LLM and machine learning journey by building on top of vector databases. Quite a few people have contributed to this course. We're grateful to Zain Hasan from Weaviate, as well as Geoff Ladwig and Esmaeil Gargari from DeepLearning.AI. Let's go on to the next video to get started.