Hi, and welcome to this short course, Understanding and Applying Text Embeddings with Vertex AI, built in partnership with Google Cloud. In this course, you'll learn about different properties and applications of text embeddings. We'll dive together into how to compute embeddings, that is, feature vector representations of text sequences of arbitrary length, and we'll see how these sentence embeddings are a powerful tool for many applications like classification, outlier detection, and text clustering. If you've heard of word embedding algorithms like Word2Vec or GloVe, which examine just a single word at a time, this is a bit like that, but much more powerful and much more general, because it operates at the level of the meaning of a sentence or even a paragraph of text, and it also works for sentences that contain words not seen in the training set. In this course, you'll also learn how to combine the text generation capabilities of large language models with these sentence-level embeddings to build a small-scale question-answering system that answers questions about Python based on a database of Stack Overflow posts. I'd like to introduce the other instructor for this course, Nikita Namjoshi.

Thanks, Andrew. I'm so excited to be teaching this course with you. As part of my job at Google Cloud AI, I help developers build with large language models, and I'm really looking forward to sharing practical tips I've learned from working with many cloud customers and many, many LLM applications.

This course will consist of the following topics. In the first half, which I'll present, we'll first use an embeddings model to create and explore some text embeddings. Then we'll go through a conceptual understanding of how these embeddings work and how embeddings for text sequences of arbitrary length are created, and we'll also use code to visualize different properties of embeddings. The second half is taught by Nikita.

Well, after you've had a chance to explore some different properties of embeddings, you'll then see how to use them for classification, clustering, and outlier detection. Because sentence-level embeddings start to get at the meaning of an entire sentence, this really helps an algorithm reason more deeply and make better decisions about text. After this, we'll see how to use a text generation model and some of the different parameters you can adjust. And finally, we'll put everything you've learned about embeddings, semantic similarity, and text generation together to build a small-scale question-answering system.

Many people have contributed to this course. We're grateful to Eva Liu and Carl Tanner from the Google Cloud team, and also, on the DeepLearning.AI side, Daniel Vigilagra and Eddy Shyu. The first lesson will be about how to get started with embedding text.

That sounds great. Let's get started.
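For a concrete preview of the workflow the lessons build on, here is a minimal sketch of computing sentence embeddings with the Vertex AI SDK and comparing two sentences by cosine similarity. The project ID, region, and model version below are placeholder assumptions; substitute the values from your own environment.

# Minimal sketch: embed two sentences with the Vertex AI SDK and compare
# them by cosine similarity. Project ID, region, and model version are
# placeholders -- replace them with your own settings.
import numpy as np
import vertexai
from vertexai.language_models import TextEmbeddingModel

vertexai.init(project="your-gcp-project", location="us-central1")  # placeholder project
model = TextEmbeddingModel.from_pretrained("textembedding-gecko@001")

sentences = [
    "How do I sort a list in Python?",
    "What is the best way to order the items of a Python list?",
]
vec_a, vec_b = (np.array(e.values) for e in model.get_embeddings(sentences))

# Semantically similar sentences map to nearby vectors, so their cosine
# similarity is close to 1 even when they share few words.
score = np.dot(vec_a, vec_b) / (np.linalg.norm(vec_a) * np.linalg.norm(vec_b))
print(f"cosine similarity: {score:.3f}")

The first lesson walks through this same flow step by step, and the later lessons reuse these vectors for classification, clustering, outlier detection, and the question-answering system.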