I'm thrilled to present to you this short course on diffusion models taught by Sharon Zhou. Midjourney, Stable Diffusion, DALL-E, and others are able to generate an image, sometimes a beautiful image, given only a prompt. How do these algorithms work? You may have heard a vague description of these algorithms learning to subtract noise to generate an image. But in this short course, Sharon will step you through a concrete implementation of image generation using a diffusion model so that you understand the technical details of exactly how it works.

Cool, thanks Andrew! In this course, you'll be learning about the current state and capabilities of diffusion models used today. You'll start by understanding the sampling process, starting with pure noise and progressively refining it to obtain a final nice-looking image. You'll build the necessary programming skills to train a diffusion model effectively. You'll learn how to build a neural network that can predict noise in an image. You'll add context to the model so that you can control what it generates. And finally, by implementing advanced algorithms, you'll learn how to accelerate the sampling process by a factor of 10.

This is an intermediate to advanced course. We assume you're familiar with Python and basic neural network training. So, for example, we'll assume you know what "backpropagation" is. We'll use PyTorch throughout, but if you're familiar with other machine learning frameworks, such as TensorFlow, you should be able to follow along just fine.

And so the running example we'll use for this short course will be generating 16x16 sprites, like those little 8-bit characters used in video games. We chose this example so that it's feasible for you to not just go through the notebooks, but also run them yourself to generate cute sprites, right there in the Jupyter notebook. Diffusion models are becoming a foundation for cutting-edge research in the life sciences and other sectors too.
For example, generating molecules for drug discovery. So when you understand the technical details of diffusion models, you'll also be in a better position to understand, and perhaps apply, such models yourself.

Many people worked together to build this short course. I want to thank Aaron Lou and Mehmet Giray Ogut for their significant contributions, and on the DeepLearning.AI side, also Geoff Ladwig and Eddy Shyu. So with that, let me hand it over to Sharon, and I hope you enjoy the course!

Great! Let's get started!
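The sampling idea described above, starting from pure noise and progressively removing the noise a trained network predicts, can be sketched in a few lines. This is a minimal illustration, not the course's actual code: the timestep count, the linear noise schedule, and the `predict_noise` placeholder (which stands in for a trained neural network) are all hypothetical choices, and NumPy is used here just to keep the sketch self-contained.

```python
import numpy as np

T = 50                                    # number of diffusion timesteps (hypothetical)
betas = np.linspace(1e-4, 0.02, T)        # simple linear noise schedule (hypothetical)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)           # cumulative signal-retention factors

def predict_noise(x, t):
    # Placeholder: a trained network would predict the noise in x at step t.
    return np.zeros_like(x)

def sample(shape=(16, 16, 3), seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape)        # start from pure Gaussian noise
    for t in reversed(range(T)):          # walk the diffusion process backwards
        eps = predict_noise(x, t)
        # Subtract the predicted noise component (DDPM-style mean update).
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps) / np.sqrt(alphas[t])
        if t > 0:                         # re-add a little fresh noise, except at the last step
            x = x + np.sqrt(betas[t]) * rng.standard_normal(shape)
    return x

img = sample()                            # a 16x16 RGB array, matching the sprite example
```

With a real trained model in place of `predict_noise`, each iteration nudges the noisy image toward the data distribution, which is exactly the refinement loop the course builds up step by step.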