In this video, you'll learn about a new sampling method that is over 10 times more efficient than DDPM, which is what we've been using so far. This new method is called DDIM.

So your goal is that you want more images, and you want them quickly. But sampling so far has been slow for two reasons: one, there are many time steps involved, you know, 500 or sometimes even more, to get a good sample; and two, each time step depends on the previous one, because sampling follows a Markov chain process.

Thankfully, there are many new samplers to address this problem, since it has been a long-standing problem with diffusion models: you can train them, and they can create amazing, beautiful images that are both diverse and realistic, but it's so slow to get something out of them. One of these samplers that has been very popular is called DDIM, or Denoising Diffusion Implicit Models, which is just the name of the paper. The paper was written by Jiaming Song, Chenlin Meng, and my PhD co-advisor Stefano Ermon.

DDIM is faster because it's able to skip time steps. So instead of going from time step 500 to 499 to 498, it's able to skip quite a bit, because it breaks the Markov assumption. A Markov chain only applies to a probabilistic process, but DDIM removes all the randomness from the sampling process and is therefore deterministic. What it does, essentially, is predict a rough sketch of the final output and then refine it with the denoising process.

So let's compare DDPM here on the left, which is what we've been doing so far, and DDIM here on the right. Yes, it is much faster with DDIM. You immediately see a sprite there after time step 19. And we're still going for DDPM. We're still going. And this goes all the way up to 500 with DDPM, as you know.

Great, so here's the lab. A lot of the setup will look the same. I'm just going to run this cell here: set up the UNet again, our hyperparameters, and the DDPM noise schedule, which we'll use to compare against DDIM later. Now I'm instantiating the model.

And here's where fast sampling comes in with DDIM. Here's the function for DDIM. You can look at the paper for the details, but this implements it with its scaling factors; see the sketch of this step below. Then we load up our trained model here. What's cool is that we can just load up the trained model and use either DDIM or DDPM; it doesn't matter, since this is just a sampling process applied after training.

And this is the sampling algorithm using DDIM. The only thing to call out is that there is a step size involved: we're not going through every single time step, we're actually skipping steps. Here n is 20, so the step size is 500 divided by 20, meaning we jump 25 time steps at a time; see the sampling-loop sketch below. We'll run that here, and then we can sample. That was much, much faster; I could barely even see it there. Now the GIF is just being composed, so we'll speed up the video for that. All right, here's what that looks like. It's very fast, and the noise is able to almost instantly turn into these sprites.

Now, with this faster sampling method, you don't always get the same level of quality as if you were to run DDPM. But these actually do look quite good. Empirically, people have found that with a model trained on these 500 steps, for example, DDPM will perform better if you sample for the full 500 steps, but for any number of steps under 500, DDIM will do much better.

And so now here's the same but with a context model, so you can load in that context; a context sketch also follows below. Great.
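The lab's exact code may differ, but here is a minimal sketch of that deterministic DDIM step, assuming a precomputed schedule `ab_t` of cumulative alpha products (alpha-bar) with `ab_t[0] = 1`; the names `denoise_ddim`, `ab_t`, and `pred_noise` are illustrative, not necessarily the lab's identifiers:

```python
import torch

def denoise_ddim(x, t, t_prev, pred_noise, ab_t):
    """One deterministic DDIM step: estimate x_0, then re-point toward t_prev.

    x          : current noisy sample x_t
    t, t_prev  : current time step and the (possibly much earlier) target step
    pred_noise : the model's noise prediction eps_theta(x_t, t)
    ab_t       : tensor of cumulative alpha products (alpha-bar) per time step
    """
    ab, ab_prev = ab_t[t], ab_t[t_prev]

    # Predict the clean image x_0 from the current sample and predicted noise,
    # pre-scaled by sqrt(alpha-bar_prev): the "rough sketch" of the final output.
    x0_pred = ab_prev.sqrt() / ab.sqrt() * (x - (1 - ab).sqrt() * pred_noise)

    # Deterministic direction pointing back toward x_{t_prev}; no fresh noise
    # is added, which is exactly what breaks the Markov assumption.
    dir_xt = (1 - ab_prev).sqrt() * pred_noise

    return x0_pred + dir_xt
```

Because no randomness is injected at any step, consecutive steps no longer need to be adjacent, and `t_prev` can sit many steps before `t`.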
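And here is a hedged sketch of the sampling loop that uses that step, assuming the globals from the setup cells (`nn_model`, `timesteps = 500`, `height`, `device`, and the `ab_t` schedule); again, the names are illustrative:

```python
@torch.no_grad()
def sample_ddim(n_sample, n=20):
    # Start from pure Gaussian noise, x_T ~ N(0, I).
    samples = torch.randn(n_sample, 3, height, height).to(device)

    # Skip through the schedule: with timesteps = 500 and n = 20,
    # step_size = 25, so only 20 denoising steps run in total.
    step_size = timesteps // n
    for i in range(timesteps, 0, -step_size):
        # Normalized time input, shaped (1, 1, 1, 1) to broadcast over the batch.
        t = torch.tensor([i / timesteps])[:, None, None, None].to(device)
        eps = nn_model(samples, t)  # predicted noise eps_theta(x_t, t)
        samples = denoise_ddim(samples, i, i - step_size, eps, ab_t)
    return samples
```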
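For the context version, the only change I'd expect is that the noise prediction is conditioned on a context vector per sample; how the lab's UNet actually accepts it is an assumption here (the `c=ctx` keyword and the 5-class one-hot encoding are hypothetical):

```python
@torch.no_grad()
def sample_ddim_context(n_sample, ctx, n=20):
    # Identical DDIM loop, but the model also sees a context vector per sample.
    samples = torch.randn(n_sample, 3, height, height).to(device)
    step_size = timesteps // n
    for i in range(timesteps, 0, -step_size):
        t = torch.tensor([i / timesteps])[:, None, None, None].to(device)
        eps = nn_model(samples, t, c=ctx)  # hypothetical context-aware call
        samples = denoise_ddim(samples, i, i - step_size, eps, ab_t)
    return samples

# Random one-hot contexts, e.g. 5 sprite classes; you can also set these by hand.
ctx = torch.nn.functional.one_hot(torch.randint(0, 5, (32,)), 5).float().to(device)
samples = sample_ddim_context(32, ctx)
```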
So these are just random contexts here, but you can set them yourself as well. And this is what they look like.

Now your question is probably: how does the speed compare? We can load up the original DDPM functions and sampling algorithm, and we can compare the two using timeit in this notebook. So we're going to compare DDIM with DDPM. All right, look at that speedup. Wow. Try running these in your own notebook and see what you get; a sketch of the timing cell follows below.
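If you want to reproduce the comparison, here is a sketch using the notebook's `%timeit` magic; `sample_ddpm` stands in for whatever the original DDPM sampling function is called in your notebook, and `-r 1` limits timing to one repeat since sampling is slow:

```python
%timeit -r 1 sample_ddim(32, n=20)  # DDIM: about 20 model calls in total
%timeit -r 1 sample_ddpm(32)        # DDPM: one model call per time step, all 500
```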