You're still here, which means you're hoping there's a little more. Well, this kitchen is a big, professional-grade kitchen. Now that you've started, you can go off and bring in all kinds of components that make your AI kitchen better than anyone else's, and you know the secret. So let's take you through the secret hallways of the kitchen to get you ready to make even bigger, brighter, bolder meals.

You've walked through the planner door with me, and now you understand the t-shirt I wore in the intro and in some of the front bumpers: plugins, planners, and personas are the key thematic areas of Semantic Kernel. You jumped into plugins deeply, now you're in the planner world, and personas is just around the corner. But you don't have to wait, because come into the next notebook and you'll get a nice surprise.

And because you're hanging out for this last notebook, I want to tell you about our open-source Chat Copilot. Chat Copilot packages up all the kinds of chat behaviors you might want to build into your own app, and it's available absolutely free. It's kind of like having a high-end AV setup waiting under your seat, like an Oprah episode. Look under your seat. This is a free application we made available to you that demonstrates a backend and a frontend with cool things like a token meter that shows you how chat consumes tokens, and AI UX behaviors around latency.

You've stayed here, so I want to give you a present. And the present is letting you know about this repo: https://github.com/microsoft/chat-copilot. This repo lets you easily run on your own computer that application you just saw me run, and it's pretty easy to install. There's a server backend web API, and there's a front-end web app. The web app is in TypeScript and React; the web API is in C#/.NET. No worries.
I was not a C#/.NET programmer a few months ago, but it's as simple as `dotnet build` and `dotnet run`, kind of like `yarn install` and `yarn start`. It's all kind of the same nowadays, so don't worry, right? And what you get is that cool chat app running on your own computer. When I first ran it on my computer, I thought, wow, this feels like the democratization of AI productivity. I'm just reading from the screen now. It's like, wow, I want to be productive, I want to make my own system. I can use Chat Copilot to understand things like plugins, plans, and personas.

Let me walk you through it for a second, because it's super important. If you wanted to build what's called a ChatGPT-compatible plugin, you might ask yourself, "Wow, I wrote that code, how do I test it?" Well, that's what Chat Copilot is there for. It lets you basically click in the upper right corner and add the ChatGPT plugin you made with Semantic Kernel. All that work to make plugins, it was worth it. And don't forget, once you add all these plugins in there, wait for it, plans will be auto-generated in the context of your chat. It sounds a little wild, but it's pretty amazing.

You can upload documents. Remember the similarity engine? You can try different versions of it out: Azure Cognitive Search, Chroma, Qdrant. You can also turn on plans, so you can see the plans the AI generates. I know you're just too excited to go out and use this right now, and no problem, because again, it's open source and it's iterating very quickly. So it really is this magical thing you've been waiting for. You didn't know it existed, and it's been made for you. You deserve it, don't you? And in the settings panel, you'll be able to see token usage in general. It really gives you a quick overview of how these systems work in production use cases.
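The build-and-run steps just described can be sketched as a short shell session. This is a hedged sketch, not official setup docs: the folder names `webapi` and `webapp` reflect the repo's layout at the time of writing, and since the project iterates quickly, check the repo's own README for the current instructions.

```shell
# Sketch: clone and run Chat Copilot locally.
# Assumes the .NET SDK and yarn are already installed.
git clone https://github.com/microsoft/chat-copilot.git
cd chat-copilot

# Backend: the C#/.NET web API (Semantic Kernel as a service)
cd webapi
dotnet build
dotnet run        # starts the backend server locally

# Frontend: the TypeScript/React web app (run in a second terminal)
cd ../webapp
yarn install
yarn start        # launches the chat UI in your browser
```

With both processes running, the web app talks to the local web API, and you can try plugins, plans, and the token meter against your own keys and settings.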
And on top of that, there's the backend server. What you just saw is the front end, and the front end is pretty cool, right? But the most powerful part of our system is the backend server. It's basically Semantic Kernel as a service. You can flip on a bunch of things: auth, more vector DBs, telemetry. There's also content safety, meaning content filtering for more AI safety, responsible AI. There are ways to import different types of documents into your vector stores, and there's also OCR. We have some multimodal experiments running as well, so you can work with images in the system too. Again, our focus has been large language models, but it's been kind of cool that different things basically come for free, and they're there for you.

If you look at the config file of the backend server, you'll see some of the goodness. You can set a default configuration with a different model for completion, embedding, and the planner. You can actually have it talk if you want, and also recognize speech. This is an engineer's dream: authorization, a Cosmos DB you can use. I mean, if you're a nerd like I can be on some days, and you probably are on all days, depending on who's listening in, who you are. Don't worry, I don't know who you are. But look at this, there are so many different things you can turn on. And one of my favorites: Application Insights, the telemetry. Oh, so good, right?

Thanks for listening to how business thinkers can start building AI plugins with Semantic Kernel. Thank you for your time, and I hope your bucket of time gets plugged up, and your bucket of money starts to fill up too.