safety for LLM applications. When dealing with large and complex problems like those we see in LLMs, we rely on new metrics to locate important phenomena in our data. In this course, you explored some of those metrics, which help us detect data leakage, hallucinations, and prompt injections, and in turn evaluate and measure the quality and safety of the systems we build. I invite you to continue exploring new ways to identify important issues in LLM data and to contribute them back to the community through blog posts and open source. I'd love to see what you come up with!