In this lesson, you will learn about why on-device AI is so popular. You will learn about its various benefits, such as reduced latency, improved efficiency, lower cost, and privacy preservation. You will also learn about various applications of on-device AI in the real world today, including real-time speech detection, real-time semantic segmentation, real-time object detection, as well as physical activity detection. Let's dive right in.

Let's start with a few fun facts about on-device AI. Did you know that every time you take a picture with your smartphone, over 20 AI models run to capture the perfect picture within a few hundred milliseconds? In the industrial IoT segment, the estimated economic impact of these devices is about $3 trillion. And every time you drive a car equipped with advanced driver assistance, that assistance is based entirely on on-device AI.

On-device AI is everywhere. When you type on a keyboard on a laptop, it's powered by a language model that runs on-device. When you talk to a smart speaker, the text-to-speech is powered entirely on-device. Robots that deliver and assemble have a lot of on-device AI. Drones that scan landscapes for industrial and agricultural use cases use on-device AI. Every time you edit a picture on a smartphone or a laptop, that's powered by on-device AI, and every time you drive a car, that's also powered by on-device AI.

Here are a few of the applicable use cases with audio, with images, as well as with sensors. Popular audio and speech applications include text-to-speech, speech recognition, machine translation, and audio noise removal. When you work with images and videos, you have photo classification, QR code detection, and virtual background segmentation. And when you work with sensors, physical activity detection, keyboard models, and digital handwriting recognition are all powered by on-device AI. What's fascinating is that you can even mix multiple modalities, audio, image, video, speech, and sensors, to produce multimodal on-device AI models. The most popular industries where on-device AI is applicable include the mobile smartphone industry, the PC industry, the industrial IoT industry, as well as the automotive industry.

Now let's look at why you would want to run models on-device. There are four main reasons. The first one is that it's cost-effective, because you can utilize the computational resources that are available locally without any additional cloud computing resources. The second one is efficiency, because you can process data locally rather than sending it to the cloud, receiving the results, and then processing them again locally; the whole process becomes computationally much more efficient. The third is privacy, because your data remains on your device and never leaves it. And that ties into our fourth reason, which is personalization, because having models customized locally on your device, without any external data, can create uniquely personalized experiences.

In this lesson, we're going to walk through a new way to deploy models on-device that makes it extremely easy to take a model that you trained in the cloud and make it work on the device within about five minutes. There are four main steps as part of this process. In the first step, you will capture the model as a computational graph. Then you will take that computational graph and compile it for your target device. You will then validate the numerics of that model on the device that you're trying to deploy to.
And then finally, you will measure performance on the device. All of this requires an actual device in the loop, which is why you will be using one to go through each of these four steps. And when these four steps are done, you will have an artifact that you can deploy on a device and integrate into your application. To make this extremely seamless, you will be using Qualcomm's AI Hub, which automates all four of these steps: the capture, the compilation, the validation, and the performance measurement, along with providing the device, so you can go through this process in about five minutes. You'll find a short code sketch of this workflow at the end of this lesson.

Now, on-device AI is also extremely popular for generative AI applications. This includes things like live translation, live transcription, photo generation, AI-based photo editing, semantic search, text summarization, and various virtual assistants and writing assistants, as well as image generation. All of these are commercially deployed applications of on-device AI on your smartphone or laptop.

And there is a collection of models that are all deployable today on a smartphone, a laptop, an IoT device, or a car. This includes language models like Llama and Baichuan, speech models like Whisper, pose estimation models like MediaPipe Pose, image generation models like Stable Diffusion, and over 100 models that are all very easy to deploy on-device today.
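To make the four-step workflow a bit more concrete, here is a minimal sketch of what it might look like in Python with the Qualcomm AI Hub client (qai_hub). The model, the sample input, the input name, and the device name ("Samsung Galaxy S23") are illustrative placeholders, not the exact code from the later lessons, and the arguments you use may differ.

```python
import torch
import qai_hub as hub

# Illustrative placeholders: a small trained PyTorch model and a representative input.
model = torch.nn.Linear(224, 10).eval()
sample_input = torch.rand(1, 224)

# Step 1: capture the model as a computational graph (TorchScript trace).
traced_model = torch.jit.trace(model, sample_input)

# The device provided to you in the loop (example device name).
device = hub.Device("Samsung Galaxy S23")

# Step 2: compile the captured graph for the target device.
compile_job = hub.submit_compile_job(
    model=traced_model,
    device=device,
    input_specs=dict(x=tuple(sample_input.shape)),
)
target_model = compile_job.get_target_model()

# Step 3: validate numerics by running inference on the real device,
# so you can compare the outputs with the local PyTorch results.
inference_job = hub.submit_inference_job(
    model=target_model,
    device=device,
    inputs=dict(x=[sample_input.numpy()]),
)
on_device_output = inference_job.download_output_data()

# Step 4: measure performance (latency, memory) on the device.
profile_job = hub.submit_profile_job(
    model=target_model,
    device=device,
)
```

Each submit call returns a job that you can also inspect on the AI Hub web dashboard, which is where the compiled artifact and the performance measurements show up.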