A Practical Introduction to Edge AI
This Tech Talk introduces edge AI, where trained models run on local devices such as embedded systems or nearby edge computers instead of the cloud. It explores the advantages of this approach, including lower latency, reduced bandwidth, better data privacy, and improved reliability when connectivity is unreliable. The talk compares the AI development workflow for edge and embedded systems with cloud deployment, noting that while the overall process is similar, more effort goes into optimizing models and deploying them to resource-constrained hardware. It also discusses practical strategies, such as pruning, quantization, and automatic code generation, that help models fit on target processors. Finally, the talk highlights the importance of robustness and safety in edge AI applications, covering techniques such as out-of-distribution detection, formal verification, and incorporating physical constraints into models.
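To give a flavor of the optimization strategies mentioned above, here is a minimal NumPy sketch of magnitude pruning and symmetric int8 quantization. The weight matrix and function names are illustrative stand-ins, not taken from any particular framework or from the talk itself:

```python
import numpy as np

def prune_magnitude(weights: np.ndarray, sparsity: float = 0.5) -> np.ndarray:
    """Zero out the smallest-magnitude weights (illustrative magnitude pruning)."""
    thresh = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) >= thresh, weights, 0.0)

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor quantization: map float32 weights into [-127, 127]."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 representation."""
    return q.astype(np.float32) * scale

# Toy "layer weights" standing in for a trained model's parameters.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(64, 64)).astype(np.float32)

w_pruned = prune_magnitude(w, sparsity=0.5)   # roughly half the weights become zero
q, scale = quantize_int8(w)                    # int8 storage is 4x smaller than float32
w_approx = dequantize(q, scale)                # per-weight error bounded by one quantization step
```

In practice these steps are handled by deployment toolchains rather than hand-written code, but the idea is the same: trade a small amount of accuracy for large savings in memory and compute on the target processor.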
Published: 15 Jul 2025