In this webinar, the third in our monthly series on TI Edge AI technology, you will learn how TI Edge AI processors and software solutions let you maximize AI model performance on specialized deep learning accelerators without having to program the cores directly. Our solutions let you deploy AI models to your embedded applications through industry-standard APIs such as TensorFlow Lite, ONNX Runtime, and TVM. This approach combines the performance advantages of TI Edge AI processors with a simple, flexible programming environment. Topics include:
- Visualizing deep learning models to understand how they operate on an embedded processor
- Optimization techniques to maximize performance
- Using industry-standard APIs to compile, deploy, and accelerate your models
- Hands-on session to explore various deployment techniques
- TI Model Zoo: 60+ pre-compiled models to help you develop faster and more efficiently
- Performance benchmarking methodologies: what you need to know
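To give a flavor of the deployment flow covered above, here is a minimal sketch of loading a compiled model through the TensorFlow Lite runtime with an accelerator offload delegate. The delegate library name (`libtidl_tfl_delegate.so`), the option key, and the helper functions are assumptions for illustration only; consult the TI Edge AI SDK documentation for the actual names and options.

```python
def delegate_options(artifacts_dir):
    # Hypothetical options dict pointing the delegate at the directory
    # of compiled model artifacts (key name is an assumption).
    return {"artifacts_folder": artifacts_dir}

def make_interpreter(model_path, artifacts_dir=None):
    # Import here so the sketch can be read without the runtime installed.
    import tflite_runtime.interpreter as tflite

    delegates = []
    if artifacts_dir is not None:
        # "libtidl_tfl_delegate.so" is the assumed name of the TI
        # offload delegate shared library.
        delegates.append(
            tflite.load_delegate("libtidl_tfl_delegate.so",
                                 delegate_options(artifacts_dir)))
    # With no delegate, the same model simply runs on the Arm cores,
    # which is what makes the industry-standard API approach flexible.
    return tflite.Interpreter(model_path=model_path,
                              experimental_delegates=delegates)
```

The key point of the approach is visible in the sketch: the application code is ordinary TFLite API usage, and acceleration is enabled by adding a delegate rather than by programming the accelerator cores.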
Register below for one of the two sessions, depending on your time zone.
We recommend watching the previous webinar, available at the link below, to understand the development flow.
Please feel free to post any questions below, whether to prepare for the session or if you have difficulty running the code from the previous webinar.