Hi TI team,
We are working on deploying a CNN or ANN model directly on the IWRL6432BOOST, with all inference performed on the device itself, without relying on a host PC or simply offloading radar data for external analysis. We have studied the mmWave-L-SDK and understand the signal processing and classification flow provided by TI (e.g., micro-Doppler DPU + classifier).
Now we are trying to implement and run a custom neural network model (e.g., CNN or MLP) directly on the IWRL6432BOOST device. We would like to ask:
- Is it possible to run CNN or ANN inference on the Cortex-M4F core of IWRL6432BOOST?
- Are there any supported approaches, such as CMSIS-NN, TensorFlow Lite for Microcontrollers, or other libraries? (A minimal sketch of the kind of call we have in mind appears after this list.)
- Does TI provide a suggested method for deploying a trained ML model onto this platform, for example converting the model into embedded C code?
- Can the current classifier DPU be replaced, or can a custom neural network be inserted after the feature extraction stage?
- Are there any existing projects or examples where developers have successfully implemented neural network inference on IWRL6432BOOST?
- What are the resource limitations or model size recommendations for real-time CNN inference on this chip (e.g., memory constraints, average inference time per input)?
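To make these questions concrete, below is the kind of inference code we are imagining on the M4F. This is only a minimal sketch, assuming the CMSIS-NN sources are added to an mmWave-L-SDK project; the layer sizes, weight values, and fixed-point shifts are placeholders rather than a real trained model, and it uses the legacy q7 API (recent CMSIS-NN releases have moved to the s8 variants):

```c
/*
 * Hypothetical 2-layer MLP (16 -> 8 -> 4) on the Cortex-M4F using the
 * legacy CMSIS-NN q7 API. All weights, biases, and shift values below
 * are placeholders that a model-export script would generate from the
 * trained network; they are not real model data.
 */
#include "arm_nnfunctions.h"

#define IN_DIM  16  /* e.g., features from the feature-extraction stage */
#define HID_DIM  8
#define OUT_DIM  4  /* number of classes */

/* Quantized (q7) parameters, exported as const arrays so they live in flash. */
static const q7_t w1[HID_DIM * IN_DIM]  = { 0 /* ...exported weights... */ };
static const q7_t b1[HID_DIM]           = { 0 };
static const q7_t w2[OUT_DIM * HID_DIM] = { 0 };
static const q7_t b2[OUT_DIM]           = { 0 };

/* Scratch buffer required by arm_fully_connected_q7 (q15 copy of the input). */
static q15_t scratch[IN_DIM];

void nn_classify(const q7_t features[IN_DIM], q7_t scores[OUT_DIM])
{
    q7_t hidden[HID_DIM];

    /* Layer 1: fully connected + ReLU. The bias/output shifts (0 here)
     * depend on the fixed-point scaling chosen during quantization. */
    arm_fully_connected_q7(features, w1, IN_DIM, HID_DIM, 0, 0, b1,
                           hidden, scratch);
    arm_relu_q7(hidden, HID_DIM);

    /* Layer 2: fully connected, then softmax over the class scores. */
    arm_fully_connected_q7(hidden, w2, HID_DIM, OUT_DIM, 0, 0, b2,
                           scores, scratch);
    arm_softmax_q7(scores, OUT_DIM, scores);
}
```

Storing the quantized weights as const arrays keeps them in flash and leaves the limited on-chip RAM for activations and processing-chain buffers, which is essentially what we meant above by converting a model into embedded C code.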
We are very interested in realizing true low-cost, low-power Edge AI directly on this mmWave radar chip. If TI has any documentation, tools, or suggested workflows for this use case, we would greatly appreciate your support. Thank you!
Best Regards,
An-I Yu