IWRL6432BOOST: Deploying CNN/ANN on IWRL6432BOOST (Edge AI without Host PC)

Part Number: IWRL6432BOOST
Other Parts Discussed in Thread: MMWAVE-L-SDK, IWRL6432, IWRL6844

Hi TI team,

We are working on deploying a CNN or ANN model directly on the IWRL6432BOOST, with all inference performed on the device itself — without relying on a host PC, and without simply offloading radar data for external analysis. We have studied the mmWave-L-SDK and understand the signal processing and classification flow provided by TI (e.g., micro-Doppler DPU + classifier).

Now we are trying to implement and run a custom neural network model (e.g., CNN or MLP) directly on the IWRL6432BOOST device. We would like to ask:

  1. Is it possible to run CNN or ANN inference on the Cortex-M4F core of IWRL6432BOOST?
  2. Are there any supported approaches such as CMSIS-NN, TensorFlow Lite for Microcontrollers, or other libraries?
  3. Does TI provide any suggested method to deploy a trained ML model onto this platform?
    For example, converting a model into embedded C code?
  4. Can the current classifier DPU be replaced, or is it possible to insert a custom neural network after the feature extraction stage?
  5. Are there any existing projects or examples where developers have successfully implemented neural network inference on IWRL6432BOOST?
  6. What are the resource limitations or model size recommendations for real-time CNN inference on this chip?
    (e.g., memory constraints, average inference time per input)

We are very interested in realizing true low-cost, low-power Edge AI directly on this mmWave radar chip. If TI has any documentation, tools, or suggested workflows for this use case, we would greatly appreciate your support. Thank you!

Best Regards,

An-I Yu

  • Hello An-I Yu,

    1. Is it possible to run CNN or ANN inference on the Cortex-M4F core of IWRL6432BOOST?
      Yes

    2. Are there any supported approaches such as CMSIS-NN, TensorFlow Lite for Microcontrollers, or other libraries?
      All of our examples use either MATLAB or PyTorch

    3. Does TI provide any suggested method to deploy a trained ML model onto this platform?
      For example, converting a model into embedded C code?
      Yes. The surface classification example, available through the Jupyter Notebook method or Edge AI Studio (via source), uses the TI NNC to compile the ONNX model produced by the PyTorch flow into C code optimized for the Cortex-M4 (a minimal PyTorch-to-ONNX sketch is included after this list).

    4. Can the current classifier DPU be replaced, or is it possible to insert a custom neural network after the feature extraction stage?
      Yes, every part of the process can be swapped out. As long as you use the same device and the same dataset, you are free to choose which features are extracted from the dataset and what kind of model to use. Do note that our current tools do not notify the user whether the model fits within the IWRL6432's memory.

    5. Are there any existing projects or examples where developers have successfully implemented neural network inference on IWRL6432BOOST?
      I have mentioned Surface Classification because it is currently the only example in the open-source PyTorch flow, but we also run ML on-chip in the Gesture and Human/Non-human classification examples.

    6. What are the resource limitations or model size recommendations for real-time CNN inference on this chip?
      (e.g., memory constraints, average inference time per input)
      With the IWRL6432 you will realistically run into memory constraints more often than inference-time constraints. Devices such as the IWRL6844, with 1.2 MB of memory, are much better suited for ML applications, but the IWRL6844 is still very new and examples do not currently exist for it. The flow will be the same, however, just compiled for the IWRL6844's Cortex-R5.
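
    Below is a minimal sketch, not TI's official flow, of the PyTorch-to-ONNX step referenced in answer 3: a small MLP classifier over extracted radar features is defined and exported to ONNX, which could then be handed to the model-compilation step described above. The feature size, layer widths, class count, and output file name are placeholder assumptions, and the last two lines give only a rough float32 parameter-memory estimate relevant to question 6 (the actual on-device footprint depends on the compiler and any quantization).

      # Minimal sketch (placeholder sizes/names): small MLP on extracted
      # radar features, exported to ONNX for the downstream compile step.
      import torch
      import torch.nn as nn

      class SmallMLP(nn.Module):
          def __init__(self, num_features=32, num_classes=4):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Linear(num_features, 16),
                  nn.ReLU(),
                  nn.Linear(16, num_classes),
              )

          def forward(self, x):
              return self.net(x)

      model = SmallMLP().eval()
      dummy_input = torch.randn(1, 32)      # one feature vector per inference
      torch.onnx.export(
          model,
          dummy_input,
          "classifier.onnx",                # placeholder output path
          input_names=["features"],
          output_names=["scores"],
          opset_version=13,
      )

      # Rough memory check, assuming weights stored as float32; the real
      # on-device size depends on how the model is compiled/quantized.
      num_params = sum(p.numel() for p in model.parameters())
      print(f"{num_params} parameters ~= {num_params * 4 / 1024:.1f} KiB as float32")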

    Best Regards,

    Pedrhom

  • Thank you very much for the detailed explanation and clarifications.

    This is extremely helpful for us to understand the practical deployment flow of CNN/ANN on the IWRL6432BOOST.

    Also, we appreciate your guidance and the reference to the Surface Classification, Gesture, and Human/Non-human examples. This gives us a solid starting point to implement our own model on-device.

    Thanks again for your support!

    Best Regards,

    An-I Yu