
IWRL6432BOOST: Retraining & deploying gesture recognition model on IWRL6432BOOST — Adding new gestures, need detailed E2E workflow (features, ONNX→TVM→C, quantization, deployment)

Part Number: IWRL6432BOOST



Hello TI team and community,

We are currently working with the IWRL6432BOOST gesture recognition demo (xWRLx432 gesture recognition). The default demo works well and recognizes the 6 gestures with good accuracy. We would like to add additional gestures and deploy a retrained model back onto the IWRL6432BOOST.

We have already:

  • Run the official gesture recognition demo successfully on hardware.

  • Collected feature data via Industrial Visualizer / UART as suggested in the demo docs (demo mentions: “extracted features output over UART can be saved and used as training data for a new model”).

  • Understood the high-level flow (see attached diagram below): 1D FFT → Feature Extraction → Classification → Post-processing.

  • Planned workflow: Data collection → Train model (PyTorch/TF) → Export ONNX → TVM → C code generation → Integrate into demo → Deploy & test (a rough sketch of the export/compile step we have in mind is shown below).
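
To make the question concrete, here is a rough sketch of the export/compile step we are planning. This is only our assumption of how the flow might look, not a TI-provided procedure; the small stand-in network, NUM_FEATURES, NUM_CLASSES, file names, and the TVM target/export mechanism are all placeholders we would replace with the real trained model:

    # Sketch only: export a trained PyTorch classifier to ONNX, then build it
    # with TVM for a plain C target. All names below are placeholders.
    import torch
    import onnx
    import tvm
    from tvm import relay

    NUM_FEATURES = 40   # placeholder: length of the per-frame feature vector
    NUM_CLASSES = 8     # placeholder: the 6 stock gestures plus our new ones

    # Stand-in for the real trained model
    model = torch.nn.Sequential(
        torch.nn.Linear(NUM_FEATURES, 32), torch.nn.ReLU(),
        torch.nn.Linear(32, NUM_CLASSES))
    model.eval()
    dummy = torch.randn(1, NUM_FEATURES)

    # 1. Export the trained network to ONNX
    torch.onnx.export(model, dummy, "gesture.onnx",
                      input_names=["features"], output_names=["logits"],
                      opset_version=11)

    # 2. Import the ONNX graph into TVM Relay and build for the C backend
    onnx_model = onnx.load("gesture.onnx")
    mod, params = relay.frontend.from_onnx(
        onnx_model, shape={"features": (1, NUM_FEATURES)})
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target="c", params=params)

    # 3. Export the generated sources; how these are then integrated into the
    #    gesture demo firmware is exactly what we are asking about
    lib.export_library("gesture_model.tar")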

However, we are unsure about the detailed retraining and deployment procedure. The documentation only provides a high-level note under Retraining, but no full script or workflow. We would like to request detailed guidance.

  • Hello, 

    Thank you for sharing the context on your project and what you have already tried.

    The diagram you shared is from our Surface Classification example, which, like Gesture Recognition, uses a machine learning model under the hood. As you have noticed, for Surface Classification we do provide the full retraining flow/procedure. Unfortunately, for Gesture Recognition this is not something we currently provide publicly or are able to support on this forum.

    So I'm not sure I can help beyond what is already conveyed by the note in the user guide regarding retraining. The extracted features provide a starting point for model retraining; as you have already discovered, they can be saved using the Industrial Visualizer (or optionally the mmWave Data Recorder, a newer tool we have developed that, while not directly intended for gesture data collection, may be better suited to this purpose). Once you have that new dataset, the specific model architecture, training methodology, and implementation details are up to you.
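
    Purely for illustration (this is not a TI-provided or supported procedure), a generic way to go from saved feature logs to a small retrained classifier could look like the sketch below. The CSV layout (label followed by the feature values on each row), feature-vector length, class count, and network shape are all assumptions that you would replace with your own:

        # Illustrative sketch only, not a supported TI flow. Assumes each CSV row
        # is: label, f0, f1, ..., fN-1. Adjust to however you log the features.
        import numpy as np
        import torch
        import torch.nn as nn

        NUM_FEATURES = 40   # placeholder: must match your logged feature vector
        NUM_CLASSES = 8     # placeholder: original gestures plus any new ones

        data = np.loadtxt("gesture_features.csv", delimiter=",")  # placeholder file
        labels = torch.tensor(data[:, 0], dtype=torch.long)
        feats = torch.tensor(data[:, 1:], dtype=torch.float32)

        # Small fully connected classifier; the architecture choice is up to you
        model = nn.Sequential(nn.Linear(NUM_FEATURES, 32), nn.ReLU(),
                              nn.Linear(32, NUM_CLASSES))
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()

        for epoch in range(200):
            opt.zero_grad()
            loss = loss_fn(model(feats), labels)
            loss.backward()
            opt.step()

        torch.save(model.state_dict(), "gesture_net.pt")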

    Best regards,

    Josh