Tool/software:
Hello TI team and community,
We are currently working with the IWRL6432BOOST gesture recognition demo (xWRLx432 gesture recognition). The default demo works well and recognizes
the 6 gestures with good accuracy. We would like to add additional gestures and deploy a retrained model back onto the IWRL6432BOOST.
We have already:
- Run the official gesture recognition demo successfully on hardware.
- Collected feature data via the Industrial Visualizer / UART as suggested in the demo docs (the demo notes: "extracted features output over UART can be saved and used as training data for a new model").
- Understood the high-level flow (see attached diagram below): 1D FFT → Feature Extraction → Classification → Post-processing.
- Planned the workflow: data collection → train model (PyTorch/TF) → export ONNX → TVM → C code generation → integrate into demo → deploy & test.
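For the data-collection step, here is a minimal sketch of how we plan to turn the saved UART feature output into a labelled training set. The CSV layout (feature values followed by a gesture label) is our assumption, not the demo's documented format; please correct us if the exported features use a different framing:

```python
import csv
import io

def load_feature_log(text):
    """Parse a saved feature log: each row = feature values + gesture label (assumed layout)."""
    features, labels = [], []
    for row in csv.reader(io.StringIO(text)):
        if not row:
            continue  # skip blank lines in the capture
        *vals, label = row
        features.append([float(v) for v in vals])
        labels.append(label)
    return features, labels

# Two hypothetical rows as an example; real logs would have the demo's full feature vector.
sample = "0.12,0.98,3.4,swipe_left\n0.07,1.02,2.9,push\n"
X, y = load_feature_log(sample)
print(len(X), y)  # 2 ['swipe_left', 'push']
```

Is this the right way to interpret the saved feature stream, or is there a recommended capture/labelling tool?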

However, we are unsure about the detailed retraining and deployment procedure. The documentation provides only a high-level note under "Retraining", without a complete script or end-to-end workflow. We would like to request detailed guidance on this process.