This thread has been locked.

If you have a related question, please click the "Ask a related question" button in the top right corner. The newly created question will be automatically linked to this question.

IWR6843ISK-ODS: Adding new gestures to Gesture with machine Learning demo code

Part Number: IWR6843ISK-ODS
Other Parts Discussed in Thread: MMWAVEICBOOST

Hi,

I wish to add new gestures to the Gesture with Machine Learning demo code on the IWR6843ISK-ODS with MMWAVEICBOOST. How do I go about it? 

How are the gestures processed by the radar in order to do so?

Regards,

Purvi

  • Hi Purvi, 

    In order to add new gestures to this demo you would need to train a new neural network. There are a couple of ways to go about this.

    1. Collect raw ADC data using the DCA1000, then perform offline processing on this data to extract features. Continue with the remaining model-building steps: annotation (labelling) of the data, followed by training and testing the model. 

    2. Using the demo code running on the device, capture the extracted features which are sent out via UART and save them. Then continue with annotation and training/testing as above. 

    The second option does not require the DCA1000 and does not require offline feature extraction, because that is already done by the device running the demo code. However, this option can also be less flexible than the first. With the first option, you have the ability to extract additional features or, for example, change the thresholds used in feature extraction.
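    To make the annotation/training/testing steps concrete, here is a minimal sketch in Python. The feature vectors, gesture names, and values below are placeholders (the actual demo streams its own feature set over UART), and a simple nearest-centroid classifier stands in for the neural network used by the demo:

    ```python
    import random
    import statistics

    # Placeholder dataset: each sample is a per-frame feature vector
    # (e.g. Doppler/range/angle statistics) with a gesture label.
    # Real features would be captured from the device's UART output.
    random.seed(0)

    def make_sample(center):
        # Simulate sensor noise around a per-gesture feature centroid.
        return [c + random.gauss(0, 0.1) for c in center]

    GESTURES = {
        "swipe_left":  [1.0, 0.2, -0.5],
        "swipe_right": [-1.0, 0.2, 0.5],
        "push":        [0.0, 1.0, 0.0],
    }

    # Annotation step: pair each feature vector with its gesture label.
    dataset = [(make_sample(c), label)
               for label, c in GESTURES.items() for _ in range(40)]
    random.shuffle(dataset)
    train, test = dataset[:90], dataset[90:]

    # Training step: compute one centroid per gesture class.
    centroids = {}
    for label in GESTURES:
        vecs = [x for x, y in train if y == label]
        centroids[label] = [statistics.mean(col) for col in zip(*vecs)]

    def predict(x):
        # Classify by nearest centroid (squared Euclidean distance).
        def dist(c):
            return sum((a - b) ** 2 for a, b in zip(x, c))
        return min(centroids, key=lambda lbl: dist(centroids[lbl]))

    # Testing step: measure accuracy on the held-out samples.
    accuracy = sum(predict(x) == y for x, y in test) / len(test)
    print(f"test accuracy: {accuracy:.2f}")
    ```

    The same annotate/train/test flow applies whichever capture option you choose; only the source of the feature vectors differs.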

    Best Regards,

    Josh