This thread has been locked.


IWRL6432BOOST: Human vs Non-human classification using IWRL6432BOOST

Part Number: IWRL6432BOOST


I am trying to use the human vs non-human classification demo, but with my own data set. I know this question is probably very basic, but I'm having a hard time figuring out how to read the UART data from the sensor using the motion detection demo. What I want to do is use the data from the sensor and train an external neural network. Is this possible? If not, can I make a data set and integrate it into the demo? I really appreciate your help. 

  • Hi

    Thanks for your query. Please allow us a couple of days to respond.

    Regards

  • Hello, 

    Getting the data out from the sensor with the motion detection demo is definitely possible. What data specifically are you trying to obtain? The human vs. non-human classification module uses features extracted from the micro-Doppler spectrum generated for each tracked object to classify the objects. Is this the data you are trying to obtain?

    In the motion detection demo there are several processed data outputs that can be enabled in TLV format (more information on these outputs here). These outputs can be individually controlled by the guiMonitor CLI command. Check the CLI Configuration section of the motion detection demo documentation for a description of this command.
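    As a starting point for reading the UART stream, here is a minimal, hypothetical Python sketch of locating and unpacking a frame header in the demo's TLV output. The field layout below (8-byte magic word followed by eight 32-bit little-endian words) is an assumption based on the output format used by other TI mmWave demos; verify the exact structure against the motion detection demo documentation before relying on it.

```python
# Hypothetical sketch: parse a TI mmWave demo frame header from raw UART bytes.
# The magic word and header field layout are assumptions from other mmWave
# demos; check the motion detection demo docs for the exact format.
import struct

MAGIC_WORD = bytes([0x02, 0x01, 0x04, 0x03, 0x06, 0x05, 0x08, 0x07])
HEADER_LEN = 32  # eight 32-bit fields after the magic word (assumed)

def parse_frame_header(buf):
    """Find the magic word in buf and unpack the frame header after it.

    Returns (header_dict, payload_offset) or None if no complete header
    is present yet.
    """
    idx = buf.find(MAGIC_WORD)
    if idx < 0 or len(buf) - idx < len(MAGIC_WORD) + HEADER_LEN:
        return None
    fields = struct.unpack_from("<8I", buf, idx + len(MAGIC_WORD))
    header = {
        "version": fields[0],
        "totalPacketLen": fields[1],
        "platform": fields[2],
        "frameNumber": fields[3],
        "timeCpuCycles": fields[4],
        "numDetectedObj": fields[5],
        "numTLVs": fields[6],
        "subFrameNumber": fields[7],
    }
    # TLVs start immediately after the header; each TLV has its own
    # type/length words, which would be parsed next.
    return header, idx + len(MAGIC_WORD) + HEADER_LEN

# In practice the bytes come from the board's data UART, e.g. with pyserial:
#   ser = serial.Serial("COM5", 921600)  # port name/baud are assumptions
#   buf += ser.read(ser.in_waiting)
```

    From the returned offset you would then iterate over `numTLVs` type/length/value records to pull out whichever outputs you enabled with `guiMonitor`.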

    Regarding saving the output data and preparing it to train an external network, there would be a bit of work required on your end. The visualizer tool included in the mmWave L SDK does not currently provide the capability to save output data. However, you can save output data using the Applications Visualizer included in the Radar Toolbox. From there, you would need to develop your own methods to annotate/label the captured data and train the network.
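    For the labeling step, something as simple as appending each captured frame's points to a CSV alongside a class label can work. This is only an illustrative sketch; the field names and file layout are assumptions, not part of the demo or the Radar Toolbox.

```python
# Hypothetical sketch: append labeled point-cloud frames to a CSV for
# later training. Column layout is an assumption for illustration.
import csv

def save_labeled_frames(path, frames, label):
    """Append (x, y, z, doppler) points from each frame to a CSV.

    frames: iterable of (frame_number, points) where points is a list
            of (x, y, z, doppler) tuples.
    label:  class label for every point in these frames,
            e.g. "human" or "non-human".
    """
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for frame_number, points in frames:
            for (x, y, z, doppler) in points:
                writer.writerow([frame_number, x, y, z, doppler, label])
```

    A per-point label like this suits point-cloud models; if you train on the micro-Doppler features instead, you would store one feature vector per tracked object per frame.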

    Best Regards,

    Josh

  • Thank you for replying Josh! This seems to fix my issue; I will look into the Applications Visualizer and hopefully be able to extract the needed data. I was planning on using point clouds to train my model, but maybe the features you mentioned are a better approach. Thank you for your help!

    Best regards

  • No problem. Feel free to create a new post if you run into any issues. 

    Best Regards,

    Josh