
IWR6843ISK: Obtain lower level forms of data from the radar for machine learning

Part Number: IWR6843ISK

Hello,

My team and I are building a system to detect polar bears in the wild using radar technology. We have successfully set up a radar to receive point cloud information and track targets. Next, we are looking to see what improvements we can make to our tracking system by using machine learning. I remember hearing somewhere that the radar has different levels of filtering, the highest of which is the point cloud information. I'm looking to investigate using the lower-level forms of data that the radar outputs for machine learning. What is the best approach to collect one of these levels of data? And is there a specific lower-level data format that would be particularly useful for machine learning?

Thanks,

Josiah

  • Hi Josiah,

    The point cloud information output from the device is the end result of the processing chain in the out-of-box demo. If you wish to train a machine learning model on lower-level data, you can intercept the data at different points in the processing chain prior to point cloud generation (e.g., after the 1D FFT or after the 2D FFT) and extract the features you want to use for your machine learning application. You may also wish to collect raw ADC data using a DCA1000 capture board and then perform offline processing to determine which features extracted from the data optimize the performance of your machine learning model.
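    The offline-processing idea above can be sketched roughly as follows. This is a minimal illustration only: it assumes the raw DCA1000 capture has already been parsed into a complex ADC matrix of shape (chirps x samples) for a single RX channel, and the frame dimensions and synthetic target are hypothetical. Parsing the actual adc_data.bin file depends on your capture configuration, which is described in the DCA1000 documentation.

```python
import numpy as np

def range_doppler_map(adc_cube):
    """Turn one frame of complex ADC data into a range-Doppler magnitude map.

    adc_cube: complex array of shape (num_chirps, num_samples_per_chirp).
    Returns an array of the same shape; zero Doppler is centered.
    """
    # 1D (range) FFT across ADC samples, with a Hann window to reduce leakage
    range_fft = np.fft.fft(adc_cube * np.hanning(adc_cube.shape[1]), axis=1)
    # 2D (Doppler) FFT across chirps, shifted so zero Doppler sits in the middle row
    doppler_fft = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)
    return np.abs(doppler_fft)

# Synthetic example (stand-in for parsed DCA1000 data): one stationary
# target whose beat frequency lands in range bin 25.
num_chirps, num_samples = 64, 128
n = np.arange(num_samples)
chirp = np.exp(2j * np.pi * 25 * n / num_samples)
cube = np.tile(chirp, (num_chirps, 1))

rd = range_doppler_map(cube)
dop_bin, rng_bin = np.unravel_index(np.argmax(rd), rd.shape)
# A stationary target peaks at the zero-Doppler row (num_chirps // 2)
# and at its range bin.
```

The 1D-FFT output here corresponds to range profiles and the 2D-FFT output to the range-Doppler map, i.e., the intermediate data levels mentioned above; either can serve as raw input or as the basis for hand-crafted features.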

    Best Regards,

    Josh

  • Hi Josh,

    Thanks for the speedy reply. That sounds like exactly what we’re looking for. We’ll look into using the lower-level data first. Do you know of any good resources/tutorials on how to intercept the lower-level data in the processing chain?

    Thanks,

    Josiah

  • Hi Josiah,

    You can see an example of this in the Multi Gesture and Motion Detection 68xx Lab in the Industrial Toolbox. The lab can be found at <mmwave_industrial_toolbox>/labs/Gesture_Recognition/Multi_Gesture_and_Motion_Detection_68xx. In this demo, custom features are extracted from the data and used as input to a neural network that performs inference for gesture recognition.
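    To make the feature-extraction step concrete, here is a minimal sketch of computing a few summary features from a range-Doppler magnitude map for use as classifier input. The specific features shown (total energy and energy-weighted Doppler/range centroids) are illustrative assumptions, not the exact feature set used in the TI lab.

```python
import numpy as np

def extract_features(rd_map):
    """Compute simple summary features from a range-Doppler magnitude map.

    rd_map has shape (num_doppler_bins, num_range_bins); zero Doppler is
    assumed to be the center row. The feature set is illustrative only.
    """
    total = rd_map.sum()
    doppler_bins = np.arange(rd_map.shape[0]) - rd_map.shape[0] // 2
    range_bins = np.arange(rd_map.shape[1])
    # Energy-weighted centroids along the Doppler and range axes
    mean_doppler = (rd_map.sum(axis=1) * doppler_bins).sum() / total
    mean_range = (rd_map.sum(axis=0) * range_bins).sum() / total
    return np.array([total, mean_doppler, mean_range])

# Toy example: all energy concentrated at Doppler row 40, range bin 10
rd = np.zeros((64, 128))
rd[40, 10] = 5.0
features = extract_features(rd)
```

One feature vector per frame, stacked over time, gives the kind of sequence that can be fed to a small neural network, similar in spirit to the gesture lab's approach.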

    Best Regards,

    Josh

  • Hi Josh,

    Awesome. This should be enough to get us started. Thanks for your help!

    Best,

    Josiah