IWR1843BOOST: Dense Point Cloud Generation For Facial Tracking

Part Number: IWR1843BOOST
Other Parts Discussed in Thread: IWR1843

Good morning!

I have been browsing the forums and toolboxes for a couple of months now, and I am afraid I am not sure where to start.

I am working on a Senior Design Capstone project for university, trying to implement facial/eye tracking on the above-mentioned EVM (without using the DCA). I am trying to get access to dense point cloud information so that I can do a little more post-processing on the integrated MCU to locate the ears and nose as well as the eyes and/or other facial landmarks. I was trying to find some way to access this dense point cloud information on-chip and just do the post-processing there, instead of taking it off-chip, because I don't have the budget for the DCA board as well. None of the examples I was able to find had source code of any kind available. So any help or direction would be greatly appreciated. I am fairly new to the TI MCU family, having come from STM processors, so the toolchain and example projects are a touch convoluted and confusing; any links would be very helpful.

Examples I have tried finding source for:
People Detection and Vitals Monitoring

ROS Robot Auto Nav

  • Hi

    Thanks for your query. Please allow us a couple of days to respond.

    Regards

  • Circling back on this. I am not trying to spam or pester; I am just trying to make a tight timeline work. Any suggestions on where to look while a better answer/solution is being searched for?

  • Hello,

    Frankly, I do not believe one IWR1843 will have the resolution you are looking for in order to do facial tracking. A 4x3 antenna array simply does not provide the angular resolution needed to track movement that fine; the most it may be able to do is detect that an eye is moving, and even then it would have to be at short range. If possible, I would stick to larger movements such as the head, arms, legs, etc.

    Below are some papers I find interesting that involve using mmWave to reach imaging levels of resolution.

    https://www.nature.com/articles/s44172-023-00156-2

    https://www.researchgate.net/figure/Detecting-head-movements-via-mmWave-signals_fig1_339244845

    Best Regards,

    Pedrhom

  • My distance will be no more than 2 feet. I really would like to know what functions I would need to call, and where I would need to call them, in order to pull dense point cloud info from the device. As the data is handled in the DSP module, I find it hard to believe that I cannot reach in and grab some structure, or capture some stream at regular intervals on-chip, and manipulate it. Would creating my own DSP paradigm that runs on the hardware and picks out facial points be easier than calling some functions on this device?

  • Hello,

    I believe creating your own DSP paradigm would make much more sense. Since this is research into a new application, I heavily suggest collecting raw data, which is the unprocessed, unfiltered I/Q data dumped from the ADC buffer, and processing it from there to reach your goal. Otherwise you are at the mercy of the point cloud generation algorithms already in place, which apply many layers of filtering and calculation that may not match what you want.

    Going the raw data route, offline, also allows a much larger range of configurations. Running on-chip, you will run into walls such as UART transfer speed and memory size for the radar cube, neither of which is an issue with raw capture. You are right that you could get the ADC buffer from the DSP, but preparing that data to be sent over will never be fast enough to support real-time processing, so you would be stuck getting data one frame at a time (see the back-of-the-envelope sketch below). This does mean you will need a DCA1000, but I genuinely believe it is the only realistic way you would be able to achieve this feat.
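    To make the bandwidth argument concrete, here is a rough calculation. Every chirp parameter below is an assumption for illustration; substitute the values from your own chirp configuration.

    ```c
    /*
     * Back-of-the-envelope check of raw ADC data rate vs. UART bandwidth.
     * All chirp parameters below are ASSUMED for illustration; use your
     * own chirp configuration values.
     */
    #include <stdio.h>

    int main(void)
    {
        const double samplesPerChirp = 256.0;  /* assumed ADC samples per chirp */
        const double bytesPerSample  = 4.0;    /* 16-bit I + 16-bit Q */
        const double rxChannels      = 4.0;    /* IWR1843 has 4 RX antennas */
        const double chirpsPerFrame  = 128.0;  /* assumed */
        const double framesPerSec    = 10.0;   /* assumed */

        const double bytesPerFrame = samplesPerChirp * bytesPerSample
                                   * rxChannels * chirpsPerFrame;
        const double rawMbps  = bytesPerFrame * framesPerSec * 8.0 / 1e6;
        const double uartMbps = 921600.0 / 1e6;  /* typical demo UART data port */

        printf("raw: %.1f Mbit/s, UART: %.2f Mbit/s -> ~%.0fx too slow\n",
               rawMbps, uartMbps, rawMbps / uartMbps);
        return 0;
    }
    ```

    With these assumed numbers the raw stream is roughly 42 Mbit/s, around 45x what a 921600-baud UART can carry, which is exactly the gap the DCA1000's Ethernet capture path is designed to cover.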

    Best Regards,

    Pedrhom

  • Theoretically, if I didn't need real-time processing (think low frame rate or event-triggered), are there known ways to pull it over and work with it on-chip?

  • Hello,

    You are able to set breakpoints when running examples in debug mode and then use Code Composer Studio's variable and memory explorers to check the values of variables.

    For example, for the IWR1843 out-of-box demo, you can look at the variable result->radarCube.data to find the address where the radar cube is stored in memory. The point cloud would be the variable dpcResults->objOut; the sketch below shows the shape of that data.
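    For orientation, here is a minimal, self-contained sketch of walking that per-frame point cloud. The struct definitions are stand-ins that mirror the shape of the SDK's DPIF point cloud types, not the actual SDK headers; verify the field names and layout against the headers in your mmWave SDK installation.

    ```c
    /*
     * Minimal, self-contained sketch of iterating the demo's point cloud.
     * These structs are STAND-INS for the SDK's DPIF point cloud types;
     * check your SDK version for the real definitions.
     */
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        float x;         /* meters */
        float y;         /* meters */
        float z;         /* meters */
        float velocity;  /* radial velocity, m/s */
    } PointCartesian;

    typedef struct {
        PointCartesian *objOut;    /* corresponds to dpcResults->objOut */
        uint32_t        numObjOut; /* points detected this frame */
    } DpcResults;

    /* Dump every detected point; inspect the equivalent data at a
     * breakpoint after the frame's detection chain has run. */
    static void dumpPointCloud(const DpcResults *dpcResults)
    {
        uint32_t i;
        for (i = 0U; i < dpcResults->numObjOut; i++) {
            const PointCartesian *pt = &dpcResults->objOut[i];
            printf("pt %u: x=%.2f y=%.2f z=%.2f v=%.2f\n",
                   (unsigned)i, pt->x, pt->y, pt->z, pt->velocity);
        }
    }

    int main(void)
    {
        PointCartesian pts[1] = { { 0.10f, 0.50f, 0.02f, 0.0f } };
        DpcResults res = { pts, 1U };
        dumpPointCloud(&res);
        return 0;
    }
    ```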

    Best Regards,

    Pedrhom

  • I very much appreciate your response and help. I was looking at this section of code, and it mentions something like "Post-DSP"; this leads me to (hopefully) my last question: would this be post-1D-FFT out of the front end, or post the whole DSP chain, as seen in the example graph here?

  • Hello

    That image is for the 3D People Tracking 6843 example, which is fundamentally different from the 1843 out-of-box demo. That said, you would be able to use the variable explorer to check anything you want during a code pause. The 1D FFT output is what we call the range profile, and it is a supported TLV that outputs over UART in real time; a host-side parsing sketch follows.
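    As a rough illustration of pulling that range profile out of the UART stream, here is a host-side sketch. The 40-byte frame header layout, the TLV type value, and the Q9 log-magnitude format follow my reading of the mmWave SDK 3.x out-of-box demo documentation; verify the offsets, and whether the TLV length field excludes the 8-byte TLV header (assumed here), against the demo doxygen for your SDK version. It also assumes a little-endian host, matching the device byte order.

    ```c
    /*
     * Host-side sketch: scanning one demo output frame for the range
     * profile TLV (type 2). Frame header assumed to be 40 bytes:
     * 8-byte magic word + eight uint32 fields, with numTLVs at offset 32.
     */
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define TLV_TYPE_RANGE_PROFILE 2U  /* MMWDEMO_OUTPUT_MSG_RANGE_PROFILE */

    /* buf holds one complete frame starting at the magic word. */
    void parseRangeProfile(const uint8_t *buf, size_t len,
                           void (*onBin)(size_t bin, double log2Mag))
    {
        uint32_t numTLVs;
        size_t   offset = 40;  /* first TLV starts right after the header */

        if (len < 40) {
            return;  /* not even a full header */
        }
        memcpy(&numTLVs, buf + 32, 4);  /* numTLVs field in the header */

        for (uint32_t t = 0; t < numTLVs && offset + 8 <= len; t++) {
            uint32_t type, tlvLen;
            memcpy(&type,   buf + offset,     4);
            memcpy(&tlvLen, buf + offset + 4, 4);
            offset += 8;

            if (type == TLV_TYPE_RANGE_PROFILE) {
                /* Payload: one uint16 per range bin, log2(magnitude), Q9. */
                size_t numBins = tlvLen / 2;
                for (size_t i = 0; i < numBins && offset + 2 * i + 2 <= len; i++) {
                    uint16_t q9;
                    memcpy(&q9, buf + offset + 2 * i, 2);
                    onBin(i, q9 / 512.0);  /* Q9 -> floating point */
                }
            }
            offset += tlvLen;
        }
    }
    ```

    Feed it one complete buffered frame, delimited by the magic word 0x0102 0x0304 0x0506 0x0708, plus a callback, and it hands back each range bin's log2 magnitude for plotting or thresholding.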

    Best Regards,

    Pedrhom