AWR1642BOOST: Question about the Record data file of AWR1642EVM Short Range Radar demo

Part Number: AWR1642BOOST

Hello everyone,

I followed the steps in the Short Range Radar (SRR) User's Guide. In the Visualizer configuration there is a "Record & Replay options" section to fill in, so I entered a new file name and a path. After launching the Visualizer, I got a file with 8 columns, but I don't understand what it contains or what it means. Is it the output packets with the detection information which are sent out every frame through the UART? Or the echo time sequences of all the detected targets? If not, how can I get the echo data sequences of all the detected targets? Thanks a lot!

Best Wishes,

Kathy

  • Hi,
    Yes, you are correct: these are the output packets with the detection information, which are sent out every frame through the UART. There is no echo information.

    The echo information would have to be post processed on the host.
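
    For reference, each frame on the UART data port begins with a fixed header, followed by TLV-encoded payloads such as the detected-object list. A minimal sketch of that header, assuming the field layout of the SDK out-of-box demo (the SRR demo's header may differ slightly):

    /* Per-frame UART output header, assuming the layout of the mmWave SDK
     * out-of-box demo; the SRR demo's header may differ slightly. */
    #include <stdint.h>

    typedef struct MmwDemo_output_message_header_t {
        uint16_t magicWord[4];   /* 0x0102 0x0304 0x0506 0x0708: frame sync */
        uint32_t version;        /* SDK/demo version */
        uint32_t totalPacketLen; /* total frame length in bytes, incl. header */
        uint32_t platform;       /* device type, e.g. xWR16xx */
        uint32_t frameNumber;    /* running frame counter */
        uint32_t timeCpuCycles;  /* CPU timestamp of the frame */
        uint32_t numDetectedObj; /* number of detected points in this frame */
        uint32_t numTLVs;        /* number of TLV sections that follow */
    } MmwDemo_output_message_header;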

    thank you
    Cesar
  • Hi Cesar,
    many thanks for your reply. I have looked through the srrdemo_16xx_dss project; I want to figure out the structure of the output packets and how they are sent to the host, but I am still not sure whether my understanding is right or wrong: the output packet is a 'MmwDemo_output_message_dataObjDescr' object with two fields, 'uint16_t numDetetedObj' and 'uint16_t xyzQFormat', so it is just 3D coordinate information, i.e. the point cloud you mentioned? If my understanding is true, the output packets carry only location information, which is not sufficient for my use, because I want the echo information of the detected objects: I need to do time-domain/frequency-domain analysis on the echo data. Could you please advise what I should do to reach this goal? Do I need to rewrite the project code? Another question: I didn't understand what you said in the reply, "The echo information would have to be post processed on the host." Do you mean that, besides the output packets, the echo information has also been sent to the host? Thanks a lot!

    Best wishes,
    Kathy
  • Hi,

    Your understanding is correct; the demo only provides the point cloud for every frame. I think there are 15 frames/sec (I can double-check that).
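
    As an illustration, the point-cloud coordinates are fixed-point values scaled by xyzQFormat; here is a minimal sketch of converting them to meters (the descriptor matches the header you quoted, while the per-point struct is only illustrative, not the exact SRR payload):

    /* Convert the Q-format point-cloud coordinates to meters. The descriptor
     * matches the demo header (numDetetedObj is spelled this way in the SDK);
     * the per-point layout below is illustrative, not the exact SRR payload. */
    #include <stdint.h>

    typedef struct {
        uint16_t numDetetedObj; /* number of detected points in this frame */
        uint16_t xyzQFormat;    /* Q format shared by the x/y/z coordinates */
    } MmwDemo_output_message_dataObjDescr;

    typedef struct {
        int16_t x; /* fixed-point coordinate, Q(xyzQFormat) meters */
        int16_t y;
        int16_t z;
    } DetectedPoint; /* illustrative */

    static inline float qToMeters(int16_t v, uint16_t xyzQFormat)
    {
        return (float)v / (float)(1u << xyzQFormat);
    }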

    You can save this information to a file. Is this the echo information you need?

    thank you
    Cesar
  • Hi Cesar,
    thanks a lot for your reply. The point cloud is not the information I need. I need the echo sequences of each detected target, i.e. the two-dimensional time and frequency response of the target. Is the so-called "L3 detMatrix" the echo sequence? The documentation figure shows that it has range bins and Doppler bins (in file:///C:/ti/mmwave_sdk_02_00_00_04/packages/ti/demo/xwr16xx/mmw/docs/doxygen/html/index.html ; sorry, I cannot upload the picture of the L3 detMatrix here, only text). Many thanks again.

    Best wishes,
    Kathy
  • Hi,

    Can you please point us to some more information about the "echo sequence"? An academic paper that describes mathematically what this is would be useful.

    We are not clear what it means.

    thank you
    Cesar
  • Hi Cesar,

    thanks again. Please let me use a public academic paper to explain what kind of data I want. My algorithm is basically like the one in the attached paper: I want to apply a two-dimensional time-frequency-domain transform to the echo data (somewhat like a two-dimensional FFT), which yields spectrograms like Fig. 2 in the paper; then I apply various classification methods to distinguish pedestrians from cyclists, etc., or to recognize their motion mode. So I need the echo sequence as the input to my algorithms. The echo sequence is the 2D complex matrix of data after the ADC and before the 2D FFT is done; does it correspond to the L3 radarCube on the AWR1642BOOST? I ask because I saw in the documentation (file:///C:/ti/mmwave_sdk_02_00_00_04/packages/ti/demo/xwr16xx/mmw/docs/doxygen/html/index.html): "Before FFT calculation, a Blackman window is applied to ADC samples using mmwlib library function. The calculated 1D FFT samples are EDMA transferred to the radar cube matrix in L3 memory." If that is the right data, how can I get it? I hope my expression is clear. Many thanks for your help! I am looking forward to your reply.
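
    To make it concrete, here is a minimal sketch (with a purely hypothetical memory layout) of the slow-time sequence I would like to extract: one complex sample per chirp from the 1D-FFT radar cube, before any 2D FFT. The real cube ordering is demo-specific and would have to be checked in the DSS data-path code:

    /* Hypothetical sketch: read the slow-time (per-chirp) sequence of one
     * range bin and RX antenna out of the L3 radar cube. The layout
     * cube[chirp][rxAnt][rangeBin] is an assumption, not necessarily the
     * SRR demo's actual ordering. */
    #include <stdint.h>

    typedef struct { int16_t imag; int16_t real; } cmplx16ImRe_t;

    static void getSlowTimeSequence(const cmplx16ImRe_t *cube,
                                    uint32_t numChirps, uint32_t numRx,
                                    uint32_t numRangeBins,
                                    uint32_t rangeBin, uint32_t rxAnt,
                                    cmplx16ImRe_t *out /* length numChirps */)
    {
        for (uint32_t chirp = 0; chirp < numChirps; chirp++) {
            out[chirp] = cube[(chirp * numRx + rxAnt) * numRangeBins + rangeBin];
        }
    }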

    Best wishes,

    Kathy

    Attachment: Knowledge Exploitation for Human Micro-Doppler Classification.pdf

  • Hi,

    I am still in the process of checking with our R&D team

    thank you
    Cesar
  • Hi Cesar,
    is there any news on my question? Thanks again.

    Regards,
    Kathy

  • Kathy,

    Sorry for the delay.

    I have talked with our systems team; please see their feedback below. Here is where to find the demos they mention:

    OOB - mmwave SDK Out of Box Demo version 2.1 www.ti.com/.../mmwave-sdk

    Traffic Monitoring 2.0 demo chain - http://dev.ti.com/tirex/#/ ; Software->mmWave Sensors -> Industrial Toolbox
    People Counting demo chain - http://dev.ti.com/tirex/#/ ; Software->mmWave Sensors -> Industrial Toolbox

    If you are doing research work, our recommendation is to collect raw data with the DCA1000 board and process this data with your algorithms in MATLAB.

    thank you
    Cesar


    Feedback from Systems Team:
    There are two detection chains where we generate low-level data similar to what you are looking for:

    In the traditional detection chain, using either the OOB demo or the Traffic Monitoring 2.0 demo chain, a range-Doppler detection matrix is calculated. You don't get angle-of-arrival information until you run CFAR detection on the range-Doppler matrix and then angle-of-arrival estimation on the angle spectrum of each detected point. So if one wants only range and Doppler, one could try to extract the range-Doppler heat map frame by frame and use that.
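
    As an illustration of that last suggestion, a minimal sketch of slicing one range bin out of a frame's range-Doppler matrix; stacking such slices across frames approximates a micro-Doppler spectrogram for a target at that range (the row-major [range][Doppler] layout and uint16_t log-magnitude element type are assumptions here):

    /* Hypothetical sketch: extract the Doppler spectrum of one range bin
     * from a single frame's range-Doppler detection matrix. Layout assumed
     * row-major [rangeBin][dopplerBin]; element type assumed uint16_t. */
    #include <stdint.h>

    static void getDopplerSlice(const uint16_t *detMatrix,
                                uint32_t numDopplerBins, uint32_t rangeBin,
                                uint16_t *out /* length numDopplerBins */)
    {
        for (uint32_t d = 0; d < numDopplerBins; d++) {
            out[d] = detMatrix[rangeBin * numDopplerBins + d];
        }
    }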

    In the people counting chain, using the people counting lab, a range-angle detection matrix is calculated. In this case, if the scene is slow moving, i.e. the objects in the scene are generally traveling at less than 0.6 of the chirp Vmax (a requirement for the Capon beamforming to work correctly), you get a range-angle detection matrix. You don't get Doppler until you do CFAR detection on the range-angle matrix, then do spatial filtering on the original radar cube and Doppler estimation on each detected point.

    So those are two possibilities for this kind of work. To do either of them, the person will either have to collect raw data and then build their own MATLAB radar processing chain, OR take one of our target processing chains and modify it to extract either the range-Doppler heat map or the range-angle heat map; the latter is challenging due to the low bandwidth of the interfaces on the device.
  • Hi Cesar,

    thanks a lot to you and your team! So I have two choices: one is to use an additional DCA1000 board to collect raw data, and the other is to modify the DSS project code. Is my understanding right? You mentioned the low bandwidth of the interfaces on the device; do you mean that the data rate of the range-Doppler heat map will be relatively slow? I ask because when I choose to display the "range doppler Heat map" or "range azimuth Heat map" on the configuration page of the mmWave Demo Visualizer, the frame rate is only 1-3 fps. Thanks again.

    Best wishes,

    Kathy

  • Yes, if the size of the heat map is too large, there will not be sufficient time to send the data through the UART connection to the PC before the next frame starts.
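
    As a rough illustration (assuming the demo's usual 921600-baud data port, i.e. about 92 KB/s with 8N1 framing): a 256-range-bin by 64-Doppler-bin heat map of 16-bit values is 32768 bytes per frame, which alone takes roughly 0.36 s to transmit, capping the update rate near 3 fps. That is consistent with the 1-3 fps you observed.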

    You should definitely experiment with the demo

    thank you
    Cesar