This thread has been locked.


DCA1000EVM: how to relate data sent from DCA1000 through ethernet to the data sent from AWR1843 through UART

Part Number: DCA1000EVM
Other Parts Discussed in Thread: AWR1843, AWR1843BOOST

Hi,

I was able to create my own Python script to collect the raw data sent from the DCA1000 board, and I'm currently collecting data from both sensors so that, in a further step, I can process the raw data and compare it to the processed data from the AWR1843.

Since the recording process starts at a random point while both boards are already streaming, and independently of when each sensor starts, my question is how to relate the data collected from the two sensors to each other. In other words, how do I match the UDP packets sent over Ethernet to the corresponding data frame sent over UART?

Also, if numAdcSamples = 256 and numRxChannels = 3, then numAdcSamples * numRxChannels * 4 (size of one complex sample) = 3072 bytes are streamed per frame. Is that correct?

Thanks for your help,

Best Regards,

Amine 

  • Hi,

    Could you please provide more details about your setup?

    Do you capture raw data using mmWave Studio, or by running the mmWave SDK OOB demo?

    thank you

    Cesar

  • Amine,

    For determining the size of your captured binary file, please refer to the following equation:

    File size = (2 components) * (2 Bytes) * (# ADC samples) * (# antennas) * (# chirps) * (# frames)

    where the number of antennas is the number of virtual antennas which most often is the product of the number of TX antennas and the number of RX antennas.

    In your case, you are asking about the file size on a per-frame basis. I would need to see your .cfg configuration file to confirm, but I do not think that 3072 bytes is the correct answer.
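    The equation above can be sketched in Python. The specific TX-antenna and chirp-loop counts below are hypothetical placeholders for illustration, not values taken from the actual .cfg file:

```python
# Sketch: per-frame capture size from the file-size equation above.
# 2 components (I/Q) * 2 bytes each * ADC samples * virtual antennas * chirps.

def bytes_per_frame(num_adc_samples, num_rx, num_tx, num_chirp_loops):
    """Capture size of one frame in bytes; virtual antennas = RX * TX."""
    virtual_antennas = num_rx * num_tx
    return 2 * 2 * num_adc_samples * virtual_antennas * num_chirp_loops

# The 3072-byte figure only accounts for one chirp of one TX antenna:
one_chirp_one_tx = 2 * 2 * 256 * 3           # = 3072 bytes
# A full frame multiplies in all TX antennas and all chirp loops,
# e.g. with 2 TX and 16 chirp loops (hypothetical values):
full_frame = bytes_per_frame(256, 3, 2, 16)  # = 98304 bytes
```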

    To answer your first question, it depends on what you're streaming out. If you're streaming raw ADC data out of the DCA1000EVM, it will not line up with the data sent by the AWR1843, because the AWR1843 only sends out post-processed data.

    It all depends on what you are sending out over the AWR1843 and what you are sending out over the DCA1000EVM.

    Regards,
    Kyle

  • Hi Kyle,

    Thank you for your response!

    Attached you will find my configuration file.

    profile_wo_elevation.cfg

    I'm actually streaming the point clouds of the detected objects from the AWR1843BOOST and the raw ADC data from the DCA1000EVM. Since the recording process starts independently of when the sensors start, it can happen that I start recording while the DCA1000EVM is in the middle of streaming a frame. As I understood from the documentation, the raw ADC data of a frame is streamed first through the DCA1000EVM, and the point cloud generated from that same raw ADC data is streamed afterwards by the AWR1843BOOST. I'm interested in how to relate each point cloud (streamed through UART) to the raw ADC data it was generated from (streamed through the DCA1000EVM).

    For example, in my recorded folder I have:

    ADC data: (incomplete UDP packets for frame 1) + (complete UDP packets for frame 2) + (complete UDP packets for frame 3) + ... + (complete UDP packets for frame N)

    Radar PC data: (complete point cloud for frame 1) + (complete point cloud for frame 2) + (complete point cloud for frame 3) + ... + (complete point cloud for frame N)

    And my objective is to reorder the data like this: (incomplete UDP packets for frame 1) + (complete point cloud for frame 1) + (complete UDP packets for frame 2) + (complete point cloud for frame 2) + (complete UDP packets for frame 3) + (complete point cloud for frame 3) + ... + (complete UDP packets for frame N) + (complete point cloud for frame N)
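    The reordering described above could be sketched like this, assuming both recordings have already been split per frame and share a common frame numbering (the dicts below are hypothetical stand-ins for the actual recorded data):

```python
# Sketch: interleave raw-ADC frames with point-cloud frames by frame index.

def interleave(adc_frames, pc_frames):
    """Pair each frame's raw ADC packets with its point cloud, in frame order."""
    merged = []
    for n in sorted(set(adc_frames) | set(pc_frames)):
        if n in adc_frames:
            merged.append(("adc", n, adc_frames[n]))
        if n in pc_frames:
            merged.append(("pc", n, pc_frames[n]))
    return merged

# Hypothetical per-frame data, stand-ins for the real recordings:
adc = {1: "incomplete packets", 2: "packets", 3: "packets"}
pc = {1: "point cloud", 2: "point cloud", 3: "point cloud"}
order = [(kind, n) for kind, n, _ in interleave(adc, pc)]
# order alternates: adc 1, pc 1, adc 2, pc 2, adc 3, pc 3
```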

    To note that I'm not using mmWave Studio for the recording process, but my own Python script for recording the raw ADC data, and this program (https://github.com/radar-lab/ti_mmwave_rospkg) for recording the point clouds from the AWR1843BOOST.

    Hope this makes it clearer!

    Thank you for your support!

    Best regards,

    Amine


  • Amine,

    If you are looking to compare the raw ADC output coming from the DCA1000EVM and compare it to the post processed data from the AWR1843BOOST, the TI recommended approach would be to use the SDK Out-of-Box Demo along with the SDK Demo Visualizer found here: https://dev.ti.com/gallery/view/mmwave/mmWave_Demo_Visualizer/ver/3.2.0/

    You can use that as a reference.

    Regards,
    Kyle

  • Also,

    You need to enable LVDS streaming in your configuration file.

    In your current file the CLI argument is: lvdsStreamCfg -1 0 0 0

    This will not enable any kind of LVDS streaming on your device.

    Please refer to the SDK User's Guide for more information on the lvdsStreamCfg CLI command.
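    For reference, in mmWave SDK 3.x the arguments are subFrameIdx, enableHeader, dataFmt, and enableSW; assuming that syntax, a line enabling HW-session raw ADC streaming with the HSI header would look like the following (please verify against the User's Guide for your SDK version):

```text
% lvdsStreamCfg <subFrameIdx> <enableHeader> <dataFmt> <enableSW>
% dataFmt 1 = raw ADC over LVDS; enableHeader 1 = prepend the HSI header
lvdsStreamCfg -1 1 1 0
```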

    Regards,
    Kyle

  • Hi Kyle,

    I don't get what the mmWave Demo Visualizer has to do with the DCA1000EVM. Can you provide more information?

    To note that my purpose is to process the raw ADC data extracted from the DCA1000EVM on my own, and to compare my results to the point cloud data extracted from the AWR1843BOOST.

    Best regards,

    Amine


  • Hey,

    Sorry, I sent the wrong config file.

    It actually looks like this:

    % Doppler Peak Grouping:enabled
    % Static clutter removal:disabled
    % Angle of Arrival FoV: Full FoV
    % Range FoV: Full FoV
    % Doppler FoV: Full FoV
    % ***************************************************************
    sensorStop
    flushCfg
    dfeDataOutputMode 1
    channelCfg 15 5 0
    adcCfg 2 1
    adcbufCfg -1 0 1 1 1
    profileCfg 0 77 40 7 57.14 0 0 70 1 576 11721 0 0 30
    chirpCfg 0 0 0 0 0 0 0 1
    chirpCfg 1 1 0 0 0 0 0 4
    frameCfg 0 1 16 0 50 1 0
    lowPower 0 0
    guiMonitor -1 1 1 0 0 0 1
    cfarCfg -1 0 2 8 4 3 0 15 1
    cfarCfg -1 1 0 4 2 3 1 15 1
    multiObjBeamForming -1 1 0.5
    clutterRemoval -1 0
    calibDcRangeSig -1 0 -5 8 256
    extendedMaxVelocity -1 0
    lvdsStreamCfg -1 0 1 0
    compRangeBiasAndRxChanPhase 0.0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0
    measureRangeBiasAndRxChanPhase 0 1.5 0.2
    CQRxSatMonitor 0 3 5 121 0
    CQSigImgMonitor 0 113 10
    analogMonitor 0 0
    aoaFovCfg -1 -90 90 -90 90
    cfarFovCfg -1 0 0 20.09
    cfarFovCfg -1 1 -5.01 5.01
    sensorStart

    Best regards,

    Amine

  • Amine,

    The Demo Visualizer allows you to record and save post processed data.

    Please refer to section 4.4, "Concurrent Recording of Processed Stream From mmWave Device". You can find this document here: http://www.ti.com/lit/ug/swru529b/swru529b.pdf

    This way you can compare the post processed data with the data you capture on the DCA1000EVM.

    You should also enable the HSI header in your "lvdsStreamCfg" CLI command.

    Regards,

    Kyle