This thread has been locked.


IWRL6432BOOST: Sensor Hitching, Possible Config Error?

Part Number: IWRL6432BOOST


Hello there!

We are attempting to use this sensor in a proof of concept for a larger 360 solution; however, we are running into a few problems that make using this sensor difficult.

The first and most important is that the sensor hitches: it gets stuck in some state where it no longer transmits data to the host. We first observed this problem after modifying some source code and building a custom image to load onto the device. The image simply takes tracked targets and pipes them to another MCU over I2C. (There are some kinks to iron out here, like potential DMA use and rolling modification of the I2C buffers.) It seemed that, depending on the speed at which the I2C buffer was read out of the device, the sensor would stop reading (we placed a light toggle where the sensor data is processed in MSS so we could tell when it was tracking).

This may have been a red herring, though, as we were also trying to dial in config settings using a base TI image from the examples and the high performance config. As we tweaked options and settings in the config, we started to notice hitching, and even more frequently, when the config was loaded in the visualizer the sensor wouldn't start at all. Are there any suggestions as to what is causing the sensor or the integrated MCU to hitch, and how to either fix it or work around it? Thanks!
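For reference, here is a rough sketch of the ping-pong buffering we are considering to avoid the rolling-modification issue on the I2C side. The names and buffer sizes here are ours for illustration, not from the TI demo:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical ping-pong scheme for the target output buffer, so the
 * I2C side never reads a frame the radar task is still writing
 * (the "rolling modification" problem). Names/sizes are illustrative. */
#define TARGET_BUF_LEN 64

static uint8_t gTargetBuf[2][TARGET_BUF_LEN];
static volatile uint8_t gReadyIdx = 0;   /* index the I2C handler may read */

/* Radar task: write into the inactive buffer, then flip the index.
 * The flip is a single aligned byte store, so readers see either the
 * old complete frame or the new complete frame, never a partial one. */
void publish_targets(const uint8_t *frame, size_t len)
{
    uint8_t writeIdx = gReadyIdx ^ 1u;
    assert(len <= TARGET_BUF_LEN);
    memcpy(gTargetBuf[writeIdx], frame, len);
    gReadyIdx = writeIdx;
}

/* I2C read handler: always returns a complete, stable frame. */
const uint8_t *current_targets(void)
{
    return gTargetBuf[gReadyIdx];
}
```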

Current state of the config:

sensorStop 0

channelCfg 7 3 0

chirpComnCfg 8 0 0 256 4 28 0

chirpTimingCfg 6 63 0 75 60

frameCfg 2 0 200 64 250 0

antGeometryCfg 0 0 1 1 0 2 0 1 1 2 0 3 2.418 2.418

guiMonitor 2 1 0 0 0 1 0 0 1 1 1

sigProcChainCfg 64 2 3 2 8 8 1 15

cfarCfg 2 8 4 3 0 12.0 0 0.5 0 1 1 1

aoaFovCfg -60 60 -40 40

rangeSelCfg 0.1 12.0

clutterRemoval 1

compRangeBiasAndRxChanPhase 0.0 1.00000 0.00000 -1.00000 0.00000 1.00000 0.00000 -1.00000 0.00000 1.00000 0.00000 -1.00000 0.00000

adcDataSource 0 adc_data_0001_CtestAdc6Ant.bin

adcLogging 0

lowPowerCfg 1

factoryCalibCfg 1 0 40 0 0x1ff000

mpdBoundaryBox 1 0 1.48 0 1.95 0 3

mpdBoundaryBox 2 0 1.48 1.95 3.9 0 3

mpdBoundaryBox 3 -1.48 0 0 1.95 0 3

mpdBoundaryBox 4 -1.48 0 1.95 3.9 0 3

sensorPosition 0 0 1.9 0 0

minorStateCfg 5 4 40 8 4 30 8 8

majorStateCfg 4 2 30 10 8 80 4 4

clusterCfg 1 0.5 2

staticBoundaryBox -3 3 0.5 7.5 0 3

gatingParam 3 2 2 2 4

stateParam 6 3 12 50 5 200

allocationParam 8 10 0.1 6 0.5 20

maxAcceleration 0.4 0.4 0.1

trackingCfg 1 2 100 3 61.3 191.7 100

presenceBoundaryBox -3 3 0.5 7.5 0 3

microDopplerCfg 1 0 0.5 0 1 1 12.5 87.5 1

classifierCfg 1 3 4

baudRate 115200

sensorStart 0 0 0 0

  • Hello, 

    Let's first try to confirm the issue you are seeing with the unmodified software. Is this behavior deterministic? For example, do you generally notice the issue when a person is being tracked in the scene, or does it happen randomly even with an empty scene? I suspect the entire configured frame time is being used up by chirping / processing / data output. If the configured frame time elapses before all of the required actions for that frame are done, then the device can get stuck in this state.

    Thank you for sharing the configuration you are using. Since you are configuring the frame in burst mode, the time spent in the chirping period is burstPeriodicity * numBurstsInFrame = 12.8 ms. That leaves a significant amount of time for processing and data output (~237 ms). However, I noticed that you have essentially all of the available processing blocks enabled: point cloud generation in 'auto' mode (major motion and minor motion processing), MPD state machine processing, object tracking, target microdoppler generation, and target classification. After all of that processing, the data is output via UART.

    The data that is output is specified by the guiMonitor CLI command (description here), and in your case it outputs pointCloud, rangeProfile, statsInfo, trackerInfo, microDopplerInfo, and classifierInfo. The sizes of some of these outputs are fixed, but others grow with the number of detections and/or tracked targets (point cloud, tracker, microdoppler).

    Additionally, your configuration includes 'baudRate 115200', which has no effect if you are using the default demo software: this command updates the baud rate the device uses for UART, and the device is already configured to 115200 by default. Typically we use this command to increase the baud rate for faster data output, for example: 'baudRate 1250000'.

    I'd recommend you try these options to confirm/resolve the issue:

    1. Increase the baudRate for data output. This change would enable the data output to be completed ~11x faster.  baudRate 115200 -> baudRate 1250000

    2. Increase the frame time. You can set this to an unreasonably high value just to confirm the issue. frameCfg 2 0 200 64 250 0 -> frameCfg 2 0 200 64 2000 0

    3. Disable some of the outputs with guiMonitor.

    Also, have you been able to use Code Composer Studio to debug the issue? Doing so should show you exactly where the code is hanging. There is a guide for using CCS Debug with mmWave devices available here.

    Best regards,

    Josh

  • Hello Josh,
    Thanks for your response. We used the provided example cfg profiles TrackingClassification_HighBw_4Ant.cfg and TrackingClassification_MidBw.cfg. Our goal is to get ~10 cm range resolution with RX antennas 1, 2, and 3 enabled. With your input, here is our updated config file with an increased frame time, fewer guiMonitor outputs, and the baud rate at 1250000:

    sensorStop 0
    channelCfg 7 3 0
    chirpComnCfg 8 0 0 256 4 28 0
    chirpTimingCfg 6 63 0 75 60
    frameCfg 2 0 200 64 2000 0
    antGeometryCfg 0 0 1 1 0 2 0 1 1 2 0 3 2.418 2.418
    guiMonitor 2 0 0 0 0 1 0 0 1 0 1
    sigProcChainCfg 64 1 3 2 8 8 1 0.3
    cfarCfg 2 8 4 3 0 12.0 0 0.5 0 1 1 1
    aoaFovCfg -60 60 -40 40
    rangeSelCfg 0.1 6.0
    clutterRemoval 1
    compRangeBiasAndRxChanPhase 0.0 1.00000 0.00000 -1.00000 0.00000 1.00000 0.00000 -1.00000 0.00000 1.00000 0.00000 -1.00000 0.00000
    adcDataSource 0 adc_data_0001_CtestAdc6Ant.bin
    adcLogging 0
    lowPowerCfg 1
    factoryCalibCfg 1 0 40 0 0x1ff000
    boundaryBox -3.5 3.5 0 9 -0.5 3
    sensorPosition 0 0 1.9 0 0
    staticBoundaryBox -3 3 0.5 7.5 0 3
    gatingParam 3 2 2 2 4
    stateParam 6 3 12 50 5 200
    allocationParam 8 10 0.1 6 0.5 20
    maxAcceleration 0.4 0.4 0.1
    trackingCfg 1 2 100 3 61.3 191.7 100
    presenceBoundaryBox -3 3 0.5 7.5 0 3
    microDopplerCfg 1 0 0.5 0 1 1 12.5 87.5 1
    classifierCfg 1 3 4
    baudRate 1250000
    sensorStart 0 0 0 0

    Running this config file with the base OOB motion and presence detection demo code still causes the sensor to hitch. Running the code in debug mode throws the error: 

    Error: Error in setting up doa profile:-40102

    where retval = -40102 corresponds to DPC_OBJECTDETECTION_ENOMEM__L3_RAM_DET_MATRIX in DoaProc_configParser(). My assumption is that this is due to increasing the number of RX antennas used in channelCfg from just 1 and 3 to include 2, and that as a result there is insufficient L3 memory available for the detection matrix. In dpc.c, the RAM buffers are defined:

    /*! L3 RAM buffer for object detection DPC */
    #define L3_MEM_SIZE (0x40000 + 160*1024)
    extern uint8_t gMmwL3[L3_MEM_SIZE] __attribute__((section(".l3")));
    /*! Local RAM buffer for object detection DPC */
    #define MMWDEMO_OBJDET_CORE_LOCAL_MEM_SIZE ((8U+6U+4U+2U+8U) * 1024U)
    extern uint8_t gMmwCoreLocMem[MMWDEMO_OBJDET_CORE_LOCAL_MEM_SIZE];
    /*! Local RAM buffer for tracker */
    #define MMWDEMO_OBJDET_CORE_LOCAL_MEM2_SIZE (25U * 1024U)
    extern uint8_t gMmwCoreLocMem2[MMWDEMO_OBJDET_CORE_LOCAL_MEM2_SIZE];
    /* User defined heap memory and handle */
    #define MMWDEMO_OBJDET_CORE_LOCAL_MEM3_SIZE (2*1024u)
    uint8_t gMmwCoreLocMem3[MMWDEMO_OBJDET_CORE_LOCAL_MEM3_SIZE] __attribute__((aligned(HeapP_BYTE_ALIGNMENT)));

    I assume that since memory is limited, the solution would be to decrease the number of bins to create a smaller detection matrix. Any suggestions on how to do this by modifying our config file?

  • Hello, 

    Okay, thanks for the information.

    We used the provided example cfg profiles TrackingClassification_HighBw_4Ant.cfg and TrackingClassification_MidBw.cfg

    To clarify these default configuration files: the MidBw configuration already enables all TX and RX antennas, while the HighBw_4Ant configuration doubles the number of ADC samples (which results in twice as many range bins), but because of memory limitations only 2 of the 3 RX antennas can be enabled.

    My assumption is that this is due to increasing the number of RX antenna used in channelCfg from just 1 and 3 to include 2 and as a result, there is insufficient L3 memory available for the detection matrix.

    Your assumption is exactly correct. 

    Our goal is to get the ~10cm range resolution and enable RX antennas 1, 2, and 3.
    I assume that since memory is limited, the solution would be to decrease the number of bins to create a smaller detection matrix. Any suggestion on how to do this via modifying our config file?

    Understood on your goal, and yes, your assumption is also correct here; however, one important note is that the detection matrix is actually the smaller of the two main data structures, with the radar cube generally using much more memory. The radar cube is allocated first, so you are only seeing the detection matrix allocation error because there was seemingly enough room to allocate the larger radar cube but not enough left over for the detection matrix.

    You can reduce the radar cube size by reducing the number of virtual antennas, range bins (adcSamples), and/or Doppler bins (burstsPerFrame). You can reduce the size of the detection matrix by reducing the number of Doppler bins and/or azimuth bins (azimuthFftSize). Since you mentioned you want fine range resolution and all antennas in use, one option could be to reduce the Doppler bins if velocity resolution is less important for your use case.

    Also, one thing to point out is that these tracking configurations enable 'auto' mode in the processing chain (as opposed to major motion mode or minor motion mode alone). In auto motion detection mode, two separate processing chains run 'at once', which means all of the data structures are duplicated (one radar cube for major motion processing and one for minor motion processing, and likewise for the detection matrix). Minor motion helps identify very small motions from people/objects that are otherwise still. Is this level of sensitivity required in your application? Removing minor motion processing could free up a lot of space.

    Also, I wanted to point out a few helpful resources for tuning the demo parameters to align with your specific application needs.

    Best regards,

    Josh