
IWR6843ISK-ODS: Gesture Recognition (Machine Learning) Distance Feature Configuration Issue

Part Number: IWR6843ISK-ODS


I changed RANGE_BIN_END to 20 so the device could capture features out to about 100 cm, but after flashing the firmware I’m unable to read any data from the data COM port.

I used JTAG for debugging, but I’ve run into an issue. Is there anything I might have overlooked, and how can I resolve it?

  • Hello.

    Can you please upload a different version of your images as they do not seem to show up?

    Sincerely,

    Santosh

  • Hi Santosh,

    Sorry — I think there was a problem with the image upload in my previous message.

    I've uploaded the image again and would like to clarify my question.

    I tried changing RANGE_BIN_END to 20 so that the device could detect gesture features around 100 cm away, as shown in the image.

    However, I encountered an error. Below is the message I saw during debugging with JTAG.

    I suspect the problem might be that the RANGE_BIN interval is too large for the device to handle.
    I also tried narrowing the interval between RANGE_BIN_START and RANGE_BIN_END, but the same problem still occurred.

    Any suggestions would be appreciated.


    Best regards,

    Haoyu

  • Hello Haoyu, 

    Unfortunately, that error message is not very helpful for debugging the issue; it just indicates that the device is crashing at some point. Can you try placing breakpoints at different points in the code and stepping through to identify where the crash is happening?

    I might expect this crashing behavior when increasing the number of range bins processed, as you mentioned, but if you see the same issue with fewer range bins, there may be a different issue.

    Can you also let me know what version of the radar toolbox you have?

    Best regards,

    Josh

  • Hey Josh,

    I set some breakpoints and noticed that when RANGE_BIN is between 15 and 20, things break right after DPC_ObjectDetection_execute finishes and execution returns to main.c to run DPM_execute. Since I can't step inside DPM_execute, I don't know exactly what's going wrong. On top of that, every time I debug, the crash happens at a different spot, so it's really tough to pin down the exact culprit; I just know it's somewhere in MmwDemo_DPC_ObjectDetection_dpmTask.

    Additionally, the toolbox version I’m using is 3.10.00.05.

    Best regards,

    Haoyu

  • Hello Haoyu, 

    Thank you. Based on the behavior you describe it does seem like it is running out of frame time before completing the processing. 

    After taking a look at the gesture recognition code, I think the issue is likely due to some inefficiencies/bugs in the feature extraction part (gesture.c). For example, in Computefeatures_DOABased() there is a for loop that searches from rangeBin = 0 to RANGE_BIN_END, but I believe it only needs to run from RANGE_BIN_START to RANGE_BIN_END. This would add processing time even when the actual interval of range bins specified for feature extraction is smaller than the default. Can you try changing this so the loop begins at RANGE_BIN_START?

    Similarly, Computefeatures_preStart() zeros out the doppler bins close to zero doppler for all range bins, but again this likely only needs to be done for the range bins of interest.
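    As a rough sketch of the loop-bound change (the macro values and matrix dimensions here are illustrative stand-ins, not the demo's exact code), the idea is simply to start the range-bin scan at the configured window:

    ```c
    #include <assert.h>

    /* Illustrative stand-ins for the demo's configuration macros; the real
     * values live in the gesture demo headers. */
    #define RANGE_BIN_START  15
    #define RANGE_BIN_END    20
    #define NUM_RANGE_BINS   32
    #define NUM_DOPPLER_BINS 32

    /* Sketch of the fix: scan only the configured range-bin window
     * instead of searching from bin 0. */
    static float sumFeatureWindow(float detMatrix[NUM_RANGE_BINS][NUM_DOPPLER_BINS])
    {
        float acc = 0.0f;
        /* Was: for (rangeBin = 0; rangeBin <= RANGE_BIN_END; rangeBin++) */
        for (int rangeBin = RANGE_BIN_START; rangeBin <= RANGE_BIN_END; rangeBin++)
        {
            for (int dopplerBin = 0; dopplerBin < NUM_DOPPLER_BINS; dopplerBin++)
            {
                acc += detMatrix[rangeBin][dopplerBin];
            }
        }
        return acc;
    }
    ```

    The narrower the configured window, the fewer cells are touched per frame, which is where the saved processing time comes from.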

    Best regards,

    Josh

  • Hello, Josh

    Thanks for your reply.

    I modified the loop settings in Computefeatures_DOABased() and got it running successfully, but when I collect gesture features, I still can’t detect gestures within the range I set. My RANGE_BIN settings and radar configuration are as follows:

    [radar config]

    [range bin setting]

    [feature view for gestures performed between 60 cm and 100 cm ]

    Is there anywhere else I need to modify?

    Best regards,

    Haoyu

  • Hello Haoyu, 

    I'm sorry for the slow response. As you may know, this demo uses a machine learning model to make predictions about which gesture was detected. Since the model was trained on features collected at a shorter range, it likely does not recognize the features well enough to make an accurate prediction when the gestures are performed at a longer range. For good performance, you would probably need to collect data and retrain the model for this range.

    Best regards,

    Josh 

  • Hi Josh,
    Thanks for your reply.

    I think there may have been some misunderstanding due to how I explained things earlier, so let me clarify my question.

    I'm using the sample project code and trying to get the device to detect hand gestures within the range of 1 cm to 100 cm. To do this, I modified the RANGE_BIN settings and used the enhanced COM port to collect gesture features for training purposes.

    However, when I set the RANGE_BIN between 15 and 20, I noticed that when I perform a gesture at around 100 cm, the features received through the COM port do not contain any detectable gesture data. This confused me, because based on my radar configuration, the device should be able to detect up to around 200 cm.

    But when I wave my hand at 100 cm, the value of numPoints is mostly zero. Does this mean the device isn’t receiving gesture data at that distance? If so, what would be the recommended way to address this?

    [radar config]

    [range bin setting]

    [feature view for gestures performed between 60 cm and 100 cm ]

    Any suggestions would be appreciated.


    Best regards,

    Haoyu

  • Hi Haoyu, 

    Okay, I understand. Thank you for clarifying. The magnitude of the heatmap cells will be lower at farther ranges, so you may need to try lowering the threshold value used for numPoints (THRESH_NUM_POINTS). Ideally, the feature should be 0 when no gesture is being performed, with a noticeable spike when gestures are performed.
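    To illustrate what the threshold does (the function and names here are just a sketch, not the demo's exact implementation), numPoints is essentially a count of heatmap cells whose magnitude exceeds the threshold:

    ```c
    #include <assert.h>
    #include <stddef.h>

    /* Sketch of the numPoints computation: count heatmap cells whose
     * magnitude exceeds the threshold (THRESH_NUM_POINTS in the demo).
     * Cell magnitudes fall off with range, so a fixed threshold yields
     * fewer points for a gesture at 100 cm than for the same gesture
     * at 30 cm. */
    static int countPoints(const float *heatmap, size_t numCells, float thresh)
    {
        int numPoints = 0;
        for (size_t i = 0; i < numCells; i++)
        {
            if (heatmap[i] > thresh)
            {
                numPoints++;
            }
        }
        return numPoints;
    }
    ```

    Lowering the threshold lets weaker (longer-range) returns count as points, at the cost of admitting more noise when no gesture is performed.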

    Regards,

    Josh

  • Hello Josh,

    Sorry for the delayed response. We've conducted some experiments adjusting the THRESH_NUM_POINTS parameter. Specifically, when I stand at a distance of 75 cm to 100 cm and perform hand-waving gestures in an open area, we observed the changes in the gesture features, particularly numPoints, evaluationMean, and AzimuthMean. Altering the value of THRESH_NUM_POINTS did not allow gestures to be detected effectively at the specified distances.

    The test results are as follows: we modified the THRESH_NUM_POINTS value to 1350, 1300, and 1200, respectively, and performed hand-waving gestures at the specified distances. The feature data showed minimal differences compared to scenarios without any hand gestures.

    [gesture features with THRESH_NUM_POINTS = 1350]

    [gesture features with THRESH_NUM_POINTS = 1300]

    [gesture features with THRESH_NUM_POINTS = 1200]

    Do you have any suggestions for adjustments?

    Best regards,

    Haoyu

  • Hi Haoyu, 

    It seems like the number-of-points feature is slightly increasing as you lower the threshold. If you continue to lower it further, does numPoints remain at 0 when no gestures are performed? For example, could you try a value of 1000 for THRESH_NUM_POINTS? You might also try slightly adjusting DOPPLER_BINS_TO_SUPPRESS. These parameters were chosen empirically based on what worked well for the range we tested at, so they will likely need to be changed to get good feature data at a different range.

    Additionally, make sure the hand/arm is inside of the region of interest when doing the gestures. Have you experimented with different standing locations? 

    Best regards,

    Josh

  • Hello, Josh,

    I’d like to clarify that last week, when I was experimenting with the THRESH_NUM_POINTS setting, I found that lowering its value did indeed increase the numPoints count. However, between 60 cm and 100 cm there was no distinguishable difference in the feature data between a waving gesture and no gesture at all. I believe this makes it difficult for a neural network to recognize the gesture correctly (please let me know if I’m mistaken).

    The test results indicate that the gesture features only change significantly within 30 cm, whereas the feature data for no gesture and at 70 cm show little difference (as shown in the figure below).

    [Comparison of feature data for no gesture, right-to-left waving at 30 cm, and right-to-left waving at 70 cm]

    THRESH_NUM_POINTS: 1000

    RANGE_BIN: 15 ~ 20

    Additionally, today I tried adjusting DOPPLER_BINS_TO_SUPPRESS. I increased it step by step from the default value of 5 up to 30 and observed the resulting feature data. I found that the larger the DOPPLER_BINS_TO_SUPPRESS value, the larger the waving motion needs to be to register (please correct me if I'm wrong). However, I don't understand how changing DOPPLER_BINS_TO_SUPPRESS relates to enabling the device to detect gestures at farther distances.

    [Comparison of feature data for large-motion gestures and small-motion gestures at 30 cm, with DOPPLER_BINS_TO_SUPPRESS set to 5]

    [Comparison of feature data for large-motion gestures and small-motion gestures at 30 cm, with DOPPLER_BINS_TO_SUPPRESS set to 30]

    Any suggestions would be appreciated.


    Best regards,

    Haoyu

  • Hello Haoyu, 

    "I believe this makes it difficult for a neural network to recognize the gesture correctly"

    You are correct. A clear peak should be visible in at least the numPoints and weightedDoppler features. It looks like 1000 is indeed too low if you are getting 20-40 for numPoints when no gesture is performed. 

    I should have clarified about DOPPLER_BINS_TO_SUPPRESS: I think you could try lowering it to 4 or 3. Increasing the value only reduces the number of doppler bins used in feature computation.
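    As a rough sketch of what the parameter controls (the bin layout and names here are illustrative; the demo's actual indexing may differ, and I'm assuming zero doppler sits at the center bin), the suppression just zeros the doppler bins closest to zero doppler before features are computed:

    ```c
    #include <assert.h>

    #define NUM_DOPPLER_BINS 32

    /* Sketch of DOPPLER_BINS_TO_SUPPRESS: zero the bins around zero
     * doppler so static clutter does not dominate the features. Zero
     * doppler is assumed to be at the center bin here. */
    static void suppressZeroDoppler(float spectrum[NUM_DOPPLER_BINS], int binsToSuppress)
    {
        int center = NUM_DOPPLER_BINS / 2;
        for (int b = center - binsToSuppress; b <= center + binsToSuppress; b++)
        {
            if (b >= 0 && b < NUM_DOPPLER_BINS)
            {
                /* A larger binsToSuppress also removes slow motion,
                 * which is why bigger values demand bigger gestures. */
                spectrum[b] = 0.0f;
            }
        }
    }
    ```

    So the parameter trades clutter rejection against sensitivity to slow hand motion; it does not directly extend the detection range.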

    Have you tried varying the distance you are standing at to see if you get better results? I wonder if the number of range bins is too small and some of the gesture motion is being missed. You could also try increasing the number of range bins used so that a larger area is considered for feature extraction.

    Regards,

    Josh


  • Hello Josh,

    Sorry for the late reply. I tried adding RANGE_BIN_START to the initial position obtained from the detection matrix in ComputeFeatures_RDIBased and ComputeFeatures_DOABased (as shown below), and found that, with the original THRESH_NUM_POINTS setting, I can now observe gesture features from 70 cm to 100 cm. Are there any considerations or potential issues I should be aware of with this configuration?

    Any suggestions would be appreciated.

    Best regards,

    Haoyu

  • Hi Haoyu, 

    That is good to hear that you can see the features now. It seems that in our example code, ComputeFeatures_RDIBased hardcodes the starting pointer to the 1st range bin, and ComputeFeatures_DOABased actually just starts from the 0th range bin, which is incorrect. We will need to fix these issues in a future radar toolbox update. I believe the changes you've made are correct.
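    For anyone following along, the shape of the fix looks roughly like this (names, macro values, and matrix dimensions are illustrative, not the demo's exact code):

    ```c
    #include <assert.h>
    #include <stddef.h>

    /* Illustrative stand-ins for the demo's configuration macros. */
    #define RANGE_BIN_START  15
    #define NUM_RANGE_BINS   32
    #define NUM_DOPPLER_BINS 32

    /* Sketch of the pointer fix: offset the detection-matrix base pointer
     * by RANGE_BIN_START rows so feature extraction starts at the
     * configured window instead of a hardcoded bin 0 or bin 1. */
    static const float *featureWindowStart(const float *detMatrix)
    {
        /* Was (RDI-based): detMatrix + NUM_DOPPLER_BINS   -- hardcoded bin 1 */
        /* Was (DOA-based): detMatrix                      -- bin 0 */
        return detMatrix + (size_t)RANGE_BIN_START * NUM_DOPPLER_BINS;
    }
    ```

    With the base pointer offset this way, the features are computed over the same window the RANGE_BIN_START/RANGE_BIN_END settings describe, which matches the behavior Haoyu observed after the change.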

    Best regards,

    Josh