
IWRL6432BOOST: We can't do human vs non-human classification successfully on video doorbell demo

Part Number: IWRL6432BOOST

Tool/software:

Hardware:

IWRL6432BOOST

Software:

video_doorbell_demo.Release.appimage on radar_toolbox_2_10_00_04

Industrial_Visualizer.exe

We want to evaluate human vs non-human classification based on machine learning for the video doorbell demo. We have flashed the above prebuilt firmware from radar_toolbox_2_10_00_04 onto the IWRL6432BOOST board and used the following modified chirp configuration:

% ***************************************************************
% long_range_state_machine.cfg: Used to detect the presence of humans
% in outdoor environments, specifically for video doorbells. Detects
% movement through the point cloud, which is fed into a state machine
% to increase detection robustness.
% ***************************************************************
sensorStop 0
channelCfg 7 3 0
chirpComnCfg 23 0 0 256 4 68 0
chirpTimingCfg 9.9 24 0 12.5 62
frameCfg 2 0 280 8 250 0
antGeometryCfg 0 1 1 2 0 3 0 0 1 1 0 2 2.418 2.418
guiMonitor 2 0 0 0 0 1 1 0 1 1 1
sigProcChainCfg 64 2 1 1 0 0 0 15
cfarCfg 2 8 4 3 0 12.0 0 0.5 0 1 1 1
aoaFovCfg -80 80 -40 40
rangeSelCfg 0.1 8.0
clutterRemoval 1
compRangeBiasAndRxChanPhase 0.0 1.00000 0.00000 -1.00000 0.00000 1.00000 0.00000 -1.00000 0.00000 1.00000 0.00000 -1.00000 0.00000
adcDataSource 0 adc_data_0001_CtestAdc6Ant.bin
adcLogging 0
lowPowerCfg 1
factoryCalibCfg 1 0 40 0 0x1ff000
% Motion/Presence Detection Layer Parameters
mpdBoundaryArc 1 0.5 5 -30 30 0.5 2
mpdBoundaryArc 2 0.5 3 -70 -31 0.5 2
mpdBoundaryArc 3 0.5 3 31 70 0.5 2
stateParam 3 3 12 50 5 200
majorStateCfg 8 6 60 20 15 150 4 4
clusterCfg 1 0.5 2
% Tracking Layer Parameters
sensorPosition 0 0 1.2 0 0
gatingParam 3 2 2 2 4
allocationParam 6 10 0.1 4 0.5 20
maxAcceleration 0.4 0.4 0.1
trackingCfg 1 2 100 3 61.4 191.8 100
presenceBoundaryBox -3 3 0.5 7.5 0 3
% Classification Layer Parameters
microDopplerCfg 1 0 0.5 0 1 1 12.5 87.5 1
classifierCfg 1 3 4
rangeSNRCompensation 1 12 6 5 12
presenceGPIO 1
% baudRate 1250000
baudRate 115200
sensorStart 0 0 0 0

 

When we conducted the video doorbell experiment, we found there were no bounding boxes on either the human or the non-human object (they may appear at first but then disappear), and the label is always "unknown". We cannot classify human vs non-human objects. Could you give us suggestions for performing human vs non-human classification on the video doorbell demo based on the machine learning approach? Thank you very much.

  • Hello, 

The issue is likely because you are using one of the video doorbell configuration files. Even though you have added the commands to enable tracking and classification, the detection layer parameters for the video doorbell are not optimized for tracking targets.

My recommendation would be to use a configuration intended for tracking/classification, for example {MMWAVE_SDK5_INSTALL}\examples\mmw_demo\motion_and_presence_detection\profiles\xwrL64xx-evm\TrackingClassification_MidBw.cfg.
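    If you want to send a .cfg to the device without the Industrial Visualizer, the usual flow is to write it line by line to the CLI UART. Below is a minimal sketch using pyserial; the port name is a placeholder, the 115200 baud rate matches the baudRate command in your configuration, and the "mmwDemo:/>" prompt is the standard mmWave demo CLI prompt (verify against your SDK version).

    # Sketch: push a configuration file to the mmWave CLI UART line by line.
    # Port name and prompt string are assumptions -- adjust for your setup.
    import time
    import serial  # pyserial

    def send_cfg(port: str, cfg_path: str, baud: int = 115200) -> None:
        with serial.Serial(port, baud, timeout=1) as cli:
            with open(cfg_path) as f:
                for line in f:
                    line = line.strip()
                    if not line or line.startswith('%'):
                        continue  # skip blank lines and comments
                    cli.write((line + '\n').encode('ascii'))
                    # wait for the CLI prompt before sending the next command
                    print(cli.read_until(b'mmwDemo:/>').decode(errors='replace'))
                    time.sleep(0.05)

    # Example (hypothetical port name):
    # send_cfg('COM4', 'TrackingClassification_MidBw.cfg')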

    Best Regards,

    Josh

  • Hi, Josh,

I have used video_doorbell_demo.Release.appimage with TrackingClassification_MidBw.cfg for human vs non-human classification, but I get the following error message:

    I have also found that if I just change the following mpdBoundaryArc zones

    (No Boundary Box)

    mpdBoundaryArc 1 0.3 5 -30 30 -0.5 3
    mpdBoundaryArc 2 0.3 3 -70 -31 -0.5 3
    mpdBoundaryArc 3 0.3 3 31 70 -0.5 3

    to a boundaryBox zone

    boundaryBox -3.5 3.5 0 9 -0.5 3

    we always get the following result: the red boundary boxes appear whenever people move, but the demo cannot classify human vs non-human objects, and the label is always "unknown". (BTW, the above modification is based on the custom configuration from my previous post.)

Could you give us some suggestions, or could you provide the corresponding configurations for human vs non-human classification with mpdBoundaryArc zones on the video doorbell demo (video_doorbell_demo.Release.appimage)? Thank you very much.

  • Hello, 

    Sorry for the delay. I am checking with the author of the demo, please give me a couple days to get back to you.

    Thanks,

    Josh

  • Hello, Josh

Have you been able to check on this problem? Thank you very much for your support.

    BRs.

  • Hello, 

    I sincerely apologize for the delay here. I will try to give a bit of context to the behavior you are seeing:

    1. The video doorbell demo has some modifications in the source code compared to the motion and presence detection demo. One of those modifications is related to the windowing function used in the range FFT; because of this, the demo requires a specific ADC sample size (256). This may be one reason you had issues when initially trying TrackingClassification_MidBw.cfg.
    2. There are a couple of types of boundary boxes for this demo, which I will try to clarify; I apologize for any confusion. The two types are mpd (mpdBoundaryBox, mpdBoundaryArc) and tracker (boundaryBox, staticBoundaryBox, presenceBoundaryBox), and they serve different functions. The mpd zones are used by the presence detection DPU (mpd), while the tracker zones are used by the tracker algorithm. For targets to be tracked, and subsequently classified as human or non-human, the configuration must include the tracking commands; this is why, once you add the boundaryBox command to the configuration, you see the target boxes shown in the visualizer (see the sketch after this list).
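    As a quick aid, here is a small sketch that checks whether a .cfg contains the tracker and classifier commands discussed above. The REQUIRED list is our reading of this thread, not an authoritative SDK requirement set, and the file path in the usage comment is hypothetical.

    # Sketch: flag tracker/classifier commands missing from a .cfg file.
    # REQUIRED reflects the commands discussed in this thread, not an
    # official list -- extend it as needed.
    REQUIRED = ["trackingCfg", "boundaryBox", "presenceBoundaryBox",
                "microDopplerCfg", "classifierCfg"]

    def missing_commands(cfg_path: str) -> list[str]:
        with open(cfg_path) as f:
            present = {ln.split()[0] for ln in f
                       if ln.strip() and not ln.lstrip().startswith('%')}
        return [cmd for cmd in REQUIRED if cmd not in present]

    # print(missing_commands("long_range_state_machine.cfg"))  # hypothetical path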

    The human vs non-human classification demo uses a trained machine learning model to perform the classification. The radar data used to train that model was collected with a configuration such as TrackingClassification_MidBw.cfg. Even though you are able to get the demo running with targets being tracked, they are likely always labeled as unknown because the radar data looks very different from what the classifier is expecting. If you change the detection layer parameters of the configuration so the data is more similar to what the classifier expects, you may see improved performance. I was able to get the classifier to label me as human by making these changes to the configuration you pasted above:

    chirpComnCfg 23 0 0 256 4 68 0     ->     chirpComnCfg 16 0 0 256 4 47 0
    chirpTimingCfg 9.9 24 0 12.5 62     ->     chirpTimingCfg 6 32 0 40 60.5
    frameCfg 2 0 280 8 250 0     ->     frameCfg 2 0 230 64 100 0
    sigProcChainCfg 64 2 1 1 0 0 0 15     ->     sigProcChainCfg 32 2 1 2 8 8 1 0.3

    and adding boundaryBox -3.5 3.5 0 9 -0.5 3
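    To make the "data the classifier expects" point concrete, the sketch below derives a few key parameters for both configurations. The field interpretations (ADC sample rate = 100 MHz / DigOutputSampRate, swept bandwidth = slope x sampling time, and the frameCfg field order) are our assumptions from the xWRL6432 CLI documentation -- verify them against the user's guide for your SDK release.

    # Sketch: compare derived front-end parameters of the original doorbell
    # configuration with the modified classification-friendly values.
    C = 3e8  # speed of light, m/s

    def describe(name, samp_div, n_samples, slope_mhz_us, idle_us, ramp_us,
                 chirps_per_burst, bursts_per_frame, frame_ms):
        fs = 100.0 / samp_div                  # ADC sample rate (Msps), assumed
        bw = slope_mhz_us * (n_samples / fs)   # swept bandwidth (MHz)
        print(f"{name}: B = {bw:.0f} MHz, "
              f"range res = {C / (2 * bw * 1e6):.3f} m, "
              f"chirp period = {idle_us + ramp_us:.1f} us, "
              f"chirps/frame = {chirps_per_burst * bursts_per_frame}, "
              f"frame = {frame_ms} ms")

    describe("doorbell (original)", 23, 256, 12.5, 9.9, 68, 2, 8, 250)
    describe("classification (modified)", 16, 256, 40.0, 6.0, 47, 2, 64, 100)

    Running this shows the modified configuration roughly doubles the swept bandwidth and gives eight times as many chirps per frame at a faster frame rate, which is a much denser micro-Doppler input for the classifier.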

    To take a step back, why are you starting from the video doorbell demo instead of the motion and presence detection demo? Is the target application closer to the video doorbell use case, or do you just want the tracker to use the arc-shaped zones instead of the box-shaped zones?

    Best Regards,

    Josh

  • Hello, Josh.

    Thank you very much for your detailed information.

    1. Why are you starting from the video doorbell demo instead of the motion and presence detection demo?

    Because we think the video doorbell demo contains a feature capable of classifying humans vs non-humans, and we are also very interested in outdoor security applications.

    2. Is the target application closer to the video doorbell use case, or do you just want the tracker to use the arc-shaped zones instead of the box-shaped zones?

    We think the video doorbell demo, which uses arc-shaped zones instead of box-shaped zones, is more suitable for the outdoor security application.

    We have changed the chirp configuration according to your modifications, as follows:

    sensorStop 0
    channelCfg 7 3 0
    chirpComnCfg 16 0 0 256 4 47 0
    chirpTimingCfg 6 32 0 40 60.5
    frameCfg 2 0 230 64 100 0
    antGeometryCfg 0 1 1 2 0 3 0 0 1 1 0 2 2.418 2.418
    guiMonitor 2 0 0 0 0 1 1 0 1 1 1
    sigProcChainCfg 32 2 1 2 8 8 1 0.3
    cfarCfg 2 8 4 3 0 12.0 0 0.5 0 1 1 1
    aoaFovCfg -80 80 -40 40
    rangeSelCfg 0.1 8.0
    clutterRemoval 1
    compRangeBiasAndRxChanPhase 0.0 1.00000 0.00000 -1.00000 0.00000 1.00000 0.00000 -1.00000 0.00000 1.00000 0.00000 -1.00000 0.00000
    adcDataSource 0 adc_data_0001_CtestAdc6Ant.bin
    adcLogging 0
    lowPowerCfg 1
    factoryCalibCfg 1 0 40 0 0x1ff000

    mpdBoundaryArc 1 0.5 5 -30 30 0.5 2
    mpdBoundaryArc 2 0.5 3 -70 -31 0.5 2
    mpdBoundaryArc 3 0.5 3 31 70 0.5 2
    stateParam 3 3 12 50 5 200
    majorStateCfg 8 6 60 20 15 150 4 4
    clusterCfg 1 0.5 2

    sensorPosition 0 0 1.2 0 0
    gatingParam 3 2 2 2 4
    allocationParam 6 10 0.1 4 0.5 20
    maxAcceleration 0.4 0.4 0.1
    trackingCfg 1 2 100 3 61.4 191.8 100
    presenceBoundaryBox -3 3 0.5 7.5 0 3

    boundaryBox -3.5 3.5 0 9 -0.5 3

    microDopplerCfg 1 0 0.5 0 1 1 12.5 87.5 1
    classifierCfg 1 3 4
    rangeSNRCompensation 1 12 6 5 12
    presenceGPIO 1

    baudRate 115200
    sensorStart 0 0 0 0

    We have used the above modified chirp configuration with the video_doorbell_demo.Release.appimage firmware, but I am still never labeled as human; the label is always "unknown":

    BTW, what does your following sentence mean? Do you mean the detection layer parameters you listed above?

    "Even though you are able to get the demo running and targets being tracked, they are likely always labeled as unknown because the radar data looks much different from what the classifier is expecting. If you make some changes to the detection layer parameters of the configuration, making the data more similar to what the classifier is expecting, you may see improved performance."

    We have tuned the detection layer parameters above, but I am still not labeled as human. Could you give us some suggestions on which parameters play a key role in the final performance? Thank you very much.

    BRs.

  • Hello, 

    Thank you for the response. 

    "Because we think the video doorbell demo contains a feature capable of classifying humans vs non-humans, and we are also very interested in outdoor security applications."

    Understood. Does your outdoor security application have strict power requirements? Low power is ultimately the strength of the video doorbell demo, and while it is true that the classification code is present and can be enabled in this demo, the configuration required for accurate classification is different from the video doorbell example configurations.

    "BTW, what does your following sentence mean? Do you mean the detection layer parameters you listed above?"

    Sorry for the confusion. I was referring to the parameters that I listed, which include both sensor front-end and detection layer parameters.

    Best Regards,

    Josh

  • Hi, Josh. Thank you very much for your quick response. Regarding your question:

    "Does your outdoor security application have strict power requirements?"

    Currently, we don't have strict power requirements; we just want to evaluate human vs non-human classification performance with ML. We have tuned the following sensor front-end and detection layer parameters, combined with the video_doorbell_demo.Release.appimage firmware:

    chirpComnCfg 16 0 0 256 4 47 0
    chirpTimingCfg 6 32 0 40 60.5
    frameCfg 2 0 230 64 100 0
    sigProcChainCfg 32 2 1 2 8 8 1 0.3

    But we still can't do human vs non-human classification successfully. We also found that these parameters may cause the earlier error if we adjust them in the wrong direction. You said you got the classifier to label you as human by making these changes to the configuration we pasted above.

    Which firmware did you use (video_doorbell_demo.Release.appimage or motion_and_presence_detection_demo.release.appimage)? We will continue to adjust the above parameters. BTW, is there anything I have misunderstood? Thank you very much!

    BRs.

  • Hello, 

    Sorry for the delayed response here. Since you do not have strict power requirements, my recommendation would be to evaluate the human vs non-human classification performance using the default motion_and_presence_detection_demo.release.appimage instead of the modified video doorbell firmware. This will allow you to use the appropriate configuration file and should give better classification results. 

    Appimage location: {MMWAVE_SDK5_INSTALL}\examples\mmw_demo\motion_and_presence_detection\prebuilt_binaries\xwrL64xx-evm\motion_and_presence_detection_demo.release.appimage

    cfg file location: {MMWAVE_SDK5_INSTALL}\examples\mmw_demo\motion_and_presence_detection\profiles\xwrL64xx-evm\TrackingClassification_MidBw.cfg
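    Once the demo is running with that configuration, you can sanity-check that frames (and their TLVs) are streaming out of the data UART with a short script. Below is a minimal sketch assuming the standard mmWave demo output format (8-byte magic word 0x0102/0x0304/0x0506/0x0708 followed by a 32-byte header); the port name is a placeholder, and the baud rate should match the baudRate command in your .cfg.

    # Sketch: watch the demo's UART output stream and print per-frame stats.
    # The 40-byte frame header layout is the standard mmWave demo format --
    # treat it as an assumption and verify against your SDK's user guide.
    import struct
    import serial  # pyserial

    MAGIC = bytes([0x02, 0x01, 0x04, 0x03, 0x06, 0x05, 0x08, 0x07])

    def watch_frames(port: str, baud: int = 115200) -> None:
        buf = bytearray()
        with serial.Serial(port, baud, timeout=0.5) as uart:
            while True:  # Ctrl+C to stop
                buf += uart.read(4096)
                idx = buf.find(MAGIC)
                if idx < 0 or len(buf) < idx + 40:
                    continue  # need more data for a full header
                # header: magic(8) version(4) totalLen(4) platform(4)
                #         frameNum(4) cpuCycles(4) numDetObj(4) numTLVs(4)
                #         subFrameNum(4)
                (_ver, _total, _plat, frame_num, _cyc,
                 num_obj, num_tlvs, _sub) = struct.unpack_from('<8I', buf, idx + 8)
                print(f"frame {frame_num}: {num_obj} points, {num_tlvs} TLVs")
                del buf[:idx + 8]  # advance past this magic word

    # watch_frames('COM5')  # hypothetical port name

    If tracking and classification are active, you would expect the per-frame TLV count to grow once a target is being tracked, since the tracker and classifier results are emitted as additional TLVs.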

    Best Regards,

    Josh