This thread has been locked.


CCS/IWR6843ISK: Is IWR6843 ISK suitable to detect multiple objects which are behind each other?

Part Number: IWR6843ISK
Other Parts Discussed in Thread: IWR6843

Tool/software: Code Composer Studio

Hi,

Can the IWR6843 ISK detect multiple objects of different materials that are behind each other, as shown in the figure, at a distance of 0.5 m?

Would this detection be possible in the Python GUI application provided by TI?

Keen to hear your thoughts on such an application.

I would also like to know which demo application and chirp configuration are suitable for this application.

  • Dear Sudharshan:

    Based on the dimensions of your setup, your application will need to focus on range resolution. Range resolution denotes the minimum distance at which two objects can be resolved as two separate objects rather than one. In your case the metal object and the plastic box are separated by 10 cm, so when configuring the device, make sure the range resolution does not exceed the distance between the two objects in the scene.
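    For reference, FMCW range resolution is set by the chirp sweep bandwidth, d_res = c / (2 * B). A quick sketch of the arithmetic (plain Python; the 4 GHz figure is the IWR6843's full available sweep bandwidth):

```python
# FMCW range resolution: d_res = c / (2 * B), where B is the chirp sweep bandwidth.
C = 3e8  # speed of light in m/s

def range_resolution_m(bandwidth_hz):
    """Smallest range separation at which two objects appear as separate peaks."""
    return C / (2 * bandwidth_hz)

def bandwidth_for_resolution(d_res_m):
    """Sweep bandwidth needed to achieve a desired range resolution."""
    return C / (2 * d_res_m)

print(range_resolution_m(4e9))         # full 4 GHz sweep -> 0.0375 m (3.75 cm)
print(bandwidth_for_resolution(0.10))  # 10 cm separation needs >= 1.5 GHz of sweep
```

    So separating objects 10 cm apart is within the device's capability, provided the configured chirp actually sweeps enough bandwidth.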

    With regards to the Python GUI: assuming you have installed the mmWave Industrial Toolbox on your machine, the GUI's user guide can be found at the following path.

    Path: <mmWave Industrial Toolbox Install Path>\labs\people_counting\visualizer\docs

    I would suggest starting with the Out of Box Demo, which lets you experiment with chirp configurations. In addition, I would try the Out of Box Demo visualizer as well as the Python GUI that you mentioned.

    Best Regards, 

    Connor Desmond

     

  • Hi Connor,

    In the above application, the metal object and the plastic box are behind each other when viewed from the antenna end. From the radar's perspective, the two objects ideally look as shown below.

    We would like to know which demo application and chirp configuration are suitable for this application.

    Regards,
    Sudharshan

  • Dear Sudharshan:

    Below is a link to a chirp configuration tool you can use to configure your device. Play around with different configurations and come back to me if you have any further questions about your results. As I said before, experiment with the Out of Box Demo; it is a really helpful tool for getting an idea of what you want to do.

    With regards to the image you have shown above: how closely do you want your point cloud to resemble it?

    Link:

    https://dev.ti.com/gallery/view/1792614/mmWaveSensingEstimator/ver/1.3.0/

    Best Regards,

    Connor Desmond

  • Hi Connor,

    >>> With regards to the image you have shown above, how much do you want your point cloud to resemble this image?
    Keen to learn the minimum possible distance between detected points in the point cloud. Could you please suggest which configuration we need to change to achieve this?

    Currently we are using the below configuration for the allocation parameter:

    AllocationParam 300 800 0.1 30 0.5 20

    Thanks

    Sudharshan

  • Dear Sudharshan:

    I would start by experimenting with the range resolution and see where that gets you. Remember that for range measurements, the best performance is achieved when the object is perpendicular to the device. Use the tools I gave you to generate a configuration; then you can read up on that configuration's parameters and adjust your starting configuration. Let me know how it goes.

    Best Regards,

    Connor Desmond

  • Hi  Connor,

    Apologies for the late reply.

    We tried multiple experiments with advanced profile configurations for the out-of-the-box demo application. During our experiments, we were not able to detect multiple objects when one object is behind the other. But if the two objects are separated by some distance (50 cm, side by side), the device detects them as two objects.

    We carried out the same experiment with the Sense_and_Direct_HVAC_Control demo application, but the results are similar to the out-of-the-box demo.

    May I learn of any findings from your experiments?

    Could you please help with this requirement?

    Note: During our experiments, the IWR6843ISK antenna went bad, so we carried out the experiments with the IWR6843ISK-ODS antenna.

    May I know whether the change of antenna makes any significant difference in the results?

    Thanks 

    Shraddha

    This may not be a configuration issue, but an issue with the radar signal not reaching the object behind the first one. If the first object reflects all of the signal and none of it is reflected off the object behind it, the rear object will not be detected. I would suggest trying different materials and seeing whether you get better results. With regards to the ODS, here is a link that will answer your question:

    http://e2e.ti.com/support/sensors/f/1023/t/833770?IWR6843-What-is-the-difference-between-IWR6843ISK-and-IWR6843ISK-ODS-

    Best regards,

    Connor Desmond

  • Hi Connor, 

    We carried out the same experiment with a different object for the out-of-the-box demo: a human holding a laptop and standing in front of the radar. We checked the results in the web-based demo visualizer and the Python GUI application, using the configuration file "profile_advanced_subframe.cfg" from the out-of-the-box demo profiles. In the web-based demo visualizer, the "Number of detected Objects" field varies continuously between 5 and 7, while the Python GUI visualizer application shows the number of targets as 0.

    The following link was used for the demo visualizer:

    https://dev.ti.com/gallery/view/mmwave/mmWave_Demo_Visualizer/ver/3.3.0/ 

    May I learn of any differences and limitations between these two visualizers?

    We also tried the Sense_and_Direct_HVAC_control demo with its default configuration file, but the web-based visualizer is not able to send configurations for it. However, we are able to send the same configuration from the Python GUI application, which shows targets = 1 when a human is standing with a laptop in front of the radar.

    We need to use Sense_and_Direct_HVAC_control as the reference application because it has a human/non-human classification algorithm.

    Could you please help us detect multiple objects in our use case?

    Thanks 

    Shraddha

  • Shraddha:

    I apologize for the large gap in my assistance. I am working to get you an answer to your question, or to find someone who can help answer it more quickly. Give me until early next week to get this process running again.

    Best regards,

    Connor Desmond

  • Dear Shraddha:

    The Sense_and_Direct_HVAC_control demo will not detect static objects. I would use the Out of Box demo and do the following:

    1. Make sure that static clutter removal is disabled.

    2. Get the highest range resolution possible at the maximum range at which you want to detect objects in your scene.

    3. Turn off peak grouping in range and Doppler, so that clusters can be identified.

    4. Make sure that each of the two objects separately can be detected before putting them both in the scene.

    For help with steps 1-3, I would suggest reading the CLI commands documented in the SDK user's guide.
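    As a rough sketch of how steps 1 and 3 look in a profile .cfg (based on the SDK 3.x out-of-box demo CLI; the parameter order and count differ between SDK versions, so verify each line and every argument against the SDK user's guide before using it):

```
% Step 1: disable static clutter removal on all subframes (assumed SDK 3.x syntax)
clutterRemoval -1 0
% Step 3: in SDK 3.x profiles the last cfarCfg argument enables peak grouping,
% so set it to 0 for both range (direction 0) and Doppler (direction 1).
% The remaining arguments here are illustrative defaults - keep your own values.
cfarCfg -1 0 2 8 4 3 0 15 0
cfarCfg -1 1 0 4 2 3 1 15 0
```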

    Please let me know how it goes.

    Best regards,

    Connor Desmond

  • Hi Connor, 

    As per your suggestions, I flashed the out-of-the-box demo to check the output. I observed that if I disable the "static clutter removal" and "peak grouping in range and Doppler" options, I see a larger number of detected objects even when no object is present in front of the radar. But when I enable those options, the number of detected objects is reduced.

    The following image shows the no-object case with the options disabled:

      

    As the image shows, the "Number of detected objects" field reads 18, and the value varies frequently.

    Could you please assist me with this?

    Please share your thoughts.

    Thank you 

    Shraddha

     

  • Shraddha:

    I would suggest reading more about the CFAR configuration and the chirp configuration, then applying changes and recording the differences in what you see in the scene. From other scenes that pertain to static object detection, disabling static clutter removal is needed. I am not absolutely sure, but the variance in the number of detected objects could be reduced for your scene by tinkering with the CFAR and chirp configurations.
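    To see why the CFAR threshold directly controls that variance: in cell-averaging CFAR, each range bin is declared a detection only if it exceeds a multiple (the threshold scale) of the averaged noise in its neighboring training cells, so raising the scale can only remove detections. A toy sketch in plain Python (not the device's actual implementation; window sizes and the synthetic data are made up for illustration):

```python
import random

def ca_cfar(signal, guard=2, train=8, scale=3.0):
    """1D cell-averaging CFAR: flag cell i when it exceeds scale times the
    mean of the training cells on both sides (guard cells excluded)."""
    hits = []
    for i in range(len(signal)):
        noise = [signal[j]
                 for j in list(range(i - guard - train, i - guard)) +
                          list(range(i + guard + 1, i + guard + train + 1))
                 if 0 <= j < len(signal)]
        if noise and signal[i] > scale * sum(noise) / len(noise):
            hits.append(i)
    return hits

random.seed(0)
# Exponential noise floor plus two strong synthetic targets
profile = [random.expovariate(1.0) for _ in range(256)]
profile[60] += 40.0
profile[200] += 40.0

low = ca_cfar(profile, scale=3.0)   # low threshold: targets plus noise spikes
high = ca_cfar(profile, scale=8.0)  # higher threshold: fewer spurious detections
print(len(low), len(high))          # the higher scale never yields more detections
```

    The same trade-off applies on the device: a higher CFAR threshold scale suppresses the fluctuating noise detections you saw with no object present, at the cost of possibly missing weak returns.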

    Best regards,

    Connor Desmond