IWR6843ISK-ODS: Guide to "Detecting Human Falls and Stance"

Part Number: IWR6843ISK-ODS

Dear TI support,

1) Which project (lab) reference source is better for exercising and implementing a "detecting human falls and stance" algorithm? Should I refer to the 3D People Counting demo or the Area Scanner demo?

   

2) I am just getting started tracing the code of the 3D People Counting demo and need a simple exercise. Could you please point me to the code area that outputs the point cloud data, and to the function in which I could calculate:

  1.  the height of each target
  2.  the dimensions of each target (i.e., of a person)?

Thanks

Ben.

     

  • Hi Ben,

    I recommend you use the 3D People Counting demo for this. You should start with the fall detection processing on your PC, as it is easier to debug your application.

    The Python source has a parser, which you can use to get the point cloud and tracker data; a minimal parsing sketch is attached at the end of this reply.

    Regards,

    Justin
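
    For reference, here is a minimal sketch of the kind of parsing the demo's Python source does: sync to the magic word on the data UART, read one complete frame, then walk its TLVs. It is NOT the shipped parser: the serial port name, the 40-byte frame header layout (taken from the SDK out-of-box demo), and the TLV type IDs and point format are assumptions you should verify against the parser in the 3D People Counting Python source.

```python
# Minimal sketch, NOT the shipped parser: the 40-byte header layout below is the
# SDK out-of-box demo layout, and the TLV type IDs and point format are assumed.
# Verify all of them against the parser in the 3D People Counting Python source.
import struct

import serial  # pyserial

MAGIC = bytes([0x02, 0x01, 0x04, 0x03, 0x06, 0x05, 0x08, 0x07])
POINT_CLOUD_TLV = 6   # assumed TLV type ID for the point cloud
TARGET_LIST_TLV = 7   # assumed TLV type ID for the tracker (target list) output
HEADER_LEN = 40       # assumed frame header length in bytes

def read_frame(port):
    """Sync to the magic word, then read one complete frame from the data UART."""
    buf = b''
    while MAGIC not in buf:
        buf += port.read(32)
    buf = buf[buf.index(MAGIC):]
    while len(buf) < HEADER_LEN:
        buf += port.read(HEADER_LEN - len(buf))
    total_len = struct.unpack('<I', buf[12:16])[0]   # assumed totalPacketLen field
    while len(buf) < total_len:
        buf += port.read(total_len - len(buf))
    return buf[:total_len]

def iter_tlvs(frame):
    """Walk the TLVs in one frame, yielding (type, payload) pairs."""
    num_tlvs = struct.unpack('<I', frame[32:36])[0]  # assumed numTLVs field
    idx = HEADER_LEN
    for _ in range(num_tlvs):
        tlv_type, tlv_len = struct.unpack('<II', frame[idx:idx + 8])
        yield tlv_type, frame[idx + 8:idx + 8 + tlv_len]
        idx += 8 + tlv_len

def parse_points(payload):
    """Assumes uncompressed points of 4 float32 each (x, y, z, Doppler)."""
    return [struct.unpack('<4f', payload[off:off + 16])
            for off in range(0, len(payload), 16)]

if __name__ == '__main__':
    data_port = serial.Serial('COM4', 921600, timeout=0.5)  # hypothetical port name
    frame = read_frame(data_port)
    for tlv_type, payload in iter_tlvs(frame):
        if tlv_type == POINT_CLOUD_TLV:
            print(parse_points(payload))
```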

  • Hi Justin,

    Some questions:

    1) How do you explain the phenomenon I circled in red in the picture below, and how can I eliminate it?

    2) When I stay still without any movement, the point cloud output disappears after a moment and the target leaves tracking. Most of the time I need to keep my body moving, otherwise the point cloud soon disappears.

        Is this a sensitivity problem? Which parameter can I adjust to improve it?

    3) How do I calculate a target's actual height with the EVM side-mounted (~10° tilt)?

         Is the rectangle drawn around a tracked target drawn at its actual height?

         From my observation, the point cloud inside the drawn rectangle is sparse and scattered when I am moving or sitting, so it does not seem to convey the target's height. Also, since I have to keep moving or the point cloud soon disappears, it is hard to observe the height of a target in different stances.

    Thanks

    Ben.

  • Hi Ben,

    1. I think this is multipath; see this E2E post: https://e2e.ti.com/support/sensors/f/1023/p/907792/3355141#3355141

    2. You can increase the Doppler resolution, or lower the CFAR thresholds. See this document for details on Doppler resolution.

    3. Please see the experiment on height detection and fall detection. There is pseudocode there that explains the techniques I have used; a rough height-estimation sketch is also attached at the end of this reply.

    Regards,
    Justin
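
    On point 3, here is one rough way to estimate a target's height from its point cloud when the sensor is tilted: rotate each point by the mount tilt so that z is measured from the floor, then take a high percentile of z over the points associated with the track. This is only a sketch of the idea; the mounting height, tilt angle, axis convention, and the 95th-percentile choice are assumptions to tune against your own data, and the pseudocode in the experiment document remains the reference.

```python
# Sketch only: estimate a tracked target's height from its associated point cloud
# when the sensor is tilted downward by ~10 degrees. Sensor height, tilt angle,
# axis convention, and the 95th-percentile choice are assumptions to tune.
import numpy as np

SENSOR_HEIGHT_M = 2.0   # assumed mounting height above the floor
TILT_DEG = 10.0         # assumed downward tilt of the EVM

def to_floor_frame(points_xyz, sensor_height=SENSOR_HEIGHT_M, tilt_deg=TILT_DEG):
    """Rotate sensor-frame points (x lateral, y forward, z up) about the x axis
    by the tilt angle and shift them so that z = 0 is the floor."""
    t = np.radians(tilt_deg)
    rot = np.array([[1, 0, 0],
                    [0, np.cos(t), -np.sin(t)],
                    [0, np.sin(t),  np.cos(t)]])
    pts = np.asarray(points_xyz, dtype=float) @ rot.T
    pts[:, 2] += sensor_height
    return pts

def estimate_height(target_points_xyz):
    """Use a high percentile of z (rather than the max) so a single multipath or
    noise point does not inflate the height estimate."""
    pts = to_floor_frame(target_points_xyz)
    return float(np.percentile(pts[:, 2], 95))

if __name__ == '__main__':
    # Hypothetical point cloud (x, y, z in metres, sensor frame) for one track.
    cloud = [[0.1, 2.0, -0.3], [0.0, 2.1, -0.8], [-0.1, 2.0, -1.4], [0.05, 2.05, -0.5]]
    print('estimated height: %.2f m' % estimate_height(cloud))
```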

    Hi Justin,

    About your answer 2 to my question 2:

    Yes, I have finished reading the document you mentioned (Programming Chirp Parameters in TI Radar Devices).

    (I have also studied other material beforehand, such as "The fundamentals of mmWave sensor", so I have some of the concepts.)

    Sorry, which chapter/section of that document do you mean? I don't follow what you said about increasing Doppler resolution, and I did not find an explanation for my question 2.

    Do you mean increasing the "velocity resolution"? Could you please explain further? Thanks.

    [Velocity Resolution]

    "In applications, like park assist, you might need to separate out objects with small velocity differences, for which good velocity resolution is needed. Velocity resolution mostly depends on the transmit frame duration, that is, increasing the number of chirps in a frame improves the velocity resolution."

     

    Thanks

    Ben.

  • Hi Ben,

    Doppler resolution is velocity resolution.

    The paper describes how Doppler resolution is calculated, so you can calculate the Doppler resolution of the chirp configuration you are already using and then modify it to meet your requirements. You will be limited by memory; the easiest way to increase the Doppler resolution is to increase the idle time, at the expense of max Doppler. A small calculation sketch is attached below.

    Regards,

    Justin
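
    As a worked example of that trade-off (a sketch, not taken from the paper verbatim): for a TDM-MIMO frame, the velocity resolution is λ / (2 · T_active), with T_active = numLoops · numTx · (idle time + ramp time), and the maximum unambiguous velocity is λ / (4 · numTx · (idle time + ramp time)). The chirp numbers below are placeholders, not a recommended configuration.

```python
# Quick sketch of the Doppler (velocity) resolution / max-velocity trade-off
# discussed above. The chirp numbers are placeholders, not a recommended config.
C = 3e8  # speed of light, m/s

def doppler_params(start_freq_ghz, idle_us, ramp_us, num_loops, num_tx):
    """Return (velocity resolution, max unambiguous velocity) in m/s for a
    TDM-MIMO frame: num_loops chirps per TX, num_tx transmitters."""
    lam = C / (start_freq_ghz * 1e9)      # wavelength, m
    tc = (idle_us + ramp_us) * 1e-6       # single chirp period, s
    t_active = num_loops * num_tx * tc    # total chirping time per frame, s
    v_res = lam / (2 * t_active)
    v_max = lam / (4 * num_tx * tc)
    return v_res, v_max

if __name__ == '__main__':
    # Placeholder chirp: 60 GHz start, 60 us ramp, 96 loops, 3 TX; sweep idle time.
    for idle in (30.0, 60.0, 120.0):
        v_res, v_max = doppler_params(60.0, idle, 60.0, 96, 3)
        print('idle %5.0f us -> v_res %.3f m/s, v_max %.2f m/s' % (idle, v_res, v_max))
```

    Running it shows the trade-off directly: a longer idle time improves (shrinks) the velocity resolution while reducing the maximum unambiguous velocity.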