IWRL6432BOOST: IWRL6432BOOST

Part Number: IWRL6432BOOST
Other Parts Discussed in Thread: IWR6843ISK, IWRL6432

Hello,

I am using the IWRL6432ISK board for people tracking with the people tracking lab. However, I've noticed that the point clouds generated by the IWRL6432ISK are poorer and more spread out than those from the IWR6843ISK. I believe the primary reason for this difference is that the people tracking lab for the 6843 uses the Capon algorithm for angle estimation, while the lab for the 6432 uses an FFT. Therefore, I have two questions:

  1. How can I achieve richer and more concentrated point clouds? It is crucial for me that the point clouds be more concentrated.
  2. If the difference is due to the Capon algorithm, how can I incorporate the Capon algorithm into the people tracking lab for the IWRL6432?
  • Hello.

    How can I achieve richer and more concentrated point clouds? It is crucial for me that the point clouds be more concentrated.

    You can increase the point cloud density by reducing the CFAR threshold in the configuration. This filters out fewer points and provides a denser point cloud, but it will also increase the amount of noise, so you should adjust this value experimentally.
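
    For reference, the sketch below shows where a threshold scale enters a basic cell-averaging CFAR decision; lowering it admits weaker cells as detections. This is only an illustration of the mechanism, not the demo's DPU code or its configuration format.

```c
#include <stdbool.h>

/* Declare a detection if the cell under test (CUT) exceeds the local noise
 * average scaled by thresholdScale. Lowering thresholdScale lets weaker
 * returns through, adding points (and noise) to the point cloud. */
bool caCfarDetect(const float *power, int len, int cut,
                  int guard, int train, float thresholdScale)
{
    float noiseSum = 0.0f;
    int count = 0;

    /* Average training cells on both sides of the CUT, skipping guard cells. */
    for (int i = cut - guard - train; i <= cut + guard + train; i++) {
        if (i < 0 || i >= len)
            continue;                              /* clip at array edges   */
        if (i >= cut - guard && i <= cut + guard)
            continue;                              /* skip guard band + CUT */
        noiseSum += power[i];
        count++;
    }
    if (count == 0)
        return false;

    float noiseAvg = noiseSum / (float)count;
    return power[cut] > thresholdScale * noiseAvg;
}
```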

    If the difference is due to the Capon algorithm, how can I incorporate the Capon algorithm into the people tracking lab for the IWRL6432?

    Please try adjusting the CFAR threshold first, as it is an easier way to address the problem you described.

    Sincerely,

    Santosh

  • Hi,

    Thank you for your kind response. Based on your suggestion, reducing the CFAR threshold does make the point clouds richer; however, it does not achieve the desired concentration of point clouds. Our main goal is to separate two adjacent targets as much as possible, so achieving greater concentration is especially important for our application.

    Given the points discussed above, I would appreciate hearing your opinion on the two questions I mentioned.

    Thank you and best regards,

  • Hello.

    Our main goal is to separate two adjacent targets as much as possible, so achieving greater concentration is especially important for our application.

    You can improve this by increasing the number of chirps, if you are okay with the increase in power consumption (a rough rule of thumb for the SNR gain is sketched below). Could you provide clarification on what you mean by a more concentrated point cloud? Lowering the CFAR threshold should increase the density of the point cloud, but you mentioned it made the cloud "richer" without being concentrated enough?
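
    As a rough rule of thumb, coherent integration over N chirps improves SNR by roughly 10*log10(N) dB, so each doubling of the chirp count buys about 3 dB at the cost of a longer active (higher-power) period. The numbers below are only illustrative:

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Coherent integration over N chirps improves SNR by roughly
     * 10*log10(N) dB; each doubling of the chirp count adds ~3 dB but
     * lengthens the active (transmitting) part of the frame. */
    for (int chirps = 32; chirps <= 256; chirps *= 2)
        printf("%3d chirps -> ~%4.1f dB integration gain\n",
               chirps, 10.0 * log10((double)chirps));
    return 0;
}
```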

    For information on, and an implementation of, the Capon algorithm in the 6432 processing chain, you can take a look at the Life Presence Detection demo source code and migrate the Capon chain as needed. A minimal illustration of what that chain computes is sketched below.
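
    For orientation while you study that code, here is a rough, self-contained sketch of the Capon (MVDR) spatial spectrum for a single range bin. The channel count, snapshot count, angle grid, and diagonal loading factor are illustrative assumptions, not values taken from the demo.

```c
/* Minimal Capon (MVDR) azimuth spectrum sketch for one range bin.
 * Illustrative only -- not the TI DPU implementation. Assumes a uniform
 * linear virtual array with half-wavelength spacing and NUM_CHIRPS
 * snapshots of the range-FFT output for that bin. */
#include <complex.h>
#include <math.h>

#define NUM_VRX     4    /* virtual RX channels (illustrative) */
#define NUM_CHIRPS  32   /* snapshots for the covariance estimate */
#define NUM_ANGLES  64   /* angle bins across +/- 60 degrees */

/* Solve R * x = b by Gaussian elimination; diagonal loading keeps R
 * positive definite, so pivoting is omitted for brevity. */
static void solve(double complex R[NUM_VRX][NUM_VRX],
                  const double complex b[NUM_VRX],
                  double complex x[NUM_VRX])
{
    double complex A[NUM_VRX][NUM_VRX + 1];
    for (int i = 0; i < NUM_VRX; i++) {
        for (int j = 0; j < NUM_VRX; j++) A[i][j] = R[i][j];
        A[i][NUM_VRX] = b[i];
    }
    for (int k = 0; k < NUM_VRX; k++)
        for (int i = k + 1; i < NUM_VRX; i++) {
            double complex f = A[i][k] / A[k][k];
            for (int j = k; j <= NUM_VRX; j++) A[i][j] -= f * A[k][j];
        }
    for (int i = NUM_VRX - 1; i >= 0; i--) {
        double complex s = A[i][NUM_VRX];
        for (int j = i + 1; j < NUM_VRX; j++) s -= A[i][j] * x[j];
        x[i] = s / A[i][i];
    }
}

/* snapshots[c][n]: range-FFT sample of chirp c, virtual channel n. */
void caponSpectrum(const double complex snapshots[NUM_CHIRPS][NUM_VRX],
                   double spectrum[NUM_ANGLES])
{
    double complex R[NUM_VRX][NUM_VRX] = {0};

    /* 1. Estimate the spatial covariance matrix R = E[x x^H]. */
    for (int c = 0; c < NUM_CHIRPS; c++)
        for (int i = 0; i < NUM_VRX; i++)
            for (int j = 0; j < NUM_VRX; j++)
                R[i][j] += snapshots[c][i] * conj(snapshots[c][j]) / NUM_CHIRPS;

    /* 2. Diagonal loading for numerical robustness. */
    double loading = 0.03 * creal(R[0][0]);
    for (int i = 0; i < NUM_VRX; i++) R[i][i] += loading;

    /* 3. Capon spectrum: P(theta) = 1 / (a^H R^-1 a). */
    for (int k = 0; k < NUM_ANGLES; k++) {
        double theta = (-60.0 + 120.0 * k / (NUM_ANGLES - 1)) * M_PI / 180.0;
        double complex a[NUM_VRX], Rinv_a[NUM_VRX];
        for (int n = 0; n < NUM_VRX; n++)
            a[n] = cexp(I * M_PI * n * sin(theta));  /* lambda/2 spacing */
        solve(R, a, Rinv_a);
        double complex denom = 0;
        for (int n = 0; n < NUM_VRX; n++)
            denom += conj(a[n]) * Rinv_a[n];
        spectrum[k] = 1.0 / creal(denom);
    }
}
```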

    Sincerely,

    Santosh

  • Dear...

    Thanks for your consideration. By "concentration of the point cloud", I mean having the points for a target close together rather than scattered and spread out, so that two targets can be distinguished.

    The reason I am heading toward Capon is the point mentioned above.

    Before proceeding to implement Capon on the 6432, I would be grateful to know, based on your experience, whether the board can handle the computational burden of Capon together with the tracker.

    The lab you mentioned has no tracker.

  • Hello Amir.

    I mean having points close together for a target

    To achieve this, you can try to improve (i.e., reduce) the range resolution, which makes closely spaced points more distinguishable, but it will come at the cost of your maximum range. I recommend adjusting your current configuration in the mmWave Sensing Estimator to try to meet your requirements.
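
    As a quick sanity check of that trade-off, the example below computes range resolution from the swept bandwidth and the maximum range that follows for a fixed number of range bins. The bandwidth and bin count are placeholders; the mmWave Sensing Estimator will give you values matched to the IWRL6432 and your requirements.

```c
#include <stdio.h>

int main(void)
{
    const double c = 3.0e8;            /* speed of light, m/s                    */
    const double bandwidthHz = 4.0e9;  /* swept RF bandwidth (placeholder)       */
    const int    numRangeBins = 256;   /* ADC samples / range bins (placeholder) */

    /* Finer range resolution requires more swept bandwidth ...              */
    double rangeRes = c / (2.0 * bandwidthHz);
    /* ... but with a fixed number of range bins the maximum range shrinks
     * proportionally, which is the trade-off mentioned above.               */
    double maxRange = numRangeBins * rangeRes;

    printf("range resolution: %.3f m, max range: %.1f m\n", rangeRes, maxRange);
    return 0;
}
```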

    Before proceeding to implementing capon in 6432, I will be grateful if I know according to your experiences, can the board handle computational burden of Capon along with track?

    The lab you mentioned has no tracker.

    Currently, the tracker is not enabled by default in the LPD demo as the processing time required to run both would be very long.  You can test it for yourself and adjust the frame time as needed.

    Sincerely,

    Santosh

  • Dear Santosh,

    Thanks for the response and the comments.

    i. We have tried improving the range resolution, but our challenge is mainly azimuth resolution. That is the reason we are considering Capon.

    ii. By saying we should adjust the frame time to run both Capon and gtrack, do you confirm that the hardware can manage the two together?

    Sincerely,

    Amir

  • Hello.

    By saying we should adjust the frame time to run both Capon and gtrack, do you confirm that the hardware can manage the two together?

    We do not have a demo that currently enables both; you will have to enable both and check the timing yourself to see whether the frame time can support both tracking and the Capon chain. I believe both DPUs are included in the LPD demo, but only the Capon chain is enabled by default; you can increase the framePeriod parameter in the configuration to allow a longer frame time and, consequently, a longer processing time.
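
    To make "check the timing yourself" concrete, below is a hedged sketch of a per-frame budget check. The function names are hypothetical placeholders rather than the demo's actual APIs; the idea is simply to measure the combined Capon + tracker processing time and compare it against the configured frame period.

```c
#include <stdint.h>
#include <stdio.h>

extern uint32_t readCycleCounter(void);   /* hypothetical: CPU cycle counter */
extern void runCaponAngleDpu(void);       /* hypothetical: Capon angle DPU   */
extern void runGtrackStep(void);          /* hypothetical: gtrack update     */

#define CPU_HZ          160000000u        /* example core clock              */
#define FRAME_PERIOD_MS 250.0f            /* framePeriod from the .cfg       */

void checkFrameBudget(void)
{
    uint32_t start = readCycleCounter();
    runCaponAngleDpu();
    runGtrackStep();
    uint32_t cycles = readCycleCounter() - start;

    /* Processing must finish with margin inside the frame period,
     * otherwise the frame period needs to be lengthened. */
    float processingMs = 1000.0f * (float)cycles / (float)CPU_HZ;
    printf("processing %.1f ms of a %.1f ms frame (%.0f%% margin)\n",
           processingMs, FRAME_PERIOD_MS,
           100.0f * (1.0f - processingMs / FRAME_PERIOD_MS));
}
```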

    Sincerely,

    Santosh