
IWR6843ISK: Help to get best point cloud and to modify viewer

Part Number: IWR6843ISK
Other Parts Discussed in Thread: IWR6843

I would like help on 2 subjects with respect to using the IWR6843ISK:

1) I would like to start with the out-of-box demo and just use it to collect point cloud data (x,y,z data is sufficient but if velocity is also available that is a plus).  Then I would like to do my own processing on the data.  The best viewer for the out-of-box example is the web visualizer, but the tutorial says that the Industrial Viewer can also work.  I have not tried that viewer for this demo, but I have used it for other demos with the IWR6843.

   a) Is the best way to start with the Industrial Viewer and modify that (Python) code?  Other tutorials suggest this possibility, and I have installed the necessary software dependencies to do this.  However, the Python source code consists of multiple Python files in multiple folders, so I am wondering if there is a project file that I can use with Visual Studio Code, Visual Studio, or some other IDE to help me navigate through all the files and understand how to make the changes I wish to make.  I have tried to do this manually, one file at a time, but it would be much better to work on this as a complete project rather than file by file.

2) Considering #1 above, are there guidelines/suggestions for how to optimize the point cloud considering the following:

   a) Range of objects to detect is about 5 meters or less initially

   b) I want to detect the walls and stationary features in the environment and not filter them out.  There will be few if any moving obstacles although the radar sensor itself will be moving, so in that sense everything will be moving with respect to the radar.

   c) Most things to detect should have a large radar cross section that is larger than a human although I would like to also detect that size if possible.

   d) I would like to get several points for each detected item, and in fact I would prefer not to group the points into "items" at all until I do my own processing on the point cloud data.  Ideally I would get tens or hundreds of points per detected item (e.g., for a wall).

So far I have had some success by limiting the maximum range beyond which points get filtered out, and by unchecking the box in the visualizer GUI that groups range points together.  However, at best I get maybe 15-20 points for a small object that presents an 8 sq ft profile to the radar at about 3 feet, and I would like to see more points if possible.  So this is where I could use some guidance.
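For reference, here is the minimal sketch I have in mind for unpacking the detected-points output once I capture the raw UART stream myself; the 16-bytes-per-point float32 layout (x, y, z, radial velocity) is my reading of the out-of-box demo's TLV format, so please correct me if that is wrong.

```python
import struct

def unpack_points(body):
    """Unpack a detected-points TLV payload: four float32 values per point
    (x, y, z, radial velocity), 16 bytes each -- my assumption about the
    out-of-box demo format, worth verifying against the SDK documentation."""
    return [struct.unpack_from("<4f", body, 16 * i)
            for i in range(len(body) // 16)]

# Two synthetic points packed the same way the demo would send them.
body = struct.pack("<8f", 1.0, 2.0, 0.5, 0.0, -1.0, 3.0, 0.25, 0.5)
print(unpack_points(body))  # [(1.0, 2.0, 0.5, 0.0), (-1.0, 3.0, 0.25, 0.5)]
```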

Thanks for your help.

  • I'm doing similar kinds of things, although with the AOP and not the ISK.

    The CFAR algorithm is the main thing controlling the density of the point cloud. Reducing the gain (threshold) for both range and Doppler targets will increase point cloud density and outline fidelity. Especially if you're moving the radar, the Doppler targets are more important than the stationary ones, but you need to ungroup and reduce gain on both.
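    To make that threshold/density trade concrete, here is a toy 1D cell-averaging CFAR in pure Python; the guard/training window sizes and scale values are made-up illustration numbers, not the demo's actual parameters.

```python
def ca_cfar(power, guard=2, train=8, scale=4.0):
    """Return indices whose power exceeds scale * local noise average.
    Lowering `scale` (the "gain") lets more, weaker cells through."""
    hits = []
    n = len(power)
    for i in range(train + guard, n - train - guard):
        left = power[i - guard - train : i - guard]
        right = power[i + guard + 1 : i + guard + 1 + train]
        noise = (sum(left) + sum(right)) / (len(left) + len(right))
        if power[i] > scale * noise:
            hits.append(i)
    return hits

# Synthetic range profile: noise floor ~1.0 with a strong and a weak target.
profile = [1.0] * 64
profile[20] = 30.0
profile[40] = 6.0   # weak target, e.g. a low-RCS object

print(ca_cfar(profile, scale=8.0))  # [20] -- high threshold, strong peak only
print(ca_cfar(profile, scale=3.0))  # [20, 40] -- lower threshold picks up both
```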

    However, you're going to run into processing time issues when you exceed approximately 200 points... and that limit is environment-dependent, not parameter-dependent, so you can think it's all good and then you turn too fast and generate 500 contacts in a single frame. The CFAR execution then takes longer than the chirp-to-chirp time, it misses its interrupt, and the whole thing crashes and needs to be power cycled.

    Additionally, if you're using the UART to stream out the results in the default encoding, you'll run out of time for the serial transmission once you hit about 120 points. You can double that by replacing the floating-point stream with a 16-bit fixed-point stream, but it's still a serious limitation with a <1 Mbps cap. The onboard UART drivers will do up to 3 Mbps, and the SPI will do 50 MHz, so you could hook it up to a separate conversion MCU with high-speed UARTs or an SPI slave (like a Teensy or something) and get better throughput.
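    To put rough numbers on that, here's the back-of-envelope budget I use; the 921600 baud rate, 20 ms frame period, 10 wire bits per byte, and 48-byte packet overhead are all my assumptions, so plug in your own configuration's values.

```python
def max_points(baud=921600, frame_ms=20, bytes_per_point=16, overhead_bytes=48):
    """How many points fit in one frame's worth of UART airtime.
    Assumes 10 bits per byte on the wire (start + 8 data + stop)."""
    usable_bytes = baud / 10 * (frame_ms / 1000.0)
    return int((usable_bytes - overhead_bytes) / bytes_per_point)

print(max_points(bytes_per_point=16))  # default float32 x/y/z/v points -> 112
print(max_points(bytes_per_point=8))   # 16-bit fixed-point points -> 224
```

    Under these assumptions the float encoding tops out around 110-120 points per frame and the fixed-point version roughly doubles that, which matches what I see in practice.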

    One of my current projects is working on firmware for the 6843 that outputs the densest, least-processed spatialized data possible. Maybe we could collaborate on some kind of open source firmware?

  • Thank you for your comments, Aubrey.  I started to see that same problem (system crash) when I reduced gain too far.  But your comments made me think of another way to potentially address this problem; it requires a compromise, but I think I could live with it in my case.  I have been leaving the elevation and azimuth fields of view set at the full +/- 90 degrees default.  I could reduce those substantially and maybe achieve a higher density of points in the remaining (reduced) FoV, up to the 200-or-so threshold for crashing.  Then I might be able to use multiple sensors to get a larger FoV as long as they don't interfere.
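    For reference, the field of view can be narrowed in the demo configuration file with the aoaFovCfg line; the numbers below (+/-30 degrees azimuth, +/-20 degrees elevation) are just an example, and the exact argument order should be verified against your SDK version's demo user guide.

```
% aoaFovCfg <subFrameIdx> <minAzimDeg> <maxAzimDeg> <minElevDeg> <maxElevDeg>
aoaFovCfg -1 -30 30 -20 20
```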

  •    a) Is the best way to start with the Industrial Viewer and modify that (Python) code?  Other tutorials suggest this possibility, and I have installed the necessary software dependencies to do this.  However, the Python source code consists of multiple Python files in multiple folders, so I am wondering if there is a project file that I can use with Visual Studio Code, Visual Studio, or some other IDE to help me navigate through all the files and understand how to make the changes I wish to make.  I have tried to do this manually, one file at a time, but it would be much better to work on this as a complete project rather than file by file.

    I would recommend using the Industrial Visualizer. Unfortunately we don't have a project file for it yet. Typically I just use the "Open Folder" option in Visual Studio Code to navigate between the Python files.
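    If you want something closer to a project file, a minimal VS Code workspace file may help; the filename below is hypothetical, and the extraPaths entries are placeholders for wherever your copy of the visualizer keeps its modules.

```
// visualizer.code-workspace (hypothetical name) -- open via
// "File > Open Workspace from File..." in VS Code
{
    "folders": [ { "path": "." } ],
    "settings": {
        // placeholder folder names -- point these at the visualizer's
        // actual module directories so Pylance resolves cross-file imports
        "python.analysis.extraPaths": ["./common", "./gui"]
    }
}
```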

     a) Range of objects to detect is about 5 meters or less initially

    I would recommend adjusting the chirp configuration using the "Advanced" tab of the mmWave Sensing Estimator tool.
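    As a quick sanity check on what the estimator produces, these are the two relations that matter most for a short-range (5 m) scene; the specific bandwidth, ADC rate, and slope numbers below are illustrative examples, not a recommended configuration.

```python
C = 3e8  # speed of light, m/s

def range_resolution_m(bandwidth_hz):
    """Finer range resolution -> more range bins across an extended target."""
    return C / (2 * bandwidth_hz)

def max_range_m(adc_rate_hz, slope_hz_per_s):
    """Maximum unambiguous range set by the ADC sample rate and chirp slope."""
    return C * adc_rate_hz / (2 * slope_hz_per_s)

print(range_resolution_m(4e9))   # ~0.0375 m using the full 4 GHz sweep
print(max_range_m(5e6, 100e12))  # 5 MHz ADC, 100 MHz/us slope -> 7.5 m
```

    For a 5 m scene, the takeaway is that you can afford a large bandwidth (fine range resolution, hence more points along a wall) without running out of maximum range.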

       b) I want to detect the walls and stationary features in the environment and not filter them out.  There will be few if any moving obstacles although the radar sensor itself will be moving, so in that sense everything will be moving with respect to the radar.

    This is controlled by the clutterRemoval command in the configuration file; disabling it keeps static (zero-Doppler) returns such as walls in the point cloud.
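    For reference, the relevant line in the .cfg looks like this in the SDK 3.x demos (verify the argument order against your version's user guide); the second argument enables (1) or disables (0) static clutter removal:

```
% keep static returns (walls, furniture) in the point cloud
clutterRemoval -1 0
```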

       c) Most things to detect should have a large radar cross section that is larger than a human although I would like to also detect that size if possible.

    I would set the CFAR threshold high enough to filter out weaker signals; if you also want to pick up human-sized targets, you may need to leave some margin below your large-RCS returns.

       d) I would like to get several points for each detected item, and in fact I would prefer to not group the points into "items" at all until I do the processing myself on the point cloud data.  Ideally I would get tens or hundreds of points per detected item (ie, as for a wall).

    You might even prefer exporting a 2D structure, such as the range-azimuth heatmap, off the device to achieve this. This would let you avoid the problem of the number of points varying from frame to frame, and you could recover information that is otherwise removed by the CFAR processing.
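    As a sketch of what consuming that heatmap could look like on the host side, here is a minimal TLV walk in pure Python. The TLV type number (4), the assumption that the length field excludes the 8-byte TLV header, and the int16 imaginary-then-real sample order are my reading of the out-of-box demo source, so double-check them against your SDK version.

```python
import struct

def parse_tlvs(payload):
    """Yield (tlv_type, tlv_body) pairs from the byte region following the
    40-byte frame header of the out-of-box demo output packet."""
    off = 0
    while off + 8 <= len(payload):
        tlv_type, tlv_len = struct.unpack_from("<II", payload, off)
        yield tlv_type, payload[off + 8 : off + 8 + tlv_len]
        off += 8 + tlv_len

def heatmap_from_tlv(body, num_range_bins, num_virt_ant):
    """Unpack int16 (imag, real) pairs into a list of rows, one per range bin."""
    samples = struct.unpack("<%dh" % (len(body) // 2), body)
    cmplx = [complex(samples[i + 1], samples[i]) for i in range(0, len(samples), 2)]
    return [cmplx[r * num_virt_ant : (r + 1) * num_virt_ant]
            for r in range(num_range_bins)]

# Tiny synthetic example: 2 range bins x 2 virtual antennas.
body = struct.pack("<8h", 0, 1, 0, 2, 0, 3, 0, 4)  # (imag, real) pairs
tlv = struct.pack("<II", 4, len(body)) + body
for t, b in parse_tlvs(tlv):
    if t == 4:
        print(heatmap_from_tlv(b, 2, 2))
```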

    Best,

    Nate