
IWR6843AOPEVM: ROS environment detect

Part Number: IWR6843AOPEVM
Other Parts Discussed in Thread: IWR6843AOP

I'm trying to use the sensor to detect obstacles in a ROS environment, very similar to the Autonomous Robotics ROS Sense and Avoid example, but: (1) the point clouds I get are usually sparse, unlike in the Sense and Avoid example, and it's hard to tell which object a cluster corresponds to (note that I have already turned off peak grouping); (2) small obstacles very near the sensor (<1 m) do not reflect strong signals, so they are easily filtered out during global signal processing.

I have tried referring to the AOP chirp configurations in other examples, such as Small Obstacle Detection, but only the sensor front-end parameters are applicable in ROS projects, which doesn't help much. Is it possible to improve both cases by tuning the configuration parameters? If so, could you please provide another configuration template? Thank you.

  • Hello,

    To start off, I want to mention that due to the fundamentals of how radar works and the Doppler effect, objects in motion produce much stronger returns than static objects. For detection purposes it does not matter whether the sensor is stationary and the objects are moving, or the objects are stationary and the sensor is moving. In the Sense and Avoid demo, the platform is moving, which yields well-defined point-cloud data.

    By default, zero-Doppler points are excluded in the configuration files: any point detected as completely static is removed from the point-cloud (PC) output. For this environment, I would ensure that zero-Doppler points are included, via clusterRemoval if you are using the non-tracking ROS version, or staticRangeAngleCfg if you are using the tracker version. When you first do this you will see points everywhere, since surfaces such as walls and the floor now become part of the PC. You can then adjust the TX power backoff as well as the CFAR SNR threshold to tune out these unwanted detections and keep only the objects you care about. This works because objects differ from each other in reflectivity (SNR).
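    As a rough illustration of the SNR-thresholding idea (this is a made-up post-filter sketch, not the demo's on-chip CFAR; the (x, y, z, snr_db) point format and the 12 dB cutoff are assumptions for the example):

```python
# Hypothetical post-filter: keep only detections whose SNR exceeds a threshold.
# The (x, y, z, snr_db) point format and the 12 dB cutoff are assumptions for
# illustration; in the demo this filtering happens on-chip via the CFAR config.
def filter_by_snr(points, min_snr_db=12.0):
    """Drop low-SNR detections such as floor and wall clutter."""
    return [p for p in points if p[3] >= min_snr_db]

points = [
    (0.5, 0.2, 0.0, 25.0),   # strong reflector (e.g., a person)
    (2.0, 1.0, 0.0, 8.0),    # weak clutter return (e.g., floor)
    (1.2, -0.3, 0.1, 14.5),  # moderate reflector
]
print(filter_by_snr(points))  # the 8 dB clutter point is removed
```

    Raising `min_snr_db` mimics raising the CFAR SNR threshold: fewer clutter points survive, at the cost of possibly losing weak but real targets.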

    With proper tuning, I assure you that objects <1 m away can be detected. As an example, you could put your hand right in front of the sensor and it would still be detected: the micro-motions of your hand and the blood pumping through your veins provide enough motion for detection.

    Best Regards,

    Pedrhom Nafisi

  • Thanks for your help. One thing I'm curious about: does the IWR6843AOP standalone support the tracker version? When I ran the small_obstacle_detection demo with the industrial visualizer, parameters of the detection and tracking layers such as "dynamicRACfarCfg" were not recognized as CLI commands. Does that mean I was using a non-tracking version, hence the error? Is there any way to switch versions so that the demo works on the IWR6843AOP?

  • Hello,

    Glad I could help. If you get "could not be recognized" errors with the default CFGs, then the wrong binary is flashed to the device. For the tracker version, make sure you flash the tracking binary, which is the 3D People Counting binary. The README.txt file in the bin folder explains which binary should be flashed for which version.

    Do note that the tracker version currently does not visualize the tracks; however, all the data is fully output by the sensor, and there is a rostopic you can subscribe to in order to add your own RViz visualization.
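    For reference, here is a minimal sketch of such a subscriber. The topic name, message type, and field names below are assumptions for illustration; check `rostopic list` and `rostopic info` on your setup for the real ones.

```python
# Sketch: subscribe to the tracker output and republish each track as an
# RViz SPHERE marker. Topic and message field names are assumptions; verify
# them with `rostopic list` / `rostopic info` on your system.
def track_to_marker_fields(track_id, x, y, z):
    """Pure helper: map one track to the fields an RViz marker needs."""
    return {"id": track_id, "position": (x, y, z), "scale": 0.3}

def main():
    # ROS imports are kept inside main() so the helper above stays usable
    # without a ROS installation.
    import rospy
    from visualization_msgs.msg import Marker

    rospy.init_node("tracker_viz")
    pub = rospy.Publisher("tracker_markers", Marker, queue_size=10)

    def callback(msg):
        for t in msg.track:  # field names depend on the tracker message type
            f = track_to_marker_fields(t.tid, t.posX, t.posY, t.posZ)
            m = Marker()
            m.header.frame_id = "ti_mmwave"  # assumed frame id
            m.id = f["id"]
            m.type = Marker.SPHERE
            (m.pose.position.x, m.pose.position.y,
             m.pose.position.z) = f["position"]
            m.scale.x = m.scale.y = m.scale.z = f["scale"]
            m.color.g = m.color.a = 1.0
            pub.publish(m)

    # Substitute the tracker package's actual topic and message type here:
    # rospy.Subscriber("/ti_mmwave/radar_trackarray", TrackArray, callback)
    rospy.spin()
```

    The Marker messages can then be displayed in RViz by adding a Marker display subscribed to `tracker_markers`.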

    Best Regards,

    Pedrhom

  • After I flash the IWR6843AOP with small_obstacle_detection_68xx.bin or 3D_people_count_68xx_demo.bin from the prebuilt_binaries folder, the CFGs are successfully recognized in the ros_driver environment. But there is no point cloud in RViz, and there is no data output when I echo the rostopic. Do you know where the problem is?

  • Hello,

    I just re-ran everything from scratch and confirmed there are no bugs in the ROS package. I will go over every step in detail; hopefully this helps. This was done with ROS Melodic.

    1. Navigated to mmwave_ti_ros/ros_driver/ and ran catkin_make

    2. Within mmwave_ti_ros/ros_driver/, ran source devel/setup.bash

    3. Set the EVM to flashing mode. Flashed the 3D_people_count_68xx_demo.bin found in mmwave_ti_ros/ros_driver/src/ti_mmwave_tracker_rospkg/bin/

    4. Set the EVM to functional mode.

    5. Power cycled the board.

    6. Navigated to mmwave_ti_ros/ros_driver/src/ti_mmwave_tracker_rospkg/launch/ and ran roslaunch AOP_3d_Tracking.launch

    Afterwards, RViz launched and I was able to see the point cloud immediately. Is there any delta between these steps and what you did? Are you perhaps on ROS Noetic? Noetic should be supported, but if you do everything identically on Noetic and it still does not work, let me know.

    Best Regards,

    Pedrhom Nafisi