This thread has been locked.

IWR6843ISK: sensorPosition command in Long Range People Detection

Part Number: IWR6843ISK

Hi,

I am currently testing and analyzing the Long Range People Detection example from Radar Toolbox 2.10.

The sensorPosition CLI command is used to set the sensor height, azimuth, and elevation tilt values.

However, the sensor output (point cloud and target X, Y, Z) does not apply the sensorPosition command values; instead, it reports coordinates in the radar's own field-of-view (FOV) reference frame.

Additionally, I found that the Industrial Visualizer code uses the sensorPosition command value to perform coordinate rotation.
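For context, the kind of rotation the Industrial Visualizer performs can be sketched as follows. This is a minimal illustration, not the toolbox source; the exact sign and axis conventions are assumptions:

```python
import numpy as np

def sensor_to_world(points, height_m, az_tilt_deg, elev_tilt_deg):
    """Rotate sensor-frame points (N x 3 array, columns X, Y, Z) into a
    world frame using sensorPosition-style parameters, then add the mount
    height. Sign/axis conventions here are assumptions for illustration."""
    az = np.radians(az_tilt_deg)
    el = np.radians(elev_tilt_deg)
    # Rotation about the vertical (Z) axis for the azimuth tilt.
    rot_az = np.array([[np.cos(az), -np.sin(az), 0.0],
                       [np.sin(az),  np.cos(az), 0.0],
                       [0.0,         0.0,        1.0]])
    # Rotation about the X axis for the elevation tilt.
    rot_el = np.array([[1.0, 0.0,         0.0],
                       [0.0, np.cos(el), -np.sin(el)],
                       [0.0, np.sin(el),  np.cos(el)]])
    world = points @ (rot_az @ rot_el).T
    world[:, 2] += height_m  # sensor is mounted height_m above the floor
    return world
```

With zero tilts this reduces to simply offsetting Z by the mounting height, which matches the intuition that an untilted sensor frame differs from the world frame only by the sensor height.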

I have examined the gtrack algorithm code and found that when the azimuth tilt is non-zero, the point-cloud coordinates are transformed into the world coordinate system.

However, I did not find any transformation applied to the boundaryBox coordinates.

My Questions:

  1. Does the sensor's UART output (Point Cloud and Target X, Y, Z) not apply the sensorPosition command value?
  2. What is the purpose of transforming coordinates into the world coordinate system in Gtrack?
  3. When estimating objects in Gtrack, which shape is used for the boundary box among the following illustrations?

Thanks.

  • Hi

Thanks for your query. Please allow us a couple of days to respond.

    Regards

  • Hello, 

    Does the sensor's UART output (Point Cloud and Target X, Y, Z) not apply the sensorPosition command value?

That is correct: the output data is given in sensor coordinates, not world coordinates.

    What is the purpose of transforming coordinates into the world coordinate system in Gtrack?

The sensorPosition command and the transformation into world coordinates are required because the boundary boxes used by the gtrack algorithm are defined in world coordinates. Gtrack only uses detections that fall inside the boundary box, so it must transform the points to world coordinates to confirm that they are within the area of interest.

    When estimating objects in Gtrack, which shape is used for the boundary box among the following illustrations?

I'm not sure I understand this question. Can you please clarify? Both of these figures could be valid; for example, figure 2 would apply if an azimuth tilt is applied via the sensorPosition command.

    Best regards,

    Josh