
IWR6843: questions on people counting visualizer

Part Number: IWR6843

Hi

I have a couple of questions on the python visualizer in people counting lab.

1. In lines 604-616 of gui_main.py (updateGraph), the code removes the static point cloud and updates the indexes. I don't understand the logic of the following code. firstZ is the first element in statics. Does that mean the moving points are stored first in pointCloud, followed by the static points?

                    firstZ = statics[0][0]               # index of first static point
                    numPoints = firstZ                   # only the dynamic points remain
                    pointCloud = pointCloud[:,:firstZ]   # drop the static columns

2. In addition, the point cloud and indexes are updated, but the targets are not updated accordingly. Why is that? For example, why is no static point cloud included when tracking the targets?

3. In lines 620-642 of gui_main.py (updateGraph), why is point cloud persistence needed to update the graph?

Thanks!

Kai

  • Hi Kai,

    1. The dynamic cloud is calculated before the static cloud on device, so the point cloud has all the dynamic points first. Your understanding is correct.
    2. Static points are only used to keep a track alive in some situations. The ability to remove static points is provided for visualization purposes; it doesn't affect performance, as tracking is done entirely on the device.
    3. This is also done for improved visualization. Some people want to see the last few frames of data, while others have complained that points appearing and disappearing too quickly is jarring, so I added this feature to address those complaints.
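    A minimal sketch of that dynamic-first layout and the trim step (hypothetical data, not the actual device output; it assumes a 5×N pointCloud array whose dynamic columns come first and a statics array holding the column indices of the static points):

```python
import numpy as np

# Hypothetical frame: 3 dynamic points followed by 2 static points.
# Rows could be x/y/z/doppler/snr; columns are individual points.
pointCloud = np.arange(25, dtype=float).reshape(5, 5)
pointCloud[3, :3] = 1.0   # dynamic points: nonzero doppler
pointCloud[3, 3:] = 0.0   # static points: zero doppler

# Column indices of the static points (dynamic points come first).
statics = np.argwhere(pointCloud[3, :] == 0)

if len(statics):
    firstZ = statics[0][0]               # index of first static point
    numPoints = firstZ                   # dynamic points remaining
    pointCloud = pointCloud[:, :firstZ]  # drop the static columns

print(numPoints)         # 3
print(pointCloud.shape)  # (5, 3)
```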

    Regards,

    Justin

  • Hi Justin

    Thanks for your reply.

    So in the visualizer, only the non-static points associated with each target are shown unless the visualization of static points is enabled. In addition, does the visualizer always show the total targets and points from the previous few frames at each frame?

    Thanks!

    Kai

  • Hi Kai,

    Yes, it only shows dynamic points associated with the target if static points are hidden. The points number is the total number of points for the current frame (dynamic + static).

    Regards,

    Justin

  • Hi Justin

    Does the visualizer always show the total targets and associated dynamic points from the previous few frames at the current frame?

    Thanks!

    Kai

  • Hi Kai,

    The target data is sent one frame after the point cloud, but the visualizer synchronizes these two data streams, so that you see points and target data from the same frame. You can set it to use one persistent frame so you only see data from one frame at a time.
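    A minimal sketch of that one-frame synchronization (hypothetical frame records, not the actual visualizer code; the target list that arrives with frame N+1 is paired back with the point cloud of frame N):

```python
from collections import deque

def synchronize(frames):
    """Pair each point cloud with the target list that arrives one
    frame later, so both describe the same frame."""
    pending = deque()   # point clouds still waiting for their targets
    paired = []
    for frame in frames:
        pending.append((frame["num"], frame["points"]))
        if frame["targets"] is not None and pending:
            num, points = pending.popleft()
            paired.append((num, points, frame["targets"]))
    return paired

# Targets computed for frame N arrive in the packet of frame N+1.
stream = [
    {"num": 0, "points": "pc0", "targets": None},
    {"num": 1, "points": "pc1", "targets": "tgt0"},  # targets of frame 0
    {"num": 2, "points": "pc2", "targets": "tgt1"},  # targets of frame 1
]
print(synchronize(stream))  # [(0, 'pc0', 'tgt0'), (1, 'pc1', 'tgt1')]
```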

    Regards,

    Justin

  • Hi Justin

    Another question: in lines 108-116 of graphUtilities.py, do you always use a box of size (0.25, 0.25, 0.5) for the target tracker?

    Thanks!

    Kai

  • Hi Kai,

    I keep a constant box size, again for better visualization. 
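    A minimal sketch of drawing a constant-size box around a target centroid (a hypothetical helper that treats (0.25, 0.25, 0.5) as half-extents; the actual values and corner ordering in graphUtilities.py may differ):

```python
import numpy as np

def boxCorners(cx, cy, cz, xrad=0.25, yrad=0.25, zrad=0.5):
    """Return the 8 corners of an axis-aligned, constant-size box
    centered on the target position (cx, cy, cz)."""
    offsets = np.array([[sx, sy, sz]
                        for sx in (-xrad, xrad)
                        for sy in (-yrad, yrad)
                        for sz in (-zrad, zrad)])
    return offsets + np.array([cx, cy, cz])

corners = boxCorners(1.0, 2.0, 0.5)
print(corners.shape)                             # (8, 3)
print(corners[:, 2].min(), corners[:, 2].max())  # 0.0 1.0
```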

    Regards,

    Justin

  • Hi Justin

    I am asking about the bounding box of targets because I am trying to detect human falls. My initial idea is that the bounding box includes all the points associated with each target, so I can find the dimensions of each target and use them to decide whether a person is standing, sitting, or lying down.

    Have you ever tried to plot the bounding box using the points from the target indexes? It seems to me that some of the points associated with a single target can spread fairly randomly, which makes estimating the dimensions difficult. What do you think?

    Thanks!

    Kai

  • Hi Kai,

    The point cloud won't accurately represent the dimensions of the target. If you average over frames you may be able to develop an averaging scheme that gives reliable data. You will have to experiment with the data.
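    One possible averaging scheme, as a sketch only (hypothetical class and window size; it assumes the x/y/z points for each target have already been grouped by target ID):

```python
import numpy as np
from collections import defaultdict, deque

class DimensionEstimator:
    """Smooth per-target bounding-box dimensions over the last
    `window` frames to damp point-cloud noise."""
    def __init__(self, window=10):
        self.history = defaultdict(lambda: deque(maxlen=window))

    def update(self, tid, points):
        # points: (N, 3) array of x/y/z for one target in one frame
        dims = points.max(axis=0) - points.min(axis=0)
        self.history[tid].append(dims)
        return np.mean(self.history[tid], axis=0)

est = DimensionEstimator(window=3)
rng = np.random.default_rng(0)
for _ in range(3):
    pts = rng.normal([0.0, 0.0, 0.9], 0.1, size=(20, 3))
    avg = est.update(1, pts)   # smoothed width/depth/height
print(avg.shape)  # (3,)
```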

    Regards,

    Justin

  • Hi Justin

    Yes, averaging over frames is a good idea. To do the averaging, I need to know the point IDs for the same target over a few frames. I was wondering whether the target ID represents the same target across frames?

    Thanks!

    Kai

  • Hi Kai,

    Target ID will represent the same target; the only exception occurs when a target is lost and then re-allocated.

    Regards,

    Justin

  • Hi Justin

    I am working on the averaging method now. The problem I have is that the number of points in the point cloud is not the same as the size of the indexes. Note that I have already accounted for the fact that the point cloud is for frame N while the indexes are for frame N-1. I save the number of points and the indexes for the past 10 frames and compare them for the same frame. Do you see the same scenario?

    Thanks!

    Kai

  • Hi Kai,

    I haven't had that issue. At what point are you grabbing the point cloud and indexes?

    Regards,

    Justin

  • Hi Justin

    After the following piece of code. However, I save the number of points without removing the static points and compare it to len(indexes) for the corresponding frame.

            if (numPoints): # only save the data in previous 10 frames
                self.previousCloud[:5,:numPoints,fNum] = pointCloud[:5,:numPoints]
                self.previousCloud[5,:len(indexes),fNum] = indexes
            self.previousPointCount[fNum]=numPoints

    Thanks!

    Kai

  • Hi Kai,

    Are you also saving the indexes without cutting them off? The indexes are also trimmed here to remove those related to static points; if you aren't accounting for this, you will have too few indexes.
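    A minimal sketch of keeping the two in sync (hypothetical arrays and helper; trim the cloud and the indexes with the same cutoff, or trim neither):

```python
import numpy as np

def trimStatics(pointCloud, indexes, statics):
    """Drop static points from both the cloud and the index array
    using the same cutoff so their lengths stay equal."""
    if len(statics) == 0:
        return pointCloud, indexes
    firstZ = statics[0]                     # first static column
    return pointCloud[:, :firstZ], indexes[:firstZ]

pc = np.zeros((5, 6))                 # 6 points: 4 dynamic + 2 static
idx = np.array([0, 0, 1, 1, 2, 2])    # hypothetical target indexes
statics = [4]                         # first static column index

pc2, idx2 = trimStatics(pc, idx, statics)
print(pc2.shape[1], len(idx2))  # 4 4
```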

    Regards,

    Justin

  • Hi Justin

    I am not trimming the indexes either. I save the number of points and the indexes for the past 10 frames. Most of the time the two numbers are the same, but they differ from time to time; sometimes the number of points is larger, and sometimes it is smaller.

    Thanks!

    Kai

  • Hi Kai,

    There may be an issue with the data transmission on the device, or the parser could be making an error. Which version of the Industrial Toolbox are you using?

    Regards,

    Justin

  • Hi Justin

    I fixed the issue; it was a bug in my code. My next question is as follows:

    Since a single target ID in different frames could represent different targets (i.e., one target's track is dropped and another target takes over the same target ID), how can I accurately calculate the target dimensions using the averaging method?

    Thanks!

    Kai

  • Hi Kai,

    For the most part, I think you can assume that a target ID will always be the same target. However, when making this assumption, you can also implement a check to determine whether a target ID now represents a new target. For example, if you find that the dimensions change very quickly, that target ID may be tracking a different person.

    The tracker will do its best to maintain a single track for each person, so I think it's best to operate with that assumption and put checks in place to see whether the tracker has made an error.
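    A sketch of such a check (hypothetical threshold; it flags a target ID whose dimensions jump suddenly between frames, hinting that the track was re-allocated to a different person):

```python
def idLooksReused(prevDims, newDims, relThresh=0.5):
    """Flag a target ID as possibly re-allocated when any dimension
    changes by more than relThresh relative to the previous frame."""
    return any(abs(new - old) / max(old, 1e-6) > relThresh
               for old, new in zip(prevDims, newDims))

# Small jitter: same person.
print(idLooksReused((0.4, 0.4, 1.7), (0.42, 0.38, 1.65)))  # False
# Height halves in one frame: probably a new target on this ID.
print(idLooksReused((0.4, 0.4, 1.7), (0.45, 0.40, 0.70)))  # True
```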

    Regards,

    Justin

  • Hi Justin

    Thanks for your suggestion.

    I did some tests using the averaging method and found that the dimensions of a target are very large compared to human dimensions if I keep the static points. The target dimensions look better when I keep only the dynamic points, just as you did for visualization. Do you think this is a proper way to detect a human?

    In addition, the position of target in target list is also defined with respect to the center of sensor, correct?

    Thanks!

    Kai

  • Hi Kai,

    The position of a target is relative to the position of the sensor.

    As for the larger dimensions when including static points: I haven't tested this myself, but the static points use a less accurate AoA method, so it would make sense that their placement is less accurate, potentially leading to larger dimensions.

    Regards,

    Justin

  • Hi Justin

    Oh, I didn't realize the AoA algorithm is different for static and dynamic points. Is there any reference that describes the two AoA algorithms?

    Thanks!

    Kai

  • Hi Kai,

    Unfortunately, it isn't properly documented. It is the same AoA algorithm as the Out of Box demo.

    Regards,

    Justin

  • Hi Justin

    If you can point me to the code in the SDK where the AoA for static and dynamic points is implemented, I can check it myself.

    Thanks!

    Kai

  • Hi Kai,

    Please see the module documentation at C:\ti\mmwave_sdk_03_04_00_03\docs\mmwave_sdk_module_documentation.html; the signal chain is described in the demo section.

    Regards,

    Justin