Hello,
I understand from another thread that I posted that a frame (typically 50 ms) consists of:
1. Active time (actual chirp time)
2. Processing time
3. UART comms time.
During the processing and UART segments, we are not capturing any raw data, so this time could be treated as a blind spot. Is this correct?
If we want to decrease this blind spot, we could switch to CAN comms to shorten the communication time. We could also increase the number of chirp loops, but that runs into memory limitations.
The reason we want to lessen the blind spot is that, in the current setup, we cannot properly capture a person walking fast (or running). The track typically "breaks" and a new one is created, so it looks as if multiple different objects were created along the path of the movement. Is it correct that this could be because, within one 50 ms frame, we only collect actual data for perhaps 5 to 25 ms (depending on the chirp profile), and the rest is blind spot?
I would also like to ask whether it is feasible to modify the firmware so that, as soon as a frame's step 1 finishes, it does not wait for steps 2 and 3 to complete but immediately starts step 1 of the next frame, while steps 2 and 3 for the previous frame run in parallel. This may require switching between two data buffers: if frame 1 saves its raw data to buffer A, the next frame would save to buffer B, because buffer A is still in use by steps 2 and 3 of the previous frame.
I'm still trying to understand the structure of the CCS project, but I may need help determining which part of the code to modify to implement the change above, in case it is doable or makes sense.
Let me know your thoughts.
Hi Mary,
My understanding is that when your target person moves too fast, the tracker doesn't track them properly; instead, it creates a new track. This isn't because there is a blind spot; it's because the tracker is predicting the position of the track incorrectly. Roughly, the tracker does the following each frame:
1. Receives the point cloud for the current frame.
2. Predicts where each existing track should now be, based on its previous position and velocity (doppler).
3. Associates the new points with the predicted track positions; points that don't fit any existing track may be allocated to a new track.
If the tracker gets step 2 wrong, it will likely allocate a new track. In the situation you described, the incorrect prediction is very likely caused by incorrect doppler information. Do you know the maximum velocity of your chirp configuration? If it is too low, the doppler information will be aliased, and the tracker will predict incorrectly. (In the 3D People Counting chain, objects exceeding the maximum velocity may also disappear.) You also need to check that the tracker time step matches the frame period. Which lab/configuration are you using specifically? You mentioned two in the title.
Regards,
Justin