
IWR6843AOP: Integration of the device into our application and product

Part Number: IWR6843AOP
Other Parts Discussed in Thread: MMWAVEICBOOST, IWR6843

Hi,

We are developing a security system and are planning to integrate mmWave sensors; we have just started working with radar technology. The maximum range we need for our application is 20 m. Within this range we would like to detect humans and track them accurately. Additionally, we plan to detect static objects, such as walls or parked vehicles, and to calculate the distance between humans and these static objects.

TI's IWR6843 mmWave sensors seem to fit our application well. We therefore have several of your ISK, ODS, AoP, and MMWAVEICBOOST devices here for testing. We have gone through the labs in the Resource Explorer; the Area Scanner and People Counting labs in particular are similar to our use case. We have also started looking at the code in CCS while studying the mmWave SDK module documentation, but we are stuck on the details of the processing flow. What is the proper approach to get started with the code flow quickly? Is there a specific part of your systems and code on which we should focus? It would be easier for us to adopt the standard functions (target lists, gtrack, etc.) as-is and concentrate on using them for our application, rather than analysing the whole code in detail. How do other developers get started with developing their own application? It would be great if you could point us to the specific modules we should focus on and modify, so that we can build our application on the available APIs and code.

Regards,

Divya 

  • Hello

    Thank you for your interest in mmWave devices; it is good to hear that the provided demos have been relevant.

    While we get back to you on People Counting application-specific development, I wonder if you have had an opportunity to study the example processing chain in the SDK OOB demo. The SDK user's guide will take you to the relevant doxygen section covering the OOB chain, its data flow, and implementation details.

    Thank you,

    Vaibhav

  • Hi Vaibhav, 

    Thank you for the reply. I did take a look, but I wanted to know whether the flow described in the doxygen section is a generic one. I'm looking for support in both People Counting and Area Scanner development.

    Regards,

    Divya

  • Hi Divya,

    The SDK doxygen relates to the SDK OOB processing chain, which is a range->Doppler (followed by angle-of-arrival) processing chain suited for long-range detection and high-velocity applications.
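    For intuition, the range->Doppler idea can be sketched in a few lines of NumPy. This is a simplified illustration with made-up array sizes, not the SDK's actual implementation (which runs on the device's hardware accelerator):

```python
import numpy as np

# Simulated ADC samples: chirps x samples-per-chirp (illustrative sizes only)
num_chirps, num_samples = 64, 256
adc = np.random.randn(num_chirps, num_samples)  # stand-in for real IF data

# Range FFT along each chirp, then Doppler FFT across chirps
range_fft = np.fft.fft(adc, axis=1)
range_doppler = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)

# Magnitude map: peaks correspond to (velocity, range) cells;
# CFAR detection would then pick cells above an adaptive threshold
rd_map = np.abs(range_doppler)
print(rd_map.shape)  # (64, 256): Doppler bins x range bins
```

    Angle of arrival is then estimated per detected cell using the data across receive antennas.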

    The indoor people counting demos use a different processing chain, namely 3D Capon beamforming, which performs range-azimuth-elevation Capon beamforming (followed by Doppler estimation) to provide high angular resolution for indoor, low-velocity applications. Currently, the available high-level documentation for the people counting demo is C:\ti\mmwave_industrial_toolbox_4_5_1\labs\people_counting\68xx_3D_people_counting\docs\3DPeoplecountingDemo_ConfigurationDetails.pdf. As you can see, this is very high level; we are working on more detailed implementation documents for these demos, but they will be released in a future toolbox update.
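    To give an idea of why Capon beamforming provides higher angular resolution, here is a minimal NumPy sketch of the Capon (MVDR) spatial spectrum for a uniform linear array. This is purely illustrative; the antenna count, spacing, and signal model are made up and do not reflect the demo's implementation:

```python
import numpy as np

# Capon/MVDR spatial spectrum for an 8-element half-wavelength-spaced array
num_ant, num_snapshots = 8, 200
rng = np.random.default_rng(0)

def steering(theta_deg, n=num_ant, d=0.5):
    # Array response vector for a target at angle theta_deg
    k = np.arange(n)
    return np.exp(2j * np.pi * d * k * np.sin(np.deg2rad(theta_deg)))

# One simulated source at +20 degrees plus complex noise
noise = 0.1 * (rng.standard_normal((num_ant, num_snapshots))
               + 1j * rng.standard_normal((num_ant, num_snapshots)))
x = np.outer(steering(20.0), rng.standard_normal(num_snapshots)) + noise

R_inv = np.linalg.inv(x @ x.conj().T / num_snapshots)  # inverse covariance
angles = np.arange(-60, 61)
# Capon spectrum: P(theta) = 1 / (a^H R^-1 a)
spectrum = np.array([1.0 / np.real(steering(t).conj() @ R_inv @ steering(t))
                     for t in angles])
print(angles[np.argmax(spectrum)])  # peak lands near +20 degrees
```

    The Capon weights suppress energy from all directions except the steering direction, which is what sharpens the angular peaks compared to a plain FFT across antennas.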

    Please note that due to a major winter storm in North Texas this weekend, we are facing massive power outages causing delays in our responses.

    Regards

    -Nitin

  • Hi Nitin,

    Thank you for the follow up and information.

    Just so we are clear and on the same page: our security system is an outdoor application, and from the information above I see that the applications are categorized into indoor and outdoor. The indoor applications use Capon beamforming and the outdoor applications use the OOB processing chain. Is the Area Scanner an indoor application? People Counting is clearly indoor, as I understand it from the above. I'm looking for clear, detailed implementation documents for the outdoor demos. Could you please let me know how to proceed from here?

    Regards,

    Divya

  • Divya,

    What is the max range requirement of your security application? Any other information you can provide about the use case (e.g. field of view, moving-target detection, etc.) would also help us make a better suggestion.

    Thanks

    -Nitin

  • Hi Nitin,

    Thank you for the reply.

    The max range requirement of the security system is about 17 m. The field of view is pictorially represented below; it depends entirely on where the sensor system is placed on the vehicle. The first two pictures are side views with the sensor system placed in two different positions on the vehicle. Our main goal is to calculate the distance between the vehicle and an approaching person; the maximum distance needed between these two objects is 2 m, as shown in the third (top view) picture. The sensor system would therefore detect points from the vehicle itself as well as the presence of humans in the field of view. I'm assuming the points from the vehicle would be static points, while the moving targets would be the human or humans. Please let me know if you need more information.
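    For concreteness, the post-processing we have in mind, once the sensor gives us a tracked person position and static points, would be roughly the following (hypothetical values and simple 2D geometry, just to illustrate the goal):

```python
import numpy as np

# Hypothetical example: tracked person position and static (vehicle) point
# cloud in sensor-centric 2D coordinates (metres)
person_xy = np.array([3.0, 4.0])
vehicle_points = np.array([[1.0, 0.5],
                           [1.5, 1.0],
                           [2.0, 1.5]])

# Minimum Euclidean distance from the person to any static vehicle point
dists = np.linalg.norm(vehicle_points - person_xy, axis=1)
min_dist = dists.min()
print(round(min_dist, 2))  # 2.69
alarm = min_dist < 2.0     # our 2 m proximity threshold
```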

         

                                                                                           

    Regards,

    Divya

  • Hi Divya,

    The Area Scanner and People Counting demos are indoor (i.e. short range) applications, intended for ranges below 10 m. For outdoor applications where you want to track people at longer distances, the Long Range People Tracking demo should be used as a starting point.

    Please refer to slides 51-53 of the SDK training at: https://training.ti.com/easy-evaluation-and-development-mmwave-systems-software-development-kit. It is highly recommended to go through this entire training to gain a better understanding of the SDK architecture and application development.

    As shown in the block diagram on slide 53, the Long Range People Tracking demo is built by adding a tracker (trackerProc DPU) on top of the 68xx out-of-box demo chain. As such, the detailed HTML documentation for the out-of-box demo provided with the SDK applies:

    C:\ti\mmwave_sdk_03_05_00_04\packages\ti\demo\xwr68xx\mmw\docs\doxygen\html\index.html

    The added component here is the trackerProc DPU, so look at the section "Extending the SDK architecture for advanced applications" in the above training, and refer to the Long Range People Tracking demo code, including the custom trackerProc DPU developed for this purpose and how it is integrated with the SDK OOB demo chain.

    Regards

    -Nitin

  • Hi Nitin,

    Thank you for the update. 

    Could you please let me know how the demos are categorized as indoor or outdoor? Is it just based on the distance they can measure, or something more specific? I'm asking because, as I said, it depends on where the sensor system is placed on the vehicle; a 10 m range may also be workable for us.

    Regards,

    Divya

  • Hi Divya,

    It's a distinction we have made based on the intended use case, and one of the factors determining that use case is range. As I mentioned earlier, the people counting demos are targeted at tracking and counting people within a room, e.g. up to 3-4 m radial for the overhead people tracking demo and up to 8-10 m for the wall-mount people counting demo. Since these demos are meant for counting people, they implement the 3D Capon beamforming detection chain, which provides higher angular resolution but is meant for short ranges and low velocities (human walking motion). The detection and tracking layer parameters are tuned accordingly for this use case.

    The Long Range people detection and tracking demo, on the other hand, is meant for relatively coarse tracking of people at longer range, e.g. up to 50 m. It uses the same detection chain as the out-of-box demo. This detection chain, commonly referred to as the range-Doppler processing chain, can detect objects (including people) at longer ranges but has lower angular resolution than the Capon beamforming chain. The detection and tracking layer parameters are also tuned for the long-range use case, i.e. tracking (but not counting) people at relatively large distances, which is why we call it an outdoor demo.

    I misspoke about the Area Scanner demo earlier, so let me correct myself: the Area Scanner demo is also based on the out-of-box demo, i.e. the range-Doppler chain, but with added functionality for static object detection. It can detect and track moving targets at up to 15 m, but note that static object detection works only at less than 5 m.

    Note that for all these demos, the sensor is expected to be stationary for the tracking to work. Also, the sensor placement shown in the following picture will not be able to meet the 8.5 m radial (17 m total) detection range, since the field of view of the AOP or ODS EVMs is about 120 degrees (i.e. +/- 60 degrees). I would suggest using the other two orientations you have shown above.
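    As a rough way to sanity-check a mounting position against the ~+/-60 degree azimuth field of view, something like the following 2D check can be used (a simplified sketch that ignores elevation; names and values are illustrative):

```python
import math

# Rough 2D check: is a target within the EVM's ~+/-60 deg azimuth field of view?
FOV_HALF_DEG = 60.0

def in_fov(sensor_xy, boresight_deg, target_xy, fov_half=FOV_HALF_DEG):
    dx = target_xy[0] - sensor_xy[0]
    dy = target_xy[1] - sensor_xy[1]
    bearing = math.degrees(math.atan2(dy, dx))            # angle to target
    offset = (bearing - boresight_deg + 180) % 360 - 180  # wrap to [-180, 180)
    return abs(offset) <= fov_half

# Sensor at the origin looking along +x: a target ~30 deg off boresight is
# visible, one at 90 deg off boresight is not
print(in_fov((0, 0), 0.0, (5.0, 2.89)))  # True
print(in_fov((0, 0), 0.0, (0.0, 5.0)))   # False
```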

    Hope this clarifies some of your questions.

    Regards

    -Nitin

  • Hi Nitin,

    The information provided was definitely helpful. 

    I tested both the Area Scanner and Long Range detection demos, outdoors and indoors. I see that the Area Scanner can detect up to about 6 m; beyond that, the tracked object just becomes a dynamic point. There are multiple ghost static points in the Area Scanner. The Long Range detection demo also shows multiple ghost points, and the 100 m .cfg file reports an error, but since I'm not looking for 100 m that shouldn't matter. With the 2D and 3D 50 m configurations I see no difference in the output, and both detect multiple ghost points. Correct me if I'm wrong: what is the reason for the ghost static points in the Area Scanner and the multiple ghost tracked objects in Long Range detection? What could be the solution, or what should we keep in mind?

    Regards,

    Divya 

  • Hi Divya,

    Ghost points are usually the result of multipath reflections, i.e. the signal bouncing off multiple objects in the scene before being received at the antenna. While it is difficult to eliminate ghost points completely, you can try one or more of the following to reduce them.

    1. Reduce clutter in the scene.

    2. Reduce transmit power by changing the txPowerBackoff word in the profileCfg command, as described in the following post. Note that this will reduce the detection range as well.

    https://e2e.ti.com/support/sensors/f/1023/p/945799/3494470#3494470

    3. Another thing to try would be to increase the detection threshold in the cfarFovCfg command. 

    4. Another way to reduce ghost points is to limit the detection field of view to only the area of interest by changing the FoV in the aoaFovCfg command, e.g.

    aoaFovCfg -1 -90 90 -90 90

    5. If the ghost points fall outside the area of interest for tracking, another option is to limit the tracking area through the boundaryBox command.
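    Putting items 4 and 5 together, the relevant lines in your .cfg file would look something like the following (the values are placeholders to adapt to your scene, and the exact argument lists should be checked against the demo's user guide):

```
% Limit the angular field of view to the area of interest
% (subFrameIdx, minAzimuth, maxAzimuth, minElevation, maxElevation in degrees)
aoaFovCfg -1 -60 60 -20 20
% Restrict the tracker to the region of interest
% (x/y/z bounds in metres; placeholder values)
boundaryBox -2 2 0 10 0 3
```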

    Thanks

    -Nitin