
AWRL1432: velocity disambiguation in SDK 5.3 MMW demo

Part Number: AWRL1432

Hi,

    I am using the MMW demo in SDK 5.3, and I want to use the velocity disambiguation algorithm from the SRR demo, which requires configuring two different chirps.

    I checked the code in the SDK (common_full.c): it limits the number of profiles and chirps to 1, so I can't configure a second chirp for velocity disambiguation.

    Does the current SDK have a way to meet this requirement?

Thanks!

  • Hi,

    I am looking into this! I will update by next Monday.

    Best Regards,

    Kevin

  • Hi,

    I have a clarification question. Do you mean that you need two different chirps within the same frame? SDK 5.3 does not support multiple "profiles."

    However, if you wish to use a different chirp configuration without having to issue a "sensorStop" command, this is indeed possible. The gesture recognition demo (<SDK_ROOT>\examples\mmw_demo\gesture_recognition) does this; see gesture_recognition.c specifically.

    The demo achieves this through the use of the power management software. The functions gestureToPresenceSwitch() and presenceToGestureSwitch() are invoked when a change is desired. At a high level, the different configurations are stored in structures. When a desired condition is met, one of the switch functions is called to load the different chirp parameters. The next time the device enters and exits deep sleep, it will transmit with the new chirp configuration.
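
    A minimal sketch of that pattern (the structure and function names below are illustrative only; see gesture_recognition.c for the actual code):

        #include <stddef.h>
        #include <stdint.h>

        /* Two pre-built chirp configurations, filled in at init time.
         * Field names are hypothetical, not the demo's definitions. */
        typedef struct ChirpParams_t
        {
            float    startFreqGHz;      /* chirp start frequency      */
            float    idleTimeUs;        /* inter-chirp idle time      */
            float    rampEndTimeUs;     /* ramp end time              */
            uint16_t numChirpsPerFrame;
        } ChirpParams;

        static ChirpParams  gPresenceCfg;
        static ChirpParams  gGestureCfg;
        static ChirpParams *gPendingCfg = NULL;

        /* Analogous to gestureToPresenceSwitch()/presenceToGestureSwitch():
         * this only stages the new parameters; they take effect the next
         * time the device enters and exits deep sleep. */
        static void stageConfigSwitch(ChirpParams *newCfg)
        {
            gPendingCfg = newCfg;
        }

        /* Called on the wake-up path, before the next frame starts. */
        static void applyPendingConfigOnWakeup(void)
        {
            if (gPendingCfg != NULL)
            {
                /* reprogram the profile/chirp registers from *gPendingCfg */
                gPendingCfg = NULL;
            }
        }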

    That being said, from my understanding of velocity disambiguation, very short frame times are used. Therefore, there will not be enough time to enter deep sleep mode, so the above approach will not work. Instead, the configuration will need to be hardcoded (CLI bypass enabled) and set up for only 1 frame of transmission. After this frame, the device will have to be reconfigured by software. TI is currently working on an example implementation of this; it is planned for release at the end of 2023.
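
    In pseudo-form, that reconfigure-per-frame loop would look something like this (the types and helper functions are placeholders, not actual SDK APIs; the real sequence will be defined by the upcoming example):

        #include <stdbool.h>

        /* Placeholders, not actual SDK APIs */
        typedef struct FrontEndCfg_t { int dummy; } FrontEndCfg;
        static FrontEndCfg gCfgFast, gCfgSlow;  /* the two chirp configs */
        extern void reconfigureFrontEnd(const FrontEndCfg *cfg);
        extern void triggerOneFrame(void);      /* frameCfg set to 1 frame */
        extern void waitForFrameDone(void);

        /* Alternate the two hardcoded configurations, one frame each */
        static void velocityDisambLoop(void)
        {
            bool useFastChirp = true;
            for (;;)
            {
                reconfigureFrontEnd(useFastChirp ? &gCfgFast : &gCfgSlow);
                triggerOneFrame();
                waitForFrameDone();
                useFastChirp = !useFastChirp;   /* swap for the next frame */
            }
        }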


    Best Regards,
    Kevin 

  • Hi Kevin Ortiz,

    Yes, I need to have two different chirps within the same frame.

    In the SRR demo, we can configure two different chirps in one profile (only the idle time differs), one of which is used for detection while the other is used only for velocity disambiguation, roughly as in the sketch below. [configuration screenshot omitted]
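
    For reference, the SRR demo's two-chirp setup on SDK 3 devices uses the mmWaveLink API, roughly like this (values are illustrative):

        #include <ti/control/mmwavelink/mmwavelink.h>

        /* Two chirps on the same profile, differing only in idle time,
         * so each has a different chirp repetition period and hence a
         * different maximum unambiguous velocity. */
        static void configTwoChirps(rlUInt8_t deviceMap)
        {
            rlChirpCfg_t chirpCfg = { 0 };

            /* Chirp 0: detection chirp, uses the profile idle time as-is */
            chirpCfg.chirpStartIdx = 0U;
            chirpCfg.chirpEndIdx   = 0U;
            chirpCfg.profileId     = 0U;
            chirpCfg.idleTimeVar   = 0U;
            chirpCfg.txEnable      = 0x1U;
            (void)rlSetChirpConfig(deviceMap, 1U, &chirpCfg);

            /* Chirp 1: disambiguation chirp; the extra idle time changes
             * the chirp period and thus v_max (illustrative value, in
             * the device's idle-time units) */
            chirpCfg.chirpStartIdx = 1U;
            chirpCfg.chirpEndIdx   = 1U;
            chirpCfg.idleTimeVar   = 200U;
            (void)rlSetChirpConfig(deviceMap, 1U, &chirpCfg);
        }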

    This relies on two functions: [screenshot omitted]

    The implementation of the first function: [screenshot omitted]

    The MMW demo in the 5.3 SDK configures the RF front end through a different process: [screenshot omitted]

    Is there no way to meet this requirement in the 5.3 SDK?

    Besides waiting for a new version of the SDK, is there any way for me to do velocity disambiguation at the point-cloud level?

    Thanks

  • Hi,

    You definitely cannot send different "profiles" within the same frame in SDK 5. However, in SDK 3 we often used the term "sub-frames" to describe each profile; the principle remains the same. Instead of different profiles within the same frame, we will have to use different configurations across consecutive frames.

    The implementation is of course different, but velocity disambiguation is still possible. As I mentioned, we are working internally on supporting this and providing an example, which will be available closer to the end of the year. Below are some details on the example, which is a work in progress: [details omitted]
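
    In the meantime, the underlying idea is the standard dual-PRF style hypothesis test: the same target measured with two different chirp periods gives two aliased velocities, and the true velocity is the hypothesis consistent with both. A rough sketch (illustrative only, not our actual implementation):

        #include <math.h>

        /* vMeas1/vMeas2: aliased radial velocities (m/s) from the two
         * configurations, each folded into [-vMax_i, +vMax_i);
         * maxHyp: number of aliasing hypotheses to test on each side. */
        static float disambiguateVelocity(float vMeas1, float vMax1,
                                          float vMeas2, float vMax2,
                                          int maxHyp)
        {
            float best    = vMeas1;
            float bestErr = 1.0e30f;

            for (int k = -maxHyp; k <= maxHyp; k++)
            {
                /* hypothesis: true velocity given k wraps in config 1 */
                float cand = vMeas1 + 2.0f * (float)k * vMax1;

                /* fold the hypothesis into config 2's unambiguous range */
                float folded = fmodf(cand + vMax2, 2.0f * vMax2);
                if (folded < 0.0f)
                {
                    folded += 2.0f * vMax2;
                }
                folded -= vMax2;

                /* keep the hypothesis that best matches measurement 2 */
                float err = fabsf(folded - vMeas2);
                if (err < bestErr)
                {
                    bestErr = err;
                    best    = cand;
                }
            }
            return best;
        }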

    Best Regards,

    Kevin 

  • Hi Kevin,

    1. If we use different configurations across consecutive frames and need to detect fast-moving targets such as cars: with a frame time of 100 ms and a vehicle speed of 30 m/s (108 km/h), the target position detected in the two frames will shift by 30 * 0.1 = 3 m. The algorithm needs to handle this; will it lead to higher failure or error rates?

    2. Due to time constraints, I'm considering porting the velocity-extension algorithm from the SRR demo, which is based on the Doppler phase-offset compensation assumption, to SDK 5.

    According to the documentation in SDK 5, the AoA implementation recalculates the Doppler for the detected points with Doppler phase compensation (if 2 TX antennas are used).

    But I didn't find the relevant code in the DPU_Aoa2dProc_process() function; is this implemented in hardware?
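
    For context, what I expect is the standard 2-TX TDM-MIMO correction, roughly (generic textbook form, not the SDK's actual code):

        #include <math.h>

        typedef struct { float re; float im; } cplx;

        /* TX2 chirps lag TX1 chirps by one chirp period, so a target at
         * (signed) Doppler bin dopplerIdx adds a phase of
         * pi * dopplerIdx / numDopplerBins to the TX2 virtual channels;
         * rotate it back out before the azimuth FFT. */
        static void compensateTx2(cplx *tx2Samples, int numRxAnt,
                                  int dopplerIdx, int numDopplerBins)
        {
            float phi = 3.14159265f * (float)dopplerIdx / (float)numDopplerBins;
            float c   = cosf(-phi);
            float s   = sinf(-phi);

            for (int i = 0; i < numRxAnt; i++)
            {
                float re = tx2Samples[i].re;
                float im = tx2Samples[i].im;
                tx2Samples[i].re = re * c - im * s;
                tx2Samples[i].im = re * s + im * c;
            }
        }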

    3. Because of question 2, I need to obtain the input data for the azimuth FFT. According to the SRR demo, this data has already been Doppler-compensated for the second TX antenna. Can this be obtained in SDK 5?

    Thanks!

  • Hi,

    1) It is likely that such a large frame time will result in higher failure rates. The frame time would need to be pretty small. This is the strategy we are utilizing for the example implementation I've mentioned. More details will be available as we get closer to the finish date.

    Regarding Q2 & Q3, I need to do some investigation. I should have a response here by Tuesday of next week.

    Best Regards,

    Kevin

  • Hi Kevin,

    Is there any update on Q2 & Q3?

  • Hello.

    This is not currently available; we are aiming to release this feature by the end of Q4 2023 or in Q1 2024.

    Sincerely,

    Santosh