Hi All,
I'm successfully running the demos on the BOOST using the Visualizer in the SDK.
In the Docs section there is a "tuning" document which says you can tune by modifying the cfg in the demos folder.
Some example tuning parameters I'm interested in (I'll explain why at the bottom):
Cluster Cfg, Gating parameters.
I looked in multiple files, but I couldn't find a simple .cfg file anywhere, and none had those parameters. I've spent a long time looking online, but I'm a bit confused.
I've got mmWave Studio - I tried to open it but couldn't find any script files, even in the Radar Toolbox, which has a bunch of demos. I'm not sure how to use them - do I need these?
My use case is that I want to "see through" thick grass and detect the distance to bare soil. With the demo this is already working, but I want to make it a bit more robust, and I want to do this on an RPi. So imagine a stick 2 m above some long grass - say 60 cm long - with the mmWave kit looking down and detecting the distance to the soil.
Could you help me set this up and then get it onto an RPi 5? I think it shouldn't be hard, as the demo is working quite well, but some tuning should help!
Kind regards
Fred
Hello Fred.
You can refer to the profiles/chirp configs in the SDK, as well as the Radar Toolbox, for examples of how to use these parameters. If you just want to modify the specific parameters you mentioned, you can copy the configuration that is working for you from the Visualizer and make those modifications. However, given that it is already working, the only thing I would look to modify is the rangeSel cfg command: increase the minimum detection distance to filter out the grass.

For setting up on a Raspberry Pi, I would start with the version of the SDK that is supported by the OS on the RPi 5; as long as the UART/USB serial connection works, the setup flow should be the same.
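If it helps, the per-line cfg send that the Visualizer performs can be reproduced from a script. The sketch below is an assumption-laden illustration, not TI-provided code: it assumes pyserial is installed, that the CLI UART enumerates as /dev/ttyUSB0 at 115200 baud, and that your demo's cfg files use the usual "%" comment prefix - check your board's user guide for the actual port name and baud rate.

```python
import time

def cfg_commands(text):
    """Yield the non-empty, non-comment lines of a TI demo .cfg file.
    Lines starting with '%' are comments in the demo cfg format."""
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("%"):
            yield line

def send_config(cfg_path, port="/dev/ttyUSB0", baud=115200):
    """Send each cfg command to the demo CLI UART, one line at a time."""
    import serial  # pyserial; imported here so cfg_commands stays dependency-free
    with serial.Serial(port, baud, timeout=1) as cli, open(cfg_path) as f:
        for cmd in cfg_commands(f.read()):
            cli.write((cmd + "\n").encode())
            time.sleep(0.05)  # brief pause so the CLI can process each command
            print(cli.read(cli.in_waiting).decode(errors="ignore"))
```

You would edit the rangeSel line (or whichever gating command your SDK version uses) in the cfg file first, then call `send_config("my_tuned.cfg")` instead of loading it through the Visualizer.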
Sincerely,
Santosh
Hi Santosh,
Thanks. Running the SDK on the RPI Linux makes sense.
And if I'm right, on the command line I'd call the SDK - does TI have any examples of how this would be done?
And about this output: I did some experiments and it generally sees through the grass. But as you might imagine, there are two object detections - one at ground level and one slightly higher.
I'd like to output the object centroids, but also all those other detections (the red and green dots?) - i.e. the point cloud. Do both of these come as output automatically?
By the way, for my use case, what do you think about playing with the SNR threshold? Or perhaps the cluster size?
Thank you!
One more thing - do I NEED to use CCS, or anything other than just Linux + the SDK, in order to run the board and get data out, like object centroids?
Hello Fred.
And if I'm right, on the command line I'd call the SDK - does TI have any examples of how this would be done?
I'm not aware of any command-line support for using the SDK, but let me double-check to confirm. You can use a serial terminal to send configurations to the device once it has an application running, but if I'm interpreting your question correctly as being about calling SDK functions from the command line, I do not believe this is possible.
And about this output: I did some experiments and it generally sees through the grass. But as you might imagine, there are two object detections - one at ground level and one slightly higher.
My suggestion regarding range gating was for exactly this purpose. Do you know how far above the ground the radar will be? From my understanding the radar is at a fixed position, so if you know the distance to the top of the grass, you can simply look only at points beyond that, so the output picks up only the points from the ground.
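As a rough post-processing illustration of that range gating (a sketch in plain Python, with made-up numbers: radar about 2 m above the soil, grass tops roughly 0.6 m tall, so the grass layer starts around 1.4 m from the radar):

```python
# Keep only detections beyond the grass layer, then treat the farthest
# remaining point as the soil return. All numbers here are illustrative.
def soil_range(points, min_range_m):
    """points: list of (range_m, snr) detections from one frame."""
    gated = [r for r, _snr in points if r >= min_range_m]
    return max(gated) if gated else None

# Grass returns near 1.4-1.6 m, soil near 2.0 m; gate a little past the grass.
frame = [(1.45, 12.0), (1.52, 9.5), (1.98, 20.0)]
print(soil_range(frame, min_range_m=1.6))  # -> 1.98
```

The same filtering can be done on the device side via the cfg (which avoids transferring the grass points at all); this host-side version is just to make the idea concrete.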
I'd like to output the object centroids, but also all those other detections (the red and green dots?) - i.e. the point cloud. Do both of these come as output automatically?
You could do this with the tracker, as the center of a track is the object centroid, and it also outputs the point cloud. However, I don't think this would work for your use case, as the tracker is meant for tracking moving objects and would be of no use for the ground.
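Since the tracker targets moving objects, one host-side alternative for a static scene (a sketch of my own, not part of the TI demo) is to compute the centroid of the gated point cloud yourself:

```python
# Mean position of the detected points -- a simple stand-in for the tracker's
# centroid when the target (the ground) is static.
def centroid(points):
    """points: list of (x, y, z) tuples in metres; returns their mean, or None."""
    if not points:
        return None
    n = len(points)
    return tuple(sum(axis) / n for axis in zip(*points))

# Three gated ground detections clustered around 2 m below the sensor.
pts = [(0.0, 0.1, 1.9), (0.1, -0.1, 2.0), (-0.1, 0.0, 2.1)]
print(centroid(pts))
```

Averaging over a few frames before taking the centroid would also smooth out frame-to-frame jitter in the soil estimate.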
By the way, for my use case, what do you think about playing with the SNR threshold? Or perhaps the cluster size?
I don't think it would help differentiate between the grass and the ground, as the SNR might be similar, but you can experiment with these parameters to see if you are able to distinguish between the two.
One more thing - do I NEED to use CCS, or anything other than just Linux + the SDK, in order to run the board and get data out, like object centroids?
Makefiles are provided to compile the projects, but CCS simplifies a lot of the process for you. There are Linux builds of CCS if you have concerns about running on an OS other than Windows.
Sincerely,
Santosh
Thanks for the info
So what is the way to run an application on a Raspberry Pi? Can you break it down for me, please?
My understanding is:
1. Configure with CCS
2. Compile into a makefile
3. Run the makefile on Linux/RPi
Thanks,
Fred
Hello Fred.
From my understanding, if you are connecting an IWR device to the Raspberry Pi, you should follow the same flow as seen in the getting started guide, which I have linked here. Please follow these steps and let me know if you have any questions.
Sincerely,
Santosh
Hey Santosh,
So in the set of docs there is a reference to starting a GUI from Python.
There is also a reference to the TLV files that are created.
But there isn't a specific explanation anywhere of how to access these TLV files while the board is running - can you outline this process?
Thanks,
Fred
Hello Fred.
TLVs are not files; they are structs used to package the data that is sent from the device. You can find these TLVs being created and transmitted over UART in mmwDemo_TransmitProcessedOutputTask() in mmwave_demo.c/motion_detect.c, included with the mmw demos in the SDK.
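To make the type-length-value idea concrete, here is a sketch of walking the TLV section of one output frame on the host. This is an illustration under stated assumptions, not TI's parser: it assumes each TLV is a little-endian uint32 type, a uint32 payload length, then the payload, and it deliberately skips the frame header, whose layout (and whether the length field covers the TLV header itself) varies between SDK versions - check the output-format section of your demo's documentation.

```python
import struct

def parse_tlvs(buf):
    """Split a bytes buffer of back-to-back TLVs into (type, payload) pairs.
    Assumed layout per TLV: <uint32 type><uint32 payload_len><payload bytes>."""
    tlvs = []
    offset = 0
    while offset + 8 <= len(buf):
        tlv_type, tlv_len = struct.unpack_from("<II", buf, offset)
        offset += 8
        tlvs.append((tlv_type, buf[offset:offset + tlv_len]))
        offset += tlv_len
    return tlvs

# Example: two synthetic TLVs (type 1 with 4 payload bytes, type 7 with 2).
demo = struct.pack("<II", 1, 4) + b"\x01\x02\x03\x04" + struct.pack("<II", 7, 2) + b"ab"
print(parse_tlvs(demo))  # -> [(1, b'\x01\x02\x03\x04'), (7, b'ab')]
```

In practice you would feed this from the raw byte stream read off the data UART after locating the frame's magic word and stepping past the frame header.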
Sincerely,
Santosh