Automotive Toolbox: mmwave_automotive_toolbox_3_6_0
Hi,
I was going through the Multi-Gesture demo & had a few queries regarding the ANN & radar configurations.
1. Documentation:
I went through the People Counting and OOB demos; both have detailed documentation covering the output TLV formats and the data processing chain.
For the Multi-Gesture demo, I could only find the User Guide explaining how to run the demo, plus comments in the source code.
Is there more detailed documentation for the Multi-Gesture demo, similar to the People Counting and OOB demos, that we can refer to? (In particular, documentation explaining how the features such as weighted range and weighted Doppler are calculated and interpreted.)
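For reference, here is my current understanding of what "weighted range" and "weighted Doppler" might mean: an energy-weighted centroid over the range-Doppler map. This is my assumption, not taken from TI documentation; the function name and map shapes below are made up for illustration:

```python
import numpy as np

def weighted_features(rd_map):
    """Energy-weighted centroid features over a range-Doppler map.

    rd_map: 2-D array of detection magnitudes (range bins x Doppler bins).
    Returns (weighted range bin, weighted Doppler bin).
    """
    power = np.abs(rd_map) ** 2
    total = power.sum()
    r_idx = np.arange(rd_map.shape[0])[:, None]   # range-bin indices
    d_idx = np.arange(rd_map.shape[1])[None, :]   # Doppler-bin indices
    wt_range = (r_idx * power).sum() / total
    wt_doppler = (d_idx * power).sum() / total
    return wt_range, wt_doppler

# A single strong return at range bin 5, Doppler bin 2
rd = np.zeros((16, 8))
rd[5, 2] = 1.0
print(weighted_features(rd))  # -> (5.0, 2.0)
```

If the demo computes these features differently (e.g. over detected points rather than the full map), a pointer to the exact definition would help.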
2. ANN
The Multi-Gesture demo uses an ANN for gesture detection. The currently detected gestures do not satisfy our use case, and we wish to retrain the network with our own training data.
a) Is there any information available regarding the training of the ANN?
For example, the number of training samples used per gesture, the number of epochs, etc. This would give us useful insight into how we might need to train the network.
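To make the question concrete, the kind of retraining setup we have in mind is sketched below. Everything here is a guess on our side: the input size, class count, epoch count, and the single-layer softmax stand-in for the demo's actual ANN are all assumptions, with random data standing in for recorded feature logs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed shapes: a flattened window of feature vectors per gesture sample
# (e.g. 10 features x 20 frames = 200 inputs) and a handful of gesture classes.
N_FEAT, N_CLASS, N_SAMP = 200, 4, 400

# Placeholder data: random class-dependent clusters standing in for real captures.
y = rng.integers(0, N_CLASS, size=N_SAMP)
X = rng.normal(size=(N_SAMP, N_FEAT))
X += np.eye(N_CLASS)[y] @ rng.normal(size=(N_CLASS, N_FEAT))

W = np.zeros((N_FEAT, N_CLASS))
b = np.zeros(N_CLASS)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

onehot = np.eye(N_CLASS)[y]
for epoch in range(200):                        # epoch count is a guess
    p = softmax(X @ W + b)
    W -= 0.1 * X.T @ (p - onehot) / N_SAMP      # cross-entropy gradient step
    b -= 0.1 * (p - onehot).mean(axis=0)

acc = (softmax(X @ W + b).argmax(axis=1) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

Knowing the real input layout, network topology, and sample counts TI used would let us replace these placeholders with something matching the demo's weights file.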
b) The current demo does not work as expected at our required range (0 - 3 cm). The ANN training samples (containing feature data) will be collected with gestures performed in the 0 - 3 cm range.
As per my understanding, the chirp config used for this demo is fixed. Can the chirp config be changed so that the demo gives better output at closer range?
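For context, my back-of-the-envelope reasoning on why 0 - 3 cm is hard: with the standard FMCW relation dR = c / (2*B), even the maximum chirp bandwidth gives a range bin coarser than our whole gesture zone, so all targets land in the first bin or two. The bandwidth values below are just examples:

```python
# Range-bin size for a given chirp bandwidth: dR = c / (2 * B)
c = 3e8  # speed of light, m/s
for B_GHz in (1, 2, 4):
    dR_cm = c / (2 * B_GHz * 1e9) * 100
    print(f"B = {B_GHz} GHz -> range resolution ~ {dR_cm:.2f} cm")
# With 4 GHz bandwidth, dR ~ 3.75 cm, i.e. the entire 0-3 cm zone
# falls inside a single range bin.
```

So my question is really whether the demo's feature extraction can still discriminate gestures at that range (e.g. relying on Doppler and angle rather than range), or whether a different chirp config would help.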
Thanks