I am curious about the statistics of a measurement from the radar, in particular range and speed.
Taking range as an example, I know one can calibrate out the range bias and calculate the range resolution from the chirp parameters. Suppose a range measurement is given by
measured_range = range + bias + noise
Calibration should ensure that the bias is relatively small. The question, then, is how to describe the uncertainty (i.e. the noise, or precision). The range resolution tells us the minimum separation at which two objects show up as independent peaks in the range FFT.
What about a single object: can we apply a similar train of thought? Could we suppose that the uncertainty about this single peak is ± half the range resolution? Is that true?
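To make "precision" concrete, here is a minimal sketch of how I would characterize bias and spread empirically, assuming the additive model above and repeated measurements of a target at a known reference distance (the distance, bias and noise level below are made-up numbers, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical numbers, only for illustration
true_range_m = 5.000    # known reference distance (e.g. a corner reflector)
bias_m       = 0.020    # fixed offset that calibration should remove
noise_std_m  = 0.010    # random per-measurement error

# Simulate repeated measurements of the model: measured_range = range + bias + noise
measured = true_range_m + bias_m + rng.normal(0.0, noise_std_m, size=1000)

# Bias = mean error against the known reference; precision = spread of that error
print("estimated bias [m]:", measured.mean() - true_range_m)
print("estimated std  [m]:", measured.std(ddof=1))
```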
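One way to sanity-check that guess: if the only error were snapping the peak to the nearest FFT bin, the residual would be uniform within ± half a bin, and that distribution has a standard deviation of bin / sqrt(12), i.e. about 0.29 bins. A sketch of just that quantization contribution (using the 4.883 cm interbin figure mentioned further down as an example bin width; real measurements will have other error sources on top):

```python
import numpy as np

bin_width_m = 0.04883   # example bin width (the "range interbin resolution" below)

# If the reported range were just the centre of the nearest bin, the residual
# would be uniform on [-bin/2, +bin/2]; that distribution has std = bin / sqrt(12).
analytic_std_m = bin_width_m / np.sqrt(12)

# Numerical check of the same statement
rng = np.random.default_rng(1)
residuals = rng.uniform(-bin_width_m / 2, bin_width_m / 2, size=100_000)

print("analytic std  [m]:", analytic_std_m)     # ~0.0141
print("simulated std [m]:", residuals.std())    # ~0.0141
```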
Another thought I had was that the variance could be range-dependent: the farther away an object is, the greater the time offset between the Tx and Rx chirps, so there is less overlap and a larger portion of the chirp has to be thrown away. This seems unlikely to matter, though. Looking at the low-range default from https://dev.ti.com/gallery/view/mmwave/mmWaveSensingEstimator/ver/2.0.0/ we have the following givens:
- max range 10m
- ramp end time 49 us
The one-way travel time for 10 m is about 0.03 us (round trip about 0.07 us), so even at the maximum range this is a tiny fraction of the 49 us ramp and there is hardly an effect.
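For reference, the arithmetic behind that claim (note the beat frequency depends on the round-trip delay, which is still tiny compared to the ramp):

```python
C_MPS = 299_792_458.0   # speed of light

max_range_m = 10.0
ramp_end_us = 49.0

one_way_us    = max_range_m / C_MPS * 1e6       # ~0.033 us
round_trip_us = 2 * max_range_m / C_MPS * 1e6   # ~0.067 us

print(f"one-way delay   : {one_way_us:.3f} us")
print(f"round-trip delay: {round_trip_us:.3f} us")
print(f"lost fraction of the {ramp_end_us:.0f} us ramp: {round_trip_us / ramp_end_us:.4%}")
```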
Another parameter specified there is the number of range bins (256). Given that these bins represent 0 to the max range, one could perhaps say that the accuracy is locked to the bin size, in this case 10 m over 256 bins -> 3.9 cm, which somehow is not the same as the reported "range interbin resolution" (4.883 cm).
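The arithmetic behind that comparison is below; the observation that 4.883 cm times 256 comes out to roughly 12.5 m is mine, and the idea that the FFT bins might span more than the configured 10 m (e.g. because of how the sampled IF bandwidth maps onto bins) is only a guess:

```python
n_bins         = 256
max_range_m    = 10.0
interbin_res_m = 0.04883   # value reported by the sensing estimator

print("max range / bins    [m]:", max_range_m / n_bins)      # 0.0390625
print("interbin res * bins [m]:", interbin_res_m * n_bins)   # ~12.5
```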
In conclusion, I have some doubts about all of this, likely in part due to an incomplete understanding of the overall processing pipeline, not to mention how velocity plays into it.