Other Parts Discussed in Thread: TIDEP-01012
TI user guides suggest performing frequency calibration of multi-chip cascaded radar systems to compensate for mismatches in the transmit frequency and the chirp slope across devices. The correction is to be applied to the time-domain/ADC data prior to phase calibration and beamforming. The correction equation requires experimental measurement of the frequency and chirp-rate variations, possibly separately for different mmWave operating frequencies.
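To make my question concrete, my current reading of the correction is sketched below (numpy, my own notation and assumptions, not taken from the TI code): delta_f and delta_slope would be the measured per-channel frequency and slope deviations relative to a reference channel, and the correction removes the corresponding phase error from the raw ADC samples. Please correct me if this is not the intended form.

```python
import numpy as np

def correct_adc(adc, delta_f, delta_slope, fs):
    """Apply a per-channel frequency/slope correction to raw ADC samples.

    adc         : complex ADC samples of one chirp for one channel
    delta_f     : measured frequency deviation of this channel [Hz] (assumed input)
    delta_slope : measured chirp-slope deviation of this channel [Hz/s] (assumed input)
    fs          : ADC sampling rate [Hz]
    """
    t = np.arange(len(adc)) / fs                        # fast-time axis [s]
    # Phase error accumulated by the frequency offset and the slope offset
    phase_err = 2 * np.pi * (delta_f * t + 0.5 * delta_slope * t**2)
    return adc * np.exp(-1j * phase_err)                # remove the error
```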
It is not clear what the origin of the TX frequency and chirp-rate differences can be, since all cascaded chips are driven by the same 20 GHz master chirp and x4 multipliers. Is it because the mmWave frequencies are not simple harmonics and each chip has its own frequency control loop? That would explain some frequency difference, but not a difference in the mean slope, although the slope can vary slightly over the duration of the chirp. Please clarify, since e.g. the TI Signal Processing User Guide (4-chip cascade), Fig. 14, shows experimental data suggesting that the variations are real.
Please describe in some detail how the frequency variation among the virtual receive (VRx) channels is measured. Does it require transmitting single tones and using an external mmWave spectrum analyzer? The user guide mentioned above shows an FFT in the range domain, but that would require transmitting chirped signals and relates to the chirp rate rather than to the frequency, as the Fig. 14 description suggests. In addition, some of the variation in peak location could be attributed to the range differences between the target and the individual VRx channel positions if the target is too close. Please clarify; a tutorial would be helpful.
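To illustrate the near-field concern, here is a rough back-of-the-envelope check (my own illustrative numbers, not from the user guide): the geometric path difference across the virtual array for a boresight target at a given range can be compared against the range-bin size to see whether it could explain part of the peak spread.

```python
import numpy as np

# Rough estimate of the geometric range spread across the virtual array
# for a close target (illustrative numbers only, not from the TI doc).
c = 3e8                     # speed of light [m/s]
R = 5.0                     # target range [m] (assumed)
aperture = 0.10             # virtual-array extent [m] (assumed)
bandwidth = 4e9             # chirp bandwidth [Hz] (assumed)

# Worst-case one-way path difference between array center and edge, boresight target
dR = np.hypot(R, aperture / 2) - R
range_res = c / (2 * bandwidth)             # range-bin size [m]
print(f"path difference = {dR*1e3:.2f} mm = {dR/range_res:.3f} range bins")
```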
For the same reason as above, it is not clear how the chirp rate can differ among chips. An explanation and a guide on how to estimate the rate variations would be helpful.
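My guess at a measurement procedure (please confirm or correct) is to place a corner reflector at a known range and compare the range-FFT peak, i.e. the beat frequency, across device/TX combinations; since for a common target f_beat = 2*S*R/c, a beat-frequency difference would map to a slope difference. A sketch of that idea, under my own assumptions:

```python
import numpy as np

def estimate_slope_offset(adc_ref, adc_dev, fs, R_target):
    """Guess at slope-mismatch estimation from a corner reflector at a known range.

    adc_ref, adc_dev : one chirp of complex ADC data from a reference and a test device
    fs               : ADC sampling rate [Hz]
    R_target         : known corner-reflector range [m]
    """
    c = 3e8
    n = len(adc_ref)
    freqs = np.fft.fftfreq(n, d=1 / fs)
    win = np.hanning(n)
    # Beat frequency = location of the range-FFT peak for each device
    fb_ref = freqs[np.argmax(np.abs(np.fft.fft(adc_ref * win)))]
    fb_dev = freqs[np.argmax(np.abs(np.fft.fft(adc_dev * win)))]
    # For a common target, f_beat = 2*S*R/c, so the beat-frequency difference
    # maps to a slope difference:
    return (fb_dev - fb_ref) * c / (2 * R_target)
```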
It also appears that the TI team recommends generating multiple calibration matrices for different beam angles in order to improve beam quality. Is that the case?
Thank you in advance.