Hello,
The calibration document gives no detailed description of the start frequency, bandwidth, and slope to be used during calibration, so I have the following questions:
1. Is a constant-frequency (zero-slope) chirp required during calibration?
2. How does the choice of center frequency during calibration affect PS INL?
3. Since the calibration measurements themselves are subject to jitter, does the calibration need to be run N times and the results combined to estimate the expectation?
4. If N runs are required, what is a reasonable range for N?
5. Is the measurement noise Gaussian, so that the sample mean can be used in place of the expectation? (See the averaging sketch after this list.)
6. In the calibration/restore flow for the TX phase shifter, the document does not ask us to apply any INL correction to the measured PS Cal Result Array DegreeTXm (0 to 63) values; it only asks us to save them and restore them to the AWR device (see the save/restore sketch below). How, then, does the AWR device correct the INL introduced by the phase shifter? Going by this flow, the device could apparently complete the whole process automatically, so why do we need to implement it manually?
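For questions 3-5, this is the kind of averaging I have in mind. It is only a sketch: measure_ps_phase_deg() is a hypothetical stand-in for one calibration measurement of a single phase-shifter code, and N_RUNS = 16 is an arbitrary placeholder for the N I am asking about, not a value from the document.

```c
#include <stdio.h>
#include <stdlib.h>

#define PS_CODES 64   /* PS Cal Result Array DegreeTXm covers codes 0..63 */
#define N_RUNS   16   /* hypothetical repetition count -- the N in question 4 */

/* Hypothetical stand-in for one noisy calibration measurement of a single
 * phase-shifter code: ideal phase step plus zero-mean noise. The real
 * value would come from the calibration procedure itself. */
static double measure_ps_phase_deg(int tx, int code)
{
    (void)tx;
    double ideal = code * (360.0 / PS_CODES);
    double noise = ((double)rand() / RAND_MAX) - 0.5;   /* roughly +/-0.5 deg */
    return ideal + noise;
}

int main(void)
{
    double avg_deg[PS_CODES];

    for (int code = 0; code < PS_CODES; code++) {
        double sum = 0.0;
        /* If the noise is zero-mean Gaussian, the sample mean over N_RUNS
         * runs is an unbiased estimate of the expectation, and its standard
         * error shrinks by a factor of sqrt(N_RUNS). */
        for (int run = 0; run < N_RUNS; run++)
            sum += measure_ps_phase_deg(0, code);
        avg_deg[code] = sum / N_RUNS;
    }

    printf("code 1 averaged phase: %.3f deg\n", avg_deg[1]);
    return 0;
}
```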
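And for question 6, this is how I currently understand the save/restore step: the measured values are stored and written back untouched, with no INL correction applied anywhere in my code. The file name and the restore_to_awr() hook below are my own placeholders, not mmWave SDK calls.

```c
#include <stdio.h>

#define PS_CODES 64

/* Stub for whatever call actually writes the array back to the AWR
 * device; this is my assumption, not a documented API. */
static int restore_to_awr(const double deg[PS_CODES])
{
    printf("restoring %d entries, code 0 = %.3f deg\n", PS_CODES, deg[0]);
    return 0;
}

/* Save the measured array verbatim -- no INL correction applied. */
static int save_ps_cal(const double deg[PS_CODES], const char *path)
{
    FILE *f = fopen(path, "wb");
    if (!f) return -1;
    size_t n = fwrite(deg, sizeof deg[0], PS_CODES, f);
    fclose(f);
    return (n == PS_CODES) ? 0 : -1;
}

/* Read the array back and push it to the device, again verbatim. */
static int restore_ps_cal(const char *path)
{
    double deg[PS_CODES];
    FILE *f = fopen(path, "rb");
    if (!f) return -1;
    size_t n = fread(deg, sizeof deg[0], PS_CODES, f);
    fclose(f);
    if (n != PS_CODES) return -1;
    return restore_to_awr(deg);
}

int main(void)
{
    double measured[PS_CODES] = { 0 };   /* placeholder measurement data */
    if (save_ps_cal(measured, "ps_cal_tx0.bin") == 0)
        restore_ps_cal("ps_cal_tx0.bin");
    return 0;
}
```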