Other Parts Discussed in Thread: UNIFLASH
Hi TI experts,
I have the following questions regarding calibration on the IWRL6432:
- Is ATE calibration required for the IWRL6432 (FCCSP), or has TI already performed it and eFused the results into the chip? How can we tell whether a device is a production sample?
- Is factory calibration (factoryCalibCfg) required during production? Or can we leave saveEnable set, which will result in calibration running on every reboot? What are the pros and cons of each approach?
- Should factory calibration be done at the PCBA (board) level or on the assembled device? Besides temperature and reflective objects, is there anything else we should pay attention to in the test environment?
- For range bias and phase calibration, I've read "[FAQ] IWRL6432: IWRL6432 compRangeBiasAndRxChanPhase calibration" and some other threads. How can we determine whether the output (compRangeBiasAndRxChanPhase) values are good? For example, I averaged 200 output values; the standard deviation of each parameter in compRangeBiasAndRxChanPhase is:
0.0003 | 0.0057 | 0.0045 | 0.0057 | 0.0056 | 0.0114 | 0.0050 | 0.0069 | 0.0082 | 0.0060 | 0.0095 | 0.0018 | 0.0141 |
Also, for the Boost EVM, the default "compRangeBiasAndRxChanPhase 0.0 1.00000 0.00000 -1.00000 0.00000 1.00000 0.00000 -1.00000 0.00000 1.00000 0.00000 -1.00000 0.00000" includes the phase rotation of the RF front end. Should I replace it with the averaged compRangeBiasAndRxChanPhase values I calculated, or do I need to combine them with the default values?
- Should range bias and phase calibration be done per device or per design? If per device, how do we save the values to the device during production?
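For reference, this is roughly how I computed the per-parameter average and standard deviation above. It is only a sketch in Python/NumPy with placeholder random data; in practice the 200 samples would be parsed from the logged "compRangeBiasAndRxChanPhase" lines of the demo CLI output (the parsing step and the sample values here are assumptions, not part of the actual measurement):

```python
import numpy as np

# Each compRangeBiasAndRxChanPhase line carries 13 values:
# 1 range bias (m) followed by 12 Rx-channel I/Q compensation terms.
# Placeholder data standing in for the 200 captured calibration outputs.
rng = np.random.default_rng(0)
samples = rng.normal(loc=0.0, scale=0.01, size=(200, 13))

mean = samples.mean(axis=0)  # averaged calibration vector (candidate config value)
std = samples.std(axis=0)    # per-parameter spread across the 200 runs

print("mean:", np.round(mean, 5))
print("std: ", np.round(std, 5))
```

The standard deviations listed above came from this kind of column-wise statistic over the repeated calibration runs.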