Other Parts Discussed in Thread: INA240
The SAR ADC seminar documents and other TI sources seem to suggest that the analog potentials being measured have no direct impact on the MCU, regardless of the source impedance Rs.
Yet when the ANx input monitors high potentials through resistive dividers, or through sensors placed near such potentials, the MCU can be exposed to line transients. Under those conditions, wouldn't a higher Rs be the better choice? A source impedance above 1.5 MΩ at the ANx input seems to provide a degree of isolation from high potentials and transient sources. That has long been the rule of thumb for analog designs around high potentials, especially in TV circuits where so many different frequencies can converge on the signal.
Wouldn't lowering Rs below 500 Ω then expose the SAR ADC to even greater potentials or transients, even when TVS diodes and other EMI filtering protect the ANx input? Why is there no discussion of this topic, while TIDA engineers continue to publish high-voltage inverter designs where Rs is kept high (9.1 kΩ), seemingly on purpose? Does the settling time, or the sample-and-hold time (TSN) relative to C_ADC, have issues with high-impedance sources? Perhaps a one-size-fits-all approach does not work well for every analog-to-digital conversion; for TI to ignore the obvious seems sketchy at best.
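To make the settling question concrete, here is a rough back-of-envelope check I ran. The switch resistance, sampling capacitance, and resolution are assumed values for illustration, not taken from any particular TI datasheet, but the single-pole RC model is the usual one for a SAR input stage:

import math

def min_acquisition_time(r_source, r_switch, c_sample, n_bits):
    # Time for the sampling cap to charge to within 1/2 LSB of Vin,
    # assuming a simple single-pole RC path (Rs + Rsw into C_ADC).
    tau = (r_source + r_switch) * c_sample      # RC time constant
    k = (n_bits + 1) * math.log(2)              # time constants needed for 1/2 LSB
    return k * tau

R_SW  = 500.0       # ohms, assumed internal sampling-switch resistance
C_ADC = 12.5e-12    # farads, assumed sampling capacitance
N     = 12          # assumed ADC resolution in bits

for rs in (100.0, 500.0, 9.1e3, 1.5e6):
    t = min_acquisition_time(rs, R_SW, C_ADC, N)
    print(f"Rs = {rs:>9.0f} ohm -> t_acq >= {t * 1e9:10.1f} ns")

With these assumed numbers the required acquisition window grows from tens of nanoseconds at Rs = 100 Ω to well over 100 µs at Rs = 1.5 MΩ, which is presumably why the app notes push Rs down unless an external charge-reservoir capacitor or a long sample window is used.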
Are there trade-offs that have not been documented for the SAR ADC when indirectly monitoring high-voltage potentials via ANx input dividers? It would seem so.
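For reference, here is how I estimate the source impedance the ADC actually sees behind a high-voltage sense divider. The resistor values and bus voltage are purely illustrative, not taken from any TIDA design, but they show how a divider sized for low dissipation naturally lands in the ~10 kΩ Thevenin range:

def divider(v_bus, r_top, r_bottom):
    # Divider output voltage and the Thevenin source impedance seen by the ANx pin.
    v_out = v_bus * r_bottom / (r_top + r_bottom)
    r_thev = (r_top * r_bottom) / (r_top + r_bottom)
    return v_out, r_thev

v, r = divider(v_bus=400.0, r_top=1.0e6, r_bottom=10.0e3)
print(f"Vout = {v:.2f} V, Thevenin Rs = {r / 1e3:.2f} kOhm")

So a 1 MΩ / 10 kΩ divider on a 400 V bus presents roughly 9.9 kΩ to the pin, close to the 9.1 kΩ seen in the TIDA designs, and the usual fix for the sampling side is a capacitor at the pin acting as a local charge reservoir.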