
MSP430F5329: Sample Time for 12-bit ADC

Part Number: MSP430F5329

I am looking at SLAU208P (revised October 2016), ADC12_A chapter, Section 28.2.5.3, Sample Time Considerations, page 737. It says:

The resistance of the source R_S and R_I affect t_sample. The following equation can be used to calculate the minimum sampling time t_sample for an n-bit conversion, where n equals the bits of resolution:

t_sample > (R_S + R_I) × ln(2^(n+1)) × C_I + 800 ns

Substituting the values for R_I and C_I given above, the equation becomes:

t_sample > (R_S + 1.8 kΩ) × ln(2^(n+1)) × 25 pF + 800 ns
For example, for 12-bit resolution, if RS is 10 kΩ, tsample must be greater than 3.46 µs.

How do you get from
(10 kΩ + 1.8 kΩ) × ln(2^(13)) × 25 pF + 800 ns to a value greater than 3.46 µs?

If I do not convert units, I get 11.8 kΩ × 2.564949 × 25 pF = 756.66, and I cannot see any way that adding 800 ns to that indicates it must be greater than 3.46 µs.

Normally I would think you go to base units, so 11800 Ω × 2.564949 × 0.000000025 F = 0.000757 seconds; then adding 800 ns gives 0.0007578 s, or 757.8 µs.
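For reference, the quoted formula can be checked numerically. A minimal sketch (values taken from the quoted user guide text), assuming the "2n+1" in the guide is the typeset form of 2^(n+1), so the log term is ln(2^(n+1)) = (n+1) × ln 2 ≈ 9.01 for n = 12, rather than ≈ 2.56:

```python
import math

# Evaluate SLAU208P's minimum sample-time bound for n = 12 bits, R_S = 10 kOhm.
R_S = 10e3       # external source resistance (ohms)
R_I = 1.8e3      # ADC internal input resistance (ohms)
C_I = 25e-12     # ADC internal sampling capacitance (farads)
n = 12           # bits of resolution

# The exponent applies to the whole (n + 1): ln(2^(n+1)) = (n + 1) * ln(2)
t_sample = (R_S + R_I) * math.log(2 ** (n + 1)) * C_I + 800e-9

print(f"t_sample > {t_sample * 1e6:.2f} us")   # t_sample > 3.46 us
```

With ln(2^13) ≈ 9.01, the RC term comes out to about 2.66 µs, and adding 800 ns gives the 3.46 µs figure from the guide.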

I have so many problems with equations in TI documentation. One more intermediate step would clear up a lot, but one is never given.

What am I missing? How does this work? 

Since I am asking: in my case I have about 50 V across a resistor divider, 237.4 kΩ on top and 10 kΩ on the bottom (to ground), with a 0.1 µF capacitor in parallel with the bottom 10 kΩ. I would guess the 0.1 µF would swamp the 25 pF on the other side of the 1.8 kΩ resistor, so I should model the source as just the 1.8 kΩ resistor. Is that correct? But I still need to know how the equation works.
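That intuition can be roughly sketched in a few lines (values assumed from this post, not from TI documentation): if the 0.1 µF cap acts as the voltage source during the sample window, R_S in the formula is effectively ~0 and only R_I remains; what is left to check is how much the cap droops while it charge-shares with C_I, and the Thevenin resistance that recharges it between conversions:

```python
import math

# Assumed values: divider and cap from this post, ADC constants from SLAU208P.
n = 12
C_I = 25e-12            # ADC internal sampling capacitance
R_I = 1.8e3             # ADC internal input resistance
C_ext = 0.1e-6          # external cap across the bottom 10 kOhm leg
R_top, R_bot = 237.4e3, 10e3

# If C_ext >> C_I, the cap is the source during sampling, so R_S ~ 0:
t_sample = (0 + R_I) * math.log(2 ** (n + 1)) * C_I + 800e-9
print(f"t_sample with R_S~0: {t_sample * 1e6:.2f} us")

# Worst-case charge-sharing sag on C_ext when a fully discharged C_I connects:
droop = C_I / (C_ext + C_I)
half_lsb = 1 / 2 ** (n + 1)
print(f"droop = {droop:.2e}, half LSB = {half_lsb:.2e}")

# Thevenin resistance of the divider, which recharges C_ext between samples:
R_thev = R_top * R_bot / (R_top + R_bot)   # ~9.6 kOhm
```

Under these assumptions the minimum sample time drops to about 1.21 µs, but note the charge-sharing droop (≈2.5e-4) comes out to roughly twice the half-LSB step at 12 bits (≈1.2e-4), so whether 0.1 µF fully "swamps" C_I depends on your error budget and on how often you sample relative to the divider's recharge time constant.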


Kip
