I am using a TMCS1101A4B in this way:
The 0-3.3V signal into the ADC looks like this with no load:
And like this with a 44mA load (24VAC @ 60Hz):
With no load, the output sits ~85 mV below the expected 1650 mV (Vs/2) zero-current level.
- I could simply subtract that offset in software, but since I don't know where it comes from, I worry it will drift over time.
- Should I have the device self-calibrate at startup, before any load is present? Even then the offset could drift later; the best I could do is find a no-load window once a day (a sketch of what I mean is below this list).
- Is such adaptive calibration typically needed, or does this point to a problem in my implementation?
- Since I'm computing RMS, any DC offset adds directly into the result and dominates at low currents.
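To make the self-calibration idea concrete, here's a minimal sketch of what I'm picturing, assuming an Arduino-style analogRead() with a 12-bit ADC and a 3.3 V reference; the pin number, sample count, and sample rate are placeholders, not values from my actual build:

```
// Startup offset calibration, assuming no load is present at boot.
const int   ADC_PIN    = 34;      // hypothetical analog input pin
const float VREF_MV    = 3300.0f; // ADC reference in mV
const float ADC_COUNTS = 4095.0f; // 12-bit ADC full scale

float zeroOffsetMv = 1650.0f;     // fallback: ideal Vs/2 midpoint

void calibrateZeroOffset() {
  const int N = 2000;             // ~1 s of samples at ~2 kHz, i.e. ~60 cycles of 60 Hz
  float sum = 0.0f;
  for (int i = 0; i < N; i++) {
    sum += analogRead(ADC_PIN) * VREF_MV / ADC_COUNTS;
    delayMicroseconds(500);
  }
  zeroOffsetMv = sum / N;         // measured zero-current output, offset included
}
```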
Wait, what if I used the maximum peak-to-peak swing over a 5-10 second window instead? Then no offset calibration is needed at all. This seems to be working reasonably well:
float mA = mV / 400.0f * 1000.0f; // TMCS1101A4B sensitivity: 400 mV/A
With no load I'm seeing 4 to 6 mA (1.61 to 2.42 mV).
With a 44 mA load (24 VAC across a 546 Ω resistor) I'm seeing 38.27 to 44.31 mA (15.31 to 17.72 mV). This is much better than the other approaches I've tried. I'd like it to be more accurate, but it's sufficient for the range I care about: 150 to 400 mA (though I haven't tested that high yet).
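In rough terms, the measurement loop looks like this (a simplified sketch, reusing the hypothetical ADC constants from the calibration example above; the 2·sqrt(2) factor assumes the load current is sinusoidal):

```
#include <math.h>

// Track min/max over a window (5-10 s), take the swing, convert to RMS mA.
float measureCurrentMa(uint32_t windowMs) {
  float minMv = 1e9f;
  float maxMv = -1e9f;
  uint32_t start = millis();
  while (millis() - start < windowMs) {
    float mv = analogRead(ADC_PIN) * VREF_MV / ADC_COUNTS;
    if (mv < minMv) minMv = mv;
    if (mv > maxMv) maxMv = mv;
  }
  float mvPp  = maxMv - minMv;                // any DC offset cancels out here
  float mvRms = mvPp / (2.0f * sqrtf(2.0f));  // Vpp -> Vrms for a sinusoid
  return mvRms / 400.0f * 1000.0f;            // 400 mV/A sensitivity -> mA
}
```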
Two questions then:
1) Does the approach seem good?
2) Do you have any suggestions for improvements? I could lower the RC filter cutoff, e.g. 1 µF (159 Hz) instead of 0.33 µF (482 Hz), and still stay above my target 60 Hz.
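(For reference, those cutoffs assume a single-pole RC with R ≈ 1 kΩ, which is what the numbers imply: f_c = 1/(2πRC), so 0.33 µF gives ≈ 482 Hz and 1 µF gives ≈ 159 Hz. At 159 Hz a single pole already attenuates 60 Hz by roughly 6%, which I'd presumably need to account for in the conversion.)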