TMAG5170-Q1: Performance

Part Number: TMAG5170-Q1
Other Parts Discussed in Thread: TMAG5170

Hi all,

I have taken measurements on the TMAG5170_EVM to evaluate a high-precision measurement system.

For 50 successive samples on the X axis (+/-100 mT range, 32x average conversion, X-axis acquisition only, Ta = 20°C, no magnet in proximity), I imported the results into Excel.

The difference between the MAX and MIN samples is 85 uT. Excel calculates a standard deviation of 20 uT.
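The same statistics can be reproduced outside Excel. A minimal sketch with the standard library, using synthetic zero-mean Gaussian samples at the ~20 uT sigma described above (the data here is hypothetical, for illustration only):

```python
import random
import statistics

# Hypothetical illustration: 50 synthetic zero-mean Gaussian samples
# with the ~20 uT standard deviation observed on the EVM.
random.seed(0)
samples_uT = [random.gauss(0.0, 20.0) for _ in range(50)]

peak_to_peak_uT = max(samples_uT) - min(samples_uT)  # MAX - MIN spread
sigma_uT = statistics.pstdev(samples_uT)             # 1-sigma "noise"

print(f"peak-to-peak: {peak_to_peak_uT:.1f} uT, sigma: {sigma_uT:.1f} uT")
```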

1/ Does this mean that the 1-sigma "noise" is 20 uT in my case?

2/ How should I compare this standard deviation with the RMS (1 sigma) magnetic noise of +/-0.025 mT mentioned in the datasheet? For a worst-case analysis, should I multiply 0.025 by 6.6 (the state of the art suggests multiplying the RMS value by 6.6 to estimate the peak-to-peak value)?

3/ It seems that some people use 3 or 5 sigma (RMS or peak?) to state the overall precision of a system. What is your opinion?

4/ To understand how the TMAG works: is it correct that the internal ADC is 16-bit and that CONV_AVG averages samples?

5/ I noticed that the noise (difference between MAX and MIN samples) is about twice as large in the +/-25 mT range. Could this just be random chance? Indeed, I had expected the higher range to show more noise.

6/ I noticed that the noise seems to be the same with 1-axis or 3-axis measurements. Given how the TMAG operates, is this expected?

7/ Is the RMS (1 sigma) magnetic noise mentioned in the datasheet really independent of the range? If so, the higher ranges have a better ENOB. Furthermore, the TMAG5170-A2, which has a higher range and roughly the same noise, should be much better still. Can you confirm?

Sorry for the long list of questions; I hope you can answer them, as this TMAG looks very promising.

Best regards.

  • Remy,

    Thank you for your efforts here and for reaching out with your questions.  In general, std deviation and RMS values are very similar.  The primary difference is that the standard deviation is taken relative to the mean, so it really captures only the AC component of the signal.  The RMS value would be AC + DC, and in the case where there is no DC component the two are equivalent. So, by comparison, your 20 uT is better than the expected 25 uT for the X channel.

    In response to each item you listed:

    1) Yes, your 1 sigma noise value here is 20 uT.  As mentioned above, we anticipate this to typically measure about 25 uT.

    2) Since noise is an inherently AC-type measurement (we are not interested in a fixed input signal), there should be virtually no DC component in the specification (that would be represented as offset instead), so std. deviation and RMS values can be treated equivalently here.  We don't have anything specifically published for the expected peak value. Multiplying the RMS value by 6.6 yields a peak-to-peak estimate spanning +/-3.3 sigma, and you would likely be hard pressed to find a data sample outside of that range.
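The 6.6 crest-factor rule of thumb can be checked directly against the Gaussian distribution. A small sketch (assuming zero-mean Gaussian noise; the datasheet's 0.025 mT RMS figure is taken from the question above):

```python
import math

def gaussian_coverage(k_sigma: float) -> float:
    """Fraction of Gaussian samples within +/- k_sigma of the mean."""
    return math.erf(k_sigma / math.sqrt(2.0))

# A peak-to-peak estimate of 6.6 x RMS corresponds to +/-3.3 sigma.
rms_mT = 0.025                      # datasheet 1-sigma noise
pk_pk_mT = 6.6 * rms_mT             # ~0.165 mT peak-to-peak estimate
coverage = gaussian_coverage(3.3)   # ~99.9%: only ~0.1% of samples fall outside

print(f"pk-pk estimate: {pk_pk_mT:.3f} mT, coverage: {coverage:.4%}")
```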

    3) Assuming a Gaussian distribution, 3 sigma covers about 99.7% of all data; 5 sigma covers about 99.9999%.  In your case you measured +/-42.5 uT, which falls just outside 2 sigma.  In practice, to confirm an estimate out to 5 sigma you would need a very large sample size before you could expect to see a value beyond it. Depending on how critical this value is in your system, you may want to guard-band for more than 3 sigma.  However, in most cases 3 sigma is probably sufficient.
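To see why confirming a 5-sigma bound needs so many samples, the expected number of conversions before one lands outside +/- k sigma can be estimated from the Gaussian tail (a sketch, again assuming purely Gaussian noise):

```python
import math

def outside_fraction(k_sigma: float) -> float:
    """Fraction of Gaussian samples falling outside +/- k_sigma of the mean."""
    return math.erfc(k_sigma / math.sqrt(2.0))

for k in (2, 3, 5):
    p_out = outside_fraction(k)
    # On average, 1/p_out samples are needed before one exceeds +/- k sigma.
    print(f"{k} sigma: coverage {1 - p_out:.5%}, ~1 in {1 / p_out:,.0f} samples outside")
```

At 5 sigma this works out to roughly one sample in 1.7 million, which is why a 50-sample run cannot validate it.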

    4) The internal ADC is 12-bit.  The 16-bit result is the product of oversampling (32x averaging).  This offers more granularity, but does impact the overall conversion rate. When averaging is off, the device yields only a 12-bit result.  The 16-bit result is not a true 16-bit resolution, however; achieving that would require more samples.  The theoretical ENOB for this device at 32x averaging is about 14-15 bits.
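The ENOB figure quoted above follows from the ideal oversampling relation, in which every 4x of averaging adds about half a bit per doubling (i.e. ~1 bit per 4x). A sketch of that arithmetic:

```python
import math

ADC_BITS = 12  # native ADC resolution, per the reply above

def bits_with_averaging(n_avg: int) -> float:
    """Ideal effective resolution with n_avg-sample averaging:
    0.5 bit gained per doubling of the sample count."""
    return ADC_BITS + 0.5 * math.log2(n_avg)

for n in (1, 4, 8, 16, 32):
    print(f"CONV_AVG = {n:2d}x -> ~{bits_with_averaging(n):.1f} effective bits")
```

At 32x this gives 12 + 2.5 = 14.5 bits, consistent with the "about 14-15" estimate; real-world ENOB will be somewhat lower once noise is included.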

    5) Right now, we have the noise spec listed as an input-referred value, which is fixed irrespective of the sensing range.  It may have just been a random case that you observed more noise in the 25 mT range, but we can only speculate.  The device is in an advance launch phase, and the fully released characterization results are not yet available.

    6) The same ADC performs the conversions for 1 or 3 axes.  Each channel is pipelined into the conversion cycle, and we expect this will not impact your observed noise.  However, due to the physical design of the device, we do anticipate lower noise on the Z channel than on X or Y.

    7) While the noise is currently represented as an input-referred value that is fixed across all sensitivity ranges, this does mean that the possible SNR for the +/-100 mT range is greater than for the +/-25 mT range. With a 12-bit conversion, the LSB in the 25 mT range is about 12 uT, which is less than the expected RMS noise, and this does impact the ENOB.
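The LSB-versus-noise comparison can be tabulated per range. A sketch assuming the TMAG5170-A1 ranges of +/-25/50/100 mT and the 25 uT RMS noise figure quoted earlier (check the datasheet for the exact values of your device variant):

```python
RMS_NOISE_uT = 25.0   # input-referred RMS noise from the datasheet (assumed)
ADC_CODES = 2 ** 12   # 12-bit conversion

for rng_mT in (25, 50, 100):              # assumed A1 sensing ranges
    span_uT = 2 * rng_mT * 1000.0         # full +/- span in uT
    lsb_uT = span_uT / ADC_CODES
    rel = "below" if lsb_uT < RMS_NOISE_uT else "above"
    print(f"+/-{rng_mT} mT range: LSB = {lsb_uT:.1f} uT ({rel} the RMS noise)")
```

Only in the +/-25 mT range does the LSB (about 12 uT) fall below the RMS noise, which is why that range is noise-limited rather than quantization-limited.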

    Thanks,

    Scott

  • Thank you, Scott, for this thorough and quick answer.

    Everything is very clear and much appreciated.

    Best regards.