TI-MAGNETIC-SENSE-SIMULATOR: Value mapping of sensor simulation.

Part Number: TI-MAGNETIC-SENSE-SIMULATOR

Hi,

In the simulation I am using the TMAG5273A1 sensor. The sensor's output value depends on the averaging and range settings entered in the simulation, but as I read the datasheet, this doesn't seem entirely correct.
When I use the ±40 mT range instead of the ±80 mT range, the output code is 2x higher, which is what I expect. But if I (in the ±40 mT range) go from 1-sample averaging to 8-sample averaging (or any other value), the code increases 16x. How come? Is this a difference between the simulation and the actual sensor, or am I misunderstanding it, and is the mapping from 12 to 16 bit only active when averaging is enabled?
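For reference, this is the arithmetic behind the 2x observation (a sketch only; the exact LSB values come from the TMAG5273 datasheet, and `lsb_mT` is a name I made up for illustration): for a fixed-width result register, halving the range halves the LSB size, so the same physical field produces a 2x larger code.

```python
def lsb_mT(range_mT, bits=16):
    # LSB size in mT for a signed result of the given bit width
    return range_mT / 2 ** (bits - 1)

field = 10.0  # mT, an example field value

code_80 = field / lsb_mT(80.0)   # code in the +/-80 mT range
code_40 = field / lsb_mT(40.0)   # code in the +/-40 mT range

print(code_40 / code_80)         # -> 2.0, the range effect I see
```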

Thank you!

  • I'm also curious whether the placement of the X, Y, and Z elements inside the sensor is taken into account in the simulation, or whether the simulation calculates the field at the center of the sensor.

  • Robin,

    This is a bit of a nuance regarding what is actually occurring in the device.  The resolution of the integrated ADC is 12 bits, but the result register is 16 bits wide.

    When operating in 1x conversion mode, the register will effectively report only a 12-bit code; the 4 LSBs should not contain any relevant information.  However, as soon as averaging is enabled, the result extends to a full 16-bit value, which is why you see the code grow by 2^4 = 16x.  At full 32x averaging the ENOB is about 14.5 bits.
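    A minimal sketch of that mapping (my own simplification, not the exact register layout; `code_to_mT` is a hypothetical helper, and the real conversion is defined in the TMAG5273 datasheet): with averaging, the full-scale code is a 16-bit signed value; in 1x mode only 12 significant bits are produced, so the code for the same field is 2^4 = 16x smaller.

    ```python
    def code_to_mT(raw_code, range_mT=40.0, averaging_enabled=True):
        """Convert a signed result code to millitesla.

        With averaging enabled the result register holds a full 16-bit
        signed value; in 1x conversion mode only 12 significant bits
        are produced, so the code is 16x smaller for the same field.
        """
        if averaging_enabled:
            full_scale = 2 ** 15   # 16-bit signed result
        else:
            full_scale = 2 ** 11   # 12-bit signed result
        return raw_code * range_mT / full_scale

    # Same physical field, two representations in the +/-40 mT range:
    print(code_to_mT(1024, 40.0, averaging_enabled=False))   # 20.0 mT
    print(code_to_mT(16384, 40.0, averaging_enabled=True))   # 20.0 mT, code is 16x larger
    ```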

    In TIMSS the physical location of the sensing elements is accounted for.  The three elements are reduced to a single point based on the location of the Z-axis element, so there may be a very small offset for the X and Y elements, though all three are placed in a cluster together.

    Thanks,

    Scott