So my understanding is that the axes use signed 8-bit values converted from the unsigned 12-bit ADC values. Is there any way to have the axes use signed 16-bit values instead?
That depends on what you mean.
If you mean converting the 8-bit result to 16 bits, that's trivial.
If you mean getting 16 bits of accuracy, that's probably not possible.
If you mean using the 12-bit converted value, that may take a little care but is probably possible (assuming the original is using close to the full scale of the A/D).
Robert
Thanks for the quick reply,
I have done both of these; here is the conversion code:
#define Convert16Bit(ui12Value) ((int16_t)((ui12Value)<<4))
but the range that is output is still 0-255.