TMS320F28335: Inaccurate conversion value

Part Number: TMS320F28335

Dear all:

Has anyone used fixed-point arithmetic in the IQ format? Why is the value I convert inaccurate?

For example, _IQ30(-1.903162068687379) gives -1.903162122.

According to the official documentation there should be 10 significant digits, but I only got 7.

Thank you!

  • Hi,

    Did you try _IQ30(-1.903162068) instead of the longer value above? The documentation mentions 9 digits after the decimal point, so it seems to be approximating the longer value somehow.

    Regards,
    Gautam
  • Dear Seven,

    The _IQ30 type conversion is a macro in IQmathLib.h which multiplies your variable by 2^30. You need to append an 'L' to your constant so the compiler uses a 64-bit intermediate data type. Without that you will lose precision. Try:

    _IQ30 (-1.903162068687379L)
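
    To see the difference, here is a minimal sketch. The macro definition shown is the usual shape in IQmathLib.h but may differ in your version; on the C28x compiler, double is 32 bits wide while long double is 64 bits:

        #include "IQmathLib.h"

        /* Assumed macro shape, for illustration only -- check your IQmathLib.h:
         *   #define _IQ30(A)  (long)((A) * 1073741824.0L)
         */
        _iq30 a = _IQ30(-1.903162068687379);   /* plain literal is a 32-bit double: 24-bit mantissa, digits lost */
        _iq30 b = _IQ30(-1.903162068687379L);  /* 'L' gives a 64-bit long double: full precision kept */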

    Regards,

    Richard

  • Hi Seven Han,

    You should specify what you are converting to and how. _IQ30 to float? Please say so next time.

    What you get converting _IQ30 to float is correct. _IQ30 is a 32-bit number and the float mantissa is only 24 bits wide! Let's calculate:

    _IQ30(-1.903162068687379) = integer(-1.903162068687379 * 2^30) = -0x79CD6846.
    Converting to float, these bits are normalized first: -0x79CD6846 << 1 = -0xF39AD08C. Then the 8 least significant bits are lost, and the higher-order 24-bit number is rounded to nearest: -0xF39AD08C -> -0xF39AD100. To complete the IQ30 -> float conversion we divide by 2^(30+1), where the +1 comes from the normalization step (<< 1). I get -1.90316212177276611328125, pretty close to what you see.
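
    The same rounding can be reproduced on a host PC with plain C, no IQmath needed (a sketch using standard types, not C28x code):

        #include <stdio.h>
        #include <stdint.h>

        int main(void)
        {
            int32_t raw = (int32_t)(-1.903162068687379 * 1073741824.0); /* = -0x79CD6846 */
            float   f   = (float)raw / 1073741824.0f;  /* the cast to float rounds the mantissa to 24 bits */

            printf("raw = %ld = -0x%lX\n", (long)raw, (unsigned long)-(long)raw);
            printf("f   = %.17f\n", (double)f);        /* -1.90316212177276611... */
            return 0;
        }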

    IQ30 precision close to the maximum (+-2.0) is much better than float's. Up to |IQ()| = 2^24 / 2^30 = 0.015625 you won't lose any bits converting to float, because the raw 32-bit value then fits in the 24-bit mantissa. And below 2^24 / 2^30, float precision becomes much better than IQ30 precision.
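
    A quick host-side check of that threshold (again plain C, as a sketch):

        #include <stdio.h>
        #include <stdint.h>

        int main(void)
        {
            int32_t raw  = (1L << 24) - 1;             /* largest raw value that fits a float mantissa */
            float   f    = (float)raw / 1073741824.0f; /* just under 0.015625 in IQ30 terms */
            int32_t back = (int32_t)(f * 1073741824.0f);

            printf("round trip is %s\n", (raw == back) ? "exact" : "lossy");
            return 0;
        }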


    Regards
    Edward