
Should float be replaced?


We are using CC2541, and I noticed these words in the IAR compiler reference:

"Using floating-point types on a microprocessor without a math coprocessor is very inefficient, both in terms of code size and execution speed. Thus, you should consider replacing code that uses floating-point operations with code that uses integers, because these are more efficient."

We do have some code for linear calculations. Something like:

(uint16)(x * 6.15 + 3.1)

Is it much slower than this alternative?

(uint16)(((uint32)x * 615 + 310) / 100)
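
For reference, the integer form is just the same line with the coefficients multiplied by 100, and the two agree for small inputs: with x = 10, the float version gives 10 * 6.15 + 3.1 = 64.6, truncated to 64, and the integer version gives (10 * 615 + 310) / 100 = 6460 / 100 = 64. Wrapped as functions (illustrative only; uint16/uint32 are assumed to be the usual 16-/32-bit typedefs from our types header), the two candidates look like this:

uint16 calc_float(uint16 x)
{
    /* coefficients kept as floating-point literals */
    return (uint16)(x * 6.15 + 3.1);
}

uint16 calc_scaled(uint16 x)
{
    /* same coefficients scaled by 100, integer math only */
    return (uint16)(((uint32)x * 615 + 310) / 100);
}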

Any suggestions would be appreciated.

  • Hi,

    As you pointed out, floating-point operations are very inefficient on the CC2541, since its 8051 core has no hardware floating-point support. The compiler is designed to generate code with the least amount of processing and the smallest size, so that it fits in the limited RAM and flash. Bitwise and integer operations are much more efficient.

    Regards,

    Arun

  • Thanks for the answer.


    In our case the coefficients can change, so bitwise operations are not possible. I would like to know the performance difference between the following two expressions:

    int a, b, x;
    int calc = (int)(((int32)x * a + b) / 100);

    and

    int x;
    float a, b;
    int calc = (int)(x * a + b);

    Is it possible to quantify the difference, say, in CPU cycles (or time)?

    Thanks!

    The best way to find out is to measure it yourself (it may depend on the compiler settings). You can for instance use a timer such as Timer 1 to see the number of cycles used: just read the counter before and after the calculation (a minimal sketch is shown at the end of this reply). I believe that the first variant is quicker than the second, but both will be slow, as division takes a long time on an 8051.

    On a fixed-point processor, it is better to use fractional representation of non-integer numbers, where you divide by a power of 2 instead of by 100, as such divisions may be done by shifting. Your required accuracy will decide the number of shifts to do. For instance, if you divide by 128 instead of 100, you can implement your calculation as

    int a, b, x;
    int calc = (int)(((int32)x * a + b) >> 7);

    With your previous example of a = 6.15 and b = 3.1, you would use the scaled coefficients a = 787 and b = 396 (roughly 6.15 * 128 and 3.1 * 128).

    Note also that on an 8-bit processor, you should make sure the data types are as small as possible in every calculation to save time.
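
    Here is a minimal sketch of that measurement, assuming Timer 1 is already running in free-running mode. read_timer1() is only a placeholder for reading the Timer 1 counter registers; check the CC2541 data sheet for the exact SFR names, and remember to convert timer ticks to CPU cycles using whatever prescaler you have configured.

    #include <stdint.h>

    extern uint16_t read_timer1(void);   /* placeholder: returns the current Timer 1 count */

    volatile int16_t x = 1000, a = 615, b = 310;   /* volatile so the calculation is not optimized away */
    volatile int16_t result;

    void measure(void)
    {
        uint16_t before = read_timer1();                  /* counter value before */
        result = (int16_t)(((int32_t)x * a + b) / 100);   /* expression under test */
        uint16_t after = read_timer1();                   /* counter value after */

        uint16_t ticks = (uint16_t)(after - before);      /* elapsed timer ticks */
        (void)ticks;                                      /* e.g. inspect in the debugger */
    }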

  • Hi hec,

    This was great help.

    I was actually measuring the performance while I asked. To my surprise, my previous method 1 (int32) was slower than method 2 (float) by nearly 80%! So 32-bit integer operations with a divide by 100 are clearly no better than float on this 8-bit processor.

    By simply changing "/100" to ">> 7" (with the coefficients rescaled accordingly), the performance changed dramatically: the calculation is now 57% faster than the float version instead of 80% slower. A sketch of the updated calculation is below.

    So big thanks! :-)
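
    For completeness, the updated calculation now looks something like the sketch below (illustrative only, the names are arbitrary): the slow float-to-fixed-point scaling runs only when the coefficients change, and the per-sample calculation is pure integer math.

    #include <stdint.h>

    static int32_t a_q7, b_q7;   /* coefficients scaled by 128 (Q7 fixed point) */

    /* The only float work left; called whenever the coefficients change. */
    void set_coeffs(float a, float b)
    {
        a_q7 = (int32_t)(a * 128.0f);   /* 6.15 -> 787 */
        b_q7 = (int32_t)(b * 128.0f);   /* 3.1  -> 396 */
    }

    /* Per-sample calculation: one 32-bit multiply, an add and a shift. */
    uint16_t calc(uint16_t x)
    {
        return (uint16_t)(((int32_t)x * a_q7 + b_q7) >> 7);
    }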