Hi!
Can anyone explain to me how I can calculate the number of CPU cycles needed for a multiplication/division of two integers, and for a multiplication/division of two floating-point values?
Thanks!
greets
-René
It takes a lot of CPU cycles, and it is not practical to calculate an exact count.
Which MSP430 device? Which version of which compiler and which floating-point library? And how was it set up? The values of the arguments also have a lot to do with it.
Division depends on the software implementation and possibly even on the operand values, so it likely cannot be calculated in advance. I remember that an integer division on an 8MHz F1611 took about 57µs.
Integer multiplication, as long as the device has a hardware multiplier, has a fixed execution time, but the total also depends on the surrounding code the compiler generates (blocking interrupts, saving registers, etc.).
Some MSPs have a 16-bit hardware multiplier, some even a 32-bit one. My older MSPGCC used the 16-bit hardware multiplier even for long values (via a function call) instead of using the 32-bit hardware multiplier.
For float, both division and multiplication are implemented in software in any case, and the execution time depends on the data type, coder skill, the algorithm used, and even the values being processed.
Well, you should stay away from float/double anyway, if possible. Use fixed-point integer arithmetic instead (e.g. multiply by 1000 to get 3 decimal digits of resolution).