Hello TI,
I have a problem understanding the difference in calculation time between the CLA and the CPU.
We use a TMS320F28069. For testing and debugging I compare absolutely identical code: first on the main CPU, then in a CLA task. I expected the same calculation time.
But my measurements show a much faster calculation time on the CLA.
The difference is 3 µs (CLA) vs. 24 µs (CPU)!
To measure the time I toggle a GPIO.
The CLA task is measured the same way with a GPIO: started by a software trigger (IACK) and stopped in the CLA1_INT2_ISR routine. See the code snippet below.
All required variables are located in the CLA1 data RAM area. The calculation is correct and works (both on the CLA and on the CPU).
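The variables are placed there via DATA_SECTION pragmas, for example (a sketch; the exact section name comes from our linker command file):

#pragma DATA_SECTION(delta_Z_o,    "Cla1DataRam")   // state variable shared with the CLA
#pragma DATA_SECTION(delta_Z_o_n1, "Cla1DataRam")   // value from the previous step
float delta_Z_o;
float delta_Z_o_n1;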
void main( void )
{
    …
    // init, etc.
    …
    EALLOW;
    GpioDataRegs.GPASET.bit.GPIO13 = 1;   // Start time measurement
    EDIS;

    __asm(" IACK #0x0002");               // Start CLA Task 2 by software
}
// INT11.2
__interrupt void CLA1_INT2_ISR( void )    // end-of-task interrupt for CLA Task 2
{
    EALLOW;
    GpioDataRegs.GPACLEAR.bit.GPIO13 = 1; // Stop time measurement
    EDIS;

    PieCtrlRegs.PIEACK.all = PIEACK_GROUP11;
}
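For comparison, on the CPU I measure the same way directly around the calculation (a minimal sketch; calc_model() is a placeholder for the identical calculation compiled for the C28x):

GpioDataRegs.GPASET.bit.GPIO13 = 1;     // Start time measurement
calc_model();                           // absolutely identical code, executed by the CPU
GpioDataRegs.GPACLEAR.bit.GPIO13 = 1;   // Stop time measurement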
Most of the calculation time is spent in the code below. It contains some divisions. But why is it so much faster on the CLA?
delta_Z_o    = ( Ch * ( P_Z_o - ( delta_Z_o_n1 / Rjc ) ) * deltat ) + delta_Z_o_n1;
delta_Z_o_n1 = delta_ZK_o;
This formula is executed 6 times (also with other variables), as sketched below.
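Roughly in this pattern (a sketch; the _u names are placeholders for one of the other variable sets):

// Same update for each of the six variable sets; Ch, Rjc and deltat are constants
delta_Z_u    = ( Ch * ( P_Z_u - ( delta_Z_u_n1 / Rjc ) ) * deltat ) + delta_Z_u_n1;
delta_Z_u_n1 = delta_ZK_u;
// ... four more variable sets follow the same pattern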
Compiler optimization has no effect on the speed.
Do you have any ideas how this can happen?
Thank you.
Best Regards
Markus


