
TMS320F28374S: Full vs No Optimization shows timestamp fluctuation

Part Number: TMS320F28374S

We are trying to timestamp all our periodic Tasks, Swis, Hwis, and our zero-latency interrupt, but we see fluctuation when optimization is on. My first test of this logic, on a SWI in our codebase, shows a 1 ms ±1 µs period with optimization off but 1 ms ±14 µs with optimization on. We have some GPIOs that I set directly after the timestamp. On an oscilloscope, I see the ±1 µs whether the code is optimized or not, which seems to indicate that the optimization is doing something to the timestamp logic. Perhaps it's moving it later into the function? Note that I've tried removing the GPIO set/clear actions, but this doesn't remove the issue. We'd like to get this as close to the 'real' period as possible without infringing on the actual work being done.

The timestamp module runs off Timer 2, using the full range of the timer period (0xFFFFFFFF). Our zero-latency interrupt takes about 10 µs, so it could easily be the factor causing the difference. Mostly, I'd like to ensure that the timestamp is taken first thing in all of our threads.

Here is an example of what I'm trying to do:

static volatile uint32_t lastTimestamp = 0;
static volatile uint32_t timestamp = 0;

void swiFunction(void)
{
    // Read the down-counting CPU Timer 2 counter first thing in the thread.
    timestamp = HWREG(CPUTIMER2_BASE + CPUTIMER_O_TIM);
    .
    .
    .
    // Save the delta between timestamps (CPUTIMER_O_PRD, period = 0xFFFFFFFF);
    // subtracting from the period converts the down-count to an up-count.
    saveDelta(0xFFFFFFFF - timestamp, 0xFFFFFFFF - lastTimestamp);
    lastTimestamp = timestamp;
}
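
As an aside, since the timer counts down and the period is the full 32-bit range, the raw delta can also be taken with plain unsigned subtraction, which absorbs the wraparound automatically. A minimal sketch, assuming a 200 MHz timer clock (the helper names are illustrative, not from our codebase):

// Down-counter: the previous read is numerically larger, and 32-bit
// unsigned subtraction handles the reload/wrap for free.
static inline uint32_t elapsedTicks(uint32_t current, uint32_t last)
{
    return last - current;
}

static inline float ticksToMicroseconds(uint32_t ticks)
{
    return (float)ticks * (1.0e6f / 200.0e6f); // 0.005 us per tick at 200 MHz
}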

  • That is strange. You can try taking a look at the disassembly for this function to see if things really are being moved around or reordered in a way that could cause this.

    Could you also try moving the ISR and timestamp code to run from RAM (assuming it's not already)? I'm wondering if maybe the optimization is changing the alignment of the code in flash just enough to result in some Flash prefetch/cache miss that's affecting the execution time.
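
    If it isn't already in RAM, a minimal sketch using the C28x compiler's CODE_SECTION pragma is below. This assumes your linker command file already defines .TI.ramfunc with a flash load address and RAM run address, and that startup code copies the section over, as the TI examples do:

    // Place this function in the .TI.ramfunc section so it executes from RAM.
    #pragma CODE_SECTION(swiFunction, ".TI.ramfunc")
    void swiFunction(void)
    {
        // ... timestamp and SWI work as before ...
    }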

    Whitney

  • CPUTIMER2_BASE = 0xC10

    The macro expansion of the line is:

    (*((volatile uint32_t *)(0x00000C10U + 0x0U)))

    This disassembly looks right to me. I moved the object's .text to RAM as well, but was still seeing the ±14 µs shifts. The reported period was based on a calculation that divided the timer tick count by the system frequency; after reworking that calculation, it appears the error was being introduced there, most likely by an unintended precision-losing conversion.
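
    For illustration, the failure mode was along these lines (a hypothetical reconstruction, not our exact code; the variable names and the 200 MHz clock are assumptions):

    // Bad: converting the raw 32-bit timestamps to float first quantizes them
    // (a float mantissa holds 24 bits, so counts near 0xFFFFFFFF lose ~256 ticks),
    // and that quantization error shows up in the computed delta.
    float periodUsBad  = ((float)last - (float)now) / 200.0e6f * 1.0e6f;

    // Good: take the delta in full 32-bit integer precision, then convert.
    float periodUsGood = (float)(last - now) * (1.0e6f / 200.0e6f);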

    Using a low-pass filter calculation, the average delta time between interrupts now matches the expected value. Checking these with optimization off shows that they were simply being calculated and stored with that error. Everything looks fine now.
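
    (The filter is just an exponential moving average along these lines; the ALPHA value and names here are illustrative:)

    #define DELTA_FILTER_ALPHA  0.05f

    static float avgDeltaUs = 0.0f;

    // Low-pass filter: each new delta nudges the running average toward it.
    static void filterDelta(float deltaUs)
    {
        avgDeltaUs += DELTA_FILTER_ALPHA * (deltaUs - avgDeltaUs);
    }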

    Thanks for your help!