Tool/software: TI C/C++ Compiler
I use the delay function SysCtlDelay() to delay for some milliseconds. The functions are implemented as follows:
    void SleepMillisecond(u16 u16milisecond)
    {
        SysCtlDelay(SysCtlClockGet() / 3000 * u16milisecond); // 1ms
    }

    void SysCtlDelay(uint32_t ui32Count)
    {
        __asm("    subs r0, #1\n"
              "    bne.n SysCtlDelay\n"
              "    bx lr");
    }
Under normal circumstances it works fine. But when I change some precompiler macros (defined by myself), this function produces double the delay time. I am quite sure the precompiler macros do not affect this code.
I used an oscilloscope to check that the clock is OK. I also checked the execution time of another function; when the precompiler macro changes, its execution time stays the same. Only SysCtlDelay() is affected.
I stepped into this function and debugged the disassembly; the number of assembly instructions executed is the same in both cases. So I really don't understand why the result differs with the same code and the same clock.
Compiler Environment: IAR Embedded Workbench 7.0
    #ifdef PRODUCT_VERSION
    /* code */
    #endif

    #ifdef STORE_VERSION
    /* code */
    #endif
The delay time is doubled when the function is invoked in Iot_Sound.c and Iot_Sleep.c.
So in my opinion the compiler switch should not affect the result of the delay function.
To begin: it is believed that you are really pressing against the bounds of vendor MCU support. Your highly unusual use case has over-challenged you, and it is thus perhaps unfair to burden the vendor as well.
Your code: 'SysCtlDelay(SysCtlClockGet() / 3000 * u16milisecond); //1ms'
Especially the call to SysCtlClockGet() invites vulnerability, does it not? Hard-coding your clock value, rather than employing the far more cumbersome SysCtlClockGet(), would prove to your advantage.
Because your failing (doubling) mode produces an integral multiple of SysCtlDelay()'s duration, it is expected that something has somehow enabled a double call of SysCtlDelay(). Would it not prove useful to call SysCtlDelay() back to back, noting whether the doubling of the function's duration persists? If that second call yields a normal duration, then it is assured that your failing mode has predisposed SysCtlDelay() to its doubling fate.
As sailors, we are taught to seek 'any port in a storm.' It is almost guaranteed that using one of the MCU's many hardware timers instead would avoid this issue completely!
Hello,
Unfortunately, I am leaning towards agreeing with cb1: this is difficult to explain or comment on with the information provided.
Are you saying that the code

    void SleepMillisecond(u16 u16milisecond)
    {
        SysCtlDelay(SysCtlClockGet() / 3000 * u16milisecond); // 1ms
    }

gives different results depending on whether PRODUCT_VERSION or STORE_VERSION is defined?
Have you checked the result of SysCtlClockGet for both cases?
If using the ROM_ version of the function works, another thing you may need to look at is optimization and other compiler settings. Unfortunately, with IAR I am not knowledgeable about any possible 'gotchas' regarding such settings.
Mr Jacobi, (i.e. Ralph)
Utter blasphemy! 'Gotchas' induced under a longer-established, far more widely deployed, real (all ARM MCUs invited) PRO IDE? HIGHLY UNLIKELY, and far more evident within those 'lesser' tools claiming low price (free) as their sole raison d'être.
And those tools cause GREAT pain and suffering when client users GROW and wish to evaluate any 'leapfrogging' ARM MCU! (There are multiple, BTW...)
In the middle of the night last night, I devised a method to tease out this poster's issue: