
__delay_cycles() question???

I am using the intrinsic function __delay_cycles(). Is there any way to use a variable with this function instead of a constant? For example, __delay_cycles(time); instead of __delay_cycles(1000);. Every time I try using a variable I get an error. Is there another intrinsic function that delays clock cycles but takes a variable instead of a constant?

 

-Mike

  • Hi Mike,

    From the MSP430 Optimizing C/C++ Compiler v3.1 User's Guide (SLAU132c.pdf), p. 109:

    "The __delay_cycles intrinsic inserts code to consume precisely the number of specified cycles with no
    side effects. The number of cycles delayed must be a compile-time constant."
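
    In other words (a minimal illustration; the exact diagnostic text varies with the compiler version):

        __delay_cycles(1000);        /* OK: a compile-time constant              */

        unsigned int time = 1000;
        __delay_cycles(time);        /* error: argument must be a compile-time
                                        constant, as the User's Guide states     */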

     

    Best regards,

    AES

  • Two ideas for your implementation...

    1. Use a timer rather than a software delay. This lets you enter a low-power mode (such as LPM3 or LPM0) while you wait, greatly reducing the power consumption of your system (a sketch of this approach follows the code example below), or
    2. Implement your own software delay function using some kind of loop statement that has the __delay_cycles() intrinsic function call at its core. This is better than a SW delay written in "pure" C, since the latter would be subject to all kinds of compiler optimizations and could result in different delays depending on your optimization level, compiler version, and compiler vendor.

      For example, you could do something like this:

      /* Delay for roughly the given number of milliseconds.
         __delay_cycles() requires a compile-time constant, so the
         per-iteration burn is fixed at one millisecond of CPU cycles. */
      void delay_ms(unsigned int delay)
      {
          while (delay--)
          {
              __delay_cycles(PUT_CPU_CLOCK_SPEED_IN_HZ_DIVIDED_BY_1000_HERE);
          }
      }
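
    And for the first idea, a minimal sketch of a timer-based delay (a sketch only, assuming a device with Timer_A0, a 32768 Hz ACLK source, and CCS-style intrinsics; register and vector names differ between MSP430 derivatives, so check your device header):

      #include <msp430.h>

      /* Sleep in LPM3 until Timer_A0 has counted the requested number
         of ACLK ticks (e.g. 32768 ticks = 1 s with a 32 kHz crystal). */
      void delay_ticks(unsigned int ticks)
      {
          TA0CCR0  = ticks;                     // compare value
          TA0CCTL0 = CCIE;                      // enable CCR0 interrupt
          TA0CTL   = TASSEL_1 | MC_1 | TACLR;   // ACLK, up mode, clear
          __bis_SR_register(LPM3_bits | GIE);   // sleep until the ISR wakes us
          TA0CTL   = MC_0;                      // stop the timer
      }

      #pragma vector = TIMER0_A0_VECTOR
      __interrupt void delay_isr(void)
      {
          __bic_SR_register_on_exit(LPM3_bits); // resume delay_ticks()
      }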

    Regards,
    Andreas

    In the delay_ms example, wouldn't the while loop add some extra clock cycles to the delay? Also, is there a way to find out exactly how many clock cycles of delay it adds? Is there a document that shows the delay of C code?

     

    Thanks,

    Mike

  • Mike,

    Yes, with that code there will be a small "offset" and "gain" error in the actual delay you get. The code is good for guaranteeing a certain minimum delay, with a very small error in the positive direction; that's why the delay inside the loop was chosen rather large ("ms"). If you need a dead-on delay, the best way is to use a timer. But if for some reason you still want to use SW delays, I would recommend coding the function in assembler rather than measuring the C function: you can then compensate, inside the function itself, for the "offset" and "gain" errors of the call and loop overhead. Also, our IDEs have cycle counter functions that you can use to analyze the delay function. But for SW delays, be aware that interrupts (if enabled) can kick in and change the delay you actually observe.
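
    To illustrate the compensation idea in C (a sketch only; CPU_CLOCK_HZ and the 4-cycle loop-overhead figure are assumptions you would verify with the cycle counter):

      #define CPU_CLOCK_HZ   1000000UL  /* assumed: set to your MCLK frequency   */
      #define LOOP_OVERHEAD  4          /* assumed per-iteration cost, measure it */

      /* Like delay_ms(), but with the loop overhead subtracted from the
         per-iteration cycle burn. The argument to __delay_cycles() is
         still a compile-time constant expression, as the compiler requires. */
      void delay_ms_compensated(unsigned int delay)
      {
          while (delay--)
          {
              __delay_cycles(CPU_CLOCK_HZ / 1000 - LOOP_OVERHEAD);
          }
      }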

    Regards,
    Andreas

  • Dear Andreas,

    What a wonderful resolution!

    Many thanks!

    Kang

  • Take a look at https://groups.google.com/forum/?fromgroups=#!topic/ti-launchpad/C1Vogtp2xa4

    They propose using this function:


    /* Busy-wait for roughly 3*n CPU cycles: each pass through the loop
       is one dec (1 cycle) plus one jne (2 cycles) on the MSP430.
       Note this is msp430-gcc inline assembly, not TI compiler syntax. */
    static void __inline__ brief_pause(register unsigned int n)
    {
        __asm__ __volatile__ (
                    "1: \n"
                    " dec        %[n] \n"   /* decrement the counter   */
                    " jne        1b \n"     /* loop until it hits zero */
            : [n] "+r"(n));
    }
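
    For scale (assuming the 3-cycles-per-iteration figure above survives compilation, and a 1 MHz MCLK; both are assumptions to verify on your target):

        brief_pause(333);   /* roughly 1 ms at 1 MHz: 333 iterations * 3 cycles */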

    Isma.

  • Michael Conover said:
    In the delay_ms example, wouldn't the while loop add some extra clock cycles to the delay?

    Yes. However, any implementation that is based on idling around for a certain number of cycles is affected by any interrupts that, well, interrupt the code and take some time to execute.

    With a timer, it doesn't matter whether the delay time is spent in low-power mode, in a while loop that waits for the timer to expire, or in an ISR (except that if the CPU is inside an ISR when the timer expires, the remaining execution time of that ISR is added to the delay).

    I implemented long delays (up to 65 seconds) with a millisecond counter that is incremented in a 1 ms timer interrupt (timer clocked at 1 MHz, interrupt every 1000 counts), and a second delay with 1 µs resolution (12 µs minimum) based on the same 1 MHz timer, waiting for the compare interrupt of a second CCR unit. That gives me exact delays up to 65 ms.
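
    A minimal sketch of the millisecond-counter part (a sketch only, assuming Timer_A0 with SMCLK = 1 MHz and CCS-style intrinsics; register and vector names depend on the device, and the µs-resolution second CCR channel is not shown):

      #include <msp430.h>

      volatile unsigned int ms_ticks;             /* wraps after 65535 ms */

      void tick_timer_init(void)
      {
          TA0CCR0  = 1000 - 1;                    /* 1000 counts @ 1 MHz = 1 ms */
          TA0CCTL0 = CCIE;                        /* interrupt on compare       */
          TA0CTL   = TASSEL_2 | MC_1 | TACLR;     /* SMCLK, up mode, clear      */
          __enable_interrupt();
      }

      void timer_delay_ms(unsigned int ms)        /* exact up to 65535 ms */
      {
          unsigned int start = ms_ticks;
          while ((unsigned int)(ms_ticks - start) < ms)
              __bis_SR_register(LPM0_bits | GIE); /* doze until the next tick */
      }

      #pragma vector = TIMER0_A0_VECTOR
      __interrupt void tick_isr(void)
      {
          ms_ticks++;
          __bic_SR_register_on_exit(LPM0_bits);   /* let timer_delay_ms re-check */
      }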
