I am using the MSP430F5438, and I want to implement a function that waits for a certain number of microseconds (not milliseconds).
Something like this:
void waitMicroseconds(int microseconds)
{
????
}
How can I do that?
Hi Flek,
The __delay_cycles intrinsic inserts code to consume precisely the number of specified cycles. The number of cycles delayed must be a compile-time constant.
Also, I think that you should read this thread http://e2e.ti.com/support/microcontrollers/msp43016-bit_ultra-low_power_mcus/f/166/t/18638.aspx
Best Regards,
AES
It can be said that standard C has no concept of time, but a specific C compiler may come with a library that includes delay functions. For the MSP430, IAR has one that delays an integer number of MCLK cycles. If the MCLK you use is an integer multiple of 1 MHz, you can easily delay an integer multiple (or even certain fractions) of microseconds with it.
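For example, a minimal sketch wrapping the __delay_cycles intrinsic, assuming MCLK = 12 MHz (MCLK_HZ and DELAY_US are illustrative names, not part of any library; since the cycle count must be a compile-time constant, this only works with constant arguments):

#define MCLK_HZ   12000000UL              // assumption: MCLK runs at 12 MHz
#define DELAY_US(us)  __delay_cycles((MCLK_HZ / 1000000UL) * (us))

DELAY_US(5);                              // expands to __delay_cycles(60): 5 us at 12 MHz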
Hi flek,
do you have the sample code for your controller? There should be a code example that turns an LED on and off on a particular port pin using some delay. From that you can see how long the delay makes the LED glow, so you can easily derive a count for a microsecond delay. Be sure to run the system on an external clock so that you can speed up program execution and get the proper delay.
For example, if your code is running on a 1 MHz basic clock, then 1/1MHz equals 1 microsecond. The basic clock settings are available with the sample code, so please check those. Note that if you change the clock (ACLK, MCLK, SMCLK), your function must ultimately change as well to keep the delay correct.
You can simply use __delay_cycles(), but keep an eye on the clock speed at which your controller is working.
Regards,
kshatriya
Hi flek,
there is an intrinsic C function called __delay_cycles.
__delay_cycles inserts code to consume precisely the specified number of clock cycles (MCLK) with no side effects. The number of clock cycles delayed must be a compile-time constant, so you use this intrinsic like:
__delay_cycles(1000); // delay program execution for 1000 cycles
The intrinsic function speaks for itself: it consumes exactly the number of MCU clock cycles (MCLK) you specify in parentheses.
So, the value for the intrinsic is calculated as follows: required delay / instruction cycle time = value (in parentheses).
I.e.
MCU clock MCLK = 16MHz --> instruction cycle time = 62.5ns (= 1/MCLK)
Required delay = 5s
--> 5s/62.5ns = 80000000
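Written as code, that 5 s example (assuming the 16 MHz MCLK from above) becomes:

__delay_cycles(80000000);                 // 80,000,000 cycles x 62.5 ns = 5 s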
Rgds
aBUGSworstnightmare
Using __delay_cycles has some disadvantages. If an ISR is executed while the delay is in effect, its execution time is added to the delay, every single time, because counting down the given cycles is of course halted during ISR execution.
Also, the execution speed depends on MCLK, so the values need to be recalculated for each project.
For my projects, I use a different approach. My TimerA is clocked with 1 MHz to give a 1 ms timer interrupt, so I use one of the other CCR units for the µs delay. The delay function itself then is:
void TimerDelay1us(unsigned int time)
{
    __asm__ __volatile__ ("push r2"::);   // save interrupt state (SR is r2)
    _DINT();                              // disable interrupts for a clean setup
    usdelay = 1;                          // flag, cleared by the TA1CCR2 ISR
    TA1CCR2 = TA1R + time + 2;            // compare value: now + delay + setup offset
    TA1CCTL2 = CCIE;                      // enable the CCR2 compare interrupt
    _EINT();                              // enable interrupts so the CCI can take place
    while (usdelay);                      // busy-wait until the ISR clears the flag
    TA1CCTL2 = 0;                         // disable the CCR2 interrupt again
    __asm__ __volatile__ ("pop r2"::);    // restore original interrupt state
}
In the TA1CCR2 ISR, usdelay is simply set back to 0. usdelay is a global volatile char.
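For illustration, a minimal sketch of such an ISR, assuming mspgcc syntax (to match the inline asm above); on the 5xx family the TA1IV value 0x04 corresponds to the CCR2 interrupt:

volatile char usdelay;                    // cleared by the ISR when the delay expires

__attribute__((interrupt(TIMER1_A1_VECTOR)))
void TIMER1_A1_ISR(void)
{
    switch (TA1IV) {                      // reading TA1IV clears the pending flag
    case 4:                               // 0x04 = TA1CCR2 CCIFG
        usdelay = 0;                      // delay expired, release the busy-wait
        break;
    default:
        break;                            // CCR1/overflow not used here
    }
}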
At 16 MHz, there is an offset of 4-5 µs on top of the given time value, so you can request a delay of 4 to 65530 µs with an accuracy of ±1 µs. Since interrupts need to be enabled, there's a chance that at the start or end of the delay an ISR is executed that increases the delay time. To reduce this risk, you may deactivate ISRs before calling this method (but they WILL be enabled during the delay) or disable any other interrupt sources. Any ISRs executed while the delay has not yet expired do not add to the delay.
If you need a very precise delay of only a few µs, then of course nothing is more precise than adding a proper number of NOP instructions with disabled interrupts, and ensuring the compiler won't optimize them away.
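For example (a minimal sketch, assuming 16 MHz MCLK, so each NOP is one 62.5 ns cycle; __no_operation() is the compiler intrinsic, which cannot be optimized away like an empty loop could):

__disable_interrupt();                    // nothing may stretch the delay
__no_operation();                         // 1 cycle = 62.5 ns at 16 MHz
__no_operation();
__no_operation();
__no_operation();                         // 4 cycles = 250 ns total
__enable_interrupt();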
Hi flek,
try the code below (add it to the beginning of your main(); the first instruction in main should be the WDTCTL line). This will initialize your main clock (MCLK) to 12 MHz, resulting in an instruction cycle time of 1/12MHz = 83.3ns.
__delay_cycles(1) will delay your program for 83.3ns now. If you need 5us this will calculate as:
5us/83.3ns = 60
--> __delay_cycles(60) will delay your program for 60 x 83.3ns = 4.998 us.
Rgds
aBUGSworstnightmare
  WDTCTL = WDTPW + WDTHOLD;               // Stop WDT

  // ACLK = REFO = 32kHz, MCLK = SMCLK = 12MHz
  UCSCTL3 |= SELREF_2;                    // Set DCO FLL reference = REFO
  UCSCTL4 |= SELA_2;                      // Set ACLK = REFO

  __bis_SR_register(SCG0);                // Disable the FLL control loop
  UCSCTL0 = 0x0000;                       // Set lowest possible DCOx, MODx
  UCSCTL1 = DCORSEL_5;                    // Select DCO range 24MHz operation
  UCSCTL2 = FLLD_1 + 374;                 // Set DCO Multiplier for 12MHz
                                          // (N + 1) * FLLRef = Fdco
                                          // (374 + 1) * 32768 = 12MHz
                                          // Set FLL Div = fDCOCLK/2
  __bic_SR_register(SCG0);                // Enable the FLL control loop

  // Worst-case settling time for the DCO when the DCO range bits have been
  // changed is n x 32 x 32 x f_MCLK / f_FLL_reference. See UCS chapter in 5xx
  // UG for optimization.
  // 32 x 32 x 12 MHz / 32,768 Hz = 375000 = MCLK cycles for DCO to settle
  __delay_cycles(375000);
Well, if you leave the default setting for the MCU clock, it can be anything between 0.4 and 2 MHz. (well, maybe not THAT bad, but still..)
For our application, I set up the DCO for 16 MHz and use the FLL (frequency locked loop) stabilisation with the REFO oscillator. That's more or less what aBUGSworstnightmare does in his sample code too. Only doing so (or using an external quartz) provides enough accuracy for using the UARTs or relying on any timing.
In this case your MCLK is fast enough to use __delay_cycles even for just 5 µs despite the calling overhead. If you just needed a delay of 1 cycle (and were running at 1 MHz as you assumed), it would have been sufficient to just insert a NOP into your code, as a NOP is an operation with no side effects that takes exactly 1 cycle.
Cycle counting is, however, not a good thing. If you change your MCLK for some reason, then all predefined cycle wait times are void and you need to check and rewrite all your code to correct them. Using timers when possible is the better way, as you can simply adjust the timer divider to the new clock frequency and everything is fine.
As for communication, the given delays are usually minimum values, so normally it would be okay if you made a 5 ms delay instead of 5 µs. And if you just call the external device once a minute, it makes no difference in update speed either :)
I don't know the DS18B20, but if it has an SPI interface, as most sensors have, it would be easier to use the MSP's SPI hardware instead of manually pulling lines. Then you only need to set a divider on MCLK so that the clock pulses do not exceed the external hardware's requirements, and all the work is done by writing and reading bytes to and from the SPI TX and RX registers.
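For example, a minimal sketch of USCI_B0 as SPI master, assuming SMCLK = 12 MHz, a 1 MHz SPI clock, and the MSP430F5438 pinout (P3.1 = SIMO, P3.2 = SOMI, P3.3 = CLK); the divider and function names are illustrative:

void spi_init(void)
{
    P3SEL |= BIT1 + BIT2 + BIT3;          // route P3.1/P3.2/P3.3 to USCI_B0
    UCB0CTL1 |= UCSWRST;                  // hold USCI in reset during configuration
    UCB0CTL0 = UCMST + UCSYNC + UCMSB;    // 3-pin SPI master, MSB first
    UCB0CTL1 |= UCSSEL_2;                 // clock the module from SMCLK
    UCB0BR0 = 12;                         // 12 MHz / 12 = 1 MHz SCLK
    UCB0BR1 = 0;
    UCB0CTL1 &= ~UCSWRST;                 // release for operation
}

unsigned char spi_xfer(unsigned char byte)
{
    while (!(UCB0IFG & UCTXIFG));         // wait until the TX buffer is free
    UCB0TXBUF = byte;                     // send; the clock runs automatically
    while (!(UCB0IFG & UCRXIFG));         // wait for the simultaneously received byte
    return UCB0RXBUF;
}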
one more thing: using __delay_cycles to make a 1 minute delay is, well, surely not the way to go :)
One basic thing I do on all my projects is to set up a timer so it generates an interrupt every millisecond. In the interrupt service routine a global variable is incremented, so I have a clock with 1 ms tick time. All I need to do is compare the clock value with x+60000 to get a 1 minute delay, or x+1000 for 1 second. I even have a separate second counter (incremented on every 1000th call of the 1 ms interrupt) for longer delays of up to days.
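As a minimal sketch of such a tick (assuming SMCLK = 12 MHz, Timer A0 in up mode, and mspgcc ISR syntax; all names are illustrative):

volatile unsigned int ticks_ms;           // 16-bit tick, wraps every ~65.5 s

void tick_init(void)
{
    TA0CCR0 = 12000 - 1;                  // 12 MHz / 12000 = 1 kHz -> 1 ms period
    TA0CCTL0 = CCIE;                      // interrupt on CCR0 compare
    TA0CTL = TASSEL_2 + MC_1 + TACLR;     // SMCLK, up mode, clear TAR
}

__attribute__((interrupt(TIMER0_A0_VECTOR)))
void TIMER0_A0_ISR(void)
{
    ticks_ms++;                           // free-running millisecond clock
}

void wait_ms(unsigned int ms)             // e.g. wait_ms(60000) for the 1 minute delay
{
    unsigned int start = ticks_ms;
    while ((unsigned int)(ticks_ms - start) < ms);  // unsigned subtraction handles wraparound
}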
If you need to preserve power, you can also set up a timer that generates an interrupt after 1 minute and put the processor into sleep. It will then wake up when the time has expired, not consuming any (at least almost no) power in the meantime.
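A minimal sketch of that idea (assuming ACLK = 32768 Hz; it reuses Timer A0 only for brevity, so it would not coexist with the tick sketch above as-is):

void sleep_seconds(unsigned int s)
{
    TA0CCR0 = 32768 - 1;                  // one interrupt per second from ACLK
    TA0CCTL0 = CCIE;
    TA0CTL = TASSEL_1 + MC_1 + TACLR;     // ACLK, up mode, clear TAR
    while (s--)
        __bis_SR_register(LPM3_bits + GIE); // sleep in LPM3 until the timer fires
    TA0CTL = MC_0;                        // stop the timer again
}

__attribute__((interrupt(TIMER0_A0_VECTOR)))
void TIMER0_A0_ISR(void)
{
    __bic_SR_register_on_exit(LPM3_bits); // return the main loop to active mode
}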
But this is somewhat more sophisticated stuff and maybe overkill for your project, unless you work on battery power and need to preserve energy for long-lasting operation.
For first steps, doing it the simplest way until it works is surely a good starting point, as long as you don't block your way to future improvements. So if the chip supports SPI, connect it to the SPI lines of the processor. You can still use them as simple program-controlled I/O if you want, but if you someday want to make the step to the SPI hardware, it is already connected to the right pins. :) My library of hardware support modules for several MSPs started with handmade pin toggling and counted cycles too :)
Thank you all for the answers. I will try it in 2 weeks, because until then I have some real work to do... :) This microsecond project is my "private" project.
I implemented the protocol for communicating with the DS18B20. It works sporadically because of timing problems. So, I will do the following:
1) speed up the microcontroller to 12 MHz (thank you aBUGSworstnightmare)
2) implement my code in the following way:
a) wait for 1 minute (using a software timer; the RTOS offers such a function)
b) communicate with the DS18B20 and display the temperature
In the future I will modify my code as Jens-Michael Gross suggested.