
Fails initial detection of Input port voltage level, sometimes delays return when polling TAIFG/CCIFG

Other Parts Discussed in Thread: MSP430F2272

I am reading a serial stream of data at 1200 bits/second. The serial line goes high to acknowledge the request for data, then each byte starts with a low "Start" bit and ends with a high "Stop" bit. I am using Port2.4 for the serial data input, polling Port2.4 to see the Acknowledge, and then polling to detect the low "Start" bit. The MSP430F2272 always detects the Acknowledge, but sometimes (1 out of 6 times) delays detecting the start bit for 2.7 ms to 30 ms (by which time many bits of serial data have passed, as have many opportunities to detect the low voltage level).

I am checking the signal (levels and quality) with an oscilloscope and a logic analyzer, and the serial data source is not presenting any problems. The logic analyzer _always_ reads the signal flawlessly.

The other problem I am having is that when I start timer A in the delay() function below, it often does not return. It gets assembled as follows, and it just doesn't make sense that it could fail (I originally tried checking the two interrupt flags individually, then resorted to checking them both when that didn't work). I put debug statements on either side of this statement and proved to myself that this is where the microprocessor is hanging.
//while(!(TACTL&BIT0 || TACCTL0&BIT0)); 

082de BIT.W #1, &0x0160
082e2 JNZ 0x082ea
082e4 BIT.W #1, &0x0162
082e8 JZ 0x082de

I am using the DCOCLK for MCLK, with the calibrated 8MHz clock. I am using Timer B (driven by VLO/ACLK) to count 10 seconds and send another request for data, and I am using Timer A (SMCLK = DCOCLK/8 = 1MHz) to count the microseconds between clocking in bits.

#define DEFAULT_CALBC1_8MHZ (XTS + RSEL3 + RSEL2) // RSEL = 12, XT2OFF, XTS, no divider (chooses a value of RSEL that guarantees the oscillator is slower than 8 MHz to simplify loop timing)
#define DEFAULT_CALDCO_8MHZ (DCO1 + DCO0) // DCO = 3, no modulation
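(The bit-timing constants passed to delay() in TTLRXChar() are not defined in the post; at 1200 bits/second one bit is about 833 µs, so with delay() counting in microseconds they would presumably be on this order. These are assumed values, not the original definitions:)

#define ONE_BIT 833 // ~1/1200 s = one bit period in microseconds (assumed)
#define HALF_BIT 416 // ~half a bit period, to sample in the middle of a bit (assumed)
#define STOPBIT_DELAY 833 // one more bit period to reach the stop bit (assumed)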

BCSCTL1 = CALBC1_8MHZ; // Select preprogrammed BCSCTL1 8MHz calibration
DCOCTL = CALDCO_8MHZ; // Select preprogrammed DCOCTL 8MHz calibration
BCSCTL2 = DIVS_3; // MCLK = DCO, SMCLK = DCO/8
BCSCTL1 &= ~XTS; // Set Low-Freq mode (XTS should already be low)
BCSCTL3 |= LFXT1S_2; // VLO on (with XTS=0 already, sets ACLK = VLOclk)


// TimerB marks the time to read the sensor (accumulate 1 second at a time)
TBCTL = TBCLR; // clear TimerB counter
TBCCTL0 = CCIE; // Enable timer b Capture/compare interrupt
TBCCR0 = VLO_DIV8_1SEC; // one second of ACLK/8 ticks (VLO is roughly 12 kHz, so about 1500 counts)
TBCTL = TBSSEL_1+ID_3+MC_1; // ACLK, divide by 8, start and count up to CCR0
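(VLO_DIV8_1SEC and the Timer B interrupt handler are not shown in the post. As a rough sketch only, assuming the nominal ~12 kHz VLO so that ACLK/8 is about 1.5 kHz; the variable names and the 10-second count here are illustrative, not from the original code:)

#define VLO_DIV8_1SEC 1500 // ~1 second of ACLK/8 ticks at a nominal 12 kHz VLO (assumed value)

// Hypothetical Timer B CCR0 ISR: count seconds and flag a new data request every 10 s
volatile unsigned int gSeconds = 0;
volatile unsigned int gRequestData = 0;

#pragma vector = TIMERB0_VECTOR
__interrupt void TimerB0_ISR(void)
{
    gSeconds++;                 // one CCR0 period = ~1 second
    if (gSeconds >= 10) {       // 10 seconds elapsed
        gSeconds = 0;
        gRequestData = 1;       // main loop polls this flag and calls TTLMeasure()
    }
}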

P2SEL &= ~BIT4; // this should be the default 
ADC10AE0 &= ~BIT4; // reinforcing the default 
ADC10CTL0 = 0x00; // ADC off - reinforcing the default
P2IES &= ~BIT4; // Configure interrupt to trigger on low to high transition (doesn't matter because interrupt is not being used - this application just polls the bit)
P2IE &= ~BIT4; // Disable P2.4 interrupts
P2OUT &= ~BIT4; // Force low if we ever switch directions
P2DIR &= ~BIT4; // configure as input

/* power the sensor and request serial data, process the acknowledge and the start bit, then request one byte at a time*/
unsigned int TTLMeasure(char *buffer)
{
    unsigned int i;
    unsigned int retVal = 1;
    char c;

    P1OUT |= BIT2;              // turn on sensor / request start of serial data
    while(!(P2IN & BIT4));      // poll sensor (stop when signal is high) - ***always detects this***

    while((P2IN & BIT4));       // look for low signal - ***sometimes misses multiple periods of low voltage here***

    i = 0;
    do {
        c = TTLRXChar();        // read in 1 character
        buffer[i] = c;
        i++;
    }
    while(c != 0 && c != '\n' && i < 60); // stop when c is null, '\n', or buffer is full

    buffer[i-1] = '\0';         // make sure the string is terminated
    P1OUT &= ~BIT2;             // turn off sensor
    return(retVal);
}

//-----------------------------------------------------------------------------------------------
// TTLRXChar() bit-bangs in a 1200 bps, TTL level character from a digital sensor
//-----------------------------------------------------------------------------------------------
char TTLRXChar( void )
{
    int i;
    char c;
    char p = 0;                 // Even parity bit

    i = 0;

    while((P2IN & BIT4));       // Wait for P2.4 to go low (start bit)

    c = 0;                      // ensure initial value is 0

    delay(HALF_BIT);            // Delay to read at the middle of a bit
    for (i = 0; i < 8; i++) {
        delay(ONE_BIT);         // already in the middle of a bit -> delay to middle of the next bit to be read
        if(P2IN & BIT4)
            c += 0x80;          // Add bit to character buffer

        if(i < 7) {
            c = c >> 1;         // Prepare character buffer for next bit
        }
    }
    delay(STOPBIT_DELAY);       // delay to the stop bit
    return(c);
}

void delay( unsigned int c ) {  // delays 1 usec per count of c
    unsigned int i = c;

    if(12 <= i) {               // protect against CCR0 = 0
        i -= 6;                 // Subtract 48 clock cycles (6 us) for the setup computation

        TACTL = TACLR;          // clears TAR, Count Direction (MC), Clock Divider (ID), flags
        TACTL = TASSEL_2 + MC_1; // SMCLK, Mode = up (MC_1), ID = 0 (divide by 1), clear TAIFG
        TACCTL0 = 0;
        TACCR0 = i;

        while(!(TACTL & BIT0 || TACCTL0 & BIT0)); // **** second frustration - sometimes this statement takes 40ms, 50ms, or even 58ms to complete, even though 90% of the time it takes the expected amount of time ****
    }
}

  • Why is your delay loop so complicated?

    Just let the timer run in continuous mode.
    All you need to do is (see the sketch at the end of this reply):
    - return if the requested delay is smaller than the setup time (including the function call and return)
    - subtract the setup time (using a signed parameter would make things faster: first subtract, then compare for <= 0), but also limit the max delay to 32 ms
    - clear TACCTL0.CCIFG
    - set TACCR0 = TAR + delay

    Btw: setting TACLR does not clear the ID bits, it only clears the internal prescaler value and sets TAR to 0, without triggering any compare event.


    I see that in different parts of your code you enable several IE bits. Where are the ISRs for these interrupts? Such an interrupt may trigger inside your delay loop, so the delay time may be extended. And I’ve seen ISRs that took much longer than 58 ms :)

    However, your code has the same flaw as my suggestion: the code isn’t synchronized with the timer clock. So it might happen, even if you just cleared the timer, that it counts on the very next moment and not 1 µs later, as you don’t know on which of the 8 steps of the DCO/8 SMCLK it has just arrived. You always have a jitter of up to 1 µs. Consider this when calculating and subtracting the setup time. And for a (remaining) delay of 1, the timer might increment right between reading (or clearing) TAR and writing TACCR0, so you get the interrupt 65 ms later. So be generous when defining the minimum delay time.
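    A minimal sketch of the continuous-mode delay described above might look like this (the setup-time constant and the 1 µs SMCLK tick are assumptions; adjust them for your own call/return overhead and the jitter just mentioned):

    #include <msp430.h>

    #define DELAY_SETUP_US 6                // assumed call + setup overhead in microseconds

    // Timer A is configured once at init and left free-running:
    //   TACTL = TASSEL_2 + MC_2;           // SMCLK (1 MHz), continuous mode
    void delay(int us)                      // signed parameter; keep requests well below ~32 ms
    {
        us -= DELAY_SETUP_US;               // subtract the setup time first
        if (us <= 0)
            return;                         // requested delay already spent in the overhead
        TACCTL0 &= ~CCIFG;                  // clear any stale compare flag
        TACCR0 = TAR + us;                  // 16-bit wrap-around works out (see the rollover discussion below)
        while (!(TACCTL0 & CCIFG));         // wait for TAR to reach the new TACCR0
    }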

  • Thank you for taking on the challenge of responding to my question - I'm sorry it is a little long and disjointed.

    I like the simplicity of your timer loop protocol - setting the mode to Continuous, and TACCR0 = TAR + DesiredDelay;  I have a question about it though:  What happens if TAR were to roll over in the middle of my desired delay?  In this situation, I think I would have TACCR0>65535; does the internal logic that sets CCIFG handle that case?

    Finally, I think I just figured out this morning that my debugger was the primary source of the problem: I configured it to update the values of the variables in the Watch window twice per second, and it appears that the debugger halted the processor whenever it needed to do that update.

  • Rollover happens for both the timer and the CCR.

    Assume TAR is 65535 and the delay is 1000; then 65535 + 1000 gives 999 (16-bit math overflow) for the CCR (which only holds 16 bits anyway), and the timer will reach this value (after the rollover) after 1000 ticks. Mission accomplished.
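    In C terms, using the same 16-bit unsigned arithmetic the timer registers use (illustrative snippet):

    unsigned short tar  = 65535;            // current TAR value
    unsigned short ccr0 = tar + 1000;       // 66535 wraps to 999 in 16-bit math
    unsigned short diff = ccr0 - tar;       // 999 - 65535 also wraps: exactly 1000 ticks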

    Yes, the debugger uses the same memory bus to fetch the variables (except for register variables) as the processor does. So the processor must be stopped for the access (like the DMA controller has to stop the processor for DMA transfers).

    Depending on the target MSP (not available on all), the debugger may stop the timers/clocks during this fetch, so at least inside the MSP there are no inconsistencies caused by the fetch. However, external timings (like incoming data) are affected by this ‘dead time’.
