
Setting up a USART byte-reception timeout

I configured my USART module as follows:

  • Baud rate = 1200 bit/sec.
  • 8 bit character.
  • One stop bit.
  • Odd parity check.

The time slot for a single 8-bit character, as I understand it, is (11/1200)*1000 = 9.17 ms (11 bits = 1 start bit + 8 data bits + 1 parity bit + 1 stop bit).
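For reference, converting that character time into Timer_A ticks (the 1 MHz SMCLK and the absence of a timer input divider are only assumed here for illustration; the exact number depends on the clock configuration):

/* Sketch: character time expressed in Timer_A ticks.
   ASSUMPTION: SMCLK = 1 MHz and no timer input divider (ID = 0). */
#define SMCLK_HZ        1000000UL
#define BAUD_RATE       1200UL
#define BITS_PER_CHAR   11UL    /* start + 8 data + parity + stop */

/* 1 MHz * 11 / 1200 = 9166 ticks, i.e. ~9.17 ms per character. */
#define CHAR_TIME_TICKS ((unsigned int)((SMCLK_HZ * BITS_PER_CHAR) / BAUD_RATE))

This is essentially the value I later pass to the timer as some_tick (or a somewhat larger one to leave some margin).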

My goal is to set up reliable time-out logic for the possible "gap" between received bytes. I have tried the following setup:

Inside the USART reception handler:

#pragma vector = UART0RX_VECTOR
__interrupt void USART0_RX(void)
{
    // some code to stop timer A here
    ....
    expiry_Timer_Stop();

    rxBuffer[rxIndex++] = U0RXBUF;

    // some code to start timer A here
    ....
    expiry_Timer_Start(some_tick);
    // I also have some logic to prevent timer A from starting again after the
    // last byte of the incoming frame has arrived.
    // The some_tick parameter specifies when the timer expires in up mode.

    // The four-dot string "...." stands for control logic that I think is not
    // relevant to my question.
}

I set some_tick to the value that makes timer A count up to 9.17 ms (which, according to the protocol, is the maximum allowed gap between characters). Timer A works in up mode, so when TAR counts up to some_tick, the MCU enters the interrupt routine:

#pragma vector = TIMERA0_VECTOR
__interrupt void Timer_A(void)
{
  receptionTimeOut = TRUE;   // polled by the main loop; CCIFG0 is cleared automatically on servicing
  _NOP();
}

The boolean flag receptionTimeOut lets the main loop detect the time-out between bytes. The main() function basically has only two states (Transmission and Reception) that it switches between.
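Roughly, it looks like this (a simplified sketch only; the state names and the frame_complete()/send_response() helpers are placeholders rather than my actual code):

typedef enum { STATE_RECEPTION, STATE_TRANSMISSION } comm_state_t;

void main(void)
{
    comm_state_t state = STATE_RECEPTION;

    for (;;)
    {
        switch (state)
        {
        case STATE_RECEPTION:
            if (receptionTimeOut)                  // set in the Timer_A ISR
            {
                receptionTimeOut = FALSE;
                rxIndex = 0;                       // discard the partial frame
            }
            if (frame_complete())                  // placeholder: complete valid frame in rxBuffer?
            {
                state = STATE_TRANSMISSION;
            }
            break;

        case STATE_TRANSMISSION:
            send_response();                       // placeholder: send the reply frame
            state = STATE_RECEPTION;
            break;
        }
    }
}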

The problem is that this approach does not always work. The way I test this setup is simply to send a normal, valid frame (one the MCU is programmed to accept) from the PC, yet the expiry timer times out (the MCU enters __interrupt void Timer_A()) every time a byte is received. I then set some_tick a bit longer than 9.17 ms, say 10 ms or 12 ms; the MCU then sometimes works as I wished, and other times it still times out.

My question is: would this setup be a workable solution, or did I make it too complex to test?

I also think there might be a flaw in my timer configuration, so I have posted the relevant code here:

void expiry_Timer_init()
{
    TACTL |= (TASSEL1 | TACLR | MC0); // clock source from SMCLK, up-mode counting.
    TACCTL0 |= CCIE;
}

void expiry_Timer_Stop()
{
    TACTL = 0x00; 
}

void expiry_Timer_Start(unsigned int interval)
{
    if (interval > 0)
    {
TACTL |= (TASSEL1 | TACLR | MC0); // clock source from SMCLK, up-mode counting.
        TACCR0 = interval;
        TACCTL0 |= CCIE;
    }
}

  • In theory, the setup should work. However, there are some minor flaws:
    1) In expiry_Timer_init, you start the timer too. Since you 'init' the timer in expiry_Timer_Start anyway (setting TACTL etc.), the init is superfluous, and it immediately starts generating interrupts while not being synchronized to the reception (see the sketch after this list).
    2) Do you use a real RS232 (COM) port or a USB version? With USB, the data sent by the PC goes in packets to the USB device, which turns them into a serial stream. Because of this, there are gaps in the transmission. Except for a few more sophisticated USB/serial converters, you will never get a 100% constant data stream, even in the (unlikely) case that neither the OS nor the USB layer nor other peripheral activity causes further delays.
    3) With software-based synchronization, you'll always have some interrupt latency. The ISR isn't called instantly; other ISRs may currently be executing (an ISR won't interrupt another ISR, even if it has higher priority). This needs to be taken into account when setting the expiration time. Imagine the RX interrupt being serviced when the next byte has already started and the start bit or even a few data bits have already been received. This is no problem for the RX ISR, as you have until the next byte has been completely received to read the previous one. However, it also means your (tight) timer has already detected a timeout by the time the ISR executes.
    Unless you really need the timeout to be that tight, I'd go for a two-byte delay: if the RX ISR isn't called within the time it takes to receive two bytes, you either have an overrun (the ISR wasn't called in time for some reason) or a timeout.
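
    For point 1, something like this would configure the timer without actually starting it (just a sketch based on your own register setup; the interval passed to Start() is up to you, e.g. two character times):

    void expiry_Timer_init(void)
    {
        TACTL   = TASSEL1 | TACLR;       // SMCLK source, cleared, MC bits = 0 -> timer halted
        TACCTL0 = 0;                     // no interrupt until a timeout is actually armed
    }

    void expiry_Timer_Start(unsigned int interval)
    {
        if (interval > 0)
        {
            TACCR0  = interval;                // e.g. two character times worth of ticks
            TACCTL0 = CCIE;                    // enable the CCR0 interrupt (clears a stale CCIFG)
            TACTL   = TASSEL1 | TACLR | MC0;   // now start counting in up mode
        }
    }

    void expiry_Timer_Stop(void)
    {
        TACTL   &= ~(MC1 | MC0);         // halt the timer
        TACCTL0 &= ~CCIE;                // and disable its interrupt
    }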

    Anyway, I'd have done it differently:
    Let the timer run in up mode. It can then be used for other things too (e.g. giving you a 1 ms interrupt for main timing).
    When a byte is received, set TACCR0 = TAR and set CCIE. This starts the delay.
    When the timer expires, clear CCIE (so you won't get additional interrupts while the gap continues).
    It still won't make up for any ISR latency.
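
    A sketch of this "arm a compare on each received byte" idea; note that I've put the timer in continuous mode and used CCR1 for the timeout (so CCR0 stays free for a periodic tick), and the tick count assumes a 1 MHz SMCLK - adjust both to your own setup:

    #define TIMEOUT_TICKS  9167u                 // ~9.17 ms at 1 MHz SMCLK; widen it to allow for latency

    void expiry_Timer_init(void)
    {
        TACTL = TASSEL1 | TACLR | MC1;           // SMCLK, continuous mode, free-running
    }

    // Call this from the RX ISR after reading U0RXBUF.
    void arm_byte_timeout(void)
    {
        TACCR1  = TAR + TIMEOUT_TICKS;           // compare fires TIMEOUT_TICKS from now (wraps safely)
        TACCTL1 = CCIE;                          // arm CCR1 (also clears a stale CCIFG)
    }

    #pragma vector = TIMERA1_VECTOR              // shared by CCR1, CCR2 and the overflow flag
    __interrupt void Timer_A1(void)
    {
        switch (TAIV)                            // reading TAIV clears the highest pending flag
        {
        case 0x02:                               // CCR1: the inter-byte gap has expired
            TACCTL1 &= ~CCIE;                    // disarm until the next byte arrives
            receptionTimeOut = TRUE;
            break;
        default:
            break;
        }
    }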

    You can also use the start-edge detect feature: if, after receiving a byte, the start edge of the next one isn't detected in time, there is a timeout. However, this requires ensuring that the ISR latency stays below one bit length.

    Another possible approach is to capture the timing of the edges of the serial signal. For an uninterrupted transfer, the difference between the rising edge of the stop bit and the falling edge of the next start bit is exactly one bit. You can capture both edges with the timer. In the RX ISR, you can wait for the falling edge to be captured and compute the difference between it and the last rising-edge capture. It's a bit ugly, as it means in the worst case wasting 1/11 of your time inside the RX ISR until you see that the start edge isn't coming (and it requires two additional wires), but if you need it that tight... The advantage is that you know the next byte hasn't started in time, rather than waiting until it has not been received in time.
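
    A fragment of what the capture side could look like (assumptions: RXD is also wired to a capture-capable pin, P1.2/TA1 feeding CCI1A here; a 1 MHz SMCLK; and the two-bit tolerance is only an example). This fragment merely flags a late start edge once it finally arrives; the wait-inside-the-RX-ISR variant described above would use the same capture setup:

    #define BIT_TICKS  833u                          // one bit time at 1200 baud with a 1 MHz SMCLK

    void edge_capture_init(void)
    {
        P1SEL  |= BIT2;                              // route P1.2/TA1 to the timer (capture input CCI1A)
        P1DIR  &= ~BIT2;
        TACTL   = TASSEL1 | TACLR | MC1;             // SMCLK, continuous mode
        TACCTL1 = CM_3 | CCIS_0 | SCS | CAP | CCIE;  // capture both edges, synchronized to the timer clock
    }

    // Called for the CCR1 case of the TIMERA1_VECTOR handler.
    void handle_rxd_edge(void)
    {
        static unsigned int last_rising;

        if (TACCTL1 & CCI)                           // line is high now -> a rising edge was captured
        {
            last_rising = TACCR1;                    // start of the stop bit
        }
        else                                         // falling edge -> start bit of the next byte
        {
            unsigned int gap = TACCR1 - last_rising;
            if (gap > 2u * BIT_TICKS)                // tolerance is an arbitrary example
                receptionTimeOut = TRUE;             // the next byte started too late
        }
    }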
