
MSP430F249: UART

Part Number: MSP430F249


I have rewritten the UART data management state machine to support the UART RX interrupt. However, the UART never generates an interrupt!

// UART initialization.
UCA0CTL1 |= UCSWRST; // Hold the USCI in reset while configuring.

UCA0CTL0 |= 0x00; // No parity, LSB first, 8-bit characters, one stop bit, UART mode, asynchronous mode
UCA0CTL1 |= 0x40; // ACLK
UCA0MCTL |= UCBRF_3 + UCBRS_5 + UCOS16; // 230,400 baud
UCA0BR0 |= 0x04; // Lower byte of baud rate prescaler
UCA0BR1 |= 0x00; // Upper byte of baud rate prescaler

UCA0CTL1 &= ~UCSWRST; // Release the USCI from reset (enable the UART).
IE2 |= UCA0RXIE; // Enable the UART receive interrupt.

// Copy character in the RX buffer into Serial_Port_Data.
#pragma vector=USCIAB0RX_VECTOR
__interrupt void USCI0RX_ISR(void) {
    _DINT();

    Serial_Port_Data = UCA0RXBUF;
    System_Flags |= UART_RX_DATA;

    _EINT();
}

Regards, Harvey

  • 1) How do you tell that the interrupt doesn't occur? A breakpoint in the ISR is usually convincing. 

    2) Do you enable GIE somewhere? [__enable_interrupt() or _EINT() or some such]

    3) Is System_Flags declared "volatile"?

    4) [Unsolicited:]

    _EINT();

    The CPU interrupt mechanism does the _DINT and _EINT for you: GIE is cleared automatically on interrupt entry and restored by RETI on exit. Using _EINT inside an ISR opens up a tiny window for nested interrupts that will most likely show up the night before product delivery.
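
    For reference, a minimal sketch of points 2) and 4), assuming the same flag and buffer names as in the original post (the flag bit value and main-loop processing are placeholders):

    #include <msp430.h>

    #define UART_RX_DATA 0x01                         // Placeholder flag bit

    volatile unsigned char Serial_Port_Data;          // volatile: written in the ISR, read in main()
    volatile unsigned char System_Flags;

    void main(void) {
        WDTCTL = WDTPW + WDTHOLD;                     // Stop the watchdog

        // ... clock and UART initialization as in the original post ...

        IE2 |= UCA0RXIE;                              // Enable the UART receive interrupt
        __enable_interrupt();                         // Set GIE once, here, not inside the ISR

        while (1) {
            if (System_Flags & UART_RX_DATA) {
                System_Flags &= ~UART_RX_DATA;
                // ... process Serial_Port_Data ...
            }
        }
    }

    // Lean ISR: GIE is cleared on entry and restored by RETI, so no _DINT()/_EINT().
    #pragma vector=USCIAB0RX_VECTOR
    __interrupt void USCI0RX_ISR(void) {
        Serial_Port_Data = UCA0RXBUF;                 // Reading UCA0RXBUF clears UCA0RXIFG
        System_Flags |= UART_RX_DATA;
    }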

  • I removed _DINT() and _EINT() from the ISR and all of a sudden I am getting interrupts! I am not sure why this creates problems but I will press on. 

    Now I have other issues but progress has been made.

    Thanks Bruce!

  • If that is the entire interrupt service routine, why bother? You have replaced checking RXIFG with System_Flags and RXBUF with Serial_Port_Data, gaining nothing in terms of time or functionality.

    It is in fact slower.
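
    For comparison, a sketch of the equivalent polling loop (no ISR at all), using the flag and buffer names from the F2xx device header:

    while (1) {
        if (IFG2 & UCA0RXIFG) {              // Hardware flag: a received byte is waiting
            unsigned char c = UCA0RXBUF;     // Reading UCA0RXBUF clears UCA0RXIFG
            // ... process c ...
        }
    }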

  • The following ISR runs about 8 to 10 times slower than expected. Counting clock cycles I estimate 2 to 2.4 microseconds. However, I measure almost 20 microseconds from LED on to LED off. Any ideas why?

    #pragma vector=USCIAB0RX_VECTOR
    __interrupt void USCI0RX_ISR(void) {

        P3OUT &= ~LED1n;                             // Turn on LED

        Upstream_Data[Upstream_Pointer] = UCA0RXBUF; // Write data to buffer memory.
        Upstream_Pointer++;                          // Point to the Packet_Command location in the Upstream_Data array.

        if (Upstream_Pointer >= Rx_Packet_Length) {
            System_Flags |= UART_RX_DATA;
            Upstream_Pointer = 0;                    // Initialize the array index.
        }

        P3OUT |= LED1n;                              // Turn off LED
    }

    The UART seems to be running correctly at 230,400 baud, and XIN measures 16.1 MHz. Clock initialization follows.

    //Set_Clock for 16 MHz clock into XIN.
    // DCOCTL = 0; // Select lowest DCOx and MODx settings
    BCSCTL1 = CALBC1_16MHZ; // 0x10F9 // Set range
    DCOCTL = CALDCO_16MHZ; // 0x10F8 // Set DCO step + modulation
    BCSCTL2 = SMCLK_2M; // 0x06
    BCSCTL3 = LFXT1S_2; // 0x20 // VLO for ACLK (~12 kHz)


    do {
        IFG1 &= ~OFIFG;                                            // Clear OSCFault flag
        for (DDC03_index = 0xFF; DDC03_index > 0; DDC03_index--);  // Time for flag to set
    } while (IFG1 & OFIFG);                                        // OSCFault flag still set?

    // Once chip is running set to desired operation, see register definitions for details
    DCOCTL = 0xE0; // DCO set to 16MHz
    BCSCTL1 = 0xCF; // XT2 oscillator off, LFXT1 in high-frequency mode, ACLK divider set to 1, RSEL highest range.
    // BCSCTL2 = 0x00; // MCLK and SMCLK source DCOCLK, Div-0
    BCSCTL2 = 0xC0; // ACLK = MCLK = XIN, divide by 1.
    BCSCTL3 = 0xF0; //

    // while (IFG1 & OFIFG); // OSCFault flag still set?

    // Once chip is running set to desired operation, see register definitions for details
    DCOCTL = 0xE0; // DCO set to 16MHz
    BCSCTL1 = 0xC7; //
    BCSCTL2 = 0xC0; // MCLK and SMCLK source DCOCLK, Div-0
    BCSCTL3 = 0xF0; //

    Thanks, Harvey

  • I don't get it. First you set the DCO using the calibrated values from information memory, then you change them to uncalibrated values.

    You check the oscillator fault flag but don't appear to have configured either LFXT or XT2.

    While your comment says you set MCLK to the DCO, the bits say SELM is 11, which is LFXT1. Lucky for you, the fail-safe system falls back to the DCO. There are handy symbols defined for these fields in the device header file (SELM_0, DIVM_0, etc.).

    There is a lot of fiddling with the clock system registers, so I don't know what you think you are doing. That final bit of fiddling most certainly does not set the DCO to 16 MHz.

    Oh, before using the calibrated values (CALDCO, etc.), you should verify the checksum for the TLV data to make sure it isn't trashed. How you handle a failure there is up to you: halt and catch fire, or use some best-guess values.
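
    To put both suggestions into code, a hedged sketch: the SELM_x/DIVM_x/LFXT1S_x symbols come from the 2xx device header, and the checksum rule assumed here (the Segment A checksum at 0x10C0 is the two's complement of the word-wise XOR of 0x10C2 through 0x10FE) should be verified against the 2xx family user's guide.

    #include <msp430.h>

    // Verify the Segment A (TLV) checksum before trusting CALBC1_16MHZ / CALDCO_16MHZ.
    // Assumption: (XOR of all words after the checksum) + (stored checksum) == 0 when intact.
    static int TLV_Checksum_OK(void) {
        unsigned int xor_sum = 0;
        const unsigned int *p = (const unsigned int *)0x10C2;   // First word after the checksum
        while (p < (const unsigned int *)0x1100)                // Segment A ends at 0x10FF
            xor_sum ^= *p++;
        return (unsigned int)(xor_sum + *(const unsigned int *)0x10C0) == 0;
    }

    // Clock setup using the named bit fields instead of magic numbers.
    void Set_Clock_16MHz(void) {
        if (TLV_Checksum_OK()) {
            BCSCTL1 = CALBC1_16MHZ;              // Calibrated range (RSELx)
            DCOCTL  = CALDCO_16MHZ;              // Calibrated DCOx + MODx
        } else {
            while (1);                           // Halt and catch fire, or load best-guess values
        }
        BCSCTL2 = SELM_0 | DIVM_0 | DIVS_0;      // MCLK and SMCLK from DCOCLK, both divided by 1
        BCSCTL3 = LFXT1S_2;                      // ACLK from the VLO (~12 kHz), no crystal needed
    }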

  • I inherited this code and find programming the clock section confusing in any case.

    I ripped out everything after the "while" and made some adjustments to the clock programming, and everything seems to be working the way it should.

    I am left with:

    DCOCTL = CALDCO_16MHZ;
    BCSCTL1 = XT2OFF + XTS + RSEL3 + RSEL2 + RSEL1 + RSEL0;
    BCSCTL2 = 0xC0; // MCLK and SMCLK source DCOCLK, Div-0
    BCSCTL3 = 0;

    do {
        IFG1 &= ~OFIFG;                                            // Clear OSCFault flag
        for (DDC03_index = 0xFF; DDC03_index > 0; DDC03_index--);  // Time for flag to set
    } while (IFG1 & OFIFG);                                        // OSCFault flag still set?

    Any comments or criticisms?

    Thanks

  • Hi Harvey,

    BCSCTL1 = XT2OFF + XTS + RSEL3 + RSEL2 + RSEL1 + RSEL0;

    I would change this to something like BCSCTL1 = XT2OFF | XTS | (0x0F & CALBC1_16MHZ), so you are using the calibrated value for RSELx (the lower four bits of BCSCTL1) at 16 MHz.

    We also typically include a check like the one below to make sure the calibration constants have not been erased:

      if (CALBC1_16MHZ == 0xFF)                 // If calibration constant erased
      {
        while(1);                               // do not load, trap CPU!!
      }

    In our examples we just trap, but as David said regarding checking the TLV Checksum, how you handle that error case is up to you. 
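
    If trapping is not an option, something like the sketch below falls back to rough defaults instead; the values here are uncalibrated placeholders, not recommendations, so expect noticeable frequency error:

      if (CALBC1_16MHZ == 0xFF || CALDCO_16MHZ == 0xFF) {
          BCSCTL1 = XT2OFF | RSEL3 | RSEL2 | RSEL1 | RSEL0;  // Highest range, rough guess
          DCOCTL  = DCO2 | DCO1;                             // Mid-to-high DCO tap, rough guess
      } else {
          BCSCTL1 = CALBC1_16MHZ;                            // Calibrated range
          DCOCTL  = CALDCO_16MHZ;                            // Calibrated DCO + modulation
      }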

    Best Regards,
    Brandon Fisher
