MSP430WARE I2C interrupt confusion

Other Parts Discussed in Thread: MSP430WARE, MSP430F5438

I'm writing a test of the usci_b_i2c driver in MSP430WARE. I'm using IAR and an F5438.

I wrote a very simple I2C master multi-byte receive test.

It works fine, but something strange shows up when checking the SDA/SCL waveforms.

Every RXIFG at the master end occurs about 70-150 us after the byte has actually been received,

and then SCL is held low for 30-50 us during reception of the next byte (between two seemingly random bits of that byte).

It's very strange. What's the reason?

The code is really simple:

   void main (void) {
       I2C_initMaster();
       __bis_SR_register(GIE);
       USCI_B_I2C_masterMultiByteReceiveStart(USCI_B0_BASE);
       __bis_SR_register(LPM0_bits + GIE);
   }

   #pragma vector = USCI_B0_VECTOR
   __interrupt void USCI_B0_ISR (void) {
       switch (__even_in_range(UCB0IV,12)) {
           case USCI_I2C_UCRXIFG: {
               toggleBluePin();
               buf[i] = USCI_B_I2C_masterMultiByteReceiveNext (USCI_B0_BASE);

  • Taurus Ning said:
     buf[i] = USCI_B_I2C_masterMultiByteReceiveNext (USCI_B0_BASE);

    ISR cannot end this way... please post the missing code.

    Regards,

    Peppe

  • #pragma vector = USCI_B0_VECTOR

    __interrupt void USCI_B0_ISR (void) {
        switch (__even_in_range(UCB0IV,12)) {
            case USCI_I2C_UCRXIFG: {
                toggleBluePin();                        // marker pin for the scope
                buf[idx] = USCI_B_I2C_masterMultiByteReceiveNext (USCI_B0_BASE);
                idx++;
                if (idx >= DATA_LEN) {
                    USCI_B_I2C_masterMultiByteReceiveFinish (USCI_B0_BASE);
                    __bic_SR_register_on_exit(LPM0_bits);   // wake main() when done
                }
                break;
            }
            default:
                break;
        }
    }

    Besides, I ran the example usci_b_i2c_ex1_masterRxSingle.c from 430WARE,

    and the problem is the same. So is it a bug in the 430WARE driver?

  • Taurus Ning said:
    __bic_SR_register_on_exit(LPM0_bits);

    Here you are returning from the interrupt to active mode... but your main does not have an ending infinite loop, so the CPU starts executing random code, probably resets, and the whole thing begins again...
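
    Something along these lines at the end of main() would avoid that (just a sketch):

        __bis_SR_register(LPM0_bits + GIE);   // sleep until the ISR wakes us
        // ... process buf[] here after the transfer is complete ...
        while (1);                            // never fall off the end of main()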

    Taurus Ning said:
    So is it a bug in the 430WARE driver?

    It could be; it would not be the first.

    Regards,

    Peppe

  • The problem I asked about happens well before the LPM exit. It happens during the transfer itself.

  • Taurus Ning said:
            buf[idx] = USCI_B_I2C_masterMultiByteReceiveNext (USCI_B0_BASE);

    You shouldn't call any function that needs to wait for a result from inside an ISR. Never.

    Most likely, those functions are designed to be used in a polling main loop. If you do a transfer inside an interrupt, things must be done completely differently: implement a state machine inside the ISR, so it can enter and exit ASAP while doing only what is necessary for the next step in the transfer.
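
    A minimal sketch of what I mean, showing only the receive path; the buffer, index and length names (rxBuf, rxIdx, RX_LEN) are made up for the example:

        #include <msp430.h>
        #include <stdint.h>

        #define RX_LEN  8                              // hypothetical transfer length
        static volatile uint8_t  rxBuf[RX_LEN];
        static volatile uint16_t rxIdx = 0;

        #pragma vector = USCI_B0_VECTOR
        __interrupt void USCI_B0_ISR (void) {
            switch (__even_in_range(UCB0IV, 12)) {
                case USCI_I2C_UCRXIFG:                     // a byte is waiting in RXBUF
                    rxBuf[rxIdx++] = UCB0RXBUF;            // read it directly, no function call
                    if (rxIdx == RX_LEN - 1) {
                        UCB0CTL1 |= UCTXSTP;               // request STOP while the last byte is clocked in
                    } else if (rxIdx >= RX_LEN) {
                        __bic_SR_register_on_exit(LPM0_bits);  // transfer done, wake main()
                    }
                    break;                                 // nothing else: get out ASAP
                default:
                    break;
            }
        }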

  • Thanks for your reply... but it doesn't seem to be as you said.

    Here is the source code of the driver:

    uint8_t USCI_B_I2C_masterMultiByteReceiveNext (uint32_t baseAddress) {
        return (HWREG8(baseAddress + OFS_UCBxRXBUF));
    }

    Clearly, it's not polling.

  • Taurus Ning said:
    Clearly, it's not polling.

    I see. It just returns the value in the given RX register. What a waste of code, what a superfluous function. If it were at least an inline function, or a macro... but a real function call... Argh!

    What about replacing USCI_B_I2C_masterMultiByteReceiveNext (USCI_B0_BASE); with a simple read of UCB0RXBUF?
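
    In your RXIFG case that would just be something like this (keeping your own buf and idx names):

        buf[idx++] = UCB0RXBUF;    // direct register read, no call overhead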

    Well, how about a little bit of math...

    What are your clock frequencies? MCLK, SMCLK etc? At which speed is the I2C operating?

    It takes some time from the moment an interrupt is flagged to the execution of the first instruction of the ISR. Now if the ISR contains a function call (and yours apparently has two), the compiler needs to save some registers it otherwise wouldn't need to. This also adds to the latency. In the end, the delay you observe might simply be the time between the flagging of the interrupt and the execution of your pin-toggling code.

    Without knowing the whole system state, all I can do is guess.

  • The code runs on an MSP430F5438. The system clock is maybe 25 MHz? I2C uses SMCLK, and the data rate is set to 400 kbps:

    USCI_B_I2C_masterInit(USCI_B0_BASE,
                          USCI_B_I2C_CLOCKSOURCE_SMCLK,
                          UCS_getSMCLK(UCS_BASE),
                          USCI_B_I2C_SET_DATA_RATE_400KBPS);

    The delay that puzzles me is not the ISR execution time, but a delay that happens between random bits while the USCI_B is transferring an I2C byte. Is it some hardware behaviour, and what's the reason?

    I'll do some further tests on getting rid of the function calls.

    P.S. I sooooo agree with you about the stupid design of the USCI_B_I2C_masterMultiByteReceiveNext function, and lots of similar functions in the I2C driver of MSP430WARE.

    Thanks a lot for your help.

    Regards,
    TN

  • Taurus Ning said:
    The code runs on an MSP430F5438. The system clock is maybe 25 MHz?

    You don't know? You should. Don't you set it anywhere?
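
    You could at least read back what SMCLK really is, using the same driverlib call you already pass to the init function (just as a sanity check):

        uint32_t smclk = UCS_getSMCLK(UCS_BASE);   // actual SMCLK in Hz
        // if this reads roughly 1 MHz instead of 25 MHz, the clock system was
        // never configured and the chip still runs on its default DCO setting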

    Taurus Ning said:
    The delay that puzzles me is not the ISR execution time, but a delay that happens between random bits while the USCI_B is transferring an I2C byte. Is it some hardware behaviour, and what's the reason?

    It's not random. At certain points in its state machine, the USCI waits for you and holds the clock low until your software provides information, reads incoming bytes, etc. And if your code takes a long time to do so, the USCI will wait. Not necessarily at a byte border.

    So e.g. if a byte was received, the USCI immediately begins to receive the next byte. But at bit 7 it stops until you have read the previous byte from RXBUF. Now some time passes between the appearance of the interrupt and the point where you set the signal line (shown on the scope), and then more time passes until the proper operation is performed and the USCI can continue.

    Now let's assume that your system clock isn't 25 MHz. The default is about 1 MHz. At 400 kHz I2C a byte is then transferred within about 20 MCLK cycles. Interrupt latency and code execution time would fit the pattern you observe perfectly.
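
    Rough numbers, assuming the default ~1 MHz MCLK:

        // bit time   = 1 / 400 kHz        = 2.5 us
        // byte time  = 8 data bits * 2.5 us = 20 us  ->  about 20 MCLK cycles @ 1 MHz
        // interrupt entry, register saving and two function calls easily eat
        // more than those 20 cycles, so the USCI has to hold SCL low and wait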
