
I2C question: Why do I need a delay, and why doesn't my RX interrupt clear?

I2C loop. I want to read one byte from the slave, but I understand I have to read a minimum of two.

while (1) { i2cread(0); }

My I2C is working after I added a delay after sending the restart to the slave, but why is that required? Before I add any delay, I miss every other byte, but with the delay added, I catch all the data.

void i2cread(char registry)
{
	while (UCB0CTLW0 & UCTXSTP);            // wait for any previous stop to complete
	UCB0CTLW0 |= UCTR + UCTXSTT;            // I2C TX, start condition
	while ((UCB0IFG & UCTXIFG0) == 0);      // wait until TX buffer is ready
	UCB0TXBUF = registry;                   // send register address
	while (UCB0CTLW0 & UCTXSTT);            // wait until start + slave address are sent

	// send restart in receiver mode
	UCB0CTLW0 &= ~UCTR;
	UCB0CTLW0 |= UCTXSTT;
	__delay_cycles(100000);                 // without delay here, I only pick up every other byte
	while (UCB0CTLW0 & UCTXSTT);            // wait until restart completes
	UCB0CTLW0 |= UCTXSTP;                   // send stop
}



/////////////Interrupts//////////////////////////
#pragma vector = USCI_B0_VECTOR
__interrupt void USCIB0_ISR(void)
{
  switch(__even_in_range(UCB0IV,0x1E))
  {
    case 0x04:                              // Vector 4: NACKIFG
      UCB0CTLW0 |= UCTXSTP;
      break;
    case 0x16:                              // Vector 22: RXIFG0
      P3OUT ^= BIT0;                        // toggle pin for scope trace
      *bufferptr = UCB0RXBUF;               // reading RXBUF clears RXIFG0
      bufferptr++;
      break;
    default:
      break;
  }
}

Also, this seems odd: I was probing the SDA line along with my RX interrupt (P3.0 toggled on every interrupt received). My MSP430 keeps entering the ISR, which tells me UCB0RXIFG never gets cleared until the stop bit is set. See below for a better explanation. The top trace is SDA, while the bottom trace is P3.0 (toggled on every RX interrupt). After a start and restart, I keep entering my ISR during that long delay I put in. The interrupt clears only after I send a stop bit.

Please help me clear this up. I have it working but I don't understand the problem completely, and would like to know why it works the way it does.

  • If you don't insert the delay, this is a race condition. Your I2C clock is quite slow compared to the CPU clock, so once TXSTT clears, you 'instantly' set TXSTP, which may or may not (depending on the current internal state of the USCI) instantly stop the transfer or continue to receive at least one byte.
    After TXSTT has cleared, it takes at least one USCI clock cycle (IIRC the USCI input clock divided by the prescaler, but not by UCBRx) to advance to the point where reception of the first byte starts.
    So put a small delay after the while instead of a long delay before it. Or feed the USCI with the same clock as MCLK.

    About the other problem: well, you didn't post your USCI setup code. It looks like an unhandled interrupt, or you're receiving many bytes during your delay (I can't say, because I don't know the clock settings and the USCI init code).
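    As a sketch of the approach above (assuming a 100 kHz I2C clock and MCLK = 16 MHz, so one I2C bit time is roughly 160 MCLK cycles; register and bit names as in the MSP430FR59xx headers; polling instead of interrupts for clarity), a single-byte read could look like this:

    ```c
    // Hedged sketch, not tested on hardware: read one byte from the slave,
    // with a short bounded delay after the repeated start completes instead
    // of a long delay before waiting for UCTXSTT to clear.
    unsigned char i2c_read_one(unsigned char reg)
    {
        while (UCB0CTLW0 & UCTXSTP);        // previous stop finished?
        UCB0CTLW0 |= UCTR | UCTXSTT;        // transmitter mode, start condition
        while (!(UCB0IFG & UCTXIFG0));      // TX buffer ready for the address byte
        UCB0TXBUF = reg;                    // register address to read
        while (UCB0CTLW0 & UCTXSTT);        // start + slave address sent

        UCB0CTLW0 &= ~UCTR;                 // switch to receiver mode
        UCB0CTLW0 |= UCTXSTT;               // repeated start
        while (UCB0CTLW0 & UCTXSTT);        // restart complete
        __delay_cycles(160);                // ~1 I2C bit time: let reception begin
        UCB0CTLW0 |= UCTXSTP;               // stop after exactly one byte
        while (!(UCB0IFG & UCRXIFG0));      // wait for the received byte
        return UCB0RXBUF;                   // reading RXBUF clears UCRXIFG0
    }
    ```

    The only behavioral difference from your version is that the short delay sits after UCTXSTT has cleared, bounding the transfer to a single byte.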

  • Thanks for feeding me some info. Here's my USCI setup. Can you provide any more info on why I keep jumping into my ISR?

    //I2C Setup

    P1SEL1 |= BIT6 + BIT7;                  // pin init: route P1.6/P1.7 to eUSCI_B0
    UCB0CTLW0 |= UCSWRST;                   // put eUSCI_B in reset state
    UCB0CTLW0 |= UCMODE_3 + UCMST + UCSSEL_2 + UCSYNC; // I2C master mode, SMCLK
    UCB0BRW = 20;                           // baud rate = 100 kHz (SMCLK = 2 MHz, MCLK = 16 MHz)
    UCB0TBCNT = 0x01;                       // byte counter; only used with automatic stop
    UCB0I2CSA = 41;                         // slave address is 41
    UCB0CTLW0 &= ~UCSWRST;                  // clear reset
    UCB0IE |= UCNACKIE;                     // NACK interrupt enable
    UCB0IE |= UCRXIE0;                      // RX interrupt enable

  • Your intermittent use of UCB0CTLW0 and UCB0CTL1 is a bit confusing, and it might be a problem too:
    UCMODEx is bits 9 and 10 in UCB0CTLW0, but bits 1 and 2 in UCB0CTL0. Be sure you use the correct symbols (I can't check myself) or you're configuring the USCI to something completely different than you think.
    The bits in UCB0CTL1 are the same in UCB0CTLW0, but I guess different symbols exist for them too. Please double-check the header files.

    But let’s assume the bits are set correctly. If it were an unhandled interrupt, it would be called over and over again and main would effectively stall (so the delay would never expire).
    Apparently, you're receiving multiple bytes. After the start was sent, main waits for 100000 MCLK cycles, which at 16 MHz is 625 I2C clock cycles (at 100 kHz), or about 69 bytes. Even more, since the ISR execution adds to the delay.
    Without the delay, you sometimes stopped the transfer before the USCI had a chance to even start receiving a byte; with the delay, you stop the transfer after the USCI has received many bytes. (By the way, are you sure MCLK is 16 MHz?)
    If you just want to receive a single byte, all you need is to wait for one I2C clock cycle after UCTXSTT has cleared. This is 160 MCLK cycles in your case. Then set UCTXSTP.

    However, I'm not sure what happens to the RX interrupt for this last byte. The user's guide doesn't cover this case, and the flow diagram doesn't show whether RXIFG is set when UCTXSTP is set during byte reception. But since you got 'every other byte' without the delay, it seems it is generated.

    Alternatively, you might want to use the automatic stop generation (UCASTP_2 in UCB0CTLW1 and UCTBCNTx in UCB0TBCNT) and check for the UCBCNTIFG bit.
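    The automatic-stop suggestion above could be sketched like this (hedged, untested; UCASTP_2, UCTBCNT, and UCBCNTIFG as defined in the MSP430FR59xx headers):

    ```c
    // Sketch only: let the eUSCI_B generate the stop condition itself after
    // UCB0TBCNT bytes, instead of setting UCTXSTP manually from software.
    UCB0CTLW0 |= UCSWRST;                   // configure while held in reset
    UCB0CTLW1 |= UCASTP_2;                  // auto-stop when byte counter reaches UCTBCNTx
    UCB0TBCNT = 0x01;                       // receive exactly one byte
    UCB0CTLW0 &= ~UCSWRST;                  // release reset

    UCB0CTLW0 &= ~UCTR;                     // receiver mode
    UCB0CTLW0 |= UCTXSTT;                   // start (byte counter restarts here)
    while (!(UCB0IFG & UCRXIFG0));          // wait for the single byte
    unsigned char data = UCB0RXBUF;         // reading RXBUF clears UCRXIFG0
    while (!(UCB0IFG & UCBCNTIFG));         // counter reached: stop has been sent
    UCB0IFG &= ~UCBCNTIFG;                  // clear the byte-counter flag
    ```

    This removes the timing-sensitive software decision entirely, since the peripheral counts the bytes itself.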
