This thread has been locked.


SLAA281b and micro SD

Other Parts Discussed in Thread: MSP430F1611

According to the Secure Digital (SD) Wikipedia article (http://en.wikipedia.org/wiki/MicroSD) and the SD Association (http://www.sdcard.org/developers/tech/sdcard/pls/), there are several MMC / SD flash card types, with micro SDHC being the most common today.

Do the circuit and source code from SLAA281b apply to the 8-pin micro SD and SDHC cards covered by the "Physical Layer Simplified Specification" version 2.0 (2006) and version 3.01 (May 18, 2010)?  I know it is an obtuse question, but these devices are very finicky about pull-up resistor / timing / clocking details for SPI.

  • I tried to interface MMC/SD cards with this code too, and I found it to be of limited use. Especially the detection and initialisation are not really usable in a microcontroller environment. So I rewrote the whole thing, and now it works with MMC and SD cards up to 2GB. Even the micro SD.
    It does not work at all with SDHC cards for two reasons:
    1) the SDHC cards have a different internal structure to describe their capabilities. This is mainly caused by the fact that the older structure, used on MMC cards and on SD cards in MMC compatibility mode, can only describe a maximum of 2GB of memory and does not cover the low-voltage modes of the newer cards, and
    2) for this reason, the MMC compatibility mode is usually no longer present in SDHC cards.
    The TI code uses the SPI serial interface for accessing the cards. This allows a maximum transfer speed of about 3MB/s (25MHz clock speed). The SD cards usually use a 4-bit parallel/serial bus which can deliver 4 times the data. And since data-organisation compatibility isn't possible with SDHC cards at all, the MMC-compatible single-bit serial bus (and all the internal commands and firmware) has often been dropped too, and the card will only work in SD mode, which is completely different to handle.

    With my code, based on the TI code, and more or less the circuit in SLAA281b (some changes due to the 3.3V SD card and 3.6V MSP supply voltage), I was able to write ~300kB/s and read ~500kB/s with an 8MHz MSP430F1611, and I plan to quadruple this speed on the 5438 with its 16MHz clock and improved SPI.
    Detection of a card change and the card initialisation are done with a state machine in a timer interrupt, so they happen completely in the background and do not interfere with the normal program flow.

    There are some other threads about SD cards and SPI with more details already in this forum.
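    A background detection/init state machine of the kind described above can be sketched in plain C. This is a minimal illustration with made-up names (sd_tick, card_present, sd_try_init are not from SLAA281b or any TI code); the hardware accesses are replaced by simulation variables so the control flow itself can be shown and run anywhere:

```c
#include <stdint.h>

/* Background card-detect / init state machine -- a minimal sketch of the
 * approach described above. On the real target, card_present() would read
 * a card-detect pin and sd_try_init() would run one init attempt over SPI;
 * here they are simulation stand-ins. Call sd_tick() from a periodic timer
 * interrupt. */

typedef enum { SD_NOCARD, SD_DEBOUNCE, SD_INIT, SD_READY } sd_state_t;

static sd_state_t sd_state = SD_NOCARD;
static unsigned sd_count = 0;

/* Simulation stand-ins for the hardware (set these from a test/demo): */
int sim_card_present = 0;   /* 1 = a card sits in the slot */
int sim_init_ok = 0;        /* 1 = an init attempt succeeds */

static int card_present(void) { return sim_card_present; }
static int sd_try_init(void)  { return sim_init_ok; }

void sd_tick(void)
{
    switch (sd_state) {
    case SD_NOCARD:                    /* wait for a card to appear */
        if (card_present()) { sd_count = 0; sd_state = SD_DEBOUNCE; }
        break;
    case SD_DEBOUNCE:                  /* let the contacts settle for a few ticks */
        if (!card_present())       sd_state = SD_NOCARD;
        else if (++sd_count >= 5)  sd_state = SD_INIT;
        break;
    case SD_INIT:                      /* retry the init each tick until it succeeds */
        if (!card_present())       sd_state = SD_NOCARD;
        else if (sd_try_init())    sd_state = SD_READY;
        break;
    case SD_READY:                     /* normal operation; watch for removal */
        if (!card_present())       sd_state = SD_NOCARD;
        break;
    }
}

int sd_card_ready(void) { return sd_state == SD_READY; }
```

    A slow or finicky card simply keeps the machine in SD_INIT for a few more ticks, so the main program flow never blocks on the card.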

  • Hi,

    This brings the following question:

    Are there any plans to implement an SDIO interface in higher-end MSP430 and Stellaris microcontrollers?

    As SD cards are now a very common and important part of embedded systems, this should be seriously considered by TI engineering and strategic departments. Competitors like ST and Atmel have had them for some time already, at least on Cortex-M3 microcontrollers.

    Regards

    Jan

  • Hi Jan,

    from my point of view, SD cards have been used in SPI mode most of the time (or by most applications), since no dedicated peripheral is needed. I would appreciate it if TI released an App Note for an SD card interface to the MSP430 which uses the SPI at full speed too.

    To process the data on the SD card, you should also think about implementing a file system, e.g. FAT16.

    In case of the Stellaris MCUs, you should have a look at StellarisWare, since there is an SD card and FAT16 example included (e.g. have a look at the software for this kit: http://focus.ti.com/docs/toolsw/folders/print/eks-lm3s6965.html).

    Rgds
    aBUGSworstnightmare

  • Hi aBUGSworstnightmare,

    You are entitled to your own opinion. I hope that the TI people are more open-minded.

    Remember that High Capacity (SDHC) cards do not need to support the SPI interface, which means that some of them will work and some will not.

    That leaves you with the option of selecting only certain brands and capacities of card, and more importantly, you have to explain that to your customers.

    As TI has IP for SDIO implemented on other processors (OMAPs), it shouldn't be difficult to move it to Stellaris.

    Maybe it would be too difficult with MSP430.

    Once again, Atmel and ST (and maybe others as well) have an SDIO interface on their Cortex-M3 micros.

    Theoretically you would then be able to interface other devices, not only memory cards, provided that you have access to the SDIO specification to write the software, but that is a different story.

    Regards

    Jan 


  • There is nothing preventing you from a software-based implementation of the SD interface.
    Sure, it will be slower than with hardware, but then, on microcontrollers the amount of data isn't that high. And the slowest part is still the read/write time of the flash inside the SD card.
    And while the MMC card SPI interface implementation is really straightforward (since it is SPI, the hardware is simple, a plain bit-shifting operation, as it is for all 'real' SPIs), the implementation of the parallel interface is way more complex, as the data direction changes depending on the current command and the transmitted status values.
    So most of it cannot be done in plain hardware anyway; most of the handling has to be done in software. So any hardware support that does not include a complete implementation of the MMC protocol (a complete sub-processor) won't be of much help.

  • My objective is an MSP430-based data logger with embedded nonvolatile storage.  A micro SD card (inaccessible to the user) is advantageous:  (1) ultra-low power when idle, (2) streamlined architecture (SPI interface), and (3) low cost.  The advantages match the MSP430.  The challenges seem to be: (A) finding a card that implements the version 3.01 standard for SPI with 8-pin micro SD cards, and (B) implementing that interface since there are operating differences between cards of different capacities among manufacturers.

    SLAA281b and http://alumni.cs.ucr.edu/~amitra/sdcard/Additional/sdcard_appnote_foust.pdf are starting points for code development.  It's an interesting project, to be continued.
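    On the command layer, one detail worth knowing when starting from those documents: every SPI-mode command is a fixed 6-byte frame, and although CRC checking is otherwise optional in SPI mode, CMD0 (sent while the card is still in SD-bus mode) and CMD8 (the SDHC probe) are verified by the card, so those frames must carry a correct CRC7. A small sketch (the helper names are mine, not from the appnote):

```c
#include <stdint.h>

/* Build a 6-byte SD command frame: start/transmit bits + command index,
 * 32-bit argument (MSB first), and CRC7 with stop bit. CRC7 polynomial is
 * x^7 + x^3 + 1, computed bit-serially over the first five frame bytes. */

static uint8_t crc7(const uint8_t *msg, int len)
{
    uint8_t crc = 0;                       /* 7-bit shift register */
    for (int i = 0; i < len; i++) {
        for (int b = 7; b >= 0; b--) {
            uint8_t fb = ((crc >> 6) & 1u) ^ ((msg[i] >> b) & 1u);
            crc = (uint8_t)((crc << 1) & 0x7F);
            if (fb)
                crc ^= 0x09;               /* subtract the polynomial */
        }
    }
    return crc;
}

void sd_build_cmd(uint8_t frame[6], uint8_t cmd, uint32_t arg)
{
    frame[0] = (uint8_t)(0x40 | cmd);      /* start bit 0, transmit bit 1 */
    frame[1] = (uint8_t)(arg >> 24);
    frame[2] = (uint8_t)(arg >> 16);
    frame[3] = (uint8_t)(arg >> 8);
    frame[4] = (uint8_t)(arg);
    frame[5] = (uint8_t)((crc7(frame, 5) << 1) | 1);  /* CRC7 + stop bit */
}
```

    sd_build_cmd(f, 0, 0) produces the well-known frame 40 00 00 00 00 95; for CMD8 with argument 0x1AA (2.7-3.6V, check pattern 0xAA) the CRC byte comes out as 0x87.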

  • Paul Lander said:
    My objective is an MSP430-based data logger with embedded nonvolatile storage.

    Welcome to the club :)

    Paul Lander said:
    there are operating differences between cards of different capacities among manufacturers

    There are differences even with the same capacity. Timeouts, maximum clock speed (during init) and other parameters are different.
    I have about 10 different brands from 512MB to 1GB in use and got them all to work with the MSP430F1611 and the ATMega128, but it required some tweaking to implement all necessary checks and delays to make the first 4 work. The rest then worked seamlessly.
    There's also a difference between micro and mini/normal SD cards. The micros require a different init function (they don't react to the MMC init sequence, while the mini/normal SDs do).

    p.s.: in the mentioned appnote, the code uses one common DMA trigger and trusts in the DMA priorities. This is not a good approach when dynamically using resources; maybe the two available DMA channels do not have the right priority order. Yet it is no problem to use the RX and TX triggers for the two channels. Both are available and it works fine. Also, the transfer code uses an SPI clock divider of 2. On the newer devices with the USCI, a divider of 1 is allowed, pushing the maximum SPI clock to 25MHz, which is also the allowed maximum for all SD cards I've seen. It also pushes the DMA to its limits, as it takes 4 MCLK cycles per transfer and two transfers per data byte; since the CPU still uses some cycles (unless you go into LPM0), the SPI transfer won't run at the maximum speed. Still faster than with the /2 divider.

  • Jens, is there a chance you can post up your SD code for the 1611? Thanks!

  • Victor Youk said:
    Jens, is there a chance you can post up your SD code for the 1611? Thanks!

    I don't think my boss will fire me if I do, but I won't take a chance. Sorry.
    I'm already testing his patience by 'wasting' so much time here :)
    Maybe you can ask him for a license, but due to the low number of licensees and the effort to put into the paperwork, I don't think you'd like his answer :)

    For him, my duty here is to get and not to give. ;)

  • Oh well, now I'm working from the previously linked PDF/appnote (which uses very similar code... I guess the professor used the appnote as a reference).

    On the unchanged code for either, the only errors I get are on these lines:


    DMA1SA = U1TXBUF_;

    DMA1DA = U1TXBUF_;

    DMA0SA = U1RXBUF_;


    It compiles fine without the underscores. Unfortunately it still doesn't run, as it appears that the DMA never indicates that it is done - the code gets hung up here:


    /* Just twiddle our thumbs until the transfer's done */

    while ((DMA0CTL & DMAEN) != 0) { }

    Now if I change it from single transfer to block transfer, the code does not get hung up there, but the data read back is different from what was received. I suppose this has something to do with my deleting the underscores, since it did compile for TI and for the professor.


  • Block transfer does not wait until the next interrupt is flagged, so after the initial interrupt, all data is shifted at once. Of course this is faster than the data is sent, and therefore the received data won't match (TX buffer overrun).

    I don't understand the lines above: if DMA1 is shifting from U1TXBUF to U1TXBUF, it makes no sense except for stuffing dummy bytes into the send channel.

    There are some differences between the USART and the USCI module: USCI interrupts are level-triggered, while USART interrupts are edge-triggered. The problem is that after stuffing the first byte into TXBUF, TXIFG stays set (since the content of TXBUF is immediately forwarded to the output shift register) and no second DMA transfer is triggered. Hence no more bytes are moved into TXBUF and the transfer stalls.

    It might be necessary to switch the DMA mode to level-triggered DMA.

    Personally, I never used DMA on the 1611 with the USART (where I implemented the SD support first). The lower SPI clock rate (clock/2) provided enough time (16 MCLK cycles) for reading and stuffing the next byte. On the 54xx, the maximum speed is clock/1, so only 8 MCLK cycles remain, and DMA is faster than a wait/stuff loop.

    I don't know where these underscores come from. Probably a header file is missing, or you are using a different compiler.
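    The stall can be illustrated with a toy simulation (plain C, not MSP430 code): model a trigger flag that, like TXIFG on the USCI, goes high after the first byte and then simply stays high. An edge-detecting trigger fires once and never again; a level-sensitive trigger keeps firing:

```c
/* Toy model of the edge- vs. level-trigger difference described above.
 * On the USCI, TXIFG is set again immediately (TXBUF is forwarded to the
 * output shift register at once), so after the first byte the flag is
 * permanently high: an edge-triggered DMA never sees another 0->1
 * transition, while a level-triggered DMA keeps firing. Simplified model,
 * not real register behaviour. */

enum trig_mode { TRIG_EDGE, TRIG_LEVEL };

/* How many DMA transfers happen for `wanted` bytes when the trigger flag
 * is high and never clears? */
int transfers_done(enum trig_mode mode, int wanted)
{
    int flag = 1;      /* USCI TXIFG: set, and it *stays* set */
    int prev = 0;      /* previous flag state, for edge detection */
    int done = 0;

    for (int cycle = 0; cycle < 10 * wanted && done < wanted; cycle++) {
        int fire = (mode == TRIG_LEVEL) ? flag : (flag && !prev);
        prev = flag;
        if (fire)
            done++;    /* the DMA moves one byte into TXBUF */
        /* the flag never clears, so no new edge ever appears */
    }
    return done;
}
```

    In this model, TRIG_EDGE moves exactly one byte no matter how many were wanted, while TRIG_LEVEL completes the whole block - which is exactly the stall-vs-run behaviour described above.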

  • Sorry for the confusion - the code I posted above was just small snippets from the SD block-read code here:


    int sd_read_block (sd_context_t *sdc, u32 blockaddr, unsigned char *data)
    {
    	unsigned long int i = 0;
    	unsigned char tmp;
    	
    	unsigned char blank = 0xFF;
    	
    	/* Adjust the block address to a linear address */
    	blockaddr <<= SD_BLOCKSIZE_NBITS;
    	
    	/* Wait until any old transfers are finished */
    	sd_wait_notbusy (sdc);
    	
    	/* Pack the address */
    	sd_packarg(argument, blockaddr);
    	
    	/* Need to add size checking */
        if (sd_send_command(sdc, CMD17, CMD17_R, response, argument) == 0)
     		return 2;
    	
    	/* Check for an error, like a misaligned read */
    	if (response[0] != 0)
    		return 3;
    	
    	/* Re-assert CS to continue the transfer */
    	spi_cs_assert();
    	
    	/* Wait for the token */
    	i=0;
    	do
    	{
    		tmp = spi_rcv_byte();
    		i++;
    	}while ((tmp == 0xFF) && i < sdc->timeout_read );
    	
    	if ((tmp & MSK_TOK_DATAERROR) == 0)
    	{
    		/* Clock out a byte before returning */
    		spi_send_byte(0xFF);
    		/* The card returned an error response. Bail and return 0 */
    		return 4;
    	}
    	
    	/* Prime the interrupt flags so things happen in the correct order. */
    	IFG1 &= ~URXIFG0;
    	IFG1 &= ~UTXIFG0;
    	
    	/* Get the block */
    	/* Source DMA address: receive register. */
    	DMA0SA = U0RXBUF;
    	/* Destination DMA address: the user data buffer. */
    	DMA0DA = (unsigned short)data;
    	
    	/* The size of the block to be transferred */
    	DMA0SZ = SD_BLOCKSIZE;
    	/* Configure the DMA transfer*/
    	DMA0CTL = 0x0CF0;
    		//DMADT_0 | /* Single transfer mode */
    		//DMASBDB | /* Byte mode */
    		//DMAEN | /* Enable DMA */
    		//DMALEVEL |
    		//DMADSTINCR1 | DMADSTINCR0; /* Increment the destination address */
    	
    	/* We depend on the DMA priorities here. Both triggers occur at
    	the same time, since the source is identical. DMA0 is handled
    	first, and retrieves the byte. DMA1 is triggered next, and
    	sends the next byte. */
    	/* Source DMA address: constant 0xFF (don't increment)*/
    	DMA1SA = (unsigned short)&blank;
    	/* Destination DMA address: the transmit buffer. */
    	DMA1DA = U0TXBUF;
    	/* Increment the destination address */
    	/* The size of the block to be transferred */
    	DMA1SZ = SD_BLOCKSIZE-1;
    	/* Configure the DMA transfer*/
    	DMA1CTL = 0x00F0;
    		//DMADT_0 | /* Single transfer mode */
    		//DMALEVEL |
    		//DMASBDB | /* Byte mode */
    		//DMAEN; /* Enable DMA */
    	
    	/* DMA trigger is UART receive for both DMA0 and DMA1 */
    	DMACTL0 = DMA0TSEL_3 | DMA1TSEL_3;
    	//DMACTL0 = 0x0033;
    	
    	/* Kick off the transfer by sending the first byte */
    	U0TXBUF = 0xFF;
    	return 1;
    }


    I did add the level triggering; however, same result - it gets hung up waiting for DMAEN to clear.

  • Victor Youk said:
    IFG1 &= ~URXIFG0; IFG1 &= ~UTXIFG0;

    Do you have the IE flags set? This will prevent the DMA from being triggered.

    Victor Youk said:
    DMA1SZ = SD_BLOCKSIZE-1;
    [...]
    /* Kick off the transfer by sending the first byte */
    U0TXBUF = 0xFF;


    This setup, together with the fact that you trigger both transfers by the RXIFG flag, will not run at maximum speed and won't work with level-triggering (as required for the new MSPs). Why?
    First, because you start the transfer of the received byte only after it has been received (of course) and the TX process has finished (TX buffer and output shift register empty). Only after the received byte has been written is the DMA transfer of the next byte into TXBUF started. This means that during both transfers (8 MCLK cycles), plus (I think) one more SPICLK, the SPI is idle. So if the SPI is running with MCLK/2 (the maximum on the USARTs), for every 8 busy ticks the SPI is idle for another 5 ticks, leaving it at 61% of the maximum speed.
    Also, reading from RXBUF will actually clear the RXIFG bit, so in level-triggered mode the DMA trigger is reset before the second DMA channel fires - so it never fires.

    In my own function, I use separate triggers for RX and TX, and level mode. This way, writing to TX starts as soon as I enable the TX DMA, which immediately asks for two bytes (one for the shift register and, right after it, a second one for the instantly emptied TXBUF). So the DMA size for TXBUF is the same as the RX size, and I don't push anything into TXBUF myself. DMA priority is still important (so RX is served before TX).
    When running the SPI at full MCLK, this is a very tight schedule: 8 MCLK cycles per byte, and the two DMA transfers per byte also take 8 MCLK cycles. So GIE must be cleared during the transfer (no IRQs), and the CPU is effectively stopped.

    If you have a debugger attached, you can try reading out the current DMA registers, so you know where in the chain of events the DMA stops. Does it stop at the last byte, missing a trigger, or does it not start at all?
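    For the F1611/USART0 case, a rough, untested register sketch of that scheme - separate RX/TX triggers, equal sizes, level mode, no manual write to TXBUF. It assumes the F16x DMA trigger assignments (3 = URXIFG0, 4 = UTXIFG0) and the `data`/`blank` variables from the read function posted earlier:

```c
/* Sketch only: separate RX/TX triggers instead of one common trigger. */
DMACTL0 = DMA0TSEL_3 | DMA1TSEL_4;      /* DMA0 <- URXIFG0, DMA1 <- UTXIFG0 */

DMA0SA = (unsigned int)&U0RXBUF;        /* note: *address* of the register  */
DMA0DA = (unsigned int)data;
DMA0SZ = SD_BLOCKSIZE;
DMA0CTL = DMADT_0 | DMASBDB | DMADSTINCR_3 | DMALEVEL | DMAEN;

DMA1SA = (unsigned int)&blank;          /* constant 0xFF, not incremented   */
DMA1DA = (unsigned int)&U0TXBUF;
DMA1SZ = SD_BLOCKSIZE;                  /* same size as RX ...              */
DMA1CTL = DMADT_0 | DMASBDB | DMALEVEL | DMAEN;
/* ... and no manual write to U0TXBUF: UTXIFG0 is already set, so the TX
 * channel starts the transfer by itself. DMA0 has priority over DMA1, so
 * each received byte is stored before the next dummy byte goes out. */

__disable_interrupt();                  /* tight timing: no IRQs meanwhile  */
while (DMA0CTL & DMAEN) ;               /* wait for the RX channel to end   */
__enable_interrupt();
```

    Treat the trigger numbers and bit names as things to verify against the x1xx Family User's Guide for your exact device.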
