
ADC to SPI via DMA, CPUOFF

I would like to automate data acquisition and storage to off-board flash without using the CPU.  I'd like to use the DMA to transfer bytes from ADCMEMx to the USCI TX buffer.  The USCI can only handle 8b data length, but the output of the ADC is 16b wide.  The challenge is setting up triggers:

1.  First DMA transfer (high byte) should be triggered by ADC IFG when ADC is done with acquisition
2.  Second DMA transfer (low byte) should be triggered by USCI TXIFG when first byte is pushed into the TX buffer
3.  Next ADC acquisition should be triggered by USCI TXIFG after second byte is pushed into the TX buffer

However, a single DMA channel can only have one trigger source, so this scheme isn't possible as described.  Does anyone have any ideas about how this can be automated?  I was thinking of maybe using a timer to trigger the DMA, but that feels risky; I want to make sure the ADC and the USCI stay synchronized.
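For reference, the trigger assignment is per-channel: on the 2xx DMA, each channel's single trigger source is selected by its DMAxTSEL field in DMACTL0, so one channel cannot alternate between the ADC and USCI flags. A minimal fragment (register and bit names from the 2xx headers):

```c
/* 2xx family: exactly one trigger source per DMA channel, chosen in
   DMACTL0.  A single channel cannot switch between ADC12IFGx and
   UCA0TXIFG on alternating transfers. */
DMACTL0 = DMA0TSEL_6      /* DMA0: triggered by ADC12IFGx */
        + DMA1TSEL_4;     /* DMA1: triggered by UCA0TXIFG */
```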

There is a relevant discussion in this post http://e2e.ti.com/support/microcontrollers/msp43016-bit_ultra-low_power_mcus/f/166/p/41263/144901.aspx
but the implementation details were never discussed; I'm very curious about actually implementing this.

Thanks,

Mateja

  • Can you use two DMA channels, one for each trigger? You didn't mention which MSP430 part you're using, but many of them have multiple DMA channels.

  • It might be possible to do this with just the ADC trigger.

    The DMA can map 16-bit to 8-bit transfers. This is not explained in detail in the docs, but the logical way this would work is one 16-bit read followed by two 8-bit writes. If so, the SPI will usually be idle, because the ADC conversion takes longer than the transfer. If this is ensured, TXBUF will be empty and the shift register will be empty too. So when the ADC trigger comes, the memory is read and the first byte (MSB? LSB?) is written to TXBUF and immediately forwarded to the shift register. On the next clock cycle, TXBUF is empty again and will receive the other byte. This requires the SPI clock to be at least as fast as MCLK, since the DMA runs synchronized to MCLK and the move from TXBUF to the shift register is done at the SPI clock rate.

    Another approach would be double buffering. It requires two DMA channels. The first channel is triggered by the ADC and does a repeated block copy of the ADC12MEM to a dedicated memory buffer, word-wise.
    The second channel works in byte mode, copies this buffer to TXBUF, and is triggered by the first channel. So data (even more than one channel) goes from ADC12MEMx word-wise to RAM and from there byte-wise to TXBUF. This, however, will only work if SPICLK is faster than MCLK by at least a factor of 2 (a DMA transfer requires 4 MCLK cycles), so the content of TXBUF has already been sent when the DMA pushes the next byte in.

    But in many low-power applications, MCLK is intentionally slow anyway. And since the CPU does not have to do anything at all in this second setup...
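    The double-buffering idea above might be sketched as follows for a 2xx device. This is an untested, assumption-laden fragment: the SPI is taken to be USCI_B0, the trigger-select numbers (ADC12IFGx = 6, chained DMA0IFG = 14) should be verified against the device datasheet, and pacing relies on SPICLK being at least twice MCLK, as noted above.

```c
/* Sketch only: 2xx register names; verify the DMAxTSEL numbers and the
   SPI module (assumed USCI_B0 here) for your particular part. */
unsigned int buf[8];                        /* one ADC12MEM0..7 sequence    */

DMACTL0 = DMA0TSEL_6                        /* DMA0 triggered by ADC12IFGx  */
        + DMA1TSEL_14;                      /* DMA1 chained to DMA0IFG      */

DMA0SA  = (unsigned int)&ADC12MEM0;         /* word-wise: ADC12MEMx -> RAM  */
DMA0DA  = (unsigned int)buf;
DMA0SZ  = 8;                                /* size in words                */
DMA0CTL = DMADT_5 + DMASRCINCR_3 + DMADSTINCR_3 + DMAEN; /* repeated block */

DMA1SA  = (unsigned int)buf;                /* byte-wise: RAM -> SPI TXBUF  */
DMA1DA  = (unsigned int)&UCB0TXBUF;
DMA1SZ  = 16;                               /* size in bytes                */
DMA1CTL = DMADT_5 + DMASRCINCR_3 + DMASBDB + DMAEN;      /* repeated block */
```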

  • Andrew and Jens-Michel, thanks for your replies.

    On pg. 6-4 of the user guide in the DMA chapter (I'm using the 2xx), it states, "When transferring word-to-byte, only the lower byte of the source-word transfers."  Do you know of another way to configure the DMA to achieve your first suggestion?  To my knowledge, the DMA can't do this sort of transfer natively.  You are right that when the ADC trigger comes, the memory is read and the first byte is written to TXBUF and immediately forwarded to the shift register.  But the second byte that needs to be transferred must ideally be triggered by TXIFG or a timer; I'm afraid that if I configure it only for burst-block (block size 2 bytes), the DMA will not give the USCI enough time to transmit the previous byte.  Do you agree?

    I am currently trying to come up with a solution using one DMA channel with a timer, but if that fails, I'll try using chained DMA channels, as you suggested.

    Thanks,

    Mateja

  • This particular information is missing in the 5438 DMA chapter.
    And a word-to-byte transfer makes little sense if it only transfers the LSB. It is identical to a byte-to-byte transfer, except that in block mode the source address is incremented by two. Definitely much room for improvement.

    Another possible setup comes to mind. It requires three DMA channels :)
    Two channels are triggered by the ADC: the first transfers the LSB to TXBUF, the second transfers a fixed byte to a timer config register, starting the timer, and the third channel is triggered by the timer and transfers the MSB to TXBUF.
    But I doubt that this is doable, because I think the MSB of the ADC12MEMx registers cannot be read in byte mode at all. (I haven't tried, but I know for sure that the MSB of the DMA config registers cannot be read or written in byte mode, and writing the LSB will clear the MSB. Likely the ADC registers are limited to 16-bit access too.)

    You could drive the ADC in 8-bit mode only; then only the LSB needs to be transferred. As a side effect, it speeds up the conversion by 4 ADC12CLK cycles. And if you sample fast enough and your signal is not static (low frequency), you can combine 16 8-bit samples into a 12-bit result from the stored values.

    If (and only if) the MSB can be read by a byte read, it should be possible to time things for a burst-block write. It is necessary that
    1) the SPI clock is at least half as fast as MCLK (since one DMA transfer takes 4 MCLK cycles), so the first byte can go into the shift register before the second is written, and
    2) the samples come slowly enough that a complete 2-byte transfer can take place between two samples. Since a conversion requires at least 17 ADC12CLK cycles (4 S&H + 13 conversion), this is true if ADC12CLK is at most as fast as SPICLK, and
    3) the MSB can be read by a byte access (which has to be tested).
    If SPICLK is at least twice MCLK, this will also work for more than one conversion result (ADC12MEMx..y).

  • Here's what I've come up with.  As long as the comm speed is faster than the ADC, I believe it's going to be impossible to automate this whole process without the CPU, because the ADC must be stopped periodically, and to stop it you need the CPU.  In my code, DMA0 transfers a block of 100 words to memory, then an ISR stops the ADC and enables DMA1, which transfers the block to the UART.  I would very much appreciate any suggestions for improvement.  Thanks!  Sorry about the wacky formatting; I'm not sure how to use the comment editor to make the code look nice.

    Mateja

    #include <msp430x26x.h>

    unsigned int ucData[100];

    void main(void)
    {
      WDTCTL = WDTPW + WDTHOLD;                 // Stop WDT
      BCSCTL1 = CALBC1_1MHZ;                    // Set DCO
      DCOCTL = CALDCO_1MHZ;

      ADC12MCTL0 = INCH_2;                      // MEM0 <- P6.2
      ADC12IFG = 0;                             // Clear ADC12 interrupt flags
      ADC12CTL0 = MSC + SHT0_4;                 // Multiple samples, sampling time
      ADC12CTL1 = CONSEQ_2 + SHP;               // Repeat-single-channel, sampling timer

      P3SEL = 0x30;                             // P3.4,5 = USCI_A0 TXD/RXD
      UCA0CTL1 |= UCSSEL_2;                     // SMCLK
      UCA0BR0 = 26;                             // 1MHz 38400
      UCA0BR1 = 0;                              // 1MHz 38400
      UCA0MCTL = UCBRS2 + UCBRS0;               // Modulation UCBRSx = 5
      UCA0CTL1 &= ~UCSWRST;                     // **Initialize USCI state machine**

      DMACTL0 = DMA0TSEL_6 + DMA1TSEL_4;        // ADC12IFGx triggers DMA0, UCA0TXIFG triggers DMA1

      DMA0SA = (unsigned int)&ADC12MEM0;        // Src address = ADC12 module
      DMA0DA = (unsigned int)ucData;            // Dst address = RAM memory
      DMA0SZ = 100;                             // Size in words
      DMA0CTL = DMADSTINCR_3 + DMAIE;           // Inc dst address, interrupt on done

      DMA1SA = (unsigned int)ucData;            // Src address = RAM
      DMA1DA = (unsigned int)&UCA0TXBUF;        // Dst address = UCA0
      DMA1SZ = 200;                             // Size in bytes
      DMA1CTL = DMASRCINCR_3 + DMASBDB + DMALEVEL + DMAIE; // Inc src, byte mode, level trigger

      P6SEL |= BIT2;                            // P6.2 ADC option select

      DMA0CTL |= DMAEN;                         // Enable DMA0
      ADC12CTL1 |= CONSEQ_2;                    // Repeat-single-channel
      ADC12CTL0 |= ADC12ON + ENC + ADC12SC;     // Turn ADC on, enable conversion, trigger start

      _BIS_SR(CPUOFF + GIE);                    // LPM0 + GIE
    }

    #pragma vector = DMA_VECTOR
    __interrupt void DMA_ISR(void)
    {
      if (DMA0CTL & DMAIFG)
      {
        DMA0CTL &= ~DMAIFG;                     // Clear DMA0 interrupt flag
        ADC12CTL1 &= ~CONSEQ_2;                 // Stop conversions after the current one
        ADC12CTL0 &= ~(ENC + ADC12ON);          // Disable and power down the ADC12
        DMA1CTL |= DMAEN;                       // Enable DMA1 (drain buffer to UART)
      }
      else if (DMA1CTL & DMAIFG)
      {
        DMA1CTL &= ~DMAIFG;                     // Clear DMA1 interrupt flag
        DMA0CTL |= DMAEN;                       // Re-enable DMA0
        ADC12CTL1 |= CONSEQ_2;                  // Repeat-single-channel
        ADC12CTL0 |= ADC12ON + ENC + ADC12SC;   // Turn ADC back on and restart
      }
    }

  • Mateja Putic said:
    I'm not sure how to use the comment editor to make the code look

    You can use SHIFT-Enter to avoid these nasty empty lines.

    Also, you can use the 'HTML' button above to pop up a window with the HTML representation of the editor content and replace it with your own formatted HTML text if you want.

    You can trigger each ADC sequence with a timer.
    Unfortunately, you cannot use the end-of-sequence IFG for triggering an ISR, as it won't trigger the DMA then.
    It might be possible to enable the ISR for the first sample channel, or the one before the end of the sequence, so you get an ISR that can count the number of already completed sequences and take some action if necessary.
    It is possible to set up a second DMA channel that is triggered by the first and does the second part of the job.

    e.g. DMA0 is triggered by the ADC and copies all the ADC12MEM contents to RAM using word transfers. When the block has been copied, DMA1 is triggered by DMA0 and copies a 'start' command to DMA2, which in turn will copy the buffer content byte-wise to the UART. This only works with the 5xxA series, as the non-A is unable to write to DMA registers through DMA. It also requires the serial port to be faster than the ADC12 data coming in (but if each sequence is triggered by a timer, this can be easily adjusted), and it occupies all three DMA channels. Yet it would allow automating this whole job without any software intervention.

  • Argh! Server error when transmitting my reply. I HATE this. This forum is slow enough if it's working.

    Okay, second try...

    Mateja Putic said:
    I'm not sure how to use the comment editor to make the code look nice.

    You can use SHIFT-Enter to avoid these nasty empty lines (actually paragraph breaks).

    You should be able to fully automate it, at least on the 5xxA series, as the non-A is unable to write to DMA registers through DMA.

    1) Trigger each conversion sequence with a timer. This way you can adjust the sample frequency fairly easily.
    2) The end-of-sequence IFG triggers DMA0, which will copy all data (up to 32 bytes) to a RAM buffer.
    3) When this is done, DMA1 is triggered by DMA0 and copies a command word into DMA2.
    4) DMA2 will jump in and copy the data block to the UART, triggered by UART TXIFG (this is why DMA1 is needed, as there can be only one trigger per channel).

    Two drawbacks: it uses up all three DMAs and will be a bit difficult to implement, but it should run without any software intervention at all.
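    The four steps above might be sketched like this for a 5xxA device. This is an untested outline: register names follow the 5xx headers, the buffer size is an arbitrary example, and the DMAxTSEL trigger numbers differ per device, so trigger selection is only described in comments rather than written out.

```c
/* 5xxA-only sketch: non-A parts cannot write DMA registers through DMA.
   Trigger selection (DMACTL0/DMACTL1) is left symbolic: DMA0 <- ADC12
   end-of-sequence IFG, DMA1 <- DMA0IFG (chained), DMA2 <- UCA0TXIFG;
   look up the TSEL numbers for your exact part. */
unsigned int buf[16];                     /* one conversion sequence        */

/* Command word DMA1 writes into DMA2CTL to arm channel 2: single
   transfers, byte mode, source-increment, paced by UCA0TXIFG. */
const unsigned int dma2_arm = DMADT_0 + DMASRCINCR_3 + DMASBDB + DMAEN;

void setup_dma_chain(void)
{
    /* DMA0: end-of-sequence -> copy ADC12MEM0..15 to RAM, word-wise */
    DMA0SA  = (unsigned int)&ADC12MEM0;
    DMA0DA  = (unsigned int)buf;
    DMA0SZ  = sizeof buf / sizeof buf[0];
    DMA0CTL = DMADT_5 + DMASRCINCR_3 + DMADSTINCR_3 + DMAEN;

    /* DMA1: chained to DMA0 -> write the arm word into DMA2CTL */
    DMA1SA  = (unsigned int)&dma2_arm;
    DMA1DA  = (unsigned int)&DMA2CTL;
    DMA1SZ  = 1;                          /* one word                       */
    DMA1CTL = DMADT_4 + DMAEN;            /* repeated single, word mode     */

    /* DMA2: armed by DMA1, paced by UCA0TXIFG -> buf to UART, byte-wise */
    DMA2SA  = (unsigned int)buf;
    DMA2DA  = (unsigned int)&UCA0TXBUF;
    DMA2SZ  = sizeof buf;                 /* size in bytes                  */
    /* DMA2CTL itself is (re)written by DMA1 whenever a buffer is ready */
}
```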
