
ADC config, DMA transfer to memory, DAC questions!

Other Parts Discussed in Thread: MSP430FG4618, SIMPLICITI, CC2500

Hi all,

I am using the MSP430FG4618 Experimenter board with CCS.  I have been able to sample a signal and then use DMA to send it to the DAC to read it right back out, which is pretty much what I need to do at the moment.  However, I have some questions about configuring the ADC, about the DAC operation, and about using DMA.

I am trying to configure the ADC12 to sample at 1ksample/second.  To do this, I think I need to use clock dividers (since ACLK is at 32kHz, ACLK/32 should do it) and set this as the signal that initiates a sample; I just do not know how to configure the ADC to use this as the sampling trigger.  Does the clock used to trigger a sample have to be the same one as the clock controlling the 13 cycles required for conversion? If not, how do I configure it?

Basically I want the following:

1k samples/sec
High conversion frequency

The next problem I need to solve is that I need to send this information to memory, build up a given amount, say 100 samples, and then send it to the DAC.  However,  I need to continue taking ADC samples while the first set of data is being sent to the DAC.  So I "think" I need a sort of "rolling memory" field where the data is sent.  I want to use "bulk sends" to the DAC, so this transfer needs to occur as rapidly as possible. I also need to ensure that the DAC does not run out of data to convert to analog. IE, I need the DAC to have the same conversion speed as the ADC does.  Any advice on setting up DMA/DAC to do this?

Thanks
Paul

  • paul miller said:
    basically I want the following
    1k samples/sec
    High conversion frequency

    Which one? 1ksamples/s or a high sampling frequency? 1k isn't much. 150k is a lot. >200k is difficult and >300k impossible.

    paul miller said:
    I am trying to configure an ADC12 sample at 1kSample/second.  To do this, I think I need to either use clock dividers, (since Aclock is at 32kHz,  AClock/32 should do it)

    Is it 32kHz or 32.768kHz? That's quite a difference.

    paul miller said:
    Does the clock used to trigger a sample have to be the same one as the clock controlling the 13 cycles required for conversion? if not, how do I configure it?

    It depends. If running in continuous mode, the next sample is taken as soon as the previous one has been completed, so one clock manages both. Yet not every clock pulse will start a new conversion, only every nth, where n = 13 (or 11 or 9 for 10- or 8-bit mode) + the S&H time (which is at least 4 cycles and can go up to 1024).

    Then there is a mode where the S&H time starts with a clock pulse and after it the 13 cycles for the conversion are added.

    There is a third mode where one edge of an external signal (may be external hardware or internal timer output) starts the S&H process and the other edge ends the sampling and starts the conversion. Which is done within 13 clock cycles of the applied ADCCLK (which is obviously NOT the clock signal triggering the S&H gate).

    The third mode is the most complex one but it offers 1) 100% equidistant sampling, 2) exact sampling frequency, 3) a result 13 ADCclock cycles after the signal edge, no matter how long the sampling time was. Usually, the clock doing the conversion is the internal ADC12OSC signal.

    This description is for the 54xx ADC12A module. The one on the 4618 may be slightly different but should be mostly identical.

    There are other threads in this forum dealing with ADC12 setup and ADC12 reading through DMA etc.

  • Hey,

    The signal that I am looking at has components anywhere from 0.5Hz up to ~300Hz.  I do not care how long a conversion takes, so long as I can get a "high enough" sampling frequency to completely reconstruct the signal; 2x is the standard amount, but I have read and been advised that I should go 2.5-3x to be on the safe side. So to answer

    paul miller said:
    basically I want the following
    1k samples/sec
    High conversion frequency

    Which one? 1kSamples/s or a high sampling frequency? 1k isn't much. 150k is much. >200k is difficult and >300k impossible.

    I want 1ksamples/sec, and I want each sample converted from analog to digital "fast enough" that each conversion is complete in time to begin another.

    "

    paul miller said:
    I am trying to configure an ADC12 sample at 1kSample/second.  To do this, I think I need to either use clock dividers, (since Aclock is at 32kHz,  AClock/32 should do it)
    Is it 32kHz or 32.768Hz? That's quite a difference."

    I have seen both 32kHz and 32.768kHz in various documents; both appear in different instances of the datasheet and in code examples.

    In the method that I am currently using, I have the "start ADC" in a loop, where it takes 1 sample and uses the "conversion complete" or "memory full" interrupt flag to start a DMA transfer to the DAC. But again, I don't really care about the "how" of this; I would just like to have it done, and MAYBE optimize for power, but this concern is not primary.

  • paul miller said:
    The signal that I am looking at has components anywhere from 0.5Hz up to ~300Hz.  I do not care how long a conversion takes, so long as I can get a "high enough" sampling frequency to completely reconstruct the signal; 2x is the standard amount, but I have read and been advised that I should go 2.5-3x to be on the safe side.

    That's a common mistake. You don't need 2x to be able to reconstruct the signal, you need 2x to be able to detect and reconstruct its highest frequency. Which means a sine wave. When you sample at exactly 2x, you'll see two alternating values. These two are somewhere on the signal, and all you can tell is that the signal you sample must have exactly half of the frequency at which you sample. You cannot tell the waveform (square/sine/whatever) nor the peak value (you don't know at which point of the wave you sample). The worst case is that you're sampling exactly at the zero-crossing points and you'll see a flat zero line.

    So 2.5x is better for exactly determining the frequency, but not for reconstructing the signal, neither amplitude nor shape. (BTW a shape other than a sine wave implies that there are overtones, and therefore the real maximum frequency is MUCH higher than your sampling frequency.)

    Besides this, you wrote that you want to collect a block of samples and then put them out in a burst through the DAC. What do you expect that will do to your signal? If you sample at 1kHz and then put the data out using e.g. DMA clocked at 4MHz MCLK, this will result in a 100000%(!) frequency shift, with a signal length of 1/1000 of the original sampling time. That does not make much sense to me, but maybe you have a reason.

    Anyway, for slow equidistant samples using a timer for the trigger is best.
    The timer should be clocked by something that allows a safe trigger frequency of your choice.
    Personally, I always set up a timer clocked at 1MHz (from a 1/4/8/16MHz DCO or an 8/16MHz crystal) running in continuous mode. Then I have CCR0 set up so that it triggers an interrupt every 1ms, by setting it to CCR0+1000 (incrementing by 1000 timer ticks) inside this interrupt. This way, the ISR is executed every millisecond.
    With this handy setup for general delay and timing purposes, you can program TACCR1 to do the same but with 500 timer ticks in toggle mode. (Alternatively, you can set up Timer A or B for 1kHz PWM mode and use set/reset mode for TACCR1 with a value in the middle.)
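    A sketch of that timer setup in C (assuming SMCLK already runs at 1MHz; register/vector names and the #pragma interrupt style are the usual CCS/IAR msp430 header conventions; untested on hardware):

```c
#include <msp430.h>

#define TICKS_PER_MS 1000              /* timer clock assumed to be 1 MHz */

void timer_init(void)
{
    TACTL = TASSEL_2 | MC_2 | TACLR;   /* SMCLK, continuous mode */

    TACCR0  = TICKS_PER_MS;            /* first 1 ms interrupt */
    TACCTL0 = CCIE;

    TACCR1  = 500;                     /* toggle OUT1 every 500 ticks */
    TACCTL1 = OUTMOD_4 | CCIE;         /* toggle mode -> 1 kHz square wave */
}

/* CCR0: general-purpose 1 ms tick */
#pragma vector = TIMERA0_VECTOR
__interrupt void timer_a0_isr(void)
{
    TACCR0 += TICKS_PER_MS;            /* next interrupt in 1 ms */
    /* general 1 ms housekeeping can go here */
}

/* CCR1: re-arm the compare so OUT1 keeps toggling every 500 ticks */
#pragma vector = TIMERA1_VECTOR
__interrupt void timer_a1_isr(void)
{
    if (TAIV == 2)                     /* TACCR1 CCIFG */
        TACCR1 += 500;
}
```

    The 500-tick toggle gives a full OUT1 period of 1000 ticks, i.e. the 1kHz sampling trigger.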

    Then set up the ADC to trigger (SHSx) on Timer_A.OUT1 (or Timer_B.OUT1 if using Timer B), don't set SHP, put the ADC in repeated-single-channel mode and start the conversion.
    On the rising edge of TimerA.OUT1 the ADC will open its input and take a sample of the signal into its input capacitor. When TimerA.OUT1 falls, it will close the input and start the conversion. 13 clock cycles later, the result is ready and will trigger an interrupt or DMA transfer (however you configure it). On the next rising edge, the input is opened again, etc.
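    On the FG4618 that could look roughly like this (channel A0 and the internal 2.5V reference are placeholder assumptions; untested):

```c
#include <msp430.h>

void adc12_init(void)
{
    ADC12CTL0 = ADC12ON | REFON | REF2_5V;   /* ADC core + 2.5 V ref (assumption) */
    /* SHS_1: sample-and-hold gated by Timer_A.OUT1. SHP stays 0, so the
     * timer output directly opens/closes the sampling gate. CONSEQ_2:
     * repeat-single-channel, so every trigger starts a new conversion.
     * ADC12SSEL stays 0 -> conversion clocked by ADC12OSC. */
    ADC12CTL1 = SHS_1 | CONSEQ_2;
    ADC12MCTL0 = INCH_0 | SREF_1;            /* input A0, VREF+/AVss (assumption) */
    ADC12CTL0 |= ENC;                        /* armed; first OUT1 edge starts sampling */
}
```

    With SHP = 0 the sampling time is exactly the high phase of OUT1, and the result lands in ADC12MEM0 13 ADC12OSC cycles after the falling edge.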

    If you use DMA, you can configure it to e.g. do 256 transfers of the sampled value into a 512-byte buffer. Once these transfers are done, you'll get an interrupt. In this ISR you reconfigure the DMA to repeat the transfer, but into a different 512-byte buffer. This must be done before the next sample is ready: you have 1ms minus 10 MCLK cycles (DMA transfer time plus interrupt latency) to do it. Then you have another 255ms to do something with the sampled values before the second buffer has been filled and you'll need to take care of that one.
    Of course fewer than 256 samples in a row are possible as well, to keep the buffers small.
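    A minimal ping-pong sketch along those lines (trigger number and vector name as on the FG4618, where the DMA shares its interrupt vector with the DAC12; buffer handling deliberately simplified; untested):

```c
#include <msp430.h>
#include <stdint.h>

#define BLOCK 256                       /* 256 words = 512 bytes per buffer */

static uint16_t buf[2][BLOCK];
static volatile unsigned filling;       /* index of the buffer the DMA writes */

void dma_init(void)
{
    DMACTL0 = DMA0TSEL_6;               /* trigger 6 = ADC12IFGx */
    DMA0SA  = (unsigned)&ADC12MEM0;     /* source: conversion result */
    DMA0DA  = (unsigned)buf[0];         /* destination: first buffer */
    DMA0SZ  = BLOCK;
    /* single transfer per trigger, destination increments, word-sized,
     * interrupt after BLOCK transfers */
    DMA0CTL = DMADT_0 | DMADSTINCR_3 | DMAIE | DMAEN;
}

#pragma vector = DACDMA_VECTOR          /* DMA/DAC12 shared vector on the 4618 */
__interrupt void dac_dma_isr(void)
{
    if (DMA0CTL & DMAIFG) {
        DMA0CTL &= ~DMAIFG;
        filling ^= 1;                   /* switch to the other buffer */
        DMA0DA = (unsigned)buf[filling];
        DMA0SZ = BLOCK;
        DMA0CTL |= DMAEN;               /* DMAEN clears itself after a block */
        /* buf[filling ^ 1] now holds 256 samples; ~256 ms to process it */
    }
}
```

    The re-arm in the ISR is the part that must complete within 1ms minus the DMA/interrupt latency, as described above.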

    paul miller said:
    I have seen both 32kHz and 32.768kHz in various documents, both appear in different instances of the data sheet and in code examples.


    ACLK itself has no 'frequency'; it is fed by a clock source.
    Depending on the MSP, several clock sources are possible.
    The most commonly used one is an external 32kHz watch crystal, which runs not at 32.000kHz but at 32768Hz, so it divides smoothly down to 1.00Hz using binary dividers.
    On the 4618, ACLK can be sourced from such a crystal as well as from a high-frequency crystal, a digital clock signal or the internal VLO (with its few kHz).

    In your case, I'd simply use ADC12OSC as the clock source for the ADC12, but a 32kHz crystal-clocked ACLK would do fine too. A faster clock means a shorter period during which the reference voltage is drained, and a shorter time in which the sampled voltage on the input capacitor can decrease due to leakage. Using ADC12OSC, however, increases power consumption a bit.

  • Thank you very much, this is very helpful!

    The final goal of this is to transmit the information wirelessly using a CC2500 and SimpliciTI, but since I don't have that code in place yet, I am trying to emulate it on one device.  I think to do this, I am going to fill a buffer from the ADC, and then when it triggers, I'm going to send the information to a different part of memory in a burst block (as if it got transmitted) and then transfer the data to the DAC with DMA timed off of Timer A.  I think this should keep me from having the frequency shift you mentioned; it only occurred to me yesterday, about 6 hours after I made this post.

    Jens-Michael Gross said:

    If you use DMA, you can configure it to e.g. do 256 transfers of the sampled value into a 512-byte buffer. Once these transfers are done, you'll get an interrupt. In this ISR you reconfigure the DMA to repeat the transfer, but into a different 512-byte buffer. This must be done before the next sample is ready: you have 1ms minus 10 MCLK cycles (DMA transfer time plus interrupt latency) to do it. Then you have another 255ms to do something with the sampled values before the second buffer has been filled and you'll need to take care of that one.
    Of course fewer than 256 samples in a row are possible as well, to keep the buffers small.

    Any chance you could provide an example of how to do this?  I think what you are saying (in pseudo-code) is

    transfer 256 times from one buffer to the other,

    if ISR, change destination address for DMA to a different location?

    Repeat?

    Do I have the basics of that right?

    Also, is the 1ms minus 10 MCLK cycles the time it takes for one DMA transfer and the interrupt request, or for all of the DMA transfers?

    Thanks again.

  • paul miller said:
    transfer 256 times from one buffer to the other,

    From ADC12MEM to a buffer.

    paul miller said:
    if ISR, change destination address for DMA to a different location?
    Repeat?
    Yes. So when the first buffer is full, you switch the DMA to the second buffer and then have plenty of time to process the first buffer.

    paul miller said:
    Also,  is the 1ms minus 10MCLK cycles the time it takes for 1 DMA transfer and the interupt request, or all of the DMA transfers? 

    It is the time between the moment your code knows that the first buffer has been filled and the moment the DMA must be armed and ready to fill the second buffer, or you'll miss the next conversion, the DMA won't trigger and the process will stall. This is for edge-triggered DMA (with level-triggered DMA you may have time until ADC12MEM overflows, which is one more ms). The DMA transfer itself takes 4 MCLK cycles each, plus possibly a delay if the current CPU instruction may not be interrupted (for read-modify-write instructions, if the DMA is configured not to interrupt them).
    The 1ms comes from the 1kHz sampling frequency: you have 1ms from the moment a result is placed into ADC12MEM (and the next/last DMA transfer is triggered) until the next result is ready and the DMA must be armed to process it.
    Then you have 256 (or however long your buffer is; the DMA can process blocks of up to 64k) ms to work with the buffer until the next buffer has been filled. You should be finished with the data by then, because even if you use a third or fourth buffer, you need to process data on average as fast as it comes in. More or larger buffers will extend the averaging time, but if you cannot process the data fast enough, it will just pile up higher before it overflows.
    A smart buffer size is the block size in which you can process the data. E.g. if you can send the data over an RF connection in bursts of up to 120 bytes, then 60 transfers = 120 bytes is a good block size (remember, the ADC12MEM readings are 12 bit and occupy 2 bytes each; the DMA will transfer 16 bits per transfer). For USB, a larger buffer size might be better for higher throughput. And with a UART you'll need at least 1kHz * 2 bytes * 10 bits = 20kBaud for a 1kHz sampling frequency.

    paul miller said:
    I'm going to send the information to a different part of memory in a burstblock (as if it got transmitted)

    Well, yes, you could do that to simulate the software overhead of the SimpliciTI protocol and the sending process, but you already have two buffers, so you can process the data in the first buffer directly, without copying.

    paul miller said:
    and then transfer the data to the DAC with DMA timed off of Timer A.  I think this should keep me from having the frequency shift you mentioned,

    Yes. You can use the same trigger that triggers the conversion (if the same trigger is available for both the ADC and the DMA; the available triggers differ slightly between MSPs). Then you only have a delay of one buffer length between sampling and output.
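    For the DAC side, a rough sketch (DMA channel 1 triggered by TACCR2 CCIFG, which is trigger 1 on most 1xx/4xx devices; TACCR2 is assumed to be advanced by 1000 ticks per compare in the Timer_A ISR, the same continuous-mode scheme described earlier in the thread; untested):

```c
#include <msp430.h>
#include <stdint.h>

#define BLOCK 256

extern uint16_t outbuf[BLOCK];          /* filled from the ADC side */

void dac_dma_init(void)
{
    /* DAC12_0: 12-bit input range, medium-speed amplifier setting */
    DAC12_0CTL = DAC12IR | DAC12AMP_5 | DAC12ENC;

    /* DMA1: one word from the buffer into DAC12_0DAT per TACCR2 compare,
     * i.e. the same 1 kHz timebase that triggers the ADC. DMADT_4 is
     * repeated single transfer: DMA1SZ reloads after each block, so the
     * buffer is replayed until the source address is changed. */
    DMACTL0 |= DMA1TSEL_1;              /* trigger 1 = TACCR2 CCIFG */
    DMA1SA   = (unsigned)outbuf;
    DMA1DA   = (unsigned)&DAC12_0DAT;
    DMA1SZ   = BLOCK;
    DMA1CTL  = DMADT_4 | DMASRCINCR_3 | DMAEN;

    /* TACCR2 itself must be advanced by 1000 ticks on each compare (in
     * the shared Timer_A ISR) to produce the 1 kHz trigger; not shown. */
}
```

    Because the same 1kHz timebase clocks both sides, the DAC consumes samples exactly as fast as the ADC produces them, avoiding the frequency shift discussed above.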
