
New TM4C123 ADC Silicon Erratum?



Afternoon All!

I may have stumbled upon a new TM4C123 silicon erratum with the ADC module that does not appear to be documented thus far. There *IS* a workaround.

The application required burst sampling for about 1ms at full speed (1MS/s). To this end, it was decided to use the TRIGGER_ALWAYS setting in order to avoid having to re-trigger 125 thousand times per second. The init code set up SS0 with TRIGGER_PROCESSOR in the first instance whilst waiting to go, then switched to TRIGGER_ALWAYS when it needed to go. Data was handled by uDMA with an interrupt on completion.
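
In outline, that setup amounts to something like this (a sketch only - the channel choice and omitted uDMA/clocking details are illustrative, not the exact application code):

    // Initial state: SS0 set for a processor trigger whilst waiting to go.
    // (Clocking, pin muxing and uDMA control-table setup omitted.)
    ADCSequenceConfigure(ADC0_BASE, 0, ADC_TRIGGER_PROCESSOR, 0);
    for (uint32_t ui32Step = 0; ui32Step < 8; ui32Step++) {
        ADCSequenceStepConfigure(ADC0_BASE, 0, ui32Step,
            ADC_CTL_CH0 | ((ui32Step == 7) ? (ADC_CTL_IE | ADC_CTL_END) : 0));
    }
    ADCSequenceEnable(ADC0_BASE, 0);

    // When the burst is needed: switch to continuous triggering.
    // (This is the step that later proves hard to undo.)
    ADCSequenceConfigure(ADC0_BASE, 0, ADC_TRIGGER_ALWAYS, 0);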

The ISR set the trigger mode back to TRIGGER_PROCESSOR and then proceeded to try to drain the FIFO of the last samples that had been scheduled but were now surplus to requirements. The issue was that the FIFO FULL flag could not be cleared, the FIFO OVERFLOW flag was set and could not be cleared, and the ADC BUSY flag was set and could not be cleared. In short, you could only do this burst once before needing to power cycle the MCU.

The solution came in the form of disabling SS0 in the interrupt handler. This allowed the FIFO to be drained, the OVERFLOW flag was no longer set, the ADC ceased being BUSY, and it was possible to do as many bursts as desired - once the FIFO was drained, it was possible to re-enable SS0 and it would work again.

It seems to be impossible to exit TRIGGER_ALWAYS, once it has been set, without disabling the sequencer. The code now initialises SS0 to TRIGGER_ALWAYS and gating is done by enabling and disabling the sequencer. Problem fixed, though I would guess it should also work by toggling the trigger mode with the sequencer enabled (it definitely works going TRIGGER_PROCESSOR -> TRIGGER_ALWAYS). There is an erratum about re-triggering an already-running sequencer causing continuous triggering, but this is a little different.
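
In code terms, the working gating looks like this (sketch):

    // Init: configure SS0 for continuous triggering ONCE, but leave the
    // sequencer disabled - enabling it becomes the burst "gate".
    ADCSequenceConfigure(ADC0_BASE, 0, ADC_TRIGGER_ALWAYS, 0);

    // Start of burst (e.g. from the GPIO ISR):
    ADCSequenceEnable(ADC0_BASE, 0);

    // End of burst (in the uDMA completion ISR): disabling the sequencer is
    // the only thing found to stop the triggering and free the FIFO/flags.
    ADCSequenceDisable(ADC0_BASE, 0);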

Hope this info is of benefit to someone out there.

Best regards,

Pat.

  • May my small tech group commend you for a thoughtful, nicely detailed & caringly presented posting?   Very well done.

    I'm assuming that "SS0" denotes Sample Sequencer 0 - which has 8-step capability.   (your mention of re-triggering 125K times/second suggests that)

    We've this question - you wrote, "The ISR set the trigger mode back to TRIGGER_PROCESSOR and then proceeded to try to drain the FIFO..."

    Are you draining the FIFO while (still) w/in the ISR?   If so - that's not normal/customary - is it?

    Might you note the rough size of your (typical) FIFO data collection?  

    And have you fed your ADC w/"known signal levels" - and verified that your method does not "miss" first arrivals & does not compromise ADC performance?

    Again thanks for a caring & well conceived posting...

    [edit] Note that poster has issued a new post (after my writing) - confirming 8 sequence steps.   Issue of "all w/in the ISR" (undesired) remains...

  • Hello Pat,

    Wouldn't a timer trigger be simpler? Disabling the timer removes the source of the trigger, allowing the sequencer to be disabled. Note that the ADCBUSY flag must be monitored when making any changes: if the ADCBUSY flag is set when any change is made, the result cannot be determined.
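
    In outline (a sketch only - Timer0A and the 125 kHz rate, i.e. one trigger per 8-step pass at 1MS/s, are illustrative):

        // Timer-triggered alternative: the timer is the trigger source, so
        // disabling the timer removes the trigger and the sequencer can then
        // be disabled cleanly.
        ADCSequenceConfigure(ADC0_BASE, 0, ADC_TRIGGER_TIMER, 0);
        TimerConfigure(TIMER0_BASE, TIMER_CFG_PERIODIC);
        TimerLoadSet(TIMER0_BASE, TIMER_A, (SysCtlClockGet() / 125000) - 1);
        TimerControlTrigger(TIMER0_BASE, TIMER_A, true);

        TimerEnable(TIMER0_BASE, TIMER_A);   // burst runs while the timer runs
        TimerDisable(TIMER0_BASE, TIMER_A);  // stop: no further triggers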

    Regards
    Amit
  • cb1-,

    In answer to your question, no, the FIFO drain was not being done in the ISR. Indeed, the ISR code is minimalistic - please be aware that this is the version that works:

    //*****************************************************************************
    //
    // ADC0 Interrupt Handler for SS0 - called when the uDMA transfer has finished
    //
    //*****************************************************************************
    void ADC0IntHandler(void) {

        // Disable the sequencer to start with - this is the workaround
        ADCSequenceDisable(ADC0_BASE, 0);

        // Clear the uDMA channel interrupt
        uDMAIntClear(1 << UDMA_CHANNEL_ADC0);

        // Flag the main code that there is data from the ADC
        HWREGBITW(pg_ulWakeUp, WAKEUP_ADC0) = 1;
    }

    If the same code was used, but changed from disabling SS0 to setting the triggering back to TRIGGER_PROCESSOR, it caused the issues described. What was odd was that I could manually drain the FIFO - ie using a JTAG debugger I could read the FIFO, then read the FIFO status register and see the read and write pointers change. After 8 reads I kept getting zeroes - ie not a valid ADC result - *BUT* it would still show the FIFO as being full and the OVERFLOW flag was also still set. Trying to clear the OVERFLOW did not work. So it is a sort of in-between state - the ADC is no longer actually sampling, but the FIFO is left in a condition that can't be resolved without disabling the SS.
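
    For reference, the drain in the main code amounts to this (sketch - pui32Scratch is just a placeholder buffer of the FIFO depth):

        // Once SS0 is disabled, ADCSequenceDataGet() reads the FIFO until it
        // is empty and returns the number of stale samples discarded.
        uint32_t pui32Scratch[8];
        int32_t i32Stale = ADCSequenceDataGet(ADC0_BASE, 0, pui32Scratch);

        // Clear any latched overflow/underflow before the next burst.
        ADCSequenceOverflowClear(ADC0_BASE, 0);
        ADCSequenceUnderflowClear(ADC0_BASE, 0);
        ADCSequenceEnable(ADC0_BASE, 0);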

    I have fed the ADC with values I was also capturing with a DSO and can confirm that, other than the sampling speed being a thousand times faster on the DSO, the waveform looks the same.

    Best regards,

    Pat.
  • Amit,

    Very good point regarding ADCBUSY - I think you have hit the nail on the head, ie that changing the trigger mode on the ADC whilst it is BUSY (and it will be permanently BUSY when set to TRIGGER_ALWAYS) has weird and unpredictable results. Well spotted. I guess that disabling the SS whilst ADCBUSY is set is valid though.

    As for the timer, I will probably migrate this to GPIO level triggering. Right now it uses a GPIO edge trigger to get into a GPIO ISR, which then enables the SS. The "issue" with a level-triggered interrupt on GPIO is that it could "go away" before the burst is finished, and so I would never get to the uDMA ISR since the ADC would stop requesting DMA transfers. In a perfect world I guess I need to use an edge to trigger a one-shot from a timer and use that pulse to keep the ADC pumping data. I need to look closer at the datasheet for that.
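
    For context, the current edge-triggered start looks roughly like this (sketch - the port/pin, handler and buffer names, and the 1024-sample count (128 lots of 8) are illustrative):

        static uint32_t g_ui32AdcBuf[1024];

        // GPIO edge ISR: arm the uDMA, then open the gate by enabling SS0,
        // which free-runs because it is configured for TRIGGER_ALWAYS.
        void GPIOPortBIntHandler(void) {
            GPIOIntClear(GPIO_PORTB_BASE, GPIO_INT_PIN_0);
            uDMAChannelTransferSet(UDMA_CHANNEL_ADC0 | UDMA_PRI_SELECT,
                                   UDMA_MODE_BASIC,
                                   (void *)(ADC0_BASE + ADC_O_SSFIFO0),
                                   g_ui32AdcBuf, 1024);
            uDMAChannelEnable(UDMA_CHANNEL_ADC0);
            ADCSequenceEnable(ADC0_BASE, 0);
        }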

    Best regards,

    Pat.

  • Hello Patrick,

    Please note that using GPIO in level-trigger mode will generate multiple triggers during a conversion, which can cause issues. Instead the trigger must be edge sensitive and also spaced correctly to ensure that triggers do not happen during a conversion.

    Disabling SS during conversion is correct though.

    Regards
    Amit
  • Amit,

    Many thanks for the heads-up on the level triggering. That kind of puts a spanner in the works - I guess I am going to have to stick with what I have and accept a small overhead in getting into the GPIO ISR to trigger the ADC (fortunately my ISRs are minimalistic, so even with tail chaining it will still be sub-microsecond)... it sounds like TRIGGER_ALWAYS is the only guaranteed method of generating a continuous stream of data:

    If you use multiple edges from a timer or GPIO then you would need to make sure that edges cannot happen before the SS is finished, but then you also need to make sure there is no delay between the SS finishing and the new trigger happening, otherwise you'd end up with samples that are not necessarily always 1000ns apart.

    Thanks for confirming that it is OK to disable a SS whilst the ADC is BUSY.

    Best regards,

    Pat.
  • Patrick Herborn said:
    ... level triggering. That kind of puts a spanner in the works

    May I offer up (potentially) Spanner Two? 

    My firm & I have observed that not every ADC sample is taken at exact intervals.   We note this across several ARM-based (different vendors') MCUs.

    You don't specify the necessity/justification for such "precise sampling intervals."   Might you detail?

    Our finding: as the MCU load, number of active interrupts & general program complexity increase - the greater the likelihood of (some) sample interval "creepage."   Of course every design is different - every case unique - I'm offering this for your consideration, should precise sample intervals be (really) demanded.

    In my firm's case we generated a precise, high accuracy linear ramp - and injected it (properly) into the MCU's ADC.   And, under the conditions described (above), the logged (ADC) data was not always truly linear - indicating "variation" w/in the sample intervals.
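
    A quick way to check logged data for this (a sketch - ui16Log[] holding 1024 samples of a known rising ramp is assumed):

        // With a linear ramp in & uniform sampling, successive sample deltas
        // are constant; spread between min & max delta exposes jitter.
        int32_t i32Min = INT32_MAX, i32Max = INT32_MIN;
        for (uint32_t i = 1; i < 1024; i++) {
            int32_t i32Delta = (int32_t)ui16Log[i] - (int32_t)ui16Log[i - 1];
            if (i32Delta < i32Min) { i32Min = i32Delta; }
            if (i32Delta > i32Max) { i32Max = i32Delta; }
        }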

    In our (client's) specific case - where precision sample intervals were demanded (along w/high accuracy measurements) - we chose a dedicated ADC.   Based upon the specifics of your application, this method may (better) meet your needs - thus warrant consideration...

  • Hiya cb1!

    I'm not sure that my spanner tolerance can run to two in one day! (there's probably some joke about nuts in there but I'll refrain for the sake of keeping the discussion focused and clean).


    I take your point regarding the non-monotonicity of the sampling events, but I would say the following :


    1) You make mention of the sampling monotonicity being related to the system load, ie software has an influence on it. I would agree 100% that this WILL be an issue if you are at the mercy of variable interrupt latency.

    2) My approach does not use software for anything other than setting up and starting the burst in the beginning, then at the end shutting the ADC back down (having done 128 lots of 8 samples autonomously) and draining the FIFO. The only part of that which is semi-time-critical is the latency between the GPIO pin toggling and the GPIO ISR making the ADC start. It doesn't matter if there is some latency at the end.

    The monotonicity of the sample chain is at the mercy of the silicon in my implementation, and the datasheet is not sufficiently detailed to ascertain whether there are, or are not, any hard guarantees with regard to sample sequencing - specifically the time between samples (we'd need to see the HDL for that, and I can't imagine TI would want to share, though they could give a statement confirming). It is possible that the silicon cannot guarantee the sample timing, and then there's nothing I can do about that without adding more hardware. That said, my suspicion is that the hardware is fundamentally good, and that the "challenge" is how to drive it to its full potential.

    I already alluded to the "fact" that trying to do this in the ISR 125,000 times a second is NOT the right answer, and my hope is that offloading this to the uDMA will fix that problem - the existing implementation can tolerate a uDMA arbitration delay of around 1 microsecond before it loses samples, and I could improve that by reducing the ARB down to 4 samples at a time, triggering a DMA transfer when the FIFO is only half full.
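
    For the arbitration change, that would be along these lines (sketch):

        // Reduce the uDMA arbitration size from 8 to 4 so the channel can be
        // serviced when the SS0 FIFO is only half full, roughly halving the
        // arbitration delay the burst can tolerate before samples are lost.
        uDMAChannelControlSet(UDMA_CHANNEL_ADC0 | UDMA_PRI_SELECT,
                              UDMA_SIZE_32 | UDMA_SRC_INC_NONE |
                              UDMA_DST_INC_32 | UDMA_ARB_4);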

    In reply to the question of "need", it would be fair to say that it is NOT an absolute requirement in this application to have a truly monotonic sample chain - there is some tolerance for jitter in there. We need to be able to locate a key event (voltage) some 150-400us after the trigger - so long as each batch of 8 samples doesn't take a variable 8 to 9us, we should be good. I guess I can cross-reference this back against the DSO to see how we are doing with that.

    Best regards,

    Pat.