
Long-term reliability of the MSP430

I am using the MSP430 in systems that should have a long lifetime (30 years) at a maximum temperature of 35 degrees (25 degrees on average), running at 3.3V.

So I have some questions about the long term reliability:

- What is the chance of failure over this period if a program with a watchdog keeps running without ever re-initialising?

- Does it help to reset the MCU regularly from software (by writing an incorrect watchdog password, for example)?

- Will the lifetime be longer if the chip spends most of its time in sleep mode, or do regular sleep/wake cycles (10 Hz, for example) wear it out faster?

- I am using the DMA to measure 4 ADC inputs in one program, and I am worried about it "missing" a measurement and ending up measuring the wrong inputs. How can I reset the input pointer at the start? For example, will setting "ADC10MCTL0 = ADC10INCH_3;" reset the input pointer?

- Is the reliability of the new FRAM devices better than the old FLASH ones?

Thank you!

  • In theory, neither the CPU core nor the RAM wears out. However, the MSPs are not immune to cosmic rays or radiation (including rare ionizing events in the plastic case material), which may cause an unexpected change in RAM content or processor registers. This can happen at any time, but the (very low) probability adds up over operating time.

    Also, other external influences are possible, like CPU crashes due to ESD. That’s mainly why there is a watchdog: to reboot if the device crashed due to a hardware event, not to ‘fix’ software bugs and deadlocks.
    However, Flash cell data retention, while specified as >100 years, may be much shorter if flashing wasn’t done properly. A marginal read mode check should be performed if long-term operation is required. I wouldn’t worry for 5-10 years, but 30 years is quite long.
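    Such a marginal read check could look roughly like this on a 5xx device (the FCTL3/FCTL4 registers, the MGR0/MGR1 bits and the FWPW password are from the F5xx flash controller; other families differ, family-specific clock limits for marginal read mode apply, and the checksum routine and the reference value `ref` are illustrative assumptions):

    ```c
    #include <msp430.h>
    #include <stdint.h>

    /* Simple additive checksum; a CRC would be a better choice in practice. */
    static uint16_t checksum(const uint8_t *p, uint32_t len)
    {
        uint16_t sum = 0;
        while (len--) sum += *p++;
        return sum;
    }

    /* Read the given flash range in both marginal read modes and compare
     * against a reference checksum computed right after programming. */
    int flash_margin_ok(const uint8_t *start, uint32_t len, uint16_t ref)
    {
        int ok;
        while (FCTL3 & BUSY) ;        /* flash controller must be idle */
        FCTL4 = FWPW | MGR0;          /* marginal read '0' mode        */
        ok  = (checksum(start, len) == ref);
        FCTL4 = FWPW | MGR1;          /* marginal read '1' mode        */
        ok &= (checksum(start, len) == ref);
        FCTL4 = FWPW;                 /* back to normal read mode      */
        return ok;
    }
    ```

    If either pass fails, the affected segment should be erased and reprogrammed while the cells can still be read correctly.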
    And then there are the external components, especially any electrolytic capacitors in the supply, which age much faster. Their usual 1000-5000 hour ratings are for 85°C operation, though, and increase greatly if the temperature never goes above 35°C.

    It’s not clear what you mean with ‘input pointer’.

    FRAM reliability depends on usage. If you use FRAM just for code storage, I’m not sure whether there is a difference (>100 years @ 25°C, >40 years @ 70°C). The read endurance is 10^15 cycles per cell, so while(1); would run for about 32 years at a 1 MHz clock.
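    The endurance arithmetic is easy to verify (plain host-side C, nothing MSP430-specific):

    ```c
    #include <stdio.h>

    int main(void)
    {
        double cycles  = 1e15;   /* specified FRAM read endurance per cell */
        double per_sec = 1e6;    /* one access per cycle at 1 MHz          */
        double years   = cycles / per_sec / (365.25 * 24 * 3600);
        printf("%.1f years\n", years);   /* prints "31.7 years" */
        return 0;
    }
    ```

    So a single cell read every cycle hits the specified limit only after roughly 32 years of continuous 1 MHz operation.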

  • Thanks for the answer.

    With "input pointer" I mean the register that tells the ADC which input (A0..A7) to convert.
    I am afraid that, when using the ADC together with the DMA to convert a number of inputs, this ADC register, or the DMA memory pointer, might miss an increment, so that results end up associated with the wrong memory locations, until the MCU resets (which in my case is: never).
    Is there any way to make sure the two stay in sync?
  • You don’t say which MSP you’re using, but the use of ADC10MCTL0 makes me assume that it is a 5x family device with ADC10_A.

    As you can see in the register description, the ADC10INCHx bits can only be set when ENC=0 and the ADC is not converting.

    When ENC=0, setting ENC=1 and starting a new conversion will of course start with the value in ADC10INCHx, whether you changed it or not.
    But this does not reset the DMA, if you are using it to transfer the results. If you set the DMA to copy 100 results, and you stop the ADC10 and restart it, the DMA will simply continue at the address it was at last, possibly messing up which result goes where. The DMA does not know why a transfer is triggered, where the data comes from, or what it means. So it is up to you to keep the transfers consistent.
    OTOH, the DMA usually won’t miss a transfer. One trigger, one transfer (and one conversion, one trigger). There is, however, a possible problem: when the device is in LPM, MCLK is stopped, and the DMA requires MCLK for the transfer. If MCLK is driven by the DCO and the DCO is off (LPM>0), it takes some time to get MCLK up again, and the DMA transfer is delayed. Possibly so much that the ADC has already completed another conversion. In this case, a conversion result is lost and the whole thing gets out of sync.
    This is likewise true for handling the results in software via interrupts - the CPU will have a wake-up delay too when coming out of LPM>0.
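    A consistent restart might look like this (a sketch for a 5xx device with ADC10_A and DMA channel 0; the results buffer, its size, and the use of ADC10INCH_3 for a four-channel sequence A3..A0 are assumptions based on the question):

    ```c
    #include <msp430.h>
    #include <stdint.h>

    #define NUM_INPUTS 4
    static uint16_t results[NUM_INPUTS];  /* destination buffer (assumption) */

    /* Stop both the ADC and the DMA, then re-arm them together so the
     * channel sequence and the destination pointer start in step again. */
    void adc_dma_resync(void)
    {
        ADC10CTL0 &= ~ADC10ENC;            /* ENC=0: allow reconfiguration */
        while (ADC10CTL1 & ADC10BUSY) ;    /* wait for current conversion  */

        DMA0CTL &= ~DMAEN;                 /* halt the DMA channel         */
        DMA0SA  = (uintptr_t)&ADC10MEM0;   /* source: ADC result register  */
        DMA0DA  = (uintptr_t)results;      /* destination: start of buffer */
        DMA0SZ  = NUM_INPUTS;              /* transfers per block          */
        DMA0CTL |= DMAEN;                  /* re-arm                       */

        ADC10MCTL0 = ADC10INCH_3;          /* sequence runs A3 down to A0  */
        ADC10CTL0 |= ADC10ENC | ADC10SC;   /* re-enable and trigger        */
    }
    ```

    Calling this once at startup, and again whenever you suspect the two have drifted apart, ensures the first transfer after re-arming goes to results[0].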
