Managing batch-to-batch RTC calibration variability with MSP430FE427?

Other Parts Discussed in Thread: MSP430FE427

Hi,

I'm looking at how to practically manage RTC calibration with the MSP430FE427. Because this is a high-volume, low-cost application, I cannot afford to individually "tune" the RTC calibration constant for each unit. So for this to work, I see two possibilities:

1. I understand that the RTC calibration is fairly consistent for devices within the same batch, so is there a way to automatically detect, via firmware (or the programming hardware), when a new batch of chips enters production? Is there some sort of serial number or batch number within the MSP430FE427 that is accessible to the firmware (or to the programming hardware)?

2. Is the RTC calibration constant within the device already "tuned" by TI? If so, I could then read this value and apply an offset (determined empirically) to suit the crystal loading conditions of my application.

Joe.

 

  • I am very ignorant about RTC calibration. But I think the crystal used is the primary factor.

  • old_cow_yellow said:

    I am very ignorant about RTC calibration. But I think the crystal used is the primary factor.

    Well, that depends on the crystal.

    I'm using a 5ppm crystal, but observing an error of about 45ppm, so in this case, it seems mostly to be the chip.

    Joe.

     

  • Joe da Silva said:

    Well, that depends on the crystal.

    I'm using a 5ppm crystal, but observing an error of about 45ppm, so in this case, it seems mostly to be the chip.

    I assume your 32kHz crystal is the usual tuning fork type. If so, the crystal will only be 5PPM accurate at around 25C. If you are testing with the crystal warmed up, 45PPM error is not hard to achieve - just look at the parabolic characteristic your crystal maker publishes. The MSP430 itself shouldn't affect the crystal accuracy much, and there are no batch related issues which would have a significant effect. If you have the capacitive loading badly tuned you can pull a crystal off frequency, but 45PPM is farther than most 32kHz tuning fork type crystals can readily be pulled. If the tuning is that bad, the oscillator is unlikely to be very stable.

    If your issue is temperature, it is possible to use the temperature sensor in the chip to produce accurate timing. The parabolic characteristic of the crystal doesn't vary too much from sample to sample, so a regular temperature measurement and a little maths will allow you to track the exact current crystal frequency, and compensate for it. This does, however, require calibration of the individual temperature diode.

    Steve
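
    A minimal sketch of the temperature-tracking approach described above, assuming the usual tuning-fork model df/f = -k*(T - T0)^2 with a turnover temperature T0 near 25 C and k around 0.034 ppm/C^2 (take both numbers from your crystal's data sheet); read_die_temperature_c() is a hypothetical helper wrapping the calibrated on-chip temperature sensor:

    ```c
    /* Expected crystal error at the current temperature, using the typical
     * tuning-fork parabola.  The two constants are generic figures, not data
     * for any specific crystal - substitute the values from your data sheet. */
    #define XTAL_TURNOVER_C    25.0f    /* turnover temperature, typically ~25 C     */
    #define XTAL_K_PPM_PER_C2  0.034f   /* parabolic coefficient, ~0.03-0.04 ppm/C^2 */

    extern float read_die_temperature_c(void);   /* hypothetical, needs its own offset calibration */

    /* Returns the ppm error at the current temperature (always <= 0: a tuning-fork
     * crystal only runs slow away from its turnover point); feed this into the
     * RTC correction. */
    float xtal_error_ppm(void)
    {
        float dt = read_die_temperature_c() - XTAL_TURNOVER_C;
        return -XTAL_K_PPM_PER_C2 * dt * dt;
    }
    ```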

     

  • Steve Underwood said:

    Well, that depends on the crystal.

    I'm using a 5ppm crystal, but observing an error of about 45ppm, so in this case, it seems mostly to be the chip.

    I assume your 32kHz crystal is the usual tuning fork type. If so, the crystal will only be 5PPM accurate at around 25C. If you are testing with the crystal warmed up, 45PPM error is not hard to achieve - just look at the parabolic characteristic your crystal maker publishes. The MSP430 itself shouldn't affect the crystal accuracy much, and there are no batch related issues which would have a significant effect. If you have the capacitive loading badly tuned you can pull a crystal off frequency, but 45PPM is farther than most 32kHz tuning fork type crystals can readily be pulled. If the tuning is that bad, the oscillator is unlikely to be very stable.

    If your issue is temperature, it is possible to use the temperature sensor in the chip to produce accurate timing. The parabolic characteristic of the crystal doesn't vary too much from sample to sample, so a regular temperature measurement and a little maths will allow you to track the exact current crystal frequency, and compensate for it. This does, however, require calibration of the individual temperature diode.

    Steve


    Yes, the crystal is a tuning fork type. However, this is a very low power application, with minimal self-heating, and the testing was conducted at 23C (air conditioned environment), so temperature doesn't explain the observed RTC inaccuracy. As for the capacitive loading, aside from minor PCB parasitics, this is entirely provided by the MSP430FE427. AFAIK, on-chip capacitances do not have very tight tolerances, so I'd expect (and my testing confirms*) this is where batch-to-batch variations would come into play.

    * I should point out that I also have some older samples of the MSP430FE427 (2005 vintage) that were tested simultaneously with the above (2010 & 2011) samples. The older samples exhibited an RTC error of less than 10ppm, whereas the newer samples exhibited an RTC error of 35-45ppm.

    Joe.
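
    To put a number on the loading sensitivity mentioned above: the pulled frequency of a crystal follows roughly f_L = f_s*(1 + C_m/(2*(C_0 + C_L))), so near the nominal load the trim sensitivity is about C_m/(2*(C_0 + C_L)^2). A back-of-the-envelope sketch, using typical assumed values (not FE427- or crystal-specific) of C_m ~ 3 fF, C_0 ~ 1.5 pF and C_L = 12.5 pF:

    ```c
    /* Rough estimate of how far a load-capacitance error pulls a 32 kHz
     * tuning-fork crystal.  All three constants are typical assumed values;
     * use the figures from the crystal data sheet for a real estimate. */
    #include <stdio.h>

    #define C_M   3e-15     /* motional capacitance, ~3 fF (assumed)        */
    #define C_0   1.5e-12   /* shunt capacitance, ~1.5 pF (assumed)         */
    #define C_L   12.5e-12  /* nominal load capacitance the crystal expects */

    int main(void)
    {
        /* d(f/f)/dC_L = C_m / (2*(C_0 + C_L)^2), converted to ppm per pF */
        double sens_ppm_per_pF =
            C_M / (2.0 * (C_0 + C_L) * (C_0 + C_L)) * 1e6 * 1e-12;

        /* Example: effective loading 3 pF off nominal (e.g. loose tolerance
         * on the on-chip XIN/XOUT capacitors plus PCB parasitics).          */
        printf("pulling sensitivity : %.1f ppm/pF\n", sens_ppm_per_pF);
        printf("3 pF loading error  : %.0f ppm\n", 3.0 * sens_ppm_per_pF);
        return 0;
    }
    ```

    With those numbers a few picofarads of loading error already amounts to a few tens of ppm, which is the right order of magnitude for the 35-45ppm observations above.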

     

  • Joe da Silva said:
    * I should point out that I also have some older samples of the MSP430FE427 (2005 vintage) that were tested simultaneously with the above (2010 & 2011) samples. The older samples exhibited an RTC error of less than 10ppm, whereas the newer samples exhibited an RTC error of 35-45ppm.

    That strengthens my suspicion that there have been some changes in the oscillator circuitry over the last few years. We see differences with the 1232 and 1611 processors: same layout, same brand and model of crystal, no crystal failures on previous batches, yet up to 10% failures on the recent batches.

    About the check: you can perform a quick calibration. Output the crystal clock signal (ACLK) to a port pin, measure this pin with a high-precision timer, calculate the required correction setting, and transmit it back to the MSP.

    The MSP under test can check its INFO memory, and if no calibration value is available, it enables the ACLK output and expects the calibration value through, well, SPI/UART/whatever. It can even be written using JTAG, e.g. if you use the batch-programmable (non-free) version of the Elprotronic FET-Pro430 programming software, which you feed a TI.TXT file with the calibration data for storage in the INFO sector.

    If the MSP starts and a calibration value is present, the process is skipped. So all you need to do is plug the new device into the test board, wait a second or two, and it is calibrated. This can be made fully automatic.
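
    A rough firmware-side sketch of that power-up check (the address, pin and helpers below are assumptions, not FE427 specifics: the calibration word is placed at a hypothetical info-memory address, an erased flash word is taken to read 0xFFFF, ACLK is assumed to be available on P1.5 via P1SEL - check the datasheet for the actual pin function - and receive_cal_word_from_uart() / write_info_word() are application-provided stubs):

    ```c
    #include <msp430.h>
    #include <stdint.h>

    #define CAL_ADDR  ((volatile uint16_t *)0x1000)   /* hypothetical INFO location */

    extern uint16_t receive_cal_word_from_uart(void);                      /* app stub */
    extern void write_info_word(volatile uint16_t *addr, uint16_t value);  /* app stub */

    uint16_t rtc_cal;   /* correction constant used by the RTC firmware */

    void rtc_cal_init(void)
    {
        if (*CAL_ADDR == 0xFFFF) {                   /* erased: not calibrated yet      */
            P1SEL |= BIT5;                           /* route ACLK to the port pin ...  */
            P1DIR |= BIT5;                           /* ... so the fixture can measure  */
            rtc_cal = receive_cal_word_from_uart();  /* fixture sends correction back   */
            write_info_word(CAL_ADDR, rtc_cal);      /* persist for future power-ups    */
            P1SEL &= ~BIT5;                          /* done, pin back to normal I/O    */
        } else {
            rtc_cal = *CAL_ADDR;                     /* already calibrated: just load   */
        }
    }
    ```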

  • Jens-Michael Gross said:

    That strengthens my suspicion that there have been some changes in the oscillator circuitry over the last few years. We see differences with the 1232 and 1611 processors: same layout, same brand and model of crystal, no crystal failures on previous batches, yet up to 10% failures on the recent batches.

    Well, the interesting thing is that all but one of my test samples are "Rev E" silicon. One of my 2011 samples is "Rev H" silicon. Yet while there is a substantial difference in RTC timing between the 2005 samples and the 2010/2011 samples, the "Rev E" sample of Feb 2011 behaves much like the "Rev H" sample of Feb 2011. In other words, there seems to be a correlation with manufacturing date, but not with silicon revision as might have been expected. Strange.

    BTW, I should correct a misunderstanding I had when I first posed this question in two parts. I had thought the "RTC calibration constant" was a value that was written to some register in the MSP430FE427. However, my firmware engineer tells me it's actually purely a firmware thing. Hence the second part of my question is null and void.

    Joe.
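
    For what it's worth, one common way to do such a purely-firmware correction is a Bresenham-style trim of the timer period that generates the 1 Hz tick. A minimal sketch, assuming Timer_A is clocked from ACLK in up mode and RTC_CAL_PPM is a hypothetical signed constant (positive = crystal runs fast) obtained from a measurement like the one described earlier:

    ```c
    #include <msp430.h>
    #include <stdint.h>

    #define XTAL_HZ      32768L
    #define RTC_CAL_PPM  (-40)          /* example only: crystal assumed 40 ppm slow */

    static int32_t frac_error;          /* accumulated error, millionths of a cycle  */

    void rtc_timer_init(void)
    {
        TACTL   = TASSEL_1 | MC_1 | TACLR;   /* Timer_A: ACLK, up mode                */
        TACCR0  = XTAL_HZ - 1;               /* nominal 1 s period                    */
        TACCTL0 = CCIE;                      /* interrupt once per second             */
    }

    #pragma vector = TIMERA0_VECTOR
    __interrupt void one_second_isr(void)
    {
        uint16_t period = XTAL_HZ;      /* nominal ACLK cycles per second            */

        /* A crystal that is RTC_CAL_PPM fast delivers XTAL_HZ*RTC_CAL_PPM/1e6 extra
         * cycles per second; accumulate that and stretch (or shorten) the next
         * period by whole cycles whenever a full cycle of error has built up.     */
        frac_error += (int32_t)XTAL_HZ * RTC_CAL_PPM;
        while (frac_error >=  1000000L) { period++; frac_error -= 1000000L; }
        while (frac_error <= -1000000L) { period--; frac_error += 1000000L; }

        TACCR0 = period - 1;            /* period of the next second                 */

        /* ... advance the software seconds/minutes/hours counters here ...          */
    }
    ```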

     

  • So it's more likely a change in production rather than a silicon change (well, a silicon change wouldn't show the same symptoms across different devices, unless all of them were updated with the same bug).

    So maybe TI has changed the foundry, or the package plastic has been changed to something with lower resistance or with a different (dielectric) influence on the pin parasitic capacitance, or whatever.
