Other Parts Discussed in Thread: CC1310
Tool/software: TI-RTOS
Hi,
I'm characterising the clock drift on a bunch of CC1310 Launchpads.
The crystal specified in the LaunchXL-CC1310 schematic is rated at +/-30 ppm, yet by comparing Clock_getTicks against my reference clock I'm measuring around +110 ppm (+/-20 ppm or so) on all the boards I'm testing.
This is *after* enabling temperature compensation; before that I was seeing more like +215 ppm +/-60 ppm.
My measurement cycles run for at least 12 hours, since short-term drift measurements are fairly useless in terms of precision.
If I force the board to stay awake running on the HF_XOSC (e.g. by starting a never-ending radio command), the drift is only +/-15ppm or so.
My reference clock source is synchronised to various atomic clocks via NTP and has been well characterised and corrected for drift over several years, so I don't expect much drift from this source - certainly not more than 8 seconds per day! And that also wouldn't explain the excellent readings when the Launchpad runs on the HF_XOSC only.
I only see the high drift when I allow the board to enter standby, where it presumably switches to the LF oscillator as its timing source, since that's what startup_files/ccfg.c lists as the default: #define SET_CCFG_MODE_CONF_SCLK_LF_OPTION 0x2 // LF XOSC
I have tried trimming this drift using SET_CCFG_EXT_LF_CLK_RTC_INCREMENT - and confirmed that my changes are present in Debug/ccfg.obj and the final output ".out" file - to no avail.
Even with RTC increments up to 2000 ppm (0.2%) away from nominal (e.g. 0x7FBE76 = 32833.67 Hz, which should make it run noticeably slower), the board seems to take the setting initially but quickly returns to +100 ppm or so.
Has anyone else attempted to characterise the clock drift on these boards, and seen or solved anything similar?
Tomorrow I'll try reading the RTC directly to determine whether the problem is actually LF crystal drift or an issue with saving/restoring Clock_getTicks across periods of standby, but today I'm out of time.