
CC2640: Questions about processor cycles and RTC

Part Number: CC2640

Hi Team,

I have a customer who is working with the CC2640 and is doing the following:

CPU cycles on the CC2640 are counted and compared with the elapsed RTC time (from the AON_RTC_SEC and AON_RTC_SUBSEC registers) in order to apply corrections. To avoid interference from TI-RTOS and other peripheral/software modules, which would cost accuracy, this is done in the main function before any initialization (ICall, Task, BIOS, etc.). They have tried two approaches:
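As background for combining the two registers: to my understanding, AON_RTC_SEC counts whole seconds and AON_RTC_SUBSEC is a 32-bit binary fraction of one second (LSB = 2^-32 s). A minimal off-target sketch of the conversion, with the register values passed in as plain integers (illustrative name, not TI code):

```c
#include <stdint.h>

/* Combine the AON_RTC second and sub-second counters into one
 * double-precision timestamp. This assumes SUBSEC is a 32-bit binary
 * fraction of one second (LSB = 1/2^32 s); the register values are
 * passed in as parameters so the conversion is testable off-target. */
static double rtc_to_seconds(uint32_t sec, uint32_t subsec)
{
    return (double)sec + (double)subsec / 4294967296.0; /* 2^32 */
}
```

On the real hardware the two registers should probably be read in a loop until SEC is stable, since SUBSEC can roll over between the two reads.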

First, a while loop with counters and NOPs was implemented, containing a precisely defined sequence of assembly instructions. Using the per-instruction cycle counts from ARM's Cortex-M3 Technical Reference Manual, the theoretical duration was determined, and the loop was sized (based on the 48 MHz CPU frequency, with some tolerance) to spend exactly one second in the loop. However, when the value from the RTC is read, a deviation shows up which, extrapolated, makes a day noticeably longer or shorter. Moreover, each run "measures" a completely different deviation: one run yields 2 seconds per day and the next 10 seconds per day (extrapolated from the one-second measurement), which is a very broad spread. This leads to the conclusion that other things are still running in the background on the chip (even with interrupts disabled via "cpsid i"), started before main() is reached.
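The deviation described here boils down to a ratio: if the loop was sized to last exactly one RTC second, the CPU cycles actually counted versus the nominal 48e6 give the relative offset between the two clocks. A small sketch of that arithmetic (illustrative names, not the customer's code):

```c
#include <stdint.h>

/* Relative deviation, in ppm, between the CPU clock and the RTC,
 * given the CPU cycles counted while the RTC advanced by exactly
 * one second. cpu_hz is the nominal CPU frequency (48e6 here). */
static double clock_offset_ppm(uint64_t counted_cycles, double cpu_hz)
{
    return ((double)counted_cycles - cpu_hz) / cpu_hz * 1e6;
}
```

For example, counting 48 000 480 cycles over one RTC second corresponds to a 10 ppm offset.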

After the unsatisfactory outcome of this first attempt, a second one was made: a loop with a defined execution length per pass is aborted as soon as a 1 can be read from the AON_RTC_SEC register, and the number of passes, together with the known pass duration, serves as the basis for the calculation. This produced results with similarly large deviations and variance (the read/write latency of the RTC registers is known and accounted for), which was not expected. The values were captured via a hardware breakpoint and then read out manually in the CCS IDE.

- The question now arises whether this type of approach can work at all from TI's point of view (i.e. with a complete overview of the system), or whether too many uncontrollable variables are involved (for example the Cortex-M0 radio core, code already running, or other disturbing factors)?

- Another question concerns initializing the RTC, which currently consists of AONRTCDisable(), AONRTCReset(), and AONRTCEnable(), because Clock_getTicks() does not exist at this point. Is this command sequence correct and sufficient?

- An additional question: is the 48 MHz system frequency synthesized from the 24 MHz crystal available from power-on (our observations suggest it is, already when entering the BLE Peripheral main function), or where and when is the PLL for it initialized?

Since no result was achieved with the above concept, the idea was to shift the time measurement toward hardware and keep the implementation effort to a minimum. The SysTick timer was therefore configured to generate an interrupt every 0.25 seconds (its 24-bit depth limits it to about 0.34 seconds at 48 MHz), routed to a dedicated handler. In this handler the RTC is read out, and the values are buffered for further calculation and for debugging. This approach yields virtually the same results on every run and no longer shows the large scatter seen before.

There is, however, a disadvantage: the times read from the RTC deviate so far from the configured interrupt period that a deviation of several minutes per day results (a measured offset of 0.2247924805 seconds). Moreover, these results do not match observations made when our firmware with an integrated calendar runs on the same hardware, which shows a daily deviation of 10-20 ppm. Furthermore, since the Bluetooth specification allows at most 500 ppm deviation for the sleep-clock crystal, the device should no longer be able to communicate with other participants at such an offset (yet the BLE Peripheral firmware works). At this point it is completely unclear to us where the measured differences come from.

Can you provide some insight here? What could be the cause of the discrepancy in the timings? Do you maybe have some ideas about how a deviation between the system clock and the crystal could be measured?

Thanks and Regards,

Mihir Gupta

FAE - South Germany

  • Hello Mihir,

    Thank you for your inquiry. Since your questions are about system internals, I will need some additional time to see what information is available.

    Again, thank you for your patience.

    Best wishes
  • Hi Mihir,

    Could you also share some details on what exactly the customer needs this for? You mention "to carry out corrections", what sort of corrections are they after?

    Regards,
    Fredrik
  • Well, the goal is to measure the RTC clock drift and run corrections/adjust the internal RTC in order to improve accuracy. Currently the clock drifts about 1-2 sec/day (neglecting temperature), which is quite a lot, especially for a device that, once deployed, is inaccessible for years (for economic reasons).
  • Mihir,

    2 seconds per 24 hours is 23 ppm. It is possible to find more accurate crystals, but I guess for going lower than 15 ppm accuracy you must either use an external clock source or do individual calibration in production test.
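    The conversion used here is just a ratio; as a checkable sketch:

```c
/* Convert an observed drift over a measurement interval into parts
 * per million, e.g. 2 s over one day (86400 s) is about 23 ppm. */
static double drift_ppm(double drift_s, double interval_s)
{
    return drift_s / interval_s * 1e6;
}
```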

    I am not sure how calibrating against the system clock will help. It will have the same amount of tolerance (15 ppm or more).

    Regards,
    Fredrik
  • It is actually the purpose of this exercise to figure out whether some mechanism based on the system clock can be used for calibration in the first place, in other words, whether the base offset can be determined without external equipment, measurements, or input. As Mihir described above, two approaches to determine feasibility have already been taken, but the SoC's subsystems, or some unknown "entity" within the hardware/software, seem to interfere and prevent a satisfactory solution.

    In the first approach, a simple while loop with some NOPs and a counter variable was implemented (the number of ASM instructions and their CPU cycles are known from the ARM Cortex-M3 reference manual), which stops as soon as AON_RTC_O_SEC reaches 1. Based on the counted cycles, a correction value is calculated that allows introducing leap second(s) every day (i.e. every 86400 seconds). However, it turns out the calculated values are inconsistent from run to run (the code is located in the BLE Peripheral main() before anything else, such as Pin_Init, is executed). More precisely, the calculation yields between 1 and 10 seconds per day across measurements taken under identical environmental conditions, while the test device verifiably drifts 1 sec per day at 25 degC (roughly 10 ppm). In conclusion, something else seems to be running on the chip, preventing repeatability and sensible results. The question is what is happening here and why it is not possible to execute code predictably at this stage (run level)?
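    The leap-second arithmetic in this first approach can be written down independently of the hardware so it can be sanity-checked off-target (names and parameters are illustrative, not the customer's code):

```c
#include <stdint.h>

/* Length of the "one RTC second", as measured in CPU cycles, converted
 * into the correction (in seconds) that accumulates per day. passes is
 * the number of loop iterations until AON_RTC_SEC read 1, and
 * cycles_per_pass the known instruction-cycle cost of one iteration. */
static double daily_correction_s(uint64_t passes, uint32_t cycles_per_pass,
                                 double cpu_hz)
{
    double measured_second = (double)(passes * cycles_per_pass) / cpu_hz;
    return (measured_second - 1.0) * 86400.0;
}
```

    With e.g. 4 800 048 passes of 10 cycles each against a nominal 48 MHz, this yields 0.864 s of correction per day (10 ppm).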

    After the first approach failed without even a hint of what was going on, a second plan was devised based on the SysTick timer (the GPTimers deliver the same results). The SysTick timer was set to generate an interrupt every 0.25 s (register loaded with 12000000, based on 48 MHz), and the interrupt routine read out the AON_RTC_O_SUBSEC register (the AON_RTC was started prior to timer initialization). With this approach it was possible to reproduce the same results over and over again, as one would expect, but the calculations show the RTC drifting roughly 670 sec/day, which again verifiably cannot be right. The question this time is what leads to that gross difference, which can be explained neither by crystal tolerance nor by some other piece of code running in the background, since the critical parts are implemented in hardware modules.
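    For the SysTick variant, the expected SUBSEC advance per 0.25 s interrupt is 2^30 counts, again assuming SUBSEC is a 32-bit binary fraction of one second. A sketch of the per-interrupt check, with the register reads left out so the arithmetic itself is testable:

```c
#include <stdint.h>

/* Deviation in ppm between the SysTick interval (nominally 0.25 s from
 * the 48 MHz clock) and the RTC, from two consecutive SUBSEC samples.
 * SUBSEC is assumed to be a 32-bit fraction of a second, so 0.25 s
 * corresponds to 2^30 counts; modular subtraction handles wrap-around. */
static double systick_rtc_ppm(uint32_t subsec_prev, uint32_t subsec_now)
{
    uint32_t delta = subsec_now - subsec_prev;  /* mod 2^32 */
    const double expected = 1073741824.0;       /* 2^30 */
    return ((double)delta - expected) / expected * 1e6;
}
```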

    While it would be very nice to get answers to these questions and to the ones Mihir posted above, it is actually more important to come up with a solution for some sort of calibration (quality aside for now) at runtime. Since the pool of ideas is exhausted at this point, due to the issues mentioned above and a lack of insight into the core system, we need input on either how to make calibration work with the approaches already tried (perhaps there is something that needs to be deactivated that we are unaware of), or on whatever TI can see that others cannot, given the general overview of the CC26xx/TI-RTOS/BLE Stack ecosystem. Furthermore, TI mentions RTC calibration using the Sensor Controller in the Technical Reference Manual, but there seems to be no further information on what is actually meant by that and how TI's engineers intended it to work or be implemented.
  • Hi MR,

    I still do not understand how it is possible to calibrate run-time when the 48 MHz clock will have the same amount of tolerance as the RTC. The crystal driving the 48 MHz clock will be +/- 10 ppm at best.

    While I do not know exactly which part of the TRM you are referring to, it is most likely a calibration we do when the RTC is driven by the internal RC oscillator.

    Cheers,
    Fredrik
  • The 24 MHz crystals are trimmed/considered more precise than the 32.768 kHz clock crystals, so the 48 MHz system clock should be no less precise; is that assumption correct?

    PS: I am referring to TRM page 1301 (SWCU117H.pdf).
  • I am not sure that is the case. Look at mouser.com for example, both 24 MHz and 32.768 kHz crystals go down to 10 ppm initial tolerance.
  • Even if both crystals are equally imprecise, or even assuming the system clock crystal is worse: while this might mean that running such a calibration reliably on thousands of devices is not possible, it still does not seem to explain the issues experienced during the tests, nor does it account for the magnitude of the calculated/measured drift described earlier.
  • Hi Fredrik,

    as you said, calibrating the RTC is not possible with the process we intended to use, because the system crystal's tolerance prevents it from working reliably. I was therefore wondering what TI recommends for RTC drift compensation (base offset)?

    Apart from calibrating the base offset at 25 degC, I guess it is advisable to compensate for temperature-related drift as well. To do this, I intend to create a look-up table with the drift values at certain temperatures and to measure the temperature, say, every 60 sec to conserve battery. Assuming the temperature was constant over the last 60 sec, I add the small drift occurring within that timeframe until my counter value reaches 1 sec, at which point a leap second is introduced. Do you think that is a suitable solution, or are there TI examples already available?
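    A minimal sketch of the accumulator I have in mind, with a purely hypothetical drift table (the values are placeholders and would have to come from characterizing the crystal; the sign convention assumes the RTC runs slow, i.e. negative ppm):

```c
#include <stdint.h>

/* Hypothetical drift table in ppm, indexed from -20 degC in 10 degC
 * steps; the numbers are placeholders, not characterized values. */
static const double drift_ppm_table[] = {
    -40.0, -25.0, -12.0, -3.0, 0.0, -3.0, -12.0
};

static double lookup_drift_ppm(int temp_c)
{
    int idx = (temp_c + 20) / 10;
    if (idx < 0) idx = 0;
    if (idx > 6) idx = 6;
    return drift_ppm_table[idx];
}

typedef struct {
    double accumulated_s;   /* fractional drift accumulated so far */
    int    leap_seconds;    /* whole seconds already corrected     */
} drift_acc_t;

/* Called once per measurement interval (e.g. every 60 s, assuming the
 * temperature was constant over that interval). Returns the number of
 * leap seconds to apply to the calendar this time (usually 0). */
static int drift_acc_update(drift_acc_t *a, int temp_c, double interval_s)
{
    a->accumulated_s += lookup_drift_ppm(temp_c) * 1e-6 * interval_s;
    int leaps = 0;
    while (a->accumulated_s <= -1.0) {  /* RTC a full second behind */
        a->accumulated_s += 1.0;
        leaps++;
    }
    a->leap_seconds += leaps;
    return leaps;
}
```

    At a constant -12 ppm this inserts roughly one leap second per day.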

    Regards

  • Hi Fredrik,

    can you provide sample code for "Frequency measurements to compensate RTC frequency" as mentioned in the CC2640 Technical Reference Manual on page 1301? 

    I am aware that it is not suitable for the problems discussed earlier but is intended for the case when the RTC is driven by the internal RC oscillator.

    Regards,

    MR

  • Hi MR,

    Correcting for temperature drift is absolutely doable using a look-up table. There is no example code for this specifically, but you can use the BATMON module to read the temperature.

    Regards,
    Fredrik
  • Hi MR,

    The "Frequency measurement to compensate RTC frequency" is a low level driver that is used to calibrate the quite inaccurate RCOSC_LF to be within 500 ppm to fulfill the Bluetooth requirements. It should be available in source, but I need to dig a bit in the code to figure out where it is.

    Regards,
    Fredrik