LAUNCHXL2-RM57L: Hercules Clocking

Part Number: LAUNCHXL2-RM57L

Got a question on the clocking in the Hercules, and I guess microcontrollers in general (still somewhat new to microcontrollers, especially the intricacies). What is the dependency between the peripheral clock, the system clock, and instruction execution? The CPU executes at up to 330MHz, but when I toggle a pin on and off (just sit in a while loop and switch the pin voltage high and low, nothing else), I get an output frequency of about 3.4MHz when the peripheral clock is running at 75MHz. This makes sense, since it means each switch takes about 140ns (a load and a store instruction take about 5 cycles each, and 10 cycles at 75MHz is 133ns, plus another instruction or two in there).

So, my question is: when is each clock used? When executing an instruction that needs a peripheral, does the execution effectively "switch" clocks (the CPU clock is used until something needs to be stored in a peripheral register), lowering the execution speed? I'd imagine this is the case, since there may be ordering dependencies when executing instructions (you don't want to start an instruction if a peripheral-based instruction needs to finish first).

I've also been wondering, what is the difference between the system clock and the CPU clock? I'm guessing that the CPU clock just runs the CPU instructions (CPU register access, ALU stuff, etc...) and the system clock does data transfers between the different subsystems (memory, cache, bus access, etc...).

Hopefully those questions make sense, I'm still trying to sort it out in my head a bit. Also, if this subject is already in some documentation, just let me know where it is since I haven't seen anything that really describes this stuff.

Thanks,

Max

  • Hi Max,

    The clocking in an MCU is divided into domains. Each domain is driven by a different clock, and the limits on those domains are derived from the limitations of the modules/peripherals that reside in them. The CPU is generally driven by GCLK, which in your example is the full 330MHz clock. Each instruction executes at the 330MHz clock rate (most instructions are single-cycle instructions). There are, however, some wait states imposed for accessing slower elements in the device, or perhaps due to arbitration between masters, etc.

    If you look at the architectural block diagrams in the TRM and Datasheet, you will see the various masters, peripherals, and memories feeding into blocks called switched central resources (SCRs), or bridges between busses in the device. These SCRs perform several tasks such as bus arbitration, bus protocol translation, and timing-related tasks. In the case of a peripheral access, these interfaces add roughly 24-25ns of latency for each access between the CPU and the peripheral. The peripheral clock only governs timing within the peripheral domain and determines, for example, how long an access to a peripheral register takes. The CPU knows there is a delay, and this is accommodated up to a point, until a timeout occurs and an exception is flagged.

    It is all rather complex and impossible to explain fully in this context. Certainly, a review of the Harvard architecture would be helpful, as well as some research into MCU architectures in general.
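    As a rough illustration of why a 330MHz CPU only toggles a pin at a few MHz, here is a small C sketch that models the cost of one peripheral register access in CPU cycles. The cycle count per access is an illustrative assumption, not a figure from the TRM:

    ```c
    #include <stdio.h>

    int main(void) {
        const double gclk_hz = 330e6;  /* CPU clock (GCLK) */
        const double vclk_hz = 75e6;   /* peripheral clock (VCLK) */

        /* Assumed cost, for illustration only: a peripheral register
         * access takes a few VCLK cycles, including bridge/SCR overhead. */
        const double vclk_cycles_per_access = 4.0;

        /* While the access completes, the CPU stalls for the
         * equivalent number of GCLK cycles. */
        double access_ns = vclk_cycles_per_access / vclk_hz * 1e9;
        double stalled_gclk_cycles = access_ns * gclk_hz / 1e9;

        printf("one peripheral access: ~%.0f ns\n", access_ns);
        printf("CPU cycles stalled:    ~%.0f\n", stalled_gclk_cycles);
        return 0;
    }
    ```

    The point is that a handful of slow-domain cycles costs the CPU many fast-domain cycles, which is why peripheral-bound loops run far below the headline clock rate.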

    For Hercules, there is also a clock tree diagram in the Datasheet. It shows how the OSCIN clock is used to derive all the different clock domains within the device, and it also covers other clocks, such as the LF and HF LPO clocks, within the device architecture.
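    For a concrete sense of the kind of derivation the clock tree describes, here is a hedged sketch in C. The multiplier and divider values are example numbers chosen so the CPU clock lands at 330MHz; the actual PLL settings, divider options, and domain limits are in the RM57 Datasheet and TRM:

    ```c
    #include <stdio.h>

    int main(void) {
        /* Example values only -- the real multiplier and dividers come
         * from the PLL and clock-domain registers in the TRM. */
        const double oscin_hz = 16e6;   /* external oscillator (OSCIN) */
        const double pll_mult = 165.0;  /* PLL multiplier (example)    */
        const double pll_div  = 8.0;    /* PLL divider (example)       */

        double gclk_hz = oscin_hz * pll_mult / pll_div;  /* CPU clock domain */
        double hclk_hz = gclk_hz / 2.0;  /* system/bus clock (example /2)    */
        double vclk_hz = hclk_hz / 2.0;  /* peripheral clock (example /2)    */

        printf("GCLK: %.0f MHz\n", gclk_hz / 1e6);  /* 330 MHz */
        printf("HCLK: %.1f MHz\n", hclk_hz / 1e6);
        printf("VCLK: %.2f MHz\n", vclk_hz / 1e6);
        return 0;
    }
    ```

    The structure (one oscillator input, a PLL, then per-domain dividers) is the general pattern the Datasheet clock tree documents.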