
Running the MSP430G2231 at 16MHz

Other Parts Discussed in Thread: MSP430G2231

I want to run the MSP430G2231 at 8 or 16 MHz. According to the datasheet, the device has internal calibration information to achieve this speed.

When I look at the MSP430G2231.h file, I find DCO and BCS calibration pointers to the 1MHz speed, but no other clock speed.

On the io430G2231.h file, however, the pointers to the internal address containing the 16 MHz calibration are provided.

Is this an oversight on the MSP430G2231.h file, or should I assume that the data on the 0x10F8 (for DCO) and 0x10F9 (for BCS) is not to be trusted?

I found another post on this forum stating there are some tools from ElProtonic to do the 8 or 16 MHz calibration. Can I get a link to the exact tools that would allow me to find this info?

Is there any other way to easily achieve the 8 or 16 MHz frequency with good precision?

Thanks for your input! Best regards,


  • If the datasheet says the values are there and the iox.h file has the pointers but the mspx.h file doesn't, it looks like an oversight from generating the mspx.h file from an older one. Older MSPs didn't have any calibration information other than the 1MHz values.

    Jose Quinones said:
    Is there any other way to easily achieve the 8 or 16 MHz frequency with good precision?

    What do you call precision?

    The calibrated values aren't precise. First, the DCO running on these values will only be precise at a certain temperature. The temperature drift of the DCO is nothing you can ignore. Then, unless modulation is 0, meaning that coincidentally the required frequency is directly available from one of the 256 possible DCO/RSEL combinations, you'll have jitter on the clock as the DCO constantly jumps between two frequencies to achieve the correct average frequency.

    Good precision is an external crystal. Jitter-free, low temperature-coefficient, no calibration needed.

    If you want to calibrate your chip yourself, all you need is a precision clock source (e.g. an 1MHz quartz oscillator). Then set up a timer with capture/compare to measure the DCO driven SMCLK against the external clock and adjust the DCO up and down until it matches. Then store the resulting value as calibration value. You can even do so for two temperatures (by using the internal temperature sensor) and you can later compensate for the temperature drift.

    I haven't done it myself, but the hardware inside the FLL+ or the unified clock module in newer MSPs does exactly this, using an external 32kHz crystal or the internal calibrated 32kHz REFO as reference. It shouldn't be too difficult to do it in software (I believe there is even an older app note showing this on older MSPs without calibration data or an FLL).
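The measure-and-adjust loop described above can be sketched in plain C on a PC. The `dco_counts()` model below is entirely made up (real silicon is nonlinear and temperature dependent), as are the function names and the ±5-count tolerance; only the step-up/step-down logic mirrors the described procedure.

```c
/* Host-side sketch of the capture/compare feedback loop: count SMCLK
 * ticks between two ACLK capture events and nudge an 8-bit DCO tap up
 * or down until the count matches the target. */

#define TARGET_DELTA 3906u              /* 3906 x 4096 Hz ~= 16 MHz */

/* Hypothetical stand-in for the real timer measurement: counts per
 * 4096 Hz ACLK period as a linear function of the DCO tap. */
static unsigned int dco_counts(unsigned char dco_tap)
{
    return 2000u + 10u * dco_tap;
}

/* Walk the tap up or down until the measurement is within +-5 counts
 * of the target, mimicking the DCOCTL++/DCOCTL-- adjustment. */
static unsigned char calibrate(unsigned int target)
{
    unsigned char tap = 128;            /* start mid-range */
    int i;
    for (i = 0; i < 256; i++) {
        unsigned int c = dco_counts(tap);
        if (c + 5 >= target && c <= target + 5)
            break;                      /* close enough: done */
        if (c > target)
            tap--;                      /* too fast, slow it down */
        else
            tap++;                      /* too slow, speed it up */
    }
    return tap;
}
```

On real hardware the stored result would then be the final DCOCTL/BCSCTL1 pair rather than a synthetic tap value.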

  • I believe that the error was in allowing the following --  "On the io430G2231.h file, however, the pointers to the internal address containing the 16 MHz calibration are provided.".

    The G2231 isn't supposed to have any calibrated DCO values besides 1 MHz -- I believe the product web page and the data sheet both say that.  It is one of the few significant differences between the G2231 and the F2012: the G2231 has only the single calibrated frequency, whereas the F2012 has 1, 8, 12, and 16 MHz calibrated.  You can run the G2231 at 16 MHz, but be mindful that there is a tight range of minimum and maximum supply voltage at which 16 MHz operation is defined to be possible.  As the Vcc supply voltage decreases, so does the maximum supported speed of the chip according to the data sheet, so keeping under 8 MHz will allow you much more low-Vcc headroom and lower power consumption at runtime.

    If you want to self-calibrate the IC to run at 16 MHz, you'll have to experiment with the DCO clock control register values in the neighborhood of the range where you'd expect to find 16 MHz and make your own calibration.  Somewhere there is a program that will calculate the right register settings and store the calibration values into flash in the information segment in the usual manner for such values, though to use it you need a crystal or other accurate reference time base to determine the frequency.

    If you want a MSP IC that runs at 16 MHz that is factory calibrated just get a F2012 instead and use the G2231 for lower speed / less critical applications.

    I am a bit surprised that they left out the 8/12/16 MHz calibrations for the G2231, though, I guess they must have figured that the calibration process actually cost enough money to make it a worthwhile savings to eliminate the testing on the value line parts.


  • Hi Jose,

    if you need 'precision' use a crystal (+ load caps)!

    You can 'calibrate' the DCO to the desired frequency on your own. Use the DCO Library you can find here to get the values.

    Kind regards

  • Hi,

    by the way, the MSP430F20xx/MSP430G2xx code examples contain a sample program called 'msp430x20xx_dco_flashcal.c'. You can download it here or find the source code below.

    It will calculate and store new DCO data to INFOA but you need to have the external 32kHz crystal soldered in place.


    //  MSP430F20xx Demo - DCO Calibration Constants Programmer
    //  Description: This code re-programs the F2xx DCO calibration constants.
    //  A software FLL mechanism is used to set the DCO based on an external
    //  32kHz reference clock. After each calibration, the values from the
    //  clock system are read out and stored in a temporary variable. The final
    //  frequency the DCO is set to is 1MHz, and this frequency is also used
    //  during Flash programming of the constants. The program end is indicated
    //  by the blinking LED.
    //  ACLK = LFXT1/8 = 32768/8, MCLK = SMCLK = target DCO
    //  //* External watch crystal installed on XIN XOUT is required for ACLK *//
    //           MSP430F20xx
    //         ---------------
    //     /|\|            XIN|-
    //      | |               | 32kHz
    //      --|RST        XOUT|-
    //        |               |
    //        |           P1.0|--> LED
    //        |           P1.4|--> SMCLK = target DCO
    //  A. Dannenberg
    //  Texas Instruments Inc.
    //  May 2007
    //  Built with CCE Version: 3.2.0 and IAR Embedded Workbench Version: 3.42A
    #include "msp430x20x1.h" // do not forget to change this according to the device you're using

    #define DELTA_1MHZ    244                   // 244 x 4096Hz = 999.4Hz
    #define DELTA_8MHZ    1953                  // 1953 x 4096Hz = 7.99MHz
    #define DELTA_12MHZ   2930                  // 2930 x 4096Hz = 12.00MHz
    #define DELTA_16MHZ   3906                  // 3906 x 4096Hz = 15.99MHz

    unsigned char CAL_DATA[8];                  // Temp. storage for constants
    volatile unsigned int i;
    int j;
    char *Flash_ptrA;                           // Segment A pointer
    void Set_DCO(unsigned int Delta);

    void main(void)
    {
      WDTCTL = WDTPW + WDTHOLD;                 // Stop WDT
      for (i = 0; i < 0xfffe; i++);             // Delay for XTAL stabilization
      P1OUT = 0x00;                             // Clear P1 output latches
      P1SEL = 0x10;                             // P1.4 SMCLK output
      P1DIR = 0x11;                             // P1.0,4 output

      j = 0;                                    // Reset pointer

      Set_DCO(DELTA_16MHZ);                     // Set DCO and obtain constants
      CAL_DATA[j++] = DCOCTL;
      CAL_DATA[j++] = BCSCTL1;

      Set_DCO(DELTA_12MHZ);                     // Set DCO and obtain constants
      CAL_DATA[j++] = DCOCTL;
      CAL_DATA[j++] = BCSCTL1;

      Set_DCO(DELTA_8MHZ);                      // Set DCO and obtain constants
      CAL_DATA[j++] = DCOCTL;
      CAL_DATA[j++] = BCSCTL1;

      Set_DCO(DELTA_1MHZ);                      // Set DCO and obtain constants
      CAL_DATA[j++] = DCOCTL;
      CAL_DATA[j++] = BCSCTL1;

      Flash_ptrA = (char *)0x10C0;              // Point to beginning of seg A
      FCTL2 = FWKEY + FSSEL0 + FN1;             // MCLK/3 for Flash Timing Generator
      FCTL1 = FWKEY + ERASE;                    // Set Erase bit
      FCTL3 = FWKEY + LOCKA;                    // Clear LOCK & LOCKA bits
      *Flash_ptrA = 0x00;                       // Dummy write to erase Flash seg A
      FCTL1 = FWKEY + WRT;                      // Set WRT bit for write operation
      Flash_ptrA = (char *)0x10F8;              // Point to beginning of cal consts
      for (j = 0; j < 8; j++)
        *Flash_ptrA++ = CAL_DATA[j];            // Re-flash DCO calibration data
      FCTL1 = FWKEY;                            // Clear WRT bit
      FCTL3 = FWKEY + LOCKA + LOCK;             // Set LOCK & LOCKA bit

      while (1)                                 // Blink LED to show completion
      {
        P1OUT ^= 0x01;                          // Toggle LED
        for (i = 0; i < 0x4000; i++);           // SW Delay
      }
    }

    void Set_DCO(unsigned int Delta)            // Set DCO to selected frequency
    {
      unsigned int Compare, Oldcapture = 0;

      BCSCTL1 |= DIVA_3;                        // ACLK = LFXT1CLK/8
      TACCTL0 = CM_1 + CCIS_1 + CAP;            // CAP, ACLK
      TACTL = TASSEL_2 + MC_2 + TACLR;          // SMCLK, cont-mode, clear

      while (1)
      {
        while (!(CCIFG & TACCTL0));             // Wait until capture occurred
        TACCTL0 &= ~CCIFG;                      // Capture occurred, clear flag
        Compare = TACCR0;                       // Get current captured SMCLK
        Compare = Compare - Oldcapture;         // SMCLK difference
        Oldcapture = TACCR0;                    // Save current captured SMCLK

        if (Delta == Compare)
          break;                                // If equal, leave "while(1)"
        else if (Delta < Compare)
        {
          DCOCTL--;                             // DCO is too fast, slow it down
          if (DCOCTL == 0xFF)                   // Did DCO roll under?
            if (BCSCTL1 & 0x0f)
              BCSCTL1--;                        // Select lower RSEL
        }
        else
        {
          DCOCTL++;                             // DCO is too slow, speed it up
          if (DCOCTL == 0x00)                   // Did DCO roll over?
            if ((BCSCTL1 & 0x0f) != 0x0f)
              BCSCTL1++;                        // Select higher RSEL
        }
      }
      TACCTL0 = 0;                              // Stop TACCR0
      TACTL = 0;                                // Stop Timer_A
      BCSCTL1 &= ~DIVA_3;                       // ACLK = LFXT1CLK
    }

  • Hi All,

    Thanks for your answers! I was able to download the DCO_Library file set, but when I add it to my project I get the following error:

    Error[e46]: Undefined external "TI_SetDCO(int)" referred in main ( ...\Debug\Obj\main.r43 )

    Is there a parameter that I need to configure on the assembler/compiler for this to work?

    I have the DCO_Library.h file included and in it you can find the line:

    extern char TI_SetDCO(int Delta);

    which declares the function. Even if I move this to the main.c file, I get the same error.



  • Hi Jose,

    the documentation of the library should give you the information you need.

    If you want to have 'the easiest solution' you should use the example code from my post above.

    Now you may ask yourself: what's the difference? Well, the example is useful if you want to re-create the DCO calibration data (e.g. because you've erased it by accident, or because there is no data for your specific frequency, such as 10MHz). The library is intended for re-calibrating the DCO during runtime. That's useful because the DCO frequency depends on the temperature (the default DCO calibration values were generated at 25°C).


    I guess the function in question is in the HAL library that supports several basic tasks like setting the DCO or configuring the clocks. This library needs to be included in the link, so the proper function can be extracted and put into the code.

    I don't use it, so I don't have it, nor do I know the link, but I know that such a thing exists and it would explain the error.

    Anyway, aBUGSworstnightmare is correct: the library simulates the FLL module hardware found on newer MSPs for an adjustment at runtime. You can use it for calibration anyway by letting it settle and then putting the resulting register values into config memory. Then you can throw the code away for your project.


  • Hi All,

    Thanks again for all your help. I am following along and I understand what we are discussing, so I am definitely making progress here. There is only one concern.

    It is clear to me that to better tune the DCO, closing the loop with a very precise external watch clock will help me find the right DCO and BCS register values. I gave a try to one of the pieces of code you kindly offered before, and it worked very nicely!

    It is also clear to me that I need to repeat this process every now and then, as temperature changes will affect the DCO, not to mention that the tune-up is not 100% accurate, so over time I would lose even more accuracy and precision, returning me to the very same predicament I started with.

    However, since I only have one timer on the G2231 and I need to be outputting PWM practically at all times, it seems to me this is not going to pan out. Or is there a secret technique I am missing here?

    I am using the PWM output to close the loop on a motor that must reach a commanded position. I can envision the tune-up subroutine running as soon as I reach the commanded position, but what if I receive a command while I am tuning? Clearly the best way to know is to actually run it and let the real-time output call itself out. I am thinking the collision will not be very pleasing, though, so in the end this may not be an option.

    I am thinking about using a 4 MHz oscillator as my TACLK, as that would solve my problems. I don't care if the DCO is running close to or far away from 16 MHz, because all I need is superbly fast instruction execution in order to avoid the ISR collisions that were harming the motion control execution.

    I am using the timer to generate PWM output (to control the motor speed) but also to read a time command. Needless to say, this interaction must work flawlessly. Luckily the reads occur every 1 or 2 ms, so I have lots of time between events. However, at the event, instruction execution has to be super fast because the read and the output are asynchronous to each other, so every now and then they coincide. If I don't service the ISR fast enough, the PWM output gets messed up pretty badly, and that is quite catastrophic.

    Of course this would be super easy if I had two timers and could use one for the read and one for the PWM. But of course this is about making it as cheap as possible (as always), so one timer will have to do! The 4 MHz oscillator may not be price-attractive, though, so in the end I may end up using the DCO technique at the risk of adding a delay in response.


  • Jose Quinones said:
    returning me to the very same predicament I started with.

    Not really. The initial variation of the DCO is laaaaarge. The same DCO setting on different chips can result in ±50% or even more difference in clock frequency. Having it calibrated once will drop this to maybe 1 or 2% or even less.

    For the temperature compensation, you may measure at different temperatures, calculate the resulting error from the base calibration, and set points at which to switch to a different setting. This can be done before flashing the actual application into the MSP, and therefore the timers are free again. At runtime all you have to do is check the temperature and, if necessary, switch to a different calibration setting.

    Jose Quinones said:
    the instruction execution has to be super fast because the read and the output are asynchronous to each other, so every now and then they would coincide. If I don't service the ISR fast enough, the PWM output gets messed up pretty bad and this is quite catastrophic.

    Don't you use the MSP's internal PWM generation hardware? With it, the PWM duty cycle is independent of code execution. Code is only needed to change the duty cycle or PWM frequency, not to maintain a steady output.
    You'll need CCR0 for setting the PWM frequency, and then CCR1 (and the other CCRs, but I think the G2231 has only one) for the PWM output on the TA.1 pin. There are several other threads about this.
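The switch-point idea above could look something like this in C. All table values and names are invented placeholders, not real calibration data, and the drift direction suggested in the comments is only illustrative.

```c
/* Sketch of table-based temperature compensation: calibrate once at a
 * few temperatures, store the register pairs, and at runtime just pick
 * the row matching the measured temperature. */
#include <stddef.h>

struct cal_entry {
    int max_temp_c;             /* entry valid up to this temperature */
    unsigned char dcoctl;       /* stored DCOCTL value  (placeholder) */
    unsigned char bcsctl1;      /* stored BCSCTL1 value (placeholder) */
};

/* Hypothetical 16 MHz table measured at three temperatures. */
static const struct cal_entry cal_16mhz[] = {
    { 10, 0x8D, 0x8F },         /* cold: DCO runs fast, back tap off  */
    { 40, 0x91, 0x8F },         /* room-temperature baseline          */
    { 85, 0x95, 0x8F },         /* hot: DCO runs slow, bump tap up    */
};

/* Return the row for the current temperature; temperatures above all
 * thresholds fall through to the last row. */
static const struct cal_entry *pick_cal(int temp_c)
{
    size_t i;
    for (i = 0; i + 1 < sizeof cal_16mhz / sizeof cal_16mhz[0]; i++)
        if (temp_c <= cal_16mhz[i].max_temp_c)
            break;
    return &cal_16mhz[i];
}
```

On the MSP the caller would read the internal temperature sensor via the ADC and then write the selected pair into DCOCTL and BCSCTL1.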

  • Hmmm... You just gave me an idea!

    At the moment I am using TACCR1 to generate my PWM. In the ISR, I set the beginning of the PWM cycle and configure a reset of the output for the moment the duty cycle is met. When that event happens, I then configure a set of the output at the start of the next cycle. The only problem with this approach is that I cannot use duty cycles close to 0% or 100%, because then I run the chance of losing one of the timer events, in which case the PWM gets all screwed up, as I need an entire TAR cycle to resume PWM operation.

    I cannot use TACCR0 to configure the PWM cycle length or frequency because I am using TAR to measure the length of the pulse that is my speed command. Well, I could, but then I would need to use the timer overflow interrupt as a counter to measure my input signal's time. Hmmm... I'll be darned, that could actually work pretty nicely...

    However, I was thinking about configuring the timer to run up to 0xFFFF (continuous mode) and configuring TACCR1 to RESET/SET. In this case I would use the TACCR0 interrupt to configure the next cycle start and TACCR1 to hold my duty cycle. I would still have a problem with latency if, for example, I tried too low a duty cycle, but I would be able to reach 100% duty cycle, as the clear event would be 100% in hardware with no software intervention whatsoever.

    Thanks for stirring my neurons!

    PWM control using software interrupts is an unsafe thing. The workings depend on the interrupt latency (you'll probably have some more ISRs running) and, as you already noticed, you cannot get close to 0% and 100%. How close you can get depends on the maximum latency, that is, the added execution time of all higher-priority ISRs plus the PWM ISR latency, plus the length of the longest section in main where interrupts are disabled (e.g. while using the hardware multiplier).

    If you use TAR overflow (TA running in continuous mode), you will have a fixed PWM frequency (that of the TAR overflow), but you can use CCR0 for setting the hardware PWM to a fraction of this (e.g. setting CCR0 to 32768 will get you 50% duty cycle etc.). I haven't tried it, but since the set/reset modes trigger on TAR->CCRx and TAR->0, it should work.

    I'm not sure whether you can actually get 100% and 0% (one of them should work, depending on polarity); at least you'll get within 1 tick of it. And for a real 100% or 0% setting, you can set the OUT bit and switch the outmode to output-only. This will tie the port pin to the OUT bit value (PWM disabled). It can be done in the function that sets the current PWM value, as a special case.

    TACCR1 is then free for your capturing.

    You should, however, set the new value in a global variable and do the actual switching of CCR0 in either the TAR overflow or the CCR0 ISR (depending on polarity), so the switch always happens at the start or end of a PWM cycle.

    Jose Quinones said:
    I was thinking about configuring the timer to run up to FFFF (continuous mode) and configuring TACCR1 to RESET/SET. In this case I would use the TACCR0 interrupt to configure the next cycle start and TACCR1 to hold my Duty Cycle.

    Well, in my setup above you can switch CCR0 and CCR1 at will (CCR0 is only special when using up mode, as it will trigger the timer overflow prematurely). However, depending on your signal logic, it makes no difference whether you change the cycle in the CCR0 or TAROV interrupt.
    And I was thinking you needed the second CCR as input for your measurement.
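The 0%/100% special case described above could be wrapped like this. The function name, the percent interface, and the enum are illustrative, not part of any TI library; in continuous mode the period is the full 16-bit TAR rollover of 65536 ticks, so setting the compare value to 32768 gives 50% duty.

```c
/* Map a duty cycle in percent to a compare value over a 65536-tick
 * period. 0% and 100% are flagged so the caller can switch the output
 * unit to a fixed level (OUT bit, PWM disabled) instead of relying on
 * a compare match that may be missed. */
#include <stdint.h>

enum pwm_mode { PWM_FORCE_LOW, PWM_COMPARE, PWM_FORCE_HIGH };

static enum pwm_mode duty_to_ccr(unsigned int percent, uint16_t *ccr)
{
    if (percent == 0) {
        *ccr = 0;
        return PWM_FORCE_LOW;     /* tie pin low, bypass the compare  */
    }
    if (percent >= 100) {
        *ccr = 0xFFFF;
        return PWM_FORCE_HIGH;    /* tie pin high, bypass the compare */
    }
    /* 65536 * percent / 100, computed in 32 bits to avoid overflow */
    *ccr = (uint16_t)((65536UL * percent) / 100UL);
    return PWM_COMPARE;           /* normal hardware PWM compare      */
}
```

On the MSP the two forced cases would translate to writing the OUT bit and selecting the output-only outmode, while PWM_COMPARE loads the returned value into the CCR at the next cycle boundary.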