SPI and UART of MSP430F5438

Other Parts Discussed in Thread: MSP430F5438

Hi,

I would like to verify a few details in my design, which includes an MSP430F5438:

1. I'm going to connect the SPI pins (e.g. 76, 77, 78, 79) directly to 32 chips with the following parameters:

LOGIC INPUTS (SDI, SCLK, CS, SDA, GPIO)

Parameter                Limit         Unit  Conditions
VIH, Input High Voltage  0.7 × VDRIVE  V
VIL, Input Low Voltage   0.4           V
IIH, Input High Current  −1            μA    VIN = VDRIVE
IIL, Input Low Current   1             μA    VIN = GND
Hysteresis               150           mV

Table 4. SPI Timing Specifications

Parameter  Limit  Unit     Description
fSCLK      5      MHz max  SCLK frequency
t1         5      ns min   CS falling edge to first SCLK falling edge
t2         20     ns min   SCLK high pulse width
t3         20     ns min   SCLK low pulse width
t4         15     ns min   SDI setup time
t5         15     ns min   SDI hold time
t6         20     ns max   SDO access time after SCLK falling edge
t7         16     ns max   CS rising edge to SDO high impedance
t8         15     ns min   SCLK rising edge to CS high

Is this OK in terms of load and timing?

2. What is the maximum UART rate in transmission?

Thanks,

Rafi

  • Hi Rafi,

    The maximum SPI clock rate is the system clock. However, lacking attached hardware, I haven't tested yet whether a divider of 1 will work properly. Anyway, your devices only allow 5MHz, so with a 16MHz system clock, you can only get a maximum SPI bit clock of 4MHz (16/4). With a system clock of 15MHz, you could get 5MHz (15/3). Less is sometimes more :) But even with 4MHz, you'll have a hard time keeping the SPI hardware stuffed: only 32 or 24 MCLK cycles per byte. You'll need optimized C code for this, and for higher clock rates, hand-optimized assembly code is necessary to reach MCLK/2. I went over to using the DMA controller for block transfers. Using an ISR for handling the sending is a no-go at these speeds.
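
    For orientation, a minimal polling send loop along those lines could look like this (a sketch, untested; it assumes USCI A0 in SPI master mode with the CS line already pulled low, and SpiSendBlock/buf/len are invented names):

    // Sketch: tight polling send of a block over USCI A0 in SPI master mode.
    // Assumes the module is configured and the slave's CS line is already low.
    void SpiSendBlock(const unsigned char *buf, unsigned int len)
    {
      while (len--) {
        while (!(UCA0IFG & UCTXIFG));  // wait until TXBUF can take the next byte
        UCA0TXBUF = *buf++;            // TX is double-buffered, shifting continues
      }
      while (UCA0STAT & UCBUSY);       // wait until the last byte has left the shifter
      (void)UCA0RXBUF;                 // discard the received dummy data, clear UCRXIFG
    }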

    Also note that you'll require 32 separate chip select lines to address your devices, unless they are capable of daisy-chaining (which, OTOH, will significantly reduce the throughput if you just want to access one single device).

    You can, however, reduce the number of required CS lines by using two (or even four) separate SPI channels and attaching 2 or 4 devices to the same CS line but different SPIs. The 5438 has 8 SPIs. Note that in SPI master mode, the CS line for addressing the slave(s) can be any I/O pin, not just the STE pin assigned to the USCI. The dedicated pin is only mandatory if using the MSP in slave mode or multi-master mode.

    After a quick look over the timings, there should be no problems. Neither the clock requirements nor the setup and hold times sum up to something that could be a problem.

    Depending on your devices, you might need to experiment with the clock phase and/or polarity settings as well as with the LSB/MSB-first and 7/8 bit data settings. For MMC/SD cards, I was successful with setting the UCCKPL bit (inactive clock is high).
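
    For reference, the phase/polarity/order bits live in UCxCTL0 and must be changed while the module is held in reset; a sketch (bit names straight from the header, the right combination depends on your slave's data sheet):

    // Sketch: selecting SPI clock phase/polarity and bit order on USCI A0.
    UCA0CTL1 |= UCSWRST;     // hold the module in reset while changing the setup
    UCA0CTL0 |= UCCKPL;      // inactive clock high (what worked for MMC/SD here)
    // UCA0CTL0 |= UCCKPH;   // set/clear to move the capture to the other edge
    UCA0CTL0 |= UCMSB;       // MSB first (most slaves); clear for LSB first
    // UCA0CTL0 |= UC7BIT;   // only if the slave really wants 7-bit characters
    UCA0CTL1 &= ~UCSWRST;    // release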

    As for the load, it is fairly low. 32µA is definitely no problem for the MSP, but depending on the voltages used (if you use different ones for your SPI devices and the MSP) you should ensure that 1) the MSP's output voltage is high enough to meet the 0.7 × VDRIVE requirement of the SPI devices (won't work with VccMSP = 3V and VccSPI = 5V) and 2) the current rushing into the MSP port pins when exceeding VccMSP+0.2V is not higher than the allowed ±2mA through the clamp diodes. (In addition, this inrush current might cause a rise of VccMSP in case the total current consumption on VccMSP is smaller than the inrush, depending on the voltage regulator used.)

     

  • Thanks Jens-Michael for your detailed reply.

    I would like to verify a few more issues:

    1. Does working with an SPI clock of 2MHz still require very optimized C code? Can we pass a pointer to a buffer, or is each word handled separately? Do you have a code sample?

    2. Regarding the CS line: it looks like our slaves work in 16-bit words. However, the MSP430F works in bytes (8 bits). How is that solved? How does the CS signal behave? In the data sheets we saw the MSP430F raise the CS line after 8 clock pulses, but the slave wants it low for 16 pulses. Both chips claim to support SPI. Is it not a standard?

    3. What is the maximum UART rate and throughput?

    Thank you very much,

    Rafi

  • Good questions. There's no definitive answer; it all depends on the surrounding conditions.

    1) If your MCLK is 16MHz and your SPI clock is 2MHz, you have 64 clock cycles for checking whether TXBUF is empty (for putting the next byte to send, or a dummy byte for receive, in) and checking whether a byte was received and storing it away. It is still not much time, and definitely not enough for letting an ISR handle this. But it should work.

    The main advantage of SPI is that YOU clock the slave. That means, if you delay putting a byte into TXBUF, the slave won't send anything into your RXBUF. So if your code is too slow, throughput will decrease, but unless you make something terribly wrong, you won't lose data.

    2) The CS line is (at least with all slaves I know) initiating and ending an SPI transmission. If CS goes high even within an SPI transmission, the slave should abort the current command, and with the next CS transition to low, the slave should be ready for a new run as if nothing happened. If you use one CS line for several slaves and they are connected to different SPI lines (clock/data), all will be ready for duty, but only those which get an SPI clock signal will output data. The others will just go back to sleep when CS gets high again. So you can select up to 8 (the 5438 has 8 SPIs) at once and get data from one to eight just as you wish, even with different clocks and asynchronously.

    SPI transmissions are in 8 bit chunks (bytes) but can consist of any number of bytes. MMC cards, e.g., receive a block read command and the address, then send (still with CS low) a number of FF bytes while the internal controller fetches the data, then a SYNC byte that is not FF and then 512 bytes of data. All with CS low. And if you're satisfied after getting the first 256 bytes or so, simply put CS high and the rest is swallowed.

    There are other SPI slaves (e.g. D/A converters) which allow daisy-chaining. This means you clock your data into their input shift register from below, and the shift register will output its upper bit at the same time to the next device. All are selected with one CS line. When CS goes high again, they will take what is in their shift register at that moment. So the first byte you send is simply passed on to the last device in the chain, and the last byte is taken by the first device in the chain.

    So you are not required to raise CS after 8 clock pulses. You raise it when the transmission is finished - how many bytes you send/receive depends on the slave.
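
    To illustrate, a 16-bit slave word is then simply two back-to-back bytes with CS held low the whole time (a sketch, untested; it assumes USCI A0 in MSB-first master mode, and P1.0 as CS is an arbitrary example):

    // Sketch: clock one 16-bit word into/out of a 16-bit slave.
    unsigned int SpiTransfer16(unsigned int word)
    {
      unsigned int result;
      (void)UCA0RXBUF;                // clear a stale UCRXIFG
      P1OUT &= ~BIT0;                 // CS low: the slave starts listening
      UCA0TXBUF = word >> 8;          // high byte first (MSB-first setup)
      while (!(UCA0IFG & UCRXIFG));   // 8 clocks done, one byte received too
      result = UCA0RXBUF << 8;
      UCA0TXBUF = word & 0xFF;        // low byte, CS still low
      while (!(UCA0IFG & UCRXIFG));
      result |= UCA0RXBUF;
      P1OUT |= BIT0;                  // CS high: the slave latches the 16 bits
      return result;
    }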

    Note that if you want to receive a byte, you need to send a byte, because sending and receiving happen synchronously, and the clock pulses are generated when putting something to send into TXBUF.

    Also keep in mind that the send path is double buffered. So if you put a byte into TXBUF, it will (on an idle SPI) immediately be forwarded into the output shift register and TXBUF is empty again; you can put the next byte into it right away. When the byte in the shift register has been sent, the byte in TXBUF is moved into the shift register without delay, leaving TXBUF empty again. At the same moment, the received byte is moved into RXBUF (maybe 1/2 clock pulse later, depending on the phase setting). But now you're under pressure to fetch this byte from RXBUF, since the next one is already being clocked in by the TX byte that was just moved into the output shift register. And 8 SPI clocks later the next byte is ready in RXBUF. If you wait with writing into TXBUF until after you read RXBUF, you have plenty of time, but you also lose throughput, as the SPI transfer is stopped until you put the next byte into TXBUF.

    For this reason I implemented DMA transfers for larger data blocks. This will only work with USCI0 and 1, not 2 and 3, as there are no DMA triggers for USCI 2 and 3. Also, only one SPI can be handled by DMA at the same time, as it requires two DMA channels (one for RXBUF, one for TXBUF) and the 5438 has only three. If only sending on SPI, or sending OR receiving on I2C/UART, three could be handled at once. But since you don't know how many bytes have been received already, using DMA for continuous receive into a buffer is not a good idea - only if you know exactly how many bytes to expect and you wait for the complete transfer to be finished.

    3) For SPI, there is Fsystem as the maximum clock. I didn't time it out, but it seems that this is also the maximum SPI clock (= 16MHz on the non-A versions of the 5438). Would be nice. I2C is limited to a maximum bit clock of 400kHz but can be fed by Fsystem too. In UART mode (asynchronous operation), the maximum bit clock is 1MHz, and the maximum input clock is Fsystem too. But you shouldn't go higher than source/16 if you want the oversampling features for better synchronizing to the incoming bitstream. If you have the source clock (e.g. SMCLK) running at 1MHz and use a UART bit clock of 1MHz, then incoming bits must arrive at exactly 1MHz, or chances are that the UART cannot properly receive them. With a 1/16 divider, the UART can resync to the bit edges by 1/16 of the bit clock, allowing a much higher tolerance for receiving.
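
    As a concrete illustration, a 115200Bd UART setup from a 16MHz SMCLK with 16x oversampling might look like this (a sketch; the divider values are taken from the slau208 baud rate table and the P3.4/P3.5 pin mapping from the F5438 data sheet - verify both for your setup):

    // Sketch: USCI A0 as UART, 115200Bd from SMCLK = 16MHz, oversampling on.
    P3SEL |= BIT4 | BIT5;           // P3.4 = UCA0TXD, P3.5 = UCA0RXD on the F5438
    UCA0CTL1 |= UCSWRST;            // hold in reset during configuration
    UCA0CTL1 |= UCSSEL__SMCLK;      // BRCLK = SMCLK
    UCA0BR0 = 8;                    // 16MHz / (16 * 115200) = 8.68 -> UCBR = 8
    UCA0BR1 = 0;
    UCA0MCTL = UCBRF_11 | UCOS16;   // first-stage modulation 11, 16x oversampling
    UCA0CTL1 &= ~UCSWRST;           // release; the UART is ready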

    So in theory, the absolute maximum SPI throughput is 2,000,000 B/s (16MHz / 8 bit), for UART it is 100,000 B/s (1MHz / 10 bit) and for I2C it is 44,444 B/s (400kHz / 9 bit). Still, you need to be able to handle these data rates in your software.

     

  • Thanks again Jens-Michael Gross,

    Referring to your answer number 2: I understand I can control the low duration of the CS line of the MSP430F5438? I thought SPI is a fixed hardware module.

    Referring to your answer number 3: I wondered about the output throughput (transmit) of the UART module. We are waiting for code from TI that is supposed to output to a Bluetooth module (from Panasonic) at a throughput > 500Kbps.

    Do you see a problem in designing a system with the MSP430F5438 that receives data from sensors via 1M SPI and transmits it via 1M UART?

    Thanks,

    Rafi

     

     

  • The dedicated CS line of the SPI module is only used in slave or auto-master-slave mode. Then it controls the SOMI pin (it is high-impedance as long as CS is HIGH and set to output if CS goes LOW) and the acceptance of clock signals by the shift registers.

    If you're the master, it is not needed/used at all. For the other devices, you can use any I/O pin as CS pin. You set it low when you start a transmission and high again when you're finished. The SPI hardware is very simple and does not know when you intend to end a transmission. (Other than the I2C protocol/hardware, which has dedicated start/end patterns and addressing.)

     

    The throughput of the UART module is limited. However, UART means "Universal Asynchronous Receiver Transmitter" and refers to the RS232-like serial transmission mode of the USCI module. In this mode, certain limitations apply so the asynchronous transfer can be synchronized with the (unknown, since not transmitted separately) bit clock etc. Thus the limitation to 1MBd. And then you should clock it with a precise 16MHz (quartz based, and no low power modes used that would stop the quartz).

    In SPI mode, the USCI module works synchronously and there shouldn't be a problem operating the module at much higher speed. Provided the clock and data lines are short and with low capacitance, it should work with Fsystem. In my experiments, I was able to send and receive 512 byte bursts at 16MBd, using two DMA channels for send and receive. I didn't have any hardware attached, just connected SIMO and SOMI, so SCLK didn't count. Maybe in a real world setup it might be too fast. Anyway, on an ATmega128 with ~16MHz clock I used its hardware SPI module to interface an SD card with an 8MHz SPI clock without problems. Due to the lack of a DMA controller, the effective data transfer rate was about 400kb/s. On the MSP with DMA, it _should_ run bursts with no delay.

    Anyway, keep in mind that you need to handle the data, so while 1MBd burst speed shouldn't be a problem, continuously receiving/sending at 1MBd surely will be. And do not forget that 1MBd on UART is based on 10 bits per byte (= 100kB/s) while on SPI it is based on 8 bits per byte (= 125kB/s).

    And if you just want to forward SPI to UART, you'll probably need some double-buffering (unless your timing is very exact), which will cost even more processor load. You cannot use DMA for this purpose, as DMA only works with fixed, predefined sizes and you do not have a counter that tells you how much of a block has been transmitted already. So you'll have to go back to byte-by-byte transmissions, and then DMA is mostly useless, since polling the DMA registers takes as much time as polling the UART/SPI registers manually.

    If the timing is exactly balanced, a setup could be possible that automatically stuffs every incoming byte from SPI into the TX register of the UART. So you could automate the process up to the point where no CPU action is needed at all, putting the transfer completely into the background. This should work best if the MSP is set to be a slave, so the bytes are clocked into SPI when available from outside. Otherwise, you still must provide data to the SPI TX to make it clock the data in. You cannot automate it if the transmission shall be bidirectional, because then interpretation of the SPI data is necessary for the incoming/outgoing data flow.

    What will work or not depends on the external hardware and your imagination.

     

  • Thanks Jens-Michael Gross,

    Do you have sample code for UART to USB for the "MSP-EXP430F5438 Experimenter Board"?

    Do you have sample code for SPI?

    Thanks again,

    Rafi

     

  • Sorry, I don't have the experimenter board. My experience with the 5438 is fairly new; I have only the PS5X100 testboard (just a processor socket and some pin rows, no hardware on it).

    My UART code is too heavily modularized (1 to 2 or 4 UARTs etc.; it will compile for Atmel, MSP5438 as well as MSP1611), so I cannot give you a working example without giving you the surrounding project skeleton too. And I doubt my boss would appreciate this :)

    SPI is really straightforward. Once you've configured the controller for clock speed, phase and bit orientation, you only have to stuff bytes into the TX buffer and pull them from the RX buffer.

    void SpiInit (unsigned int speed) {
      UCA0CTL1 = UCSWRST;              // hold the USCI in reset while configuring
      UCA0CTL0 = UCSYNC|UCMST|UCCKPL;  // synchronous (SPI), master, inactive clock high
      UCA0CTL1 |= UCSSEL__SMCLK;       // clock the module from SMCLK (UCSSEL_2 in older headers)
      UCA0MCTL = 0;                    // no modulation in SPI mode
      UCA0BR0 = speed;                 // bit clock = SMCLK / speed
      UCA0BR1 = (speed>>8);
      UCA0CTL1 &= ~UCSWRST;            // release from reset
    }

    This function initializes the hardware. Speed is actually a divider for the system clock.

    The data transfer is absolutely simple. Before starting any transfer, pull the CS line (whichever I/O pin you assigned for this) low; when done with the transfers, push it high again. In between, the following functions should be useful:

    extern inline unsigned char SpiSendReceiveByteWait(const unsigned char data)
    {
      unsigned char i;
      i = UCA0RXBUF;              // dummy read to clear UCRXIFG (clearing the flag manually works too)
      (void)i;                    // silence the unused-variable warning
      UCA0TXBUF = data;           // start transmission
      while(!(UCA0IFG&UCRXIFG));  // wait for transmission complete
      return UCA0RXBUF;
    }

    This one sends a byte and at the same time receives a byte, delivering that byte back as the return value. SPI is bidirectional: while you send a byte, you receive a byte. If you only want to receive a byte, you have to send a dummy byte, and if you want to just send a byte, you'll receive a dummy byte. This function does both, but since it returns only after a byte has been sent and one has been received, it wastes much time for larger transfers. But for a successful SPI communication it is all you need.

    It is defined as extern inline, so the compiler might compile it directly into your code instead of calling it as a subroutine. This way, its optimizing features might shrink the code even more than with a subroutine call, as the parameters can be transformed to immediate values instead of being loaded into a register, clobbering it, etc.

     

    If you need more speed and don't want to waste too much time waiting for a return byte you don't need, use these macros:

    #define SpiSendByteFirst(x) do{ UCA0TXBUF = x; } while(0)  // clears UCTXIFG, sends first byte
    #define SpiSendByteNext(x)  do{ while(!(UCA0IFG&UCTXIFG)); UCA0TXBUF = x; } while(0)
    #define SpiSendByteLast(x)  do{ while(!(UCA0IFG&UCTXIFG)); UCA0TXBUF = x; while(UCA0STAT&UCBUSY); } while(0)  // waits until really done; UCRXIFG is not meaningful here

    The first one starts the transfer. It does not check whether there is still a byte being sent. All subsequent bytes except the last are sent with the middle macro; it waits until the transmit buffer is ready to take the next byte, then 'returns'. The last byte should be sent with the third macro, as it waits until really all data has been sent.

    The do{}while(0) construct ensures that the macro will expand to a single C block statement, even if used behind an if() statement without braces. So you can use it like any function.

    The opposite of this is

    #define SpiReceiveByteFirst(x) do{ UCA0TXBUF = 0xff; } while (0)                                     
    #define SpiReceiveByteNext(x)  do{ while(UCA0STAT&UCBUSY); (x)=UCA0RXBUF; UCA0TXBUF=0xff; } while(0)                                      
    #define SpiReceiveByteLast(x)  do{ while(UCA0STAT&UCBUSY); (x)=UCA0RXBUF; } while(0)

    Here, the first macro initiates reception of a byte by sending a dummy byte; the next one waits for reception, stores the byte, then sends another dummy byte; the last one just waits until the last byte has been sent (and therefore a byte has been received) and returns the received byte in the passed lvalue. Note that here the macro does not behave like a function: its parameter must be an lvalue, that means a variable name or a dereferenced pointer the macro can write to. A constant or a pointer/reference will produce an error.
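
    Chained together, a typical command/response exchange might look like this (a sketch; the 0xA5/0x01 command bytes and P1.0 as CS are invented for the example):

    // Sketch: send a hypothetical 2-byte command, then read 4 response bytes.
    unsigned char resp[4];
    P1OUT &= ~BIT0;                // CS low
    SpiSendByteFirst(0xA5);        // command byte (invented)
    SpiSendByteLast(0x01);         // parameter byte, wait until fully shifted out
    SpiReceiveByteFirst(resp[0]);  // argument unused, just starts the first dummy
    SpiReceiveByteNext(resp[0]);
    SpiReceiveByteNext(resp[1]);
    SpiReceiveByteNext(resp[2]);
    SpiReceiveByteLast(resp[3]);
    P1OUT |= BIT0;                 // CS high, transfer done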

    You see it is very simple to use SPI.

    This is not really optimized code. Since sending is double-buffered (one byte in the TX buffer, one byte in the output shift register), one could put another byte into the buffer while the last one is still in the output shift register. But then you would have to disable interrupts, as an IRQ between stuffing the next byte into TXBUF and reading the result of the current transfer would result in a receive overflow and also break the chain. Since disabling and enabling IRQs adds to the execution time too (and saving the current IRQ state on the stack, so you can use the code inside ISRs, requires assembly language), it makes no real difference. It only makes sense when transferring a bunch of data at highest speed (e.g. from a memory device like an MMC/SD card or an external flash memory chip), and I developed a block transfer function using DMA for this purpose. So I didn't see a reason for further optimization.

     

  • Thank you very much Jens-Michael Gross. You helped us a lot.

    We implemented your code, only we used the UCB0xx registers instead of UCA0xx. Accordingly, we removed the UCA0MCTL = 0; line from the init method, as that register is not available for B0.

    We didn't see any activity on the SPI lines (we checked the CLK line, for example). What do we need to do in order to see activity?

    thanks,

    Rafi

  • Did you enable the port pins for module activity? (PxSEL |= y) Since my code modules are designed for maximum portability, and the associated port pins are different even on processors with identical hardware modules, I didn't include the port initialisation in the init function. (The original code supports all 8 SPIs through an enumeration parameter.)

    It might be necessary to set the SIMO and SCLK pins to output too. Some modules do require this (I don't remember for the USCI) and so I always initialize the ports that way at system startup. While the firmware is modular and used across projects, the hardware is project dependent and so is the port initialisation.

    For selecting the slave, you'll also need to use any I/O line as CS signal and set it to low to tell the slave that it is selected and clock pulses are valid.

    Don't select the STE line associated with the SPI with PxSEL. If you do, it will put the SPI into slave mode once it goes low, at least in 4-wire SPI mode. The SPI hardware will not control any CS line for slave selection when in master mode; the hardware does not know when a new transfer starts and how many bytes it will be. It's up to your main code to manage this. Since the slave chip select is of no meaning to the SPI module, you should see activity on SIMO and SCLK without it as soon as you put something into the TX register. But of course it is necessary for any real communication.
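
    On the F5438, the whole port setup for USCI B0 could look like this (a sketch; the P3.1/P3.2/P3.3 mapping is from the F5438 data sheet, and P1.0 as CS is an arbitrary choice):

    // Sketch: route the USCI B0 SPI signals to their pins.
    // MSP430F5438: P3.1 = UCB0SIMO, P3.2 = UCB0SOMI, P3.3 = UCB0CLK.
    P3SEL |= BIT1 | BIT2 | BIT3;   // hand the pins over to the module
    P3DIR |= BIT1 | BIT3;          // SIMO and SCLK as outputs
    P3DIR &= ~BIT2;                // SOMI is an input
    // P3.0 (UCB0STE) deliberately stays a plain I/O; CS is driven by software:
    P1DIR |= BIT0;                 // P1.0 as CS (arbitrary choice)
    P1OUT |= BIT0;                 // deselect the slave initially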

  • Thanks Jens-Michael Gross. You have helped us a lot.

    We would like to output two clock signals: 250KHz and 1MHz from the MSP430F5438.

    We understand it can be done without an interrupt routine, only by configuring dividers for MCLK, SMCLK or ACLK and routing them to some port pins.

    We would appreciate it if you could instruct us (C code).

    Thanks,

    Rafi

     

     

  • Indeed, you can output all three system clocks to port pins.

    {P2DIR|=1;P2SEL|=1;} will output the current MCLK to P2.0 (pin 25).
    {P11DIR|=2;P11SEL|=2;} will output MCLK to P11.1 (pin 85).

    {P1DIR|=1;P1SEL|=1;} will output ACLK to P1.0 (pin 17).
    {P2DIR|=0x40;P2SEL|=0x40;} will output ACLK to P2.6 (pin 31).
    {P11DIR|=1;P11SEL|=1;} will output ACLK to P11.0 (pin 84).

    {P4DIR|=0x80;P4SEL|=0x80;} will output SMCLK to P4.7 (pin 50).
    {P11DIR|=4;P11SEL|=4;} will output SMCLK to P11.2 (pin 86).

    Depending on clock sources and dividers you'll get the required frequencies.

    You can also generate a clock signal by using the timer(s). Clock the timer from e.g. SMCLK (assumed 1MHz), run it in up mode with TAxCCR0 = 1, and set TAxCCTL0 to toggle output mode; each toggle halves the frequency again, so this will output 250kHz at P1.1 or P8.0 (TA0.0) or P2.1 or P8.5 (TA1.0). You can also use TimerB with output to P4.0 (TB0).
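
    A sketch of the timer variant (assuming SMCLK really runs at 1MHz and using TA0.0 on P1.1):

    // Sketch: 250kHz square wave on TA0.0 (P1.1) from SMCLK = 1MHz.
    P1DIR |= BIT1;                             // TA0.0 output on P1.1
    P1SEL |= BIT1;
    TA0CCR0 = 1;                               // up mode period = CCR0+1 = 2 clocks
    TA0CCTL0 = OUTMOD_4;                       // toggle on each CCR0 match (/2 again)
    TA0CTL = TASSEL__SMCLK | MC__UP | TACLR;   // 1MHz / 2 / 2 = 250kHz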

     

  • Thanks Jens-Michael Gross.

    Referring to the first half of your reply - how do we configure the dividers of MCLK, ACLK and SMCLK?

    thanks again,

    Rafi

  • I strongly recommend reading chapter 3 of the users guide slau208. It explains the unified clock system module in detail.

    On the 5438, the dividers are defined by setting the appropriate bits in the UCSCTL5 register. You can set them to divide the assigned oscillator's frequency by 1, 2, 4, 8, 16 or 32. You can even set an additional divider for the external ACLK output pins (DIVPA) that will output only a fraction of the internal ACLK to the pins.
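
    With the named bit fields from the header, the two frequencies asked for above could be derived like this (a sketch, assuming the selected clock sources already run at 1MHz):

    // Sketch: 1MHz SMCLK and 250kHz ACLK via the UCSCTL5 dividers.
    UCSCTL5 = DIVS__1    // SMCLK = source / 1
            | DIVA__4    // ACLK  = source / 4
            | DIVM__1;   // MCLK  = source / 1 (DIVPA left at /1 for the ACLK pin)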

     

  • Thanks again Jens-Michael Gross.

    I got there and managed to output the desired clocks (divisions of SMCLK, sourced by several options I tried). The trouble is they are not stable.

    What is the most stable configuration (source + dividers/multipliers) for creating 1MHz and 250kHz, in your opinion?

    Thanks again,

    Rafi

  • What do you mean with 'not stable'? Wrong frequency or frequency drift or both?

    The stability of the clock outputs solely depends on the stability of their oscillator source. They are just digitally controlled dividers for their input clock (you can even drive them by an external clock of your choice, abusing them as pulse dividers/flipflop stages). So if you see instability, it is their oscillator source that is unstable.

    If you use plain DCO as source, it has a large tolerance and a large temperature coefficient.

    If you use an FLL stabilized DCO, then the long-term stability of the oscillator matches that of the reference (usually the internal 32kHz oscillator or even an external 32kHz clock crystal). However, its short-term stability is not as good, as the FLL adjusts the DCO based on its frequency in relation to the reference. So the DCO jumps between 'slightly too high' and 'slightly too low', matching the desired value on average. This adjustment is done every reference clock pulse. If you enter low power modes which turn off the clock sources, all calculations are void, as it takes a relatively long time before the DCO reaches its operating area again.

    If you use an external quartz crystal as oscillator for the clocks, you'll need to ensure that it is running properly (else your clock will fall back to the unstabilized DCO) by clearing the OFIFG bit and ensuring it stays cleared for >50ms. Then you can switch your clocks (SMCLK etc.) to the LFXT1 or XT2 oscillator and enjoy the most stable clock you can get, with excellent short-term stability and good long-term characteristics. (You can improve them further by applying controlled heating to the quartz, keeping it at a higher but constant temperature than the environment, so you eliminate the temperature drift.)

    I have no problems with an internally FLL stabilized DCO running at 16MHz and clocking the USCI modules for stable 115200Bd transfers. On other MSPs I used 8MHz quartz for the same purpose.

    How much effort you should make depends on your exact requirements.

  • Thanks.

    I meant that the phase of the clock is not stable (observed on an oscilloscope). The signal jitters in time.

    How can I be sure the FLL is indeed working? This is my code:

    UCSCTL2 &= 0x9c00; // sets D=1, N=1 for the FLL dividers

    UCSCTL3 &= 0xff08;  //select XT1CLK as FLL reference and divide it by 1

    UCSCTL4 &= 0xff0f; 
    UCSCTL4 |= 0x0030;   //select source for SMCLK  as DCOCLK

    P4DIR|=0x80;
    P4SEL|=0x80;        //will output SMCLK to P4.7

    thanks,

    Rafi

  • First, before you use XT1CLK as clock source, you'll have to ensure that LFXT1 is running stable. The detailed procedure is in the users guide. If LFXT1 fails or is not running, the FLL will be clocked by REFO instead, which of course isn't as stable as the crystal (still much better than the DCO alone).

    The FLL will adjust the DCO, if necessary, every XT1CLK tick. The base algorithm is to count the number of DCO ticks during two XT1CLK ticks and adjust the DCO by one step if the count is lower/higher than it should be.
    Also, since the number of possible DCO settings is limited, there's a modulation setting that will switch the DCO between its current setting and the next higher setting in a given pattern. This modulation is most likely the cause of the jitter you observe:
    Over a period of 32 clock cycles, the DCO will be set to the next higher setting for 0 to 31 clock cycles, causing a clock speed increase of 2 to 12% (the factor between two DCO settings) for 0 to 31 of the 32 clock cycles, evenly distributed. See figure 3-2 of the family users guide.

    So using the FLL will give you a fairly stable average frequency, but the phase jitter of the clock can be up to 12%.

    You can try to extract the current DCO values determined by the FLL, disable the FLL and set them manually. Then check the phase jitter again. If it is caused by the FLL, it should be gone, but the frequency might be off, and the final values depend on the actual device. You can enable the FLL, let it adjust the frequency, then disable the FLL and live with temperature drift and the frequency error caused by the limited number of DCO steps.

    Or add a high-frequency crystal and take MCLK and SMCLK from there. It will give the highest precision and remove all phase jitter. At the cost of, well, increased cost :)

  • Thanks Jens-Michael Gross.

    You are right. The modulation was on. Also, the reason the dividers didn't work is that I didn't set the DCO frequency range correctly.

    How can I reduce the DCO stepping to a minimum? I prefer inaccuracy over instability.

    Is it the DCO bits in UCSCTL0?

    Thanks,

    Rafi

     

  • Yes, you'll need the DCO bits in UCSCTL0 and the DCORSEL bits in UCSCTL1.

    If you disable modulation, you won't see any clock jitter anymore. You should still be able to use the FLL, resulting in a frequency change every 1/32768s, keeping the average frequency at the correct level. If you disable the FLL too, you'll have temperature drift and device-dependent variations in frequency. You can stretch this adjustment interval by setting FLLREFDIV in UCSCTL3 (and of course increase the FLLD and/or FLLN bits in UCSCTL2 accordingly).

    You can turn the FLL on at startup and then turn it off after a certain time. It should have settled (with modulation off) after 32/32768s = ca. 1ms. Once the FLL is turned off, the DCO will continue with its last setting.
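
    A sketch of that startup sequence (SCG0 in the status register is the FLL disable bit; the delay value assumes roughly a 16MHz MCLK and should be scaled to yours):

    // Sketch: let the FLL settle once at startup, then freeze the DCO setting.
    __bis_SR_register(SCG0);     // FLL off while its registers are reprogrammed
    // ... set UCSCTL1/2/3 (DCORSEL, FLLD/FLLN, reference) here ...
    __bic_SR_register(SCG0);     // FLL on: it starts adjusting the DCO
    __delay_cycles(32000);       // ~2ms at 16MHz, comfortably above the ~1ms settling time
    __bis_SR_register(SCG0);     // FLL off again: the DCO keeps its last setting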

    There are, however, some issues with writing to the FLL registers and the DCO settings. See the errata sheet of the 54xx. Else the whole thing won't react as expected.

     

  • Thanks again for the great help.

    I thought it was behind me, but when I integrated those lines (please see at the end) into our code, the phase started to jitter again. I saw there was no change in the UCSCTLx registers. The line that caused the jitter is (surprisingly) P8DIR = 0xFF;

    I output the clocks from P4.7 and P11.0 so there is no connection to P8.

    Thanks again,

    Rafi

    The clocks code is:

    //1MHz to P4.7 
    UCSCTL1 =0x0031; //disable modulation,frequency range 0.64-1.5MHz
    UCSCTL2 =0x0020; //N divider of FLL is 32
    UCSCTL3 =0x0002; //select X1 as ref to FLL and divide it by 1
    UCSCTL4&=0xff0f;
    UCSCTL4|=0x0330; //ensure DCO->SMCLK and DCO->ACLK
    P4DIR|=0x80;
    P4SEL|=0x80;    //will output SMCLK to P4.7

    //250KHz to P11.0
    UCSCTL5&=0xf0ff;
    UCSCTL5|=0x0200;//divide ACLK(1MHz) by 4
    P11DIR|=0x01;
    P11SEL|=0x01;   //will output ACLK to P11.0

  • That's weird. The P8 pins are only attached to P8, TA0 and TA1 CCR outputs. There's nothing that could cause feedback/distortion and produce jitter on the other clock signals.

    You should post both versions of the code (with P8 and without) and a detailed description (an oscilloscope screenshot is nice), so a TI employee can take a look at it. Maybe you discovered a silicon bug.

  • Thanks again.

    Do you know where I can see my code size, data size and flash consumption?

    I'm using IAR.

    Thanks again,

    Rafi

  • IAR can generate a map file that is an ASCII text file indicating memory resource consumption, etc.

  • Thank you very much.

    I just want to verify something regarding the A2D of the MSP430F5438. What is the maximum sampling rate?

    Can I get 500KHz with some version of the chip (maybe with the "A" suffix)? If I use it, are there any differences from the regular MSP430F5438 I should be aware of?

    Thanks,

    Rafi

  • BrandonAzbell said:
    IAR can generate a map file that is an ASCII text file indicating memory resource consumption, etc.

    I guess, the real question is: HOW? And whether it is detailed enough to be useful for a particular purpose.

    rafi zachut said:
    What is the maximum sampling rate?

    Maximum ADC12CLK frequency: 5.4MHz.
    Conversion time: 13 ADC12CLK cycles.
    Minimum sample time: 4 ADC12CLK cycles.
    Plus 1 ADC12CLK for synchronizing.

    That gives 5.4MHz / (13+4+1) = 300kHz. One channel only.

    If you only need 8 bit conversion, the conversion time is reduced to 9 clock cycles, giving a maximum sampling frequency of 385kHz (5.4MHz / 14).

    The A chip's ADC isn't faster; it's only that the internal ADC12 oscillator is trimmed a bit closer to the maximum that can be achieved with an external clock source (ACLK/SMCLK/port).

    rafi zachut said:
    are there any differences from the regular MSP430F5438 I should be aware of

    The A is faster (max. 25MHz CPU clock), consumes a bit less power, is significantly more expensive, has a replaceable bootstrap loader and some different silicon bugs. And some peripherals (especially the SPI) have faster maximum speeds. Not the ADC12.

  • Jens-Michael Gross said:

    IAR can generate a map file that is an ASCII text file indicating memory resource consumption, etc.

    I guess, the real question is: HOW? And whether it is detailed enough to be useful for a particular purpose.


    IAR Systems Embedded Workbench provides documentation for the code generation tools targeting the MSP430. The particular document to reference is EW430_CompilerReference.pdf. Searching for "map" in Adobe Reader will turn up page 44, which describes how to create this map file.

    Use the "Generate linker listing" in the IDE project options, or -x on the command line.

  • BrandonAzbell said:
    IAR Systems Embedded Workbench provides documentation for the code generation tools targeting the MSP430. The particular document to reference is EW430_CompilerReference.pdf. Searching for "map" in Adobe Reader will turn up page 44, which describes how to create this map file.

    Use the "Generate linker listing" in the IDE project options, or -x on the command line.

    Thanks for this detailed answer. I'm not an IAR user myself, but this question has been asked several times in the last months, so it seems to be difficult for people to find it.

    p.s.: searching for 'map' requires that you already know you have to look for this term. Also, it isn't obvious to a beginner that you'll find the requested information in a 'linker listing'.

  • Jens-Michael Gross said:

    IAR Systems Embedded Workbench provides documentation for the code generation tools targeting the MSP430. The particular document to reference is EW430_CompilerReference.pdf. Searching for "map" in Adobe Reader will turn up page 44, which describes how to create this map file.

    Use the "Generate linker listing" in the IDE project options, or -x on the command line.

    Thanks for this detailed answer. I'm not an IAR user myself, but this question has been asked several times in the last months, so it seems to be difficult for people to find it.


    Agreed on the fact that many have asked. However, if search were used first, they would find that this commonly asked question has been answered already.

     

    Jens-Michael Gross said:

    p.s.: searching for 'map' requires that you already know that you have to look for this term. Also, it isn't obvious for a beginner that you'll find the requested information in a 'linker listing'.

     

    I don't expect everyone to have IAR experience to know this term, etc. But once the correct term is highlighted, I do expect the "how" to be answered by the existing documentation. Also, I am a believer in citing my sources (at least I try to do this as much as possible). Not for the purpose of shoving it in someone's face, but to draw your attention to where I found the answer, such that IF someone actually used a search-first method, they would now know where to find the answer.

  • Thanks again.

    Regarding the A2D: if I use even fewer bits, for example only 4, can I get a faster sampling rate?
    thanks,

    Rafi

  • rafi zachut said:
    if I use even fewer bits, for example only 4, can I get a faster sampling rate?


    No. The ADC12A can be programmed to make 8, 10 or 12 bit conversions and requires one clock cycle more than the number of bits, plus a multiple of 4 clock cycles for sampling the input voltage.
    How many bits you actually use has no influence on the conversion time. :)

    Remember that you'll actually have to do something with the sampled values. At 300kHz and a 16MHz system clock, this is only ~50 CPU cycles per sample, including synchronisation. And the limited memory of the MSP (16k on the 5438) will be full in 50 milliseconds, even if you only store 8 bits per sample.

    It would be a hard job to store this onto an SD card in time, even with DMA support. Sending so much with UART or I2C is impossible. Perhaps an IDE harddrive...

  • Thank you.

    I have a question regarding the circuit of the evaluation board of the MSP430F5438 (MSP-EXP430F5438).

    The circuit has an EEPROM (24LC128I). Is it used only by the "serial to USB" converter (TUSB3410)?

    It seems there are also wires (clock and data) to the MSP430F5438. Are those only for burning purposes?

    Thanks again,

    Rafi

     

  • Hello again,

    I saw in the MSP-EXP430F5438 (MSP430F5438 evaluation board) drawings that the GPIOs are used to power peripheral devices.

    According to the data sheet, a GPIO can supply 5mA. Is there a total limit per chip / per port?

    What happens to the GPIOs in the sleep modes? Is it configurable?

    Thanks,

    Rafi

  • rafi zachut said:
    Is there a total limit per chip / per port?

    The per chip limit is based on
    - Maximum junction temperature (95°C)
    - Junction-to-case thermal resistance
    - Junction-to-ambient thermal resistance
    - total power dissipation (I*(VCC-Vout) when sourcing, I*(Vout-VSS) when sinking) for all port pins plus non-port power consumption
    - total current limit per pin/port

    Also, the total current for all pins except one may not exceed 48mA to hold a 0.25V voltage drop @ 1mA/2mA (3mA/5mA), and 100mA for a 0.6V voltage drop @ 3mA/6mA (10mA/15mA) at the last pin, for VCC = 1.8V/3.0V with reduced (full) drive strength.

    So you can drive up to 15mA per pin and will reach Vout = 2.4V @ VCC = 3V, as long as you only drive up to 100mA total.
    Based on this info, I'd say it is possible to drive even more. But 15mA with a 0.6V drop is ~10mW already, leading to a 5°C temperature rise against ambient. In a closed case this sums up (ambient heats up and therefore the junction temperature rises further) and you'll eventually exceed the 95°C maximum junction temperature.
    With increasing current, the voltage drop also increases. A good guess is 1V @ 20mA, giving 20mW power dissipation and a 10°C temperature rise against ambient, doubling the ambient heating too. Maybe even more.

  • Thanks for the detailed reply.

    Just to make sure - if I'm powering a few devices from the MSP GPIOs, and I'm below the total limit of 48mA - then in order to minimize the voltage drop, I should split the load across as many pins as possible, and not connect it to a single one.

    Am I right?

    Thanks,

    Rafi

  • Indeed, putting several pins in parallel will lead to a smaller voltage drop. Just ensure that all connected pins change output level together, or you'll induce cross-currents. And keep in mind that due to manufacturing tolerances, not all port pins will provide an equal share of the total current (different currents at the shared resulting voltage, as the drop/current equilibrium is different for each pin).

  • Thanks.

    Regarding the MSP430F5438 UART: I understand there is support for RxD and TxD by the USCI_Ax modules.

    What about the other control lines: RTS (request to send), CTS (clear to send), DSR (data set ready), DTR (data terminal ready)?

    Should they be GPIO?

    Thanks again for the help.

    Rafi

  • rafi zachut said:
    What about the other control lines: RTS (request to send), CTS (clear to send), DSR (data set ready), DTR (data terminal ready)?
    Should they be GPIO?

    Yes. These lines are not related to the transmission itself, and the hardware does not know whether you need to stop the transmission or not. Only your software knows.
    Often, these lines are not needed at all. You'll need to transmit a proper level on the DTR line, or some terminal programs will neither send nor receive. Depending on the hardware, you might just connect the DTR and DSR lines.

    RTS and CTS are optional. This so-called hardware flow control does not mean that it is supported by the hardware directly; it means that hardware (a separate line) is used to transmit it, rather than including it in the data stream (Xon/Xoff software handshake). If you ignore it, well, all that can happen is that you lose bytes if the MSP sends faster than the PC can handle. If you send large amounts of data from the PC to the MSP, you'll need some means of flow control, but then you will usually use something like the XModem or Kermit protocol, which implements a handshake anyway.
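
    If you do want to honor CTS, a byte-wise software check is enough (a sketch; P2.0 as the active-low CTS input is an arbitrary choice):

    // Sketch: wait for the peer's CTS (here arbitrarily on P2.0) before each byte.
    void UartSendByteWithCts(unsigned char c)
    {
      while (P2IN & BIT0);            // wait while the peer holds CTS de-asserted
      while (!(UCA0IFG & UCTXIFG));   // wait for room in the TX buffer
      UCA0TXBUF = c;
    }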

  • Thanks.

    We are developing on the MSP430F5438 using IAR. So far we have only been developing.

    Now we are trying to burn the firmware so the hardware will be stand-alone, i.e. we won't need the debugger and the MSP-FET430UIF (flash emulator) device.

    What should we do in order to burn the firmware so we can detach the hardware from the debugger?

    Thanks again,

    Rafi 

  • Hello,

    Can you supply code for the A2D of the MSP430F5438 in which:

    1. samples are collected from A7 (P6.7)

    2. the sample rate is around 150KHz.

    3. samples are not collected via interrupt (not so stable), but automatically into a buffer; for example, the A2D works for 1ms to collect 100 samples.

    Thanks,

    Rafi

  • The 5438 has an ADC12A, which not only has 16 channels, but also 16 memory locations for the sample results. There is no fixed connection between channel and memory.

    You can set up all 16 memory locations to sample the same signal. If the ADC12A runs in continuous mode and there is no 'end of sequence' in any of the 16 control registers, the ADC will sample 16 times before it overwrites the first. You can configure it to trigger an interrupt after conversions 8 and 16, and you'll have enough time to save the last 8 results before they are overwritten (50 microseconds).

    Alternatively, you can set up a DMA channel to do this job. It will be triggered once a sequence is complete (up to 16 conversions) and write the 16 values to a buffer.
    Then you have 16 sample periods (105 microseconds) to advance the destination pointer of the DMA and set it ready for the next burst.
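
    A sketch of such a DMA setup (trigger 24 is the ADC12IFGx trigger per the F5438 data sheet; 'buffer' is an invented destination, and depending on your header/compiler the 20-bit address registers may need the __data16_write_addr() intrinsic instead of a plain assignment):

    // Sketch: DMA channel 0 copies ADC12MEM0..15 to RAM after each sequence.
    unsigned int buffer[16];
    DMACTL0 = DMA0TSEL_24;                 // trigger 24 = ADC12IFGx on the F5438
    DMA0SA = (unsigned long)&ADC12MEM0;    // source: first result register
    DMA0DA = (unsigned long)buffer;        // destination: RAM buffer
    DMA0SZ = 16;                           // 16 words per block
    DMA0CTL = DMADT_1                      // one block transfer per trigger
            | DMASRCINCR_3 | DMADSTINCR_3  // increment source and destination
            | DMAEN;                       // arm the channel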

    If you do not need continuous sampling, but only a limited number of equidistant samples, you can set up the ADC to sample into only a single memory location and set up a DMA channel so it will transfer the value the moment the sampling is done, up to a given number of samples. Up to 9k samples, as there is no MSP with more than 18k RAM.

    There are also setups where the sample data is copied by one DMA to RAM and then copied by another one to UART or SPI, since sampling at 150kHz will fill the available RAM within 30 milliseconds. And writing the samples away requires 3MBaud (which is above UART capability and can be done only with SPI @ 2.4MBaud clock).

    As for the 150kHz sampling frequency: the conversion takes 13 cycles (+1 if you're using multiple channels) plus the selected S&H time. Multiply this by 150,000 and you'll have the required clock frequency for SMCLK or ACLK. Or divide your SMCLK or ACLK by 150,000, subtract 13 (or 14), and the result is the _exact_ S&H time you'll have to use for a 150kHz sampling rate.

    More advanced is to trigger the S&H gate by a timer which overflows with 150kHz. There you can fine-tune the sampling frequency by the timer frequency.

  • Thanks Jens-Michael Gross,

    Looks like the first option is the simplest. Let's start with the following relaxed demands:

    1. only 16 samples are needed

    2. the input to the A2D is A7.

    3. the sample rate is 150K (given by ACLK).

    Can you give example code for this?

    Please elaborate on what triggers the sampling sequence and how we get a "completion" notification.

    Thanks,

    Rafi

  • Okay, let's see...

    I'll assume some things, like the reference used (assuming 2.5V internal against AVss), the required settling time for the S&H filter (assuming the shortest), etc.
    ACLK needs to be 2.7MHz or 2.55MHz (I don't know whether 17 or 18 clock pulses are needed per sample due to the synchronizing logic).
    Possible other settings would be 3.3MHz/3.15MHz or 4.5MHz/4.35MHz. Then a line must be added: ADC12CTL0 |= ADC12SHT00|ADC12SHT10; or ADC12CTL0 |= ADC12SHT01|ADC12SHT11; respectively.

    P6SEL |=BIT7; // disable I/O on A7
    P6REN&=~BIT7; // disable pullup
    ADC12CTL0 = ADC12ON;
    ADC12CTL1 = 0;
    ADC12CTL2 = 0;
    ADC12IE = 0;
    ADC12CTL0 |= ADC12REFON | ADC12REF2_5V; // enable internal 2.5V reference
    ADC12CTL0 |= ADC12MSC;  //continued sampling after first manual trigger
    ADC12CTL1 |= ADC12SHP; // S&H controlled by sampling timer
    ADC12CTL1 |= ADC12SSEL0; // clock source ACLK
    ADC12CTL1 |= ADC12CONSEQ0|ADC12CONSEC1; // repeated sequence of channels
    ADC12CTL2 |= ADC12TCOFF; // temperature sensor not used
    ADC12CTL2 |= ADC12RES0|ADC12RES1; // 12 bit resolution
    // following two lines repeated 16 times for x=0..x=15
    ADC12MCTLx = ADC12SREF0; // reference is internal reference to AVss
    ADC12MCTLx |= ADC12INCH0|ADC12INCH1|ADC12INCH2; // input channel A7
    ADC12MCTL15 |= ADC12EOS; // End-Of-Sequence: the conversion will stop after getting here
    ADC12IE = ADC12IE15; //Enable interrupt after 16th conversion ( only include this line when using an ISR to get the result)
    ADC12CTL0 |= ADC12ENC; // enable conversion
    // setup completed

    The sampling of a sequence is initiated by

    ADC12CTL0 |= ADC12SC; // start conversion of the sequence

    Then two different things can be done.
    Either you do busy-waiting for the ADC to be done with the 16 samples. This is simply done with

    while (ADC12CTL1 & ADC12BUSY);

    Then the ADC12IE line in the initialisation should be omitted. Or an ISR is used. It will be triggered after the 16th conversion is done into ADC12MEM15. Inside this ISR (see the compiler documentation on how to write one; a minimal wrapper is sketched below the switch), this code should be used:

    switch(ADC12IV){
      case 0x24: // ADC12MEM15 filled
        // copy ADC12MEM0 to ADC12MEM15 here and signal main() that it is done, e.g. by setting a global volatile variable
        break;
      default:
      ;
    }
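
    For completeness, a minimal IAR-style wrapper around that switch (the #pragma vector syntax is the IAR convention; the flag and buffer names are invented for the example):

    volatile unsigned char adcDone;   // set by the ISR, polled/cleared by main()
    unsigned int results[16];

    #pragma vector = ADC12_VECTOR
    __interrupt void Adc12Isr(void)
    {
      volatile unsigned int *mem = &ADC12MEM0;  // the 16 result registers are consecutive
      unsigned int i;
      switch (ADC12IV) {              // reading ADC12IV clears the interrupt source
        case 0x24:                    // ADC12MEM15 filled
          for (i = 0; i < 16; i++)
            results[i] = mem[i];      // copy the whole sequence away
          adcDone = 1;                // tell main() that new data is available
          break;
        default:
          break;
      }
    }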

    The interrupt model can be extended by omitting the line ADC12MCTL15 |= ADC12EOS; and adding ' |ADC12IE7 ' to the 'ADC12IE = ' statement.
    Then there are two cases:
    case 0x13: // ADCMEM0..7 have been filled and
    case 0x24: // ADCMEM8..15 have been filled

    The ISR will be called twice as often, but you'll have a continuous stream of samples. Since it is continuous now, the while(ADC12CTL1&ADC12BUSY) won't work anymore (always busy). And you only need to set the ADC12SC bit once, instead of for every 16 samples (with idle time in between).

    Could a MOD please split this and the three posts above off into a separate thread? It has nothing to do with the original topic anymore.

  • Thanks for the detailed reply.

    I tried to run your code with the busy-loop technique. It gets stuck in the busy loop. I don't understand several things:

    1. In ADC12CTL2 |= ADC12RES0|ADC12RES1; // 12 bit resolution - is it not sufficient to use ADC12RES1? According to the data sheet, the OR of the two is not defined.

    2. In ADC12CTL1 |= ADC12CONSEQ0|ADC12CONSEC1; // repeated sequence of channels - did you mean ...|ADC12CONSEQ1?

    Also, for the moment I ignored the sample rate. But I tried to configure ACLK to 2.55MHz (though the data sheet says 13 clocks are needed for a 12 bit conversion - not 17). I wonder why the following code did not set ACLK to 2.55MHz. It ignored the FLL divider settings.

      //UCSCTL1 = 0x0041; //disable modulation,frequency range 1.3 - 3.2 MHz
      //UCSCTL0 = 0x0000; //frequency range 1.3 - 3.2 MHz 
      //UCSCTL2 = 0x0050; //N divider of FLL is 80
      //UCSCTL3 = 0x0002; //select X1=32.768KHz as ref to FLL and divide it by 1
      //UCSCTL4&= 0xff0f;
      //UCSCTL4|= 0x0330; //ensure DCO->SMCLK and DCO->ACLK
      //P4DIR |= 0x80;
      //P4SEL |= 0x80;    //will output SMCLK to P4.7 in order to check 

  • rafi zachut said:
    Is it not sufficient to use ADC12RES1?

    Yes. You're right. After 8 hours looking at the monitor, I sometimes mix up a line or two :)
    Maybe this enabled 16 bit mode :)

    rafi zachut said:
    Did you mean ...|ADC12CONSEQ1

    Second yes. :)

    rafi zachut said:
    the data sheet says 13 clocks are needed for a 12 bit conversion - not 17.

    Yes, that's right. The conversion takes 13 cycles. But there are at least 4 cycles of S&H time (unless you provide a manual S&H gate, e.g. by using a timer output to open the S&H when the timer output rises and start the conversion when it falls. But then the minimum S&H time (for a timer output with 50% duty cycle) would be 13 ADC clock cycles - or an asymmetrical timer output is needed; even more advanced stuff).

    This is what I did for the different clock proposals: increasing the S&H time from 4 to 8 or 16 cycles, so a higher clock still results in a 150kHz sampling rate (due to the enlarged sample time), with the additional benefit of an extended low-pass (due to the sampling time) and reduced error from self-discharge during the (then shorter) 13 clock cycles of conversion (hold) time.

    You should not use raw numeric values for setting up hardware registers. They require translation every time someone looks at them and make detecting errors more difficult.

    rafi zachut said:
    //UCSCTL0 = 0x0000; //frequency range 1.3 - 3.2 MHz 

    Being on the lowest DCO tap (as well as the highest) will automatically cause a DCO fault. You should select a different (lower) DCORSEL setting.

    Also, you seem to have misunderstood the working of the DCO and the FLL.
    The datasheet entry '1.3..3.2MHz' for DCOx=0 and DCORSEL=4 means that the DCO will oscillate at ONE SINGLE frequency somewhere BETWEEN 1.3 and 3.2MHz. Which one depends on the device. It can be 1.3MHz as well as 3.2MHz, or somewhere in between, but only this one.

    The FLL (Frequency Locked Loop, in opposition to a PLL, a Phase Locked Loop, which will produce an accurate frequency multiple at the cost of high power consumption) compares two frequencies and takes an action based on the comparison result.
    In the MSP, the FLL will switch the DCO higher if it is too low compared to the reference and lower if it is too high. Yet the FLL cannot switch the DCO lower than DCOx=0, which it already is. And depending on the actual device you have, this may still be much too high.

    Also, the FLL can do this adjustment only if 1) FLLN DCO cycles have passed and no reference pulse has arrived (DCO too fast) or 2) one reference cycle has passed and not enough DCO cycles have happened (DCO too slow). This means even if everything is all right, the clock will constantly switch between too high and too low, resulting in a correct average frequency. But with a strong, slow jitter.

    To overcome this, the modulation pattern is used. The FLL will not only increment DCOx; before doing so, it will first switch to a higher (or lower) modulation pattern.
    This modulation pattern will switch the DCO between DCOx and DCOx+1 on every single DCO cycle (depending on the pattern). So the jitter is the same (still DCOx and DCOx+1), but it will switch up to every DCO clock pulse, giving a much faster switching, so the correct average frequency is maintained over a much shorter period of time.

    Yet you have disabled modulation.

    rafi zachut said:
    //UCSCTL2 = 0x0050; //N divider of FLL is 80

    Is it? My calculator tells me that 2,550,000/32768 is 77.8 (78), not 80.

    After all this I'd recommend:

    Select DCORSEL = 3. This means the lowest frequency the DCO can reach in any case is 1.15MHz (but it can be as low as 0.64MHz) and the highest will be at least 6.07MHz (but may go as high as 14MHz). In this range, the DCO can deliver 32 discrete frequencies, no more. And the FLL will switch the DCO up and down to maintain a correct average.

    Then do not disable modulation. Without it, the two frequencies mixed to build the average will switch much less often, resulting in a huge clock skew every reference clock cycle.

    Since the expected DCO outputs are somewhere between lowest and highest frequency, you should give the FLL a head-start by setting DCOx = 8, which should be somewhere near the desired 2.55MHz.

    Then the FLL will start adjusting the DCO up or down until it has reached the two frequencies above and below 2.55MHz and starts switching between them for an average of 2.55MHz.

    If this jitter is not acceptable, you cannot use the DCO and need to use an external quartz with a multiple of 2.55MHz or a programmable external frequency source with a power-hungry PLL.
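
    In register terms, the head-start could look like this (a sketch; FLLN = 77 comes from the 2,550,000/32768 division above, the DCO bits sit in bits 12:8 of UCSCTL0, and XT1 is assumed to be running):

    // Sketch: DCORSEL = 3, DCOx = 8 as head start, FLL target ~2.55MHz.
    __bis_SR_register(SCG0);                   // stop the FLL while reconfiguring
    UCSCTL0 = 8 << 8;                          // DCOx = 8 (bits 12:8), MODx = 0
    UCSCTL1 = DCORSEL_3;                       // mid range, modulation stays enabled
    UCSCTL2 = FLLD__1 | 77;                    // (77+1) * 32768Hz = 2.556MHz
    UCSCTL3 = SELREF__XT1CLK | FLLREFDIV__1;   // XT1 (32768Hz) as FLL reference
    __bic_SR_register(SCG0);                   // let the FLL settle around 2.556MHz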

  • Again thanks.

    Can you guess why I can't pass the busy loop?

    Also, in the interrupt technique - why should I signal main() about the end of the data transfer? The program counter will return to main when the transfer finishes anyway, no?

    Also, if we only run once (your narrow interrupt model) - the "case" construct is not needed. Am I right?

    Thanks, and sorry I dragged you into exhausting binary representations,

    Rafi

     

  • rafi zachut said:
    Can you guess why I can't pass the busy loop?


    Do you use 'single sequence' mode? In 'repeated sequence' mode, the ADC12 will be busy eternally, starting the next sequence as soon as the previous one is done.

    rafi zachut said:
    Also, in the interrupt technique - why should I signal main() about the end of the data transfer? The program counter will return to main when the transfer finishes anyway, no?

    Yes, but main won't know that something happened. The main program does not know that it has been interrupted and new data is available. That's the main purpose of ISRs: doing things in the background without the main code desperately checking for an event all the time. If, however, something has to be done after an event (but nothing too time-critical, as main may be busy with something else right now), then the ISR needs to notify main somehow.

    A possible notification is, however, waking up main from a low power mode (LPM). Yet main will only know that something happened. If there is more than one ISR which wakes up main, a notification is still needed. If there is only one possible event, then it would be sufficient to enter LPM (which stops the execution of main) and continue after the interrupt has happened and the ISR has cleared the LPM bits and returned.

    rafi zachut said:
    Also, if we only run once (your narrow interrupt model) - the "case" construct is not needed. Am I right?

    Not the case construct. But reading ADC12IV is necessary, as it will (implicitly) clear the interrupt source. If you don't read it (or write it, which will clear ALL pending interrupts for this module), your ISR will be called again immediately as soon as you leave it. If you read it, only the highest priority interrupt will be cleared. If there are more pending for the same ISR (e.g. the next conversion is ready, if the IE flag has been set), you'll enter the same ISR once again, but now with a different value from ADC12IV.
    This may not be likely for the ADC12 with the narrow setup, and even with the double-buffering, but it can be important with other modules such as the UART or the timers.

    Keep in mind that with this 'single buffering' method, you'll have only a small time window (further decreased by the ISR latency time) after the sequence has been completed and the interrupt flagged, before the first sample in the sequence is overwritten with the first result of the next sequence. If you run the ADC in repeated sequence mode, that is. In single sequence mode, the ADC will stop after triggering the interrupt, clear the busy bit and wait for you to start the next sequence. Which introduces a gap into your sample chain.

    rafi zachut said:
    Thanks, and sorry I dragged you into exhausting binary representations,

    No need to be sorry for that. Once it's there, somebody else may profit from it later too, provided he's smart enough to use the search function :)

  • Hello,

    we have a board based on the evaluation board MSP-EXP430F5438.

    The purpose is to have the ability to update the firmware remotely at our client's site, without IAR and JTAG.

    Is there a way to update the firmware in a way similar to this:

    1. an application runs on a PC

    2. the PC application sends new code via USB to the board

    3. a USB-to-UART device on the board transfers the new code to the UART of the MSP430F5438

    4. a burning routine takes the new code and burns the flash

    Is something like that readily available?

    Thanks,

    Rafi

  • rafi zachut said:

    2. the PC application sends new code via USB to the board

    3. a USB-to-UART device on the board transfers the new code to the UART of the MSP430F5438

    4. a burning routine takes the new code and burns the flash

    Not easily. The USB-to-UART device would need to be smart. There is no built-in way to make the MSP accept a new firmware through plain standard UART transfers.

    Most MSPs, including the 54xx, have a bootstrap loader (BSL) built in, which accepts a new firmware (or more exactly, several commands that may write to the flash) through a 9600Bd connection. But to invoke the BSL, a certain sequence is required on other pins, and this cannot be simulated with the correct timing through a virtual serial connection over USB. So the USB-to-serial device needs to 'know' this sequence and has to handle it. (Effectively, the FETs are doing exactly this, just that they use the JTAG interface instead of the BSL.)

    If you have a real serial port on the PC, you can run the BSL-Scripter, a piece of software that knows the BSL protocol, and directly connect it to the MSP (with a level shifter to translate the TTL levels of the MSP to the V.24 levels of the COM port).

    Note that the pins used for the BSL are NOT the hardware UART pins, but usually P1.0 and P1.1, plus, in addition, reset and test to enter BSL mode.

    The last way to do it is to implement an update function into the firmware that is already on the MSP. This software can then download the application and update the firmware. It is, however, a tricky task to make it fool-proof and ensure you won't lose the device if the update somehow fails on the first attempt.

  • Thanks for the reply.

    I would like to focus on the bootstrap loader option on the MSP-EXP430F5438 evaluation board, with some additional wiring.

    In the schematics of the board I saw that the USB-to-UART chip (TUSB3410VF) has the DTR and RTS pins available. Can they be used to toggle the BSL entry sequence on the RST and TEST pins?

    Can we modify the BSL-Scripter to send some dummy characters at the beginning, just to trigger the BSL using the hardware I described?

    Thanks
