TMS320F28377D: jitter in SPI ADC DAQ

Part Number: TMS320F28377D


I am using the F28377D 'Delfino' DSC.  I am using an external ADC (AD7986) to get 18-bit samples.  The ADC will do 2 MSPS, but that requires a clock frequency of 100 MHz.  The DSC has 50 MHz SPI ports, but I am unable to use them because those pins are used for EMIF.  So I have the PLLSYSCLK set to 25 MHz (40 ns period).  The SPI port transfers two 9-bit values since the character length is limited to 16 bits.

I can acquire and store the data (code below).  When I look at the time between samples, it varies from 3.76 us to 3.84 us, which equates to +/- 1 clock cycle (40 ns).

Is it possible to tighten the timing to a consistent value?  I am currently not using interrupts or a CPU timer; would either of these help?

Uint32 Read_SPI(){

    int i;
    Uint32 vsread;

    // Transmit dummy data to clock the conversion result out of the ADC
    for(i=0; i<2; i++){
        SpibRegs.SPITXBUF = 0;           //SPI-B
    }

    // Wait until both 9-bit words are in the RX FIFO
    while(SpibRegs.SPIFFRX.bit.RXFFST != 2) {}

    // Reassemble the 18-bit result from the two SPI-B words
    vsread = SpibRegs.SPIRXBUF;          //load MS 9 bits
    vsread <<= 9;                        //shift left 9 bits
    vsread |= SpibRegs.SPIRXBUF;         //load and merge LS 9 bits
    vsread ^= 0x20000;                   //flip the sign bit: 2's complement -> offset binary
    return vsread;
}

  • Hi John,

    I think the main issue here is that you lowered PLLSYSCLK to 25 MHz.  This slows your whole system down.  Each SPI has a baud rate register which can be used to lower the frequency for just the peripheral and leave the system running at max speed.  Take a look at the SPIBRR register.
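
    For example, something along these lines (just a sketch using the F2837xD header-file names; pick whatever divider fits your target rate):

        SpibRegs.SPICCR.bit.SPISWRESET = 0;   // hold SPI-B in reset while changing the rate
        SpibRegs.SPIBRR.all = 7;              // bit clock = LSPCLK / (SPIBRR + 1) for SPIBRR >= 3
        SpibRegs.SPICCR.bit.SPISWRESET = 1;   // release from reset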

    Regards,

    Kris

  • I am sorry I wrote the wrong clock. 

    I changed the low-speed peripheral clock (LSPCLK) in the LOSPCP register.

    These bits configure the low-speed peripheral clock (LSPCLK) rate relative to SYSCLK of CPU1 and CPU2.

    000: LSPCLK = SYSCLK / 1
    001: LSPCLK = SYSCLK / 2
    010: LSPCLK = SYSCLK / 4 (default on reset)
    011: LSPCLK = SYSCLK / 6
    100: LSPCLK = SYSCLK / 8
    101: LSPCLK = SYSCLK / 10
    110: LSPCLK = SYSCLK / 12
    111: LSPCLK = SYSCLK / 14

    Note: This clock is used as the strobe for the SCI and SPI modules.

    The default would be 200/4 = 50 MHz; I changed the divider from /4 to /2, so LSPCLK = 200/2 = 100 MHz.

    SPI Baud Rate Control
    These bits determine the bit transfer rate

    For SPIBRR = 3 to 127: SPI Baud Rate = LSPCLK / (SPIBRR + 1)
    For SPIBRR = 0, 1, or 2: SPI Baud Rate = LSPCLK / 4

    The fastest you can set the SPI baud rate is LSPCLK/4.  With the default 50 MHz LSPCLK that was 50/4 = 12.5 MHz; with LSPCLK at 100 MHz the baud rate is 100/4 = 25 MHz (40 ns).
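
    In code the change is just the divider field (a sketch assuming the C2000Ware F2837xD register names):

        EALLOW;
        ClkCfgRegs.LOSPCP.bit.LSPCLKDIV = 1;   // 001: LSPCLK = SYSCLK/2 = 200/2 = 100 MHz
        EDIS;
        SpibRegs.SPIBRR.all = 0;               // SPIBRR = 0 -> SPI bit clock = LSPCLK/4 = 25 MHz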

      

  • John,

    Thanks for the clarification. That sounds better.

    How are you calling Read_SPI()? If I understand the issue correctly, you are seeing timing variation between iterations of Read_SPI() (not necessarily the function itself causing the variation), is that correct?

    Regards,
    Kris
  • Kris,

    I have a control loop: write the SPI output value using McBSP-A configured as SPI, then read the SPI port and write a time stamp and the data value to a file.

        // data loop
        for(y=0; y<data_pts; y++){

            Write_SPI(Vc);              //write control voltage (McBSP-A as SPI)

            vsd = Read_SPI();           //read SPI port value (count)

            Vs = (float)(vsd / 52428.8);                //scale raw count to volts

            HW_wr_data[ndx++] = IpcRegs.IPCCOUNTERL;    //time stamp (free-running IPC counter)
            HW_wr_data[ndx++] = vsd;                    //raw count

            DELAY_US(3);                //fixed delay

            Vc = Control(Vs);           //compute next control voltage

        } //end y loop

  • John,

    Thanks for the details. For precise timing, here is what I think is your best option:

    - Use a CPU timer to trigger a DMA channel

    - Have the DMA copy two pieces of data into the SPI TX register. In your case, it sounds like these can be any value, but you could always dedicate a few unused RAM locations to the cause.

    The reason I chose this method instead of an ISR is that a number of factors can cause variance in ISR timing, whereas this method should guarantee the samples are collected at precise intervals.  If you tune the period, the data could already be waiting by the time you hit the read function, so you won't have to kill any cycles.

    You could further advance this with another DMA channel that copies the RX data to specific memory locations. In your code, you could just always read those locations for the latest data.
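
    Rough sketch of the timer + DMA idea (register and section names follow the C2000Ware F2837xD headers as I remember them, and the trigger-select code below is a placeholder, so please verify everything against the TRM and your header version):

        #include "F28x_Project.h"

        #define TINT0_DMA_TRIG  11U                  // PLACEHOLDER: look up the TINT0 code in the TRM's DMA trigger table

        #pragma DATA_SECTION(dmaTxDummy, "ramgs0");  // must live in DMA-accessible GSx RAM (name depends on your linker file)
        Uint16 dmaTxDummy[2] = {0, 0};               // dummy words clocked out to the AD7986

        void SetupTimerDma(void)
        {
            // CPU Timer 0 sets the sample period: 3.8 us at 200 MHz SYSCLK
            InitCpuTimers();
            ConfigCpuTimer(&CpuTimer0, 200, 3.8);
            CpuTimer0Regs.TCR.bit.TSS = 0;           // start the timer

            EALLOW;
            CpuSysRegs.PCLKCR0.bit.DMA = 1;          // enable the DMA clock
            DmaClaSrcSelRegs.DMACHSRCSEL1.bit.CH1 = TINT0_DMA_TRIG;  // TINT0 triggers DMA CH1

            // DMA CH1: on every trigger, burst the two dummy words into SPITXBUF
            DmaRegs.CH1.SRC_ADDR_SHADOW = (Uint32)&dmaTxDummy[0];
            DmaRegs.CH1.DST_ADDR_SHADOW = (Uint32)&SpibRegs.SPITXBUF;
            // Remaining CH1 setup (see the DMA chapter of the TRM):
            //   burst size = 2 words, source step = 1, destination step = 0,
            //   one burst per trigger, continuous mode, peripheral trigger enabled
            DmaRegs.CH1.CONTROL.bit.RUN = 1;         // arm the channel
            EDIS;
        }

    The 3.8 us period above just mirrors the spacing you are seeing now; set it to whatever sample rate you actually need.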

    Regards,

    Kris