ADS1256: Bleed-over/noise in differential readings between channels

Part Number: ADS1256

I am noticing bleed-over when pulling differential data from the channels.

Here is an output, where "Channel 0" is the differential pair AIN0/AIN1, "Channel 1" is AIN1/AIN2, and so forth:

Channel 0: 1.718778v
Channel 1: 0.068488v <--- where is this coming from?
Channel 2: 0.000106v
Channel 3: 0.000100v

The only channels with anything hooked up to them are Channels 0 and 1, which carry a load cell (4-wire Wheatstone bridge) run through an op amp: Channel 0 has the Vout and Channel 1 has the Vref coming off the op amp.

The op amp and the load cell do share the same +5 V supply and VCC. The sample rate is set to 30 kSPS (I'm really only getting about 4,000 samples/second) and the gain is set to 4.

Happy to share code too if there is value in it. I can't tell if the bleed is timing-related or electrical, given that the readings are all zeros (or very near zero) when there is no weight.

  • To add some flavor text, here are a few more runs:

    Channel 0: 1.787240
    Channel 1: 0.068414
    Channel 2: 0.000148
    Channel 3: 0.073314

    Channel 0: 1.792080
    Channel 1: 0.068454
    Channel 2: 0.000041
    Channel 3: 0.068475

    Channel 0: 1.787111
    Channel 1: 0.073335
    Channel 2: 0.000097
    Channel 3: 0.000079

    Channel 0: 1.718776
    Channel 1: 0.073346
    Channel 2: 0.000138
    Channel 3: 0.000073

    Those outputs are each the last result of a run, each run consisting of 1,000 reads of each channel in a loop (so: set channel 0, SYNC, WAKEUP, wait for /DRDY, RDATA, read the three bytes, set channel 1, and so forth).
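
    For concreteness, here is a minimal sketch of that per-channel sequence. The spi_write_byte()/spi_read_byte()/wait_drdy_low()/delay_us() helpers are placeholders for whatever SPI/GPIO wrappers the platform provides (not from any particular library); the opcodes and the MUX register address come from the ADS1256 command and register summaries, and /CS is assumed to stay low for the whole transaction.

    #include <stdint.h>

    #define CMD_WREG     0x50    /* write registers, OR'd with the start address */
    #define REG_MUX      0x01
    #define CMD_SYNC     0xFC
    #define CMD_WAKEUP   0x00
    #define CMD_RDATA    0x01

    extern void    spi_write_byte(uint8_t b);   /* placeholder platform helpers */
    extern uint8_t spi_read_byte(void);
    extern void    wait_drdy_low(void);
    extern void    delay_us(unsigned us);

    /* Read one single-shot conversion on the pair AINp/AINn (mux cycling). */
    int32_t read_channel_pair(uint8_t ainp, uint8_t ainn)
    {
        uint8_t b[3];

        spi_write_byte(CMD_WREG | REG_MUX);   /* point WREG at the MUX register     */
        spi_write_byte(0x00);                 /* writing a single register          */
        spi_write_byte((ainp << 4) | ainn);   /* e.g. 0x01 = AIN0 (+) vs AIN1 (-)   */

        spi_write_byte(CMD_SYNC);             /* restart conversion on the new pair */
        delay_us(4);                          /* short gap (t11) before WAKEUP      */
        spi_write_byte(CMD_WAKEUP);

        wait_drdy_low();                      /* result for the new pair is ready   */
        spi_write_byte(CMD_RDATA);
        delay_us(7);                          /* t6 gap before clocking data out    */
        b[0] = spi_read_byte();
        b[1] = spi_read_byte();
        b[2] = spi_read_byte();

        int32_t code = ((int32_t)b[0] << 16) | ((int32_t)b[1] << 8) | b[2];
        if (code & 0x800000)
            code |= 0xFF000000;               /* sign-extend the 24-bit result      */
        return code;
    }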

    Thanks

  • Hi Josh,

    Welcome to the TI E2E Forums!

    When you say that nothing is hooked up to the other channels, do you mean that channels 2 & 3 are left floating?
    If that is the case, then there is no telling what voltage you might measure when reading these channels. You can try grounding the unused inputs and re-measuring. If you do that, you'll probably see that the measurement results on these channels tend towards zero, but they can still be non-zero due to noise.
     

    4 kSPS Data Rate
    When MUX'ing between channels, keep in mind that the SINC5 digital filter performs a moving average on the last 5 conversion results, so multiplexing (switching channels after each conversion) reduces the effective data rate by about a factor of 5 at the higher data rates (at the slower data rates the digital filter looks more like a SINC1 filter with a single-cycle latency). Table 14 in the ADS1256 datasheet provides a list of expected multiplexer-cycling throughput for each data rate.
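
    As a rough sanity check on the ~4 kSPS you are seeing (assuming roughly 5 conversion periods per channel switch for the filter to settle): 30,000 SPS / 5 = 6,000 SPS is an upper bound on the per-channel throughput, and the additional time spent on the MUX write, the SYNC/WAKEUP commands, and clocking the data out over SPI pulls that down toward the ~4 kSPS you measured.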
     

    You also asked about the value measured on channel 1...

    What voltage are you expecting to see on this channel?

    Do be mindful to clock out all of the ADC data before the next /DRDY falling edge. If your MCU is slow to clock out the data and is still clocking out data when the new conversion result is ready, it is possible that the value you shift out could be a combination of old and new data. In other words, it is possible to get corrupted data. To check if this might be happening, I would recommend running the ADS1256 at a much slower data rate and observing if the measurement result(s) becomes a much more reasonable value.
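
    As a rough timing budget for why this matters at 30 kSPS (assuming the typical 7.68 MHz CLKIN, so tCLKIN ≈ 130 ns): /DRDY falling edges arrive only about 33 µs apart, while the RDATA opcode (8 SCLKs), the t6 delay (50 tCLKIN ≈ 6.5 µs), and the 24 SCLKs for the three data bytes at an SCLK near its CLKIN/4 limit (~1.92 MHz) already consume roughly 23 µs of that window, so very little extra software latency is tolerable before the read collides with the next conversion.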

  • Chris,

    Thanks for replying... Some of your comments led me in directions that were very useful and raised new questions. First off, I was floating channels 2~7 and only 0 and 1 were plugged in. They were still noisy when I tied them down to the same common as the board. However, on a lark I added a second power supply (a benchtop adjustable unit), set it to 5 V, and moved all of my analog inputs onto it (since I was measuring differentially). In this setup the new benchtop supply was feeding 5 V to my load cell and 5 V to the op amp (which channels 0 and 1 are connected to). I then tied channels 2~7 to it as well and they cleaned up very nicely. So apparently the board power is very noisy and my analog signals should avoid the board... so yeah, there is that.

    Now, the values measured on pins 0/1 (differential) are a bit confusing. I set out to validate that what I send in is what I get back out, and that doesn't seem to be the case. I hard-wired the power supply to pins 0/1, so I should get an output that matches the input... but I am not. :(

    Here is the C code that is calculating the voltage (in case that is the problem)


    voltage = reading * 5.0 / 0x7FFFFF;
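
    For reference, the code-to-voltage conversion depends on both the reference voltage and the PGA setting: the ADS1256 full-scale input is ±2·VREF/PGA, and positive full scale reads back as 0x7FFFFF. A slightly more explicit sketch is below; the 2.5 V reference and PGA of 1 are assumptions here (they make the expression collapse to the 5.0/0x7FFFFF line above), so substitute whatever your board and registers actually use.

    #include <stdint.h>

    #define ADS1256_VREF  2.5    /* volts -- assumed; check the board's reference   */
    #define ADS1256_PGA   1.0    /* assumed; 1 to 64 per the ADCON register setting */

    /* Convert a sign-extended 24-bit ADS1256 code to volts. */
    double code_to_volts(int32_t code)
    {
        /* the full-scale range is +/- (2 * VREF / PGA), and 0x7FFFFF is +FS */
        return (double)code * (2.0 * ADS1256_VREF / ADS1256_PGA) / 8388607.0;
    }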

    Here is the wiring:
    ain0 = Power supply + terminal
    ain1 = Power supply - terminal

    ain2 ~ ain7 = floating (unplugged)

    The test code does 100 readings and then averages the last 3; it is only reading channels 0/1 as a single differential channel.

    Here are some results:
    Power supply at 2v

    • 2.104512v - 15000sps
    • 2.105127v - 15000sps
    • 2.101307v - 15000sps
    • 2.101245v - 15000sps
    • 2.109394v - 30sps
    • 2.109432v - 30sps
    • 2.109398v - 30sps

    power supply at 0v

    • 0.073317v - 15000sps
    • 0.070043v - 30sps

    power supply at 4v

    • 3.276422v - 15000sps
    • 3.276467v - 15000sps
    • 3.274836v - 15000sps
    • 3.273214v - 15000sps
    • 3.437569v - 30sps
    • 3.437581v - 30sps
    • 3.437623v - 30sps
    • 3.437624v - 30sps

    So, first off, I would expect these to very nearly hit their numbers, but they are off (0 V should read very near zero, but it's reading closer to 0.1 V). Given the swings, this doesn't quite make sense: over-reading at 0 V and 2 V but WAY under at 4 V. And the numbers change between 15,000 SPS and 30 SPS.

    Lastly, I did try to mess with calibration, but then the numbers got TOTALLY weird, so I yanked that bit out of the code.

    Unless I am thinking about this wrong, I should have a very strong correlation between my power supply and what the ADS1256 says it is reading back...

    Is there a protocol for how to use the calibration? (For instance, should I calibrate after every gain or data-rate change? When calibrating, do I need to make sure 0 V or 5 V is being applied? Do I really need to calibrate?)

    As it is, there is far more noise than I was expecting for something with between 17 and 23 bits of resolution, and the readings don't seem to correlate with the input in a repeatable way. I am, however, 100% sure it's on me, so I am trying to figure out what I am missing in this integration...

    Thanks




    Josh

  • Hi Josh,

    To get an idea of how much error and noise is coming from the power supply and how much is coming from the ADC, I would recommend shorting AIN0 & AIN1 together (and connecting them to ground). Then measure the differential voltage between AIN0-AIN1 to see what the offset and noise performance of the ADS1256 on your PCB looks like (without averaging).

    Another thing to check would be your reference voltage source. If the reference voltage is inaccurate then the ADC will appear to have a large gain error. What do you use for the ADS1256's reference source? NOTE: The ADS1256 only takes up to a 2.6V reference, so make sure not to use the supply voltage as the reference with this device.

    If you happen to have a schematic you can share it would be very helpful.

    Regarding calibration, at this point I would recommend avoiding calibration until you're confident that you're getting reliable measurement results. If you try to include calibration while getting bad readings, then calibration might mask the problem and make it a bit more difficult to resolve. However, it should be okay to allow the ADS1256 to perform its own auto-calibration.

    Once you're ready to begin calibrating the system with an external stimulus, you'll want to provide the calibration signals with the load cell connected (to help calibrate out the combined ADC + reference + load cell errors, for example). Performing the system offset calibration with no load applied to the load cell can help to remove offset errors, as well as the offset caused by the weight of the weigh pan (if applicable). Offset calibration should always be performed before gain calibration. For gain calibration, you may not be able to generate a 100% full-scale (i.e. 5 V) signal from the load cell, so you'd probably want to apply the maximum weight to the load cell (or as large a known, calibrated weight as you can) and measure the result to compute the gain error. Once you've determined the gain error, you can manually program the FSCx registers with the gain error correction factor. The ADS1255-7 Design Calculator can help you decipher the significance of the 24-bit FSC value.
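
    For what it's worth, here is a minimal sketch of how that sequence might be driven from code. The spi_write_byte()/wait_drdy_low() helpers are placeholders rather than functions from any particular library; the opcodes and register addresses are taken from the ADS1256 command and register summaries. Self-calibration uses internally shorted inputs, while the system calibrations expect the zero and full-scale signals to be applied externally before the command is issued.

    #include <stdint.h>

    #define CMD_SELFCAL  0xF0   /* self offset + gain calibration                  */
    #define CMD_SYSOCAL  0xF3   /* system offset cal: apply a 0 V input first      */
    #define CMD_SYSGCAL  0xF4   /* system gain cal: apply a full-scale input first */

    extern void spi_write_byte(uint8_t b);   /* placeholder SPI/GPIO helpers */
    extern void wait_drdy_low(void);

    /* Issue a calibration command and wait for it to finish (/CS assumed low). */
    static void ads1256_calibrate(uint8_t cmd)
    {
        spi_write_byte(cmd);
        wait_drdy_low();        /* /DRDY returns low once calibration completes */
    }

    /* A typical order after changing the PGA, data rate, or buffer setting:
     *   ads1256_calibrate(CMD_SELFCAL);   // or let the ACAL bit handle this
     *   ...apply the zero-scale signal...
     *   ads1256_calibrate(CMD_SYSOCAL);   // offset before gain
     *   ...apply the known (near) full-scale signal...
     *   ads1256_calibrate(CMD_SYSGCAL);   // or compute the gain error yourself and
     *                                     // write FSC0..FSC2 (0x08..0x0A) via WREG
     */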

     

  • So, I have two boards I have tried to integrate and both are giving me similar results. I have removed all of my op amps and the load cell, and am just reading a reference voltage that I am sending into AIN0.

    Here is a graph of what 10,000 reads look like (this is at the 7,500 samples/second setting BUT not using the continuous-read stream, so I am doing 1 read at a time, hence the roughly 3,500 reads/second actual speed).



    It's going along fine, then randomly there are these drops. The thing about these drops is... at the time of this test, I was using a simple voltage divider to send in a very small but known voltage. Also, the board I am testing with has a 1.6 V reference voltage, if that could relate to this at all.

    Maybe seeing this will help you go "oh, you should...", as at this point I have a known, steady voltage being sent into a single channel, and I am reading it correctly except when it randomly comes back with a nearly fixed, incorrect number.

  • Hi Josh,

    Would you happen to have the raw data (the hex code values, before converting to voltage) in a text file that you would be able to share?

    Also, I'd be glad to review your code too. If you do not wish to post your code here, you can email it to me via: pa_deltasigma_apps@ti.com

    ...The first thing that comes to my mind is that perhaps your code is reading data while a conversion is completing and you're getting corrupted data. You might want to try further slowing down the ADC data rate and reading each conversion result twice to see if the results match.
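
    As a quick illustration of that double-read check (a sketch only; read_data_by_rdata() and wait_drdy_low() stand in for whatever single-shot RDATA routine and /DRDY wait you already have): at a slow data rate, two back-to-back RDATA reads of the same conversion should return identical codes, so a mismatch points toward an SPI or read-timing problem rather than the ADC itself.

    /* Sketch: read the same conversion twice and compare (slow data rate assumed,
     * so both reads finish well before the next /DRDY falling edge). */
    wait_drdy_low();
    int32_t first  = read_data_by_rdata();
    int32_t second = read_data_by_rdata();
    if (first != second) {
        printf("read mismatch: 0x%06lX vs 0x%06lX\n",     /* needs <stdio.h> */
               (unsigned long)(first  & 0xFFFFFF),
               (unsigned long)(second & 0xFFFFFF));
    }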

  • I have the data in a UDOUBLE array, so writing that out as hex should be plenty easy. The code is basically the free stuff from Waveshare (just tweaked to write to a text file instead of the screen). I am at the office currently, but I'll post something tonight. I'll also look at doing some of the internal auditing you are suggesting.
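
    (For anyone following along, dumping the raw codes as hex can be as simple as the sketch below; ADC_Value and count are the Waveshare-style names from the code in question, and the file name is arbitrary.)

    /* Sketch: dump the raw 24-bit codes as hex, one per line (needs <stdio.h>) */
    FILE *fp = fopen("ads1256_raw.txt", "w");
    if (fp != NULL) {
        for (uint32_t i = 0; i < count; i++) {
            fprintf(fp, "0x%06lX\n", (unsigned long)(ADC_Value[i] & 0xFFFFFF));
        }
        fclose(fp);
    }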

    Thanks!

  • Your suggestion about timing pointed me in a good direction... I added a few microseconds of delay between each byte read (3 µs, to be precise) and now the data started coming back much more reasonably.

    Is there a reference C or C++ implementation that I could audit this code base against? I have very little faith in the bcm2835-library-based implementation, and while I may still keep using the bcm2835 library itself, the higher-level code just seems sketchy at best, so I would love to review it against a more trusted source.

    For reference, here is what my chart looks like with that 3-microsecond delay between each byte read:

    So it looks like I am within +/- 0.002 V, which is way better than I was before...

  • Hi Josh,

    It is on my backlog to release some example C code for this device, but in the meantime, I can share a code snippet and a link to a simple example project that may be helpful...

    The example project can be found on this E2E thread: https://e2e.ti.com/support/data-converters/f/73/p/338399/1182307#1182307

    And here is an example code snippet of how to read data: 

    // Sends RDATA command to read the data
    // NOTE: Wait for /DRDY to go low before calling this function
    int32_t dataRead_byCommand(void)
    {
    	uint8_t dataRx[3];
    
    	set_CS(0);							// Set /CS pin low
    	spi_send_byte(RDATA_OPCODE);		// Send RDATA command
    	__delay_cycles(TD_DIDO_DELAY_T6);	// Delay
    	dataRx[0] = spi_send_byte(0x00);	// Clock out MSB
    	dataRx[1] = spi_send_byte(0x00);	// Clock out Mid-byte
    	dataRx[2] = spi_send_byte(0x00);	// Clock out LSB
    	set_CS(1);							// Set /CS pin high
    
    	// Sign extend and return result
    	return (int32_t)( ( (dataRx[0] & 0x80) ? (0xFF000000) : (0x00000000) ) |
    									  ((int32_t) (dataRx[0] & 0xFF) << 16) |
    									  ((int32_t) (dataRx[1] & 0xFF) << 8 ) |
    									  ((int32_t) (dataRx[2] & 0xFF) << 0 ) );
    }

    NOTE: The above code had some built-in delays inside of the "set_CS()" function that delayed the first SCLK rising edge to be at least 50 ns after the /CS falling edge.

    Also, the t6 delay between the RDATA command and the sending of "0x00" to clock out data is an important timing parameter. I don't think delaying between "0x00" bytes is necessary, but the above code did have some inherent delay between bytes, since the SPI peripheral was configured to send one byte at a time and wait for the TX/RX operations to complete before sending the next byte.

    Yet another possible issue I've seen with SPI peripherals is that you typically need to read EVERY RX byte (even if it is a don't-care). For example, the RX data received while sending the "RDATA" byte is not important, but if you do not read it out, it will remain in the SPI's FIFO and will later be returned as if it were conversion data.
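
    As a generic illustration of that pattern (spi_transfer_byte() and delay_t6() are placeholders for a full-duplex transfer call and a t6-length delay, not functions from a specific driver): every byte transmitted produces one byte received, so the safest habit is to read and discard the RX byte that comes back with each command byte rather than leaving it in the FIFO.

    /* Sketch: pair every TX byte with an RX read so the FIFO never gets out of sync. */
    (void)spi_transfer_byte(RDATA_OPCODE);    /* discard the don't-care RX byte       */
    delay_t6();                               /* t6 wait before clocking out data     */
    uint8_t msb = spi_transfer_byte(0x00);    /* each dummy TX byte clocks out 1 byte */
    uint8_t mid = spi_transfer_byte(0x00);
    uint8_t lsb = spi_transfer_byte(0x00);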

  • I double-checked my voltages with an oscilloscope and am comfortable saying that what is left in the output is now probably just noise (after much code reviewing). So I moved on to trying to conquer continuous read... None of the samples out there worked, but I was able to tweak away until this did:

    ADS1256_SetChannal(Channel);
    ADS1256_WriteCmd(CMD_SYNC);
    DEV_Delay_us(10);
    ADS1256_WriteCmd(CMD_WAKEUP);
    DEV_Delay_us(10);
    DEV_Digital_Write(DEV_CS_PIN, 0);
    ADS1256_WaitDRDY();
    DEV_Delay_us(8);
    DEV_SPI_WriteByte(CMD_RDATAC);
    DEV_Delay_us(8);
    
    for(i = 0; i<count; i++){
         ADS1256_WaitDRDY();
         buf[0] = DEV_SPI_ReadByte();
         DEV_Delay_us(3);
         buf[1] = DEV_SPI_ReadByte();
         DEV_Delay_us(3);
         buf[2] = DEV_SPI_ReadByte();
    
         read = ((UDOUBLE)buf[0] << 16) & 0x00FF0000;
         read |= ((UDOUBLE)buf[1] << 8); /* cast to UDOUBLE before shifting */
         read |= buf[2];
         if (read & 0x800000)
              read |= 0xFF000000;
         ADC_Value[i] = read;
         while(DEV_Digital_Read(DEV_DRDY_PIN) == 0);  /* wait for /DRDY to go high again, or you can double-read the same conversion */
    }
    DEV_SPI_WriteByte(CMD_SDATAC);
    DEV_Digital_Write(DEV_CS_PIN, 1);

    The timing to get into the loop is crucial: you have to pull /CS low before you call RDATAC, and you have to release it after SDATAC... Also, if you don't wait for /DRDY to go high, you can get out of sync with the timing. In short, it's a bugger, but this was working consistently for me. All in all, unless something looks totally wrong, I think this is all working now.

    Thank you for all of your help!

  • Hi Josh,

    I apologize for the delayed response. I was out of the office for a couple days.

    The sequencing of your code looks okay. I just had one comment about the /CS operation:

    • /CS can be toggled between commands and between data reads.

      Perhaps your "ADS1256_WriteCmd()" function is controlling the /CS pin, but if not, do make sure that /CS is low before sending the SYNC and WAKEUP commands.

      Also, I would recommend toggling /CS before and after reading the data (i.e. set it low before clocking out the data, and set it high after clocking out the LSB). By toggling /CS, you ensure that a new SPI frame is started each time you read data. Without this, you risk getting out of sync with the ADS1256 in the case that a glitch occurs on the SCLK signal.

       
  • Even in a continuous loop you would lower and raise /CS on every beat of the loop? So, pseudo code like:

    CSLow()
    Sync()
    Wake()
    WaitDRDY()
    RDATAC()
    CSHigh()
    Loop
         CSLow()
         ReadByte()
         ReadByte()
         ReadByte()
         CSHigh()
         DoStuffToBytes()
    EndLoop

    I thought I had played with that and gotten very odd behaviors, but I can play with it again if that is what you mean.

  • Hi Josh,

    If I were writing the code, I would probably do something like this:

    CSLow()
    SDATAC()  // Make sure device is in SDATAC mode before reading or writing to registers
    CSHigh()

    CSLow()
    // Configure device registers here...
    CSHigh()

    CSLow()
    Sync()
    CSHigh()

    CSLow()
    Wake()
    CSHigh()

    CSLow()
    RDATAC()
    CSHigh()

    Loop

    WaitDRDY()

    CSLow()
    ReadByte()
    ReadByte()
    ReadByte()
    CSHigh()

    DoStuffToBytes()

    EndLoop

    You probably don't need to toggle /CS between commands that are so close together, such as SYNC and WAKEUP. However, anytime the ADC is idling for a significant period of time, I'd recommend setting /CS high to reduce the chances of a noise glitch on SCLK.
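
    For completeness, here is roughly how that structure might map onto the Waveshare-style helpers from your earlier snippet (a sketch only, under the assumption that ADS1256_SetChannal()/ADS1256_WriteCmd() manage /CS internally as they appear to in that code; adjust the names and delays to your library):

    /* Sketch: RDATAC loop with /CS framing each raw SPI transaction. */
    ADS1256_WriteCmd(CMD_SDATAC);             /* make sure we start out of RDATAC     */
    ADS1256_SetChannal(Channel);              /* configure MUX (and any other regs)   */
    ADS1256_WriteCmd(CMD_SYNC);
    ADS1256_WriteCmd(CMD_WAKEUP);

    ADS1256_WaitDRDY();                       /* enter RDATAC after /DRDY goes low    */
    DEV_Digital_Write(DEV_CS_PIN, 0);
    DEV_SPI_WriteByte(CMD_RDATAC);
    DEV_Digital_Write(DEV_CS_PIN, 1);

    for (i = 0; i < count; i++) {
        ADS1256_WaitDRDY();                   /* wait for the next result             */

        DEV_Digital_Write(DEV_CS_PIN, 0);     /* fresh SPI frame for each data read   */
        buf[0] = DEV_SPI_ReadByte();
        buf[1] = DEV_SPI_ReadByte();
        buf[2] = DEV_SPI_ReadByte();
        DEV_Digital_Write(DEV_CS_PIN, 1);

        /* assemble and sign-extend the 24-bit value as before */
    }

    DEV_Digital_Write(DEV_CS_PIN, 0);
    DEV_SPI_WriteByte(CMD_SDATAC);            /* exit continuous-read mode            */
    DEV_Digital_Write(DEV_CS_PIN, 1);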