
ADC3910D125: ADC3910 data has large mysterious signal, even when inputs shorted

Part Number: ADC3910D125
Other Parts Discussed in Thread: THS4541, TPS6521905

We are baffled and frustrated by this.  The ADC3910D125 returns data containing an apparent noisy high-frequency signal with an amplitude of about 15% of the A/D full scale, when it should be showing a flat line with some noise.  Here are the details:

  • DC coupled differential signals from THS4541 differential amps connected just like the ADC3910D125 evaluation board except that the output is a simple single pole filter (two 27 ohm resistors and a 22 pF cap across the differential pair at the ADC pins), and we have a higher 4.75X gain on the amp.
  • 100 MHz sample clock from our Xilinx FPGA, DC coupled (our layout guy messed up and put the terminating resistor by the ADC rather than the FPGA, but the signal looks stable)
  • External REF35120 voltage reference with 10uF and 0.1uF on the Vref pin to the ADC, and 1uF on the NR pin
  • 10, 1 and 0.1uF caps on the IOVDD pin 1.8V supply, from a separate LDO than the 1.8V AVDD
  • Two 0.1uF caps for the AVDD power pins

We've verified that the digital interface to the FPGA works well with the ramp and inverting ADC data patterns.  And the two channels on the DDR bus are properly demultiplexed in the FPGA.  And we used a logic analyzer to verify that the data we see is actually on the bus from the ADC.
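The demultiplexing check above can be modeled with a small C sketch. The edge-to-channel assignment here (even bus words to channel A, odd to channel B) is an assumption for illustration; the actual ordering depends on the FPGA capture:

```c
#include <stddef.h>

/* Split an interleaved DDR capture into per-channel arrays.
 * ASSUMPTION: even indices (one DDR edge) are channel A and odd
 * indices (the other edge) are channel B -- swap if the bus
 * ordering differs in your FPGA. */
static void ddr_demux(const unsigned short *bus, size_t n,
                      unsigned short *ch_a, unsigned short *ch_b)
{
    for (size_t i = 0; i + 1 < n; i += 2) {
        ch_a[i / 2] = bus[i];
        ch_b[i / 2] = bus[i + 1];
    }
}
```

Feeding the ADC's ramp test pattern through a model like this and checking that each channel comes out as a clean ramp is a quick way to rule out a demux bug.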

The mysterious signal we see is always the same amplitude (about 160 counts in 10-bit mode), centered roughly where we would expect it.  We can apply an input offset with a DAC driving one leg of the differential input of the THS4541 (the other side of the pair is a single ended scope input which is grounded for these tests), and that shifts the signal up and down as expected.  Except that we would expect to see a relatively flat line with a little noise, not an amplitude of 160 counts!

And get this: even when we short across the differential pair with a solder blob over the cap right next to the ADC pins, the mystery signal is still the same.  No change in amplitude.

One board appears to have a noisy 2-3 MHz signal when sampled at 100 MHz, but we believe that is probably an aliased signal, because when we try other sample clocks from the FPGA (110, 90, 80, 70...) we see higher frequencies, but still what looks like a single main rough sine wave overlaid with a lot of noise.  On another board at 100 Msps, the signal looks more like 6 MHz, but it has the same amplitude.

We've looked at the Vref, AVDD and IOVDD lines (with 200 MHz scopes, so we might be missing something) and see some noise, but nothing big enough or regular enough to explain the huge signal we see in the data.

Our configuration settings are quite simple.  We reset the ADC and then set the registers to use the external voltage reference, and to switch to offset binary rather than 2's complement format on the output data bus.  That's it!  The signal is still evident in 2's complement mode, just formatted differently.  And setting it to use the internal voltage reference doesn't change anything (though the external reference is still connected to pin 17).

I've attached Excel graphs of the data we see.  And as I said, we see the same amplitude and shape in the data even when we short the differential inputs.  So it would seem to be generated inside the ADC somehow.

Please help!  We cannot explain why the ADC seems to be generating this signal in the data that we do not see on the inputs.

  • Hi Glen,

    Sorry for your frustration. Would you please send over your schematics so I can review them and see what might be going on.

    Thanks,

    Rob

  • Hi Rob,

    Here are the relevant sections of the schematic:

    I included the analog front end, but remember that we still see the signal when C17 (the cap across the A/D input pair) is grounded.  The second input channel circuitry looks the same as this.

    Not shown is the CH_A_OFFSET signal which comes from a DAC and amplifier that sets a DC offset of -250 to +250 mV.  If the cap is not shorted, then changing the offset indeed moves the center of the signal from the ADC up and down, but it still has the unwanted signal on top.

    AVDD18 and IOVDD18 come from separate LDOs that are part of your PMIC chip, the TPS6521905 shown below:

    We originally thought that maybe the buck switching noise was getting into the LDO outputs.  But they look pretty clean, and the analog circuits are at the other end of the board, with the local bypass caps you see in the other schematic pages.

    Glen

  • We made an interesting discovery, but can't explain it.  If we set bit 5 of the DEV_CFG_4 (address 0x30B) register to change to single-ended mode, then the mystery signals go away and the data looks like we'd expect.  We checked with sines, ramps and square waves and everything now works.

    But this is with no change to the hardware, which is clearly sending a differential signal to the ADC centered on Vcm, as opposed to a single-ended signal offset by Vcm with a fixed Vcm going into the INxM pin as shown in the datasheet.

    Why does a differential signal work properly with the single ended mode setting, but not with the default differential setting???

  • Hi Glen,

    Interesting. Can you remove D5 and D6 from the analog inputs? You might be clamping the inputs too hard and moving to single-ended lessens that burden.

    What happens in DIFF mode when the signal is static or very small?

    Regards,

    Rob

  • Hi Rob,

    I'd thought about the D5/D6 clamps, but nearly all of our tests while we were trying to figure this out were with the inputs grounded or being fed a 0 V DC level by an AWG, resulting in no differential signal other than some noise out of the FDA.  We even went so far as to put a solder blob across C17 to make sure the differential inputs were shorted, and still saw the crazy signals.  It was so bad that we had stopped using an AWG a week ago, because the signal was barely visible through the extra high frequency signals the A/D seemed to be adding.  It wasn't until we flipped the bit to single ended, and could finally see a flat line when there was no signal, that we started feeding an AWG into the board again.

    The signal plots you see in my original post were all taken with no signal.  Thus all the apparent signal you see in the plots appeared to be generated in the A/D.  All we could see on the input lines was a very small amount of noise, and no differential signal.

    It almost seems like the datasheet might have the logic on that bit inverted, it works so much better with the single ended setting.  Can you explain what that bit actually does?  From the datasheet schematic examples it looks like it still calculates the difference between the two pins, but in single ended mode the negative pin is supposed to be kept at Vcm.  What changes internally?  If the bit were reversed and the negative pin were going up and down in the true single ended mode, would that cause a problem?

    Glen

  • Hi Glen,

    Thank you for the details, let me set this up in the lab tomorrow and see what I find.

    I will reach out to you tomorrow.

    In the meantime, can you please send the spi write configuration you are applying to the ADC only?

    Thanks,

    Rob

  • I appreciate that Rob, and look forward to hearing what you find!  Here is our current setup:

    • Perform a hardware reset with the reset pin
    • Wait for the reset to complete by polling register 0x38 (CFG_ALERT)
    • Write and read the scratchpad SPARE_REG register 0x39 to verify that the SPI bus is communicating with the ADC properly
    • Flip bit 1 of DEV_CFG_3 register 0x8d high and then low to toggle the ALERT output pin and confirm that our MCU sees it and is connected
    • Set bit 6 of DEV_CFG_3 @0x8d to set offset binary format on the data bus (instead of 2's complement) as our FPGA expects it
    • Set bit 6 of BUFF_CURR @0x30a to set offset binary format if digital features are used
    • Set bit 6 of DEV_CFG_4 @0x30b to select the external Vref (a REF35120 chip)
    • Set bit 4 of DIG_INPUT_CFG @0x307 to disable the DCLKZ output.  Our FPGA only needs the DCLK clock
    • Set bit 5 of DEV_CFG_4 @0x30b to set single-ended input mode rather than differential <-----This is what fixed the problem, strangely

    And that's it.  And unless we are writing the whole register, we use a read_modify_write routine to only change the bits of interest, just in case some of the reserved or undocumented bits do something.

    One new piece of information is the input scale with the 0x30b bit 5 mode bit set to single ended.  Despite the fact that we have a differential signal, and not a fixed Vcm level on the INxM pin, the A/D scaling appears normal for a differential setup.  Our THS4541 FDA is configured for 4.75 gain.  That means that a +/- 200 mV (400 mVp-p) signal becomes a +/- 950 mV (1.9 Vp-p) signal at the ADC differential input pins.  We fed in a +/- 200 mV (400 mVp-p centered on zero) sine wave and the A/D converted it to the full 10-bit scale, just barely clipping on one end due to a small offset.  Reducing the amplitude to half that, and the A/D conversion showed the sine wave from 1/4 to 3/4 of the A/D full scale.  I would have thought that if that bit really made it single ended that the scale would be adjusted by 2X to account for only a single end of the pair moving +/- 475 mV, or only a 950 mV span.  But that was not the case.  Yet another clue that makes me think the function of that bit is opposite what the datasheet says.
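The scaling arithmetic in that paragraph can be captured in a few lines of C. This is only a sketch of the expected ideal transfer function, using the numbers quoted in this thread (a +/-0.95 V, 1.9 Vpp differential full scale and a 10-bit offset-binary output); it is not taken from the datasheet's transfer-function section:

```c
/* Map a differential input voltage to an ideal 10-bit
 * offset-binary code. FS_DIFF_V and the 10-bit width are the
 * values discussed in this thread, assumed here for illustration. */
#define FS_DIFF_V 0.95   /* differential full scale, +/- volts */
#define ADC_BITS  10

static int volts_to_code(double v_diff)
{
    int half = 1 << (ADC_BITS - 1);                    /* 512 = midscale */
    int code = (int)(half + (v_diff / FS_DIFF_V) * half);
    if (code < 0) code = 0;                            /* clip at rails */
    if (code > (1 << ADC_BITS) - 1) code = (1 << ADC_BITS) - 1;
    return code;
}
```

With these assumptions, a +/-0.475 V differential swing lands at codes 256 to 768, i.e. 1/4 to 3/4 of full scale, matching the half-amplitude observation above.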

    Glen

  • By the way, you mentioned the diode clamps we have to protect the A/D inputs.  The amp has a 3.3V supply, while the ADC has 1.8V, so the inputs could go beyond the 2.1V limit in the datasheet.  But I notice that your evaluation board has the same amp and supply voltages, but no clamps.  Are the clamps unnecessary as long as the inline resistors are there and limit any current into the ADC chip's internal clamps?  Because our input signal comes from outside, we can't guarantee that the signals don't go outside of the intended range.  But it would be great if we could get rid of the clamps.

  • Hi Glen,

    Putting diode clamps on the analog inputs will hurt the spectral performance; if this is a time-domain application, then you may be okay. This is why I asked you to remove them, to make sure they aren't adding to the issue.

    Finding the right diode clamp can be a challenge, here are a few that I know work well.

    RB851Y

    DLM-10SM

    SMS7621

    Some customers will clamp ahead of the amp, that's another idea.

    You do need to ensure you do not overstress the analog inputs beyond the abs max ratings; exceeding them will degrade the device over time.

    Regards,

    Rob

  • Hi Glen,

    I see what the issue is for the SE vs. DIFF on the analog inputs of the ADC....and why you are seeing good performance vs. not in DIFF mode.

    The amplifier is overranging, and the common-mode voltages are not correct at the ADC's analog inputs. What is happening is that one side of the differential pair at the analog inputs is overranging and the other side is not.

    This is happening because the amplifier you are using, the THS4541, is not set up properly. At least per your schematic, it is configured with a single-ended interface.

    This is incorrect. To go from an SE input to a DIFF output on this amplifier, you need to balance the circuit properly so that the common-mode voltages at the analog inputs are equal on both sides, at 1.275V per the ADC datasheet.

    To do this, configure the amp in this manner instead. See below:

    You can also use this TI FDA calculator tool to help show the proper configuration and resistors needed in order to go from SE to DIFF.

    See link: www.ti.com/.../01.00.00.00

    Another test you can do, is apply no signal at all. Then measure on either side of C17 with a DMM. The voltage should be equal on either side and should be 1.275V or very, very close to that.

    Regards,

    Rob

    PS - I proved this out on the bench to show one of our other apps engineers in the lab today.

  • Hi Rob,

    Thank you very much for your time in the lab, and sending the FDA calculator.  But I'm afraid it only confirms that the setup we have and our manual calculations on the range are correct.

    I must not have communicated our input signals correctly, but they differ from the assumptions you put in the model:

    • the input signal range on the top is +/- 200 mV or 0.4 Vpp, not 1V.  We have relay switchable attenuators to make sure that the input amplitude stays within that range
    • the 4.75 gain was intended to make the 0.4 Vpp span be amplified to the 1.9V differential span of the A/D converter input
    • the lower leg is driven by a DAC and amp to provide an offset range of +/- 250 mV.  The intent is to match that to the center of the incoming signals on the top to bring the <400 mVpp signal within the range of the A/D, i.e.- center the differential output on zero

    When I enter those numbers in the calculator, there is no overrange.  And when I probe the circuit I get the same readings the model predicts:

    • With a DC 0V input on the top (Vs) and zero offset (Vinn) the voltage on either side of C17 is very close to 1.275V
    • With 100mV at Vs, the differential pair spreads to about 1.275 +/- 0.25, a 0.50 V differential signal, close to the 0.475 V I'd expect
    • With 200mV at Vs, the pair at C17 reads 1.275 +/- 0.49, so a 0.98V signal, a bit beyond the max 0.95 limit of the A/D, but close
    • With -100mV and -200mV I get about the same values, but reversed, as expected, giving a maximum negative signal at the A/D of about -0.95V centered on Vcm
    • If I set the offset at Vinn to up to +/- 250 mV in the model and move the Vs_dc to match (so the offset matches the center of the 0.4Vpp signal) the offsets cancel out and the differential signals are the same if Vs_ac is 0.4Vpp or less
    • Under the full range of offsets (+/- 250 mV) and signal amplitude (<= 400 mVpp) the model doesn't indicate any overrange, and the physical circuit works ok too
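The bench readings in the list above follow from a simple model: the FDA's differential output is the gain times the net input, centered on the ADC common mode, with each leg moving half the differential swing. A small C sketch of that model, using the gain (4.75) and Vcm (1.275 V) values from this thread:

```c
/* Expected THS4541 output-leg voltages for this circuit, as an
 * ideal model: v_diff = gain * (v_in - v_offset), each output leg
 * at vcm +/- v_diff/2. Gain and Vcm are the values quoted in this
 * thread, not measured constants. */
struct legs { double p; double n; };

static struct legs fda_outputs(double v_in, double v_offset)
{
    const double gain = 4.75;   /* FDA gain per the thread */
    const double vcm  = 1.275;  /* ADC common mode, volts */
    double v_diff = gain * (v_in - v_offset);
    struct legs out = { vcm + v_diff / 2.0, vcm - v_diff / 2.0 };
    return out;
}
```

With 100 mV in and zero offset this predicts legs at about 1.5125 V and 1.0375 V (1.275 +/- 0.2375), close to the measured +/- 0.25; and a matching offset on the other leg cancels out, leaving both legs at Vcm, just as described.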

    So it seems we really are feeding in a differential signal centered on Vcm, and not overranging.  So I'm still confused why the A/D only works if we set it to single ended mode, and also why the scaling (volts to ADC) counts match what we calculate for a differential mode signal.

    Here is the FDA calculator set up to match our circuit and signals:

    ...and with the max offset:

    What do you think?

    Glen

  • Hi Glen,

    If you are reading the correct common-mode voltage, then there is something in the FPGA processing of the data.

    Are you capturing in the correct format? 2's complement (default) vs. offset binary?

    Also, please send over the list of SPI writes you are using to configure the device. Sometimes the sequence is specific.

    Regards,

    Rob

  • Hi Rob,

    We are reading the correct common mode voltage at the A/D input (with a scope or voltmeter) at C17.  And when we apply a signal, the differential pair signals diverge above and below the common mode voltage by the amounts our calculations and the THS4541 model predict.

    And we used the ILA (logic analyzer) feature of the FPGA to sniff the data coming from the A/D.  With the default configuration for differential input mode (according to the datasheet) the data jumps all over, but when set to single ended mode the data looks exactly like you'd expect from differential mode.

    We capture in offset binary format since that is how our FPGA design expects it.  But when we leave it in 2's complement mode the same apparent reversal of the differential mode bit applies.  With it cleared (differential mode) the converted data jumps all over, and when set (supposedly single ended mode) then the data is correct, taking into account the different 2's complement format.
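For reference, the two output formats mentioned here differ only in the most significant bit: for a 10-bit code, XORing the raw bus word with 0x200 converts two's complement to offset binary and back. A minimal sketch, assuming the raw 10-bit word is right-justified:

```c
/* Convert a raw 10-bit two's-complement bus word to offset
 * binary (and back -- the operation is its own inverse): the two
 * formats differ only in the inverted MSB. */
#define ADC_MSB_MASK 0x200u   /* bit 9 of a 10-bit code */

static unsigned twos_to_offset(unsigned raw10)
{
    return (raw10 ^ ADC_MSB_MASK) & 0x3FFu;
}
```

So a two's-complement zero (0x000) becomes midscale 0x200 in offset binary, which is why the same underlying "reversed bit" behavior shows up in both formats, just numbered differently.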

    The registers are set in the sequence I sent earlier.  I'll repeat that here:

    • Perform a hardware reset with the reset pin
    • Wait for the reset to complete by polling register 0x38 (CFG_ALERT)
    • Write and read the scratchpad SPARE_REG register 0x39 to verify that the SPI bus is communicating with the ADC properly
    • Flip bit 1 of DEV_CFG_3 register 0x8d high and then low to toggle the ALERT output pin and confirm that our MCU sees it and is connected
    • Set bit 6 of DEV_CFG_3 @0x8d to set offset binary format on the data bus (instead of 2's complement) as our FPGA expects it
    • Set bit 6 of BUFF_CURR @0x30a to set offset binary format if digital features are used
    • Set bit 6 of DEV_CFG_4 @0x30b to select the external Vref (a REF35120 chip)
    • Set bit 4 of DIG_INPUT_CFG @0x307 to disable the DCLKZ output.  Our FPGA only needs the DCLK clock
    • Set bit 5 of DEV_CFG_4 @0x30b to set single-ended input mode rather than differential <-----This is what fixed the problem, strangely

    Thanks,

    Glen

  • Thanks Glen,

    Typically customers send over a small txt file; I just wanted to make sure nothing else was configured.

    We will perform the same operations as you do and see what happens.

    I will get back to you later today or early tomorrow.

    Thanks,

    Rob

  • Good point about making sure nothing else was configured, or maybe we got a bit wrong or something.  So here is our C code that configures the A/D.  The read_modify_write parameters are (reg_addr, first bit, bit_field_len, data to write):

        // Note: The ADC reset is done by the FPGA, so its SPI must be init first
        err = adc_reset(true);
        if (err) {
            return -ETIMEDOUT;
        }
    
        // Write the SPARE_REG (this returns an error if the readback doesn't match)
        k_sleep(K_MSEC(50));
        if (adc_spi_write(ADC_SPARE_REG, 0xA5)) {
            LOG_ERR("Error writing/reading to ADC SPI");
            err = -ECOMM;
        }
        // TODO: Turn this into a hardware self-test item
        LOG_DBG("ADC_ALERT = %d", adc_alert_is_active());
        LOG_DBG("Flipping polarity of ADC_ALERT to test signal input");
        adc_read_modify_write(ADC_DEV_CFG_3, 1, 1, 1);
        LOG_DBG("ADC_ALERT = %d", adc_alert_is_active());
        adc_read_modify_write(ADC_DEV_CFG_3, 1, 1, 0);
    
        // Offset binary output format when ADC digital features are bypassed
        adc_read_modify_write(ADC_DEV_CFG_3, 6, 1, 1);
        // Offset binary output format when ADC digital features are used
        adc_read_modify_write(ADC_BUF_CURR, 6, 1, 1);
        adc_read_modify_write(ADC_DEV_CFG_4, 6, 1, 1);  // external Vref
        adc_read_modify_write(ADC_DIG_INPUT_CFG, 4, 1, 1);  // disable DCLKZ output.  We aren't using it
    
        // TODO: Keep this single ended setting?  Why does this work?  Is the datasheet wrong about this bit?
        adc_read_modify_write(ADC_DEV_CFG_4, 5, 1, 1);
    

    And here are the defines for the register addresses:

    // ADC defines
    #define ADC_RESET_REG   0       /* 0x80 to reset */
    #define ADC_CFG_ALERT   0x38    /* Returns 0 when device is ready to configure after reset */
    #define ADC_SPARE_REG   0x39    /* No function, use for read/write testing */
    #define ADC_DEV_CFG_1   0x88
    #define ADC_DEV_CFG_2   0x89
    #define ADC_CLK_CFG_1   0x8a
    #define ADC_CLK_CFG_2   0x8b
    #define ADC_PDN_CFG     0x8c
    #define ADC_DEV_CFG_3   0x8d
    #define ADC_CLK_CFG_3   0x8e
    #define ADC_CLK_CFG_4   0x8f
    #define ADC_PIN_CFG_1   0x90
    #define ADC_TEST_PATT_CFG       0x91
    #define ADC_TEST_PATT_CHB_7     0x92
    #define ADC_TEST_PATT_CHB_13    0x93
    #define ADC_TEST_PATT_CHA_7     0x94
    #define ADC_TEST_PATT_CHA_13    0x95
    #define ADC_GLOBAL_PDN          0x97
    #define ADC_INTERFACE_CFG_1     0x98
    #define ADC_INTERFACE_CFG_2     0x9C
    #define ADC_HFSB_FPDN_CFG       0x9E
    
    #define ADC_DIG_PAT_ENA         0xA1
    #define ADC_DIG_PATTERN_CHA_7   0xA2
    #define ADC_DIG_PATTERN_CHA_15  0xA3
    #define ADC_DIG_PATTERN_CHB_7   0xA4
    #define ADC_DIG_PATTERN_CHB_15  0xA5
    #define ADC_INTERFACE_CFG_4     0xA6
    
    #define ADC_DIG_INPUT_CFG   0x307   /* DCLKZ disable, disable data inputs to digital blocks */
    #define ADC_BUF_CURR        0x30a   /* Dig feature output format, gain tracking across temp */
    #define ADC_DEV_CFG_4       0x30b   /* voltage ref, half speed, 10/8 bit resolution */
    #define ADC_GBL_CLK_CFG_1   0x484   /* global clock enable for digital block */
    #define ADC_GBL_CLK_CFG_2   0x4be   /* specific clocks for decimation, stats, etc. */
    #define ADC_GBL_CLK_CFG_3   0x4bf   /* enable clock to digital output */
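
The adc_read_modify_write routine itself isn't shown above, but the bit-field update it performs, given the parameter order described (register, first bit, field length, value), would look something like the sketch below. This is a hypothetical reconstruction for illustration, not the thread's actual implementation; the real routine would wrap this in an SPI read and write of the register:

```c
/* Core of a hypothetical read-modify-write helper: given the
 * current register byte, write `value` into a field of `len` bits
 * starting at `first_bit`, leaving all other bits untouched. A
 * caller would SPI-read the register, apply this, and SPI-write
 * the result back. */
static unsigned set_bit_field(unsigned cur, unsigned first_bit,
                              unsigned len, unsigned value)
{
    unsigned mask = ((1u << len) - 1u) << first_bit;
    return (cur & ~mask) | ((value << first_bit) & mask);
}
```

Updating only the addressed field and writing back the rest unchanged is what protects any reserved or undocumented bits, as noted earlier in the thread.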
    

  • Hi Glen,

    We did some work in the lab this evening and found that using the external reference is what is causing the issue.

    Can you please verify this on your end?

    Please both remove the register setting and make sure the external reference is powered down or disconnected from the VREF pin.

    I am going to talk to design in the meantime to see what is going on. There may be another register setting that got missed in the release of the datasheet.

    I will update you tomorrow.

    Regards,

    Rob

  • Hi Rob,

    I tried disconnecting the external reference and set the register to use the internal reference, as you requested.  The behavior is still the same, with the mystery signal if set to differential mode, and normal operation with single ended mode set.

    We had a zero ohm resistor near the A/D to make it easy to disconnect the external reference so we could compare external vs. internal.  I tried the experiment with the Vref input pin floating and grounded, and the results were the same.

    The only difference I saw with the external ref disconnected is that the conversion data seems a bit noisier.

    Still a mystery!

    Glen

  • Hi Glen,

    Okay, thank you for checking.

    Let's work on this offline.

    I will close this post and send you an email. Please be on the lookout for my email.

    Regards,

    Rob