This thread has been locked.

FDC2212: Status field descriptions.

Part Number: FDC2212
Other Parts Discussed in Thread: TPS610981, CC1310

Hi All,

I developed a product using the FDC2212. During prototyping, I verified the oscillator amplitude with my oscilloscope as described in the datasheet. However, at that point we had only 10 parts to measure and configure. Now, with a trial run of 2000 parts, each part seems to need a different IDRIVE value to reach the amplitude range. So I'm planning a routine that reads the STATUS register (0x18) and increases or decreases the sensor current according to the ERR_ALW and ERR_AHW bits. The problem is that these bits always return 0, even with the ERROR_CONFIG (0x19) bits AH_WARN2OUT and AL_WARN2OUT set to 1. I see the same behavior in the channel data register (0x00): the error bits always read 0.
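The decision logic I have in mind is just a bit test on STATUS. Sketched in C below (bit positions from the FDC2212 datasheet; the function name is mine and this is only an illustration of the logic, not working firmware):

```c
#include <stdint.h>

/* STATUS (0x18) amplitude-warning bit positions, per the FDC2212
 * datasheet register map. */
#define ERR_AHW_BIT 10  /* amplitude-high warning */
#define ERR_ALW_BIT  9  /* amplitude-low warning  */

/* Decide how to step the sensor current from a STATUS reading:
 * -1 = lower IDRIVE, +1 = raise IDRIVE, 0 = leave it alone. */
int idrive_step_from_status(uint16_t status)
{
    if (status & (1u << ERR_AHW_BIT))
        return -1;  /* oscillation amplitude too high */
    if (status & (1u << ERR_ALW_BIT))
        return 1;   /* oscillation amplitude too low  */
    return 0;       /* amplitude within range */
}
```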

Something also seems strange: in the STATUS register (0x18) description, bits 9 and 10 are marked as reserved, yet they still have descriptions...

How can I read the Status (Amplitude Warning) of FDC2212?

I'm looking forward to your feedback.

Tks in advance...

Best regards.

  • Ivan,

    We are looking into this and will get back to you soon.
  • Hi Ivan,
    The amplitude warning flags are triggered when the oscillation voltage falls out of the range of 1.2V - 1.8V. Please refer to the following app notes below:

    1. Sensor Status Monitoring: http://www.ti.com/lit/an/snoa959/snoa959.pdf, Refer to Section 3.4
    2. Setting IDrive Configuration: www.ti.com/.../snoa950.pdf

    Best Regards,
    Bala Ravi

    Please click "This resolved my issue" button if this post answers your question

  • Hi Bala, thanks for your response.
    I had already read both documents. However, on the FDC2212, the ERR_AHW (10) and ERR_ALW (9) bits of the STATUS register (0x18) always return 0, even with the oscillation amplitude below 0.7V or above 2.3V.
    Since the device doesn't signal the amplitude status, I'm using the absolute value read from the FDC: I noticed that this value rises as I increase the sensor current up to a maximum, and then starts to fall when I step the sensor current to the next level. At this peak, the oscillation amplitude is around 1.2V. So I'm using this method to find the best sensor current (IDRIVE) for each sensor...
    My question was how to make the FDC return ERR_AHW and ERR_ALW (bits 10 and 9) different from 0. The STATUS register only ever returns DRDY (6) and the channel bits (3-0); for me it always reads 0x0048.
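    For reference, the peak search I'm describing, sketched in C over an array of raw readings taken at successive IDRIVE codes (the function name is mine, not from any TI library; in firmware each reading would come over I2C after reprogramming DRIVE_CURRENT_CH0):

    ```c
    #include <stdint.h>
    #include <stddef.h>

    /* Given raw data readings taken at IDRIVE codes 0..n-1, return the
     * code where the raw value peaked; the sweep stops at the first
     * drop, mirroring the "increase until it falls" routine above. */
    size_t peak_idrive(const uint32_t *raw, size_t n)
    {
        size_t best = 0;
        for (size_t i = 1; i < n; i++) {
            if (raw[i] < raw[i - 1])
                break;          /* value started to fall: past the peak */
            best = i;
        }
        return best;
    }
    ```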
  • Hi Ivan,
    Allow me some time to test this in lab and get back to you.

    Best Regards,
    Bala Ravi
  • Sure Ravi!

    Taking advantage of your e-mail: I'm using a sensor (tank circuit) frequency of 8MHz, the internal oscillator, a single channel (Ch0), putting the FDC into Shutdown mode between readings, and a battery-supplied board (no real ground). As I read in the datasheet, for single-channel operation I should keep Fref below a maximum of 35MHz. Since I'm using the internal oscillator (43MHz), I'm setting the Fref divider to 2. Also, Fin must be < Fref/4, so I should set the Fin divider to 2 as well.
    The value read is (of course) the same with both dividers at 1; however, the noise level is much higher with the dividers at 2 than at 1. Do you know why?
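    Just to show the arithmetic I'm applying (the 35MHz single-channel limit and the Fin < Fref/4 constraint are my reading of the datasheet):

    ```c
    /* Check the single-channel clock constraints described above:
     * f_ref = f_osc / fref_div must stay at or under 35 MHz, and
     * f_in = f_sensor / fin_div must stay under f_ref / 4. */
    int dividers_ok(double f_osc_mhz, unsigned fref_div,
                    double f_sensor_mhz, unsigned fin_div)
    {
        double f_ref = f_osc_mhz / fref_div;
        double f_in  = f_sensor_mhz / fin_div;
        return f_ref <= 35.0 && f_in < f_ref / 4.0;
    }
    ```

    With the 43MHz internal oscillator and my 8MHz tank, both dividers at 2 satisfy the constraints (21.5MHz reference, 4MHz input), while dividers at 1 do not.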
    Another question: I'm configuring the FDC in the following sequence: RCOUNT, SETTLECOUNT, clock dividers, ERROR_CONFIG, CONFIG, MUX_CONFIG, DRIVE_CURRENT.
    Should CONFIG be the last register configured, since the channel is enabled when this register is written?
    Using CLOCK_DIVIDERS = 0x1001, I can see some wrong values read back from the FDC; see the picture below, captured even with the board sitting still.
    image.png
    The blue curve is the FDC raw value and the red is the battery discharge...
    When I turn it on, I get a lower level with some pulses up to a higher level. After some hours, the nominal value moves to the higher level and the pulses go down...
    Do you have any idea why this happens? It occurs only in some parts: I have more than 600 parts working, and the phenomenon shows up in about 5% of them.
    One more question: in app note SNOA943 I read about a readback time of around 688µs. Should I wait this readback time after the interrupt occurs before reading the data register, or is the readback time already over when the interrupt fires (DRDY bit set)?

    Thanks and best regards...

  • Hi Ivan,

    Bala has recently switched groups, so I am taking over this thread for him.

    It is not surprising that the noise floor increases when you set FREF_DIVIDER to 2. The longer the LDC is given to measure a single data point, the lower the noise floor. With the same RCOUNT value, setting FREF_DIVIDER to 2 essentially gives the LDC half as many reference clock cycles to complete its measurement.

    Fortunately, when using the internal reference clock, you don't need to use FREF_DIVIDER; it's only necessary when using an external oscillator. I recommend setting both FREF_DIVIDER and FIN_DIVIDER to 1. Just note that the internal oscillator has a large part-to-part variation, which will affect the absolute output code of each device given the same sensor setup. This may or may not be an issue for your application; it's just something to be aware of.
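    To make that concrete, the datasheet gives the conversion time as t = (RCOUNT × 16) / f_ref, so halving f_ref doubles the time a given RCOUNT takes; to keep the same data rate you would have to halve RCOUNT, which halves the reference cycles per conversion and raises the noise floor. A small sketch (the helper name is illustrative):

    ```c
    /* Conversion time per the datasheet: t = (RCOUNT * 16) / f_ref.
     * Halving f_ref (FREF_DIVIDER = 2) doubles the time the same
     * RCOUNT takes, so for a fixed data rate RCOUNT must be halved,
     * giving half as many reference cycles per measurement. */
    double conversion_time_us(unsigned rcount, double f_ref_mhz)
    {
        return (rcount * 16.0) / f_ref_mhz;  /* MHz in, microseconds out */
    }
    ```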

    I do recommend setting CONFIG last, because this register controls when the device exits sleep mode and begins making conversions. We recommend configuring the device when it is in sleep mode. It's very possible for a conversion to become corrupted if the drive current or other settings are changed in the middle of the conversion.
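    A sketch of that ordering (register addresses from the FDC2212 datasheet; the I2C helper is a stand-in and the 16-bit values are placeholders, not recommended settings):

    ```c
    #include <stdint.h>

    #define FDC_NWRITES 7
    static uint8_t  fdc_log[FDC_NWRITES];  /* register write order */
    static unsigned fdc_count;

    /* Stand-in for the real I2C write helper; here it only logs the
     * register order so the sequence can be sanity-checked. */
    static void i2c_write16(uint8_t reg, uint16_t val)
    {
        (void)val;
        if (fdc_count < FDC_NWRITES)
            fdc_log[fdc_count++] = reg;
    }

    /* Configure while the device sleeps; CONFIG (0x1A) is written last
     * because that write wakes the device and starts conversions.
     * Returns the last register address written. */
    uint8_t fdc_configure(void)
    {
        i2c_write16(0x08, 0xFFFF);  /* RCOUNT_CH0 */
        i2c_write16(0x10, 0x0400);  /* SETTLECOUNT_CH0 */
        i2c_write16(0x14, 0x1001);  /* CLOCK_DIVIDERS_CH0 */
        i2c_write16(0x19, 0x0000);  /* ERROR_CONFIG */
        i2c_write16(0x1B, 0x020D);  /* MUX_CONFIG */
        i2c_write16(0x1E, 0x7800);  /* DRIVE_CURRENT_CH0 */
        i2c_write16(0x1A, 0x1C01);  /* CONFIG: exit sleep, start conversions */
        return fdc_log[fdc_count - 1];
    }
    ```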

    As for the graph you shared, could you clarify the time scale of the x-axis? The scale on the picture is a little hard to see. Often, spikes like this are caused by EMI. If you are using long cables in your setup, you might try adding ferrites at the end of the cables. If that doesn't help, we have an app note that may help: EMI Considerations for Inductive Sensing Applications.

    Finally, the readback time is based on your microcontroller. It's the amount of time that your microcontroller needs to complete the I2C communication and read the data registers. As soon as one conversion ends, another begins (either on the same channel or on the next sequential channel, depending on which channels are enabled). It is possible to set the conversion time to be smaller than the readback time, in which case unread data would be overwritten. As soon as the interrupt flags, you should read the newly available data.
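    One detail when reading: on the FDC2212 the 28-bit result spans DATA_CH0 (0x00) and DATA_LSB_CH0 (0x01), and the top four bits of the most significant word are status flags rather than data. A sketch of combining the two words (the helper name is mine):

    ```c
    #include <stdint.h>

    /* DATA_CH0 (0x00) carries DATA[27:16] in bits 11:0 plus status
     * flags in bits 15:12; DATA_LSB_CH0 (0x01) carries DATA[15:0].
     * Combine the two I2C words into the 28-bit conversion result. */
    uint32_t fdc_combine(uint16_t data_msw, uint16_t data_lsw)
    {
        return ((uint32_t)(data_msw & 0x0FFFu) << 16) | data_lsw;
    }
    ```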

    Please let me know if you have any followup questions. I'd be happy to help.

    Best Regards,

  • Hello Kristin, first of all, thanks for your feedback.

    To clarify my design: it's a single board (circuit + electrodes) powered by a CR2032 coin battery. All the ICs are TI! I'm using the FDC2212 as the sensor, a CC1310 as the processor and RF link, and a TPS610981 boost converter to hold the FDC's minimum required voltage (2.6V) even when the battery drops below it. The electrodes are used in a balanced configuration; see the figure below to understand the construction.

    Following my test results, I left FREF_DIVIDER and FIN_DIVIDER at 1, which gave the best performance, and I'm now configuring the CONFIG register last (it seems more logical to me). I'm also reading the data as soon as the interrupt occurs, so I believe no data is being overwritten.

    About the graph I shared: it shows the raw data versus battery voltage over one whole night (around 14 hours), with one point every 4 seconds (my acquisition rate of 0.25Hz). That's why I put the FDC into Shutdown mode instead of Sleep.

    What is strange to me is that these pulses appear only at certain battery voltages. At first, the main level is low and the pulses go up; after a while, the level goes up and the pulses go down. The unwanted pulses always have almost the same amplitude...

    I have more than 1000 parts running, but only 2-5% show this behavior. Removing the TPS610981 and bypassing Vbat straight to the circuit resolves the issue. Do you think the TPS could be injecting noise? The only reason for the TPS on the board is to extend battery life, since the circuit can then keep working even after the battery drops below 2.6V.

    I'm looking forward to your feedback.

    Thanks in advance!

  • Hello Ivan,

    Although I am not an expert in the TPS610981, it does seem plausible that it could be injecting noise into the FDC2212. May I ask what your input deglitch filter is set to? This can be found in the MUX_CONFIG.DEGLITCH register field. The input deglitch filter helps eliminate noise at higher frequencies than the expected sensor frequency, and if it is set too high it will be less effective. It might help mitigate this behavior. Make sure that it is set to the lowest possible value that is higher than your sensor frequency.
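    For an 8MHz sensor that would be the 10MHz setting. As a sketch of the selection (the field codes below are from the DEGLITCH table in the datasheet; please verify them against your datasheet revision):

    ```c
    #include <stdint.h>

    /* Pick the lowest MUX_CONFIG.DEGLITCH bandwidth that is still
     * above the sensor frequency. Field codes per the datasheet:
     * 0b001 = 1 MHz, 0b100 = 3.3 MHz, 0b101 = 10 MHz, 0b111 = 33 MHz. */
    uint8_t pick_deglitch(double f_sensor_mhz)
    {
        if (f_sensor_mhz < 1.0)  return 0x1;  /* 1 MHz   */
        if (f_sensor_mhz < 3.3)  return 0x4;  /* 3.3 MHz */
        if (f_sensor_mhz < 10.0) return 0x5;  /* 10 MHz  */
        return 0x7;                           /* 33 MHz  */
    }
    ```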

    Otherwise, it would be helpful to see a scope capture of the FDC2212's VDD pin during the time of the spikes. If the TPS610981 is injecting noise into the FDC2212, you could post a question to one of the power forums. They might be able to suggest a workaround or an alternate part that could improve your battery life and eliminate the noise issue.

    Best Regards,

  • Dear Kristin,

    I think the TPS610981 is to blame.

    I started reading Vbat (before the regulator) with the ADC and Vcc (after the regulator) with the CC1310 AON_BatMon library, at the same time as I read the FDC. The peak value read from the FDC also appears in the Vcc voltage; see the figure below. (I cannot see it on the oscilloscope.)

    The blue line is the VBat, the Red line is Vcc (after TPS) and the green one is the FDC raw read value.

    So I'm starting to investigate the TPS610981 instead of the FDC2212.

    Just to be sure, I removed the TPS from the board and connected Vbat straight through to Vcc; all the peaks disappeared.

    The blue line is FDC raw data and Red one is battery voltage.

    Just a question about this last graph: why do the FDC raw values decrease with the voltage? The output should be a ratio of Freq_In/Freq_Ref, shouldn't it? What is changing with the input voltage, Freq_In (tank circuit) or Freq_Ref (internal oscillator)?

    Answering your question about MUX_CONFIG.DEGLITCH: it's set to 33MHz, but I have already tried 10MHz and it made no difference.

    Thanks and best regards...

    Ivan

  • Hi Ivan,

    I'm glad to hear you found the source of your error.

    As for your question about the last graph, it is likely F_in that is changing. The sagging battery voltage could cause the sensor oscillation amplitude to sag as well, which could interfere with the internal frequency measurement. Certainly when the sensor oscillation amplitude is outside of the recommended range (1.2-1.8V), the SNR suffers.
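    For reference, the datasheet gives the FDC2212 output code as DATA = (f_in / f_ref) × 2^28, so with f_ref fixed, a sag in f_in shows up directly as a lower raw code. A minimal model of that relation:

    ```c
    #include <stdint.h>

    /* Output-code model from the datasheet:
     * DATA = (f_in / f_ref) * 2^28. With f_ref fixed (internal
     * oscillator), a sagging f_in lowers the raw code proportionally. */
    uint32_t fdc_code(double f_in_mhz, double f_ref_mhz)
    {
        return (uint32_t)((f_in_mhz / f_ref_mhz) * 268435456.0);  /* 2^28 */
    }
    ```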

    Best Regards,
  • Dear Kristin,

    At the beginning of my code, I added a routine to calibrate the sensor current (IDRIVE), because (as in my first question in this thread) the amplitude error flags in the STATUS register don't work. So I read the FDC raw data and increase the sensor current while the raw value grows; when the raw value starts to decrease, I stop and store that sensor current. This normally happens when the voltage amplitude at the sensor is a little above 1.2Vpp. So, my questions are:

    1) Why does the raw value start to decrease even though I'm still increasing the sensor current (IDRIVE)?

    2) Do you think I should leave the sensor current one or two steps higher (sensor amplitude around 1.6Vpp), even though the raw data value reads lower? Remember that my product is battery powered and low power consumption is required.

    3) If I decide to remove the TPS610981, which today keeps the circuit voltage fixed (or should, since it is the cause of my main error), should I run this calibration routine periodically, or only when a low battery is detected?

    Best regards...

    Ivan

  • Hi Ivan,

    The raw code decreases because of the sensor frequency measurement block's architecture. Unfortunately I can't explain this in more detail. However, I would not recommend setting your drive current this way. My concern is that the point at which the output code starts to decrease could change across temperature, and you might operate at a less than ideal oscillation voltage. I definitely would not perform this process more than once. Changing the drive current would result in an output code shift that could interfere with your sensing algorithm.

    Instead, I would recommend calibrating your drive current using lab measurements. I would measure your sensor oscillation amplitude across all of your expected target distances, your full Vdd swing, and if possible your full temperature range. Then select a single drive current value that keeps the sensor oscillation amplitude in the recommended range throughout all of these conditions (or as close as possible to within the recommended range).

    Best Regards,