
bq77pl900 cell-V measurement calibration procedure

Other Parts Discussed in Thread: BQ77PL900

I am using the bq77pl900 chip to manage a high-capacity, 8-cell lithium-polymer battery pack in host-control mode. I've managed to get everything working pretty much as I want it to, with the exception of the cell-V measurement calibration. I'm starting to think I must be doing something fundamentally wrong somewhere, because by far my greatest source of variability in my cell voltage measurements is coming from the chip calibration process itself.

The process of cell-V calibration involves presenting 6 calibration signals on the VOUT pin and performing calculations on the measured voltages to generate two calibration constants, which are used to scale and offset the cell-voltage measurements.

On my hardware, VOUT is connected to a 14-bit ADC chip. With a 1.25 V reference and 14 bits of resolution, each ADC count represents about 76 microvolts (a 1,250,000 µV reference divided by the 16384 counts in 14 bits).
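
For concreteness, the count-to-volts conversion is just a fixed scale factor; a minimal sketch (the helper name is mine, not from any library):

    /* Convert a raw 14-bit ADC count to volts: 1.25 V reference over
     * 16384 counts, so one count is roughly 76.3 microvolts. */
    static double adc_to_volts(unsigned counts)
    {
        return counts * (1.25 / 16384.0);
    }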

The following are typical ADC counts I see at VOUT for the 6 calibration measurements:

        ADC counts
Step 1:    0
Step 2:    15708
Step 3:    32
Step 4:    69
Step 5:    12611
Step 6:    9161

The values calculated from this data are as follows:

1. Vdout_0V = 0.00000
2. VREF_m = 1.19965
3. Vdout_VREF_m = 0.00243
   Kdact = -0.00203
4. Vdout_2V5 = 0.00527
   Vref_2V5 = 2.59603
5. Vout_1V2 = 0.96312
6. Vout_2V5 = 0.69962
   Kact = 0.18871
   Vout_0V = 1.18951

The first number at each step is the direct measurement, in volts; the other numbers (steps 3, 4, and 6) are derived values, as described in the datasheet procedure. The final two numbers are the results used to scale and offset cell-V measurements. Names match those used in the datasheet.
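
For reference, the whole calculation boils down to the following sketch (names follow the datasheet; the formulas are my reconstruction of the procedure, and they reproduce the derived values listed above):

    /* Calibration constants from the six step measurements (all in volts).
     * Names follow the datasheet; formulas are my reconstruction and
     * reproduce the derived values listed in this post. */
    struct bp_cal { double kact; double vout_0v; };

    static struct bp_cal bp_calibrate(double vdout_0v,    /* step 1 */
                                      double vref_m,      /* step 2 */
                                      double vdout_vref,  /* step 3 */
                                      double vdout_2v5,   /* step 4 */
                                      double vout_1v2,    /* step 5 */
                                      double vout_2v5)    /* step 6 */
    {
        double kdact    = (vdout_0v - vdout_vref) / vref_m;
        double vref_2v5 = (vdout_0v - vdout_2v5) / kdact;
        double kact     = (vout_1v2 - vout_2v5) / (vref_2v5 - vref_m);
        double vout_0v  = vout_1v2 + kact * vref_m;
        struct bp_cal c = { kact, vout_0v };
        return c;
    }

A cell measurement at VOUT then converts back as vcell = (Vout_0V - vout) / Kact, at least as I read the datasheet's scaling.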

My problem is that I am finding that a difference of only 1 ADC count (on the 3rd of the 6 calibration values measured) can result in a cell-V measurement difference of over 60 millivolts! That's less than 76 microvolts of difference on that signal, which certainly seems to me to be "in the noise".

This means that I'll be taking cell-V readings (on an unchanging test signal) and getting numbers within 2 or 3 millivolts of each other, which is about what I'm aiming for; all good. Then a calibration occurs, and afterwards my cell-V results all jump up or down by up to 60 millivolts. The signal is the same, but my calibration constants have changed enough to completely skew the resulting voltage.
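
To show the scale of the problem, here is a quick sensitivity check using the bp_calibrate() sketch above and the six values just listed; the exact shift depends on where the cell reading sits, but for a mid-range reading it comes out well over 100 mV:

    /* Perturb the step-3 measurement by one ADC LSB (~76.3 uV) and
     * compare the cell voltage recovered from the same fixed VOUT
     * reading, using the bp_calibrate() sketch above. */
    #include <stdio.h>

    int main(void)
    {
        const double lsb = 1.25 / 16384.0;      /* one ADC count, ~76.3 uV */
        struct bp_cal a = bp_calibrate(0.00000, 1.19965, 0.00243,
                                       0.00527, 0.96312, 0.69962);
        struct bp_cal b = bp_calibrate(0.00000, 1.19965, 0.00243 + lsb,
                                       0.00527, 0.96312, 0.69962);
        double vout = 0.50;                     /* an arbitrary fixed reading */
        printf("vcell: %.4f V -> %.4f V\n",
               (a.vout_0v - vout) / a.kact,     /* ~3.65 V */
               (b.vout_0v - vout) / b.kact);    /* ~3.52 V */
        return 0;
    }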

There's a lot more detail I can supply if it will help: the code doing these calculations; the settling/sampling/sorting/averaging I'm doing on the ADC results; a per-device µV measurement of the 1.25 V reference's exact value; etc.

I was previously doing a calibration every 10 minutes (when actively charging or discharging), but have now switched to limiting these calibrations so that none occurs in the middle of any single charge/discharge cycle. I suspect I should re-calibrate whenever I detect a significant temperature change, but quite frankly I have not yet seen any drift nearly as large as the uncertainty the calibration itself introduces!

Is anyone using this calibration procedure successfully? It's not helping me much (if at all), and I'm tempted to just throw it all away.

Many thanks to everyone who has read this far.

regards,
Brian

  • Still no response, TI?

    I can only assume I'm expecting too much of the calibration process (i.e., repeatability).


  • My apologies, Brian; your post is very old and was somehow missed. Hopefully you received support from your local sales and applications people or one of the other support methods. Perhaps this response will be helpful for future projects or other users.

    There is an implied state 0 in the calibration procedure where VAEN is enabled with the calibration bits set to 0; that is, you would/must be measuring cell voltages before calibrating. The datasheet page 32 cal steps just jump in without that clarification; by listing VAEN last in the list, it might be interpreted as being set last when it should be set first. Some cal modes will give voltages when entered directly, some won't.

    The values you list for the step 1, 3, and 4 measurements look low. For step 1 I would expect a value close to the selected reference, around either 1.2 V or 0.975 V. Referring to the figure on page 31 of the datasheet, measurements 3 and 4 should be lower than measurement 1, with some identifiable difference. They should look similar to measurements 5 and 6, since the differences are expected to be small: offsets and a S/H scaling factor which is ~1. These odd readings likely did affect your result some.

    Certainly the system designer must decide how and when to calibrate, and how much to filter both the calibration and cell measurement values. The IC designers provided the cal method to calibrate the internal system, since a 2-point cal on 10 cells could be quite complex. Re-calibration in-system during environmental changes could allow compensation of drift with temperature; these would likely be in the noise, but could introduce undesired jumps in measurements due to the calibration change.

    As you note Brian, feedback from system implementers could be useful to other community members. Thanks for your input and sorry again for the lack of timely response from the community.

  • Thanks for the response, I will investigate and report back.

    I'm not sure if it makes any difference or not, but my calibration procedure at each of the six steps is: turn off VAEN, modify the CAL bits, turn VAEN back on, wait a brief settling delay (200+ microseconds), then begin sampling the signal.

    Now I'm wondering - should I just turn on VAEN and leave it on for the calibration measurements?

    Possibly related - can you please clarify what this means: "Some cal modes will give voltages when entered directly, some won't."

    ('Entered directly' meaning what, exactly?)

    And one more thing, please: in the datasheet, I see that Steps 1, 3, 4, and 6 specify "CELL[4:1] = 0", Step 2 specifies no cell values, and Step 5 specifies "CELL2 = 0, CELL1 = 0". Is there a reason for the inconsistency? I'm assuming all the "CELL" values ought to be zero during the calibration measurements.

    cheers,

    Brian

  • Thank you, WM! The data certainly looks MUCH better now (much more consistent results, and it now fits your expectations).

    Here are some results:

    =================================================================================
    Data from 3 calibrations, leaving VAEN enabled, and setting CAL bits to zero first.
    =================================================================================

    BP chip calibration data
    ========================

    Cal reading, step 1 : ADC average  62978.300
    Cal reading, step 2 : ADC average  63004.700
    Cal reading, step 3 : ADC average  50384.550
    Cal reading, step 4 : ADC average  36532.900
    Cal reading, step 5 : ADC average  50398.550
    Cal reading, step 6 : ADC average  36546.950

    1. Vdout_0V = 1.21044
    2. VREF_m = 1.21095
    3. Vdout_VREF_m = 0.96839
       Kdact = 0.19989
    4. Vdout_2V5 = 0.70216
       Vref_2V5 = 2.54285
    5. Vout_1V2 = 0.96866
    6. Vout_2V5 = 0.70243
       Kact = 0.19989
       Vout_0V = 1.21071


    BP chip calibration data
    ========================

    Cal reading, step 1 : ADC average  62978.400
    Cal reading, step 2 : ADC average  63008.900
    Cal reading, step 3 : ADC average  50385.550
    Cal reading, step 4 : ADC average  36533.300
    Cal reading, step 5 : ADC average  50399.450
    Cal reading, step 6 : ADC average  36546.600

    1. Vdout_0V = 1.21044
    2. VREF_m = 1.21103
    3. Vdout_VREF_m = 0.96841
       Kdact = 0.19986
    4. Vdout_2V5 = 0.70217
       Vref_2V5 = 2.54317
    5. Vout_1V2 = 0.96868
    6. Vout_2V5 = 0.70242
       Kact = 0.19987
       Vout_0V = 1.21072


    BP chip calibration data
    ========================

    Cal reading, step 1 : ADC average  62976.200
    Cal reading, step 2 : ADC average  63007.700
    Cal reading, step 3 : ADC average  50386.500
    Cal reading, step 4 : ADC average  36533.850
    Cal reading, step 5 : ADC average  50402.100
    Cal reading, step 6 : ADC average  36549.150

    1. Vdout_0V = 1.21040
    2. VREF_m = 1.21101
    3. Vdout_VREF_m = 0.96843
       Kdact = 0.19981
    4. Vdout_2V5 = 0.70218
       Vref_2V5 = 2.54350
    5. Vout_1V2 = 0.96873
    6. Vout_2V5 = 0.70247
       Kact = 0.19982
       Vout_0V = 1.21071

  • I'm glad to hear the results look better. The questions may not be significant now, but:

    VAEN should be set before entering calibration. I'd suggest leaving it set during the sequence; if not, set CAL[2:0] to 0 before turning VAEN back on. Note the dependency indicated in the register description on page 45 of the datasheet.

    By "entered directly" I meant setting the cal mode, then turning on VAEN. It sounds like this is what you were originally doing and likely why the values turned out wrong for some readings.

    It is sometimes hard to know what the original author had in mind in a datasheet, and what happened in editing. I don't think there is any significance to the CELL[4:1] bits notation in the cal steps; I'd suggest leaving them low.

    As a summary, I'd suggest a sequence similar to the following for cell voltage calibration (a rough code sketch follows the list):

    1. Select a time when the system is quiescent; I don't know that this is required, but it seems reasonable. Obviously this may not be possible if calibrating on a fixed schedule.
    2. Set host mode (if not the normal mode of the system).
    3. Addr 0x05, write 0x00; set the cal mode to 0 and the cell selection to 0 (lowest cell). The cell selection should not matter; the cal mode does.
    4. Addr 0x04, write 0x00; turn off balancing on the lower 8 cells.
    5. Addr 0x03, write 0x00; turn off balancing on the top 2 cells and turn off any monitored output.
    6. Addr 0x03, write 0x01; turn on VAEN. VOUT will represent the lowest cell.
    7. Addr 0x05, write 0x10; Step 1 of the Cell Voltage Measurement Calibration in the datasheet.
    8. Proceed through the calibration measurements. Wait an appropriate settling time for the system response; for sample/hold measurements I'd suggest waiting one 1.25 ms sample interval plus settling time. This may be excessive, as the part should sample on a selection change. Apply filtering as appropriate for your system.
    9. Addr 0x05, write 0x00; turn off cal mode.
    10. Addr 0x03, write 0x00; turn off VAEN if not measuring cell voltages immediately following cal.
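
    In rough C, assuming hypothetical write_reg(addr, value) and delay_us() helpers for your host interface (and assuming steps 1 through 6 map to CAL codes 1 through 6, per the 0x10-for-step-1 example above), the sequence might look like:

        /* Sketch of the suggested calibration sequence; write_reg() and
         * delay_us() are placeholder host-interface helpers, not TI code. */
        void bp_cal_sequence(void)
        {
            write_reg(0x05, 0x00);  /* cal mode 0, cell selection 0        */
            write_reg(0x04, 0x00);  /* balancing off on lower 8 cells      */
            write_reg(0x03, 0x00);  /* balancing off on top 2, outputs off */
            write_reg(0x03, 0x01);  /* VAEN on; VOUT shows the lowest cell */

            for (int step = 1; step <= 6; step++) {
                write_reg(0x05, step << 4);  /* CAL[2:0] in b6-b4, assumed  */
                delay_us(1250 + 200);        /* sample interval + settling  */
                /* ...sample and filter VOUT here, per step...              */
            }

            write_reg(0x05, 0x00);  /* cal mode off                        */
            write_reg(0x03, 0x00);  /* VAEN off if not reading cells next  */
        }
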
  • Your suggested sequence is pretty much exactly what I am doing now. It would really be good if this info could find its way into the next revision of the datasheet, wouldn't you agree?

    As to the dependency on page 45 ("The CELL_SEL b6–b4 (CAL2–CAL0) bits should be 0 when VAEN(b0) in register 3 is changed from 0 to 1 or the VOUT pin will not go active."): well, VOUT does indeed "go active" in a sense; it just does not present the *expected* signal. Otherwise, all of my calibration steps (before the fix) would have seen the same result at VOUT, no? So it's clearly 'active' if the CAL bits are non-zero when VAEN is enabled; just not 'correct'.

    To be perfectly honest with you, I interpreted that "dependency" as trying to tell me that the CAL bits must be zero, or else the VOUT pin would not present any cell's voltage; after all, I could clearly see that different signal levels were presented depending on the CAL bits. In other words, I was assuming "go active" == "go active with a cell V".

    regards,

    Brian