BQ34Z100-G1: SOC not accurate during NiMH charging

Part Number: BQ34Z100-G1
Other Parts Discussed in Thread: EV2400, BQSTUDIO, GPCCHEM, GPCRA0

Hello,

I would like to ask for help configuring the bq34z100-G1. Our application is most interested in when the battery pack reaches full charge, whereas most application descriptions seem focused on discharge. I am confident the fuel gauge can provide the information we need, but configuring the control values feels like a rather intimidating combination lock that I have not yet deciphered.

Is there a technical reference manual that goes into a bit more depth than SLUSBZ5B?
For example: 7.3.6.14 Ra Tables
Is there some place that explains the difference between R_a0 0 and R_a0x 0? Is one for discharge and the other for charge? How does one find out what the values actually mean?

 

After completing several charge/discharge cycles I am getting reasonable predictive calculations for discharge; specifically, SOC and AverageTimeToEmpty() both look sensible.

However, during a charge cycle, the SOC value quickly reaches 100% while the AverageTimeToFull() is semi-reasonable. Why the quick change in SOC?

Is there a straightforward way to obtain more accurate charging predictions?

Ideally, I would like the SOC to indicate when the battery pack has actually reached full charge. It would also be fantastic if it could calculate NiMH self-discharge. We want to charge the pack, put it in stock, and have it ready for field use. The fuel gauge should indicate an accurate charged SOC so the pack can be pulled from stock and relied upon to contain enough power to finish an entire day of field work.

 

Attached are some example charge/discharge log files collected using:
bqStudio 1.3.86 and an EV2400 on a Win10 machine, along with a bq34z1xxEVM and our 7-series-cell 4500 mAh NiMH (ChemID = 6100) battery pack.

Each file contains the collected .log data and the resulting DataMemory changes recorded by the “Auto Export” feature of the latest bqStudio.

“ChargeTest3” is an example of AverageTimeToFull being more accurate than SOC. During this test the thermal sensor was located on the bq34z1xxEVM board.

ChargeTest3.zip

“DischargeTest6” contains a good example of things working well during discharge.

DischargeTest6.zip

“ChargeTest6” worked fairly well, but still reached Full Charge too soon. During this test (and all subsequent tests) the thermal sensor was placed directly in the mid-section of the battery pack.

ChargeTest6.zip

“ChargeTest7” is another example of AverageTimeToFull being more accurate than SOC.

ChargeTest7.zip

“Discharge-ChargeTest8” is a combined discharge/charge cycle with SOC reaching FC too fast.

Discharge-ChargeTest8.zip

 

In SLUSBZ5B (revised July 2016) there is a section that discusses Charge Efficiency and mentions several values. I do not find anything in the Data Flash Summary related to NiMH charge efficiency, only for Pb (SubClassID 34, offset 4).

7.3.6.16                Charge Efficiency

Charge Efficiency

Charge Eff Reduction Rate

Charge Effi Drop Off

Charge Eff Temperature Compensation

Where are these values found for NiMH?

Are the Pb values also used for NiMH? (my *guess* is YES.)

Does the “learning” process optimize these values automatically, or must they be set manually?

 

Any assistance in helping make this device useful for our application will be greatly appreciated.

Cheers,

-Steve

 

  • Hi Steve,

    Each subclass (Ra0 and Ra0x) in the Ra Table class is a separate profile of resistance values, normalized at 25 degrees, for the cell in a design. The cell has two profiles, distinguished by the presence or absence of the x at the end of the subclass name.

    The purpose of having two profiles for the cell is to ensure that at any given time at least one profile is enabled and in use, while the alternate profile can be updated without interference. Having two profiles also helps reduce stress on the flash memory.

    The Ra Table class has 15 values for each R_a subclass. Each of these values represents a resistance value normalized at 25°C for the associated Qmax Pack-based SOC grid point, as found by the following rules:

    For Cell0 R_aM:
    1. If 0 ≤ M ≤ 8: the data is the normalized resistance for SOC = 100% – (M × 10%)
    2. If 9 ≤ M ≤ 14: the data is the normalized resistance for SOC = 100% – [80% + (M – 8) × 3.3%]

    This gives a resistance profile across the entire SOC range of the battery cells, with the grid points concentrated more densely near 0%.
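
    If it helps to see the grid written out, the following short Python sketch simply evaluates the two rules above and prints the SOC point associated with each R_a index (an illustration only, not device code):

        # Qmax-based SOC grid point for each R_a index M, per the two rules above.
        for m in range(15):
            if m <= 8:
                soc = 100.0 - m * 10.0                 # 100%, 90%, ... down to 20%
            else:
                soc = 100.0 - (80.0 + (m - 8) * 3.3)   # 16.7%, 13.4%, ... down to 0.2%
            print(f"R_a{m:2d}: SOC grid point = {soc:.1f}%")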

    To account for self-heating during charge, please make sure to modify the following parameters (values may be found in some cell datasheets, or determined via testing):

    Charge Efficiency

    Charge Eff Reduction Rate

    Charge Effi Drop Off

    Charge Eff Temperature Compensation

    Yes, the Pb fields are also utilized for NiMH.

    The above parameters must be set manually. The learning cycle adjusts Qmax when Update Status moves from 0x04 to 0x05, and the Ra tables when it moves from 0x05 to 0x06.

    For NiMH, please ensure the ChemID is a good fit using the gpcchem tool. If a good fit is not found, please let me know so we may characterize the cell in Dallas.

    Sincerely,
    Bryan Kahler
  • Bryan,

    Thank you for the explanation of how Ra0 & Ra0x are used. Is there some form of reference material where this information can be found?

    The 1.3.86 version of bqStudio that I’m using only lists Pb Temp Comp and Pb Reduction Rate in the Data Memory. Pb EFF Efficiency and Pb Drop Off Percent are NOT listed. I suppose I’ll need to utilize the Advanced Comm I2C feature to manually access the Data Memory? If these two values are needed, why are they not included in the editable Data Memory values? These values are also missing in the exported gg.csv file.

    Presently: Pb Temp Comp == 24.96 and Pb Reduction Rate == 10.0

    I’m collecting the data to submit to the Gauging Parameter Calculator to see what it says. We presently use the same NiMH cell in a different pack with the ID 0x6100, so that’s what I’m starting with.

    Any suggestions on *reasonable* Charge Efficiency values for a NiMH cell?

    Thanks again,

    -Steve

  • Bryan,

    I’ve looked at the Data Memory values at subclass 0x22 (34) and have more questions.

    Using the Advanced Comm I2C feature in bqStudio 1.3.86 to read the Charge Subclass section of Configuration Data Flash to obtain the Pb Efficiency values produced the following result:

    Advanced Comm Transaction Log

    TimeStamp , Read/Write , Address , Register , Length , Data ,

    2018-07-25 02:34:41 764 , Wr , aa , 61 , 1 , 00

    2018-07-25 02:36:18 545 , Wr , aa , 3e , 1 , 22

    2018-07-25 02:36:48 887 , Wr , aa , 3f , 1 , 00

    2018-07-25 02:37:04 751 , Rd , aa , 40 , 32 , FF CE 02 26 64 7B 1F BE 77 60 7E 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
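
    For reference, here is the same sequence expressed as a minimal Python sketch (assuming a Linux host with the smbus2 package; 0xAA in the log is the 8-bit address, i.e. 0x55 in 7-bit form, and the register comments reflect my reading of the extended command set):

        from smbus2 import SMBus

        GAUGE = 0x55  # 7-bit form of the 0xAA/0xAB address shown in the log

        with SMBus(1) as bus:                                 # bus number depends on the host
            bus.write_byte_data(GAUGE, 0x61, 0x00)            # BlockDataControl = 0 (enable data flash access)
            bus.write_byte_data(GAUGE, 0x3E, 0x22)            # DataFlashClass = 0x22 (Charge subclass, 34 decimal)
            bus.write_byte_data(GAUGE, 0x3F, 0x00)            # DataFlashBlock = 0 (first 32-byte block)
            block = bus.read_i2c_block_data(GAUGE, 0x40, 32)  # BlockData, registers 0x40..0x5F
            print(" ".join(f"{b:02X}" for b in block))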

    Using Table 11. Data Flash Summary (under 7.3.4 of SLUSBZ5B) as a guide, the values are interpreted as:

    FF CE = Suspend Low Temp (I2) (= -5.0 as reported by bqStudio)

    02 26 = Suspend High Temp (I2) (= 55.0 as reported by bqStudio)

    64 = Pb EFF Efficiency (U1) (= 100, default according to Table 11)

    7B 1F BE 77 = Pb Temp Comp (F4) (= 24.960 as reported by bqStudio; Xemics floating point? see the decode sketch below)

    60 = Pb Drop Off Percent (U1) (= 96, default according to Table 11)

    7E 00 00 00 = Pb Reduction Rate (F4) (= 10.0 as reported by bqStudio)

    00 00 00 00 … = all remaining bytes are zero
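
    A minimal decode sketch for the F4 fields, assuming they use the Xemics floating-point layout (an exponent byte followed by a 24-bit mantissa with an implied leading 1; the sign is assumed to live in the MSB of the second byte):

        def xemics_to_float(b):
            """Decode a 4-byte F4 data flash value (assumed Xemics format)."""
            exponent, b1, b2, b3 = b
            sign = -1.0 if (b1 & 0x80) else 1.0
            mantissa = ((b1 | 0x80) << 16) | (b2 << 8) | b3   # restore the implied leading 1
            return sign * mantissa * 2.0 ** (exponent - 128 - 24)

        print(xemics_to_float([0x7B, 0x1F, 0xBE, 0x77]))   # ~0.0195 (Pb Temp Comp bytes above)
        print(xemics_to_float([0x7E, 0x00, 0x00, 0x00]))   # 0.125   (Pb Reduction Rate bytes above)

    Decoded this way, the raw values (about 0.0195 and exactly 0.125) fall inside the ranges Table 11 lists, while bqStudio displays 24.960 and 10.000, so the two appear to differ only by a display scaling.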

    The bqStudio values for Pb Temp Comp and Pb Reduction Rate are in conflict with what Table 11 says they should be.

    Table 11 states:
    Pb Temp Comp: Min = 0, Max = 0.078125, Default = 0.01953125 %
    Pb Reduction Rate: Min = 0, Max = 1.25, Default = 0.125 %

    bqStudio reports:
    Pb Temp Comp = 24.960 %
    Pb Reduction Rate = 10.000 %

    Why is there such a difference between the documentation and software?

    Can anyone provide some insight into how to properly utilize these Charge Efficiency values so as to help improve the meaning of SOC during the charge cycle?

    Thank you,

    -Steve

  • Hi Steve,

    Thank you for the detailed post. I will provide an update by COB Tuesday.

    Sincerely,
    Bryan Kahler
  • Hi Steve,

    To determine the charge efficiency values for your cells and application, please coulomb count the charge into and out of the battery at the average application charging rate and the average application discharging rate.

    Thank you for the feedback. The values reported by bqStudio are correct and reflect Table 25 of the datasheet. Please use those values.
    Pb Temp Comp = 24.960 %
    Pb Reduction Rate = 10.000 %

    Sincerely,
    Bryan Kahler
  • Hi Bryan,

    Thanks for the information and suggestions.

    For our 4500 mAh battery pack we should have 16,200 C total (4500 mAh × 3.6 C per mAh).

    Using the recorded .log files and Excel to perform the Coulomb counting:

    During discharge I measure about 14,500C

    During charge I measure about 15,500C
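
    The Excel work is just a running sum of current × Δt over the log; here is a minimal Python equivalent (the column names are placeholders for whatever the bqStudio export actually uses):

        import csv

        def coulombs(path, time_col="ElapsedTime", current_col="Current"):
            """Integrate current (mA) over time (s) from a CSV log; returns coulombs."""
            total_mA_s, prev_t = 0.0, None
            with open(path, newline="") as f:
                for row in csv.DictReader(f):
                    t, i = float(row[time_col]), float(row[current_col])
                    if prev_t is not None:
                        total_mA_s += i * (t - prev_t)   # mA * s
                    prev_t = t
            return abs(total_mA_s) / 1000.0              # A * s = coulombs

        # Sanity check: 4500 mAh * 3600 s/h / 1000 = 16,200 C nominal capacity.
        # The logs above give roughly 14,500 C out (discharge) and 15,500 C in (charge).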

    Charging appears to require about 107% of the Coulombs available for discharging.
    Is the Pb Reduction Rate of 10% supposed to be this observed 7%? What about the Pb EFF Efficiency and Pb Drop Off Percent values?

    My most recent charge cycle, after changing the ChemID from 6100 to 6116 (as recommended by the “Gauging Parameter Calculator”), resulted in SOC jumping from 31% to 100% at the 20-minute mark of a 95-minute charge. At this point SOC is basically meaningless for a charge cycle. I hope I can figure out how to make this device useful.

    Thank you for suggesting I look at Table 25; however, this table makes Table 11 even more confusing.

    Which Table is correct?

    I look forward to any help,

    -Steve

     

  • Hi Bryan,

    I wonder what happened to the update by COB Tuesday.

    This device still produces useless SOC values during charging.

    Is there no help available? Should I simply give up trying? Have we wasted our time designing this device into our product?

    Please help,

    -Steve

  • Hi Steve,

    This is correct. Charging may require more coulombs to pass through the current sense element, as not all of them make it into the battery due to efficiency losses such as heat.

    To determine the charge efficiency values for your cells and application, coulomb count the charge into and out of the battery at the average application charging rate and the average application discharging rate.

    The values reported by bqStudio are correct and reflect Table 25 of the datasheet. Please use those values with bqStudio.
    Pb Temp Comp = 24.960 %
    Pb Reduction Rate = 10.000 %

    Modify the above values as needed for your application.

    As for the SOC jumping to 100% early, this is due in part to the flatness of the cells' voltage profile.
    Reduce Ra0 in both resistance tables by half.
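
    If it helps, that change is just arithmetic on the values shown in the Data Memory view. A minimal sketch (the values below are placeholders, not your pack's data):

        # Halve every R_a grid-point value in a profile before writing it back.
        ra0 = [100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100]  # placeholder values
        ra0_halved = [v // 2 for v in ra0]
        print(ra0_halved)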

    Sincerely,
    Bryan Kahler
  • Bryan,

    Thank you for the response but this is getting nowhere.

    You have previously stated these same values, with no explanation of the differences between Table 25 and Table 11 or how they might help my problem. I guess in the long run I should probably change the 10% to 7% to account for the measured difference. This does not at all explain the strange SOC jump, or the other two Pb efficiency values that Table 25 ignores. And is it TRUE that NiMH mode actually uses these correction values?

    “As for the SOC jump to 100% early”: What exactly is flatness? The cells seem rather cylindrical... why would flat explain the SERIOUS jump? Now the recommendation is to halve the Ra0 values that have been accumulating during the learning sessions? Do I then lock things down to keep the system from drifting into goofy behavior during field usage?

    Each trial takes at least 2 days on my side... it certainly tries one's patience while getting nowhere.
    Knowledge is power! It seems few know how these things work.

    I am an engineer paid good money to make stuff work.
    Discrepancies in technical/scientific/medical/legal documentation are seriously scary.

    WHY would SOC jump from 32% to 100% in the first 20 minutes of charging? It actually took 95 minutes to reach full charge. What would cause that? I hear/read so many formula steps about the sequence of events needed to help the system “learn” your configuration. I have also read about settings that allow complete “user” control of all parameters and have the learning system leave the analysis to the engineers.
    I want to use your tools...I NEED to KNOW WHY things work.
    At this point simply getting things close to working would be a good start.

    I will try halving all the Ra0 values on both tables and spend another couple of days to find out if it helps.
    ...it seems reasonable for about 20 minutes...then goes wacko....go figure.

    Which table do I believe? 25...11?

    Help?,
    -Steve
  • Hi Steve,

    Look forward to the update.

    While the tests are progressing, these resources may be helpful to review; they further explain the Impedance Track algorithm and will help you familiarize yourself with IT concepts. If you have any questions, please let me know.

    www.ti.com/.../slua364b.pdf

    www.ti.com/.../slua375.pdf

    As we have deviated from the original topic, please open a new thread with your expectations for the SOC readings and include the steps taken to arrive thus far.

    Sincerely,
    Bryan Kahler
  • Hi Bryan,

    Thank you for the links to the theory of WHY and how it works.
    Truly it is an incredible system! What a steep learning curve to twiddle the dials just right.
    So far I’ve been working with prototype boards and battery packs. The actual first-run circuit boards show up soon. I suppose it's time to cram-study the theory and the collected data and see if anything makes sense about why SOC jumps so radically... I really need it to stop.

    I do not feel that we have deviated from the original topic. This is exactly what I wanted to know, yet it does not solve my problem just yet. Please do not assume this has resolved my ISSUE.

    e2e has not resolved the issue.
    I follow the steps and the CHARGE SOC is ill behaved for NiMH.
    Any other suggestions?

    I will study the documents and probably become an expert consultant marketing my skills to the folks needing stuff designed with batteries included. WHY does the SOC jump during charging?

    I am simply asking for guidance and direction in an area where the device gets little attention: predicting SOC during charging.

    I’ll let you know if I figure it out.

    Sincerely,
    -Steve
  • Hi Steven,

    Radical SOC jumps are usually caused by a poor chemID fit and/or an unlearned pack.

    For NiMH cells:

    NiMH/Nickel-Metal Hydride
    The required test consists of the following steps:
    1. The test is performed at room temperature. If the cell was at a different temperature before, let the cell relax for two hours at room temperature prior to the test.
    2. Charge to full using the charging method specified by the cell maker. To detect charge termination, utilize the Delta Temperature or negative Delta Voltage method.
    3. Let the battery relax for five hours to reach full equilibrium open circuit voltage (OCV).
    4. Discharge the battery at a C/20 rate until the minimal voltage (as specified by the cell manufacturer) is reached.
    5. Let the battery relax for five hours to reach full equilibrium OCV.

    As for the “TI Thinks Resolved” button: this merely indicates that, if the steps are followed, the issue may be resolved.

    Becoming an expert consultant in this field can be quite profitable. If the errors persist after using the GPCCHEM tool to determine the proper chemID, and after having performed a learning cycle, post your gg.csv, SREC and a log file of the event for further analysis.

    Sincerely,
    Bryan Kahler
  • Charge-Discharge20.gg.zip
    Charge-Discharge20.zip

    Bryan,

    I have spent several days collecting very accurate charge/discharge data, and the GPCCHEM tool now claims I should use 6113 as my ChemID (7.91% deviation). Previously I had tried 6116 (8.91%) and 6114 (10.2%). I started this project using 6100. I also followed your recommendation of halving all the values for R_a0(x). After the changes I ran a charge cycle and, at about 6 minutes into an expected 90-minute charge, the SOC jumped from 0% to 100% with “Average Time to Full” reporting 65535. Totally useless.

    At least I’ve been able to make it worse!

    At this point I am considering using it only as a nice voltage/current measuring device and developing my own code to calculate some sensible SOC value to drive our fuel gauge LEDs. This is ridiculous.

    Can you tell me exactly how SOC is calculated?  What numerical parameters are used in the calculations?  How is it possible for the values to be so ludicrous?

    Find attached the most recent log file (using 6100 Chemical ID and prior to halving the R_a0 values).

    Is there anyone that can provide any insight?

    Sincerely,

    -Steve

  • Hi Steve,

    Thank you for the files. I will review and update this post on Tuesday.

    Sincerely,
    Bryan Kahler
  • Hi Steve,

    Based on the flags and the values of the Ra tables, the device has not been through a learning cycle. A proper learning cycle is required prior to gauging. Update Status in some of the attached gg files appears to have been manually set to 0x06; manually setting Update Status to 0x06 will not result in accurate SOC.

    Please try configuring the following registers:

    Pack Configuration: 0x09F1
    Pack Configuration B: 0xAF
    Pack Configuration C: 0xBE

    Set the NiMH parameters back to default values and try to run a learning cycle.

    If the learning cycle still fails, please use the GPCRA0 tool instead to create a learned gauge. The gpcra0 tool and documentation may be found here: http://www.ti.com/tool/gpcra0

    Sincerely,
    Bryan Kahler