BQ34110: Bad SOC calculation

Part Number: BQ34110

Hi,

We performed all the necessary measurements for the GPC cycle on a 24 V AGM battery with 12 cells and a capacity of 9 Ah.

The results were not very good, so we started adjusting the values for LearnSOC, FitMaxSOC and FitMinSOC to bring the SOC error into an acceptable range.

Finally, these are the best results we got:

3652.GPCPackaged-report.zip, 1460.GPCPackaged.zip

We did the GPC cycles with 6 A and with 45 A at -5°C, 20°C and 40°C to meet the requirements of our application.

We used the results of the cycle to test the SOC calculation.

Unfortunately, the results were really bad, as can be seen in the following diagrams:

The above measurements were done at 20°C. At the other temperatures (-5°C, 40°C) the results look the same.

We tried to use the CEDV smoothing feature to get better results, but at the high discharge current (third diagram) it really doesn't work at all; the SOC drops sharply to 7 %.

Does anyone have an idea of what to change to get better results?

Best Regards,

Oliver Wendel

  • There are a few issues:

    1. This gauge is a coulomb-counting gauge, and as such it relies on an FCC that is based on what it measured *in the past*.

    2. Your use cases have vastly different loads.

    3. Initial FCC must be set to the load case that you anticipate will occur next.

    The gauge has no way of knowing what you will do next. So if you just discharged with 500mA, and this was qualified for an FCC update, then the gauge will set FCC for a 500mA use case.

    If you follow this with a 10A (or worse, a 50A) discharge, then the gauge will initially overestimate capacity and count down from this overestimated capacity based on the passed charge (coulomb count). When the cell voltage hits EDV2, it will re-adjust SOC to Learned Low % (default 7%).

    You will get a massive SOC jump if the gauge learned FCC at a vastly lower load (500mA) and you then run a 50A discharge.
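
    To make the jump concrete, here is a minimal sketch of the behavior described above. All numbers (the FCC learned at low load, the usable charge at 50A, and the coulomb-count step) are assumptions for illustration, not measured values:

    ```python
    # Illustrative numbers only: FCC learned at a low load vs. the charge actually
    # deliverable at 50A before the pack voltage reaches EDV2.
    FCC_LEARNED_AH = 9.0      # FCC from a qualified low-current discharge (assumed)
    USABLE_AT_50A_AH = 5.5    # charge delivered at 50A before EDV2 (assumed)
    LEARNED_LOW_PCT = 7.0     # default SOC forced at EDV2 (Learned Low %)

    passed_charge_ah = 0.0
    soc = 100.0
    step_ah = 0.1             # coulomb-count step

    # The gauge counts down from the (overestimated) FCC while the 50A load runs.
    while passed_charge_ah < USABLE_AT_50A_AH:
        passed_charge_ah += step_ah
        soc = 100.0 * (FCC_LEARNED_AH - passed_charge_ah) / FCC_LEARNED_AH

    print(f"SOC just before EDV2 is reached: {soc:.0f}%")   # ~39% with these numbers
    soc = LEARNED_LOW_PCT                                    # EDV2 hit: SOC forced down
    print(f"SOC right after EDV2 detection:  {soc:.0f}%")    # 7% -> the massive jump
    ```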

    There is no way around this other than accepting the jump and keeping the discharge within the parameters for a qualified discharge for FCC updates (see the TRM, https://www.ti.com/lit/ug/sluubf7a/sluubf7a.pdf, section 2.7.3, Capacity Learning (FCC Update) and Qualified Discharge).

    The CEDV configuration that you obtained through GPC only configures the gauge to self-adjust the EDV2/EDV1 thresholds based on load and temperature. It will not allow the gauge to anticipate a massive change in load and hence FCC; it only lets the gauge learn FCC more accurately across different loads and temperatures. A huge load of almost 6C won't really work well, though.
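
    As a rough picture of that self-adjustment: the trip point moves down with higher load and lower temperature. The linear form and every coefficient below are placeholders for illustration only; the real compensation uses the EMF/C0/R0/T0/R1/TC/C1 coefficients produced by GPC, not this equation:

    ```python
    def compensated_edv2(i_load_a, temp_c,
                         edv2_base_v=23.4,          # placeholder threshold for a 12-cell pack
                         r_eff_ohm=0.030,           # placeholder effective pack resistance
                         temp_coeff_v_per_c=0.010): # placeholder temperature coefficient
        """Toy model only: lower the EDV2 trip point for higher load and colder packs."""
        ir_drop = i_load_a * r_eff_ohm
        temp_term = temp_coeff_v_per_c * (temp_c - 20.0)
        return edv2_base_v - ir_drop + temp_term

    print(compensated_edv2(6.0, 20.0))    # light load at room temperature
    print(compensated_edv2(45.0, -5.0))   # heavy load in the cold: much lower trip point
    ```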

    Note that the gauge disables EDV detection if current exceeds the Overload Current threshold. This is 5A by default so neither your 10A nor your 50A use case will allow the gauge to update FCC.
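
    A quick check against the loads in this thread (the helper function is hypothetical; the 5A value is the default Overload Current mentioned above):

    ```python
    OVERLOAD_CURRENT_A = 5.0   # default Overload Current threshold (per the note above)

    def edv_detection_active(discharge_current_a, overload_a=OVERLOAD_CURRENT_A):
        """EDV detection -- and with it capacity learning -- is suppressed while the
        discharge current is above the Overload Current threshold."""
        return discharge_current_a <= overload_a

    for load_a in (0.5, 6.0, 10.0, 45.0):
        state = "active" if edv_detection_active(load_a) else "disabled"
        print(f"{load_a:5.1f} A discharge -> EDV detection {state}")
    ```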

    It looks like this application isn't a good candidate for CEDV. Huge load changes like this will require a way for the gauge to predict capacity, not use past FCC measurements. This looks like Impedance Tracking would be a better fit (though also challenging to configure for extreme load changes like 500mA to 50A).

  • Hi Dominik,

    thank you for your reply.

    Ok, the three loads were only chosen to test the battery management; they will not occur in the real application. In our application we will mostly have a high current pulse followed by a period of low current, as shown in the following picture:

    Do you think the gauge will calculate better SOC values with this load profile, or does it make no difference? The high current pulses will last around 30 s.

    For the GPC cycle, what high load current would you propose?

    Another point to mention is that our application is rarely discharged; this is the reason why we use the BQ34110.

    Best regards,

    Oliver Wendel

  • This is a very difficult case for a gauge. If it's rarely discharged, Impedance Tracking isn't a good fit. If there are extreme pulses, CEDV is problematic.

    CEDV is basically a coulomb counter with a voltage threshold that defines the coulomb count from full when the battery is depleted.

    If you look at your load picture, the gauge doesn't "know" that there will be a massive load spike coming in the future. So it will just coulomb count with the voltage being well above the EDVx thresholds, not taking the future massive load spikes into account.

    Then there's a sudden, short load spike during which the voltage drops massively. If the voltage drops below an EDVx threshold, the gauge will force SOC to the respective limit: by default 7% for EDV2, 3% for EDV1, and 0% for EDV0.
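
    Sketched with made-up numbers (pack resistance, OCV and thresholds are assumptions, and the Overload Current threshold is assumed to have been raised so that EDV detection stays active during the pulse):

    ```python
    # Toy pulse scenario: the coulomb count says 60% remains, then a short 50A
    # pulse pulls the pack voltage below EDV2 and SOC is forced to 7%.
    EDV2_V = 23.4             # placeholder EDV2 threshold for the 24V pack
    OCV_V = 25.0              # placeholder rested voltage at ~60% SOC
    PACK_R_OHM = 0.035        # placeholder pack resistance

    soc_from_coulomb_count = 60.0

    def reported_soc_during_pulse(pulse_current_a):
        v_under_load = OCV_V - pulse_current_a * PACK_R_OHM
        if v_under_load < EDV2_V:
            return 7.0        # EDV2 crossed: SOC forced to the EDV2 limit
        return soc_from_coulomb_count

    print(reported_soc_during_pulse(5.0))    # small load: SOC stays at the counted 60%
    print(reported_soc_during_pulse(50.0))   # 25.0 - 1.75 = 23.25 V < EDV2 -> forced to 7%
    ```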

    I can't think of a CEDV configuration that could reliably catch this, other than reducing FCC to match a constant high-load discharge and disabling FCC learning. This may not be such a bad idea, as the effective charge that you can get out of this battery is determined by the load spikes. I wouldn't want the gauge to report 40% SOC based on an unreasonably high FCC and then have the next spike drive the voltage below an EDVx threshold.

    So my recommendation is to disable FCC learning (and also EDV compensation due to the short high load spikes) and set FCC to a value that reflects a full discharge with the high load.
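
    A possible host-side starting point for that fixed FCC, using example numbers from a bench discharge at the worst-case load (both values are assumptions to be replaced by a real measurement; writing the result into the gauge and disabling learning should follow the data-flash procedures in the TRM, not this sketch):

    ```python
    def fcc_from_high_load_test(discharge_current_a, seconds_to_edv2):
        """Charge actually delivered before the pack reached EDV2 under the
        worst-case pulse load, in Ah."""
        return discharge_current_a * seconds_to_edv2 / 3600.0

    # Example: a constant 45A bench discharge that reached EDV2 after ~400 s
    # (illustrative numbers only).
    fixed_fcc_ah = fcc_from_high_load_test(discharge_current_a=45.0,
                                           seconds_to_edv2=400.0)
    print(f"Fixed FCC to program into the gauge: {fixed_fcc_ah:.2f} Ah")
    ```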

  • Thank you very much.

    We will try this.