BQ4050: CEDV vs IT

Part Number: BQ4050
Other Parts Discussed in Thread: BQ40Z50

In a previous response to the question about choosing CEDV vs IT, it was noted that Impedance Track is not ideal for "highly pulsed loads and applications that do not allow rest periods ... [and] rarely discharged applications because it needs to take voltage measurements at different capacities to update the impedance occasionally". We are currently deciding between the BQ4050 and BQ40Z50 for a device that will usually be placed into a charging cradle between uses. The description above would seem to favor the BQ4050 for this application, but we would like to better understand the background behind these statements. Is there any further detail available to help us see how those differences will apply to our situation? Thanks in advance.

  • Hello Travis,

    I would recommend using Impedance Track when possible, but we would need to know more details about your system to give better suggestions. Do you know what your typical use case is? What battery chemistry will be used?

    CEDV can be better for gauging LFP chemistries and pulsed loads since it is mostly based on coulomb counting, whereas Impedance Track is a more complex algorithm that is better suited for applications that occasionally allow a rest period and do not have highly dynamic voltage and current.

    CEDV will not have as high an accuracy, and it will have a harder time accounting for battery aging, which Impedance Track does very well.

    Sincerely,

    Wyatt Keller

  • Hi Wyatt,

    Thank you for the response. Our particular application is a 2-cell LiPo battery. The typical use case is that the device is always on, but it sits in a charging cradle for ~22 hours a day in a low-power mode; the other ~2 hours are spread across intermittent higher-power uses of 5 - 10 minutes each during the work day. There will also be plenty of occasions where it is left off the cradle between sessions, but that shouldn't be seen as the norm. We have the ability to programmatically disconnect the charge supply from the battery as needed, which could be used to avoid the constant application of the charge supply or to force a discharge cycle.

    The comment about Impedance Track not being ideal for applications that do not allow rest periods particularly caught our attention, and we would like to better understand how that concern applies to our use case. Thanks.

  • Hello Travis,

    For your application I would recommend Impedance Track; as long as you get a Qmax update every 5-10 cycles, accuracy will remain good. It sounds like there will be some relaxation periods for your system between charge and discharge, which is what is required for a Qmax update.

    The TRM will have the most information about the conditions for a Qmax update; see, for example, section 6.4.2, QMax Update Conditions, in the BQ40Z50-R4 TRM:

    https://www.ti.com/lit/pdf/sluuch2

    We also have other features, such as Fast Qmax update, which does not require as long of a rest between cycles to take a valid OCV and qualify for a Qmax update.
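
    If it helps, here is a rough host-side sketch of checking those qualification bits over SMBus. This is only an illustration, not production code: the ManufacturerBlockAccess() flow and the GaugingStatus() command word below are based on my reading of the TRM, and the REST/VOK bit masks are placeholders that you should confirm against the GaugingStatus() description for your firmware version.

      # Sketch: poll GaugingStatus() to see whether the pack is resting and a
      # valid OCV has been taken (part of Qmax update qualification).
      from smbus2 import SMBus

      GAUGE_ADDR = 0x0B            # standard Smart Battery device address
      MAC_BLOCK_CMD = 0x44         # ManufacturerBlockAccess()
      GAUGING_STATUS_CMD = 0x0056  # GaugingStatus() MAC command (confirm in the TRM)

      # Placeholder bit masks -- check the actual REST and VOK bit positions in the TRM.
      REST_MASK = 1 << 0
      VOK_MASK = 1 << 1

      def read_gauging_status(bus):
          """Write the MAC command word, then block-read the response."""
          bus.write_block_data(GAUGE_ADDR, MAC_BLOCK_CMD,
                               [GAUGING_STATUS_CMD & 0xFF, (GAUGING_STATUS_CMD >> 8) & 0xFF])
          data = bus.read_block_data(GAUGE_ADDR, MAC_BLOCK_CMD)
          return int.from_bytes(bytes(data[2:]), "little")  # first two bytes echo the command

      with SMBus(1) as bus:
          status = read_gauging_status(bus)
          print("REST =", bool(status & REST_MASK), "VOK =", bool(status & VOK_MASK))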

    Sincerely,

    Wyatt Keller

  • Hi Wyatt,

    Thanks again. After looking at the Qmax description, I want to follow up on the disqualifying condition that the "capacity change between suitable battery rest periods is less than 37%". Each use of our device will not reach this threshold, and in fact each use is unlikely to even hit the 10% default discharge threshold for incrementing the cycle count (which then informs 'Max Error'). It seems very likely that if users routinely place the device back on the charging cradle after every use, then we will have to programmatically disconnect the charging supply according to some scheme. What do you recommend as the best strategy with regard to gas gauge accuracy and battery health? I see three possibilities:
    1. Always charge the battery when placed in the cradle and if the device is never discharged to the necessary thresholds during normal use, then initiate a forced discharge \ recharge cycle after N uses (or days).
    2. Always prevent the charger from charging until the device has fallen to at least 63% SOC (QMax threshold).
    3. Prevent the charger from charging unless the battery has reached the cycle count threshold (90% SOC) since it was last fully charged, and then monitor 'Max Error' and initiate a forced discharge \ recharge cycle if 'Max Error' exceeds X% (a rough sketch of this is included below).

    I suspect that #1 offers the most intuitive user experience but is worst for accuracy \ battery health, #2 is probably best for accuracy \ battery health but the worst user experience, and #3 is a compromise between the two. Are there any alternate schemes that are better? What would you recommend?
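
    For clarity, here is a rough sketch of the gating logic we have in mind for #3. The SBS command codes (RelativeStateOfCharge 0x0D, MaxError 0x0C) are the standard ones; the charger control and conditioning-cycle hooks are placeholders for our own firmware.

      # Sketch of scheme #3: allow charging only after the pack has dipped below the
      # cycle count threshold since it was last fully charged, and watch Max Error.
      from smbus2 import SMBus

      GAUGE_ADDR = 0x0B        # standard Smart Battery device address
      RSOC_CMD = 0x0D          # RelativeStateOfCharge(), percent
      MAX_ERROR_CMD = 0x0C     # MaxError(), percent

      CYCLE_COUNT_SOC = 90     # default Cycle Count Percentage
      MAX_ERROR_LIMIT = 5      # the "X%" above

      def set_charger_enabled(on):
          """Placeholder: connect/disconnect the charge supply."""
          pass

      def request_conditioning_cycle():
          """Placeholder: schedule a forced discharge/recharge cycle."""
          pass

      class CradleChargePolicy:
          def __init__(self):
              self.dipped_since_full = False

          def update(self, bus):
              rsoc = bus.read_word_data(GAUGE_ADDR, RSOC_CMD)
              if rsoc < CYCLE_COUNT_SOC:
                  self.dipped_since_full = True
              elif rsoc >= 100:
                  self.dipped_since_full = False
              set_charger_enabled(self.dipped_since_full)
              if bus.read_word_data(GAUGE_ADDR, MAX_ERROR_CMD) >= MAX_ERROR_LIMIT:
                  request_conditioning_cycle()

      with SMBus(1) as bus:
          policy = CradleChargePolicy()
          policy.update(bus)  # call periodically and whenever the device is cradled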

    Thanks in advance.

  • Hello Travis,

    I would recommend the first method. The reason is that we have the MaxError parameter, which can be used to determine whether a "conditioning cycle" is needed. For example, once MaxError reaches 5%, discharge by more than 37%, let the gauge take an OCV measurement (by checking the REST and VOK bits), charge the battery back to 100%, and let the gauge take another OCV to update Qmax. This should be all the gauge needs to maintain accuracy (for Impedance Track).
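
    As a rough illustration of that sequence (not production code), something like the following could run on the host. The SBS command codes for MaxError() (0x0C) and RelativeStateOfCharge() (0x0D) are standard; the charger and load controls are placeholders for your system, and fixed rest delays are used here for brevity instead of polling the REST and VOK bits.

      # Sketch of a host-driven conditioning cycle: discharge >37%, rest for an OCV,
      # recharge to 100%, then rest again so the gauge can update Qmax.
      import time
      from smbus2 import SMBus

      GAUGE_ADDR = 0x0B        # standard Smart Battery device address
      MAX_ERROR_CMD = 0x0C     # MaxError(), percent
      RSOC_CMD = 0x0D          # RelativeStateOfCharge(), percent

      def enable_charger(on):
          """Placeholder: connect/disconnect the charge supply."""
          pass

      def enable_load(on):
          """Placeholder: enable/disable a discharge load."""
          pass

      def conditioning_cycle(bus):
          enable_charger(False)
          enable_load(True)
          while bus.read_word_data(GAUGE_ADDR, RSOC_CMD) > 60:   # drop past the 37% delta
              time.sleep(60)
          enable_load(False)
          time.sleep(2 * 3600)      # rest so the gauge can take a valid OCV (or poll REST/VOK)
          enable_charger(True)
          while bus.read_word_data(GAUGE_ADDR, RSOC_CMD) < 100:
              time.sleep(60)
          enable_charger(False)
          time.sleep(2 * 3600)      # second rest/OCV lets the gauge update Qmax

      with SMBus(1) as bus:
          if bus.read_word_data(GAUGE_ADDR, MAX_ERROR_CMD) >= 5:  # conditioning threshold
              conditioning_cycle(bus)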

    Sincerely,

    Wyatt Keller

  • Hi Wyatt,

    Just to confirm the recommendation of #1 vs #3 - the TRM indicates that 'Max Error' is incremented by 0.05% (the default Cycle Delta) under two conditions: once every 24 hours (the default Time Cycle Equivalent), and every time CycleCount increments, which only happens if the SOC falls below 90% (the default Cycle Count Percentage). By recommending option #1, are you saying that repeated discharges to something like 95% will not result in an inaccurate 'Max Error'? By my reading, if we assume users always put the device back on the charging cradle, then Max Error would reach 5% every 100 days, whereas if we allowed it to fall below 90% (maybe averaging 2.5x per day) it would reach 5% every 28 days.
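
    For reference, the arithmetic behind those figures (assuming the default Cycle Delta of 0.05% per increment) is:

      # Quick check of the Max Error accumulation estimates above.
      CYCLE_DELTA = 0.05                        # % added to Max Error per increment (default)
      INCREMENTS_TO_5_PCT = 5.0 / CYCLE_DELTA   # 100 increments needed to reach 5%

      # Device always returned to the cradle: only the 24-hour timer fires.
      days_time_only = INCREMENTS_TO_5_PCT / 1.0            # 100 days

      # Allowing ~2.5 cycle-count increments per day plus the daily timer increment.
      days_with_cycles = INCREMENTS_TO_5_PCT / (2.5 + 1.0)  # ~28.6 days

      print(days_time_only, days_with_cycles)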

    I'll also note that we have been advised that battery health may be adversely affected if it is repeatedly recharged after minimal discharge. Does TI have any insight into that concern?

    Thanks again.

  • Hello Travis,

    You can configure the 24-hour period or the discharge percentage needed to increment the cycle count to better fit your application. It sounds like you would rely mostly on the time-based increments of MaxError, which should be fine.

    It is damaging to batteries to remain at high SOC values for prolonged times. I do not have any experience with long-term testing of short discharges, so I cannot say whether they reduce the SOH more rapidly than simply remaining at 100% SOC for that time period. It is always better for battery health to use full charge and discharge cycles throughout the battery's lifetime.

    Sincerely,

    Wyatt Keller

  • Thanks Wyatt. I don't think I have any further questions at this time. Appreciate all your help.