
BQ27426: Set User Rate-mA, but RM change rate is not matched with the setting.

Part Number: BQ27426

We set User Rate-mA to 1230 mA (Load Mode = 0, Load Select = 6) and discharged the cell at a constant current of 246 mA for the test.

However, the results suggest that the 1230 mA setting is not contributing to the RM calculation.
Please help confirm:
1. My understanding is that after setting a fixed current of 1230 mA, the gauge will calculate RM in discharge mode from that fixed current and the discharge duration. Is this correct? If not, what is the actual behavior?
2. If #1 is correct, referring to the attached test log, please help identify what is causing the test results to differ from expectations.

0307_1250mA.log

  • #1: Yes, the gauge will use the User Rate for capacity predictions if you set Load Select to 6. Depending on cell capacity, User Rate, and chemistry, this can cause a significant voltage drop across the internal cell resistance, so the predicted cell voltage reaches Terminate Voltage after less charge has passed than it would with a significantly lower User Rate.

    #2: Your log file shows that SOC drops non-linearly (faster at the beginning), indicating that the gauge underestimates capacity (unfiltered FCC is 1095 mAh because of your high User Rate). However, once SOC reaches what TI calls the "fast scale region", the gauge switches Load Select automatically, so FCC jumps up to 1180 mAh and SOC starts to drop more slowly, hitting 0% at Terminate Voltage. See section 7.4.2.1.24, Fast Scale Load Select, in the TRM.
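To illustrate why a high User Rate shrinks the predicted capacity, here is a minimal sketch. This is not the gauge's actual algorithm; the linear OCV curve, internal resistance, and capacity numbers are invented for demonstration only:

```python
# Illustrative sketch only -- NOT the BQ27426 algorithm. A larger assumed
# load current produces a larger I*R sag, so the predicted loaded voltage
# reaches Terminate Voltage while more charge is still in the cell, and the
# predicted usable capacity comes out smaller.

def predicted_fcc(load_ma, q_max_mah=1250, r_int_ohm=0.15,
                  ocv_full_v=4.2, ocv_empty_v=3.0, v_term=3.2):
    """Count charge passed until the loaded voltage hits v_term (toy model)."""
    for q in range(q_max_mah, -1, -1):                 # remaining charge countdown
        soc = q / q_max_mah
        ocv = ocv_empty_v + (ocv_full_v - ocv_empty_v) * soc   # toy linear OCV
        if ocv - (load_ma / 1000.0) * r_int_ohm <= v_term:
            return q_max_mah - q                       # charge passed before cutoff
    return q_max_mah

print(predicted_fcc(1230))   # high User Rate -> smaller predicted capacity
print(predicted_fcc(246))    # lower rate -> larger predicted capacity
```

With these made-up numbers the 1230 mA prediction terminates noticeably earlier than the 246 mA one, which is the same direction as the 1095 mAh vs. 1180 mAh FCC values in the log.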

  • Line 1115: ElapsedTime=4460, RM=554,

    Line 1119: ElapsedTime=4476, RM=553

    Looking at this, RM decreased by 1 mAh over 16 seconds, so the back-projected current the gauge used for the prediction should be 225 mA (1 mAh / 16 s × 3600 s/h).

    Shouldn't this back-projected current be the 1230 mA we set? Why is there such a difference?
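The back-projection in the question is just a unit conversion on the logged RM delta:

```python
# Back-project the effective current from the two log samples quoted above
# (line numbers refer to the attached 0307_1250mA.log).
t1, rm1 = 4460, 554    # ElapsedTime [s], RM [mAh] at log line 1115
t2, rm2 = 4476, 553    # ElapsedTime [s], RM [mAh] at log line 1119

current_ma = (rm1 - rm2) / (t2 - t1) * 3600    # mAh/s -> mA
print(current_ma)                              # 225.0
```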

  • The gauge uses the User Rate only for predictions. Between predictions it uses the coulomb counter, so the 1230 mA has no effect on how fast RM changes between predictions. RM adjusts with the coulomb count between predictions, and it is smoothed (the smoothing is implemented as an algorithm, not a simple linear filter). It therefore depends mostly on the measured current, not the User Rate, and on discontinuities between the last prediction plus coulomb count and the new prediction.
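As a rough sketch of that behavior (an assumed simplified model, ignoring the smoothing and the prediction step itself): between predictions RM just follows the coulomb counter at the measured current, so the 246 mA test load drops RM by about 1 mAh per 16 s, close to what the log shows:

```python
# Simplified model (assumption, not TI's implementation): between
# predictions, RM follows the coulomb counter at the *measured* current.
def coulomb_count_step(rm_mah, measured_ma, dt_s):
    return rm_mah - measured_ma * dt_s / 3600.0   # subtract passed charge

rm = 554.0                  # mAh, RM right after a prediction (log line 1115)
for _ in range(16):         # 16 one-second samples at the measured 246 mA
    rm = coulomb_count_step(rm, 246.0, 1.0)
print(round(rm))            # ~553 mAh, matching log line 1119
```

The gap between the 246 mA measured load and the 225 mA back-projected above is consistent with the smoothing the gauge applies on top of the raw coulomb count.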