Bad behaviour occurring some time after Rs recalibration

In our application we need to run Rs recalibration from time to time due to large variations in temperature. After running the Rs recalibration and getting the expected result, even when the new value is very close to the existing one (in the lab with no temperature variation), the motor starts behaving badly after a while. It runs fine for the first couple of short runs (typically 720 deg per run, with control disabled between runs), then degrades progressively over the next few runs with erratic current draw and sputtering, getting worse and worse until it is very soon unusable. We've done a number of systematic checks to confirm this pattern. Does anyone have a clue as to what might be wrong?

Rgds,
Stein

  • 720 degrees? So two revolutions? With sensorless velocity control? I don't quite understand that part of your post...

    A couple of things come to mind:
    1. Is the motor heating up quickly, making Rs change dramatically during runs? If so, you should consider proj_lab07, which shows you how to run Rs recalibration while the motor is running.
    2. Perhaps it is something to do with the stop/start between runs. Maybe the controllers aren't getting reset, so they hold their last value and on subsequent start-ups you begin in a poorly controlled state. This could cause excess current draw and increase Rs further. (See the sketch below.)
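
    As a rough illustration of point 2, this is the kind of reset I mean. It is only a sketch with placeholder names (pi_t, pi_reset and run_stop are not MotorWare API); the point is simply that any state your controllers accumulate should be cleared every time a run ends.

        /* Placeholder PI structure; in practice this is whatever state your
           speed/position/current loops keep between iterations. */
        typedef struct
        {
            float kp;
            float ki;
            float integrator;   /* accumulated error term */
            float out;          /* last output */
        } pi_t;

        static void pi_reset(pi_t *pi)
        {
            pi->integrator = 0.0f;  /* drop anything accumulated in the previous run */
            pi->out = 0.0f;
        }

        /* Call this every time a 360/720 deg run finishes, so the next
           start-up begins from a clean, well-controlled state. */
        static void run_stop(pi_t *speed_pi, pi_t *pos_pi)
        {
            pi_reset(speed_pi);
            pi_reset(pos_pi);
        }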
  • Hi, some more details on our setup. It's a BLDC motor run sensorless with FOC. We have our own speed and position loops (which are very simple and preserve no state whatsoever between consecutive "runs" of the motor). The motor is typically run in numerous short increments of about 360-720 deg. There's no need for continuous Rs monitoring, since the temperature won't change much without us having the opportunity to recalibrate (the temperature variations come mainly from the environment).

    Regarding point 2: the thing is that we can run just fine this way through long test sequences, either loading parameters from NVM or running the full identification first, with variable loads, and it all works very well, until we do an Rs recal. Even if the result is close to the original value, the motor will then run fine a couple of times before starting to degrade as described. As we don't care about Rs other than making sure the TI library has the correct value, we cannot at this point see any other explanation than that some unfortunate state has been reached within the TI library, through improper use or otherwise. We're a bit stumped on how to diagnose this further, though...
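
    To make the flow concrete, a typical test sequence looks roughly like this (every name below is a placeholder for our own application code, not a TI function):

        /* Illustrative outline of our test flow only. */
        extern void load_params_from_nvm_or_run_full_id(void);
        extern int  temperature_has_drifted(void);
        extern void run_rs_recalibration(void);     /* the step that triggers the problem */
        extern void enable_control(void);
        extern void move_degrees(int deg);
        extern void disable_control(void);

        void test_sequence(void)
        {
            load_params_from_nvm_or_run_full_id();  /* both variants work fine */

            for (int run = 0; run < 50; run++)      /* many short runs */
            {
                if (temperature_has_drifted())
                    run_rs_recalibration();         /* after this the degradation starts */

                enable_control();
                move_degrees(720);                  /* approx. 360-720 deg per run */
                disable_control();                  /* control is off between runs */
            }
        }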
  • You can't run position control sensorless. The rotor has to have some minimum speed for FAST to lock on and track sensorlessly, typically in the 1-5 Hz range. Even if you disable ForceAngle, there would be large amounts of time in a 360 or 720 degree movement where the angle estimate was very poor. Plot the angle estimate coming from FAST during your movements and you will see; a sketch of how to capture it follows below.
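
    Something like this in your control ISR will let you capture it for plotting afterwards. EST_getAngle_pu() is the accessor I remember from est.h, so verify the exact name, signature and include path in your MotorWare version; the buffering is only illustrative.

        #include <stdint.h>
        #include "sw/modules/est/src/32b/est.h"     /* adjust the path to your project */

        #define ANGLE_LOG_LEN  1024

        static _iq      gAngleLog[ANGLE_LOG_LEN];   /* filled once, then read out and plotted */
        static uint16_t gAngleLogIdx = 0;

        /* Call once per control ISR while a movement is in progress. */
        void logAngleEst(EST_Handle estHandle)
        {
            if (gAngleLogIdx < ANGLE_LOG_LEN)
            {
                gAngleLog[gAngleLogIdx++] = EST_getAngle_pu(estHandle);  /* per-unit electrical angle */
            }
        }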
  • We're running well above the minimum speed, and we've measured the inaccuracy caused during startup with ForceAngle. It is fairly large, about 30 deg worst case, but not significant for us, since there's a resolver further downstream, behind gears with a very large ratio, so we basically just need to position approximately.
  • So, to summarize:
    you only see the problem when you enable RsRecal before a start-up?
    if you just load the parameters from user.h and start your tests, it always behaves correctly?

    How much current are you using for USER_MOTOR_RES_EST_CURRENT vs. USER_MOTOR_MAX_CURRENT?

    Do you always enable or disable the OffsetCal flag?
  • Yes, the problem only occurs after we've run RsRecal, and it persists until we cycle power; simply stopping and starting doesn't help. Loading from user.h or running the full identification at startup both work fine. USER_MOTOR_RES_EST_CURRENT is 1 A and USER_MOTOR_MAX_CURRENT is 20 A. I do the offset calibration whenever the full identification is run, and whenever the Rs recal is done.
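
    For reference, this is roughly what that looks like on our side. The user.h values are ours; the flag names are what I remember from the MotorWare lab MOTOR_Vars_t structure, so treat them as approximate and check them against your project's main.h.

        /* Excerpt from user.h (our values). */
        #define USER_MOTOR_RES_EST_CURRENT   (1.0)     /* A, used for Rs estimation/recal */
        #define USER_MOTOR_MAX_CURRENT       (20.0)    /* A */

        #include "main.h"   /* the lab project's main.h, which defines MOTOR_Vars_t */

        /* How we request a recalibration (flag names from memory of the lab code). */
        void requestRsRecal(MOTOR_Vars_t *pMotorVars)
        {
            pMotorVars->Flag_enableOffsetcalc = true;   /* redo the ADC offset calibration           */
            pMotorVars->Flag_enableRsRecalc   = true;   /* redo Rs only, not the full identification */
            pMotorVars->Flag_Run_Identify     = true;   /* start the estimator/controller            */
        }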
  • I feel like we are missing something about the way you are doing your testing.

    Running the RsRecal only injects a DC current of RES_EST_CURRENT and then uses the resulting Rs as an input to the EST model.

    The Rs and Ls are then used to set the default speed controller gains. I assume you have tested and tuned these gains... is it possible that in your code you set the tuned gains for normal operation, but when doing the RsRecal you have missed some logic, which leaves the gains at their default values? (There is a sketch of what I mean at the end of this post.)

    That's all I can think of...nothing like this has ever been reported before.

    My other thought was that you were using so much current for RES_EST_CURRENT that it was quickly heating the motor and producing a larger Rs value. Then, once you had run a few times, the value dropped back to "normal" levels, and since you are running at a relatively low frequency, the angle estimation was affected. But you are only using 1 A on a 20 A motor, so this should have no effect.

    What are you doing with the ForceAngle flag at start-up and run-time?

    Can you explain your motion profile a bit more? What are the speed & acceleration commands that you are sending in a typical test?
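
    To make the gain point concrete, here is a sketch of what I mean: re-apply your tuned gains whenever a recalibration has completed, the same way you presumably already do after the full identification. CTRL_setKp()/CTRL_setKi() and the CTRL_Type_PID_spd enum are what I remember from ctrl.h, so verify them against your MotorWare version; the gain values are placeholders, not recommendations.

        #include "sw/modules/ctrl/src/32b/ctrl.h"   /* adjust the include path to your project */

        /* Re-apply the application's tuned gains after any (re)calibration,
           so an RsRecal cannot leave the library's freshly computed defaults
           in place. All gain values below are placeholders. */
        static void applyTunedGains(CTRL_Handle ctrlHandle)
        {
            /* Speed loop; do the same for CTRL_Type_PID_Id / CTRL_Type_PID_Iq
               if you have also retuned the current loops. */
            CTRL_setKp(ctrlHandle, CTRL_Type_PID_spd, _IQ(2.0));    /* placeholder Kp */
            CTRL_setKi(ctrlHandle, CTRL_Type_PID_spd, _IQ(0.02));   /* placeholder Ki */
        }

    Call it from your main loop as soon as you detect that the RsRecal has finished (however you detect that today), and see whether the degradation still appears.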