I am simulating an audio-range state-variable filter, using OPA827 op-amps and THAT 2180C voltage-controlled amplifiers. A transient analysis (5 ms-50 ms) produces time steps that are far too large to simulate accurately, and my decaying sine waves look like they're constructed of just a few linear segments. Other symptoms include DC convergence failure and spurious oscillations: some rise slowly to the supply rails or decay back to zero merely from changing the oscilloscope horizontal scale (time resolution), with no circuit changes, or a small (10 mV) parasitic oscillation may appear and disappear, likewise controlled by the oscilloscope scaling.
If I reduce the maximum time step (TR maximum time step) from its default of 10 gigaseconds to 100 ns or less, transient simulations work correctly (at least the circuit behaves like a theoretical state-variable filter, complete with Q enhancement at high frequencies and other typical misbehaviors), but nearly every time step (visible in the progress dialog) is 100 ns, with only the occasional shorter one. Somewhere around a 10 µs maximum step, the simulation starts to fail at high frequencies and/or high Qs.
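For readers more comfortable with raw SPICE decks than TINA's analysis dialog, the workaround above corresponds roughly to capping TMAX on the `.TRAN` card; the exact mapping to TINA's internal parameter names is an assumption on my part:

```spice
* .TRAN TSTEP TSTOP <TSTART <TMAX>>
* Print step 1 us, run to 50 ms, start recording at 5 ms,
* and cap the internal time step at 100 ns (the workaround).
.TRAN 1u 50m 5m 100n
```

With TMAX capped this way the integrator never gets the chance to take the huge steps that flatten the decaying sinusoids, at the cost of the long run times described below.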
Once, after radically altering a number of other transient-analysis parameters, I got similar but poorer results: Tina varied the step size more effectively, though still not well enough. I didn't save that parameter set, as I was thrashing more or less randomly at the time.
In both cases, the simulation took absurdly long compared to the default parameters.
I have tried all the other provided parameter sets, and none make any significant difference.
What I would like is to understand how Tina chooses the automatic step size, well enough to tune it to be sensitive enough for this circuit and similar ones. Failing that, I would settle for a magic number to tweak that works better than limiting the maximum time step.
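For context on what such a magic number might be: in Berkeley-SPICE terms (TINA's engine is SPICE-derived, but whether and where it exposes these options is an assumption), the knobs that govern automatic step selection via local truncation error are roughly these:

```spice
* Standard Berkeley-SPICE options that drive automatic step-size
* selection; exposure of these in TINA is an assumption.
*
* TRTOL: truncation-error overestimation factor (default 7).
* Lowering it makes the LTE-based step control more conservative,
* so the simulator shrinks steps sooner on fast-moving waveforms.
.OPTIONS TRTOL=1
*
* RELTOL: relative accuracy target for V and I (default 1e-3).
* Tightening it forces smaller accepted steps everywhere.
.OPTIONS RELTOL=1e-4
*
* Gear (BDF) integration damps the trapezoidal-rule ringing that
* can masquerade as a small parasitic oscillation.
.OPTIONS METHOD=GEAR
```

If Tina honors equivalents of TRTOL or RELTOL, either would act on the step-size decision itself rather than just clamping the maximum step, which is the behavior I'm after.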
The .tsc file follows.