I used an interrupt on timer2 to create a 1ms tick counter in my application, and everything works, but one parameter leaves me puzzled. I tried all three of the timer emulation modes (TIMER_EmulationMode_StopAfterNextDecrement, TIMER_EmulationMode_StopAtZero, TIMER_EmulationMode_RunFree), and all of them seem to result in exactly the same behavior. So I'm wondering: what is the difference between them, both in how they function and in their practical implications?
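For reference, my setup looks roughly like this (a sketch from memory, not a copy of my actual code; the helper function names like `TIMER_setPeriod` and the 60 MHz clock assumption may not match your driverlib version exactly, only the `TIMER_EmulationMode_*` values are the ones I'm asking about):

```c
volatile uint32_t msTicks = 0;

// Timer2 ISR: fires once per millisecond
interrupt void timer2ISR(void)
{
    msTicks++;
}

void initTickTimer(TIMER_Handle myTimer)
{
    TIMER_stop(myTimer);
    TIMER_setPeriod(myTimer, 60000);      // assuming 60 MHz: 60000 counts = 1 ms
    TIMER_setPreScaler(myTimer, 0);
    // This is the parameter in question -- all three values behave the same for me:
    TIMER_setEmulationMode(myTimer, TIMER_EmulationMode_RunFree);
    TIMER_enableInt(myTimer);
    TIMER_start(myTimer);
}
```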
For example, does TIMER_EmulationMode_StopAtZero wait until the interrupt is cleared before the timer resumes operation? Would that mean that if the ISR is long, or other interrupts are being serviced when it fires, the timer could become imprecise? And would TIMER_EmulationMode_RunFree then risk losing ticks under those same conditions?