A legacy software project I inherited shows erroneous behaviour. When there is no STOP signal, a certain error should be thrown. Currently, detecting the absence of a STOP signal is challenging because, for some reason, the interrupt arrives much later than expected. 16 averaging cycles are configured. So far the code works when a STOP signal is present (verified with my scope).
The registers CLOCK_CNTR_OVF_H and CLOCK_CNTR_OVF_L are written as follows:
void TDC7200::calcMaxWindowLength(float& result)
{
    // Calculates the time window between START and the next START, minus a 20 % margin, in seconds.
    // ...
}
bool TDC7200::setCounterOVF()
{
    float maxWindow;
    calcMaxWindowLength(maxWindow);

    // The external oscillator runs at 16 MHz.
    // CLOCK_PERIOD = 1 / 16e6
    Uint16 clockCntOvf = static_cast<Uint16>(maxWindow / CLOCK_PERIOD);
    Uint8 clockCounterOvfHigh = static_cast<Uint8>(clockCntOvf >> 8);
    Uint8 clockCounterOvfLow = static_cast<Uint8>(clockCntOvf);

    const Uint16 cmdByte1 = 0x46;
    const Uint16 cmdByte2 = 0x47;

    // Write to registers 6 and 7 using SPI
    // ...
}
I would expect an overflow to occur in each cycle, so that after 16 cycles the interrupt fires. However, this is not the case: it takes much longer (approx. 4.1 ms) and many measurement cycles.
Can the TDC7200 even be configured to behave as described above?