Hi TI,
I found that the time offset of the TDC7200 between the input time interval and the measured result is not constant in Mode 1, as shown below.
| Input time interval (ns) | 12 | 50 | 100 | 200 | 500 | 1000 | 1500 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Measured result (ns) | 11.81 | 49.76 | 99.66 | 199.52 | 499.3 | 999.2 | 1499.2 |
| Offset (ps) | 190 | 240 | 340 | 480 | 700 | 800 | 800 |
| Offset deviation (ps) | 610 (from 190 ps at 12 ns to 800 ps at 1500 ns) | | | | | | |
The deviation of the offset is as large as 610 ps over the range from 12 ns to 1.5 us, which could induce a large measurement error even though the precision is ~±50 ps.
Why does this happen? I have measured three TDC7200 chips, and the offsets all drift by several hundred ps.
I'm sure the chips are properly configured and the input time intervals are accurate (the input time intervals were verified with a high-speed oscilloscope).
Looking forward to your reply.
Hello Kenney,
Thanks for your question and for posting to the PS forum! We appreciate all the detailed information provided, just a couple of questions:
1. Can you provide the register setting for the TDC7200?
2. And just to verify: were these measurements made on the EVM or on a custom board?
3. If measurements were made on your own board, what is the external clock frequency? Keep in mind that the accuracy of the device heavily depends on a stable clock being provided to the CLOCK pin. Using a 16MHz clock generally will provide the lowest deviation between measurements, but the clock's stability is also important. The clock is used to calibrate the internal time base that is used to make the measurements.
And just as a quick note, Mode 1 is not recommended for measurements above 500ns; when making long measurements like this, Mode 2 is typically the recommended setting. I understand you also might need to make quick measurements in your application, in which case Mode 1 would be the way to go, but you might have to sacrifice accuracy at the longer ranges.
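As background, the way the calibration and clock feed into the result can be sketched roughly as below. This is a minimal Python sketch of my reading of the datasheet's ToF equations, not production code; the function and variable names are mine, not actual register names.

```python
def norm_lsb(clock_period_s, calibration1, calibration2, cal2_periods):
    """Normalized LSB derived from the calibration registers."""
    cal_count = (calibration2 - calibration1) / (cal2_periods - 1)
    return clock_period_s / cal_count

def tof_mode1(time1, norm_lsb_s):
    """Mode 1: ToF is the TIME1 count scaled by the calibrated LSB."""
    return time1 * norm_lsb_s

def tof_mode2(time1, time2, clock_count1, clock_period_s, norm_lsb_s):
    """Mode 2: coarse CLOCK_COUNT1 periods plus fine TIME1/TIME2 corrections."""
    return norm_lsb_s * (time1 - time2) + clock_count1 * clock_period_s

# Example: an 8 MHz clock (125 ns period) and a calibration count of 2139
# gives an LSB of roughly 58.4 ps, in line with typical NORM_LSB values.
lsb = norm_lsb(125e-9, 2000, 21251, 10)
tof = tof_mode1(1705, lsb)  # about 99.6 ns
```

This also shows why the reference clock matters even for short measurements: it sets the scale (NORM_LSB) that every TIME count is multiplied by.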
Best,
Isaac
Hi Isaac,
1. In my setup, the registers of TDC7200 had all been configured with their default values.
2. The measurements were made on my own board, but the external oscillator is the same part as on the TDC7200EVM, i.e. the ASFLMB-8.000MHZ-LY-T from Abracon Corporation. I'm sure the reference clock is quite clean, as there is no other source of interference on the board. I'm also sure that the external decoupling capacitor of the TDC7200 is properly selected.
3. According to the TDC7200 datasheet, I think the frequency and jitter of the external reference clock only affect the precision (standard deviation) of the measurement results; they have nothing to do with the drift of the path delay. I think the drift may be induced by an unstable frequency of the internal high-frequency ring oscillator. The measured values of TIME1 and NORM_LSB for input time intervals of 100 ns and 1 us are shown below.
| Input time interval (ns) | 100 | 1000 |
| --- | --- | --- |
| TIME1 | 1705 | 17080 |
| NORM_LSB (ns) | 0.05843 | 0.05849 |
| CH1_TDC (ns) | 99.62315 | 999.0092 |
| Offset (ps) | 376.85 | 990.8 |
| △offset (ps) | 613.95 | |
From the table above we can see a slight difference in NORM_LSB between the two input time intervals: 0.05849 ns - 0.05843 ns = 0.06 ps. Though the difference per LSB is really small, the TIME1 count is very large, so it can cause a large error in the measured result. For example, from the table above, the drift of the offset can be roughly estimated as 0.06 ps * (17080 - 1705) = 923 ps, which is on the same order as the measured 614 ps.
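The arithmetic above can be reproduced in a few lines, purely for illustration (all values are taken from my table):

```python
# A tiny NORM_LSB drift, multiplied by a large TIME1 count,
# turns into a large offset drift.
norm_lsb_100ns = 0.05843e-9   # NORM_LSB at 100 ns input interval, in seconds
norm_lsb_1us = 0.05849e-9     # NORM_LSB at 1000 ns input interval
time1_100ns, time1_1us = 1705, 17080

lsb_drift = norm_lsb_1us - norm_lsb_100ns               # ~0.06 ps per count
offset_drift_ps = lsb_drift * (time1_1us - time1_100ns) * 1e12
print(offset_drift_ps)  # ~923 ps
```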
So I think the reason may be that the frequency of the internal ring oscillator is relatively high when it begins to oscillate and decreases over time; after a while (maybe ~1 us, according to the measurement results), the frequency seems to stabilize, but by then the error has already accumulated. If my inference is correct, this may be a bug of the TDC7200, as the drift of the relative error between the input time interval and the measured result should always be as low as possible so that it does not deteriorate the stated accuracy (~±50 ps) of the measurement. Otherwise, could you please tell me how to reduce this kind of drift (down to <100 ps, from 50 ns to 2 us) by configuring the registers or adding other parts to the board?
Thanks very much for your previous reply, and looking forward to your next one.
Hello Kenney,
I was unable to get to your post today. I will review this on Monday in order to provide any feedback. We appreciate the patience!
Best,
Isaac
Hey Kenney,
I appreciate your patience and thanks for the detailed response. Just to make sure I understand correctly: you are trying to point out that there is not much deviation between measurements, but simply a steady offset between them?
As mentioned, having a good reference clock will help maintain good precision between measurements, but a steady offset in your measurement would likely not be affected by it. Have you tried increasing the number of calibration periods to check whether this changes your measurement in any way? If the offset is truly steady I don't think it will make a difference, but it would be best to check.
My next thought would be to see whether you have attempted these tests on a different board, to ensure the problem can be replicated from board to board. Items such as trace lengths, layer stack-up, or extra parasitic capacitance on your board could explain why there is a constant delta on a specific board. You could try the measurements on a different board, such as the EVM: if the delta disappears or changes with a different board, then we might assume that something in the layout is causing it. If the delta stays the same, then it would be safe to assume it is inherent to the device.
Best,
Isaac
Hi Isaac,
Thanks very much for your reply; however, you did not fully understand my question. I was trying to point out that the offset between measurements is not steady enough, and it is too large to be acceptable.
| Input time interval (ns) | 100 | 1000 |
| --- | --- | --- |
| CH1_TDC (ns) | 99.62315 | 999.0092 |
| Offset (ps) | 376.85 | 990.8 |
| △offset (ps) | 990.8 - 376.85 = 613.95 | |
As you have mentioned, if the offset were steady enough (e.g. △offset < 50 ps), it would not affect the measurement results, as the offset could be compensated in the calculation. However, as shown in the table, the △offset is as large as 614 ps, much larger than the precision of the TDC7200 (~50 ps), which means an error of 614 ps would be introduced if we treated the offset as steady. The offset could be compensated with a look-up table (LUT) recording each input time interval and the corresponding offset; however, since the △offset varies from chip to chip, and may be affected by temperature as well, LUT-based compensation does not seem practical.
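For completeness, the LUT idea I mean could be sketched as below. This is a hypothetical Python sketch using my measured data as calibration points with linear interpolation between them; as said, it would need per-chip (and likely per-temperature) calibration, which is why I don't consider it practical.

```python
import bisect

# Hypothetical per-chip calibration table: (input interval in ns -> offset in ps),
# taken from my first set of measurements. Each chip would need its own table.
CAL_INTERVALS_NS = [12, 50, 100, 200, 500, 1000, 1500]
CAL_OFFSETS_PS = [190, 240, 340, 480, 700, 800, 800]

def compensate(measured_ns):
    """Estimate the offset at this measurement by linear interpolation
    and add it back to the raw TDC result."""
    i = bisect.bisect_left(CAL_INTERVALS_NS, measured_ns)
    if i == 0:
        offset_ps = CAL_OFFSETS_PS[0]          # clamp below the table
    elif i == len(CAL_INTERVALS_NS):
        offset_ps = CAL_OFFSETS_PS[-1]         # clamp above the table
    else:
        x0, x1 = CAL_INTERVALS_NS[i - 1], CAL_INTERVALS_NS[i]
        y0, y1 = CAL_OFFSETS_PS[i - 1], CAL_OFFSETS_PS[i]
        offset_ps = y0 + (y1 - y0) * (measured_ns - x0) / (x1 - x0)
    return measured_ns + offset_ps / 1000.0    # ps -> ns

print(compensate(99.66))  # a raw 99.66 ns reading is corrected to ~100 ns
```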
Different numbers of calibration periods have been tried, but they have little influence on the △offset.
From my point of view, the offset itself may be affected by board parasitics and the input delay of the device, but the offset should be steady and should not vary with the input time interval, since the board parasitics and input delays have nothing to do with the interval being measured. However, from the measurement results, the offset is not steady and varies considerably across input time intervals.
As mentioned in my previous reply, I think the reason might be that the frequency of the internal ring oscillator of the TDC7200 is not stable. The frequency may be relatively high when the ring oscillator begins to oscillate and then decrease over time; after a while (maybe ~1 us, according to the measurement results), the frequency seems to stabilize, but by then the error has already been introduced.
As the measurement results of the TDC7200 at different input time intervals are not given in the datasheet, please share TI's measurement report if possible. I am quite sure this problem is something like a bug of the TDC7200, and I would appreciate it if you could check it with the designers of the device.
Thanks again for your time.
Hello Kenney,
Thanks for the clarification. I think we both misunderstood which offset we were speaking about: you are focusing on the △offset between measurements, but my question was regarding the offset of each individual measurement.
For example, in your first set of data you listed a 100 ns measurement with a 340 ps offset, and in your next set of data another 100 ns measurement with a 376.85 ps offset; the delta between those two measurements is ~36.85 ps.
As another example, your first 1000 ns measurement had an offset of 800 ps and your second an offset of 990.8 ps, a delta of ~190.8 ps.
My question was about how consistent the offset is for the same measurement: if you repeatedly measure 100 ns, what deviation do you see in that offset? I am not sure how the delta between two different intervals is being used in your application, but if we can establish that there is a steady offset for each interval, then you should be able to remove that offset from every measurement to obtain a more accurate result. As your measurements get longer the offset value grows, which makes sense, because the device loses accuracy as the time increases in Mode 1. Just to mention again, Mode 1 is not recommended for measurements above 500 ns, so if the offsets beyond 500 ns are not steady, the datasheet does warn of higher deviation between measurements the longer your measurements are.
Looking forward to your response!
Best,
Isaac
Hello Isaac,
Thanks for your detailed explanation. We hope the △offset can be as low as possible (<100 ps at least) to get accurate results for input time intervals from 12 ns to 1500 ns, even though the standard deviation increases with the input time interval in Mode 1. For a TDC chip, I think the △offset should be low (at least on the same level as the standard deviation, e.g. 200 ps @ 1500 ns, as depicted in your figure); otherwise, large measurement errors (~610 ps, according to my measurements) result when measuring different input time intervals. I also think the △offset may have nothing to do with the standard deviation: the standard deviation is caused by the jitter of the internal ring oscillator (which accumulates with time), whereas the △offset is caused by drift in the averaged results of different measurements. For a TDC chip, the averaged result should be steady even if the standard deviation is large. If this kind of large △offset is unavoidable for the TDC7200, could you please give me some advice on reducing it?
Thanks again.
Hello Kenney,
Thanks for the reply. I have never seen any testing done, or done any testing myself, to obtain the delta offset you are talking about. I would need to check what the delta offset is on the EVM to see if I obtain results similar to what you are getting on your system. Unfortunately, we will be off on holiday in the US and will not be able to get any testing completed until next week. I hope this is okay; feel free to let me know if you have any other questions.
Best,
Isaac
Hi Isaac,
Thanks very much for planning to reproduce the test results. I will wait for your confirmation of the △offset problem, and I'm looking forward to your suggestions on reducing it.
Hello Kenney,
I have not been able to run the tests yet. Hopefully I am able to run them this week, I will get back to you once I have results. Thanks!
Best,
Isaac
Hello Kenney,
I ran the tests today with the EVM using the following register configurations:
I ran the tests and verified with a scope, although the scope's measurement was always delayed compared to the result collected by the TDC7200; I believe this is due to the added capacitance from the scope's probes. Nonetheless, it was just used as a tool to confirm where the START and STOP pulses occurred. My test results show a very different outcome compared to yours.
| Input interval (ns) | 12 | 50 | 100 | 200 | 500 | 1000 | 1500 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Measured result (ns) | 12.49 | 50.45 | 100.45 | 200.3 | 500.29 | 1000.14 | 1500.10 |
| Offset (ps) | 490 | 450 | 450 | 300 | 290 | 140 | 100 |
Some things to note: the offset grew smaller as I increased the measurement time, and in your measurements the input time was always larger than the measured result, whereas in mine it was the opposite.
Best,
Isaac
Hi Isaac,
Thank you very much for carrying out the tests. I think your measurement results are quite similar to mine. In fact, since the input delay of START may be larger or smaller than that of STOP (it can be affected by the delay of the input cables), it doesn't matter whether the offset grows smaller or larger with increasing input time interval; what matters is the trend of the measured results and the value of the △offset. In your measurements, the measured offset decreased with increasing input time interval, which is consistent with my test results. The △offset from 12 ns to 1500 ns in your measurements can be calculated as △offset = 490 ps - 100 ps = 390 ps. Your measured △offset is somewhat smaller than mine (610 ps), but I have verified that the △offset varies from chip to chip. I have measured 6 TDC7200 chips, and the results show that the △offset varies from ~360 ps to ~1.25 ns. This large △offset unfortunately deteriorates the accuracy of the TDC7200, even though the standard deviation is less than 300 ps for input time intervals from 12 ns to 1500 ns.
As mentioned in my previous replies, I think the reason might be that the frequency of the internal ring oscillator of the TDC7200 is not stable. The frequency may be relatively high when the ring oscillator begins to oscillate and then decrease over time; after a while (maybe ~1000 ns later, according to the measurement results), the frequency seems to stabilize, but by then the error has already been introduced.
Thanks again for your time, looking forward to your reply.
Hello Kenney,
I do not believe the input delay of the START signal will have much of an effect, since the device does not start timing until the START pulse is received, but any delay in the STOP signal will surely affect the reading. Unfortunately, in my setup the STOP signal comes through a long wire, so I do not think my measurements are as accurate as I would like. As for the △offset, I just do not understand the focus on this spec; I have not come across customers for whom the △offset is an issue.
The main concern is usually just any offset that may be present in your measurement relative to the true ToF. In my measurements the largest offset I encountered was 490 ps. In an air-coupled ultrasonic application this would equate to ~8.4*10^-8 meters of offset, which is hardly any difference at all; if you use it in a LiDAR application, the impact would of course be larger since the speed of light is much greater than the speed of sound, and with my measured offset it comes out to ~0.0735 m.
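For illustration, the two distance figures above can be reproduced as follows. This is a rough sketch: the propagation speeds are the usual approximate values, and a round-trip (echo) measurement is assumed, so the one-way distance error is half the offset times the speed.

```python
SPEED_OF_SOUND_AIR = 343.0  # m/s, assuming roughly room-temperature air
SPEED_OF_LIGHT = 3.0e8      # m/s, approximate

def offset_to_distance(offset_s, speed_mps):
    """Convert a round-trip ToF error into a one-way distance error."""
    return offset_s * speed_mps / 2

print(offset_to_distance(490e-12, SPEED_OF_SOUND_AIR))  # ~8.4e-8 m (ultrasonic)
print(offset_to_distance(490e-12, SPEED_OF_LIGHT))      # ~0.0735 m (LiDAR)
```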
The internal time base does deviate with time, which is why calibration is performed after measurements and why the datasheet includes information about how the standard deviation changes with the time/length of your measurement; essentially, it becomes less stable as time goes on. If you are looking for maximum accuracy, you should be able to characterize the offset for a given interval and factor it into your calculation. That is how most customers have increased their level of accuracy and resolution.
Best,
Isaac