I'm trying to get the ADS7066 running on a new design, and I've come across an issue that I can't figure out.
On the front page of the datasheet, it says that full throughput can be achieved with a SPI frequency >4.5MHz.
Section 6.7 gives limits of 3200ns for t_conv and 800ns for t_acq.
The timing diagram at the top of the following page (page 8) shows chip select being held high for t_conv (min 3200ns), then going low for t_acq (800ns). It is during this t_acq window that the conversion result is clocked out to the host. So if all the SPI activity has to take place in 800ns and each sample is 16 bits (the leanest framing possible, no averaging or flag bits), the minimum time per bit is 800ns/16 = 50ns, yes? That in turn means that to achieve full throughput the SPI frequency has to be >20MHz, and clocking out the full 24-bit packet in 800ns needs >30MHz.
So, where does the figure of 4.5MHz come from? Presumably there's something I haven't taken into account or haven't properly understood (or maybe it's a typo in the datasheet). My host is a low-power FPGA, so although 30MHz should be achievable, it would simplify things (and reduce power) if I could slow the clock down to <10MHz.
An explanation to clear this up would be very helpful.