I'm developing a product that puts multiple devices on a shared bus, each with its own CDCE913 clock generator driving both the microcontroller and the ADC. Bus commands provide sync pulses that let me compare clock counts between the master and slave nodes, and I'll feed that difference into a PID loop with the goal of locking all the slave clocks to the master.
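To be concrete, here's roughly the loop I intend to run (illustrative Python, not the real firmware; the gains and scaling are placeholders):

```python
# Sketch of the clock-disciplining loop: on each sync pulse, compare the
# master and slave clock counts and nudge the DAC driving Vctrl.
# Gains are placeholders, not tuned values.

class ClockPid:
    def __init__(self, kp=0.5, ki=0.05, dac_center=2048):
        self.kp = kp
        self.ki = ki
        self.integral = 0.0
        self.dac_center = dac_center  # mid-scale of the 12-bit DAC

    def update(self, master_count, slave_count):
        """Called on each sync pulse with both clock counts since the
        previous pulse. Returns the new 12-bit DAC code for Vctrl."""
        error = master_count - slave_count  # slave slow -> positive error
        self.integral += error
        out = self.dac_center + self.kp * error + self.ki * self.integral
        # Clamp to the DAC's 12-bit range.
        return int(min(max(out, 0), 4095))
```

The integrator is what should eventually hold the slaves on frequency even with a constant Vctrl offset per unit.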
The CDCE913 has a 16.384MHz crystal (http://search.digikey.com/scripts/DkSearch/dksus.dll?Detail&name=535-9676-1-ND), with output Y1 divided by 4 to provide a 4.096MHz ADC clock, and the relevant settings to generate an 8MHz output on Y2 to drive the microcontroller (which has its own PLL boost to 32MHz, though I should try running a 32MHz extclk directly at some point). The MCU's DAC output feeds through a rough first attempt at a low-pass filter, straight into Vctrl. The theory is that I should be able to take this 10ppm crystal and pull it a fair amount more than that, and the 12-bit DAC should provide enough resolution for a reasonable lock between all the units on the bus.
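My back-of-the-envelope resolution check looks like this (the ±100ppm total pull range is an assumption for illustration; the real figure depends entirely on the crystal, which is part of my problem below):

```python
# DAC resolution estimate. The pull range is a placeholder assumption,
# NOT a measured or datasheet value.
pull_range_ppm = 200.0      # assumed total pull, Vctrl at 0V vs 1.8V
dac_steps = 4096            # 12-bit DAC
ppm_per_lsb = pull_range_ppm / dac_steps
print(ppm_per_lsb)          # ~0.049 ppm per DAC step
```

Even at a generous pull range, that's far finer than the 10ppm crystal tolerance, so resolution shouldn't be the limiting factor.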
However, I have found no evidence that I can alter the crystal frequency at all. I built a simple test program that uses another microcontroller to count the clocks from two units simultaneously, and the difference I'm seeing is more like 500ppm, with no visible effect from sweeping Vctrl across its entire range (0-1.8V).
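The measurement itself is simple: both counters are gated over the same interval, and the fractional offset falls out directly (hypothetical counts shown, not my captured data):

```python
def ppm_offset(master_count, slave_count):
    """Fractional frequency offset of the slave relative to the master,
    in ppm, from clock counts taken over the same gate interval."""
    return (slave_count - master_count) / master_count * 1e6

# Example: counting the 4.096MHz ADC clock on both units for ~1 second.
print(ppm_offset(4_096_000, 4_098_048))  # roughly 500 ppm fast
```

That's the kind of number I'm seeing: stable, way outside the crystal spec, and completely indifferent to Vctrl.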
A major part of the problem is that TI's documentation on pullability refers to crystal parameters that nobody else publishes (crystal datasheets typically list 2-3 parameters; TI wants 4-5), so I have no way to calculate whether a given crystal (like the one above) is even remotely appropriate. Regardless, I shouldn't be seeing a 500ppm difference when the crystals are spec'd to 10ppm....
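For what it's worth, the only pullability estimate I know is the standard load-resonance approximation, and it needs exactly the parameters crystal datasheets tend to omit: the motional capacitance C1 and shunt capacitance C0. Here's how I'd compute it if I had them (the values below are guesses typical of a small SMD crystal, not numbers from the part linked above):

```python
def pull_ppm(c1_f, c0_f, cl1_f, cl2_f):
    """Estimated fractional frequency shift (ppm) when the load
    capacitance moves from cl1 to cl2, using the usual approximation
    df/f = (C1/2) * (1/(C0+CL1) - 1/(C0+CL2)).
    All capacitances in farads."""
    return (c1_f / 2) * (1 / (c0_f + cl1_f) - 1 / (c0_f + cl2_f)) * 1e6

# Placeholder values: C1 = 3fF, C0 = 1pF, CDCE913's on-chip load
# swept from 6pF to 12pF. These are guesses, not datasheet numbers.
print(pull_ppm(3e-15, 1e-12, 6e-12, 12e-12))  # on the order of 100 ppm
```

If that formula is right, pull range is dominated by C1, which is precisely the figure I can't get for this crystal.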
Any hints on what's going on?