I've got a system using the PCI2060I where the secondary side runs asynchronously at 33.00 MHz and the primary side typically runs at 66.66 MHz. The only device on the secondary side is an FPGA that uses DMA to transfer data to/from the host on the primary side. The FPGA is programmed to always run data bursts of 16 32-bit words starting on a 16-word address boundary.

When doing read bursts on one of the host systems I use, I am seeing these bursts split into a series of two-word bursts. At first I thought the host was doing this, but watching the primary-side bus I found that the shortened bursts were not terminated by the host (no target disconnect) but rather by the bridge: FRAME# goes high after the first transfer, and STOP# is never asserted. In all cases this host inserts a lot of wait states, typically 12 to 16. On the secondary side, my FPGA sees a disconnect-with-data from the bridge on the second data cycle.

So my questions are:
1) Does the bridge have some sort of timeout mechanism that aborts a burst transfer after some number of wait states?
2) If so, can this behavior be changed with a register setting? That is, can I increase the timeout or turn it off? This "feature" is causing a large slowdown in the system's data throughput.
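In case it's relevant, here is the quick sketch I've been using to check the bridge's standard latency timer registers from a Linux host. The bus address 0000:02:00.0 is just a placeholder for wherever the bridge enumerates on your system (lspci will tell you), and I'm only reading the two registers defined by the standard Type 1 header: the Latency Timer at offset 0x0D and the Secondary Latency Timer at 0x1B. Whether the PCI2060I has any additional timeout register beyond these is exactly what I'm asking about.

/* Minimal sketch: dump the bridge's primary and secondary latency
 * timers via Linux sysfs config space. The device path below is a
 * placeholder -- substitute the bridge's actual bus address. */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical address; find the real one with lspci. */
    const char *path = "/sys/bus/pci/devices/0000:02:00.0/config";
    uint8_t lat, sec_lat;

    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    /* Standard PCI config header: Latency Timer at 0x0D.
     * Type 1 (bridge) header: Secondary Latency Timer at 0x1B. */
    if (pread(fd, &lat, 1, 0x0D) != 1 ||
        pread(fd, &sec_lat, 1, 0x1B) != 1) {
        perror("pread");
        close(fd);
        return 1;
    }
    printf("primary latency timer:   %u PCI clocks\n", lat);
    printf("secondary latency timer: %u PCI clocks\n", sec_lat);

    close(fd);
    return 0;
}

If a latency or timeout register does turn out to be the culprit, writing a larger value the same way (open with O_RDWR, then pwrite) is what I'd try first.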