Trying to understand upstream read burst for PCI2060 bridge.

I've got a system using the PCI2060I where the secondary side runs (asynchronously) at 33.00 MHz and the primary side typically runs at 66.66 MHz.  The only device on the secondary is an FPGA that uses DMA to transfer data to/from the host on the primary side.  This FPGA is programmed to always run data bursts of 16 32-bit words starting at a 16-word address boundary.  When doing read bursts on one of the host systems I use, I am seeing these bursts split into a series of two-word bursts.  At first I thought the host was doing this, but looking at the primary-side bus I found that the shortened bursts were not terminated by the host (target disconnect) but rather by the bridge (FRAME# goes high after the first data transfer; STOP# never asserts low).  In all cases, this host inserts a lot of wait states, typically 12 to 16.  On the secondary side, my FPGA sees a target disconnect with data from the bridge on the second data cycle.  So my questions are:

1) Does the bridge have some sort of timeout mechanism that will abort a burst transfer after some number of wait cycles?

2) If so, can this behavior be changed with a register setting?  i.e. can I increase the timeout or turn it off?  This "feature" is causing a large slowdown in the data throughput of the system.

  • Hello,
    Try setting the primary and secondary latency timers to their maximum values.
    Regards
  • Thanks.  After posting I actually stumbled across the primary latency timer, and setting that does seem to improve things.  You also suggested setting the secondary latency timer, but (correct me if I'm wrong) I thought that would have no effect on transfers initiated on the secondary bus.  I'm not concerned about downstream transfers (host to peripheral) because at the moment these are all single-cycle anyway.  In any case, setting the primary latency timer seems to fix the existing issue.
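  • For anyone else hitting this: the latency timers the reply refers to live in the bridge's PCI configuration header — the primary latency timer at offset 0x0D (standard header) and the secondary latency timer at offset 0x1B (Type 1 bridge header).  On a Linux host they can be inspected and raised with setpci.  A sketch, assuming the bridge appears at bus:dev.fn 02:00.0 (substitute the address lspci reports for your system):

    ```shell
    # Find the PCI2060 bridge (TI vendor ID 0x104c) and note its bus:dev.fn
    lspci -d 104c: -nn

    # Read the current primary latency timer (config offset 0x0D)
    setpci -s 02:00.0 LATENCY_TIMER

    # Set the primary latency timer to its maximum, 0xF8 = 248 PCI clocks
    # (the low three bits are often hardwired to zero)
    setpci -s 02:00.0 LATENCY_TIMER=f8

    # Set the secondary latency timer (Type 1 header, offset 0x1B, one byte)
    setpci -s 02:00.0 1b.b=f8
    ```

    The latency timer bounds how long the bridge may keep the bus as a master once another agent requests it, so a small value combined with a slow, wait-state-heavy target naturally chops long bursts into short ones.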