We are attempting to do sustained writes from the ARM out to an FPGA via uPP.
We are running the uPP in two-channel, 16-bit mode with a line count of 8192, and the system bus priority for the uPP set to 0 (highest). Our code creates a large block of data, tells the uPP to start transmitting it, waits for the uPP to complete, then immediately repeats with another large block.
We find that we always miss data at some point when doing this, for any block size from 32 kB to 7 MB, even if we slow the clock considerably, from its 66 MHz maximum down to 7 MHz.
The issue appears to be memory contention on the ARM side: at some point the Linux OS running there becomes loaded enough that the uPP misses a data block. We can work around this by blocking the OS completely while the transfer is in progress (disabling all Linux interrupts), but that is not ideal.
We believe that if the uPP is unable to fetch data from its DMA for any reason, it should deassert the ENABLE line until it can again, but we never see that happen. Is there any known problem with the uPP design?
We also often see a miss in the data without ever seeing the underrun/overflow interrupt raised. Without that indication there is no way to know that the transfer was unsuccessful, which makes the uPP unusable for us. Is there a genuine situation in which this could occur?