
AM1808 UPP Problem


TI,

AM1808, WinCE 6.0 BSP from MPC Data

I am using the UPP (Universal Parallel Port) to receive data from an A/D converter.  I am currently using the example UPPTest.exe program to test my hardware.  When testing, I am seeing discontinuities in the received data stream.  The discontinuities occur consistently throughout the data at an interval equal to the driver PAGESIZE; in other words, when the driver switches from one buffer to the next, it loses some samples.  I have made only minor changes to the given programming example.  They are:

UPPTest.c:

"if(!CreateThread( NULL, 0, (LPTHREAD_START_ROUTINE)WriteThread, NULL, 0, NULL))..." was commented out.  This stops data transmission.  Not needed.

platform.reg:

In the "UPP Driver" section:

1)  UPP Channel A: mode is changed from 2 to 0.  This switches from duplex mode (channel B transmits, channel A receives) to receive-only on channel A.

2)  UPP Channel B: loopback is changed from 1 to 0.  This turns off the internal loopback from channel B to channel A, so channel A now receives data from the hardware port and its connected A/D converter.
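
For reference, the edited registry section ends up looking roughly like this.  The key paths and value names below are illustrative only; the actual names used by the MPC Data BSP's platform.reg may differ:

    [HKEY_LOCAL_MACHINE\Drivers\BuiltIn\UPP\ChannelA]
        "Mode"=dword:0        ; was 2 (duplex: B transmits, A receives); 0 = A receive only

    [HKEY_LOCAL_MACHINE\Drivers\BuiltIn\UPP\ChannelB]
        "Loopback"=dword:0    ; was 1 (internal loopback B -> A); 0 = use the external pins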

This problem does not seem to occur at the example's 500 kHz clock rate, but it does happen at 1 MHz and higher.  I have tested this setup at a 100 MHz clock rate, and it works well except for the discontinuities.  The maximum clock rate for this port is stated as CPU clock / 4 = 114 MHz.  I would like this to work with a buffer size of 50 MB.  Any thoughts?  Would altering the driver to work with a single large buffer do the job?  I do not need to run continuously; I just need to fill a large buffer, stop, and then process it.

Best Regards,

Nelson


  • Nelson,

    When you say "page" or "buffer", are you referring to a uPP window or line transition?  The uPP DMA has a 2D programming model, where an overall transfer (window) can be divided into one or more lines.  I would not expect to see data discontinuity between lines, so I'm guessing that the problem happens when switching from one window to the next.

    One helpful feature is that the uPP peripheral allows you to queue a second window to begin automatically when the current window completes.  Are you doing this, or are you waiting for the end-of-window interrupt event and then programming the next window in response?  In pseudocode, the preferred procedure looks like this:

    1. Program first uPP transfer
    2. Program second uPP transfer
    3. Wait for first uPP transfer to complete
    4. Program third uPP transfer
    5. Repeat steps 3 and 4

    Basically, after every window, the next window should start automatically, and you should be able to program the window-after-next.
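
    To make that concrete, here is a minimal C sketch of the program-ahead scheme using the uPP Channel I (receive) DMA descriptor registers.  The register offsets are taken from the AM1808 uPP register map (please verify them against the TRM), and pUppRegs is assumed to point to the memory-mapped uPP register block obtained elsewhere in the driver; treat this as a sketch, not as the MPC Data driver's actual code.

        #define UPID0  (0x40 / 4)   /* descriptor: window start address             */
        #define UPID1  (0x44 / 4)   /* line count [31:16] | bytes per line [15:0]   */
        #define UPID2  (0x48 / 4)   /* address offset between line starts           */
        #define UPIS2  (0x54 / 4)   /* DMA status: bit 0 = ACT, bit 1 = PEND        */

        /* Queue the next receive window behind the one in flight. */
        static void QueueNextWindow(volatile unsigned long *pUppRegs,
                                    unsigned long physAddr,
                                    unsigned short lineCount,
                                    unsigned short bytesPerLine)
        {
            /* Only one descriptor can be pending behind the active transfer,
             * so wait for the PEND bit to clear before writing a new one. */
            while (pUppRegs[UPIS2] & 0x2)
                ;

            pUppRegs[UPID0] = physAddr;        /* physical window base address */
            pUppRegs[UPID1] = ((unsigned long)lineCount << 16) | bytesPerLine;
            pUppRegs[UPID2] = bytesPerLine;    /* lines packed back-to-back    */
        }

    With this in place, the end-of-window IST only has to call QueueNextWindow() for the window-after-next; the window that was already pended starts in hardware with no gap.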

    Also, the maximum supported speed for the uPP peripheral is 75 MHz.  On a 300 MHz device, this neatly corresponds to CPU / 4.  On the newer 456 MHz devices, this is no longer the case.  I would not recommend trying to operate at 114 MHz.

    Hope this helps.

  • Joe,

    Thank you for your response.  Since I submitted the problem, I have delved deep into the hardware architecture and the MPC Data UPP driver.  It looks like the driver is set up to do just what you described: keep the next window queued while the current one is filling.

    At this point I believe one of two things is happening: either the window buffer switch takes longer than one sample interval, or copying the filled window to the calling process's target buffer takes longer than the DMA takes to fill a window.  Either way, I don't think the current method can be made to run at a sampling rate greater than 1 MHz.

    As I stated earlier, I don't need a continuous flow of incoming data.  All I need to do is alter the driver to let the DMA fill one large buffer and stop; that data will then be processed "off line".  The DMA appears to be capable of 32-bit addressing, so in theory I could alter the driver to fill one very large buffer, up to the size of all available memory, and stop.  I'm not sure how much contiguous memory WinCE will give me, but I will give it a try.  Any help with WinCE memory allocation is welcome.  I also need to somehow have the driver pass the buffer pointer back to the calling process.  I have not done a lot of WinCE programming, so help here is also welcome.

    Regards,

    Nelson

  • Nelson,

    How big are your individual data buffers?  If they're small (e.g. < 1 KB), then I can definitely see the CPU falling behind on successive high-speed uPP transfers.  A uPP speed of 1 MHz is relatively slow, however, so I am surprised that this is the threshold you are observing.  I would definitely recommend using larger data buffers if possible to avoid excessive CPU overhead.

  • Hi Joe,

    To answer your question on the size of the data buffers: there are two 32 KB buffers, for a total allocation of 64 KB.

    I haven't had time to go through a debug process to see what is actually happening; I just know that the buffer queueing code is in place.  I don't know whether it is actually working as intended.

    I can get by for a while knowing that samples are being skipped, but it is imperative that the problem eventually be fixed.  What do you think about having the driver use one large buffer?

    Regards,

    Nelson

  • Nelson,

    It's certainly possible to simply run a single 64 KB buffer.  If you want to start processing after the first 32 KB is complete, you can use a single window with two 32 KB lines.  Individual lines can be as large as 64 KB, and a single transfer can have up to 64K lines in its overall "window" (up to 64K lines of 64 KB each is roughly 4 GB), so you're nowhere near the maximum transfer size that uPP can handle.  The bigger issue may be whether or not your operating system will allow you to allocate such a large contiguous buffer.

  • Joe,

    Could you please have MPC Data look into the driver issue?  I don't believe they intended to limit the UPP port clock speed to 1 MHz.  The advantage of using the port is its 100 MHz speed, which is essential to the success of our project.  Thank you.

    Regards,

    Nelson

  • Hi Nelson,

    As you have already seen, the uPP driver does keep two windows queued, so I don't know why you are seeing the discontinuities, at 1 MHz at least.  With a DMA page (window) size of 32 KB and 16-bit samples arriving at 1 MHz, it will take approx. 16 ms to fill each DMA buffer.  This is more than enough time to service the interrupt, memcpy the data into the client's buffer, and re-queue the previous DMA buffer (all of which should take < 1 ms).

    You might want to try adding a RETAILMSG to the IST to check that you are seeing interrupts every 16 ms at 1 MHz.
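
    Something along these lines in the IST would show the spacing (the helper name is illustrative, not from the driver):

        /* Call once per uPP end-of-window interrupt, from the IST. */
        static void LogIstInterval(void)
        {
            static DWORD s_dwLastTick = 0;
            DWORD dwNow = GetTickCount();

            if (s_dwLastTick != 0)
            {
                /* Expect roughly 16 ms between messages at 1 MHz with 32 KB windows. */
                RETAILMSG(1, (TEXT("uPP IST: +%u ms\r\n"), dwNow - s_dwLastTick));
            }
            s_dwLastTick = dwNow;
        }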

    It is possible that increasing the sample rate to 100 MHz would be too fast for the IST to keep up with, but it is odd that it is failing at 1 MHz.

    I've taken a quick look through the uPP driver code, and the logic for the bInitialTransferA/B flags looks a little odd (I'm not sure why it's there, actually).  It looks to me like this flag could cause the DMA buffers to be queued in the wrong order.  I suggest commenting out "bInitialTransferA = 1" in UPP_StartTransfer().

    If you wanted to change the driver to fill one large buffer, that should be fairly straightforward.  You could update the code that sets UPID1 to use a fixed byte count of 32768 and configure the number of lines as your buffer size / 32768.
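
    In sketch form, with dwBufferSize and dwBufferPhys as placeholder names for the driver's buffer size and physical buffer address (dwBufferSize must be a multiple of 32768, and the resulting line count must fit in 16 bits):

        DWORD dwLines = dwBufferSize / 32768;        /* number of 32 KB lines, max 65535  */

        pUppRegs[UPID0] = dwBufferPhys;              /* physical base of the large buffer */
        pUppRegs[UPID1] = (dwLines << 16) | 32768;   /* line count | fixed 32 KB per line */
        pUppRegs[UPID2] = 32768;                     /* contiguous, back-to-back lines    */

    The end-of-window interrupt then fires just once, after the entire buffer has been filled.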

    You may run into problems attempting to allocate very large buffers using HalAllocateCommonBuffer(). An alternative would be to reserve a block of RAM via config.bib.
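
    For reference, a minimal HalAllocateCommonBuffer() call looks roughly like this (the wrapper function is illustrative; include ceddk.h and link against the CEDDK library):

        #include <windows.h>
        #include <ceddk.h>

        /* Returns an uncached virtual pointer to the buffer; *pPhys receives
         * the physical address to program into the uPP DMA descriptor.
         * Returns NULL on failure, which becomes likely for very large sizes. */
        static PVOID AllocUppBuffer(DWORD dwSize, PHYSICAL_ADDRESS *pPhys)
        {
            DMA_ADAPTER_OBJECT adapter;

            memset(&adapter, 0, sizeof(adapter));
            adapter.ObjectSize    = sizeof(adapter);
            adapter.InterfaceType = Internal;

            /* FALSE = uncached, so the CPU never reads stale data behind the DMA. */
            return HalAllocateCommonBuffer(&adapter, dwSize, pPhys, FALSE);
        }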

    Hope this helps,
    Mike Wyatt
    MPC Data Limited, a BSQUARE company


  • Mike,

    I had the UPP driver updated to fill a single large buffer.  The work was done at MPC Data and has proven to work very well after thorough testing.

    The buffer queuing problem remains unsolved at this point, but with the driver update I consider my problem solved.  There may come a time when I need the buffer queuing to work, but not right now.  Thank you for your help on this matter.

    Regards,

    Nelson