
GPIO writes need a pipeline flush

Hi,

We are using GPIO on a C6670 to access flash memory.

This worked fine until we started running code on 3 cores. Now we find that the GPIO writes are executed without our programmed delays and flash reading becomes unreliable. Probably due to the writes being cached within the pipeline?

Adding a read immediately after a write fixes this.

But is there a specific pipeline-flush command that would obviate the need for a read (which might otherwise be optimized out)?
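
For reference, this is roughly the pattern we have working now; the register name and address below are only illustrative, not our actual board code:

    #include <stdint.h>

    /* Illustrative GPIO data register address, for the sketch only */
    #define GPIO_OUT_DATA  ((volatile uint32_t *)0x02320038u)

    static inline void gpio_write_with_readback(uint32_t value)
    {
        *GPIO_OUT_DATA = value;             /* posted write */
        uint32_t dummy = *GPIO_OUT_DATA;    /* read-back through a volatile pointer,
                                               so the compiler cannot remove it */
        (void)dummy;
    }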

Thanks

Ian

  • Ian,

    It may help to explain more about what you are doing with the GPIOs and what else may be happening related to the GPIOs or config space on all 3 cores.

    My guess is that you are referring to the write buffer for config-space writes, such as those to the GPIO, when you say pipeline or cache.

    If my guess is correct, then your code may be writing to a GPIO and then accessing the Flash using a different bus. When you have multiple cores running, more activity occurs on the config-bus write buffer, which then causes a delay before a new write physically occurs, that is, before it gets out of the write buffer.

    If this is your case, then the choices are

    1. Do the read after the write as you are doing now. This forces the CPU to wait until the read's result is returned, and that will not happen until all writes have been completed.

    2. Use the MFENCE instruction. This will also force the CPU to wait until all of its writes have completed. See the CPU & Instruction Set Reference Guide for more information, and the sketch after this list.
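
    A minimal sketch of option 2, assuming the TI C6000 compiler's _mfence() intrinsic (declared in c6x.h); the register address is only a placeholder:

        #include <stdint.h>
        #include <c6x.h>   /* TI C6000 compiler intrinsics; _mfence() issues MFENCE */

        #define GPIO_OUT_DATA  ((volatile uint32_t *)0x02320038u)  /* placeholder address */

        static inline void gpio_write_fenced(uint32_t value)
        {
            *GPIO_OUT_DATA = value;   /* posted write enters the config write buffer */
            _mfence();                /* CPU stalls until all of its outstanding
                                         writes have completed */
        }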

    Regards,
    RandyP

     

    If you need more help, please reply back. If this answers the question, please click Verify Answer below.

  • Hi Randy,

    thanks for the quick reply.

    We are using the GPIO to set up the address lines of the external flash and to read in the 8-bit data.

    You are probably right that I should be referring to the write buffer rather than the pipeline.

    I had just started to look at MFENCE. Which would be the best approach? The read or MFENCE?

    By the way - where is the write buffer for config space documented?

    Regards,

    Ian

  • Ian,

    In the Training section of TI.com, there is a training video set for the C66x SOC architecture. It may be helpful for you to review all of the modules. In particular, the CorePac & Memory Subsystem Module may be the one that covers the MFENCE instruction. You can find the complete video set here.

    Between the read and MFENCE, my opinions are:

    • the read is the easiest because you already have it working
    • the read works in this case but might not work in every case (I cannot think of any case where it would not, though)
    • MFENCE was added to the C66x architecture for exactly this scenario, so it is the "intended" or "expected" method
    • MFENCE is the new and fancy instruction, so it should be a great thing to use
    • MFENCE will probably provide better performance because the read can add an extra 15-20 clock cycles

    Documentation of internal details like a write buffer can be lacking. That is a polite way of saying that there are a lot of details that may be mentioned somewhere in one of the many documents, but they can be hard to find.

    In general, you can assume that writes to almost anything get buffered in some way, whether it is in CorePac memory path buffers or Interconnect port buffers.

    Regards,
    RandyP

     

    If you need more help, please reply back. If this answers the question, please click Verify Answer below.