
AM2434: Need support for these questions

Part Number: AM2434

Hi team,

Continuing our discussion about AM2434 support, below you can find some additional questions from the customer. 

 

We spent some time digesting the content of our conference call, which was very productive; thank you for that time! We have not yet been able to dig deeper into our GPMC setup for a more detailed analysis of our low bus throughput, but we did notice that the large difference between read and write performance comes from the GPMC sitting idle for 12 bus cycles between read operations, whereas no such gap appears between write operations. We hope to provide more details in the coming weeks.

Meanwhile, we have some other questions (we can keep sending them here or directly to Mukul Bhatnagar and Pekka Varis if you wish):

 Are the R5F cores able to make use of the MSRAM owned by the isolated M4F? What would the access latencies be compared to the main OCSRAM?

 Per our conference call, we understood that GPMC is an older technology that is being phased out. We usually rely on parallel buses to communicate with the FPGA (so the FPGA can oversample signals sent by the MCU while still keeping some bandwidth), and we would like to know what the modern solutions are for fast and reliable communication with FPGAs.

 

Can you help us respond to them in a timely manner?

Thanks and regards,

Hamilton

  •  Are the R5F cores able to make use of the MSRAM owned by the isolated M4F? What would the access latencies be compared to the main OCSRAM?

    Hello Hamilton,


    Yes, even if the M4F core is isolated, the R5F core can still access the MSRAM memory region of the M4F.

    Access latency for reading 4 bytes:
    • R5F reading M4F’s MSRAM: ~183 ns
    • MCU (M4F) core reading MSRAM: ~2.5 ns
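
    If it helps, the ~183 ns figure can be reproduced on your side by timing a single 4-byte read from the R5F with its PMU cycle counter. The sketch below is only illustrative: the MSRAM address is a placeholder (take the real one from the memory map and your resource partitioning), and it assumes the PMU cycle counter has already been enabled.

```c
#include <stdint.h>

/* Placeholder address for the M4F-owned MSRAM region as seen from the R5F;
 * substitute the address from your memory map / resource partitioning.     */
#define M4F_MSRAM_WORD   ((volatile uint32_t *)0x05000000u)

/* Read the Cortex-R5 PMU cycle counter (PMCCNTR). PMCR.E and PMCNTENSET.C
 * must already be set for the counter to be running.                       */
static inline uint32_t pmu_cycles(void)
{
    uint32_t c;
    __asm__ volatile("mrc p15, 0, %0, c9, c13, 0" : "=r"(c));
    return c;
}

uint32_t measure_read_latency_cycles(void)
{
    uint32_t start, end;
    volatile uint32_t value;

    start = pmu_cycles();
    value = *M4F_MSRAM_WORD;   /* one 4-byte read across the interconnect */
    end   = pmu_cycles();

    (void)value;
    /* Convert cycles to ns with the R5F clock period (e.g. 1.25 ns at 800 MHz). */
    return end - start;
}
```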

    GPMC Throughput:

    Could you please share the target throughput you are aiming to achieve with GPMC?

    Also, have you evaluated GPMC with DMA enabled? In many use cases, enabling DMA can significantly improve throughput.
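
    For reference on how much the 12 idle cycles you observed between reads can cost, a quick back-of-the-envelope calculation like the one below shows the effect. The clock frequency, bus width, and cycle counts are assumptions/placeholders, not your actual GPMC timing configuration.

```c
#include <stdio.h>

int main(void)
{
    /* Placeholder numbers: adjust to your GPMC configuration and measurements. */
    const double fclk_hz        = 133e6;  /* assumed GPMC functional clock     */
    const double bytes_per_xfer = 2.0;    /* assumed 16-bit data bus           */
    const double access_cycles  = 6.0;    /* cycles spent in the access itself */
    const double idle_cycles    = 12.0;   /* idle gap observed between reads   */

    double t_cycle  = 1.0 / fclk_hz;
    double read_bw  = bytes_per_xfer / ((access_cycles + idle_cycles) * t_cycle);
    double write_bw = bytes_per_xfer / (access_cycles * t_cycle);

    printf("effective read  throughput: %.1f MB/s\n", read_bw  / 1e6);
    printf("effective write throughput: %.1f MB/s\n", write_bw / 1e6);
    return 0;
}
```

    With these example numbers, the idle gap alone triples the per-access time on reads, which would be consistent with the read/write asymmetry you described.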

    Alternative Interfaces:

    As discussed, GPMC is a legacy interface and is not preferred in modern designs. Other customers have successfully used:
    • OSPI 
    • PCIe

    These interfaces provide higher bandwidth and scalability compared to GPMC. On the SoC side, both OSPI and PCIe are supported.

    On the FPGA side, you’ll need to configure it to act as either a PCIe endpoint or an OSPI slave device. 

    Note: While OSPI is typically used to interface with NOR/NAND flash memories in the MCU+ SDK, there is nothing preventing OSPI from being used to communicate with an FPGA.


     PRU for Low-Latency GPIO-like Parallel Communication:

    •  The PRU cores can be used for bit-banged or parallel I/O communication with the FPGA with precise timing and low latency; see the sketch below.
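
    As a rough illustration of what that looks like in PRU firmware (built with the TI PRU compiler), the sketch below drives an 8-bit data bus plus a strobe line through the __R30 output register. The pin mapping and hold time are assumptions; the real mapping depends on your ICSSG pinmux.

```c
#include <stdint.h>

/* Direct-connect PRU output/input registers (TI PRU compiler convention). */
volatile register uint32_t __R30;   /* GPO: pins driven by the PRU  */
volatile register uint32_t __R31;   /* GPI: pins sampled by the PRU */

#define DATA_MASK   0x000000FFu     /* assumed: pr<n>_pru<m>_gpo0..7 carry the data byte */
#define STROBE_PIN  (1u << 8)       /* assumed: gpo8 used as a write strobe              */

void main(void)
{
    uint8_t value = 0;

    while (1) {
        /* Each PRU instruction takes one PRU clock cycle, so the timing
         * of this sequence is fully deterministic.                     */
        __R30 = (__R30 & ~DATA_MASK) | value;   /* present the data byte */
        __R30 |= STROBE_PIN;                    /* assert the strobe     */
        __delay_cycles(2);                      /* assumed hold time     */
        __R30 &= ~STROBE_PIN;                   /* deassert the strobe   */
        value++;
    }
}
```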

    Regards,

    Anil.