Hi,
I am currently running some benchmarks to evaluate the performance penalty on the Cortex-R5 when other masters are running in parallel (DMA, Ethernet).
So far I have measured that when a single DMA channel transfers data as fast as possible while the core performs memory-intensive operations on the same memory, core performance can drop by a factor of two.
I have found other posts suggesting that this interference only occurs when multiple masters access the same 64-bit location in the same memory, but having also tested with different memory locations, I can confirm this is not the case.
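For reference, here is a minimal sketch of the kind of measurement loop I am using on the R5 side; the buffer size, access pattern, and the GCC-style inline assembly for the Cortex-R5 PMU cycle counter are illustrative only, not my exact test code:

#include <stdint.h>

#define BUF_WORDS 4096u

/* Buffer assumed to be placed in the same SRAM region the DMA channel is hammering. */
static volatile uint32_t buf[BUF_WORDS];

static inline void pmu_start(void)
{
    uint32_t v;
    /* PMCR: enable counters (bit 0) and reset the cycle counter (bit 2) */
    __asm volatile("MRC p15, 0, %0, c9, c12, 0" : "=r"(v));
    v |= (1u << 0) | (1u << 2);
    __asm volatile("MCR p15, 0, %0, c9, c12, 0" : : "r"(v));
    /* PMCNTENSET: enable the cycle counter (bit 31) */
    v = (1u << 31);
    __asm volatile("MCR p15, 0, %0, c9, c12, 1" : : "r"(v));
}

static inline uint32_t pmu_cycles(void)
{
    uint32_t c;
    /* PMCCNTR: current cycle count */
    __asm volatile("MRC p15, 0, %0, c9, c13, 0" : "=r"(c));
    return c;
}

/* Returns the cycle cost of one read-modify-write sweep over the buffer.
 * I compare the result with the DMA channel idle versus running flat out. */
uint32_t measure_memory_pass(void)
{
    uint32_t i, sum = 0u, start, end;

    pmu_start();
    start = pmu_cycles();
    for (i = 0u; i < BUF_WORDS; i++)
    {
        sum += buf[i];
        buf[i] = sum;
    }
    end = pmu_cycles();
    return end - start;
}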
In order to build a complete understanding, I would need the following information from Texas Instruments:
- What are the references of the CPU interconnect and the peripheral interconnect IPs?
- Where can I find their documentation for the revisions used in the TMS570LC4357?
- Can TI provide any additional documentation complementing the above, describing the integration of these interconnects into the TMS570 design (configuration, restrictions, arbitration, priority, ...)?
- In section 32.2.14 of the Ethernet chapter of the Reference Manual (March 2018), one can read: "The device contains a chip-level master priority register that is used to set the priority of the transfer node used in issuing memory transfer requests to system memory." However, nothing about this seems to be documented in the interconnect chapters. Can TI provide more information about this chip-level master priority mechanism, which would allow configuring EMAC priority versus CPU or DMA priority for interconnect traffic? (A purely hypothetical sketch of what I have in mind follows this list.)
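To make that last question concrete, here is a purely hypothetical sketch of the configuration call I am looking for; the function name, field position, and encoding are placeholders I invented, and the register address is precisely the information I am asking for:

#include <stdint.h>

/* All of the following are assumptions, not taken from any TI document. */
#define EMAC_PRIO_SHIFT   0u                          /* assumed field position       */
#define EMAC_PRIO_MASK    (0x7u << EMAC_PRIO_SHIFT)   /* assumed 3-bit priority field */
#define EMAC_PRIO_LOWEST  (0x7u << EMAC_PRIO_SHIFT)   /* assumed: larger = lower prio */

/* Lower the EMAC transfer-node priority so CPU/DMA traffic wins interconnect
 * arbitration. The caller would pass the address of the chip-level master
 * priority register once TI confirms where it lives and how it is laid out. */
static void emac_lower_priority(volatile uint32_t *master_priority_reg)
{
    uint32_t v = *master_priority_reg;
    v = (v & ~EMAC_PRIO_MASK) | EMAC_PRIO_LOWEST;
    *master_priority_reg = v;
}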
Thanks