
AM6442: How to implement a Linux system boot R5F CPU project running on MSRAM

Part Number: AM6442
Other Parts Discussed in Thread: TMDS64EVM, SYSCONFIG

MCU+SDK 9.1

Linux SDK 8.6

TMDS64EVM

Hello:

https://e2e.ti.com/support/processors-group/processors/f/processors-forum/1300880/am6442-run-r5-from-sram-with-a53-running-linux

During our testing, we encountered a problem: the program ran slower on DDR than on MSRAM. The link above is the question our FAE asked earlier, and it provides the background to my question.

First, let me answer the questions raised in the earlier thread:

The example I use is gpio_led_blink from the MCU+SDK 9.1 package; the project name is gpio_led_blink_am64x_evm_r5fss0-0_nortos_ti-arm-clang, so it runs on the R5F0-0 core. I have reviewed the code: the example only controls the GPIO peripheral as an output, not an input, so my understanding is that it does not hit the input-interrupt issues mentioned in the link above.

The gpio_led_blink example, configured to boot from Linux into DDR, loads successfully, and the LED can be observed blinking.

I have also read the AM64x Academy multicore module documentation, which does not seem to explain how to allocate SRAM nodes in the device tree on Linux systems.

https://dev.ti.com/tirex/explore/node?node=A__AeMVTHckwFDmFoNkRHpRPw__AM64-ACADEMY__WI1KRXP__LATEST 

I want Linux to boot an R5F project that runs from MSRAM:

For the CCS project, I compared the linker.cmd and example.syscfg files of the DDR and MSRAM versions of the program, and made the following changes in the example.syscfg file (in MCU+SDK 9.1, linker.cmd can be modified through example.syscfg; the resource table size is set to 0x1000):

MSRAM   : ORIGIN = 0x70080000 , LENGTH = 0x40000

Change to

MSRAM0   : ORIGIN = 0x70080000 , LENGTH = 0x1000

MSRAM1   : ORIGIN = 0x70081000 , LENGTH = 0x3F000

Change the SECTIONS mapping to:

MSRAM0 -> resource_table

MSRAM1 -> text segments, code and read-only data, data segments, memory segments, stack segments, initialization and exception handling
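For reference, the intended split can be sketched as a linker fragment (this mirrors the changes described above; it is the configuration that failed to load, shown only to make the layout explicit):

```
/* MEMORY: split the original 0x40000 MSRAM region in two */
MEMORY
{
    MSRAM0 : ORIGIN = 0x70080000 , LENGTH = 0x1000   /* resource table     */
    MSRAM1 : ORIGIN = 0x70081000 , LENGTH = 0x3F000  /* code, data, stacks */
}

/* SECTIONS: resource table in MSRAM0, everything else in MSRAM1 */
SECTIONS
{
    GROUP : {
        .resource_table : {} palign(4096)
    } > MSRAM0
    /* .text, .rodata, .data, .bss, stacks, init/exception sections > MSRAM1 */
}
```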

After testing with the above changes, I found that the program did not load successfully.

Based on the above test and previous responses on the post, I have the following questions:

>First, if I recall correctly that reserved-memory section that was modified in the customer's device tree is for the DDR specifically. You can find the SRAM allocations in the am64-main.dtsi file. So if the customer is adding an SRAM allocation, they need to add it to the SRAM node instead of the DDR node.

Where is the SRAM node in the k3-am64-main.dtsi file? How do I allocate SRAM? What changes do I need to make to the k3-am642-evm.dts file after SRAM allocation?

 

>Second, please note that we have only tested the 1MB VIRTIO section as working in the DDR, NOT in the SRAM. I am not sure whether the resource table also needs to be in DDR, or if it can go into the SRAM. But that is another thing to test.

Has it been verified that an R5F program can be booted into MSRAM by the Linux system, or does this only work in theory?

If it has been verified as feasible, can TI provide a modification example for our reference, including the device tree changes?

  • Hello Huang,

    Glad to hear that you are making progress! I am going to split this into separate steps to make it easier for me to track where you are.

    Step 1: modify the GPIO blink example to be able to be loaded from Linux (already done) 

    Add a resource table, and update the linker.cmd file as documented here: 
    https://dev.ti.com/tirex/explore/node?node=A__AeMVTHckwFDmFoNkRHpRPw__AM64-ACADEMY__WI1KRXP__LATEST

    Step 2: make sure that the R5F memory allocation does not conflict with Linux 

    Please refer to the AM64x academy, multicore module, page "How to allocate memory"
    https://dev.ti.com/tirex/explore/node?a=7qm9DIS__LATEST&node=A__AbwqjEswy38Z6lZWYQC-5g__AM64-ACADEMY__WI1KRXP__LATEST

    The Linux SRAM allocation is defined in file k3-am64-main.dtsi, node oc_sram:

            oc_sram: sram@70000000 {
                    compatible = "mmio-sram";
                    reg = <0x00 0x70000000 0x00 0x200000>;
                    #address-cells = <1>;
                    #size-cells = <1>;
                    ranges = <0x0 0x00 0x70000000 0x200000>;
    
                    tfa-sram@1c0000 {
                            reg = <0x1c0000 0x20000>;
                    };
    
                    dmsc-sram@1e0000 {
                            reg = <0x1e0000 0x1c000>;
                    };
    
                    sproxy-sram@1fc000 {
                            reg = <0x1fc000 0x4000>;
                    };
            };
    

    So the R5F cannot use SRAM addresses 0x701C_0000 to 0x701F_FFFF.
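    Working the offsets out against the SRAM base makes the reserved range explicit (offsets in the child nodes are relative to 0x7000_0000):

```
0x70000000 + 0x1C0000 = 0x701C0000   tfa-sram,    length 0x20000
0x70000000 + 0x1E0000 = 0x701E0000   dmsc-sram,   length 0x1C000
0x70000000 + 0x1FC000 = 0x701FC000   sproxy-sram, length 0x4000
=> 0x701C0000 .. 0x701FFFFF is reserved and must not be used by the R5F
```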

    Step 3: try putting everything EXCEPT the resource table and VRINGS in the SRAM 

    Please DO NOT try to move the resource table or the VRINGs from your working example in Step 1. Leave them where they are for now. We just want to test whether Linux can place program data into the SRAM for the R5F core.

    I don't have time to put together a full example today. But I would expect that you could add the data section that you want to give to the R5F into that oc_sram devicetree node. Let's say you called it r5f0_0_memory@80000.

    Then you should be able to pass in a link to your new sram allocation r5f0_0_memory as an optional sram property in the r5f0_0 node on the board-level devicetree file.

    Refer to Linux SDK, board-support/ti-linux-kernel-6.1.46+gitAUTOINC+247b2535b2-g247b2535b2/Documentation/devicetree/bindings/remoteproc/ti,k3-r5f-rproc.yaml for more information.

    I would expect it to look similar to the references to the memory allocations in the "memory-region" properties that you see in k3-am642-evm.dts, node &main_r5fss0_core0.
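    A minimal sketch of what that might look like (the node name r5f0_0_memory and the 0x80000/0x40000 values are assumptions for illustration, not a verified configuration):

```
/* In k3-am64-main.dtsi, a child of the oc_sram node: */
r5f0_0_memory: r5f0_0_memory@80000 {
        reg = <0x80000 0x40000>;   /* 256 KB at 0x7008_0000 for R5F0_0 */
};

/* In the board-level file, reference it from the R5F core node: */
&main_r5fss0_core0 {
        sram = <&r5f0_0_memory>;   /* optional property, see ti,k3-r5f-rproc.yaml */
};
```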

    Regards,

    Nick

  • hello Nick,

    Referring to your reply, I have done some tests here, and the conclusion is:

    1. Linux successfully boots the R5F0-0 project (gpio_led_blink) to run from MSRAM. We can see the LED flashing (the other R5F and M4F projects are still running from DDR).

    2. After the Linux remoteproc driver boots the MCU+ cores, the Linux system hangs: the Linux heartbeat LED (LD23) stops blinking, and Linux no longer accepts any input from the COM port. The Linux startup log is posted below.

     

    Only the k3-am64-main.dtsi file was modified; the rest of the files are the same as those shipped in TI's Linux SDK 8.6.

    In the k3-am64-main.dtsi file, the oc_sram node and the main_r5fss0_core0 node were modified.

        //In oc_sram node
        oc_sram: sram@70000000 {
    		compatible = "mmio-sram";
    		reg = <0x00 0x70000000 0x00 0x200000>;
    		#address-cells = <1>;
    		#size-cells = <1>;
    		ranges = <0x0 0x00 0x70000000 0x200000>;
    
    		// main_r5fss0_core0_msram:r5fss0_core0_msram@80000 {
    		// 	compatible = "shared-dma-pool";
    		// 	reg = <0x80000 0x40000>;
    		// 	no-map;
    		// };
    
    		main_r5fss0_core0_msram:r5fss0_core0_msram@80000 {
    			reg = <0x80000 0x40000>;
    		};
    		
    		………… 
    
    		};
    	};
    
    	//In main_r5fss0_core0 node
    	main_r5fss0_core0: r5f@78000000 {
    		compatible = "ti,am64-r5f";
    		reg = <0x78000000 0x00010000>,
    		      <0x78100000 0x00010000>;
    		reg-names = "atcm", "btcm";
    		ti,sci = <&dmsc>;
    		ti,sci-dev-id = <121>;
    		ti,sci-proc-ids = <0x01 0xff>;
    		resets = <&k3_reset 121 1>;
    		firmware-name = "am64-main-r5f0_0-fw";
    		ti,atcm-enable = <1>;
    		ti,btcm-enable = <1>;
    		ti,loczrama = <1>;
    		sram = <&main_r5fss0_core0_msram>;  //refer ti,k3-r5f-rproc.yaml
    	};
    

    The above is the complete device tree modification section.

    After testing, it is clear that the gpio_led_blink project itself is not the cause; the hang still occurs even if we delete its .out file from Linux.

    When the EVM board is powered on and Linux starts, Linux prints nothing further after the remoteproc logs appear.

    What changes need to be made to the device tree so that Linux can boot R5F projects running from MSRAM while Linux itself keeps running properly?

    All log messages from remoteproc up to the point where the Linux system hangs during startup:

    [  OK  ] Created slice system-systemd\x2dbacklight.slice.
    [    6.416434] CAN device driver interface
             Starting Load/Save Screen …ess of backlight:ssd1307fb0...
    [  OK  ] Started Load/Save Screen B…tness of backlight:ssd1307fb0.
    [    6.556266] davinci_mdio 300b2400.mdio: Configuring MDIO in manual mode
    [    6.686344] davinci_mdio 300b2400.mdio: davinci mdio revision 1.7, bus freq 1000000
    [    6.698069] k3-m4-rproc 5000000.m4fss: assigned reserved memory node m4f-dma-memory@a4000000
    [    6.738511] davinci_mdio 300b2400.mdio: phy[15]: device 300b2400.mdio:0f, driver TI DP83869
    [    6.757330] k3-m4-rproc 5000000.m4fss: configured M4 for remoteproc mode
    [    6.763952] platform 78000000.r5f: configured R5F for remoteproc mode
    [    6.770905] k3-m4-rproc 5000000.m4fss: local reset is deasserted for device
    [    6.846236] platform 78000000.r5f: assigned reserved memory node r5f-dma-memory@a0000000
    [    6.861951] remoteproc remoteproc0: 5000000.m4fss is available
    [    6.894571] remoteproc remoteproc1: 78000000.r5f is available
    [    6.905515] platform 78200000.r5f: configured R5F for remoteproc mode
    [    6.927692] remoteproc remoteproc0: powering up 5000000.m4fss
    [    6.937510] remoteproc remoteproc0: Booting fw image am64-mcu-m4f0_0-fw, size 444304
    [    6.947359]  remoteproc0#vdev0buffer: assigned reserved memory node m4f-dma-memory@a4000000
    [    6.956840] remoteproc remoteproc1: powering up 78000000.r5f
    [    6.962681] remoteproc remoteproc1: Booting fw image am64-main-r5f0_0-fw, size 858964
    [    6.971241] virtio_rpmsg_bus virtio0: rpmsg host is online
    [    6.977086]  remoteproc0#vdev0buffer: registered virtio0 (type 7)
    [    6.983326] remoteproc remoteproc0: remote processor 5000000.m4fss is now up
    [    6.990867] virtio_rpmsg_bus virtio0: creating channel ti.ipc4.ping-pong addr 0xd
    [    7.000192] virtio_rpmsg_bus virtio0: creating channel rpmsg_chrdev addr 0xe
    [    7.016566]  remoteproc1#vdev0buffer: assigned reserved memory node r5f-dma-memory@a0000000
    [    7.025126] platform 78200000.r5f: assigned reserved memory node r5f-dma-memory@a1000000
    [    7.034456] virtio_rpmsg_bus virtio1: rpmsg host is online
    [    7.039574] remoteproc remoteproc2: 78200000.r5f is available
    [    7.041104]  remoteproc1#vdev0buffer: registered virtio1 (type 7)
    [    7.051996] remoteproc remoteproc1: remote processor 78000000.r5f is now up
    [    7.062462] m_can_platform 20701000.can: m_can device registered (irq=35, version=32)
    [    7.074960] m_can_platform 20711000.can: m_can device registered (irq=37, version=32)
    [    7.091502] remoteproc remoteproc2: powering up 78200000.r5f
    [    7.097348] remoteproc remoteproc2: Booting fw image am64-main-r5f0_1-fw, size 458576
    [    7.108707]  remoteproc2#vdev0buffer: assigned reserved memory node r5f-dma-memory@a1000000
    [    7.119977] virtio_rpmsg_bus virtio2: rpmsg host is online
    [    7.125732]  remoteproc2#vdev0buffer: registered virtio2 (type 7)
    [    7.131960] remoteproc remoteproc2: remote processor 78200000.r5f is now up
    [    7.139213] virtio_rpmsg_bus virtio2: creating channel ti.ipc4.ping-pong addr 0xd
    [    7.147207] virtio_rpmsg_bus virtio2: creating channel rpmsg_chrdev addr 0xe
    [    7.170965] platform 78400000.r5f: configured R5F for remoteproc mode
    [    7.185111] platform 78400000.r5f: assigned reserved memory node r5f-dma-memory@a2000000
    [    7.217109] remoteproc remoteproc3: 78400000.r5f is available
    [    7.269471] platform 78600000.r5f: configured R5F for remoteproc mode
    [    7.277015] remoteproc remoteproc3: powering up 78400000.r5f
    [    7.286098] remoteproc remoteproc3: Booting fw image am64-main-r5f1_0-fw, size 454872
    [    7.290839] remoteproc remoteproc5: 30034000.pru is available
    [    7.298630] platform 78600000.r5f: assigned reserved memory node r5f-dma-memory@a3000000
    [    7.308650]  remoteproc3#vdev0buffer: assigned reserved memory node r5f-dma-memory@a2000000
    [    7.320567] virtio_rpmsg_bus virtio3: rpmsg host is online
    [    7.330026]  remoteproc3#vdev0buffer: registered virtio3 (type 7)
    [    7.342011] remoteproc remoteproc3: remote processor 78400000.r5f is now up
    [    7.353996] virtio_rpmsg_bus virtio3: creating channel ti.ipc4.ping-pong addr 0xd
    [    7.363615] remoteproc remoteproc4: 78600000.r5f is available
    [    7.368046] virtio_rpmsg_bus virtio3: creating channel rpmsg_chrdev addr 0xe
    [    7.383543] remoteproc remoteproc4: powering up 78600000.r5f
    [    7.389363] remoteproc remoteproc4: Booting fw image am64-main-r5f1_1-fw, size 458704
    [    7.402987]  remoteproc4#vdev0buffer: assigned reserved memory node r5f-dma-memory@a3000000
    [    7.414644] virtio_rpmsg_bus virtio4: rpmsg host is online
    [    7.414774]  remoteproc4#vdev0buffer: registered virtio4 (type 7)
    [    7.414782] remoteproc remoteproc4: remote processor 78600000.r5f is now up
    [    7.415798] virtio_rpmsg_bus virtio4: creating channel ti.ipc4.ping-pong addr 0xd
    [    7.416112] virtio_rpmsg_bus virtio4: creating channel rpmsg_chrdev addr 0xe
    [    7.466776] remoteproc remoteproc6: 30004000.rtu is available
    [    7.478329] remoteproc remoteproc7: 3000a000.txpru is available
    [    7.606253] remoteproc remoteproc8: 30038000.pru is available
    [    7.615296] remoteproc remoteproc9: 30006000.rtu is available
    [    7.624610] remoteproc remoteproc10: 3000c000.txpru is available
    [    7.626183] remoteproc remoteproc11: 300b4000.pru is available
    [    7.628167] remoteproc remoteproc12: 30084000.rtu is available
    [    7.646197] remoteproc remoteproc13: 3008a000.txpru is available
    [    7.712864] remoteproc remoteproc14: 300b8000.pru is available
    [    7.807600] remoteproc remoteproc15: 30086000.rtu is available
    [    7.837950] remoteproc remoteproc16: 3008c000.txpru is available
    [  OK  ] Created slice system-systemd\x2dfsck.slice.
    [    9.483514] TI DP83869 300b2400.mdio:0f: attached PHY driver [TI DP83869] (mii_bus:phy_addr=300b2400.mdio:0f, irq=POLL)
    [    9.502728] icssg-prueth icssg1-eth: TI PRU ethernet driver initialized: single EMAC mode
    [    9.724215] usbcore: registered new interface driver usbfs
    [    9.799809] usbcore: registered new interface driver hub
    [    9.805407] usbcore: registered new device driver usb
    [  OK  ] Found device /dev/mmcblk0p1.
             Starting File System Check on /dev/mmcblk0p1...
    [  OK  ] Started udev Wait for Complete Device Initialization.
    [  OK  ] Started Hardware RNG Entropy Gatherer Daemon.
    [  OK  ] Reached target System Initialization.
    [  OK  ] Started Daily rotation of log files.
    [  OK  ] Started Timer service to update the IP on OLED each 10s.
    [  OK  ] Started Daily Cleanup of Temporary Directories.
    [  OK  ] Reached target Timers.
    [  OK  ] Listening on Avahi mDNS/DNS-SD Stack Activation Socket.
    [  OK  ] Listening on D-Bus System Message Bus Socket.
             Starting Docker Socket for the API.
    [  OK  ] Listening on dropbear.socket.
             Starting Reboot and dump vmcore via kexec...
    [  OK  ] Listening on Docker Socket for the API.
    [  OK  ] Reached target Sockets.
    [  OK  ] Reached target Basic System.
    [  OK  ] Started Job spooling tools.
    [  OK  ] Started Periodic Command Scheduler.
    [  OK  ] Started D-Bus System Message Bus.
             Starting Ethernet Bridge Filtering Tables...
             Starting Print notice about GPLv3 packages...
             Starting IPv4 Packet Filtering Framework...
    [  OK  ] Started irqbalance daemon.
             Starting Matrix GUI...
             Starting startwlanap...
             Starting startwlansta...
             Starting System Logger Daemon "default" instance...
    

  • Hello Nick,

    It is now possible for Linux to boot an R5F core running from MSRAM, and Linux runs properly.

    But there are still some problems, and they are as follows:

    • If the M4F is not running a program, R5F0-0 (MSRAM), R5F0-1 (DDR), R5F1-0 (DDR), and R5F1-1 (DDR) all boot normally from Linux, and Linux runs properly.

    When I tested today, I found that the .out files for the other MCU+ cores had not been deleted before; only the .out file of the gpio_led_blink project, which runs on R5F0-0 from the MSRAM region, had been deleted.

    The remaining MCU+ cores were linked to the following firmware (the ipc_rpmsg_echo_linux example, unmodified):

    > R5F0-1 -> ipc_rpmsg_echo_linux_am64x-evm_r5fss0-1_freertos_ti-arm-clang.out

    > R5F1-0 -> ipc_rpmsg_echo_linux_am64x-evm_r5fss1-0_freertos_ti-arm-clang.out

    > R5F1-1 -> ipc_rpmsg_echo_linux_am64x-evm_r5fss1-1_freertos_ti-arm-clang.out

    > M4F -> ipc_rpmsg_echo_linux_am64x-evm_m4fss0-0_freertos_ti-arm-clang.out

    After deleting the .out file linked to the M4F core and powering the AM64x EVM board on again, Linux runs normally, and the COM port accepts commands.

    We're not using the M4F core at the moment, so we didn't test it any further.

    • While R5F0-0 runs from MSRAM and the other R5F cores run from DDR, the R5F cores running from DDR cannot use ipc_rpmsg for data exchange between R5F cores; Linux and the R5F cores can still exchange data via ipc_rpmsg (tested using ti-rpmsg-char).

    I modified the ipc_rpmsg_echo_linux example. The ipc_rpmsg module was tested only between R5F0-1, R5F1-0, and R5F1-1.

    The ipc_rpmsg_echo.c file of the R5F0-1 project was modified as follows:

    //In ipc_rpmsg_echo.c file 105-112 line
    uint32_t gRemoteCoreId[] = {
    //    CSL_CORE_ID_R5FSS0_0,
        CSL_CORE_ID_R5FSS0_1,
        CSL_CORE_ID_R5FSS1_0,
        CSL_CORE_ID_R5FSS1_1,
    //    CSL_CORE_ID_M4FSS0_0,
        CSL_CORE_ID_MAX /* this value indicates the end of the array */
    };
    
    //In ipc_rpmsg_echo.c file 316 line
    if( IpcNotify_getSelfCoreId() == CSL_CORE_ID_R5FSS0_1 )

    The R5F1-0 and R5F1-1 projects were not modified.

    When tested, the results were consistent with the behavior described in the thread below:

    https://e2e.ti.com/support/processors-group/processors/f/processors-forum/1256786/am6442-ipc_rpmsg_echo_linux-rpmsg-cannot-be-used-for-data-transfer-between-r5f-and-m4f-cores-in-linux

    Will TI consider optimizing this part in the future?

    Regards,

    Huang

  • Hello Huang,

    Next steps for debugging

    1) just to triple check: which versions of the SDK are you using for both MCU+ SDK, and Linux SDK?

    2) I am having trouble understanding exactly what is going on in your description:

    1. Linux successfully boots the R5F0-0 project (gpio_led_blink) to run from MSRAM. We can see the LED flashing (the other R5F and M4F projects are still running from DDR).

    2. After the Linux remoteproc driver boots the MCU+ cores, the Linux system hangs: the Linux heartbeat LED (LD23) stops blinking, and Linux no longer accepts any input from the COM port. The Linux startup log is posted below.

    If I understand this properly, the R5F core continues to run as expected (i.e., the gpio_led_blink example continues to blink an LED), but at the same time Linux becomes nonresponsive?

    3) Did you check to make sure that there are no resource conflicts over the UART? Something bad might happen if both Linux and the R5F code try to use the same UART peripheral. For more information, refer to https://dev.ti.com/tirex/explore/node?a=7qm9DIS__LATEST&node=A__AROmAnuFxeqz306G2XuoZw__AM64-ACADEMY__WI1KRXP__LATEST

    4) make sure that there are no memory conflicts across ALL of the cores (i.e., between R5F projects and M4F cores, as well as between those cores and Linux). 

    What if things still are not working? 

    Feel free to give Tony one of your modified projects for me to review if you still cannot get it working. Please highlight exactly where you made changes - git patches are a really nice way to show me exactly what you changed in our examples.

    If you are not already using git version control, I would highly suggest that you start using it. It makes it much easier to keep track of all the changes and tests. For more information, refer to https://dev.ti.com/tirex/explore/node?a=7qm9DIS__LATEST&node=A__AThWljFSwXcdQKgF4QGSXw__AM64-ACADEMY__WI1KRXP__LATEST

    Regards,

    Nick

  • Hello Nick,

    Thank you very much for your reply.

    You are right!

    Linux can now boot R5F cores to run from MSRAM, and the M4F core and the Linux system run normally. On this basis, the RPMSG module works properly regardless of whether the shared memory area between the MCU+ cores is placed in DDR or MSRAM.

    Because I was in a hurry to test, I used some wrong files, which caused the errors.

    Test using MCU+SDK 9.1

    >If the M4F is not running the program, R5F0-0 (MSRAM), R5F0-1 (DDR), R5F1-0 (DDR), R5F1-1 (DDR) can boot normally from Linux, and Linux is running properly.

    It may be that I used the wrong .dtb file in the earlier test. With a .dtb generated according to the reply above, there is no problem, and the M4F core (DDR) starts normally.

    > While R5F0-0 runs on MSRAM, the rest of the R5F core runs on DDR memory. R5F running on DDR cannot use ipc_rpmsg for data interaction between R5F cores, Linux and R5F can use ipc_rpmsg for data interaction (tested using ti-rpmsg-char).

    I reconfigured the MCU+ cores that use IPC in the system project, but during the test I did not recompile all of the MCU+ projects and replace all of the .out files. My guess is that some MCU+ cores were stuck waiting in IpcNotify_syncAll(SystemP_WAIT_FOREVER).

    Regards,

    Huang

  • Hello Huang,

    Glad to hear that things are working for you! Please feel free to create a new thread if you have any additional questions.

    Followup

    Could you do me a favor? You asked a really good question. I would like to create a new AM64x academy page based on our discussion. Can you attach these parts of your Linux devicetree file & remote core linker files? 

    Linux devicetree:
    sram devicetree node
    remote core devicetree nodes that reference the SRAM

    Remote core linker files:
    The parts where you assign different regions to DDR & SRAM

    Regards,

    Nick

  • Hello Nick

    No problem. I have been studying with AM64x Academy. I also hope that AM64x Academy can help more AM64x users.

    Take the gpio_led_blink example in the MCU+SDK (Linux SDK 8.6 / MCU+SDK 9.1).

    To have Linux boot an R5F core running from MSRAM, the main changes are as follows:

    • Linux devicetree --> k3-am64-main.dtsi (only this file is modified; the other .dtsi/.dts files do not need changes)

    //sram devicetree node

    oc_sram: sram@70000000 {
    		compatible = "mmio-sram";
    		reg = <0x00 0x70000000 0x00 0x200000>;
    		#address-cells = <1>;
    		#size-cells = <1>;
    		ranges = <0x0 0x00 0x70000000 0x200000>;
    
            //SRAM node allocation -> MSRAM: ORIGIN = 0x70080000 , LENGTH = 0x40000 
    		main_r5fss0_core0_msram:r5fss0_core0_msram@80000 {
    			reg = <0x80000 0x40000>;
    		};  
    
    		atf-sram@1c0000 {
    			reg = <0x1c0000 0x20000>;
    		};
    
    		dmsc-sram@1e0000 {
    			reg = <0x1e0000 0x1c000>;
    		};
    
    		sproxy-sram@1fc000 {
    			reg = <0x1fc000 0x4000>;
    		};
    	};
    

    //remote core devicetree nodes that reference the SRAM

    		main_r5fss0_core0: r5f@78000000 {
    			compatible = "ti,am64-r5f";
    			reg = <0x78000000 0x00010000>,
    			      <0x78100000 0x00010000>;
    			reg-names = "atcm", "btcm";
    			ti,sci = <&dmsc>;
    			ti,sci-dev-id = <121>;
    			ti,sci-proc-ids = <0x01 0xff>;
    			resets = <&k3_reset 121 1>;
    			firmware-name = "am64-main-r5f0_0-fw";
    			ti,atcm-enable = <1>;
    			ti,btcm-enable = <1>;
    			ti,loczrama = <1>;
    			sram = <&main_r5fss0_core0_msram>; //sram node reference
    		};

    • Remote core linker files(linker.cmd)

    //The parts where you assign different regions to DDR & SRAM

    SECTIONS
    {
        .vectors  : {
        } > R5F_VECS   , palign(8) 
    
        GROUP  :   {
        .text.hwi : {
        } palign(8)
        .text.cache : {
        } palign(8)
        .text.mpu : {
        } palign(8)
        .text.boot : {
        } palign(8)
        .text:abort : {
        } palign(8)
        } > MSRAM  
    
    
        GROUP  :   {
        .text : {
        } palign(8)
        .rodata : {
        } palign(8)
        } > MSRAM  
    
    
        GROUP  :   {
        .data : {
        } palign(8)
        } > MSRAM  
    
    
        GROUP  :   {
        .bss : {
        } palign(8)
        RUN_START(__BSS_START)
        RUN_END(__BSS_END)
        .sysmem : {
        } palign(8)
        .stack : {
        } palign(8)
        } > MSRAM  
    
        GROUP  :   {
        .irqstack : {
            . = . + __IRQ_STACK_SIZE;
        } align(8)
        RUN_START(__IRQ_STACK_START)
        RUN_END(__IRQ_STACK_END)
        .fiqstack : {
            . = . + __FIQ_STACK_SIZE;
        } align(8)
        RUN_START(__FIQ_STACK_START)
        RUN_END(__FIQ_STACK_END)
        .svcstack : {
            . = . + __SVC_STACK_SIZE;
        } align(8)
        RUN_START(__SVC_STACK_START)
        RUN_END(__SVC_STACK_END)
        .abortstack : {
            . = . + __ABORT_STACK_SIZE;
        } align(8)
        RUN_START(__ABORT_STACK_START)
        RUN_END(__ABORT_STACK_END)
        .undefinedstack : {
            . = . + __UNDEFINED_STACK_SIZE;
        } align(8)
        RUN_START(__UNDEFINED_STACK_START)
        RUN_END(__UNDEFINED_STACK_END)
        } > MSRAM  
    
    
        GROUP  :   {
        .ARM.exidx : {
        } palign(8)
        .init_array : {
        } palign(8)
        .fini_array : {
        } palign(8)
        } > MSRAM  
    
        .bss.user_shared_mem (NOLOAD) : {
        } > USER_SHM_MEM    
    
        .bss.log_shared_mem (NOLOAD) : {
        } > LOG_SHM_MEM    
    
        .bss.ipc_vring_mem (NOLOAD) : {
        } > RTOS_NORTOS_IPC_SHM_MEM    
    
        .bss.nocache (NOLOAD) : {
        } > NON_CACHE_MEM    
        
        // Add resource_table
        GROUP  :   {
        .resource_table : {
        } palign(4096)
        } > DDR_0  
    }
    
    MEMORY
    {
        R5F_VECS   : ORIGIN = 0x0 , LENGTH = 0x40 
        R5F_TCMA   : ORIGIN = 0x40 , LENGTH = 0x7FC0 
        R5F_TCMB0   : ORIGIN = 0x41010000 , LENGTH = 0x8000 
        NON_CACHE_MEM   : ORIGIN = 0x70060000 , LENGTH = 0x8000 
        MSRAM   : ORIGIN = 0x70080000 , LENGTH = 0x40000 
        USER_SHM_MEM   : ORIGIN = 0x701D0000 , LENGTH = 0x80 
        LOG_SHM_MEM   : ORIGIN = 0x701D0080 , LENGTH = 0x3F80 
        RTOS_NORTOS_IPC_SHM_MEM   : ORIGIN = 0x701D4000 , LENGTH = 0xC000 
        FLASH   : ORIGIN = 0x60100000 , LENGTH = 0x80000 
        
        DDR_0   : ORIGIN = 0xA0100000 , LENGTH = 0x1000 // Allocate the DDR area to place the resource table
        DDR_1   : ORIGIN = 0xA0101000 , LENGTH = 0xEFF000 
        LINUX_IPC_SHM_MEM   : ORIGIN = 0xA0000000 , LENGTH = 0x100000 
    }

    Don't forget to add the resource table to the project; I configure it through the example.syscfg file (IPC -> Linux A53 IPC RP Message).

    Linux booting the M4F to run from MSRAM is not something I am concerned about right now, so I did not test it on my side. I hope Nick can cover it in the AM64x Academy.

    Regards,

    Huang

  • Hello Huang,

    This is great, thank you again for sharing your code with me! I will build on your example when I have some spare time to add more pages to the multicore academy.

    Regards,

    Nick

  • Hello Huang,

    I'm working on adding the SRAM example we talked about for R5F and M4F. I did notice one thing that may or may not impact your use case:

    I am building off of the ipc_rpmsg_echo_linux example, SDK 9.1.

    I looked at R5F0_0 SysConfig setting TI DRIVER PORTING LAYER > MPU ARMv7 > CONFIG_MPU_REGION3 to make sure that the SRAM memory region had "Allow Code Execution" checked.

    By default, the "Access Permissions" listed "Supervisor RD+WR, User RD". I am not sure if that would prevent the stack from working properly, or if you need to make sure to update that to "Supervisor RD+WR, User RD+WR".

    Regards,

    Nick