
AM6442: AM64x Shared Ethernet between R0 (FreeRTOS) and A53 (Linux), RPMessage issue

Part Number: AM6442

Hello,

We want to implement a shared Ethernet connection between R0 (FreeRTOS) and the A53 (Linux) on the AM6442.

An ICSSG port is used for the Ethernet netif. The intercore communication interface is initialized, and the two netifs act as ports of a bridge.

icmp.c has been patched so that the bridge can answer pings; this already works.

Bootflow / Behavior:

R0 starts R1..R3 and Linux.

We wait for Linux to finish booting, equivalent to the MCU+ (version 11.00.00.15) and Industrial (version 11.00.00.08) SDK examples, using the function RPMessage_waitForLinuxReady().

During the Linux boot process, this function returns successfully on the R0 core.

After that, we register a callback and send the announcement with:

#define LINUX_CHANNEL_ENDPT_ID 13

#define IPC_RPMESSAGE_SERVICE_PING "ti.icve"

RPMessage_announce(CSL_CORE_ID_A53SS0_0, LINUX_CHANNEL_ENDPT_ID, IPC_RPMESSAGE_SERVICE_PING);
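For illustration, this is roughly the sequence on R0 (only a minimal sketch based on the MCU+ SDK ipc_rpmsg API; the function name icve_rpmsg_setup() and the object gIcveRpmsgObj are illustrative and not taken from our actual code):

/* Minimal sketch of the R0-side sequence described above (MCU+ SDK ipc_rpmsg API).
 * LINUX_CHANNEL_ENDPT_ID and IPC_RPMESSAGE_SERVICE_PING are the defines shown above.
 * CSL_CORE_ID_A53SS0_0 comes from the SOC headers pulled in by the SDK. */
#include <drivers/ipc_rpmsg.h>
#include <kernel/dpl/SystemP.h>

static RPMessage_Object gIcveRpmsgObj;   /* local endpoint object (illustrative name) */

void icve_rpmsg_setup(void)
{
    RPMessage_CreateParams createParams;

    /* Block until the Linux rpmsg/virtio transport reports it is ready */
    RPMessage_waitForLinuxReady(SystemP_WAIT_FOREVER);

    /* Create the local endpoint (13) that Linux will talk to */
    RPMessage_CreateParams_init(&createParams);
    createParams.localEndPt = LINUX_CHANNEL_ENDPT_ID;
    RPMessage_construct(&gIcveRpmsgObj, &createParams);

    /* Announce the "ti.icve" service to the A53 so the Linux rpmsg_eth
     * driver can bind a channel to this endpoint */
    RPMessage_announce(CSL_CORE_ID_A53SS0_0, LINUX_CHANNEL_ENDPT_ID,
                       IPC_RPMESSAGE_SERVICE_PING);
}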

Problem:

At the moment, we have to send the announce message in a while loop at a 2-second interval until we receive a message!

Linux registers the rpmsg channel neither immediately nor correctly. It picks up the announce message between 15 and 300 seconds after boot, very non-deterministically. When Linux registers the answer, it does not create the rpmsg channel, although an Ethernet interface is created. The rpmsg response from Linux is received on R0 via the callback, but it is not the request for shared-memory information that we would expect.
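To make the workaround concrete, the retry loop looks roughly like this (only a sketch; the function name and the 128-byte buffer are illustrative, and the sketch uses a blocking RPMessage_recv() instead of our registered callback):

/* Sketch of the current workaround: re-announce every ~2 s until the first
 * message from Linux arrives on the endpoint constructed above. */
#include <drivers/ipc_rpmsg.h>
#include <kernel/dpl/SystemP.h>
#include <kernel/dpl/ClockP.h>

extern RPMessage_Object gIcveRpmsgObj;   /* endpoint from the sketch above */

void icve_announce_until_first_msg(void)
{
    char     rxBuf[128];
    uint16_t rxLen, remoteCoreId, remoteEndPt;
    int32_t  status = SystemP_FAILURE;

    while (status != SystemP_SUCCESS)
    {
        /* (Re)send the name-service announcement to the A53 */
        RPMessage_announce(CSL_CORE_ID_A53SS0_0, LINUX_CHANNEL_ENDPT_ID,
                           IPC_RPMESSAGE_SERVICE_PING);

        /* Wait up to ~2 s for the first request from the Linux side;
         * on timeout, loop around and announce again */
        rxLen  = sizeof(rxBuf);
        status = RPMessage_recv(&gIcveRpmsgObj, rxBuf, &rxLen,
                                &remoteCoreId, &remoteEndPt,
                                ClockP_usecToTicks(2u * 1000u * 1000u));
    }
    /* Expected (but currently not observed): the first message is the
     * shared-memory information request from the rpmsg_eth driver. */
}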

Assumption:

The described behavior most likely results in a faulty configuration of the shared Ethernet, or no configuration at all.

There could be a problem in the device tree regarding the mbox mode; we are unsure here. Is the following configuration valid for RPMsg communication between R0 and the A53?

Any other ideas are welcome!

Device Tree:

&mailbox0_cluster2 {
    status = "okay";

    mbox_main_r5fss0_core0: mbox-main-r5fss0-core0 {
        ti,mbox-rx = <0 0 2>;
        ti,mbox-tx = <1 0 2>;
    };
};

/* Cluster mode for remoteproc driver set to single-CPU mode */
&main_r5fss0 {
    ti,cluster-mode = <2>;
};

&main_r5fss0_core0 {
    mboxes = <&mailbox0_cluster2>, <&mbox_main_r5fss0_core0>;
    memory-region = <&main_r5fss0_core0_dma_memory_region>,
                    <&main_r5fss0_core0_memory_region>,
                    <&main_r5fss0_core0_memory_region_shm>;
};

Information:

At the moment, we are using:

MCU+ SDK Version 09.01.00 (for the TI lwIP stack)

Industrial SDK Version 09.02.00 (currently not in use)

Typical log in Linux (only remoteproc- and rpmsg-related output):

[    6.111329] omap-mailbox 29020000.mailbox: omap mailbox rev 0x66fc9100
[    6.517054] platform 78000000.r5f: R5F core may have been powered on by a different host, programmed state (0) != actual state (1)
[    6.626408] platform 78000000.r5f: configured R5F for IPC-only mode
[    6.652051] platform 78000000.r5f: assigned reserved memory node r5f-dma-memory@a0000000
[    6.729469] remoteproc remoteproc0: 78000000.r5f is available
[    6.779652] remoteproc remoteproc0: attaching to 78000000.r5f
[    6.841043] platform 78000000.r5f: R5F core initialized in IPC-only mode
[    6.913405] rproc-virtio rproc-virtio.1.auto: assigned reserved memory node r5f-dma-memory@a0000000
[    7.035426] virtio_rpmsg_bus virtio0: rpmsg host is online
[    7.043940] rproc-virtio rproc-virtio.1.auto: registered virtio0 (type 7)
[    7.071093] remoteproc remoteproc0: remote processor 78000000.r5f is now attached
[   65.822444] virtio_rpmsg_bus virtio0: creating channel ti.icve addr 0xd
[   65.828665] rpmsg_eth_probe: probe called
[   65.834573] rpmsg_eth virtio0.ti.icve.-1.13: start_addr = 0xa0200000
[   65.841775] rpmsg_eth virtio0.ti.icve.-1.13: size 0xc00000
[   65.847035] rpmsg_eth_init_ndev: init ndev called
[   65.851340] rpmsg_eth virtio0.ti.icve.-1.13: Default MAC Address = 00:00:00:00:00:00
[   65.858306] rpmsg_eth virtio0.ti.icve.-1.13: Assigning random MAC address
[   65.864389] rpmsg_eth virtio0.ti.icve.-1.13: New MAC Address = fa:cd:d6:80:28:83
[   65.872040] virtio_rpmsg_bus virtio0: msg received with no recipient
[   65.881795] rpmsg_eth_set_mac_address: set mac address called
[   65.885130] virtio_rpmsg_bus virtio0: msg received with no recipient
[   65.887588] rpmsg_eth_create_send_request: create send request called
[   65.892690] virtio_rpmsg_bus virtio0: creating channel ti.icve addr 0xd
[   65.901073] rpmsg_eth: create_request: create request called
[   65.904281] virtio_rpmsg_bus virtio0: channel ti.icve:ffffffff:d already exist
[   65.915777] virtio_rpmsg_bus virtio0: rpmsg_create_channel failed
[   65.924457] virtio_rpmsg_bus virtio0: creating channel ti.icve addr 0xd
[   65.930545] virtio_rpmsg_bus virtio0: channel ti.icve:ffffffff:d already exist
[   65.937046] virtio_rpmsg_bus virtio0: rpmsg_create_channel failed
[   65.943326] virtio_rpmsg_bus virtio0: msg received with no recipient
[   65.949150] virtio_rpmsg_bus virtio0: creating channel ti.icve addr 0xd
[   65.955155] virtio_rpmsg_bus virtio0: channel ti.icve:ffffffff:d already exist
[   65.961634] virtio_rpmsg_bus virtio0: rpmsg_create_channel failed
[   65.967162] virtio_rpmsg_bus virtio0: creating channel ti.icve addr 0xd
[   65.973131] virtio_rpmsg_bus virtio0: channel ti.icve:ffffffff:d already exist
[   65.979639] virtio_rpmsg_bus virtio0: rpmsg_create_channel failed
[   65.985120] virtio_rpmsg_bus virtio0: creating channel ti.icve addr 0xd
[   65.991101] virtio_rpmsg_bus virtio0: channel ti.icve:ffffffff:d already exist
[   65.997596] virtio_rpmsg_bus virtio0: rpmsg_create_channel failed
[   66.003135] virtio_rpmsg_bus virtio0: creating channel ti.icve addr 0xd
[   66.009184] virtio_rpmsg_bus virtio0: channel ti.icve:ffffffff:d already exist
[   66.015729] virtio_rpmsg_bus virtio0: rpmsg_create_channel failed
[   66.021236] virtio_rpmsg_bus virtio0: creating channel ti.icve addr 0xd
[   66.023819] rpmsg_eth_ndo_open: open called
[   66.027182] virtio_rpmsg_bus virtio0: channel ti.icve:ffffffff:d already exist
[   66.027193] virtio_rpmsg_bus virtio0: rpmsg_create_channel failed
[   66.027217] virtio_rpmsg_bus virtio0: creating channel ti.icve addr 0xd
[   66.048710] virtio_rpmsg_bus virtio0: channel ti.icve:ffffffff:d already exist
[   66.055201] virtio_rpmsg_bus virtio0: rpmsg_create_channel failed
[   66.058747] rpmsg_eth_set_rx_mode: set rx mode called
[   66.060708] virtio_rpmsg_bus virtio0: creating channel ti.icve addr 0xd
[   66.071116] rpmsg_eth_ndo_set_rx_mode_work: set rx mode work called
[   66.072668] virtio_rpmsg_bus virtio0: channel ti.icve:ffffffff:d already exist
[   66.083838] rpmsg_eth_set_rx_mode: set rx mode called
[   66.084725] virtio_rpmsg_bus virtio0: rpmsg_create_channel failed
[   66.089771] rpmsg_eth_add_mc_addr: add multicast address called
[   66.095290] virtio_rpmsg_bus virtio0: creating channel ti.icve addr 0xd
[   66.100710] rpmsg_eth_create_send_request: create send request called
[   66.111124] rpmsg_eth: create_request: create request called
[   66.113136] virtio_rpmsg_bus virtio0: channel ti.icve:ffffffff:d already exist
[   66.124438] virtio_rpmsg_bus virtio0: rpmsg_create_channel failed
[   66.131835] virtio_rpmsg_bus virtio0: creating channel ti.icve addr 0xd
[   66.140387] virtio_rpmsg_bus virtio0: channel ti.icve:ffffffff:d already exist
[   66.148661] virtio_rpmsg_bus virtio0: rpmsg_create_channel failed
[   66.155858] virtio_rpmsg_bus virtio0: creating channel ti.icve addr 0xd
[   66.163381] virtio_rpmsg_bus virtio0: channel ti.icve:ffffffff:d already exist
[   66.165028] rpmsg_eth_state_machine: state machine called
[   66.171535] virtio_rpmsg_bus virtio0: rpmsg_create_channel failed
[   66.180590] rpmsg_eth_set_rx_mode: set rx mode called
[   66.182343] virtio_rpmsg_bus virtio0: creating channel ti.icve addr 0xd
[   66.187828] rpmsg_eth_create_send_request: create send request called
[   66.193522] virtio_rpmsg_bus virtio0: channel ti.icve:ffffffff:d already exist
[   66.210061] rpmsg_eth: create_request: create request called
[   66.218505] virtio_rpmsg_bus virtio0: rpmsg_create_channel failed
[   66.220463] rpmsg_eth virtio0.ti.icve.-1.13: Failed to receive response within 25 jiffies
[   66.233048] virtio_rpmsg_bus virtio0: creating channel ti.icve addr 0xd
[   66.235571] rpmsg_eth_ndo_set_rx_mode_work: set rx mode work called
[   66.244677] rpmsg_eth_add_mc_addr: add multicast address called
[   66.246537] virtio_rpmsg_bus virtio0: channel ti.icve:ffffffff:d already exist
[   66.254368] rpmsg_eth_create_send_request: create send request called
[   66.263798] rpmsg_eth: create_request: create request called
[   66.270992] virtio_rpmsg_bus virtio0: rpmsg_create_channel failed
[   66.281743] virtio_rpmsg_bus virtio0: creating channel ti.icve addr 0xd
[   66.289535] virtio_rpmsg_bus virtio0: channel ti.icve:ffffffff:d already exist
[   66.305053] virtio_rpmsg_bus virtio0: rpmsg_create_channel failed
[   66.313217] virtio_rpmsg_bus virtio0: creating channel ti.icve addr 0xd
[   66.323093] virtio_rpmsg_bus virtio0: channel ti.icve:ffffffff:d already exist
[   66.332569] virtio_rpmsg_bus virtio0: rpmsg_create_channel failed
[   66.339566] virtio_rpmsg_bus virtio0: creating channel ti.icve addr 0xd
[   66.347038] virtio_rpmsg_bus virtio0: channel ti.icve:ffffffff:d already exist
[   66.355141] virtio_rpmsg_bus virtio0: rpmsg_create_channel failed
[   66.362167] virtio_rpmsg_bus virtio0: creating channel ti.icve addr 0xd
[   66.369846] virtio_rpmsg_bus virtio0: channel ti.icve:ffffffff:d already exist
[   66.376581] virtio_rpmsg_bus virtio0: rpmsg_create_channel failed
[   66.381042] rpmsg_eth virtio0.ti.icve.-1.13: Failed to receive response within 25 jiffies
[   66.385281] virtio_rpmsg_bus virtio0: creating channel ti.icve addr 0xd
[   66.396578] virtio_rpmsg_bus virtio0: channel ti.icve:ffffffff:d already exist
[   66.403334] virtio_rpmsg_bus virtio0: rpmsg_create_channel failed
[   66.408955] virtio_rpmsg_bus virtio0: creating channel ti.icve addr 0xd
[   66.415096] virtio_rpmsg_bus virtio0: channel ti.icve:ffffffff:d already exist
[   66.421650] virtio_rpmsg_bus virtio0: rpmsg_create_channel failed
[   66.427168] virtio_rpmsg_bus virtio0: creating channel ti.icve addr 0xd
[   66.433154] virtio_rpmsg_bus virtio0: channel ti.icve:ffffffff:d already exist
[   66.439670] virtio_rpmsg_bus virtio0: rpmsg_create_channel failed
[   66.445192] virtio_rpmsg_bus virtio0: creating channel ti.icve addr 0xd
[   66.451154] virtio_rpmsg_bus virtio0: channel ti.icve:ffffffff:d already exist
[   66.457674] virtio_rpmsg_bus virtio0: rpmsg_create_channel failed
[   66.463270] virtio_rpmsg_bus virtio0: creating channel ti.icve addr 0xd
[   66.469287] virtio_rpmsg_bus virtio0: channel ti.icve:ffffffff:d already exist
[   66.475790] virtio_rpmsg_bus virtio0: rpmsg_create_channel failed
[   66.481366] virtio_rpmsg_bus virtio0: creating channel ti.icve addr 0xd
[   66.485285] rpmsg_eth_set_rx_mode: set rx mode called
[   66.487305] virtio_rpmsg_bus virtio0: channel ti.icve:ffffffff:d already exist
[   66.491797] rpmsg_eth_ndo_set_rx_mode_work: set rx mode work called
[   66.498207] virtio_rpmsg_bus virtio0: rpmsg_create_channel failed
[   66.509244] virtio_rpmsg_bus virtio0: creating channel ti.icve addr 0xd
[   66.513176] rpmsg_eth_set_rx_mode: set rx mode called
[   66.515217] virtio_rpmsg_bus virtio0: channel ti.icve:ffffffff:d already exist
[   66.525058] rpmsg_eth_set_rx_mode: set rx mode called
[   66.530566] rpmsg_eth_add_mc_addr: add multicast address called
[   66.535868] virtio_rpmsg_bus virtio0: rpmsg_create_channel failed
[   66.536953] rpmsg_eth_create_send_request: create send request called
[   66.541334] virtio_rpmsg_bus virtio0: creating channel ti.icve addr 0xd
[   66.549381] rpmsg_eth: create_request: create request called
[   66.552999] virtio_rpmsg_bus virtio0: channel ti.icve:ffffffff:d already exist
[   66.564419] virtio_rpmsg_bus virtio0: rpmsg_create_channel failed
[   66.569894] virtio_rpmsg_bus virtio0: creating channel ti.icve addr 0xd
[   66.575827] virtio_rpmsg_bus virtio0: channel ti.icve:ffffffff:d already exist
[   66.582264] virtio_rpmsg_bus virtio0: rpmsg_create_channel failed
[   66.587727] virtio_rpmsg_bus virtio0: creating channel ti.icve addr 0xd
[   66.593639] virtio_rpmsg_bus virtio0: channel ti.icve:ffffffff:d already exist
[   66.600093] virtio_rpmsg_bus virtio0: rpmsg_create_channel failed
[   66.605544] virtio_rpmsg_bus virtio0: creating channel ti.icve addr 0xd
[   66.611486] virtio_rpmsg_bus virtio0: channel ti.icve:ffffffff:d already exist
[   66.618010] virtio_rpmsg_bus virtio0: rpmsg_create_channel failed
[   66.661006] rpmsg_eth virtio0.ti.icve.-1.13: Failed to receive response within 25 jiffies
[   66.668328] rpmsg_eth_ndo_set_rx_mode_work: set rx mode work called
[   66.673913] rpmsg_eth_add_mc_addr: add multicast address called
[   66.679205] rpmsg_eth_create_send_request: create send request called
[   66.684941] rpmsg_eth: create_request: create request called
[   66.792961] rpmsg_eth virtio0.ti.icve.-1.13: Failed to receive response within 25 jiffies

Devices:

In /dev, only rpmsg_ctrl0 is listed as an rpmsg or remoteproc device.

autofs           loop7         ptyp9        tty18  tty42  ttyS0    vcs6
block            mapper        ptypa        tty19  tty43  ttyS1    vcsa
btrfs-control    mem           ptypb        tty2   tty44  ttyS2    vcsa1
bus              mmcblk0       ptypc        tty20  tty45  ttyS3    vcsa2
char             mmcblk0boot0  ptypd        tty21  tty46  ttyp0    vcsa3
console          mmcblk0boot1  ptype        tty22  tty47  ttyp1    vcsa4
cpu_dma_latency  mmcblk0p1     ptypf        tty23  tty48  ttyp2    vcsa5
cuse             mmcblk0p2     random       tty24  tty49  ttyp3    vcsa6
disk             mmcblk0p3     rfkill       tty25  tty5   ttyp4    vcsu
fd               mmcblk0p4     rpmsg_ctrl0  tty26  tty50  ttyp5    vcsu1
full             mmcblk0p5     shm          tty27  tty51  ttyp6    vcsu2
fuse             mmcblk0rpmb   snapshot     tty28  tty52  ttyp7    vcsu3
gpiochip0        mqueue        snd          tty29  tty53  ttyp8    vcsu4
gpiochip1        net           stderr       tty3   tty54  ttyp9    vcsu5
hugepages        null          stdin        tty30  tty55  ttypa    vcsu6
hwrng            port          stdout       tty31  tty56  ttypb    vfio
initctl          ptmx          tty          tty32  tty57  ttypc    vga_arbiter
kmsg             pts           tty0         tty33  tty58  ttypd    watchdog
log              ptyp0         tty1         tty34  tty59  ttype    watchdog0
loop-control     ptyp1         tty10        tty35  tty6   ttypf    watchdog1
loop0            ptyp2         tty11        tty36  tty60  urandom  zero
loop1            ptyp3         tty12        tty37  tty61  vcs
loop2            ptyp4         tty13        tty38  tty62  vcs1
loop3            ptyp5         tty14        tty39  tty63  vcs2
loop4            ptyp6         tty15        tty4   tty7   vcs3
loop5            ptyp7         tty16        tty40  tty8   vcs4
loop6            ptyp8         tty17        tty41  tty9   vcs5

Additional information:

My colleague discovered a strange behavior when running Linux for a long time:

[ 1383.825552] rpmsg_eth virtio0.ti.icve.-1.0: Failed to receive response within 25 jiffies
[ 1383.832825] omap-mailbox 29020000.mailbox: Try increasing MBOX_TX_QUEUE_LEN
[ 1383.839009] platform 78000000.r5f: failed to send mailbox message, status = -105
[ 1383.949536] rpmsg_eth virtio0.ti.icve.-1.0: Failed to receive response within 25 jiffies
[ 1443.823902] omap-mailbox 29020000.mailbox: Try increasing MBOX_TX_QUEUE_LEN
[ 1443.830123] platform 78000000.r5f: failed to send mailbox message, status = -105
[ 1443.941625] rpmsg_eth virtio0.ti.icve.-1.0: Failed to receive response within 25 jiffies
[ 1443.948926] omap-mailbox 29020000.mailbox: Try increasing MBOX_TX_QUEUE_LEN
[ 1443.955113] platform 78000000.r5f: failed to send mailbox message, status = -105
[ 1444.073557] rpmsg_eth virtio0.ti.icve.-1.0: Failed to receive response within 25 j

Many thanks in advance.

BR

Dominik 

  • Hello Dominik,

    I assume you are working with Luka on this project?
    AM6442: EtherNet/IP Adapter Intercore Tunneling Demo: Network interface rpmsg_eth not working properly 

    Is the same SDK version being used across the board?

    Please clarify:

    1) Linux SDK version

    2) MCU+ SDK & Industrial SDK versions (I see both "mcu+ (version 11.00.00.15) and industrial (version 11.00.00.08)" and "MCU+ SDK Version 09.01.00 (for ti lwip stack), Industrial SDK Version 09.02.00 (currently not in use)")

    This software was only tested with the same SDK version across the board, i.e., Industrial SDK v9.2 was tested with Linux SDK 9.2 and MCU+ SDK 9.2, and Industrial SDK v11.0 was only tested with MCU+ SDK 11.0 and Linux SDK 11.0. I will not be able to help debug if you are using Industrial SDK 11.0 with MCU+ SDK 9.1, or any combination like that.

    Did you make any modifications to the Linux code? 

    Also, please point me to exactly where you got your Linux code from so that I know I am looking at the exact same version of SW.

    Checking the boot log 

    The "create_channel" error is happening because the Linux code is attempting to create a channel a second time. So that's an obvious bug. Once you tell me what version of software you are using, whether you made any code modifications, etc, then we can look into it more.

    // virtio infrastructure initializes properly
    // no RPMsg endpoints assigned during boot
    [    6.111329] omap-mailbox 29020000.mailbox: omap mailbox rev 0x66fc9100
    [    6.517054] platform 78000000.r5f: R5F core may have been powered on by a different host, programmed state (0) != actual state (1)
    [    6.626408] platform 78000000.r5f: configured R5F for IPC-only mode
    [    6.652051] platform 78000000.r5f: assigned reserved memory node r5f-dma-memory@a0000000
    [    6.729469] remoteproc remoteproc0: 78000000.r5f is available
    [    6.779652] remoteproc remoteproc0: attaching to 78000000.r5f
    [    6.841043] platform 78000000.r5f: R5F core initialized in IPC-only mode
    [    6.913405] rproc-virtio rproc-virtio.1.auto: assigned reserved memory node r5f-dma-memory@a0000000
    [    7.035426] virtio_rpmsg_bus virtio0: rpmsg host is online
    [    7.043940] rproc-virtio rproc-virtio.1.auto: registered virtio0 (type 7)
    [    7.071093] remoteproc remoteproc0: remote processor 78000000.r5f is now attached
    
    // channel is created with endpoint 0xd
    // Currently unclear if this happens in the probe function, or somewhere else
    [   65.822444] virtio_rpmsg_bus virtio0: creating channel ti.icve addr 0xd
    [   65.828665] rpmsg_eth_probe: probe called
    [   65.834573] rpmsg_eth virtio0.ti.icve.-1.13: start_addr = 0xa0200000
    [   65.841775] rpmsg_eth virtio0.ti.icve.-1.13: size 0xc00000
    [   65.847035] rpmsg_eth_init_ndev: init ndev called
    [   65.851340] rpmsg_eth virtio0.ti.icve.-1.13: Default MAC Address = 00:00:00:00:00:00
    [   65.858306] rpmsg_eth virtio0.ti.icve.-1.13: Assigning random MAC address
    [   65.864389] rpmsg_eth virtio0.ti.icve.-1.13: New MAC Address = fa:cd:d6:80:28:83
    [   65.872040] virtio_rpmsg_bus virtio0: msg received with no recipient
    [   65.881795] rpmsg_eth_set_mac_address: set mac address called
    [   65.885130] virtio_rpmsg_bus virtio0: msg received with no recipient
    
    // this looks like a bug potentially within rpmsg_eth_create_send_request
    // channel is already created
    // is there a check that is failing here? No check at all?
    [   65.887588] rpmsg_eth_create_send_request: create send request called
    [   65.892690] virtio_rpmsg_bus virtio0: creating channel ti.icve addr 0xd
    [   65.901073] rpmsg_eth: create_request: create request called
    [   65.904281] virtio_rpmsg_bus virtio0: channel ti.icve:ffffffff:d already exist
    [   65.915777] virtio_rpmsg_bus virtio0: rpmsg_create_channel failed

    Code not starting for 15-300 seconds, mailbox settings, other questions 

    I will wait to look into this stuff until after you verify that you are using the same SDK version for all three SDKs, and that after using the same SDK version for all 3 SDKs, you are still running into problems.

    Regards,

    Nick

  • Hi Nick,

    thanks for your answer!

    Yes, it is the same project.

    1) Linux:

    We are using a custom Debian-based Linux. I can share the memory carveouts from the device tree, if that helps:

    reserved-memory {
        #address-cells = <2>;
        #size-cells = <2>;
        ranges;

        // Data/code memory regions for R5F0_0 core
        main_r5fss0_core0_memory_region: r5f-memory@90000000 {
            compatible = "shared-dma-pool";
            reg = <0x00 0x90000000 0x00 0x00400000>;
            no-map;
        };

        // cores R5F0_1, R5F1_0, R5F1_1
        main_r5f_ddr_memory_region: r5f-ddr-memory@90000000 {
            reg = <0x00 0x90400000 0x00 0x00c00000>;
            no-map;
        };

        secure_ddr: optee@9e800000 {
            reg = <0x00 0x9e800000 0x00 0x01800000>; /* for OP-TEE */
            alignment = <0x1000>;
            no-map;
        };

        // Shared memory regions for R5F0_0 core
        // IPC communication
        main_r5fss0_core0_dma_memory_region: r5f-dma-memory@a0000000 {
            compatible = "shared-dma-pool";
            reg = <0x00 0xa0000000 0x00 0x200000>;
            no-map;
        };

        // Virtual Ethernet shared memory
        main_r5fss0_core0_memory_region_shm: virtual-eth-shm@a0200000 {
            compatible = "shared-dma-pool";
            reg = <0x00 0xa0200000 0x00 0xc00000>;
            no-map;
        };
    };

    We do not want to use the Processor SDK's Linux, because we made adaptations for our custom board to run it.

    2) SDK Versions:

    Just to clarify for FreeRTOS:

    We had a look at and applied the logic of the RPMsg communication from "the mcu+ (version 11.00.00.15) and industrial (version 11.00.00.08)" SDKs, but the API code that is actually USED in our application comes only from MCU+ SDK Version 09.01.00. We used the other SDKs just for educational purposes.

    Do you see any problems with this specific MCU+ SDK version regarding the functionality? There should not be a compatibility problem between SDKs when using just one SDK, right?

    3) Boot Log Linux:

    I assume the obvious bug of creating the channel a second time comes from the previously described behavior:

    The announce message is not being received correctly by the Linux system (reason unknown).

    That is why we implemented a while loop that sends the announce message every 2 seconds, just to rule out that Linux still has tasks to finish before the message can be received. This was only a test, but it turned out to result in the behavior described in the first message: sending the message several times causes Linux to try to create the channel several times. We are not very concerned about that.

    We are more concerned about the delay. Which drivers should be triggered first when the announce message is received? And how are they triggered? Normally just by receiving it?

    Thank you very much!

    BR

    Dominik

  • Hello Dominik,

    What version of Linux kernel is used?

    Where is your Linux code?

    Debug process

    You have changed so many things that we are not currently able to support this question.

    Whenever there are problems like this, the development flow is always the same:

    1) Start from a "known good" point. That means the TI example code, with TI SDKs, running on a TI EVM. If this starting point doesn't work, then any changes aren't going to work either.

    2) Change one thing at a time, and make sure that the code still works after each change.

    3) Slowly work towards the final design. It is tempting to change multiple things at once, but that complicates the debugging exponentially when something goes wrong. "Move slow to go fast."

    Let me talk a bit more about why I suggest starting with a TI SDK (or at least ti-linux-kernel, rather than mainline Linux) and a TI EVM:

    We are carrying patches related to Ethernet and R5F/Linux communication in ti-linux-kernel that are NOT mainlined yet. I'm not saying that the behavior you are observing is definitely related to any of those patches, but this is another level of complexity that you don't want to mess around with when you are trying to establish a "known good" starting point.

    Next steps 

    In order for me to help you, I need you to verify the "known good" starting point first, either on SDK 11.0 for all 3 SDKs, or SDK 9.2 for all 3 SDKs. Then we can go from there.

    Regards,

    Nick

  • I see that Luka has started work on establishing a "known good" starting point here. I'll take a look at that thread later today:
    TMDS64EVM: EthernetIP Adapter Tunneling not working on evalboard 

  • Hello Nick,

    we are following your suggestions and setting up the evalboard with an example first.

    Are there any known issues with SDK 9.1 for shared Ethernet across the MCU+, Industrial, and Linux SDKs?

    What SDK version do you recommend starting with?

    BR

    Dominik

  • Hello Dominik,

    What software version are you planning on using in your final application?

    If you will be using Linux kernel 6.12 and the latest version of MCU+ SDK anyway, I would start with SDK 11.x. If you are planning on using Linux kernel 6.1, then SDK 9.1 would be better.

    We did not add the tunneling demo until SDK 9.2. I am not sure what code had to be added on top of SDK 9.1 to add that functionality.

    There are some "known issues" for EtherNet/IP Tunneling discussed in the SDK release notes:
    https://software-dl.ti.com/processor-industrial-sw/esd/ind_comms_sdk/am64x/11_00_00_08/docs/api_guide_am64x/RELEASE_NOTES_11_00_00_PAGE.html

    The same issues apply to both SDK 9.2 and SDK 11.0. The team did confirm that they are actively working on bugfixes for those known issues.

    Regards,

    Nick