
PROCESSOR-SDK-AM64X: HSR offload PRU firmware packet loss

Part Number: PROCESSOR-SDK-AM64X

Hi

I'm using the latest TI SDK release https://software-dl.ti.com/processor-sdk-linux/esd/AM64X/09_02_01_09/exports/docs/devices/AM64X/linux/Release_Specific_Release_Notes.html .

I have some issues when pinging between EVM boards with the HSR offload solution.

Here is our setup:

(eth1 of Testboard1 is connected to eth2 of Testboard2, eth2 of Testboard2 is connected to eth1 of the DUT, eth2 of the DUT is connected to eth1 of Testboard2)

We have 3 boards running the HSR offload solution. On each board, after booting, we first disable the network services using:

systemctl disable dhcpcd.service
systemctl disable NetworkManager.service
systemctl disable systemd-resolved.service
systemctl disable systemd-networkd.service
sync
reboot
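The same commands could be wrapped in a small helper so the services can be disabled and later re-enabled with a single argument. This is only a sketch; the script name `net_services.sh` is hypothetical and not part of the TI SDK:

```shell
#!/bin/sh
# net_services.sh -- hypothetical wrapper around the commands above.
# Usage: ./net_services.sh disable    (or: ./net_services.sh enable)
case "$1" in
    disable|enable) ;;
    *) echo "usage: $0 disable|enable" >&2; exit 1 ;;
esac
for svc in dhcpcd NetworkManager systemd-resolved systemd-networkd; do
    systemctl "$1" "$svc.service"
done
sync
```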

Then we load the PRU firmware by bringing the network interfaces up:

root@am64xx-evm:~# ip link set dev eth1 up
[   23.533828] remoteproc remoteproc10: unsupported resource 5
[   23.540407] remoteproc remoteproc12: unsupported resource 5
root@am64xx-evm:~# ip link set dev eth2 up

Then we load the HSR offload PRU firmware.

We do that for the 3 boards.
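For context, the contents of hsr_setup.sh are not shown in this thread; the following is a hedged sketch of what such a script typically does with iproute2 and ethtool, reconstructed from its printed output (the feature names `hsr-tag-ins-offload`, `hsr-tag-rm-offload`, `hsr-fwd-offload`, `hsr-dup-offload` appear in the logs below; the interface names and IP are examples):

```shell
#!/bin/sh
# Sketch of an hsr_setup.sh-style setup (not the actual TI script).
# Enable the ICSSG HSR offload features on both slave ports first;
# these feature names match the script output quoted in this thread.
for dev in eth1 eth2; do
    ethtool -K "$dev" hsr-tag-ins-offload on hsr-tag-rm-offload on \
                      hsr-fwd-offload on hsr-dup-offload on
done
# Create the HSR device over the two slave ports and bring it up.
ip link add name hsr0 type hsr slave1 eth1 slave2 eth2
ip addr add 192.168.200.1/24 dev hsr0
ip link set hsr0 up
```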

We observe that when we start a ping from Testboard1 to the DUT and then manually detach one of Testboard1's cables, some packets are lost.

Could you confirm this issue on your side?

Tianyi

  • To be more accurate: we observe that when we start a ping from Testboard1 to the DUT and then manually detach the cable linked to eth2 of Testboard1, some packets are lost.

  • Hi Tianyi,

    Can you first check if the same packet loss occurs without the systemctl commands that disable the network?

    Can you also share the entire log output after running the hsr setup script?

    I will verify whether I see the same behavior with the systemctl commands disabling the network, and get back to you.

    -Daolin

    (eth1 of Testboard1 is connected to eth2 of Testboard2, eth2 of Testboard2 is connected to eth1 of the DUT, eth2 of the DUT is connected to eth1 of Testboard2)

    How can eth2 of Testboard2 be connected to both Testboard1 and the DUT at the same time?

    we detach manually cable linked to eth2 of the Testboard1

    Do you mean detaching cable A, according to your diagram?

    I ran some tests and with the following configs

    EVM1 (192.168.2.20) eth1 <-> eth2 EVM2 (192.168.2.21) eth1 <-> eth2 EVM3 (192.168.2.22) eth1 <-> eth2 EVM1

    systemctl $1 dhcpcd.service
    systemctl $1 NetworkManager.service
    systemctl $1 systemd-resolved.service
    systemctl $1 systemd-networkd.service
    sync

    Test 1:

    1. Disable network services with the above commands and reboot
    2. Set up HSR offload with the script on all EVMs
    3. Ping from EVM1 to EVM3
    4. Disconnect the cable between EVM1 and EVM3

    Result: ~54% packet loss (varies depending on how long the cable is kept disconnected). Note that the ping stops and does not resume even after several minutes of running.

    Test 2:

    1. Disable network services with the above commands
    2. On EVM2, set up HSR in non-offload mode
    3. Ping from EVM1 to EVM3
    4. Disconnect the cable between EVM1 and EVM3

    Result: 0% packet loss

    Test 3:

    1. Disable network services with the above commands
    2. On EVM2 (middle board), set up HSR offload with duplicate offload OFF in the script
    3. Ping from EVM1 to EVM3
    4. Disconnect the cable between EVM1 and EVM3

    Result: 0% packet loss

    Test 4:

    1. Disable network services with the above commands
    2. On EVM2 (middle board), after the previous test of turning duplicate offload off, turn duplicate offload back on
    3. Ping from EVM1 to EVM3
    4. Disconnect the cable between EVM1 and EVM3

    Result: 0% packet loss
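The loss figures above are read off the summary line that ping prints. For scripted test runs, a tiny helper can extract the number; `parse_loss` is a hypothetical helper, not part of the TI scripts:

```shell
#!/bin/sh
# parse_loss: extract the loss percentage from a ping(8) statistics line.
parse_loss() {
    # expects e.g. "54 packets transmitted, 39 packets received, 27% packet loss"
    printf '%s\n' "$1" | sed -n 's/.* \([0-9][0-9]*\)% packet loss.*/\1/p'
}

parse_loss "54 packets transmitted, 39 packets received, 27% packet loss"   # prints 27
```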

    In summary, I can also see the packet loss with network services disabled, and only after running through the sequence above (turning duplicate offload off, then back on, on the middle board) does the packet loss return to 0%.

    Could you try the following:

    1. What results do you get when not disabling the network services?

    2. What results do you get when disabling the network services and running through the sequence above (duplicate offload off, then back on)?

    -Daolin

  • Hi Tianyi,

    I'm just checking in: is there any update on your results?

    -Daolin

  • Hi Daolin,

    2. What result when disabling the network services + running through the sequence you highlighted?

    (As a reminder, the tests are done with:

    - network services disabled (once)

    - then, each time we (re)boot an EVM board, we set eth1 and eth2 up, which loads the regular PRU firmware)

    I can reproduce testcase 1:

    • Disable network services with the above and reboot
    • Set up HSR offload with script on all EVMs
    • Ping from EVM1 to EVM3
    • Disconnect cable between EVM1 and EVM3

    testcase 2:

    1. Disable network services with the above commands
    2. On EVM2 set up HSR with non-offload mode
    3. Ping from EVM1 to EVM3
    4. Disconnect cable between EVM1 and EVM3

    and testcase 4:

    1. Disable network services with the above commands
    2. On EVM2 (middle board), after the previous test (testcase 3) of turning duplicate offload off, turn duplicate offload back on
    3. Ping from EVM1 to EVM3
    4. Disconnect cable between EVM1 and EVM3

    with the same results.

    However, for testcase 3:

    1. Disable network services with the above commands
    2. On EVM2 (middle board), set up HSR offload with duplicate offload OFF in the script
    3. Ping from EVM1 to EVM3
    4. Disconnect cable between EVM1 and EVM3

    I still see some packets lost when I manually detach the cable between EVM1 <-> EVM3. You can find the terminal output below:

    #EVM1 terminal : send ping to EVM3 : detach manually the cable that 
    # connects EVM1 to EVM3 => packet lost 
    
    am64xx-evm login: root
    root@am64xx-evm:~# ip link set dev eth1 up
    [   28.767242] remoteproc remoteproc7: unsupported resource 5
    [   28.775904] remoteproc remoteproc9: unsupported resource 5
    root@am64xx-evm:~# ip link set dev eth2 up
    root@am64xx-evm:~# ./hsr_setup.sh hsr_hw eth1 eth2 192.168.200.1
    hsr_hw eth1 eth2 192.168.200.1
    ip=192.168.200.1
    if=hsr0
    mac=70:ff:76:1e:e7:8c
    slave-a=eth1
    slave-b=eth2
    device=platform/icssg1-eth
    hsr-tag-ins-offload: off
    hsr-tag-rm-offload: off
    hsr-fwd-offload: off
    hsr-dup-offload: off
    hsr-tag-ins-offload: on
    hsr-tag-rm-offload: on
    hsr-fwd-offload: on
    hsr-dup-offload: on
    hsr-tag-ins-offload: off
    hsr-tag-rm-offload: off
    hsr-fwd-offload: off
    hsr-dup-offload: off
    hsr-tag-ins-offload: on
    hsr-tag-rm-offload: on
    hsr-fwd-offload: on
    hsr-dup-offload: on
    [   74.778520] remoteproc remoteproc7: unsupported resource 5
    [   74.788153] remoteproc remoteproc9: unsupported resource 5
    root@am64xx-evm:~# ping 192.168.200.2
    PING 192.168.200.2 (192.168.200.2): 56 data bytes
    64 bytes from 192.168.200.2: seq=0 ttl=64 time=0.711 ms
    64 bytes from 192.168.200.2: seq=1 ttl=64 time=0.396 ms
    64 bytes from 192.168.200.2: seq=2 ttl=64 time=0.390 ms
    64 bytes from 192.168.200.2: seq=3 ttl=64 time=0.346 ms
    64 bytes from 192.168.200.2: seq=4 ttl=64 time=0.430 ms
    64 bytes from 192.168.200.2: seq=5 ttl=64 time=0.351 ms
    64 bytes from 192.168.200.2: seq=6 ttl=64 time=0.416 ms
    64 bytes from 192.168.200.2: seq=7 ttl=64 time=0.368 ms
    64 bytes from 192.168.200.2: seq=8 ttl=64 time=0.379 ms
    64 bytes from 192.168.200.2: seq=9 ttl=64 time=0.411 ms
    64 bytes from 192.168.200.2: seq=10 ttl=64 time=0.399 ms
    64 bytes from 192.168.200.2: seq=21 ttl=64 time=0.456 ms
    64 bytes from 192.168.200.2: seq=22 ttl=64 time=0.445 ms
    64 bytes from 192.168.200.2: seq=23 ttl=64 time=0.397 ms
    64 bytes from 192.168.200.2: seq=24 ttl=64 time=0.423 ms
    64 bytes from 192.168.200.2: seq=30 ttl=64 time=0.455 ms
    64 bytes from 192.168.200.2: seq=31 ttl=64 time=0.456 ms
    64 bytes from 192.168.200.2: seq=32 ttl=64 time=0.412 ms
    64 bytes from 192.168.200.2: seq=33 ttl=64 time=0.437 ms
    64 bytes from 192.168.200.2: seq=34 ttl=64 time=0.425 ms
    64 bytes from 192.168.200.2: seq=35 ttl=64 time=0.431 ms
    64 bytes from 192.168.200.2: seq=36 ttl=64 time=0.421 ms
    64 bytes from 192.168.200.2: seq=37 ttl=64 time=0.375 ms
    64 bytes from 192.168.200.2: seq=38 ttl=64 time=0.369 ms
    64 bytes from 192.168.200.2: seq=39 ttl=64 time=0.386 ms
    64 bytes from 192.168.200.2: seq=40 ttl=64 time=0.405 ms
    64 bytes from 192.168.200.2: seq=41 ttl=64 time=0.449 ms
    64 bytes from 192.168.200.2: seq=42 ttl=64 time=0.433 ms
    64 bytes from 192.168.200.2: seq=43 ttl=64 time=0.388 ms
    64 bytes from 192.168.200.2: seq=44 ttl=64 time=0.379 ms
    64 bytes from 192.168.200.2: seq=45 ttl=64 time=0.446 ms
    64 bytes from 192.168.200.2: seq=46 ttl=64 time=0.393 ms
    64 bytes from 192.168.200.2: seq=47 ttl=64 time=0.397 ms
    64 bytes from 192.168.200.2: seq=48 ttl=64 time=0.424 ms
    64 bytes from 192.168.200.2: seq=49 ttl=64 time=0.382 ms
    64 bytes from 192.168.200.2: seq=50 ttl=64 time=0.430 ms
    64 bytes from 192.168.200.2: seq=51 ttl=64 time=0.387 ms
    64 bytes from 192.168.200.2: seq=52 ttl=64 time=0.418 ms
    64 bytes from 192.168.200.2: seq=53 ttl=64 time=0.415 ms
    ^C
    --- 192.168.200.2 ping statistics ---
    54 packets transmitted, 39 packets received, 27% packet loss
    round-trip min/avg/max = 0.346/0.416/0.711 ms

    #EVM3 terminal : 
    am64xx-evm login: root
    root@am64xx-evm:~# ip link set dev eth1 up
    [   29.850178] remoteproc remoteproc10: unsupported resource 5
    [   29.861750] remoteproc remoteproc12: unsupported resource 5
    root@am64xx-evm:~# ip link set dev eth2 up
    root@am64xx-evm:~# ./hsr_setup.sh hsr_hw eth1 eth2 192.168.200.2
    hsr_hw eth1 eth2 192.168.200.2
    ip=192.168.200.2
    if=hsr0
    mac=70:ff:76:1e:e6:f3
    slave-a=eth1
    slave-b=eth2
    device=platform/icssg1-eth
    hsr-tag-ins-offload: off
    hsr-tag-rm-offload: off
    hsr-fwd-offload: off
    hsr-dup-offload: off
    hsr-tag-ins-offload: on
    hsr-tag-rm-offload: on
    hsr-fwd-offload: on
    hsr-dup-offload: on
    hsr-tag-ins-offload: off
    hsr-tag-rm-offload: off
    hsr-fwd-offload: off
    hsr-dup-offload: off
    hsr-tag-ins-offload: on
    hsr-tag-rm-offload: on
    hsr-fwd-offload: on
    hsr-dup-offload: on
    [   86.617418] remoteproc remoteproc10: unsupported resource 5
    [   86.630323] remoteproc remoteproc12: unsupported resource 5
    root@am64xx-evm:~#

    #EVM2 (middle ) terminal : set up HSR offload with duplicate offload OFF in the script
    
    am64xx-evm login: root
    root@am64xx-evm:~# ip link set dev eth1 up
    [   30.820317] remoteproc remoteproc12: unsupported resource 5
    [   30.832171] remoteproc remoteproc10: unsupported resource 5
    root@am64xx-evm:~# ip link set dev eth2 up
    root@am64xx-evm:~# ./hsr_setup_dup_off.sh hsr_hw eth1 eth2 192.168.200.10                           
    hsr_hw eth1 eth2 192.168.200.10
    ip=192.168.200.10
    if=hsr0
    mac=70:ff:76:1e:9f:54
    slave-a=eth1
    slave-b=eth2
    device=platform/icssg1-eth
    hsr-tag-ins-offload: off
    hsr-tag-rm-offload: off
    hsr-fwd-offload: off
    hsr-dup-offload: off
    hsr-tag-ins-offload: on
    hsr-tag-rm-offload: on
    hsr-fwd-offload: on
    hsr-dup-offload: off
    hsr-tag-ins-offload: off
    hsr-tag-rm-offload: off
    hsr-fwd-offload: off
    hsr-dup-offload: off
    hsr-tag-ins-offload: on
    hsr-tag-rm-offload: on
    hsr-fwd-offload: on
    hsr-dup-offload: off
    [   48.927542] remoteproc remoteproc12: unsupported resource 5
    [   48.937402] remoteproc remoteproc10: unsupported resource 5
    root@am64xx-evm:~#

    Also, I tried testcase 3 on my side:

    1. Disable network services with the above commands
    2. On EVM2 (middle board), set up HSR offload with duplicate offload OFF in the script
    3. Ping from EVM1 to EVM3
    4. Disconnect cable between EVM1 and EVM3

    but this time pinging from EVM2 (middle) to EVM3 (and detaching the cable EVM1 <-> EVM3), to verify whether the packets are duplicated at the kernel level or at the PRU level. Here is the output; we can see the duplicated packets marked with (DUP!):

    # EVM2 with duplicate offload off 
    am64xx-evm login: root
    root@am64xx-evm:~# ip link set dev eth1 up
    [   28.585565] remoteproc remoteproc11: unsupported resource 5
    [   28.596461] remoteproc remoteproc13: unsupported resource 5
    root@am64xx-evm:~# ip link set dev eth2 up
    root@am64xx-evm:~# ./hsr_setup_dup_off.sh hsr_hw eth1 eth2 192.168.200.10                           
    hsr_hw eth1 eth2 192.168.200.10
    ip=192.168.200.10
    if=hsr0
    mac=70:ff:76:1e:9f:54
    slave-a=eth1
    slave-b=eth2
    device=platform/icssg1-eth
    hsr-tag-ins-offload: off
    hsr-tag-rm-offload: off
    hsr-fwd-offload: off
    hsr-dup-offload: off
    hsr-tag-ins-offload: on
    hsr-tag-rm-offload: on
    hsr-fwd-offload: on
    hsr-dup-offload: off
    hsr-tag-ins-offload: off
    hsr-tag-rm-offload: off
    hsr-fwd-offload: off
    hsr-dup-offload: off
    hsr-tag-ins-offload: on
    hsr-tag-rm-offload: on
    hsr-fwd-offload: on
    hsr-dup-offload: off
    [   38.936865] remoteproc remoteproc11: unsupported resource 5
    [   38.946688] remoteproc remoteproc13: unsupported resource 5
    root@am64xx-evm:~# ping 192.168.200.2                    
    PING 192.168.200.2 (192.168.200.2): 56 data bytes
    64 bytes from 192.168.200.2: seq=0 ttl=64 time=0.722 ms
    64 bytes from 192.168.200.2: seq=1 ttl=64 time=0.399 ms
    64 bytes from 192.168.200.2: seq=1 ttl=64 time=0.469 ms (DUP!)
    64 bytes from 192.168.200.2: seq=2 ttl=64 time=0.385 ms
    64 bytes from 192.168.200.2: seq=3 ttl=64 time=0.388 ms
    64 bytes from 192.168.200.2: seq=4 ttl=64 time=0.364 ms
    64 bytes from 192.168.200.2: seq=5 ttl=64 time=0.381 ms
    64 bytes from 192.168.200.2: seq=6 ttl=64 time=0.354 ms
    64 bytes from 192.168.200.2: seq=7 ttl=64 time=0.456 ms
    64 bytes from 192.168.200.2: seq=8 ttl=64 time=0.437 ms
    64 bytes from 192.168.200.2: seq=9 ttl=64 time=0.387 ms
    64 bytes from 192.168.200.2: seq=10 ttl=64 time=0.385 ms
    64 bytes from 192.168.200.2: seq=11 ttl=64 time=0.358 ms
    64 bytes from 192.168.200.2: seq=12 ttl=64 time=0.394 ms
    64 bytes from 192.168.200.2: seq=13 ttl=64 time=0.413 ms
    64 bytes from 192.168.200.2: seq=14 ttl=64 time=0.399 ms
    64 bytes from 192.168.200.2: seq=15 ttl=64 time=0.357 ms
    64 bytes from 192.168.200.2: seq=16 ttl=64 time=0.415 ms
    64 bytes from 192.168.200.2: seq=16 ttl=64 time=0.488 ms (DUP!)
    64 bytes from 192.168.200.2: seq=17 ttl=64 time=0.417 ms
    64 bytes from 192.168.200.2: seq=17 ttl=64 time=0.488 ms (DUP!)
    64 bytes from 192.168.200.2: seq=18 ttl=64 time=0.436 ms
    64 bytes from 192.168.200.2: seq=18 ttl=64 time=0.507 ms (DUP!)
    64 bytes from 192.168.200.2: seq=19 ttl=64 time=0.426 ms
    64 bytes from 192.168.200.2: seq=19 ttl=64 time=0.498 ms (DUP!)
    64 bytes from 192.168.200.2: seq=20 ttl=64 time=0.426 ms
    64 bytes from 192.168.200.2: seq=20 ttl=64 time=0.499 ms (DUP!)
    64 bytes from 192.168.200.2: seq=21 ttl=64 time=0.390 ms
    64 bytes from 192.168.200.2: seq=21 ttl=64 time=0.462 ms (DUP!)
    64 bytes from 192.168.200.2: seq=22 ttl=64 time=0.441 ms
    64 bytes from 192.168.200.2: seq=23 ttl=64 time=0.404 ms
    64 bytes from 192.168.200.2: seq=24 ttl=64 time=0.511 ms
    64 bytes from 192.168.200.2: seq=25 ttl=64 time=0.403 ms
    64 bytes from 192.168.200.2: seq=26 ttl=64 time=0.437 ms
    64 bytes from 192.168.200.2: seq=27 ttl=64 time=0.343 ms
    64 bytes from 192.168.200.2: seq=28 ttl=64 time=0.478 ms
    64 bytes from 192.168.200.2: seq=29 ttl=64 time=0.505 ms
    64 bytes from 192.168.200.2: seq=29 ttl=64 time=0.692 ms (DUP!)
    64 bytes from 192.168.200.2: seq=30 ttl=64 time=0.442 ms
    64 bytes from 192.168.200.2: seq=30 ttl=64 time=0.518 ms (DUP!)
    64 bytes from 192.168.200.2: seq=31 ttl=64 time=0.395 ms
    64 bytes from 192.168.200.2: seq=31 ttl=64 time=0.461 ms (DUP!)
    ^C
    --- 192.168.200.2 ping statistics ---
    32 packets transmitted, 32 packets received, 10 duplicates, 0% packet loss
    round-trip min/avg/max = 0.343/0.441/0.722 ms
    root@am64xx-evm:~#

    Just to summarize, for:

    2. What result when disabling the network services + running through the sequence you highlighted?

    - I can reproduce testcases 1, 2, and 4.

    - By turning the duplicate offload feature off and then back on on EVM2 (middle), there seems to be no packet loss (testcase 4).

  • About:

    1. What results when not disabling the network services

    I still observe the same behaviour: some packets are lost even when the network services are not disabled.

    Tianyi

  • Some updates about testcase 1:

    1

    • Disable network services with the above and reboot
    • Set up HSR offload with script on all EVMs
    • Ping from EVM1 to EVM3
    • Disconnect cable between EVM1 and EVM3

    Instead of physically disconnecting the cable between EVM1 and EVM3:

    We first bring down EVM1's interface (with ip link set) that is linked to EVM2's interface (in my setup, eth2), and ping from EVM1 to EVM2 (middle, the DUT). I observe some packets lost:

    • Disable network services with the above and reboot
    • Set up HSR offload with script on all EVMs
    • ip link set dev eth2 down
    • Ping from EVM1 to EVM2
      • Packet loss

    But when we bring down EVM1's interface that is linked to EVM3's interface (in my setup, eth1), and ping from EVM1 to EVM2, there is no packet loss. Here is the procedure:

    • Disable network services with the above and reboot
    • Set up HSR offload with script on all EVMs
    • ip link set dev eth1 down
    • Ping from EVM1 to EVM2
      • No packet loss

    # EVM1 terminal that is sending to EVM2 (middle, dut): 
    
    am64xx-evm login: root
    root@am64xx-evm:~# ip link set dev eth1 up                                           
    [   32.090969] remoteproc remoteproc9: unsupported resource 5
    [   32.100495] remoteproc remoteproc11: unsupported resource 5
    root@am64xx-evm:~# ip link set dev eth2 up                                                          
    root@am64xx-evm:~# ./hsr_setup.sh hsr_hw eth1 eth2 192.168.200.1                               
    hsr_hw eth1 eth2 192.168.200.1
    ip=192.168.200.1
    if=hsr0
    mac=70:ff:76:1e:e7:8c
    slave-a=eth1
    slave-b=eth2
    device=platform/icssg1-eth
    hsr-tag-ins-offload: off
    hsr-tag-rm-offload: off
    hsr-fwd-offload: off
    hsr-dup-offload: off
    hsr-tag-ins-offload: on
    hsr-tag-rm-offload: on
    hsr-fwd-offload: on
    hsr-dup-offload: on
    hsr-tag-ins-offload: off
    hsr-tag-rm-offload: off
    hsr-fwd-offload: off
    hsr-dup-offload: off
    hsr-tag-ins-offload: on
    hsr-tag-rm-offload: on
    hsr-fwd-offload: on
    hsr-dup-offload: on
    [   43.802612] remoteproc remoteproc9: unsupported resource 5
    [   43.813590] remoteproc remoteproc11: unsupported resource 5
    root@am64xx-evm:~# ping 192.168.200.10 -c 30 -i 0.1                                                 
    PING 192.168.200.10 (192.168.200.10): 56 data bytes
    64 bytes from 192.168.200.10: seq=0 ttl=64 time=0.660 ms
    64 bytes from 192.168.200.10: seq=1 ttl=64 time=0.266 ms
    64 bytes from 192.168.200.10: seq=2 ttl=64 time=0.200 ms
    64 bytes from 192.168.200.10: seq=3 ttl=64 time=0.201 ms
    64 bytes from 192.168.200.10: seq=4 ttl=64 time=0.198 ms
    64 bytes from 192.168.200.10: seq=5 ttl=64 time=0.223 ms
    64 bytes from 192.168.200.10: seq=6 ttl=64 time=0.198 ms
    64 bytes from 192.168.200.10: seq=7 ttl=64 time=0.180 ms
    64 bytes from 192.168.200.10: seq=8 ttl=64 time=0.266 ms
    64 bytes from 192.168.200.10: seq=9 ttl=64 time=0.251 ms
    64 bytes from 192.168.200.10: seq=10 ttl=64 time=0.231 ms
    64 bytes from 192.168.200.10: seq=11 ttl=64 time=0.206 ms
    64 bytes from 192.168.200.10: seq=12 ttl=64 time=0.189 ms
    64 bytes from 192.168.200.10: seq=13 ttl=64 time=0.182 ms
    64 bytes from 192.168.200.10: seq=14 ttl=64 time=0.210 ms
    64 bytes from 192.168.200.10: seq=15 ttl=64 time=0.197 ms
    64 bytes from 192.168.200.10: seq=16 ttl=64 time=0.194 ms
    64 bytes from 192.168.200.10: seq=17 ttl=64 time=0.369 ms
    64 bytes from 192.168.200.10: seq=18 ttl=64 time=0.367 ms
    64 bytes from 192.168.200.10: seq=19 ttl=64 time=0.273 ms
    64 bytes from 192.168.200.10: seq=20 ttl=64 time=0.322 ms
    64 bytes from 192.168.200.10: seq=21 ttl=64 time=0.281 ms
    64 bytes from 192.168.200.10: seq=22 ttl=64 time=0.225 ms
    64 bytes from 192.168.200.10: seq=23 ttl=64 time=0.385 ms
    64 bytes from 192.168.200.10: seq=24 ttl=64 time=0.252 ms
    64 bytes from 192.168.200.10: seq=25 ttl=64 time=0.198 ms
    64 bytes from 192.168.200.10: seq=26 ttl=64 time=0.202 ms
    64 bytes from 192.168.200.10: seq=27 ttl=64 time=0.191 ms
    64 bytes from 192.168.200.10: seq=28 ttl=64 time=0.201 ms
    64 bytes from 192.168.200.10: seq=29 ttl=64 time=0.267 ms
    
    --- 192.168.200.10 ping statistics ---
    30 packets transmitted, 30 packets received, 0% packet loss
    round-trip min/avg/max = 0.180/0.252/0.660 ms
    root@am64xx-evm:~# ip link set dev eth1 down 
    root@am64xx-evm:~# ping 192.168.200.10 -c 30 -i 0.1
    PING 192.168.200.10 (192.168.200.10): 56 data bytes
    64 bytes from 192.168.200.10: seq=0 ttl=64 time=0.446 ms
    64 bytes from 192.168.200.10: seq=1 ttl=64 time=0.279 ms
    64 bytes from 192.168.200.10: seq=2 ttl=64 time=0.260 ms
    64 bytes from 192.168.200.10: seq=3 ttl=64 time=0.226 ms
    64 bytes from 192.168.200.10: seq=4 ttl=64 time=0.264 ms
    64 bytes from 192.168.200.10: seq=5 ttl=64 time=0.258 ms
    64 bytes from 192.168.200.10: seq=6 ttl=64 time=0.201 ms
    64 bytes from 192.168.200.10: seq=7 ttl=64 time=0.192 ms
    64 bytes from 192.168.200.10: seq=8 ttl=64 time=0.240 ms
    64 bytes from 192.168.200.10: seq=9 ttl=64 time=0.217 ms
    64 bytes from 192.168.200.10: seq=10 ttl=64 time=0.201 ms
    64 bytes from 192.168.200.10: seq=11 ttl=64 time=0.191 ms
    64 bytes from 192.168.200.10: seq=12 ttl=64 time=0.196 ms
    64 bytes from 192.168.200.10: seq=13 ttl=64 time=0.225 ms
    64 bytes from 192.168.200.10: seq=14 ttl=64 time=0.217 ms
    64 bytes from 192.168.200.10: seq=15 ttl=64 time=0.244 ms
    64 bytes from 192.168.200.10: seq=16 ttl=64 time=0.219 ms
    64 bytes from 192.168.200.10: seq=17 ttl=64 time=0.216 ms
    64 bytes from 192.168.200.10: seq=18 ttl=64 time=0.224 ms
    64 bytes from 192.168.200.10: seq=19 ttl=64 time=0.310 ms
    64 bytes from 192.168.200.10: seq=20 ttl=64 time=0.256 ms
    64 bytes from 192.168.200.10: seq=21 ttl=64 time=0.235 ms
    64 bytes from 192.168.200.10: seq=22 ttl=64 time=0.196 ms
    64 bytes from 192.168.200.10: seq=23 ttl=64 time=0.233 ms
    64 bytes from 192.168.200.10: seq=24 ttl=64 time=0.216 ms
    64 bytes from 192.168.200.10: seq=25 ttl=64 time=0.245 ms
    64 bytes from 192.168.200.10: seq=26 ttl=64 time=0.208 ms
    64 bytes from 192.168.200.10: seq=27 ttl=64 time=0.201 ms
    64 bytes from 192.168.200.10: seq=28 ttl=64 time=0.211 ms
    64 bytes from 192.168.200.10: seq=29 ttl=64 time=0.251 ms
    
    --- 192.168.200.10 ping statistics ---
    30 packets transmitted, 30 packets received, 0% packet loss
    round-trip min/avg/max = 0.191/0.235/0.446 ms
    root@am64xx-evm:~# ip link set dev eth1 up
    root@am64xx-evm:~# ip link set dev eth2 down                                                        
    root@am64xx-evm:~# ping 192.168.200.10 -c 30 -i 0.1
    PING 192.168.200.10 (192.168.200.10): 56 data bytes
    ^C
    --- 192.168.200.10 ping statistics ---
    30 packets transmitted, 0 packets received, 100% packet loss
    root@am64xx-evm:~# ip link set dev eth2 up 
    root@am64xx-evm:~# dmesg | grep -i dual 
    [    6.453333] icssg-prueth icssg1-eth: TI PRU ethernet driver initialized: dual EMAC mode
    root@am64xx-evm:~#

    Is it possible to reproduce this on your side as well?

    Tianyi

  • Hi Tianyi,

    We also see the packet loss out of the box (without disabling the network services), and the software development team is currently debugging the issue. They believe it is related to the middle board getting stuck when trying to forward packets from EVM1 to EVM3 after the EVM1-EVM3 link is disconnected. As of yesterday, they are implementing some changes to fix this but are still debugging overall. We will have an internal meeting later to discuss the current progress.

    You mentioned being able to reproduce tests 1, 2, and 4 for your network-disable use case. I also verified that this potential workaround only works with the network-disable use case (not the out-of-box use case). If it is okay with you and your team, would this be an acceptable workaround until the software development team fixes the out-of-box issue?

    -Daolin

  • Sorry, I just saw your message after sending my previous message 15 minutes ago. I will give your test sequence a try and let you know.

    -Daolin

  • Hi Daolin,

    Thank you for your fast answer!

    We need to discuss this workaround with our customer for the first identified issue (manually detaching the cable)!

    Tianyi

  • Ok, no problem. Let me know if you can reproduce it on your side!

    Tianyi

  • Hi Tianyi, 

    Have you done the same test with a manual cable disconnect as well (link down of the EVM1-EVM3 interface + ping from EVM1 to EVM2, but physically pulling the cable between EVM1 and EVM3)?

    My tests show that bringing the link down gives the same results as disconnecting the cable manually. Bringing down the EVM1-EVM2 interface + pinging from EVM1 to EVM2 shows packet loss, and the same result can be observed with a manual cable disconnection. Conversely, no packet loss is observed when manually disconnecting the EVM1-EVM3 cable and pinging from EVM1 to EVM2.

    In summary, I get similar results to yours, and I think the link-down behavior is equivalent to disconnecting the cable manually. The root issue remains: when the direct link between two EVMs is disconnected, the packets must travel through the other path via the middle board (for the EVM1-to-EVM2 ping, packets have to travel through EVM3). The middle board has issues forwarding those packets, which most likely causes the packet loss.

    -Daolin

  • Hi Daolin,

    You are right! The root cause seems to be the same!

    Tianyi

  • I retried the workaround with a modification. Instead of pinging between testcase 3 and testcase 4, as you mentioned above:

    1. Disable network services with the above commands
    2. On EVM2 (middle board), set up HSR offload with duplicate offload OFF in the script
    3. Ping from EVM1 to EVM3
    4. Disconnect cable between EVM1 and EVM3
    0% packet loss
    1. Disable network services with the above commands
    2. On EVM2 (middle board), after previous test of turning duplicate offload off, turn duplicate offload back on
    3. Ping from EVM1 to EVM3
    4. Disconnect cable between EVM1 and EVM3

    I directly turned duplicate offload back on, without any pinging in between:

    • Disable network services with the above commands
    • On EVM2 (middle board), set up HSR offload with duplicate offload OFF in the script
    • On EVM2 (middle board), turn duplicate offload on
    • Ping from EVM1 to EVM3
    • Disconnect cable between EVM1 and EVM3
      • Disconnect the cable eth1 on EVM1 side
      • Disconnect the cable eth2 on EVM1 side
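Presumably, the "duplicate offload off, then on" steps above map to the `hsr-dup-offload` feature flag that appears in the script output. A hedged sketch of toggling only that flag on the middle board, assuming (as the script output suggests) that the flags are set with ethtool -K on the slave interfaces:

```shell
#!/bin/sh
# Hypothetical sketch: turn duplicate offload off, then back on, on EVM2.
for dev in eth1 eth2; do
    ethtool -K "$dev" hsr-dup-offload off
done
for dev in eth1 eth2; do
    ethtool -K "$dev" hsr-dup-offload on
done
```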

    I observe some packet loss; you can see the logs below:

    Here is the terminal of EVM1 :

    # Terminal EVM1 : SENDER
    
    am64xx-evm login: root
    root@am64xx-evm:~# ip link set dev eth1 up
    [   32.726961] remoteproc remoteproc10: unsupported resource 5
    [   32.738671] remoteproc remoteproc12: unsupported resource 5
    root@am64xx-evm:~# ip link set dev eth2 up
    root@am64xx-evm:~# ./hsr_setup.sh hsr_hw eth1 eth2 192.168.200.1
    hsr_hw eth1 eth2 192.168.200.1
    ip=192.168.200.1
    if=hsr0
    mac=70:ff:76:1e:e7:8c
    slave-a=eth1
    slave-b=eth2
    device=platform/icssg1-eth
    hsr-tag-ins-offload: off
    hsr-tag-rm-offload: off
    hsr-fwd-offload: off
    hsr-dup-offload: off
    hsr-tag-ins-offload: on
    hsr-tag-rm-offload: on
    hsr-fwd-offload: on
    hsr-dup-offload: on
    hsr-tag-ins-offload: off
    hsr-tag-rm-offload: off
    hsr-fwd-offload: off
    hsr-dup-offload: off
    hsr-tag-ins-offload: on
    hsr-tag-rm-offload: on
    hsr-fwd-offload: on
    hsr-dup-offload: on
    [   44.306561] remoteproc remoteproc10: unsupported resource 5
    [   44.316597] remoteproc remoteproc12: unsupported resource 5
    root@am64xx-evm:~# ping 192.168.200.2 -i 0.5
    PING 192.168.200.2 (192.168.200.2): 56 data bytes
    64 bytes from 192.168.200.2: seq=0 ttl=64 time=0.703 ms
    64 bytes from 192.168.200.2: seq=1 ttl=64 time=0.344 ms
    64 bytes from 192.168.200.2: seq=2 ttl=64 time=0.349 ms
    64 bytes from 192.168.200.2: seq=3 ttl=64 time=0.307 ms
    64 bytes from 192.168.200.2: seq=5 ttl=64 time=0.315 ms
    64 bytes from 192.168.200.2: seq=6 ttl=64 time=0.335 ms
    64 bytes from 192.168.200.2: seq=7 ttl=64 time=0.350 ms
    64 bytes from 192.168.200.2: seq=8 ttl=64 time=0.423 ms
    64 bytes from 192.168.200.2: seq=9 ttl=64 time=0.430 ms
    64 bytes from 192.168.200.2: seq=10 ttl=64 time=0.328 ms
    64 bytes from 192.168.200.2: seq=12 ttl=64 time=0.392 ms
    64 bytes from 192.168.200.2: seq=13 ttl=64 time=0.352 ms
    64 bytes from 192.168.200.2: seq=15 ttl=64 time=0.336 ms
    64 bytes from 192.168.200.2: seq=17 ttl=64 time=0.354 ms
    64 bytes from 192.168.200.2: seq=18 ttl=64 time=0.372 ms
    64 bytes from 192.168.200.2: seq=19 ttl=64 time=0.437 ms
    64 bytes from 192.168.200.2: seq=20 ttl=64 time=0.349 ms
    64 bytes from 192.168.200.2: seq=21 ttl=64 time=0.345 ms
    64 bytes from 192.168.200.2: seq=22 ttl=64 time=0.346 ms
    64 bytes from 192.168.200.2: seq=24 ttl=64 time=0.322 ms
    64 bytes from 192.168.200.2: seq=26 ttl=64 time=0.346 ms
    64 bytes from 192.168.200.2: seq=27 ttl=64 time=0.416 ms
    64 bytes from 192.168.200.2: seq=28 ttl=64 time=0.437 ms
    64 bytes from 192.168.200.2: seq=29 ttl=64 time=0.428 ms
    64 bytes from 192.168.200.2: seq=31 ttl=64 time=0.354 ms
    64 bytes from 192.168.200.2: seq=32 ttl=64 time=0.307 ms
    64 bytes from 192.168.200.2: seq=33 ttl=64 time=0.341 ms
    64 bytes from 192.168.200.2: seq=34 ttl=64 time=0.324 ms
    64 bytes from 192.168.200.2: seq=36 ttl=64 time=0.316 ms
    64 bytes from 192.168.200.2: seq=37 ttl=64 time=0.347 ms
    64 bytes from 192.168.200.2: seq=38 ttl=64 time=0.392 ms
    64 bytes from 192.168.200.2: seq=39 ttl=64 time=0.443 ms
    64 bytes from 192.168.200.2: seq=40 ttl=64 time=0.338 ms
    64 bytes from 192.168.200.2: seq=41 ttl=64 time=0.351 ms
    64 bytes from 192.168.200.2: seq=42 ttl=64 time=0.324 ms
    64 bytes from 192.168.200.2: seq=43 ttl=64 time=0.425 ms
    64 bytes from 192.168.200.2: seq=44 ttl=64 time=0.369 ms
    64 bytes from 192.168.200.2: seq=45 ttl=64 time=0.338 ms
    64 bytes from 192.168.200.2: seq=46 ttl=64 time=0.346 ms
    64 bytes from 192.168.200.2: seq=48 ttl=64 time=0.424 ms
    64 bytes from 192.168.200.2: seq=49 ttl=64 time=0.427 ms
    64 bytes from 192.168.200.2: seq=50 ttl=64 time=0.345 ms
    64 bytes from 192.168.200.2: seq=51 ttl=64 time=0.401 ms
    64 bytes from 192.168.200.2: seq=52 ttl=64 time=0.344 ms
    64 bytes from 192.168.200.2: seq=53 ttl=64 time=0.325 ms
    64 bytes from 192.168.200.2: seq=54 ttl=64 time=0.340 ms
    64 bytes from 192.168.200.2: seq=55 ttl=64 time=0.383 ms
    64 bytes from 192.168.200.2: seq=56 ttl=64 time=0.448 ms
    64 bytes from 192.168.200.2: seq=57 ttl=64 time=0.352 ms
    64 bytes from 192.168.200.2: seq=58 ttl=64 time=0.345 ms
    64 bytes from 192.168.200.2: seq=59 ttl=64 time=0.397 ms
    64 bytes from 192.168.200.2: seq=80 ttl=64 time=0.530 ms
    64 bytes from 192.168.200.2: seq=81 ttl=64 time=0.321 ms
    64 bytes from 192.168.200.2: seq=82 ttl=64 time=0.348 ms
    64 bytes from 192.168.200.2: seq=83 ttl=64 time=0.333 ms
    64 bytes from 192.168.200.2: seq=84 ttl=64 time=0.468 ms
    64 bytes from 192.168.200.2: seq=85 ttl=64 time=0.452 ms
    64 bytes from 192.168.200.2: seq=86 ttl=64 time=0.387 ms
    64 bytes from 192.168.200.2: seq=87 ttl=64 time=0.413 ms
    64 bytes from 192.168.200.2: seq=88 ttl=64 time=0.336 ms
    64 bytes from 192.168.200.2: seq=89 ttl=64 time=0.397 ms
    64 bytes from 192.168.200.2: seq=90 ttl=64 time=0.441 ms
    ^C
    --- 192.168.200.2 ping statistics ---
    91 packets transmitted, 62 packets received, 31% packet loss
    round-trip min/avg/max = 0.307/0.377/0.703 ms
    root@am64xx-evm:~#
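As a side note, the loss figure in the statistics line above can be extracted for automated pass/fail checking. A minimal sketch (the `loss_pct` helper is illustrative, not part of the TI scripts):

```shell
# Extract the integer loss percentage from the busybox ping summary line.
loss_pct() {
  awk -F', *' '/packet loss/ { sub(/%.*/, "", $3); print $3 }'
}

# Example with the summary line from the log above:
echo "91 packets transmitted, 62 packets received, 31% packet loss" | loss_pct
# → 31
```

On the EVM this could gate an automated test, e.g. `[ "$(ping -c 100 192.168.200.2 | loss_pct)" -eq 0 ]`.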

    # EVM3 Terminal : RECEIVER
    
    am64xx-evm login: root
    root@am64xx-evm:~# ifconfig
    lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 127.0.0.1  netmask 255.0.0.0
            inet6 ::1  prefixlen 128  scopeid 0x10<host>
            loop  txqueuelen 1000  (Local Loopback)
            RX packets 80  bytes 6320 (6.1 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 80  bytes 6320 (6.1 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    root@am64xx-evm:~# ip link set dev eth1 up
    [   19.610528] remoteproc remoteproc10: unsupported resource 5
    [   19.622169] remoteproc remoteproc12: unsupported resource 5
    root@am64xx-evm:~# ip link set dev eth2 up
    root@am64xx-evm:~# ./hsr_setup_no_dup_off.sh hsr_hw eth1 eth2 192.168.200.2
    hsr_hw eth1 eth2 192.168.200.2
    ip=192.168.200.2
    if=hsr0
    mac=70:ff:76:1e:e6:f3
    slave-a=eth1
    slave-b=eth2
    device=platform/icssg1-eth
    hsr-tag-ins-offload: off
    hsr-tag-rm-offload: off
    hsr-fwd-offload: off
    hsr-dup-offload: off
    hsr-tag-ins-offload: on
    hsr-tag-rm-offload: on
    hsr-fwd-offload: on
    hsr-dup-offload: off
    hsr-tag-ins-offload: off
    hsr-tag-rm-offload: off
    hsr-fwd-offload: off
    hsr-dup-offload: off
    hsr-tag-ins-offload: on
    hsr-tag-rm-offload: on
    hsr-fwd-offload: on
    hsr-dup-offload: off
    [   28.461834] remoteproc remoteproc10: unsupported resource 5
    [   28.474477] remoteproc remoteproc12: unsupported resource 5
    root@am64xx-evm:~# ip link set eth1 down
    root@am64xx-evm:~# ip link set eth2 down
    root@am64xx-evm:~# ethtool -k eth1 | grep hsr
    hsr-tag-ins-offload: on
    hsr-tag-rm-offload: on
    hsr-fwd-offload: on
    hsr-dup-offload: off
    root@am64xx-evm:~# ethtool -k eth2 | grep hsr
    hsr-tag-ins-offload: on
    hsr-tag-rm-offload: on
    hsr-fwd-offload: on
    hsr-dup-offload: off
    root@am64xx-evm:~# ethtool -K eth1 hsr-dup-offload on
    root@am64xx-evm:~# ethtool -k eth1 | grep hsr
    hsr-tag-ins-offload: on
    hsr-tag-rm-offload: on
    hsr-fwd-offload: on
    hsr-dup-offload: on
    root@am64xx-evm:~# ethtool -K eth2 hsr-dup-offload on
    root@am64xx-evm:~# ethtool -k eth2 | grep hsr
    hsr-tag-ins-offload: on
    hsr-tag-rm-offload: on
    hsr-fwd-offload: on
    hsr-dup-offload: on
    root@am64xx-evm:~# devlink dev param set "platform/icssg1-eth" name hsr_offload_mode value true cmode runtime
    root@am64xx-evm:~# ip link set eth1 up
    [   74.875823] remoteproc remoteproc10: unsupported resource 5
    [   74.876573] remoteproc remoteproc12: unsupported resource 5
    root@am64xx-evm:~# ip link set eth2 up 
    root@am64xx-evm:~# ifconfig
    eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet6 fe80::72ff:76ff:fe1e:e6f3  prefixlen 64  scopeid 0x20<link>
            ether 70:ff:76:1e:e6:f3  txqueuelen 1000  (Ethernet)
            RX packets 0  bytes 0 (0.0 B)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 37  bytes 2934 (2.8 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    eth2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet6 fe80::72ff:76ff:fe1e:e6f3  prefixlen 64  scopeid 0x20<link>
            ether 70:ff:76:1e:e6:f3  txqueuelen 1000  (Ethernet)
            RX packets 0  bytes 0 (0.0 B)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 31  bytes 2534 (2.4 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    hsr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1494
            inet 192.168.200.2  netmask 255.255.255.0  broadcast 0.0.0.0
            inet6 fe80::72ff:76ff:fe1e:e6f3  prefixlen 64  scopeid 0x20<link>
            ether 70:ff:76:1e:e6:f3  txqueuelen 1000  (Ethernet)
            RX packets 0  bytes 0 (0.0 B)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 19  bytes 1348 (1.3 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 127.0.0.1  netmask 255.0.0.0
            inet6 ::1  prefixlen 128  scopeid 0x10<host>
            loop  txqueuelen 1000  (Local Loopback)
            RX packets 240  bytes 18960 (18.5 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 240  bytes 18960 (18.5 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    root@am64xx-evm:~#
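For reference, the manual sequence shown in this EVM3 log can be collected into one script. This is a sketch under the assumptions visible in the log (same interface names, IP address, setup script name, and `icssg1-eth` devlink device string); adjust for your board:

```shell
#!/bin/sh
# Bring up the ICSSG ports (this loads the PRU firmware).
ip link set dev eth1 up
ip link set dev eth2 up

# Create the hsr0 interface (TI demo script, dup offload initially off).
./hsr_setup_no_dup_off.sh hsr_hw eth1 eth2 192.168.200.2

# Ports must be down before toggling the offload features.
ip link set eth1 down
ip link set eth2 down
ethtool -K eth1 hsr-dup-offload on
ethtool -K eth2 hsr-dup-offload on

# Switch the ICSSG instance into HSR offload mode at runtime.
devlink dev param set platform/icssg1-eth name hsr_offload_mode value true cmode runtime

ip link set eth1 up
ip link set eth2 up
```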

    Could you confirm whether I enabled the offload correctly (please check the logs above)? If so, could you try to reproduce this on your side?

    Tianyi LIU

  • Hi Tianyi,

    Based on your EVM3 logs, I noticed that you disabled HSR duplicate offload and then turned it back on, on EVM3. My tests only performed that sequence on EVM2, keeping EVM1 and EVM3 fully offloaded.

    Additionally, only by first establishing a ping with dup offload OFF (observing 0% packet loss) and THEN re-enabling dup offload do we get 0% packet loss. However, as we discussed in today's call, this workaround is not viable for your customers, who may not be able to do this in production.

    Moving forward, I have requested an internal call with the SW developers working on this to discuss, and will give an update as soon as possible.

    -Daolin

  • Hi Daolin,

    Thank you for the update !

    Tianyi

  • Hello Tianyi,

    New update:

    Our SW team has fixed this issue and tested both the OOB use case and your use case of disabling the network services, and verified 0% packet loss. The fixes consist of updates to the HSR PRU firmwares and patches to the Linux driver files to enable HSR VLAN.

    These fixes will be officially in SDK 9.2.1 which will be released in a few weeks' time. However, if this timeframe does not work for you and your team, you can use the following branches now for testing. Please note: use these -cicd and -firmware-next branches with caution as these are not as stable as the official SDK 9.2.1 release.

    Linux Patches: https://git.ti.com/cgit/ti-linux-kernel/ti-linux-kernel/log/?h=ti-linux-6.1.y-cicd

    Firmwares: https://git.ti.com/cgit/processor-firmware/ti-linux-firmware/log/?h=ti-linux-firmware-next

    I verified on my 3-board EVM setup (EVM1 (eth1) <--> eth2 EVM2 eth1 <--> eth2 EVM3 eth1 <--> eth2 EVM1) the following tests with disabling network services use case pass with 0% packet loss:

    1. Ping EVM1 to EVM3, disconnect cable between EVM1-EVM3

    2. Ping EVM1 to EVM3, disconnect cable between EVM1-EVM2

    3. Ping EVM2 to EVM3, disconnect cable between EVM2-EVM3

    4. Iperf3 client on EVM1, iperf3 server on EVM3; check that the CPU load on EVM2 stays close to 100% idle. You can use "mpstat -P ALL 1" to check CPU idle time

    5. Multicast filtering without cable disconnection

    6. Multicast filtering with cable disconnection

    7. HSR VLAN tests specified in the below test sequence. 
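For test 4, the idle check can be scripted rather than read by eye. A sketch (the `check_idle` helper and the 95% threshold are my own illustration, not a TI-specified pass criterion):

```shell
# Report PASS if the all-CPU average %idle reported by mpstat is at least 95%.
check_idle() {
  awk '/^Average:/ && $2 == "all" { idle = $NF }
       END { if (idle >= 95) print "PASS idle=" idle; else print "FAIL idle=" idle }'
}

# Live usage on EVM2 (mpstat comes from the sysstat package):
#   mpstat -P ALL 1 5 | check_idle
```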

    HSR VLAN Tests
    Enable HSR offloads on all 3 boards
    
    Test ping/iperf between a pair of Nodes (A ↔ B, B ↔ C, A ↔ C) on both the direct and forwarding paths by plugging and unplugging the appropriate LAN cables
    
    1) Add multiple VLAN interfaces on Node A and Node C
    
    On Node A
    
    # ip link add link hsr0 name hsr0.2 type vlan id 2
    
    # ifconfig hsr0.2 192.168.20.10
    
    # ip link add link hsr0 name hsr0.3 type vlan id 3
    
    # ifconfig hsr0.3 192.168.30.10
    
    # ip link add link hsr0 name hsr0.4 type vlan id 4
    
    # ifconfig hsr0.4 192.168.40.10
    
    # ip link add link hsr0 name hsr0.5 type vlan id 5
    
    # ifconfig hsr0.5 192.168.50.10
    
    On Node C
    
    # ip link add link hsr0 name hsr0.2 type vlan id 2
    
    # ifconfig hsr0.2 192.168.20.50
    
    # ip link add link hsr0 name hsr0.3 type vlan id 3
    
    # ifconfig hsr0.3 192.168.30.50
    
    # ip link add link hsr0 name hsr0.4 type vlan id 4
    
    # ifconfig hsr0.4 192.168.40.50
    
    # ip link add link hsr0 name hsr0.5 type vlan id 5
    
    # ifconfig hsr0.5 192.168.50.50
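The per-VID commands above follow a fixed pattern, so they can be generated with a small helper. A sketch (the `gen_vlan_cmds` name and dry-run style are illustrative; pipe the output to `sh` to actually apply it):

```shell
# Print the VLAN setup commands for VIDs 2-5, given the node's host octet
# (10 for Node A, 50 for Node C in the sequence above).
gen_vlan_cmds() {
  local host_octet=$1
  for vid in 2 3 4 5; do
    echo "ip link add link hsr0 name hsr0.$vid type vlan id $vid"
    echo "ifconfig hsr0.$vid 192.168.${vid}0.$host_octet"
  done
}

gen_vlan_cmds 10   # Node A; use 50 for Node C
```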
    
    2) Now ping the VLAN and non-VLAN interfaces between Node A and Node C:
      
    
    ping 192.168.10.50
    ping 192.168.20.50
    ping 192.168.30.50
    ping 192.168.40.50
    ping 192.168.50.50
    
    While the ping is executing, disconnect LAN cable between Node A and C; Node B would forward packets even if it is not part of VLAN domains
    
    Repeat the test with Node C as iperf server and Node A as iperf client, for both VLAN and non-VLAN interfaces
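The iperf repetition would look like the following (sketch; iperf3 is assumed to be available on the SDK filesystem, and the -t duration is arbitrary):

```shell
# On Node C (server):
iperf3 -s

# On Node A (client), targeting Node C's VLAN 2 address for 30 seconds;
# repeat with the other VLAN addresses and the non-VLAN hsr0 address:
iperf3 -c 192.168.20.50 -t 30
```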
    
    3) Add a few VLAN interfaces on Node B
    
    # ip link add link hsr0 name hsr0.2 type vlan id 2
    
    # ifconfig hsr0.2 192.168.20.30
    
    # ip link add link hsr0 name hsr0.4 type vlan id 4
    
    # ifconfig hsr0.4 192.168.40.30
    
    Ping the VLAN (VID = 2 and VID = 4) and non-VLAN interfaces between Node A and Node B; ping on both the direct and forwarding paths should be functional
    
    Now ping the VLAN (VID = 3, VID = 5) interfaces between Node A and Node C; ping on both the direct and forwarding paths should be functional.
    
    4) Delete VLAN interface on Node B
    
    # ip link delete hsr0.2
    
    Now test VLAN, non VLAN interfaces between Node A and Node B , Node A and Node C;
    
    Note: Once VLAN interfaces have been added, all of them must be deleted in order to re-run the hsr_setup.sh script
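Per the note above, all hsr0.* VLAN interfaces must be removed before re-running hsr_setup.sh. A sketch of an automated cleanup (the `list_hsr_vlans` helper is illustrative):

```shell
# List hsr0.* VLAN sub-interfaces from `ip -o link` output, stripping
# the "@hsr0" parent suffix that iproute2 appends.
list_hsr_vlans() {
  awk -F': ' '{ sub(/@.*/, "", $2); if ($2 ~ /^hsr0\./) print $2 }'
}

# Live usage: delete every hsr0.* VLAN interface before re-running the script:
#   ip -o link show | list_hsr_vlans | xargs -r -n1 ip link delete
```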

    If you do end up using the -cicd branches and -firmware-next branches, please let us know if you encounter issues.

    -Daolin

  • Hi Daolin

    We will test these patches on our side with our internal integration. Could you just confirm the following:

    • What do you mean by a few weeks? Can we have a date for the release?
    • The current release is 09.02.01.09, so what is the next release you call "officially in SDK 9.2.1"? We are confused by the numbers.

    Thanks a lot

    Best regards

    Milan 

  • Hello Milan,

    >>>what is for you few weeks, can we have some date for release?

    We currently cannot give a specific release date beyond a general timeline of sometime this week or next week. The reason is a dependency on other internal releases/testing (not specifically related to the HSR issues).

    >>>current release is 09.02.01.09, and what is for you next "officially in SDK 9.2.1"... We are confused with numbers

    Apologies for the confusion, let me check on the version numbering with the team and get back to you on this.

    -Daolin

  • >>>current release is 09.02.01.09, and what is for you next "officially in SDK 9.2.1"... We are confused with numbers

    Okay, I checked with the team. My understanding of the numbering is that the "SDK 9.2.1" I referred to earlier will be named something similar to "09.02.xx.yy" (zz May 2024), most likely something like "09.02.01.yy" or "09.02.02.yy". xx, yy, zz will be determined about a week from now.

    Again, we cannot provide an exact date of the release due to some internal testing required to meet sw quality before public release. The best case scenario is sometime this week, worst case is in the next 2 weeks.

    -Daolin

  • Hello Daolin,

    We have successfully integrated the patches and conducted testing!

    - We performed a ping test between the boards and detached the cable during the test (manually or via command line), with no packets lost!

    - We also ran the various HSR VLAN tests, and every test passed!

    Tianyi

  • Hello Tianyi,

    Thanks for confirming that your tests also pass with the latest fixes. May I ask if you ran any other tests for HSR VLAN other than the sequence I shared? If so, is it possible for you to share those test sequences? The reason why I ask is to gather more information on how HSR VLAN can be tested (other than the method we used). 

    Additionally, I recall during the last biweekly call, Milan mentioned sharing the HSR test sequences your team was performing but I didn't see that documentation in an email correspondence. Is it still possible for that documentation to be shared with us?

    -Daolin

  • Hi Daolin,

    Yes, we can share all the tests by mail, no issue... Can you confirm whether you received the previous test doc by mail? I have re-sent the mail.

    There is a .7z file attached to this mail.

  • Hi Daolin,

    we did the integration on our side using the following tag:

    https://git.ti.com/cgit/arago-project/meta-ti/tag/?h=09.02.00.010

    It is 09.02.00.010

    We have not seen your official release yet. Could you confirm whether this tag will be part of the final release?

  • Hi Milan,

    The current information I'm getting is that 09.02.01.10 will be the version number for the official release. Official release is slated to be out this week or next week. 

    According to our previous communications, the CI/CD branch (or sharing a patch set) is okay for your team. Is this still the case? Of course, once the official release is out we recommend using the official version with the fixes. 

    I believe the tag you referenced should correspond to the official 09.02.01.10 release, based on inference from https://git.ti.com/cgit/ti-linux-kernel/ti-linux-kernel/tag/?h=09.02.00.010 (similar tag date). I will confirm with the internal team.

    -Daolin

  • Hi Milan,

    The information I got from the internal team is that 09.02.00.010 will be the tag for the new release that will be out this week.

    -Daolin