Hi,
My customer has a problem with the cut-through operation of Ethernet frames and would like you to confirm the following.
They are testing Ethernet frame cut-through using the AM64x GP EVM.
However, multicast frames are not forwarded by cut-through, while broadcast frames and unicast frames not addressed to the EVM itself are forwarded by cut-through.
In other words, a multicast frame received from the network on the eth0 port is not forwarded back out to the network on the eth1 port.
Likewise in the opposite direction, a multicast frame received on the eth1 port is not forwarded to the network on the eth0 port.
First, could you please confirm whether you can reproduce this behavior?
After that, could you please advise them on the root cause and a solution?
The multicast frames they used are:
MRP Multicast : 01:15:4E:00:00:01 (For test), 01:15:4E:00:00:02 (For Linkup)
LLDP Multicast : 01:80:c2:00:00:0E
IPv4 Multicast : 01:00:5E:7F:FF:FA (SSDP, IP address : 239.255.255.250)
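For reference, the IPv4 multicast MAC above follows the standard IP-to-MAC mapping (01:00:5e plus the low 23 bits of the group address), which is how 239.255.255.250 becomes 01:00:5E:7F:FF:FA. A quick shell illustration of the calculation (just for reference, not part of their test):
# second octet is masked to 7 bits (255 & 0x7f), the last two octets carry over unchanged
printf '01:00:5e:%02x:%02x:%02x\n' $((255 & 0x7f)) 255 250
# -> 01:00:5e:7f:ff:fa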
Note: cut-through works correctly between eth1 and eth2 when eth1 (CPSW3g) and eth2 (PRU_ICSSG) are used.
The cut-through settings on CPSW3g:
Cut-through is performed between eth0 and eth1 of CPSW3g.
They applied the settings described in the "! Note" on the following page:
3.2.2.7. CPSW3g — Processor SDK AM64X Documentation
The settings they actually applied are shown below (from the attached file).
#!/bin/sh
# Bring the ports down and assign MAC addresses
ip link set dev eth0 down
ip link set dev eth1 down
ip link set dev eth2 down
ip link set dev eth0 address 70:ff:76:1e:2c:9e
ip link set dev eth1 address 70:ff:76:1e:2c:9f
ip link set dev eth2 address ac:1f:0f:84:04:c8
# Enable CPSW3g switch mode
devlink dev param set platform/8000000.ethernet name switch_mode value true cmode runtime
# Enable cut-through for priority 0 on both CPSW3g ports
echo 1 > /sys/kernel/debug/8000000.ethernet/Port1/cut_thru_rx_pri_mask
echo 1 > /sys/kernel/debug/8000000.ethernet/Port1/cut_thru_tx_pri_mask
echo 1 > /sys/kernel/debug/8000000.ethernet/Port2/cut_thru_rx_pri_mask
echo 1 > /sys/kernel/debug/8000000.ethernet/Port2/cut_thru_tx_pri_mask
ethtool --set-priv-flags eth0 cut-thru on
ethtool --set-priv-flags eth1 cut-thru on
# Create the bridge and add eth0/eth1 as switch ports
ip link add name br0 type bridge
ip link set dev br0 address 70:ff:76:1e:2c:9d
ip link set dev br0 type bridge ageing_time 1000
ip link set dev eth0 up
sleep 1
ip link set dev eth1 up
sleep 1
ip link set dev eth0 master br0
ip link set dev eth1 master br0
ip link set dev br0 type bridge stp_state 0
ip link set dev br0 up
ip addr add 172.18.96.65/16 dev br0
bridge vlan add dev br0 vid 1 pvid untagged self
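For reference, one quick way to confirm that switch mode and the cut-through flags took effect (a sketch, assuming the same device path as in the script) is:
devlink dev param show platform/8000000.ethernet name switch_mode
ethtool --show-priv-flags eth0
ethtool --show-priv-flags eth1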
Also, multicast flooding is set to "on", as described on the following page:
3.2.2.7. CPSW3g — Processor SDK AM64X Documentation
Thanks and regards,
Hideaki
Hi,
I will have to research this a bit. Could you provide more detail about the test environment and the components of the setup?
Best Regards,
Schuyler
Hi Schuyler,
Thank you for your response. They're using the following setup:
HW : TMDS64GPEVM
SW : Processor SDK Linux 08.05.00.21 (linux-rt-5.10.153-rt76-g29dbc132eb)
Regards,
Hideaki
Hi Hideaki,
Thank you for the HW and SW descriptions. What I would like to know is what application the customer is using to send the multicast frames, and what devices are upstream and downstream of the GPEVM.
If cut-through is not enabled, are the multicast packets forwarded?
As I mentioned earlier, this will take a few days to set up and research with fellow team members. I expect to have more details by the middle of next week.
Best Regards,
Schuyler
If the same script is run without cut-through, i.e. with the following lines commented out:
echo 1 > /sys/kernel/debug/8000000.ethernet/Port1/cut_thru_rx_pri_mask
echo 1 > /sys/kernel/debug/8000000.ethernet/Port1/cut_thru_tx_pri_mask
echo 1 > /sys/kernel/debug/8000000.ethernet/Port2/cut_thru_rx_pri_mask
echo 1 > /sys/kernel/debug/8000000.ethernet/Port2/cut_thru_tx_pri_mask
ethtool --set-priv-flags eth0 cut-thru on
ethtool --set-priv-flags eth1 cut-thru on
do you observe the same issue?
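Alternatively, instead of editing the script, the same flags could probably be cleared at runtime (a sketch, assuming the priv-flag name and debugfs paths from the script above):
ethtool --set-priv-flags eth0 cut-thru off
ethtool --set-priv-flags eth1 cut-thru off
echo 0 > /sys/kernel/debug/8000000.ethernet/Port1/cut_thru_rx_pri_mask
echo 0 > /sys/kernel/debug/8000000.ethernet/Port1/cut_thru_tx_pri_mask
echo 0 > /sys/kernel/debug/8000000.ethernet/Port2/cut_thru_rx_pri_mask
echo 0 > /sys/kernel/debug/8000000.ethernet/Port2/cut_thru_tx_pri_mask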
Hi Schuyler, Pekka,
Thank you for your reply.
If cut-through is not enabled, are the multicast packets forwarded?
No, multicast packets are not forwarded even when cut-through is disabled.
Is there something wrong in their settings?
What I would like to know is what application the customer is using to send the multicast frames.
They are currently evaluating passing PROFINET communication through the CPSW3g.
The MRP multicast frames are the diagnostics and status messages sent from the MRP manager in the commercial PROFINET L2SW.
What devices are upstream and downstream of the GPEVM.
A PROFINET L2SW is connected to eth0, and a PROFINET IO device is connected to eth1.
The PROFINET engineering PC is connected to the L2SW mentioned above.
Thanks and regards,
Hideaki
If cut-through is not enabled, are the multicast packets forwarded?
No, multicast packets are not forwarded even when cut-through is disabled.
Is there something wrong in their settings?
So it seems like the base issue is that multicast is not working, regardless of whether cut-through is enabled.
Also, multicast flooding is set to "on".
How is this done? Was it:
bridge link set dev eth0 mcast_flood on
bridge link set dev eth1 mcast_flood on
Pekka
Hi Pekka,
Thank you for your comment.
Yes, they sometimes explicitly set it to ON using a command like the one you mentioned above, but the SDK clearly states that the default is ON. If you are concerned about the default status of multicast flooding, they would like you to tell them how to display the multicast flooding status with a command.
They've asked again: they'd like you to confirm whether you can also reproduce the same issue.
First, could you please confirm whether you can reproduce this behavior?
After that, could you please advise them on the root cause and a solution?
Thanks and regards,
Hideaki
Can you run the following command? It would display the current multicast memberships:
bridge mdb show
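Regarding the earlier question about displaying the multicast flooding status: the per-port flags, including mcast_flood, should be visible in the detailed bridge link output (assuming the iproute2 bridge utility in the SDK prints them), e.g.:
bridge -d link show dev eth0
bridge -d link show dev eth1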
Another thing to try is manually adding the multicast entry, something like:
bridge mdb add dev br0 port eth0 grp 239.255.255.250 permanent
bridge mdb add dev br0 port eth1 grp 239.255.255.250 permanent
Pekka
Hi Pekka,
Thank you for your reply. I'll ask them to try it.
Let me confirm: were you able to verify multicast and cut-through on your setup?
Thanks and regards,
Hideaki
I think we already agreed in this thread that the issue is unrelated to cut-through. I know the customer needs cut-through, but the issue is independent of the cut-through feature. So the problem is Ethernet multicast frame forwarding when switching is offloaded.
No, I have not been able to reproduce this directly; I am trying the 9.0 and 8.6 SDKs and various configurations. I suspect this is "just" a configuration issue with LLDP and/or multicast: something needs to have multicast/LLDP enabled and currently does not.
The main issue in reproducing this is generating and consuming Ethernet multicast. We don't have the setup with the Windows 10 application and PROFINET IO devices, so I'm looking to use Linux commands and traffic generators for multicast.
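For example, a UDP multicast stream can be generated and consumed between two Linux hosts with socat (just a sketch; socat and the group/port below are assumptions, not part of the customer's setup):
# receiver: join 239.255.255.250 on eth0 and print incoming datagrams
socat UDP4-RECVFROM:1900,ip-add-membership=239.255.255.250:eth0,fork -
# sender: transmit one test datagram to the multicast group
echo "multicast test" | socat - UDP4-DATAGRAM:239.255.255.250:1900,ip-multicast-ttl=4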
Pekka
I looked into this, and it looks likely that this is related to ALE address management. Is the use case with VLANs? The options would be either adding the addresses explicitly or turning VLAN-aware mode off. Did you have a chance to try the
bridge mdb add
commands?
Pekka
Hi Pekka,
Thank you for the additional advice. I'll ask the customer to try it, but could you please respond to their comments below before I do?
The main issue in reproducing this is generating and consuming Ethernet multicast. We don't have the setup with the Windows 10 application and PROFINET IO devices,
They think that you can send UDP/IP multicast on the network if you have a Linux machine connected to the network, for example as shown below.
$ ping -c 5 224.0.0.1
PING 224.0.0.1 (224.0.0.1) 56(84) bytes of data.
64 bytes from 192.168.1.100: icmp_seq=1 ttl=255 time=0.307 ms
64 bytes from 192.168.1.16: icmp_seq=1 ttl=255 time=2.95 ms (DUP!)
64 bytes from 192.168.1.17: icmp_seq=1 ttl=64 time=80.8 ms (DUP!)
64 bytes from 192.168.1.100: icmp_seq=2 ttl=255 time=0.294 ms
64 bytes from 192.168.1.16: icmp_seq=2 ttl=255 time=1.14 ms (DUP!)
64 bytes from 192.168.1.100: icmp_seq=3 ttl=255 time=0.291 ms
64 bytes from 192.168.1.16: icmp_seq=3 ttl=255 time=1.14 ms (DUP!)
64 bytes from 192.168.1.17: icmp_seq=3 ttl=64 time=13.8 ms (DUP!)
64 bytes from 192.168.1.100: icmp_seq=4 ttl=255 time=0.296 ms
64 bytes from 192.168.1.16: icmp_seq=4 ttl=255 time=1.14 ms (DUP!)
64 bytes from 192.168.1.17: icmp_seq=4 ttl=64 time=27.5 ms (DUP!)
64 bytes from 192.168.1.100: icmp_seq=5 ttl=255 time=0.293 ms
--- 224.0.0.1 ping statistics ---
5 packets transmitted, 5 received, +7 duplicates, 0% packet loss, time 4032ms
rtt min/avg/max/mdev = 0.291/10.847/80.848/22.529 ms
I suspect this is "just" a configuration issue with LLDP and/or multicast: something needs to have multicast/LLDP enabled and currently does not.
They don't think so, because this issue doesn't occur when the eth1 and eth2 ports are used, as mentioned below.
Note: cut-through works correctly between eth1 and eth2 when eth1 (CPSW3g) and eth2 (PRU_ICSSG) are used.
They think it's SSDP, not the LLDP you mentioned, because of the 239.255.255.250 UDP/IP multicast address. Is that correct?
Also, they tried the following commands that you suggested, but nothing changed.
bridge mdb add dev br0 port eth0 grp 239.255.255.250 permanent
bridge mdb add dev br0 port eth1 grp 239.255.255.250 permanent
Thanks and regards,
Hideaki
Sorry for the delay; I have some progress. I set up a small daisy chain:
Ubuntu desktop <-> AM64x CPSW3G <-> AM62A CPSW3G
I configured the middle device as a switch using this script (/cfs-file/__key/communityserver-discussions-components-files/791/am64x_5F00_cpsw_5F00_switch_5F00_vlan_5F00_on.sh). The VLAN-specific commands at the end of the script are required in the 9.0 SDK; for 8.6, this script achieves the same: /cfs-file/__key/communityserver-discussions-components-files/791/5025.am64x_5F00_cpsw_5F00_switch_5F00_on.sh
To check multicast I used ping 224.0.0.1, which is the default group that also shows up when you check ip maddr show.
The Linux kernel by default ignores multicast pings, so in order to ping using a multicast address I needed to set the following on each target AM6x device that I want to respond to the ping:
sysctl -w net.ipv4.icmp_echo_ignore_broadcasts=0
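To read the value back, or to make it persistent across reboots, something like the following should work (an assumption about the target's sysctl setup, not verified on the EVM):
sysctl net.ipv4.icmp_echo_ignore_broadcasts
echo "net.ipv4.icmp_echo_ignore_broadcasts=0" >> /etc/sysctl.conf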
With this the AM64x in the middle responds to the ping, but multicast is still not getting sent to the AM62A. I'm checking on this and will add to this post.
Sorry about taking a few days on this, but it now looks like I have a proposed solution for the switched traffic as well. With CPSW3G you need to manually add the multicast addresses, as shown below for 224.0.0.1 (which maps to 01:00:5e:00:00:01):
bridge mdb add dev br0 port br0 grp 01:00:5e:00:00:01 permanent vid 1
bridge mdb add dev br0 port eth0 grp 01:00:5e:00:00:01 permanent vid 1
bridge mdb add dev br0 port eth1 grp 01:00:5e:00:00:01 permanent vid 1
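To confirm the entries were installed, the membership table can be listed again (assuming the same bridge name br0):
bridge mdb show dev br0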
With the ICSSG-based switch, multicast flooding can be enabled with:
bridge link set dev eth0 mcast_flood on
bridge link set dev eth1 mcast_flood on
I'm checking whether mcast_flood would also be expected to work with CPSW3G, but with the mdb commands above multicast works. I tested these with the AM64x SK and EVM using ping 224.0.0.1. We are still trying to figure out whether mcast_flood is the correct SW interface for configuring this, but for proceeding with testing, the mdb addition works.
Yes, they sometimes explicitly set it to ON using a command like the one you mentioned above, but the SDK clearly states that the default is ON
Note that the default is to flood to the local CPU port ("CPU port mcast_flooding is always on" in the SW documentation), but not on the switched ports. So by default br0 gets multicast, while eth0 and eth1 do not.
Pekka
Hi Pekka,
Thank you for your reply, but unfortunately this has not yet been solved.
The customer tried the steps below, which you mentioned above:
bridge mdb add dev br0 port br0 grp 01:00:5e:00:00:01 permanent vid 1
bridge mdb add dev br0 port eth0 grp 01:00:5e:00:00:01 permanent vid 1
bridge mdb add dev br0 port eth1 grp 01:00:5e:00:00:01 permanent vid 1
bridge link set dev eth0 mcast_flood on
bridge link set dev eth1 mcast_flood on
However, they get the error shown below when they try the above bridge commands. It does not work as you described.
They're using the bridge command (version 5.10.0) in SDK 08.06.00.
Which version are you using?
Could you confirm this with the same version at your end as well?
--------------------------------------------------------------------------------
# bridge mdb add dev br0 port br0 grp 01:00:5e:00:00:01 permanent vid 1
Invalid address "01:00:5e:00:00:01"
# bridge -V
bridge utility, 5.10.0
--------------------------------------------------------------------------------
Thanks and regards,
Hideaki
I ran this using the 9.0 SDK, not 8.6. 9.0 is a significant update; it looks like the bridge command was updated to version 5.17. Is there a reason to stay with 8.6 as opposed to using the latest?
It looks like I see the same error with the older SDK. Rather than trying to figure out what has changed, would it be possible to move to the latest SDK?
Pekka
Hi Pekka,
Thank you so much for supporting this. As I told you offline, my customer was able to confirm cut-through with the latest PSDK 9.1, but another issue has occurred. Please support the other thread as well.
Thanks and regards,
Hideaki