Tool/software:
eth0 and eth1 are network ports in CPSW switch mode. I want to configure the ports as follows:
(1) There is no speed limit between the two network ports.
(2) The rate at which traffic from the two network ports enters the MPU is limited to 10Mbps.
Hello Zitong,
Can you first help me understand your questions by answering the below?
1. Are you evaluating on a TI AM64x EVM or is this on a custom designed board featuring AM6422?
2. Which operating system are you using, and what SDK version?
3. What is your test topology? In other words, what do you have connected to eth0 and/or eth1 and what tests are you running?
4. Can you explain what you mean by "speed limit" between the two ports?
-Daolin
Thank you Daolin, I'm sorry I didn't describe my problem clearly.
1. I am using a custom designed board featuring AM6422.
2. I am using Linux Debian OS, and it is built on the senux platform, not built directly with the TI SDK.
3. I will connect eth0 and eth1 to my PC. For example, the eth0/eth1 bridge IP is 192.168.1.xxx, and my PC's Ethernet has two IPs, 192.168.2.10 and 192.168.2.11. I can run both the iperf server and iperf client on my PC to test the Linux Ethernet data transfer rate through the switch.
Hi Zitong,
Thanks for sharing these details.
3. I will connect eth0 and eth1 to my PC. For example, the eth0/eth1 bridge IP is 192.168.1.xxx, and my PC's Ethernet has two IPs, 192.168.2.10 and 192.168.2.11. I can run both the iperf server and iperf client on my PC to test the Linux Ethernet data transfer rate through the switch.
What netmask are you using for your IP address configurations on your custom board and your host PC?
Does your PC only have one eth port with two ip addresses? Or does it have two eth ports with one IP address for each?
Can you share a diagram of your custom board to PC connections and label the IP address for each port?
-Daolin
The netmask on both my PC and my Linux board is 255.255.255.0.
My PC has two Ethernet ports, connected to my Linux board's ports respectively.
Hello Zitong,
The netmask on both my PC and my Linux board is 255.255.255.0.
My PC has two Ethernet ports, connected to my Linux board's ports respectively.
Thanks for clarifying this. However, it would help if you also shared a picture of your connections. Right now, it appears that you have something like the below?
port1 PC (192.168.2.10) <--> eth0 br0 (DUT 192.168.1.xxx) eth1 <--> port2 PC (192.168.2.11)
-Daolin
Hi Zitong,
Your switch IP address is on a different subnet than the two eth interfaces on your PC. In order to ensure network communication between all interfaces (PC eth interfaces and the br0 on the DUT) you should try testing with the bridge on the same subnet (e.g. change to 192.168.2.101).
Additionally, how are you setting up the switch on the DUT? Specifically what commands did you use to configure the CPSW as a switch?
-Daolin
My switch IP address is indeed on a different subnet than the PC IPs, so they can't communicate. However, I just want to test the transfer rate inside the switch, so I run the iperf server and iperf client both on my PC; the Linux Ethernet ports just forward the packets. It works.
I use the devlink command to configure the CPSW as a switch:
devlink dev param set platform/8000000.ethernet name switch_mode value true cmode runtime
Hi Zitong,
I use the devlink command to configure the CPSW as a switch
The devlink command ensures the CPSW is configured as a hardware switch. You should also ensure that the following is run to fully set up the switch:
devlink dev param set platform/c000000.ethernet name switch_mode value true cmode runtime
ip link add name br0 type bridge
ip link set dev br0 type bridge ageing_time 1000
ip link set dev eth0 up
ip link set dev eth1 up
ip link set dev eth0 master br0
ip link set dev eth1 master br0
ip link set dev br0 up
ip link set dev br0 type bridge vlan_filtering 1
bridge vlan add dev br0 vid 1 self
bridge vlan add dev br0 vid 1 pvid untagged self
Just for testing purposes, do you see a similar rate limit issue if your switch IP address is on the same subnet?
-Daolin
I ran exactly the commands above. If my switch IP address is on the same subnet, it also works.
I think the IP question we discussed is not very important. I just want to set the rate limits below:
(1) There is no speed limit between the two network switch ports.
(2) The rate at which traffic from the two network ports enters the MPU is limited to 10Mbps.
I just want to set the rate limits below:
(1) There is no speed limit between the two network switch ports.
(2) The rate at which traffic from the two network ports enters the MPU is limited to 10Mbps.
Just to clarify, are you saying the problem is that you're seeing the throughput limited to 10Mbps when testing with iperf3, or are you saying you want to be able to configure a 10Mbps rate limit?
I'm currently out of office and will not be able to reproduce your test setup for the time being.
-Daolin
Hello Zitong,
Daolin is out of the office for the rest of March. Feel free to ping the thread in early April if you have not received a response.
Regards,
Nick
Hi,
I have reviewed the thread and I have some questions. But first, if possible, let's keep all the devices on the same subnet.
- Is 10Mbps the desired speed, or is that what you are measuring?
- Please attach the results of `ethtool eth0` and `ethtool eth1`; I am looking to see what data rates the ports are connecting at.
Best Regards,
Schuyler
Hi Schuyler,
(1) 10Mbps is the desired speed from the eth ports to the MCU.
(2) The ethtool results are below.
OK, let's keep all devices on the same subnet. I will verify this through the connections below. First, I will run the iperf client and iperf server both on my PC to verify the switch-mode function; iperf will run at 100Mbps, and I expect to see that the 100Mbps is not limited. Second, I will run the iperf server on my Linux board and the iperf client on my PC to verify the rate limit to the MCU; for example, I will run the command
Thanks
Hi,
After discussing with the developer, we recommend reviewing this linked documentation:
To limit the rate, there is a Credit Based Shaper (CBS) feature, already enabled, that limits the rate at which traffic moves from sender to receiver. Also, we don't think there is any internal rate limiting in the bridge setup; it works like any hardware switch, forwarding traffic based on MAC address.
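For reference, the generic Linux tc-cbs interface looks roughly like the below. This is only a sketch: the interface name, queue layout, and slope values are illustrative assumptions (idleslope/sendslope are in kbit/s, summing to the 100Mbps link speed), and the TI CPSW documentation should be followed for device-specific setup.

```shell
# Create an mqprio qdisc so traffic classes map to hardware queues
# (16-entry priority map; queue counts here are illustrative assumptions)
tc qdisc add dev eth0 parent root handle 100 mqprio num_tc 2 \
    map 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 queues 1@0 1@1 hw 0

# Attach a Credit Based Shaper to one traffic class, reserving ~10Mbps
# on a 100Mbps link (idleslope + |sendslope| = link speed, in kbit/s)
tc qdisc replace dev eth0 parent 100:1 cbs \
    idleslope 10000 sendslope -90000 hicredit 100 locredit -1000 offload 1
```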
Best Regards,
Schuyler
Hi,
I have read the link you gave, but I think it may not be what we want. The link gives some examples of limiting the rate by port, like 5001 and 5002; data transferred through the corresponding port is limited to the configured rate.
However, we want all packets transferred through eth0 or eth1 to the CPU to be limited to 10Mbps.
I'm not sure whether my understanding of the link is correct.
Thanks
Hi,
I should have asked this question earlier: have you looked at changing the DTS to limit the CPSW port to 10Mbps? An example would be adding this property to the CPSW node:
max-speed = <10>;
This limits the interface to 10Mbps.
Best Regards,
Schuyler
I tried this method today; it seems no changes happened. The rate between the switch ports and the rate from the ports to the CPU are still 100Mbps.
Hi,
I apologize, I told you the wrong node to put the property in. Could you please try putting the property in the PHY node, and remove it from the CPSW node? Again, apologies for not indicating the correct place to put the property.
Best Regards,
Schuyler
This is an example of how it would work:
&phy1 {
max-speed = <100>;
};
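As a sanity check (a sketch; the interface names are assumptions and may differ on your board), the negotiated link speed can be read back from Linux after boot:

```shell
# Print the negotiated speed reported by the MAC/PHY for each port
ethtool eth0 | grep -i speed
ethtool eth1 | grep -i speed
```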
Hi,
I tried putting the property in the PHY node today; the changes are below.
However, after this property was set, the rate between the switch ports and the rate from the ports to the CPU both changed to 10Mbps. I guess this property sets the ingress rate of the port, but what I expect is for the rate between the switch ports to remain 100Mbps.
Hi,
This property only sets the MAC to a maximum speed of 10Mbps, for both RX and TX. Inside the CPU, so to speak, the transfer is not limited; the CPSW is rated for 1Gbps speeds in both directions. I may not be following what you are trying to do. Could you please explain how the 100Mbps is necessary at the switch level?
Best Regards,
Schuyler
We want our device to have no rate limit when it works only as a switch, but in our application field we have a requirement to limit the rate to 10Mbps; specifically, the rate to the CPU.
Hi,
I might understand what you are saying, but please allow me to add this additional explanation. The max-speed property sets the MAC/PHY interface between the link partners to this bit rate, so all packets entering or departing the MAC are limited to this bit rate. This does not limit the internal processing bit rate.
Best Regards,
Schuyler
We want our device to have no rate limit when it works only as a switch, but in our application field we have a requirement to limit the rate to 10Mbps; specifically, the rate to the CPU.
Hi Zitong
Which direction do you want to limit the traffic speed?
For output from the CPU, you can use this command; it limits traffic to about 90Mbps:
tc qdisc replace dev eth1 root tbf limit 120mb rate 90mbit burst 5mb
You can adjust the parameters to fit your speed configuration.
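If useful, the installed shaper can be inspected or removed afterwards; a sketch, using the same device name as above:

```shell
# Show the tbf qdisc with its statistics (sent bytes/packets, drops)
tc -s qdisc show dev eth1

# Delete the root qdisc to remove the rate limit again
tc qdisc del dev eth1 root
```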
Regards
Semon
Hello Zitong,
We want our device to have no rate limit when it works only as a switch, but in our application field we have a requirement to limit the rate to 10Mbps; specifically, the rate to the CPU.
If both the ingress and egress ports (eth0 and eth1 of the switch) are limited to 10Mbps, then this will limit the overall rate of traffic; even if the switch were at 100Mbps, the traffic would still be limited to 10Mbps at both the ingress and egress ports. This would be a different story if you were only limiting one of the two switch ports to 10Mbps. Is your requirement to have both ports limited to 10Mbps at the same time, or just one of the ports?
Additionally, I'm not fully understanding your requirement of having a rate limit in the switch. Since you have configured a hardware switch, the traffic is simply transferred from one port to the other as quickly as the hardware can pass it. Were you able to verify that the switch was transferring at a rate of 100Mbps in the original test, without the 10Mbps device tree modification?
-Daolin
Hi, Semon
We want to limit both the input path to the CPU and the output path from the CPU.
Hi Daolin,
We want to limit both ports to 10Mbps at the same time.
Yes, I have already verified that the switch transfer can achieve 100Mbps without any modifications. As you can see, for example, a packet entering one port may take two paths: one to the other switch port, and the other into the CPU. We only want to limit the packets into the CPU to a rate of 10Mbps.
We want to limit both the input path to the CPU and the output path from the CPU.
Hi Zitong
You can try the command I provided; it can limit the output traffic from the CPU.
Let's work together to find the proper command to limit the input traffic to the CPU.
Regards
Semon
Hi, Schuyler
This actually doesn't limit the internal rate, but because all packets entering the MAC are limited to 10Mbps, the rate between the switch ports certainly can't achieve 100Mbps.
Hi Semon,
I have tried the command `tc qdisc replace dev eth0 root tbf limit 20mb rate 10mbit burst 5mb`; it actually limits the egress rate to 10Mbps.
Hi team
These are the customer's needs after an on-site meeting with them.
The customer will use the CPSW to implement an Ethernet ring topology.
1. A device in the ring wants to communicate with other devices in the ring at a rate of 100Mbps.
2. A device wants to communicate with the CPU, which passes through the internal port Port0; this rate is limited to 10Mbps to avoid Ethernet storms or other impacts.
Regards
Zekun
Hello Zekun,
Thanks for clarifying the specific requirements and sharing the block diagram.
I have tried the command `tc qdisc replace dev eth0 root tbf limit 20mb rate 10mbit burst 5mb`; it actually limits the egress rate to 10Mbps.
We want to limit both the input path to the CPU and the output path from the CPU.
I assume the suggestion from Semon addresses the CPU-to-external-port (transmit) rate limit? It appears that using the tc qdisc command to rate limit ingress traffic is not very straightforward, since Linux does not seem to support traffic shaping on ingress. I am not an expert in using tc qdisc, but some resources indicate that it could be possible to rate limit ingress traffic by redirecting it to a virtual Ethernet interface and then shaping the egress of that virtual interface: https://linux-man.org/2021/09/24/how-to-limit-ingress-bandwith-with-tc-command-in-linux/
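A rough sketch of that redirect approach using an IFB (Intermediate Functional Block) device might look like the below; this is untested on AM64x, and the interface names and rate/burst values are assumptions:

```shell
# Load the IFB module and bring up a virtual interface to receive redirected traffic
modprobe ifb numifbs=1
ip link set dev ifb0 up

# Redirect all ingress traffic arriving on eth0 to ifb0
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol all u32 match u32 0 0 \
    action mirred egress redirect dev ifb0

# Shape the egress of ifb0, which effectively limits eth0 ingress to ~10Mbps
tc qdisc add dev ifb0 root tbf rate 10mbit burst 32k latency 400ms
```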
Another option is to use interrupt pacing to indirectly limit the rate of ingress traffic: https://software-dl.ti.com/processor-sdk-linux/esd/AM62X/latest/exports/docs/linux/Foundational_Components/Kernel/Kernel_Drivers/Network/CPSW3g.html#interrupt-pacing. This, however, doesn't seem to give you complete control over the actual rate of the ingress traffic.
I will need to discuss internally on what other options we can do to limit the ingress traffic. Please kindly ping this thread if I haven't responded with an update by Friday.
-Daolin
I assume the suggestion from Semon addresses the CPU-to-external-port (transmit) rate limit? It appears that using the tc qdisc command to rate limit ingress traffic is not very straightforward, since Linux does not seem to support traffic shaping on ingress. I am not an expert in using tc qdisc, but some resources indicate that it could be possible to rate limit ingress traffic by redirecting it to a virtual Ethernet interface and then shaping the egress of that virtual interface: https://linux-man.org/2021/09/24/how-to-limit-ingress-bandwith-with-tc-command-in-linux/
Another option is to use interrupt pacing to indirectly limit the rate of ingress traffic: https://software-dl.ti.com/processor-sdk-linux/esd/AM62X/latest/exports/docs/linux/Foundational_Components/Kernel/Kernel_Drivers/Network/CPSW3g.html#interrupt-pacing. This, however, doesn't seem to give you complete control over the actual rate of the ingress traffic.
I will need to discuss internally on what other options we can do to limit the ingress traffic. Please kindly ping this thread if I haven't responded with an update by Friday.
-Daolin
Hello Daolin
Any update about how to limit the ingress traffic?
Regards
Semon
Hi Semon,
In order to limit 100Mbps of incoming traffic to 10Mbps at the host CPSW port, it would be expected that a significant number of packets are dropped (90Mbps worth) to meet 10Mbps. Is this your intention for the use case? If so, is there a requirement on which types of packets get dropped and which should be passed through to the host CPSW port?
Rate limiting on the ingress path looks to be very nontrivial, so it will take us more time to look into this.
-Daolin
In order to limit 100Mbps of incoming traffic to 10Mbps at the host CPSW port, it would be expected that a significant number of packets are dropped (90Mbps worth) to meet 10Mbps. Is this your intention for the use case? If so, is there a requirement on which types of packets get dropped and which should be passed through to the host CPSW port?
Rate limiting on the ingress path looks to be very nontrivial, so it will take us more time to look into this.
Hello Daolin
The rate limit is used to protect the CPU from broadcast storms, so any traffic type may be dropped; the only requirement is to limit traffic to the CPU to 10Mbps.
Regards
Semon
Hi Semon,
To limit the rate, there is a Credit Based Shaper (CBS) feature, already enabled, that limits the rate at which traffic moves from sender to receiver. Also, we don't think there is any internal rate limiting in the bridge setup; it works like any hardware switch, forwarding traffic based on MAC address.
I have read the link you gave, but I think it may not be what we want. The link gives some examples of limiting the rate by port, like 5001 and 5002; data transferred through the corresponding port is limited to the configured rate.
However, we want all packets transferred through eth0 or eth1 to the CPU to be limited to 10Mbps.
I would like to go back to the credit-based shaper suggestion that Schuyler previously proposed. https://software-dl.ti.com/processor-sdk-linux/esd/AM64X/latest/exports/docs/linux/Foundational_Components/Kernel/Kernel_Drivers/Network/CPSW-CBS.html#cbs-in-switch-mode
First, the documentation indicates that in order to limit the rate on ingress, the sender device needs to be configured with the rate limit on its own egress port, connected to the ingress of the switch device. To me, this indicates that the rate on the line would be limited as well, essentially performing the same as `ethtool -s ethX speed 10 duplex full`, which you and Zitong have indicated is not what you need for your use case.
Another section, https://software-dl.ti.com/processor-sdk-linux/esd/AM64X/latest/exports/docs/linux/Foundational_Components/Kernel/Kernel_Drivers/Network/CPSW-CBS.html#rate-limiting-host-port-ingress-on-am625-sk, could be something to try, to see if the ingress rate actually gets limited. However, this involves the use of a specific port that the traffic needs to be sent through so that the rate limiting gets applied. From my understanding, this is necessary because using CBS requires Fixed Priority mode, which requires traffic to be assigned a specific priority, identified by the port the traffic passes through.
Another option we need to look into more is eBPF, but from my understanding, our CPSW driver probably doesn't support eBPF currently.
-Daolin
Hi Daolin,
I wonder if there is a method to achieve the rate limit by modifying the kernel?
I wonder if there is a method to achieve the rate limit by modifying the kernel?
Hello Zitong
I tried the following two commands to limit the ingress traffic; it works on the EVM.
------------------------------------------
method1:
tc qdisc add dev eth0 ingress
tc filter add dev eth0 parent ffff: protocol ip u32 match ip src 0.0.0.0/0 flowid :1 police rate 30.0mbit mtu 10000 burst 20k drop
method2:
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: u32 match u32 0 0 police rate 14mbit burst 20k
------------------------------------
Please try it on your system and see the result.
Regards
Semon
Hi Semon,
Thank you very much for the methods you have given.
I want to know how to change the parameters in method 1 to modify the rate limit. I looked up some information about the tc command, and I think the most critical parameter is `rate 30.0mbit`; I think 30.0mbit is the average rate, but the iperf result shows a rate lower than I expect. And if I want to change my rate limit to 40Mbps, what parameters should I change?
I want to know how to change the parameters in method 1 to modify the rate limit. I looked up some information about the tc command, and I think the most critical parameter is `rate 30.0mbit`; I think 30.0mbit is the average rate, but the iperf result shows a rate lower than I expect. And if I want to change my rate limit to 40Mbps, what parameters should I change?
I tested on the EVM; it's about 10Mbps with the above settings. You can adjust these 3 parameters (rate, mtu, burst); it's not very accurate and needs testing.
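For instance, to target roughly 40Mbps, the police rate (and, proportionally, the burst) in method 1 could be raised; a sketch to be tuned by testing, reusing the same filter shape, with the burst value an assumption:

```shell
# Reset the ingress qdisc, then police ingress to ~40Mbps with a scaled burst
tc qdisc del dev eth0 ingress
tc qdisc add dev eth0 ingress
tc filter add dev eth0 parent ffff: protocol ip u32 match ip src 0.0.0.0/0 \
    flowid :1 police rate 40mbit mtu 10000 burst 80k drop
```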
Hello Zitong,
It sounds like Semon's suggestion has enabled you to limit the ingress rate. Do you have any additional questions?
-Daolin