TDA4VH-Q1: question about using TSN-CBS

Part Number: TDA4VH-Q1
Other Parts Discussed in Thread: TDA4VH

Hi experts,

I am currently using the TDA4VH development board with SDK 8.6, and I am testing with the native Ethernet driver in switch mode.

While testing the CBS feature, I came across the following thread:
https://e2e.ti.com/support/processors-group/processors/f/processors-forum/1233877/dra821-cpsw-cbs-configuration-and-linux-standard-tc-cbs-interface?tisearch=e2e-sitesearch&keymatch=CBS#

That thread mentions that both standard CBS shaping and the "shaper bw_rlimit" method with "min_rate" and "max_rate" are supported.

I would like to verify the standard CBS configuration, so I used the following commands to configure the CBS queues.

# Bring the ports down and enable 8 TX queues on eth2
ifconfig eth1 down
ifconfig eth2 down
ifconfig eth3 down
ifconfig eth4 down
ethtool -L eth2 tx 8
ethtool --set-priv-flags eth2 p0-rx-ptype-rrobin off

# Enable CPSW switch mode at runtime and create the bridge
devlink dev param set platform/c000000.ethernet name switch_mode value true cmode runtime
ip link add name br0 type bridge
ip link set dev br0 type bridge ageing_time 1000

ip link set dev eth1 up
ip link set dev eth2 up
ip link set dev eth3 up
ip link set dev eth4 up

ip link set dev eth1 master br0
ip link set dev eth2 master br0
ip link set dev eth3 master br0
ip link set dev eth4 master br0

ifconfig br0 up

bridge vlan add dev eth1 vid 11 master
bridge vlan add dev eth2 vid 11 master
bridge vlan add dev eth3 vid 11 master
bridge vlan add dev eth4 vid 11 master

# mqprio: 8 traffic classes mapped 1:1 to the 8 TX queues (hw 0 = no hardware offload)
tc qdisc add dev eth2 handle 100: parent root mqprio num_tc 8 \
            map 0 1 2 3 4 5 6 7 0 0 0 0 0 0 0 0 \
            queues 1@0 1@1 1@2 1@3 1@4 1@5 1@6 1@7 \
            hw 0

# CBS on each TX queue: idleslope 20 Mbit/s, sendslope -980 Mbit/s (values sized for a 1 Gbit/s port)
tc qdisc replace dev eth2 parent 100:1 cbs \
            locredit -1470 hicredit 30 sendslope -980000 idleslope 20000
tc qdisc replace dev eth2 parent 100:2 cbs \
            locredit -1470 hicredit 30 sendslope -980000 idleslope 20000
tc qdisc replace dev eth2 parent 100:3 cbs \
            locredit -1470 hicredit 30 sendslope -980000 idleslope 20000
tc qdisc replace dev eth2 parent 100:4 cbs \
            locredit -1470 hicredit 30 sendslope -980000 idleslope 20000
tc qdisc replace dev eth2 parent 100:5 cbs \
            locredit -1470 hicredit 30 sendslope -980000 idleslope 20000
tc qdisc replace dev eth2 parent 100:6 cbs \
            locredit -1470 hicredit 30 sendslope -980000 idleslope 20000
tc qdisc replace dev eth2 parent 100:7 cbs \
            locredit -1470 hicredit 30 sendslope -980000 idleslope 20000
tc qdisc replace dev eth2 parent 100:8 cbs \
            locredit -1470 hicredit 30 sendslope -980000 idleslope 20000
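
For reference, the hicredit/locredit values above are consistent with the usual tc-cbs credit formulas, assuming a 1 Gbit/s port and ~1500-byte maximum frames:

    idleslope = reserved bandwidth                      =  20000 kbit/s  (20 Mbit/s)
    sendslope = idleslope - port_rate                   =  20000 - 1000000 = -980000 kbit/s
    hicredit  = max_frame_size * idleslope / port_rate  =  1500 * 0.02    =    30 bytes
    locredit  = max_frame_size * sendslope / port_rate  =  1500 * (-0.98) = -1470 bytes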

When I test the bandwidth with iperf, I do not see the expected bandwidth-limiting effect.

So could you please tell me how to configure CBS correctly?

  • Hi Tanmay,

    I have implemented the bandwidth-limiting functionality according to the instructions in the documentation.

    Our current requirement is to configure it using the standard credit-based approach. 

    If the following commands are not supported, can you please explain the difference between using "min_rate" and "max_rate" compared to using "idleslope" and "sendslope"?

    tc qdisc add dev eth2 handle 100: parent root mqprio num_tc 8 \
                map 0 1 2 3 4 5 6 7 0 0 0 0 0 0 0 0 \
                queues 1@0 1@1 1@2 1@3 1@4 1@5 1@6 1@7 \
                hw 0

    tc qdisc replace dev eth2 parent 100:1 cbs \
                locredit -1470 hicredit 30 sendslope -980000 idleslope 20000

  • Hi Matthew,

    The rate limiting is only configured in hardware when the min_rate/max_rate options are used in the mqprio qdisc with the "hw" flag set to "1". Otherwise, the rate limiting is done in software only.

    With your configuration as well, the rate limiting is in software only. To configure it in hardware, the min_rate/max_rate options are needed in the mqprio qdisc.

    The min_rate/max_rate options are not credit-based in the same sense as the idleslope/sendslope options: the former are enforced as hardware rate limits, while the latter are software credit-based shaping parameters.
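
    As a rough sketch (the per-TC rates below are only placeholders, and the exact syntax should be checked against the SDK documentation and the thread linked above), the hardware-offloaded form would look something like:

    tc qdisc add dev eth2 handle 100: parent root mqprio num_tc 8 \
                map 0 1 2 3 4 5 6 7 0 0 0 0 0 0 0 0 \
                queues 1@0 1@1 1@2 1@3 1@4 1@5 1@6 1@7 \
                hw 1 mode channel shaper bw_rlimit \
                min_rate 10Mbit 10Mbit 10Mbit 10Mbit 10Mbit 10Mbit 10Mbit 10Mbit \
                max_rate 100Mbit 20Mbit 20Mbit 20Mbit 20Mbit 20Mbit 20Mbit 20Mbit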

    Regards,
    Tanmay

  • Hi Tanmay,

    When configuring the software credits and testing the bandwidth with iperf, the results are very unstable: sometimes the bandwidth is 400 Mbit/s, sometimes 200 Mbit/s, and sometimes 600 Mbit/s.

  • Hi Matthew,

    Since the credits are only configured in software, we do not test the credit parameters with our implementation. Only max_rate and min_rate are validated.

    From your snapshots, we can see that the difference in the rates is due to lost datagrams. You can get the statistics of the Ethernet port with the command "ethtool -S eth2". Can you see where the drops show up in those statistics? If no drops are visible in the statistics, the issue might be caused by a buffer size somewhere. If possible, I suggest enabling flow control on the link partner so that the UDP traffic does not suffer drops.

    Do you still see drops when using TCP traffic?
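
    For reference, one way to check the port statistics and compare UDP against TCP would be something like the following (the link-partner interface name and the server address are placeholders):

    # Look for drop/error counters on the switch port
    ethtool -S eth2 | grep -i -E "drop|err"

    # Enable pause-frame flow control on the link partner's interface (name is hypothetical)
    ethtool -A <partner-iface> rx on tx on

    # Repeat the bandwidth test with TCP instead of UDP (iperf3 shown here)
    iperf3 -c <server-ip> -t 30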

    Regards,
    Tanmay