
TDA4VM: When I use the EST function, the default priority packet has no gate control characteristics

Part Number: TDA4VM
Other Parts Discussed in Thread: TDA4VH

Hello,

I am using TDA4VH and TDA4VM boards to verify the EST function, with SDK 8.5. The VH board sends 80-byte UDP packets from ETH3 in Native Driver mode, and the VM board receives them on ETH0.

The gate control configuration script is:

#!/bin/sh
# Take the other ports down so only eth3 carries test traffic
ifconfig eth4 down
ifconfig eth1 down
ifconfig eth2 down
ifconfig eth3 down
# Enable two hardware TX queues on eth3
ethtool -L eth3 tx 2
sleep 5

ethtool --set-priv-flags eth3 p0-rx-ptype-rrobin off

# Sync the eth3 PHC to the system clock
phc2sys -s CLOCK_REALTIME -c eth3 -m -O 0 > /dev/null &
ip link set dev eth3 up
sleep 5
# taprio: priority 3 -> TC1 (queue 1), all other priorities -> TC0 (queue 0)
# sched-entry gate masks are per-TC bitmaps:
#   S 2 5000  -> gate for TC1 open for 5000 ns (5 us)
#   S 1 40000 -> gate for TC0 open for 40000 ns (40 us)
# flags 2 = full hardware offload (EST)
tc qdisc replace dev eth3 parent root handle 100 taprio \
num_tc 2 \
map 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 \
queues 1@0 1@1 \
base-time 0 \
sched-entry S 2 5000 \
sched-entry S 1 40000 \
flags 2

tc qdisc add dev eth3 clsact

# Steer UDP destination port 5003 to skb priority 3 (-> TC1/queue 1)
tc filter add dev eth3 egress protocol ip prio 1 u32 match ip dport 5003 0xffff action skbedit priority 3
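
For reference, I generate the two test streams roughly as follows (a sketch: the receiver address is a placeholder, iperf3 is assumed to be available, and a matching iperf3 server has to listen on each port on the VM side):

# matched by the dport 5003 filter above -> skb priority 3 -> TC1/queue 1
iperf3 -u -c 192.168.0.100 -p 5003 -l 80 -b 10M -t 30 &
# no filter match -> default skb priority 0 -> TC0/queue 0
iperf3 -u -c 192.168.0.100 -p 5000 -l 80 -b 10M -t 30 &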

When I send a UDP stream to port 5003, the result is correct.

But when I send a UDP stream to port 5000, the result is wrong.

I expect the port 5000 stream (default priority 0, hence TC0) to be transmitted only during its gate-open window rather than continuously, but that is not what I observe. I want to know why.
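
Before testing, I also verify that the taprio schedule and the filter were accepted (a quick sanity check; if the flags 2 hardware offload had been rejected, the qdisc would not appear here):

tc qdisc show dev eth3
tc filter show dev eth3 egress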

Best regards!

  • Hi Yang,

    I see that you have multiple threads open on the various TSN functionalities. Can we get together on a call so that I can understand and resolve all of your queries at the same time?

    Please let me know.

    Regards,
    Tanmay

  • Hi Tanmay,

    Thank you for your response. I have resolved most of my questions; only two remain.

    First, I am using TSN switch mode. After configuring the EST gate control, I noticed that VLAN priority 0 packets do not seem to be transmitted according to the open time of gate 0. If VLAN priority 0 were always associated with a closed gate, then in theory VLAN priority 0 packets would never be sent. Is that right?

    Second, what is the handling mechanism when a queue is full and there are a large number of packets waiting to be sent? I tested this by continuously forwarding VLAN priority 6 packets at 1000 Mbps and measuring the duration of the received bursts to determine the gate's open time. I found that with the gate open time set to 0.3 ms, the actual window observed is only around 0.13 ms.

    Regards,

    yang

  • Hi Yang,

    Second, what is the handling mechanism when a queue is full and there are a large number of packets waiting to be sent? I tested this by continuously forwarding VLAN priority 6 packets at 1000 Mbps and measuring the duration of the received bursts to determine the gate's open time. I found that with the gate open time set to 0.3 ms, the actual window observed is only around 0.13 ms.

    How did you observe the duration of received packets?

    In general, once the FIFO is full, any new incoming packet is dropped, so I don't expect the dropping behavior to be smooth. For example, if multiple packets arrive at the CPSW hardware in close succession, all of them may be dropped even if they could have been sent in the next cycle. You should see better behavior if you decrease the packet size, since the FIFO can then hold more packets.
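
    As a rough back-of-envelope check (a sketch; it assumes a 1 Gbps line rate and full-size frames, i.e. about 1538 bytes on the wire including preamble and inter-frame gap):

    awk 'BEGIN { bps = 1e9; window = 0.3e-3; wire = 1538
                 bytes = bps / 8 * window
                 printf "per 0.3 ms window: %d bytes, ~%d full-size frames\n",
                        bytes, bytes / wire }'

    So a fully open 0.3 ms window can carry roughly 24 full-size frames. If the egress FIFO holds fewer packets than that and is not refilled quickly enough, the gate runs dry before the window ends, which would look exactly like a shorter effective window.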

    First, I am using TSN switch mode. After configuring the EST gate control, I noticed that VLAN priority 0 packets do not seem to be transmitted according to the open time of gate 0. If VLAN priority 0 were always associated with a closed gate, then in theory VLAN priority 0 packets would never be sent. Is that right?

    The VLAN-priority-to-queue mapping comes into the picture here. We need to check whether the packets are ending up in the correct queue number; after that, the hardware should take care of it.
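
    One way to check this from Linux is to read the per-TX-queue counters before and after sending the stream (with taprio installed, each class under the root qdisc corresponds to one hardware TX queue):

    tc -s class show dev eth3   # Sent bytes/packets per TX queue
    ethtool -S eth3             # CPSW driver statistics, where exposed

    If the priority 0 traffic increments the counter of a queue whose gate never opens, that would explain why those packets are never transmitted.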

    Regards,
    Tanmay

  • Hi Tanmay,

    How did you observe the duration of received packets?

    I measure the window size by detecting the first packet received after the gate opens and the last packet received before it closes, as shown in the diagram. Theoretically, the window size should be close to 0.2 ms, but in practice it is only around 0.13 ms.
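
    Concretely, I capture receive-side timestamps and compute the burst durations roughly like this (a sketch; the 100 us gap threshold for separating bursts is an assumption based on my cycle timing):

    tcpdump -tt -n -i eth0 -c 2000 udp 2>/dev/null | \
    awk 'NR == 1 { start = $1; prev = $1; next }
         $1 - prev > 0.0001 { printf "window %.6f s\n", prev - start; start = $1 }
         { prev = $1 }
         END { printf "window %.6f s\n", prev - start }'

    Each printed value is the time from the first to the last packet of one gate-open burst.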

    I have also tested with the plget tool, and the results are shown in the diagram below. As you can see, the actual transmission time is only 0.125 ms.

    Given this measurement result, I am curious what happens when the FIFO queue becomes full, and how transmission resumes after the queue is drained.

    Regards,

    yang

  • Hi Yang,

    I will double-check your observation with our hardware experts and confirm what the expected behavior is in this case.

    To my knowledge, the dropping of packets is not graceful and can cause some abrupt behavior. I will get back to you on this in a couple of days.

    Regards,
    Tanmay

  • Hi Tanmay,

    Is there any progress? If there is, please let me know.

    Regards,

    yang
