TDA4VH-Q1: EST: transmit queue timeout

Part Number: TDA4VH-Q1
Other Parts Discussed in Thread: AM6412, TDA4VM, PCM3168A

Hi TI,

Referring to the thread below, TI has fixed an issue. Can you help provide a patch? Thanks.

AM6412: EST: Transmit queue timeout when no slot available to transmit the packet - Processors forum - Processors - TI E2E support forums

Thanks,

Ruijie

  • Hi Tanmay,

    Please help check this issue; it has been pending for over two weeks.

    BR,

    Biao

  • Hi Ruijie

        Is this issue fixed now?

    Regards

       Semon

  • Hi,

    AM6412: EST: Transmit queue timeout when no slot available to transmit the packet - Processors forum - Processors - TI E2E support forums

    The above issue is not fixed yet; it will be addressed in the 11.0 SDK.
    EXT_EP-12066 is still open. Please check the latest release notes in the SDK documentation:
    https://software-dl.ti.com/processor-sdk-linux-rt/esd/AM64X/latest/exports/docs/devices/AM64X/linux/Release_Specific_Release_Notes.html#issues-open

    Best Regards,
    Sudheer

  • The above issue is not fixed yet; it will be addressed in the 11.0 SDK.
    EXT_EP-12066 is still open. Please check the latest release notes in the SDK documentation:
    https://software-dl.ti.com/processor-sdk-linux-rt/esd/AM64X/latest/exports/docs/devices/AM64X/linux/Release_Specific_Release_Notes.html#issues-open

    Hello Sudheer

         Do you know of any patch available for this issue? If so, could you help share it?

    Regards

       Semon

  • Hi Semon,

         Do you know of any patch available for this issue? If so, could you help share it?

    It is not addressed yet. 

    Ideally, sending priority packets mapped to a traffic class that has no gate-open period (that is, whose gate is always closed) is not expected; a hypothetical example of such a schedule is shown below.
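
    For illustration only (hypothetical interface name and timings, modeled on the commands later in this thread), the schedule below never opens the gate for TC1, because no sched-entry ever sets gate-mask bit 0x2, so any packet whose priority maps to TC1 never gets a transmit slot:

    # Both entries open only TC0 (gate mask 0x1); TC1 (gate mask 0x2) never gets an open period
    tc qdisc replace dev eth2 parent root handle 100 taprio num_tc 2 map 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 queues 1@0 1@1 base-time 0 sched-entry S 0x1 40000 sched-entry S 0x1 40000 flags 2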

    Best Regards,
    Sudheer

  • Hi TI,

    The following configuration can successfully ping the other end:

    ethtool -L eth0 tx 8
    ethtool -L eth1 tx 8
    ethtool -L eth2 tx 8
    ethtool -L eth3 tx 8
    ethtool -L eth4 tx 8
    ethtool -L eth5 tx 8
    ethtool -L eth6 tx 8
    ethtool -L eth7 tx 8
    
    tc qdisc replace dev eth0 parent root handle 100 taprio num_tc 2 map 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 queues 5@0 3@5 base-time 0 sched-entry S 2 40000 sched-entry S 1 40000 flags 2
    tc qdisc replace dev eth2 parent root handle 101 taprio num_tc 2 map 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 queues 5@0 3@5 base-time 0 sched-entry S 2 40000 sched-entry S 1 40000 flags 2
    tc qdisc replace dev eth7 parent root handle 102 taprio num_tc 2 map 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 queues 5@0 3@5 base-time 0 sched-entry S 2 40000 sched-entry S 1 40000 flags 2
    
    ip link set br0.2 type vlan egress 0:7 1:1 2:2 3:3 4:4 5:5 6:6 7:7
    ip link set br0.3 type vlan egress 0:7 1:1 2:2 3:3 4:4 5:5 6:6 7:7
    ip link set br0.4 type vlan egress 0:7 1:1 2:2 3:3 4:4 5:5 6:6 7:7
    ip link set br0.5 type vlan egress 0:7 1:1 2:2 3:3 4:4 5:5 6:6 7:7
    ip link set br0.7 type vlan egress 0:7 1:1 2:2 3:3 4:4 5:5 6:6 7:7
    ip link set br0.8 type vlan egress 0:6 1:1 2:2 3:3 4:4 5:5 6:6 7:7
    ip link set br0.27 type vlan egress 0:6 1:1 2:2 3:3 4:4 5:5 6:6 7:7

    but the configuration below causes a crash when pinging the other end:

    ethtool -L eth0 tx 2
    ethtool -L eth1 tx 2
    ethtool -L eth2 tx 2
    ethtool -L eth3 tx 2
    ethtool -L eth4 tx 2
    ethtool -L eth5 tx 2
    ethtool -L eth6 tx 2
    ethtool -L eth7 tx 2
    
    tc qdisc replace dev eth0 parent root handle 100 taprio num_tc 2 map 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 queues 1@0 1@1 base-time 0 sched-entry S 0x1 40000 sched-entry S 0x2 40000 flags 2
    tc qdisc replace dev eth2 parent root handle 101 taprio num_tc 2 map 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 queues 1@0 1@1 base-time 0 sched-entry S 0x1 40000 sched-entry S 0x2 40000 flags 2
    tc qdisc replace dev eth7 parent root handle 102 taprio num_tc 2 map 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 queues 1@0 1@1 base-time 0 sched-entry S 0x1 40000 sched-entry S 0x2 40000 flags 2
    
    ip link set br0.2 type vlan egress 0:7 1:1 2:2 3:3 4:4 5:5 6:6 7:7
    ip link set br0.3 type vlan egress 0:7 1:1 2:2 3:3 4:4 5:5 6:6 7:7
    ip link set br0.4 type vlan egress 0:7 1:1 2:2 3:3 4:4 5:5 6:6 7:7
    ip link set br0.5 type vlan egress 0:7 1:1 2:2 3:3 4:4 5:5 6:6 7:7
    ip link set br0.7 type vlan egress 0:7 1:1 2:2 3:3 4:4 5:5 6:6 7:7
    ip link set br0.8 type vlan egress 0:6 1:1 2:2 3:3 4:4 5:5 6:6 7:7
    ip link set br0.27 type vlan egress 0:6 1:1 2:2 3:3 4:4 5:5 6:6 7:7

  • Hi,

    Is the ping in the second case sent with sock_prio 0 or 7?

    Can you share the crash logs?
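
    Also, for reference (assuming the same interface names as in your configuration), you can cross-check how many TX queues are enabled and how priorities map to traffic classes in the installed schedule:

    # Show the channel (queue) counts configured via ethtool -L
    ethtool -l eth0
    # Show the installed taprio qdisc, including the priority-to-TC map and queue offsets
    tc qdisc show dev eth0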

    Regards,
    Tanmay

  • but the configuration below will cause a crash when pinging the other end.

    Hello Ruijie

    I configured the EVM with the following configuration, setting the TX queue count to 2:

    -----------------------------

    ip link add name br0 type bridge
    sleep 2
    ip link set dev br0 up
    sleep 2
    ip link set eth2 up
    sleep 2
    ip link set eth4 up
    sleep 2
    ip link set eth2 master br0
    sleep 2
    ip link set eth4 master br0
    sleep 2
    ip addr add 192.168.2.10/24 dev br0
    sleep 2
    ip link add link br0 name br0.10 type vlan id 10
    sleep 2
    ip link set dev br0.10 up
    sleep 2

    ------------------------

    ip link set eth2 down
    sleep 2
    ip link set eth4 down
    sleep 2
    ethtool -L eth2 tx 2
    sleep 2
    ethtool -L eth4 tx 2
    sleep 2
    ip link set eth2 up
    sleep 2
    ip link set eth4 up
    sleep 2
    tc qdisc replace dev eth2 parent root handle 100 taprio num_tc 2 map 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 queues 1@0 1@1 base-time 0 sched-entry S 0x1 40000 sched-entry S 0x2 40000 flags 2
    sleep 2
    tc qdisc replace dev eth4 parent root handle 101 taprio num_tc 2 map 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 queues 1@0 1@1 base-time 0 sched-entry S 0x1 40000 sched-entry S 0x2 40000 flags 2
    sleep 2
    ip link set br0.10 type vlan egress 0:6 1:1 2:2 3:3 4:4 5:5 6:6 7:7
    sleep 2

    ----------------------

    After this configuration, pinging the remote PC succeeded:

    ---------------------

    root@j7200-evm:~# ping 192.168.2.30
    PING 192.168.2.30 (192.168.2.30) 56(84) bytes of data.
    64 bytes from 192.168.2.30: icmp_seq=1 ttl=64 time=0.362 ms
    64 bytes from 192.168.2.30: icmp_seq=2 ttl=64 time=0.374 ms
    ^C
    --- 192.168.2.30 ping statistics ---
    2 packets transmitted, 2 received, 0% packet loss, time 1005ms
    rtt min/avg/max/mdev = 0.362/0.368/0.374/0.006 ms
    root@j7200-evm:~# ping 192.168.2.20
    PING 192.168.2.20 (192.168.2.20) 56(84) bytes of data.
    64 bytes from 192.168.2.20: icmp_seq=6 ttl=64 time=0.508 ms
    64 bytes from 192.168.2.20: icmp_seq=7 ttl=64 time=0.250 ms

    -----------------------

    Please provide more information about this issue.

    Thanks

       Semon

  • Is the ping in the second case sent with sock_prio 0 or 7?

    Can you share the crash logs?

    Regards,
    Tanmay

    ---------------------------------

    Hi Ruijie

       Could you provide the crash logs requested by TI?

    Thanks

       Semon

  • but the configuration below will cause a crash when pinging the other end.

    ethtool -L eth0 tx 2
    ethtool -L eth1 tx 2
    ethtool -L eth2 tx 2
    ethtool -L eth3 tx 2
    ethtool -L eth4 tx 2
    ethtool -L eth5 tx 2
    ethtool -L eth6 tx 2
    ethtool -L eth7 tx 2
    tc qdisc replace dev eth0 parent root handle 100 taprio num_tc 2 map 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 queues 1@0 1@1 base-time 0 sched-entry S 0x1 40000 sched-entry S 0x2 40000 flags 2
    tc qdisc replace dev eth2 parent root handle 101 taprio num_tc 2 map 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 queues 1@0 1@1 base-time 0 sched-entry S 0x1 40000 sched-entry S 0x2 40000 flags 2
    tc qdisc replace dev eth7 parent root handle 102 taprio num_tc 2 map 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 queues 1@0 1@1 base-time 0 sched-entry S 0x1 40000 sched-entry S 0x2 40000 flags 2
    ip link set br0.2 type vlan egress 0:7 1:1 2:2 3:3 4:4 5:5 6:6 7:7
    ip link set br0.3 type vlan egress 0:7 1:1 2:2 3:3 4:4 5:5 6:6 7:7
    ip link set br0.4 type vlan egress 0:7 1:1 2:2 3:3 4:4 5:5 6:6 7:7
    ip link set br0.5 type vlan egress 0:7 1:1 2:2 3:3 4:4 5:5 6:6 7:7
    ip link set br0.7 type vlan egress 0:7 1:1 2:2 3:3 4:4 5:5 6:6 7:7
    ip link set br0.8 type vlan egress 0:6 1:1 2:2 3:3 4:4 5:5 6:6 7:7
    ip link set br0.27 type vlan egress 0:6 1:1 2:2 3:3 4:4 5:5 6:6 7:7

    Hello Ruijie

         I tried the above configuration on TDA4VM CPSW-9G, but it did not trigger the crash; pinging the remote PC is OK.

         ----------------------------------------------------

    ip link set eth1 down
    ip link set eth2 down
    ip link set eth3 down
    ip link set eth4 down
    ethtool -L eth2 tx 2

    ethtool --set-priv-flags eth2 p0-rx-ptype-rrobin off

    ip link set dev eth2 up


    tc qdisc replace dev eth2 parent root handle 100 taprio num_tc 2 map 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 queues 1@0 1@1 base-time 0 sched-entry S 0x1 40000 sched-entry S 0x2 40000 flags 2

    tc qdisc add dev eth2 clsact

    tc filter add dev eth2 egress protocol ip prio 1 u32 match ip dport 5003 0xffff action skbedit priority 3
    tc filter add dev eth2 egress protocol ip prio 1 u32 match ip dport 5002 0xffff action skbedit priority 2
    tc filter add dev eth2 egress protocol ip prio 1 u32 match ip dport 5001 0xffff action skbedit priority 1

    vconfig add eth2 10

    ifconfig eth2.10 200.1.1.10 up

    ip link set eth2.10 type vlan egress 0:0 1:1 2:2 3:3 4:4 5:5 6:6 7:7

    ---------------------------------------------------------------

    If I change the mapping configuration as follows:

    ip link set eth2.10 type vlan egress 0:7 1:1 2:2 3:3 4:4 5:5 6:6 7:7

    then the remote PC can't receive traffic.
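
    One way to check where that traffic actually goes (assuming tcpdump is available on the EVM image) is to capture the tagged frames on the parent interface and inspect the PCP field of the 802.1Q header:

    # -e prints the link-level header, including the 802.1Q priority bits
    tcpdump -i eth2 -e -nn vlan and icmp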

    Please share your configuration so I can reproduce your test.

    Thanks

       Semon

  • but the configuration below will cause a crash when pinging the other end

    Hi Ruijie

         After a long wait, I captured one crash. Is this the phenomenon you observed?

    ---------------------------------

    root@j721e-evm:~# ping 200.1.1.30
    PING 200.1.1.30 (200.1.1.30): 56 data bytes


    ^C
    --- 200.1.1.30 ping statistics ---
    102 packets transmitted, 0 packets received, 100% packet loss
    root@j721e-evm:~# ping 200.1.1.30 -Q 32
    ping: invalid option -- 'Q'
    BusyBox v1.35.0 () multi-call binary.

    Usage: ping [OPTIONS] HOST
    root@j721e-evm:~# ping 200.1.1.30
    PING 200.1.1.30 (200.1.1.30): 56 data bytes
    ^[[A^C
    --- 200.1.1.30 ping statistics ---
    3 packets transmitted, 0 packets received, 100% packet loss
    root@j721e-evm:~# iperf3 -c 200.1.1.30 -u -b100M -p 5003 -l1472 -t10
    iperf3: error - unable to connect to server - server may have stopped running or use a different port, firewall issue, etc.: No route to host
    root@j721e-evm:~# iperf3 -c 200.1.1.30 -u -b100M -p 5002 -l1472 -t10


    ^C- - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
    iperf3: interrupt - the client has terminated
    root@j721e-evm:~#
    root@j721e-evm:~#
    root@j721e-evm:~#
    root@j721e-evm:~# ping 200.1.1.30
    PING 200.1.1.30 (200.1.1.30): 56 data bytes
    [ 458.089197] ------------[ cut here ]------------
    [ 458.093818] NETDEV WATCHDOG: eth2 (am65-cpsw-nuss): transmit queue 0 timed out
    [ 458.101053] WARNING: CPU: 0 PID: 0 at net/sched/sch_generic.c:525 dev_watchdog+0x214/0x220
    [ 458.109311] Modules linked in: 8021q garp stp mrp llc act_skbedit cls_u32 sch_ingress sch_taprio xhci_plat_hcd pci_endpoint_test rpmsg_ctrl rpmsg_char pru_rproc ti_am335x_adc irq_pruss_intc cdns_csi2rx omap_rng kfifo_buf v4l2_fwnode cdns_pltfrm cdns3 cdns_usb_common crct10dif_ce snd_soc_j721e_evm display_connector phy_can_transceiver overlay bluetooth cfg80211 ecdh_generic ecc rfkill ti_k3_r5_remoteproc cdns_mhdp8546 drm_display_helper k3_j72xx_bandgap pruss ti_am335x_tscadc pvrsrvkm(O) snd_soc_pcm3168a_i2c ti_k3_dsp_remoteproc virtio_rpmsg_bus sa2ul rpmsg_ns snd_soc_pcm3168a ti_k3_common ti_j721e_ufs vxd_dec vxe_enc j721e_csi2rx videobuf2_dma_sg videobuf2_dma_contig v4l2_mem2mem videobuf2_memops videobuf2_v4l2 videobuf2_common v4l2_async videodev tidss drm_dma_helper drm_kms_helper syscopyarea mc sysfillrect ina2xx cdns_dphy_rx sysimgblt fb_sys_fops cdns3_ti m_can_platform snd_soc_davinci_mcasp m_can snd_soc_ti_udma snd_soc_ti_edma pci_j721e_host snd_soc_ti_sdma pci_j721e
    [ 458.109431] pcie_cadence_host pcie_cadence can_dev rti_wdt optee_rng rng_core cryptodev(O) fuse drm drm_panel_orientation_quirks ipv6
    [ 458.207482] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G O 6.1.80-ti-g2e423244f8c0 #1
    [ 458.216333] Hardware name: Texas Instruments J721e EVM (DT)
    [ 458.221888] pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
    [ 458.228831] pc : dev_watchdog+0x214/0x220
    [ 458.232828] lr : dev_watchdog+0x214/0x220
    [ 458.236825] sp : ffff800008003e30
    [ 458.240125] x29: ffff800008003e30 x28: 0000000000000005 x27: 0000000000000020
    [ 458.247244] x26: ffff8000089f5330 x25: ffff8000091479c0 x24: ffff00085f79e1a8
    [ 458.254362] x23: ffff800009147000 x22: 0000000000000000 x21: ffff00080204539c
    [ 458.261480] x20: ffff000802045000 x19: ffff000802045448 x18: ffffffffffffffff
    [ 458.268598] x17: 6f2064656d697420 x16: 3020657565757120 x15: 74696d736e617274
    [ 458.275716] x14: 203a297373756e2d x13: ffff800009161440 x12: 000000000000081c
    [ 458.282834] x11: 00000000000002b4 x10: ffff8000091b9440 x9 : ffff800009161440
    [ 458.289952] x8 : 00000000ffffefff x7 : ffff8000091b9440 x6 : 0000000000000000
    [ 458.297068] x5 : ffff00085f79db60 x4 : 0000000000000040 x3 : 0000000000000001
    [ 458.304186] x2 : 0000000000000000 x1 : 0000000000000000 x0 : ffff8000091529c0
    [ 458.311304] Call trace:
    [ 458.313738] dev_watchdog+0x214/0x220
    [ 458.317390] call_timer_fn.constprop.0+0x24/0x80
    [ 458.321995] __run_timers.part.0+0x1f4/0x234
    [ 458.326250] run_timer_softirq+0x3c/0x7c
    [ 458.330159] _stext+0x124/0x28c
    [ 458.333288] ____do_softirq+0x10/0x20
    [ 458.336937] call_on_irq_stack+0x24/0x4c
    [ 458.340846] do_softirq_own_stack+0x1c/0x30
    [ 458.345013] __irq_exit_rcu+0xb4/0xe0
    [ 458.348664] irq_exit_rcu+0x10/0x20
    [ 458.352138] el1_interrupt+0x38/0x70
    [ 458.355703] el1h_64_irq_handler+0x18/0x2c
    [ 458.359785] el1h_64_irq+0x64/0x68
    [ 458.363173] arch_cpu_idle+0x18/0x2c
    [ 458.366736] default_idle_call+0x30/0x6c
    [ 458.370646] do_idle+0x248/0x2c0
    [ 458.373868] cpu_startup_entry+0x38/0x40
    [ 458.377779] kernel_init+0x0/0x130
    [ 458.381168] arch_post_acpi_subsys_init+0x0/0x18
    [ 458.385772] start_kernel+0x650/0x694
    [ 458.389420] __primary_switched+0xbc/0xc4
    [ 458.393417] ---[ end trace 0000000000000000 ]---
    [ 458.398032] am65-cpsw-nuss c000000.ethernet eth2: txq:0 DRV_XOFF:0 tmo:7380 dql_avail:-169 free_desc:504
    [ 463.977201] am65-cpsw-nuss c000000.ethernet eth2: txq:0 DRV_XOFF:0 tmo:12964 dql_avail:-169 free_desc:504
    [ 469.097200] am65-cpsw-nuss c000000.ethernet eth2: txq:0 DRV_XOFF:0 tmo:18084 dql_avail:-169 free_desc:504

    ----------------------------------

    Regards

       Semon

  • Hello Ruijie

        After discussing internally with the TI BU engineer, the conclusion is:

           1. It is not a bug.

           2. It is a misconfiguration.

         Only 2 TX queues are enabled in the system, but traffic is mapped to a TX queue that does not exist (in the provided example, priority 0 is mapped to 7 or 6, which do not exist), which triggers the error.

         If all traffic is mapped to the existing TX queues, this phenomenon will not happen (one possible adjustment is sketched below).
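
     For illustration, one possible adjustment along these lines (a sketch only, not a validated recommendation; it assumes the same VLAN interfaces and the 2-queue setup above) is to keep every priority on PCP values 0 and 1, which correspond to the two enabled TX queues:

     # Map all skb priorities onto PCP 0/1 so traffic stays on the two enabled queues
     ip link set br0.8 type vlan egress 0:0 1:1 2:1 3:1 4:1 5:1 6:1 7:1
     ip link set br0.27 type vlan egress 0:0 1:1 2:1 3:1 4:1 5:1 6:1 7:1
     # ...and similarly for the other br0.x interfaces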

    Regards

       Semon

  • Hello Ruijie

        I ran SDK 10.1 on the TDA4VM TI EVM with the following configuration, and pinging the remote PC succeeded:

        ---------------------------------------------------------

    root@j721e-evm:~# ip link set eth1 down
    root@j721e-evm:~# ip link set eth2 down
    [ 79.614359] am65-cpsw-nuss c000000.ethernet eth2: Link is Down
    root@j721e-evm:~# ip link set eth3 down
    root@j721e-evm:~# ip link set eth4 down
    [ 91.834289] am65-cpsw-nuss c000000.ethernet eth4: Link is Down
    root@j721e-evm:~# ethtool --set-priv-flags eth2 p0-rx-ptype-rrobin off
    root@j721e-evm:~# ip link set dev eth2 up
    [ 104.595028] am65-cpsw-nuss c000000.ethernet eth2: PHY [c000f00.mdio:10] driver [Microsemi GE VSC8514 SyncE] (irq=POLL)
    [ 104.605727] am65-cpsw-nuss c000000.ethernet eth2: configuring for phy/qsgmii link mode
    root@j721e-evm:~# [ 107.690027] am65-cpsw-nuss c000000.ethernet eth2: Link is Up - 1Gbps/Full - flow control off

    root@j721e-evm:~#
    root@j721e-evm:~#
    root@j721e-evm:~# tc qdisc replace dev eth2 parent root handle 100 taprio num_tc 2 map 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 queues 1@0 1@1 base-time 0 sched-entry S 0x1 40000 sched-entry S 0x2 40000 flags 2
    Warning: sch_taprio: Size table not specified, frame length estimations may be inaccurate.
    root@j721e-evm:~#
    root@j721e-evm:~# tc qdisc add dev eth2 clsact
    root@j721e-evm:~# tc filter add dev eth2 egress protocol ip prio 1 u32 match ip dport 5003 0xffff action skbedit priority 3
    [ 140.001666] u32 classifier
    [ 140.004375] input device check on
    [ 140.008037] Actions configured
    root@j721e-evm:~#
    root@j721e-evm:~# tc filter add dev eth2 egress protocol ip prio 1 u32 match ip dport 5002 0xffff action skbedit priority 2
    root@j721e-evm:~# tc filter add dev eth2 egress protocol ip prio 1 u32 match ip dport 5001 0xffff action skbedit priority 1
    root@j721e-evm:~#
    root@j721e-evm:~# vconfig add eth2 10
    [ 167.731727] 8021q: 802.1Q VLAN Support v1.8
    [ 167.735955] 8021q: adding VLAN 0 to HW filter on device eth0
    [ 167.741627] 8021q: adding VLAN 0 to HW filter on device eth2
    [ 167.747955] am65-cpsw-nuss c000000.ethernet: Adding vlan 10 to vlan filter
    root@j721e-evm:~# ifconfig eth2.10 200.1.1.10 up
    root@j721e-evm:~# ip link set eth2.10 type vlan egress 0:7 1:1 2:2 3:3 4:4 5:5 6:6 7:7
    root@j721e-evm:~# ping 200.1.1.30
    PING 200.1.1.30 (200.1.1.30) 56(84) bytes of data.
    64 bytes from 200.1.1.30: icmp_seq=1 ttl=64 time=0.671 ms
    64 bytes from 200.1.1.30: icmp_seq=2 ttl=64 time=0.519 ms
    64 bytes from 200.1.1.30: icmp_seq=3 ttl=64 time=0.396 ms

    -------------------------------------------------------------------

     So the conclusion is that SDK 10.1 fixes this issue.

     Please help verify on your side; a quick check is sketched below.

     If there is no problem, this issue can be closed.
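
     For reference, a quick way to confirm which kernel build is running and which taprio schedule is installed (assuming the same interface name as above):

     # Kernel version of the running SDK image
     uname -r
     # Installed taprio schedule on the port under test
     tc qdisc show dev eth2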

    Regards

       Semon