[K2H, RTM-BOC] K2H's 10GigE interface verified with RTM-BOC?

Hello,

I asked about 10GigE benchmark numbers in the following post, but unfortunately I could not get the numbers for K2H.

https://e2e.ti.com/support/dsp/c6000_multi-core_dsps/f/639/t/423161

My customer now wants to know whether the K2H's 10GigE interface has been verified with the RTM-BOC, because the following link says it has not been verified yet.
My customer has a K2H EVM and is considering getting an RTM-BOC to check 10GigE performance, but before that, they are asking us whether this can really work with the RTM-BOC.

www.mistralsolutions.com/.../rtm-break-out-card-mistral-solutions-rtm-boc-for-keystone-ii-family-of-evms

Do you have any information on that?

Best Regards,
Kawada
 

  • Hi Kawada,
    I will try it with my K2H EVM board and update you.
  • Titus,

    Thank you so much for your help. I'll be visiting the customer tomorrow or so to discuss the 10GigE test environment with the K2H EVM,
    so your quick feedback would be much appreciated.
    Thank you in advance.

    Best Regards,
    Kawada
  • Hi Kawada,
    I've seen a hang when I run "ifconfig", so I asked the factory team to check this.
  • Hi, Kawada,

    Please find the K2H 10GbE connectivity results using the RTM-BoC in the attached file.

    Rex

    K2H_RTM_10GbE.docx

  • Titus S. and Rex,

    Thank you so much for your great help! It looks like it works correctly in the K2H + BOC environment. I'll discuss this with the customer.

    Best Regards,
    Kawada
  • Did you test the K2H with the RTM-BOC using iperf?

    I did the same test as you did, but the maximum TX throughput without packet loss was only below 2.0 Gbits/sec (TCP/UDP).

    To be more specific, there is no packet loss at a UDP bandwidth of 5 Mbits/sec,

    but about 33% packet loss occurs at a bandwidth of 30 Mbits/sec.

    The test result is very poor in the K2H with host PC (x86) environment (attached file).

    Could you share your test results from running iperf here?

    I want to see high performance from the K2H's 10GbE interface.
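
    For reference, the commands were roughly of this shape (the target IP is a placeholder):

    iperf -s -u                        # receiving side; reports loss per interval
    iperf -c <target-ip> -u -b 5M      # 5 Mbits/sec UDP: no loss observed
    iperf -c <target-ip> -u -b 30M     # 30 Mbits/sec UDP: ~33% loss observed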

  • Did you try enabling jumbo packets? Without jumbo packets enabled, you won't get throughput higher than what you got.
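
    For example, something along these lines on both machines (the interface names are just placeholders for the 10GbE interfaces on each side):

    ip link set eth4 mtu 9000                      # on the EVM's XGE interface
    ip link set enp3s0 mtu 9000                    # on the host PC's 10GbE adapter
    iperf -c <target-ip> -u -b 3000M --len 8300    # datagrams sized to fit the 9000-byte MTU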

    Rex

  • Hello, Rex,

    Why should we try enabling jumbo packets? The point is the packet loss issue shown in the test result screenshot above.
    It looks to Gilbert-Kim like the problem is the 33% loss rate when the bandwidth is set to 30 Mbits/sec.
    I'd like to know your throughput test results using the K2H with the RTM-BOC.

    Regards,
    MinKeun Park

  • Hi, MinKeun,

    With jumbo packets enabled, I can get around 6 Gbps inbound, which is similar to what we get on K2E as well. That is 6 Gbps ingress and 7.6 Gbps egress. I didn't play around with the packet sizes; I may be able to squeeze a few tens of Mbps more by tuning them.

    ~$ iperf -c 192.168.1.44 -P 2 --format m -u -b 3000M --len 8300 -t 100
    ------------------------------------------------------------
    Client connecting to 192.168.1.44, UDP port 5001
    Sending 8300 byte datagrams
    UDP buffer size: 0.16 MByte (default)
    ------------------------------------------------------------
    [  4] local 192.168.1.22 port 58805 connected with 192.168.1.44 port 5001
    [  3] local 192.168.1.22 port 57479 connected with 192.168.1.44 port 5001
    [ ID] Interval       Transfer     Bandwidth
    [  4]  0.0-100.0 sec  35915 MBytes  3013 Mbits/sec
    [  4] Sent 4537360 datagrams
    [  3]  0.0-100.0 sec  35910 MBytes  3012 Mbits/sec
    [  3] Sent 4536656 datagrams
    [SUM]  0.0-100.0 sec  71825 MBytes  6025 Mbits/sec
    [  3] WARNING: did not receive ack of last datagram after 10 tries.
    [  4] WARNING: did not receive ack of last datagram after 10 tries.
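
    (On the receiving side, a UDP server of the form below reports the per-stream loss and jitter when the test ends; the "did not receive ack" warnings above just mean the client did not get the server's final report back.)

    iperf -s -u --format m    # UDP server on the receiving side; port 5001 is iperf's default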

  • Well, this thread seems to have a big audience.
    Today I talked with my customer, and here is their feedback.
    My customer is considering using the K2H with two 10GigE ports, and they assume the total throughput could reach 16-18 Gbps. My thought was that this might be difficult because the 10GigE ports are physically connected through a 3-port switch. I'm wondering whether the switch could be a bottleneck for the throughput from a hardware point of view.
    I would like to know your answers to this concern, along with your benchmarks (if possible).

    Best Regards,
    Kawada
  • Hi, Rex Chang,

    Thank you for your kind reply.

    I want to see how much performance you really get without jumbo packets enabled, and whether the packet loss issue still occurs.
    At this point, why does the packet loss issue happen when using the K2H with the RTM-BOC without the jumbo packet setting?

    My interest is the packet loss issue rather than the high throughput of the K2E or K2H.
    If jumbo packets must be enabled, could you explain to us what the test above means?

    Thanks
  • Hi, Kawada,

    The concern is not the throughput itself but the processing of the packets after they are taken in. To reach high throughput, jumbo frames need to be enabled. If checksum validation is required, it takes place only after all bytes of the large frame have come in, and then they are processed. The switch is not an issue.
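
    As a rough illustration of why the frame size matters on the receive side (back-of-the-envelope numbers, not measured values):

    10 Gbits/sec with 1500-byte frames  ->  10e9 / (1500 * 8)  ~  833,000 frames/sec to process
    10 Gbits/sec with 9000-byte frames  ->  10e9 / (9000 * 8)  ~  139,000 frames/sec to process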

    Rex

  • Hi, Rex Chang,

    I need to see clearly the test result for RX packet loss on the receive side.
    This issue is important for me and for other people.

    Thanks
  • Hi, MinKeun,

    At around 5Gbps throughput, the rx packet loss rate is less than 0.02%.

     

  • Hi, Rex Chang,

    Thanks for your reply.

    If possible, could you tell me how you set up your test environment, such as the kernel version?

    Best & Regards

  • Hi, MinKeun,

    All of the software is pre-built images from the MCSDK release; I used version 3.1.3.6. The u-boot env variable boot=net is used, and NFS mounts the tisdk-rootfs-k2e-evm file system after it is untarred. I followed the instructions in the User's Guide to enable 10GbE at boot time (using the MAC-to-MAC Link Interface option, http://processors.wiki.ti.com/index.php/MCSDK_UG_Chapter_Exploring#Enabling_10Gig_Ethernet_Driver_Device_Tree_Bindings). Connect the XGE port on the RTM-BoC to the PC XGE adaptor using an SFP+ cable. That's it.
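
    The u-boot side of that setup looks roughly like this (the server IP and path are placeholders, and the exact variable names may differ in your default environment):

    setenv boot net                                 # fetch kernel/dtb over the network, mount rootfs over NFS
    setenv serverip 192.168.1.10                    # host running the TFTP/NFS server
    setenv nfs_root /export/tisdk-rootfs-k2e-evm    # untarred MCSDK root file system
    saveenv
    boot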

    Rex

  • Hi, Rex Chang.

    Even though all of my settings are the same as yours, my test result is different from yours.

    I don't know whether the K2H's RX loss is caused by configuration or by hardware.

    To verify this test, could you share the two binaries for K2H (dtb, uImage) here?

    Gilbert.

  • Hi, Gilbert,

    As I mentioned, I used the pre-release images from MCSDK 3.1.3.6, including the dtb and uImage. Did you enable jumbo packets on both of your systems?

    Rex

  • Hi Rex.

    The prebuilt dtb file in MCSDK 3.1.3.6 does not work because the XGE (eth4/5) interfaces are not activated,

    so I used a dtb file built according to the link below.

    http://processors.wiki.ti.com/index.php/MCSDK_UG_Chapter_Exploring#Enabling_10Gig_Ethernet_Driver_Device_Tree_Bindings

    Jumbo frames are enabled on both sides (ip link set ethN mtu 9000).

    My iperf application differed from yours (I tested using iperf3).

    The test result is pretty similar to yours as long as I use the iperf that is on the k2e file system.

    However, a small amount of UDP packet loss during RX is still there, regardless of decreasing the bandwidth or packet length.

    There is no packet loss when my host PC is receiving and the target is sending.

    (This test's bandwidth is set to 100 Mbits/sec on purpose.)

    But in the opposite direction, a small packet loss occurs.

    Could you confirm whether this situation also appears in your test environment?

    If the result is the same, I want to know where the packets are being lost.
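
    For what it's worth, one way to narrow down where the receive-side drops happen (just a sketch; which counters are available depends on the driver):

    ethtool -S eth4 | grep -iE 'drop|err'    # MAC/driver-level counters, if the driver exposes them
    netstat -su                              # "packet receive errors" / "receive buffer errors" = socket level
    # if the drops show up as receive buffer errors, enlarging the UDP receive buffer
    # sometimes helps (the value and -w size below are only examples):
    sysctl -w net.core.rmem_max=8388608
    iperf -s -u -w 4M                        # ask the receiver for a larger socket buffer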

    Gilbert.