TivaC NDK TCP: Can't receive large packets (>=1460 bytes)

Other Parts Discussed in Thread: EK-TM4C1294XL

Setup:

  1. CCS6.0
  2. tirtos_tivac_2_10_01_38 (ndk_2_24_01_18)
  3. TivaWare_C_Series-2.1.0.12573
  4. Tcp.transmitBufSize = 1024;
  5. Tcp.receiveBufSize = 10240;

For other configuration, please see the attached picture or .cfg file.

The other side of the connection is Windows 7 or Windows 2008 Server.

When the peer sends a large packet, it splits it into smaller segments of 1460 bytes (I think because the MSS is 1460). However, the TivaC can't receive them; it seems to drop them completely. With Windows XP, the peer seems to split the packet into even smaller segments (536 bytes), and then the transmission succeeds.

If I set Tcp.receiveBufSize to 1024 (as in the TcpEcho example), the maximum packet size becomes 1024 and the transmission also succeeds, but I think this reduces performance.

Any idea how to solve this? Are there any mistakes in my configuration?

Thank you so much.

 2843.APmain.cfg

  • You can also take a look at the wireshark log. In line 68, the peer sends a packet of 2048 bytes, which is then split into 1460-byte segments, but the TivaC doesn't receive them, because the ACK value remains unchanged.

    4428.wirelog.dat

  • Hi Jianyi,

    A few questions/comments:

    - In the wireshark capture, could you tell us which IP address corresponds to the TivaC and which one is the PC?
    - Do you know why there is such a large buffer being sent on line 68? This is a packet that is larger than what is sent in a normal Ethernet frame. Just wondering if this is the correct behavior.
    - There is a new connection being established on line 87. Do you know why the application is doing that?
    - Also, one thing to try is to increase the Global.pktNumFrameBufs to more than 10. Just in case frames are dropped because the system is running out of packet buffers.

    Best regards,
    Vincent
    1. PC IP: 114.55.24.183 (port 1000, as server) and Tiva IP: 115.236.37.19 (as client)
    2. Actually, the application on the PC sends two TCP messages of 1024 bytes each, back to back, and I think the TCP stack cascades them on the PC side.
    3. The app on the TivaC checks for heartbeat messages. If it doesn't receive any message within 15 s, it tears down the connection and starts a new one. When this issue happens, the TivaC can't receive any message at all.
    4. I increased it to 15 and the issue still happens. I don't think this is the cause, because (1) the issue happens every time a large packet is sent, and (2) I can reproduce it with a socket debug tool by creating a socket and sending a single large packet; it always occurs even though no other message has been sent and the NDK should have sufficient memory. Also, the wireshark log shows the TivaC advertising a window of 10240, the same as our RxPacketBufferSize, so there is no data sitting in the buffer.

    Some other details about my TivaC application:

    1. There are two socket server threads on two ports (one for printing logs, the other for normal messages). Each port allows only one connection; when a new socket is accepted, the old one is torn down. However, when this issue occurs there are no connections on either port, but both are listening.
    2. There is another socket client thread for normal messages (for the same purpose as the server thread above), which tries to connect to a server on the PC. The wireshark log in the attachment was taken in this case. If I use the PC as the client and connect to the server thread on the TivaC, the same issue happens (no large message can be sent to the Tiva).

    Here is the code for the server thread:

    SocketListenResultT TcpSocketConnectionListenerC::Listen(const uint16 Port, const uint32 maxCount)
    {
      SOCKET socketHandle;
      struct sockaddr_in sLocalAddr;
      struct sockaddr_in client_addr;
      int addrlen=sizeof(client_addr);
      int optval = 1;   /* non-zero: enables SO_KEEPALIVE in the setsockopt() call below */
      int optlen = sizeof(optval);
      int status;
      
      fdOpenSession(TaskSelf());
      
      socketHandle = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
      if (IS_SOCKET_INVLIAD(socketHandle))
      {
        return SOCKET_LISTEN_RESULT_SOCKET_ERR;
      }
      
      memset((char *)&sLocalAddr, 0, sizeof(sLocalAddr));
      sLocalAddr.sin_family = AF_INET;
      sLocalAddr.sin_addr.s_addr = htonl(INADDR_ANY);
      sLocalAddr.sin_port = htons(Port);
      
      status = bind(socketHandle, (struct sockaddr *)&sLocalAddr, sizeof(sLocalAddr));
      if (status < 0) 
      {
        fdClose(socketHandle);
        return SOCKET_LISTEN_RESULT_BIND_ERR;
      }
      
      if (listen(socketHandle, maxCount) != 0)
      {
        fdClose(socketHandle);
        return SOCKET_LISTEN_RESULT_LISTEN_ERR;
      }
      
      if (setsockopt(socketHandle, SOL_SOCKET, SO_KEEPALIVE, &optval, optlen) < 0)
      {
        fdClose(socketHandle);
        return SOCKET_LISTEN_RESULT_SETOPT_ERR;
      }
      
      SOCKET clientfd = INVALID_SOCKET;
    
      while (1) 
      {
        SOCKET clientfOld = clientfd;
        
        /* Wait for incoming request */
        clientfd = accept(socketHandle, (struct sockaddr*)&client_addr, &addrlen);
        
        Log_info1("Socket created, port=%d\n", Port);
    
        if(!IS_SOCKET_INVLIAD(clientfd))
        {
          //Close old socket fd, as only one could exist.
          if(!IS_SOCKET_INVLIAD(clientfOld))
          {
            SocketConnectionTearDown(&clientfOld);
          }
    
          OnConnected(clientfd);
        }
      }
    
    }
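
    The SocketConnectionTearDown() and OnConnected() helpers are not shown here. Roughly, the teardown just closes the old descriptor and invalidates the handle; a simplified sketch (the real helper may do more, e.g. logging):

    static void SocketConnectionTearDown(SOCKET *pSocket)
    {
      /* Release the NDK descriptor and mark the handle invalid so the
       * caller doesn't reuse it. */
      if (!IS_SOCKET_INVLIAD(*pSocket))
      {
        fdClose(*pSocket);
      }

      *pSocket = INVALID_SOCKET;
    }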

    And here is the client code:

    static SOCKET RequestTcpSocketConnect(void)
    {
      SOCKET socketHandle = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
      
      if (IS_SOCKET_INVLIAD(socketHandle))
      {
        return INVALID_SOCKET;
      }
    
      IPN serverIpAddr;
    
      if(!GetServerIpAddr(serverIpAddr))
      {
        fdClose(socketHandle);
    
        Log_info0("Failed to resolve server addr");
        
        return INVALID_SOCKET;
      }
    
      Log_info5("RequestTcpSocketConnect start to connect:%d.%d.%d.%d:%d",
            (serverIpAddr>>0)&0xFF,
            (serverIpAddr>>8)&0xFF,
            (serverIpAddr>>16)&0xFF,
            (serverIpAddr>>24)&0xFF,
            ApIpGetServerPortComm());
    
      struct sockaddr_in server_addr;
    
      memset((char *)&server_addr, 0, sizeof(server_addr));
    
      server_addr.sin_family = AF_INET;
      server_addr.sin_addr.s_addr = serverIpAddr;
      server_addr.sin_port = htons(ApIpGetServerPortComm());
    
      int status = connect(socketHandle, (PSA)&server_addr, sizeof(server_addr));
      BOOL alreadyConnected = FALSE;
    
      if(-1 == status)
      {
        const int fdErr = fdError();
        
        if(EISCONN == fdErr)
        {
          alreadyConnected = TRUE;
        }
    
        Log_info1("RequestTcpSocketConnect err=%d", fdErr);
      }
    
      if(0 == status || alreadyConnected)
      {
        Log_info1("RequestTcpSocketConnect connected, status=%d", status);
    
        return socketHandle;
      }
      else
      {
        fdClose(socketHandle);
        
        return INVALID_SOCKET;
      }
    }
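
    For context, the task that calls RequestTcpSocketConnect() is not shown; in outline it drives the connection something like this (an illustrative sketch only, with the application-specific handling omitted):

    static void TcpClientTask(void)
    {
      char rxBuf[256];
      int  nbytes;

      /* Each NDK task that uses sockets needs its own file descriptor session. */
      fdOpenSession(TaskSelf());

      SOCKET s = RequestTcpSocketConnect();

      if (!IS_SOCKET_INVLIAD(s))
      {
        /* Block on the connection until the peer closes it or an error occurs. */
        while ((nbytes = recv(s, rxBuf, sizeof(rxBuf), 0)) > 0)
        {
          /* Hand the received bytes to the application here. */
        }

        fdClose(s);
      }

      fdCloseSession(TaskSelf());
    }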

  • Hi Jianyi,

    Your use case bears similarities to what is being done in the tcpEcho example that TI-RTOS supplies in the TI Resource Explorer for the TivaC. The example has one server thread on the TivaC and uses an application on Windows to send a packet with a default payload size of 1024 bytes to the TivaC, and waits for the packet to be returned before repeating. I modified the Windows app to send two times back to back before waiting for the buffers to return, and did not see the Windows TCP stack cascade the two packets into one. Something on your Windows system/application may be misconfigured for it to behave the way you described.

    I'd suggest you first verify that the tcpEcho example does indeed work for you. If it does, then try the attached modified version of tcpSendReceive.exe and see if you get the 'cascade' behavior. I have attached my modified source code for your reference as well.

    Best regards,

    Vincent

    4532.tcpSendReceive.zip

  • Hi, Vincent,

    1. RxPacketBufferSize in the TcpEcho example is 1024, so Windows should never send a packet larger than 1024 bytes.
    2. Windows doesn't always cascade packets, but it does sometimes.

    I will modify the original TcpEcho example (changing RxPacketBufferSize to 10240) and run tests to see what happens.

    Thanks

     

  • Jianyi Bao said:
    You can also take a look at the wireshark log. In line 68, the peer sends a packet of 2048, and then split them into 1460, but TivaC doesn't receive them, because the ACK value remains unchanged.

    Looking at packet 68, the PC sends an Ethernet frame of 2102 bytes which contains 2048 bytes of TCP data. An Ethernet frame of 2102 bytes is a "jumbo frame".

    The Ethernet Controller in the TM4C129x device can receive jumbo frames, but from looking at the datasheet the JFEN bit needs to be set in the Ethernet MAC Configuration (EMACCFG) register to allow jumbo frames to be received without reporting a giant frame error in the receive frame status. Looking at the tcpEcho example for the EK-TM4C1294XL in TI-RTOS for TivaC v2.14.0.10, the JFEN bit is clear, meaning support for jumbo frames hasn't been enabled.

    Therefore, I think that to allow the TivaC to receive jumbo frames, the program needs to enable jumbo frames in the TivaC Ethernet Controller and possibly also in the TI-RTOS NDK configuration.

    In your TivaC program have any steps been taken to enable jumbo frame support?

    From a quick look at the TI-RTOS NDK, there is a _INCLUDE_JUMBOFRAME_SUPPORT macro which looks like it enables support for jumbo frames, but I haven't yet investigated enabling it.

  • Hi, Chester Gillon

    Thanks. My program has no steps to enable jumbo frames.

    Line 68 is a jumbo frame. However, I think lines 73/74/75/76 are not, since their length is only 1514 bytes, and they are also discarded by the TivaC. Please see below. So I think jumbo frames are not the only problem.

     

  • Line 68 is a jumbo frame. However, I think lines 73/74/75/76 are not, since their length is only 1514 bytes, and they are also discarded by the TivaC. Please see below. So I think jumbo frames are not the only problem.

    I agree that the jumbo frame is not the (only) problem. Looking again at the capture, after the PC fails to get an Ack for the jumbo packet with a TCP payload of 2048 bytes, it retries with maximum normal-size packets carrying a TCP payload of 1460 bytes. These normal-size packets are still not Acked by the TivaC.

    I have managed to repeat the failure with the tcpEcho program when the Tcp.receiveBufSize was increased.

    The steps were:

    1) Import the tcpEcho_EK_TM4C1294XL_TI_TivaTM4C1294NCPDT example from TI-RTOS for TivaC v2.16.0.08

    2) In the tcpEcho.cfg increase the size of the Tcp.receiveBufSize from 1024 to 10240.

    3) Run the tirtos_tivac_2_16_00_08/packages/examples/tools/tcpSendReceive program on a Linux host which doesn't have jumbo frames enabled (MTU set on the Linux host Ethernet as 1500).

    If I run the tcpSendReceive host program with the buffer length set to 2048:

    tcpSendReceive 192.168.0.3 1000 1 -l2048

    Then the tcpSendReceive program fails to get an Ack from the tcpEcho program running on the EK_TM4C1294XL. In the Wireshark capture of this exchange, 192.168.0.117 is the Linux host and 192.168.0.3 is the TivaC.

    If I leave the Tcp.receiveBufSize at the default size of 1024 bytes then I don't see the failure, and the tcpSendReceive Linux host program can still successfully communicate with the tcpEcho program if tcpSendReceive is set to a buffer length of 2048 bytes.

  • Chester Gillon said:
    The steps were:

    1) Import the tcpEcho_EK_TM4C1294XL_TI_TivaTM4C1294NCPDT example from TI-RTOS for TivaC v2.16.0.08

    2) In the tcpEcho.cfg increase the size of the Tcp.receiveBufSize from 1024 to 10240.

    TI-RTOS for TivaC v2.16.0.08 uses ndk_2_25_00_09.

    If I import the tcpEcho_TMDXDOCKH52C1_M3_TI_F28M35H52C1_CortexM example from TI-RTOS for C2000 v2.16.00.08, which also uses ndk_2_25_00_09, and increase the Tcp.receiveBufSize in the TMDXDOCKH52C1 example from 1024 to 10240, I can't repeat the problem.

    Therefore, the problem with TI-RTOS for TivaC v2.16.0.08 appears to be in the TivaC specific network components, rather than the common NDK components.

    I will attempt to find the cause of the problem where TI-RTOS for TivaC v2.16.0.08 gets into a state of not Acking TCP segments when the Tcp.receiveBufSize is increased.

  • Hi Jianyi,

    Yes, I forgot to mention I was using a window size of 10240.

    In light of your and Chester's findings, I experimented with tcpEcho on the TivaC today, and I also see an issue where the packet is dropped whenever the TCP payload size is 1460. On Linux I see exactly the same behavior as Chester did. On Windows, I saw Windows eventually bring the transfer size down to 536, which got the application over the hump, and TcpEcho successfully completes.

    I'll try to dig further tomorrow to see why the TivaC is dropping packets of size 1460. It is possible there is a problem in the driver as Chester mentioned.

    When the receive window size is less than 1460, the PC uses TCP payloads no larger than the advertised window, i.e. smaller than 1460 bytes, so the resulting frames stay well below the maximum Ethernet size and the problem is masked.

    I am not sure why your Windows system didn't retry with a smaller size after the failed attempts with size 1460. Maybe that is something you can look into in the meantime.

    Best regards,

    Vincent

  • Hi Vincent,

    Thanks. I ran this test on several PCs and found that on some Windows XP machines the message size eventually drops to 536 and the transfer succeeds in the end, while on Windows 7 and Windows Server 2008 the message size stays at 1460 and the transfer always fails.

    So I think the Windows version might be a cause, or some other setting in Windows such as the network interface driver. But that is just a guess.

  • Hi Jianyi,

    I am also using Windows 7, so there must be some sort of network setting on your machine that is preventing the size from shrinking.

    I didn't make it as far as I'd like in the debug today - will continue on tomorrow.

    Best regards,
    Vincent
  • Vincent W. said:
    I'll try to dig further tomorrow to see why the TivaC is dropping packets of size 1460. It is possible there is a problem in the driver as Chester mentioned.

    From debugging, I found that the IPRxPacket() function in tirtos_tivac_2_16_00_08/products/ndk_2_25_00_09/packages/ti/ndk/stack/ip/ipin.c is dropping the packets because they fail a check on the packet length in the following lines:

        /* Make sure total length is reasonable, and not bigger than the */
        /* packet length */
        w = (uint)pIpHdr->TotalLen;
        if( HNC16(w) > pPkt->ValidLen )
        {
            ips.Badlen++;
            PBM_free( pPkt );
            return;
        }
    

    When the packet is dropped, the IP packet length HNC16(w) is 1500, but pPkt->ValidLen is 1494. The IP packet length of 1500 is correct, so the problem appears to be in how the packet's ValidLen is being set.

  • When the packet is dropped, the IP packet length HNC16(w) is 1500, but pPkt->ValidLen is 1494. The IP packet length of 1500 is correct, so the problem appears to be in how the packet's ValidLen is being set.

    I believe the problem is in the tirtos_tivac_2_16_00_08/products/tidrivers_tivac_2_16_00_08/packages/ti/drivers/emac/EMACSnow.c source file, which is the driver for the Ethernet Controller for the TM4C129x devices.

    When creating the Ethernet receive descriptors, the buffer size is set to ETH_MAX_PAYLOAD (1514 bytes). The issues with this are:

    a) It doesn't allow for the 4 bytes of CRC which the TM4C129x Ethernet Controller stores at the end of the received frame.

    [The EMACSnow_handleRx() function in EMACSnow.c is expecting the CRC of the received frame to be stored, since it removes the CRC from the frames passed up to the network stack]

    b) The buffer size of 1514 is not a multiple of 4 bytes (1514 / 4 = 378.5). According to the TM4C129x datasheet, if the buffer size is not a multiple of 4 the behavior is undefined (see the size arithmetic sketched below).
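
    As a sanity check that these two issues account for the numbers seen in IPRxPacket(), here is my own size arithmetic (not taken from the driver or NDK sources; it assumes the controller effectively limits the odd-sized buffer to a 4-byte multiple):

    /*
     * Hypothetical size arithmetic for a full-MSS (1460-byte) TCP segment:
     *
     *   Ethernet frame on the wire        : 14 header + 1500 IP = 1514 bytes
     *   plus CRC stored by the controller : 1514 + 4            = 1518 bytes needed
     *   descriptor buffer programmed      : ETH_MAX_PAYLOAD     = 1514 bytes
     *   buffer limited to 4-byte multiple (assumed)             = 1512 bytes
     *
     *   bytes of frame actually kept      : 1512 - 4 (CRC)      = 1508 bytes
     *   IP bytes after the 14-byte header : 1508 - 14           = 1494 bytes
     *
     * 1494 matches the pPkt->ValidLen observed in IPRxPacket(), while the IP
     * header's TotalLen field says 1500, so the length check fails and the
     * packet is dropped.
     */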

    Based upon this, I modified the EMACSnow.c driver to:

    a) Add the following macros to be used to calculate the receive buffer size:

    /* The size of the CRC stored at the end of the received frames */
    #define CRC_SIZE_BYTES 4
    
    /* The receive descriptor buffer size must be a multiple of 4 bytes */
    #define RX_BUFFER_SIZE_MULTIPLE 4
    
    #define RX_BUFFER_SIZE_ROUNDUP(X) ((((X) + (RX_BUFFER_SIZE_MULTIPLE - 1)) / RX_BUFFER_SIZE_MULTIPLE) * RX_BUFFER_SIZE_MULTIPLE)
    
    /* The buffer size for receive descriptors to allow for receipt of a maximum length ethernet payload (ETH_MAX_PAYLOAD)
     * allowing for:
     * - The CRC also being stored by EMACSnow port
     * - Rounding up the size to the multiple required by the EMACSnow port
     */
    #define RX_BUFFER_SIZE RX_BUFFER_SIZE_ROUNDUP (ETH_MAX_PAYLOAD + CRC_SIZE_BYTES)
    

    b) In the EMACSnow_primeRx() and EMACSnow_InitDMADescriptors() functions replace the use of ETH_MAX_PAYLOAD with RX_BUFFER_SIZE for setting the receive buffer size.

    c) Also, in the EMACSnow_handleRx() function use the new constant CRC_SIZE_BYTES when removing the CRC.

    With these changes, the tcpEcho_EK_TM4C1294XL_TI_TivaTM4C1294NCPDT example was able to successfully receive packets with a TCP payload of 1460 bytes.

    The differences to the EMACSnow.c with context are:

    diff --git a/opt/ti/ti_ccs6_1_1/tirtos_tivac_2_16_00_08/products/tidrivers_tivac_2_16_00_08/packages/ti/drivers/emac/EMACSnow.c b/home/Mr_Halfword/workspace_v6_1/tcpEcho_EK_TM4C1294XL_TI_TivaTM4C1294NCPDT/EMACSnow.c
    index de5c20b..feb65a6 100644
    --- a/opt/ti/ti_ccs6_1_1/tirtos_tivac_2_16_00_08/products/tidrivers_tivac_2_16_00_08/packages/ti/drivers/emac/EMACSnow.c
    +++ b/home/Mr_Halfword/workspace_v6_1/tcpEcho_EK_TM4C1294XL_TI_TivaTM4C1294NCPDT/EMACSnow.c
    @@ -93,6 +93,21 @@ uint32_t g_ulStatus;
     
     bool enablePrefetch = FALSE;
     
    +/* The size of the CRC stored at the end of the received frames */
    +#define CRC_SIZE_BYTES 4
    +
    +/* The receive descriptor buffer size must be a multiple of 4 bytes */
    +#define RX_BUFFER_SIZE_MULTIPLE 4
    +
    +#define RX_BUFFER_SIZE_ROUNDUP(X) ((((X) + (RX_BUFFER_SIZE_MULTIPLE - 1)) / RX_BUFFER_SIZE_MULTIPLE) * RX_BUFFER_SIZE_MULTIPLE)
    +
    +/* The buffer size for receive descriptors to allow for receipt of a maximum length ethernet payload (ETH_MAX_PAYLOAD)
    + * allowing for:
    + * - The CRC also being stored by EMACSnow port
    + * - Rounding up the size to the multiple required by the EMACSnow port
    + */
    +#define RX_BUFFER_SIZE RX_BUFFER_SIZE_ROUNDUP (ETH_MAX_PAYLOAD + CRC_SIZE_BYTES)
    +
     /**************************************************/
     /* Debug counters for PHY issue                   */
     /**************************************************/
    @@ -320,7 +335,7 @@ static void EMACSnow_primeRx(PBM_Handle hPkt, tDescriptor *desc)
     
         /* We got a buffer so fill in the payload pointer and size. */
         desc->Desc.pvBuffer1 = PBM_getDataBuffer(hPkt) + PBM_getDataOffset(hPkt);
    -    desc->Desc.ui32Count |= (ETH_MAX_PAYLOAD << DES1_RX_CTRL_BUFF1_SIZE_S);
    +    desc->Desc.ui32Count |= (RX_BUFFER_SIZE << DES1_RX_CTRL_BUFF1_SIZE_S);
     
         /* Give this descriptor back to the hardware */
         desc->Desc.ui32CtrlStatus = DES0_RX_CTRL_OWN;
    @@ -386,7 +401,7 @@ static void EMACSnow_handleRx()
                           DES0_RX_STAT_FRAME_LENGTH_M) >> DES0_RX_STAT_FRAME_LENGTH_S;
     
                     /* Remove the CRC */
    -                PBM_setValidLen(hPkt, len - 4);
    +                PBM_setValidLen(hPkt, len - CRC_SIZE_BYTES);
     
                     /*
                      *  Place the packet onto the receive queue to be handled in the
    @@ -809,7 +824,7 @@ void EMACSnow_InitDMADescriptors(void)
          * allow packets to be received.
          */
         for (i = 0; i < NUM_RX_DESCRIPTORS; i++) {
    -        hPkt = PBM_alloc(ETH_MAX_PAYLOAD);
    +        hPkt = PBM_alloc(RX_BUFFER_SIZE);
             if (hPkt) {
                 EMACSnow_primeRx(hPkt, &(g_pRxDescriptors[i]));
             }
    

    The modified EMACSnow.c is attached 1464.EMACSnow.c
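
    As a small defensive addition on top of the patch (not part of the attached EMACSnow.c), a compile-time check could catch the receive buffer size ever drifting back to a non-multiple of 4 if ETH_MAX_PAYLOAD changes. A minimal sketch, using the negative-array-size trick since the compiler may not support C11 _Static_assert:

    /* Illustrative compile-time guard (not in the attached file): the typedef
     * fails to compile if RX_BUFFER_SIZE is not a multiple of 4 bytes. */
    #define EMAC_STATIC_ASSERT(cond, name)  typedef char name[(cond) ? 1 : -1]

    EMAC_STATIC_ASSERT((RX_BUFFER_SIZE % RX_BUFFER_SIZE_MULTIPLE) == 0,
                       rx_buffer_size_is_a_multiple_of_4);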

  • Hi, Chester Gillon

    Thanks. With these changes, I could receive large packets.

    However, I found that even when EMAC_CFG_JFEN is not set, jumbo frames can be received. Please see the attachment.

    7536.packet len 2920.zip

  • Thanks Chester. There does seem to be a bug in the code. I have filed a bug report, and we might either use your fix or set the hardware to process the CRC, in which case it won't be passed up to the driver and we can remove the code to subtract the CRC length from the frame in handleRx.

    Jianyi, how are you able to receive jumbo frames now? In your original screenshot, on lines 68 to 76, the jumbo frame was split up into smaller chunks because it was not received by the TivaC. We are wondering why it is being received now. Did you change anything besides what Chester suggested?

    Best regards,

    Vincent

  • Hi Vincent,

    I didn't change anything other than Chester's modification, and we can see the JFEN bit is not set.

  • Jianyi Bao said:
    I didn't change anything other than Chester's modification, and we can see the JFEN bit is not set.

    Can you summarize how the TivaC and PC are connected, and the point at which the Wireshark capture is made?

    I don't understand how the TivaC would be able to accept jumbo frames when the JFEN bit is clear, so I was wondering if there is an intermediate router which may be fragmenting packets.

  • I did the test using two setups:

    1. PC0, PC1 and the TivaC are connected to the same hub using RJ45. PC0 and the TivaC are connected by a socket, PC0 sends large packets to the TivaC, and wireshark is running on PC1. The log shows the packets are split into 1460-byte segments; I think this is the packet size the TivaC receives.

    2. The PC and the TivaC are connected directly using RJ45, and wireshark is running on the same PC. The log shows the packets are split into 2920-byte segments (exactly twice 1460); I think this is the packet size the PC sends, and I didn't see them split further into smaller sizes.

    What can we conclude from the above tests? Thanks

     

  • Jianyi Bao said:
    2. The PC and the TivaC are connected directly using RJ45, and wireshark is running on the same PC. The log shows the packets are split into 2920-byte segments (exactly twice 1460); I think this is the packet size the PC sends, and I didn't see them split further into smaller sizes.

    What can we conclude from the above tests? Thanks

    I still don't understand what is happening in case 2.

    I tried sending jumbo packets to investigate, but after enabling jumbo packets on two Linux hosts (using ifconfig eth0 mtu 9000) I wasn't even able to exchange jumbo packets between the two Linux hosts, since it turns out my Ethernet switch doesn't support jumbo packets.

    For case 2, can you start the Wireshark capture before starting the test programs and post the complete capture? This is to be able to see the MSS and other IP options exchanged when the connection is established, as well as the size of the TCP packets sent.

  • 5381.socket.zip

    Hi,

      Please see the attachment.

  • Please see the attachment.

    A screen capture of the summary lines is:

    Lines 1 and 2 contain the IP options from the PC and TivaC when the connection is established and show that both ends are using a Maximum Segment Size (MSS) of 1460 bytes.

    While line 4 contains a RST, it is from a previous connection (a different port), so I am ignoring that line.

    Line 8 appears to show the PC sending a packet with a TCP payload of 2920 bytes in a jumbo frame. The TCP payload of 2920 bytes is twice the MSS of 1460 negotiated when the connection was established, so in theory the PC shouldn't have sent such a jumbo frame. In the next line, 9, the TivaC does Ack all of the TCP payload, so the payload does get received by the TivaC.

    I note that for all the packets sent from the PC, Wireshark is reporting that the IP header checksum is 0x0000 rather than the expected value, and that this might be caused by "IP checksum offload" being used on the PC.

    I am confused about whether the PC really sent a single jumbo frame with a TCP payload of 2920 bytes to the TivaC, in violation of the MSS of 1460, or whether two separate frames were actually sent, each within the MSS of 1460.

  • Hi Jianyi,

    At this point, your application is working, but you are just investigating the JFEN bit to see if you are truly getting jumbo frames without the bit enabled, correct?

    One way to verify if you are receiving large packets is to check the validLen of the received packets. I believe you made this change based on Chester's earlier post:

    PBM_setValidLen(hPkt, len - CRC_SIZE_BYTES);

    Could you add this if-statement immediately afterwards as an experiment:

    if (len - CRC_SIZE_BYTES > 1500) {
        System_printf("Jumbo frame received. Size=%d\n", len - CRC_SIZE_BYTES);
    }

    If there is a printout, then you are indeed receiving jumbo frames. This would tell you if the frames were not getting split up somehow.

    Best regards,
    Vincent

  • I can see two prints for one packet of size 3200.

    #0000000374 [t=0x00000001:3460c08f] xdc.runtime.Main: Jumboo frame received. Size=1514
    #0000000375 [t=0x00000001:3460fbe8] xdc.runtime.Main: Jumboo frame received. Size=1514

    The attached is the wireshark log.

    5123.w.zip

  • Jianyi Bao said:
    I can see two prints for one packet of size 3200.

    #0000000374 [t=0x00000001:3460c08f] xdc.runtime.Main: Jumboo frame received. Size=1514
    #0000000375 [t=0x00000001:3460fbe8] xdc.runtime.Main: Jumboo frame received. Size=1514

    1514 bytes is the maximum size of "normal" Ethernet packets, which has 1500 bytes of IP payload and 14 bytes of Ethernet headers.

    The wireshark log shows one Ethernet packet with a length of 3254 bytes. Therefore, the wireshark capture appears to report larger Ethernet packets than were actually sent on the wire to the TivaC device. From a search, I found that if wireshark is run on the sending host, it can capture Ethernet frames larger than the MTU when "TCP Large Segment Offload" is enabled on the host - see https://ask.wireshark.org/questions/24699/tcp-packet-length-was-much-greater-than-mtu. Is "TCP Large Segment Offload" enabled on the host used to perform the wireshark capture?

  • Hi Chester

      You are right. TSO is enabled in my network driver settings. If I disable it, I can see the max packet size is 1460.

  • That's a great find, Chester. Thank you for sharing!

    Jianyi, are you fully unblocked at this point?

    Best regards,
    Vincent
  • Just to resurrect this thread after a year: this is still an issue (unless I'm running an old NDK - 2.25.0.9?). However, the modified EMACSnow.c enables the TM4C129 to receive large TCP packets. Thank you Chester!

  • Hi Tom,
    We generally discourage posting a new question to an old, closed thread, because the person who answered before may no longer be available, and starting a new thread allows whoever is currently assigned to monitor the forum to respond to you more quickly. For these reasons, I suggest you start a new thread with your question and reference this thread.

    Thank you