
UDP fragments sent much slower than TCP fragments?

I'm sending the same 32KB packet via both TCP and UDP for comparison, looking at the timing with Microsoft Network Monitor. The TCP fragments are all sent in about 15 ms, while the UDP fragments take about 15 ms EACH, roughly a factor of 20 slower. Is there a setting I can adjust to send UDP fragments at a higher rate? Thanks!
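
For reference, the test boils down to something like this (a minimal sketch using generic BSD-style socket calls, not the actual project code; the address, port, and error handling are placeholders):

    /* Send the same 32 KB buffer once over UDP and once over TCP. */
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    #define PKT_SIZE 32768                      /* 32 KB payload for both tests */

    static char buf[PKT_SIZE];

    int main(void)
    {
        struct sockaddr_in dst;
        memset(&dst, 0, sizeof(dst));
        dst.sin_family      = AF_INET;
        dst.sin_port        = htons(5000);                  /* placeholder port */
        dst.sin_addr.s_addr = inet_addr("192.168.1.2");     /* placeholder peer */

        /* UDP: one sendto() of the whole buffer; the IP layer fragments it */
        int u = socket(AF_INET, SOCK_DGRAM, 0);
        sendto(u, buf, PKT_SIZE, 0, (struct sockaddr *)&dst, sizeof(dst));
        close(u);

        /* TCP: one send() of the same buffer */
        int t = socket(AF_INET, SOCK_STREAM, 0);
        connect(t, (struct sockaddr *)&dst, sizeof(dst));
        send(t, buf, PKT_SIZE, 0);
        close(t);
        return 0;
    }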

    -- Carl

  • Hi,

    We are looking into your request and will get back to you shortly.

    Thanks for your patience.

    Regards,
    Sivaraj K
  • Hi,

    Thanks for your post.

    I think you may be observing Ethernet packet loss within the UDP frames, and there could be multiple reasons for the data corruption or UDP packet loss. Have you tried capturing the Ethernet traffic with a packet analyzer such as Wireshark? That would show more detail on the fragmented IP packets: timestamps, source and destination IP, UDP ports, IP ID, and so on.

    I think you may need to implement the EMAC fragmented Ethernet packet structure in your application project, since each fragment then gets its own descriptor that identifies it uniquely. If you are splitting the original data into multiple fragments yourself and transferring them through multiple send/receive calls, you may see an ACK delay from the NDK server after the first packet is sent or received, and there are several possible causes of data corruption or packet loss along that path. Can you also check whether any of the fragmented UDP messages arrive corrupted?
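
    For example, a Wireshark display filter along these lines should show only the IP fragments and their timestamps (these are Wireshark's standard IP dissector fields):

        ip.flags.mf == 1 || ip.frag_offset > 0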

    Please check the below E2E posts which would help you better:

    http://e2e.ti.com/support/embedded/tirtos/f/355/t/244852.aspx

    http://e2e.ti.com/support/dsp/omap_applications_processors/f/42/t/208488.aspx

    Thanks & regards,

    Sivaraj K


  • Hi,

    Also, Ethernet has a Maximum Transmission Unit (MTU) of 1500 bytes, so you cannot send more data than the MTU in one frame.

    However, gigabit Ethernet supports jumbo frames, which can be larger than 1500 bytes.

    In other TCP/IP protocol stack implementations (such as Linux, Windows, or lwIP), the application layer can send up to 32768 bytes in one UDP packet and the IP layer does the fragmentation automatically.
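
    As a rough illustration, assuming a 1500-byte MTU and a 20-byte IP header (so 1480 bytes of IP payload per fragment):

        32768 bytes of UDP data + 8-byte UDP header = 32776 bytes to fragment
        32776 / 1480 = 22 full fragments + 1 partial = 23 fragments on the wire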

    So I think this is a limitation of the NDK.

    On Linux or Windows, a large buffer is sent as several frames of MTU length: when you send a big Ethernet buffer, it goes out as several small packets. The NDK does not do this.

    Please take another look at your design with the above in mind.

    Thanks & regards,

    Sivaraj K


  • Let me add some more information. I'm using NDK 1.93 (maintaining a legacy project) and UDP packet fragmentation appears to be working properly. Here's a sample trace from Microsoft Network Monitor:

        [Network Monitor trace]

    I can see all the data going out; my complaint is the time delay (10 to 15 milliseconds) between fragments. When I look at a trace for TCP packets of the same size, the fragments are closely spaced, but these are being spread out. It seems like something in the IP layer of the NDK must be inserting the delay between UDP fragments; it's below anything I'm doing.

        -- Carl

    >I think you may be observing Ethernet packet loss within the UDP frames, and there could be multiple reasons for the
    >data corruption or UDP packet loss. Have you tried capturing the Ethernet traffic with a packet analyzer such as
    >Wireshark? That would show more detail on the fragmented IP packets: timestamps, source and destination IP, UDP
    >ports, IP ID, and so on.

    I have to apologize; I don't think I clearly expressed my issue. I'll try again... :)

    This is a private network dedicated to the application I'm developing, with no other traffic. I can see the data getting across just fine with both Microsoft Network Monitor and my application. There is no packet loss; all of the data is making it from one end to the other. It doesn't matter whether I use TCP or UDP: both fragment and reassemble my large packets and get the data across reliably (because my network environment is locked down).

    It's the difference in the time BETWEEN fragments being sent that I'm wondering about. My understanding is that both TCP and UDP are layers on top of IP, and the IP layer is what does the fragmentation. When a TCP packet gets fragmented, there is less than 1 ms between fragments going out on the wire, but when a UDP packet gets fragmented, there is 10 to 15 ms between fragments. If the IP layer handles fragmentation for both, why is there a factor-of-20 difference in the time between fragments?
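
    Putting rough numbers on it (assuming the ~23 fragments a 32 KB datagram needs at a 1500-byte MTU):

        TCP: 23 fragments at <1 ms apart  ->  about 15 ms for the whole packet
        UDP: 23 fragments at ~15 ms apart ->  about 345 ms for the whole packet

    That ratio is the factor of 20 I'm asking about.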

    -- Carl