Hello,
I use the NDK (v2.26.0.08, for compatibility reasons) to implement a TCP/IP server on an AM3352.
The server works quite well so far, but when I send data, the TCP/IP stack only transmits 1 data byte per TCP packet.
We use a higher-level protocol over this TCP/IP connection which can only move 1 byte per state machine call into the send buffer.
So I call the NDK "send" function with 1 byte at a time to move the data into the TCP/IP buffer.
This repeats until the whole "protocol packet" has been sent (done in a while loop, so in practice multiple "send" calls in a row).
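In case it helps, here is a simplified sketch of that send path (protocol_next_byte() is only a placeholder for our real state machine call, not the actual name; the socket is the same clientSocket as below):

/* Simplified sketch of the per-byte send loop */
uint8_t byte;
while (protocol_next_byte(&byte)) {                      /* state machine delivers 1 byte per call */
    int rc = send(tcpip->clientSocket, &byte, 1, 0);     /* NDK send(), 1 byte at a time */
    if (rc == SOCKET_ERROR) {
        UART_printf("*** send error\n");
        break;
    }
}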
I have *not* enabled the "TCP_NODELAY" option.
To be sure, I have also tried to disable "TCP_NODELAY" explicitly, but without any success:
/* opt = 0 turns TCP_NODELAY off, i.e. the Nagle algorithm stays enabled */
int32_t opt = 0;
if (setsockopt(tcpip->clientSocket, IPPROTO_TCP, TCP_NODELAY, &opt, sizeof(opt)) == SOCKET_ERROR) {
    UART_printf("*** Disable NODELAY error\n");
} else {
    UART_printf("Disable NODELAY OK\n");
}
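To double-check, one could also read the option back to see what value the stack actually uses; a sketch, assuming the NDK BSD-style getsockopt:

/* Read back TCP_NODELAY to verify the value actually in effect */
int32_t nodelay = -1;
int optlen = sizeof(nodelay);
if (getsockopt(tcpip->clientSocket, IPPROTO_TCP, TCP_NODELAY, &nodelay, &optlen) == SOCKET_ERROR) {
    UART_printf("*** getsockopt(TCP_NODELAY) error\n");
} else {
    UART_printf("TCP_NODELAY is %d (0 = Nagle enabled)\n", (int)nodelay);
}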
AFAIK the Nagle algorithm (i.e. TCP_NODELAY disabled) should coalesce these small writes into larger segments and prevent such behaviour, shouldn't it?
Below is a Wireshark screenshot capturing the 1-data-byte packets sent by the AM3352 in response:
192.168.1.10 -> PC
192.168.1.20 -> AM3352
NDK buffer configuration (screenshot):
NDK TCP settings (screenshot):
Thank you,
Markus