CC3200 SimpleLink TCP Stack - TCP Socket keep-alive messages interval not configurable

Other Parts Discussed in Thread: CC3200

Hi,

Why is the interval of the TCP keep-alive messages hard-coded to 5 minutes, and not user-configurable?

I've got a server socket on the CC3200 in AP mode.
If a client "hard-disconnects" (e.g. loses Wi-Fi connectivity to the AP) before closing its socket, the CC3200 will not notice until the keep-alive probe fails - that is, up to 5 minutes later.
So during those 5 minutes one socket is stuck returning EAGAIN/EWOULDBLOCK; multiply that by the number of sockets in use, and again by successive "hard-disconnected" clients, and under certain circumstances this can quickly exhaust all available sockets/resources.

One can easily implement an "application timeout" (see the sketch below) or match AP clients to socket clients to get the desired granularity - agreed - but keep-alives are, well, designed for exactly that.
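
For reference, here is a rough sketch of what I mean by an "application timeout", assuming the SimpleLink socket API from the CC3200 SDK; `MAX_CLIENTS`, `APP_IDLE_TIMEOUT_S`, `last_activity` and the slot bookkeeping are illustrative names of mine, not SDK symbols:

```c
/* Sketch only: reclaim server-side sockets that have been silent for
 * longer than an application-chosen limit, instead of waiting for the
 * fixed 5-minute TCP keep-alive. */
#include "simplelink.h"

#define MAX_CLIENTS        4
#define APP_IDLE_TIMEOUT_S 30

typedef struct {
    _i16 sd;            /* SimpleLink socket descriptor, -1 if unused */
    _u32 last_activity; /* last time data was seen, in app-tick seconds */
} client_slot_t;

static client_slot_t clients[MAX_CLIENTS];

/* Call periodically with a monotonic seconds counter. Assumes the
 * client sockets were made non-blocking (SL_SO_NONBLOCKING). */
void poll_clients(_u32 now_s)
{
    char buf[128];
    int  i;

    for (i = 0; i < MAX_CLIENTS; i++) {
        if (clients[i].sd < 0)
            continue;

        _i16 rc = sl_Recv(clients[i].sd, buf, sizeof(buf), 0);
        if (rc > 0) {
            clients[i].last_activity = now_s;  /* data seen: peer alive */
            /* ... hand buf off to the HTTP layer ... */
        } else if (rc == SL_EAGAIN) {
            /* No data on a non-blocking socket: apply the app timeout
             * instead of waiting 5 minutes for the NWP keep-alive. */
            if (now_s - clients[i].last_activity > APP_IDLE_TIMEOUT_S) {
                sl_Close(clients[i].sd);
                clients[i].sd = -1;
            }
        } else {
            /* 0 = orderly close by peer, <0 = error: reclaim either way. */
            sl_Close(clients[i].sd);
            clients[i].sd = -1;
        }
    }
}
```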

Thanks for any info on this!

  • Hi,

    Apologies for the delayed response.

    The 5 minutes is the default configuration of the TCP keep-alive, which we consider a good fit for most use cases in terms of the trade-off between power consumption, stability, and the amount of traffic on the network.
    If you have a proprietary system, you can always send a packet from the server on that socket; the system will raise an error if the client is no longer connected.
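
    Something along these lines (a sketch only, assuming the CC3200 SimpleLink socket API; `probe_client` and the 1-byte payload are illustrative, not SDK names):

    ```c
    /* Send a small probe from the server and treat a send error as a
     * dead peer. The payload must be harmless to your app protocol. */
    #include "simplelink.h"

    int probe_client(_i16 sd)
    {
        const char probe = 0;               /* 1-byte application probe */
        _i16 rc = sl_Send(sd, &probe, sizeof(probe), 0);

        if (rc < 0) {
            sl_Close(sd);                   /* peer gone: reclaim socket */
            return -1;
        }
        return 0;                           /* send accepted */
    }
    ```

    Note that the first send after a silent disconnect can still succeed, since the data is only queued towards the peer; the error typically shows up on a subsequent send, once retransmissions time out or the peer answers with a RST.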

    Regards,
    Ankur
  • Hi Ankur,

    Thanks for your reply.

    I agree with you on both points.
    My question was in fact more like a feature request.

    If we leave the "proprietary system" aside and take an HTTP/1.1 implementation, this still raises concerns in AP mode (and, to a lesser extent, in STA mode):
    - If the HTTP/1.1 implementation honours the client's HTTP keep-alive request (as it could also in HTTP/1.0), the connection is kept in an active connections pool
    - If the client abruptly disconnects from the AP
    -> the connection will be polled until the TCP keep-alive packets come into play
    Add to this that most (as in, virtually all) modern browsers* open pre-emptive TCP connections to the HTTP host, and the HTTP implementation may end up handling a handful of ghost connections for a while, with no (easy) way to check for disconnection.

    *Chrome may even try to open up to 10 sockets for a basic HTML page with a few resources (say 2 or 3 CSS/JS/JPEG files)

    The "demo webserver" example badly suffers from this, but tries to circumvent this by handling an application timeout, and hardly fails under certain circumstances.
    I feel like the buit-in webserver doesn't suffer from this as it looks like it's HTTP/1.0 and certainly close after response, though i did not test it. But obviously HTTP KeeAlives are a huge benefit user-experience wise.

    All in all, I am not sure that leaving the timeout handling to the application is always a good performance/stability trade-off.

    Don't get me wrong, I've spent time working around this and have something robust below the hard limit of 5 minutes, without using TCP keep-alives and in a non-proprietary way (in a hackish way, to be honest, as I mentioned in the opening post) - but I feel that a "socket keep-alive configure option" could make life easier for app developers.
    But maybe (certainly?) I am missing something on the NWP side that makes this not a viable solution.
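
    For context, what the socket API exposes today (to my knowledge) is only an on/off switch for keep-alive, along these lines; a hypothetical companion option carrying the interval in seconds is essentially what I am asking for:

    ```c
    /* What is configurable today, as far as I can tell: TCP keep-alive
     * can only be enabled or disabled per socket via SL_SO_KEEPALIVE.
     * The 5-minute probe interval itself is fixed inside the NWP. */
    #include "simplelink.h"

    void enable_keepalive(_i16 sd)
    {
        SlSockKeepalive_t ka;
        ka.KeepaliveEnabled = 1;  /* 1 = enabled (the default), 0 = disabled */
        sl_SetSockOpt(sd, SL_SOL_SOCKET, SL_SO_KEEPALIVE,
                      (const void *)&ka, sizeof(ka));
    }
    ```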

    That still makes me wonder whether TI would consider configurable TCP keep-alive timeouts as a feature request or not?

    Regards,
    0xff
  • Hi,

    Thanks for providing the detailed feedback/request. I will share this with the team.

    Regards,
    Ankur