TM4C1294NCPDT: LWIP 1.4.1 Multiple Connections Eventually Results in Corrupted PCB List

Part Number: TM4C1294NCPDT


I'm using LWIP 1.4.1 from TivaWare 2.1.4.178. My firmware allows up to five connections at a time. I have a client application that connects to the TM4C1294 board and sends/receives a small amount of data. During stress testing, where I continually make multiple connections to the firmware, drop them, and repeat, I've noticed an issue where the active_pcb_list within LWIP eventually becomes corrupted with a self-referential "next" pointer. Example:

This results in an infinite loop in a function in LWIP's tcp.c code where it tries to walk the list of active PCBs. In an attempt to identify where and when the active PCB list is being corrupted, I've modified LWIP to look for any self-referential "next" pointers whenever the list is modified (a sketch of this check is shown below), and I set a breakpoint when one is found. I continually see it happening in the tcp_listen_input function in tcp_in.c, right after TCP_REG_ACTIVE(npcb) occurs. Here's the call stack and the location in tcp_in.c where I first see the corruption:

I've spent a lot of time reviewing the code and making sure that LWIP functions are only called from the TM4C's Ethernet interrupt. I'm wondering if anyone else has experienced this issue or something similar, or has any ideas about what might be going on? ...Thanks in advance.
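
For reference, the self-reference check mentioned above looks roughly like the following. The function name tcp_check_active_pcbs is my own addition, not an lwIP API; tcp_active_pcbs, struct tcp_pcb and LWIP_ASSERT are standard lwIP 1.4.1 identifiers.

    /* Debug helper added alongside lwIP's tcp.c.  Walks the tcp_active_pcbs
     * list and asserts if any entry's "next" pointer refers back to the entry
     * itself, which would otherwise cause an infinite loop whenever lwIP
     * walks the list. */
    #include "lwip/tcp.h"
    #include "lwip/tcp_impl.h"   /* declares tcp_active_pcbs in lwIP 1.4.1 */

    void tcp_check_active_pcbs(void)
    {
      struct tcp_pcb *pcb;
      for (pcb = tcp_active_pcbs; pcb != NULL; pcb = pcb->next) {
        if (pcb->next == pcb) {
          /* Corrupt list - set a breakpoint here or let the assert fire. */
          LWIP_ASSERT("self-referential pcb->next", 0);
        }
      }
    }

Calling something like this wherever lwIP modifies the list (for example right after TCP_REG_ACTIVE(npcb) in tcp_listen_input()) is what triggers the breakpoint described above.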

  • Hello Terence,

    Can you see if increasing your stack and heap size helps with the issue? Could be an overflow issue.
  • Thanks for the suggestion, Ralph. I doubled both the stack and heap size from 16384 to 32768, but unfortunately the same issue occurred after 15 minutes of stress testing. :(
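
    (For reference, in a CCS project built with the TI ARM linker, these sizes are usually set through linker options, something like the following in the project's linker command (.cmd) file or build settings; the values shown are just the doubled figures mentioned above:

        --heap_size=32768    /* C heap, previously 16384 */
        --stack_size=32768   /* system stack, previously 16384 */

    The exact location depends on how the project is set up.)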
  • Hello Terence,

    I've been discussing this with our Ethernet expert to brainstorm further ideas. Unfortunately, as we are device experts rather than LWIP experts, it is difficult for us to come up with meaningful suggestions. This is not an issue that has come up before, and it is more advanced than the basic LWIP questions we are usually able to support.

    I would recommend investigating whether there is a cutoff number of connections that have to be established before the issue appears, and then seeing if a root cause can be tracked down from there. Or, if the issue goes away with fewer simultaneous connections, check whether that smaller number of connections is sufficient for the application.

    Beyond that, for more detailed help you should try the LWIP user forums, as this topic really needs guidance from LWIP practitioners more experienced than we are.
  • Hi Ralph - I appreciate the reply and your efforts in brainstorming with others to come up with some ideas. No problem - I totally understand this is more of an LWIP issue than a TM4C issue. I had already posted on the LWIP forum without much feedback, and posted here in the hope that someone else had experienced something similar.

    After more testing, it appears to have something to do with closing client connections using tcp_close(). My application limits the number of concurrent client connections to five and closes any further connection attempts. The problem appears to be happening there - when denying connections. I think calling tcp_abort() might be more appropriate (see the sketch below) - still testing.

    Thanks again for your response.
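
    For anyone following along, the pattern in question looks roughly like the sketch below: a raw-API accept callback that refuses connections beyond the limit. The names accept_cb, MAX_CONNECTIONS and g_connection_count are placeholders from my application, not lwIP APIs; tcp_abort(), ERR_ABRT and the callback signature are standard lwIP. The key detail is that when tcp_abort() is called from inside a callback, the callback must return ERR_ABRT so lwIP knows the pcb has already been freed.

    #include "lwip/tcp.h"
    #include "lwip/err.h"

    #define MAX_CONNECTIONS 5
    static int g_connection_count;   /* updated as connections open and close */

    static err_t accept_cb(void *arg, struct tcp_pcb *newpcb, err_t err)
    {
      LWIP_UNUSED_ARG(arg);

      if ((err != ERR_OK) || (newpcb == NULL)) {
        return ERR_VAL;
      }

      if (g_connection_count >= MAX_CONNECTIONS) {
        /* Refuse the connection.  tcp_abort() frees the pcb and removes it
         * from the active list; returning ERR_ABRT tells lwIP not to touch
         * the (now freed) pcb again. */
        tcp_abort(newpcb);
        return ERR_ABRT;
      }

      g_connection_count++;
      /* ... register recv/err/poll callbacks for the new connection ... */
      return ERR_OK;
    }

    The callback is registered on the listening pcb with tcp_accept(listen_pcb, accept_cb).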

  • Hello Terence,

    We do try to help where we can with LWIP topics; sorry to hear their forum hasn't offered anything either. Maybe a community member here will come across the thread and have an idea.

    If you end up finding a solution, we'd be interested to hear it as well, both to add to our knowledge and because other community members may run into the issue in the future.

    In the meantime, I am going to go ahead and close this thread on my end, but if you have a further question you want to bounce off of us for feedback, just reply on here and I'll still see it :)