This thread has been locked.

RTOS/66AK2H12: Socket options to detect TCP connection break

Part Number: 66AK2H12

Tool/software: TI-RTOS

XDCtools 3.32, SYS/BIOS 6.46, CCS 7.4, PDK 4.0.7, NDK 2.25

I have a pair of applications, both running on EVMK2HX that need to communicate over TCP/IP.

I have one configured as a server, using the network stack much like the NIMU_emacExample but set up for TCP. I have daemons set up as listeners:

DaemonNew( SOCK_STREAMNC, 0, DATA_PORT, data_connection_established, TCP_LISTENER_TASKPRI, OS_TASKSTKNORM, DATA_PORT, 2 );

On the other EVM I create a socket and connect it to the server. I then have one task calling recv() and another task that occasionally calls send(). I am using the Legacy Non-BSD Sockets Interface. No socket options are set (no setsockopt() calls).

The above is working fine. I can connect and exchange data. My problem is detecting a network disconnect. If I physically remove the Ethernet cable, I get no indication that there is a problem in the client application. I would have expected that the recv() would have returned a -1.

I am not sure what happens on the server application at this point – I only have CCS connected to the client EVM at the moment. The server should be continuing trying to send data.

After some long period of time (5 minutes?) the client recv() call finally returns -1 and the connection is recognized as down.

My question is: how can I configure the sockets so that a removed cable is detected? I can set options on both the client and the server side. Keep-alive? A socket timeout? Any suggestion would be very welcome.

Mike

  • The team is notified. They will post their feedback directly here.

    BR
    Tsvetolin Shulev
  • Mike,

    The short answer is that the EMAC driver on the K2 device does not support cable disconnect notification. If you wanted to add this to the driver, we recommend looking at the Tiva C driver as a reference implementation.

    TI-Drivers ti/drivers/emac/EMACSnow.c

    The EMAC raises an interrupt on various events. This invokes the interrupt handler:

    void EMACSnow_hwiIntFxn(UArg callbacks)

    The driver tracks the link status. In the interrupt handler, it always checks the current link status. If it has changed, it signals the network stack thread of this change.

    static void signalLinkChange(STKEVENT_Handle hEvent, uint32_t linkUp, unsigned int flag)
    {
        if (linkUp) {
            /* Signal the stack that the link is up */
            STKEVENT_signal(hEvent, STKEVENT_LINKUP, flag);
        }
        else {
            /* Signal the stack that the link is down */
            STKEVENT_signal(hEvent, STKEVENT_LINKDOWN, flag);
        }
    }

    When the network stack thread runs again, it will notice this change and invoke a callback to inform the application. See the NDK User Guide Section 3.2.5 NDK Initialization.

    However, there is a keep-alive option you can enable on the socket. This option periodically pings the far end and waits for a reply. If there is no reply, it should cause the recv() function to return with failure. This option is configured with setsockopt(); however, the SO_KEEPALIVE option is documented in the getsockopt() API. See the NDK Reference Guide Section 3.3.3 Sockets API Functions.

    There are also some interesting details in the NDK Reference Guide Section A.7.1 ARP Revalidation Logic.

    Finally, you can also set a receive timeout (SO_RCVTIMEO) value on the socket. But I'm guessing this is impractical if there are quiescent periods in your data flow.

    One question for you, how are you connecting your two EVMs? Are you using a cross-over cable or is there a switch in-between?

    ~Ramsey

  • Thank you for your response.

    It turns out that I do have the SO_KEEPALIVE socket option set. I suspect that is why the disconnected Ethernet cable is eventually detected and reported. I see that there are three settings that affect how long it takes to detect the broken connection:

    keepIdleTime : Socket keep idle time
    keepProbeInterval : Socket keep alive probe interval time
    keepProbeTimeout : Socket keep alive probe timeout

    I am using a .cfg file to configure. How can I check my above values and modify them, either via the configuration file or via code? I see them listed in XGCONF under NDK Core Stack->Transport->TCP, but I do not have the “Add the TCP module to my configuration” box checked. Below is my network configuration script.

    Thanks again,
    Mike

    network.cfg.xs

    //////////////////////////////////////////////////////////////////////
    // Include this section to add network support
    //////////////////////////////////////////////////////////////////////
    
    // Load the CSL package
    var csl = xdc.loadPackage('ti.csl');
    csl.Settings.deviceType = "k2h";
    
    // Load the OSAL package
    var Osal = xdc.useModule('ti.osal.Settings');
    Osal.osType = "tirtos";
    
    // Load the CPPI package
    var Cppi = xdc.loadPackage('ti.drv.cppi');
    
    // Load the QMSS package
    var Qmss = xdc.loadPackage('ti.drv.qmss');
    
    // Load the PA package
    var Pa = xdc.useModule('ti.drv.pa.Settings');
    Pa.deviceType = "k2h";
    
    var Nimu = xdc.loadPackage('ti.transport.ndk.nimu');
    Nimu.Settings.socType = "k2h";
    
    // Use this load to configure NDK 2.2 and above using RTSC. In previous versions of
    // the NDK RTSC configuration was not supported and you should comment this out.
    
    var Ndk = xdc.loadPackage('ti.ndk.config');
    var Global = xdc.useModule('ti.ndk.config.Global');
    Global.enableCodeGeneration = false;
    
    var Cache = xdc.useModule('ti.sysbios.family.arm.a15.Cache');
    Cache.enableCache = true;
    
    var Mmu = xdc.useModule('ti.sysbios.family.arm.a15.Mmu');
    // Enable the MMU (Required for L1/L2 data caching)
    Mmu.enableMMU = true;
    
    // descriptor attribute structure
    var peripheralAttrs = new Mmu.DescriptorAttrs();
    Mmu.initDescAttrsMeta(peripheralAttrs);
    peripheralAttrs.type = Mmu.DescriptorType_BLOCK;  // BLOCK descriptor
    peripheralAttrs.noExecute = true;                 // not executable
    peripheralAttrs.accPerm = 0;                      // read/write at PL1
    peripheralAttrs.attrIndx = 1;                     // MAIR0 Byte1 describes
                                                      // memory attributes for
                                                      // this region
    
    // Define the base address of the 2 MB page the peripheral resides in.
    var peripheralBaseAddrs = [
        { base: 0x02620000, size: 0x00001000 },  // bootcfg
        { base: 0x0bc00000, size: 0x00100000 },  // MSMC config
        { base: 0x02000000, size: 0x00100000 },  // NETCP memory
        { base: 0x02a00000, size: 0x00100000 },  // QMSS config memory
        { base: 0x23A00000, size: 0x00100000 },  // QMSS Data memory
        { base: 0x02901000, size: 0x00002000 },  // SRIO pkt dma config memory
        { base: 0x01f14000, size: 0x00007000 },  // AIF pkt dma config memory
        { base: 0x021F0200, size: 0x00000600 },  // FFTC 0 pkt dma config memory
        { base: 0x021F0a00, size: 0x00000600 },  // FFTC 4 pkt dma config memory
        { base: 0x021F1200, size: 0x00000600 },  // FFTC 5 pkt dma config memory
        { base: 0x021F4200, size: 0x00000600 },  // FFTC 1 pkt dma config memory
        { base: 0x021F8200, size: 0x00000600 },  // FFTC 2 pkt dma config memory
        { base: 0x021FC200, size: 0x00000600 },  // FFTC 3 pkt dma config memory
        { base: 0x02554000, size: 0x00009000 },  // BCP pkt dma config memory
        { base: 0x30000000, size: 0x04000000 },  // emif 16 space: nand flash TRC
        { base: 0x21000a00, size: 0x00000100 },  // emif config 
    ];
    
    // Configure the corresponding MMU page descriptor accordingly
    for (var i = 0; i < peripheralBaseAddrs.length; i++) {
        for (var j = 0; j < peripheralBaseAddrs[i].size; j += 0x200000) {
            var addr = peripheralBaseAddrs[i].base + j;
            Mmu.setSecondLevelDescMeta(addr, addr, peripheralAttrs);
        }
    }
    
    // Reconfigure DDR to use coherent address
    Mmu.initDescAttrsMeta(peripheralAttrs);
    
    peripheralAttrs.type = Mmu.DescriptorType_BLOCK;
    peripheralAttrs.shareable = 2;            // outer-shareable (3=inner, 0=none)
    peripheralAttrs.accPerm = 1;              // read/write at any privilege level
    peripheralAttrs.attrIndx = 2;             // normal cacheable (0=no cache, 1=strict order)
    
    for (var vaddr = 0x80000000, paddr = 0x800000000; vaddr < 0x100000000; vaddr += 0x200000, paddr += 0x200000) {
        Mmu.setSecondLevelDescMeta(vaddr, paddr, peripheralAttrs);
    }
    
    // Add MSMC as coherent
    for (var addr = 0x0c000000; addr < 0x0c600000; addr += 0x200000) {
        Mmu.setSecondLevelDescMeta(addr, addr, peripheralAttrs);
    }
    

  • Forgot to answer your question re how they are connected: via an Ethernet hub and normal Ethernet cables, not a crossover cable.

    I added :

    var Tcp = xdc.useModule('ti.ndk.config.Tcp');
    Tcp.keepIdleTime = 20;
    Tcp.keepProbeInterval = 5;
    Tcp.keepProbeTimeout = 10;

    to my conf.xs file and I'm trying to test now ...

  • The above settings in my .cfg.xs did not help. I disconnected the Ethernet cable and waited over 8 minutes and never had a recv() error reported.
  • Mike,

    You are doing it correctly. The keep alive properties are configured in the config script as you are doing. In addition, you must enable this option on every socket you create.

    int optval;
    int optlen;

    optval = 1;
    optlen = sizeof(optval);
    setsockopt(socket, SOL_SOCKET, SO_KEEPALIVE, &optval, optlen);

    You might find this thread on keep alive helpful. Steve is our network expert, so his posts are authoritative.

    In the CCS debugger, add 'tcps' to your expressions window. Turn on continuous refresh, then run your program. You should see the following values increment when the connection times out.

    tcps.KeepTimeout
    tcps.KeepProbe
    tcps.KeepDrops

    You can also add '_ipcfg' to your expression window. You should see the configuration values here.

    _ipcfg.TcpKeepIdle          (Tcp.keepIdleTime)
    _ipcfg.TcpKeepIntvl         (Tcp.keepProbeInterval)
    _ipcfg.TcpKeepMaxIdle       (Tcp.keepProbeTimeout)

    I'm not sure why the call to recv() is not returning. I'm still investigating this part.

    ~Ramsey

  • Mike,

    I have confirmed that when the connection times out, the call to recv() will return with -1 and errno will be set to ETIMEDOUT. Furthermore, the internal state of the socket will be made unusable, so you need to close the socket and attempt to reestablish your connection by creating and connecting a new socket.

    One detail regarding the timeout detection. Once the NDK stack detects the connection has entered the idle state, it will attempt to confirm the presence of the far end by sending probes. During this probing period, if the connection is reestablished, then your socket is still valid. In this scenario, the call to select will not have returned with -1.

    For example, if you define idle time as 25 seconds (Tcp.keepIdleTime = 250), and no data has been received for this period of time, the NDK stack considers the socket to have entered the idle state. The NDK will begin probing the far end. Let's say you have configured the probing period to be 5 seconds (Tcp.keepProbeTimeout = 50) and your probe interval at 1 second (Tcp.keepProbeInterval = 10). If the connection is still valid, the first probe will be acknowledged, the NDK will stop probing and reset the idle timer. However, if the connection has been broken (i.e. cable is disconnected), the probe will be unanswered, so the NDK will keep probing every second for 5 seconds. Once the probe period has expired, the NDK will flag the socket as ETIMEDOUT and the call to recv() will return with -1.

    Note that the keep-alive properties are configured in units of 0.1 seconds.
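    The worst-case detection latency follows directly from these values: the idle timer must expire, and then the full probe window must elapse with every probe unanswered. A minimal sketch of the arithmetic (the helper name is hypothetical, not NDK API), using the example values above:

    ```c
    #include <stdio.h>

    /* Keep-alive times are configured in 0.1 s ticks. Worst case, the idle
     * timer (keepIdleTime) must expire and then the whole probe window
     * (keepProbeTimeout) must elapse with every probe unanswered. */
    static unsigned worstCaseDetectTicks(unsigned keepIdleTime,
                                         unsigned keepProbeTimeout)
    {
        return keepIdleTime + keepProbeTimeout;
    }

    int main(void)
    {
        /* Example values from above: idle 25 s (250), probe window 5 s (50) */
        unsigned ticks = worstCaseDetectTicks(250, 50);
        printf("worst-case detection: %u.%u s\n",
               ticks / 10, ticks % 10);   /* prints 30.0 s */
        return 0;
    }
    ```

    With the much smaller values of 20 and 10 ticks, the worst case would be only 3 seconds, which is likely too aggressive for this mechanism.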

    Remember that every socket must have the keep-alive feature enabled by setting SO_KEEPALIVE using setsockopt(). Also, the keep-alive feature is only available for TCP connections. It does not work with UDP connections.

    ~Ramsey

  • Ramsey,

    Ok I added the expressions and here is what is displayed. (I've hit Suspend and currently my connection is up and operating).

    The configuration is not matching what I set in my network.cfg.xs where I have added:

    var Tcp = xdc.useModule('ti.ndk.config.Tcp');
    Tcp.keepIdleTime = 20;
    Tcp.keepProbeInterval = 5;
    Tcp.keepProbeTimeout = 10;

    Below is my entire cfg.xs file. It is called from my project's .cfg file with:

    // Network Stack 
    xdc.loadCapsule("../network/network.cfg.xs");

    I am setting SO_KEEPALIVE on my sockets and the disconnected Ethernet cable is eventually detected as I mentioned.

    Mike

  • Mike,

    Well, I see two potential issues. First, it does not look like your build is picking up your Tcp configuration settings. The _ipcfg values look like the defaults. Either your build is not using your config script or your Tcp settings are changing again after your network.cfg.xs file is parsed.

    I often use the following code to debug my configuration. It forces the configuration to stop immediately and it prints values of interest. Add this right after your Tcp configuration code.

    throw new xdc.global.Error("Tcp.keepIdleTime=" + Tcp.keepIdleTime);

    If this gives the correct result, move this code to the very end of your configuration script to see if something might have changed the Tcp configuration again. If the Tcp variable is out of scope, just define it again. It's okay to define the same variable multiple times (it just references the object already in memory).

    If this still looks correct, then it might be a runtime issue. The configuration values are applied "sometime after the network stack is started". To be more precise, the main network task, ti_ndk_config_Global_stackThread, will create another transient task, NS_BootTask, which actually applies the Tcp configuration values to the _ipcfg object. The main NDK task runs at priority 2 and the boot task runs at priority 5. So, it should all work out. Let's make sure the Tcp config values have been applied before you open your socket. Do this by setting a hardware watchpoint on _ipcfg.TcpKeepIdle (write operation). Make sure this watchpoint is reached before your socket has been created. When you hit this watchpoint, use ROV to see what tasks are in play.

    The second issue I see is that your probe interval (0.5 sec) is quite short. Remember, the units are 0.1 sec. I'm wondering if it is getting truncated to zero because the tcps.KeepProbe count is not incrementing. Try using larger values. I would suggest my values above.
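    To illustrate the suspected truncation: if the stack internally converted the 0.1 s tick values to whole seconds with integer division (an assumption about the internals, not confirmed NDK code), any sub-second interval would collapse to zero:

    ```c
    #include <stdio.h>

    /* Hypothetical internal conversion from 0.1 s ticks to whole seconds.
     * Integer division truncates, so any tick count below 10 becomes 0. */
    static int ticksToSeconds(int ticks)
    {
        return ticks / 10;
    }

    int main(void)
    {
        printf("keepProbeInterval = 5  -> %d s\n", ticksToSeconds(5));  /* 0: probe timer would never fire */
        printf("keepProbeInterval = 10 -> %d s\n", ticksToSeconds(10)); /* 1 s */
        return 0;
    }
    ```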

    Finally, you should be able to refresh the CCS Expressions window without having to halt the target. I typically turn on continuous refresh (in the toolbar). This will update the view at about 1 Hz. As your program runs, you should be able to see the counters incrementing.

    ~Ramsey

  • OK I think I might need some (more?) hand holding...

    First I added the throw code to the .cfg file at the very end. My value of 20 was indeed set. I removed the throw code.

    I changed the settings to the values you suggested:

    Tcp.keepIdleTime = 250;
    Tcp.keepProbeInterval = 10;
    Tcp.keepProbeTimeout = 50;

    I set a hardware breakpoint on _ipcfg.TcpKeepIdle, but you said "(write operation)" and I'm not sure what you mean by that. I put additional breakpoints on my socket creation call. That breakpoint is hit before the hardware breakpoint, and the values in my expression monitor are still at the defaults (72000, 750, 6000). I removed my socket creation breakpoint and resumed. The hardware breakpoint on _ipcfg.TcpKeepIdle is never hit.

    Here is my breakpoint:

    As a note, I am going to want to be able to detect a pulled cable as soon as possible so I'm going to be looking for the minimum values I can set. Yes I know they are in 0.1 second intervals.

    Thanks,
    Mike

  • Mike,

    No worries. _ipcfg.TcpKeepIdle is a data object, not code, so you need a watchpoint, not a breakpoint. You already know how a breakpoint works. A watchpoint is a trigger that monitors a memory location (a data object) and halts the processor when the CPU reads or writes that location; you can configure it to trigger on either operation, or both.

    In the CCS Breakpoints view, pull-down the menu next to the breakpoint icon and select Hardware Watchpoint. In the location field, enter _ipcfg.TcpKeepIdle. In the Memory menu, select Write.

    Since your _ipcfg object is never getting updated, I'm wondering how many tasks you have and their respective priorities. Use the ROV view to inspect your tasks. Note their priority. Your application task should be lower than the network tasks.

    I understand you want to detect cable disconnect as soon as possible. This should really be done by the EMAC driver. Unfortunately, on the K2 device, this is currently not supported. Maybe you can request this feature through your FAE. The keep alive feature is really intended for servers to detect non-responsive clients (on the order of minutes or hours). We are using it here as a fix for the driver. I don't think it will be that responsive. I would allow several seconds to detect cable disconnect. Also, if your connection does go idle, the keep alive feature will generate more network traffic. This should be minimized as much as possible.

    ~Ramsey

  • Ramsey earlier you posted:

    "In the CCS debugger, add 'tcps' to your expressions window. Turn on continuous refresh, then run your program. You should see the following values increment when the connection times out.

    tcps.KeepTimeout
    tcps.KeepProbe
    tcps.KeepDrops"

    I have a lot of sockets connected to the same EVM (I am connecting to 4 EVM servers, each with up to three sockets). So watching these tcps values shows me what, the aggregate?
  • Ramsey,

    Thanks for the continued assistance. I’m still not clear on _ipcfg.TcpKeepIdle as a data object, not code. I made the watchpoint as you instructed, and it still was not ‘hit’.

    I also set the same watchpoint on the server side and it wasn't hit in that application either. That application is only running 3 sockets (versus the 3x4 + 2 on the client, where I'm connecting to four servers, each with up to three ports, while also providing two server ports).

    The comment about ‘how many tasks’ deeply concerns me. Is there some hard or practical limit that I am not aware of? I am in for a major redesign if that is the case. As for priorities, I need to go back and revisit that. I’ve got another thread going on that topic and I am still not clear (ref: e2e.ti.com/.../2518565). Right now my network stack task is at priority 8, and the socket tasks are at 7, as are many other tasks that are not dealing with socket comms. ROV task screen capture below for reference.

    Mike

    PS – I’ve got another, more pressing issue that I’m going to open another E2E thread on. Basically, if my server sends a large stream of data to my client, my client recv() returns -1 and gives a socket error of 54 (ECONNRESET). If I send a smaller amount of data, it is received OK. This is far more pressing than the cable disconnect detection, so I need to focus on that.

  • Mike,

    Yes, the tcps object is an aggregate of all statistics for the TCP protocol.

    Looking at your ROV screen shot, I can see that the NDK task has terminated. Since you never observe _ipcfg getting updated, I'm guessing the NDK task terminated before it had a chance to spawn the boot task, which would have updated _ipcfg. I also see GDU Net Stack and GNU Control. Are these your tasks or some other networking tasks?

    At this point, we need to figure out why the NDK task is terminating. The NDK User Guide, Section 3.5.2 Controlling Debug Messages gives some information on how to instrument the NDK code. This would be a good place to start.

    I'm concerned that the watchpoint on the server side is not working. I would check the NDK task over there (using ROV) and make sure it is running. Maybe have a look at _ipcfg in the expression window to see if you have the expected values.

    As for the number of tasks, that should not be an issue. As long as you have sufficient memory, there is no real limit on the number of tasks. However, task priority is important. Your network task should have higher priority than your application tasks. The real-time nature of the network requires prompt attention. Once the data is in memory, then application tasks can process it while the network is idle. However, if the network provides more data than your application can handle, then you are overcommitted. You would need to drop some data in order to catch up, but this should happen at the application layer, not the transport layer.

    I understand you have other issues which are higher priority. When things slow down, poke this thread to resume this effort.

    ~Ramsey

    PS. Maybe we will rendezvous on your new thread.

  • Well I'm getting more confused....

    "Looking at your ROV screen shot, I can see that NDK task has terminated." "At this point, we need to figure out why the NDK task is terminating."

    The task has always shown as terminated. I start with the NDK example: NIMU_emacExample_EVMK2H_armBiosExampleProject

    I just reloaded that unaltered example, ran it, and then hit suspend. Attached is the ROV screen capture with the Tasks displayed; it shows that the NDK Stack is terminated. I always assumed that was therefore normal.

    Mike

  • Mike,

    My mistake. I have come to learn that the examples in the Processor SDK do not use the NDK task but provide their own. The NDK task is still invoked, but all it does is start the NDK heartbeat and then terminate. This is enabled with the following configuration code:

    var Global = xdc.useModule('ti.ndk.config.Global');
    Global.enableCodeGeneration = false;

    Now, a side-effect of doing this is that it disables most other configuration code, including the Tcp configuration we have been discussing. This explains why you are never seeing the Tcp keep alive values applied to the _ipcfg object.

    When using this approach, it becomes the responsibility of the networkStack task to apply the Tcp configuration using C code. Please add the following code to this task:

    {
        Int keepIdleTime = 250;
        CfgAddEntry(hCfg, CFGTAG_IP, CFGITEM_IP_TCPKEEPIDLE,
                CFG_ADDMODE_UNIQUE, sizeof(uint), (UINT8 *)&keepIdleTime, 0);
    }
    {
        Int keepProbeInterval = 10;
        CfgAddEntry(hCfg, CFGTAG_IP, CFGITEM_IP_TCPKEEPINTVL,
                CFG_ADDMODE_UNIQUE, sizeof(uint), (UINT8 *)&keepProbeInterval, 0);
    }
    {
        Int keepProbeTimeout = 50;
        CfgAddEntry(hCfg, CFGTAG_IP, CFGITEM_IP_TCPKEEPMAXIDLE,
               CFG_ADDMODE_UNIQUE, sizeof(uint), (UINT8 *)&keepProbeTimeout, 0);
    }

    You should already have code similar to this. Add the new code in the same place. Just make sure this code is added before the task calls NC_NetStart().

    One more detail. The code above writes the values to an internal object, not to the _ipcfg object we have been looking at. When you invoke NC_NetStart(), it creates the NS_BootTask task which transfers the values over to _ipcfg. In case you are watching for this in the debugger, you won't see _ipcfg update until after you call NC_NetStart().

    You might want to delete the Tcp configuration from your config script just to avoid confusion in the future.

    Finally, I see that most of the stack (8KB) for the terminated NDK task was never used. This memory is essentially wasted. If you want to minimize the memory usage, you could reduce the NDK task stack size to about 0x200. You must still do this using the configuration script (more confusion):

    var Global = xdc.useModule('ti.ndk.config.Global');
    Global.ndkThreadStackSize = 0x200;

    I hope this clears up the confusion and explains the behavior you have been observing with the keep-alive feature.

    ~Ramsey

  • No problem Ramsey. I sort of arrived at the same conclusion last evening when investigating my other problem. I'd added _ipcfg to my expressions (not just the few members we were discussing) to see buffer sizes. It then dawned on me that if I needed to change the members we were interested here in this discussion that I most likely would need to do it in the same place vice in the .cfg.

    The terminated NDK Stack task was another issue. That had me very confused so thank you for clearing it up. I'll make the cleanups you suggested including reducing the stack.

    Cheers,
    Mike

    PS - here is the E2E thread on my new problem. This one is quite serious for me, a 'show stopper' as they say.
  • There is an entry in the example:

    rc = 4096; // increase stack size
    CfgAddEntry(hCfg, CFGTAG_OS, CFGITEM_OS_TASKSTKBOOT,
            CFG_ADDMODE_UNIQUE, sizeof(uint), (UINT8 *)&rc, 0 );

    but CFGITEM_OS_TASKSTKBOOT is not documented in the NDK API. Can someone provide the purpose?

    Mike

  • Mike,

    I'm not finding CFGITEM_OS_TASKSTKBOOT in the NDK product. Maybe it's a custom configuration added by the example. I downloaded Processor SDK RTOS K2HK 4.01.00.06, which contains PDK K2HK 4.0.7. Would you point me to the example you are using which contains this configuration symbol?

    Thanks,
    ~Ramsey

    PS. Your link above to the new thread is broken.

  • The example is: NIMU_emacExample_EVMK2H_armBiosExampleProject
    Here is the link again: e2e.ti.com/.../2520120
    The title is: RTOS/66AK2H12: Client Socket recv() returns -1 and gives ECONNRESET
  • Mike,

    Thanks for the example pointer. I finally found CFGITEM_OS_TASKSTKBOOT in the NDK (I had been looking in the wrong release). This config param defines the stack size of the transient NDK boot task (NS_BootTask). This is the same task we discussed previously in this thread.

    This config param is defined in the following file:

    <ndk>/packages/ti/ndk/inc/nettools/netcfg.h

    #define CFGITEM_OS_TASKSTKBOOT     10  /* Stack size for NS_BootTask */

    I figure you will want to tune this value to match the actual stack usage of your NDK boot task. This is a little tricky because this task lives for such a short time. It is created in NC_NetStart(), runs briefly, and then self-terminates. After that, the Idle loop will delete this task, thus removing all traces of its existence.

    There are three places you can stop the processor to inspect the actual stack usage of this task.

    1. Set a breakpoint at the end of NS_BootTask. Use the Disassembly window to view the NS_BootTask code, scroll down to the end of the function, and set a breakpoint on TaskExit.
    2. Set a breakpoint on Task_delete. This is probably the easiest. My guess is that this will be the first task deleted. If not, this might not work out so well.
    3. Set a breakpoint on Task_deleteTerminatedTasks. This function is called every time through the Idle loop. It might get called before the task has terminated, in which case it would be too early. I just mention it for completeness.

    When you hit any of these breakpoints, use ROV to open the Task Detailed view. You should see the NS_BootTask. If you use method #2 or #3 above, the task should be in the terminated state. You can inspect the stackPeak value to tune the stack size.

    I will file a bug to add this config param to the NDK API Reference Guide.

    ~Ramsey

    PS. I'll have a look at your other forum thread (but it's currently assigned to someone else).

  • Hi Ramsey, This is RJ Hall in the Catalog Marketing group. The customer has made me aware of this issue. Is there anything else we can do on our end, or are we basically constrained to the 3 steps that you outlined above, along with the bug-fix request for adding this config param? Feel free to PM me at rhall@ti.com if you would like. Thanks for your help. RJ
  • Hi Mike,

    I think Ramsey has provided all the needed information, so I'm going to mark this as "TI Thinks Resolved". If you disagree, please post a response.

    Note: after some number of days without activity (I think it's 30 days now), threads will get locked. If this occurs and you are still having issues, just open a new thread and reference this one.

    Todd
  • I haven't been able to test because I'm down to one EVM. It sounds like the bottom line, at least for this thread, is that the driver does not support physical-layer disconnect detection and there are no plans to add it, so I need to figure out an application-level workaround.
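    One common shape for such an application-level workaround is a heartbeat: each side sends periodic messages and tracks the time of the last one received, declaring the link dead after a silence threshold. A minimal, platform-independent sketch (all names are hypothetical, not NDK API):

    ```c
    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical link monitor: call onReceive() whenever data (or a
     * heartbeat message) arrives, and poll isAlive() periodically. */
    typedef struct {
        uint32_t lastRxMs;   /* time of last received message, in ms */
        uint32_t timeoutMs;  /* silence threshold before link is declared dead */
    } LinkMonitor;

    static void LinkMonitor_init(LinkMonitor *m, uint32_t nowMs, uint32_t timeoutMs)
    {
        m->lastRxMs = nowMs;
        m->timeoutMs = timeoutMs;
    }

    static void LinkMonitor_onReceive(LinkMonitor *m, uint32_t nowMs)
    {
        m->lastRxMs = nowMs;
    }

    static bool LinkMonitor_isAlive(const LinkMonitor *m, uint32_t nowMs)
    {
        /* unsigned subtraction is wrap-safe for a millisecond tick counter */
        return (nowMs - m->lastRxMs) <= m->timeoutMs;
    }

    int main(void)
    {
        LinkMonitor m;
        LinkMonitor_init(&m, 0, 2000);          /* 2 s silence threshold */
        assert(LinkMonitor_isAlive(&m, 1500));  /* still within threshold */
        assert(!LinkMonitor_isAlive(&m, 2500)); /* silent too long: presumed down */
        LinkMonitor_onReceive(&m, 2600);        /* a new message revives it */
        assert(LinkMonitor_isAlive(&m, 3000));
        return 0;
    }
    ```

    The sender side simply transmits a small heartbeat message whenever the connection would otherwise be idle, so the receiver's silence threshold can be chosen independently of the application data rate.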