
CC2642R: LL Connection Parameter Update Issues

Part Number: CC2642R

I'm not having any luck getting an LL connection parameter update to go through on my peripheral application running on the CC2642.

I have things configured so that the app has to approve any update requests, i.e.

#define DEFAULT_PARAM_UPDATE_REQ_DECISION     GAP_UPDATE_REQ_PASS_TO_APP
  // Configure GAP
  {
    uint16_t paramUpdateDecision = DEFAULT_PARAM_UPDATE_REQ_DECISION;

    // Pass all parameter update requests to the app for it to decide
    GAP_SetParamValue(GAP_PARAM_LINK_UPDATE_DECISION, &paramUpdateDecision);
  }

I would expect to receive the GAP_UPDATE_LINK_PARAM_REQ_EVENT in my SimplePeripheral_processGapMessage() function. However, I never get this event. It doesn't seem to matter whether the update requests are initiated locally or from the central's side; the event never triggers. All update requests are rejected with an "Unacceptable Connection Interval". I see this on the BLE sniffer as well as in the application, which receives the GAP_LINK_PARAM_UPDATE_EVENT with a status of 0x3b (LL_STATUS_ERROR_UNACCEPTABLE_CONN_INTERVAL). We run the shortest connection interval possible, as latency is very important to our application. Here's what the central is asking for:

Control Pkt: LL_CONNECTION_PARAM_REQ
Minimum Interval: 6
Maximum Interval: 6
Latency: 0
Timeout: 2000
Preferred Periodicity: 0
Reference Connection Count: 11
Offset 0: 0
Offset 1: 1
Offset 2: 2
Offset 3: 3
Offset 4: 4
Offset 5: 5

I'll have to take a look at the BT5 spec to make sure these tight connection intervals are still valid; maybe they increased the minimum interval and that's all I'm running into? I don't have control over the central's code, or I would relax things a bit and give it a test.

I've also tried setting DEFAULT_PARAM_UPDATE_REQ_DECISION to GAP_UPDATE_REQ_ACCEPT_ALL as a temporary workaround, but I get the same behavior where the stack rejects the updated parameters. 

Any thoughts on how to troubleshoot this issue? I've checked all the usual suspects: stack/heap blowouts, confirmed I am getting all the other GAP messages in the application, etc.

     

  • Hi,

    The parameters look correct. Can you try a higher connection interval, for example 200 ms, and see if you get the same behavior?

    Best wishes
  • Hello Zahid,

    I'm working on getting a build of our central code that will allow me to do this. Right now the central is always requesting the tight connection interval, even though I'm passing it some arguments that should prevent this. I will report back and let you know what I find out once I get this sorted out.

    Something else I've discovered that is worth mentioning: the 7.5 ms connection interval does work with the CC2642 peripheral when I test against a Raspberry Pi simulating my central. There are a few differences between the simulator and my actual central device, the biggest being that the Pi runs Bluetooth 4.1 while the true central runs 4.2. In logs from the Pi I can see that it waits for the parameter request from the peripheral (with the 7.5 ms connection interval), then quickly responds with a connection parameter update indication that the peripheral accepts. In logs with the true central I see the central trying to initiate the first parameter update, and stuck between the request and the peripheral's response is an LL_LENGTH_REQ/LL_LENGTH_RSP exchange. I'm wondering if the order/timing of these requests is tripping something up in the stack? I should be using whatever configuration simple_peripheral shipped with for MTU/DLE settings.

    I've attached a screenshot from my sniffer to help illustrate things. The device starting with f0 in the BD Address column is the peripheral.

    -Josh

  • It looks like it is in the middle of negotiating DLE. Can you also try delaying the connection update to see if the timing affects the connection parameter update?

    Best wishes
  • We're still having trouble getting our central to delay its connection parameter update request; no matter how we configure the app/stack, it still sends one out early in the connection (the central is not a TI device). I may be setting up a simulated central on another 264X to aid in development of another feature; if I do end up doing that, I will try to test this issue against it.

    I have found a workaround for the LL connection parameter update issue, which is to explicitly disable the feature and let the L2CAP procedure handle the update. In the process of finding this workaround I've seen some very odd behavior from the 2642's stack, and I'm highly suspicious that there are some minor issues in the stack that I'm seeing symptoms of. I'll try to keep these descriptions succinct, but there's a lot to convey.

    To try some different settings I've started calling HCI_LE_ReadLocalSupportedFeaturesCmd() at init. When I receive the HCI_LE_READ_LOCAL_SUPPORTED_FEATURES event I execute the following code:


    case HCI_LE_READ_LOCAL_SUPPORTED_FEATURES:
    {
      if (status == SUCCESS)
      {
        uint8_t featSet[8];

        // Get current feature set from the received event
        // (bytes 1-8 of the returned data; byte 0 is the status)
        memcpy((void *)featSet, (void *)&pMsg->pReturnParam[1], 8);

        Log(3, "HCI_LE_READ_LOCAL_SUPPORTED_FEATURES succeeded\n");
        Log(3, "Current featSet[0]: 0x%x\n", featSet[0]);

    #if ENABLE_DLE
        // DLE is enabled by default, no need to log this
        SET_FEATURE_FLAG(featSet[0], LL_FEATURE_DATA_PACKET_LENGTH_EXTENSION);
    #else
        Log(3, "Disabling DLE!\n");
        CLR_FEATURE_FLAG(featSet[0], LL_FEATURE_DATA_PACKET_LENGTH_EXTENSION);
    #endif

    #if ENABLE_LL_CONN_PARAM_UPDATES
        Log(3, "Enabling LL connection parameter updates!\n");
        SET_FEATURE_FLAG(featSet[0], LL_FEATURE_CONN_PARAMS_REQ);
    #else
        // Clear bit 1 of byte 0 of the feature set to disable
        // LL Connection Parameter Updates
        Log(3, "Disabling LL connection parameter updates!\n");
        CLR_FEATURE_FLAG(featSet[0], LL_FEATURE_CONN_PARAMS_REQ);
    #endif

        Log(3, "Setting featSet[0] to: 0x%x\n", featSet[0]);
        HCI_EXT_SetLocalSupportedFeaturesCmd(featSet);
      }
      else
      {
        Log(3, "StackMsg: HCI_LE_READ_LOCAL_SUPPORTED_FEATURES failed: %d\n", status);
      }
      break;
    }

    The default featSet[0] I get from the stack only has the LL_FEATURE_ENCRYPTION and LL_FEATURE_DATA_PACKET_LENGTH_EXTENSION bits set (0x21). However, if I only read the feature set and make no calls to HCI_EXT_SetLocalSupportedFeaturesCmd(), then the LL connection parameter update procedure still gets used (perhaps because the central asks to use it in the feature exchange at the beginning of the connection) and the failure noted in my original post occurs.

    Is this the expected behavior? Is this feature only truly disabled when you explicitly write the default feature set back to the stack with a HCI_EXT_SetLocalSupportedFeaturesCmd() call? When I do explicitly disable the LL parameter updates everything works great: the connection parameters are updated via L2CAP, and the LL_LENGTH_REQs go through without issue. It's definitely a bit confusing that writing back the same value I read changes the stack's behavior. Maybe the stack still honors requests from centrals to use the procedure unless the app explicitly disables it?

    So that's all well and good, but things get weird when I set my switches to ENABLE_DLE = 0 and ENABLE_LL_CONN_PARAM_UPDATES = 1. The 2642 responds to the central's LL_CONNECTION_PARAM_REQ with LL_UNKNOWN_RSP. Unless I'm missing something, or this is a forbidden combination of features, this seems like a bug? I don't think we'll ever need this exact configuration in my application, but I thought I'd pass along the information.

  • Still interested in some input on my post from Feb 16. I've found an acceptable workaround, but I'm still a bit confused about the default behavior of the stack with respect to LL connection parameter updates, as well as the strange behavior (potential stack bug?) I mentioned in the last paragraph.