We have a custom board running firmware based on the BLE5 simple_peripheral example (CCS v10, CC2640R2 SDK v4.30.00.08). The application runs well except when transferring large amounts of data to a client; by "large" I mean over a megabyte (> 80,000 packets). The application uses notifications on a characteristic to repeatedly send chunks of data to the client (an Android 8.1.0 app on a tablet), which reassembles the chunks into the original blob. A transfer will succeed for anywhere from 20,000 to 60,000 packets, but at an arbitrary point it goes through this series of calls:
GATTServApp_ProcessCharCfg
gattServApp_SendNotiInd
GATT_Notification
icall_directAPI
ICall_waitMatch
and ICall_waitMatch returns ICALL_ERRNO_TIMEOUT, which drops execution into ICall_abort.
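For context, each chunk goes out through the profile the same way the stock simple_gatt_profile does it: the application updates the characteristic value, and GATTServApp_ProcessCharCfg generates the notification for the subscribed client. A simplified sketch of that path follows; dataCharValue, dataCharConfig, dataProfileAttrTbl, and dataProfile_ReadAttrCB are placeholders standing in for our profile's actual tables and callback:

#include <string.h>
#include <icall_ble_api.h>   // GATTServApp_ProcessCharCfg, gattCharCfg_t, bStatus_t

// Placeholders for our profile's actual storage (names are illustrative):
#define DATA_CHUNK_LEN 20
static uint8_t dataCharValue[DATA_CHUNK_LEN];   // characteristic value buffer
static gattCharCfg_t *dataCharConfig;           // CCC table from GATTServApp_InitCharCfg()
extern gattAttribute_t dataProfileAttrTbl[];    // profile attribute table (defined elsewhere)
extern uint16_t dataProfileNumAttrs;            // number of entries in the table
extern bStatus_t dataProfile_ReadAttrCB(uint16_t connHandle, gattAttribute_t *pAttr,
                                        uint8_t *pValue, uint16_t *pLen,
                                        uint16_t offset, uint16_t maxLen,
                                        uint8_t method);

bStatus_t DataProfile_NotifyChunk(uint8_t *pChunk, uint16_t len)
{
  memcpy(dataCharValue, pChunk, len);

  // First call in the failing sequence above: walks the client characteristic
  // configuration table and calls GATT_Notification for each subscriber.
  return GATTServApp_ProcessCharCfg(dataCharConfig, dataCharValue, FALSE,
                                    dataProfileAttrTbl, dataProfileNumAttrs,
                                    INVALID_TASK_ID, dataProfile_ReadAttrCB);
}

In the failing case this call never returns normally: it goes down through GATT_Notification into icall_directAPI, and ICall_waitMatch times out as described above.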
The peripheral configuration includes these parameters:
// Minimum connection interval (units of 1.25ms, 80=100ms) for parameter update request
#define DEFAULT_DESIRED_MIN_CONN_INTERVAL     80

// Maximum connection interval (units of 1.25ms, 104=130ms) for parameter update request
#define DEFAULT_DESIRED_MAX_CONN_INTERVAL     104

// Slave latency to use for parameter update request
#define DEFAULT_DESIRED_SLAVE_LATENCY         0

// Supervision timeout value (units of 10ms, 300=3s) for parameter update request
#define DEFAULT_DESIRED_CONN_TIMEOUT          300
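If it helps, these are handed to the GAP role at init the same way the stock example does it. This is a sketch, assuming the GAPRole-based initialization that ships with the CC2640R2 BLE5 simple_peripheral:

#include <icall_ble_api.h>
#include "peripheral.h"   // GAPRole API in the CC2640R2 SDK

static void setDesiredConnParams(void)
{
  uint16_t desiredMinInterval  = DEFAULT_DESIRED_MIN_CONN_INTERVAL;
  uint16_t desiredMaxInterval  = DEFAULT_DESIRED_MAX_CONN_INTERVAL;
  uint16_t desiredSlaveLatency = DEFAULT_DESIRED_SLAVE_LATENCY;
  uint16_t desiredConnTimeout  = DEFAULT_DESIRED_CONN_TIMEOUT;

  // Desired parameters for the connection parameter update request
  GAPRole_SetParameter(GAPROLE_MIN_CONN_INTERVAL,  sizeof(uint16_t), &desiredMinInterval);
  GAPRole_SetParameter(GAPROLE_MAX_CONN_INTERVAL,  sizeof(uint16_t), &desiredMaxInterval);
  GAPRole_SetParameter(GAPROLE_SLAVE_LATENCY,      sizeof(uint16_t), &desiredSlaveLatency);
  GAPRole_SetParameter(GAPROLE_TIMEOUT_MULTIPLIER, sizeof(uint16_t), &desiredConnTimeout);
}

For reference, with zero slave latency these values let the link ride out roughly 3000 ms / 130 ms ≈ 23 missed connection events at the maximum interval before the supervision timeout fires.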
I tried increasing DEFAULT_DESIRED_MAX_CONN_INTERVAL, but transfer throughput got significantly worse.
I am working on getting a packet sniffer set up to see if it sheds any light on where things are breaking down, but does this timeout point to any particular area to focus on?