CC2652R7: sending multiple packets within one connection interval

Part Number: CC2652R7


Hi

I have two devices running my own project, which is based on the Multi_role example.
One device acts as a master and the other as a slave. On both devices, INIT_PHYPARAM_MIN_CONN_INT and INIT_PHYPARAM_MAX_CONN_INT are set to 80 (i.e. 80 * 1.25 ms = 100 ms).

DEFAULT_INIT_PHY = INIT_PHY_1M
MAX_NUM_PDU = 5
MAX_PDU_SIZE = 255

The master connects to the other device and starts sending data to the characteristic using GATT_WriteCharValue. I monitor successful delivery via the ATT_WRITE_RSP event in the GATT message (see the sketch at the end of this post). Judging by the execution time, about 165 ms (give or take) passes between sending the write and receiving the acknowledgment.

It looks like the data packet is sent in one connection interval and the acknowledgment comes in another.
From what I have read, BLE should in theory be able to send multiple packets within one connection interval, which in turn increases throughput.
What do I need to do to see this speed increase?
If I decrease the connection interval, the throughput increases. If I use writes without response, the throughput also increases.
What I am interested in is how to increase throughput by sending multiple packets within one interval (without changing the interval and while keeping the confirmation of a successful write).
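
For reference, the write and the round-trip measurement on the master side look roughly like this (a simplified sketch, not my exact code: selfEntity is the application's ICall entity as in the SDK examples, and the timing variable name is just illustrative):

    #include <string.h>
    #include <stdint.h>
    #include <ti/sysbios/knl/Clock.h>
    #include <icall.h>
    #include <icall_ble_api.h>

    extern ICall_EntityID selfEntity;  // application's ICall entity (from the app file)

    static uint32_t writeStartTicks;   // illustrative: start of the round-trip measurement

    // Queue one GATT write request (with response) and start the timer
    static bStatus_t sendTestWrite(uint16_t connHandle, uint16_t charHandle,
                                   uint8_t *pData, uint16_t len)
    {
      attWriteReq_t req;

      req.pValue = GATT_bm_alloc(connHandle, ATT_WRITE_REQ, len, NULL);
      if (req.pValue == NULL)
      {
        return bleMemAllocError;
      }

      req.handle = charHandle;
      req.len    = len;
      req.sig    = FALSE;
      req.cmd    = FALSE;              // FALSE = write request, expects ATT_WRITE_RSP
      memcpy(req.pValue, pData, len);

      writeStartTicks = Clock_getTicks();

      bStatus_t status = GATT_WriteCharValue(connHandle, &req, selfEntity);
      if (status != SUCCESS)
      {
        GATT_bm_free((gattMsg_t *)&req, ATT_WRITE_REQ);
      }
      return status;
    }

    // In the application's GATT message handler
    static void processGATTMsg(gattMsgEvent_t *pMsg)
    {
      if (pMsg->method == ATT_WRITE_RSP)
      {
        // Clock tick period is 10 us with the default TI-RTOS configuration
        uint32_t deltaTicks = Clock_getTicks() - writeStartTicks;
        (void)deltaTicks;              // ~165 ms measured here, i.e. roughly 2 connection intervals
      }
      GATT_bm_free(&pMsg->msg, pMsg->method);
    }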

  • Hello Nick,

    Thanks for reaching out. I think you have covered the possibilities here. Are you able to confirm that your write packet is actually the size you intend (approximately 251 bytes without the header)? I would also suggest taking a look at the GATT_WriteLongCharValue() function, which will do the fragmentation of the data packet for you, within the maximum supported PDU size of 255 bytes (with Data Length Extension). You could also use the GATT_WriteNoRsp() function as you mentioned, and after all the data has been transmitted, provide some sort of acknowledgement to the central through the use of a notification, as sketched below.
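
    As a rough illustration (a sketch only; it follows the attWriteReq_t pattern from the SDK examples, and the characteristic handle and length would come from your own profile), a write-without-response could be queued like this:

        #include <string.h>
        #include <icall_ble_api.h>

        // Sketch: queue a write-without-response (ATT Write Command).
        // Several of these can be transmitted in the same connection event,
        // whereas write requests are strictly one request / one response.
        static bStatus_t sendWriteNoRsp(uint16_t connHandle, uint16_t charHandle,
                                        uint8_t *pData, uint16_t len)
        {
          attWriteReq_t req;

          req.pValue = GATT_bm_alloc(connHandle, ATT_WRITE_REQ, len, NULL);
          if (req.pValue == NULL)
          {
            return bleMemAllocError;
          }

          req.handle = charHandle;
          req.len    = len;
          req.sig    = FALSE;
          req.cmd    = TRUE;           // TRUE = write command, no ATT_WRITE_RSP expected
          memcpy(req.pValue, pData, len);

          bStatus_t status = GATT_WriteNoRsp(connHandle, &req);
          if (status != SUCCESS)
          {
            GATT_bm_free((gattMsg_t *)&req, ATT_WRITE_REQ);
          }
          return status;
        }

    The peripheral could then send a single notification once the whole buffer has been received, to serve as the acknowledgement mentioned above.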

    BR,

    David.

  • Unfortunately, I need the write with a subsequent response; the option without a response is not suitable for me. Using notifications as confirmation of a received packet seems to me like it would take even longer.
    Do I understand correctly that the exchange of packets between the two devices can be depicted as in image #1 (I reduced the packet size from 251 to 32 bytes, but the exchange still takes about 160-200 ms, i.e. roughly 2 connection intervals)? And is it impossible to achieve an exchange as in image #2 using only the provided stack, without additional operations (notifications or something similar)?

    Image #1

    Image #2

  • Hello Nick,

    Apologies for the delay.

    It is possible for the central to transmit several TX packets if the MD (More Data) bit is set; this is taken care of by the stack itself. You can take a look at section 4.5.6, Closing Connection Events, in the BLE 5.0 core spec. The MD bit in the header of the Data Channel PDU is used to indicate that the device has more data to send. If neither device has set the MD bit in its packets, the packet from the slave closes the connection event. If either or both of the devices have set the MD bit, the master may continue the connection event by sending another packet, and the slave should listen after sending its packet.

    BR,

    David.

  • Hi

    You say "this is taken care of by the stack itself".

    Is there any way I can track that this bit is being set? Is there any way I can configure the devices (both central and peripheral) to be sure that this bit will be set whenever possible? The documentation you provided says the following:

    The MD bit of the Header of the Data Physical Channel PDU is used to indicate
    that the device has more data to send. If neither device has set the MD bit in
    their packets, the packet from the Peripheral closes the connection event. If
    either or both of the devices have set the MD bit, the Central may continue the
    connection event by sending another packet, and the Peripheral should listen
    after sending its packet

    How do the devices decide that there is more data? I found a define for the MD bit in the source libraries, but it does not seem to be used anywhere.

  • Hello Nick,

    Apologies for the delay. Do you have a Bluetooth LE sniffer? The MD bit is set at the link layer level. This bit will be set based on the MTU packet size, the amount of data requested to be sent, and the time left to transmit during the connection interval. May I ask what throughput you are aiming for?
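
    In case it helps with the packet size part, a larger ATT MTU can be requested from the application side roughly like this (a sketch only; MAX_PDU_SIZE is the define from your configuration, selfEntity is the application's ICall entity, and the negotiated MTU will be the minimum of what both sides support):

        #include <icall.h>
        #include <icall_ble_api.h>

        extern ICall_EntityID selfEntity;   // application's ICall entity

        // Sketch: request a larger ATT MTU as the GATT client.
        static bStatus_t requestLargerMtu(uint16_t connHandle)
        {
          attExchangeMTUReq_t req;

          // The L2CAP header (4 bytes) is subtracted from the maximum PDU size,
          // e.g. 255 - 4 = 251 bytes of ATT MTU
          req.clientRxMTU = MAX_PDU_SIZE - L2CAP_HDR_SIZE;

          return GATT_ExchangeMTU(connHandle, &req, selfEntity);
        }

    This only matters if the application is not already exchanging the MTU after the connection is formed.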

    BR,

    David.

  • Hi

    I took two devices for exchanging packets. One device (master) sends a data packet to the characteristic, the other device (slave) simply receives it. On the master side, I measure the time between sending a message and its acknowledgment (the timer starts after calling GATT_WriteCharValue and stops when the ATT_WRITE_RSP event is received in the GATT message at the application level). From these measurements I get about 185 ms.

    In the master settings, the connection interval (min/max) is 80 (80 * 1.25 = 100 ms). So it looks like I send the data in one interval and receive the response during another (which is why the total usually does not exceed 2 * interval). The data I send is 32 bytes, but I also tried 250 bytes and the result is the same.

    I tried sniffing the data exchange, following the packets for the slave device. I see my data being sent from the master to the slave. After that, two Empty PDU packets follow (I do not understand why yet), and only after that, it seems, does the slave send the write confirmation to the master. Judging by the analyzer timestamps, the time delta is 100 ms. At the same time, the MD bit is not set to 1 in any of the packets. For some reason, the two measurements diverge.

    1. Why do the time measured by the analyzer and the time measured inside the application differ by almost a factor of two? Could this time be spent on internal operations inside the stack?

    2. Why is the MD bit not set?

    3. Why are there extra packets between the write and its confirmation?

    Example capture for a master --> slave packet:

  • Hi

    The strange difference between the time measured in the app and the time measured via the sniffer may be explained by the real-time OS and its tasks. I tried excluding some lower-priority tasks and measured the time in the app again. Now the time changes in steps: on the first transmission and acknowledgment I measured 190 ms, on the next iteration 183 ms, and so on down to 100 ms, after which it jumps back to about 190 ms and decreases again.

    If I put all the necessary tasks back into the code, the time becomes a stable 183 ms.

    But this still does not explain why the stack does not set the MD bit.