
CC1312R: TI 15.4 Stack Data Send – LBT Algorithm Issue

Part Number: CC1312R
Other Parts Discussed in Thread: SYSCONFIG


Hello,

I'm starting a new thread regarding an issue with the LBT algorithm during message transmission in the TI 15.4 Stack.
I'm working on an application based on the collector and sensor examples provided with the TI 15.4 Stack. The hardware is based on the CC1312, using SDK version 8.30.01.01.

TI 15.4 Stack Configuration:
* Mode: Beacon-enabled
* Frequency Band: 863–869 MHz
* Regulatory Type: ETSI
* PHY Type: 200 kbps, 2-GFSK
* MAC Beacon Order (BO): 10 (~4.9 s)
* MAC Superframe Order (SO): 3 (~38 ms RX window)
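
For reference, these settings end up as defines in the generated ti_154stack_config.h, roughly like the sketch below. The define names follow the SDK examples as I understand them and may differ between SDK versions; the PHY ID name in particular is an assumption on my part.

/* Illustrative excerpt of ti_154stack_config.h as generated by SysConfig */
#define CONFIG_PHY_ID               APIMAC_200KBPS_868MHZ_PHY_133 /* 200 kbps 2-GFSK, 863-869 MHz (assumed name) */
#define CONFIG_MAC_BEACON_ORDER     10  /* BO = 10: 960 * 2^10 symbols at 200 ksym/s ≈ 4.9 s beacon interval */
#define CONFIG_MAC_SUPERFRAME_ORDER 3   /* SO = 3:  960 * 2^3 symbols ≈ 38 ms active RX window */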

The application operates in beacon mode. The test setup consists of a collector and two sensor devices. There are two types of beacons:
* General beacon: Not intended for any specific sensor - sensors do not respond in normal conditions.
* Dedicated beacon: Addressed to a specific sensor - only the intended sensor responds.

Under normal conditions, the system works as expected with these beacon types. However, issues arise when the collector sends data packets to the sensors using indirect (AUTO_REQUEST_ON) data transfer. In this case, during each beacon interval:
* The sensor that receives a dedicated beacon sends a response.
* Another sensor, for which data is pending, sends a data request to retrieve the data.
Problem: Occasionally, RF packet collisions occur, leading to lost messages, despite all transmissions theoretically fitting within the superframe duration.
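
For context, the collector queues such a pending frame for indirect delivery roughly as in the sketch below. This is based on the SDK's ApiMac layer as I understand it; field names may differ slightly between SDK versions, and the msduHandle value is just a placeholder.

#include <stdint.h>
#include <stdbool.h>
#include <string.h>
#include "api_mac.h"   /* TI 15.4 Stack MAC API used by the collector/sensor examples */

/* Sketch: queue a data frame for indirect delivery. The frame sits in the
 * collector's indirect queue until the target sensor pulls it with a DATA_RQ
 * (AUTO_REQUEST_ON) after seeing its address in the beacon's pending list. */
static ApiMac_status_t queueIndirectData(uint16_t panId, uint16_t shortAddr,
                                         uint8_t *payload, uint16_t len)
{
    ApiMac_mcpsDataReq_t dataReq;
    memset(&dataReq, 0, sizeof(dataReq));

    dataReq.dstAddr.addrMode = ApiMac_addrType_short;
    dataReq.dstAddr.addr.shortAddr = shortAddr;
    dataReq.dstPanId = panId;
    dataReq.srcAddrMode = ApiMac_addrType_short;
    dataReq.msduHandle = 1;            /* placeholder; normally a unique handle per request */
    dataReq.txOptions.ack = true;      /* request a MAC ACK from the sensor */
    dataReq.txOptions.indirect = true; /* hold in the indirect queue until the DATA_RQ arrives */
    dataReq.msdu.p = payload;
    dataReq.msdu.len = len;

    return ApiMac_mcpsDataReq(&dataReq);
}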

Debugging Setup:
Below is a summary of the test cases and findings, based on sniffer logs and oscilloscope traces. To capture the timing, I used RF debug output signals to observe TX/RX activity on GPIOs (Debugging RF output).
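
The pins were configured roughly as in the sketch below, by routing RF core observables to spare DIOs. The DIO numbers are placeholders, and which observable corresponds to TX (PA enable) versus RX (LNA enable) should be checked against the "Debugging RF Output" guide for the PHY in use.

#include <ti/devices/DeviceFamily.h>
#include DeviceFamily_constructPath(driverlib/ioc.h)

/* Route RF core observables to GPIOs so TX/RX activity can be seen on a scope.
 * The IOID numbers are placeholders for whichever pins are free on the board. */
static void rfDebugPinsInit(void)
{
    IOCPortConfigureSet(IOID_27, IOC_PORT_RFC_GPO0, IOC_STD_OUTPUT); /* e.g. PA enable / TX active */
    IOCPortConfigureSet(IOID_28, IOC_PORT_RFC_GPO1, IOC_STD_OUTPUT); /* e.g. LNA enable / RX active */
}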

Test Cases:
Case 1 – Expected behavior:
* Collector sends a beacon dedicated to sensor_2 (data pending for sensor_1).
* Sensor_2 responds to its beacon.
* Collector ACKs the response.
* Sensor_1 sends DATA_RQ, receives ACK and data from the collector, and sends final ACK.

Case 2 – Expected behavior:
* Collector sends a general beacon (no response from sensor_2).
* Sensor_1 sends DATA_RQ, receives ACK and data from collector, and sends ACK.

Case 3 – Unexpected behavior:
* Collector sends a beacon dedicated to sensor_2 (data pending for sensor_1).
* Sensor_1 sends DATA_RQ first.
* The collector's ACK and sensor_2's response to the beacon are transmitted at the same time.
* Result: RF collision; the packets are lost.

Case 4 – Unexpected behavior:
* Collector sends a beacon dedicated to sensor_2 (data pending for sensor_1).
* Sensor_2 responds to the beacon and receives ACK.
* Sensor_1 sends DATA_RQ and receives ACK, but the data packet is never transmitted by the collector.

Summary:
In cases 3 and 4, overlapping transmissions or lost packets are observed, which suggests that multiple devices are transmitting simultaneously. We experimented with adjusting parameters such as macMinBE, macMaxBE, and CONFIG_MAC_MAX_CSMA_BACKOFFS, but the problem persists.
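
For reference, those variations were made through the corresponding defines in ti_154stack_config.h (or the matching SysConfig fields). The values below are only an illustration of that kind of change, not the exact values we tried or recommended settings.

/* Illustrative values only -- defaults and exact behavior depend on the SDK version */
#define CONFIG_MIN_BE                3  /* macMinBE: minimum CSMA backoff exponent */
#define CONFIG_MAX_BE                5  /* macMaxBE: maximum CSMA backoff exponent */
#define CONFIG_MAC_MAX_CSMA_BACKOFFS 4  /* CSMA attempts before the transmission is declared a failure */
#define CONFIG_MAX_RETRIES           3  /* MAC retransmissions when no ACK is received */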

Main Questions:
* What could be causing these packet collisions or losses, and how exactly does the LBT mechanism operate in such beacon-enabled data transmission scenarios?
* Is it possible to avoid these conflicts, and which parameters should be fine-tuned to improve reliability?

Any insights, guidance, or recommendations are highly appreciated.

  • Hi Simonas,

    Thank you for the detailed issue description. I will discuss with the software designers and get back to you tomorrow.

    One question: When you say the sensor responds to a beacon, is it responding with a data packet or ACK packet?

    Cheers,

    Marie H

  • Hi Marie,

    Thank you for the follow-up.

    To clarify: when I mention that the sensor "responds to a beacon," I mean that it typically transmits a data packet in response to a dedicated beacon addressed to it, and the collector acknowledges the data packet with an ACK.

  • Hi Simonas,

    I talked to the SW R&D team.

    If all of your devices have the same backoff timing settings, they might perform their channel assessment at exactly the same time, so neither device is aware that the other is about to send, and both end up transmitting at exactly the same moment.
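
    To illustrate the idea, here is a minimal sketch of a generic IEEE 802.15.4-style random backoff (not the exact implementation in our stack): each device waits a random number of unit backoff periods before it listens, so two devices with identical settings that happen to draw the same value will check the channel at the same instant, both see it idle, and still collide.

    #include <stdint.h>
    #include <stdlib.h>

    /* Generic sketch only: backoff drawn from [0, 2^BE - 1] unit backoff periods.
     * Identical draws on two devices mean simultaneous channel checks and a
     * possible collision, as described above. */
    static uint32_t csmaBackoffPeriods(uint8_t be)
    {
        return (uint32_t)(rand() % (1u << be));
    }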

    Also, for ACK packets the device will not perform LBT.

    Hope this helps you adjust your settings to avoid these collisions!

    Cheers,

    Marie H

  • Hi Marie,

    Thank you for your response.

    Regarding the point about ACK packets, I just wanted to confirm: is it correct that Listen-Before-Talk (LBT) is not performed before sending ACK packets? If so, then in Case 3, while one device is waiting for an ACK after sending its data packet, another device begins its own transmission; as a result, the collector's ACK and the second device's data packet are transmitted simultaneously and both are lost. Is that the expected behavior?

    Additionally, could you please explain the LBT algorithm in more detail? Specifically, how are the following parameters used in the algorithm: CONFIG_MAC_MAX_CSMA_BACKOFFS, CONFIG_MAX_RETRIES, CONFIG_MIN_BE, CONFIG_MAX_BE? In a previous thread it was mentioned that the LBT listening duration is at least 5 ms, but I didn't observe this clearly in the oscilloscope traces for Case 3. Could you help interpret what's happening in that case?

  • Hi Simonas,

    Yes. In Case 3 I would therefore increase the interval for the second device to avoid this collision.

    The details of the LBT algorithm are defined in the IEEE spec, which is proprietary. We have some additional information if you search here on the forum, but to get the full picture you would need to buy the spec.

    Cheers,

    Marie H

  • Hi Marie,


    Thank you for the clarification. Regarding 'increasing the interval', which specific stack parameter are you referring to? Additionally, it's still unclear whether the CONFIG_MIN_BE and CONFIG_MAX_BE settings have any effect in my case.

  • Hi Simonas,

    If you want to delay the data packet you can increase the Reporting Interval (SysConfig -> TI 15.4-Stack -> Network -> Application).

    CONFIG_MAX_BE and CONFIG_MIN_BE are not used for LBT (only for CSMA). I don't think it's possible to configure the listening period in LBT.
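
    In the generated configuration that interval shows up as a define along these lines (name and units from the sensor/collector examples as far as I recall, so please verify against your SDK version; the value shown is only illustrative):

    /* ti_154stack_config.h -- illustrative value */
    #define CONFIG_REPORTING_INTERVAL 10000 /* ms between sensor data reports; a larger value spaces out the data traffic */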

    Cheers,

    Marie H