
CC1312R: Sensor send message algorithm

Part Number: CC1312R

Hello,

I'm implementing an application based on the TI 15.4-Stack examples "collector" and "sensor". The MCUs are CC1312, SDK 6.41.00.17.

Mode: beaconed; BO: 10 (~4.9 s); SO: 2 (~19 ms RX); Freq: 868 MHz; Regulation: ETSI

The application uses beaconed mode and the devices are battery powered. The idea is that the collector coordinates when a particular sensor sends data packets: the collector fills the beacon payload, and from its contents a sensor knows when it is expected to send a message. The message should be delivered within the same beacon RX time portion of the collector.

I have two types of messages from the sensors: tracking and alarms. With tracking all is good, since only one device per beacon is sending. It is another story with the alarm-type messages. Every second beacon announces that any of the sensors can send an alarm-type message if it needs to; there may be zero or several devices with alarm messages pending. My aim is to let sensors send data with CSMA backoffs and retries disabled and without waiting for an ACK; a sensor will know whether its message was delivered from the next beacon's contents. If several sensors have alarm messages pending, only one will be able to deliver, since the others will detect that the channel is occupied. In that case I expect the message to be dropped and reported as not delivered.

Most of the time the application behaves as described. But there are cases when two sensors try to send an alarm within the same beacon: one delivers successfully, while the other also delivers successfully, but in the next beacon or even two beacons later. This behaviour interferes with the tracking messages and is not welcome.

From this arise two questions:

  1. Is there a way in the collector application to know exactly when a beacon has been sent, or when the beacon RX part has finished? I need to know when it is safe to set the next beacon payload.
  2. The sensor has this configuration and these message txOptions:

/* Number of max data retries */
#define CONFIG_MAX_RETRIES 0

/*! MAC MAX CSMA Backoffs */
#define CONFIG_MAC_MAX_CSMA_BACKOFFS 0

txOptions.ack = false;
txOptions.indirect = true;
txOptions.pendingBit = false;
txOptions.noRetransmits = true;
txOptions.noConfirm = false;
txOptions.useAltBE = false;
txOptions.usePowerAndChannel = false;
txOptions.useGreenPower = false;

Can you confirm that the message should be dropped if LBT detected that the channel is not accessible? If yes, can you point out what could cause such a message to be delivered several beacons later (there are no other messages in the TX queue)?

  • Hi Zilvinas,

    I have assigned this thread to a colleague. They will get back to you soon. 

    Regards,
    Sid

  • Hi Zilvinas,

    To your questions:

    1.) As far as I know, there is no indication in the collector of when a beacon has started or stopped.

    2.) There is no 100 percent guarantee the message is deleted, as queued-up messages are still possible. A possible solution could be to increase the MAC superframe order to give more airtime and ensure all alarm messages can be sent within one frame. This is not what you want, but it may be better than receiving the message with the next beacon.

    However, a further possibility is reducing the backoff exponents to ensure that a collision will happen and the messages will not be transmitted.

    I will do some tests and measurements on how to do this, but we cannot guarantee that frames are deleted, as it is not ensured that they are sent in parallel.

  • Hi Zilvinas,

    I did some measurements to test my suggestion.

    This is how the communication looks in our standard sensor/collector example with Beacon Order set to 8 (4.8 s) and the report interval set to 5 seconds:

    I disabled acknowledgment and retransmission and defined

    CONFIG_MAC_MAX_CSMA_BACKOFFS 0 and CONFIG_MAX_RETRIES 0

    I also set the minimum backoff exponent (MinBE) to 1 and the maximum (MaxBE) to 2.

    After that, most of the messages were dropped and only one message was transmitted within the beacon interval.

    This should also apply to your application. 

    Regards,

    Alex

  • Hi Alex, 

    thank you for the responses.

    Increasing the superframe order doesn't work for me, because the power consumption would be too high for a battery-powered coordinator.

    Thank you for the tests. I will apply this strategy to my application and confirm whether it solved my problem.

    Žilvinas

  • Hi Zilvinas,

    Did you manage to work on your issue?

    Regards,

    Alex

  • Hi, Alex,

    I did testing on my own and here are the results. First I updated the application for better observability of the test results. Every second beacon now carries a message telling all joined sensors to send an alarm message, while the other beacons don't contain any request from the sensors and should not trigger any message.

    Test #1: 868 MHz ETSI, BO = 10, SO = 3 (RxOn ~38 ms), MinBE = 1, MaxBE = 2, joined sensors = 3

    The sniffer data shows the application behaving as expected. Every second beacon, one or two sensors transmit an alarm message. The silent beacons are left for the tracking messages, which are not currently implemented. But this stack configuration is not suitable because of the high current consumption.

    Test #2: 868 MHz ETSI, BO = 10, SO = 2 (RxOn ~19 ms), MinBE = 1, MaxBE = 2, joined sensors = 3

    The second test has SO = 2, and from the sniffer data we can observe that only a few beacons are silent. This is not suitable.

    Test #3: 913 MHz FCC, BO = 10, SO = 2 (RxOn ~19 ms), MinBE = 1, MaxBE = 2, joined sensors = 3

    Here the application behaves as expected, with the desired RX airtime.

    I ran this test because I think the LBT algorithm is what makes the application behave as in test #2, putting messages up for retransmission when the CCA stage detects an occupied channel.

    So my question is: how is the LBT algorithm implemented, and how does it relate to the CONFIG_MAC_MAX_CSMA_BACKOFFS parameter? If you can, please point me to a thread or a link to a document.

    Žilvinas

  • Hi Zilvinas,

    We use LBT; our algorithm is compliant with ETSI EN 300 220-1 polite spectrum access.

    However, how do you ensure that the application fills the queue only on every second beacon and not in between?

    With your setup of 2 beacons per report, the issue might be that the devices are not synchronized and are not always writing to the queue in the same beacon interval.

    Attached is a measurement I did with your setup (Standard: ETSI / BO: 5, SO: 2 / report interval: 10 s).

    When they write to the queue in the same interval, you can clearly see that data is dropped.

    You can also see the CCA reject counter increasing while sending.

    As soon as I unplug one device and plug it in again, they might not be writing to the TX queue within the same interval, which causes them to send in different beacon intervals; all messages then arrive and no dropping happens anymore.

    If you want every message to be sent within the same beacon interval (or dropped), you will at least have to consider some kind of synchronization between the device applications so that they write to the TX queue within the same interval.

    Regards,

    Alex

  • Hi Alex,

    regarding your question about the timing of filling the transmit queue: the sensor application fills the TX queue upon receiving a beacon, which naturally provides synchronization. However, I made a setup where I capture the signals of 3 sensor devices.

    The configuration of the application is the same:

    The captured signals are:

    Rx - the device has the radio RX on

    TX - the device is transmitting

    SetMsg - the moment when the MAC is given the message request (ApiMac_mcpsDataReq is called)

    Next are images of the sniffer-captured data and of the same moment in the captured device electrical signals.

    Here is an example of the expected behavior. The highlighted beacon in the sniffer log is what we can see in the electrical-signals capture screenshot. The RX traces of all 3 devices show the beacon being received; the SetMsg signal is the moment when the MAC is given the message request (ApiMac_mcpsDataReq is called). Device 0x0002 does not have its SetMsg signal captured, but it does the same as 0x0001 and 0x0003. We can see that all three devices perform CCA and only 0x0003 is able to transmit. The DataCnf callback returned statuses for 0x0001 and 0x0002 within the same beacon, with codes indicating ApiMac_status_channelAccessFailure = 0xE1. This example is what I expect.

     

    The other example is the undesired behavior.

    Here we can see the beacon being received and all sensor devices immediately putting a message into the TX queue. Device 0x0001 is able to deliver its message. Device 0x0002 detects during the CCA stage that the channel is occupied and returns the status ApiMac_status_channelAccessFailure = 0xE1. However, 0x0003 clearly puts its message into the TX queue in time but does not even attempt CCA. On the next beacon, 0x0003 successfully delivers its message.

    Could you explain a little about the LBT mechanism according to EN 300 220-1 polite spectrum access? If I understand correctly, after a message has been put into the TX queue, the MAC stack waits a randomly generated time before the CCA stage. If the channel is detected as occupied, the event is dropped, because I have CONFIG_MAC_MAX_CSMA_BACKOFFS 0 and CONFIG_MAX_RETRIES 0. If that is true, what is the random-time formula based on the backoff exponents? Can it be that the randomized wait time is longer than the coordinator's RX airtime (which is ~19 ms), so that the message is set to be delivered in the next beacon?

    Žilvinas

  • Hi Zilvinas,

    sorry for the late response.

    In our stack, LBT is implemented as follows:

    - the channel is checked for up to 5 ms to see whether it is idle before transmitting a packet

    - if the channel is not idle within that window, 5 ms + a random time is waited before the next listen

    So this should not cause the issue. Did you measure how long it takes from receiving the beacon to writing to the TX queue? This might cause a delay too.

    A different approach that might work here is using a timer to delete messages from the TX queue. Did you already consider this to solve your issue?

    Regards, 
    Alex

  • Hi Alex,

    thank you for response. 

    Did you measure how long it takes from receiving the beacon to writing to the TX queue? This might cause a delay too.

    Consistently ~1ms. 

    - if the channel is not idle within that window, 5 ms + a random time is waited before the next listen

    I assume that if the channel is occupied on the first try, there is no more listening, because I have CONFIG_MAC_MAX_CSMA_BACKOFFS 0. If that is true, then the random time is never involved in my case. Could you clarify for me the sequence of transmitting a data packet from the moment it is put into the TX queue? I've lost track of how CONFIG_MAC_MAX_CSMA_BACKOFFS, CONFIG_MAX_RETRIES, CONFIG_MIN_BE and CONFIG_MAX_BE are involved in this sequence.

    A different approach that might work here is using a timer to delete messages from the TX queue. Did you already consider this to solve your issue?

    Thank you for the suggestion. I have this approach in my pocket, but at the moment it feels more like a workaround. I want to understand the root of the problem.

    Žilvinas

  • Hi Zilvinas,

    CONFIG_MAC_MAX_CSMA_BACKOFFS is still used and is the number of retries performed after LBT has failed.
    CONFIG_MAX_RETRIES defines how often the MAC resends the frame after the access procedure has failed.

    CONFIG_MIN_BE and CONFIG_MAX_BE are not used in LBT.

    These functions are replaced by listening for 5 seconds on the first try

    and listening for 5 s + a random number (0-5 ms) after that.

    I still think there might be an issue with your application taking too long to write the message to the queue, and there might be a timing issue.

    I adapted the example so that a message is put into the TX queue as soon as I press a button.

    However, as soon as I press all 3 buttons, just one message is sent and the other messages are dropped.

    Can you try writing to the queue after the active period of the beacon interval and see what happens within the next beacon interval?

    My measurements show: 

    Testing all 3 buttons:

    (5) I sent data with just device 1

    (8) I sent data with just device 2

    (11) I sent data with just device 3

    (14) pressing all three buttons, just one packet is sent

    I could repeat this multiple times at my desk, and the messages were dropped.

    I think in your case there might be a slight delay in the chain receiving beacon --> using content --> writing to the TX queue --> waiting for access, which makes it hard to handle everything reliably in what is, in your case, also a very short period.

    Thus I would recommend you send the message after the active period and see whether this fixes your issue.

    Regards,

    Alex

  • Alex,

    thank you for clarification. 

    These functions are replaced by listening for 5 seconds on first try

    Possibly you meant 5 ms of LBT on the first try. I measured the radio TX and RX durations. I used the "Debugging RF output" instructions to see TX and RX timing on GPIOs. I used LAUNCHXL-CC1312R1 and LP-CC1312R7 boards and the examples "sensor_CC1312R1_LAUNCHXL_tirtos7_ticlang" and "collector_LP_CC1312R7_tirtos7_ticlang". The SimpleLink SDK is 6.41.0.17 for both example projects. Here are the results.

    The LBT duration is 240 µs and I don't see 5 ms anywhere. Next I tried the same examples, but from SDK 5.20.00.52, and it's not the same. First, with superframe order SO = 2 I couldn't even make them join; I increased SO to 3, they joined, and I measured LBT.

    As we can see, here the LBT is exactly 5 ms. So I believe we are trying to solve the issue while working on different SDKs.

    Can you try writing to the queue after the active period of the beacon interval and see what happens within the next beacon interval?

    I did this. I delayed putting the message into the TX queue by 1 second, and the sniffer log showed the message being transmitted on the next beacon. However, the issue persists: one of the 3 sensors will try to transmit its message on the beacon after next. Overall the application acts the same, just with the transmission shifted forward by 1 beacon period. Just a reminder that if I use the 913 MHz frequency under FCC regulation (meaning no LBT), this never happens, so I'm pretty sure that the timing of putting the message into the TX queue is not the cause.

    Alex, could you check whether we are working on the same SDK version?

  • Hi Zilvinas,

    This is a good idea. 

    I am using the latest SDK (7.10.00.98).

    Using it, I do not see your issue, at least not with my configuration.

    When sending messages from 3 devices in one period, one message is sent and all the others are dropped.

    Regards, 

    Alex

  • Hi Zilvinas,

    is your issue solved?

    Regards,

    Alex

  • Hello Alex,

    I apologize for not being able to address the issue recently.

    I updated my application to the latest SDK (7.10.00.98).

    And it didn't solve my issue. Here are the captured sniffer images. Every 4th beacon requests alarm messages from the sensors.

    From my long-term tests, I can tell that the time from putting a message into the TX queue until the message is sent can be 1 or 2 beacons, as we can see from the sniffer log.

    Regarding how LBT works:

    These functions are replaced by listening for 5 seconds on first try

    and listen for 5s + a random number (0-5 ms) .

    this does not seem to be correct with SDK 7.10.00.98.

    From the images above I can tell that the listening part is always 240 µs, but there is a random part on the first try, and that is a wait period before listening. From my observations, the period from receiving the beacon until listening takes from 2.4 ms to 7 ms. So it is still unclear how the LBT algorithm is implemented. I assume that the method by which I'm measuring the timing is correct; that method is described in "Debugging RF output". Could you check whether you get a 5 ms listening part using the latest SDK?

  • Hi Alex,

    Since I will no longer be working on this project, I would like to introduce my colleague Mindaugas, who will continue to communicate with you. Thank you for your support regarding this issue.

    Žilvinas