CC1120: 433MHz - Radio Channel Switching times

Part Number: CC1120

In our application, we need to constantly switch between listening on a broadcast channel and a dedicated channel. The duty-cycle period is 100 ms, so we spend 50 ms on each channel, but depending on the use case the application may spend more time on either channel. What I find is that with a 50% duty cycle we miss packets that we ought to be receiving, which is problematic.

Note that when we transmit, we may transmit on the broadcast channel or up to 3 other dedicated channels.
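
For reference, here is a minimal sketch of the alternating-listen scheme described above, assuming a hypothetical radio driver (radioStrobeIdle, radioSetChannel and radioStrobeRx are placeholders, not TI API names) and a 1 ms system tick:

    #include <stdint.h>
    #include <stdbool.h>

    /* Placeholder driver hooks; substitute your own CC1120 driver calls. */
    extern void radioStrobeIdle(void);
    extern void radioSetChannel(uint8_t ch);
    extern void radioStrobeRx(void);

    #define CHANNEL_BROADCAST 0u
    #define CHANNEL_DEDICATED 1u
    #define DWELL_MS          50u  /* 50 ms per channel -> 100 ms period */

    static uint8_t  currentChannel = CHANNEL_BROADCAST;
    static uint32_t dwellStartMs;

    /* Called from a 1 ms tick: alternate RX between the two channels. */
    void dutyCycleTick(uint32_t nowMs)
    {
        if ((nowMs - dwellStartMs) >= DWELL_MS) {
            currentChannel = (currentChannel == CHANNEL_BROADCAST)
                             ? CHANNEL_DEDICATED : CHANNEL_BROADCAST;
            radioStrobeIdle();               /* leave RX before retuning */
            radioSetChannel(currentChannel); /* retune (recalibrate?)    */
            radioStrobeRx();                 /* resume RX on new channel */
            dwellStartMs = nowMs;
        }
    }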

My questions are:

1. If we calibrate the radio each time we switch channels, how long does the calibration typically take?

2. If we use pre-calibrated values, how often do we need to recalibrate these two channels?

3. I've been looking at the swrc253e frequency scan example. From that example, it seems I could benefit from adapting the radioRxISR_timer state machine in radio_scan_drv.c to my application's CC1120 radio driver to alleviate the problem of missing packets.
3.1 Would this help?
3.2 If so, is it possible to use GDO0 only? My MCU only has GDO0 tied to an interrupt pin, i.e. I don't have another interrupt pin available to tie to GDO2.
 
  • 1) The timing can be found in www.ti.com/.../cc1120.pdf
    2) If you have the same VDD and temperature you should in theory not have to recalibrate, but you should do recalibrations from time to time. How often depends on your system: how likely is a change in temperature or other system parameters, and how critical is it if a packet is lost? (A sketch of caching per-channel calibration values follows this reply.)
    3.1) You wrote: "What I find is that with a 50% duty cycle we miss packets that we ought to be receiving, which is problematic." Could you describe the issue in more detail? It's not possible to state what would help and what would not without a detailed description of what you are seeing and why you expect it to work. For one, how do you know when to send on a given channel when you are hopping between two?
    3.2) The example is written to use both GDO0 and GDO2. I'm not sure how easy it is to rewrite the code to use just one line, since you need to know what caused the interrupt.
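    On point 2, a minimal sketch of that pre-calibration approach: calibrate each channel once with the SCAL strobe, cache the resulting FS_VCO2/FS_VCO4/FS_CHP values, and restore them when hopping. This assumes the CC112X_* register and strobe defines from cc112x_spi.h in the TI example code; the SPI helpers below are placeholders for your own driver.

        #include <stdint.h>
        #include "cc112x_spi.h" /* assumed: CC112X_* defines from the TI examples */

        /* Placeholder SPI helpers; substitute your own driver functions. */
        extern uint8_t cc112xReadReg(uint16_t addr);
        extern void    cc112xWriteReg(uint16_t addr, uint8_t value);
        extern void    cc112xStrobe(uint8_t strobe);
        extern void    cc112xSetFrequency(uint8_t ch); /* programs FREQ2..FREQ0 */

        #define NUM_CHANNELS 2u

        typedef struct {
            uint8_t fsVco2;
            uint8_t fsVco4;
            uint8_t fsChp;
        } calData_t;

        static calData_t calCache[NUM_CHANNELS];

        /* Run once per channel, and again whenever temperature/VDD may
         * have drifted enough to matter. */
        void calibrateChannel(uint8_t ch)
        {
            cc112xStrobe(CC112X_SIDLE);  /* calibration starts from IDLE */
            cc112xSetFrequency(ch);
            cc112xStrobe(CC112X_SCAL);
            /* ...wait for calibration to complete (poll MARCSTATE)...   */
            calCache[ch].fsVco2 = cc112xReadReg(CC112X_FS_VCO2);
            calCache[ch].fsVco4 = cc112xReadReg(CC112X_FS_VCO4);
            calCache[ch].fsChp  = cc112xReadReg(CC112X_FS_CHP);
        }

        /* Fast hop: restore cached results instead of recalibrating. */
        void hopToChannel(uint8_t ch)
        {
            cc112xStrobe(CC112X_SIDLE);
            cc112xSetFrequency(ch);
            cc112xWriteReg(CC112X_FS_VCO2, calCache[ch].fsVco2);
            cc112xWriteReg(CC112X_FS_VCO4, calCache[ch].fsVco4);
            cc112xWriteReg(CC112X_FS_CHP,  calCache[ch].fsChp);
            cc112xStrobe(CC112X_SRX);
        }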
  • Please see my responses below. I have also asked some further questions.

    1. Thanks for the timing link. The channel scan example sets the scan timer to run every 250 µs:

    #define RF_RX_SCAN_TIMER 8 /* set a timer tick to ~250us */

     So you are switching through the 50 channels every 250 µs with pre-calibrated values. How would this impact the performance of other "tasks" in a real application? For example, if I run my current channel-switch timer using pre-calibration, I find that the processor spends so much time servicing the 250 µs timer interrupt for the radio that I miss key presses on the keypad. How was the 250 µs scan time arrived at? (See the ISR sketch at the end of this post.)

    2. This is a metering product, so there is a high chance of temperature variation depending on the environment it is installed in. I will take this into account and add a regular recalibration. As for packet losses: missing broadcast packets is critical. Missed unicast packets result in retries from the sending node, but we want to avoid retries as far as possible to keep the network as quiet as possible overall.

    3.1 Ok, here's some more detail on our application. We are running a wireless network with meters and IHDs (in-home devices) paired to them. IHDs periodically request certain info from the meter they are paired to; this typically occurs every minute. When a new meter and IHD are installed into an existing network, the user needs to pair their IHD to their meter. A pairing consists of a transaction of 3 packets (request, response and ACK). These have retries built in, and the entire pairing process needs to take place within 10 seconds, which is a reasonable amount of time that won't frustrate the user. Other transactions occur between the meters and the typical data concentrator unit (DCU). New meters have to register with the DCU within a reasonable amount of time (typically within 10 minutes). The DCU and meters need to communicate in an ad-hoc fashion, and here timing is more critical. DCUs also send broadcasts on the broadcast channel, and we need to ensure 99% coverage of meters in range.

    3.2 Can GDO2 be polled via the GPIO_STATUS register? How does GPIO_STATUS work? If a GDO2 interrupt occurs, I assume GPIO_STATUS bit 2 will be set and we can read it. But what causes this bit to be cleared (I assume the radio clears it automatically)? How long does it remain set, so that polling software has time to read it before it is cleared?
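
    Regarding the ISR load in point 1 above: one common way to keep a 250 µs tick affordable is to do only the bare minimum in the ISR and defer the SPI-heavy channel switch to the main loop. A minimal sketch of that pattern follows; the function names are placeholders, not from the TI example.

        #include <stdbool.h>

        /* Placeholders; substitute your own driver/keypad functions. */
        extern void radioHopNextChannel(void); /* retune + SRX        */
        extern void keypadScan(void);          /* keypad polling task */

        static volatile bool hopPending;

        /* 250 us timer ISR: just record that a hop is due, so other
         * tasks (e.g. the keypad) still get CPU time. */
        void timerIsr(void)
        {
            hopPending = true;
        }

        /* Main loop: perform the channel switch outside the ISR. */
        void mainLoop(void)
        {
            for (;;) {
                if (hopPending) {
                    hopPending = false;
                    radioHopNextChannel();
                }
                keypadScan();
            }
        }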

  • 1) swrc253e is the software part of www.ti.com/.../swra482.pdf
    Have you read through the app note and the listed timing requirements?
    3.1: Thank you for the description of the system. Since this is metering, is this a standard/protocol you have to follow? But what I asked for was a description of the issue you are seeing.
    3.2: Yes, but you would need to poll the register at a high frequency to catch the pulse. Use a scope to measure the actual pulse length.
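    For illustration, a minimal polling sketch along those lines, reusing the placeholder cc112xReadReg() helper from the earlier sketch and assuming a CC112X_GPIO_STATUS define from cc112x_spi.h (whether bit 2 reflects GDO2, and how long the pulse stays readable, should be verified on a scope as suggested):

        #include <stdint.h>
        #include <stdbool.h>
        #include "cc112x_spi.h" /* assumed: CC112X_GPIO_STATUS define */

        extern uint8_t cc112xReadReg(uint16_t addr); /* placeholder SPI helper */

        #define GDO2_BIT (1u << 2) /* assumption: bit 2 mirrors the GDO2 pin */

        /* Poll GPIO_STATUS and report a rising edge on GDO2. Must be called
         * faster than the configured pulse width, or edges will be lost. */
        bool gdo2RisingEdge(void)
        {
            static bool lastLevel = false;
            bool level  = (cc112xReadReg(CC112X_GPIO_STATUS) & GDO2_BIT) != 0;
            bool rising = level && !lastLevel;
            lastLevel = level;
            return rising;
        }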
  • 1. Thank you for sharing the link to swra482.pdf. I have not looked at this app note before. This is useful info and I will study the timing requirements.
    3.1 Regarding the system description, this is a standard we have to follow. As for the actual issue: what I see is that if a meter receives a request from a DCU and responds, the DCU may miss the response because it has since switched channels. The same happens on the meter side: it may miss the request from the DCU, and the DCU then has to retry unnecessarily. This occurs too frequently to be acceptable.
    3.2 Ok, thanks for this. It looks a bit risky to implement. We will modify our design to get GDO2 connected to an MCU interrupt pin.
  • I have read through swra482.pdf.
    There are missing references in the document on pages 15 and 16. The text in these places refers to a previous section or description, but the reference itself is missing; instead there is placeholder text in brackets such as "(where?)", "(what earlier section???)" and "(described where???)". Which sections are being referred to in these paragraphs?

    My next question concerns frequency offset estimation. It is not clear to me whether this is needed and how it works. On the receiver side, reading the FREQOFF_EST1 and FREQOFF_EST0 registers gives, as I understand it, the offset from which we can derive the actual frequency on which the packet was received. On the transmitter side, if we know this offset, we can adjust the transmitter's carrier by moving the carrier frequency up or down in 1 kHz steps. But in a system such as ours, how do we ensure the frequency error stays below 8 kHz for the 25 kHz RX BW setting, or below 4 kHz for the 12.5 kHz setting? It seems the receiver would need to respond to the transmitter with its error estimate in the payload, and the transmitter could then apply the correction on its next transmission. With our use case, however, that would mean a DCU would need to get the error estimate from all registered meters each time it communicates, or gather this info at a set regular interval (maybe when it gathers meter profile data every 15 minutes), and keep a record of the last error for each meter so that it can correct on the next transmit. Is my understanding correct here?
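
    To make that bookkeeping concrete, here is a minimal sketch of the receiver-side part, assuming FREQOFF_EST shares the FREQOFF scaling from the user's guide (offset at RF = FREQOFF_EST × f_xosc / (2^18 × LO divider), i.e. roughly 15 Hz per LSB with a 32 MHz reference and the LO divider of 8 used for the 410-480 MHz band), plus the CC112X_* defines and the placeholder cc112xReadReg() from the earlier sketches:

        #include <stdint.h>
        #include "cc112x_spi.h" /* assumed: CC112X_FREQOFF_EST1/0 defines */

        extern uint8_t cc112xReadReg(uint16_t addr); /* placeholder SPI helper */

        #define F_XOSC_HZ  32000000L /* assumed 32 MHz reference        */
        #define LO_DIVIDER 8L        /* 410-480 MHz band on the CC1120  */

        /* Read the signed 16-bit offset estimate and convert it to Hz at RF. */
        int32_t readFreqOffsetHz(void)
        {
            uint16_t raw = ((uint16_t)cc112xReadReg(CC112X_FREQOFF_EST1) << 8)
                           | cc112xReadReg(CC112X_FREQOFF_EST0);
            int16_t est = (int16_t)raw;
            /* f_xosc / (2^18 * LO divider) ~= 15.26 Hz per LSB */
            return (int32_t)(((int64_t)est * F_XOSC_HZ) / (262144L * LO_DIVIDER));
        }

        /* DCU-side record of the last known error per registered meter,
         * updated on every packet received from that meter. */
        #define MAX_METERS 64u
        static int32_t lastOffsetHz[MAX_METERS];

        void updateMeterOffset(uint8_t meterId)
        {
            if (meterId < MAX_METERS) {
                lastOffsetHz[meterId] = readFreqOffsetHz();
            }
        }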

  • Before going into details from the app note, regarding 3.1: could you draw a timing diagram that describes the system, i.e. which unit operates on which channel, and when?

    Also, what is known about when which unit sends data?