
CC1101 exceeding allowed power in the US?

Other Parts Discussed in Thread: CC1101, CC1190, CC1121, SIMPLICITI, CC1200, CC1201, CC1125, CC1120, CC110L, CC1100

Hi, I just learned that FCC rules allow only a certain field strength for narrow band signals in e.g. the 915 MHz band, corresponding to about 0 dBm ERP (i.e. including antenna gain), and we were hoping to use CC1101 with +10 to +12 dBm output power.

I read in datasheet §28.3 that you have tried to make the circuit compliant with this. Am I misunderstanding something? You mention only FSK there, and so far (outside the US) we have been using ASK (or on/off keying, OOK, to be precise). If I understand correctly, we have to either have a bandwidth of >500 kHz or use frequency hopping between 50 channels to be allowed to use >0 dBm under the FCC rules? With ASK I can't see how this can be done, so it sounds like we have to change to FSK? Although I don't readily know what bandwidth that would give.

We need only one RF channel.

  • You are correct that you need to do FHSS or a digital modulation technique that gives a 6 dB BW > 500 kHz in order to transmit high output powers (FCC 15.247). ASK is not an option. Have a look at the following design note: www.ti.com/lit/swra123. You need to increase the frequency deviation to meet the 6 dB requirement. When doing so you also need to increase the RX filter BW (this needs to be wide enough to fit the transmitted signal). Increased RX filter BW means degraded sensitivity....

    Look at the total link budget (sensitivity + output power) to see if it makes sense to do wide band digital modulation (FCC 15.247) over a FCC 15.249 compliant system (lower output power, but better sensitivity for the same data rate). See tables 5 and 6 in the above mentioned design note.

  • Thanks. Could be worth checking. swra123 table 4 seems to imply the sensitivity is degraded only a few dB (from 112 in the datasheet to 109 at ~1-2 kbps in table 4), so with +12 dBm instead of -1 dBm TX power it would be an improvement of close to 10 dB in total, i.e. it might well be worth it? Tables 5-6 make it look even better.
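    A quick sanity check of this comparison (a sketch only; the TX powers and sensitivities are the approximate numbers quoted in this post, not guaranteed datasheet figures):

```python
# Link budget = TX output power minus RX sensitivity (both in dBm).
# The numbers below are the rough ones from this post, for illustration only.

def link_budget_db(tx_power_dbm, sensitivity_dbm):
    return tx_power_dbm - sensitivity_dbm

narrowband = link_budget_db(-1, -112)  # FCC 15.249-style: low power, best sensitivity
wideband = link_budget_db(12, -109)    # FCC 15.247 wideband: high power, a few dB worse sensitivity

print(narrowband, wideband, wideband - narrowband)  # 111 121 10
```

    So the wideband option comes out roughly 10 dB ahead in total link budget with these numbers.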

    I suppose frequency hopping is an option where I could keep the small BW (per channel) and high sensitivity, but then I understand I need to hop between 50 channels (or 25 channels if >250 kHz BW, I heard, but then I might as well go for 50). Frequency hopping is mentioned in the datasheet, but I assume it is done by the MCU, not the transceiver? Fitting 50 channels into the 915 MHz band is no problem, I suppose, it being 26 MHz wide. I'm not sure yet how easy it is to get the receiving transceiver to work with this, though. Won't I have to have a very large BW in the RX then? If I e.g. assume 50 kHz BW per channel and 50 channels, I roughly get 50*50 = 2500 kHz required total BW for the RX, i.e. even worse than with just a simple one-channel 500 kHz wide FSK?

    An idea I got just now is that since we send very little data, the transmitter could hop between 50 channels while the receiver listens to only one of them?! Do the hops just to fulfill the requirements to use 1 W.

    A third option would be aiming at closer to 1 W to compensate for the lower sensitivity, using a booster like the CC1190 (seems to give 27 dBm). It could maybe work for us but consumes more power and costs more.

    I'm a bit surprised how table 2 shows very similar BW between 2.4 and 100 kbps, I thought it would increase a lot with bit rate (as table 4 implies) - how come the big diff between table 3 and 4 RX filter BW? And in table 5 at 10 kbps output power is only 7.8dBm but the device can give +11 dBm @ 915MHz according to datasheet, is this due to limitation at lower bit rate or what? (We need 10kbps but were thinking of increasing it significantly at least before we realized how much it can affect sensitivity.)

  • Starting with the last question first......

    Table 2 shows the TX 6 dB BW. The transmitted BW is made wide by increasing the frequency deviation. The RX filter BW in table 3 (not table 4) is then set wide enough to fit the transmitted signal in table 2. In table 4 the frequency deviation is lower than used in table 3. Hence, the transmitted signal BW will be lower and consequently a lower RX filter BW can be used.

    When you do frequency hopping you can use a low data rate and low frequency deviation (Table 4 - preferred settings). The RX filter BW can then be low, which gives better sensitivity. The TX and RX units need to have a synchronized hopping sequence, so there is no need for a wide RX filter BW (they will operate on the same channel).

    Never heard about anyone doing TX hopping only and keeping the RX unit on a fixed channel. That said, I do not think you will violate any FCC rules with this approach. The TX current will then be 50 times larger (assuming 50 channels) than it would be if you used frequency hopping.

    The third option of using CC1190 and wide band transmission will consume more current, but the protocol will be a lot easier than doing FHSS. You will find CC1101+CC1190 reference design on TI web.
  • Thanks. I wonder if ASK is allowed with frequency hopping? And do you have any application notes on FHSS? I understand the principle but suppose it can be tricky to keep it synchronized. And I can imagine it slows down "Wake on RX" by a factor of 50.
  • Update: I'm guessing FSK has slightly better sensitivity than ASK (lower noise) so we should abandon ASK since we worry about sensitivity? And choose CC1121 which has several dB better sensitivity? And I found swra077 on FHSS, but if you have more tips they are welcome.

    But I wonder if the impressive sensitivity of CC1121 is of no use if used with 500 kHz FSK, i.e. does CC1121 require a low RX BW or else it will have sensitivity similar to CC1101?

    And in swra123 table 5 at 10 kbps output power is only 7.8 dBm but the device can give +11 dBm @ 915 MHz according to the datasheet, is this due to a limitation at lower bit rates or what?

    And I still suppose wake on RX will be 50 times slower?
  • The only advantage of using ASK is the reduced average current in TX. For a new design I would recommend FSK modulation.

    SimpliciTI supports FHSS. Maybe this simple star network can be used for your application (do a search on E2E for SimpliciTI and FHSS).

    CC1121 cannot be used if the TX signal has a 6 dB BW > 500 kHz as the RX filter BW is limited to 200 kHz. CC1200 is an alternative to CC1101 though.... You will not gain much (have not measured this use case) in sensitivity using CC1200 over CC1101, but in terms of co-existence (selectivity and blocking) CC1200 is the better choice.

    The maximum output power is limited by the power spectral density requirement of  max +8 dBm/3 kHz (that is, an entirely flat spectrum of 8 dBm/3 kHz in a 500 kHz BW corresponds to 1 W). The CC1101 and CC1200 can do more, but limited by PSD.....

    If you do FHSS the TX and RX units are synchronized (in time) so I do not understand the WOR being 50 times slower. Btw: the automatic WOR feature inside CC1200 cannot be used with FHSS as this only works for a single channel.

    Sorry for being slow to respond and somewhat short, but I'm currently travelling in Asia.     

  • Thanks for the reply, I'm happy to get such qualified support; it saves a lot of time and avoids mistakes. Some of these things we wouldn't discover until in a proper EMC lab with a working circuit, and we haven't even decided on which CC1xxx to use yet.

    CC1120 and CC1125 boast better sensitivity, but I'm thinking if used with >500 kHz BW they will have no advantage, so I compared sensitivity figures for CC1120/1121/1125/1101/110L/1200/1201 and it seems to be correct that if we aim at >500 kHz BW all CC1xxx will have the same sensitivity? But if we use narrow BW FHSS, provided that we can live with a low bitrate (around 1 kbps or so), then CC1125 will be notably better? So for greatest flexibility, CC1125 seems to be the best? But then I checked max RX BW and it is only 200 kHz for CC1120/21/25. Which means there is not one IC that can be said to be optimized for best flexibility and sensitivity, i.e. for both FHSS and >500 kHz FSK? Meaning we will have to decide on FHSS or >500 kHz before we choose an IC, or test both? Comparing the ones with >500 kHz BW, CC12xx has significantly better sensitivity (6 dB at 38.4 kbps) compared to CC1101/110L. So CC1125 is optimal for FHSS and CC1200 for >500 kHz FSK (or CC1201, I can't tell them apart since they are specified at different dev/CHF values).

    PSD: I saw a complete FCC test report for a product with a ~900 MHz CC110L based circuit and >500 kHz BW, and in that, they were just on the verge of crossing the PSD limit while only having about 7 to 13 dBm output power (depending on channel and data rate, 1.2-250 kbps). I haven't reflected over this before, but from your reply I suppose a perfectly square shaped (flat) spectrum with 30 dBm and 500 kHz BW would equal 8 dBm for each 3 kHz interval (although 10*log(1000*30/500) = 17.8 dBm, so I can't see why offhand, something wrong with that equation I'm sure). I suppose "squareness" has little to do with which CC1xxx we choose and mostly depends on the FSK parameters, but do you know of any way to improve the situation, or will the result always be that the max allowed output power in reality is about 12 dBm?

    WoR: I'm thinking that if the receiver is sleeping and listening to one of 50 channels, then worst case it would take 50 jumps before the receiver hits the right channel. But re-thinking, I suppose at startup the TX could start with the channel we decided the RX will listen to (but there are a lot of details to consider, like being allowed to transmit long enough for the RX to wake up).

    Datasheets for CC1120/1121/1125 on sensitivity (missing info): I'm guessing they mean 2-FSK when not stated, except for 38.4 kbps where I guess they mean 2-GFSK?

    And in swra123 table 5 at 10 kbps output power is only 7.8 dBm but the device can give +11 dBm @ 915 MHz according to the datasheet, is this due to a limitation at lower bit rates or what?
  • See the attached powerpoint presentation on CC1200 and digital modulation. 50 kbps with 4 chips per bit processing gain (i.e. 200 kchips/s) will provide a good solution.

    CC1200_Wideband_Digital_Modualtion.pptx


    CC1200 will give very good sensitivity for both low data rate FHSS and wideband digital modulation. CC1125 and CC1200 will give the same sensitivity for low data rates (low RX filter BW). Make sure the system parameters are the same when comparing CC1125 vs CC1200. In the data sheet GFSK is used except for 1.2 kbps, which uses 2-FSK. It is specified for the blocking/selectivity figures; I see this should have been clearer for the sensitivity figures.

    Convert 8 dBm to mW: 10^(8/10) = 6.3 mW. Multiply this by 500/3 and you get about 1 W.
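    The same arithmetic as a short sketch, just to verify the numbers:

```python
# Sanity check: a perfectly flat spectrum at the FCC PSD limit of
# +8 dBm per 3 kHz, spread over a 500 kHz bandwidth, sums to about 1 W.

def dbm_to_mw(p_dbm):
    return 10 ** (p_dbm / 10)

psd_mw = dbm_to_mw(8)    # ~6.31 mW in each 3 kHz slice
slices = 500 / 3         # number of 3 kHz slices in 500 kHz
total_mw = psd_mw * slices
print(round(total_mw))   # ~1052 mW, i.e. about 1 W (+30 dBm)
```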

    You are correct that the PSD depends on the FSK parameters. With reference to the attached powerpoint: when the chip is configured for +14 dBm, the PSD is measured as +4.8 dBm/3 kHz. That is, the maximum output power that can be used in this example is +17.2 dBm (note: CC1200 without an external PA can only do +14 dBm).

    Link budget with 200 kchips/s, assuming ideal antennas and maximum output power, is +17.2 - (-105) dB = 122.2 dB. For 1.2 kbps FHSS the link budget with 1 W output power is +30 - (-122) = 152 dB. The latter gives much longer range but also requires a more complex protocol. The decision needs to be taken based on your system needs: throughput, range, and current consumption.

    "And in swra123 table 5 at 10 kbps output power is only 7.8 dBm but the device can give +11 dBm @ 915 MHz according to the datasheet, is this due to a limitation at lower bit rates or what?": The output power is +8.1 dBm. The PSD is +7.8 dBm. The former is the total power and is measured with a large RBW (1 MHz, say). The latter is measured with RBW = 3 kHz (i.e. the power in "each point" on the spectrum analyzer is integrated over a 3 kHz BW).

  • Thanks.

    At higher bitrates (wider BW) I can't see why CC1200 should be better than CC112x. E.g. at 915 MHz, 38 kbps, 20 kHz dev, 100-104 kHz filter BW, all 7 ICs are specified, presumably for 2-GFSK, to -110 dBm except CC1100 and CC110L, which are only -104 dBm. And for low bitrates, it looks to me like CC1200 and CC1120 are nearly identical: these are both specified for 1 kbps FSK (presumably 2-FSK), 4 kHz dev, 10-11 kHz filter BW, to -122 and -123 dBm respectively. Or do you know something I don't?

    A question on swra472: It says 433MHz +6dBm is allowed in FCC CFR47 Part 15.231 but I heard elsewhere that 433MHz is not a good choice in the US, and that 915MHz is much better. I'm a bit surprised to find this info in swra472, I would think swra048 is more intended to cover that topic and it doesn't even mention 433MHz specifically and table 3 doesn't include 433MHz. To me, swra472 seems odd and swra048 correct but I'm not an expert. Do you understand why swra472 table 2 denotes 433MHz "almost ‘world-wide’ " and says +6dBm is allowed?

    Transceiver datasheets:
    I tried to understand AFC for CC112x and CC120x to be able to select the crystal and deviation, but was left with one uncertainty and what looks like a bit numbering mixup. Looking at the UG for CC1200 (CC112x is similar), here are some citations:
    * §6.3 (p32) says "increased PLL BW" is off with FREQOFF_CFG = 0x22 and on if 0x30 or 0x34, indicating that bit number 4 (0x10) activates this function (assuming LSB is bit No. 0)
    * §8.9.2 (p61) says FREQOFF_CFG.FOC_EN turns on/off autom freq comp, and that FREQOFF_CFG.FOC_KI_FACTOR=3 selects slowest loop
    * p97 describes register FREQOFF_CFG. According to the right column, FOC_KI_FACTOR | MDMCFG0.TRANPARENT_MODE_ENABLE is 3 bits, but column 1 says it is only 2 bits (bits 0-1) and column 2 mentions only FOC_KI_FACTOR even though the last column handles 3 bits.
    * p97 also says bit 5 enables FOC (i.e. 0x20), and that bits 3-4 control what I assume is the same as that mentioned in §6.3, but in §6.3 it rather seems to be bit No. 4 (0x10) that switches freq comp on/off.
    * §9.12 (p74) says that with SAFC=1 you enable automatic copying of FREQOFF_EST to FREQOFF but there are no details.

    Questions on that:
    A. Uncertainty: By reading §9.12 I thought the automatic estimation and copying of FREQOFF_EST to FREQOFF was a completely different functionality from the 25% increase of PLL BW in §6.3 etc. (FREQOFF_CFG). If it is: do you have any details on how this estimation works, like the maximum possible value?
    B. Bit numbering mixup: The bit numbering and the hex values above don't match between the paragraphs and within page 97.
    * I also wonder if you have any details on establishing the required minimum deviation and, as part of that, requirements on the crystal? Most manufacturers seem to recommend no less than ~30 kHz dev no matter how low the bitrate, but for TI I suppose what's listed in the datasheet specs under sensitivity is recommended (since the specified error rate is 1% and the values are also listed in the software)?
    * And in §6.1 it says SmartRF Studio is recommended to get these required values, but I started the program (without HW attached) and saw only the datasheet values in a list and the parameters in boxes, but no way to make it suggest any values. BTW I assume only crystal stability (i.e. temp drift) should be considered when determining the required deviation (not tolerance and ageing, though these affect the required RX BW).
    * And why can only 2 xtal frequencies be chosen in the software? I thought any freq within a span was ok. If I double click cc1120 I can select only 32.0 and 33.6 MHz, nothing in between.
  • In my previous post I wrote "CC1125 and CC1200 will give the same sensitivity for low data rates (low RX filter BW). Make sure the system parameters are the same when comparing CC1125 vs CC1200." The same is also true for CC1120.

    433 MHz is covered in swra048. See section 2.2. According to 15.231(e) [other periodic applications than control applications] the duration of one transmission shall be shorter than 1 second and the off time between transmissions shall be at least 30 times the duration of the transmission or 10 seconds, whichever is greater. @433 MHz the output power is -22.4 + 20*log(TX_on_time/100ms) [max -2.4 dBm], where TX_on_time is the maximum transmission time in any 100 ms period.

    For control applications (garage opener, remote switch, alarm system) 15.231(a) is applicable. @433 MHz the output power is -14.4 + 20*log(TX_on_time/100ms) [max 5.6 dBm]

    433 MHz is indeed used almost everywhere and is "world wide". In the US there will be limitations on the total TX time.

    "Bit numbering mixup": User Guide is correct
    - FREQOFF_CFG[5] enables frequency offset compensation
    - 1 bit in MDMCFG0.TRANPARENT_MODE_ENABLE combined with 2 bits in FREQOFF_CFG[1:0] is 3 bits in total. MDMCFG0.TRANPARENT_MODE_ENABLE is the MSB of the 3 bits and FREQOFF_CFG[1:0] the 2 LSB's.

    In Studio you can type in whatever crystal frequency you like (between the limits).

    If you provide your system requirements we can generate an XML file you can import into SmartRF Studio. Choose the "typical settings" which are the closest match to your parameters and manually change data rate, RX filter BW, frequency,... etc.
  • Thanks. I did compare at exactly the same system parameters, see above.

    Uncertainty UG CC1200: By reading §9.12 I thought the "automatic estimation and copying of FREQOFF_EST to FREQOFF" was a completely different functionality from the "25% increase of PLL BW in §6.3 etc. (FREQOFF_CFG)". If it is: do you have any details on how this estimation works, like the maximum possible value? If it is not: please confirm that all of this is part of the same function (AFC).

    UG: I see now that TRANPARENT_MODE_ENABLE is in a completely different register. But if "increased PLL BW" is switched on/off by bit #5 then how does that match §6.3 where 0x30/0x34/0x22 to me indicates it's bit #4?

    Calculating sensitivity for a different bitrate than what's mentioned in the datasheet, and selection of deviation - is there any way to do that?

    I see now in SmartRF Studio that I can enter the xtal freq manually, and also that it gives me a warning if the RX BW is too narrow, nice. However it accepts e.g. dev 4 kHz and bit rate 100 kbps, which is unrealistic, so it doesn't seem to help with selection of dev. I found various suggestions on how to select dev on other sites (some say symbol rate / 2 even though MSK is only bitrate / 4, some say about the same as the symbol rate, several say min ~30 kHz), but I understand this (especially "min 30 kHz") has changed in the last few years due to better xtals being cheaper (e.g. 30 ppm stability and 30 ppm tol seems easy to find) and due to the AFC of TI's circuits, which makes the question rather TI specific. For TI I suppose the deviations listed in the datasheet specs under sensitivity are recommended (since the specified error rate is 1%), which means down to 4 kHz is recommended, even 1 kHz for CC1125. But this depends on symbol rate and we won't have exactly the same symbol rate as in the examples.

    PSD: In swra123b, table 2 shows a PSD of almost 8 dBm (i.e. just at the FCC limit) while output power is only 7.5 to 11.4 dBm (if bit rate < 100 kbps). Unless it is possible to select some other modulation scheme with a flatter spectrum, I suppose this means it is hardly possible to have a higher output power than about 8 to 12 dBm for bit rates up to almost 100 kbps, and that output power in reality is limited not by the allowed 1 W but by PSD. And that also means that with CC112x and CC120x, which can give up to 16 dBm, unless the antenna is really crappy, there is never any point in adding a CC1190 booster. CC112x/CC120x wouldn't improve the link budget if >500 kHz BW, I suppose. swra123b is for CC1101; CC1200/CC112x can also handle 4-FSK but I don't know if the spectrum is flatter? (This indicates it might be a bit flatter but not much: www.ni.com/.../en)

    I will buy a CC1120 dev kit today and test both with -1dBm and with >500kHz. I will have to try different variants so I don't know the system parameters yet, but as a baseline between 2 and 10kbps using 2-FSK (but we might try a lower bit rate to make use of sensitivity provided reliability is so high we don't need to re-send a lot) and as part of >500kHz BW we might well try up to 100kbps (to optimize TX current consumption and wake up time, if we decide to use WoRX). I think CC1120 will be a good starting point and maybe the final selection, but CC112x and CC120x are all pin compatible so it would be easy to optimize selection of transceiver later.

    BTW are you one of the authors of e g swra123b? :)
  • CC112x and CC120x implement sync search using a correlator. The demodulator uses the sync word itself to do bit synchronization and to find the frequency offset. The frequency offset that is found in the correlator is copied to FREQOFF_EST.

    The frequency compensation to increase the PLL BW uses a different regulation loop and is configured through register FREQOFF_CFG.

    FREQOFF_CFG = 0x30 and TRANSPARENT_MODE_ENABLE = 0 (typical use case).
    - FREQOFF_CFG[5] = 1. Frequency offset correction enabled
    - FREQOFF_CFG[4:3] = 10b. FOC in FS enabled (1/256)
    - FREQOFF_CFG[2] = 0. RX filter BW/2
    - FREQOFF_CFG[1:0] = 0. Frequency offset compensation disabled after sync. Note: TRANSPARENT_MODE_ENABLE = 0 so look at 000 setting in table

    FREQOFF_CFG = 0x22 and TRANSPARENT_MODE_ENABLE = 0 (typical use case).
    - FREQOFF_CFG[5] = 1. Frequency offset correction enabled
    - FREQOFF_CFG[4:3] = 0. FOC after channel filter
    - FREQOFF_CFG[2] = 0. Don't care since FREQOFF_CFG[4:3] = 0
    - FREQOFF_CFG[1:0] = 10b. Frequency offset compensation during packet reception with loop gain factor = 1/64 Note: TRANSPARENT_MODE_ENABLE = 0 so look at 010b setting in table
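    A hypothetical decoder for the two example values above. The bit positions follow the breakdown in this post; the field names (FOC_EN, FOC_CFG, FOC_LIMIT, FOC_KI_FACTOR) are labels I have assumed, so verify them against the CC1200 user guide register map before relying on them:

```python
# Decode a FREQOFF_CFG register value into the fields described above.
# Field names are assumed labels; bit positions follow this post.

def decode_freqoff_cfg(value):
    return {
        "FOC_EN": (value >> 5) & 0x1,     # bit 5: frequency offset correction on/off
        "FOC_CFG": (value >> 3) & 0x3,    # bits 4:3: FOC in FS (feedback to PLL) vs after channel filter
        "FOC_LIMIT": (value >> 2) & 0x1,  # bit 2: compensation limit, RX filter BW/2 or BW/4
        "FOC_KI_FACTOR": value & 0x3,     # bits 1:0: loop gain / compensation after sync
    }

print(decode_freqoff_cfg(0x30))  # FOC_EN=1, FOC_CFG=2 (FOC in FS), FOC_KI_FACTOR=0
print(decode_freqoff_cfg(0x22))  # FOC_EN=1, FOC_CFG=0 (after filter), FOC_KI_FACTOR=2
```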

    Calculating sensitivity: Sensitivity = -174 dBm + SNR + NF + 10log(RX_BW). SNR changes with modulation format and data rate but use 10 dB. NF (noise figure): use 7 dB.
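    As a sketch, with the rough defaults suggested above (SNR = 10 dB, NF = 7 dB), so treat the result as an indication only:

```python
import math

# Rule-of-thumb sensitivity: -174 dBm/Hz thermal noise floor plus
# required SNR, noise figure, and the noise bandwidth of the RX filter.

def sensitivity_dbm(rx_bw_hz, snr_db=10.0, nf_db=7.0):
    return -174 + snr_db + nf_db + 10 * math.log10(rx_bw_hz)

print(round(sensitivity_dbm(10e3)))  # -117 dBm with a 10 kHz RX filter BW
print(round(sensitivity_dbm(42e3)))  # -111 dBm with a 42 kHz RX filter BW
```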

    Theoretically, there is an optimum separation/datarate setting if you simultaneously minimize the receiver filter bandwidth. Every halving of the receiver filter bandwidth improves sensitivity by 3 dB, whereas sensitivity vs separation/datarate degrades by about 1.5-2.5 dB per halving, down to a certain limit where the loss increases very fast. In our experience a modulation index (= separation/datarate) = 1 is a good design compromise.

    The transmitted signal will have a certain signal bandwidth (BWsignal), which depends on the data rate and modulation format. This bandwidth can be approximated by Carson's rule:

    BWsignal = 2*fm + 2*fdev (= data rate + frequency separation)

    where
    - fm is the highest modulating frequency. 2*fm = data rate
    - fdev is the frequency deviation. 2*fdev = frequency separation

    On the receiver side there is a channel filter, which is centered on the down-converted received RF frequency, i.e. the intermediate frequency (IF). The channel filter has a programmable bandwidth BWchannel. The signal bandwidth has to be less than the receiver channel filter bandwidth, but we also have to take the frequency error of the transmitter and receiver into account.

    If there is an error in the transmitter carrier frequency and the receiver LO frequency, there will also be an error in the IF frequency. For simplicity assume the frequency error in the transmitter and receiver is equal (same type of crystal). If the receiver has an error of –X ppm and the transmitter has an error of +X ppm the IF frequency will have an error of +2*X ppm. Conversely, if the receiver has an error of +X ppm and the transmitter an error of -X ppm the IF frequency will have an error of -2*X ppm.

    BWchannel has to be larger than the maximum signal bandwidth BWsignal plus the maximum frequency error due to crystal inaccuracies. The worst case scenario is for the crystal errors on the TX and RX side to be of opposite signs:

    BWchannel > BWsignal + 4*XTALppm*fRF

    where
    - XTALppm is the total accuracy of the crystal including initial tolerance, temperature drift, loading, and ageing
    - fRF is the RF operating frequency.

    Assuming modulation index = 1, 10 ppm crystal accuracy and fRF = 868 MHz:

    62.5 kHz RX BW: Maximum data rate of 10 kbps and 10 kHz frequency separation (+/-5 kHz frequency deviation). 62.5 kHz > 10k + 10k + 4*10*868 = 55 kHz

    100 kHz RX BW: Maximum data rate of 32 kbps and 32 kHz frequency separation (+/-16 kHz frequency deviation). 100 kHz > 32k + 32k + 4*10*868 = 98.7 kHz

    Note: using GFSK instead of 2-FSK will give a lower BWsignal and the above approximation will be very conservative.
    99% occupied BW for 2-FSK, 10 kbps, +/-5 kHz deviation: 21.75 kHz
    99% occupied BW for GFSK, 10 kbps, +/-5 kHz deviation: 16.75 kHz
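    The bandwidth budget above can be reproduced with a few lines (assuming the conservative worst case of opposite-sign crystal errors, as in the text):

```python
# Required RX filter BW: Carson bandwidth of the signal plus worst-case
# crystal error, per the rule BWchannel > BWsignal + 4*XTALppm*fRF.

def required_rx_bw_hz(data_rate_hz, separation_hz, xtal_ppm, f_rf_hz):
    bw_signal = data_rate_hz + separation_hz    # Carson: 2*fm + 2*fdev
    xtal_error = 4 * xtal_ppm * 1e-6 * f_rf_hz  # TX and RX drift in opposite directions
    return bw_signal + xtal_error

# The two 868 MHz examples above (10 ppm crystals, modulation index 1):
print(required_rx_bw_hz(10e3, 10e3, 10, 868e6))  # ~54.7 kHz -> fits a 62.5 kHz filter
print(required_rx_bw_hz(32e3, 32e3, 10, 868e6))  # ~98.7 kHz -> fits a 100 kHz filter
```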

    The warning in Studio regarding RX BW is tied to the data rate, which has to be less than RX BW/2 for CC1120.

    You are correct that the maximum output power will be limited by PSD (and I have mentioned this before) when doing wideband modulation under FCC 15.247. 4-GFSK will "smear out" the energy and give a flatter spectrum and thus allow a higher output power.

    Finally (and I probably missed some of your questions), yes, I am one of the authors of swra123.
  • Thanks. It is very interesting to read about the two different functionalities that both affect AFC/BW.

    Above you say "FREQOFF_CFG = 0x22" and then "FREQOFF_CFG[5] = 1. Frequency offset correction enabled". I assume this means UG §6.3 "FREQOFF_CFG = 0x22 (i.e. feedback to PLL disabled)" is incorrect then? I.e. with 0x22, feedback to PLL is actually enabled, not disabled as it says there. Because "feedback to PLL" and "frequency offset correction" are the same thing, right?

    Thanks for the sensitivity formula. It doesn't seem to match the datasheet well though: at 10 kHz BW it gives -117 dBm and at 42 kHz it gives -111 dBm, but datasheet CC1120 page 13 has much better values. But maybe it works better for approximating the difference (using a datasheet value at bit rate x and the formula to get a value at bit rate x+y)?

    Thanks for the rule of thumb m = 1, i.e. separation equal to the symbol rate (deviation equal to half the symbol rate). swra122c has suggestions on how to determine xtal requirements based on sensitivity measurements. I was considering doing such measurements, but assuming required min BW = symbol rate * 2 + 4*XTALerr seems sufficient to me.

    But for TI, your AFC (as you described) must have an influence that relaxes the xtal requirements, but I don't know by how much. Even with a xtal with 30 ppm in total (which is about the best you can get: init tol + drift + ageing is at best around 10+10+10 ppm) I would get 915e6*30e-6*4 = 110 kHz increased BW. So the CHF of 10 kHz in the datasheet sensitivity table for 1.2 kbps with CC1120 is nowhere near that.

    And I read that a drawback of GFSK is higher required power, but perhaps that is true only if you take adjacent channel interference into account, I can imagine? Since we have only one channel, GFSK might have no disadvantages?
  • With FREQOFF_CFG = 0x22, feedback to PLL is disabled. The frequency compensation is in this case done after the channel filter (set by FREQOFF_CFG.FOC_CFG).

    Sensitivity = -174 dBm + SNR + NF + 10log(RX_BW). It is fair to assume NF = 7 dB, but the SNR is the "unknown" and depends on modulation format, data rate, and deviation. Thus, the equation shall only be used as an indication.

    There is a trade-off between RX filter BW (sensitivity) and crystal cost. The crystal 10 ppm initial tolerance can be compensated for in production by writing a correction factor to register FREQOFF. The aging can be compensated for in the field if there is a central unit with a tight crystal tolerance. Not always possible I know.... 10 ppm aging over time seems to be on the high side. The crystal we use in the reference design has 1 ppm/year, but it should be noted that the ageing in subsequent years is lower than for the first year.

    Anyways, assuming 10 ppm temp drift and 10 ppm for crystal aging the total will be 20 ppm. The absolute worst case will be if the RX and TX units drift in opposite directions (very conservative) and the RX filter BW must then be increased by 915 x 20 x 4 = 73 kHz.

    2-FSK, 1.2 kbps, 4 kHz deviation have a signal BW of 11 kHz and require an accurate crystal reference (TCXO). With the crystal you mention, and assuming initial tolerance is compensated for, you can use an RX filter BW of 55 kHz and enable feedback to PLL (increase of +/-RX filter BW /4). Going from 11 kHz RX filter BW to 55 kHz will give a degradation in sensitivity of 10log(55/11) = 7 dB (or a reduction in range with a factor 2^(7/6)).
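    The degradation and range numbers can be checked like this (assuming free-space propagation, i.e. 6 dB path loss per doubling of distance, as in the factor above):

```python
import math

# Sensitivity penalty from widening the RX filter BW, and the resulting
# range reduction, for the 11 kHz -> 55 kHz example above.

def bw_penalty_db(bw_wide_hz, bw_narrow_hz):
    return 10 * math.log10(bw_wide_hz / bw_narrow_hz)

penalty = bw_penalty_db(55e3, 11e3)   # ~7 dB worse sensitivity
range_factor = 2 ** (penalty / 6)     # ~2.2x shorter range (6 dB per doubling)
print(round(penalty), round(range_factor, 2))
```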

    My recommendation is to use GFSK over FSK. Difference in sensitivity is not much. TX BW (signal BW) is much lower.
  • The question regarding xtal requirements vs BW is crucial to us, and normally I would say that trimming is necessary AND the best xtals available: if trimming removes the initial tol of 10 ppm, and temp drift is 10 ppm from -40 to +85, i.e. maybe 4 ppm from 0 to 50 C, then e.g. Digikey have only 10 types to choose from, and the ones with a reasonable price have ageing of max 1 ppm (only one), 3 ppm or 5 ppm per year; assuming logarithmic dependence and 10 years and choosing <3 ppm/year it will be about <6 ppm in 10 years. So in total we get 4+6 = 10 ppm, which increases the BW by 37 kHz. So even with trimming, a BW of 10 kHz, which the CC1120 sensitivity is specified for, or even 3.8 kHz, which the CC1125 is specified for, seems unobtainable in a system like ours. To me, the conclusion is that the sensitivities specified for CC112x cannot be obtained even if buying the best crystal there is (not selecting the one with 1 ppm ageing since we want a second source) and trimming during production. Meaning the best sensitivity I can get is with around 40 kHz BW, which means around -120 dBm, and it also means there's no point choosing CC1120 since the cheaper CC1121 has the same data for the useful settings. And since we have more than 2 units involved, we can't calibrate them as pairs either.

    Two questions on this:

    * Does your SAFC (copying of FREQOFF_EST to FREQOFF) remove or relax the need for the mentioned trimming? And how does that work if we have more than two units in the system? If it's supposed to be done during production I can see how we can use a separate reference, but I can't see how it can be automatic after that point; or perhaps SAFC is intended for use during production? It is called "automatic" though. Perhaps "automatic" works only if just two units are involved, so would we have no use for it, using an external ref?

    * It seems to me, with FCC rules, it would be better to go for the >500 kHz 2-FSK solution? Since then we will get longer range for the same transceiver and reduced crystal cost, since we could probably either skip trimming or choose a cheaper crystal. The only drawback would be higher TX current consumption, for two reasons: higher RF power as part of the solution (doubled power doesn't double current though), and higher consumption due to the increased BW (but since we can use almost 100 kbps the transmission bursts will be much shorter). So all in all it's hard to say which consumes the most current without testing, but probably slightly more for this solution? And since there are not many figures on this in the datasheet, I suppose we will have to measure it, by lifting bead L1 on the EM board and putting a 1 ohm series resistor there.
  • It would be really helpful to get an explanation of how SAFC/FREQOFF_EST works and what precision it gives, so that we can decide whether we have to calibrate manually or not. If we calibrate manually, we can calibrate all systems against the same reference, which has 2 ppm precision. If we use SAFC I don't know what we get. And at what point is the "automatic" part of this used (see above)?