
CC1200: BER increase with frequency offset correction on

Part Number: CC1200

We are running BER tests on a product that uses the CC1200. Our symbol rate is 4800 symbols/s and the deviation is about 2.4 kHz. Modulation is 4-GFSK. The channel filter is around 19 kHz wide.

We have noticed that turning frequency offset correction on causes a higher BER than with correction off. The frequency error between transmitter and receiver is less than 100 Hz, so there is no real need for correction in this test situation. In the final product we do need correction, as frequency errors can be as high as 2-3 kHz and the CC1200 only tolerates a 200-300 Hz error when correction is off.

In the tests we transfer 100-byte packets with random content. We receive 1000 packets per signal level and the signal level is swept between -60 and -120 dBm.

When frequency error correction is turned off, we see no bit errors at signal levels above about -90 dBm, and the BER is 10^-6 or better (10^-6 is the current floor of our BER graph, given the amount of data sent). When we turn the correction feature on, bit errors start appearing and our BER is limited to around 10^-5. I know this is still a good level for most applications, but our reference devices are some of our earlier products with different hardware, and they are capable of error-free performance at these signal levels.
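
For reference, the 10^-6 floor simply follows from the amount of data sent per level; a trivial sketch of the arithmetic:

    #include <stdio.h>

    int main(void)
    {
        /* Per signal level: 1000 packets of 100 bytes of random payload. */
        const double bits_per_level = 1000.0 * 100.0 * 8.0;   /* 800000 bits */

        /* A single bit error at one level is therefore ~1.25e-6, so ~1e-6 is the
         * practical floor of the BER graph with this amount of data. */
        printf("bits per level        : %.0f\n", bits_per_level);
        printf("BER for one bit error : %.2e\n", 1.0 / bits_per_level);
        return 0;
    }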

Why does the BER performance degrade when correction is turned on?

Our frequency offset correction setting is “FOC after channel filter”, so we don’t use or need the FB2PLL feature. Our preamble is 21 bytes, the preamble detector is on, and part of the preamble is used as the sync word as well.

  • You write that you send packets but you measure BER. Does that mean that you calculate BER based on the PER numbers?

    Could you post the full register settings you are using to avoid any doubt?
  • It means that above about -90 dBm we never lose packets due to a missed sync, and all errors are individual character errors inside packets, so we can calculate a bit error rate even though we don't have access to the raw bits from the demodulator.

    Below are the registers we have configured; all other registers are at their default values (a rough sketch of how we write this table over SPI follows after it):

    REGISTER          ADDR    VALUE
    ===============================
    IOCFG3            0x00    0x06
    IOCFG2            0x01    0x19
    IOCFG0            0x03    0x00
    SYNC3             0x04    0xCC
    SYNC2             0x05    0xCC
    SYNC1             0x06    0xCC
    SYNC0             0x07    0x55
    SYNC_CFG1         0x08    0xAB
    SYNC_CFG0         0x09    0x00
    DEVIATION_M       0x0A    0x7D
    MODCFG_DEV_E      0x0B    0x28
    DCFILT_CFG        0x0C    0x5D
    PREAMBLE_CFG1     0x0D    0x1F
    PREAMBLE_CFG0     0x0E    0x8A
    IQIC              0x0F    0xCB
    CHAN_BW           0x10    0x97
    MDMCFG1           0x11    0x40
    MDMCFG0           0x12    0x05
    SYMBOL_RATE2      0x13    0x5F
    SYMBOL_RATE1      0x14    0x75
    SYMBOL_RATE0      0x15    0x10
    AGC_REF           0x16    0x34
    AGC_CS_THR        0x17    0xEC
    AGC_GAIN_ADJUST   0x18    0x8E
    AGC_CFG2          0x1A    0x01
    AGC_CFG1          0x1B    0x51
    AGC_CFG0          0x1C    0x87
    FIFO_CFG          0x1D    0x00
    SETTLING_CFG      0x1F    0x0B
    FS_CFG            0x20    0x14
    PKT_CFG2          0x26    0x00
    PKT_CFG1          0x27    0x03
    PKT_CFG0          0x28    0x20
    PA_CFG1           0x2B    0x37
    PKT_LEN           0x2E    0xFF
    IF_MIX_CFG        0x2F00  0x1C
    FREQOFF_CFG       0x2F01  0x00
    MDMCFG2           0x2F05  0x0C
    FREQ2             0x2F0C  0x59
    FREQ1             0x2F0D  0x95
    FREQ0             0x2F0E  0xC2
    IF_ADC1           0x2F10  0xEE
    IF_ADC0           0x2F11  0x10
    FS_DIG1           0x2F12  0x07
    FS_DIG0           0x2F13  0xAF
    FS_CAL1           0x2F16  0x40
    FS_CAL0           0x2F17  0x0E
    FS_CHP            0x2F18  0x28
    FS_DIVTWO         0x2F19  0x03
    FS_DSM0           0x2F1B  0x33
    FS_DVC0           0x2F1D  0x17
    FS_PFD            0x2F1F  0x00
    FS_PRE            0x2F20  0x6E
    FS_REG_DIV_CML    0x2F21  0x1C
    FS_SPARE          0x2F22  0xAC
    FS_VCO4           0x2F23  0x14
    FS_VCO2           0x2F25  0x00
    FS_VCO1           0x2F26  0x00
    FS_VCO0           0x2F27  0xB5
    IFAMP             0x2F2F  0x01
    XOSC5             0x2F32  0x0E
    XOSC1             0x2F36  0x03
    FSCAL_CTRL        0x2F8D  0x01
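
    For completeness, this is roughly how we load that table into the chip. cc1200_write_reg() is just a placeholder for our own SPI register write routine (not a TI driver function), and only a few rows of the table are shown:

        #include <stdint.h>

        /* One row of the table above: 16-bit address (0x2Fxx = extended register
         * space), 8-bit value. */
        typedef struct {
            uint16_t addr;
            uint8_t  value;
        } reg_setting_t;

        /* Placeholder for our SPI access routine; it handles the extended address
         * space internally when addr >= 0x2F00. Not a TI-provided API. */
        extern void cc1200_write_reg(uint16_t addr, uint8_t value);

        /* A few rows copied from the table above; the full list has the same form. */
        static const reg_setting_t cc1200_settings[] = {
            { 0x0008, 0xAB },   /* SYNC_CFG1    */
            { 0x000A, 0x7D },   /* DEVIATION_M  */
            { 0x000B, 0x28 },   /* MODCFG_DEV_E */
            { 0x0010, 0x97 },   /* CHAN_BW      */
            { 0x2F01, 0x00 },   /* FREQOFF_CFG  */
        };

        void cc1200_apply_settings(void)
        {
            for (unsigned i = 0; i < sizeof(cc1200_settings) / sizeof(cc1200_settings[0]); i++)
                cc1200_write_reg(cc1200_settings[i].addr, cc1200_settings[i].value);
        }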

  • "It means that because above about -90 dBm we don't lose any packets because of not being able to sync and all errors are individual characters in packets, we are able to calculate bit error rate even we don't have access to raw bits from the demodulator."

    Not sure if I understand this correctly. Do you mean that for a signal level lower than -90 dBm you receive the packets but with bit errors?

    Why do you run with part of the preamble as the sync word? Setting the sync word equal to the preamble increases the probability of bit errors due to failure to get proper bit sync. Do you see the same if you use the default sync word?

    Does FREQOFF_CFG = 0x30 or FREQOFF_CFG = 0x33 give a different result?
  • >Not sure if I understand this correctly. Do you mean that for a signal level lower than -90 dBm you receive the packets but with bit errors?

    When the signal level is near the sensitivity level (-120 ... -100 dBm), a large share of the errors comes from packets that are missed completely. When BER is calculated from the packet error rate, all bits of such packets are counted as errored bits, which gives a too pessimistic BER result. But above about -90 dBm (-90 ... -60 dBm) synchronization is never missed, the CC1200 always receives the packet, and the errors are character errors inside the packets, so we can calculate the bit error rate for each such packet. Therefore, at these signal levels this method gives an accurate BER (a simplified sketch of the per-packet calculation is shown below).
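
    To make the method concrete, this is essentially the per-packet calculation our test software does (the transmitted random payload is known to it):

        #include <stdint.h>
        #include <stddef.h>

        /* Count bit errors between the known transmitted payload and the received
         * payload. We only call this for packets where sync was found (above about
         * -90 dBm sync is never missed), so no bits are "missing" and the ratio
         * bit_errors / total_bits is a true BER. */
        unsigned long count_bit_errors(const uint8_t *tx, const uint8_t *rx, size_t len)
        {
            unsigned long errors = 0;
            for (size_t i = 0; i < len; i++) {
                uint8_t diff = (uint8_t)(tx[i] ^ rx[i]);
                while (diff) {              /* count set bits in the XOR */
                    errors += diff & 1u;
                    diff >>= 1;
                }
            }
            return errors;
        }

        /* BER for one signal level = accumulated bit errors / total bits compared. */
        double ber(unsigned long bit_errors, unsigned long packets, size_t payload_len)
        {
            return (double)bit_errors / ((double)packets * (double)payload_len * 8.0);
        }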

    >Why do you run with part of the preamble as the sync word? Setting the sync word equal to the preamble increases the probability of bit errors due to failure to get proper bit sync. Do you see the same if you use the default sync word?

    I do it because I was instructed to do so in your reply to my earlier question on Oct 4, 2017. I cannot search for the true sync word of this legacy waveform with the CC1200's sync detector because it is 4-level FSK modulated. Your reply:

    "I would recommend setting part of the preamble as sync word to ensure bit sync. My experience is that turning off the sync word gives poor performance.

    Then the MCU has to find sync etc in the received packet."
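
    For completeness, this is roughly what our MCU does with the FIFO contents to locate the real sync word; the function and the 16-bit pattern width are only illustrative, not our actual implementation:

        #include <stdint.h>
        #include <stddef.h>

        /* Search a received bit stream (MSB first within each byte) for a
         * bit-aligned 16-bit pattern. Returns the bit offset where the pattern
         * starts, or -1 if it is not found. */
        int find_sync(const uint8_t *buf, size_t len_bytes, uint16_t pattern)
        {
            uint32_t window = 0;
            size_t total_bits = len_bytes * 8;

            for (size_t bit = 0; bit < total_bits; bit++) {
                uint8_t b = (buf[bit / 8] >> (7u - (bit % 8))) & 1u;
                window = ((window << 1) | b) & 0xFFFFu;
                if (bit >= 15 && window == pattern)
                    return (int)(bit - 15);    /* bit index where the pattern begins */
            }
            return -1;                         /* pattern not present in the buffer */
        }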

  • I remember the question now. My assumption here is that as long as you get sync, you can use the MCU to search for the real sync word in the bit stream / content of the FIFO. You set the sync word to 0xCCCCCC55. I assume that 0xCC is the preamble; where does the 0x55 come from?

    I'm a bit curious about the poor sensitivity, but on the other hand the modulation index on the inner symbols is low; the deviation should have been higher to get better performance.

    Back to the original question:
    How accurate is the frequency reference in your system? If you have a very accurate reference you don't need to activate frequency offset compensation. I'm still curious whether setting FREQOFF_CFG = 0x30 or FREQOFF_CFG = 0x33 gives a different result.
  • We will be running a series of additional tests and I will get back to this, but I can already tell that doubling the deviation had no impact on the result.

    About the frequency reference: our 40 MHz VCTCXO is ±0.5 ppm over temperature and ±1 ppm over aging. In addition we need to be able to operate against some legacy devices in the field, and they can be off frequency by as much as 1-2 kHz (2.5 ... 5 ppm), maybe even more. So to my understanding, and based on our tests with the CC1200 with frequency error correction off, we really do need the correction.
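
    As a quick sanity check of those numbers (the ~448 MHz carrier is my reading of the FREQ2..FREQ0 and FS_CFG values posted above, so treat it as an assumption):

        #include <stdio.h>

        int main(void)
        {
            const double carrier_hz = 448e6;   /* assumed carrier, see note above */

            /* Our reference: ±0.5 ppm over temperature plus ±1 ppm aging. */
            printf("own reference, 1.5 ppm : %+.0f Hz\n", 1.5e-6 * carrier_hz);

            /* Legacy devices in the field: roughly 2.5 ... 5 ppm off. */
            printf("legacy device, 2.5 ppm : %+.0f Hz\n", 2.5e-6 * carrier_hz);
            printf("legacy device, 5.0 ppm : %+.0f Hz\n", 5.0e-6 * carrier_hz);
            return 0;
        }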

  • Measured PER vs. input power level vs. frequency offset on one CC1200. The results are in the attached Excel spreadsheet together with the register settings used.

    100 packets were transmitted for each input power level at every frequency offset. The data payload was 10 bytes. The measurement took 1 h 20 min. As you can see from the spreadsheet, a frequency offset of at least +/-6 ppm is possible. This is without FB2PLL.

    I have started a measurement with 0 frequency offset using 1000 packets and a 100-byte payload. Will provide this result when ready (probably tomorrow).

    4.8ksps 4GFSK.xlsx

  • Below is a PER measurement with 0 frequency offset using 1000 packets and a 100-byte payload for each input power level.

    Input level (dBm)    PER (%) at 0 offset
    -120.0 100.0
    -119.0 100.0
    -118.0 100.0
    -117.0 100.0
    -116.0 100.0
    -115.0 99.0
    -114.0 97.4
    -113.0 70.1
    -112.0 29.0
    -111.0 11.5
    -110.0 3.5
    -109.0 0.5
    -108.0 0.6
    -107.0 0.6
    -106.0 0.1
    -105.0 0.0
    -104.0 0.0
    -103.0 0.0
    -102.0 0.0
    -101.0 0.0
    -100.0 0.0
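
    For reference, one way to condense such a sweep into a single number is to interpolate the level where the PER crosses a threshold; the 1% threshold below is just an example, not a characterization limit:

        #include <stdio.h>

        /* Part of the PER sweep above: input level (dBm) vs. PER (%). */
        static const double level_dbm[] = { -115.0, -114.0, -113.0, -112.0, -111.0, -110.0, -109.0 };
        static const double per_pct[]   = {   99.0,   97.4,   70.1,   29.0,   11.5,    3.5,    0.5 };

        /* Linearly interpolate the input level where PER first drops below the
         * given threshold. Returns 0.0 if the threshold is never crossed. */
        static double level_at_per(double threshold_pct)
        {
            for (unsigned i = 1; i < sizeof(per_pct) / sizeof(per_pct[0]); i++) {
                if (per_pct[i - 1] >= threshold_pct && per_pct[i] < threshold_pct) {
                    double f = (per_pct[i - 1] - threshold_pct) / (per_pct[i - 1] - per_pct[i]);
                    return level_dbm[i - 1] + f * (level_dbm[i] - level_dbm[i - 1]);
                }
            }
            return 0.0;
        }

        int main(void)
        {
            printf("level at 1%% PER: %.1f dBm\n", level_at_per(1.0));   /* about -109.2 dBm */
            return 0;
        }
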
  • When products using the CC1200 are used on both ends of the link, we were able to achieve error-free reception at signal levels above -105 dBm with frequency error correction on (FREQOFF_CFG, 0x2F01 = 0x20). We had to change the last 0x55 byte of the sync word to 0xCC and use strict sync word check level 2 instead of 1. What seems strange to me is that parameters which are said to affect only the acceptance of the sync word have a tremendous effect on the reception of the payload part of the message and on the errors occurring there. I underline that we have never had any synchronization problems and the sync word has always been found. But when we change parameters related to sync word acceptance, certain error types start appearing in the payload. For example, a very common error which is easy to reproduce is that roughly the 10th character of the payload is received erroneously. So it looks like something is turned on or off inside the CC1200 at that point.

    Another case is when we need to operate against legacy 4FSK modems where the baseband filtering is root raised cosine (BT = 0.2) instead of Gaussian. All other parameters remain the same (deviation, symbol rate, packet structure with random content). I understand that the performance of the CC1200 is probably not characterized or tested for this case, but can you give some advice on recommended register settings? We are currently seeing worse BER performance against an RRC-filtered 4FSK transmitter than against a TX using a CC1200, which is no surprise to me because of the filter mismatch.

    So as a summary: performance between a CC1200 TX/RX pair is very good, although I don't fully understand why sync word parameters affect payload errors. BER is worse against an RRC-filtered 4FSK transmitter.