
Wireless Communication Between CC2500EM (Using MSP430F2112)

Other Parts Discussed in Thread: MSP430F2112, CC2500

For weeks I have been trying to establish a connection between two CC2500EMs (Evaluation Modules). Following the TI HAL programming example, my end goal is to send one packet from one board and receive it on the other. I do not have the HAL evaluation kit, so I am simply trying to receive one packet per second using the polling method on the CC2500EM.

I am unable to consistently receive packets; I get a large amount of packet loss and byte corruption (no CRC at the moment). Currently my RX program polls the RXFIFO; when a packet arrives it reads the length byte, then reads "length" bytes from the RXFIFO into an array (using the variable-packet-length setting, but sending fixed-size packets). Each new packet is appended to the array so I can see what I am receiving.

My TX program is set up to send 4 bytes (length plus 3 data bytes). Using the MSP430 timer, it sends this packet 10 times a second. With this setup (current code attached) I am able to receive the majority of packets, but some are duplicated. I thought that when bytes are read off the RXFIFO they are removed from the FIFO, so the next iteration cannot read the same data? It takes about 4 seconds to receive 30 packets, which I think is a reasonable rate. Is this normal?

My key issue is that when I reduce the number of packets sent per second (pps) to 5, the packet-loss percentage increases dramatically (say from 25% to 60%+). It then takes around 30 seconds to correctly receive 30 packets, with a larger proportion corrupted (20-30%, compared to 5-15% at the higher rate). In other words, a higher send rate gives higher accuracy. Can anyone please explain the logic behind this? In my application I want to send 3-4 packets per second and receive 100% of them. Is the amount of packet loss I am seeing normal, and what can I do to reduce it?

In a previous coding example I attempted to send strings such as "AAAAAA" in ASCII. I commonly had another issue where some bytes in the string would occasionally change, and the change was consistent. When I sent "AAAA" (41 41 41 41 hex), 80% of the time I would get a match, but the other 20% I would get something like "AA  " (41 41 01 01 hex). When I changed it to "XXXX" (58 58 58 58 hex), some of these changed to "X@@X" (58 40 40 58 hex). Lastly, "UUUU" (55 55 55 55 hex) sometimes changed to "UUEE" (55 55 45 45 hex). This change looks like some kind of consistent bit corruption. Is there anything in my programming or hardware (further detailed below) that may cause this?

Attached are 3 images showing the result of receiving packets (also found here: http://imgur.com/zMbr18v,prHmgbr,Q5ARxKa). The result should increment with every packet sent (packets separated by 0x00 0x00). As you can see, in perhaps 20% of packets there is a byte that is 0x00 rather than what it should be. What could be the cause of this?

I have been banging my head against a wall for weeks and have finally posted here. I have tried connecting the two EM boards with shielded coaxial cable to rule out a connection issue. I have tried hundreds of different changes, tested different SmartRF Studio settings, and tested sending various packet sizes/speeds with limited success. Another key thing to note is that when I use a 32-bit sync word I get an even higher packet-loss percentage (perhaps due to the bit corruption detailed above).

Further details of my setup:

Software: My functions are sourced from the DN400 application code (and similar examples, e.g. https://github.com/alvarop/msp430-cc2500). I know the way I am sending/receiving packets is not ideal, but I am just trying to troubleshoot the problem; the core elements should be correct. Initially I used all of the send/receive functions as per the examples. Without any success, I have been slowly stripping the functions down to the very basics (as shown in RX.c). I apologise for my program being very messy; this is due to it being pulled apart countless times.

The program is attached and can also be found here: https://www.dropbox.com/s/4uy4ucyiyfoaprb/CC2500.rar


Hardware: I am currently using a surface-mount MSP430F2112 connected to the CC2500 evaluation board through a breadboard. The MSP430 has the minimum components to get it running (crystal, reset network, debouncing caps, etc.) and is connected to IAR (v4.11) through a JTAG connector (MSP430-JTAG-TINY-V2). The SPI lines run from vias connected to the MSP430 pins, into the breadboard, and through the CC2500EM connector to the relevant pins on the CC2500. Could the data be getting affected by this setup? I thought it was unlikely but would like to flag it anyway.


Any assistance in any area would be greatly appreciated.

CC2500EM.rar
  • My first thought when I read through this post is to check the SPI. From your description I find it a bit difficult to understand exactly how you have connected this, but we have seen in the lab, when connecting to customer boards, that long SPI lines can be an issue. A breadboard adds quite a capacitive load on the lines and can cause the rise/fall times to be too long for reliable communication. Check that you always read back the contents of registers correctly and use wires as short as possible between the MSP and CC2500. Skip the breadboard.

  • No problem, I have written a function to write to and read from all of the registers (in a loop as well). I can see no problem in that area. Is it possible for the SPI to corrupt sending/receiving packets but work fine when reading/writing the CC2500 registers?

    I have progressed a bit further with these boards. Currently one board is set up to send 10 packets of data per second (1x length, 3x data, 16-bit sync, CRC checking with corrupt packets discarded) using the MSP430 timer.

    On the receiving end I was able to receive 30 complete packets in around 25 seconds (on average; it varied from 15 to 40). I noticed that at a 10 kBaud data rate I was able to retrieve 30 packets in around 30 seconds, and at that rate I didn't need CRC checking because next to none of the packets were corrupt.

    For the application I need 250/500 kBaud. When I increased the rate to 250 kBaud, far more packets became corrupt. I implemented CRC checking and discarded packets that failed the check. As a result I am able to get 30 packets in 25 seconds. This rate of packet loss is still very high, isn't it, if I am sending 250 packets in 25 seconds but only getting 30? Is there any way to improve this?

    Thank you for your reply.

  • I took a quick look at your code and it looks like you don't use the default sync word. Any reason for this? With a poor sync word, the probability of finding a false sync word in the data stream increases. What I have also seen before with a 32-bit sync and a poor sync word is a false sync on the first 16 bits, after which the last 16 bits are never found.

  • Hi,

    Thank you for your response. I didn't have a particular reason for changing the sync bytes; it was just for testing different factors. For the most part the sync bytes were the default, but in one instance we tested with a different sync word, which must have been when that program was packaged.

    I have come to the conclusion that the pi-matching values were not correct, hence the very poor results. I have now changed to a PCB antenna following the reference design, but I am still running into signal-strength issues. Please refer to the new post I have made here for further details. Any help would be greatly appreciated, and I'll be sure to reply daily.

    http://e2e.ti.com/support/low_power_rf/f/155/t/305518.aspx

    Thanks.