
RTOS/LAUNCHXL-CC2640R2: ble5stack - host_test - NPI - notifications missing

Part Number: LAUNCHXL-CC2640R2
Other Parts Discussed in Thread: CC2564, CC2540, CC2541

Tool/software: TI-RTOS

Hi,

I just tested the ble5stack host_test (compiled without changes from the demo source) with some of our devices.

The peripherals send notifications carrying 100 Hz data @ CI 30, i.e. 3-4 packets per CI with 20 bytes of data per packet (the devices are fully verified and working correctly).
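For reference, the numbers above work out as follows (a quick sketch; it assumes "CI 30" means a 30 ms connection interval, which the later discussion of a 50 ms CI suggests):

```python
# Traffic pattern from the post: 100 Hz notifications, 30 ms CI,
# 20-byte payloads. "CI 30 = 30 ms" is an assumption, not confirmed.
notification_rate_hz = 100    # notifications per second
ci_ms = 30                    # connection interval, assumed to be in ms
payload_bytes = 20            # data bytes per notification

packets_per_ci = notification_rate_hz * ci_ms // 1000   # 3 (3-4 with jitter)
payload_rate = notification_rate_hz * payload_bytes     # 2000 bytes/s
print(packets_per_ci, payload_rate)                     # prints: 3 2000
```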

Connecting one device I get all notifications from the peripheral at the UART side.

With two devices connected - even if just one is sending data - I get only about one third (27 Hz) of the notifications at the UART side.

UART @ 115200 should easily be capable of receiving that amount of data, so the bottleneck should be on the controller side. Can you estimate where the bottleneck is / which optimisations would be possible?
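A rough sanity check of that UART claim (a sketch; the 12-byte per-notification framing overhead is my estimate for H4 HCI ACL + L2CAP + ATT headers, not a measured value):

```python
# Can 115200 baud carry 100 notifications/s of 20 bytes each?
# Estimated framing per notification: 1-byte HCI UART packet indicator
# + 4-byte HCI ACL header + 4-byte L2CAP header + 3-byte ATT Handle
# Value Notification header = 12 bytes on top of the payload.
baud = 115200
bits_per_byte = 10                         # 8N1: start + 8 data + stop
uart_capacity = baud // bits_per_byte      # 11520 bytes/s

notifications_per_s = 100
bytes_per_notification = 20 + 12           # payload + estimated framing
required = notifications_per_s * bytes_per_notification  # 3200 bytes/s

print(required, uart_capacity)             # prints: 3200 11520
```

So one stream uses well under a third of the UART budget; note, though, that four such streams (~12800 bytes/s) would exceed 11520 bytes/s, so for the 4-device goal the UART baud rate itself may also need raising.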

I need to optimize for as many devices as possible with the settings above.

Also: would there be any advantage to using blestack instead of ble5stack for this task (BLE5 is not needed yet)?

regards

  • Hello Frederik,

    We can't provide a guarantee on throughput when multiple connections are in effect. This is due to scheduling limitations in the controller.

    If you are not using BT5, then I would recommend using the blestack (BT4.2) variant of the Host Test application.

    Best wishes
  • So what would the solution to that problem be? Can the BT4.2 stack guarantee stable throughput? Could a dual-mode chip like the CC2564 provide this? How can I find that information?

    I urgently need a solution for receiving from multiple devices (min. 4).

    We already have a Windows USB dongle which can handle 5-8 devices in this setup, but we need an embedded solution for this!

  • Hey JXS,
    anything on the above? It's really time-pressing here...

    regards
  • Maybe one more thing here...

    I evaluated the CC2540 as a central some time ago - even with that chip it was possible to receive from about 3 devices at a stable data rate.
    Now the CC2640R2 is much newer - and its performance is worse - why?
  • Can you try reducing (slowing) the CI to 50ms or slower?

    Best wishes
  • Hi JSX,

    our former tests showed that CI 30 is the maximum we can use. Using longer CIs has a huge impact on the reliability of the data rate, even with just one device connected.

    Our peripheral is a CC2541 which doesn't act as a pure NPI device but also runs a lot of additional code. As far as I know, the CC2541 can transfer a maximum of 4-5 packets per CI. Increasing the CI to 50 ms means the CC2541 would have to send 5 packets every CI, constantly, so I suspect this is the reason for the drops.
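    The arithmetic behind that suspicion, as a quick sketch (the 4-5 packets/CI ceiling is the figure stated above, not a datasheet value):

```python
def packets_per_ci(rate_hz, ci_ms):
    # Packets the peripheral must deliver in each connection event
    # to keep up with a fixed notification rate.
    return rate_hz * ci_ms // 1000

# At CI 30 the CC2541 needs 3 of its ~5 packet slots (some headroom);
# at CI 50 it needs all 5, every interval, with no margin for retries.
print(packets_per_ci(100, 30), packets_per_ci(100, 50))  # prints: 3 5
```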

    Anyway, I'm still curious where the actual bottleneck is for the CC2640R2, since we already have successful tests with a CC2540 as receiver (Bluegiga stack, CI 12, 3 devices before data rates drop) and also with some Nordic chips, which perform just as well.

    Still, the more important question is: which TI chip CAN handle this (min. 4 devices)?

    regards