

Slave Latency understanding

Hi, why should I use slave latency instead of a higher connection interval value, given that Effective Connection Interval = Connection Interval × (1 + Slave Latency)? I'd appreciate an example :)

from documentation:

  • Slave Latency: This parameter gives the slave (peripheral) device the option of skipping a number of connection events. This gives the peripheral device some flexibility, in that if it does not have any data to send it can choose to skip connection events and stay asleep, thus providing some power savings. The decision is up to the peripheral device.
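To make the quoted formula concrete, here is a quick sketch (the helper name is mine, not part of any BLE stack API):

```python
def effective_interval_ms(conn_interval_ms, slave_latency):
    """Effective Connection Interval = Connection Interval * (1 + Slave Latency).

    With nothing to send, this is the longest the peripheral may sleep
    between connection events it actually attends.
    """
    return conn_interval_ms * (1 + slave_latency)

# 30 ms interval, latency 10 -> peripheral must attend at least
# one event every 330 ms
print(effective_interval_ms(30, 10))  # 330
```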

 I don't understand when the peripheral makes the decision to skip a connection event :/

  • Hi Patryk

    There is a difference between slave latency and simply increasing the connection interval.

    With an increased connection interval, both sides are bound to that interval; with slave latency, only the peripheral is allowed to skip connection events. The central still has to wake up for each and every one of them.

    Because the peripheral is considered the "weaker" device (in terms of power), it makes sense to give it the ability to skip a given number of connection events when it has no data to send to the central, while still letting it send its data "faster" when it does.

    HTH

    Andre

  • Hello Patryk,

    Imagine the remote control scenario. If you press a button, you want the remote to "instantly" send a notification, with a maximum latency of maybe 30ms. However, you don't want the remote to have to wake up every 30ms simply to acknowledge the link (which is required with Bluetooth low energy technology). This is where slave latency is useful.

    As an example, with a connection interval of 30 ms and a slave latency of 10, the remote can sleep in 330 ms cycles (30 ms × (1 + 10)) instead of 30 ms, which reduces the overall power consumption. And since the master (a TV, for example) keeps the 30 ms sync, a notification can be sent "quickly" once a button is pushed.
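The trade-off in this remote-control example can be sketched numerically. The helper names below are illustrative, not TI API calls: the point is that slave latency and a long connection interval give the same idle wake-up rate, but very different worst-case button latency.

```python
def wakeups_per_second(conn_interval_ms, slave_latency):
    # With nothing to send, the peripheral only has to attend
    # one event per effective interval.
    return 1000.0 / (conn_interval_ms * (1 + slave_latency))

def worst_case_first_tx_ms(conn_interval_ms):
    # The central keeps every connection event, so a pressed button
    # can be reported at the very next event at the latest.
    return conn_interval_ms

# Option A: 30 ms interval, latency 10
print(wakeups_per_second(30, 10), worst_case_first_tx_ms(30))   # ~3.03 wake-ups/s, 30 ms
# Option B: 330 ms interval, latency 0 -- same idle power, 11x the latency
print(wakeups_per_second(330, 0), worst_case_first_tx_ms(330))  # ~3.03 wake-ups/s, 330 ms
```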

    I hope that made sense to you.

    Best Regards

    Joakim

  • Hi Patryk,

    It was also difficult for me to understand this.

    It helps me to think of it like this:

    Larger connection intervals are useful for sporadic communications.

    If you halve the connection interval and set a slave latency of 1, the effective interval stays the same, but you get the chance to transmit in the middle of it, just in case you need to :)

    Think about a watch notifying an incoming call.

    Bye!

  • Hi guys, again! :) It is still not clear to me how the peripheral device knows when not to skip a connection event. For example, if the slave latency is 10 and the connection interval is 30 ms, how does the peripheral know to handle an event after 300 ms? I don't understand it from the documentation. a) I thought that during slave latency the peripheral gets info from the central and somehow counts connection events. b) But when I read about connection events, there is a statement about clock sync. So how should I understand it? The peripheral clock and central clock are in sync, so during slave latency the peripheral does nothing but wait 10 × 30 ms = 300 ms (rather than counting connection events as I thought in a)?

    So in scenario a) the peripheral radio is on at every connection event, and in b) the radio is fully off, and that's why I think b) is correct :)
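The skip decision described in the documentation can be sketched as a small simulation (illustrative only; the function name is mine). Both sides derive every connection-event anchor point from the shared connection timing established at connection setup, so the peripheral can keep its radio fully off during skipped events and still know exactly when the next event starts; it only has to make sure it never skips more than `slave_latency` events in a row:

```python
def peripheral_schedule(num_events, slave_latency, tx_pending):
    """Simulate which connection events a peripheral listens to.

    num_events:    number of connection events to simulate
    slave_latency: max connection events the peripheral may skip in a row
    tx_pending:    set of event indices where the peripheral has data queued
    Returns the list of event indices where the radio is on.
    """
    awake = []
    skipped = 0
    for ev in range(num_events):
        if ev in tx_pending or skipped >= slave_latency:
            awake.append(ev)   # attend this event (send data or just sync)
            skipped = 0
        else:
            skipped += 1       # radio stays off; sleep until the next anchor
    return awake

# Latency 10, nothing to send: radio is on only at event 10 (every 11th event)
print(peripheral_schedule(12, 10, set()))
# Button press queued at event 3: peripheral joins that very event
print(peripheral_schedule(12, 10, {3}))
```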