
Read characteristic longer than 20 bytes

Can anyone point me to examples of how to set up a read characteristic to be longer than 20 bytes? Presumably if the characteristic is longer than 20 bytes it will automatically start using the BLOB methods internally to transfer the data - or do I need to do something different to handle this data?

I can set the basic attribute up fine - but when the ReadAttrCB callback is called it is not at all clear what to do with offset, maxLen etc?

What is maxLen - the length of data required to be read into the destination?

Should offset be applied to the pValue pointer - or has that already had the offset applied to it?

I'm assuming the offset needs to be applied to the pAttr->pValue pointer to access the right bit of data.

Should pLen be set to the total size of the characteristic or the amount of data that was actually transferred across in this call?

Currently, in the callback I have the following:

     case SIMPLEPROFILE_CHAR5_UUID:
        *pLen = maxLen;
        VOID osal_memcpy( pValue, (pAttr->pValue)+offset, maxLen );
        break;

but it does not work and I'm really guessing what these parameters mean at this point.

Thanks,

Simon.

  • Yes, please answer this question,  TI.

    There is no API documentation for this, and I don't dare to assume anything, since the TI libraries are so sloppily written!

     

  • Hi guys,

    As far as I understand, you just copy from whatever offset you are given, up to the lesser of (actual_len - offset) and maxLen; it is then up to the other device to request a higher offset on the next read. A Blob Response will automatically be sent in reply to a Blob Request.

    If you insert a breakpoint, do you see that maxLen is ATT_MTU-1 and pLen is whatever length remains of the total parameter length?

    When you debug, do you get to the memcpy part of the code? I guess you have removed the if ( offset > 0 ) part of the code?

    Best regards,
    Aslak 

  • Thomas and Aslak,

    I did get this working by a bit of trial and error in the end.

    Yes maxLen is always 22 - so you return 22 bytes of data in each call apart from the last one where you send the remainder.

    So this is the code I ended up using in ReadAttrCB:

          case SIMPLEPROFILE_CHAR5_UUID:
            {
               uint8 len = maxLen;

               // clamp the final chunk so we never read past the end of the value
               if ( offset + maxLen > SIMPLEPROFILE_CHAR5_LEN )
                  len = SIMPLEPROFILE_CHAR5_LEN - offset;

               *pLen = len;  // bytes actually copied in this call
               VOID osal_memcpy( pValue, (pAttr->pValue) + offset, len );
            }
            break;
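    For concreteness, here is a small standalone sketch of the chunking that clamp produces, assuming a hypothetical 50-byte characteristic and the observed maxLen of 22 (the names `CHAR5_LEN` and `read_chunk_len` are illustrative, not TI stack API):

    ```c
    #include <stddef.h>

    enum { CHAR5_LEN = 50,   /* hypothetical total characteristic length */
           MAX_LEN   = 22 }; /* maxLen as observed in the read callback */

    /* Mirror of the length clamp in the callback: how many bytes a read
     * at the given offset returns. The client keeps issuing Read Blob
     * requests at increasing offsets until it gets a short final chunk. */
    size_t read_chunk_len(size_t offset)
    {
        size_t len = MAX_LEN;
        if (offset + MAX_LEN > CHAR5_LEN)
            len = CHAR5_LEN - offset;
        return len;
    }
    ```

    So a 50-byte value is served in three calls: 22 bytes at offset 0, 22 at offset 22, and the remaining 6 at offset 44.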

    Simon.

  • That worked. To read the value, I used the advanced command GATT_ReadLongCharValue.

  • Can the information in this thread be added to the documentation? I had to struggle quite a bit to figure this out, and it pains me to know others have as well. This thread does not come up when Google searching "ti e2e ble max characteristic transmit size", so the answer here isn't helping as many people as it should.

    Also, in the TI BLE Vendor Specific guide there is an ATT_ReadBlobRsp, but the documentation is poor. This leads me to believe that TI will create a GATT_ReadBlobRsp function. But in the interim, TI needs to document how peripherals respond to Blob Requests somewhere more discoverable than this thread.

  • I too am looking for some help on sending/receiving 'blob'-sized data in BLE - and am finding it very difficult to find anything. Does anyone have any sample code for reading/writing a packet of 255 bytes or more?

    Thanks

  • I agree that one has to really "dig" to find this info.

    The documentation states that:

    "Attribute value - encoding of the octet array is defined in the applicable profile. The maximum length of an attribute value shall be 512 octets."

    So is there an advantage to using the "Long" versions ("ReadLongChar", "WriteLongChar") of CharValues over partitioning data (which is larger than ATT_MTU octets) into smaller, individual characteristic units? Can I assume that the stack does a more efficient job of transferring larger data than I can via multiple smaller calls?

    Why does the stack distinguish between Long and "Regular" values/descriptors?  Is there that much overhead in checking for size in the stack to determine if everything can be taken care of in one PDU or if multiple PDUs are needed?

    Thanks...

  • Hi,

    To have a 'Long' characteristic in your service, all you have to do is something like this in the read callback

    static uint8 myService_ReadAttrCB( uint16 connHandle, gattAttribute_t *pAttr, 
                                       uint8 *pValue, uint8 *pLen, uint16 offset, uint8 maxLen )
    {
      bStatus_t status = SUCCESS;

      if ( pAttr->type.len == ATT_BT_UUID_SIZE )
      {
        // 16-bit UUID
        uint16 uuid = BUILD_UINT16( pAttr->type.uuid[0], pAttr->type.uuid[1] );
        switch ( uuid )
        {
          /* 16-bit uuid readCB switch */
        default:
          *pLen = 0;
          status = ATT_ERR_ATTR_NOT_FOUND;
          break;
        }
      }
      else if ( pAttr->type.len == ATT_UUID_SIZE )     // 128-bit UUID
      {
        if ( osal_memcmp(pAttr->type.uuid, myCharUUID, ATT_UUID_SIZE) )
        {
          // verify offset
          if ( offset >= sizeof(myCharValue) )
          {
            status = ATT_ERR_INVALID_OFFSET;
          }
          else
          {
            // determine read length
            *pLen = MIN( maxLen, (sizeof(myCharValue) - offset) );
            // copy data
            osal_memcpy( pValue, pAttr->pValue + offset, *pLen );
          }
        }
        else
        {
          status = ATT_ERR_ATTR_NOT_FOUND; // Should never get here!
        }
      }
      else
      {
        *pLen = 0;
        status = ATT_ERR_INVALID_HANDLE;
      }

      return status;
    }

    However, you will find that both long reads and writes are MUCH slower than e.g. several Write Without Response (write command) from client to server, or several Notifications from server to client.

    This is because the long reads will spend 2 connection intervals sending ~MTU bytes, because of the BLE protocol that requires Read Req->Read Rsp->Read Req->Read Rsp->Read Req.. etc, whereas notification and write command do not require a response, and so several packets can be transmitted in the same connection interval.
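    As a rough back-of-the-envelope illustration of that difference (all numbers here are assumptions for the sake of the arithmetic, not measurements: a 30 ms connection interval, ~22-byte blob chunks, and 4 notifications of 20 bytes per connection event):

    ```c
    /* Long read: the Read Req -> Read Rsp handshake costs roughly two
     * connection intervals per ~22-byte chunk. */
    double long_read_bytes_per_sec(double interval_s)
    {
        return 22.0 / (2.0 * interval_s);
    }

    /* Notifications: no ATT response, so several 20-byte frames can go
     * out in a single connection event. */
    double notify_bytes_per_sec(double interval_s)
    {
        return (4 * 20.0) / interval_s;
    }
    ```

    With a 30 ms interval that works out to roughly 370 B/s for long reads versus roughly 2.7 kB/s for notifications - the same order-of-magnitude gap described above.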

    Best regards,
    Aslak

  • Thanks, Aslak, that appears to put me on the right track. The sample callback code you have modified here looks quite a bit different from all the callbacks in the TI sample app, all of which reject anything with (offset > 0) and assume all UUIDs are 16-bit.

    But your suggestion does raise some further questions, sorry!

    I will be wanting to send big blocks of data (up to 8 kbytes) - is this going to be practical with the 'long' characteristic? What is the MTU size for the BLE protocol?

    Any hints for matching code at the client end?

    I'm a bit confused by your comments on notifications etc not requiring responses; without getting deep into the protocol how does this allow data to be resent if it is received corrupted? - does a small notification include its own error-correction algorithm?

  • What is TI's policy on sharing modified stack code? I want to upload a modified version of the sensor tag to Github for the general public, would that be OK? 

  • Hi Chris,

    You will find code like this in the HidAdvRemote project for the HID Report Map (long read), and the SensorTag will show how to deal with long UUIDs.

    The MTU size for the CC254x is 23 bytes of ATT payload, which for notifications means 20 bytes per frame.

    These frames can be sent back to back since notifications don't have an ATT response associated. But data will be automatically re-sent by the link layer if it's corrupted, since all frames are error checked and acked at this level. Nothing is ever lost, except if the connection is lost.

    For a really long transmission (like 8kB) you have no other practical choice than notifications. You can then implement some TCP window-like protocol where a WriteCmd is sent back now and again to acknowledge receipt. This means you should put some sequence number at the start of every notification.
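    A minimal sketch of such framing, assuming a 2-byte little-endian sequence number at the front of each 20-byte notification payload (`pack_frame`, `frames_needed` and the constants are hypothetical helpers for illustration, not stack API):

    ```c
    #include <stdint.h>
    #include <string.h>
    #include <stddef.h>

    #define FRAME_LEN 20                        /* notification payload on CC254x */
    #define SEQ_BYTES 2                         /* hypothetical 16-bit sequence number */
    #define DATA_PER_FRAME (FRAME_LEN - SEQ_BYTES)

    /* Pack one notification frame: little-endian sequence number followed
     * by up to 18 payload bytes. Returns the number of payload bytes used. */
    size_t pack_frame(uint8_t frame[FRAME_LEN], uint16_t seq,
                      const uint8_t *data, size_t remaining)
    {
        size_t n = remaining < DATA_PER_FRAME ? remaining : DATA_PER_FRAME;
        frame[0] = (uint8_t)(seq & 0xFF);
        frame[1] = (uint8_t)(seq >> 8);
        memcpy(&frame[SEQ_BYTES], data, n);
        return n;
    }

    /* Number of frames needed for a block of the given size. */
    size_t frames_needed(size_t block_len)
    {
        return (block_len + DATA_PER_FRAME - 1) / DATA_PER_FRAME;
    }
    ```

    An 8 kB block at 18 payload bytes per frame needs 456 frames; the receiver can detect gaps from the sequence numbers and acknowledge progress with an occasional WriteCmd, as suggested above.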

    Best regards,
    Aslak

  • Hi Peter,

    If you only upload the changed files, as a sort of patch, that's perfectly fine. There's some legal business related to the clickwrap license which would prevent distribution of the entire installer via github.

    If you want, you can use the TI Git server http://git.ti.com/

    That said, if you're up for it, our wiki page is editable, and feel free to add a page with a tutorial, embedding code or linking to some gists for example.

    Best regards,
    Aslak

  • Hi Aslak

    Thanks, this is getting me on the right track. I can't find the code you refer to in the HidAdvRemote project, but I have had partial success with sending back-to-back notifications - I have added a new service which sends 16 bytes at a time, and when the event is due it tries to repeat the operation 'x' times.

    The odd thing is that when 'x' is about 16-20 (i.e. after 16-20 notifications have been sent), no more are received at the client. It looks like some other event is preventing all of them from being sent, but none of my other events fire faster than every 2 seconds. Is something in the BLE stack causing this?

    These 16 or so notifications (250-300 bytes of data) take about 150 ms, so my 8 kbyte block would take less than 10 seconds, which is OK. But unless I can get more notifications sent back-to-back in one service, I'll have to create 30-odd more services!

  • Hi,

    Adding more services will not help, it's not tied to that at all. There is a common send buffer for all services/characteristics.

    What you can do is
    - Call GATT_Notification() or for that matter GATTServApp_ProcessCharCfg() until the return status is not 0x00 SUCCESS. This means the buffer is full.
    - Use the command HCI_EXT_ConnEventNoticeCmd( yourTaskID, yourOsalEvent ); to get a callback when a connection event is finished
    - Repeat from step 1.

    This will send as many notifications back to back as is possible.

    You may also want to, during the task init for your task, call the function HCI_EXT_OverlappedProcessingCmd(HCI_EXT_ENABLE_OVERLAPPED_PROCESSING) to enable sending more than 3-4 notifications per connection event.

    Best regards,
    Aslak

  • Thanks Aslak, all working well now. It's taking 15 seconds for my 8 kbytes; the 'overlapped processing' suggestion doesn't seem to make any difference (unless I have used it wrongly?)

    It took some time to sort out when to use the 'HCI_EXT_ConnEventNoticeCmd' function. Anyway, for anyone else wanting to do something similar, here's my (working) code below.

    p.s. the 'verify answer' button doesn't appear to work in my browser. I did try :-(

    //////////////////////////
      //      DATA STORE      //
      //////////////////////////
      if ( events & DATASTORE_READ_EVT )
      {
         HCI_EXT_OverlappedProcessingCmd(HCI_EXT_ENABLE_OVERLAPPED_PROCESSING);
         if ( gapProfileState == GAPROLE_CONNECTED )
         {
           // start to send data blocks, as many as we can before the BLE stack buffer fills
         
          for ( storedDataIndex = 0; storedDataIndex < TEMPSTORE_NOTIFICATIONS; storedDataIndex++)
          {
                readTempStoreData(storedDataIndex);
                if (TempStore_ConnectionStatus()!= SUCCESS)
                {
                  // buffer is full; wait until it has been emptied (comms task completed)
                  HCI_EXT_ConnEventNoticeCmd(sensorTag_TaskID,DATASTORE_CONN_EVT_COMPLETE_EVT);
                  break;
                }
          }
        }
        // set up event for next time. Period must be longer than it takes to send all required data
        osal_start_timerEx( sensorTag_TaskID, DATASTORE_READ_EVT, TEMPSTORE_DEFAULT_PERIOD );
        return (events ^ DATASTORE_READ_EVT);
      }
      if ( events & DATASTORE_CONN_EVT_COMPLETE_EVT )
      {
        // this event mustn't fire again until we want it to, i.e. when we are ready to send a new batch of data
        HCI_EXT_ConnEventNoticeCmd(sensorTag_TaskID,0);
        if ( storedDataIndex > 0)
        {
          // we have sent some blocks already, and the communication is now complete: carry on with sending more blocks
          for ( ; storedDataIndex < TEMPSTORE_NOTIFICATIONS; storedDataIndex++)
          {
                readTempStoreData(storedDataIndex);
                if (TempStore_ConnectionStatus()!= SUCCESS)
                {
                  HCI_EXT_ConnEventNoticeCmd(sensorTag_TaskID,DATASTORE_CONN_EVT_COMPLETE_EVT);
                  break;
                }
          }
          if (storedDataIndex >= TEMPSTORE_NOTIFICATIONS)
          {
            // all done!
            HCI_EXT_OverlappedProcessingCmd(HCI_EXT_DISABLE_OVERLAPPED_PROCESSING);
          }
        }
        return (events ^ DATASTORE_CONN_EVT_COMPLETE_EVT);
      }

    The function TempStore_ConnectionStatus() is defined in the service profile source file, where all the required arguments are already present:

    bStatus_t TempStore_ConnectionStatus( void )
    {
        return GATTServApp_ProcessCharCfg( sensorDataConfig, sensorData, FALSE,
                                           sensorAttrTable, GATT_NUM_ATTRS( sensorAttrTable ),
                                           INVALID_TASK_ID );
    }