
LP-CC2652RB: Could not set up automatic reporting of multiple endpoints' cluster attributes

Part Number: LP-CC2652RB
Other Parts Discussed in Thread: SYSCONFIG, Z-STACK, UNIFLASH,

Good morning.

I'm developing custom router firmware (based on zr_genericapp) acting as a multi-endpoint smart plug.

Following Ryan's hints and YK's post about multiple endpoints, I successfully managed to set up 8 endpoints of "Smart Plug" type (with Metering and On/Off clusters), with their attributes and callbacks (tested and working).

Now I'm trying to configure reporting of the OnOff and CurrentSummationDelivered attributes for the two clusters of each of the 8 endpoints.

I'm using Zigbee2MQTT on a ZNP coordinator, and from its frontend I can successfully send the Configure Reporting messages. But I'm stuck on some kind of memory limitation, which I'll try to explain:

If I configure just 4 endpoints, each attribute is correctly reported and I can see a total of 8 reportAttribute messages on the frontend (4 OnOff and 4 CurrentSummationDelivered).

If I configure more than 4 endpoints, reporting works only for the last endpoints' attributes.

I found that I have to adjust some defines in Stack/bdb/bdb_interface.h:

//Your JOB: Set this value according to your application
//Maximum size in bytes used by reportable attributes registered in any
//endpoint for the application (for analog attributes)

BDBREPORTING_MAX_ANALOG_ATTR_SIZE 8 - I suspect I have to make it at least 64 (8 CurrentSummationDelivered attributes times 8 bytes each, uint64), but the maximum supported is 8. What am I missing?

//Your JOB: Set this value according to your application
//Max num of cluster with reportable attributes in any endpoint
//(eg. 2 endpoints with same cluster with reportable attributes counts as 2,
//regardless of the number of reportable attributes in the cluster)

BDB_MAX_CLUSTERENDPOINTS_REPORTING 16 (8 endpoints times 2 clusters with reportable attributes); the default was 5.
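
For reference, after these changes the relevant part of my Stack/bdb/bdb_interface.h looks roughly like this (only these two values were edited):

//Maximum size in bytes used by reportable attributes registered in any
//endpoint for the application (for analog attributes)
#define BDBREPORTING_MAX_ANALOG_ATTR_SIZE    8    // largest analog attribute (CurrentSummationDelivered)

//Max num of cluster with reportable attributes in any endpoint
#define BDB_MAX_CLUSTERENDPOINTS_REPORTING   16   // 8 endpoints x 2 clusters with reportable attributes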

In this situation, after the Configure Reporting messages from the coordinator, the module only reports the last 4-5 endpoints' attributes (irregularly) once, then freezes.

I also increased the binding table size to 16 (the default was 4).

Any advice would be very much appreciated.

Thank you in advance.

Roberto

  • Hello Roberto,

    Thank you for the detailed description.  BDBREPORTING_MAX_ANALOG_ATTR_SIZE is the maximum size in bytes of an individual analog attribute and thus should be 4 or 8, according to the attribute's needs.  Setting BDB_MAX_CLUSTERENDPOINTS_REPORTING to 16 seems correct based on your application description.  I assume that you increased SysConfig -> Z-Stack -> Advanced -> Max Table Sizes -> Binding Table Size?

    Otherwise, it appears that you have taken care of the important definitions.  You could increase them past the expected requirements to determine if that changes behavior.  Can you debug your zr_genericapp to discover why it freezes? 

    You could try to increase non-volatile memory allocation although I do not believe this is the issue.  Are you ensuring complete device erasure or factory resetting (i.e. deleting all NV memory) before re-programming your new settings? 

    If you removed one reportable attribute cluster then would all eight endpoints report the other cluster attribute?  This may require more details on how you set up the clusters/attributes and endpoints in your application.

    Regards,
    Ryan

  • Hello Ryan. Thanks again for your support.

    Yes, I have increased the Binding Table Size to 16 in SysConfig, but it doesn't seem to be enough.

    Then I set BDBREPORTING_MAX_ANALOG_ATTR_SIZE to 8 (because the CurrentSummationDelivered attribute is a uint64_t) and BDB_MAX_CLUSTERENDPOINTS_REPORTING to 16 (because each of the 8 endpoints has 2 clusters with reportable attributes).

    I have also increased the non-volatile memory from 2 to 3 pages. No difference.

    Now I can only try to debug the Z-Stack thread, but I think it won't be easy.
    What other information could I look for?

  • Can you try to increase the Binding Table Size and BDB_MAX_CLUSTERENDPOINTS_REPORTING to 32 each?  Are you ensuring complete device erasure or factory resetting (i.e. deleting all NV memory) before re-programming your new settings? If you removed one reportable attribute cluster then would all eight endpoints report the other cluster attribute?  Do you have any recommendations for recreating this behavior on my system?

    Regards,
    Ryan

  • I increased BDB_MAX_CLUSTERENDPOINTS_REPORTING and Binding Table Size to 32.

    I usually used BTN-2 + reset on the LaunchPad to erase NV memory between re-flashes; now I'm also doing a full flash erase in UniFlash to be sure.

    - The current reporting situation, after all 16 Configure Reporting messages sent by the coordinator, is the periodic reception of the same partial pattern like this:

    As you can see, my device is missing endpoints 2 and 1, but this is not consistent; sometimes it is worse and more endpoints are missed, in a random pattern.

    After a while, in this case after 6 minutes, it completely stops sending reports and responding to commands ("Toggle" commands and read attribute attempts result in timeouts). Also BTN-2, which I usually use to leave the network, does not work anymore.

    - If I configure only one reportable attribute per endpoint (for example, only the "OnOff" attribute and NOT "CurrentSummationDelivered"), sending only 8 Configure Reporting messages from the coordinator, then I correctly receive the single attribute from each of the 8 endpoints, no crashes occur, and the leave button keeps working. The same happens if I report only "CurrentSummationDelivered" instead of "OnOff".

    If you want to reproduce this on an LP-CC2652RB board like mine, I think you can just follow YK's article to create the 8 endpoints. Below I share zclGenericApp_Attrs, cloned 8 times (zclGenericApp_Attrs_2, zclGenericApp_Attrs_3, etc.), the zclGenericApp_SimpleDesc shared by the 8 endpoints, and the zclGenericApp_Init() function, still experimental, in which I will eventually try to make the endpoint generation procedural for any number of endpoints, but that's another story.

    static void zclGenericApp_Init( void )
    {
      ///////////// Add the led ////////////////
      LED_Params ledParams;
      LED_Params_init(&ledParams);
      gRedLedHandle = LED_open(CONFIG_LED_RED, &ledParams);
      //////////////////////////////////////////////
    
      ////////////// Add my timer ////////////////
      Timer_Params_init(&myTimerParams);
      myTimerParams.periodUnits = Timer_PERIOD_US;
      myTimerParams.period = 1e3;
      myTimerParams.timerMode  = Timer_CONTINUOUS_CALLBACK;
      myTimerParams.timerCallback = myTimerCallbackFunction;
      myTimerHandle = Timer_open(CONFIG_TIMER_0, &myTimerParams);
    
      if (myTimerHandle == NULL) {
          // Timer_open() failed
          while (1);
      }
    
      if (Timer_start(myTimerHandle) == Timer_STATUS_ERROR) {
          // Timer_start() failed
          while (1);
      }
      //////////////////////////////////////////////
    
      ////////////// Add Meter Inputs /////////////
      GPIO_setCallback(CONFIG_GPIO_PULSEIN_MET_1, MeterPulseInCallback);
      GPIO_enableInt(CONFIG_GPIO_PULSEIN_MET_1);
      //////////////////////////////////////////////
    
      // Set destination address to indirect
      zclGenericApp_DstAddr.addrMode = (afAddrMode_t)AddrNotPresent;
      zclGenericApp_DstAddr.endPoint = 0;
      zclGenericApp_DstAddr.addr.shortAddr = 0;
    
      Initialize_UI();
    
      for (uint8_t endId = 1; endId <= ENDPOINTS_NUMBER; endId++) {
          //Register Endpoints
          zclGenericAppEpDesc[endId-1].endPoint = endId;
          zclGenericApp_SimpleDesc.EndPoint = endId;
          zclGenericAppEpDesc[endId-1].simpleDesc = &zclGenericApp_SimpleDesc;
    
          if (!zclport_registerEndpoint(appServiceTaskId, &zclGenericAppEpDesc[endId-1])) {
              // Failed to register endpoint
              LED_setOn(gRedLedHandle, LED_BRIGHTNESS_MAX);
          }
    
          // Register the ZCL General Cluster Library callback functions
          if (zclGeneral_RegisterCmdCallbacks( endId, &zclGenericApp_CmdCallbacks ) != SUCCESS) {
              // Failed to allocate callbacks
              LED_setOn(gRedLedHandle, LED_BRIGHTNESS_MAX);
          }
    
          if (zclSE_RegisterCmdCallbacks( endId, &zclGenericApp_SECmdCallbacks ) != SUCCESS) {
              // Failed to allocate callbacks
              LED_setOn(gRedLedHandle, LED_BRIGHTNESS_MAX);
          }
    
          // Register the application's attribute list
          zclGenericApp_ResetAttributesToDefaultValues();
    
          switch (endId)
          {
          case 1:
              if (zcl_registerAttrList(endId, zclGenericApp_NumAttributes, zclGenericApp_Attrs) != SUCCESS) {
                // Failed to register attributes list
                LED_setOn(gRedLedHandle, LED_BRIGHTNESS_MAX);
              };
            break;
          case 2:
              if (zcl_registerAttrList(endId, zclGenericApp_NumAttributes, zclGenericApp_Attrs_2) != SUCCESS) {
                // Failed to register attributes list
                LED_setOn(gRedLedHandle, LED_BRIGHTNESS_MAX);
              };
            break;
          case 3:
              if (zcl_registerAttrList(endId, zclGenericApp_NumAttributes, zclGenericApp_Attrs_3) != SUCCESS) {
                // Failed to register attributes list
                LED_setOn(gRedLedHandle, LED_BRIGHTNESS_MAX);
              };
            break;
          case 4:
              if (zcl_registerAttrList(endId, zclGenericApp_NumAttributes, zclGenericApp_Attrs_4) != SUCCESS) {
                // Failed to register attributes list
                LED_setOn(gRedLedHandle, LED_BRIGHTNESS_MAX);
              };
            break;
          case 5:
              if (zcl_registerAttrList(endId, zclGenericApp_NumAttributes, zclGenericApp_Attrs_5) != SUCCESS) {
                // Failed to register attributes list
                LED_setOn(gRedLedHandle, LED_BRIGHTNESS_MAX);
              };
            break;
          case 6:
              if (zcl_registerAttrList(endId, zclGenericApp_NumAttributes, zclGenericApp_Attrs_6) != SUCCESS) {
                // Failed to register attributes list
                LED_setOn(gRedLedHandle, LED_BRIGHTNESS_MAX);
              };
            break;
          case 7:
              if (zcl_registerAttrList(endId, zclGenericApp_NumAttributes, zclGenericApp_Attrs_7) != SUCCESS) {
                // Failed to register attributes list
                LED_setOn(gRedLedHandle, LED_BRIGHTNESS_MAX);
              };
            break;
          case 8:
              if (zcl_registerAttrList(endId, zclGenericApp_NumAttributes, zclGenericApp_Attrs_8) != SUCCESS) {
                // Failed to register attributes list
                LED_setOn(gRedLedHandle, LED_BRIGHTNESS_MAX);
              };
            break;
          default:
            break;
          }
    
            // TODO find a better way like this
            //       if (zcl_registerAttrList(endId, zclGenericApp_NumAttributes, zclGenericApp_Attrs_Array[endId]) != SUCCESS) {
            //           // Failed to register attributes list
            //           LED_setOn(gRedLedHandle, LED_BRIGHTNESS_MAX);
            //       };
    
          // Register the Application to receive the unprocessed Foundation command/response messages
          if (!zclport_registerZclHandleExternal(endId, zclGenericApp_ProcessIncomingMsg)) {
              // Failed to register the application
              LED_setOn(gRedLedHandle, LED_BRIGHTNESS_MAX);
          };
      }
    
    #if !defined (DISABLE_GREENPOWER_BASIC_PROXY) && (ZG_BUILD_RTR_TYPE)
      gp_endpointInit(appServiceTaskId);
    #endif
    
      //Write the bdb initialization parameters
      zclGenericApp_initParameters();
    
      //Setup ZDO callbacks
      SetupZStackCallbacks();
    
    for (uint8_t endId = 1; endId <= ENDPOINTS_NUMBER; endId++) {
    
        #ifdef ZCL_DISCOVER
          // Register the application's command list
          zcl_registerCmdList( endId, zclCmdsArraySize, zclGenericApp_Cmds );
        #endif
    
         #ifdef ZCL_DIAGNOSTIC
           // Register the application's callback function to read/write attribute data.
           // This is only required when the attribute data format is unknown to ZCL.
           zcl_registerReadWriteCB( endId, zclDiagnostic_ReadWriteAttrCB, NULL );
    
           if ( zclDiagnostic_InitStats() == ZSuccess )
           {
             // Here the user could start the timer to save Diagnostics to NV
           }
         #endif
    
    }
    }  // end of zclGenericApp_Init()

    CONST zclAttrRec_t zclGenericApp_Attrs[] =
    {
      // *** General Basic Cluster Attributes ***
      {
        ZCL_CLUSTER_ID_GENERAL_BASIC,             // Cluster IDs - defined in the foundation (ie. zcl.h)
        {  // Attribute record
          ATTRID_BASIC_HW_VERSION,            // Attribute ID - Found in Cluster Library header (ie. zcl_general.h)
          ZCL_DATATYPE_UINT8,                 // Data Type - found in zcl.h
          ACCESS_CONTROL_READ,                // Variable access control - found in zcl.h
          (void *)&zclGenericApp_HWRevision  // Pointer to attribute variable
        }
      },
      {
        ZCL_CLUSTER_ID_GENERAL_BASIC,
        { // Attribute record
          ATTRID_BASIC_ZCL_VERSION,
          ZCL_DATATYPE_UINT8,
          ACCESS_CONTROL_READ,
          (void *)&zclGenericApp_ZCLVersion
        }
      },
      {
        ZCL_CLUSTER_ID_GENERAL_BASIC,
        { // Attribute record
          ATTRID_BASIC_MANUFACTURER_NAME,
          ZCL_DATATYPE_CHAR_STR,
          ACCESS_CONTROL_READ,
          (void *)zclGenericApp_ManufacturerName
        }
      },
      {
        ZCL_CLUSTER_ID_GENERAL_BASIC,
        { // Attribute record
          ATTRID_BASIC_MODEL_IDENTIFIER,
          ZCL_DATATYPE_CHAR_STR,
          ACCESS_CONTROL_READ,
          (void *)zclGenericApp_ModelName
        }
      },
      {
        ZCL_CLUSTER_ID_GENERAL_BASIC,
        { // Attribute record
          ATTRID_BASIC_POWER_SOURCE,
          ZCL_DATATYPE_ENUM8,
          ACCESS_CONTROL_READ,
          (void *)&zclGenericApp_PowerSource
        }
      },
      {
        ZCL_CLUSTER_ID_GENERAL_BASIC,
        { // Attribute record
          ATTRID_BASIC_PHYSICAL_ENVIRONMENT,
          ZCL_DATATYPE_ENUM8,
          (ACCESS_CONTROL_READ | ACCESS_CONTROL_WRITE),
          (void *)&zclGenericApp_PhysicalEnvironment
        }
      },
      {
        ZCL_CLUSTER_ID_GENERAL_BASIC,
        {  // Attribute record
          ATTRID_CLUSTER_REVISION,
          ZCL_DATATYPE_UINT16,
          ACCESS_CONTROL_READ,
          (void *)&zclGenericApp_basic_clusterRevision
        }
      },
    
    #ifdef ZCL_IDENTIFY
      // *** Identify Cluster Attribute ***
      {
        ZCL_CLUSTER_ID_GENERAL_IDENTIFY,
        { // Attribute record
          ATTRID_IDENTIFY_IDENTIFY_TIME,
          ZCL_DATATYPE_UINT16,
          (ACCESS_CONTROL_READ | ACCESS_CONTROL_WRITE),
          (void *)&zclGenericApp_IdentifyTime
        }
      },
    
      {
        ZCL_CLUSTER_ID_GENERAL_IDENTIFY,
        {  // Attribute record
          ATTRID_CLUSTER_REVISION,
          ZCL_DATATYPE_UINT16,
          ACCESS_CONTROL_READ,
          (void *)&zclGenericApp_identify_clusterRevision
        }
      },
    #endif
    
    //////////////////////////////////// ADDED ATTRIBUTES /////////////
    #ifdef ZCL_SE_METERING_SERVER
      // *** Smart Energy Metering Server Cluster Attributes ***
      {
       ZCL_CLUSTER_ID_SE_METERING,
        { // Attribute record
          ATTRID_SE_METERING_CURR_SUMM_DLVD,
          ZCL_DATATYPE_UINT48,
    //      ACCESS_CONTROL_READ, // Set attribute reportable manually
          ACCESS_CONTROL_READ | ACCESS_CONTROL_WRITE | ACCESS_REPORTABLE,
          (void *)&zclGenericApp_CurrentSummationDelivered[0]
        }
      },
      {
       ZCL_CLUSTER_ID_SE_METERING,
        { // Attribute record
          ATTRID_SE_METERING_STATUS,
          ZCL_DATATYPE_BITMAP8,
          ACCESS_CONTROL_READ,
          (void *)&zclGenericApp_Metering_Status[0]
        }
      },
      {
       ZCL_CLUSTER_ID_SE_METERING,
        { // Attribute record
          ATTRID_SE_METERING_UOM,
          ZCL_DATATYPE_ENUM8,
          ACCESS_CONTROL_READ,
          (void *)&zclGenericApp_UnitofMeasure[0]
        }
      },
      {
       ZCL_CLUSTER_ID_SE_METERING,
        { // Attribute record
          ATTRID_SE_METERING_SUMM_FMTG,
          ZCL_DATATYPE_BITMAP8,
          ACCESS_CONTROL_READ,
          (void *)&zclGenericApp_SummationFormatting[0]
        }
      },
      {
       ZCL_CLUSTER_ID_SE_METERING,
        { // Attribute record
          ATTRID_SE_METERING_DEVICE_TYPE,
          ZCL_DATATYPE_BITMAP8,
          ACCESS_CONTROL_READ,
          (void *)&zclGenericApp_MeteringDeviceType[0]
        }
      },
      {
       ZCL_CLUSTER_ID_SE_METERING,
        {  // Attribute record
          ATTRID_CLUSTER_REVISION,
          ZCL_DATATYPE_UINT16,
          ACCESS_CONTROL_READ,
          (void *)&zclGenericApp_metering_clusterRevision
        }
      },
    #endif // ZCL_SE_METERING_SERVER
    #ifdef ZCL_ON_OFF
      // *** On/Off Cluster Attributes ***
      {
        ZCL_CLUSTER_ID_GENERAL_ON_OFF,
        { // Attribute record
          ATTRID_ON_OFF_ON_OFF,
          ZCL_DATATYPE_BOOLEAN,
          ACCESS_CONTROL_READ | ACCESS_REPORTABLE | ACCESS_CONTROL_WRITE,
          (void *)&zclGenericApp_OnOff[0]
        }
      },
      {
        ZCL_CLUSTER_ID_GENERAL_ON_OFF,
        {  // Attribute record
          ATTRID_CLUSTER_REVISION,
          ZCL_DATATYPE_UINT16,
          ACCESS_CONTROL_READ,
          (void *)&zclGenericApp_onoff_clusterRevision
        }
      },
    #endif // ZCL_ON_OFF
    ///////////////////////////////////////////////////////////////////
    
    };
    SimpleDescriptionFormat_t zclGenericApp_SimpleDesc =
    {
      GENERICAPP_ENDPOINT,                  //  int Endpoint;
      ZCL_HA_PROFILE_ID,                     //  uint16_t AppProfId;
      ZCL_DEVICEID_SMART_PLUG,              //  uint16_t AppDeviceId;
      GENERICAPP_DEVICE_VERSION,            //  int   AppDevVer:4;
      GENERICAPP_FLAGS,                     //  int   AppFlags:4;
      ZCLGENERICAPP_MAX_INCLUSTERS,         //  byte  AppNumInClusters;
      (cId_t *)zclGenericApp_InClusterList, //  byte *pAppInClusterList;
      ZCLGENERICAPP_MAX_OUTCLUSTERS,        //  byte  AppNumOutClusters;
      (cId_t *)zclGenericApp_OutClusterList //  byte *pAppOutClusterList;
    };
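
    About the TODO in the init function: a minimal sketch of the procedural registration I have in mind, assuming I keep the eight attribute table names above, would look like this (not tested yet, shown only to clarify the intent):

    // Hypothetical, untested sketch: table of pointers to the per-endpoint
    // attribute lists, indexed by (endpoint number - 1).
    CONST zclAttrRec_t *zclGenericApp_Attrs_Array[ENDPOINTS_NUMBER] =
    {
      zclGenericApp_Attrs,   zclGenericApp_Attrs_2, zclGenericApp_Attrs_3, zclGenericApp_Attrs_4,
      zclGenericApp_Attrs_5, zclGenericApp_Attrs_6, zclGenericApp_Attrs_7, zclGenericApp_Attrs_8
    };

    // ...which would replace the switch inside the endpoint loop:
    if (zcl_registerAttrList(endId, zclGenericApp_NumAttributes,
                             zclGenericApp_Attrs_Array[endId - 1]) != SUCCESS) {
        // Failed to register attributes list
        LED_setOn(gRedLedHandle, LED_BRIGHTNESS_MAX);
    }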

    I know it's not the most fun to go through. But I think I followed all the guidelines and hints correctly, so this kind of behaviour and these freezes are not promising.

    Also, a detail which I hope is not relevant: in zclGenericApp_Attrs I made the CurrentSummationDelivered attribute reportable and writable by setting the access control flags to ACCESS_CONTROL_READ | ACCESS_CONTROL_WRITE | ACCESS_REPORTABLE. By default it was read-only, having only ACCESS_CONTROL_READ.

    Thank you very much for any advice.

    Have a nice day.

    Roberto

  • Thank you for all of this information.  It seems like a memory leak based on your description of the crash behavior.  It must be caused by reporting multiple attributes per endpoint if it does not happen when only reporting one attribute.  What's odd is that the behavior did not appear when you reduced the number of endpoints.  Are they all queuing to be sent at the same time?  You should consider increasing the NWK_MAX_DATABUFS_* defines inside nwk_globals.c, and it would be best if these endpoint attribute reports could be staggered.  Do you have a sniffer log of this behavior?  It will take some time for me to replicate and further investigate this behavior.  Setting CurrentSummationDelivered to ACCESS_REPORTABLE is not advised by the ZCL 7 Specification, but it should not affect Z-Stack operation.

    Regards,
    Ryan

  • Good morning Ryan. Thank you again for the suggestions.

    I've tried to increase those buffer limits like this:

    // Maximums for the data buffer queue
    // #define NWK_MAX_DATABUFS_WAITING    8     // Waiting to be sent to MAC
    // #define NWK_MAX_DATABUFS_SCHEDULED  5     // Timed messages to be sent
    // #define NWK_MAX_DATABUFS_CONFIRMED  5     // Held after MAC confirms
    // #define NWK_MAX_DATABUFS_TOTAL      12    // Total number of buffers
    
    #define NWK_MAX_DATABUFS_WAITING    (8*4)     // Waiting to be sent to MAC
    #define NWK_MAX_DATABUFS_SCHEDULED  (5*4)     // Timed messages to be sent
    #define NWK_MAX_DATABUFS_CONFIRMED  (5*4)     // Held after MAC confirms
    #define NWK_MAX_DATABUFS_TOTAL      (12*4)    // Total number of buffers

    No differences.

    Then I tried to send the report configuration messages one by one from the coordinator frontend. It seems to work perfectly up to a maximum of 8 reportable attributes (whether that is 8 different endpoints with one attribute each, or 4 endpoints with 2 reportable attributes each); it starts freezing and missing messages when I try to add further reportable attributes beyond those 8.

    For now I'm not able to stagger the reports in time; it seems that the chip automatically tries to report all of them at the same time, regardless of the timing of the report configuration messages I sent to it beforehand. Do you remember if there is a setting for that? I searched without success.

    At the moment I haven't set up a sniffer. We have some other LaunchPads here and I know I can use one for that purpose, but I avoided it because at first glance it didn't seem useful for my needs (I saw some Wireshark screenshots in the docs and, after all, you can only see low-level Zigbee messages there, with no decoding of any kind; maybe there is more elsewhere). I will try to set up the sniffer and post the log.

    Do you remember if CCS provides some kind of memory usage monitor in debug mode which could help identify when the problem starts? Or do you have any advice on debugging the Z-Stack thread in order to find the critical point? It seems like searching for a needle in a haystack to me.

    Roberto

  • Hey Roberto,

    Thanks for trying out those new definitions.  Could you also try to increase the MAC_CFG_TX* values in Stack/Config/f8wrouter.opts?

    The attributes report based on a single timer, hence it is difficult to stagger reports, but you could configure a different maxReportInt for different endpoints/clusters so that they report at different intervals.
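
    For example, here is a rough device-side sketch following the pattern used in the sample applications, set during initialization before commissioning (the request structure and field names depend on your SDK version, and the interval values are only placeholders); the coordinator's Configure Reporting commands can of course also carry different maxReportInt values directly:

    // Rough sketch (sample-app pattern): give each cluster's reportable attribute
    // a different default maxReportInt so the periodic reports do not all fire together.
    zstack_bdbRepAddAttrCfgRecordDefaultToListReq_t req = {0};
    uint8_t reportableChange[BDBREPORTING_MAX_ANALOG_ATTR_SIZE] = {1};  // minimum reportable change

    req.endpoint     = 1;
    req.cluster      = ZCL_CLUSTER_ID_SE_METERING;
    req.attrID       = ATTRID_SE_METERING_CURR_SUMM_DLVD;
    req.minReportInt = 0;
    req.maxReportInt = 30;   // e.g. 30 s for the metering attribute
    memcpy(req.reportableChange, reportableChange, BDBREPORTING_MAX_ANALOG_ATTR_SIZE);
    Zstackapi_bdbRepAddAttrCfgRecordDefaultToListReq(appServiceTaskId, &req);

    req.cluster      = ZCL_CLUSTER_ID_GENERAL_ON_OFF;
    req.attrID       = ATTRID_ON_OFF_ON_OFF;
    req.maxReportInt = 10;   // e.g. 10 s for the On/Off attribute
    Zstackapi_bdbRepAddAttrCfgRecordDefaultToListReq(appServiceTaskId, &req);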

    Please see the Debugging section of the Z-Stack User's Guide.

    Regards,
    Ryan

  • Hello Ryan. Good Morning.

    I tried increasing those limits like this:

    -DMAC_CFG_TX_DATA_MAX=(5*3)
    -DMAC_CFG_TX_MAX=(8*3)
    -DMAC_CFG_RX_MAX=(5*3)

    No difference.

    Then I finally managed to differentiate the maxReportInt parameter for the two categories of cluster attributes (7 for On/Off and 13 for CurrentSummationDelivered), and this allowed me to receive all the reports from the device perfectly. So we have confirmation that the problem occurs when the transmissions are scheduled together; the problem also reoccurs on period collisions if I set maxReportInt values of 5 and 10.

    This could be a very useful feature, but my doubts aren't over: our project consists of a straight line of N routers (N of at least 30 up to a maximum of 200) using CC2652P modules (I'm developing on the RB LaunchPads for now).

    Is the automatic attribute reporting approach really more convenient than polling? The number of hops is of the same order of magnitude; here is a comparison between the two configurations (x = devices, y = minimum hops to get the metering of each node):

    But now I'm worried about the possible overload of reports scheduled at the same time in the reporting approach. Is the protocol smart enough to avoid missed reports with so many devices scheduled to report their attributes within minutes? Or would it be better for me to simply poll each of them synchronously to be sure?

    I will probably continue with these and other questions in another thread. Thank you, Ryan, for all the extremely useful and prompt support!

    Have a nice day,

    Roberto

  • I'm glad to hear that changing the reporting interval improved your system's performance.  I will check with the Software R&D Team regarding limitations on queueing multiple messages into the output buffer simultaneously.

    Whether polling or reporting is chosen is the developer's decision based on their application needs.  Reporting can still be enabled with maxInterval set to BDBREPORTING_NOPERIODIC and minInterval set to BDBREPORTING_NOLIMIT so that a value is reported only when Zstackapi_bdbRepChangedAttrValueReq() is called.
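
    A minimal sketch of that report-on-change pattern, following the sample applications (adjust the endpoint, cluster, and attribute names to your project):

    // Report only when the application changes the attribute value (e.g. after a meter pulse):
    zstack_bdbRepChangedAttrValueReq_t req;

    req.endpoint = 1;                              // endpoint owning the attribute
    req.cluster  = ZCL_CLUSTER_ID_GENERAL_ON_OFF;
    req.attrID   = ATTRID_ON_OFF_ON_OFF;
    Zstackapi_bdbRepChangedAttrValueReq(appServiceTaskId, &req);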

    There is also a limit on the number of RX buffers and the SimpleLink device's ability to process multiple incoming packets at once, thus a solid method of staggering reports will need to be implemented.  The number of hops is also concerning as robust Zigbee mesh networks typically consist of no more than 250 ZR & ZED devices zoned across a maximum of about ten hops.  An example of Z-Stack large network performance is provided in SWRA650.  You will need to determine how to ensure that a node 30 hops away can reliably reach its destination.

    Regards,
    Ryan