This thread has been locked.

If you have a related question, please click the "Ask a related question" button in the top right corner. The newly created question will be automatically linked to this question.

CC2530: TI Z-Stack 3.0 - End Device reporting TABLE_FULL on first binding attempt

Part Number: CC2530
Other Parts Discussed in Thread: Z-STACK

Summary

On a CC2530-based End Device whose code is under development, the first binding request sent by the Coordinator gets a reply with a "Table Full" status, even though the End Device is configured to accept 4 binding entries with 4 cluster IDs each.

While debugging the binding process, the IAR debugger hangs in the 'osal_memcpy' function, so I am looking for expert advice on what I might be missing in my setup.

Discussion


Z-Stack API Documentation SWRA195 Version 1.10 says:

"The ZDO Binding API builds and sends ZDO binding requests and responses. All binding information (tables) are
kept in the Zigbee Coordinator. So, only the Zigbee Coordinator can receive bind requests."

The same document says:
"3.4.2 Binding Table Management
The APS Binding table is a table defined in static RAM (not allocated). The table size can be controlled by 2
configuration items in f8wConfig.cfg [NWK_MAX_BINDING_ENTRIES and MAX_BINDING_CLUSTER_IDS]. NWK_MAX_BINDING_ENTRIES defines the number of entries in the table and MAX_BINDING_CLUSTER_IDS defines
the number of clusters (16 bits each) in each entry. The table is defined in nwk_globals.c. The table is only included (along with the following functions) if REFLECTOR or COORDINATOR_BINDING"

The "Z-Stack 3.0 Developer's Guide Version 1.14" says more or less the same thing in paragraph "4.2 Configuring Source Binding" - I believe that I have implemented the configuration instructions.

The 2007 version of the Zigbee Application Specification indicates that a device can have a local binding table in which it adds or removes binding records.
The same document says that the apsBindingTable shall be maintained in persistent memory.

In one of the posts on this forum it was said that binding needs to be done only once, but the documentation indicates that the table is in static RAM without confirming that it is stored in non-volatile memory.


So, it is my understanding that the End Device has a binding table when REFLECTOR is defined and that the number of entries is set through the compile options set like this:
$ egrep '(BINDING|REFLECTOR)' * # check in the directory with cfg and xcl files
f8wConfig.cfg:-DREFLECTOR
f8wConfig.cfg:-DNWK_MAX_BINDING_ENTRIES=4
f8wConfig.cfg:-DMAX_BINDING_CLUSTER_IDS=4

Therefore, I expect the end device I am developing to accept a binding request.


However, on the actual device, the commissioning process with TI's Linux Gateway works, but when binding, a BIND_TABLE_FULL status is reported to the gateway by the end device.

tshark's output of sniffed and decrypted packets:
18075 2020-09-21 03:55:19.428833 0x0000 → 0x4993 ZigBee ZDP 67 Bind Request Src: TexasIns_00:21:27:e4:cf, Dst: 00:00:00_00:00:00:00:00, Thermostat (Cluster ID: 0x0201)
18079 2020-09-21 03:55:19.454334 0x4993 → 0x0000 ZigBee ZDP 47 Bind Response, Status: Table Full


The binding procedure works properly with all third-party HVAC thermostat devices that were tested with the Coordinator, so I expect it to work with this TI-based implementation as well (I believe that one of the third-party devices is also a TI-based product).


It seems that this is related to my understanding of the TI Z-Stack binding implementation.

Questions:

What might I have forgotten?
How should I debug this?
Is there one of the examples that I can look into to check best practices for the TI Z-Stack 3.0?
Do I need to do anything myself to ensure persistence of the binding table in the end device?

  • Hi le_top,

    Binding tables are stored in NV flash memory and persist across device resets.  You can reference the SampleTemperatureSensor example, which has proven to work well when binding to the ZIGBEE_LINUX_SENSOR_TO_CLOUD gateway solution.  You should capture sniffer logs of a working solution and compare them to help debug your own setup.  Be sure to clean/rebuild projects and erase all device memory before programming.  Make sure to register the proper endpoint with valid clusters/attributes initialized.  You can also debug ZDO_ProcessBindUnbindReq to figure out why the bindStat does not change from ZDP_TABLE_FULL (default) to ZDP_SUCCESS.
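    To make the bindStat logic concrete, here is a simplified, self-contained model of the flow Ryan describes: the status starts as ZDP_TABLE_FULL and only flips to ZDP_SUCCESS when a free table entry is found and filled. The table structure and the processBindReq function are illustrative, not the actual ZDO_ProcessBindUnbindReq implementation (the 0x8C value for TABLE_FULL follows the ZDP status codes, but verify against your stack's headers):

    ```c
    #include <assert.h>
    #include <stdint.h>

    #define NWK_MAX_BINDING_ENTRIES 4
    #define ZDP_SUCCESS     0x00
    #define ZDP_TABLE_FULL  0x8C   /* ZDP status code; check ZDProfile.h */

    typedef struct {
        uint8_t  inUse;
        uint16_t clusterId;
    } bind_entry_t;

    static bind_entry_t bindingTable[NWK_MAX_BINDING_ENTRIES];

    /* Simplified model of the status handling in ZDO_ProcessBindUnbindReq:
     * bindStat defaults to ZDP_TABLE_FULL and only becomes ZDP_SUCCESS if
     * an entry can be added. Any early exit (bad endpoint, unregistered
     * cluster, table genuinely full) leaves the default in place. */
    static uint8_t processBindReq(uint16_t clusterId) {
        uint8_t bindStat = ZDP_TABLE_FULL;   /* default, as in the real stack */
        for (int i = 0; i < NWK_MAX_BINDING_ENTRIES; i++) {
            if (!bindingTable[i].inUse) {
                bindingTable[i].inUse = 1;
                bindingTable[i].clusterId = clusterId;
                bindStat = ZDP_SUCCESS;
                break;
            }
        }
        return bindStat;
    }

    int main(void) {
        /* The first NWK_MAX_BINDING_ENTRIES binds succeed... */
        for (int i = 0; i < NWK_MAX_BINDING_ENTRIES; i++)
            assert(processBindReq(0x0201) == ZDP_SUCCESS);
        /* ...and the next one is rejected with Table Full. */
        assert(processBindReq(0x0201) == ZDP_TABLE_FULL);
        return 0;
    }
    ```

    The practical takeaway is that a "Table Full" response on the very first bind usually means the function took an early exit (for example, an endpoint or cluster validation failure) before ever attempting to add an entry, so the default status was never overwritten; stepping through ZDO_ProcessBindUnbindReq shows which check fails.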

    Regards,
    Ryan