
CC2530: Message to Group is not delivered

Part Number: CC2530
Other Parts Discussed in Thread: Z-STACK,

Hi,

We have a network containing one coordinator (ZNP) and many routers. The routers are assigned to groups.
Sometimes the write attribute command to a group is not delivered, even though we get an MT_AF_DATA_CONFIRM success.

What may cause this behavior?

  • Hi Aviral,

    Is the command being sent over-the-air? When you say "sometimes," do you mean that at other times the command is sent out successfully? What is the average rate of failure? I recall you are using Z-Stack 3.0.1; please provide sniffer logs and the code excerpt used to assign groups and send the command.

    Regards,
    Ryan
  • Hi Ryan,

    How can we confirm that it was sent over the air?

    As mentioned, we only know that the MT_AF_DATA_CONFIRM was successful.

    The rate of failure is very high; the message was delivered only once (on the first attempt) in 15-20 attempts.

    The Z-Stack version is 2.6.1 for this application.

    We do not have access to the sniffer logs as of now; we will try to get them. Will SmartRF Packet Sniffer logs be fine?

    Code Excerpts:

    1 - Add Group

        zcl_header.frame_control = CLUSTER_SPECIFIC;
        zcl_header.commandID         = ADD_GROUP;
        zcl_header.transId  = TransID++;
        DataRequest.DstAddr = grp_record->nwk_addr_list[i].nwk_addr;
        DataRequest.DstEndpoint = 0x0B;
        DataRequest.SrcEndpoint = (char)server_config.coordinator_endpoint;
        DataRequest.ClusterID = 0x0004;
        DataRequest.TransID = zcl_header.transId;
        DataRequest.Options = 0;
        DataRequest.Radius = 0xEE;
        DataRequest.Data[0] = zcl_header.frame_control;
        DataRequest.Data[1] = zcl_header.transId;
        DataRequest.Data[2] = zcl_header.commandID;
    
        memcpy(DataRequest.Data+3,&grp_record->group_id,2);   /* group ID, little-endian */
        /* ZCL encodes the group name as a character string: length byte first */
        DataRequest.Data[5] = (unsigned char)strlen(grp_record->group_name);
        memcpy(DataRequest.Data+6,grp_record->group_name,strlen(grp_record->group_name));
        DataRequest.Len = 3 + 2 + 1 + strlen(grp_record->group_name);
    
        rc = afDataRequest(&DataRequest);
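
    For comparison, here is a minimal, hypothetical sketch of the Add Group payload layout (the helper name and buffer handling are my own, not Z-Stack APIs). Note that the ZCL specification encodes the group name as a character string, i.e. a one-byte length prefix followed by the characters:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical helper: builds the ZCL Add Group command payload
 * (Groups cluster 0x0004, command 0x00). */
size_t build_add_group_payload(uint8_t *buf, uint8_t trans_id,
                               uint16_t group_id, const char *name)
{
    size_t name_len = strlen(name);
    buf[0] = 0x01;               /* frame control: cluster-specific */
    buf[1] = trans_id;           /* transaction sequence number */
    buf[2] = 0x00;               /* command ID: Add Group */
    buf[3] = group_id & 0xFF;    /* group ID, little-endian */
    buf[4] = group_id >> 8;
    buf[5] = (uint8_t)name_len;  /* ZCL character string length prefix */
    memcpy(buf + 6, name, name_len);
    return 6 + name_len;         /* total payload length */
}
```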

    2 - send write attribute command

        zcl_header.frame_control = 0x00;    /* general (profile-wide) command frame */
        zcl_header.commandID     = WRITE_ATTRIBUTE;
    
        memset(&DataRequest,0,sizeof(DataRequest));
        zcl_header.transId  = TransID++;
    
        DataRequest.DstAddrMode = 0x01;             /* Group Address*/
        GroupAddress = (uint16_t )group_detail->grp_id;
        memcpy(DataRequest.DstAddr,&GroupAddress,2);
        GroupAddress = DataRequest.DstAddr[0] | (DataRequest.DstAddr[1] << 8);
        DataRequest.DstEndpoint = 0x0B;
    
        DataRequest.DstPanID = 0x0000;
        DataRequest.SrcEndpoint = server_config.coordinator_endpoint;
        DataRequest.ClusterId = 0x0018;
        DataRequest.TransId = zcl_header.transId;
        DataRequest.Options = 0;
        DataRequest.Radius = 0xEE;
        /* ZCL Header*/
        DataRequest.Data[0] = zcl_header.frame_control;
        DataRequest.Data[1] = zcl_header.transId;
        DataRequest.Data[2] = zcl_header.commandID;
        DataRequest.Data[3] = 0x04;
        DataRequest.Data[4] = 0x00;
        DataRequest.Data[5] = 0x20;
        DataRequest.Data[6] = (unsigned char)(100 - (int)intensity_at_occ);
        DataRequest.Len = 3 + 1*4;
        rc = afDataRequestExt(&DataRequest);
    
    

  • How fast do you do the multicasting? Do you have a sniffer log from when you see the problem?
  • We seldom multicast, only when triggered by the user. During testing we may have done it once every 2-3 minutes at most.
    We do not have a sniffer log for the non-working condition.

    Multicast is working fine for other groups in the same network. Any clues on what may be going wrong?
  • Multicast delivery is not guaranteed, so it is difficult to judge without a sniffer log or further detail. Is the groupcast missing on the devices with weak signals?
  • No, the LQI values are good (>70) for the nodes of the group.

    How is the multicast to the groups delivered?
    Is it delivered to all nodes, with only the nodes that have the group ID actually processing the messages?
    Or is it delivered only to the nodes that have the group ID registered in the network?
  • It is delivered to all nodes under radio coverage, and then only the nodes with the group ID actually process these messages.
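    The filtering described above can be sketched conceptually as follows (this is illustrative pseudologic with invented names, not Z-Stack source): the frame is broadcast to every router in range, and each node consults its local group table before passing the frame up to an endpoint.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Conceptual model: the group IDs one endpoint has joined. */
typedef struct {
    uint16_t ids[8];
    size_t   count;
} group_table_t;

/* Returns 1 if this node is a member of the destination group and
 * should process the groupcast frame, 0 if it should silently drop it. */
int accepts_groupcast(const group_table_t *tbl, uint16_t dst_group)
{
    for (size_t i = 0; i < tbl->count; i++)
        if (tbl->ids[i] == dst_group)
            return 1;   /* member: pass frame up to the endpoint */
    return 0;           /* non-member: drop, even though it was received */
}
```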
  • Are there any noticeable code differences between this group and the working ones? Is it always the last group to be formed in the network? This could be an issue with the maximum group bindings (NWK_MAX_BINDING_ENTRIES, MAX_BINDING_CLUSTER_IDS) allowed on the CC2530 ZC, or with the way groups are handled in the archived Z-Stack 2.6.1; it is difficult to tell with a legacy stack version.

    Regards,
    Ryan
  • The same code is running on all the nodes. Group formation is done only at the time of installation; it was done months ago and it all worked fine then. It is only recently that we are facing this problem.
    As KY mentioned, the message is delivered to all the nodes, and each node checks whether it has the group ID before processing the message.
    In that case, how do group bindings (NWK_MAX_BINDING_ENTRIES, MAX_BINDING_CLUSTER_IDS) come into play?

    Can you please explain how the groupcast actually works, or point out a reference for the same?
  • Thank you for providing the background history. In this case you may need to increase APS_DEFAULT_NONMEMBER_RADIUS, MAX_BCAST_RETRIES, and PASSIVE_ACK_TIMEOUT based on KY's reference. Are you aware whether the network routing has been altered recently, or of other changes that could affect your group behavior?

    Regards,
    Ryan

  • Hi Ryan,
    I'm part of the same team as Aviral. Could you please explain what might cause the network routing to be altered?
    Also, we have a feature in our system, where we would issue individual configurations to nodes that are part of a group. The way we do this is we issue a remove group command to these nodes and then issue the unicast configuration. The reason we do this is because when an individual command is issued to a node, we do not want it to receive any multicast commands unless this individual configuration is cleared. The users use this feature regularly. Is that something that would alter the routing of the network? If so, how would we recover?

    Thanks,
    Ashwin
  • Hi Ashwin,

    If the router locations changed or encountered new forms of interference, then the link costs could become poor enough that a new route discovery process would begin. I was only curious whether nodes on the network somehow went outside the bounds of APS_DEFAULT_NONMEMBER_RADIUS or MAX_BCAST_RETRIES.

    As group tables are maintained in NV, constantly removing then re-adding the nodes could be causing a memory issue that is corrupting the tables. It is once again difficult to discern given the deprecated Z-Stack version and lack of sniffer logs. You should be able to send a unicast message by changing the address mode of the configuration and without removing the node from a group.
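
    As a sketch of that last suggestion (using a pared-down, purely illustrative mirror of the request struct from the excerpts above): unicast versus groupcast is selected by DstAddrMode (0x02 = 16-bit unicast, 0x01 = group, in Z-Stack's afAddrMode_t), so a unicast configuration does not require removing the node from its group first.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative mini version of the AF data request fields involved. */
typedef struct {
    uint8_t DstAddrMode;
    uint8_t DstAddr[2];
    uint8_t DstEndpoint;
} mini_req_t;

/* Point the request at a single node by its 16-bit network address,
 * leaving its group membership untouched. */
void set_unicast_dst(mini_req_t *req, uint16_t nwk_addr, uint8_t ep)
{
    req->DstAddrMode = 0x02;            /* Addr16Bit: plain unicast */
    req->DstAddr[0] = nwk_addr & 0xFF;  /* short address, little-endian */
    req->DstAddr[1] = nwk_addr >> 8;
    req->DstEndpoint = ep;
}
```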

    Regards,
    Ryan
  • Hi Ryan,

    Thanks for the update. I'll read up on APS_DEFAULT_NONMEMBER_RADIUS and MAX_BCAST_RETRIES and try to get a better understanding of these concepts. I'll get back to you in the next couple of days with packet sniffer logs (TI RF Packet Sniffer). But just to clarify my previous piece of information, I'll provide an example.

    Let's assume there are 3 nodes (1, 2, 3) that belong to a group with ID 1. If the user wants to send some command to node 1, we first issue a remove group command to node 1 and then unicast the command to node 1. Later, if the user wants to issue a command to group 1, the coordinator multicasts the command to group 1. This way, only nodes 2 and 3 receive the command. We do not want node 1 to receive multicasts for group 1 until the individual command to node 1 is cleared (at which point we send the add group command back to node 1).

    Hope this clarifies how our system works.

  • Hello Ashwin/Aviral,

    Do you require any further support or can this thread be closed?

    Regards,
    Ryan