CC3220SF: AWS IoT shadow update

Part Number: CC3220SF


Hi,

My code was working correctly until I added a few more entries into the JSON document to be reported to the Shadow.

Initially I was reporting about twenty entries via aws_iot_shadow_add_reported; then I increased the count by another ten (and added the corresponding entries to the Shadow as well). After that I started getting an "Update Timeout--" message from aws_iot_shadow_update, even though the Shadow appeared to update correctly.

Could you advise how to get rid of the "Update Timeout--" message?

Thanks.

David

 

  • Hi David,

    Are you getting any error code associated with the shadow update timeouts? If you are increasing the amount of data in the shadow documents, then you will probably also need to increase the sizes of the buffers used by the AWS library. Please see my post here for info on how to do that:

    https://e2e.ti.com/support/wireless-connectivity/wifi/f/968/p/811035/3003070#3003070

    Does increasing the buffer size help?
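
    For reference, these buffers are configured in aws_iot_config.h in the SDK. As an illustration only (the 4096 values below are an example, not a recommendation; pick sizes that fit your documents):

```c
/* aws_iot_config.h -- example sizes only, not a recommendation */
#define AWS_IOT_MQTT_TX_BUF_LEN 4096 /* outgoing MQTT publishes, incl. shadow updates */
#define AWS_IOT_MQTT_RX_BUF_LEN 4096 /* incoming messages larger than this are dropped */
```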

    Regards,
    Michael

  • Hi Michael,

    I don't get an error code, just an "Update Timeout--" message from the call below:

      rc = aws_iot_shadow_update(&mqttClient,
                                AWS_IOT_MY_THING_NAME, JsonDocumentBuffer,
                                ShadowUpdateStatusCallback, NULL, 4, true);

    What is the formula to calculate the right sizes of the buffers? 

    For example, I know that the number of characters in JsonDocumentBuffer is 650 max. How do I calculate the appropriate buffer size? 

    Thanks.

    David

  • Currently the buffer size is 2048:

    #define AWS_IOT_MQTT_TX_BUF_LEN 2048 ///< Any time a message is sent out through the MQTT layer. The message is copied into this buffer anytime a publish is done. This will also be used in the case of Thing Shadow
    #define AWS_IOT_MQTT_RX_BUF_LEN 2048 ///< Any message that comes into the device should be less than this buffer size. If a received message is bigger than this buffer size the message will be dropped.
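
    As a rough sanity check (the 8-byte MQTT overhead and the "MyThing" topic below are my own assumptions, not values taken from the SDK), the TX buffer has to hold the serialized publish packet, i.e. the topic string plus the JSON payload plus a small packet header:

```c
#include <stddef.h>
#include <string.h>

/* Back-of-the-envelope TX buffer estimate. The overhead figure is an
 * assumption for illustration, not a value from the AWS SDK. */
static size_t required_tx_bytes(const char *topic, size_t json_len)
{
    const size_t mqtt_overhead = 8; /* assumed fixed + variable header bytes */
    return strlen(topic) + json_len + mqtt_overhead;
}
```

    With a 650-character JsonDocumentBuffer and a shadow update topic, that comes to well under 2048 bytes, so sheer buffer size doesn't look like the limiting factor here.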
    

  • The issue is in the RX buffer.

    Update Timeout--WARN: isJsonValidAndParse L#390 Failed to parse JSON: -1

    WARN: AckStatusCallback L#200 Received JSON is not valid

    As soon as I remove the newly added report entries from the JSON document the warning disappears.

    I printed the contents of the RX buffer and it seems valid. I can't understand what's wrong.

  • /**
     * Allocates a fresh unused token from the token pool.
     */
    static jsmntok_t *jsmn_alloc_token(jsmn_parser *parser,
                                       jsmntok_t *tokens, size_t num_tokens) {
        jsmntok_t *tok;
        if (parser->toknext >= num_tokens) {
            UART_PRINT("THE CURRENT TOKEN IS %d AND NUMBER OF TOKENS IS %d\n\r",
                       (int)parser->toknext, (int)num_tokens);
            return NULL;
        }
        tok = &tokens[parser->toknext++];
        tok->start = tok->end = -1;
        tok->size = 0;
    #ifdef JSMN_PARENT_LINKS
        tok->parent = -1;
    #endif
        return tok;
    }

    The error happens in the above portion, where the function returns NULL. 

    I experimented with the MAX_JSON_TOKEN_EXPECTED value; previously it was 220. The issue persists up to 230. When I set the value to 231, the loop hangs.
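
    To estimate how many tokens the document actually needs, here is a rough counter that mirrors jsmn's allocation rule: one token per object, per array, per string (keys and values both count), and per primitive. count_json_tokens is just an illustrative helper I wrote for this estimate, not part of the SDK or of jsmn; upstream jsmn can also report the required count itself if jsmn_parse is called with a NULL token array.

```c
#include <ctype.h>
#include <stddef.h>

/* Rough token counter mirroring jsmn's allocation rule.
 * Illustrative only -- not part of the AWS SDK or jsmn. */
static int count_json_tokens(const char *js)
{
    int count = 0;
    size_t i;

    for (i = 0; js[i] != '\0'; i++) {
        char c = js[i];

        if (c == '{' || c == '[') {
            count++;                          /* object or array token */
        } else if (c == '"') {
            count++;                          /* string token (key or value) */
            i++;
            while (js[i] != '\0' && js[i] != '"') {
                if (js[i] == '\\' && js[i + 1] != '\0') {
                    i++;                      /* skip escaped character */
                }
                i++;
            }
            if (js[i] == '\0') {
                break;                        /* unterminated string */
            }
        } else if (isdigit((unsigned char)c) || c == '-' ||
                   c == 't' || c == 'f' || c == 'n') {
            count++;                          /* number, true, false, null */
            while (js[i + 1] != '\0' && js[i + 1] != ',' &&
                   js[i + 1] != '}' && js[i + 1] != ']' &&
                   !isspace((unsigned char)js[i + 1])) {
                i++;
            }
        }
    }
    return count;
}
```

    Running this over the actual shadow document gives a lower bound for MAX_JSON_TOKEN_EXPECTED.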

  • Hi,

    Please see my post in your other thread for the significance of MAX_JSON_TOKEN_EXPECTED: https://e2e.ti.com/support/wireless-connectivity/wifi/f/968/t/835148

    When you say that the loop hangs at MAX_JSON_TOKEN_EXPECTED == 231, where does it hang? Is it the MCU triggering a hard fault, is there some other error condition occurring, or is it that the device simply enters a while(1) loop forever?

    Regards,

    Michael

  • Hi Michael,

    Your answer in the mentioned thread helps. Could you please answer the follow-up question in that same thread? That would be really helpful to completely clarify the solution.

    Thanks.

    David

  • Hi David,

    I clarified your JSON question in your related thread.

    Regards,

    Michael