RTOS: NDK Custom Tick period and struct timeval tv_sec value



Tool/software: TI-RTOS

Hello,

I would like to change the default Clock tick period from 1000 us to 100 us, i.e. this setting:

/* ================ Clock configuration ================ */
var Clock = xdc.useModule('ti.sysbios.knl.Clock');
var Mailbox = xdc.useModule('ti.sysbios.knl.Mailbox');
var Http = xdc.useModule('ti.ndk.config.Http');
/*
 * Default value is family dependent. For example, Linux systems often only
 * support a minimum period of 10000 us and multiples of 10000 us.
 * TI platforms have a default of 1000 us.
 */
Clock.tickPeriod = 100;
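
With a 100 us tick there are 10 ticks per millisecond instead of 1. For reference, this is the conversion I assume has to hold everywhere (msToTicks is only an illustration helper of my own, not a BIOS API; Clock_tickPeriod is the configured tick period in microseconds):

#include <xdc/std.h>
#include <ti/sysbios/knl/Clock.h>

/* Illustration only, not a BIOS API: convert milliseconds to Clock ticks.
 * Clock_tickPeriod is the configured tick period in microseconds, so with
 * tickPeriod = 100 this yields 10 ticks per millisecond instead of 1. */
static UInt32 msToTicks(UInt32 ms)
{
    return (ms * 1000) / Clock_tickPeriod;
}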

In every task where I instantiate Clock instances I have updated the period values (multiplied them by 10) to get the right timing. In my application I have to use a custom NDK stack thread, so there clockParams.period is set to 1000:

/// --------------------------------------------------------------------------
/// Custom NDK Thread
Void custom_ndk_config_Global_stackThread(UArg arg0, UArg arg1)
{
    System_printf("Custom NDK Stack Thread Running\n");
    System_flush();

    int rc;
    HANDLE hCfg;

    ti_sysbios_knl_Clock_Params clockParams;

    /* Create the NDK heart beat */
    ti_sysbios_knl_Clock_Params_init(&clockParams);
    clockParams.startFlag = TRUE;
    // template default is 100 ticks (100 ms heart beat at a 1000 us tick);
    // with the 100 us tick the same 100 ms needs NDK_TICK_PERIOD = 1000 ticks
    clockParams.period = NDK_TICK_PERIOD;
    ti_sysbios_knl_Clock_create(&llTimerTick, clockParams.period, &clockParams, NULL);


    /* THIS MUST BE THE ABSOLUTE FIRST THING DONE IN AN APPLICATION!! */
    rc = NC_SystemOpen(NC_PRIORITY_LOW, NC_OPMODE_INTERRUPT);
    if (rc) {
        xdc_runtime_System_abort("NC_SystemOpen Failed (%d)\n");
    }

    /* Create and build the system configuration from scratch. */
    hCfg = CfgNew();
    if (!hCfg) {
        xdc_runtime_System_printf("Unable to create configuration\n");
        goto main_exit;
    }

    {
        extern Void hook_StackInit(HANDLE hCfg);

        /* call user defined stack initialization hook */
        hook_StackInit(hCfg);
    }
// etc
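
For reference, NDK_TICK_PERIOD is my own define; roughly it amounts to this (a sketch: the 100 ms heart beat the NDK expects, divided by the new 100 us tick, i.e. 1000 ticks instead of the template's 100):

/* Sketch of my define: keep the NDK heart beat at 100 ms.
 * With Clock.tickPeriod = 100 us this evaluates to 1000 ticks
 * (the template default of 100 ticks assumed a 1000 us tick). */
#define NDK_HEARTBEAT_US    (100 * 1000)                        /* 100 ms */
#define NDK_TICK_PERIOD     (NDK_HEARTBEAT_US / 100 /* us per tick */)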

My problem is the following:

I am using the select() function to check whether there is any data waiting to be read on the socket:

static int read_packet(int timeout)
{
	if(timeout > 0)
	{
		fd_set readfds;
		struct timeval tmv;

		// Initialize the file descriptor set
		FD_ZERO (&readfds);
		FD_SET (socket_id, &readfds);

		// Initialize the timeout data structure
		tmv.tv_sec = timeout;
		tmv.tv_usec = 0;

		int retVal = select(socket_id + 1, &readfds, NULL, NULL, &tmv);

		// select returns 0 if timeout, 1 if input available, -1 if error
		if(retVal <= 0)
			return retVal;
	}

	int total_bytes = 0, bytes_rcvd, packet_length;
	memset(packet_buffer, 0, sizeof(packet_buffer));
	
	if((bytes_rcvd = recv(socket_id, (packet_buffer+total_bytes), RCVBUFSIZE, 0)) <= 0) {
	    System_printf("Socket Error: %d\n",fdError());System_flush();
	    return -1;
	}

but tv_sec seems to ignore the updated Clock tick and works with one tenth of the set value (when I set a timeout of 10, the real timeout is 1 second). If I have set the right timing for the NDK, why is the seconds parameter ignored? Am I doing anything wrong?
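
For reference, the effect can be measured with something like this (an instrumentation sketch around the select() call from read_packet above; Clock_getTicks() returns the BIOS tick count):

/* Instrumentation sketch: measure how long select() actually blocks */
UInt32 startTicks = Clock_getTicks();
int rv = select(socket_id + 1, &readfds, NULL, NULL, &tmv);
UInt32 elapsedUs = (Clock_getTicks() - startTicks) * Clock_tickPeriod;
System_printf("select() returned %d after %u us\n", rv, (unsigned)elapsedUs);

With timeout = 10 this reports about 1 second instead of 10.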

Thank you for your reply