This thread has been locked.

If you have a related question, please click the "Ask a related question" button in the top right corner. The newly created question will be automatically linked to this question.

System crashes after adding new task.

Other Parts Discussed in Thread: TMS320F28335

I am having an issue where adding a simple task causes a crash on bootup.

The entire console output is: " located entirely below 0x10000) xdc.runtime.Error.raise: terminating execution"

The ROV Startup State shows:

And looking at the task status under ROV view:

So all tasks are supposedly overrunning their stacks.

Since there is no stack trace for the system once abort() occurs, I put a breakpoint in xdc_runtime_System_abort_F (found by looking at the RPC register) and get the following stack trace:

The error occurs in

Ptr TaskSupport_start(Ptr currTsk, ITaskSupport_FuncPtr enter, ITaskSupport_FuncPtr exit, Error_Block *eb)
{
    ...
    /*
     *  The SP register is only 16 bits on 28x. Ensure that the last address
     *  in the new stack is less than 0xffff
     */
    if (((ULong)tsk->stack) + (tsk->stackSize) >= MAX_SP_ADDR) {
        Error_raise(eb, TaskSupport_E_invalidStack, tsk->stack, 0);
        return (NULL);
    }
    ...
}

with tsk->stack = 0x006F0072 and tsk->stackSize = 0x00720045.

Looking inside Int Task_Module_startup(Int phase), I see that the task being initialized is the zeroth statically created task, which has handle 0x0000C042. That is not any of my tasks per the ROV view above.

What could be causing this? I have only 5 tasks with a total of 5 KB of stack, and all evidence points to it being the first system task that is run.

Please advise.

  • Steve,

    As you found, this error is raised when the Task startup function detects that the Task's stack range would go beyond the reach of the 16-bit stack pointer (above 0xFFFF).

    In ROV, the stackBase values all indicate addresses above this limit. But this may be a red herring: if several kernel data structures have been corrupted, ROV can display extra bogus data.

    Are you explicitly specifying the stacks in high memory?

    Can you post the contents of your application configuration file (with the .cfg extension)?

    Also, any details on what you did to add the fifth task will help…

    Scott

  • Steve,

    I think what is happening is that when you add the 5th task the allocated stack is getting placed above 0x10000.  If you have a map file I think you’ll see this.

    By default, Task.defaultStackSection for the C28 is ".ebss:taskStackSection".

    In the linker command file, .ebss is allowed to be placed in XRAM, which is above 0x10000:

        .ebss               : >> L47SARAM | M01SARAM | XRAM 
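
    One way to guarantee the stacks stay below 0x10000 is to restrict the stack section to on-chip RAM in the linker command file. A sketch, reusing the memory region names from the line above (the region names in your .cmd file may differ):

        /* Task stacks must be reachable by the 16-bit SP,
           so keep this section out of XRAM (above 0x10000) */
        .ebss:taskStackSection : >> L47SARAM | M01SARAM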
    Also, I notice that the app configuration disables Task support for the dispatcher:

    Hwi.dispatcherTaskSupport = false;    

    Other stuff in the .cfg file implies that interrupts will post Tasks, so you’ll need to set

    Hwi.dispatcherTaskSupport = true;

    Otherwise Tasks may run immediately in the context of the interrupts.  This config parameter should be set to false only if interrupts will *not* be triggering Task scheduling.
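
    For reference, a minimal .cfg fragment with dispatcher Task support enabled (the module path shown is the standard SYS/BIOS one for C28; adjust to your setup):

        var Hwi = xdc.useModule('ti.sysbios.family.c28.Hwi');

        /* Run the Task scheduler when the dispatcher returns from an
           interrupt, so a Semaphore_post() in an ISR can switch Tasks */
        Hwi.dispatcherTaskSupport = true;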

    I think the overflow into XRAM is causing the Task startup issue, but the dispatcher support for Tasks being disabled will cause other issues.

    Scott

  • Gary,

       I will try these out and post results.

    Steve

  • Is there a way I can put the task stacks in a specific memory range? I have some large statically initialized data arrays which appear to go into .ebss, and .ebss is quite large (0x1036c), so it needs to go into external RAM.

  • I figured it out. I just assigned the task stacks to one of the other sections that sit entirely in on-chip RAM.
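
    For anyone hitting the same error, the fix can be sketched in two parts (the section name ".taskStacks" and the memory region names below are illustrative, not taken from the original project): point Task.defaultStackSection at a dedicated section in the .cfg, then place that section in on-chip RAM in the linker command file.

        /* app.cfg: route all task stacks into one dedicated section */
        var Task = xdc.useModule('ti.sysbios.knl.Task');
        Task.defaultStackSection = ".taskStacks";

        /* linker .cmd: place that section in on-chip RAM only,
           so every stack address stays below 0x10000 */
        .taskStacks : >> L47SARAM | M01SARAM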