TMS320C6678: OpenMP and SRIO working together

Part Number: TMS320C6678
Other Parts Discussed in Thread: SYSBIOS

Hi everyone,

     I want to use OpenMP in my SRIO project, but I have problems making them work together.

     I have searched E2E and the wiki a lot, and some posts describe the same problem, for example https://e2e.ti.com/support/processors/f/791/t/468231?OpenMP-QMSS-manual-setup#pi320966=1. However, when I tried the same steps described in that thread, my project still did not work.

 

     First of all, my program is meant to have the FPGA transfer data to the DSP over SRIO, send a doorbell interrupt when the transfer is complete, and then split the data processing across core0-core7 using OpenMP parallel programming.
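
A rough sketch of that processing step (not the actual project code; rxBuf, outBuf and processSample are illustrative names) could look like this: once the doorbell ISR signals that the FPGA has finished writing the buffer, the work is split across the eight cores with an OpenMP parallel for loop.

#include <omp.h>

/* Hypothetical per-sample routine; stands in for the real algorithm. */
extern float processSample(float x);

void processReceivedBlock(const float *rxBuf, float *outBuf, int len)
{
    int i;

    /* Each of the 8 OpenMP threads (one per core) handles a slice of the buffer. */
    #pragma omp parallel for num_threads(8)
    for (i = 0; i < len; i++)
    {
        outBuf[i] = processSample(rxBuf[i]);
    }
}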

     The program is based on the SRIO_LpbkDioIsr_evmc6678_C66BiosExampleProject generated by the PDK. Many posts have been referenced before. Since SRIO uses QMSS and OpenMP also uses QMSS, I followed http://downloads.ti.com/mctools/esd/docs/openmp-dsp/integrating_apps_with_qmss.html to initialize QMSS manually. Because SRIO uses 128 descriptors and uses Qmss_MemRegion_MEMORY_REGION0, I made the following changes in the .cfg file.

 

ompSettings.runtimeInitializesQmss = false;
OpenMP.qmssMemRegionIndex = 1;
OpenMP.qmssFirstDescIdxInLinkingRam = 128;

Add the following code after the ddr.len assignment:

var Cache = xdc.useModule('ti.sysbios.family.c66.Cache');
Cache.setMarMeta(msmcNcVirt.base, msmcNcVirt.len, 0);
Cache.setMarMeta(OpenMP.ddrBase, OpenMP.ddrSize, 
Cache.PC|Cache.PFX|Cache.WTE);
Cache.setMarMeta(OpenMP.msmcBase, OpenMP.msmcSize, 
Cache.PC|Cache.PFX|Cache.WTE);

Add the customized QMSS init function before __TI_omp_initialize_rtsc_mode (a sketch of such a function is shown after this snippet):

var Startup = xdc.useModule('xdc.runtime.Startup');
Startup.lastFxns.$add('&qmssInitOmp');
Startup.lastFxns.$add('&__TI_omp_initialize_rtsc_mode');
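
A minimal sketch of what such a qmssInitOmp() startup hook might look like, following the pattern in the TI page linked above. This is not the actual project function; it assumes the QMSS LLD API (Qmss_init, Qmss_start) from the C6678 PDK, and the exact Qmss_init signature and structure fields can vary between PDK releases, so check them against the installed headers.

#include <string.h>
#include <ti/drv/qmss/qmss_drv.h>
#include <ti/drv/qmss/qmss_firmware.h>

/* Global QMSS configuration parameters from ti/drv/qmss/device/qmss_device.c */
extern Qmss_GlobalConfigParams qmssGblCfgParams;

/* Assumption: enough linking RAM entries for SRIO (indices 0..127, region 0)
 * plus the OpenMP runtime (indices 128.., region 1, as configured above). */
#define TOTAL_NUM_DESC   512

void qmssInitOmp (void)
{
    Qmss_InitCfg qmssInitConfig;

    memset (&qmssInitConfig, 0, sizeof (Qmss_InitCfg));

    /* Use the internal linking RAM, sized for every QMSS user in the system. */
    qmssInitConfig.linkingRAM0Base = 0;
    qmssInitConfig.linkingRAM0Size = 0;
    qmssInitConfig.linkingRAM1Base = 0;
    qmssInitConfig.maxDescNum      = TOTAL_NUM_DESC;

    /* The SRIO example uses the accumulator, so download the PDSP firmware. */
    qmssInitConfig.pdspFirmware[0].pdspId   = Qmss_PdspId_PDSP1;
    qmssInitConfig.pdspFirmware[0].firmware = (void *) &acc48_le;
    qmssInitConfig.pdspFirmware[0].size     = sizeof (acc48_le);

    /* Error handling omitted for brevity. */
    if (Qmss_init (&qmssInitConfig, &qmssGblCfgParams) != QMSS_SOK)
        return;

    if (Qmss_start () != QMSS_SOK)
        return;

    /* The SRIO driver inserts its own descriptor pool (memory region 0) during
     * its initialization; the OpenMP runtime inserts its region at
     * OpenMP.qmssMemRegionIndex when __TI_omp_initialize_rtsc_mode runs after
     * this hook. */
}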

I observe the following behavior:

1. If the SRIO task is not started, the OpenMP program runs normally.

2. If the OpenMP program is not started, the SRIO program runs normally and receives the doorbell interrupt.

3. If the SRIO and OpenMP programs are started at the same time, the doorbell interrupt is still received in the SRIO program, but the OpenMP program reports the error: INTERNAL ERROR: Unexpected NULL pointer - src/tomp_parallel.c, 224

My tasks are started statically through the .cfg file. The SRIO task has the lowest priority and the OpenMP task has the highest priority. I strictly followed the instructions in the OMP 2.0 user guide, for example modifying the auto-run options and so on.

My questions are:

1. Is my QMSS startup method correct?

2. Under what conditions does the OpenMP runtime report this error: INTERNAL ERROR: Unexpected NULL pointer - src/tomp_parallel.c, 224?

My .cfg file and related code are posted below.

I think I made some mistake in the QMSS init but I can't find it on my own. Thank you in advance.

Best Regards

 

  • /*
     * Copyright (c) 2013, Texas Instruments Incorporated - http://www.ti.com/
     *   All rights reserved.
     *
     *  Redistribution and use in source and binary forms, with or without
     *  modification, are permitted provided that the following conditions are met:
     *      * Redistributions of source code must retain the above copyright
     *        notice, this list of conditions and the following disclaimer.
     *      * Redistributions in binary form must reproduce the above copyright
     *        notice, this list of conditions and the following disclaimer in the
     *        documentation and/or other materials provided with the distribution.
     *      * Neither the name of Texas Instruments Incorporated nor the
     *        names of its contributors may be used to endorse or promote products
     *        derived from this software without specific prior written permission.
     *
     * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" 
     * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE 
     * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE 
     * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE 
     * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR 
     * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF 
     * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS 
     * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN 
     * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) 
     * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE 
     * POSSIBILITY OF SUCH DAMAGE.
     */
    
    
    /***************************/
    /* SECTION MAPPING         */
    /***************************/
    /* Load the CSL package */
    var Csl                         =   xdc.loadPackage('ti.csl');
    
    /* Load the CPPI package */
    var Cppi                        =   xdc.loadPackage('ti.drv.cppi');     
    
    /* Load the QMSS package */
    var Qmss                        =   xdc.loadPackage('ti.drv.qmss');
    var srio                        =   xdc.loadPackage('ti.drv.srio');
    /* Load and configure SYSBIOS packages */
    var BIOS      = xdc.useModule('ti.sysbios.BIOS');
    var Task      = xdc.useModule('ti.sysbios.knl.Task');
    var Clock     = xdc.useModule('ti.sysbios.knl.Clock');
    var Mailbox   = xdc.useModule('ti.sysbios.knl.Mailbox'); 
    var Hwi       = xdc.useModule('ti.sysbios.hal.Hwi');
    var Ecm       = xdc.useModule('ti.sysbios.family.c64p.EventCombiner');
    var BiosCache = xdc.useModule('ti.sysbios.hal.Cache');
    var HeapBuf   = xdc.useModule('ti.sysbios.heaps.HeapBuf');
    var HeapMem   = xdc.useModule('ti.sysbios.heaps.HeapMem');
    var Exc       = xdc.useModule('ti.sysbios.family.c64p.Exception');
    var Cache     = xdc.useModule('ti.sysbios.family.c66.Cache');
    
    BIOS.taskEnabled = true;
    Task.common$.namedInstance = true;
    var program = xdc.useModule('xdc.cfg.Program');
    
    program.sectMap[".args"]        = new Program.SectionSpec();
    program.sectMap[".bss"]         = new Program.SectionSpec(); //
    program.sectMap[".cinit"]       = new Program.SectionSpec(); //
    program.sectMap[".cio"]         = new Program.SectionSpec(); //
    program.sectMap[".const"]       = new Program.SectionSpec(); //
    program.sectMap[".data"]        = new Program.SectionSpec(); 
    program.sectMap[".far"]         = new Program.SectionSpec(); //
    program.sectMap[".fardata"]     = new Program.SectionSpec(); //
    program.sectMap[".neardata"]    = new Program.SectionSpec(); //
    program.sectMap[".rodata"]      = new Program.SectionSpec(); //
    program.sectMap[".stack"]       = new Program.SectionSpec(); //
    program.sectMap[".switch"]      = new Program.SectionSpec(); //
    program.sectMap[".sysmem"]      = new Program.SectionSpec();
    program.sectMap[".text"]        = new Program.SectionSpec(); //
    Program.sectMap[".inputbuff"]   = new Program.SectionSpec(); // add when will deal with buffer alloocation
    
    // Must place these sections in core local memory 
    program.sectMap[".args"].loadSegment        = "L2SRAM";
    program.sectMap[".cio"].loadSegment         = "L2SRAM";
    
    // Variables in the following data sections can potentially be 'shared' in
    // OpenMP. These sections must be placed in shared memory.
    program.sectMap[".bss"].loadSegment         = "DDR3";
    program.sectMap[".cinit"].loadSegment       = "DDR3";
    program.sectMap[".const"].loadSegment       = "DDR3";
    program.sectMap[".data"].loadSegment        = "DDR3";
    program.sectMap[".far"].loadSegment         = "DDR3";
    program.sectMap[".fardata"].loadSegment     = "DDR3";
    program.sectMap[".neardata"].loadSegment    = "DDR3";
    program.sectMap[".rodata"].loadSegment      = "DDR3";
    program.sectMap[".sysmem"].loadSegment      = "DDR3";
    Program.sectMap[".inputbuff"].loadSegment  = "MSMCSRAM";
    
    // Code sections shared by cores - place in shared memory to avoid duplication
    program.sectMap[".switch"].loadSegment      = program.platform.codeMemory;
    program.sectMap[".text"].loadSegment        = program.platform.codeMemory;
    
    // Size the default stack and place it in L2SRAM 
    program.stack = 0x20000;
    program.sectMap[".stack"].loadSegment       = "L2SRAM";
    
    // Since there are no arguments passed to main, set .args size to 0
    program.argSize = 0;
    
    
    // Send System_printf output to the same place as printf
    var System = xdc.useModule('xdc.runtime.System');
    var SysStd = xdc.useModule('xdc.runtime.SysStd');
    System.SupportProxy = SysStd;
    
    
    /********************************/
    /* OPENMP RUNTIME CONFIGURATION */
    /********************************/
    
    // Include OMP runtime in the build
    var ompSettings = xdc.useModule("ti.runtime.openmp.Settings");
    
    // Set to true if the application uses or has dependencies on BIOS components
    ompSettings.usingRtsc = true;
    //ompSettings.runtimeInitializesQmss  =  false;
    
    if (ompSettings.usingRtsc)
    {
        /* Configure OpenMP for BIOS
         * - OpenMP.configureCores(masterCoreId, numberofCoresInRuntime)
         *       Configures the id of the master core and the number of cores
         *       available to the runtime.
         */
    
        var OpenMP = xdc.useModule('ti.runtime.ompbios.OpenMP');
    
        // Configure the index of the master core and the number of cores available
        // to the runtime. The cores are contiguous.
        OpenMP.masterCoreIdx = 0;
    
        
        //OpenMP.qmssMemRegionIndex = 1;//4; //1
        //OpenMP.qmssFirstDescIdxInLinkingRam = 128;//224; //160
        
        // Setup number of cores based on the device
        var deviceName = String(Program.cpu.deviceName);
        if      (deviceName.search("6670") != -1) { OpenMP.numCores      = 4; }
        else if (deviceName.search("6657") != -1) { OpenMP.numCores      = 2; }
        else                                      { OpenMP.numCores      = 8; }
    
        // Pull in memory ranges described in Platform.xdc to configure the runtime
        var ddr3       = Program.cpu.memoryMap["DDR3"];
        var msmc       = Program.cpu.memoryMap["MSMCSRAM"];
        var msmcNcVirt = Program.cpu.memoryMap["OMP_MSMC_NC_VIRT"];
        var msmcNcPhy  = Program.cpu.memoryMap["OMP_MSMC_NC_PHY"];
    
        // Initialize the runtime with memory range information
        OpenMP.msmcBase = msmc.base;
        OpenMP.msmcSize = msmc.len;
    
        OpenMP.msmcNoCacheVirtualBase  = msmcNcVirt.base;
        OpenMP.msmcNoCacheVirtualSize  = msmcNcVirt.len;
    
        OpenMP.msmcNoCachePhysicalBase  = msmcNcPhy.base;
    
        OpenMP.ddrBase          = ddr3.base;
        OpenMP.ddrSize          = ddr3.len;
    
        var Cache   = xdc.useModule('ti.sysbios.family.c66.Cache');
        Cache.setMarMeta(msmcNcVirt.base, msmcNcVirt.len, 0);
        Cache.setMarMeta(OpenMP.ddrBase, OpenMP.ddrSize, 
                                            Cache.PC|Cache.PFX|Cache.WTE);
        Cache.setMarMeta(OpenMP.msmcBase, OpenMP.msmcSize, 
                                            Cache.PC|Cache.PFX|Cache.WTE);
    
    
    
        // Configure memory allocation using HeapOMP
        // HeapOMP handles 
        // - Memory allocation requests from BIOS components (core local memory)
        // - Shared memory allocation by utilizing the IPC module to enable 
        //   multiple cores to allocate memory out of the same heap - used by malloc
        var HeapOMP = xdc.useModule('ti.runtime.ompbios.HeapOMP');
    
        // Shared Region 0 must be initialized for IPC 
        var sharedRegionId = 0;
    
        // Size of the core local heap
        var localHeapSize  = 0x8000;
    
        // Size of the heap shared by all the cores
        var sharedHeapSize = 0x8000000;
    
        // Initialize a Shared Region & create a heap in the DDR3 memory region
        var SharedRegion   = xdc.useModule('ti.sdo.ipc.SharedRegion');
        SharedRegion.setEntryMeta( sharedRegionId,
                                   {   base: ddr3.base,
                                       len:  sharedHeapSize,
                                       ownerProcId: 0,
                                       cacheEnable: true,
                                       createHeap: true,
                                       isValid: true,
                                       name: "DDR3_SR0",
                                   });
    
    
    
        // Configure and setup HeapOMP
        HeapOMP.configure(sharedRegionId, localHeapSize);
    
    
    
        // The function __TI_omp_reset_rtsc_mode must be called after reset
        var Reset = xdc.useModule('xdc.runtime.Reset');
        Reset.fxns.$add('&__TI_omp_reset_rtsc_mode');
    
        // __TI_omp_start_rtsc_mode configures the runtime and calls main
        var Startup = xdc.useModule('xdc.runtime.Startup');
        
        Startup.lastFxns.$add('&qmssInitOmp');
        
        Startup.lastFxns.$add('&__TI_omp_initialize_rtsc_mode');
    }
    else
    {
        /* Size the heap. It must be placed in shared memory */
        program.heap = sharedHeapSize;
    }
    var Task = xdc.useModule('ti.sysbios.knl.Task');
    /*
    var task0Params = new Task.Params();
    task0Params.instance.name = "task0";
    task0Params.priority=15;
    Program.global.task0 = Task.create('&dioExampleTask', task0Params);*/
    
    var task2Params = new Task.Params();
    task2Params.instance.name="taskDio";
    task2Params.priority=10;
    Program.global.taskDio=Task.create("&dioExampleTask",task2Params);
    
    var task1Params = new Task.Params();
    task1Params.instance.name = "task1";
    task1Params.priority=15;
    Program.global.task1 = Task.create('&FxnEx2', task1Params);
    
    var task0Params = new Task.Params();
    task0Params.instance.name = "task0";
    task0Params.priority=11;
    Program.global.task0 = Task.create('&FxnZh', task0Params);
    

  • This is my dioExampleTask:

    Void dioExampleTask(UArg arg0, UArg arg1)
    
    {
    #ifdef SIMULATOR_SUPPORT
    #warn SRIO DIO LSU ISR example is not supported on SIMULATOR !!!
        System_printf ("SRIO DIO LSU ISR example is not supported on SIMULATOR. Exiting!\n");
        return;
    #else
        System_printf ("Executing the SRIO DIO example on the DEVICE\n");
    #endif
    
        /* Initialize the system only if the core was configured to do so. */
        if (coreNum == CORE_SYS_INIT)
        {
            System_printf ("Debug(Core %d): System Initialization for CPPI & QMSS\n", coreNum);
    
            /* System Initialization */
           //if (system_init() < 0)
               //return;
    
            /* Power on SRIO peripheral before using it */
            if (enable_srio () < 0)
            {
                System_printf ("Error: SRIO PSC Initialization Failed\n");
                return;
            }
    
            /* Device Specific SRIO Initializations: This should always be called before
             * initializing the SRIO Driver. */
          if (SrioDevice_init() < 0)
              return;
    
            /* Initialize the SRIO Driver */
            if (Srio_init () < 0)
            {
                System_printf ("Error: SRIO Driver Initialization Failed\n");
                return;
            }
    
            /* SRIO Driver is operational at this time. */
            System_printf ("Debug(Core %d): SRIO Driver has been initialized\n", coreNum);
    
            /* Write to the SHARED memory location at this point in time. The other cores cannot execute
             * till the SRIO Driver is up and running. */
            isSRIOInitialized[0] = 1;
    
            /* The SRIO IP block has been initialized. We need to writeback the cache here because it will
             * ensure that the rest of the cores which are waiting for SRIO to be initialized would now be
             * woken up. */
            CACHE_wbL1d ((void *) &isSRIOInitialized[0], 128, CACHE_WAIT);
    
        }
        else
        {
            /* All other cores need to wait for the SRIO to be initialized before they proceed. */
            System_printf ("Debug(Core %d): Waiting for SRIO to be initialized.\n", coreNum);
    
            /* All other cores loop around forever till the SRIO is up and running.
             * We need to invalidate the cache so that we always read this from the memory. */
            while (isSRIOInitialized[0] == 0)
                CACHE_invL1d ((void *) &isSRIOInitialized[0], 128, CACHE_WAIT);
    
            /* Start the QMSS. */
            if (Qmss_start() != QMSS_SOK)
            {
                System_printf ("Error: Unable to start the QMSS\n");
                return;
            }
    
            System_printf ("Debug(Core %d): SRIO can now be used.\n", coreNum);
        }
        System_printf("dio has entered\n");
        UInt8           isAllocated;
        Srio_DrvConfig  drvCfg;
        /* Initialize the SRIO Driver Configuration. */
        memset ((Void *)&drvCfg, 0, sizeof(Srio_DrvConfig));
    
        /* Initialize the OSAL */
        if (Osal_dataBufferInitMemory(SRIO_MAX_MTU) < 0)
        {
            System_printf ("Error: Unable to initialize the OSAL. \n");
            return;
        }
    
        /********************************************************************************
         * The SRIO Driver Instance is going to be created with the following properties:
         * - Driver Managed
         * - Interrupt Support (Pass the Rx Completion Queue as NULL)
         ********************************************************************************/
    
        /* Setup the SRIO Driver Managed Configuration. */
        drvCfg.bAppManagedConfig = FALSE;
    
        /* Driver Managed: Receive Configuration */
        drvCfg.u.drvManagedCfg.bIsRxCfgValid             = 1;
        drvCfg.u.drvManagedCfg.rxCfg.rxMemRegion         = Qmss_MemRegion_MEMORY_REGION1;
        drvCfg.u.drvManagedCfg.rxCfg.numRxBuffers        = 4;
        drvCfg.u.drvManagedCfg.rxCfg.rxMTU               = SRIO_MAX_MTU;
    
        /* Accumulator Configuration. */
        {
            int32_t coreToQueueSelector[4];
    
          /* This is the table which maps the core to a specific receive queue. */
            coreToQueueSelector[0] = 704;
            coreToQueueSelector[1] = 705;
            coreToQueueSelector[2] = 706;
            coreToQueueSelector[3] = 707;
            /* Since we are programming the accumulator we want this queue to be a HIGH PRIORITY Queue */
            drvCfg.u.drvManagedCfg.rxCfg.rxCompletionQueue = Qmss_queueOpen (Qmss_QueueType_HIGH_PRIORITY_QUEUE,
                                                                             coreToQueueSelector[coreNum], &isAllocated);
            if (drvCfg.u.drvManagedCfg.rxCfg.rxCompletionQueue < 0)
            {
                System_printf ("Error: Unable to open the SRIO Receive Completion Queue\n");
                return;
            }
    
            /* Accumulator Configuration is VALID. */
            drvCfg.u.drvManagedCfg.rxCfg.bIsAccumlatorCfgValid = 1;
    
            /* Accumulator Configuration. */
            drvCfg.u.drvManagedCfg.rxCfg.accCfg.channel             = coreNum;
            drvCfg.u.drvManagedCfg.rxCfg.accCfg.command             = Qmss_AccCmd_ENABLE_CHANNEL;
            drvCfg.u.drvManagedCfg.rxCfg.accCfg.queueEnMask         = 0;
            drvCfg.u.drvManagedCfg.rxCfg.accCfg.queMgrIndex         = coreToQueueSelector[coreNum];
            drvCfg.u.drvManagedCfg.rxCfg.accCfg.maxPageEntries      = 2;
            drvCfg.u.drvManagedCfg.rxCfg.accCfg.timerLoadCount      = 0;
            drvCfg.u.drvManagedCfg.rxCfg.accCfg.interruptPacingMode = Qmss_AccPacingMode_LAST_INTERRUPT;
            drvCfg.u.drvManagedCfg.rxCfg.accCfg.listEntrySize       = Qmss_AccEntrySize_REG_D;
            drvCfg.u.drvManagedCfg.rxCfg.accCfg.listCountMode       = Qmss_AccCountMode_ENTRY_COUNT;
            drvCfg.u.drvManagedCfg.rxCfg.accCfg.multiQueueMode      = Qmss_AccQueueMode_SINGLE_QUEUE;
    
            /* Initialize the accumulator list memory */
            //memset ((Void *)&gHiPriAccumList[0], 0, sizeof(gHiPriAccumList));
           // drvCfg.u.drvManagedCfg.rxCfg.accCfg.listAddress = l2_global_address((UInt32)&gHiPriAccumList[0]);
        }
    
        /* Driver Managed: Transmit Configuration */
        drvCfg.u.drvManagedCfg.bIsTxCfgValid             = 1;
        drvCfg.u.drvManagedCfg.txCfg.txMemRegion         = Qmss_MemRegion_MEMORY_REGION1;
        drvCfg.u.drvManagedCfg.txCfg.numTxBuffers        = 4;
        drvCfg.u.drvManagedCfg.txCfg.txMTU               = SRIO_MAX_MTU;
    
        /* Start the Driver Managed SRIO Driver. */
        hDrvManagedSrioDrv = Srio_start(&drvCfg);
        if (hDrvManagedSrioDrv == NULL)
        {
            System_printf ("Error(Core %d): SRIO Driver failed to start\n", coreNum);
            return;
        }
    
    
        /* Get the CSL SRIO Handle. */
        hSrioCSL = CSL_SRIO_Open (0);
        if (hSrioCSL == NULL)
            return;  /* dioExampleTask returns Void, so no value is returned here */
        Hwi_Params      hwiParams;
        Hwi_Handle      myHwi;
        Error_Block     eb;
    
        Hwi_Params_init(&hwiParams);
        Error_init(&eb);
        hwiParams.arg       = (UArg)hDrvManagedSrioDrv;
        hwiParams.eventId   = 20;
    //
        myHwi = Hwi_create(11, (CpIntc_FuncPtr)myDoorbellCompletionIsr, &hwiParams, &eb);
    
        System_printf("Finish dispatch plug\n");
    
    }

  • This is my OpenMP task. It is only a demo to verify that SRIO and OpenMP work together, so I use while(1) to run the OpenMP region again and again and check whether OpenMP keeps running.

    Void FxnZh(UArg arg0,UArg arg1){
        int tid;
        omp_set_num_threads(8);
        while(1){
        #pragma omp parallel private(tid)
            {
                tid=omp_get_thread_num();
                System_printf("core %d has activated\n",tid);
            }
            System_printf("Fxn_zh has join the main thread\n");
            Task_sleep(100);
        }
    
    
    }

  • By the way, I use PDK 2.0.015 and OMP 2.06.

    Compiler: TI 8.3.2

    SYS/BIOS: 6.76.2.02

    IPC: 3.50.4.07

    XDCtools: 3.55.2.22

  • Hi,

    Sorry for the delayed reply. Just wanted to let you know that I am digging into this and will update as soon as I come up with any idea for debugging your problem.

    Best Regards,
    Yordan

  • Hi Yordan,

    Thank you very much for your reply. I made great progress on this issue just now, and I also added my UDP program to this project, but there are still some problems.

    First of all, the INTERNAL ERROR: Unexpected NULL pointer - src/tomp_parallel.c, 224 has been solved by modifying my memory map. I found that the SRIO destination address was 0xc000000, but my OpenMP runtime uses the same address for the non-cached region that stores shared state. I changed the address that my SRIO transfer uses, and the problem went away.
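
    As an illustration of that clash (not from the project; the base and size values are placeholders to be taken from the OMP_MSMC_NC_PHY / OMP_MSMC_NC_VIRT entries in the platform memory map), a simple check could verify that the SRIO DIO destination window does not overlap the OpenMP non-cached region:

    #include <stdint.h>

    /* Placeholder values: take the real base/size from the platform memory map. */
    #define OMP_NC_BASE  0x0C000000u
    #define OMP_NC_SIZE  0x00040000u

    /* Returns non-zero if [dst, dst+len) overlaps the OpenMP non-cached window. */
    static int overlapsOmpNcRegion (uint32_t dst, uint32_t len)
    {
        uint32_t ncEnd = OMP_NC_BASE + OMP_NC_SIZE;
        return (dst < ncEnd) && ((dst + len) > OMP_NC_BASE);
    }

    /* e.g. check overlapsOmpNcRegion(dioDstAddr, xferLen) before programming the DIO LSU transfer. */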

    But there are still two phenomena:

    1. There is an Hwi in my SRIO task; the interrupt comes from the FPGA. When the program runs well, the interrupt is received correctly. But sometimes everything else appears fine: the NDK network works, the OpenMP task works, and the FPGA has transferred the data into my memory, yet the doorbell interrupt is not received.

    2. I use two ways to initialize QMSS.

        A. Using the approach from the post above (the same as the OMP 2.0 user guide instructs): add my own QMSS initialization function before __TI_omp_initialize_rtsc_mode in Startup.lastFxns, and use the following settings in the cfg. The NDK uses memRegion0, SRIO uses memRegion1, and OpenMP uses memRegion2 (see the index layout sketch after the cfg snippet below). This way of initializing QMSS works well.

    ompSettings.runtimeInitializesQmss = false;
    OpenMP.qmssMemRegionIndex = 2;
    OpenMP.qmssFirstDescIdxInLinkingRam = 256;

    var Cache = xdc.useModule('ti.sysbios.family.c66.Cache');
    Cache.setMarMeta(msmcNcVirt.base, msmcNcVirt.len, 0);
    Cache.setMarMeta(OpenMP.ddrBase, OpenMP.ddrSize,
    Cache.PC|Cache.PFX|Cache.WTE);
    Cache.setMarMeta(OpenMP.msmcBase, OpenMP.msmcSize,
    Cache.PC|Cache.PFX|Cache.WTE);
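
    For reference, the linking-RAM layout implied by this configuration might look as follows (the NDK and SRIO descriptor counts are assumptions; only the OpenMP region index and first descriptor index come from the cfg above):

    /* Assumed descriptor budget per QMSS user:
     *   NDK    -> memory region 0, linking RAM indices   0..127
     *   SRIO   -> memory region 1, linking RAM indices 128..255
     *   OpenMP -> memory region 2, first linking RAM index 256
     */
    #define NUM_NDK_DESC        128
    #define NUM_SRIO_DESC       128
    #define OMP_FIRST_DESC_IDX  (NUM_NDK_DESC + NUM_SRIO_DESC)   /* = 256 */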

        B. I use the original method to initialize QMSS: my own QMSS initialization function is not added in the cfg file but is called from main(), and the OpenMP QMSS configuration in the cfg file is commented out (that means I do not use the code above to configure the OpenMP QMSS in the cfg). This method still works well. I think I still do not understand QMSS initialization with OpenMP. According to the user guide, if I do not set ompSettings.runtimeInitializesQmss = false, QMSS is initialized automatically and uses the default memRegion0. So when I then call my own QMSS initialization function, I would expect it to conflict with the OpenMP initialization, but the result is that the program works well.

    Could you please explain this behavior? Or is it fine to initialize the OpenMP QMSS using either of the ways above?

    Finally, thank you very much for your reply.

    Best Regards 

    Zhang

  • Hi Yordan,

    I encountered a new problem. When I add my UDP program into the SRIO and OpenMP project, I find that when my UDP task runs it shuts down the OpenMP task; for this test I removed all the System_printf() calls from my OpenMP task. On the other hand, if I keep the System_printf() calls in my OpenMP parallel region, printing to the console costs so much time that the UDP task never starts. It seems they cannot work together well. What can I do to solve this?

    My OpenMP task is started in the cfg with priority 10, and the UDP task is started in main() with priority 3.

    Best Regards 

    Zhang

  • Hi,

    The UDP task priority of 3 is lower than the OMP task. Is there any reason to set it this way? Are you able to lower the OMP priority? See the NDK user guide for the priority discussion.

    SPRU 523:

    3.1.2.2 Priority Levels for Network Tasks

    The stack is designed to be flexible, and has a OS adaptation layer that can be adjusted to support any system software environment that is built on top of SYS/BIOS. Although the environment can be adjusted to suit any need by adjusting the HAL, NETCTRL and OS modules, the following restrictions should be noted for the most common environments:

    1. The Network Control Module (NETCTRL) contains a network scheduler thread that schedules the processing of network events. The scheduler thread can run at any priority with the proper adjustment. Typically, the scheduler priority is low (lower than any network Task), or high (higher than any network Task). Running the scheduler thread at a low priority places certain restrictions on how a Task can operate at the socket layer. For example:

    • If a Task polls for data using the recv() function in a non-block mode, no data is ever received because the application never blocks to allow the scheduler to process incoming packets.

    • If a Task calls send() in a loop using UDP, and the destination IP address is not in the ARP table, the UDP packets are not sent because the scheduler thread is never allowed to run to process the ARP reply.

    These cases are seen more in UDP operation than in TCP. To make the TCP/IP behave more like a standard socket environment for UDP, the priority of the scheduler thread can be set to high priority. See Chapter 4 for more details on network event scheduling.

    SPRU 524:

    TaskSetPri Set Task Priority

    Regards, Eric
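
    As a rough illustration of this suggestion (the task name and priority values are illustrative, not from the thread), the UDP task could be created from main() with a priority above the OpenMP task using the SYS/BIOS Task API, or alternatively the OpenMP task priority in the .cfg could simply be lowered:

    #include <xdc/std.h>
    #include <xdc/runtime/Error.h>
    #include <xdc/runtime/System.h>
    #include <ti/sysbios/knl/Task.h>

    extern Void udpTask(UArg arg0, UArg arg1);   /* hypothetical UDP task function */

    Void createUdpTask(Void)
    {
        Task_Params params;
        Error_Block eb;

        Error_init(&eb);
        Task_Params_init(&params);
        params.instance->name = "taskUdp";
        params.priority       = 12;        /* assumption: above the OpenMP task */
        params.stackSize      = 0x2000;

        if (Task_create(udpTask, &params, &eb) == NULL)
            System_abort("UDP task creation failed\n");
    }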