Hi everyone,
I’ve built a Single Program Multiple Data (SPMD) program using OpenMP (omp_1_01_02_06) and MCSDK (mcsdk_2_01_01_04) on a TMDSEVMC6678L board, initialized with GEL file Ver 2.004 (currently using the ‘no boot’ configuration). I use the following OpenMP pragmas and for loop to assign the iterations to each thread:
#pragma omp parallel
#pragma omp master
{
    no_threads = omp_get_num_threads();
    platform_write("Executing in %d threads\n\n", no_threads);
}

/* thread_id and the loop counter a are per-thread, so they need to be in the private list */
#pragma omp parallel shared(/*some vars here*/) private(/*some vars here*/)
{
    thread_id = omp_get_thread_num();
    /* cyclic distribution: thread t handles iterations t, t + no_threads, t + 2*no_threads, ... */
    for (a = thread_id; a < iteration; a += no_threads)
    {
        ......
    }
}
My .cfg file is attached: 7331.image_processing_openmp_evmc6678l.cfg
The project processes a cube of data. At the beginning of the code, the variable ‘iteration’ is determined first; if ‘iteration’ is less than or equal to 8, omp_set_num_threads(iteration) is called to set the number of cores used. 8 is the maximum value of ‘iteration’.
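For clarity, the thread-count setup looks roughly like this (a minimal sketch; the function name and the MAX_CORES macro are placeholders, not my exact code):

#include <omp.h>

#define MAX_CORES 8   /* placeholder: the C6678 has 8 cores */

/* Sketch of the setup, assuming 'iteration' has already been
 * computed from the data cube and never exceeds 8. */
void setup_threads(int iteration)
{
    if (iteration <= MAX_CORES)
        omp_set_num_threads(iteration);   /* one OpenMP thread per iteration */
    else
        omp_set_num_threads(MAX_CORES);   /* cap at the number of cores */
}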
I’ve executed the code, and when ‘iteration’ equals 1 (i.e. only 1 core is used), the code runs without any problem, and likewise with 2 cores. However, when I use a larger number of iterations, some of the cores keep running while others have exited (with an exception error message on the console). The exception information I could capture from ROV is as follows:
Exception,
Address,0x0c00d110
Decoded,Internal: Opcode exception;
Exception Context,
$addr,0x0084d2f0
$type,ti.sysbios.family.c64p.Exception.Context
A0,0x00000000
A1,0x00000402
A10,0x00000000
A11,0x00000000
A12,0x00000000
A13,0x00000000
A14,0x00000000
A15,0x00000000
A16,0x00000000
A17,0x00000000
A18,0x9007fbd4
A19,0x00000020
A2,0x00000000
A20,0x902407a0
A21,0x00000000
A22,0x600c8144
A23,0xc09291b4
A24,0x00000000
A25,0x00000000
A26,0x00000000
A27,0x00000000
A28,0x00000400
A29,0x00000001
A3,0x00000031
A30,0x00000000
A31,0x00000000
A4,0x00000030
A5,0x41ba44e2
A6,0x80000000
A7,0x40200000
A8,0x00000031
A9,0x0c0e5ca0
AMR,0x00000000
B0,0x00000000
B1,0x00000001
B10,0x00000000
B11,0x00000000
B12,0x00000000
B13,0x00000000
B14,0xa02515a0
B15,0x9007ffe8
B16,0x00000000
B17,0x00000000
B18,0x00004980
B19,0x00000000
B2,0x9007fb38
B20,0x00000000
B21,0x00000000
B22,0x00000000
B23,0x00000000
B24,0x00000030
B25,0x00000000
B26,0x00000000
B27,0x00000030
B28,0x00000000
B29,0x00000030
B3,0x0c00d0f0
B30,0x00000000
B31,0x00000000
B4,0x00000100
B5,0x00000100
B6,0x00000000
B7,0x00000000
B8,0x21000000
B9,0x41ba44e2
EFR,0x00000002
IERR,0x00000008
ILC,0x00000000
IRP,0x0c06f434
ITSR,0x0000000f
NRP,0x0c00d110
NTSR,0x0001000f
RILC,0x00000031
SSR,0x00000000
The idea of using SPMD is to reduce the memory usage of each core, so the memory used with 1 core is larger than with 2 cores. Can anyone give any idea/feedback on the problem I’m facing?
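For reference, here is a minimal, self-contained version of the work-sharing pattern I am trying to use (the array name, ITERATION, and SLICE_SIZE are placeholders, not my actual data cube). The per-thread variables are declared inside the parallel region, so they are automatically private to each thread:

#include <omp.h>

#define ITERATION  8      /* placeholder: derived from the data cube in my real code */
#define SLICE_SIZE 1024   /* placeholder slice length */

float cube[ITERATION][SLICE_SIZE];   /* placeholder for the shared data cube */

int main(void)
{
    int no_threads;

    omp_set_num_threads(ITERATION <= 8 ? ITERATION : 8);

    #pragma omp parallel shared(cube, no_threads)
    {
        /* declared inside the region, hence private to each thread */
        int thread_id = omp_get_thread_num();
        int a, j;

        #pragma omp master
        no_threads = omp_get_num_threads();
        #pragma omp barrier   /* master has no implied barrier; ensure no_threads is set */

        /* cyclic distribution of the slices across the threads */
        for (a = thread_id; a < ITERATION; a += no_threads)
            for (j = 0; j < SLICE_SIZE; j++)
                cube[a][j] *= 2.0f;   /* stand-in for the real processing */
    }
    return 0;
}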
Thanks and kind regards,
Rizuan