I am having trouble using memset and a long "for" loop in my main loop. The problem is that the memset and the loop seem to cause wrong floating-point calculation results.
The program clears a 117 KB globally defined "unsigned int[6000][5]" array with memset from the RTS library (the Run-Time-Support library in CCS) and executes a long custom quick-sort "for" loop that takes over 2 milliseconds, both at an 8 Hz rate. There is also an 8 Hz interrupt, so the main-loop algorithm and the ISR run concurrently at the same 8 Hz interval. In other words, while memset and the sort are executing, the main loop is interrupted by the 8 Hz ISR.
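For context, the clearing step is just the usual memset over a global two-dimensional array. This is a minimal sketch of that pattern, not my actual source; the names g_buf and clear_buffer are placeholders:

```c
#include <string.h>

/* ~117 KB global array, cleared once per 8 Hz cycle with the RTS memset().
   6000 * 5 * sizeof(unsigned int) = 120000 bytes on the C6701. */
unsigned int g_buf[6000][5];

void clear_buffer(void)
{
    memset(g_buf, 0, sizeof(g_buf));
}
```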
However, the 8 Hz ISR does not touch or modify the algorithm's floating-point calculation data at all. Only the quick-sort "for" loop sorts a one-dimensional floating-point array. I know you may still suspect the custom quick-sort code, but it is commonly used and has been verified by many people. The quick sort does not use the heap; it operates only on global variables, with no malloc or heap memory.
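To make the pattern concrete, the sort is an ordinary in-place quick sort over a global float array. The sketch below shows that shape under my own assumptions (the names g_vals and quick_sort, the array size, and the Lomuto partition are all hypothetical, not my actual code); the point is only that it touches globals and the call stack, never the heap:

```c
/* Global working array -- no malloc/heap, mirroring the description above. */
#define N 8
float g_vals[N] = {3.5f, 1.25f, 4.0f, 1.0f, 5.5f, 9.0f, 2.0f, 6.0f};

/* Plain recursive in-place quick sort (Lomuto partition). */
void quick_sort(float *a, int lo, int hi)
{
    int i, j;
    float pivot, tmp;

    if (lo >= hi)
        return;

    pivot = a[hi];          /* last element as pivot */
    i = lo;
    for (j = lo; j < hi; j++) {
        if (a[j] < pivot) { /* move smaller elements to the front */
            tmp = a[i]; a[i] = a[j]; a[j] = tmp;
            i++;
        }
    }
    tmp = a[i]; a[i] = a[hi]; a[hi] = tmp;  /* place pivot */

    quick_sort(a, lo, i - 1);
    quick_sort(a, i + 1, hi);
}
```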
I suspect that the RTS memset and the long "for" loop cause issues when the ISR is serviced on the DSP. Is there any clue?
I use a TMS320C6701 and CCS 9.2.0.00013. The compiler is C6000 7.4.24 for the DSP, and the compile options are below.
"C:/ti/ccs920/ccs/tools/compiler/c6000_7.4.24/bin/cl6x" -mv6700 --abi=coffabi -O2 -ms3 --diag_wrap=off --diag_warning=225 --display_error_number --interrupt_threshold=1 --preproc_with_compile "my_application.c"
The .cmd file simply places .pinit, .text, .cinit, .switch, .data, .const, .bss, .cio, .sysmem, .far, .args, .ppinfo, and .ppdata one by one into SRAM.
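For reference, the SECTIONS part of the .cmd file looks roughly like the sketch below (the SRAM memory-range name, origin, and length here are placeholders, not my actual map):

```
MEMORY
{
    SRAM : origin = 0x00000000, length = 0x00040000  /* placeholder values */
}

SECTIONS
{
    .pinit  > SRAM
    .text   > SRAM
    .cinit  > SRAM
    .switch > SRAM
    .data   > SRAM
    .const  > SRAM
    .bss    > SRAM
    .cio    > SRAM
    .sysmem > SRAM
    .far    > SRAM
    .args   > SRAM
    .ppinfo > SRAM
    .ppdata > SRAM
}
```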