Hello all,
I'm using a DM648 device and trying to allocate part of the L1D cache as SRAM.
I edited the example project "dm648_evm_vport_st_hd_compositor_sample" and made the following changes:
- Followed this page and set the 64P L1DCFG mode to 16KB.
- Added detection() to the project; detection() calls memset().
- Set a breakpoint at the memset() call.
- Ran the program.
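One more thing worth double-checking alongside the steps above: for the DATA_SECTION pragmas in the snippet further down to take effect, the custom "L1D_SRAM" output section must also be mapped to the L1D address range in the linker command file (or DSP/BIOS memory configuration); otherwise the arrays may land at an invalid address. A hypothetical fragment is sketched below — the origin address is a placeholder, and the real L1D SRAM range for the DM648 must be taken from its data sheet:

```
/* Placeholder addresses -- substitute the actual L1D range from the DM648 data sheet */
MEMORY
{
    L1DSRAM: origin = 0x00F10000, length = 0x4000   /* 16KB of L1D configured as SRAM */
}

SECTIONS
{
    L1D_SRAM > L1DSRAM   /* output section named by the DATA_SECTION pragmas */
}
```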
When the program reaches the breakpoint, the call stack looks like this:
detection()
test_video_compositor()
tskHdCompositor()
TSK_exit()
Then, when I press F10 (step over), the call stack becomes:
detection()
0x00000000
What is happening when I memset the data placed in L1D SRAM?
Here is the relevant piece of the source code.
/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
#define MAX_CORNER_NUM 1920
#pragma DATA_SECTION(CORNER_X, "L1D_SRAM")
#pragma DATA_SECTION(CORNER_Y, "L1D_SRAM")
#pragma DATA_SECTION(OF_X, "L1D_SRAM")
#pragma DATA_SECTION(OF_Y, "L1D_SRAM")
static short CORNER_X[MAX_CORNER_NUM];
static short CORNER_Y[MAX_CORNER_NUM];
static short OF_X[MAX_CORNER_NUM];
static short OF_Y[MAX_CORNER_NUM];
#pragma DATA_SECTION(BUFFER, "L1D_SRAM")
static unsigned char BUFFER[384];
static void detection(ChannelInfo *inChInfo)
{
.........
    /* note: this writes MAX_CORNER_NUM * 8 = 15360 bytes, but CORNER_X is
       only MAX_CORNER_NUM * sizeof(short) = 3840 bytes */
    memset(CORNER_X, 0, MAX_CORNER_NUM * 8);
.........
}
/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
Best regards