J784S4XEVM: TDA4VH

Part Number: J784S4XEVM


In the deep learning neural network demos, e.g. app_tidl_od, which memory section (L1, L2, L3, or EXTERNAL) is allocated for the feature maps (inPtrs[], outPtrs[]) when processing a convolution layer, pooling layer, etc.?

int32_t TIDL_conv2dProcess(TIDL_Handle intAlgHandle,
                           sTIDL_AlgLayer_t *algLayer,
                           sTIDL_Layer_t *tidlLayer,
                           void *inPtrs[],
                           void *outPtrs[],
                           sTIDL_sysMemHandle_t *sysMems)


or


int32_t TIDL_poolingProcess(TIDL_Handle intAlgHandle,
                            sTIDL_AlgLayer_t *algLayer,
                            sTIDL_Layer_t *tidlLayer,
                            void *inPtrs[],
                            void *outPtrs[],
                            sTIDL_sysMemHandle_t *sysMems)

  • Hi,

    At the time of TIDL node creation, the input buffer pointers are fetched from external memory (DDR), and the computed output of inference is likewise stored in DDR.

    However, during node graph execution for model inference, the intermediate buffers are placed in internal memories. This buffer placement is part of the core logic generated by the model compiler, and it may vary from model to model.

    As an experiment, you can print the buffer addresses and compare them against the base addresses of the memory sections to get an idea of where the buffers are placed.

    You can refer to /$PSDK_INSTALL_PATH/vision_apps/platform/<$SoC>/rtos/app_mem_map.h for the memory address mapping; a rough sketch of this address check is shown below this reply.

    Please note that the buffer placement logic is internal to TI.
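
    As a rough illustration (not TI code), such a check could look like the sketch below. The EXAMPLE_* base addresses and sizes are placeholders to be replaced with the actual values for your SoC from app_mem_map.h, and classifyBuffer/printLayerBuffers are hypothetical helper names with the buffer counts passed in by the caller; on the C7x firmware side you may also need to route the output through the application's logging mechanism instead of printf.

    #include <stdio.h>
    #include <stdint.h>

    /* Placeholder section ranges -- replace with the values from app_mem_map.h */
    #define EXAMPLE_L1_BASE   0x64E00000u
    #define EXAMPLE_L1_SIZE   (32u * 1024u)
    #define EXAMPLE_L2_BASE   0x64800000u
    #define EXAMPLE_L2_SIZE   (448u * 1024u)
    #define EXAMPLE_L3_BASE   0x70000000u
    #define EXAMPLE_L3_SIZE   (8u * 1024u * 1024u)

    /* Map a buffer address onto one of the memory sections */
    static const char *classifyBuffer(const void *ptr)
    {
        uintptr_t addr = (uintptr_t)ptr;

        if ((addr >= EXAMPLE_L1_BASE) && (addr < (EXAMPLE_L1_BASE + EXAMPLE_L1_SIZE)))
        {
            return "L1";
        }
        if ((addr >= EXAMPLE_L2_BASE) && (addr < (EXAMPLE_L2_BASE + EXAMPLE_L2_SIZE)))
        {
            return "L2";
        }
        if ((addr >= EXAMPLE_L3_BASE) && (addr < (EXAMPLE_L3_BASE + EXAMPLE_L3_SIZE)))
        {
            return "L3";
        }
        return "EXTERNAL (DDR)";
    }

    /* Example: call at the top of TIDL_conv2dProcess / TIDL_poolingProcess */
    static void printLayerBuffers(void *inPtrs[], int32_t numIn,
                                  void *outPtrs[], int32_t numOut)
    {
        int32_t i;

        for (i = 0; i < numIn; i++)
        {
            printf("inPtrs[%d]  = %p -> %s\n", (int)i, inPtrs[i], classifyBuffer(inPtrs[i]));
        }
        for (i = 0; i < numOut; i++)
        {
            printf("outPtrs[%d] = %p -> %s\n", (int)i, outPtrs[i], classifyBuffer(outPtrs[i]));
        }
    }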