
DM365 IPNC-MT5 Capture-Resize-Encode process



Hi,

I found that in the IPNC source code, the captured image is being stored in a file.

The call path in videoCaptureThr.c is: VideoCaptureCreate() --> Video_captureTskRunIsifIn() --> DRV_imageTuneIsSaveDate() --> IMAGE_TUNE_SaveData()

IMAGE_TUNE_SaveData() is in framework\image_tune\imageTune.c:

memset(tmpBuf, 0, fileHeader.validDataStartOffset);
memcpy(tmpBuf, &fileHeader, sizeof(fileHeader));

fp = fopen(filename, "wb");
if (fp == NULL) {
    free(tmpBuf);
    return OSA_EFAIL;
}

writeDataSize  = fwrite(tmpBuf, 1, fileHeader.validDataStartOffset, fp);
writeDataSize += fwrite(info->frameVirtAddr, 1, dataSize, fp);

fclose(fp);
free(tmpBuf);
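For reference, a dump saved this way can be inspected offline by skipping the header area. The sketch below is an assumption about the layout: DumpFileHeader and all its fields except validDataStartOffset are illustrative placeholders, not the real fileHeader definition from imageTune.c; only the idea that the raw YUV payload starts at validDataStartOffset comes from the code above.

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* Hypothetical header layout -- the real structure is whatever
   fileHeader is in imageTune.c. Only validDataStartOffset is used
   here, to locate where the YUV payload begins in the file. */
typedef struct {
    uint32_t magic;
    uint32_t validDataStartOffset;  /* file offset of the YUV data */
    uint32_t width;
    uint32_t height;
} DumpFileHeader;

/* Read the YUV payload out of a dump written in the style of
   IMAGE_TUNE_SaveData(). Returns a malloc'd buffer (caller frees)
   and sets *dataSize, or returns NULL on any failure. */
unsigned char *dump_read_yuv(const char *filename, long *dataSize)
{
    FILE *fp = fopen(filename, "rb");
    if (fp == NULL)
        return NULL;

    DumpFileHeader hdr;
    if (fread(&hdr, sizeof(hdr), 1, fp) != 1) { fclose(fp); return NULL; }

    /* payload size = total file size - header area */
    fseek(fp, 0, SEEK_END);
    long fileSize = ftell(fp);
    *dataSize = fileSize - (long)hdr.validDataStartOffset;
    if (*dataSize <= 0) { fclose(fp); return NULL; }

    unsigned char *buf = malloc((size_t)*dataSize);
    if (buf == NULL) { fclose(fp); return NULL; }

    fseek(fp, (long)hdr.validDataStartOffset, SEEK_SET);
    if (fread(buf, 1, (size_t)*dataSize, fp) != (size_t)*dataSize) {
        free(buf);
        fclose(fp);
        return NULL;
    }
    fclose(fp);
    return buf;
}
```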

But in the encode process, the input data is passed to

VIDENC1_process(pObj->hEncode, &inBufDesc, &outBufDesc, &h264InArgs, &outArgs); in algVidEnc.c

From where does the captured image or resized image data get copied into inBufDesc?

How can I check whether the captured/resized data is actually input to the video encoder?

  • Hi Anshuman,

    Along with my previous thread, I found the following.

    A queue of 32 buffers is used, controlled via the bufId through the OSA_BufHndl { } --> OSA_BufInfo { } structures.

    I am not able to understand why the buffer queues are needed and how they are used across all the processes: capture, resize, and encode.

    Please explain the buffer queue implemented here and why it is needed.

  • Hi,

    If you refer to the videoCaptureThr.c file, you can notice that there is a function call AVSERVER_bufPutFull(). This API actually puts the captured YUV buffer in the queue which is sent to the encoder. Because the encoder, capture, and display are all different threads, the buffers are passed between these threads using queues implemented in the OSA layer.

    There is no correlation between saving data to the file and the buffers passed to the encoder. The only relation is that the DRV_imageTuneSaveData() API is called before passing the buffer to the encoder, so that you can look at the YUV data offline for any sort of debugging.

    Please refer to AVServer_DesignGuide to understand where the software data buffer queues are used.

    Regards,

    Anshuman

    PS: Please mark this post as verified, if you think it has answered your question. Thanks.
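To make the buffer-handoff idea above concrete, here is a minimal sketch of the empty/full buffer-ID queue pattern commonly used in such zero-copy pipelines. The names (IdQueue, q_put, q_get) are illustrative, not the real OSA_* API: the point is that only a small integer buffer ID moves between the capture and encode threads, never the pixel data itself.

```c
#define NUM_BUFS 32  /* matches the 32-buffer queue mentioned above */

/* A fixed-size circular queue of buffer IDs. In the real system there
   are typically two such queues per link: an "empty" queue the capture
   thread pops from, and a "full" queue the encoder thread pops from. */
typedef struct {
    int ids[NUM_BUFS];
    int head, tail, count;
} IdQueue;

void q_init(IdQueue *q)
{
    q->head = q->tail = q->count = 0;
}

/* Push a buffer ID; returns 0 on success, -1 if the queue is full. */
int q_put(IdQueue *q, int id)
{
    if (q->count == NUM_BUFS)
        return -1;
    q->ids[q->tail] = id;
    q->tail = (q->tail + 1) % NUM_BUFS;
    q->count++;
    return 0;
}

/* Pop the oldest buffer ID; returns 0 on success, -1 if empty. */
int q_get(IdQueue *q, int *id)
{
    if (q->count == 0)
        return -1;
    *id = q->ids[q->head];
    q->head = (q->head + 1) % NUM_BUFS;
    q->count--;
    return 0;
}

/* Lifecycle: at init all 32 IDs sit in the empty queue, then circulate:
   emptyQ -> capture fills buffer -> fullQ -> encoder consumes -> emptyQ */
```

In the actual firmware these queues would also carry per-buffer metadata (the OSA_BufInfo virtual/physical addresses, timestamps) and be protected by mutexes/semaphores so threads can block on an empty or full queue.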

  • Hi Anshuman,

    Thanks for your info, and sorry for the late reply, but there is not much information in AVServer_DesignGuide.pdf.

    What do VIDEO_fdCopyRun() and VIDEO_displayCopyRun() do then? As I understood earlier, these two functions copy the captured/resized data into the buffers.

    I have gone through the function AVSERVER_bufPutFull() and its calls, but I am not able to understand how it copies the captured data into the buffer.