This thread has been locked.

If you have a related question, please click the "Ask a related question" button in the top right corner. The newly created question will be automatically linked to this question.

DM355 VPBE decode

hi

       I'm using the DM355 EVM and trying to decode a stream captured and encoded from an IP camera, but the "VIDDEC2_process(hDecode, &inBufDesc, &outBufDesc, &inArgs, &outArgs)" call in my "decodeVideoBuffer()" function fails with "outArgs.bytesConsumed = 0", even though the same function decodes the example "*_pal.mpeg4" file correctly. So I want to know: does the decoder need a header, and what format is an example file like "_pal.mpeg4"? Does it have a header, or is it just stream data?

   thanks in advance!

  • Our codecs work on elementary streams; no headers are needed. In fact, if your media files have any container headers, you need to strip them out and pass only elementary stream data to our decoders.

  • As you say, but then how does the hardware know the size of one frame? When I copy a large buffer, e.g. 1382400 bytes, into "decodeVideoBuffer()" as the input buffer, it consumes only 4530 bytes as one frame. What controls this?

    Note that I set "params.maxWidth = 960" and "params.maxHeight = 720" in "videoDecodeAlgCreate()", which I thought would set the output stream's resolution, but I still get the stream's own resolution: "decStatus.outputWidth 720  decStatus.outputHeight 576". I am confused and need your help.

    Some printf output for reference:

    params.maxWidth = 960   params.maxHeight  = 720

    inBufDesc.descs[0].bufSize  1382400, outBufDesc.bufSizes 1095945052  outArgs.bytesConsumed 266
    video1.c  get dec_status  decStatus.outputWidth 720  decStatus.outputHeight  576

  • Eric,

    From a video capture/display perspective, the frame size is determined by synchronization signals (hsync, vsync, fid), which can be defined via hardware registers or received from an external master chip.

    From a codec compression/decompression perspective, the size is defined by the parameters you pass via the create API you mentioned above; however, you should also refer to the codec datasheet (included with the DVSDK install) to make sure your codec can handle the resolution you are asking for; otherwise it may default to a smaller resolution.

    In addition, as you can imagine, if you want to decode and display, there is a relationship that must be observed between the video size you define for your display driver and the size you define for your codec.
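    One point worth illustrating here is that the create-time maximums are capability limits (used for worst-case buffer allocation), not the output resolution: the decoder reports the stream's actual size via its status structure. Below is a minimal sketch of that relationship; the `dec_params`/`dec_status` structs and `stream_fits()` helper are simplified stand-ins invented for illustration, not the real DVSDK `VIDDEC2_Params`/`VIDDEC2_Status` definitions.

    ```c
    #include <assert.h>
    #include <stdio.h>

    /* Hypothetical, simplified stand-ins for the DVSDK's
     * VIDDEC2_Params / VIDDEC2_Status structures. */
    struct dec_params { int maxWidth, maxHeight; };
    struct dec_status { int outputWidth, outputHeight; };

    /* maxWidth/maxHeight bound what the decoder will accept; they do not
     * force the output size. The stream's true size comes back in status. */
    static int stream_fits(const struct dec_params *p, int w, int h)
    {
        return w <= p->maxWidth && h <= p->maxHeight;
    }

    int main(void)
    {
        struct dec_params params = { .maxWidth = 960, .maxHeight = 720 };
        /* a PAL stream decodes at its native 720x576, as in the log above,
         * because 720x576 is within the 960x720 capability limit */
        struct dec_status status = { .outputWidth = 720, .outputHeight = 576 };

        assert(stream_fits(&params, status.outputWidth, status.outputHeight));
        printf("output %dx%d within max %dx%d\n",
               status.outputWidth, status.outputHeight,
               params.maxWidth, params.maxHeight);
        return 0;
    }
    ```

    This matches the printf log earlier in the thread: maxWidth/maxHeight of 960x720 with an output of 720x576 is expected behavior, not an error.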

  • Juan,

    I understand what you explained above, and I'm sure the hardware can decode even 1080i resolution. What confuses me is this: as in your decode demo, when I copy the same size, e.g. 260000 bytes, into "decodeVideoBuffer()" as the input buffer, "outArgs.bytesConsumed" is much smaller, e.g. 4500 or 400, for one frame. That seems right for an encoded .mpeg4 file, since I know every frame has a different size, but since I pass the same parameters to the decode algorithm, what controls this? With the same parameters we can decode both "*_ntsc.mpeg4" and "*_pal.mpeg4", so without a header, what controls the size of one frame so it decodes correctly?

    thanks

  • I think I understand your question now. As you noted, during encode your input frame size is constant, but your encoded frame size varies depending on how much compression is achieved on that frame. Conversely, during decode your input frame size varies, but your output frame size is expected to be constant (say, NTSC frames being produced). Focusing on the decode case: since there is no header information, you are expected to know the resolution of your encoded elementary stream (i.e. the output frame size), and you pass this information to the codec via the VIDDEC2_create VISA API call. As far as your input frame size is concerned, the codec knows how much data makes up the next frame, and the application can get this information via a VIDDEC2_control call. Therefore, for your VIDDEC2_process call, you pass in the input buffer size returned from VIDDEC2_control and the output buffer size specified at VIDDEC2_create time. If you pass in a larger input buffer, the codec will likely consume only the bytes that make up a single frame and ignore the rest, which means you need to keep track of how many bytes were consumed so that you do not lose data by discarding the remainder before reading a new buffer from the file.

    FYI, the decode demo example included in the DVSDK is a good reference example on the calls that need to be made to decode a video stream from a file.
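    The bookkeeping described above can be sketched as a small, self-contained loop. Note that `mock_decode_one_frame()` below is a hypothetical stand-in for `VIDDEC2_process()` (it pretends the first byte of each frame encodes that frame's size, which real MPEG-4 streams do not do); only the consumed-bytes/shift-remainder pattern is the point being illustrated.

    ```c
    #include <assert.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical stand-in for VIDDEC2_process(): consumes one "frame" of
     * varying size from the front of the buffer and returns the number of
     * bytes consumed (outArgs.bytesConsumed in the real API), or 0 when a
     * complete frame is not yet available. */
    static size_t mock_decode_one_frame(const unsigned char *buf, size_t avail)
    {
        if (avail == 0) return 0;
        size_t frame_size = (size_t)buf[0];   /* toy convention, not MPEG-4 */
        return frame_size <= avail ? frame_size : 0;
    }

    /* After a decode call, move the unconsumed tail to the front of the
     * buffer so the next file read can append fresh bitstream data. */
    static size_t shift_remainder(unsigned char *buf, size_t filled,
                                  size_t consumed)
    {
        size_t remaining = filled - consumed;
        memmove(buf, buf + consumed, remaining);
        return remaining;   /* new fill level of the buffer */
    }

    int main(void)
    {
        /* toy "elementary stream": first byte of each frame = its size */
        unsigned char buf[64] = { 4, 0xAA, 0xBB, 0xCC,       /* 4-byte frame */
                                  3, 0xDD, 0xEE,             /* 3-byte frame */
                                  5, 0x11, 0x22, 0x33, 0x44  /* 5-byte frame */ };
        size_t filled = 12;
        int frames = 0;

        for (;;) {
            size_t consumed = mock_decode_one_frame(buf, filled);
            if (consumed == 0)
                break;            /* a real app would fread() more data here */
            frames++;
            filled = shift_remainder(buf, filled, consumed);
        }
        printf("decoded %d frames, %zu bytes left over\n", frames, filled);
        /* prints: decoded 3 frames, 0 bytes left over */
        return 0;
    }
    ```

    The same pattern applies with the real API: after each VIDDEC2_process call, shift the unconsumed tail to the front of the input buffer and top it up from the file before the next call.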