
DM365 H.264 codec integration on DM36X EVM

I am trying to integrate the H.264 platinum codecs into the DVSDK 3.10 demos on the DM36X EVM, following the instructions in:

http://processors.wiki.ti.com/index.php/Migration/Integration_Guide_for_DM36x_H.264_version_2.x_codecs#How_to_integrate_ver_2.0_codec_in_DVSDK_3.1_demo.3F

The decoder works OK, although ARM usage is much higher than with version 1.1 of the decoder. The encoder, however, gives a segmentation fault during creation. The log with CE_DEBUG=3 is attached. Does anyone know if something else has to be changed (extended parameters, perhaps)? The encode demo does not use extended parameters at all.

  • Alexander,

    For the DM365 H.264 v2.00 (platinum) decoder, base parameters give you the universal decoder (v1.1 behaviour), which is slower; extended parameters enable the closed-loop decoder, which is faster. I have benchmarked the v2.00 decoder with the DMAI file I/O apps and have noticed only about a 1% difference from v1.1. I have not benchmarked the demos yet; there may be other system issues involved.

    For the DM365 H.264 v2.00 encoder, I have also seen segmentation faults when the MEMTCM module was not initialized in the .cfg file. After updating the .cfg file and loadmodules.sh I had no problems with the encoder. If you continue to see issues with the demos, can you please try the DMAI sample application video_encode_io1?

    Thanks

    Cesar

  • Cesar,

    For the DM365 H.264 v2.00 (platinum) decoder I have used the extended parameters recommended in the Migration Guide:

    extnParams.displayDelay = 8;
    extnParams.levelLimit = 0;
    extnParams.disableHDVICPeveryFrame = 0;
    extnParams.inputDataMode = 1;
    extnParams.sliceFormat = 1;
    extnParams.frame_closedloop_flag = 1;
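
    For reference, the create call in the demo's video thread ends up looking roughly like the DMAI-style sketch below. The ih264vdec.h header name and the viddecParams base-field name are assumptions (check the headers in your codec package), and the engine handle and "h264dec" codec alias are whatever the demo already uses:

    #include <xdc/std.h>
    #include <ti/sdo/ce/Engine.h>
    #include <ti/sdo/ce/video2/viddec2.h>
    #include <ti/sdo/dmai/Dmai.h>
    #include <ti/sdo/dmai/ce/Vdec2.h>
    #include <ih264vdec.h>   /* assumption: extended-params header shipped with the v2.00 decoder */

    static Vdec2_Handle create_h264_decoder(Engine_Handle hEngine)
    {
        /* Extended parameter structure; the name of the embedded base-params
           field (viddecParams) is an assumption -- check ih264vdec.h. The cast
           below relies on it being the first member, as usual for XDM codecs. */
        IH264VDEC_Params      extnParams;
        VIDDEC2_DynamicParams dynParams = Vdec2_DynamicParams_DEFAULT;

        extnParams.viddecParams      = Vdec2_Params_DEFAULT;      /* DMAI base defaults */
        extnParams.viddecParams.size = sizeof(IH264VDEC_Params);  /* tell CE the struct is extended */

        extnParams.displayDelay            = 8;
        extnParams.levelLimit              = 0;
        extnParams.disableHDVICPeveryFrame = 0;
        extnParams.inputDataMode           = 1;
        extnParams.sliceFormat             = 1;
        extnParams.frame_closedloop_flag   = 1;   /* enables the faster closed-loop decoder */

        /* "h264dec" must match the codec alias configured in the .cfg file. */
        return Vdec2_create(hEngine, "h264dec",
                            (VIDDEC2_Params *) &extnParams, &dynParams);
    }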

    For both the decoder and the encoder, I have initialized MEMTCM in the .cfg file and updated loadmodules.sh with allowOverlap=1 phys_start_1=0x00001000 phys_end_1=0x00008000 pools_1=1x28672:

    var MEMTCM = xdc.useModule('ti.sdo.fc.ires.memtcm.MEMTCM');
    MEMTCM.cmemBlockId = 1;
    var EDMA3 = xdc.useModule('ti.sdo.fc.edma3.Settings');
    EDMA3.maxRequests = 128;

    Without MEMTCM initialization, the demo fails with the error "Assignment of alg resources through RMAN FAILED (0x7)".
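
    For completeness, the corresponding cmemk line in loadmodules.sh looks roughly as below; only the block 1 arguments are the ones quoted above, while the block 0 addresses and pools are placeholders for whatever your DVSDK script already uses:

    # loadmodules.sh sketch: block 1 maps the ARM internal RAM (TCM) for MEMTCM.
    # Block 0 values are placeholders -- keep the ones from your existing script.
    insmod cmemk.ko phys_start=0x83C00000 phys_end=0x88000000 \
        pools=1x4096,8x829440 \
        allowOverlap=1 \
        phys_start_1=0x00001000 phys_end_1=0x00008000 pools_1=1x28672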

    Running video_encode_io1 results in the same segmentation fault at exactly the same place.

    BTW, how can I run video_decode_io2? CMEM complains about various pool sizes. What are the CMEM parameters to run this app?

  • Alexander,

    The CMEM pool sizes depend on the resolution of the decoded stream. You can adjust the sizes based on the error message you get; video_decode_io2 is a simple application, so a few iterations of this process should give you the correct CMEM pool sizes.
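
    As a sketch: each time CMEM reports that it cannot satisfy a requested buffer size, add or enlarge a pool of at least that size in loadmodules.sh and rerun. After a couple of passes the line ends up something like the following (the addresses and pool sizes below are purely illustrative, not values for any particular resolution):

    # Illustrative only -- derive the real pool list from the allocation sizes
    # that video_decode_io2 / CMEM report for your stream.
    insmod cmemk.ko phys_start=0x83C00000 phys_end=0x88000000 \
        pools=1x4096,2x1048576,6x1548288 allowOverlap=1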

    Cesar