Hi. I'm working on a custom DM368 board.
DM368 @ 432MHz, DVSDK 2.10.01.18, H.264 codec 2.30.
I want to encode H.264 1080p@30 plus JPEG D1@30 at the same time.
However, a single H.264 encode takes 30~31ms and the JPEG encode slightly over 4ms, for a total of about 36ms, which is over the ~33.3ms per-frame budget at 30fps (1000ms / 30 frames).
In some threads on this forum I have read that a single H.264 encode takes 27~28ms, which would leave enough headroom to make H.264 + JPEG possible.
I have tried tuning MSTPRI, RSZ_DMA, and the SDRAM settings, but none of them helped in my case.
Only when the DM368 is overclocked to 480MHz does the single H.264 stream encode drop to 27ms.
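For reference, this is roughly how I measure the per-frame encode time (a simplified sketch; handle and buffer setup are omitted, and hEncode / inBufDesc / outBufDesc / inArgs / outArgs are just my usual VIDENC1 variables):

#include <stdio.h>
#include <sys/time.h>

/* wall-clock time in milliseconds */
static double now_ms(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec * 1000.0 + tv.tv_usec / 1000.0;
}

/* around the encode call: */
double t0 = now_ms();
VIDENC1_process(hEncode, &inBufDesc, &outBufDesc, &inArgs, &outArgs);
printf("H.264 encode: %.1f ms\n", now_ms() - t0);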
I set up the H.264 codec as follows:
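/* Create-time parameters (IH264VENC_Params) */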
IH264Params.videncParams.size = sizeof(IH264VENC_Params);
IH264Params.videncParams.encodingPreset = XDM_HIGH_SPEED;
IH264Params.videncParams.maxHeight = 1088;
IH264Params.videncParams.maxWidth = 1920;
IH264Params.videncParams.maxFrameRate = 120000;
IH264Params.videncParams.maxInterFrameInterval = 0;
IH264Params.videncParams.inputChromaFormat = XDM_YUV_420SP;
IH264Params.videncParams.reconChromaFormat = XDM_YUV_420SP;
IH264Params.videncParams.dataEndianness = XDM_BYTE;
IH264Params.videncParams.inputContentType = IVIDEO_PROGRESSIVE;
IH264Params.profileIdc = 100;
IH264Params.levelIdc = 40;
IH264Params.transform8x8FlagIntraFrame = 0;
IH264Params.transform8x8FlagInterFrame = 1;
IH264Params.entropyMode = 1;
IH264Params.Log2MaxFrameNumMinus4 = 0;
IH264Params.ConstraintSetFlag = 0;
IH264Params.enableVUIparams = 0;
IH264Params.meAlgo = 0;
IH264Params.seqScalingFlag = 1;
IH264Params.encQuality = 3;
IH264Params.enableARM926Tcm = 0;
IH264Params.enableDDRbuff = 0;
IH264Params.sliceMode = 0;
IH264Params.numTemporalLayers = 0;
IH264Params.svcSyntaxEnable = 0;
IH264Params.EnableLongTermFrame = 0;
IH264Params.outputDataMode = 1;
IH264Params.sliceFormat = 1;
IH264Params.videncParams.rateControlPreset = IVIDEO_STORAGE;
IH264Params.videncParams.maxBitRate = 10000000;
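/* Run-time (dynamic) parameters (IH264VENC_DynamicParams) */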
IH264DynParams.rcAlgo = 1;
IH264DynParams.videncDynamicParams.targetBitRate = 6000000;
IH264DynParams.intraFrameQP = 28;
IH264DynParams.interPFrameQP = 28;
IH264DynParams.videncDynamicParams.size = sizeof(IH264VENC_DynamicParams);
IH264DynParams.videncDynamicParams.inputWidth = 1920;
IH264DynParams.videncDynamicParams.inputHeight = 1080;
IH264DynParams.videncDynamicParams.captureWidth = 0;
IH264DynParams.videncDynamicParams.targetFrameRate = 30000;
IH264DynParams.videncDynamicParams.refFrameRate = 30000;
IH264DynParams.videncDynamicParams.interFrameInterval = 0;
IH264DynParams.videncDynamicParams.intraFrameInterval = 30;
IH264DynParams.videncDynamicParams.generateHeader = XDM_ENCODE_AU;
IH264DynParams.sliceSize = 0;
IH264DynParams.airRate = 0;
IH264DynParams.initQ = -1;
IH264DynParams.rcQMax = 48;
IH264DynParams.rcQMin = 8;
IH264DynParams.rcQMaxI = 48;
IH264DynParams.rcQMinI = 8;
IH264DynParams.maxDelay = 2000;
IH264DynParams.aspectRatioX = 1;
IH264DynParams.aspectRatioY = 1;
IH264DynParams.lfDisableIdc = 0;
IH264DynParams.enableBufSEI = 0;
IH264DynParams.enablePicTimSEI = 0;
IH264DynParams.perceptualRC = 0;
IH264DynParams.idrFrameInterval = 30;
IH264DynParams.mvSADoutFlag = 1;
IH264DynParams.resetHDVICPeveryFrame = 2;
IH264DynParams.enableROI = 0;
IH264DynParams.metaDataGenerateConsume = 0;
IH264DynParams.maxBitrateCVBR = 0;
IH264DynParams.interlaceRefMode = 0;
IH264DynParams.enableGDR = 0;
IH264DynParams.GDRduration = 0;
IH264DynParams.GDRinterval = 0;
IH264DynParams.LongTermRefreshInterval = 0;
IH264DynParams.UseLongTermFrame = 0;
IH264DynParams.SetLongTermFrame = 0;
IH264DynParams.VUI_Buffer = NULL;
IH264DynParams.CustomScaleMatrix_Buffer = NULL;
IH264DynParams.CVBRsensitivity = 0;
IH264DynParams.CVBRminbitrate = 0;
IH264DynParams.LBRmaxpicsize = 0;
IH264DynParams.LBRminpicsize = 0;
IH264DynParams.LBRskipcontrol = 0;
IH264DynParams.maxHighCmpxIntCVBR = 0;
IH264DynParams.disableMVDCostFactor = 0;
IH264DynParams.putDataGetSpaceFxn = NULL;
IH264DynParams.dataSyncHandle = NULL;
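After filling these in, I apply the dynamic parameters in the usual way (a sketch only; error checking is trimmed, and hEncode is the handle from my VIDENC1_create() call):

VIDENC1_Status encStatus;
encStatus.size = sizeof(VIDENC1_Status);
VIDENC1_control(hEncode, XDM_SETPARAMS,
                (VIDENC1_DynamicParams *)&IH264DynParams, &encStatus);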
In my .cfg file I added:
MEMTCM.cmemBlockId = 1;
EDMA3.maxRequests = 128;
and, following the codec 2.0 integration guide, I appended the arguments below to the cmemk.ko line in my loadmodules.sh:
allowOverlap=1 phys_start_1=0x00001000 phys_end_1=0x00008000 pools_1=1x28672
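So the full cmemk line now looks roughly like this (the <main_*> values stand in for my board-specific main-block settings; only the _1 block comes from the guide):

insmod cmemk.ko phys_start=<main_start> phys_end=<main_end> pools=<main_pools> allowOverlap=1 phys_start_1=0x00001000 phys_end_1=0x00008000 pools_1=1x28672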
Can anybody help?
Thanks in advance.
PS. I saw the link below while googling.
In that article it says: "codec MemTab[] buffers which are needed at the time of codec create should be allocated from cached region."
What does this mean? When I create my codec instances, I just use memory from the pool allocated by CMEM.
Or should I be doing something more?
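For reference, my current allocation path looks roughly like this (simplified, not my exact code; the commented-out CMEM_CACHED line is my guess at what the article might mean):

#include <stddef.h>
#include <cmem.h>   /* CMEM user-space API from linuxutils */

/* CMEM_init() is called once at startup elsewhere. */
void *alloc_codec_buf(size_t size)
{
    /* default params: pool-based allocation, non-cached (as far as I know) */
    CMEM_AllocParams params = CMEM_DEFAULTPARAMS;
    /* params.flags = CMEM_CACHED;   <-- is this what "cached region" means? */
    return CMEM_alloc(size, &params);
}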