Hi,
we're porting Codec Engine / Framework Components to the QNX Neutrino OS on a TMS320DM365,
aiming to do H264 video decoding. The work is based on Codec Engine version 2.25.05.16. The port is
largely derived from the Linux version and mostly finished, but now we are hitting trouble trying
to actually get the codec to run.
Instantiation of the H264 video decoder (via VIDDEC2_create()) appears to work nicely.
Parameter settings are:
viddecDynamicParams.size = sizeof(IH264VDEC_DynamicParams);
viddecDynamicParams.decodeHeader = XDM_DECODE_AU;
viddecDynamicParams.displayWidth = 0;
viddecDynamicParams.frameSkipMode = IVIDEO_NO_SKIP;
viddecDynamicParams.frameOrder = IVIDDEC2_DISPLAY_ORDER;
viddecDynamicParams.newFrameFlag = XDAS_FALSE;
viddecDynamicParams.mbDataFlag = XDAS_FALSE; // don't want macroblock data.
getDataFxn = NULL;
dataSyncHandle = NULL;
resetHDVICPeveryFrame = 1;
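For context, this is roughly how the decoder gets created on my side. It's only a sketch: the engine name "decode", the codec alias "h264dec", and the static creation parameters are placeholders for illustration, not the actual values from my application.

#include <xdc/std.h>
#include <ti/xdais/dm/xdm.h>
#include <ti/sdo/ce/Engine.h>
#include <ti/sdo/ce/video2/viddec2.h>

static VIDDEC2_Handle createDecoder(void)
{
    Engine_Error   ec;
    Engine_Handle  hEngine = Engine_open("decode", NULL, &ec);
    VIDDEC2_Params params;

    if (hEngine == NULL)
        return NULL;

    params.size              = sizeof(VIDDEC2_Params);
    params.maxWidth          = 352;            /* CIF test stream */
    params.maxHeight         = 288;
    params.maxFrameRate      = 30000;
    params.maxBitRate        = 10000000;
    params.dataEndianness    = XDM_BYTE;
    params.forceChromaFormat = XDM_YUV_420P;

    /* returns NULL on failure; in my case this succeeds */
    return VIDDEC2_create(hEngine, "h264dec", &params);
}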
When creating the codec, 14 memory areas and 29 different resources are allocated; the latter
being 26 edma3chans, a vicp2, an hdvicp, and an addrspace. This appears to be in line with the
expected H264VDEC resource usage.
Setting the codec parameters and getting buffer info also seems to work - VIDDEC2_control() is
called twice, once with id=1 (XDM_SETPARAMS) and once with id=5 (XDM_GETBUFINFO), and the reported
(and later on successfully allocated) buffer sizes are:
input buffer: 0x200048
output buffer 0: 0x23d800
output buffer 1: 0x11ec00
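For reference, the two control calls amount to something like the sketch below (1 and 5 being XDM_SETPARAMS and XDM_GETBUFINFO from xdm.h); the function and variable names are mine, not from the actual code.

#include <xdc/std.h>
#include <ti/xdais/dm/xdm.h>
#include <ti/sdo/ce/video2/viddec2.h>

static Int setupDecoder(VIDDEC2_Handle hDecode, VIDDEC2_DynamicParams *dynParams)
{
    VIDDEC2_Status status;

    status.size     = sizeof(VIDDEC2_Status);
    status.data.buf = NULL;   /* no extended status buffer */

    /* id == 1: hand the dynamic parameters to the codec */
    if (VIDDEC2_control(hDecode, XDM_SETPARAMS, dynParams, &status) != VIDDEC2_EOK)
        return -1;

    /* id == 5: query the required buffer sizes */
    if (VIDDEC2_control(hDecode, XDM_GETBUFINFO, dynParams, &status) != VIDDEC2_EOK)
        return -1;

    /* status.bufInfo.minInBufSize[0]  == 0x200048 here,
       status.bufInfo.minOutBufSize[0] == 0x23d800,
       status.bufInfo.minOutBufSize[1] == 0x11ec00 */
    return 0;
}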
My application uses the raw H264 stream 'colorful_toys_cif_5frms_420P.264' provided with the H264DEC
package.
When VIDDEC2_process() is called, the algorithm gets activated, a couple of physical addresses
are determined, HDVICPSYNC_start() is called and returns ok, then HDVICPSYNC_wait() is called
and calls VICP_wait(). Here things stop.
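For completeness, the process call itself looks roughly like this (heavily simplified; the real buffer handling is more involved, and the buffer sizes are just the ones reported by XDM_GETBUFINFO above):

#include <xdc/std.h>
#include <ti/sdo/ce/video2/viddec2.h>

static Int32 decodeOneAccessUnit(VIDDEC2_Handle hDecode,
                                 XDAS_Int8 *inBuf, XDAS_Int32 numBytes,
                                 XDAS_Int8 *outY, XDAS_Int8 *outUV)
{
    XDM1_BufDesc    inBufDesc;
    XDM_BufDesc     outBufDesc;
    XDAS_Int32      outBufSizes[2];
    XDAS_Int8      *outBufPtrs[2];
    VIDDEC2_InArgs  inArgs;
    VIDDEC2_OutArgs outArgs;

    inBufDesc.numBufs          = 1;
    inBufDesc.descs[0].buf     = inBuf;
    inBufDesc.descs[0].bufSize = 0x200048;

    outBufPtrs[0] = outY;   outBufSizes[0] = 0x23d800;
    outBufPtrs[1] = outUV;  outBufSizes[1] = 0x11ec00;
    outBufDesc.numBufs  = 2;
    outBufDesc.bufs     = outBufPtrs;
    outBufDesc.bufSizes = outBufSizes;

    inArgs.size     = sizeof(VIDDEC2_InArgs);
    inArgs.numBytes = numBytes;
    inArgs.inputID  = 1;                 /* any nonzero ID */
    outArgs.size    = sizeof(VIDDEC2_OutArgs);

    /* This is the call that never returns: it ends up in HDVICPSYNC_wait() /
       VICP_wait() waiting for IRQ 10. */
    return VIDDEC2_process(hDecode, &inBufDesc, &outBufDesc, &inArgs, &outArgs);
}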
In a server process that handles all system-related requests (analogous to the Linux kernel modules
in linuxutils), IRQ 10 (HDVICP0) was attached during an earlier VICP_register() call from the CE
and is now being waited on (VICP_wait())... but the interrupt never arrives.
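The QNX side of that is essentially the standard InterruptAttachEvent()/InterruptWait() pattern. Stripped down, and with my own names (only IRQ 10 for HDVICP0 is taken from the actual code), it amounts to:

#include <sys/neutrino.h>
#include <sys/siginfo.h>

#define HDVICP0_IRQ  10

static struct sigevent intrEvent;
static int             intrId;

int vicp_register(void)
{
    /* interrupt attachment needs I/O privileges */
    if (ThreadCtl(_NTO_TCTL_IO, 0) == -1)
        return -1;

    SIGEV_INTR_INIT(&intrEvent);
    /* InterruptAttachEvent masks the IRQ each time it fires,
       so it has to be unmasked again after handling. */
    intrId = InterruptAttachEvent(HDVICP0_IRQ, &intrEvent,
                                  _NTO_INTR_FLAGS_TRK_MSK);
    return (intrId < 0) ? -1 : 0;
}

int vicp_wait(void)
{
    /* Blocks until the HDVICP interrupt (SIGEV_INTR) arrives --
       this is where the decode currently hangs forever. */
    if (InterruptWait(0, NULL) < 0)
        return -1;
    InterruptUnmask(HDVICP0_IRQ, intrId);
    return 0;
}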
By now I've checked pretty much everything I could think of, still can't find the cause, and am
running out of ideas of what else to check or verify. Below is a rough overview of what I
can see while the app is waiting for the interrupt:
In the DM365's System Control Module:
Bit 2 in ARM_INTMUX was set such that IRQ 17 should be used for EDMA CC error interrupts
Bit 0 in ARM_INTMUX was set to get IRQ 10 from HDVICP
EDMA_EVTMUX is set to 0x007fc004, switching the 9 relevant EDMA events over to HDVICP
MISC indicates normal HDVICP bus operation
PERI_CLKCTL says HDVICP is using PLLC2SYSCLK2
Both PLLs are set up and running.
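(For reference, I read these registers from a small QNX helper that maps the System Control Module with mmap_device_memory(); the base address and offsets below are from my notes and should be double-checked against the DM365 TRM.)

#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>

#define SYSTEM_MODULE_BASE  0x01C40000
#define ARM_INTMUX_OFF      0x18    /* verify against the TRM */
#define EDMA_EVTMUX_OFF     0x1C    /* verify against the TRM */

void dump_intmux(void)
{
    /* requires I/O privileges, i.e. ThreadCtl(_NTO_TCTL_IO, 0) beforehand */
    volatile uint32_t *sys = mmap_device_memory(NULL, 0x100,
                                 PROT_READ | PROT_NOCACHE,
                                 0, SYSTEM_MODULE_BASE);
    if (sys == MAP_FAILED)
        return;

    printf("ARM_INTMUX  = 0x%08x\n", sys[ARM_INTMUX_OFF  / 4]);
    printf("EDMA_EVTMUX = 0x%08x\n", sys[EDMA_EVTMUX_OFF / 4]);

    munmap_device_memory((void *)sys, 0x100);
}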
In the EDMA3CC configuration area, I can see that channels 0..25 are mapped into shadow
region 0 (DRA[0]=0x03ffffff) and that PaRam sets 1 and 2 have been set up to transfer
data into HDVICP DMA port 1 (see below); judging by the src and dst memory areas, the transfers
appear to have completed. No EDMA errors or missed events are reported.
Strangely, though, EER[H], IER, and IPR (as well as CER and SER) are entirely 0.
Also, all of the shadow region registers are completely zero.
ER has a few bits set, but only ones that should not be affected by HDVICP.
These are the two param sets mentioned above:
opt src dst acnt bcnt ccnt sbi dbi sci dci lnk bcld
PaRam[1]: 00101008 8581e000 12061980 0540 0001 0001 0000 0000 0000 0000 ffff 0001
PaRam[2]: 00102008 85838000 12060010 1938 0001 0001 0000 0000 0000 0000 ffff 0001
Also, although the OPT fields' TCINTEN and TCC values indicate that completion interrupts
should have been generated, I didn't get any of those. In a test program that 'manually'
initiated an EDMA transfer, the interrupts arrived promptly.
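To be explicit about what I read out of the OPT words, this is the little decode helper I use (bit positions per the EDMA3 CC documentation: TCC in bits 17:12, TCINTEN at bit 20, ITCINTEN at bit 21, STATIC at bit 3):

#include <stdint.h>
#include <stdio.h>

static void decode_opt(uint32_t opt)
{
    unsigned tcc      = (opt >> 12) & 0x3f;  /* transfer completion code      */
    unsigned tcinten  = (opt >> 20) & 1;     /* completion interrupt enable   */
    unsigned itcinten = (opt >> 21) & 1;     /* intermediate interrupt enable */
    unsigned stat     = (opt >> 3)  & 1;     /* static PaRam set              */

    printf("OPT %08x: TCC=%u TCINTEN=%u ITCINTEN=%u STATIC=%u\n",
           (unsigned)opt, tcc, tcinten, itcinten, stat);
}

int main(void)
{
    decode_opt(0x00101008);   /* PaRam[1]: TCC=1, TCINTEN=1, STATIC=1 */
    decode_opt(0x00102008);   /* PaRam[2]: TCC=2, TCINTEN=1, STATIC=1 */
    return 0;
}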
Does anybody have any hints on how to proceed? Where to look or what to check?
Which bits to look at or to tweak?
I even 'faked' the HDVICP interrupt by just delaying about 0.1s and then returning OK
to VICP_wait()... No soup. It just gets called over and over, without VIDDEC2_process()
ever returning.
Thanks in advance for any feedback,
- Thomas