

H.264 interlaced picture encoding

Other Parts Discussed in Thread: TVP5158

Hi,

   I am encoding interlaced frames in H.264, i.e. frames composed of two fields interleaved on the even and odd lines.

If I encode my picture (PAL D1) as a progressive frame, the decoded image has the right colors but, as expected, I see combing on moving edges. In this case inputContentType is set to progressive and the image width is 720:

EncParams->inputContentType =  IVIDEO_PROGRESSIVE

 

According to SPRABA9 I set

EncParams->inputContentType =  IVIDEO_INTERLACED;

dynEncParams->captureWidth = 2 * EncParams->maxWidth;

since "For interlaced content, captureWidth should be equal to the pitch/stride value needed to move to the next row of pixels in the same field."

In this case, however, the chroma appears completely out of phase: I see a big green area on the left and magenta on the right, with ghosts of the main image. The image format is not being interpreted correctly. I think I am missing some encoder setting, even though I don't find any other useful suggestions in the documentation.

thank you

  • Hi,

    Peregrinus said:

    dynEncParams->captureWidth = 2 * EncParams->maxWidth;

    What is EncParams->maxWidth in your case? Is it equal to EncParams->inputWidth?

    BTW, I have tried the interlaced encode mode and am attaching sample code for it. Refer to the "enableInterlaced" flag.

    Hope this helps.

    Regards,

    Anshuman

    PS: Please mark this post as verified, if you think it has answered your question. Thanks.

  • Hi Anshuman,

       sorry for the late reply. I confirm:

          EncParams->maxWidth is equal to EncParams->inputWidth, which is 720.

    I took a look at the code you forwarded to me. I am using the TVP5158 video decoder with the UDworks interface. The output is a PAL frame, 720x576, with the two fields interleaved. As far as I can tell, I cannot get the two fields separately from the capture driver.


    When I encode my picture as a progressive frame I can encode, decode and display the image, but when large objects move I see the comb pattern. I think this is because the encoder treats the image as progressive and the bitrate is not enough to encode the combing at the edges (with small moving objects, for instance a pen, I don't see the comb pattern). I set the target bitrate to 2 Mbps and the output is 2 Mbps.

    On the contrary, if I set the capture width to 1440 and set the IVIDEO_INTERLACED flag, the image looks better, but under motion it seems as if the even and odd fields have been swapped, so that a moving object advances with one step backward and two steps forward. I set the target bitrate to 2 Mbps but the output is 1 Mbps. However, if I save the encoder output to a file and watch it with VLC with its deinterlacer enabled, the output looks good. So maybe there is some parameter to set on the decoder to enable decoding of interlaced frames. I am investigating this.

     Do you have any idea about the origin of the motion artifacts?
    In this post it is said that to encode interlaced MPEG the encoder must be called twice. Do I need to do the same for H.264?

     

    Thank you for the support.

    mario

  • If you are using the DVR Reference Design interface and codebase, then I assume you have its source code. Please refer to the alg_vidEnc.c file in the <dvr_base_src>/framework/alg/src folder.

    The DVR Ref Design supports interlaced encoding for H.264, and you can use it directly by just switching on the flag to do interlaced encode. You are right that you need to call VIDENC1_process twice (once for each field) and configure the captureWidth parameter as double the inputWidth, so that each field is encoded individually.

    Regards,

    anshuman


  • Thank you for the useful information Anshuman,

          I am using the DVR framework only for the TVP5158 front end and for the resizer; for the encoding I am using the standard DMAI interface.

    So do I only have to call VIDENC1_process twice with the same input buffer? I already doubled the captureWidth (1440), but in my latest code I call the encode function only once. When I save the encoded stream to a file and watch it with VLC I see something odd: the image size is right (720x576) and I see the typical comb pattern of non-deinterlaced video, but not on alternating even and odd lines: lines 1,2,5,6,9,10,... come from field A, and lines 3,4,7,8,11,12,... from field B.

    I will be out of the office until the 10th of January. On my return I will check whether calling the encode function twice fixes the issue and I will provide some feedback.

    Thank you again for your support.

  • Hi,

    Peregrinus said:
    So do I only have to call VIDENC1_process twice with the same input buffer? I already doubled the captureWidth (1440), but in my latest code I call the encode function only once.

     

    Yes, you need to call VIDENC1_process twice. Have you referred to the alg_vidEnc.c file from the DVR RDK? It shows how the input addresses have to be passed for the two process calls.

    Regards,

    Anshuman


  • Great Anshuman, 

       it works. I still have to check the decoding with the DM365, but I verified it by calling the VIDENC1_process function twice and saving the encoded stream to a file. When I play the encoded video with VLC it looks right.

    Thank you for the support.

     

    If someone is interested, the relevant parts of my code for interlaced signals are:

    ...

    dynEncParams->captureWidth = (bInterlaced ? (2 * (envp->imageWidth)) : (EncParams->maxWidth));

    ...

    while (framesavailable)
    {
        /*------- Encode the video buffer ---------*/
        if (Venc1_process(hVe1, hCapBuf, hDstBuf) < 0) {
            ERR("Failed to encode video buffer\n");
            cleanup(THREAD_FAILURE);
        }

        NBytesFld1 = Buffer_getNumBytesUsed(hDstBuf);
        NBytesFld2 = 0;

        /* For interlaced signals Venc1_process must be called twice */
        if (bInterlaced)
        {
            /* Input buffer: the second field starts one line into the frame */
            pBuff = Buffer_getUserPtr(hCapBuf);
            pBuff += dynEncParams->inputWidth;
            Buffer_setUserPtr(hCapBuf2ndField, (Int8*)pBuff);

            /* Output buffer: append the second slice after the first */
            pBuff = Buffer_getUserPtr(hDstBuf);
            pBuff += NBytesFld1;
            Buffer_setUserPtr(hDstBuf2ndField, (Int8*)pBuff);

            if (Venc1_process(hVe1, hCapBuf2ndField, hDstBuf2ndField) < 0) {
                ERR("Failed to encode video buffer\n");
                cleanup(THREAD_FAILURE);
            }

            NBytesFld2 = Buffer_getNumBytesUsed(hDstBuf2ndField);
        }

        Buffer_setNumBytesUsed(hDstBuf, NBytesFld1 + NBytesFld2);
    }

  • Sorry, this post was messed up because Internet Explorer 9 beta doesn't work with the forum for posting.

  • Thanks for posting this sample. I intend to capture and encode interlaced as soon as I complete some more important milestones. I have a question regarding the encoded data: right now each frame is returned to me as one single slice. If I am encoding interlaced, do I get two separate slices from the encoder? I use VideoLAN Client as a decoder, so I'm concerned about how it will handle the incoming data.

    John A

  • Hi John, 

        you have to set the interlaced flag for the encoder and call the encoder twice for each frame. The capture width must be two times the line length if your fields are interleaved. So I assume that field coding is applied, i.e. each field is coded separately, and each field is returned as a single slice.

    For each frame I put the encoder outputs of the two fields in the output stream (a file at this time, but for the final release I should use RTP). Then I use VideoLAN to play the file and it runs fine.

     

  • Peregrinus,

    So you give the encoder an output buffer for each field?  Then you get two slices as separate buffers?

    John A

  • Yes John, I confirm. The important thing is to remember to set the interlaced flag and to set the capture width as two times the image width (720 x 2 in my case).

    Then, once you have the image buffer, you call the encoder the first time and you get the first slice. Then you have to advance the image-buffer pointer by one line, so that it points to the odd field, call the encoder again, and you get the second slice.

    I send these buffers to VLC through RTP and it decodes them correctly.

    If you need to decode them with the TI decoder you have to call the decoder twice, providing the two slices with the same output buffer. After the first call the decoder will return Dmai_EFIRSTFIELD; with the second call you get the full frame.

  • Peregrinus, I just got interlaced capture working. I'm getting an error from vpfe_capture saying I should use ipipe, but I wrote the captured frames to a file and wrote a small Windows app to display the UYVY, and it looks fine.

    So now I need to run the resizer to convert the UYVY to 420semip.  Did you find that to be an easy task?  I'm thinking the resizer should be run in the encoder thread and not the capture thread, but not sure.  I definitely don't want to miss any frames and since the current mode is called "single shot" I'm afraid that if a capture request is not active then a frame might be missed.  Perhaps I should start a new thread to ask about that.

    John A

     

  • Hi John, 

       I use the UDworks interface both for capture and resize. I had to fight a little, but not too much, to make it run. I do capture and resize in the same thread (the capture thread) and I have no problem with missing frames; however, you can do it in another thread if you think that is more reliable. You have to use single-shot mode. The only thing I noticed is that the resizer does not like it when you place breakpoints and stop for debugging; in that case it sometimes hangs. If I remember correctly, there is one example about resizing in the DVSDK and some discussion in this forum.

  • Peregrinus, I just noticed in your code sample that you have a different capture buffer for each call to Venc1_process. Are you capturing each field into a separate buffer?

    I'm capturing both fields in the same buffer.  The UYVY format looks fine.  Although I haven't tried looking directly at the 420 after the resizer.  When I encode I call Venc1_process twice.  The second time I increment the UserPtr of the captured buffer to the next line.  But it doesn't look like interlaced fields.

    John A

  • John,

    John Anderson said:
    I'm capturing both fields in the same buffer. 

     I assume you are capturing the fields interleaved in the same buffer. I mean, if the buffer is 720x480, then you have Field 0 (720x240) and Field 1 (720x240) interleaved in the same buffer.

    If your case is like the above, you need to pass the address of line 0 in the first Venc1_process call and the address of line 1 in the second Venc1_process call. The captureWidth parameter of the codec has to be set to double the line size, that is 720x2. I have assumed you have YUV420SP output that is given to the encoder.

    Regards,

    Anshuman


  • Hi John, I have the two fields interleaved as in your case, so I use two buffers which point almost to the same area; the second one is shifted by one line. Incrementing the pointer after encoding the first field amounts to the same thing. What do you mean by "But it doesn't look like interlaced fields."? Follow Anshuman's hints; in particular you have to double the capture width and, obviously, set the interlaced flag.

  • I capture UYVY 720x480 in single-shot mode. I've checked the images in this format and they look good and are interlaced. I run the UYVY through the resizer to convert to 420SEMIP and then feed the 420 buffer through the encoder twice: the first time with the user ptr set to point to the first line, and the second time I move the user ptr 736 bytes ahead to the next line. After the second encode I move the user ptr back to the first line before the buffer goes back to the resizer for the next frame.

    The line length of the 420 buffer must be 736 for the resizer.  I don't change it to 2x736 before encoding, but I do set the captureWidth of the encoder parameters to 2x736.  I'm not sure if the encoder uses the line length in the buffer.  If it did then what would be the purpose of setting the captureWidth?

    I say it doesn't look interlaced after encoding because, well... it no longer looks interlaced. When I display interlaced video with VLC, it exhibits the comb effect when video is moving or panning horizontally. Basically the image looks like it did when I captured 420 with a dropped field, i.e. I seem to have accomplished nothing. At this time I'm guessing that the resizer is doing the same thing it did in chained mode: it seems to be dropping a field when it performs the UYVY to 420 conversion.

    John A

  • John Anderson said:
    I say it doesn't look interlaced after encoding because, well... it no longer looks interlaced.

    Do you mean that the video does not look like interlaced encoded? Can you share the video stream?

    Also, can you please check the resizer registers, especially the resize-ratio registers RSZA_H_DIF and RSZA_V_DIF? These should tell you whether a field was dropped and one field was upscaled to a full frame in the vertical direction. If this is happening, then surely you are not adding any value with the interlaced mode of encoding. But ideally, when the resizer driver is run in single-shot mode, it should not do such default scaling.

    Regards,

    Anshuman

     

     

  • Anshuman,

    I'm writing an app on the PC to view the 420 buffers. I've already seen them in gray scale and it does appear the 420 output from the resizer is interlaced. Right now I'm trying to get the color information correctly included to get a good view of the picture. So at this point I'm thinking the problem is with my encoding. I'll post a video after I get done checking the 420 picture.

    I'm still troubled that the size of the information in the 420 buffer is larger than the size of the buffer after returning from the resizer.

    When you ask me to check the resizer registers, do you mean to go into the resizer drivers and printk from the kernel? I have virtually no documentation for the resizer and couldn't find it when I searched the technical documents.

    John A

  • It turns out I was getting an error trying to move the user pointer. I needed to create another identical 420 buffer with the reference attribute set to true and use that for the second encode. You can only move the pointer in a reference buffer.

    John A

  • John Anderson said:
    needed to created another identical 420 buffer with the reference attribute set to true and use that for the second encode

    John,

    You are mentioning the above attribute w.r.t. the DMAI Buffer structure, isn't it? I don't remember seeing anything like this in the encoder.

    Regards,

    Anshuman

     

  • That's correct.  It's in the DMAI Buffer code.

    John A

  • Hi Peregrinus,

    I have now run into the same problem described in your post. Could you send me your video.c file?

    Best regards,

    Z_Star

  • Great Peregrinus,

        my email is 653633938@qq.com.

        Please send video.c to my email, thanks!

    regards,

    Z_Star,