Other Parts Discussed in Thread: TVP5150
Hello team, from customer:
Now that the buffer issue is solved, I have some more questions related to the VPE.
I am trying to load the camera buffer as an OpenGL texture, so my processing chain is:
Sensor -> TVP5150 -> VIP -> VPE -> OpenGLES Texture -> Surface -> Surface_Flinger -> Screen
I'm using the VPE to accomplish two major tasks: de-interlacing the video frames, and converting from YUYV to ARG24, the pixel format accepted by OpenGLES.
Until now, the camera frames were presented on screen using the DRM device directly, but we were facing some problems with the interaction between our camera module and surface_flinger / hardware composer. That's why I decided to explore new alternatives.
What I found strange with this new approach is that:
- VPE output frames are still at 60fps. I was expecting 30fps because I'm feeding the VPE with interlaced fields at 60fps. Could you please confirm whether 60fps at the VPE's output is correct even when de-interlacing?
- VPE output video quality is not that good. The DEI process seems weak: if you wave your hand quickly in front of the camera, you can see video artifacts, as if the fields are not "merged" correctly or the motion vectors are not working. Could this behavior be related to the 60fps output (issue #1)?
- High CPU usage by the surface_flinger process (~15%). I was expecting lower CPU usage from surface_flinger, since OpenGLES is supposed to be hardware accelerated and I'm just applying the camera texture to a static rectangle. Based on your experience, is ~15% normal for this SoC?
Thanks in advance