I'm developing a stereo vision application using two MIPI CSI-2 cameras. We're using a TDA4VM for prototyping, but we're also considering J721S2-family SoCs.
For our test setup, we've built:
- A Linux kernel driver for our CMOS camera sensor, which is successfully streaming through V4L2
- A TIOVX pipeline for lens distortion correction, etc., and stereo depth inference
- Everything is running from Linux on the A72.
- We're using the latest Processor SDK Linux
This is working well, though we need to start optimizing for throughput. We have several unnecessary copies related to acquiring frames that we're now trying to eliminate; a simplified sketch of our current per-frame path is below.
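Roughly, the current acquisition path looks like this (heavily simplified and illustrative; the real code handles both cameras, a buffer ring, and error paths, and the UYVY format/strides here are just placeholders). The copy I'd like to eliminate is the `vxCopyImagePatch` at the end:

```c
/* Simplified/illustrative sketch of our current per-frame path on the A72. */
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>
#include <VX/vx.h>

/* One frame: dequeue a V4L2 MMAP buffer, then copy its pixels into a
 * pre-created vx_image so the TIOVX graph can consume it. */
static vx_status acquire_frame(int video_fd, void *mmap_base[], vx_image dst,
                               vx_uint32 width, vx_uint32 height)
{
    struct v4l2_buffer buf;
    memset(&buf, 0, sizeof(buf));
    buf.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;

    if (ioctl(video_fd, VIDIOC_DQBUF, &buf) < 0)
        return VX_FAILURE;

    /* This is the copy we want to get rid of: a CPU copy of the whole
     * frame from the mmap'ed V4L2 buffer into the OpenVX image. */
    vx_rectangle_t rect = { 0, 0, width, height };
    vx_imagepatch_addressing_t addr = VX_IMAGEPATCH_ADDR_INIT;
    addr.dim_x    = width;
    addr.dim_y    = height;
    addr.stride_x = 2;              /* e.g. UYVY, 2 bytes per pixel */
    addr.stride_y = width * 2;

    vx_status status = vxCopyImagePatch(dst, &rect, 0, &addr,
                                        mmap_base[buf.index],
                                        VX_WRITE_ONLY, VX_MEMORY_TYPE_HOST);

    ioctl(video_fd, VIDIOC_QBUF, &buf);   /* re-queue the buffer */
    return status;
}
```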
Using a `tivxCaptureNode` seemed appealing because its output wouldn't require mapping into OpenVX images, but I wanted to confirm a few things (I've also sketched, after the questions, how I imagine the capture-node path would look):
- Are V4L2 + kernel driver and tivxCaptureNode mutually exclusive systems?
- Does using the capture node require a (kernel) driver at all? If not, how are common operations like setting exposure or starting the stream done? Do you just do the I2C writes directly?
- Can tivxCaptureNodes be used from A72/Linux?
- Does the presence of a camera + Linux kernel driver somehow disable tivxCapture? Put differently, what might I need to do to convert my existing V4L2 + kernel driver setup to a TIOVX-based setup? Should I remove the driver, the device tree node, or both?
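For reference, here is roughly how I imagine the capture-node path would look, loosely based on the vision_apps camera demos. The header name, the `tivx_capture_params_t` configuration details, and how sensor control fits in are exactly the things I'm unsure about, so please correct me if this mental model is wrong:

```c
#include <TI/tivx.h>
/* plus the capture kernel header for this SDK release (e.g. TI/video_io_capture.h
 * or TI/j7.h depending on version) -- I'm not sure which applies to the latest SDK */

/* My (possibly wrong) mental model of the capture-node path. Error checking omitted. */
vx_graph build_capture_graph(vx_context context, vx_object_array out_frames)
{
    tivx_capture_params_t params;
    vx_user_data_object   config;
    vx_graph              graph;
    vx_node               node;

    tivxHwaLoadKernels(context);          /* load the HWA kernels, incl. capture */

    tivx_capture_params_init(&params);    /* defaults for the CSI-2 RX */
    /* ... configure CSI instance / channel count here;
     *     the exact field names are what I'm unsure about ... */

    config = vxCreateUserDataObject(context, "tivx_capture_params_t",
                                    sizeof(tivx_capture_params_t), &params);

    graph = vxCreateGraph(context);
    node  = tivxCaptureNode(graph, config, out_frames);

    /* Capture runs on an R5F "CAPTURE" target rather than the A72 -- which is
     * part of what I'm asking about above. */
    vxSetNodeTarget(node, VX_TARGET_STRING, TIVX_TARGET_CAPTURE1);

    vxVerifyGraph(graph);
    return graph;
}
```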
Finally, I just wanted to understand whether there are inherent advantages to using tivxCapture over V4L2. Could I achieve equally good performance with V4L2 if I were smarter about mmap()ing, using vxSwapImageHandle, etc.? A sketch of what I mean is below.
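To make that last question concrete, this is the kind of thing I mean by "being smarter" on the V4L2 side: creating the OpenVX image directly from a V4L2 buffer and rotating the handle per frame instead of copying. This is only a sketch of the idea (single-plane UYVY assumed); I don't know whether TIOVX will accept a plain mmap'ed pointer here or whether the buffer has to come from a DMA-BUF / contiguous allocator, which is partly why I'm asking.

```c
#include <VX/vx.h>

/* Sketch: wrap a V4L2 buffer as a vx_image once, then rotate the underlying
 * pointer each frame with vxSwapImageHandle instead of copying. */
static vx_image wrap_v4l2_buffer(vx_context context, void *first_buf,
                                 vx_uint32 width, vx_uint32 height)
{
    vx_imagepatch_addressing_t addr = VX_IMAGEPATCH_ADDR_INIT;
    addr.dim_x    = width;
    addr.dim_y    = height;
    addr.stride_x = 2;           /* UYVY: 2 bytes per pixel */
    addr.stride_y = width * 2;

    void *ptrs[1] = { first_buf };
    return vxCreateImageFromHandle(context, VX_DF_IMAGE_UYVY,
                                   &addr, ptrs, VX_MEMORY_TYPE_HOST);
}

/* Per frame: after VIDIOC_DQBUF, point the same vx_image at the newly
 * dequeued buffer; prev_buf returns the previous buffer so it can be
 * re-queued with VIDIOC_QBUF. */
static vx_status swap_in_new_frame(vx_image image, void *new_buf, void **prev_buf)
{
    void *new_ptrs[1]  = { new_buf };
    void *prev_ptrs[1] = { NULL };

    vx_status status = vxSwapImageHandle(image, new_ptrs, prev_ptrs, 1);
    *prev_buf = prev_ptrs[0];
    return status;
}
```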