Tool/software:
Hi TI,
We have TDA4VEN - DS90UB960 - IMX390 (with built-in DS90UB953), and the device tree overlay is set. Now we want to test whether the sensor powers on and streams using the following code:
# after boot and login, config code provided by manufacturer
$ i2cset -y 7 0x3d 0x4c 0x01
$ i2cset -y 7 0x3d 0x58 0x5e
$ i2cset -y 7 0x3d 0x1f 0x02
$ i2cset -y 7 0x3d 0x20 0x20
$ i2cset -y 7 0x3d 0x33 0x03
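To confirm the writes stick, the registers can be read back afterwards (a quick sketch, assuming the same bus 7 and device address 0x3d as above):

$ i2cget -y 7 0x3d 0x20
$ i2cget -y 7 0x3d 0x33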
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/videodev2.h>
#include <string.h>
#include <errno.h>

#define DEVICE "/dev/video4"
#define WIDTH  1936
#define HEIGHT 1100

int main() {
    int fd = open(DEVICE, O_RDWR);
    if (fd == -1) {
        perror("Error opening video device");
        return 1;
    }

    // Set video format
    struct v4l2_format fmt;
    memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width = WIDTH;
    fmt.fmt.pix.height = HEIGHT;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_SRGGB12; // 12-bit Bayer RGGB
    fmt.fmt.pix.field = V4L2_FIELD_NONE;
    if (ioctl(fd, VIDIOC_S_FMT, &fmt) == -1) {
        perror("VIDIOC_S_FMT failed");
        printf("Error: %d\n", errno);
        close(fd);
        return 1;
    }

    // Request buffers
    struct v4l2_requestbuffers req;
    memset(&req, 0, sizeof(req));
    req.count = 1; // Request 1 buffer
    req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_MMAP;
    if (ioctl(fd, VIDIOC_REQBUFS, &req) == -1) {
        perror("VIDIOC_REQBUFS failed");
        close(fd);
        return 1;
    }

    // Query buffer
    struct v4l2_buffer buf;
    memset(&buf, 0, sizeof(buf));
    buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;
    buf.index = 0;
    if (ioctl(fd, VIDIOC_QUERYBUF, &buf) == -1) {
        perror("VIDIOC_QUERYBUF failed");
        close(fd);
        return 1;
    }

    // Map buffer to user space
    void *buffer = mmap(NULL, buf.length, PROT_READ | PROT_WRITE, MAP_SHARED, fd, buf.m.offset);
    if (buffer == MAP_FAILED) {
        perror("Memory mapping failed");
        close(fd);
        return 1;
    }

    // Queue buffer
    if (ioctl(fd, VIDIOC_QBUF, &buf) == -1) {
        perror("VIDIOC_QBUF failed");
        close(fd);
        return 1;
    }

    // Start streaming
    int type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    if (ioctl(fd, VIDIOC_STREAMON, &type) == -1) {
        perror("VIDIOC_STREAMON failed");
        close(fd);
        return 1;
    }

    // Dequeue buffer (capture frame)
    if (ioctl(fd, VIDIOC_DQBUF, &buf) == -1) {
        perror("VIDIOC_DQBUF failed");
        close(fd);
        return 1;
    }

    // Save raw Bayer frame
    FILE *file = fopen("frame.raw", "wb");
    if (file) {
        fwrite(buffer, buf.length, 1, file);
        fclose(file);
        printf("Frame captured and saved as frame.raw\n");
    } else {
        perror("Error saving frame");
    }

    // Stop streaming
    if (ioctl(fd, VIDIOC_STREAMOFF, &type) == -1) {
        perror("VIDIOC_STREAMOFF failed");
    }

    // Cleanup
    munmap(buffer, buf.length);
    close(fd);
    return 0;
}
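To build and run it on the target (a minimal sketch; the source file name v4l2_capture.c is just an assumption, and cross-compiling works too if no native gcc is installed):

$ gcc -o v4l2_capture v4l2_capture.c
$ ./v4l2_capture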
The execution hangs at line 88, and if we interrupt with Ctrl+C and check registers 0x20 and 0x33 of the UB954, the values have been modified back to 0xf0 and 0x02, respectively. The values seem to be modified after line 81 executes.
We also tried `v4l2-ctl --device /dev/video4 --stream-mmap --stream-count=1 --stream-to=frame.raw`, and it hung as well.
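For reference, we can narrow down where the pipeline stalls with a few non-destructive checks (a sketch; /dev/media0 and the 10-second bound are assumptions):

# inspect the CSI-RX media graph and the formats configured on each pad
$ media-ctl -d /dev/media0 -p
# confirm the capture node really advertises the expected Bayer format and resolution
$ v4l2-ctl --device /dev/video4 --list-formats-ext
# bound the capture attempt so a missing frame does not hang the shell
$ timeout 10 v4l2-ctl --device /dev/video4 --stream-mmap --stream-count=1 --stream-to=frame.raw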
Regards
Hello,
Could you share which specific IMX390 module you are using? If it is not D3RCM-IMX390-953, I recommend going through the FAQ list linked below to help you debug.
AM67 Academy: Use Camera
Please let me know if you have any further questions.
Thank you,
Fabiana
Hi, we are using this IMX390 module. Our vendor does not have (or will not give us) the 953 <-> sensor configuration for Linux (they said this is done in firmware). The module has only been developed and tested under RTOS.
Does the D3RCM-IMX390-953 need configuration too? I don't see any such step on their website. We are considering buying a new one if it is a plug-and-play module (also to reduce the work of integrating the camera).
Thank you.
Hi,
When you say configuration, are you referring to the device tree overlay? D3RCM-IMX390-953 is supported on both TDA4VEN Linux and RTOS platforms. The Discovery IMX390 module is supported on RTOS only. The sensor you have linked has not been validated by us. Please take a look at the following pages for more information.
J722S Linux SDK Documentation: https://software-dl.ti.com/jacinto7/esd/processor-sdk-linux-j722s/10_01_00_04/exports/docs/linux/Foundational_Components/Kernel/Kernel_Drivers/Camera/CSI2RX.html#enabling-camera-sensors
List of sensors supported out-of-box: https://software-dl.ti.com/jacinto7/esd/processor-sdk-rtos-j722s/10_01_00_04/exports/docs/imaging/imaging_release_notes.html
Thank you,
Fabiana
Hi, the mentioned "configuration" means the i2c settings. Our vendor gave us a few lines of commands to set up the registers of the UB960. It looks like:
0x4C,0x01 // set TI954 config
0x58,0x5E
0x1F,0x02
0x20,0x20
0x33,0x03 // stream on
I believe these are RTOS settings. If we do not have the corresponding i2c settings on Linux (EdgeAI SDK), we cannot receive data from the sensor, right (even though the dtbo is set up)?
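As a sanity check, we can also look at whether a Linux driver is already bound to the deserializer and programming it for us (a sketch; the exact driver and message names depend on the SDK):

# look for deserializer/serializer probe messages and the bound i2c devices
$ dmesg | grep -iE 'ub9|ds90'
$ ls /sys/bus/i2c/devices/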
Does the D3RCM-IMX390-953 need extra i2c settings, or does it work directly after being plugged into the deserializer?
Regards
Hi,
It seems you are using the UB954 deserializer, is that correct? That is not supported by default in the SDK; you would need to update the imaging component to support this deserializer.
Regards,
Brijesh
Hi, we have both the UB954 and the UB960, and the configurations provided by the sensor manufacturer are the same (as above). We finally used the UB960 to avoid some unexpected errors.
Sorry for pasting the misleading code.
Regards
So, does the D3RCM-IMX390-953 with FOV70 work on Linux without any extra steps after booting from the SD card? We simply want to demo the edgeai gallery in TI's prebuilt Linux tarball using it.
Regards
Hi,
Yes, the D3-RCM camera with the UB960 on the Fusion board works on the EVM without any extra steps.
Regards,
Brijesh
Hi,
Thank you for the confirmation. We will order this sensor for the demonstration.
Regards
Sorry, I found that we actually need a camera with roughly FOV100. Has the D3 Discovery IMX390 been tested, and could it currently be an alternative choice for a simple Linux edgeai-gst-apps gallery demonstration?
Regards
Edit: I found another sensor, the OV2312, in the Supported Image Sensors EdgeAI section here, but it is not listed here. Is it okay for an out-of-the-box edgeai demonstration?
Hello,
Based on the last test case, j722s + fusion1 + ov2312 works fine in RGB-only and IR-only modes, but simultaneous streaming fails. I have reached out to our imaging team to get a status on this issue.
Thank you,
Fabiana
Hi, however, I cannot find a corresponding configuration under edgeai-gst-apps' configs (something like ov2312_cam_example.yaml), only an OV5640 one. If I create a new one from that template, can I make it stream only RGB or only IR at a time? I found this description for am62a, but the section is missing for am67a/j722s.
Sorry for asking for such detailed confirmation. We have already bought two unsupported cameras.
Regards
Hello,
Have you already purchased the OV2312 sensor? Do you want to simply stream or capture frames from the OV2312, or would you like to also run the sample edge AI GStreamer-based applications with this sensor as your input? See the pipelines and sample configuration file below, which can accomplish either task. Although I have shared the pipeline for simultaneous RGB + IR streaming, please keep in mind that it is not validated to work on this device. Because I do not have the OV2312 sensor with me at the moment, I have not tested the example configuration file yet, so please let me know if you run into any issues when trying to use it.
Stream RGB only:
gst-launch-1.0 v4l2src device=/dev/video-ov2312-rgb-cam0 io-mode=5 ! \
video/x-bayer, width=1600, height=1300, format=bggi10 ! queue leaky=2 ! tiovxisp sensor-name=SENSOR_OV2312_UB953_LI \
dcc-isp-file=/opt/imaging/ov2312/linear/dcc_viss.bin \
sink_0::dcc-2a-file=/opt/imaging/ov2312/linear/dcc_2a.bin sink_0::device=/dev/v4l-ov2312-subdev0 format-msb=9 \
sink_0::pool-size=8 src::pool-size=8 ! \
video/x-raw, format=NV12, width=1600, height=1300, framerate=30/1 ! kmssink driver-name=tidss sync=false
Stream IR only:
gst-launch-1.0 v4l2src device=/dev/video-ov2312-ir-cam0 io-mode=5 ! \
video/x-bayer, width=1600, height=1300, format=bggi10 ! queue leaky=2 ! tiovxisp sensor-name=SENSOR_OV2312_UB953_LI \
dcc-isp-file=/opt/imaging/ov2312/linear/dcc_viss.bin \
sink_0::dcc-2a-file=/opt/imaging/ov2312/linear/dcc_2a.bin format-msb=9 \
sink_0::pool-size=8 src_0::pool-size=8 ! \
video/x-raw, format=GRAY8, width=1600, height=1300 ! \
videoconvert ! video/x-raw, format=NV12 ! kmssink driver-name=tidss sync=false
Stream RGB + IR simultaneously:
gst-launch-1.0 v4l2src device=/dev/video-ov2312-rgb-cam0 io-mode=5 ! \
video/x-bayer, width=1600, height=1300, format=bggi10 ! queue leaky=2 ! tiovxisp sensor-name=SENSOR_OV2312_UB953_LI \
dcc-isp-file=/opt/imaging/ov2312/linear/dcc_viss.bin \
sink_0::dcc-2a-file=/opt/imaging/ov2312/linear/dcc_2a.bin sink_0::device=/dev/v4l-ov2312-subdev0 format-msb=9 \
sink_0::pool-size=8 src::pool-size=8 ! \
video/x-raw, format=NV12, width=1600, height=1300 ! queue ! mosaic.sink_0 \
v4l2src device=/dev/video-ov2312-ir-cam0 io-mode=5 ! video/x-bayer, width=1600, height=1300, format=bggi10 ! queue leaky=2 ! \
tiovxisp sensor-name=SENSOR_OV2312_UB953_LI \
dcc-isp-file=/opt/imaging/ov2312/linear/dcc_viss.bin \
sink_0::dcc-2a-file=/opt/imaging/ov2312/linear/dcc_2a.bin format-msb=9 sink_0::pool-size=8 src_0::pool-size=8 ! \
video/x-raw, format=GRAY8, width=1600, height=1300 ! videoconvert ! \
video/x-raw, format=NV12 ! queue ! mosaic.sink_1 \
tiovxmosaic name=mosaic \
sink_0::startx="<0>" sink_0::starty="<0>" sink_0::widths="<640>" sink_0::heights="<480>" \
sink_1::startx="<640>" sink_1::starty="<480>" sink_1::widths="<640>" sink_1::heights="<480>" ! \
queue ! kmssink driver-name=tidss sync=false
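Before launching any of these pipelines, it is worth verifying that the expected device nodes were actually created by the overlay (the node names below are simply taken from the pipelines above):

$ ls -l /dev/video-ov2312-rgb-cam0 /dev/video-ov2312-ir-cam0 /dev/v4l-ov2312-subdev0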
edgeai-gst-apps example config for ov2312:
title: "OV2312 Camera" log_level: 2 inputs: input0: source: /dev/video-ov2312-rgb-cam0 subdev-id: /dev/v4l-ov2312-subdev0 width: 1600 height: 1300 format: rggi10 framerate: 30 input1: source: /dev/video-ov2312-ir-cam0 subdev-id: /dev/v4l-ov2312-subdev0 width: 1600 height: 1300 format: rggi10 framerate: 30 models: model0: model_path: /opt/model_zoo/TVM-CL-3090-mobileNetV2-tv topN: 5 model1: model_path: /opt/model_zoo/ONR-OD-8200-yolox-nano-lite-mmdet-coco-416x416 viz_threshold: 0.6 model2: model_path: /opt/model_zoo/ONR-SS-8610-deeplabv3lite-mobv2-ade20k32-512x512 alpha: 0.4 outputs: output0: sink: kmssink width: 1920 height: 1080 overlay-perf-type: graph output1: sink: /opt/edgeai-test-data/output/output_video.mkv width: 1920 height: 1080 output2: sink: /opt/edgeai-test-data/output/output_image_%04d.jpg width: 1920 height: 1080 output3: sink: remote width: 1920 height: 1080 port: 8081 host: 127.0.0.1 encoding: jpeg overlay-perf-type: graph flows: flow0: [input0,model1,output0,[320,150,1280,720]]
Thank you,
Fabiana