Hello Expert,
I am trying to create a simple graph with the v4l2 and VISS modules (see code below) for an IMX568 camera (the Linux driver for the camera was developed in-house).
We are using SDK 09_02_00_05 and the edgeai-app-stack repo at commit b6ff3ac11bf1c672eaa63f6dfadfa819da02c4b9.
Currently the final YUV image is very blurry.
Original image and viss module output:
I have tested the v4l2 and VISS modules separately. Using only the VISS module, which reads a raw image from a file, works fine. Using only the v4l2 module and mapping the DMA buffer from tiovx_buffer->handle inside the v4l2_capture_module also gives back a good raw image. So either something between these modules distorts the raw image, or something else is happening inside the v4l2 module that I don't understand.
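To illustrate what I mean, a captured frame can also be dumped from the application side roughly like this (a minimal sketch; I am assuming a writeRawImage helper from tiovx_utils.h, the exact name may differ between SDK versions):

/* Sketch: dump one captured frame before it enters the graph, to verify
 * that the v4l2 capture alone produces a clean raw image.
 * NOTE: writeRawImage() is assumed from tiovx_utils.h; adjust if the
 * helper is named differently in your SDK version. */
inbuf = v4l2_capture_dqueue_buf(v4l2_capture_handle);
if (inbuf != NULL)
{
    writeRawImage("/opt/tiovx-imx568/output/imx568_capture.raw",
                  (tivx_raw_image)inbuf->handle);
    v4l2_capture_enqueue_buf(v4l2_capture_handle, inbuf);
}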
I have tried reducing the fps to ~30 and changing the graph scheduling, but the problem remains the same.
Any help or insight would be appreciated! Thank you very much!
Best Regards,
Andras
CODE:
#include <stdio.h>
#include <stdlib.h>
#include <getopt.h>
#include <unistd.h>
#include <TI/tivx.h>
#include <tiovx_modules.h>
#include <v4l2_capture_module.h>
#include <tiovx_utils.h>
#include <app_init.h>

#define APP_BUFQ_DEPTH     (7)
//#define APP_NUM_CH       (1)
#define APP_NUM_ITERATIONS (20)

#define INPUT_WIDTH  (1236)
#define INPUT_HEIGHT (1032)

#define SENSOR_NAME "SENSOR_SONY_IMX568"
#define DCC_VISS    "/opt/imaging/imx568/linear/dcc_viss.bin"

int main(int argc, char *argv[])
{
    int32_t statusInit = 0;

    statusInit = appInit();
    if (statusInit)
    {
        printf("App init error!\n");
    }

    vx_status status = VX_FAILURE;
    GraphObj graph;
    NodeObj *node = NULL;
    TIOVXVissNodeCfg cfg;
    BufPool *in_buf_pool = NULL, *out_buf_pool = NULL;
    Buf *inbuf = NULL, *outbuf = NULL;
    char output_filename[128]; /* fixed: was an uninitialized char*, which sprintf() below would write through */
    v4l2CaptureCfg v4l2_capture_cfg;
    v4l2CaptureHandle *v4l2_capture_handle;

    tiovx_viss_init_cfg(&cfg);

    sprintf(cfg.sensor_name, SENSOR_NAME);
    snprintf(cfg.dcc_config_file, TIVX_FILEIO_FILE_PATH_LENGTH, "%s", DCC_VISS);
    cfg.width = INPUT_WIDTH;
    cfg.height = INPUT_HEIGHT;
    sprintf(cfg.target_string, TIVX_TARGET_VPAC_VISS1);

    cfg.input_cfg.params.format[0].pixel_container = TIVX_RAW_IMAGE_8_BIT;
    cfg.input_cfg.params.format[0].msb = 7;

    status = tiovx_modules_initialize_graph(&graph);
    if (VX_SUCCESS != status)
    {
        printf("Init graph error!\n");
    }

    node = tiovx_modules_add_node(&graph, TIOVX_VISS, (void *)&cfg);
    node->sinks[0].bufq_depth = APP_BUFQ_DEPTH;
    graph.schedule_mode = VX_GRAPH_SCHEDULE_MODE_QUEUE_AUTO;

    status = tiovx_modules_verify_graph(&graph);
    if (VX_SUCCESS != status)
    {
        printf("Verify graph error!\n");
    }

    in_buf_pool = node->sinks[0].buf_pool;
    out_buf_pool = node->srcs[0].buf_pool;

    v4l2_capture_init_cfg(&v4l2_capture_cfg);
    v4l2_capture_cfg.width = INPUT_WIDTH;
    v4l2_capture_cfg.height = INPUT_HEIGHT;
    v4l2_capture_cfg.pix_format = V4L2_PIX_FMT_SRGGB8;
    v4l2_capture_cfg.bufq_depth = APP_BUFQ_DEPTH + 1;
    sprintf(v4l2_capture_cfg.device, "/dev/video-imx568-cam0");

    v4l2_capture_handle = v4l2_capture_create_handle(&v4l2_capture_cfg);

    /* Pre-queue the input buffer pool into the V4L2 capture driver */
    for (int i = 0; i < APP_BUFQ_DEPTH; i++)
    {
        inbuf = tiovx_modules_acquire_buf(in_buf_pool);
        v4l2_capture_enqueue_buf(v4l2_capture_handle, inbuf);
    }

    v4l2_capture_start(v4l2_capture_handle);

    for (int j = 0; j < 2; j++)
    {
        inbuf = v4l2_capture_dqueue_buf(v4l2_capture_handle);
        tiovx_modules_enqueue_buf(inbuf);
    }

    for (int i = 1; i < APP_NUM_ITERATIONS; i++)
    {
        do
        {
            inbuf = v4l2_capture_dqueue_buf(v4l2_capture_handle);
        } while (inbuf == NULL);
        tiovx_modules_enqueue_buf(inbuf);

        outbuf = tiovx_modules_acquire_buf(out_buf_pool);
        tiovx_modules_enqueue_buf(outbuf);

        //tiovx_modules_schedule_graph(&graph);
        //tiovx_modules_wait_graph(&graph);

        inbuf = tiovx_modules_dequeue_buf(in_buf_pool);
        outbuf = tiovx_modules_dequeue_buf(out_buf_pool);

        v4l2_capture_enqueue_buf(v4l2_capture_handle, inbuf);

        if ((i % 5) == 0)
        {
            sprintf(output_filename, "/opt/tiovx-imx568/output/imx568_1236x1032_%d.yuv", i);
            writeImage(output_filename, (vx_image)outbuf->handle);
        }

        tiovx_modules_release_buf(outbuf);
    }

    v4l2_capture_stop(v4l2_capture_handle);
    v4l2_capture_delete_handle(v4l2_capture_handle);
    tiovx_modules_clean_graph(&graph);

    printf("Running test successful!\n");

    appDeInit();

    return status;
}
Output of code:
APP: Init ... !!!
MEM: Init ... !!!
MEM: Initialized DMA HEAP (fd=5) !!!
MEM: Init ... Done !!!
IPC: Init ... !!!
IPC: Init ... Done !!!
REMOTE_SERVICE: Init ... !!!
REMOTE_SERVICE: Init ... Done !!!
30.807099 s: GTC Frequency = 200 MHz
APP: Init ... Done !!!
30.810698 s: VX_ZONE_INIT:Enabled
30.810731 s: VX_ZONE_ERROR:Enabled
30.810746 s: VX_ZONE_WARNING:Enabled
30.813495 s: VX_ZONE_INIT:[tivxPlatformCreateTargetId:116] Added target MPU-0
30.813642 s: VX_ZONE_INIT:[tivxPlatformCreateTargetId:116] Added target MPU-1
30.813738 s: VX_ZONE_INIT:[tivxPlatformCreateTargetId:116] Added target MPU-2
30.813893 s: VX_ZONE_INIT:[tivxPlatformCreateTargetId:116] Added target MPU-3
30.813919 s: VX_ZONE_INIT:[tivxInitLocal:136] Initialization Done !!!
30.819293 s: VX_ZONE_INIT:[tivxHostInitLocal:101] Initialization Done for HOST !!!
Running test successful!
33.497632 s: VX_ZONE_INIT:[tivxHostDeInitLocal:115] De-Initialization Done for HOST !!!
33.502108 s: VX_ZONE_INIT:[tivxDeInitLocal:204] De-Initialization Done !!!
APP: Deinit ... !!!
REMOTE_SERVICE: Deinit ... !!!
REMOTE_SERVICE: Deinit ... Done !!!
IPC: Deinit ... !!!
IPC: DeInit ... Done !!!
MEM: Deinit ... !!!
DDR_SHARED_MEM: Alloc's: 13 alloc's of 16116181 bytes
DDR_SHARED_MEM: Free's : 13 free's of 16116181 bytes
DDR_SHARED_MEM: Open's : 0 allocs of 0 bytes
MEM: Deinit ... Done !!!
APP: Deinit ... Done !!!
One additional comment:
Running the ./bin/Release/edgeai-tiovx-apps-main demo also works fine, so the camera image shows correctly on the display.
Hi Andras,
Can you confirm that you are using the following Linux Edge AI image? https://www.ti.com/tool/download/PROCESSOR-SDK-LINUX-SK-TDA4VM/09.02.00.05
Thank you,
Fabiana
Hello Fabiana,
Yes, I can confirm that I am indeed using the linked version. Last week I also tried it with the latest SDK (ti-processor-sdk-linux-edgeai-j721e-evm-10_00_00_08-Linux-x86-Install) and the latest edgeai-app-stack repo (commit hash 886e47d52e6c7cfdc64ff3f828b77a7eeaa7f3c1), but the problem is still the same.
Andras
Hi Andras,
Do you experience the same distortion when running a test GStreamer pipeline in the command line?
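For example, something along these lines (a sketch only; the device path, sensor name, resolution, and DCC path are taken from your code above and may need adjusting for your setup, e.g. adding a dcc-2a-file for AEWB):

gst-launch-1.0 v4l2src device=/dev/video-imx568-cam0 io-mode=dmabuf ! \
  video/x-bayer, width=1236, height=1032, format=rggb ! \
  tiovxisp sensor-name=SENSOR_SONY_IMX568 dcc-isp-file=/opt/imaging/imx568/linear/dcc_viss.bin format-msb=7 ! \
  video/x-raw, format=NV12 ! kmssink driver-name=tidss sync=false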
Thank you,
Fabiana
Hello Fabiana,
Sorry for the late answer; I was on sick leave last week. We have managed to run the optiflow Python app (which uses a GStreamer pipeline) and, as already mentioned, the edgeai-tiovx demo app, which uses the same underlying v4l2 module that I use in the code above. Both work fine.
Andras
Hi Andras,
I hope you are feeling better this week! Can you clarify if you ran the demo apps with IMX568 as an input?
Thank you,
Fabiana
Hi Fabiana,
Yes, the IMX568 was used as the input for both the optiflow app and the demo app.
Andras
Hi Andras,
Thank you for confirming. I will run a few tests using a different sensor since I do not have an IMX568 on hand.
-Fabiana
Hello Fabiana,
I don't know if you have managed to run any tests yet, but even a hint towards a possible cause of this image distortion would already be helpful.
Of course, tests on your side would also be very welcome!
Looking forward to your reply!
Thanks,
Andras
Hi Andras,
I recommend taking a look at the Edge AI dataflows documentation to get a better understanding of how the Edge AI applications produce a good image from a sensor.
Thank you,
Fabiana
Hello Fabiana,
We have managed to get good images at 2472x2036 resolution. There was an issue with the DCC file generation and the versioning of the edgeai-app-stack.
However, when we try to run the app with another resolution (1236x1032) or just change the bit depth from 12-bit to 8-bit, we do not get any image (just a plain white image). I have changed every resolution-related property inside the edgeai-app-stack code base.
Do you have any information regarding this? Is it sufficient to generate new DCC files for the proper resolution and set the proper resolution in the config structures of each node, or are there any constraints on the resolution (minimum value, etc.) or any other parameters that we did not consider?
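For reference, these are the resolution-related settings I have been changing, based on my test code above (a sketch for the 1236x1032 case; field names are from the 09.02 modules and may differ in other versions, and the DCC file path shown is only a hypothetical example):

/* VISS node configuration (sketch) */
cfg.width  = 1236;
cfg.height = 1032;
/* Hypothetical path to a DCC binary regenerated for the new mode */
snprintf(cfg.dcc_config_file, TIVX_FILEIO_FILE_PATH_LENGTH,
         "%s", "/opt/imaging/imx568/linear_1236x1032/dcc_viss.bin");

/* V4L2 capture configuration (sketch) */
v4l2_capture_cfg.width  = 1236;
v4l2_capture_cfg.height = 1032;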
Thank you again for your answer!
Best Regards,
Andras
Hi Andras,
Please see the resources linked below.
ISP Tuning Guide: https://www.ti.com/lit/an/sprad86a/sprad86a.pdf
DCC Tuning Tool: https://www.ti.com/drr/opn/ADAS-SW-IMAGING
Be sure to change your sensor configuration and regenerate your DCC binaries when using a different resolution or format.
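As a rough illustration, switching between 12-bit and 8-bit raw in the TIOVX modules test code from your first post would involve settings along these lines (a sketch only; the sensor driver mode and the DCC binaries must both match the format you select):

/* 12-bit raw input (sketch): 12-bit samples in a 16-bit container, MSB at bit 11
 * (TIVX_RAW_IMAGE_P12_BIT would be used instead for packed 12-bit data) */
cfg.input_cfg.params.format[0].pixel_container = TIVX_RAW_IMAGE_16_BIT;
cfg.input_cfg.params.format[0].msb = 11;
v4l2_capture_cfg.pix_format = V4L2_PIX_FMT_SRGGB12;

/* 8-bit raw input (sketch) */
cfg.input_cfg.params.format[0].pixel_container = TIVX_RAW_IMAGE_8_BIT;
cfg.input_cfg.params.format[0].msb = 7;
v4l2_capture_cfg.pix_format = V4L2_PIX_FMT_SRGGB8;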
Thank you,
Fabiana