This thread has been locked.

TDA4: Issue with camera demo error on RTOS0801

Other Parts Discussed in Thread: TDA4VM

Hello,

When I drive a camera on our custom TDA4VM board using RTOS0801, the single camera app works with a YUV-output camera and

captures images normally. With a RAW-output camera, however, it captures only one RAW frame and then hangs, and the R core log shows:

[MCU2_0] 1292.332878 s: VX_ZONE_WARNING:[tivxVpacVissCreate:365] VISS H3A output is not generated due to DCC not being enabled
[MCU2_0] 1292.339639 s: VX_ZONE_WARNING:[tivxAewbCreate:1064] No DCC buffer passed. Disabling 2A
[MCU2_0] 1292.353846 s: ImageSensor_RemoteServiceHandler: IM_SENSOR_CMD_STREAM_ON
[MCU2_0] 1292.353932 s: IM_SENSOR_CMD_STREAM_ON: channel_mask = 0x1
[MCU2_0] 1292.354015 s: Sensor I2c address = 0x10
[MCU2_0] 1292.364093 s: max96712_cfgScript : max96716 I2cAddr = 0x29
[MCU2_0] 1292.364140 s: MAX96712 config start
[MCU2_0] 1292.364194 s: [max96712_cfgScript] :rawRegVal: 0x040b 0x02
[MCU2_0] 1292.390301 s: Read 96712 I2c address at 0x29, Reg[0x040b] = 0x2
[MCU2_0] 1292.390350 s: End of MAX96712 config
[MCU2_0] 1292.412523 s: src/fvid2_drvMgr.c @ Line 911:
[MCU2_0] 1292.412571 s: EBADARGS: NULL pointer!!
[MCU2_0] 1292.412617 s: VX_ZONE_ERROR:[tivxDisplayProcess:1009] DISPLAY: ERROR: Unable to queue frame!

What could cause this issue, and how can I resolve it so that the RAW-output camera runs successfully?

  • Hi xu ji,

    Can you please share some more information about this graph? Are you connecting the CSIRX output directly to the display? Are you using raw_image?

    Although the display can show a raw image by treating it as a greyscale image, it supports only vx_image; the raw_image data structure is not supported by the display.

    Regards,

    Brijesh

  • I'm not clear about the app side; we just use the single camera demo app. I've attached the graph code, please help check it.

    The RAW-output camera runs successfully on RTOS0703, but not on 0801. What is the difference between 0703 and 0801?

    vx_status app_create_graph(AppObj *obj)
    {
        vx_status status = VX_SUCCESS;
        int32_t sensor_init_status = -1;
        obj->configuration = NULL;
        obj->raw = NULL;
        obj->y12 = NULL;
        obj->uv12_c1 = NULL;
        obj->y8_r8_c2 = NULL;
        obj->uv8_g8_c3 = NULL;
        obj->s8_b8_c4 = NULL;
        obj->histogram = NULL;
        obj->h3a_aew_af = NULL;
        obj->display_image = NULL;
    
        unsigned int image_width = obj->width_in;
        unsigned int image_height = obj->height_in;
    
        tivx_raw_image raw_image = 0;
        vx_user_data_object capture_config;
        vx_uint8 num_capture_frames = 1;
        tivx_capture_params_t local_capture_config;
        uint32_t buf_id;
        const vx_char capture_user_data_object_name[] = "tivx_capture_params_t";
        uint32_t sensor_features_enabled = 0;
        uint32_t sensor_features_supported = 0;
        uint32_t sensor_wdr_enabled = 0;
        uint32_t sensor_exp_control_enabled = 0;
        uint32_t sensor_gain_control_enabled = 0;
    
        vx_bool yuv_cam_input = vx_false_e;
        vx_image viss_out_image = NULL;
        vx_image ldc_in_image = NULL;
        vx_image capt_yuv_image = NULL;
    
        uint8_t channel_mask = (1<<obj->selectedCam);
    
        vx_uint32 params_list_depth = 1;
        if(obj->test_mode == 1)
        {
            params_list_depth = 2;
        }
        vx_graph_parameter_queue_params_t graph_parameters_queue_params_list[params_list_depth];
    
        status = appSetSensorCSIInstance(obj->sensor_csi_instance,SINGLE_APP);
        if(VX_SUCCESS != status)
        {
            printf("appSetSensorCSIInstance returned %d\n", status);
            return status;
        }
        printf("Querying %s \n", obj->sensor_name);
        memset(&sensorParams, 0, sizeof(sensorParams));
        status = appQueryImageSensor(obj->sensor_name, &sensorParams);
        if(VX_SUCCESS != status)
        {
            printf("appQueryImageSensor returned %d\n", status);
            return status;
        }
    
        if(sensorParams.sensorInfo.raw_params.format[0].pixel_container == VX_DF_IMAGE_UYVY)
        {
            yuv_cam_input = vx_true_e;
            printf("YUV Input selected. VISS and AEWB nodes will be bypassed. \n");
        }
    
    
        /*
        Check for supported sensor features.
    It is up to the application to decide which features should be enabled.
        This demo app enables WDR, DCC and 2A if the sensor supports it.
        */
        sensor_features_supported = sensorParams.sensorInfo.features;
    
        if(vx_false_e == yuv_cam_input)
        {
            if(ISS_SENSOR_FEATURE_COMB_COMP_WDR_MODE == (sensor_features_supported & ISS_SENSOR_FEATURE_COMB_COMP_WDR_MODE))
            {
                APP_PRINTF("WDR mode is supported \n");
                sensor_features_enabled |= ISS_SENSOR_FEATURE_COMB_COMP_WDR_MODE;
                sensor_wdr_enabled = 1;
                obj->sensor_wdr_mode = 1;
            }else
            {
                APP_PRINTF("WDR mode is not supported. Defaulting to linear \n");
                sensor_features_enabled |= ISS_SENSOR_FEATURE_LINEAR_MODE;
                sensor_wdr_enabled = 0;
                obj->sensor_wdr_mode = 0;
            }
    
            if(ISS_SENSOR_FEATURE_MANUAL_EXPOSURE == (sensor_features_supported & ISS_SENSOR_FEATURE_MANUAL_EXPOSURE))
            {
                APP_PRINTF("Exposure control is supported \n");
                sensor_features_enabled |= ISS_SENSOR_FEATURE_MANUAL_EXPOSURE;
                sensor_exp_control_enabled = 1;
            }
    
            if(ISS_SENSOR_FEATURE_MANUAL_GAIN == (sensor_features_supported & ISS_SENSOR_FEATURE_MANUAL_GAIN))
            {
                APP_PRINTF("Gain control is supported \n");
                sensor_features_enabled |= ISS_SENSOR_FEATURE_MANUAL_GAIN;
                sensor_gain_control_enabled = 1;
            }
    
            if(ISS_SENSOR_FEATURE_CFG_UC1 == (sensor_features_supported & ISS_SENSOR_FEATURE_CFG_UC1))
            {
                APP_PRINTF("CMS Usecase is supported \n");
                sensor_features_enabled |= ISS_SENSOR_FEATURE_CFG_UC1;
            }
    
            switch(sensorParams.sensorInfo.aewbMode)
            {
                case ALGORITHMS_ISS_AEWB_MODE_NONE:
                    obj->aewb_cfg.ae_mode = ALGORITHMS_ISS_AE_DISABLED;
                    obj->aewb_cfg.awb_mode = ALGORITHMS_ISS_AWB_DISABLED;
                    break;
                case ALGORITHMS_ISS_AEWB_MODE_AWB:
                    obj->aewb_cfg.ae_mode = ALGORITHMS_ISS_AE_DISABLED;
                    obj->aewb_cfg.awb_mode = ALGORITHMS_ISS_AWB_AUTO;
                    break;
                case ALGORITHMS_ISS_AEWB_MODE_AE:
                    obj->aewb_cfg.ae_mode = ALGORITHMS_ISS_AE_AUTO;
                    obj->aewb_cfg.awb_mode = ALGORITHMS_ISS_AWB_DISABLED;
                    break;
                case ALGORITHMS_ISS_AEWB_MODE_AEWB:
                    obj->aewb_cfg.ae_mode = ALGORITHMS_ISS_AE_AUTO;
                    obj->aewb_cfg.awb_mode = ALGORITHMS_ISS_AWB_AUTO;
                    break;
            }
            if(obj->aewb_cfg.ae_mode == ALGORITHMS_ISS_AE_DISABLED)
            {
                if(sensor_exp_control_enabled || sensor_gain_control_enabled )
                {
                    obj->aewb_cfg.ae_mode = ALGORITHMS_ISS_AE_MANUAL;
                }
            }
    
            APP_PRINTF("obj->aewb_cfg.ae_mode = %d\n", obj->aewb_cfg.ae_mode);
            APP_PRINTF("obj->aewb_cfg.awb_mode = %d\n", obj->aewb_cfg.awb_mode);
    
        }
    
        if(ISS_SENSOR_FEATURE_DCC_SUPPORTED == (sensor_features_supported & ISS_SENSOR_FEATURE_DCC_SUPPORTED))
        {
            sensor_features_enabled |= ISS_SENSOR_FEATURE_DCC_SUPPORTED;
            APP_PRINTF("Sensor DCC is enabled \n");
        }else
        {
            APP_PRINTF("Sensor DCC is NOT enabled \n");
        }
    
        APP_PRINTF("Sensor width = %d\n", sensorParams.sensorInfo.raw_params.width);
        APP_PRINTF("Sensor height = %d\n", sensorParams.sensorInfo.raw_params.height);
        APP_PRINTF("Sensor DCC ID = %d\n", sensorParams.dccId);
        APP_PRINTF("Sensor Supported Features = 0x%x\n", sensor_features_supported);
        APP_PRINTF("Sensor Enabled Features = 0x%x\n", sensor_features_enabled);
        sensor_init_status = appInitImageSensor(obj->sensor_name, sensor_features_enabled, channel_mask);/*Mask = 1 for camera # 0*/
        if(0 != sensor_init_status)
        {
            /* Not returning failure because application may be waiting for
                error/test frame */
            printf("Error initializing sensor %s \n", obj->sensor_name);
        }
    
        image_width     = sensorParams.sensorInfo.raw_params.width;
        image_height    = sensorParams.sensorInfo.raw_params.height;
        obj->cam_dcc_id = sensorParams.dccId;
        obj->width_in = image_width;
        obj->height_in = image_height;
    
    /*
    Assuming same dataformat for all exposures.
    This may not be true for staggered HDR. Will be handled later
        for(count = 0;count<raw_params.num_exposures;count++)
        {
            memcpy(&(raw_params.format[count]), &(sensorProperties.sensorInfo.dataFormat), sizeof(tivx_raw_image_format_t));
        }
    */
    
    /*
    Sensor driver does not support metadata yet.
    */
    
        APP_PRINTF("Creating graph \n");
    
        obj->graph = vxCreateGraph(obj->context);
        if(status == VX_SUCCESS)
        {
            status = vxGetStatus((vx_reference) obj->graph);
        }
        APP_ASSERT(vx_true_e == tivxIsTargetEnabled(TIVX_TARGET_VPAC_VISS1));
    
        APP_PRINTF("Initializing params for capture node \n");
    
        /* Setting to num buf of capture node */
        obj->num_cap_buf = NUM_BUFS;
    
        if(vx_false_e == yuv_cam_input)
        {
            raw_image = tivxCreateRawImage(obj->context, &sensorParams.sensorInfo.raw_params);
    
            /* allocate Input and Output refs, multiple refs created to allow pipelining of graph */
            for(buf_id=0; buf_id<obj->num_cap_buf; buf_id++)
            {
                if(status == VX_SUCCESS)
                {
                    obj->cap_frames[buf_id] = vxCreateObjectArray(obj->context, (vx_reference)raw_image, num_capture_frames);
                    status = vxGetStatus((vx_reference) obj->cap_frames[buf_id]);
                }
            }
        }
        else
        {
            capt_yuv_image = vxCreateImage(
                                    obj->context,
                                    sensorParams.sensorInfo.raw_params.width,
                                    sensorParams.sensorInfo.raw_params.height,
                                    VX_DF_IMAGE_UYVY
                             );
    
            /* allocate Input and Output refs, multiple refs created to allow pipelining of graph */
            for(buf_id=0; buf_id<obj->num_cap_buf; buf_id++)
            {
                if(status == VX_SUCCESS)
                {
                    obj->cap_frames[buf_id] = vxCreateObjectArray(obj->context, (vx_reference)capt_yuv_image, num_capture_frames);
                    status = vxGetStatus((vx_reference) obj->cap_frames[buf_id]);
                }
            }
        }
    
        /* Config initialization */
        tivx_capture_params_init(&local_capture_config);
    
        local_capture_config.timeout = 33;
        local_capture_config.timeoutInitial = 500;
    
        local_capture_config.numInst  = 1U;/* Configure a single CSIRX instance */
        local_capture_config.numCh = 1U;/* Single cam. Only 1 channel enabled */
        {
            vx_uint8 ch, id, lane, q;
            for(id = 0; id < local_capture_config.numInst; id++)
            {
                local_capture_config.instId[id]                       = obj->sensor_csi_instance;
                local_capture_config.instCfg[id].enableCsiv2p0Support = (uint32_t)vx_true_e;
                local_capture_config.instCfg[id].numDataLanes         = sensorParams.sensorInfo.numDataLanes;
                local_capture_config.instCfg[id].laneBandSpeed        = sensorParams.sensorInfo.csi_laneBandSpeed;
    
                for (lane = 0; lane < local_capture_config.instCfg[id].numDataLanes; lane++)
                {
                    local_capture_config.instCfg[id].dataLanesMap[lane] = lane + 1;
                }
                for (q = 0; q < NUM_CAPT_CHANNELS; q++)
                {
                    ch = (NUM_CAPT_CHANNELS-1)* id + q;
                    local_capture_config.chVcNum[ch]   = q;
                    local_capture_config.chInstMap[ch] = obj->sensor_csi_instance;
                }
            }
        }
    
        //local_capture_config.chInstMap[0] = obj->selectedCam/NUM_CAPT_CHANNELS;
        local_capture_config.chInstMap[0] = obj->sensor_csi_instance;
        local_capture_config.chVcNum[0]   = obj->selectedCam%NUM_CAPT_CHANNELS;
        printf("set sensor csi instance [%d] in single sensor app config!!\n",obj->sensor_csi_instance);
        
        capture_config = vxCreateUserDataObject(obj->context, capture_user_data_object_name, sizeof(tivx_capture_params_t), &local_capture_config);
        APP_PRINTF("capture_config = 0x%p \n", capture_config);
    
        APP_PRINTF("Creating capture node \n");
        obj->capture_node = tivxCaptureNode(obj->graph, capture_config, obj->cap_frames[0]);
        APP_PRINTF("obj->capture_node = 0x%p \n", obj->capture_node);
    
        if(status == VX_SUCCESS)
        {
            status = vxReleaseUserDataObject(&capture_config);
        }
        if(status == VX_SUCCESS)
        {
            status = vxSetNodeTarget(obj->capture_node, VX_TARGET_STRING, TIVX_TARGET_CAPTURE2);
        }
    
        if(vx_false_e == yuv_cam_input)
        {
            obj->raw = (tivx_raw_image)vxGetObjectArrayItem(obj->cap_frames[0], 0);
            if(status == VX_SUCCESS)
            {
                status = tivxReleaseRawImage(&raw_image);
            }
    #ifdef _APP_DEBUG_
    
    
            obj->fs_test_raw_image = tivxCreateRawImage(obj->context, &(sensorParams.sensorInfo.raw_params));
    
            if (NULL != obj->fs_test_raw_image)
            {
                if(status == VX_SUCCESS)
                {
                    status = read_test_image_raw(NULL, obj->fs_test_raw_image, obj->test_mode);
                }
                else
                {
                    status = tivxReleaseRawImage(&obj->fs_test_raw_image);
                    obj->fs_test_raw_image = NULL;
                }
            }
    #endif //_APP_DEBUG_
    
            status = app_create_viss(obj, sensor_wdr_enabled);
            if(VX_SUCCESS == status)
            {
            vxSetNodeTarget(obj->node_viss, VX_TARGET_STRING, TIVX_TARGET_VPAC_VISS1);
            tivxSetNodeParameterNumBufByIndex(obj->node_viss, 6u, obj->num_cap_buf);
            }
            else
            {
                printf("app_create_viss failed \n");
                return -1;
            }
    
            status = app_create_aewb(obj, sensor_wdr_enabled);
            if(VX_SUCCESS != status)
            {
                printf("app_create_aewb failed \n");
                return -1;
            }
            viss_out_image = obj->y8_r8_c2;
            ldc_in_image = viss_out_image;
        }
        else
        {
            obj->capt_yuv_image = (vx_image)vxGetObjectArrayItem(obj->cap_frames[0], 0);
            ldc_in_image = obj->capt_yuv_image;
            vxReleaseImage(&capt_yuv_image);
        }
    
        if (obj->ldc_enable)
        {
            printf ("Enabling LDC \n");
            status = app_create_ldc(obj, ldc_in_image);
    
            if(status == VX_SUCCESS)
            {
                status = vxSetNodeTarget(obj->node_ldc, VX_TARGET_STRING, TIVX_TARGET_VPAC_LDC1);
            }
            else
            {
                printf("app_create_ldc returned error \n");
                return status;
            }
            if(status == VX_SUCCESS)
            {
                status = tivxSetNodeParameterNumBufByIndex(obj->node_ldc, 7u, obj->num_cap_buf);
            }
    
            /*Check if resizing is needed for display*/
            if((obj->table_width >= obj->display_params.outWidth) && (obj->table_height >= obj->display_params.outHeight))
            {
                vx_uint16 scaler_out_w, scaler_out_h;
                obj->scaler_enable = vx_true_e;
                appIssGetResizeParams(obj->table_width, obj->table_height, obj->display_params.outWidth, obj->display_params.outHeight, &scaler_out_w, &scaler_out_h);
                obj->scaler_out_img = vxCreateImage(obj->context, scaler_out_w, scaler_out_h, VX_DF_IMAGE_NV12);
                obj->scalerNode = tivxVpacMscScaleNode(obj->graph, obj->ldc_out, obj->scaler_out_img, NULL, NULL, NULL, NULL);
                if(status == VX_SUCCESS)
                {
                    status = tivxSetNodeParameterNumBufByIndex(obj->scalerNode, 1u, obj->num_cap_buf);
                }
                obj->display_params.outHeight = scaler_out_h;
                obj->display_params.outWidth = scaler_out_w;
                obj->display_image = obj->scaler_out_img;
            }else /*No resize needed*/
            {
                obj->scaler_enable = vx_false_e;
                obj->display_image = obj->ldc_out;
    
                /* MSC can only downsize. If ldc resolution is lower,
                   display resolution must be set accordingly
                */
                obj->display_params.outWidth = obj->table_width;
                obj->display_params.outHeight = obj->table_height;
            }
        }
        else /*ldc_enable*/
        {
            if(NULL != obj->capt_yuv_image)
            {
                /*MSC does not support YUV422 input*/
                obj->scaler_enable = vx_false_e;
            }
            else
            {
                if ((image_width >= obj->display_params.outWidth) && (image_height >= obj->display_params.outHeight))
                {
                    obj->scaler_enable = vx_true_e;
                }
                else
                {
                    obj->scaler_enable = vx_false_e;
                    /* MSC can only downsize. If viss resolution is lower,
                       display resolution must be set accordingly
                    */
                    obj->display_params.outWidth = image_width;
                    obj->display_params.outHeight = image_height;
                }
            }
    
            if(vx_true_e == obj->scaler_enable)
            {
                vx_uint16 scaler_out_w, scaler_out_h;
                appIssGetResizeParams(image_width, image_height, obj->display_params.outWidth, obj->display_params.outHeight, &scaler_out_w, &scaler_out_h);
                obj->scaler_out_img = vxCreateImage(obj->context, scaler_out_w, scaler_out_h, VX_DF_IMAGE_NV12);
                obj->scalerNode = tivxVpacMscScaleNode(obj->graph, ldc_in_image, obj->scaler_out_img, NULL, NULL, NULL, NULL);
                if(status == VX_SUCCESS)
                {
                    status = tivxSetNodeParameterNumBufByIndex(obj->scalerNode, 1u, obj->num_cap_buf);
                }
                if(status == VX_SUCCESS)
                {
                    status = vxSetNodeTarget(obj->scalerNode, VX_TARGET_STRING, TIVX_TARGET_VPAC_MSC1);
                }
                obj->display_params.outHeight = scaler_out_h;
                obj->display_params.outWidth = scaler_out_w;
                obj->display_image = obj->scaler_out_img;
            }
            else
            {
                obj->display_image = ldc_in_image;
            }
        }
    
        if(NULL == obj->display_image)
        {
            printf("Error : Display input is uninitialized \n");
            return VX_FAILURE;
        }
        else
        {
            obj->display_params.posX = (1920U - obj->display_params.outWidth)/2;
            obj->display_params.posY = (1080U - obj->display_params.outHeight)/2;
            obj->display_param_obj = vxCreateUserDataObject(obj->context, "tivx_display_params_t", sizeof(tivx_display_params_t), &obj->display_params);
            obj->displayNode = tivxDisplayNode(obj->graph, obj->display_param_obj, obj->display_image);
        }
    
        if(status == VX_SUCCESS)
        {
            status = vxSetNodeTarget(obj->displayNode, VX_TARGET_STRING, TIVX_TARGET_DISPLAY1);
            APP_PRINTF("Display Set Target done\n");
        }
        int graph_parameter_num = 0;
    
        /* input @ node index 1, becomes graph parameter 0 */
        add_graph_parameter_by_node_index(obj->graph, obj->capture_node, 1);
    
        /* set graph schedule config such that graph parameter @ index 0 is enqueuable */
        graph_parameters_queue_params_list[graph_parameter_num].graph_parameter_index = graph_parameter_num;
        graph_parameters_queue_params_list[graph_parameter_num].refs_list_size = obj->num_cap_buf;
        graph_parameters_queue_params_list[graph_parameter_num].refs_list = (vx_reference*)&(obj->cap_frames[0]);
        graph_parameter_num++;
    
        if(obj->test_mode == 1)
        {
            add_graph_parameter_by_node_index(obj->graph, obj->displayNode, 0);
    
            /* set graph schedule config such that graph parameter @ index 0 is enqueuable */
            graph_parameters_queue_params_list[graph_parameter_num].graph_parameter_index = graph_parameter_num;
            graph_parameters_queue_params_list[graph_parameter_num].refs_list_size = 1;
            graph_parameters_queue_params_list[graph_parameter_num].refs_list = (vx_reference*)&(obj->display_image);
            graph_parameter_num++;
        }
    
        if(status == VX_SUCCESS)
        {
            status = tivxSetGraphPipelineDepth(obj->graph, obj->num_cap_buf);
        }
    
        /* Schedule mode auto is used, here we dont need to call vxScheduleGraph
         * Graph gets scheduled automatically as refs are enqueued to it
         */
        if(status == VX_SUCCESS)
        {
            status = vxSetGraphScheduleConfig(obj->graph,
                            VX_GRAPH_SCHEDULE_MODE_QUEUE_AUTO,
                            params_list_depth,
                            graph_parameters_queue_params_list
                            );
        }
        APP_PRINTF("vxSetGraphScheduleConfig done\n");
    
        if(status == VX_SUCCESS)
        {
            status = vxVerifyGraph(obj->graph);
        }
    
        if(vx_true_e == obj->scaler_enable)
        {
            tivx_vpac_msc_coefficients_t sc_coeffs;
            vx_reference refs[1];
    
            printf("Scaler is enabled\n");
    
            tivx_vpac_msc_coefficients_params_init(&sc_coeffs, VX_INTERPOLATION_BILINEAR);
    
            obj->sc_coeff_obj = vxCreateUserDataObject(obj->context, "tivx_vpac_msc_coefficients_t", sizeof(tivx_vpac_msc_coefficients_t), NULL);
            if(status == VX_SUCCESS)
            {
                status = vxCopyUserDataObject(obj->sc_coeff_obj, 0, sizeof(tivx_vpac_msc_coefficients_t), &sc_coeffs, VX_WRITE_ONLY, VX_MEMORY_TYPE_HOST);
            }
            refs[0] = (vx_reference)obj->sc_coeff_obj;
            if(status == VX_SUCCESS)
            {
                status = tivxNodeSendCommand(obj->scalerNode, 0u, TIVX_VPAC_MSC_CMD_SET_COEFF, refs, 1u);
            }
        }
        else
        {
            printf("Scaler is disabled\n");
        }
    
        if(status == VX_SUCCESS)
        {
            status = tivxExportGraphToDot(obj->graph, ".", "single_cam_graph");
        }
    
    #ifdef _APP_DEBUG_
        if(vx_false_e == yuv_cam_input)
        {
            if( (NULL != obj->fs_test_raw_image) && (NULL != obj->capture_node) && (status == VX_SUCCESS))
            {
                status = app_send_test_frame(obj->capture_node, obj->fs_test_raw_image);
            }
        }
    #endif //_APP_DEBUG_
    
        APP_PRINTF("app_create_graph exiting\n");
        return status;
    }

  • Hi xu ji,

    The single camera application works as-is in SDK 8.1, so I don't see any issue there.

    Typically this NULL pointer error occurs when either a node is not replicated correctly or the output images are not allocated properly.

    I had a quick look at this API and it looks correct. Can you please compare it against the single camera application and see if it matches exactly?

    In SDK 8.0, the change that was made relates to the channel mask in the imaging component: instead of using the number of channels, it uses a channel mask, with a bit set to 1 for each enabled channel. Other than this, there is no change. Can you please check whether this is taken care of for your sensor?
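    To illustrate the difference, here is a minimal sketch (the helper name and values are hypothetical, not the actual imaging-component code): where the old interface took a channel count, the new one takes a bit mask with one bit per enabled channel, as in the `channel_mask = (1<<obj->selectedCam)` line in the graph code above.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helper illustrating the SDK 8.0 change: instead of passing
 * a channel count (e.g. 2), the imaging component expects a channel mask
 * with bit N set when channel N is enabled. */
static uint8_t channel_mask_from_count(uint8_t num_channels)
{
    /* channels 0 .. num_channels-1 enabled */
    return (uint8_t)((1u << num_channels) - 1u);
}

static uint8_t channel_mask_for_single_cam(uint8_t selected_cam)
{
    /* single camera at index N: only bit N is set */
    return (uint8_t)(1u << selected_cam);
}
```

    So for two cameras the value passed is the mask 0x3, not the count 2, and for a single camera at index 0 it is 0x1 (matching the `channel_mask = 0x1` seen in the log above).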

    Regards,

    Brijesh

  • Which part of the code do I need to check, the app or the sensor driver? Please show an example of the code that needs checking.

  • I found that the 0801 app has an app_create_viss API but the 0703 app does not. Does 0703 not encapsulate an app_create_viss API?

    Are the VISS nodes created by the two apps the same? And could this issue be caused by the VISS node not running successfully, so that no valid frame reaches the display node?

  • No, the 0703 release also has a mechanism to initialize VISS. It may not be the same API, but VISS is used in the single camera example in the 0703 release.

    Yes, from the VISS point of view, the VISS node is the same in the 0703 and 0801 releases.

    Can you please share the exact data flow that you are using?

    Is it CSIRX -> VISS -> LDC -> MSC -> Display? 

    It appears that for one of the modules the application is not allocating buffers, and so the display is failing.

    Are you using the single camera example from the 0801 release, or have you ported your example from 0703 to 0801?

    Regards,

    Brijesh 

  • I used the 0801 release and I think the flow is CSIRX -> VISS -> LDC -> MSC -> Display, but we added a CSI instance selection function in the app. I will revert this and try again.

  • Sure, please check with the released example. You can use your sensor, but let's first check with the released example.

  • When I revert our app changes, the single camera demo works successfully with the RAW camera,

    but the multi-camera demo does not work; it exits by itself, and the log shows:

    Max number of cameras supported by sensor OS_AR0820_MAX9295_LI_MULTI = 2
    Please enter number of cameras to be enabled
    2
    2510.524167 s: ISS: Initializing sensor [OS_AR0820_MAX9295_LI_MULTI], doing IM_SENSOR_CMD_PWRON ... !!!
    2510.524718 s: ISS: Initializing sensor [OS_AR0820_MAX9295_LI_MULTI], doing IM_SENSOR_CMD_CONFIG ... !!!
    2514.895721 s: ISS: Initializing sensor [OS_AR0820_MAX9295_LI_MULTI] ... Done !!!
    Invalid DCC size for VISS
    2514.896493 s: VX_ZONE_ERROR:[ownReleaseReferenceInt:307] Invalid reference
    2514.896502 s: VX_ZONE_ERROR:[ownReleaseReferenceInt:307] Invalid reference
    2514.896532 s: ISS: De-initializing sensor [OS_AR0820_MAX9295_LI_MULTI] ... !!!
    2514.896926 s: ISS: De-initializing sensor [OS_AR0820_MAX9295_LI_MULTI] ... Done !!!
    2514.897078 s: VX_ZONE_ERROR:[ownReleaseReferenceInt:307] Invalid reference
    2514.897091 s: VX_ZONE_ERROR:[ownReleaseReferenceInt:307] Invalid reference
    2514.897102 s: VX_ZONE_ERROR:[ownReleaseReferenceInt:307] Invalid reference
    2514.897109 s: VX_ZONE_ERROR:[ownReleaseReferenceInt:307] Invalid reference
    2514.897118 s: VX_ZONE_ERROR:[ownReleaseReferenceInt:307] Invalid reference
    2514.897128 s: VX_ZONE_ERROR:[ownReleaseReferenceInt:307] Invalid reference
    2514.897135 s: VX_ZONE_ERROR:[ownReleaseReferenceInt:307] Invalid reference
    2514.897142 s: VX_ZONE_ERROR:[ownReleaseReferenceInt:307] Invalid reference
    2514.897151 s: VX_ZONE_ERROR:[ownReleaseReferenceInt:307] Invalid reference
    2514.897160 s: VX_ZONE_ERROR:[ownReleaseReferenceInt:307] Invalid reference
    2514.897168 s: VX_ZONE_ERROR:[ownReleaseReferenceInt:307] Invalid reference
    2514.897178 s: VX_ZONE_ERROR:[ownReleaseReferenceInt:307] Invalid reference
    2514.897184 s: VX_ZONE_ERROR:[ownReleaseReferenceInt:307] Invalid reference
    2514.897190 s: VX_ZONE_ERROR:[ownReleaseReferenceInt:307] Invalid reference
    2514.897198 s: VX_ZONE_ERROR:[ownReleaseReferenceInt:307] Invalid reference
    2514.897208 s: VX_ZONE_ERROR:[ownReleaseReferenceInt:307] Invalid reference
    2514.897512 s: VX_ZONE_INIT:[tivxHostDeInitLocal:100] De-Initialization Done for HOST !!!
    2514.901951 s: VX_ZONE_INIT:[tivxDeInitLocal:193] De-Initialization Done !!!
    APP: Deinit ... !!!
    REMOTE_SERVICE: Deinit ... !!!
    REMOTE_SERVICE: Deinit ... Done !!!
    IPC: Deinit ... !!!
    IPC: DeInit ... Done !!!
    MEM: Deinit ... !!!
    MEM: Alloc's: 2 alloc's of 308 bytes
    MEM: Free's : 2 free's of 308 bytes
    MEM: Open's : 0 allocs of 0 bytes
    MEM: Deinit ... Done !!!
    APP: Deinit ... Done !!!

    What is the problem with the multi-camera demo?

  • Hi ,

    Have you updated the appIssGetDCCSizeVISS API in vision_apps\utils\iss\src\app_iss_common.c for your new sensor?

    I see the "Invalid DCC size for VISS" error in the log; it occurs when the appIssGetDCCSizeVISS API returns a negative DCC size.

    So please check that this API is updated for your sensor in both WDR and non-WDR modes.
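    The shape of such a per-sensor lookup can be sketched as follows (a hypothetical illustration only: the function name is simplified and the byte sizes are placeholders, not the real values from vision_apps\utils\iss\src\app_iss_common.c). The key point is that a sensor missing from the table yields a negative size, which is what triggers the "Invalid DCC size for VISS" error above.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch of an appIssGetDCCSizeVISS-style lookup.
 * Sensor names are from this thread; the sizes are placeholder values.
 * A sensor not registered here returns a negative size, which the caller
 * reports as "Invalid DCC size for VISS". */
static int32_t demo_GetDCCSizeVISS(const char *sensor_name, uint32_t wdr_mode)
{
    if (0 == strcmp(sensor_name, "OS_AR0820_MAX9295_LI_MULTI"))
    {
        /* separate DCC binaries for WDR and linear mode (placeholder sizes) */
        return (1u == wdr_mode) ? 12345 : 6789;
    }
    /* sensor not added to the table: negative size signals an error */
    return -1;
}
```

    In other words, adding a new sensor requires extending this lookup for both WDR and linear modes, or the multi-camera app aborts during init exactly as in the log above.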

    Regards,

    Brijesh

  • After updating app_iss_common.c, the multi-camera demo runs, and I see the deserializer/serializer and sensor register writes are OK. But when I press 's' it does not save a picture, and pressing 'x' does not exit. The log shows:

    Max number of cameras supported by sensor OS_AR0820_MAX9295_LI_MULTI = 2
    Please enter number of cameras to be enabled
    2
    2614.277327 s: ISS: Initializing sensor [OS_AR0820_MAX9295_LI_MULTI], doing IM_SENSOR_CMD_PWRON ... !!!
    2614.277882 s: ISS: Initializing sensor [OS_AR0820_MAX9295_LI_MULTI], doing IM_SENSOR_CMD_CONFIG ... !!!
    2618.648432 s: ISS: Initializing sensor [OS_AR0820_MAX9295_LI_MULTI] ... Done !!!
    2618.792486 s: ISS: Starting sensor [OS_AR0820_MAX9295_LI_MULTI] ... !!!


    =========================
    Demo : Camera Demo
    =========================

    s: Save CSIx, VISS and LDC outputs

    p: Print performance statistics

    x: Exit

    Enter Choice:


    =========================
    Demo : Camera Demo
    =========================

    s: Save CSIx, VISS and LDC outputs

    p: Print performance statistics

    x: Exit

    Enter Choice: 2618.865446 s: ISS: Starting sensor [OS_AR0820_MAX9295_LI_MULTI] ... !!!
    s

    =========================
    Demo : Camera Demo
    =========================

    s: Save CSIx, VISS and LDC outputs

    p: Print performance statistics

    x: Exit

    Enter Choice:


    =========================
    Demo : Camera Demo
    =========================

    s: Save CSIx, VISS and LDC outputs

    p: Print performance statistics

    x: Exit

    Enter Choice: x

  • Hi ,

    OK. Is there any change in the MIPI output for the RAW output format compared to the YUV output format?

    I guess YUV output is working fine for you, but it seems RAW output is not. What changes were made in the sensor and/or deserializer to get RAW output? Any change in the lane speed, data format, or virtual channel?

    Regards,

    Brijesh

  • Yes, YUV output is OK. The deserializer, serializer, and sensor settings are the same as on 0703, where they output data successfully, and the register writes succeed.

    It is an 8MP RAW camera; the data format is set to TIVX_RAW_IMAGE_16_BIT, 11, and the lane speed is set to CSIRX_LANE_BAND_SPEED_720_TO_800_MBPS.

    And where is the virtual channel setting?

  • But when you change to the RAW camera, are you also changing the data type to RAW12, i.e. 0x2C, in the deserializer MIPI output? The virtual channel and data type combination is used by the CSIRX to identify the channel; if they don't match, the data will not be captured.

    Virtual channel is set in chVcNum variable in the CSIRX.. This is not set the sensor driver, but set in the CSIRX. For multi-camera example, it is set in vision_apps\modules\src\app_capture_module.c file..

    Regards,

    Brijesh

  • Yes, the deserializer MIPI output had to change. The deserializer, serializer and sensor configuration is the same as on RTOS 07.03, where the multi RAW camera output worked, but on RTOS 08.01 it does not run.

    vision_apps\modules\src\app_capture_module.c is unchanged (original TI code), so I think the virtual channel is OK.

  • Hi,

    Strange, there is not much change in the CSIRX node or the CSIRX PDK driver from SDK 07.03 to SDK 08.01. It was just migrated from TI-RTOS to FreeRTOS.

    The main changes are in the imaging component. The imaging component now uses a channel mask to identify the number of enabled channels; can you please make sure this is taken care of in the driver you added? Also please check that you are using the correct lane band speed in the imaging component, which is another change: earlier the lane speed was fixed in the single-camera and multi-camera examples, but in the new framework it comes from the imaging component.

    Can you try commenting out the line below in vision_apps\modules\src\app_capture_module.c and see if it helps in the multi-camera example?

    captureObj->params.instCfg[id].laneBandSpeed        = sensorObj->sensorParams.sensorInfo.csi_laneBandSpeed;

    Regards,

    Brijesh

  • Hi, Brijesh,

    I added a log in app_capture_module.c and it prints captureObj->params.instCfg[0].laneBandSpeed = 13

    Is that correct?

    Also, I see iss_sensors.c has a channel mask, but I cannot find it in the camera driver sample file. Does that mean it has to be added in the driver?

  • Hi xu ji,

    I added a log in app_capture_module.c and it prints captureObj->params.instCfg[0].laneBandSpeed = 13

    This essentially means you are configuring the CSIRX to receive data at 1.5 to 1.7 Gbps lane speed, is that correct? Is the deserializer outputting within this lane band speed?

    Also, I see iss_sensors.c has a channel mask, but I cannot find it in the camera driver sample file. Does that mean it has to be added in the driver?

    True, I just looked at the IMX390 camera driver and found that it does not use this channel ID mask. So it should be OK, as long as the camera is registered with the imaging framework.

    One more question: can you please check and confirm that the camera driver and deserializer output is enabled only in the stream-on callback?

    Regards,

    Brijesh

  • This essentially means you are configuring the CSIRX to receive data at 1.5 to 1.7 Gbps lane speed, is that correct? Is the deserializer outputting within this lane band speed?

    In the sensor driver I set the CSIRX_LANE_BAND_SPEED_720_TO_800_MBPS macro.

    One more question: can you please check and confirm that the camera driver and deserializer output is enabled only in the stream-on callback?

    Yes, it is enabled only in stream-on.

  • In the sensor driver I set the CSIRX_LANE_BAND_SPEED_720_TO_800_MBPS macro.

    Oh sorry, I misread it. Yes, laneBandSpeed value 13 is correct for the 720 to 800 Mbps lane speed. But what I was asking is: is the deserializer also configured to output at this lane speed?

    Yes, it is enabled only in stream-on.

    Can you also please check what CSI data type you are outputting from the deserializer?

    Regards,

    Brijesh

  • Oh sorry, I misread it. Yes, laneBandSpeed value 13 is correct for the 720 to 800 Mbps lane speed. But what I was asking is: is the deserializer also configured to output at this lane speed?

    In 07.03 the CSI clock was set to 800, so I think it is OK.

    Can you also please check what CSI data type you are outputting from the deserializer?

    It is the RAW 12-bit data type.

  • It is the RAW 12-bit data type.

    If the deserializer is configured to output the RAW12 data type, can you please keep the data type in the CSIRX as RAW12 rather than YUV422?

    Regards,

    Brijesh

  • I ported this camera driver to a single-camera driver; all register settings, the data type and the CSI clock setting are taken from the multi-camera driver.

    Running the single camera demo app succeeds, and pressing 'p' prints the correct FPS:

    GRAPH: graph_85 (#nodes = 5, #executions = 592)
    NODE: CAPTURE2: node_96: avg = 33700 usecs, min/max = 32804 / 499156 usecs, #executions = 592
    NODE: VPAC_VISS1: VISS_Processing: avg = 13912 usecs, min/max = 13814 / 14153 usecs, #executions = 592
    NODE: MCU2-0: 2A_AlgNode: avg = 2224 usecs, min/max = 2018 / 2552 usecs, #executions = 592
    NODE: VPAC_MSC1: node_107: avg = 19646 usecs, min/max = 19581 / 19959 usecs, #executions = 592
    NODE: DISPLAY1: node_109: avg = 8431 usecs, min/max = 84 / 16768 usecs, #executions = 592

    PERF: TOTAL: avg = 32999 usecs, min/max = 32808 / 33158 usecs, #executions = 68

    PERF: TOTAL: 30.30 FPS

    Pressing 's' saves a RAW picture.

    But one sensor register write failed:

    [MCU2_0] 1561.948096 s: Error writing to register 0x301a
    [MCU2_0]
    [MCU2_0] 1561.948145 s:
    [MCU2_0] 1561.948166 s:
    [MCU2_0] 1561.948201 s: AR0820A_MAX: Sensor Reg Write Failed for regAddr 0x301a

    This register write failure also happens in the multi-camera app.

    But none of the single or multi-camera cases on 07.03 hit this register write failure. I used the same TDA4 and camera hardware, and the register settings and I2C write delays in the code are the same as on 07.03.

    I don't know whether the multi-camera app hang is related to this register write failure, or why 07.03 does not show it.

    Please help with the above.

  • Hi xu ji,

    In the SDK 07.03 release, the multi-camera example used a fixed lane speed of 1.5 Gbps, i.e. the TIVX_CAPTURE_LANE_BAND_SPEED_1350_TO_1500_MBPS band speed. Even for the single camera application this was fixed.

    So can you please try commenting out the line below from vision_apps\modules\src\app_capture_module.c in SDK 08.01 and check the multi-camera output?

    captureObj->params.instCfg[id].laneBandSpeed        = sensorObj->sensorParams.sensorInfo.csi_laneBandSpeed;

    Regards,

    Brijesh

  • After commenting out this line, the multi-camera demo runs successfully. Why?

  • Wow, glad to know it is finally working. This essentially means the lane speed is 1.5 Gbps, i.e. TIVX_CAPTURE_LANE_BAND_SPEED_1350_TO_1500_MBPS. Please set this in the sensor driver as well; I think it should then work even for the single camera application.
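For reference, the follow-up fix on the sensor-driver side would be along these lines. This is a sketch only: the exact field path is assumed from the csi_laneBandSpeed reference earlier in the thread, so verify it against the iss_sensors structures in your SDK:

```c
/* Sketch: report the lane band speed the deserializer actually uses, so
 * that app_capture_module.c can keep taking it from the sensor driver
 * instead of relying on the commented-out workaround. Field path assumed. */
sensorParams.sensorInfo.csi_laneBandSpeed = CSIRX_LANE_BAND_SPEED_1350_TO_1500_MBPS;
```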

  • Yes, using TIVX_CAPTURE_LANE_BAND_SPEED_1350_TO_1500_MBPS works fine. This issue is fixed, thanks.