
DM365 IPNC & Appro Software : Av_Server debugging



Hello all,

Following the integration of my camera into the IPNC application, I am starting to play with the av_server.out application. After modifying the avserver_UI files to include my use case (VGA only), I tried to run av_server.

It quickly reaches an OSA_thrCreate error on the first task (video_create). After looking at the IPNC user guide, it seems av_server could be run without the system_server process running. Can someone confirm this point?

But looking at the source code, it seems that av_server, when creating its tasks, needs some messages from another process (which I have not yet identified in the code, but which appears to be system_server).

The problem is that I would like to use av_server and store some video data in a file. Do I need to run system_server for this?

To anticipate a question: I do run the scripts av_capture_load.sh and loadmodules_ipnc.sh.

Thank you

 

  • Hi,

    reda38 said:
    It quickly reaches an OSA_thrCreate error on the first task (video_create). After looking at the IPNC user guide, it seems av_server could be run without the system_server process running. Can someone confirm this point?

    Yes, you can run av_server without system_server.

    Can you please provide the following information:

    1. How are you calling av_server.out and what are the parameters you are passing?

    2. What is the version of your IPNC Ref Design?

    3. What exactly is the error log that you get when running av_server.out?

     

    Regards,

    Anshuman

  • Hello Anshuman,

    1. How are you calling av_server.out and what are the parameters you are passing?

    After calling av_capture_load.sh, loadkmodules.sh, and loadmodules_ipnc.sh, I run:

    av_server.out VGA H264 3000000 CVBR

    I have added the VGA case in avserver_UI.c.

    2. What is the version of your IPNC Ref Design?

    Revision 1.0

    3. What exactly is the error log that you get when running av_server.out?

     AVSERVER UI: Initializing.
     AVSERVER API: Creating TSKs.

    DRV_SyncRst: module = 47, domain = 0, state = 0
    DRV_SyncRst: module = 47, domain = 0, state = 3

     CLK Hz,
     ARM   Hz =  297000000
     DDR   Hz =  243000000
     VPSS  Hz =  243000000
     IMCOP Hz =  243000000

     *****************1
     VIDEO CAPTURE CREATE...ENTER
     ERROR  (osa_thr.c|OSA_thrCreate|42): OSA_thrCreate() - Could not create thread [12] : Cannot allocate memory
     ASSERT (osa_tsk.c|OSA_tskCreate|32)

    And from this point there is no control over the application; it seems blocked in an infinite loop.

     

    Thank you for any ideas.

    reda

     

  • Reda,

    This error is coming from the pthread_create() call. Have you changed the memory configuration of the kernel? It seems the pthread library is not able to get enough memory for pthread_create to succeed. Can you check the "mem=" setting of your boot args? Also, please check the stack size that you are passing for the capture thread creation.
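
    To illustrate what is going on underneath (this is only a sketch of the pattern a wrapper like OSA_thrCreate typically follows, not the actual osa_thr.c code):

        #include <pthread.h>
        #include <stdio.h>
        #include <string.h>

        static void *task_main(void *arg)
        {
            (void)arg;
            return NULL; /* placeholder task body */
        }

        /* Create a thread with an explicit stack size, reporting errors
         * the way the OSA error log does. */
        static int thr_create_checked(pthread_t *thr, size_t stack_size)
        {
            pthread_attr_t attr;
            int err;

            pthread_attr_init(&attr);

            /* A size below PTHREAD_STACK_MIN is rejected with EINVAL; if
             * that error is ignored, the thread silently falls back to
             * the library default stack (often 1 MB or more), and with
             * many tasks on a mem=60M system pthread_create() can then
             * fail with ENOMEM. */
            err = pthread_attr_setstacksize(&attr, stack_size);
            if (err != 0)
                fprintf(stderr, "setstacksize(%zu): %s\n",
                        stack_size, strerror(err));

            err = pthread_create(thr, &attr, task_main, NULL);
            pthread_attr_destroy(&attr);
            if (err != 0) {
                /* ENOMEM is error 12 -- matching the "[12] Cannot
                 * allocate memory" in your OSA_thrCreate log */
                fprintf(stderr, "pthread_create: [%d] %s\n",
                        err, strerror(err));
                return -1;
            }
            return 0;
        }

    So both a bad stack-size argument and a too-small "mem=" setting can end up at the same error.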

    Were you able to get av_server.out working when you got the software release?

    BTW, there is a much newer release (ver 1.9) available for the IPNC reference design. I would recommend moving to that release.

    Regards,

    Anshuman

  • Hello Anshuman,

    >Can you check the "mem=" setting of your boot args?

    mem=60M

    >Also, please check the stack size that you are passing for the capture thread creation.

    I have these defines in avserver_thr.h:

    #define VIDEO_CAPTURE_STACK_SIZE    (VIDEO_STACK_SIZE_DEFAULT)
    #define VIDEO_LDC_STACK_SIZE        (VIDEO_STACK_SIZE_DEFAULT)
    #define VIDEO_VNF_STACK_SIZE        (VIDEO_STACK_SIZE_DEFAULT)
    #define VIDEO_RESIZE_STACK_SIZE     (VIDEO_STACK_SIZE_DEFAULT)
    #define VIDEO_ENCODE_STACK_SIZE     (VIDEO_STACK_SIZE_DEFAULT)
    #define VIDEO_ENCRYPT_STACK_SIZE    (VIDEO_STACK_SIZE_DEFAULT)
    #define VIDEO_STREAM_STACK_SIZE     (VIDEO_STACK_SIZE_DEFAULT)
    #define VIDEO_2A_STACK_SIZE         (VIDEO_STACK_SIZE_DEFAULT)
    #define VIDEO_VS_STACK_SIZE         (VIDEO_STACK_SIZE_DEFAULT)
    #define VIDEO_SWOSD_STACK_SIZE      (VIDEO_STACK_SIZE_DEFAULT)
    #define VIDEO_FD_STACK_SIZE         (VIDEO_STACK_SIZE_DEFAULT)
    #define VIDEO_DISPLAY_STACK_SIZE    (VIDEO_STACK_SIZE_DEFAULT)
    #define VIDEO_MOTION_STACK_SIZE     (VIDEO_STACK_SIZE_DEFAULT)
    #define AUDIO_CAPTURE_STACK_SIZE    (VIDEO_STACK_SIZE_DEFAULT)
    #define AVSERVER_MAIN_STACK_SIZE    (VIDEO_STACK_SIZE_DEFAULT)

    with: #define VIDEO_STACK_SIZE_DEFAULT    (0*KB)

    0 KB! Maybe that is the cause.

    There is a commented-out value set to 20 KB.

    What is the recommended value for it to work?

     

    Maybe, indeed, the version I received is not in working order. I will contact my TI application engineer.

    I have never seen it work on the IPNC.



  • Hello Anshuman,

    Having done a quick trial compiling with 0 KB and with 20 KB, I found that the 20 KB value enables the application to start.

    The problem is that setting 0 KB makes the application hang.

    But now I know there was an issue in the code I have.
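
    For reference, the one-line change in avserver_thr.h (using the commented-out 20 KB value already present in the file):

        /* was (0*KB), which passed a zero stack size to every task */
        #define VIDEO_STACK_SIZE_DEFAULT    (20*KB)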

    The question now is what the proper stack size is for all the threads to launch.

    Thank you for your help

    reda

    Hello Anshuman,

    Some news about the memory problem.

    It looks like the problem is general: the system_server process is also experiencing problems. By default, each file-manager thread and alarm thread is created with its attribute set to NULL. By printing the stack size, I can see it falls back to the default, which is more than 1 MB per thread. When I run 'free' at my prompt there is not enough memory (I must investigate why). So it looks like there is an overall memory problem in my setup.
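
    For what it's worth, here is roughly the kind of check I used to print the default (a minimal standalone sketch):

        #include <pthread.h>
        #include <stdio.h>

        /* Prints the stack size a thread gets when pthread_create() is
         * called with a NULL attribute, i.e. the library default picked
         * up by the system_server threads. */
        int main(void)
        {
            pthread_attr_t attr;
            size_t stacksize = 0;

            pthread_attr_init(&attr);   /* initialized to the defaults */
            pthread_attr_getstacksize(&attr, &stacksize);
            printf("default stack size: %zu bytes\n", stacksize);
            pthread_attr_destroy(&attr);
            return 0;
        }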

    My question is :

    Do you have the memory requirements for each process in the IPNC application?

    Thank you

    reda

     

  • Some follow-up information about this issue.

    Indeed, the memory assigned to the Linux kernel, declared on the kernel command line by the mem=xxM argument, was too low (60M!). Setting it to 76M frees enough memory for the different processes.

    My question concerning the memory usage of each process is still pending: is there a summary?

     

  • Hello Anshuman,

    As I progressed through debugging and source code reading, I found the thread memory problem. I also saw that the whole Appro system_server application is far more complex to understand than av_server. I still have some code reading to do.

    What I am trying to test is av_server in standalone mode.

    I tried to change numCaptureStream to 1, because I only need an H.264 stream, but it created some problems at runtime: some divisions by zero in the kernel and finally an Oops. I think avserver_UI.c needs other parameter settings.

    What I want is just VGA streaming in H.264, with no AEW, FD, or any other features, and a resizing factor of 1:1.

    I have created the VGA case with its dimensions and added it to the menu. I have also enabled the UART menu flag so I can change things at runtime.

    What I type is: ./av_server.out VGA H264 1000000 CVBR

    1. When running, I type 'y' to save the data to a YUV420SP file. The problem is that it outputs something in a Data000...yuvTI file of length 450 KB, which is the size of one image. It looks like the duration is not right.

    2. I tried to launch RTSP by typing: ./av_server.out VGA H264 1000000 CVBR RTSP. I then tried to open the stream in VLC, but with no result. Using the network analyser Wireshark, I saw that no RTSP stream was opened, although wis-streamer was launched.

    Of course, I launched the scripts as described in the documentation, except for the system_server executable, which launches av_server with its own parameters. And you confirmed that system_server was not necessary for this trial.

    It seems there is a problem in the data coming from the pipe. I tried to output debug messages, but nothing looked strange. If you need extra information...

    regards

    reda

     

     

  • Hi Reda,

    My answers are listed below:

    reda38 said:
    1. When running, I type 'y' to save the data to a YUV420SP file. The problem is that it outputs something in a Data000...yuvTI file of length 450 KB, which is the size of one image. It looks like the duration is not right.

    Yes, the option 'y' saves only one YUV420SP frame; that is how we have implemented it. A 450 KB file is consistent with exactly one frame: 640 x 480 x 1.5 bytes per pixel = 460,800 bytes, which is about 450 KB.

    reda38 said:

    2. I tried to launch RTSP by typing: ./av_server.out VGA H264 1000000 CVBR RTSP. I then tried to open the stream in VLC, but with no result. Using the network analyser Wireshark, I saw that no RTSP stream was opened, although wis-streamer was launched.

    Can you share the terminal log that you get? I would like to see whether the stream was generated and started or not. Also, what do you type into the VLC player to connect to the camera?

     

    I can suggest a few things to check whether your system is in good shape:

    1. Run './av_server.out VGA' --> This should do a VGA capture, assuming you have done the sensor driver integration correctly. You should also see the same on the display, if you have configured the display settings correctly.

    a.) Type 'y' and see if you get a YUV420SP frame dumped correctly.

    2. Run './av_server.out VGA H264 1000000 CVBR' --> This is VGA capture along with single-stream H.264 codec operation.

    a.) Type 'y' and see if you get a YUV420SP frame dumped correctly.

    b.) Type '6' to start saving the H.264 stream to a file; type '6' again to stop writing to the file. See if the encoded frames are dumped correctly or not.

    If 1 and 2 work for you, then the encoder is working fine and we only need to focus on the RTSP side. For this we can start wis-streamer separately and see whether the VLC player can connect to the camera or not.

    Regards,

    Anshuman

    PS: Please mark this post as verified, if you think this answers your question. Thanks.

  • Hi Anshuman,

    Thank you for the answers.

    OK, I understand the av_server options better now.

    - 'y' is for still image capture (a snapshot), while '6' is for video recording.

    - Also, could you tell me how to set up only one stream, with one capture and one encode, in the avserver_UI.c use-case part? As explained earlier, it looks like the FD is activated, and if I reduce numCaptureStream there are some divisions by zero (due to vdint0 and resz) and finally a kernel oops. I do not understand the philosophy of this part. Do I need to declare all this even if I do not use these features?

    Now, answering the questions about the tests:

    My sensor was integrated correctly up to this point, except that sometimes there were pink or green streaks from top to bottom of the image, along with white noise. For the moment I do not know whether it comes from the camera or the DaVinci IPIPE. I will redo the imager integration tests.

    1a) When I type 'y' I get a YUV image saved. The only problem is its size: it is slightly bigger than a VGA frame.

    2a) Yes, I got a YUV image. The size question is the same as above.

    2b) I got two files recorded, CHxxx640x480.bits and another with face info. The .bits file is supposed to be H.264 encoded, right?

    I viewed this file in VLC after changing its extension to .h264, and the video played OK.

    So 1 and 2 worked. I then opened a network stream and tried the RTSP protocol with the address rtsp://192.2.5.73:855x/h264; I tried both 855x ports assigned to H.264. Each is assigned to a stream id (1 and 2) which does not correspond to the avserver_UI id numbers (0 and 1).

    And nothing streamed. I will check it again, including the launching of wis-streamer by av_server.out, because with a network analyser no frames appeared.

    I will generate the terminal log and send it to you. Which debug messages do you want to appear? The STREAMING ones only?

    Regards

    reda

  • Hi,

    reda38 said:
    Also, could you tell me how to set up only one stream, with one capture and one encode, in the avserver_UI.c use-case part? As explained earlier, it looks like the FD is activated, and if I reduce numCaptureStream there are some divisions by zero (due to vdint0 and resz) and finally a kernel oops. I do not understand the philosophy of this part. Do I need to declare all this even if I do not use these features?

    The following options in avServerUI.c should help:

    config->numCaptureStream = 1;

    config->faceDetectConfig.captureStreamId = 0;
    config->faceDetectConfig.fdEnable        = FALSE; // Disable FD even though your av_server.out might be trying to set it.

    config->captureConfig[i].ldcEnable = FALSE;
    config->captureConfig[i].snfEnable = FALSE;
    config->captureConfig[i].tnfEnable = FALSE;
    config->captureConfig[i].vsEnable  = FALSE;

    Remove config->encodeConfig[i].XXXXXXXX for (i > 0). This way there will be no encoder configuration for streams other than stream 0.


    reda38 said:
    I viewed this file in VLC after changing its extension to .h264, and the video played OK.

    Seeing that you are able to get a good bitstream, the encoder is surely working fine. The only thing you need to worry about now is the network settings and wis-streamer being launched correctly.

    BTW, as you are using YUV input mode to the DM365, there is nothing the DaVinci (DM365) IPIPE is going to do to the input data; it is more of a pass-through. So for the artifacts and white noise that you are seeing, I would recommend focusing on the sensor driver and sensor settings.

    Regards,

    Anshuman

     

    PS: Please mark this post as verified, if you think this answered your question. Thanks.

  • Hello Anshuman,

    I will try your settings for the single stream. It looks like there is a flag I missed in the configuration; I did not know that simply removing the encodeConfig parameters for the other streamIds sufficed.

    Concerning image quality: as said before, I had some pink streaks in my images when I did the ISIF/IPIPEIF/IPIPE configuration tests. I will check this once more on the sensor. But I did not have the white dots then, so maybe the encoding causes some problems, though I doubt it. I will check my sensor further on these two points.

    Concerning wis-streamer, despite being launched from within the av_server program, it sometimes did not start. After resetting the platform and making sure it rebooted cleanly, it streamed on port 8557 and the video was visible in VLC. So I guess it is more a problem of scripts and the like.

    So it looks like the application is running now. Thanks for your help... again!

    regards

     

  • Reda,

    This is good news. I would still recommend jumping to the latest IPNC Ref Design version, which is ver 1.9. There have been quite a few changes from ver 1.0 to ver 1.9.

    Regards,

    Anshuman

  • Hello Anshuman,

    OK, I will check for the latest version. We plan to move to DVSDK 3.0 too, and we are waiting for information about the compatibility of the IPNC software and the DVSDK.

    Concerning our pending point, I have tried your advice, and I got some divisions by zero and a kernel Oops. I removed everything linked with streamId > 0 to get this:

    ****************************************************************************************

    case AVSERVER_UI_CAPTURE_MODE_VGA:

        config->sensorMode          = DRV_IMGS_SENSOR_MODE_640x480;
        config->sensorFps           = 10;

        config->vstabTskEnable      = FALSE;
        config->ldcTskEnable        = FALSE;
        config->vnfTskEnable        = FALSE;
        config->encryptTskEnable    = FALSE;
        config->captureRawInMode    = AVSERVER_CAPTURE_RAW_IN_MODE_ISIF_IN;
        config->captureSingleResize = FALSE;
        config->captureYuvFormat    = DRV_DATA_FORMAT_YUV420;

        config->numCaptureStream    = 1;

        if(numEncodes > config->numCaptureStream)
            numEncodes = config->numCaptureStream;
        config->numEncodeStream     = numEncodes;

        config->faceDetectConfig.captureStreamId = 0;
        config->faceDetectConfig.fdEnable        = FALSE;

        config->displayConfig.captureStreamId    = 0;
        config->displayConfig.width              = 640;
        config->displayConfig.height             = 480;
        config->displayConfig.expandH            = TRUE;

        config->audioConfig.captureEnable        = FALSE;
        config->audioConfig.samplingRate         = 8000;
        config->audioConfig.codecType            = ALG_AUD_CODEC_G711;
        config->audioConfig.fileSaveEnable       = FALSE;

        i = 0;
        k = 0;
        config->captureConfig[i].width               = 640;
        config->captureConfig[i].height              = 480;
        config->captureConfig[i].ldcEnable           = FALSE;
        config->captureConfig[i].snfEnable           = FALSE;
        config->captureConfig[i].tnfEnable           = FALSE;
        config->captureConfig[i].vsEnable            = FALSE;

        if(numEncodes > 0)
            config->captureConfig[i].numEncodes      = 1;

        config->captureConfig[i].encodeStreamId[k++] = 0;
        config->captureConfig[i].frameSkipMask       = 0xFFFFFFFF;

        i = 0;
        config->encodeConfig[i].captureStreamId          = 0;
        config->encodeConfig[i].cropWidth                = ALIGN_ENCODE(640);
        config->encodeConfig[i].cropHeight               = ALIGN_ENCODE(480);
        config->encodeConfig[i].frameRateBase            = 10000;
        config->encodeConfig[i].frameSkipMask            = 0xFFFFFFFF;
        config->encodeConfig[i].codecType                = gAVSERVER_UI_config.codecType[i];
        config->encodeConfig[i].codecBitrate             = gAVSERVER_UI_config.codecBitrate[i];
        config->encodeConfig[i].encryptEnable            = FALSE;
        config->encodeConfig[i].fileSaveEnable           = FALSE;
        config->encodeConfig[i].motionVectorOutputEnable = FALSE;
        config->encodeConfig[i].qValue                   = gAVSERVER_UI_config.codecBitrate[i];
        break;

    *****************************************************************************************************

    What is the problem with this configuration? It looks like something is badly initialized; it leaves a pointer bogus, and hence the kernel oops.

    Another point is that I do not have a display yet, and there is no display-config enable bit.

    More generally, my understanding is: I have one capture stream and one encode stream, and I could also choose one capture stream and two encode streams (H.264 & MJPEG for instance). Is that the way it is coded? It is not really clear in the code.

    regards

  • reda38 said:

    What is the problem with this configuration? It looks like something is badly initialized; it leaves a pointer bogus, and hence the kernel oops.

    Ideally, it should have worked, but if it is giving you problems then there has to be an initialization issue. We have a single-capture-stream use case implemented in ver 1.9, so I am sure we have fixed the problem if one existed. Currently, I can only suggest that you also remove the encode stream and just validate whether capture alone works or also gives you a kernel oops, as in the sketch below. This is only to eliminate the problem step by step.
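
    Concretely, the capture-only variant of your VGA case would be something like this (a sketch using the fields from your snippet; I have not compiled it against ver 1.0):

        /* capture-only sanity check: no encoder configured at all */
        config->numCaptureStream = 1;
        config->numEncodeStream  = 0;  /* ignore numEncodes from the command line */

        config->faceDetectConfig.captureStreamId = 0;
        config->faceDetectConfig.fdEnable        = FALSE;

        config->captureConfig[0].width      = 640;
        config->captureConfig[0].height     = 480;
        config->captureConfig[0].numEncodes = 0;  /* nothing feeds an encoder */
        /* leave all config->encodeConfig[] settings out entirely */

    If this runs clean, add the encode stream back and the failing initialization should show itself.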

    reda38 said:
    More generally, my understanding is: I have one capture stream and one encode stream, and I could also choose one capture stream and two encode streams (H.264 & MJPEG for instance). Is that the way it is coded? It is not really clear in the code.

    Yes, you are right: the same capture stream can go to different encoders. The D1 use case allows that; the same D1 resolution goes to both the H.264 encoder and the MJPEG encoder. In terms of the configuration, the fan-out is sketched below.
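
    In terms of the avServerUI.c fields, the fan-out is only a matter of the id mapping (a sketch based on the fields used in this thread; the D1 use case in the release is the real reference):

        /* one capture stream feeding two encoders, as in the D1 demo */
        config->numCaptureStream = 1;
        config->numEncodeStream  = 2;

        config->captureConfig[0].numEncodes        = 2;
        config->captureConfig[0].encodeStreamId[0] = 0;  /* e.g. H.264 */
        config->captureConfig[0].encodeStreamId[1] = 1;  /* e.g. MJPEG */

        /* both encoders read from capture stream 0 */
        config->encodeConfig[0].captureStreamId = 0;
        config->encodeConfig[1].captureStreamId = 0;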

    reda38 said:

    Another point is that I do not have a display yet, and there is no display-config enable bit.

    You are right, the display does not have an enable/disable config; it is always on by default. You can switch it off by removing the TskRun function in the display thread.

    Regards,

    Anshuman

  • Hello Anshuman,

    Thank you for your answer. I will perform the step-by-step diagnosis; it is the only way now to find the source of the problem.

    Concerning version 1.9, which DVSDK version is it aimed at? Same question for the Linux kernel version vs IPNC 1.9.

    Best regards

    Reda

  • Hi,

    reda38 said:

    Concerning version 1.9, which DVSDK version is it aimed at? Same question for the Linux kernel version vs IPNC 1.9.

    IPNC Ref Design 1.9, and in general future releases too, are based on DVSDK 2.10.xx and LSP 2.10.xx. TI has already moved to the open-source Linux kernel tree in DVSDK 3.10.xx, but we do not plan to migrate to this new DVSDK. Of course, we do not see moving to LSP/DVSDK 3.xx.xx as a big effort, and we can provide support to customers based on their requirements.

    Regards,

    Anshuman

    PS: Please mark this post as verified, if you think it has answered your question. Thanks.

    Thank you for this valuable information.

    We'll see when we need to move to DVSDK 3.0 and Linux 2.6.33. Concerning the application code, I do not foresee problems; it is more on the framework component compatibility side that some questions may arise.

    cheers

    reda

  • Hi,

    I have hit the "Division by zero in kernel" error too when choosing one capture stream and two encode streams. Can anyone help with this?

  • Hi Tracy,

    I believe it should take only some simple debugging to fix the issue. How have you changed the code to make a single capture stream go to two different encode streams? We do the same operation in our D1 demo mode, where the D1 stream goes to two different encoders.

    Regards,

    Anshuman

  • Hi Anshuman,

    First of all, thank you for your response.

    The IPNC D1 demo supports dual encode streams, but its numCaptureStream is not 1: the capture side has 3 channels, which are 720x480, 720x480, and 288x192.

  • Dear Anshuman,

     

    We are working on migrating our products to the TI chipset (DM365/DM368). To start with, we bought an IPNC from Appro and started working on the software part.

    We want all our software to be based on open source, so we are trying to port DVSDK 3.10 (UBL, u-boot, linux-2.6.34) to the IPNC. Since we are quite new to the TI platform, it would be a great help if you could provide some guidance on porting DVSDK 3.10 to the IPNC.

     

    Thanks in advance for any help!

     

    Kind Regards,

    Alex

  • Hi Alex,

    We have not ported the IPNC software to the DVSDK 3.10 package, but it should not be very difficult. The main thing you might want to focus on is the EDMA-related driver layer in the IPNC software; there have been some changes in the EDMA LSP driver, which the IPNC uses. Other things should be quite similar.

    We can help you based on specific issues you face.

    Regards,

    Anshuman

    PS: Please mark this post as verified, if you think it has answered your question. Thanks.

  • Hi Anshuman,

     

    Thanks for your reply. I will get back to you if I encounter any problems.

     

    Kind Regards,

    Alex

     

  • Hi Anshuman,

    I was comparing the UBL and u-boot for the IPNC and for DVSDK 3.10, wondering if I can use the same IPNC UBL and u-boot. We will not be using the IPNC applications, so my task is to port the UBL, u-boot, kernel (2.6.32), and file system (YAFFS).

    Thanks in advance for any advice!

    Kind Regards,

    Alex