
Basics of image path / pipe configurations in DM36x IPNC

In the many hours (days, weeks...) of reading this forum and assorted documentation, I have seen lots of talk about using different imaging paths (for example "ISIF-->RSZ-->DDR") in the DM36x for different purposes, but I have not found any really clear information on how you actually set a path up from within the source code.

Through lots of grepping of the source code and tinkering I have video capture working, but it would be really good to have some idea of which code we should or should not be altering to configure the imager correctly. There doesn't seem to be a single place where an overall configuration file or header lives, and the code is entirely free of comments.

Our setup is Appro IPNC DM368-MT5 4.0.x; input is BT.1120 YUV422, either 720p60 or 1080p30, so all other resolutions must be generated by resizing. We do not need any corrections (LDC, AEWB, NR, etc.); we simply need to capture full res and resize to one or two smaller streams.

Can any of you gurus out there give either a link to the explanation that I've missed, or a summary of how to set up a couple of example paths/pipes/whatever they are properly called?

  • Hi Mark,

    That was quick! Unfortunately the Appro IPNC does not use V4L2, so everything's a bit different - even if the functionality is basically the same.

  • Okay, so they're back on 2.6.32 or something near there.  The only thing I remember from that kernel regarding the path selection was:

    * dm365_imp.oper_mode=0/1

    which specified either one-shot or continuous resizer operations and set the path accordingly.
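    For reference, that was a module parameter for the dm365_imp driver, so (with the driver built in) it went on the kernel command line, appended to the existing bootargs, something like:

        <existing bootargs> dm365_imp.oper_mode=1

    with 0 selecting continuous and 1 selecting one-shot, if memory serves.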


    Perhaps they are doing some magic where things are added / removed from the pipeline depending on if you open / close the device entries.


    Is there no v4l2 at all, or just no media controller framework?

    Mark

  • Mark, I may have been wrong (it's a new day with fresh coffee): there are V4L2 drivers in the kernel side of the code; it's GStreamer that the Appro doesn't use - they use Live555 instead. I got the wrong popular Linux video-handling thingy...

    However, I have not seen any V4L2-looking references in the Appro (IPNC) side of the code, including the image capture drivers (MT9P031, TVP514x, etc.); it looks like the data is passed through the pipe/rsz etc. via DDR and streamed straight out by AVserver.

    I am re-RTFMing at the moment to check I'm not missing something.

  • Having spent a lot of time RTFMing SPRUFG8C and the interwebs, I'm not much the wiser - the reference manual gives a lot of information but doesn't really tie it together into complete use cases, and uses some terms interchangeably (I just found out TI renamed the CCDC to ISIF, but there are still references to both throughout the docs & code).

    Some sections seem to suggest different answers to the same question - for example it's not clear if we should use the BT656 example for BT1120 input or the "generic YCbCr" section. The Appro code / YUV Sensor Integration Guide uses the rec656Config structure for YUV_MODE_INTERLACED input but leaves it NULL for "ordinary" YUV_MODE, even though the documentation gives the impression that the rec656 interface also supports BT1120 and therefore would be the way to handle BT1120.

    A major source of confusion is the fact that we are currently capturing & streaming from BT1120, with a few glitches, using a configuration that doesn't seem to be correct/optimal, but when I rewrite the config it doesn't work... there are so many things that *could* require changing or be interdependent that it's hard to know whether it genuinely doesn't work or just needs a bit more tweaking.

    Capture task flows - the Appro guide says:
    • AVSERVER_CAPTURE_RAW_IN_MODE_ISIF_IN: raw data captured from the sensor is sent directly to IPIPE. Should be used for single or dual capture streams.
    • AVSERVER_CAPTURE_RAW_IN_MODE_DDR_IN: raw data captured from the sensor is saved to DDR, and IPIPE then processes it to generate YUV data. Should be used for triple or quad capture streams.
    But there is no indication of why one is better than the other, what the deciding factors are, etc.
    To me it seems like it would be best to use the same capture setup all the time and just vary the settings (number of output streams/encodes/resizes), but presumably that's not possible when encoding 1080p30, for example? Are these the only things we need to change to alter the capture path, or are there other values in the drv_ipipe.c / drv_isif.c / drv_resz.c / avServerUI.c files that should be modified?

    The guide's entry for captureYuvFormat (the YUV format as defined in framework/drv/drv.h) reads:
        DRV_DATA_FORMAT_YUV422 or DRV_DATA_FORMAT_YUV420.
        "Keep as DRV_DATA_FORMAT_YUV420 always. Not tested with DRV_DATA_FORMAT_YUV422..."
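    If I'm reading the guide right, those two knobs end up in the avServer config roughly like this (the field name for the raw-in mode is my guess from the constant names, so treat this as a sketch rather than gospel):

        /* Sketch only - I'm guessing the raw-in mode field name from the
           AVSERVER_CAPTURE_RAW_IN_MODE_* constants; check the avServer headers */
        config->captureConfig[i].captureMode = AVSERVER_CAPTURE_RAW_IN_MODE_ISIF_IN;  /* single/dual streams */
        /* ...or AVSERVER_CAPTURE_RAW_IN_MODE_DDR_IN for triple/quad streams */
        config->captureYuvFormat = DRV_DATA_FORMAT_YUV420;  /* per the guide: always 420 */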


    How we currently have our settings:
    ===================================
    I have copied here a selection of settings/config flags etc. that I'm unsure of, in the hope that someone can either explain what I've done wrong or suggest how it could be improved.
    drv_isif.c, DRV_isifSetParams():
        inDataConfig.inDataType = CSL_CCDC_IN_DATA_TYPE_YUV16
        CSL_ipipeifSetInputSource1(&gCSL_ipipeifHndl, CSL_IPIPEIF_INPUT_SOURCE_SDRAM_YUV); - DOES NOT WORK
        (CSL_IPIPEIF_INPUT_SOURCE_PARALLEL_PORT_RAW does work, but I find it confusing that this is != YUV when we are trying to use YUV input.)

    drv_ipipe.c:
        gDRV_ipipeObj.ipipeSetup.dataPath = CSL_IPIPE_IPIPE_DATA_PATH_YCBCR_IN_YCBCR_RGB_OUT
        gDRV_reszObj.inSrcConfig.inputSource2 = CSL_IPIPEIF_INPUT_SOURCE_SDRAM_YUV;
        gDRV_ipipeObj.rszInConfig.inputDataPath = CSL_RSZ_INPUT_DATA_PATH_IPIPEIF or CSL_RSZ_INPUT_DATA_PATH_IPIPE?
        DRV_ipipeRszSetParams(): gDRV_ipipeObj.ipipeifInSrcConfig.inputSource2 = CSL_IPIPEIF_INPUT_SOURCE_SDRAM_YUV; - DOES NOT WORK
        DRV_ipipeRszSetParams(): gDRV_ipipeObj.ipipeifClk.clkSel = CSL_IPIPEIF_IPIPEIF_CLOCK_SELECT_PCLK (0) - looks correct for an externally supplied PCLK

    drv_resz.c:
    gDRV_reszObj.inConfig.inputDataPath = CSL_RSZ_INPUT_DATA_PATH_IPIPE or CSL_RSZ_INPUT_DATA_PATH_IPIPEIF?
    gDRV_reszObj.inSrcConfig.inputSource2           = CSL_IPIPEIF_INPUT_SOURCE_SDRAM_RAW;
    gDRV_reszObj.inSrcConfig.inputSource2           = CSL_IPIPEIF_INPUT_SOURCE_SDRAM_YUV;
    Same question - which one of these two is correct for YUV/BT1120 input?

    In drv_isifCfgBT1120.c most stuff is disabled, but these are set:
    .syncConfig = {
        .interlaceMode = FALSE,                              /* progressive input */
        .wenUseEnable  = FALSE,                              /* no external write-enable signal */
        .fidPolarity   = CSL_CCDC_SIGNAL_POLARITY_POSITIVE,
        .hdPolarity    = CSL_CCDC_SIGNAL_POLARITY_POSITIVE,
        .vdPolarity    = CSL_CCDC_SIGNAL_POLARITY_NEGATIVE,
        .hdVdDir       = CSL_CCDC_SIGNAL_DIR_INPUT,          /* HD/VD come from the source (slave mode) */
        .fidDir        = CSL_CCDC_SIGNAL_DIR_INPUT,
        .hdWidth       = 0,    /* pulse widths / frame geometry: presumably only */
        .vdWidth       = 0,    /* used when the ISIF generates the syncs itself, */
        .pixelsPerLine = 0,    /* which would explain why they can stay 0 here   */
        .linesPerFrame = 0,    /* with HD/VD as inputs - my guess, uncommented   */
      }
    It seems strange that we don't set .pixelsPerLine or .linesPerFrame, and that we then have to set the H/V padding as a constant elsewhere in the code.

    The avServerUI alters many settings, including resizer & input format, on the fly (restarting the capture routines, server, etc.), but the maths it does is undocumented and the code uncommented.

    Capture Vs Encode/Stream config
    ===============================
    Our imager is only capable of outputting 720p30/60 or 1080p30, but we can obviously decimate, frame-skip, etc. to achieve lower resolutions and frame rates.
    avServerUI.c configures a lot of parameters on the capture side AND the encode side, but features rather ambiguous variables like this:
    config->captureConfig[i].encodeStreamId[k++]    = 1;  // So is this the capture side or the encode side?
    config->encodeConfig[i].captureStreamId         = 0;  // So is this the capture side or the encode side?
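    My best guess at the intent (unverified): encodeStreamId[] lives on the capture side and lists which encode channels consume that capture stream, while captureStreamId on the encode side points back at the capture stream feeding it - the same link described from both ends. Something like:

        /* Sketch of my reading - one capture stream fanned out to two encodes.
           Only the two fields above are from the real code; the rest is illustrative. */
        config->captureConfig[0].encodeStreamId[0] = 0;   /* capture 0 -> encode 0 */
        config->captureConfig[0].encodeStreamId[1] = 1;   /* capture 0 -> encode 1 */
        config->encodeConfig[0].captureStreamId    = 0;   /* encode 0 <- capture 0 */
        config->encodeConfig[1].captureStreamId    = 0;   /* encode 1 <- capture 0 */

    If that's right, the two sides have to be kept consistent by hand, which would explain some of the gymnastics in avServerUI.c.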

    I'll admit I haven't chased to the bottom of this particular rabbit-hole just yet, but any pointers would be appreciated!
     
    Confusion of Terminology:
    =========================
    There are registers in several of the modules relating to width & height, start pixels, valid_start, valid_width, etc. It seems like the capture interface expects to count a certain number of pixels after HD/VD, otherwise it will not work and the interrupt will never fire - is there any way to make this automatic? Due to the padding around the image, and the code assuming it is always central, it's possible to end up with an overall pixel count that is bigger than the received data, and hence no image is captured.

    It seems that the capture process effectively crops the "valid" portion out of the image for further use, using constants like IMGS_H_PAD/IMGS_V_PAD, valid_start values, etc., but I can't pin down at which stage this happens. NOTE this is not the same thing as cropWidth/cropHeight in VIDEO_EncodeConfig.

    I already added functions to switch the IMGS_PAD values if the BT1120 input is 720p or 1080p.
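    For the record, my switch is roughly this shape (the helper and the per-mode constants are mine, not Appro's - in the stock code IMGS_H_PAD / IMGS_V_PAD are #defines, so I turned them into variables that drv_ImgsCalcCfg.c reads instead):

        /* My own addition, not SDK code: select pad values per BT1120 input mode */
        static void IMGS_setPadForMode(int is1080p)
        {
            if (is1080p) {
                gImgsHPad = IMGS_H_PAD_1080P;   /* measured from our 1080p30 source */
                gImgsVPad = IMGS_V_PAD_1080P;
            } else {
                gImgsHPad = IMGS_H_PAD_720P;    /* measured from our 720p60 source */
                gImgsVPad = IMGS_V_PAD_720P;
            }
        }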

    Examples:
    In drv_resz.c DRV_reszRun():
        gDRV_reszObj.ddrInConfig.inputWidth      = prm.inWidth + startX;
        gDRV_reszObj.ddrInConfig.inputHeight     = prm.inHeight;
        gDRV_reszObj.ddrInConfig.inputLineOffset = bytesPerLine;
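    If I understand the stride handling (my inference - it isn't documented anywhere I can find), inputLineOffset is in bytes while inputWidth is in pixels, so for 16-bit YUV422 something like this has to hold for the window to stay inside the buffer:

        bytesPerLine = sensorDataWidth * 2;   /* 2 bytes per pixel for YUV422 */
        /* required: (startX + inWidth) * 2 <= bytesPerLine,
           otherwise the resizer reads past the end of each line */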

    In videoResizeThr.c VIDEO_captureRszFunc():
        reszPrm.inStartX = 0;
        reszPrm.inStartY = 1;
        reszPrm.inWidth  = gVIDEO_ctrl.captureInfo.isifInfo.ddrOutDataWidth;
        reszPrm.inHeight = gVIDEO_ctrl.captureInfo.isifInfo.ddrOutDataHeight;
    Both of which are set in ipnc_rdk/av_capture/framework/drv/usermod/src/drv_isif.c#DRV_isifOpen():
        gDRV_isifObj.info.ddrOutDataWidth  = gDRV_isifObj.imgsModeInfo.validWidth;
        gDRV_isifObj.info.ddrOutDataHeight = gDRV_isifObj.imgsModeInfo.validHeight;

    There is also a lot of misleading terminology in the TI/Appro docs, where the various input sources are used interchangeably - the term "CCD/CMOS sensor" or "video data input" is used where the input/case being discussed could be anything from a Bayer-pattern CCD to 8/10/16-bit YCbCr requiring no further processing. There are also omissions where the documentation talks about the input being "video data from SDRAM" without saying how the video data got into the SDRAM in the first place. The various formats (YCbCr, RGB, YUV, etc.) are also used in ways that are *sometimes* interchangeable, but then we fall over in the source code, where the options may be xyz_YUV or xyz_RGB_RAW, for example.

    According to SPRUFG8,"IPIPE has three different processing paths":
    • Case 1: IPIPE reads CCD raw data and applies all IPIPE functions and stores the YCbCr (or RGB) data to SDRAM.
    • Case 2: IPIPE reads CCD raw data and stores the Bayer data after white balance to SDRAM.
    • Case 3: IPIPE reads YCbCr-422 data and applies edge enhancement, chroma suppression, and resize to output YCbCr (or RGB) data to SDRAM.
    ...It then goes on to show a diagram with SRC.FMT values of 0,1,2,3 (that's 4 modes).
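    For what it's worth, Case 3 looks like the one that fits our BT1120 YUV input, and as far as I can tell it corresponds to the dataPath value already quoted above (I haven't traced which CSL_IPIPE_IPIPE_DATA_PATH_* values the two RAW cases map to):

        /* Case 3: YCbCr-422 in, YCbCr (or RGB) out - matches what drv_ipipe.c sets */
        gDRV_ipipeObj.ipipeSetup.dataPath = CSL_IPIPE_IPIPE_DATA_PATH_YCBCR_IN_YCBCR_RGB_OUT;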
        
    Uncommented Code Questions
    ==========================
    A lot of the code uses image width/height numbers and padding numbers, but sometimes divides one or both by 2 without explanation, like this:
        pModeCfg->sensorDataWidth   = pFrame->W;
        pModeCfg->sensorDataHeight  = pFrame->H / 2;

    The padding values (IMGS_H_PAD / IMGS_V_PAD) seem to assume the image is centred relative to the padding, with code like this:
    pModeCfg->validStartX       = IMGS_H_PAD/2;
    pModeCfg->validStartY       = IMGS_V_PAD/2;
    These crop up in a few places in the code.
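    To make the centering assumption concrete (numbers picked for illustration, not ours): with IMGS_H_PAD = 16 and IMGS_V_PAD = 8,

        pModeCfg->validStartX = 16 / 2;  /* = 8: crop starts 8 px in, i.e. 8 px pad assumed on EACH side */
        pModeCfg->validStartY =  8 / 2;  /* = 4 */
        /* so sensorDataWidth must be exactly validWidth + 16, or the "valid"
           window runs past the end of the real data and capture never completes */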

    It would be good if someone at Appro had thought to put even a single comment on the line:
      pFrame->HBmin  = 346*(pFrame->row_bin+1)+64+(80/(pFrame->col_bin+1))/2;
    which I'm sure is doing something important. There are quite a few lines like this in drv_ImgsCalcCfg.c that do maths without explaining why.

    I am still digging...

  • Bob,

    I can answer a few of the questions. It takes a lot of time to understand all the questions and answer them.

    Bob The Gerbil said:
    Having spent a lot of time RTFMing SPRUFG8C and the interwebs, I'm not much the wiser - the reference manual gives a lot of information but doesn't really tie it together into complete use cases, and uses some terms interchangeably (I just found out TI renamed the CCDC to ISIF, but there are still references to both throughout the docs & code).

    CCDC is different from ISIF. To get a better idea of the difference, you can have a look at the DM6446 VPFE controller and compare it with the DM36x VPFE. There is no concept of an IPIPEIF there.

    http://www.ti.com/lit/ug/sprue38h/sprue38h.pdf 

    Bob The Gerbil said:
    Some sections seem to suggest different answers to the same question - for example it's not clear if we should use the BT656 example for BT1120 input or the "generic YCbCr" section. The Appro code / YUV Sensor Integration Guide uses the rec656Config structure for YUV_MODE_INTERLACED input but leaves it NULL for "ordinary" YUV_MODE, even though the documentation gives the impression that the rec656 interface also supports BT1120 and therefore would be the way to handle BT1120.

    A major source of confusion is the fact that we are currently capturing & streaming from BT1120, with a few glitches, using a configuration that doesn't seem to be correct/optimal, but when I rewrite the config it doesn't work... there are so many things that *could* require changing or be interdependent that it's hard to know whether it genuinely doesn't work or just needs a bit more tweaking.

    I think you need to separate two things. One is the bus interface, i.e. BT656, BT1120, parallel interface, etc. The second is the data format: YUV, YCbCr, RAW/Bayer, etc. In IPIPEIF you can select the source of the data that it provides to IPIPE, RSZ or ISIF. This can be the parallel port, an SDRAM location, or the output of the ISIF directly. This info is clearly given in Figure 4-1 of SPRUFG8B.pdf.

    Once you are clear about the data flow path, it's just a matter of understanding the configuration that needs to be done in each module. For example, you configure BT656/1120/Bayer in the ISIF module and not in IPIPEIF. To understand this, please refer to Figure 4-32, Image Pipe Interface Processing Flow.
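    To put it in terms of the CSL calls you have already quoted (a rough sketch only; please verify the exact behaviour against your SDK version):

        /* 1. Bus interface and data format belong to the ISIF:
              BT1120 is a 16-bit parallel YCbCr bus */
        inDataConfig.inDataType = CSL_CCDC_IN_DATA_TYPE_YUV16;

        /* 2. Source selection belongs to the IPIPEIF:
              where IPIPE/RSZ pull their data from */
        CSL_ipipeifSetInputSource1(&gCSL_ipipeifHndl,
                                   CSL_IPIPEIF_INPUT_SOURCE_PARALLEL_PORT_RAW);

        /* My reading of the enum name: "RAW" here means "straight from the
           port, not via SDRAM", rather than "Bayer data" - which is why it
           still works for your YUV input. */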

    I believe the answers to the remaining questions lie here. I have not read the post completely, though. If you have further questions about the flow, please feel free to ask.

    One thing you have to keep in mind is that the whole ISS driver is not a well-written, framework-style piece of code. It more or less configures each module individually and doesn't take into account what the other modules are configured to do. So the user needs to understand the architecture carefully before modifying the flow.

    The behavior is different in the case of the V4L2 drivers, where you have a well-defined framework.

  • Thanks Renjith.

    I have read the documentation, and am re-reading it. I think my biggest problem is not understanding the documents, but linking the things in the documents to the variables/structures/functions in the Appro SDK. The TI documentation describes everything in terms of setting individual registers, whereas the SDK has existing data structures & functions that define the configuration; it seems better to work with these rather than try to override them by writing registers directly.

    As I said in my post, it would be good to have example code that explains how you take a path configuration and apply that to the Appro IPNC code. At the risk of sounding like I want other people to do my work for me, it would be really helpful to see the correct code to set up the use-case that Anushman suggests is optimal in this post - ISIF-->RSZ-->DDR, as later on he states that [Appro] "IPNC SW av_server uses ISIF-->IPIPE-->RSZ-->DDR path for all 1080P30 usecases".

  • Hi Bob,

    The key is to continue reading the document and the code back to back. Another, easier approach would be to set up a known working configuration, dump the registers, and compare them with the TRM. This will give a better idea of the register settings and their correct usage.
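    A minimal sketch of such a dump, assuming the base address and window size are looked up in the TRM memory map (please verify both against SPRUFG8 before relying on this):

        /* regdump.c - dump a block of ISS registers via /dev/mem for TRM comparison */
        #include <stdio.h>
        #include <stdint.h>
        #include <fcntl.h>
        #include <sys/mman.h>
        #include <unistd.h>

        #define ISS_BASE 0x01C70000u  /* VPSS/ISS region on DM36x - verify in the TRM */
        #define ISS_SIZE 0x4000u      /* window size - adjust to the module of interest */

        int main(void)
        {
            int fd = open("/dev/mem", O_RDONLY | O_SYNC);
            if (fd < 0) { perror("open /dev/mem"); return 1; }

            volatile uint32_t *regs = mmap(NULL, ISS_SIZE, PROT_READ,
                                           MAP_SHARED, fd, ISS_BASE);
            if (regs == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

            for (uint32_t off = 0; off < ISS_SIZE; off += 4)
                printf("0x%08X: 0x%08X\n", ISS_BASE + off, regs[off / 4]);

            munmap((void *)regs, ISS_SIZE);
            close(fd);
            return 0;
        }

    Run it once with the known-good configuration and once with yours, then diff the two dumps.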

    It will be tough to get sample source code that depicts all the use cases.

  • I agree that's the expedient solution, but it would be unnecessary if the code was actually documented...

    Well, 50% of all the posts about the IPNC would be unnecessary if the code was documented.

  • Bob,

    That's the hard truth of the Linux world. At the same time, I (and many others) get paid precisely because of the lack of documentation :)

  • If the code was free open source, I'd expect to get exactly what I paid for... but when it's in a 3rd-party SDK for which we've paid a fair chunk of cash, it's not so great.