
Need help designing an IP CAM with 4 parallel sensor inputs using DM368

I have a customer who wants to run 4 CCD sensors into an FPGA and merge them into a single 4xD1-resolution (HD/2) video stream, with an option to output the 4 D1 videos locally on separate RCA connectors. Would the DM368 be a good fit for this, or which processor would be better? Thanks

  • The DM368 can output 1080i60 or 720p60 over its component analog output (separate RCA connectors), so the DM368 can be used for this.

    ~Kedar

  • Do I need an additional NTSC video encoder for this? Looking at the reference design, it seems that TV_Out goes directly to the RCA connector, so no external NTSC/PAL encoder is needed. Is this also true for the component video outputs?

  • Unfortunately, you will need to add an FPGA and 4 discrete NTSC encoders.

    You might be able to find a vendor who provides this in a single device, but I am not aware of any.

    The FPGA would be necessary to split the composited 4xD1 image back into 4 individual streams for re-encoding, in a similar manner to your front-end FPGA but in reverse.

    BR,

    Steve

  • So what you are suggesting is using an FPGA to split the one data stream into 4 separate D1 streams and then feeding them to separate NTSC encoders?

    In that case, a simple CPLD would probably do it, right?

  • Sorry for cutting into your discussion, but it touches on a topic that interests me as well:

    As far as compression goes, is the DM368 capable of doing H.264 on 4 separate D1 streams at 30 fps each? Pixel-wise, a 1080p frame (1920x1080, about 2.07 Mpixels) is more than 4 times a D1 frame (720x480, about 0.35 Mpixels), so 4xD1 at 30 fps is actually less pixel throughput than 1080p30, but I'm not sure whether the encoder can be scaled this way. Has anyone tried it?

    Thanks!

  • It depends on how you composite the images together for sending out of the processor.

    The FPGA will need to ensure that it presents the video streams to each of the video encoders in a timely manner.

    If the video image is interleaved in the processor's frame buffer such that the FPGA/CPLD does not need to do any substantial buffering, then you might get away with a CPLD. On the other hand, if your frame buffer image is basically one image in each quadrant, then the FPGA will need substantial buffering capability in order to store the image data until it is needed by the video encoder.

    Basically, if you are careful with how you combine the images in the frame buffer, you might get away with a CPLD.
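
    To put it concretely, here is a minimal C sketch of what I mean by an interleaved frame buffer (the function name and 16-bit pixel format are just assumptions for illustration, nothing DM368-specific):

        #include <stddef.h>
        #include <stdint.h>

        #define D1_WIDTH  720
        #define D1_HEIGHT 480
        #define NUM_CAMS  4

        /*
         * Interleave four D1 frames pixel by pixel into one buffer:
         * C1 C2 C3 C4 C1 C2 C3 C4 ...
         * With this layout the downstream CPLD only ever needs to
         * latch one pixel per channel; with a quadrant layout it
         * would need line or frame buffering per channel instead.
         */
        void interleave_pixels(const uint16_t *cam[NUM_CAMS], uint16_t *out)
        {
            size_t pixels = (size_t)D1_WIDTH * D1_HEIGHT;

            for (size_t p = 0; p < pixels; p++)
                for (int c = 0; c < NUM_CAMS; c++)
                    *out++ = cam[c][p];
        }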

    BR,

    Steve

  • Alexander,

    I suggest starting a new thread for this since it is CODEC related and not hardware related.

    Not to push you out, but I think you might get a quicker/more accurate response with your own thread.

    BR,

    Steve

  • Data are output interleaved block by block, e.g. Ch1-Ch2-Ch3-Ch4-Ch1-Ch2-Ch3-Ch4-Ch1..., at 30 MHz per channel and 120 MHz total for all channels. Do you think I can use the 16-bit parallel AFE interface to take the data in directly that way, without a glue-logic FPGA?

    After processing each D1 image frame by frame, we then need to send the 4 separate D1 images out locally. Since the DM368 only has one 16-bit digital output bus, we have to interleave the 4 D1 images again and output the data to an FPGA/CPLD that splits it back into 4 D1 images feeding the NTSC video encoders. So it seems we cannot avoid the FPGA or CPLD on the output side, but we may be able to on the input side, if the AFE can support a 120 MHz input. Is this possible? Thanks.
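
    To make sure we are describing the same operation, here is a rough C model of what I expect the splitting CPLD to do on each pixel clock (the names are made up, and video timing/sync is ignored):

        #include <stdint.h>

        #define NUM_CAMS 4

        /*
         * One pixel-clock tick of the output-side demux: a free-running
         * 2-bit counter steers each 16-bit word from the DM368 bus to
         * one of the four NTSC encoder inputs. No substantial buffering
         * is needed as long as the stream stays pixel-interleaved.
         */
        void demux_tick(uint16_t bus_in, uint16_t enc_in[NUM_CAMS])
        {
            static unsigned ch = 0;           /* 2-bit channel counter        */

            enc_in[ch] = bus_in;              /* latch pixel for this encoder */
            ch = (ch + 1) & (NUM_CAMS - 1);   /* advance to next channel      */
        }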

     

  • You are likely going to have an almost identical issue on the input, compounded by the fact that the 4 input cameras will probably not be phase aligned or clock aligned, meaning they will all be asynchronous to each other.

    If you truly have a sensor array which can output C1-C2-C3-C4, then it might be possible.

    BR,

    Steve

  • You are saying that it is possible to get data in via the AFE without an FPGA if my input data are truly aligned. If so, I guess it is worth a try: not only does it save a component, but more importantly it saves time. Thanks. Jim

  • Hi Steve,

    If I want to try the idea of using the AFE at 120 MHz to get the data in, what should the master clock of the DM368 be?

    Thanks,

    Jim

  • Jim,

    What are the clock sources for your AFE? What AFE are you actually using?

    Basically, if you have a camera sensor that can receive a clock, then the DM will need to source the clock. If the input is from a video decoder, video ADC, etc., then the clock will need to come from that device.

    Can you put a quick block diagram together to show me exactly what you are trying to do please?

    BR,

    Steve

  • Steve, above is the diagram I am using.

  • Does each CAM provide its own clock or does it expect to receive the clock from the processor?

    Is the MUX intended to merge all 4 cameras into a single video stream or simply to select one camera at a time?

    BR,

    Steve

  • The CAMs expect an external clock. The MUX will form a single c1-c2-c3-c4-c1-c2-c3-c4-... stream. Jim

  • Jim,

    At what level is the multiplexing done? Pixel, pixel-pair, line, frame, other?

    The method most likely to work would be at the frame level, building a super-frame composed of 2x2 frame images. It would require the AFE to be able to handle the total resolution, and there could be camera-boundary issues with filtering, but those might be solvable in an FPGA. What I am picturing is not at all a trivial multiplexer, but it could work; see the sketch below.
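
    As a rough sketch of the super-frame build I am picturing (buffer sizes and names are assumptions for illustration only), it amounts to row-wise copies into the four quadrants:

        #include <stdint.h>
        #include <string.h>

        #define D1_W 720
        #define D1_H 480
        #define SF_W (2 * D1_W)   /* 1440-pixel-wide super-frame */
        #define SF_H (2 * D1_H)   /* 960 lines                   */

        /*
         * Place four D1 frames into the quadrants of one 2x2 super-frame:
         * cam0 top-left, cam1 top-right, cam2 bottom-left, cam3 bottom-right.
         */
        void build_superframe(const uint16_t *cam[4], uint16_t *sf)
        {
            for (int q = 0; q < 4; q++) {
                int x0 = (q & 1) ? D1_W : 0;   /* quadrant column offset */
                int y0 = (q & 2) ? D1_H : 0;   /* quadrant row offset    */

                for (int row = 0; row < D1_H; row++)
                    memcpy(&sf[(size_t)(y0 + row) * SF_W + x0],
                           &cam[q][(size_t)row * D1_W],
                           D1_W * sizeof(uint16_t));
            }
        }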

    I am a bit confused about what you want on the output, too. Are you looking for analog component video to 4 TVs, or (analog) composite video to 4 TVs? Do you want to demux 4 digital channels from the DSP to go to 4 encoders, or a single interleaved digital stream to a demuxing encoder?

    Sorry for the questions. It looks like it has been a while since your last post, so maybe you have closed this out. Please let us know either way.

    Regards,
    RandyP