
Best solution for stereoscopic vision

Hello,

I am quite lost regarding my project, so I thought I would ask the people on this forum for some much-needed help.

Here is what I need to do:

There is a mobile robot platform. The on-board system needs to read all sensor inputs as well as two HD cameras (stereoscopic fisheye vision) and send all of the information in real time back to a network of computers for processing. Preferably the video should be 1080p at 60 fps. I have found three options that (to me) seem to make sense. This project is for academic purposes, so I don't have access to high-volume production.
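For scale, here is a quick back-of-the-envelope calculation (the 4:2:2 YUV raw format and the 20 Mbps H.264 figure are my assumptions, not measured numbers) showing why the streams have to be compressed on board before anything goes over Wi-Fi:

```c
#include <stdio.h>

int main(void)
{
    /* Assumed parameters for one 1080p60 camera: 4:2:2 YUV, 2 bytes/pixel. */
    const double width = 1920, height = 1080, fps = 60, bytes_per_px = 2;

    double raw_bps  = width * height * bytes_per_px * 8 * fps; /* uncompressed */
    double h264_bps = 20e6; /* typical high-quality H.264 rate (an assumption) */

    printf("Raw, per camera:    %.2f Gbps\n", raw_bps / 1e9);      /* ~1.99 Gbps */
    printf("Raw, stereo pair:   %.2f Gbps\n", 2 * raw_bps / 1e9);  /* ~3.98 Gbps */
    printf("H.264, stereo pair: %.0f Mbps\n", 2 * h264_bps / 1e6); /* ~40 Mbps   */
    return 0;
}
```

So a raw stereo pair is roughly 4 Gbps, which is why all three options below hinge on encoding the video on the processor before it reaches the Wi-Fi link.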

1) Use an OMAP3 processor to stream the video from two USB webcams via Wi-Fi.

  • Is the OMAP3 platform fast enough to take care of sending this much data (~100 Mbps)?

2) Use "leopard imaging" camera modules (1080p60). https://www.leopardimaging.com/LI-CAM-IMX136-1.html.

  • With what processor?
  • With what processor? I found information on the TI wiki leading to this source, but I cannot find any information or datasheet for the product. What is the format of the transferred image data (Bayer RAW RGB, YCbCr, etc.)? Could a single DM8168 take care of receiving the data from these modules and encoding the two streams? (See the V4L2 sketch after this list.)

3) Use two DM8147 processors to read the data from two CMOS sensors in Bayer RAW format, encode the video streams and send them via Wi-Fi.

  • If only one Wi-Fi connection is needed, what would be the optimal way to send all the video data from one processor to the other before transmitting it over Wi-Fi? McBSP?
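Regarding the transfer-format question under option 2 (referenced above): if the module ships with a standard V4L2 capture driver under Linux, the available formats can simply be enumerated at runtime instead of guessed from a datasheet. A minimal sketch, assuming a /dev/video0 node and a 10-bit Bayer fourcc; the actual device node and formats depend entirely on the driver:

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
    /* The device node is an assumption; check what the driver registers. */
    int fd = open("/dev/video0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    /* Ask the driver which transfer formats the sensor actually offers
       (Bayer RAW variants, YUYV, UYVY, NV12, ...). */
    struct v4l2_fmtdesc desc;
    memset(&desc, 0, sizeof(desc));
    desc.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    while (ioctl(fd, VIDIOC_ENUM_FMT, &desc) == 0) {
        printf("format %u: %s\n", desc.index, (char *)desc.description);
        desc.index++;
    }

    /* Request 1080p in 10-bit Bayer (an assumed fourcc); the driver
       will adjust the fields to the closest thing it supports. */
    struct v4l2_format fmt;
    memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width  = 1920;
    fmt.fmt.pix.height = 1080;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_SGRBG10;
    if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0)
        perror("VIDIOC_S_FMT");

    close(fd);
    return 0;
}
```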

Final question, for all three options: how would I make sure that both cameras take images at the same time when recording video?

I would like your opinions on all of this. For my needs, which option would be best? Are the options I stated any good? Is there a better way to do this? Any input is appreciated!

Thank you,

Sam-Nicolai Johnston

  • I suggest you go for option 3 for quick integration.

    You can stream from two cameras at 60 fps over Wi-Fi at a high bitrate (15-20 Mbps).

    Regarding capture synchronization, the maximum delay you might encounter at 60 fps is about 8 ms of phase shift (half of the ~16.7 ms frame period).

    If you want to avoid this phase shift, you have two options.

    1) Use a feedback loop to close on the phase. For this you need to embed a real-time counter in the video data and let the central server decode it and communicate the phase shift back to the camera, which can then apply the correction at capture time. H.264 metadata packets can carry the counter (see the sketch after these two options).

    2) Use a single DM8147 and connect two cameras to it. Here you need to capture the data at 30 fps from each camera, pass the data to the codec as two different streams and encode them. With this you can control the phase shift very accurately, but you only get 30 fps.
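    To make option 1 concrete (as referenced above), here is a minimal sketch of embedding a capture counter in an H.264 SEI user_data_unregistered message. The UUID is made up, and emulation-prevention bytes are left out for brevity; a real bitstream writer must insert them:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Build an H.264 SEI NAL unit (payload type 5, user_data_unregistered)
   carrying a 64-bit capture counter. */
static size_t make_sei_counter(uint8_t *out, uint64_t counter)
{
    /* Made-up application UUID; pick your own for a real system. */
    static const uint8_t uuid[16] = {
        0x54, 0x49, 0x53, 0x54, 0x45, 0x52, 0x45, 0x4f,
        0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07
    };
    size_t n = 0;

    /* Annex-B start code + NAL header (nal_unit_type 6 = SEI). */
    out[n++] = 0x00; out[n++] = 0x00; out[n++] = 0x00; out[n++] = 0x01;
    out[n++] = 0x06;

    out[n++] = 5;                 /* payload type: user_data_unregistered */
    out[n++] = 16 + 8;            /* payload size: UUID + 8-byte counter  */
    memcpy(out + n, uuid, 16); n += 16;
    for (int i = 7; i >= 0; i--)  /* counter, big-endian */
        out[n++] = (uint8_t)(counter >> (8 * i));

    out[n++] = 0x80;              /* RBSP trailing bits */
    return n;
}

int main(void)
{
    uint8_t buf[64];
    size_t len = make_sei_counter(buf, 123456);

    /* Prepend this NAL to each encoded frame; the server parses it from
       both streams, compares the counters for matching frames, and sends
       the measured phase error back to the camera. */
    for (size_t i = 0; i < len; i++)
        printf("%02x ", buf[i]);
    printf("\n");
    return 0;
}
```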

    Regards,

    Uday 

  • If you need to synchronize the two images better than +/- 8 ms, consider using the "slave mode" provided by most sensors. You could trigger the frame capture at whatever frame rate you wish and gain control over the frame timing between the sensors. If you want to go over the top, you could overlay a control system on one sensor to vary the inter-sensor delay and improve the sync automatically based on an external reference visible to both sensors.
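    As a rough illustration of the slave-mode idea, here is a minimal sketch of driving a shared trigger line from userspace via the legacy sysfs GPIO interface. The GPIO number is an assumption, and in practice you would want a hardware timer/PWM output for jitter-free 60 Hz timing:

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Pulse one GPIO wired to the trigger pin of BOTH sensors (slave mode),
   so each frame starts on the same edge. GPIO 48 and the sysfs path are
   assumptions; adapt them to your board. */
int main(void)
{
    int fd = open("/sys/class/gpio/gpio48/value", O_WRONLY);
    if (fd < 0) { perror("open gpio"); return 1; }

    for (;;) {
        if (pwrite(fd, "1", 1, 0) != 1)  /* rising edge: both sensors start a frame */
            break;
        usleep(100);                     /* short trigger pulse */
        if (pwrite(fd, "0", 1, 0) != 1)
            break;
        usleep(16567);                   /* ~16.67 ms period -> ~60 Hz trigger */
    }

    close(fd);
    return 0;
}
```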

    NB: doing this with an 814x is going to be hard and frustrating. There are many posts on this forum regarding the difficulties that others are having. You might want to look around before you choose a solution.

    Also, you're not going to get access to any meaningful documentation without an NDA via your FAE.

    Good luck!

  • Thank you for the information!

    I believe syncing them with simple slave mode from the sensor will be sufficient. I am already confused enough as it is; I shouldn't add more stuff to my to-do list.

    As for your last sentence, what meaningful documentation are you talking about? Reference designs or datasheets? What does FAE stand for?

    I don't mind signing an NDA, but is it possible to design this project using only the datasheets/technical references?

    Sam 

  • Sam,

    An FAE is a Field Applications Engineer. They will be your sales and tech support contact with TI. You won't be able to speak to TI directly, and since your FAE will not be familiar with the 814x, this forum will be your only resource.

    Using the base PSP/board support package for the TI814x you will (probably) be able to capture video, but if you're not using a sensor supported by TI, you're going to have to go to the next level. This means spending more $$$ to buy Appro's IPNC dev kit. This will give you access to the source code required to create your own sensor driver. Of course, with great power comes great responsibility. You'll now find yourself alone in the wilderness with piles of code that almost does what you want but isn't documented and doesn't actually work. There are so many hidden gotchas that I've lost count. I've been working with the IPNC for nearly a year now and am only now at the point of having a fully functional system.

    And I'm not just talking about the video processing system. Get ready to become a Linux expert, DDR3 expert, file system expert, NAND expert, etc.

    Think long and hard before you select your solution.