Question about New Point Cloud Generation SDK for DLP 3D Machine Vision!

Other Parts Discussed in Thread: DLP4500

It is great that TI has developed this whole application for users. I would like to try the SDK before I place an order for a new EVM kit. I have an EVM kit with a DLP3000 projector; can the new SDK work with the DLP3000? Are there any instructions on how to use the DLP3000 instead of the DLP4500? I would like to use my old DLP3000 and a USB camera from another brand with the new SDK. Any input is highly appreciated.

Thanks,

Hongsheng

  • Hello Hongsheng,

    Currently the reference design only supports the LightCrafter 4500 platform, but the software could be expanded to work with the original LightCrafter (DLP3000) if you're comfortable modifying the source code. Similarly, you can modify the source code to support a different camera.

    Are you using an industrial camera with triggering or a simple webcam? If you're using a camera that does not support triggering, the code complexity will unfortunately increase, because you'll need a way to identify the patterns and synchronize the captures in software.

    If you are using an industrial camera, which one are you planning to use?

    Best regards,

    Blair  

  • Blair,

    Thank you very much for your reply. I am using an industrial USB3 camera that supports external triggering. Do you use an external trigger to synchronize the pattern and the camera capture in your application or source code? I read part of the source code, and I think you are using a delay (a sleep command) to wait for the software algorithm to recognize a new pattern and then send a capture command to the camera. My understanding is that it should work as follows: a set of predefined patterns is projected in sequence at a predefined frequency --> every time a pattern is projected, the projector sends a TTL trigger to the camera --> the camera is in external-trigger mode and captures after the TTL trigger is received. This ensures the camera captures a different pattern each time, and it should increase the capture rate. Please let me know if that is not the case.

    The camera I plan to use is the STC-MBE132U3V from SenTech; I also have an IDS USB3 camera.

    http://www.sentechamerica.com/cameras-usb/STC-MBE132U3V.aspx

    One more question is about the calibration method: the current method is not practical for machine vision, especially with a very short working distance (less than 70mm). Do you happen to know of any other calibration method that works better at short working distances?

    Hongsheng

  • Hello Hongsheng,

    The reference design does use external triggering to ensure that each camera frame captures the correct pattern.

    The reason for the "search" you found in the captures stems from a timing issue I encountered. While developing the design, I found that if I started the camera, the sequence would start projecting while the first camera frame was being exposed; the issue was that I couldn't capture this first frame. So instead I start the camera, let it take a few pictures, and then start the sequence. This way there are some blank camera frames, but all of the patterns are definitely captured.
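The startup ordering above can be sketched in software terms. This is a hedged illustration in Python, not the SDK's actual code; the function name, frame values, and blank-frame marker are invented for the example:

```python
# Sketch of the startup ordering described above: the camera free-runs
# first, so the earliest frames are blank; the pattern sequence starts
# afterwards, and software discards the leading blanks to realign
# frames with patterns. All names here are illustrative, not SDK API.

BLANK = None  # stand-in for a frame exposed before the sequence started

def align_frames_to_patterns(frames, patterns):
    """Drop leading blank frames, then pair each remaining frame
    with the pattern that was projected when it was exposed."""
    # find the first frame that actually contains a projected pattern
    first = next(i for i, f in enumerate(frames) if f is not BLANK)
    usable = frames[first:first + len(patterns)]
    if len(usable) < len(patterns):
        raise RuntimeError("sequence ended before all patterns were captured")
    return list(zip(patterns, usable))

# camera started first -> two blank frames, then the pattern captures
patterns = ["P0", "P1", "P2", "P3"]
frames = [BLANK, BLANK, "cap0", "cap1", "cap2", "cap3"]

pairs = align_frames_to_patterns(frames, patterns)
```

The same idea applies in any language: because the camera starts first, the software only needs to find the first non-blank frame and pair the rest with the pattern sequence in order.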

    The reference design as released has been tested successfully up to 120Hz. The released settings run at 100Hz over a USB 3.0 port; getting the frame rate up to 120Hz requires some tweaking of the camera's strobe settings.

    Regardless, having the projector trigger the camera also works. However, you can run into cases where the camera exposure misses part of the pattern exposure because of the delay between the trigger and the capture. I found I was able to reach faster speeds when the camera triggered the projector.

    This E2E thread has some more information on the triggering topic: http://e2e.ti.com/support/dlp__mems_micro-electro-mechanical_systems/f/924/t/360874.aspx

    I actually have a color version of that Sentech camera! However, I have not set up the SDK to interface with it yet because I'm not familiar with Sentech's API. If you're interested, we can connect offline and create a Sentech camera module for the SDK. Send me a friend request and a private message if you're interested.

    Could you describe why the current calibration method would not work even with a smaller printed chessboard? Do you mean the target is 70mm away from the projector/camera, or is the scan envelope that small? If you give me more information about the physical setup of the scanner, I may be able to help with a different calibration method.
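On the smaller-chessboard idea: one practical first step is simply scaling the printed board so it fills the camera's field of view at the short working distance. A rough sizing calculation is sketched below; the 45-degree horizontal field of view, the 80% fill fraction, and the function name are assumed example values, so substitute your actual lens specs:

```python
import math

# Back-of-the-envelope chessboard sizing for a short working distance.
# Assumptions (not from the reference design): 45 deg horizontal FOV,
# board should fill ~80% of the view, 9 squares across the board.

def board_square_size_mm(working_distance_mm, hfov_deg, squares_across,
                         fill_fraction=0.8):
    """Largest square size such that the whole board fits in the view."""
    # width of the field of view at the working distance
    fov_width = 2 * working_distance_mm * math.tan(math.radians(hfov_deg) / 2)
    return fill_fraction * fov_width / squares_across

size = board_square_size_mm(70, 45, 9)
```

With these assumed numbers, squares of roughly 5mm would let a board nine squares wide fill about 80% of the view at 70mm, which is still large enough for corner detection with a reasonably high-resolution sensor.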

    Best regards,

    Blair