Hello,
I am developing a real-time application using the pico projector development kit (v2) and was hoping to get some information on the timing of its input processing. Specifically, I need to know the delay between the start of transmission of an external pattern over the DVI interface and the point at which the mirrors enter the first state of the prescribed pattern sequence.
I have reviewed the datasheet for the DLPC100, the programmer's guide for the pico projector kit, and the application note on using the kit for structured light applications. So I understand the various display modes that are available and the temporal dithering used to produce grayscale values, but I can't find much information on the processing pipeline within the DLPC100.
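For context, the picture I have in mind for the temporal dithering is roughly the bit-plane decomposition below. This is just my own illustration of a generic binary-PWM scheme (the 60 Hz frame rate and the power-of-two weighting are my assumptions), not something taken from the DLPC100 documentation:

```python
import numpy as np

# My own sketch of generic binary-PWM temporal dithering: an 8-bit grayscale
# frame is split into 8 binary bit-planes, and bit-plane k is displayed for a
# time proportional to 2**k within the frame period. Assumed 60 Hz frame rate;
# this is NOT claimed to be the DLPC100's actual sequence.

FRAME_PERIOD_MS = 1000.0 / 60.0                                  # assumption
frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)    # dummy image

bit_planes = [((frame >> k) & 1).astype(np.uint8) for k in range(8)]
display_times_ms = [FRAME_PERIOD_MS * (2 ** k) / 255.0 for k in range(8)]

for k, t in enumerate(display_times_ms):
    print(f"bit-plane {k}: shown for ~{t:.3f} ms")
```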
From the datasheet, it looks like the frame is buffered to the SDRAM, and I am guessing the controller uses one of the following implementations (a rough estimate of each delay follows the list):
1. The DMD is programmed with the first pattern state as the input data rolls in; the input data is simultaneously buffered to SDRAM for use in determining subsequent states; the pattern sequence initiates after the DVI transmission completes -> delay roughly equal to the DVI transmission period
2. Input data is buffered to SDRAM, then the DMD is programmed, then the pattern sequence initiates -> delay roughly equal to the DVI transmission period + the DMD programming interval
3. Input data is buffered to SDRAM, the DMD is programmed, and the pattern sequence initiates at the next vsync -> delay roughly equal to the DVI transmission period + 1/framerate
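To put rough numbers on these, here is the back-of-the-envelope calculation I am working from, assuming a 60 Hz DVI source (~16.7 ms frame period) and a made-up 0.2 ms DMD load time; neither figure is from the datasheet, they are just placeholders:

```python
# Rough latency estimates for my three hypothesized pipelines.
# Both constants below are illustrative assumptions, NOT DLPC100 values.

DVI_FRAME_PERIOD_MS = 1000.0 / 60.0   # assumed 60 Hz DVI source
DMD_LOAD_MS = 0.2                     # placeholder DMD programming interval
VSYNC_WAIT_MS = DVI_FRAME_PERIOD_MS   # worst-case wait for the next vsync

delays = {
    "1: program DMD while data streams in":   DVI_FRAME_PERIOD_MS,
    "2: buffer, then program DMD":            DVI_FRAME_PERIOD_MS + DMD_LOAD_MS,
    "3: buffer, program, wait for next vsync": DVI_FRAME_PERIOD_MS + VSYNC_WAIT_MS,
}

for case, delay_ms in delays.items():
    print(f"{case}: ~{delay_ms:.1f} ms")
```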
Is the actual pipeline similar to any of these? I plan to use a "structured light illumination mode" (per pg. 5 of the app note); does that affect the pipeline or the delay?
Any help is most appreciated,
Jim