BMP Upload for EVM 6500

Other Parts Discussed in Thread: DLPC900

Hello all,


I am trying to understand the mechanism of the BMP upload process for pattern on-the-fly mode with the EVM 6500. In the GUI, it seems that the time needed for uploading is directly related to the selected bit depth: a smaller bit depth results in a shorter upload time, and vice versa. Is this because the compression differs between bit depths, or because the image is packed differently for different bit depths?

In the programmer's guide, it seems that uploads can only be performed with 24-bit packed images. If I only need 1-bit patterns, is there a way to upload images at 1-bit depth as well, to save upload time?

Regards,

Ding

  • Hi Ding,

    I regret the delay in replying to this. When you say the "upload time," are you referring to the time it takes to update the firmware for pre-stored patterns (in the Firmware tab of the GUI), the amount of time to Update LUT (look-up table), or the exposure time (which is not related to upload time, but does automatically change based on the bit depth selected)?

    The upload time is generally related to the image size after compression, if it is being compressed. A 24-bit image has more information to compress, so it will generally be larger than a lower bit-depth image with the same level of detail. When you update the firmware with these images included in the build, you will see the size of the firmware file change accordingly.

    When images are uploaded, they are grouped together to create 24-bit images for easy storage. For example, if you are uploading 24 1-bit images, they will be stored as one 24-bit image. This "packaging" of the images is done automatically by the GUI.
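
    To picture that packing, here is a minimal NumPy sketch of the idea (illustrative only; the actual plane ordering and file format are defined in the DLPC900 programmer's guide):

    ```python
    import numpy as np

    def pack_24_binary_patterns(patterns):
        # Pack up to 24 binary (0/1) arrays of shape (H, W) into one
        # 24-bit composite: pattern i occupies bit plane i.
        h, w = patterns[0].shape
        composite = np.zeros((h, w), dtype=np.uint32)
        for i, p in enumerate(patterns):
            composite |= (p.astype(np.uint32) & 1) << i
        # Split the 24-bit value into three 8-bit channels (BMP-style).
        channels = [(composite >> (8 * c)) & 0xFF for c in range(3)]
        return np.stack(channels, axis=-1).astype(np.uint8)

    # 24 one-bit patterns at the DLP6500's 1920x1080 resolution
    patterns = [np.random.randint(0, 2, (1080, 1920), dtype=np.uint8)
                for _ in range(24)]
    image24 = pack_24_binary_patterns(patterns)  # shape (1080, 1920, 3)
    ```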

    Are you looking to implement your own program as a solution separate from the GUI?

    Best regards,
    Trevor
  • Hi Trevor,


    Thank you very much for your reply! "Upload time" refers to the time needed to upload compressed BMPs to the controller in pattern on-the-fly mode. I am indeed developing my own program separate from the GUI.

    I understand that 24 1-bit images can be packed into one 24-bit image for uploading. My question is whether it is possible to upload a single 1-bit image without wasting time and space on the packing process. In my application, I would like to send 1-bit patterns to the controller DYNAMICALLY. Therefore, if I have to send a 24-bit packed image, only one bit plane is used as the pattern, while the other 23 bit planes are simply a waste of space and transfer time for me. I would like to know if it is possible for the controller to accept a 1-bit image instead of a 24-bit image.

    Regards,

    Ding

  • Hi Ding,

    The GUI follows the same image upload procedure as your own code would, meaning that when you add a 1-bit image in the GUI, it is packed into a 24-bit composite image in which the rest of the planes are left unfilled. However, if you add an 8-bit image in the GUI, even if it is only black and white with no gray, it will be interpreted as an 8-bit image and will fill 8 bit planes with image data. The same is true when loading images outside the GUI.

    When the image is properly compressed, blank bits should make little difference in the file size. There may be situations, such as a very complex 1-bit pattern, where that is not the case. If you tell me more about your patterns and application, I can help you gauge how effective the compression will be.
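
    To see why blank planes cost little, note that a run-length style compressor essentially pays per value transition, and padding a pattern with zero bits adds no transitions. A toy illustration (not the DLPC900's actual compression format):

    ```python
    # Same pattern stored as 1-bit values and as 24-bit pixels with the
    # pattern in bit plane 5 (the other 23 planes blank): the number of
    # value transitions (what a run-length compressor pays for) is equal.
    row_1bit = [0] * 100 + [1] * 50 + [0] * 100
    row_24bit = [v << 5 for v in row_1bit]

    def transitions(row):
        return sum(a != b for a, b in zip(row, row[1:]))

    print(transitions(row_1bit), transitions(row_24bit))  # 2 2
    ```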

    Best regards,
    Trevor
  • Hello Trevor,

    Thank you for the explanation about image packaging! At the moment it is hard to tell which type of 1-bit pattern we will use eventually. In general, though, we would like the ability to change the pattern dynamically at a very fast rate (the next pattern calculated and loaded based on the measurement result for the previous pattern).

    I now understand that the board takes only 24-bit packed images as input. Is this due to the hardware architecture or a limitation of the firmware? Meanwhile, is there a general guideline on which compression algorithm to choose based on the complexity of the pattern (for both 1-bit and 8-bit patterns)?

    Best,

    Ding

  • Hi Ding,

    That is a firmware limitation for the DLPC900.

    Very complex images sometimes do not compress well. For example, a "checkerboard" pattern that alternates white, black, white, black... in every direction, with each square one pixel wide, would represent the worst case for the compression algorithms; the compressed file would actually be larger than the original image. Images must be very complex for this to be the case.
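
    A toy run-length encoder makes that worst case concrete (again, illustrative only, not the DLPC900's actual compression format):

    ```python
    def rle(pixels):
        # Toy run-length encoder: list of [count, value] pairs,
        # with the run count capped at 255.
        runs = []
        for v in pixels:
            if runs and runs[-1][1] == v and runs[-1][0] < 255:
                runs[-1][0] += 1
            else:
                runs.append([1, v])
        return runs

    checker = [i % 2 for i in range(1920)]  # 1-pixel checkerboard row: worst case
    blank = [0] * 1920                      # uniform row: best case
    print(len(rle(checker)))  # 1920 runs: the "compressed" form is bigger than the input
    print(len(rle(blank)))    # 8 runs: compresses very well
    ```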

    Another operating mode that you might consider (as it could eliminate some of the loading time) is Video Pattern Mode. The image data could be generated and then streamed over parallel RGB (HDMI or DisplayPort). This is generally the recommended mode if your intended use is streaming dynamic patterns. If you share more about your application, I can let you know how this could benefit you (if it works for your application).

    Best regards,
    Trevor
  • Hi Trevor,


    Thank you for the explanation. You mentioned that Video Pattern Mode is recommended for streaming dynamic patterns, but I don't understand exactly how that works. Since the video is streamed at a constant frame rate, if I set different exposures for two patterns, how do I know whether they come from two consecutive frames? Additionally, if I stream 24-bit frames and select all 24 bits for a pattern, are all bit planes shown in order (G0-G7, R0-R7, B0-B7) or randomly (I remember reading somewhere that the bit planes are mixed randomly for better effect)? Lastly, I still have to update the LUT every now and then to make it truly dynamic. If I update the LUT for each pattern, is it still more efficient to use Video Pattern Mode?

    Best regards,

    Ding

  • Hi Ding,

    Let me know if I missed any of these questions.

    1) ...if I set different exposures for two patterns, how do I know whether they come from two consecutive frames?

    The timing of the patterns can be controlled by sending data in sync with the VSYNC signal. The data sent with each VSYNC will be displayed following the pattern sequence settings that you configured, including different exposure values. While exposure and other pattern settings can differ from frame to frame within the pattern sequence, the same pattern sequence is repeated in Video Pattern Mode for the duration of the video pattern (more on this in question 3).

    2) Additionally, if I stream 24-bit frames and select all 24 bits for a pattern, are all bit planes shown in order (G0-G7, R0-R7, B0-B7) or randomly...?

    You can select the bit planes to display in exactly the way you want them displayed. For example, I took a screenshot of my GUI setting up a pattern where I display G0, G0 again, all 8 bits of R, and then B0; you can set it up however you want. You may be remembering two things. The first is that Video Mode (not Video Pattern Mode) has spatial and temporal dithering algorithms because it is meant for display applications; I would not recommend Video Mode for your application. The second is that our 8-bit pattern sequence was optimized for speed, so the bits are not shown in 0-7 order, but each is exposed for the proper amount of time within the pattern. That is true in any mode.
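
    As an illustration of what that frame composition could look like on the source side, here is a sketch (the channel layout and the specific planes used are just assumptions for illustration; the pattern LUT determines what is actually displayed, and in what order):

    ```python
    import numpy as np

    H, W = 1080, 1920
    pat_g0 = np.random.randint(0, 2, (H, W), dtype=np.uint8)    # 1-bit pattern for G bit 0
    pat_b0 = np.random.randint(0, 2, (H, W), dtype=np.uint8)    # 1-bit pattern for B bit 0
    pat_r = np.random.randint(0, 256, (H, W), dtype=np.uint8)   # 8-bit pattern filling R

    frame = np.zeros((H, W, 3), dtype=np.uint8)  # R, G, B channels
    frame[..., 0] = pat_r    # all 8 R bit planes carry the 8-bit pattern
    frame[..., 1] = pat_g0   # only G bit plane 0 carries data
    frame[..., 2] = pat_b0   # only B bit plane 0 carries data
    # The pattern LUT then selects which planes display, in what order,
    # and with what exposure (e.g. G0, G0 again, R0-R7, then B0).
    ```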

    3) Lastly, I still have to update the LUT every now and then to make it truly dynamic. If I update the LUT for each pattern, is it still more efficient to use Video Pattern Mode?

    If you did have to update the LUT for every pattern, it might be more efficient to consider Pattern On-the-Fly Mode. But it might be possible to use Video Pattern Mode and update the video data you are sending in a way that lets you skip updating the LUT. For example, for a shorter or longer exposure, you could send and display more or fewer black patterns.

    Let me know what you think!

    Best regards,

    Trevor

  • Hello Trevor,


    Thank you very much for your detailed explanation!


    Regarding question #1, what I wanted to know is what happens if the VSYNC signal is not synced with the pattern exposure settings. For example, if the second VSYNC arrives before or after the end of the first pattern's exposure, what happens? If it is too early, does the VSYNC "overwrite" the exposure setting of the first pattern? And if it is too late, will there be a gap between the first pattern and the second pattern (since the second pattern has not been transferred by the end of the first exposure)? I'm not very familiar with video streaming, so I probably have a misunderstanding somewhere.

    The reason I asked question #2 is that I want to know the pattern rate limit of Video Pattern Mode. According to the manual, the maximum external pattern rate for 1-bit is listed as 2880 Hz, which I believe comes from 120 Hz (the maximum frame rate with DisplayPort) × 24 bits. However, if I understand correctly, there is no way of exploiting the 24 bits separately from one image unless the image is sent over several VSYNCs repeatedly. And I guess the maximum frame rate of 120 Hz is a limit on the VSYNC signal? If that is the case and I want to use only 1-bit patterns, does it mean I could only achieve a 120 Hz pattern rate with Video Pattern Mode (although for 24-bit frames the DMD is indeed operating at 2880 Hz for 1-bit planes)? I am still thinking in terms of sending images as in pattern on-the-fly mode, but I guess the data transfer over the RGB video interface is very different?

    Regarding question #3, the suggestion of controlling exposure with black patterns is very good. But if I have no idea in advance which exposures I want, I guess I have to build a LUT with all exposures set to the minimum value, e.g. 105 µs for 1-bit. Then my temporal resolution is limited to multiples of 105 µs. In pattern on-the-fly mode, however, if I understand correctly, I am able to set exposure values such as 105 µs for the first pattern, 120 µs for the second pattern, and so on. Am I correct on this point?

    Best regards,

    Ding

  • Hi Ding,

    If VSYNC does not arrive when expected, the DLPC900 will lose its "Locked to External Source" status. The system will revert to a blank curtain in Video Mode. In order to get back to Video Pattern Mode, the video source would have to become "locked" again.

    Your math is correct: the 2880 Hz 1-bit pattern rate for Video Pattern Mode comes from the 120 Hz VSYNC and 24 1-bit patterns per frame. However, one correction: you can exploit the 24 bits in each frame separately from one 24-bit image. All 24 planes can be unique, and they can be displayed as 24 unique 1-bit patterns. In the GUI example I described earlier, I selected a few individual bits, each with its own display settings. Also note that while I chose the 0th bit for each of my 1-bit patterns, I could have selected any of the 24 bits for each of them.
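
    Spelling out the rate math under those assumptions (120 Hz source, 24 unique 1-bit planes per frame):

    ```python
    vsync_hz = 120          # maximum source frame rate (DisplayPort)
    bits_per_frame = 24     # 24 unique 1-bit planes per 24-bit frame
    pattern_rate = vsync_hz * bits_per_frame
    print(pattern_rate)           # 2880 one-bit patterns per second
    print(1e6 / pattern_rate)     # ~347 us average slot per pattern within a frame
    ```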

    Similar to Pattern On-the-Fly Mode, in Video Pattern Mode all of the bits can be set to different exposure values, and in both modes, to update those values you would have to stop the pattern, update the look-up table (LUT), and restart the pattern. The difference is that with Video Pattern Mode, you would have to make sure that you remain locked to the video signal. To expand on my earlier idea: you could set 24 different exposure values and send an image with "black" data in the planes whose exposure values you don't want to use, and normal image data in the planes that have the exposure values you do want to use.
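
    Here is a minimal sketch of that black-data idea, assuming a fixed LUT whose 24 one-bit entries read bit planes 0 through 23, each with its own exposure value (the helper and the plane-to-channel mapping are hypothetical):

    ```python
    import numpy as np

    H, W = 1080, 1920

    def frame_for_exposure_slot(pattern, slot):
        # Put the real pattern only in bit plane `slot` (whose LUT entry
        # has the exposure you want) and leave the other 23 planes black,
        # so their LUT entries display nothing visible.
        px = (pattern.astype(np.uint32) & 1) << slot
        channels = [(px >> (8 * c)) & 0xFF for c in range(3)]
        return np.stack(channels, axis=-1).astype(np.uint8)

    pattern = np.random.randint(0, 2, (H, W), dtype=np.uint8)
    frame = frame_for_exposure_slot(pattern, slot=5)  # uses LUT entry 5's exposure
    ```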

    It might help me suggest options and explain this if I knew a little more about your end application. What are you trying to achieve with your design?

    Best regards,
    Trevor
  • Hello Trevor,

    Thank you for the answer!

    I am using a second DMD, and I have noticed that the GUI will not work with two EVM 6500 boards simultaneously (I will write my own program anyway). My question is: what is the best way to synchronize two DMDs, say in Video Pattern Mode? To begin with, I guess I have to make sure that the two video output ports on the PC are synchronized? Do I need to establish a connection between the two evaluation boards so that the output trigger of one board serves as the input trigger of the other?

    Best regards,
    Ding