
Disabling interrupts in your video system and how it can affect your video drivers (DM6437)

It seems that if you disable interrupts for too long on the processor that is handling your video drivers, it will cause problems with the video output, presumably because the drivers will not know when a frame has completed. Initially you might think that you are in the clear as long as you disable interrupts for less time than it takes a frame to complete (~1/30th of a second for NTSC). However, since the processor does not necessarily have control over when the frames arrive (i.e. the video is asynchronous to the processor), this is not the case. Because of this, interrupts would have to be disabled for less time than it takes the video ports to go from the end of one frame to the start of the next.

In the case of NTSC, if we assume there are 525 lines in each NTSC frame and that 480 of them are considered active video, then the time between frames should be the time for 525 - 480 = 45 lines. Since NTSC operates at 30 fps, there are 30 * 525 = 15750 lines drawn per second, so each line should take 1/15750th of a second, or about 63.49 us, which multiplied by 45 gives about .002857 seconds, or roughly 2.86 ms. So for interrupt disabling to potentially cause a problem with the video drivers, it would have to last longer than ~2.86 ms.
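
For reference, the arithmetic above can be written out as a small C program (a sketch of the calculation only; it uses the idealized 30 fps / 525-line numbers from above rather than the exact 29.97 fps NTSC rate):

    /* NTSC blanking-interval arithmetic from the paragraph above.
     * Assumes idealized 30 fps and 525 total / 480 active lines. */
    #include <stdio.h>

    int main(void)
    {
        const double fps          = 30.0;
        const double total_lines  = 525.0;
        const double active_lines = 480.0;

        double line_sec  = 1.0 / (fps * total_lines);             /* ~63.49 us */
        double blank_sec = (total_lines - active_lines) * line_sec;

        printf("blanking window = %.3f ms\n", blank_sec * 1e3);   /* ~2.857 ms */
        return 0;
    }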

However, if you think about the double buffering that should be in the driver, this should not really be a problem either, so we would be back to interrupts needing to be disabled for over 1/30th of a second to cause a frame drop. If this is the case, why would having interrupts disabled for relatively short periods cause frame drops?

  • Breaking this down ...

    Assume there is a need to capture video at 30 frames per second (i.e., we need to service a real-time interrupt from the VPFE) AND to optimally execute an independent "compute" thread that generally takes longer than 1/30 second.

    If optimizations are enabled, software pipelining will disable interrupts in the heavily optimized thread.   

    So, what happens when the interrupt is disabled because of software pipelining in the "compute" thread?   Is the processing of the interrupt delayed until the block of optimized code is complete, or is the interrupt lost? 

    If the interrupt is delayed, whoever is waiting for that frame could observe some timing jitter.  If lost, frames would be skipped.

    Since the interrupt timing is predictable, is there a way to synchronize the "compute" thread with the capture, such that the video interrupt is not dropped / lost? The concern here is that any timer would also be rendered inaccurate by the disabling of interrupts (software pipelining); or is there a timer that is immune to interrupt disabling?
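
    For context, the "compute" thread spends most of its time in tight loops like the following (an illustrative kernel only; the function and data are hypothetical), which the C6000 compiler will software-pipeline when optimization is enabled:

        /* Illustrative compute kernel; with optimization enabled the compiler
         * software-pipelines this loop, and interrupts can be held off for
         * the duration of the pipelined loop. Names here are hypothetical. */
        int dot_product(const short *restrict a, const short *restrict b, int n)
        {
            int i, sum = 0;
            for (i = 0; i < n; i++)
                sum += a[i] * b[i];
            return sum;
        }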


  • Bandeg said:
    So, what happens when the interrupt is disabled because of software pipelining in the "compute" thread?   Is the processing of the interrupt delayed until the block of optimized code is complete, or is the interrupt lost? 

    The interrupt is just delayed. However, if multiple interrupt events happen during the time that interrupts are disabled, any additional events after the first are ignored, i.e. the depth of the interrupt detection logic is only one event.

    Unfortunately, I do not believe there is a good way to synchronize the computation thread with the capture directly, though typically the compute thread will be driven by capture events (a new frame means a new round in the compute thread), so it will be inherently somewhat synchronized. The typical solution is to ensure that your interrupts are never delayed long enough to cause a real-time deadline, such as a frame capture, to be missed. The compiler can help you with this if you use the --interrupt_threshold=n option discussed in section 2.12 of SPRU187; this option tells the compiler to ensure that the generated code will not disable interrupts for more than n cycles, so you can set n to a value that allows your real-time deadlines to be met.
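
    To put a rough number on n: assuming a 600 MHz DM6437 core clock (check your device's actual rate), the ~2.86 ms NTSC blanking window computed above corresponds to roughly 1.7 million cycles, so any threshold comfortably below that (e.g. --interrupt_threshold=100000) should leave the vsync deadline safe. A sketch of the arithmetic:

        /* Cycle budget for disabled interrupts; assumes a 600 MHz core clock. */
        #include <stdio.h>

        int main(void)
        {
            const double cpu_hz    = 600e6;            /* assumed core clock   */
            const double blank_sec = 45.0 / 15750.0;   /* NTSC blanking window */

            /* ~1.71 million cycles; --interrupt_threshold=n must stay well
             * below this for the vsync deadline to be met. */
            printf("budget = %.0f cycles\n", cpu_hz * blank_sec);
            return 0;
        }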

  • Bandeg,

    Are you playing in the DSP/IOM driver space or the ARM/Linux driver space? Since this post was originally tagged for DM6437, the answer is from this point of view (minor caution).

  • Bernie Thompson said:
    It seems that if you disable interrupts for too long on the processor that is handling your video drivers, it will cause problems with the video output, presumably because the drivers will not know when a frame has completed.

    The explanations are a little above me for now. But briefly, can I ask: does this cause any corruption in the image output?

  • Elric,

    When it comes to digital video, there is a timing frame (normally provided by the master device to the slave device) that defines the video resolution; this timing frame is composed of hsync, vsync, blanking interval sizes, and so on. While all of this data is important for the proper display of video, the signal most often used to indicate when a frame has completed (either displayed or captured) is vsync. Therefore, you will find that video drivers depend on vsync interrupts for the proper updating of video frames.

    For example, let's say the DaVinci VPBE (DM6437, DM6446, DM355, ...) is the master (this is normally the case); the drivers program the DaVinci registers with the video frame timing information. This means that for a given video clock, the VPBE hardware will generate the proper video synchronization signals, along with a vsync interrupt that the VPBE software driver will use. In practice, you normally have multiple video buffers such that when one is being displayed, another one is being filled with data; therefore, when the VPBE driver receives a vsync interrupt, it sets the memory pointer to the next video buffer so that the hardware can read and display the next video frame. However, since the hardware runs the video timing in continuous mode (e.g. NTSC or 720x480 @ 30 fps), there is only a finite amount of time (blanking interval = 45 lines for NTSC, as Bernie computed above) in which the software has to update the memory pointer to the new buffer address. Once the video timing moves past the blanking intervals into the 'valid data' region, it will start reading data from whatever memory pointer is programmed into the hardware.
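
    In pseudo-C, the driver's vsync handling amounts to something like the following (a simplified sketch only; the names are made up and this is not the actual VPBE driver code):

        /* Simplified sketch of vsync-driven buffer flipping. All names are
         * illustrative, not real VPBE driver symbols. */
        #define NUM_BUFS 2
        static unsigned char *frame_buf[NUM_BUFS];   /* ping-pong buffers   */
        static volatile int   display_idx = 0;

        void vsync_isr(void)                         /* one event per frame */
        {
            display_idx = (display_idx + 1) % NUM_BUFS;
            /* This must complete inside the blanking interval (~2.86 ms for
             * NTSC); if it is delayed past that, the hardware starts reading
             * the next frame from the old (or half-updated) pointer. */
            vpbe_set_frame_base(frame_buf[display_idx]);   /* hypothetical call */
        }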

    If interrupts are disabled, we can get into a situation where the vsync event is not serviced by software when it is expected (the hardware video timing continues as normal). This could lead to displaying the same frame more than once (since software missed its opportunity to update to the new buffer pointer) or, worse, software handles the vsync in a delayed fashion, so the buffer pointer may be changed in the middle of a frame, causing noticeable video distortion.

    Let me know if this helps.

  • Juan Gonzales said:
    In practice, you normally have multiple video buffers such that when one is being displayed, another one is being filled with data; therefore, when the VPBE driver receives a vsync interrupt, it sets the memory pointer to the next video buffer so that the hardware can read and display the next video frame.

    I see the point, but, for example, I modified the VPBE example so that only one video buffer is enqueued to the driver. In this situation, does the driver have a mechanism to recognize that only one buffer is enqueued and that there is no such thing as a "next video buffer", or does it dequeue the video buffer and enqueue the same buffer again?

  • This is up to the user space application. When the user space application (e.g. the encodedecode demo) comes back from its FBIO_WAITFORVSYNC request, it can decide to send an FBIOPAN_DISPLAY request to update to a new buffer pointer, or choose not to make any call at all (the same buffer will be used for display).
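
    On the Linux side, that flow looks roughly like this (a minimal fbdev sketch; error handling omitted, and /dev/fb0 plus a double-height virtual framebuffer are assumed):

        /* Minimal fbdev flip loop: wait for vsync, then pan to the other
         * half of the virtual framebuffer. Error handling omitted. */
        #include <fcntl.h>
        #include <sys/ioctl.h>
        #include <linux/fb.h>

        int main(void)
        {
            int fd = open("/dev/fb0", O_RDWR);
            struct fb_var_screeninfo var;
            unsigned int dummy = 0;
            int i;

            ioctl(fd, FBIOGET_VSCREENINFO, &var);
            for (i = 0; i < 60; i++) {
                ioctl(fd, FBIO_WAITFORVSYNC, &dummy);     /* block until vsync */
                var.yoffset = (var.yoffset == 0) ? var.yres : 0;
                ioctl(fd, FBIOPAN_DISPLAY, &var);         /* omit this call to
                                                             reuse same buffer */
            }
            return 0;
        }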

  • Juan Gonzales said:

    Bandeg,

    Are you playing in the DSP/IOM driver space or the ARM/Linux driver space? Since this post was originally tagged for DM6437, the answer is from this point of view (minor caution).

    DSP/IOM. We are using DSP/BIOS on the DM6437, not Linux.

  • Please disregard my previous post regarding FBIO-type requests; this was from a Linux perspective.

    As I have not played with the DM6437 DVSDK yet, I will leave this to someone more qualified to provide an answer.