I have a problem: I have asked several times about sending the same framebuffer to both the LCD port and the PAL output, and I have received different answers.
In Figure 15-70 of spruf98b.pdf I see the Graphics Path. I'm sorry to ask again: can I select this graphics path (G) in both overlay managers?
According to Documentation/arm/OMAP/DSS, it seems to be possible on the OMAP3503 and, obviously, on the OMAP3530.
I'm really sorry but I can't understand why I get a different answer from TI.
Example: I write a graphics menu to a frame buffer and I'd like to see it at 720p on the LCD port and, at the same time, scaled down to PAL on the video port.
TI told me that (G) can go to only one Overlay Manager.
So I suppose that TI has this limitation with the official 2.6.22 kernel, but I think that if I start the project today with the OMAP3503 I won't have this problem, because I'll use at least linux-omap-2.6.28, or the mainline kernel if it is completed in about six months.
It is possible to have two independent displays from the OMAP3, one on the analog and one on the digital output. However, these require independent frame buffers of the appropriate size for each image; the scaling-down functionality is not in place in the display pipeline, at least not in the current software.

That being said, you should be able to get the output you want, but you would have to do extra work with the CPU to produce two frame buffers of the proper sizes to display on the analog and digital outputs.
There may also be additional driver limitations as you have mentioned, however the current driver package does allow you to switch the video and graphics pipelines to either output independently.
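As a rough sketch of that independent routing, here is what it might look like with the sysfs interface of the later linux-omap DSS driver (attribute names taken from Documentation/arm/OMAP/DSS; the framebuffer-to-pipe assignments and targets are illustrative, and an older 2.6.22-era driver will differ):

```shell
# Illustrative only: route the graphics pipe and a video pipe to
# different outputs. Names follow Documentation/arm/OMAP/DSS; run on
# the target board, and check your kernel version's actual interface.
cd /sys/devices/platform/omapfb
echo "0 t:gfx" > framebuffers        # FB0 drives the graphics pipe
echo "1 t:vid1" > framebuffers       # FB1 drives video pipe 1
echo "gfx e:1" > overlays            # enable graphics pipe on its (LCD) output
echo "vid1 t:tv e:1" > overlays      # move video pipe 1 to the TV output
echo "tv e:1" > displays             # enable the TV output
```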
In reply to Bernie Thompson (TI):
Looking at Documentation/arm/OMAP/DSS inside linux-omap-2.6.28 we see this video path:
Clone GFX overlay to LCD and TV
-------------------------------

tvline=`cat /sys/devices/platform/omapfb/displays | grep tv`
w=`echo $tvline | cut -d " " -f 5 | cut -d ":" -f 2 | cut -d "/" -f 1`
h=`echo $tvline | cut -d " " -f 6 | cut -d ":" -f 2 | cut -d "/" -f 1`

echo "1 t:none" > omapfb/framebuffers
echo "0 t:gfx,vid1" > omapfb/framebuffers
echo "gfx e:1" > omapfb/overlays
echo "vid1 t:tv w:$w h:$h e:1" > omapfb/overlays
echo "tv e:1" > omapfb/displays

After this the configuration looks like (only relevant parts shown):

FB0 +-- GFX  ---- LCD ---- LCD
     \- VID1 ---- TV  ---- TV
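The width/height extraction in that script can be tried off-target with a made-up sample line (the line format below is a hypothetical example of what the displays file might print; check your kernel's actual output):

```shell
# Hypothetical sample of one line of /sys/devices/platform/omapfb/displays.
tvline="display1 enabled tv pal w:720/15 h:574/15"

# Same field extraction as in the quoted script:
# field 5 is "w:720/15" -> take the part after ":" -> take the part before "/".
w=`echo $tvline | cut -d " " -f 5 | cut -d ":" -f 2 | cut -d "/" -f 1`
h=`echo $tvline | cut -d " " -f 6 | cut -d ":" -f 2 | cut -d "/" -f 1`
echo "$w x $h"   # prints "720 x 574"
```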
Is this possible only with the OMAP3530 and not with the OMAP3503?
Or rather, does FB0 have to be suitable for both outputs (so that no rescaling is needed)? For example, would PAL or NTSC resolution be acceptable?
The 2.6.28 kernel is still declared "unstable", isn't it?
Can you give me the kernel roadmap?
In reply to Raffaele Recalcati:
OMAP3530 and OMAP3503 have the same display subsystem capabilities.
It is possible for them to use different framebuffers and different resolutions. If they use the same framebuffer, yes, the output would have to be good for both display devices.
2.6.28 is stable, but the upstream kernel.org version does not include the updated DSS driver you are utilizing for dual-display. That driver is not yet upstream/stable.
Take a look at the user's guide and release notes for the PSP kernel to get a feeling for the kernel roadmap.
In reply to Jason Kridner:
Thanks for your answer.
So, starting a product today, I'm going to use 2.6.28, compiled (and patched) using OpenEmbedded (today I'm still using the -r14 version because I'm not working full time on OMAP3).
Or, if I prefer a TI-certified kernel, I have to use 2.6.26 from TI. This is a second choice, only if I find big trouble with 2.6.28.
I'm obviously interested in an upstream kernel that is stable for all the normal peripherals: SPI, I2C, DSS, serial, USB host, USB OTG, Ethernet, audio and so on.
Dual display is not mandatory for me at the moment, but the Apple Store USB-to-Ethernet interface is obviously extremely important, so it has to work with the 2.6.26 TI kernel.
I'm trying to come to ESC, but only with a miracle will I be able to come.
Then I'll read the kernel roadmap, I promise.
The hardware has three source paths, one is graphics and two are video. The graphics path can convert from indexed color to RGB and the video paths can convert from YUV to RGB. Additionally both path types support RGB frame buffers.
The video paths both have scalers but the graphics path does not.
Now, these three paths can point to anywhere in memory, so they could be the same frame buffers, completely different buffers or even parts of the same buffers (i.e. a clipped region of a buffer).
There are two possible outputs, the LCD output and the TV output. Each of the source paths can only feed one of the outputs, but it can be either. This means that you can have the graphics path feeding the LCD output and a video path feeding the TV output. The other video path can be used to overlay on EITHER the LCD output or the TV output but not both at the same time.
Given that the frame buffer pointers can point to anywhere in memory it is possible then to have a single frame buffer output on the LCD and also scaled down for the TV output.
Another example: if video pipe 1 is used to feed the LCD output and video pipe 2 is used to feed the TV output, then a single frame buffer at one resolution can be scaled independently for the LCD and TV outputs.
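Following the sysfs syntax quoted earlier in this thread from Documentation/arm/OMAP/DSS, that two-pipe configuration might be set up roughly like this (the target sizes and the t:lcd target name are illustrative assumptions, not verified values):

```shell
# Illustrative only: feed one frame buffer to both video pipes and let
# each pipe scale independently. Syntax modeled on the snippet from
# Documentation/arm/OMAP/DSS; sizes are made-up examples.
cd /sys/devices/platform/omapfb
echo "0 t:vid1,vid2" > framebuffers              # FB0 feeds both video pipes
echo "vid1 t:lcd w:1280 h:720 e:1" > overlays    # scale for the 720p LCD
echo "vid2 t:tv w:720 h:576 e:1" > overlays      # scale down for PAL TV
echo "lcd e:1" > displays
echo "tv e:1" > displays
```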
I do not know how well this flexibility is supported in the Linux code at the moment, but I believe that the upcoming DSS2 framework will make this much simpler to configure.
I have some slides which detail the hardware capabilities but can't upload them to the forum. In the meantime please feel free to e-mail me at firstname.lastname@example.org and I will send them to you.
In reply to Steve Clynes:
OK, here are the slides [:)]
Where is the image stored after overlay?
For example, if both video pipe 1 and the graphics pipe connect to overlay manager 1 and then output to the LCD, can we have the same image output to the TV as well?
In reply to Hsueh-szu Yang:
The resultant output is not stored anywhere but is constructed on the fly as necessary.
If you are blending any of the frame buffers then you will only be able to send the resultant display to one destination.
There are three possible source processing pipes. Each processing pipe can only go to one of either the TV output or the digital output.
Please have a look at the following Wiki page which has more detailed information on the possible combinations and limitations.
Thanks for the reply.
How about rotation? Suppose we have one frame buffer outputting to both the LCD and the TV. The output to the LCD requires VRFB rotation, while the output to the TV needs no rotation but does need scaling. Will that work in this DSS system?
What is the actual resolution of the LCD you want to use?
What orientation do you want the LCD to be configured in (portrait or landscape)?
If rotation is enabled then it is possible to have different memory views of the frame buffer with different rotations, with one "window" being 0 rotation.
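As a sketch, assuming the per-framebuffer rotation attributes described for the DSS2 omapfb driver in Documentation/arm/OMAP/DSS (attribute names and the VRFB type value are assumptions from that document; older drivers differ):

```shell
# Assumed DSS2 omapfb sysfs attributes; illustrative only, run on the
# target board and verify against Documentation/arm/OMAP/DSS.
echo 1 > /sys/class/graphics/fb0/rotate_type   # select VRFB-based rotation (assumed value)
echo 1 > /sys/class/graphics/fb0/rotate        # rotate this framebuffer view by 90 degrees
```

The overlay that feeds the TV would then be left at rotation 0 while the LCD view uses the rotated window.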