Phonon::VideoWidget embedded inside a QGraphicsProxyWidget
Zack Rusin
zack at kde.org
Fri Mar 26 20:11:56 CET 2010
On Friday 26 March 2010 14:37:26 Aaron J. Seigo wrote:
> On March 26, 2010, Marco Martin wrote:
> > the mediacenter as is now is a -completely- different problem.
> > here we want to display a fullscreen video, with occasionally some chrome
> > over it when we need.
>
> yep.
>
> > as a "temporary" solution (that will indeed last for years) putting the
> > video on a window behind the mediacenter chrome (and hiding completely
> > the plasma window when not needed) is the less hacky (believe or not :p)
> > more performant and simpler to implement solution
>
> i wonder if hiding the plasma window will be the best solution
> (showing/hiding windows can be slower than desired, and then you have to
> time it with animations...)
>
> if the plasma window is always there, but the background is painted 100%
> transparent and the QGraphicsWidgets shown are also transparent (or even
> hidden), not only will there be no hide/unhide of the window itself and no
> animation timing coordination needed but you can do all the event handling
> in the Plasma::Containment and/or the widgets on it.
>
> the only possible problems i can think of here are:
>
> * kwin automatically disabling compositing because there is a full screen
> app running
>
> * it won't work without compositing ... to which i say: tough. if the media
> center requires compositing for video but delivers a really good
> experience, then so be it.
Actually, it's going to depend on what the driver does; it won't work very
well in either case.
The issue is that on most hardware video doesn't go through the rendering
pipeline, meaning the TMUs never actually see the frames; xcomposite is
therefore completely useless - it simply never sees the content.
Unfortunately that's going to be the case on most of those media boxes -
dedicated video circuitry implemented with backend scalers (overlays) is
cheaper and uses less power than the full pipeline.
The proper way of doing this stuff (or at least the way I would have done it)
is to finish the phonon::videoframe interface and use it with GL to
decode/render the video. That approach will be the fastest and most flexible.
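In the textures + GL path, the decoded frames arrive as YCbCr planes and the fragment shader has to convert them to RGB per pixel. As a rough CPU reference of that shader math (BT.601, limited range, fixed-point), here is a minimal sketch; the Rgb struct and function names are made up for illustration and are not part of any Phonon API:

```cpp
#include <algorithm>
#include <cstdint>

// CPU reference of the per-pixel BT.601 (limited-range) YCbCr -> RGB
// conversion a fragment shader would perform in the textures + GL path.
// Illustrative names, not Phonon API.
struct Rgb { uint8_t r, g, b; };

static uint8_t clamp255(int v)
{
    return static_cast<uint8_t>(std::min(std::max(v, 0), 255));
}

Rgb yuvToRgb(uint8_t y, uint8_t cb, uint8_t cr)
{
    // Fixed-point form of the standard coefficients (x256).
    const int c = static_cast<int>(y)  - 16;
    const int d = static_cast<int>(cb) - 128;
    const int e = static_cast<int>(cr) - 128;
    return Rgb{
        clamp255((298 * c + 409 * e + 128) >> 8),
        clamp255((298 * c - 100 * d - 208 * e + 128) >> 8),
        clamp255((298 * c + 516 * d + 128) >> 8),
    };
}
```

On the GPU the same arithmetic is a couple of multiply-adds per fragment, which is exactly why this path needs the full rendering pipeline.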
And if the video goes full screen, then just placing a videowidget that uses
the overlay on top would be nice - overlays use less power than the full
pipeline and provide features that are usually a bit hard to do in shaders,
e.g. motion compensation.
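For concreteness, this is roughly what a single half-pel motion-compensated prediction looks like - the kind of fetch that dedicated video hardware does in fixed function (MPEG-2 style half-pel averaging). The Frame type and function name here are invented for the sketch:

```cpp
#include <cstdint>
#include <vector>

// Toy illustration of half-pel motion compensation: predict one luma
// sample from a reference frame at a motion vector given in half-pel
// units. Illustrative types, not any real decoder's API.
struct Frame {
    int width;
    std::vector<uint8_t> luma; // row-major luma samples

    uint8_t at(int x, int y) const { return luma[y * width + x]; }
};

// (mvx2, mvy2) are doubled coordinates: even = full-pel, odd = half-pel.
uint8_t predictHalfPel(const Frame& ref, int x, int y, int mvx2, int mvy2)
{
    const int fx = x + (mvx2 >> 1), fy = y + (mvy2 >> 1);
    const int hx = mvx2 & 1,        hy = mvy2 & 1;
    // Rounded average of the (up to four) neighbouring samples.
    const int sum = ref.at(fx, fy)      + ref.at(fx + hx, fy)
                  + ref.at(fx, fy + hy) + ref.at(fx + hx, fy + hy);
    return static_cast<uint8_t>((sum + 2) / 4);
}
```

Doing this kind of dependent, sub-pixel fetch per block is cheap in fixed-function overlay hardware but was genuinely awkward in the shaders of that era.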
So it's:
- use textures and the full pipeline if you need to render something other
than video alongside it (phonon::videoframe + gl)
- use backend scalers if only the video matters and you don't need to
composite anything on top of it (probably a straight-up phonon::videowidget
on top)
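The decision rule above boils down to one predicate; a minimal sketch, with the enum and function names invented for illustration:

```cpp
// Which rendering path to pick, per the summary above.
enum class VideoPath { TexturesAndGl, BackendScaler };

// True when UI chrome or other content must be composited over the video.
VideoPath choosePath(bool rendersOtherContentAlongsideVideo)
{
    return rendersOtherContentAlongsideVideo ? VideoPath::TexturesAndGl
                                             : VideoPath::BackendScaler;
}
```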
z
More information about the Plasma-devel mailing list