kinetic-declarative plasma integration

Aaron J. Seigo aseigo at
Mon May 18 09:33:53 CEST 2009

On Wednesday 13 May 2009, aaron.kennedy at wrote:
> Hi Aaron,
> Great list of questions!  I apologize in advance if my answers are a little
> "qml general" rather than specific to the plasma integration, but I'd
> prefer to let Alan handle those details rather than me just getting them
> wrong :)
> On 14/05/09 10:30 AM, "ext Aaron J. Seigo" <aseigo at> wrote:
> > * where can i find the qml language definition?
> Sadly a formal language specification hasn't been written yet (short of
> reading src/declarative/qml/parser/javascript.g which probably isn't what
> you want :)  The "QML For C++ Programmers" documentation shows you all the
> things you can do, but not in a formal way.
> Obviously this sort of documentation is essential for the released product!
> > * why does it implement its own canvas system? i suppose the idea is to
> > allow non-QGraphicsView uses of this? a huge downside i see to this
> > approach is that there are now two places to get things wrong (and trust
> > me, if this ends up anything like QGraphicsView things will go wrong
> > repeatedly, it's just the nature of the beast) and two transformation
> > management code paths (to pick a random example) to have fun working out
> > the interactions between. is there really no way to have a common system
> > for this?
> Part of what we're exploring with this project is a new way of doing
> graphical UIs.  In addition to just the declarative language, this involves
> experimenting with things like how to best integrate OpenGL, measuring the
> maximum achievable performance and exploring any opportunities for
> optimization the "declarativeness" might give us.  To help with this the
> declarativeui can use its own canvas system.
> This is only a temporary situation - we fully intend to have only one
> canvas system at release time!
> > * how will this work with things like QGraphicsItem, which aren't
> > QObjects? will there be a QGraphicsItem bridge written or will we need to
> > roll our own? or will this simply be limited to QObject?
> Currently the QML engine can only create, and set properties on, QObject-
> derived types.  We're currently debating whether we should add support for
> wrapping non-QObject types - is the complexity of the wrapper itself and
> managing tricky details such as detecting instance deletion worth it?
> Fortunately you're not entirely out of luck.  We do support Qt interfaces
> (of which QGraphicsItem is one) so you can do:
> class MyWrapperType : public QObject, public MyGraphicsItemType
> {
> Q_INTERFACES(QGraphicsItem)
> };
> And then assign that type to a Qt property of type "QGraphicsItem *".
> > * there is a *lot* of manual API wrangling in the examples, such as:
> >
> >     QmlContext *ctxt = m_engine->rootContext();
> >     ctxt->activate();
> >     ctxt->setContextProperty("applet", this);
> >     m_root = m_component->create();
> >     ctxt->deactivate();
> >
> > that honestly looks pretty clumsy. what would make a lot more sense to me
> > is something like:
> >
> > {
> >     QmlContext ctx(m_engine);
> >     ctx.setContextProperty("applet", this);
> >     m_root = m_component->create();
> > }
> >
> > this hides all the "implementation detail" stuff; e.g. why should i care
> > about activating and deactivating the thing? for the internals, i get it,
> > but as an external API it seems like too many implementation details are
> > leaking out.
> >
> > as i look through the examples i see a lot of similar cases. thoughts?
> EEK!  Sounds like a left over from the early days when you *did* have to
> manage your contexts manually.
> Now you can just write:
> QmlContext *ctxt = new QmlContext(m_engine.rootContext());
> ctxt->setContextProperty("applet", this);
> m_root = m_component->create(ctxt);

that's nicer :)

i suppose it could also be written as:

QmlContext *ctxt = m_component->rootContext();
ctxt->setContextProperty("applet", this);
m_root = m_component->create();


> You still have to allocate your context on the heap, as when it is
> destroyed all the binding expressions that depend on it are invalidated.

that's sensible and fine...

> > * i see things like Script { source: "weather.js" } in the examples;
> > where do those scripts come from exactly? is it limited to files in the
> > same directory? is there protection against requests for random files?
> > can we hook our own "find the script" helper into it somehow?
> Yes, the scripts are loaded relative to the .qml file.  We are currently
> thinking of the collection of .qml and .js files in a directory as a
> "package" of sorts; that collectively define a component and its behavior.
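for reference, the relative-loading scheme described above looks roughly like this in the QML of the time; the file and function names here are illustrative, not from the thread, and early-snapshot syntax may have differed:

```qml
// Weather.qml -- sketch of the "package" layout described above.
// weather.js is resolved relative to this .qml file's directory.
Item {
    Script { source: "weather.js" }
    Text { text: describeForecast() } // function defined in weather.js
}
```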

while this can work, it's also not optimal. for instance, Plasma can simply 
ask for "the image called foo" and it knows where to go looking, without the 
ambiguity introduced if there is a file called foo which isn't actually an 
image, something that is quite plausible when everything is in one directory 
(esp if a GUI editor has been used to put everything together, which removes 
the user one step further from these details). that's just one example of how 
it can be nicer to allow locating files at runtime versus assuming their 
location on disk.
> We don't really want to introduce any form of search path which can just
> lead to security concerns and user confusion.

i'm not asking for a search path, but a locator, which is exactly what 
Plasma::Package is. it handles security in a clean, central manner while 
making it the opposite of confusing for the user.

> > * can we populate the JS runtime with whatever we want, or are we limited
> > to a QML JS runtime?
> At the moment the QScriptEngine that we instantiate internally is "ours".
> We don't allow applications to twiddle with it in any way.  What sorts of
> things would you like to do to it?

it would be nice and useful if we could inject all the objects/properties we 
do in the javascript ScriptEngine: DataEngines, Services, configuration, etc.
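to make that concrete, here is a sketch of what the QML side might look like if such injection were allowed, using the same context-property mechanism as the "applet" example earlier in this thread; the property names "weatherEngine" and "appletConfig" are hypothetical:

```qml
// sketch: assumes Plasma injected context properties named
// "weatherEngine" and "appletConfig" (hypothetical names) the
// same way "applet" is injected via setContextProperty
Item {
    Text { text: weatherEngine.data["Temperature"] }
    Text { text: appletConfig.readEntry("city") }
}
```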

> > * if one wants to create QML items dynamically, e.g. when a DataEngine
> > returns a source create a new QML item and load it, how does that work
> > exactly? can that be done from a JavaScript file?
> We have a couple of ways of dealing with "lists" of things (which sounds
> like what the sources returned from a dataengine conceptually are).  All of
> our view classes can take sets of things, and we also have a generalized
> "Repeater" element for handling other cases.

that will work for simple cases, but it would fail in the case of the time 
engine, for instance, where the list of available sources isn't the list of 
sources actually created. another example might be where the contents of one 
source changes and triggers the need to show other sources.

i know that's not really "declarative" in nature, but it's the sort of 
manipulation that is very common. we can live without it, but it means that we 
won't be able to use the declarative UI stuff in some places. i suppose what 
we could end up doing is a bit of a hybrid in that case and create bits of QML 
on the fly in the regular C++ (or whatever language) code and use that to 
define parts of a widget.

> > * i see lots and lots of what look like pixel values in the weather
> > example; are there more dynamic ways to define position such as "to the
> > left of the Sun object" or "span the whole width, but offset by the
> > height of this object from the bottom?"
> We have an inbuilt "anchoring" system for exactly this kind of thing.  For
> example, here we anchor the moon to the left of the sun:
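(the snippet itself appears to have been lost in quoting; presumably it looked something like the following, modulo early-snapshot syntax details -- the ids and image sources are illustrative:)

```qml
// sketch of the anchoring idea described above: the moon's right
// edge is anchored to the sun's left edge
Item {
    Image { id: sun; source: "sun.png" }
    Image { id: moon; source: "moon.png"; anchors.right: sun.left }
}
```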

i suppose the anchoring system documentation is still coming, or is there 
something i could go and read right now?

> > * how does scaling work? e.g. if i want an item in the interface to grow
> > smaller when the container it's in grows smaller (think "layout like"),
> > how does one achieve that?
> Layouting is supported, although as you've identified we have focused
> mostly on fixed screen sizes.  We have vertical, horizontal and grid
> layouts as well as the builtin "anchoring" system mentioned above. 
> Hopefully you'll eventually be able to use any custom Qt layouts you have
> written.
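for reference, the vertical layout element mentioned above looks roughly like this; element names shifted between early snapshots ("Column" is the naming that eventually stuck), so treat this as a sketch:

```qml
// sketch: a vertical arrangement of items with fixed spacing;
// each child is positioned below the previous one automatically
Column {
    spacing: 4
    Text { text: "first" }
    Text { text: "second" }
}
```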

and QGraphicsLayout i imagine :) i'll keep an eye on this aspect of the 
project as well in my "list of things to consider when we can pull the trigger 
to use it in plasma".

personally, i don't see the point of having a third layout system just for the 
declarative UI framework (we have QLayout and QGraphicsLayout already)

> > * when it comes to the timelines and other animations, has any thought
> > been put to how to map those to graphics APIs, e.g. OpenGL? in
> > particular, with this high level sort of language it would be absolutely
> > great to be able to determine what can be parallelized and what can't and
> > then use that information in the painting layer if that information can
> > be taken advantage of
> This has been the promise of every "declarative UI" solution ever
> conceived. Unfortunately, none have really delivered on it.  I, too, hope
> that we can take advantage of the additional context available and
> automagically optimize the graphics stack.  This technology is still new,
> so we'll have to wait and see what we can do.

isn't it the sort of thing you probably want to consider in the design from 
the start so there's any chance at all of being able to deliver on it? my 
concern would be that the design may unintentionally make it impossible to 
deliver on this kind of feature unless the requirements are considered at an 
early enough stage.

> > * when it comes to GUI editors for this language, how do you see
> > timelining to be handled? will the GUI editor need to parse through the
> > existing definitions and assemble timelines itself or is there a nice
> > introspection API that can be used outside of actually running the
> > resulting interface?
> A visual editor is a big part of what we want to enable.  Currently there's
> no simple introspection API, so there's every chance that we'll have to
> make some big changes before a final release to ensure that the GUI editing
> experience is good.  Fortunately there's a bunch of people looking into it.


> > * is it possible to hook into and control things like the animation
> > pipelines? in particular, we need the ability to do things like turn off
> > all animations on demand so that an animation just skips from 0.0 to 1.0.
> Our focus so far has been principally on "fixed configuration" hardware
> where this sort of configurability isn't necessary.  Of course, this

right; it's pretty important for uses where configuration isn't fixed, however 
(to state the obvious :) .. i hope this becomes possible to control so that we 
can take advantage of the declarative UI stuff in plasma at some point in the 
future.
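one way this could be exposed, sketched in QML terms: bind animation durations to a global switch (the "animationsEnabled" flag here is hypothetical), so a disabled animation jumps straight from 0.0 to 1.0:

```qml
// sketch: if a global "animationsEnabled" property were exposed
// (hypothetical), a zero duration would make the animation skip
// directly to its end state instead of interpolating
NumberAnimation {
    duration: animationsEnabled ? 250 : 0
}
```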

it seems that as it stands right now, there are a few blockers to seriously 
using this in plasma. the idea is great, the implementation is obviously 
unfolding .. it just probably needs more time.

thanks for keeping us in the loop and working with us to iron out the various 
issues. i know plasma isn't a "Fixed device display" and so is a bit of a 
different use case than i'm sure you've been addressing, but fixed device 
displays are not the future (or even all of the present), so i think this is a 
good exercise for the declarative UI stuff :)

Aaron J. Seigo
humru othro a kohnu se
GPG Fingerprint: 8B8B 2209 0C6F 7C47 B1EA  EE75 D6B7 2EB1 A7F1 DB43

KDE core developer sponsored by Qt Software


More information about the Plasma-devel mailing list