kinetic-declarative plasma integration
Aaron J. Seigo
aseigo at kde.org
Thu May 14 02:30:09 CEST 2009
On Wednesday 13 May 2009, Alan Alpert wrote:
> Those of you following Qt Labs closely will have heard about Qt Kinetic's
> Declarative UI
thanks for providing some Plasma integration examples. looking at the code in
the kinetic-declarativeui git branch, i have some questions (apologies in
advance for the volume of them, and i guarantee you some of them are going to
be horribly naive ;) :
* where can i find the qml language definition?
* why does it implement its own canvas system? i suppose the idea is to allow
non-QGraphicsView uses of this? a huge downside i see to this approach is that
there are now two places to get things wrong (and trust me, if this ends up
anything like QGraphicsView things will go wrong repeatedly, it's just the
nature of the beast) and two transformation management code paths (to pick a
random example) to have fun working out the interactions between. is there
really no way to have a common system for this?
* how will this work with things like QGraphicsItem, which aren't QObjects?
will there be a QGraphicsItem bridge written or will we need to roll our own?
or will this simply be limited to QObject?
* in the examples i see that the QmlComponent is creating a QGraphicsItem as
its root component. was there any reason for not using a QGraphicsWidget there
that we should be aware of, or was that just a "this is an example, it doesn't
have to be perfect" sort of thing? because we can't reasonably use QGraphicsItems
in something like plasma without doing a bunch of manual geometry management.
in the weather example with the finishLoad method, due to this manual geometry
management, it will break the user's settings. really, the m_root item needs
to conform to the container, not the other way around.
i get the impression, actually, that this has been designed with fixed screen
sizes in mind. while this will work just swimmingly on something like a phone
or a tablet device, it seems to be of extremely limited utility on a more
flexible device.
* there is a *lot* of manual API wrangling in the examples, such as:
    QmlContext *ctxt = m_engine->rootContext();
    ctxt->activate();
    ctxt->setContextProperty("applet", this);
    m_root = m_component->create();
    ctxt->deactivate();
that honestly looks pretty clumsy. what would make a lot more sense to me is
something like:
    {
        QmlContext ctx(m_engine);
        ctx.setContextProperty("applet", this);
        m_root = m_component->create();
    }
this hides all the "implementation detail" stuff; e.g. why should i care about
activating and deactivating the thing? for the internals, i get it, but as an
external API it seems like too many implementation details are leaking out.
as i look through the examples i see a lot of similar cases. thoughts?
* i see that there are QFxItem wrappers for things like Plasma::Svg. will we
need to wrap every element for it to be available? that seems highly
unwieldy, as we'll have to track API additions and changes; why isn't
QMetaObject providing the magic for us here?
* i see things like Script { source: "weather.js" } in the examples; where do
those scripts come from exactly? is it limited to files in the same directory?
is there protection against requests for random files? can we hook our own
"find the script" helper into it somehow?
* same question for other resources, like pictures
* can we populate the JS runtime with whatever we want, or are we limited to a
QML JS runtime?
* if one wants to create QML items dynamically, e.g. create and load a new QML
item when a DataEngine returns a new source, how does that work exactly? can
that be done from a JavaScript file?
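to make that concrete, what i'd hope to be able to write from a script is
something roughly like this; note this is purely hypothetical syntax, i'm
guessing at both the function names and the DataEngine hookup:

```javascript
// weather.js -- hypothetical sketch; createComponent() and the
// sourceAdded() hook are assumptions on my part, not actual API
function sourceAdded(source) {
    // build a new item from a .qml file and parent it into the scene
    var component = createComponent("WeatherSource.qml");
    var item = component.create(root);
    item.source = source;
}
```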
* i see lots and lots of what look like pixel values in the weather example;
are there more dynamic ways to define position such as "to the left of the Sun
object" or "span the whole width, but offset by the height of this object from
the bottom?"
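just to illustrate the kind of relative positioning i mean (hypothetical
syntax; "sun" and "header" stand in for ids defined elsewhere in the scene):

```qml
Item {
    id: forecast
    anchors.right: sun.left              // sit to the left of the sun object
}
Item {
    id: status
    anchors.left: parent.left            // span the whole width...
    anchors.right: parent.right
    anchors.bottom: parent.bottom
    anchors.bottomMargin: header.height  // ...offset from the bottom by header's height
}
```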
* how does scaling work? e.g. if i want an item in the interface to grow
smaller when the container it's in grows smaller (think "layout like"), how
does one achieve that?
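e.g. i'd love to be able to just bind the geometry to the container, something
like this (hypothetical, assuming property bindings can reference the parent's
geometry and are reevaluated on resize):

```qml
Item {
    id: thumbnail
    // track the container: always a quarter of its size
    width: parent.width * 0.25
    height: parent.height * 0.25
}
```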
* when it comes to the timelines and other animations, has any thought been
put to how to map those to graphics APIs, e.g. OpenGL? in particular, with
this high level sort of language it would be absolutely great to be able to
determine what can be parallelized and what can't, and then take advantage of
that information in the painting layer.
* when it comes to GUI editors for this language, how do you see timelining
being handled? will the GUI editor need to parse through the existing definitions
and assemble timelines itself or is there a nice introspection API that can be
used outside of actually running the resulting interface?
* is it possible to hook into and control things like the animation pipelines?
in particular, we need the ability to do things like turn off all animations
on demand so that an animation just skips from 0.0 to 1.0.
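if durations could simply be bound to a setting, even something as blunt as
this would do ("plasma.animationsEnabled" is a hypothetical context property
we'd provide, not something that exists today):

```qml
NumberAnimation {
    property: "opacity"
    to: 1.0
    // a zero duration means the animation skips straight to its end value
    duration: plasma.animationsEnabled ? 250 : 0
}
```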
i'm sure i'll come up with more as i go through it further, but i think the
above is more than a(n un)reasonable start ;)
i'm really happy to see something like this take shape; it could have huge
implications for Qt based applications and i'm sure we can abuse it to amazing
results within Plasma once it is all put together.
--
Aaron J. Seigo
humru othro a kohnu se
GPG Fingerprint: 8B8B 2209 0C6F 7C47 B1EA EE75 D6B7 2EB1 A7F1 DB43
KDE core developer sponsored by Qt Software