PMC meeting summary

Shantanu Tushar jhahoneyk at gmail.com
Sun Jun 13 19:52:39 CEST 2010


----- Original message -----
> Hello list,
> here follows a short summary of what was discussed during yesterday's
> meeting :)
> 
> - *Media Fetcher Backends:*
> Hayri is responsible for this as part of his GSoC project. We currently
> have two DataEngines, called *picture* and *video*.
> Both were intended as a bridge between the different fetching backends
> and the PMC browsing view.
> Their APIs are very similar apart from a few backend-specific details,
> so we decided to merge them into a single WebMedia DataEngine (the name
> can, of course, be changed to a better one). The common data structure
> we'll use to return the fetched information from web services will be
> wrapped into something like MediaResource and MediaResourceCollection.
> The DataEngine and its backends will all be written in JS, given the
> web-driven nature of the data.
> We currently have everything needed to provide such a DataEngine except
> for the *add-on finder*, which is something Aaron will likely work on
> soon. There may also be some overlap with the kde-silk project, so maybe
> we should talk with them in case they already have tools for this.
> An additional note for Hayri, as stated by Aaron: network access is
> already available in the bindings, while XML parsing can be done via
> http://xmljs.sourceforge.net/
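A rough idea of those wrapper types, as a plain C++ sketch (the field names are my assumptions; only the type names MediaResource and MediaResourceCollection come from the meeting):

```cpp
#include <string>
#include <vector>

// Hypothetical sketch of the common result wrapper the merged
// WebMedia DataEngine could return. Field names are assumptions,
// not an API agreed on at the meeting.
struct MediaResource {
    std::string id;        // backend-specific identifier
    std::string title;     // human-readable title
    std::string url;       // where the picture/video can be fetched
    std::string thumbnail; // preview image URL, if any
};

// A named collection of resources, e.g. one page of search results.
struct MediaResourceCollection {
    std::string query;                    // the search that produced it
    std::vector<MediaResource> resources; // the fetched items
};
```

The engine would fill such a collection from the JS fetching backends and hand it to the browsing view.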
> 
> 
> - *QML into PMC:*
> We're going to make use of QML for layouts in PMC, and specifically in
> our State Machine.
> Christopher Blauvelt is contributing a lot to bringing QML into PMC.
> Currently the aim is to have a new view for the mediabrowser, inheriting
> its AbstractItemView, that takes advantage of QML in order to show
> pictures in a fancy way :p
> For those who are curious, the code is under the declarative/ folder in
> the root directory.

This will be great. Eager to check those out once exams are over :)
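As a rough illustration of that design, here is a hedged sketch with stand-in types: only AbstractItemView is PMC's, the hook name is invented, and the real subclass would drive a QML scene instead of collecting URLs.

```cpp
#include <string>
#include <vector>

// Stand-in for PMC's AbstractItemView: the actual class lives in the
// mediabrowser and has a richer Qt-based interface.
class AbstractItemView {
public:
    virtual ~AbstractItemView() {}
    // Hypothetical hook, called for every item the model exposes.
    virtual void showItem(const std::string &pictureUrl) = 0;
};

// Sketch of the planned QML-backed view: in the real code this would
// instantiate QML delegates (see the declarative/ folder) instead of
// storing strings.
class DeclarativePictureView : public AbstractItemView {
public:
    void showItem(const std::string &pictureUrl) override {
        loadedPictures.push_back(pictureUrl); // stand-in for delegate creation
    }
    std::vector<std::string> loadedPictures;
};
```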

> 
> - *QGestures and input abstraction in PMC*
> This is a slightly tricky part and probably needs further discussion.
> The aim is to have a layer that maps different input events onto common
> Media Center events like *play*, *pause*, *next-picture*,
> *previous-picture*, *volume-up* and so on.
> This way different input devices can be recognized and ready to use
> (think about remotes and/or keyboards, for example).
> As discussed, we'll probably use QGestureEvent and possibly create our
> own PMC events in order to ease the mapping.
> These events will then be sent to the components loaded into PMC, which
> will be able to perform the proper action.
> I also think that we could have virtual event methods in our base
> MCApplet class, such as playEvent, volumeUpEvent and the like.
> In each applet the gesture recognition would call the proper virtual
> event method.
> Of course I might be wrong, so please share your thoughts on this.
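A minimal sketch of that idea, assuming hypothetical names beyond MCApplet, playEvent and volumeUpEvent (the real layer would build on QGestureEvent rather than plain key strings):

```cpp
#include <map>
#include <string>

// Sketch of the proposed input-abstraction layer: raw input (gestures,
// remote buttons, key presses) is mapped onto common PMC events, which
// are then delivered through virtual methods on a base applet class.
enum PmcEvent { Play, Pause, NextPicture, PreviousPicture, VolumeUp };

class MCApplet {
public:
    virtual ~MCApplet() {}
    // Virtual event methods, as proposed above; applets override the
    // ones they care about.
    virtual void playEvent() {}
    virtual void pauseEvent() {}
    virtual void volumeUpEvent() {}

    // Dispatch a mapped PMC event to the proper virtual method.
    void handleEvent(PmcEvent e) {
        switch (e) {
        case Play:     playEvent();     break;
        case Pause:    pauseEvent();    break;
        case VolumeUp: volumeUpEvent(); break;
        default:       break; // picture navigation left out of this sketch
        }
    }
};

// The mapping layer: different devices translate their raw input into
// the same PmcEvent values, so applets never see device specifics.
struct InputMapper {
    std::map<std::string, PmcEvent> keymap; // e.g. raw key name -> event

    void feed(const std::string &rawKey, MCApplet &applet) {
        auto it = keymap.find(rawKey);
        if (it != keymap.end())
            applet.handleEvent(it->second);
    }
};
```

Each input device would then just populate its own keymap, and unrecognized input is silently ignored.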
> 
> I think that's all; please add anything I may have missed from the
> meeting.
> 
> Cheers!
> 
> P.S.: How to upload the IRC log to plasma.kde.org?
> 
Send it to Aaron with a request and he'll upload it :)

> -- 
> Alessandro Diaferia
> KDE Developer
> KDE e.V. member
