DataEngines and models
Mark Gaiser
markg85 at gmail.com
Tue Dec 24 18:29:28 UTC 2013
On Tue, Dec 24, 2013 at 5:34 PM, Marco Martin <notmart at gmail.com> wrote:
> Hi all,
> I decided to give a crack to an old pet peeve of mine (and quite a sore point,
> if we look around in dataengines): integration of dataengines and normal
> QAbstractItemModels.
> Now, one could ask why not doing everything as imports of either qobjects with
> an api or models.. fair enough and there are some reasons:
>
> * we have invested a lot in a big number of dataengines already, their design
> may be better, but in most cases precisely because they lacked a way to have
> multi line models
> * the design of the dataengines is still better for anything that has the data
> represented with a single item (system monitor, date, battery...). Why?
> * because of its completely async nature, it's too easy exporting a qobject
> with slow, sync methods
> * data asked by multiple consumers is shared in the same DataEngine and
> DataContainer instances
> * services are neat for things async in nature like authentication on a
> website or so, where blocking calls is a big no (and reimplementing a similar
> job or future api every time in every import is meh)
>
> So, in many cases i think that having imported models is just fine, but in
> other cases would be nice to have dataengines and normal QAbstractItemModels
> merged in one..
Hi Marco,
Let's go back to the beginning here for a second. The "DataEngine"
principle exists because there was a real need for it in Plasma and
no solution from Qt. Hence DataEngines got created, by Aaron if I
recall correctly. At that point there was nothing else, so they were
the best in their class and served very well.
In today's world there is still no alternative to dataengines as they
were initially designed. However, with the massive move to QML a new
alternative became possible: QML Plugins.
That leaves us with the question: what to use now, DataEngines or QML Plugins?
I personally always found the dataengines a bit "magical": it was hard
to find out what was possible in a dataengine and how to use it.
Granted, this is something I can easily live with, since I can read
the code, figure out what it does and get it working.
I think that QML Plugins, in this case specifically models
reimplemented from QAbstractListModel, are most certainly the way
forward. I've been using that approach for quite a while now and am
liking it quite a bit. It makes things very easy if you ever want to
add filters on top of a model (for example via QSortFilterProxyModel);
flexibility like that is not easy to get in dataengines.
You also claim (or complain) that models are not async. That should
not be an issue, since models work with signals/slots: if the data
changes, you simply notify the model of that. I don't see a problem
there.
I understand that there has been a heavy investment in the dataengine
area in the past, but now it really seems more sane and clean to go
for the QAbstractListModel version in a QML Plugin. It's clean and
simple to use: you only need to describe a list of roles for users
to consume. If you ask me, that should be the way forward.
As for the rest you wrote below: I'm not commenting on that, since I don't know.
> so:
>
> plasma-framework branch mart/modelsInDataEngine
>
> it adds a property in DataContainer, setModel()/model() that is just a
> QAbstractItemModel * so implementation is completely up to the dataengine
> developer and is not limited in any way.
> As in the example plasma-framework/examples/dataengines/sourcesOnRequest, when
> the user connects to a source, in the sourceRequestEvent of the dataengine
> implementation, a model can be set on the datacontainer.
>
> at that point, if the source is connected with a 0 interval, it's up to the
> model to update itself on demand, typical use case for local data.
>
> in the same example, it shows something that can be useful for remote data,
> when you need polling:
> in updateSourceEvent you can still access the model, so if it's like
> downloading rss feeds, twitter updates or whatever, you can just call
> containerForSource(source)->model()->update() there, or whatever way you tell your
> model to refresh. the normal data for the datacontainer is still available so
> one can put some global data in it like number of unread emails, total rss
> items and so forth.
>
> for all the "write" actions, services can still be used in conjunction, so the
> service of the same source of the model could do things like authenticating.
>
> another particular thing is that there will be a single model instance
> regardless how many plasmoids are there using it, as any DataContainer is
> refcounted so it will be deleted only when the last plasmoid using it
> disconnects.
>
> on the QML bindings there is a "models" property indexed by source just as
> "data", an example can be found in plasma-
> framework/examples/applets/dataenginemodel/
> it uses a stupid QStandardItemModel, which will probably be enough for a number
> of our dataengines.
>
> Opinions? comments?
>
> If this can be merged, I would like to start tasks to port most of our engines
> that have data that is behaving as "models" (like rss or notifications) to
> this "actual model" approach, and leave as normal DataEngine::Data only the
> stuff that is supposed to be a single, simple map (most of dataengines,
> luckily)
>
> After this is done, even if it is pretty invasive, it would be good to even remove
> the class DataModel in the plasmacore import (or at least dramatically
> simplifying it to allow only the sources as items code path)
>
> Cheers,
> Marco Martin
> _______________________________________________
> Plasma-devel mailing list
> Plasma-devel at kde.org
> https://mail.kde.org/mailman/listinfo/plasma-devel