[Kde-hardware-devel] Getting rid of synchronous and uncached accessors in Solid

Dario Freddi drf54321 at gmail.com
Fri Jun 4 14:38:18 CEST 2010


On Friday 04 June 2010 09:27:00 Kevin Ottens wrote:
> On Friday 4 June 2010 01:27:39 Dario Freddi wrote:
> > This is mostly for Solid::Control, but caching properties in Solid would
> > be great as well.
> 
> Well, that's what "QMap<QString, QVariant> cache" in HalDevicePrivate is
> for. ;-)

Awesome, that was the 2am effect :) (see below)
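
For reference, the pattern that cache enables looks roughly like this - a
minimal, illustrative sketch (names are made up, this is not the actual HAL
backend code):

#include <QtCore/QMap>
#include <QtCore/QString>
#include <QtCore/QVariant>

class CachingDevice
{
public:
    QVariant property(const QString &key)
    {
        // Serve repeated lookups from the cache instead of going back
        // to the bus every time.
        QMap<QString, QVariant>::const_iterator it = m_cache.constFind(key);
        if (it != m_cache.constEnd()) {
            return it.value();
        }
        const QVariant value = fetchFromBus(key); // one round trip
        m_cache.insert(key, value);
        return value;
    }

    void invalidate(const QString &key)
    {
        // To be called when the backend signals a property change.
        m_cache.remove(key);
    }

private:
    QVariant fetchFromBus(const QString &key)
    {
        // Stand-in for the real QDBusInterface round trip.
        Q_UNUSED(key);
        return QVariant();
    }

    QMap<QString, QVariant> m_cache;
};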

> 
> > Thoughts, opinions? (Kevin I'm mostly looking at you)
> 
> I clearly agree with all the above regarding stress and delays on the bus.
> 
> That said, it's not like any of the Solid::Control calls are on some hot
> path AFAIK. In most cases they aren't called that often.
> 
> I guess you sent this mail after some Telepathy-KDE/Collabora
> "brainwashing". In the case of Telepathy it makes perfect sense to be very
> aggressive about caching and to have most of the wrapper API written in an
> async fashion. The usage pattern of such an IM infrastructure is much more
> stressful on the bus, and clients can't afford blocking for 100ms.

That is true :) However, performance is just a desired improvement; the real
problem is of course the synchronous calls on the bus. The main point is that
QDBusReply is actually the worst possible way to place a call - a better
approach is to use async calls and access the QDBusPendingReply synchronously
(which is indeed possible). This spawns an event loop but still keeps the
call itself asynchronous.
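
To make that concrete, here is a minimal sketch of the call styles (the
org.example.Power service and its "brightness" method are hypothetical,
purely for illustration):

#include <QtDBus/QDBusConnection>
#include <QtDBus/QDBusInterface>
#include <QtDBus/QDBusPendingCallWatcher>
#include <QtDBus/QDBusPendingReply>
#include <QtDBus/QDBusReply>

void callStyles()
{
    QDBusInterface iface("org.example.Power", "/org/example/Power",
                         "org.example.Power",
                         QDBusConnection::sessionBus());

    // 1) QDBusReply + call(): fully synchronous, blocks the thread until
    //    the reply (or the timeout) arrives. The style to get rid of.
    QDBusReply<int> blocking = iface.call("brightness");
    Q_UNUSED(blocking);

    // 2) asyncCall() + QDBusPendingReply: the call is placed
    //    asynchronously; you can still wait for it right where you need
    //    the value, keeping the calling code synchronous in shape.
    QDBusPendingReply<int> pending = iface.asyncCall("brightness");
    pending.waitForFinished();
    if (!pending.isError()) {
        int value = pending.value();
        Q_UNUSED(value);
    }

    // 3) Or go fully asynchronous with a watcher: connect its finished()
    //    signal to a slot and read the reply there (remembering to
    //    deleteLater() the watcher in that slot).
    QDBusPendingCallWatcher *watcher =
        new QDBusPendingCallWatcher(iface.asyncCall("brightness"));
    Q_UNUSED(watcher);
}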

My main concern here is that DBus can indeed fail, and even though I stole
many ideas from my recent work on Telepathy, some months ago I found myself
needing to switch some of this stuff over for KAuth - which uses DBus way
more than Solid ever will, and it was still creating more than one issue.

So yes - performance matters, but I mostly care about everything else, given
that these days the Qt people discourage QDBusReply + call() in favour of
QDBusPendingReply + asyncCall() - and you can still keep things synchronous
that way.

> In the case of Solid::Control I'm very doubtful about the ratio of
> perceived performance benefit for the user to maintenance cost (especially
> since what really uses Solid::Control is policy daemons just reacting to
> some state changes, with UIs insulated in other processes, accessing
> simpler interfaces after all the data got crunched).
> 
> Also, I have to wonder whether that would be well-invested time on
> Solid::Control at this point, when the plan is to make it slowly disappear.
> Why not just do this kind of cleanup after you've rolled it back into
> PowerDevil as we planned? This way you won't be tied to the Solid::Control
> interfaces anymore; I think you'd waste less time in the long run by
> delaying such a cleanup.
> 
> Let's focus on the "kill Solid::Control plan" instead, shall we?

I just realized I clearly missed the point in my last mail (it was 1-2am my
time :) ). Of course I have no plans to waste time updating S::C in its
current state - all of this would be for the rework of PowerDevil+SC in 4.6
(mail incoming for that, as I have a $plan). Still, I felt like sharing this
to get feedback and perhaps get other teams thinking about it, given the
current flow of code in networking & bluetooth.

More than that, consider that the "new" Solid::Control would be a DBus
interface on the PowerDevil daemon, so we would not have to care about
synchronous stuff and whatnot, since the clients would take care of that. So
yes - this is all for internal stuff.
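
Just to sketch what that could look like on the daemon side (names are made
up, not the final interface; moc required as usual):

#include <QtCore/QObject>
#include <QtDBus/QDBusConnection>

class PowerDevilInterface : public QObject
{
    Q_OBJECT
    Q_CLASSINFO("D-Bus Interface", "org.kde.PowerDevil")

public slots:
    int brightness() const { return 42; } // placeholder implementation
};

void exportDaemonInterface()
{
    // The daemon exports one object on the bus; clients are then free to
    // talk to it synchronously or asynchronously as they see fit.
    QDBusConnection bus = QDBusConnection::sessionBus();
    bus.registerService("org.kde.powerdevil");
    bus.registerObject("/org/kde/PowerDevil",
                       new PowerDevilInterface,
                       QDBusConnection::ExportAllSlots);
}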

The plan is indeed alive, and as I already said, I'm about to have a written
plan ready to be shared and reviewed :)

> 
> Regards.

-- 
-------------------

Dario Freddi
KDE Developer
GPG Key Signature: 511A9A3B

