Applications Lifecycle Policy
Kevin Ottens
ervin at kde.org
Thu Jul 6 06:52:23 BST 2017
Hello,
I don't want to dive too deeply into this particular thread. There are a
couple of things I'd like to bring up, though. I won't quote what I agree
with, just what worries me a bit.
On Wednesday, 5 July 2017 21:37:09 CEST Martin Flöser wrote:
> [...]
> With extragear gone I don't really see the need of playground any more.
> Playground applications are just like all the other things, except that
> they did not go through KDE Review. Which brings me to the next point:
> remove the review.
>
> To me the review process always felt weird, like a relic from other
> times. I contributed something like 100k lines of new code to KDE
> overall - none of them went through review. Would KWin pass review
> today? Just for fun I opened up Krazy and saw 444 open issues.
> Objectively speaking, KWin is known as one of the highest-quality
> products in KDE, one of the highest-quality window managers, and the
> Wayland compositor with the largest test coverage. On the other hand,
> it would fail our rules for review (granted, Krazy reports so many
> false positives in the case of KWin that I haven't checked for
> years).
>
> KWayland entered Frameworks without review. How come? Well, it moved from
> Plasma to Frameworks, so no review was needed. How did it enter Plasma? Well,
> it was split out of KWin. Back then it was a few files providing a very
> minimal library for Wayland clients. If we had started in Playground we
> would have had to go through review - today it's a code base of 50k lines
> (according to cloc). Similarly, if we had started a new Wayland
> compositor from scratch it would have had to go through review, but by
> extending KWin that was never needed.
>
> I see that the review process is there to ensure a "baselevel quality".
> But what about afterwards? When did KWin go through review? When Matthias
> forked it from KWM, or was it covered by KWM? That makes something like
> 15 years of nobody looking at this baselevel quality. Would we have
> needed it sometimes? Hell yes! Think of the compositor introduction, the
> xcb port, the Wayland port. These were to a large extent like new
> products, but we had passed review once (if at all), so it was no longer needed.
>
> We see something similar now with Kube. If it had started as a "KMail 3" no
> review would be needed, but as Kube it has to go through review.
> That's arbitrary.
I don't think that is arbitrary. Lots of differences arise precisely because
it didn't get started in the kmail or kdepim repository. The same goes for
Zanshin, which had discrepancies in how translations were handled and how
releases were done. Those wouldn't have happened if it had started as part
of kdepim or korganizer, because you tend to mimic more when you grow within
something already established. We found those issues during the kdereview
phase, and I'm glad we did.
> If I were starting a new project today, I would think this process is a joke.
> The quality just doesn't get measured any more after review.
Now, I agree that claiming kdereview is about quality is kind of delusional,
especially since it happens only once, as you mentioned. To me it's really
about making sure the project gets properly integrated with the rest of our
infrastructure (translations, making releases, etc.), which might require
changes either to the application or to the infrastructure. kdereview is the
right time to find those.
> Today I think there are way better ways to measure quality than a
> two-week process on kdereview:
>
> * how many unit tests does a project have?
> * how large is the test coverage?
> * how often do tests fail on build.kde.org?
> * how often does the build fail on build.kde.org?
> * is it translated?
> * does it have appstream data?
> * is the code getting reviewed?
> * is the project a one person show?
> * ...
Out of curiosity, what happens to projects which are a one person show? We've
got loads of those, Sink and Zanshin being in that set IMO.
Sink and Zanshin get a few commits here and there from other people, but really
90%+ of the work is done by Christian and me respectively.
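As an aside, the kind of continuous scoring Martin's metric list suggests
could be prototyped quite simply. Below is a minimal, hypothetical sketch in
Python: the `ProjectHealth` fields mirror the bullet list above, but the
`health_score` weights and the scoring formula are invented purely for
illustration, not anything our infrastructure actually provides:

```python
from dataclasses import dataclass

@dataclass
class ProjectHealth:
    """Snapshot of the continuous-review metrics from the list above."""
    unit_tests: int          # how many unit tests the project has
    coverage: float          # test coverage, 0.0..1.0
    ci_failure_rate: float   # fraction of CI builds that fail, 0.0..1.0
    translated: bool         # is it translated?
    has_appstream: bool      # does it ship AppStream data?
    reviewed_commits: float  # fraction of commits that went through review
    contributors: int        # distinct recent contributors

def health_score(p: ProjectHealth) -> float:
    """Combine the metrics into a single 0..1 indicator.

    The weights are entirely arbitrary; a real policy would need to
    agree on them (and on how each metric is measured).
    """
    score = 0.0
    score += 0.20 * min(p.unit_tests / 100, 1.0)        # cap the test-count bonus
    score += 0.20 * p.coverage
    score += 0.20 * (1.0 - p.ci_failure_rate)
    score += 0.10 * p.translated
    score += 0.10 * p.has_appstream
    score += 0.10 * p.reviewed_commits
    score += 0.10 * min((p.contributors - 1) / 4, 1.0)  # "one person show" penalty
    return score

if __name__ == "__main__":
    kwin_like = ProjectHealth(unit_tests=250, coverage=0.7, ci_failure_rate=0.05,
                              translated=True, has_appstream=True,
                              reviewed_commits=0.8, contributors=6)
    print(f"health score: {health_score(kwin_like):.2f}")
```

The interesting part wouldn't be the formula itself but publishing the raw
numbers somewhere visible, so users and contributors can watch the trend.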
> So instead of a one-time review I would propose a continuous review of
> the projects, made available in an easily accessible way so that
> users can also see the objective quality of the application. And yes,
> that would mean that many long-standing applications would have a way
> lower quality than the new kids on the block.
I don't think that excludes having a defined point in time for the first one.
I even think that's necessary, because we're not very good at doing this kind
of continuous review. I like the idea otherwise.
> For KDE Applications, Plasma and Frameworks I expect to have additional
> rules for integration. Frameworks already has them, Plasma kind of has
> them, but I think they are not codified and KDE Applications could e.g.
> start with the current review process.
For Frameworks, the point in time for the initial review is known though: it's
when the switch is flipped to have it released as part of Frameworks. For
applications released on the KDE Applications cycle, it could simply be the
same thing... but for those not part of the KDE Applications cycle it's less
clear.
Regards.
--
Kévin Ottens, http://ervin.ipsquad.net
KDAB - proud supporter of KDE, http://www.kdab.com