UI Design - add resource window more gesture oriented

Thomas Pfeiffer colomar at autistici.org
Wed Sep 14 09:13:11 UTC 2011


 On Tue, 13 Sep 2011 19:49:20 +0200, Marco Martin wrote:

>> http://share.basyskom.com/contour/UIDesign/1-addResource_window.jpg
>> #1: The Add content window (the wording "content" sounds better than
>> "items" in my opinion):
>> Here the user can either search for content (like it works already now)
>> and/or select categories (a dropdown instead of the current icons,
>> which rather confuse than show the filtering function).
>
> Hmm, I find comboboxes rather a pain for touchscreens. I know the MeeGo
> tablet is using them in several places (and we have one in the battery
> plasmoid, since it is still the same plasmoid used on the desktop), but
> I would be careful with the usage of that widget.
 
 I'm not in favor of comboboxes here, either. I agree with Marco that they
 (at least the Qt ones) don't work so well on touchscreens, plus they
 always require two taps to change the category (one for opening, one for
 selecting).
 So I think we should come up with a new solution that avoids the
 disadvantages of both the current icons and a combobox. Would be another
 good point for discussion next week, wouldn't it?
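
 Just to make the one-tap idea a bit more concrete, something as simple
 as a row of exclusive, finger-sized toggle buttons would already avoid
 both problems. A small sketch in plain Qt widgets (hypothetical category
 names, and of course the real thing would be a QML component in our
 case, not QWidgets):

#include <QApplication>
#include <QWidget>
#include <QHBoxLayout>
#include <QToolButton>
#include <QButtonGroup>
#include <QStringList>

int main(int argc, char **argv)
{
    QApplication app(argc, argv);

    QWidget window;
    QHBoxLayout *layout = new QHBoxLayout(&window);

    // Exclusive group: exactly one category is active, switched with one tap.
    QButtonGroup *group = new QButtonGroup(&window);
    group->setExclusive(true);

    // Hypothetical category names, just for illustration.
    QStringList categories;
    categories << "All" << "Documents" << "Images" << "Music" << "Videos";

    foreach (const QString &name, categories) {
        QToolButton *button = new QToolButton(&window);
        button->setText(name);
        button->setCheckable(true);
        button->setMinimumSize(96, 48); // finger-sized hit area
        group->addButton(button);
        layout->addWidget(button);
    }
    group->buttons().first()->setChecked(true);

    // A real implementation would connect QButtonGroup::buttonClicked()
    // to whatever updates the (Nepomuk-backed) result model.
    window.show();
    return app.exec();
}

 Whatever the final widget looks like, the important property is that
 changing the category is a single tap on an always-visible target.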

>> Underneath, Nepomuk already delivers best hits, that is, content that
>> is already related to the current activity, and shows the first
>> 20-something resources based on that recommendation. As soon as the
>> user enters something in the search field or dropdown, the content
>> changes, of course.
>
> I guess that would depend on
>
>> http://share.basyskom.com/contour/UIDesign/2-addResource_select.jpg
>> #2: Selection works like in the current development with a single
>> touch; the selected file gets a highlight.
>>
>> http://share.basyskom.com/contour/UIDesign/3-addResource_dragndrop.jpg
>> #3: Now I start dreaming :) Instead of an add button, the user simply
>> drags the selected content into the activity area behind the window.
>> The
>
> Yep, that should be supported as a possible way, though not as the only
> way, I think.
> Btw, that would be possible as soon as we can switch to master for
> kde-runtime, since there are drag-and-drop QML components there.

 There has already been some discussion about a very similar idea in this
 thread:
 http://mail.kde.org/pipermail/active/2011-August/000554.html
 We should consider the concerns voiced in that thread.
 I'm still for this, of course. Offering an alternative to drag & drop
 might make sense, though.
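
 For what it's worth, at the plain Qt level dragging a selected resource
 out of the window just means starting a drag whose mime data carries the
 resource URL, and letting the activity area accept the drop. A rough
 sketch (hypothetical ResourceList class; in practice this would go
 through the QML drag-and-drop components Marco mentions rather than raw
 QDrag):

#include <QApplication>
#include <QListWidget>
#include <QListWidgetItem>
#include <QMimeData>
#include <QDrag>
#include <QUrl>

// Hypothetical list of search results; the only interesting part is how
// a drag carrying the resource URL is started.
class ResourceList : public QListWidget
{
protected:
    // Invoked by the view when the user drags the current item away.
    void startDrag(Qt::DropActions /*supportedActions*/)
    {
        QListWidgetItem *item = currentItem();
        if (!item)
            return;

        // Assumption: the resource URL is stored in the item's UserRole data.
        QMimeData *mime = new QMimeData;
        mime->setUrls(QList<QUrl>() << item->data(Qt::UserRole).toUrl());

        QDrag *drag = new QDrag(this);
        drag->setMimeData(mime);
        // The drop target (the activity area) decides what to do with the
        // URL, e.g. relate the resource to the current activity.
        drag->exec(Qt::LinkAction | Qt::CopyAction);
    }
};

int main(int argc, char **argv)
{
    QApplication app(argc, argv);

    ResourceList list;
    list.setDragEnabled(true);
    QListWidgetItem *item = new QListWidgetItem("vacation.jpg", &list);
    item->setData(Qt::UserRole, QUrl("file:///home/user/vacation.jpg"));
    list.show();

    return app.exec();
}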

>> http://share.basyskom.com/contour/UIDesign/4-addResource_multiselect.jpg
>> #4: Still dreaming :) Multi-selection could also be done with a
>> two-finger gesture: indicate the start point with the press of the
>> first finger, stay there, and move the second finger over the surface
>> without losing contact to select all items lying in between. This
>> would be a nice feature if I want to select a big amount of data,
>> let's say all my last vacation pics...
>
> That was, by the way, exactly the problem I banged my head against
> today while making this experiment:
> http://www.youtube.com/watch?v=Fvqip960aKg
> It's pretty visible that it doesn't work as well as it should, since
> the only multitouch thing that is going to be exposed in Qt 4.8 is a
> pinch gesture that doesn't let you know where the two individual points
> are (I think they want to keep raw touch events hidden; we can write
> something ourselves, though). There would be enough use cases for it:
> both in that video, resizing boxes with fingers, or drawing a rubber
> band requires having the coordinates of all touch events.
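
 Unless I'm mistaken, at the QWidget / QGraphicsItem level Qt does
 deliver raw touch points to items that set Qt::WA_AcceptTouchEvents, so
 writing something ourselves should be feasible. Just to illustrate the
 idea, a two-finger rubber band basically boils down to reading the two
 touch points and spanning a rectangle between them (plain Qt, nothing
 Plasma-specific, only a sketch):

#include <QApplication>
#include <QWidget>
#include <QTouchEvent>
#include <QPainter>

class TouchRubberBand : public QWidget
{
public:
    TouchRubberBand()
    {
        // Opt in to raw touch events instead of synthesized mouse events
        // or high-level gestures.
        setAttribute(Qt::WA_AcceptTouchEvents);
    }

protected:
    bool event(QEvent *event)
    {
        switch (event->type()) {
        case QEvent::TouchBegin:
        case QEvent::TouchUpdate: {
            const QList<QTouchEvent::TouchPoint> points =
                static_cast<QTouchEvent *>(event)->touchPoints();
            if (points.count() >= 2) {
                // Span the selection rectangle between the two fingers.
                m_rubberBand = QRectF(points.at(0).pos(),
                                      points.at(1).pos()).normalized();
                update();
            }
            return true;
        }
        case QEvent::TouchEnd:
            // Here the items inside m_rubberBand would be selected.
            m_rubberBand = QRectF();
            update();
            return true;
        default:
            return QWidget::event(event);
        }
    }

    void paintEvent(QPaintEvent *)
    {
        QPainter painter(this);
        painter.setPen(Qt::blue);
        painter.drawRect(m_rubberBand);
    }

private:
    QRectF m_rubberBand;
};

int main(int argc, char **argv)
{
    QApplication app(argc, argv);
    TouchRubberBand widget;
    widget.resize(400, 300);
    widget.show();
    return app.exec();
}

 The missing piece is getting the same raw points through the QML
 bindings, which is exactly the limitation Marco describes.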

 Having more precise multitouch gestures is quite important if we want to
 offer a truly satisfying user experience, especially for those users who
 are used to them from other devices / OSes.
 For example, although I was happy that the pinch gesture worked in the
 Active browser at all, it was immediately obvious how much more helpful
 it would be if it actually registered where I touched. Usually I want to
 zoom in on a particular point of the website instead of just zooming in
 and then having to pan to what I actually want to see.
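
 If I remember correctly, QPinchGesture at least reports a centerPoint
 even though it hides the two individual touch points, and once a focus
 point is known the math is simple: rescale, then shift the content
 offset so the point under the fingers stays where it is. A tiny
 self-contained sketch (hypothetical Viewport struct, not the actual
 browser code):

#include <QPointF>
#include <QDebug>

// Toy viewport: 'offset' is the content coordinate shown at the view's
// origin, 'scale' is the zoom factor. Hypothetical names, just a sketch.
struct Viewport
{
    QPointF offset;
    qreal scale;

    // Map a position in view coordinates to content coordinates.
    QPointF toContent(const QPointF &viewPos) const
    {
        return offset + viewPos / scale;
    }

    // Zoom by 'factor' while keeping 'focus' (view coordinates) fixed.
    void zoomAt(const QPointF &focus, qreal factor)
    {
        const QPointF anchor = toContent(focus);
        scale *= factor;
        // Re-solve the offset so toContent(focus) == anchor still holds.
        offset = anchor - focus / scale;
    }
};

int main()
{
    Viewport vp;
    vp.offset = QPointF(0, 0);
    vp.scale = 1.0;

    // Pinch centred on (100, 100): after zooming in 2x, the content point
    // that was under the fingers must still be under (100, 100).
    const QPointF before = vp.toContent(QPointF(100, 100));
    vp.zoomAt(QPointF(100, 100), 2.0);
    const QPointF after = vp.toContent(QPointF(100, 100));
    qDebug() << before << after; // prints the same point twice
    return 0;
}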

 We have to remember that we can never rely solely on multitouch, though,
 since we do not know if all devices Plasma Active will run on have
 multitouch capability. So it should be offered for convenience as often
 as sensible, but there always has to be an alternative method.

>> I would love to see our development become more and more
>> gesture-oriented - I think we already have cool features like the
>> dragging from the status bar and the wheely movement of our switcher...
>> but maybe there is even more potential. Would be a nice topic for next
>> week's workshop, what do you think?
>
> Yep, me as well.
> Overall, I didn't try to push many multitouch gestures; the reason is
> just technical right now.
> The bindings are still not API stable, so just for having pinching in
> the web browser and in the image viewer I have to put the whole support
> in a patch just for the MeeGo build to make the thing run everywhere.
> With a stable gesture (and raw touch events) API we will do way more
> crazy stuff ;)
 
 +1. Gestural interfaces on tablets ftw! ;)

 Cheers,
 Thomas

