Touchpad Gestures - Experimental
Jens Reuterberg
jensreu at kolabnow.com
Sun Nov 27 17:23:37 UTC 2016
Hell and high water have been passed - here is the new and improved version!
Martin G and whoever else is interested, I would love to chat about it
tomorrow on IRC (I may be late getting up since I've been sick for a few days, so my
wake time is a tad too late, but I will be there with a sunny disposition).
/Jens
On Saturday, 26 November 2016 13:44:50 CET Jens Reuterberg wrote:
> It will, come hell or high water be finished tomorrow
> (Got three other projects that needs to be worked on today)
>
> Miscommunication is a bummer but it happens - no hard feelings or anything I
> hope. Let's laugh about it when we're older :)
>
> On Friday, 25 November 2016 16:23:44 CET Marco Martin wrote:
> > On Thu, Nov 24, 2016 at 3:44 PM, Martin Graesslin <mgraesslin at kde.org>
>
> wrote:
> > > Hi Jens,
> > >
> > > I'm extremely sorry, there must have been a huge misunderstanding.
> > >
> > > Most of the gestures you came up with are not at all supported. We only
> > > get
> > > the following touchpad gestures reported:
> > >
> > > * multi-finger pinch/rotate gesture
> > > * multi-finger swipe gesture
> >
> > There has been a problem in communication for sure. but, whatever
> > happened, is happened, let's try to do more back and forth in the
> > future.
> > In this particular case, how do we proceed?
> > Jens, can you adapt the document only treating cases regarding multi
> > finger pinch and multi finger swipe?
> >
> > --
> > Marco Martin
-------------- next part --------------
Touchpad Gestures
New Logic:
Forming groups of the current actions we need to cover. These are not "all possible" actions, but they are the ones we should consider, to be removed only with good reason. Please remember that they are all experimental, tentative design and not in any way a final design choice.
The idea is still to create a logical language of touchpad gestures with connections to the touchscreen notes taken earlier. The reasoning for that has already been presented.
The action groups:
These are sorted into Window Control, which is when the user wants to handle her windows on the desktop; Desktop Control, which handles all actions for the desktop and session; and Mouse Control, which covers the actions we already handle and need to keep handling, as well as in-app zoom. Further is a side idea: a way to leave some gestures open for the user to define.
Again: none of this will scream "wow" - these are all meant to be USEFUL more than exciting. We should keep this in focus: our goal is to be functional and flexible. A key part is the user's right to edit the shortcuts, for now only through a text file instead of a GUI setting, considering the state the System Settings are still in.
------------------Relevant Notes---------------------
To simplify the actions with fewer possible gestures, the one issue is the three-finger tap to activate Middle Click. By exchanging it we liberate a lot of different gestures - BUT this MUST, MUST be settable by the user, otherwise we are forcing a gesture on the user.
------------------Actions Suggested---------------------
WINDOW CONTROL
Close Active Window
Maximize / Un-maximize
Alt-Tab, Swapping Window
Window Spread -> Leads to further controls
Move Window
Resize Window
DESKTOP CONTROL
Desktop Grid -> Leads to further controls
Switch Desktop (Move between desktops)
Lock Screen
Show Desktop
MOUSE ACTIONS
Single Click
Right Click
Middle Click
Zoom/Unzoom
Scroll Horizontal/Vertical
FURTHER
Specific App Launch or App Menu
------------------Gestures Available---------------------
After the last email some miscommunication was cleared up. These are the gestures available - one of them is NOT clear and needs an "ok" from Martin G.
TAPPING
(touch and release, noticing on release)
Note on tapping: there are some issues concerning the logic of the touchpad. For example, touch-and-hold on most touchpads does not at all mimic holding the left mouse button; instead that is a deep press. The problem is that we don't have any test machines without this "deep press". So in most cases double tapping, for example on the window border, triggers a move effect, but single tapping and dragging does not. This seems fairly natural. The question, of course, is what happens on a machine that does NOT have this "deep press". Information on this would be interesting.
Tapping also has sub-gestures. Double taps, for example, where you tap and release twice in quick succession. More interesting is the double tap + hold, where the user double taps, for example, the window border but holds the second tap - tap, release, tap and hold - to activate moving the window.
There is also the issue of touchpad areas - where the top right is reserved for right click, for example. All this shows the complexity of the touchpad and the issues we WILL run into. So we should treat the first implementation as something that needs testing IRL.
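As a sketch of the timing logic involved, the tap variants above could be told apart from raw touch down/up timestamps. Everything below is an assumption for illustration - the event names, thresholds, and classifier are invented, not any existing KWin or libinput API:

```cpp
// Hypothetical sketch: classifying tap, double tap, and
// double-tap-and-hold from touch down/up timestamps.
// Thresholds are invented placeholders, not measured values.
#include <cassert>
#include <string>
#include <vector>

enum class Event { Down, Up };

struct TimedEvent {
    Event type;
    int timeMs; // timestamp in milliseconds
};

// tapMaxMs: how long a touch may last and still count as a tap.
// gapMaxMs: maximum pause between the two taps of a double tap.
std::string classify(const std::vector<TimedEvent> &ev, int nowMs,
                     int tapMaxMs = 180, int gapMaxMs = 250)
{
    auto isTap = [&](size_t i) {
        return ev[i].type == Event::Down && ev[i + 1].type == Event::Up &&
               ev[i + 1].timeMs - ev[i].timeMs <= tapMaxMs;
    };
    // Single tap: one quick down/up pair.
    if (ev.size() == 2 && isTap(0))
        return "tap";
    // Double tap: two quick taps separated by a short gap.
    if (ev.size() == 4 && isTap(0) && isTap(2) &&
        ev[2].timeMs - ev[1].timeMs <= gapMaxMs)
        return "double tap";
    // Double tap + hold: tap, release, second touch still held down
    // past the tap threshold (the variant used above to move a window).
    if (ev.size() == 3 && isTap(0) && ev[2].type == Event::Down &&
        ev[2].timeMs - ev[1].timeMs <= gapMaxMs &&
        nowMs - ev[2].timeMs > tapMaxMs)
        return "double tap and hold";
    return "unknown";
}
```

The point of the sketch is only that the three variants differ purely in timing, which is why the "deep press" question above matters: hardware that reports a press as a separate event would not need the hold branch at all.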
SWIPING
(touch, move, release, noticing on move)
PINCH/EXPAND
(touch, move relative to each touch, release, noticing on move)
HOLD/LONG PRESS
(touch, hold, release, noticing on hold)
NOT CLEAR (Check with Martin G)
Hold + Tap (N fingers touch, hold as another finger taps, noticing on taps)
------------------Suggested Combinations---------------------
TWO FINGER PINCH/EXPAND - Zoom/Unzoom in app
ONE FINGER TAP - Left click
TWO FINGER TAP - Right click
TWO FINGER LONG PRESS - Middle click
TWO FINGER SWIPE - Scrolling Horizontally and Vertically
TWO FINGER PINCH - In App Zoom
TWO FINGER EXPAND - In App Unzoom
THREE FINGER DOUBLE TAP AND HOLD ON WINDOW - Move Window. Clicking anywhere on an active window with three fingers should be comparatively harmless.
THREE FINGER EXPAND - Show Active Windows
THREE FINGER PINCH DURING SHOW ACTIVE WINDOWS - Hide Show Active Windows
THREE FINGER SCROLL - Move between open windows. This one is risky, the reason being that three fingers easily land on a touchpad by mistake, so a quick scroll could easily be triggered. We have all at some point scrolled with two fingers on the desktop by mistake, activating a wild desktop switch.
The reason for this action is because "Three Finger Tap and hold + tapping with a single finger" might not be possible as a gesture. IF it is, it seems like it should replace THREE FINGER SCROLL.
THREE FINGER PINCH ON WINDOW - Minimize (this one, like the four finger pinch below, is questionable but relevant, needs some fine-tuning to be functional, and may have to be removed)
FOUR FINGER EXPAND - Desktop Grid
FOUR FINGER PINCH ON DESKTOP GRID - Quit Desktop Grid
FOUR FINGER PINCH - Show Desktop
FOUR FINGER EXPAND ON SHOW DESKTOP - Untoggle Show Desktop
FOUR FINGER SWIPE - A scrolling motion, horizontal and vertical, to move to the desktop in that direction (using the normal scrolling motion, where the direction you pull FROM is the direction it scrolls to)
FIVE FINGER PINCH - Lock Screen (Yes, this is a personal favourite, as I think locking your screen on a laptop should be an attractively EASY action, considering how risky it is to leave your laptop in a public place)
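For illustration, the suggested combinations above amount to a lookup from finger count plus gesture primitive to an action. A minimal sketch, with all type and action names invented for this example - nothing here is KWin's actual gesture API:

```cpp
// Hypothetical sketch: the suggested combinations as a lookup table.
// Gesture primitives and action strings are illustrative only.
#include <cassert>
#include <map>
#include <string>
#include <utility>

enum class Gesture { Tap, LongPress, Swipe, Pinch, Expand };

// (finger count, primitive) -> action, mirroring the list above.
const std::map<std::pair<int, Gesture>, std::string> kBindings = {
    {{1, Gesture::Tap},       "left click"},
    {{2, Gesture::Tap},       "right click"},
    {{2, Gesture::LongPress}, "middle click"},
    {{2, Gesture::Swipe},     "scroll"},
    {{2, Gesture::Pinch},     "in-app zoom"},
    {{2, Gesture::Expand},    "in-app unzoom"},
    {{3, Gesture::Expand},    "show active windows"},
    {{3, Gesture::Pinch},     "minimize window"},
    {{3, Gesture::Swipe},     "switch window"},
    {{4, Gesture::Expand},    "desktop grid"},
    {{4, Gesture::Pinch},     "show desktop"},
    {{4, Gesture::Swipe},     "switch desktop"},
    {{5, Gesture::Pinch},     "lock screen"},
};

std::string actionFor(int fingers, Gesture g)
{
    auto it = kBindings.find({fingers, g});
    return it == kBindings.end() ? "none" : it->second;
}
```

One thing the table makes visible is how few (finger count, primitive) slots remain free, which is why gestures like the three-finger middle-click tap are worth reclaiming.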
------------------Rejected Gestures---------------------
This is a short walk-through of the rejected gesture combinations and the reasoning why - if that reasoning is false then of course the gestures may be of interest. Otherwise it may give some hint of the background logic of the whole gesture set.
REJECTED
ONE FINGER HOLD + ONE FINGER TAP = Right Click
ONE FINGER HOLD + TWO FINGER TAP = Middle Click
(With both of these, the focus of the Middle/Right Click is where the first finger is held down. The issue with both is that single-finger click-and-hold would become meaningless on its own, which would make the whole experience very experimental in nature, as that gesture would then have to be replaced.)
------------------Final Notes---------------------
These are ALL experimental, but the trickiest for us are probably the pinch movements that toggle Show Active Windows and Show Desktop. The fact that the same gesture does two different things in different scenarios is NOT an elegant solution and may be annoying to our users. But they CAN be good, and need testing before being rejected out of hand.
The way tapping works on touchpads is a relevant issue, and something that hasn't been properly explored. The layers of touch, for example, are a fascinating detail, one which could be used more extensively, with "deep press" and "tapping" doing different things in different scenarios - in much the same natural way we move windows by deep pressing the window border, and gladly accept double tapping the window border instead if we choose that.
Since the difference is so very clear - the tactile sensation for the user is obvious - it can be useful.
Tap, hold, then press into the touchpad, for example? Essentially a double tap, but done as a very clear "drill down" gesture.
Finally, we have to consider how a user should set all these gestures herself. Plasma is flexible, and no matter HOW annoying that flexibility can be, it is currently Plasma's main selling point that a user can edit things. That, on the other hand, does not mean the user needs handholding. If she wants to edit, and the options are available in well-documented text files, that is still very much acceptable as an in-between solution.
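As a sketch of what such a well-documented text file could look like - the file name, group name, keys, and values below are all invented for illustration, assuming an INI-style format like other KDE config files, not anything that ships today:

```ini
# Hypothetical ~/.config/touchpadgesturesrc -- illustrative only.
# Each key names a gesture; each value names the action it triggers.
# Setting a value to None would free that gesture entirely.
[Gestures]
TwoFingerLongPress=MiddleClick
ThreeFingerDoubleTapHold=MoveWindow
ThreeFingerExpand=ShowActiveWindows
FourFingerExpand=DesktopGrid
FourFingerPinch=ShowDesktop
FiveFingerPinch=LockScreen
```

A file in this shape would let the user swap, say, the middle-click binding without any GUI work on our side, which is all the interim solution needs.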