Monday, March 16, 2015

4 design tips for edge gestures

This post continues my earlier article on harmonizing touch screen gestures.

Look closely at recent apps and mobile operating systems: swiping over the screen edge to trigger a navigation-related action is becoming increasingly popular. No wonder, as edge gestures are a fast and comfortable way to interact with a mobile app or device. They have huge potential, but only a fraction of it has been put to use.

It's not all unicorns, rainbows and marshmallows, though. These hidden gestures come with major drawbacks in discoverability and variation in use. Here are a few tricks for improving the way edge gestures are put to use.

1. Keep it simple, consistent and fit for one hand.

I can't stress this enough. The less there is to memorize, the faster it is to master. Eliminate exceptions and special conditions where an edge gesture doesn't work, or does something completely different. Focus on robust gesture recognition, and let physical repetition do the work for you. It's like training in any sport, so make sure the training conditions are obvious to the user. Interfaces are just tools, and a good tool needs to be simple.
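
As a concrete illustration, this "one unconditional rule" can be as simple as: a gesture counts as an edge gesture only if it begins inside a narrow band along a screen edge. A minimal sketch, where the function name, the 20 px band width, and the screen dimensions are all made-up illustration values, not any platform's API:

```python
# Hypothetical edge hit-test: a touch-down is an edge gesture candidate
# only if it lands within a fixed band along one of the four edges.
# Keeping this a single, exception-free rule is what makes it memorizable.

EDGE_BAND_PX = 20  # width of the touch-sensitive band along each edge (illustrative)

def edge_hit(x, y, width, height, band=EDGE_BAND_PX):
    """Return which edge (if any) a touch-down at (x, y) belongs to."""
    if x < band:
        return "left"
    if x > width - band:
        return "right"
    if y < band:
        return "top"
    if y > height - band:
        return "bottom"
    return None  # ordinary content touch

print(edge_hit(5, 300, 540, 960))    # left
print(edge_hit(270, 955, 540, 960))  # bottom
```

Because the rule has no special conditions, the user's hand can learn it through repetition alone, exactly as the training analogy above suggests.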

2. Think of edges as physical buttons.

Edge gestures work especially well for controlling an operating system. Each edge can be harnessed to perform a related family of actions. For example, one edge can do things related to the power key (turn off the screen, change profiles, and so on), while another controls the application window (minimize, close, windowed mode). As on the majority of other devices, notifications and the like could reside on a third.

So, with only three edges, an extremely competitive and simple interface can be built. If more than one action starts from the same edge, take absolute care in fine-tuning the feedback for each. The finger movements need to feel different for your brain to associate them with their corresponding actions. Travelled distance, change in direction, speed, and physical location are the usual suspects for separating them. Keep it simple, and prioritize the most used action.
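
A hedged sketch of separating two actions that start from the same edge by travelled distance, one of the "usual suspects" above. The threshold and action names are made-up illustration values, not Sailfish OS code:

```python
# Two actions on one edge, separated by how far the finger travels.
# The most frequent action gets the short, easy gesture (prioritized);
# the rarer, destructive action requires a longer, deliberate drag.

MINIMIZE_MAX_TRAVEL_PX = 150  # illustrative threshold

def classify_edge_swipe(travel_px):
    """Map a completed edge swipe to an action by its travelled distance."""
    if travel_px < MINIMIZE_MAX_TRAVEL_PX:
        return "minimize"  # short flick: the most used action
    return "close"         # long drag: deliberate, harder to trigger by accident

print(classify_edge_swipe(80))   # minimize
print(classify_edge_swipe(400))  # close
```

The same pattern extends to the other separators mentioned (direction change, speed, starting location); the key point is that each action's gesture must feel physically distinct.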

The relation to hardware buttons helps people understand the idea much better. Memorizing actions becomes faster and more natural when there's a familiar relation between them. The reason some actions are not always available also becomes more obvious that way. Take application window controls as an example: they are only available when there's an application window in the foreground.

3. The edge feedback is everything.

Just like with any other interactive element, when the user interacts with the edge, there should be appropriate feedback on it. This is especially important for the many users not familiar with interactive edges. Gestures in general can be performed so fast that it's a good idea to keep the interacted edge highlighted even after the gesture has been successfully completed.

If your design uses gestures to control the application content, it's advisable to have different transitions for edge gestures and content gestures; the difference is valuable for telling the two apart. After all, if edge gestures control system-level navigation, their feedback should differ from application-level navigation. Let's look at the hint animation for unlocking a smartphone as an example.

If you want to use an edge gesture to unlock, you should direct attention to the interaction area. If everything moves (right side example), it implies parallel navigation instead (like going through images in a gallery). If you have plans for any lock screen controls (phone call controls, maps, flashlight, audio playback), you most likely should reserve the center-screen flick for such actions.
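
The routing rule above can be sketched in a few lines: gestures that begin on an edge get system-level feedback with its own transition, while gestures starting on content pan the content itself. All names and values here are illustrative assumptions:

```python
# Route a horizontal swipe by where it started: edge band -> system
# navigation (distinct transition), content area -> content panning.

EDGE_BAND_PX = 20  # illustrative edge band width

def dispatch_swipe(start_x, screen_width):
    """Decide which feedback a swipe gets, based on its starting point."""
    if start_x < EDGE_BAND_PX or start_x > screen_width - EDGE_BAND_PX:
        return "system-navigation (edge transition)"
    return "content pan (e.g. next image in gallery)"

print(dispatch_swipe(4, 540))    # system-navigation (edge transition)
print(dispatch_swipe(250, 540))  # content pan (e.g. next image in gallery)
```

Keeping the two transitions visually distinct is what lets the brain build the association between "started on the edge" and "controls the system".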

4. Edge notifications and toggles.

This kind of edge indication can be used in several different ways to draw the user's attention. It can be an indication of new content, or simply a reminder to a new user that an edge gesture exists. The trick is that it doesn't introduce a tappable object on top of a keyboard or other interactive elements.

Since the user cannot control when someone sends them a message (or when the system decides to emit one), it's annoying when a banner appears on top of a link mere milliseconds before the user touches it.

However, if notification access is tied to the edge interaction, your tap will be registered by whatever it was intended for. The banner duration can also be shorter, to avoid banners loitering on your screen for too long; you know where to check those notifications anyway.

Finally, if you want your edge to function as a toggle, the edge indication should also behave as one. This means that subsequent swipes across that edge turn the edge indication "on" and "off" again. Just like tapping on a regular toggle switch would.
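
A sketch of the toggle behaviour just described: each completed swipe across the edge flips the indication, exactly like tapping a switch. The class and method names are illustrative:

```python
# Edge indication that behaves like a regular toggle switch:
# every completed swipe across the edge inverts the state.

class EdgeToggle:
    """Tracks the on/off state of an edge indication."""

    def __init__(self):
        self.on = False

    def swipe(self):
        """One swipe across the edge inverts the state and returns it."""
        self.on = not self.on
        return self.on

panel = EdgeToggle()
print(panel.swipe())  # True: the first swipe turns the indication on
print(panel.swipe())  # False: swiping again turns it off
```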

The cool thing about edge toggles is the effortless way to control them, compared to traditional notification panels, which you need to close with an opposite edge gesture that requires considerable thumb mobility to perform with a single hand.

With these tips, you should be able to considerably increase the benefits of edge gestures, while avoiding the common gesture pitfalls that plague major operating systems and applications.

Thanks for reading and see you in the next post. In the meantime, agree or disagree, debate or shout. Bring it on and spread the word.


  1. The N9 was my favorite experience in gestures (never tried a Jolla).
    But thinking a little more about today's phablets, I can't imagine how gestures can scale to that size, or even to tablets, to make one consistent UI for all.
    It's not easy to handle a phablet with one hand, so I assume the correct way is one hand to hold and the other to operate.
    I like the iPad's four-finger pinch gesture for returning home; maybe it could be done on a phablet with a three-finger pinch.

  2. Gestures actually make it pretty trivial, since there's no need for actual touch targets (small buttons) to go places. It's not completely trivial, since gesture recognition has to be built to match the human hand's range of motion, instead of resolution or display size.

    If a UI is designed to be operated with one hand, it doesn't mean it can't be used with two. It's just a sanity check for me, because it really helps two-handed use as well (either hand being equally good for supporting the device while the other one drives).

    The biggest advantage of edge gestures over a multitouch pinch is that the supporting hand can perform them (there's a display edge right next to it). If you haven't tried Jolla or Sailfish OS at tablet size yet, you're in for a nice surprise :)

    Thanks for reading and stopping by to comment :)

  3. I guess you know my opinion well, Jaakko, as in

    I totally agree that gestures are here for the "fast and comfortable way to interact", and I want the phone to help me be faster and move my fingers in a more comfortable [for me] way, not in a more precise [for the computer] way. I.e. the harder an area is to reach, the more the OS should expect e.g. an edge gesture to be not horizontal, but the horizontal-to-down curve that naturally happens when I try making a horizontal one.

    As for Sailfish OS, the top-down closing gesture is particularly hard to reach with my thumb. Without moving the holding hand I can reach only the top-right edge, and even that is not always easy. It would have been great if the OS could guess the top-down gesture when I am trying to make one, even though I am actually starting from the right edge with a curve that very quickly becomes a top-down line.

    Makes some sense?

    1. Hi Artem,

      Yes, I still remember our talk at Tampere (and reading your post) about this topic. You're actually one of the people who inspired me to write this post. So thank you :)

      Using the natural movement range of the thumb as a design driver makes sense. The difference between a closing and a minimizing gesture should be how it "feels" to perform (different conditions are met), instead of how hard it is to reach (different gesture starting locations).

      Thanks for taking your time to comment. Take care :)

  4. Gestures are great; however, as screen sizes get larger, my thumbs cannot reach the edges of the phone comfortably, which undermines the whole gesture principle. Take the N9 and the Jolla phone as examples: the actual screen on the Jolla phone is about 64% of the surface of the phone.

    PS, I do not have small hands; same story for my trousers' pockets!

    1. Hi, I agree it's unfortunate that phone screen sizes are literally getting out of hand. I hope smaller screens make a return someday.

      The gesture control, as seen on SailfishOS today, is only one way to implement gestures, and it can certainly be improved.

      The core idea of using edge gestures is to make sure the most frequently used actions are next to your thumb. It's never possible to reach more than two edges with acceptable comfort (especially as the device gets bigger). Even with huge hands, reaching all four edges without adjusting your grip is hard :)

      Thanks for commenting, take care :)

    2. I sure hope the swipe-down gesture to close an app isn't going to be replaced with the ambience menu. As you said, features that are frequently used should be simple and easy to access. The truth is that closing an app is the more common of the two (I almost never change my ambience, maybe once or twice a month, but I close apps more like 50 times a day!) :)

    3. Actually, in your case, if you close an app around 50 times a day, the top edge is honestly the worst place for the close gesture, purely from the point of view of human hand ergonomics.

      A common action like that should be much easier to perform with one hand. It should be as comfortable to do as minimizing an app, but with added protection against accidental closing. You would need to pay attention to it for a few days before it becomes second nature.

      Also, it's tempting to call the added top edge menu an "Ambience menu", since that's pretty much the only thing you can see right now :)

      What people should call it is the system or power key menu, for lack of a better name. Think of it as the replacement for the power key while the display is on. You can leave an app in the foreground and just blank the screen, so you can resume it again easily; or you can keep the gesture active to get a tappable option to turn the device off. The original idea of the top edge replacing the power key globally like that is completely missing from all the demos, but let's see how the development goes.

      If you've ever held a tablet in a folder/case and wondered where the power key is so you can turn off the display, the power key menu is exactly for that.

      Thanks for commenting and stopping by. Come back anytime :)

  5. Your post is very helpful for developers designing better mobile apps.

    1. Hi Sarah,

      Great to hear it's useful to others as well. Thanks for reading and commenting. Take care :)

  6. Hello Jaakko,

    I don't really fully agree with you that top edge activity is hard to use.
    If the user's main activity happens at the bottom, so that their hand holds the device at the bottom, then yes.
    But if you observe that their activity is mostly at the top of the device (some applications push the user toward this, with menus or whatnot), then the edge gesture from the top becomes easy and the bottom one hard, because the hand has moved further up.

    What I mean with my remark: maybe it would be judicious to make more observations about how devices are used. How the hand moves into position for a typical use of functionality, and makes some gestures harder than others.

    For my part, I have sometimes observed that the top one is hard, because I was using the keyboard; and sometimes the bottom one gets hard, because I had just used the status functions and some menu at the top...

    I think if we want to eliminate/limit the top edge gesture, controls and interactive functionality should be reduced at the top...

    best regards

    1. My observations are based only on the current Sailfish software,
      because it is currently heavily discussed in the Jolla community.
      I know that your subject deals more with the universal concept.

    2. Hi cemoi71,

      Sorry about the late reply, trying to do some other things during the weekend.

      Yes, that's correct. The only screen edge we can accurately know the user is able to comfortably reach is the one he/she is using to hold the device. Anything outside that becomes speculation. That's why going to the home screen (minimizing the app) is assigned to that edge gesture.

      Now, because we don't know how the user is holding the device, nor how large devices will be in the future, I'm more and more convinced that the close functionality should be accessible through the edge we know the user can interact with without adjusting their grip. It's the only action, in addition to "Home", that is only available when an app is in the foreground. Hence, these two make a good match.

      That would leave the top edge gesture for more global behavior, like the different power key presses (single and long press), in the SfOS 2.0 design. Think about a physical power button as an example: how strange would it be to force the user to exit an app before the button worked to turn off the display (the SfOS 1.0 gesture design is like this)? This is just from the user perspective. The more exceptions and context dependencies there are in edge gesture behavior, the harder everything is to understand, and ultimately to use as well. This is pretty much how the market reacted to SfOS 1.0 when it shipped. If something is complicated, it's not useful.

      I wouldn't like to repeat that mistake. My OS design goal is not to target any particular device or display size, but how our hands and minds work in different situations (mobile or stationary).

      Thanks again for your thoughtful comments. Keep them coming :)

    3. Hi Jaakko,

      no problem, we both have lots of things to do.
      I don't find that you answered late; I find it quite quick (I have some things to do too).

      So this is quite an interesting theme.
      The problem seems to be particular to the phone,
      because people are expected to use it with just one hand, and I know that is Jolla's wish.
      For the tablet it seems to me it's not a problem, because you have no choice but to use both hands...
      Is that right?

      I have a question; maybe it is a stupid one.
      If the problem mostly concerns the phone, could we use some gesture actions with the corners?
      Something to open with a swipe from the bottom-left corner to the middle of the display,
      and something to close with a swipe from the right edge to the bottom-left corner.
      I don't know how to combine that with a bigger display like the tablet's; maybe with gestures from edge to edge...

      have a nice day

    4. Hi cemoi71,

      One handed use is not a goal in itself. It's a direct benefit of natural interaction design. Treating natural interaction design as the goal means that an interface is easy to use with a single hand, but it logically doesn't exclude two-handed use.

      "One-handed use", as a term, is very loosely defined. It's not about making every use case there is one-handed; that would be very hard, because we don't know every use case there is. It's also optional: you can use the device with one hand, but nobody is forcing you to. Still, even with two hands, it will feel more natural than the competition.

      What we do know is that almost everyone will use their hands to operate a touch screen device. Hence, the point of one-handed use is to avoid unnecessary hand grip adjustments during the most common interactions:
      - minimize or close app
      - go back to previous page
      - open attached page for more options
      - interact with screen content

      This, alone, is a huge target. It will require a lot more awareness and attention to get right.

      What you suggest is taking things to the next level. It's not at all a stupid question or idea, but something worth thinking about.

      It might also have to do with application controls, which are problematic most of the time, although sensible mobile app design can reduce the need to interact with the top area of the screen.

      Thanks for the comment, and have a nice day as well :)

  7. Hmmm, I know what you mean and what you want to achieve, but it is pretty hard. User behaviour seems different to me when the display is a different size.

    And you know what? I don't know if you and your team have already done it, but maybe it would be good to organize an event with the public (potential customers) and many devices. It could be organized in a pub, and designers and people could talk together about the specific things needed on both sides.
    I think discussing it only over a mailbox is quite difficult. Sometimes contact needs to be made...

    1. We've done some user testing in the past and got feedback on different UI prototypes. It would be great to have a get-together in a bar or similar informal setting. My biggest worry is that people easily confuse what is natural from a physiological and neurological perspective (the boring stuff) with what we're familiar with. Our brains build workarounds to overcome the limitations of things we use daily, and we're totally blind to this. That's the only reason we can use smartphones with button navigation: our brains don't question them, they just work with what they have.

      Still, it would be cool to talk about these things with anyone who's interested. Thanks for suggesting it; I'll forward the idea and hope something comes of it. Or we could do it at a Mer/Nemo meetup at some point if there's demand.

      Good stuff, thanks for that :)

    2. Hi Jaakko, I understand what you mean.
      Even though I'm not in the design business, I feel it's not so easy to handle.
      For my part, here is my experience with the Jolla phone in a few words.
      I find the design really great. Actions are mostly based on gestures: relatively simple and intuitive. The close gesture, swiping down, is the most intuitive one. Swish down and ciao! That's cool. Naturally, the fact that the display is so long sometimes makes it complicated with just one hand.
      I just want to say that I've accommodated myself to doing it with the other hand. That's not a problem for me.
      Since I heard about this design issue, I recognize that for some people it is not so practical, while for others it is indeed great.
      You know, the human is a creature of ritual. All of life is organised around habits and rituals, even when we want to discover other things.
      Constant new experimentation is not good, but having the feeling that every time could bring something new, and having access to that, is great.

      What I mean with my small arguments: forcing someone to do or have something is negative, but giving lots of possibilities to do a particular thing, and letting the user choose how they want to do it, makes the experience greater.

      What I mean by that: if it is possible from the software side, without too much code or device power use, make room for both types of gesture (in our case there are two), configurable (enable/disable) in the system settings, to give the user the freedom to choose what is best for them.
      For my part, I want to keep using the close gesture from the top.
      Maybe the edge one could benefit me; I could test it, but I don't want to be forced to use it. Otherwise, if there are good arguments for it, they should be communicated concretely and with transparency, even if that reveals some further strategic plans (here Jolla should be pretty careful...).

      Then, after a while, someone could run an analysis and ask all the Jolla users which one they use, to decide whether it stays in the code or not.

      Do you know what I mean?

      It's the same for the cover actions. Before, two cover actions were possible; now it is reduced to just one... sad...
      I had expected it to become more configurable.
      It is difficult to accept a software evolution where something is taken away. It gives a sensation of going backward instead of forward...

      Have a nice weekend

    3. It is not a criticism; it is just a constructive point of view.
      I am glad to discuss this point with you. I just hope you understand me well; sometimes my English cannot really express what I mean, and that could lead to misunderstanding. Please ask if anything I say is unclear.

    4. Hi cemoi71,

      Hmm, the Blogger commenting system decided to eat my comment :( Let's try again. Sorry about the shorter and more condensed reply.

      Anything is possible; it's just software. Making things customizable is only a matter of someone doing it. What is sad in pretty much any modern software project is that interfaces are unnecessarily complex, which results in adding customization options, which results in even more complexity.

      The more complex the interface part of the software stack is, the slower it is to maintain and develop in the future, taking resources away from functionality work (listening to music, watching videos, browsing websites, writing emails, taking photos etc.), the reasons people own the device in the first place.

      The slower the OS is to develop, the less likely it will become a real alternative to iOS, Android or WP. The longer it takes to get the OS stable, reliable and functional; the more users will end up switching away from Sailfish OS.

      However, these are just my personal observations and predictions, and they should be treated as such. I'm sure SfOS will end up with a ton of customizations in the end, because all the software projects that came before it were also loaded with tons of settings affecting everything. And if we know anything about history, it's that people are great at repeating it :)

      Thanks again for your comments and patience. These are very interesting topics, and I will most likely post something related to FOSS development and UX challenges at some point. Take care :)

    5. Hi Jaakko,

      yes, you are right. I know this complexity problem from software development; I'm a software developer too, working really close to the software.
      It is a fine exercise for a software manager to analyse and choose a well-balanced development path between good user experience and good code maintenance and planning.

      I don't know how it is organized at Jolla. I don't want to change anything in it or criticize it. I just have ideas, and want to bring my experience to help. In the end I know the software managers decide whether it goes in or not. I'll enjoy it if yes, but understand if not.
      What I know (better said, guess) is that there are many teams for different parts of the OS. Maybe Jolla could make an add-ons team, which handles some high-level parts of the software (mostly GUI parts that are easier to maintain and should not have much impact on the core). This team could offer other experiences to the user, bringing some old elements back as an add-on...
      What I mean by that is not to bring something new to the GUI,
      but something with a limited development-team engagement.
      It could make some old software design parts, beloved by a good share of the users, possible again. I think that, done this way, it should not be so hard to maintain, because the good and verified old source code comes back as an add-on which will no longer evolve, and its impact is reduced because it is an add-on.

      I know that from your perspective as a designer this is not interesting, and maybe quite foolish, because it's like looking backwards and seems to go against evolution.

      I think I fully understand the problem you have,
      and it seems not easy to solve,
      because some users are quite satisfied with some parts of the OS while others are not.
      The timing for the transition, between v1.x and 2.0, is not bad I think, but maybe too short.
      I just wanted to discuss the subject, give ideas, and ask about possibilities.
      Maybe you and your team should talk a lot with the dev team, at deeper levels too, to get a good structure that allows more agile development at the high level as well. The add-on or plugin interface is, at first sight, maybe not a bad idea...

      Just one point now: if I annoy you with something you've already heard, or you think I'm repeating myself, you can tell me directly, no problem; you don't have to be too careful.
      I just have in mind that your blog is for discussing points and ideas, and maybe you could bring some of them to your colleagues or the Jolla devs (maybe I am wrong).
      I know there is the Together site, but sometimes it is better to speak directly with the people who are more closely involved.
      And in that spirit you can stop me and send me to a better-suited place for this. No problem, I won't take it badly ;-)

      have a nice day

    6. Hi Jaakko,

      I really understand your point of view about software complexity when some functions are activated or not through parameters.

      After thinking for a while, I have the feeling that there is somehow not much possibility of solving this issue other than with something like that.
      The issue depends strongly on how big the display is, and on the user's physiology: long or short fingers.
      With the tablet, all people are on the same level.

      With the Jolla phone, some have short fingers, and the close gesture of swiping down is really uncomfortable.
      For others it rocks, because their fingers are long enough, and their hand may be wide too.

      For my part, the phone is often on the table near me.
      I like holding my hand over it and making simple moves with my index finger. In this case, the swipe down to close is great for me. Edge moves are only good when they are straight, not curvy (from the edge to the bottom or top), because curves are really inaccurate.
      Otherwise I hold it with one hand and make most of the gestures with the other.
      I have long fingers and a wide hand, and I notice that it is no problem for me, with two hands or just one, to make the current gestures (fully functional and comfortable). And I think edge gestures won't be a problem either.

      As for the Nokia N8 (referenced by a lot of users in recent weeks), it seems not to be a problem for anyone...
      it is a small one.

      I think software complexity is not really a real problem for this issue, but could be a problem people want to imagine.
      The old concept is already developed; the new one is waiting to be developed. And there are enough people who can use both. The only difficulty is having a good test process in place, run for each software release, to be sure everything still works as it should...

      Have a nice day.

      Best regards

    7. Hi cemoi71,

      Apologies for my late reply, we had a busy last week and I got sick after it.

      You're right, there are definitely ways to manage different configurations, but it's not what I'm worried about.

      The more we think about our own ways of using devices, the less capable we become of seeing anything outside them. The same happens when we get stuck on what other people/manufacturers currently do.

      If we focus on how a human hand works, and on what capabilities our brains have, the simpler the design and the end result will be.

      If you think about what every person in this world wants, you need very complex software to make everyone happy, because everyone likes different things. It's like making software that emulates every piece of software that has ever existed: a task that will not end well.

      On the other hand, if you think about human anatomy, we're all very much the same. So the design should target that, instead of what people want; after all, how could anyone want something they haven't seen or used before?

      However, anyone can use something that's designed for their anatomy, because there are no external elements involved. Yes, it involves learning, but people are amazing at that. If a design is based on someone else's taste, it feels wrong for you, and you have to change it to fit yourself.

      If a company chooses that settings route, there's no end to it. You and I are just a very thin slice of the world's population. The more people come over, the more settings are needed.

      Software like that cannot survive, as it's doomed to repeat what has already been done in the past. It will keep doing so until it slowly dies.

      It might or might not happen to SfOS. It's a very difficult thing to predict. If there's one thing I know for a fact, it's that software complexity never helps.

      If settings are needed, it's not designed for humans, but for someone's taste.

      Thanks for commenting, take care :)

  8. Just to come back to your subject (I don't want to pollute it further with lots of organisational ideas):
    what do you think about my idea of using the corners too?

    1. Oh, I'm sorry. I haven't had time to think much about corners, since there's a huge problem with people understanding the 3 edges in SfOS 1.0...

      And now SfOS 2.0 will feature 4 different functions (each edge does a different thing in a different context, except that sometimes they don't), so it will be tough to communicate things to the user :)

      I'll need to keep your proposal in mind though. Maybe all corners would have to function the same, so that it doesn't matter which corner you use, since not all corners are reachable. Just a thought.

      Thanks for the comment :)

    2. Yes, the problem with edges is huge...
      But you have a good discussion partner in Marc Dillon, because he is a software manager and a user who might use just one hand. That is an advantage, I think.

      You're right, just 2 corners are reachable, and one of them is maybe not so practical (too short for a long thumb).
      But if just one is comfortable, it could be used for a really specific and important event/gesture. Because the corner is small, it makes the gesture really precise and specific.
      Don't you think so?
      It's almost the same as those big red mushroom-shaped buttons for alarm events...
      Here it could be a gesture for a specific event, completely different from all the others, with its own prominence...

    3. This implies that corners might work better on larger devices, like tablets and desktops (mouse cursor). Let's see how SfOS develops after 2.0...

    4. You're right... I was thinking just of the phone format... :-(

  9. But don't let that get you down. Screen corner gestures could have a good use case in multi-window use, which isn't that practical on phones or wearable displays. I haven't gone through the large-screen windowing logic at a very detailed level, and I might have missed something where your suggestion would help :)

    1. Ok ... I don't know. I think you have (or could have) a better overview of the whole thing, because design is part of your life every day.
      I find it interesting to talk about. It was just an idea, but I can't really go deeper into it.
      I'll be curious to see if you analyse it one day, or post an update on your blog.
      Have a nice day...

    2. Always fun to discuss design with you. Corner interactions will be on my list when I get around to doing another round on edge gestures.

      Thanks for the reply. Hope you had a great weekend :)

    3. I mean, it's a pleasure for me too to discuss it.
      Have a nice day.
