
Tuesday, June 9, 2015

No more empty smartphone screens

Ever since I parted ways with my trusty Nokia 3310, the empty standby screens of many smartphones have felt cold, distant and useless in comparison.

Admittedly, various Windows phones and a handful of Android devices come equipped with features that make their standby screens appear far less dead. Credit must be given where credit is due.

No need for power or home key presses, display double taps or other conscious interactions. The moment they're exposed to the world outside the user's pocket, both the phone and its user are already one step ahead of everyone else. A digital extension of a human intention.


Sailfish OS also has a similar feature in development, which we call "Sneak Peek". It's not ready yet, but I've been trying it out for almost a year now. Somehow the feature always carried over through software upgrades, up until last week at least. I had to re-flash my phone, turning the feature off for good.

The sudden change in device behavior has left me staring at an empty screen more times than I'd like to admit. Looking and feeling like an idiot.

One step too much


Curiously enough, I realized that all the solutions I mentioned earlier had one important piece missing. They all focused only on what the user might want to see, but ignored where that would lead: what would people do next, after already holding the device in their hand, with the display showing relevant information?

Easy. You either want to interact with it, put it back in your pocket, or set it aside on a surface near you.

And the problem with everything we have out there today is that it all just creates an additional state between the display being completely off and fully on. A glance or active screen is shown first, before you can see the lock screen. If you want to interact with the device functions below, you first have to go through that extra screen. This throws away part of the potential gained by anticipating user intentions.

To allow user interaction, it would make more sense to automatically show the lock screen, without any added steps. The user would see the same information, interact with the lock screen controls, or continue to unlock their device.
Yes, it would require some adjustments to how the lock screen behaves. It might look something like these wildly conceptual images, created to support this post. Take them for their illustrative value.
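
To make the idea concrete, here's a minimal sketch of that display logic in Python. Everything in it is illustrative: the state names, the pocket events and the controller class are my own stand-ins, not any real Sailfish OS API.

```python
# A minimal sketch: wake straight into the lock screen instead of a separate
# glance/ambient state. Class and event names are hypothetical stand-ins.

from enum import Enum, auto

class DisplayState(Enum):
    OFF = auto()
    LOCK_SCREEN = auto()   # same lock screen, nothing more, nothing less
    UNLOCKED = auto()

class DisplayController:
    def __init__(self):
        self.state = DisplayState.OFF

    def on_pocket_exit(self):
        # Proximity sensor no longer covered: anticipate the user and show
        # the full lock screen directly -- no intermediate "glance" state.
        if self.state is DisplayState.OFF:
            self.state = DisplayState.LOCK_SCREEN

    def on_unlock_gesture(self):
        if self.state is DisplayState.LOCK_SCREEN:
            self.state = DisplayState.UNLOCKED

    def on_pocket_enter(self):
        # Back in the pocket: turn the display off again.
        self.state = DisplayState.OFF

controller = DisplayController()
controller.on_pocket_exit()
print(controller.state)   # DisplayState.LOCK_SCREEN, straight away
```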


Moreover, the appearance is secondary in the long run. How it feels in daily use is a much more interesting and valuable quality. At first, it might sound strange for the phone to behave like this, but let's look at what would happen if it did.


The first thing you'll notice is that you can get to whatever you're doing a bit faster. People use smartphones over 100 times a day, with the majority of those instances starting with manually turning on the display. With the manual part removed, less attention and accuracy is needed.

Second, the number of user errors would decrease, because nothing has been added. Every gesture and function works just the same way. It's the same lock screen, nothing more, nothing less. It's just working with you, not against you.

Worried about accidentally unlocking it? Don't be. Every lock screen has a built-in protection mechanism to prevent that, made famous by the "slide to unlock" slider on the first iPhone. We flick or swipe a long enough distance to get past it.
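
For illustration only, here's roughly what that protection boils down to. The 300-pixel threshold and the helper function are made-up values, not taken from any actual lock screen implementation.

```python
# A rough sketch of the built-in protection: an unlock only triggers when the
# swipe travels far enough. The threshold is an assumption for illustration.

UNLOCK_DISTANCE_PX = 300  # hypothetical minimum travel distance

def is_unlock_swipe(start, end):
    """Return True only for a deliberate, long enough swipe."""
    dx = end[0] - start[0]
    dy = end[1] - start[1]
    return (dx * dx + dy * dy) ** 0.5 >= UNLOCK_DISTANCE_PX

# A short accidental brush against the screen does nothing:
assert not is_unlock_swipe((100, 800), (120, 765))
# A full, intentional swipe across the screen unlocks:
assert is_unlock_swipe((100, 800), (100, 400))
```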

Finally, removing your device from your pocket becomes a much friendlier event.


Just take the phone out of your pocket and place it on a surface near you. The display will light up to greet you. An accelerometer inside the phone can tell whether you're holding it in your hand or it's resting on a table. Based on that information, it's easy to turn off the display sooner to save power.


Naturally, if the phone is on a table, the same sensor can be used to detect the user picking up the device. And for the cases where you don't want to pick it up, you're just a double tap away from whatever you need.
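
Here's a rough Python sketch of that reasoning, assuming we only get raw accelerometer samples. The stillness threshold and timeout values are invented for illustration.

```python
# A hedged sketch: a phone resting on a table shows almost no motion, while a
# phone held in a hand keeps jittering slightly. Thresholds are illustrative.

import math

STILLNESS_THRESHOLD = 0.15   # m/s^2 of jitter; hypothetical value
TIMEOUT_IN_HAND_S   = 30     # keep the display on longer when held
TIMEOUT_ON_TABLE_S  = 5      # dim sooner when resting on a surface

def motion_magnitude(samples):
    """Average deviation from 1 g across recent accelerometer samples."""
    g = 9.81
    return sum(abs(math.sqrt(x * x + y * y + z * z) - g)
               for x, y, z in samples) / len(samples)

def display_timeout(samples):
    """Pick a display timeout based on whether the device seems handheld."""
    handheld = motion_magnitude(samples) > STILLNESS_THRESHOLD
    return TIMEOUT_IN_HAND_S if handheld else TIMEOUT_ON_TABLE_S

resting = [(0.02, 0.01, 9.80)] * 10                   # barely moving: on a table
held = [(0.3, 0.5, 9.6), (0.1, 0.4, 10.1)] * 5        # slight hand jitter
print(display_timeout(resting), display_timeout(held))  # 5 30
```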

By now, I'm sure some of you have already wondered why not use a black background with colored text and icons on top. Well, it works great if there's no display backlight. If there is, too bad. I've illustrated the problem below.


The liquid crystals used to control the light passing through cannot block all of the backlight, resulting in a gray appearance instead of black. This is very visible at night, inside movie theaters, clubs and ancient dungeons.

Using a background image simply makes the issue less apparent (it can be turned off on AMOLED devices to save power). Also, a user-selected image is a much more personal option than someone dictating that it should always be black.
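
As a small illustration of that choice, here's a hedged sketch in Python. The PanelType enum and the image_disabled flag are hypothetical names, not real platform APIs.

```python
# On an LCD, "black" pixels still leak backlight, so a background image hides
# the gray. On AMOLED, black pixels are truly off, so dropping the background
# image saves power. All names here are illustrative assumptions.

from enum import Enum, auto

class PanelType(Enum):
    LCD = auto()
    AMOLED = auto()

def lock_screen_background(panel, user_image, image_disabled=False):
    """Return what to draw behind the lock screen content."""
    if panel is PanelType.AMOLED and image_disabled:
        return "black"      # AMOLED pixels switch off entirely: saves power
    return user_image       # default: personal, and hides LCD backlight bleed

print(lock_screen_background(PanelType.LCD, "holiday_photo.jpg"))                          # holiday_photo.jpg
print(lock_screen_background(PanelType.AMOLED, "holiday_photo.jpg", image_disabled=True))  # black
```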


Making smartphones anticipate our needs is not rocket science. Especially when it comes to the lock screen scenario that we manually go through almost 100 times a day anyway. It's much more about seeing past our past experiences. If you see through them and get a taste of what things could be, it's going to be difficult to go back.

You'll soon realize how passive most smartphones are. As if they didn't have the information available to anticipate the most basic thing we do. Once again, you've been staring at an empty screen. Looking and feeling like an idiot.

Welcome to the club.
 

Thanks for reading and see you in the next post. In the meantime, agree or disagree, debate or shout. Bring it on and spread the word.
 

Tuesday, November 11, 2014

Why do people get into fights with computers?

The internet is full of stories about the volatile relationship between people and computers. That's because, by nature, the two sides are completely foreign to each other, separated only by a thin layer called the user interface. It communicates the state the software is in, and provides methods for the user to control both the software and hardware features of the computer.

To put the role and importance of the user interface into perspective, I'll compare it to an intergalactic interpreter. Its job is to prevent miscommunication and, when possible, to recover from situations caused by it. It works between two species that have nothing in common with each other. A misunderstanding between such parties can escalate quickly and have irreversible consequences. And naturally, there are good and bad interpreters. The former takes pride in getting the message across as efficiently and authentically as possible, while the latter focuses on performing party tricks.

I personally value getting the message across. For example, we use a smartphone so many times throughout the day that it's frustrating if the interpreter doesn't understand you, or treats your hand as something it's not. A good interpreter is in tune with you. It knows what you're about to do, and understands differences in your tone of voice and body language. A bad one requires constant focus from you, because it doesn't fully understand you or isn't compatible with the way you function. That means neither side can really function efficiently, and mistakes are bound to happen.

And at the end of the day, when machines finally turn against us, I'm confident in pinning the blame on the interface between the two. The user didn't understand why the machine wasn't doing anything, and the machine didn't understand why the user was doing something anyway. The interpreter was most likely putting on some lipstick when all of that happened, and the resulting nuclear winter will let our kids make glow-in-the-dark snowmen all year round.

To delay the inevitable, let's focus on prioritizing and improving the interpreter qualities of the user interfaces we build to communicate with machines. These two species, so alien to each other, absolutely require it. Because at the current rate of technological advancement, the smartphone of tomorrow will be capable of horrors far beyond running a Facebook client.

Thanks for reading and see you in the next post. In the meantime, agree or disagree, debate or shout. Bring it on and spread the word.

Monday, October 27, 2014

Just asking - a new image series launched

Just asked myself,

Why not squeeze some of the points I make in this blog into images that are.. say.. more approachable? I thought it would be a good idea. So I ran with it and made a few over the weekend.

These two (I just added two more) are just the beginning of a wider series of images that either ask a simple question or challenge something in the current state of the smartphone industry. I really want to do this just to see if it's something worth continuing.

As always, the goal is to increase the awareness of more natural user interfaces, through the work we've already done for Sailfish OS at Jolla.



Here's the link in case the Picasa flash plugin crashed and burned.

Anyway, let me know how these work out for you. Any comments or image ideas are also very welcome, so I can crank out more - or stop immediately. If you haven't already, this is the perfect time to visit the comment section.

Fantastic. Let's do this, since we don't try in Finland.

Thanks for reading and see you in the next post. In the meantime, agree or disagree, debate or shout. Bring it on and spread the word.

Friday, October 10, 2014

Multi-touch and bigger screens

Brace for disclaimer!

Note that this has nothing to do with Jolla. People have been asking me how Sailfish OS could work on larger touch screens, so here it is, folks. Some theoretical design thoughts about the Sailfish OS user experience at a bigger size. This matters for understanding mobile operating system design and the effects different touch screen sizes have on it.

Fantastic, let's move on.

Multi-touch makes an excellent parallel subject when talking about larger touch interfaces. I've personally grown to dislike the majority of multi-touch implementations, because they seem to be driven by the technical capability to track fingers, rather than by supporting the way we use our hands.

So what should multi-touch be then?

Let's use another tool example to dig deeper. I like comparing things to tools because of their simple, efficient and purposeful interfaces.

Think about a hammer and a nail. The task is to hammer a nail into a piece of wood. One hand holds the nail while the other hammers it in. Breaking that down, we can recognize multiple smaller operations inside that task. Pinching the nail while holding it perpendicular to the target surface is one. Whacking it in with the hammer in your other hand is another. This is a simple multi-touch use case from real life, with two types of multi-touch.

Huh?

Yes, I think there are two kinds of multi-touch. The first type is related to the task itself. You need to use both hands simultaneously for the same task to succeed. Hold the nail and use the hammer. The second type is related to individual hand operations inside that task. Pinching a nail with at least two fingers, and gripping a hammer with up to five. The latter is the most common use of multi-touch, implementing pinch/spread-to-zoom for example.

Ok, two types of multi-touch: the two-handed task type and the multi-fingered operation type. There is no need to discuss how many fingers you need to do something, because then you're focusing on the wrong things. And I feel many existing interfaces are limited to the operation type only, because they're not focused on user tasks but on something completely different. Not to mention totally forgetting what our hands are capable of.
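
If you wanted to tell the two types apart in software, one crude approach would be to look at how far apart the touch points are. The sketch below assumes nothing more than raw coordinates and a made-up hand-span threshold.

```python
# A loose sketch of the two multi-touch types. Touches clustered close together
# are treated as fingers of one hand (operation type); touches far apart are
# treated as two hands working on the same task (task type). The threshold is
# a made-up value for illustration.

ONE_HAND_SPAN_PX = 400  # hypothetical: roughly the reach of one hand's fingers

def classify_multitouch(points):
    """points: list of (x, y) touch coordinates currently on the screen."""
    if len(points) < 2:
        return "single touch"
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    span = max(max(xs) - min(xs), max(ys) - min(ys))
    # Fingers of a single hand stay within one hand's span...
    if span <= ONE_HAND_SPAN_PX:
        return "operation type (one hand, e.g. pinch to zoom)"
    # ...while two cooperating hands end up far apart on a larger screen.
    return "task type (two hands cooperating on one task)"

print(classify_multitouch([(200, 600), (260, 540)]))   # pinch -> operation type
print(classify_multitouch([(60, 500), (1200, 500)]))   # peek + cover action -> task type
```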

So, I ended up with this definition because there wasn't really anything tangible behind existing multi-touch interfaces in the mobile OS space. At least I haven't run into anything that made sense. What I've found in abundance, though, is a lot of complexity that multi-touch helps to add. It's very easy: introduce more and more fingers on the screen, and map that to yet another thing in the software. The number of fingers needed has lately gotten kind of out of hand. Pun intended.

If you need more than one finger to move around in your OS, you should seriously look at the interface architecture and feature priorities.

"But with multi-touch , I could have an OS feature to directly alter orbits of celestial objects and.."

No. Stop it. You'd be still browsing, watching videos and gaming. And the only celestial object you know is Starbucks. Stop looking at increasing OS features, and pay more attention to enhancing user potential.

Alright, apologies for the slow intro. This is where some illustrations come into play, and hopefully make more sense of the multi-touch stuff above.

Sailfish OS was designed to be less dependent on display size than other mobile operating systems. Because the most common user interactions don't depend on the user's handedness, hand size or thumb reach, the gesture-based interface naturally allows one-handed use of smartphone-sized devices.

"Hah, you can't really use a huge device comfortably with one hand, so your one-handed use benefit is lost then?"

Yes and no. The same way Sailfish OS made a small interface fit into a single hand, it makes a larger interface fit two. This opens new ways to interact with larger devices, due to the analog nature of touch gestures.

We should also understand that people are very liberal in how they use and hold devices in real-life environments. Commuting, at home or during a holiday trip. Most of the time, the device is resting against something and simply held in place with one hand.

This is a two-handed grip while using a full-screen application. It's the precondition to completing a task. In my tool comparison, it's holding the nail in one hand and the hammer in the other.

The left (or right, it doesn't matter) hand performs the Peek gesture to expose the Home screen. The hand with the nail places it against the wood surface.

While keeping the finger on the screen, the user is able to see what three other applications are doing on the Home screen. The nail is ready to be hammered in.

Without releasing the left thumb, the user performs a cover action with the right hand, for example to play/pause/skip a song. Releasing the left thumb after interacting with any active cover would keep the user in application 1 (first image). That's a fast way to look into Home (just like on the phone), perform an action (enabled by the larger screen) and get back to the app. All without ever really leaving it.

Alternatively, if the user tapped another application cover, or triggered a cover action that requires a full-screen state, the screen real estate would be divided between the two. The nail has been hammered in, and the user is back to the default state that precedes the next task.

This behavior is a natural progression of the Sailfish OS Peek gesture and how application windows are handled. Only the support for the other hand and the screen division were added. Each hand performed an individual operation that, on its own, completes a single task (pinch a nail, hold/carry a hammer). Performed together, they complete a different task (one piece of wood is attached to another). Just like we do so many things in our physical environment. I wanted to focus on illustrating the task type, because there are countless examples of single-hand multi-touch, the operation type.
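
To summarize that sequence in another form, here's a small illustrative state machine in Python. The class, state and event names are my own labels, not Sailfish OS internals.

```python
# A hedged sketch of the interaction sequence above: one thumb holds the Peek
# gesture while the other hand either triggers a quick cover action or taps a
# cover to split the screen. All names here are illustrative.

class PeekSession:
    def __init__(self, active_app):
        self.active_app = active_app
        self.peeking = False          # thumb holding the Peek gesture
        self.layout = "full-screen"   # or "split"

    def peek_start(self):
        # Thumb swipes in from the edge and stays down: Home and its
        # application covers become visible behind the current app.
        self.peeking = True

    def cover_action(self, app, action):
        # The other hand triggers a cover action (e.g. play/pause) while peeking.
        print(f"{app}: {action}")

    def cover_tap(self, app):
        # Tapping another cover (or an action needing full screen) divides the
        # screen between the current app and the tapped one.
        if self.peeking:
            self.layout = "split"
            self.active_app = (self.active_app, app)

    def peek_release(self):
        # Releasing the thumb without tapping a cover drops back into the
        # original application, as if it was never left.
        self.peeking = False

session = PeekSession("Browser")
session.peek_start()
session.cover_action("Media", "pause")   # quick action, stay in the Browser
session.peek_release()
```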

The value in all of this is that the entire interaction sequence is built into the same application usage behavior, without any additional windowing modes or mechanisms that need to be separately activated and used. It supports the way we work and enhances our natural potential. After all, you don't switch your hand into a separate mode when you're driving nails into planks of wood. No, it's the same hand, all the time.

Similarly, when you need to reach something in a tool drawer, you don't physically enter the drawer yourself. You wouldn't fit. Instead, you stand next to it, open the desired drawer and pick up the tool you need, before closing it again. The Sailfish OS Peek gesture does exactly the same on larger screens thanks to multi-touch. It exposes another location (Home/Events = the drawer) with one hand, to see what's there and perform a task (trigger an action = take a tool) with the other. All without actually going to that location.

That's what multi-touch should be.

Something that focuses on enhancing our potential, instead of enhancing features we are required to use.

Button-based tablet operating systems (excluding Windows 8 and the like) are not going to do something like that any time soon. Not only because they treat active applications differently (as second-class citizens), but because it would be challenging to implement the behavior on top of the way buttons work. The button locations also don't support ergonomic use of individual hands when gripping the device from the bezel. The sliding gesture over the screen edge, on the other hand, is very natural to perform, because it happens where your thumb is most comfortable at any given time.

The conventional button approach that many Android devices use exhibits another problem: it enforces a hand preference in controlling the device. As you can see from the image above, the left hand is not able to reach the notifications on the right, and similarly the right hand struggles to reach the Home, back and task switcher buttons on the left.

It's not about changing the interface between a phone and a tablet. The tasks are the same anyway. It's about how two-handed use enhances our potential through the increased touch screen area.

Don't try to fit an existing multi-touch solution into your interface; think about how an interface can handle both one- and two-handed use.

Then, the rest will find their own places naturally.

Thanks for reading and see you in the next post. In the meantime, agree or disagree, debate or shout. Bring it on and spread the word.

Monday, September 22, 2014

Pushing the touch interface to the next level

Earlier this year, at Mobile World Congress in Barcelona.

Before his main interview, I explained to a journalist what the Sailfish OS interface is all about, and why improving how touch interfaces work is important.

In the evening, I got to read this.

Breaking the smartphone mold isn't easy. Just ask Jolla

I was taken to school that day. Learned a thing or two about talking to media.

No regrets, though. To keep sucking, I got a vacuum cleaner as a gift (it was waiting for me at the office - I love you guys), and the resulting article title is spot on to wrap this post around.

Before you start creating anything, you have to make a choice.

Is it enough to stick with what you have? Is there anything you can keep and re-use? Or is it all beyond saving and starting from scratch is the right thing to do?

As I mentioned in my previous post, the problem is in the interface. The hurt came from button-based navigation: a solution that already existed before the problem did.

We raised our sledgehammer high, and brought it down hard. Buttons had to go.

As the dust settled, our creation emerged as pictured below. A smartphone with three fewer buttons. It doesn't look like much, now does it?

And that’s exactly the point. It’s not about the looks. It's how it works with your hand. It's your hand that completes the touch interface.

Not by adapting, but by the way it naturally works.

It's more than what you see. Much more. When you take advantage of how the human body works, you gain several benefits over interfaces that treat our hand mainly as a mouse cursor replacement.

First comes comfort, because your hand size is irrelevant. Then speed, since less accuracy is needed.

Finally, when your brain recognizes a familiar pattern, it can immediately perform the matching interaction without eyes confirming it. It's just the way our body works.

And the interface works with it. Not against it.

Meet Sailfish OS.

The most commonly used actions (Home, back...) are based on simple gestures. This means they can be performed exactly where your thumb is most comfortable during content interactions (and not like this).

Access to the notifications page has also been moved to the bottom edge of the screen for easier reach. Your hand is usually closer to that edge, especially with larger phones and tablets.

My next post will illustrate in more detail how the Sailfish OS interface works. Meanwhile, you can check out some quick tutorials on Jolla's YouTube channel.

In short, you swipe over the screen edge to interact with what you feel. From the screen center, you interact with what you see or know. But I'll do a more detailed post about it next.

For a touch screen interface, our ability to know at all times where our thumb is in relation to our other fingers is important. To test it, close your eyes and pick up a phone. With your eyes still closed, try placing your thumb on the center of the display. Next, try finding the device edge.

Both are very easy to do, because you've had that hand (and brain) since you were born. It's natural for you.

However, to let you make use of it on a smartphone, someone has to break the smartphone mold.

The one with the buttons.

Help us break it.


We're the few against the many. The perception of what a smartphone is doesn't give in easily. It's been the toughest thing to face in my professional career. Almost every day I hear or read from someone how the back and home buttons are etched so deep into the minds of smartphone users that it's impossible to change.

And that's sad to hear, because it's not true. It won't happen overnight, but it's definitely possible given time.

Just following others and copying what they do takes nothing forward. Copying is the reason we're stuck with a touch interface that is over a decade old, with buttons in the wrong places. When you copy, you don't think. When you don't think, stupid stuff happens.

Sticking to a vision has already made a big difference for our community. Instead of copying what others do, we challenged the smartphone industry. A tiny company with a wonderful community succeeded where many companies have failed.

In being themselves.

If you ask me or anyone working at Jolla, you will hear it's not easy to break the mold. Ask our community the same question, and they'll tell you it's already been broken.

A more natural touch interface might not sound like a big thing. Until you try it yourself.

And you've just hammered a good chunk off the smartphone mold.

Roughly the size of you.

Thanks for reading and see you in the next post. In the meantime, agree or disagree, debate or shout. Bring it on and spread the word.