Friday, July 31, 2015

Man and his computer, part 2


In part 1, I shared my conclusions about how similar today's computers are. They all run different applications, ranging from entertainment to productivity, from simple to complex. This post compares in closer detail how the graphical user interfaces (GUIs) of these computers help people get the best out of their investment in these devices.

Desktop and laptop computers

When you use one for the first time, you see an empty desktop. The apps that you bought the computer for are hidden somewhere else. A few app shortcuts might be visible by default, and the rest are tucked away behind multiple steps for the user to configure and manage.


The indication that an application has been sent to the background ranges from subtle to guesswork. The desktop wallpaper, as seen in examples 1, 2 and 3, clearly receives the biggest emphasis. It is, however, not exactly why these computers exist.

Phone and tablet computers

Upon starting one of these devices, the user sees that applications are likewise divided between multiple locations, with iOS being the only exception here (example 1): it shows everything in a single location. Android (2) has multiple home screens, with all but a few installed apps hidden in yet another place. Windows Phone (3) is a mixture, while Ubuntu Phone (4) has much bigger plans than apps.


For some reason, all applications that have been started are demoted and hidden in a task switcher view, a design that looks and works like an afterthought. Windowed apps are slowly starting to appear, but still feel like clunky, bolted-on solutions. The experience doesn't change when the device is connected to a larger screen. Only Windows Phone and Ubuntu Phone are pursuing scenarios beyond the traditional desktop and mobile divide. Kudos to both for focusing on the future.

Console computers

The same pattern is sadly repeated. The software that the user benefits from is divided and scattered around the main user interface. The current game/app is prominently shown, but seeing what else is installed, or running in the background for that matter, is not what these interfaces are designed for. And consoles are usually connected to screens over 40", so it's not that they lack the space for it.


There's no support for multiple screens, and these computers are sometimes even more limited than mobile ones, due to the shortcomings of gamepad input. Xbox OS has an edge over its competition in doing several things at the same time, by allowing windowed operation of some of its core features without breaking the context the user was in.

The verdict

Even though all computers and their operating systems are nearly identical in terms of what they do, the companies developing them have chosen very different graphical user interfaces for doing it. This means that:
  • users have to memorize different interface conventions between different computers
  • multiple OSes (or variants of them) are needed to support different devices
  • only big companies have the resources to develop multiple products across different categories
  • there is massive overlap in the effort required when developing software for multiple devices and/or operating systems

Back in the day, with just a few computers around, there was no need for a common approach to GUIs. Instead, there was plenty of time, ignorance, workforce and money. As a result, we have several user interface paradigms that all fail to various degrees. The shared mistake is focusing on building physical products with 'art-directed' interfaces. A direction based on a personal perception of how a particular device should be used easily masks the digital similarities underneath the glamorous surface, hiding important qualities that all operating systems share.

To sum it up..


All the 'signature characteristics' that desktop, mobile and other interface paradigms have managed to pile up over the years are merely distractions. They occupy the minds of designers, developers and end users alike. Our digital world is a hot mess - partly because of our obsession with the current categorization of computer GUIs and OSes.

If anything is certain, it's that software has never needed such arbitrary categorization - and neither do the people using it. Future user interfaces will leverage different screen sizes and input types as they become available, instead of stubbornly serving a single form factor like today's do.

How can we help people see beyond their lust for yesterday? How can future user interfaces focus on increasing our human potential, if our preferences and behavior explicitly tell them to do otherwise?


Thanks for reading and see you in the next post. In the meantime, agree or disagree, debate or shout. Bring it on and spread the word.

Friday, July 10, 2015

Man and his computer, part 1


Our world is loaded with different computers that we use for a variety of things, ranging from good to bad, from luxury to necessity. And at their core, they're essentially the same. This post and the following one emphasize how similar they are (post 1), and how differently their user interfaces ended up being designed (post 2).

Most importantly, it doesn't matter which computer we're talking about: value is always generated through some type of application. Gaming, content creation & consumption, communication, and many other domains depend on using applications. Web browsers are eating away at that pie all the time, but they too are applications - just hugely complex ones.

Depending on what we're doing with a computer, and where that happens, we use different input devices to help us: keyboards, mice, trackpads, styli, cameras, game controllers and microphones, just to name a few. The line between computer-specific input devices is blurring, as devices increasingly support a wider range of peripherals.

A display is the dominant output device when it comes to computing. With a larger screen, you can see more without scrolling. Smaller screens are more portable, but the screen content needs to be scaled and restructured to make up for the reduced screen area. The more display sizes a computer can support, the less limiting it is for the user.

To sum it up..


The way most common computers generate value for the end user is identical. So is the way we control them, and the way they respond back. In the next post, we'll look at some of the most common computer categories and their graphical user interfaces. Stay tuned.


Thanks for reading and see you in the next post. In the meantime, agree or disagree, debate or shout. Bring it on and spread the word.
 

Tuesday, June 30, 2015

Breaking free from my invisible prison

Everything we know today is based on our past experiences. That knowledge limits what we can create tomorrow.

When we solve a problem, we tend to stick with that solution and keep improving it. That affection prevents alternative discoveries from happening. Alternatives that weren't possible at the time of our original idea. Alternatives that have much higher potential in the long run.


The limitations that affected our past tools will silently keep limiting the potential of our future ones. It's not natural for us to consider our proven solutions as restraints. This isn't a prison made of concrete and steel, but one of obsolete or incorrect knowledge that we fail to see. And what you can't see, you can't escape.

I joined Jolla in 2012. It took me almost three years to discover my self-imprisonment. Back then, I could only work with knowledge within those walls of mine. I was happy to repeat what had been done before. It never used to matter, as anything was possible back then: I was either creating concepts or working without time pressure. It all changed when I started working on Sailfish OS.

I guess it was the immense pressure that finally pitted me against my own knowledge. During these three years, I have questioned the majority of what I know. A life of uncertainty and constant doubt has been hard, but at least those walls gave in before I did - ironically, only to be replaced by tiredness and loneliness. Abandoning things I had held as facts for many years was a cruel journey, mainly because I just traded one solitude for another.

Our existing knowledge is our happy place, and it's perfectly understandable to fight for that happiness. They say that ignorance can be a wonderful thing. It's only human to seek comfort through stability and order - until one dies. To me, that's a horrible waste. Loneliness I can deal with.

So remember. The knowledge you have gathered doesn't update itself. If there's something you really care about, you should question everything you know about it. Sure, it might get lonely for a while, but it's imperative that you do.

Because tomorrow will be just like yesterday if you don't.


Thanks for reading and see you in the next post. In the meantime, agree or disagree, debate or shout. Bring it on and spread the word.

Saturday, June 13, 2015

Tailoring graphical user interfaces for everyday life

When developing a graphical user interface for a product, it's easy to forget the outside world: the reality that your product will ultimately face.
 
It's tempting to downplay the importance of various everyday situations. Mundane, boring and even stupid situations that have nothing to do with your amazing new product, yet everything to do with how much user attention they require. This common and critical mistake results in a struggle between the product and the environment it's used in. Below is a simplified example of this conflict.


The image shows how the environment affects our ability to focus and handle information. The more control we have over the environment we're in, the more demanding interfaces we can cope with.

Mobile and portable devices are widely adopted because they conform to the dynamic and unpredictable qualities of human life. We naturally have a lower barrier toward carrying small devices with us. Therefore a smartphone is more likely to be used in a taxing situation than a desktop computer.

At the opposite ends of that scale, we are either fully engaged with the environment or with the graphical user interface. Even a familiar and simple interface will be problematic in a demanding situation - like composing an email while outrunning a bear. Similarly, any smartwatch interface feels lethally boring and restrictive while waiting for yet another meeting to end (you'd rather tussle with a bear). For reference, see the following image.


Our available time, at any given moment, affects what we consider important. When a situation requires any attention, completing another task will cost you situational control and awareness - the expense depending on both the interface's demands and the complexity of the task in question. In short, if you text and drive, you'll suck at both. Human multitasking in all its glory.

Therefore it's important for an interface's input requirements to scale accordingly. The problem is that many interfaces today, like Android, iOS and Windows Phone, are already beyond their capability to do so, forcing the user to give in. The reason is a devious one: even if people don't like to carry around tower PCs, they still love the familiar interface logic derived from them - even though many interaction methods developed for desktop computing are far too demanding for life outside the cubicles they were never meant to leave.

The smaller your product is, the more focused, effortless and fault-tolerant the interface needs to be. I know that our work on Sailfish OS is not there yet either, but it's still easier to keep building it on top of thoughts like these.

A mobile device that fits your life is valuable. One that prevents you from living yours to the fullest is not.


Thanks for reading and see you in the next post. In the meantime, agree or disagree, debate or shout. Bring it on and spread the word.

Tuesday, June 9, 2015

No more empty smartphone screens

Ever since I parted ways with my trusty Nokia 3310, the empty standby screens of many smartphones have felt cold, distant and useless in comparison.

Various Windows phones and a handful of Android devices, though, come equipped with features that make their standby screens appear far less dead. Credit must be given where credit is due.

No need for power or home key presses, display double taps or other conscious interactions. The moment they're exposed to the world outside the user's pocket, both the phone and its user are already one step ahead of everyone else. A digital extension of a human intention.


Sailfish OS also has a similar feature in development, which we call "Sneak Peek". It's not ready yet, but I've been trying it out for almost a year now. Somehow the feature always carried over across software upgrades - up until last week, at least, when I had to re-flash my phone, turning the feature off for good.

The sudden change in device behavior has left me staring at an empty screen more times than I'd like to admit. Looking and feeling like an idiot.

One step too much


Curiously enough, I realized that all those solutions I mentioned earlier had one important piece missing. They all focused on what the user might want to see, but ignored where that would lead: what do people do next, once they're already holding the device in their hand with the display showing relevant information?

Easy. You either want to interact with it, put it back in your pocket, or set it aside on a surface near you.

And the problem with everything we have out there today is that it all just creates an additional state between the display being completely off and fully on. A glance or active screen is shown first, before you can see the lock screen. If you want to interact with the device functions below it, you first have to go through that extra screen. This throws away part of the potential gained by anticipating user intentions.
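
To make that extra step concrete, here's a minimal sketch in Python comparing the two wake-up flows. The state names are my own, purely illustrative, not any platform's actual code:

```python
# Purely illustrative sketch; the state names are my own, not any platform's.
# Today's glance/active screens insert an extra state between "display off"
# and the lock screen. Waking straight into the lock screen removes it.

CURRENT_FLOW = ["display_off", "glance_screen", "lock_screen", "home_screen"]
DIRECT_FLOW = ["display_off", "lock_screen", "home_screen"]

def steps_until_interactive(flow):
    """Transitions needed before the lock screen controls can be used."""
    return flow.index("lock_screen")

print(steps_until_interactive(CURRENT_FLOW))  # 2: wake, then dismiss glance
print(steps_until_interactive(DIRECT_FLOW))   # 1: wake into the lock screen
```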

To allow user interaction, it would make more sense to automatically show the lock screen itself, without any added steps. The user would see the same information, interact with the lock screen controls, or continue on to unlock their device.
Yes, it would require some adjustments to how the lock screen behaves. It might end up looking something like these wildly conceptual images, created to support this post. Take them for their illustrative value.


Moreover, the appearance is secondary in the long run. How it feels in daily use becomes a much more interesting and valuable quality. At first it might sound strange for a phone to behave like this, but let's look at what would happen if it did.


The first thing you'll notice is that you can get to whatever you're doing a bit faster. People use smartphones over 100 times a day, with the majority of those instances starting with manually turning on the display. When the manual part is removed, less attention and accuracy are needed.

Second, the number of user errors would decrease, because nothing was added: every gesture and function works just the same way. It's the same lock screen, nothing more, nothing less. It's working with you, not against you.

Worried about accidentally unlocking it? Don't be. Every lock screen has a built-in protection mechanism to prevent exactly that, made famous by the "slide to unlock" slider on the first iPhone: we have to flick or swipe a long enough distance to get past it.
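
As a rough illustration of that mechanism, here's a hypothetical Python sketch; the threshold is an assumed value, not any platform's real one:

```python
import math

# Hypothetical sketch of a swipe-to-unlock guard: a touch only counts as an
# unlock gesture when it travels far enough, so accidental taps and pocket
# brushes are ignored. The threshold is an assumed value for illustration.

UNLOCK_DISTANCE_PX = 300  # assumed minimum travel distance

def is_unlock_gesture(start, end):
    """Return True only for a deliberate, long-enough flick or swipe."""
    return math.dist(start, end) >= UNLOCK_DISTANCE_PX

print(is_unlock_gesture((100, 800), (110, 790)))  # False: accidental brush
print(is_unlock_gesture((100, 800), (120, 420)))  # True: deliberate swipe
```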

Finally, taking your device out of your pocket becomes a much friendlier event.


Just take the phone out of your pocket and place it on a surface near you. The display will light up to greet you. The accelerometer inside the phone can tell whether you're holding it in your hand or it's resting on a table. Using that information, it's easy to turn off the display sooner to save power.


Naturally, if the phone is on a table, the same sensor can be used to detect the user picking it up. And for the cases where you don't want to pick it up, you're just a double tap away from whatever you need.
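
Here's a minimal sketch of how such detection could work (illustrative Python only, not Sailfish OS code; the threshold and timeouts are my own assumptions). A hand always trembles slightly, while a table doesn't, so the variance of recent accelerometer readings separates the two:

```python
from statistics import pvariance

# Illustrative sketch, not Sailfish OS code. The variance of a recent window
# of accelerometer magnitude samples distinguishes a trembling hand from a
# motionless table. Threshold and timeout values are assumptions.

MOTION_THRESHOLD = 0.02  # variance threshold, assumed value

def is_resting_on_table(samples):
    """samples: recent window of accelerometer magnitude readings (m/s^2)."""
    return pvariance(samples) < MOTION_THRESHOLD

def display_timeout_s(samples):
    """Turn the display off sooner when the phone is lying on a table."""
    return 5 if is_resting_on_table(samples) else 30

# A sudden jump in that same variance while the display is off is, in turn,
# a cheap signal that the phone has just been picked up.
```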

By now, I'm sure some of you have already wondered why not use a black background with colored text and icons on top. It works great if there's no display backlight. If there is, too bad. I've illustrated the problem below.


The liquid crystals that modulate the light passing through cannot block all of the backlight, resulting in a gray appearance instead of black. This is very visible at night, inside movie theaters, clubs and ancient dungeons.

Using a background image simply makes the issue less apparent (it can be turned off on AMOLED devices to save power). Also, a user-selected image is a much more personal option than someone dictating that it should always be black.


Making smartphones anticipate our needs is not rocket science. Especially when it comes to the lock screen scenario we manually go through almost 100 times a day anyway. It's much more about seeing past our past experiences. If you see through them and get a taste of what things could be, it's going to be difficult to go back.

You'll soon realize how passive most smartphones are. As if they didn't have the information available to anticipate the most basic thing we do. Once again, you've been staring at an empty screen. Looking and feeling like an idiot.

Welcome to the club.
 

Thanks for reading and see you in the next post. In the meantime, agree or disagree, debate or shout. Bring it on and spread the word.
 

Sunday, May 31, 2015

Does the software you use have multiple personalities?

If you paid for it, or got it after signing up for a "free" service, it's a 100% yes. Some just hide it better than others.

The thing splitting one good personality into several is called business requirements. These requirements serve the existence of the company maintaining the software, and are kept hidden from the end user.

Imagine these requirements as another user next to you (the real end user) - just an invisible one. Naturally, these two users never share the same goals or values, because they're inherently different: one is a real person, while the other is just a set of objectives. This means that the product has at least two reasons to exist, two separate masters to serve. In light of my ponderings about good and bad software, this is how business requirements tend to change development focus.

Somehow, we can all sense this. At times, software can feel very fluid, smooth and purposeful. In those cases, the invisible business aspect is not interested in what you or the software does. But sometimes you feel like you're being thrown through unnecessary hoops for no good reason. That's business requirements being met: things like mandatory registration, DRM, an enforced internet connection and so on.

If a lot of code is needed to meet the defined business requirements, it will be hard for the company to open source such software, because doing so exposes all these questionable things - not to mention making it dead obvious that similar value is achievable with much less code elsewhere.

Therefore many companies refrain from open development, convincing themselves that these undocumented capabilities are for the good of everyone.

We all want better experiences, but they honestly have to deliver on that promise. There might be temptations to harness software to serve alternative masters, but it only leaves everyone wondering why it's so damn hard to openly develop software.

And why their software still has multiple personalities.


Thanks for reading and see you in the next post. In the meantime, agree or disagree, debate or shout. Bring it on and spread the word.
 

Tuesday, May 26, 2015

Opinions kill open software

It's happening. Silently, slowly, without exceptions. Dead, gone, deceased. You just don't know it yet.

Some background before proceeding: my previous post, about good and bad software, underlined how important it is for everyone to know why a particular piece of software exists - especially in FOSS development.

The problem is user expectations. Our past experiences naturally shape our preferences, and we subconsciously project them onto new software. This pulls the developer toward how, and away from what, the software was created to do. And since we're all unique, it's difficult to see the real reason from our equally subjective viewpoints, steering the software in the direction illustrated below.

The reason is that FOSS users engage much more in software development than proprietary software users do. Everyone knows enough about software to know that anything is possible with it. Pursuing sophisticated frameworks that support our highly heterogeneous user preferences has become the holy grail of every software project.

You might not realize it, but the price a proprietary software user pays in cash, a FOSS user pays in responsibility. We're all privileged to have an alternative, and we should respect the reason it exists. Don't neglect or avoid that reason by suggesting yet another user setting or customization framework. That always takes away from what the software can do for everyone.

If FOSS alternatives are ever to reach wider consumer adoption, they'll do so by being faster to develop and maintain. By going to places proprietary software is too heavy and cumbersome to reach. By helping people do more, faster, simpler and more reliably. By giving us our time back.

That's why it's imperative that development stays focused. One piece of software can't adapt to seven billion amazing opinions, but seven billion people can adapt to one amazing piece of software.


Thanks for reading and see you in the next post. In the meantime, agree or disagree, debate or shout. Bring it on and spread the word.