Is Responsive Web Design a Lie?

I’ve previously written about the importance of keeping a focus on users when thinking about responsive Web design. I’ve also written about ways to think about responsive Web design in the context of doing interaction prototyping. I’ve personally had some experience designing responsive interfaces. I’ve spent time with my colleagues thinking about how to support responsive Web design in Indigo Studio, and I’ve talked to other designers about their experiences with RWD, including their experience with other tools.

Here’s the thing: so far, it seems to me that responsive Web design is a lie. It is snake oil. To be clear, I’m not saying that the actual technique of using CSS media queries to set breakpoints is a lie. That is a fact; you can do that. What I am saying is that what most people seem to take away from RWD after learning about it is this:

Hey lookie, I can design/code one thing instead of three (or more) and have it just work on all those things equally well.

That. That is a lie. It’s a pipe dream. You’re smoking something. It’s just not that simple. It’s not that simple from a design perspective, nor is it that simple from a code/dev perspective, except in the rarest, simplest cases. Even if you can simply use media queries, it can get pretty convoluted pretty easily. Even something as conceptually simple as an image is troublesome in RWD.
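To be fair, the core technique is simple in isolation. Here is a minimal sketch (browser TypeScript, using window.matchMedia as the script-side counterpart of a CSS media query; the 600px breakpoint and the class names are placeholders, not anything from a real project) of reacting to a single breakpoint:

```typescript
// A single, isolated breakpoint is easy enough to handle.
// The 600px threshold and the CSS class names are placeholders.
const phoneQuery = window.matchMedia("(max-width: 600px)");

function applyLayout(isPhone: boolean): void {
  document.body.classList.toggle("phone-layout", isPhone);
  document.body.classList.toggle("desktop-layout", !isPhone);
}

// Apply the initial state, then react whenever the breakpoint is crossed
// (window resize, orientation change, etc.).
applyLayout(phoneQuery.matches);
phoneQuery.addEventListener("change", (e) => applyLayout(e.matches));
```

Multiply that by several breakpoints, per-component exceptions, and responsive image handling, and the convolution shows up quickly.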

The reality is that most folks, even big RWD advocates, usually advocate for more than media queries, such as RESS (responsive design with server-side components). Sometimes this is strictly to help with performance (especially on mobile), but it can also be to deal with complexities and/or simply to make the desired designs feasible.

And RWD is sneaky. You start out thinking, hey, I’ll just add this breakpoint, then I just move this here, this there, and so on. Then you realize, hey, that’s actually not that great for <the phone | the tablet | the desktop | the TV> and you change this little bit here, that little bit there, and before you know it, you’ve got this huge jumbled mess that is actually harder to improve and maintain than if, for instance, you had just created different, cleanly separated apps in the first place.

OR (even worse from a UX/design perspective), you start making design concessions in order to keep the thing manageable from a technical perspective. Oh, it’s not that bad, really. Or you might think, if I do the better design, it’ll be hard to implement and hard to maintain, so we’ll stick with this half-assed solution, just for the sake of the principle of RWD. This is bad for prototyping and bad for design.

So let’s zoom out a bit. What are we actually trying to achieve? I’ve heard basically three goals:

  1. One URL. When you give people a URL (e.g., in an ad) or they find one (e.g., in something shared on social media), you don’t want to land them on a solution that was designed for the wrong device class. You don’t (normally) want to have to offer a “view this in our standard/full site” link or vice versa. It should just work and just be optimized for their device class (see my article describing device classes).
  2. One set of code/design. Across device classes, there is a lot of stuff that can be shared. The benefits are less implementation time and the fact that when you update something in one place, it is updated everywhere. The underlying benefit is simply that it is theoretically more feasible to target multiple device classes if you leverage this one-solution-for-all sharing.
  3. “Fluid” resizing. When orientation changes on a device or a browser window is resized, it fluidly rearranges.

RWD is promoted as the current/best solution to these. But the devil is in the details. Sure, it solves #1 hands down. But #2 is where things get hairy, as noted above. And I would suggest that #3, in most cases, is really just not that important. If there were interactions in your solution that made resizing/changing orientation an integral activity, and you wanted that fluidity to enhance those interactions, maybe it would be valuable. But in most cases, how fluidly it resizes is far less important than the fact that it does. Most people don’t sit there and resize the window just to admire the fluidity of the layout system.

Where do we go from here, then?
One way forward is to keep holding onto the RWD dream and try to beat it into submission: make it work, no matter how hairy things get. This is an ideological solution–you value the RWD pattern more than actually crafting solutions that are best for your users/business. The other reason to press ahead with RWD is that you are naive and believe it will deliver everything it promises. Neither is a great premise.

So we should be asking ourselves: are there better solutions to achieve these goals?

Some observations:

  1. There are no silver bullets. If you want a great experience for people on each device class, you have to design for each device class. This is work. It may or may not imply that you can leverage RWD as an implementation technique, but there is no free design here. If you want things to fluidly and intelligently reflow and rearrange, you have to define the rules for your context. And beyond very basic layouts/nav, you are going to need to reimagine how some things work depending on the device class–what works well from an interaction perspective on a phone is different from a tablet is different from a desktop is different from a TV is different from a kiosk is different from wearable devices… No tool, whether you hand-code RWD or use a WYSIWYG tool like Indigo, can protect you from having to think about and design for these different device classes and their related contexts of use.
  2. The more complex your solution is (on one platform), the more complex it will be to make it work across platforms, and that complexity increase is non-linear, especially when you try to make one set of code work for all of them.
  3. What we have here is essentially another incarnation of separation of concerns. The reasons you should keep your behavior separate from your structure separate from your styling are fundamentally the same reasons you don’t want to lump support for multiple device classes into one solution.

So I think if we consider the problem through the lens of separation of concerns, the solution becomes clearer.

One App, One Platform
One solution is of course the independent, per-platform app solution. Most people will agree, as a rule, that this is the best approach to maximally optimize the experience for each platform (much less device class). What will feel best on Windows Phone is an app that was designed for Windows Phone, and the same applies for the other major platforms.

This implies the least amount of reuse across platforms and the highest cost of implementation and maintenance. The problem is that this is not feasible for a very large segment of people who need to make software run on many devices.

One App, All Platforms
This is essentially the RWD solution. In theory, it sounds great. In reality, it’s a lot more complicated than it sounds and often can result in suboptimal experiences on each device class. Further, there are hidden design, implementation, and maintenance costs.

There’s also a lot of talk about “future proofing,” but that too is only a half truth. At a minimum, it presumes similar input/interaction modalities for new devices, an assumption that almost certainly won’t hold in many cases. And even if the new interaction modalities can be mapped to existing ones (think, for example, touch gestures to mouse/keyboard), that mapping could easily result in suboptimal experiences for these new device classes. So it may or may not work on new devices/interfaces–that’s the best we can claim. Hardly a compelling reason to adopt an approach.

One App, One Device Class
This is the path that seems most viable to me. The basic premise is that both your designs and your implementations are cleanly separated across device classes (i.e., you have a phone solution, a tablet solution, a desktop solution, etc.). You can even map device classes together if they are close enough–you could have a tablet/desktop solution and a phone solution, or a desktop/TV solution. If the similarities are close enough/good enough, you can still make those trade-offs and combine classes, without (as with RWD) assuming everything belongs in one solution. Here’s why I think this is the most sensible path:

  1. It is honest about the complexities involved, both from a design and (potentially) implementation perspective. It treats the experience with a particular device class as something that should be considered on its own.
  2. It makes it easiest to optimize for each class. Through separation of concerns, the design effort is cleaner and, even more so, the code is cleaner. You don’t have elements and their styles stomping on each other.
  3. It strikes a good balance between optimization and reusability. You don’t have to make an app per platform, nor do you have to contort your mind and code and (often) the experience to suit all possible interaction modalities and layouts in one solution.
  4. It is honestly future-proofed. The underlying technology will probably still be the Web. And as new devices emerge, you can see whether a device can be assigned to an existing class or not (based on more than the simple width of the viewport). If so, it’s simple enough to “turn it on” for that device and have it share an existing device-class solution. If not, then at least you’re being honest about it and not serving up some half-baked solution for it. You can choose whether the new device class is worth investing in a specialized solution. If it is, there will probably be multiple vendors with that class of device, so you can still target cross-platform within the class.
  5. You can be more strategic about what is shared or not shared. Often your data services can be shared across most classes. Your content can be shared. Your styling, to an extent, can be shared. Even individual pieces of the UI can be shared. Plus, you can encapsulate what is shared more cleanly. Instead of starting from a base that makes everything shared by default (RWD), you select the things that make sense and share them. This makes the sharing and the per-device-class code cleaner.
  6. You can avoid improvement paralysis. If every change you make has to be simultaneously made to every device class, it makes you that much more hesitant to make changes–you have to be ready to deal with them all at once. This applies equally to changes that make sense for all device classes and to optimizations that make sense for just one. It doesn’t matter. When everything is mixed in together, you always have to worry about unintended consequences for everything that shares it. With a per-device-class approach, you can feel much more confident that your design changes are safe. You can tackle one class at a time, which is ironically more manageable than all at once, and you can make sure the change is optimized for that class.
  7. You can still achieve fluid resizing per device class (whether that is resizing windows or changing orientation), if you feel that’s important enough to invest in. If it is Web, you can still leverage its built-in reflowing capabilities and even use some RWD techniques.
  8. You can still manage URLs and load the appropriate experience for a device class, in most cases. Again, if it is Web, there are plenty of techniques for detecting the device class from user-agent information and serving up the appropriate solution, with a good default (see the sketch just after this list).
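As a rough illustration of that last point, here is a minimal sketch (TypeScript with Express; the folder names, device classes, and user-agent regexes are hypothetical and only illustrative) of serving one URL while picking a per-device-class bundle on the server. A real project would more likely lean on a maintained device-detection library than hand-rolled regexes.

```typescript
import express from "express";
import path from "path";

type DeviceClass = "phone" | "tablet" | "desktop";

// Very rough UA sniffing -- the regexes here are only illustrative;
// use a maintained device-detection library in a real project.
function classify(userAgent: string): DeviceClass {
  if (/Mobi|Android.*Mobile|iPhone/i.test(userAgent)) return "phone";
  if (/iPad|Tablet|Android(?!.*Mobile)/i.test(userAgent)) return "tablet";
  return "desktop"; // good default when we can't tell
}

const app = express();

// One URL for everyone; the server picks the device-class solution.
app.get("/", (req, res) => {
  const deviceClass = classify(req.headers["user-agent"] ?? "");
  res.sendFile(path.join(__dirname, "clients", deviceClass, "index.html"));
});

app.listen(3000);
```

The point is not the detection mechanics; it is that the per-class solutions stay cleanly separated behind a single URL.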

None of this means that RWD is never a solution (it could even still be part of the solution). The problem is that it has become the proverbial hammer that makes every app look like a nail. People are assuming it is the way to approach both prototyping and implementation. The hype around it causes expectations to be wildly disproportionate to the reality–which ends up sending people down the wrong road and causing all sorts of unexpected pain for them. It ties them to the approach, a spiral of doom that is hard to break out of if you need to.

If all you have is a basic informational Web site with basic navigation, you can probably get away with just RWD without it being too painful. There may be enough things in its favor to warrant that approach. Even so, you risk writing checks that will bounce if, for instance, you think this means that all this responsiveness is “for free” or that it means you are future-proofed. On the other hand, it seems to me that assuming a baseline of per-device-class designs and solutions, strategically sharing across them, is a much more realistic, honest, and optimal approach in most cases. What do you think?

Nativist Nonsense and Idiotic Idealism

Hard to see when blinded by ideology

I very much appreciate, understand, and value design aesthetics and well-built technology. I’m also an amateur philosopher in my free time, so I can appreciate ideas, ideals, and ideologies in themselves. All of this is well and good, but what I don’t get is people who get so wrapped up in some design or technological ideology that they blind themselves to what is good apart from it. Let me give you some examples that I have heard and seen many times in my career in one flavor or another:

  • Blindly preferring some piece of software or technology purely on the basis that it is “open” or even “standards based.”
  • Blindly preferring some piece of software or technology purely on the basis that it is made by your pet favorite company.
  • Refusing to install or use some piece of software or technology on the basis that it is made by some company you don’t like.
  • Refusing to install or use some piece of software or technology on the basis that it is “open” or “free.”
  • Irrationally assuming that because some company had a challenge with a bug, virus, security, privacy, free-ness, openness, whatever, then everything that company does thereafter is tainted and to be avoided.
  • Irrationally assuming that because something is “native” that it must be better than a non-native alternative.
  • Refusing to code in some language on the basis that you don’t like it/it’s not your preferred one.
  • Prejudging a piece of software because it is built on <insert name of technology stack you don’t like>.

And there are a host of other, even less defensible positions that otherwise quite intelligent people take in relation to design and technology. Especially for people who are supposed to be professionals in technology and/or design, this sort of blind prejudice and ideology-based thinking is inanity; it is out of place, unbecoming, and simply unacceptable.

Most of us in design and technology are not paid to promote ideologies; we are paid to produce things. At the end of the day, the things that make us more productive and solve each particular problem best are the things we should be using. There are good ideas everywhere, and if we blind ourselves to them, we are injuring our careers and doing an injustice to those who pay us with the understanding that we will make the best thing for them in the most productive way possible.

Sure, you can have your preferences. Sure, you can espouse best practices and design philosophies that make sense to you. Heck, you can even advocate for them. But just don’t let those loom so large in your mind’s eye that you cannot see the good in things that don’t align with them. Don’t get so stuck on a technology or a framework or a practice or a pattern or a principle that you choose it when there are better options available for the problem at hand. Everything is not a nail, no matter how superior you think your hammer is. Don’t let your ideals become prejudices that, instead of fostering awesomeness, become a roadblock for you and those you work with and for.

And this extends, importantly, to people as well. Don’t treat those who don’t share your ideals with disdain. Don’t imagine for a second that because you adhere to some ideology (“craftsmanship” or “big ‘D’ Design” or whatever) this makes you more professional or better than they are. I’ve even heard people judge other professionals by when they purportedly clock in and out, as if having a healthy work-life balance somehow makes you less professional or capable!

In our line of work, it is the output, the products of our efforts, that matter most, not how we get there, and there are most definitely many paths to good outcomes. The judges of these outcomes are our clients, our customers, our markets, our users–not us. And the primary criterion in judging a good outcome is most certainly not how well our work aligned with any given ideology, however well-intentioned it may be.

Showing Passwords by Default on Mobile Apps

I just ran across LukeW’s post on showing passwords by default on mobile apps. It’s an interesting idea, at least something to consider. However, it seems a bit self-contradictory. On the one hand, he says that showing the password is important because people can’t see it while entering it; that is, because they are looking at the keyboard, they can’t easily see the typical delayed-show characters. Then he goes on to say that the * approach doesn’t really hide the characters being typed anyway, because the touch keyboards display each typed character quite prominently.

To me, this raises the question: if it is so easy for others who might be looking over your shoulder to see the keys being typed, how is it so hard for the person looking straight at the keyboard to see them? A bit contradictory, to say the least.

Now I’m all for improving usability, especially for notoriously problematic things like this. But it seems to me that this is, at least, a questionable practice to be encouraging. It’s akin to saying that because there are lock picks, you shouldn’t bother locking your doors. In security, nothing is guaranteed 100%. It is all on a spectrum, and many things are simply deterrents.  Masking the password is one such deterrent.

On the other hand, this is in the context of mobile, and one could argue that yes, it is easier to shield the screen in most cases than it would be with laptops or desktops, as Luke does argue. There’s something to be said for that. The flip side is that mobile contexts are more variable and often involve more potential security threats than you even know about. Sitting at your desk at home or in the office, it is not very likely someone will be looking over your shoulder (at least, not someone you’d be worried about). Standing on a subway? Who knows?

As a software architect and interaction designer, I just can’t endorse this practice as a good default. Even if your app doesn’t have sensitive information, it’s highly likely that users will use the same password (and the same email/login) they use for other things with more sensitive information. So while you may think you’re only risking your app’s data, you are not. If you want to let people confirm their password is right, go with an optional show-password toggle (a sketch follows below). Don’t show it by default. Security practices are always inconvenient; that doesn’t mean we can just do away with them.
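For what it’s worth, the opt-in version costs very little to build. Here is a minimal sketch (browser TypeScript; the element IDs are hypothetical) of a show-password toggle that keeps the field masked by default:

```typescript
// Show-password toggle: the field stays masked unless the user explicitly
// opts in. The element IDs below are placeholders.
const passwordInput = document.getElementById("password") as HTMLInputElement;
const showToggle = document.getElementById("show-password") as HTMLInputElement; // a checkbox

showToggle.addEventListener("change", () => {
  // Switching the input type between "password" and "text" masks/unmasks it.
  passwordInput.type = showToggle.checked ? "text" : "password";
});
```

Because unmasking is a deliberate action, the user at least has a moment to check their surroundings before the password appears.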

UPDATE (7 Nov 2012 13:57): Luke responded to me on Twitter that the larger concern is security cameras capturing passwords. Good point. Both cameras and people you may not notice are the problem. All the more reason to not leave it showing by default.

He later mentioned that someone at Sprint said they did this and claims “No security issues.” The problem is: 1) just how do you measure that this caused no security issues? Even just for Sprint itself, that seems a tall order to verify. And it can’t address the other problem I mentioned, which is that 2) people often use the same logins across apps/sites. So if someone captured the login/password combo thanks to Sprint’s unmasked form and later used it on other popular sites, they could gain access to the individual’s information, and Sprint would never be able to trace it back to their form. A claim of “no security issues” is, it would seem, impossible to verify and so shouldn’t be made.

Again, the better option is to provide a way to show the password, but don’t show it by default. This makes users think about what they are doing, and they’ll be more likely to ensure nobody is peeking if they explicitly choose to show their password. With show-by-default, on the other hand, they could very well be looking at the keyboard and type their whole password before noticing it is being broadcast to the world around them. Indeed, when people know that each character shows briefly, they may be more inclined to type their password quickly, increasing the likelihood of this problem. I’m increasingly thinking this should be classified as an antipattern, hopefully before it becomes a pattern.