Design Does Not Mean Sitting in a Room

With Q&A, however, Microsoft “started with just a pure design view,” Netz said. That meant the design team sat in a room for weeks and thought about how to make the user experience as simple and addictive as possible. They worked around the clock and on weekends, he said, with the understanding they’d move on only when “everybody in the room feels that what we have is just going to be awesome.”

If you can believe it, this was published in an article touting a supposedly new “design-first” strategy at Microsoft. Now, I suspect and hope this is just a misinterpretation by the journalist, and that those folks at MS don’t think “design-first” means sitting in a room for weeks and thinking about awesomeness. But it’s a common enough misconception that I thought I had to comment on it. 🙂

For example, I remember attending UX Week 2009, where one of the speakers was the lead designer from the then-new Palm WebOS. I couldn’t believe my ears as I sat there at a UX/Design conference and heard a keynote speaker say that his approach to designing a new OS for a business-user-oriented mobile phone was to stay in a room and reimagine what desktops and calendars were. (I am probably not remembering exactly, but that was the gist.) The fella clearly thought he had stumbled onto something amazing, and I guess I was ignorant enough of who he was not to be overawed by it. Well, we know how that story ended…

Now don’t get me wrong, I’m not here to bash Microsoft or WebOS. But we gotta get away from this notion that the way to do great Design is to lock yourself in a room and dream about awesomeness. You may as well just put some sketches up on a whiteboard, number them, and roll some polyhedral dice to select the best, most “awesome” design.

In fact, hiding in a room should really be considered a Design antipattern. It implies a few bad things:

  • Isolation from Ideas – good designers regularly seek out inspiration all over the place. They become rabid consumers of others’ designs and design ideas. By isolating yourself, you block out those avenues of inspiration.
  • Echo Chamber/Focus Group Effect – you will very quickly suffer the echo chamber effect; that is, you will soon start coalescing in your viewpoints and ideas, which serves both to isolate you further and to falsely confirm awesomeness.
  • Loss of Perspective – the more you are “in” the details of a design project, the more you lose the perspective of those who are “out” of it. Concepts that used to be not-so-familiar or obvious become more familiar and, ergo, more “obvious.” 
  • Isolation from Real Users – this is a key element for ongoing awesomeness and serves to counteract most of the problems above. Without ongoing design evaluation with users (or at least user-like substances), you can think your designs are as awesome as sliced bread, and it won’t matter a bit. Until the rubber meets the road, it’s all just wishful thinking.

I think I can safely say that sitting in a room and demanding awesomeness has very little to do with great Design. If what Microsoft (and WebOS) came up with was awesome, it would be because the people involved probably had some talent, solid Design experience, some luck, and a relatively good understanding of the target audience and the possible solutions. And more likely than not, they involved actual users in design evaluations sooner rather than later in their process. 

Looking at more of what the author said, you see some indications of that. Those involved were clearly subject matter experts of some sort and had a good bit of prior experience with the target audience. He says they consulted experts in solution domains to understand solution possibilities, which likely served to expose them to all sorts of new design ideas. No doubt it took more than six weeks, and it seems unlikely they were just sitting in the room that whole time.

It seems safe to assume they employed some Design professionals, or at least did a fair bit of evaluation with users, before sharing it with a journalist. And he does highlight one important element–holding off on technology selection/specification until the Design vision was in place.

It’s too early to say whether this particular solution will be successful, and success in the market depends on more than just good product design. Certainly, following a good Design-first approach will help. Demanding Design awesomeness is important, especially in the details of the execution of the Design vision. Waiting on technology selection is important. But there’s a lot more to great Design than just that, and certainly more than just sitting in a room and working hard on it.

Designers and Artists, and Hiring

A colleague pointed me to this good piece on “how to hire designers” today. Lots of good stuff there. My favorite line was this:

Great art workers, great graphic artists, certainly. But not designers.

Having been on both the hiring/managing side and the interviewee/employee side in Design, much of what he says resonates. There surely is a common misconception that “designer” = “artist.” In other words, if you are a designer, you must excel in visual design, more specifically graphic art/styling. The worst perpetuators of this misconception are visual designers themselves, from what I have seen (i.e., the artsy fartsy types who land in a career in business/software). The Dribbble debate certainly reinforces that I’m not the only one seeing this.

(As an aside, I don’t really have any axe to grind with visual designers or artsy fartsy types. I actually like them a lot–some of my favorite colleagues and family members are numbered among them, and I have found they bring immensely valuable perspectives, ideas, talents, and skills to the table. So my critique on this small point is in friendship and mutual respect.)

“Creatives” and “designers” are words that people typically associate with and use in relation to visual/graphic designers. The reality is that creativity is abundant in all sorts of disciplines, even in “boring” engineering, and certainly in other areas of Design. That’s because Design is more about creative problem solving than it is about art.

Design as a human-oriented activity is fundamentally about empathy. It is an invitation to others to come into your consciousness, to take up a certain residence there, so that you become, to some extent, one with them. The designer is ultimately trying to solve problems for other people in a way that, while it may have never occurred to those people, seems–once that way is realized–as natural (dare I say intuitive) as if those people had designed it specially for themselves. That’s the ideal, and all designs are progressive steps away from that ideal.

Art, on the other hand, is essentially an expressive endeavor. Rather than inviting others in, the artist seeks to make himself or herself known to another. It is a going out into, not a welcoming into. It is a “get to know me” rather than an “I want to know you.” It is “listen to what I have to say” rather than “let me listen to you.”

Now, it follows, to a degree, that someone who is very good at art, i.e., good at effectively communicating and evoking intended responses, will have some natural proclivity in the realm of Design. But such expression is only good in Design inasmuch as it is reflective communication–in a sense, evoking in people what they already desire, giving form and function to satisfy those desires in a way that speaks their language. Put another way, if each person had the time and artistic talent, the design would be a kind of art for them–it would be their expression of that aspect/extension of themselves.

But it does not follow that a good artist is necessarily a good designer. An artist must take their expressive talent and use it on behalf of others in order to bridge the gap from art to design. The skills, techniques, and talents used in design are distinct, however, and to a large extent, they are more learned than innate.

This is why it is possible for a non-artist to be a good designer. Someone with little talent for self expression can learn the techniques and practice the skills needed to design effectively. Their designs may not be as wholly effective as those of someone with the same skills and tools who is also good at expression; on the other hand, a talented artist who has not become as skilled at design–or who has devalued design or allowed their design skills to wane–can often create far worse designs than a skilled designer who is a so-so artist.

That, I think, is what the Dribbble concern is about. At its heart, that sort of thing is about self expression. It is art. A Design portfolio almost shouldn’t include beautiful, pixel-perfect mockups; certainly, they should not be the extent of such a portfolio. It is somewhat sad and distressing that the majority of portfolio-in-a-box sites are essentially that–show some pictures with a few words.

Again, this is not to say (by any means) that artistic talent in design is not important. Let me be clear: artistic talent is important and valuable. And depending on the design problem being solved, it can be very important and valuable. But if your motivation is to find a designer that can turn you into the next, say, Apple (a common businessperson desire), your first priority should not be to look for a “beautiful” or “kick ass” portfolio of graphic design (which are both terms I’ve seen in job descriptions).

TO HIRE A GOOD DESIGNER…

First, unless you yourself are a talented, experienced designer, you need to take a humble pill. You are probably not very qualified to judge whether or not someone is a good designer. Treat the people you are interviewing with respect and acknowledge that you are not really the best person to judge their design skills, but that maybe you are the “lucky” person who has to make the hiring decision. With that in mind, here are some suggestions that may help.

  1. Take some time to understand what went into any given design they share with you. You need to see evidence (or at least have a discussion) of how the designer came to empathize with the people she or he was designing for.
  2. Similarly, you would want to find evidence of how the designer came to an understanding of the value/desired outcomes for any given project. If you are a businessperson, you should be especially sensitive here–this is how they will interface with you.
  3. You would want to see things that show they are aware of common techniques in Design: how they captured user stories, and how those flowed into their design process. You ideally would see multiple sketches and a progression of fidelity. You would see evidence of thinking in terms of interaction flow, not just pictures or screens–those only exist to support the flow. And the flow should reflect the captured stories.
  4. As important, you want to see evidence of execution–not their doing the coding, but evidence that there was a realization of their design intent. Is/was there attention to detail? How did the design evolve as the project progressed? If the design intent was not realized, why not? This is perhaps the biggest and most important area to drill into. It’s one thing to paint up some pretty pictures in Photoshop and show those as evidence of “design”; it’s entirely another thing to see the actual outcome. If the designer is reluctant to share the running implementation because “it didn’t turn out how I wanted it,” you need to understand why–because the outcomes are what matter, not the pretty pictures and idealized intent in designers’ heads.
  5. Did the design succeed? How much did it change based on actual usage? How did it change? How was the need for change discovered? When was it discovered? It’s still quite common in software for significant changes to be discovered only after a release, and depending on the project/process used, that can be a really big red flag that the designer did not do due diligence to evaluate his or her designs with the people intended to use them. If the learning came after release, you want to hear that this was intended and expected (such as in Lean-type processes).
  6. Lastly, you want to find out how they have dealt with teams where their influence was not accepted or respected. I can guarantee that if they are a designer with any experience, they have stories about their contributions/skills being devalued. It’s just a fact of life for designers, and it’s important to understand how they dealt with that. Did they just wash their hands and walk away? Did they find a successful way to integrate and have the influence they felt they needed? What did they learn from these experiences and how did they apply that learning in the future?

To sum up, if you are hiring a designer and you let yourself be guided primarily by a beautiful portfolio, you are doing it wrong, unless maybe you are literally looking for a graphic artist for some static medium (and possibly even then). As a rule (despite its inverted importance in the industry today), a “beautiful”/”kick ass” portfolio should be one of the last things you consider, and how much weight you give it should be conditioned on your needs.

You will eventually want to look at their artistic evidence, but not first or foremost, unless you are specifically hiring for artistic talent as the primary goal. It is far more feasible to augment great design talent with good artistic talent than it is to expect a great artist to be a great designer. And if you can find both in one person, hang on to them! They are rare and valuable creatures. More commonly, you will find that people tend to excel in various areas (as illustrated in this post), so be cognizant of that and hire accordingly.

P.S. I intentionally avoided the “UX” moniker here. It is such an abused term that it is almost valueless in identifying good designers. Anyone can self-identify as a “UX designer,” from any number of backgrounds. And that’s okay, because people can come from many backgrounds to learn UX/Design skills. That’s also why, though, you need to consider the above suggestions as a way to suss out those who are just claiming the name. At the end of the day “UX” is, from what I have discovered, essentially the same thing as good ol’ Design (with a big D). But “UX” can help to at least identify aspiring candidates who want to make great software for people.

It’s hard to hire good designers–good luck!

When to Listen to Customers

A colleague recently shared this article saying that Steve Jobs never listened to his customers. That, IMO, is a bit off. I am fairly sure it is not accurate to say that of Steve Jobs, but more importantly, I don’t think it is something that should inform how we do design ourselves.

When you rely on consumer input, it is inevitable that they will tell you to do what other popular companies are doing.

Exactly. Biz stakeholders do this, too. But nobody wants you to replicate exactly what someone else has done–what would be the point of that? The takeaway is just inspiration–there is something about these examples they are giving you that is inspiring to them. Maybe you can drill in on what they like about the existing solutions, or maybe you just take them as indicative and try to find the good in them yourself (or both!).

…your insights are backed up with an enormously expensive creative process populated by world-class designers

A key point–valuable innovation does not come cheaply. It’s not a first-idea-out-of-your-head-is-right kind of thing. It’s not a process that prioritizes efficiency and cost management above discovering the best solution. This doesn’t mean good design is unattainable without a big budget and “world-class designers,” but it must be recognized that it doesn’t come free. It can’t really be tacked on as an afterthought, and the team and stakeholders at least need to be invested in the goal of great design balanced with other concerns (usually cost and time).

It’s really hard to design products by focus groups.

Of course. This is a truism in UX design; maybe it wasn’t a truism when Steve said it–I don’t know.

Whether you should listen to customers is not a simple yes/no question. The real questions are when you should listen to customers, how you should listen to and ask them, and how you should incorporate what they tell you into your design process.

Asking customers what they want can be valuable, especially when it comes to refinement. Once you release something, potentially something innovative, customers will (hopefully) begin to use it and tell you (if you are listening) what they like and don’t like, their pain points, and their aspirations. All of that is extremely valuable, and you are stupid and arrogant if you ignore it.

But you don’t just make what people ask for. What they ask for is just an indicator of what they need, and sometimes it is a misleading indicator. For example, on Indigo Studio, we sometimes get asked “are you gonna do X?” where X is something they are used to doing with another tool. Sometimes X doesn’t fit at all with the design principles and goals of Indigo; in those cases, we’d have to have most of our customers demanding it before we’d do it, and even then, we would adapt it as best we can to the Indigo design language. More often, we actually can do X, just not quite in the way they are used to doing it–we can meet the need, but by different means.

The important thing is that you don’t take what customers say at face value. You try to understand what the real need is and design for that in the context of what makes the best design given all of your constraints and goals.

Focus groups are one of the least valuable ways to get feedback due to the group bias factor. Surveys can help, but they have to be crafted and analyzed well. One-on-one interviews are better, but you have to be careful not to lead people too much (and they are one of the more expensive methods). In-context observation usually yields design insights you wouldn’t get from a dialogue, and that’s also where a lot of potential innovation can come from. All of these are different ways you can listen to customers, and they can be applied effectively when they are appropriate.

At the end of the day, though, you have these inputs, but they are just that–inputs, and you have to lean on designers to come up with creative solutions. You have to foster that creativity and provide room for it in your process and environments.

Having customers involved early usually won’t–on its own–lead you to groundbreaking, innovative solutions, but their input into the design process provides good signposts that add healthy constraints to your solutions, point you roughly in the right direction, and help refine existing solutions in use. So while we might agree with Steve that you shouldn’t “design by focus group,” that hardly means “never listen to your customers.”

Is Responsive Web Design a Lie?

I’ve previously written about the importance of keeping a focus on users when thinking about responsive Web design. I’ve also written about ways to think about responsive Web design in the context of doing interaction prototyping. I’ve personally had some experience designing responsive interfaces. I’ve spent time with my colleagues thinking about how to support responsive Web design in Indigo Studio, and I’ve talked to other designers about their experiences with RWD, including their experience with other tools.

Here’s the thing: so far, it seems to me that responsive Web design is a lie. It is snake oil. To be clear, I’m not saying that the actual technique of using CSS media queries to set breakpoints is a lie. That is a fact. You can do that. What I am saying is that what most people seem to take away from RWD after learning about it is this:

Hey lookie, I can design/code one thing instead of three (or more) and have it just work on all those things equally well.

That. That is a lie. It’s a pipe dream. You’re smoking something. It’s just not that simple–not from a design perspective, nor from a code/dev perspective, except in the rarest, simplest cases. Even if you can simply use media queries, things can get pretty convoluted pretty easily. Even something as conceptually simple as an image is troublesome in RWD.
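
To make that concrete, here is a minimal sketch of how breakpoint handling tends to accumulate special cases. The breakpoint values and behaviors are purely illustrative (not from any real project), and the same accretion happens whether the rules live in CSS media queries or, as here, in script via the standard matchMedia API:

```ts
// Illustrative breakpoint values only -- not recommendations.
const phone = window.matchMedia("(max-width: 599px)");
const tablet = window.matchMedia("(min-width: 600px) and (max-width: 1023px)");

function applyLayout(): void {
  if (phone.matches) {
    // Collapse the nav into a menu button, stack panels vertically...
    // ...and special-case the data table, which still overflows.
  } else if (tablet.matches) {
    // Two columns, but the carousel gestures conflict with scrolling...
  } else {
    // Desktop: three columns, hover states, keyboard shortcuts...
  }
  // Each device-class tweak added here (or in the CSS equivalent) is
  // another case every future change has to account for.
}

// Re-run the layout logic whenever a breakpoint is crossed.
phone.addEventListener("change", applyLayout);
tablet.addEventListener("change", applyLayout);
applyLayout();
```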

The reality is that most folks, even big RWD advocates, usually advocate for more than media queries–like RESS (responsive design with server-side components). Sometimes this is strictly to help with performance (especially on mobile), but it can also be to deal with complexities and/or simply to make the desired designs feasible.

And RWD is sneaky. You start out thinking, hey, I’ll just add this breakpoint, then I just move this here, this there, and so on. Then you realize, hey, that’s actually not that great for <the phone | the tablet | the desktop | the TV>, and you change this little bit here, that little bit there, and before you know it, you’ve got a huge jumbled mess that is actually harder to improve and maintain than if, for instance, you had just created different, cleanly separated apps in the first place.

OR (even worse from a UX/Design perspective), you start making design concessions in order to keep the thing manageable from a technical perspective. Oh, it’s not that bad, really, you tell yourself. Or you might think, if I do that better design, it’ll be hard to implement and hard to maintain, so we’ll stick with this half-ass solution–just for the sake of the principle of RWD. This is bad for prototyping and bad for design.

So let’s zoom out a bit. What are we actually trying to achieve? I’ve heard basically three goals:

  1. One URL. When you give people a URL (e.g., in an ad) or they find one (e.g., in something shared on social media), you don’t want to land them on a solution that was designed for the wrong device class. You don’t want to offer (normally) a “view this in our standard/full site” or vice versa. It should just work and just be optimized for their device class (see my article describing device classes).
  2. One set of code/design. Across device classes, there is a lot of stuff that can be shared. The benefits are less implementation time and the fact that when you update something in one place, it is updated everywhere. The underlying benefit is simply that it is theoretically more feasible to target multiple device classes if you leverage this one-solution-for-all sharing.
  3. “Fluid” resizing. When orientation changes on a device or a browser window is resized, it fluidly rearranges.

RWD is promoted as the current/best solution to these. But the devil is in the details. Sure, it solves #1 hands down. But #2 is where things get hairy, as noted above. I would suggest that #3, in most cases, is really just not that important. If there were interactions in your solution that made resizing/changing orientation an integral activity, and you wanted that fluidity to enhance those interactions, maybe it is valuable. But in most cases, how fluidly it resizes is far less important than the fact that it does. Most people don’t sit there and resize just to observe the beauty of the fluidity of the layout system.

Where do we go from here, then?
One way forward is to keep holding onto the RWD dream and try to beat it into submission. Make it work, no matter how hairy things get. This is an ideological choice–you value the RWD pattern more than actually crafting solutions that are best for your users/business. The other reason to press ahead with RWD is that you are naive and believe that RWD will deliver everything it promises. Either way, it’s not a great premise.

So shouldn’t we be asking ourselves: are there better solutions to achieve these goals?

Some observations:

  1. There are no silver bullets. If you want a great experience for people on each device class, you have to design for each device class. This is work. It may or may not imply that you can leverage RWD as an implementation technique, but there is no free design here. If you want things to fluidly and intelligently reflow and rearrange, you have to define the rules for your context. And beyond very basic layouts/nav, you are going to need to reimagine how some things work depending on the device class–what works well from an interaction perspective on a phone is different from a tablet, which is different from a desktop, which is different from a TV, which is different from a kiosk, which is different from wearable devices… No tool, whether you hand-code RWD or use a WYSIWYG tool like Indigo, can protect you from having to think about and design for these different device classes and their related contexts of use.
  2. The more complex your solution is (on one platform), the more complex it will be to make it work across platforms, and that complexity increase is non-linear, especially when you try to make one set of code work for all of them.
  3. What we have here is essentially another incarnation of separation of concerns. The reasons you should keep your behavior separate from your structure separate from your styling are fundamentally the same reasons you don’t want to bunch everything up trying to serve multiple device classes in one solution.

So I think if we consider the problem with a view to separation of concerns, the solution is clearer.

One App, One Platform
One solution is, of course, the independent, per-platform app. Most people will agree, as a rule, that this is the best approach to maximally optimize the experience for each platform (to say nothing of each device class). What will feel best on Windows Phone is an app that was designed for Windows Phone, and the same applies for the other major platforms.

This implies the least amount of reuse across platforms and the highest cost of implementation and maintenance. The problem is that this is not feasible for a very large segment of people who need to make software run on many devices.

One App, All Platforms
This is essentially the RWD solution. In theory, it sounds great. In reality, it’s a lot more complicated than it sounds and often can result in suboptimal experiences on each device class. Further, there are hidden design, implementation, and maintenance costs.

There’s also a lot of talk about “future proofing,” but that too is only a half truth. At a minimum, it presumes similar input/interaction modalities for new devices, which are almost certainly not going to be there in many cases. And if the new interaction modalities can be mapped to existing ones (think, for example, touch gestures to mouse/keyboard), it again could easily result in suboptimal experiences for these new device classes. So it may or may not work on new devices/interfaces–that’s the best we could claim. Hardly a compelling reason to adopt an approach.

One App, One Device Class
This is the path that seems most viable to me. The basic premise is that both your designs and your implementations are cleanly separated across device classes (i.e., you have a phone solution, a tablet solution, a desktop solution, etc.). You can even map device classes together if they are close enough–you could have a tablet/desktop solution and a phone solution, or a desktop/TV solution. If the similarities are good enough, you can still make those trade-offs and combine classes, without (as with RWD) assuming all-in-one. This approach has a number of advantages:

  1. It is honest about the complexities involved, both from a design and (potentially) implementation perspective. It treats the experience with a particular device class as something that should be considered on its own.
  2. It makes it easiest to optimize for each class. Through separation of concerns, the design effort is cleaner and, even more so, the code is cleaner. You don’t have elements and their styles stomping on each other.
  3. It strikes a good balance between optimization and reusability. You don’t have to make an app per platform, nor do you have to contort your mind and code and (often) the experience to suit all possible interaction modalities and layouts in one solution.
  4. It is honestly future proofed. The underlying technology will probably still be the Web. And as new devices emerge, you can see whether the device can be assigned to an existing class or not (based on more than the simple width of the viewport). If so, it’s simple enough to “turn it on” for that device and have it share an existing device class solution. If not, then at least you’re being honest about it and not serving up some half-baked solution. You can choose whether the new device class is worth investing in a specialized solution. If it is, there will probably be multiple vendors with that class of device, so you can still target cross-platform within the class.
  5. You can be more strategic about what is shared or not shared. Often your data services can be shared across most classes. Your content can be shared. Your styling, to an extent, can be shared. Even individual pieces of the UI can be shared. Plus, you can encapsulate what is shared more cleanly. Instead of starting from a base that makes everything shared by default (RWD), you select the things that make sense and share them. This makes the sharing and the per-device-class code cleaner.
  6. You can avoid improvement paralysis. If every change you make has to be simultaneously made for every device class, it makes you that much more hesitant to make changes–you have to be ready to deal with them all at once. This applies equally to changes that make sense for all device classes and to optimizations that make sense for just one. When everything is mixed in together, you always have to worry about unintended consequences for everything that shares it. With a per-device-class approach, you can feel much more confident that your design changes are safe. You can tackle one class at a time, which is ironically more manageable than all at once, and you can make sure the change is optimized for that class.
  7. You can still achieve fluid resizing per device class (whether that is resizing windows or changing orientation), if you feel that’s important enough to invest in. If it is Web, you can still leverage its built-in reflowing capabilities and even use some RWD techniques.
  8. You can still manage URLs and load the appropriate experience for a device class, in most cases. Again, if it is Web, there are plenty of techniques you can use to detect the device class from user agent information and serve up the appropriate solution, with a good default (see the sketch just after this list).
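
As a rough illustration of that last point, here is a minimal sketch of serving a per-device-class solution from a single URL. The device classes, the regexes, and the renderShell function are all hypothetical placeholders (real detection would lean on a maintained device-detection library rather than ad hoc regexes), but it shows the shape of the idea, including the good default:

```ts
import * as http from "http";

type DeviceClass = "phone" | "tablet" | "desktop";

// Hypothetical classifier -- a real one would consult a maintained
// device-detection database instead of these illustrative regexes.
function classify(userAgent: string): DeviceClass {
  if (/Mobi|iPhone/i.test(userAgent)) return "phone";
  if (/iPad|Tablet/i.test(userAgent)) return "tablet";
  return "desktop"; // the good default
}

// Hypothetical per-device-class app shells; in practice these would be
// separate, cleanly maintained solutions sharing services and content.
function renderShell(device: DeviceClass): string {
  return `<html><body data-device="${device}">${device} app shell</body></html>`;
}

// One URL for everyone; the server picks the device-class solution.
http
  .createServer((req, res) => {
    const device = classify(req.headers["user-agent"] ?? "");
    res.writeHead(200, { "Content-Type": "text/html" });
    res.end(renderShell(device));
  })
  .listen(3000);
```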

None of this means that RWD is never a solution (it could even still be part of the solution). The problem is that it has become the proverbial hammer that makes every app look like a nail. People are assuming it is the way to approach both prototyping and implementation. The hype around it causes expectations to be wildly disproportionate to the reality, which ends up sending people down the wrong road and causing all sorts of unexpected pain. And it ties them to that approach–a spiral of doom that is hard to break out of if you need to.

If all you have is a basic informational Web site with basic navigation, you can probably get away with just RWD without it being too painful. There may be enough things in its favor to warrant that approach. Even so, you risk writing checks that will bounce if, for instance, you think this means that all this responsiveness is “for free” or that it means you are future proofed. On the other hand, it seems to me that assuming a baseline of per-device-class designs and solutions, strategically sharing across them, is a much more realistic, honest, and optimal approach in most cases. What do you think?

Adobe Tools Are Not UX Designer Tools

If you’re looking to hire a competent UX professional, do not ask for “Experience with Adobe Tools” in your job description. Especially don’t ask for Photoshop. Even visual designers are waking up to the fact that Photoshop is not a good software UI design tool.

UX design is a distinct skill set from visual/graphic design. They are complementary, and some UX designers are competent visual designers while some visual designers are competent UX designers, but they are still distinct skills, much like development is distinct from design.

A UX designer should basically never use Photoshop. Illustrator is a decent hackable tool, but if you’re going to go that route, you might as well just use OmniGraffle. Still, all of these are basically just for static wireframing/UI comps, with varying levels of hackability to communicate interaction design intent. Adobe had an interesting UX design tool for a while called “Flash Catalyst,” but they killed it–because they killed Flash, I deduce.

If you’re going to pick a software tool for interaction design, it should be one that is suited to exploring interactions, which implies interactivity; i.e., as a designer, I can say, “when a user does <insert name of user action here>, the app should do this…” At a super bare minimum, clicking should be supported, but seriously, what viable apps these days only support static, full-page/screen-refresh navigation? So then you get into needing to explore and express transitions and animations. I’m not talking about fancy dingbat silly animations. No, I’m talking about animations that help users understand and interact effectively with a given UI design.

At this point, the field of software tools that your average competent UX designer can grapple with narrows considerably. You can of course code prototypes, but that’s generally not the best idea. So you want a tool that allows a UX designer to explore and express user interactions and app responses to those interactions but doesn’t get them bogged down in code.

Now I am biased having worked extensively on it, but the only tool that really qualifies there is Indigo Studio. Sure, Axure is another alternative, but it is significantly more complex to use and tied to the details of the Web platform.

So if you’re going to ask for a software tool competency for a UX designer, pick one of these. But really, as long as a UX designer can effectively explore and communicate design ideas, it doesn’t matter what tool they use. If you are constraining them to specific tools, something is wrong with your process. What you need to look for is evidence of good designs–both designs and implementations, as well as evidence of design research and evidence of design evaluation. Ask about their process and techniques they use to discover the best designs. Just don’t ask for Adobe tool competency.

It Feels Good to Know and Do Things

(Image: He-Man declaring, “I have the Power!”)

Every so often, another article appears somewhere advocating creating prototypes by coding. There are many drawbacks to doing that, not the least of which is simply wasted time–time spent dorking around with code that would be better spent evaluating, iterating, and synthesizing design ideas. In response to one such article, I penned “Yes, Ditch Traditional Wireframes, But Not for Code,” which goes over the various drawbacks.

Prototyping is Hard
I suspect part of the reason people want to jump into code is a misunderstanding about what a prototype needs to be. Many people, when you say “prototype,” think of something like a near full-on app simulation; they worry about whether or not it is responsive; or at least, there is some latent idea that prototyping is time consuming and involved. This does not have to be the case, and in fact, I would suggest it is not good if it is the case, at least for the most common prototyping needs–the ones that enable you to explore interaction designs and find the best one.

Prototyping Tools Are Hard
Another part of the problem, related to this weighty idea of prototypes, is that most prototyping tools are themselves time consuming to learn and use, even if you don’t want to build a particularly deep, complex prototype. That is a core problem we have tried to address with Indigo Studio; we focused on the idea of sketching prototypes, that is, making the creation of a prototype as easy and simple as sketching out ideas on paper or a whiteboard (and even faster than that).

You’re Just Biased
Now, some have said, “Ambrose, you only advocate code-free prototyping because you have a vested interest in hawking Indigo Studio.” Well, leaving aside that this would be an ad hominem fallacy, I will first point out that Indigo Studio v1 is totally free of charge, and that you can keep it forever–you never have to upgrade. Everything I advocate for is essentially contained in the free version, so I have little to gain. I am also not saying Indigo Studio is your only code-free option; I just happen to think it is the best. 😉

Second, I invite anyone to spend the amount of time it takes to become effectively familiar with any code-based prototyping framework. Then spend the same amount of time familiarizing yourself with Indigo Studio. I kid! You needn’t spend anywhere near that much time to become effective with Indigo!

And once you are passingly capable with both tools, do a head-to-head challenge, starting from zero. I guarantee that in the time it takes you to just get a project environment set up with your favorite prototyping framework, you will already have created a working prototype in Indigo. It’s just that fast and easy.

Nope. It Really is More Efficient and Effective for Design Exploration
What I’m saying is that, essentially, by any objective measure, it will be faster to create prototypes that are good enough for evaluation in a tool like Indigo. Not only that, Indigo helps keep you from being unnecessarily distracted by unimportant details and helps you stay focused on users and their concerns; coding does the opposite on both counts.

Now granted, there are exceptional circumstances, but I’m talking about a general rule here. If nothing else, you don’t need to invest a lot to sketch prototypes with Indigo, so you don’t lose much if you find that, for whatever reason, Indigo doesn’t suffice for your evaluation/design exploration. The inverse is absolutely not true with coding frameworks.

It Feels Good to Know and Do Things
Given all this, I have been thinking about why people would still cling to the idea that jumping right into a coded prototype is the best way to go, as a rule, for designing. I think at least part of it, if not a large part of it, has to do with simply feeling more knowledgeable and competent.

There is a certain satisfaction that comes with possessing arcane knowledge (like how to code)–one joins the ranks of the elite designers who can code. There is also a certain sense of accomplishment in using that knowledge, struggling with code, and coming out on top in the end (assuming you do come out on top and don’t walk away defeated). It’s like He-Man–by the power of code school, I have the powerrrr!

As someone who first learned to code and worked for years as a professional developer, and then learned to design and worked as a professional interaction designer, I can relate. (I can also, thereby, speak from experience and not ignorance when I say that coding prototypes is, as a rule, a less effective starting point for design exploration.) The challenge for those of us who can code is to ensure that we are making choices based on what is best for the design problem at hand, not what best stimulates our own sense of empowerment and accomplishment.

It can be fun to code–especially when you are new to it. It’s similar to making cookies from scratch, the way grandmama used to make them, instead of just buying the pre-made dough you break apart. That’s fine when it’s for our own entertainment and enrichment, but when we’re being paid as professionals to be as effective and efficient as possible in designing the best thing we can, we probably should think twice about taking the slower prototyping approach just because we enjoy it more.

There Is Satisfaction in a Job Well Done
And that’s not to say that there is no enjoyment in using code-free tools. It’s just a different kind of enjoyment and satisfaction, one that comes from feeling more efficient and effective in solving design problems rather than coding problems.

I am not saying definitively that one should never code a prototype–far from it. But I am concerned about this trend in the software design community of advocating, in one’s enthusiasm for one’s own skills, coding as somehow superior or more effective for doing design work. Most of the reasons given for it miss the mark on design/human concerns, all the while ignoring the many hidden drawbacks.

The rule should be to avoid coding except when you are fairly sure it is the only or most effective way to prototype your design ideas.