Designers and Artists, and Hiring

A colleague pointed me to this good piece on “how to hire designers” today. Lots of good stuff there. My favorite line was this:

Great art workers, great graphic artists, certainly. But not designers.

Having been on both the hiring/managing side and the interviewee/employee side in Design, much of what he says resonates. There surely is a common misconception that “designer” = “artist.” In other words, if you are a designer, you must excel in visual design, more specifically graphic art/styling. The worst perpetuators of this misconception are visual designers themselves, from what I have seen (i.e., the artsy fartsy types who land in a career in business/software). The Dribbble debate certainly reinforces that I’m not the only one seeing this.

(As an aside, I don’t really have any axe to grind with visual designers or artsy fartsy types. I actually like them a lot–some of my favorite colleagues and family members are numbered among them, and I have found they bring immensely valuable perspectives, ideas, talents, and skills to the table. So my critique on this small point is in friendship and mutual respect.)

“Creatives” and “designers” are words that people associate and use typically in relation to visual/graphic designers. The reality is that “creativity” is abundant in all sorts of disciplines, even in “boring” engineering, and certainly in other areas of Design. Because Design is more about creative problem solving than art. 

Design as a human-oriented activity is fundamentally about empathy. It is an invitation to others to come into your consciousness, to take up a certain residence there, and to become, to some extent, one with you. The designer is ultimately trying to solve problems for other people in a way that, while it may have never occurred to those people–once that way is realized–seems as natural (dare I say intuitive) as if those people had designed it specially for themselves. That’s the ideal, and all designs are progressive steps away from that ideal.

Art, on the other hand, is essentially an expressive endeavor. Rather than inviting others into yourself, the artist seeks to make him or herself known to another. It is a going out into, not a welcoming into. It is a “get to know me” rather than an “I want to know you.” It is “listen to what I have to say” rather than “let me listen to you.”

Now, it follows, to a degree, that someone who is very good at art, i.e., good at effectively communicating and evoking intended responses, will have some natural proclivity in the realm of Design. But such expression is only good in Design inasmuch as it is reflective communication, in a sense, evoking in people what they already desire, giving form and function to satisfy those desires in a way that speaks their language. Put another way, if each person had the time and artistic talent, the design would be a kind of art for them–it would be their expression of that aspect/extension of themselves.

But it does not follow that a good artist necessarily is a good designer. An artist must take their expressive talent and use it on behalf of others in order to bridge the gap from art to design. The skills, techniques, and talents used in design are distinct, however. And to a large extent, they are more learned than innate. 

This is why it is possible for a non-artist to be a good designer. Someone with low talent for self expression can learn the techniques and practice the skills needed to effectively design. Their designs may not be as wholly effective as those of someone with the same skills and tools who also is good at expression; on the other hand, a talented artist who has not become as skilled at design–or who has devalued/allowed their design skills to wane–can often create far worse designs than a skilled designer who is a so-so artist.

That, I think, is what the Dribbble concern is about. At its heart, that sort of thing is about self expression. It is art. A Design portfolio almost shouldn’t include beautiful, pixel-perfect mockups. Certainly, they should not be the extent of such a portfolio. It is somewhat sad and distressing that the majority of portfolio-in-a-box sites are essentially that–show some pictures with a few words.

Again, this is not to say (by any means) that artistic talent in design is not important. Let me be clear: artistic talent is important and valuable. And depending on the design problem being solved, it can be very important and valuable. But if your motivation is to find a designer that can turn you into the next, say, Apple (a common businessperson desire), your first priority should not be to look for a “beautiful” or “kick ass” portfolio of graphic design (which are both terms I’ve seen in job descriptions).


First, unless you yourself are a talented, experienced designer, you need to take a humble pill. You are probably not very qualified to judge whether or not someone is a good designer. Treat the people you are interviewing with mutual respect and acknowledge that you are not really the best person to judge their design skills, but maybe you are the “lucky” person who has to make the hiring decision. With that in mind, here are some suggestions that may help.

  1. Take some time to understand what went into any given design they share with you. You need to see evidence (or at least have a discussion) of how the designer came to empathize with the people she or he was designing for.
  2. Similarly, you would want to find evidence of how the designer came to an understanding of the value/desired outcomes for any given project. If you are a businessperson, you should be especially sensitive here–this is how they will interface with you.
  3. You would want to see things that show they are aware of common techniques in Design. You would want to see things like how they captured user stories, and how those flowed into their design process. You ideally would see multiple sketches, a progression of fidelity. You would see evidence of thinking in terms of interaction flow, not just pictures or screens–those only exist to support the flow. The flow should reflect upon the captured stories.
  4. As important, you want to see evidence of execution–not their doing the coding, but evidence that there was a realization of their design intent. Is/was there attention to detail? How did the design evolve as the project progressed? If the design intent was not realized, why not? This is perhaps the biggest and most important area to drill into. It’s one thing to paint up some pretty pictures in Photoshop and show those as evidence of “design”; it’s entirely another thing to see the actual outcome. If the designer is reluctant to share the running implementation because “it didn’t turn out how I wanted it,” you need to understand why–because the outcomes are what matter, not the pretty pictures and idealized intent in designers’ heads.
  5. Did the design succeed? How much did it change based on actual usage? How did it change? How was the need for change discovered? When was it discovered? It’s still quite common in software for significant changes to only be discovered after a release, and depending on the project/process used, that can be a really big red flag that the designer did not do due diligence to evaluate his or her designs with the people intended to use it. If the learning was after, you want to hear that this was intended and expected (such as in Lean-type processes).
  6. Lastly, you want to find out how they have dealt with teams where their influence was not accepted or respected. I can guarantee that if they are a designer with any experience, they have stories about their contributions/skills being devalued. It’s just a fact of life for designers, and it’s important to understand how they dealt with that. Did they just wash their hands and walk away? Did they find a successful way to integrate and have the influence they felt they needed? What did they learn from these experiences and how did they apply that learning in the future?

To sum up, if you are hiring a designer, and you let yourself be guided primarily by a beautiful portfolio, you are doing it wrong, unless maybe you are literally looking for a graphic artist for some static medium (and possibly even then). As a rule (despite its inverted importance in the industry today), a “beautiful”/“kick ass” portfolio should be one of the last things you consider, and the importance of that should be conditioned based on your needs.

You will eventually want to look at their artistic evidence, but not first or foremost, unless you are specifically hiring for artistic talent as the primary goal. It is far more feasible to augment great design talent with good artistic talent than it is to expect a great artist to be a great designer. And if you can find both in one person, hang on to them!  They are rare and valuable creatures. More commonly, you will find that people tend to excel in various areas (as illustrated in this post), so be cognizant of that and hire accordingly.

P.S. I intentionally avoided the “UX” moniker here. It is such an abused term that it is almost valueless in identifying good designers. Anyone can self-identify as a “UX designer,” from any number of backgrounds. And that’s okay, because people can come from many backgrounds to learn UX/Design skills. That’s also why, though, you need to consider the above suggestions as a way to suss out those who are just claiming the name. At the end of the day “UX” is, from what I have discovered, essentially the same thing as good ol’ Design (with a big D). But “UX” can help to at least identify aspiring candidates who want to make great software for people.

It’s hard to hire good designers–good luck!

TypeScript: To Be or Not To Be


I’ve been having an ongoing convo with some of my colleagues (e.g.) at Infragistics about TypeScript as of late. Now by way of preface, I started my professional programming career in ASP, have done more than my fair share of ASP.NET, was an ASP/ASP.NET MVP for years, and an ASPInsider for even longer, so let’s just say I’ve had a ringside seat, as it were, to Microsoft DevDiv’s relationship with the Web over the years. That said, for the last few years, I’ve been more focused on UX and the broader Web community at large, and I’ve spent a lot more time coming at Web dev from the non-MS perspective. So my angle in the discussion is something of a friendly Devil’s advocate.

Let me start by saying that Microsoft wins the overall dev productivity competition hands down. Visual Studio is an absolutely top-notch, amazing IDE, and there really is nothing that can compete with it in the overall category. On top of that, Microsoft has produced some top shelf Web dev tooling over the years. There really is no doubt about that, but there is always room for improvement.

The Web, in particular, presents a very challenging, wild-west environment for tooling. The technologies themselves don’t lend themselves well to it, and there are all sorts of hacks and workarounds to address the problem, including some produced by Microsoft. To be a guru in Web dev requires mastery of so many different moving pieces and nuances: multiple, not-very-well-defined languages, innumerable libraries and frameworks, and a vast number of potential runtime environments. It’s a hard egg to crack when it comes to productivity tools, not to mention just general dev productivity, defined as time spent actively producing value-adding software assets.

The Problems for TypeScript

Unfortunately, TypeScript comes across as a sort of “here, let me fix that for you” in the broader JavaScript world.

The problem is exacerbated because some advocates of TypeScript position it as “better” than what everyone else is doing. They say things like, “it’s for ‘large-scale,’ ‘complex’ apps.” Heck, even the homepage says it in big bold letters: “TypeScript is a language for application-scale JavaScript development.” In other words, Rest of World, all those apps you’ve been making with JavaScript for years without TypeScript are puny, simplistic, toy apps.
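
To be fair to the claim, here’s a minimal, hypothetical sketch of what TypeScript actually adds on top of JavaScript: optional static type annotations and interfaces that are checked at compile time and then erased, leaving plain JavaScript behind. The names here (`Point`, `distance`) are my own illustration, not from any official documentation.

```typescript
// Interfaces and annotations are checked by the TypeScript compiler,
// then erased in the emitted JavaScript.
interface Point {
  x: number;
  y: number;
}

function distance(a: Point, b: Point): number {
  const dx = a.x - b.x;
  const dy = a.y - b.y;
  return Math.sqrt(dx * dx + dy * dy);
}

// Calling distance("oops", 42) would be a compile-time error
// rather than a silent runtime surprise.
const d = distance({ x: 0, y: 0 }, { x: 3, y: 4 });
console.log(d); // 5
```

Whether that compile-time safety net is worth the extra tooling for your team is precisely the productivity question at issue.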

There’s also an undercurrent of fear of assimilation–oh great, the Borg have discovered JavaScript. Microsoft has at least partially earned that sentiment over the years, even if it is exaggerated by most critics, and even if all the great folks producing the technology themselves are not to blame. Trust me; I’ve known many of them, and they really aren’t agents of evil any more than you or I. I promise!

In any case, these perceptions are compounded due to various other biases and prejudices against Microsoft, however well or ill founded those may be for any given individual. All that stacks up to a lot working against TypeScript, certainly more than basically any other individual or organization creating a similar solution would face. And then you have folks who:

  • are just against any sort of preprocessor (WTF!?)
  • really prefer dynamic over static (for whatever reasons)
  • feel like this is just one more thing to learn
  • are fans of one of the existing quasi-competing solutions
  • are well invested in an alternative solution
  • are worried about adopting a niche technology, when the whole point of the Web is openness and interoperability


Adoption Indicators

Measuring adoption and market share is always challenging, especially with the Web. However, here are some indicators around TypeScript.

Job Listings

First off, looking at job listings is always interesting in this area. It can tell you what people currently are doing as well as what is in the immediate plans. Numbers were accurate when I wrote this down. 🙂

That’s 0.08% of JavaScript jobs that are asking for TypeScript! I listed some other major JS solutions for dealing with complexity and increasing productivity in JS app dev, for comparison. Angular and Ember have been around roughly as long.


Oh, and that’s another thing–CodePlex? Really? 😉

Language Popularity (GitHub)

I just ran across this today, TOP GITHUB LANGUAGES FOR 2013 (SO FAR). You have to scroll down to the Top 100 lists to see TypeScript. It’s pretty puny. Whether or not this is a “fair” comparison is debatable, but it is just one data point/indicator of current popularity.

The Buzz

This is impossible to reasonably quantify, but I do keep a good eye on Web dev stuff in general, and except for asking “is anyone using TypeScript,” pretty much the only time you see people talking about TypeScript is if they are Microsoft devs or Microsoft employees. Granted, it’s still very young/early, but it is a data point nonetheless.

Even though the solutions compared above are different (language vs framework vs library), the core problems of productivity, maintenance, large scale, and so on are the target for all. The fact is that real, large-scale Web apps have been built and are being built using these technologies as tools to enhance productivity and overall effectiveness. And this is the space that TypeScript competes in.

All this doesn’t paint a pretty picture for TypeScript, so…

What Does TypeScript Need to Be Widely Adopted?

Ken and Dennis

A really great beard! Come on, Anders! It’s a miracle that C# ever went anywhere! 🙂 I kid…

Let’s face it, there is a percentage of Web devs out there who are more or less lost to Microsoft, those who are deeply and irrationally against any and everything that Microsoft does. In a word, haterz. There’s no point in focusing on them, but I honestly don’t think they’re the vast majority. Most devs at heart value technical merit, and most, however altruistic they may be, do have to answer for their productivity and/or simply want to be productive for their own satisfaction.

So that’s the key thing:

TypeScript needs to demonstrate that it makes you significantly more productive.

Forget making claims about it being for apps that are supposedly more complex or larger scale. Forget about inane, academic, ideological arguments about the value of static typing. They smack of arrogance. They inevitably lead to rabbit holes, and they stir up all sorts of irrational passions in otherwise intelligent people.  Just stay focused on productivity.

And the thing is, I think the team behind it gets how valuable this is. I mean, Microsoft DevDiv is all about dev productivity. Consider this from Anders shortly after the launch:

Specifically, [Microsoft] wanted to make it possible to create world-class developer tools that would make building large web applications easier, without breaking compatibility with existing browsers and standards. (Source)

Okay then. Let’s see more of the world-class dev tooling built on top of TypeScript!

Oh and ideally, let’s see the great tooling on all of the major desktop OSes. This latter would help a lot to dispel the notion of borgishness and is, IMO, essential to the widespread adoption of Web technologies in general. From a Microsoft business perspective, it makes sense in numerous ways, not the least of which would be as a way to make Azure even more attractive to a broader audience. Of course, you can’t be pushy about that, either, but I’m talking about things like increasing awareness, increasing positive associations (through enhanced productivity), making using Azure that much easier, and so on.

When to Listen to Customers

A colleague recently shared this article saying that Steve Jobs never listened to his customers. That, IMO, is a bit off. I am fairly sure it is not accurate to say that of Steve Jobs, but more importantly, I don’t think it is something that should inform how we do design ourselves.

When you rely on consumer input, it is inevitable that they will tell you to do what other popular companies are doing.

Exactly. Biz stakeholders do this, too. But nobody wants you to replicate exactly what someone else has done–what would be the point of that? The takeaway is just inspiration–there is something about these examples they are giving you that is inspiring to them. Maybe you can drill in on what they like about the existing solutions, or maybe you just take them as indicative and try to find the good in them yourself (or both!).

…your insights are backed up with an enormously expensive creative process populated by world-class designers

A key point–valuable innovation does not come cheaply. It’s not a first-idea-out-of-your-head-is-right kind of thing. It’s not a process that prioritizes efficiency and cost management above discovering the best solution. This doesn’t mean good design is unattainable without a big budget and “world-class designers,” but it needs to be recognized that it doesn’t come free. It can’t really be tacked on as an afterthought, and the team and stakeholders at least need to be invested in the goal of great design balanced with other concerns (usually cost and time).

It’s really hard to design products by focus groups.

Of course. This is a truism in UX design; maybe it wasn’t a truism when Steve said it–I don’t know.

Whether you should listen to customers is not a simple yes/no question. The real questions are when you should listen to customers, how you should listen to/ask them, and how you should incorporate what they tell you into your design process.

Asking customers what they want can be valuable, especially when it comes to refinement. Once you release something, potentially something innovative, customers will (hopefully) begin to use it and tell you (if you are listening) things they like, don’t like, pain points, and aspirations.  All of that is extremely valuable, and you are stupid and arrogant if you ignore it.

But you don’t just make what people ask for. What they ask for is just an indicator of what they need, and sometimes it is a misleading indicator. For example, on Indigo Studio, we sometimes get asked “are you gonna do X?” where X is something they are used to doing with another tool. Sometimes X doesn’t fit at all with the design principles and goals of Indigo; in those cases, we’d have to have most of our customers demanding it before we’d do it, and even then, we would adapt it as best we can to the Indigo design language. More often, you can actually do X, but not quite in the way they are used to doing it, that is, you can meet the need but in a different way than they are used to.

The important thing is that you don’t take what customers say at face value. You try to understand what the real need is and design for that in the context of what makes the best design given all of your constraints and goals.

Focus groups are one of the least valuable ways to get feedback due to the group bias factor. Surveys can help, but they have to be crafted and analyzed well. One-on-one interviews are better, but you have to be careful not to lead people too much (and they are one of the more expensive methods). In-context observation usually yields design insights you wouldn’t get from a dialogue, and that’s also where a lot of potential innovation can come from. All of these are different ways you can listen to customers, and they can be applied effectively when they are appropriate.

At the end of the day, though, you have these inputs, but they are just that–inputs, and you have to lean on designers to come up with creative solutions. You have to foster that creativity and provide room for it in your process and environments.

Having customers involved early usually won’t–on its own–lead you to groundbreaking, innovative solutions, but their input into the design process provides good signposts to help add healthy constraints to your solutions, point you roughly in the right directions, and to refine existing solutions in use. So while we might agree with Steve that you shouldn’t “design by focus group,” that hardly means “never listen to your customers.”

Is Responsive Web Design a Lie?

I’ve previously written about the importance of keeping a focus on users when thinking about responsive Web design. I’ve also written about ways to think about responsive Web design in the context of doing interaction prototyping. I’ve personally had some experience designing responsive interfaces. I’ve spent time with my colleagues thinking about how to support responsive Web design in Indigo Studio, and I’ve talked to other designers about their experiences with RWD, including their experience with other tools.

Here’s the thing: so far, it seems to me that responsive Web design is a lie. It is snake oil. To be clear, I’m not saying that the actual technique of using CSS media queries to set breakpoints is a lie. That is a fact. You can do that. What I am saying is that what most people seem to take away from RWD after learning about it is this:

Hey lookie, I can design/code one thing instead of three (or more) and have it just work on all those things equally well.

That. That is a lie. It’s a pipe dream. You’re smoking something. It’s just not that simple. It’s not that simple from a design perspective, nor is it that simple from a code/dev perspective, except in the rarest, simplest cases. Even if you can simply use media queries, it can get pretty convoluted pretty easily. Even something as conceptually simple as an image is troublesome in RWD.

The reality is that most folks, even big RWD advocates, usually advocate for more than media queries, such as RESS. Sometimes this is strictly to help with performance (especially on mobile), but it can also be to deal with complexities and/or simply to make the desired designs feasible.

And RWD is sneaky. You start out thinking, hey, I’ll just add this breakpoint, then I just move this here, this there, and so on. Then you realize, hey, that’s actually not that great for <the phone | the tablet | the desktop | the TV> and you change this little bit here, that little bit there, and before you know it, you’ve got this huge jumbled mess that actually makes it harder to improve and maintain than if, for instance, you had just created different, cleanly separated apps in the first place.

OR (even worse from a UX/Design perspective), you start making design concessions in order to keep the thing manageable from a technical perspective. Oh, it’s not that bad, really. Or you might think, if I do that better design, it’ll be hard to implement and hard to maintain, so we’ll stick with this half-ass solution–just for the sake of the principle of RWD. This is bad for prototyping and bad for design.

So let’s zoom out a bit. What are we actually trying to achieve? I’ve heard basically three goals:

  1. One URL. When you give people a URL (e.g., in an ad) or they find one (e.g., in something shared on social media), you don’t want to land them on a solution that was designed for the wrong device class. You don’t want to offer (normally) a “view this in our standard/full site” or vice versa. It should just work and just be optimized for their device class (see my article describing device classes).
  2. One set of code/design. Across device classes, there is a lot of stuff that can be shared. The benefits are less implementation time and the fact that an update in one place is reflected in the others. The underlying benefit is simply that it is theoretically more feasible to target multiple device classes if you leverage this one-solution-for-all sharing.
  3. “Fluid” resizing. When orientation changes on a device or a browser window is resized, it fluidly rearranges.

RWD is promoted as the current/best solution to these. But the devil is in the details. Sure, it solves #1 hands down. But #2 is where things get hairy, as noted above. I would suggest that #3, in most cases, is really just not that important. If there were interactions in your solution that made resizing/changing orientation an integral activity, and you wanted that fluidity to enhance those interactions, maybe it is valuable. But in most cases, how fluidly it resizes is far less important than the fact that it does. Most people don’t sit there and resize just to observe the beauty of the fluidity of the layout system.
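
To make the breakpoint idea concrete, here’s a minimal sketch of the kind of device-class mapping that a set of media queries effectively encodes, written as a plain function. The class names and pixel thresholds here are purely illustrative assumptions on my part, not a standard.

```typescript
// Illustrative device classes and breakpoints -- the names and
// pixel thresholds are assumptions for this sketch, not a standard.
type DeviceClass = "phone" | "tablet" | "desktop" | "tv";

function classifyViewport(widthPx: number): DeviceClass {
  if (widthPx < 600) return "phone";
  if (widthPx < 1024) return "tablet";
  if (widthPx < 1920) return "desktop";
  return "tv";
}

// In a browser, you might feed this window.innerWidth and swap layouts,
// much as a CSS @media breakpoint would.
```

Note that a width threshold alone tells you nothing about input modality or context of use, which is exactly where the one-app-for-all approach starts to strain.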

Where do we go from here, then?
One way forward is to keep holding onto the RWD dream and try to beat it into submission. Make it work, no matter how hairy things get. This is an ideological solution–you value the RWD pattern more than actually crafting solutions that are best for your users/business. The other reason to move ahead with RWD is that you are naive and believe that RWD will deliver everything it promises. In both cases, it’s not a great premise.

So should we be asking ourselves, are there better solutions to achieve these goals?

Some observations:

  1. There are no silver bullets. If you want a great experience for people on each device class, you have to design for each device class. This is work. It may or may not imply that you can leverage RWD as an implementation technique, but there is no free design here. If you want things to fluidly and intelligently reflow and rearrange, you have to define the rules for your context. And beyond very basic layouts/nav, you are going to need to reimagine how some things work depending on the device class–what works well from an interaction perspective on a phone is different from a tablet is different from a desktop is different from a TV is different from a kiosk is different from wearable devices… No tool, whether you hand-code RWD or use a WYSIWYG tool like Indigo, can protect you from having to think about and design for these different device classes and their related contexts of use.
  2. The more complex your solution is (on one platform) the more complex it will be to make it work across platforms, and that complexity increase is non-linear, especially when you try to make one set of code work for all of them.
  3. What we have here is essentially another incarnation of separation of concerns. The same reasons you should keep your behavior separate from your structure separate from your styling are fundamentally the same reasons you don’t want to bunch up trying to serve multiple device classes in one solution.

So I think if we consider the problem from a view to separation of concerns, the solution is clearer.

One App, One Platform
One solution is of course the independent, per-platform app solution. Most people will agree, as a rule, that this is the best approach to maximally optimize the experience for each platform (much less device class). What will feel best on Windows Phone is an app that was designed for Windows Phone, and the same applies for the other major platforms.

This implies the least amount of reuse across platforms and the highest cost of implementation and maintenance. The problem is that this is not feasible for a very large segment of people who need to make software run on many devices.

One App, All Platforms
This is essentially the RWD solution. In theory, it sounds great. In reality, it’s a lot more complicated than it sounds and often can result in suboptimal experiences on each device class. Further, there are hidden design, implementation, and maintenance costs.

There’s also a lot of talk about “future-proofing,” but that too is only a half truth. At a minimum, it presumes similar input/interaction modalities for new devices, an assumption that almost certainly will not hold in many cases. And if the new interaction modalities can be mapped to existing ones (think, for example, touch gestures to mouse/keyboard), it again could easily result in suboptimal experiences for these new device classes. So it may or may not work on new devices/interfaces–that’s the best we could claim. Hardly a compelling reason to adopt an approach.

One App, One Device Class
This is the path that seems most viable to me. The basic premise is that both your designs and your implementations are cleanly separated across device classes (i.e., you have a phone solution, a tablet solution, a desktop solution, etc.). You can even map device classes together if they are close enough–you could have a tablet/desktop solution and a phone solution, or a desktop/TV solution. If the similarities are close enough/good enough, you can still make those trade-offs and combine classes, without (as with RWD) assuming everything belongs in one. This approach has several advantages:

  1. It is honest about the complexities involved, both from a design and (potentially) implementation perspective. It treats the experience with a particular device class as something that should be considered on its own.
  2. It makes it easiest to optimize for each class. Through separation of concerns, the design effort is cleaner and, even more so, the code is cleaner. You don’t have elements and their styles stomping on each other.
  3. It strikes a good balance between optimization and reusability. You don’t have to make an app per platform, nor do you have to contort your mind and code and (often) the experience to suit all possible interaction modalities and layouts in one solution.
  4. It is honestly future proofed. Probably the underlying technology will still be Web. And as new devices emerge, you can see if the device can be assigned to an existing class or not (based on more than simple width of viewport). If so, it’s simple enough to “turn it on” for that device and have it share an existing device class solution. If not, then at least you’re being honest about it and not serving up some half-baked solution for it. You can choose if the new device class is worth investing in a specialized solution. If it is, probably there will be multiple vendors with that class of device, so you still can target cross platform that way within the class.
  5. You can be more strategic about what is shared or not shared. Often your data services can be shared across most classes. Your content can be shared. Your styling, to an extent, can be shared. Even individual pieces of the UI can be shared. Plus, you can encapsulate what is shared more cleanly. Instead of starting from a base that makes everything shared by default (RWD), you select the things that make sense and share them. This makes the sharing and the per-device-class code cleaner.
  6. You can avoid improvement paralysis. If every change you make has to be made simultaneously to every device class, you become that much more hesitant to make changes–you have to be ready to deal with them all at once. This applies equally to changes that make sense for all device classes and to optimizations that make sense for just one. When everything is mixed in together, you always have to worry about unintended consequences for everything that shares it. With a per-device-class approach, you can feel much more confident that your design changes are safe. You can tackle one class at a time, which is ironically more manageable than all at once, and make sure the change is optimized for that class.
  7. You can still achieve fluid resizing per device class (whether that is resizing windows or changing orientation), if you feel that’s important enough to invest in. If it is Web, you can still leverage its built-in reflowing capabilities and even use some RWD techniques.
  8. You can still manage URLs and loading the appropriate experience for a device class, in most cases. Again, if it is Web, there are plenty of techniques you could use based on user agent information and serving up the appropriate device class solution based on that, with a good default.
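As a rough illustration of that last point, here is one way the user-agent routing could be sketched. This is a hypothetical sketch, not anything from the original discussion: the device classes, regex patterns, and solution paths are all illustrative assumptions, and real user-agent detection is considerably messier than this (a detection library would be wise in practice).

```typescript
// Hypothetical sketch: route a request to a per-device-class solution
// based on the user agent string, with a sensible default.
// The classes, regexes, and paths below are illustrative assumptions.
type DeviceClass = "phone" | "tablet" | "desktop" | "tv";

function classifyDevice(userAgent: string): DeviceClass {
  const ua = userAgent.toLowerCase();
  // Check tablets before phones: some tablet UAs also contain phone-ish tokens.
  if (/ipad|tablet|kindle|silk/.test(ua)) return "tablet";
  if (/iphone|ipod|windows phone/.test(ua) || /android.*mobile/.test(ua)) return "phone";
  if (/smart-?tv|appletv/.test(ua)) return "tv";
  return "desktop"; // the "good default" when the device is unknown
}

// Each class maps to its own solution, and close-enough classes can share
// one solution (here, a combined tablet/desktop class, as discussed above).
const solutions: Record<DeviceClass, string> = {
  phone: "/phone/",
  tablet: "/tablet-desktop/",
  desktop: "/tablet-desktop/",
  tv: "/tv/",
};

function solutionFor(userAgent: string): string {
  return solutions[classifyDevice(userAgent)];
}
```

Note how the class-to-solution mapping is where the “map device classes together” trade-off from above lives: combining tablet and desktop is a one-line decision, and splitting them later doesn’t disturb the other classes.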

None of this means that RWD is never a solution (it could even still be part of the solution). The problem is that it has become the proverbial hammer that makes every app look like a nail. People assume it is the way to approach both prototyping and implementation. The hype around it sets expectations wildly disproportionate to the reality–which sends people down the wrong road and causes all sorts of unexpected pain. It ties them to the approach, a spiral of doom that is hard to break out of if you need to.

If all you have is a basic informational Web site with basic navigation, you can probably get away with just RWD without it being too painful. There may be enough things in its favor to warrant that approach. Even so, you risk writing checks that will bounce if, for instance, you think this means all the responsiveness comes “for free,” or that you are future-proofed. On the other hand, it seems to me that assuming a baseline of per-device-class designs and solutions, and strategically sharing across them, is a much more realistic, honest, and optimal approach in most cases. What do you think?

Adobe Tools Are Not UX Designer Tools

If you’re looking to hire a competent UX professional, do not ask for “experience with Adobe tools” in your job description. Especially don’t ask for Photoshop. Even visual designers are waking up to the fact that Photoshop is not a good software UI design tool.

UX design is a distinct skill set from visual/graphic design. They are complementary, and some UX designers are competent visual designers while some visual designers are competent UX designers, but they are still distinct skills, much like development is distinct from design.

A UX designer should basically never use Photoshop. Illustrator is a decent hackable tool, but if you’re going to go that route, you might as well just use OmniGraffle. Still, all of these are basically just for static wireframes/UI comps, with varying levels of hackability for communicating interaction design intent. Adobe had an interesting UX design tool for a while–Flash Catalyst–but they killed it, because they killed Flash (I deduce).

If you’re going to pick a software tool for interaction design, it should be one that is suited to exploring interactions, which implies interactivity, i.e., as a designer, I can say, “when a user does <insert name of user action here>, the app should do this…” At a super bare minimum, clicking should be supported, but seriously, what viable apps these days only support static, full-page/screen refresh navigation?? So then you get into needing to explore and express transitions and animations. I’m not talking about fancy dingbat silly animations. No, I’m talking about animations that help users understand and interact effectively with a given UI design.

At this point, the software tools that your average competent UX designer can grapple with get reduced. You can of course code prototypes, but that’s generally not the best idea. So you want a tool that allows a UX designer to explore and express user interactions and app responses to those interactions but doesn’t get them bogged down in code.

Now, I am biased, having worked extensively on it, but the only tool that really qualifies there is Indigo Studio. Sure, there is Axure, but it is significantly more complex to use and tied to the details of the Web platform.

So if you’re going to ask for a software tool competency for a UX designer, pick one of these. But really, as long as a UX designer can effectively explore and communicate design ideas, it doesn’t matter what tool they use. If you are constraining them to specific tools, something is wrong with your process. What you need to look for is evidence of good designs–both designs and implementations, as well as evidence of design research and evidence of design evaluation. Ask about their process and techniques they use to discover the best designs. Just don’t ask for Adobe tool competency.

Anti-Skeu is Still Tail Wagging

Navel gazing. I originally was going to put a picture of my own navel here, but I just couldn’t bring myself to do that to ya. This pic is, according to Wikipedia, the official representation of the human navel. So there you go. Okay, now look back over here.

Why a navel? Because I just stumbled upon yet another anti-skeuomorphic article by a designer. (Argh!) And if there was ever a fit of navel gazing in the design community, it is this one. The funny, sad thing is that even when we try not to gaze (as this fella is), we do it.

This article is paradoxical. It says it is not anti-skeu, but the bulk of it critiques exactly that. And in its critique of skeuomorphism, it makes the same mistakes it is critiquing. It dances around and even says some good things, but then falls into the same trap as those who use skeuomorphism as a way to show off.

Great experiences are the sum of multiple factors. Any one thing being off can ruin the overall effect. Content and context, focus and environment: they’re inseparable and necessary parts of the whole experience.

Software experiences aren’t just software; they’re also about the device. Alan Kay was right: “People who are really serious about software should make their own hardware.” We can’t literally do that, but we can accept that the hardware is part of the user’s context when using our software. The two exist in symbiosis, to create the user’s context.

The first paragraph is spot on. The second one rapidly falls off the cliff. Hardware and software by no means “create the user’s context.” The user’s context is a function of many other things, such as physical environment, mental state, personal history/experiences, current desires, other people they are engaged with, time of day, and so on. But that’s not the real issue.

The real issue is that there seems to be an underlying mentality here that is problematic, one that pays lip service to the human experience with designed things while fetishizing the designed things themselves–in this case software (and/or the hardware with it). When the designer’s focus is still the object–the thing we are designing, whether that is software or something physical–then we still have a case of tail wagging.

When it comes to a question of tail wagging, it is no different to expend your designer expertise and effort to create a thing that is “true” to its medium than it is to expend that expertise and energy to create a thing that mimics something in a different medium. Both are subject to the same navel-gazing dangers because both are fundamentally focused on designing a thing according to some designer-appreciated principle. If in the past designers looked down on each other because a design wasn’t skeuomorphic, today they look down on each other if a design isn’t minimalist. You can just as easily show off how properly minimalist your design is as you can how skeu it is. In both cases, the focus is on things and designers’ pride in having designed those things.

Who bloody cares if a designed thing is “true” to its medium if the people it is designed for don’t like it or have a hard time engaging with it. It’s one thing to say that you want people to be immersed and not think about the thing, but then if you turn around and just talk about making the thing “true,” you’re still missing the point. People don’t want a chromeless photo browser–they want the experience of remembering what the photos call to mind. People don’t want a flat, clean calendar–they want to situate themselves in time and keep track of things in relation to it. People don’t really even want photos or calendars–the things–they are simply a means to an end.

“Content over chrome” misses the mark. “Being true to your medium” misses the mark. They’re all designer fetishes and fads. The only good in them is inasmuch as they, incidentally, cause the designer to better evoke the experiences that people want. You might say that’s the whole point of minimalism, but I say that 1) you can evoke those experiences with or without minimalism, 2) sometimes you can even do it better without it, and 3) why obscure the real goal with a goal focused on the things? It’s just as simple to ask yourself “does this chrome distract from the experience I intend to evoke for the people I am designing for in such and such a context?” as it is to ask “is this design ‘true’?” And the first question is more to the point: it doesn’t put the focus on adhering to principles that may or may not incidentally answer it–and it is really the question you should be asking.

If a design that looks like a paper book and imitates turning paper pages evokes an experience that people want–it is a good design. If it surprises and delights people, even better! Who effing cares if it is skeuomorphic or minimalist or whatever other fad, trend, or school of design it is? If that’s what you’re worried about, you’re doing it wrong.

As someone who is something of a freak in the Design community, maybe I find it easier to see it from an outsider’s perspective. One thing that surprised me was the tendency to embrace fads and trends. “That’s so 2004” is a valid criticism for many designers. One year big rounded jelly-bean buttons are in. The next year straight edges with reflections are all the rage. Now “flat” is in, and with it “anti-skeu.” I get it; the need for novel stimulation is built into the human psyche, as are group belonging and the desire to feel superior to others (i.e., to feel valuable).

Being fashionable is certainly a valid consideration to have–all other things being equal in terms of designing for people and their contexts, you might as well be fashionable to boot. But if you’re going to adopt minimalism, at least do it for the right reasons (i.e., not because it is fashionable to be anti-skeu), and don’t get so caught up in it that you lose sight of what really makes a design best–that it fits what the people you are designing for want and/or need.