Interface design patterns evolve

Interface design patterns evolve; it’s to be expected. That evolution, however, sometimes breaks the “don’t make me think” mantra. Case in point: the change to the window controls’ orientation in the new iTunes 10:

[Screenshot: the iTunes 10 title bar, with the window controls stacked vertically]

The red close button is in the expected location; however, the green switch-to-mini-controller button now sits below it, rather than beside it.

The trio of gum-drop buttons has always been less than self-explanatory, but at least the buttons were consistently placed. iTunes 10 breaks that pattern, and not in a way other applications are likely to adopt, since the vertical orientation doesn’t suit most UI designs nearly as well.

It would have been interesting to be in the room when this new design was pitched. I’d guess that the rationale was to free up additional vertical space (a common design direction in applications built for widescreen monitors), but I would have loved to hear the answer to “Do we really need those 12 pixels so badly that we’ll violate our own user interface guidelines?” Sometimes the answer is “Yes”.


BTW, it seems you can switch back to the normal, horizontal button orientation with the following Terminal command:

defaults write com.apple.iTunes full-window -boolean YES

…and yes, I switched back to save the milliseconds it takes to orient to the new layout and select the right button.
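
And if you ever want the vertical layout back, deleting the override should restore iTunes 10’s default; a minimal sketch (quit iTunes before running either command so the change is picked up on relaunch):

# remove the full-window override, restoring iTunes 10's vertical layout
defaults delete com.apple.iTunes full-window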

Listening to customers

Back when I was in Product Management, I used surveys to gather feedback from beta testers. Given how valuable (and appreciated) the feedback could be, I now make a point to participate in surveys when asked. Unfortunately, even something as simple as a survey doesn’t always go as planned. Here’s what I was greeted with yesterday during an attempt to provide feedback:

[Screenshot: an error page where the survey should have been]

Pretty awesome, huh?

I had better luck loading the page today; however, after spending a few minutes filling out the survey, guess which button didn’t work?

[Screenshot: the survey’s non-functional submit button]

I generally expect only a very small percentage of customers to fill out surveys, so the reliability of the survey service is of utmost importance, at least if you actually want to listen. In this case, I hope that web metrics can be used to track how many customers started the survey vs. how many completed it. [NOTE: If you’re designing surveys, tracking abandonment points during the survey process can also give you an idea of whether your surveys are too long, or asking the wrong questions.]
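
Even raw web-server logs can give you a rough version of that funnel. A minimal sketch, assuming (hypothetically) that each survey page has its own URL under /survey/ and requests land in a file named access.log:

# count how many respondents reached each page of a hypothetical three-page survey
for page in start questions submit; do
  printf '%s: ' "$page"
  grep -c "/survey/$page" access.log
done

Comparing the counts from one step to the next shows exactly where respondents gave up.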