About a year ago, Bryan Eisenberg gave an interview talking about Maybe The Best $100 You’ve Ever Spent, essentially raving over the ridiculously cheap rates charged by UserTesting.com. The always-astute Patrick Sullivan, Jr. of Edit Weapon picked up on this and decided to give UserTesting.com his own personal test and blog post/review.
Now, Edit Weapon is a usability expert and usability testing veteran himself, and his initial reaction to UserTesting.com was:
“Well this will either put me out of business, cause me to cut my rates by 90%, or make my life 900% easier!”
The reasoning behind the first two reactions is obvious, but I bet more than a few viewers wondered how ultra-cheap (and effective) competition could possibly make Patrick’s life 900% easier.
Answer: because Patrick’s job isn’t primarily to provide user testing, but to help properly task the users and to expertly interpret the results of that testing. It turns out that actually conducting the tests was just a prerequisite to these far more important – and less easily commoditized – skills.
So offloading the pain-in-the-butt process of sourcing the testers and running the tests to UserTesting.com has made Patrick’s life a lot easier. [Note: in my opinion, the insight into what business Patrick is really in is worth a series of blog posts of its own, but that'll have to wait for another time...]
As Patrick put it in his blog post:
“…anyone can watch a user use a website, but *interpreting* usability tests and making recommendations is the secret sauce to being a kick ass information architect / interaction workflow designer, which of course, I am.”
Now, during that same video post, Patrick offhandedly mentioned that there were a few golden rules and guidelines to tasking users so that their test results would be optimally useful and easy to interpret, but that he’d have to cover these in a follow-up post.
Ever since hearing that, I had been patiently waiting for Patrick to produce the promised follow-up post, until about a month ago, when I broke down and offered to help by turning it into a quasi-interview. So here they are, the Top 6 User Testing Tips as disclosed to me by Edit Weapon:
(With a special thank you to Sue Fischer – Patrick’s IA/Usability/learnability mentor and a human factors consultant who taught Patrick how to task users for usability tests and how to interpret the results.)
1) Never ask, “What do you think about this?”
First of all, most people will simply give you a polite answer rather than a bluntly honest one. Second of all, you’re not really interested in what they think of an interface/Web design/piece of software; you’re interested in how well and how easily they can USE it. That’s why it’s called usability testing.
So you always want to put the question in the form of a goal/task. Tell the user what they want to do with the interface/software; give them an assigned scenario. This transforms the process into an objective exercise (rather than a subjective opinion) and allows you to watch how the testers go about using your tool. You can then get a much better idea of how easy or intuitive your interface is, where the friction occurs, etc.
2) Don’t feed the tester with your question.
As people learn new things, they tend to be very literal – especially when it comes to tasking. If you ask people to accomplish a task using the exact words or phrases that are actually ON the interface labels, you’ll wind up with a false impression of how usable your interface is.
For example, if you ask a tester to “Compose an e-mail” and the button for writing a new e-mail is actually labeled “compose e-mail,” the tester will simply match the phrases up rather than thinking organically in terms of what they’re trying to accomplish and then figuring out the interface. This is “leading the tester” by “feeding” him/her information with your questions.
So you want to ensure that you ask questions using terms that are not directly on the interface labels. Use synonyms. Don’t make your tasks so easy that the tester simply has to match up terms. Going a step further, if most users won’t think of a task in terms of multiple steps, but your interface requires multiple steps, don’t break your tasking down into steps to match the interface. Write the question or task in the way that most users would think of it within a given scenario.
3) Don’t let users be the designers.
When you give users goal-oriented tasks, each user will learn and pick up the interface at a different rate, and some users will do crazy things. Some will also offer suggestions. Don’t take those suggestions literally or at face value. You’re looking for what users DO more than what they say. This is similar to the rule against asking users what they “think” of an interface.
4) Don’t let the statistics fool you.
Say you’ve run 20 tests in a row and 5 of the 20 were failures, but you’ve been iterating on the design as you go and the last 5 tests went extremely smoothly. You’ve got a good design. Think of those results as 5 for 5 rather than 15 for 20.
This also applies to individual tasks within a test. If users find some minor tasks more difficult to accomplish than the really commonly used features, don’t let those “usability problems” count anywhere near as much as your successes with the main functions of your interface.
Basically, not everything can be a big red button in the middle of the screen. You have to balance things out; sometimes a few items will be a bit more difficult to find, and there’s really no perfect solution for a multi-use interface.
5) Don’t get discouraged.
That’s why it’s usually best to test and tweak your interface in iterations. You can’t design perfectly from the get-go because, as the designer, you’re too much inside the bottle. But as you alternate the insights generated from testing with new and improved iterations of the interface, you’ll find the magic if you’re willing to hang in there.
6) Don’t try to test too much at once.
You’ll get easier-to-analyze results if you limit your tasks to just 2-3. And at UserTesting.com’s prices, it’s not a big deal if you end up running additional tests instead of adding more tasks to the same test.
But wait, there’s more… Patrick also walked me through how these principles played out when he used them to evaluate Jigsaw Health’s landing page for Magnesium Supplements. Catch the walk-through in our next follow-up post.
Any tips, tricks or traps you want to share?
[Editor's note: the author of this post is now blogging at jeffsextonwrites.com]