From the Vault: Tools for "quick and dirty" user research
This post is “From the Vault”, a collection of old writings. Enjoy this throwback!
At my place of work, we use Scrum as our project management methodology. It focuses on rapid, flexible development cycles where you plan and develop in standalone chunks. A traditional methodology might have you planning for three months, developing for five, then testing for another three. A software release here (depending on what’s going in) usually takes about three months, soup to nuts (the fact that our product is a web application certainly helps). We have two or three development-focused iterations, each with its own self-contained planning, designing, developing, and testing phases, plus a “stabilization” iteration for heavy-duty regression testing.
As you might imagine, these quick development cycles don’t leave a ton of time for user experience research. Luckily, we have users at most of our clients who are happy to take a look at mockups and participate in interviews. They’re the experts!
I’ve found two services from Optimal Workshop (the makers of the incredibly useful card-sorting webapp OptimalSort) that have proven invaluable in collecting “quick and dirty” feedback from our users. The first is ChalkMark, which operates on an interesting premise: provide a task and a screenshot, then ask people to click on the first UI element that they think will help them complete the task. It’s fantastic for figuring out how effective a particular UI layout is for completing a task, and I use it frequently to check variations on a particular design. Users walk through the tasks, accompanied by scanned-in paper prototypes, and a heatmap is generated based on where people clicked. It’s not a substitute for honest-to-goodness user testing, but it provides useful information all the same.
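The heatmap idea is simple enough to sketch: collect everyone’s first-click coordinates on the screenshot, then bin them into a coarse grid and count. This isn’t how ChalkMark actually works under the hood — the function and the 50-pixel cell size are my own illustrative assumptions:

```python
from collections import Counter

def click_heatmap(clicks, cell=50):
    """Bin first-click coordinates into a coarse grid.

    clicks: list of (x, y) pixel coordinates on the screenshot.
    cell: side length of each grid cell in pixels (assumed value).
    Returns a Counter mapping (col, row) grid cells to click counts.
    """
    return Counter((x // cell, y // cell) for x, y in clicks)

# Example: three participants clicked near the top-left nav, one clicked far away.
clicks = [(12, 30), (40, 18), (22, 44), (400, 310)]
heat = click_heatmap(clicks)
print(heat.most_common(1))  # [((0, 0), 3)] — the top-left cell is "hottest"
```

The hot cells are the ones you’d render darkest in the overlay; if the hottest cell isn’t where the target control lives, the layout probably isn’t working.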
The other product, called Treejack, is more oriented towards information architecture. You create a site map and a list of tasks, then ask people to navigate to where they would go first to complete each task. It can help you identify where people think certain features belong in your navigation (which also helps you figure out how well you’ve laid everything out). I’ve been using it to test alternative navigation layouts; it’s been invaluable so far.
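At its core, a tree test scores whether each participant’s chosen path through the site map ends at the “correct” destination for the task. A minimal sketch of that scoring, assuming a made-up site map and helper names (none of this reflects Treejack’s real internals):

```python
# Hypothetical site map as nested dicts; leaf pages are empty dicts.
SITE_MAP = {
    "Home": {
        "Reports": {"Monthly Summary": {}, "Custom Export": {}},
        "Settings": {"Profile": {}, "Notifications": {}},
    }
}

def path_exists(tree, path):
    """Return True if `path` is a valid walk from the root of `tree`."""
    node = tree
    for label in path:
        if label not in node:
            return False
        node = node[label]
    return True

def task_success(tree, chosen_path, correct_leaf):
    """A task succeeds if the path is valid and ends at the target page."""
    return path_exists(tree, chosen_path) and chosen_path[-1] == correct_leaf

# A participant asked to "export your data" who went Home > Reports > Custom Export:
print(task_success(SITE_MAP, ["Home", "Reports", "Custom Export"], "Custom Export"))  # True
```

Aggregate that success flag (and the wrong turns people take along the way) across participants and you get a picture of which parts of the navigation match users’ mental models.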
The only issue here is that these tools are complements to things like field research and usability testing, not substitutes. Fitting UX and usability research into Agile product development methodologies continues to be a topic of debate. We have to challenge ourselves to find ways to fit them together!