Digital Web Magazine

The web professional's online magazine of choice.

Practical Usability Testing


By Joshua Kaufman

Published on February 13, 2006

When I started this column, part of my motivation was to write about tools to empower Web designers—techniques they could take away and apply immediately. I’ve written an article on how information architecture can be a natural progression from Web design and two articles containing short lessons to help new information architects be more effective on the job. My next several articles will focus on core information architecture practices and how Web designers and new information architects can use them effectively. I’ll focus on practical tips and keep the theory to a minimum.

The first article in this series is on one of my favorite practices: usability testing. The most critical aspect of user-centered design, usability testing breaks down the wall between the designer and user, and allows us to see how real users do real tasks in the real world. There are many benefits of usability testing, including uncovering pitfalls in a current system before a redesign and evaluating the usability of a system during and after design. Usability testing should be an iterative practice, completed several times during the design and development life-cycle. The end result is an improved product and a better understanding of the users that we’re designing for.

Planning a Test

The first thing to know about planning a usability test is that every test is different in scope, and results will vary a lot depending on the purpose and context of the test. Testing a single new feature will look very different from testing several key scenarios in a new site.

No matter what the scope is, there are several points that you should consider as early as possible: what you're going to test, who is going to evaluate the site, where the sessions will take place, and what legal forms, questionnaires and test script you'll need.

What Are You Going to Test?

You need to decide what you're going to test. The best way to do this is to meet with the design and development team and choose features that are new, frequently used, or considered troublesome or especially important. After choosing these features, prioritize them and write task scenarios based on them. A task scenario is a story that represents typical user activities and focuses on a single feature or group of related features. Scenarios should be brief, realistic and written in the user's language, without hinting at where to click.

Here’s an example scenario for a site that sells images:

You’re looking for an image that you can use on your company’s support site. Find an appropriate image and add it to your basket. Be sure to let me know when you’re done.

Who Is Going to Evaluate the Site?

Who you choose to evaluate the site will have a massive effect on the outcome of the research. It’s very important to develop a thoughtful screener for recruiting your participants.

Imagine that you’re creating a site that sells images. Your customers are people who want to buy images—a huge group of people. Narrow your focus to a short and concise user profile, a picture of your ideal test participants. This profile should be based on your primary user (customer) segment and contain characteristics that those users share.

In this scenario, our participants are graphic designers or other people who use graphic design software and purchase images online. Create and order a list of these users’ characteristics. While you’re creating the user profile, you may realize that you have two or more equally important subgroups—people who buy images for business use and people who buy images for home use. This is fine as long as you can justify the relevance of each subgroup to the features that you’ll be testing.
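One lightweight way to apply a screener consistently is to encode its criteria, so every candidate is judged against the same user profile. Here's a minimal sketch in Python; the criteria and field names are hypothetical, based on the image-site example above, and a real screener would of course carry more nuance:

```python
# Hypothetical screener for the image-site example: participants should
# use graphic design software and purchase images online.

def qualifies(candidate):
    """Return True if a candidate matches the user profile."""
    return (
        candidate.get("uses_design_software", False)
        and candidate.get("buys_images_online", False)
        and candidate.get("purchases_per_year", 0) >= 2
    )

candidates = [
    {"name": "Ana", "uses_design_software": True,
     "buys_images_online": True, "purchases_per_year": 6},
    {"name": "Ben", "uses_design_software": True,
     "buys_images_online": False, "purchases_per_year": 0},
]

# Keep only the candidates who fit the profile.
recruits = [c["name"] for c in candidates if qualifies(c)]
print(recruits)  # ['Ana']
```

If you identify subgroups, you can run the same candidates through one predicate per subgroup and recruit a balanced mix from each.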

Where Are You Going to Test?

At this point, you will know what you’re going to test and who is going to evaluate the site. To complete the usability testing trinity, you need to find a location where you can run the test sessions. Contrary to popular belief, you don’t need recording equipment or data-logging software. In fact, to run casual tests with a small group of users on an iterative design, all you need is a system to test (this could be a Web or paper prototype), a desk, two chairs and a participant.

More formal or larger-scope testing should be accompanied by video or audio recording equipment for analysis. Conduct formal tests in an environment that simulates normal use as much as possible. Data-logging software, like Morae, is nice to have and can be extremely useful for both the analysis and presentation of results.

Legalities

Every test should be accompanied by a legal form that addresses three important issues. The first is nondisclosure, a confidentiality agreement regarding a site or service that is under development. Participants should be instructed not to talk about the site or their opinions of it to anyone. Next is the waiver, which gives consent to use any recording made during the test for the purpose of evaluating the site. Finally, the legal form should clearly state the participant’s rights, which informs participants about their right to withdraw, take a break, have privacy and understand the purpose of the test.

Questionnaires

The typical usability test requires three short questionnaires: pre-test, post-task and post-test.

Use a pre-test questionnaire during the session introduction to verify the qualifications of the participant and gather additional background information to help you analyze and interpret test data. Questionnaires vary, but a typical pre-test questionnaire includes questions about the participant's job title, years of experience and frequency of use. Use multiple-choice answers where possible to accelerate analysis later.

After each task, you should provide the participant with a post-task questionnaire, which is usually more standard than the pre-test questionnaire. The purpose of the post-task questionnaire is to capture the participant’s perception of the task’s difficulty and to gather relevant comments where applicable. It should include standard questions such as, “How easy or difficult was it to complete the task?” and more specific questions about the site where appropriate.

The last questionnaire is the post-test questionnaire, which should capture the participant’s overall perception of the system’s usability and specific perception related to usability concerns.
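If your post-task difficulty question uses a numeric scale (say 1 = very difficult to 5 = very easy — the scale here is an assumption, not something prescribed above), the ratings can be averaged per task to flag which scenarios gave participants the most trouble. A sketch, using invented ratings for the image-site tasks:

```python
from statistics import mean

# Hypothetical post-task ratings (1 = very difficult, 5 = very easy),
# one list per task scenario, one number per participant.
ratings = {
    "find an image": [4, 5, 3, 4],
    "add to basket": [2, 1, 3, 2],
}

# Average each task and list the hardest first.
by_difficulty = sorted(ratings, key=lambda task: mean(ratings[task]))
for task in by_difficulty:
    print(f"{task}: {mean(ratings[task]):.1f}")
```

Numbers like these never replace the observations themselves, but they make a useful summary table for the final report.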

The Test Script

Once you’ve completed the task scenarios, write your test script. The test script is a guide for you to follow so that the sessions are conducted consistently and correctly.

The first part of the script is the introduction, which allows you to break the ice and explain what’s going to happen during the session.

The second part is the introductory interview, which allows you to understand the participant's context with the site being evaluated. This can include questions such as, "When and why do you typically use this site?" Once you establish context, you'll have a better understanding of later comments and will be able to follow up on them with more intelligent questions.

Next come the task scenarios and post-task questionnaires, followed by the post-test questionnaire and the wrap-up.

(If you’re subscribed to a service such as SurveyMonkey and have an Internet connection from your testing computer, the legal form and the testing script are the only materials that will contribute to deforestation. The remainder of the materials can be created as online forms using the fabulous SurveyMonkey, which provides a private and secure home for your questionnaire data.)

Pre-Test Checklist

A day or two before your tests, run down a pre-test checklist to make sure you have everything you need. Here's a sample:

- The system to test (Web or paper prototype) is working and reset to a known state
- Printed legal forms and a copy of the test script
- Pre-test, post-task and post-test questionnaires, printed or set up online
- Cash incentives for each participant
- Recording equipment and data-logging software, if you're using them
- Confirmed participants, with a schedule and contact details

Now that you’re fully prepared, you’re ready to run your usability test.

Running the Test

Your first usability test will probably feel very daunting. Don't worry: if you've planned properly, your test script will guide you from the moment you sit down.

Once the participant has signed the legal form, I recommend providing the cash incentive up front. It rewards them not only for giving you an hour and a half of their time and attention, but also for giving their best effort as a participant.

Throughout the remainder of the session, you have to be on your toes and completely engaged with the participant. The only way you’re going to improve your moderation skills is to practice, but I can offer some practical advice.

Moderating the Test

Once you’ve run a few sessions and are comfortable with the structure and format of the test, you can begin to dig deeper with the participant and gather the most useful feedback.

Advanced Test Moderation

After the session is over and the participant has left, complete your notes. Write down all the interesting and important behaviors the participant exhibited during the session. Discuss the test with any observers and reflect on their observations in your notes as well. Finally, if videos were created, review and note interesting and important behaviors from each participant.

Analyzing the Results

After completing all of your test sessions, you’ll have a ton of data. It’s now time to sift through this gold-mine and extract the most useful bits. Depending on how you collected your observations, this can be a formal quantitative analysis or more qualitative in nature. Regardless of the type of observations made, I’ve found that my best friend during analysis is Microsoft Excel. Enter all of your observations and questionnaire results into a spreadsheet, then analyze by grouping similar observations and extracting trends. Each of these groups should be described by a short sentence, defining the problem and its impact on the user experience. If there are many groups, organize the descriptions by severity of the usability issue. Later, translate these descriptions into key findings. By the end of analysis, you should be ready to present your findings to the rest of the team.
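The grouping-and-ordering step above works just as well in a script as in a spreadsheet. Here's a minimal sketch, with invented observations and a hypothetical three-point severity scale (1 = cosmetic, 3 = blocks task completion):

```python
from collections import Counter

# Hypothetical observations tagged during analysis: (issue group, severity),
# one entry per participant who hit the issue.
observations = [
    ("search returns irrelevant images", 3),
    ("search returns irrelevant images", 3),
    ("basket icon not noticed", 2),
    ("basket icon not noticed", 2),
    ("license terms unclear", 3),
    ("price shown only at checkout", 1),
]

# Count how many participants hit each issue, then order the findings
# by severity (most severe first), breaking ties by frequency.
counts = Counter(issue for issue, _ in observations)
severity = {issue: sev for issue, sev in observations}
findings = sorted(counts, key=lambda i: (-severity[i], -counts[i]))
for issue in findings:
    print(f"[sev {severity[issue]}] {issue} ({counts[issue]} participants)")
```

Each line of output maps directly onto a key finding for the team: the problem, how severe it is, and how many participants it affected.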

Learning From Your Mistakes

Once you’ve completed your first round of usability tests, you’ll have some ideas about how to improve your methodology next time around. Part of the beauty of usability testing is that there’s no single “right” way to run things, and every test is different. The most important lesson is to learn from your mistakes and feed those ideas back into the next round.

Until Next Time

It’s been a long road, but I hope that you’ve enjoyed reading this tutorial as much as I enjoyed writing it. If you have any ideas for future articles that focus on core information architecture practices, please submit them through the contact form. As always, I look forward to your feedback.


Related Topics: Usability, User-Centered Design (UCD), Basics, Information Architecture


Joshua Kaufman is an interaction designer and user experience consultant. His personal website is unraveled.
