Wednesday, September 9, 2009

Anatomy of a web usability test

A regular usability test at the Frenzoo office

In developing a new free 3D avatar community on the web, we've learned the hard way about the critical importance of real-life usage feedback.

Web usability testing is a well-developed field, yet it's a practice we've only adopted in a structured way in recent months, and it has been a revelation.

There is something reassuring about developing additions or changes that directly solve problem areas identified by observing real, unbiased usage. Our previous guessing at which features would be best for the Frenzoo user (often well intentioned but misguided) has been replaced by practical improvements.

Previously we would rely on surveys, forum polls and usage stats for user feedback, but whilst useful, these can't tell you why a person doesn't know how to get to the closet from the shop, how to change skin colors, or how to accept a chat conversation.

Now, we run in-office usability testing every 2 weeks, and it drives the majority of our roadmap. It goes something like this...

1. Recruitment

A week in advance, we make some posts in active local discussion forums looking for people who like games/avatars/the internet to spend 2 hours helping try out a new site. We don't divulge the name of the site, so as not to bias anyone beforehand - the aim is to see how they react to and use the site for the first time.

We offer about US$23 for the 2 hours of work, which gets a good response - especially from university students who have a free spot in their study schedule.

2. The right number of people

We aim to get 4-6 people each session. Fewer than that and we can't draw strong conclusions across the whole group. We also need a balance of guys and girls, and enough people that they can chat amongst themselves.

We don't exceed 6, mainly for practical reasons: we don't have enough team members to individually watch each participant and take notes on the problems they face.

3. The setup

We install screen-broadcasting software on the participant machines and reset the browsers to defaults with a cleared cache. The rest of the team can then see the participants' screens, so usually 1-2 people observe "remotely", whilst 1 team member sits next to the participant to ask questions, take notes up close, and help if they are really stuck.
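As a rough illustration of the "fresh browser" step (the exact browser and tooling will vary - this sketch assumes Chrome is installed on the test machines), launching with a throwaway profile directory guarantees no cache, cookies or settings survive from a previous participant:

    # Illustrative sketch: launch Chrome with a throwaway profile so each
    # participant starts from an empty cache and default settings.
    # (Assumes Chrome is on the machine; the URL is our site's front page.)
    import subprocess
    import tempfile

    def launch_clean_browser(url="http://www.frenzoo.com"):
        profile_dir = tempfile.mkdtemp(prefix="usability-test-")
        subprocess.Popen([
            "google-chrome",
            f"--user-data-dir={profile_dir}",  # fresh profile: no cache/cookies
            "--no-first-run",                  # skip the first-run dialog
            url,
        ])

    launch_clean_browser()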

4. The session

The participants arrive, we give them a brief introduction, and we explain that they should use the website and try to accomplish some tasks on it. We keep these fairly high level, both to give them flexibility in how they achieve each one and to avoid "spelling out the instructions" and devaluing the test.

For example:
- Go to the website and tell us your first impressions
- Create an account
- Go shopping and make your avatar look great
- Chat with someone and make friends
- Create a cool T-shirt
- Enter a contest

We observe them, and typically only ask questions or give help if they appear stuck or confused: "What are you confused or frustrated about at this moment?" and so on.

After the participants have tried things themselves for an hour or so, there is a 30-minute summary discussion at the end to gather qualitative feedback.

5. The debrief

After the participants have left, the team records all the usability problems observed in a shared Google spreadsheet (any bugs found are also logged), then there is a team discussion to consolidate it all. Lots of head nodding, with everyone really understanding the issues.
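To make the log concrete, here is a minimal sketch of the kind of record we keep per observation - we actually use a shared Google spreadsheet, so the CSV file, column names and severity scale below are just an assumed stand-in for illustration:

    # Sketch of a findings log (stand-in for our shared Google spreadsheet).
    # The column names and the 1-4 severity scale are illustrative assumptions.
    import csv
    import os
    from datetime import date

    LOG = "usability_findings.csv"
    FIELDS = ["date", "participant", "task", "observation", "severity"]

    def log_finding(participant, task, observation, severity):
        # severity: 1 = cosmetic ... 4 = blocks the task entirely
        is_new = not os.path.exists(LOG)
        with open(LOG, "a", newline="") as f:
            writer = csv.writer(f)
            if is_new:
                writer.writerow(FIELDS)  # write the header once
            writer.writerow([date.today().isoformat(), participant,
                             task, observation, severity])

    log_finding("P3", "Create a cool T-shirt",
                "Couldn't find the closet from the shop page", 3)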

6. The next steps

The design team then comes up with the suggested improvements, be they new enhancements or UI tweaks, assigns priorities to them, and schedules these into our release plan.
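One simple way to rank such a list - purely an illustration, not our exact process, and with made-up issue names and numbers - is severity times the number of participants who hit the problem:

    # Hypothetical prioritisation helper: ranks issues by
    # severity (1-4) x number of participants who hit the problem.
    def prioritise(issues):
        return sorted(issues, key=lambda i: i[1] * i[2], reverse=True)

    # Made-up example issues: (name, severity, participants affected)
    issues = [
        ("Closet hard to reach from the shop", 3, 5),
        ("Skin color picker hard to find", 2, 4),
        ("Chat invitation easy to miss", 3, 3),
    ]
    for name, severity, affected in prioritise(issues):
        print(f"{severity * affected:>2}  {name}")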

Then 2 weeks later we go through it all again, to see how the improvements we implemented are working out in practice, and to find the next set of problems and opportunities to conquer...
