At Cyber-Duck we’ve been carrying out in-house eye tracking studies since 2008, gathering useful data and real insights into user behaviour to improve what we build for the web. Here we’ve collected the most important lessons our team has learned to help beginners hit the ground running.
We test existing websites, prototypes and development builds on our Tobii T60 eye tracker, hooked up to a Dell Precision M4500 laptop equipped with Tobii Studio. Your business, kit and approach will no doubt vary from ours, but the tips in this article are valid no matter what technology you use.
1. Recruit appropriately
The most valuable participants are real (current or future) users.
When you’re working in-house or on an internal tool for a client, it’s sensible and practical to recruit within the organisation concerned. But when the subject is a public-facing website or application, this approach can severely limit the effectiveness of the study.
A group of people sourced from one organisation (or, worse, one department or role) will likely share certain traits that drew them there, reducing the diversity needed for comprehensive results. Your client might want to draw participants from their own organisation to save money, so make them aware of the effect this can have on the results of the study and, therefore, on the final product. If you plan to send out email invitations, I encourage you to read this article, which contains useful guidelines for inviting stakeholders to usability sessions.
2. Test with one to five participants
In a well-designed test, around 80% of usability problems will typically be uncovered by the first five participants. Each participant after the first few yields diminishing returns: you’ll see the same issues recur, but discover fewer and fewer new ones.
If you have the budget for more than five participants, consider multiple tests spread throughout your development process. By benchmarking performance at an early stage then following up with more tests before launch, you’re able to measure the effectiveness of changes without exposing the whole user base to unproven ideas.
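If you’re curious where figures like “80% with five participants” come from, the quick calculation below uses the problem-discovery model popularised by Nielsen and Landauer. The 31% per-participant detection rate is their commonly cited average rather than something we’ve measured at Cyber-Duck, so treat the output as a rough back-of-the-envelope guide, not a promise for any particular study.

```typescript
// Expected proportion of usability problems uncovered after n participants,
// using the Nielsen/Landauer discovery model: 1 - (1 - L)^n.
// L is the chance that a single participant reveals any given problem;
// the 0.31 default is their widely quoted average, assumed here.
function problemsFound(n: number, detectionRate = 0.31): number {
  return 1 - Math.pow(1 - detectionRate, n);
}

for (let n = 1; n <= 8; n++) {
  const pct = Math.round(problemsFound(n) * 100);
  console.log(`${n} participant(s): ~${pct}% of problems found`);
}
// Five participants uncover roughly 84%; each extra participant adds less.
```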
3. Check your equipment in advance
When the date of your test is confirmed, fully set up and test your eye tracking device, computer, peripherals and any other technology you’ll be relying on, ideally a few weeks beforehand. You don’t want to discover on the morning of the test that part of your kit needs servicing or replacement. To be safe, arrive early to set up and run through your test with time to spare for an emergency trip to the local electronics shop.
4. Put participants at ease
When your participant is settled in and ready to be configured on the eye tracker, put them at ease by explaining a few simple points:
- “We’re testing the system, not you. Getting stuck or making mistakes is useful for us because it highlights things we can improve.”
- “If you do get stuck, continue as you would if you were on your own.”
- “Try not to ask us questions – pretend we’re not here.”
Keep task descriptions brief and simple, and refer to specifics in a slightly abstracted way to avoid giving inadvertent clues about how to complete the task. For example, if you’re testing a registration process reached via a “register” button, describe the task as “signing up”.
The testing environment has a big effect on the outcome of the test. For the most accurate results you want the participant to become so engrossed in the task they forget about you and their surroundings. A dedicated testing lab will be optimised for this, but when testing ‘in the field’ the space must be carefully chosen and set up. You can minimise the number of immersion-breaking distractions with some simple considerations:
- Choose a quiet room away from busy spaces or high traffic areas.
- Specify a nearby spot for other participants to wait if they arrive early or a session runs over time.
- Stick a “do not disturb” sign on the door, with directions to the waiting area.
- Limit the number of observers in the room with the participant to two, and seat them to the side or even completely out of sight of the participant so their movements aren’t a distraction.
- Avoid making noise and don’t speak unless it’s really necessary.
5. Give off-screen prompts
In addition to a verbal introduction and on-screen instructions at the start of the test, give the user a printed page with their task and other essential information on it. If they forget the task partway through the test they won’t have to seek help from you to continue.
6. Ask participants to use their own information
When testing web forms that include personal information like name, address and contact details, ask users to fill fields with their genuine personal information.
If time is one of the factors you are examining, greater accuracy will come from asking participants to recall their own details. If you provide dummy details you’re actually measuring the time it takes for users to copy information from page to screen.
Dummy data brings other dangers with it, too. If you supply users with a telephone number without spaces, for example, you might overlook poor validation that can’t cope with those spaces. Different people will enter similar data in different ways, so track down those edge cases by letting them enter their own information. Consider:
- The format the system expects information to come in
- How that format is communicated to the user
- How gracefully alternative formats are handled (a quick sketch follows this list)
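As a concrete illustration of that last point, here’s a minimal sketch of lenient input handling in TypeScript. The UK-style phone pattern and the function names are assumptions made for this example, not code from any system we’ve tested.

```typescript
// Normalise what the participant typed before validating it, so spaces,
// dashes, dots and brackets don't trigger spurious errors.
function normalisePhone(input: string): string {
  // Strip the separators people commonly type.
  return input.replace(/[\s\-().]/g, "");
}

function isPlausibleUkPhone(input: string): boolean {
  const digits = normalisePhone(input);
  // Accept an international +44 prefix or a leading 0, followed by
  // 9 or 10 further digits. Deliberately loose: reject garbage, not habits.
  return /^(\+44|0)\d{9,10}$/.test(digits);
}

// "020 7946 0018", "+44 20 7946 0018" and "02079460018" all pass,
// even though each participant entered the number differently.
```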
It’s understandable that some users may not be comfortable using their own information. Provide a consent form outlining exactly how you’ll be using the information and assure them that their data will remain private, unpublished and anonymous. Still, it pays to have some dummy data prepared for users who are hesitant – just be aware of how this may have unintended effects on your results.
7. Write comprehensive notes
An eye tracker is an invaluable piece of kit, capturing a mountain of data, but it doesn’t deliver recommendations and conclusions on its own. To unearth real issues and avoid common pitfalls in eye tracking interpretation, it’s essential to record details of participants and their performance that your kit can’t track.
By taking notes during the test you can refine follow-up questions and target areas for later study, where knowledge of a participant’s background, personality and skill level can often come into play. You’ll also begin recognising and analysing patterns immediately, even subconsciously.
You might already know some key information about each participant from your selection process, but if not, start each session with a short survey. Age, disabilities, confidence with computers and knowledge of written English can be associated with usability issues. Keep this information intact and include it in your final report so others who examine your research can use your data to follow your reasoning.
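There’s no single right way to structure these notes, but capturing the same fields for every participant makes the appendix of your final report far easier to assemble. The shape below is only a suggestion with hypothetical field names; adapt it to whatever your selection survey actually asks.

```typescript
// A possible shape for the per-participant record that sits alongside the
// eye tracking data. Field names are hypothetical, not a Tobii Studio format.
interface ParticipantProfile {
  id: string;                              // anonymised, e.g. "P03"
  ageBand: string;                         // e.g. "25-34"
  disabilities?: string;                   // self-reported, optional
  computerConfidence: 1 | 2 | 3 | 4 | 5;   // 1 = low, 5 = high
  writtenEnglish: "basic" | "intermediate" | "fluent";
}

interface ObservationNote {
  participantId: string;
  taskId: string;
  timestampMs: number;   // offset into the session recording
  note: string;          // what you saw, in plain language
}

const exampleNote: ObservationNote = {
  participantId: "P03",
  taskId: "sign-up",
  timestampMs: 92_000,
  note: "Hesitated at the password requirements and re-read them twice.",
};
```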
8. Follow up with verbal questions
Back up your observations with qualitative information by asking questions whilst the experience is fresh in the participant’s mind. Eye tracking will draw your attention to unexpected behaviour, but only questioning and deduction will lead you to the cause.
Your goal is to discover problems, so don’t ask for solutions. Don’t be too proud to consider a solution proposed by a client or user, but always pursue the problem this solution seeks to solve so you can be sure there isn’t a better alternative.
Phrase your questions carefully to avoid suggesting a particular answer, or making assumptions about the answer. This question subtly suggests that “yes, longer” is the correct answer:
“Was the form longer than you expected?”
This question assumes the form will be longer than the participant expects:
“Compared to your expectation, how much longer was the form?”
However, this question does not suggest either “longer” or “shorter” is the correct answer, or make assumptions about which answer the participant will give:
“How did the length of the form compare to your expectation?”
9. Make your results transparent
When it comes to reporting your findings to the client it’s important to remember you are a consultant with domain knowledge others don’t have. Although you are the expert, there are some simple steps you can take to make your research accessible to others:
- Include a glossary of terms like “fixation” and “saccade”. Your findings and recommendations may not include technical terms, but the evidence and notes probably will.
- Explain your interpretation of visualisations like gaze plots and heatmaps. Without introduction and analysis these images are pretty, but useless.
- Include raw statistics and notes as appendices in your report, and record the settings you used to generate and export visualisations (a short example of such a record follows this list). Your research will bear scrutiny if it’s considered and carried out well, so make it simple for others to verify your methods, follow your reasoning and corroborate your claims.
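One lightweight way to record those settings is a small, machine-readable note saved alongside each exported image. The field names below are generic suggestions rather than Tobii Studio’s actual export options; capture whichever parameters your own software exposes.

```typescript
// A generic record of how a heatmap or gaze plot was produced, kept next to
// the exported image so reviewers can reproduce it. Field names are
// illustrative and not tied to any particular eye tracking package.
interface VisualisationExport {
  imageFile: string;               // e.g. "heatmap-task2.png"
  kind: "heatmap" | "gazePlot";
  participants: string[];          // anonymised IDs included in the render
  timeWindowMs: [number, number];  // portion of the recordings shown
  fixationFilter: string;          // fixation filter name and settings used
  softwareVersion: string;         // e.g. "Tobii Studio 3.x"
}

const heatmapRecord: VisualisationExport = {
  imageFile: "heatmap-task2.png",
  kind: "heatmap",
  participants: ["P01", "P02", "P03", "P04", "P05"],
  timeWindowMs: [0, 60_000],
  fixationFilter: "velocity-threshold (software defaults)",
  softwareVersion: "Tobii Studio 3.x",
};
```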
10. Think design
There is really one simple lesson here: the same critical thinking that guides your design should be applied to your test. Bring creativity, attention to detail and empathy for participants into studies to improve your results. Your solutions will be better for it.
Want to learn more?
If you’d like to…
- get an industry-recognised Course Certificate in Usability Testing
- advance your career
- learn all the details of Usability Testing
- get easy-to-use templates
- learn how to properly quantify the usability of a system, service, product or app
- learn how to communicate the result to your management
… then consider taking the online course Conducting Usability Testing.
If, on the other hand, you want to brush up on the basics of UX and Usability, consider taking the online course on User Experience. Good luck on your learning journey!