
A guide to carrying out a heuristic evaluation (and how to say it)

Heuristic evaluations are a UX buzzword - everyone’s trying to pronounce them, talk about them and request them - but not everyone knows what a heuristic evaluation is or why it’s just so god-damn good. Luckily, we’re here to explain it all.

What is a heuristic evaluation?

A heuristic evaluation is a review of your user interface design against a set of design principles (called heuristics), with the main goal of optimising the user experience. It differs from usability testing because the end user is not involved in the testing process - that comes later.

Why is a heuristic evaluation so good?

A heuristic evaluation is a great way to identify initial user experience problems, improving the product for the end user, but the benefits extend far beyond that. Heuristic evaluations also shine because of:

Cost and time
They’re cheap if you run them with your own team, and quicker than most other UX testing methods - making them great for project budgets and regular testing.

Flexibility
The ease of heuristic evaluations lends itself to regular testing during the design development process - helping you to shape your product as you go along and avoid the costs of substantial amendments later.

Sociability
Heuristic evaluations also complement other UX testing methods, meaning you can carry them out and iron out issues before running usability tests with real users of your product - making those usability tests run more smoothly.

Structure
Finally, the structured nature of the evaluations helps testers to focus on specific issues, and the ability to regularly run evaluations helps your team to structure and prioritise their work.

The above is why we love heuristic evaluations.

Cons of a heuristic evaluation

As with any form of UX testing, there’s a “but”. Heuristic evaluations are great but they can:

Take time
They’re quick, but they’re not Usain Bolt quick. Time is required to develop your heuristics, get your evaluators up to speed, run the tests and conclude the evaluation.

Cost money
While you can use your own team members as evaluators, ideally you’ll want to bring in heuristic evaluation experts to help - and their time isn’t free.

Confuse problems
Heuristic evaluations won’t solve all your problems: you’ll need to run them alongside other UX tests, and aggregating the results from multiple evaluators can be difficult.

These are considerations to bear in mind, but on the whole, heuristic evaluations are pretty awesome and definitely recommended.

How to conduct a heuristic evaluation

So how do you go about conducting a heuristic evaluation?

1. Determine what you’re testing
As with any UX testing, you need to identify what you want to achieve. What specific parts of the design are you going to test, and when (ideally after each Design Sprint)?

2. Create your heuristics
Next, decide whether you’re creating your own evaluation principles, using an established set like Nielsen’s (we’ll come on to this later), or asking the evaluators to bring their own.

3. Select your evaluators
You also need to choose your expert heuristic evaluators or prepare your own team.

4. Run the evaluations
Once you know who’s testing, let your evaluators loose on the design - ensuring that they take notes, screenshots and videos of any issues they find.

5. Evaluate the results
Once finished, gather your evaluators together for a debrief on which issues were found, which heuristic each one failed against and how they can be fixed or improved. Then collate this data into one report, identifying the severity of each issue and any common themes (there’s a quick sketch of this after these steps).

6. Get fixing
Finally, start work on fixing the problems identified. Once complete, you can then move into user testing - exciting stuff.
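To make the collation in step 5 concrete, here’s a minimal sketch in Python. The evaluator names, the issues and the 0-4 severity scale are all invented for illustration (the scale loosely follows Nielsen’s severity ratings):

```python
from collections import defaultdict

# Invented findings from three evaluators. Each entry records the issue,
# the heuristic it failed against and a 0-4 severity rating
# (0 = not a problem, 4 = usability catastrophe).
findings = [
    {"evaluator": "Asha",  "issue": "No undo on delete",     "heuristic": "Control and freedom", "severity": 3},
    {"evaluator": "Ben",   "issue": "No undo on delete",     "heuristic": "Control and freedom", "severity": 4},
    {"evaluator": "Carla", "issue": "Jargon on error pages", "heuristic": "Real world",          "severity": 2},
    {"evaluator": "Asha",  "issue": "No loading indicator",  "heuristic": "Visibility of system status", "severity": 2},
]

# Group duplicate issues so each unique problem appears once,
# with a consensus (average) severity across evaluators.
grouped = defaultdict(list)
for finding in findings:
    grouped[(finding["issue"], finding["heuristic"])].append(finding["severity"])

# Sort the report so the most severe problems come first.
report = sorted(grouped.items(), key=lambda item: -sum(item[1]) / len(item[1]))

for (issue, heuristic), severities in report:
    avg = sum(severities) / len(severities)
    print(f"{avg:.1f}  {issue} ({heuristic}, flagged by {len(severities)} evaluator(s))")
```

The point isn’t the code - it’s that agreeing a common format for findings up front makes the debrief and the final report far less painful.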

Heuristic evaluation principles

As mentioned, a key part of a heuristic evaluation is analysing the design using heuristic design principles. There are many sets of heuristic principles out there, but in the interests of word count and wavering attention, we’ll focus on the most popular set: the Nielsen heuristics.

The Nielsen heuristics were developed back in the ‘90s (when the Fresh Prince was still fresh), but they still form the basis of most evaluations today. They consist of ten key areas of consideration for evaluators (we’ve sketched them as a simple scorecard after the list):

1. Error prevention
Can any errors that happen be prevented from happening in the first place?

2. Real world
Is the system familiar to its users, using common terms and explaining unfamiliar concepts?

3. Standards and consistency
Does the design create a sense of familiarity so that the user knows what to expect (for example, external links underlined in a different colour)?

4. Control and freedom
Do users have the control and freedom to do things themselves (for example being able to undo, redo and exit)?

5. Recognition not recall
Does the design remind users with visible options rather than asking them to remember?

6. Flexibility and efficiency
Is the design flexible for everyone (regardless of expertise) and efficient (with commonly performed tasks easily accessed)?

7. Design and aesthetics
Is the design minimalist and functional, and is everything necessary and useful?

8. Visibility of system status
Does the audience know what’s going on, how long a page will take to load or how many design principles are left to go (two)?

9. Error recognition, diagnosis and recovery
When an error does arise, is it easily identified and solved by the user?

10. Help
Finally, can your audience easily find help and follow any instructions?
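If you’d like a ready-made scoring sheet for your evaluators, here’s another minimal sketch in Python - again with invented details (the evaluator names, the CSV output and the per-heuristic “score” column are ours, not Nielsen’s):

```python
import csv

# The ten heuristics from the list above.
HEURISTICS = [
    "Error prevention",
    "Real world",
    "Standards and consistency",
    "Control and freedom",
    "Recognition not recall",
    "Flexibility and efficiency",
    "Design and aesthetics",
    "Visibility of system status",
    "Error recognition, diagnosis and recovery",
    "Help",
]

def blank_scorecard(evaluator):
    """One row per heuristic, ready for an evaluator to fill in."""
    return [
        {"evaluator": evaluator, "heuristic": h, "score": "", "notes": ""}
        for h in HEURISTICS
    ]

# Give every evaluator an identically shaped scorecard, so the results
# are easy to collate later (see step 5 above).
with open("scorecards.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["evaluator", "heuristic", "score", "notes"])
    writer.writeheader()
    for name in ["Asha", "Ben", "Carla"]:
        writer.writerows(blank_scorecard(name))
```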

Heuristic evaluation principles - designing your own

Some like designing their own principles and others like using Nielsen’s heuristics with a couple of additions to cover any advancements in tech since the days of the ‘90s Game Boy Color (like smartphones). Either way, you want the design principles to reflect the goals of your design and, ideally, you want between five and ten - enough to make the evaluation useful but not overwhelming.

Final thoughts

And that’s how to carry out a heuristic evaluation. It’s an excellent way to test your design throughout the development process and optimise the user experience - you just need to:

  • Determine what you’re testing;
  • Design your heuristics using Nielsen’s as a basis;
  • Pick your evaluators and arm them with everything they need to know;
  • Run the evaluations and collate your findings; and
  • Get fixing.

Oh, and it’s pronounced “hyoo-RIS-tik” (you’re welcome).

Written by
Jeremy King
Jeremy is a Director at New Socks Media.
