Back to Basics 9: Inspecting the Interface
Heuristic evaluation & Expert review
Continuing our Back to Basics blog series, we now turn our attention from how users interact with the product to whether the product design meets established usability guidelines.
During the formative testing stage (i.e., the design is still subject to considerable change), the goal is to detect and remove as many usability problems as possible. During this stage, heuristic evaluations and expert reviews are well-suited to provide a lot of benefit for the cost. These methods of systematic analysis often precede general usability testing with representative end users, as they are generally less expensive, can expose the ‘low-hanging fruit,’ and serve as input for further hypothesis-testing with real users.
Heuristic evaluations and expert reviews are complementary to usability testing; used together, these methods yield the most thorough and useful results for improving a design. The methods discussed in this blog post can be used throughout the design life cycle: when the interface is still a crude prototype, when deciding between multiple design alternatives, when hunting for usability problems in an existing interface, or for competitive benchmarking.
Heuristic evaluation
What is it?
Similar to usability testing, heuristic evaluations provide a method of ‘bug detection’ for user interface designs. Essentially, this method utilizes a small group of evaluators who systematically examine the compliance of an interface with established design guidelines or usability heuristics. The major difference between heuristic evaluations and usability testing is that the evaluators are able to inspect the design for usability issues without involving actual users. This approach is a beneficial first step in the formative testing stage as it allows researchers to identify as many usability problems as possible within a single testing phase, prioritize the issues based on their importance, and make design recommendations that will remove the issues. In heuristic evaluations, the prioritization of usability issues and the recommended design fixes is based entirely on a usability practitioner’s expert understanding and application of standard usability guidelines (i.e., heuristics). For example, one of the best-known sets of accepted usability heuristics was composed by Jakob Nielsen and Rolf Molich in 1990, and is commonly referred to as ‘Nielsen’s 10’.
Strengths
- Quickly identify a large number of “known” usability issues in as little as one day
- Affordable method that can often be completed by a small group of evaluators without research participants
- Feedback can be obtained and relayed to designers early in the design process
- Can provide a starting point for usability testing, in which potential errors can be further explored with real end users
- Can complement usability testing to reduce testing time
Weaknesses
- Generalizable heuristics will not identify idiosyncratic issues with your particular prototype design
- Requires an experienced evaluator with knowledge of usability heuristics
- May uncover many low-level usability issues that may not actually impact the user’s performance or experience in a negative manner (i.e., false positives)
- Provides little information about the impact of the usability problem
- Does not address the real-life contexts in which usability problems may occur
How to do it?
Each evaluator independently and systematically examines an interface for usability violations that are detected when the interface is compared against a set of established usability heuristics. These heuristics may be based on empirically-derived usability principles (e.g., Nielsen’s 10 or Shneiderman’s 8 Golden Rules), human factors principles (e.g., Gerhardt-Powals’ cognitive engineering principles), design guidelines, a company-specific set of heuristics, or a mixture of usability heuristics from various sources. Most established heuristics are predominantly geared towards website evaluations. For products or interfaces with different forms of navigation and feedback, it may be most useful to either modify existing heuristics or construct an entirely new set.
Before a set of heuristics can be applied to a design evaluation, the initial research stages of determining the users’ needs and defining their tasks must already have been completed. A good understanding of the target users and the tasks performed in order to achieve their goals will help to expose which tasks are most important, and therefore, should be subject to usability evaluation. The set of evaluators can approach the evaluation from a few different angles. Each evaluator can perform the evaluation individually by completing a series of tasks that are compared against the entire set of heuristics, or each evaluator can tackle a subset of the pre-defined heuristics. Either way, the results from multiple evaluators are compiled and synthesized at the end of the analysis.
Keep in mind that the heuristics themselves do not guarantee that usability issues will be detected. Oftentimes, the evaluators rely on their own experience and knowledge of usability challenges to detect the issues rather than rigidly adhering to a specific set of heuristics. Multiple evaluators are usually involved because different evaluators typically find different usability issues. Three to five is considered to be the optimal number of evaluators in order to get the greatest benefit-cost ratio from the evaluation. The list of detected usability problems can be categorized according to a classification scheme of usability severity, which is determined by the assumed impact that each issue would have on usability and the end user’s experience.
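The three-to-five guideline follows from the diminishing-returns model Nielsen and Landauer published in 1993: each additional evaluator rediscovers many of the problems already found. A quick sketch of that model, where the per-evaluator detection probability of 0.31 is the average they reported (an assumption borrowed from their study, not a property of any particular interface):

```python
def problems_found(evaluators, total_problems=100, lam=0.31):
    """Expected number of problems uncovered by a group of independent
    evaluators, per Nielsen & Landauer's model:
        found(i) = N * (1 - (1 - lam)**i)
    lam is the probability that one evaluator spots a given problem."""
    return total_problems * (1 - (1 - lam) ** evaluators)

for i in (1, 3, 5, 10):
    print(i, "evaluators ->", round(problems_found(i)), "of 100 problems")
```

Under these assumptions, three evaluators already catch roughly two-thirds of the problems and five catch about 85%, while doubling the team to ten adds comparatively little, which is why the benefit-cost ratio peaks at three to five.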
The steps involved in conducting a heuristic evaluation typically include:
- Determine whether an established set of heuristics will be used or whether the researchers will compose their own
- Gather a small team of evaluators
- Provide some brief training on the decided-upon heuristics so that all evaluators are on the same page at the start of the evaluation
- Select a classification system for assigning the severity of the uncovered usability issues (e.g., a color scheme or number scale) and ensure that all evaluators are applying this scheme consistently
- Determine a list of realistic tasks that each evaluator will walk through while evaluating the interface
- Evaluators independently conduct the task walkthrough and the evaluation, while identifying potential violations of the agreed-upon heuristics and assigning corresponding severity ratings to each issue
- Each evaluator compiles a prioritized list of usability issues (based on severity) along with recommended design fixes for each issue
- Evaluators compare lists, reach a consensus across all evaluations, and compile a master list to be communicated to designers and stakeholders
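The final consolidation step above can be sketched as a simple merge: duplicate findings are collapsed, disagreements on severity resolved (here by keeping the highest rating, one common convention), and the master list sorted most-severe first. The issues and the 1–4 severity scale below are hypothetical examples, not findings from any real evaluation:

```python
# Hypothetical findings from three evaluators: (issue, severity),
# with severity on an assumed 1-4 scale where 4 is most severe.
evaluator_lists = [
    [("No feedback after form submit", 4), ("Low-contrast labels", 2)],
    [("No feedback after form submit", 3), ("Ambiguous icon on home screen", 2)],
    [("Low-contrast labels", 2), ("Ambiguous icon on home screen", 3)],
]

def master_list(lists):
    """Merge duplicate issues across evaluators, keep the highest severity
    assigned to each, and sort most-severe first for the report."""
    merged = {}
    for findings in lists:
        for issue, severity in findings:
            merged[issue] = max(merged.get(issue, 0), severity)
    return sorted(merged.items(), key=lambda item: -item[1])

for issue, severity in master_list(evaluator_lists):
    print(severity, issue)
```

In practice the consensus step is a discussion rather than a max(), but recording findings in a structured form like this makes the compare-and-compile meeting much faster.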
What is the output?
The presentation of the results of a heuristic evaluation can assume various forms. It could be a formal report with design recommendations, a meeting with the client, or simply a list of usability issues. The usability issues will typically be discussed in order of severity. Screenshots and other visuals may be used to highlight the specific issues and demonstrate how certain fixes would align the design with best practices for usability.
Expert review
What is it?
Expert reviews are often viewed as being synonymous with heuristic evaluations, although they differ on one important point. The process of conducting an expert review is much less formal than the systematic examination of an interface against established usability heuristics. As the name implies, expert reviews are performed by usability experts or human factors specialists, who are assumed to have already learned and internalized important design guidelines and usability heuristics as part of their training and extensive domain experience.
Unlike usability testing, which typically involves having users complete a small set of representative tasks, an expert review lets the evaluator inspect more of the interface without any limits on their exposure. Therefore, they are able to uncover a greater number of potential issues. However, the usability expert must still assume the role of a typical user as they walk through the interface. Given their domain expertise and knowledge of best practices, usability experts are able to appreciate the potential contexts of use, the users’ needs, and their goals when performing certain tasks within the system.
Strengths
- The quickest and most affordable evaluation available, requiring only one expert researcher to conduct
- Glaring usability issues can be discovered and remediated before the design is put in front of real users
- Can be a more flexible approach to usability evaluation compared to heuristic analyses
- Usability experts are able to offer insights beyond a chosen set of heuristics
Weaknesses
- The evaluator’s level of expertise will determine the outcome of the evaluation and the appropriateness of the design recommendations
- Although experts are trained to reduce personal biases, they will inevitably bring some to the table
- Real users will have subjective knowledge of their domain that the evaluator may not be privy to
- Actual users’ mental models, needs, or subjective experiences will not be taken into account
How to do it?
The actual process of conducting an expert review is very similar to that of a heuristic evaluation. However, evaluators do not rely on a pre-determined list of guidelines when detecting usability issues, assigning severity, and making design recommendations. Although not strictly necessary, it is recommended that more than one usability expert conduct the review: each evaluator will only uncover a portion of the usability issues, and different skill sets make different evaluators better at discovering certain kinds of issues. For example, some researchers specialize in visual design while others specialize in the tactile or auditory domains.
The steps involved in conducting an expert review typically include:
- Gain an understanding of the typical or intended users and what their unique mental models might entail (e.g., through personas)
- Determine the users’ goals they are trying to accomplish when using the interface
- Interact with the interface, noting both general and task-specific usability issues
- Assign severity ratings to each detected usability issue
- Compose a final report of the findings with included design recommendations
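For the severity-rating step, many practitioners borrow Nielsen's widely cited 0–4 scale. A minimal sketch of how such a scheme might be encoded so that findings sort consistently in the final report; the descriptions are paraphrased and the two findings are invented examples:

```python
from enum import IntEnum

class Severity(IntEnum):
    """Nielsen's commonly cited 0-4 severity scale (paraphrased)."""
    NOT_A_PROBLEM = 0   # not considered a usability problem
    COSMETIC = 1        # fix only if extra time is available
    MINOR = 2           # low-priority fix
    MAJOR = 3           # high-priority fix
    CATASTROPHE = 4     # imperative to fix before release

# Hypothetical findings from an expert review
findings = [
    ("Checkout button hidden below the fold", Severity.MAJOR),
    ("Inconsistent capitalization in the menu", Severity.COSMETIC),
]

# Present the most severe issues first in the report
for issue, sev in sorted(findings, key=lambda f: -f[1]):
    print(f"[{sev.name}] {issue}")
```

Using an ordered enum rather than free-text labels keeps severity comparable across evaluators and across review cycles.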
What is the output?
The findings from the evaluation and the associated design recommendations are often summarized in a final report or a presentation to the client and stakeholders. Reports might include screenshots of the interface with corresponding call-outs or video clips showing specific interactions in order to exemplify the consequences of the current design, and the suggested design fixes.
Final Thoughts
When determining which research strategy to use for detecting usability ‘bugs’ in your design, remember that heuristic (or expert) reviews and usability testing are complementary, but not interchangeable, methods. Heuristic and expert reviews will often detect different types of usability issues than those emphasized in usability studies with real end users. For example, heuristic evaluations will uncover “known” issues common across a swath of interfaces, while usability testing reveals product-specific considerations. Similarly, it is important to remember that expert reviews only highlight how usable or unusable a system could be, rather than how usable it actually is in the hands of real users.
Jake Ellis is a User Experience Researcher with a background in Human Factors Psychology. At Human Interfaces, he and an expert team of UX professionals work with clients from a variety of industries to develop custom research solutions for any UX challenge. If you need help from a full-service UX research consultancy for a study, recruitment, or facility rental, visit our website, send us an email, or connect with us through LinkedIn.