Heuristic Evaluation is a discount usability engineering method in which three or more evaluators assess an interface's compliance with a set of heuristics. Because the quality of the evaluation depends heavily on the evaluators' skills, it is critical to measure those skills to ensure evaluations meet a certain standard. This study provides a framework to quantify heuristic evaluation skills. Quantification is based on the number of unique issues identified by the evaluators and on the severity of each issue. Unique issues are categorized into eight user interface parameters, and severity into three levels. A benchmark computed from the collated evaluations is used to compare skills both across and within applications. The result of this skill measurement divides the evaluators into levels of expertise. Two case studies illustrate the process and its applications. Further studies will help define an expert's profile.
Practitioner’s Take Away
- This paper offers a methodology to assess Heuristic Evaluation skills. Practitioners can use it to select an evaluator suited to the context of the evaluation. For example, when evaluating an information-providing website, the practitioner can choose an evaluator with the required content skills.
- Assessment of skills using the HEQS (Heuristic Evaluation Quality Score) methodology can be customized according to the importance of UI parameters in an organization. For example, an organization that regards visual design and information architecture as the key skills for its environment can tailor the methodology to weight them accordingly.
- Training programs can be targeted at the weaknesses an evaluator's HEQS results reveal. Such programs can eventually lead to a certification program.
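The scoring idea above can be sketched in code. Note that this is a minimal illustration only: the paper's actual HEQS formula is not reproduced here, and the severity weights, the eight UI-parameter names, and the ratio-to-benchmark rule below are all assumptions chosen to show how an organization might tailor the weights (e.g., emphasizing visual design and information architecture).

```python
# Hypothetical weights for the three severity levels mentioned in the paper.
SEVERITY_WEIGHTS = {"high": 3, "medium": 2, "low": 1}

# Eight UI parameters (names assumed for illustration), with hypothetical
# per-organization weights emphasizing visual design and information
# architecture, as in the customization example above.
UI_PARAM_WEIGHTS = {
    "visual_design": 2.0,
    "information_architecture": 2.0,
    "navigation": 1.0,
    "content": 1.0,
    "interaction": 1.0,
    "feedback": 1.0,
    "consistency": 1.0,
    "accessibility": 1.0,
}

def evaluator_score(issues):
    """Score one evaluator's unique issues.

    Each issue is a (ui_parameter, severity) pair; the score sums the
    severity weight scaled by the organization's UI-parameter weight.
    """
    return sum(SEVERITY_WEIGHTS[sev] * UI_PARAM_WEIGHTS[param]
               for param, sev in issues)

def relative_skill(issues, collated_issues):
    """Compare one evaluator against the benchmark formed by the collated
    set of unique issues found by all evaluators together."""
    benchmark = evaluator_score(collated_issues)
    return evaluator_score(issues) / benchmark if benchmark else 0.0
```

A practitioner could then bucket `relative_skill` values into levels of expertise (novice, competent, expert) using thresholds appropriate to their organization.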