New Frontiers in Usability for Users’ Complex Knowledge Work

Invited Essay

Abstract

For usability professionals, one of the top priorities of the coming decades is to ensure that products are usable and useful for people’s complex work in complex systems. To meet this challenge, we need to better understand the nature and practices of various domain-based complex tasks and the flow of people’s work across tools. This essay gives an overview of the articles in this issue that address these challenges and their implications for usability and usefulness.

Two years ago, Mike Albers organized a workshop on the usability challenges of computer-supported complex work. The enthusiasm generated by that workshop led to this special issue of the Journal of Usability Studies (JUS). This special issue explores how to make human-computer interactions simpler for dynamic knowledge work. Along these lines, many experts in exploratory analysis agree that answering the following questions is a top priority for the coming decades:

  • Why is support for knowledge work so distinctively challenging?
  • What does it take to achieve high-quality design?
  • How is quality demonstrated and best assessed in regard to usefulness and usability?

The three articles in this issue take an important first step in tackling these questions. All of them strive to clarify why knowledge work is distinctively challenging by understanding how people perform their complex work. The authors of all three articles find that work typically considered well-structured is, in fact, not well-structured. Moreover, mistakenly supporting it as such has the following adverse effects:

  • Users are confident in work that is inaccurate.
  • Users misapply tool capabilities, which leads to sub-optimal outcomes.
  • Users do not deem a system valuable and therefore stop using it.

To identify support that users need but do not get from their technologies, all three articles address certain decision points in users’ work. The articles examine users’ complex tasks and decisions at very different scales, in distinct domains, and with varying attention to levels of expertise. Yet they share the following findings and themes:

  • In decision making under uncertainty, seeking information and turning it into knowledge are vital parts of the decision process.
  • During decision episodes, people integrate formal and informal approaches.
  • Technologies and their built-in information and task models are not suitably matched to users’ actual informal ways of knowing, their transitions between formal and informal approaches, or their processes and needs for integrating formal and informal approaches.
  • Successful task performance depends on this integration, which experts expect to perform readily.

In the first article, “Creating Effective Decision Aids for Complex Tasks,” Caroline Hayes and Farnaz Akhavi look at complex work at the activity level. Examining the activity of engineering design, they analyze the choices and choice processes that designers perform when they winnow large numbers of competing design options down to a few best alternatives.

Because winnowing design options is a classic optimizing task, it would seem that designers should welcome help from deterministic or fuzzy mathematical decision-making models. Yet the results of the authors’ studies reveal otherwise. Hayes and Akhavi explain the effects of optimizing models on the quality of users’ design choices, designers’ naturalistic uses of those models, and the misfit between these models (formalistic tools) and designers’ actual practices.
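
To make the flavor of such models concrete, the following is a minimal sketch, in Python, of a weighted-sum decision aid of the general kind the article discusses; it is not Hayes and Akhavi’s actual model, and all criteria, weights, and scores here are hypothetical.

    # Minimal sketch of a weighted-sum decision aid (hypothetical data).
    # Each alternative is scored per criterion in [0, 1]; weights express
    # the relative importance of each criterion.
    CRITERIA_WEIGHTS = {"cost": 0.5, "weight": 0.3, "durability": 0.2}

    ALTERNATIVES = {
        "design_A": {"cost": 0.8, "weight": 0.6, "durability": 0.7},
        "design_B": {"cost": 0.5, "weight": 0.9, "durability": 0.9},
        "design_C": {"cost": 0.9, "weight": 0.4, "durability": 0.5},
    }

    def rank_alternatives(alternatives, weights):
        """Rank alternatives by weighted sum of criterion scores, best first."""
        def utility(scores):
            return sum(weights[c] * scores[c] for c in weights)
        return sorted(alternatives, key=lambda a: utility(alternatives[a]),
                      reverse=True)

    print(rank_alternatives(ALTERNATIVES, CRITERIA_WEIGHTS))
    # -> ['design_A', 'design_B', 'design_C']

A fuzzy variant would replace the point scores with ranges or membership functions to express the uncertainty that surrounds early design alternatives.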

In their conclusions, Hayes and Akhavi speculate about strategies for incorporating formalistic tools into people’s complex decision making in ways that are sensitive to people’s varying levels of expertise, tasks, and core practices of decisions-in-the-making. One core practice, for example, is decision makers’ common process of moving recursively between information seeking and making comparative judgments about alternatives they are considering.

In “Switching Between Tools in Complex Applications,” Will Schroeder looks at complex tasks and decisions at a smaller scale. He focuses on the choices users make when switching from one tool to another in a complex software application. A better understanding of switching, Schroeder urges, can lead to improved designs of complex applications (toolkits) so that users can interact more efficiently and effectively with their different components and tools.

In Schroeder’s study, “tools” are not different applications but rather various graphical user interfaces or windows in the MATLAB® environment (e.g., different GUIs for programming, working with graphics, working with matrices and arrays, or viewing documentation and help). Users performed two predefined tasks. In examining the results, Schroeder compares users’ switches between tools; he is able to make and interpret these comparisons across cases because of novel visualizations he created to display usage log data over time. In addition to these comparisons, Schroeder correlates switches, quality of outcomes, and expertise.
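
As a rough illustration of the kind of log analysis involved (a sketch under assumed data, not Schroeder’s actual pipeline or visualizations), deriving switch events from a timestamped usage log might look like this:

    # Sketch: derive tool-switch events from a timestamped usage log.
    # The log format and tool names here are hypothetical.
    log = [
        (0.0,  "editor"),
        (4.2,  "command_window"),
        (9.5,  "help_browser"),
        (12.1, "command_window"),
        (15.8, "editor"),
    ]

    def tool_switches(events):
        """Return (time, from_tool, to_tool) for each change of active tool."""
        switches = []
        for (_, prev_tool), (t, tool) in zip(events, events[1:]):
            if tool != prev_tool:
                switches.append((t, prev_tool, tool))
        return switches

    for t, src, dst in tool_switches(log):
        print(f"{t:5.1f}s  {src} -> {dst}")

Plotting such events along a time axis, one row per tool, yields the kind of per-session timeline against which switch frequency, task outcomes, and expertise can then be compared.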

Schroeder finds that scattered, frequent switches from tool to tool (including documentation) are associated with high task completion, high degrees of expertise, and high user satisfaction. His findings lead to such questions as the following:

  • How might tools cue users about the effective switches (making the right switch at the right time and place) that experts seem to master?
  • What cues for self-monitoring and encountering barriers might be productive?

Finally, in “Unexpected Complexity in a Traditional Usability Study,” Tharon Howard looks at tasks at a fine scale. He gathers and analyzes data on people looking up grammatical choices and bibliographic formats in handbooks and applying them correctly to texts. Howard’s study shows that one of the “grey areas” making complex tasks complex is a user’s need to make choices when “usage” leaves many options open. Here “usage” relates to language, but, broadly speaking, it means convention and context.

Howard recounts the ways in which usability testing, when it is designed to detect complexity in tasks, can prompt clients to reconceive prior notions of formulaic user tasks and to redesign products accordingly. One important redesign involves enhancing information seeking through visual communication and scenario-based presentations of information. Howard stresses, however, that these techniques must selectively guide attention based on users’ purposes and communities of practice.

Together, the implications for usability raised by all three studies highlight the need to further explore the following:

  • Situational awareness: Designing and testing for situational awareness are vital for adequately supporting complex work. Yet current notions in human-computer interaction about what needs to be cued in designs for situational awareness, and how to do so, only begin to scratch the surface of the cues required for complex work.
  • Methodology: A mixed methodology for user studies and usability evaluations is necessary. The mixture may take diverse forms, but it must fit the purposes of the research and be sensitive to the demands of numerous stakeholders.
  • Visual communication: Visual communication needs to be exploited more fully than it is today.

Many other issues not raised here are also important for improving usability. The following deserve further exploration:

  • Cognitive models: We need more domain-based cognitive models of complex work, a better understanding of what they provide that personas cannot, and knowledge of how to apply them better to design.
  • Evaluation: We need more formative testing of actual designs that purposefully aim to support complex work.
  • Rationales: We need specifications, rationales, and evidence of core user requirements and design criteria for systems that are useful and usable for complex work.

These issues are compelling and far from resolved. They intrigued and engaged the group of us who reviewed the submissions and helped to put this special issue together: Mike Albers, Whitney Quesenbery, Ginny Redish, and me. The insights presented in the submitted manuscripts sparked many conversations among us about the multiple scales of complexity, about designing for usability, and about strategically evaluating systems. We hope that JUS will consider more special issues dedicated to this fascinating topic and professional challenge.