Abstract
Human interaction with computing and communication systems involves a mix of parallel and serial processing by the human-computer system. Moore’s Law provides an illustration of the fact that the performance of the digital components of any human-computer system has improved rapidly. But what of the performance of those human components? While we humans are amazing information processing machines, our information processing capabilities are relatively fixed. This paper reviews 100 years of the human performance literature and shows, graphically, the disparity between the non-growth in human performance and the geometrical improvements in computational capability. Further, Amdahl’s Law demonstrates, algebraically, that the (non-parallelizable) human performance increasingly becomes the determining factor of speed and success in most any human-computer system. Whereas engineered products improve daily, and the amount of information for us to potentially process is growing at an ever quickening pace, the fundamental building blocks of human-information processing (e.g., reaction time, short-term memory capacity) have the same speed and capacity as they did for our grandparents. Or, likely, for the ancient Greeks. This implies much for human-computer interaction design; rather than hoping that our users will read or type faster, we must look for optimally chosen human channels and maximally matched human and machine functions. This tortoise and the (hard- and soft-)ware race demands renewed enthusiasm for, and increased systematic attention to, the practice of usability and research in human-computer interaction.
Tips for Usability Practitioners
It is our hope that an explicit awareness of the ever diverging functions in Figure 3, above, will motivate an increased level of intentionality among web designers, and indeed among all designers, to pursue, systematically, data about what the users of their designs will find usable. The following tips may help usability practitioners or, more likely, help them educate their designer/developer colleagues:
- When vying for resources (for time or money to conduct usability studies or to respond to the findings of such studies), it is important to advocate for your findings.
- The ever-growing disparity between the non-growth in human performance and the geometrical improvements in computational capability helps argue for more attention to the user in any human-computer system.
- Humbly, we invite you to use (with proper acknowledgement of the Journal of Usability Studies) Figure 3, in this article, to help make this point.
Introduction
People do things. From birth until death, humans engage in tasks—intentional and automatic, visible and invisible, learned and reflexive, atomic and complex—performing them individually and in groups. Many of these tasks are heavily supported by computing and communication systems. Much of the technology that we use extends and enhances our abilities to act on the world. For example, the lever, much beloved by Archimedes, who wanted to move the world with one, enhances the effective strength of its user. Early in human history, technologies tended to increase our physical abilities to act in the physical world, probably because most tasks relied heavily on physical interactions with the world. In contrast, modern technology often extends and enhances our cognitive abilities to act in representational worlds. These technological systems are used to manipulate representations of the relevant domains of the tasks, such as airframes, insurance risks, employees of an organization, sounds, or disease organisms. Thus, the representational/technological systems are valuable because they map work from the domains of interest into representational domains in which the needed work can be performed more quickly, more accurately, more safely, or better in some other way than it can be done directly. These advantages of computational and communication systems have been fueled by stunning improvements in the speed and cost of the technologies concerned.
In nearly all of these systems, however, key operations remain to be performed by human participants. In typical systems, people have to configure the technology to apply it to particular tasks by creating new programs or by interacting with existing systems to specify needed parameters and options. Often, people must provide inputs and interpret outputs. Thus, typical computational systems are hybrids, combining operations carried out by digital technology with operations carried out by people.
The net value of any hybrid system depends on the contributions of disparate parts and the interactions among those parts. And those parts are likely to develop at different rates. For computing and communication systems, we know that the performance of the digital parts has increased rapidly. But what is happening to the performance of the human parts and the interactions among the parts? And what might the implications be—for the design of human-computer interfaces, for human-computer interaction (HCI) curricula, and for the HCI research agenda—of these evolving technological and human capabilities?
Background and Purpose: The Problem
Over two decades ago Donald Norman characterized usable design as “the next competitive frontier” (1990). Although user-centered design (UCD) has motivated strides in our discipline and has come to be seen as an area worthy of study and rigorous practice (e.g., Vredenburg, Isensee, & Righi, 2002), the progress has been slow and incomplete. Usability processes (requirements gathering, empirically-based design, prototype testing, end-of-the-cycle testing, field testing) grew from, among other disciplines, human factors (HF) and are concerned with people’s ability to carry out intended tasks with tools. These usability engineering practices have spawned information architecture and have evolved, in some quarters, into “user experience” or “information experience design.” But it is still too often the case that the gathering of user data to inform or evaluate user interfaces (UIs) is an afterthought or is left to the feedback gleaned from post-ship customer gripes; as evinced by some remarkably unusable website UIs, sometimes a site or software product’s first visitors/users are still its first test subjects.
In 1999, Meister reported the results of a survey he conducted of HF professionals who received their degrees before 1965. He noted the early conflicts with other engineers and observed that HF professionals were often not taken seriously. In 1985 Newell and Card quoted Muckler’s (1984) “lament” about human-factors specialists not being taken seriously. Our intent here is to honor this sentiment, believing that the struggle persists decades later, in some measure, at least for the usability or UCD professional, but to honor it in a constructive way, illustrating possible causes and proposing solutions.
While there have been many demonstrations of the quantifiable value of following a UCD approach (e.g., Bias & Mayhew, 1994, 2005), the siren call of features and schedule can relegate usability to a post-ship issue. Good design can reduce the human costs associated with the human operations. Strategically, for example, a greater share of development resources could be devoted to the human aspects of design. And this, we assert, is best instantiated as systematic attention to usability in practice, in research, and in pedagogy. To support this assertion, we consider trends in machine and human performance.
Trends in Performance
Across the past four decades, improvements in computing technology/systems have been exciting and inexorable. One characterization of this march, and its inexorability, is Moore’s law: the empirical observation that the number of transistors on an integrated circuit doubles every 24 months (see Figure 1). (While it does not diminish our argument, we note that Moore suggested that a continued doubling at 24-month intervals cannot be sustained much longer due to physical limitations [Dubash, 2005], and the ability to translate increased transistor density to performance improvements has been tailing off for some time [see Patterson, 2010].) Along with the density of the transistor count on integrated circuits, other computing-technology-related features have shown similar advances, such as magnetic disk storage (Kryder & Kim, 2009) and the cost of transmitting information over an optical network (Tehrani, 2000). Henceforth we will use Moore’s Law to symbolize all of the general advancement of technology.
Figure 1. An example of Moore’s Law. Reprinted from File:Transistor Count and Moore’s Law – 2011.svg. In Wikimedia Commons, n.d. Retrieved July 8, 2014 from http://commons.wikimedia.org/wiki/File:Transistor_Count_and_Moore%27s_Law_-_2011.svg
Copyright 2011 by Wigsimon. Reprinted with permission.
In stark contrast are the human-information processing capabilities over the past few decades, represented in Figure 2. Key human capabilities include (a) a simple response indicating the perception of a visual or auditory stimulus, (b) storage of information in working memory, and (c) executive control of attention. These capabilities represent perceptual, cognitive, and motor performance. Figure 2 shows performance on tasks designed to measure these various capabilities. We searched the literature for papers that reported research on visual simple reaction time, auditory simple reaction time, working memory digit span, and the Stroop Effect. (The Stroop Effect is the interference caused by having color names appear in a task where people must name the color of the ink in which the words are printed. When a color name is printed in a different color ink, reaction times to announce the color of the ink are negatively influenced. This interference effect is popularly interpreted as a reflection of central processing in the brain.) We looked for research reports from early in the 20th century to early in the 21st century and tried to identify studies with similar conditions (e.g., age and health status of participants, type of stimulus, mode of response). The data from these studies are summarized in Table 1 and Table 2 and presented graphically in Figure 2.
Figure 2. Four human-information processing variables, across time. Note different time scales (y axes) of the four functions.
Table 1. Human-Information Processing Data: Visual and Auditory Reaction Time (RT)
Note. See Figure 2A and 2B for a graphic representation of this information.
a Multiple values in one cell reflect multiple experiments in the same study.
b We used the mean (193 msec) of the two values in the first (Fay, 1936) study as the 100% mark against which we compared the other values in this column.
c We used the first value (162.5) as the 100% mark against which we compared the other values in this column.
Table 2. Human-Information Processing Data: Working Memory Capacity and the Stroop Effect
Note. See Figure 2C and 2D for a graphic representation of this information.
a Multiple values in one cell reflect multiple experiments in the same study.
b We used the mean (7.92 items) of the two values in the first (Gates, 1916) study as the 100% mark against which we compared the other values in this column.
c We used the first value (753) as the 100% mark against which we compared the other values in this column.
The graphs in Figure 2A and 2B represent the data from research on simple reaction time (RT)—the time, in msec, to make a simple response (e.g., pressing a key) to the presentation of a single, simple stimulus (here, a light [Figure 2A] or a sound [Figure 2B]). Figure 2C is a summary of working memory capacity studies, with the values being a number (the number of items recalled). And Figure 2D represents the Stroop Effect data, again in msec. In each case, in order to represent the values for each variable relative to all the other values, we have arbitrarily set the oldest data value as the 100% value and represented all subsequent findings as a percentage of that first value. That is, the Gates (1916) “mnemonic span” of 7.9 items (actually a mean of two experiments, published in the first year of the Journal of Experimental Psychology!) represents the base value, and so the Taub (1972) finding of 7.0 items is represented as 88.6% of that first value. It would not matter which value, or even some other random value, we chose as the baseline; the relationship among the obtained values, as represented by the regression line, would be the same.
Simple RTs are among the earliest scientifically-studied measures of human performance. In 1868 Donders described a method for measuring simple reaction times and observed times similar to those measured in more recent studies (Donders, 1969). Each data value from Table 1 (that is, each point in Figures 2A and 2B) represents a single experiment or a condition from an experiment, and the line in each figure represents the best-fitting line (least squares regression function). Both simple RT functions show only modest change during the period of time captured in the data. The regression function for RT with visual stimuli is essentially flat, increasing only .18 msec per year. The increase in the regression function for auditory stimuli was slightly greater, 1.35 msec per year, but the slope of that line was strongly influenced by a relatively fast RT from a study in 1920. Note that rather than a decrease in RT over the years, the data show a slight increase; people are taking slightly more time, not less time, on these simple tasks. These small changes, though, are likely due to procedural differences among the studies rather than changes in the basic sensory/perceptual/cognitive/motor functioning of humans. RT can be influenced by variables like stimulus intensity, level of test subject arousal, training, or even test participant body mass index. The mean RTs across the studies that we sampled were 220 msec for visual RT and 229 msec for auditory RT, falling within the range that is typically reported for simple RT.
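For readers who wish to reproduce this kind of trend analysis, the following minimal Python sketch fits a least-squares line to year-by-year RT values and normalizes each value to the oldest one, as in Figure 2. The (year, msec) pairs are illustrative placeholders, not the values from Table 1.

```python
import numpy as np

# Illustrative placeholder data, NOT the values from Table 1.
years = np.array([1936, 1954, 1963, 1980, 1995, 2008])
rt_ms = np.array([193, 215, 220, 225, 230, 235])

# Least-squares regression line, as in Figures 2A and 2B.
slope, intercept = np.polyfit(years, rt_ms, 1)
print(f"slope = {slope:.2f} msec per year")  # slope of the fitted trend line

# Normalizing each value to the oldest one, as described for Figure 2.
baseline = rt_ms[0]
print(np.round(100 * rt_ms / baseline, 1))  # each point as a percentage of the earliest value
```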
Another human-information processing variable was described by Miller (1956), who proposed that humans can maintain five to nine “chunks” of information in short-term (or working) memory. The graph in Figure 2C shows the data for six papers (and 10 separate experiments) that measured the number of items (letters or numbers) that participants could recall immediately after hearing or reading a larger set of digits. This measure is typically known as the digit span. All of the data presented here are from forward digit span tasks in which the participants list the digits recalled in the order in which they were presented; an alternative version is the backward digit span task in which the participant responds with the recalled digits in the reverse order relative to presentation. As with visual RT, the reported working memory performance changes only slightly over the years (a decline of .007 items per year based on the regression model). The slight change is a decline in working memory capacity, but it is unlikely to be meaningful. The mean number of items recalled across the 10 experiments in six studies was 7.5, essentially Miller’s (1956) “magic number 7.”
Stroop (1935) reported two experiments focused on naming colors. Stroop’s study illustrates a third human-information processing variable. In Experiment 1, he presented two lists of words. For one list, participants had to read aloud a series of color names printed in black. For a second list, they read a series of color names printed in a color that differed from the one named (e.g., the word “red” printed in green ink). He observed no difference in the times to read the two lists. In Experiment 2, one list contained the color names printed in a colored ink in such a way that the color of the ink was always different from the color name. So, again, the word “red” printed in green ink. A second list contained not words but solid colored rectangles, printed in the same sequence of colors as the inks used for the word list. In both cases in Experiment 2, the participants’ task was to name the color of the ink used to print the word in one list and to print the rectangle in the other list. In this comparison, Stroop observed much longer times to name the colors when the list was made up of words than when it was made up of blocks. That difference in color naming time in Experiment 2 is typically called the Stroop Effect and is commonly explained as being a function of the interference of the reading of the color word with the naming of the color, with reading being seen as the more automatic process that is accomplished first, thereby interfering with naming the color of the ink. Researchers have proposed that the Stroop Test provides a good measure of executive functioning, an activity linked to the frontal lobe (e.g., Miyake, Friedman, Emerson, Witzki, & Howerter, 2000). The graph in Figure 2D shows the mean time to respond to name the color in a Stroop Test in eight experiments from 1972 to 2008. The regression model shows a very small decrease of 0.8 msec per year, but the model accounts for only 3% of the variance in the data.
The stability of many basic human-information processing capabilities across time has allowed some researchers to provide engineering-style models of human cognition in which basic capabilities can be represented parametrically. For example, Card, Moran, and Newell (1983) proposed the Model Human Processor (MHP), equipped with numerous estimates based on research findings for simple reaction time, sensory memory decay times, short-term memory span, eye movement dwell times, sensitivity of movement time to movement difficulty, and so on. Although human-information-processing test performance is sensitive to a variety of factors like age, gender, drug states, and sleep deprivation, taken over the population or large samples thereof, the basic capabilities that underlie the performance are likely to change only over evolutionary time scales.
However, one failing of the engineering-type models of human cognition such as the MHP is that cognition consists of more than a set of parameters from basic processes. One area of research in which this has become evident is cognitive aging. Most of the aforementioned information processing capabilities increase as people age from birth to about 20 years, then decline, gradually for 20 to 30 years, followed by a more dramatic decrease after we turn 40 to 50 (e.g., Salthouse, 1996). Yet, human performance in most everyday tasks doesn’t show similar declines; research on the effects of aging on cognition in real-world contexts has demonstrated that older adults can perform as well as or better than their younger counterparts in complex tasks to which they can bring their greater knowledge and experience (e.g., Hess, 2005). The experience of older adults allows them to develop strategies that compensate for the losses in fundamental capabilities such as memory capacity, reaction time, and the speed of target-directed movements. Thus, we might consider human cognition to involve a complex interplay of relatively inflexible, hard-wired processes like simple reaction to a stimulus and more flexible, soft processes like the application of strategies that determine when and how to apply the hard-wired processes. Interestingly, the hard-wired processes, which are shaped by the species’ experience with the world, seem to show little change over decades if we examine people of the same age, but decline in individuals over time; while humans are good at language and pattern recognition and visual processing and memory of pictures and fine motor manipulations with fingers, we are not likely to be getting any better any time soon.
In contrast, the soft processes, because they are based on experience and consequent knowledge of an individual, are much more flexible. As the environment in which individuals work and live changes over years, such as with the increasing dominance of computing technologies, those individuals can acquire new strategies to improve performance working with those technologies, even when the technologies are a poor fit with the hard-wired processes. This use of soft, strategic processes to overcome loss of function resembles compensation for the declines due to age. That is, developments like the application of better personal strategies can help an individual accommodate to the diminution of hard-wired processes like simple reaction to a stimulus due to aging, but as a species we are not getting better at a rate that is likely to help anyone reading this. In thinking about this we can distinguish changes to soft processes, changes to how we do things, from hard-wired changes, changes in our underlying mental machinery. Hard-wired changes seem to be happening slowly at best. And indeed, it will be the task of HCI designers to increasingly take advantage of these higher-level functions in future designs.
Thus we have the broad message of this review: Figure 3 shows the best fitting line (least squares regression function) of each of the four human-information processing variables juxtaposed with the function representing Moore’s Law. Digital technologies are advanced by deliberate design and engineering improvements and are improving very rapidly, whereas basic human-information processing capabilities are advanced by a slow and weakly directed evolutionary process and are improving little if at all. When human operations can actually be replaced by machine operations, that is, when true automation is possible, the ultimate limits on the possible improvement either in speed or reliability are found in physics. But what if substitution is not possible (or at least not possible yet)? And what are the implications of these mismatched trends for net system performance?
Figure 3. Human-information processing variables compared with Moore’s Law data.
The Amdahl’s Law Case
The interplay between technology and humans is similar to the situation in parallel computation, where performance comes to be dominated by the non-parallelizable parts of a problem, as characterized by Amdahl’s Law (Amdahl’s Law, n.d.). What happens to the performance of a system in which some operations are performed by a machine, and hence speed up rapidly over time, and some must be performed by a human, and do not speed up?
Some simple situations are easy to analyze. Suppose a task requires some work to be done by a computer and a human working together, so that the computer requires Tc (computer time) to do its part and the person requires Tp (person time) to do his or her part. Now suppose the computer is made faster by a factor F. The time required for the computer’s work will now be
(1/F)Tc
which (of course) represents a speedup of F: new time / old time = ((1/F)Tc)/Tc = 1/F
But if we consider the overall task time, including the human’s work, we get
new time / old time = ((1/F)Tc+Tp)/(Tc+Tp)
As F increases, the numerator is dominated by Tp, the time required for the person’s work. The ratio of new time to old time tends towards
Tp/(Tc+Tp) = 1/((Tc/Tp)+1)
representing a speedup of at most ((Tc/Tp)+1) no matter how large F gets. That is, as the computer gets faster, the performance of the human plays a bigger role in determining the total task time.
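The limiting behavior is easy to verify numerically. The following minimal Python sketch (ours, with arbitrary illustrative values for Tc and Tp) computes the overall speedup as F grows:

```python
def overall_speedup(tc, tp, f):
    """Speedup of the whole task when only the computer's share is sped up by a factor f."""
    return (tc + tp) / (tc / f + tp)

tc, tp = 9.0, 1.0  # illustrative: the computer initially does 90% of the work
for f in (1, 2, 10, 100, 1_000_000):
    print(f"F = {f:>9,}: overall speedup = {overall_speedup(tc, tp, f):.3f}")
# The speedup approaches (Tc/Tp) + 1 = 10 no matter how large F becomes.
```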
What happens if some work can be overlapped between the computer and the person? The basic situation doesn’t change. As the computer speeds up, the amount of possible overlap diminishes, and the time required for human operations still comes to dominate the overall time required for the task.
The situation is shown graphically in Figure 4. The graph shows total task time for 11 doublings of computer performance, corresponding to 22 years or so of improvement. No improvement in human performance is assumed over this period. The traces in the graph correspond to different divisions of labor, Tc/Tp. As can be seen, the effects of improvements in computer performance appear early on and are greater when the computer’s share of the work is initially larger. But as time goes on, further performance improvements deliver essentially no benefit. As in Amdahl’s Law for parallelism, if some portion of a task cannot be sped up, the increase in speed due to improvements diminishes as task completion time comes to be dominated by the time required for the unimproved portion.
Figure 4. Effects of varying human and machine performance parameters on the relation between the time for computer operations and the task performance of a computer-human team.
And thus, as in Amdahl’s Law for parallelism, if some portion of a task cannot be done by the computer, the increase in speed due to increasing computer performance diminishes, as task completion time comes to be dominated by the time required for the human’s portion of the work.
An example employing a task that will be familiar to all is the speed of creating a document, perhaps a business letter. Today, this task is nearly always supported by computer. At an early stage of the development of the relevant computational tools, improvements in the performance of these tools had substantial impact on the overall task time. But the time required today to produce a document is dominated by the time required by the human operations; even reducing the time required for all of the computational operations to zero would make little practical difference in total task time. Improvement can come only from improvements in the human operations in this particular hybrid computing-human system. A slow typist would be much better served by spending his or her money on typing lessons than by replacing his or her 1.66 GHz computer with a 3.2 GHz computer.
Reliability Has a Similar Logic to Speed
In many situations obtaining results more reliably is more important than increased speed. Here the characteristics of digital technology also often offer advantages. In most situations, well-specified procedures can be carried out more reliably by a machine than by human operators. But when human operations must be combined with digital operations (i.e., when an entire task cannot be automated), how does the reliability of a hybrid system respond to improvements in the performance of its elements?
Consider two simple ways in which these elements could be combined. In a serial combination, the results of some machine operation are passed on for human processing, or vice versa, or even chained in alternation. In parallel combination, results of human and machine operations are passed on for post-processing to determine a result. In both of these constructions—serial and parallel—it is reasonable to assume that both human and machine operations have to be correct to obtain a correct final result.
This leads to a similar relationship to that seen above for speed. If the error rate for computational operations is Ec and that for the operations of people is Ep, the combined error rate, when both computer and human operations must be correct, is
Ec+(1-Ec)Ep
As in the speed analysis, one sees that as Ec decreases, the overall error rate is dominated by Ep. Once Ec is small, further decreases have very little impact.
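Again, a few lines of Python (with an arbitrary, illustrative value for Ep) make the limit concrete:

```python
def combined_error_rate(ec, ep):
    """Error rate when both the computer's and the person's operations must be correct."""
    return ec + (1 - ec) * ep

ep = 0.02  # human error rate, held fixed (illustrative)
for ec in (0.1, 0.01, 0.001, 0.0):
    print(f"Ec = {ec:<6} -> combined error rate = {combined_error_rate(ec, ep):.4f}")
# Once Ec is small, the combined rate is essentially Ep; further machine improvements barely help.
```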
Improvements in Human Performance, Plus an Analogy
Except for cases of total automation that do not require a human serving even as a monitor, it appears that improvements in machine speed or reliability, on their own, provide limited benefits without accompanying improvements in human performance. And Figure 2 suggests that we can’t expect improvements in basic human-information processing any time soon.
Imagine, if you will, the accomplishment of any human task as being like trying to get from one place to another—from a place of not having the task done to the place of having the task completed. For our purposes, we will consider the task of writing a letter, and we will liken it to getting from New York to Los Angeles. In the days before papyrus and ink, writing a letter, perhaps with a stone and a piece of flint (“Dear Oog”), was a laborious task, much like traveling from NY to LA in a horse-drawn wagon. Paper and ink made letter writing faster, like perhaps driving in a Model T on dirt roads. The Gutenberg press moves us to traveling on one-lane highways (though, really, it didn’t speed up the first letter so much as the copies), and with the typewriter we’re in an Edsel on blacktop. Comes the PC, and we are buzzing along the interstate highway system. With human fingers and brains, we are not going to type a letter any faster. But, oh wait, what about the airplane? So now we have changed human channels; instead of typing we are using speech-to-text, and we are generating our letter—we are getting to LA much faster. But—and here is our fundamental point—we may do a better job of selecting which human-information processing channel to employ, and we may improve the connection between some information activity and that channel, but given the laws of physics and the almost-flat-lined parameters of human-information processing, the human is destined to be the limiting factor in any human-machine system.
In these terms, given the trends observed in Figure 3, the human brain and sensory/perceptual/cognitive/motor system is ever increasingly the limiting factor. We have more and more information and the possibility of more and more online tasks, all presented and represented by more and more powerful, parallel technological systems, trying to squeeze through eyes, ears, and brains that evolved for hunting, gathering food, finding mates, and avoiding getting eaten by predators. However, as the distinction that we made above between hard-wired and soft human processes suggests, the picture in Figure 3 may be misleadingly dismal. Although there are indeed aspects of human performance that have not improved, we’ll suggest that many others have improved dramatically and could be improved further.
Is Multitasking Special?
Human beings have always had a capacity to attend to several things at once, although perhaps with some cost in task performance depending on the relations between the tasks in characteristics like sensory modality or visual channel (e.g., Wickens, 2008). Mothers have done it since the hunter-gatherer era—picking berries while suckling an infant. Nor is electronic multitasking entirely new; people have been driving while listening to car radios since the radios became popular in the 1930s. But there is no doubt that the phenomenon has reached a kind of warp speed in the era of Web-enabled computers (Wallis, 2006).
Today, “We produce and consume data at rates that are agonizingly slow by computer standards” (Lee, 2010, para. 4). But the proliferation of digitized information, plus the associated proliferation of devices to afford access to this information, leads to some new possibilities. Does the supposed ability of Generation M to multitask allow them to do a better, or worse, job of processing information? Has the multitasking skill helped narrow the gap between human performance and machine performance? Might it be tapped more fully in hybrid systems?
In telephony the term is multiplexing, whereby multiple analog message signals or digital data streams may be combined into a single signal that travels over some medium (Multiplexing, n.d.). In psychology the term tends to be parallel processing, whereby one person, or some single entity, carries out multiple operations or tasks at the same time (Parallel processing, n.d.). Ask any middle school boy what it’s called when he’s doing his homework, listening to his iPod, engaging three friends in separate texting threads, and emailing his teacher about why that homework may be late, and he’ll say “multitasking.” Are people actually getting better at multitasking?
Some researchers would argue that what appears to be multitasking may actually be, at least in some cases, rapid task switching, involving shifting attention from one task to another every few seconds (e.g., Salvucci, Taatgen, & Borst, 2009). Whether it be multitasking or task switching or some combination, those of us who think we’re good at it tend to believe it strongly. Think how long it would take us if we were to do all those things one at a time! The empirical data seem to be mixed, with a tendency towards the notion that the time spent switching and ramping back up on the new task washes out any real speed or reliability advances afforded by multitasking (e.g., Liefooghe, Barrouillet, Vandierendonck, & Camos, 2008; Meiran, Chorev, & Sapir, 2000). Whether or not multitasking or task switching tends to improve overall attentional abilities (e.g., Green & Bavelier, 2003, 2007) or otherwise improve performance in general, people are doing it, and work is needed to support it effectively in UI designs. Park (2014) has employed real-world tasks to examine what variables lead to the experience of “flow states” when multitasking. And this provides additional motivation for a renewed focus on usability. What is it that we have done already, in the realm of UI design or software engineering practices or HCI research, to support or reflect this focus?
Empirical Research Data Related to Multitasking
As noted above, there are many researchers who would argue that true multitasking, in which a human attends to two or more tasks at once, cannot be accomplished by the human perceptual/cognitive system. They would argue that what appears to be multitasking is actually rapid task switching, involving shifting attention from one task to another every few seconds (e.g., Burgess, 2000). An argument against that position is that humans engage in many responses that typically require no attention—breathing, blinking, maintaining posture—while engaged in other attention-requiring tasks. However, ask a person having a serious asthma attack if they are capable of doing other tasks when they have to think about taking each breath (although we suggest that you wait until the attack has subsided before asking). Also, as Wallis (2006) suggested, as we become skilled drivers, we may listen to the radio or carry on a conversation while steering, navigating, and accelerating the car, seemingly dividing our attention among these different activities. When the car in front of us suddenly stops, we can shift our attention to braking, or when we are searching for a particular freeway exit, we can shift to navigation, but it seems as if highly experienced drivers can perform many tasks at once under normal, placid driving conditions. However, recall what it was like as a novice driver when a friend tried to carry on a conversation; many novice drivers have difficulty doing both tasks. Another example is a skilled typist who can type a manuscript and carry on a conversation simultaneously. Novice typists can’t do both tasks, but the highly experienced touch typist can. So, it appears that multitasking can occur when at least one of the tasks is highly practiced so that it requires no conscious attention to the task.
Many other situations that are referred to as multitasking are more likely cases of frequent and relatively rapid task switching. For example, in a report on the effects of the Internet on the behavior of young Americans (Lenhart, Rainie, & Lewis, 2001), a 17-year-old girl was quoted, “I get bored if it’s not all going at once, because everything has gaps—waiting for a website to come up, commercials on TV, etc.” This quote was used in a subsequent report on media multitasking (Foehr, 2006). But as the debate and the research on the relative value of multitasking continues (e.g., Park & Bias, 2012), there can be no doubt that at least some of us are attempting to multitask, or task switch with short cycle times, much more frequently than in years past.
How Have Designers Responded to a (Perhaps Tacit) Awareness That the Human Is the Tortoise in This Race?
We don’t believe we are the first to recognize a need to attend to both the machines and the people, to maximize the interaction between the two. The following are some examples of apparent accommodations that people producing information products and devices have devised to help us.
Software Tools and Features
With the acknowledged explosion of information has come the development of online tools and features in ways that have acknowledged the limitations of human-information processing capabilities. One example is RSS feeds and other software agents; while we may read only 250 words per minute (e.g., Haught & Walls, 2002) and are unlikely to improve significantly in the foreseeable future, these feeds afford us a much higher signal-to-noise ratio, filtering out relatively uninteresting words.
As more and more types of information get stored online, video and image search tools have similarly improved, reducing the set of items users must consider before finding their target items.
Accessibility tools such as screen readers have helped computer users with visual impairments gain access to information and capabilities to process that information that would not otherwise be available to them. Relatedly, tools such as speech-to-text tools let people carry out some tasks more efficiently and perhaps in parallel with visual tasks.
Trends in Information Management
With the proliferation of digitally-stored information, of new and particular import is concern with how to manage all this information. There are several trends in the field of Information Studies that this research thread will fuel:
- Information retrieval (IR): Once created and stored, how shall information be sought and retrieved by human users? IR (e.g., Agichtein, Brill, & Dumais, 2006; van Rijsbergen, 2006) is concerned with tools to help humans filter the vast expanse of information and to help maximize the chances that any information they confront is valuable, and that all valuable information is confronted.
- Data mining (DM): Related to IR, DM (e.g., Frawley, Piatetsky-Shapiro, & Matheus, 1992; Hand, Mannila, & Smyth, 2001) entails performing automatic searches of large volumes of data for patterns such as association rules or groups of data records (Data mining, n.d.); a minimal illustration appears after this list. As Dumas points out, “What data mining does is find patterns that humans are not likely to find because the trends are buried in too much data. This is an example in which the technology overcomes a human weakness” (personal communication, December 5, 2013).
- Information visualization: How shall information be presented to human users to maximize the chances that they can extract the information they need once they have accessed the document or other information container (Bederson & Shneiderman, 2003; Tufte, 2001)? Here is another example of how with attention to particular human capabilities (here, in pattern recognition and processing of pictorial information) we can design better user interfaces.
- Social computing and crowdsourcing: Extending computer-supported cooperative work, new advances in distributed, shared conduct of tasks (e.g., Paolacci, 2010) allow for multiple people in disparate locations to make progress in parallel on an ever increasing variety of tasks.
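To make the data mining item above concrete, here is a minimal, purely illustrative sketch of the simplest kind of pattern search: counting which pairs of items co-occur across transactions, the first step toward mining association rules. The transaction data are invented for illustration.

```python
from collections import Counter
from itertools import combinations

# Hypothetical transaction data; in practice, DM tools scan millions of records.
transactions = [
    {"milk", "bread", "eggs"},
    {"milk", "bread"},
    {"bread", "eggs"},
    {"milk", "eggs"},
    {"milk", "bread", "eggs"},
]

# Count how often each pair of items appears together.
pair_counts = Counter()
for items in transactions:
    for pair in combinations(sorted(items), 2):
        pair_counts[pair] += 1

# "Support" of a pair = fraction of transactions containing both items.
for pair, count in pair_counts.most_common(3):
    print(pair, count / len(transactions))
```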
Software Engineering Practices
In order for information to be retrieved, data to be mined, and information to be visualized, software developers are responsible for producing computer-based functionality. Indeed, they get paid and promoted for doing so, especially for doing so on time and with quality. With increased functionality, though, comes increased UI complexity, absent concomitant improvements in the matching of that functionality to the needs of the user. Additional functionality is then created to deal with the complexity, such as the following:
- Graphical user interface (GUI) developers provide menus, which acknowledge the fact that cued recall is typically better than free recall (Haist, Shimamura, & Squire, 1992), and icons (with a nod to Fitts’ Law [e.g., Card, English, & Burr, 1978]), which afford the user relatively large targets to hit with their pointing devices.
- Internet standards have evolved, such as blue, underlined text to indicate links, given that the healthy human visual system is typically good at distinguishing color (except for the 10% of males and 1% of females with some form of color vision deficiency) and underlines.
- The aforementioned RSS feeds and other software agents help us sort through the proliferation of digital information, and spam filters help similarly.
Still, words must be read, blue links must be distinguished from purple ones, targets must be acquired visually and clicked, and decisions must be made. Research in such topics will drive best practices in UI design. If average human capabilities in reading, identifying, clicking, and deciding are not likely to change in our lifetime, then computer system developers have to be aware of, and work within, these limitations. Whether or not we have reached the limit in taking advantage of human processing capabilities—and we assume we have not—someday we will approach that limit, and the human will not be advancing further. And so, what software engineering practices might be amended, and what research threads might we pursue, if we are to appropriately acknowledge the tortoise-like advances in rudimentary human-information processing?
How Can We Better Accommodate the Fixed Human-Information Processing Capabilities?
Clearly, some interaction design practices have acknowledged and taken advantage of the human’s higher-level processes, beyond our basic information processing skills. The move from text-based screens with keyboard control to GUIs controlled by a mouse to touch-based interfaces to speech recognition represents this evolution in our exploitation of human capabilities; we can now fly to LA instead of taking that beast-drawn wagon. Barring total automation or the hands/eyes/ears-free brain interface—before we get to electron-transfer from New York to LA—what are the next steps, or what will fuel them? Below are some areas whereby an appreciation of the ever-increasing importance of the human in any human-computer system may well lead to improved HCI performance.
Context-Aware Initiative
T. V. Raman (personal communication, June 6, 2009) has suggested that rapid enhancement of the ability of devices like phones to sense their environment (e.g., location, time, orientation) creates the opportunity to replace human actions and decisions that are now required with machine initiative. He cites the example of accessing a bus schedule. Currently, this usually requires human initiative, but in some situations a machine could determine that the schedule is very likely to be needed, and simply present it. For example, the user may be in a place at a time of day when he or she very often calls for the schedule, such as leaving work toward the end of the day.
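As a purely hypothetical sketch (the rule, thresholds, and data structures below are our own assumptions, not Raman’s design), such machine initiative might be expressed as simply as this:

```python
from datetime import datetime

def should_show_bus_schedule(location, now, history):
    """Offer the schedule proactively when the current context matches past lookups."""
    past_here = [h for h in history if h["location"] == location]
    # e.g., the user has repeatedly checked the schedule from this place near this hour
    same_hour = [h for h in past_here if abs(h["time"].hour - now.hour) <= 1]
    return len(same_hour) >= 3

# Illustrative lookup history: leaving the office toward the end of the day.
history = [{"location": "office", "time": datetime(2014, 7, d, 17, 30)} for d in (1, 2, 3)]
print(should_show_bus_schedule("office", datetime(2014, 7, 8, 17, 45), history))  # True
```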
Improvements in Learnability
A study by Babbage (2011) showed that typical users can use a range of iPhone applications with no training. In fact, users with serious brain injuries, including people for whom remembering training experiences or instructions is difficult, did quite well too. Increased attention to what makes applications easy and hard to learn will pay big dividends in user productivity and satisfaction.
Changes in the Human Operations That Are Required
The logic of visualization is the replacement of difficult judgments by easy ones, for example, replacing the task of finding the largest number in a list with the task of seeing the highest point on a curve. More generally, replacing difficult cognitive processes with easier perceptual ones will continue to deliver improvements in UIs.
Cognitive Dimensions Analysis for Configuration Facilities
Thomas Green and others (e.g., Green & Petre, 1996) have documented many ways in which representations can support complex tasks like programming better or less well. They captured these insights in the form of dimensions for designers to consider. For example, a representation is viscous if it is difficult to make changes to it. Increased attention to such ideas can help make it easier for users to configure their computational tools and thus enable the machine to do more of the work in a hybrid human-machine system.
IT Acquisitions/Practice
Imagine if your company or organization were considering two competing employee management systems. System 1 costs $100,000 to purchase, and System 2 costs $200,000 to purchase. However, System 2 requires less user training, affords the users quicker and more direct access to the system functions, and leads to less user frustration. At what point does it make more sense to purchase System 2? A consideration of total cost of ownership, informed by crisp awareness of user capabilities, will lead to maximally efficient human-machine systems (e.g., Menard, 2009).
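A back-of-the-envelope total-cost-of-ownership sketch illustrates the point; every figure below (user count, hourly rate, training hours, annual task hours) is invented for illustration and is not drawn from Menard (2009).

```python
def total_cost(purchase, users, training_hrs, hourly_rate, annual_task_hrs, years):
    """Purchase price plus the cost of training and of employee time spent using the system."""
    training = users * training_hrs * hourly_rate
    usage = users * annual_task_hrs * hourly_rate * years
    return purchase + training + usage

users, rate, years = 500, 30.0, 3
system1 = total_cost(100_000, users, training_hrs=8, hourly_rate=rate, annual_task_hrs=40, years=years)
system2 = total_cost(200_000, users, training_hrs=2, hourly_rate=rate, annual_task_hrs=30, years=years)
print(system1, system2)  # the cheaper-to-buy system can easily be the costlier one to own
```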
Higher Education Curricula
This consideration of the fixed human-information processing capabilities needs to inform higher education curricula, in programs of Information Studies, Computer Science, Communications, and beyond. We have written elsewhere on the dangers of amateur usability engineering (Bias, 2003). Curricula (beyond traditional HF programs) need to include UCD and to address the increasing relative importance of the human, and the human’s strengths and weaknesses, in any human-computer systems, for the ultimate betterment of subsequent UI designs.
HCI Research
More research should intentionally combine the study of psychophysics and perception with experimentation on computer presentation of information. An even larger opportunity exists for work on the cost structure of human operations in programming. The few beginnings in this area show that the critique of Newell and Card (1985) over 25 years ago still holds: “Millions for compilers but hardly a penny for understanding human programming language use” (p. 212).
The cost structure of social operations also badly needs inquiry. Our ability to predict what operations will and will not be effectively carried out by self-organizing volunteers is weak, yet this kind of work is already of decisive importance in many real-world activities.
One act that would help drive integration of UCD would be cross-publishing in HCI and computer science venues. Perhaps this call is to editorial boards, who will need to avoid a dependency on insider lingo and appreciate the value of the contributions from other “camps.” The answer here requires action on all fronts—writers need to pursue new venues and the editors and reviewers need to be receptive.
Why Not Usability?
Why aren’t the things we’ve just called for already being done or (since all of them are being done in some places) done more? Since the early days of the study of HCI (e.g., the first Human Factors in Computer Systems conference in Gaithersburg, in 1982, or the publication of Card, Moran, and Newell’s The Psychology of Human-Computer Interaction in 1983), numerous researchers and computer scientists have argued that usability deserves a place at the design table (see Johnson & Henderson, 2012). The attempts to make this argument or to support it have taken numerous forms. In those early days of the late 1970s and ‘80s much of the focus was on establishing research methods and a scientific knowledge base (e.g., Card, English, & Burr, 1978). Later, in the ’80s and into the ’90s, many researchers developed models of the interaction between humans and computers and used those models to further expand the usability toolkit (e.g., Carroll & Rosson, 1992; Lewis, Polson, Wharton, & Rieman, 1990). Researchers later addressed the potential economic effects of usability by examining various approaches to cost justifying usability (e.g., Bias & Mayhew, 1994, 2005). These analyses focused on the importance of usability for (a) reducing user errors and task completion time and increasing user efficiency and satisfaction, thus leading to increased sales or usage; (b) decreasing expensive changes late in the design process; (c) decreasing training time and effort as well as turnover of employees; and (d) decreasing the costly customer support burden. More recently, usability has been expanded to focus on the entire user experience (e.g., Hassenzahl & Tractinsky, 2006). User experience can be considered to encompass more of the user’s interaction with a technological artifact than did traditional usability analyses, including affective responses, preferences, and beliefs, in addition to the perceptual and cognitive underpinnings more often examined in usability analyses (e.g., ISO FDIS 9241-210, 2009). Yet, despite all of the research reports, chapters, and books extolling the importance of usability, it seems that still more progress is needed to increase the acceptance of usability in the product design process. The situation is better than it was two decades ago when Landauer (1995) asserted “Systems are only rarely tried out on users in their environment before they are sold” (p. 133). But maybe not as much better as we would have hoped.
Why has so much work yielded so few (or at least insufficient, we assert) results? Below we suggest some possible factors. Overcoming these factors may be necessary to create conditions for accommodating our fixed human-information processing capabilities.
Poor science base for usability engineering
Much of the basic research on perception and cognition doesn’t generalize well to real world tasks (or at least this generalization has not been a focus). For example, task switching research tends to use operations such as adding numbers or searching for a target letter and to force the participant to task switch with no preparation (Panepinto, 2010).
There is a need to push the contextual paradigm for cognitive research, especially applied cognitive research, for example, by studying explicitly simulations of real world tasks. Panepinto’s (2010) study of task switching, for example, entailed real-world tasks (document proofreading and completing a Sudoku puzzle) and real-world situations (forced vs. self-selected task switching).
Poor communication of the relevant science base to usability engineers
The relevant science base that does apply to HCI is frequently misapplied. For example, Fitts’ Law, which predicts movement times based on the ratio of the movement distance and the size of the target being moved to, has been applied to designs of buttons that are of a constant size (e.g., Silfverberg, MacKenzie, & Korhonen, 2000)—the lack of variation in the target size makes the use of Fitts’ Law meaningless. A second example is the misapplication of Miller’s (1956) “magical number seven plus or minus two.” In his classic paper, Miller describes research in one-dimensional absolute judgment tasks (e.g., identifying levels of saltiness or brightness) and in short-term memory tasks that suggests limitations in information processing and short-term storage of about five to nine items. However, this idea of limited processing has been too broadly applied to topics like menu design, where HCI designers have proposed that the ideal menu size is seven because of Miller’s magical number (Comber, 1995). A list of menu items is neither a one-dimensional stimulus nor does it involve short-term memory (well, not to the same degree as Miller’s task) because the items are all visually available. Even when limitations on short-term memory are important in design, such as with an interactive voice response (IVR) system, the concept of the magical number has been misapplied to suggest that menus should be limited to no more than five items (Commarford, Lewis, Smither, & Gentzler, 2008). Commarford et al. (2008) showed that a broader IVR menu structure produced better performance than a deeper one, in part because users can discard menu items that don’t match their goals from working memory, thereby reducing the load on working memory.

The discipline of Psychology will do well to develop and nurture a cadre of applications specialists whose job involves the transfer of knowledge from basic research to application and the translation of issues in applied domains into interesting problems for basic researchers (e.g., Gillan & Bias, 1992; Gillan & Schvaneveldt, 1999).
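To illustrate the Fitts’ Law point, the following minimal sketch uses the common Shannon formulation of the law with invented constants; when the target width is held constant, the predicted movement time varies only with distance, so the design question of target size never enters the model.

```python
import math

def movement_time(a, b, distance, width):
    """Fitts' Law, Shannon formulation: MT = a + b * log2(D/W + 1), in seconds."""
    return a + b * math.log2(distance / width + 1)

a, b = 0.1, 0.15  # illustrative constants, not values from any particular study

# With a constant target width, only distance varies the prediction;
# the effect of target size can neither be tested nor exploited.
for d in (50, 100, 200, 400):
    print(d, round(movement_time(a, b, d, width=20), 3))
```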
We need to develop ways to make the science base understandable and usable by its consumers—designers and usability engineers. One example of this approach is the development of workload measures as a method for applying the basic science construct of cognitive workload to design. A historically useful workload measure is the NASA-TLX (Hart & Staveland, 1988). To enhance its ease of application even further, researchers have developed online versions of the NASA-TLX (e.g., Sharek, 2011). Another example is CogTool, which integrates into a UI prototyping tool automated evaluation based on empirically-based human performance models (Bellamy, John, & Kogan, 2011).
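As a concrete, and entirely illustrative, sketch of how a weighted NASA-TLX score is computed from subscale ratings and pairwise-comparison weights (the ratings and weights below are invented, not from any actual study):

```python
subscales = ["Mental", "Physical", "Temporal", "Performance", "Effort", "Frustration"]
ratings = {"Mental": 70, "Physical": 20, "Temporal": 55,
           "Performance": 40, "Effort": 65, "Frustration": 30}
# Weights come from 15 pairwise comparisons; each subscale's weight is the number of times it was chosen.
weights = {"Mental": 5, "Physical": 1, "Temporal": 3,
           "Performance": 2, "Effort": 3, "Frustration": 1}

assert sum(weights.values()) == 15
overall = sum(ratings[s] * weights[s] for s in subscales) / 15
print(round(overall, 1))  # weighted workload on the 0-100 scale
```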
Users haven’t demanded usable products
It has been noted frequently that users who experience usability problems have a tendency to blame themselves for the problem. One focus during the 2010s should be educating users about usability—perhaps a Usability Reports magazine modeled after Consumer Reports?
The computer industry has a history of “satisficing”
Imagine if drug manufacturers had as many recalls as software has bugs (software defects and usability problems). Of course, most software problems don’t lead to such serious consequences as medication problems might. (Though certainly some do.) But still, there has been a sense that it is acceptable to “get something out there” and make incremental improvements as the actual users unearth problems. We believe that as stakeholders (and purse-string holders) become educated as to the costs (in terms of customer/user satisfaction and, directly or indirectly, profit margins) of this approach, a more proactive approach to attention to human needs and capabilities will take hold. An emerging science of design (Baldwin, 2007), especially as applied to web pages and other software UIs, is serving to drive this change. Also, the aforementioned work by Babbage and by the CogTool team highlights the increasing, and increasingly perceived, value of attention to human limitations in the design of computing systems.
Recommendations
Moving forward, the discipline of usability would do well to reinforce
- the recognition of the disparity between the speed of advancement of technology and the speed of advancement of human-information processing capability,
- the use of this fact to advocate for more resources devoted to usability practice and HCI research, and
- the systematic attention to ways in which system design can accommodate our fixed human-information processing capabilities.
Conclusion
Given the proliferation of digitized information and the unmistakable and inexorable advances in computing technology, the human user is losing the race in advancing information processing capabilities. In this treatise we have leaned on a century of published psychological literature to demonstrate this. We have identified software engineering and design practice, and research threads, that have already (explicitly or implicitly) acknowledged the need for ongoing and increased attention to the human component of any human-computer system, and we have proposed future actions that will undergird better design. Our intention is to motivate strides both in our understanding of the human limits that should drive all computer design and in our communicating this fact to the greater HCI-design community.
References
- Agichtein, E., Brill, E., & Dumais, S. T. (2006). Improving web search ranking by incorporating user behavior. In Proceedings of SIGIR 2006. Retrieved October 19, 2006, from http://research.microsoft.com/~sdumais/SIGIR2006-fp345-Ranking-agichtein.pdf
- Amdahl’s law. (n.d.). In Wikipedia. Retrieved September 27, 2009, from http://en.wikipedia.org/wiki/Amdahl’s_law
- Babbage, D. (2011, June). Cognitive barriers to mainstream mobile computing devices in neurorehabilitation. Advancing Rehabilitation Technologies for an Aging Society (RESNA/ICTA), Toronto.
- Baldwin, C. Y. (2007, March 1). Steps toward a science of design. NSF PI Conference on the Science of Design, Alexandria, VA. Retrieved July 1, 2014, from http://www.people.hbs.edu/cbaldwin/dr2/baldwinscienceofdesignsteps.pdf
- Bayliss, D. M., Jarrold, C., Gun, D. M., & Baddeley, A. D. (2003). The complexities of complex span: Explaining individual differences in working memory in children and adults. Journal of Experimental Psychology: General, 132, 71–92.
- Beck, C. H. (1963). Paced and self-paced serial simple reaction time. Canadian Journal of Psychology, 17, 90–97.
- Bederson, B. B., & Shneiderman, B. (2003). The craft of information visualization: Readings and reflections. San Francisco: Morgan Kaufmann.
- Bellamy, R., John, B. E., & Kogan, S. (2011). Deploying CogTool: Integrating quantitative usability assessment into real-world software development. In Proceedings of the 33rd International Conference on Software Engineering (ICSE ’11; pp. 691–700). New York, NY: ACM.
- Bias, R. G. (2003). The dangers of amateur usability engineering. In S. Hirsch (chair), Usability in practice: Avoiding pitfalls and seizing opportunities. Annual meeting of the American Society for Information Science and Technology, October, Long Beach.
- Bias, R. G., & Mayhew, D. J. (Eds.). (1994). Cost-justifying usability. Boston: Academic Press.
- Bias, R. G., & Mayhew, D. J. (Eds.). (2005). Cost-justifying usability: Update for the Internet age (2nd ed.). San Francisco: Morgan Kaufmann.
- Brynjolfsson, E. (1993, December). The productivity paradox of information technology: Review and assessment. Communications of the ACM, 36, 66–77.
- Burgess, P. W. (2000). Real-world multitasking from a cognitive neuroscience perspective. In S. Monsell & J. Driver (Eds.), Control of cognitive processes, attention and performance (No. XVIII; pp. 465–472). Cambridge, MA: MIT Press.
- Card, S. K., English, W. K., & Burr, B. J. (1978). Evaluation of mouse, rate-controlled isometric joystick, step keys, and text keys for text selection on a CRT. Ergonomics, 21, 601–613.
- Card, S. K., Moran, T., & Newell, A. (1983). The psychology of human-computer interaction. Hillsdale, NJ: Erlbaum.
- Carroll, J. M., & Rosson, M. (1992). Getting around the task-artifact cycle: How to make claims and design by scenario. ACM Transactions on Information Systems, 10, 181–212.
- Chincotta, D., Underwood, G. A., Ghani, K., Papadopoulou, E., & Wresinski, M. (1999). Memory span for Arabic numerals and digit words: Evidence for a limited-capacity, visuo-spatial storage system. The Quarterly Journal of Experimental Psychology. Section A: Human Experimental Psychology, 52A(2), 325–351.
- Comber, T. (1995). Building usable web pages: An HCI perspective. In A. Ellis & R. Debreceny (Eds.), AusWeb95, innovation and diversity: The World Wide Web in Australia, Proceedings of AusWeb95, the first Australian World Wide Web Conference (pp. 119–124). Lismore, Australia: Norsearch Ltd.
- Commarford, P. M., Lewis, J. R., Smither, J. A., & Gentzler, M. D. (2008). A comparison of broad versus deep auditory menu structures. Human Factors, 50, 77–89.
- Cothran, D. L., & Larsen, R. (2008). Comparison of inhibition in two timed reaction tasks: The color and emotion Stroop tasks. Journal of Psychology, 142, 373–385.
- Data mining. (n.d.). In Wikipedia. Retrieved October 19, 2006, from http://en.wikipedia.org/wiki/Data_mining#_ref-1
- Donders, F. C. (1969). On the speed of mental processes. In W. G. Koster (Ed.), Attention and performance II. Acta Psychologica, 30, 412–431. (Original work published in 1868.)
- Dubash, M. (2005, April 13). Moore’s Law is dead, says Gordon Moore. Techworld. Retrieved July 1, 2011, from http://news.techworld.com/operating-systems/3477/moores-law-is-dead-says-gordon-moore/
- Dulaney, C. L., & Rogers, W.A. (1994). Mechanisms underlying reduction in Stroop interference with practice for young and old adults. Journal of Experimental Psychology, 20, 470–484.
- Emmerich, D. S., Fantini, D. A., & Ellermeier, W. (1989). An investigation of the facilitation of simple auditory reaction time by predictable background stimuli. Perception & Psychophysics, 45, 66–70.
- Fay, P. J. (1936). The effect of cigarette smoking on simple and choice reaction time to colored lights. Journal of Experimental Psychology, 19, 592–603.
- Foehr, U. (2006). Media multitasking among American youth: Prevalence, predictors and pairings. Report Number 7592. Kaiser Family Foundation. Retrieved February 22, 2012 from http://www.kff.org/entmedia/upload/7592.pdf
- Frawley, W. J., Piatetsky-Shapiro, G., & Matheus, C. J. (1992). Knowledge discovery in databases: An overview. AI Magazine, 13, 57–70.
- Gates, A. I. (1916). The mnemonic span for visual and auditory digits. Journal of Experimental Psychology, 1(5), 393–403.
- Gillan, D. J., & Bias, R. G. (1992). The interface between human factors and design. Proceedings of the Human Factors Society 36th Annual Meeting (pp. 443–447).
- Gillan, D. J., & Schvaneveldt, R. W. (1999). Applying cognitive psychology: Bridging the gulf between basic research and cognitive artifacts. In F. T. Durso, R. Nickerson, R. Schvaneveldt, S. Dumais, M. Chi, & S. Lindsay (Eds.), The handbook of applied cognition (pp. 3–31). Chichester, England: Wiley.
- Goodrich, S., Henderson, L., Allchin, N., & Jeevaratnam, A. (1990). On the peculiarity of simple reaction time. The Quarterly Journal of Experimental Psychology, 42A, 763–775.
- Green, C. S., & Bavelier, D. (2007). Action video game experience alters the spatial resolution of attention. Psychological Science, 18(1), 88–94.
- Green, C. S., & Bavelier, D. (2003). Action video games modify visual selective attention. Nature, 423, 534–537.
- Green, T. R. G., & Petre, M. (1996). Usability analysis of visual programming environments: A ‘cognitive dimensions’ framework. Journal of Visual Languages and Computing, 7, 131–174.
- Haist, F., Shimamura, A. P., & Squire, L. R. (1992). On the relationship between recall and recognition memory. Journal of Experimental Psychology: Human Learning and Memory, 18, 691–702.
- Hand, D., Mannila, H., & Smyth, P. (2001). Principles of data mining. Cambridge, MA: MIT Press.
- Hart, S. G., & Staveland, L. E. (1988). Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. In P. A. Hancock & N. Meshkati (Eds.), Human mental workload (pp. 139–183). Amsterdam: North-Holland.
- Hassenzahl, M., & Tractinsky, N. (2006). User experience—A research agenda. Behaviour and Information Technology, 25, 91–97.
- Haught, P. A., & Walls, R. T. (2002). Adult learners: New norms on the Nelson-Denny reading test for healthcare professionals. Reading Psychology, 23, 217–238.
- Heathcote, A., Popiel, S. J., & Mewhort, D. J. K. (1991). Analysis of response time distributions: An example using the Stroop Task. Psychological Bulletin, 109, 340–347.
- Hess, T. M. (2005). Memory and aging in context. Psychological Bulletin, 131, 383–406.
- Hintzman, D. L., Carre, F. A., Eskridge, V. L., Owens, A. M., Shaff, S. S., & Sparks, M. E. (1972). “Stroop” effect: Input or output phenomenon? Journal of Experimental Psychology, 95, 458–459.
- Inui, N. (1997). Simple reaction times and timing of serial reactions of middle-aged and old men. Perceptual and Motor Skills, 84, 219–225.
- ISO FDIS 9241-210:2009. (2009). Ergonomics of human system interaction—Part 210: Human-centred design for interactive systems (formerly known as 13407). International Organization for Standardization (ISO). Switzerland.
- Johnson, J., & Henderson, A. (2012). Usability of interactive systems: It will get worse before it gets better. Journal of Usability Studies, 7(3), 88–93
- Klemmer, E. T. (1956). Time uncertainty in simple reaction time. Journal of Experimental Psychology, 51, 179–184.
- Kohfeld, D. L. (1969). Effects of the intensity of auditory and visual ready signals on simple reaction time. Journal of Experimental Psychology, 82, 88–95.
- Kryder, M. H., & Kim, C. S. (2009). After hard drives—What comes next? IEEE Transactions on Magnetics, 45(10), 3406–3413.
- Landauer, T. K. (1995). The trouble with computers: Usefulness, usability and productivity. Cambridge, MA: MIT Press.
- Lee, T. B. (2010). Open systems user interfaces suck. Retrieved November 18, 2010, from http://timothyblee.com/2010/11/15/open-user-interfaces-suck/
- Lee, Y., Lu, M., & Ko, H. (2007). Effects of skill training on working memory capacity. Learning and Instruction, 17, 336–344.
- Lenhart, A., Rainie, L., & Lewis, O. (2001). Teenage life online: The rise of the instant-message generation and the Internet’s impact on friendships and family relationships. Washington, DC: Pew Internet & American Life Project.
- Lewis, C., Polson, P., Wharton, C., & Rieman, J. (1990). Testing a walkthrough methodology for theory-based design of walk-up-and-use interfaces. Proceedings of the SIGCHI conference on human factors in computing systems: Empowering people (pp. 235–242). New York: Association for Computing Machinery.
- Liefooghe, B., Barrouillet, P., Vandierendonck, A., & Camos, V. (2008). Working memory costs of task switching. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34, 478–494.
- Martin, M. (1978). Memory span as a measure of individual differences in memory capacity. Memory & Cognition, 6, 194–198.
- Meister, D. (1999). The history of human factors and ergonomics. Mahwah, NJ: Lawrence Erlbaum Associates.
- Meiran N., Chorev Z., & Sapir A. (2000). Component processes in task switching. Cognitive Psychology, 41, 211–253.
- Menard, R. (2009, September 2). It ain’t the price—It’s the cost, stupid! Purchasing and Negotiation Training. Retrieved July 1, 2011 from http://purchasingnegotiationtraining.com/purchasing/it-aint%E2%80%99-the-price-%E2%80%93-it%E2%80%99s-the-cost-stupid/
- Miller, G. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81–97.
- Miyake, A., Friedman, N.P., Emerson, M.J., Witzki, A.H., & Howerter, A. (2000). The unity and diversity of executive functions and their contributions to complex “frontal lobe” tasks: A latent variable analysis. Cognitive Psychology, 41, 49–100.
- Muckler, F. (1984). The future of human factors. Human Factors Society Bulletin, 27(2), 1.
- Multiplexing. (n.d.). In Wikipedia. Retrieved September 27, 2009, from http://en.wikipedia.org/wiki/Multiplexing
- Newell, A., & Card, S. (1985). The prospects for psychological science in human-computer interaction. Human-Computer Interaction, 1, 209–242.
- Norman, D. A. (1990). The design of everyday things. New York: Doubleday.
- Panepinto, M. (2010). Voluntary versus forced task switching. In Proceedings of the Human Factors and Ergonomics Society 54th Annual Meeting (pp. 453–457). Santa Monica, CA: HFES.
- Paolacci, G. (2010). Running experiments on Amazon Mechanical Turk. Judgment and Decision Making, 5(5), 411–419.
- Parallel processing. (n.d.). In Wikipedia. Retrieved September 27, 2009, from http://en.wikipedia.org/wiki/Parallel_processing
- Park, J. H. (2014). Flow in multitasking. Unpublished dissertation proposal, the University of Texas at Austin, Austin, TX.
- Park, J. H., & Bias, R. G. (2012). Understanding human multitasking behaviors through a lens of Goal-Systems Theory. Paper presented at the annual meeting of the Association for Library and Information Science Education, Dallas, January.
- Patterson, D. (2010). The trouble with multicore. IEEE Spectrum. July. Retrieved July 1, 2011, from http://spectrum.ieee.org/computing/software/the-trouble-with-multicore/1
- Raab, D. H. (1962). Effect of stimulus-duration on auditory reaction time. The American Journal of Psychology, 75, 298–301.
- van Rijsbergen, C. J. (2006). Information retrieval. Retrieved October 19, 2006, from http://www.dcs.gla.ac.uk/Keith/Preface.html
- Salo, R., Henik, A., & Robertson, L. C. (2001). Interpreting Stroop interference: An analysis of differences between task versions. Neuropsychology, 15, 462–471.
- Salthouse, T. A. (1996). The processing-speed theory of adult age differences in cognition. Psychological Review, 103, 403–428.
- Salvucci, D. D., Taatgen, N. A., & Borst, J. P. (2009). Toward a unified theory of the multitasking continuum: From concurrent performance to task switching, interruption, and resumption. In Proceedings of CHI2009 (pp. 1819–1828). New York: ACM.
- Schilling, W. (1921). The effect of caffein and acetanilid on simple reaction time. Psychological Review, 28, 72–79.
- Sharek, D. (2011). Developing a usable online NASA-TLX tool. In Proceedings of the Human Factors and Ergonomics Society 55th Annual Meeting (pp. 1375–1379).
- Silfverberg, M., MacKenzie, I. S., & Korhonen, P. (2000). Predicting text entry speed on mobile phones. In Proceedings of CHI2000 (pp. 9–16). New York: ACM.
- Simon, J. R., & Sudalaimuthu, P. (1979). Effects of S-R mapping and response modality on performance in a Stroop task. Journal of Experimental Psychology: Human Perception and Performance, 5(1), 176–187.
- Steinman, A., & Venia, S. (1944). Simple reaction time to change as a substitute for the disjunctive reaction. Journal of Experimental Psychology, 34, 152–158.
- Stroop, J. R. (1935). Studies of interference in serial verbal reactions. Journal of Experimental Psychology, 18, 643–662.
- Suchoon, S. M., & George, E. J. (1976). Foreperiod effect on time estimation and simple reaction time. Acta Psychologica, 41, 47–59.
- Taub, H. A. (1972). A comparison of young adult and old groups on various digit span tasks. Developmental Psychology, 6, 60–65.
- Tehrani, R. (2000). As we may communicate. Retrieved November 18, 2013, from http://www.tmcnet.com/articles/comsol/0100/0100pubout.htm
- Tufte, E. (2001). The visual display of quantitative information (2nd ed.). Cheshire, CT: Graphics Press.
- Vredenburg, K., Isensee, S., & Righi, C. (2002). User-centered design: An integrated approach. Upper Saddle River, NJ: Prentice Hall.
- Vroon, P. A., & Vroon, A. G. (1973). Tapping rate and expectancy in simple reaction time tasks. Journal of Experimental Psychology, 98, 85–90.
- Wallis, C. (2006, March 27). The multitasking generation. Time. Retrieved February 22, 2012 from http://www.balcells.com/blog/Images/Articles/Entry558_2465_multitasking.pdf
- Wickens, C. D. (2008). Multiple resources and mental workload. Human Factors, 50(3), 449–455.
- Wright, B. C., & Wanley, A. (2003). Adults’ versus children’s performance on the Stroop task: Interference and facilitation. British Journal of Psychology, 94, 475–485.
- Zhang, H., & Kornblum, S. (1998). The effects of stimulus-response mapping and irrelevant stimulus-response and stimulus-stimulus overlap in four-choice Stroop tasks with single-carrier stimuli. Journal of Experimental Psychology: Human Perception and Performance, 24, 3–19.
1 As J. Dumas (personal communication, December 5, 2013) observes, this figure and this observation are reminiscent of Brynjolfsson’s (1993) “productivity paradox,” whereby advances in technology, which would be expected to be accompanied by productivity advances, sometimes disappoint.