What We Mean When We Talk About Robots
The term “robot” comes to us from the Czech artist Josef Capek and his playwright brother, Karel, who wrote R.U.R. (Rossum’s Universal Robots; 1921). They coined the term from the Czech word robota, which denotes forced labor or drudgery. The play, R.U.R., depicts a world in which biological androids, the robots, perform the menial tasks formerly done by humans. The term soon came to describe any machine that did work, especially physical labor, that otherwise might be done by humans. In recent years, robot has been generalized to include machines that perform cognitive as well as physical tasks originally performed by humans. In addition, the term has been shortened to “bot” to indicate any autonomous system designed to behave like a human, or so that humans interacting with the bot might mistake it for one.
Although human-bot interactions are interesting in their own right, this paper focuses on robots as we normally think of them: machines that sense and act on the physical environment, often by interpreting input from humans and providing feedback to them. Within this framework, these machines (a) have various levels of autonomy, from teleoperation by a human to full autonomy; (b) may operate in various environmental contexts, for example, in air, on land, or on and under water; (c) may be fixed in place in that physical environment or may move around it; (d) may have an appearance that ranges from very machine-like to humanoid; and (e) have uses in health care, military operations, disaster relief, education, entertainment, transportation, and construction.
How do humans interact with robots? The answer is as varied as the types of robots, the environments in which they work, and the tasks that they perform. For example, the da Vinci surgical robotic system allows surgeons to perform minimally invasive operations with improved fine motor control. The surgeon views the surgical field at a console and performs motions using the console’s instruments. Those motions are then transmitted to the robotic arms and instruments on the “patient-side cart,” and the surgeon receives visual feedback for every movement as the robotic instruments operate on the patient. Because of the enhanced fine motor control, surgeons can operate through smaller incisions than would be possible with direct human contact. Thus, this robot is teleoperated, with no ability to make autonomous decisions about the surgery: the surgeon provides every surgical movement, and the device’s algorithms translate those movements into motions of the instruments.
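To make that operator-to-instrument mapping concrete, here is a minimal sketch, in Python, of the kind of transformation such a system performs: suppressing small jitter and scaling the operator’s motion down. The function name, threshold, and scaling factor are illustrative assumptions, not the da Vinci’s actual parameters or code.

```python
# A minimal, illustrative sketch of an operator-to-instrument motion mapping
# (hypothetical parameters; not the da Vinci's actual algorithm or values):
# jitter below a tremor threshold is suppressed, and the motion is scaled down.

MOTION_SCALE = 0.2          # assumption: 1.0 mm of hand motion -> 0.2 mm at the instrument
TREMOR_THRESHOLD_MM = 0.5   # assumption: ignore sub-half-millimeter jitter

def instrument_command(hand_delta_mm):
    """Map an operator hand displacement (dx, dy, dz in mm) to an instrument displacement."""
    filtered = [d if abs(d) >= TREMOR_THRESHOLD_MM else 0.0 for d in hand_delta_mm]
    return tuple(MOTION_SCALE * d for d in filtered)

# Example: a 10 mm hand movement, with 0.3 mm of jitter on the second axis.
print(instrument_command((10.0, 0.3, -5.0)))   # -> (2.0, 0.0, -1.0)
```

Scaling the motion down in this way is what lets large, comfortable hand movements at the console produce much smaller, more precise movements at the instrument tip.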
Another example of a teleoperated robot is the PackBot. This small robot moves on tracks that allow it to traverse difficult terrain and operate in harsh weather, and it carries a manipulator arm and cameras (currently four color cameras and possibly a fifth infrared camera) in various locations, including the front of the platform and the end of the manipulator arm. PackBots and similar robots have been popular in the media; for example, the Academy Award-winning movie The Hurt Locker showed a PackBot disarming a bomb in its opening sequence. PackBots can also be used in search and rescue, for example, to look for survivors when a building is too unsafe for humans to search directly. PackBots were used at the World Trade Center after the attacks on September 11, 2001 (Casper & Murphy, 2003).
Although most human-robot interaction (HRI) as of the summer of 2020 involves teleoperation, let’s face it: when we think of robots, most of us think of talking with Data from Star Trek: The Next Generation, or at least of driving a partially autonomous car. Robots totally controlled by a human operator seem to many people to be just oversized remote-controlled cars, not really counting as robots at all. That is to say, people tend to think of robots as autonomous actors.
Most people alive today will likely never interact with Data, but many of us will soon be behind the wheel of a vehicle with some degree of autonomy, perhaps even full autonomy. SAE International (formerly known as the Society of Automotive Engineers) has proposed six levels of automation for self-driving cars (2018). Level 0 is the present situation for most drivers, for example, me driving my 2010 Prius. Levels 1 and 2 involve assistance with steering and/or braking (Level 1 is the “or” condition and Level 2 is the “and”). In Level 3, Conditional Automation, the car does most of the driving but requires the driver to be attentive at some, though not all, times. Level 4, High Automation, describes a car that can do all of the driving but not in all conditions; Waymo, formerly the Google Self-Driving Car project, is an example of a Level 4 system. Finally, Level 5 is Full Automation, in which the car does all of the driving in all conditions and, in some cases, no driver need be present; Nuro, for example, is testing such vehicles for grocery delivery.
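For readers who want the taxonomy at a glance, here is a small Python sketch that encodes the six levels as a lookup table. The one-line summaries paraphrase the descriptions above rather than quoting the formal definitions in the J3016 standard.

```python
# SAE driving-automation levels, paraphrased from the descriptions above.
# (Illustrative summaries only; see SAE J3016 for the formal definitions.)
SAE_LEVELS = {
    0: "No automation: the human does all of the driving.",
    1: "Driver assistance: help with steering or braking, but not both.",
    2: "Partial automation: help with steering and braking.",
    3: "Conditional automation: the car drives, but the driver must be ready to take over.",
    4: "High automation: the car can do all of the driving, but not in all conditions.",
    5: "Full automation: the car does all of the driving in all conditions.",
}

def describe_level(level: int) -> str:
    return f"Level {level}: {SAE_LEVELS[level]}"

print(describe_level(4))   # e.g., roughly where a Waymo vehicle is placed
```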
Besides self-driving cars, semi-automated and fully-automated robots are under development for factories, warehouses, military settings, healthcare environments, and even home care and companionship. In some cases, robots may fully replace humans for tasks in these environments, but in other cases, robots and humans will be co-workers.
HRI and Usability
What are the usability issues with teleoperated robots? For robots that are mobile, operators need a high-quality visual display of the spatial environment through which they are controlling the robot. However, even with high-resolution color displays, the three-dimensional view can be compromised (Gillan, 2014), so an important usability issue is how the interface can provide depth information to the operator (Tittle et al., 2002). A second usability issue for teleoperation also relates to the visual feedback to the operator. In many tasks, the operator needs to identify specific objects, such as parts of an explosive device during bomb disposal, or specific people, for example, to find someone in a search and rescue situation or to distinguish friend from foe in military operations. Yet picking out an object or a person from rubble while the robot and its attached cameras are moving past can be difficult (Gillan, 2014). How can UX design assist the operator? In addition, as the distances between operators and the robots they teleoperate increase, the time lags between the operator’s input and its receipt by the robot, and between the robot’s response and the feedback to the operator, will grow because of physical constraints on signal propagation. This disruption of the natural flow of visual information and motor performance degrades usability metrics such as error rates and time to complete tasks (e.g., Scholcover & Gillan, 2017).
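Those physical constraints put a hard floor under the lag: even if the only delay were signal propagation at the speed of light, round-trip latency would grow with distance. The short Python sketch below computes that lower bound for a few illustrative distances; real systems add processing, routing, and relay delays on top of it.

```python
# Lower bound on control-loop latency imposed by signal propagation alone.
# Real teleoperation systems add processing, routing, and relay delays.

SPEED_OF_LIGHT_KM_S = 299_792.458   # kilometers per second

def min_round_trip_latency_s(distance_km: float) -> float:
    """Operator -> robot -> operator: two one-way trips at the speed of light."""
    return 2.0 * distance_km / SPEED_OF_LIGHT_KM_S

# Illustrative distances: local site, cross-country link, geostationary
# satellite altitude, and the Earth-Moon distance.
for distance_km in (10, 1_000, 36_000, 384_400):
    latency_ms = min_round_trip_latency_s(distance_km) * 1000
    print(f"{distance_km:>7} km: {latency_ms:7.1f} ms minimum round trip")
```

At continental distances the propagation floor is only a few milliseconds, but a control loop relayed through a geostationary satellite, or one operating at lunar distances, faces lags of hundreds of milliseconds to seconds before any processing delay is added.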
Teleoperated, semi-autonomous, and autonomous robots that are mobile may share a workspace or a pathway (perhaps even a sidewalk) with human co-workers and bystanders. These co-located people are not the operators, but they should still be considered in any usability analysis. Our typical usability analyses focus on the operator and on how the interaction with a system can be cognitively and emotionally productive. When the system can get up and move around, we also need to treat safety as part of usability. How do we keep the robot from running into other people in its environment?
We appear to have abilities, either inherited or learned early in life, to read the subtle social cues that indicate where another person is going. When machines briefly share space with humans, as cars do with pedestrians, the machines should provide signals that humans can interpret and that accurately preview intended movements, for example, turn signals and brake lights. Human drivers often forget to use their turn signals when switching lanes or making a turn. Think of the frustration that you, as a pedestrian or as another driver, feel when a driver does not provide those cues and suddenly enters your lane or turns in front of you. Now imagine how people will feel about a robotic partner that provides no cues about its actions. Any negative emotions produced by such interactions between collaborators will reduce trust and acceptance. As we consider the intermixing of people and autonomous robots in the same space, we need a new outlook for usability, one in which the user is no longer an operator but a co-worker or collaborator.
For self-driving cars, a solution to the problem of cars hitting pedestrians, bicyclists, or other cars is to have a human monitor in the car, ready to take over if a problem occurs. However, people are easily bored and lose attentional focus when asked to do repetitive tasks. As a consequence, they may fail to notice an upcoming problem such as a bicyclist that the car’s system does not recognize. Or humans in semi-automated or fully-automated cars might engage in multitasking, as they currently do when driving is their primary task (Caird et al., 2008). They might monitor some of the time, but switch to reading, surfing the web, or watching a movie for most of the rest of the time, thereby decreasing the value of having a human monitor.
In the workplace, having a human monitor for each autonomous system would not be cost effective. Having a single human operator monitor multiple robots has been proposed as a way to overcome that problem. However, such multitasking has been shown to disrupt an operator’s situation awareness (Cummings & Mitchell, 2008). The loss of situation awareness makes it difficult to catch, diagnose, and prevent errors before they occur, or to correct an error state after the fact (see Lewis, 2013, for a review of this issue). Solving the problems of a single operator monitoring several robots at once will require the expertise of usability engineers.
Preventing a robot and a human from occupying the same space at the same time may be the fundamental usability problem in human-robot collaboration, but enhancing productivity will be nearly as important a usability issue. During the 1980s and early 1990s, microcomputers became widely available and were quickly introduced into offices and factories. Because labor-intensive parts of tasks (e.g., accounting and document production) were expected to shift from humans to computers, economists predicted a rapid increase in productivity. However, during the decade after the computerization of the workplace, the expected productivity gains failed to appear, a phenomenon known as the productivity paradox (Brynjolfsson, 1993). One factor proposed as a cause of the productivity paradox was the poor usability of the early computers, their operating systems, and other software (Landauer, 1995).
If we do not carefully consider all aspects of the coordination between human users and semi-autonomous and autonomous robots, introducing them into workplaces could lead to another years-long productivity decline. And given the diversity of the types of robots and the tasks in which they are likely to be involved, fixing the usability problems after the fact may take much longer than it did to improve usability with computers.
Usability Methods and Measures for HRI
Certain usability methods, such as task analysis and user personas, likely will apply to human-robot interaction with minimal modification. Other very common methods, such as the System Usability Scale (SUS), might be easily adapted for a teleoperator but would require extensive revamping for co-workers on a human-robot team. For a co-worker of an autonomous robot, a usability survey would need to address the ease of task coordination and task handoffs, for example, the robot’s ability to explain its decisions and whether the robot knows to ask for assistance when needed, along with other issues that will come to light as we gain more experience with human-robot teams. We may need to create new instruments that measure usability as it relates to team and social processes.
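As a concrete point of departure, the standard SUS scoring procedure is easy to express in code: ten items rated 1 to 5, with positively worded (odd-numbered) items contributing the rating minus 1, negatively worded (even-numbered) items contributing 5 minus the rating, and the total multiplied by 2.5. The Python sketch below implements that scoring; an HRI-oriented instrument might keep the scoring scheme but replace the items with ones about coordination, handoffs, and explanation, as suggested above.

```python
# Standard SUS scoring (ten items, each rated 1-5, yielding a 0-100 score).
# Odd-numbered items are positively worded: contribution = rating - 1.
# Even-numbered items are negatively worded: contribution = 5 - rating.

def sus_score(ratings):
    if len(ratings) != 10 or not all(1 <= r <= 5 for r in ratings):
        raise ValueError("SUS expects ten ratings, each between 1 and 5.")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)   # index 0, 2, ... are items 1, 3, ... (odd-numbered)
        for i, r in enumerate(ratings)
    ]
    return 2.5 * sum(contributions)

# Example: 4 on every positively worded item, 2 on every negatively worded item.
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))   # -> 75.0
```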
Just as with computing systems, the mental model that a teleoperator or a robot’s teammate has of the robot, its capabilities, and its interface will be important. With teams of humans and autonomous robots, however, the robot will also need a mental model of its human teammates. The robot will need to predict where humans are likely to move and what they are likely to do during tasks, to understand what specific humans mean when they ask for information, and to know how to explain its guidance and decisions to them. These behavioral predictions, understandings, and explanations cannot be generic, because they may vary as a function of each human’s experience with a task, culture, gender, age, and certain personality characteristics. The answer that a human teammate would give to someone who is new to a task or team is not the answer that same teammate would give to a long-time team member or a task expert. Usability researchers, with their tools and experience in measuring mental models, will be essential to robot designers as they work out how to give autonomous robots flexible mental models of the humans around them.
Another critical factor for HRI in the workplace will be user acceptance (Smids et al., 2019). If the kinds of users performing the kinds of tasks described above are not prepared to accept robots, the user experience is guaranteed to be poor. One critical component of acceptance for robot co-workers will be trust. The primary factors that influence how much people trust a robot relate to the robot’s performance, such as its reliability, predictability, failure rate, and level of automation (Hancock et al., 2011). Likewise, in my lab, we have found that the level of automation influences the degree to which people blame a robot for an error committed by a human-robot team (Furlough et al., 2019). So, usability measures might be expanded to include trust, acceptance, and attribution of blame.
Conclusion
As teleoperated, semi-autonomous, and autonomous robots enter the workplace in increasingly large numbers, the ensuing decreases or increases in the productivity of the American and world economies will likely depend on how well humans operate, monitor, and collaborate with those robots. Usability researchers, analysts, designers, and evaluators have the requisite knowledge to improve the varied interactions of humans and robots. However, the many instruments currently applied by usability experts cannot simply be exported to situations in which humans and robots interact; they will need to be adapted, and new methods will need to be developed. The sooner the current usability tools are modified and new ones created, the smoother the adjustment to the coming changes in the workplace will be.
Acknowledgements
The author thanks Roger Chadwick, Patty McDermott, Jennifer Riley, Rosemarie Yagoda, Lixiao Huang, Caleb Furlough, Thomas Stokes, Federico Scholcover, and many other colleagues who have helped shape my thinking about human-robot interaction.
References
Brynjolfsson, E. (1993). The productivity paradox of information technology. Communications of the ACM, 36(12), 66–77.
Caird, J. K., Willness, C. R., Steel, P., & Scialfa, C. (2008). A meta-analysis of the effects of cell phones on driver performance. Accident Analysis and Prevention, 40(4), 1282–1293.
Capek, K. (1921). R.U.R. (Rossum’s universal robots). Aventinum.
Casper, J., & Murphy, R. R. (2003). Human-robot interaction during the robot-assisted urban search and rescue response at the World Trade Center. IEEE Transactions on Systems, Man, and Cybernetics: Part B, 33(3), 367–385.
Cummings, M. L., & Mitchell, P. J. (2008). Predicting controller capacity in remote supervision of multiple unmanned vehicles. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, 38(2), 451–460.
Furlough, C., Stokes, T., & Gillan, D. J. (2019). Attributing blame to robots: I. The influence of robot autonomy. Human Factors. https://doi.org/10.1177/0018720819880641
Gillan, D. J. (2014). Eye, robot: Visual perception and human-robot interaction. In R. Hoffman, P. Hancock, M. Scerbo, & R. Parasuraman (Eds.), The Cambridge handbook of applied perceptual research (pp. 830–847). Cambridge University Press.
Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y. C., de Visser, E. J., & Parasuraman, R. (2011). A meta-analysis of factors affecting trust in human-robot interaction. Human Factors, 53(5), 517–527.
Landauer, T. K. (1995). The trouble with computers: Usefulness, usability, and productivity. MIT Press.
Lewis, M. (2013). Human interactions with multiple robots. Reviews of Human Factors and Ergonomics, 9(1), 131–174. https://doi.org/10.1177/1557234X13506688
SAE International. (2018). SAE Standards News: J3016 automated-driving graphic update. https://www.sae.org/news/2019/01/sae-updates-j3016-automated-driving-graphic
Scholcover, F., & Gillan, D. J. (2017). Using temporal sensitivity to predict performance under latency in teleoperation. Human Factors, 60(1), 80–91.
Smids, J., Nyholm, S., & Berkers, N. (2019). Robots in the workplace: A threat to—or an opportunity for—meaningful work? Philosophy & Technology. https://doi.org/10.1007/s13347-019-00377-4
Tittle, J. S., Roesler, A., & Woods, D. D. (2002). The remote perception problem. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (pp. 260–264). Sage Publications.