
OVERVIEW

My research addresses a number of questions related to human performance, spanning both applied and basic perspectives. My interests cover topics in applied cognitive psychology, human factors, and cognitive engineering. One central research interest lies in skill acquisition and training. A core current component of this research has been studies of the impact of automation, including how the presence of automated systems changes performance and the underlying learning. I have also extended my work into aspects of skilled performance - issues like the use of automation and supervisory control, situation awareness, workload, and enhancing decision making.

AUTOMATION

One recent line of our research has looked at automation support within a visual search task, including exploring the impact of automation failures. In a complex search task, we showed that automation assistance in the form of visual cues can enable fast and accurate responses (Warden et al., 2023); however, there was a downside to the types of cues that provide the greatest benefits. When the automation makes a mistake, visual cues can be too compelling, leading people to erroneously follow the incorrect advice - a form of automation bias. A subsequent study moved from simulated errors in this type of automation to examining actual errors in an AI-driven support system (Raikwar et al., 2023). The findings were broadly similar, with perhaps an even stronger tendency towards automation bias.

When automation fails in high-performance situations, the human operator's task rapidly turns from undemanding, routine operations to extreme demands that may need to be sustained over time. Better understanding the workload transitions associated with these types of catastrophic automation failures was the aim of a NASA-funded project, conducted in collaboration with Chris Wickens. The ultimate goal was to develop empirically validated tools that can be used to predict astronaut performance on long-duration missions (Clegg, Vieane, Wickens, Gutzwiller, & Sebok, 2014; Wickens, Clegg, Vieane, & Sebok, 2015; Wickens, Gutzwiller, Vieane, Clegg, Sebok, & Jane, 2016). Our work in this area won the Jerome Ely Award from the Human Factors & Ergonomics Society for the best paper in the journal Human Factors. My then graduate student, Robert Gutzwiller, also used this project to run a number of interesting studies on voluntary task switching in applied domains, examining a model that captures both the choice to switch and the subsequent decision of what activity to switch to (Gutzwiller, Wickens, & Clegg, 2014).

My interests in skill acquisition have led me to examine learning in systems featuring automation. Training with highly automated systems is becoming increasingly common, but comparatively little is known about the impact of automation on the nature of what is being learned. Two publications (Gutzwiller, Clegg, & Blitch, 2013; Heggestad, Clegg, Goh, & Gutzwiller, 2012) offer overviews of theoretical and empirical contributions to this area coming out of my lab.

UNDERSTANDING SPATIAL UNCERTAINTY

We have been conducting work on people's understanding of predictions under spatial uncertainty, and trying to identify methods to better support performance (see, for example, Witt & Clegg, 2022). Whether determining where and when a hurricane might make landfall (Witt et al., 2023), where a downed plane might have crashed, where to rendezvous to resupply a ship, or the future position of a submarine, human decision makers must make predictions about the uncertain trajectory of an object. Out of this research, Jessi Witt and I have developed an approach to visualizing spatial uncertainty we call "animated risk trajectories", which we think has the potential to enhance decision making through improved understanding.
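The cited papers define the display technique itself; purely as a loose illustration of the underlying idea (all function names and parameters here are my own, not drawn from the papers), the possible futures of an uncertain trajectory can be Monte Carlo sampled, and an animated display can then show one sampled path at a time rather than a static cone or ellipse:

```python
import math
import random

def sample_trajectories(x0, y0, heading, speed, heading_sd, speed_sd,
                        n_steps=10, n_samples=1000, seed=42):
    """Monte Carlo sample of possible future paths for an object whose
    heading and speed are uncertain at each time step. Returns a list
    of trajectories, each a list of (x, y) points. (Illustrative only;
    the noise model here is an assumption, not the published method.)"""
    rng = random.Random(seed)
    trajectories = []
    for _ in range(n_samples):
        x, y, h = x0, y0, heading
        path = [(x, y)]
        for _ in range(n_steps):
            h += rng.gauss(0.0, heading_sd)                 # heading drifts randomly
            s = max(0.0, speed + rng.gauss(0.0, speed_sd))  # noisy, non-negative speed
            x += s * math.cos(h)
            y += s * math.sin(h)
            path.append((x, y))
        trajectories.append(path)
    return trajectories

# The spread of endpoints is the spatial variability a viewer must grasp;
# animating individual sampled paths in sequence is one way to convey it.
trajs = sample_trajectories(0.0, 0.0, heading=0.0, speed=1.0,
                            heading_sd=0.1, speed_sd=0.2)
endpoints = [t[-1] for t in trajs]
```

In a real display each sampled path would be drawn briefly and replaced by the next, so the viewer experiences the distribution over time instead of reading it from a summary glyph.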

In a U.S. Navy funded research project, along with Cap Smith, Chris Wickens, and my graduate students, we have examined performance in spatial predictions under uncertainty, focusing especially on people's struggles to grasp the possible variability. We initially explored people's attempts to understand spatial variability (Herdener, Wickens, Clegg, & Smith, 2016). Subsequent efforts looked at methods that might improve or degrade performance, including visualizations (Pugh, Wickens, Herdener, Clegg, & Smith, 2018), increased attention to the dimension (Herdener, Wickens, Clegg, & Smith, 2018), prior information (Herdener, Clegg, Wickens, & Smith, 2019), and decision support automation (Fitzgerald, Wickens, Smith, Clegg, Vijayaragavan, & Williams, 2019). We have examined some of the factors that might influence and improve performance, like history trails (Patton et al., 2023), and some that might be expected to help but do not - like automation transparency (Patton et al., 2023; Pharmer et al., 2022). Recently we have had an overarching theory paper accepted for publication (Wickens, Clegg, Witt, Smith, Herdener, & Spahr, 2020). Within this work we have also become interested in a bias that pushes people towards favoring information over its expected value returns (Wickens, Smith, Clegg, & Herdener, 2019).
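To make that last bias concrete, here is a hypothetical worked example (the numbers and function names are mine, not taken from the cited paper). Acting on the prior alone already yields some expected value; the value of perfect information is only the gap between certain success and that baseline, so paying more than the gap for information is a net loss:

```python
def best_expected_value(probs, payoff_correct=1.0):
    """Expected value of acting now: pick the most likely state."""
    return max(probs) * payoff_correct

def value_of_perfect_information(probs, payoff_correct=1.0):
    """With perfect information you always act correctly, so the
    information is worth the gap between certain success and
    acting on the prior alone."""
    return payoff_correct - best_expected_value(probs, payoff_correct)

# Hypothetical scenario: two candidate search zones for a lost object,
# with the prior favoring zone A.
priors = [0.6, 0.4]
ev_now = best_expected_value(priors)        # 0.6 by searching zone A immediately
voi = value_of_perfect_information(priors)  # at most 0.4 is worth paying for info
```

Any information cost above 0.4 here lowers overall expected value, yet the bias described above suggests people may still prefer to gather the information.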

COGNITIVE BIASES IN DECISION MAKING

In addition to my work exploring methods to enhance decision making with automation and through new visualizations, I have also been interested in the impact of cognitive biases. Cognitive biases are systematic errors that result from reliance on heuristics (mental shortcuts) in decision making. Such biases are typically regarded as automatic and unconscious influences on behavior, and can occur in a wide range of situations and contexts. Cognitive biases have previously been found to be resistant to mitigation training. I have been involved in a unique attempt to build training to address cognitive biases as part of a large, interdisciplinary team funded by IARPA under the Sirius program (Clegg et al., 2015; Clegg et al., 2014). Our goal was to create a serious computer game to train future potential intelligence analysts to recognize and mitigate cognitive biases. The ability to work on a project that allowed me to apply principles of learning within such an interesting applied domain, and towards such core aspects of cognition, has been tremendously exciting.

Over a 5-year period our research team developed a pair of serious computer games that effectively trained individuals to overcome their tendencies towards six different cognitive biases. These games provided training that was delivered rapidly, led to a substantial change in behavior, proved highly robust, and was durable enough to be detected in tests months later. While this training is now being adopted by the intelligence analyst community for whom it was originally developed, the approach has clear potential to reduce cognitive biases in other settings. The project has also linked with an external corporate partner. We continue to examine the potential effectiveness of the games in other domains, like medical diagnosis.

A subsequent IARPA-funded effort with some of the same research team focused on developing methods to improve reasoning within the CREATE program. Our web-based platform offered a new support structure for reasoning and analysis that moved beyond existing prescribed, structured analytic techniques towards a more flexible, report-forward format (Stromer-Galley et al., 2018). Favoring flexibility produced significant gains in reasoning over the more common intelligence community approach of set orders of operations. The results of multiple studies are currently under review or being written up for publication. The project has gained some interest from potential commercial partners, and we are continuing to explore the most effective path towards transition.