A team of Army scientists and engineers at Aberdeen Proving Ground, Md., has challenged long-held views on human-autonomy interaction, changing how people are incorporated into advanced technical systems that involve artificial intelligence and autonomy.
As part of a research program initially funded in 2013 by the Office of the Secretary of Defense, U.S. Army Research Laboratory researchers led a multi-disciplinary team of Department of Defense, industry and academic researchers to develop a novel, principled, general-purpose framework.
The research team proposes what they’ve named the Privileged Sensing Framework, conceived to leverage recent advances in human sensing technologies to dynamically integrate human and autonomous agents on the basis of their individual characteristics. For example, humans tend to adapt easily to changes in the environment or task, while autonomous agents can typically process large amounts of data more quickly than humans, said Dr. Amar Marathe, a researcher in the Army Research Laboratory’s Real-World Soldier Quantification Branch.
The focus of this research was to demonstrate how the Privileged Sensing Framework, also known as PSF, preserves the human as a primary, critical and central authority while enabling technical systems, such as robots, to detect and mitigate situations in which people’s decisions or actions would lead to dysfunction or even catastrophe, Marathe said.
“The research was fundamentally enabled by a critical move towards a novel control systems framework that can account for dynamic interactions among information components that impact the value of that information and yet appropriately propagates into robust overall decisions,” he said. “The PSF provides an evolved approach to [human-autonomy interaction] that treats the human as a special class of sensor rather than as the ultimate and absolute command arbiter.”
The PSF was built around the concept of appropriately ‘privileging’ information during integration: granting special rights to specific agents according to their capabilities within the current task context and the performance goals.
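To make the privileging idea concrete, consider the minimal sketch below, written in Python purely for illustration. The reliability model, weights and function names are assumptions made for this example, not the team’s published implementation.

    # Illustrative sketch only: a simple reliability-weighted fusion in the
    # spirit of "privileging" information. The reliability model, weights and
    # function names are assumptions, not the PSF's actual algorithm.

    def human_reliability(fatigue, workload):
        # Hypothetical model: human input is privileged less as physiological
        # sensing indicates rising fatigue or workload (both scaled 0-1).
        return max(0.05, 1.0 - 0.5 * fatigue - 0.3 * workload)

    def fuse_estimates(human_est, auto_est, w_human, w_auto):
        # Combine human and autonomous estimates of some quantity (e.g., a
        # threat probability), weighting each source by estimated reliability.
        total = w_human + w_auto
        return (w_human * human_est + w_auto * auto_est) / total

    # A rested, focused operator's judgment dominates the fused estimate...
    print(fuse_estimates(0.9, 0.4, human_reliability(0.1, 0.2), 0.6))  # ~0.70
    # ...while a fatigued operator's input is down-weighted, not discarded.
    print(fuse_estimates(0.9, 0.4, human_reliability(0.9, 0.8), 0.6))  # ~0.57

In this toy example the human stays in the loop throughout; only the weight placed on the human’s input shifts as the sensed state changes, echoing the framework’s treatment of the human as a privileged sensor rather than an absolute arbiter.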
“Through a series of simulation experiments, the PSF significantly improved joint human-autonomy performance without sacrificing the gains to be made from incorporating human strengths,” Marathe explained.
“Additional studies have extended this approach into a wide range of applications that include joint human-autonomy driving, human-autonomy target detection, and command and control. Overall, these efforts provide further evidence that the incorporation of the principles of the PSF can provide improved performance of joint human-autonomy systems across a wide range of applications,” Marathe said.
He said future efforts will focus on developing novel methods for incorporating the PSF into experimental human-autonomy systems, to enable further testing of the impact of this approach on human-autonomy system performance, and on generalizing the framework to accommodate a variety of tasks and scenarios.
In about 20 years, Marathe estimates, a generalizable framework that incorporates dynamic estimates of human capabilities to facilitate human-autonomy interaction will provide rich opportunities to revolutionize the capabilities of multi-agent cooperative teams across a broad range of applications.
Human-automation integration challenges were addressed in human-computer coupled visual search, real-time mitigation of mistrust in automation, advanced commander decision aids, and in-the-loop test and evaluation of human-robot systems.
Marathe said the research was motivated by persistent, fundamental issues that have thus far precluded the transition of advanced automation and autonomous technologies from the laboratory into the operational environment.
“Generally, humans readily adapt to varying task and environmental complexities during decision making and therefore are often treated as a failsafe for cases where autonomous technology underperforms. However, humans are constantly changing due to factors such as fatigue or shifts in attention, which means that even skilled humans sometimes make errors. The inherent variability in human performance makes the problem of integrating humans in the loop with autonomous technologies extremely challenging,” he said.
Until recently, most frameworks for human-autonomy integration have preserved a central role for the human while neglecting the important role of human variability, Marathe noted. “As a result, human excellence has not been fully exploited and neither has human failure been fully offset, leaving joint human-autonomy systems fundamentally incapable of achieving their full potential.”
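By way of hypothetical contrast with the weighted fusion sketched earlier, a conventional arbiter-style design might look like the fragment below, in which the human’s choice always wins regardless of the operator’s current state; the function and its signature are illustrative assumptions.

    def arbiter_decision(human_choice, auto_choice):
        # Traditional human-as-final-arbiter: the human's decision always
        # overrides the autonomy, even when fatigue or distraction makes it
        # unreliable, so human failure is never offset.
        return human_choice if human_choice is not None else auto_choice

Such a design preserves human authority but cannot compensate for human variability, which is the gap the privileging approach is meant to close.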
Through broad collaborations with the U.S. Air Force Research Laboratory; the U.S. Army Tank Automotive Research, Development and Engineering Center; the U.S. Space and Naval Warfare Systems Center; DCS Corporation; Ball Aerospace; and Columbia University, the research team brought expertise from computational sciences, systems and software engineering, cognitive neuroscience and human factors engineering, and real-time virtual and mixed-reality simulation to demonstrate the applicability of its approach across four distinct problem spaces.