
Trust between Humans and Learning Machines: Developing the Gray Box

James C. Christensen, Joseph B. Lyons

Air Force Research Laboratory, 711th Human Performance Wing, Wright-Patterson AFB, OH

Dr. James Christensen is the portfolio manager for Airman Sensing and Assessment within the 711th Human Performance Wing of the Air Force Research Laboratory. Dr. Christensen received his B.S. in biopsychology from the University of Michigan in 2001, and his PhD in cognitive psychology from the Ohio State University in 2008. He has been employed by AFRL full time since then. His work for the Air Force has focused on the neuroscientific assessment of cognitive function, with a particular interest in real-time workload estimation and adaptive automation. Dr. Christensen holds an adjunct associate professorship at Wright State University, and is an instrument-rated private pilot and command pilot for Angel Flight, a charitable patient transport organization. Dr. Christensen can be contacted at james.christensen.7@us.af.mil.

Dr. Joseph B. Lyons is the Technical Advisor for the Human Trust and Interaction Branch within the 711th Human Performance Wing at Wright-Patterson AFB, OH. Dr. Lyons received his PhD in Industrial/Organizational Psychology from Wright State University in Dayton, OH, in 2005. His research interests include human-machine trust, interpersonal trust, leadership, and social influence. Dr. Lyons has worked for the Air Force Research Laboratory as a civilian researcher since 2005, and from 2011 to 2013 he served as a Program Officer at the Air Force Office of Scientific Research, where he created a basic research portfolio to study both interpersonal and human-machine trust. Dr. Lyons has published in a variety of peer-reviewed journals, is currently the Editor of The Military Psychologist, and is an Associate Editor for the journal Military Psychology. Dr. Lyons can be contacted at joseph.lyons.6@us.af.mil.

Mechanical Engineering 139(06), S9-S13 (Jun 01, 2017) Paper No: ME-17-JUN5; doi: 10.1115/1.2017-Jun-5

This article explores the notion of the "Gray Box" to symbolize the idea of providing sufficient information about a learning technology to establish trust. The term system is used throughout this article to represent an intelligent agent, robot, or other form of automation that possesses both decision initiative and authority to act. The article also discusses the proposed and tested Situation Awareness-based Agent Transparency (SAT) model, which posits that users need to understand the system's perception, comprehension, and projection of a situation. One of the key challenges is that a learning system may adopt behavior that is difficult to understand and challenging to condense into traditional if-then statements. Without a shared semantic space, the system will have little basis for communicating with the human. A key recommendation of this article is to provide learning systems with transparency as to the state of the human operator, including their momentary capabilities and the potential impact of changes in task allocation and teaming approach.

Current trends in learning systems have favored methods such as deep learning that have had high-profile successes, including IBM's Watson and DeepMind's AlphaGo. These systems are developed via extensive training rather than being explicitly designed, and as such many of the capabilities, behaviors, and limitations of learning systems are an emergent property of interaction and experience. Given appropriate training, this can result in systems that are robust and meet or exceed human capabilities [e.g., 1]. However, the training process can have unpredictable results or produce apparently inexplicable behavior, which has been described as the "black box" problem of such systems. Indeed, the widely reported "Move 37" that AlphaGo selected in its second game against Lee Sedol was regarded as highly unpredictable, a move that no human would have made, yet it proved critical to the system's eventual win. This has fed a popular notion that without complete knowledge and predictability of a learning system, one cannot fully understand, and thus partner with, such technology.

There are at least three reasons why learning systems can create challenges for human interaction. First, a learning system may adopt behavior that is difficult to understand and challenging to condense into traditional if-then statements. Without a shared semantic space, the system will have little basis for communicating with the human. As a result, what a human perceives as an error may be fully logical to the system. Second, an actual error on the part of the system may be difficult for the human to detect if the human does not understand the system's basis for decision making and its data/environmental state. Third, by definition, a learning system should exhibit some degree of dynamic behavior, which challenges the notion of predictability. This article adopts the perspective that learning systems, much like humans, may never be completely "knowable"; yet they may well be trusted if users are given information that reduces uncertainty and clarifies the system's rationale, and if lessons learned are shared through peer and informal networks. In this paper we explore the notion of the "Gray Box" to symbolize the idea of providing sufficient information about the learning technology to establish trust: much as with humans, we trust based on a synthesis of predictability, feasibility, and inferred intent, drawn from our knowledge of the system's goals and values and from our interaction with it. The term system is used throughout this brief paper to represent an intelligent agent, robot, or other form of automation that possesses both decision initiative and authority to act.

Paramount in the notion of the Gray Box is the idea of reducing uncertainty. Predictability is an essential antecedent to trust in complex systems [2], and the same will hold true of learning systems, perhaps even more so. Yet, by their very nature, learning systems are believed to be unpredictable, both because their future behavior is contingent on past experience and because many systems incorporate sources of randomness or random sampling in generating and selecting courses of action. While this is true, we contend that there are still ways to reduce the uncertainty associated with such systems. In particular, we will discuss one method in detail: transparency.

In general, transparency refers to a set of methods for establishing shared awareness and shared intent between a human and a machine [3,4]. This may include information about the current and future state of the system, as well as information related to the system's intent, in order to allow the human to develop a clear and efficient mental model of the system [5]. Chen and colleagues [5,6] have proposed and tested the Situation Awareness-based Agent Transparency (SAT) model, which posits that users need to understand the system's perception, comprehension, and projection of a situation. Guided by the SAT model, Mercado and colleagues [6] found that added transparency increased user performance and trust, notably without increasing workload. Lyons [3] offers a broader conceptualization of transparency that includes facets of intent, environment, task, analytics, teamwork, human state, and social intent as they relate to the human. For learning systems, transparency will likely need to include some fusion of information from the SAT model and from the various transparency facets discussed by Lyons [3].
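To make the three SAT levels concrete, the sketch below shows one possible way a system could package a transparency message for display to the operator. It is a minimal illustration in Python; the class and field names (e.g., SATMessage, projection_confidence) and the example values are illustrative assumptions, not part of the SAT model or any fielded interface.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical structure for a transparency message organized by the three
# SAT levels (perception, comprehension, projection). Field names are
# illustrative assumptions, not part of any fielded interface.

@dataclass
class SATMessage:
    # Level 1 - perception: what the system currently senses
    perceived_entities: List[str] = field(default_factory=list)
    sensor_limitations: List[str] = field(default_factory=list)
    # Level 2 - comprehension: how the system interprets the situation
    situation_assessment: str = ""
    decision_rationale: str = ""
    # Level 3 - projection: what the system expects to happen next
    projected_outcome: str = ""
    projection_confidence: float = 0.0  # 0.0-1.0, the system's own uncertainty

msg = SATMessage(
    perceived_entities=["vehicle at waypoint 3", "low-visibility weather cell"],
    sensor_limitations=["EO camera degraded by cloud cover"],
    situation_assessment="Route B is blocked; rerouting recommended",
    decision_rationale="Route A minimizes exposure given the current sensor picture",
    projected_outcome="Arrival delayed roughly 6 minutes if Route A is taken",
    projection_confidence=0.72,
)
```

Grouping the message this way lets an interface expose each SAT level separately, so an operator can drill from what the system sees down to why it expects a particular outcome.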

RESERVISTS GO ACTIVE DUTY An MQ-9 Reaper pilot and sensor operator fly a training mission from a ground control station at Holloman Air Force Base, New Mexico. (U.S. Air Force photo by Airman 1st Class Michael Shoemaker).


Humans interacting with learning systems will need to understand how the system senses the environment, how it makes decisions and acts, how it teams with the human, and how this teaming strategy changes over time based on changing situational constraints or goals (i.e., the notable autonomy paradox of transfer of authority). To the first point, the human needs to understand how the system interacts with its environment. This may include understanding how the system ingests and perceives data, what kind of sensors it has, and the limitations of those sensors; where possible, the system should also communicate its understanding of the environment to the human. This will help the human understand the mental model of the system in relation to the environment and, notably, how this mental model changes as the system adapts to novel situations. Second, the human should understand how the system makes decisions and how these decisions translate into actions. Research has shown that transparency methods in the form of decision rationale can increase trust for recommender systems in commercial aviation [7]. A replication of Lyons and colleagues [7] using a high-fidelity simulation found that added rationale increased user trust and reliance on the decision aid while reducing verification (i.e., second-guessing) of the automation's recommendation [8]. Humans need to understand the logic behind any recommendations by a complex system. With a learning system, the human will need to understand if and how the decision logic of the system changes and why it changes (i.e., what conditions drive the strategy change, what the thresholds for such changes are, and what the underlying assumptions of the system are). Perhaps most importantly for a learning system, the human needs to understand how the system will team with the human and how this teaming strategy changes based on human states and situational constraints.
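As a simple illustration of surfacing decision rationale, the sketch below pairs a recommendation with the factors that drove it. This is a hedged example rather than any specific fielded decision aid; the Recommendation structure, option names, and scoring weights are assumptions chosen only to show the pattern.

```python
from dataclasses import dataclass
from typing import Dict, List

# Illustrative sketch: a recommendation object that carries its own rationale,
# so the operator can see why the aid preferred one option over another.

@dataclass
class Recommendation:
    option: str
    score: float
    rationale: List[str]    # human-readable reasons behind the score
    assumptions: List[str]  # conditions under which the logic holds

def recommend(options: Dict[str, Dict[str, float]],
              weights: Dict[str, float]) -> Recommendation:
    """Pick the highest-weighted option and explain which factors drove it."""
    best_name, best_score, best_factors = "", float("-inf"), {}
    for name, factors in options.items():
        score = sum(weights[k] * v for k, v in factors.items())
        if score > best_score:
            best_name, best_score, best_factors = name, score, factors
    rationale = [f"{k} contributed {weights[k] * v:+.2f}" for k, v in best_factors.items()]
    return Recommendation(
        option=best_name,
        score=best_score,
        rationale=rationale,
        assumptions=["weights reflect current mission priorities"],
    )

rec = recommend(
    options={"Route A": {"fuel": 0.8, "risk": -0.2}, "Route B": {"fuel": 0.5, "risk": -0.6}},
    weights={"fuel": 1.0, "risk": 2.0},
)
print(rec.option, rec.rationale)
```

The point is not the scoring scheme itself but that the rationale travels with the recommendation, so the operator never sees a bare answer without the logic behind it.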

AIR FORCE CYBER COMMAND ONLINE FOR FUTURE OPERATIONS Capt. Jason Simmons and Staff Sgt. Clinton Tips update anti-virus software for Air Force units to assist in the prevention of cyberspace hackers July 12 at Barksdale Air Force Base, La. The Air Force is setting up the Air Force Cyberspace Command soon and these Airmen will be the operators on the ground floor. (U.S. Air Force photo/Tech. Sgt. Cecilio Ricardo).


The teaming strategy of the system may include the division of labor between the human and the system, the intent of the system toward the human, and meaningful exposure of the human and system to events so they can jointly experience and react to novel stimuli. Future human-machine teaming paradigms will likely involve some division of labor between humans and intelligent machines. The human needs to understand, both in real time and in projection, how that division of labor is perceived by the system, how it will change, and what triggers the change. The system should visually represent the division of labor for a particular task or set of tasks. This will allow the human and the system to develop shared awareness of the current and future teamwork context. Further, it is plausible that advances in physiological assessment and intelligent algorithms will allow systems to transfer authority between the human and the system as required by situational demands. For instance, the Air Force has fielded an advanced automated system called the Automatic Ground Collision Avoidance System (AGCAS) on the F-16 platform that will take control away from the pilot when it detects an imminent collision with the ground [9]. This system only activates at the last possible moment to avoid nuisance activations and interference with the pilot. It was this innovative design decision to account for pilots' perceived nuisance threshold that drove much of the system's success, and it is this understanding that has positively influenced pilots' trust in the system [9].
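The "last possible moment" activation philosophy can be illustrated with a deliberately simplified check like the one below. This is not the actual AGCAS algorithm; the state fields, recovery time, and margin are assumed values chosen only to show how a nuisance-averse trigger might be framed.

```python
# Heavily simplified sketch of a "last possible moment" intervention check,
# in the spirit of the nuisance-avoidance design described above. This is NOT
# the actual AGCAS algorithm; thresholds and state fields are assumptions.

from dataclasses import dataclass

@dataclass
class AircraftState:
    altitude_ft: float            # height above terrain
    descent_rate_fps: float       # positive = descending
    recovery_time_s: float = 5.0  # assumed time needed to complete a recovery

def should_take_control(state: AircraftState, margin_s: float = 1.0) -> bool:
    """Intervene only when the projected time to impact barely exceeds the time
    needed to recover, so routine maneuvering never triggers a takeover."""
    if state.descent_rate_fps <= 0:
        return False  # not descending; no conflict projected
    time_to_impact_s = state.altitude_ft / state.descent_rate_fps
    return time_to_impact_s <= state.recovery_time_s + margin_s

# Aggressive but recoverable descent: no intervention yet.
print(should_take_control(AircraftState(altitude_ft=3000, descent_rate_fps=300)))  # False (10 s to impact)
# The same descent much lower: intervention would trigger.
print(should_take_control(AircraftState(altitude_ft=1500, descent_rate_fps=300)))  # True (5 s to impact)
```

Keeping the margin small is what makes the behavior predictable to the pilot: the system intervenes only when inaction would make recovery impossible, not whenever the maneuver merely looks aggressive.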

Humans must also understand the intent of the system in relation to the human. This will require that humans fully understand the goals of the system and how the system prioritizes multiple goals across a variety of situational constraints. Understanding this goal prioritization and how priorities fluctuate across situations will be an important antecedent of trust for learning systems. This forms the basis for understanding what "motivates" the system's behavior. Humans can gain exposure to these nuances through systematic joint training sessions in which the human and system interact across a range of scenarios. These scenarios will comprise meaningful tests, or stretches, of the system's intent across the various situations that will be needed to foster appropriate trust of a learning system [10]. While the "values" of the system may be opaque, we can structure scenarios that test the behavioral consistency of the system across a range of demanding constraints. Thus, while we can never test every possible scenario to achieve full understanding, much as with humans, we must infer future consistency from demonstrated consistency and predictability in a variety of challenging scenarios. One cannot know with full certainty how a close friend will react to positive or negative news, but such reactions are generally predictable based on shared prior experiences. The same will hold true for a learning system: prior experiences should serve as information to guide future predictions of consistency. The value of such information will depend not on the total time of interaction, but on the meaningfulness of the interactions that are jointly experienced. Ultimately, humans may not need to know in detail exactly how a learning system will react to a novel stimulus; it may be enough to know that the system has reacted to other novel stimuli in the past in ways that support their own goals. Understanding the rules that govern the behavior of the system, and having experienced behavioral consistency in accordance with those rules, should be a sufficient starting point for teaming with a learning system.
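One way to operationalize this kind of scenario-based consistency testing is sketched below: a learned policy is exercised across a battery of scenarios and scored against a small set of declared rules. The scenario encoding, the rule, and the policy here are hypothetical placeholders, not a prescribed evaluation method.

```python
# Illustrative sketch: exercising a learned policy across a battery of
# scenarios and checking that its choices stay consistent with a small set
# of declared rules. Scenario and rule definitions are assumptions.

from typing import Callable, Dict, List

Scenario = Dict[str, float]             # named situational features
Policy = Callable[[Scenario], str]      # maps a scenario to a chosen action
Rule = Callable[[Scenario, str], bool]  # True if the action respects the rule

def consistency_report(policy: Policy, scenarios: List[Scenario], rules: List[Rule]) -> float:
    """Return the fraction of scenario/rule checks the policy satisfies."""
    checks, passed = 0, 0
    for scenario in scenarios:
        action = policy(scenario)
        for rule in rules:
            checks += 1
            passed += rule(scenario, action)
    return passed / checks if checks else 1.0

# Example rule: never choose "engage" when a friendly is nearby.
no_fratricide = lambda s, a: not (s.get("friendly_nearby", 0) and a == "engage")
cautious_policy = lambda s: "hold" if s.get("friendly_nearby", 0) else "engage"

scenarios = [{"friendly_nearby": 1}, {"friendly_nearby": 0}, {"friendly_nearby": 1}]
print(consistency_report(cautious_policy, scenarios, [no_fratricide]))  # 1.0
```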

The rules or values encoded in a learning system are thus critical to overall success, and especially to successful teaming with humans. Much like learning in humans, learning systems require explicit and implicit direction to instill those values. For example, in many implementations, explicit feedback in the form of negative rewards can be attached to behaviors (e.g., damaging a friendly asset), while implicit direction is provided by careful construction of training scenarios in which violations of values lead to failure. This process may then be validated during a cooperative training process; as the human gains experience with the system's behavior, they should have the opportunity to provide feedback to the system and reinforce those values.
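A minimal sketch of the explicit-feedback idea follows, assuming a reinforcement-learning-style setup in which value violations carry negative rewards. The event names and penalty magnitudes are illustrative assumptions, not values from any fielded system.

```python
# Minimal sketch of attaching explicit negative rewards to value violations
# during training, as described above. Event names and penalty magnitudes are
# assumptions chosen for illustration.

def shaped_reward(task_reward: float, events: set) -> float:
    """Combine the task's own reward with penalties for value violations."""
    penalties = {
        "damaged_friendly_asset": -100.0,   # explicit feedback: never acceptable
        "entered_restricted_zone": -25.0,
        "ignored_operator_override": -50.0,
    }
    return task_reward + sum(penalties.get(e, 0.0) for e in events)

# A step that completed its objective but damaged a friendly asset nets a
# strongly negative total, steering learning away from that behavior.
print(shaped_reward(task_reward=10.0, events={"damaged_friendly_asset"}))  # -90.0
```

The implicit side of the direction, constructing scenarios in which value violations lead to failure, shapes the same behavior without ever naming the rule explicitly.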

In a human-machine team, transparency of the system is not the only key to the human understanding the learning system. The human partner must be transparent as well, in the sense that the system should monitor the state and inputs of the human partner in order to adapt and team most effectively, a concept that has been termed Robot-of-Human transparency [3] to capture the bidirectional nature of transparency. Ideally, this monitoring is passive and nonintrusive so as to minimize additional workload burdens on the human partner. There are at least two potential ways in which data from human monitoring can be utilized by a learning system. One is error detection or expectation mismatch: signals such as verbal expressions of surprise or the P300 (an electroencephalographic waveform indicating that the human has perceived something as unexpected) can provide the system with evidence that its behavior has violated operator expectations, and can trigger reevaluation, changes in behavior, and potentially queries to the human operator for clarification. This concept has been demonstrated in online control with no overt response from the operator [11].
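The sketch below illustrates the general pattern of treating an expectation-mismatch signal as a trigger for reevaluation. The surprise detector is a stand-in placeholder; real P300 detection requires EEG acquisition and classification far beyond this example.

```python
# Conceptual sketch of using an expectation-mismatch signal (e.g., a detected
# surprise response) to trigger reevaluation. The detector is a placeholder;
# actual P300 detection requires EEG acquisition and classification.

import random

def surprise_detected() -> bool:
    """Placeholder for a real mismatch detector (verbal cue, P300, etc.)."""
    return random.random() < 0.1  # assumption: ~10% of actions surprise the operator

def execute_with_monitoring(actions):
    for action in actions:
        print(f"executing: {action}")
        if surprise_detected():
            # Operator expectation apparently violated: pause, reevaluate,
            # and optionally query the operator before continuing.
            print(f"mismatch after '{action}': reevaluating and requesting confirmation")

execute_with_monitoring(["reroute to waypoint 4", "descend to 8000 ft", "hand off sensor task"])
```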

A second way in which human physiological and behavioral data can be utilized is as a system input for adapting teaming behavior. This concept has been extensively discussed in the literature on adaptive automation and coadaptive aiding [12-15]. The key concept is that accurate real-time monitoring of human state (e.g., cognitive workload, stress, and fatigue) can become part of the total system environment and directly inform optimal function allocation and teaming strategies. This approach has the potential both to improve overall system performance by optimizing the use of human and system resources, and to improve the likelihood of user acceptance and adoption. Users have historically resisted aiding systems that can arbitrarily take control of tasks; however, if the system only does so when the user is overloaded, it can be framed and communicated as more of a partner and aid than an unpredictable, inflexible machine. There are a few key challenges to successful implementation of this approach. One is the limited accuracy of state assessment (generally on the order of 80-90% correct over time [16]); confidence and error probability must be understood and quantified in the system's representation of the human partner and in its behavior selection. Another challenge is effective handoff or transfer of tasks; task set changes and sudden transitions in workload levels can have negative impacts on human performance [e.g., 17, 18].
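A sketch of confidence-aware task reallocation follows, assuming a workload classifier whose accuracy sits in the 80-90% range cited above. The thresholds, workload levels, and task names are assumptions for illustration, not part of any deployed adaptive-aiding system.

```python
# Sketch of workload-based task reallocation that accounts for imperfect state
# estimation. The 0.85 confidence threshold mirrors the 80-90% accuracy range
# cited above; levels, thresholds, and task names are assumptions.

from dataclasses import dataclass
from typing import List

@dataclass
class WorkloadEstimate:
    level: str         # "low", "moderate", or "high"
    confidence: float  # classifier's confidence in this estimate, 0.0-1.0

def reallocate(estimate: WorkloadEstimate, sheddable_tasks: List[str],
               min_confidence: float = 0.85) -> List[str]:
    """Offload tasks to the system only when the operator is assessed as
    overloaded AND the assessment is trustworthy enough to act on."""
    if estimate.level == "high" and estimate.confidence >= min_confidence:
        return sheddable_tasks  # system takes over the sheddable tasks
    return []                   # otherwise leave the allocation unchanged

print(reallocate(WorkloadEstimate("high", 0.92), ["radio logging", "fuel checks"]))
print(reallocate(WorkloadEstimate("high", 0.70), ["radio logging", "fuel checks"]))  # too uncertain to act
```

Gating the handoff on classifier confidence is one way to keep the imperfect state estimate from producing exactly the arbitrary-feeling takeovers that users have historically resisted.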

In addition to providing transparency, we can reduce the uncertainty of learning systems by facilitating knowledge sharing between peer groups. Stories related to both successful instances of interaction and failures can be a useful way to reduce uncertainty about a novel system. Stories shared among operators have been shown to influence trust of fielded automated systems in the Air Force [9]. These stories help to fill in the gaps of uncertainty as different users encounter disparate environmental constraints, and as a result a wider variety of experience with the system (under various conditions) is shared throughout the social network. A critical consideration, however, is to ensure that the systems being discussed are indeed the same, lest users share stories based on different versions of a system or a different system altogether. Designers should expect that users will transfer both optimistic and pessimistic expectations from one system to another when they perceive the systems as similar.

In summary, we have three recommendations that will shed light on the function of learning systems, resulting in systems that can be described as gray boxes:

  • Provide human operators with maximum transparency as to the inputs, process, and potential outputs of the learning system, as well as values encoded in that system.
  • Train humans and learning systems together using challenging and realistic scenarios to increase mutual understanding, improve teaming, and enable human operators to gain experience and insight into system performance.
  • Provide learning systems with transparency as to the state of the human operator, including their momentary capabilities and the potential impact of changes in task allocation and teaming approach.

Copyright © 2017 by ASME

References

1. Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A.A., Veness, J., Bellemare, M.G., ... & Petersen, S. "Human-level control through deep reinforcement learning." Nature, 518(7540), 2015, pp. 529-533.
2. Lee, J.D., & See, K.A. "Trust in automation: Designing for appropriate reliance." Human Factors, 46, 2004, pp. 50-80.
3. Lyons, J.B. "Being transparent about transparency: A model for human-robot interaction." In D. Sofge, G.J. Kruijff, and W.F. Lawless (Eds.), Trust and Autonomous Systems: Papers from the AAAI Spring Symposium (Technical Report SS-13-07). Menlo Park, CA: AAAI Press, 2013.
4. Lyons, J.B., & Havig, P.R. "Transparency in a human-machine context: Interface approaches for fostering shared awareness/intent." In R. Shumaker and S. Lackey (Eds.), Virtual, Augmented, and Mixed Reality: Designing and Developing Virtual and Augmented Environments, Part I, Lecture Notes in Computer Science, 8525, June 2014, pp. 181-190.
5. Chen, J.Y.C., & Barnes, M.J. "Human-agent teaming for multirobot control: A review of the human factors issues." IEEE Transactions on Human-Machine Systems, 2014, pp. 13-29.
6. Mercado, J.E., Rupp, M.A., Chen, J.Y.C., Barnes, M.J., Barber, D., & Procci, K. "Intelligent agent transparency in human-agent teaming for multi-UxV management." Human Factors, 58(3), 2016, pp. 401-415.
7. Lyons, J.B., Koltai, K.S., Ho, N.T., Johnson, W.B., Smith, D.E., & Shively, J.R. "Engineering trust in complex automated systems." Ergonomics in Design, 24, 2016, pp. 13-17.
8. Saddler, G.G., Battiste, H., Ho, N.T., Lyons, J.B., Hoffman, L., & Shively, R. "Effects of transparency on pilot trust and acceptance in the autonomous constraints flight planner." Presented at the Digital Avionics Systems Conference, Sacramento, CA, September 2016.
9. Lyons, J.B., Ho, N.T., Koltai, K., Masequesmay, G., Skoog, M., Cacanindin, A., & Johnson, W.W. "A trust-based analysis of an Air Force collision avoidance system: Test pilots." Ergonomics in Design, 24, 2016, pp. 9-12.
10. Lyons, J.B., Clark, M.A., Wagner, A., & Schuelke, M.J. "Certifiable trust in autonomous systems: Making the intractable tangible." AI Magazine (in press).
11. Zander, T.O., Krol, L.R., Birbaumer, N.P., & Gramann, K. "Neuroadaptive technology enables implicit cursor control based on medial prefrontal cortex activity." Proceedings of the National Academy of Sciences, 113(52), 2016, pp. 14898-14903. doi: 10.1073/pnas.1605155114
12. Rouse, W.B. "Adaptive allocation of decision making responsibility between supervisor and computer." In T.B. Sheridan & G. Johannsen (Eds.), Monitoring Behavior and Supervisory Control. New York: Plenum Press, 1976, pp. 295-306.
13. Byrne, E.A., & Parasuraman, R. "Psychophysiology and adaptive automation." Biological Psychology, 42, 1996, pp. 249-268.
14. Wilson, G.F., & Russell, C.A. "Performance enhancement in an uninhabited air vehicle task using psychophysiologically determined adaptive aiding." Human Factors, 43(3), 2007, pp. 1005-1018.
15. Christensen, J.C., & Estepp, J.R. "Coadaptive aiding and automation enhance operator performance." Human Factors, 55(5), 2013, pp. 965-975. doi: 10.1177/0018720813476883
16. Christensen, J.C., Estepp, J.R., Wilson, G.F., & Russell, C.A. "The effects of day-to-day variability of physiological data on operator functional state classification." NeuroImage, 59(1), 2012, pp. 57-63.
17. Morgan, J.F., & Hancock, P.A. "The effect of prior task loading on mental workload: An example of hysteresis in driving." Human Factors, 53(1), 2011, pp. 75-86.
18. Ungar, N.R., Matthews, G., Warm, J.S., Dember, W.N., Thomas, J.K., Finomore, V.S., & Shaw, T.H. "Demand transitions and tracking performance efficiency: Structural and strategic models." Proceedings of the Human Factors and Ergonomics Society 49th Annual Meeting, 2005.
