There are at least three reasons why learning systems can create challenges for human interaction. First, a learning system may adopt behavior that is difficult to understand and hard to distill into traditional if-then statements. Without a shared semantic space, the system has little basis for communicating with the human; as a result, what the human perceives as an error may be fully logical to the system. Second, an actual error on the part of the system may be difficult for the human to detect if the human does not understand the system's basis for decision making or the data and environmental state it is acting on. Third, a learning system by definition exhibits some degree of dynamic behavior, which challenges the notion of predictability.

This article adopts the perspective that learning systems may never be completely “knowable,” much like humans; yet they may well be trusted if users are provided with information that reduces uncertainty and increases understanding of rationale, and if lessons learned are shared through peer and informal networks. We explore the notion of the “Gray Box” to symbolize the idea of providing sufficient information about the learning technology to establish trust: much as with humans, we trust based on a synthesis of predictability, feasibility, and inferred intent, drawn from our knowledge of the system’s goals and values and from our interaction with it. The term “system” is used throughout this brief paper to denote an intelligent agent, robot, or other form of automation that possesses both decision initiative and authority to act.