
Toward Intelligent Human Machine Interactions

Human Assistance Systems (HAS)

Yingzi Lin

Intelligent Human Machine Systems Lab, Department of Mechanical and Industrial Engineering, College of Engineering, Northeastern University, Boston

Yingzi Lin is the Director and Principal Investigator of the Intelligent Human-Machine Systems (IHMS) Laboratory, and an Associate Professor (tenured) in the Department of Mechanical and Industrial Engineering, College of Engineering, Northeastern University, Boston, MA, USA. Prior to that, she was an Assistant Professor in the Concordia Institute for Information Systems Engineering at Concordia University, Montreal, Canada. Her research has been funded by the National Science Foundation (NSF), the Natural Sciences and Engineering Research Council of Canada (NSERC), and major industries. She is the recipient of several prestigious research awards, including an NSF CAREER award and an NSERC UFA (University Faculty Award). She has published over 100 technical papers in refereed journals and conference proceedings. Her areas of expertise include intelligent human-machine systems, driver-vehicle systems, smart structures and systems, sensors and sensing systems, multimodality information fusion, human-machine interface design, and human-friendly mechatronics. Dr. Lin was the Chair of the Virtual Environments Technical Group of the Human Factors and Ergonomics Society (HFES). She has served on committees of the Transportation Research Board (TRB) of the National Academy of Sciences. She served as an Associate Editor of the IEEE Trans. on Systems, Man and Cybernetics - Part A: Systems and Humans, and of Structural Health Monitoring: An International Journal. In addition, Professor Lin has been a reviewer for many professional journals and conferences, and has served on the organizing committees of a number of professional meetings in the areas of advanced sensors, mechatronic systems, dynamic systems and control, advanced smart materials and smart structures, and human-machine interaction.

Mechanical Engineering 139(06), S4-S8 (Jun 01, 2017) (5 pages) Paper No: ME-17-JUN4; doi: 10.1115/1.2017-Jun-4

This article discusses the concept of human assistance systems (HAS) and research on designing HAS interfaces. It also examines how humans and HAS collaborate with each other during such interactions. HAS are expected to detect and compensate for human errors. When a machine is part of the team completing an operation, it is highly desirable that HAS collaborate with humans effectively. Advances in HAS have been made in application areas including vehicle driving, pilot-flight interfaces, healthcare and rehabilitation, and robotics. One important enabler for studying driver assistance systems (DAS) is the availability of a powerful research tool, the driving simulator, which can generate real-world traffic scenarios without putting drivers in any real danger. A control strategy for HAS has been investigated, especially for DAS. The goal is to provide a warning message and/or intervention to the driver, if necessary, to avoid hitting objects on the road while not frustrating the user.

Humans create devices, including structures, machines, etc., to help humans cope with nearly all kinds of socio-technical systems (e.g., manufacturing, servicing). In fact, devices have never left humans alone; that is, full automation has never taken place. Therefore, humans ubiquitously interact with devices to make sure that jobs are effectively carried out. In particular, humans serve as masters while devices serve as slaves. In this context, devices are inherently identified as human assistance systems (HAS). There are naturally two issues in constructing HAS. The first is how to design the interface of HAS. The second is how humans and HAS collaborate with each other during such interactions.

With regard to the first issue, interfaces are of two types: (i) devices used by humans to communicate with machines, e.g., keyboard and mouse, and (ii) devices used by machines to communicate with humans, e.g., display screens and audio systems. Type (i) devices are called human-to-machine interfaces and Type (ii) devices are called machine-to-human interfaces. Both types are responsible for the effective and efficient operation of HAS. Interfaces have soft and hard parts. The soft part refers to "What, Where, When (WWW)": what the right information and/or action is, and where and when it is communicated to humans from HAS. The hard part refers to how the information and/or action is realized with devices.

With regard to the second issue, humans may make errors or act improperly in human-machine interactions. HAS are expected to detect and compensate for human errors. When a machine is part of the team completing an operation, it is highly desirable that HAS collaborate with humans effectively. Indeed, HAS must possess a certain level of human intelligence, as described below.

There are several levels of intelligence with HAS:

  • Level 1: HAS that can follow a pre-defined procedure for a human's operations. Many displays in process plants, flight displays, and vehicles fall into this level of intelligence. In essence, the system intelligence at this level is built in by interface designers. HAS with this level of intelligence may also be said to have passive intelligence.

  • Level 2: HAS that can understand human action, cognition, and/or emotion. One example is the smart steering wheel, where an array of sensors (measuring heart rate, galvanic skin response, etc.) was constructed on the surface of the steering wheel to measure driver states [1]. HAS at this level do not change a machine's behavior, so they still fall into the category of passive intelligence.

  • Level 3: HAS that possess Level 2 intelligence and can also perform cognitive tasks and change the machine in response to a new situation on the human side. HAS at this level change the machine's behavior, which could be a (physical) action or (non-physical) communication; such intelligence may be called active intelligence.

  • Level 4: HAS that possess Level 3 intelligence and can further exhibit intelligence emotionally; that is, emotion plays an important role in the system's decisions and actions. For instance, HAS may intervene more aggressively in the braking operation when they detect that the driver is in an angry state [2]. This type of intelligence is also called emotional intelligence.

  • Level 5: HAS that possess Level 4 intelligence and can express emotions known to humans. For instance, a driver assistance system (DAS) may use a particular soft voice to remind a particular driver of a hazard ahead.

  • Level 6: HAS that possess Level 5 intelligence and can express emotions based on the machine's own state in a physical and/or cognitive sense. For instance, a DAS performing a braking intervention would give an emotional message to a particular driver based on the state of the braking system.

Remark (1): Levels 4, 5, and 6 of intelligence all fall into the category of active intelligence, and may be further called emotional intelligence I, II, and III.

Remark (2): In the case of automation, machines are controlled by computers, and human-machine interactions thus become human-computer interactions. However, the nature of the human-machine interaction is unchanged, as the computer in this case is part of the machine and part of the machine-to-human interface.

Remark (3): When the machine is purely computer software, with no physical machine of interest behind it, software that exhibits and operates intelligently (i.e., persuasive technology) is a type of HAS at Level 5.
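The cumulative structure of the six levels can be made concrete with a small data-structure sketch. The enum names and helper predicates below are illustrative labels chosen for this sketch, not terminology from the article:

```python
from enum import IntEnum

class HASLevel(IntEnum):
    """Cumulative intelligence levels for human assistance systems (HAS)."""
    FOLLOW_PROCEDURE = 1        # follows a pre-defined procedure
    SENSE_HUMAN_STATE = 2       # understands action/cognition/emotion
    ADAPT_BEHAVIOR = 3          # changes the machine's behavior
    EMOTION_IN_DECISION = 4     # emotion shapes decisions and actions
    EXPRESS_HUMAN_EMOTION = 5   # expresses emotions known to humans
    EXPRESS_MACHINE_EMOTION = 6 # emotion grounded in the machine's own state

def is_passive(level: HASLevel) -> bool:
    # Levels 1-2 never change machine behavior (passive intelligence)
    return level <= HASLevel.SENSE_HUMAN_STATE

def is_emotional(level: HASLevel) -> bool:
    # Levels 4-6 are emotional intelligence I, II, and III (Remark 1)
    return level >= HASLevel.EMOTION_IN_DECISION
```

Because each level subsumes the one below it, an ordered integer enum captures the taxonomy directly: any capability check reduces to a comparison.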

Basic problems of HAS may be described in the following dimensions:

  • Dimension 1: interface design and interaction (for all levels of intelligence), particularly the problem of determining "What, Where, When" for a piece of information relayed to humans, and how to present that information.

  • Dimension 2: development of human-to-machine interacting devices, e.g., joystick, keyboard, etc. (for all levels of intelligence).

  • Dimension 3: development of machine-to-human interacting devices, e.g., audio, display screen, etc. (for all levels).

  • Dimension 4: development of sensors that are built on or worn by machines for HAS to infer and predict human states (for Level 2 of intelligence and above).

  • Dimension 5: design of software for HAS to provide assistance, including soft messages and/or hard interventions, to humans based on the analysis of information regarding a human's action and cognition (for Level 3 and above).

  • Dimension 6: design of software for HAS to provide assistance, including soft messages and/or hard interventions, to humans based on the analysis of information regarding a human's action and cognition as well as emotion (for Level 4 and above).

  • Dimension 7: development of HAS that can exhibit human emotions in appearance (for Level 5 of intelligence).

  • Dimension 8: understanding of the relationship between a machine's states and a human's emotions (for Level 6).

Advances in HAS have been made in application areas including vehicle driving, pilot-flight interfaces, healthcare and rehabilitation, and robotics. In the following, research efforts in HAS for vehicle driving, also called driver assistance systems (DAS), are described.

One important enabler for studying DAS is the availability of a powerful research tool, the driving simulator, which is an effective means of generating real-world traffic scenarios without putting drivers in any real danger [3,4]. An advanced driving simulator has been in development for the past decade at the IHMS laboratory [5]. Its purpose is to facilitate the research and development of DAS, with an eye on more generalized findings for HAS in other application areas, and to facilitate the training and assessment of drivers in essential driving skills, e.g., reaction to hazards.

Specifically, the driving simulator serves two primary functions; see Figure 1. The first is to test sensors, with their algorithms, that allow the HAS to understand a driver's states in action, cognition, and emotion. The second is to test operation management systems (both hardware and software) for driver assistance. With these functions, the driving simulator can support research across all of the problem dimensions (Dimensions 1-8) described previously.

The quality of a driving simulator lies in its fidelity. Figures 2 and 3 show the various road situations the simulator can present, and how it facilitates the development of an operation management system for drivers' reactions to hazardous situations.

FIGURE 1 Sensors and Driver Interface.


The driving simulator has also been developed into a networked platform (Figure 4), upon which scenarios with multiple drivers on the road can be constructed [5,6,7]. Specifically, the networked simulator enables (a) the simulation of single- and multi-driver immersive driving, (b) the visualization of interactive surrounding traffic, (c) the specification and creation of reproducible traffic scenarios, (d) the capture of drivers' behavioral and physiological data, and (e) real-time information communication between vehicles.

FIGURE 2 Virtual environments and driving scenarios.


FIGURE 3 Driving Simulation at IHMS Lab at Northeastern.


A variety of research projects have been performed on topics such as driver fatigue [8] and distraction [9]. In the following, selected projects are summarized.

The primary goal of DAS in this case is first to assess hazards (including the driver's state of hazard perception, intent to react, and reaction) and then to take action (or no action) accordingly. Clearly, understanding the driver's hazard perception is most crucial. A driver's hazard perception behavior is found to be sensitive to the driver's physiological state, especially electroencephalography (EEG). This provides an avenue to develop a real-time marker, or indicator, of the driver's hazard perception behavior. A project was carried out to develop such a marker. The objective is to build a map between the physiological signals and the hazard perception behavior derived from a standard test available in the literature. In this pilot study, about 50 participants were shown images of two categories: non-hazardous situations and hazardous situations (see Figure 5). The participants were required to respond to the situations in the images (Yes or No for hazardous situation identification). During the task, physiological signals of the participants were measured, including EEG and skin conductance signals. Figure 6 shows the experiment scene, in particular the display of hazardous images and the measurement of EEG and skin conductance (SC) signals of the participants. Data analysis establishes the mapping among the behavioral score, the physiological signals, and the risk/no-risk category. It also reveals that the physiological signals are more sensitive to the risk/no-risk category than the behavioral score. Further, this strongly suggests that a driver's physiological responses are potentially reliable objective measures for driver licensing tests.
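The mapping step described above, from physiological features to a hazard/no-hazard label, is in essence a binary classification problem. A minimal sketch with fully synthetic data follows; the two "features" (an EEG index and a skin-conductance amplitude) and all numbers are illustrative assumptions, not the study's actual measurements or results:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
hazard = rng.integers(0, 2, n)                       # 1 = hazardous image shown
# Synthetic physiological features whose distribution shifts under hazard
X = rng.normal(size=(n, 2)) + 2.0 * hazard[:, None]
X = np.column_stack([np.ones(n), X])                 # bias column

# Plain logistic regression fit by gradient descent on the log-loss
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))                 # predicted hazard probability
    w -= 0.1 * X.T @ (p - hazard) / n                # gradient step

pred = (1.0 / (1.0 + np.exp(-X @ w)) > 0.5).astype(int)
accuracy = (pred == hazard).mean()
```

The trained weights play the role of the "real-time marker": once fitted offline, evaluating the sigmoid on a new feature vector is cheap enough to run continuously while driving.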

FIGURE 4 Networked Multi-Driver Simulation Platform.


Driving anger, commonly called "road rage," is a unique emotion caused by pressure or frustration from daily life, bad traffic situations, or the discourteous behavior of surrounding drivers. First, anger was induced by elicitation events. Then, anger intensity was labeled in terms of self-reported anger levels and associated with EEG spectral features under different driving anger states. In particular, the relative energy spectra of the δ, θ, α, and β bands of the EEG signal were obtained for the different anger levels; see Figure 7. As shown, the relative energy spectrum of the β band (β%) is lowest in the neutral state (anger level = 0) and highest at anger level 5, and β% markedly increases with anger level. Meanwhile, the relative energy spectrum of the θ band (θ%) markedly decreases with increasing anger level. Additionally, the relative energy spectrum of the δ band (δ%) in the anger states (anger levels 1, 3, 5) is smaller than in the neutral state, and the relative energy spectrum of the α band (α%) at anger levels 1 and 3 is smaller than in the neutral state. However, no consistent trend with increasing anger level was found for δ% or α% [10,2].
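The band-relative-energy features used above can be computed from a power spectral density estimate. A minimal sketch on a synthetic signal follows; the sampling rate, the conventional band edges, and Welch's method are assumptions of this sketch, not details given in the article:

```python
import numpy as np
from scipy.signal import welch

fs = 256                                  # Hz, assumed EEG sampling rate
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
# Synthetic "EEG": a dominant beta-band (20 Hz) rhythm plus broadband noise
eeg = np.sin(2 * np.pi * 20 * t) + 0.3 * rng.standard_normal(t.size)

f, psd = welch(eeg, fs=fs, nperseg=1024)  # power spectral density estimate

# Conventional band edges in Hz (delta, theta, alpha, beta)
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
total = psd[(f >= 1) & (f < 30)].sum()
rel = {name: psd[(f >= lo) & (f < hi)].sum() / total
       for name, (lo, hi) in bands.items()}  # e.g. rel["beta"] is the beta%
```

Because the four bands partition the 1-30 Hz range, the relative energies sum to one, so an increase in β% with anger is necessarily offset by decreases elsewhere, consistent with the reported drop in θ%.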

Sensing is a fundamental problem in human-machine systems; see the previous discussion of the problem dimensions (Dimension 4) and of the levels of intelligence of human-machine systems (Level 2 and above). Sensors play three roles: understanding the scene, the machine, and the human. Sensors for the scene and the machine are not a focus of this paper; the sensor for the machine is the business of machine manufacturers. The IHMS laboratory focuses on sensors for the human. Ample evidence shows that human physiological signals are sensitive to human states. The essential criterion for sensors that measure human physiological signals is non-intrusiveness. We have focused on a so-called "natural contact sensor" [11]. The natural contact approach makes a machine "wear" a wrapper, or engineers a "skin" on a machine, with such sensors embedded in the wrapper or skin. The natural contact sensor paradigm is complementary to the wearable sensor paradigm, in which humans must "wear" sensors in order to have their physiological signals measured.

There are two challenges with the natural contact sensor paradigm: (i) how to install a suite of such sensors in a machine, and (ii) how to predict which points on the machine subjects will contact during operation. To address both challenges, the concept of a flexible thin-film sensing array was proposed [12], which can be easily wrapped around and retrofitted to machine surfaces. On this basis, two such non-intrusive sensors have been developed. (1) Skin temperature sensor. Changes in temperature and pressure from humans are the main factors that induce cross interference. To perform temperature compensation, the temperature sensors should be robust thermometers with stable performance that suffer little from cross interference. Specific skin temperature sensors have been developed and their performance verified [13]. (2) Heart rate sensor. For heart rate variability, heart rate can be measured by observing the amount of infrared light reflected by the skin from a light source, i.e., by measuring the blood volume pulse. By combining quantum dots with the conductive polymers used to make organic LEDs, a thin, flexible film that can measure blood volume pulse can be developed [14].
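Once a blood volume pulse (BVP) waveform is available from such a film, heart rate follows from the spacing of its peaks. A minimal sketch on a synthetic waveform follows; the sampling rate, noise level, and peak-detection thresholds are assumptions of this sketch:

```python
import numpy as np
from scipy.signal import find_peaks

fs = 100                       # Hz, assumed sample rate of the photodetector output
t = np.arange(0, 30, 1 / fs)
pulse_hz = 1.2                 # 72 beats per minute
rng = np.random.default_rng(2)
# Synthetic blood volume pulse: periodic waveform plus measurement noise
bvp = np.sin(2 * np.pi * pulse_hz * t) + 0.1 * rng.standard_normal(t.size)

# Peaks at least 0.4 s apart (assumed physiological upper bound ~150 bpm)
peaks, _ = find_peaks(bvp, distance=int(0.4 * fs), height=0.5)
ibi = np.diff(peaks) / fs              # inter-beat intervals in seconds
heart_rate_bpm = 60.0 / np.median(ibi)
```

Using the median inter-beat interval rather than the mean makes the estimate robust to an occasional missed or spurious peak, which matters when contact pressure on a natural contact sensor varies.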

A control strategy for HAS has been investigated, especially for DAS. The goal is to provide a warning message and/or intervention to the driver, if necessary, to avoid hitting objects on the road (e.g., lead cars) while not frustrating the user. The DAS makes use of the program for assessing a driver's hazard perception and assesses the risk level of the driving situation itself. For the middle and high risk levels, one example is to provide a warning and/or intervention based on a discrete PID control law. Intuitively, adjusting one's physical/mental state to achieve optimal performance is the operator's own responsibility. In reality, having the operator as the only controller is not sufficient, because human (controlling) behavior is inherently uncertain. An idea that deserves more exploration is to develop an intelligent system that collaborates with the human operator in controlling the machine's executive unit. This kind of intelligent HAS is referred to as an operator assistance system (OAS) [15]. Subsequently, all human-machine cooperative systems can be simplified to a structure with operators, HAS or OAS, and peripheral executive mechanisms. In simpler terms, the HAS becomes the brain of the machine, and the remaining executive mechanisms are its actuators. The general function of HAS is to maintain the aforementioned nominal situation during human-machine cooperation. This general role is performed through three basic functions: (1) perceiving cues of the human operator's states, (2) inferring the human operator's mental state, and (3) making decisions and adjustments aimed at recovering or maintaining the nominal state. HAS leave the execution functions (e.g., steering wheel turning, gas pedal control, and brake pedal control in driving) to the actuators. From the viewpoint of HAS, the target system is a human-in-the-loop system. From the machine side, HAS collaborate with human operators to jointly control the actuators.
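The discrete PID control law mentioned above can be sketched in a few lines. The toy headway-keeping plant and the gains below are illustrative assumptions, not values from the work described:

```python
class DiscretePID:
    """Discrete PID: u[k] = Kp*e[k] + Ki*sum(e)*dt + Kd*(e[k]-e[k-1])/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        deriv = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Toy scenario: hold a 30 m headway to a lead car by commanding a speed
# adjustment; the gap responds to the command through simplistic dynamics.
pid = DiscretePID(kp=0.8, ki=0.2, kd=0.05, dt=0.1)
target, gap = 30.0, 12.0
for _ in range(400):
    u = pid.update(target - gap)   # intervention command (braking/acceleration)
    gap += u * 0.1                 # assumed first-order gap response
```

In a real DAS the controller's output would be blended with the driver's own input, and the intervention gain could itself be scheduled on the assessed risk level, which is where the "not frustrating the user" requirement enters.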

FIGURE 5 Driver road hazard perception scene.


FIGURE 6 EEG Experiment set-up.


FIGURE 7 EEG and driver state (Neutral vs. Angry).


A technical system should be viewed as a human-machine system, as the premise of any technical system is that the human serves as a master while the machine is a slave. HAS are a generic notion: a part of machine systems that improves the level of intelligence of machines and ultimately the mission accomplishment of human-machine systems; HAS can be attached to and detached from machines. This paper presented six levels of intelligence with which HAS improve machine intelligence in working with humans, and eight dimensions of problems in developing HAS with these levels of intelligence. Selected related research projects were summarized. However, our current work has a few limitations. Take driver hazard perception, for example: further work is planned to have participants perform the test in more realistic situations, i.e., in the driving simulator or in real road tests. Based on the above analysis, there is still a need for research on sensors (i.e., on Dimension 4) to further improve the accuracy of inferring and predicting human cognitive and emotional states, especially human intent to take actions, and emotions. This includes research on both sensors and information fusion algorithms. Among these, human intent may be one of the most challenging research problems, but it is probably well worth pursuing given the potential benefits it would bring to human-machine interactions. Second, research needs to be conducted to build the emotional intelligence of HAS (i.e., on Dimension 6 and beyond). The key challenge is to develop associations between the machine's state (cognitive and physical) and the human's emotion, and the principle behind those associations.

Despite its great technical and social significance, the modeling of human states and behaviors remains one of the greatest challenges in science and technology development. Human states and behaviors are highly nonlinear, uncertain, and random, which challenges many scientific disciplines. This line of research truly calls for interdisciplinary and transdisciplinary collaboration among experts from all related fields to produce groundbreaking discoveries in the new era of human-machine interactions.

Various aspects of this work have been supported by the National Science Foundation (NSF) through grants #0954579 and #1333524.

Copyright © 2017 by ASME

References

[1] Cai, H. and Lin, Y., "An Experiment to Non-intrusively Collect Driver Physiological Parameters Towards Cognitive/Emotional State Recognition," SAE 2007 World Congress, SAE Paper No. 2007-01-0403, April 16-19, 2007, pp. 101-106.
[2] Wan, P., Wu, C., Lin, Y., and Ma, X., "An On-Road Experimental Study on Driving Anger Identification Model Based on Physiological Features by ROC Curve Analysis," IET Intelligent Transport Systems, 2017 (in press).
[3] National Highway Traffic Safety Administration, Research Note: 2014 Motor Vehicle Crashes: Overview, 2016.
[4] World Health Organization, Road Traffic Accidents, retrieved from http://www.who.int/violence_injury_prevention/road_safety_status/2015/magnitude_A4_web.pdf?ua=1, 2015.
[5] Lin, Y., "i-DRIVE (Intelligent Driver Interactive Vehicle Environment): Are We Ready?," Journal of Automotive Safety and Energy, Vol. 7, No. 1, 2016, pp. 14-24.
[6] Xu, J. and Lin, Y., "Study on Interactive Driving Behavior Using Networked Multi-Drivers Simulator," 15th Driving Simulation Conference (Europe), Paris, France, Sept. 4-5, 2014, pp. 1101-1109.
[7] Cai, H., Lin, Y., and Mourant, R. R., "Study on Driver Emotion in Driver-Vehicle-Environment Systems Using Multiple Networked Driving Simulators," Driving Simulation Conference 2007 - North America, Iowa City, Sept. 12-14, 2007, CD-ROM.
[8] Yang, G., Lin, Y., and Bhattacharya, P., "A Driver Fatigue Recognition Model Based on Information Fusion and Dynamic Bayesian Network," Information Sciences, Vol. 180, 2010, pp. 1942-1954.
[9] Samareh, A., Xu, J., Beneyan, J., and Lin, Y., "Study the Headway Distance and Physiological Responses of Driver Distraction - An Experiment on Networked Multi-Drivers Simulator," Human Factors and Ergonomics Society's 59th Annual Meeting, San Diego, Oct. 26-30, 2015, pp. 1820-1823.
[10] Wan, P., Wu, C., Lin, Y., and Ma, X., "Optimal Threshold Determination for Discriminating Driving Anger Intensity Based on EEG Wavelet Features and ROC Curve Analysis," Information, Vol. 7, No. 52, 2016, 7030052.
[11] Lin, Y., "A Natural Contact Sensor Paradigm for Non-intrusive and Real-time Sensing of Bio-signals in Human-Machine Interactions," IEEE Sensors Journal, Special Issue on Cognitive Sensor Networks, Vol. 11, No. 3, 2011, pp. 522-529.
[12] Leng, H. and Lin, Y., "A MEMS/NEMS Sensor for Human Skin Temperature Measurement," Smart Structures and Systems, Vol. 8, No. 1, 2011, pp. 53-67.
[13] Leng, H. and Lin, Y., "From Human Skin to Nano-Skin: An Experimental Study on Human Skin Temperature Measurement," International Journal of Smart and Nano Materials, Vol. 2, No. 2, June 2011, pp. 78-91.
[14] Schmidt, D. and Lin, Y., "Quantum Dot Sensing: Nano-Scale Photodiode Structures as Functional Devices to Measure Human Responses," Joint 6th International Conference on Advances in Experimental Structural Engineering (6AESE) and 11th International Workshop on Advanced Smart Materials and Smart Structure Technology (11ANCRiSST), Urbana, Illinois, August 1-2, 2015.
[15] Cai, H. and Lin, Y., "Coordinating Cognitive Assistance with Cognitive Engagement Control Approaches in Human-Machine Collaboration," IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, Vol. 42, No. 2, 2012, pp. 286-294.
