Preparing for the Intelligence Era

With proper support for emerging technologies, we can have intelligent transportation networks that run as efficiently as factories, smart energy systems that make the best use of resources, and personalized health care when and where it is needed.


Ahmed K. Noor is Eminent Scholar and William E. Lobeck Professor of Modeling, Simulation, and Visualization Engineering at Old Dominion University in Norfolk, Va.

Mechanical Engineering 132(11), 24-29 (Nov 01, 2010) (6 pages) doi:10.1115/1.2010-Nov-1

This article provides an overview of various technologies meant for developing intelligent digital engineering ecosystems. The article also highlights the need to develop new methods for combining visual and haptic impressions to provide a high degree of immersion and enable touching and moving virtual objects. Some current activities are devoted to studying and improving the relation between humans and computing devices. One of these activities is the Human-Computer Confluence, an interdisciplinary initiative funded by the European Commission as part of its Future and Emerging Technologies program. Intelligent digital engineering ecosystems will closely link research and academic institutions with industry and policymakers, and will facilitate the networking of innovation knowledge. They will enable the widespread adoption of augmented reality; the seamless integration of virtual and physical worlds; the establishment of a new baseline for human functionality; and experimentation with novel modalities of expression. They will accelerate advances in neurocomputation, artificial general intelligence, and other novel technologies, as well as the synergistic union of the human brain, intelligent computing devices, and the ambient intelligence environment, to usher in the Intelligence Era.

We’ve come a long way since the introduction almost 30 years ago of the graphical user interface to our personal computers. The GUI set the stage for the computer to become the phenomenon it is today. Complex interactions and processes were reduced to a point and a click. We could concentrate on the content of our work and not on the computer.

A Displax product uses a transparent polymer film to turn surfaces including glass, plastic, and wood into multi-touch interfaces.

Now the hands-free phone in a car relies on computing devices that react to our spoken words. New wireless mobile devices are used to transfer money from our bank accounts through the touch of fingertips. With voice commands, devices can use Google Maps to overlay basic navigation information on videos and display them on their screens, while we are en route to our desired destination.

Just as those ideas were once considered fanciful, technologies expected to be commercialized over the next several years hold possibilities that, if we seize them, could serve the engineering profession, society, and the economy in ways that are impossible now.

It is the purpose of all technologies to improve the quality of life, and of work, as well. Technology is intended to make things better—that is, safer, easier, more satisfying, and therefore more enjoyable. It extends what people are capable of achieving.

Significant effort is currently being devoted to making human interactions with computers, physical systems, and with information in general, simple, natural, and seamless. The objectives of many of the recent developments are to enhance productivity and accelerate innovation.

The pace of advances in computing, communication, mobile, robotic, and interactive technologies is accelerating. The trend towards digital convergence of these technologies with information technology, virtual worlds, knowledge-based engineering, and artificial intelligence is ushering in a new era. Perhaps it will be called the Intelligence Era.

New methods of displaying information—including surfaces that can be rolled up and taken anywhere and 3-D stereoscopic displays without the need for glasses—are in development. Control systems may soon involve eye-tracking, gesture recognition, bioacoustics on the skin, and perhaps a combination of them all.

Multitoe, an interactive floor display, recognizes the imprints of users’ soles and asks for a newcomer's identity.

Patrick Baudisch, Hasso Plattner Institute. Photo by Kay Herschelmann


There are even headsets that one can wear to give a computer rudimentary commands directly as one thinks them. This brain-to-machine interface today is roughly at the same stage of development and reliability that voice-recognition technology had achieved 20 years ago.

Consider this possibility: When an engineer approaches his office, his gait and face are recognized, and the office door opens automatically. Temperature and lighting are automatically adjusted to his preferred settings, and a large screen on the wall displays the tasks of the day.

The cameras on the top of the screen track his iris, so when he blinks while looking at a task, he calls up a 3-D virtual world simulating the different phases of that job. A special lens combined with backlight creates stereoscopic views without the need for stereo glasses. He points to his area of interest and the display focuses there. Three-dimensional models of five concepts, generated by the distributed team, are displayed, along with notations. The engineer uses voice commands to select new values for the design parameters, and the model adjusts accordingly, in real time.

Advances in artificial general intelligence are creating robots that can learn more-complex tasks than in the past, and learn them faster.

Here is a snapshot of what one day may be possible in a blended real-virtual factory. An engineer is standing next to a robot and both are facing a display at one end of the factory floor. The engineer uses voice commands and gestures to interact with both the virtual factory and the robot. The robot uses its vision system, scene understanding, navigation and learning capability, and activity recognition to execute the commands of the engineer, and to carry out the tasks assigned to it.

The engineer issues a voice command to display a 3-D model of the factory floor. Another voice command and a finger pointed at a location in the display launch a simulation of a robot in the process of creating a new product. The physical robot identifies and moves to the corresponding position on the real factory floor. The robot, recognizing in the simulation the tasks assigned to it, begins to perform them.
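
This kind of blended interaction can be pictured, very roughly, as a dispatcher that fuses a recognized voice command with a pointing gesture and sends the result to both the simulation and the physical robot. The sketch below is only illustrative; all of the class and method names are hypothetical, not part of any system described in this article.

```python
# Rough sketch of blended real-virtual factory interaction: a recognized voice
# command is fused with a pointing gesture and dispatched to both the factory
# simulation and the physical robot. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Command:
    action: str      # e.g. "launch_simulation"
    location: tuple  # (x, y) position on the factory floor, in meters

def fuse(voice_action: str, pointed_location: tuple) -> Command:
    """Combine the recognized voice action with the gesture target."""
    return Command(action=voice_action, location=pointed_location)

class VirtualFactory:
    def execute(self, cmd: Command) -> None:
        print(f"[simulation] {cmd.action} at {cmd.location}")

class FactoryRobot:
    def execute(self, cmd: Command) -> None:
        # The physical robot moves to the corresponding real position and
        # begins the tasks it recognizes as assigned to it.
        print(f"[robot] moving to {cmd.location} and starting assigned tasks")

def dispatch(cmd: Command, targets) -> None:
    for target in targets:
        target.execute(cmd)

if __name__ == "__main__":
    cmd = fuse("launch_simulation", (12.5, 4.0))
    dispatch(cmd, [VirtualFactory(), FactoryRobot()])
```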

Pushing the Interface

New possibilities of human interaction with technology are emerging. Some permit several users to work together at the same time. Others use more than one sensory mode to convey information.

The familiar touch screen, for example, in which the display serves both as input and output device, has evolved into the multi-touch device.

The Microsoft Surface is a 30-inch (diagonal) reflective surface with an XGA DLP projector underneath it. It is a multi-user touch screen and enables non-digital objects to be used as input devices. A normal paint brush, for example, can be used to create a digital painting in the software.

The 10/GUI system is an enhanced touchpad for desktop computers. It enables the use of all ten fingers, and detects the pressure of each one.

Displax Multitouch uses an ultra-thin polymer film to transform any nonconductive surface, from a tabletop to a wall, into an interactive multi-touch screen. Currently, the screen can be up to 50 inches (diagonal) and enables multiple users to interact with the surface. It can respond to as many as 16 fingers simultaneously.

The Multitouch system uses modular units to create large table- and wall-size displays for multiple users.

A touchless gesture-based system, which requires no physical contact, was developed at Fraunhofer FIT Institute in Germany. The system detects the fingers and hands of multiple users as they move them in the air, then recognizes and interprets the gestures.

Interfaces have been developed which combine more than one communication modality for input or output—combining speech, eye gaze, and gesture, for example. The modalities can complement each other and enhance both the dependability and usability of the device. The weaknesses of one modality are compensated for by the strengths of another.
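
One simple way such modalities can compensate for one another is confidence-weighted (late) fusion: each recognizer reports a candidate interpretation and a confidence, and the interface accepts the interpretation with the strongest combined support. A minimal sketch, with invented labels and confidence values:

```python
# Minimal sketch of confidence-weighted late fusion across input modalities.
# The recognizers, labels, and confidence values are invented for illustration.
from collections import defaultdict

def fuse_modalities(observations):
    """observations: list of (modality, label, confidence) tuples."""
    scores = defaultdict(float)
    for modality, label, confidence in observations:
        scores[label] += confidence  # stronger modalities outweigh weaker ones
    return max(scores.items(), key=lambda kv: kv[1])

if __name__ == "__main__":
    # Speech is noisy in this environment, so gaze outweighs it here.
    observations = [
        ("speech",  "open_model",   0.40),
        ("gaze",    "open_model",   0.80),
        ("gesture", "rotate_model", 0.55),
    ]
    print(fuse_modalities(observations))  # ('open_model', 1.2)
```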

Combining a touchless interface with a holographic display will let a user work with images and information in mid-air.


Microsoft has been working on technologies such as speech, touch, contextual and environmental awareness, immersive 3-D experiences, and anticipatory computing—with the overall goal of enabling computing devices to see, listen, learn, talk, and act smartly on behalf of the users.

Work has been devoted to using video devices as input/output devices to detect hand gestures and provide more interaction than is possible with touch screens or pads.

The Touch Projector, developed by the Hasso Plattner Institute in Germany, is a handheld mobile device which tracks itself with respect to surrounding displays (including wall, tabletop, and laptop displays). The content on the touch screen of the mobile device is projected onto a target display.

SixthSense, developed at the MIT Media Lab, includes a camera, projector, and mobile computer. It responds to hand gestures.

MIT Media Lab


SixthSense, developed at MIT Media Lab, uses a wearable gesture interface consisting of a simple Webcam for input, and a small projector for the output along with a mobile computing device to create an augmented reality experience for the user. The system projects digital information onto surfaces and physical objects, and allows the user to interact with information using natural hand gestures.

G-Speak, developed by Oblong Industries, is a spatial operating environment based on gesture technology. It uses special surfaces and displays to track the hand movements of users wearing special gloves, who directly manipulate objects on the screen. Applications can be controlled through hand poses, pointing, and hand movements. Simultaneous input from several sources is supported. Hand and finger motions are tracked to an accuracy of 0.1 mm at 100 Hz.

The Skinput project of Microsoft Research and Carnegie Mellon University is an example of an organic user interface. It aims at treating the body as an input surface (a touch screen interface), and uses wearable bioacoustic sensors to turn the hands and arms into buttons.
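
Conceptually, locating a tap on the arm can be framed as a supervised classification problem over features extracted from the bioacoustic signal. The sketch below illustrates only that framing; it uses synthetic stand-in feature vectors and a generic classifier, not the actual Skinput sensing or algorithms.

```python
# Conceptual sketch: locating a tap on the arm as supervised classification
# over features extracted from a bioacoustic signal. The feature vectors are
# synthetic stand-ins, not real sensor data or the actual Skinput method.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def fake_features(center, n=50, dim=8):
    """Synthetic feature vectors clustered around one 'tap location'."""
    return center + 0.1 * rng.standard_normal((n, dim))

locations = ["wrist", "forearm", "elbow"]
X = np.vstack([fake_features(i * np.ones(8)) for i, _ in enumerate(locations)])
y = np.repeat(locations, 50)

clf = SVC(kernel="rbf").fit(X, y)

# A new tap produces a feature vector; the classifier maps it to a "button".
new_tap = np.ones(8) + 0.05 * rng.standard_normal(8)
print(clf.predict([new_tap]))  # ['forearm']
```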

Microsoft's Project Natal uses sophisticated sensors and software to track body movements, recognize faces, and respond to spoken commands. With Project Natal, the whole body of the user is turned into a video game controller, so that the user can enjoy games with friends the same way that they play in the real world—by talking, shouting, running, swinging, and other movements and gestures. A commercial product that came out of Project Natal is Kinect, an add-on peripheral to Xbox 360 planned for this year. It enables users to control and interact with a game by using gestures with hands or objects and spoken commands without the need to touch a game controller.

Microsoft's Skinput technology uses acoustics in the body to turn the skin's surface into a touchscreen interface.

Microsoft Research


Other interfaces that have been developed include a bionic contact lens, with nanometer-scale electronic circuits to provide a virtual display superimposed on the real world, and Multitoe, which turns the floor into a multi-touch screen controlled by the feet.

The European Union is funding the “Immersense Project,” which aims at developing new methods for combining vision and touch interfaces of virtual objects.

Tangible user interfaces let a user manipulate physical artifacts with physical gestures, which are sensed by the system and acted upon. Feedback is given by the system.

Imaginary interfaces can use a camera phone and an infrared light to recognize gestures and to interact spatially. Users can control onscreen objects and even draw with gestures in the air.

Technologies offer novel and increasingly efficient ways for human beings to control computers, devices, and systems, both for work and for play. Powerful microprocessors, display screens, sensing devices, miniature cameras, and other technologies are being integrated into everyday objects, and can react appropriately to the users’ gestures and actions. Computing and communication capabilities are embedded in all types of physical devices including autonomous robots, wearable computers, and wireless mobile devices, networked to sense, monitor, and help humans negotiate their environment.

That is interesting by itself, but the long-term practical implications could affect the world in more ways than by increasing the market for video games and gadgets, and over a broader reach than the engineer's workplace.

If current and emerging technologies can be integrated into large systems, they will provide unprecedented benefits. Technologies could be applied to transportation systems, for instance, to improve safety and make efficient use of assets for moving populations. Cities or regions might one day manage traffic with the same degree of control that networked SCADA systems give to manufacturers in factories today.

The same technologies could support remote medical monitoring networks offering improved care and, perhaps in some cases, increased independence for the elderly and infirm. Monitoring could be continual in cases where it is deemed necessary so health care professionals could react immediately to any development.

My colleagues and I call these kinds of concepts intelligent digital engineering ecosystems. We envision them as broad ecologies of networked smart devices and cognitive robots (with high-level reasoning, planning, and decision making capabilities), cyber collaboration and collective intelligence facilities, combined physical and virtual environments, and novel interaction technologies.

The term “interface” encompasses a link between systems or devices and people. Although it is commonly associated with computers, it can be applied to any engagement between humans and machines, including robots.

Interfaces exist to facilitate understanding. They transform digital signals and invisible radiation into media that are readily accessible to human senses. Properly designed and implemented interfaces, in addition to facilitating system-to-system communication, can simplify and automate control of complex functions, thereby reducing the cognitive load on the user.

The graphical user interface is based on using a physical input device to control the position of a cursor and on presenting information, which is organized in windows and represented by icons.

The trend is now towards more intuitive and natural user interfaces, with new means of user interaction and devices based on using natural human movements, vision, voice, or gestures to control the system, or more directly manipulate content. Over time, improved integration between the human body and electronic devices will lead to the development of organic user interfaces, which might include displays projected onto the user's skin, biometric sensors, and eventual brain-machine interfaces that provide a direct interface to the brain.

Future interaction technologies will ensure reliable communication and information transmission anywhere, any time, and with anyone. They will enable a seamless experience across many devices and virtual environments.

User interfaces will take advantage of the greater connectivity between devices and enable some of these devices to work in concert. Intelligent software agents, which have been serving as virtual assistants for a number of years, are going to gain ground and be integrated with the computing devices. For example, networked smart computing devices will proactively anticipate the users’ needs and be able to take action, according to preset criteria, on their behalf.
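
At its simplest, such a proactive agent amounts to preset rules that watch the user's context and trigger actions the user has authorized in advance. A toy sketch, with invented rules and context fields:

```python
# Toy sketch of a proactive software agent: preset rules watch the user's
# context and trigger pre-authorized actions. Rules and fields are invented.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]
    action: Callable[[dict], None]

@dataclass
class Agent:
    rules: list = field(default_factory=list)

    def observe(self, context: dict) -> None:
        for rule in self.rules:
            if rule.condition(context):
                rule.action(context)

agent = Agent(rules=[
    Rule("dim_display_when_idle",
         condition=lambda c: c["idle_minutes"] > 10,
         action=lambda c: print("dimming display")),
    Rule("prefetch_meeting_files",
         condition=lambda c: c["minutes_to_next_meeting"] < 15,
         action=lambda c: print("prefetching files for", c["next_meeting"])),
])

agent.observe({"idle_minutes": 2,
               "minutes_to_next_meeting": 10,
               "next_meeting": "design review"})
```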

The increasing use of infrared and ultrasound sensors and miniature cameras can make interface devices all but invisible. As a result, human interaction with a computer could become significantly more natural and spontaneous than it is today.

Some current activities are devoted to studying and improving the relation between humans and computing devices. One of these activities is the Human-Computer Confluence, an interdisciplinary initiative funded by the European Commission, as part of its Future and Emerging Technologies program. Its objectives are to provide better understanding of how sensory information is delivered to, and interpreted by, the human brain; to develop new methods and tools for merging the real and virtual spaces; and to discover new ways of understanding and interacting with massive amounts of data.

The boundaries of human values are continually being pushed further, and engineers will be required to perform increasingly complex and imaginative tasks of synthesis and creativity. Intelligent digital engineering ecosystems need to be developed by the effective integration and exploitation of rapidly evolving new technologies.

The ecosystems will be populated by a dynamic aggregation of humans, cognitive robots, virtual world platforms, and other digital components. They will build on the technologies and facilities of the systems being developed to exploit and augment human capabilities. Humans will have multisensory, immersive 3-D experiences in mixed physical-virtual worlds, including interaction with large surface displays, small mobile devices, and wearable computers.

The development of the ecosystems requires a holistic approach covering the environment, strategies, operations, organizations, and all related technologies, interactions, and services to optimize human performance. The services, infrastructure, and solutions of the systems evolve and adapt to local conditions with the evolution of the components.

In the near term, a fusion of different interaction technologies will facilitate 3-D collective interaction in the ecosystems. Multi-modal interfaces combining multi-touch, gesture and pointing recognition, and voice will be used to increase the quality of communication in the ecosystems. Smart mobile devices with multi-input capabilities and 3-D autostereoscopic displays will be widely available.

New methods will be developed for combining visual and haptic impressions in order to provide a high degree of immersion, and enable touching and moving virtual objects.

Intelligent digital engineering ecosystems will closely link research and academic institutions with industry and policy makers, and will facilitate the networking of innovation knowledge. They will enable the widespread adoption of augmented reality, and the seamless integration of virtual and physical worlds; establish a new baseline for human functionality; and enable experimentation with novel modalities of expression. They will accelerate advances in neurocomputation, artificial general intelligence, and other novel technologies, as well as the synergistic union of the human brain, intelligent computing devices, and the ambient intelligence environment to usher in the Intelligence Era.

For more information

Readers interested in pursuing the subject covered in this article will find links to more information at www.aee.odu.edu/inteldigitalengecosys.

The Web site, created as a companion to Mechanical Engineering magazine's Feature Focus, contains links to material on digital ecosystems, human-technology interfaces, and other relevant topics, and has continuously updated information feeds. There are also links to other online services and features of the Center for Advanced Engineering Environments at Old Dominion University.

From Your Brain to the Machine

Recent advances in cognitive neuroscience and brain imaging technologies are enabling a direct interface with the human brain. Brain-machine interfaces (BMIs) provide a direct communication pathway between a human brain and a machine. The device is controlled purely by brain waves, which correspond to certain thoughts. BMIs tie directly to the brain's neural system, and are often used to assist, augment, or repair human cognitive or sensory-motor functions.

BMIs can be invasive, partially invasive, or non-invasive. The first category uses neuroprosthetic devices implanted within the motor cortex (the brain's grey matter) to restore damaged hearing, sight, and movement. The second category includes devices implanted inside the skull, but outside the brain.

Noninvasive BMIs use external sensors and neuroimaging technologies to map brain activity, including electroencephalography (EEG), magnetoencephalography (MEG), and functional magnetic resonance imaging (fMRI).

EEG is the most widely used, but current EEG technology is less accurate than the others. It measures only tiny voltage potentials. The signals are weak and prone to interference.

Recent fMRI can characterize region-specific brain activity in real time. With an accurate map of brain activity, machines can use context, as in language context, to help determine what the users really mean.

The BMI user wears a headset with electrodes placed on the scalp, and as the user performs a task, neural communication produces electrical activity that generates small neurosignals. The signals are amplified, digitized, measured, and recorded. Salient features of brain activity are identified from the data. A pattern classification and interpretation system uses the brain-activity information to determine which task the user performed. It can also detect and monitor the brain wave patterns associated with the user's emotional state and stress level. The BMI can present bio-feedback to the user or generate a message or command to an external device.
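
The processing chain described here, filtering the digitized signal, extracting salient features, and classifying them into the task the user performed, can be sketched generically as follows. The sampling rate, frequency bands, and data are assumptions for illustration; this is not the pipeline of any particular BMI product.

```python
# Generic sketch of the processing chain described above: band-pass filter the
# digitized EEG, extract simple band-power features, and classify the task.
# Sampling rate, bands, and data are assumed for illustration only.
import numpy as np
from scipy.signal import butter, filtfilt, welch
from sklearn.linear_model import LogisticRegression

FS = 256  # assumed sampling rate, Hz

def bandpass(x, lo=1.0, hi=40.0, fs=FS):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def band_power(x, band, fs=FS):
    freqs, psd = welch(x, fs=fs, nperseg=fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def features(trial):
    """One trial (channels x samples) -> alpha and beta power per channel."""
    return np.hstack([[band_power(bandpass(ch), (8, 12)),
                       band_power(bandpass(ch), (13, 30))] for ch in trial])

# Synthetic training data: 40 trials, 4 channels, 2 seconds each.
rng = np.random.default_rng(1)
trials = rng.standard_normal((40, 4, 2 * FS))
labels = np.array([0, 1] * 20)  # e.g. "rest" vs. "imagined movement"

X = np.array([features(t) for t in trials])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print("classified task:", clf.predict([features(trials[0])])[0])
```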

A general purpose software system, BCI2000, has been developed for data acquisition, stimulus presentation, and brain monitoring applications.

Several applications are being explored for BMI technology. In clinical applications, bio-feedback is used mostly for eliminating or ameliorating a problem such as attention deficit disorder, depression, addiction, and phobia. BMIs are also used by disabled users to replace conventional interfaces. Guger Technologies' g.tec system, for example, enables a user with locked-in syndrome (a condition in which one is aware and awake, but unable to use voluntary muscles to communicate) to type with thought.

The user wears an EEG cap and focuses on a grid of letters. When the desired letter lights up, the brain activity spikes and the system types the letter. The system can also convert text to speech, print it, or copy it to e-mail, and can trigger an alarm.
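
The selection logic of such a speller can be illustrated in a few lines: the rows and columns of the letter grid flash in turn, the response to each flash is scored, and the letter at the intersection of the best-scoring row and column is typed. In the sketch below, the scores are simulated rather than measured.

```python
# Simplified sketch of the speller's selection logic: rows and columns of a
# letter grid flash in turn, each flash is scored from the EEG response, and
# the letter at the intersection of the best-scoring row and column is typed.
# The scores below are simulated, not measured.
import numpy as np

GRID = [list("ABCDEF"), list("GHIJKL"), list("MNOPQR"),
        list("STUVWX"), list("YZ1234"), list("56789_")]

def select_letter(row_scores, col_scores):
    """Pick the letter whose row and column evoked the strongest responses."""
    return GRID[int(np.argmax(row_scores))][int(np.argmax(col_scores))]

# Simulate a user attending to the letter "H" (row 1, column 1):
rng = np.random.default_rng(2)
row_scores = rng.random(6)
col_scores = rng.random(6)
row_scores[1] += 1.0  # the attended row evokes a stronger response
col_scores[1] += 1.0  # so does the attended column
print(select_letter(row_scores, col_scores))  # H
```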

Direct brain-to-computer interfaces today are about on a par with voice-recognition technology 20 years ago.

AMERICAN HONDA MOTOR CO. INC.


Potential applications of BMIs for healthy users include monitoring alertness or cognitive state, and confirming identity. This information can be used in adaptive interfaces, which dynamically adjust themselves in useful ways to better support the user in the task at hand.

A number of commercial BMI systems have been launched for gaming applications and PC users. They include the Emotiv EPOC, which uses 14 electrodes, and NeuroSky's MindSet, with one electrode.

Military applications of BMI have focused on enhancing troop performance, as well as using EEG to provide private communication and read brain waves. Examples include the Defense Advanced Research Projects Agency's Silent Talk and the Synthetic Telepathy project, funded by the U.S. Army.

The goal of Silent Talk is to allow user-to-user communication on the battlefield through analysis of neural signals without the use of vocalized speech.

Synthetic Telepathy is a joint project of the University of California, Irvine; Carnegie Mellon University; and the University of Maryland. The objective of the project is to detect and analyze neural signals, using EEG, which occur before speech is vocalized, to see if the patterns are generalizable.

For example, a soldier would "think" a message to be transmitted, and a computer-based speech recognition system would decode the EEG signals into the intended words. The decoded message is then transmitted using a system that points in the direction of the intended target. Future military applications could include mind-controlled battle robots, UAVs flown by mere thought, and cyborg soldiers.

Honda Research Institute in Japan has demonstrated an initial BMI prototype that enabled a user, wearing a headset containing EEG and near-infrared spectroscopy sensors, to control a humanoid robot, ASIMO. The user merely imagines making a movement and ASIMO makes the corresponding movement. The prototype is still large, slow, and imprecise, but it has high potential for future applications of mind-controlled robotic assistants.

Some current cutting-edge research is focused on using BMIs for thought recognition and communication (including both brain-to-machine and brain-to-brain communication). The creation of sensations, words, or even thoughts in a receiving mind is a much greater challenge than using thought to control a device. However, as our understanding of the brain increases, this may eventually become possible.

BMIs may provide users with a transparent ability to interface with, and have direct access to, vast knowledge resources. They may also help in off-loading menial mental tasks to external devices.

The Honda Research Institute has demonstrated a brain-machine interface operating the robot ASIMO.


The potential of using BMI technologies in engineering design, manufacturing, and space exploration, among many other applications, is enormous. Imagine an engineer quickly acting on visual feedback about a design configuration, making adjustments by thought: the wing of a UAV is too thin; make it twice as thick, and carry out new aerodynamic and structural analyses. If the resulting weight becomes unacceptable, then merely thinking "undo" reverses the thickness change. On the factory floor, engineers could control robots by thought.
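
Whatever the input channel, the "undo" behavior described here maps naturally onto a command history in which each edit records how to reverse itself. A minimal sketch, with a hypothetical wing-thickness parameter standing in for the design model:

```python
# Minimal sketch of undoable design edits, as in the "make it twice as thick /
# undo" example above. The design model and parameter names are hypothetical.

class DesignModel:
    def __init__(self):
        self.params = {"wing_thickness_mm": 12.0}

class ScaleParameter:
    """An edit that remembers how to reverse itself."""
    def __init__(self, name, factor):
        self.name, self.factor = name, factor

    def apply(self, model):
        model.params[self.name] *= self.factor

    def revert(self, model):
        model.params[self.name] /= self.factor

class Editor:
    def __init__(self, model):
        self.model, self.history = model, []

    def do(self, edit):
        edit.apply(self.model)
        self.history.append(edit)
        # Here new aerodynamic and structural analyses would be launched
        # after each change, as in the scenario above.

    def undo(self):
        if self.history:
            self.history.pop().revert(self.model)

editor = Editor(DesignModel())
editor.do(ScaleParameter("wing_thickness_mm", 2.0))  # "make it twice as thick"
print(editor.model.params)                           # {'wing_thickness_mm': 24.0}
editor.undo()                                        # thinking "undo"
print(editor.model.params)                           # {'wing_thickness_mm': 12.0}
```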

In space exploration beyond Earth orbit, robotic systems could perform extravehicular activities teleoperated by astronauts using thought communication and control, uplinked through satellites. The loss of performance of the human sensory-motor system in reduced or zero gravity would be compensated for by assistive space robotics and BMI technologies.

Copyright © 2010 by ASME