
Game Changers

Systems that Can Learn, Reason, Even Find Mistakes in Data are Coming to the Aid of Professionals in an Ever-More-Complex World.

Author Notes

Ahmed K. Noor is Eminent Scholar and William E. Lobeck Professor of Modeling, Simulation and Visualization Engineering at Old Dominion University in Norfolk, Va.

Mechanical Engineering 136(09), 30-35 (Sep 01, 2014) (6 pages) Paper No: ME-14-SEP1; doi: 10.1115/1.2014-Sep-1

This article discusses the recent development in “cognitive computing” technology. Unlike expert systems of the past, which required inflexible hard-coded expert rules, cognitive computers interpret unstructured data (sensory information, images, voices, and numbers), navigate through vast amounts of information, learn by experience, and participate in dialogues with humans using natural language to solve extremely complex problems. The U.S. Defense Advanced Research Projects Agency is funding a program called SyNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics) to develop machine technology that will function like biological neural systems. IBM, Hughes Research Labs, and several universities are working on this program. The aim is to build an electronic system that matches a mammalian brain in function, size, and power consumption. It would recreate 10 billion neurons and 100 trillion synapses, consume one kilowatt (same as a small electric heater), and measure less than 2,000 cubic centimeters. Several other projects are also under way to apply cognitive technology to robotics, cars, and production systems.

Cognitive systems, which take on some of the activities of the human brain, promise a powerful new generation of engineering tools.

The quiz show Jeopardy always engaged a very human mind. Contestants benefit from encyclopedic knowledge, but they also need logic, intuition, and a bit of luck. Even grammar is involved. The game isn't question-and-answer: the player is given an answer and has to respond by phrasing an appropriate question.

For instance, a contestant may be told, “When Apple sued for iPad patent infringement, Samsung cited this 1968 movie as the originator of the design.”

The response is: “What is 2001: A Space Odyssey?”

Responding requires contributions from different parts of the brain. That was a challenge of a very human sort. But then, along came Watson.

IBM's supercomputer played against two Jeopardy champions in 2011 and won. Watson responded directly and precisely to natural language prompts with relevant, correct answers. It had access to 200 million pages of structured and unstructured information, consuming four terabytes of disk storage and including the full text of Wikipedia, but it was not connected to the Internet during the game.

It was like Captain Kirk's interaction with the computer on the Enterprise. Watson was science fiction brought to life.

Of course, IBM didn’t develop Watson to win game shows. The goal was to have computers start to interact in natural human terms across a range of applications.

The computer has since moved on to address bigger and more serious issues. For instance, it has a role in health care, helping physicians overwhelmed by the herculean task of both treating patients and keeping up with an exponentially expanding body of medical research.

The term we give to this kind of technology is “cognitive computing.” Unlike expert systems of the past, which required inflexible hard-coded expert rules, cognitive computers interpret unstructured data (sensory information, images, voices, and numbers), navigate through vast amounts of information, learn by experience, and participate in dialogues with humans using natural language to solve extremely complex problems.

They were hoping against hope that the humans would screw up. … They were right. Watson won handily.

From “The Obsolete Know-It-All,” a TED talk by Ken Jennings, who won more than 70 times on Jeopardy. Pictured, from left, are Jennings, a stand-in for Watson (which was in its climate-controlled room), and Brad Rutter, the show's top cash winner.


IBM isn’t alone in pursuing cognitive computing.

The U.S. Defense Advanced Research Projects Agency is funding a program called SyNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics) to develop machine technology that will function like biological neural systems. IBM, Hughes Research Labs, and several universities are working on this program. The aim is to build an electronic system that matches a mammalian brain in function, size, and power consumption. It would recreate 10 billion neurons and 100 trillion synapses, consume one kilowatt (same as a small electric heater), and measure less than 2,000 cubic centimeters.

The Human Brain Project of the European Union was initiated in 2013. It aims, among other things, to build information and computing technologies by mapping the inner workings of the brain and mimicking them in computing. As part of the project, researchers in Germany used a neuromorphic chip and software modeled on insects’ odor-processing systems to recognize plant species by their flowers.

Computing itself is only one element in the broader pursuit of cognitive engineering. “Cognitive” means that a system can perform some functions of human cognition. The system would have natural language processing capability, learn from experience, interact with humans in a natural way, and help make decisions.

Most of the work engineers do is cognitive in nature, so many see in cognitive systems very powerful tools to support engineers and other professionals.

In its health care role, for example, IBM Watson will help a New York-based genome research center in developing treatments for glioblastoma, the most common type of brain cancer in U.S. adults.

What makes Watson unique is that it isn’t programmed like most computers. Instead of relying on information put into it, Watson learns by reading vast amounts of information and combining it with the results of previous work to solve problems.

Researchers at the Rensselaer Polytechnic Institute are at work on cognitive technologies with similar abilities that can be applied to data streams, particularly those used to control and guide aircraft. They are funded by the Air Force Office of Scientific Research to develop “active data” technologies, known as smart analytics.

The technologies would enable otherwise passive data systems to search for patterns and relationships and to identify incorrect data generated by faulty sensors, or other hardware failures, such as those that contributed to the Air France 447 crash in June 2009. During that flight, important sensors failed, and reported erroneous airspeed data. But the autopilot didn’t know that and acted as if the data were correct.

The system uses mathematical and programming elements that search for patterns and relationships that indicate hardware failure. Active data has been incorporated into a software system called PILOTS (Programming Language for spatiO-Temporal data Streaming applications), which treats airspeed, ground speed, and wind speed as data streams. When those streams exhibit errors, the system can correct them automatically, so that the pilot receives accurate readings and can adjust accordingly.
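The cross-checking idea can be illustrated with a toy sketch. This is hypothetical code, not the actual PILOTS language: airspeed, ground speed, and wind speed are physically related, so a sample that violates the relation can be flagged and replaced with an estimate reconstructed from the redundant streams. A simplified scalar relation, ground speed = airspeed + tailwind, stands in here for the full vector equation.

```python
# Hypothetical sketch of redundant-stream cross-checking in the spirit
# of PILOTS (not the actual system). Speeds are in knots; the simplified
# relation ground_speed = air_speed + tail_wind is assumed per sample.

def check_airspeed(air_speed, ground_speed, tail_wind, tol=15.0):
    """Flag an airspeed sample as erroneous if it violates the kinematic
    relation by more than `tol`, and return a corrected estimate derived
    from the other two streams along with a fault flag."""
    residual = ground_speed - (air_speed + tail_wind)
    if abs(residual) > tol:
        # Sensor likely faulty: reconstruct airspeed from redundant data.
        return ground_speed - tail_wind, True
    return air_speed, False

# A frozen pitot tube reports 80 kt while ground speed and wind imply 250 kt.
corrected, faulty = check_airspeed(air_speed=80.0, ground_speed=260.0, tail_wind=10.0)
```

In a real system the tolerance would be set from sensor noise models, and the correction would be smoothed over time rather than substituted sample by sample.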

In addition to its benefits in making flight systems safer, smart analytics could also be helpful in other applications that rely on sensors, such as health care. Analyzing the patterns of data collected from sensors on the human body could detect early signs of seizures or heart attacks.

Projects are also under way to apply cognitive technology to robotics, cars, and production systems.

As interaction of robots with humans increases, so does the demand for sophisticated robotic capabilities associated with deliberation and high-level cognitive functions. Future robots will be endowed with higher-level cognitive functions to interact with humans, reason, and perceive so they can function in unpredictable environments.

Cognitive robots have the processing architecture to reason how to accomplish complex goals.

An early-generation cognitive robot is the human-like Myon, developed by the Neurorobotics Research Lab of Humboldt University in Germany. All of Myon's body parts can be removed and remounted, and each retains its separate functionality because each has its own energy supply and computational ability. The neural network is distributed over the decentralized robot.

Myon is learning to respond to human emotion. The humanoid robot will be playing a lead role in “My Square Lady” in Berlin's Komische Oper in the 2014-15 season, as an experiment.

A European consortium, led by the University of Graz and including both biological and technical institutions, is creating a swarm of cognitive, autonomous underwater robots. The goal of the project is to develop robotic vehicles that can interact with each other and cooperate in tasks. They could be used for ecological monitoring, or for searching, maintaining, and even harvesting in underwater environments.

The swarm will need the robustness and stability to function under dynamically changing conditions. The vehicles will interact with each other and exchange information, resulting in a cognitive system that is aware of its environment, of local individual goals and threats, and of global swarm-level goals and threats.

As natural fish swarms show, such mechanisms are flexible and scalable. Cognition-generating algorithms can allow robots in the swarm to mimic one another's behavior and to learn from each other appropriate reactions to environmental changes.

The plan includes investigating the emergence of artificial collective pre-consciousness, which leads to self-identification and further improvement of collective performance. In this way, several general principles of swarm-level cognition will be explored to assess their importance in real-world applications. These studies complement earlier studies led by Naomi Leonard at Princeton University.

Watson is ushering in a new era of computing: the cognitive computing era. In addition to the programmatic computing of the current era of programmable systems, Watson combines three capabilities that make it unique: it can understand and process natural language; it can generate and evaluate hypotheses to provide evidence-based responses with a confidence level; and it can be trained, that is, it has dynamic learning capability.

Watson communicates more like a human in both query and reply, and it uses probability to reason out the best answers with speed and precision. It can process 500 gigabytes, the equivalent of a million books, per second. Its sources of information include encyclopedias, dictionaries, thesauri, newswire articles, and literary works, as well as databases, taxonomies, and ontologies.
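The hypothesis-evaluation step can be caricatured in a few lines. This is an illustrative sketch, not IBM's actual DeepQA pipeline: each candidate answer accumulates scores from independent evidence passes, the totals are normalized into confidences, and the system responds only when its best confidence clears a threshold.

```python
import math

# Illustrative sketch of evidence-based answer ranking with a confidence
# threshold (hypothetical code, not IBM's actual pipeline).

def rank_hypotheses(evidence_scores, threshold=0.5):
    """evidence_scores maps each candidate answer to a list of scores from
    independent evidence passes. Returns (best_answer, confidence), or
    None when no candidate is trusted enough to answer."""
    totals = {ans: sum(scores) for ans, scores in evidence_scores.items()}
    # Softmax-normalize total scores into confidences that sum to 1.
    z = sum(math.exp(t) for t in totals.values())
    confidences = {ans: math.exp(t) / z for ans, t in totals.items()}
    best = max(confidences, key=confidences.get)
    return (best, confidences[best]) if confidences[best] >= threshold else None

candidates = {
    "2001: A Space Odyssey": [2.1, 1.8, 2.4],
    "Blade Runner": [0.4, 0.2, 0.1],
}
# rank_hypotheses(candidates) returns the first title with high confidence.
```

Withholding an answer when confidence falls below the threshold mirrors how Watson declined to buzz in on clues it was unsure about.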

Watson's hardware integrates massively parallel POWER7 processors. It is composed of a cluster of 90 IBM Power 750 servers, each of which uses a 3.5 GHz POWER7 eight-core processor, with four threads per core. In total, the system has 2,880 POWER7 processor cores and has 16 terabytes of RAM.

Watson, above, has many connections. Developers, for instance, access it through the cloud, and soon classes of university students will do so as well.

In November 2013, IBM announced the Watson Ecosystem, a complete environment for fostering new business opportunities and driving innovation. The environment supports a community of interconnected partners, including cross-industry application developers and content providers, who are given access to the Watson platform, technologies, and tools through the Watson developer cloud and are supported by a collaborative community of IBM technical, marketing, and sales resources. Business partners use the developer cloud to build and test their applications.

An Unstructured Information Management Architecture (UIMA) framework to support distributed computing is also being developed.

In September 2014, IBM will launch the Cognitive Systems Institute to advance the development and deployment of cognitive computing systems. Like IBM's Watson, these systems can learn, reason, and help human experts make complex decisions involving extraordinary volumes of data. The institute will comprise universities, research organizations, and IBM clients.

Also, IBM is partnering with a number of universities to launch new cognitive computing courses, which provide students with access to Watson via the cloud. The courses will include building ideas for cognitive innovations, creating cognitive apps, and developing entrepreneurial know-how.

Google's self-driving car eliminates steering wheel, accelerator, and brake pedals

The results can be exploited for improving the robustness, flexibility, and efficiency of other technical applications in the field of information and computing technology.

In June 2014, the European Commission, along with 180 companies and research organizations (under the umbrella of euRobotics), launched the world's largest civilian research and innovation program in robotics. Covering manufacturing, agriculture, health, transport, civil security, and households, the initiative is called SPARC and aims at developing technologies including smart industrial robots, autonomous cars, and drones.

Cognitive cars are equipped with integrated sensors, cameras, GPS navigation systems, and radar devices that provide coordinates and road information to other cars equipped with the same car-to-car communication system. The new technologies are intended to serve and protect drivers and passengers, and ultimately to render human drivers superfluous.

The advanced technologies that make cognitive and self-driving cars possible have been filtering into commercial products at a fast rate.

In April 2014, Google's self-driving cars surpassed 700,000 autonomous accident-free miles, aided by improvements to software that can detect hundreds of distinct objects simultaneously.

The self-driving cars can ingest massive amounts of data in a very short time, explore multiple scenarios, and run simulations to ensure that their decisions are as safe as possible. The cars pay attention to pedestrians, buses, stop signs, and cyclists making gestures that indicate a possible turn, in a way that a human driver cannot, and they never get tired or distracted.

Google has unveiled a fully autonomous, two-seat electric car prototype without steering wheel, accelerator, or brake pedal. The cars can go up to 25 mph. Google is building about 100 prototypes of this sort and plans to conduct initial tests in versions that retain the manual controls.

Meanwhile, a new autopilot tool, the Cruise RP-1, which enables hands-free driving on highways, has been advertised and is slated for use in California starting in 2015. It can be fitted to nearly any vehicle. It includes two cameras, a radar mechanism, GPS, inertial sensors, and an on-board computer, as well as actuators that control the car's steering, acceleration, and braking. Using this software/hardware combination, the RP-1 constantly scans the road to keep the car operating within safe parameters in relation to other cars and the boundaries of the driving environment.

Myon—shown shaking the hand of Jan-Hendrik Olbertz, president of Humboldt University— can respond to human emotions.


The cars can deal with changing environments and some level of dynamic uncertainty. However, it is impossible to plan ahead for every single scenario that a fully autonomous car might have to handle. Therefore, one of the key requirements of autonomous cars is to have human-like cognitive capabilities (being able to learn and make decisions on the fly).

A cognitive system with implications for factory production is being developed by researchers at King's College London: a cognitive robotic hand with a vision system.

The robotic hand uses a Kinect depth-sensing camera to analyze a 3-D object, builds a 3-D computer model of it, and determines how the robotic hand can securely grasp it.
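A heavily simplified sketch of that grasp-selection step follows. This is hypothetical code; the actual King's College system builds a full 3-D model and is far more sophisticated. Here, given 3-D points from a depth camera, the code measures the object's extent and grasps across its narrowest horizontal axis, provided the object fits within the gripper's opening.

```python
# Hypothetical, heavily simplified grasp selection from a depth-camera
# point cloud (illustrative only). Points are (x, y, z) in meters.

def choose_grasp(points, max_opening=0.09):
    """Pick the narrower of the two horizontal extents as the grasp axis.
    Returns a grasp description, or None if the object is too wide."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    width_x = max(xs) - min(xs)
    width_y = max(ys) - min(ys)
    axis, width = ("x", width_x) if width_x <= width_y else ("y", width_y)
    if width > max_opening:
        return None  # object is wider than the gripper can open
    center = (sum(xs) / len(xs), sum(ys) / len(ys))
    return {"axis": axis, "width": width, "center": center}

# A box 4 cm wide (x) and 12 cm long (y): grasp across the x axis.
cloud = [(0.00, 0.00, 0.0), (0.04, 0.12, 0.0),
         (0.04, 0.00, 0.05), (0.00, 0.12, 0.05)]
```

A real system would also reason about surface normals, friction, and occlusion, but the core decision, where to close the fingers so the grasp is secure, has this shape.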

That kind of autonomy and flexibility gives rise to the concept of cognitive factories. Possible systems include autonomous machining, automated programming of industrial robots, human-robot cooperation, knowledge-based quality assurance, and process control.

As part of the SkillPro project funded by the European Commission, researchers at the Karlsruhe Institute of Technology in Germany are developing cognitive tools for smart reconfigurable manufacturing systems and mass customization. The project considers a modern production system as a combination and collaboration of cyber-physical assets that offer different skills.

Today's computers use the so-called von Neumann architecture. They shuttle data back and forth between a central processor and memory chips in linear sequences of calculations. While that method is appropriate for crunching numbers and executing precisely written programs, it is not efficient for processing images, or for detecting and predicting patterns. By contrast, neuromorphic chips process sensory data, such as images and sound, and respond to changes in data in ways not specifically programmed. They attempt to model in silicon the massively parallel way the brain processes information as billions of neurons and trillions of synapses respond to sensory stimuli. The neurons also change how they connect with each other in response to changing images, sounds, or patterns. Neuromorphic chips have the potential of overcoming the physical limitations and considerably reducing the power requirements of the traditional von Neumann processors.
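The behavior of a single silicon neuron can be illustrated with the classic leaky integrate-and-fire model, a textbook abstraction rather than any vendor's actual circuit: the membrane potential integrates incoming current, leaks between timesteps, and emits a spike when it crosses a threshold.

```python
# A minimal leaky integrate-and-fire neuron, the basic computational unit
# that neuromorphic chips implement in silicon (illustrative model only).

def simulate_lif(inputs, leak=0.9, threshold=1.0):
    """Integrate input current each timestep, leak a fraction of the
    membrane potential, and emit a spike (1) when threshold is crossed,
    resetting the potential afterward."""
    v = 0.0
    spikes = []
    for current in inputs:
        v = leak * v + current      # leak, then integrate this timestep
        if v >= threshold:
            spikes.append(1)
            v = 0.0                 # reset after spiking
        else:
            spikes.append(0)
    return spikes

# Three moderate inputs accumulate until the neuron fires on the third.
train_of_spikes = simulate_lif([0.5, 0.5, 0.5, 0.0, 0.5])  # [0, 0, 1, 0, 0]
```

Because such neurons respond only when events accumulate, large networks of them sit mostly idle, which is why neuromorphic hardware can process sensory streams at a fraction of the power a von Neumann processor would need.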

Qualcomm demonstrated a small robot powered by a neuromorphic chip and specialized software that simulate the activity in the brain. Simply telling the robot when it has arrived in the right spot allows it to figure out how to get there later without any complex set of commands. The company plans to partner with researchers and startups, offering them a platform to create neural architectures very quickly with Qualcomm's tools.
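The reward-driven behavior described above resembles tabular reinforcement learning. Here is a minimal sketch, hypothetical code unrelated to Qualcomm's actual platform: a robot on a short track is told only when it reaches the right spot, and from that single form of feedback it learns action values that later guide it there without explicit commands.

```python
import random

# Illustrative reward-driven learning (hypothetical; not Qualcomm's
# platform): a robot on a 1-D track of cells learns, from a "you're in
# the right spot" signal alone, which way to move from each cell.

def train(goal, cells=5, episodes=500, alpha=0.5, gamma=0.9, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(cells)]   # actions: 0 = left, 1 = right
    for _ in range(episodes):
        s = rng.randrange(cells)
        for _ in range(20):
            if rng.random() < 0.2:           # occasionally explore
                a = rng.randrange(2)
            else:                            # otherwise act greedily
                a = max((0, 1), key=lambda x: q[s][x])
            s2 = max(0, min(cells - 1, s + (1 if a == 1 else -1)))
            r = 1.0 if s2 == goal else 0.0   # the only feedback given
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
            if r:
                break
    return q

def policy_step(q, s):
    """Move one cell in the learned best direction from cell s."""
    a = 1 if q[s][1] >= q[s][0] else 0
    return max(0, min(len(q) - 1, s + (1 if a == 1 else -1)))
```

After training with the goal at the rightmost cell, the learned values favor moving right from the neighboring cells, so the robot can "figure out how to get there later" from the reward signal alone.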

Potential applications include computers that draw on wind patterns, tides, and other indicators to predict severe storms more accurately, or glasses for the blind that use visual and auditory cues to recognize objects.

Jennifer Hasler holds a board, above, with bio-based neuron structures; right, a neuromorphic chip from Heidelberg. Photo: Heidelberg University, Germany


Once one order is completed, manufacturing a newly ordered product usually requires modifying the production process. When manufacturing small series, the preparation, setup, and programming of the machinery often take much longer than the manufacturing itself. Machines equipped with cognitive capabilities and communicating with each other are expected to significantly reduce the changeover time.

A machine equipped with cognitive tools and camera sensors, for instance, can recognize any workpiece even when products change. Having examined the workpiece's shape and position, the machine can decide how to apply its gripper or suction cups and where to place the workpiece. Depending on the product, machines with gripping, welding, or bonding skills can determine their next task or production step.

They communicate with neighboring machines and know whether they have to ask for a mobile robot to transport the product to the next workstation or the shipping department of the company.

Cognitive computing and cognitive technologies can be major game changers for engineering systems, practice, and training. The confluence of cognitive systems with such technologies as cloud, mobile, wearable devices, Internet of Things, Big Data, and visual analytics will amplify their impact.

Future cognitive engineering systems will be designed to handle various tasks in a flexible manner and adapt to the user's needs. They should also be reasonably easy to instruct and affordable. They will incorporate a variety of sensors, interacting reasoning modules, and actuators.

With the use of cognitive systems, engineers will be able to perform highly sophisticated search within a dynamic domain, find relevant information and patterns, see the bigger picture outside their immediate expertise, and harvest insight from data that is constantly being updated. The result will let engineers explore large numbers of alternative designs and make better decisions in large multidisciplinary projects. All this promises to reduce both the time and cost of the development process.

The aim is to create a partnership in which cognitive devices and facilities support the thinking of the human brain. Such a partnership will think better than any human brain by itself, and will process data better than current information-handling machines.

Such interactions can amplify human capabilities and help engineers in creating more innovative products in powerful new ways.

For more information on cognitive computing and cognitive engineering systems, go to: www.aee.odu.edu/cognitivecomp. The website, created as a companion to this Mechanical Engineering magazine feature, contains links to material on cognitive computing, cognitive technologies and systems, current activities, educational programs, and research projects.

Copyright © 2014 by ASME