A New Breed of Robots that Drive Themselves

Thanks to advances in control systems, vehicles can carry out tasks without human guidance.

Author Notes

Dawn Tilbury is a professor of mechanical engineering at the University of Michigan in Ann Arbor and director of the Ground Robotics Reliability Center.

Galip Ulsoy, an ASME Fellow, is the C.D. Mote Jr. Distinguished University Professor of Mechanical Engineering and the William Clay Ford Professor of Manufacturing at the University of Michigan, Ann Arbor. Ulsoy is also past editor of ASME's Journal of Dynamic Systems, Measurement, and Control.

Mechanical Engineering 133(02), 28-33 (Feb 01, 2011) (5 pages) doi:10.1115/1.2011-FEB-2

This article describes advances in unmanned ground vehicle (UGV) technology. UGVs are robotic ground vehicles capable of operating in a variety of environments and functioning in place of humans. They are being used for a variety of commercial and military purposes, and they can go places where humans cannot, such as hundreds of meters down oil well pipes. These kinds of robots are on the brink of becoming an integral part of the everyday world. To truly become revolutionary, however, UGVs need a capability beyond locomotion, a suite of sensors, and a manipulator arm: they need to be able to navigate new environments without the guidance of a human operator. Several research projects aim to improve UGVs and make the technology ready to take on a larger role in the economy. These projects include such concepts as adjustable autonomy, enhanced reliability through design optimization, control reconfiguration, and augmented reality user interfaces.

The unmanned ground vehicle is a new name for a fairly old idea. Most of us have played with radio-controlled toy cars, for instance, or at the other extreme of complexity, have seen the images relayed from Spirit and Opportunity, the rovers that wandered the surface of Mars. But until relatively recently, fully functioning UGVs capable of carrying out complex operations were mostly confined to hobbyists’ garages and university research labs.

Today, however, unmanned ground vehicles—robotic ground vehicles capable of operating in a variety of environments and functioning in place of humans—are being used for a variety of commercial and military purposes. UGVs go places where humans cannot, such as hundreds of meters down oil well pipes, and were used extensively to help rescue workers search the rubble of the World Trade Center. At the presidential inauguration in 2009, UGVs were driven underneath buses to check for explosives.

Unmanned ground vehicles of various kinds are used to do a variety of dirty, dull, and dangerous tasks, such as vacuuming a room, harvesting crops, or carrying heavy loads for long distances.

UGVs have also played a larger and more important role in the United States armed forces in recent years, with more than 8,000 currently deployed by the U.S. Army, compared to only a few hundred less than ten years ago. In Iraq and Afghanistan, for instance, UGVs are used to inspect and disarm suspected improvised explosive devices and to search caves and buildings. Hundreds of soldiers are without doubt alive today because a UGV found or detonated an IED before it could be set off near a person. UGVs are also used for many other tasks, from surveillance to carrying loads for the soldier. In every one of these examples, UGVs were used to accomplish something that would have been extremely inconvenient, dangerous, or even impossible for a human to do.

These kinds of robots are on the brink of becoming an integral part of the everyday world. But to truly become revolutionary, UGVs need a capability beyond locomotion, a suite of sensors, and a manipulator arm. They need to be able to navigate new environments without the guidance of a human operator.

The groundwork for that advance is being laid at the University of Michigan's Ground Robotics Reliability Center and other research labs across the world. The day will soon dawn when fully autonomous UGVs are ready to roll out of the labs and into the mainstream of industrial activity.

The unmanned ground vehicle is not the first technology to move from the domain of hobbyists and academics to the industrial world. The UGV industry is in a situation similar to the automotive industry of a century ago, or as Bill Gates, co-founder of Microsoft, has noted, the personal computer industry of a few decades ago. But as the technology is called upon to meet important societal needs, the culture of the builders of UGVs has to move away from that of a hobby or intellectual pursuit.

Industrial robots have been around for decades, efficiently performing tasks such as spray painting, welding, and assembly in manufacturing plants, and they continue to grow in numbers. However, a new generation of robots, as exemplified by UGVs and quite different from their industrial robot cousins, is now emerging.

Industrial robots are pre-programmed to repetitively perform routine tasks with precision and reliability. They operate in a well-known and structured environment, and have little or no interaction with humans once they are programmed and turned on.

The OmniTread Search and Rescue Robot has been designed to worm its way through debris to look for trapped people.

The PackBot has been deployed to Iraq and Afghanistan to assist in explosive identification and disposal.


UGVs, on the other hand, perform a variety of tasks and encounter many diverse operating environments, from carpeted floors and stairs in buildings, to hot and sandy conditions in the deserts of Iraq. Their interactions with human operators, or with human or robot partners, can be quite varied and complex.

As a consequence, UGVs currently suffer from a number of reliability issues and break down frequently: the industry is young and built on many new technologies, interaction between the vehicles and their human operators remains difficult, and the operating environments are uncertain.

For instance, while cars are typically designed for a 100,000-mile life, current UGV design goals are a mean time between failure of 100 hours, and most actually achieve MTBFs that are far shorter. Recent studies of search-and-rescue UGVs by Jennifer Carlson and Robin Murphy of the University of South Florida in Tampa, and others, show that the mean time between failures is typically between 6 and 20 hours. (We have heard similar reports in personal communications during a visit to the U.S. Army Joint Robotics Repair and Fielding facility at Selfridge Air National Guard Base, near Detroit.)

In some ways, both the industry and the reliable operation of UGVs are at a level similar to that of personal computers in the 1970s or of automobiles at the start of the 20th century. The hope is that, as the technology matures, the operation of UGVs will become as reliable as today's computers and automobiles.

Another factor constraining the usefulness of unmanned ground vehicles is their ease of use. At present, most UGVs are remote controlled, or tele-operated, and this significantly limits their capabilities and the missions they can successfully undertake. Their effective use requires the dedicated attention of skilled operators. Typically, joystick-type interfaces are used by trained operators to move the platform, as well as to separately control the motion of any manipulator arms or sensors. For some demanding tasks, multiple operators may be needed to control a single UGV, and in combat operations, extra personnel may be needed to guard a robot operator whose attention is focused on the small screen. Operators may only have feedback about the mission from limited sensors on board the UGVs, or through line-of-sight visual contact.

Such restrictions limit the complexity of the operations that can be undertaken, and also limit the distances over which the UGVs can be operated.

To reach their full potential, UGVs will have to become increasingly autonomous, moving gradually from tele-operation to limited supervised autonomy to eventual fully autonomous operation. To spur this development, DARPA has sponsored the Grand Challenge and Urban Challenge competitions. Teams of autonomous ground robots demonstrated their reconnaissance and surveillance abilities in the MAGIC competition, co-sponsored by the U.S. and Australian defense departments.

A May 2010 report by Werner J.A. Dahm, the chief scientist of the U.S. Air Force, emphasized the importance of autonomy. “Increased use of autonomy—not only in the number of systems and processes to which autonomous control and reasoning can be applied but especially in the degree of autonomy that is reflected in these—can provide … potentially enormous increases in its capabilities,” Dahm wrote, “and if implemented correctly can do so in ways that enable manpower efficiencies and cost reductions.”

The OmniTread moves in a serpentine fashion. The treads along each of its sides enable it to take advantage of uneven or constricted environments—such as debris fields or pipes—and its joints are flexible enough to allow the robot to climb over small obstacles and descend stairs.


Although increasing numbers of unmanned ground vehicles are being deployed every day, there remain many important research questions to be investigated. One important research area is situational awareness.

As UGVs become entrusted to carry out tasks autonomously, either independently or as part of a mixed team of humans and robots, they must be enabled to make critical tactical decisions instead of only following preprogrammed actions or relying on tele-operation. UGVs must be able to operate in close proximity to humans, safely, even at high speeds.

Better sensors—and sensor processing—can improve situational awareness for both the robot and its operator. Since UGVs must operate in challenging and unknown environments, improving mobility is important; mission durations are currently limited by battery life. Finally, both design and manufacturing of UGVs must improve to achieve the reliability required for future generations.

Many universities and other research organizations around the world are engaged in research related to UGVs. For example, the University of Michigan's Ground Robotics Reliability Center, funded in part by the Tank Automotive Research, Development, and Engineering Center at the Department of Defense, has grouped the research needs described above under the umbrella of reliable operations, and is working on a number of research projects to address these needs. Some of those projects are described below.

One priority is to develop techniques that can dynamically extend the behavior of unmanned robotic systems while ensuring the reliability of the new and modified behaviors. Such techniques will increase the level of autonomy of UGVs.

The approach focuses on interactive instruction, in which a human guides the robot in new missions, tactics, and skills, and verifies the correctness of the robot's behavior during performance or rehearsal. Because the human gives instructions while the robot is performing a task, the instruction is grounded in the real world, thereby eliminating many forms of ambiguity.

The approach is interactive. The robot can ask for help when it needs it, and the human can correct any errors that are noticed. Trust is built and established through confirmation and explanation: the robot can explain what it is doing in a certain situation.

We may see a day when UGVs are assisting—or even replacing—soldiers in the battlefield and first responders at fires and other disaster sites.

John Laird and his coworkers at the center are implementing this research using the Soar cognitive architecture, an open source infrastructure for decision-making, planning, and complex doctrine execution.
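The flow of such an interactive instruction session can be pictured with a short sketch. The names below (InstructableRobot, rehearse, console_confirm) are invented for this illustration; this is not the center's Soar-based implementation.

```python
# Minimal sketch of an interactive-instruction loop. All names are invented for
# this illustration; this is not the center's Soar-based implementation.

class InstructableRobot:
    def __init__(self):
        self.skills = {}   # skill name -> ordered list of primitive steps

    def teach(self, name, steps):
        """Store a new skill described by the human instructor."""
        self.skills[name] = list(steps)

    def rehearse(self, name, confirm):
        """Execute a skill step by step, asking the human to verify each step.

        `confirm` returns True to accept a step or a corrected step to replace it,
        so the instruction stays grounded in what the robot is actually doing.
        """
        verified = []
        for step in self.skills[name]:
            result = confirm(step)                    # robot explains, human verifies
            verified.append(step if result is True else result)
        self.skills[name] = verified                  # corrections update the skill
        return verified

def console_confirm(step):
    """Human reviews each explained step; pressing Enter accepts it."""
    answer = input(f"Robot: about to '{step}'. OK? (Enter to accept, or type a correction) ")
    return True if answer == "" else answer

if __name__ == "__main__":
    robot = InstructableRobot()
    robot.teach("clear_doorway", ["approach door", "scan for obstacles", "push door open"])
    robot.rehearse("clear_doorway", console_confirm)
```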

As with other mobile machinery, energy storage capacity and power consumption rates are critical factors for reliable operation of unmanned ground vehicles. UGVs that are now available have mission durations of just over an hour; our goal is to increase that to 8 to 10 hours. Huei Peng and his coworkers at the center are considering how to provide energy to UGVs requiring total power levels from several hundred watts to several kilowatts—and to have the energy pack fit inside a backpack.

In addition the center is developing a systematic process for the design and control of energy systems for ground robots to improve the reliability of their operations.
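As a rough illustration of the sizing problem Peng's group faces, the achievable mission duration follows directly from pack energy and average power draw. The figures below are assumptions chosen for illustration, not the center's specifications.

```python
# Back-of-the-envelope mission-duration estimate; all figures are illustrative
# assumptions, not GRRC specifications.

def mission_hours(pack_energy_wh, avg_power_w):
    """Hours of operation = stored energy (watt-hours) / average draw (watts)."""
    return pack_energy_wh / avg_power_w

# A backpack-sized lithium-ion pack might hold on the order of 1 to 2 kWh.
pack_wh = 1500.0

for power_w in (200.0, 500.0, 2000.0):  # several hundred watts to a few kilowatts
    print(f"{power_w:6.0f} W draw -> {mission_hours(pack_wh, power_w):5.2f} h mission")
```

At kilowatt-level draws, even a large pack falls well short of an 8-to-10-hour mission, which is one reason energy storage and power management are being addressed together.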

In order to operate reliably in diverse environments, UGVs must be able to sense and interpret their surroundings. Ed Olson and his coworkers at the center are developing a robotic system that is capable of autonomously exploring a previously unknown area and producing an inventory of that space. By inventory, we mean that the robot will build a metrically accurate map that is annotated with other tactically important information.

For instance, the robot will report the location and activities of humans within that space, along with the location of other features of interest. The identification of these objects could eventually be combined with the exploration strategy: tracked humans could be pursued (in order to aid apprehension of suspects, for example) or avoided (to maximize the stealth of the system).
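One simple way to picture such an inventory is a metric map whose cells carry semantic annotations. The sketch below is a hypothetical data structure assuming a two-dimensional occupancy grid; it is not the system being developed by Olson's group.

```python
# Hypothetical annotated occupancy grid illustrating the idea of a map
# "inventory": metric geometry plus tactically relevant labels.

from dataclasses import dataclass, field

@dataclass
class Annotation:
    label: str   # e.g., "person", "doorway", "vehicle"
    x: float     # metric coordinates in the map frame (meters)
    y: float
    note: str = ""   # e.g., "moving east", "door closed"

@dataclass
class InventoryMap:
    resolution: float                                 # meters per grid cell
    occupancy: dict = field(default_factory=dict)     # (i, j) -> probability occupied
    annotations: list = field(default_factory=list)   # semantic layer on top of geometry

    def mark_occupied(self, x, y, prob=1.0):
        cell = (int(x / self.resolution), int(y / self.resolution))
        self.occupancy[cell] = prob

    def annotate(self, label, x, y, note=""):
        self.annotations.append(Annotation(label, x, y, note))

# Usage: record a wall segment seen by LIDAR and a tracked person.
m = InventoryMap(resolution=0.1)
m.mark_occupied(2.3, 4.1)
m.annotate("person", 5.0, 1.2, "walking toward the exit")
```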

Both video and range-finding sensors such as LIDAR have been used with some success in map-building. In our work, we are combining the best attributes of both types of sensors in order to improve the feature recognition and understanding of three-dimensional space.

LIDAR has been recognized as being good for capturing geometry but providing poor appearance data; cameras, on the other hand, give rich appearance data but poor geometry. We use both sensors, rigidly attached to each other to facilitate coordinate transformations. By combining the data from the two sensors, we have obtained improved object segmentation. These capabilities have been deployed in a team of robots that recently won the 2010 MAGIC competition.
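The key step in fusing the two sensors is mapping LIDAR points into the camera image through the fixed transform between them. The sketch below shows that projection for an assumed pinhole camera model; the calibration numbers are placeholders, not those of the actual rig.

```python
# Sketch of projecting LIDAR points into a camera image using a fixed extrinsic
# transform and a pinhole intrinsic model. The calibration values are placeholders.

import numpy as np

# Extrinsics: rotation R and translation t taking LIDAR-frame points into the camera frame.
R = np.eye(3)                      # axes assumed aligned for this illustration
t = np.array([0.0, -0.10, 0.05])   # e.g., camera 10 cm above and 5 cm ahead of the LIDAR

# Intrinsics: focal lengths and principal point, in pixels.
fx, fy, cx, cy = 600.0, 600.0, 320.0, 240.0

def project(points_lidar):
    """Return (u, v) pixel coordinates for each 3-D point that lies in front of the camera."""
    pts_cam = points_lidar @ R.T + t       # rigid transform into the camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]   # keep points with positive depth
    u = fx * pts_cam[:, 0] / pts_cam[:, 2] + cx
    v = fy * pts_cam[:, 1] / pts_cam[:, 2] + cy
    return np.stack([u, v], axis=1)

# Each projected point can be tagged with the image color and texture at (u, v),
# giving the LIDAR geometry an appearance channel for segmentation.
pixels = project(np.array([[0.5, 0.2, 4.0], [-0.3, 0.1, 6.0]]))
```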

Humans and other animals are adept at avoiding moving obstacles, even if we don’t have an absolute fix on their locations. This is still a hurdle for UGVs and other mobile robots. Solving this problem is the focus of a project by Galip Ulsoy and his coworkers at the center.

Our approach begins with two basic assumptions: the sensor data is uncertain and the environment is unknown. The robot's velocity space is decomposed into a grid of individual cells, each corresponding to a possible velocity of the robot. For each moving obstacle, the software calculates a velocity obstacle, the set of robot velocities that would lead to a collision with that obstacle. These data are then used to create a map of the velocity occupancy space, which combines the velocity obstacles (including the sensor noise) and the position of the robot's eventual goal in a weighted average for each velocity grid cell.

Different types of obstacles, such as pedestrians or cars, can be given different weights. At each time step, the robot selects the attainable velocity with the best combined score. In simulations, we tuned the weights by hand for the positive (attractive) potential of the goal and the negative (repulsive) potential of the obstacles and obtained good results. We are currently integrating sensors onto a UGV to validate the results experimentally.
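A greatly simplified sketch of this selection step is shown below, assuming two-dimensional velocities, circular robots and obstacles, and hand-picked weights; none of these are the project's actual parameters.

```python
# Simplified velocity-occupancy sketch: score each candidate robot velocity by an
# attractive term toward the goal and a repulsive term for velocities that lead
# toward a moving obstacle. All geometry and weights are illustrative.

import numpy as np

def collision_risk(v, obs_pos, obs_vel, robot_pos, radius, horizon=3.0):
    """Rough velocity-obstacle test: does the relative motion bring the robot
    within `radius` of the obstacle inside the planning horizon?"""
    rel_vel = v - obs_vel
    rel_pos = obs_pos - robot_pos
    for t in np.linspace(0.0, horizon, 30):
        if np.linalg.norm(rel_pos - rel_vel * t) < radius:
            return 1.0
    return 0.0

def choose_velocity(robot_pos, goal, obstacles, v_max=1.0, w_goal=1.0, w_obs=5.0):
    """Pick the grid velocity with the best weighted score (attractive minus repulsive)."""
    to_goal = (goal - robot_pos) / np.linalg.norm(goal - robot_pos)
    best_v, best_score = None, -np.inf
    for vx in np.linspace(-v_max, v_max, 21):         # velocity grid cells
        for vy in np.linspace(-v_max, v_max, 21):
            v = np.array([vx, vy])
            progress = v @ to_goal                    # attractive potential of the goal
            risk = sum(collision_risk(v, p, u, robot_pos, r) for p, u, r in obstacles)
            score = w_goal * progress - w_obs * risk  # repulsive potential of the obstacles
            if score > best_score:
                best_v, best_score = v, score
    return best_v

# One pedestrian crossing from the right; pick a velocity toward the goal that avoids it.
obstacles = [(np.array([2.0, 1.0]), np.array([-0.5, 0.0]), 0.6)]
v = choose_velocity(np.array([0.0, 0.0]), np.array([5.0, 0.0]), obstacles)
```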

It can be difficult for an operator who is remotely controlling a UGV that is exploring a building to keep track of its exact location. Global Positioning System signals are not generally available inside buildings, and the limited information available from onboard sensors can be confusing. Even in a familiar building, operators often get disoriented, and look for landmarks to figure out where the robot is.

To improve the reliability of locating UGVs in indoor environments, Johann Borenstein and his coworkers in the center are developing an extremely accurate position tracking system for small, tele-operated robots. The Indoor Position Tracking system produces accurate real-time trajectories of the robot on the operator's screen, providing accurate position and heading information. In addition, if the UGV becomes incapacitated or if communication breaks down, the last known position of the robot is immediately evident from the trajectory plot, enabling the quick extraction of the robot.

The IPT system is based on low-cost gyroscopes and inertial measurement units, which are used for dead reckoning. Traditionally, the drift inherent in such sensors has rendered the measurements useless after a short time. However, we have developed a heuristic method for drift elimination. Applications of this and related technology include leader-follower scenarios and methods to assist operators of tele-operated UGVs with their navigation. These are being demonstrated at the Robotics Rodeo in Fort Benning, Ga.
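The basic dead-reckoning computation, together with one simple drift heuristic of the kind such methods build on (re-estimating the gyro bias whenever the robot is known to be stationary), can be sketched as follows. The details are illustrative assumptions, not the IPT system's actual algorithm.

```python
# Sketch of gyro-and-odometry dead reckoning with a simple drift heuristic:
# the gyro bias is re-estimated whenever the robot is known to be stationary.
# Illustrative only; this is not the IPT system's actual algorithm.

import math

class DeadReckoner:
    def __init__(self):
        self.x = self.y = self.heading = 0.0
        self.gyro_bias = 0.0

    def update(self, gyro_rate, wheel_speed, dt, stationary=False):
        """Integrate heading from the gyro (rad/s) and position from wheel speed (m/s)."""
        if stationary:
            # Any yaw-rate reading while not moving is pure drift; fold it into the bias.
            self.gyro_bias = 0.9 * self.gyro_bias + 0.1 * gyro_rate
            return
        self.heading += (gyro_rate - self.gyro_bias) * dt
        self.x += wheel_speed * math.cos(self.heading) * dt
        self.y += wheel_speed * math.sin(self.heading) * dt

# Feeding the estimator a stream of IMU and odometry samples yields the trajectory
# drawn on the operator's screen.
```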

The Ground Robotics Reliability Center has also developed a robotics testbed that includes a PackBot and several other small robots. The robots are interfaced with a variety of different sensors, and the use of a standard interface called the University Research Controller will enable the different research projects to test and validate their results using a common interface, regardless of hardware and software differences in the UGVs.

Image data is not enough to enable robots to navigate a complex environment. Sharply contrasting objects can be detected, but similarly colored objects might be merged (a,b). LIDAR can merge objects without much depth (c). By combining LIDAR and visual data we obtain a more complete picture (d).


The URC not only enables researchers to implement the new results of their research on a variety of platforms, but also protects the proprietary aspects of commercial UGV platforms in the testbed, thus supporting technology transfer and commercialization of the research.
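The idea of a common controller interface over dissimilar platforms can be pictured with a small adapter sketch. The names below (RobotPlatform, PackBotAdapter, and the vendor calls) are invented for illustration; the URC's actual API is not described in this article.

```python
# Hypothetical sketch of a common controller interface wrapping dissimilar UGV
# platforms. The class and method names are invented; they are not the URC API.

from abc import ABC, abstractmethod

class RobotPlatform(ABC):
    """What a research project's code sees, regardless of the underlying hardware."""

    @abstractmethod
    def drive(self, linear_mps: float, angular_radps: float) -> None:
        ...

    @abstractmethod
    def read_sensors(self) -> dict:
        ...

class PackBotAdapter(RobotPlatform):
    """Translates the common interface into a vendor's proprietary commands,
    keeping those details hidden from the research code."""

    def __init__(self, vendor_link):
        self._link = vendor_link   # proprietary driver object supplied by the vendor

    def drive(self, linear_mps, angular_radps):
        self._link.send_track_speeds(linear_mps, angular_radps)

    def read_sensors(self):
        return {"lidar": self._link.lidar_scan(), "imu": self._link.imu()}

# Research code written against RobotPlatform runs unchanged on any adapted robot.
```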

There are many other research projects, at the GRRC and other leading labs, aimed at improving unmanned ground vehicles and making the technology ready to take on a larger role in the economy. These projects include such concepts as adjustable autonomy, enhanced reliability through design optimization, control reconfiguration, and augmented reality user interfaces.

If the level of development of UGV technology is indeed analogous to that of personal computers in the 1970s, then it may take only one more generation for such robots to become indispensable parts of everyday life. We may see a day when UGVs are assisting—or even replacing—soldiers on the battlefield and first responders at fires and other disaster sites. They may become as integral to the typical household as kitchen appliances or even domestic pets. (Forget the Roomba. Imagine a robot that places children's toys back on the shelf or picks scattered laundry off the floor and carries it to the washing machine.)

It's hard to imagine that the UGVs we see in our lab could evolve into such helpful machines. But few people who put together their first Altair 8800s from kits realized what personal computers eventually would become.

NOTE: This research was supported in part by the Ground Robotics Reliability Center at the University of Michigan, with funding from government contract DOD-DOA W56H2V-04-2-0001 through the Ground Vehicle Robotics group at TARDEC (Unclassified. Dist. A. Approved for public release).

Copyright © 2011 by ASME