
Eyes of Mars

Actually, They're Cameras, Helping Robots Work a Little Easier in an Untidy World.


Associate Editor

Mechanical Engineering 126(09), 40-44 (Sep 01, 2004) (4 pages) doi:10.1115/1.2004-SEP-3


A video clip on a University of Notre Dame Web site depicts an industrial robot picking pallets from one haphazardly arranged stack and placing them neatly atop each other on the bed of a miniature truck. To complete each placement, the robot identifies the position of the starting pallet, the location of the receiving pallet, and the orientation of the lifting fork. It does so by observing the dimensions of several elliptical cues that have been attached to each object. It's impressive until you realize you could pay a man minimum wage to do the same thing, without the visual cues.

The quick little demo makes a point, though. Robots prefer a tidy, orderly world to a messy one. But, in many cases, messy is what they're given. Robotics engineers refer to this place as the unstructured environment. It is everywhere, from the rubble-strewn surface of Mars to the back flaps of a supermarket loading dock. Nothing is where it is supposed to be—ever.

Humans function quite well in this world, according to Steven Skaar, a professor of mechanical engineering at Notre Dame. You can drive your car and make frequent, minor course corrections while holding up your end of a detailed business conversation. You manage three-dimensional tasks, such as threading a needle, adroitly, even if you have a hard time remembering where you've left your keys.

Robots and computers, on the other hand, eagerly keep track of everything, no matter how irrelevant the data may be. But when they get lost, robots and computers have a tough time finding a way back to where they were. Skaar has been researching methods of combining the inherent talents of both man and machine for such tasks as automatic forklift operation. A project at a bag manufacturing plant in Florida demonstrates the recent success of his low-brow approach.

Jack Forbes, project manager at the bag plant in Pensacola, has been overseeing the robotics project there and considering it for other company plants. The factory manufactures a variety of industrial bags, the kind that are often filled with 80 lbs. of cement mix, he said.

Today, the plant has two Fanuc robots handling three production lines. It plans a third robot to better distribute the workload. The robots stand at the ends of the tuber lines, where multiple rolls of paper and plastic film are unwound, glued, folded, and separated into flattened tubes. The tubes, unsealed at both ends, emerge from the tuber machines in stacks 35 to 90 pieces high. The robots grab the stacks and place them on pallets.

The robots rely on two cameras and two lasers for locating the pallets and judging the height of the intermediate tiers as the tubes are piled up to a finished height of about 54 inches. As layers are added, the tubes below compress under the additional weight. Determining where the top layer ends is not a matter of simply counting up the layers that have already been put down, Forbes said, as it would be for a less compressible product, such as bricks.

Instead, a single spot laser casts a beam onto the pile. The two video cameras pick up the laser dot from two different angles. Image processors then tell the robot the height at which it should deposit its payload.
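
To make the geometry concrete, here is a minimal sketch of how two calibrated cameras might triangulate that single laser dot to recover the stack height. The projection matrices, pixel coordinates, and the choice of a linear (DLT) triangulation are illustrative assumptions, not details reported from the plant's system.

    import numpy as np

    def triangulate_dot(P1, P2, uv1, uv2):
        """Recover the 3-D position of a laser dot seen by two calibrated cameras.

        P1, P2 : 3x4 camera projection matrices (from a prior calibration)
        uv1, uv2 : (u, v) pixel coordinates of the dot in each image
        Returns the dot's (x, y, z) in the calibration frame.
        """
        # Standard linear (DLT) triangulation: each view contributes two rows.
        A = np.vstack([
            uv1[0] * P1[2] - P1[0],
            uv1[1] * P1[2] - P1[1],
            uv2[0] * P2[2] - P2[0],
            uv2[1] * P2[2] - P2[1],
        ])
        _, _, vt = np.linalg.svd(A)
        X = vt[-1]
        return X[:3] / X[3]   # homogeneous -> Cartesian

    # Hypothetical usage: the vertical coordinate of the dot gives the current
    # stack height, so the robot knows where to deposit the next layer of tubes.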

Before the robot began palletizing the stacks of tubes, it needed to know just where the edges of the empty pallet lay. To find out, another laser laid down a matrix of dots over the region where the empty pallet was supposed to be waiting. By matching and mapping the dots between their two views, the cameras defined the location of the pallet's edges.
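
Continuing the assumptions of the sketch above, the pallet-finding step might look like this: triangulate each matched dot from the projected matrix and take the extremes as the pallet's edges. Real dot matching is more involved; this only illustrates the idea, and it reuses the triangulate_dot routine sketched earlier.

    def pallet_edges(P1, P2, dots1, dots2):
        """dots1, dots2: lists of matched (u, v) dot coordinates from each camera."""
        pts = np.array([triangulate_dot(P1, P2, a, b) for a, b in zip(dots1, dots2)])
        x, y = pts[:, 0], pts[:, 1]
        # Extents of the dots that landed on the pallet, in the plant frame.
        return (x.min(), x.max()), (y.min(), y.max())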

Forbes pointed out that the plant has replaced this pallet-finding portion of the system with hard stops against which a lift driver places an empty skid. That's a task a man can accomplish readily. But Forbes pointed out, too, that this is merely phase one of a project that he hopes ultimately will use the vision system to dispense with the intermediate palletizing. Instead, the vision system will guide the robot arm to move the tubes directly from the tuber into the operation that closes off the ends of the bags.

Telemetry data produces maps for determining how steep and how far things are on Mars. JPL image processors rush to relay terrain data to rover drivers, who transmit targets up to the rovers.


NASA's two recent Mars rover successes, Opportunity and Spirit, have been busy beaming back photos and experimental results, thanks, in part, to the efforts of one of their drivers, Eric Baumgartner. Baumgartner, a former Notre Dame student of Skaar's and now lead engineer for the rovers' five-degree-of-freedom arms at NASA's Jet Propulsion Lab in Pasadena, Calif., had a few minutes between drives to discuss the differences between planetary robots and the more mundane industrial robots on Earth.

Both Mars rovers are fully autonomous, Baumgartner said. JPL staff uploads a batch of the day's activities during the Martian morning and then waits to hear the results in the afternoon. Data relays to Earth by way of the Odyssey platform, which has been orbiting the red planet for the past two and one-half Earth years. Transmissions take about 20 minutes, one way.

Each rover uses several pairs of stereo cameras to find its way, Baumgartner explained. Four hazard-avoidance cameras look at the near region; a set of navigation cameras views the middle distance from 20 to 30 meters out; and a pair of panoramic cameras observe the region lying between 50 and 100 meters from the rover.

An imaging team produces 3-D terrain meshes from the image data beamed to Earth. Rover drivers then use the maps in dispatching the vehicles to various targets. Using image data, the rovers can get within 1 cm and 10 degrees of a destination. A probe on the end of the robot arm then closes the gap between the actual and estimated target position. JPL staff also rely on wheel odometry and feature tracking to feed the rover an integrated position estimate, Baumgartner said.
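
A toy sketch of what an "integrated position estimate" can mean in practice: blend the wheel-odometry fix with the visual-tracking fix by inverse-variance weighting. The numbers and the weighting scheme are illustrative assumptions, not JPL's actual estimator.

    import numpy as np

    def fused_position(odometry_xy, visual_xy, odometry_var, visual_var):
        """Blend two independent position estimates by inverse-variance weighting.

        On slippery terrain wheel odometry drifts, so the visual estimate
        (from tracking surface features between frames) pulls it back.
        """
        w_odo = 1.0 / odometry_var
        w_vis = 1.0 / visual_var
        return (w_odo * np.asarray(odometry_xy) + w_vis * np.asarray(visual_xy)) / (w_odo + w_vis)

    # Illustrative numbers only: odometry says (10.0, 2.0) m, visual tracking says (9.6, 2.1) m.
    print(fused_position((10.0, 2.0), (9.6, 2.1), odometry_var=0.25, visual_var=0.04))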

If the day's activities include geology experiments, the rover can place its X-ray spectrometer, its Mössbauer spectrometer, or its rock abrasion tool on a sample with millimeter or better repeatability, Baumgartner said.

End-of-arm tooling moves a pile of tubes to a pallet (top). Cameras on the scene observe interplay of lasers, pallets, and tubes. The height of a pallet load can vary, as tubes (bottom) compress readily. Here, tubes exit the tuber.


The rovers' tried-and-true way of finding their places is more properly called the calibration-and-servo method of robot control, to distinguish it from camera-space manipulation—the formal name for the control method deployed on the bag project.

Essentially, the rover cameras provide a reference frame tied to some physical point on each vehicle. Each rover then relies on a kinematic model of its arm to move the end-effector to the desired position. Servo feedback tells the robot when it arrives.
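
A minimal sketch of the calibration-and-servo idea, for a two-link planar arm: the target is expressed once in a camera-derived frame, the kinematic model plans the joint angles, and servo feedback reports how closely the arm arrived. The link lengths and target here are made-up values, not rover parameters.

    import numpy as np

    L1, L2 = 0.5, 0.4   # hypothetical link lengths, meters

    def forward(theta1, theta2):
        x = L1 * np.cos(theta1) + L2 * np.cos(theta1 + theta2)
        y = L1 * np.sin(theta1) + L2 * np.sin(theta1 + theta2)
        return np.array([x, y])

    def inverse(target):
        x, y = target
        c2 = (x**2 + y**2 - L1**2 - L2**2) / (2 * L1 * L2)
        theta2 = np.arccos(np.clip(c2, -1.0, 1.0))
        theta1 = np.arctan2(y, x) - np.arctan2(L2 * np.sin(theta2), L1 + L2 * np.cos(theta2))
        return theta1, theta2

    target = np.array([0.6, 0.3])      # target expressed once in the camera-derived frame
    theta = np.array(inverse(target))  # plan from the kinematic model alone ...
    # ... then servo feedback reports the residual error at arrival.
    print(np.linalg.norm(forward(*theta) - target))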

Skaar likens the calibration-and-servo method to a person first using his eyes to determine the absolute coordinates of a needle and thread in space, then closing them and relying on knowledge of his limb length and joint angles alone to actually thread the needle. That's not how a human threads a needle at all, Skaar said. Instead, he moves his joints and observes the motions and positions of the two objects as they come together.

Adding this estimating capability to a robot's already highly developed sense of limb coordinates makes for a system that can actually become more accurate over time, Skaar said. Kinematic models are seldom perfect. System accuracies degrade as components wear. Building a system that adds the complementary sense of sight is one way to free the robot from the restrictions of imperfect models and component wear.
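
By contrast, here is a toy flavor of camera-space manipulation: keep refitting the map from joint angles to where a cue on the end-effector shows up in the image, using the robot's own recent motion, so accuracy improves as samples accumulate near the work piece. The linear (affine) camera model is a stand-in for the method's actual nonlinear view model, and the data below are synthetic.

    import numpy as np

    joint_samples = []   # joint-angle vectors recorded as the arm moves
    pixel_samples = []   # (u, v) of the end-effector cue seen by one camera

    def record(joints, pixel):
        joint_samples.append(np.append(joints, 1.0))   # affine term
        pixel_samples.append(pixel)

    def predict(joints):
        # Least-squares fit over everything seen so far.
        A = np.array(joint_samples)
        b = np.array(pixel_samples)
        coeff, *_ = np.linalg.lstsq(A, b, rcond=None)
        return np.append(joints, 1.0) @ coeff

    # Synthetic data: observe the cue at a few joint configurations, then predict.
    rng = np.random.default_rng(0)
    true_map = rng.normal(size=(3, 2))
    for _ in range(8):
        q = rng.uniform(-1, 1, size=2)
        record(q, np.append(q, 1.0) @ true_map + rng.normal(scale=0.5, size=2))
    print(predict(np.array([0.2, -0.3])))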

A second example of camera-space manipulation comes from another robotic bag handler developed about five years ago. Using suction cups and other end-of-arm tools, a Fanuc robot grabs flattened bags from a random pile and moves them between filling and sealing stations before laying them down on a shipping skid.

Each bag wears an imprinted visual cue, according to Skip Poole, general manager of the bag company's consumer packaging equipment division in Salt Lake City. The cues are identical to those visible in the pallet-handling video on the Notre Dame Web site.

Visual cues consist of light and dark disks on contrasting backgrounds. Disk centroids remain consistent regardless of camera angle.

Two cameras, observing the cues that mark both bag sides, determine orientation as each bag comes up in the pile. Vacuum cups grab the bag along the top fold and snap it open with a quick downward stroke. The rest of the filling, sealing, and stacking operation, though fun to watch, is basically automated handling like we've seen before. It's the robot's making order out of the chaos of empty, flat sacks that really grabs you.
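
A short sketch of why such disk cues are convenient: the intensity-weighted centroid of a thresholded disk stays well defined even when perspective squashes the disk into an ellipse. The threshold and the synthetic image are placeholders, not parameters from the bag-handling system.

    import numpy as np

    def disk_centroid(gray, threshold=128):
        """Centroid of the bright-disk pixels in a grayscale image patch."""
        mask = gray > threshold
        vs, us = np.nonzero(mask)
        return us.mean(), vs.mean()   # (u, v) in pixel coordinates

    # Tiny synthetic example: a bright disk on a dark background.
    yy, xx = np.mgrid[0:64, 0:64]
    img = np.where((xx - 40) ** 2 + (yy - 20) ** 2 < 100, 255, 0)
    print(disk_centroid(img))   # roughly (40.0, 20.0)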

The three-dimensional application of vision systems to robots has turned out to be a greater problem than was first anticipated years ago, Skaar said. The human talent for distinguishing subtle characteristics of a scene and thereby identifying discrete objects visually has remained inimitable by machines.

Compared with machines, "humans are superior at pointing and clicking on surface junctures," Skaar said.

Skaar demonstrates an example of this activity on his Web site, where he's set up a way for users to simulate a box-stacking operation. An operator selects a box with a laser pointer and can watch as the robot uses camera-space manipulation to determine the orientation of the box for positioning its end effector. After the robot has grasped and lifted the box, the operator selects the stack where he wants the box placed. Again, the system uses camera-space manipulation to identify the orientation and location of the stack top. The robot moves and positions the box accordingly and sets it down.

It's one step removed from the human-in-the-loop approach that NASA and other users of artificial mechanical dexterity take, in which an operator controls a robot arm remotely through a joystick or similar means in real time. That approach becomes increasingly unwieldy as the degrees of freedom increase and visual access turns camera dependent. For the Mars rovers, a human in the loop is impossible because of the lengthy time delay between signal transmission and reception.

Camera-space manipulation, on the other hand, could serve planetary rovers quite well, Skaar explained, because it works independently of time and distance. It also might suit a robot mission to repair the Hubble Space Telescope, a mission NASA is currently considering.

Skaar has applied the same robot used in the box-stacking demonstration to a drilling task that he has set up on the Notre Dame campus. There, a user selects a spot on a surface where a hole is to be drilled by pointing and clicking on surface junctures. Two other surface point-clicks establish a surface perpendicular to which the bit should be positioned. The user specifies hole depth and the robot goes to work, controlled through camera-space manipulation.
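
A minimal sketch of the geometry behind those three clicks, under the assumption that the vision system has already turned them into 3-D surface points: the two auxiliary points pin down the local surface plane, and the drill axis is that plane's normal through the hole point. The coordinates below are hypothetical.

    import numpy as np

    def drill_axis(hole_pt, aux_pt1, aux_pt2):
        """Unit normal of the plane through three clicked surface points."""
        n = np.cross(np.asarray(aux_pt1) - hole_pt, np.asarray(aux_pt2) - hole_pt)
        return n / np.linalg.norm(n)

    # Illustrative numbers: a roughly horizontal surface, so the axis comes out near +z.
    print(drill_axis(np.array([0.0, 0.0, 0.5]), [0.2, 0.0, 0.5], [0.0, 0.3, 0.5]))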

"Drilling is but one of hundreds of tasks where requirements can be conveyed to the dexterous robot system by humans selecting the surface points," Skaar said.

Skaar's appealingly practical approach to the robot vision problem appears to be less ambitious than other programs such as the European Union's Cognitive Systems for Cognitive Assistants. This program, also known as CoSy, seeks ways of raising the intelligence of robots from their current insect-like level to that of a preschooler.

Still, Skaar's approach produces what has long been promised: a way of combining a human's understanding of a task and his ease in identifying key surfaces with a robot's perfect memory, precision, and robustness.

"The long-expected day of exploiting the steadiness, strength, and versatility of mechanical dexterity in three dimensions may just have needed the right mix of human and machine attributes," Skaar said.

A robot arm picks a single bag from a random pile after observing a circular cue on its end-of-arm tooling and similar cues printed on the bags themselves. The robot can pick a lone bag from the stack regardless of the height, orientation, or side at which each bag lies. Lower photo shows the bag's destination, a filling nozzle.


Data from several rover cameras help bring the robot arms within reach of their targets. Probes touch the targets to ascertain final coordinates.

For the two Mars rovers, a human-in-the-loop approach won't work because of the lengthy time delay separating the signal's transmission and reception.

Copyright © 2004 by ASME