
# Airborne, Autonomous & Collaborative

Unmanned aerial vehicles have changed the face of warfare, but UAVs can do even more when they fly in flocks.

## Author Notes

Brandon Basso and Joshua Love are Ph.D. candidates in mechanical engineering at the University of California at Berkeley.

J. Karl Hedrick is the James Marshall Wells Professor of Mechanical Engineering at Berkeley, specializing in nonlinear control, automotive control systems, and aircraft control.

Mechanical Engineering 133(04), 26-31 (Apr 01, 2011) (6 pages) doi:10.1115/1.2011-APR-1

## Article

Autonomous aircraft have always had a certain allure for control system designers. Unstable dynamics, six degrees of freedom, GPS, gyroscopes, and accelerometers: control doesn’t get much more challenging than that. And the payoff—unprecedented high-definition and persistent aerial imagery, human-free operation, long-duration missions—drove early development in the field of aircraft autonomy.

The military was the initial customer for unmanned aerial vehicles, beginning with target drones in the early 1900s. Inexpensive commercial autopilots expanded the field to include the commercial, private, and hobbyist sectors. It is now possible to build an autonomous UAV that can fly without human guidance for less than $500 with open-source hardware and software. In 2011, simple UAVs are practically out of the box for anyone with a soldering iron, some epoxy, and a free weekend.

With UAVs becoming both cheap and easy to build, the field's leading edge is now systems—squadrons of two or five or ten aircraft, collaborating to achieve a common goal. The ambition is to use teams of flying robots to develop vision-based maps of large areas, track moving objects, fuse information from multiple aircraft and multiple sensors, and perform high-level task planning.

While military applications have focused on fully integrated, single-UAV solutions, recent research in multi-agent autonomous systems has underscored several benefits of collaboration among robots. A team can possess heterogeneous capabilities, enabling division of labor and efficient task execution. Flight tests performed in the Center for Collaborative Control of Unmanned Vehicles at the University of California at Berkeley, where we work, have demonstrated the benefits of heterogeneity. In a 2010 demonstration, researchers tasked two UAVs, one with a low-resolution broad field of view and one with a high-resolution narrow field of view, with identifying and tracking a person leaving a building while keeping the building under surveillance. The two UAVs were able to dynamically allocate themselves to the live task list based on their individual sensing capabilities and the requirements of the tasks themselves. That is real team-level autonomy.

We have found that multiple, smaller, specialized UAVs often can be less expensive to design and operate than larger, more capable aircraft.
The keys to their success are small sensors, fast signal processing, and intelligent multi-agent decentralized control systems.

Unmanned aerial vehicles present several unique sensing challenges. The relative velocity between aircraft and targets on the ground is typically large, and the distances separating them can range from hundreds of meters to several kilometers. And while all aircraft are limited in payload capacity—meaning smaller sensors, smaller computers, and a limited energy supply—most small UAVs weigh less than 100 kilograms and must operate right on the margin, scrupulously trading off grams of computers and sensors for more battery and more endurance. When sensors must pack a lot of power into a small package, certain sensing options, such as the sort of lasers popular in many ground robotics applications, are not possible. Even larger aircraft that fly farther from their targets do not employ lasers because of their limited range.

For UAVs, and for autonomous robotics in general, vision remains one of the most prevalent sensing options. Even with a low-cost webcam, autonomous aircraft can identify and track humans, cars, and physical landmarks. Higher-quality sensors increase resolution and frame rate; our center (which goes by the short form C3UV) uses a high-definition camera that images at 112 frames per second. Low-cost and hacker-friendly stereo cameras have spurred a wave of development in stereo vision processing. Thanks to open-source drivers and a healthy developer community, sensors of these types will no doubt make their way into the UAV domain soon.

There are as many types of UAVs as there are sensor options. Most people have become familiar with the common military UAVs, from the 2 kg Raven to the 3,800 kg Global Hawk.
But commercially available autopilots such as Piccolo and Kestrel can be combined with virtually any hobbyist radio-controlled fixed-wing aircraft or rotorcraft to make an autonomous aircraft, and open-source hardware options can do a comparable job at a tenth the cost.

The C3UV lab maintains a fleet of several fixed-wing aircraft, ranging from the MLB Co.’s 40 kg Bat IV, which can fly for eight hours, to the 3 kg Zagi, which can fly for 30 minutes. The Zagi was the result of searching for the sweet spot between capability and simplicity. While larger aircraft generally return more in terms of sensing quality, operating them requires exponentially more man-hours and ground-support equipment than are needed for smaller vehicles. Additionally, and perhaps more important, for university research it is highly desirable to have a more modifiable and versatile aircraft that doesn’t shy away from a rapid development cycle.

The resulting Zagi platform is a delta-wing aircraft that costs less than $1,000. What it lacks in endurance and capacity, the Zagi makes up for in cost and operational simplicity. What's more, these durable foam aircraft can be prepped and launched in a matter of minutes, unlike their larger brethren. Rapid deployment allows for less ground support and more vehicles in the air. The key ingredient is an intelligent collaborative control system that makes operating ten aircraft as easy as operating one.

To control a complex system, such as a collaborative team of UAVs, engineers often separate the overall control system into several simpler problems. That enables the system designers to view pieces of the whole from different perspectives and to apply the most appropriate control tools and techniques for each individual piece. The C3UV system is designed to allow a single user to command a fleet of UAVs. To do so we solve simpler control problems for path following, path planning, sensing and estimation, collaboration, high-level mission definitions, and system integration.

The path-following problem addresses how a UAV can be controlled to follow a desired path. The UAV's rigid body motion is typically modeled using ordinary differential equations to represent the position and orientation of the vehicle. A control strategy is then found that makes the position closely follow the desired path. If the UAV's dynamics are linear, and if the desired path is simply moving to a static point, linear control techniques, such as proportional–integral–derivative control and pole placement, may be used. That may be the case for the approximated dynamics of quad-rotors or helicopters.
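As a concrete illustration, the fly-to-a-point case can be sketched as one PID loop per axis. This is a minimal sketch under illustrative assumptions, not flight code: the "plant" is a kinematic model whose velocity is commanded directly, rather than a real aircraft's attitude and throttle loops, and the gains are arbitrary.

```python
# Minimal PID sketch: drive a 2-D kinematic vehicle to a static waypoint.
# Plant model and gains are illustrative assumptions, not C3UV flight code.

def pid_step(error, integral, prev_error, dt, kp=1.5, ki=0.3, kd=0.2):
    """One PID update for a single axis; returns (command, new_integral)."""
    integral += error * dt
    derivative = (error - prev_error) / dt
    return kp * error + ki * integral + kd * derivative, integral

def fly_to(pos, target, dt=0.05, steps=400):
    """Run independent PID loops on x and y; the output is a velocity command."""
    integrals = [0.0, 0.0]
    prev = [target[0] - pos[0], target[1] - pos[1]]
    for _ in range(steps):
        for ax in range(2):
            err = target[ax] - pos[ax]
            vel, integrals[ax] = pid_step(err, integrals[ax], prev[ax], dt)
            prev[ax] = err
            pos[ax] += vel * dt  # velocity command integrates to position
    return pos
```

Running `fly_to([0.0, 0.0], [100.0, 50.0])` converges close to the target; pole placement would instead choose gains to put the closed-loop poles at desired locations directly.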

However, simple models for fixed-wing aircraft are often non-linear. Additionally, if the desired path is more complex than a fixed point (such as a spline connecting multiple points or tasks), the resulting controller will likely be based on non-linear techniques. Because of these two factors, real UAVs often use non-linear control techniques, such as feedback linearization or sliding-mode control, to generate state-space control laws that track the desired paths.
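To see what sliding-mode control buys, here is a minimal sketch for a first-order system with an unknown but bounded disturbance. The plant, gains, and disturbance are illustrative assumptions; the point is that the switching control rejects any disturbance below its gain.

```python
# Sliding-mode sketch: track a setpoint despite an unmodeled, bounded
# disturbance. All parameters are illustrative, not aircraft values.
import math

def sat(s):
    """Saturation replaces sign() inside a thin boundary layer to cut chattering."""
    return max(-1.0, min(1.0, s))

def simulate(x=0.0, x_des=2.0, k=1.0, phi=0.05, dt=0.01, steps=1500):
    """Plant: dx/dt = u + d(t) with |d| <= 0.5; control: u = -k * sat(e/phi).

    The switching gain k must exceed the disturbance bound for the error to
    be driven into the boundary layer and held there."""
    for i in range(steps):
        d = 0.5 * math.sin(i * dt)   # unknown disturbance, bounded by 0.5
        e = x - x_des                # sliding variable (first-order case)
        u = -k * sat(e / phi)
        x += (u + d) * dt
    return x
```

A pure `sign()` switch gives finite-time convergence but chatters; the boundary layer trades a small steady error (here at most `0.5 * phi / k`) for smooth actuation.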

One step up in the control hierarchy, path planning determines how to generate a path that the UAV should follow to accomplish a goal—image an area, say, or patrol a border. The path-planning problem is more abstract than the path-following problem and assumes that an adequate path-following controller exists. There are many different methods of generating the paths. Sometimes the goal's specification is itself a path, such as searching along a line. Other times the goal requires creating space-filling curves to search an area or generating paths that search an area as quickly as possible.
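The classic space-filling sweep is the boustrophedon, or lawnmower, pattern. A minimal sketch, assuming a rectangular search area and a fixed sensor swath width (both illustrative simplifications):

```python
# Lawnmower coverage sketch: back-and-forth tracks spaced one sensor swath
# apart guarantee the footprint sweeps the whole rectangle.

def lawnmower_waypoints(x_min, x_max, y_min, y_max, swath):
    """Return an ordered waypoint list covering the rectangle."""
    waypoints = []
    y = y_min
    left_to_right = True
    while y <= y_max:
        if left_to_right:
            waypoints += [(x_min, y), (x_max, y)]
        else:
            waypoints += [(x_max, y), (x_min, y)]
        left_to_right = not left_to_right  # alternate sweep direction
        y += swath
    return waypoints
```

Feeding these waypoints (connected by splines, in practice) to the path-following layer below turns an abstract "search this area" goal into something the aircraft can actually fly.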

Even more abstract goals can be defined, such as optimal imaging, or explicit control of the sensor footprint so as to maximize image quality. In such a case, the path-planning algorithms determine where the sensor footprint should be and compute a path that will put the UAV there.

When a UAV is given a goal such as to map an area, track a car, or search for lost campers, the desired behavior depends on what it is currently looking for and what it has already found, and where everything might be in the future. That predictive layer, often simply referred to as estimation, is essential for planning in the uncertain environments in which UAVs operate. Sensing and estimation should affect the paths planned.

This sequence shows the path of a UAV during an autonomous search experiment. The search algorithm directs the UAV toward the most probable target location, shown by certainty gradient lines. After the first pass found nothing, the UAV plotted a course through the remaining unsearched area.

C3UV LAB

Consider a “search-then-track” task where a UAV must search for, identify, and track a person. The system must have some way of representing an estimate of where the person is; then it must plan UAV paths based on changes in that estimate. As a UAV flies over an area where it does not sense the person, the estimate of the person's location should change, and so too should the path planned for the UAV: it should look elsewhere.

Once the person is identified (assuming the UAV has computer vision algorithms that can identify people), we would like to keep the person in the field of view while relaying the location information to a human search party. The search problem becomes a tracking problem, and as the person leaves the field of view, his or her position and likely future position must be predicted forward in time. Appropriate paths are continually re-planned based on the updated estimates and future predictions.
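The "look elsewhere" behavior falls out of a simple Bayesian update over a grid of candidate locations. A minimal sketch, assuming a discrete grid and a fixed detection probability; both are illustrative simplifications, not the C3UV estimator:

```python
# Grid-based search estimate: each cell holds the probability the target is
# there. An overflight that sees nothing discounts that cell and renormalizes,
# so the planner is naturally drawn to unsearched areas.

def negative_update(grid, cell, p_detect=0.8):
    """Bayes rule for a 'sensed nothing over this cell' observation."""
    grid[cell] *= (1.0 - p_detect)      # weight by the miss probability
    total = sum(grid.values())
    for c in grid:                      # renormalize to a valid distribution
        grid[c] /= total
    return grid

def best_cell(grid):
    """Greedy planner: head for the most probable cell next."""
    return max(grid, key=grid.get)
```

After a uniform four-cell grid is searched at `(0, 0)` with nothing found, that cell's probability drops from 0.25 to 0.0625, and `best_cell` steers the UAV to one of the remaining cells—exactly the replanning behavior described above.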

Combining multiple ways of representing uncertain information, including Gaussian distributions and non-Gaussian probabilistic density functions, with various path planning methods—from minimizing variance or entropy to flying towards the mean or the maximum—provides different options for translating objectives with uncertainty into paths for a UAV to follow.

The collaborative tasking system is the next highest layer in the control system. It decides the objective in the first place. One form of collaboration consists of enabling teams of UAVs to work together on a task that would be impossible with only one UAV. An example of tight collaboration would be fighting forest fires, where one UAV locates hotspots and the others drop fire retardant. Path planning for these types of behaviors may be either centralized or decentralized. In a centralized approach, one computer would take in the estimates and task parameters and generate the desired paths for all UAVs. In a decentralized alternative, each individual UAV decides what it will do based on some set of information communicated to it. Centralized algorithms tend to be conceptually simpler and easier to optimize, but less robust to failures.

The next obvious extension of UAV collaboration is to enable the vehicles to work together on several tasks at the same time. It is possible to have a human user manually assign individual UAVs to specific tasks, but this is very taxing on the operator, who is unlikely to guess the best allocations. Automated allocation can be done through heuristic algorithms, such as the nearest-neighbor approach, which assigns the UAV closest to the task. The heuristics produce simple rules that behave predictably, but not necessarily optimally.
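A minimal sketch of the nearest-neighbor heuristic, assuming planar positions and at most one task per UAV (illustrative simplifications):

```python
# Nearest-neighbor task allocation: greedily give each task to the closest
# still-free UAV. Simple and predictable, but not necessarily optimal.
import math

def nearest_neighbor_assign(uav_positions, task_positions):
    """Return {task_index: uav_index}; each UAV takes at most one task."""
    free = set(range(len(uav_positions)))
    assignment = {}
    for t, task_pos in enumerate(task_positions):
        if not free:
            break  # more tasks than UAVs; the rest wait
        best = min(free, key=lambda u: math.dist(uav_positions[u], task_pos))
        assignment[t] = best
        free.remove(best)
    return assignment
```

Because the greedy choice depends on the order tasks are considered, an early assignment can force a later task onto a distant UAV—the predictable-but-suboptimal behavior the text describes.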

An alternative lies in optimization-based methods, which can be applied in either a centralized or decentralized manner and can enable the system to determine which allocation is best according to some criterion: speed, fuel economy, average proximity to the runway, and so on. With collaboration through task allocation, the human user can specify a set of tasks for the system to work on and not worry about which UAV is working on which task at any given moment. System intelligence of this variety is absolutely necessary to enable one operator to control many aircraft.
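For small fleets, an optimization-based allocator can simply enumerate every one-to-one assignment and keep the cheapest. The sketch below uses total distance flown as the criterion; that choice, and the brute-force search, are illustrative (larger fleets need auction or Hungarian-style algorithms to scale):

```python
# Optimization-based allocation by exhaustive search: minimize the summed
# UAV-to-task distance over all one-to-one assignments.
import itertools
import math

def optimal_assign(uav_positions, task_positions):
    """Return ({task_index: uav_index}, cost) minimizing total distance."""
    n_tasks = len(task_positions)
    best_cost, best_perm = float("inf"), None
    # Each permutation picks which UAV serves task 0, task 1, ...
    for perm in itertools.permutations(range(len(uav_positions)), n_tasks):
        cost = sum(math.dist(uav_positions[u], task_positions[t])
                   for t, u in enumerate(perm))
        if cost < best_cost:
            best_cost, best_perm = cost, perm
    return {t: u for t, u in enumerate(best_perm)}, best_cost
```

Swapping the distance sum for fuel burn or proximity to the runway changes only the cost expression, which is the appeal of casting allocation as optimization.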

Indeed, once a system of UAVs has a notion of task assignment, the human operator can begin to think abstractly about how he wants the system as a whole to behave. To make the process easier, researchers can develop mission definition languages, which contain pre-defined types of tasks. Different types of tasks have different behaviors, such as searching an area or patrolling a border. Individual tasks are created from these predefined types by filling in the desired parameters, such as an identifier for an object of interest or the GPS points defining the patrol border.
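A mission definition language of this kind can be sketched as a handful of pre-defined task types whose parameters the user fills in. The type names and fields below are illustrative inventions, not the actual C3UV language:

```python
# Tiny "mission definition language" sketch: task types are templates, and
# individual tasks are created by filling in their parameters.
from dataclasses import dataclass, field

@dataclass
class SearchArea:
    corners: list             # polygon of GPS points to sweep
    target_id: str = "any"    # identifier for the object of interest

@dataclass
class PatrolBorder:
    waypoints: list           # GPS points defining the border
    loops: int = 0            # 0 = patrol indefinitely

@dataclass
class Mission:
    tasks: list = field(default_factory=list)

    def add(self, task):
        self.tasks.append(task)
        return self
```

The collaborative tasking layer then allocates UAVs to `mission.tasks` without the operator ever deciding which aircraft flies which task.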

An advanced UAV will be able to search for and identify a person of interest. But to track the person successfully, the control system must also possess a predictive layer that estimates not only where the person is, but where he may go within a certain period of time. As shown in the image, the person of interest, positively identified in the red rectangle, is likely to remain in the white shaded area in the inset for a certain period of time.

C3UV LAB

The human user will likely never type any code in these languages; instead, the operator will interact with a front-end graphical user interface that creates tasks defined and stored in the mission definition language. The GUI uses graphics to help the user understand the state of the system, such as which UAV is where and what it is doing. The interface should also be simple and intuitive enough that the user can draw new tasks for the system to accomplish. The human user should focus on what the system is doing and what information is being streamed back, not on how to operate the GUI.

Designing controllers for such a large and interconnected system is a daunting challenge, necessitating a structured approach and a system architecture. The field of system architecture seeks to simplify the process of designing, implementing, and deploying a controller on a network of UAVs and other similar systems. Much work in C3UV and other robotics labs has focused on designing the tools for addressing the challenges of writing reliable, maintainable, portable, and scalable software.

Object-oriented programming has inspired one popular approach: component-based software design. The principle is simple—separate individual functions into self-contained executables and modularize on multiple levels of hierarchy. (For instance, subsystems are a higher-level modularization than components.) Commercially available tools that aim to organize large software projects typical in multi-agent systems, combined with commercial middleware for brokering communication, go a long way toward making large-scale systems more attainable.
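A minimal sketch of the component idea: each function lives in its own self-contained component, and a small publish/subscribe bus stands in for the commercial middleware that brokers communication. All class and topic names are illustrative:

```python
# Component-based design sketch: components never call each other directly;
# they exchange messages over a topic-based bus, so each one can be replaced
# or run on another machine without touching the rest of the system.
import queue

class Bus:
    """Toy publish/subscribe broker, the role middleware plays in practice."""
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic):
        q = queue.Queue()
        self.subscribers.setdefault(topic, []).append(q)
        return q

    def publish(self, topic, msg):
        for q in self.subscribers.get(topic, []):
            q.put(msg)

class Estimator:
    """Example component: consumes raw detections, publishes estimates."""
    def __init__(self, bus):
        self.inbox = bus.subscribe("detections")
        self.bus = bus

    def step(self):
        while not self.inbox.empty():
            detection = self.inbox.get()
            # A real estimator would fuse and filter; this just forwards.
            self.bus.publish("estimates", {"pos": detection, "var": 1.0})
```

A planner component would subscribe to `"estimates"` the same way, never knowing which sensor or estimator produced them—exactly the decoupling that makes the software maintainable and scalable.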

UAVs are at the dawn of a new era, but some hard problems still remain. Low-level control and basic sensing may be largely solved, but teams of highly autonomous vehicles require reasoning about situations with near-human decision-making capability. The new control systems, unlike lower level single-agent motion control systems, must be able to deal with abstract and varied goals from multiple sources, a myriad of equally abstract outputs, and no clear notion of “system dynamics” now that the system is a cloud decision-making entity.

In perhaps the largest departure from classical controllers, humans must be considered in the loop for high-level decision-making control systems. Research into human-robot interaction has focused on modeling and translating between human objectives and robot objectives.

The often stated but little understood topic of “adjustable autonomy” is at the heart of the human-robot interaction problem. In the ideal case, human operators would have access to a dial: on one side, complete human control over a network of UAVs, for example, performing a complex mission of many tasks; on the other “all-computer” side of the dial, the system would perform all tasking and high-level reasoning about the state of the mission. Both of these options exist independently today. But what happens in the middle of the adjustable autonomy dial? What does a part-human, part-automated solution look like? Who does what, and who overrides whom?

Many of these questions are yet to be answered. Likely, the autonomous system will have to learn from humans how to reason like humans if there is any hope for a collaborative human-machine system to work. Unlike lower levels of controls, where objectives, inputs, and outputs are clearly defined, reasoning about high-level decision problems can involve multiple and vague objectives, inputs, and outputs. One potential solution would be to adopt methods such as apprenticeship learning and supervised learning that have been developed in the artificial intelligence community.

We hope that, in the future, control systems will be better, faster, cheaper, and smarter to deal with human interaction and collaborative control problems inherent in teams of UAVs. We may then be able to see flocks of UAVs fulfill their greatest potential as a means to extend human capability and meet human needs.

The many flavors of UAVs, clockwise from top left: a lightweight Zagi being readied for a test flight; a Sig Rascal, propelled by a two-stroke engine, can have a small camera (inset) mounted on its wing; another view of the Rascal; the MLB Bat IV has a 4-meter wingspan; a package of cameras and sensors can be installed in the Bat's nose.

## Acknowledgements

The authors would like to acknowledge the students, post-docs, and faculty who have worked at C3UV over the years, as well as the support of the Office of Naval Research.
