Taming The Technological Frontier

A Student of Disaster Recommends Teamwork and Openness as a Recipe to Avoid Serious Breakdown.

Mechanical Engineering 126(03), 40-42 (Mar 01, 2004) (3 pages) doi:10.1115/1.2004-MAR-4

Abstract

This article discusses how technology has been a boon to everyone, but, as news of tragedies like the Columbia disaster constantly reminds us, the complicated systems that drive an advanced society can malfunction dramatically and tragically in seconds. When writing about the past, judgment comes in: a piece about disasters that emphasizes the past focuses on accountability, and the writer has the power of hindsight to say that obviously they should have done this, or they did not do that. Managers need to encourage employees to report things they see that are out of the ordinary, even if the employee cannot explain the problem. It may just be that their intuition says something does not feel right. Something might not fit a pattern, but a worker would not have all the information at his or her disposal to say why it does not fit the pattern.

Article

Technology has been a boon to everyone. But as news of tragedies like the Columbia disaster constantly reminds us, the complicated systems that drive an advanced society can malfunction dramatically and tragically in seconds.

It doesn't have to be that way, according to writer James Chiles, the author of Inviting Disaster: Lessons From the Edge of Technology, published in 2001 by HarperCollins Publishers of New York. In the book, he shows how increasingly smart systems leave us wide open to human tragedy and how management practice can help mitigate catastrophe.

By weaving a dramatic narrative that explains how breakdowns in these systems resulted in more than 50 disasters of our age, such as the chain-reaction crash of the Air France Concorde and the meltdown at the Chernobyl nuclear power station, Chiles demonstrates how the battle between man and machine may be escalating beyond manageable limits. He delineates how fatal system fractures happen through a combination of human error and mechanical malfunction, and suggests steps to take in order to sidestep such tragedies in the future.

Chiles has written about technology and history for more than two decades. His articles have appeared in Smithsonian, Audubon, Air & Space, Texas Monthly, Harvard magazine, and American Heritage of Invention & Technology. He lives in Minneapolis.

Here, in a discussion with Associate Editor Jean Thilmany, Chiles offers a glimpse behind the scenes of his book and puts forth his ideas about how managers might mitigate the human error in system failure.

Mechanical Engineering: The marriage of technology, history, and system malfunction is fascinating. How did you become interested in the topic?

Chiles: I've been a technology writer for 24 years and in that time some patterns started showing up in my research. I was reading things like a 10-year retrospective on the 1977 New York blackout. But when I looked at all these different stories, one thing missing from the writing was: How do you learn as many lessons as possible from these disasters and think more about the future than the past?

When writing about the past, judgment comes in. A piece about disasters that emphasizes the past focuses on accountability. The writer has the power of hindsight to say that obviously they should have done this, or they didn't do that.

That's the kind of article that can be written, particularly about Three Mile Island. It's so clear now in retrospect what happened.

That kind of writing is very morally satisfying. But is it useful?

It's good to say that the managers should have examined everything they knew and made an informed decision based on that. But they didn't have enough time to do that. The most future-minded analysis has tried to look for a pattern of what humans do in those kinds of situations and then offer some rules of thumb that operators or managers could use. The rule of thumb could be something like: How should we think right now if we don't know all the facts?

One thing I drew out of my mass of information is that successful managers know how to stabilize the system during a potential disaster and give themselves a few minutes. They try to hold the system together for a few minutes so they can reflect and draw on their team. They think in terms of successful group action. They don't act hastily or quickly, even in these types of situations.

Mechanical Engineering: You speak of an engineering code of ethics, much as doctors take the Hippocratic Oath. How far could such a code for engineers go in building a highly reliable organization inside a complex company?

Chiles: The engineering code of ethics would help people who were at the middle level, those who design and put the pieces together in the organization. If there's a code of ethics in place, those people would feel empowered as engineers to call a halt to a project or to say, "In my professional judgment, this doesn't meet a standard of work."

A code of ethics would read that the project you're working on has to meet a certain industry-wide standard. So it would allow engineers to say, "I've got to speak up. I've got to say something. I have no choice."

It's awfully hard to stand up in an organization. You're putting your job at risk. You may be telling the manager who hired you, "Bob, you're wrong." And how easy is that?

They hired you. You're supposed to owe them loyalty. It's easier to stand up if you're citing a code of ethics.

Mechanical Engineering: What are some engineering management issues you discovered in the course of writing Inviting Disaster?

Chiles: Managers need to encourage employees to report things they see that are out of the ordinary, even if the employee can't explain the problem. It may just be that their intuition says something doesn't feel right. Something might not fit a pattern, but a worker wouldn't have all the information at his disposal to say why it doesn't fit the pattern.

In an organization, there's a lot of eyes out there. The one who sees something isn't necessarily the one who knows the situation is serious, only that it's out of the ordinary.

In the case of the Hartford [Conn.] Civic Center Coliseum [whose 1,400-ton roof collapsed under a little more than four inches of snow on Jan. 18, 1978, five hours after fans left the arena], the thing the workers thought was weird was that they couldn't get the panel bolt holes to line up when they were putting up the roof. That's an aberration that most people would comment on, or at least get mad about.

In this case, they reported it to their supervisors, but nothing happened. The culture was that you're paid to fix this, not be a sidewalk superintendent. So they fixed it by spot-welding it and, of course, that was a bad idea. When the roof fell in, it was clear to people who had more knowledge than the welders that the reason the bolt holes didn't line up was because the whole thing was sagging. The bolt holes were a sign that something wasn't working right.

The price for allowing reporting to happen is that organizations won't run as fast and will have to look back. The current corporate culture is to never look back, never slow down. If we never slow down, they can never catch us. I've actually heard that said.

One way that type of culture is fostered is when managers don't have the confidence to hear those things. They aren't confident enough in their ability to sort things out, and they don't want to hear about the problem. A good manager can hear about these problems and sort them out.

I'd never expect managers to act on every single problem report they get. They have to make judgments about whether the report is serious enough or not. We don't have totally perfect systems. Our world doesn't operate like that. But we can certainly do a lot better.

Mechanical Engineering: Well, how do you develop or nurture these people, who are expert at picking out the subtle signs of a real problem in a system, compared with what you call in your book "the constant noise of native difficulties"?

Chiles: When you've worked in an industry five or 10 years, you start to know the industry pretty well. And chances are, your career will continue along that same line.

So I think it would be good, once people have adopted a certain career path as professionals, to hear about how things have gone right and gone wrong in that industry, so that they could maybe have a little more vigilance and know what to look for.

Mechanical Engineering: Do managers need special or particular training in the face of modern technological systems?

Chiles: My point in the book is that in American culture, emergencies and failures are something to run the other way from. We're a success-oriented society, and disasters are seen as a failure.

We have plenty of disaster books and movies. They're not very productive. It's a voyeuristic form of excitement. But we do want to instill lessons from disaster and to say that none of us who run complex systems are far from such incidents.

The lessons from the near-misses can be passed to others. We could use those specifics for training. But that means we have to reveal those near-miss incidents. Close calls are something that most people could be trained better in how to deal with. A close call should be reported, analyzed, and become part of a database. Not just in one company, but across an entire industry.

Mechanical Engineering: In your book, you say that when there's no captain of a ship, that affects the organization. There's no one to steer.

Chiles: I compare a ship with no captain to a chief engineer or manager who doesn't know the system. And I give the example of the Ocean Ranger [an oil rig overcome by waves during a storm on Feb. 14, 1982, in the North Atlantic off the coast of Newfoundland] as a place where no one was in command. In that case, the investigating commission made that observation.

That, incidentally, is a good model for what an investigating commission should be. They spent $13 million trying to understand what happened. And this is 22 years ago, so that was a lot of effort for the time. They interviewed hundreds of people and tried to understand the whole industry, not just the rig, but how it fit within the practices that sprang up in a very fast-growing industry. The observation was that the oil and gas offshore industry was growing too fast.

In the Ocean Ranger case, there were three people who thought they should have real, day-to-day authority: the tool pusher, the master (or the captain), and the company man, who was the representative of the oil company that was paying the bills. And these three didn't get along well together. It was not a good teamwork situation. The captain wanted to be more conservative than the others, but was overruled at every point.

The other problem was that no single person really understood that rig very well. And when a crisis occurred, there was no one to take the helm and say, "We should do X, and we definitely don't want to do Y." No one knew the system well enough to see what was developing. When someone doesn't know their system, they make decisions based on haste and desperate experimentation.

You get this time-telescoping thing. When you're really scared, your perception of time changes. Even though a few seconds may have passed, you think it's more like a minute. You try something and you don't wait very long before you try something else. It doesn't take long before you've altered the system so much even God wouldn't know how you got there. You've pushed buttons and opened valves because you're scared.

Mechanical Engineering: What does it take to make an excellent organization?

Chiles: I like to call what we're in the technological frontier. This is the first time for a lot of these advanced systems. No one knows what to expect.

There's danger and opportunity bound together by the unknown. And we're all in this together, so I'd like to have a good team around me. I'll watch out for them, and they'll watch out for me.

That makes for a safer and more resilient system as well as a more satisfying work environment. It's amazing to me what people will do if they have bonded as a work team. It's astounding how dedicated they will be. I'm not saying that you should expect sacrifice, but in some programs, the edgier the system, the greater the sacrifice.

You saw that with Apollo 13. They formed very strong groups, and that's what they gained for their sacrifice. That's true in combat, too. The thing that brings them through is the friendships and the memories they've had. That's a form of payment for the greater sacrifice.

In this book, the author shows how increasingly smart systems can lead to human tragedy.

James Chiles says that "successful managers know how to stabilize the system during a potential disaster. They don't act hastily or quickly."

Copyright © 2004 by ASME