
Keener Eyes for Beowulf

Maintaining the Country's Nuclear Weapons, Without a Shot Being Fired, Requires Flexible Computer Power and Ever-Expanding Images of Data.

Author Notes

This article was prepared by staff writers in collaboration with outside contributors.

Mechanical Engineering 123(06), 78-79 (Jun 01, 2001) (2 pages) doi:10.1115/1.2001-JUN-6

This article describes how maintaining the country's nuclear weapons, without a shot being fired, requires flexible computer power and ever-expanding images of data. ASCI researchers are faced with the challenge of visualizing computational models of a size that could hardly have been imagined a few years ago. The animated simulations reveal exactly how individual atoms and molecules interact with each other in the bombs. The fusion reactions studied by ASCI can occur in just three-billionths of a second. ASCI researchers use the software to visualize a number of components within a data set, including scalar fields, vector fields, cell-centered variables, vertex-centered variables, and polygon information. But as the research continues, the stakes get higher. Before long, scientists will need software that can visualize a terabyte or more of data. A model on the terabyte scale is incomprehensible unless it can be visualized, and the only way to fully understand complicated 3-D calculations is through sophisticated graphics software.

A computer room in Apex, N.C., houses a study in contrasts: Against one wall is a sleek purple SGI Onyx 2 supercomputer that's larger than the average refrigerator. Next to it is a rack of 16 plain-vanilla PCs connected by Ethernet cables.

Despite their differences, both systems are important to scientists working on the Department of Energy's Accelerated Strategic Computing Initiative. Officials at the labs have contracted with Apex-based CEI Inc. to scale its EnSight Gold software to visualize data sets containing billions of cells. The Onyx 2 is for work on a $1.8 million contract with Los Alamos National Laboratory. The PCs, linked in what is known as a Beowulf cluster for parallel processing, are being used for a $1.4 million contract with Sandia, Los Alamos, and Lawrence Livermore labs.

In 1996, the Comprehensive Test Ban Treaty was signed, placing a moratorium on underground nuclear testing. But the Department of Energy and its national laboratories remain responsible for guaranteeing the safety and reliability of the nation's nuclear stockpile. The national labs formed their program, known by the acronym ASCI, to develop the high-resolution, three-dimensional physics modeling needed to evaluate the aging stockpile and reasonably predict how time will affect different weapon components.

ASCI researchers are faced with the challenge of visualizing computational models of a size that could hardly have been imagined a few years ago. The animated simulations reveal exactly how individual atoms and molecules interact with each other in the bombs. The fusion reactions studied by ASCI can occur in just three-billionths of a second.

"Since we can no longer conduct live testing, these computer simulations must be a much higher resolution than what we have dealt with in the past," said Jeff Jortner, a principal member of the technical staff at Sandia.

"There are a lot of variables that go into viewing these large datasets. We must have a finer fidelity of the meshes and there are more time steps involved. A large part of our research is to develop supercomputers and software that can handle these data sets ."

ASCI researchers use the software to visualize a number of components within a data set, including scalar fields, vector fields, cell-centered variables, vertex-centered variables, and polygon information.
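
As a rough illustration, one partition of such a data set might be represented along these lines; the field names, shapes, and the Python/NumPy layout are assumptions for clarity, not CEI's actual EnSight data model.

```python
# A minimal sketch of one partition of a cell-based simulation data set.
# Field names and shapes are illustrative, not CEI's EnSight format.
from dataclasses import dataclass, field
from typing import Dict, Optional
import numpy as np

@dataclass
class Partition:
    coords: np.ndarray                       # (n_vertices, 3) vertex positions
    cells: np.ndarray                        # (n_cells, 8) vertex indices per hexahedral cell
    cell_scalars: Dict[str, np.ndarray] = field(default_factory=dict)    # e.g. "pressure": (n_cells,)
    cell_vectors: Dict[str, np.ndarray] = field(default_factory=dict)    # e.g. "momentum": (n_cells, 3)
    vertex_scalars: Dict[str, np.ndarray] = field(default_factory=dict)  # e.g. "temperature": (n_vertices,)
    vertex_vectors: Dict[str, np.ndarray] = field(default_factory=dict)  # e.g. "velocity": (n_vertices, 3)
    polygons: Optional[np.ndarray] = None    # surface polygon connectivity, if present
```

The distinction between cell-centered and vertex-centered variables matters because each is interpolated and rendered differently, and a large model carries many such fields at every time step.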

But as the research continues, the stakes get higher. Before long, scientists will need software that can visualize a terabyte or more of data.

"A model on the terabyte scale is incomprehensible unless it can be visualized, and the only way to fully understand complicated 3-D calculations is through sophisticated graphics software," said Mike Krogh , senior developer at CEI. Both ASCI contracts call on CEI to add features that will improve the speed and efficiency of handling massive amounts of data for rendering on normal displays and in virtual reality environments such as CAVEs and PowerWalls.

The lab uses CEI's software to visualize large data sets. Last August, Krogh and a colleague, Dan Schikore of CEI, generated a test model of 11.5 billion cells at Los Alamos National Laboratory. According to Krogh, it was about 10 times larger than models being used in current ASCI research.

Although the goals of the two contracts are the same, they require CEI to scale the software on two different computing tracks: traditional shared-memory processing such as that used on the SGI Onyx 2, and the Beowulf cluster. The research will help determine the most cost-effective and efficient way to meet scientists' visualization needs.

The Los Alamos research requires a workstation, the Onyx 2, from Silicon Graphics Inc. of Mountain View, Calif., with eight 195-MHz MIPS R10000 processors, 10 gigabytes of random-access memory, a 500-GB disk, and three Infinite Reality 3 graphics engines.

The Tri-Labs (Sandia, Los Alamos, and Lawrence Livermore) contract focuses on parallel computing using commodity-based PCs and associated components. CEI's Beowulf cluster is connected with 100-Mbit Ethernet and an HP ProCurve 4000 switch. It runs Red Hat Linux 6.2, from Red Hat Inc. of Durham, N.C. Each PC contains a 733-MHz Pentium III processor and an NVIDIA GeForce2 graphics card, which can render about 20 million triangles per second.

The PC that functions as root node, or server, has 2 GB of RAM and a 55-GB disk. Each of the other 15 PCs has a gigabyte of RAM and a 40-GB disk.

Beowulf, named for the hero of the Old English epic who destroyed the monster Grendel, refers to a cluster of PCs linked for parallel processing using the Linux operating system.

The original Beowulf program was developed by a team that included Donald Becker and Dan Ridge at NASA Goddard Space Flight Center. Scyld Computing Corp., based in Annapolis, Md., with Becker as chief technical officer, continues to update Beowulf.

CEI has written its own Beowulf parallel-processing software from the ground up, based on the published ideas of Becker and his associates.

According to Krogh, the postprocessing visualization software in development at CEI is intended to work with any Beowulf system.

When linked together correctly, Beowulf clusters can reach computation speeds comparable to those of a traditional supercomputer at a lower cost. CEI's cluster cost $35,000.

Not only are Beowulf clusters inexpensive to buy and maintain, they offer flexibility for expansion, according to Jortner. "With a cluster, you can add a new machine, more memory, or more processors, and it's much less expensive than the traditional supercomputers of the past," he said.

The majority of computers at the national labs are now clusters of some sort. Some fill entire rooms, such as IBM's ASCI White, which is capable of running at speeds of 12.3 teraflops, or 12.3 trillion floating-point operations a second. Other cluster-based systems include Los Alamos' Avalon, a 140-processor Beowulf cluster, and Loki, a 16-processor Beowulf.

"Computational codes are all being moved to clusters," said Jortner. "Right now they're no longer building single machines that can do computations of this size, so we are constantly looking for more power. But that doesn't always mean it has to be more expensive."

While the idea of linking PCs together to emulate a visualization supercomputer sounds obvious, the programming involved is not. A cluster is a distributed memory system, which is more difficult to program for parallel processing than a shared-memory system such as the Onyx 2. If a cluster is not programmed properly, it can be tediously slow, eliminating any of its benefits.

Traditionally, scaling visualization software with a Beowulf cluster required fragmenting the simulation data across the individual PCs. The separate pieces of the solution are then "glued" back together into a single data file and postprocessed the traditional way, as they would be with a supercomputer.

The problem is that as the aggregate model size increases, the combined results file becomes too large and unwieldy to process on a single system.
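
A minimal sketch of that traditional workflow, written here with mpi4py and hypothetical file names as a stand-in for whatever message-passing layer the actual software used:

```python
# Sketch of the "fragment, then glue back together" workflow on a Beowulf cluster.
# Run with something like: mpirun -np 16 python glue.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each PC holds one fragment of the simulation results (hypothetical file names).
local_fragment = np.load(f"results_part{rank:02d}.npy")

# Traditional approach: ship every fragment to the root node and write one big file.
all_fragments = comm.gather(local_fragment, root=0)
if rank == 0:
    combined = np.concatenate(all_fragments)    # the whole model must fit in one machine's RAM
    np.save("results_combined.npy", combined)   # this single file is then postprocessed
```

The gather step is the bottleneck: the root node must hold the entire combined model, so the approach breaks down exactly when the aggregate data set grows past what one machine can store.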

A test model of 11.5 billion cells created at Los Alamos is approximately 10 times larger than current nuclear-event models.


CEI's goal is to program its software to have the intelligence to find these individual pieces and spawn multiple functions. If the user wants to take a slice through the model, for example, the code would have to issue a command to each individual PC to do its part of the slice, then collect all the data and display it on the user's monitor. All this activity must be transparent to the user, as though the user were dealing with a single file set on a single computer. And it must be done as fast as it would be on a supercomputer, or faster.
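
By contrast with the glue-it-together workflow above, a minimal sketch of the distributed approach this paragraph describes, again with mpi4py and hypothetical file and variable names: each node extracts its portion of the slice locally, and only that small result crosses the network.

```python
# Sketch of a distributed slice: each node cuts its own partition, root gathers the result.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Hypothetical per-node files holding this PC's piece of the model.
coords = np.load(f"coords_part{rank:02d}.npy")     # (n_vertices, 3) positions
values = np.load(f"pressure_part{rank:02d}.npy")   # one scalar per vertex

# "Take a slice through the model": keep only vertices near the plane x = 0.5.
near_plane = np.abs(coords[:, 0] - 0.5) < 1e-3
local_slice = np.column_stack([coords[near_plane], values[near_plane]])

# Only the extracted slice, a tiny fraction of the data, is gathered for display.
slices = comm.gather(local_slice, root=0)
if rank == 0:
    full_slice = np.vstack(slices)
    print(f"slice contains {len(full_slice)} points, ready to render")
```

Because only the slice travels to the root node, the data set as a whole never has to fit on any single machine, which is what makes terabyte-scale models tractable on a cluster.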

If programmed and maintained properly, the price/performance ratio of a cluster can be at least an order of magnitude greater than that for a supercomputer, according to Krogh. PC-based systems are built using commodity components costing tens to hundreds of dollars, significantly lowering maintenance costs. Most PCs come with a one- to three-year warranty on parts and can be fixed by someone with a little experience. A traditional supercomputer, on the other hand, requires an expensive maintenance contract with a vendor field engineer to perform the repairs. Individual parts can cost thousands or tens of thousands of dollars.

If CEI's cluster is successful, it will open markets beyond the ASCI project, potentially in all areas of research and development, said CEI's president, Kent Misegades.

"This project will help advance state-of-the-art, largescale visualization," he said. "In particular, it enables us to adapt our technology to advanced parallel computing architectures, which we believe will be the systems of choice in the future for scientists and engineers. It will mean major innovations for all types of research and disciplines that have a need for supercomputing power."

Copyright © 2001 by ASME