
Too Hot for Comfort

As Server Power Grows, So Does the Need to Find More Efficient Ways to Keep Computers Cool.


Associate Editor

Mechanical Engineering 128(12), 32-34 (Dec 01, 2006) (3 pages) doi:10.1115/1.2006-DEC-3

Electronics have grown much hotter over the past decade, making cooling a top priority in data centers. APC, better known for backup power supplies, delivers cold air from rack-size towers mounted along each row. It then monitors server temperatures, adjusting each individual air conditioner tower to achieve optimal cooling. Such localized cooling is so efficient that users can boost server rack power to 18 kW, nearly nine times the average found by Uptime. IBM believes even the largest, most sophisticated data center managers need help with cooling. Like APC, it encloses its racks with a roof, but unlike APC it uses a cold rather than hot center aisle and exhausts the heated air into the data center. IBM also removes heat with a water-cooled heat exchanger attached to the back of a rack. In addition, IBM provides power management software that lets IT managers adjust power and heat output: they can power down servers based on workload, or move workload to environments that are not effectively using the power and cooling capacity they have.

Back in 1965, Intel Corp. cofounder Gordon Moore predicted that manufacturers could shrink electronics fast enough to double the number of transistors on a chip every two years.

Moore’s Law got it about right. Over the past 40 years, manufacturers have boosted the number of transistors on computer processors from dozens to hundreds of millions. Today’s electronics are smaller, faster, and far more powerful than those of just a few years ago.

They are also much hotter: heat loads more than quadrupled between 1992 and 2001. Granted, your desktop computer will not warm your hands on an icy winter morning. But the electronics in data centers are another story altogether.

Data centers are rooms that house the computer hardware that runs the software, communications, and databases that glue together large enterprises. Not that long ago, most data centers were chilled rooms housing large mainframe computers. Today, those mainframes have given way to microprocessor-based servers, PC-size computers designed to network into powerful computer arrays.

A typical data center consists of row after row of computer racks. Each rack stands 6 or 7 feet tall and could easily fit about 40 pizza-box-shape servers, or even more of the servers called blades, which are stripped down to their essential circuit boards.

That, at least, is the theory. Unfortunately, servers generate far too much heat to fill a rack to capacity. While each individual server actually runs far cooler than the old mainframe did, stacking them atop one another in a refrigerator-size enclosed space could produce enough heat to melt the solder off the boards.

That is why most racks hold far fewer servers than their maximum capacity. According to Steve Sams, vice president of site and facilities services for IBM Global Technology Services, a fully loaded rack generates about 32 kilowatts thermal.

Yet when the Uptime Institute Inc., an information technology reliability organization based in Santa Fe, N.M., surveyed 19 computer rooms measuring a total of 204,000 square feet (slightly larger than four football fields), it found that, in practice, hardware on the average rack generated only 2.1 kW of heat.

According to Uptime, most data rooms struggled to handle even that. Hot spots occurred even though the average computer room used 2.6 times more cooling capacity than optimally required for the heat load. One room actually had 25 percent hot spots despite running 10 times more cooling capacity than required. In fact, one out of every 10 servers measured ran hotter than its published reliability guidelines. Some ran at 100°F. For every 18 degrees above 70°F, reliability declines by 50 percent.
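As a rough worked example of that last rule of thumb (assuming, for illustration, that reliability simply halves with each additional 18 degrees; the article states only the halving per step), a server running at 100°F sits 30 degrees above the 70°F baseline, so its expected reliability falls to

$0.5^{(100-70)/18} = 0.5^{1.67} \approx 0.31$

of the baseline value, a loss of more than two-thirds.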

No wonder excessive heat was the No. 1 facilities concern among data center managers, according to a recent survey by Gartner Group, a well-known IT consultancy in Stamford, Conn. “Power and cooling will be a top-three issue with all chief information officers in the next six to 12 months,” Gartner consultant Michael Bell said.

It’s not as if data room cooling systems lack sophistication. They are, in essence, large refrigerators. Most of them circulate cool air under raised floors, then push it through floor vents aimed at the racks. The cool air passes over the hot servers, then exits the room as warmed air through returns in the side walls, mixing with cold air along the way.

The problem, says Uptime, is that only 28 percent of the chilled air actually reaches the computer equipment. The remaining 72 percent bypasses the servers entirely. More than half that amount escapes from unsealed cable holes and conduits. Another 14 percent exits floor vents facing away from the hot racks rather than toward them. More air is obstructed by the snake’s nest of wires under the floor. A full 10 percent of air conditioning equipment fails without notifying operators.

In addition, hot exhaust air from the racks mixes with the cool room air, warming the air that reaches the equipment and wasting cooling capacity. Most rooms are dotted with hot spots where temperatures exceed requirements. Servers generally respond to such high temperatures by throttling back on power consumption, which reduces performance.

In the past, data room managers simply threw more cooling at hot rooms or moved their servers to larger rooms. Higher power costs and tighter budgets have made these options more painful today.

Electronics have grown much hotter over the past decade, making cooling a top priority in data centers like the one pictured on facing page. To keep equipment cool, IBM routes air conditioning directly into server racks like the one on the right.


Unfortunately, electronics keep getting hotter. Between 1992 and 2001, the heat load per square foot of server rose about 18 percent annually, more than quadrupling to 1,100 watts per square foot. Although the rate of heat buildup has slowed, it remains in double digits and is not expected to level off until 2010. Correcting air bypass problems may solve some of today’s problems, but it won’t enable users to run denser racks in the future.
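Those two figures are consistent: compounding 18 percent annual growth over the nine years from 1992 to 2001 gives $1.18^{9} \approx 4.4$, which is indeed more than a quadrupling.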

Ben Steinberg, a staff applications engineer at American Power Conversion Corp. of Newport, R.I., agrees. “About 90 percent of new data centers run racks at only 2 to 5 kW,” he said. “Two of those racks together use as much power as a home oven. And as soon as you turn on the switch, you need at least as much power to keep them cool.”

APC encloses racks to remove hot air efficiently (left). A typical high-performance data center (right), in which IBM has a new racking system.


Multiply this by dozens or even hundreds of racks, and it is easy to see why rising electronics temperatures are overwhelming cooling systems and utility budgets. APC, better known for backup power supplies, thinks it has a solution that is simplicity itself: Isolate the problem.

The company starts by abandoning the old cooling architecture. Instead of running cold air up from the floor, APC supplies cold air from rack-size towers mounted along each row. It then monitors server temperatures, adjusting each individual air conditioner tower to achieve optimal cooling. Such localized cooling is so efficient that users can boost server rack power to 18 kW—nearly nine times the average found by Uptime.
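To make the control idea concrete, here is a minimal sketch in Python of that kind of closed-loop, row-level adjustment. The setpoint, gain, tower names, and temperature readings are illustrative assumptions, not APC's actual software.

```python
# A minimal sketch (not APC's software) of closed-loop, row-level cooling
# control: read server inlet temperatures, then nudge each in-row cooling
# tower's fan speed toward a setpoint. All values are illustrative assumptions.

TARGET_INLET_F = 72.0   # assumed target server-inlet temperature, deg F
GAIN = 0.05             # assumed proportional gain, fan fraction per deg F


def adjust_tower(current_speed, inlet_temps_f):
    """Return a new fan-speed fraction (0.2-1.0) for one cooling tower,
    driven by the hottest inlet temperature among the servers it serves."""
    error = max(inlet_temps_f) - TARGET_INLET_F   # positive when servers run warm
    new_speed = current_speed + GAIN * error      # speed up when warm, ease off when cool
    return max(0.2, min(1.0, new_speed))          # clamp to a safe operating range


# One control cycle over two hypothetical towers in a row.
speeds = {"towerA": 0.50, "towerB": 0.50}
readings = {"towerA": [74.2, 76.8, 73.1], "towerB": [69.8, 70.5, 71.0]}
for name in speeds:
    speeds[name] = adjust_tower(speeds[name], readings[name])
print(speeds)   # towerA speeds up; towerB eases off slightly
```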

A second innovation adds enough cooling capacity to handle a 30 kW bank of servers, APC said. APC encloses the exhaust end of two adjacent rows of racks under a single hood. According to the company, instead of releasing hot air into the room, where it can produce hot spots, the system contains hot air and carries it away.

APC is not alone in trying to cool data centers. Just about every provider of server racks and IT services wants a piece of the lucrative $25 billion data center renovation, expansion, and relocation market. Cooling and power are a big part of this business. In fact, market researcher International Data Group Inc. of Boston predicts that data center spending on these products will surpass money invested in servers within the next year or two.

IBM believes even the largest, most sophisticated data center managers need help with cooling. “Ten and fifteen years ago, we all built data centers using rules of thumb,” Sams said. “At the time, we were generating only 1 to 3 kW of heat in a rack-size space. Today, it’s 20 kW and maxes out at 32 kW, and those rules of thumb have no value. We have to be cognizant of what we put where or we will melt the servers in the rack.”

Like APC, IBM also has a new racking system capable of handling 20 to 25 kW heat loads. Instead of adding in-line air conditioners, IBM redirects the data center’s air conditioning through the rack itself. Like APC, it encloses its racks with a roof, but unlike APC it uses a cold rather than hot center aisle and exhausts the heated air into the data center. “We believe the design is 40 to 50 percent better than conventional racking, and we are going to run tests to compare it with APC,” Sams said.

Installation calls for running new air conditioning vents under the raised data center floor. This space is usually filled with a snake’s nest of unused electrical cables from previous room configurations. Removing old cables to make room for the vents eliminates airflow obstructions and also improves the data center’s efficiency, according to Sams.

IBM also removes heat with a water-cooled heat exchanger attached to the back of a rack. “It looks like an auto radiator and eliminates about 50 percent of the heat,” Sams said. “We used it at Georgia Tech’s high-performance computer center to enable a non-state-of-the-art data center to support state-of-the-art technology.”
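For a rough sense of the water side of such an exchanger (the figures below are illustrative assumptions; the article gives no operating conditions), removing half the heat of a 20 kW rack means carrying away about 10 kW. With an assumed 5 K temperature rise in the cooling water, the required flow is

$\dot{m} = \dfrac{Q}{c_p\,\Delta T} = \dfrac{10\ \text{kW}}{4.18\ \text{kJ/(kg·K)} \times 5\ \text{K}} \approx 0.48\ \text{kg/s},$

or roughly 7.5 gallons per minute.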

In addition, IBM provides power management software that enables IT managers to adjust power and heat output. “They can power down based on workload, or move workload to environments that are not effectively using the power and cooling capacity that they have,” Sams said. “It lets them spread the work around so you don’t need extra cooling or build up hot spots.”
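Here is a minimal sketch in Python of the placement logic Sams describes, under assumed rack names, capacities, and loads; it illustrates steering work toward unused power and cooling headroom and is not IBM's actual power management software.

```python
# A minimal sketch of workload placement: steer new work toward racks with
# unused power and cooling headroom instead of adding cooling. Rack names,
# capacities, and loads are hypothetical; this is not IBM's software.

racks = {
    # name: (rated power/cooling capacity in kW, current load in kW)
    "rack-01": (20.0, 18.5),   # nearly full; a hot-spot candidate
    "rack-02": (20.0, 6.0),    # plenty of unused capacity
    "rack-03": (20.0, 9.5),
}


def place_job(job_kw):
    """Return the rack with the most headroom that can absorb the job,
    or None if no rack can (power down or defer the work instead)."""
    headroom = {name: cap - load for name, (cap, load) in racks.items()
                if cap - load >= job_kw}
    return max(headroom, key=headroom.get) if headroom else None


print(place_job(3.0))    # -> 'rack-02', the most lightly loaded rack
print(place_job(25.0))   # -> None: nothing fits; shed or defer the load
```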

For IT departments seeking to cut power bills and stretch existing assets, such solutions can’t come too soon.

Copyright © 2006 by ASME
