A Super Job of Cooling at Virginia Tech

By Scott Siddens, Senior Editor September 1, 2004

When Virginia Polytechnic Institute and State University decided to build a supercomputer, it was an ambitious goal: creating one of the five fastest supercomputers in the world. But coming up with a solution for cooling such high power density required equal innovation and determination.

School officials at Virginia Tech, in Blacksburg, planned to create their supercomputer by clustering 1,100 64-bit Power Mac G5 desktop computers. Among other things, this power cluster would enable simulation of natural or human-engineered systems, serving activities such as nanoscale electronics modeling, quantum chemistry, aerodynamics, computational acoustics and molecular modeling.

The project also had the potential to change the way supercomputers are assembled, creating a new development model that could bring supercomputing within reach of organizations that previously couldn’t afford it.

But providing adequate cooling for this power density was a challenge. An inadequate solution would degrade the machines' performance and shorten their life span.

“The challenge is not necessarily in the quantity of computers, but in their density,” states Benjie Linkous, electrical engineer with Whitescarver, Hurd & Obechain, Inc., Roanoke, Va., and a designer on the project. “Conventional underfloor cooling systems start to become impractical at very high heat densities due to the volume of air to be moved.”

Linkous further explains that when computers are spread out over a larger floor area, this volume of air movement does not create problems. In this case, however, the computers are concentrated within a small footprint, requiring an airflow distribution that conventional cooling techniques could not practically deliver.

“Computers that comprise the supercomputer are all housed in the AISB Machine Room,” he says. “The room has a raised floor that is used extensively as an air plenum and also for data and power.”

The supplier that provided the solution worked from preliminary project specifications to analyze equipment power consumption and projected heat loads. Then, using a specialized computer program that models airflow beneath the floor and through the tiles, its engineers determined the optimum arrangement of racks in the room and began to model the effectiveness of different cooling system configurations.
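The article doesn't describe the vendor's modeling program, but the basic physics it has to capture can be sketched simply: flow through a perforated tile behaves roughly like flow through an orifice, so per-tile airflow rises with the square root of the local under-floor static pressure. The minimal Python sketch below illustrates that relationship; every value in it (discharge coefficient, open area, plenum pressure) is an illustrative assumption rather than project data, chosen so the result lands near the nominal 450-cfm tile figure cited later in the article.

```python
# Illustrative sketch only: estimates airflow through one perforated floor
# tile from under-floor static pressure using a simple orifice-flow model.
# All numbers are assumptions for illustration, not project data.
import math

AIR_DENSITY = 1.2      # kg/m^3, air at roughly room conditions
DISCHARGE_COEFF = 0.6  # typical orifice discharge coefficient (assumed)

def tile_airflow_cfm(static_pressure_pa: float, open_area_m2: float) -> float:
    """Approximate volumetric flow through one perforated tile.

    Orifice relation: Q = Cd * A * sqrt(2 * dP / rho), converted to cfm.
    """
    if static_pressure_pa <= 0:
        return 0.0  # no positive plenum pressure means no upward flow
    q_m3s = DISCHARGE_COEFF * open_area_m2 * math.sqrt(
        2 * static_pressure_pa / AIR_DENSITY
    )
    return q_m3s * 2118.88  # 1 m^3/s is about 2,118.88 cfm

# A 2-ft x 2-ft tile (~0.37 m^2) at 25% open area, 9 Pa plenum pressure:
print(f"{tile_airflow_cfm(9.0, 0.37 * 0.25):.0f} cfm")  # ~455 cfm
```

Doubling the plenum pressure in this model raises per-tile flow by only about 40 percent, which hints at why uneven under-floor pressure translates directly into uneven tile airflow.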

Each configuration was based on a “hot aisle/cold aisle” combination, in which cold aisles have perforated floor tiles that allow cooling air to come through the floor, while hot aisles do not. Equipment racks are arranged front-to-front so that cooling air pushed into the cold aisle is drawn in through the front of each rack and exhausted at the back into the hot aisle. Once different approaches had been reviewed, the consultants analyzed two configurations in depth: relying exclusively on traditional air-conditioning units, or creating a hybrid solution that combined traditional computer room air-conditioning units with supplemental cooling provided by a waterless system.

The traditional solution

Based on the data available at the time, designers projected a sensible heat load of nearly 2 million BTU/hr. for the 3,000-sq.-ft. facility. Assuming this heat load, the room would require nine 30-ton air conditioners: seven primary units and two backups. The primary units would generate a total sensible capacity of more than 2 million BTU/hr. and a combined airflow of 106,400 cfm. To deliver that airflow, the facility would need 236 perforated floor tiles, assuming a nominal per-tile airflow of 450 cfm. These tiles would cover nearly a third of total floor space.
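For readers who want to check the arithmetic, the figures above hang together under standard HVAC conversions (1 ton of refrigeration = 12,000 BTU/hr; sensible heat q = 1.08 × cfm × ΔT). The short Python sketch below reworks the article's numbers; the implied supply-to-return temperature rise at the end is an inference from those numbers, not a figure reported by the project.

```python
# Reworking the article's sizing figures with standard HVAC conversions.
# The implied temperature rise is an inference from the stated numbers.

heat_load_btuh = 2_000_000  # projected sensible load (from article)
unit_capacity_tons = 30     # per CRAC unit (from article)
primary_units = 7           # plus 2 backups (from article)
total_cfm = 106_400         # combined airflow (from article)
cfm_per_tile = 450          # nominal perforated-tile airflow (from article)

# 1 ton of refrigeration = 12,000 BTU/hr
primary_capacity_btuh = primary_units * unit_capacity_tons * 12_000
print(f"Primary capacity: {primary_capacity_btuh:,} BTU/hr")  # 2,520,000

# Perforated tiles needed to deliver the total airflow
tiles = round(total_cfm / cfm_per_tile)
print(f"Perforated tiles: {tiles}")  # 236

# A 2-ft x 2-ft tile covers 4 sq ft; compare with the 3,000-sq-ft room
print(f"Tile coverage: {tiles * 4 / 3000:.0%} of floor area")  # ~31%

# Sensible heat relation q = 1.08 * cfm * dT implies the design
# temperature rise across the equipment:
delta_t = heat_load_btuh / (1.08 * total_cfm)
print(f"Implied air temperature rise: {delta_t:.1f} F")  # ~17.4 F
```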

Airflows were then analyzed using the modeling program, assuming an 18-in. raised floor. With this configuration, the volume of air being pumped under the floor created extremely uneven per-tile airflows, ranging from 70 cfm to 1,520 cfm depending on a tile's position within its row and the row's location in the room. An analysis of a single row showed airflow varying from negative at one end, with air actually drawn back into the floor, to 400 cfm at the other.

In addition, under-floor pressures and velocities showed significant turbulence beneath the floor, reducing cooling system efficiency and creating the potential for hot spots that could damage the computers and reduce availability. The engineers then evaluated raising the floor from 18 in. to 40 in. to equalize airflow in the room. This change improved the situation but still resulted in less-than-optimum cooling. A 40-in. raised floor was also determined to be impractical due to physical limitations of the building.

In the end, the solution was a configuration that used fewer room-level precision air conditioners and supplemented them with rack-based cooling systems.

The hybrid solution

Based on the detailed analysis, it was clear that a combination of two 20-ton room air conditioners and 48 supplemental units would deliver the optimum cooling solution. Specialists analyzed airflow from the two 20-ton units and confirmed a more uniform distribution: variance within the room was reduced from more than 400 cfm to less than 100 cfm. With uniform airflow from the main systems established, designers determined the optimum number and configuration of supplemental cooling systems.
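The article does not state the capacity of each supplemental unit, but a rough figure can be inferred from the totals: 40 tons of room-level cooling against a roughly 2 million BTU/hr load leaves about 1.5 million BTU/hr for the 48 rack-mounted units. A hedged back-of-envelope check in Python:

```python
# Back-of-envelope check on the hybrid split. The per-unit figure is an
# inference from the article's totals, not a published specification.

heat_load_btuh = 2_000_000  # projected sensible load (from article)
room_units = 2              # 20-ton CRAC units (from article)
room_unit_tons = 20
supplemental_units = 48     # rack-mounted coolers (from article)

room_capacity_btuh = room_units * room_unit_tons * 12_000  # 480,000 BTU/hr
remaining_btuh = heat_load_btuh - room_capacity_btuh       # 1,520,000 BTU/hr

per_unit_btuh = remaining_btuh / supplemental_units
per_unit_kw = per_unit_btuh / 3_412  # 1 kW = 3,412 BTU/hr
print(f"Each supplemental unit: ~{per_unit_btuh:,.0f} BTU/hr"
      f" (~{per_unit_kw:.1f} kW)")   # ~31,667 BTU/hr (~9.3 kW)
```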

“The system uses refrigerant for heat transfer,” says Linkous, noting that it is very similar to heat-pipe technology. Pumping units circulate refrigerant to individual fan coil units mounted directly on the racks, while chilled water from the chillers is piped to each pumping unit as the medium for heat rejection from the refrigerant loop.
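Neither the refrigerant nor its operating conditions are named in the article, so the following is purely an illustrative energy balance: if each rack-mounted coil absorbs on the order of 9 kW by evaporating a refrigerant with a latent heat near 190 kJ/kg (a value typical of R-134a at moderate temperatures, assumed here), the required refrigerant flow per coil works out to roughly 50 g/s.

```python
# Illustrative energy balance for one pumped-refrigerant fan coil.
# Refrigerant properties and the per-coil load are assumptions; the
# article specifies neither.

coil_load_kw = 9.3             # per-coil heat absorption, inferred earlier
latent_heat_kj_per_kg = 190.0  # assumed, typical of an R-134a evaporator

# Energy balance: Q = m_dot * h_fg  ->  m_dot = Q / h_fg
mass_flow_kg_s = coil_load_kw / latent_heat_kj_per_kg
print(f"Refrigerant flow per coil: ~{mass_flow_kg_s * 1000:.0f} g/s")  # ~49
```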

“Once we realized just how much heat we would be dealing with, it was clear that traditional approaches to cooling would not be sufficient by themselves,” says Kevin Shinpaugh, director of research and cluster computing at Virginia Tech. In short, the cooling solution is as much a work of art as the supercomputer.

For more information on university cooling and automation solutions, visit the HVAC community at www.csemag.com. For more information on computer-rack cooling from Liebert, circle 101 on the Reader Service Card.