IT Experts Expound on Data Centers

By Staff November 1, 2006

Top IT experts from the likes of IBM, Sun and Intel, addressing a packed room of professionals involved in the design, operation and construction of data centers, talked about growing heat densities associated with computer equipment in these facilities.

“The question is not whether we can put 50 kW in a rack but whether we should,” said John Pfluegler, a technology strategist with Dell.

Pfluegler was one of six leading IT experts participating in a panel discussion of data center heat management at Emerson Network Power’s recent AdaptiveXchange Conference in Columbus, Ohio. “Density, however, is a choice,” added Christian Belady, P.E., a technologist with Hewlett Packard. “Users can choose not to put as many servers in a rack. But it’s really a real estate issue—greater densities tend to lower your costs, but you can’t do some of the things you did before,” he said.

Robert Mitchell of Computerworld magazine, who moderated the panel, asked if water-based cooling is the answer. Nick Aneshansley, vice president of technology for Sun, said that’s one way to go, but not something he’s prepared to do right now. “If fluids are available, it’s a really good strategy because you can cut down on moving air, which uses a significant amount of energy,” he said. “But I think liquids are not ready for prime time. There needs to be more standardization and even the possibility that it’s brought in as a utility. When that happens, we’ll design the equipment to plug right into such utilities.”

On the other hand, Steve Madera, vice president of Liebert Environmental Business, argued that waiting may be a luxury many can’t afford: a great deal of energy is wasted moving forced air, and bringing cooling closer to the source of the heat makes a lot of sense. “If you start to get over 35 kW per rack, you start losing the capability of cooling the rack with air—you’ll need a heat exchanger—and if you get upwards of 50 kW per rack, you’ll likely need some kind of liquid cooling.”

Right now, he added, there are two viable choices: chilled water or a refrigerant-based liquid.
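To put Madera’s thresholds in perspective, the sketch below estimates the airflow a rack would need at various heat loads. It is a back-of-the-envelope illustration, not a figure from the panel; the air properties and the 12 K inlet-to-outlet temperature rise are assumed values.

```python
# Back-of-the-envelope airflow needed to carry away a rack's heat load.
# Air properties and the 12 K inlet-to-outlet rise are assumed values,
# not figures from the panel.

AIR_DENSITY = 1.2   # kg/m^3, typical for room-temperature air
AIR_CP = 1005.0     # J/(kg*K), specific heat of air
DELTA_T = 12.0      # K, assumed temperature rise across the rack

def airflow_m3_per_s(rack_kw: float) -> float:
    """Volumetric airflow required to remove rack_kw of heat."""
    return (rack_kw * 1000.0) / (AIR_DENSITY * AIR_CP * DELTA_T)

def airflow_cfm(rack_kw: float) -> float:
    """Same flow expressed in cubic feet per minute."""
    return airflow_m3_per_s(rack_kw) * 2118.88  # 1 m^3/s ~ 2118.88 CFM

for kw in (5, 15, 35, 50):
    print(f"{kw:>2} kW rack -> {airflow_cfm(kw):6,.0f} CFM")
```

Under these assumptions, a 35 kW rack needs roughly 5,000 CFM and a 50 kW rack more than 7,000 CFM—airflow volumes that are difficult to move through a single rack footprint, which is the practical limit Madera describes.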

Roger Schmidt, an engineer with IBM and a member of the ASHRAE TC9.9 committee on mission-critical facilities, is a fan of chilled water. “In the old days, chilled water came right to the chip and you needed a cold plate,” he said. “There are lots of new technologies available, but I’m not so sure that a return to direct liquid cooling to a cold plate to cool a processor is a bad idea.”

Belady didn’t disagree, but like Aneshansley, said standards are the key, and that perhaps it should fall to organizations like ASHRAE to create a road map.

“But it’s also going to depend on the adoption rate by the industry—if it’s rapid, we’ll use it, if not, we won’t,” he said.

Belady added that there is, of course, also a cost issue in converting existing facilities. One strategy that makes a lot of sense, in his opinion, is converting only critical areas. “You don’t have to do a whole data center, but if you plan to keep doing upwards of 30 kW per rack, you may be better off reconsidering the whole process.”

Computational fluid dynamic modeling, he added, should be an integral part of that process.

So should driving down energy inefficiencies in the major pieces of electrical equipment, added Pfluegler. “A lot of power is lost at the PDU level, and not enough makes it to the IT equipment itself. We really have to examine this,” he said.
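A simple tally shows why those conversion losses matter. The stage efficiencies below are assumptions chosen for illustration, not numbers Pfluegler cited; the point is only that multiplying several reasonably efficient stages together leaves a noticeably smaller share of power for the IT load.

```python
# Illustrative tally of electrical losses between the utility feed and
# the IT equipment. The stage efficiencies are assumptions for this
# sketch, not figures cited by the panel.

CHAIN = [
    ("UPS (double conversion)", 0.90),  # assumed
    ("PDU / transformer",       0.96),  # assumed
    ("Distribution wiring",     0.99),  # assumed
]

def delivered_fraction(chain):
    """Fraction of input power that survives every conversion stage."""
    fraction = 1.0
    for _name, efficiency in chain:
        fraction *= efficiency
    return fraction

frac = delivered_fraction(CHAIN)
print(f"Power reaching the IT equipment: {frac:.1%}")
print(f"Lost upstream of the servers:    {1 - frac:.1%}")
```

With these assumed efficiencies, roughly 85% of the power entering the room reaches the IT equipment, and that is before any cooling overhead is counted.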

Madera concluded by noting that these issues have only recently come to a head, and that the industry as a whole has only just begun to address them, largely because problems in the field have forced the matter.

“But we need to start creating and sharing best practices because traditional data center cooling designs just won’t work,” he said.