Hot Designs for Cool Data

The generic term "data center" implies many things. In other words, a data center can be anything from a single computer room in an office to a huge, off-site server farm that serves data and telecommunications operations. The latter is the more typically the case, as many companies store their most important records or perform a significant portion of their IT-related business functions in off...

By Mindi Zissman, Contributing Writer April 1, 2005

The generic term “data center” covers a lot of ground: a data center can be anything from a single computer room in an office to a huge, off-site server farm that serves data and telecommunications operations.

The latter is more typically the case, as many companies store their most important records or perform a significant portion of their IT-related business functions in off-site facilities. Recent federal legislation—the Public Company Accounting Reform and Investor Protection Act of 2002—may further drive business along such a path. More commonly known as Sarbanes-Oxley, the act requires publicly-traded companies to retain all correspondence and records between executives and their auditors for a period of five years. And although the jury is still out on the legislation’s ultimate impact, early indicators point to an even more critical role that data centers will play in the business world (see “Sarbanes-Oxley and Data Centers”).

But no matter what the size or location, all data centers have one thing in common: power and cooling loads are critical. In these facilities, mechanical and electrical systems make up about 30% to 40% of the total space and are so crucial to daily operations that their failure—even for less than a second—can devastate a company financially. Consequently, M/E system design is one of the most critical components of these mission-critical facilities and must keep pace with the constant change in information technology.

Considering the rapid rate at which IT changes, however, designing M/E systems that keep pace is a major challenge.

“IT changes every 18 months, but a data center is intended to last 10 to 20 years,” says Raj Gupta, president of Environmental Systems Design, Inc. (ESD), Chicago. “The only way we can get around this dilemma is to design something that is flexible.”

Collaboration breeds flexibility

Gupta strives for a holistic approach, which he feels is the only way to ensure proper flexibility. ESD encourages its clients to bring in all stakeholders as part of the design process. “From facility and IT managers to outside vendors, all should have some type of involvement,” he says. “The problem is that each team needs different specifications and doesn’t always communicate with one another. With the holistic approach, however, we can properly plan for future expansion.”

Besides integrated collaboration, another way consulting engineers can ensure the necessary flexibility for expanding M/E systems, according to Thomas Reed, senior principal of technology facilities at Kling, Philadelphia, is through a modular approach. “A data center is a 15-year asset. Designing it to its full capacity from day one is not the proper use of the money,” he says. “Instead, we build the systems so they’re scalable from day one.”

This approach, however, does not mean all the generators and UPS have to be purchased on day one. As the customer deploys more IT hardware, explains Reed, that’s when facilities should deploy new UPS and generators.

The same logic applies to other critical or redundant systems. “If we put in two chillers because we need them today, and we know we’ll need another two in a few years, we may allocate the space, but not install them today,” adds Gupta.

In other words, modular design allows undeveloped space in a data center to be reserved for IT systems expansion, while also laying the groundwork for continuing growth of the mechanical and electrical systems. “Building the data center in modules protects its future growth,” says Gupta, who sometimes plans for expansion as many as 10 years out.
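
As a rough illustration of the modular math Reed and Gupta describe (deploy only what today's load requires, plus a spare), the sketch below sizes UPS modules against a growing IT load. The module size, load forecast and N+1 redundancy policy are hypothetical, not figures from either firm.

```python
# Illustrative only: how many UPS modules to deploy each year under an
# N+1 policy as the critical IT load grows. All numbers are hypothetical.
import math

MODULE_KW = 500                       # capacity of one UPS module (assumed)
load_forecast_kw = {2005: 600, 2007: 1100, 2010: 1800, 2015: 2600}

def modules_required(load_kw, module_kw=MODULE_KW):
    """Modules needed to carry the load, plus one redundant unit (N+1)."""
    return math.ceil(load_kw / module_kw) + 1

for year, load in load_forecast_kw.items():
    print(f"{year}: {load:,} kW load -> install {modules_required(load)} UPS modules")
```

The space, conduit and pad for the later modules are planned on day one; the hardware is bought only when the load actually arrives.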

Really reliable

Designing with holistic and modular approaches leads to that other critical element of data center design—reliability. A major issue engineers face in creating dependable data centers is matching the reliability of the mechanical and electrical systems. If one system is markedly more robust than the other, the facility is only as dependable as the weaker of the two.

“Sometimes you see [data facilities] with an ultra-robust electrical system and a really weak mechanical system,” says Bill Kosik, managing principal with the Chicago office of EYP Mission Critical Facilities.

It is paramount, in his opinion, that reliability for both be in tandem. EYP MCF regularly performs a reliability risk assessment on its data centers using software that looks for compatible mechanical and electrical systems design, system redundancy and all single points of failure.
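
EYP MCF's assessment software is its own; the sketch below only illustrates the underlying arithmetic, combining assumed component availabilities in series and parallel so that an unmatched or non-redundant path, a single point of failure, stands out.

```python
# Illustrative sketch (not EYP MCF's tool): combine assumed component
# availabilities to see which path limits the whole site.

def series(*avail):
    """Everything must work: availabilities multiply."""
    result = 1.0
    for a in avail:
        result *= a
    return result

def parallel(*avail):
    """Redundant paths: the system fails only if every path fails."""
    unavail = 1.0
    for a in avail:
        unavail *= (1.0 - a)
    return 1.0 - unavail

# Hypothetical annual availabilities
utility, generator, ups, chiller, crac = 0.999, 0.995, 0.9999, 0.998, 0.997

power = series(parallel(utility, generator), ups)  # redundant sources, one UPS
cooling = series(chiller, crac)                    # no redundancy: two single points of failure
print(f"power path   {power:.6f}")
print(f"cooling path {cooling:.6f}")
print(f"whole site   {series(power, cooling):.6f}")  # the weaker path dominates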

ESD’s Gupta offers another way of looking at data center reliability. He identifies two reliability strategies: quantitative and qualitative. Both are essential in keeping a data center running without fail.

“Many customers will spend a lot of money on theoretical reliability—the quantitative aspect—including backup systems and equipment, but won’t spend money on proper commissioning of and training on the systems—the qualitative approach,” says Gupta.

The former, he adds, literally can translate into clients spending hundreds of dollars per square foot to ensure four, five or even six nines of reliability (99.9999%), but nothing, not even 1% or 2%, is spent on the latter. “We see this over and over again, where systems are designed, but never commissioned, and those in charge of running them are not capable of solving a problem if one arises.”

This is a very dangerous strategy in Gupta’s mind. “If the operators aren’t trained on how the backup system works, or if [systems are] not maintained, there is a false sense of security. The client has spent all this money on security, but it doesn’t work,” he says.

In fact, Gupta says only one-third of his data center clients complete comprehensive commissioning, and slightly fewer than a fourth maintain ongoing training programs. Rather than perpetuate this situation, Gupta thinks many companies might be better off going with a system that is, say, only three nines, where the money saved could be put into the training and commissioning. “They would get more use out of it, and it would be more reliable,” he argues.
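
The "nines" Gupta refers to translate directly into allowable downtime per year, which is where the three-versus-six-nines tradeoff becomes concrete:

```python
# Expected downtime per year at each level of availability ("nines").
MIN_PER_YEAR = 365 * 24 * 60

for nines in (3, 4, 5, 6):
    availability = 1 - 10 ** (-nines)
    downtime = MIN_PER_YEAR * (1 - availability)
    print(f"{nines} nines ({availability * 100:.4f}%): about {downtime:,.1f} minutes of downtime per year")
```

A three-nines design still allows nearly nine hours of downtime a year; the money saved relative to a six-nines plant is what Gupta would redirect into commissioning and operator training.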

Keeping cool at knife point

Beyond scalability and system reliability, there is yet another major factor in high-density data center design: accounting for blade servers. These more svelte servers carry as much memory as their much larger, but conventional, predecessors. And because of their size, they can be stacked on top of one another. This allows for much more computing power in a given space, but it also generates a lot more heat. “Blade servers have such a high-density heat load that they require a lot more cold-air supply per rack to take care of that heat,” says Tom Squillo, HVAC discipline leader at EYP MCF’s Chicago office. “The challenge is getting that amount of air in front of the rack.”
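
The standard sensible-heat relation shows why rack density drives airflow: at an assumed 20°F rise between supply and return air, each kilowatt of server load needs roughly 160 cfm of cold air. The rack loads below are assumptions chosen for illustration.

```python
# Sensible-heat relation: cfm = 3,412 x kW / (1.08 x deltaT_F).
# The 20 F supply/return split and the rack loads are assumptions.

def cfm_required(rack_kw, delta_t_f=20.0):
    """Cold-air volume needed to carry away a rack's heat load."""
    return rack_kw * 3412 / (1.08 * delta_t_f)

for rack_kw in (2, 5, 10, 20):   # a legacy rack vs. increasingly dense blade racks
    print(f"{rack_kw:>2} kW rack -> about {cfm_required(rack_kw):,.0f} cfm of supply air")
```

Pushing 3,000-plus cfm through the perforated tiles in front of a single rack is exactly the challenge Squillo describes.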

There are two other cooling challenges. The first is determining where to place the cooling supply load in a facility that will constantly change. The second is addressing hot spots and, frankly, getting cool air where it’s needed, as opposed to conditioning the entire space.

Underfloor schemes can help, says Squillo. “A lot of flexibility can come from a raised-floor system that has the ability to move supply grids around,” he says. “As equipment and heat loads change, the locations of the preformatted air-supply tiles can be adjusted and moved around to get the cooling where it’s needed.”

An underfloor variation that is gaining popularity is the hot aisle/cold aisle model. In this configuration, the fronts of equipment racks face each other, pulling cool air up from the raised floor and across the computers from front to back, while the computer equipment itself discharges heated air out the back of the racks.

“The best air-conditioning system is one that will return the heated air directly back to the air-conditioning units most efficiently,” says Squillo.

This is where supplemental cooling comes into play. The common underfloor-air-supply design is often supplemented with overhead cooling units that can blow air directly onto a blade server’s racks. The building’s main HVAC system, however, must be able to get enough air into that space effectively. “In some areas we’ve even eliminated the raised floor altogether and just taken advantage of the ceiling height to get stratification of the heat high up,” he says.

Final link: energy efficiency

So what else is there besides scalability, reliability and cooling that designers need to be cognizant of in creating a new generation of data facilities? Energy efficiency.

According to Squillo, a typical office building uses 75 to 100 watts per sq. ft., while the latest data centers consume anywhere from 100 to 200 watts per sq. ft. “One of the main things our clients are now looking at, because of the new, high heat loads, is energy use,” he says. “Energy efficiency is a lot more important to them lately because of the amount of money they’re spending each year; now it’s No. 2, behind reliability.”
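
A back-of-the-envelope calculation shows why energy has moved up the list. Assuming a data center runs around the clock at an electricity rate of $0.08 per kWh (both assumptions, not figures from EYP MCF), the densities Squillo cites translate into annual costs like these:

```python
# Rough annual electricity cost per square foot for a 24x7 load.
# The $0.08/kWh rate is an assumption for illustration.
RATE_PER_KWH = 0.08
HOURS_PER_YEAR = 8760

def annual_cost_per_sqft(watts_per_sqft, rate=RATE_PER_KWH):
    return watts_per_sqft / 1000 * HOURS_PER_YEAR * rate

for density in (100, 150, 200):   # W per sq. ft.
    print(f"{density} W/sq. ft. -> about ${annual_cost_per_sqft(density):.0f} per sq. ft. per year")
```

At those rates, a facility of a few hundred thousand square feet sees a difference of millions of dollars a year between the low and high end of that range.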

Systems designers are looking at several energy-efficient solutions, including economizer cooling systems, which provide free cooling during cold weather by using outside air—not electricity—to cool water, or using the cold outside air itself to cool the space. Other possibilities include variable-speed fans and pumps, tighter temperature and humidity controls and thermal storage.
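
A minimal sketch of the economizer logic described above, with assumed changeover temperatures; a real design would be driven by the plant's chilled-water setpoints and local weather data.

```python
# Sketch of economizer ("free cooling") changeover. The 45 F and 60 F
# thresholds are assumptions, not design values from any firm quoted here.

def economizer_mode(outdoor_temp_f):
    if outdoor_temp_f <= 45:
        return "full free cooling (chillers off)"
    if outdoor_temp_f <= 60:
        return "partial free cooling (chillers assisted)"
    return "mechanical cooling only"

for temp in (20, 50, 75):
    print(f"{temp} F outdoors -> {economizer_mode(temp)}")
```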

Using energy and reliability modeling, firms like EYP MCF can mathematically model the reliability and energy performance of their systems design before they execute it. “We work together with our clients through a series of options to come up with a good base for energy-efficient design,” says Kosik, who estimates about 30% to 40% of EYP MCF’s clients make energy efficiency a priority. “At the starting point in the design process we ask, ‘What can we use to make an energy-efficient decision?’ That’s where the modeling comes in.”

This stage is crucial, according to Kosik, because what is energy-efficient for one building may not be for another. For example, EYP MCF is building two identical data centers for a client—one in the southeastern United States and another in the upper Midwest. Both facilities are more than 200,000 sq. ft. and have identical characteristics. However, their geography made a difference in the energy-efficient design of their systems.

“Some of the thermal storage techniques didn’t work in the Southeast because of the way the utility rate is structured there. There wasn’t enough of a rate reduction at night to have a payback,” says Kosik.

The weather in the upper Midwest, on the other hand, he explains, allows the firm to not only use thermal storage, but also use additional techniques including free cooling.
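
The payback logic behind that geographic difference can be sketched with hypothetical numbers: thermal storage shifts chiller energy from day to night, so the savings hinge on the spread between on-peak and off-peak rates.

```python
# Simple payback for thermal storage, driven by the on-peak/off-peak spread.
# The shifted load, rates and installed cost below are hypothetical.

def simple_payback_years(kwh_shifted_per_day, on_peak, off_peak, installed_cost):
    annual_savings = kwh_shifted_per_day * 365 * (on_peak - off_peak)
    return float("inf") if annual_savings <= 0 else installed_cost / annual_savings

# A wide Midwest-style rate spread vs. a nearly flat Southeast-style tariff
print(f"wide spread:   {simple_payback_years(10_000, 0.12, 0.05, 1_500_000):.1f} years")
print(f"narrow spread: {simple_payback_years(10_000, 0.09, 0.08, 1_500_000):.1f} years")
```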

Of all the lessons to be learned in good data center design, be it cooling, equivalent reliability of M/E systems or implementing proper training and commissioning, Kosik says there is one overarching tenet: “It’s not one size fits all.”

Sarbanes-Oxley and Data Centers

The Public Company Accounting Reform and Investor Protection Act of 2002, otherwise known as Sarbanes-Oxley, requires publicly-traded companies to retain all records and correspondence between executives and their auditors for five years.

Most in the industry believe this will have an effect on data centers, but that has yet to be determined. “There are a lot of different ideas out there about what it entails and who is impacted,” says Cyrus Izzo, senior vice president and national critical facilities director for Syska Hennessy Group, New York.

Some speculate it will require centralizing corporate information in data centers. Others imagine it may exclusively impact the financial accountability of data centers.

The future will tell. “All is well until something goes wrong,” says Izzo. “But, when an accountability incident is reported, we may see the landscape change. I wouldn’t be shocked to see a lawsuit asking why these directors didn’t view their critical facility as a corporate asset.”

The Power Behind the Design

It takes only 20 milliseconds of lost power for a data center to drop offline completely.

“The electrical systems need to be designed so that if one component fails, the rest of the system takes over in less than 20 milliseconds,” says Mike Kuppinger, senior vice president of technology and mission-critical facilities at Environmental Systems Design, Chicago.

“By putting components in parallel so they back each other up, or by installing fast switches, the integrity of the entire system is maintained. These ensure the entire system has a higher reliability than just the sum of its components.”
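
The math behind that claim is the textbook parallel-redundancy calculation; with an assumed 99.9% reliability per module, two modules backing each other up fail together far less often than either fails alone.

```python
# Two components in parallel fail only if both fail at once.
# The 0.999 per-module figure is assumed for illustration.
single = 0.999
paralleled = 1 - (1 - single) ** 2

print(f"one module:      {single:.6f}")      # three nines
print(f"two in parallel: {paralleled:.6f}")  # six nines
```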

Within the last year, Kuppinger says three breakthroughs have significantly enhanced the design of data center electrical systems:

Affordability of static switches.

New computer technology, with manufacturers creating IT systems with multiple, built-in power ports.

Modular, “hot swappable” parts that allow electrical equipment to be replaced without system shutdown.

That being said, the biggest challenge still facing electrical engineers, says Kuppinger, is power density.

One solution for centers with power requirements over 150 watts per sq. ft. is to spread out their computer equipment. But this forces owners to invest more in real estate, even if it means spending less on bringing power to and cooling tightly packed spaces.
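
In rough, assumed numbers, the real-estate side of that tradeoff looks like this: the same IT load spread at a lower power density simply needs proportionally more computer-room floor.

```python
# Floor area needed for the same critical load at different power densities.
# The 1,500 kW load is hypothetical.
IT_LOAD_KW = 1500

for density in (50, 100, 150):                 # watts per sq. ft.
    area_sqft = IT_LOAD_KW * 1000 / density
    print(f"{density:>3} W/sq. ft. -> {area_sqft:,.0f} sq. ft. of computer room")
```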

“Right now we’re at a plateau,” says Kuppinger. “One of two things will have to happen. Either manufacturers will have to create technology that uses less electricity and produces less heat, or we’ll have to look for better heat transfer methods.”

New Resource for Data Center Design

A new publication from ASHRAE, Datacom Equipment Power Trends and Cooling Applications, was written to meet the needs of architects, engineers, planners and operations managers when designing, maintaining and provisioning data-communication facilities.

Cooling equipment for datacom centers typically has a much longer life cycle than the computer equipment it cools, explains Don Beaty, P.E., chair of ASHRAE’s technical committee that wrote the book. Consequently, the design of the cooling system must be able to accommodate one or more computer equipment redeployments to avoid major infrastructure upgrades—or premature obsolescence—of a data facility.

The book includes power trend charts with the latest information from the major datacom equipment manufacturers. The charts provide improved background on the derivation of the data, extended forecasting up to 2014 and further delineation to distinguish between the multiple types of servers and communications equipment. The reference also includes instructional information and sample applications for the trend charts.

The book also contains key aspects of planning a facility, describing the power density loads and the need for a collaborative approach between the building cooling/facilities industry and the datacom equipment/information technology (IT) industry. An integral part of that is a comprehensive glossary of common terms from both sets of industries, says Beaty.

“This book is intended to be a single point of reference aimed at the entire datacom cooling industry,” he says.

From Military Bunker to Data Center: Able Adaptive Reuse

While it has been common to repurpose vacant warehouses and industrial facilities into large data centers, one recent project is an especially creative reuse.

For 80 years, underground bunkers at the U.S. Army depot in Savanna, Ill., with their 24-in. concrete walls and 6-in. steel doors, served as a fortress that safely stored tons of munitions.

Closed by Congress in 2000, the 13,000-acre bomb-proof facility is being resurrected by Savanna Depot Technologies Corp. into what is claimed by its owners to be one of the country’s largest and most secure data centers, with an infrastructure that can withstand the most potent terrorist attack.

Key to making this data center a self-sustained fortress is energy technology that will allow the facility to generate its own power completely separate from the utility grid. Operators plan to incorporate combined-heat-and-power (CHP) technology with initial sites installed this year.

At the Savanna data center, there are plans to funnel waste heat from on-site power generators to absorption chillers that will make cold water to help cool each data center igloo.

In addition, the first 21 data center igloos will feature software control modules for management of emergency power from Windsor, Colo.-based Encorp, along with digital paralleling switchgear and generator control software modules. Each individual igloo will have an independent power supply. Monitoring software will create larger “microgrids” that allow operators to remotely monitor diesel gensets and the igloos they serve throughout the complex.