Cabinets, Cabling and Cooling

By Barbara Horwitz-Bennett, Contributing Editor | July 1, 2003

CSE: Is it safe to say that many of the notions the engineering community has about cooling data centers and similar mission-critical facilities are already outdated?

SPINAZZOLA: Without question. The focus must be on kilowatts per cabinet, not watts per sq. ft.
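To see why the distinction matters, consider a quick back-of-the-envelope comparison. This is a minimal sketch; the cabinet load and footprint are illustrative assumptions, not figures from the roundtable:

```python
# Sketch: per-cabinet heat density vs. the room-averaged W/sq ft metric.
# All numbers below are illustrative assumptions.

cabinet_load_kw = 3.5     # assumed per-cabinet IT load (kW)
footprint_sqft = 20.0     # assumed cabinet footprint plus its share of aisle (sq ft)

density_w_per_sqft = cabinet_load_kw * 1000 / footprint_sqft
print(f"~{density_w_per_sqft:.0f} W/sq ft at the cabinet")  # ~175 W/sq ft

# Averaged over a whole room with empty floor area, the same load can look
# like a modest 50-75 W/sq ft, hiding the concentrated heat at the cabinet.
```

The room-average number can look comfortable while individual cabinets are running several times hotter, which is exactly the blind spot the per-cabinet metric removes.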

STACK: The reality is that newer rack-based servers generate much more heat than older systems, creating hot spots within the controlled environment.

SPINAZZOLA: Yet an unwritten standard is developing in the industry: about 3.5 kW per cabinet is the high end of what I call the “old” methodology for cooling a data center—raised floor, downflow air-conditioning units, performance tiles and so on. This approach is called “random cooling,” the idea being that if enough cool air is moved, the equipment will stay cool. Unfortunately, at more than 3.5 kW per cabinet, thermal migration within the cabinet causes significant overheating in the upper third of the cabinet, resulting in a tripling of equipment failures.
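A quick sensible-heat check puts numbers on that ceiling. The sketch below uses the standard air-side relation q = 1.08 × CFM × ΔT (q in BTU/hr, ΔT in °F, sea-level air); the 20°F supply-to-return rise and the per-tile airflow are assumed for illustration:

```python
# Sketch: airflow needed to carry away a cabinet's heat load.
# Standard sensible-heat relation (sea level): q [BTU/hr] = 1.08 * CFM * dT [F]
# The 20 F air temperature rise and ~400 CFM per perforated tile are
# illustrative assumptions, not figures from the roundtable.

BTU_PER_KW = 3412  # 1 kW = 3,412 BTU/hr

def required_cfm(load_kw: float, delta_t_f: float = 20.0) -> float:
    """Airflow (CFM) needed to remove load_kw at the given temperature rise."""
    return load_kw * BTU_PER_KW / (1.08 * delta_t_f)

for load in (3.5, 8.0, 14.0):
    cfm = required_cfm(load)
    print(f"{load:>4} kW cabinet -> ~{cfm:,.0f} CFM (~{cfm/400:.1f} perf tiles)")

# A 3.5 kW cabinet already needs roughly 550 CFM -- more than a single tile
# typically delivers -- and any air that short-circuits back to the return
# leaves the upper third of the cabinet starved.
```

Under these assumptions, an 8 kW cabinet needs well over 1,200 CFM delivered precisely where the equipment draws it in, which is why random cooling runs out of headroom.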

CSE: So what kinds of technologies or design strategies can offset this heat buildup?

STACK: One solution for dealing with hot spots is to mount a fan to the back of the rack to pull heat out. A more scalable approach—one of the most significant developments in data center cooling in the past 15 years—is the new class of supplemental cooling systems that mount to the top of the rack or to the ceiling above it. They cool very efficiently because they are located close to the heat source and use the energy absorbed by a liquid changing state in the cooling process.

SPINAZZOLA: Besides air- and refrigerant-based solutions, there are also liquid-based solutions that pipe chilled water to each cabinet, which further increases heat-rejection capacity. All three solutions can cool about 8 kW per 24-in.-wide cabinet and are considered precision cooling approaches, meaning that the more directed the cooling solution, the more efficient the cooling.
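A similar one-line check on the water side shows why piping liquid to the cabinet raises capacity. The sketch uses the standard water-side relation q = 500 × GPM × ΔT; the 12°F water temperature rise is an assumed design point:

```python
# Sketch: chilled-water flow needed to reject a cabinet's heat load.
# Standard water-side relation: q [BTU/hr] = 500 * GPM * dT [F]
# The 12 F water temperature rise is an assumed design point.

load_kw = 8.0
load_btuh = load_kw * 3412          # ~27,300 BTU/hr
delta_t_f = 12.0

gpm = load_btuh / (500 * delta_t_f)
print(f"~{gpm:.1f} GPM per cabinet")  # ~4.5 GPM

# A small pipe carries what would otherwise take well over 1,000 CFM of air,
# which is why piping water or refrigerant to the cabinet raises capacity.
```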

MORDICK: More consideration should be given to duct placement and increased cooling reserves.

CSE: How about on the power side?

STACK: Newer servers have been found to operate more efficiently at higher voltages. Enclosure-mounted power distribution units (PDUs) are now available to support these systems. This can also reduce cable size to the enclosure and improve utilization of the PDU circuit breaker pole positions.
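The cable-size point is simple Ohm’s-law arithmetic: at a fixed load, current falls as voltage rises. In this minimal sketch, the 5 kW load and the 120 V versus 208 V comparison are illustrative assumptions:

```python
# Sketch: why a higher supply voltage shrinks feeder size (I = P / V).
# The 5 kW load and the 120 V / 208 V comparison are illustrative assumptions.

load_w = 5000.0

for volts in (120.0, 208.0):
    amps = load_w / volts
    print(f"{volts:.0f} V -> {amps:.1f} A")

# 120 V -> 41.7 A; 208 V -> 24.0 A. Lower current means smaller conductors
# to the enclosure and fewer high-amperage breaker positions consumed in the PDU.
```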

MORDICK: Don’t forget that servers require access to two power sources, so a high-density power distribution system is needed. Such a system also requires remote monitoring and must be able to selectively shut down certain servers when required.

CSE: When it comes to specifying this kind of equipment, how should an engineer go about determining exactly what a building owner’s needs are?

STACK: As a first step, it is important to understand the business’s reliance on its IT systems and the cost incurred if those systems go down. The system design has a major impact on the level of expandability that can ultimately be achieved, so it is important to design to meet both current and future availability requirements. Many organizations are likely to seek greater availability in the years ahead as they move toward 24×7 computing.

MORDICK: I can’t emphasize enough the importance of incorporating reserve cooling capacity, as much as 50% to 100% for future expansion, along with clean power capacity carrying a 100% to 200% reserve.

And something we’ve only briefly touched on is cabling. In the matter of meeting an owner’s needs, reserve cabling capacity should be at least 200%. One should also expect that cables installed today will be obsolete in less than three years and will need to be replaced to comply with new codes.
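Those reserve percentages translate directly into installed-capacity multipliers. In the sketch below, the 40 kW day-one load is an assumed example; the percentages are the ranges cited above:

```python
# Sketch: turning reserve-capacity percentages into installed-capacity targets.
# The 40 kW day-one load is an assumed example; the percentages are the
# ranges cited in the discussion (reserve = capacity above day-one need).

day_one_kw = 40.0

def installed(base_kw: float, reserve_pct: float) -> float:
    """Installed capacity given a reserve expressed as a percent of base."""
    return base_kw * (1 + reserve_pct / 100)

print(f"Cooling: {installed(day_one_kw, 50):.0f}-{installed(day_one_kw, 100):.0f} kW installed")
print(f"Power:   {installed(day_one_kw, 100):.0f}-{installed(day_one_kw, 200):.0f} kW installed")
print("Cabling pathways: at least 3x day-one fill (200% reserve)")
```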

SPINAZZOLA: Many data center owners deal directly with computer equipment vendors and therefore tend to buy a packaged solution consisting of power, cooling or both. The owner first needs to identify which computing products best meet his or her IT programming needs, not necessarily buy packaged solutions. The role of the engineer and manufacturer is then to develop a product and solution that addresses the cooling, power and cable-management issues supporting the IT requirements.

CSE: Since we’ve broached the issue of cabling, let’s address that subject as it relates to cabinets. As communications equipment has evolved to become more expensive and extensive in data centers, so has the volume and sensitivity of the cabling connected to it. Has the functionality of server cabinets developed sufficiently to meet an owner’s needs?

MORDICK: Space is the key to cable management. With equipment becoming smaller, a trend has developed to cram all the equipment into as few cabinets as possible. What has not changed is the number of cables that interconnect and drop to the work area/PC locations. More cables are being routed into a cabinet, often through areas that block airflow, which leads to cooling deficiencies. The real cost comes when cables are changed and need to be re-routed; in some cases this is a weekly task, whether for new offices, upgrades or new equipment. The added up-front cost of providing expansion room and easy access to cables is relatively small, yet it tempts many IT administrators to cut corners to meet short-term budget targets. In the end, it costs more in upgrades, added locations and network maintenance.

SPINAZZOLA: Cable management is one of the most important issues to be dealt with when deploying a data center. Cabinet manufacturers now need to include it in their designs to be competitive and meet industry standards. Most manufacturers have developed cable-management products that fit into the standard cabinet to make it more adaptable for end users facing cable-management issues.

In fact, the leading cabinet manufacturers have developed integral cabinet cable management that goes beyond industry standards. They are including features that eliminate the need for specialty products and are developing products flexible enough to accommodate moves, adds and changes.

CSE: How about from a power quality perspective? Do any cabinets today provide shielding from EMF? How about cables?

MORDICK: Power coming into a cabinet can very easily pick up external noise or EMI/RFI energy. This radiation must be removed from the power feed and is often grounded to earth.

Even though EMF is less of an issue within data centers, it is a real issue in industrial spaces, and metal cabinets alone are not the answer. The cabinet must provide conductive surfaces along all seams and wherever doors and covers meet. This can be done with conductive gasket material laid over a conductive surface, an approach that creates what’s called a “Faraday cage.”

SPINAZZOLA: There are myriad power solutions on the market, and many are compatible with most cabinets. With increased load per cabinet, power cable management is a major issue. Many computing products on the market are dual-cord, or even tri-cord. Therefore, it is important to know what is going into each cabinet and match the power solution to that need.

But power quality really happens outside the cabinet, through advances in UPS and PDU technology. The challenge in the short and mid term is putting the pieces together to deploy the blade server technology that is now starting to gain acceptance in the community.

CSE: In general, what kinds of technological advances do you see coming down the line?

STACK: In terms of cooling, we don’t see the traditional computer room air conditioners going away, but we do see these systems increasingly being supplemented by spot cooling solutions targeted at specific racks. Some suppliers will introduce enclosures with integral water cooling capability, but the more advanced will provide this integral cooling capability without introducing water into the data center.

Power systems that support these high-density systems will not only need to be scalable, but also capable of supporting high levels of availability, meaning redundancy.

We also expect to see more powerful and expensive systems being utilized outside the data center, increasing the need for enclosures that can integrate power, cooling, monitoring and physical security in a self-contained system.

MORDICK: Cooling equipment will drive toward more integrated designs that directly connect the cabinet to the HVAC system to maximize efficiencies. I do not see plumbing water or other fluids to the cabinet as an efficient means of ensuring an acceptable cabinet environment. Although this approach is being investigated and offered by several manufacturers, it is too expensive and introduces greater complexity to the overall system. I also question the reliability and flexibility of such systems.

Finally, in the area of cable management, we anticipate increasing use of fiber from the data center to the work area. Wireless may also become more prevalent as its bandwidth and security issues are resolved. Ultimately, this will help with cable-management issues.

SPINAZZOLA: I’m skeptical: until the design community wakes up, smells the coffee and accepts that the cooling approach we have been using for the last 30 years is no longer effective, new technology will sit on the shelf. That said, it is important to know that there is great technology on the market just waiting to be deployed.

Participants

R. Stephen Spinazzola, P.E., Vice President and Director of Engineering, RTKL Assocs. Inc., Baltimore

Brian Mordick, Product Manager, Hoffman Enclosure Co., Anoka, Minn.

Fred Stack, Vice President of Marketing, Liebert Americas, Columbus, Ohio

Cool Cabinets

The concept of integrating cooling into cabinets has been around since 1993, says Fred Stack, vice president of marketing for Liebert Americas, Columbus, Ohio, and it has been enhanced significantly over its lifetime. “Suppliers now offer several different types of enclosures with varying capacities of cooling, and more options in internal power distribution and condition monitoring,” he explains.

What’s more, according to Stephen Spinazzola, P.E., vice president and director of engineering for RTKL Assocs., Baltimore, the cabinet is no longer a commodity component of the data center, but rather it is a value-added component that is part of an engineered solution. And the key issue driving this, notes Spinazzola, is kW per cabinet.

“In 1999, a typical cabinet operated at about 0.5 to 1.0 kW per cabinet. Today, a cabinet can operate at up to 14 kW. Virtually overnight, data center operators are dealing with up to a 2,000% increase in power and cooling density at the cabinet level, and it is a major issue,” he says. In Brian Mordick’s assessment, the most economical way to keep equipment cool and ensure the highest reliability is to use an open-system cabinet that directs ambient air to the equipment intakes.

Mordick, a product manager for the Hoffman Enclosure Co., Anoka, Minn., further explains that the cabinet needs to have its own cooling equipment, especially when an enclosure is put into a hostile environment.

Tower of Cool Passes U of Maryland Test

The Maryland Industrial Partnerships program, a University of Maryland research and development initiative charged with testing a data center cooling technology featured in CSE last year, has given that technology a thumbs up. The technology—the Tower of Cool, designed by roundtable participant Stephen Spinazzola, P.E., of Baltimore-based RTKL—cools data storage racks by flowing cool air through the rack itself via specially designed doors equipped with integrated supply fans (see “Tower of Cool Dominates Heat Horizon,” CSE 01/02, p. 36).

RTKL was awarded a $30,000 research grant last October to work with university researchers to test the technology. “One of the benefits demonstrated was the uniform supply temperature the unit provided for all the equipment in the cabinet, eliminating the hot spots,” notes Dr. Reinhard Radermacher, a university researcher involved in the testing. “The result could be a significant extension in the life of the equipment.”

According to Spinazzola, Tower of Cool technology ultimately provides more focused and effective cooling of the servers. “It can result in first-cost energy savings of 6% or more for companies that invest in the technology for their data centers,” he says.

RTKL was granted three U.S. patents for Tower of Cool technology in 2002 and 2003. Multiple additional patents are pending both domestically and internationally.