Analyzing data centers

Data is the lifeblood of any business or organization—which makes a data center a facility’s beating heart. Here, engineers with experience on data center projects show how to succeed on such facilities, and how to keep your finger on the pulse of data center trends.

By Consulting-Specifying Engineer April 27, 2017


Respondents

  • Robert C. Eichelman, PE, LEED AP, ATD, DCEP, Technical Director, EYP Architecture and Engineering, Albany, N.Y.
  • Karl Fenstermaker, PE, Principal Engineer, Southland Engineering, Portland, Ore.
  • Bill Kosik, PE, CEM, LEED AP, BEMP, Senior Mechanical Engineer, exp, Chicago
  • Kenneth Kutsmeda, PE, LEED AP, Engineering Manager—Mission Critical, Jacobs, Philadelphia
  • Keith Lane, PE, RCDD, NTS, LC, LEED AP BD&C, President, Lane Coburn & Associates LLC, Bothell, Wash.
  • Brian Rener, PE, LEED AP, Senior Electrical Engineer, SmithGroupJJR, Chicago
  • Mark Suski, SET, CFPS, Associate Director, JENSEN HUGHES, Lincolnshire, Ill.
  • Saahil Tumber, PE, HBDP, LEED AP, Senior Associate, Environmental Systems Design, Chicago
  • John Yoon, PE, LEED AP, Lead Electrical Engineer, McGuire Engineers Inc., Chicago

CSE: What’s the No. 1 trend you see today in data center design?

Karl Fenstermaker: ASHRAE’s thermal guidelines for data processing environments are becoming more widely accepted and implemented in the industry. Operating over a wider range of temperature and humidity conditions requires more attention to detail during the design and operation of the data center. As a result, we are seeing greater use of advanced technology tools, such as computational fluid dynamics (CFD) for thermal modeling and data center infrastructure management (DCIM) systems for more precise monitoring and control of the data center environment.
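
To illustrate the kind of environmental check these tools automate, here is a minimal sketch in Python. The dry-bulb, dew-point, and relative-humidity limits below are placeholder assumptions, not values taken from the respondent; the governing limits depend on the ASHRAE equipment class and edition used on a given project.

```python
# Minimal sketch: test a measured supply-air condition against an assumed
# ASHRAE-style recommended envelope. The limits are placeholders; confirm
# them against the thermal-guideline class and edition governing the project.

RECOMMENDED = {
    "dry_bulb_c": (18.0, 27.0),   # assumed recommended dry-bulb range, deg C
    "dew_point_c": (5.5, 15.0),   # assumed recommended dew-point range, deg C
    "max_rh_pct": 60.0,           # assumed upper relative-humidity limit, %
}

def in_recommended_envelope(dry_bulb_c, dew_point_c, rh_pct, limits=RECOMMENDED):
    """Return True if the measured condition sits inside the assumed envelope."""
    db_lo, db_hi = limits["dry_bulb_c"]
    dp_lo, dp_hi = limits["dew_point_c"]
    return (db_lo <= dry_bulb_c <= db_hi
            and dp_lo <= dew_point_c <= dp_hi
            and rh_pct <= limits["max_rh_pct"])

# Example: a 24 deg C, 50% RH supply condition with a roughly 13 deg C dew point
print(in_recommended_envelope(24.0, 12.9, 50.0))  # True under these assumed limits
```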

Keith Lane: We’re seeing modularity, increased efficiency, and flexibility. Most data center end users require all of these in their facilities.

Brian Rener: Calculated and measured performance, whether on energy efficiency, reliability, or life cycle costs. Owners are seeking verified value for their investment in the data center facility.

Saahil Tumber: Colocation providers used to be conservative in their approach and tended to follow standardized designs. However, they are now open to deploying new technologies and topologies to increase resiliency, improve power-usage effectiveness (PUE), reduce time to market, reduce cost, and gain a competitive advantage. They are coming out of their comfort zones. They are also placing emphasis on strategies that reduce stranded capacity and space. For enterprise clients, there is more collaboration between various stakeholders (information technology, operations, security, engineering, etc.). They are not working in silos anymore, but working toward a common goal. We are seeing consistency in their needs and requirements.
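
For reference, the PUE metric cited above is the ratio of total facility energy to IT equipment energy. The short sketch below simply encodes that definition; the sample figures are illustrative only and are not from any project mentioned in this article.

```python
def power_usage_effectiveness(total_facility_kw, it_load_kw):
    """PUE = total facility power / IT equipment power (dimensionless, >= 1.0)."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# Illustrative numbers only: 1,300 kW total facility draw serving 1,000 kW of IT load
print(round(power_usage_effectiveness(1300.0, 1000.0), 2))  # 1.3
```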

John Yoon: A trend is reduced infrastructure-redundancy requirements for clients that are migrating services to the cloud. A 2N UPS and N+1 computer room air conditioner (CRAC) redundancy used to be commonplace in our designs for corporate headquarters building-type data centers. That type of redundancy is now becoming the exception. The prevailing information technology (IT) mindset seems to be that if mission critical services are being moved offsite, why invest extra money in redundant infrastructure (and manpower) for what’s left behind? One significant experience that would speak to the contrary involved a client that decommissioned their main data center at headquarters and replaced it with a much smaller server room. The new server room was provided with no redundancy for the UPS equipment. That UPS was in service for more than 4 years without an incident. However, one day during a utility blip, the UPS dropped the critical load because a single battery cell faulted, causing a full battery-string failure. Although the power interruption was brief and the generator started, the inability of the UPS to immediately sync to an unstable bypass voltage took down everything downstream of the UPS—including the core network switches that allowed headquarters to communicate with the rest of their facilities around the world. Although power was quickly restored via the UPS manual bypass, the reboot of the core switches did not occur smoothly. Communications back to headquarters were knocked out for nearly a day. Needless to say, executives were not pleased.

CSE: What other trends should engineers be on the lookout for on such projects in the near future (1 to 3 years)?

Bill Kosik: There will still be a high demand for data centers. Technology will continue to evolve, morph, and change. The outlook for new or renovated data centers remains bullish, with analysts projecting that the industry’s cloud strategies will double over the next 10 years. So trends will center on lower-cost, higher-shareholder-return data centers that also need to address climate change and comply with data-sovereignty laws.

Kenneth Kutsmeda: A trend that will become more popular in data centers is the use of lithium batteries. One manufacturer of lithium batteries recently acquired UL listings (UL 1642: Standard for Lithium Batteries and UL 1973: Standard for Batteries for Use in Light Electric Rail (LER) Applications and Stationary Applications), and others will soon follow. Unlike cell phones, which use lithium cobalt oxide (a high-energy-density chemistry that is prone to safety risks when damaged), data center batteries use a combination of lithium manganese oxide and lithium nickel manganese cobalt oxide, which has a lower energy density but a longer life cycle and inherent safety features. Jacobs recently completed a project using lithium batteries. The lithium battery has a life cycle of more than 15 years and requires no maintenance. Lithium batteries provide a 65% space savings and a 75% weight reduction as compared with wet-cell batteries. The lithium battery-management system provides the ability to isolate individual cabinets without taking down the UPS and eliminates the need for a separate monitoring system.

Rener: New metrics on reliability versus the old terms of availability. We are seeing a move away from prescriptive terms for availability toward reliability calculations based on IEEE methodologies. Edge-cooling approaches (local to the server) have become more popular, as has fluid-based cooling at the rack.
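
To illustrate the sort of reliability arithmetic being described, here is a minimal sketch (the author's illustration, not the respondent's method) that combines assumed steady-state component availabilities in series and in parallel. Formal IEEE-based studies draw failure and repair rates from published surveys; the values below are placeholders only.

```python
from math import prod

def series_availability(availabilities):
    """All components must work for the path to work: availabilities multiply."""
    return prod(availabilities)

def parallel_availability(availabilities):
    """The system works if at least one redundant path works."""
    return 1.0 - prod(1.0 - a for a in availabilities)

# Placeholder availabilities for a utility feed, UPS module, and downstream PDU
single_path = series_availability([0.9999, 0.9995, 0.9998])

# Two independent paths, a simplified stand-in for a 2N arrangement
dual_path = parallel_availability([single_path, single_path])

print(f"single path: {single_path:.6f}")
print(f"dual path:   {dual_path:.8f}")
```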

Yoon: We expect to see further densification of server equipment. As recently as 10 years ago, a 45U-high rack full of 1U “pizza-box” servers seemed like absurdly high density. Now, the highest-density blade server solution that I’m currently aware of packs 280 blade servers into a 60U-high rack—that’s roughly a six-fold increase in density. With these dramatically higher equipment densities, traditional environmental design criteria just won’t cut it anymore. Much higher cold-aisle/hot-aisle temperatures are becoming the norm. In the next year or so, we also expect to see an increase in the use of lithium-ion (Li-ion) batteries in place of valve-regulated lead-acid batteries for systems 750 kVA and larger. The value proposition appears to be there: they’re lighter, last longer, and are more tolerant of higher temperatures. The one uncertainty is which Li-ion battery chemistry gains dominance. Some chemistries offer high energy densities but at the expense of increased volatility. The guiding NFPA safety codes and standards haven’t yet evolved to the point where any significant distinction can be made between these chemistries.
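
A quick back-of-the-envelope check of the densities quoted above, using only the figures given in the answer: 280 blades versus 45 1U servers is roughly a six-fold jump per rack footprint, and a bit under five-fold per rack unit because the newer rack is taller.

```python
# Rough density comparison based on the figures quoted above.
old_servers, old_rack_u = 45, 45     # 45U rack of 1U "pizza-box" servers
new_servers, new_rack_u = 280, 60    # high-density blade solution in a 60U rack

per_rack_increase = new_servers / old_servers                             # ~6.2x per rack footprint
per_u_increase = (new_servers / new_rack_u) / (old_servers / old_rack_u)  # ~4.7x per rack unit

print(f"per rack footprint: {per_rack_increase:.1f}x")
print(f"per rack unit:      {per_u_increase:.1f}x")
```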

CSE: Please describe a recent data center project you’ve worked on—share details about the project, including location, systems engineered, team involved, etc.

Tumber: I’m currently working on a colocation data center campus in Chicago. The existing building can support 8 MW of IT load. The new 2-story building incorporates 160,000 sq ft of white space and will be capable of supporting 32 MW of IT load. The data halls are conditioned using outdoor packaged DX units, which use heat pipes for indirect airside economization. Each unit has a net sensible cooling capacity of 400 kW, and each one discharges into a 48-in.-high raised-access floor. The electrical design is based on a block-redundant topology and uses a 97%-efficient UPS system.
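
As a sanity check on the sizing above (the author's arithmetic, not the project's equipment count): at 400 kW of net sensible capacity per unit, a 32-MW IT load requires 80 units just to meet the load, before any redundancy is added. The redundancy allowance in the sketch below is an assumption for illustration only.

```python
import math

it_load_kw = 32_000          # design IT load from the project description
unit_capacity_kw = 400       # net sensible capacity per packaged DX unit
redundant_units = 8          # assumed N+x allowance, for illustration only

units_to_meet_load = math.ceil(it_load_kw / unit_capacity_kw)   # 80 units at full load
total_units = units_to_meet_load + redundant_units              # 88 with the assumed margin

print(units_to_meet_load, total_units)
```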

CSE: Describe a modular data center you’ve worked on recently, including any unique challenges and their solutions.

Yoon: We haven’t seen much in the way of large modular data centers (a la Microsoft ITPACs). Those seem to be mostly limited to large cloud providers. Our clients typically prefer traditional “stick-built” construction—simply because the scale associated with modular data center deployment doesn’t make much sense for them.

Lane: With all of our modular data center projects, we continue to strive to increase efficiency, lower cost, and increase flexibility. These goals can be achieved with good planning among all members of the design team and innovation in prefabrication. The more of the construction that can be completed, and repeated, in the controlled environment of a prefabrication warehouse, the more money can be saved on the project.

CSE: What are the newest trends in data centers in mixed-use buildings?

Rener: One of the more exciting projects we’ve worked on in a mixed-use building is the National Renewable Energy Lab’s Energy Systems Integration Facility, a 182,500-sq-ft energy research lab with supporting offices and a high-performance computing (HPC) data center, located in Golden, Colo. The IT cabinets supporting the HPC research component are direct water-cooled, and the cooling system can transfer waste heat from the data center to preheat laboratory outside air during the winter months. This ability to use waste energy from the data center in other parts of the building is sure to become an emerging trend in mixed-use buildings with data centers.
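
To make the waste-heat idea concrete, the sketch below estimates the sensible heat needed to preheat a stream of outside air, which is the load that recovered data center heat could offset. The airflow and temperatures are illustrative assumptions by the author, not figures from the NREL facility.

```python
# Rough sensible-heat estimate for preheating laboratory outside air with
# data center waste heat. All inputs are illustrative assumptions.

AIR_DENSITY = 1.2        # kg/m^3, approximate air density at standard conditions
AIR_CP = 1.006           # kJ/(kg*K), approximate specific heat of dry air

def preheat_load_kw(airflow_m3_s, outdoor_temp_c, preheated_temp_c):
    """Sensible heat rate (kW) to warm the airstream across the given temperature rise."""
    mass_flow = airflow_m3_s * AIR_DENSITY                       # kg/s
    return mass_flow * AIR_CP * (preheated_temp_c - outdoor_temp_c)

# Example: warming 10 m^3/s of winter air from -5 C to 15 C
print(round(preheat_load_kw(10.0, -5.0, 15.0)))  # ~241 kW that recovered heat could offset
```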

Fenstermaker: One emerging trend is recovering heat from the data center to heat the rest of the building. This is most commonly done by using hot-aisle air on the air side of a dual-duct system or to preheat intake air at a central air-handling unit (AHU). In addition, smaller data centers are using direct-expansion (DX) fan coils connected to a central variable refrigerant flow (VRF) system with heat-recovery capability to transfer heat from the data center to other zones requiring heating.

Yoon: One of the newest trends is smaller, denser, and less redundancy.

Tumber: Large-scale data center deployments are not common in mixed-use buildings, as they have unique requirements that typically can be addressed only in single-use buildings. One of the main issues is securing the data center, because even the most comprehensive security strategy cannot eliminate non-data center users from the premises. For small-scale deployments where security is not a big concern, a common infrastructure that can serve both the needs of the data center and the other building uses is important to ensure cost-effectiveness. Emphasis is being placed on designs that recover low-grade heat from the data center and use it for other purposes, such as space heating.

CSE: Have you designed any such projects using the integrated project delivery (IPD) method? If so, describe one.

Tumber: I recently worked on a project that involved wholesale upgrades at the flagship data center of a Fortune 500 company. The data center is located in the Midwest, and IPD was implemented. The project was rife with challenges, as the data center was live and downtime was not acceptable. In fact, a recent unrelated outage lasting 30 seconds led to stoppage of production worldwide and caused $10 million in losses. We worked in collaboration with contractors. They helped with pricing, logistical support, equipment procurement, construction sequencing, and more during the design phase. The project was a success, and all project goals were met.

CSE: What are the challenges that you face when designing data centers that you don’t normally face during other building projects?

Robert C. Eichelman: With few exceptions, data centers serve missions that are much more critical than those served by other building types. The infrastructure design, therefore, requires a higher degree of care and thoughtfulness in ensuring that systems support the mission’s reliability and availability requirements. Most data centers have very little tolerance for disruptions to their IT processes, as interruptions can result in disturbances to critical business operations, significant loss of revenue and customers, or risk to public safety. Most often, the supporting mechanical, electrical, and plumbing (MEP) systems need to be concurrently maintainable, meaning that each and every component has the ability to be shut down, isolated, repaired/replaced, retested, and put back into service in a planned manner without affecting the continuous operation of the critical IT equipment. Systems usually have a high degree of fault tolerance as well. The infrastructure design needs to be responsive to these requirements and most often includes redundant major components, alternate distribution paths, and compartmentalization, among other strategies. Power-monitoring systems are much more extensive to give operators a complete understanding of all critical parameters in the power system. Systems are also more rigorously tested and commissioned and routinely include factory witness testing of major equipment including UPS, generators, and paralleling switchgear. MEP engineers also have a larger role in controlling costs. The MEP infrastructure for data centers represents a much higher percentage of the total building construction and ongoing operating costs than for other building types, requiring engineers to be much more sensitive to these costs when designing their systems.

Lane: A data center is a mission critical environment, so power cannot go down. We are always striving to provide the most reliable and maintainable data center as cost-effectively as possible. These projects are always challenging when considering new and emerging technologies while maintaining reliability.

Rener: Future flexibility and modular growth. IT and computer technologies are rapidly changing. Oftentimes during the planning and design of the facility, the owner has not yet identified the final equipment, so systems need to be adaptable. Also, the owner will often have multiyear plans for growth, and the building must grow without disruption.

Yoon: Managing people and personalities. Most management information systems/IT (MIS/IT) department staff are highly intelligent, extremely motivated people, but they are not used to being questioned on technical points. This can make the data center programming process extremely challenging—and even confrontational at times—when you’re trying to lock in MEP infrastructure requirements. The key is to remember that many CIOs and their MIS/IT departments are accustomed to operating with reasonably high levels of independence within their companies. Many people within their own organizations don’t understand exactly what the MIS/IT staff members do, only that they control the key infrastructure that’s critical to the day-to-day operations. If they haven’t been involved in the construction of a data center before, the MEP engineer is often viewed as an external threat. The key is to make sure they understand the complementary set of skills that you bring to the table.

Tumber: The project requirements and design attributes of a data center are different from other uses. The mission is to sustain IT equipment as opposed to humans. They are graded on criteria including availability, capacity, resiliency, PUE, flexibility, adaptability, time to market, scalability, cost, and more. These criteria are unique to data centers, and designing a system that meets all the requirements can be challenging.

CSE: Describe the system design in a colocation data center. With all the different clients in a colocation facility, how do you meet the unique needs of each client?

Lane: The shell in a colocation facility must be built with flexibility in mind. You must provide all of the components for reliability and concurrent maintainability while allowing the end user to tweak the data center to their own unique needs. Typically, the shell design will stop either at the UPS output distribution panel or at the power distribution unit (PDU). The redundancy (N, N+1, or 2N) and the specific topology to the servers can be unique to the end user. Some larger clients will take a more significant portion of the data center, if timing allows, and they will be able to select the UPS, generator, and medium-voltage electrical distribution topology.

Tumber: The design of a colocation data center is influenced by its business model. Powered shell, wholesale colocation, retail colocation, etc. need to be tackled differently. If the tenant requirements are extensive, the entire colocation facility can be designed to meet their unique needs, i.e., built-to-suit. Market needs and trends typically dictate the designs of wholesale and retail data centers. These data centers are designed around the requirements of current and target tenants. They offer varying degrees of flexibility, and any unique or atypical needs that could push the limits of the designed infrastructure are reviewed on a case-by-case basis.

Fenstermaker: The most important thing is to work with the colocation providers to fully understand their rate structures, typical contract size, and the menu of reliability/resiliency options they want to offer to their clients in the marketplace. The optimal design solution for a retail colocation provider that may lease a few 10-kW racks at a time with Tier 4 systems, located in a downtown Southern California high-rise, is drastically different from that of another provider that leases 1-MW data halls in central Oregon with Tier 2 systems. Engineers need to be fully aware of all aspects of the owner’s business plan before a design solution can be developed.

Yoon: Colocation facilities seem to be evolving into one-size-fits-all commodities. Power availability and access to multiple carriers/telecommunication providers with low-latency connections still seem to be how they try to differentiate themselves. However, simple economies of scale give larger facilities the upper hand in these key metrics.

Eichelman: For a colocation data center, it’s important to understand the types of clients that are likely to occupy the space:

  • Is it retail or wholesale space?
  • What power densities are required?
  • Any special cooling systems/solutions needed for the IT equipment?
  • Are there any special physical or technical security requirements?

The specific design solutions need to be responsive to the likely/typical requirements while also being flexible and practical to accommodate other needs that may arise. A typical approach could include designing a facility with a pressurized raised floor, which allows for air-cooled equipment while making provisions for hot-aisle or cold-aisle containment and underfloor chilled water for water-cooled equipment and in-row coolers. Power distribution could also be provided via an overhead busway system to allow flexibility in accommodating a variety of power requirements.

The tendency to allow unusual requirements to drive the design, however, should be carefully considered or avoided, unless the facility is being purpose-built for a specific tenant. To optimize return on investment, it’s important to develop a design that is modular and rapidly deployable. This requires the design to be less dependent on equipment and systems that have long lead times, such as custom paralleling switchgear. Designs need to be particularly sensitive to initial and ongoing operational costs that are consistent with the provider’s business model.