Trends, changes in data center design

Several trends are pushing the engineered systems in data centers in different directions.

By Consulting-Specifying Engineer June 13, 2022


  • Bill Kosik, PE, CEM, BEMP, Senior Energy Engineer, DNV, Oak Brook, Illinois
  • Matt Koukl, DCEP, Principal, Market Leader Mission Critical, Affiliated Engineers Inc., Madison, Wisconsin
  • Kenneth Kutsmeda, PE, LEED AP, Global Technology Leader – Mission Critical, Jacobs, Philadelphia
  • Ben Olejniczak, PE, Senior Project Mechanical Engineer, Environmental Systems Design Inc., Chicago
  • Brian Rener, PE, LEED AP, Mission Critical Leader, Smith Group, Chicago
  • Jonathan Sajdak, PE, Senior Associate/Fire Protection Engineer, Page, Houston


What are some current trends in data centers?

Matt Koukl: The firm sees interest in the investigation and deployment of liquid cooling technologies, driven by higher-density computing loads that necessitate higher-density cooling solutions. These workloads are mostly focused on artificial intelligence, data analytics and other high-performance computing. Systems that include graphics processing units and other coprocessors are also requiring a rethinking of the cooling methods and systems that support them.

Kenneth Kutsmeda: Sustainable, carbon-free backup energy solutions are trending in data centers. Many global technology companies are leading the way toward climate action and setting aggressive net zero carbon targets. Data center backup power is typically provided by diesel generators because they are highly reliable and cost effective. But diesel generators emit carbon dioxide. To meet their climate goals and eliminate carbon, data centers are looking toward alternative backup energy solutions such as hydrogen fuel cells and lithium-ion batteries. Fuel cells that use pure green hydrogen (hydrogen produced using renewable energy) are completely carbon-free.

Ben Olejniczak: The biggest trend I am seeing right now is a push to integrate construction as part of a facility’s design process. Traditionally, the construction team would not get involved in a project until the design was completed, the project was bid and the contract was awarded. Now, many of our hyperscale clients have built enough data centers and gained enough familiarity with general contractors and mechanical contractors across the country that those contractors are integrated into the process as trusted partners. The construction team uses its fabrication and installation expertise to provide feedback on the design.

Also, prefabrication is becoming a very important requirement in many of our jobs. What can we do to compress the schedule and improve our time to first megawatt? What products do we specify and how do we work them into the job to reduce on-site labor and speed up installation?

Brian Rener: A focus on water savings and on approaches to new, higher-density power needs.

Hydrogen fuel cell plant rendering. Courtesy: Jacobs

Please explain some of the codes, standards and guidelines you commonly use during the project’s design process. Which codes/standards should engineers be most aware of?

Bill Kosik: I always try to look beyond local code for inspiration and new ideas. Certainly, at the end of the day, you need to meet code, but innovation and code compliance are not mutually exclusive. Organizations such as Uptime Institute and The Green Grid have a wealth of information on data centers. Also, real estate developers like JLL and Cushman & Wakefield regularly publish data center updates. On the technical side, of course, anything data center-related from ASHRAE. I also refer to international design standards and publications such as CIBSE guides and briefings.

Jonathan Sajdak: The most common standards used in the design of data centers include, but are not limited to, NFPA 13: Standard for the Installation of Sprinkler Systems; NFPA 72: National Fire Alarm and Signaling Code; and NFPA 2001: Standard on Clean Agent Fire Extinguishing Systems.

These standards contain design requirements for automatic sprinkler systems (i.e., wet-pipe, dry-pipe and preaction), fire alarm and smoke detection systems (including air-aspirating smoke detection) and clean agent systems, respectively. Most building codes and fire codes such as the International Building Code and NFPA 1: Fire Code identify when these fire protection systems are required, then reference the standards that explain how they should be designed and installed.

Where adopted by the jurisdiction, NFPA 75: Standard for the Fire Protection of Information Technology Equipment and NFPA 76: Standard for the Fire Protection of Telecommunication Facilities also contain criteria and requirements for fire protection systems. Some owners will also require insurance underwriter requirements to be met, which may result in additional criteria to be followed, such as FM Global datasheets (2-0, 4-9, 5-32, 5-48, etc.).

Ben Olejniczak: Many of the same codes and standards carry over from project to project.

It’s hard to put a finger on which codes and standards are most important. They are all important, especially those that may impact your ability to obtain a building permit and deliver a project.

Matt Koukl: The considerable work of ASHRAE Technical Committee 9.9 and the various research projects funded by ASHRAE provide significant benefit. These publications and standards are used as benchmarks during the design process. The numerous books and publications developed through ASHRAE TC 9.9 are viewed as global standards and as a knowledge base for the design of these facilities. The extensive research and broad depth of knowledge benefit all readers and those who implement facility designs using this information.

Kenneth Kutsmeda: When using lithium-ion batteries for uninterruptible power supply energy storage, two codes that engineers should be aware of are International Fire Code Section 1206, Electrical Energy Storage Systems, and NFPA 855: Standard for the Installation of Stationary Energy Storage Systems. Both contain important requirements (location, separation, quantities, etc.) related to the installation of battery systems, in particular lithium-ion batteries. Another standard to be familiar with is UL 9540A, the test method for evaluating thermal runaway fire propagation in battery energy storage systems. UL 9540A was developed to address the safety concerns associated with lithium-ion batteries and to help manufacturers demonstrate compliance with the new IFC and NFPA requirements.

What future trends should an engineer or designer expect for such projects?

Ben Olejniczak: Liquid-cooled server technology will become more prevalent soon. As the internet evolves, demand for AI technology, virtual reality and augmented reality will drive an increase in computing power requirements, data processing speeds and data storage. We’re already seeing this as our hyperscale clients plan their next-generation data center designs. Higher kilowatts per cabinet, along with a desire to manage white space square footage and overall building footprint, opens the door for liquid-based cooling solutions. These systems retain a compact footprint and can remove more heat than an air-cooled equivalent. Many are beginning to challenge the status quo and are revisiting the idea of having fluid in the critical space interfacing directly with IT infrastructure.

Brian Rener: Increased attention to alternatives to diesel-fueled generators, such as hydrotreated vegetable oil, clean hydrogen, fuel cells and utility-grade battery storage units.

Matt Koukl: In general, future trends point to rack and processor power densities increasing to the point of needing to consider alternative cooling methods besides air. Whether it is a new build or a retrofit, consideration should be given to how the facility will accommodate liquid cooling or another alternative to air cooling. Additionally, the need for a hybrid environment of liquid cooling and some air cooling will be critical to the planning and design of either a retrofit or a new build facility.

Kenneth Kutsmeda: Although the technology is a few years from being readily available, engineering firms like Jacobs are in the early stages of feasibility and concept planning for the use of micronuclear energy for data centers. The constant load profile of a data center is a perfect fit for optimizing a nuclear reactor. Micronuclear will allow the facility to go off grid and self-produce zero-carbon primary power. Data centers can install N+1 micronuclear reactors for reliability and concurrent maintenance. There is potential to offset the cost of the micronuclear plant by selling power from the redundant unit back to the utility. The heat byproduct of micronuclear could be used to drive evaporative cooling, reducing electrical loads, or to drive a direct air capture system, making the data center carbon negative.

How is the growth of cloud-based storage and virtualization impacting co-location projects?

Kenneth Kutsmeda: As compute demand increases, the need for additional cloud-based storage increases. Land acquisition, utility infrastructure and construction of large cloud/hyperscale facilities take time. To meet the demand, cloud providers are looking toward co-location for additional data center space. The traditional co-location facility, with fixed offerings, shared white space and shared infrastructure, is not conducive to cloud providers. Co-location facility designs had to change to meet the requirements of cloud providers. Spaces had to be private and leased in bigger blocks (2 to 5 megawatts). Utility infrastructure, distribution and redundant components had to be dedicated to the space, not shared. Security and fire separation had to be provided between customers.

Bill Kosik: The move of big corporations to a public-cloud solution has been slow but steady. However, it is projected that by the end of 2022 more than half of workloads will still run in on-premises data centers. One of the main challenges is mutual trust and transparency: the customer’s operations are extremely critical, so making sure everyone is on the same page is necessary for a good outcome.

What types of challenges do you encounter for these types of projects that you might not face on other types of structures?

Matt Koukl: Data centers are unique in several ways, but most significant are the high sensible cooling loads and power densities. Every aspect of the design must ensure that the data center and the operating equipment inside it have the highest levels of availability, maintainability and resiliency for 24/7 operation of the digital infrastructure. Power densities in areas where computing equipment is present can exceed 2,000 watts/square foot. Most human-occupied facilities, whether hospital, office or research, rarely exceed 20 watts/square foot.
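To put that contrast in concrete terms, the arithmetic can be sketched as follows. The rack size and per-rack floor allowance below are illustrative assumptions, not figures from any project in this article:

```python
# Rough comparison of data center white-space power density vs. a
# typical human-occupied building. The 50 kW rack and ~25 sq ft of
# floor per rack (rack plus its share of aisle) are assumed values.

def floor_density_w_per_sqft(rack_kw: float, sqft_per_rack: float) -> float:
    """Convert a per-rack IT load to an average floor power density."""
    return rack_kw * 1000 / sqft_per_rack

hpc_density = floor_density_w_per_sqft(rack_kw=50, sqft_per_rack=25)
office_density = 20  # W/sq ft, typical upper bound cited for occupied space

print(f"HPC white space: {hpc_density:.0f} W/sq ft")
print(f"Ratio vs. office: {hpc_density / office_density:.0f}x")
```

Under these assumptions a single 50 kW rack already reaches the 2,000 watts/square foot figure, roughly 100 times the density of an office floor, which is why conventional building HVAC approaches do not transfer to the data hall.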

What are professionals doing to ensure such projects (both new and existing structures) meet challenges associated with emerging technologies?

Matt Koukl: Emerging technologies in computing hardware are going to pose interesting challenges not previously experienced in the modern era of data centers. The best way to understand these technologies and be prepared to confront these challenges is high engagement with organizations such as ASHRAE, the Open Compute Project and others focused on advancing data center design and the technologies supporting data centers. These organizations are made up of industry leaders who develop and deploy equipment with the newest technologies.

Kenneth Kutsmeda: Data center technology is always evolving and changing, so facilities have to be designed to allow adaptation and integration of new technology, especially data centers that are scalable and designed to grow over time as the load increases. Data center engineers are locating more equipment outdoors in containerized, premanufactured enclosures. They are bringing cooling systems and centralized electrical/UPS systems outside the physical walls of the data center so they can adapt to changing technology without affecting the facility itself. For example, locating generators outside allows them to be replaced with hydrogen fuel cells in the future. The distribution system also has to be designed to allow for this plug-and-play configuration.

In what ways are you working with information technology experts to meet the needs and goals of a data center?

Ben Olejniczak: We work with IT experts to obtain information on the hardware that will be located in the data hall. They provide insights on rack loading, staging of load over the life span of the building, server operational environment and any long-term plans the business may have (i.e., air-cooled load migrating over to liquid-cooled load). While it is sometimes difficult to get in touch with IT stakeholders, the information we generally extract from this team is extremely important.

Bill Kosik: In some ways, working with IT managers and directors is equally or even more important than working with the facilities staff, because the IT operations are the element from which everything else emanates. Learning about short- to midterm growth goals (in density and overall power) helps answer questions about which types of systems are most applicable and how and when they need to be expanded. Getting the vision for future power and cooling technology, like water cooling or lithium-ion battery storage, is also important from a system planning perspective.

Finally, we need to know the type of server/storage/network control and automation being planned. Gaining insight into the computer hardware vitals (internal/external temperature, minimum processor power, maximum (or capped) processor power and current as a percentage of maximum) can be instrumental in providing real-time data to the cooling and power systems.

Describe a co-location facility project. What were its unique demands and how did you achieve them?

Matt Koukl: AEI’s experience with co-location providers focuses on speed to market for capacity deployment, repeatable design solutions, and scalable and flexible deployment options. With the need to accommodate diverse types of workloads and densities, system flexibility is a must. It is critical to have scalable systems that meet initial deployment loads, which are commonly much lower than final deployment loads. It is also important to have a base design that can be adapted site by site to meet requirements without developing new sequences and fundamental system designs.

Tell us about a recent project you’ve worked on that’s innovative, large-scale or otherwise noteworthy.

Brian Rener: We completed a high-performance computing center designed to reside inside a new university building. The energy profile of a data center is different from that of a traditional education building and we had to find ways for synergy. Also, the initial high-performance computing equipment by NVIDIA was suited to air cooling in a hot aisle containment system, but we needed to design for a future transition to water-cooled systems.

Matt Koukl: One current AEI project, a high-performance computing center, uses innovative methods to achieve almost 100% economization in a Southern climate. The ability to modularize large-scale infrastructure into a productized solution that allows efficiency in procurement and installation is also noteworthy. The team performs significant studies to optimize the infrastructure and other aspects of the design, gaining the greatest system efficiency at all times of the year. Additional evaluations address water reuse, knowing that high-performance computing loads are considerable and need significant amounts of heat rejection.

Bill Kosik: This was a few years back, but it is still one of my favorite data center projects. (I’m going to guess that the data center has had some additions or modifications since it was built.) The project is for a large Midwestern university that has always been a leader in advanced computing. It competed for and won a large grant from the federal government to build a supercomputing facility. What makes this facility so fascinating to me is the sheer scale of the power and cooling load. The overall facility could support an IT load of around 20 megawatts. The IT cabinets ranged from 75 to 150 kilowatts, resulting in a load density across the entire data center floor area of approximately 1,500 watts/square foot. The computers are direct water cooled, with the exception of a small air-cooled electrical load on each cabinet.
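A quick back-of-the-envelope check shows how the quoted figures hang together; only the numbers stated above (20 MW, 75 to 150 kW cabinets, roughly 1,500 W/square foot) are used:

```python
# Sanity-check the university supercomputing facility figures quoted above.
it_load_w = 20e6          # ~20 MW total IT load
density_w_sqft = 1500     # ~1,500 W/sq ft average across the data center floor

# Floor area implied by the load and density figures.
floor_area_sqft = it_load_w / density_w_sqft
print(f"Implied floor area: {floor_area_sqft:,.0f} sq ft")

# Cabinet counts implied by the 75-150 kW per-cabinet range.
for cab_kw in (75, 150):
    count = it_load_w / (cab_kw * 1000)
    print(f"At {cab_kw} kW per cabinet: about {count:.0f} cabinets")
```

The stated density implies a compact data hall on the order of 13,000 square feet carrying the entire 20 MW, i.e., a few hundred cabinets at most, which is consistent with the direct water cooling described.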

Kenneth Kutsmeda: Jacobs developed a plan of record design called Cloud Condos, a data center that brings together the best features of both hyperscale and co-location design. The modular data center is built and leased in 5-megawatt blocks. Each 5-megawatt block is a private, separated module with dedicated security and fire-rated construction at the perimeter for enhanced protection. Blocks can be repeated horizontally or stacked vertically. Each block is fed from dedicated utility infrastructure and distribution. Electrical and mechanical systems are located outdoors in prefabricated enclosures and are adaptable to different configurations, technologies and climates. The design is vendor agnostic, but vendor-specific versions have been developed with major manufacturers to shorten equipment lead times and to leverage emerging technologies and sustainable strategies. A plug-and-play approach maximizes speed to market, allowing design-to-online delivery up to 30% faster than traditional approaches.

Ben Olejniczak: Recently, we completed several buildings on a hyperscale data center campus located on the East Coast. Each building supports approximately 75 megawatts and spans approximately 1 million square feet. The main cooling systems consist of a built-up direct evaporative cooling system and packaged direct evaporative air handling units. With these systems, we project annualized targets of 1.1 power usage effectiveness and 0.056 water usage effectiveness. The project incorporates multiple levels of distribution and mechanical redundancy and includes on-site water storage for situations where water may become scarce. Interestingly, a solar facility is to be installed nearby and will provide renewable energy for the data center campus.
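As a sketch of what those efficiency targets imply, the standard Green Grid definitions can be applied to the quoted numbers. Treating the approximately 75 MW per-building figure as a continuous IT load is an assumption for illustration; the article does not specify whether it is IT or total load, and real facilities do not run at full load year-round:

```python
# What the quoted targets imply, using The Green Grid definitions:
#   PUE = total facility energy / IT equipment energy
#   WUE = site water usage (liters) / IT equipment energy (kWh)
# Assumption: the ~75 MW figure is IT load, held constant all year.

HOURS_PER_YEAR = 8760

it_load_mw = 75
pue = 1.1     # annualized power usage effectiveness target
wue = 0.056   # annualized water usage effectiveness target, L/kWh

it_energy_kwh = it_load_mw * 1000 * HOURS_PER_YEAR   # annual IT energy
total_energy_kwh = it_energy_kwh * pue               # adds cooling, losses
overhead_kwh = total_energy_kwh - it_energy_kwh      # non-IT share
water_liters = wue * it_energy_kwh                   # annual site water use

print(f"IT energy:       {it_energy_kwh / 1e6:,.0f} GWh/yr")
print(f"Non-IT overhead: {overhead_kwh / 1e6:,.1f} GWh/yr")
print(f"Water use:       {water_liters / 1e6:,.1f} million L/yr")
```

Under these assumptions, a PUE of 1.1 means only about 10% of the IT energy is spent on cooling and electrical losses, and a WUE of 0.056 L/kWh keeps annual water use to tens of millions of liters, both aggressive figures for an evaporative-cooled campus.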

How are engineers designing these kinds of projects to keep costs down while offering appealing features, complying with relevant codes and meeting client needs?

Matt Koukl: Keeping costs down in the current market is challenging. The firm’s approach uses every piece of the system to the greatest extent while enhancing operational and overall system efficiency. Engineers help clients achieve systems that look holistically at inputs and outputs, driving operational efficiency while focusing on the bottom line and capital expenditures.

Ben Olejniczak: Throughout my career, many of the data center client standards I’ve designed around are centered on ambient design criteria that almost never occur statistically. Systems are deployed oversized and underused, costing the client in both the short and long term. By having practical conversations with my client counterparts and diving into the data, we have been able to align on more realistic design criteria, lowering capital and operational expenditure and right-sizing the system. Data centers are notorious for an all-encompassing belt-and-suspenders approach. By applying the belt and suspenders only where they are needed and practically designing the system elsewhere, unnecessary costs can be avoided. It is critically important to have this discussion with your client stakeholders, as each company may have standards it abides by. There is also data indicating that global temperatures are on the rise, adding context for consideration.

Brian Rener: The focus these days is on supply chain issues and disruptions from COVID and global conflicts. There may be reasons to consider alternative materials or equipment, but more importantly these supply chain issues are affecting lead times; in many cases, equipment is running a year out from order placement. Early bid packages are more critical than ever.

What are the data center infrastructure needs for a 5G system to work seamlessly?

Bill Kosik: Power demand for 5G systems is much higher than for 4G, and 5G systems are a good application for edge computing. Telecom data centers constructed 15 or 20 years ago will not have the power and cooling capability for new 5G systems. It is not uncommon to see the installation of larger (or more) cooling and power central plant components to provide the necessary support. Supporting a high-density edge data center is also a challenge due to the lack of centralized power and cooling.

