Trends and challenges unique to modern data center design

In this Roundtable, experts answer common questions about the challenges and considerations of designing data centers in our highly connected world

By Consulting-Specifying Engineer June 4, 2024
With a generator power capacity of 80 MW, the Digital Realty Data Center Campus in Franklin Park, Illinois, provides co-location and peering services. Courtesy: Stantec and Digital Realty.

Data center insights

  • Data center developers are increasingly focused on optimizing power distribution and embracing advanced cooling technologies to meet the demands driven by AI and high-performance computing.
  • Challenges in designing data centers include coordinating dense electrical equipment, managing power distribution across compact sites and navigating regulatory scrutiny.


  • Amanda Carter, PE, Electrical Discipline Lead, Stantec, Chicago
  • Brian A. Rener, PE, LEED AP, Mission Critical Leader, SmithGroup, Chicago
  • William Kosik, PE, CEM, LEED AP, Lead Senior Mechanical Engineer, kW Mission Critical Engineering, Chicago
Courtesy: WTWH Media

What are some current trends in data centers?

Amanda Carter: Current trends in data centers include developing new approaches to on-site power distribution as utility capacity is becoming more difficult to acquire, whether it is limited by utility timelines for expansion or a utility simply not having available capacity to support the ever-increasing power requirements of data center facilities. These increasing power requirements are largely driven by the data center industry’s response to artificial intelligence (AI) and other developing technologies that require more and more computing power.

William Kosik: In the data center developer space there is a drive to increase power density, decrease cost and increase speed to market. We are currently working on projects, either in design or construction, that are nearing 100 megawatts (MW) for the facility. We are also developing concepts for data center facilities that are far beyond 100 MW.

Brian A. Rener: Higher-density, water-cooled high-performance computing (HPC) and AI machines are now becoming mainstream. Previously, these were used only in national labs, universities and research facilities.

What future trends (one to three years) should an engineer or designer expect for such projects?

Amanda Carter: In the near future, look for the optimization of power distribution, whether that means extending medium voltage distribution interior to the data center facility or streamlining the overall distribution from substation to server. There will also be a continued embrace of cooling technologies, such as liquid cooling and coolant distribution units, as the servers require higher power input for increasing computing requirements.

Brian A. Rener: We are already seeing a significant increase in demand for both power and water, which will only accelerate in the next one to three years. Some forecasts warn of significant impacts to electrical grid supply from growth in the data center market. Various leading organizations are working on ways to reduce this demand, including grid-interactive battery energy storage systems (BESS) and new-generation small modular nuclear reactors.

Figure 1: Example of heat recovery system at National Renewable Energy Labs ESIF HPC data center. Courtesy: SmithGroup.

William Kosik: Current hyperscaler data centers use 200 to 400 watts per square foot for the data halls. To provide power and cooling for those loads, highly effective and energy efficient methods of cooling are required, including the use of fan walls and liquid cooling.
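Those densities translate quickly into very large loads. As a back-of-envelope sketch, assuming a hypothetical 50,000 square foot data hall (the hall size is an illustrative assumption, not from the article):

```python
# Back-of-envelope IT load for a data hall at the densities quoted above
# (200 to 400 W/ft^2). The 50,000 ft^2 hall size is a hypothetical example.

def hall_load_kw(area_sqft: float, density_w_per_sqft: float) -> float:
    """Total IT load in kW for a data hall at a given power density."""
    return area_sqft * density_w_per_sqft / 1000.0

area = 50_000  # ft^2, assumed
low = hall_load_kw(area, 200)   # 10,000 kW = 10 MW
high = hall_load_kw(area, 400)  # 20,000 kW = 20 MW

# The cooling plant must reject roughly the same heat;
# 1 refrigeration ton = 3.517 kW.
tons_low = low / 3.517
tons_high = high / 3.517
print(f"IT load: {low/1000:.0f}-{high/1000:.0f} MW, "
      f"cooling: {tons_low:,.0f}-{tons_high:,.0f} tons")
```

A single hall at these densities therefore demands thousands of tons of cooling, which is why fan walls and liquid cooling become necessary rather than optional.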

What types of challenges do you encounter for these types of projects that you might not face on other types of structures?

William Kosik: There are two important issues: these data centers are electrically dense, and they are extremely large in physical size and scale. Depending on the site, especially in urban locations, the buildings will have multiple floors to provide data hall space and power to the tenants. The scale of the facilities also requires dozens of chillers, cooling towers, computer room air handler units and more.

Amanda Carter: Data center campuses often have power requirements that rival some small towns. Distributing this large amount of power across fairly compact sites, while also coordinating with other underground utilities, can require a great deal of focus and attention to minute details. And that’s before even entering the building. At building entry, there is extensive coordination with structural and architectural partners to coordinate either routing under the building structure or routing attached to and then penetrating the building envelope. If routing underground, extensive Neher-McGrath calculations are required to ensure the feeders aren’t operating at a temperature that would exceed their insulation rating. Unlike commercial facilities, data centers often run at a load factor close to 1.0. In addition, a data center electrical room is often dense with various electrical equipment, making coordination with mechanical counterparts imperative to ensure operating temperatures are maintained and heat rejection is properly accounted for.
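To give a sense of what such a feeder-temperature check involves, here is a drastically simplified steady-state sketch in the spirit of a Neher-McGrath calculation. A real analysis models duct-bank geometry, mutual heating between circuits, soil thermal resistivity and load factor; every number below is an illustrative assumption, not a design value.

```python
# Drastically simplified steady-state conductor-temperature check in the
# spirit of a Neher-McGrath calculation. Real analyses model duct-bank
# geometry, mutual heating between circuits, soil thermal resistivity and
# load factor; all numbers below are illustrative assumptions.

def conductor_temp_c(current_a: float, r_ac_ohm_per_m: float,
                     r_thermal_c_m_per_w: float, t_earth_c: float) -> float:
    """Conductor temperature: earth ambient plus I^2*R losses multiplied
    by the total thermal resistance (insulation + duct + earth) per meter."""
    losses_w_per_m = current_a ** 2 * r_ac_ohm_per_m
    return t_earth_c + losses_w_per_m * r_thermal_c_m_per_w

# Assumed values: 600 A on a cable with 0.00008 ohm/m AC resistance,
# 1.2 C*m/W total thermal resistance and 20 C earth temperature.
t = conductor_temp_c(600, 0.00008, 1.2, 20.0)
print(f"Estimated conductor temperature: {t:.1f} C")
if t > 90:  # 90 C is a typical insulation rating (e.g., XLPE)
    print("Exceeds the insulation rating; derate or respace the feeders")
```

The point of the full calculation is the same as this sketch: confirm that losses flowing through the thermal path to the earth do not push the conductor past its insulation rating at a load factor near 1.0.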

What are professionals doing to ensure such projects (both new and existing structures) meet challenges associated with emerging technologies?

Amanda Carter: The biggest challenge associated with emerging technologies is that many of these technologies are still undefined. We are often estimating what the future power and cooling requirements will be, so building in maximum flexibility, for ease of expansion or allowance for future growth, is key. At the same time, we don’t want to spend money unnecessarily, so it’s important to balance day-one needs and the desire to build for the future.

Describe a co-location facility project. What were its unique demands, and how did you achieve them?

Amanda Carter: A co-location facility is a data center facility that is designed for lease to outside tenants. They differ from enterprise or hyperscale facilities because they require maximum flexibility to respond to a future tenant's requirements. Sometimes the tenant is known, and provisions are made for their specific needs, but often the facility is built before a tenant is under contract. Therefore, flexibility is built in by designing the data hall white spaces in blocks that include many levels of redundancy and the ability to parcel the spaces as required.

Tell us about a recent project you’ve worked on that’s innovative, large-scale or otherwise noteworthy. Please tell us about the location, systems your team engineered, key players, interesting challenges or solutions and other significant details.

Brian A. Rener: We are currently engaged in a high-performance computing facility for the Department of Energy in Virginia. The facility will be deploying the latest generation of exascale computers, requiring more than 300 kW per rack and direct water-to-chip cold plates. This amount of power and cooling density requires careful planning for the infrastructure and distribution within the data center to the racks.
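A 300 kW rack makes the cooling-water math concrete. Using the basic heat-transfer relation Q = m·cp·ΔT, and assuming a 10 C supply/return temperature split (an assumed design value, not from the article), the flow per rack is:

```python
# Cooling-water flow needed by a 300 kW rack with direct-to-chip cold
# plates: flow = Q / (rho * cp * dT). The 10 C temperature rise is an
# assumed design value, not from the article.

RACK_KW = 300.0
CP_WATER = 4.186      # kJ/(kg*K), specific heat of water
RHO_WATER = 1.0       # kg/L (approximate)
DELTA_T = 10.0        # K, assumed supply/return split

flow_l_per_s = RACK_KW / (RHO_WATER * CP_WATER * DELTA_T)
flow_gpm = flow_l_per_s * 15.850  # liters/s -> US gallons/minute
print(f"{flow_l_per_s:.1f} L/s ({flow_gpm:.0f} gpm) per rack")
```

Roughly 7 L/s per rack, multiplied across a row of racks, is why distribution piping and coolant distribution units need to be planned alongside the electrical infrastructure rather than after it.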

Figure 2: With a generator power capacity of 80 MW, the Digital Realty Data Center Campus in Franklin Park, Illinois, provides co-location and peering services. Courtesy: Stantec and Digital Realty.

William Kosik: In some cities in the United States, water use and sewer discharge are scrutinized heavily during the initial design phases. The municipalities require detailed studies on annual water use and sewer discharge, including peak monthly flow rates. On a recent project, the municipality had a grey-water system that was used for makeup water to the cooling equipment. The data center was required to have on-site water storage for the makeup water system in case of a low-flow condition from the municipality's grey-water system.

How are engineers designing these kinds of projects to keep costs down while offering appealing features, complying with relevant codes and meeting client needs?

Brian A. Rener: Construction costs continue to rise, but not as rapidly as a year or two ago. A major remaining challenge is procurement and lead times. This may mean choosing equipment that is more expensive but has shorter lead times, bringing the project online far sooner.

How are you preparing for future phases of data center operation? How do you specifically address this challenge when the first phase is still operational?

Amanda Carter: For a co-location data center, the facility is specifically designed to be built in phases. One way this is done is by using modular electrical rooms that can be installed on skids or in prefabricated structures that sit just outside of the data center building. This allows for a plug-and-play approach to the phasing and expansion of the facility as the space is leased out. For hyperscale data centers, phasing is often done on a building-by-building approach instead of a data hall-by-data hall, which accounts for future phases through utility planning for future expansion. Underground utilities are prepared and terminated at or near future phases, and head-end services like substations and water service are sized upfront for future capacity.

Brian A. Rener: Good engineering design for data centers includes infrastructure that is concurrently maintainable. This approach also allows for phased future growth and buildout without interrupting currently operating data halls. Typical features include N+1 distribution paths and taps for additional infrastructure and equipment.