Data Centers

How to design data centers

An expert panel provides engineering and design tips in this Q&A

By Consulting-Specifying Engineer April 26, 2021
Courtesy: SmithGroup

Respondents

  • Peter Czerwinski, PE, Uptime ATD, Mechanical Engineer/Mission Critical Technologist, Jacobs, Pittsburgh
  • Garr Di Salvo, PE, LEED AP, Associate Principal – Americas Data Center Leader, Arup, New York
  • Scott Gatewood, PE, Electrical Engineer/Project Manager, Regional Energy Sector Leader, DLR Group, Omaha, Neb.
  • Brian Rener, PE, LEED AP, Mission Critical Leader, SmithGroup, Chicago
Peter Czerwinski, PE, Uptime ATD, Mechanical Engineer/Mission Critical Technologist, Jacobs, Pittsburgh; Garr Di Salvo, PE, LEED AP, Associate Principal - Americas Data Center Leader, Arup, New York; Scott Gatewood, PE, Electrical Engineer/Project Manager, Regional Energy Sector Leader, DLR Group, Omaha, Neb.; Brian Rener, PE, LEED AP, Mission Critical Leader, SmithGroup, Chicago. Courtesy: Jacobs, Arup, DLR Group, SmithGroup



What’s the current trend in data centers?

Peter Czerwinski: The demand for new data centers and the expansion of existing ones has increased significantly due to the spike in reliance on virtualization and remote work brought on by COVID-19, placing even greater importance on time-to-market.

Garr Di Salvo: Over the past year, pandemic-driven information technology modernization has accelerated cloud and colocation demand growth. Remote working has emphasized the importance of connectivity across large geographies, spurring investment in robust, reliable and secure distributed networking. To accommodate growing demand in traditional markets, clients are considering sites that would have previously been untouchable, due to the level of investment required to develop the site and surrounding infrastructure. Demand has grown beyond the major “NFL” markets and core connectivity hubs, forcing a move into more speculative markets. Scalability, flexibility, time-to-market and cost continue to be key considerations driving new approaches, like design standardization and off-site fabrication.

Scott Gatewood: The current data center trend is the management of blending. The blending of on-premises enterprise, edge and co-location services continues to create both ecosystem opportunities and challenges; integrating and managing this emergent organism is the central task.

Brian Rener: Energy use continues to be a major challenge for data centers. Various estimates have data centers growing from 1% of global energy consumption toward 8% or higher. Yet many large data center owners are investing heavily in reducing energy consumption, so even as data centers expand, they improve their energy efficiency.

Secondly, there is a push to put more data centers on “the edge” of major cities and towns rather than in large central locations as the need for high-speed data grows with new technology like 5G. Lastly, we are seeing increasing use of high-performance computing for artificial intelligence applications in government, enterprise and research facilities.

What future trends should an engineer or designer expect for such projects?

Garr Di Salvo: Looking forward, I expect corporate sustainability commitments to percolate through organizations. This will continue to drive real energy and water savings and fuel efforts to minimize the industry’s carbon footprint. On-site net zero could be achieved, at least for small scale data centers. Other trends include the growth of edge computing, fostered by 5G deployments and growing demand for remote connectivity. Energy storage breakthroughs will force us to reassess system reliability strategies, along with operations and maintenance. Enhanced use of analytics and system modeling also offers tremendous opportunities for operational optimization.

Peter Czerwinski: Data center owners are placing higher importance on sustainability and carbon footprint in their data center designs.

Scott Gatewood: Designers should expect the old trends to persist (efficient, scalable, resilient cooling; generation; suppression; security) while the future brings client-critical clusters of interconnected server and storage structures, networked to prepare for and leverage the rise of the “internet of things.”

Brian Rener: There will be a continued push to lower energy metrics such as power usage effectiveness, along with additional sustainability measures such as carbon emissions and water usage.
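Power usage effectiveness, the metric Rener cites, is defined as total facility energy divided by IT equipment energy, so a value of 1.0 is the theoretical ideal. A minimal sketch of the calculation (the energy figures are hypothetical, for illustration only):

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy / IT equipment energy.

    1.0 is the theoretical ideal (no overhead); lower values are better.
    """
    return total_facility_kwh / it_equipment_kwh

# Hypothetical annual figures: 14 GWh total facility, 10 GWh to IT loads
result = pue(14_000_000, 10_000_000)  # → 1.4
```

Driving PUE down means shrinking everything in the numerator that is not IT load: cooling, power conversion losses, lighting.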

How is the growth of cloud-based storage impacting co-location projects?

Garr Di Salvo: We’re seeing more deployments not only of high-density solid-state storage but also a resurgence in tape storage and tape transfer deployments. This trend has been particularly impactful in facilities that leverage the fringe of the ASHRAE Technical Committee 9.9 environmental envelope. Tape systems require more stringent conditions than even the recommended class A1-A4 operating environment. Introducing such systems into conventional white space demands equipment isolation and control system changes and may require supplemental heating, ventilation and air conditioning support.

Peter Czerwinski: Cloud-based web hosting is an enticing offering to the market; however, tight control over IT hardware remains a strong requirement for businesses with critical storage and applications.

What types of challenges do you encounter for these types of projects that you might not face on other types of structures?

Brian Rener: Data centers require significantly more mechanical and electrical infrastructure than conventional facilities. There is a need for rapid, efficient development of construction documents and support for fast-track construction. Given the size, complexity and time frames involved for the design and construction team, strong project management and project controls are essential. A solid integrated design approach is critical.

Peter Czerwinski: Data centers are often incorporated into existing campuses or developments. Due to their high heat rejection requirement relative to other structures, it can be a challenge to design the equipment and building layout to accommodate the airflow and square footage required for adequate heat rejection while complying with local sound ordinances.

Garr Di Salvo: Data center developments represent some of the fastest-paced projects we do. It’s not uncommon to get a proposal request one week, have a project kickoff the week following and deliver full construction documents within a couple of months. Construction schedules are similarly compressed. Successful project delivery hinges on the ability to mobilize experienced staff, communicate issues clearly and make decisions quickly.

Scott Gatewood: Variability in site infrastructure, structural durability and renewables integration remains unique to each client’s objectives and must be planned well in advance.

What are professionals doing to ensure such projects meet challenges associated with emerging technologies?

Garr Di Salvo: IT refresh cycles happen on the order of three to five years in many firms and new technology developments come out at an even faster rate. Contrast this with the 20-plus year life cycle of building infrastructure equipment and it is readily apparent that facility designs must be flexible enough to accommodate the needs of technology not only of today and tomorrow, but also five to eight generations on.
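The mismatch Di Salvo describes is simple arithmetic: dividing a 20-plus-year infrastructure life by a three-to-five-year refresh cycle yields roughly the five to eight hardware generations he cites. A quick sketch of that calculation:

```python
def refresh_generations(infrastructure_life_yrs: float, refresh_cycle_yrs: float) -> int:
    """Approximate number of IT refresh generations a facility's
    infrastructure must accommodate over its service life."""
    return round(infrastructure_life_yrs / refresh_cycle_yrs)

# A 20-25 year infrastructure life against 3-5 year IT refresh cycles
low = refresh_generations(20, 5)   # → 4
high = refresh_generations(25, 3)  # → 8
```

The design consequence is that capacity, space and cooling topology must remain valid across hardware the designer cannot yet specify.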

Peter Czerwinski: IT equipment rack densities are trending upward, so a design that accommodates today’s IT density may not function adequately for future higher densities. Therefore, design professionals need to account for proper space layout, cooling utilities and power supply for future demands.

Scott Gatewood: Clearly the IT equipment changes are transformative. However, within the infrastructure side, one of the most innovative emerging industry trends will be the final evolution of liquid contact cooling. Displacing cooling air flows with liquid cooling mass will create structural and IT management changes affecting the logistics spaces from staging, test and configuration to the environments through the data hall.

In what ways are you working with IT experts to meet the needs and goals of a data center?

Scott Gatewood: Over the past 25 years, the most influential work has been on energy management for resilience. IT professionals contend with software and hardware transitions and optimization, with no time and little interest in energy management for resilience. By simply concentrating on Energy Star rated equipment (not the Binford 9000 series), with simple cooling airflow management techniques and methods of procedure that mine orphaned IT equipment, IT experts enable the integrated infrastructure’s operational bandwidth.

Brian Rener: Engineers must work closely with IT experts on the number of racks, the current and future types of equipment to be installed and the maximum anticipated power limits. As engineers we need to help define both the maximum power and heat load demand but also the phased implementation and growth of that IT load. The design and installation need to be modular and expandable.

Peter Czerwinski: Early engagement with end users is key to a successful data center project. Important design parameters such as allowable rack inlet temperature, current and future rack densities and capacity and required resiliency can be determined and ideal design options can come to fruition.

Garr Di Salvo: Arup is constantly working with industry experts to identify the impacts of IT hardware and networking developments on the built environment and facility requirements. As a truly diverse, multidisciplinary organization, our staff supports a wide variety of industry, code and standards organizations allowing us to see and shape future facilities. Through our Ventures initiative, we explore concepts for products, digital tools and new businesses and provide funding and technical support to commercialize new technologies.

Describe a co-location facility project. What were its unique demands and how did you achieve them?

Garr Di Salvo: Co-location facilities must provide reliable service to end users, operate efficiently and scale to accommodate changes in demand. We work with our clients to support new customers by helping them liberate and use stranded capacity within their existing investments. Often there’s an imbalance in the capabilities of the various building systems. Supplementing these shortcomings provides cost effective growth options. For example, by refeeding some power distribution and providing supplemental cooling support at one client facility, we were able to add 30% more capacity without installing additional electrical equipment.

Scott Gatewood: Co-location clients are challenged by the overlapping and blending of supporting infrastructure that, over time, becomes ever more challenging as equipment churn results in reordering and unavoidable overlaps with prior infrastructure. Enabling simple, segmentable and dedicated power and telecommunications infrastructure for each tenant is important.

Tell us about a recent project you’ve worked on that’s innovative, large-scale or otherwise noteworthy.

Peter Czerwinski: A recent data center project I worked on was challenging due to the requirement to design a prototype building that was nearly identical for all four locations in different geographies. There were also different groups of IT end users with conflicting operational criteria that had to be coordinated and implemented into the design, resulting in two different types of construction (i.e., stick-built and off-site pre-manufactured) with both pumped refrigerant economization and indirect evaporative cooling.

Garr Di Salvo: Many co-location and cloud developers rely on standard architectures or reference designs to provide a reliable, efficient, tested product. We’re currently taking this concept of standardization one step further by helping a client leverage a single facility design across multiple sites in several locations. Mechanical, electrical, plumbing, fire protection, telecom and security systems are the easiest systems to standardize in such a facility. Civil, structural and architectural systems present greater challenges but we’ve been able to achieve a very high degree of commonality throughout this program.

Scott Gatewood: The most challenging projects are renovations of operational data centers. They demand strategic, multiphase implementations that protect and preserve the existing infrastructure that remains, without placing ongoing IT operations at risk. A recent project integrated 6 megawatts into a landlocked city-center site, requiring the repositioning of existing and neighboring utilities, compliance with acoustical and emissions ordinances, refueling access and physical security and durability requirements. This included new primary electrical and mechanical systems combined with a modern data center hall supported by a reoriented logistics train of dock, staging, test and configuration program spaces.

This shows the main electrical room in a data center. Courtesy: SmithGroup


Brian Rener: SmithGroup designed the MEP systems for the new Dwight and Dian Diercks Computational Science Hall at the Milwaukee School of Engineering. The building contains a supercomputer providing access to artificial intelligence for learning at the school. While the supercomputer room occupies less than 1% of the physical space of the new structure, it consumes more than 60% of its energy. As such, it needed to be seamlessly integrated into the building and MEP infrastructure. The entire data center is designed for N+1 redundancy, allowing it to remain functional even in the event of a single component failure, thanks to multiple backup generators and cooling units.

The engineering team was able to achieve a symbiotic relationship between the base academic building energy systems and the supercomputer systems. For instance, the computer room and academic building use the same cooling system during summer months, maximizing efficiency across the whole building. During the Wisconsin winter, when the academic building no longer requires mechanical cooling, the computer facility uses “free cooling” from cold outside air via a separate system to keep the NVIDIA supercomputer cool.
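The N+1 redundancy Rener mentions means providing one more unit than the load strictly requires, so any single generator or cooling unit can fail (or be taken down for maintenance) without losing capacity. A hypothetical sizing sketch (the load and unit figures are illustrative, not from the project):

```python
import math

def units_for_n_plus_1(load_kw: float, unit_capacity_kw: float) -> int:
    """Units required under N+1 redundancy: enough units to carry
    the design load (N), plus one redundant unit."""
    n = math.ceil(load_kw / unit_capacity_kw)
    return n + 1

# Hypothetical: a 900 kW cooling load served by 400 kW units
units = units_for_n_plus_1(900, 400)  # → 4 (3 to carry the load + 1 spare)
```

The same rule applies per distribution path; higher-resilience schemes (2N, 2N+1) duplicate entire paths rather than single units.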

How are engineers designing these kinds of projects to keep costs down while offering appealing features, complying with relevant codes and meeting client needs?

Peter Czerwinski: Energy efficiency measures that were once costly to achieve are now being implemented in the latest building codes, so as this technology becomes more affordable, certain design features are becoming mandatory.

Garr Di Salvo: Capital and total cost of ownership are a paramount concern for any data center developer or operator. Anywhere from 60% to 80% of construction costs can be attributed to the mechanical, electrical and plumbing infrastructure. By standardizing and modularizing building systems, owners can minimize these costs by leveraging volume procurement agreements and off-site fabrication and testing. We work with clients to identify lessons learned from each experience so they can be used to inform successive builds. Our transactions and analysis group helps us understand each client’s business model and particular financial sensitivities. A co-location operator may need to stage capital costs, paying for tomorrow’s projects with today’s leases, for example, whereas a cloud operator may be more sensitive to time to market and opportunity cost.

Scott Gatewood: Cost management centers around proper capacity planning. As the needs emerge, infrastructure components must be easily added to meet the demand, while avoiding the sunk costs lost to overcapacity.

Brian Rener: We are strong proponents of integrated design, with all disciplines working together in real time on shared design platforms such as BIM on the cloud and using strong quality control and quality assurance tools. This real-time collaboration extends to clients, construction managers, contractors and cost estimators, who participate in design assist and pre-purchasing of long-lead equipment.

How has your team incorporated integrated project delivery, virtual reality or virtual design and construction into a project?

Garr Di Salvo: In one project, a change in requirements forced the support scheme for power and telecom loads to be relocated from the ceiling to the floor. To avoid dismantling the existing installation, we leveraged laser scans of the space and designed a supplemental support structure around the as-built condition. Using augmented reality headsets, the construction team was able to see the new configuration and provide input to streamline buildout. A digital simulation of the installation sequence was used to align the entire project team, greatly improving the implementation of this challenging revision.

Scott Gatewood: Wisdom, judgment and creative ideas emerge from every stakeholder, and one never knows from where they will emerge. Combining the people, the systems and the business interests to facilitate and harvest results for the client is what we should all be doing. As a full-service design organization, integrated design is completely natural and is a guiding principle at DLR Group. The experiences and imaginations of the owner and the constructor can only maximize the outcome. While BIM now facilitates the sharing of spatial understanding with those who don’t create plans, VR visualizes what has not been imagined. Combined with geospatial laser imaging, we now share and preserve our built environments.
