What do you need to know about designing data centers?
Will data centers get larger? More efficient? Use less water? Learn about the trends here.
Data center insights
- The demand for energy and water will directly impact how data centers are designed and operated.
- Cost constraints, site availability and energy use will dictate where data centers are located.
- Bill Kosik, PE, CEM, BEMP, Senior Energy Engineer, DNV, Oak Park, Illinois
- Brian Rener, PE, LEED AP, Principal, Mission Critical Leader, SmithGroup, Chicago, Illinois
- Ameya Soparkar, Market Leader, Mission Critical, Affiliated Engineers Inc., Rockville, Maryland
- Robert Sty, PE, LEED AP, Vice President, HDR Inc., Phoenix, Arizona
What are some current trends in data centers?
Bill Kosik: According to the Jones Lang LaSalle study H1 2022 Global Data Center Outlook, starting in 2017, hyperscale data centers nearly doubled their electricity consumption to 87 terawatt-hours, while traditional data centers halved theirs to 33 terawatt-hours. Because hyperscale data centers have a much higher information technology (IT) power density (watts per square foot) than traditional enterprise data centers, a single data center can be designed with an IT load of dozens of megawatts in the same footprint.
In 2010, the global energy demand for hyperscale and cloud data centers was 13% and for traditional data centers 87%. By 2022, global energy demand for traditional data centers was projected to be 23% with cloud and hyperscale data centers making up 77%.
Brian Rener: We are seeing increased requirements for higher density rack loads (30 kW or greater) and various forms of water cooling mixed with air cooling for high-performance or artificial intelligence (AI) computing. It may only be future provisions for now, but there is an expressed need for future flexibility.
Ameya Soparkar: Apart from more capacity being added in the cloud and colocation sectors and new developers entering the data center facility market, there are three trends I find noteworthy. The first is the enterprise data center market, which continues to see gradual growth.
The second is edge computing, which has grown in companies that don't have IT as their core business function (manufacturing, pharmaceuticals, retail, etc.), with server loads of less than 2 MW.
And lastly, liquid cooling. With Microsoft clearing the way by running a production data center on two-phase immersion cooling, adoption of that technology, as well as direct-to-chip liquid cooling, is on the rise.
Robert Sty: Over the past few years, many colocation providers have moved to a standardized model for several reasons. Recently, the ability to procure equipment early in the design process to secure a position at the start of the supply chain has taken on a new importance. Every year we see higher density deployments and discussion around when to move to a liquid cooled solution. For years, the discussion in almost every forum was decreasing a data center's power usage effectiveness (PUE) value, yet the prolonged drought in the western United States has changed the conversation to finding a balance between reducing energy use and reducing reliance on water-based heat rejection strategies.
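The energy-versus-water balance Sty describes is usually tracked with two ratios: PUE and its water analogue, water usage effectiveness (WUE). A minimal sketch of both calculations follows; the facility figures are illustrative assumptions, not measurements from any real data center.

```python
def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """Power usage effectiveness: total facility energy / IT energy (ideal = 1.0)."""
    return total_facility_kwh / it_kwh

def wue(site_water_liters: float, it_kwh: float) -> float:
    """Water usage effectiveness: site water use per unit of IT energy (L/kWh)."""
    return site_water_liters / it_kwh

# Hypothetical annual figures for a 10 MW IT load facility (assumed values)
it_kwh = 10_000 * 8_760        # 10 MW of IT load running all year
total_kwh = it_kwh * 1.4       # assumed cooling and power-distribution overhead
water_l = 150_000_000          # assumed evaporative heat-rejection water use

print(f"PUE = {pue(total_kwh, it_kwh):.2f}")          # 1.40
print(f"WUE = {wue(water_l, it_kwh):.3f} L/kWh")
```

Trading a cooling tower for dry coolers typically moves these metrics in opposite directions: WUE drops toward zero while PUE rises, which is the balance the conversation has shifted toward.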
What future trends (one to three years) should an engineer or designer expect for such projects?
Bill Kosik: According to the latest City of Chicago Energy Benchmarking Study (2019), of the commercial buildings included in the study, data centers represent only 1% of the floor space but are responsible for 26% of the indirect greenhouse gas (GHG) emissions. As municipalities expand their decarbonization programs, commercial buildings (in particular, data centers) are a significant part of the reduction potential.
Ameya Soparkar: Designing for mission critical IT in facility infrastructure that was not initially designed to be mission critical will be the challenge. With the growth in enterprise and edge data centers, IT hardware is being placed in areas that were considered nontraditional spaces and in buildings that were not designed for data center operation. Ensuring uninterruptible power, continuous cooling and seamless maintainability will require engineers and designers to look at the infrastructure from its basic elements and come up with creative ways to make modifications and additions that fulfill the mission critical requirements of the business.
Robert Sty: As municipalities push back against using water-based heat rejection technologies (cooling towers, direct/indirect evaporative cooling), our teams will be challenged with finding new ways to effectively cool data centers. More attention will be given to developing solutions that can manage extreme temperatures, for example, when the temperature reached 115°F in Oregon and during the deep freeze Texas suffered in 2021. The industry has seen large interest in investigating on-site power generation strategies to address markets where the utility is constrained.
How is the growth of cloud-based storage and virtualization impacting co-location projects?
Robert Sty: Cloud-based storage is one of the main drivers of growth for the data center industry, with demand expected to grow as companies move to either full cloud or hybrid cloud solutions. Hyperscale cloud providers are leveraging co-location providers and in turn are expanding into secondary markets due to a lack of land and power availability in the traditional markets. There will be strain on all aspects of the supply chain: materials, equipment, construction labor and technical knowledge, which includes engineering and architecture design firms, among other professionals in the industry. Although this will inherently challenge all parties to stay within cost constraints, it will, optimistically, drive new innovations in the delivery process.
What types of challenges do you encounter for these types of projects that you might not face on other types of structures?
Bill Kosik: Fast-paced schedules, multiple clients, site challenges (power, water) and rapid technology advancements; certain organizations use standardized (but often customized) equipment configurations. Due to the aggressive design and construction schedules, vendors and equipment manufacturers sometimes have difficulty meeting delivery dates. Finding the balance between efficiency, reliability, maintainability and cost can be difficult unless the customer has clear standards in place.
Ameya Soparkar: In typical data center projects, probable failures are factored into the design and the system is designed for continuous operation. You will have large cooling plants, redundant generators capable of supporting 100% of the building load and fuel storage to last for days. The infrastructure runs 24x7x365, which is not the case in many commercial buildings; hence, having an on-site team and a response strategy for failures at any time of day or year is common in data centers.
Some of the challenges in recent years have been adding/building more capacity amid the still-ongoing supply chain shortages, getting additional power from the utility and finding capable personnel.
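Sizing "fuel storage to last for days" reduces to a simple runtime estimate: stored fuel divided by the generator plant's consumption rate at load. The sketch below uses hypothetical round numbers; real consumption rates depend on the specific engine and load factor, so manufacturer data should be consulted.

```python
def runtime_hours(tank_gallons: float, gen_load_kw: float,
                  gal_per_kwh: float = 0.07) -> float:
    """Estimated generator runtime from stored diesel.

    gal_per_kwh is an assumed consumption rate near full load;
    actual rates vary by engine model and load factor.
    """
    return tank_gallons / (gen_load_kw * gal_per_kwh)

# Hypothetical: 50,000 gal of on-site diesel backing a 5 MW critical load
hours = runtime_hours(50_000, 5_000)
print(f"~{hours:.0f} hours (~{hours / 24:.1f} days) of runtime")
```

Running the estimate both ways (required days of autonomy to tank size, or tank size to days) is a common early feasibility check before refueling contracts are considered.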
Robert Sty: There can be significant pushback from municipalities against large-scale data center development versus other types of projects, such as high-tech manufacturing. Although the systems and platforms are critical to the operation of every business, the facilities themselves do not always support many primary jobs (direct employees). In contrast, semiconductor manufacturing facilities could potentially employ thousands of workers. Both types of facilities use a large amount of utility resources and support many secondary jobs in the real estate, architecture, engineering and construction industries. In many cases, the manufacturing facilities would be welcomed due to the vast employment potential, while the data center may be seen as contributing less to economic development.
What are professionals doing to ensure such projects meet challenges associated with emerging technologies?
Brian Rener: Higher density rack power demands mean increased future demands for water to the rack or water to the chip. In some cases, liquid immersion tanks are being considered. The presence of water in a data center brings special challenges, such as increased structural capacity requirements on the data center floor and careful consideration of fluid spill management.
Robert Sty: Retrofitting existing data centers can be a significant challenge when considering new and emerging technologies. The facility may not have the power or cooling infrastructure in place to support higher density deployments. Strategies, such as rear door heat exchangers, could be appropriate in this type of scenario. We have also proposed adding supplemental structural framing to support new equipment on the roofs of data centers. While new greenfield designs can be easier to adapt to potential new technologies, there are still limitations to space, power and cooling infrastructure given the allotted footprint of the facility.
In what ways are you working with information technology experts to meet the needs and goals of a data center?
Brian Rener: As design engineers and architects, we find it challenging to predict how quickly, and in what phases, the data center will be populated with racks. Modularity and scaled implementation of mechanical, electrical and plumbing (MEP) systems are important not only for energy efficiency, but also for initial capital expenditures and procurement schedules.
Ameya Soparkar: IT experts broadly fall into two categories: those who make the hardware and those who consult on the software applications for businesses, determining the type of compute and storage capacity they will need, the IT hardware that would fulfill the requirement and the future growth trajectory of the IT. Working with both sets of experts allows us to design the supporting infrastructure for the specific hardware that will be put into the data center, as opposed to designing for generic loads and dealing with under- or over-capacity when the IT manager decides what will populate the space.
Robert Sty: The data center power and cooling systems exist to support the critical IT systems and help maintain operations. Our mechanical and electrical teams work closely with our expert IT/telecommunications engineers to align power and cooling infrastructure that is flexible and efficient. In existing facilities, the engineers will coordinate with the IT teams to support server refreshes, which can increase power demand per cabinet but also create efficiencies through virtualization. Although power requirements are often given to the mechanical and electrical engineer as a static metric of average kilowatts per cabinet, server loads are very dynamic and will fluctuate due to many variables. Using a power and cooling delivery platform that responds efficiently to these dynamic changes takes coordination between mechanical and electrical engineering and IT engineering.
Describe a co-location facility project. What were its unique demands and how did you achieve them?
Robert Sty: Co-location facilities have moved toward standardized designs, which has created a lot of efficiencies and benefits in cost and schedule. Site adaptation for unique climates, such as Phoenix, which experiences extreme temperatures, creates challenges. When developing standardized models, we take into consideration the impacts of various climates and seismic zones. Models that are flexible and adaptable to change respond better than those that are rigid. Where we have seen real benefits is in the site selection process: we have developed a parametric modeling tool that, based on inputs such as redundancy levels and cooling platforms, can develop rapid prototype models to maximize the critical IT load. This helps speed up the decision process for our clients during site selection.
Tell us about a recent project you’ve worked on that’s innovative, large-scale or otherwise noteworthy. Please tell us about the location, systems your team engineered, key players, interesting challenges or solutions and other significant details.
Brian Rener: SmithGroup has a specialty in advanced computing environments, including high-performance computing and artificial intelligence. Years ago, we started focusing on water cooled solutions in addition to air cooled systems. Back then, 50 kW per rack was considered high density; now we are working on projects with 100-300 kW racks and we are having to innovate again on how to efficiently power and cool these leading-edge facilities.
Robert Sty: We recently performed architecture services for a large co-location facility in the Asia-Pacific market, which is far more land constrained than the United States. This drove the client toward a multistory vertical design with power and cooling equipment on the roof. There was a high level of coordination between the standby generators and modular chiller plants located on the roof and the structural and architectural systems. The team had to consider the location and spacing of the equipment to accommodate maintenance and the safety of the facilities team. Located in a metropolitan area with residences, the team also had to consider acoustic mitigation in addition to designing an aesthetically pleasing building. Although it is difficult to resolve the various technical challenges while staying within the client's budget, these types of projects are typically the most rewarding and fun.
How are engineers designing these kinds of projects to keep costs down while offering appealing features, complying with relevant codes and meeting client needs?
Robert Sty: Engineering professionals are part of the overall supply chain more than ever before. COVID-19 and the resulting challenges with the global supply chain have brought this to light. Before COVID-19, we saw a mostly sequential model of design, procure and then build. The model of just-in-time delivery is being challenged and inventory is no longer frowned upon. Our engineering teams are working much more closely with client and contractor procurement groups to pre-order equipment and materials to accommodate long lead times. This means the design team must work very closely with our clients to make smarter decisions earlier in the process. Standardization and scalable, repeatable system design help reduce costs while meeting client needs.
What types of cloud, edge or fog computing requests are you getting and how do you help the owner achieve these goals?
Ameya Soparkar: The cloud companies have extremely capable people on staff and, new builds aside, the general requests we receive are for the tasks that we as consulting engineers are streamlined to do: calculations, studies, computational fluid dynamics, energy analysis, etc. Edge projects are where we do quite a bit of feasibility studies, evaluation of different technologies and total cost of ownership analysis for our clients, so that they can decide which option serves them best in the short and long term. We see ourselves as trusted advisers to our clients and try to develop designs and engineering solutions that fulfill the requirements of their business.
Robert Sty: The purpose of edge computing is to bring the cloud closer to the device for faster processing of data and reduction in latency. Every company defines their edge a little differently depending on the business function. One approach is the deployment of smaller facilities that do not have regular on-site facilities staff, driving solutions that are a little more robust from a reliability viewpoint. Additional options include diesel fuel capacity, N+2 versus N+1 platforms and overall facility envelope enclosures that are hardened and secure. In 2020, ASHRAE TC9.9 put forth a technical bulletin addressing some of these concerns for edge facilities.
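One way to reason about the N+1 versus N+2 choice Sty mentions is a simple independent-failure model: the system loses capacity only if more units fail than the redundancy provides. This is a deliberate simplification (real failures are often correlated), and the 2% unit failure probability below is an arbitrary illustration, not field data.

```python
from math import comb

def p_capacity_loss(n_required: int, n_redundant: int, p_unit_fail: float) -> float:
    """Probability that more than n_redundant of the installed units fail,
    assuming independent, identical unit failure probabilities."""
    total = n_required + n_redundant
    return sum(
        comb(total, k) * p_unit_fail**k * (1 - p_unit_fail)**(total - k)
        for k in range(n_redundant + 1, total + 1)
    )

# Hypothetical plant: 4 units required to carry the load, 2% unit failure chance
for r in (1, 2):  # compare N+1 vs. N+2
    print(f"N+{r}: P(loss of capacity) = {p_capacity_loss(4, r, 0.02):.2e}")
```

Under these assumptions, the extra unit in N+2 cuts the capacity-loss probability by more than an order of magnitude, which is why unstaffed edge sites often justify the added cost.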
What are the data center infrastructure needs for a 5G system to work seamlessly?
Robert Sty: 5G networks will deliver data faster (low latency) and with greater reliability than previous network generations. This means that there will be much more data generated and shared, which will put a strain on existing network infrastructure. Data centers can prepare by upgrading network hardware, like switches and routers, to those that are compatible with 5G networks. Additionally, we will most likely see an increase in edge data center deployments or data centers that are closer to the devices that use and transmit the data.