Data Centers
A data center is a physical facility used to house and manage the critical infrastructure and technology needed to support the storage, processing and networking of data. These facilities are typically designed to support the operation of a large number of computers and servers, as well as the storage of data in the form of databases and file systems. Data centers are usually equipped with redundant power supplies, backup generators and cooling systems to ensure that they can operate continuously and reliably. They also often have multiple layers of physical and digital security to protect against unauthorized access and data breaches. Data centers are used by businesses, government agencies and other organizations to store, process and manage large amounts of data. They can range in size from small server rooms to large, purpose-built facilities with multiple floors and thousands of servers.
Trends, changes in data center design
Several trends are pushing the engineered systems in data centers in different directions
Respondents:
- Bill Kosik, PE, CEM, BEMP, Senior Energy Engineer, DNV, Oak Brook, Illinois
- Matt Koukl, DCEP, Principal, Market Leader Mission Critical, Affiliated Engineers Inc., Madison, Wisconsin
- Kenneth Kutsmeda, PE, LEED AP, Global Technology Leader – Mission Critical, Jacobs, Philadelphia
- Ben Olejniczak, PE, Senior Project Mechanical Engineer, Environmental Systems Design Inc., Chicago
- Brian Rener, PE, LEED AP, Mission Critical Leader, Smith Group, Chicago
- Jonathan Sajdak, PE, Senior Associate/Fire Protection Engineer, Page, Houston
What are some current trends in data centers?
Matt Koukl: The firm sees interest in the investigation and deployment of liquid cooling technologies and higher-density cooling loads necessitating higher-density cooling solutions. These workloads are mostly focused on artificial intelligence, data analytics processing and other high-performance computing workloads. In addition to these workloads, systems that include graphics processing units and other coprocessor type systems are also requiring a rethinking of cooling methods and systems supporting those types of systems.
Kenneth Kutsmeda: Sustainable, carbon-free backup energy solutions are trending in data centers. Many global technology companies are leading the way toward climate action and setting aggressive net zero carbon targets. Data center backup power is typically provided by diesel generators because they are highly reliable and cost effective, but diesel generators exhaust carbon dioxide. To meet their climate goals and eliminate carbon, data centers are looking toward alternative backup energy solutions such as hydrogen fuel cells and lithium-ion battery systems. Fuel cells that use pure green hydrogen (hydrogen produced using renewable energy) are completely carbon-free.
Ben Olejniczak: The biggest trend I am seeing right now is a push to integrate construction as part of a facility’s design process. Traditionally, the construction team would not get involved in a project until the design was completed, the project was bid and the contract was awarded. Now, many of our hyperscale clients have built enough data centers and gained enough familiarity with general contractors and mechanical contractors across the country that those contractors are integrated into the process as trusted partners. The construction team uses its fabrication and installation expertise to provide feedback on the design.
Also, prefabrication is becoming a very important requirement in many of our jobs. What can we do to shorten the schedule and improve our time to first megawatt? What products do we specify, and how do we work them into the job, to decrease on-site labor and speed up installation?
Brian Rener: A focus on water savings and on approaches for new, higher-density power needs.
Hydrogen fuel cell plant rendering. Courtesy: Jacobs
Please explain some of the codes, standards and guidelines you commonly use during the project’s design process. Which codes/standards should engineers be most aware of?
Bill Kosik: I always try to look beyond local code for inspiration and new ideas. Certainly, at the end of the day, you need to meet code, but innovation and code compliance are not mutually exclusive. Organizations such as Uptime Institute and The Green Grid have a wealth of information on data centers. Also, real estate developers like JLL and Cushman-Wakefield regularly publish data center updates. On the technical side, of course anything data center-related from ASHRAE. I also will refer to international design standards and publications such as CIBSE guides and briefings.
Jonathan Sajdak: The most common standards used in the design of data centers include, but are not limited to:
- NFPA 13: Standard for the Installation of Sprinkler Systems.
- NFPA 72: National Fire Alarm and Signaling Code.
- NFPA 2001: Standard on Clean Agent Fire Extinguishing Systems.
These standards contain design requirements for automatic sprinkler systems (i.e., wet-pipe, dry-pipe and preaction), fire alarm and smoke detection systems (including air-aspirating smoke detection) and clean agent systems, respectively. Most building codes and fire codes such as the International Building Code and NFPA 1: Fire Code identify when these fire protection systems are required, then reference the standards that explain how they should be designed and installed.
Where adopted by the jurisdiction, NFPA 75: Standard for the Fire Protection of Information Technology Equipment and NFPA 76: Standard for the Fire Protection of Telecommunication Facilities also contain criteria and requirements for fire protection systems. Some owners will also require insurance underwriter requirements to be met, which may result in additional criteria to follow, such as FM Global data sheets (2-0, 4-9, 5-32, 5-48, etc.).
Ben Olejniczak: Common codes and standards that I use project to project are:
- ASHRAE Standard 62.1: Ventilation for Acceptable Indoor Air Quality
- ASHRAE Standard 90.1: Energy Standard for Buildings Except Low-Rise Residential Buildings
- ASHRAE TC 9.9 thermal guidelines for data processing environments, along with the ASHRAE Datacom Series publications written by the TC 9.9 technical committee, covering items like liquid-cooled server guidelines and recommendations
- International Code Council documents such as the energy and mechanical codes.
It’s hard to put a finger on which codes and standards are most important. They are all important, especially those that may impact your ability to obtain a building permit and deliver a project.
Matt Koukl: The considerable work ASHRAE Technical Committee 9.9 performs and the various research projects funded by ASHRAE provide significant benefit. These publications and standards are used as benchmarks during the design process. The numerous books and publications developed through ASHRAE TC 9.9 are viewed as global standards and as a basis of design knowledge for these facilities. The extensive research and broad depth of knowledge benefit all readers and those who implement facility designs using this information.
Kenneth Kutsmeda: When using lithium-ion batteries for uninterruptible power supply energy storage, two codes that engineers should be aware of are International Fire Code Section 1206, Electrical Energy Storage Systems, and NFPA 855: Standard for the Installation of Stationary Energy Storage Systems. Both contain important requirements (location, separation, quantities, etc.) for the installation of battery systems, in particular lithium-ion batteries. Another standard to be familiar with is UL 9540A, the test method for evaluating thermal runaway fire propagation in battery energy storage systems. UL 9540A was developed to address the safety concerns associated with lithium-ion batteries and to help manufacturers demonstrate compliance with the new IFC and NFPA requirements.
What future trends should an engineer or designer expect for such projects?
Ben Olejniczak: Liquid-cooled server technology will become more prevalent soon. As the internet evolves, demand for AI, virtual reality and augmented reality will drive an increase in computing power requirements, data processing speeds and data storage. We’re already seeing this as our hyperscale clients plan their next-generation data center designs. Higher kilowatts per cabinet, along with a desire to manage white space square footage and overall building footprint, opens the door for liquid-based cooling solutions. These systems retain a compact footprint and can remove more heat than an air-cooled equivalent. Many are beginning to challenge the status quo and historical norms and are revisiting the idea of having fluid in the critical space interfacing directly with IT infrastructure.
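To see why liquid can remove more heat from a compact footprint, consider the basic heat-balance relation Q = ṁ × cp × ΔT. The sketch below is a minimal, illustrative comparison of the coolant flow needed to reject the same rack load with air versus water; the 50-kilowatt rack load, temperature rises and fluid properties are assumptions for illustration only, not figures from any project discussed here.

```python
# Rough comparison of air vs. water flow needed to remove the same rack heat load.
# All numbers below are illustrative assumptions, not values from any cited project.

AIR_DENSITY = 1.2        # kg/m^3, near sea level
AIR_CP = 1006.0          # J/(kg*K), specific heat of air
WATER_DENSITY = 997.0    # kg/m^3
WATER_CP = 4186.0        # J/(kg*K), specific heat of water

def volumetric_flow(load_w, density, cp, delta_t_k):
    """Coolant volumetric flow (m^3/s) from the heat balance Q = m_dot * cp * dT."""
    mass_flow = load_w / (cp * delta_t_k)   # kg/s
    return mass_flow / density              # m^3/s

rack_load_w = 50_000.0   # assumed 50 kW high-density rack

air_flow = volumetric_flow(rack_load_w, AIR_DENSITY, AIR_CP, delta_t_k=12.0)
water_flow = volumetric_flow(rack_load_w, WATER_DENSITY, WATER_CP, delta_t_k=10.0)

print(f"Air:   {air_flow * 2118.9:,.0f} CFM")     # m^3/s -> cubic feet per minute
print(f"Water: {water_flow * 15850.3:,.1f} GPM")  # m^3/s -> U.S. gallons per minute
```

Under these assumptions, roughly 7,000 CFM of air versus about 19 GPM of water carries the same load, which is why rising cabinet densities push designers toward fluid at the rack.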
Brian Rener: Increased attention to alternatives to the use of diesel field generators such as hydrotreated vegetable oils, clean hydrogen, fuel cells and utility grade battery storage units.
Matt Koukl: In general, future trends point to rack and processor power densities increasing to the point of needing to consider alternative cooling methods besides air. Whether it is a new build or a retrofit, thought and consideration should be given to how the facility will accommodate liquid cooling or another alternative to air cooling. Additionally, the need for a hybrid environment of liquid cooling and some air cooling will be critical to the planning and design of either a retrofit or a new-build facility.
Kenneth Kutsmeda: Although the technology is a few years from being readily available, engineering firms like Jacobs are in the early stages of feasibility and concept planning for the use of micronuclear energy for data centers. The constant load profile of a data center is a perfect fit for optimizing a nuclear reactor. Micronuclear will allow the facility to go off grid and self-produce zero-carbon primary power. Data centers can install N+1 micronuclear reactors for reliability and concurrent maintenance. There is potential to offset the cost of the micronuclear plant by selling power from the redundant unit back to the utility. The heat byproduct of micronuclear could be used to drive evaporative cooling, to reduce heating electrical loads or to power a direct air capture system, making the data center carbon negative.
How is the growth of cloud-based storage and virtualization impacting co-location projects?
Kenneth Kutsmeda: As computing demand increases, the need for additional cloud-based storage increases. Land acquisition, utility infrastructure and construction of large cloud/hyperscale facilities take time. To meet demand, cloud providers are looking toward co-location for additional data center space. The traditional co-location facility, with fixed offerings, shared white space and shared infrastructure, is not conducive to cloud providers. Co-location facility designs had to change to meet the requirements of cloud providers: Spaces had to be private and leased in bigger blocks (2 to 5 megawatts). Utility infrastructure, distribution and redundant components had to be dedicated to the space, not shared. Security and fire separation had to be provided between customers.
Bill Kosik: The move of big corporations to a public-cloud solution has been slow but steady. However, it is projected that, even at the end of 2022, more than half of workloads will still be in on-premises data centers. One of the main challenges is mutual trust and transparency. The customer’s operations are extremely critical, so making sure everyone is on the same page is necessary for a good outcome.
What types of challenges do you encounter for these types of projects that you might not face on other types of structures?
Matt Koukl: Data centers are unique in several ways, but most significant are the high sensible cooling loads and power densities. Every aspect of the design must ensure the data center and the operating equipment inside it have the highest levels of availability, maintainability and resiliency for 24/7 operation of the digital infrastructure. Power densities in areas where computing equipment is present can exceed 2,000 watts/square foot. Most human-occupied facilities, whether hospital, office or research, typically never reach above 20 watts/square foot.
What are professionals doing to ensure such projects (both new and existing structures) meet challenges associated with emerging technologies?
Matt Koukl: Emerging technologies in computing hardware are going to pose some interesting challenges not previously experienced in the modern era of data centers. The best way to understand these technologies and be prepared to confront these challenges is high engagement with organizations such as ASHRAE, the Open Compute Project and others focused on advancing data center design and the technologies supporting data centers. These organizations are made up of industry-leading individuals who develop and deploy equipment with the newest technologies.
Kenneth Kutsmeda: Data center technology is always evolving and changing. Therefore, data center facilities have to be designed to allow adaptation and integration of new technology, especially those data centers that are scalable and designed to grow over time as the load increases. Data center engineers are locating more equipment outside in containerized, premanufactured enclosures. They are bringing cooling systems and centralized electrical/UPS systems outside the physical walls of the data center facility so they can easily adapt to changing technology without affecting the physical data center. For example, locating generators outside allows them to be replaced with hydrogen fuel cells in the future. The distribution system also has to be designed to allow for this plug-and-play configuration.
In what ways are you working with information technology experts to meet the needs and goals of a data center?
Ben Olejniczak: We work with IT experts to obtain information on the hardware that will be located in the data hall. They provide insights on rack loading, staging of load over the life span of the building, server operational environment and any long-term plans the business may have (i.e., air-cooled load migrating over to liquid-cooled load). While it is sometimes difficult to get in touch with IT stakeholders, the information we generally extract from this team is extremely important.
Bill Kosik: In some ways, working with IT managers and directors is equally or even more important than working with the facilities staff. I say this because, without a doubt, IT operations are the element from which everything else emanates. Learning about short- to midterm growth goals (in density and overall power) will help answer questions about what types of systems are most applicable and how and when they need to be expanded. Also, getting the vision on future power and cooling technology, like water cooling or lithium-ion battery storage, is important from a system planning perspective.
Finally, we need to know the type of server/storage/network control and automation that is being planned. Gaining insight into the computer hardware vitals (internal/external temperature, minimum processor power, maximum (or capped) processor power and current as a percent of maximum) can be instrumental in providing real-time data to the cooling and power systems.
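As a minimal sketch of how such hardware vitals might feed the cooling and power systems, the example below aggregates a few assumed telemetry fields into a coarse control hint. The data structure, field names and thresholds are hypothetical illustrations, not any vendor’s API or the specific approach described above.

```python
# Hypothetical sketch: aggregating server "vitals" into a signal a cooling control
# system could act on. Field names and thresholds are illustrative assumptions.
from dataclasses import dataclass
from statistics import mean

@dataclass
class ServerVitals:
    inlet_temp_c: float        # air/coolant temperature entering the server
    outlet_temp_c: float       # temperature leaving the server
    cpu_power_w: float         # current processor power draw
    cpu_power_cap_w: float     # maximum (capped) processor power

    @property
    def utilization(self) -> float:
        return self.cpu_power_w / self.cpu_power_cap_w

def cooling_setpoint_hint(fleet: list[ServerVitals], max_outlet_c: float = 45.0) -> str:
    """Return a coarse hint for the cooling plant based on fleet-wide vitals."""
    avg_util = mean(v.utilization for v in fleet)
    hottest = max(v.outlet_temp_c for v in fleet)
    if hottest > max_outlet_c:
        return "increase cooling capacity"
    if avg_util < 0.3:
        return "opportunity to raise supply temperature / save energy"
    return "hold current setpoints"

fleet = [
    ServerVitals(24.0, 38.5, 180.0, 300.0),
    ServerVitals(25.5, 41.0, 260.0, 300.0),
]
print(cooling_setpoint_hint(fleet))
```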
Describe a co-location facility project. What were its unique demands and how did you achieve them?
Matt Koukl: AEI’s experience with co-location providers focuses on speed to market for capacity deployment, repeatable design solutions and scalable, flexible deployment options. Because these facilities must accommodate diverse types of workloads and densities, system flexibility is a must. It is critical to have scalable systems that meet initial deployment loads, which are commonly much lower than final deployment loads. Additionally, it is important to have a base design that can be adapted site by site to meet requirements without developing new sequences and fundamental system designs.
Tell us about a recent project you’ve worked on that’s innovative, large-scale or otherwise noteworthy.
Brian Rener: We completed a high-performance computing center designed to reside inside a new university building. The energy profile of a data center is different from that of a traditional education building, so we had to find ways for synergy. Also, the initial high-performance computing equipment from NVIDIA was suited to air cooling in a hot-aisle containment system, but we needed to design for future technology changes to water-cooled systems.
Matt Koukl: One current AEI project, a high-performance computing center, includes innovative methods that achieve almost 100% economization in Southern climates. Modularizing large-scale infrastructure into a productized solution that brings efficiency to procurement and installation is also noteworthy. The team performs significant studies to optimize the infrastructure and other aspects of the system, gaining the greatest efficiency at all times of the year. Water reuse is also evaluated, knowing that high-performance computing loads are considerable and need significant amounts of heat rejection.
Bill Kosik: This was a few years back, but it is still one of my favorite data center projects. (I’m going to guess that the data center has had some additions or modifications since it was built.) The project was for a large Midwestern university that has always been a leader in advanced computing. It competed for and won a large grant from the federal government to build a supercomputing facility. What makes this facility so fascinating to me is the sheer scale of the power and cooling load. The overall facility could support an IT load of around 20 megawatts. The IT cabinets ranged from 75 to 150 kilowatts, resulting in a load density across the entire data center floor area of approximately 1,500 watts/square foot. The computers are direct water-cooled, with the exception of a small air-cooled electrical load on each cabinet.
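Those figures can be sanity-checked with simple arithmetic, as in the sketch below; the average cabinet size and the derived floor area and cabinet count are inferred assumptions used only to show the math, not facility data.

```python
# Back-of-envelope check of the cited figures: ~20 MW IT load, 75-150 kW cabinets,
# ~1,500 W/sq ft across the data center floor. The derived floor area and cabinet
# count are inferred assumptions, not facility data.

it_load_w = 20_000_000            # ~20 MW total IT load
density_w_per_sqft = 1_500        # cited whitespace load density

implied_floor_area_sqft = it_load_w / density_w_per_sqft
avg_cabinet_kw = (75 + 150) / 2   # midpoint of the cited cabinet range
implied_cabinets = it_load_w / (avg_cabinet_kw * 1_000)

print(f"Implied floor area: {implied_floor_area_sqft:,.0f} sq ft")
print(f"Implied cabinet count at {avg_cabinet_kw:.0f} kW each: ~{implied_cabinets:.0f}")
```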
Kenneth Kutsmeda: Jacobs developed a plan-of-record design called Cloud Condos. Cloud Condos is a data center that brings together the best features of both hyperscale and co-location design. The modular data center is built and leased in 5-megawatt blocks. Each 5-megawatt block is a private, separated module with dedicated security and fire-rated construction at the perimeter for enhanced protection. Blocks can be repeated horizontally or stacked vertically. Each 5-megawatt block is fed from dedicated utility infrastructure and distribution. Electrical and mechanical systems are located outdoors in prefabricated enclosures and are adaptable to different configurations, technologies and climates. The design is vendor-agnostic, but vendor-specific versions have been developed with major manufacturers to shorten equipment lead times and to leverage emerging technologies and sustainable strategies. A plug-and-play approach maximizes speed to market, allowing design-to-online availability up to 30% faster than traditional delivery approaches.
Ben Olejniczak: Recently, we completed several buildings on a hyperscale data center campus located on the East Coast. Each building supports approximately 75 megawatts and spans approximately 1 million square feet. The main cooling systems consist of a built-up, direct evaporative cooling system and packaged, direct evaporative air handling units. With these systems, we project annualized targets of 1.1 power usage effectiveness and 0.056 water usage effectiveness. The project incorporates multiple levels of distribution and mechanical redundancy and includes on-site water storage for situations where water may become scarce. A solar facility is also to be installed nearby and will provide renewable energy for use by the data center campus.
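For readers unfamiliar with the metrics, power usage effectiveness (PUE) and water usage effectiveness (WUE) are facility-level ratios defined by The Green Grid. The sketch below applies those definitions to illustrative annual totals chosen to be consistent with the stated targets; the energy and water figures are assumptions, not project data.

```python
# Standard facility-level metrics (as defined by The Green Grid):
#   PUE = total facility energy / IT equipment energy        (dimensionless)
#   WUE = annual site water use (liters) / IT energy (kWh)   (L/kWh)
# The annual totals below are illustrative assumptions, not project data.

it_energy_kwh = 500_000_000              # assumed annual IT equipment energy
total_facility_energy_kwh = 550_000_000  # assumed annual total facility energy
site_water_liters = 28_000_000           # assumed annual on-site water use

pue = total_facility_energy_kwh / it_energy_kwh
wue = site_water_liters / it_energy_kwh

print(f"PUE = {pue:.2f}")         # 1.10
print(f"WUE = {wue:.3f} L/kWh")   # 0.056
```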
How are engineers designing these kinds of projects to keep costs down while offering appealing features, complying with relevant codes and meeting client needs?
Matt Koukl: Keeping costs down in the current market is challenging. The firm’s innovative methods include using every piece of the system to the greatest extent while enhancing operational efficiency and overall system efficiency. Engineers help clients achieve systems that look holistically at inputs and outputs, driving efficiency for operations while focusing on the bottom line and capital expenditures.
Ben Olejniczak: Throughout my career, many of the data center client standards that I’ve designed around are centered on ambient design criteria that almost never occur statistically. Systems are deployed oversized and underused, costing the client in both the short and long term. By having practical conversations with my client counterparts and diving into the data, we have been able to align on more realistic design criteria, lowering equipment capital and operational expenditure and right-sizing the system as it should be. Data centers are notorious for an all-encompassing belt-and-suspenders approach. By applying the belt and suspenders where they are needed and practically designing the system elsewhere, unnecessary costs can be avoided. It should be noted that this discussion is critically important to have with your client stakeholders, as each company may have standards it abides by. There is also data indicating that global temperatures are on the rise, adding context for consideration.
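One way to ground that conversation in data is to count how many hours in a typical weather year actually exceed a candidate design condition. The sketch below assumes an hourly dry-bulb series is available in a local CSV file (for example, exported from a typical meteorological year dataset); the file name, column name and 35°C threshold are placeholders, not client criteria.

```python
# Sketch: count how many hours per year exceed a candidate design dry-bulb temperature.
# Assumes an hourly weather file (e.g., a TMY export with a 'dry_bulb_c' column) is
# available locally; the file name and threshold are placeholders, not project data.
import csv

def hours_above(csv_path: str, design_db_c: float) -> int:
    with open(csv_path, newline="") as f:
        temps = [float(row["dry_bulb_c"]) for row in csv.DictReader(f)]
    return sum(1 for t in temps if t > design_db_c)

exceedance = hours_above("tmy_hourly.csv", design_db_c=35.0)
print(f"{exceedance} of 8760 hours ({exceedance / 8760:.2%}) exceed the design dry bulb")
```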
Brian Rener: The focus these days is on supply chain issues and disruptions from COVID and conflicts. There may be reasons to consider the use of alternative materials or equipment, but more importantly these supply chain issues are affecting lead times — in many cases some equipment is running a year out from placing an order. Early bid packages are more critical than ever.
What are the data center infrastructure needs for a 5G system to work seamlessly?
Bill Kosik: Power for 5G systems is much higher than 4G. And 5G systems are a good application for edge computing. Telecom data centers constructed 15 or 20 years ago will not have the power and cooling capability for new 5G systems. It is not uncommon to see the installation of larger (or more) cooling and power central plant components to provide the necessary support. Also, supporting a high-density edge data center is a challenge due to the lack of centralized power and cooling.
Data Centers FAQ
What is difference between cloud and data center?
A cloud is a network of remote servers that are accessed over the internet and are used to store, process and manage data. Cloud networks are typically owned and operated by third-party companies, which offer their computing resources to users on a pay-as-you-go basis. This means that users only pay for the resources they consume and can scale up or down as needed.
A data center, on the other hand, is a physical facility that organizations use to house their critical applications and data. Data centers typically include redundant power, networking and cooling systems to ensure the continuous operation of the servers and other equipment they contain. Data centers are used to store and process data for a variety of purposes, including running applications, hosting websites and storing and analyzing data.
One key difference between a cloud and a data center is that clouds are typically owned and operated by third-party companies, while data centers are owned and operated by the organizations that use them. Another difference is that clouds are typically accessed over the internet, while data centers are accessed through a private network.
What are some examples of data centers?
A data center is a facility used to house computer systems and associated components, such as telecommunications and storage systems. It generally includes redundant or backup power supplies, redundant data communications connections, environmental controls (e.g., air conditioning, fire suppression) and various security devices. Data centers are used by organizations to store, process and manage large amounts of data, including data from websites, cloud-based services and internet-connected devices. A colocation (known as co-lo) data center is a type of data center where equipment, space and bandwidth are available for rental. These co-lo facilities typically host several clients or users within the same building.
How many types of data centers are there?
There are several different types of data centers, each with its own unique characteristics and design. Some of the main types of data centers include:
- Enterprise data centers: These are data centers that are owned and operated by a single organization, such as a corporation or government agency and are used to support the organization's internal operations and services.
- Colocation data centers: These are data centers that are owned and operated by a third-party company and are used by multiple organizations to store and manage their data.
- Cloud data centers: These are data centers that are owned and operated by companies that provide cloud-based services, such as Amazon Web Services or Microsoft Azure.
- Internet exchange points (IXPs): These are data centers that are used to connect different networks and internet service providers, allowing them to exchange traffic and improve internet connectivity.
- Edge data centers: These data centers are located near the "edge" of a network, close to where data is generated and consumed. They allow organizations to process and analyze data locally and make decisions faster.
- Hyperscale data centers: These are data centers that are specifically designed to handle very large amounts of data and have large-scale infrastructure.
- Micro data centers: These are small-scale data centers, often used for edge computing and are typically housed in a single rack.
- Containerized data centers: These data centers are housed in shipping containers, providing an easy and fast way to deploy a data center.
The specific type of data center that an organization chooses will depend on its specific needs and requirements. Some organizations may choose to use a combination of different types of data centers to meet their needs.
How does a data center work?
At a high level, a data center works by receiving and processing data from various sources, such as computers, servers and internet-connected devices. The data is then stored on servers, which are powerful computers specifically designed for data storage and processing. The servers are connected to a network of switches and routers, which allows the data to be transmitted and received by the various devices and systems that make up the data center.
The data center also includes various environmental controls, such as air conditioning and fire suppression systems, to ensure that the servers and other equipment are operating in a safe and stable environment. In addition, the data center is designed with multiple layers of security, including physical security, network security and data security, to protect the data from unauthorized access.
Consulting engineers play a variety of roles in data centers, including the design of the mechanical, electrical, plumbing (MEP) and fire protection systems.
Some FAQ content was compiled with the assistance of ChatGPT. Due to the limitations of AI tools, all content was edited and reviewed by our content team.