Designing modular data centers

Modular data centers offer quick turnaround and clear project costs. Like traditional data centers, they require carefully engineered electrical and cooling systems.
By Brian Rener, PE, LEED AP, and Bill Kosik, PE, CEM, BEMP, LEED AP BD+C December 15, 2014

This article has been peer-reviewed.

Learning Objectives

  • Learn about the key components that make up a modular data center.
  • Understand when modular data centers can best be used.
  • Know the electrical and cooling requirements of modular data centers.

When people in the data center industry hear the term "modular data center" (MDC), each will have a different impression of what an MDC really is. The use of MDCs is on the rise: IDC reported that among the 74% of respondents who stated they had undertaken major rebuilds, approximately 87% had deployed modular, prefabricated, or container data center components in the past 18 months. But this has not changed the reality that most of the data center industry—whether focusing on IT or facilities—still does not have a consistent understanding of the different types of modular data centers.

The authors prefer to use the term “flexible facility” because not all of the facilities are truly modular, but still offer many of the advantages of modularization. These data center facilities come in the form of containers (located indoors and outdoors), industrialized or prefabricated, and brick-and-mortar buildings. Yes, brick-and-mortar buildings can be considered a flexible facility if they are designed using the same scalability and agility concepts as a prefabricated or containerized data center. For the purpose of this article, we will focus on three primary types of flexible facilities: containerized solutions, industrialized or prefabricated data centers, and enterprise brick-and-mortar data centers.

Buzzword 101: Clarifying modularity, scalability, and flexibility

There are many widely used industry terms for MDCs, and all of them have broad definitions. Terms like modularity, scalability, vertical integration, flexibility, agility, prefabrication, repeatable quality, and rapid deployment pepper marketing brochures, industry publications, and other industry outlets. This is not to say that using these terms is inaccurate; rather, the terms must be tied to real-world products and approaches, and to how those can improve the operations of the data center and, ultimately, business performance. As an example of how this can be accomplished, consider the following IDC data on the reasons for the increasing proliferation of MDCs:

  • Quick turnaround from contract signing to going live, which enables enterprises to quickly scale up computing solutions when they need them
  • Greater clarity into project costs, which enables enterprises to have greater control over budgets and estimates on associated return on investment
  • An ability to deploy solutions very rapidly and almost anywhere in the world, whether enterprises need additional data center capacity in an existing building or at a remote location, such as for redundancy or in support of a satellite location.

In these situations, the ideas of modularity, scalability, and flexibility become more concrete.

Basics of flexible facilities

When containerized data centers began to gain greater visibility and started to be used in more mainstream business applications, advances in design and engineering enabled closely integrating IT and facilities, both from a physical location and an operational approach. Instead of using components created for an enterprise data center and fitting them into a much smaller space, today’s flexible facilities are designed differently using factory assembled and tested IT, power, and cooling components designed specifically for the type of enclosure.

Figure 1: Equipment racks in the cold aisle in a containerized data center offer the ability to design the IT gear for high densities (a). The hot aisle must allow for the optimal mix of safe personnel access and heat removal (b). Courtesy: HP Data Center Facilities Consulting

There is an important foundational belief regarding flexible facilities: the facility is built around the IT, rather than the IT being designed around the facility, the latter being the typical model in the data center industry. Dynamic IT operations require a facility that can respond and adjust to changing needs without significant cost and downtime (see Figure 1). Also, the capability for dynamic scale-up and decreased time to market provides an important advantage for IT operations. This scale-up (or scale-out) is enabled not only by the modular aspect of the facility but, arguably more important, by the close integration of the IT equipment installation and the design of the facility.

Other important, albeit not widely known, aspects of a flexible facility have to do with fabrication and on-site construction work streams. Efficiency gains are obtained from reallocating field labor and focusing it on the prefabrication of mechanical and electrical modules. In this model, supply chain management becomes one of the primary activities to ensure on-time schedule delivery and installation.

What a flexible facility is not

The notion of a data center that can be attained in a short amount of time, at a significantly lower cost, and that never becomes obsolete may seem like fantasy, but flexible facilities are still facilities and must follow customary processes and adhere to local building codes and regulations. The belief that flexible facilities are simply dropped into place and plugged in, with IT operations starting right away, is, unfortunately, simply not accurate. Certainly, having the facility (or parts of the facility) under manufacture while the site is being prepped, meeting with the authority having jurisdiction, and working with the local electrical utility will shave considerable time off the schedule.

But electrons and water will flow in the same way regardless of the type of data center, flexible facility, or traditional brick-and-mortar enterprise data center. Also, unless the new flexible facility will be closely connected to an existing facility, a lot of underground conduits and cooling water piping must be installed. Site access, roads, parking lots, retention basins, sewers, and other elements will most likely be needed. Certainly, every project will have different requirements, but the idea is that much more is required beyond the flexible facility itself. And because many of these data centers are deployed in remote locations, it is essential to have local operations and maintenance teams in place to provide scheduled maintenance and to rapidly respond to trouble calls.

Finally, one of the greatest areas of innovation in a flexible facility is the integrated control and operation of the cooling and IT systems. Depending on the design of the data center, the cooling equipment will be closely coupled with the IT equipment and will react sooner to changes in power state of the IT equipment, reducing overall energy use of the cooling systems. Although this description is an oversimplification of the actual operational strategy, it illustrates an area that must be understood before the facility is turned over to the owner: the sophisticated control strategy and physical close coupling of the IT and cooling systems require additional contingencies to avoid thermal overload in the event of an equipment failure. Failure concepts of concurrent maintainability or fault tolerance are certainly not new in designing data centers. But in the realm of flexible facilities, which offer a high degree of optimization including space, additional failure scenarios must be explored.

Cooling system modularity and flexibility

Different flexible facility types can be fitted with different types of cooling systems. The type that is ultimately chosen depends on many factors: facility type, owner familiarity, location, first cost, ongoing energy and maintenance costs, reliability, and maintainability.

Containerized solution: For example, a containerized data center can use direct expansion (DX) cooling, an outdoor air economizer with DX cooling, direct evaporative cooling (with or without DX cooling), cooling water for water-cooled computers, and others. These cooling solutions, other than any sources of cooling water, are integral to the container itself. If chilled water solutions are used, more external equipment is required. To keep the facility as modular as possible, the external equipment can be designed, fabricated, and installed using the same approach as the data center. Therefore, a certain amount of decision making is required to determine the final configuration of the containerized data center. Depending on size and reliability level, these types of facilities will range from 400 kW to 2 MW.

Industrialized or prefabricated data centers: This type of facility offers a greater degree of freedom in choosing a cooling system. Typically, these facilities will have outdoor-located air handling units that use direct outdoor air economization, direct evaporative, or indirect evaporative cooling. In certain geographic locations, mechanical cooling is not required, especially when elevated server inlet temperatures are used. This type of facility more closely resembles a traditional building and uses more physical space for rooms not included in the container, such as a control and command center, storage, and an equipment loading area. The modules are larger but in the same range as the container, and can be expanded until site area and electrical power run out.

Enterprise brick-and-mortar data center: While typically not thought of in the MDC realm, large enterprise data centers can offer similar flexibility and reliability as other types of flexible facilities. There are many reasons that this type of facility would be selected over a prefabricated or containerized data center, and if planned properly, the facility can adapt dynamically to the IT systems and future expansion can be accomplished with no threat to IT operations. These facilities can be of any size, and module sizes will typically correspond to power and cooling equipment sizing and business needs. Central plant equipment can also be designed using a modular approach to facilitate future expansion and reconfiguration.

How does a flexible facility offer efficiency?

This question can be answered in many ways. There are arguments that the space itself can be reshaped or repurposed as IT strategies and hardware continue to evolve. Other claims focus on the effectiveness of a procurement and construction process that uses prefabrication techniques and prototyping. Certainly, these and many other reasons offer the data center owner options and malleability to ensure an optimal data center facility. The cooling system is a good representation of the advantages of flexible facilities. Most large commercial cooling systems are inherently designed and built using a modular approach. The challenge is always deciding on a logical module or block that is cost-effective, energy-efficient, and easily expanded. This is often easier said than done. But this decision process is easier for a data center because the cooling load (IT equipment) is predetermined and there is an understanding of how the equipment will operate and what the specific cooling loads will be.

Figure 2: A hallmark of a flexible facility is the ability of the cooling and power systems to match the growth rate of the IT hardware, resulting in optimal provisioning of the systems. The power and cooling equipment is designed and built in a modular fashion (a). The primary air handling units have a simple yet effective method of conditioning the data center with a high degree of energy efficiency (b). Courtesy: HP Data Center Facilities Consulting

In general terms, the cooling systems for a flexible facility will be of the same construct as those of a traditional data center, but will be arranged in clear, discrete modules that have a one-to-one correspondence to a data hall (see Figure 2). This philosophy applies to all types of cooling strategies (direct expansion, direct outdoor air, chilled water, evaporative cooling, and others) and allows for a much more even outlay of capital, because the specific plan for expansion is known well ahead of construction commencing.

Figure 3: When comparing fan motor energy for a flexible facility vs. a traditional data center facility, IT growth, modularity, and reliability are key factors in the overall system energy consumption. To demonstrate this, a simple energy model was used to determine annual fan motor energy use for a 4-yr period. The flexible facility model is able to maintain the required reliability by increasing the number of modules (a). When comparing the two options, the flexible facility will save energy each year over a 4-yr period. The graph shows the annual energy use as a ratio of the energy use to the first-year energy use of the flexible facility model (b). Courtesy: HP Data Center Facilities Consulting

In addition to first-cost and constructability advantages, flexible facilities, by the nature of their modularity, can provide distinct energy use efficiency benefits. To illustrate this, a simple energy model was used to determine the cumulative energy use of fan motors in air handling units serving the data centers. The model for the traditional data center uses larger air handling units in an N+1 configuration, for a total of three. The flexible facility model uses all of the same parameters except for smaller, more modular air handling units—a total of five in an N+1 configuration. To meet the N+1 requirement, the traditional design carries more redundant fan power because each unit, including the redundant unit, is larger than the units in the flexible facility example. The analysis simulated a 4-yr period, with the IT load running at 25% the first year, 50% the second year, 75% the third year, and 100% the fourth year.


The results indicate a reduction in fan power using the flexible facility approach by 30% to 50% depending on the duration that was studied. This analysis, albeit very simple, demonstrates the potential for a modular approach to reduce energy use and possibly improve maintenance and operations by sizing equipment and systems based on the IT equipment, reducing possible oversizing and inefficiencies (see Figure 3).
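The staged comparison above can be sketched as a toy model. The per-unit fan ratings (50 kW and 25 kW), the constant-volume simplification (a fan draws rated power whenever energized), and the all-units-energized N+1 staging are illustrative assumptions, not details of the authors' actual model; real variable-speed systems behave differently.

```python
import math

# Assumed illustrative values, not figures from the article's study.
RATED_KW = {"traditional": 50.0, "flexible": 25.0}   # fan power per unit (assumed)
CAPACITY = {"traditional": 0.50, "flexible": 0.25}   # fraction of design load per unit
LOADS = [0.25, 0.50, 0.75, 1.00]                     # IT load, years 1-4 (from the article)
HOURS = 8760                                         # hours per year

def units_online(load: float, capacity: float) -> int:
    """Units needed to carry the load, plus one redundant unit (N+1), all energized."""
    return math.ceil(load / capacity) + 1

def cumulative_energy_kwh(kind: str) -> float:
    """Constant-volume simplification: each energized fan draws its rated power."""
    return sum(
        units_online(load, CAPACITY[kind]) * RATED_KW[kind] * HOURS
        for load in LOADS
    )

trad = cumulative_energy_kwh("traditional")
flex = cumulative_energy_kwh("flexible")
print(f"4-yr fan energy saving: {1 - flex / trad:.0%}")  # → 4-yr fan energy saving: 30%
```

Under these assumptions, the saving is largest in year one (the traditional design energizes two large units against a 25% load) and the cumulative 4-yr saving settles at roughly 30%, which is consistent in shape with the 30% to 50% range the study reports.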

What’s next?

Based on the increased demand for MDCs, flexible facilities will continue to evolve and mature. The Internet of Things phenomenon, in which more and more items used in daily life (watches, refrigerators, security systems, and other things we haven’t even thought of) are connected to the Internet, continuously uploading and downloading data, accessing databases, and storing data sets, will require more rapidly deployed, low-cost data centers. Using a flexible facility certainly will be an integral part of this expansion strategy, giving companies the required agility to build strong Internet-based products and services.

Figure 4: A typical data container consists of data equipment racks, distribution panels, and sometimes localized UPSs. Courtesy: SmithGroupJJR

Electrical modular components

As with cooling, a modular or containerized data center can be broken down into several components from an electrical power perspective: the data equipment, mechanical cooling, the main power distribution equipment, and the backup power sources.

The data container will typically consist of racks of data equipment and distribution panels, and may also include localized UPS, cooling units, fire detection or suppression, and lighting (see Figure 4).

Mechanical cooling systems may also be located in their own modular containers, either external to or directly attached to the data containers. This can require a separate power feed to the mechanical systems, or they may be fed from the same source as the container systems, depending on design requirements and tier ratings.

Figure 5: The photo shows one type of outdoor-rated switchgear. Courtesy: SmithGroupJJR

Primary on-site power equipment, such as main switchgear, can be specified and configured in several ways to supply power to modular data centers. One possible configuration is simply an outdoor-rated enclosure, which may or may not be a walk-in type. These types of outdoor gear are common in many types of projects, from data centers to industrial facilities (see Figure 5).

Alternatively, the power distribution equipment can be provided in a container similar to the data equipment. Benefits to this approach include single-source quality control and integration with other components (see Figure 6).

Backup power can consist of standby generators, UPSs, rotary systems, or other sources. Because of their tight environmental tolerances, UPSs are ideally suited to placement in containerized or modular units with integral cooling. Outdoor generator enclosures are commonly applied on many projects and can include both walk-in and non-walk-in types. As with switchboards, the generators can be provided in containerized enclosures and integrated as part of a modular data center solution from a single-source supplier.

Figure 6: Typical switchboards in a modular container are shown in the photo. Courtesy: SmithGroupJJR

It is not uncommon for the central power systems, including switchboards and UPS, to be specified ahead of time and located in a pre-manufactured utility building, with or without walls, with the data center containers sitting just outside or partially inside this utility structure. This hybrid approach gives maintenance and operations staff a more conventional central utilities building while data containers serve as the data center itself.

Power configurations

As with conventional data centers, the electrical systems for modular data centers may be designed around various voltage levels, including 120/208 V, 480/277 V, and 400 V. Another similarity is that the modular containers can be configured or specified to comply with an overall system tier rating. The Uptime Institute defines four tier ratings (Tiers I, II, III, and IV) for electrical infrastructure topologies. These tiers define performance outcomes for dedicated infrastructure, redundant components, concurrent maintainability, and fault tolerance. Within the actual data module or container, this often comes down to the decision of whether redundant A/B feeds and internal distribution will be provided to the racks.

Figure 7: This is an example of an A/B feed using plug-and-cord connections. Courtesy: SmithGroupJJR

Externally, the power connections to the data container can take several forms. A hardwired connection, using traditional conduit, cable, and a disconnecting device, is an option similar to that of a packaged air handler. Alternatively, the modular container can make use of more flexible plug-type connections (see Figure 7).

A key electrical power difference for most data modules is the maximum power available when using a containerized data center approach. Typically, a single containerized data module will have a power limit of approximately 1 MW for data processing equipment. Based on the client's computing needs, this 1 MW limit will drive the number of individual containers needed. This limitation does not apply to other types of modular flexible data centers.
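As a simple planning aid, the approximate 1 MW per-container limit translates directly into a minimum container count. This minimal sketch assumes the limit applies uniformly and ignores redundancy, power factor, and growth margins, all of which would increase the count in practice.

```python
import math

CONTAINER_LIMIT_KW = 1000  # approximate per-container IT power limit cited above

def containers_needed(it_load_kw: float) -> int:
    """Minimum number of containerized data modules for a given critical IT load."""
    return math.ceil(it_load_kw / CONTAINER_LIMIT_KW)

print(containers_needed(3500))  # → 4
```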

Grounding and lightning protection

Figure 8: This diagram shows a typical grounding and bonding method for modular data containers. Courtesy: SmithGroupJJR

Modular data containers require grounding similar to that of any data center. Each container will come with internal grounding systems for the racks and components inside the container. However, the engineer will need to design a grounding system for the site supporting the various modular data center containers and power systems (see Figure 8). This will typically result in a grounding grid using underground ground rings and grounding bars, interconnecting all containers and systems into a low-impedance system.

Along with this ground system, a lightning protection system will most often be designed, placing air terminals on top of containers, modular structures, generators, and other equipment on the site. 


Brian A. Rener, PE, LEED AP, is an associate at SmithGroupJJR. He has more than 20 years of experience in management and engineering of new and existing facilities. He specializes in electrical power and life safety systems for commercial, industrial, governmental, and mission critical facilities, with a focus on research, laboratory, and health care buildings. He is a member of the IEEE Industry Applications Society's Chicago chapter, where he has served as an officer and chair. Brian has published or presented numerous papers on power systems for IEEE, Consulting-Specifying Engineer magazine, and Pure Power magazine.

Bill Kosik, PE, CEM, BEMP, LEED AP BD+C, is the principal data center energy technologist at HP Technology Services. He is the leader of “Moving toward Sustainability,” which focuses on the research, development, and implementation of energy-efficient and environmentally responsible design strategies for data centers. Kosik collaborates with clients, developing innovative design strategies for cooling high-density environments, and creating scalable cooling and power models.

Both authors are members of the Consulting-Specifying Engineer Editorial Advisory Board.