Designing efficient data centers

In today’s digital age, businesses depend on efficient, reliable, and secure operations, especially in mission critical facilities such as data centers. Here, engineers with experience in such structures share advice and tips for ensuring project success.

By Consulting-Specifying Engineer April 24, 2018

Respondents

Doug Bristol, PE, Electrical Engineer, Spencer Bristol, Peachtree Corners, Ga.
Terry Cleis, PE, LEED AP, Principal, Peter Basso Associates Inc., Troy, Mich.
Scott Gatewood, PE, Project Manager/Electrical Engineer/Senior Associate, DLR Group, Omaha, Neb.
Darren Keyser, Principal, kW Mission Critical Engineering, Troy, N.Y.
Bill Kosik, PE, CEM, LEED AP, BEMP, Senior Engineer – Mission Critical, exp, Chicago
Keith Lane, PE, RCDD, NTS, LC, LEED AP BD&C, President, Lane Coburn & Associates LLC, Seattle
John Peterson, PE, PMP, CEM, LEED AP BD+C, Program Manager, AECOM, Washington, D.C.
Brandon Sedgwick, PE, Vice President, Commissioning Engineer, Hood Patterson & Dewar Inc., Atlanta
Daniel S. Voss, Mission Critical Technical Specialist, M.A. Mortenson Co., Chicago


CSE: What’s the biggest trend you see today in data centers?

Doug Bristol: I’m seeing increasing emphasis on modularity and build-as-you-go to minimize the initial expense.

Terry Cleis: Designing overall systems that are focused at the rack level. These designs include targeted rack-level cooling and row containment for hot or cold aisles, and some can provide flexible levels of cooling to match the changing needs of individual racks. They include rack-mounted monitoring for temperature and power, with the associated power and cooling systems sized to cover a predetermined range of equipment. These designs also often allow raised-floor heights to be reduced or the raised floor to be eliminated entirely, freeing any underfloor space for other systems with less concern about air movement.

Scott Gatewood: Beyond reliability and durability, efficiency and scalability remain top priorities for our clients’ infrastructures. Although this is not a new revelation, the means and methods of achieving them through design and information technology (IT) hardware continue to evolve. Data center energy use (with an estimated 90 billion kWh of it wasted this year, according to the Natural Resources Defense Council) remains a key operational cost-management goal, and the tools, methods, and hardware needed to reduce energy continue advancing. The Internet of Things (IoT) has entered the data center through data center infrastructure-management (DCIM) software, sensors, analytics, and architectures that closely couple cooling and energy recovery, providing energy efficiencies rarely achievable just 6 years ago. With increased automation, managing the plant is increasingly achievable from remote locations, just as it has long been for the IT infrastructure. Scalability also remains critical to our clients, and how it is achieved continues to evolve. For businesses seeking innovative advantages through speed to market, modular approaches using pre-engineered, scaled solutions with fast deployment continue to grow. Although not for everyone or every site, more options exist to scale rapidly than ever before.

Bill Kosik: Over the past 10 years, data center design has evolved tremendously. During that maturation process, we have seen trends related to reliability, energy efficiency, security, consolidation, etc. I don’t believe there is a singular trend that is broadly applicable to data centers like the trends we’ve seen in the past. They are more subtle and more specific to the desired business outcome; data center-planning strategies must include the impacts of economical cloud solutions, stricter capital spending rules, and the ever-changing business needs of the organization. Data centers are no longer the mammoth one-size-fits-all operation consolidated from multiple locations. We see that one organization will use different models across divisions, especially when the divisions have very diverse business goals.

Keith Lane: Some of the new trends we see in the industry include striving for increased efficiency and reliability without increasing cost. Efficiency can be gained with better uninterruptible power supply (UPS) systems, proper loading on the UPS and transformers, and increased cold-aisle temperatures. Additionally, a proper evaluation of the specific critical loads and the actual required redundancies can allow some loads to be fed at 2N, some at N+1, others at N, and others with straight utility power. Matching specific levels of redundancy/reliability to actual load types in this way can significantly increase efficiency.
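
To illustrate the capacity arithmetic behind tiered redundancy, here is a minimal Python sketch comparing a blanket 2N design with the mixed 2N/N+1/N/utility approach Lane describes. The load breakdown and the 250-kW module rating are hypothetical values chosen for the example, not figures from any project.

```python
import math

# Hypothetical loads and UPS module rating, for illustration only.
UPS_MODULE_KW = 250

# (load segment, critical load in kW, redundancy scheme)
loads = [
    ("core compute",   1000, "2N"),       # fully duplicated A/B systems
    ("storage",         500, "N+1"),      # one spare module beyond need
    ("batch/dev",       500, "N"),        # just enough capacity
    ("office/support",  200, "utility"),  # straight utility power, no UPS
]

def installed_ups_kw(load_kw, scheme):
    """Installed UPS capacity required for a load under a given scheme."""
    n = math.ceil(load_kw / UPS_MODULE_KW)  # modules needed at N
    if scheme == "2N":
        return 2 * n * UPS_MODULE_KW
    if scheme == "N+1":
        return (n + 1) * UPS_MODULE_KW
    if scheme == "N":
        return n * UPS_MODULE_KW
    return 0  # utility-fed loads carry no UPS capacity

tiered = sum(installed_ups_kw(kw, s) for _, kw, s in loads)
blanket = sum(installed_ups_kw(kw, "2N") for _, kw, _ in loads)
print(f"Tiered design: {tiered} kW of UPS vs. blanket 2N: {blanket} kW")
# Tiered design: 3250 kW of UPS vs. blanket 2N: 4500 kW
```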

John Peterson: We are seeing a continuation of the many trends that have been happening in the industry over the past few years. At the cutting edge, power densities and temperature ranges are moving higher while infrastructure becomes more automated and software-defined. Modularity for scalability is more popular. Enterprises are mimicking the more agile IT environments that large cloud providers have established as the new paradigm. Edge computing continues to grow, and with that, support will be needed. Clients will be balancing bandwidth and storage to deploy in quantities that are closer to what they need.

Brandon Sedgwick: The biggest trends we see in data centers today are megasites and demand-dependent construction. In this highly competitive market, minimizing cost per megawatt of installed capacity is a priority for data center owners, which is why megasites spanning millions of square feet with hundreds of megawatts of capacity are becoming more common. Borrowing a page from just-in-time manufacturing principles, these megasites (and even smaller facilities) are designed to be built or expanded in phases in response to precontracted demand to minimize upfront capital expenditure and expedite time to market. Consequently, these phased projects often demand compressed construction schedules with unyielding deadlines driven by financial penalties for the owner. This has led to simpler or modular designs to expedite construction, maximize capacity, and reduce costs while allowing flexible redundancy and maintainable configurations to meet individual client demands.

Daniel S. Voss: We’re noticing large colocation providers with faster speed-to-market construction and implementation. There is a high level of competition between the major countrywide colocation providers to have the ideal space with all amenities (watts per square foot, raised access floor, security, appropriate cooling, etc.) ready for each new client and customer.

CSE: What trends and technologies do you think are on the horizon for such projects?

Kosik: Information and communications technology (ICT), particularly high-end enterprise servers, continues to evolve by increasing the computing power while simultaneously reducing energy use. The robust workloads that run on these machines are designed to take advantage of the increased productivity, so even though the computing efficiency has increased, the overall power consumption also increases. This leads to a greater electrical-demand density (watts per square foot) across the data center and a greater electrical density at the server-cabinet level (watts per cabinet).
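
As a back-of-the-envelope link between the two density metrics Kosik mentions, the short Python sketch below converts watts per cabinet into watts per square foot. Both input values are assumptions chosen for illustration, not figures from the article.

```python
# Convert cabinet-level density to floor-level density.
# Both inputs are assumed values, for illustration only.
watts_per_cabinet = 10_000   # a high-density cabinet load (10 kW)
sqft_per_cabinet = 25        # cabinet footprint plus its share of aisles

density_w_per_sqft = watts_per_cabinet / sqft_per_cabinet
print(f"{density_w_per_sqft:.0f} W/sq ft of white space")  # 400 W/sq ft

# Doubling cabinet power without changing the layout doubles the
# electrical-demand density the building's power and cooling must support.
```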

Gatewood: In addition to the plant infrastructure, we tend to watch emerging IT infrastructure trends for their potential effects on the future of the physical environment. Here, the landscape continues its rapid change. Beyond the megatrends of cloud/hybrid, edge computing, and security, we see changes in storage and networking technologies that will alter the personality of the white space, with more storage equipment. Driven by the vastly larger amounts of data produced by IoT and video appliances, combined with falling costs and rising performance, data center and edge storage will explode and change the IT footprint of the white space.

Voss: There are really two trends. The first is repurposing existing heavy industrial buildings as data centers; to increase speed to market, many owners and constructors are eyeing containers and containerization for the electrical, mechanical, and IT disciplines. The second is building hyperscale data centers with 20 MW or more of critical IT computing power. Many large colocation providers are constructing multibuilding campuses with total campus capacity exceeding 50 MW of critical IT compute power.

Cleis: We’re seeing targeted cooling with more options, including water and refrigerant at the rack. Better options for the piping distribution associated with these systems will continue to evolve, making ongoing maintenance and future changes better suited to taking place in a data center environment. We have owners asking for more modular designs and for designs that prevent issues, such as software/firmware problems, that can ultimately shut down entire systems. Approaches include smaller UPS systems or using multiple UPS manufacturers. Smaller systems can be located closer to the loads and allow equipment upgrades, or replacements after failures, without affecting the entire facility. Replacement and repairs of smaller components can also help reduce ongoing maintenance and repair costs.

Sedgwick: One trend we are seeing more frequently is that IT is leveraging methods, such as virtualization, that can be used to “shift” server processes from one location to another in the event of a failure, to offset physical power-delivery system redundancy. This allows engineers to streamline infrastructure design by reducing power transformations between incoming sources and the load, simplifying switching automation, and minimizing—or even eliminating—UPS and backup generation. Simpler power-delivery systems consume less square footage, are faster to build, and free up more of a facility’s footprint for white space.

Peterson: Liquid and immersion cooling are likely to grow in the coming years. As power densities increase and the cost and implementation challenges are solved, liquid and immersion cooling practices can mature, as efficiency is still a prime factor for operations. Surveys have shown that enterprise businesses will continue or expand their investment in hybrid or cloud solutions. This indicates that the software-defined data center market is still growing and that it won’t matter where the data centers are or who owns and operates them. As DCIM is implemented in more comprehensive ways, we’ll see improvements that are a step or two away from full automation.

Bristol: Lithium-ion batteries appear to be ready for prime time.

CSE: What are engineers doing to ensure data centers—new and in existing structures—meet the challenges associated with emerging technologies?

Cleis: Engineers and designers should always keep an open mind and spend time researching and reading to stay informed about evolving design and system innovations. I find that good ideas very often come from owners and end users during the early programming stages of the design process. Many owners and end users have a solid technical background and a historic understanding of how data centers operate. Most of these people spend a lot of time working in data centers, which enables them to bring an insightful perspective. They are able to inform us what systems are reliable and have worked well for them in the past, and what systems have given them problems. They also provide the design team with ideas for how to make the systems function better.

Voss: With increasing IT power densities, cooling and power can become limiting factors in optimizing the built environment. Additionally, data center customers use varying electrical distribution topologies, and facilities need to be designed to accommodate these different needs. A goal is to create a flexible design that can evolve with differing customer requirements and emerging technologies. These designs also need to accept modular construction with traditional building materials and methods and provide the necessary landing/connection points for containers.

Lane: It is incumbent on engineers who provide design and engineering services for mission critical facilities to keep up with technology and with the latest data center trends. Our company has vendors present the latest technology to us; we belong to several professional organizations, read numerous industry magazines, and conduct extensive independent research on codes, design standards, and emerging technologies.

Peterson: In most cases of data centers up to 20 years old, revisions to the existing data center are possible to allow increases in density. Specialized cooling systems have allowed for increased density, which is often in localized areas of a legacy data center. With more choices of adaptable air segregation and other means to decrease bypass air, older data centers can control hot spots and better serve future needs. For new designs, data center layouts are often being coordinated for specific densities that work with their common operation within a certain power, space, and cooling ratio. Some new facilities are aiming for the flexibility of more direct liquid (water) cooling and are willing to invest in the upfront coordination and installations to meet their future needs.

Gatewood: Emerging technologies are difficult to predict accurately. I recall the 1995 white paper preceding the creation of the Uptime Institute predicting 500 W/sq ft white space by the early 2000s. Predictably, Moore’s Law did produce exponential performance increases, but not exponential energy consumption. The change too often overlooked, however, is the ever-increasing weight of the product footprint. An existing structure can be improved to meet the more-than-250-psf loads that today’s white spaces demand. Future technologies may incorporate liquid cooling, adding even greater weight to existing and new data center structures.

CSE: Tell us about a recent project you’ve worked on that’s innovative, large-scale, or otherwise noteworthy. In your description, please include significant details—location, systems your team engineered, key players, interesting challenges or obstacles, etc.

Darren Keyser: While all projects present unique challenges, a recent multistory data center on the West Coast was particularly demanding for the fuel system design. The client’s goal of maximizing the amount of leasable white space meant there was little room for the generator plant, which needed 48 hours of fuel storage. Adding to the challenge were unfavorable soil conditions for underground fuel tanks. With limited room at grade, the tanks needed to be vertical. In addition, the facility is in a seismic zone, which added to the complexity of the tank supports. The ten 3-MW engines were placed on the roof of the facility, adding to the intricacy of the fuel-delivery system. And even though the piping was abovegrade welded steel, the client wanted to manage the risk of a fuel leak and decided to exceed code by implementing a double-wall piping system.

Gatewood: While new data centers are more straightforward, renovating existing data center environments is not for the faint of heart. In line with the importance of proper structural design, we are wrapping up the reconstruction of a new data center environment within an existing 50,000-sq-ft data center while simultaneously carrying out a complete retrofit of the existing footings and structural steel beneath an active data center. Careful sequencing of the work, a well-thought-out method of procedures, and change-management controls are allowing a space that was designed to handle 90 psf to carry a new, modern data center without affecting the existing operations.

Sedgwick: Iron Mountain’s 200-acre campus in western Pennsylvania is one of the world’s most secure colocation facilities. Located 220 ft underground in an abandoned limestone mine, it is completely powered by renewable energy and is geothermally cooled by an underground lake that provides naturally chilled, recycled water. This "chiller-free" cooling saves millions of gallons of water each year. During two expansions of this 1.8 million-sq-ft Tier IV data center, which included repurposing single generators to a new parallel-generator plant, we commissioned UPS modules, power distribution units (PDUs), and the associated electrical power-monitoring system (EPMS) infrastructure.

Cleis: We are in the process of designing a moderately sized data center to fit inside an existing vacant building. The owner requested that the design include smaller-scale equipment configured in a modular design to allow for easier maintenance and equipment replacement. This includes smaller UPS units, PDUs, and nonparalleled generators. Providing levels of redundancy using these smaller pieces of equipment, without paralleling the generators, proved to be a challenge. The current design contains modules that are based on a predetermined generator size; the overall generator system is backed up with transfer equipment and an extra generator unit in the event a single generator fails.

Bristol: We recently helped a large corporate enterprise data center operator replace a legacy UPS and switchgear dating from 1992, using rental units. The data center facility-management team, corporate IT, data center operators, commissioning agents, UPS and switchgear vendors, and generator/switchgear-maintenance contractors all contributed and partnered to successfully implement a no-outage, seamless cutover for the 2N, 5-MW system.

Voss: The QTS Chicago data center fits that description to a tee. QTS leveraged this former Sun-Times printing facility’s robust base structure and efficient layout to support its repurposing as an industry-leading data center. The innovative conversion is a modular design that populates the structure from east to west as more clients and tenants occupy the data hall space. We are currently constructing a 125-MW substation for Commonwealth Edison onsite, which will not only provide power to the existing 470,000-sq-ft building but also have sufficient capacity for expansion on the same site.

CSE: Each type of project presents unique challenges—what types of challenges do you encounter on projects for data centers that you might not face on other types of structures?

Sedgwick: In data centers, downtime is not an option. Period. As a commissioning firm, this challenge presents itself in different ways depending on whether you are building a new facility or modifying an existing live site. In a new facility, building a reliable system is the primary focus throughout the entire project, and commissioning verifies for the owner that once the system goes live, it won’t go down. However, as project schedules are constantly under pressure to be expedited, or as issues cause time frames to slip, it’s usually the commissioning schedule that is shortened to accommodate delays upstream. The stakes are high when equipment needs to be added, capacity expanded, or controls upgraded in a live facility. Working safely while maintaining power to critical components requires scrutiny above and beyond that of new construction to prevent injury, property damage, and service disruptions. The commissioning agent must be knowledgeable enough to anticipate unintended consequences of planned actions and must thoroughly understand operational sequences and system responses to mitigate unnecessary risks to personnel and property. Discernment is crucial when determining what level of commissioning is required. Commissioning specifications for a live site often duplicate those developed for the original installation; the commissioning authority may suggest specification modifications to align the commissioning effort and approach with functional-verification requirements and to minimize operational impact. In some cases, the live-site environment may warrant more testing or different methods, or the scope may need to be reduced to mitigate risk.

Lane: Most facilities and general buildings do not draw consistent power over a 24-hour period; data centers and other mission critical facilities draw power at a high load factor. Duct banks feeding a data center with a high load factor can overheat, so Neher-McGrath duct-bank heating calculations are required to ensure the conductors feeding the facility are adequately sized.
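
For readers unfamiliar with the term, load factor is simply average demand divided by peak demand over a period. The Python sketch below, using made-up hourly profiles, shows why a data center’s load factor sits near 1.0 while a typical office building’s does not, which is what drives the duct-bank heating concern Lane describes.

```python
# Load factor = average demand / peak demand over a period.
# The hourly profiles below are invented, for illustration only.

def load_factor(hourly_kw):
    return (sum(hourly_kw) / len(hourly_kw)) / max(hourly_kw)

office = [200] * 8 + [900] * 10 + [200] * 6   # daytime-peaked, 24 hours
data_center = [4800, 5000] * 12               # near-constant draw, 24 hours

print(f"Office load factor:      {load_factor(office):.2f}")       # 0.55
print(f"Data center load factor: {load_factor(data_center):.2f}")  # 0.98
```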

Bristol: Mostly, the challenge is the relentless search for maximum reliability and concurrent maintainability. These high-performance buildings are required to operate essentially at full throttle all the time, even during maintenance, so multiple service paths for all utilities (cooling, power, air, etc.) are essential.

Voss: A greater level of commissioning for redundant electrical paths and mechanical equipment requires a review of each system early in the project, including what components make up the system and what schedule must be met to hit the proper turnover date. Understanding each item to be commissioned, and how it interacts with other electrical and mechanical equipment, is critical, so the sequence of operations and the various levels of commissioning must be actively considered during preconstruction and throughout the entire project.

Peterson: Data centers are made for silicon-based life, not carbon-based. Once owners and operators understand that modern IT equipment can withstand high air-inlet temperatures, they can start to gain monumentally through cooling efficiency.

CSE: Is your team using BIM in conjunction with the architects, trades, and owners to design a project? Describe an instance in which you’ve turned over the BIM model to the owner for long-term operations and maintenance (O&M) or measurement and verification (M&V).

Peterson: We perform all of our designs using BIM. Through practice, we are able to incorporate more information in our models to reduce the number of coordination errors that lead to changes in the field. Owners have seen the benefits over time, as new additions and changes to the designs can be shared with consultants, added to the BIM, and then returned. Third-party construction-management groups have then taken the model and added updates as necessary throughout the process, including input from commissioning and controls changes.

Keyser: Absolutely. This is key. Our firm provides a complete mechanical, electrical, plumbing, and fire protection (MEP/FP) consulting engineering model. Our goal is to provide a clash-free model, including a complete fire suppression system layout, when issued for construction. Because we dedicate so much time and effort working with the client to meet their needs, we will often “own” the model for the initial phase of construction coordination. This ensures all those conversations and decisions made during design—prior to the construction team being brought on board—are maintained. This also makes the construction process more efficient. The more efficiently the entire design-build team works together, the better and quicker the construction process. Speed to market is a huge driver for our clients.

Bristol: Yes, we use BIM with mission critical projects during the design. During construction, we share the model with contractors and subcontractors to fine-tune their systems, then the record (sometimes referred to as “as-built”) model is turned over to the owner not only for long-term operations and maintenance but also for use by future design teams when the inevitable renovations or expansions occur.

Voss: Absolutely, especially for data centers. Our firm uses BIM for all of our projects throughout the country. This is mandatory for repurposing existing buildings; oftentimes, the amount of space available to install the overhead infrastructure is less than in a purpose-built data center structure. On a recent data center, our company leveraged BIM to support the graphics for the building management system (BMS). This not only saved time and money in creating new BMS graphics, but it also provided the customer with a far more accurate representation of their facility.

Lane: A majority of the data center projects we are involved in use Revit.

CSE: How are engineers designing data centers to keep initial costs down while also offering appealing features, complying with relevant codes, and meeting client needs?

Cleis: One of our jobs as design engineers is to help owners understand the risks, benefits, and costs associated with different levels of redundancy for the various systems that make up an overall data center facility. Hybrid designs with varying levels of redundancy between different systems are not uncommon, particularly for smaller and midsize systems. Our job is to educate owners and help them understand their options, but ultimately to design a facility that meets their needs and works within their budget. It may sometimes appear that we are underdesigning a certain system in a facility, but in fact, we are establishing a lower overall baseline of design redundancy for the facility. Then, with the owner’s input, we design some specific systems to higher levels of reliability to address historic problems or known weaknesses for that particular client or facility.

Gatewood: The bulk of a data center’s initial cost lies in the electrical and mechanical systems needed to provide 100 to 200 times the power demands of an average office building. Add to this the redundancy and resilience required so that a system failure or service outage, say a fan motor, does not result in an outage of the IT work product. This is where the high initial costs come from. However, many operations can grow over time, which permits scalable infrastructure that allows our clients to grow their plant as their IT needs grow. This yields the best initial cost while allowing them to expand quickly as their needs change.

Keyser: Predicting the future is impossible, yet whether it’s colocation or enterprise, the industry needs to plan for it. We create a master plan for a facility, yet only build to meet the initial needs of our client and future-proof the rest as best we can. The initial build consists of space fit for immediate deployment while the balance may be shell space. Implementing a container solution not only speeds up construction, it also allows the client to defer the purchase and installation of expensive infrastructure until the IT loads require expansion. It’s a tough balance, which is why master planning is so crucial. While there are systems and equipment that can wait, certain systems in future spaces of the facility must be installed from day one to minimize disruption of the active data center.

Lane: This is the real challenge and the mark of a good engineer. The engineer must dig deep into the owner’s basis of design and work closely with the owner to understand where some loads need high reliability and where lower reliability and the associated redundancies can be removed. Also, right-sizing the equipment will save money upfront and increase efficiency. Always design toward constructibility and work hand-in-hand with the electrical contractor; using BIM and asking for input from the contractors will save time and money during construction. We are seeing ever-evolving code changes with respect to arc flash calculations, labeling, and mitigation, and it is critical to ensure that the available fault current at the rack-mounted PDU does not exceed the equipment rating. As a firm, we provide the design of mission critical facilities as well as fault-current and arc flash calculations and selective-coordination studies, and we always design toward reducing cost and arc and fault-current hazards during the design process.

Voss: It is a true balancing act to arrive at an optimal solution that meets or exceeds the needs of the customer within the established budget. We work closely with engineers and architects to perform detailed cost-benefit analysis to ensure features and requirements are evaluated holistically. Using modular construction techniques and understanding the client in great detail will help the design team come up with innovative ideas and opportunities.

Bristol: Most designs now include a modularity strategy so owners can build (and spend) as they go. Modularity almost always includes a roadmap to the “end game” and has to include strategies to minimize the impact on the existing live data center as the facility is built out. For example, if the end state calls for 10 MW of generator capacity at N+1 but only the first two units are being installed on day one, then all exterior yard equipment—pads, conduit rough-ins, etc.—would be included on day one so that adding generators is almost plug-and-play. Outdoor cooling equipment, interior gear, UPS, batteries, etc. would be handled in a similar way.
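
A small Python sketch of that build-as-you-go arithmetic under N+1 follows; the 2.5-MW unit size and the phase loads are assumptions chosen for illustration, not figures from Bristol’s example.

```python
import math

# Assumed generator unit size, for illustration only.
UNIT_MW = 2.5

def units_at_n_plus_1(load_mw):
    """Generators required to serve a load at N+1 redundancy."""
    return math.ceil(load_mw / UNIT_MW) + 1  # N units plus one spare

# Day one serves only the first phase, but pads, conduit rough-ins, and
# yard space are built for the end state so later units are near plug-and-play.
for phase_load in (2.5, 5.0, 7.5, 10.0):
    print(f"{phase_load:>4} MW load -> {units_at_n_plus_1(phase_load)} generators")
# 2.5 MW -> 2, 5.0 MW -> 3, 7.5 MW -> 4, 10.0 MW -> 5
```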

Peterson: They’re doing it by standardizing on typical equipment sizes and modularity; vendors have been able to bring down costs considerably. Contractors also see savings with typical equipment, and installations gain speed as the project progresses.

CSE: High-performance design strategies have been shown to have an impact on the performance of the building and its occupants. What value-add items are you adding to data centers to make the buildings perform at a higher level?

Voss: By focusing on energy-efficient enclosure systems and operational infrastructure systems (lighting, office HVAC) we can help reduce the noncritical energy usage of the data center. This helps reduce the power-usage effectiveness (PUE) value, which not only saves our customers money but also improves the marketability of their asset.
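
PUE is defined as total facility energy divided by IT equipment energy, so trimming noncritical load moves it directly. Here is a quick Python sketch; all kW figures are made up for the example.

```python
# PUE = total facility energy / IT equipment energy.
# All kW figures below are made up, for illustration only.
it_kw = 1000
cooling_kw = 350
noncritical_kw = 150   # lighting, office HVAC, and similar loads

pue_before = (it_kw + cooling_kw + noncritical_kw) / it_kw
pue_after = (it_kw + cooling_kw + noncritical_kw * 0.5) / it_kw

print(f"PUE before: {pue_before:.2f}")                         # 1.50
print(f"PUE after halving noncritical load: {pue_after:.2f}")  # 1.42
```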

Gatewood: A value-added performance item many of our clients appreciate is adiabatic humidification techniques that save substantial amounts of energy and water while also improving humidity control.

CSE: We’ve seen severe weather devastate businesses in many regions of the U.S. How are you working to safeguard clients’ information and systems against extreme weather conditions?

Peterson: We have performed more risk assessment studies for data center clients over the past year than in previous years. While this often starts based on severe-weather outlooks, we examine redundancy on fiber, power, and other utilities. Clients also have been aiming to consolidate data centers to certain regions to reduce latency, but using multiple sites across that region to avoid loss of connectivity. A higher degree of care is taken with data centers, as they most often serve missions that are more critical than other building types.

Cleis: When designing a facility, the team should always address known factors regarding potential natural disasters in a geographic region when searching for a site for a new facility. It’s also common to include similar concerns when selecting spaces within existing facilities when the data center will only occupy part of the building. Avoiding windows, potential roof leaks, and flooding are common requirements. We often try to select an area in a building that has easy access to MEP systems, while also avoiding exterior walls, top floors, and basements. Typically, we try to avoid areas that are too public and select areas that are easy to secure and have limited access. It’s also important to select an area that has access paths that will allow large equipment to be moved.

Gatewood: Many clients understand the ever-escalating cost of downtime and the consequences of disaster recovery following a complete loss of equipment and staff. The Federal Emergency Management Agency’s FEMA 361, Safe Rooms for Tornadoes and Hurricanes: Guidance for Community and Residential Safe Rooms, FEMA 426, Reference Manual to Mitigate Potential Terrorist Attacks Against Buildings, and FEMA P-431, Tornado Protection: Selecting Refuge Area in Buildings, along with FM Global 1-40 offer national reference standards for durability and survivability by design. Many jurisdictions subject to natural hazards have created their own set of enforceable codes that draw from some of these standards. It’s critical that the design team understands the client’s risk tolerances and can communicate the costs of physical durability. Surprisingly, it is typically not as costly as one might think when compared with the project cost and the value of the assets within.

Lane: We typically provide an NFPA 780: Standard for the Installation of Lightning Protection Systems lightning-protection analysis for mission critical systems. A majority of data centers are designed with a lightning-protection and/or a lightning-mitigation system.

Voss: Fortunately for projects in the greater Chicagoland area, the worst weather we have ranges from subzero temperatures to heavy snows/blizzards to high winds with a lot of rain. Keeping the building out of flood plains, and for certain clients, constructing an enclosure that can withstand a tornado rating of EF-4 (207-mph winds) are issues we’ve faced.

Bristol: Depending on the risk of a shutdown to a given data center, strategies to “stormproof” the building are popular, such as minimizing the possibilities of projectiles impacting the building by eliminating or reducing the amount of roof-mounted and outdoor-mounted equipment. Another strategy is having not only redundant systems, but also a completely redundant site for disaster recovery.

CSE: Interest in cloud computing is on the rise—according to your experience and observations, has that had a visible impact on current/future data center projects?

Voss: Absolutely. Many corporations are moving from onsite computing facilities to cloud-based colocation data centers. The quantity of new enterprise data centers is decreasing while the quantity of colocation sites is increasing at a rapid pace.

Peterson: We’ve seen a lot of growth from the main cloud providers, and industry analysts expect that this growth will continue for at least the next 10 years. Since most providers use a typical format for their buildings, the structures themselves haven’t changed a lot, even under the enormous schedule pressure of meeting cloud demand. Over time, the trends may shift to lower costs and yield higher returns for the shareholders that are investing now.

Gatewood: Cloud computing’s visible impact on current and future data centers clearly reveals itself in the enterprise client’s white space. The combination of virtual machines and the cloud have slowed the growth of rack deployments. Clearly, each client’s service and application set will affect cloud strategy. In some cases, growth has stopped as applications move to the cloud. 

Sedgwick: We live in an on-demand, instant-gratification world; cloud computing enables users and companies to take advantage of a great deal of computing power and storage without massive capital outlay for systems and infrastructure. This unprecedented access, coupled with current and emerging data-intensive applications (e.g., streaming entertainment services, ever-present mobile devices, artificial intelligence, home automation, autonomous vehicles), is driving demand at an accelerated pace. As a result, we’ve seen a demonstrable uptick in construction as wholesale and retail data center providers clamor for market share. Additionally, to remain competitive, data center operators are paying more attention to operational efficiency, resource utilization, streamlined data processing, and other functional strategies to reduce costs and improve flexibility and scalability without sacrificing reliability.

CSE: How do data center project requirements vary across the U.S. or globally?

Keyser: The local environment has a huge impact on the mechanical solution. Questions to ask: Is free cooling an option? What are the local utility costs for water versus electricity? Questions like these are key elements that will drive the design.

Kosik: There are many variations, primarily due to geo-specific implications including climate and weather, impacts on cooling system efficiency, severe weather events, water and electricity dependability, equipment and parts availability, sophistication and capability of local operational teams, prevalence and magnitude of external security threats, local customs, traditional design approaches, and codes/standards. It is important to be cognizant of these issues before planning a new data center facility.

Voss: Selecting a data center site normally goes through many steps to reach a potential final location. Climate, geography, an adequately trained workforce, state and local concessions, and constructability play a large part in the selected location. The chosen location, in turn, dictates which building codes, electrical codes, and other applicable codes affect the data center design. The actual owner requirements will most likely change very little, as the project design is created from the owner’s basis of design.

Lane: We have provided the design and engineering for data centers across the globe and have seen many variations in design. Some of these variations include serving utility voltage, server voltage, lightning protection, grounding requirements, surge and transient protection, and others. Additionally, energy cost can significantly drive the design. In areas of the world where energy costs are higher, efficiency is very critical. In areas of high lightning strike density, lightning protection and/or mitigation is a must.