Data Centers

Data centers achieve a new level of high-tech

Designing solutions for data center clients — whether hyperscale or colocation facilities — requires advanced engineering knowledge

By Consulting-Specifying Engineer April 21, 2020
Courtesy: SmithGroup

Respondents

  • Bill Kosik, PE, CEM, BEMP, senior energy engineer, DNV GL Technical Services, Oak Brook, Ill.
  • John Peterson, PE, PMP, CEM, LEED AP BD+C, mission critical leader, DLR Group, Washington, D.C.
  • Brian Rener, PE, LEED AP, principal, mission critical leader, SmithGroup, Chicago
  • Mike Starr, PE, electrical project engineer, Affiliated Engineers Inc., Madison, Wis.
  • Tarek G. Tousson, PE, principal electrical engineer/project manager, Stanley Consultants, Austin, Texas
  • Saahil Tumber, PE, HBDP, LEED AP, technical authority, ESD, Chicago
  • John Gregory Williams, PE, CEng, MEng, MIMechE, vice president, Harris, Oakland, Calif.
Top row: Bill Kosik, PE, CEM, BEMP, DNV GL Technical Services; John Peterson, PE, PMP, CEM, LEED AP BD+C, DLR Group; Brian Rener, PE, LEED AP, SmithGroup; Mike Starr, PE, Affiliated Engineers Inc. Bottom row: Tarek G. Tousson, PE, Stanley Consultants; Saahil Tumber, PE, HBDP, LEED AP, ESD; John Gregory Williams, PE, CEng, MEng, MIMechE, Harris. Courtesy: DNV GL Technical Services, DLR Group, SmithGroup, Affiliated Engineers, Stanley Consultants, ESD, Harris

CSE: What’s the current trend in data centers?

John Peterson: With some owners finding out exactly what they like and what works for their deployments, the current trend beyond reliability and efficiency is the ability to scale to match the needs of the load today, yet grow quickly when needed tomorrow. Because the need for servers and data center space can grow gradually over years or suddenly over weeks, the ability to deploy equipment on an as-needed basis has been the latest driver to reduce costs.

Brian Rener: The current trend is to employ hot-aisle containment strategies for rack-based cooling with mechanical equipment located outside of the white space. Hot-aisle containment allows supply air temperatures to be elevated just below room temperatures, increasing efficiency and extending the use of economizers to lower cooling energy.

Mike Starr: Several of my current projects include high-performance computing. Today’s supercomputer ratings range from 300 kilowatts to 15 megawatts (1.0- to 200-petaflop machines). The HPC systems I have encountered use 480Y/277 volts (4W+G), which is a coordination point for uninterruptible power supply systems due to the grounded conductor. These power connections are single-ended, not dual-corded.
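
As a side note on the 480Y/277-volt designation Starr mentions: in a wye (star) system, the line-to-line voltage is the line-to-neutral voltage multiplied by the square root of 3, which is why those two numbers travel together. A minimal sketch:

```python
import math

def line_to_line(v_line_to_neutral: float) -> float:
    """Line-to-line voltage of a three-phase wye (star) system.

    V_LL = V_LN * sqrt(3); 277 V line-to-neutral corresponds
    to the nominal 480 V line-to-line rating.
    """
    return v_line_to_neutral * math.sqrt(3)

print(round(line_to_line(277), 1))  # ~479.8 V, nominally 480 V
```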

Most HPC cooling applications use liquid cooling direct to the computer chip. The project and owner teams must determine whether circulating pumps may wait for on-site backup generation or whether an uninterruptible source is required for those motor loads. Introducing liquid to the chip also makes for a more complex load bank plan during commissioning, as water-cooled load banks are needed.

A continued trend is a focus on modular construction. Data centers are using methods such as installing related equipment on steel skids (platforms) and building modular electrical rooms for mobile transport to the final job site. This process leads to more compact construction schedules and greater quality control. Hyperscale companies are deploying multiple power planes of 20 to 80 megawatts or more per year. Considering the extent of artificial intelligence data center market demand on the horizon, I suspect this trend will only grow.

Tarek G. Tousson: One of the growing trends in data centers is the high demand for HPC. HPC is the ability to process data and perform complex calculations at high speeds. HPC solutions are known as supercomputers. A supercomputer comprises thousands of compute nodes that work simultaneously in parallel to complete multiple complex operations. HPC solutions are used across a wide range of industries and academic research. Technologies such as the “internet of things,” AI, machine learning and 3D imaging are using HPC platforms to develop cures for cancer, create new materials and teach autonomous vehicles.

Saahil Tumber: Hyperscale data centers are continuing with their massive scale of deployment. Information technology loads exceeding 50 megawatts per building and 300 megawatts per campus have become the norm. Owners are cognizant of the significant electric and water consumption of these data centers and are focusing on sustainability and means of reducing their environmental footprint. They are also evaluating their project delivery processes with the aim of eliminating bottlenecks and improving efficiencies. Colocation data centers are striving to standardize design across their portfolios to reduce costs and improve delivery schedules. They are also incorporating features to cater to hyperscale tenants, such as offering the ability to accommodate alternate mechanical and electrical technologies preferred by prospective tenants and to support power densities in excess of the baseline design.

John Gregory Williams: For years, everything about mechanical, electrical and plumbing design in construction has been changing with an eye to getting more done with less. With data center construction on a near exponential rise, the scarcity of resources is driving designs to be more efficient. With the coming of printed circuit boards within servers, the cooling requirements have become less stringent and as such, new cooling techniques are being implemented.

CSE: What future trends should engineers and designers expect for such projects?

Bill Kosik: The IT industry continues to be on the leading edge in the development of tools, processes and hardware used by a wide array of enterprises. Judging by the current rate of innovation, IT companies are required to maintain a rapid pace to stay competitive in the marketplace.

For example, the ever-expanding capabilities of IT systems, combined with the technological needs of end-users, fueled the growth of the IoT, where computer-based items from the residential, commercial and industrial sectors are connected to the internet, increasing network traffic and creating a growing need for computing and storage capabilities. Devices on the IoT are typically stand-alone and must send and receive information from servers, storage devices and networking gear housed in a remote data center. The proliferation of the IoT increases the demand for data centers.

True flexibility and responsiveness in power and cooling systems, once considered a “nice-to-have” feature in data centers, has proven to be a “must-have” in state-of-the-art data centers. Each facility has unique characteristics that will inform the power and cooling design: edge data centers must have very compact, low energy-use cooling systems; hyperscale data centers have a homogeneous, high-density server cabinet layout; an enterprise data center will have many different types of compute, storage and networking equipment and requires a mix of different cooling solutions. These facilities must be flexible and responsive to new computer technology.

Flexibility means that as computer technology evolves, the power and cooling infrastructure is able to adapt with minimal physical changes, especially changes that may have a negative impact on the IT systems. Responsiveness is being able to adapt quickly to accommodate changes in the computer technologies.

Tumber: The data center market will continue to grow at an impressive pace due to drivers such as AI, virtual reality, IoT, 5G wireless, social media and others. In the near future, I anticipate liquid cooling will gain traction at hyperscale data centers and in high-performance compute applications. Increased emphasis on data privacy, security, climate change and regulations will also impact future data centers. AI is still in a nascent stage but offers tremendous potential when applied to data centers. Data centers could become self-reliant entities that are in complete control of all underlying infrastructure, with human intervention minimized or potentially eliminated.

Williams: Evaporative cooling is here to stay. The next step in evaporative cooling technology is being developed by Nortek Air Solutions, among others, for some of the key data center operators. Called the StatePoint Liquid Cooling system, it is purported to reduce the water consumption of a direct evaporative cooling system by between 20% and 90%, depending on the climate. The SPLC requires less water than a typical indirect cooling system because it uses air to cool water instead of using water to cool air.

On the electrical side of the equation, mitigating power transmission losses through on-site power generation and through site locations adjacent to major power transmission lines is another emerging trend. These energy-efficient design choices are essential for data center owners to be able to build new sites in areas like Northern Virginia, where power and water are in short supply.

Tousson: Engineers and designers should expect two trends for such projects: on-site generation and direct liquid cooling. At peak operation, supercomputers can consume up to 20 megawatts. This type of large load demand can be problematic for utility grid operation during peak periods. The rapid growth of demand for supercomputers will require incorporating on-site generation in the design to mitigate the shortage of utility grid supply during peaks. Supercomputers also have high power density per rack. The excessive heat this density produces exceeds the limits of traditional forced-air cooling. Direct liquid cooling at the compute node level provides an effective and efficient solution for supercomputer cooling.

Starr: Anticipate continued focus on arc flash mitigation. Based on my current project trends, I suspect lithium-ion batteries will become the energy storage of choice for UPS systems. Except for critical networks, hyperscale customers are increasingly relying on software mirroring of their data center sites rather than including UPS systems of scale.

I would not be surprised if these same customers started eliminating standby generators from their projects as well. Fortunately, friendly market sectors, such as aerospace, are creating demand for higher capacity technologies. Electric propulsion airplanes, for example, will bring many advances in the energy storage sector. The renewables and automotive industries are also spawning new technologies for high-voltage direct current applications. Graphics processing units and central processing units with higher thermal design power ratings will be a continued trend, which will likely result in increased power density for data racks.

Higher-density racks require innovative cooling solutions, such as ZutaCore’s HyperCool2 and other direct-to-chip liquid cooling technologies. These technologies have applications in both new and retrofit direct-to-chip data center cooling projects.

Peterson: The volume of data being added every day, along with the need to process it in short timeframes, is the continuing trend that is still ramping up. Some data centers may begin to focus on processing and networking at key interconnection hubs, while others aim to provide medium- to long-term storage with much more efficient storage media. Each of these has a different demand for power, space and cooling, and successful designers will dive into the details to ensure they are matching these needs closely.

Rener: Rack densities will continue to rise. The use of water-cooled servers is becoming more common for supercomputing but has not made its way down to the enterprise server level. Before that happens, the likely trend will be to operate air-cooled racks at elevated temperature differentials, moving from the current 20°F to 25°F up to 40°F.
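
The airflow savings behind that shift follow from the standard-air sensible heat equation, Q (Btu/h) = 1.08 × CFM × ΔT (°F). The 10-kilowatt rack below is a hypothetical example, but it shows why doubling the temperature differential halves the air the fans must move:

```python
def required_airflow_cfm(load_kw: float, delta_t_f: float) -> float:
    """Airflow needed to remove a sensible heat load at a given
    air-side temperature rise, using the standard-air relation
    Q[Btu/h] = 1.08 * CFM * delta-T[degrees F]."""
    btu_per_hour = load_kw * 3412.0  # 1 kW = 3,412 Btu/h
    return btu_per_hour / (1.08 * delta_t_f)

# Hypothetical 10 kW air-cooled rack:
print(round(required_airflow_cfm(10, 20)))  # ~1,580 CFM at a 20 degree F rise
print(round(required_airflow_cfm(10, 40)))  # ~790 CFM at a 40 degree F rise
```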

CSE: How is the growth of cloud-based storage impacting data center projects?

Peterson: Data centers are needed not only for the data pushed to the cloud by users, but also for the many applications that interact with each other. Colocation and hyperscale data centers are responding quickly to the growth of the cloud and finding spaces that support their needs, with a mix of applications that must be localized and others that can be anywhere. The projects are becoming more focused on determining the split between those local needs, supporting only what is needed there while relying on large data center projects to support the aggregated demand regionally.

Williams: In our market in northern Virginia, cloud-based storage providers are building more data centers than any other type of data center provider. The major cloud-based storage providers are all in competition to offer their customers the lowest storage plan costs. They do that by lowering their construction and operating costs.

These customers are commonly referred to in our market as the “Walmarts of cloud computing.” This means they must find ways to be more efficient and charge less while providing more. They achieve this through more efficient cooling techniques as well as new more efficient server designs. New servers installed in some of these facilities are bare-bones and are stripped of anything not essential for efficiency. Plastic trim pieces and mounting hardware have been removed to lower the cost. These simple alterations also create more free area around the server for larger heat sinks and fans to be installed. These changes result in much better air flow, which requires less energy to keep them cool.

Starr: Instead of costly preventive maintenance programs, many companies are migrating to the cloud. With cloud migration prevalent, we are seeing a slowdown in enterprise data center project work, except for site risk assessment type reports. Colocation facilities housing cloud-storage and applications continue to require innovative maintenance-focused designs to meet contractual uptime agreements for their customers.

CSE: What types of challenges do you encounter for these types of projects that you might not face on other types of structures?

Tumber: Data center projects are unique. They typically involve varied stakeholders, aggressive design and construction schedules, massive scale of deployment, extensive infrastructure, significant electric and water consumption, large capital and operating expenditure, increased scrutiny from the authority having jurisdiction and the public, susceptibility to regulations, etc. Data centers are also graded on unique criteria such as availability, capacity, resiliency, power usage effectiveness, water usage effectiveness, flexibility, adaptability, time to market, scalability, cost and more. While all facets are not necessarily challenging, they do require a different mindset.
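
Two of the grading metrics Tumber lists, power usage effectiveness and water usage effectiveness, are simple annual ratios: PUE, per The Green Grid, is total facility energy over IT energy, and WUE is site water use in liters over IT energy in kilowatt-hours. The figures below are invented purely for illustration:

```python
def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """Power usage effectiveness: total facility energy / IT energy.
    1.0 is the theoretical ideal; lower is better."""
    return total_facility_kwh / it_kwh

def wue(site_water_liters: float, it_kwh: float) -> float:
    """Water usage effectiveness: liters of site water per kWh of IT energy."""
    return site_water_liters / it_kwh

# Hypothetical annual totals for a single building:
print(pue(60_000_000, 40_000_000))  # 1.5
print(wue(20_000_000, 40_000_000))  # 0.5 L/kWh
```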

Tousson: Whether the facility is new or existing, we are always faced with challenges in designing mechanical and electrical infrastructure to meet the requirements of data center loads. In both cases, we work closely with the client, contractors and vendors to provide design solutions without compromising the design intent or equipment performance. We also confirm the compatibility of equipment built to international standards with U.S. standards. In existing data centers, the biggest challenges are to isolate, relocate and remain sensitive to key equipment and assets while the data center is still powered up.

Starr: Data centers are high power density buildings. It is important for all stakeholders to reflect on this and become critical of all construction practices. Energy storage, for example, is a multidiscipline coordination effort between nearly every design team member. Another example is the application of the NFPA 70: National Electrical Code Article 645: this is an optional article that allows less stringent installation methods. The code recognizes data centers are constantly chasing the newest technology and may need to install wiring without permanently attaching it to the floor at regular intervals. The code permits such variations from standard installation methods, but only if the design meets several prerequisites related to construction type, wall ratings, air handling distribution, emergency power off, etc.

Data center designs are also referenced by Tier levels, corresponding with the Uptime Institute. It is important for owner project requirements to go steps further than stating a holistic building Tier rating, as this leaves design intent open to interpretation, possibly resulting in under- or over-design of systems. In addition to a Tier rating, it is best to describe the system (electrical, mechanical, etc.) capacity and distribution components in terms of N, N+1, 2N, etc. This helps make sure the program intentions are clear and ensures that the owner’s investment is appropriately allocated.
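
The N-notation Starr recommends maps directly to installed equipment counts. A minimal sketch (topology labels only; it ignores distribution paths and concurrent maintainability, which also factor into Tier discussions):

```python
def installed_units(n_required: int, topology: str) -> int:
    """Installed capacity units for common redundancy topologies,
    where N units are needed to carry the design load."""
    counts = {
        "N": n_required,             # no redundancy
        "N+1": n_required + 1,       # one spare unit
        "2N": 2 * n_required,        # fully duplicated system
        "2N+1": 2 * n_required + 1,  # duplicated system plus a spare
    }
    try:
        return counts[topology]
    except KeyError:
        raise ValueError(f"unknown topology: {topology}")

# Example: a load requiring four 500 kW UPS modules.
for t in ("N", "N+1", "2N"):
    print(t, installed_units(4, t))  # N 4, N+1 5, 2N 8
```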

Peterson: Today’s clients expect reliability to be built into the design, while other building types may not have nearly as much redundancy as they aim to optimize for best performance and lower first cost. We always have the added goal of streamlining efficiency across the part and full loads expected throughout the life of the building, which can vary wildly from the first day to the last. Also, we have been following the trend of compressed design and construction schedules for data centers, with prefabrication of modular components becoming a common solution to shorten delivery times.

CSE: What are professionals doing to ensure such projects (both new and existing structures) meet challenges associated with emerging technologies?

Starr: Several design approaches provide inherent flexibility to help with new technology hurdles:

  • Cable tray systems.
  • Raised floor systems.
  • Plug-in track busway.
  • Tie isolation breakers at key locations.
  • Modular focused design and construction.

Still, we cannot anticipate all future technologies. Challenges with lithium-ion batteries are a good example. Instead of traditional combustible diffusion flames, lithium-ion produces jetting flames with varying heat release rates that make it difficult to know when an event is over. The best way for the project team to overcome unexpected technology challenges is to keep asking questions of each other throughout the process. Increased communication inevitably advances other project goals as well.

Peterson: We are fortunate to have technology companies with professionals and consultants that are open to our innovations and to discussing some of the latest technologies. By learning from what has been written and shared so openly by the community, we can hone how we implement those approaches and ask for greater things from vendors and suppliers. Along with new designs is the desire to audit older facilities and operations, to adjust them to perform better or to reduce controls and performance issues.

Rener: The design team needs to sit down with the IT managers at the beginning of the project to understand their vision for five years, 10 years and beyond to be able to design systems with the flexibility to adapt to changing IT requirements.

CSE: In what ways are you working with IT experts to meet the needs and goals of a data center?

Starr: At the start of a project we provide end-users with a data center questionnaire. If the owner team is able to complete this before the design kick-off meeting, IT and facility team members on the client side may use it as an opportunity to meet and discuss the project needs ahead of that meeting. The questions-and-responses document also helps organize the conversation, so both design and owner teams understand the reason for the project. By getting to know the function of data center processes and seeing growth plans, designers learn densities: power, data, cooling, floor and cabinet space and so on. On successful projects, all team members walk away from meetings with a better understanding of current and upcoming facility needs.

Peterson: The IT experts and their direction have differed in approach depending on the needs of their business, from financial institutions with their need for speed and reliability to technology clients wanting uniform and predictable cloud deployments. In many cases, the data center owners have savvy teams that define how the data center should support their needs. Some may be working with only their own applications; when those are combined with others across the organization, the requirements and main drivers differ widely. Either way, everyone adds to the requirements and introduces innovations for consideration, whether in design, engineering or construction, without missing the greater needs of the IT teams.

Rener: We nearly always retain an IT expert as part of our integrated team, even when the client may handle those systems separately. There is a level of understanding of cabling, rack, power and cable tray topologies that only comes with having an expert on our team. Often the biggest challenge in early design phases of a data center is planning for future technologies and addressing flexibilities and limitations. An IT expert on the team helps us and our clients design the best solutions for space, power systems and heating, ventilation and air conditioning.

Kosik: Collaborating with IT equipment manufacturers provides insight into the development of new computer equipment, starting in the early stages of planning, continuing to on-site testing (maintaining all intellectual property requirements, of course). IT equipment manufacturers respond to customer demand by introducing new technology that usually provides capabilities beyond the current customer’s needs, but with an eye to the future anticipating that need. Having a basic idea of the power and cooling requirements of the newly developed IT equipment creates a portal for us where we can get a glimpse of the near-term future of power and cooling for data centers.

There is another great resource for understanding how enterprise-scale companies envision the development and operation of their data centers: the people whose job it is to liaise between the IT team and the power and cooling infrastructure team. These people are in a unique position, with a simultaneous view into the workings of both teams, getting a glimpse of new technologies in computing systems as well as HVAC and electrical. This vantage point also enables a good look at the struggles and challenges within the different domains. Because these people are the link between the teams, they can often provide invaluable advice on what works and what doesn’t.

CSE: Describe a colocation facility project. What were its unique demands and how did you achieve them?

Williams: Colocation facilities have different requirements than a cloud-based storage data center. While the latter is focused on efficiency and cost, the former must comply with their clients’ requirements, which may vary depending on their needs. Many of the colocation facilities being built in our market are traditional chilled water or direct expansion computer room air conditioner unit designs, implemented to maintain the much lower setpoints required by their co-locators. For years, server and other electronic equipment suppliers have built their products to meet the allowable temperature and humidity ranges in ASHRAE TC 9.9 (2011); however, the idea of running servers in a 90°F or higher environment is more than many colocation clients are willing to accept, especially if they have to comply with federal government requirements.

Starr: The colocation project work I do uses the design development phase to coordinate details. How will the computer whitespace be allocated to tenants? Is rack level security hardware or access control needed? Will the rack power strips need controlled receptacles? How do you plan to meter user consumption? Will your tenant agreements allow scheduled maintenance outages?

If the installation is in a campus environment, a master planning activity likely makes sense. Colocation data centers have design pinch points that should be reviewed carefully. Not coordinating the colocation needs risks having either capacity available without distribution or vice versa. In some colocation facilities, high-resistance grounding is chosen for low-voltage power distribution so operation can continue during faults until an outage can be scheduled, much like an industrial manufacturing facility.

Peterson: On top of redundancy and efficiency to pass along operational savings, today’s colocation facilities are aiming to be scalable to match the capacity need for deployments as they arrive. Flexibility is also a growing key factor, as liquids for cooling are a more common solution for power-dense racks and rows. Maintainability of the data center continues to be a factor that is often overlooked. Colocation providers know they need to inform their clients when equipment is not performing adequately. Taking a client offline for maintenance is simply not an acceptable practice.

Tumber: I worked on the master plan for a 12-level colocation data center in the Midwest. The facility was designed to support an IT load of 36 megawatts. The project was unique because the data center was located in a downtown neighborhood with stringent local ordinances. We incorporated sound attenuation and plume abatement features at the cooling towers located on the roof. To maximize the building footprint, there was no space available for an outdoor equipment yard. Electrical equipment such as generators was located indoors on each level, and we incorporated strategies to meet the emissions and noise requirements. We also worked closely with the preconstruction team and ensured the design could accommodate agile and lean construction strategies, because disruptions to the busy neighborhood had to be minimized during the build.

CSE: Tell us about a recent project you’ve worked on that’s innovative, large-scale or otherwise noteworthy.

Peterson: One of our most recently completed projects was the design of EdgeCore’s Phoenix Data Center Campus. The first of seven build-to-suit data halls is complete and totals 178,252 square feet with an adjoining 20,000 square feet of office core. The two-story building, located on the southwest corner of the campus closest to the main entrance, has 32 megawatts of critical load capacity. DLR Group was selected by EdgeCore, in partnership with Dotterweich, Carlson, Mehner Design Inc., for its North American roll out, providing services on the Mesa Data Center Campus and five other data center projects across the nation. DLR Group provided mechanical and plumbing design and engineering, computational fluid dynamics analysis, fire suppression, commissioning and U.S. Green Building Council LEED services. (See project sheet for more details.)

Rener: We recently designed a new computer science building for the Milwaukee School of Engineering. At the center of this new building was an advanced artificial intelligence supercomputing facility. The project was an example of blending our strong experience in higher education design with our capabilities in data center and high-performance computing environments. (See case study for more information.)

The new Dwight and Dian Diercks Computational Science Hall at the Milwaukee School of Engineering is the academic home for the school’s first computer science degree. The focal point of the building is a supercomputer powered by state-of-the-art NVIDIA GPU units. While “Rosie” the supercomputer room occupies only 1,500 square feet of the 65,000-square-foot structure, it consumes more than 60% of its energy. The computer room and academic building use the same cooling system during summer months, maximizing efficiency across the whole building. During the Wisconsin winter, when the academic building no longer requires mechanical cooling, the computer facility uses free cooling from cold outside air via a separate system to keep NVIDIA’s Rosie cool. Courtesy: SmithGroup

Tumber: I am currently working on the redesign of an existing colocation data center. The facility was master planned about five years ago and designed for 160,000 square feet of white space. The owner’s requirements have evolved over the past few years and we are revising the master plan of the data center and engineering the next phases accordingly. It is a challenging project because the electrical and mechanical design needed to be reevaluated and modified from the ground up without impacting the existing tenants. For example, power densities were increased by approximately 20%; the mechanical design transitioned to air-cooled, split-system DX CRAC units with pumped refrigerant economizers in lieu of outdoor self-contained DX units; and the electrical design transitioned to outdoor equipment enclosures in lieu of an indoor stick-built solution.

Tousson: The University of Texas at Austin’s Frontera supercomputer at the Texas Advanced Computing Center is the fastest university supercomputer and the fifth-fastest supercomputer in the world, the latest addition to the Top500 list. Frontera achieved 23.5 petaflops, a measure of the system’s floating-point computing power, with a theoretical peak performance of 38.7 petaflops. At peak operation, Frontera consumes 6.5 megawatts of power.

Our design team provided detailed design solutions for MEP and structural infrastructure upgrades. The biggest challenges were a condensed schedule, tight physical confines, a limited budget and construction in an active compute center space. Our team provided new feeders from the electrical distribution system by using the existing infrastructure and identifying other feeders from adjacent space. This solution eliminated extended interruption of the other supercomputer operations colocated in the same space. We also modified the existing chilled water loop to serve a new direct liquid-cooled system and ran the associated piping in the limited available space by developing a 3D model.

CSE: How are engineers designing these kinds of projects to keep costs down while offering appealing features, complying with relevant codes and meeting client needs?

Rener: The challenge is often meeting first cost while providing for future staged expansion that won’t interrupt current operations. Solutions include planning and building the core and shell up front and using modular, repeatable MEP infrastructure. Some of this modular MEP infrastructure can be built off-site, with lower labor costs and better quality control, and shipped to the site.

Peterson: Modularity and prefabrication are key factors in reducing overall costs. More modern products and techniques are beginning to gain traction as designers and contractors realize that the benefits of speed and uniformity lead to cost reductions, which are scalable across multiple disciplines.

Tumber: It is important to have a firm understanding of the project requirements and objectives. Each project is unique, so the cost management exercise needs to be tailored accordingly. As options are being evaluated, collaboration with the stakeholders is extremely important. Working with the vendors, manufacturers, construction team and others also provides valuable insight. Ultimately, it is critical to present the whole picture to clients to help them make an informed decision.

Tousson: In existing structures, accurately documenting existing conditions and leveraging existing distribution infrastructure reduces investment needs and keeps costs down.

Starr: If the project is considering future features, the easiest design choice to make is roughing-in for future infrastructure. Install conduit and pull string without pulling wire or terminating; this future-proofing measure is especially worth considering when the designed installation method is underground. Even if the project is not phased (full build-out Day One), consider the building blocks of the installation as you design — this makes for easier value engineering or scaling-up in the future. There is a balance between flexibility and complying with applicable codes.

Although underground conduit is cheaper to install than overhead metallic conduit, concrete-encased and direct-buried circuits are subject to higher internal temperatures than aboveground installations. Depending on load characteristics, this heating may reduce the rated ampacity of the conductor or cable. And although the data center industry likes to market 105°C rated conductors, terminations rated beyond 75°C are not a standard product offering for systems under 1,000 volts.
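To illustrate why operating temperature matters, the National Electrical Code’s ambient temperature correction (the formula underlying the NEC 310.15 correction tables) scales a conductor’s table ampacity by the square root of the remaining temperature headroom. A minimal sketch, with illustrative numbers chosen here rather than taken from the interview:

```python
import math

def corrected_ampacity(table_amps: float, conductor_rating_c: float,
                       actual_ambient_c: float, table_ambient_c: float = 30.0) -> float:
    """NEC-style ambient correction: I' = I * sqrt((Tc - Ta') / (Tc - Ta)),
    where Tc is the conductor temperature rating, Ta' the actual ambient
    and Ta the ambient the ampacity table assumes (typically 30°C)."""
    factor = math.sqrt((conductor_rating_c - actual_ambient_c) /
                       (conductor_rating_c - table_ambient_c))
    return table_amps * factor

# Hypothetical example: a 90°C-rated conductor tabulated at 260 A for a
# 30°C ambient, run where the surrounding temperature reaches 40°C.
print(round(corrected_ampacity(260.0, 90.0, 40.0), 1))  # ~237.3 A
```

Even a modest 10°C rise shaves roughly 9% off the usable ampacity in this sketch, which is why the underground installation’s lower first cost has to be weighed against its thermal penalty.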

CSE: How has your team incorporated integrated project delivery, virtual reality or virtual design and construction into a project?

Tousson: It is always a challenge to provide a design in an existing structure. Building a detailed 3D model based on record documents and information gathered in the field, then adding the new equipment and associated electrical and mechanical infrastructure, helps develop sectional views that eliminate conflicts that might otherwise surface during construction.

Peterson: We have seen the level of development start at LOD 100 and grow to LOD 400+ as BIM has become the new norm, with designers, vendors and builders integrating more tools for greater detail, with the end goal of ensuring buildability and maintainability. DLR Group has VR champions and experts in over 20 offices who leverage VR technology to showcase rendered spaces and collaborate with our teams and partners to improve deliverables and minimize costly change orders.

CSE: What types of cloud, edge or fog computing requests are you getting and how do you help the owner achieve these goals?

Peterson: Our clients tend to be practical about right-sizing to meet their needs and that’s where we can support in determining their mix of needs for cloud, colocated and on-premises IT equipment and services. When we walk through a requirement list, we can start to quickly deduce which items matter most and how we can achieve the best value and energy efficiency for the life of the facility.

CSE: What are the data center infrastructure needs for a 5G system to work seamlessly?

Peterson: We are setting up facilities to be able to provide for the wave of 5G, from speed to data to processing needs. As 5G may take time to reach full coverage, modular and edge locations will begin to proliferate to support the more localized needs. At the larger scale, we’ve noticed increased demand for data centers with enough connectivity to support the edge and the many 5G-connected devices that will deliver a flood of information.

Williams: As a company, Harris has been involved in the cooling solutions for Verizon Wireless’ 5G network equipment in the Washington, D.C., market. It has been a three-year initiative to date for the carrier, which has been colocating its network equipment inside its parent company’s telecommunications switching buildings. The implementation of the 5G network will allow access to cloud-based computing where the traditional distributed cloud computing model doesn’t work.

Essentially, the hyperscale cloud computing data center model could be reduced to smaller servers being colocated alongside 5G network equipment to provide cloud access to customers who don’t have the best internet access.

CSE: What is the typical project delivery method your firm uses when designing these facilities?

Tumber: We see design-bid-build and design-build primarily on enterprise data center projects. For colocation and hyperscale projects, integrated project delivery and design-assist are more common.

I worked on a recent hyperscale project and design-assist was implemented. A general contractor was hired to provide preconstruction services and to assist the team through the design process. The preconstruction team helped with the evaluation of various options and alternates and provided feedback on cost and schedule. Ensuring safety during construction was important to the owner and the preconstruction team recommended design features to ensure a safe build. They also reviewed project drawings at each milestone and helped ensure that the team was tracking toward the project goals.

Rener: There is no typical delivery method, but I would say that most involve some form of early involvement with construction managers or contractors due to schedule and cost demands. This can be construction manager design-assist, design-build or IPD. Recently, one of our mission critical projects was successfully completed using a CM design-assist model, which significantly improved the project’s ability to meet schedule and cost targets. After the CM was selected, we were involved in the selection of mechanical and electrical subcontractors and then worked closely with them during the final stages of construction documents.

Peterson: For many projects, the traditional design-bid-build has been employed, but also several other methods have grown in popularity as data center owners and developers aim for reduced cost and schedule while still maintaining a high rate of delivered quality. As with most fast-paced projects, we are well versed in submitting early product packages to shorten the construction schedule, with contractors working closely with us on products and options, no matter which delivery method is employed.

Starr: The term “left-shift” is becoming commonplace: shifting activities on a project schedule to the left for faster building occupancy. For example, many manufacturers now offer an HMI-in-a-box (switchgear simulator) that allows for detailed functional testing before the equipment is installed. This schedule-saving opportunity also provides the owner with a training simulator for facility staff to use over the life of the electrical installation. Much like other market sectors, the data center industry is trying all delivery methods that have potential for shorter construction schedules. No matter the specific delivery method, the most successful projects happen when experienced design and construction teams are selected.

Recently our firm has seen an in-between delivery method commonly referred to as bridging documents. We work with the owner to develop documents through the design development phase. As an industry, we sometimes forget that the construction document phase is a poor time to coordinate design; it is best used to detail and coordinate the design approach established during design development. The bridging documents approach lets the owner’s engineer establish the design and allows the owner’s contractor to detail and coordinate the execution. Typically, a bridging documents approach has the installing contractor signing and sealing the documents as the official engineer of record.

