Data Center Technology: Past, Present and Future
After 42 years, Intel co-founder Gordon Moore’s prediction that the number and speed of transistors on a microchip would double every one to two years is still going strong. Named Moore’s Law, the 1965 forecast was expected by its author to endure just a decade, but historical cycles have lengthened its grip on technological innovation (see Figure 1).
Reviewing past technologies, including heat density, water-cooled components, sustainability, energy efficiency and commissioning, can help forecast just where today’s systems are headed.
While a significant indicator, the past isn’t always a black-and-white predictor. It’s only accurate when combined with an understanding of tomorrow’s technologies. Here’s how Moore’s Law works.
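In formula terms, a minimal statement of the trend (assuming a doubling period T of roughly 18 to 24 months, consistent with the “every one to two years” above) is:

$$N(t) = N_0 \cdot 2^{t/T}$$

where N0 is the transistor count in a chosen starting year and N(t) is the count t years later.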
Rise of heat densities
Anyone who works with data center technology knows that the hottest design problem in the industry is taming the equipment’s soaring heat densities. A look at yesterday’s data centers reveals a parallel trend.
Mainframe computers hit 400 watts/sq.-ft. just 20 years ago, not far off from where today’s cutting-edge processing equipment is headed. No one seemed to notice back then, though, because as much as 60% of computer-generated heat was removed by direct water cooling, and mainframe processors and servers were connected to their respective peripherals (i.e., printers and backup hard drives) that required significant space but demanded far less power.
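For a rough sense of scale, if 60% of a 400 watts/sq.-ft. load was carried away by water, the room air only had to handle about

$$400 \times (1 - 0.60) = 160\ \text{watts/sq.-ft.},$$

and the space-hungry, lower-power peripherals diluted even that figure across the floor.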
Today’s centralized computer rooms house processors, servers and peripherals in separate areas, yet all are connected to enhance computing and analysis power, allowing businesses to compare and share backlogged information and generate the most accurate and up-to-date data.
This trend, brought on by government regulations and business applications that drive high-end computations, applies to servers as well. The mainframes of the ’80s engaged a number of processors simultaneously, all working together on each application. Then, the standard rack-mount and “pizza box” servers of the ’90s and early 2000s were used one per function. Today, blade servers are back to the 1980s model, with a number of them working together to support both single and multiple applications, creating a virtual single processor image with tremendous computing power (see Figure 2).
While advanced computing has enabled businesses to reach new information and processing heights, centralization also has created a severe temperature inequality, as computer and server rooms demand major cooling power, while their peripheral neighbors need much less. Balancing these requirements to appropriately cool each space is one of today’s design challenges.
Pre-action fire systems
Truth be told, water always has existed inside even the most critical of facilities; in fact, as recently as 10 years ago, data center computers were still directly connected to water in order to cool their processors. While leaks seldom occurred, there was great concern for the damage this conductive liquid could cause, with reliability at stake. In the early ’90s, water left the computers’ bedsides and was repurposed inside another data center cooling workhorse, the computer room air conditioning (CRAC) unit.
Today, however, the thousands of gallons of water pumped hourly into individual CRAC units are no longer enough to support centralized computer rooms and claustrophobic server areas. Instead, computer room heat is calling for fluid cooling to take over again.
One industry solution is to use non-conductive liquids, which can cool computer equipment directly without containing any water. Such a solution is ideal and would offer the best of both worlds, but currently non-conductive liquids are available only in small quantities, typically for test stations.
Water also resides in the ceilings of most mission critical facilities. Two decades ago, data centers were outfitted with traditional fire protection systems, similar to those in today’s non-critical facilities, clad with water-filled sprinkler heads. All it took was a small fire to ruin the equipment of an entire facility.
To avoid risking unnecessary equipment devastation, most data centers have upgraded to a pre-action fire protection system, which eliminates the threat of leaking sprinkler heads and gives data center personnel an opportunity to intervene and save their computer equipment in the event of a fire. When the temperature reaches 165°F, the sprinkler heads release air into the room while an alarm blares throughout the building, warning operators of the conditions. If human intervention does not resolve the situation within a predetermined amount of time, water sprays out and extinguishes the fire.
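As a rough sketch of that sequence (the 165°F threshold and the intervention window come from the description above; the function names, polling loop and two-minute delay are illustrative assumptions):

```python
import time

SPRINKLER_TRIGGER_F = 165        # head-activation temperature cited above
INTERVENTION_WINDOW_S = 120      # hypothetical "predetermined amount of time"

def preaction_sequence(read_temp_f, operator_resolved, sound_alarm, release_water):
    """Sketch of a pre-action interlock: alarm first, water only as a last resort."""
    while read_temp_f() < SPRINKLER_TRIGGER_F:
        time.sleep(1)                       # keep polling until a head would activate
    sound_alarm()                           # warn operators of the condition
    deadline = time.time() + INTERVENTION_WINDOW_S
    while time.time() < deadline:
        if operator_resolved():             # staff intervened and cleared the event
            return "suppressed without water release"
        time.sleep(1)
    release_water()                         # window expired; discharge water
    return "water released"
```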
Data centers often employ a common supplemental fire protection system—a clean agent—that releases a gaseous substance that captures the oxygen from the room, and therefore, extinguishes the fire. The gas then turns into a powder, leaving a dust residue behind that is easy to clean up and doesn’t damage the equipment.
Another water alternative is a fire protection solution called ionized mist, which is made of de-ionized water that isn’t conductive. Released when room temperature hits 165°F, the mist sprays the computer room and cools the air by absorbing its heat.
Although data centers are quick to adapt to evolving equipment technologies, their owners typically are cautious about employing new MEP systems like these for fear that they will affect reliability. Understandably so, but this caution has also shaped the cycle of technology, yesterday and today.
Sustainability and energy performance
The sustainability bug has swept the nation in the last decade, with the U.S. Green Building Council’s LEED rating system and environmentally safe products pushing the envelope. Data centers have been slow to catch on, as owners may be willing to sacrifice cost for Mother Earth, but never reliability.
Still, energy has become one of the top three costs of operating a data center, just behind personnel and software expenses. Designers and owners alike recognize that the more energy efficient a facility is, the less heat it will radiate. These challenges have initiated some interest in specifying sustainable elements, but still leave much to be desired. Even in this category, history repeats itself.
Remember the days of direct current (DC) power? Years ago, telecom companies outfitted their data centers with DC voltage, or battery power. A more efficient way of powering computers, DC voltage offers exactly what a computer needs, requiring no alternating current (AC) to DC conversion, and therefore, no wasted energy.
But the high costs and safety issues involved with DC power distribution led telecom data centers away from DC power in the 1980s and settled the industry on AC voltage, which is safer and cheaper, but also inefficient. AC power is brought in at a higher voltage, rectified to DC power and stepped down to the needed level by each computer, generating and exhausting a lot of heat along the way. (NOTE: 35% to 40% of power is lost in its delivery to the computer; another 30% to 40% is lost inside the computer, leaving only about 25% of the power to be used by chips for computations. Using DC power will eliminate the first 35% to 40% loss, and specifying new, low-power/stacked chips will eliminate the second 30% to 40% loss.)
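A quick back-of-the-envelope check of those figures (treating both loss ranges as fractions of the incoming utility power, which is how the “about 25%” figure works out):

```python
# Back-of-the-envelope check of the power-loss chain described in the note above.
incoming_kw = 100.0                 # arbitrary reference: 100 kW drawn from the utility

distribution_loss = (0.35, 0.40)    # lost in delivery/rectification before the computer
internal_loss = (0.30, 0.40)        # lost inside the computer's own power path

best = incoming_kw * (1 - distribution_loss[0] - internal_loss[0])
worst = incoming_kw * (1 - distribution_loss[1] - internal_loss[1])
print(f"Power left for the chips: {worst:.0f} to {best:.0f} kW per {incoming_kw:.0f} kW drawn")
# -> roughly 20 to 35 kW, consistent with the article's "about 25%" figure
```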
Now DC power is back. Computer and server manufacturers are again offering it as an option for their equipment, promoting sustainability without sacrificing reliability.
Other efforts toward creating green data centers include: free cooling, which uses natural outdoor air below 35°F to condition the facility’s interior; perfecting hot-aisle/cold-aisle layouts to achieve larger temperature differences between aisles; and designing MEP equipment to be more energy efficient at typical operating points, since such equipment often runs at just 10% to 40% of capacity yet reaches its best efficiency only near 80% to 90% of capacity.
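As a minimal sketch of the free-cooling decision mentioned above (the 35°F changeover point comes from the text; the function name and the mechanical-cooling fallback are illustrative assumptions):

```python
FREE_COOLING_MAX_F = 35.0   # outdoor temperature below which free cooling applies, per the article

def select_cooling_mode(outdoor_temp_f: float) -> str:
    """Choose between outdoor-air (economizer) cooling and mechanical cooling."""
    if outdoor_temp_f <= FREE_COOLING_MAX_F:
        return "free cooling"        # condition the space with cold outdoor air
    return "mechanical cooling"      # fall back to chillers/CRAC units

# Example: a 28°F winter day qualifies for free cooling; a 60°F day does not.
for temp_f in (28.0, 60.0):
    print(f"{temp_f}°F -> {select_cooling_mode(temp_f)}")
```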
Energy monitoring is one way to increase efficiency (and reliability, too). Traditionally, data centers interested in improving energy efficiency hired engineers to examine the individual systems within their facility, performing analysis on each one. Today, software is being developed that allows a single database to monitor all systems (e.g., chilled water, mechanical and electrical systems, fire protection), analyzing their functions and capabilities as parts of a larger whole.
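A sketch of what such a single, facility-wide database might look like in code (the system names, metrics and class structure are illustrative assumptions, not a description of any particular product):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Reading:
    system: str     # e.g. "chilled water", "electrical", "fire protection"
    metric: str     # e.g. "supply temp (F)", "UPS load (%)"
    value: float

@dataclass
class FacilityMonitor:
    """A single database view across all of a data center's systems."""
    readings: List[Reading] = field(default_factory=list)

    def record(self, reading: Reading) -> None:
        self.readings.append(reading)

    def latest_by_system(self) -> Dict[str, Reading]:
        return {r.system: r for r in self.readings}   # last reading per system wins

monitor = FacilityMonitor()
monitor.record(Reading("chilled water", "supply temp (F)", 45.0))
monitor.record(Reading("electrical", "UPS load (%)", 62.0))
print(monitor.latest_by_system())
```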
Commissioning: the first and final steps
Providing peace of mind that buildings and their systems will operate as intended throughout their lifecycle, commissioning can help any data center reduce its risk of unplanned downtime.
Defined by ASHRAE Guideline 1-1996, commissioning is “the process of ensuring that systems are designed, installed, functionally tested and capable of being operated and maintained to perform in conformity with the design intent.”
Just over a decade ago, a good percentage of data center projects were commissioned, but as schedules compressed and competition tightened, the cost of commissioning made the process a luxury for many owners. Data center systems began to fail because they weren’t tested, and human error rose because operators weren’t properly trained.
Today, commissioning is back and it’s better than ever, with the majority of data center owners requesting 100% system functional testing. Previously, sampling was used to statistically verify component and system performance, but some equipment would still fall through the cracks. Full commissioning, by contrast, simulates every possible operating and failure mode before the facility is ready for operation.
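A rough illustration of why sampling can leave gaps (the 2% defect rate and 10-unit sample are hypothetical numbers chosen only to show the math):

```python
# Probability that sample-based testing finds at least one defective unit,
# assuming each unit independently has the same chance of carrying a latent fault.
defect_rate = 0.02     # hypothetical: 2% of installed devices are defective
sample_size = 10       # hypothetical: only 10 units receive functional testing

p_detect = 1 - (1 - defect_rate) ** sample_size
print(f"Chance the sample catches the problem: {p_detect:.0%}")
# -> about 18%; most of the time the fault goes unnoticed, which is the gap
# that 100% functional testing is meant to close.
```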
A case in point is an ESD client, a major international bank, that built a Chicago-area data center five years ago. ESD recommended independent Critical System commissioning for the project, and the bank agreed even though it might delay the facility’s early move-in date and add to the construction costs. During commissioning, ESD uncovered a 30% failure rate caused by a manufacturing error in an electrical circuit device that had passed both factory and on-site systems start-up testing. Because it was a manufacturer’s error, the vendor replaced the fleet of devices, and to date, the facility has yet to experience an outage. Without complete commissioning of the data center, reliability would have been sacrificed.
As commissioning has evolved over the years, its project delivery process has changed. The roles of the owner, contractor and engineer have evolved to create the need for a more independent entity: the commissioning agent, a third party that provides owners with written testimony that their data center will perform as designed.
Training facilitation is just one example. In this capacity, the commissioning agent acts as a facilitator between the design team, owner, contractors and manufacturers to achieve effective training of the data center’s operations staff. Contractor-performed installation verification also plays a role, with contractors establishing independent divisions that perform their own checks and balances. After training facilitation and contractor installation verification, the facility then completes formal commissioning.
Where are data centers headed?
As technology races to keep up with historical predictions, and possibly the final years of Moore’s Law, the future is filled with uncertainty and imagination about where data center technology is going.
The following are some “wild” predictions for the future:
- Computer virtualization: Providing distributed computing during off hours, millions of idle desktop processors and servers back in the office will lend their computing power to help their data center counterparts process a variety of functions 24/7. In another version, national and international companies will “rent” computing power from data centers during their slow periods. For example, a company in India looking for more power during its daylight hours will rent unused computing power from a U.S. company whose critical facility sits untapped overnight.
- Fluid submergence: Just as nuclear reactors are submerged in water to relieve the tremendous heat of uranium, computers and servers, as they continue to increase in power and therefore cooling demand, will be submerged in a non-conductive liquid, which is over 10 times more efficient at removing heat than today’s air-based heat transfer (see the sketch following this list).
- On-site generation: Some industry studies have concluded that on-site power generation is much more reliable in remote areas. Combine that with the fact that data centers are spending $10 to $20 million on generators to support their facility’s load, plus utility company delays and connection charges, and owners will start to build their own electrical generation plants on-site.
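As a rough sketch of why liquids remove heat so much more effectively than air (the fluid properties below are assumed textbook-typical values, with mineral oil standing in for a generic non-conductive coolant):

```python
# Compare how much heat a given volume of coolant carries per degree of temperature
# rise: volumetric heat capacity = density * specific heat.
air = {"density_kg_m3": 1.2, "cp_j_per_kg_k": 1005.0}
dielectric_fluid = {"density_kg_m3": 850.0, "cp_j_per_kg_k": 1670.0}   # mineral-oil-like

def volumetric_heat_capacity(fluid: dict) -> float:
    return fluid["density_kg_m3"] * fluid["cp_j_per_kg_k"]   # J per cubic meter per kelvin

ratio = volumetric_heat_capacity(dielectric_fluid) / volumetric_heat_capacity(air)
print(f"The liquid carries roughly {ratio:.0f}x more heat per unit volume than air")
# -> on the order of 1,000x per unit volume, which is why even the article's
# "over 10 times" figure for overall heat removal reads as conservative.
```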
In the future, as in years past, problems will be solved differently because the tools and technologies at our fingertips will continue to evolve and improve cyclically. And so, history keeps on repeating itself.
James Vallort, vice president and director of Building Sciences for Environmental Systems Design, Chicago, contributed to this article.
Fluid Dynamic Modeling: a solution for the dense data center
Gone are the days of hand calculations and spreadsheets that determine equipment temperature output. Instead, air flow and heat transfer tools have taken over and allowed design engineers to produce more accurate and realistic floor plans than ever before.
Programmed to reflect current trends, these tools combine mathematical computation and three-dimensional modeling to solve today’s fluid dynamic challenges. Data center equipment is entered into the programs and, after a series of calculations, the number of failures predicted from historic trends is revealed.
The hottest solution to come out of these modeling programs to date is the placement of air handlers outside of the computer room. A pioneering idea in its day, the concept has been popularized by an Intel white paper. Practically, it works like this: a very dense data center (one that is pushing 500 watts/sq.-ft.) is outfitted with 5 ft. of raised floor instead of the typical 2 ft. and has very high ceilings. The air handlers are customized to operate at higher volumes and optimal temperatures while delivering air both above and below the computer room floor.
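A quick sensible-heat calculation shows why such dense rooms need far more air path than a standard floor provides (CFM = Btu/hr ÷ (1.08 × ΔT) is the standard sensible-cooling relationship; the 20°F supply-to-return temperature rise is an assumed figure):

```python
# Airflow needed to carry away the 500 W/sq.-ft. load cited above.
watts_per_sqft = 500.0
delta_t_f = 20.0                       # assumed air temperature rise across the equipment

btu_per_hr = watts_per_sqft * 3.412    # convert the electrical load to Btu/hr
cfm_per_sqft = btu_per_hr / (1.08 * delta_t_f)
print(f"Required airflow: about {cfm_per_sqft:.0f} CFM per square foot")
# -> roughly 80 CFM/sq. ft., far more than conventional in-room CRAC layouts
# typically distribute evenly, hence the 5-ft. floor and oversized external air handlers.
```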
From a maintenance standpoint, this solution is a dream, allowing the HVAC equipment to be serviced outside the data center room; traditional CRAC units, by contrast, often are not well maintained because of concerns about non-IT personnel working inside the critical space. Engineers and owners benefit from this method as well, because the air handlers must be custom made, specified for specific temperature ranges and air volumes. Given the critical facility’s extreme conditions, the result can be lower in total cost than buying a unit that is already assembled.