10 common data center surprises
Appropriate technologies and best practice tips can help data center managers and consulting-specifying engineers prepare for the unexpected.
At AFCOM’s Data Center World Spring, held March 18 to 22, 2012, in Las Vegas, Emerson Network Power released a list of 10 common surprises for data center managers and consulting-specifying engineers and offered tips on how to be prepared for them. The list includes information on a surprising cause of data center downtime, what data center managers and engineers might not know about that next server refresh, and the growing trend sneaking up on virtually every data center.
“When you are a data center manager or consulting-specifying engineer, very few things are more unsettling than the unexpected,” said Peter Panfil, vice president of global power, Emerson Network Power. “We hope this list helps IT and engineering professionals better anticipate these issues and prepares them with the appropriate technologies, solutions, and best practices.”
Common data center surprises include the following:
1. Those high-density predictions are finally coming true: After rapid growth early in the century, projections of double-digit rack densities have been slow to come to fruition. Average densities hovered between 6.0 and 7.4 kW per rack from 2006 to 2009, but the most recent Data Center Users’ Group (DCUG) survey predicted average rack densities will reach 12.0 kW within three years. That puts a premium on adequate UPS capacity and power distribution, as well as cooling to handle the corresponding heat output.
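To put those density figures in perspective, here is a back-of-the-envelope sketch of what the jump from roughly 7 kW to 12 kW per rack means for power and cooling load. The 10-rack row and the 1 kW ≈ 3,412 BTU/hr conversion are illustrative assumptions; real sizing must also account for redundancy (N+1, 2N), load diversity, and site conditions.

```python
# Back-of-the-envelope rack power and cooling sizing.
# Illustrative numbers only -- not a substitute for proper engineering.

BTU_PER_KW_HR = 3412  # ~1 kW of IT load becomes ~3,412 BTU/hr of heat

def row_load_kw(racks: int, kw_per_rack: float) -> float:
    """Total IT load for a row of racks, in kW."""
    return racks * kw_per_rack

def cooling_btu_per_hr(it_load_kw: float) -> float:
    """Heat to be removed, assuming nearly all IT power becomes heat."""
    return it_load_kw * BTU_PER_KW_HR

today = row_load_kw(racks=10, kw_per_rack=7.0)    # ~2006-2009 average density
future = row_load_kw(racks=10, kw_per_rack=12.0)  # DCUG three-year projection

print(f"Today:  {today:.0f} kW IT load, {cooling_btu_per_hr(today):,.0f} BTU/hr")
print(f"Future: {future:.0f} kW IT load, {cooling_btu_per_hr(future):,.0f} BTU/hr")
# Today:  70 kW IT load, 238,840 BTU/hr
# Future: 120 kW IT load, 409,440 BTU/hr
```

Even this rough arithmetic shows the jump: the same 10-rack row goes from 70 kW to 120 kW of UPS and distribution demand, with a proportional increase in heat to be rejected.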
2. Data center managers will replace servers three times before they replace UPS or cooling systems: Server refreshes happen approximately every three years. Cooling and UPS systems are expected to last much longer—sometimes decades. That means the infrastructure that organizations invest in today must be able to support—or, more accurately, scale to support—servers that may be two, three, or even four generations removed from today’s models. Today’s data center manager must ensure that infrastructure technologies have the ability to scale to support future needs. Modular solutions can scale to meet both short- and long-term requirements. Engineers will also need to plan for the day-to-day servicing and maintenance of this longer-lasting power and cooling equipment.
3. Downtime is expensive: Everyone understands downtime is bad, but the actual costs associated with an unplanned outage are stunning. According to a Ponemon Institute study, an outage can cost an organization an average of about $5,000 per minute. That’s $300,000 in just one hour. The same study indicates the most common causes of downtime are UPS battery failure and exceeding UPS capacity. Avoid those problems by investing in the right UPS—adequately sized to support the load—and proactively monitoring and maintaining batteries. This gives engineers an opportunity to share best practices with clients and recommend battery monitoring solutions and high-availability architectures. They can use cost-of-downtime figures to support recommendations and ensure clients understand how design changes and modifications can improve availability.
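The downtime arithmetic above can be sketched directly; a figure like this is useful when building a business case for monitoring or redundancy upgrades. The $5,000-per-minute rate is the Ponemon average cited in the article, and actual costs vary widely by industry and incident.

```python
# Rough downtime-cost estimate using the Ponemon average cited above.
# Real costs depend heavily on industry, time of day, and incident type.

COST_PER_MINUTE = 5_000  # USD per minute of unplanned outage (Ponemon average)

def downtime_cost(minutes: float, cost_per_minute: float = COST_PER_MINUTE) -> float:
    """Estimated cost of an outage lasting the given number of minutes."""
    return minutes * cost_per_minute

print(f"60-minute outage: ${downtime_cost(60):,.0f}")   # $300,000
print(f"90-minute outage: ${downtime_cost(90):,.0f}")   # $450,000
```

Against numbers like these, the cost of battery monitoring or an additional UPS module is usually easy to justify.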
4. Energy rebates are available for energy efficiency upgrades: Many utility providers offer energy rebates and incentives for data centers that make energy efficiency improvements. This presents an opportunity for engineers to propose high-efficiency designs and help clients receive reimbursements for upgrading legacy equipment with high-efficiency power and cooling systems. Clients may also look to engineers to assist with the often lengthy application process. Once the reimbursement has been approved, utilities will request information on actual project costs and may require follow-up measurement and verification to determine actual energy savings.
5. Industry codes are playing a larger role in cooling strategy: In the 2010 edition of ASHRAE 90.1: Energy Standard for Buildings Except Low-Rise Residential Buildings, the SCOP (seasonal coefficient of performance) rating was expanded to include data centers. Codes such as this, which focus on energy efficiency, are becoming more numerous and impacting data center cooling strategies and technology developments. It is important that engineers keep abreast of new codes and regulations and the latest technologies that enable compliance.
6. Monitoring is a mess: IT managers have more visibility into their data centers than ever before, but accessing and making sense of the data that comes with that visibility can be a daunting task. According to an Emerson Network Power survey of data center professionals, data center managers use, on average, at least four different software platforms to manage their physical infrastructure. Of those surveyed, 41% say they produce three or more reports for their supervisors every month, and 34% say it takes three hours or more to prepare those reports. The solution? Move toward a single monitoring and management platform that can consolidate that information and proactively manage the infrastructure to improve energy and operational efficiency, and even availability.
7. The IT guy is in charge of the building’s HVAC system: The gap between IT and facilities is shrinking, and the lion’s share of the responsibility for both is falling on IT professionals. Traditionally, IT and data center managers have had to work through facilities when they needed more power or cooling to support increasing IT needs. That process is being streamlined. For engineers, it is important to incorporate all of these players into the design process. Gone are the days when the engineer had to work with only one or two individuals, usually from the facility side. Now it is a complex ecosystem composed of IT, operations, facilities, and sometimes procurement.
8. That patchwork data center needs to be a quilt: In the past, data center managers and engineers freely mixed and matched components from various vendors because those systems worked together only tangentially. However, the advent of increasingly intelligent, dynamic infrastructure technologies and monitoring and management systems has increased the amount of actionable data across the data center, delivering real-time modeling capabilities that enable significant operational efficiencies. IT and infrastructure systems still can work independently, but to truly leverage the full extent of their capabilities, integration is imperative.
9. Data center on demand is a reality: The days of lengthy design, ordering, and deployment delays are over. Today there are modular, integrated, rapidly deployable data center solutions for any space. Integrated, virtually plug-and-play solutions that include rack, server, and power and cooling can be installed easily in a closet or conference room. On the larger end, containerized data centers can be used to quickly establish a network or to add capacity to an existing data center.
10. IT loads vary—a lot: Many industries see extreme peaks and valleys in their network usage. Financial institutions, for example, may see heavy use during traditional business hours and virtually nothing overnight. Holiday shopping and tax seasons also can create unusual spikes in IT activity. Businesses depending on their IT systems during these times need to have the capacity to handle those peaks but often operate inefficiently during the valleys. A scalable infrastructure with intelligent controls can adjust to those highs and lows to ensure efficient operation.
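The value of scalable infrastructure for the peaks and valleys described above can be sketched with a simple capacity model. The hourly load profile and the 50 kW module size below are hypothetical, chosen only to illustrate how modular capacity keeps utilization high at the valleys where a fixed, peak-sized plant runs deep into inefficient part-load territory.

```python
# Illustrative sketch: modular vs. fixed capacity for a bursty load.
# The load profile and 50 kW module size are hypothetical assumptions.
import math

MODULE_KW = 50  # hypothetical UPS/cooling module size

def modules_needed(load_kw: float) -> int:
    """Smallest number of modules that covers the load (ignoring redundancy)."""
    return max(1, math.ceil(load_kw / MODULE_KW))

# Quiet overnight, heavy during business hours (kW per hour of day)
hourly_load = [40] * 8 + [180] * 10 + [40] * 6

peak = max(hourly_load)
fixed_capacity = modules_needed(peak) * MODULE_KW  # sized once, for the peak

for load in (min(hourly_load), peak):
    n = modules_needed(load)
    modular_util = load / (n * MODULE_KW)
    fixed_util = load / fixed_capacity
    print(f"{load:>4} kW: modular {n} modules ({modular_util:.0%} utilized), "
          f"fixed plant {fixed_util:.0%} utilized")
# 40 kW: modular 1 modules (80% utilized), fixed plant 20% utilized
# 180 kW: modular 4 modules (90% utilized), fixed plant 90% utilized
```

At the overnight valley, the modular configuration idles three of four modules and keeps the active one well loaded, whereas the fixed plant runs at 20% of capacity, which is typically its least efficient operating point.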
Draper is the manager of strategy and research at Emerson Network Power.