The Summer of 2012: A most unusual summer, too much “heat”
This series of discussions has so far covered substation transformers within data centers, major failures of dry-type transformers caused by switching-induced transient voltages, the “load center substation” concept (which I think now makes more sense than ever), and methods for large reductions in the use of copper cable in data centers’ major power systems.
Next week, I plan to introduce to you an exciting new liquid substation transformer product that I think solves all of the problems we’ve been discussing so far.
But I’d like to pause for a bit this week to discuss some of the very unusual events of this very unusual summer for the data center industry, and also to discuss, of all things, the WEATHER. I’m sure that everyone who works in the data center industry is aware of, and has closely followed, the serious system crashes at some major data centers over the past two months.
Almost all of them involved mysterious failures in power system infrastructure, and in some cases the unusual weather this summer seems to have been a contributing factor. There were triple-digit temperatures for days on end, extending from the Plains states to the mid-Atlantic coast and all the way up into Canada, and there were extraordinary thunderstorms with 80-mph straight-line winds in early July that caused really unusual disturbances of utility services, including repetitive drops of individual phases on utility transmission lines over short durations in some parts of the country.
Many data centers experienced problems during these events, and more than a few plants completely crashed. Most of those were quiet, unpublicized failures well below the industry’s radar, but three or four others were front-page news everywhere.
I want to comment on two of the serious problems that became apparent during the unusual weather this summer. These happen to fall within my very limited range of expertise, yet they caught me way off guard nonetheless.
For the past several years, I’ve been using temperature sensors in outdoor medium voltage switchgear installed at data centers to issue alarms on the owner’s monitoring system if temperatures inside the switchgear become excessive. In mid-May, I began receiving calls and e-mails from various sites reporting that their switchgear was in alarm on overtemperature. The first of these came from a site in Chicago, of all places, and then more reports came in from sites ranging from southern Ohio to the Carolinas to Virginia to New Jersey.
One of my major clients decided to install temperature recorders in the MV switchgear at several of his sites. Most of these lineups are well-ventilated, with turbine-style blowers sized to provide about 30 complete changes of internal air per hour. The temperature recorders charted internal temperatures anywhere from 120 F to 140 F in some instrument compartments.
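As a rough sketch of how a blower gets sized for an air-change target like the one above, required airflow is simply enclosure volume times changes per hour, divided by 60. The enclosure dimensions below are hypothetical examples for illustration, not figures from any actual lineup:

```python
# Rough blower sizing for a switchgear enclosure ventilation target.
# All dimensions here are hypothetical, chosen only for illustration.

def required_cfm(volume_cubic_ft: float, air_changes_per_hour: float) -> float:
    """Airflow in cubic feet per minute needed for a target air-change rate."""
    return volume_cubic_ft * air_changes_per_hour / 60.0

# Example: a lineup roughly 20 ft long x 8 ft deep x 8 ft high
volume = 20 * 8 * 8  # 1,280 cubic ft of internal volume
cfm = required_cfm(volume, 30)  # ~30 changes per hour, as in the article
print(f"Required airflow: {cfm:.0f} CFM")  # 640 CFM
```

Real blower selection would also account for filter and louver pressure drops, but the air-change arithmetic gives a first-order number.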
One lineup (the oldest of the lot, at about 10 years old, and painted utility-style Munsell green) recorded temperatures of 160 F at the top of some instrument compartments. This lineup also happens to be non-walk-in equipment, which is very difficult to ventilate properly. (All other lineups are of sheltered-aisle construction.)
Aside from plain triple-digit heat, I think other things may be going on. I suspect that the unusually bright sun over the eastern half of the country for the last 10 weeks might also be carrying greater-than-normal concentrations of energy in the near-infrared spectrum. I’m just guessing that this might be due to the unusually intense solar storms and solar flares this year, which are expected to peak next year. Whatever the cause, this summer of 2012 is far different from anything I’ve seen in my 35-year career in the power systems business.
My intent in using the temperature alarms in MV switchgear was to make sure that very expensive and critically important microprocessor-based protective relays were operating within safe temperature ranges, not to mention things like dc power supplies, Ethernet switches, sequence-of-events recorders (SERs), circuit power monitors, control power transformers, and voltage transformers (VTs).
But this unusual summer has demonstrated that the internal switchgear components most vulnerable to this high heat are actually metal oxide varistor (MOV) surge arresters. Most MOVs are rated for application at a maximum ambient air temperature of only 40 C (104 F) for prolonged periods. Obviously, internal switchgear temperatures of 120 F to 160 F are placing those arresters at risk, and arresters have been failing in all this recent heat.
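To put those recorded Fahrenheit readings against the 40 C MOV ambient rating, a quick conversion makes the margin (or lack of one) explicit:

```python
# Convert the recorded switchgear interior temperatures to Celsius so they
# can be compared directly against the typical 40 C (104 F) MOV ambient rating.

def f_to_c(temp_f: float) -> float:
    """Fahrenheit to Celsius."""
    return (temp_f - 32.0) * 5.0 / 9.0

MOV_RATING_C = 40.0  # typical maximum ambient rating cited in the article

for temp_f in (120, 140, 160):
    temp_c = f_to_c(temp_f)
    print(f"{temp_f} F = {temp_c:.0f} C "
          f"(exceeds rating by {temp_c - MOV_RATING_C:.0f} C)")
```

Even the mildest recorded reading runs roughly 9 C above the rating, and the worst non-walk-in lineup runs about 31 C above it.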
One such arrester failure occurred this summer at a data center in the Carolinas and caused a catastrophic failure of the plant’s lineup of 27 kV main utility switchgear.
A failure of a single surge arrester can be a violently explosive event on its own, one that can heavily damage switchgear, but this failure was particularly devastating because of another, related problem.
The switchgear was a double-ended Main-Tie-Tie-Main arrangement fed with dual dedicated 1,200-amp feeders from a utility-owned substation located nearby. The arresters were required by the utility company to be installed at the incoming line terminals of the switchgear, in order to protect the utility’s cables.
One arrester on one phase failed in the heat, and it failed shorted, which is the normal initial failure mode until the arrester blows itself apart and opens. A multifunction protective relay back at the utility’s 1,200-amp feeder breaker correctly saw the failure as a ground fault (of around 7,000 amps) and picked up on its #51G function (there was no #50G used, apparently to allow selectivity with downstream relays).
However, before the relay could trip the breaker, the grounding cable for the three arresters blew itself apart and actually cleared the ground fault. The grounding lead was not able to carry the fault current and behaved exactly like a fuse, but one operating in open air. About 6 in. of the grounding lead was literally vaporized by the current, producing copper plasma that blasted through most of the switchgear and created multiple phase-to-phase faults in multiple locations. The upstream breaker ultimately tripped on phase overcurrent, but not until the switchgear was pretty much destroyed.
So, how do we avoid heat-related problems like these in outdoor switchgear?
If you’re a data center owner who happens to own 15 or 20 lineups of medium voltage outdoor switchgear, you don’t want the cost of installing 10 tons of HVAC in each of those lineups, and you don’t want the operating costs and maintenance chores. You also don’t want the hassle of finding a suitable power source for the HVAC (which should be derived from an “essential bus” somewhere inside the plant), and you don’t want more criticism from activist environmental groups who say that data centers are energy hogs, already using far too much power from too many “dirty” sources.
It became obvious this summer that plain old industry-standard ANSI-61 or ANSI-49 grey paint causes the switchgear’s steel enclosure to absorb too much solar heat energy. One electrical consulting engineering firm that specializes in large data centers recently changed its standard switchgear specification to require “Bright White” paint on all outdoor MV switchgear, with solar reflectivity in the 90% range (versus the 35% to 45% reflectivity of most ANSI-49 and ANSI-61 coatings).
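A back-of-envelope sketch shows why that reflectivity change matters. Absorbed solar power per unit of enclosure surface is just (1 − reflectivity) times the incident insolation; the 1,000 W/m² figure below is a typical clear-sky peak assumed for illustration, not a measured value:

```python
# Back-of-envelope comparison of solar heat absorbed by switchgear paint.
# The insolation figure is a typical clear-sky assumption, not measured data.

PEAK_INSOLATION_W_PER_M2 = 1000.0  # rough clear-sky peak on an exposed surface

def absorbed_flux(reflectivity: float) -> float:
    """Solar power absorbed per square meter for a given reflectivity (0-1)."""
    return (1.0 - reflectivity) * PEAK_INSOLATION_W_PER_M2

ansi_grey = absorbed_flux(0.40)      # mid-range of the 35%-45% cited for ANSI grey
bright_white = absorbed_flux(0.90)   # ~90% reflectivity cited for bright white
print(f"ANSI grey absorbs ~{ansi_grey:.0f} W per square meter")
print(f"Bright white absorbs ~{bright_white:.0f} W per square meter")
```

On those assumptions, the grey enclosure soaks up roughly six times the solar heat of the white one, which is consistent with the interior temperature problems described above.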
The large client I mentioned above is beginning to experiment with various types of new high-tech exterior coatings, which use nanotechnology and ceramics to provide exceptional solar reflectivity along with some degree of insulation. We’re optimistic that this alone might bring interior switchgear temperatures down to very safe levels.
Engineers should specify adequate ventilation on the doors or cover plates of rear cable compartments in MV switchgear. Manufacturers’ standard practices for ventilating rear cable compartments differ widely: some use almost no ventilation at all, while others use a very good multiple-vent arrangement. (MOV arresters are almost always mounted in the rear cable compartments, and even most climate-controlled switchgear power houses that cool the aisle and the switchgear interior don’t cool the cable compartments, so good ventilation at the rear is critical.)
Above all, specify a minimum cable size and ampacity for surge arrester grounding leads that will allow the cable to withstand the maximum expected ground-fault current for the longest expected fault duration, without burning itself in two.
It’s been a long hot summer, with much more of it still to come.