Network controls for electrical systems
Examples of network failures
Northeastern United States, 1965: On Nov. 9, power was disrupted to parts of Ontario, Connecticut, Massachusetts, New Hampshire, Rhode Island, Vermont, New Jersey, and New York. More than 30 million people were without electricity for up to 12 hours, all due to an error in setting up a protective relay in the network. An interconnecting transmission line between the Niagara Falls (N.Y.) hydro generating station and Ontario tripped offline while operating well below its rated capacity. The load that this line had been serving was transferred to another transmission line, which overloaded and tripped offline in turn. The resulting excess capacity at the Sir Adam Beck generating station created a major power surge that reversed the direction of power flow on other transmission lines crossing into New York. Although the Robert Moses plant (the Niagara Falls generating station) isolated itself from the surges, the disruption cascaded throughout New York and the other Northeastern states as overloaded transmission lines and generating stations alternately tripped offline to protect themselves from damage due to the power surges. During this event, the network frequency dropped to 56 Hz and, 1 minute before the blackout, down to 51 Hz.
New York City, 1977: On July 13 and 14, major portions of New York City were without power during an outage that was initiated by a lightning strike on a substation in Westchester County. The network failed due to a faulty breaker that did not reclose after the strike. A subsequent strike caused two other breakers to trip, and only one of them reclosed. Due to control network problems, two major transmission lines were overloaded and local peaking generators failed to start, exacerbating the system instability. Within 1 hour of the initial lightning strike, all of the local generating capacity had failed, and all inter-tie connections to areas outside of the local service area had been severed. Power restoration procedures began less than 2 hours after this failure, but procedural problems within the network slowed the restoration effort for more than 24 hours.
Northeastern United States Blackout, 2003: Some 55 million people experienced a major power outage lasting up to 16 hours, all due to a failure in the network control and data collection system. The sequence of events:
- The beginning of one of the largest power outages in U.S. history was a computer software bug in an energy management system that caused incorrect telemetry data to be reported to a power flow monitoring system. At 12:15 p.m., a system operator discovered and corrected the telemetry problem but failed to restart the power monitoring system, leaving it out of service.
- Following this minor monitoring failure, a generating station was taken offline by its operating personnel due to extensive maintenance problems.
- A 345 kV transmission line in northeast Ohio made contact with a tree and tripped out of service.
- The alarm system at the local utility control room failed and was not repaired.
- One transmission line after another tripped offline as each in turn was overloaded by the failing network. More than 20 major transmission lines, rated from 138 to 345 kV, were taken offline due to undervoltage and overcurrent. At this point (4:05:57 p.m.), shedding 1.5 GW of load in the Cleveland region could have averted the subsequent failures. However, the network operational data was inaccurate, unavailable, or not acted upon by the network control personnel in a timely fashion.
- Between 4:06 and 4:10 p.m., multiple transmission lines experienced undervoltage and overcurrent conditions and were tripped offline. Seconds later, multiple power generating stations along the East Coast were overloaded and tripped offline to protect themselves. At this point, the blackout was in full swing.
- At 4:10:37, the Michigan power grids isolated themselves.
- At 4:10:38, the Cleveland power grid isolated itself from Pennsylvania.
- At 4:10:39, 3.7 GW of power flowed west along the Lake Erie shoreline toward southern Michigan and northern Ohio. A surge that was 10 times the power flow only 30 seconds earlier caused a major voltage drop across the network.
- At 4:10:40, the power flow flipped to a 2 GW surge eastward, a net swing of 5.7 GW, and then reversed again to the west, all in the span of 0.5 seconds.
- Within 3 seconds of this event, many of the international connections failed. This resulted in one of the Ontario power stations tripping offline due to the unstable network conditions.
- By 4:12:58, multiple areas on both sides of the United States-Canada border had disconnected from the grid, and New Jersey separated itself from the New York and Philadelphia power grids. This caused a cascade of generator station failures as the network conditions continued to deteriorate, both in New Jersey and westward.
- At 4:13 p.m., 256 power stations were offline with 85% of them failing after the grid separations occurred. Most of these were caused by automatic protective controls in the power network or the individual power station.
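The magnitude of the power swing in the timeline's final second can be checked with simple arithmetic: a reversal from 3.7 GW flowing west to 2 GW flowing east is a net swing of 5.7 GW. A minimal sketch (the flow values come from the timeline above; the rate-of-change figure is derived here for illustration, not stated in the source):

```python
# Net power swing along the Lake Erie shoreline, 4:10:39-4:10:40 p.m.
# Sign convention (assumed for illustration): westward flow is positive.
flow_before_gw = 3.7   # 4:10:39 p.m. -- 3.7 GW flowing west
flow_after_gw = -2.0   # 4:10:40 p.m. -- 2 GW flowing east

net_swing_gw = flow_before_gw - flow_after_gw  # 5.7 GW, as the timeline states
swing_rate_gw_per_s = net_swing_gw / 0.5       # derived: the flip took ~0.5 s

print(f"Net swing: {net_swing_gw:.1f} GW")                # 5.7 GW
print(f"Rate of change: {swing_rate_gw_per_s:.1f} GW/s")  # 11.4 GW/s
```

A swing of this size and speed is far beyond what protective relays can distinguish from a fault, which is why so many stations tripped within seconds of it.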
Southwestern United States Blackout, 2011: On Sept. 8, a procedural error caused a widespread power outage covering southern California, western Arizona, and northwestern Mexico. While electrical utilities normally use network monitoring and computer modeling to determine when a single-point failure will cause a problem, this was not done prior to a capacitor bank switching event. Because the network control did not know about the potential problems that this line failure could cause, a cascade lasting 11 minutes tripped transmission lines, generating stations, and transformers offline. More than 7 million people ended up without electrical power for up to 14 hours.
India Blackout, 2012: Some 620 million people were without electrical power between July 29 and August 1. The network failure began with a 400 kV transmission line tripping out due to an overload. Rather than load being shed to protect this critical circuit, power stations began tripping offline. The resulting cascade shut down power to more than 300 million people for about 15 hours. On July 31, the network failed again due to a control relay malfunction near Agra. This very small event caused a complete shutdown of 38% of India's generating capacity, with impacts in 22 of the 28 Indian states. This failure was the largest power outage ever experienced, and it would have been even larger had the five largest consumers of electricity in India not had their own off-grid generators. Overall, there was 35 GW of private, off-grid generating capacity, with plans to implement an additional 33 GW. Notably, the areas recorded as unaffected were precisely those not connected to the power network, and they therefore remained in service.
While the aforementioned examples were all very large power networks, the principles of operation and failure work the same for small networks within a single building.
Spinning reserve: The amount of electrical capacity that is available from a generating system, without the station personnel starting another generator.
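The definition above can be expressed as a short calculation: spinning reserve is the unused capacity of generators that are already online and synchronized. A minimal sketch, in which all generator names and ratings are hypothetical:

```python
# Spinning reserve: unused capacity on generators that are already online --
# no additional units need to be started by station personnel.
# All generator data below is hypothetical, for illustration only.
generators = [
    # (name, rated capacity in MW, current output in MW, online?)
    ("Unit 1", 500, 430, True),
    ("Unit 2", 300, 260, True),
    ("Unit 3", 400, 0, False),  # offline: does NOT count toward spinning reserve
]

spinning_reserve_mw = sum(
    rated - output for _name, rated, output, online in generators if online
)
print(f"Spinning reserve: {spinning_reserve_mw} MW")  # 70 + 40 = 110 MW
```

In the blackout examples above, cascades grew when the spinning reserve in a region was smaller than the load suddenly transferred onto it.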
Strong ties: Strong ties between sections of a network are those that tend to hold on and keep the adjacent, disturbed network section operational.
Weak ties: Weak ties between sections of a network are those that immediately break the connection to a disturbed network section whenever there is even the smallest disruption in that section.
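The distinction between strong and weak ties can be illustrated as a trip decision. This is only a sketch: the tie types, thresholds, and disturbance measure here are hypothetical, and real protective relaying uses coordinated voltage, current, frequency, and timing settings rather than a single disturbance number.

```python
# Illustrative sketch: a weak tie disconnects at the smallest disturbance,
# while a strong tie rides through and keeps supporting the troubled section.
# Thresholds are hypothetical percentages of a nominal disturbance measure.

def tie_trips(disturbance_pct: float, tie_type: str) -> bool:
    """Return True if the tie breaker opens for a given disturbance level."""
    thresholds = {
        "weak": 2.0,     # opens on even a minor disturbance
        "strong": 25.0,  # holds on, trying to keep the neighbor energized
    }
    return disturbance_pct >= thresholds[tie_type]

# A 5% disturbance in the adjacent network section:
print(tie_trips(5.0, "weak"))    # True  -- weak tie breaks away immediately
print(tie_trips(5.0, "strong"))  # False -- strong tie continues to support it
```

Neither behavior is inherently better: strong ties let healthy sections support a disturbed neighbor, but, as the 2003 timeline shows, they can also propagate a collapse that weak ties would have contained.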
Kenneth L. Lovorn is president of Lovorn Engineering Assocs. He is a member of the Consulting-Specifying Engineer editorial advisory board.