Network controls for electrical systems

Networking of electrical systems has a number of aspects that should be considered.

By Kenneth L. Lovorn, PE, Lovorn Engineering Assocs., Pittsburgh March 11, 2013

There is a movement in the engineering design community toward networking controls for electrical power systems. While popular opinion holds that this is a good thing, there are several issues that should be considered before implementation.

Networking of electrical systems has a number of aspects that should be understood before we delve into these issues. One element of a network could be basic monitoring of system conditions, such as an elementary building automation system (BAS). Monitoring voltage, current, and electrical usage through the BAS can benefit building management and energy conservation. These data can be used to allocate electrical energy usage to individual tenants or specific functions and can aid in systems maintenance. For example, a pump serving a constant load may show a trend of increasing electrical usage over time. Restrictions in the lines downstream or increased bearing friction can raise the motor current; the cause can then be investigated and corrected before major equipment damage occurs.
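As a rough illustration of how such trend data might be put to work, the short Python sketch below fits a least-squares trend line to logged motor current and raises a maintenance flag when the readings creep upward. The sample values, sampling cadence, and alarm threshold are assumptions made for the example only, not values from any particular BAS product.

```python
# Hypothetical sketch: flag a rising trend in motor current logged by a BAS.
# The weekly readings and the 0.5% per-sample alarm threshold are illustrative
# assumptions, not values from a real system.
from statistics import mean

def current_trend_percent(samples):
    """Least-squares slope of the samples, expressed as percent change
    per sample relative to the average reading."""
    n = len(samples)
    xs = range(n)
    x_bar, y_bar = mean(xs), mean(samples)
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, samples)) / \
            sum((x - x_bar) ** 2 for x in xs)
    return 100.0 * slope / y_bar

# Example: weekly average amps for a pump motor serving a constant load.
weekly_amps = [41.8, 41.9, 42.3, 42.6, 43.1, 43.7, 44.2]

if current_trend_percent(weekly_amps) > 0.5:   # assumed alarm threshold
    print("Investigate pump: motor current is trending upward "
          "(possible line restriction or bearing friction).")
```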

Special protection schemes (SPSs) can also have “weak” ties between systems or components. Weak ties are those intended to accommodate minor changes in the systems without manual intervention. An example of a weak tie is a power system that allows a preset energy transfer between one distribution system and another. As the loads on each distribution system vary, the diversity of generators in each system allows the generators of each respective system to share the total load. For example, the generators in system B can carry some of the extra load of system A when the system A generators do not have the spinning reserves to serve short-term load increases. When there is a generator failure or system disturbance on system B, the network control disconnects the tie between systems A and B to protect the operation of system A; this is why it is called a weak tie.

In SPSs, there are also “strong” ties between systems or components. Strong ties operate differently from weak ties in that they can carry a much larger portion of the load. When a generator fails on system B, the generators on system A attempt to ramp up to assume the load and keep system B online.

All three SPS configurations are determined by the control settings and may be changed through software modifications.
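To make the distinction concrete, the sketch below shows, in Python, how the same tie might behave as either a weak or a strong tie depending only on a configured mode and a couple of settings. All names, limits, and the preset transfer value are assumptions made for the illustration; real SPS controllers do not expose this interface.

```python
# Illustrative sketch only: how a tie between systems A and B might be made
# "weak" or "strong" purely through control settings. The 50 MW preset transfer
# and 400 MW strong-tie limit are assumed values, not from a real SPS.
from dataclasses import dataclass

@dataclass
class TieStatus:
    transfer_mw: float        # power flowing from A to B (negative = B to A)
    disturbance_on_b: bool    # fault, generator trip, frequency excursion, etc.

def tie_action(status: TieStatus, mode: str,
               preset_transfer_mw: float = 50.0,
               strong_limit_mw: float = 400.0) -> str:
    if mode == "weak":
        # Weak tie: open on any disturbance or when the preset transfer is
        # exceeded, protecting system A at the expense of system B.
        if status.disturbance_on_b or abs(status.transfer_mw) > preset_transfer_mw:
            return "OPEN TIE"
        return "HOLD"
    if mode == "strong":
        # Strong tie: stay closed and let system A ramp up to carry system B,
        # opening only when A's own stability limit is reached.
        if abs(status.transfer_mw) > strong_limit_mw:
            return "OPEN TIE"
        return "RAMP A GENERATION" if status.disturbance_on_b else "HOLD"
    raise ValueError("mode must be 'weak' or 'strong'")

# The same disturbance produces different behavior depending on the settings:
event = TieStatus(transfer_mw=120.0, disturbance_on_b=True)
print(tie_action(event, "weak"))    # -> OPEN TIE
print(tie_action(event, "strong"))  # -> RAMP A GENERATION
```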

Pros and cons of networked SPS controls 

When network SPS control systems work properly, many operational benefits can result: 

  • A single operator can control the flow of energy into and out of his network.
  • The operator can, at a glance, determine the system status and make system adjustments to maintain the stability of his network.
  • Operators can use predictive software to determine whether there might be future system capacity or overloads that should be addressed immediately. The operator can shed generating capacity, bring additional generating capacity online to increase his spinning reserves, or shed selective loads.
  • Networked SPS systems can have an inherent increase in reliability.
  • If the system data are accurate and properly used, the network can be quite useful. 

On the con side, the negatives can outweigh the positives, particularly when it comes to maintaining strong ties between systems: 

  • Inaccurate data or a failure of the operator to properly use the data can precipitate a system-wide failure. Data collection can range from $500 to $5000 per point, so an extensive data collection network can be very expensive. If data are only collected and not used in the network operation, the expense is difficult to justify.
  • Strong ties between systems can be disastrous when system B fails and system A does not have the capacity or stability to assume the load. When this happens and the operator for system A does not sever his connection to system B, then both systems A and B can fail.
  • Multiple systems, all connected with strong ties, can result in a widespread failure that can ripple across multiple counties or states. Under worst-case conditions, major sectors of a country or adjacent countries can be affected.  

Networking 

Networking is a very good way to improve the reliability and monitoring of a system. This statement needs some qualification, given the examples where monitoring of controls and network status failed. Also, the instances where a network of SPS systems with strong ties between them created a cascading failure must be considered. 

Obviously, having more information on a system is always beneficial. As long as the information is accurate, the operator acts on it in the prescribed manner, the system automatically adjusts itself as intended, and there are means to make sure that the information-gathering system is operational, the electrical system will remain online. All too often, one or more of these links breaks down: either the system or the operator does not make the correct decision based on the available information, or the information itself is wrong.

The strength of the ties between elements of the networked SPSs can be either very beneficial or very detrimental to system operation. In the examples of network failures described below, the effects of strong ties are shown to create the environment for a massive network failure. The failure could encompass a single facility or a major portion of the country, depending on the extent of the network.

While all of the examples show the possible scope of a network failure due to strong ties, there are underlying clues to the benefits of a network. In areas where there were weak ties between sections of the network, one can see that there were many noted instances in which the ties were broken and the weak-tied section of the network remained in operation. 

Also, note that there were no examples of a network composed of many sections with strong ties between them that remained up and running because of those strong ties. Widespread blackouts are always newsworthy and are carefully analyzed and dissected to determine the root cause, but there are virtually no records of networks that stayed up and fully operational after a system disturbance.

Solutions 

Control and monitoring: Before decisions are made to change the state of a network, the system status and the various parameters used to decide what changes should be made must be verified. For high-reliability networks, two data collection systems are installed so that their readings can be compared to make sure that the network is functioning properly and the data are correct. For very critical networks where there is automatic switching or other changes in network state, three systems are used, and the best two out of three data sets are used to make the correct decision. If a human operator is assessing the network condition, two sets of data are generally adequate, since the operator can, from prior experience, make an educated decision about which data set more accurately reflects the actual network conditions.
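The sketch below illustrates the best-two-of-three idea on three redundant telemetry channels: pick the pair of readings that agree most closely, and refuse to act automatically if no pair agrees. The 2% agreement tolerance and the sample voltages are assumed values for illustration only.

```python
# Minimal sketch of two-out-of-three voting on redundant readings.
# The 2% agreement tolerance is an assumed figure for illustration.
from itertools import combinations

def vote_2oo3(readings, tolerance=0.02):
    """Return the average of the two readings that agree most closely,
    or None if no pair agrees within the tolerance (treat as bad data)."""
    a, b = min(combinations(readings, 2), key=lambda p: abs(p[0] - p[1]))
    if abs(a - b) <= tolerance * max(abs(a), abs(b)):
        return (a + b) / 2.0
    return None  # no two channels agree; do not act automatically

# Three telemetry channels report a bus voltage in kV; one has drifted.
print(vote_2oo3([344.8, 345.1, 312.0]))  # -> 344.95
print(vote_2oo3([344.8, 361.0, 312.0]))  # -> None: flag for operator review
```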

Strong or weak ties: In SPS network design there must be a balance between strong and weak ties between sections. The network failure examples have a good mix of failures caused by strong and weak ties. For instance, the 1965 blackout was initiated by an overcurrent relay tripping out a transmission line well below its rated capacity—a weak tie. However, the failure cascade was the result of a series of strong ties attempting to maintain the network voltage and frequency. So does one design a network with all strong ties, so that the network will always try to keep itself operational? Or does one design a network with all weak ties, so that every small disturbance will segment the network into many sections, with each attempting to maintain its own operation while letting the adjacent sections fail? 

Network stability (or instability) is the subject of many long, involved analyses that are well outside the scope of this article, but they contain some nuggets of truth that we can extract. Networks are stable as long as the various elements are balanced and become unstable when the elements become unbalanced. This seems obvious, but in every example the cascading failures were initiated by a large change in the network load or the network generating capacity. Suppose for a moment that we could recreate one of the aforementioned blackouts, say the 2003 Northeast Blackout. The control and monitoring problems have already been addressed, so we can start with the first large step load change (the generating station shutdown). If this generating station had been shut down slowly, over a period of time instead of all at once, there would have been less impact on the network and the initial step load disturbance would not have occurred. For example, if the generating station had four 1000 MW generators, the operators could have shut down one generator at a time, waited until the network was stable after assuming that load, and then shut down another generator. Another alternative would have been to notify the network controller so that he could bring peaking units online to add to the spinning reserves before the generating station was dropped offline.
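A minimal sketch of that staged-shutdown idea follows, using the four assumed 1000 MW units from the example above. The stability check is a placeholder only; a real dispatcher would be watching frequency, voltage, and tie-line flows reported by the monitoring system before each step.

```python
# Rough sketch of the staged-shutdown idea in the text: take four assumed
# 1000 MW units offline one at a time, confirming network stability between
# steps, instead of dropping all 4000 MW in a single step.

UNIT_MW = 1000.0

def is_network_stable(step_change_mw: float) -> bool:
    # Placeholder: in practice this would check frequency, voltage, and
    # tie-line flows against limits from the monitoring system.
    return step_change_mw <= 1200.0     # assumed single-step tolerance

def staged_shutdown(units_online: int = 4) -> None:
    for step in range(1, units_online + 1):
        print(f"Step {step}: trip one unit ({UNIT_MW:.0f} MW step change).")
        if not is_network_stable(UNIT_MW):
            print("Network has not restabilized; hold the remaining units.")
            return
        # Wait here for the network (and any peaking units brought online)
        # to assume the load before tripping the next unit.

staged_shutdown()
```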

We could second-guess nearly every event that occurred in the 2003 blackout, or any other similar occurrence, and still not solve the basic problem. The key to the stability of any network is to avoid network disturbances. When they do occur, operating personnel must know in advance what actions to take in order to restabilize the network. In particular, areas in which misunderstandings can occur should be clarified to prevent worsening the disturbance. For example, operating personnel reduced voltage to comply with a request to reduce load, when in reality the request meant that the operators should disconnect loads. While this was a minor misunderstanding, it had the opposite effect from what was intended and worsened the network disturbance rather than mitigating it.

Examples of network failures 

Northeastern United States, 1965: On Nov. 9, power was disrupted to parts of Ontario, Connecticut, Massachusetts, New Hampshire, Rhode Island, Vermont, New Jersey, and New York. More than 30 million people were without electricity for up to 12 hours, all due to an error in setting up a protective relay in the network. An interconnecting transmission line between the Niagara Falls (N.Y.) hydro generating station and Ontario tripped offline while it was operating well below its rated capacity. The loads being served by this line were transferred to another transmission line, causing it to overload and trip offline. This created a major power surge due to excess capacity at the Sir Adam Beck generating station, which reversed the direction of power flow on other transmission lines crossing into New York. While the Robert Moses plant (the Niagara Falls generating station) isolated itself from the surges, the network disruptions cascaded throughout New York and the other Northeast states, with transmission lines overloading and power generating stations tripping offline to protect themselves from damage due to the power surges. During this event, the network frequency dropped to 56 Hz and, 1 minute before the blackout, to 51 Hz.

New York City, 1977: On July 13 and 14, major portions of New York City were without power during an outage that was initiated by a lightning strike on a substation in Westchester County. The network failed due to a faulty breaker that did not reclose after the strike. A subsequent strike caused two other breakers to trip and only one of them reclosed. Due to control network problems, two major transmission lines were overloaded and local peaking generators failed to start, exacerbating the system instability. Within 1 hour of the initial lightning strike, all of the local generating capacity had failed and all inter-tie connections to areas outside of the local service area had been severed. Less than 2 hours after this failure, power restoration procedures had begun, but procedural problems within the network slowed the restoration efforts for more than 24 hours.

Northeastern United States Blackout, 2003: Some 55 million people experienced a major power outage lasting up to 16 hours, all due to a failure in the network control and data collection system. The sequence of events:

  • The beginning of one of the largest power outages in U.S. history was a computer software bug in an energy management system that caused incorrect telemetry data to be reported to a power flow monitoring system. At 12:15 p.m., this problem was discovered and corrected by a system operator, who then failed to restart the power flow monitoring system.
  • Following this minor monitoring failure, a generating station was taken offline by its operating personnel due to extensive maintenance problems.
  • A 345 kV transmission line in northeast Ohio made contact with a tree and tripped out of service.
  • The alarm system at the local utility control room failed and was not repaired.
  • One transmission line after another tripped offline as they were sequentially overloaded by the failing network. More than 20 major transmission lines ranging from 138 to 345 kV were taken offline due to undervoltage and overcurrent. At this time, the shedding of 1.5 GW in the Cleveland region could have averted subsequent failures. However, the network operational data was inaccurate, unavailable, or not acted upon by the network control personnel in a timely fashion (4:05:57 p.m.).
  • Between 4:06 and 4:10 p.m., multiple transmission lines experienced undervoltage and overcurrent conditions and were tripped offline. Seconds later, multiple power generating stations along the East Coast were overloaded and tripped offline to protect themselves. At this point, the blackout was in full swing.
  • At 4:10:37, the Michigan power grids isolated themselves.
  • At 4:10:38, the Cleveland power grid isolated itself from Pennsylvania.
  • At 4:10:39, 3.7 GW of power flowed west along the Lake Erie shoreline toward southern Michigan and northern Ohio. A surge that was 10 times the power flow only 30 seconds earlier caused a major voltage drop across the network.
  • At 4:10:40, the power flow flipped to a surge of 2 GW eastward, a net of 5.7 GW, and then reversed again to the west, all in the span of 0.5 seconds.
  • Within 3 seconds of this event, many of the international connections failed. This resulted in one of the Ontario power stations tripping offline due to the unstable network conditions.
  • By 4:12:58, multiple areas on both sides of the United States-Canada border had disconnected from the grid, and New Jersey separated itself from the New York and Philadelphia power grids. This caused a cascade of generating station failures as network conditions continued to deteriorate, both in New Jersey and westward.
  • At 4:13 p.m., 256 power stations were offline with 85% of them failing after the grid separations occurred. Most of these were caused by automatic protective controls in the power network or the individual power station. 

Southwestern United States Blackout, 2011: On Sept. 8, a procedural error caused a widespread power outage covering southern California, western Arizona, and northwestern Mexico. While electrical utilities normally use network monitoring and computer modeling to determine when a single-point failure will cause a problem, this was not done prior to a capacitor bank switching event. Because network control personnel did not know about the potential problems this failure could cause, a cascade lasting 11 minutes tripped off transmission lines, generating stations, and transformers. More than 7 million people were left without electrical power for up to 14 hours.

India Blackout, 2012: Some 620 million people were without electrical power between July 29 and August 1. The network failure began with a 400 kV transmission line tripping out due to an overload. Rather than shedding load to protect this critical circuit, power stations began tripping offline. This created a cascade effect, shutting down power to more than 300 million people for about 15 hours. On July 31, the network failed again due to a control relay malfunction near Agra. This very small event caused a complete shutdown of 38% of India's generation capacity, with impacts in 22 of the 28 Indian states. This failure was the largest power outage ever experienced, and it would have been even larger had the five largest consumers of electricity in India not had their own off-the-grid power generators. Overall, there was 35 GW of private generation capacity, with plans to implement an additional 33 GW, all off the grid. None of the areas recorded as unaffected were connected to the power network, which is why they remained in service.

While the aforementioned examples were all very large power networks, the principles of operation and failure work the same for small networks within a single building.

Definitions 

Spinning reserve: The amount of additional electrical capacity that is available from a generating system without the station personnel starting another generator.
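As a simple illustration (with made-up unit ratings and loadings), spinning reserve can be computed as the unused capability of the generators that are already running:

```python
# Illustrative calculation only: spinning reserve as the unused capability of
# generators already online. The unit ratings and outputs are assumed values.
online_units = [
    {"rating_mw": 500.0, "output_mw": 430.0},
    {"rating_mw": 300.0, "output_mw": 290.0},
    {"rating_mw": 200.0, "output_mw": 120.0},
]

spinning_reserve_mw = sum(u["rating_mw"] - u["output_mw"] for u in online_units)
print(f"Spinning reserve: {spinning_reserve_mw:.0f} MW")   # -> 160 MW
```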

Strong ties: Strong ties between sections of a network are those that have a tendency to try to keep the adjacent, disturbed network section operational.   

Weak ties: Weak ties between sections of a network are those that have a tendency to immediately break the connection to a disturbed network section whenever there is the smallest disruption in that section. 


Kenneth L. Lovorn is president of Lovorn Engineering Assocs. He is a member of the Consulting-Specifying Engineer editorial advisory board.