Green Power

By J. Michael Pearson, P.E., Senior Electrical Engineer, DLR Group, Phoenix June 1, 2005

In this era of green architecture, there is one sustainability tool at our disposal that has been most underutilized—the technology of power-factor correction. In fact, the U.S. Green Building Council would do well to include power-factor correction as one of the credits in its Leadership in Energy and Environmental Design certification program.

We engineers know power factor (PF) mainly as a mathematical expression of the efficiency with which electrical energy is used. More specifically, it is the ratio of “real” to “apparent” power, or even more precisely, the cosine of the angle between these two electrical vectors. And while this famous power triangle is among the first lessons taught in engineering school, its significance too often becomes overshadowed when one gets down to the business of designing building systems.
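The power-triangle relationships reduce to a few lines of arithmetic. The figures below are hypothetical, chosen only to illustrate the geometry:

```python
import math

real_kw = 80.0        # real (active) power delivered to the load, kW (assumed)
apparent_kva = 100.0  # apparent power drawn from the supply, kVA (assumed)

pf = real_kw / apparent_kva                     # power factor = real / apparent = 0.80
theta = math.acos(pf)                           # angle of the power triangle
reactive_kvar = apparent_kva * math.sin(theta)  # reactive leg of the triangle, about 60 kVAr
```

The same triangle can be read in any direction: given any two of the three quantities, the third follows from trigonometry.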

There are several reasons for this, and they have, largely, to do with economic realities. First and foremost, there is the fact that poor PF is easy to ignore. While power companies have long paid lip service to the drag poor PF exerts on the nation’s power grid, users have had little or no financial incentive to improve. On the supply side, downside effects are masked by the money being made. On the demand side, operating budgets routinely account for standard electric rates, a practice that covers up the system’s inefficiencies.

Simply put—and this is a characteristic common to nearly all forms of inefficiency—there currently exists little financial incentive to improve. Even considering the fact that utility company regulations almost universally provide for the assessment of penalties for poor PF, such regulations are seldom enforced. As a result, like most speed limits, the penalties are generally ignored.

Economic Impact

To appreciate the impact PF has on the economy, it may be instructive to consider a simple hypothetical case. Say the PF for an average building is 80%. What this means is that 80% of the energy used actually goes to lighting, cooling and running the owner’s equipment—and that the 20% balance is wasted. Also consider that, according to the U.S. Dept. of Energy, the total amount of energy consumed by the United States in 2002 was almost 99 quadrillion BTUs. This would mean that almost 20 quads were wasted that year. Translate that figure into tons of coal and barrels of oil wasted—not to mention the cost of tons of carbon dioxide and hydrocarbons being released into the atmosphere—and one can see why power-factor correction is vital.
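That back-of-the-envelope figure, under the article’s simplifying assumption that the entire (1 − PF) fraction is wasted, can be checked in two lines. Strictly, reactive power circulates between source and load rather than being dissipated, so treat the result as an illustration rather than an exact loss figure:

```python
total_quads = 99.0  # approximate U.S. energy consumption, 2002, in quadrillion BTUs
avg_pf = 0.80       # the hypothetical average building power factor from the text

# waste under the simplifying assumption that (1 - PF) of energy does no useful work
wasted_quads = total_quads * (1 - avg_pf)  # just under 20 quads
```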

But another reality, one that electrical engineers are compelled to take more to heart, is the need for excellent power quality. Even when aware of less-than-optimal PF, designers have learned to avoid the voltage transients created by the conventional capacitor banks used to solve the PF problem. The potential improvements to energy efficiency are simply not worth the drawbacks to sensitive electronic systems and equipment.

Most conscientious engineers will immediately shout “power quality” when someone proposes PF correction. And since the past is the best predictor of the future, they are amply justified. In the digital age, the capacitor-induced voltage transient has been one of the strongest arguments against power-factor correction. Because the products commercially available today involve relatively large, stepped capacitor banks—the operation of which can wreak havoc on electronics—avoidance has rightly been an almost universal reaction. Furthermore, when we acknowledge that direct financial incentives are virtually nonexistent, we can see why the risk has almost invariably been judged to overwhelm the reward.

However, ideas introduced in the last ten years hold real promise for solving even the largest of these issues. The technology couples variable-impedance circuits with sophisticated monitoring and control equipment in filters that, applied on a small scale, have virtually eliminated damaging voltage transients in electronic equipment. Along this line of reasoning, formal LEED recognition of power-factor correction could serve to encourage further technical advances.

Enter LEED

Enter the USGBC’s Leadership in Energy and Environmental Design program. LEED, as many engineers know, is a phenomenon that is gathering momentum at an impressive rate, inducing more architects and engineers to become accredited, and enticing more owners to demand that their buildings be LEED-certified. LEED ratings have become a badge of honor for modern buildings, and as such, are much sought-after by owners wanting to be good citizens.

According to Jeff Stanton, AIA, principal, SmithGroup, Detroit, “The purpose of LEED is really to facilitate positive results for the environment, occupant health and financial return. It will also help to promote whole building integrated design.”

Which brings us back to PF. Supporting the efficient use of resources is exactly what power-factor correction is all about. Consequently, it is in the interest of the green building movement to acknowledge the positive impact that improving power factor can have in this area.

Even with these possibilities in mind, the case still needs to be made that the indirect savings associated with successfully reducing the “drag” of poor PF are worthy of serious consideration. U.S. DOE studies suggest that buildings make up 65% of the load on the U.S. power grid. If we say that the average power factor for these buildings is 85%—an optimistic figure in my estimation—it’s obvious that the potential amount of energy to be saved is significant. Look back to the hypothetical above: each percentage point of power-factor correction could reduce annual U.S. energy consumption by hundreds of trillions of BTUs. And since so much of that energy comes from coal- and oil-burning power plants, this equates to millions of barrels of oil and millions of tons of coal per year.

In terms of carbon dioxide and particulate emissions—well, one would have to plant literally millions of new trees every year to have the same effect on the atmosphere as power-factor correction.

This diversity of potential savings has historically sparked little enthusiasm, however, because the immediate beneficiaries would be the power utilities—not the people footing the bill. But just as recycling a pop can or a piece of paper, or reusing a brick, is a raindrop in the ocean, the cumulative effect is worth our attention now. After all, we know how impressive the ocean can be.

And as recent massive outages and brownouts on both sides of the country have demonstrated, the North American power grid can use all the help it can get.

Perfecting the power factor of the building-related loads would overnight have the same impact as bringing countless power plants on-line.


Perhaps the reasons given are compelling enough to encourage the U.S. Green Building Council to recognize power-factor correction as a distinct credit in the category of innovation and design process, with status equal to erosion control, water-efficient landscaping, recycling and other LEED categories. Power-factor correction is one potential tool in the sustainability toolkit that could have a big impact with a modest effort on the part of design professionals.

Power Factor Primer

Simply put, power factor is a measure of how efficiently electrical power is consumed. The ideal power factor is unity—or one. Anything less than one, or 100% efficiency, means that extra power is required to achieve the actual task at hand.

This extra energy is known as reactive power, which unfortunately is necessary to provide a magnetizing effect required by motors and other inductive loads to perform their desired functions.

However, reactive power can also be interpreted as watt-less, magnetizing or wasted power, and an extra burden on the electricity supply.

Power-factor correction is the term given to a technology that has been used since the turn of the 20th century to restore power factor to as close to unity as is economically possible.

This is normally achieved by the addition of capacitors to the electrical network, which compensate for the reactive power demand of the inductive load, and thus reduce the burden on the supply.
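Sizing such a capacitor bank follows from the power triangle: the kVAr required is the real power times the difference of the tangents of the before and after phase angles. A minimal sketch, with an assumed 100-kW load corrected from 0.80 to 0.95:

```python
import math

def capacitor_kvar(real_kw, pf_initial, pf_target):
    """Standard sizing formula: Qc = P * (tan(theta1) - tan(theta2))."""
    theta1 = math.acos(pf_initial)  # phase angle before correction
    theta2 = math.acos(pf_target)   # phase angle after correction
    return real_kw * (math.tan(theta1) - math.tan(theta2))

# Hypothetical case: correct a 100-kW load from 0.80 to 0.95 power factor
qc = capacitor_kvar(100.0, 0.80, 0.95)  # roughly 42 kVAr of capacitance
```

In practice the bank is built from standard capacitor sizes, so the corrected PF lands near, not exactly at, the target.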

The main benefits of power factor correction include:

Reduced power consumption.

Reduced electricity bills.

Improved electrical energy efficiency.

Extra kVA availability from the existing supply.

Reduced transformer and distribution equipment losses.

Reduced voltage drop in long cables.

What Causes Poor Power Factor?

Most electrical equipment, such as motors, compressors, welding sets and even switch-start fluorescent lighting, creates what is known as an inductive load on the supply.

An inductive load requires a magnetic field to operate, and in creating such a magnetic field causes the current to “lag” the voltage—that is, the current is not in phase with the voltage.

Power-factor correction is the process of compensating for the “lagging” current by applying a “leading” current in the form of capacitors.

This way, power factor is adjusted closer to unity and energy waste is minimized.

Power in a three-phase AC supply can be described by three quantities.

Active power — the real usable power available, measured in kW.

Reactive power — the part of the supply drawn by inductive loads to sustain their magnetic fields, measured in kVAr.

Apparent power — the resultant of the other two components, measured in kVA.

Power factor, then, is expressed as the ratio of active power (kW) to apparent power (kVA).
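The three quantities form a right triangle, so any one follows from the other two. A quick sketch with assumed values:

```python
import math

active_kw = 60.0     # active power, kW (hypothetical)
reactive_kvar = 45.0  # reactive power, kVAr (hypothetical)

# apparent power is the vector sum of the active and reactive components
apparent_kva = math.hypot(active_kw, reactive_kvar)  # sqrt(60^2 + 45^2) = 75 kVA

pf = active_kw / apparent_kva  # 60 / 75 = 0.80
```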

A good analogy of power factor is to imagine a horse pulling a barge along a canal.

The useful work is the force acting along the line of the canal, which moves the barge by overcoming the resistance of the water in which it floats, analogous to the useful power (kW) in an electrical circuit.

The horse can’t walk on water and must move along the towpath, so the towrope is at an angle to the direction along which the barge must move.

The towrope is always trying to pull the horse sideways into the water and at the same time, trying to pull the barge into the canal bank. The horse must walk along at a slight angle and the bargee uses the rudder to keep the craft in the center of the canal.

The sideways forces act at right angles to the direction the barge intends to travel and are “useless,” analogous to reactive current in the electrical circuit (kVAr). The towrope represents the resultant of the useful pull and the useless side pull, and its length represents the magnitude of that force, or kVA in the electrical version.

The actual value of the power factor is the ratio of the useful force along the canal to the total force in the towrope. This ratio is the cosine of the angle B in the triangle of forces, which is why power factor is sometimes also referred to as “cosine B.”

Here is a typical example:

A 100-kW motor operates at a given power factor of 0.80. The total or apparent power required by the motor is actually 125 kVA (100 kW/0.80).

By improving the power factor to 0.95, the total power drawn from the supply will be reduced to approximately 105 kVA (100 kW/0.95).

The reduction is about 20 kVA, equivalent to a 16% drop in the apparent power, and hence the current, drawn from the supply.
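The example can be reproduced in a few lines; the unrounded figures are 105.3 kVA and a 15.8% reduction:

```python
real_kw = 100.0  # the motor's real power from the example

kva_before = real_kw / 0.80  # apparent power at PF 0.80: 125 kVA
kva_after = real_kw / 0.95   # apparent power at PF 0.95: about 105.3 kVA

reduction_kva = kva_before - kva_after            # about 19.7 kVA
percent_cut = reduction_kva / kva_before * 100.0  # about 15.8%
```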

The Hidden Cost of Power Factor

Electrical networks with poor power factor draw more power than strictly necessary, forcing electricity generators to increase output.

This extra power means extra generating cost, which is always passed on to the end user in one form or another. Most often, the end user will be charged for reactive power—and it doesn’t come cheaply. Consequently, power factor correction will usually show payback within a couple of years.
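A payback estimate is simple arithmetic once the utility’s reactive-power charge is known. Every figure below is an illustrative assumption, not a quoted rate:

```python
# Hypothetical payback sketch — all values are assumed for illustration
bank_kvar = 42.0                # installed capacitor bank size, kVAr
charge_per_kvar_month = 1.50    # assumed reactive-power charge, $/kVAr per month
installed_cost = 50.0 * bank_kvar  # assumed installed cost at $50/kVAr

annual_saving = bank_kvar * charge_per_kvar_month * 12  # $756/year avoided
payback_years = installed_cost / annual_saving          # under three years
```

With these assumed numbers the bank pays for itself in under three years, consistent with the typical payback cited above.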

Power Factor—Past and Present

In the past, it seemed as if every plant engineer had his or her own “rule of thumb” for power-factor correction. In general, conventional wisdom was to install individual capacitors on larger motors and provide a modest fixed bank at the service point that did not exceed some percentage of the supply transformer capacity. The installed capacitor, in parallel with the supply impedance—mostly inductive—created a parallel “resonance” point, or high impedance to a specific frequency. These methods worked fine in a world full of linear loads that used current only at 60 cycles.
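The frequency of that parallel resonance can be estimated with a standard rule of thumb: the resonant harmonic order is roughly the square root of the short-circuit kVA at the bus divided by the capacitor bank kVAr. The transformer and bank values below are hypothetical:

```python
import math

# Hypothetical service: 1,500-kVA transformer, 5.7% impedance, 300-kVAr fixed bank
transformer_kva = 1500.0
transformer_z_pu = 0.057
cap_bank_kvar = 300.0

short_circuit_kva = transformer_kva / transformer_z_pu      # about 26,300 kVA
h_resonance = math.sqrt(short_circuit_kva / cap_bank_kvar)  # lands near the 9th harmonic
```

A result near an odd harmonic that loads actually produce (5th, 7th, 9th) is exactly the “land mine” condition described next.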

What was created, however, was a plant with harmonic “land mines” that moved around the frequency spectrum with the starting and stopping of the capacitor-corrected motors. If exactly the right number of capacitors was energized when a voltage transient occurred—one that happened to contain energy at the resonant frequency—the transient would step on the land mine and be amplified into a much larger, possibly damaging transient.

These resonance points also interact with today’s switch-mode power supplies, commonly found in lighting ballasts, drives and computers. Such devices draw sustained harmonic currents that, when crossed by these roving resonance points, cause a sudden increase in harmonic distortion that can disrupt the operation of sensitive equipment, as well as prematurely age electric system components.