The Smith Predictor: A Process Engineer's Crystal Ball

Arguably the trickiest problem to overcome with a feedback controller is process deadtime -- the delay between the application of a control effort and its first effect on the process variable. During that interval, the process does not respond to the controller's activity at all, and any attempt to manipulate the process variable before the deadtime has elapsed inevitably fails.

05/01/1996


PROCESS CONTROL TUTORIAL


This tutorial is the second in a series of four. Part 1, in February, examined PID control. Part 3, in September, will examine sampled vs. continuous control, and Part 4, in December, will look at multivariable control.


A deadtime example


Fig. 1: A simplified rolling mill. The process deadtime is D = S/V seconds. Source: Control Engineering

Deadtime occurs in many different control applications, generally as a result of material being transported from the site of the actuator to another location where the sensor takes its reading. Not until the material has reached the sensor can any changes caused by the actuator be detected. Consider, for example, the rolling mill shown in Figure 1, which produces a continuous sheet of some material at a rate of V inches per second. A feedback controller uses a piston to modify the gap between a pair of reducing rollers that squeezes the material into the desired thickness. The deadtime in this process is caused by the separation S between the rollers and the thickness gauge.

The controller in this example can compare the current thickness of the sheet (the process variable PV) with the desired thickness (the setpoint SP) and generate an output (CO), but it must wait at least D = S/V seconds for the thickness of the sheet to change. If it expects a result any sooner, it will determine that its last control effort had no effect and will continue to apply ever larger corrections to the rollers until the sensor begins to see the thickness changing in the desired direction. By that time, however, it will be too late. The controller will have already overcompensated for the original thickness error, perhaps to the point of causing an even larger error in the opposite direction.
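The transport deadtime in the rolling-mill example can be sketched as a simple FIFO buffer: material leaving the rollers now is not seen by the gauge until D = S/V seconds later. The numbers below (sensor distance, line speed, thicknesses) are purely illustrative assumptions, not values from the article.

```python
from collections import deque

# Hypothetical rolling-mill numbers: gauge S = 30 inches downstream,
# sheet moving at V = 10 in/s, so the deadtime is D = S/V = 3 seconds.
S, V = 30.0, 10.0
D = S / V                      # deadtime in seconds

dt = 0.5                       # simulation step, seconds
delay_steps = int(D / dt)      # number of samples "in transit"

# Transport deadtime as a FIFO: the sheet already between the rollers
# and the gauge is assumed to be 0.100 in. thick.
pipeline = deque([0.100] * delay_steps)

def gauge_reading(thickness_at_rollers):
    """Push the thickness just produced, pop what reaches the sensor."""
    pipeline.append(thickness_at_rollers)
    return pipeline.popleft()

# A step change at the rollers (0.100 -> 0.095 in.) shows up at the
# gauge only after delay_steps samples, i.e. D seconds later.
readings = [gauge_reading(0.095) for _ in range(delay_steps + 1)]
```

Until the old material clears the gap, the controller sees no effect at all from its new roller setting, which is exactly the window in which an impatient controller overcompensates.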

How badly the controller overcompensates depends on how aggressively it is tuned and on the difference between the actual and the assumed deadtime. That is, if the controller assumes that the deadtime is much shorter than is actually the case, it will spend a much longer time increasing its output before successfully effecting a change in the process variable. If the controller is tuned to be particularly aggressive, the rate at which it increases its output during that interval will be especially high.

Overcoming deadtime
Curing overcompensation means addressing one or both symptoms. The easiest solution is to 'detune' the controller to slow its response rate. A detuned controller won't have time to overcompensate unless the deadtime is very long.

The integrator in a PID controller is particularly sensitive to deadtime. By design, its function is to continue ramping up the controller's output so long as there is an error between the setpoint and the process variable. In the presence of deadtime, the integrator works overtime. Ziegler and Nichols determined that the best way to detune a PID controller to handle a deadtime of D seconds is to reduce the integral tuning constant by a factor of D² and the proportional tuning constant by a factor of D. The derivative term is unaffected by deadtime, since derivative action begins only after the process variable starts to move.
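The detuning rule above reduces to a one-line adjustment of the tuning constants. A minimal sketch, with hypothetical baseline gains (the article gives no numbers):

```python
def detune_for_deadtime(Kp, Ki, Kd, D):
    """Detune PID gains for a deadtime of D seconds, per the rule quoted
    above: proportional gain down by a factor of D, integral gain down
    by a factor of D**2; derivative action is left untouched."""
    return Kp / D, Ki / D**2, Kd

# Hypothetical baseline tuning, detuned for a 3-second deadtime:
Kp, Ki, Kd = detune_for_deadtime(Kp=2.0, Ki=0.8, Kd=0.1, D=3.0)
```

Note how sharply the integral gain falls: a 3-second deadtime cuts it by a factor of nine, which is why detuned loops respond so sluggishly.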

Detuning can restore stability to a control loop that suffers from chronic overcompensation, but it would not even be necessary if the controller could first be made aware of the deadtime and then endowed with the patience to wait it out. That is essentially what happens in the famous Smith Predictor control strategy proposed by O.J.M. Smith of the University of California at Berkeley in 1957.

The Smith Predictor


Fig. 2: The mathematical model of a Smith Predictor is usually implemented digitally, analog transit delays being difficult to construct. Source: Control Engineering



Fig. 3: The Smith Predictor effectively removes the deadtime from the loop. Source: Control Engineering

Smith's strategy is shown in Figure 2. It consists of an ordinary feedback loop plus an inner loop that introduces two extra terms directly into the feedback path. The first term is an estimate of what the process variable would look like in the absence of any disturbances. It is generated by running the controller output through a process model that intentionally ignores the effects of load disturbances. If the model is otherwise accurate in representing the behavior of the process, its output will be a disturbance-free version of the actual process variable.

The mathematical model used to generate the disturbance-free process variable has two elements connected in series. The first represents all of the process behavior not attributable to deadtime. The second represents nothing but the deadtime. The deadtime-free element is generally implemented as an ordinary differential or difference equation that includes estimates of all the process gains and time constants. The second element is simply a time delay. The signal that goes into it comes out delayed, but otherwise unchanged.

The second term that Smith's strategy introduces into the feedback path is an estimate of what the process variable would look like in the absence of both disturbances and deadtime. It is generated by running the controller output through the first element of the process model (the gains and time constants), but not through the time delay element. It thus predicts what the disturbance-free process variable will be once the deadtime has elapsed (hence the expression Smith Predictor).

Subtracting the disturbance-free process variable from the actual process variable yields an estimate of the disturbances. By adding this difference to the predicted process variable, Smith created a feedback variable that includes the disturbances, but not the deadtime.

So what?
The purpose of all these mathematical manipulations is best illustrated by Figure 3. It shows the Smith Predictor of Figure 2 with the blocks rearranged. It also shows an estimate of the process variable (with both disturbances and deadtime) generated by adding the estimated disturbances back into the disturbance-free process variable. The result is a feedback control system with the deadtime outside of the loop.

The Smith Predictor essentially works to control the modified feedback variable (the predicted process variable with disturbances included) rather than the actual process variable. If it succeeds, and if the process model does indeed match the process, then the controller will drive the actual process variable toward the setpoint as well, whether the setpoint changes or a load disturbs the process.

Not as easy as it looks
Unfortunately, those are big 'ifs.' The controller's job is certainly easier once the deadtime is removed from the loop, but it is not always a simple matter to generate the process models needed to make the strategy work. Even a slight mismatch between the process and the model can cause the controller to generate an output that successfully manipulates the modified feedback variable but drives the actual process variable off into oblivion. Several fixes have been proposed to improve on the basic Smith Predictor, but deadtime remains a particularly difficult control problem.




