Where control systems have been, and where they are going

Here We Go Again.

By Dave Harrold, CONTROL ENGINEERING January 1, 2000

The author, at a fieldbus seminar hosted by Control Dynamics (Midlothian, Va.) in May 1999, presented the following material.

In 1899, the U.S. Patent Office recommended to Congress that the office begin phasing itself out of business. Officials believed everything worth patenting had already been invented. We can applaud the one and only government office believed to want to ‘pull the plug’ on itself; however, that recommendation demonstrates the human tendency to focus on today.

Developed and emerging technologies in networking, operating systems, web technologies, and connectivity, many from other industries, might find their way into control and automation systems in the next three to five years. It is safe to say all of these technologies were invented since 1899. Thank goodness the U.S. Patent Office was mistaken.

Punctuated equilibrium theory An influential theory in evolutionary biology holds that evolution does not occur gradually as small steps over long periods of time, but rather as long periods of stability interspersed with rapid change. These periods of rapid change are often the result of severe environmental changes. This scientific theory is called punctuated equilibrium. It seems the punctuated equilibrium theory also applies to the evolution of business practices and technologies used in process control and automation.

The Good Old Days Customers produced products with life expectancies of 5, 10, or even 15 years. Frequently, suppliers would construct plants to produce a single product or a small family of similar products just to meet a single customer’s needs. Brand loyalty was strong, and a lot of business was conducted on the strength of personal relationships.

Process investments were generally capacity driven, and the cost was what it was. Demanding that suppliers reduce prices, or threatening to change suppliers, was rare.

Plant and operating unit managers were kings of domains defined by the four walls or the fence line. Managers focused on getting product out the door. Theirs was something of an isolated world, where no one called unless it was to buy or sell something.

Business planning was conducted on a quarterly or annual basis and consisted of ensuring marketing projections could and would be met.

Early process control was manual, supplemented by simple pneumatic devices using things called flapper nozzles, restrictor valves, and feedback bellows. These devices were distributed throughout the processing facility in what today would be called a ‘field-centric architecture.’ There was no architecture to it; things were placed where they made sense and where there was physical space.

Grizzled operators roamed the plant floor using all their senses to keep the process ‘in control.’ When a plant manager needed more production to meet a special customer request, he sought out the senior operator on the operator’s turf and quietly explained the situation, knowing the request would be met with grumbles and results.

Evolution process begins The mathematics behind process control began to produce new control strategies, some quite complex even by today’s standards. This new process control sophistication required sophisticated controllers. A few electric instruments using vacuum tubes, slide-wires, and wire-wound resistors emerged, but many processes contained hazardous and/or explosive materials, and valve packing leaked, so safely placing an electric instrument in these environments required bullet-proof enclosures, easily identified by the quantity of bolts holding the cover in place.

Pneumatic instruments were more suitable for deployment in hazardous processes, and soon entire families of pneumatic instruments became available. With functions like add, subtract, multiply, and divide, as well as proportional control, complex process control strategies could be assembled.

Initially these devices were deployed in the field, but 3-15 psi pneumatic signals propagate at only about the speed of sound, so long tubing runs added undesirable lags.

To resolve the connection complexity, panel-based pneumatic instruments were developed. Looking back, we see this as an example of the punctuated equilibrium theory: a long period of field-based process control followed by several rapid step changes leading to centralized control rooms.

Moving into a control room environment helped solve controller-to-controller connection issues, but tubing-run-induced lags remained a problem. When lags became significant, pneumatic amplifiers were installed. Fortunately, the transistor was soon invented, and the age of semiconductors allowed pneumatic panel-board instrumentation to be replaced, but this was simply a new technology used in the same way.

In the ’70s, panel-board instrumentation gave way to programmable, microprocessor-based systems called configurable distributed systems: DCSs and PLCs. Manufacturers were very careful to avoid any implication of placing computers in process control domains, and any reference to programming might have opened the door for intrusion by those IT folks. But again, this was a new technology deployed in the same way.

More than 30 years later, another punctuated equilibrium began when fieldbus promised a ‘field centric architecture’ that would revolutionize the process industry.

It has been reported that a voice, remarkably similar to ‘Mr. Grizzled Operator,’ was heard in plants around the globe saying, ‘I told you control rooms would never work!’

Business was also evolving

The biggest deals

1955 – Sperry Corp. / Remington Rand – $208 million

1959 – General Telephone / Sylvania Electric – $265 million

1967 – McDonnell / Douglas Aircraft – $850 million

1969 – Atlantic Richfield / Sinclair Oil Corp. – $1,851 million

1979 – Exxon / Reliance Electric – $1.4 billion

1979 – Shell Oil / Belridge Oil – $3.7 billion

1988 – Philip Morris / Kraft – $13.1 billion

1989 – Bristol-Myers / Squibb – $12 billion

1998 – AT&T / TCI – $69.9 billion

1998 – Exxon / Mobil – $86.4 billion

Business experienced its own periods of punctuated equilibrium during this same period of time.

Announced mergers

1950 to 1970 – 33,117

1990 to 1996 – 21,543

1997 & 1998 – 15,609

1901 marked the beginning of the first merger mania when J.P. Morgan created U.S. Steel by buying Carnegie Steel for $480 million ($9.5 billion in today’s dollars). Within a few years, Morgan had purchased nine additional steel companies.

In 1911, the Justice Department decided John D. Rockefeller’s Standard Oil Company was a monopoly, and forced Standard Oil to break up its holdings. This was one of several successful trust-busting efforts the government conducted between 1910 and 1920.

The 1920s brought recession and eventually the Depression, quieting merger activity.

Mergers began again in 1943 when Edward Noble acquired Blue Network from RCA for eight million dollars. Blue Network became ABC in 1946.

The 1950s introduced strategic mergers between companies with complementary assets. Big bank mergers were the rage, and U.S. Steel survived efforts by Senator Estes Kefauver and the Justice Department to stop its purchase of Consolidated Steel.

In 1953, the model for future strategic mergers occurred when Merck, a fine-chemicals producer, merged with pharmaceuticals maker Sharp & Dohme.

Between 1950 and 1970 there were 33,117 announced mergers. Contrast that 20-year period with the past eight years, when 37,152 mergers were announced, 15,609 of them in 1997 and 1998 alone.

What’s happening today

Amazon.com

e-trade.com

Compaq buys DEC

Daimler-Benz buys Chrysler

Exxon merges with Mobil

Kinetic enterprises emerging

There is no reason to dwell on what’s happening today. Each of us is experiencing it. We were promised that technology would simplify our lives and provide more leisure time, but everything you read indicates we have less leisure time. How can this be?

If business practices had remained unchanged while technology advanced, we would be enjoying less stress and more leisure time; but guess what? New companies kept forming, each wanting to carve out its niche. They promised better products, at lower prices, in less time. Competitive business leaders responded, ‘Oh no you won’t. We can do everything they claim they can do, AND we also can _________,’ you fill in the blank. The race was on, and each of our lives began to accelerate. The reality is, there will always be old and new companies with new ideas and new products. The challenge facing business leaders today is to avoid being ‘Amazoned.’

No longer can business leaders rely on monitoring the competition to determine business activities because they don’t always know who the competition is.

Barnes & Noble and Waldenbooks never saw amazon.com coming. Nor did Smith Barney and Merrill Lynch see e-trade.com coming.

Would anyone have predicted Compaq would buy DEC, Daimler-Benz would buy Chrysler, or the $86.4 billion merger of Exxon and Mobil?

Let’s play a game. The category is software, and the answer is Ireland. The question: ‘What country is the second largest producer of software in the world and has experienced 7.4% annual growth over the last five years?’ Who would have predicted Ireland would become a ‘major player’ in software development?

Today’s businesses must be agile in everything they do. From marketing, to planning, to financing, to production, the only way to remain a viable company is to be ready, willing, and able to change overnight.

Steve Larsen, vp with SAP, says companies must become ‘kinetic enterprises.’ Kinetic enterprises, Mr. Larsen says, meet two simple but outrageous goals: first, they are capable of profitably serving a single customer; second, they do it in near-zero time.

That means the entire organization must be event driven, not process or function focused. Customers and stockholders will no longer accept hearing, ‘Our infrastructure won’t support your specific requirements.’

For example, Xerox has created intelligent copiers and duplicators by embedding web technology. A machine can be polled from a remote location by a browser, or the machine can initiate a call to Xerox’s service center. Xerox’s commitment to its customers was any part, anywhere in the world, with a repairperson, by 8 a.m. local time, next business day. Xerox established strategically placed parts centers in key world areas, but it needed air support. Who do you think provides Xerox’s air support to meet the 8 a.m. customer commitment? Airborne Express. Why? Airborne did not have the rigid infrastructure of Federal Express: Airborne doesn’t fly everything to a hub and redistribute it, and it contracts for space on every commercial airline in the world. Airborne won because it was able to focus on event opportunities, not on functions and processes.

It appears Mr. Larsen is correct in defining a kinetic enterprise. Companies must become event driven because you won’t always know your competitors, you can’t predict different customers’ requirements, and you can’t afford to ignore a single irate customer complaint.

For example, Phil Van der Vossen of New York became disgruntled with his Internet retailer, buy.com, so he launched his own web page bashing the company. To his surprise, buy.com contacted him and offered to fly him to California to meet with corporate executives to discuss how to improve. The message: one disgruntled customer can reach an audience of millions. Such events can be devastating.

Steve Forbes, ceo of Forbes Publishing and presidential candidate, said in his keynote address at IMS Expo99 in Orlando, ‘The technology continuum began with the invention of the transistor, and continued with the microchip. It continues today with the Internet leading that growth spurt.’

Mr. Forbes is correct about the Internet’s influence on our lives at home and at work. We are the most informed generation ever. How business responds to the speed at which information can be disseminated is key to success.

To borrow from Microsoft’s tagline, ‘What are your customers saying about your company today, and what forum are they using?’

Batch architecture Separate reports prepared by Donna Takeda, vp with Merrill Lynch, and Asish Ghosh, with ARC Advisory Group, agree that process control investments will grow about 3-5% annually over the next five years, but they also indicate batch process investments will grow nearer to 5-7% annually.

Increasing interest and investment in batch processing is driven by the need for more agility (the kinetic enterprise) in meeting customer requirements, competitive pressures, marketing projections, and new product introductions. Batch processes, especially those assembled with multi-product, multi-stream capability, offer the greatest flexibility. Though batch processes are generally more complex from an automation and integration viewpoint, once implemented they make it easier to meet unique quality-related customer requirements. Examining continuous processes with a goal of converting them to batch could provide competitive advantage and open new product markets. But wait: if your process is continuous and your only experience with batch is pouring a beer into a frosty mug, where do you begin?

Somewhere in the world, right now, a committee is meeting to develop a standard for something. Many standards developments target a particular technology for a specific product and/or market; they have little direct meaning or use for most of us. However, there are a few standards worth spending time to learn more about.

One is ANSI/ISA S88.01, Batch Processing Models and Terminology. S88.01 breaks process control into a physical and a procedural model. The physical model assembles plant-floor objects such as pumps, valves, vessels, headers, jacket services, reactors, separation columns, centrifuges, process cells, etc., into a hierarchy. The procedural model provides a means to define how each physical object should behave during different processing scenarios. For example, the procedural model for a reactor defines the operations and the behavior of each operation, such as filling, mixing, reacting, and dumping. S88 declares every operation should include predefined states, such as idle, running, holding, aborting, and failing. In defining the actions for an operation such as mixing, actions are grouped under each state.

Familiarization with S88 provides two benefits. First, it encourages repetitive design passes, an especially important benefit when evaluating whether a continuous process can be made into a batch process. Second, S88 designs are product independent. Rather than focusing on the needs of a product or family of products, working through an S88-based design delivers a solution that takes full advantage of the available equipment.

An article on using S88 to develop a control and automation specification appeared in the April 1999 issue of Control Engineering; also visit www.controleng.com. A book, S88 Implementation Guide, written by Darrin Fleming and Velumani Pillai and published by McGraw-Hill, is available as well. The S88 standard itself is available from ISA.
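To make the procedural model concrete, here is a minimal sketch in Python of an operation whose actions are grouped under predefined states. All names and transitions are invented for illustration; S88.01 prescribes models and terminology, not code.

```python
# Minimal sketch of an S88-style operation: actions are grouped under
# predefined states (idle, running, holding, aborting, complete).
# State names, actions, and transitions are illustrative only.

class MixOperation:
    STATES = ("idle", "running", "holding", "aborting", "complete")

    def __init__(self):
        self.state = "idle"
        # Each state owns its group of actions, following the S88 idea
        # of defining an operation's behavior state by state.
        self.actions = {
            "running": [self.start_agitator, self.monitor_torque],
            "holding": [self.stop_agitator],
            "aborting": [self.stop_agitator, self.open_drain],
        }

    def transition(self, new_state):
        if new_state not in self.STATES:
            raise ValueError(f"unknown state: {new_state}")
        self.state = new_state
        for action in self.actions.get(new_state, []):
            action()

    def start_agitator(self): print("agitator on")
    def stop_agitator(self): print("agitator off")
    def monitor_torque(self): print("monitoring torque")
    def open_drain(self): print("drain open")

mix = MixOperation()
mix.transition("running")  # mixing actions execute
mix.transition("holding")  # operator hold: agitator stops
```

Because the operation is defined against equipment behavior rather than a particular recipe, the same MixOperation could serve any product that uses the vessel, which is the product-independence benefit noted above.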

SP95 is another standard that may assist process and automation engineers. SP95 is a set of models and terminology for integrating control system data with manufacturing execution systems (MES) and enterprise resource planning (ERP) systems. SP95 is based on two previous integration works: the first, developed in the ’80s at Purdue University, is titled The Purdue Computer Integrated Manufacturing (CIM) Model; the second is the Manufacturing Execution System Architecture (MESA) model. Using these models, SP95 defines the boundary between the process control domain and the information integration domain. With the domains established, SP95 focuses on defining the transactions that need to occur between the two.
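As a rough illustration of what such a boundary transaction might look like, here is a sketch in Python. The message fields are hypothetical, invented for this example rather than drawn from the SP95 drafts.

```python
# Hypothetical sketch of a control-to-business boundary transaction:
# the control domain reports what was actually produced, and the
# MES/ERP domain consumes it. Field names are illustrative only.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ProductionReport:
    batch_id: str
    material: str
    quantity: float
    units: str
    completed_at: datetime

def publish_to_mes(report: ProductionReport):
    # A real system would carry this across the domain boundary on an
    # agreed transport; printing stands in for that here.
    print(f"{report.batch_id}: {report.quantity} {report.units} "
          f"of {report.material} at {report.completed_at:%H:%M}")

publish_to_mes(ProductionReport(
    batch_id="B-1042", material="resin-A",
    quantity=950.0, units="kg", completed_at=datetime.now()))
```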

Connectivity

Component Object Model (COM)

Distributed Component Object Model (DCOM)

Distributed interNet Applications for Manufacturing (DNA for Manufacturing)

Java beans & applets

OPC (OLE for Process Control)

Sun Connect

HTTP-NG

You frequently hear about connectivity standards, some competing with one another and all promising to bring us closer to interoperable, open control and automation solutions. Among these connectivity methods are Microsoft’s Component Object Model (COM), Distributed Component Object Model (DCOM), and Distributed interNet Applications (DNA) for Manufacturing; OPC (OLE, object linking and embedding, for process control); Sun Microsystems’ Java with its related beans and applets; and Sun Microsystems’ Sun Connect.

The DNA for Manufacturing framework relies on COM as its foundation. It links islands of information in a manufacturing environment, improving information flow and bridging gaps between enterprise applications as well as supply-chain business partners. Other important DNA technologies include Visual Basic for Applications (VBA) and DCOM. DNA permits development of scalable, distributed applications on Windows platforms and extends existing data to support the Internet and a wide range of client devices. Companies already using Microsoft’s DNA technology include Aspen Technology, Camstar Systems, Cincom Systems, Compaq Computer, Ernst & Young, Honeywell, Iconics, Intellution, National Instruments, Rockwell Automation, and Sequencia.

Java beans also offer connectivity solutions and compete with Microsoft’s COM and DCOM. Java beans are self-contained objects of information. The significant difference between Java beans and COM/DCOM is content: Java beans contain all the information about an object; COM/DCOM objects contain part of the information plus a map of where to find the rest. When a Java bean reaches its destination, the bean is unwrapped, the information needed by the receiving application is extracted, and the rest of the bean is discarded. When a COM/DCOM object reaches its destination, the receiving application examines the object. If the required information is in the basic object, it is extracted and used; if not, the receiving application follows the map and retrieves the additional information.
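The contrast can be sketched in a few lines of Python. This is purely illustrative of the idea in the preceding paragraph; neither Java beans nor COM/DCOM literally work this way.

```python
# Illustrative contrast: a self-contained payload versus a payload that
# carries only part of the data plus a 'map' telling the receiver where
# to fetch the rest. A sketch of the idea, not either real technology.

self_contained = {   # 'bean-like': everything travels together
    "tag": "FIC-101",
    "value": 42.7,
    "units": "kg/h",
    "description": "feed flow controller",
}

partial_with_map = {  # 'COM/DCOM-like': partial data plus a reference
    "tag": "FIC-101",
    "value": 42.7,
    "more_info_at": "server-A/objects/FIC-101",  # followed only if needed
}

def receive(payload, fetch):
    # The receiver extracts what it needs, following the map only when
    # the needed field is not already present in the payload.
    if "units" in payload:
        return payload["units"]
    return fetch(payload["more_info_at"])["units"]

print(receive(self_contained, fetch=None))                       # kg/h
print(receive(partial_with_map, lambda ref: {"units": "kg/h"}))  # kg/h
```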

Which is best? Depends on whom you talk to. Java is popular on the Internet and in business systems because it is operating system independent, but COM/DCOM prevails in desktop applications for business and manufacturing. For example, OPC is built on the DCOM model.

OPC is the model for standards development. Within a year after launch, the first draft of the OPC specification was completed and implementation began. Within eighteen months, initial testing was completed and products began to appear. At ISA/EXPO 98 in Houston, about 150 OPC-compliant products were available; by April 1999, over 250. That’s encouraging, because OPC is the first standard introduced in our industries that has been embraced by nearly every instrumentation and control system manufacturer. OPC provides a way to replace the custom drivers used to connect different manufacturers’ products. For example, you might need one driver to connect Intellution FIX with Allen-Bradley PLCs, another to connect FIX with Siemens PLCs, a third to connect FIX with GE Fanuc PLCs, and so forth. Wonderware at one time advertised that it had over 200 developed drivers. With OPC, the PLCs are servers and Intellution FIX is a client. If, for example, you want to replace Intellution FIX with USDATA, it is not a problem: the PLCs remain OPC servers and communicate with any and all OPC-compliant clients; they don’t care which ones.
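The economics here is the classic N-by-M driver problem: without a shared interface, every client/device pairing needs its own driver; with one, each product implements the interface once and pairs freely. A minimal sketch of that design point in Python follows (class names are hypothetical; real OPC is defined as COM/DCOM interfaces, not Python classes).

```python
# Sketch of why a shared client/server interface beats pairwise drivers.
# Hypothetical names; real OPC defines COM interfaces, not Python classes.

class OpcLikeServer:
    """Any device implementing read() can serve any compliant client."""
    def read(self, tag: str) -> float:
        raise NotImplementedError

class VendorAPlc(OpcLikeServer):
    def read(self, tag): return 42.7   # stand-in for real device I/O

class VendorBPlc(OpcLikeServer):
    def read(self, tag): return 17.3

def hmi_client(server: OpcLikeServer, tag: str):
    # The client neither knows nor cares which vendor's PLC it polls.
    print(f"{tag} = {server.read(tag)}")

for plc in (VendorAPlc(), VendorBPlc()):
    hmi_client(plc, "FIC-101.PV")  # swap servers or clients freely
```

Two clients and three PLC families would need six custom drivers; with the shared interface, each product carries exactly one implementation.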

Sun Connect is a standard developed by Sun Microsystems. It is very similar to OPC in purpose, but is built around Java beans and applets instead of COM/DCOM. Product developers make their applications Sun Connect-compliant, submit them for certification, and can market them as Sun Connect certified server or client products.

One additional connectivity development worth mentioning is hypertext transfer protocol next generation, or HTTP-NG. HTTP is the language used by Internet servers. The developers of the original HTTP never imagined today’s Internet utilization; HTTP 1.1, released in 1997, tweaked performance, but next-generation HTTP, scheduled for release sometime in 2000, promises to significantly improve Internet performance and allow machine-to-machine collaboration.

In addition to the connectivity wars, there are operating system wars. And out there in the ‘no-fly zone,’ users try to sort it all out and still operate a profitable business.

IEEE 1451.1 and 1451.2 attempt to give users the freedom to connect devices to whichever of the 60+ available networks best suits a particular application.

1451.1 defines an extensible common object model for the components of a networked smart transducer.

1451.2 defines electronic data sheets, an information model, and a plug-and-play handshake interface.

Operating System Wars

Proprietary for mainframes

Unix for business workstation applications

Microsoft 2000 (NT)

Real-time extensions (RTX) for NT

Linux

Enea OSE embedded, fault-tolerant real-time

Embedded NT

Java Virtual Machine (JVM)

picoJava (embedded Java)

Microsoft is prevalent on the desktop, but business computers are still mainframes with proprietary operating systems. Bernard Mathaisel, chief information officer for Ford Motor, says, ‘Sun pitches me all the time for a piece of our data center business, but it’s not feasible to think about replacing our mainframes. We have 40 million records in our customer database.’ Even Sun’s Unix-based Starfire, selling for over one million dollars, is unable to compete for existing and new applications running on mainframes like IBM’s S/390 G5. Introduced in 1998, IBM sold 1,000 S/390s in the first 100 days, each with a $1-3 million price tag. New applications requiring mainframe horsepower include e-commerce and e-services. While network standardization may be desirable throughout an enterprise, computer platforms will not reach any semblance of standardization in the foreseeable future.

Microsoft 2000 (NT) is prevalent on the desktop, and most control system manufacturers offer products on Windows NT platforms, but NT is not really a real-time operating system when real time is defined at the level of communications among process instrumentation, sensors, and controllers. That’s why companies like VenturCom offer real-time extensions (RTX) for NT, and Microsoft encourages use of VenturCom’s RTX.

One operating system upstart that could impact the standardization of operating systems is Linux. Linux gained significant prestige when IBM announced support for Linux on its mainframe computers and SAP announced its popular R3 ERP system was available for the Linux operating system. Linux shares many Unix-like features and will run most existing Unix applications. But unlike Unix, Linux is scalable, making it deployable on platforms of different sizes, including desktop computers. Many Internet servers now run Linux, so it can’t be ruled out. Besides, as long as Linux exists, Microsoft can contend the operating system marketplace is free enterprise and that they [Microsoft] are not a monopoly.

Embedded operating systems are another subject. Recently Microsoft announced plans to offer an embedded version of NT, and again recommends that applications requiring real-time execution consider contacting VenturCom to obtain real-time NT extensions.

Where a very fast embedded real-time operating system is needed, Enea OSE Systems provides a fault-tolerant operating system used by Motorola and Lucent Technologies for cellular communications in towers, phones, and pagers.

Sun also recently announced picoJava for embedded applications at the sensor and device levels. PicoJava silicon is expected to cost about three to five dollars per chip.

As intelligence is pushed nearer the plant floor, embedded operating systems will become more important to developers, and less important to users, but don’t expect to have a single operating system throughout your enterprise or even in a control and automation solution space. It’s just not going to happen.

Another area unlikely to achieve a single standard in the future is fieldbus.

Fieldbus wars The year was 1950, and ISA established the SP50 committee to determine and write a standard for the field device interface. In 1961, we received the 4-20 mA, 1-5 V dc standard.

SP50 was resurrected in 1984 to establish a standardized digital communication protocol for field devices. Fifteen years later, they are getting close. The S50.02/IEC 61158 fieldbus standard has been approved in North America, but turmoil remains in Europe. Hopefully the ‘keepers’ of these standards will put politics in the closet and see that there is potentially more pie for everyone through cooperation.

In the meantime, frustrated manufacturers developed digital communication ‘standards’ and products based on them. Today, there are over 60 digital communication standards for sensor, field device, and control-level networks, and over 5,000 products available for those networks. Manufacturers have invested significant resources in developing fieldbus technologies. They are becoming weary, and when someone becomes weary, they become protective of their territory, and thus wars erupt. (Jan. 2000 fieldbus update: Politics did win out over common sense, and the IEC 61158 standard allows eight different, non-interoperable protocols to claim ‘standards’ compliance. A truly pitiful state of affairs.)

During this 15-year period, Ethernet and Internet technologies have made remarkable advances.

Let’s examine current and emerging network technologies that likely will influence next generation control systems.

Network technologies

www.arc.com (ARC)

www.advmfg.com (AMR Research)

www.epri.com (electric utility test)

www.fieldbus.com (collection site of information)

The Internet is redefining everything: how we use technology, how we obtain information and make decisions, how we purchase computers, books, and cars, how we communicate with suppliers and customers, how we invest our money, and on and on. Internet-based companies are moving so fast the stock market can’t figure out how to value them. For example, Internet companies that have yet to make a single dollar of profit cannot be evaluated using traditional valuation rules, but that has not deterred investors. As a group, Internet stocks are outpacing every other traded stock two to one.

The explosion of network use has encouraged companies like Cisco Systems to develop advanced network management products, such as hubs, routers, switched hubs, and redundancy managers. These same advanced network management improvements have found their way into products for deploying networks in increasingly harsh environments. For example, Synergetic and Hirschmann offer industrial-strength advanced network management products.

If press releases and reports from research companies like ARC Advisory Group, AMR Research, and Gartner Group are an accurate indication, Ethernet is the up and coming network for control and possibly device level communications.

If you listen to Ethernet critics, and their numbers are dwindling, you’ll hear about Ethernet’s limitations: It’s too expensive for device-level networking or it’s not deterministic.

Control Engineering articles on Ethernet capabilities and its shortcomings indicate that arguments that Ethernet is too costly and nondeterministic are now invalid.

Silicon chips providing Ethernet and Internet connectivity for embedding in semi-intelligent devices are being offered by companies like NETsilicon for $10-15 per chip. Compared with fieldbus silicon, that’s not expensive.

The determinism argument was addressed and tested by the electric utilities as they prepared for deregulation. One of their initiatives was to define and develop a means of delivering supply, demand, and cost information to one another in 4 milliseconds (msec) or less, their definition of real time. They chartered the Electric Power Research Institute (EPRI, Palo Alto, Calif.) to investigate network communication options, conduct tests to verify findings, and define a standardized communication protocol they all could use. Eventually, EPRI tested 12 million bits per second (Mbps) Profibus DP against 10 Mbps Ethernet with switched hubs and 100 Mbps Fast Ethernet with hubs. Both Ethernet solutions bettered the 4-msec requirement; Profibus did not. More information about this test appears in the April ’99 issue of Control Engineering; also visit www.controleng.com or www.epri.com.

Ethernet’s popularity was pretty good when it performed at 10 Mbps, but like a good-looking kid at a new school, interest really picked up when Fast Ethernet, operating at 100 Mbps, was introduced. If that weren’t enough, Gigabit Ethernet (1,000 Mbps) delivers remarkable performance. Gigabit Ethernet was originally developed for use on fiber-optic networks, but Cisco Systems recently announced a successful implementation of Gigabit Ethernet on the same twisted-pair physical media used for Ethernet and Fast Ethernet. Cisco’s release was careful to warn this was a carefully designed and controlled implementation; nevertheless, it offers exciting possibilities. Now terabit Ethernet is being discussed.

The question about Ethernet’s suitability as a control network has already been answered. Companies like ABB, Foxboro, Fisher-Rosemount, GE Fanuc, Moore Automation, Rockwell Automation, and Schneider Electric have been using Ethernet to connect controllers and operator interfaces for two or three years. Network experts agree that segmented 10 Mbps Ethernet is suitable for control networks with 15-20 connected devices. To an ex-hot rodder, faster is always better, so wouldn’t upgrading to Fast Ethernet be 10 times better? Apparently, it’s not that easy. Fast Ethernet uses the same wire, and data travels faster along it, but the devices sending and receiving data are like on/off ramps on the interstate, where congestion can occur. Experts say upgrading a well-designed, segmented 10 Mbps Ethernet to Fast Ethernet will provide about a 2-3x improvement in performance.

As previously mentioned, there is considerable dialogue about Ethernet’s suitability on the plant floor. However, unlike business and control-level deployments of Ethernet, two major hurdles must be solved before Ethernet is ready to displace fieldbus technologies on the plant floor. First, Ethernet cannot operate on the same wire used to supply device excitation power, the 24 V dc required by traditional two-wire devices such as transmitters and valve controllers. Second, Ethernet operating on copper is not intrinsically safe, making it unsuitable for deployment in hazardous environments.

That said, announcements arrive every week of another company offering Ethernet I/O systems. It’s still too early to determine how many Ethernet I/O solutions are making it into manufacturing and process applications, but the message is clear: Ethernet is likely to be some part of your plant-floor network in the future.

One of the questions often asked is, ‘Will Ethernet help end the fieldbus wars?’

In the words of President Richard Nixon, ‘Let me make this perfectly clear.’ Ethernet is the physical media, the wire that connects devices. Even when TCP/IP (transmission control protocol/Internet protocol) is added to Ethernet, it does not provide a complete communication solution. There remains a need for an application layer to provide a way for different products from different manufacturers to share information across the network and achieve interoperability. Each fieldbus provides its own application-interface layer of the communication stack. That said, if intrinsic safety and/or power for field devices is not required, it is feasible for a fieldbus, such as Foundation Fieldbus, Profibus, or DeviceNet, to operate on Ethernet’s physical media.
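A toy sketch of that layering point in Python: TCP/IP dutifully delivers the bytes, but the two ends still need an agreed application-layer format before the data means anything. The ‘tag=value’ scheme below is invented for illustration; a real fieldbus defines this layer itself.

```python
# TCP/IP delivers bytes; interoperability still requires an agreed
# application-layer format. The 'tag=value' payload here is invented.
import socket, threading

srv = socket.socket()
srv.bind(("127.0.0.1", 5050))
srv.listen(1)                    # listening before the client connects

def device_server():
    conn, _ = srv.accept()
    conn.sendall(b"FIC-101.PV=42.7;units=kg/h")  # application-layer payload
    conn.close()

threading.Thread(target=device_server, daemon=True).start()

cli = socket.create_connection(("127.0.0.1", 5050))
raw = cli.recv(1024).decode()    # TCP/IP delivered the bytes...
fields = dict(pair.split("=") for pair in raw.split(";"))
print(fields)                    # ...but parsing needed the shared format
cli.close()
srv.close()
```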

So that begs another question: ‘Are there plans by any fieldbus group to do that?’ Efforts are underway at the Fieldbus Foundation, ControlNet International, the Profibus organization, and elsewhere, but the jury is still out.

Now let’s examine a few technologies appearing in other industries and business segments that are beginning to influence process control and automation.

Other technologies to watch

Intelligent appliances

Hewlett-Packard Vantera

Opto 22 Internet I/O – www.internetio.com

I subscribe to online business news wires that deliver announcements from manufacturers serving industries that I select. Commercial networking and computer technologies are one of those subscription services. Intelligent appliances for residential and commercial use are generating a lot of press releases.

Intelligent appliances are defined as Internet-aware devices that permit specialized products to connect directly to the Internet and perform as Internet servers. Devices in the intelligent appliance arena include WebTV, WebDVD, WebPhone, home security systems, and even a WebRefrigerator. Companies like Sun Microsystems, Microsoft, Sony, Philips, and Quantum are investing in intelligent appliance developments; Hewlett-Packard recently created an intelligent appliance division. These companies see huge opportunities to make money developing and selling intelligent appliances that connect directly to the Internet.

One scenario explaining the use of IAs in a commercial application is developed around a mini-market. The mini-market is fitted with its own intranet, to which gas pumps, vending machines, car wash equipment, utility meters, the security system, and the cash register are connected. The mini-market network is connected to the Internet through a secure firewall. Using the Internet, a corporate computer can access the mini-market and share information to and from any of the intelligent appliances: download new gas prices, determine vending machine inventory, review utility usage, etc. Or the intelligent device can initiate communication with suppliers; for example, intelligent appliances could send an e-mail ‘just-in-time delivery’ request for more fuel, ding-dongs, soda, or beer.

Products expected to fit in the broad category of intelligent appliances will be ‘built-for-purpose’ web servers. They will do one thing and one thing only. Users won’t care what operating system, communication stack, application software, etc., reside in IAs; they will want them to plug and play on the network. Intelligent appliances with like functionality will be interoperable. When a manufacturer adds new features, they can be downloaded across the network. When factory support or diagnostics is required, it will arrive via the network. With IAs, the same level of factory support is as available to a Brazilian jungle site as it is next door to the IA manufacturer.
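The built-for-purpose idea is easy to picture: a device that serves exactly one thing. Here is a minimal sketch in Python; the endpoint and data are invented, and a real intelligent appliance would embed a far smaller, purpose-built stack.

```python
# Minimal sketch of a 'built-for-purpose' web server: the appliance
# answers one kind of request (its own status) and nothing else.
# Endpoint and data are invented for illustration.
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

STATUS = {"device": "vending-machine-07",
          "inventory": {"soda": 12, "chips": 3}}

class ApplianceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/status":     # the one thing this device does
            body = json.dumps(STATUS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)       # anything else is not its job

HTTPServer(("0.0.0.0", 8080), ApplianceHandler).serve_forever()
```

In the mini-market scenario above, the corporate computer (or anyone with a browser) would simply poll the device's status page, and restocking logic at the other end would decide when to send the just-in-time delivery request.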

I mention IAs because silicon chips being developed will make connection to the Internet/intranet very inexpensive; estimates in the three-to-five-dollars-per-connection range are being touted. When network silicon becomes that cheap, it will find its way into instrumentation, sensors, and control systems serving the manufacturing and process industries. Already, companies like Hewlett-Packard and Opto 22 have introduced Internet-aware I/O devices. Hewlett-Packard’s Vantera was developed to allow connection of utility meters to the Internet. Opto 22’s I/O server connects to Ethernet and hosts OPC and web pages: OPC provides the communication to OPC clients for real-time data sharing, and a user with a browser can configure the server and access on-board diagnostics. Hands-on access to Opto 22’s I/O server is available at www.internetio.com.

Tempest is NASA’s embedded web technology, developed to support NASA’s manned space flight program for shuttle and station experiment control. Tempest turns an embedded microprocessor into an accessible web site. Tempest fits into 34 kilobytes, executes in 4-6 milliseconds, and runs on a real-time operating system such as VxWorks, or on any system running Java or a Java Virtual Machine. The technology also uses other existing web tools, such as standard browsers, common object request broker architecture (CORBA), virtual-reality modeling language (VRML), and hypertext markup language (HTML). NASA believes that by 2005 the embedded systems market (i.e., cell phones and palmtops) will be hundreds or thousands of times larger than the desktop PC market.

Tempest is a commercial quality, fully documented, simple to install technology that supports simple GUIs. Source code and training for Tempest can be obtained from NASA’s Glenn Research Center.

Among the ‘wild’ emerging technologies is fingerprint recognition. Silicon has been developed that can be embedded in the grip of a handgun. Through a learn cycle, the handgun memorizes the fingerprint signature of the registered owner, and the handgun will operate only when held by the registered owner.

Field-centric…again These technologies, and others, could come together to form a highly intelligent, widely distributed, field-centric control system architecture.

A recent AMR Research report calls this ‘new’ paradigm Ubiquitous Computing and Control (UAC). I’m sure that term has more marketing appeal than ‘plant-floor centric,’ but the future will place control intelligence very close to the process.

Explore this control system scenario. Processing units (reactors, columns, boilers, centrifuges, etc.) will have a corresponding network, perhaps Ethernet, perhaps not, but it will mimic the Internet, permitting machine-to-machine collaboration without concern for a machine’s operating system. Connected to these process unit networks will be ‘built-for-purpose’ (BFP) intelligent appliances such as transmitters, valves, motors, sensors, and controllers. BFPs will be available from original equipment manufacturers for devices such as a centrifuge, boiler, or dryer. Companies with specialized expertise, such as distillation or fermentation, will also develop BFPs for unit management and optimization. Related process unit networks will be connected to a process cell network; BFPs connected at this level will be data historians, batch recipe managers, continuous process optimizers, and information managers. Moving further up will be process area networks, where BFPs will provide intelligent advisories, dynamic modeling, historical data archiving, and process optimization.

Some emerging BFPs are available from companies like webPlan, which has developed an e-supply-chain suite of applications including onPLAN, SupplyIT, OrderIT, and DemandIT. A similar BFP offering from J.D. Edwards, a major ERP vendor, was announced in May ’99. J.D. Edwards claims client/server systems are nearing the end of their useful life and are being replaced with e-business servers. The company announced a suite of 62 web-based applications including employee self-service, Internet storefronts, and data exchange servers for supply-chain partners.

I envision a plant floor that mimics Internet/intranets, where networks connect intelligent, specialized servers with very focused assignments and machines collaborate to produce a product; where on-line machine performance is available to anyone with a browser; and where business information is shared without concern for source or destination platforms. I envision a day when process and automation engineers focus on process and product improvements, and I’m not alone in this vision.

Dick Morley, the father of the PLC and founder of Modicon, at a March ’99 seminar, validated the vision of a web centric plant floor.

Alliances & mergers will bring solutions

Guidelines for Safe Automation of Chemical Processes

Center for Chemical Process Safety of the American Institute of Chemical Engineers, 345 East 47th Street, New York, NY 10017

To provide products for an open systems market, vendors will become much more vertically aware and begin to shy away from being everything to everyone. They will accomplish this by strengthening existing alliances and adding new alliances to fill gaps in the industry domains where they choose to provide solution offerings.

Mergers in the process control and instrumentation arena will slow down in the next five years because there won’t be that many companies left to merge and because companies are already struggling with how to manage multiple control system platforms without upsetting existing customers. You will be able to buy open system solutions from a single source, but vendor choices may be limited, and don’t assume it will be ABB, Fisher-Rosemount, Honeywell, Invensys (Foxboro/Wonderware), or Yokogawa. GE Fanuc, Siemens, and Rockwell Automation each view themselves as viable process control automation suppliers and are acquiring complementary companies, developing products, and assembling industry expertise to compete for process control system applications.

Small, entrepreneurial companies will continue to be the most innovative, and if you’re seeking a non-conventional solution, look to a small company, but don’t expect long-term support, upgrades, or enhancements. If the solution is really good, it, or the company that owns it, will be bought by one of the majors and merged into the ‘flock.’ More than a few small companies hold being acquired as their long-term strategic goal.

One of the major advantages of buying control solutions from a single supplier, or from a supplier with a very strong alliance compliance program, will be performance testing. The one thing proprietary control systems provided was a totally integrated, fully tested, one-butt-to-kick solution. Systems assembled by an integrator or by users shift the burden of performance testing back to the end user.

A few small, industry-focused control system providers will continue to exist, but unless you’re in the industry, and possibly the geographic area, they serve, you likely will not hear about them.

An area yet to find favor…partnerships A few years ago, an IBM service organization, like many companies, had developed a lot of in-house expertise in computer performance and networking. One day it dawned on someone to market that expertise to IBM customers. IBM approached a large international bank about taking over its computer centers. After all, banks were good at banking, not at running computer centers; IBM knew computers. A 10-year deal was signed, with IBM being paid based on the number of transactions between the bank and its customers. IBM immediately installed a computer in the bank that ran diagnostics on all the other computers, and when a problem was detected, the diagnostic computer called IBM’s service center in Colorado. Regularly, bank officials would meet with IBM to discuss new and changing requirements affecting transaction quantities, types, and locations. IBM was responsible for upgrading the computer systems to meet changing transaction requirements. Were there risks? You bet, but it was in the interest of both parties to make the other successful…a textbook definition of partnership.

Major process control companies will develop similar partnerships with major producer companies. Getting close to such a partnership is the Honeywell and Chevron deal that has Honeywell taking full responsibility for instrumentation, control systems, and automation application programming at Chevron for 10 years. The deal is worth $250 million.

Not all good news Government regulations from the EPA, OSHA, etc., will continue to get tougher. Compliance reporting will become more complicated, especially for user companies that do not have well-integrated information systems.

The sophistication of chemical processes will increase, and the use of wireless technologies, complex control algorithms, distributed intelligent appliances, and especially open platforms will make the assembly of safe solutions more difficult. The need to protect the community, the environment, and company assets will require more attention to reliability and safety when implementing control systems. Best practices and standards, such as ANSI/ISA S84, will become even more important to ensure networked control systems and safety systems retain separation and that system risk analyses are conducted and documented. If you have doubts about how to assess, assign, design, implement, and test control systems and safety instrumented systems, obtain a copy of Guidelines for Safe Automation of Chemical Processes from the American Institute of Chemical Engineers in New York City.

Summary Our quick journey exploring the punctuated equilibrium theory reveals that many business practices began to form long ago and that the ‘new’ paradigm of field-based control is not that new.

The technologies explored here are not earth-shaking science fiction. Many already exist, and the rest are very close to becoming available. What these technologies offer process and control engineers is the opportunity to spend more time improving processes and less time evaluating technologies, worrying about software compatibility, and managing upgrades.

Comments? E-mail dharrold@cahners.com