Machine vision: Focus on design

A well-designed machine vision system enables manufacturers to improve product quality, enhance process control, and increase manufacturing efficiency while lowering the total cost of ownership. A good machine vision design starts with selecting a motion-vision integration type, based on the machine’s automation tasks.


Figure 1: Synergetic integration is shown in web inspection. Courtesy: National Instruments

Machine vision can solve a wide range of applications with automation solutions that improve manufacturing efficiency, lower costs, and increase customer satisfaction with close to zero defects and recalls. To maximize the benefits of machine vision, integrators must focus on the design of the integrated system. Whether machine vision is used for basic automated inspection or for highly precise vision-guided automation, a well-conceived and developed design is key to achieving a high return on investment (ROI).

The core design for a machine vision system revolves around the integration of a vision subsystem with a motion subsystem. Emphasis on this core piece of the design provides an early and accurate bill of materials, mitigates risk from incompatible motion and vision components, reduces integration time and costs, and lowers the total cost of ownership while delivering the benefits of machine vision.

Integrate machine vision design

In an integrated machine vision system, the motion and the vision systems can have varying levels of interaction, from basic information exchange to advanced vision-based feedback. The level of interaction depends on the requirements of the machine, that is, the sequence, the accuracy and precision, and the nature of the tasks that must be performed by the machine. Depending on the level of interaction between the motion and the vision systems, a design can be based on one of the following four types of integration: synergetic integration, synchronized integration, vision-guided motion, and visual servo control. For a high ROI, the machine must meet the specified requirements at deployment and must scale well with next-generation process and product improvements. Hence, integrators must first identify the current and future requirements and use those requirements to determine the type of integration that will best suit the application.

Synergetic integration

Synergetic integration is the most basic type of integration. In this type of integration, the motion and the vision systems exchange basic information, such as velocity or a time base. The time to communicate between the motion and vision systems is typically on the order of tens of seconds. A good example of synergetic integration is a web inspection system (Figure 1). In a web inspection system, the motion system moves the web, usually at a constant velocity. The vision system generates a pulse train to trigger the cameras and uses the captured images to inspect the web. The vision system needs to know the velocity of the web to determine the rate for triggering the cameras.
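The trigger-rate calculation described above can be sketched in a few lines. The function name, parameter names, units, and example values are illustrative assumptions, not part of any vendor API:

```python
def line_trigger_rate_hz(web_velocity_mm_s: float, pixel_size_mm: float) -> float:
    """Camera trigger rate so consecutive captures advance the web by one pixel.

    Assumed inputs: the web velocity comes from the motion system, and the
    pixel size (in web-surface units) comes from camera calibration.
    """
    return web_velocity_mm_s / pixel_size_mm

# A hypothetical 500 mm/s web imaged at 0.1 mm per pixel needs a 5000 Hz trigger train.
rate_hz = line_trigger_rate_hz(500.0, 0.1)
```

Because the web velocity is nominally constant, this rate only needs to be recomputed when the motion system reports a velocity change, which is why infrequent communication suffices for this integration type.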

Synchronized integration

In synchronized integration, the motion and the vision systems are synchronized through high-speed I/O triggering. High-speed signals wired between the motion and the vision systems are used to trigger events and communicate commands between the two systems. This I/O synchronization effectively synchronizes the software routines running on the individual systems. A good example of synchronized integration is high-speed sorting, in which objects are sorted based on the difference in specific image features, such as color, shape, or size.

Figure 2: Synchronized integration is shown in high-speed sorting. Courtesy: National Instruments

In a high-speed sorting application, the vision system triggers a camera to capture the image of a part moving across the camera's field of view (Figure 2). The motion system uses the same trigger to capture the position of the part. Next, the vision system analyzes the image to determine whether a part of interest exists at that position. If it does, that position is buffered. Because the conveyor moves at a constant velocity, the motion system can use the buffered position to fire an air nozzle farther down the conveyor when the part reaches it, diverting the part to a different conveyor and thus sorting the differently colored parts. High-speed sorting is widely used in the food industry to sort product types or discard defective products. It achieves high throughput, lowers labor costs, and significantly reduces defective shipments resulting from human error.
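The buffered-position logic above can be sketched as follows. The offset between camera and nozzle, the callback names, and the encoder units are all assumptions for illustration; in a real system these events would be handled by hardware-timed I/O rather than Python callbacks:

```python
from collections import deque

# Assumed geometry: encoder counts between the camera and the air nozzle.
NOZZLE_OFFSET_COUNTS = 1200

pending = deque()  # conveyor positions at which the nozzle must fire

def on_camera_trigger(encoder_position: int, eject: bool) -> None:
    """Vision result for one captured image: buffer the fire position
    only if the inspected part should be diverted."""
    if eject:
        pending.append(encoder_position + NOZZLE_OFFSET_COUNTS)

def on_encoder_update(encoder_position: int) -> bool:
    """Motion-side check, called as the conveyor advances: fire the
    nozzle once a buffered part has reached it."""
    fired = False
    while pending and encoder_position >= pending[0]:
        pending.popleft()
        fired = True
    return fired
```

The shared trigger is what makes this work: because vision and motion latch the same event, the buffered position corresponds exactly to where the part was when the image was taken.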

Vision-guided motion

In vision-guided motion, the vision system provides some guidance to the motion system, such as the position of a part or the error in the orientation of the part. As we move from a basic to a more advanced integration type, there is an additional layer of interaction between the motion and the vision systems. For example, you can have high-speed I/O triggering in addition to vision guidance.

Figure 3: Vision-guided motion is shown in flexible feeding. Courtesy: National Instruments

A good example of vision-guided motion is flexible feeding. In flexible feeding, parts exist in random positions and orientations. The vision system takes an image of the part, determines the coordinates of the part, and then provides the coordinates to the motion system (Figure 3). The motion system uses these coordinates to move an actuator to the part to pick it up. It can also correct the orientation of the part before placing it. With this implementation, you do not need any fixtures to orient and position the parts before the pick-and-place process. You can also overlap inspection steps with the placement tasks. For example, the vision system can inspect the part for defects and provide pass/fail information to the motion system, and the actuator can then discard the defective part instead of placing it.
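The coordinate hand-off at the heart of this scheme is a pixel-to-world conversion. A minimal sketch follows, assuming a calibrated, distortion-free camera mounted square to the work surface and a single scale factor; real systems use a full calibration model, and all names here are hypothetical:

```python
def pixel_to_world(px: float, py: float, scale_mm_per_px: float,
                   origin_mm=(0.0, 0.0)) -> tuple:
    """Map a part centroid from image pixels to machine coordinates.

    Assumes the camera's pixel grid is aligned with the machine axes and
    that origin_mm locates the image origin in machine coordinates.
    """
    ox, oy = origin_mm
    return (ox + px * scale_mm_per_px, oy + py * scale_mm_per_px)

# A centroid found at pixel (200, 100) with a 0.05 mm/pixel scale maps to
# a pick coordinate the motion system can move to directly.
target_x_mm, target_y_mm = pixel_to_world(200, 100, 0.05)
```

The motion system consumes the resulting coordinates as a move target, which is why any error in the scale factor translates directly into a pick-position error.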

Figure 4: Vision-guided motion block diagram includes a control loop: sense, decide, and actuate. Courtesy: National Instruments

Figure 4 shows the block diagram of the vision-guided motion system described in Figure 3. The vision system provides the position of the part to the motion trajectory generator at least once every millisecond. This type of processing requires fast real-time systems that can meet the timing and processing needs of a vision-guided motion system.

In a vision-guided motion system, the vision system provides guidance to the motion system only at the beginning of a move. There is no feedback during or after the move to verify that the move was correctly executed. This lack of feedback makes the move prone to errors in the pixel-to-distance conversion, and the accuracy of the move is entirely dependent on the motion system. These drawbacks become prominent in high-accuracy applications with moves in the millimeter and submillimeter range.
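A one-line calculation illustrates why the lack of feedback matters; the numbers are hypothetical:

```python
def open_loop_error_mm(move_mm: float, scale_error_pct: float) -> float:
    """Position error left by an uncorrected pixel-to-distance scale error
    when the move is executed open loop (no vision feedback)."""
    return move_mm * scale_error_pct / 100.0

# A hypothetical 1% calibration error over a 100 mm move leaves a 1 mm miss,
# far outside a submillimeter tolerance.
err_mm = open_loop_error_mm(100.0, 1.0)
```

Because the error grows with the length of the move and is never re-measured, it cannot be corrected without the feedback that visual servo control adds.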

Visual servo control

The drawbacks of vision-guided motion can be eliminated if the vision system provides continual feedback to the motion system during the move. In visual servo control, the vision system provides initial guidance to the motion system as well as continuous feedback during the move. The vision system captures, analyzes, and processes the images to provide feedback in the form of position setpoints for the position loop (dynamic look and move) or actual position feedback (direct servo). Visual servo control reduces the impact of errors from pixel-to-distance conversions and increases the precision and accuracy of existing automation. With visual servo control, you can solve applications that were previously considered unsolvable, such as those that require micrometer or submicrometer alignments. Visual servo implementations, especially those based on the dynamic look-and-move approach, are becoming viable through field-programmable gate array (FPGA) technologies. FPGAs provide hardware acceleration for time-critical vision processing tasks and can achieve the response rates required to close the fast control loops used in motion tasks.
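The dynamic look-and-move approach can be sketched as an iterative loop. Here `measure_error_mm` and `move_relative` stand in for the vendor's vision and motion APIs, and the tolerance and iteration cap are illustrative assumptions:

```python
def dynamic_look_and_move(measure_error_mm, move_relative,
                          tolerance_mm: float = 0.005,
                          max_iters: int = 50) -> bool:
    """Close the loop with vision: measure the remaining offset to the
    target, command a corrective move, and repeat until within tolerance."""
    for _ in range(max_iters):
        ex, ey = measure_error_mm()     # vision: remaining offset, in mm
        if abs(ex) <= tolerance_mm and abs(ey) <= tolerance_mm:
            return True                 # aligned within tolerance
        move_relative(-ex, -ey)         # motion: corrective setpoint
    return False                        # failed to converge
```

Because each iteration re-measures the actual offset, calibration and actuator errors shrink with every pass instead of accumulating, which is what makes micrometer-scale alignments feasible.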

Design challenges

Challenge: Interoperability and technical support

Component selection, setup, and configuration of a motion and a vision system are challenging tasks. Integrators typically choose motion and vision components from multiple vendors. These components are not readily interoperable, and technical support from the vendor(s) is usually insufficient. These factors significantly increase the integration time, resulting in higher integration costs.

Solution: When designing a machine vision system, first consider an off-the-shelf, integrated machine vision solution. These integrated solutions, offered by a limited number of vendors, are turnkey but come at a premium price. They shorten integration time and speed deployment. So, if a standard integrated solution meets the requirements at a reasonable cost, it is the best option to pursue.

The second option is to consider a vendor who offers both the motion and the vision components or, at minimum, the core programmable motion and vision components. With this option, there is guaranteed compatibility between the components, a single programming environment, and one source for technical support. These factors contribute to a significantly shorter integration time. A third option is to consider vendors that have formed a partnership. With such partnering initiatives, vendors usually offer solutions to interface between at least a subset of their products, and workarounds for compatibility issues are often well documented.

With any option, evaluate the amount of support you will receive from the vendor. If required, budget and pay for support packages such as premium phone or individual support from a systems engineer. Support becomes increasingly important with custom and high-performance applications. Good support from a vendor can significantly reduce your integration time, and hence reduce your integration costs.

Challenge: Software programming

While higher software performance is enabling the adoption of vision-guided motion solutions, software integration challenges are still the main barrier for integrators and manufacturers. Typically, there are two different programming environments for the motion and the vision components. Learning the distinct environments often requires a significant time investment. Even after getting acquainted with the environments, transferring and translating commands and information between them is cumbersome. Integration of the HMI and I/O adds a further burden and increases integration time. Hardware integration across various communication buses (such as GigE Vision, Camera Link, and EtherCAT) further complicates the issue.

Real-time and FPGA programming, as well as customization, are often required in high-speed vision-guided motion systems, visual servo systems, and highly specialized applications such as those found in the biomedical and life sciences industries. Typically, any customization added by the vendor is available at a premium price. User customization, if available, is often in the form of firmware-level programming, requiring very high domain knowledge and familiarity with vendor hardware.

One environment is simpler

Figure 5: Project with integrated vision and motion systems uses a familiar file structure. Courtesy: National Instruments

Select one programming environment that is easy to learn and that can seamlessly integrate motion, vision, HMI, and I/O across various hardware platforms and buses. This programming environment must also allow customized real-time and FPGA programming to solve current and next-generation machine vision applications. The environment must feature tools for quick prototyping and early validation of the mechanical and vision components of your machine. Figure 5 shows an example in which an embedded vision system with a GigE camera is connected to a servo motor drive over EtherCAT. Vision and motion programming modules provide the tools required to implement the high-level functions to process the images captured from the camera and control the axis through the servo drive. Additional high-level APIs provide simple integration of the HMI and I/O.

Quality, control, efficiency, less cost

A well-designed machine vision system enables manufacturers to improve product quality, enhance process control, and increase manufacturing efficiency while lowering the total cost of ownership. A good machine vision design starts with selecting a motion-vision integration type, which is based on the automation tasks that must be performed by the machine. To successfully develop the design, integrators must carefully consider motion and vision vendors with a focus on interoperability and quality of support services. Integrators must select a single programming environment that enables seamless hardware integration. When solving highly customized and advanced applications, integrators should choose a software platform that makes customization easy and real-time and FPGA programming possible.

Priya Ramachandran is a senior engineer at National Instruments, responsible for product design and strategy for motion control. She has designed leading-edge motion controllers with a focus on advanced control and has an extensive background in advanced motion-vision applications. Ramachandran holds a master's degree in electrical engineering from Virginia Tech. Edited by Mark T. Hoske, content manager, CFE Media, Control Engineering.
