Combining Vision with Motion
Three Considerations for a Successful Vision-Guided Motion System
A well-implemented vision-guided motion system enables manufacturers to improve product quality, enhance process control, and increase efficiency while lowering cost of ownership. Three considerations for finding a successful solution have proven to help yield a high ROI within a year of commissioning.
PRIYA RAMACHANDRAN, NATIONAL INSTRUMENTS
Vision-guided motion systems can automate tasks at accuracies and speeds that give next-generation machines faster throughput, higher quality, and lower cost. Today, the manufacturing community is under tremendous strain to maintain an edge in the global market by offering high-quality products at competitive prices. To stay competitive, manufacturers are seeking solutions that improve productivity, lower manufacturing costs, and increase customer satisfaction with zero defects and recalls. Vision-guided motion solutions offer manufacturers the means to achieve these goals.
A vision-guided motion system consists of a motion subsystem and a vision subsystem, which provides guidance to the motion subsystem in the form of the position or orientation of an object. The motion subsystem then uses this guidance to move the object as required by the application. The self-guidance capability of a vision-guided motion system eliminates the need for hard tooling, fixtures, and positioning equipment. Parts can be presented in random positions or orientations, and the system locates each part through vision guidance. The benefits of vision guidance are summarized in Table 1.
Table 1. Benefits of vision guidance.
Types of Vision Guidance
The types of vision guidance vary from basic to advanced in terms of the level of integration, or amount of interaction, between the motion and vision subsystems. A basic vision-guided motion sequence starts with the vision subsystem capturing an image of an object, for example, an unassembled part. The vision subsystem processes the image to determine the coordinates of the part in pixels and converts those pixel coordinates to real-world coordinates. The real-world coordinates are then provided as guidance to the motion subsystem, which uses them to determine the trajectory for a coordinated multi-axis move, such as a move to pick and place the part (Figure 1).
Figure 1. Basic vision-guided motion system.
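The sequence in Figure 1 maps naturally to a look-then-move program. Below is a minimal sketch in Python; the camera and stage handles, the my_rig module, and the stored homography file are hypothetical stand-ins for whatever vendor API and calibration result are actually in use, and the brightest-blob detection is only a placeholder for a real pattern-matching step.

```python
import numpy as np
import cv2  # OpenCV, used here only for the placeholder detection step

# Hypothetical hardware wrappers -- stand-ins for the vendor API in use.
from my_rig import camera, stage  # hypothetical module

# One-time calibration result: maps pixel coordinates to real-world mm.
# A simple 3x3 homography is assumed; real systems use richer models.
H = np.load("pixel_to_world_homography.npy")

def locate_part(image):
    """Return the (x, y) pixel centroid of the brightest blob, a
    placeholder for a real pattern-matching step (image is grayscale)."""
    _, mask = cv2.threshold(image, 128, 255, cv2.THRESH_BINARY)
    m = cv2.moments(mask)
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

def pixel_to_world(px, py):
    """Apply the homography to convert pixel to real-world coordinates."""
    world = H @ np.array([px, py, 1.0])
    return world[:2] / world[2]

# Basic vision-guided motion: one look, one move, no feedback during the move.
image = camera.grab()              # 1. capture the image
px, py = locate_part(image)        # 2. find the part in pixel coordinates
wx, wy = pixel_to_world(px, py)    # 3. convert to real-world units
stage.move_to(wx, wy)              # 4. coordinated move to pick the part
stage.wait_until_done()
```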
In a basic vision-guided motion system, the vision subsystem provides guidance to the motion subsystem only at the beginning of a move. There is no feedback during or after the move to verify that it was executed correctly. This lack of feedback makes the move vulnerable to errors in the pixel-to-distance conversion, and the accuracy of the move depends entirely on the motion subsystem. This drawback becomes prominent in high-accuracy applications with moves in the millimeter and sub-millimeter range. Such applications need a highly accurate, and therefore expensive, robotic system, which in some cases may be cost prohibitive.
The drawbacks of basic vision-guided motion can be eliminated if the vision subsystem provides continual feedback to the motion subsystem during the move. This advanced type of vision guidance is called visual servo control. In visual servo control, the vision system provides feedback in the form of position setpoints for the position loop (dynamic look and move) or actual position feedback (direct servo). The dynamic look and move approach is becoming increasingly popular in industrial applications. Visual servo control reduces the impact of errors from pixel-to-distance conversions and increases the accuracy of existing automation. Visual servo control solutions are enabled by FPGA technologies that provide the processing optimizations required to generate position setpoints or feedback at rates from tens of milliseconds down to sub-millisecond periods.
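A dynamic look and move cycle can be sketched as follows. The camera and drive handles are hypothetical, the gain, tolerance, and update rate are assumed values, and the locate_part helper from the earlier sketch is reused; the point is only to show vision feedback updating the setpoint on every cycle while the drive's own position loop executes each small move.

```python
import time

# Hypothetical hardware handles; locate_part is reused from the earlier
# sketch. Names and numbers here are illustrative, not a vendor API.
from my_rig import camera, drive  # hypothetical module

TARGET = (320.0, 240.0)  # desired part location in the image, in pixels
GAIN = 0.05              # mm of correction per pixel of error (assumed)
TOL = 0.5                # stop once the error falls below half a pixel

# Dynamic look and move: the vision loop keeps issuing small corrective
# setpoints while the drive's position loop executes each move.
while True:
    px, py = locate_part(camera.grab())      # measure the current position
    ex, ey = TARGET[0] - px, TARGET[1] - py  # pixel error
    if max(abs(ex), abs(ey)) < TOL:
        break                                # converged on the target
    drive.move_relative(GAIN * ex, GAIN * ey)  # new setpoint for this cycle
    drive.wait_until_done()
    time.sleep(0.01)  # ~100 Hz vision update, within the rates cited above
```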
First Consideration: Vendor Offering and Support
The market now has several motion and vision vendors providing high-quality components at competitive prices. When selecting components, two often overlooked but major factors to consider are the degree of interoperability between the various components and the quality of support services from the vendor(s). These factors significantly affect the integration time and hence the total integration cost. The compatibility of components and the level of support can make or break an integration project.
The first option to consider is a fully integrated vision-guided motion system. These turnkey solutions, offered by a limited number of vendors, come at a lower cost. Because they are standardized, they are usually not optimized for the specific requirements of the system; however, they result in a shorter integration time and faster deployment. So, if a standard integrated solution meets the requirements at a reasonable cost, it is the best option to pursue.
The second option is to consider a vendor that offers both the motion and the vision components, or at a minimum, the core programmable motion and vision components. With this option there is guaranteed compatibility between the components, a single programming environment, and one source for technical support. These factors contribute to a significantly shorter integration time. Compared to the standard integrated solution, this option has the potential to provide higher performance and flexibility, as shown in Figure 2.
If the requirements of your design cannot be met with a single vendor, consider vendors that have a close partnership. In this case, software functions are usually available for communication between the motion and vision systems. The workarounds for interoperability issues are typically documented, but these workarounds are often complex and time-consuming. In many cases, interoperability is not fully tested for all features, so there is a risk of incompatibility. This option is comparable to the single-vendor solution in performance and flexibility, but the integration time is significantly longer because of multiple programming environments, complicated workarounds, and potentially insufficient technical support.
Performance or equipment costs can force you to consider two or more vendors that do not have existing partnerships. Although this option has the potential to provide the most performance and flexibility, it definitely has the longest integration time. The path of integrating a set of untested components is littered with incompatibility and support land mines.
The quality of support provided by vendors must also be factored in when selecting components. Component selection, setup, and configuration of a motion and vision system are challenging tasks, and a good support structure is crucial to the success of the project. Available technical support should include quality user documentation; a support website featuring knowledge base entries, example programs, and training modules; and individual support from applications engineers.
Second Consideration: Software Platform
While higher software performance is enabling the adoption of vision-guided motion solutions, software integration challenges are still the main barrier for integrators and manufacturers. Typically, there are two different programming environments for the motion and vision components. Learning the distinct environments often requires a huge investment of time. Even after becoming acquainted with the environments, transferring and translating commands and information between them is cumbersome. Integrating the HMI and I/O adds a further burden and increases integration time. Hardware integration across various buses such as GigE, Camera Link, and EtherCAT complicates the issue further.
It is important to select components with a common programming environment that is intuitive and easy to learn, and that can seamlessly integrate motion, vision, HMI, and I/O across various hardware platforms and buses. Figure 3 shows an example where an embedded vision system with a GigE camera is configured as an EtherCAT master and connected to an EtherCAT servo motor drive. Vision and motion programming modules provide the tools required to implement the high-level functions that process the images captured from the camera and control the axis through the servo drive. Additional high-level APIs provide simple integration of the HMI and I/O.
Figure 3. Project with integrated vision and motion subsystems.
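To illustrate what such a common environment buys, here is a sketch of the Figure 3 setup written against a single, hypothetical API. None of the module or function names below belong to a real library; they stand in for the high-level vision, motion, HMI, and I/O functions described above.

```python
# Single-environment sketch of the setup in Figure 3: one program configures
# the GigE camera, the EtherCAT drives, the HMI, and the I/O. Every module
# and function name below is a hypothetical stand-in, not a real vendor API.
from unified_sdk import vision, motion, hmi, dio  # hypothetical SDK

cam = vision.open_camera("GigE::cam0")                    # GigE vision source
stage = motion.open_axes(["EtherCAT::x", "EtherCAT::y"])  # EtherCAT drives
panel = hmi.Panel("Pick and Place")                       # operator interface
gripper = dio.line("port0/line3", direction="out")        # discrete output

stage.configure(max_velocity=200.0, max_accel=2000.0)  # mm/s, mm/s^2 (assumed)

while panel.run_switch():                         # HMI controls the cycle
    image = cam.grab()                            # acquire over GigE
    wx, wy = pixel_to_world(*locate_part(image))  # helpers from sketches above
    stage.move_to(wx, wy, blocking=True)          # coordinated move via EtherCAT
    gripper.write(True)                           # close the gripper to pick
    panel.update(status=f"Picked part at ({wx:.1f}, {wy:.1f}) mm")
```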
Advanced vision-guided motion applications need high performance, determinism, tight synchronization, and custom event management. Today, CPUs and FPGAs provide previously unimaginable processing power. Processors running fast real-time systems can meet the timing and processing needs of advanced systems, while FPGAs enable hardware acceleration for time-critical vision processing tasks and provide the response rates needed to close control loops in motion tasks.
Customization is needed in new and unique applications in which the standard features of off-the-shelf products do not make the cut. Customization is often required in high-speed vision-guided motion systems, in visual servo systems, and in highly specialized applications, such as those found in the biomedical and life sciences industries. Typically, any customization added by the vendor is available at a premium price, and the price rises steeply with each customized feature. User customization, if available, often takes the form of register-level or firmware-level programming, requiring deep domain knowledge and familiarity with the vendor's hardware. For this reason, customization has always meant high integration costs.
A customizable real-time and FPGA programming platform is required to solve current and next-generation advanced vision-guided motion applications. Figure 4 shows an example of deterministic code from a visual servo application running on a real-time target. First, images from a GigE camera are processed to generate the pixel position. Then, the pixel position is converted to an actual position setpoint. The position setpoint is sent to a user-written custom proportional-integral (PI) loop to generate the velocity setpoint, which is then sent to the FPGA running the velocity loop for the motor drive. This modular architecture, on a real-time and FPGA platform, allows customization of each part of the application, from the processing of captured images to the position and current loops that control the motor drive.
Figure 4. Deterministic real-time code from a visual servo application.
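The loop in Figure 4 can be outlined as follows, reusing the locate_part and pixel_to_world helpers from the earlier sketches. The fpga handle, the gains, and the 1 kHz period are assumptions, and a deployed system would tune the gains and run the loop under a real-time scheduler rather than time.sleep().

```python
import time

# Sketch of the Figure 4 loop. The fpga handle, gains, and 1 kHz period are
# assumptions for illustration only.
KP, KI = 4.0, 0.5  # PI gains (illustrative)
DT = 0.001         # loop period in seconds (1 kHz)

def run_visual_servo(camera, fpga, stop_event):
    """Vision in, velocity setpoints out, per the Figure 4 description.
    stop_event is a threading.Event used to end the loop cleanly."""
    integral = 0.0
    while not stop_event.is_set():
        px, py = locate_part(camera.grab())      # 1. image -> pixel position
        setpoint, _ = pixel_to_world(px, py)     # 2. pixels -> position setpoint
        error = setpoint - fpga.read_position()  # position error (mm)
        integral += error * DT
        velocity = KP * error + KI * integral    # 3. user-written PI loop
        fpga.write_velocity_setpoint(velocity)   # 4. FPGA closes velocity loop
        time.sleep(DT)
```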
Third Consideration: Calibration Tools
Spatial calibration is the process of correlating the pixels in an acquired image to real-world units while accounting for errors in the imaging setup. It produces a mapping from each pixel in the acquired image to a real-world location. In a vision-guided motion system, the motion subsystem uses the position data generated from this mapping, so accurate spatial calibration is required for the system to perform as expected. Spatial calibration must account for variables such as perspective projection, lens distortion, lens defects, tangential distortion, irregular image surfaces and, in some cases, atmospheric conditions. Failing to account for any of these variables results in inaccurate calibration mappings.
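As a concrete illustration of building such a mapping, the following sketch uses OpenCV's standard camera-calibration routine, which solves for the radial (lens) and tangential distortion terms named above. The grid geometry and image paths are assumptions.

```python
import glob
import numpy as np
import cv2

# Real-world coordinates of the grid intersections: a 9x6 grid with 5 mm
# spacing (the grid geometry and image paths are assumptions).
PATTERN = (9, 6)
SQUARE_MM = 5.0
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

obj_points, img_points = [], []
for path in glob.glob("calibration_images/*.png"):  # assumed image set
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# calibrateCamera solves for the camera matrix plus the radial (lens) and
# tangential distortion coefficients named in the text.
rms, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print(f"RMS reprojection error: {rms:.3f} px")
```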
In a vision-guided motion system, the calibration must also account for any non-linearity in the motion system. Once the mechanical pieces of the system are in place, it is necessary to perform several sequences of moves and generate a mapping of pixels to motion units such as encoder counts. For maximum accuracy, the sequence of moves must provide quality coverage of all the axes in the coordinate system.
It is almost impossible to manually calibrate the setup to the accuracies required by vision-guided motion applications. Manual calibration is long, tedious, and inefficient, and it produces unsatisfactory results. Many of the calibration variables also change with time, so the system requires periodic recalibration, and manual recalibration can significantly increase the maintenance costs of the system. Software calibration tools, on the other hand, can generate highly accurate calibration mappings in a fraction of the time required by manual calibration. They also provide the capability to easily recalibrate the system at regular intervals.
Select software calibration tools that let users programmatically correct for lens and tangential distortion, perspective projection, and non-linearity in the mechanical system. Calibration can be implemented as part of the initialization routine of the application so that the system can be recalibrated at any time. A calibration training interface, a graphical wizard used to generate the calibration template for the system, can significantly reduce the time required to learn the full calibration model (Figure 5). Often, the interface also provides the capability to assess the accuracy of the calibration template by reviewing the mean error, standard deviation, and other factors.
Figure 5. A calibration training interface.
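Folding calibration into the initialization routine, as suggested above, can be as simple as the following sketch; the run_calibration helper and the error budget are hypothetical.

```python
# Sketch of calibration folded into the application's initialization routine,
# so recalibration becomes a function call rather than a manual procedure.
# run_calibration (e.g., the OpenCV routine above) and the error budget are
# assumptions for illustration.
MAX_RMS_PX = 0.25  # acceptance threshold for mean reprojection error (assumed)

def initialize_system():
    """Calibrate, or recalibrate, before entering the main control loop."""
    rms, mapping = run_calibration()  # returns error metric and pixel-to-world map
    if rms > MAX_RMS_PX:
        raise RuntimeError(
            f"Calibration RMS error {rms:.3f} px exceeds budget of {MAX_RMS_PX} px")
    return mapping
```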
Quality calibration tools generate the highly accurate calibration models required in vision-guided motion applications with very little time investment and risk to the project. They also provide the flexibility to programmatically recalibrate and improve the accuracy of the system after deployment. As such, they are crucial to the success of a vision-guided motion application.
To successfully implement a vision-guided motion system, integrators must carefully evaluate motion and vision vendors with a focus on interoperability and the quality of support services. Equally important, and related to the choice of vendor, is the software platform used to create the application. Integrators must focus on selecting a single programming environment that enables seamless hardware integration. When solving highly customized and advanced applications, integrators should select a software platform that makes customization possible through accessible real-time and FPGA programming. Finally, integrators need to opt for software calibration tools that perform quick and accurate initial calibration and allow programmatic recalibration of the system after deployment.