SYSTEM INTEGRATION

Motion Control and Safety

Machine Vision Passes the Bucket

Coordinating motion with machine vision presents a number of unique challenges. A real-world example shows some of the expected and unexpected considerations that must be built into a system for even a very specific set of actions.

BEN DAWSON, DALSA


Unloading empty paint buckets from a pallet and placing them on a conveyer line for filling is fast-paced, stressful and tedious work. One worker moves pallets from a truck, clips retaining straps, and rolls pallets into reach of the other worker. The other worker sweeps or grabs buckets and puts them onto the conveyer to the fill stations.

This de-palletizing process was a “pinch point” in a paint manufacturer’s production line—if the workers could not keep pace the line stopped. So the paint manufacturer asked Faber Industrial Technologies to design and build an automated de-palletizing system using Dalsa’s machine vision components.

As before, pallets are prepared and rolled into a safety cage by one worker, but the eyes and hands of the other worker are replaced by a machine vision system and a robot with custom “end effectors”—the robot’s “hands”—to unpack and pass buckets to production. To a person this task seems easy, but it is really quite challenging.

A pallet of cans is a stack of materials. The top layer is a “picture frame” of wood that, when bound with retaining straps, holds the layers of cans in place without damaging them. Each layer of cans starts with a protective sheet of cardboard. There are 56 cans in each layer and up to 6 layers of cans on a pallet. The bottom of the stack is the pallet itself. Each of these layers needs to be “seen” by the vision system in order to be picked up by the robot’s end effectors. To do this, the camera is mounted so it is looking down on the stack. The robot puts cans on the production line conveyer, and packing material in piles to be recycled (Figure 1).

When a new pallet is brought in, the vision system finds the position of the “picture frame” and directs the robot to remove it. Then the robot removes a layer of protective cardboard and the vision system finds the position of each can. Individual cans that are more than a few inches out of packed, hexagonal grid position are a problem, as we shall see. Once all the cans in a layer are found, the robot arm’s end effectors pick up half the cans in that layer at a time and place them on the conveyer for filling.

The stack of cans is about 6 feet high when first seen, but shrinks to about 8 inches high when all the cans are removed. Thus the vision system camera must adjust to focus on the current layer of cans. When the pallet itself is finally exposed, the robot uses another of its end effectors to pick up and remove the pallet.

The first step in any machine vision solution is to select lighting that emphasizes important part features and suppresses unwanted details. In this application the vision system is in an open-mesh safety cage and is therefore subject to uncontrolled, ambient illumination. Bright lights were positioned at an angle to the can tops to “wash out” most of the influence of ambient illumination. This results in images with bright ovals where the can rims reflect the light and dark centers for the insides of the cans.

The lens is specified by the field of view (FOV), the working distance and the camera specifications. The pallet stack footprint is 48 x 40 inches, but a slightly larger field of view, 58 x 44 inches, is used to allow for skew in a layer of cans and variations in the location of the pallet. Optical magnification, M, is the camera’s sensor size (SS) divided by the FOV, so M = SS / FOV = 0.00429. In other words, the lens must reduce the FOV by about 233 times to fit it onto the sensor.

The working distance (WD) is the distance from the camera to the top of the pallet stack. If the camera looks straight down on the pallet stack, the WD is about 48 inches. The lens focal length = WD * M / (M + 1), where M is the magnification computed above. This gives a lens focal length of 5.2 mm.
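As a quick check of this arithmetic, the sketch below (not from the article) reruns the magnification and focal-length formulas in Python. The 6.4 mm sensor width is an assumed value, since the article gives only the resulting M; the outputs land close to the figures quoted above.

```python
# Illustrative sketch, not from the article: the lens sizing arithmetic above.
# The 6.4 mm sensor width is an assumption (the article gives only M = SS / FOV = 0.00429).

MM_PER_INCH = 25.4

fov_width_in = 58.0          # field of view across the pallet, inches
sensor_width_mm = 6.4        # assumed sensor width, mm
working_distance_in = 48.0   # camera to top of a full pallet stack, inches

fov_width_mm = fov_width_in * MM_PER_INCH
magnification = sensor_width_mm / fov_width_mm                 # M = SS / FOV
reduction = 1.0 / magnification                                # how much the lens shrinks the scene

wd_mm = working_distance_in * MM_PER_INCH
focal_length_mm = wd_mm * magnification / (magnification + 1)  # focal length = WD * M / (M + 1)

print(f"M = {magnification:.5f}, reduction = {reduction:.0f}x, focal length = {focal_length_mm:.1f} mm")
```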

A 5.2 mm focal length is very short, so this is a “wide angle” lens, and wide angle lenses distort the image. You might have seen images taken with a “fish eye” lens, which is an extreme example of a wide angle lens. If you are trying to visually guide a robot’s “hands” into cans to pick them up, this optical distortion is a problem. As mentioned, the camera focus must also change as layers of the pallet stack are removed. This application also had to contend with perspective distortion, the apparent decrease in object size in proportion to 1 / WD.

Longer focal length lenses greatly reduce these problems. To use a 25 mm focal length lens the working distance must be about 20 feet, which a camera mounted directly above the stack could not achieve within the plant’s ceiling height. Therefore, to increase the working distance and to keep the camera out of the reach of the moving robot arm, the camera and lens were mounted on the ceiling to one side of the pallet stack, viewing it at an angle. This introduced more perspective distortion. The lens used is designed for surveillance and has motorized settings of focal length (zoom), focus and iris, so the vision system can adjust the field of view, focus (on the can rims) and image contrast as layers of cans are removed.

Optical distortions due to the lens and perspective distortion are corrected by building calibration tables for each layer of cans and for each can in a layer. The rims or interiors of the cans are used to find the center position of each can in a layer, and then the calibration tables are applied to correct the center position for the optical and perspective distortion (Figure 2).
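A minimal sketch of this kind of table-based correction is shown below. It is not the article’s Sherlock implementation; the reference-point coordinates are hypothetical, and it assumes one calibration table per layer height that maps pixel positions of known targets to their true positions on the floor.

```python
# Illustrative sketch, not the Sherlock implementation: correct a measured can
# center using a calibration table for the current layer height. The table maps
# pixel positions of known reference targets (hypothetical values below) to their
# true positions in inches.
import numpy as np
from scipy.interpolate import griddata

pixel_points = np.array([[100, 120], [620, 110], [115, 580], [640, 575], [370, 345]], dtype=float)
world_points = np.array([[0, 0], [40, 0], [0, 32], [40, 32], [20, 16]], dtype=float)

def correct_center(pixel_xy):
    """Interpolate within the calibration table to undo lens and perspective distortion."""
    x = griddata(pixel_points, world_points[:, 0], [pixel_xy], method="linear")[0]
    y = griddata(pixel_points, world_points[:, 1], [pixel_xy], method="linear")[0]
    return float(x), float(y)

print(correct_center((360.0, 350.0)))   # corrected (x, y) of a detected can center
```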

Dalsa’s Sherlock machine vision software finds the can rims or the dark “blobs” that indicate can interiors. Because of the somewhat uncontrolled lighting, both methods are used at different pallet stack levels. The vision system then reports the location of can centers to the robot and the robot end effectors are positioned so they insert into the cans. The end effectors expand to grasp the cans from the inside, and the cans are lifted by the robot arm and placed on the fill line conveyer.

To find the rims of the cans an “edge detector” is applied to produce an image that only shows the edges of objects. Then a circle Hough transform is applied to accumulate edges that could be part of a circular can rim. The Hough transform is a voting scheme. Each edge pixel “votes” as to what circles it could be a part of and the votes are tallied in an “accumulator space.” Peaks in this accumulator space indicate circles with a “winning” number of votes. The Hough transform is robust to image noise but requires significant computation time. The center of the can opening is returned by the Hough transform. The radius is also returned but not used.
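The article’s system does this inside Dalsa’s Sherlock software. Purely as an illustration of the general technique, the sketch below uses OpenCV’s circle Hough transform instead (an assumed substitute, with an assumed image file and parameter values) to report can-rim centers.

```python
# Illustrative sketch of the technique only; OpenCV stands in for Dalsa's Sherlock,
# and the file name and parameter values are assumptions.
import cv2
import numpy as np

image = cv2.imread("pallet_layer.png", cv2.IMREAD_GRAYSCALE)
blurred = cv2.medianBlur(image, 5)          # suppress noise before the edge/voting stages

# HoughCircles runs an edge detector internally; each edge pixel then "votes" for
# the circle centers it could lie on, and peaks in the accumulator are reported.
circles = cv2.HoughCircles(
    blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=60,
    param1=100,                  # Canny high threshold for the edge stage
    param2=30,                   # accumulator (vote) threshold for accepting a circle
    minRadius=25, maxRadius=45,  # expected rim radius range in pixels (assumed)
)

if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        print(f"can center ({x}, {y}); radius {r} px is returned but not used")
```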

When the lighting gives dark interiors for the cans, these dark areas are detected using connectivity analysis, often called “blob analysis.” An intensity threshold is applied to the image so that the interior of the cans is mapped to 1 (or some non-zero value) and the rest of the image is set to 0. Then areas of connected (“touching”), non-zero pixel values are found. The center of gravity of a “blob” of connected pixels is the center of a can’s opening (Figure 3).
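Again as an illustration of the technique rather than the deployed code (OpenCV stands in for Sherlock; the threshold and minimum-area values are assumptions), the sketch below thresholds the image and uses connectivity analysis to report the center of gravity of each dark can interior.

```python
# Illustrative sketch of connectivity ("blob") analysis; threshold and minimum-area
# values are assumptions.
import cv2

image = cv2.imread("pallet_layer.png", cv2.IMREAD_GRAYSCALE)

# Map dark can interiors to 255 and everything else to 0.
_, binary = cv2.threshold(image, 60, 255, cv2.THRESH_BINARY_INV)

# Label connected regions of non-zero pixels; centroids[i] is the blob's center of gravity.
num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(binary, connectivity=8)

for label in range(1, num_labels):            # label 0 is the background
    if stats[label, cv2.CC_STAT_AREA] > 500:  # ignore small noise blobs
        cx, cy = centroids[label]
        print(f"can opening center at ({cx:.1f}, {cy:.1f})")
```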

The end effectors on the robot arm are mechanically complex and constitute a large part of the design effort. The can pick-up end effectors are on a fixed, hexagonal grid to match the pattern of cans on the pallet. This means that if a can is more than a few inches off this hexagonal grid, it can be struck when the robot tries to insert the pick-up end effectors into the cans.

The assumption here was that delivered cans could have layers skewed with respect to other layers, but would have no cans more than an inch or two off of the hexagonal grid alignment. This assumption failed in practice: when the end effectors came down on a layer, cans that were off the grid were crushed or sent flying.

No one was eager to redesign the end effectors to allow more can position tolerance, and having the robot pick up one can at a time would have been too slow. Three measures were taken to reduce the “bucket kicking” problem. First, the remaining worker had to push wayward cans into a tight packing after removing the retaining straps. Second, the vision system was improved to detect cans more than a few inches off the hexagonal grid and stop the robot from trying to pick them up; the worker then enters the safety cage, which halts the robot, and straightens out the cans. These two measures reduced can alignment problems, but the robot still occasionally stops, and this can stop the filling line.
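The off-grid check itself is simple geometry. The sketch below is my illustration rather than the deployed code; the tolerance and the grid coordinates are assumptions. It flags any detected can whose nearest nominal grid position is farther away than the end effectors can accommodate.

```python
# Illustrative sketch, not the deployed code: flag cans that sit farther from their
# nominal hexagonal-grid position than the end effectors can tolerate.
import numpy as np

TOLERANCE_IN = 1.5  # assumed maximum offset the end effectors can accommodate, inches

def off_grid_cans(detected_centers, nominal_grid):
    """Return indices of detected cans whose nearest nominal grid position
    is more than TOLERANCE_IN away; the robot is stopped for these."""
    detected = np.asarray(detected_centers, dtype=float)   # (n, 2) corrected world coordinates
    nominal = np.asarray(nominal_grid, dtype=float)        # (m, 2) hexagonal grid positions
    flagged = []
    for i, center in enumerate(detected):
        distances = np.linalg.norm(nominal - center, axis=1)
        if distances.min() > TOLERANCE_IN:
            flagged.append(i)
    return flagged

# Example with hypothetical coordinates: the third can is ~3 inches off its position.
grid = [(0.0, 0.0), (6.5, 0.0), (3.25, 5.63)]
centers = [(0.2, -0.1), (6.4, 0.3), (6.2, 5.5)]
print(off_grid_cans(centers, grid))   # -> [2]
```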

The last measure was to require the producer of the empty cans to be more careful about the packaging and shipping of the cans so that off-the-grid experiences were reduced. One suggestion, yet to be implemented, was to add horizontal strapping on each can layer in addition to the vertical strapping between the “picture frame” and the bottom pallet.

As engineers, we enjoy solving the hard problems while not paying attention to assumptions that seem “obvious” and so are less interesting. As we have seen here, these unexamined assumptions are often the ones that come back to bite you.

DALSA Corporation
Billerica, MA.
(978) 670-2002.
[www.DALSA.com].