EDITOR'S REPORT

Research and Progress in Robotics

True Robots Differ Substantially from Other Automated Systems

Yes, it can put those cans on the pallet using its arm and camera like a champ. But can it then run over to the fridge and get me a beer—avoiding my kids’ toys in the way? What makes a system really a robot?

TOM WILLIAMS, EDITOR-IN-CHIEF

The topic of robotics comes with a certain number of preconceived notions. At one extreme, robots are ambulatory, linguistically endowed, anthropomorphic intelligent machines, the stuff of science fiction. At the other, the word is applied to almost any semi-autonomous automated control system, such as those found on the factory floor. In actuality, today's robots are neither of these things. Rather, they are systems at some point in the transition from mundane machine to as far toward the science fiction image as technology and ingenuity can take them. And they remain far from that goal, despite some fascinating advances.

So how do we differentiate between an automated machine and a robot? According to Siddhartha Srinivasa, Senior Research Scientist at Intel, two things really distinguish robots: the ability to perform numerous adaptive, general-purpose tasks and the ability to operate in uncertain, unstructured environments. An automated factory machine, for example (and this includes those electromechanical arms often referred to as "industrial robots"), works very well in a structured environment such as a factory floor, performing the single task that has been defined for it. Those tasks can, of course, be changed by switching out equipment (a welding tip for a paint sprayer) and loading a different program.

Robots, on the other hand, are distinguished by their ability to perform many general-purpose tasks, including tasks that are broadly similar but differ in terms of objects, distances and other variables. For example, a robot that can pick up a cup from a coffee table and hand it to you should be equally capable of moving across the room, picking up a beer mug from a countertop and bringing it back to you without reprogramming. That same robot, in moving across the room, should be able to recognize and avoid obstacles even if they have recently been moved. These two little stipulations bring with them an enormous amount of added complexity, the need for large amounts of computational power, and creative developments in machine intelligence. Such machines need to be automatically adaptable both at the task level and at the level of the surrounding environment.

One big challenge in writing algorithms for robotics, according to Srinivasa, is "to try to write them as general as possible using words that have very general meaning so that at the application level they can be put together in different ways to make different paragraphs, stories and meanings." He calls these "building blocks of autonomy": with them, the application developer does not, for example, have to worry about how many degrees of freedom the arm has, but can simply specify instructions such as "Pick up an object and put the object there and don't spill the coffee in the object."
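To make the idea concrete, here is a minimal sketch, in Python, of what such building blocks might look like. The names used here (Robot, pick_up, carry_level and so on) are purely illustrative and are not HERB's actual API; the point is that application code composes high-level primitives while the underlying planner worries about joints and grasps.

```python
# Hypothetical "building blocks of autonomy." None of these names come
# from HERB; they only illustrate the level of abstraction involved.

class Robot:
    def pick_up(self, obj_name):
        """Plan and execute a grasp; the planner, not the caller,
        worries about degrees of freedom and approach angles."""
        return obj_name  # handle to the grasped object

    def carry_level(self, obj):
        """Constrain motion so the object stays upright, e.g., so a
        full coffee cup does not spill."""
        return obj

    def place(self, obj, location):
        """Put the held object down at a named location."""
        pass

# Application code composes the blocks like words into sentences:
robot = Robot()
mug = robot.pick_up("coffee mug")
robot.carry_level(mug)
robot.place(mug, "kitchen table")
```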

Then, of course, there is the question of how one uses such a level of abstraction to instruct the robot to “Pick up the glass.” That concept is translated in the human brain from its linguistic generality to very specific arm and hand motions that carry out the task for any number of specific locations and circumstances. By the same token, a robotic system must be able to take a general description of picking up the glass and apply it to many specialized instances.

In the case of the robot used by Intel Labs in Pittsburgh, HERB, the Home Exploring Robotic Butler (Figure 1), this is done by literally taking the hand and arm of the robot, moving them to the object, wrapping the fingers of the hand around the object and lifting it. That involves, in this one teaching instance, a large series of specific movements recorded by motor encoders and other devices within the machine. These are associated with algorithms stored and classified in a very large database. The robot tries to capture the complete state: what the object looks like, where it is located in the robot's coordinate space, and how the arm is moving. From this example, the robot builds a model at a higher level of abstraction within its brain. This internal model is then used to search out and apply specific algorithms and values to fit a different instance of "Pick up the glass."
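In rough outline, that teaching step might be captured as in the sketch below. The robot accessor methods are assumptions made for illustration; HERB's actual learning pipeline is far richer.

```python
# Sketch of recording a kinesthetic demonstration. The robot accessors
# (read_joint_encoders, perceive_object_pose) are hypothetical.
import time

def record_demonstration(robot, duration_s=10.0, rate_hz=100.0):
    """Sample the arm and the perceived object while a human guides
    the robot through the task, producing a time-stamped trajectory."""
    samples = []
    t_end = time.time() + duration_s
    while time.time() < t_end:
        samples.append({
            "t": time.time(),
            "joints": robot.read_joint_encoders(),
            "object_pose": robot.perceive_object_pose(),
        })
        time.sleep(1.0 / rate_hz)
    return samples

def build_task_model(samples):
    """Abstract the trajectory: store the grasp relative to the object
    rather than as absolute joint angles, so the same model can be
    reapplied when the glass sits somewhere new."""
    return {
        "relative_grasp": [
            (s["joints"], s["object_pose"]) for s in samples
        ]
    }
```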

Figure 1
The Home Exploring Robotic Butler—HERB—is an Intel research project for proof of concept development in robotics. The system has two arms, a laser system and a visual system for navigation and recognition, and a hierarchical software architecture for adapting task models to particular situations for execution.

In addition to controlling major peripherals like its arms, HERB must also integrate and constantly update information about its surroundings. To that end, it incorporates a vision system and a laser-based coordinate system. The laser sweeps light pulses around the robot and measures the returning beam to generate 40,000 points per second, and the data from the laser system is used to build a 3D model of the robot's surrounding world. In addition, a camera running vision processing algorithms is used to recognize and manipulate objects. The robot can pick up an object, twist it around and build a 3D model that is stored in its database.
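In outline, integrating those laser returns into a persistent world model can be as simple as accumulating points into a grid of small cells, as in this sketch. The 40,000-point rate comes from the description above; the 5 cm cell size and the point format are assumptions.

```python
# Sketch of folding laser range points into a 3D occupancy map.
from collections import defaultdict

VOXEL_SIZE = 0.05  # 5 cm cells; an assumed resolution

def voxel_key(x, y, z):
    """Quantize a 3D point into an integer voxel index."""
    return (int(x // VOXEL_SIZE),
            int(y // VOXEL_SIZE),
            int(z // VOXEL_SIZE))

def update_world_model(occupancy, points):
    """Fold a batch of (x, y, z) laser points into the map; cells hit
    repeatedly become high-confidence obstacles."""
    for (x, y, z) in points:
        occupancy[voxel_key(x, y, z)] += 1

occupancy = defaultdict(int)
# Roughly 40,000 points arrive each second, e.g.:
update_world_model(occupancy, [(1.2, 0.4, 0.9), (1.21, 0.4, 0.9)])
```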

Srinivasa stresses that recent advances in compute power have been a tremendous boon for robotics, especially in providing the ability to search large spaces to find the proper motion algorithms for a given task. The compute power in the robot is also highly distributed, with motor controllers at the lowest level, right at the robot's joints. These are very fast, special-purpose devices that talk to the motors at 1,000 Hz, and their algorithms sit at the lowest level of the software hierarchy. The next level consists of behavioral loops, such as image acquisition, that run at about 10 Hz. At the highest level are the planning algorithms that take the data and make longer-term plans to carry out a task like picking up a glass.
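That hierarchy might be organized as in the following sketch, with each layer running on its own clock. The loop bodies are placeholders, and the planner's rate is an assumed figure; the 1,000 Hz and 10 Hz rates are the ones quoted above.

```python
# Sketch of a rate-layered software hierarchy: 1,000 Hz joint control,
# ~10 Hz behaviors, and slow deliberative planning on top.
import threading
import time

def run_loop(step, rate_hz):
    """Invoke step() at a fixed rate on a background thread."""
    def loop():
        while True:
            step()
            time.sleep(1.0 / rate_hz)
    threading.Thread(target=loop, daemon=True).start()

def servo_joints():   # lowest level: talk directly to the motors
    pass

def acquire_image():  # behavioral level: perception loops
    pass

def deliberate():     # highest level: long-horizon task planning
    pass

run_loop(servo_joints, rate_hz=1000)  # joint controllers
run_loop(acquire_image, rate_hz=10)   # behaviors
run_loop(deliberate, rate_hz=0.5)     # deliberation (assumed rate)
time.sleep(5)  # let the loops run briefly in this demo
```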

To do this, the robot must execute one of its general-purpose models, perhaps named "pick up the glass," and adapt it to the current situation. Thus it will not be executing the exact same routines that were invoked when it learned the task. Rather, it will assess the situation, given the coordinates of surrounding objects from the laser system and images from the vision system, to invoke the proper model. Then it will plan the execution of the task by searching its database for the most appropriate algorithms for that particular instance of the task and arranging them in a sequence, setting variables for those algorithms that have been computed from the coordinate space.
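Schematically, that adaptation step could look like the sketch below. The database, model and motion objects are hypothetical stand-ins for HERB's stored algorithms; what matters is the retrieve, score and bind sequence just described.

```python
# Sketch of adapting a stored task model to the current scene.
# database, model, and motion objects are illustrative placeholders.

def plan_from_model(model_name, scene, database):
    """Return an ordered list of motion primitives, each bound to
    coordinates measured in the current scene."""
    model = database.lookup_model(model_name)       # e.g. "pick up the glass"
    plan = []
    for step in model.steps:                        # abstract task steps
        candidates = database.find_motions(step)    # stored algorithms
        best = max(candidates,
                   key=lambda m: m.fitness(scene))  # score against the scene
        plan.append(best.bind(scene.coordinates))   # fill in the variables
    return plan
```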

It is this adaptability that sets a robotic system apart from a simpler semi-autonomous automated system. The robot selects a method that is similar to what it has learned before. It then executes that method while checking whether the object is still there (vision system) and whether it is feeling the forces it should be feeling (tactile feedback), all while moving its arm according to the algorithms that have been set up based on the coordinate space measured by the laser system. If it notices an error, it propagates that error back to the "brain," which is the planning level and which contains a state machine that reacts to errors. Recognizing, interpreting and correcting for errors is one of the more advanced areas of robotics research.
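In code, that monitor-and-report loop might be sketched as follows, with the sensing checks and the planner interface as assumed names; real error recovery, as noted, remains an active research area.

```python
# Sketch of closed-loop execution with errors propagated to the planner.
# All method names here are hypothetical.

def execute_plan(robot, plan, planner):
    for motion in plan:
        robot.run(motion)
        if not robot.object_still_visible():        # vision check
            planner.report_error("object lost")     # planner's state
            return False                            # machine reacts
        if not robot.forces_as_expected():          # tactile check
            planner.report_error("unexpected force")
            return False
    return True
```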

Of course, not all robots—even at the research level—use exactly the same mechanisms as the Intel HERB, but to be truly robots as distinguished from automated control systems, they must be able to generalize, adapt and manage an unstructured environment. One of the best moments in his research, according to Srinivasa, was “when I had never programmed the robot to pick up a given object, but it figured it out from what it had learned before.”

The question then naturally arises, "Where are we going and what are we getting from robotics research?" Interestingly, much of the long-term goal seems to be directed at things like personal and home robots to take care of ordinary chores. The word "robot," after all, comes from the Czech word "robota," which means "work" or "drudgery." Obviously, the same class of machines can be, and is being, used for work in harsh environments like space. There are aspects of robotics in unmanned aerial vehicles (UAVs) as well, even though these are also subject to direct human control.

There are annual competitions involving autonomous vehicles and autonomous submersible vehicles, all of which have attractive possibilities for applications. Although we do not yet have commercially available robotic cars, we do have some advanced automobiles, like the Lexus, that are capable of autonomous parallel parking. This latter task must meet the more stringent criteria for a robotic system in that it must adapt a general task, parallel parking, to each individual situation, especially if that situation involves a Hell's Angels bike. There are more immediate applications in health care as well, and other aspects of current research are being examined for possible spin-offs into applications. Along the way to C-3PO we will certainly find creative and useful ways to make use of ever more autonomous and adaptable electromechanical systems, no matter what we call them.

Intel
Santa Clara, CA.
(408) 765-8080.
[www.intel.com].