INDUSTRY INSIGHT

Mobile Robots

Mobile Robotics: Moving Robots Forward

Mobile robots require a number of subsystems for sensing, decision making and actuation that must all work in concert. Some of these computational tasks must also meet real-time requirements.

MEGHAN MECKSTROTH, NATIONAL INSTRUMENTS

Mobile robotics is one of the fastest growing fields of engineering. The U.S. Department of Defense has mandated that by 2015 one third of all military vehicles be autonomous. The International Federation of Robotics predicts that by 2011 over 12 million service robots will be sold for personal use. Cutting-edge sensor technologies such as high-definition LIDAR and stereo vision, along with the evolution of robotics architectures and development tools, are allowing these complex devices to become increasingly common.

Mobile robots are most commonly found performing tasks that are dull, dirty or dangerous. The U.S. military uses robotic systems for dangerous tasks, such as walking through minefields, deactivating bombs or clearing out hostile buildings. Farmers use mobile robots to perform dirty tasks, including harvesting, collecting crop data, weeding and micro spraying. Hospitals use mobile robots to deliver specimens to laboratories and for assistive care. Mobile robots are even used to perform routine chores around the house, such as vacuuming and cleaning pools and gutters.

Almost every type of mobile robot operates in a different environment, exhibits different behavior, and connects to different sensors and actuators. As a result, robots are often developed on different hardware platforms with different software development tools. A control system proven on one robot is therefore difficult to transfer to another, because the APIs for sensing, steering and motor control differ in syntax and semantics across robot hardware.

When designing, prototyping and deploying mobile robotics applications, the following aspects of development can be a challenge:

• Combining deterministic control and high level intelligence

• Integration with sensors and actuators

• Translating high-level algorithms to embedded hardware

• Displaying a custom user interface with large amounts of data

• Utilizing multicore hardware

By understanding basic components of mobile robot control systems and investing in a robotics development platform that addresses common challenges, engineers and scientists can quickly move forward with the development of their mobile robots.

Embedded Control Systems

Mobile robots come in many shapes and sizes, and although there is no single definition of a mobile robot, they all contain three main components. First, there is some combination of sensors for understanding the environment. Second, there is an onboard computer for planning and decision making. Finally, for the robot to be mobile, there must be some form of locomotion that allows it to act on its environment.

The centerpiece of every robotic control system is an onboard controller. The controller makes decisions based on the available sensor data and sends instructions to its motors to control the robot’s movement. Robotic control systems require the following subsystems:

An interface to I/O - Robotic control systems have to communicate with a wide variety of sensors and actuators. Key sensors such as LIDAR and GPS commonly use a USB or serial interface, while motors might require a digital port or CAN interface.

Low-level control - PID loops or state-space controllers perform processing based on sensor feedback. For example, a PID loop processes encoder feedback to keep the robot driving in a straight line (a sketch of such a loop follows this list). This type of control requires deterministic response and tight integration with I/O.

Autonomous navigation system - A mobile robot has subsystems for perception and planning. Once a robot perceives, or understands, its sensor data, the data is passed to a higher-level planning module. Planning can be broken down further: low-level planning, such as stopping when an obstacle is present, and high-level planning, such as making decisions regarding the mission of the robot.

User interface - User interfaces are often used to display information regarding a robot's health, such as power consumption levels, and to notify the user of hardware failures. Both remote and local interfaces are common.
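To make the low-level control item concrete, here is a minimal PID sketch in C. It is illustrative only: the gains are untuned, and a real system would run the update at a fixed rate on the FPGA or real-time processor.

/* Minimal PID sketch. Driving straight reduces to feeding this loop
   the difference between left and right encoder counts each cycle and
   applying the output as a steering correction. */
typedef struct {
    double kp, ki, kd;   /* proportional, integral, derivative gains */
    double integral;     /* accumulated error */
    double prev_error;   /* error from the previous cycle */
} pid_state;

double pid_update(pid_state *pid, double error, double dt)
{
    pid->integral += error * dt;
    double derivative = (error - pid->prev_error) / dt;
    pid->prev_error = error;
    return pid->kp * error
         + pid->ki * pid->integral
         + pid->kd * derivative;
}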

Finding an embedded hardware platform that combines the deterministic control and the high-level intelligence required for mobile robotic applications can be challenging. Microprocessors are often considered due to their low cost and small form factor. A processor is ideal for running a user interface and an autonomous navigation system, but using a processor to control hardware interfaces and signal-processing systems while executing higher-level tasks can be computationally intensive. Field Programmable Gate Arrays (FPGAs) are ideal for controlling hardware interfaces and signal processing. However, high-level tasks such as navigation can be very complicated to implement on an FPGA.

An ideal embedded solution for a mobile robot control system pairs an FPGA with a processor and communicates with a remote HMI for displaying data to a user. This architecture lets robot designers implement the hardware interfaces and signal processing in logic, freeing the processor to handle high-level tasks such as navigation. It also allows time-critical control algorithms to be implemented entirely in hardware on the FPGA. Figure 1 shows an example of how a robot architecture could be implemented on an embedded system with an integrated FPGA and processor.
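The division of labor is easiest to see at the boundary between the two devices: the processor computes setpoints, and the FPGA's hardware loop consumes them every cycle. The following C sketch uses a hypothetical memory-mapped register; on an actual RIO target this exchange would go through the vendor's FPGA interface API rather than a raw pointer.

#include <stdint.h>

#define FPGA_BASE     0x40000000u           /* hypothetical mapped base address */
#define REG_SETPOINT  (FPGA_BASE + 0x10u)   /* hypothetical setpoint register   */

/* Processor side: publish a new wheel-velocity setpoint. The FPGA's
   PID/PWM loop reads this register each cycle, leaving the processor
   free to run perception, planning and the user interface. */
static inline void fpga_write_setpoint(int32_t ticks_per_sec)
{
    volatile int32_t *reg = (volatile int32_t *)(uintptr_t)REG_SETPOINT;
    *reg = ticks_per_sec;
}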

An example of a complex robotic control system requiring both high-level modules for complex algorithms and low-level control is Alliance Spacesystems' Aerospace Robotics Testbed (ART). Alliance Spacesystems has been providing solutions for robotics, mechatronics and embedded systems for the past 12 years. ART is used for prototyping robotic arms for future planetary rovers.

The biggest challenge that Alliance Spacesystems faced during development was finding a system that allowed them to analyze the robotic arm system (including hardware and controller), visualize the arm motion within the workspace, compute the inverse kinematics and dynamics, and at the same time provide an intuitive user interface and deal with a large amount of telemetry from the arm. It also required the ability to rapidly prototype a system with numerous, custom interfaces.

Because it was such a customized system, Alliance Spacesystems initially used several processors and various boards to accomplish each task. It was very much a systems engineering problem, and supporting all the different interfaces and features became challenging. ART required hardware capable of executing high-level inverse kinematics algorithms as well as low-level control algorithms for the robotic arms.

Alliance Spacesystems moved to an NI CompactRIO, which uses a RIO (reconfigurable I/O) architecture consisting of an integrated embedded real-time processor and FPGA. The FPGA was used for the low-level control algorithms for the robotic arms, and the embedded processor was used for the inverse kinematics. Alliance Spacesystems expanded the platform by plugging in several I/O modules, including a CAN card to interface to the intelligent motor controller, an RS-232 interface and an SD memory module for storing data (Figure 2).

In addition to off-loading low-level control to an FPGA, mobile robots also require multiple tasks to execute in parallel on a single processor. Tasks such as perception and planning vary in complexity, priority and computation time. It can be challenging to execute all of these tasks while still achieving high performance and deterministic response. Multicore embedded devices can significantly improve system performance by distributing tasks such as perception and planning across multiple cores. Additional cores provide expanded resources for real-time processing of more complex algorithms. For critical applications, multicore embedded processors enable the user to reserve an entire core for a high-priority task.
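On a Linux-based real-time target, dedicating a core to a critical task can be sketched with the POSIX CPU-affinity API. The core number and surrounding task structure here are illustrative, not a prescription for any particular controller.

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

/* Pin the calling thread to a single core so that lower-priority work
   scheduled on the remaining cores cannot interfere with it. */
int reserve_core(int core_id)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core_id, &set);
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

Combined with isolating that core from the general scheduler (for example via kernel boot parameters), the effect is an entire core reserved for the high-priority control task.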

Software Development Tools

Software developers have to bridge a difficult gap between the design of algorithms for use in a robot and their actual implementation in hardware. The expanding use cases for modern robots demand increasingly complex algorithms with highly responsive control for tasks like navigation and localization.

Many designers use high-level mathematical software for testing and modeling algorithms before migrating to embedded C and assembly programming on low-cost, low-profile hardware. This poses a significant challenge, as these environments often have different programming syntaxes and do not provide access to the same analysis and mathematics libraries. Control algorithms therefore often have to be re-implemented and re-tested on the embedded hardware.

The task of prototyping robots is expedited if designers have access to real-world data, such as sensor input, as well as the ability to control actuators such as motors. This is made possible by high-level languages that abstract the communication with hardware and provide access to the same analysis and mathematics libraries on the desktop as well as on an embedded target. Tool chains that are ideal for robotics provide the ability to migrate code developed using modeling and simulation to an actual real-time environment or an FPGA.
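The abstraction amounts to programming against an interface rather than a device. A C sketch of the idea, in which the struct members and both backends are hypothetical:

/* The navigation algorithm depends only on this interface, so the same
   code can be linked against a desktop simulator during modeling and
   against real drivers on the embedded target. */
typedef struct {
    int  (*read_scan)(float *ranges_m, int n);      /* fill a LIDAR scan  */
    void (*set_velocity)(float left, float right);  /* wheel speeds, m/s  */
} robot_hw;

void step_navigation(const robot_hw *hw)
{
    float ranges[181];
    if (hw->read_scan(ranges, 181) == 0)
        hw->set_velocity(0.5f, 0.5f);   /* placeholder planning result */
}

Swapping the simulator backend for the hardware backend requires no change to the algorithm itself, which is what allows code developed during modeling and simulation to migrate unmodified.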

An example of a software development platform that abstracts the communication with embedded hardware is NI LabVIEW. NI LabVIEW is a high-level development environment combining graphical and textual programming that enables domain experts to rapidly design and develop complex robots. NI LabVIEW is easily deployed to embedded targets ranging from integrated real-time processor and FPGA solutions to custom 32-bit microcontrollers.

NI LabVIEW has been used in a variety of robotic applications, including VictorTango, Virginia Tech's autonomous vehicle that placed third in the DARPA Urban Challenge, and the Vecna BEAR, a robot used for battlefield extraction and retrieval. A simpler application, for the sake of example, is Nicholas, shown in Figure 3. Nicholas is a demonstration platform that navigates an environment while avoiding obstacles. This small-scale unmanned ground vehicle uses an NI Single-Board RIO as the control system and a Hokuyo LIDAR sensor for perceiving its surroundings. The motor control is executed on the built-in FPGA using pulse-width modulation (PWM), and the obstacle avoidance algorithm is executed on the deterministic real-time processor.
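At its simplest, the real-time side of an application like Nicholas checks the forward sector of each LIDAR scan before committing to a motion command. A stripped-down C sketch follows; the scan layout and distance threshold are assumptions for illustration, not the demo's actual parameters.

#include <stdbool.h>

#define SCAN_POINTS 181      /* assumed 1-degree resolution over 180 degrees */
#define STOP_DIST_M 0.5f     /* assumed minimum clearance before stopping   */

/* Return false if anything in the forward 60-degree sector is too close,
   in which case the planner stops or replans around the obstacle. */
bool path_is_clear(const float ranges_m[SCAN_POINTS])
{
    for (int i = 60; i <= 120; i++)
        if (ranges_m[i] < STOP_DIST_M)
            return false;
    return true;
}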

The Hokuyo LIDAR driver, which can be downloaded from ni.com/idnet, plugged into the existing application without requiring additional modifications. By forming partnerships with leading vendors such as Hokuyo, Velodyne, Garmin, Lynxmotion and Maxon Motors, National Instruments is working toward the development of an integrated, open robotics development platform. This allows robotics engineers to spend less time re-implementing an interface to external hardware so they can focus on the high-level mission.

Mobile robots are complex systems that are used in many different applications and industries. These devices utilize advanced control systems that require deterministic control, high-level intelligence and tight integration with I/O. In addition, increasingly complex algorithms for tasks such as navigation require high-level tools and reusable code. Robotics engineers can decrease development time by investing in an open platform and understanding basic mobile robot architectures.

National Instruments
Austin, TX.
(512) 794-0100.
[www.ni.com].