
Chapter 62 — Intelligent Vehicles

Alberto Broggi, Alex Zelinsky, Ümit Özgüner and Christian Laugier

This chapter describes the emerging robotics application field of intelligent vehicles – motor vehicles that have autonomous functions and capabilities. The chapter is organized as follows. Section 62.1 provides a motivation for why the development of intelligent vehicles is important, a brief history of the field, and the potential benefits of the technology. Section 62.2 describes the technologies that enable intelligent vehicles to sense vehicle, environment, and driver state, work with digital maps and satellite navigation, and communicate with intelligent transportation infrastructure. Section 62.3 describes the challenges and solutions associated with road scene understanding – a key capability for all intelligent vehicles. Section 62.4 describes advanced driver assistance systems, which use the robotics and sensing technologies described earlier to create new safety and convenience systems for motor vehicles, such as collision avoidance, lane keeping, and parking assistance. Section 62.5 describes driver monitoring technologies that are being developed to mitigate driver fatigue, inattention, and impairment. Section 62.6 describes fully autonomous intelligent vehicle systems that have been developed and deployed. The chapter is concluded in Sect. 62.7 with a discussion of future prospects, while Sect. 62.8 provides references to further reading and additional resources.

Driver fatigue and inattention

Author  Alberto Broggi, Alex Zelinsky, Ümit Özgüner, Christian Laugier

Video ID : 840

This video demonstrates real-time detection of driver inattention and distraction, including that caused by fatigue. The system uses a monocular vision system and infrared pods to achieve robust operation in all lighting conditions.
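A widely used vision-based drowsiness measure in this line of work is PERCLOS, the fraction of time the eyes are mostly closed over a sliding window. The sketch below illustrates the idea; it is not necessarily the metric used by the system in the video, and the input signal, function name, and thresholds are illustrative assumptions.

```python
import numpy as np

def perclos(eye_closure, fps, window_s=60.0, closed_thresh=0.8):
    """Fraction of frames in a sliding window with the eye >= 80% closed.

    eye_closure: per-frame closure estimates in [0, 1] from the tracker
    (hypothetical input; a real system derives this from eyelid landmarks).
    """
    win = max(1, int(window_s * fps))
    closed = (np.asarray(eye_closure, dtype=float) >= closed_thresh).astype(float)
    kernel = np.ones(win) / win
    return np.convolve(closed, kernel, mode="same")  # sliding-window mean

# Example: 2 minutes at 30 fps with a sustained eye-closure episode
signal = np.zeros(3600)
signal[1500:2100] = 0.9                              # drowsy episode
print(perclos(signal, fps=30, window_s=10).max())    # ~1.0 during the episode
```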

Chapter 13 — Behavior-Based Systems

François Michaud and Monica Nicolescu

Nature is filled with examples of autonomous creatures capable of dealing with the diversity, unpredictability, and rapidly changing conditions of the real world. Such creatures must make decisions and take actions based on incomplete perception, time constraints, limited knowledge about the world, cognition, reasoning and physical capabilities, in uncontrolled conditions and with very limited cues about the intent of others. Consequently, one way of evaluating intelligence is based on the creature’s ability to make the most of what it has available to handle the complexities of the real world. The main objective of this chapter is to explain behavior-based systems and their use in autonomous control problems and applications. The chapter is organized as follows. Section 13.1 overviews robot control, introducing behavior-based systems in relation to other established approaches to robot control. Section 13.2 follows by outlining the basic principles of behavior-based systems that make them distinct from other types of robot control architectures. The concept of basis behaviors, the means of modularizing behavior-based systems, is presented in Sect. 13.3. Section 13.4 describes how behaviors are used as building blocks for creating representations for use by behavior-based systems, enabling the robot to reason about the world and about itself in that world. Section 13.5 presents several different classes of learning methods for behavior-based systems, validated on single-robot and multirobot systems. Section 13.6 provides an overview of various robotics problems and application domains that have successfully been addressed or are currently being studied with behavior-based control. Finally, Sect. 13.7 concludes the chapter.
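As a flavor of how behaviors combine into a controller, the following is a minimal priority-based arbitration sketch; priority arbitration is only one of the coordination schemes the chapter covers, and all behavior names and parameter values here are illustrative.

```python
def avoid_obstacle(percepts):
    """Highest-priority behavior: veer away when an obstacle is near."""
    if percepts["obstacle_dist"] < 0.5:
        return (0.0, 1.0)            # (forward speed, turn rate)
    return None                      # abstain: no opinion

def go_to_goal(percepts):
    """Lower-priority behavior: steer toward the goal bearing."""
    return (0.3, 0.5 * percepts["goal_bearing"])

BEHAVIORS = [avoid_obstacle, go_to_goal]   # ordered by priority

def arbitrate(percepts):
    """Return the command of the highest-priority active behavior."""
    for behavior in BEHAVIORS:
        command = behavior(percepts)
        if command is not None:
            return command

# With an obstacle at 0.3 m, the avoidance behavior wins the arbitration
print(arbitrate({"obstacle_dist": 0.3, "goal_bearing": 0.2}))
```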

Experience-based learning of high-level task representations: Demonstration

Author  Monica Nicolescu

Video ID : 27

This is a video recorded in early 2000s, showing a Pioneer robot learning to visit a number of targets in a certain order - the human demonstration stage. The robot execution stage is also shown in a related video in this chapter. References: 1. M. Nicolescu, M.J. Mataric: Experience-based learning of task representations from human-robot interaction, Proc. IEEE Int. Symp. Comput. Intell. Robot. Autom. Banff (2001), pp. 463-468; 2. M. Nicolescu, M.J. Mataric: Learning and interacting in human-robot domains, IEEE Trans. Syst. Man Cybernet. A31(5), 419-430 (2001)

Chapter 10 — Redundant Robots

Stefano Chiaverini, Giuseppe Oriolo and Anthony A. Maciejewski

This chapter focuses on redundancy resolution schemes, i.e., the techniques for exploiting the redundant degrees of freedom in the solution of the inverse kinematics problem. This is obviously an issue of major relevance for motion planning and control purposes.

In particular, task-oriented kinematics and the basic methods for its inversion at the velocity (first-order differential) level are first recalled, with a discussion of the main techniques for handling kinematic singularities. Next, different first-order methods to solve kinematic redundancy are arranged in two main categories, namely those based on the optimization of suitable performance criteria and those relying on the augmentation of the task space. Redundancy resolution methods at the acceleration (second-order differential) level are then considered in order to take into account dynamics issues, e.g., torque minimization. Conditions under which a cyclic task motion results in a cyclic joint motion are also discussed; this is a major issue when a redundant manipulator is used to execute a repetitive task, e.g., in industrial applications. The use of kinematic redundancy for fault tolerance is analyzed in detail. Suggestions for further reading are given in a final section.
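To make the first-order schemes concrete, here is a minimal sketch (our illustration, not the chapter's code) of the damped pseudoinverse solution with gradient projection in the null space, for a 3-link planar arm with a 2-D positioning task; the damping constant and the secondary criterion (a preferred posture) are illustrative assumptions.

```python
import numpy as np

def planar_jacobian(q, lengths):
    """Position Jacobian (2 x n) of an n-link planar arm."""
    cum = np.cumsum(q)                       # absolute link angles
    J = np.zeros((2, len(q)))
    for i in range(len(q)):
        J[0, i] = -np.sum(lengths[i:] * np.sin(cum[i:]))
        J[1, i] = np.sum(lengths[i:] * np.cos(cum[i:]))
    return J

def resolve_redundancy(J, xdot, grad_w, damping=0.01):
    """dq = J# xdot + (I - J# J) grad_w, with a damped pseudoinverse J#."""
    J_dinv = J.T @ np.linalg.inv(J @ J.T + damping**2 * np.eye(J.shape[0]))
    N = np.eye(J.shape[1]) - J_dinv @ J      # (approximate) null-space projector
    return J_dinv @ xdot + N @ grad_w

lengths = np.array([1.0, 0.8, 0.5])
q = np.array([0.3, 0.4, 0.2])
xdot = np.array([0.1, 0.0])                  # desired end-effector velocity
grad_w = -(q - np.zeros(3))                  # gradient of -0.5*||q - q_pref||^2
print(resolve_redundancy(planar_jacobian(q, lengths), xdot, grad_w))
```

The null-space term moves the joints toward the preferred posture without perturbing the end-effector task, which is exactly the self-motion exploited by the optimization-based methods the chapter classifies.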

KUKA LBR iiwa - Kinematic Redundancy

Author  KUKA Roboter GmbH

Video ID : 813

The video shows the dexterity achieved through kinematic redundancy and illustrates the basic concept of self-motion (here called null-space motion).

Chapter 64 — Rehabilitation and Health Care Robotics

H.F. Machiel Van der Loos, David J. Reinkensmeyer and Eugenio Guglielmelli

The field of rehabilitation robotics considers robotic systems that 1) provide therapy for persons seeking to recover their physical, social, communication, or cognitive function, and/or that 2) assist persons who have a chronic disability to accomplish activities of daily living. This chapter will discuss these two main domains and provide descriptions of the major achievements of the field over its short history and chart out the challenges to come. Specifically, after providing background information on demographics (Sect. 64.1.2) and history (Sect. 64.1.3) of the field, Sect. 64.2 describes physical therapy and exercise training robots, and Sect. 64.3 describes robotic aids for people with disabilities. Section 64.4 then presents recent advances in smart prostheses and orthoses that are related to rehabilitation robotics. Finally, Sect. 64.5 provides an overview of recent work in diagnosis and monitoring for rehabilitation as well as other health-care issues. The reader is referred to Chap. 73 for cognitive rehabilitation robotics and to Chap. 65 for robotic smart home technologies, which are often considered assistive technologies for persons with disabilities. At the conclusion of the present chapter, the reader will be familiar with the history of rehabilitation robotics and its primary accomplishments, and will understand the challenges the field may face in the future as it seeks to improve health care and the well-being of persons with disabilities.

The MIME rehabilitation-therapy robot

Author  Peter Lum, Machiel Van der Loos, Chuck Burgar

Video ID : 495

The 6-DOF MIME robot assisting the left arm in unilateral and bimanual modes. In the unilateral mode, the robot provides end-point tunnel guidance toward the target. In bimanual mode, movement of the right arm is measured with a 6-DOF digitizer, and the robot assists the left arm in performing mirror-image movements.
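In the bimanual mode described above, the left-arm target is essentially the measured right-arm motion reflected across the body midline. A minimal position-only sketch, assuming the sagittal plane is x = 0 (the actual MIME system mirrors full 6-DOF poses):

```python
import numpy as np

def mirror_target(p_right, sagittal_x=0.0):
    """Reflect a measured right-arm position across the sagittal plane
    x = sagittal_x to obtain the left-arm target (position only; an
    illustrative simplification of the mirror-image bimanual mode)."""
    p_left = np.array(p_right, dtype=float)
    p_left[0] = 2.0 * sagittal_x - p_left[0]
    return p_left

print(mirror_target([0.25, 0.40, 0.10]))   # -> [-0.25  0.40  0.10]
```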

Chapter 40 — Mobility and Manipulation

Oliver Brock, Jaeheung Park and Marc Toussaint

Mobile manipulation requires the integration of methodologies from all aspects of robotics. Instead of tackling each aspect in isolation, mobile manipulation research exploits their interdependence to solve challenging problems. As a result, novel views of long-standing problems emerge. In this chapter, we present these emerging views in the areas of grasping, control, motion generation, learning, and perception. All of these areas must address the shared challenges of high-dimensionality, uncertainty, and task variability. The section on grasping and manipulation describes a trend towards actively leveraging contact and physical and dynamic interactions between hand, object, and environment. Research in control addresses the challenges of appropriately coupling mobility and manipulation. The field of motion generation increasingly blurs the boundaries between control and planning, leading to task-consistent motion in high-dimensional configuration spaces, even in dynamic and partially unknown environments. A key challenge of learning for mobile manipulation consists of identifying the appropriate priors, and we survey recent learning approaches to perception, grasping, motion, and manipulation. Finally, a discussion of promising methods in perception shows how concepts and methods from navigation and active perception are applied.

A day in the life of Romeo and Juliet (mobile manipulators)

Author  Oussama Khatib

Video ID : 776

This video shows arm/vehicle coordination, dynamically decoupled self-motion control, useful compliant-motion tasks, cooperative compliant motion, and internal force control.

Chapter 1 — Robotics and the Handbook

Bruno Siciliano and Oussama Khatib

Robots! Robots on Mars and in oceans, in hospitals and homes, in factories and schools; robots fighting fires, making goods and products, saving time and lives. Robots today are making a considerable impact on many aspects of modern life, from industrial manufacturing to healthcare, transportation, and exploration of the deep space and sea. Tomorrow, robots will be as pervasive and personal as today’s personal computers. This chapter retraces the evolution of this fascinating field from the ancient to the modern times through a number of milestones: from the first automated mechanical artifact (1400 BC) through the establishment of the robot concept in the 1920s, the realization of the first industrial robots in the 1960s, the definition of robotics science and the birth of an active research community in the 1980s, and the expansion towards the challenges of the human world of the twenty-first century. Robotics in its long journey has inspired this handbook which is organized in three layers: the foundations of robotics science; the consolidated methodologies and technologies of robot design, sensing and perception, manipulation and interfaces, mobile and distributed robotics; the advanced applications of field and service robotics, as well as of human-centered and life-like robotics.

Robots — A 50 year journey

Author  Oussama Khatib

Video ID : 805

Through a collection of short segments, this video retraces the history of the most influential modern robots developed in the 20th century (1950-2000). The 50-year journey was first presented at the 2000 IEEE International Conference on Robotics and Automation (ICRA) in San Francisco.

Chapter 6 — Model Identification

John Hollerbach, Wisama Khalil and Maxime Gautier

This chapter discusses how to determine the kinematic parameters and the inertial parameters of robot manipulators. Both instances of model identification are cast into a common framework of least-squares parameter estimation, and are shown to have common numerical issues relating to the identifiability of parameters, adequacy of the measurement sets, and numerical robustness. These discussions are generic to any parameter estimation problem, and can be applied in other contexts.

For kinematic calibration, the main aim is to identify the geometric Denavit–Hartenberg (DH) parameters, although joint-based parameters relating to the sensing and transmission elements can also be identified. Endpoint sensing or endpoint constraints can provide equivalent calibration equations. By casting all calibration methods as closed-loop calibration, the calibration index categorizes methods in terms of how many equations per pose are generated.

Inertial parameters may be estimated through the execution of a trajectory while sensing one or more components of force/torque at a joint. Load estimation of a handheld object is simplest because of full mobility and full wrist force-torque sensing. For link inertial parameter estimation, restricted mobility of links nearer the base as well as sensing only the joint torque means that not all inertial parameters can be identified. Those that can be identified are those that affect joint torque, although they may appear in complicated linear combinations.
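The key property underlying this identification is that joint torque is linear in the inertial parameters, tau = Y(q, qd, qdd) * phi, so stacking the regressor Y over a trajectory yields an ordinary least-squares problem. Below is a minimal single-joint sketch of that structure (our illustration, not the chapter's code); note that mass m and center-of-mass offset lc enter only through the product m*lc, an example of parameters identifiable only in combination.

```python
import numpy as np

g = 9.81

def regressor(q, qd, qdd):
    """Regressor row for one joint: tau = I*qdd + (m*lc)*g*cos(q) + fv*qd."""
    return np.array([qdd, g * np.cos(q), qd])

# Simulate noisy torque measurements along an exciting trajectory
rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 500)
q = 1.2 * np.sin(2 * t) + 0.5 * np.sin(5 * t)   # rich excitation
qd = np.gradient(q, t)
qdd = np.gradient(qd, t)

phi_true = np.array([0.08, 0.35, 0.02])         # [I, m*lc, fv]
Y = np.stack([regressor(*s) for s in zip(q, qd, qdd)])
tau = Y @ phi_true + 0.01 * rng.standard_normal(len(t))

# Least-squares estimate; cond(Y) reflects identifiability and the
# adequacy of the measurement set, as discussed in the chapter
phi_hat, *_ = np.linalg.lstsq(Y, tau, rcond=None)
print("estimate:", phi_hat, " cond(Y):", np.linalg.cond(Y))
```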

Calibration and accuracy validation of a FANUC LR Mate 200iC industrial robot

Author  Ilian Bonev

Video ID : 430

This video shows excerpts from the process of calibrating a FANUC LR Mate 200iC industrial robot using two different methods. In the first method, the position of one of three points on the robot end-effector is measured using a FARO laser tracker in 50 specially selected robot configurations (not shown in the video). Then, the robot parameters are identified. Next, the position of one of the three points on the robot's end-effector is measured using the laser tracker in 10,000 completely arbitrary robot configurations. The mean positioning error after calibration was found to be 0.156 mm, the standard deviation (std) 0.067 mm, the mean+3*std 0.356 mm, and the maximum 0.490 mm. In the second method, the complete pose (position and orientation) of the robot end-effector is measured in about 60 robot configurations using an innovative method based on Renishaw's telescoping ballbar. Then, the robot parameters are identified. Next, the position of one of the three points on the robot's end-effector is measured using the laser tracker in 10,000 completely arbitrary robot configurations. The mean position error after calibration was found to be 0.479 mm, the standard deviation (std) 0.214 mm, and the maximum 1.039 mm. However, if we limit the validation zone, the accuracy of the robot is much better. The second calibration method is less efficient but relies on a piece of equipment that costs only $12,000 (one tenth the cost of a laser tracker).
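The accuracy figures quoted above (mean, std, mean+3*std, and maximum of the validation position error) are straightforward to compute from raw measurements. A sketch, assuming hypothetical (N, 3) arrays of measured and model-predicted tool-point positions in millimetres:

```python
import numpy as np

def validation_stats(p_measured, p_predicted):
    """Position-error statistics over N validation configurations.

    p_measured, p_predicted: (N, 3) arrays of tool-point positions [mm].
    """
    err = np.linalg.norm(p_measured - p_predicted, axis=1)
    return {
        "mean": err.mean(),
        "std": err.std(ddof=1),
        "mean+3*std": err.mean() + 3.0 * err.std(ddof=1),
        "max": err.max(),
    }

# Example with synthetic data of realistic magnitude (illustrative only)
rng = np.random.default_rng(1)
truth = rng.uniform(-500, 500, size=(10000, 3))
measured = truth + rng.normal(0.0, 0.1, size=truth.shape)
print(validation_stats(measured, truth))
```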

Chapter 40 — Mobility and Manipulation

Oliver Brock, Jaeheung Park and Marc Toussaint

Mobile manipulation requires the integration of methodologies from all aspects of robotics. Instead of tackling each aspect in isolation, mobile manipulation research exploits their interdependence to solve challenging problems. As a result, novel views of long-standing problems emerge. In this chapter, we present these emerging views in the areas of grasping, control, motion generation, learning, and perception. All of these areas must address the shared challenges of high-dimensionality, uncertainty, and task variability. The section on grasping and manipulation describes a trend towards actively leveraging contact and physical and dynamic interactions between hand, object, and environment. Research in control addresses the challenges of appropriately coupling mobility and manipulation. The field of motion generation increasingly blurs the boundaries between control and planning, leading to task-consistent motion in high-dimensional configuration spaces, even in dynamic and partially unknown environments. A key challenge of learning for mobile manipulation consists of identifying the appropriate priors, and we survey recent learning approaches to perception, grasping, motion, and manipulation. Finally, a discussion of promising methods in perception shows how concepts and methods from navigation and active perception are applied.

Catching objects in flight

Author  Seungsu Kim, Ashwini Shukla, Aude Billard

Video ID : 653

We target the difficult problem of catching in-flight objects with uneven shapes. This requires the solution of three complex problems: predicting accurately the trajectory of fast-moving objects, predicting the feasible catching configuration, and planning the arm motion, all within milliseconds. We follow a programming-by-demonstration approach in order to learn models of the object and the arm dynamics from throwing examples. We propose a new methodology for finding a feasible catching configuration in a probabilistic manner. We leverage the strength of dynamical systems for encoding motion from several demonstrations. This enables fast and online adaptation of the arm motion in the presence of sensor uncertainty. We validate the approach in simulation with the iCub humanoid robot and in real-world experiments with the KUKA LWR 4+ (7-DOF arm robot) for catching a hammer, a tennis racket, an empty bottle, a partially filled bottle and a cardboard box.
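The cited work learns the object dynamics from throwing demonstrations; for intuition, a much simpler stand-in is a constant-acceleration (ballistic) fit to the first tracked samples, extrapolated to the catch time. A sketch under that simplifying assumption:

```python
import numpy as np

def fit_ballistic(t, p):
    """Per-axis least-squares fit of p(t) = p0 + v0*t + 0.5*a*t^2.

    t: (N,) timestamps [s]; p: (N, 3) tracked positions [m].
    Returns (p0, v0, a), each of shape (3,).
    """
    A = np.stack([np.ones_like(t), t, 0.5 * t**2], axis=1)   # (N, 3) basis
    coef, *_ = np.linalg.lstsq(A, p, rcond=None)             # (3, 3)
    return coef[0], coef[1], coef[2]

def predict(p0, v0, a, t):
    return p0 + v0 * t + 0.5 * a * t**2

# Example: fit the first 50 ms of a throw, predict the position at 0.4 s
t_obs = np.linspace(0.0, 0.05, 25)
p_obs = np.stack([2.0 - 6.0 * t_obs,
                  0.1 * t_obs,
                  1.0 + 3.0 * t_obs - 4.905 * t_obs**2], axis=1)
p0, v0, a = fit_ballistic(t_obs, p_obs)
print(predict(p0, v0, a, 0.4))
```

For uneven objects such as a tennis racket or a partially filled bottle, this rigid ballistic model breaks down, which is precisely why the authors learn the dynamics from data instead.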

Chapter 74 — Learning from Humans

Aude G. Billard, Sylvain Calinon and Rüdiger Dillmann

This chapter surveys the main approaches developed to date to endow robots with the ability to learn from human guidance. The field is best known as robot programming by demonstration, robot learning from/by demonstration, apprenticeship learning and imitation learning. We start with a brief historical overview of the field. We then summarize the various approaches taken to solve four main questions: when, what, who, and how to imitate. We emphasize the importance of choosing well the interface and the channels used to convey the demonstrations, with an eye on interfaces providing force control and force feedback. We then review algorithmic approaches to model skills individually and as a compound and algorithms that combine learning from human guidance with reinforcement learning. We close with a look at the use of language to guide teaching and a list of open issues.

Demonstrations and reproduction of moving a chessman

Author  Sylvain Calinon, Florent Guenter, Aude Billard

Video ID : 97

A robot learns how to make a chess move from multiple demonstrations and to reproduce the skill in a new situation (different position of the chessman) by finding a controller which satisfies both the task constraints (what-to-imitate) and constraints relative to its body limitation (how-to-imitate). Reference: S. Calinon, F. Guenter, A. Billard: On learning, representing and generalizing a task in a humanoid robot, IEEE Trans. Syst. Man Cybernet. B 37(2), 286-298 (2007); URL: http://lasa.epfl.ch/videos/control.php.
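In the cited work, the what-to-imitate question is answered statistically: timesteps where the demonstrations agree (low variance) are hard task constraints, while high-variance timesteps leave freedom to satisfy the robot's body limitations. A minimal variance-weighted sketch of this idea (not the paper's exact controller; array shapes and gains are illustrative):

```python
import numpy as np

def learn_constraints(demos):
    """Per-timestep mean and variance across time-aligned demonstrations.

    demos: (K, T, D) array -- K demonstrations, T steps, D dimensions.
    Low variance marks a hard task constraint ('what to imitate').
    """
    return demos.mean(axis=0), demos.var(axis=0) + 1e-6

def reproduce_step(x, mu_t, var_t, kp_max=50.0):
    """Velocity command toward the demonstrated mean, stiffer where the
    demonstrations agree (low variance), compliant elsewhere."""
    kp = kp_max / (1.0 + var_t)              # per-dimension stiffness
    return kp * (mu_t - x)

# Example: 5 noisy 2-D demonstrations of length 100
demos = np.random.default_rng(2).normal(size=(5, 100, 2)).cumsum(axis=1)
mu, var = learn_constraints(demos)
print(reproduce_step(np.zeros(2), mu[0], var[0]))
```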

Chapter 51 — Modeling and Control of Underwater Robots

Gianluca Antonelli, Thor I. Fossen and Dana R. Yoerger

This chapter deals with modeling and control of underwater robots. First, a brief introduction showing the constantly expanding role of marine robotics in oceanic engineering is given; this section also contains some historical background. Most of the following sections strongly overlap with the corresponding chapters presented in this handbook; hence, to avoid useless repetitions, only those aspects peculiar to the underwater environment are discussed, assuming that the reader is already familiar with concepts such as fault detection systems when discussing the corresponding underwater implementation. The modeling section is presented by focusing on a coefficient-based approach capturing the most relevant underwater dynamic effects. Two sections dealing with the description of the sensor and the actuating systems are then given. Autonomous underwater vehicles require the implementation of a mission control system as well as guidance and control algorithms. Underwater localization is also discussed. Underwater manipulation is then briefly approached. Fault detection and fault tolerance, together with the coordination control of multiple underwater vehicles, conclude the theoretical part of the chapter. Two final sections, reporting some successful applications and discussing future perspectives, conclude the chapter. The reader is referred to Chap. 25 for the design issues.
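As a flavor of the coefficient-based modeling approach, here is a one-degree-of-freedom (surge) sketch with added mass and linear-plus-quadratic damping; the coefficient values are illustrative, not from any real vehicle.

```python
def surge_step(v, tau, dt, m=30.0, Xudot=-5.0, Xu=-2.0, Xuu=-8.0):
    """One Euler step of a one-DOF (surge) coefficient-based model:

        (m - Xudot) * vdot = tau + Xu * v + Xuu * v * |v|

    m: rigid-body mass [kg]; Xudot: added-mass coefficient; Xu, Xuu:
    linear and quadratic damping coefficients (all values illustrative,
    with the usual sign convention of negative hydrodynamic derivatives).
    """
    vdot = (tau + Xu * v + Xuu * v * abs(v)) / (m - Xudot)
    return v + dt * vdot

# Step response under constant thrust: velocity settles where damping
# balances the applied thrust
v = 0.0
for _ in range(5000):
    v = surge_step(v, tau=20.0, dt=0.01)
print(round(v, 3))   # steady-state surge speed [m/s]
```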

Dive with REMUS

Author  Woods Hole Oceanographic Institution

Video ID : 87

Travel with a REMUS 100 autonomous underwater vehicle on a dive off the Carolina coast to study the connection between the physical processes in the ocean at the edge of the continental shelf and the things that live there. Video footage by Chris Linder. Funding by the Department of the Navy, Science & Technology; and Centers for Ocean Sciences Education Excellence (COSEE).