Chapter 20 — Snake-Like and Continuum Robots

Ian D. Walker, Howie Choset and Gregory S. Chirikjian

This chapter provides an overview of the state of the art of snake-like (backbones composed of many small links) and continuum (continuous backbone) robots. The history of each of these classes of robot is reviewed, focusing on key hardware developments. A review of the existing theory and algorithms for kinematics for both types of robot is presented, followed by a summary of modeling of locomotion for snake-like and continuum mechanisms.

CMU medical snake robot

Author  Howie Choset

Video ID : 175

Video of the CMU medical snake robot performing a closed-chest ablation of the left atrial appendage.

Chapter 47 — Motion Planning and Obstacle Avoidance

Javier Minguez, Florant Lamiraux and Jean-Paul Laumond

This chapter describes motion planning and obstacle avoidance for mobile robots. We will see how the two areas do not share the same modeling background. From its very beginning, motion planning research has been dominated by computer science. Researchers aim at devising well-grounded algorithms with well-understood completeness and exactness properties.

The challenge of this chapter is to present both nonholonomic motion planning (Sects. 47.1–47.6) and obstacle avoidance (Sects. 47.7–47.10). Section 47.11 reviews recent successful approaches that tend to embrace the whole problem of motion planning and motion control. These approaches benefit from both nonholonomic motion planning and obstacle avoidance methods.

Sena wheelchair: Autonomous navigation at University of Malaga (2007)

Author  Jose Luis Blanco

Video ID : 708

This experiment demonstrates how a reactive navigation method successfully enables our robotic wheelchair SENA to navigate reliably in the entrance of our building at the University of Malaga (Spain). The robot navigates autonomously amidst dozens of students while avoiding collisions. The method is based on a space transformation, which simplifies finding collision-free movements in real-time despite the arbitrarily complex shape of the robot and its kinematic restrictions.
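The published algorithm is not reproduced here, but the flavor of such a space transformation can be sketched: a family of kinematically feasible candidate motions (here, circular arcs indexed by curvature) turns collision avoidance into a one-dimensional search over the free distance along each candidate, in which the robot effectively behaves like a point. All geometry, the robot radius, the candidate set, and the scoring weights below are illustrative assumptions, not the method used on SENA.

```python
import math

def free_distance(curvature, obstacles, robot_radius=0.3, d_max=3.0):
    """Distance we can travel along a circular arc before hitting an
    obstacle (brute-force sampling along the arc)."""
    best = d_max
    for (ox, oy) in obstacles:
        for i in range(1, 301):
            d = d_max * i / 300.0
            if abs(curvature) < 1e-9:           # straight-line motion
                x, y = d, 0.0
            else:                               # arc of radius 1/curvature
                x = math.sin(curvature * d) / curvature
                y = (1.0 - math.cos(curvature * d)) / curvature
            if math.hypot(x - ox, y - oy) < robot_radius:
                best = min(best, d)
                break
    return best

def pick_motion(obstacles):
    """Choose the candidate arc with the best trade-off between free
    distance (clearance) and staying close to straight-ahead motion."""
    candidates = [k / 10.0 for k in range(-10, 11)]   # curvatures in 1/m
    return max(candidates,
               key=lambda c: free_distance(c, obstacles) - 0.5 * abs(c))

# An obstacle straight ahead makes the chosen motion curve away from it.
chosen = pick_motion(obstacles=[(1.0, 0.0)])
```

Because each candidate is kinematically feasible by construction, the selected motion respects the vehicle's constraints without any extra checking; this is the simplification such transformations buy.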

Chapter 41 — Active Manipulation for Perception

Anna Petrovskaya and Kaijen Hsiao

This chapter covers perceptual methods in which manipulation is an integral part of perception. These methods face special challenges due to data sparsity and high costs of sensing actions. However, they can also succeed where other perceptual methods fail, for example, in poor-visibility conditions or for learning the physical properties of a scene.

The chapter focuses on specialized methods that have been developed for object localization, inference, planning, recognition, and modeling in active manipulation approaches. We conclude with a discussion of real-life applications and directions for future research.

Touch-based, door-handle localization and manipulation

Author  Anna Petrovskaya

Video ID : 723

The harmonic arm robot localizes the door handle by touching it. 3-DOF localization is performed in this video. Once the localization is complete, the robot is able to grasp and manipulate the handle. The mobile platform is teleoperated, whereas the robotic arm motions are autonomous. A 2-D model of the door and handle was constructed from hand measurements for this experiment.

Chapter 8 — Motion Control

Wan Kyun Chung, Li-Chen Fu and Torsten Kröger

This chapter will focus on the motion control of robotic rigid manipulators. In other words, this chapter does not treat the motion control of mobile robots, flexible manipulators, and manipulators with elastic joints. The main challenge in the motion control problem of rigid manipulators is the complexity of their dynamics and uncertainties. The former results from nonlinearity and coupling in the robot manipulators. The latter is twofold: structured and unstructured. Structured uncertainty means imprecise knowledge of the dynamic parameters and will be touched upon in this chapter, whereas unstructured uncertainty results from joint and link flexibility, actuator dynamics, friction, sensor noise, and unknown environment dynamics, and will be treated in other chapters. In this chapter, we begin with an introduction to motion control of robot manipulators from a fundamental viewpoint, followed by a survey and brief review of the relevant advanced materials. Specifically, the dynamic model and useful properties of robot manipulators are recalled in Sect. 8.1. The joint and operational space control approaches, two different viewpoints on control of robot manipulators, are compared in Sect. 8.2. Independent joint control and proportional–integral–derivative (PID) control, widely adopted in the field of industrial robots, are presented in Sects. 8.3 and 8.4, respectively. Tracking control, based on feedback linearization, is introduced in Sect. 8.5. The computed-torque control and its variants are described in Sect. 8.6. Adaptive control is introduced in Sect. 8.7 to solve the problem of structural uncertainty, whereas the optimality and robustness issues are covered in Sect. 8.8. To compute suitable set point signals as input values for these motion controllers, Sect. 8.9 introduces reference trajectory planning concepts. Since most controllers of robot manipulators are implemented by using microprocessors, the issues of digital implementation are discussed in Sect. 8.10. Finally, learning control, one popular approach to intelligent control, is illustrated in Sect. 8.11.
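As a concrete illustration of the computed-torque approach surveyed in Sect. 8.6, the sketch below applies the classical inverse-dynamics law tau = M(q)(qdd_des + Kd*ed + Kp*e) + g(q) to a single pendulum link. The plant model, gains, and set point are illustrative assumptions, not values from the chapter.

```python
import math

# Illustrative 1-DOF plant: a point mass m on a massless rod of length l.
m, l, g0 = 1.0, 0.5, 9.81

def inertia(q):            # M(q)
    return m * l * l

def gravity(q):            # g(q)
    return m * g0 * l * math.sin(q)

def computed_torque(q, qd, q_des, qd_des, qdd_des, kp=100.0, kd=20.0):
    """Computed-torque (inverse-dynamics) law:
    tau = M(q) * (qdd_des + kd*ed + kp*e) + g(q), with e = q_des - q."""
    e, ed = q_des - q, qd_des - qd
    return inertia(q) * (qdd_des + kd * ed + kp * e) + gravity(q)

# Simulate set-point regulation with simple Euler integration. With an
# exact model, the closed loop is linear: e'' + kd*e' + kp*e = 0.
q, qd, dt = 0.0, 0.0, 0.001
for _ in range(5000):
    tau = computed_torque(q, qd, q_des=1.0, qd_des=0.0, qdd_des=0.0)
    qdd = (tau - gravity(q)) / inertia(q)     # plant dynamics
    qd += qdd * dt
    q += qd * dt
print(round(q, 3))   # prints 1.0
```

The structural-uncertainty problem mentioned above appears exactly here: if the parameters in `inertia` and `gravity` are wrong, the cancellation is imperfect, which is what the adaptive schemes of Sect. 8.7 address.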

Sensor-based online trajectory generation

Author  Torsten Kröger

Video ID : 761

The video shows the movements of a position-controlled 6-DOF industrial-robot arm equipped with a distance sensor at its end-effector. The task of the robot is to draw a rectangle on the table, while the force on the table is controlled by a force controller which acts only orthogonally to the table surface. The dimensions of the rectangle are determined by the obstacles in the robot's environment. If the obstacles are moved, the distance sensor triggers the execution of a new trajectory segment which is computed within one control cycle (1 ms), so that it can be instantly executed.
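The algorithm used in the video is not shown here, but the core idea of cycle-by-cycle online trajectory generation can be sketched: every control cycle, the next set point is recomputed from the current state and the possibly just-updated target, respecting velocity and acceleration limits. The limits, targets, and switching time below are illustrative assumptions for a single axis.

```python
import math

def otg_step(p, v, p_target, vmax=0.5, amax=2.0, dt=0.001):
    """One control cycle of a minimal online trajectory generator.
    The commanded speed is bounded by vmax, by the speed from which the
    axis can still stop at the target, and by an acceleration limit."""
    d = p_target - p
    # Largest speed that still allows stopping exactly at the target.
    v_des = math.copysign(min(vmax, math.sqrt(2.0 * amax * abs(d))), d)
    v_new = max(v - amax * dt, min(v + amax * dt, v_des))   # accel limit
    return p + v_new * dt, v_new

# The target may change at any cycle; the generator just keeps stepping,
# so a new trajectory is effectively computed within one control cycle.
p, v = 0.0, 0.0
target = 0.1
for k in range(4000):
    if k == 1500:
        target = -0.05     # e.g., an obstacle moved: new target mid-motion
    p, v = otg_step(p, v, target)
```

The axis settles on the new target without any precomputed time law, which is what makes such generators suitable for sensor-triggered replanning at 1 ms rates.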

Chapter 53 — Multiple Mobile Robot Systems

Lynne E. Parker, Daniela Rus and Gaurav S. Sukhatme

Within the context of multiple mobile and networked robot systems, this chapter explores the current state of the art. After a brief introduction, we first examine architectures for multirobot cooperation, exploring the alternative approaches that have been developed. Next, we explore communications issues and their impact on multirobot teams in Sect. 53.3, followed by a discussion of networked mobile robots in Sect. 53.4. Following this we discuss swarm robot systems in Sect. 53.5 and modular robot systems in Sect. 53.6. While swarm and modular systems typically assume large numbers of homogeneous robots, other types of multirobot systems include heterogeneous robots. We therefore next discuss heterogeneity in cooperative robot teams in Sect. 53.7. Once robot teams allow for individual heterogeneity, issues of task allocation become important; Sect. 53.8 therefore discusses common approaches to task allocation. Section 53.9 discusses the challenges of multirobot learning, and some representative approaches. We outline some of the typical application domains which serve as test beds for multirobot systems research in Sect. 53.10. Finally, we conclude in Sect. 53.11 with some summary remarks and suggestions for further reading.

Robot Pebbles - MIT developing self-sculpting smart-sand robots

Author  Kyle Gilpin, Ara Knaian, Kent Koyanagi, Daniela Rus

Video ID : 211

Researchers at the Distributed Robotics Laboratory at MIT's Computer Science and Artificial Intelligence Laboratory are developing tiny robots that could self-assemble into functional tools, then self-disassemble after use. Dubbed "smart sand," the tiny robots (measuring 0.1 cubic cm) would contain microprocessors and electropermanent (EP) magnets which could latch, communicate, and transfer power to each other, enabling them to form life-size replicas of miniature models. https://groups.csail.mit.edu/drl/wiki/index.php?title=Robot_Pebbles

Chapter 62 — Intelligent Vehicles

Alberto Broggi, Alex Zelinsky, Ümit Özgüner and Christian Laugier

This chapter describes the emerging robotics application field of intelligent vehicles – motor vehicles that have autonomous functions and capabilities. The chapter is organized as follows. Section 62.1 provides a motivation for why the development of intelligent vehicles is important, a brief history of the field, and the potential benefits of the technology. Section 62.2 describes the technologies that enable intelligent vehicles to sense vehicle, environment, and driver state, work with digital maps and satellite navigation, and communicate with intelligent transportation infrastructure. Section 62.3 describes the challenges and solutions associated with road scene understanding – a key capability for all intelligent vehicles. Section 62.4 describes advanced driver assistance systems, which use the robotics and sensing technologies described earlier to create new safety and convenience systems for motor vehicles, such as collision avoidance, lane keeping, and parking assistance. Section 62.5 describes driver monitoring technologies that are being developed to mitigate driver fatigue, inattention, and impairment. Section 62.6 describes fully autonomous intelligent vehicle systems that have been developed and deployed. The chapter is concluded in Sect. 62.7 with a discussion of future prospects, while Sect. 62.8 provides references to further reading and additional resources.

Lane tracking

Author  Alex Zelinsky

Video ID : 836

This video demonstrates robust lane tracking under variable conditions, e.g., rain and poor lighting. The system uses a particle-filter-based approach to achieve robustness.
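The tracker itself is not published with the video, but a particle filter of the general kind mentioned can be sketched for a single state variable, the lateral lane offset. Real lane trackers estimate several states (offset, heading, curvature) and weight particles with image-based likelihoods; the noise parameters and state below are illustrative assumptions.

```python
import math
import random

random.seed(0)

# Particles over the vehicle's lateral lane offset (m), uniform prior.
N = 500
particles = [random.uniform(-1.0, 1.0) for _ in range(N)]

def likelihood(z, x, sigma=0.2):
    """Unnormalized Gaussian measurement likelihood p(z | x)."""
    return math.exp(-0.5 * ((z - x) / sigma) ** 2)

true_offset = 0.3
for step in range(50):
    # Predict: random-walk motion model (process noise only).
    particles = [x + random.gauss(0.0, 0.02) for x in particles]
    # Update: weight by a noisy lane-marking measurement of the offset.
    z = true_offset + random.gauss(0.0, 0.2)
    weights = [likelihood(z, x) for x in particles]
    # Resample (systematic resampling is the usual choice;
    # random.choices is the simplest stand-in here).
    particles = random.choices(particles, weights=weights, k=N)

estimate = sum(particles) / N   # posterior mean, close to true_offset
```

The robustness claimed in the video comes from this same mechanism: individually noisy measurements (rain, poor lighting) only reweight the particle population, so the estimate degrades gracefully instead of failing outright.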

Chapter 0 — Preface

Bruno Siciliano, Oussama Khatib and Torsten Kröger

The preface of the Second Edition of the Springer Handbook of Robotics contains three videos about the creation of the book and using its multimedia app on mobile devices.

The handbook — The story continues

Author  Bruno Siciliano

Video ID : 845

This video illustrates the joyful mood of the big team of the Springer Handbook of Robotics at the completion of the Second Edition.

Chapter 69 — Physical Human-Robot Interaction

Sami Haddadin and Elizabeth Croft

Over the last two decades, the foundations for physical human–robot interaction (pHRI) have evolved from successful developments in mechatronics, control, and planning, leading toward safer lightweight robot designs and interaction control schemes that advance beyond the current capacities of existing high-payload and high-precision position-controlled industrial robots. Based on their ability to sense physical interaction, render compliant behavior along the robot structure, plan motions that respect human preferences, and generate interaction plans for collaboration and coaction with humans, these robots have opened up novel and unforeseen application domains, and have advanced the field of human safety in robotics.

This chapter gives an overview on the state of the art in pHRI as of the date of publication. First, the advances in human safety are outlined, addressing topics in human injury analysis in robotics and safety standards for pHRI. Then, the foundations of human-friendly robot design, including the development of lightweight and intrinsically flexible force/torque-controlled machines together with the required perception abilities for interaction, are introduced. Subsequently, motion-planning techniques for human environments, including biomechanically safe, risk-metric-based, and human-aware planning, are covered. Finally, the rather recent problem of interaction planning is summarized, including the issues of collaborative action planning, the definition of the interaction planning problem, and an introduction to robot reflexes and reactive control architectures for pHRI.

Human-robot handover

Author  Wesley P. Chan, Chris A. Parker, H.F. Machiel Van der Loos, Elizabeth A. Croft

Video ID : 716

In this video, we present a novel controller for safe, efficient, and intuitive robot-to-human object handovers. The controller enables a robot to mimic human behavior by actively regulating the applied grip force according to the measured load force during a handover. We provide an implementation of the controller on a Willow Garage PR2 robot, demonstrating the feasibility of realizing our design on robots with basic sensor/actuator capabilities.
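The published controller constants are not given in the video description, so the sketch below only illustrates the stated principle: the commanded grip force tracks the measured load force, relaxing as the receiver takes up the load. The gain and force bounds are illustrative assumptions, not the published values.

```python
def grip_force_command(f_load, k=1.5, f_min=1.0, f_max=20.0):
    """Grip force proportional to the measured load force, mimicking the
    human grip/load coupling described in the video. A small minimum
    force keeps the object secured; an upper bound protects the object."""
    return min(f_max, max(f_min, k * abs(f_load)))

# During a handover, the receiver gradually takes up the load, so the
# load force measured by the giver drops and the grip relaxes with it.
for f_load in (8.0, 4.0, 1.0, 0.0):
    print(f_load, grip_force_command(f_load))
# prints:
# 8.0 12.0
# 4.0 6.0
# 1.0 1.5
# 0.0 1.0
```

Because the release emerges from the force coupling rather than from an explicit "let go" trigger, the behavior needs only basic force sensing, which is why it could be realized on a standard PR2.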

Chapter 23 — Biomimetic Robots

Kyu-Jin Cho and Robert Wood

Biomimetic robot designs attempt to translate biological principles into engineered systems, replacing more classical engineering solutions in order to achieve a function observed in the natural system. This chapter will focus on mechanism design for bio-inspired robots that replicate key principles from nature with novel engineering solutions. The challenges of biomimetic design include developing a deep understanding of the relevant natural system and translating this understanding into engineering design rules. This often entails the development of novel fabrication and actuation to realize the biomimetic design.

This chapter consists of four sections. In Sect. 23.1, we will define what biomimetic design entails, and contrast biomimetic robots with bio-inspired robots. In Sect. 23.2, we will discuss the fundamental components for developing a biomimetic robot. In Sect. 23.3, we will review detailed biomimetic designs that have been developed for canonical robot locomotion behaviors including flapping-wing flight, jumping, crawling, wall climbing, and swimming. In Sect. 23.4, we will discuss the enabling technologies for these biomimetic designs including material and fabrication.

RoACH: a 2.4 gram, untethered, crawling hexapod robot

Author  Aaron M. Hoover, Erik Steltz, Ronald S. Fearing

Video ID : 286

The robotic autonomous crawling hexapod (RoACH) is made using lightweight composites with integrated flexural hinges. It is actuated by two shape-memory-alloy wires and controlled by a PIC microprocessor. It can communicate over IrDA and run untethered for more than nine minutes on a single charge.

Chapter 41 — Active Manipulation for Perception

Anna Petrovskaya and Kaijen Hsiao

This chapter covers perceptual methods in which manipulation is an integral part of perception. These methods face special challenges due to data sparsity and high costs of sensing actions. However, they can also succeed where other perceptual methods fail, for example, in poor-visibility conditions or for learning the physical properties of a scene.

The chapter focuses on specialized methods that have been developed for object localization, inference, planning, recognition, and modeling in active manipulation approaches. We conclude with a discussion of real-life applications and directions for future research.

Tactile localization of a power drill

Author  Kaijen Hsiao

Video ID : 77

This video shows a Barrett WAM arm tactilely localizing and reorienting a power drill under high positional uncertainty. The goal is for the robot to robustly grasp the power drill such that the trigger can be activated. The robot tracks the distribution of possible object poses on the table over a 3-D grid (the belief space). It then selects between information-gathering, reorienting, and goal-seeking actions by modeling the problem as a POMDP (partially observable Markov decision process) and using receding-horizon, forward search through the belief space.

In the video, the inset window with the simulated robot is a visualization of the current belief state. The red spheres sit at the vertices of the object mesh placed at the most likely state, and the dark-blue box also shows the location of the most likely state. The purple box shows the location of the mean of the belief state, and the light-blue boxes show the variance of the belief state in the form of the locations of various states that are one standard deviation away from the mean in each of the three dimensions of uncertainty (x, y, and theta). The magenta spheres and arrows that appear when the robot touches the object show the contact locations and normals as reported by the sensors, and the cyan spheres that largely overlap the hand show where the robot controllers are trying to move the hand.
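As a toy illustration of the grid-based belief tracking described above, the sketch below runs the Bayesian measurement update on a 1-D grid of candidate positions; the real system maintains a 3-D grid over (x, y, theta) and plans with a POMDP, none of which is reproduced here. The measurement model and all values are illustrative assumptions.

```python
import math

# Candidate object positions (m) and a uniform prior belief over them.
cells = [i * 0.01 for i in range(100)]
belief = [1.0 / len(cells)] * len(cells)

def update(belief, z, sigma=0.01):
    """Bayesian measurement update: multiply the belief by a Gaussian
    contact-measurement likelihood p(z | x) and renormalize."""
    post = [b * math.exp(-0.5 * ((z - x) / sigma) ** 2)
            for b, x in zip(belief, cells)]
    s = sum(post)
    return [p / s for p in post]

# Three simulated (noisy) contact measurements sharpen the belief.
for z in (0.52, 0.50, 0.51):
    belief = update(belief, z)

most_likely = cells[max(range(len(cells)), key=lambda i: belief[i])]
```

Each contact is expensive to obtain, which is why the full system wraps this update in a POMDP planner that weighs information-gathering touches against goal-seeking motions.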