Chapter 58 — Robotics in Hazardous Applications

James Trevelyan, William R. Hamel and Sung-Chul Kang

Robotics researchers have worked hard to realize a long-awaited vision: machines that can eliminate the need for people to work in hazardous environments. Chapter 60 is framed by the vision of disaster response: search and rescue robots carrying people from burning buildings or tunneling through collapsed rock falls to reach trapped miners. In this chapter we review tangible progress towards robots that perform routine work in places too dangerous for humans. Researchers still have many challenges ahead of them, but there has been remarkable progress in some areas. Hazardous environments present special challenges for accomplishing desired tasks, depending on the nature and magnitude of the hazards, which may take the form of radiation, toxic contamination, falling objects, or potential explosions. The frontier of commercial feasibility is marked by technology that specialized engineering companies can develop and sell without active help from researchers. Just inside this border lie teleoperated robots for explosive ordnance disposal (EOD) and for underwater engineering work. Even with the typical tenfold disadvantage in dexterity and speed imposed by the limits of today's telepresence and teleoperation technology, robots can often offer a more cost-effective solution. However, most routine applications in hazardous environments still lie far beyond the feasibility frontier: fire fighting, remediation of nuclear contamination, reactor decommissioning, tunneling, underwater engineering, underground mining, and clearance of landmines and unexploded ordnance all present many unsolved problems.

DALMATINO

Author  James P. Trevelyan

Video ID : 575

This is another smaller, remotely-operated, mine-clearance vehicle similar in principle to the BOZENA machine described in Video 574. This video clearly shows the vegetation removal capability of these machines.

Chapter 36 — Motion for Manipulation Tasks

James Kuffner and Jing Xiao

This chapter serves as an introduction to Part D by giving an overview of motion generation and control strategies in the context of robotic manipulation tasks. Automatic control approaches, ranging from abstract, high-level task specification down to fine-grained feedback at the task interface, are considered. Some of the important issues include modeling the interfaces between the robot and the environment at different time scales of motion and incorporating sensing and feedback. Manipulation planning is introduced as an extension of the basic motion planning problem, which can be modeled as a hybrid system of continuous configuration spaces arising from the act of grasping and moving parts in the environment. The important example of assembly motion is discussed through the analysis of contact states and compliant motion control. Finally, methods aimed at integrating global planning with state feedback control are summarized.

The Mobipulator

Author  Siddhartha Srinivasa et al.

Video ID : 367

The video shows a dual-differential drive robot that uses its wheels for both manipulation and locomotion. The front wheels move objects by vibrating asymmetrically while the rear wheels help to move the robot and the object around the environment.

Chapter 70 — Human-Robot Augmentation

Massimo Bergamasco and Hugh Herr

The development of robotic systems capable of sharing the load of heavy tasks with humans has been one of the primary objectives of robotics research. At present, this objective has focused strong interest in the robotics community on so-called wearable robots, a class of robotic systems that are worn and directly controlled by the human operator. Wearable robots, together with powered orthoses that exploit robotic components and control strategies, also represent an immediate resource for restoring manipulation and/or walking functionality in humans.

The present chapter deals with wearable robotic systems capable of providing different levels of functional and/or operational augmentation to human beings for specific functions or tasks. Prostheses, powered orthoses, and exoskeletons are described for upper-limb, lower-limb, and whole-body structures. State-of-the-art devices, together with their functionalities and main components, are presented for each class of wearable system. Critical design issues and open research aspects are reported.

L-Exos for upper-limb motor rehabilitation

Author  Massimo Bergamasco

Video ID : 180

The video shows the L-Exos integrated into a virtual environment, which has been specifically developed for the motor rehabilitation of the upper limb.

Chapter 32 — 3-D Vision for Navigation and Grasping

Danica Kragic and Kostas Daniilidis

In this chapter, we describe algorithms for three-dimensional (3-D) vision that help robots accomplish navigation and grasping. To model cameras, we start with the basics of perspective projection and distortion due to lenses. This projection from a 3-D world to a two-dimensional (2-D) image can be inverted only by using information from the world or multiple 2-D views. If we know the 3-D model of an object or the location of 3-D landmarks, we can solve the pose estimation problem from one view. When two views are available, we can compute the 3-D motion and triangulate to reconstruct the world up to a scale factor. When multiple views are given, either as sparse viewpoints or a continuous incoming video, the robot path can be computed and point tracks can yield a sparse 3-D representation of the world. In order to grasp objects, we can estimate the 3-D pose of the end effector or the 3-D coordinates of the graspable points on the object.
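The two-view triangulation step mentioned above can be illustrated with a linear (DLT) solve. This is a minimal sketch, not the chapter's implementation; the projection matrices and point names are assumptions for the example:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2: 3x4 camera projection matrices (assumed known).
    x1, x2: 2-D image coordinates of the same point in each view.
    Returns the 3-D point in the world frame.
    """
    # Each view contributes two linear constraints on the
    # homogeneous 3-D point X: x * (p3 . X) - (p1 . X) = 0, etc.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of A with the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

With noisy measurements the same least-squares solution still applies; only the smallest singular value is then nonzero, and the recovered point minimizes the algebraic error.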

Parallel tracking and mapping for small AR workspaces (PTAM)

Author  Georg Klein, David Murray

Video ID : 123

Video results for an augmented-reality tracking system. A computer tracks a camera and works out a map of the environment in real time, and this can be used to overlay virtual graphics. Presented at the ISMAR 2007 conference.

Chapter 69 — Physical Human-Robot Interaction

Sami Haddadin and Elizabeth Croft

Over the last two decades, the foundations for physical human–robot interaction (pHRI) have evolved from successful developments in mechatronics, control, and planning, leading toward safer lightweight robot designs and interaction control schemes that advance beyond the capacities of existing high-payload, high-precision position-controlled industrial robots. Based on their ability to sense physical interaction, render compliant behavior along the robot structure, plan motions that respect human preferences, and generate interaction plans for collaboration and coaction with humans, these novel robots have opened up unforeseen application domains and have advanced the field of human safety in robotics.

This chapter gives an overview of the state of the art in pHRI as of the date of publication. First, the advances in human safety are outlined, addressing topics in human injury analysis in robotics and safety standards for pHRI. Then, the foundations of human-friendly robot design, including the development of lightweight and intrinsically flexible force/torque-controlled machines, together with the required perception abilities for interaction, are introduced. Subsequently, motion-planning techniques for human environments, including the domains of biomechanically safe, risk-metric-based, human-aware planning, are covered. Finally, the rather recent problem of interaction planning is summarized, including the issues of collaborative action planning, the definition of the interaction planning problem, and an introduction to robot reflexes and reactive control architectures for pHRI.

An assistive decision-and-control architecture for force-sensitive, hand-arm systems driven via human-machine interfaces (MM2)

Author  Jörn Vogel, Sami Haddadin, John D. Simeral, Daniel Bacher, Beata Jarosiewicz, Leigh R. Hochberg, John P. Donoghue, Patrick van der Smagt

Video ID : 620

This video shows a 2-D pick and place of an object using the BrainGate2 neural interface. The robot is controlled through a multipriority Cartesian impedance controller, and its behavior is extended with collision detection and reflex reaction. Furthermore, virtual workspaces are added to ensure safety. On top of this, a decision-and-control architecture, which uses sensory information available from the robotic system to evaluate the current state of task execution, is employed.

Chapter 30 — Sonar Sensing

Lindsay Kleeman and Roman Kuc

Sonar or ultrasonic sensing uses the propagation of acoustic energy at higher frequencies than normal hearing to extract information from the environment. This chapter presents the fundamentals and physics of sonar sensing for object localization, landmark measurement, and classification in robotics applications. The sources of sonar artifacts are explained, along with how they can be dealt with. Different ultrasonic transducer technologies are outlined and their main characteristics highlighted.

Sonar systems are described that range in sophistication from low-cost threshold-based ranging modules to multitransducer multipulse configurations with associated signal processing requirements capable of accurate range and bearing measurement, interference rejection, motion compensation, and target classification. Continuous-transmission frequency-modulated (CTFM) systems are introduced and their ability to improve target sensitivity in the presence of noise is discussed. Various sonar ring designs that provide rapid surrounding environmental coverage are described in conjunction with mapping results. Finally the chapter ends with a discussion of biomimetic sonar, which draws inspiration from animals such as bats and dolphins.
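The range and bearing measurements described above rest on acoustic time of flight. A minimal sketch, assuming a fixed speed of sound in air and a simple two-receiver far-field geometry (the constant and function names are illustrative, not from the chapter):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C (assumed)

def echo_range(time_of_flight):
    """Target range from the round-trip echo time of flight.

    The pulse travels out and back, hence the division by two.
    """
    return SPEED_OF_SOUND * time_of_flight / 2.0

def echo_bearing(t_left, t_right, baseline):
    """Bearing (radians) from the difference in one-way arrival
    times at two receivers separated by `baseline` meters,
    under a far-field (plane-wave) approximation.
    """
    path_diff = SPEED_OF_SOUND * (t_right - t_left)
    # Clamp to guard against measurement noise pushing the
    # ratio outside the domain of asin.
    return math.asin(max(-1.0, min(1.0, path_diff / baseline)))
```

For example, a 10 ms round trip corresponds to a target about 1.7 m away; threshold-based ranging modules report essentially this quantity, while multitransducer systems add the arrival-time differences needed for bearing.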

Sonar-guided chair at Yale

Author  Roman Kuc

Video ID : 295

Four strategically-placed Polaroid vergence sonar pairs on an electric scooter are controlled by a PIC16877 microcontroller interfaced to the joystick and the wheelchair controller. The sonar vergence pair below the foot stand determines if the obstacle is to the left or right. A sonar vergence pair on each side of the chair (at knee level) determines if the chair can pass by an obstacle without collision. A right-side-looking vergence pair maintains the distance and a parallel path to the wall. When sonar detects obstacles, the user joystick commands are overridden to avoid collision with those obstacles. The blindfolded user navigates a cluttered hallway by holding the joystick in a constant forward position.

Chapter 11 — Robots with Flexible Elements

Alessandro De Luca and Wayne J. Book

Design issues, dynamic modeling, trajectory planning, and feedback control problems are presented for robot manipulators having components with mechanical flexibility, either concentrated at the joints or distributed along the links. The chapter is divided accordingly into two main parts. Similarities or differences between the two types of flexibility are pointed out wherever appropriate.

For robots with flexible joints, the dynamic model is derived in detail by following a Lagrangian approach and possible simplified versions are discussed. The problem of computing the nominal torques that produce a desired robot motion is then solved. Regulation and trajectory tracking tasks are addressed by means of linear and nonlinear feedback control designs.
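The flexible-joint dynamic model referred to above is commonly written, after simplifying assumptions of the kind the chapter discusses, in the following reduced form (a sketch; the notation is assumed, not taken from the chapter):

```latex
% q: link positions, \theta: motor positions (after the gearbox),
% M(q): link inertia matrix, C: Coriolis/centrifugal terms,
% g(q): gravity terms, K: diagonal joint-stiffness matrix,
% B: motor inertia matrix, \tau: motor torques (assumed notation).
M(q)\ddot{q} + C(q,\dot{q})\dot{q} + g(q) = K(\theta - q), \\
B\ddot{\theta} + K(\theta - q) = \tau .
```

In the limit of very stiff joints, $\theta \to q$ and the two equations collapse into the standard rigid-robot dynamics, which is why the rigid model reappears as a special case in the control designs.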

For robots with flexible links, relevant factors that lead to the consideration of distributed flexibility are analyzed. Dynamic models are presented, based on the treatment of flexibility through lumped elements, transfer matrices, or assumed modes. Several specific issues are then highlighted, including the selection of sensors, the model order used for control design, and the generation of effective commands that reduce or eliminate residual vibrations in rest-to-rest maneuvers. Feedback control alternatives are finally discussed.

In each of the two parts of this chapter, a section is devoted to the illustration of the original references and to further readings on the subject.

Feedforward/feedback law for path tracking with a KUKA KR15/2 robot

Author  Michael Thümmel

Video ID : 136

This 2006 video shows the performance of model-based feedforward control (using the elastic-joint model) plus state-feedback stabilization for trajectory tracking. Designed for an industrial KUKA KR15/2 manipulator with cycloidal gearboxes, which are known for their visco-elasticity, this controller is compared to a standard one on the robot task of moving in rest-to-rest mode along three orthogonal square paths in Cartesian space. References: 1. M. Thümmel: Modellbasierte Regelung mit nichtlinearen inversen Systemen und Beobachtern von Robotern mit elastischen Gelenken, Dissertation, Technische Universität München, Munich (2006), in German; 2. A. De Luca, D. Schröder, M. Thümmel: An acceleration-based state observer for robot manipulators with elastic joints, IEEE Int. Conf. Robot. Autom. (ICRA), Rome (2007) pp. 3817-3823; doi: 10.1109/ROBOT.2007.364064

Chapter 47 — Motion Planning and Obstacle Avoidance

Javier Minguez, Florent Lamiraux and Jean-Paul Laumond

This chapter describes motion planning and obstacle avoidance for mobile robots. We will see how the two areas do not share the same modeling background. From the very beginning of motion planning, research has been dominated by computer science. Researchers aim at devising well-grounded algorithms with well-understood completeness and exactness properties.

The challenge of this chapter is to present both nonholonomic motion planning (Sects. 47.1–47.6) and obstacle avoidance (Sects. 47.7–47.10) issues. Section 47.11 reviews recent successful approaches that tend to embrace the whole problem of motion planning and motion control. These approaches benefit from both nonholonomic motion planning and obstacle avoidance methods.

Autonomous robotic smart-wheelchair navigation in an urban environment

Author  VADERlab

Video ID : 707

This video demonstrates the reliable navigation of a smart wheelchair system (SWS) in an urban environment. Urban environments present unique challenges for service robots. They require localization accuracy at the sidewalk level, but compromise estimated GPS positions through significant multipath effects. However, they are also rich in landmarks that can be leveraged by feature-based localization approaches. To this end, the SWS employed a map-based approach. A map of South Bethlehem was acquired using a survey vehicle, synthesized a priori, and made accessible to the SWS client. The map embedded not only the locations of landmarks, but also semantic data delineating seven different landmark classes to facilitate robust data association. Landmark segmentation and tracking by the SWS was then accomplished using both 2-D and 3-D LIDAR systems. The resulting localization algorithm has demonstrated decimeter-level positioning accuracy in a global coordinate frame. The localization package was integrated into a ROS framework with a sample-based planner and control loop running at 5 Hz. For validation, the SWS repeatedly navigated autonomously between Lehigh University's Packard Laboratory and the University bookstore, a distance of approximately 1.0 km roundtrip.

Chapter 74 — Learning from Humans

Aude G. Billard, Sylvain Calinon and Rüdiger Dillmann

This chapter surveys the main approaches developed to date to endow robots with the ability to learn from human guidance. The field is best known as robot programming by demonstration, robot learning from/by demonstration, apprenticeship learning and imitation learning. We start with a brief historical overview of the field. We then summarize the various approaches taken to solve four main questions: what, how, when, and who to imitate. We emphasize the importance of choosing well the interface and the channels used to convey the demonstrations, with an eye on interfaces providing force control and force feedback. We then review algorithmic approaches to model skills individually and as a compound, and algorithms that combine learning from human guidance with reinforcement learning. We close with a look at the use of language to guide teaching and a list of open issues.

Policy refinement after demonstration

Author  Sylvain Calinon, Petar Kormushev, Darwin Caldwell

Video ID : 105

Use of stochastic optimization in the policy-parameters space to refine a skill initially learned from demonstration. Reference: S. Calinon, P. Kormushev, D.G. Caldwell: Compliant skills acquisition and multi-optima policy search with EM-based reinforcement learning, Robot. Auton. Syst. 61(4), 369–379 (2013); URL: http://vimeo.com/13387420
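The idea of refining a demonstrated skill by stochastic search in policy-parameter space can be illustrated with a simple hill-climbing sketch. This is not the EM-based method of the cited reference; the function and parameter names are hypothetical:

```python
import random

def refine(policy_params, rollout_return, iters=100, noise=0.1, seed=0):
    """Refine a demonstrated policy by perturbing its parameters.

    policy_params: parameter vector learned from demonstration.
    rollout_return: callable that executes the policy with given
                    parameters and returns a scalar return.
    Keeps a candidate only if it improves on the best return so far.
    """
    rng = random.Random(seed)
    best = list(policy_params)
    best_ret = rollout_return(best)
    for _ in range(iters):
        # Gaussian perturbation around the current best parameters.
        cand = [p + rng.gauss(0.0, noise) for p in best]
        r = rollout_return(cand)
        if r > best_ret:
            best, best_ret = cand, r
    return best, best_ret
```

The demonstration supplies a good starting point, so even this greedy local search can improve the skill; importance-weighted methods such as the EM-based search in the reference use the same perturb-and-evaluate loop but combine several rollouts per update.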

Chapter 76 — Evolutionary Robotics

Stefano Nolfi, Josh Bongard, Phil Husbands and Dario Floreano

Evolutionary Robotics is a method for automatically generating artificial brains and morphologies of autonomous robots. This approach is useful both for investigating the design space of robotic applications and for testing scientific hypotheses of biological mechanisms and processes. In this chapter we provide an overview of methods and results of Evolutionary Robotics with robots of different shapes, dimensions, and operational features. We consider both simulated and physical robots, with special attention to the transfer between the two worlds.

Morphological change in an autonomous robot

Author  Josh Bongard

Video ID : 771

This video demonstrates a robot that is able to change its morphology. It shows that this change enables evolution to create useful controllers for this robot faster than for a comparable robot that does not undergo morphological change.