
Chapter 69 — Physical Human-Robot Interaction

Sami Haddadin and Elizabeth Croft

Over the last two decades, the foundations for physical human–robot interaction (pHRI) have evolved from successful developments in mechatronics, control, and planning, leading toward safer lightweight robot designs and interaction control schemes that advance beyond the current capacities of existing high-payload and high-precision position-controlled industrial robots. Based on their ability to sense physical interaction, render compliant behavior along the robot structure, plan motions that respect human preferences, and generate interaction plans for collaboration and coaction with humans, these novel robots have opened up new and unforeseen application domains and have advanced the field of human safety in robotics.

This chapter gives an overview of the state of the art in pHRI as of the date of publication. First, the advances in human safety are outlined, addressing topics in human injury analysis in robotics and safety standards for pHRI. Then, the foundations of human-friendly robot design, including the development of lightweight and intrinsically flexible force/torque-controlled machines, together with the required perception abilities for interaction, are introduced. Subsequently, motion-planning techniques for human environments, including the domains of biomechanically safe, risk-metric-based, and human-aware planning, are covered. Finally, the rather recent problem of interaction planning is summarized, including the issues of collaborative action planning, the definition of the interaction planning problem, and an introduction to robot reflexes and reactive control architectures for pHRI.

Justin: A humanoid upper body system for two-handed manipulation experiments

Author  Christoph Borst, Christian Ott, Thomas Wimböck, Bernhard Brunner, Franziska Zacharias, Berthold Bäuml

Video ID : 626

This video presents a humanoid two-arm system developed as a research platform for studying dexterous two-handed manipulation. The system is based on the modular DLR-Lightweight-Robot-III and the DLR-Hand-II. Two arms and hands are combined with a 3-DOF movable torso and a visual system to form a complete humanoid upper body. The diversity of the system is demonstrated by showing the mechanical design, several control concepts, the application of rapid prototyping and hardware-in-the-loop (HIL) development, as well as two-handed manipulation experiments and the integration of path planning capabilities.

Chapter 64 — Rehabilitation and Health Care Robotics

H.F. Machiel Van der Loos, David J. Reinkensmeyer and Eugenio Guglielmelli

The field of rehabilitation robotics considers robotic systems that 1) provide therapy for persons seeking to recover their physical, social, communication, or cognitive function, and/or that 2) assist persons who have a chronic disability to accomplish activities of daily living. This chapter will discuss these two main domains and provide descriptions of the major achievements of the field over its short history and chart out the challenges to come. Specifically, after providing background information on demographics (Sect. 64.1.2) and history (Sect. 64.1.3) of the field, Sect. 64.2 describes physical therapy and exercise training robots, and Sect. 64.3 describes robotic aids for people with disabilities. Section 64.4 then presents recent advances in smart prostheses and orthoses that are related to rehabilitation robotics. Finally, Sect. 64.5 provides an overview of recent work in diagnosis and monitoring for rehabilitation as well as other health-care issues. The reader is referred to Chap. 73 for cognitive rehabilitation robotics and to Chap. 65 for robotic smart home technologies, which are often considered assistive technologies for persons with disabilities. At the conclusion of the present chapter, the reader will be familiar with the history of rehabilitation robotics and its primary accomplishments, and will understand the challenges the field may face in the future as it seeks to improve health care and the well-being of persons with disabilities.

ReWalk

Author  Argo Medical Technologies

Video ID : 508

The ReWalk is a legged exoskeleton designed to help people with paralysis to walk.

Chapter 18 — Parallel Mechanisms

Jean-Pierre Merlet, Clément Gosselin and Tian Huang

This chapter presents an introduction to the kinematics and dynamics of parallel mechanisms, also referred to as parallel robots. As opposed to classical serial manipulators, the kinematic architecture of parallel robots includes closed-loop kinematic chains. As a consequence, their analysis differs considerably from that of their serial counterparts. This chapter aims at presenting the fundamental formulations and techniques used in their analysis.

6-DOF statically balanced parallel robot

Author  Clément Gosselin

Video ID : 48

This video demonstrates a 6-DOF statically balanced parallel robot. References: 1. C. Gosselin, J. Wang, T. Laliberté, I. Ebert-Uphoff: On the design of a statically balanced 6-DOF parallel manipulator, Proc. IFToMM Tenth World Congress Theory of Machines and Mechanisms, Oulu (1999), pp. 1045–1050; 2. C. Gosselin, J. Wang: On the design of statically balanced motion bases for flight simulators, Proc. AIAA Modeling and Simulation Technologies Conf., Boston (1998), pp. 272–282; 3. I. Ebert-Uphoff, C. Gosselin: Dynamic modeling of a class of spatial statically-balanced parallel platform mechanisms, Proc. IEEE Int. Conf. Robot. Autom. (ICRA), Detroit (1999), Vol. 2, pp. 881–888.

Chapter 76 — Evolutionary Robotics

Stefano Nolfi, Josh Bongard, Phil Husbands and Dario Floreano

Evolutionary Robotics is a method for automatically generating artificial brains and morphologies of autonomous robots. This approach is useful both for investigating the design space of robotic applications and for testing scientific hypotheses of biological mechanisms and processes. In this chapter we provide an overview of methods and results of Evolutionary Robotics with robots of different shapes, dimensions, and operation features. We consider both simulated and physical robots with special consideration to the transfer between the two worlds.

Coevolved predator and prey robots

Author  Dario Floreano

Video ID : 38

Coevolved predator and prey robots engaged in a tournament. The predator and prey robots (from left to right) are placed in an arena surrounded by walls and are allowed to interact for several trials starting at different, randomly generated orientations. Predators are selected on the basis of the percentage of trials in which they are able to catch (i.e., to touch) the prey, and prey on the basis of the percentage of trials in which they are able to escape (i.e., to not be touched by) the predators. Predators have a vision system, whereas the prey have only short-range distance sensors but can go twice as fast as the predators. Collisions between the robots are detected by a conductive belt at the base of each robot.
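
As a rough sketch of the fitness bookkeeping described above, the fragment below scores a predator-prey pairing over a number of tournament trials; the trial count, the function names, and the `run_trial` placeholder are illustrative assumptions, not the authors' implementation.

```python
import random

def run_trial(predator_genome, prey_genome, max_steps=500):
    """Hypothetical stand-in for one tournament trial: returns True if the
    predator touches the prey within the trial, False if the prey escapes.
    A real setup would run the two robots (or a simulation of them) from a
    randomly generated starting orientation."""
    # Placeholder outcome; replace with the actual robot or simulator loop.
    return random.random() < 0.5

def evaluate_pair(predator_genome, prey_genome, n_trials=10):
    """Fitness as described above: the predator scores the fraction of trials
    in which it catches the prey, the prey the fraction in which it escapes,
    so the two scores sum to one for each pairing."""
    catches = sum(run_trial(predator_genome, prey_genome) for _ in range(n_trials))
    predator_fitness = catches / n_trials
    prey_fitness = 1.0 - predator_fitness
    return predator_fitness, prey_fitness
```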

Chapter 74 — Learning from Humans

Aude G. Billard, Sylvain Calinon and Rüdiger Dillmann

This chapter surveys the main approaches developed to date to endow robots with the ability to learn from human guidance. The field is best known as robot programming by demonstration, robot learning from/by demonstration, apprenticeship learning, and imitation learning. We start with a brief historical overview of the field. We then summarize the various approaches taken to solve four main questions: when, what, who, and how to imitate. We emphasize the importance of carefully choosing the interface and the channels used to convey the demonstrations, with an eye on interfaces providing force control and force feedback. We then review algorithmic approaches to model skills individually and as a compound, and algorithms that combine learning from human guidance with reinforcement learning. We close with a look at the use of language to guide teaching and a list of open issues.

Exploitation of social cues to speed up learning

Author  Sylvain Calinon, Aude Billard

Video ID : 106

Use of social cues to speed up the imitation-learning process, with gaze and pointing information used to select the objects relevant to the task. Reference: S. Calinon, A.G. Billard: Teaching a humanoid robot to recognize and reproduce social cues, Proc. IEEE Int. Symp. Robot Human Interactive Communication (Ro-Man), Hatfield (2006), pp. 346–351; URL: http://lasa.epfl.ch/research/control_automation/interaction/social/index.php.

Chapter 28 — Force and Tactile Sensing

Mark R. Cutkosky and William Provancher

This chapter provides an overview of force and tactile sensing, with the primary emphasis placed on tactile sensing. We begin by presenting some basic considerations in choosing a tactile sensor and then review a wide variety of sensor types, including proximity, kinematic, force, dynamic, contact, skin deflection, thermal, and pressure sensors. We also review various transduction methods, appropriate for each general sensor type. We consider the information that these various types of sensors provide in terms of whether they are most useful for manipulation, surface exploration, or responding to contacts from external agents.

Concerning the interpretation of tactile information, we describe the general problems and present two short illustrative examples. The first involves intrinsic tactile sensing, i.e., estimating contact locations and forces from force sensors. The second involves contact pressure sensing, i.e., estimating surface normal and shear stress distributions from an array of sensors in an elastic skin. We conclude with a brief discussion of the challenges that remain to be solved in packaging and manufacturing damage-tolerant tactile sensors.
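
As a rough illustration of the first example (intrinsic tactile sensing), the sketch below recovers the line of action of a single contact force from a six-axis force/torque measurement; the assumption of a single frictional point contact with negligible spin torque, the function name, and the numerical example are ours, not the chapter's derivation.

```python
import numpy as np

def contact_line_of_action(f, m):
    """For a single point contact with negligible spin torque, take the force f
    and moment m (3-vectors measured at the force/torque sensor origin) and
    return a point on the force's line of action plus its direction. The true
    contact location is where this line intersects the known fingertip surface."""
    f = np.asarray(f, dtype=float)
    m = np.asarray(m, dtype=float)
    f_sq = float(f @ f)
    if f_sq < 1e-12:
        raise ValueError("force too small to localize the contact")
    point = np.cross(f, m) / f_sq        # point on the line closest to the origin
    direction = f / np.sqrt(f_sq)        # the line runs along the force direction
    return point, direction

# Example: a 1 N force along +z applied at (0.02, 0, 0.01) m relative to the sensor.
f = np.array([0.0, 0.0, 1.0])
r_true = np.array([0.02, 0.0, 0.01])
m = np.cross(r_true, f)                  # the moment the sensor would report
p, d = contact_line_of_action(f, m)      # p = (0.02, 0, 0), d = (0, 0, 1)
# r_true lies on the recovered line: r_true == p + 0.01 * d
```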

The effect of twice dropping, and then gently placing, a two-gram weight on a small capacitive tactile array

Author  Mark Cutkosky

Video ID : 15

Video illustrating the effect of twice dropping, and then gently placing, a two-gram weight on a small capacitive tactile array sampled at 20 Hz. The first drop produces a large dynamic signal in comparison to the static load, but the second drop is missed, demonstrating the value of having dynamic tactile sensing.
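
The following toy calculation illustrates one reason a slowly sampled static array can miss such an event: an impact transient lasting only a few milliseconds (the duration used here is made up for illustration) can fall entirely between two 20 Hz samples, which is exactly the gap that a dedicated dynamic tactile channel fills.

```python
import numpy as np

def samples_hitting_transient(t_impact, duration, fs=20.0, t_end=1.0):
    """Count how many samples of a sensor read at rate fs (Hz) land inside a
    transient that starts at t_impact (s) and lasts for `duration` (s)."""
    sample_times = np.arange(0.0, t_end, 1.0 / fs)
    inside = (sample_times >= t_impact) & (sample_times <= t_impact + duration)
    return int(inside.sum())

# A ~5 ms impact transient (illustrative duration) sampled at 20 Hz:
print(samples_hitting_transient(t_impact=0.212, duration=0.005))  # 0 -> the event is missed
print(samples_hitting_transient(t_impact=0.200, duration=0.005))  # 1 -> caught only by luck
```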

Chapter 56 — Robotics in Agriculture and Forestry

Marcel Bergerman, John Billingsley, John Reid and Eldert van Henten

Robotics for agriculture and forestry (A&F) represents the ultimate application of one of our society’s latest and most advanced innovations to its most ancient and important industries. Over the course of history, mechanization and automation increased crop output by several orders of magnitude, enabling a geometric growth in population and an increase in quality of life across the globe. Rapid population growth and rising incomes in developing countries, however, require ever larger amounts of A&F output. This chapter addresses robotics for A&F in the form of case studies where robotics is being successfully applied to solve well-identified problems. With respect to plant crops, the focus is on the in-field or in-farm tasks that are necessary to guarantee a quality crop and that, generally speaking, end at harvest time. In the livestock domain, the focus is on breeding and nurturing, exploiting, harvesting, and slaughtering and processing. The chapter is organized into four main sections. The first one explains the scope, in particular, what aspects of robotics for A&F are dealt with in the chapter. The second one discusses the challenges and opportunities associated with the application of robotics to A&F. The third section is the core of the chapter, presenting twenty case studies that showcase (mostly) mature applications of robotics in various agricultural and forestry domains. The case studies are not meant to be comprehensive but instead to give the reader a general overview of how robotics has been applied to A&F in the last 10 years. The fourth section concludes the chapter with a discussion of specific improvements to current technology and paths to commercialization.

A mini, unmanned, aerial system for remote sensing in agriculture

Author  Joao Valente, Julian Colorado, Claudio Rossi, Alex Martinez, Jaime Del Cerro, Antonio Barrientos

Video ID : 307

This video shows a mini-aerial robot employed for aerial sampling in precision agriculture (PA). Issues such as field partitioning, path planning, and robust flight control are addressed, together with experimental results collected during outdoor testing.

Chapter 69 — Physical Human-Robot Interaction

Sami Haddadin and Elizabeth Croft

Over the last two decades, the foundations for physical human–robot interaction (pHRI) have evolved from successful developments in mechatronics, control, and planning, leading toward safer lightweight robot designs and interaction control schemes that advance beyond the current capacities of existing high-payload and high-precision position-controlled industrial robots. Based on their ability to sense physical interaction, render compliant behavior along the robot structure, plan motions that respect human preferences, and generate interaction plans for collaboration and coaction with humans, these novel robots have opened up new and unforeseen application domains and have advanced the field of human safety in robotics.

This chapter gives an overview of the state of the art in pHRI as of the date of publication. First, the advances in human safety are outlined, addressing topics in human injury analysis in robotics and safety standards for pHRI. Then, the foundations of human-friendly robot design, including the development of lightweight and intrinsically flexible force/torque-controlled machines, together with the required perception abilities for interaction, are introduced. Subsequently, motion-planning techniques for human environments, including the domains of biomechanically safe, risk-metric-based, and human-aware planning, are covered. Finally, the rather recent problem of interaction planning is summarized, including the issues of collaborative action planning, the definition of the interaction planning problem, and an introduction to robot reflexes and reactive control architectures for pHRI.

Human-robot interactions

Author   J.Y.S. Luh, Shuyi Hu

Video ID : 613

In human-robot cooperative tasks, such as jointly carrying a rigid object, the robot is required to memorize different trajectories for different assignments and to automatically retrieve the proper one in real time whenever an assignment is repeated. To start a new task, the human leads the robot along a suitable trajectory to the desired goal; during this process the trajectory is recorded and stored in memory as a "skillful trajectory" for later use. For every new task, the human is required to lead the robot in this way. Reference: J.Y.S. Luh, S. Hu: Interactions and motions in human-robot coordination, Proc. IEEE Int. Conf. Robot. Autom. (ICRA), Detroit (1999), Vol. 4, pp. 3171–3176; doi: 10.1109/ROBOT.1999.774081.
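
A minimal sketch of the record-and-retrieve idea described above; the class name, the keying of trajectories by a task label, and the data layout are our assumptions, and the original work additionally addresses real-time retrieval and trajectory following during joint carrying.

```python
class TrajectoryLibrary:
    """Store demonstrated ('skillful') trajectories per task and return the
    stored trajectory when the same task is requested again."""

    def __init__(self):
        self._trajectories = {}   # task label -> list of (time, pose) samples

    def record(self, task, trajectory):
        """Store the trajectory recorded while the human led the robot."""
        self._trajectories[task] = list(trajectory)

    def retrieve(self, task):
        """Return the stored trajectory for a repeated task, or None if the
        task is new and must first be demonstrated by the human."""
        return self._trajectories.get(task)

# Usage: the first demonstration is recorded, later repetitions replay it.
library = TrajectoryLibrary()
library.record("carry_beam_A_to_B", [(0.0, (0.1, 0.2, 0.3)), (1.0, (0.4, 0.2, 0.3))])
replayed = library.retrieve("carry_beam_A_to_B")   # followed by the robot controller
```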

Chapter 38 — Grasping

Domenico Prattichizzo and Jeffrey C. Trinkle

This chapter introduces fundamental models of grasp analysis. The overall model is a coupling of models that define contact behavior with widely used models of rigid-body kinematics and dynamics. The contact model essentially boils down to the selection of components of contact force and moment that are transmitted through each contact. Mathematical properties of the complete model naturally give rise to five primary grasp types whose physical interpretations provide insight for grasp and manipulation planning.

After introducing the basic models and types of grasps, this chapter focuses on the most important grasp characteristic: complete restraint. A grasp with complete restraint prevents loss of contact and thus is very secure. Two primary restraint properties are form closure and force closure. A form closure grasp guarantees maintenance of contact as long as the links of the hand and the object are well-approximated as rigid and as long as the joint actuators are sufficiently strong. As will be seen, the primary difference between form closure and force closure grasps is the latter’s reliance on contact friction. This translates into requiring fewer contacts to achieve force closure than form closure.

The goal of this chapter is to give a thorough understanding of the all-important grasp properties of form and force closure. This will be done through detailed derivations of grasp models and discussions of illustrative examples. For an in-depth historical perspective and a treasure-trove bibliography of papers addressing a wide range of topics in grasping, the reader is referred to [38.1].

Grasp analysis using the MATLAB toolbox SynGrasp

Author  Monica Malvezzi, Guido Gioioso, Gionata Salvietti, Domenico Prattichizzo

Video ID : 551

This video documents a few examples of grasp analysis performed with SynGrasp, a MATLAB toolbox for grasp analysis. The toolbox provides a graphical user interface (GUI) with which the user can easily load a hand and an object, and a series of functions that the user can assemble and modify to exploit all the toolbox features. The video shows how to use SynGrasp to model and analyze grasping; in particular, it shows how users can select and load a hand model in the GUI, then choose an object and place it in the workspace by selecting its position with respect to the hand. The grasp is obtained by closing the hand from an initial configuration, which can be set by the user acting on the hand joints. Once the grasp is defined, it can be analyzed by evaluating the grasp quality measures available in the toolbox. Grasps can be described either by using the provided grasp planner or by directly defining contact points on the hand together with the respective contact normal directions. SynGrasp can model both fully actuated and underactuated robotic hands. An important role in grasp analysis, in particular with underactuated hands, is played by system compliance: SynGrasp can model stiffness at the contact points, at the joints, or in the actuation system, including the transmission. A wide set of analytical functions, continuously growing with new features and capabilities, has been developed to investigate the main grasp properties: controllable forces and object displacements, manipulability analysis, grasp stiffness, and different measures of grasp quality. A set of functions for the graphical representation of the hand, the object, and the main analysis results is also provided. The toolbox is freely available at http://syngrasp.dii.unisi.it.
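
SynGrasp itself is a MATLAB toolbox, so rather than guessing at its API, the sketch below illustrates in Python one of the standard quality measures such a toolbox can evaluate: the minimum singular value of the grasp matrix assembled from hard-finger contact points. The contact coordinates and function names are illustrative, and this is a generic textbook metric rather than a reproduction of SynGrasp's interface.

```python
import numpy as np

def skew(r):
    """3x3 cross-product matrix such that skew(r) @ f == np.cross(r, f)."""
    x, y, z = r
    return np.array([[0.0, -z,   y],
                     [z,    0.0, -x],
                     [-y,   x,   0.0]])

def grasp_matrix(contact_points):
    """Grasp matrix for hard-finger (point contact with friction) contacts:
    each 3-D contact force maps to an object wrench through [I; skew(r)]."""
    blocks = [np.vstack([np.eye(3), skew(np.asarray(r, dtype=float))])
              for r in contact_points]
    return np.hstack(blocks)              # 6 x (3 * number of contacts)

def min_singular_value_quality(contact_points):
    """A classic grasp-quality index: the smallest singular value of G.
    Larger is better; zero means some object wrench cannot be resisted."""
    G = grasp_matrix(contact_points)
    return float(np.linalg.svd(G, compute_uv=False).min())

# Three contacts around a unit-radius object (illustrative coordinates).
quality = min_singular_value_quality([(1, 0, 0), (-0.5, 0.87, 0), (-0.5, -0.87, 0)])
```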

Chapter 53 — Multiple Mobile Robot Systems

Lynne E. Parker, Daniela Rus and Gaurav S. Sukhatme

Within the context of multiple mobile and networked robot systems, this chapter explores the current state of the art. After a brief introduction, we first examine architectures for multirobot cooperation, exploring the alternative approaches that have been developed. Next, we explore communications issues and their impact on multirobot teams in Sect. 53.3, followed by a discussion of networked mobile robots in Sect. 53.4. Following this we discuss swarm robot systems in Sect. 53.5 and modular robot systems in Sect. 53.6. While swarm and modular systems typically assume large numbers of homogeneous robots, other types of multirobot systems include heterogeneous robots. We therefore next discuss heterogeneity in cooperative robot teams in Sect. 53.7. Once robot teams allow for individual heterogeneity, issues of task allocation become important; Sect. 53.8 therefore discusses common approaches to task allocation. Section 53.9 discusses the challenges of multirobot learning, and some representative approaches. We outline some of the typical application domains which serve as test beds for multirobot systems research in Sect. 53.10. Finally, we conclude in Sect. 53.11 with some summary remarks and suggestions for further reading.

Metamorphic robotic system

Author  Amit Pamecha, Gregory Chirikjian

Video ID : 198

This video describes a metamorphic robotic system composed of many robotic modules, each of which has the ability to locomote over its neighbors. Mechanical coupling enables the robots to interact with each other.