Chapter 40 — Mobility and Manipulation

Oliver Brock, Jaeheung Park and Marc Toussaint

Mobile manipulation requires the integration of methodologies from all aspects of robotics. Instead of tackling each aspect in isolation, mobile manipulation research exploits their interdependence to solve challenging problems. As a result, novel views of long-standing problems emerge. In this chapter, we present these emerging views in the areas of grasping, control, motion generation, learning, and perception. All of these areas must address the shared challenges of high-dimensionality, uncertainty, and task variability. The section on grasping and manipulation describes a trend towards actively leveraging contact and physical and dynamic interactions between hand, object, and environment. Research in control addresses the challenges of appropriately coupling mobility and manipulation. The field of motion generation increasingly blurs the boundaries between control and planning, leading to task-consistent motion in high-dimensional configuration spaces, even in dynamic and partially unknown environments. A key challenge of learning for mobile manipulation consists of identifying the appropriate priors, and we survey recent learning approaches to perception, grasping, motion, and manipulation. Finally, a discussion of promising methods in perception shows how concepts and methods from navigation and active perception are applied.

CHOMP trajectory optimization

Author   Nathan Ratliff, Matt Zucker, J. Andrew Bagnell, Siddhartha Srinivasa

Video ID : 665

Covariant functional gradient techniques for motion planning via optimization. Computer simulations and video demonstrations based on two experimental platforms: the Barrett Technology WAM arm and the Boston Dynamics LittleDog.
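
To make the covariant-gradient idea concrete, here is a minimal toy sketch in Python (not the authors' CHOMP implementation): a 2-D point trajectory with fixed endpoints, a velocity-based smoothness cost, a simple hinge obstacle potential, and gradient steps preconditioned by the inverse smoothness metric, which spreads each update smoothly over the whole trajectory. The obstacle model, weights, and step size are illustrative assumptions.

import numpy as np

N = 50                                        # number of interior waypoints
start, goal = np.array([0.0, 0.0]), np.array([1.0, 0.0])
obstacle, radius = np.array([0.5, 0.02]), 0.2
influence, w_obs, eta = 0.3, 0.01, 0.1        # assumed obstacle-influence radius, weight, step size

# Smoothness metric A: discrete Laplacian arising from the sum of squared velocities.
A = 2.0 * np.eye(N) - np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)
A_inv = np.linalg.inv(A)
b = np.zeros((N, 2))
b[0], b[-1] = start, goal                     # boundary terms for the fixed endpoints

def obstacle_gradient(xi):
    """Gradient of a simple hinge obstacle potential, evaluated per waypoint."""
    d = xi - obstacle
    dist = np.linalg.norm(d, axis=1, keepdims=True)
    inside = (dist < influence).astype(float)
    return -inside * d / np.maximum(dist, 1e-9)   # cost increases toward the obstacle

xi = np.linspace(start, goal, N + 2)[1:-1]    # straight-line initialization

for _ in range(300):
    g = (A @ xi - b) + w_obs * obstacle_gradient(xi)   # total cost gradient
    xi = xi - eta * (A_inv @ g)               # covariant step: precondition by A^-1

print("closest approach to obstacle center:",
      round(float(np.linalg.norm(xi - obstacle, axis=1).min()), 3))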

Chapter 72 — Social Robotics

Cynthia Breazeal, Kerstin Dautenhahn and Takayuki Kanda

This chapter surveys some of the principal research trends in Social Robotics and its application to human–robot interaction (HRI). Social (or Sociable) robots are designed to interact with people in a natural, interpersonal manner – often to achieve positive outcomes in diverse applications such as education, health, quality of life, entertainment, communication, and tasks requiring collaborative teamwork. The long-term goal of creating social robots that are competent and capable partners for people is quite challenging. They will need to be able to communicate naturally with people using both verbal and nonverbal signals. They will need to engage us not only on a cognitive level, but on an emotional level as well in order to provide effective social and task-related support to people. They will need a wide range of social-cognitive skills and a theory of other minds to understand human behavior, and to be intuitively understood by people. A deep understanding of human intelligence and behavior across multiple dimensions (i.e., cognitive, affective, physical, social, etc.) is necessary in order to design robots that can successfully play a beneficial role in the daily lives of people. This requires a multidisciplinary approach where the design of social robot technologies and methodologies is informed by robotics, artificial intelligence, psychology, neuroscience, human factors, design, anthropology, and more.

Playing triadic games with KASPAR

Author  Kerstin Dautenhahn

Video ID : 220

The video illustrates (with researchers taking the roles of children) the system developed by Joshua Wainer as part of his PhD research at the University of Hertfordshire. In this study, KASPAR was developed to fully autonomously play games with pairs of children with autism. The robot provides encouragement, motivation and feedback, and 'joins in the game'. The system was evaluated in long-term studies with children with autism (J. Wainer et al. 2014). Results show that KASPAR encourages collaborative skills in children with autism.

Chapter 53 — Multiple Mobile Robot Systems

Lynne E. Parker, Daniela Rus and Gaurav S. Sukhatme

Within the context of multiple mobile and networked robot systems, this chapter explores the current state of the art. After a brief introduction, we first examine architectures for multirobot cooperation, exploring the alternative approaches that have been developed. Next, we explore communications issues and their impact on multirobot teams in Sect. 53.3, followed by a discussion of networked mobile robots in Sect. 53.4. Following this, we discuss swarm robot systems in Sect. 53.5 and modular robot systems in Sect. 53.6. While swarm and modular systems typically assume large numbers of homogeneous robots, other types of multirobot systems include heterogeneous robots. We therefore next discuss heterogeneity in cooperative robot teams in Sect. 53.7. Once robot teams allow for individual heterogeneity, issues of task allocation become important; Sect. 53.8 therefore discusses common approaches to task allocation. Section 53.9 discusses the challenges of multirobot learning and some representative approaches. We outline some of the typical application domains which serve as test beds for multirobot systems research in Sect. 53.10. Finally, we conclude in Sect. 53.11 with some summary remarks and suggestions for further reading.

Swarm construction robots

Author  Radhika Nagpal

Video ID : 216

This video, produced at Harvard's Wyss Institute for Biologically Inspired Engineering, shows the development of swarm robots for construction. These robots follow the biological principles underlying insect swarms to build structures. The robots follow local control rules that, together with traffic-control laws, guarantee that the desired structures are built.
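
A toy illustration of the local-rule idea (a drastic simplification invented for this sketch, not the published rule set of the system in the video): robots walk along a 1-D structure carrying one brick each and place it at the first column where a purely local check keeps the structure below its target height and climbable.

target = [1, 2, 3, 3, 2, 1]          # desired height profile
heights = [0] * len(target)          # current state of the structure

def can_place(i):
    """Local rule: column still below target, and placing keeps steps climbable."""
    if heights[i] >= target[i]:
        return False
    new_h = heights[i] + 1
    left_ok = (i == 0) or abs(new_h - heights[i - 1]) <= 1
    right_ok = (i == len(heights) - 1) or abs(new_h - heights[i + 1]) <= 1
    return left_ok and right_ok

def robot_trip():
    """One robot walks the structure and places its single brick where allowed."""
    for i in range(len(heights)):
        if can_place(i):
            heights[i] += 1
            return True
    return False                     # nowhere to place: brick carried back out

trips = 0
while heights != target and trips < 1000:
    robot_trip()
    trips += 1
print(heights, "after", trips, "robot trips")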

Chapter 72 — Social Robotics

Cynthia Breazeal, Kerstin Dautenhahn and Takayuki Kanda

This chapter surveys some of the principal research trends in Social Robotics and its application to human–robot interaction (HRI). Social (or Sociable) robots are designed to interact with people in a natural, interpersonal manner – often to achieve positive outcomes in diverse applications such as education, health, quality of life, entertainment, communication, and tasks requiring collaborative teamwork. The long-term goal of creating social robots that are competent and capable partners for people is quite challenging. They will need to be able to communicate naturally with people using both verbal and nonverbal signals. They will need to engage us not only on a cognitive level, but on an emotional level as well in order to provide effective social and task-related support to people. They will need a wide range of social-cognitive skills and a theory of other minds to understand human behavior, and to be intuitively understood by people. A deep understanding of human intelligence and behavior across multiple dimensions (i.e., cognitive, affective, physical, social, etc.) is necessary in order to design robots that can successfully play a beneficial role in the daily lives of people. This requires a multidisciplinary approach where the design of social robot technologies and methodologies is informed by robotics, artificial intelligence, psychology, neuroscience, human factors, design, anthropology, and more.

Mental-state inference to support human-robot collaboration

Author  Cynthia Breazeal

Video ID : 563

In this video, the Leonardo robot infers mental states from the observable behavior of two human collaborators in order to assist them in achieving their respective goals. The robot uses a simulation-theory-inspired approach to make these inferences and to plan the appropriate actions to achieve the task goals. Each person wants a different food item (chips or cookies), locked in one of two larger boxes. The robot can operate a remote-control interface to open two smaller boxes, one containing chips and the other cookies. The task is inspired by the Sally-Anne false-belief task, where the humans have diverging beliefs caused by a manipulation witnessed by only one of the participants. The robot must keep track of its own beliefs, infer the beliefs of the human collaborators, and infer their respective goals in order to offer the correct assistance.
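
A minimal, hypothetical sketch (Python) of the kind of belief bookkeeping this requires: the robot keeps its own (true) world model plus one belief store per human, a human's store is updated only by events that human witnessed, and the diverging beliefs plus inferred goals determine how to assist. The names, events, and decision rule are illustrative assumptions, not the actual Leonardo architecture.

true_state = {"box_A": "chips", "box_B": "cookies"}
beliefs = {"person1": dict(true_state), "person2": dict(true_state)}
goals = {"person1": "chips", "person2": "cookies"}

def swap_contents(witnesses):
    """Someone swaps the box contents; only the witnesses update their beliefs."""
    true_state["box_A"], true_state["box_B"] = true_state["box_B"], true_state["box_A"]
    for person in witnesses:
        beliefs[person] = dict(true_state)

# person2 is out of the room, so the swap is witnessed only by person1,
# giving person2 a false belief as in the Sally-Anne task.
swap_contents(witnesses=["person1"])

def assist(person):
    """Open the box that actually holds the person's goal item; flag a false belief."""
    goal = goals[person]
    actual_box = next(b for b, item in true_state.items() if item == goal)
    believed_box = next(b for b, item in beliefs[person].items() if item == goal)
    return actual_box, actual_box != believed_box

for person in ("person1", "person2"):
    box, false_belief = assist(person)
    print(f"open {box} for {person}" + (" (correcting a false belief)" if false_belief else ""))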

Chapter 63 — Medical Robotics and Computer-Integrated Surgery

Russell H. Taylor, Arianna Menciassi, Gabor Fichtinger, Paolo Fiorini and Paolo Dario

The growth of medical robotics since the mid-1980s has been striking. From a few initial efforts in stereotactic brain surgery, orthopaedics, endoscopic surgery, microsurgery, and other areas, the field has expanded to include commercially marketed, clinically deployed systems, and a robust and exponentially expanding research community. This chapter will discuss some major themes and illustrate them with examples from current and past research. Further reading providing a more comprehensive review of this rapidly expanding field is suggested in Sect. 63.4.

Medical robots may be classified in many ways: by manipulator design (e.g., kinematics, actuation); by level of autonomy (e.g., preprogrammed versus teleoperation versus constrained cooperative control); by targeted anatomy or technique (e.g., cardiac, intravascular, percutaneous, laparoscopic, microsurgical); or by intended operating environment (e.g., in-scanner, conventional operating room). In this chapter, we have chosen to focus on the role of medical robots within the context of larger computer-integrated systems including presurgical planning, intraoperative execution, and postoperative assessment and follow-up.

First, we introduce basic concepts of computer-integrated surgery, discuss critical factors affecting the eventual deployment and acceptance of medical robots, and introduce the basic system paradigms of surgical computer-assisted planning, execution, monitoring, and assessment (surgical CAD/CAM) and surgical assistance. In subsequent sections, we provide an overview of the technology of medical robot systems and discuss examples of our basic system paradigms, with brief additional discussion of remote telesurgery and robotic surgical simulators. We conclude with some thoughts on future research directions and provide suggested further reading.

IREP robot - Insertable robotic effectors in single-port surgery

Author  Columbia University

Video ID : 831

This movie shows the single-port-access surgical robot IREP. This multimedia extension accompanies the IEEE ICRA 2010 paper describing design considerations for suturing. The work was carried out by Jienan Ding, Kai Xu, Roger Goldman, and Nabil Simaan at the ARMA lab, in collaboration with Peter Allen and Dennis Fowler from Columbia University.

Chapter 46 — Simultaneous Localization and Mapping

Cyrill Stachniss, John J. Leonard and Sebastian Thrun

This chapter provides a comprehensive introduction to the simultaneous localization and mapping problem, better known in its abbreviated form as SLAM. SLAM addresses the main perception problem of a robot navigating an unknown environment. While navigating the environment, the robot seeks to acquire a map thereof, and at the same time it wishes to localize itself using its map. The SLAM problem can be motivated in two different ways: one might be interested in detailed environment models, or one might seek to maintain an accurate sense of a mobile robot’s location. SLAM serves both of these purposes.

We review the three major paradigms from which many published methods for SLAM are derived: (1) the extended Kalman filter (EKF); (2) particle filtering; and (3) graph optimization. We also review recent work in three-dimensional (3-D) SLAM using visual and red-green-blue-depth (RGB-D) sensors, and close with a discussion of open research problems in robotic mapping.

Large-scale SLAM using the Atlas framework

Author  Michael Bosse

Video ID : 440

This video shows the operation of the Atlas framework for real-time, large-scale mapping using the MIT Killian Court data set. Atlas employs a graph of coordinate frames: each vertex in the graph represents a local coordinate frame, and each edge represents the transformation between adjacent local coordinate frames. In each local coordinate frame, extended Kalman filter SLAM (Sect. 46.3.1, Springer Handbook of Robotics, 2nd edn 2016) is performed to build a map of the local environment and to estimate the current robot pose, along with the uncertainties of each. Each map's uncertainties are modelled with respect to its own local frame. Probabilities of entities relative to arbitrary map frames are obtained by composing estimates along a path of edges between adjacent map frames, selected with Dijkstra's shortest-path algorithm. Loop closing is achieved via an efficient map-matching algorithm. Reference: M. Bosse, P.M. Newman, J. Leonard, S. Teller: Simultaneous localization and map building in large-scale cyclic environments using the Atlas framework, Int. J. Robot. Res. 23(12), 1113-1139 (2004).
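
To make the frame-graph mechanism concrete, the following Python sketch represents local map frames as graph vertices with relative SE(2) transforms on the edges, and uses Dijkstra's algorithm to pick the chain of edges along which transforms are composed, expressing a pose from one map frame in another. It omits the covariance propagation and map matching that Atlas performs; the graph, edge weights, and numbers are invented for illustration.

import heapq
import itertools
import math
import numpy as np

def se2(x, y, theta):
    """Homogeneous 2-D rigid-body transform."""
    c, s = math.cos(theta), math.sin(theta)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

edges = {}   # frame u -> list of (v, T_uv, weight), where T_uv is the pose of frame v in frame u

def add_edge(u, v, T_uv, weight):
    edges.setdefault(u, []).append((v, T_uv, weight))
    edges.setdefault(v, []).append((u, np.linalg.inv(T_uv), weight))

# A small made-up graph of local map frames forming a loop A-B-C-D-A.
add_edge("A", "B", se2(2.0, 0.0, 0.0), 1.0)
add_edge("B", "C", se2(2.0, 0.0, math.pi / 2), 1.0)
add_edge("C", "D", se2(2.0, 0.0, math.pi / 2), 1.0)
add_edge("D", "A", se2(2.0, 0.0, math.pi / 2), 1.0)

def transform_between(src, dst):
    """Dijkstra over accumulated edge weight, composing transforms along the path."""
    tie = itertools.count()                 # tie-breaker: never compare numpy arrays in the heap
    best = {src: 0.0}
    queue = [(0.0, next(tie), src, np.eye(3))]
    while queue:
        cost, _, frame, T = heapq.heappop(queue)
        if frame == dst:
            return T                        # T expresses frame dst in frame src
        for nbr, T_edge, w in edges.get(frame, []):
            new_cost = cost + w
            if new_cost < best.get(nbr, math.inf):
                best[nbr] = new_cost
                heapq.heappush(queue, (new_cost, next(tie), nbr, T @ T_edge))
    raise ValueError("frames are not connected")

# A robot pose known in map frame C, expressed in frame A via the lowest-cost chain of edges.
pose_in_C = se2(0.5, 0.0, 0.0)
print(np.round(transform_between("A", "C") @ pose_in_C, 3))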

Chapter 4 — Mechanism and Actuation

Victor Scheinman, J. Michael McCarthy and Jae-Bok Song

This chapter focuses on the principles that guide the design and construction of robotic systems. The kinematics equations and Jacobian of the robot characterize its range of motion and mechanical advantage, and guide the selection of its size and joint arrangement. The tasks a robot is to perform and the associated precision of its movement determine detailed features such as mechanical structure, transmission, and actuator selection. Here we discuss in detail both the mathematical tools and practical considerations that guide the design of mechanisms and actuation for a robot system.

Section 4.1 discusses characteristics of the mechanisms and actuation that affect the performance of a robot. Sections 4.2–4.6 discuss the basic features of a robot manipulator and their relationship to the mathematical model that is used to characterize its performance. Sections 4.7 and 4.8 focus on the details of the structure and actuation of the robot and how they combine to yield various types of robots. The final section, Sect. 4.9, relates these design features to various performance metrics.
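
As a generic textbook illustration of how the kinematics equations and the Jacobian characterize range of motion and mechanical advantage (a planar 2-R arm with assumed link lengths, not a design from this chapter), the following Python sketch computes the forward kinematics, the Jacobian, and the manipulability measure, which collapses toward zero as the arm approaches its outstretched singularity.

import numpy as np

L1, L2 = 0.4, 0.3   # link lengths in meters (assumed values)

def forward_kinematics(q):
    """End-effector position of a planar 2-R arm for joint angles q = (q1, q2)."""
    q1, q2 = q
    x = L1 * np.cos(q1) + L2 * np.cos(q1 + q2)
    y = L1 * np.sin(q1) + L2 * np.sin(q1 + q2)
    return np.array([x, y])

def jacobian(q):
    """Maps joint velocities to end-effector velocities."""
    q1, q2 = q
    return np.array([
        [-L1 * np.sin(q1) - L2 * np.sin(q1 + q2), -L2 * np.sin(q1 + q2)],
        [ L1 * np.cos(q1) + L2 * np.cos(q1 + q2),  L2 * np.cos(q1 + q2)],
    ])

def manipulability(q):
    """sqrt(det(J J^T)): a standard summary of mechanical advantage at a configuration."""
    J = jacobian(q)
    return np.sqrt(np.linalg.det(J @ J.T))

for q in (np.array([0.3, 1.2]), np.array([0.3, 0.05])):   # bent elbow vs. nearly straight
    print(np.round(forward_kinematics(q), 3), round(manipulability(q), 4))
# Near the straight configuration the manipulability drops toward zero: the arm can resist
# large forces along its length with small joint torques but loses the ability to move that way.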

BigDog - Applications of hydraulic actuators

Author  Boston Dynamics

Video ID : 645

Fig. 4.22a Application of hydraulic actuators to a robot: BigDog (Boston Dynamics).

Chapter 23 — Biomimetic Robots

Kyu-Jin Cho and Robert Wood

Biomimetic robot designs attempt to translate biological principles into engineered systems, replacing more classical engineering solutions in order to achieve a function observed in the natural system. This chapter will focus on mechanism design for bio-inspired robots that replicate key principles from nature with novel engineering solutions. The challenges of biomimetic design include developing a deep understanding of the relevant natural system and translating this understanding into engineering design rules. This often entails the development of novel fabrication and actuation to realize the biomimetic design.

This chapter consists of four sections. In Sect. 23.1, we will define what biomimetic design entails, and contrast biomimetic robots with bio-inspired robots. In Sect. 23.2, we will discuss the fundamental components for developing a biomimetic robot. In Sect. 23.3, we will review detailed biomimetic designs that have been developed for canonical robot locomotion behaviors including flapping-wing flight, jumping, crawling, wall climbing, and swimming. In Sect. 23.4, we will discuss the enabling technologies for these biomimetic designs including material and fabrication.

Autonomous, self-contained, soft robotic fish

Author  Andrew D. Marchese, Cagdas D. Onal, Daniela Rus

Video ID : 433

The robotic fish was built by Andrew Marchese, a graduate student in MIT's Department of Electrical Engineering and Computer Science and lead author of the accompanying paper, together with Daniela Rus and postdoc Cagdas D. Onal. Each side of the fish's tail is bored through with a long, tightly undulating channel. Carbon dioxide released from a canister in the fish's abdomen causes the channel on one side to inflate, bending the tail in the opposite direction.

Chapter 40 — Mobility and Manipulation

Oliver Brock, Jaeheung Park and Marc Toussaint

Mobile manipulation requires the integration of methodologies from all aspects of robotics. Instead of tackling each aspect in isolation, mobile manipulation research exploits their interdependence to solve challenging problems. As a result, novel views of long-standing problems emerge. In this chapter, we present these emerging views in the areas of grasping, control, motion generation, learning, and perception. All of these areas must address the shared challenges of high-dimensionality, uncertainty, and task variability. The section on grasping and manipulation describes a trend towards actively leveraging contact and physical and dynamic interactions between hand, object, and environment. Research in control addresses the challenges of appropriately coupling mobility and manipulation. The field of motion generation increasingly blurs the boundaries between control and planning, leading to task-consistent motion in high-dimensional configuration spaces, even in dynamic and partially unknown environments. A key challenge of learning for mobile manipulation consists of identifying the appropriate priors, and we survey recent learning approaches to perception, grasping, motion, and manipulation. Finally, a discussion of promising methods in perception shows how concepts and methods from navigation and active perception are applied.

State-representation learning for robotics

Author  Rico Jonschkowski, Oliver Brock

Video ID : 670

State-representation learning for robotics using prior knowledge about interacting with the physical world.

Chapter 28 — Force and Tactile Sensing

Mark R. Cutkosky and William Provancher

This chapter provides an overview of force and tactile sensing, with the primary emphasis placed on tactile sensing. We begin by presenting some basic considerations in choosing a tactile sensor and then review a wide variety of sensor types, including proximity, kinematic, force, dynamic, contact, skin deflection, thermal, and pressure sensors. We also review various transduction methods appropriate for each general sensor type. We consider the information that these various types of sensors provide in terms of whether they are most useful for manipulation, surface exploration, or responding to contact from external agents.

Concerning the interpretation of tactile information, we describe the general problems and present two short illustrative examples. The first involves intrinsic tactile sensing, i.e., estimating contact locations and forces from force sensors. The second involves contact pressure sensing, i.e., estimating surface normal and shear stress distributions from an array of sensors in an elastic skin. We conclude with a brief discussion of the challenges that remain to be solved in packaging and manufacturing damage-tolerant tactile sensors.
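
To illustrate the first example, intrinsic tactile sensing, the following Python sketch recovers a single point-contact location on a hemispherical fingertip from the force and torque measured at its base: the torque constrains the contact to the force's line of action, which is then intersected with the fingertip surface, keeping the intersection where the force pushes into the skin. The fingertip geometry and the simulated wrench are assumed values; a real implementation would also handle sensor noise and contact-validity checks.

import numpy as np

CENTER = np.array([0.0, 0.0, 0.05])    # fingertip sphere center in the sensor frame (m), assumed
RADIUS = 0.02                          # fingertip radius (m), assumed

def contact_from_wrench(f, tau):
    """Solve tau = r x f for r on the fingertip sphere (single point contact)."""
    f2 = f @ f
    r0 = np.cross(f, tau) / f2         # a point on the force's line of action
    # Intersect the line r0 + t*f with the sphere |r - CENTER| = RADIUS.
    d = r0 - CENTER
    a, b, c = f2, 2.0 * (d @ f), d @ d - RADIUS**2
    disc = b * b - 4.0 * a * c
    if disc < 0:
        raise ValueError("line of action misses the fingertip surface")
    t = (-b - np.sqrt(disc)) / (2.0 * a)   # first intersection: the force pushes inward here
    return r0 + t * f

# Simulated measurement: a contact with a tangential (friction) force component,
# observed as a force/torque pair at the sensor origin.
true_contact = CENTER + RADIUS * np.array([0.0, 0.6, 0.8])
applied_force = np.array([0.0, -1.0, -2.0])          # pushes into the fingertip surface
measured_torque = np.cross(true_contact, applied_force)

estimate = contact_from_wrench(applied_force, measured_torque)
print(np.round(estimate, 4), np.round(true_contact, 4))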

Capacitive tactile sensing

Author  Mark Cutkosky

Video ID : 14

Video demonstrating the capacitive tactile sensing suite on the SRI-Meka-Stanford four-fingered hand built for the DARPA ARM-H Mobile Manipulation program.