Chapter 30 — Sonar Sensing

Lindsay Kleeman and Roman Kuc

Sonar or ultrasonic sensing uses the propagation of acoustic energy at frequencies above normal hearing to extract information from the environment. This chapter presents the fundamentals and physics of sonar sensing for object localization, landmark measurement, and classification in robotics applications. The sources of sonar artifacts are explained, together with techniques for dealing with them. Different ultrasonic transducer technologies are outlined and their main characteristics highlighted.

Sonar systems are described that range in sophistication from low-cost threshold-based ranging modules to multitransducer, multipulse configurations with associated signal processing requirements, capable of accurate range and bearing measurement, interference rejection, motion compensation, and target classification. Continuous-transmission frequency-modulated (CTFM) systems are introduced and their ability to improve target sensitivity in the presence of noise is discussed. Various sonar ring designs that provide rapid coverage of the surrounding environment are described in conjunction with mapping results. The chapter ends with a discussion of biomimetic sonar, which draws inspiration from animals such as bats and dolphins.

B-scan image of indoor potted tree using multipulse sonar

Author  Roman Kuc

Video ID : 315

By repeatedly clearing the conventional sonar ranging board, the system converts each echo into a spike sequence whose density is related to the echo amplitude. A brightness-scan (B-scan) image, similar to diagnostic ultrasound images, is generated by transforming the short-term spike density into a grayscale intensity. The video shows a B-scan of a potted tree in an indoor environment containing a doorway (with door knob) and a tree located in front of a cinder-block wall. The B-scan shows the specular environmental features as well as the random tree-leaf structures. Note that the wall behind the tree is also clearly imaged. Reference: R. Kuc: Generating B-scans of the environment with a conventional sonar, IEEE Sensors J. 8(2), 151-160 (2008); doi: 10.1109/JSEN.2007.908242.
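The spike-density-to-intensity mapping described above can be sketched as follows. This is an illustrative reconstruction, not Kuc's implementation: the bin count, range window, and smoothing width are assumed parameters chosen only for demonstration.

```python
import numpy as np

def bscan_column(spike_times, n_bins=200, max_range_s=0.02, window=5):
    """Convert one ping's spike arrival times (in seconds) into a
    grayscale image column: histogram the spikes into range bins,
    then smooth with a short moving average so that local spike
    density maps to pixel intensity (0-255)."""
    counts, _ = np.histogram(spike_times, bins=n_bins, range=(0.0, max_range_s))
    kernel = np.ones(window) / window          # short-term averaging window
    density = np.convolve(counts, kernel, mode="same")
    if density.max() > 0:
        density = density / density.max()      # normalize to [0, 1]
    return (255 * density).astype(np.uint8)
```

Stacking one such column per transmit pulse as the sensor scans the scene yields the B-scan image shown in the video.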

Chapter 40 — Mobility and Manipulation

Oliver Brock, Jaeheung Park and Marc Toussaint

Mobile manipulation requires the integration of methodologies from all aspects of robotics. Instead of tackling each aspect in isolation, mobile manipulation research exploits their interdependence to solve challenging problems. As a result, novel views of long-standing problems emerge. In this chapter, we present these emerging views in the areas of grasping, control, motion generation, learning, and perception. All of these areas must address the shared challenges of high dimensionality, uncertainty, and task variability. The section on grasping and manipulation describes a trend towards actively leveraging contact and physical and dynamic interactions between hand, object, and environment. Research in control addresses the challenges of appropriately coupling mobility and manipulation. The field of motion generation increasingly blurs the boundaries between control and planning, leading to task-consistent motion in high-dimensional configuration spaces, even in dynamic and partially unknown environments. A key challenge of learning for mobile manipulation consists of identifying the appropriate priors, and we survey recent learning approaches to perception, grasping, motion, and manipulation. Finally, a discussion of promising methods in perception shows how concepts and methods from navigation and active perception are applied.

Autonomous robot skill acquisition

Authors  Scott Kuindersma, George Konidaris

Video ID : 669

This video demonstrates the autonomous-skill acquisition of a robot acting in a constrained environment called the "Red Room". The environment consists of buttons, levers, and switches, all located at points of interest designated by ARTags. The robot can navigate to these locations and perform primitive manipulation actions, some of which affect the physical state of the maze (e.g., by opening or closing a door).

Chapter 71 — Cognitive Human-Robot Interaction

Bilge Mutlu, Nicholas Roy and Selma Šabanović

A key research challenge in robotics is to design robotic systems with the cognitive capabilities necessary to support human–robot interaction. These systems will need to have appropriate representations of the world; the task at hand; the capabilities, expectations, and actions of their human counterparts; and how their own actions might affect the world, their task, and their human partners. Cognitive human–robot interaction is a research area that considers human(s), robot(s), and their joint actions as a cognitive system and seeks to create models, algorithms, and design guidelines to enable the design of such systems. Core research activities in this area include the development of representations and actions that allow robots to participate in joint activities with people; a deeper understanding of human expectations and cognitive responses to robot actions; and, models of joint activity for human–robot interaction. This chapter surveys these research activities by drawing on research questions and advances from a wide range of fields including computer science, cognitive science, linguistics, and robotics.

Robotic secrets revealed, Episode 2: The trouble begins

Author  Greg Trafton

Video ID : 130

This video demonstrates research on robot perception (including object recognition and multimodal person identification) and embodied cognition (including theory of mind or the ability to reason about what others believe). The video features two people interacting with two robots.

Chapter 24 — Wheeled Robots

Woojin Chung and Karl Iagnemma

The purpose of this chapter is to introduce, analyze, and compare various wheeled mobile robots (WMRs) and to present several realizations and commonly encountered designs. The mobility of WMRs is discussed on the basis of the kinematic constraints resulting from the pure rolling conditions at the contact points between the wheels and the ground. Practical robot structures are classified according to the number of wheels, and features are introduced focusing on commonly adopted designs. Omnimobile robot and articulated robot realizations are described. Wheel-terrain interaction models are presented in order to compute forces at the contact interface. Four possible wheel-terrain interaction cases are shown on the basis of the relative stiffness of the wheel and terrain. A suspension system is required for motion on uneven surfaces. Structures, dynamics, and important features of commonly used suspensions are explained.

An omnidirectional robot with four Swedish wheels

Author  Nexus Automation Limited

Video ID : 328

This video shows a holonomic omnidirectional mobile robot with four Swedish wheels. Each wheel enables lateral motion through rotating rollers mounted around its circumference. Although the structure of each wheel is more complicated, the driving mechanism becomes simpler. Another advantage is that the footprint locations remain unchanged during omnidirectional movements.
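The inverse kinematics of such a platform can be sketched for one common configuration: four Swedish wheels with rollers at 45 degrees (the mecanum arrangement). The wheel radius and the half-wheelbase/half-track dimensions below are illustrative assumptions, not parameters of the robot in the video.

```python
def wheel_speeds(vx, vy, wz, r=0.05, lx=0.2, ly=0.15):
    """Inverse kinematics for a four-Swedish-wheel (45-degree roller)
    platform. Returns the angular velocities (rad/s) of the front-left,
    front-right, rear-left, and rear-right wheels that realize the body
    velocity (vx, vy) in m/s and yaw rate wz in rad/s.
    r: wheel radius; lx, ly: half wheelbase and half track (assumed)."""
    L = lx + ly
    w_fl = (vx - vy - L * wz) / r
    w_fr = (vx + vy + L * wz) / r
    w_rl = (vx + vy - L * wz) / r
    w_rr = (vx - vy + L * wz) / r
    return w_fl, w_fr, w_rl, w_rr
```

For pure lateral motion (vx = 0, vy > 0, wz = 0), diagonally opposite wheels spin in the same direction, which is how the rollers produce the sideways translation described above.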

Chapter 20 — Snake-Like and Continuum Robots

Ian D. Walker, Howie Choset and Gregory S. Chirikjian

This chapter provides an overview of the state of the art of snake-like (backbones composed of many small links) and continuum (continuous backbone) robots. The history of each of these classes of robot is reviewed, focusing on key hardware developments. A review of the existing theory and algorithms for kinematics for both types of robot is presented, followed by a summary of modeling of locomotion for snake-like and continuum mechanisms.

Two-dimensional binary manipulator

Author  Greg Chirikjian

Video ID : 160

Greg Chirikjian's binary manipulator operating in two dimensions.

Chapter 72 — Social Robotics

Cynthia Breazeal, Kerstin Dautenhahn and Takayuki Kanda

This chapter surveys some of the principal research trends in Social Robotics and its application to human–robot interaction (HRI). Social (or sociable) robots are designed to interact with people in a natural, interpersonal manner – often to achieve positive outcomes in diverse applications such as education, health, quality of life, entertainment, communication, and tasks requiring collaborative teamwork. The long-term goal of creating social robots that are competent and capable partners for people is quite a challenging task. They will need to be able to communicate naturally with people using both verbal and nonverbal signals. They will need to engage us not only on a cognitive level, but on an emotional level as well, in order to provide effective social and task-related support to people. They will need a wide range of social-cognitive skills and a theory of other minds to understand human behavior, and to be intuitively understood by people. A deep understanding of human intelligence and behavior across multiple dimensions (i.e., cognitive, affective, physical, social, etc.) is necessary in order to design robots that can successfully play a beneficial role in the daily lives of people. This requires a multidisciplinary approach where the design of social robot technologies and methodologies is informed by robotics, artificial intelligence, psychology, neuroscience, human factors, design, anthropology, and more.

A scene of deictic interaction

Author  Takayuki Kanda

Video ID : 807

This video illustrates deictic interaction, in which the robot and a user interact using pointing gestures and verbal reference terms. The robot can understand the user's deictic interaction by recognizing both the pointing gesture and the reference term. In addition, a facilitation mechanism (e.g., the robot engages in real-time joint attention) makes the interaction smooth and natural.

Chapter 44 — Networked Robots

Dezhen Song, Ken Goldberg and Nak-Young Chong

As of 2013, almost all robots have access to computer networks that offer extensive computing, memory, and other resources that can dramatically improve performance. The underlying enabling framework is the focus of this chapter: networked robots. Networked robots trace their origin to telerobots, or remotely controlled robots. Telerobots are widely used to explore undersea terrains and outer space, to defuse bombs, and to clean up hazardous waste. Until 1994, telerobots were accessible only to trained and trusted experts through dedicated communication channels. This chapter describes relevant network technology, the history of networked robots as it evolves from teleoperation to cloud robotics, properties of networked robots, how to build a networked robot, and example systems. Later in the chapter, we focus on recent progress in cloud robotics and topics for future research.

A multi-operator, multi-robot teleoperation system

Author  Nak-Young Chong

Video ID : 84

A multi-operator, multi-robot teleoperation system for collaborative maintenance operations (video from Proc. ICRA 2001). Over the past decades, problems and notable results have been reported mainly for single-operator, single-robot (SOSR) teleoperation systems. Recently, the need for cooperation has emerged in applications such as plant maintenance, construction, and surgery, and considerable effort has therefore been directed toward the coordinated control of multi-operator, multi-robot (MOMR) teleoperation. We have developed coordinated control technologies for multi-telerobot cooperation in a common environment, remotely controlled by multiple operators physically distant from each other. To overcome the operators' delayed visual perception arising from network throughput limitations, we have proposed several coordinated control aids at the local operator site. Operators command their masters to make their telerobot cooperate with the counterpart telerobot, using a predictive simulator as well as video image feedback. This video explains the details of the testbed and investigates the use of an online predictive simulator to assist the operator in coping with time delay.

Chapter 46 — Simultaneous Localization and Mapping

Cyrill Stachniss, John J. Leonard and Sebastian Thrun

This chapter provides a comprehensive introduction to the simultaneous localization and mapping problem, better known in its abbreviated form as SLAM. SLAM addresses the main perception problem of a robot navigating an unknown environment. While navigating the environment, the robot seeks to acquire a map thereof, and at the same time it wishes to localize itself using its map. SLAM can be motivated in two different ways: one might be interested in detailed environment models, or one might seek to maintain an accurate sense of a mobile robot’s location. SLAM serves both of these purposes.

We review the three major paradigms from which many published methods for SLAM are derived: (1) the extended Kalman filter (EKF); (2) particle filtering; and (3) graph optimization. We also review recent work in three-dimensional (3-D) SLAM using visual and red-green-blue-depth (RGB-D) sensors, and close with a discussion of open research problems in robotic mapping.
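The graph-optimization paradigm can be illustrated with a deliberately small sketch: a one-dimensional pose graph with two odometry edges and one loop-closure edge, solved by linear least squares. This toy example and its measurements are invented for illustration; real pose-graph SLAM works on 2-D/3-D poses and iterates a nonlinear solver.

```python
import numpy as np

# Each edge (i, j, z) constrains x_j - x_i to equal the measurement z.
edges = [(0, 1, 1.0),   # odometry: moved +1.0 m
         (1, 2, 1.0),   # odometry: moved +1.0 m
         (0, 2, 1.8)]   # loop closure: total displacement observed as 1.8 m

def solve_pose_graph(edges, n_poses):
    """Stack all edge constraints into a linear system J x = z,
    anchor the first pose at the origin with a prior row, and return
    the least-squares pose estimates that best satisfy every edge."""
    J = np.zeros((len(edges) + 1, n_poses))
    z = np.zeros(len(edges) + 1)
    for k, (i, j, meas) in enumerate(edges):
        J[k, i], J[k, j], z[k] = -1.0, 1.0, meas
    J[-1, 0] = 1.0  # prior fixing x_0 = 0
    x, *_ = np.linalg.lstsq(J, z, rcond=None)
    return x
```

Because the odometry (1.0 + 1.0) disagrees with the loop closure (1.8), the solver spreads the residual over all edges instead of trusting any single measurement, which is the essential behavior of graph-based SLAM back-ends.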

Fast iterative alignment of pose graphs

Author  Edwin Olson

Video ID : 444

This video provides an illustration of graph-based SLAM, as described in Sect. 46.3.3, Springer Handbook of Robotics, 2nd edn (2016), using the MIT Killian Court data set. Reference: E. Olson, J. Leonard, S. Teller: Fast iterative alignment of pose graphs with poor initial estimates, Proc. IEEE Int. Conf. Robot. Autom. (ICRA), Orlando (2006), pp. 2262-2269; doi: 10.1109/ROBOT.2006.1642040.