
Chapter 23 — Biomimetic Robots

Kyu-Jin Cho and Robert Wood

Biomimetic robot designs attempt to translate biological principles into engineered systems, replacing more classical engineering solutions in order to achieve a function observed in the natural system. This chapter will focus on mechanism design for bio-inspired robots that replicate key principles from nature with novel engineering solutions. The challenges of biomimetic design include developing a deep understanding of the relevant natural system and translating this understanding into engineering design rules. This often entails the development of novel fabrication and actuation methods to realize the biomimetic design.

This chapter consists of four sections. In Sect. 23.1, we will define what biomimetic design entails, and contrast biomimetic robots with bio-inspired robots. In Sect. 23.2, we will discuss the fundamental components for developing a biomimetic robot. In Sect. 23.3, we will review detailed biomimetic designs that have been developed for canonical robot locomotion behaviors including flapping-wing flight, jumping, crawling, wall climbing, and swimming. In Sect. 23.4, we will discuss the enabling technologies for these biomimetic designs including material and fabrication.

A single-motor-actuated, miniature, steerable jumping robot

Author  Jianguo Zhao, Jing Xu, Bingtuan Gao, Ning Xi, Fernando J. Cintron, Matt W. Mutka, Li Xiao

Video ID : 280

The contents of the video are divided into three parts. The first part illustrates the individual functions of the robot, such as jumping, self-righting, and steering. The second part demonstrates the robot's locomotion capability in indoor environments; scenarios such as jumping from the floor, jumping in an office, and jumping over stairs are included. The third part shows the robot's locomotion capability in outdoor environments; experiments on uneven ground, ground covered with small gravel, and grassy ground are included.

Chapter 17 — Limbed Systems

Shuuji Kajita and Christian Ott

A limbed system is a mobile robot with a body, legs and arms. First, its general design process is discussed in Sect. 17.1. Then we consider issues of conceptual design and observe designs of various existing robots in Sect. 17.2. As a detailed example, the design of the humanoid robot HRP-4C is shown in Sect. 17.3. To design a limbed system with good performance, it is important to take into account actuation and control, including gravity compensation, limit cycle dynamics, template models, and backdrivable actuation. These are discussed in Sect. 17.4.

In Sect. 17.5, we survey the diversity of limbed systems, covering odd-legged walkers, leg–wheel hybrid robots, leg–arm hybrid robots, tethered walking robots, and wall-climbing robots. To compare limbed systems of different configurations, we can use performance indices such as the gait sensitivity norm, the Froude number, and the specific resistance, which are introduced in Sect. 17.6.
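Two of the performance indices mentioned above have simple closed forms: the Froude number Fr = v²/(gl) compares gaits across scales, and the specific resistance ε = P/(mgv) measures energy spent per unit weight per unit distance. A minimal sketch of both (the example numbers are hypothetical, not from the chapter):

```python
def froude_number(speed, leg_length, g=9.81):
    """Dimensionless Froude number Fr = v^2 / (g * l),
    used to compare gaits of legged systems of different sizes."""
    return speed ** 2 / (g * leg_length)

def specific_resistance(power, mass, speed, g=9.81):
    """Specific resistance (cost of transport) eps = P / (m * g * v):
    energy spent per unit weight per unit distance travelled."""
    return power / (mass * g * speed)

# Hypothetical human-sized walker: 1.4 m/s, 0.9 m legs, 70 kg, 300 W
fr = froude_number(speed=1.4, leg_length=0.9)           # ~0.22, a walking gait
eps = specific_resistance(power=300.0, mass=70.0, speed=1.4)
```

Walking gaits typically correspond to Fr below about 0.5; above roughly 1.0, static walking becomes impossible and animals switch to running.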

3-D passive dynamic walking robot

Author  Steven Collins

Video ID : 532

A passive dynamic walking robot in 3-D developed by Dr. Collins.

Chapter 20 — Snake-Like and Continuum Robots

Ian D. Walker, Howie Choset and Gregory S. Chirikjian

This chapter provides an overview of the state of the art in snake-like robots (backbones composed of many small links) and continuum robots (continuous backbones). The history of each of these classes of robot is reviewed, focusing on key hardware developments. A review of the existing theory and algorithms for the kinematics of both types of robot is presented, followed by a summary of the modeling of locomotion for snake-like and continuum mechanisms.
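A classic building block for snake-robot locomotion modeling is the serpenoid (lateral undulation) gait, in which each joint follows a sinusoid with a fixed phase lag relative to its neighbor. A minimal sketch of such a gait generator (parameter names and values are illustrative, not from the chapter):

```python
import math

def serpenoid_joint_angles(t, n_joints, amplitude, omega, phase_lag, offset=0.0):
    """Reference joint angles for a lateral-undulation (serpenoid) gait.

    amplitude controls the height of the body wave, omega its temporal
    frequency, phase_lag the spatial phase shift between consecutive
    joints, and offset biases all joints to produce turning motion.
    """
    return [amplitude * math.sin(omega * t + i * phase_lag) + offset
            for i in range(n_joints)]

# Example: 10 joints, straight-line undulation sampled at t = 0.5 s
angles = serpenoid_joint_angles(t=0.5, n_joints=10,
                                amplitude=math.radians(30),
                                omega=2 * math.pi * 0.5,
                                phase_lag=math.radians(40))
```

Stepping `t` forward and sending the resulting angles to the joint servos propagates a traveling wave down the body, which is what produces forward propulsion against the ground or against obstacles.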

Aiko obstacle-aided locomotion

Author  Pål Liljebäck

Video ID : 253

Video of Aiko snake robot developed at the Norwegian University of Science and Technology (NTNU)/SINTEF Advanced Robotics Laboratory. In this video the robot uses obstacles to propel itself.

Chapter 12 — Robotic Systems Architectures and Programming

David Kortenkamp, Reid Simmons and Davide Brugali

Robot software systems tend to be complex. This complexity is due, in large part, to the need to control diverse sensors and actuators in real time, in the face of significant uncertainty and noise. Robot systems must work to achieve tasks while monitoring for, and reacting to, unexpected situations. Doing all this concurrently and asynchronously adds immensely to system complexity.

The use of a well-conceived architecture, together with programming tools that support the architecture, can often help to manage that complexity. Currently, there is no single architecture that is best for all applications – different architectures have different advantages and disadvantages. It is important to understand those strengths and weaknesses when choosing an architectural approach for a given application.

This chapter presents various approaches to architecting robotic systems. It starts by defining terms and setting the context, including a recounting of the historical developments in the area of robot architectures. The chapter then discusses in more depth the major types of architectural components in use today – behavioral control (Chap. 13), executives, and task planners (Chap. 14) – along with commonly used techniques for interconnecting those components. Throughout, emphasis will be placed on programming tools and environments that support these architectures. A case study is then presented, followed by a brief discussion of further reading.

Software product line engineering for robotics

Author  Davide Brugali

Video ID : 273

The video illustrates the software product-line approach to the development of robot software control systems and the open source HyperFlex toolchain that supports it.

Chapter 74 — Learning from Humans

Aude G. Billard, Sylvain Calinon and Rüdiger Dillmann

This chapter surveys the main approaches developed to date to endow robots with the ability to learn from human guidance. The field is best known as robot programming by demonstration, robot learning from/by demonstration, apprenticeship learning and imitation learning. We start with a brief historical overview of the field. We then summarize the various approaches taken to solve four main questions: what, how, when, and whom to imitate. We emphasize the importance of carefully choosing the interface and the channels used to convey the demonstrations, with an eye on interfaces providing force control and force feedback. We then review algorithmic approaches to model skills individually and as a compound, and algorithms that combine learning from human guidance with reinforcement learning. We close with a look at the use of language to guide teaching and a list of open issues.

Demonstrations and reproduction of moving a chessman

Author  Sylvain Calinon, Florent Guenter, Aude Billard

Video ID : 97

A robot learns how to make a chess move from multiple demonstrations and to reproduce the skill in a new situation (a different position of the chessman) by finding a controller which satisfies both the task constraints (what-to-imitate) and constraints arising from its body limitations (how-to-imitate). Reference: S. Calinon, F. Guenter, A. Billard: On learning, representing and generalizing a task in a humanoid robot, IEEE Trans. Syst. Man Cybernet. B 37(2), 286-298 (2007); URL: http://lasa.epfl.ch/videos/control.php.

Chapter 72 — Social Robotics

Cynthia Breazeal, Kerstin Dautenhahn and Takayuki Kanda

This chapter surveys some of the principal research trends in Social Robotics and its application to human–robot interaction (HRI). Social (or Sociable) robots are designed to interact with people in a natural, interpersonal manner – often to achieve positive outcomes in diverse applications such as education, health, quality of life, entertainment, communication, and tasks requiring collaborative teamwork. The long-term goal of creating social robots that are competent and capable partners for people is quite a challenging task. They will need to be able to communicate naturally with people using both verbal and nonverbal signals. They will need to engage us not only on a cognitive level, but on an emotional level as well in order to provide effective social and task-related support to people. They will need a wide range of social-cognitive skills and a theory of other minds to understand human behavior, and to be intuitively understood by people. A deep understanding of human intelligence and behavior across multiple dimensions (i.e., cognitive, affective, physical, social, etc.) is necessary in order to design robots that can successfully play a beneficial role in the daily lives of people. This requires a multidisciplinary approach where the design of social robot technologies and methodologies is informed by robotics, artificial intelligence, psychology, neuroscience, human factors, design, anthropology, and more.

A robot that provides a direction based on the model of the environment

Author  Takayuki Kanda

Video ID : 259

The video shows a scene of direction-giving interaction. The robot communicates the way to reach the destination by pointing in the direction to go. This interaction is supported by the robot's capability to understand the environment: the robot possesses a model of the environment, including a geographical map, topology, and landmarks from a first-person perspective, the so-called route-perspective model.

Overview of Kismet's expressive behavior

Author  Cynthia Breazeal

Video ID : 557

This video presents an overview of Kismet's expressive behavior and rationale. The video presents how Kismet can express internal emotive/affective states through three modalities: facial expression, vocal affect, and body posture. The video also shows how Kismet can recognize aspects of affective intent in human speech (e.g., praising, scolding, soothing, and attentional bids). The video shows how human participants can interact in a natural and intuitive way with the robot, by reading and responding to its emotive and social cues.

Chapter 36 — Motion for Manipulation Tasks

James Kuffner and Jing Xiao

This chapter serves as an introduction to Part D by giving an overview of motion generation and control strategies in the context of robotic manipulation tasks. Automatic control ranging from the abstract, high-level task specification down to fine-grained feedback at the task interface are considered. Some of the important issues include modeling of the interfaces between the robot and the environment at the different time scales of motion and incorporating sensing and feedback. Manipulation planning is introduced as an extension to the basic motion planning problem, which can be modeled as a hybrid system of continuous configuration spaces arising from the act of grasping and moving parts in the environment. The important example of assembly motion is discussed through the analysis of contact states and compliant motion control. Finally, methods aimed at integrating global planning with state feedback control are summarized.

Robotic assembly of emergency-stop buttons

Author  Andreas Stolt et al.

Video ID : 358

The video presents a framework for dual-arm robotic assembly of stop buttons utilizing force/torque sensing under the fixture and force control.

Chapter 14 — AI Reasoning Methods for Robotics

Michael Beetz, Raja Chatila, Joachim Hertzberg and Federico Pecora

Artificial intelligence (AI) reasoning technology involving, e.g., inference, planning, and learning, has a track record with a healthy number of successful applications. So can it be used as a toolbox of methods for autonomous mobile robots? Not necessarily, as reasoning on a mobile robot about its dynamic, partially known environment may differ substantially from that in knowledge-based pure software systems, where most of the named successes have been registered. Moreover, current knowledge about the robot's environment cannot be given a priori, but needs to be updated from sensor data, involving challenging problems of symbol grounding and knowledge base change. This chapter sketches the main robotics-relevant topics of symbol-based AI reasoning. Basic methods of knowledge representation and inference are described in general, covering both logic- and probability-based approaches. The chapter first gives a motivation by example, showing to what extent symbolic reasoning has the potential of helping robots perform in the first place. Then (Sect. 14.2), we sketch the landscape of representation languages available for the endeavor. After that (Sect. 14.3), we present approaches and results for several types of practical, robotics-related reasoning tasks, with an emphasis on temporal and spatial reasoning. Plan-based robot control is described in some more detail in Sect. 14.4. Section 14.5 concludes.

From knowledge grounding to dialogue processing

Author  Séverin Lemaignan, Rachid Alami

Video ID : 705

This 2012 video documents the entire process of perspective-aware knowledge acquisition, knowledge representation and storage, and dialogue understanding. It demonstrates several examples of the natural interaction of a human with a PR2 robot, including speech recognition and action execution.

Chapter 32 — 3-D Vision for Navigation and Grasping

Danica Kragic and Kostas Daniilidis

In this chapter, we describe algorithms for three-dimensional (3-D) vision that help robots accomplish navigation and grasping. To model cameras, we start with the basics of perspective projection and distortion due to lenses. This projection from a 3-D world to a two-dimensional (2-D) image can be inverted only by using information from the world or multiple 2-D views. If we know the 3-D model of an object or the location of 3-D landmarks, we can solve the pose estimation problem from one view. When two views are available, we can compute the 3-D motion and triangulate to reconstruct the world up to a scale factor. When multiple views are given, either as sparse viewpoints or a continuous incoming video, then the robot path can be computed and point tracks can yield a sparse 3-D representation of the world. In order to grasp objects, we can estimate the 3-D pose of the end effector or the 3-D coordinates of the graspable points on the object.
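The projection and triangulation steps described above can be sketched compactly with the standard pinhole model x ~ K(RX + t) and a linear (DLT) triangulation from two views. A minimal round-trip example (the intrinsics, baseline, and point coordinates are made-up illustration values):

```python
import numpy as np

def project(K, R, t, X):
    """Pinhole projection of a 3-D point X into pixel coordinates,
    given intrinsics K and camera pose (R, t): x ~ K (R X + t)."""
    p = K @ (R @ X + t)
    return p[:2] / p[2]

def triangulate(K1, R1, t1, x1, K2, R2, t2, x2):
    """Linear (DLT) triangulation of a 3-D point from two views:
    stack the cross-product constraints and take the SVD null vector."""
    P1 = K1 @ np.hstack([R1, t1.reshape(3, 1)])
    P2 = K2 @ np.hstack([R2, t2.reshape(3, 1)])
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    Xh = Vt[-1]
    return Xh[:3] / Xh[3]

# Round trip: project one 3-D point into two cameras, then recover it.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
R1, t1 = np.eye(3), np.zeros(3)
R2, t2 = np.eye(3), np.array([-0.2, 0.0, 0.0])   # 20 cm stereo baseline
X = np.array([0.3, -0.1, 2.0])
x1, x2 = project(K, R1, t1, X), project(K, R2, t2, X)
X_hat = triangulate(K, R1, t1, x1, K, R2, t2, x2)  # recovers X
```

With noise-free pixel coordinates the DLT recovers the point exactly; with real measurements the smallest-singular-vector solution is a least-squares estimate, and the unknown global scale mentioned above appears when the baseline itself must be estimated rather than known.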

Google's Project Tango

Author  Google, Inc.

Video ID : 120

Google's Project Tango has been collaborating with robotics laboratories from around the world to synthesize the past decade of research in robotics and computer vision into the development of a new class of mobile devices. This video contains one of the first public announcements and presentations of a device that can be used for multiple robot-perception applications described in this chapter.