Chapter 61 — Robot Surveillance and Security

Wendell H. Chun and Nikolaos Papanikolopoulos

This chapter introduces the foundation for surveillance and security robots for multiple military and civilian applications. The key environmental domains are mobile robots for ground, aerial, surface water, and underwater applications. Surveillance literally means to watch from above, while surveillance robots are used to monitor the behavior, activities, and other changing information that are gathered for the general purpose of managing, directing, or protecting one’s assets or position. In a practical sense, the term surveillance is taken to mean the act of observation from a distance, and security robots are commonly used to protect and safeguard a location, valuable assets, or personnel against danger, damage, loss, and crime. Surveillance is a proactive operation, while security is a defensive operation. The construction of each type of robot is similar in nature, with a mobility component, sensor payload, communication system, and an operator control station.

After introducing the major robot components, this chapter focuses on the various applications. More specifically, Sect. 61.3 discusses the enabling technologies of mobile robot navigation, the various payload sensors used for surveillance or security applications, target detection and tracking algorithms, and the operator’s robot control console for the human–machine interface (HMI). Section 61.4 presents selected research activities relevant to surveillance and security, including automatic data processing of the payload sensors, automatic monitoring of human activities, facial recognition, and collaborative automatic target recognition (ATR). Finally, Sect. 61.5 discusses future directions in robot surveillance and security, offers some conclusions, and is followed by references.

Tracking people for security

Author  Nikos Papanikolopoulos

Video ID : 683

Tracking of people in crowded scenes is challenging because people occlude each other as they walk around. The latest revision of the University of Minnesota's person tracker uses adaptive appearance models that explicitly account for the probability that a person may be partially occluded. All potentially occluding targets are tracked jointly, and the most likely visibility order is estimated (so we know the probability that person A is occluding person B). Target-size adaptation is performed using calibration information about the camera, and target positions are reported in real-world coordinates.
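
To make the use of camera calibration concrete, the following minimal Python sketch maps a tracked person's image foot point onto ground-plane world coordinates with a camera-to-ground homography, and orders targets front-to-back as a crude stand-in for visibility ordering. It only illustrates these two ideas and is not the Minnesota tracker itself; the homography H, the camera position, and the detections are made-up values.

import numpy as np

def image_to_world(foot_px, H):
    """Map an image foot point (u, v) to ground-plane world coordinates (x, y)."""
    p = H @ np.array([foot_px[0], foot_px[1], 1.0])   # homogeneous projection
    return p[:2] / p[2]                               # normalize by the scale term

def visibility_order(world_positions, camera_xy):
    """Order targets front-to-back by ground-plane distance from the camera;
    nearer targets are the candidates for occluding those behind them."""
    d = [np.linalg.norm(np.asarray(p) - camera_xy) for p in world_positions]
    return np.argsort(d)

# hypothetical calibration homography and camera ground position
H = np.array([[0.010, 0.000, -3.2],
              [0.000, 0.012, -4.1],
              [0.000, 0.0005, 1.0]])
camera_xy = np.array([0.0, 0.0])

feet = [(640.0, 700.0), (320.0, 520.0)]               # detected foot points (pixels)
world = [image_to_world(f, H) for f in feet]
print(world, visibility_order(world, camera_xy))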

Indoor, urban aerial vehicle navigation

Author  Jonathan How

Video ID : 703

The MIT indoor multi-vehicle testbed is specially designed to study long-duration missions in a controlled, urban environment. This testbed is being used to implement and analyze the performance of techniques for embedding the fleet and vehicle health state into the mission and UAV planning. More than four air vehicles can be flown in a typical-sized room, and no more than one operator is needed to set up the platform for flight testing, at any time of day and for any length of time. At the heart of the testbed is a global metrology system that yields very accurate, high-bandwidth position and attitude data for all vehicles in the entire room.

Chapter 7 — Motion Planning

Lydia E. Kavraki and Steven M. LaValle

This chapter first provides a formulation of the geometric path planning problem in Sect. 7.2 and then introduces sampling-based planning in Sect. 7.3. Sampling-based planners are general techniques applicable to a wide set of problems and have been successful in dealing with hard planning instances. For specific, often simpler, planning instances, alternative approaches exist and are presented in Sect. 7.4. These approaches provide theoretical guarantees and, for simple planning instances, they outperform sampling-based planners. Section 7.5 considers problems that involve differential constraints, while Sect. 7.6 overviews several other extensions of the basic problem formulation and proposed solutions. Finally, Sect. 7.8 addresses some important and more advanced topics related to motion planning.
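
Because the abstract centers on sampling-based planning, here is a minimal, self-contained sketch of one such planner, a rapidly exploring random tree (RRT), for a 2-D point robot among circular obstacles. The world bounds, step size, goal bias, and obstacle model are illustrative choices, not algorithms or parameters taken from the chapter.

import random, math

def collision_free(p, q, obstacles, steps=20):
    """Check the straight segment p->q against circular obstacles (cx, cy, r)."""
    for i in range(steps + 1):
        t = i / steps
        x, y = p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])
        if any((x - cx) ** 2 + (y - cy) ** 2 <= r * r for cx, cy, r in obstacles):
            return False
    return True

def rrt(start, goal, obstacles, bounds=(0.0, 10.0), step=0.5, iters=2000):
    """Basic RRT for a 2-D point robot; returns a list of waypoints or None."""
    nodes, parent = [start], {0: None}
    for _ in range(iters):
        sample = goal if random.random() < 0.05 else (
            random.uniform(*bounds), random.uniform(*bounds))
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        near, d = nodes[i], math.dist(nodes[i], sample)
        if d == 0.0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not collision_free(near, new, obstacles):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < step and collision_free(new, goal, obstacles):
            path, k = [goal], len(nodes) - 1
            while k is not None:                  # walk back to the start node
                path.append(nodes[k])
                k = parent[k]
            return list(reversed(path))
    return None                                   # no path found within the budget

print(rrt((1.0, 1.0), (9.0, 9.0), obstacles=[(5.0, 5.0, 1.5)]))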

Simulation of a large crowd

Author  Dinesh Manocha

Video ID : 21

Motion-planning methods can be used to simulate a large crowd, which is a system with a very large number of degrees of freedom. This video illustrates an approach that uses an optimization method to compute a biomechanically energy-efficient, collision-free trajectory for each agent. Many emergent phenomena, such as lane formation, arise.
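
As a rough illustration of the idea of energy-efficient, collision-free local planning, the Python sketch below samples candidate velocities for one agent and keeps the collision-free candidate with the lowest walking-energy proxy per meter of progress. The energy constants, sampling grid, and constant-velocity collision check are assumptions for illustration only and are not the optimization method shown in the video.

import numpy as np

def choose_velocity(pos, goal, neighbors, v_pref=1.4, radius=0.3,
                    horizon=2.0, e_s=2.23, e_w=1.26):
    """Pick, among sampled candidate velocities, the collision-free one that
    minimizes a simple walking-energy proxy (e_s + e_w*speed^2) per meter of
    progress toward the goal."""
    to_goal = goal - pos
    d = to_goal / (np.linalg.norm(to_goal) + 1e-9)       # unit direction to goal
    best, best_cost = np.zeros(2), np.inf
    for speed in np.linspace(0.2, v_pref, 7):
        for ang in np.linspace(-np.pi / 3, np.pi / 3, 13):
            c, s = np.cos(ang), np.sin(ang)
            v = speed * np.array([c * d[0] - s * d[1], s * d[0] + c * d[1]])
            # forecast collisions against neighbors moving at constant velocity
            ok = all(np.linalg.norm((pos + t * v) - (q + t * u)) > 2 * radius
                     for q, u in neighbors
                     for t in np.linspace(0.0, horizon, 10))
            if not ok:
                continue
            progress = max(np.dot(v, d), 1e-3)            # m/s toward the goal
            cost = (e_s + e_w * speed ** 2) / progress    # energy per meter gained
            if cost < best_cost:
                best, best_cost = v, cost
    return best

# two agents walking toward each other; positions in meters, velocities in m/s
v = choose_velocity(np.array([0.0, 0.0]), np.array([10.0, 0.0]),
                    neighbors=[(np.array([5.0, 0.0]), np.array([-1.3, 0.0]))])
print(v)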

Chapter 40 — Mobility and Manipulation

Oliver Brock, Jaeheung Park and Marc Toussaint

Mobile manipulation requires the integration of methodologies from all aspects of robotics. Instead of tackling each aspect in isolation, mobile manipulation research exploits their interdependence to solve challenging problems. As a result, novel views of long-standing problems emerge. In this chapter, we present these emerging views in the areas of grasping, control, motion generation, learning, and perception. All of these areas must address the shared challenges of high dimensionality, uncertainty, and task variability. The section on grasping and manipulation describes a trend towards actively leveraging contact and physical and dynamic interactions between hand, object, and environment. Research in control addresses the challenges of appropriately coupling mobility and manipulation. The field of motion generation increasingly blurs the boundaries between control and planning, leading to task-consistent motion in high-dimensional configuration spaces, even in dynamic and partially unknown environments. A key challenge of learning for mobile manipulation consists of identifying the appropriate priors, and we survey recent learning approaches to perception, grasping, motion, and manipulation. Finally, a discussion of promising methods in perception shows how concepts and methods from navigation and active perception are applied.

Adaptive synergies for a humanoid robot hand

Author  Centro di Ricerca Enrico Piaggio

Video ID : 658

We present the first implementation of the UNIPI-hand, a highly integrated prototype of an anthropomorphic hand that reconciles the idea of adaptive synergies with a human form factor. The video validates the hand's versatility by showing grasp and manipulation actions on a variety of objects.
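
For readers unfamiliar with the synergy idea, the short Python sketch below shows a plain postural-synergy mapping, in which a full set of joint angles is generated from a low-dimensional activation vector. The joint count, synergy matrix, and limits are invented for illustration; the actual hand realizes adaptive synergies mechanically, so its final posture also depends on contact with the grasped object.

import numpy as np

# Illustrative postural-synergy mapping for a simplified 8-joint hand: joint
# angles q are generated from a low-dimensional activation sigma via
# q = q_rest + S @ sigma. The matrix S and the limits below are made up.
n_joints = 8
q_rest = np.zeros(n_joints)                  # open-hand reference posture (rad)
synergy_1 = [0.8, 0.9, 1.0, 1.0, 0.7, 0.9, 0.6, 0.5]    # overall closing
synergy_2 = [0.3, -0.2, 0.1, 0.0, 0.4, -0.3, 0.2, 0.1]  # reshaping/adduction
S = np.array([synergy_1, synergy_2]).T       # shape (n_joints, 2)

def hand_posture(sigma, q_min=0.0, q_max=1.6):
    """Return clipped joint angles (rad) for a synergy activation vector sigma."""
    q = q_rest + S @ np.asarray(sigma)
    return np.clip(q, q_min, q_max)

print(hand_posture([0.5, 0.1]))   # partially closed, slightly shaped grasp
print(hand_posture([1.2, 0.0]))   # power-grasp-like closure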

Chapter 72 — Social Robotics

Cynthia Breazeal, Kerstin Dautenhahn and Takayuki Kanda

This chapter surveys some of the principal research trends in Social Robotics and its application to human–robot interaction (HRI). Social (or Sociable) robots are designed to interact with people in a natural, interpersonal manner – often to achieve positive outcomes in diverse applications such as education, health, quality of life, entertainment, communication, and tasks requiring collaborative teamwork. The long-term goal of creating social robots that are competent and capable partners for people is quite a challenging task. They will need to be able to communicate naturally with people using both verbal and nonverbal signals. They will need to engage us not only on a cognitive level, but on an emotional level as well in order to provide effective social and task-related support to people. They will need a wide range of social-cognitive skills and a theory of other minds to understand human behavior, and to be intuitively understood by people. A deep understanding of human intelligence and behavior across multiple dimensions (i.e., cognitive, affective, physical, social, etc.) is necessary in order to design robots that can successfully play a beneficial role in the daily lives of people. This requires a multidisciplinary approach where the design of social robot technologies and methodologies is informed by robotics, artificial intelligence, psychology, neuroscience, human factors, design, anthropology, and more.

A robot that approaches pedestrians

Author  Takayuki Kanda

Video ID : 258

This video illustrates an example of a study in which the social robot's capability for nonverbal interaction was developed. In the study, an anticipation technique was developed in which the robot observes pedestrians' motions and anticipates each pedestrian's future motion using a large accumulated dataset of pedestrian trajectories. The robot then plans its motion to approach a pedestrian from a frontal direction and initiates a conversation with the pedestrian.
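
To give a flavor of the planning step, here is a toy Python sketch that forecasts a pedestrian's position and places the robot's approach goal a short distance in front of that forecast. The constant-velocity prediction and the standoff distance are simplifying assumptions; the study's system instead anticipates motion from a large body of recorded trajectories.

import numpy as np

def predict_position(p, v, t):
    """Constant-velocity forecast of a pedestrian at time t (a stand-in for
    the data-driven anticipation used in the study)."""
    return p + v * t

def frontal_approach_goal(p, v, t, standoff=1.2):
    """Place the robot's goal 'standoff' meters ahead of the pedestrian's
    predicted position, along the walking direction, so the encounter is frontal."""
    future = predict_position(p, v, t)
    heading = v / (np.linalg.norm(v) + 1e-9)
    return future + standoff * heading        # a point the pedestrian walks toward

# pedestrian at (0, 0) walking at 1.2 m/s along +x; plan an encounter 3 s ahead
goal = frontal_approach_goal(np.array([0.0, 0.0]), np.array([1.2, 0.0]), t=3.0)
print(goal)   # -> roughly [4.8, 0.0]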

Chapter 53 — Multiple Mobile Robot Systems

Lynne E. Parker, Daniela Rus and Gaurav S. Sukhatme

Within the context of multiple mobile and networked robot systems, this chapter explores the current state of the art. After a brief introduction, we first examine architectures for multirobot cooperation, exploring the alternative approaches that have been developed. Next, we explore communications issues and their impact on multirobot teams in Sect. 53.3, followed by a discussion of networked mobile robots in Sect. 53.4. Following this, we discuss swarm robot systems in Sect. 53.5 and modular robot systems in Sect. 53.6. While swarm and modular systems typically assume large numbers of homogeneous robots, other types of multirobot systems include heterogeneous robots. We therefore next discuss heterogeneity in cooperative robot teams in Sect. 53.7. Once robot teams allow for individual heterogeneity, issues of task allocation become important; Sect. 53.8 therefore discusses common approaches to task allocation. Section 53.9 discusses the challenges of multirobot learning and some representative approaches. We outline some of the typical application domains which serve as test beds for multirobot systems research in Sect. 53.10. Finally, we conclude in Sect. 53.11 with some summary remarks and suggestions for further reading.

Transport of a child by swarm-bots

Author  Ivan Aloisio, Michael Bonani, Francesco Mondada, Andre Guignard, Roderich Gross, Dario Floreano

Video ID : 212

This video shows a swarm of s-bots (miniature mobile robots) in swarm-bot formation pulling a child across the floor.

Chapter 40 — Mobility and Manipulation

Oliver Brock, Jaeheung Park and Marc Toussaint


Mobile robot helper

Author  Kazuhiro Kosuge, Manabu Sato, Norihide Kazamura

Video ID : 788

The mobile robot helper, named Mr. Helper, has two 7-DOF arms equipped with force/torque sensors. It helps a person move objects, using the force/torque (FT) sensors and an impedance control system.
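
As a sketch of the control idea, the Python snippet below integrates a simple Cartesian impedance/admittance model in which the force measured at the wrist drives the commanded end-effector motion, so a human partner pushing on the shared object leads the robot along. The virtual mass, damping, stiffness, and time step are assumed values for illustration, not the gains of Mr. Helper's actual controller.

import numpy as np

# Minimal Cartesian impedance sketch: the commanded end-effector motion obeys
# M*x_dd + D*x_d + K*(x - x_ref) = f_ext, so a human pushing on the carried
# object (measured by the wrist force/torque sensor) drives the robot along.
M = np.diag([8.0, 8.0, 8.0])       # virtual mass [kg]       (assumed gains)
D = np.diag([40.0, 40.0, 40.0])    # virtual damping [N s/m]
K = np.diag([0.0, 0.0, 0.0])       # zero stiffness -> free "admittance" motion

def impedance_step(x, x_d, x_ref, f_ext, dt=0.01):
    """One Euler integration step of the impedance model; returns new pos/vel."""
    x_dd = np.linalg.solve(M, f_ext - D @ x_d - K @ (x - x_ref))
    x_d_new = x_d + x_dd * dt
    x_new = x + x_d_new * dt
    return x_new, x_d_new

# a constant 10 N push along +x slowly accelerates the virtual object
x, x_d, x_ref = np.zeros(3), np.zeros(3), np.zeros(3)
for _ in range(200):                          # 2 seconds of simulated interaction
    x, x_d = impedance_step(x, x_d, x_ref, np.array([10.0, 0.0, 0.0]))
print(x, x_d)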

Chapter 55 — Space Robotics

Kazuya Yoshida, Brian Wilcox, Gerd Hirzinger and Roberto Lampariello

In the space community, any unmanned spacecraft can be called a robotic spacecraft. However, Space Robots are considered to be more capable devices that can facilitate manipulation, assembly, or servicing functions in orbit as assistants to astronauts, or extend the areas and abilities of exploration on remote planets as surrogates for human explorers.

In this chapter, a concise digest of the historical overview and technical advances of two distinct types of space robotic systems, orbital robots and surface robots, is provided. In particular, Sect. 55.1 describes orbital robots, and Sect. 55.2 describes surface robots. In Sect. 55.3, the mathematical modeling of the dynamics and control using reference equations is discussed. Finally, advanced topics for future space exploration missions are addressed in Sect. 55.4.

DLR telepresence demo of removal of a cover

Author  Jordi Artigas, Gerd Hirzinger

Video ID : 337

Telepresence with force reflection using DLR’s light-weight robots as teleoperator-input devices.

Chapter 76 — Evolutionary Robotics

Stefano Nolfi, Josh Bongard, Phil Husbands and Dario Floreano

Evolutionary Robotics is a method for automatically generating artificial brains and morphologies of autonomous robots. This approach is useful both for investigating the design space of robotic applications and for testing scientific hypotheses of biological mechanisms and processes. In this chapter we provide an overview of methods and results of Evolutionary Robotics with robots of different shapes, dimensions, and operation features. We consider both simulated and physical robots with special consideration to the transfer between the two worlds.

Evolution of cooperative and communicative behaviors

Author  Stefano Nolfi, Joachim De Greeff

Video ID : 117

A group of two e-puck robots is evolved for the capacity to reach, and to move back and forth between, two circular areas. The robots are provided with infrared sensors, a camera with which they can perceive the relative position of the other robot, a microphone with which they can sense the sound signal produced by the other robot, two motors which set the desired speed of the two wheels, and a speaker to emit sound signals. The evolved robots coordinate and cooperate on the basis of an evolved communication system that includes several implicit and explicit signals, constituted, respectively, by the relative positions that the robots assume in the environment (perceived through the robots' cameras) and by the sounds of varying frequency emitted through the robots' speakers and perceived through their microphones.
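
To make the setup more concrete, the Python sketch below shows the general shape of an evolved controller: a small feedforward neural network whose weights come from a genome and which maps the sensor vector to wheel-speed and speaker commands. The layer sizes, the tanh activation, and the placeholder fitness function are illustrative assumptions, not the network or fitness actually used in this experiment.

import numpy as np

N_SENSORS = 8 + 3 + 1    # e.g., 8 infrared readings, 3 camera features, 1 microphone
N_MOTORS = 2 + 1         # left/right wheel speeds and a speaker command

def controller(sensors, genome):
    """Map the sensor vector to motor commands using weights from the genome."""
    W = genome.reshape(N_MOTORS, N_SENSORS + 1)   # one weight row + bias per output
    x = np.append(sensors, 1.0)                   # append the bias input
    return np.tanh(W @ x)                         # commands scaled to [-1, 1]

def fitness(genome, episodes=5):
    """Placeholder fitness: the real experiment runs the two robots and rewards
    them for moving back and forth between the two circular areas."""
    rng = np.random.default_rng(0)
    return float(np.mean([controller(rng.uniform(0.0, 1.0, N_SENSORS), genome).sum()
                          for _ in range(episodes)]))   # stand-in value only

genome_length = N_MOTORS * (N_SENSORS + 1)
population = [np.random.randn(genome_length) * 0.5 for _ in range(20)]
best = max(population, key=fitness)    # one selection step of an evolutionary loop
print(fitness(best))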

Evolution of visually-guided behaviour on Sussex gantry robot

Author  Phil Husbands

Video ID : 371

Behaviour evolved in the real world on the Sussex gantry robot in 1994. Controllers (evolved neural networks plus visual sampling morphology) are automatically evaluated on the actual robot. The required behaviour is a shape discrimination task: to move to the triangle, while ignoring the rectangle, under very noisy lighting conditions.