Chapter 41 — Active Manipulation for Perception

Anna Petrovskaya and Kaijen Hsiao

This chapter covers perceptual methods in which manipulation is an integral part of perception. These methods face special challenges due to data sparsity and high costs of sensing actions. However, they can also succeed where other perceptual methods fail, for example, in poor-visibility conditions or for learning the physical properties of a scene.

The chapter focuses on specialized methods that have been developed for object localization, inference, planning, recognition, and modeling in active manipulation approaches. We conclude with a discussion of real-life applications and directions for future research.

Tactile exploration and modeling using shape primitives

Author: Francesco Mazzini

Video ID: 76

This video shows a robot performing tactile exploration and modeling of a lab-constructed scene designed to resemble those encountered in interventions for underwater oil spills (a leaking pipe). Representing the scene with geometric primitives enables the surface to be described using only sparse tactile data from joint encoders. The robot's movements are chosen to maximize the expected increase in knowledge about the scene.
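
To make the exploration strategy concrete, below is a minimal sketch (not the author's implementation) of information-driven probe selection: each primitive's parameters are tracked as a Gaussian belief, and the next touch is the one whose linearized measurement is expected to shrink the belief entropy the most. The function names and the Gaussian/linear-measurement assumptions are illustrative.

import numpy as np

def gaussian_entropy(cov):
    """Differential entropy of a Gaussian belief with covariance `cov`."""
    k = cov.shape[0]
    return 0.5 * np.log((2 * np.pi * np.e) ** k * np.linalg.det(cov))

def expected_entropy_after(cov, H, R):
    """Posterior entropy for a linearized touch measurement.
    H: measurement Jacobian, R: sensor noise covariance (assumed known)."""
    S = H @ cov @ H.T + R                      # innovation covariance
    K = cov @ H.T @ np.linalg.inv(S)           # Kalman gain
    cov_post = (np.eye(cov.shape[0]) - K @ H) @ cov
    return gaussian_entropy(cov_post)

def choose_next_probe(cov, candidate_jacobians, R):
    """Pick the candidate probe whose measurement is expected to
    reduce the belief entropy the most."""
    gains = [gaussian_entropy(cov) - expected_entropy_after(cov, H, R)
             for H in candidate_jacobians]
    return int(np.argmax(gains))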

Tactile localization of a power drill

Author: Kaijen Hsiao

Video ID: 77

This video shows a Barrett WAM arm tactilely localizing and reorienting a power drill under high positional uncertainty. The goal is for the robot to robustly grasp the power drill such that the trigger can be activated. The robot tracks the distribution of possible object poses on the table over a 3-D grid (the belief space). It then selects between information-gathering, reorienting, and goal-seeking actions by modeling the problem as a POMDP (partially observable Markov decision process) and using receding-horizon forward search through the belief space.

In the video, the inset window with the simulated robot is a visualization of the current belief state. The red spheres sit at the vertices of the object mesh placed at the most likely state, and the dark-blue box also shows the location of the most likely state. The purple box shows the location of the mean of the belief state, and the light-blue boxes show the variance of the belief state in the form of the locations of various states that are one standard deviation away from the mean in each of the three dimensions of uncertainty (x, y, and theta). The magenta spheres and arrows that appear when the robot touches the object show the contact locations and normals as reported by the sensors, and the cyan spheres that largely overlap the hand show where the robot controllers are trying to move the hand.
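
The belief tracking described above can be sketched as a Bayes filter over a discretized pose grid. The sketch below is illustrative only: `likelihood_fn` stands in for a hypothetical contact measurement model p(z | pose), and the belief statistics correspond to the mean (purple) and one-standard-deviation (light-blue) boxes shown in the visualization.

import numpy as np

def update_belief(belief, likelihood_fn, observation, grid_poses):
    """Bayes update: posterior proportional to prior * p(z | pose).
    `grid_poses` lists (x, y, theta) in the same order as belief.ravel()."""
    weights = np.array([likelihood_fn(observation, pose) for pose in grid_poses])
    posterior = belief.ravel() * weights
    posterior /= posterior.sum()
    return posterior.reshape(belief.shape)

def belief_stats(belief, grid_poses):
    """Mean and per-dimension standard deviation of the pose belief.
    (Theta is averaged naively here; a circular mean would be more correct.)"""
    p = belief.ravel()
    poses = np.asarray(grid_poses)               # shape (N, 3): x, y, theta
    mean = (p[:, None] * poses).sum(axis=0)
    var = (p[:, None] * (poses - mean) ** 2).sum(axis=0)
    return mean, np.sqrt(var)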

Modeling articulated objects using active manipulation

Author: Juergen Sturm

Video ID: 78

The video illustrates a mobile manipulation robot that interacts with various articulated objects, such as a fridge and a dishwasher, in a kitchen environment. During interaction, the robot learns their kinematic properties, such as the rotation axis and the configuration space. Knowing the kinematic models of these objects improves the robot's performance and enables motion planning.

Service robots operating in domestic environments typically face a variety of objects they must deal with to fulfill their tasks. Some of these objects are articulated, such as cabinet doors, drawers, and room or garage doors. The ability to deal with such articulated objects is relevant for service robots: for example, they need to open doors when navigating between rooms and to open cabinets to pick up objects in fetch-and-carry applications. We developed a complete probabilistic framework that enables robots to learn the kinematic models of articulated objects from observations of their motion. The framework combines parametric and nonparametric models in a consistent manner, exploiting the advantages of both. As a result, a robot can robustly operate articulated objects in unstructured environments. All software is available open source (including documentation and tutorials) at http://www.ros.org/wiki/articulation.
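
As a concrete illustration, the sketch below fits one parametric articulation model, a rotational joint, to observed 2-D handle positions using an algebraic least-squares circle fit. This is a simplified stand-in for the framework above, which supports several model classes and selects among them probabilistically; the residual function hints at how competing models could be compared.

import numpy as np

def fit_rotational_model(points):
    """Least-squares circle fit to 2-D handle positions (Kasa fit).
    Returns (center, radius); the center estimates the rotation axis."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(pts))])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    radius = np.sqrt(c + cx ** 2 + cy ** 2)
    return np.array([cx, cy]), radius

def model_residual(points, center, radius):
    """Mean absolute deviation from the fitted circle; comparing residuals
    across model classes (rigid, prismatic, rotational) drives selection."""
    d = np.linalg.norm(np.asarray(points) - center, axis=1)
    return np.mean(np.abs(d - radius))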

6-DOF object localization via touch

Author: Anna Petrovskaya

Video ID: 721

The PUMA robot arm performs 6-DOF localization of an object (here, a cash register) via touch, starting from global uncertainty. After each contact, the robot analyzes the resulting belief about the object pose. If the uncertainty of the belief is too large, the robot continues to probe the object. Once the uncertainty is small enough, the robot is able to push buttons and manipulate the drawer based on its knowledge of the object pose and prior knowledge of the object model. A prior 3-D mesh model of the object was constructed by touching the object with the robot's end-effector.
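
A minimal sketch of the kind of contact measurement model used in such touch-based localization (an assumption, not the chapter's exact formulation): under a hypothesized pose, each measured contact point should lie on the object surface, so its distance to the mesh is penalized by a Gaussian noise model. `distance_to_mesh` is a hypothetical helper returning point-to-mesh distance in the object's canonical frame.

import numpy as np

def pose_likelihood(contacts, pose, distance_to_mesh, sigma=0.005):
    """Unnormalized likelihood of a hypothesized pose given contact points.
    `pose` = (R, t) maps object frame to world: p_world = R @ p_obj + t."""
    R, t = pose
    loglik = 0.0
    for c in contacts:
        p_model = R.T @ (np.asarray(c) - t)   # contact point in object frame
        d = distance_to_mesh(p_model)          # hypothetical mesh-distance query
        loglik += -0.5 * (d / sigma) ** 2      # Gaussian contact-noise model
    return np.exp(loglik)

After each touch, pose hypotheses are reweighted with such a likelihood, and probing continues until the spread of the belief falls below a threshold, as described above.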

Touch-based door-handle localization and manipulation

Author: Anna Petrovskaya

Video ID: 723

The harmonic arm robot performs 3-DOF localization of the door handle by touching it. Once localization is complete, the robot is able to grasp and manipulate the handle. The mobile platform is teleoperated, whereas the arm motions are autonomous. A 2-D model of the door and handle was constructed from hand measurements for this experiment.
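
For the 3-DOF case, the measurement geometry reduces to 2-D. The sketch below (an illustrative assumption, not the experiment's code) computes a contact point's distance to a segment-based 2-D door/handle model, which can play the role of the mesh-distance query in the likelihood sketched for the 6-DOF video.

import numpy as np

def point_segment_distance(p, a, b):
    """Distance from point p to the segment from a to b in 2-D."""
    p, a, b = map(np.asarray, (p, a, b))
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def distance_to_model(p, segments):
    """Distance from a contact point to a 2-D model given as (a, b) segments."""
    return min(point_segment_distance(p, a, b) for a, b in segments)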