03-08-2012, 09:50 AM
Challenges for Robot Manipulation in Human Environments
To What End?
Commercially available robotic toys and vacuum cleaners inhabit our living spaces, and robotic
vehicles have raced across the desert. These successes appear to foreshadow an explosion of
robotic applications in our daily lives, but without advances in robot manipulation, many
promising robotic applications will not be possible. Whether in a domestic setting or the workplace,
we would like robots to physically alter the world through contact.
Robots have long been imagined as mechanical workers, helping us in our daily life.
Research on manipulation in human environments may someday lead to robots that work
alongside us, extending the time an elderly person can live at home, providing physical assistance
to a worker on an assembly line, or helping with household chores.
Today’s Robots
To date, robots have been very successful at manipulation in simulation and controlled environments
such as a factory. Outside of controlled environments, robots have only performed
sophisticated manipulation tasks when operated by a human.
Simulation
Within simulation, robots have performed sophisticated
manipulation tasks such as grasping convoluted objects, tying
knots, and carrying objects around complex obstacles. The
control algorithms for these demonstrations often employ
search algorithms to find satisfactory solutions, such as a path
to a goal state, or a set of contact points that maximize a
measure of grasp quality. For example, many virtual robots
use algorithms for motion planning that rapidly search for
paths through a state space that models the kinematics and
dynamics of the world [2]. Most of these simulations ignore
the robot’s sensory systems and assume that the state of the
world is known with certainty. For example, they often
assume that the robot knows the three-dimensional (3-D)
structure of the objects it is manipulating.
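As an illustration of the sampling-based motion planners mentioned above, here is a minimal sketch of a rapidly exploring random tree (RRT) in a 2-D state space. The workspace bounds, step size, and goal-bias probability are illustrative choices, not values from the article; a real planner would search the robot's full configuration space and use a proper collision checker.

```python
import math
import random

def rrt(start, goal, is_free, step=0.5, iters=2000, goal_tol=0.5):
    """Rapidly exploring random tree over a 2-D state space.

    A minimal sketch: grows a tree from `start` by repeatedly sampling
    a state, stepping the nearest tree node toward it, and keeping the
    new node if `is_free` says it is collision-free.
    """
    nodes = [start]
    parent = {start: None}
    for _ in range(iters):
        # Sample a random state, occasionally biased toward the goal.
        sample = goal if random.random() < 0.1 else (
            random.uniform(0, 10), random.uniform(0, 10))
        # Extend the nearest tree node a small step toward the sample.
        near = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d == 0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not is_free(new):
            continue  # discard states in collision
        nodes.append(new)
        parent[new] = near
        if math.dist(new, goal) < goal_tol:
            # Walk back up the tree to recover the path.
            path = [new]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
    return None  # no path found within the iteration budget

# Example: plan around one circular obstacle centered at (5, 5).
free = lambda p: math.dist(p, (5, 5)) > 1.5
path = rrt((1, 1), (9, 9), free)
```

Note how the planner assumes `is_free` can be queried exactly for any state, i.e., that the world's geometry is known with certainty, which is precisely the assumption the article flags as unrealistic outside simulation.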
Controlled Environments
Within controlled environments, the world can be adapted to
match the capabilities of the robot. For example, within a
traditional factory setting engineers can ensure that a robot
knows the relevant state of the world with near certainty. The
robot typically needs to perform a few tasks using a few
known objects, and people are usually banned from the area
while the robot is in motion. Mechanical feeders can enforce
constraints on the pose of the objects to be manipulated. In
the event that a robot needs to sense the world, engineers can
make the environment favorable to sensing by controlling
factors such as the lighting and the placement of objects relative
to a sensor. Moreover, since the objects and tasks are
known in advance, perception can be specialized and
model-based.
Operated by a Human
Outside of controlled settings, robots have only performed
sophisticated manipulation tasks when operated by a human.
Through teleoperation, even highly complex humanoid
robots have performed a variety of challenging everyday
manipulation tasks, such as grasping everyday objects, using a
power drill, throwing away trash, and retrieving a drink from
a refrigerator (Figure 1). Similarly, disabled people have used
wheelchair-mounted robot arms, such as the commercially
available Manus ARM (Figure 2), to perform everyday tasks
that would otherwise be beyond their abilities. Attendees of
the workshop were in agreement that today’s robots can successfully
perform sophisticated manipulation tasks in human
environments when under human control, albeit slowly and
with significant effort on the part of the human operator.
Tactile Sensing
Since robot manipulation fundamentally relies on contact
between the robot and the world, tactile sensing is an especially
appropriate modality that has too often been neglected
in favor of vision-based approaches. As blind people convincingly
demonstrate, tactile sensing alone can support extremely
sophisticated manipulation.
Unfortunately, many traditional tactile sensing technologies,
such as force sensing resistors (FSRs), do not fit the
requirements of robot manipulation in human environments
due to a lack of sensitivity and dynamic range.
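To make the sensitivity and dynamic-range limitation concrete, here is a sketch of reading an FSR through a voltage divider. The wiring, supply voltage, ADC resolution, and the linear conductance-to-force constant are all assumptions for illustration; real FSRs require per-part calibration, drift over time, and saturate at modest forces, which is why they fall short for fine manipulation.

```python
VCC = 5.0         # assumed supply voltage
R_FIXED = 10_000  # assumed fixed divider resistor, ohms
ADC_MAX = 1023    # assumed 10-bit ADC

def fsr_force_estimate(adc_reading, k=1e-6):
    """Rough force estimate from an FSR in a voltage divider.

    Assumes the FSR sits between Vcc and the ADC pin, with R_FIXED
    from the pin to ground, and that conductance grows roughly
    linearly with force (k is a placeholder calibration constant,
    not a datasheet value).
    """
    v = VCC * adc_reading / ADC_MAX
    if v <= 0 or v >= VCC:
        return None  # outside the measurable range
    r_fsr = R_FIXED * (VCC - v) / v   # solve the divider for the FSR
    conductance = 1.0 / r_fsr
    return conductance / k            # newtons, under the linear model
```

The dead zone at the ends of the ADC range and the single crude calibration constant illustrate the problem: light contacts vanish below the noise floor while firm grasps saturate the sensor.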
Learning
Today’s top performing computer vision algorithms for
object detection and recognition rely on machine learning,
so it seems almost inevitable that learning will play an
important role in robot manipulation. Explicit model-based
control is still the dominant approach to manipulation,
and when the world's state is known and consists of
rigid-body motion, it's hard to imagine something better.
Yet robots cannot expect to estimate the state of human
environments in such certain terms, and even motion planners
need to have goal states and measures of success,
which could potentially be learned.
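One way a measure of grasp success could be learned, rather than hand-specified, is from labeled trials. The sketch below fits a logistic-regression model mapping grasp features to a success probability; the features, data, and training scheme are hypothetical illustrations, not anything proposed in the article.

```python
import numpy as np

def train_grasp_classifier(X, y, lr=0.1, epochs=500):
    """Logistic regression from grasp features to success probability.

    X: (n_trials, n_features) array of hypothetical grasp features
       (e.g., contact positions, approach angle); y: 0/1 outcomes.
    A minimal gradient-descent sketch of learning a grasp-quality
    measure from experience.
    """
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted success prob.
        grad_w = X.T @ (p - y) / len(y)         # gradient of the log loss
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict_success(w, b, X):
    """Success probability for new candidate grasps."""
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))
```

A planner could then rank candidate grasps by `predict_success` instead of an analytic grasp-quality metric, sidestepping the need for a certain, fully known world model.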
Working with People
By treating tasks that involve manipulation as a cooperative
process, people and robots can perform tasks that neither one
could perform independently. For at least the near term,
robots in human environments will be dependent on people.
As long as a robot’s usefulness outweighs the efforts required
to help it, full robot autonomy is unnecessary.
On Human Form
Human environments tend to be well-matched to the human
body. Robots can sometimes simplify manipulation tasks by
taking advantage of these same characteristics. For example,
most everyday objects in human environments sit on top of flat
surfaces that can be comfortably viewed and reached by a
human. A robot can more easily perceive and manipulate these
objects if its sensors look down on the surfaces and its manipulators
easily reach the surfaces. Similarly, everyday hand-held
objects, such as tools, are designed to be grasped and manipulated
using a human hand. A gripper with a similar range of
grasps will tend to be able to grasp these everyday objects.