26-03-2014, 04:28 PM
Visual Navigation With Obstacle Avoidance
Visual Navigation.pdf (Size: 845.98 KB / Downloads: 39)
Abstract
We present and validate a framework for visual navigation with obstacle
avoidance. The approach was originally designed in [1], but major
improvements and real outdoor experiments are added here. Visual navigation
consists of following a path, represented as an ordered set of key images
acquired in a preliminary teaching phase. While following such a path, the
robot is able to avoid new obstacles which were not present during teaching,
and which are sensed by a range scanner. We guarantee that collision
avoidance and navigation are achieved simultaneously: in the presence of
obstacles, the camera pan angle is actuated to maintain scene visibility
while the robot circumnavigates the obstacle. The circumnavigation direction
and the collision risk are estimated using a potential vector field derived
from an occupancy grid.
The framework can also deal with unavoidable obstacles, which
make the robot decelerate and eventually stop.
INTRODUCTION
A great deal of robotics research focuses on vehicle
guidance, with the goal of automatically reproducing
tasks usually performed by humans [2], [3]. Among others,
an important task is obstacle avoidance, i.e., computing a
control such that the trajectory generated is collision-free,
and drives the robot to the goal [4]. A common obstacle
avoidance technique is the potential field method [5], which
is often associated with a global path planner. Instead of
using a global model, we propose a framework for obstacle
avoidance with simultaneous execution of a visual servoing
task [6]. Visual servoing is a well-known method that
uses vision directly in the control loop, and that has been
applied to mobile robots in [7]–[9]. In [7] and [8], the
epipoles are exploited to drive a nonholonomic robot to a
desired configuration. Trajectory tracking is tackled in [9]
by merging differential flatness and predictive control.
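As a concrete illustration of the potential-field idea mentioned above, the sketch below sums repulsive contributions from the occupied cells of a small occupancy grid, producing a resultant vector that points away from nearby obstacles. The grid layout, the 1/d² falloff, and all names here are illustrative assumptions, not the implementation used in the paper.

```python
import numpy as np

def repulsive_field(grid, cell_size=0.1):
    """Sum repulsive vectors from occupied cells of a 2D occupancy grid.

    grid: 2D array of occupancy probabilities in [0, 1]; the robot is
    assumed to sit at the grid centre. Returns the resultant (x, y)
    repulsive vector, pointing away from nearby obstacles.
    """
    rows, cols = grid.shape
    centre = np.array([rows // 2, cols // 2])
    force = np.zeros(2)
    for i in range(rows):
        for j in range(cols):
            p = grid[i, j]
            if p < 0.5:          # treat the cell as free space
                continue
            offset = (centre - np.array([i, j])) * cell_size
            d = np.linalg.norm(offset)
            if d == 0.0:
                continue
            # 1/d^2 falloff: closer occupied cells repel more strongly
            force += p * offset / d**3
    return force

grid = np.zeros((21, 21))
grid[10, 15] = 1.0               # a single obstacle beside the robot
f = repulsive_field(grid)        # resultant vector points away from it
```

The direction of the resultant vector can then decide the circumnavigation direction, while its magnitude serves as a collision-risk measure, as the abstract outlines.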
System Characteristics
In this section, we define the variables introduced
above. We show how to derive the centroid abscissa
Jacobian Jx, the linear velocities in the safe and unsafe
contexts (vs and vu), and the obstacle characteristics
(the heading for avoidance α and the activation function H).
1) Jacobian of the Centroid Abscissa: We hereby
derive the components of Jx introduced in (2). Let us define
v = (vc, ωc), the camera velocity, expressed in FC.
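The role of the activation function H and of the two linear velocities vs and vu can be sketched as a simple convex blend: far from obstacles the safe-context velocity dominates, and as collision risk grows the unsafe-context behaviour takes over. The blending formula below, and the convention that H = 1 means maximum collision risk, are assumptions for illustration, not necessarily the exact combination used in the paper.

```python
def blended_velocity(v_safe, v_unsafe, H):
    """Blend the safe- and unsafe-context linear velocities using an
    activation function H in [0, 1] (H = 1: maximum collision risk).
    """
    if not 0.0 <= H <= 1.0:
        raise ValueError("activation H must lie in [0, 1]")
    return (1.0 - H) * v_safe + H * v_unsafe

# With no obstacle (H = 0) the nominal speed is kept; an unavoidable
# obstacle (H = 1 with v_unsafe = 0) brings the robot to a stop, as
# described in the abstract.
v = blended_velocity(v_safe=1.0, v_unsafe=0.0, H=1.0)
```

A smooth H (e.g. a function of the potential-field magnitude) makes the transition between contexts continuous, which is what lets the robot decelerate gradually rather than stop abruptly.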
CONCLUSIONS
For the first time, a framework with simultaneous obstacle
avoidance and outdoor visual navigation is presented.
It merges techniques from potential fields and visual
servoing, and guarantees path following, obstacle bypassing,
and collision avoidance by deceleration. The method has
been validated by outdoor experiments with real obstacles
(parked cars and pedestrians). In the future, we plan to take
into account moving obstacles, as well as visual occlusions
caused by the obstacles. Finally, it may be interesting to
record scanner data during the teaching step as well, in order
to improve obstacle avoidance.