02-05-2013, 04:51 PM
Real-Time Vision-Aided Localization and Navigation Based on Three-View Geometry
ABSTRACT
A new method for vision-aided navigation based on three-view
geometry is presented. The main goal of the proposed method is
to provide position estimation in GPS-denied environments for
vehicles equipped with a standard inertial navigation system (INS)
and a single camera only, without using any a priori information.
Images taken along the trajectory are stored and associated
with partial navigation data. By using sets of three overlapping
images and the concomitant navigation data, constraints relating
the motion between the time instances of the three images
are developed. These constraints include, in addition to the
well-known epipolar constraints, a new constraint related to the
three-view geometry of a general scene. The scale ambiguity,
inherent to pure computer vision-based motion estimation
techniques, is resolved by utilizing the navigation data attached
to each image. The developed constraints are fused with an
INS using an implicit extended Kalman filter. The new method
reduces position errors in all axes to the levels present
when the first two images were captured.
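The epipolar constraint mentioned in the abstract can be illustrated with a short sketch. This is an illustration under stated assumptions, not the paper's implementation: given the relative rotation R and translation t between two camera poses (here taken from the navigation data, which is also what resolves the scale ambiguity), a pair of matched homogeneous image points q1, q2 satisfies q2^T [t]_x R q1 = 0.

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def epipolar_residual(q1, q2, R, t):
    """Epipolar constraint residual q2^T [t]_x R q1.

    q1, q2 : matched homogeneous image points in the two camera frames
    R, t   : rotation and translation between the two poses
             (obtained from the navigation data in the paper's setting)
    Returns a scalar that is zero for a noise-free match.
    """
    return q2 @ skew(t) @ R @ q1
```

With noisy measurements the residual is not exactly zero, which is why the paper treats these constraints as measurements inside a filter rather than solving them algebraically.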
INTRODUCTION
Inertial navigation systems (INS) accumulate
navigation errors over time due to imperfections
in the inertial sensors. Over the past few decades,
many methods have been proposed for restraining
or eliminating these errors, assuming various types
of additional sensors and a priori information. The
majority of modern navigation systems rely on the
Global Positioning System (GPS) as the primary
means for mitigating the inertial measurement errors.
However, GPS might be unavailable or unreliable; this
happens when operating indoors, under water, or on
other planets. In these scenarios, vision-based methods
constitute an attractive alternative for navigation
aiding due to their relatively low cost and autonomous
nature. Vision-aided navigation has indeed become an
active research field alongside the rapid development
of computational power.
The current work is concerned with vision-aided
navigation for a vehicle equipped with a standard
INS and a single camera only, a setup that has been
studied in a number of previous works. Existing
methods vary by the number of overlapping images
and by the techniques used for fusing the imagery
data with the navigation system. Two related issues
that have drawn much attention are computational
requirements and the ability to handle loops, i.e., how
the navigation solution is updated when the platform
revisits some area.
FUSION WITH A NAVIGATION SYSTEM
In this section we present a technique for fusing
the three-view geometry constraints with a standard
navigation system, assuming three images with a
common overlapping area have been identified. The
data fusion is performed using an indirect IEKF that
estimates the navigation parameter errors instead of
the parameters themselves. These estimated errors
are then used for correcting the navigation solution
computed by the navigation system (see Fig. 1).
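The indirect-filter idea can be sketched in a few lines: the filter estimates the navigation *errors* rather than the navigation parameters themselves, and the estimated errors are fed back to correct the INS solution. The sketch below assumes a linearized measurement z ≈ H δx + noise; the function names and the simple linear update are illustrative only, whereas the paper's actual filter is an implicit extended Kalman filter with Jacobians derived from the three-view constraints.

```python
import numpy as np

def indirect_update(x_nav, P, H, R_meas, z):
    """One measurement update of an indirect (error-state) filter (sketch).

    x_nav  : navigation solution from the INS (e.g. position/velocity/attitude
             errors stacked into one vector; illustrative layout)
    P      : covariance of the error state dx
    H      : measurement Jacobian, assuming z ~ H @ dx + noise
    R_meas : measurement noise covariance
    z      : residual measurement evaluated from the geometric constraints
    Returns the corrected navigation solution and the updated covariance.
    """
    S = H @ P @ H.T + R_meas           # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    dx = K @ z                         # estimated navigation errors
    x_corrected = x_nav - dx           # feed the errors back into the solution
    P_new = (np.eye(len(x_nav)) - K @ H) @ P
    return x_corrected, P_new          # error state is reset to zero after feedback
```

After the feedback step the error state is zeroed, which is the standard reset in indirect filtering and keeps the linearization point close to the true state.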
Extensions
It is straightforward to extend the developed
method for handling more than three overlapping
images, which may improve robustness to noise.
In the general case, assume k given images, such
that every three neighboring images overlap
(a common overlapping area for all the k images
is not required). Assume also that all these images
are associated with the required navigation data. In
the spirit of (6), we write an epipolar constraint for
each pair of consecutive images, and a constraint
for relating the magnitudes of the translation vectors
(similar to (6c)) for every three adjacent overlapping
images. Next, the residual measurement z is redefined
and the calculations of the required Jacobian matrices
in the IEKF formulation are repeated.
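The construction of the redefined residual z described above can be sketched as follows: one epipolar term per consecutive image pair and one translation-magnitude term per adjacent image triplet, stacked into a single measurement vector. The two constraint functions passed in are placeholders for the paper's actual expressions, which are not reproduced here.

```python
import numpy as np

def stacked_residual(epipolar_res, scale_res, images):
    """Stack the residual measurement z for k overlapping images (sketch).

    epipolar_res : callable (img_i, img_j) -> float, placeholder for the
                   pairwise epipolar constraint residual
    scale_res    : callable (img_i, img_j, img_k) -> float, placeholder for
                   the translation-magnitude constraint over a triplet
    images       : sequence of k images with attached navigation data
    Returns z with (k-1) epipolar terms followed by (k-2) scale terms.
    """
    k = len(images)
    z = []
    for i in range(k - 1):                       # consecutive pairs
        z.append(epipolar_res(images[i], images[i + 1]))
    for i in range(k - 2):                       # adjacent triplets
        z.append(scale_res(images[i], images[i + 1], images[i + 2]))
    return np.array(z)
```

For k = 3 this reduces to the three-view case in the paper (two epipolar constraints plus one scale constraint); for larger k the corresponding Jacobians of each stacked row would need to be recomputed for the IEKF update.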