11-12-2012, 06:46 PM
Visual Servo Control
This article is the second of a two-part tutorial on
visual servo control. In Part I (IEEE Robotics and
Automation Magazine, vol. 13, no. 4), we introduced
fundamental concepts and described basic
approaches. Here we discuss more advanced concepts,
and present a number of recent approaches.
Estimation of 3-D Parameters
If a calibrated stereo vision system is used, all 3-D parameters
can be easily determined by triangulation, as mentioned in Part I
of the tutorial. Similarly, if a 3-D model of the object is
known, all 3-D parameters can be computed from a pose estimation
algorithm. However, such an estimation can be quite
unstable due to image noise. It is also possible to estimate 3-D
parameters by using the epipolar geometry that relates the
images of the same scene observed from different viewpoints.
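As a minimal sketch of the triangulation mentioned above, the following uses the standard linear (DLT) method for a calibrated stereo pair. The intrinsic matrix, baseline, and 3-D point are hypothetical values chosen for the example, not taken from the tutorial:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two calibrated views.

    P1, P2 : 3x4 camera projection matrices; x1, x2 : 2-D image points.
    Builds the homogeneous system A X = 0 from the two projections and
    solves it with an SVD (right singular vector of the smallest value).
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # dehomogenize

# Hypothetical stereo rig: identical intrinsics, 10 cm baseline along x.
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.], [0.]])])

# Project a known 3-D point into both images, then recover it.
X_true = np.array([0.2, -0.1, 2.0])
x1 = P1 @ np.append(X_true, 1.); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.); x2 = x2[:2] / x2[2]
X_est = triangulate(P1, P2, x1, x2)
```

With noise-free measurements the point is recovered exactly; with real image noise, the estimate degrades as the baseline shrinks, which is the instability the tutorial alludes to.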
Epipolar Geometry
Given a set of matches between the image measurements in
the current image and in the desired one, the fundamental
matrix, or the essential matrix if the camera is calibrated, can
be recovered [1], and then used in visual servoing [2]. Indeed,
from the essential matrix, the rotation and the translation up
to a scalar factor between the two views can be estimated.
However, near the convergence of the visual servo, that is, when the current and desired images are similar, the epipolar geometry becomes degenerate, and it is no longer possible to accurately estimate the partial pose between the two views. For this reason, using a homography is generally preferred.
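The recovery of rotation and translation up to scale from the essential matrix can be sketched with the standard SVD-based decomposition. The motion below is a hypothetical example; the decomposition yields four candidate poses, and in practice the true one is selected by a positive-depth (cheirality) test on reconstructed points:

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix such that skew(t) @ x == np.cross(t, x)."""
    return np.array([[0., -t[2], t[1]],
                     [t[2], 0., -t[0]],
                     [-t[1], t[0], 0.]])

def decompose_essential(E):
    """Return the four (R, t) candidates from an essential matrix E.

    Standard result: with E = U diag(1,1,0) V^T, the rotations are
    U W V^T and U W^T V^T, and the translation is +/- the last column
    of U (known only up to scale, as noted in the text).
    """
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:          # enforce proper rotations
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
    t = U[:, 2]
    return [(U @ W @ Vt, t), (U @ W @ Vt, -t),
            (U @ W.T @ Vt, t), (U @ W.T @ Vt, -t)]

# Hypothetical motion: small rotation about y, unit translation along x.
theta = 0.1
R_true = np.array([[np.cos(theta), 0., np.sin(theta)],
                   [0., 1., 0.],
                   [-np.sin(theta), 0., np.cos(theta)]])
t_true = np.array([1., 0., 0.])
E = skew(t_true) @ R_true             # essential matrix E = [t]_x R
candidates = decompose_essential(E)
```

The degeneracy discussed above shows up here directly: as the two views coincide, t tends to zero, E tends to the zero matrix, and the decomposition becomes meaningless.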
Direct Estimation
The approach described previously can be used to estimate
the unknown 3-D parameters that appear in the analytical
form of the interaction matrix. It is also possible to estimate its numerical value directly, using either an off-line learning step or an on-line estimation scheme. This approach is useful only for IBVS since, for PBVS, all the coefficients of the interaction matrix are obtained directly from the features s used in the control scheme.
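The off-line learning step can be sketched as follows, assuming we can probe the system with known camera velocities and record the resulting feature variations s_dot = L v. Everything here is simulated with hypothetical values; on a real robot, the probe responses would come from measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "ground-truth" interaction matrix: k features, 6-DOF velocity.
k = 8
L_true = rng.standard_normal((k, 6))

# Off-line learning: apply m known velocity probes and record the feature
# rates they induce (simulated here via s_dot = L v).
m = 20
V = rng.standard_normal((6, m))       # columns: probe camera velocities
S_dot = L_true @ V                    # columns: observed feature rates

# Least-squares estimate: L_hat = S_dot V^+, using the pseudoinverse of
# the probe matrix. With at least 6 independent probes, V has full row
# rank and the estimate is exact in the noise-free case.
L_hat = S_dot @ np.linalg.pinv(V)
```

An on-line variant would update this estimate at each iteration from the measured feature variation and the applied velocity (e.g., a Broyden-type rank-one update) instead of batching all probes, at the cost of a noisier estimate.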