25-01-2013, 03:04 PM
Fly spy: lightweight localization and target tracking for cooperating air and ground robots
Abstract
Motivated by the requirements of micro air vehicles, we present a simple method
for estimating the position, heading and altitude of an aerial robot by tracking
the image of a communicating GPS-localized ground robot. The image-to-GPS
mapping thus generated can be used to localize other objects on the ground.
Results from experiments with real robots are described.
Introduction
Payload volume, mass and power consumption are critical factors in the performance of aerial robots. Very small "micro" air vehicles (MAVs) are particularly desirable for reasons of cost, portability and, for military applications, stealth. Building on several years' work with small robot helicopters, we are interested in minimalist approaches to localization which reduce the number of sensors that such robots must carry. This paper presents a simple method for estimating the position, heading and altitude of an aerial robot using a single on-board camera. Many applications of aerial robots, such as reconnaissance and target tracking, already require an on-board camera; we exploit this sensor to simultaneously perform localization. Since ground platforms are better suited to carrying larger sensor payloads, and are typically localized, collaboration and data sharing between ground and aerial robots is employed to provide the extra information needed to localize the air vehicle.
Related work
The work that most closely parallels the system discussed in this paper is the "visual odometry" system at CMU [1], which can visually lock on to ground objects and sense relative helicopter position in real time. The system tracks image pair templates using custom vision hardware. Coupled with angular rate sensing, it is used to stabilize small helicopters at reasonable speeds.
Approach
If an MAV can observe the relative positions of itself and two objects with known
location on the ground below it, it can localize itself completely by triangulation.
If the objects on the ground were friendly robots (`Unmanned Ground Vehicles' -
UGVs) with on-board localization, they could inform the MAV of their positions
and even send updates as they moved. This requires only a camera, modest
computation and communications. All these resources are likely to be required
on the MAV for other tasks; this localization technique requires no dedicated
resources.
Alternatively, a single cooperating robot can move over time to establish a baseline on the ground and on the image plane of the MAV's camera, as shown in Figure 1. This is subject to some assumptions about the behaviour of the MAV, but it is the minimal configuration for such a system and is the scenario investigated in the rest of this paper.
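The baseline established by the moving UGV can be used to recover the rotation and scale of the image-to-GPS mapping. A minimal sketch of this step (function and variable names are illustrative, not from the paper; coordinates are (x, y) tuples):

```python
import math

def transform_params(img_a, img_b, gps_a, gps_b):
    """Estimate the rotation and scale of the image-to-GPS mapping
    from one baseline: the UGV observed at two image positions
    (img_a -> img_b) while reporting two GPS positions (gps_a -> gps_b)."""
    dxi, dyi = img_b[0] - img_a[0], img_b[1] - img_a[1]
    dxg, dyg = gps_b[0] - gps_a[0], gps_b[1] - gps_a[1]
    # Rotation: angle between the image baseline and the GPS baseline.
    theta = math.atan2(dyg, dxg) - math.atan2(dyi, dxi)
    # Scale: GPS distance units per image pixel.
    scale = math.hypot(dxg, dyg) / math.hypot(dxi, dyi)
    return theta, scale
```

A longer UGV traverse gives a longer baseline, which makes both estimates less sensitive to pixel-level tracking noise.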
Position
The MAV's (x, y) position is assumed to be at the center of the image, i.e. at the origin. To find the GPS location of the MAV we find the GPS location of the image origin. To do this we construct the vector C from the position of the current image sample to the image origin (Figure 4 top left) and transform it into GPS space. First we translate C to the position of the current GPS sample (x_g, y_g) (Figure 4 bottom left), then rotate it by the rotation angle θ (Figure 4 bottom middle), then scale it by S to complete the coordinate transformation (Figure 4 bottom right). The transformed vector C now points to the corresponding GPS location of the image origin, giving us the MAV's position (x_u, y_u).
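The translate-rotate-scale step above can be sketched as follows (a minimal illustration, assuming the rotation angle and scale of the transform have already been estimated from the UGV baseline; names are not from the paper):

```python
import math

def mav_position(img_sample, gps_sample, theta, scale):
    """Map the image origin (the assumed MAV position) into GPS space.
    C points from the current image sample to the image origin; rotating
    it by theta and scaling by S, anchored at the current GPS sample,
    yields the MAV's GPS position."""
    cx, cy = -img_sample[0], -img_sample[1]   # vector C: sample -> origin
    # Rotate C by theta, then scale, then translate to the GPS sample.
    rx = cx * math.cos(theta) - cy * math.sin(theta)
    ry = cx * math.sin(theta) + cy * math.cos(theta)
    return (gps_sample[0] + scale * rx, gps_sample[1] + scale * ry)
```

With theta = 0 and scale = 1 the result is simply the GPS sample shifted by C, which is a useful sanity check.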
Target localization
With the parameters of the image-to-GPS coordinate transform completely known, any pixel in the image can be mapped to GPS space. Thus we can estimate the position of any object tracked in the image. This has applications in aerial mapping and reconnaissance, friend/foe identification and target interception.
We demonstrate this in the next section by estimating the GPS location of a `foe' robot placed arbitrarily in the MAV's field of view.
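Mapping an arbitrary pixel uses the same rotate-scale-translate transform, anchored at a tracked image sample and its corresponding GPS fix. A sketch under the same illustrative naming as above:

```python
import math

def pixel_to_gps(pix, img_sample, gps_sample, theta, scale):
    """Map any image pixel into GPS space: rotate and scale the vector
    from a tracked image sample to the pixel, anchored at the GPS fix
    corresponding to that sample."""
    vx, vy = pix[0] - img_sample[0], pix[1] - img_sample[1]
    rx = vx * math.cos(theta) - vy * math.sin(theta)
    ry = vx * math.sin(theta) + vy * math.cos(theta)
    return (gps_sample[0] + scale * rx, gps_sample[1] + scale * ry)
```

Localizing the `foe' robot is then one call: pass the pixel at which the blob tracker sees the foe, together with the current UGV image/GPS sample pair.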
Demonstration
The localization method described above is attractively simple, but has some
fairly strong constraints which could make it impractical. In order to assess its
utility for real world applications we implemented the system on our real MAV
and UGV systems. This section describes the implementation and compares the
localization estimates generated by the method with those obtained from an
independent combined GPS/IMU sensor.
MAV platform
The USC AVATAR (Autonomous Vehicle Aerial Tracking And Reconnaissance) MAV is a gas-powered model helicopter fitted with a PC104 computer and custom control electronics (Figure 6 left) [7, 8]. It has been developed through three generations over the last nine years in our lab. It carries a high-quality Inertial Measurement Unit (IMU), a Novatel RT20 GPS receiver/decoder, an engine RPM sensor and a color video camera. A laser altimeter is currently being integrated. Communication with ground workstations is via 2.4 GHz wireless Ethernet and 2.3 GHz wireless video.
AVATAR can autonomously servo to GPS locations and orient itself to GPS headings. The AVATAR control system is implemented using a hierarchical behavior-based control system architecture [9]. Briefly, the behavior-based control approach partitions the control problem into a set of loosely coupled computing modules called `behaviors'.
UGV platform
Our UGV platform is an ActivMedia Pioneer AT robot, augmented with a PC104 stack as shown in Figure 7 (left). It has five forward-facing and two side-facing sonars, a compass and wireless Ethernet. It carries a Novatel GPS receiver/decoder which is older and less accurate than that carried by AVATAR.
The Pioneer's controller allows us to servo to a specified GPS location and orient to a compass heading while avoiding obstacles using sonar. In this experiment the UGV is required simply to move over the field at a constant speed and heading to establish the GPS and image vectors. The Pioneer broadcasts its GPS location on the local network's broadcast channel at 5 Hz. The AVATAR monitors these messages to obtain the GPS vector.
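The broadcast link can be pictured as a plain UDP listener. The paper does not specify the wire format or port, so both are assumptions here (a `lat,lon` ASCII datagram on port 5000):

```python
import socket

def parse_gps_message(data: bytes):
    """Parse a hypothetical 'lat,lon' ASCII datagram into floats."""
    lat, lon = (float(field) for field in data.decode().split(","))
    return lat, lon

def listen_for_ugv_gps(port=5000):
    """Yield (lat, lon) fixes broadcast by the UGV on the local network.
    The port number and message format are illustrative assumptions."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))  # receive broadcasts addressed to this port
    while True:
        data, _addr = sock.recvfrom(256)
        yield parse_gps_message(data)
```

At 5 Hz the bandwidth cost is negligible, so the GPS stream can share the same wireless Ethernet link used for command and telemetry.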
Object tracking
A simple blob-tracking algorithm is used to track the position of high-contrast regions in the image. The experiments were carried out over an open grassy field, so the black and white UGVs stand out clearly from the background. The images are sampled from the MAV's wireless video stream by a low-cost framegrabber on a Pentium II workstation. An update rate of 25 Hz was achieved while reliably tracking two objects. Figure 7 (right) shows an example frame from the MAV video camera with the locations of the two UGVs correctly tracked by the software.
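The paper does not detail its tracker, but the idea can be sketched as threshold-then-flood-fill: mark bright pixels and return the centroid of each connected region. This is an illustrative version, not the original code:

```python
def track_blobs(image, threshold=200):
    """Minimal blob-tracker sketch: threshold a grayscale frame (a list
    of rows of intensities) and return the centroid (x, y) of each
    4-connected bright region."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    centroids = []
    for y in range(h):
        for x in range(w):
            if image[y][x] >= threshold and not seen[y][x]:
                # Flood-fill this bright region and collect its pixels.
                stack, pixels = [(y, x)], []
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and image[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                xs = [p[1] for p in pixels]
                ys = [p[0] for p in pixels]
                centroids.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return centroids
```

Over a uniform grassy background a fixed threshold suffices; a real implementation would typically add size filtering and frame-to-frame data association to keep the two UGV tracks distinct.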
Conclusions and further work
We have demonstrated a simple geometric method that approximately localizes
an aerial robot using minimal sensing. We suggest it may be suitable for use
in micro air vehicles, as it exploits general-purpose sensors that are likely to be
carried already and requires only modest computation. It also allows multiple
MAVs to share a single GPS receiver carried by a cooperating ground robot. We
aim to further evaluate this method and to increase the interaction between air
and ground vehicles. In particular we aim to direct the friendly ground robot to
intercept the foe robot using the estimate calculated by the MAV. The localization accuracy observed in these experiments suggests that this is feasible.