11-05-2013, 04:09 PM
On-Road Vehicle Detection Using Optical Sensors: A Review
Abstract
As one of the most promising applications of computer vision,
vision-based vehicle detection for driver assistance has received considerable
attention over the last 15 years. There are at least three reasons for the
blooming research in this field: first, the startling losses both in human lives
and finance caused by vehicle accidents; second, the availability of feasible
technologies accumulated within the last 30 years of computer vision research;
and third, the exponential growth of processor speed has paved the
way for running computation-intensive video-processing algorithms even
on a low-end PC in real time. This paper provides a critical survey of recent
vision-based on-road vehicle detection systems that have appeared in the literature
(i.e., systems in which the cameras are mounted on the vehicle rather than being
static, as in traffic/driveway monitoring systems).
INTRODUCTION
Every minute, on average, at least one person dies in a vehicle
crash. Auto accidents also injure at least 10 million people each
year, two or three million of them seriously. Hospital
bills, damaged property, and other costs are expected to add up
to 1%-3% of the world’s gross domestic product [1]. With the
aim of reducing injury and accident severity, pre-crash sensing
is becoming an area of active research among automotive manufacturers,
suppliers and universities. Vehicle accident statistics
disclose that the main threats a driver is facing are from other
vehicles. Consequently, developing on-board automotive driver
assistance systems aiming to alert a driver about driving environments,
and possible collision with other vehicles has attracted a
lot of attention. In these systems, robust and reliable vehicle
detection is the first step — a successful vehicle detection algorithm
will pave the way for vehicle recognition, vehicle tracking,
and collision avoidance. This paper provides a survey of
on-road vehicle detection systems using optical sensors. More
general overviews on intelligent driver assistance systems can
be found in [2].
VISION-BASED INTELLIGENT VEHICLE RESEARCH
WORLDWIDE
With the ultimate goal of building autonomous vehicles,
many government institutions have launched various projects
worldwide, involving a large number of research units working
cooperatively. These efforts have produced several prototypes
and solutions, based on rather different approaches [2].
In Europe, the PROMETHEUS program (Program for European
Traffic with Highest Efficiency and Unprecedented Safety) pioneered
this exploration. More than 13 vehicle manufacturers and
several research institutes from 19 European countries were involved.
Several prototype vehicles and systems (e.g., VaMoRs,
VITA, VaMP, MOB-LAB, GOLD) emerged from this effort.
ACTIVE VS. PASSIVE SENSORS
The most common approach to vehicle detection is using
active sensors such as lasers, lidar, or millimeter-wave radars.
They are called active because they detect the distance of an
object by measuring the travel time of a signal emitted by the
sensors and reflected by the object. Their main advantage is
that they can measure certain quantities (e.g., distance) directly,
requiring limited computing resources. Prototype vehicles employing
active sensors have shown promising results. However,
active sensors have several drawbacks, such as low spatial resolution,
and slow scanning speed. Moreover, when a large number
of vehicles are moving simultaneously in the same direction,
interference among sensors of the same type poses a big problem.
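The time-of-flight principle described above can be sketched in a few lines. This is an illustrative example, not from the paper; the function name and the 200 ns sample value are assumptions for demonstration.

```python
def time_of_flight_distance(round_trip_s, c=3.0e8):
    """Distance to a reflecting object from the round-trip travel time
    of a signal emitted by an active sensor (radar/lidar): d = c * t / 2.
    The division by two accounts for the signal traveling out and back."""
    return c * round_trip_s / 2.0

# A pulse returning after 200 ns corresponds to a target about 30 m away.
print(time_of_flight_distance(200e-9))  # 30.0
```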
Color
Although few existing systems use color information to its
full extent for hypothesis generation (HG), it is a very useful cue for obstacle detection,
lane/road following, etc. Several prototype systems investigated
the use of color information as a cue to follow lanes/roads, or
segment vehicles from background [12]. Similar methods could
be used for HG, because non-road regions within a road area are
potentially vehicles or obstacles. The lack of deploying color information
in HG is largely due to the difficulties of color-based
object detection or recognition methods in outdoor settings. The
color of an object depends on illumination, reflectance properties
of the object, viewing geometry, and sensor parameters.
Consequently, the apparent color of an object can be quite different
during different times of the day, under different weather
conditions, and under different poses.
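One way the color cue could be used to segment non-road regions, as described above, is to model road color statistically and flag pixels that deviate from it. The following is a minimal sketch, not the paper's method; the per-channel Gaussian model, the threshold `k`, and the toy image are all illustrative assumptions.

```python
import numpy as np

def segment_non_road(image, road_mean, road_std, k=2.5):
    """Flag pixels whose color deviates from a simple per-channel
    Gaussian road-color model by more than k standard deviations.
    Non-road regions inside the road area are vehicle/obstacle
    candidates. road_mean/road_std would be estimated from a known
    road patch; here they are supplied directly."""
    dev = np.abs(image.astype(float) - road_mean) / road_std
    return np.any(dev > k, axis=-1)  # True where the pixel is non-road

# Toy 2x2 RGB image: three gray road-like pixels and one red pixel.
img = np.array([[[100, 100, 100], [105, 98, 102]],
                [[200, 30, 30],   [99, 101, 100]]], dtype=np.uint8)
mask = segment_non_road(img,
                        road_mean=np.array([100.0, 100.0, 100.0]),
                        road_std=np.array([10.0, 10.0, 10.0]))
print(mask)  # only the red pixel at (1, 0) is flagged
```

Because apparent road color shifts with illumination and weather, a fixed model like this would have to be re-estimated continuously in practice, which is exactly the difficulty the passage above points out.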
Shadow
Using shadow information as a sign pattern for vehicle detection
was initially discussed in [13]. By investigating image
intensity, it was found that the area underneath a vehicle
is distinctly darker than any other areas on an asphalt paved
road. A first attempt to deploy this observation can be found
in [14], though there was no systematic way to choose appropriate
threshold values. The intensity of the shadow depends on
the illumination of the image, which in turn depends on weather
conditions. Therefore, the thresholds are by no means fixed.
In [15], a normal distribution was assumed for the intensity of
the free driving space. The mean and variance of the distribution
were estimated using Maximum Likelihood (ML). It should be
noted that the assumption about the distribution of road pixels
might not always hold. For example, rainy weather
conditions or bad illumination conditions will make the color of
road pixels dark, causing this method to fail.
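The approach of [15] can be sketched as follows: assume road intensities are Gaussian, take the ML estimates of the parameters (the sample mean and variance), and treat pixels sufficiently below the mean as shadow candidates. This is a minimal illustration under those stated assumptions; the sample values and the factor `k` are hypothetical.

```python
import numpy as np

def shadow_threshold(road_pixels, k=3.0):
    """Assume free-driving-space intensities are normally distributed;
    the ML estimates of the Gaussian parameters are the sample mean
    and (biased) sample variance. Pixels darker than mean - k*std are
    treated as under-vehicle shadow candidates."""
    mu = float(np.mean(road_pixels))
    sigma = float(np.std(road_pixels))  # ML (biased) estimate
    return mu - k * sigma

# Hypothetical intensities sampled from the free driving space.
road = np.array([120, 125, 118, 122, 130, 119, 121, 124], dtype=float)
t = shadow_threshold(road)
print(t < road.min())  # True: the cutoff sits below ordinary road intensity
```

The failure mode noted above is visible here: on a dark, rainy road the estimated mean drops, the threshold drops with it, and true shadow pixels may no longer fall below the cutoff.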
Disparity map
The positional difference between corresponding pixels in the left
and right images is called disparity. The disparities of all the image
points form the so-called disparity-map. If the parameters
of the stereo rig are known, the disparity map can be converted
into a 3-D map of the viewed scene. Computing the disparity
map, however, is very time consuming. Hancock [26] proposed
a method that exploits disparity information while avoiding
some of the heavy computation. In [27], Franke et al. argued that, to
solve the correspondence problem, area-based approaches were
too computationally expensive, and disparity maps from feature-based
methods were not dense enough. A local feature extractor,
“structure classification,” was proposed to simplify the correspondence
problem.
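The conversion from a disparity map to a 3-D map, given the stereo-rig parameters, rests on the standard triangulation relation Z = f·B/d. The sketch below is illustrative; the focal length and baseline values are assumptions, not figures from the paper.

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Triangulate depth from stereo disparity for a rectified rig:
    Z = f * B / d, where f is the focal length in pixels, B the
    baseline between the two cameras in meters, and d the disparity
    in pixels. Zero disparity corresponds to a point at infinity."""
    if disparity_px <= 0:
        return float("inf")
    return focal_px * baseline_m / disparity_px

# Assumed rig: f = 700 px, baseline 0.12 m.
# A 12-pixel disparity then places the point 7 m away.
print(disparity_to_depth(12, 700, 0.12))  # 7.0
```

The inverse relation between depth and disparity also shows why dense, accurate disparity maps matter for distant vehicles: at long range, a one-pixel disparity error translates into a large depth error.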
CONCLUSIONS
We presented a critical survey of vision-based on-road vehicle
detection systems — one of the most important components of
a driver assistance system. Judging from the research activities
underway worldwide, this area will clearly continue to
be among the most active research areas in the years ahead. Major motor
companies, government agencies, and universities are all expected
to work together to make significant progress in this area
over the next few years.