Road Detection and Tracking for Autonomous Mobile Robots


INTRODUCTION

An autonomous vehicle intended for driving off-road (e.g., for military reconnaissance) should still be able to identify
roads and to drive along them when conditions allow. This ability will minimize terrain-based dangers and maximize
speed. Road following requires an ability to discriminate between the road and surrounding areas and is a well-studied
visual task [1-5]. The work described in this paper is part of the Army's Demo III project [6]. The requirements for the
Experimental Unmanned Vehicle (XUV) developed for Demo III include the ability to drive autonomously at speeds of
up to 60 kilometers per hour (km/h) on-road, 35 km/h off-road in daylight, and 15 km/h off-road at night or under bad
weather conditions. The control system for the vehicle is designed in accordance with the 4D-Real-time Control System
(RCS) architecture [7], which divides the system into perception, world modeling, and behavior generation subsystems.
The XUV has two principal sets of sensors for navigation, as shown in Figure 1. On the left, outlined in white, is a Ladar
system that produces range images at about 20 Hz. Mounted above the Ladar is a color camera that produces images at
up to 30 Hz. On the right are a pair of stereo color cameras and a set of stereo FLIR cameras.

Given the need for relatively high-speed driving, the sensory processing subsystem must be able to update the world
model with current information as quickly as possible. It is not practical to process all images completely in the time
available, so focusing attention on important regions is required. This is done by trying to predict which regions of future
images will contain the most useful information based on the current images and the current world model. Prediction is
carried out between images, across images, and between the world model and each type of image.

Prediction and focus of attention are of special interest for robotic systems because such systems frequently have the
capability to actively control their sensors [8]. The goal of focusing attention is to reduce the amount of processing
necessary to understand an image in the context of a task. Usually, large regions either contain information of no interest
for the task or contain information that is unchanged from a previous view. If the regions that are of interest can be
isolated, special and perhaps expensive processing can be applied to them without exceeding the available computing
resources.

One way that "focus of attention" systems can work is by looking for features defined by some explicit or implicit
model. The search may take many forms, from multi-resolution approaches that emulate the peripheral and foveal
structure of human vision, to target-recognition methods that use explicit templates for matching [9-14]. Once a set of
attention regions has been detected, a second stage of processing is often used to process them further or to rank them.
This processing may require more complex algorithms, but they are applied only to small regions of the image.
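
As a loose illustration of that two-stage pattern (a cheap first pass over the whole image, with expensive analysis
reserved for a few candidate windows), the sketch below uses local intensity variance as a stand-in interest measure;
the block size, threshold, and scoring are assumptions for the example, not methods from the cited work.

import numpy as np

def two_stage_attention(image, threshold=0.8, top_k=5):
    """Stage 1 flags candidate 16x16 blocks with a cheap interest measure;
    stage 2 ranks them so costly processing touches only a few regions."""
    h, w = image.shape
    # Stage 1: per-block intensity variance (placeholder for a model-driven test).
    blocks = image[:h - h % 16, :w - w % 16].reshape(h // 16, 16, w // 16, 16)
    score = blocks.var(axis=(1, 3))
    candidates = np.argwhere(score > threshold * score.max())
    # Stage 2: keep only the best few candidates for further (expensive) processing.
    ranked = sorted(candidates.tolist(), key=lambda rc: -score[rc[0], rc[1]])
    return ranked[:top_k]   # (block_row, block_col) indices of attention regions

regions = two_stage_attention(np.random.rand(480, 640))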

Another way focus of attention systems can work is by selecting the most appropriate sensory processing algorithm for
achieving a given task. Criteria for selecting the most appropriate algorithm may include environmental factors
(weather, day or night, road condition, road class, etc.) and the type of processing to be performed on the sensed
information. If the task is daytime driving on highways, for example, the sensory processing system must find lane lines
in daylight. A rule-based system can use these constraints to select the most appropriate algorithm to perform the task.
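
A minimal sketch of such a rule-based selector, assuming a couple of environmental inputs; the condition fields and
algorithm names here are hypothetical, not identifiers from the paper.

def select_algorithm(task, daylight, road_class):
    """Pick a sensory processing algorithm from the task and environment."""
    if task == "lane_following" and daylight and road_class == "highway":
        # Painted lane lines should be visible: run the lane marker detector.
        return "lane_detection"
    if task == "road_following":
        # Unmarked roads: segment the road surface from the surround instead.
        return "road_segmentation"
    # Conservative default when no rule fires.
    return "road_segmentation"

print(select_algorithm("lane_following", daylight=True, road_class="highway"))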

Lane model (Equation 1): x = a + b * y + c * y^2

Figure 2. Lane model in a road with lanes.
Figure 3. Boundary model of a road.

In this paper, we describe a road and lane detection and tracking method that falls within the above general description,
but differs from previous approaches in using multiple sensor types that interact to locate and identify roads. The way
each sensor is used in conjunction with another sensor and with the functions of the vehicle's internal world model is the
focus of this paper. Further, a world model containing the system's current best guess about the state of the world and the
task is used to predict where roads should appear and how they should look to each of the sensors. The world model also
helps determine the most appropriate algorithm to apply to the sensor data.

THE WORLD MODEL FOR ROAD AND LANE TRACKING

We briefly introduce the world model we use for road extraction and tracking. The world model contains a
representation, or map, of the current state of the world surrounding the vehicle and is updated continually by the
sensors. We use a modified occupancy grid representation [15], with the vehicle centered on the grid and the grid tied to
the world. The world model scrolls under the vehicle as the vehicle moves about in the world (a minimal sketch of this
scrolling appears after the numbered list below). The world model is the system's internal representation of the external
world. It acts as a bridge between sensory processing and behavior generation by providing a central repository for
storing sensory data in a unified representation, and it decouples the real-time sensory updates from the rest of the
system. The world model process has three primary functions in the road and lane following tasks.

1. To create models of road elevation, road boundaries, and lanes within the road, and to keep them current and
consistent. In this role, it updates elevation and the variance of elevation in maps and road/lane models (see Figures 2
and 3) in accordance with inputs from the sensors. It also assigns (multiple) confidence factors to the models and
all map data, and adjusts these factors as new data are sensed.
2. To generate predictions of expected sensory input based on the current state of the world and estimated future states
of the world. For the road boundary and lane marker following application, we assume that the location of the road
boundary or lane marker in the image domain changes very little (less than 10 pixels) between successive images.
Based on the road model, the predicted road boundary or lane is projected onto the current image. This provides the
focus-of-attention region for the sensory processing module (see the first sketch after this list).
3. To determine the most suitable algorithm for the sensory processing module based on the current state of the world
and the task. The system's state is updated every cycle based on prior knowledge and the state of the world model [16].
For lane marker following, the lane detection algorithm is used to find lane markers, and the edges most likely to
correspond to each lane marker are used to update the lane model (see section 3.1). For road following, the road
detection algorithm (see section 3.2) is used to segment the road.
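
Item 2 above can be made concrete with a minimal sketch, assuming the 2D lane model x = a + b * y + c * y^2
(Equation 1) and the under-10-pixel inter-frame motion bound stated in the text; the margin, image rows, and
coefficient values below are illustrative assumptions, not values from the paper.

import numpy as np

def predict_roi(a, b, c, rows, margin=10):
    """Project the lane model into the image and grow it into an attention
    window of +/- margin pixels (the <10 px inter-frame motion bound)."""
    y = np.asarray(rows, dtype=float)
    x = a + b * y + c * y ** 2            # predicted marker column per image row
    return np.stack([x - margin, x + margin], axis=1)   # [x_min, x_max] per row

# Predict where to search for the lane marker in the next frame.
roi = predict_roi(a=320.0, b=-0.1, c=1e-4, rows=range(200, 480))

Only the pixels inside this band need edge extraction, which is what lets the sensory processing keep up with the
sensor frame rates.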
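
The scrolling behavior of the world model grid can be sketched as follows; this is a minimal illustration assuming a
plain 2D numpy array, with grid size and cell resolution invented for the example (the actual map is richer, storing
elevation, variance of elevation, and confidence per cell).

import numpy as np

CELL_M = 0.4                    # assumed meters per grid cell
grid = np.zeros((256, 256))     # vehicle-centered map (e.g., elevation)

def scroll(grid, dx_cells, dy_cells):
    """Shift map contents opposite to vehicle motion so each cell stays
    tied to its world location; newly exposed cells become unknown (0)."""
    out = np.zeros_like(grid)
    h, w = grid.shape
    ys = slice(max(0, -dy_cells), min(h, h - dy_cells))
    xs = slice(max(0, -dx_cells), min(w, w - dx_cells))
    out[ys, xs] = grid[max(0, dy_cells):h + min(0, dy_cells),
                       max(0, dx_cells):w + min(0, dx_cells)]
    return out

# Vehicle moved 1.2 m east: scroll the grid 3 cells so the map stays world-tied.
grid = scroll(grid, dx_cells=int(round(1.2 / CELL_M)), dy_cells=0)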

Figure 4. Initial road boundary. Blue points are road edges; green points show the model fitted to the boundary.

SENSORY PROCESSING FOR ROAD AND LANE TRACKING
Lane tracking algorithm


The algorithm used for lane tracking is similar to that described in Schneiderman and Nashman [17]. The stages of the
algorithm involve first predicting the locations of lane markers, then extracting and classifying edges, and finally
updating the lane model. The lanes are represented by 2D models in the image plane. An initial approximate model of
the lane markers is determined by fitting a second-order polynomial (Equation 1) to the set of edge points labeled as lane
marker points.
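
A minimal sketch of that initial fit, assuming the labeled edge points are (x, y) pixel coordinates and using an
ordinary least-squares fit of x as a function of y; numpy's polyfit is one convenient way to do this, not necessarily
the paper's implementation, and the point values below are invented.

import numpy as np

# Edge points labeled as lane marker points (illustrative pixel data).
xs = np.array([310.0, 312.5, 316.0, 321.0, 327.5])
ys = np.array([200.0, 260.0, 320.0, 380.0, 440.0])

# Least-squares fit of Equation 1, x = a + b * y + c * y^2.
# np.polyfit returns coefficients highest power first, so reverse to (a, b, c).
c, b, a = np.polyfit(ys, xs, deg=2)
print(f"a={a:.3f}, b={b:.3f}, c={c:.3e}")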