Driverless Car
Abstract:
This paper addresses the problem of autonomous navigation of a car-like robot evolving in an urban environment. Such an environment exhibits a heterogeneous geometry and is cluttered with moving obstacles. Furthermore, in this context, motion safety is a critical issue. The proposed approach to the problem lies in the design of perception and planning modules that explicitly consider the dynamic nature of the vehicle and the environment while enforcing the safety constraint. The main contributions of this work are the development of such modules and their integration into a single application. Initial full-scale experiments validating the approach are presented.
Introduction
In many urban environments, private automobile use has led to severe problems with respect to congestion, pollution and safety. Industrialised countries have put a large effort into developing new types of transportation systems, the Cybercars, as an answer to this problem (Parent, 1997). Cybercars are city vehicles with fully automated driving capabilities. Such autonomous systems cannot be realised without several capabilities designed to work together in a single application: to navigate safely, the system has to model the environment while localising itself in it, plan its trajectory to the goal and finally execute it. The problem of designing and integrating such capabilities, while accounting for the various constraints of such an application, remains largely open and lies at the heart of the work presented in this paper.
Autonomy in general, and motion autonomy in particular, has been a longstanding issue in robotics. Several architectures have been proposed; they differ mainly in the context in which, and the platform on which, the task is to be performed. Firstly, the environment imposes its own constraints. Within an urban environment, the dynamic nature of the scene (pedestrians, other cars, etc.) imposes a real-time constraint on the navigation scheme: the system only has a bounded amount of time to take a decision. In particular, a robot placed in a dynamic environment cannot simply remain still, otherwise it might be hit by a moving obstacle. Besides, in a dynamic environment the future motion of the moving obstacles is usually not known in advance and has to be predicted. Since the urban environment is only partially predictable, a valid prediction can be provided over a limited time horizon. Secondly, a complex system such as a car-like robot is constrained by its (non-holonomic) kinematics as well as its dynamics. The intent of our work is to account explicitly for these different constraints in order to move the robot safely to its goal.
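The non-holonomic constraint mentioned above can be made concrete with the kinematic bicycle model, a standard abstraction for car-like vehicles. The sketch below is purely illustrative and is not the vehicle model used in the paper; the state fields, wheelbase value and time step are assumptions.

import math
from dataclasses import dataclass

# Kinematic bicycle model: the robot can only move along its heading,
# which is the essence of the non-holonomic constraint of a car-like robot.
# Illustrative sketch only; parameter values are assumed, not from the paper.
@dataclass
class CarState:
    x: float       # position [m]
    y: float       # position [m]
    theta: float   # heading [rad]
    v: float       # longitudinal speed [m/s]

def step(state: CarState, accel: float, steering: float,
         wheelbase: float = 2.5, dt: float = 0.1) -> CarState:
    """Advance the state by dt under the bicycle kinematics."""
    return CarState(
        x=state.x + state.v * math.cos(state.theta) * dt,
        y=state.y + state.v * math.sin(state.theta) * dt,
        theta=state.theta + state.v * math.tan(steering) / wheelbase * dt,
        v=state.v + accel * dt,
    )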
Perception in urban environments
Introduction
Perception is the process of transforming measurements of the world into an internal model. The kind of model (and the choice of the sensors) depends on the application. For autonomous navigation, the world model needs to integrate at least four elements: the target to attain, the positions of the static obstacles, the current and future positions of the moving obstacles, and the current state of the vehicle (position, speed, etc.).
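A minimal sketch of such a world model, with one field per element listed above, might look as follows; all names and types are illustrative, not taken from the paper.

from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical container for the four elements of the world model.
@dataclass
class MovingObstacle:
    position: Tuple[float, float]         # current position estimate [m]
    velocity: Tuple[float, float]         # current velocity estimate [m/s]
    predicted: List[Tuple[float, float]]  # positions over a short horizon

@dataclass
class WorldModel:
    goal: Tuple[float, float]               # target to attain
    static_map: List[Tuple[float, float]]   # positions of static obstacles
    moving_obstacles: List[MovingObstacle]  # current and future positions
    vehicle_state: Tuple[float, float, float, float]  # x, y, heading, speed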
Due to occlusions and a limited field of view, the robot cannot observe the entire world at each measurement. Integrating successive observations into a consistent map of the obstacles ahead is required for effective planning. It is well known that there exists a duality between building consistent maps and localising the robot; this duality has been extensively studied as the SLAM problem (Thrun, to appear).
Unfortunately, most work on SLAM assumes a static environment. The presence of moving obstacles contaminates the map and perturbs the data association between two observations. For planning purposes, the moving obstacles must be explicitly identified and their current state estimated in order to predict their future positions.
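As a simple illustration of what predicting the future position over a limited horizon can mean, the sketch below extrapolates a tracked obstacle with a constant-velocity model; the paper does not state that this particular predictor is used, and the horizon and time step are assumptions.

from typing import List, Tuple

def predict_constant_velocity(position: Tuple[float, float],
                              velocity: Tuple[float, float],
                              horizon: float,
                              dt: float = 0.2) -> List[Tuple[float, float]]:
    """Extrapolate an obstacle's position over a limited time horizon
    assuming it keeps its current velocity (simplest possible predictor)."""
    steps = int(horizon / dt)
    px, py = position
    vx, vy = velocity
    return [(px + vx * k * dt, py + vy * k * dt) for k in range(1, steps + 1)]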
For autonomous navigation, then, the robot must at the very least solve the Simultaneous Localisation, Mapping and Moving Objects Tracking (SLAMMOT) problem (Wang, 2004). In the following paragraphs, we propose a solution to this problem and then discuss the additional considerations required when integrating perception and planning.
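A rough structural sketch of one SLAMMOT iteration is given below, only to show how localisation, mapping and moving-object tracking interlock; every helper is a placeholder and none of this is the paper's actual algorithm.

from typing import List, Tuple

Point = Tuple[float, float]

def slammot_step(scan: List[Point],
                 static_map: List[Point],
                 tracks: List[Point],
                 pose: Tuple[float, float, float]):
    """One hypothetical SLAMMOT iteration (structure only)."""
    # 1. localisation: refine the pose by matching the scan to the map
    pose = match_scan_to_map(scan, static_map, pose)
    # 2. segmentation: points inconsistent with the map are likely dynamic
    static_pts = [p for p in scan if is_consistent(p, static_map)]
    dynamic_pts = [p for p in scan if not is_consistent(p, static_map)]
    # 3. mapping: only static points update the map
    static_map = static_map + static_pts
    # 4. moving-object tracking: dynamic points feed the trackers
    tracks = tracks + dynamic_pts
    return pose, static_map, tracks

def match_scan_to_map(scan, static_map, pose):
    return pose  # placeholder for scan matching (see next section)

def is_consistent(point, static_map, radius: float = 0.5):
    # placeholder: a point is treated as static if it falls near a mapped point
    return any(abs(point[0] - q[0]) + abs(point[1] - q[1]) < radius
               for q in static_map)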
Laser scan data association
Laser scan data association (so-called ‘scan matching’) can be used both to estimate small displacements between two measurements and to recognise a revisited place.
The classic method for scan matching (both in 2D and 3D) is the Iterative Closest Point (ICP) algorithm (Zhang, 1992). This iterative method is very straightforward but suffers from slow convergence and a small region of attraction. This is why many variants have been proposed (Rusinkiewicz and Levoy, 2001), changing the point-to-point association method or the optimisation metric (Minguez et al., 2005).
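For reference, a minimal 2D point-to-point ICP looks like the sketch below; real implementations add outlier rejection, kd-tree nearest-neighbour search and convergence tests, which is precisely where the cited variants differ.

import numpy as np

def icp(source: np.ndarray, target: np.ndarray, iterations: int = 20):
    """Minimal 2D point-to-point ICP sketch (source, target: N x 2, M x 2)."""
    src = source.copy()
    R_total, t_total = np.eye(2), np.zeros(2)
    for _ in range(iterations):
        # 1. closest-point association (brute force here)
        dists = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        matched = target[dists.argmin(axis=1)]
        # 2. best rigid transform between the matched sets (SVD / Kabsch)
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (matched - mu_t))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:   # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        # 3. apply the increment and accumulate the total transform
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total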
Recently, a new approach has been proposed (Granger and Pennec, 2002; Zhang and Hall-Holt, 2004). Instead of matching two clouds of points, a cloud of points is matched against a probability distribution indicating the probable presence of an object at each point of space. This approach has the advantages of allowing the sensor error to be modelled, of avoiding the expensive closest-point search, and of providing more robust results with faster convergence.
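The idea can be illustrated by scoring a scan against a mixture of Gaussians built from a reference map, so that the discrete closest-point search is replaced by a smooth likelihood. The sketch below only shows the scoring step, not the full optimisation used by the cited methods; the isotropic mixture and the value of sigma are assumptions.

import numpy as np

def scan_log_likelihood(scan: np.ndarray, reference: np.ndarray,
                        sigma: float = 0.2) -> float:
    """Score a 2D scan (N x 2) against a probabilistic model of space built
    from a reference map (M x 2): one isotropic Gaussian per reference point,
    with sigma modelling the sensor error."""
    # squared distances from every scan point to every reference point
    d2 = ((scan[:, None, :] - reference[None, :, :]) ** 2).sum(axis=2)
    # mixture likelihood of each scan point (unnormalised weights)
    point_lik = np.exp(-d2 / (2.0 * sigma ** 2)).mean(axis=1)
    return float(np.log(point_lik + 1e-12).sum())

# The displacement estimate is then the transform maximising this score,
# e.g. found by a gradient-based or Newton optimiser over (x, y, theta).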