ABSTRACT
Google's dramatic ascent and subsequent domination of the technology and
information industries over the past fifteen years has financially enabled Google to
explore seemingly unrelated projects ranging from Google Mail to the Google Car.
In particular, Google has invested a significant amount of resources in the Google
Car, an integrated system that allows for the driverless operation of a vehicle.
While initial reports indicate that the Google Car driverless automobile will be
safer and more efficient than current vehicles, the Google Car is not without its
critics. In particular, the existential threat that the car presents to several large
industries, including the insurance, health care, and construction industries, creates
an additional challenge to the success of the Google Car well beyond the standard
competitive threats from other established car manufacturers in the automobile
industry. This raises the question: can the Google Car be successful? With so many
challenges above and beyond the competitive forces typically threatening long-term
profitability, will the Google Car be able to create and sustain a competitive
advantage for Google in the driverless car space?
Introduction
The inventions of the integrated circuit and, later, the microcomputer were
major factors in the development of electronic control in automobiles. The importance
of the microcomputer cannot be overemphasized, as it is the brain that
controls many systems in today's cars. For example, in a cruise control system, the
driver sets the desired speed and enables the system by pushing a button. A microcomputer
then monitors the actual speed of the vehicle using data from velocity
sensors. The actual speed is compared to the desired speed, and the controller
adjusts the throttle as necessary.
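That feedback loop can be sketched as a simple proportional controller. The code below is a toy illustration only: the gain, the km/h units, and the normalized 0-1 throttle are assumptions for the example, not the algorithm of any production system.

```python
# Minimal sketch of a cruise-control feedback step (illustrative gain and units).

def cruise_control_step(desired_speed, actual_speed, throttle, kp=0.05):
    """One controller iteration: nudge the throttle toward the set speed."""
    error = desired_speed - actual_speed   # speed error, km/h
    throttle = throttle + kp * error       # proportional correction
    return max(0.0, min(1.0, throttle))    # clamp to the valid 0-1 throttle range

# Car at 90 km/h with a 100 km/h set speed: the throttle opens from 0.3 to 0.8.
print(cruise_control_step(100.0, 90.0, throttle=0.3))
```

Real cruise controllers add integral and derivative terms plus rate limits, but the compare-and-adjust structure is the same one described above.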
A completely autonomous vehicle is one in which a computer performs all the
tasks that the human driver normally would. Ultimately, this would mean getting
in a car, entering the destination into a computer, and enabling the system.
From there, the car would take over and drive to the destination with no human
input. The car would be able to sense its environment and make steering and
speed changes as necessary. This scenario would require all of the automotive
technologies mentioned above: lane detection to aid in passing slower vehicles or
exiting a highway; obstacle detection to locate other cars, pedestrians, animals,
etc.; adaptive cruise control to maintain a safe speed; collision avoidance to avoid
hitting obstacles in the roadway; and lateral control to maintain the car's position
on the roadway.
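Taken together, these subsystems form a sense-plan-act loop. The sketch below is purely illustrative; every function name, gain, and unit is an assumption made for the example, not anything from Google's actual system.

```python
# Hypothetical sketch of two planning subsystems named above (toy units throughout).

def plan_speed(set_speed, obstacle_distance_m, safe_gap_m=30.0):
    """Adaptive cruise control: back off proportionally inside the safe gap."""
    if obstacle_distance_m < safe_gap_m:
        return set_speed * obstacle_distance_m / safe_gap_m
    return set_speed

def plan_steering(lane_offset_m, gain=0.5):
    """Lateral control: steer back toward the lane centre (negative offset = left)."""
    return -gain * lane_offset_m  # toy steering command

# Obstacle 15 m ahead with a 100 km/h set speed: commanded speed drops to 50 km/h.
print(plan_speed(100.0, 15.0))
# Drifting 0.4 m right of centre produces a small corrective steer to the left.
print(plan_steering(0.4))
```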
In addition, sensors would be needed to alert the car to road or weather conditions
to ensure safe traveling speeds. For example, the car would need to slow
down in snowy or icy conditions. We perform many tasks while driving without
even thinking about them. Completely automating the car is a challenging task and
is a long way off. However, advances have been made in the individual systems.
Google's robotic car is a fully autonomous vehicle equipped with radar and
LIDAR; as such, it can take in much more information, process it much more
quickly and reliably, make a correct decision about a complex situation, and then
implement that decision far better than a human can.
The Google car system combines information gathered from Google Street View
with artificial intelligence software that fuses input from video cameras inside
the car, a LIDAR sensor on top of the vehicle, radar sensors on the front of the
vehicle, and a position sensor attached to one of the rear wheels that helps locate
the car's position on the map. As of 2010, Google had tested several vehicles
equipped with the system, driving 140,000 miles (230,000 km) without any human
intervention; the only accident occurred when one of the cars was rear-ended
while stopped at a red light. Google anticipates that the increased accuracy of its
automated driving system could help reduce the number of traffic-related injuries
and deaths, while using energy and space on roadways more efficiently.
The combination of these technologies with other systems, such as video-based
lane analysis, steering and brake actuation systems, and the programs necessary
to control all of the components, will become a fully autonomous system. The
problem is winning people's trust in letting a computer drive a vehicle for them;
because of this, research and testing must be repeated over and over
again to assure a near fool-proof final product. The product will not be accepted
instantly, but over time, as the systems become more widely used, people will
realize their benefits.
CONTROL UNIT:
3.1 HARDWARE SENSORS
3.1.1 Radar:
Radar is an object-detection system that uses electromagnetic waves, specifically
radio waves, to determine the range, altitude, direction, or speed of both
moving and fixed objects such as aircraft, ships, spacecraft, guided missiles, motor
vehicles, weather formations, and terrain.
The radar dish, or antenna, transmits pulses of radio waves or microwaves
which bounce off any object in their path. The object returns a tiny part of
the wave’s energy to a dish or antenna which is usually located at the same site
as the transmitter. The modern uses of radar are highly diverse, including air
traffic control, radar astronomy, air-defense systems, antimissile systems; nautical
radars to locate landmarks and other ships; aircraft anti collision systems; oceansurveillance
systems, outer-space surveillance and rendezvous systems; meteorological
precipitation monitoring; altimetry and flight-control systems; guided-missile
target-locating systems; and ground-penetrating radar for geological observations.
High-tech radar systems are associated with digital signal processing and are capable
of extracting objects from very high noise levels.
A radar system has a transmitter that emits radio waves, called radar signals,
in predetermined directions. When these come into contact with an object, they are
usually reflected and/or scattered in many directions. Radar signals are reflected
especially well by materials of considerable electrical conductivity, especially by
most metals, by seawater, and by wet ground. Some of these reflections make
the use of radar altimeters possible. The radar signals that are reflected back
towards the transmitter are the desirable ones that make radar work. If the object
is moving either closer or farther away, there is a slight change in the frequency of
the radio waves, due to the Doppler effect.
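Both quantities described here, range from the echo delay and radial speed from the Doppler shift, reduce to one-line formulas. The sketch below is illustrative; the 77 GHz carrier in the example is a common automotive radar band, assumed here for concreteness.

```python
# Illustrative radar arithmetic: range from echo delay, speed from Doppler shift.
C = 299_792_458.0  # speed of light, m/s

def range_from_echo(delay_s):
    """Target range from the round-trip echo delay (the pulse travels out and back)."""
    return C * delay_s / 2.0

def radial_speed_from_doppler(f_shift_hz, f_carrier_hz):
    """Radial target speed from the Doppler shift of the reflected signal.
    For a reflecting target the shift is approximately 2 * v * f / c."""
    return f_shift_hz * C / (2.0 * f_carrier_hz)

# An echo arriving 1 microsecond after transmission puts the target ~150 m away.
print(range_from_echo(1e-6))
# A 2 kHz Doppler shift on an assumed 77 GHz carrier corresponds to ~3.9 m/s.
print(radial_speed_from_doppler(2000.0, 77e9))
```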
Radar receivers are usually, but not always, in the same location as the transmitter.
Although the reflected radar signals captured by the receiving antenna are
usually very weak, these signals can be strengthened by the electronic amplifiers
that all radar sets contain. More sophisticated methods of signal processing are
also nearly always used in order to recover useful radar signals.
The weak absorption of radio waves by the medium through which they pass is
what enables radar sets to detect objects at relatively long ranges, at which other
electromagnetic wavelengths, such as visible light, infrared light, and ultraviolet
light, are too strongly attenuated. Such things as fog, clouds, rain, falling snow,
and sleet that block visible light are usually transparent to radio waves. Certain,
specific radio frequencies that are absorbed or scattered by water vapor, raindrops,
or atmospheric gases (especially oxygen) are avoided in designing radars except
when detection of these is intended.
Finally, radar relies on its own transmissions, rather than light from the Sun
or the Moon, or from electromagnetic waves emitted by the objects themselves,
such as infrared wavelengths (heat). This process of directing artificial radio waves
towards objects is called illumination, regardless of the fact that radio waves are
completely invisible to the human eye and to cameras.
3.1.2 Lidar
LIDAR (Light Detection And Ranging, also LADAR) is an optical remote sensing
technology that can measure the distance to, or other properties of, a target
by illuminating the target with light, often using pulses from a laser. LIDAR technology has applications in geomatics, archaeology, geography, geology, geomorphology,
seismology, forestry, remote sensing, and atmospheric physics, as well
as in airborne laser swath mapping (ALSM), laser altimetry, and LIDAR contour
mapping. The acronym LADAR (Laser Detection and Ranging) is often used in
military contexts. The term "laser radar" is sometimes used even though LIDAR
does not employ microwaves or radio waves and is therefore not, strictly speaking,
related to radar.
LIDAR uses ultraviolet, visible, or near-infrared light to image objects and
can be used with a wide range of targets, including non-metallic objects, rocks,
rain, chemical compounds, aerosols, clouds, and even single molecules. A narrow
laser beam can be used to map physical features with very high resolution. LIDAR
has been used extensively for atmospheric research and meteorology.
In addition, LIDAR has been identified by NASA as a
key technology for enabling autonomous precision safe landing of future robotic and
crewed lunar landing vehicles. Wavelengths in a range from about 10 micrometers
to the UV (ca. 250 nm) are used to suit the target. Typically light is reflected via
backscattering. There are several major components to a LIDAR system:
1. Laser: 600-1000 nm lasers are most common for non-scientific applications.
They are inexpensive, but since they can be focused and easily absorbed by the eye,
the maximum power is limited by the need to make them eye-safe; eye-safety is
a requirement for most applications. A common alternative, 1550 nm lasers,
are eye-safe at much higher power levels since this wavelength is not focused by
the eye, but the detector technology is less advanced, so these wavelengths
are generally used at longer ranges and lower accuracies. They are also used for
military applications, as 1550 nm is not visible in night vision goggles, unlike the
shorter 1000 nm infrared laser. Airborne topographic mapping lidars generally
use 1064 nm diode-pumped YAG lasers, while bathymetric systems generally use
532 nm frequency-doubled diode-pumped YAG lasers, because 532 nm penetrates
water with much less attenuation than does 1064 nm.
2. Scanner and optics: how fast images can be developed is also affected by the
speed at which the scene can be scanned into the system. There are several options for
scanning the azimuth and elevation, including dual oscillating plane mirrors, a combination
with a polygon mirror, and a dual-axis scanner. Optic choices affect the angular
resolution and the range that can be detected. A hole mirror or a beam splitter are
options for collecting a return signal.
3. Photodetector and receiver electronics: two main photodetector technologies
are used in lidars: solid-state photodetectors, such as silicon avalanche photodiodes,
and photomultipliers. The sensitivity of the receiver is another parameter
that has to be balanced in a LIDAR design.
4. Position and navigation systems: LIDAR sensors that are mounted on mobile
platforms such as airplanes or satellites require instrumentation to determine the
absolute position and orientation of the sensor. Such devices generally include a
Global Positioning System receiver and an Inertial Measurement Unit (IMU). 3D
imaging can be achieved using both scanning and non-scanning systems. "3D
gated viewing laser radar" is a non-scanning laser ranging system that applies a
pulsed laser and a fast gated camera.
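Once the scanner supplies an azimuth and elevation for each laser return, a 3D point can be recovered from the measured range by a standard spherical-to-Cartesian conversion. The sketch below is a generic illustration, not code from any particular LIDAR driver.

```python
import math

def lidar_point_to_xyz(range_m, azimuth_deg, elevation_deg):
    """Convert one scanned LIDAR return (range, azimuth, elevation) to x, y, z."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)  # forward
    y = range_m * math.cos(el) * math.sin(az)  # left/right
    z = range_m * math.sin(el)                 # up/down
    return x, y, z

# A 10 m return straight ahead at zero elevation lies 10 m along the x axis.
print(lidar_point_to_xyz(10.0, 0.0, 0.0))
```

Repeating this conversion over a full sweep of azimuths and elevations is what produces the point cloud a driverless car uses to perceive its surroundings.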
3.1.3 Global Positioning System
The Global Positioning System (GPS) is a space-based global navigation satellite
system (GNSS) that provides location and time information in all weather,
anywhere on or near the Earth where there is an unobstructed line of sight to four
or more GPS satellites. A GPS receiver calculates its position by precisely timing
the signals sent by GPS satellites high above the Earth.
Each satellite continually transmits messages that include:
1) The time the message was transmitted
2) Precise orbital information (the ephemeris)
3) The general system health and rough orbits of all GPS satellites
The receiver uses the messages it receives to determine the transit time of each
message and computes the distance to each satellite. These distances, along with
the satellites' locations, are used, possibly with the aid of trilateration depending
on which algorithm is used, to compute the position of the receiver. This position
is then displayed, perhaps on a moving map display or as latitude and longitude;
elevation information may be included. Many GPS units show derived information
such as direction and speed, calculated from position changes. Three satellites
might seem enough to solve for position, since space has three dimensions and a
position near the Earth's surface can be assumed. However, even a very small
clock error, multiplied by the very large speed of light (the speed at which satellite
signals propagate), results in a large positional error. Therefore, receivers use four
or more satellites to solve for the receiver's location and time. The very accurately
computed time is effectively hidden by most GPS applications, which use only
the location. A few specialized GPS applications do, however, use the time; these
include time transfer, traffic signal timing, and synchronization of cell phone base
stations.
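The clock-error argument is easy to quantify: a measured distance is just transit time multiplied by the speed of light, so even a microsecond-scale receiver clock bias translates into hundreds of metres of range error. A minimal sketch with illustrative numbers:

```python
# Why a tiny clock error matters in GPS (illustrative numbers).
C = 299_792_458.0  # speed of light, m/s

def distance_from_transit(t_transmit_s, t_receive_s):
    """Apparent satellite-to-receiver distance from the signal transit time."""
    return C * (t_receive_s - t_transmit_s)

def range_error_from_clock_bias(bias_s):
    """Range error produced by a receiver clock bias of bias_s seconds."""
    return C * bias_s

# A typical transit time of ~70 ms corresponds to roughly 21,000 km of range.
print(distance_from_transit(0.0, 0.07))
# A mere 1-microsecond clock error already means ~300 m of range error,
# which is why a fourth satellite is used to solve for the clock bias as well.
print(range_error_from_clock_bias(1e-6))
```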
3.1.4 Position sensor
A position sensor is any device that permits position measurement Here we use
a rotator encoder also called a shaft encoder, is an electro-mechanical device that
converts the angular position or motion of a shaft or axle to an analog or digital
code. The output of incremental encoders provides information about the motion
of the shaft which is typically further processed elsewhere into information such
as speed, distance, RPM and position.
The output of absolute encoders indicates the current position of the shaft,
making them angle transducers. Rotary encoders are used in many applications
that require precise shaft unlimited rotationincluding industrial controls, robotics,
special purpose photographic lenses, computer input devices (such as up to mechanical
mice and trackballs), and rotating radar platforms.
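The incremental-encoder output described above can be turned into angle and speed with simple arithmetic. A minimal sketch, assuming a hypothetical 1024-pulse-per-revolution encoder:

```python
# Decoding an incremental rotary encoder (hypothetical resolution).
PULSES_PER_REV = 1024  # encoder resolution, pulses per full shaft revolution

def shaft_angle_deg(pulse_count):
    """Shaft angle implied by an incremental pulse count."""
    return 360.0 * pulse_count / PULSES_PER_REV

def shaft_rpm(pulses_in_window, window_s):
    """Rotation speed from pulses counted over a sampling window."""
    revs = pulses_in_window / PULSES_PER_REV
    return 60.0 * revs / window_s

print(shaft_angle_deg(256))   # a quarter turn: 90.0 degrees
print(shaft_rpm(1024, 0.5))   # one revolution in half a second: 120.0 RPM
```

On the Google car, a sensor of this kind on a rear wheel turns wheel rotation into travelled distance, which helps fix the car's position on the map between GPS updates.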
3.1.5 Cameras
Google has used three types of car-mounted cameras to take Street
View photographs. Generations 1-3 were used to take photographs in the United
States. The first generation was quickly superseded, and its images were replaced with
images taken with 2nd- and 3rd-generation cameras. Second-generation cameras
were also used to take photographs in Australia.