What is Google Glass?
1. INTRODUCTION
1.1 What is Google Glass?
Project Glass is a research and development program by Google to develop an augmented reality head-mounted display (HMD). The intended purpose of Project Glass products is the hands-free display of information currently available to most smartphone users, and interaction with the Internet via natural-language voice commands, in a manner that has been compared to the iPhone feature Siri.
The functionality and physical appearance (a minimalist aluminium strip with two nose pads) have been compared to Steve Mann's EyeTap, which was also referred to as "Glass" ("EyeTap Digital Eye Glass", i.e. use of the word "Glass" in the singular rather than the plural "Glasses"). The operating system used in the glasses is Google's Android.
Project Glass is part of the Google X Lab at the company, which has worked on other futuristic technologies such as a self-driving car. The project was announced on Google+ by Babak Parviz, an electrical engineer who has also worked on putting displays into contact lenses; Steve Lee, a project manager and "geo-location specialist"; and Sebastian Thrun, who developed Udacity and worked on the self-driving car project. Google has patented the design of Project Glass.
1.2 PRESENT SYSTEM
While Google's augmented-reality glasses are receiving immense attention and scrutiny, they are certainly not the first eyewear to include an integrated display. The following headsets are doing their best to entice us into a world of integrated-display eyewear.
1.2.1 Recon Mod Live Alpine Goggles
Recon has been in the head-up display (HUD) game since 2010. The company’s first product, the Transcend, was a partnership with Zeal Optics to bring a HUD to the eyes of skiers and snowboarders. The HUD goggles use a rider’s GPS location to display elevation, speed, and time of day in a small screen that sits at the bottom-right of the user’s field of vision — and it’s all in real time. All the cumulative data from a day on the slopes can be downloaded to a computer, and the GPS information can be associated with interactive maps so users can chart their speeds against location.
Recon's current MOD ($300) and MOD Live ($400) products augment the original Transcend goggles with features that include jump analytics, buddy tracking, music playback, and navigation.
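As a rough illustration of how a speed readout can be derived from successive GPS fixes the way such goggles do, here is a minimal Python sketch. The haversine formula, the coordinates, and the one-second sample interval are assumptions made for the example; this is not Recon's actual code.

# Illustrative sketch: derive speed from two consecutive GPS fixes.
# Coordinates and the sample interval below are invented examples.
import math

R_EARTH = 6371000.0  # mean Earth radius in metres

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon fixes."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R_EARTH * math.asin(math.sqrt(a))

def speed_mps(fix_a, fix_b, dt_seconds):
    """Average speed between two fixes taken dt_seconds apart."""
    return haversine(*fix_a, *fix_b) / dt_seconds

print(speed_mps((46.0000, 7.0000), (46.0001, 7.0000), 1.0))  # ~11 m/s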
1.2.2 Brother AiRScouter
The glasses employ a monocular (single display) design with a translucent LCD that sits in front of the wearer’s left eye. Brother says the resulting image is the equivalent of looking at a 16-inch monitor that’s one meter away. As a factory worker is operating machinery, the AiRScouter can overlay workflow instructions in real-time.
In addition to helping employees build and maintain products, the system can be used for communication. With an optional camera and audio attachment, the wearer can transmit video back to support center staff — which can then direct the machine operator on better ways to fix problems in real-time.
Everyone looks at the same work-in-progress, and from the same point of view. The support staff can even take screen shots of the transmitted video, annotate an image, and beam the image back to the wearer for clarification of how to fix an issue.
The entire system is pretty slick. Too bad it is currently available for purchase only by commercial entities in Japan. Like the Vuzix Star 1200, the AiRScouter needs to be plugged into a computer or smartphone to work.
1.3 PROPOSED SYSTEM
Google's Project Glass will make Google Goggles and other mobile apps more useful. Instead of using a smartphone to find information about an object, translate text, get directions, or compare prices, you can use smart glasses that augment reality and help you understand more about the things around you.
"We think technology should work for you—to be there when you need it and get out of your way when you don't. A group of us from Google[x] started Project Glass to build this kind of technology, one that helps you explore and share your world, putting you back in the moment," says Google.Google's concept glasses have a camera, a microphone and can connect to the Internet to send and receive data in real time. The interface is simple and it only shows relevant information.
2 PROTOTYPES
Though head-worn displays for augmented reality are not a new idea, the project has drawn media attention primarily due to its backing by Google, as well as to the prototype, which is smaller and slimmer than previous designs for head-mounted displays. The first Project Glass demo resembles a pair of normal eyeglasses in which the lens is replaced by a head-up display. In the future, new designs may allow integration of the display into people's normal eyewear.
Early reports said the glasses would be available to the public for "around the cost of current smartphones" by the end of 2012, but later reports stated that the glasses were not expected to be available for purchase that soon. The product (Google Glass Explorer Edition) will be available to United States Google I/O developers for $1,500, shipping in early 2013, while a consumer version is slated to be ready within a year of that.
The product began testing in April 2012. Sergey Brin wore a prototype set of the glasses to an April 5, 2012 Foundation Fighting Blindness event in San Francisco. On May 23, he demoed the glasses on The Gavin Newsom Show and let California Lieutenant Governor Gavin Newsom wear them. On June 27, Brin demoed the glasses at Google I/O, where skydivers, abseilers, and mountain bikers wore the glasses and live-streamed their points of view to a Google+ Hangout, which was also shown live at the Google I/O presentation.
2.1 CONCEPT OF GOOGLE GLASS
According to the patent paperwork, the glasses use a side-mounted touch-pad that allows users to control their various functions. The glasses will be able to display a wide range of views, depending on user needs and interests. One potential view is a real-time image on the see-through display of the glasses, the patent application states.
"Displaying the visual representation may include showing an image or graphic. However, the display may also allow a wearer to see through the image or graphic to provide the visual representation superimposed over or in conjunction with a real-world view as perceived by the wearer."
What is really fascinating about the patent application is that many of the ideas described include multiple ways of performing the same tasks, which shows just how much the Project Glass effort is still evolving.
One description details how the side-mounted touch-pad could be a physical or virtual component and that it could include a heads-up display on the glasses with lights that get brighter as the user's finger nears the proper touch-pad button.
On the heads-up display viewed by the user, the side-mounted touch-pad buttons would be represented as a series of dots so that users can operate them by feel, the application states. "The dots may be displayed in different colors. It should be understood that the symbols may appear in other shapes such as squares, rectangles, diamonds or other symbols."
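As a rough sketch of the brightening behaviour described above, the following Python fragment maps finger-to-button distance to dot brightness. The button layout, the coordinates, and the linear falloff are assumptions made for illustration; the patent does not specify them.

# Illustrative sketch: brighten each virtual touch-pad dot as the
# user's finger approaches it. Button positions and the falloff
# rule are assumptions for demonstration, not patent details.

BUTTONS = {"camera": (10.0, 5.0), "menu": (30.0, 5.0), "back": (50.0, 5.0)}
MAX_BRIGHTNESS = 255

def brightness(finger, button, radius=20.0):
    """Return 0-255 brightness that rises as the finger nears the button."""
    dx, dy = finger[0] - button[0], finger[1] - button[1]
    dist = (dx * dx + dy * dy) ** 0.5
    return round(MAX_BRIGHTNESS * max(0.0, 1.0 - dist / radius))

def render(finger):
    """Compute the brightness of every dot for the current finger position."""
    return {name: brightness(finger, pos) for name, pos in BUTTONS.items()}

print(render((12.0, 6.0)))  # the "camera" dot lights up brightest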
Also described in the patent application are potential uses of a microphone, a camera, a keyboard, and a touch-pad, either one at a time or together. The device could even include capabilities to understand and show just what the user wants to see: "In the absence of an explicit instruction to display certain content, the exemplary system may intelligently and automatically determine content for the multimode input field that is believed to be desired by the wearer," according to the application.
"For example, a person's name may be detected in speech during a wearer's conversation with a friend, and, if available, the contact information for this person may be displayed in the multimode input field," the application states.
Another possibility is that the glasses "may detect a data pattern in incoming audio data that is characteristic of car engine noise (and possibly characteristic of a particular type of car, such as the type of car owned or registered to the wearer)," the application states. That information could be interpreted by the device "as an indication that the wearer is in a car and responsively launch a navigation system or mapping application in the multimode input field."
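Both examples amount to rule-based dispatch from a detected context event to content for the multimode input field. The sketch below illustrates that idea in Python; the event names, the handlers, and the contacts table are all invented for the example and are not taken from the patent.

# Hypothetical sketch of the "intelligent and automatic" content
# selection described in the application. Event names, handlers,
# and the contacts lookup are invented for illustration.

CONTACTS = {"alice": "alice@example.com"}

def on_name_heard(name):
    """A name was detected in speech: show contact info if we have it."""
    info = CONTACTS.get(name.lower())
    return f"Contact: {name} <{info}>" if info else None

def on_engine_noise(_detail):
    """Engine-noise pattern detected: the wearer appears to be in a car."""
    return "Launching navigation..."

HANDLERS = {"name_in_speech": on_name_heard, "car_engine_noise": on_engine_noise}

def multimode_content(event, detail):
    """Map a detected context event to content for the input field."""
    handler = HANDLERS.get(event)
    return handler(detail) if handler else None

print(multimode_content("name_in_speech", "Alice"))
print(multimode_content("car_engine_noise", None))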
While early versions of Google Glass mount the controls and hardware on the right side of the glasses within the range of the wearer's right eye, other possible configurations are included in the patent application.
3 AUGMENTED REALITY
Augmented reality (AR) is the concept used in Google’s project glass. It is a live, direct or indirect, view of a physical, real-world environment whose elements are augmented by computer-generated sensory input such as sound, video, graphics or GPS data. It is related to a more general concept called mediated reality, in which a view of reality is modified (possibly even diminished rather than augmented), by a computer. As a result, the technology functions by enhancing one’s current perception of reality. By contrast, virtual reality replaces the real world with a simulated one. Augmentation is conventionally in real-time and in semantic context with environmental elements, such as sports scores on TV during a match. With the help of advanced AR technology (e.g. adding computer vision and object recognition) the information about the surrounding real world of the user becomes interactive and digitally manipulable. Artificial information about the environment and its objects can be overlaid on the real world. The term augmented reality is believed to have been coined in 1990 by Thomas Caudell, working at Boeing.
Research explores the application of computer-generated imagery in live-video streams as a way to enhance the perception of the real world. AR technology includes head-mounted displays and virtual retinal displays for visualization purposes, and construction of controlled environments containing sensors and actuators.
A key measure of AR systems is how realistically they integrate augmentations with the real world. The software must derive real world coordinates, independent from the camera, from camera images. That process is called image registration and is part of Azuma's definition of augmented reality.
Image registration uses different methods of computer vision, mostly related to video tracking. Many computer-vision methods for augmented reality are inherited from visual odometry. Usually these methods consist of two stages. The first stage detects interest points, fiducial markers, or optical flow in the camera images, using feature-detection methods such as corner detection, blob detection, edge detection, or thresholding, among other image-processing methods.
The second stage restores a real-world coordinate system from the data obtained in the first stage. Some methods assume objects with known geometry (or fiducial markers) are present in the scene; in some of those cases the scene's 3D structure should be computed in advance. If part of the scene is unknown, simultaneous localization and mapping (SLAM) can map relative positions. If no information about scene geometry is available, structure-from-motion methods such as bundle adjustment are used. Mathematical methods used in the second stage include projective (epipolar) geometry, geometric algebra, rotation representation with the exponential map, Kalman and particle filters, nonlinear optimization, and robust statistics.
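To make the two stages concrete, here is a minimal Python sketch using OpenCV: detect the corners of a known fiducial (here a chessboard of known square size), then recover the camera's rotation and translation with a perspective-n-point solve. The chessboard, its dimensions, and the camera intrinsics are assumptions chosen for the example, not details from the text.

# Minimal two-stage registration sketch with OpenCV.
# Assumptions: a 9x6 chessboard of known square size acts as the
# fiducial marker, and the camera intrinsics K are pre-calibrated.
import numpy as np
import cv2

PATTERN = (9, 6)          # inner corners of the chessboard
SQUARE = 0.025            # square size in metres
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # example intrinsics
DIST = np.zeros(5)        # assume negligible lens distortion

# Known 3D geometry of the marker in its own coordinate frame.
obj_pts = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
obj_pts[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

def register(frame_gray):
    """Return the camera pose relative to the marker, or None if not found."""
    # Stage 1: interest-point detection (chessboard corners).
    found, corners = cv2.findChessboardCorners(frame_gray, PATTERN)
    if not found:
        return None
    # Stage 2: restore the real-world coordinate system (PnP solve).
    ok, rvec, tvec = cv2.solvePnP(obj_pts, corners, K, DIST)
    return (rvec, tvec) if ok else None  # rotation (Rodrigues) + translation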
3.1 HEAD MOUNTED DISPLAY
A head-mounted display (HMD) places images of both the physical world and registered virtual graphical objects over the user's view of the world. HMDs are either optical see-through or video see-through. Optical see-through employs half-silvered mirrors to pass images through the lens and overlay information to be reflected into the user's eyes. The HMD must be tracked with a sensor that provides six degrees of freedom. This tracking allows the system to align virtual information to the physical world. The main advantage of HMD AR is the user's immersive experience: the graphical information is slaved to the view of the user. The most common products employed are the Microvision Nomad, Sony Glasstron, Vuzix,[8] Lumus, LASTER Technologies, and I/O Displays.
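As an illustration of that alignment step, the sketch below projects a virtual 3D point into HMD display pixels given a tracked six-degree-of-freedom pose. The pinhole camera model, the intrinsics, and the example pose values are assumptions for demonstration only.

# Sketch: align a virtual point with the physical world using a
# tracked 6-DoF head pose (rotation matrix R, position t). The
# pose and pinhole intrinsics here are made-up example values.
import numpy as np

def project(world_point, R, t, f=800.0, cx=320.0, cy=240.0):
    """Map a 3D world point into HMD display pixels (pinhole model)."""
    p_cam = R @ (np.asarray(world_point, float) - t)  # world -> head frame
    if p_cam[2] <= 0:                                  # behind the wearer
        return None
    u = f * p_cam[0] / p_cam[2] + cx
    v = f * p_cam[1] / p_cam[2] + cy
    return u, v

R = np.eye(3)     # head looking straight down the +Z axis
t = np.zeros(3)   # head at the world origin
print(project([0.1, 0.0, 2.0], R, t))  # a label 2 m ahead, slightly right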
3.2 HISTORY
HUDs evolved from the reflector sight, a pre-World War II parallax-free optical sight technology for military fighter aircraft. The first sight to add rudimentary information to the reflector sight was the gyro gunsight, which projected a reticle modified by airspeed and turn rate to aid in leading the guns to hit a moving target (deflection aiming). As these sights advanced, more (and more complex) information was added. HUDs soon displayed computed gunnery solutions using aircraft information such as airspeed and angle of attack, greatly increasing the accuracy pilots could achieve in air-to-air battles. An early example of what would now be termed a head-up display was the Projector System of the British AI Mk VIII air-interception radar fitted to some de Havilland Mosquito night fighters, where the radar display was projected onto the aircraft's windscreen along with the artificial horizon, allowing pilots to perform interceptions without taking their eyes from the windscreen.
In 1955 the US Navy's Office of Naval Research and Development did some research with a mock-up HUD concept unit, along with a side-stick controller, in an attempt to ease the pilot's burden in flying modern jet aircraft and make the instrumentation less complicated during flight. While their research was never incorporated into any aircraft of that time, the crude HUD mock-up they built had all the features of today's modern HUD units.
HUD technology was next advanced in the Buccaneer, the prototype of which first flew on 30 April 1958. The aircraft's design called for an attack sight that would provide navigation and weapon-release information for the low-level attack mode. There was fierce competition between supporters of the new HUD design and supporters of the old electro-mechanical gunsight, with the HUD being described as a radical, even foolhardy, option. The Air Arm branch of the Ministry sponsored the development of a Strike Sight. The Royal Aircraft Establishment (RAE) designed the equipment, it was built by Cintel, and the system was first integrated in 1958. The Cintel HUD business was taken over by Elliott Flight Automation, and the Buccaneer HUD was manufactured and further developed, continuing up to a Mark III version, with a total of 375 systems made; it was given a 'fit and forget' title by the Royal Navy and was still in service nearly 25 years later. BAE Systems thus has a claim to the world's first head-up display in operational service.
In the United Kingdom, it was soon noted that pilots flying with the new gunsights were becoming better at piloting their aircraft. At this point, the HUD expanded its purpose beyond weapon aiming to general piloting. In the 1960s, French test pilot Gilbert Klopfstein created the first modern HUD and a standardized system of HUD symbols so that pilots would only have to learn one system and could more easily transition between aircraft. The modern HUD used in instrument-flight-rules approaches to landing was developed in 1975. Klopfstein pioneered HUD technology in military fighter jets and helicopters, aiming to centralize critical flight data within the pilot's field of vision.
4 CONCLUSION
Google Glass is an eye-worn display computer coming out of the company's experimental unit, Google[x]. Announced last April, it was dropped into the conference in dramatic fashion: an extravagant demo hosted by Google co-founder Sergey Brin involved skydivers, stunt cyclists, and a death-defying Google+ Hangout. It quickly attained legendary status. Even before people got to sample Glass, it was popping their eyes out.
Google wouldn’t provide a date or product details for Glass’ eventual appearance as a consumer product — and in fact made it clear that the team was still figuring out the key details of what that product would be. But Google made waves by announcing that it would take orders for a $1,500 “explorer’s version,” sold only to I/O attendees and shipped sometime early next year. Hungry to get their hands on what seemed to be groundbreaking new technology, developers lined up to put their money down.
Project Glass is something that scientists at Google have worked on together for a bit more than two years now. It has gone through lots of prototypes, and fortunately they have arrived at something that sort of works right now. It is still a prototype, but they can do more experimentation with it. This could be a radically new technology that really enables people to do things they otherwise could not. There are two broad areas the company is looking at: one is to enable people to communicate with images in new and better ways; the second is very rapid access to information.
Right now it doesn’t have a cell radio; it has Wi-Fi and Bluetooth. If you’re outdoors or on the go, at least for the immediate future, if you would like to have data connection, you would need a phone. Eventually it’ll be a stand-alone product in its own right.
There is a pretty powerful processor and a lot of memory in the device. There is quite a bit of storage on board, so you can store images and video, or you can just live-stream them out. It has a see-through display, so it shows images and video if you like, and it is all self-contained. It has a camera that can capture photographs or video. It has a touchpad so the wearer can interact with the system, and it has a gyroscope, accelerometers, and a compass for making the system aware of location and direction. It has microphones for collecting sound, a small speaker for getting sound back to the person who is wearing it, and Wi-Fi, Bluetooth, and GPS.
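Those motion sensors are typically fused in software to track head orientation. The following Python sketch shows a textbook complementary filter for pitch, offered purely as a generic illustration; nothing in this report describes Glass's actual sensor-fusion algorithm.

# Generic complementary-filter sketch for pitch estimation from a
# gyroscope and an accelerometer. A textbook technique shown for
# illustration, not Glass's actual sensor-fusion code.
import math

def fuse_pitch(pitch, gyro_rate, accel, dt, alpha=0.98):
    """Blend the integrated gyro rate with the accelerometer's gravity cue."""
    ax, ay, az = accel
    accel_pitch = math.atan2(-ax, math.hypot(ay, az))  # pitch from gravity
    gyro_pitch = pitch + gyro_rate * dt                # integrate angular rate
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch

pitch = 0.0
for _ in range(100):  # fake, stationary samples at 100 Hz
    pitch = fuse_pitch(pitch, gyro_rate=0.0, accel=(0.0, 0.0, 9.81), dt=0.01)
print(round(pitch, 4))  # stays near 0 rad for a level, stationary device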
This is the configuration that will most likely ship to developers, but it is not 100 percent certain that this is the configuration that will ship to the broader consumer market. It is comparable to a pair of sunglasses: you could stack three of these on a scale and balance them against a single smartphone.
On the side of the device there’s a two-dimensional touchpad. We have a button that we typically use for taking pictures. There are microphones in the system, so you could have sound input to the system. Experiments are also going on with a time-lapse feature, which takes a photo every 10 seconds. It’s the perfect example of getting technology out of your way. It’ll be easier to initiate one of these live hangouts than placing a phone call today. The power of being able to share your view with other people is pretty incredible. Not just in extraordinary situations like the parachuting demo, but everyday situations like sharing moments with remote family members, or just having a richer experience in shopping where you could get feedback or advice from a spouse or partner or friend.
After testing this extensively in our lives, we discovered two things. One was about how we can communicate with the people we care about through images, so we can capture moments that otherwise we would not capture. The other involved search. Search is available here with audio input, so you can touch the device and say something, and get the response back. So literally you could touch the device and ask, "What's the capital of China?" and the response would just appear in front of your eye. It's a magical moment. You suddenly feel you're a lot more knowledgeable.
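The touch-then-ask flow described above can be sketched as a small speech pipeline. The version below uses the open-source SpeechRecognition package for Python purely as a stand-in; Glass's real software stack is not described here, and the hard-coded answer table replaces an actual search backend.

# Hypothetical touch-then-ask sketch. The SpeechRecognition package
# and the answer table are stand-ins, not Glass's actual stack.
import speech_recognition as sr

ANSWERS = {"what's the capital of china": "Beijing"}  # stand-in for web search

def on_touch():
    """Triggered when the wearer touches the device: listen, transcribe, answer."""
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:   # requires the PyAudio package
        audio = recognizer.listen(source)
    question = recognizer.recognize_google(audio).lower()  # speech -> text
    return ANSWERS.get(question, "No answer found")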
If a device like Glass is successful, it is definitely going to generate a lot more content, so tools to manage that content are incredibly important. Simple approaches can help a lot, like discarding blurry photos and detecting the photos that have people's faces or landscapes. Just by doing those basic things you can quickly reduce 1,000 photos down to 20 or 30.
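Two of those basic filters are straightforward to sketch with OpenCV: variance of the Laplacian as a blur score, and a stock Haar cascade for faces. The blur threshold and file names below are invented for the example and would need tuning on real data.

# Sketch of the basic culling described above: drop blurry shots
# and keep ones with faces. The threshold is a guess to be tuned.
import cv2

FACES = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def keep(path, blur_threshold=100.0):
    """Keep a photo only if it is sharp and contains at least one face."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        return False
    sharpness = cv2.Laplacian(img, cv2.CV_64F).var()  # variance of Laplacian
    if sharpness < blur_threshold:
        return False                                  # too blurry
    return len(FACES.detectMultiScale(img, 1.1, 5)) > 0

photos = ["img_001.jpg", "img_002.jpg"]               # hypothetical paths
print([p for p in photos if keep(p)])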
One of the goals is to improve people's lives in society, not to geek out with the most technology possible. But it is definitely true that something like this could go either way. A poor design could absolutely distract you and isolate you as a person. Good design actually keeps you more engaged in your activities in life, whether it is lunch with someone, riding your bike, or whatever activity you do.
It keeps people engaged with the physical world without making you feel that you are wearing technology: your eyes are pretty much open to the environment, your ears are open, your hands are free, but you can engage with the technology if you need to.
In 2013 the developer version will be shipped to the developer community and, hopefully, in less than a year after that, the consumer version will be released to the public. The developer version will be priced at $1,500, and the consumer version will be cheaper than that.