THE SIXTH SENSE
Abstract
'Sixth Sense' is a wearable gestural interface that augments the physical world around us with digital information and lets us use natural hand gestures to interact with that information.
I. INTRODUCTION
We've evolved over millions of years to sense the world around us. When we encounter something, someone or some place, we use our five natural senses to perceive information about it; that information helps us make decisions and choose the right actions to take. But arguably the most useful information for making the right decision is not naturally perceivable with our five senses: the data, information and knowledge that mankind has accumulated about everything, which is increasingly available online. Although the miniaturization of computing devices allows us to carry computers in our pockets, keeping us continually connected to the digital world, there is no link between our digital devices and our interactions with the physical world. Information remains confined to paper or, digitally, to a screen.
SixthSense bridges this gap, bringing intangible, digital information out into the tangible world, and allowing us to interact with this information via natural hand gestures. ‘Sixth Sense’ frees information from its confines by seamlessly integrating it with reality, and thus making the entire world your computer.
II. BIRTH OF SIXTH SENSE
Before looking at the origin of the 'Sixth Sense' technology, we must be aware of the first steps towards this invention. These were:
1. GESTURAL INTERFACE
2. PEN COMPUTING
3. TAPUMA: a digital, tangible public map
1. GESTURAL INTERFACE
Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. Gestures can originate from any bodily motion or state but commonly originate from the face or hand. Current focuses in the field include emotion recognition from the face and hand gesture recognition. Many approaches have been made using cameras and computer vision algorithms to interpret sign language. However, the identification and recognition of posture, gait, proxemics, and human behaviors is also the subject of gesture recognition techniques.
Gesture recognition can be seen as a way for computers to begin to understand human body language, thus building a richer bridge between machines and humans than primitive text user interfaces or even GUIs (graphical user interfaces), which still limit the majority of input to keyboard and mouse.
Gesture recognition enables humans to interface with the machine (HMI) and interact naturally without any mechanical devices. Using the concept of gesture recognition, it is possible to point a finger at the computer screen so that the cursor will move accordingly. This could potentially make conventional input devices such as the mouse, keyboard and even the touch-screen redundant.
Gesture recognition can be conducted with techniques from computer vision and image processing.
The literature includes ongoing work in the computer vision field on capturing gestures or more general human pose and movements by cameras connected to a computer.
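As a concrete illustration of this camera-based approach, here is a minimal sketch of color-marker fingertip tracking of the kind SixthSense itself relies on (it tracks colored caps worn on the fingertips). It assumes Python with OpenCV and a webcam; the HSV color range below is an illustrative placeholder that would need tuning for a real marker.

# Minimal sketch of camera-based fingertip tracking, in the spirit of
# SixthSense's colored fingertip markers. Assumes OpenCV (cv2) and a webcam;
# the HSV range below is a placeholder for a blue marker (tune as needed).
import cv2

LOWER_HSV = (100, 150, 50)
UPPER_HSV = (130, 255, 255)

cap = cv2.VideoCapture(0)            # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)   # isolate the marker color
    m = cv2.moments(mask)
    if m["m00"] > 0:                 # marker found: compute its centroid
        cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
        cv2.circle(frame, (cx, cy), 8, (0, 255, 0), 2)
        # (cx, cy) could now drive the cursor, e.g. via a GUI-automation library.
    cv2.imshow("fingertip", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()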
GESTURE RECOGNITION
In some literature, the term gesture recognition has been used to refer more narrowly to non-text-input handwriting symbols, such as inking on a graphics tablet, multi-touch gestures, and mouse gesture recognition. This is computer interaction through the drawing of symbols with a pointing device cursor.
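To make this narrower sense concrete, the sketch below interprets a pointer stroke as a symbol by reducing it to a string of direction tokens, the basis of many mouse-gesture schemes. The gesture table and function names are illustrative assumptions, not any particular product's API.

# Sketch of the "narrow" sense of gesture recognition: a stroke (a list of
# (x, y) points from a pointing device) is reduced to direction tokens such
# as 'R' or 'L', then matched against a table of known gestures.

def directions(stroke, min_dist=20):
    """Reduce a stroke to a token string like 'RL' (right, then left)."""
    tokens = []
    px, py = stroke[0]
    for x, y in stroke[1:]:
        dx, dy = x - px, y - py
        if abs(dx) < min_dist and abs(dy) < min_dist:
            continue                 # wait until the pointer has moved far enough
        tok = ("R" if dx > 0 else "L") if abs(dx) >= abs(dy) else ("D" if dy > 0 else "U")
        if not tokens or tokens[-1] != tok:   # collapse repeated tokens
            tokens.append(tok)
        px, py = x, y
    return "".join(tokens)

GESTURES = {"RL": "back", "LR": "forward", "DR": "close"}   # example mapping

stroke = [(0, 0), (60, 4), (120, 6), (60, 10), (0, 12)]     # right, then left
print(directions(stroke))                # -> "RL"
print(GESTURES.get(directions(stroke)))  # -> "back"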
2. PEN COMPUTING
Pen Computing has very deep historical roots. The depth of these roots can be quite surprising to people who are only familiar with current commercial products. For example, the first patent for an electronic tablet used for handwriting was granted in 1888. The first patent for a system that recognized handwritten characters by analyzing the handwriting motion was granted in 1915. The first publicly demonstrated system using a tablet and handwriting text recognition instead of a keyboard for working with a modern digital computer dates to 1956.
In addition to many academic and research systems, there were several companies with commercial products in the 1980s: Pencept, Communications Intelligence Corporation, and Linus were among the best known of a crowded field. Later, GO Corp. brought out the PenPoint OS operating system for a tablet PC product; one of GO Corporation's patents was the subject of a recent infringement lawsuit concerning the Tablet PC operating system.
Pen Computing refers to a computer user-interface using a pen (or stylus) and tablet, rather than devices such as a keyboard and a mouse.
Pen computing is also used to refer to the usage of mobile devices such as wireless tablet PCs, PDAs and GPS receivers. The term has been used to refer to the usage of any product allowing for mobile communication. The distinguishing feature of such a device is a stylus, generally used to press upon a graphics tablet or touch screen, as opposed to using a more traditional interface such as a keyboard, keypad, mouse or touchpad.
Historically, pen computing predates the use of a mouse and graphical display by at least two decades, starting with the Stylator and RAND Tablet systems of the late 1950s and early 1960s.
GENERAL TECHNIQUES OF PEN COMPUTING
User interfaces for Pen computing can be implemented in several ways. Actual systems generally employ a combination of these techniques.
Pointing/Locator input
The tablet and stylus are used as pointing devices, such as to replace a mouse. Note that a mouse is a Relative pointing device: you use the mouse to "push the cursor around" on a screen.
A tablet, however, is an Absolute pointing device: where you put the stylus is exactly where the cursor goes.
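The distinction is easy to make concrete with a small sketch (the event shapes here are hypothetical, not a real device API): a mouse event carries a delta that nudges the cursor, while a tablet event carries an absolute pen position that maps directly onto the screen.

# Illustrative sketch of relative vs. absolute pointing (made-up event
# shapes and surface sizes, purely for illustration).

SCREEN_W, SCREEN_H = 1920, 1080
TABLET_W, TABLET_H = 300, 200    # assumed tablet surface in device units

def apply_mouse(cursor, dx, dy):
    """Relative device: the event is a delta; the cursor is pushed around."""
    x = min(max(cursor[0] + dx, 0), SCREEN_W - 1)
    y = min(max(cursor[1] + dy, 0), SCREEN_H - 1)
    return (x, y)

def apply_tablet(tx, ty):
    """Absolute device: the pen position maps directly to a screen position."""
    return (tx * SCREEN_W // TABLET_W, ty * SCREEN_H // TABLET_H)

cursor = (960, 540)
cursor = apply_mouse(cursor, -40, 10)   # nudge left and down
print(cursor)                           # -> (920, 550)
print(apply_tablet(150, 100))           # pen at tablet centre -> (960, 540)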
There are a number of human factors considerations when actually substituting a stylus and tablet for a mouse. For example, it is much harder to target or tap the same exact position twice with a stylus, so "double-tap" operations with a stylus are harder to perform if the system is expecting "double-click" input from a mouse.
Note that a finger can be used as the stylus on a touch-sensitive tablet surface, such as with a touch screen.
Handwriting recognition
The tablet and stylus can be used to replace a keyboard or both a mouse and a keyboard, by using the tablet and stylus in two modes:
Pointing mode: The stylus is used as a pointing device as above.
On-line Handwriting recognition mode: The strokes made with the stylus are analyzed as "electronic ink" by software which recognizes the shapes of the strokes or marks as handwritten characters. The characters are then input as text, as if from a keyboard.
Different systems switch between the modes (pointing vs. handwriting recognition) by different means, e.g.
By writing in separate areas of the tablet for pointing mode and for handwriting-recognition mode.
By pressing a special button on the side of the stylus to change modes (as in the sketch following this list).
By context, such as treating any marks not recognized as text as pointing input.
By recognizing a special gesture mark.
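The sketch below illustrates the button-based scheme: stylus events are routed either to cursor movement or to an ink buffer, and a barrel-button press toggles between the two. The class and the stub recognizer are illustrative assumptions, not a real pen API.

# Minimal sketch of pointing/handwriting mode switching, keyed to a stylus
# barrel button. The recognizer is a stub; a real system would hand the
# stroke to an on-line handwriting recognition engine.

class PenInput:
    def __init__(self):
        self.handwriting_mode = False
        self.stroke = []                      # points of the current stroke

    def on_button(self):
        """Barrel button toggles between pointing and handwriting modes."""
        self.handwriting_mode = not self.handwriting_mode

    def on_pen_move(self, x, y):
        if self.handwriting_mode:
            self.stroke.append((x, y))        # collect "electronic ink"
        else:
            print(f"move cursor to ({x}, {y})")

    def on_pen_up(self):
        if self.handwriting_mode and self.stroke:
            print("recognized:", recognize(self.stroke))
            self.stroke = []

def recognize(stroke):
    """Stub standing in for a real on-line handwriting recognizer."""
    return "?" if len(stroke) < 3 else "a"

pen = PenInput()
pen.on_pen_move(10, 10)        # pointing mode: moves the cursor
pen.on_button()                # switch to handwriting mode
for p in [(0, 0), (5, 8), (10, 0)]:
    pen.on_pen_move(*p)        # ink is collected, cursor is untouched
pen.on_pen_up()                # stroke handed to the recognizer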
The term "on-line handwriting recognition" is used to distinguish recognition of handwriting using a real-time digitizing tablet for input, as contrasted to "off-line handwriting recognition", which is optical character recognition of static handwritten symbols from paper.
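The contrast shows up directly in the data each recognizer consumes; the toy values below are purely illustrative.

# On-line recognition consumes time-ordered pen samples: each sample is
# (x, y, t), so the pen's trajectory and stroke order are available.
online_stroke = [(12, 40, 0.00), (15, 35, 0.02), (19, 31, 0.04)]

# Off-line recognition (OCR) consumes only a static grid of pixels; all
# timing and stroke-order information is lost.
offline_image = [
    [0, 1, 0],
    [1, 0, 1],
    [1, 1, 1],
]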