02-03-2013, 04:58 PM
Please send the seminar report and full PPT/PDF on gesture recognition technology.
My email id: k.sanklecha11[at]gmail.com
04-03-2013, 11:09 AM
To get information about the topic "gesture recognition technology" (full report, PPT and related topics), refer to the links below:
https://seminarproject.net/Thread-gestur...e=threaded
https://seminarproject.net/Thread-hand-a...pplication
https://seminarproject.net/Thread-gesture-recognition
https://seminarproject.net/Thread-gestur...e=threaded
14-08-2013, 03:24 PM
GESTURE RECOGNITION - A TECHNICAL SEMINAR REPORT
GESTURE RECOGNITION .docx (Size: 395.1 KB / Downloads: 22)

ABSTRACT

A primary goal of gesture recognition research is to create a system which can identify specific human gestures and use them to convey information or to control devices. Users interface with computers using gestures of the human body, typically hand movements. In gesture recognition technology, a camera reads the movements of the human body and communicates the data to a computer, which uses the gestures as input to control devices or applications. Gesture recognition thus enables humans to interact with the machine (HMI) naturally, without any mechanical devices. Using gesture recognition, it is possible to point a finger at the computer screen and have the cursor move accordingly. This could potentially make conventional input devices such as mice, keyboards and even touch screens redundant.

INTRODUCTION

Since the introduction of the most common computer input devices, not a lot has changed, probably because the existing devices are adequate. Yet computers are now so tightly integrated with everyday life that new applications and hardware are constantly introduced. The means of communicating with computers are at the moment limited to keyboards, mice, light pens, trackballs, keypads, etc. These devices have grown familiar but inherently limit the speed and naturalness with which we interact with the computer. As the computer industry has followed Moore's law since the mid-1960s, ever more powerful machines have been built and equipped with more peripherals. Vision-based interfaces are now feasible: the computer is able to "see", allowing richer and more user-friendly man-machine interaction. This can lead to new interfaces supporting commands that are not possible with current input devices, and plenty of time will be saved as well.
Recently, there has been a surge of interest in recognizing human hand gestures. Hand gesture recognition has various applications, such as computer games, machinery control (e.g. cranes), and thorough mouse replacement. One of the most structured sets of gestures belongs to sign language, in which each gesture has an assigned meaning (or meanings).

WHAT IS GESTURE RECOGNITION?

Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. Gestures can originate from any bodily motion or state but commonly originate from the face or hand. Current focuses in the field include emotion recognition from the face and hand gesture recognition, and many approaches use cameras and computer vision algorithms to interpret sign language. Gesture recognition is the process by which the gestures made by the user are recognized by the receiver. Gestures are expressive, meaningful body motions involving physical movements of the fingers.

NEED FOR GESTURE RECOGNITION:

The goal of virtual environments (VEs) is to provide natural, powerful, and flexible interaction. Gesture as an input modality can help meet these requirements because human gestures are natural and flexible, and may be efficient and powerful, especially compared with alternative interaction modes. The traditional two-dimensional (2D), keyboard- and mouse-oriented graphical user interface (GUI) is not well suited to virtual environments. Synthetic environments provide the opportunity to utilize several different sensing modalities and technologies and to integrate them into the user experience. Devices which sense position and orientation, direction of gaze, speech and sound, facial expression, galvanic skin response, and other aspects of human behavior or state can be used to mediate communication between the human and the environment.
Combinations of communication modalities and sensing devices can produce a wide range of unimodal and multimodal interface techniques, and the potential for these techniques to support natural and powerful interfaces in VEs appears promising. Gesture is used for control and navigation in CAVEs (Cave Automatic Virtual Environments) and in other VEs, such as virtual work environments and performance spaces. In addition, gesture may be perceived by the environment in order to be transmitted elsewhere (e.g., as a compression technique, to the receiver). Gesture recognition may also influence, intentionally or unintentionally, a system's model of the user's state. Gesture may also be used as a communication backchannel (i.e., visual or verbal behaviors such as nodding, saying something, or raising a finger to indicate the desire to interrupt) to indicate agreement, participation, attention, turn taking, etc. Clearly the position and orientation of each body part (the parameters of an articulated body model) would be useful, as well as features derived from those measurements, such as velocity and acceleration. Facial expressions are very expressive, and more subtle cues such as hand tension, overall muscle tension, locations of self-contact, and even pupil dilation may be of use.

REQUIREMENTS AND CHALLENGES:

REQUIREMENTS:

The main requirement for a gesture interface is the tracking technology used to capture gesture inputs and process them. Gesture-only interfaces with a syntax of many gestures typically require precise pose tracking. A common technique for hand pose tracking is to instrument the hand with a glove equipped with a number of sensors which provide information about hand position, orientation, and flex of the fingers. The first commercially available hand tracker was the Dataglove. Although instrumented gloves provide very accurate results, they are expensive and encumbering.
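The derived motion features mentioned above (velocity and acceleration computed from tracked body-part positions) can be sketched with simple finite differences. The function name, sampling interval, and sample track below are illustrative assumptions, not part of the report:

```python
# Illustrative sketch (assumed names/values): finite-difference velocity and
# acceleration features from a sequence of tracked 2-D hand positions.

def derived_features(positions, dt):
    """positions: list of (x, y) samples taken every dt seconds."""
    velocity = [((x2 - x1) / dt, (y2 - y1) / dt)
                for (x1, y1), (x2, y2) in zip(positions, positions[1:])]
    acceleration = [((vx2 - vx1) / dt, (vy2 - vy1) / dt)
                    for (vx1, vy1), (vx2, vy2) in zip(velocity, velocity[1:])]
    return velocity, acceleration

# A hand moving right at constant speed: velocity is constant, acceleration zero.
track = [(0.0, 0.0), (2.0, 0.0), (4.0, 0.0)]
vel, acc = derived_features(track, dt=0.5)
print(vel)  # -> [(4.0, 0.0), (4.0, 0.0)]
print(acc)  # -> [(0.0, 0.0)]
```

In a real tracker these samples would come from the glove or camera at the device's frame rate; the differencing itself is the same.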
Computer vision and image-based gesture recognition techniques can be used to overcome some of these limitations. There are two different approaches to vision-based gesture recognition: model-based techniques, which try to create a three-dimensional model of the user's pose and use this for recognition, and image-based techniques, which calculate recognition features directly from the image of the pose.

DATAGLOVE:

A data glove is an interactive device, resembling a glove worn on the hand, which facilitates tactile sensing and fine-motion control in robotics and virtual reality. Data gloves are one of several types of electromechanical devices used in haptics applications. Tactile sensing involves simulation of the sense of human touch and includes the ability to perceive pressure, linear force, torque, temperature, and surface texture. Fine-motion control involves the use of sensors to detect the movements of the user's hand and fingers, and the translation of these motions into signals that can be used by a virtual hand (for example, in gaming) or a robotic hand (for example, in remote-control surgery).

ROBOT CONTROL:

A combination of static shape recognition, Kalman-filter-based hand tracking, and an HMM-based temporal characterization scheme is developed for reliable recognition of single-handed dynamic hand gestures. Here, the start and end of gesture sequences are detected automatically. Hand poses can undergo motion and discrete changes in between the gestures, thereby enabling one to deal with a larger set of gestures; however, continuous deformation of hand shapes is not allowed. The system is robust to background clutter, and uses skin color for static shape recognition and tracking.

Real-time implementation of robot control: The user is expected to bring the hand to a designated region to initiate a gestural action to be grabbed by a camera. The hand should be properly illuminated, and the user should avoid sudden jerky movements.
When the user moves the hand away from the designated region, it signals the end of the gesture and the beginning of the recognition process.

IMAGE PROCESSING AND DATA EXTRACTION:

The final mask image is of good quality: noise and disconnected areas of the body can be reduced by running an averaging filter over the mask data, so that a hand separated from its arm (Fig. 4) is reconnected (Fig. 5). Once a mask is generated, the image can be processed to extract data into a decision tree. Two strategies for extracting data from the image were tried. The first was to find the body and arms: each column of pixels in the mask was summed, and the column with the highest sum was assumed to be the center of the body. This is a valid criterion for determining the body center, given the assumptions about the input image: the arms are significantly thinner than the main body of a person, and remain so within the mask (even with shadow artifacts); the center of a person's head is the highest point on the body; and the torso extends to the bottom of the image. In all of the samples this technique was tried on, it found the center of the body to within a few pixels.
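The column-sum strategy described above can be sketched in a few lines. The function name and the toy mask are assumptions made for illustration only:

```python
# Illustrative sketch (assumed names/data): locate the body centre in a binary
# mask as the column with the most foreground pixels, per the strategy above.

def body_center_column(mask):
    """mask: 2-D list of 0/1 pixels; returns the index of the densest column."""
    n_cols = len(mask[0])
    col_sums = [sum(row[c] for row in mask) for c in range(n_cols)]
    return max(range(n_cols), key=lambda c: col_sums[c])

# Toy 5x7 mask: a torso centred on column 3, with a thin "arm" in column 6.
mask = [
    [0, 0, 0, 1, 0, 0, 0],
    [0, 0, 1, 1, 1, 0, 1],
    [0, 0, 1, 1, 1, 0, 1],
    [0, 0, 1, 1, 1, 0, 0],
    [0, 0, 1, 1, 1, 0, 0],
]
print(body_center_column(mask))  # -> 3
```

Note how the thin arm column never outweighs the torso columns, which is exactly the assumption the report relies on.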
The similarity of a test hand shape may be determined with respect to prototype hand contours using fuzzy sets. TDNNs and recurrent neural networks offer promise in capturing the temporal and contextual relations in dynamic hand gestures. Matching of the retrieved images may be performed on clusters of databases, using concepts from approximate reasoning, searching, and learning. Thus, gesture recognition promises wide-ranging applications in fields from photojournalism through medical technology to biometrics.
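As a closing illustration of the HMM-based temporal modelling mentioned above, the forward algorithm scores how well an observation sequence fits a given model; a gesture recognizer would pick the gesture model with the highest score. The tiny two-state model and its probabilities below are made-up example values, not taken from the report:

```python
# Illustrative sketch (assumed model values): the HMM forward algorithm,
# which sums over all hidden-state paths to score an observation sequence.

def forward_likelihood(obs, start, trans, emit):
    """obs: symbol indices; start[i], trans[i][j], emit[i][k] are probabilities."""
    n = len(start)
    alpha = [start[i] * emit[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * trans[i][j] for i in range(n)) * emit[j][o]
                 for j in range(n)]
    return sum(alpha)

# Two hidden states (e.g. "hand moving" / "hand still"), two observed symbols.
start = [0.6, 0.4]
trans = [[0.7, 0.3], [0.4, 0.6]]
emit  = [[0.9, 0.1], [0.2, 0.8]]
print(round(forward_likelihood([0, 1], start, trans, emit), 3))  # -> 0.209
```

For longer sequences a real implementation would work in log space (or rescale alpha) to avoid numerical underflow.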