Indian Sign Language Character Recognition using Neural Networks




ABSTRACT


Deaf and mute people communicate among themselves using
sign languages, but they find it difficult to communicate with
the outside world. This paper proposes a method to convert
Indian Sign Language (ISL) hand gestures into appropriate
text messages. The hand gestures corresponding to the ISL
English alphabet are captured through a webcam. The hand is
segmented in the captured frames, and a neural network is
used to recognize the alphabet. Features such as the angles
between the fingers, the number of fingers that are fully
open, fully closed or semi-closed, and the identity of each
finger are used as input to the neural network. Experiments
were conducted for single-hand alphabets and the results are
summarized.



INTRODUCTION


Communication is the exchange of thoughts and messages
through speech, visuals, signals or behaviour. Deaf and mute
people use their hands to express their ideas; the gestures
include the formation of English alphabets, and this is called
sign language. When they communicate through a computer,
the gestures may not be intelligible to the person on the other
side, so for easy understanding these gestures can be
converted to text messages. Sign language is regional:
American Sign Language uses only one hand to picture the
gestures, whereas Indian Sign Language is quite different and
uses both hands to represent the alphabets. While considerable
effort has gone into American Sign Language detection, the
same cannot be said of Indian Sign Language. Existing
systems concentrate on general recognition of hand gestures
for any action; this paper proposes recognition of standard
Indian Sign Language gestures. Unlike conventional hand
gesture recognition techniques that use gloves, markers or
other devices, the method proposed in this paper requires no
additional hardware and keeps the user comfortable. The hand
gestures captured through a webcam are used as input, and
image processing techniques are used to extract the required
features.



LITERATURE SURVEY


Extraction of the hand region plays a major role in hand
gesture recognition. Skin-colour-based segmentation is widely
used for segmenting hands, faces, etc.; these techniques rely
upon the colour model used for segmentation [6]. Various
extraction procedures, such as using threshold values,
combining collections of low-level information into high-level
feature information, projecting the object using eigenvectors,
detecting fingertips by segmentation, and using the kinematics
and dynamics of the body, are discussed in [2]. Some feature
extraction techniques, such as fingertip detection and finger
length detection, are discussed in [6]. Many learning
algorithms can be used for gesture recognition: an SVM-based
method is discussed in [5], a Hidden Markov Model based
recognition in [3], and a neural network based approach in
[4]. Feedback, or back-propagation of the output error, is
what enables learning in a neural network. One of the first
systems to use neural networks in hand posture and gesture
recognition was developed by Murakami. Hand postures were
recognized with a back-propagation neural network that had
3 layers: 13 input nodes, 100 hidden nodes, and 42 output
nodes, one for each posture to be recognized. The method in
[11] achieved 77% accuracy with an initial training set. An
increase in accuracy to 98% is reported in [12] for
participants in the original training set when the number of
training patterns was increased from 42 to 206.



PROPOSED METHODOLOGY

The alphabets in Indian Sign Language, shown in Figure 1,
use both hands, which differentiates ISL from American Sign
Language. Indian Sign Language characters closely resemble
the printed characters themselves. Each alphabet can be
characterized by the angles between the fingers, which fingers
are open, how far each finger is open, and how many fingers
are open.
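These per-frame features could be collected into a single input vector for the classifier. The sketch below is only illustrative: the paper names the features (finger angles, counts of open/semi-closed/closed fingers) but does not specify an encoding, so the field names and ordering here are assumptions.

```python
# Hypothetical encoding of the paper's hand features; field names
# and vector layout are illustrative assumptions, not the paper's.
from dataclasses import dataclass
from typing import List

@dataclass
class HandFeatures:
    num_open: int            # fingers fully open
    num_semi: int            # fingers semi-closed
    num_closed: int          # fingers fully closed
    angles_deg: List[float]  # angles between adjacent open fingers

    def to_vector(self) -> List[float]:
        # Flatten the counts and angles into one numeric input
        # vector suitable for a neural network.
        return [float(self.num_open), float(self.num_semi),
                float(self.num_closed)] + list(self.angles_deg)

# Example: a gesture with two open fingers roughly 30 degrees apart.
f = HandFeatures(num_open=2, num_semi=0, num_closed=3, angles_deg=[30.0])
print(f.to_vector())  # [2.0, 0.0, 3.0, 30.0]
```

In practice the angle list would need a fixed length (e.g. padded with zeros) so that every gesture yields a vector of the same dimension.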


Segmentation

Since segmentation is done using a skin colour model, the
effect of luminosity should be separated from the colour
components. This makes the HSI colour model a better choice
than RGB. The optimal hue and saturation values for the hand,
as specified in [6], are H<25 or H>230 and S<25 or S>230.
The input frames in the RGB model are converted to the HSI
model using Eq. (1). One assumption made is that the hand is
the largest skin-coloured object in the input image. After
segmentation, the hand region is assigned white and all other
areas are assigned black.
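A per-pixel version of this thresholding step could look like the following sketch. It uses the standard library's `colorsys` HSV conversion as a stand-in for the paper's HSI conversion (the hue and saturation channels are close enough to illustrate the rule); the 0-255 scale for H and S matches the thresholds quoted above.

```python
# Minimal sketch of skin-colour thresholding, assuming the paper's
# rule (H < 25 or H > 230, S < 25 or S > 230 on a 0-255 scale).
# HSV hue/saturation from colorsys stands in for HSI here.
import colorsys

def is_skin(r: int, g: int, b: int) -> bool:
    """Classify one RGB pixel (0-255 channels) as skin or not."""
    h, s, _v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    h255, s255 = h * 255.0, s * 255.0
    return (h255 < 25 or h255 > 230) and (s255 < 25 or s255 > 230)

def segment(frame):
    """Binarize a frame (nested lists of RGB tuples):
    skin pixels become 255 (white), everything else 0 (black)."""
    return [[255 if is_skin(*px) else 0 for px in row] for row in frame]

# A pale reddish pixel passes the rule; pure green does not.
print(segment([[(250, 245, 245), (0, 255, 0)]]))  # [[255, 0]]
```

A real implementation would also apply the "largest skin-coloured object" assumption, e.g. by keeping only the largest connected component of white pixels.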


Artificial Neural Network

An ANN is used to recognize the single-hand alphabets, viz.
L, C, I, U, V and W. The ANN learns to recognize these
alphabets by adjusting the weights associated with its neurons.
The features extracted in the previous phase are used as input
to the network. The neural network used in this paper is a
back-propagation neural network with 1 input layer, 2 hidden
layers and 1 output layer, as shown in the figure.


EXPERIMENTAL RESULTS

The ISL characters use both hands, with some fingers fully
open, some semi-closed and some fully closed. The differences
in these features between the alphabets were identified and the
alphabets were categorized into groups accordingly. One major
category depends on the number of hands used. The alphabets
that use one hand in Indian Sign Language, and the features
required for their recognition, are shown in Table 1. The
angles given are approximate estimates, as the actual values
vary at runtime with the hand position of the person showing
the gestures.



CONCLUSION AND FUTURE WORK


The Indian Sign Language alphabets could be identified from
the input hand gesture video by identifying the fingers and
their postures. Segmentation of the hand and fingers plays a
crucial role in this process. The features of the English
characters were obtained for single-hand alphabets and
classified using neural networks. Accuracy increased when
neural networks were used; in our case, the proposed neural
network with a sigmoid transfer function gave better results
than other architectures. Due to segmentation problems, the
features of the letters C and U were confused. The effect of
illumination also contributes to improper segmentation. The
implementation of segmentation