Articulated Human Hand Model with Inter-Joint Dependency Constraints
Introduction
This report summarizes the background research, work activities, results, and conclusions that
came out of my term project in Computer Science 6752. The title of the project is "Articulated
Human Hand Model with Inter-Joint Dependency Constraints". The primary goal of this project
was to produce a model of the human hand that would allow manipulation of model parameters in
a manner consistent with the mechanics of the actual human hand. These parameters are
expressed as joint angles. The foreseen application for this model is a generate-and-test gesture
recognition system, which will be described shortly.
The mechanics of the joints in the human hand are to be modeled. The relevant information from
the physiology of the hand is that which answers the question: where are the axes of rotation that
create movement? The answer to this question will be given later, but for now let us say that these
axes are located at the "movable joints" of the hand. Mathematically, the model consists of a
set of coordinate frames and the transformations that relate them. The model parameters are
components of these transformations. The coordinate frames were assigned to the movable joints
according to the Denavit-Hartenberg convention [1] employed in robotics.
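To make the convention concrete, a minimal sketch of the standard Denavit-Hartenberg transform is shown below. It builds the 4x4 homogeneous matrix that relates one joint's coordinate frame to the next from the four DH parameters (joint angle, link offset, link length, twist); the report's actual frame assignments and parameter values for the hand are not reproduced here.

```cpp
#include <array>
#include <cmath>

using Mat4 = std::array<std::array<double, 4>, 4>;

// Standard Denavit-Hartenberg homogeneous transform between adjacent
// coordinate frames: theta = joint angle, d = link offset,
// a = link length, alpha = link twist.
Mat4 dhTransform(double theta, double d, double a, double alpha) {
    const double ct = std::cos(theta), st = std::sin(theta);
    const double ca = std::cos(alpha), sa = std::sin(alpha);
    return {{{{ct, -st * ca, st * sa, a * ct}},
             {{st, ct * ca, -ct * sa, a * st}},
             {{0.0, sa, ca, d}},
             {{0.0, 0.0, 0.0, 1.0}}}};
}
```

Chaining these transforms from the base of the hand out to a fingertip gives the position of every bone as a function of the joint angles, which is exactly the role the model parameters play here.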
This model also includes a number of constraints on model parameters. Some of these constraints
are static and simply describe ranges of allowable motion. Others are dynamic and introduce
dependencies between joints. This latter category may be further subdivided into constraints that
act upon joints within the same finger, the intra-finger constraints, and those that act upon joints
within different fingers, the inter-finger constraints.
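The two kinds of constraints can be sketched in a few lines. The static case is simply a clamp to an allowable range; the dynamic case couples one joint's angle to another's. The two-thirds coupling between the DIP and PIP joints used below is a commonly cited intra-finger relationship in the hand-modeling literature and is illustrative only; the report's own coefficients are not given here.

```cpp
#include <algorithm>

// Static constraint: clamp a joint angle (radians) to its allowable range.
double clampJoint(double angle, double lo, double hi) {
    return std::min(std::max(angle, lo), hi);
}

// Dynamic (intra-finger) constraint: the DIP joint flexes at roughly
// two-thirds of the PIP angle. Illustrative coefficient only.
double dipFromPip(double pipAngle) {
    return (2.0 / 3.0) * pipAngle;
}
```

An inter-finger constraint would take the same shape, with the dependent angle computed from a joint on a neighboring finger instead of one on the same finger.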
Application
The application originally envisioned for this model was a generate-and-test gesture recognition
system. This system is divided into two stages. In the first stage, images of an actual human
hand would serve as input. The system would take its internal hand model and manipulate its
parameters so that the model and the actual hand shared the same posture. This process will be
described in more detail shortly. The parameters of the matched model would be the output of
the first stage of the system. At the second stage, further processing could be done to classify a
particular posture, or sequence of postures, as a defined gesture. Classifying this gesture would
be the final output of the system. Thus, the model described in this paper would be used
primarily in the first stage of this system, but its parameters could also be used as a feature vector
for the classification stage of the system.
Contribution to Research
The only portion of this project which could be considered a new contribution to the field is the
addition of the inter-finger joint dependency constraints described later. No use of such
constraints was found in any of the hand-modeling literature surveyed [2, 3],
although the medical literature [4, 5] notes that dependencies do exist between
joints on neighboring fingers. However, no quantitative relationships or data were found.
Also, while the Denavit-Hartenberg conventions have been applied in biomechanical modeling
[6], no reference to their use in hand modeling was found. This contribution, while not likely
original, was developed independently.
Project Implementation
The model described in this project has been implemented as a C++ program compiled using
Optima's Power++ compiler for the Windows 95/98/NT platform. The model is rendered using
OpenGL. Each coordinate frame in the model, together with its associated finger bone, is
represented as an instance of the "Thing" class. All "Things" are stored internally in a
hierarchical structure where all accesses are made through the root "Thing"; in this case, that
is the base of the hand.
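A minimal sketch of such a hierarchy is shown below: every frame/bone is a node owning its children, and all lookups go through the root, the base of the hand. The member and function names are assumptions for illustration; the report does not list the actual interface of its "Thing" class.

```cpp
#include <memory>
#include <string>
#include <vector>

// Hypothetical sketch of a hierarchical "Thing" node: each coordinate
// frame / bone owns its child frames, and all access starts at the root.
struct Thing {
    std::string name;
    std::vector<std::unique_ptr<Thing>> children;

    explicit Thing(std::string n) : name(std::move(n)) {}

    // Create and attach a child node; returns a non-owning pointer to it.
    Thing* addChild(std::string n) {
        children.push_back(std::make_unique<Thing>(std::move(n)));
        return children.back().get();
    }

    // Depth-first search from this node; nullptr if the name is absent.
    Thing* find(const std::string& target) {
        if (name == target) return this;
        for (auto& c : children)
            if (Thing* hit = c->find(target)) return hit;
        return nullptr;
    }
};
```

Routing all access through the root mirrors the kinematics: a bone's world-space pose is only defined by composing the transforms of every ancestor back to the base of the hand.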
Results and Conclusions
It is difficult to assess how well the model meets the original project goal, because that goal
involved using the model as an embedded component of a gesture recognition system, and the
model is currently the only existing component of that system. It is therefore impossible to
accurately ascertain how closely this model has come to achieving its desired purpose.
That said, the model has sufficient complexity to match virtually any conceivable hand
posture. Also, the constraints discussed significantly reduce both the total number of
degrees of freedom in the model (from 26 to 18) and the range of motion for some of the
remaining movable joints.