
Using Multimodal Interaction to Navigate in Arbitrary Virtual VRML Worlds
Presented by:
Frank Althoff
Gregor McGlaun
Björn Schuller
Peter Morguet
Manfred Lang



Abstract

In this paper we present a multimodal interface for navigating in arbitrary virtual VRML worlds. Conventional haptic devices like keyboard, mouse, joystick, and touchscreen can freely be combined with special virtual-reality hardware like spacemouse, data glove, and position tracker. As a key feature, the system additionally provides intuitive input by command and natural speech utterances as well as dynamic head and hand gestures. The communication of the interface components is based on the abstract formalism of a context-free grammar, allowing the representation of device-independent information. Taking into account the current system context, user interactions are combined in a semantic unification process and mapped onto a model of the viewer's functionality vocabulary. To integrate the continuous multimodal information stream, we use a straightforward rule-based approach and a new technique based on evolutionary algorithms. Our navigation interface has been extensively evaluated in usability studies, obtaining excellent results.
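The abstract's rule-based fusion idea can be illustrated with a minimal sketch: device-independent tokens arriving from different modalities (speech, gestures, haptic devices) are unified, within a short time window, into a single viewer command. All names here (`Event`, `FUSION_RULES`, `fuse`) and the exact rules are hypothetical illustrations, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Event:
    modality: str   # e.g. "speech", "gesture", "spacemouse"
    token: str      # device-independent symbol, e.g. "TURN", "LEFT"
    t: float        # timestamp in seconds

# Each rule maps a set of abstract tokens to one viewer command.
FUSION_RULES = {
    frozenset({"MOVE", "FORWARD"}): "walk_forward",
    frozenset({"TURN", "LEFT"}): "rotate_left",
    frozenset({"TURN", "RIGHT"}): "rotate_right",
}

def fuse(events, window=1.0):
    """Unify events that fall within `window` seconds into one command."""
    if not events:
        return None
    events = sorted(events, key=lambda e: e.t)
    if events[-1].t - events[0].t > window:
        return None  # tokens too far apart in time: no unification
    tokens = frozenset(e.token for e in events)
    return FUSION_RULES.get(tokens)

# A spoken "turn" plus a left-pointing hand gesture yield one command:
cmd = fuse([Event("speech", "TURN", 0.10), Event("gesture", "LEFT", 0.45)])
print(cmd)  # -> rotate_left
```

The paper's actual system additionally weighs the current system context and explores an evolutionary-algorithm alternative to such fixed rules; this sketch only shows the basic unification step.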

Introduction

The study of user interfaces (UIs) has long drawn significant attention as an independent research field, especially as a fundamental problem of current computer systems is that, due to their growing functionality, they are often complex to handle. Over time, the requirements concerning the usability of UIs have considerably increased [15], leading to successive generations of UIs: from purely hardware-based and simple batch systems, over line-oriented and full-screen interfaces, to today's widespread graphical UIs. Because they are principally restricted to haptic interaction by mouse and keyboard, the majority of available application programs require extensive learning periods and a high degree of adaptation by the user. More advanced interaction styles, such as speech and gesture recognition as well as combinations of various input modalities, have so far been seen only in dedicated, mostly research-motivated applications. However, especially for average computer users, systems would be particularly desirable whose handling can be learned in a short time and that can be worked with quickly, easily, effectively and, above all, intuitively. Therefore, current UI research endeavors aim at establishing computers as a common tool for a preferably large group of potential users.


for more details, please visit
http://citeseerx.ist.psu.edu/viewdoc/dow...1&type=pdf