25-08-2017, 09:32 PM
A TECHNICAL SEMINAR REPORT ON EYE-MOVEMENT BASED HUMAN-COMPUTER INTERACTION
ABSTRACT
User-computer dialogues are typically one-sided, with the bandwidth from
computer to user far greater than that from user to computer. The movement
of a user’s eyes can provide a convenient, natural, and high-bandwidth source
of additional user input, to help redress this imbalance. We therefore investigate
the introduction of eye movements as a computer input medium. Our
emphasis is on the study of interaction techniques that incorporate eye movements into the user-computer dialogue in a convenient and natural way. This chapter describes research at NRL on developing such interaction techniques and the broader issues raised by non-command-based interaction styles. It discusses some of the human factors and technical considerations that arise in trying to use eye movements as an input medium, describes our approach and the first eye movement-based interaction techniques that we have devised and implemented in our laboratory, reports our experiences and observations on them, and considers eye movement-based interaction as an exemplar of a new, more general class of non-command-based user-computer interaction.
INTRODUCTION
In searching for better interfaces between users and their computers, an additional mode of communication between the two parties would be of great use. The problem of human-computer interaction can be viewed as two powerful information processors (human and computer) attempting to communicate with each other via a narrow-bandwidth, highly constrained interface [25]. Faster, more natural, more convenient (and, particularly, more parallel, less sequential) means for users and computers to exchange information are needed to increase the useful bandwidth across that interface. On the user’s side, the constraints are in the nature of the communication organs and abilities with which humans are endowed; on the computer side, the only constraint is the range of devices and interaction techniques that we can invent and their performance. Current technology has been stronger in the computer-to-user direction than user-to-computer; hence today’s user-computer dialogues are typically one-sided, with the bandwidth from the computer to the user far greater than that from user to computer. We are especially interested in input media that can help redress this imbalance by obtaining data from the user conveniently and rapidly. We therefore investigate the possibility of using the movements of a user’s eyes to provide a high-bandwidth source of additional user input. While the technology for measuring a user’s visual line of gaze (where he or she is looking in space) and reporting it in real time has been improving, what is needed is appropriate interaction techniques that incorporate eye movements into the user-computer dialogue in a convenient and natural way.
NON-COMMAND INTERFACE STYLES
Eye movement-based interaction is one of several areas of current research in human-computer interaction in which a new interface style seems to be emerging. It represents a change in input from objects for the user to actuate by specific commands to passive equipment that simply senses parameters of the user’s body. Jakob Nielsen describes this property as non-command-based:
The fifth-generation user interface paradigm seems to be centered around non-command-based dialogues. This term is a somewhat negative way of characterizing a new form of interaction, but so far the unifying concept does seem to be exactly the abandonment of the principle underlying all earlier paradigms: that a dialogue has to be controlled by specific and precise commands issued by the user and processed and replied to by the computer. The new interfaces are often not even dialogues in the traditional meaning of the word, even though they obviously can be analyzed as having some dialogue content at some level, since they do involve the exchange of information between a user and a computer. The principles shown at CHI’90 which I am summarizing as being non-command-based interaction are eye tracking interfaces, artificial realities, play-along music accompaniment, and agents [19].

Previous interaction styles (batch, command line, menu, full-screen, natural language, and even the current desktop or "WIMP" window-icon-menu-pointer style) all await, receive, and respond to explicit commands from the user to the computer. In the non-command style, the computer passively monitors the user and responds as appropriate, rather than waiting for the user to issue specific commands. This distinction can be a subtle one, since any user action, even a non-voluntary one, could be viewed as a command, particularly from the point of view of the software designer.
PERSPECTIVES ON EYE MOVEMENT-BASED INTERACTION
As with other areas of user interface design, considerable leverage can be obtained by drawing analogies that use people’s already-existing skills for operating in the natural environment and searching for ways to apply them to communicating with a computer. Direct manipulation interfaces have enjoyed great success, particularly with novice users, largely because they draw on analogies to existing human skills (pointing, grabbing, moving objects in physical space), rather than trained behaviors; and virtual realities offer the promise of usefully exploiting people’s existing physical navigation and manipulation abilities. These notions are more difficult to extend to eye movement-based interaction, since few objects in the real world respond to people’s eye movements.
The Eye
The retina of the eye is not uniform. Rather, one small portion near its center contains many densely packed receptors and thus permits sharp vision, while the rest of the retina permits only much blurrier vision. That central portion (the fovea) covers a field of view approximately one degree in diameter (the width of one word in a book held at normal reading distance, or slightly less than the width of your thumb held at the end of your extended arm). Anything outside that area is seen only with "peripheral vision," with 15 to 50 percent of the acuity of the fovea. It follows that, to see an object clearly, it is necessary to move the eye so that the object appears on the fovea.
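The one-degree foveal field translates directly into an on-screen extent, which is worth keeping in mind when judging how precisely a gaze point can ever be resolved. A minimal sketch of that arithmetic, assuming an illustrative 500 mm viewing distance and a 96 DPI display (both values are assumptions, not figures from the text):

```python
import math

def visual_angle_to_size_mm(angle_deg, distance_mm):
    """Linear extent subtended by a visual angle at a given viewing distance."""
    return 2 * distance_mm * math.tan(math.radians(angle_deg) / 2)

# Foveal field of view: ~1 degree at an assumed 500 mm viewing distance.
fovea_mm = visual_angle_to_size_mm(1.0, 500)
fovea_px = fovea_mm * 96 / 25.4  # convert mm to pixels on an assumed 96 DPI display
print(f"fovea covers ~{fovea_mm:.1f} mm (~{fovea_px:.0f} px) on screen")
```

Under these assumptions the fovea spans roughly 9 mm (about 33 pixels), which suggests why gaze position cannot meaningfully be reported much more finely than that.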
METHODS FOR MEASURING EYE MOVEMENTS
What to Measure
For human-computer dialogues, we wish to measure visual line of gaze, rather than simply the position of the eye in space or the relative motion of the eye within the head. Visual line of gaze is a line radiating forward in space from the eye; the user is looking at something along that line. To illustrate the difference, suppose an eye-tracking instrument detected a small lateral motion of the pupil. It could mean either that the user’s head moved in space (and his or her eye is still looking at nearly the same point) or that the eye rotated with respect to the head (causing a large change in where the eye is looking). We need to measure where the eye is pointing in space; not all eye-tracking techniques do this. We do not normally measure how far out along the visual line of gaze the user is focusing (i.e., accommodation), but when the user is viewing a two-dimensional surface such as a computer display, that distance can be deduced from where the line of gaze meets the surface. Since both eyes generally point together, it is customary to track only one eye.
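Turning a measured line of gaze into a point of regard on the display is geometrically just a ray-plane intersection. A hypothetical sketch, assuming the eye position and gaze direction are already expressed in a coordinate frame where the screen is the plane z = 0 (the function name, units, and frame are all illustrative):

```python
def gaze_point_on_screen(eye_pos, gaze_dir):
    """Intersect the visual line of gaze with the screen plane z = 0.

    eye_pos:  (x, y, z) position of the eye in mm, with z > 0 in front of the screen.
    gaze_dir: (dx, dy, dz) direction of the line of gaze; dz must be negative
              (pointing toward the screen) for an intersection to exist.
    """
    ex, ey, ez = eye_pos
    dx, dy, dz = gaze_dir
    if dz >= 0:
        return None  # gaze is parallel to, or directed away from, the screen
    t = -ez / dz                       # ray parameter where z reaches 0
    return (ex + t * dx, ey + t * dy)  # 2-D point of regard on the screen

# Eye 600 mm in front of the screen, gaze aimed at screen point (120, 80).
print(gaze_point_on_screen((0, 0, 600), (120, 80, -600)))  # → (120.0, 80.0)
```

The same intersection also yields the viewing distance mentioned above: it is simply the length of the ray segment from the eye to the computed point.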
Electronic Methods
The simplest eye-tracking technique is electronic recording (electro-oculography, or EOG), using electrodes placed on the skin around the eye to measure changes in the orientation of the potential difference that exists between the cornea and the retina. However, this method is more useful for measuring relative eye movements (i.e., AC electrode measurements) than absolute position (which requires DC measurements). It can cover a wide range of eye movements, but gives poor accuracy, particularly in absolute position.
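To make the AC/DC distinction concrete: a DC electrode measurement can in principle be mapped to an absolute gaze angle by a linear calibration against two known fixation targets, whereas an AC measurement only reports changes. A sketch of the DC case, with entirely illustrative voltages and the common simplifying assumption that the corneo-retinal potential varies roughly linearly with eye rotation over a modest range:

```python
def calibrate_linear(v1, angle1, v2, angle2):
    """Fit angle = gain * voltage + offset from two known fixation targets."""
    gain = (angle2 - angle1) / (v2 - v1)
    offset = angle1 - gain * v1
    return gain, offset

def eog_to_angle(voltage, gain, offset):
    """Map a DC EOG voltage (microvolts) to a horizontal gaze angle (degrees)."""
    return gain * voltage + offset

# Illustrative calibration: -200 uV while fixating a target at -10 degrees,
# +200 uV while fixating a target at +10 degrees.
gain, offset = calibrate_linear(-200.0, -10.0, 200.0, 10.0)
print(eog_to_angle(100.0, gain, offset))  # a +100 uV reading → 5.0 degrees
```

In practice the DC baseline drifts with electrode and skin conditions, so the fitted offset goes stale quickly; this is exactly why the text notes that absolute position from EOG is inaccurate while relative movement is more usable.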