19-09-2012, 01:19 PM
Human Brain-Teleoperated Robot between Remote Places
Abstract
This paper describes an EEG-based human brain-actuated
robotic system that allows navigation and visual exploration
tasks to be performed between remote places over the Internet,
using only brain activity. In operation, two teleoperation modes
can be combined: robot navigation and camera exploration.
In both modes, the user faces a real-time video captured by
the robot camera merged with augmented reality items. In
this representation, the user concentrates on a target area
to navigate to or visually explore; then, a visual stimulation
process elicits the neurological phenomenon that enables the
brain-computer system to decode the intentions of the user. In
the navigation mode, the target destination is transferred to the
autonomous navigation system, which drives the robot to the desired
place while avoiding collisions with the obstacles detected
by the laser scanner. In the camera mode, the camera is aligned
with the target area to perform an active visual exploration of
the remote scenario. In June 2008, within the framework of
the experimental methodology, five healthy subjects performed
pre-established navigation and visual exploration tasks for one
week between two cities separated by 260 km. On the basis of
the results, a technical evaluation of the device and its main
functionalities is reported. The overall result is that all the
subjects successfully solved all the tasks with no failures,
showing the high robustness of the system.
INTRODUCTION
To command robots by thought alone is a technical and
social dream that is becoming a reality thanks to the
recent advances in brain-machine interaction and robotics.
On one hand, brain-computer interfaces provide a new communication
channel for patients with severe neuromuscular
disabilities by bypassing the normal output pathways, and can
also be used by healthy users. On the other hand, robotics
provides a physical entity embodied in a given space ready to
perceive, explore, navigate and interact with the environment.
Certainly, the combination of both areas opens a wide range
of possibilities in terms of research and applications.
One of the major goals for human applications is to
work with non-invasive (non-intracranial) methods, of which
the most popular is the electroencephalogram (EEG). So far,
systems based on human EEGs have been used to control a
mouse on a screen [1], for communication such as a speller
[2], an internet browser [3], etc. Regarding brain-actuated
robots, the first control was demonstrated in 2004 [4], and
since then, research has focused on wheelchairs [5], manipulators
[6], small-size humanoids [7] and neuroprosthetics
[8]. All these developments share an important property: the
user and the robot are placed in the same environment.
BRAIN-COMPUTER SYSTEM
This section describes the brain-computer system, which
is located in the user’s station (Figure 1).
A. Neurophysiological protocol and instrumentation
The neurophysiological protocol followed in this study is
based on the P300 visually-evoked potential [10], which is
usually measured in a human EEG. In this protocol, the user
focuses his attention on one of the possible visual stimuli,
and the BCI uses the EEG to infer the stimulus that the user is
attending to. The P300 potential manifests itself as a positive
deflection in voltage at a latency of roughly 300 msec in the
EEG after the target stimulus is presented within a random
sequence of non-target stimuli (Figure 2). The elicitation time
and the amplitude of this potential are correlated with the
fatigue of the user and with the saliency of the stimulus (in
terms of color, contrast, brightness, duration, etc.) [10].
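The P300 described above can be made concrete with a small numerical sketch. The snippet below is not from the paper: it simulates noisy EEG epochs in which target stimuli carry a positive deflection near 300 ms, then averages epochs to suppress noise and locate the peak latency. All names, the 250 Hz sampling rate, and the signal amplitudes are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of P300 detection by epoch averaging (not the paper's
# actual pipeline). Assumptions: 250 Hz sampling, 0.8 s epochs after stimulus
# onset, a Gaussian-shaped positive deflection centered at 300 ms for targets.
FS = 250
EPOCH_LEN = int(0.8 * FS)

rng = np.random.default_rng(0)

def synthetic_epoch(target: bool) -> np.ndarray:
    """One simulated EEG epoch; targets carry a deflection near 300 ms."""
    t = np.arange(EPOCH_LEN) / FS
    noise = rng.normal(0.0, 1.0, EPOCH_LEN)
    p300 = 3.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2)) if target else 0.0
    return noise + p300

def peak_latency(epochs) -> float:
    """Average epochs to suppress uncorrelated noise, then find the peak."""
    avg = np.mean(epochs, axis=0)
    return float(np.argmax(avg)) / FS  # latency of the positive peak, seconds

target_avg_peak = peak_latency([synthetic_epoch(True) for _ in range(50)])
print(f"peak latency ~ {target_avg_peak * 1000:.0f} ms")
```

Averaging 50 epochs reduces the noise standard deviation by a factor of about 7, so the 300 ms deflection stands out clearly; this is the same rationale that motivates repeated stimulation rounds in P300-based BCIs.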
Graphical interface
The brain-computer system incorporates a graphical interface
with two functionalities: (i) it visually displays a
predefined set of options that the user can select to control the
robotic system, and (ii) it develops the stimulation process to
elicit the P300 visual-evoked potential and therefore, enables
the pattern recognition system to decode the user’s intents.
Both functionalities are described next:
1) Visual display: The basis of the visual display is the
live video received by the robot camera, which is used by
the user as visual feedback for decision-making and process
control. This video is augmented by overlapped items related
to the two teleoperation modes: the robot navigation mode
and the camera exploration mode.
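The stimulation process behind functionality (ii) follows the oddball principle: each option on the display flashes in a random order, so the option the user attends to becomes a rare target within the sequence. The sketch below illustrates one plausible way to schedule such a sequence; the grid size, round count, and all names are assumptions, not details taken from the paper.

```python
import random

# Illustrative stimulation scheduler for a P300 oddball paradigm (names and
# parameters are assumptions). Every option flashes exactly once per round,
# in an unpredictable order; rounds are repeated to improve signal-to-noise.
OPTIONS = [f"cell_{r}{c}" for r in range(4) for c in range(5)]  # 4x5 grid
N_ROUNDS = 8

def stimulation_sequence(options, n_rounds, seed=0):
    rng = random.Random(seed)
    seq = []
    for _ in range(n_rounds):
        round_order = options[:]   # each option flashes once per round...
        rng.shuffle(round_order)   # ...but in a random order
        seq.extend(round_order)
    return seq

seq = stimulation_sequence(OPTIONS, N_ROUNDS)
```

Because each round is a full random permutation, the attended option appears with probability 1/20 at any instant, which preserves the rarity the P300 response depends on while guaranteeing every option is stimulated equally often.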
ROBOTIC SYSTEM
The robot is a commercial Pioneer P3-DX equipped with
two computers. The low-level computer controls the back
wheels that work in differential-drive mode, and the high-level
computer manages the rest of the computational tasks.
The main sensor is a SICK planar laser placed on the frontal
part of the vehicle. It operates at a frequency of 5 Hz, with a
180° field of view and a 0.5° resolution (361 points). This
sensor provides information about the obstacles located in
front of the vehicle. The robot is also equipped with wheel
encoders (odometry), with a wireless network interface card
(to connect the vehicle to a local network during operation)
and with a camera. The camera, placed on the laser, is a
pan/tilt Canon VC-C4 with a 100° pan field of view and a
+90°
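The laser readings described above (180° field of view, 0.5° resolution, 361 points) reach the obstacle-avoidance logic as a vector of ranges. A minimal sketch of how such a scan can be converted into Cartesian obstacle points in the robot frame, under the assumption that the scan spans -90° to +90° ahead of the robot, is given below; function and variable names are illustrative, not from the paper.

```python
import math

# Illustrative conversion of one planar laser scan into Cartesian points in
# the robot frame (x forward, y left). Assumes the 361 readings sweep
# -90°..+90° in 0.5° steps, matching the sensor described in the text.
N_POINTS = 361
ANGLE_STEP = math.radians(0.5)
ANGLE_START = math.radians(-90.0)

def scan_to_points(ranges):
    """Map each (index, range) reading to an (x, y) obstacle point."""
    points = []
    for i, r in enumerate(ranges):
        a = ANGLE_START + i * ANGLE_STEP
        points.append((r * math.cos(a), r * math.sin(a)))
    return points

pts = scan_to_points([2.0] * N_POINTS)  # e.g., a curved wall 2 m away
```

The middle reading (index 180) corresponds to 0°, i.e., straight ahead of the vehicle, which is a quick sanity check when wiring a scanner into a navigation stack.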