13-10-2012, 02:23 PM
Non-Invasive Brain-Actuated Control of a Mobile Robot by Human EEG
Abstract
Brain activity recorded non-invasively is sufficient to control a mobile robot if
advanced robotics is used in combination with asynchronous EEG analysis and machine learning
techniques. Until now, brain-actuated control has mainly relied on implanted electrodes, since EEG-based
systems have been considered too slow for controlling rapid and complex sequences of
movements. We show that two human subjects successfully moved a robot between several rooms
by mental control only, using an EEG-based brain-machine interface that recognized three mental
states. Mental control was comparable to manual control on the same task with a performance
ratio of 0.74.
INTRODUCTION
The idea of moving robots or prosthetic devices not by manual control, but by mere “thinking” (i.e.,
the brain activity of human subjects) has fascinated researchers over the last couple of years [1]-[9].
Initial demonstrations of the feasibility of such an approach have relied on intracranial electrodes
implanted in the motor cortex of monkeys [1]-[5]. For humans, non-invasive methods based on
electroencephalogram (EEG) signals are preferable, but they suffer from a reduced spatial resolution and
increased noise due to measurements on the scalp. So far control tasks based on human EEG have been
limited to simple exercises such as moving a computer cursor or opening a hand orthosis [6]-[9]. Here we
show that the signals derived from an EEG-based brain-machine interface (BMI) are sufficient to
continuously control a miniature mobile robot in an indoor environment with several rooms, a corridor, and
doorways (Fig. 1). Moreover, experimental results obtained with two volunteer healthy subjects show that
brain-actuated control of the robot is nearly as efficient as manual control. The subjects achieved these
results after a few days of training with a portable non-invasive BMI that uses 8 scalp electrodes.
METHODS
How is it possible to control a robot that has to make accurate turns at precise moments in time using
signals that arrive at a rate of about one bit per second? There are three key features of our approach.
First, the user’s mental states are associated with high-level commands (e.g., “turn right at the next
occasion”) and the robot executes these commands autonomously using the readings of its on-board
sensors. Second, the subject can issue high-level commands at any moment. This is possible because the
operation of the BMI is asynchronous and, unlike synchronous approaches, does not require waiting for
external cues. The robot will continue executing a high-level command until the next is received. Third,
the robot relies on a behavior-based controller [11] to implement the high-level commands that guarantee
obstacle avoidance and smooth turns.
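The second feature, asynchronous operation, can be illustrated with a minimal Python sketch (the function and command names are our own illustrative assumptions, not the authors' implementation): the BMI pushes a high-level command into a queue whenever it recognizes a mental state, and the robot simply keeps executing the last command until a new one arrives, never waiting for an external cue.

```python
import queue

def update_command(current: str, commands: "queue.Queue[str]") -> str:
    """Return the newest high-level command issued by the BMI, or keep
    the previous one if nothing new has arrived (asynchronous operation:
    the control loop never blocks waiting for a cue)."""
    try:
        while True:  # drain the queue so only the most recent command wins
            current = commands.get_nowait()
    except queue.Empty:
        pass  # no new mental command recognized; keep executing the old one
    return current

# Usage: the robot moves forward until the BMI recognizes mental state #2.
q: "queue.Queue[str]" = queue.Queue()
cmd = update_command("forward", q)  # queue empty, command unchanged
q.put("left")                       # BMI recognized a new mental state
cmd = update_command(cmd, q)        # command switches to "left"
```

In a real controller this update would run inside the robot's sensorimotor loop, so command changes take effect at the loop rate even though mental states arrive only about once per second.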
In our controller, both the user’s mental state and the robot’s perceptual state can be considered as
inputs to a finite state automaton with 6 states (or behaviors). These behaviors are “forward movement”,
“left turn”, “follow left wall”, “right turn”, “follow right wall”, and “stop”. Fig. 2 shows the essentials of
this finite state automaton. The transitions between behaviors are determined by the 3 mental states (#1,
#2, #3) of the user, supplemented by 4 perceptual states of the environment determined from the robot’s
sensory readings (left wall, right wall, wall or obstacle in front, and free space). In addition, the controller
uses two other perceptual states (left obstacle and right obstacle) and a few internal memory variables for
obstacle avoidance and stable implementation of the different behaviors. The robot’s interpretation of a
particular mental state depends on the perceptual state of the robot. Thus, in an open space, mental state
#2 means “left turn”; on the other hand, if a wall is detected on the left-hand side, mental state #2 is
interpreted as “follow left wall”. Similarly, depending on the perceptual state of the robot, mental state #3
can mean “right turn” or “follow right wall”. However, mental state #1 always means “move forward”.
Moreover, the robot “stops” whenever it perceives an obstacle in front, to avoid collisions. Altogether,
the experimental subjects felt that our control scheme was simple and intuitive to use.
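The context-dependent interpretation of mental states described above can be summarized as a small transition function, sketched below in Python (behavior names and the perceptual-state encoding are our own assumptions for illustration, not the authors' exact automaton, which also uses extra obstacle states and internal memory variables):

```python
def next_behavior(mental_state: int, percept: str) -> str:
    """Map the user's mental state (#1-#3) and the robot's perceptual
    state to one of the six behaviors of the finite state automaton."""
    if percept == "obstacle_front":
        return "stop"  # safety rule: always stop before a frontal obstacle
    if mental_state == 1:
        return "forward"  # mental state #1 always means "move forward"
    if mental_state == 2:
        # #2 means "left turn" in open space, "follow left wall" near a wall
        return "follow_left_wall" if percept == "left_wall" else "left_turn"
    if mental_state == 3:
        # #3 is the mirror image of #2 on the right-hand side
        return "follow_right_wall" if percept == "right_wall" else "right_turn"
    raise ValueError("unknown mental state")
```

For example, `next_behavior(2, "free_space")` yields a left turn, while the same mental state next to a left wall yields wall following, matching the interpretation rules described in the text.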
Brain-Machine Interface Protocol
During an initial training period of 5 and 3 days, respectively, the two subjects learned to perform 3
mental tasks of their choice with the interface operating in mode I. Neither subject had previous
experience with meditation or specific mental training. The subjects tried the following mental tasks:
“relax”, imagination of “left” and “right” hand (or arm) movements, “cube rotation”, “subtraction”, and
“word association”. The tasks consisted of getting relaxed, imagining repetitive self-paced movements of
the limb, visualizing a spinning cube, performing successive elementary subtractions by a fixed number
(e.g., 64–3=61, 61–3=58, etc.), and concatenating related words. All tasks (including relax) were
performed with eyes open. After a short evaluation, the experimental subjects “A” and “B” chose to
work with the tasks relax-left-cube and relax-left-right, respectively. In the sequel, we will refer to these
mental tasks as #1, #2 and #3 (i.e., relax is #1, left is #2, and cube or right is #3).
RESULTS AND DISCUSSION
The task was to drive the robot through different rooms in a house-like environment (Fig. 1). During
training the subject had to drive the robot mentally from a starting position to a first target room; once the
robot arrived, a second target room was drawn at random and so on. At the end of the second day of
training, the trajectories were qualitatively rather good and the robot never failed to visit the target room.