
Brain-actuated Humanoid Robot Navigation Control using Asynchronous Brain-Computer Interface


Abstract

Brain-actuated robotic systems have been proposed as a new control interface that translates different human intentions into appropriate motion commands for robotic applications. This study proposes a brain-actuated humanoid robot navigation system that uses an electroencephalography-based brain-computer interface (EEG-BCI). The experimental procedure consisted of offline training sessions, online feedback test sessions, and real-time control sessions. During the offline training sessions, amplitude features were extracted from the EEG using band power analysis, and the informative feature components were selected using the Fisher ratio and the linear discriminant analysis (LDA) distance metric. The Intentional Activity Classifier (IAC) and the Motor Direction Classifier (MDC) were hierarchically structured and trained to build an asynchronous BCI system. During the navigation experiments, each subject controlled the humanoid robot in an indoor maze using the BCI system with real-time images from the camera on the robot's head. The results showed that three subjects successfully navigated the indoor maze using the proposed brain-actuated humanoid robot navigation system.
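To make the hierarchical classifier structure concrete, below is a minimal Python sketch of the two-stage asynchronous decision described above: the IAC first decides whether the user is in an intentional (control) state, and only then does the MDC assign a motor direction. The feature dimensionality, class labels, and the use of scikit-learn's LDA are illustrative assumptions, not the paper's exact pipeline.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Dummy band-power feature vectors: 80 trials x 6 selected feature components.
X = rng.normal(size=(80, 6))
# Task labels: 0 = rest (NC), 1 = left hand, 2 = right hand, 3 = foot.
y = rng.integers(0, 4, size=80)

# Stage 1: Intentional Activity Classifier (IAC), control vs. non-control.
iac = LinearDiscriminantAnalysis().fit(X, (y > 0).astype(int))

# Stage 2: Motor Direction Classifier (MDC), trained on control trials only.
control = y > 0
mdc = LinearDiscriminantAnalysis().fit(X[control], y[control])

def classify(x):
    """Asynchronous decision: consult the MDC only when the IAC
    detects an intentional (control) state."""
    if iac.predict(x[None, :])[0] == 0:
        return "rest"  # non-control state: no motion command is issued
    return {1: "left", 2: "right", 3: "foot"}[mdc.predict(x[None, :])[0]]

print(classify(X[0]))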

INTRODUCTION

Brain-computer interface (BCI) systems have been devised to translate different mental states into appropriate commands. From a clinical viewpoint, electroencephalography (EEG)-based BCIs have received increasing interest because they are easier to record and carry fewer risks than more invasive BCI systems [1], [2]. Recent studies have demonstrated the feasibility of EEG-based brain-actuated devices, such as mobile robots, neuroprosthetics, wheelchairs, and humanoid robots [3], [5]-[7].

METHODS

The system consisted of three sub-systems: the BCI system, the interface system, and the control system. Across the three main procedures (offline training, online feedback testing, and real-time control), the system processed three different types of data: sensed visual information, measured EEG signals, and motion commands.
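As a rough illustration of this data flow, the sketch below wires the three sub-systems together for one processing cycle. All class and method names are hypothetical placeholders; the paper does not specify an implementation.

class BCISystem:
    def decode(self, eeg_window):
        # Would run feature extraction plus the hierarchical IAC/MDC classifiers.
        return "rest"

class InterfaceSystem:
    def show(self, camera_frame, mental_state):
        # Would render the robot's camera view and the decoded state on the monitor.
        pass

class ControlSystem:
    def command(self, mental_state):
        # Would translate a decoded mental state into a motion command.
        return {"left": "turn_left", "right": "turn_right",
                "foot": "walk_forward"}.get(mental_state, "stop")

bci, ui, ctrl = BCISystem(), InterfaceSystem(), ControlSystem()
eeg_window, camera_frame = None, None   # stand-ins for live data streams
state = bci.decode(eeg_window)          # measured EEG -> mental state
ui.show(camera_frame, state)            # sensed visual information -> user
motion = ctrl.command(state)            # mental state -> motion command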

Training Protocol

During the offline training sessions, subjects were asked to perform motor imagery (MI) tasks, referred to as "left hand imagery", "right hand imagery", and "foot imagery", or to remain in the Non-Control (NC) state, referred to as "rest". The subjects were instructed to imagine the same foot consistently throughout the experiment to prevent confusion. During the first two days, each subject underwent three offline training sessions per day. Each session consisted of 20 trials per task, and the interface system presented training cues on the interface monitor as illustrated in Fig. 1.A. During each session, nothing happened for the first 2 s; then the first text cue (e.g., "rest") appeared with a solid circle in the center of the screen.
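The following short sketch lays out one such training session. The 20 trials per task and the initial 2 s blank follow the description above; the randomized presentation order is an assumption not stated in the text.

import random

TASKS = ["rest", "left hand imagery", "right hand imagery", "foot imagery"]
TRIALS_PER_TASK = 20

schedule = TASKS * TRIALS_PER_TASK
random.shuffle(schedule)                  # assumed randomized presentation

print("blank screen for 2 s")             # nothing happens for the first 2 s
for trial, cue in enumerate(schedule, start=1):
    print(f"trial {trial:02d}: show text cue '{cue}' with a solid circle")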

Experimental Setup

Three healthy male subjects (right-handed, age 26.3 ± 3.1 yr) participated in the experiments; none had taken part in any prior BCI experiment. They were required to navigate from a departure point to a destination point in the indoor maze via five waypoints, as illustrated in Fig. 2. The maze measured 1.5 m (width) by 3 m (length). Circled numbers marking the upcoming waypoints and arrows indicating the guided direction were posted on the walls, and the user was able to check these guide signs through the interface system. To become familiar with the control system, each subject underwent a fifteen-minute open trial before the main experiments. All subjects then participated in every session of the real-time control scenario, which proceeded as follows. Each subject had access to the robot-state and mental-state information on the PC screen through the interface system. A camera on the robot acquired visual images at 5 frames per second, and the mental states from the fading feedback system were updated every 250 ms.
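A simple polling loop, sketched below, illustrates how these two update rates (camera frames every 200 ms, feedback updates every 250 ms) could coexist; the scheduling code itself is an assumption, not the authors' implementation.

import time

CAMERA_PERIOD = 1.0 / 5      # 5 frames per second -> one frame every 0.2 s
FEEDBACK_PERIOD = 0.250      # fading-feedback update every 250 ms

next_frame = next_feedback = time.monotonic()
for _ in range(60):          # a few hundred milliseconds of the polling loop
    now = time.monotonic()
    if now >= next_frame:
        next_frame += CAMERA_PERIOD
        print(f"{now:.3f}: grab camera frame")            # sensed visual info
    if now >= next_feedback:
        next_feedback += FEEDBACK_PERIOD
        print(f"{now:.3f}: update mental-state feedback")  # decoded state
    time.sleep(0.01)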

RESULTS

Feature Selection

To improve the signal-to-noise ratio and enhance the classification performance, a time-channel-frequency feature set was selected for each subject, as explained in Section II.C. Table I lists the selected feature components of the three subjects.
For the left-hand feature components, the two top-scoring channels lay over the right sensorimotor cortex (i.e., electrode locations C4, CP4, or FC4), and frequencies around the alpha (mu) band (i.e., 9-15 Hz) were selected. For the right-hand feature components, channels over the left sensorimotor cortex (i.e., electrode locations C3 or FC3) and frequencies around the alpha (mu) band (i.e., 7-16 Hz) were selected. In the experimental setup procedure, the subjects were instructed to imagine movement of one side of the foot; because subject A chose the right foot and the others chose the left foot, the selected channel locations tended to be biased toward the corresponding side. For the foot feature components, the alpha (mu) band (i.e., 6-14 Hz) and the beta band (i.e., 21-32 Hz) were selected.
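For illustration, the sketch below computes band-power features with a Welch estimate and ranks them with a two-class Fisher ratio, the selection criterion named in the abstract. The sampling rate, band edges, and data shapes are placeholder assumptions, not the paper's recorded values.

import numpy as np
from scipy.signal import welch

FS = 250                                     # assumed EEG sampling rate (Hz)
BANDS = {"mu": (9, 15), "beta": (21, 32)}    # bands similar to those reported

rng = np.random.default_rng(0)
eeg = rng.normal(size=(80, 3, FS * 2))   # 80 trials x 3 channels x 2 s epochs
labels = rng.integers(0, 2, size=80)     # two classes, e.g. left vs. right MI

def band_power(trials, lo, hi):
    """Mean PSD in [lo, hi] Hz per trial and channel (Welch estimate)."""
    freqs, psd = welch(trials, fs=FS, nperseg=FS)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[..., mask].mean(axis=-1)      # shape: (trials, channels)

def fisher_ratio(feat, y):
    """(m1 - m2)^2 / (v1 + v2) per feature; larger = more discriminative."""
    a, b = feat[y == 0], feat[y == 1]
    return (a.mean(0) - b.mean(0)) ** 2 / (a.var(0) + b.var(0))

for name, (lo, hi) in BANDS.items():
    scores = fisher_ratio(band_power(eeg, lo, hi), labels)
    print(name, np.round(scores, 3))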

CONCLUSION

Although our BCI system is less accurate than the menu-based humanoid robot navigation system [5], it is sufficient for navigating the indoor environment using the proposed direct-control paradigm. Furthermore, this study introduces a novel navigation system for controlling a humanoid robot with low-level motion commands through an asynchronous BCI. In our control system, subjects were able to command the humanoid robot to position its head at any angle, turn its body to a target angle, and walk to a destination position. As a result, the ratio of the time required to operate the robot by mental control to the time required by manual keyboard control was 1.31. A previous investigation by Millan et al. [1] obtained a ratio of 1.35 with an agent-based model that restricts the motion of the robot according to environmental states. In our experiments, the ratio of the distance travelled under mental control to that under manual control averaged approximately 99%. This study demonstrates the feasibility of a person controlling a humanoid robot in a remote place as if he or she were mentally synchronized with the robot.
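As an illustration of this low-level command set, the sketch below exposes the three commands (head positioning, body turning, walking) as a minimal interface. The method names, units, and arguments are assumptions; the robot's actual API is not described here.

from dataclasses import dataclass

@dataclass
class HumanoidCommands:
    head_angle_deg: float = 0.0
    body_angle_deg: float = 0.0

    def position_head(self, angle_deg):
        self.head_angle_deg = angle_deg      # pan the head-mounted camera
    def turn_body(self, target_deg):
        self.body_angle_deg = target_deg     # rotate toward the target angle
    def walk_to(self, x_m, y_m):
        print(f"walking to ({x_m}, {y_m}) facing {self.body_angle_deg} deg")

robot = HumanoidCommands()
robot.position_head(-30)     # look toward the next waypoint
robot.turn_body(90)          # turn the body to the target angle
robot.walk_to(1.0, 2.5)      # walk to the destination position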