14-11-2012, 05:09 PM
Brain-actuated Control of Robot Navigation
Introduction
Brain-driven robot navigation control is a new field stemming from recent successes in brain
interfaces. Broadly speaking, brain interfaces comprise any system that aims to enable user
control of a device based on brain activity-related signals, be they conscious or
unconscious, voluntary or evoked, invasive or non-invasive. Strictly speaking, the term
should also include technology that directly affects brain states (e.g., transcranial magnetic
stimulation), but these are not usually included in the terminology. Two main families of
brain interfaces exist according to the usual terminology, although the terms are often used
interchangeably as well: i) Brain-computer interfaces (or BCIs) usually refers to brain-to-computer
interfaces that use non-invasive technology; ii) Brain-machine interfaces (or BMIs)
often refers to implanted brain interfaces. This chapter shall use these terms (BCI and BMI)
as defined in this paragraph. Other sub-categories of BCIs are discussed below.
The idea of BCIs is credited to Jacques Vidal (1973), who first proposed it in concrete
scientific and technological terms. Until the late 1990s the area progressed slowly as a result
of work in but a handful of laboratories in Europe and North America, most notably the
groups at the Wadsworth Center (Albany, NY) and the Graz (Austria) group led by G.
Pfurtscheller. Aside from the small number of researchers working on BCIs in the '70s and '80s,
the slow progress was largely due to limitations in: i) our understanding of brain
electrophysiology, ii) the quality and cost of recording equipment, iii) computer memory and
processing speed, and iv) the performance of pattern recognition algorithms. The state-of-the-art
in these areas and the number of BCI researchers have increased dramatically in the
last ten years or so. Yet there is still an enormous amount of work to be done before BCIs
can be used reliably outside controlled laboratory conditions.
Overview of BCIs
A typical – and simplified – BCI system is illustrated in Fig. 1. In principle, the easiest way
for most users to control a device with their thoughts would be to have a computer read the
words a user is thinking. For example, if a user wants to steer a robot to the left, he/she would
only need to think of the word 'left'. However, while attempts at doing this have been made,
such an approach currently leads to true positive recognition rates that are only slightly above
chance – at best. At present BCIs work in two ways (see item A in Fig. 1): either i) brain
signals are monitored while the user performs a specified cognitive task (e.g., imagination of hand movements), or ii) the computer makes decisions based on the user’s brain’s
involuntary response to a particular stimulus (e.g., flashing of an object on a computer
screen), although both approaches have been combined recently as well (Salvaris &
Sepulveda, 2010).
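As a concrete illustration of the first approach (monitoring brain signals while the user performs a cognitive task), the sketch below classifies a single trial of imagined left- vs. right-hand movement. It is a toy example, not the method of any system cited in this chapter: the channel names (C3, C4), the sampling rate, and the simple rule "imagined right-hand movement reduces mu-band (8–12 Hz) power over the contralateral C3 electrode" are illustrative assumptions, and real systems use far more robust feature extraction and trained classifiers.

```python
import numpy as np

FS = 250           # assumed sampling rate in Hz (illustrative)
MU_BAND = (8, 12)  # mu rhythm band over the sensorimotor cortex

def band_power(signal, fs, band):
    """Mean spectral power of `signal` within `band` (Hz), via the FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return power[mask].mean()

def classify_trial(c3, c4, fs=FS):
    """Toy rule: imagined right-hand movement desynchronizes (weakens) the
    mu rhythm over the contralateral left motor cortex (electrode C3), so
    lower mu power at C3 than at C4 suggests 'right', and vice versa."""
    if band_power(c3, fs, MU_BAND) < band_power(c4, fs, MU_BAND):
        return "right"
    return "left"
```

In practice the band powers would be fed to a classifier trained on labelled trials from the same user, rather than compared with a fixed rule as done here.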
Invasive vs. non-invasive
In a narrow sense, there is an obvious difference between invasive interfaces (i.e.,
implanted) and those (non-invasive) that go on the skin surface or farther from the body. In
a strict sense, however, any technology that deposits external elements on the body, be it
matter or even photons and magnetic fields, is invasive in that it directly affects the internal
physical state of the body. As discussed under Recording Equipment, below, technologies
such as near-infrared spectroscopy (NIRS, which deposits near-infrared light on tissue),
magnetic resonance imaging (which applies magnetic fields) and positron emission
tomography (which requires the administration of a radioactive substance) are all invasive,
as the very mechanisms by which they work require that the observed tissue (and
surrounding tissues) be internally disturbed. For most of these technologies, the effects on the
body are well known. For NIRS, however, the effects of the absorbed energy on brain
tissue have not been studied with respect to long-term use, or even short-term but prolonged
use (e.g., rare sessions, but with hours of use each time), and thus caution is recommended at
this point in time.
Dependent vs. Independent BCIs
BCIs have been classified as either dependent or independent (Wolpaw et al., 2000). A
dependent BCI does not rely on the brain's usual output pathways (e.g., speech, facial
expression, limb movement, etc.) to express the internal mental state, but it does require that
some functionality outside the brain (e.g., gaze control) remain. In practice what this means
is that a dependent BCI is not entirely reliant on the brain signals alone. For example (see
SSVEP BCIs below), in some cases the user must be able to fixate his/her gaze on a desired
flashing object on a computer screen in order for the BCI to determine which object (or
choice) the user wants amongst a finite set. This is a problem in principle as a ‘pure’ BCI
should not require any body-based functionality beyond being conscious and being able to
make decisions at the thought level. However, in practice very few users have no overt (i.e.,
observable via visual or auditory means) action abilities left. These so-called totally locked-in
individuals would not be able to benefit from a dependent BCI, so independent BCIs are
needed in this case. On the other hand, most disabled and otherwise functionally restricted
people (including some locked-in individuals), as well as able-bodied people, have at least
some voluntary eye movement control, which motivates further research on dependent BCIs.
Independent BCIs, in contrast, do not require any physical ability on the part of the user
other than the ability to mentally focus and make decisions.
Spontaneous vs. evoked vs. event-related
Evoked potentials (EPs) are observable brain responses to a given stimulus, e.g., a flashing
letter, a sound, etc., whether the user is aware of or interested in the stimulus or not. EPs are
time locked to the stimulus in that the observed brain signal will contain features that are
consistently timed with respect to the stimulus. For example, if a user fixates his/her gaze
on a flashing character on a computer screen, the flashing frequency (if it lies between about
6 Hz and 35 Hz) will be observable in the visual cortex signals. Other brain signals can be
spontaneous, i.e., they do not require a given stimulus to appear. Thoughts in general can be
assumed to be spontaneous (although strictly speaking this is still debatable). Examples
of spontaneous potentials are those related to movement intentions in the sensorimotor
cortex, which are thus not a result of a specific input. Finally, a third class of signals is termed
‘event-related potentials’ (ERPs). They are related to evoked potentials but also include brain
responses that are not directly elicited by the stimulus. That is, they can include spontaneous
or deliberate thoughts such as mental counting (Rugg and Coles, 1995), but they have a
well-controlled time window within which brain signals are monitored, whether
spontaneous or elicited by a specific stimulus. The term event-related potential is currently
seen as more accurate for all but the most restricted stimulation protocols and is preferred
over the term evoked potential.
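The time-locked flicker response described above is what an SSVEP-style BCI exploits: because each on-screen target flashes at a distinct frequency, finding which frequency dominates the visual-cortex signal reveals the target the user is fixating. The sketch below is a minimal, hypothetical illustration of that idea using a plain FFT over the 6–35 Hz range mentioned in the text; real systems use more sophisticated detectors (e.g., correlation with harmonics) and real recorded EEG rather than a clean sinusoid.

```python
import numpy as np

def dominant_flicker_frequency(signal, fs, fmin=6.0, fmax=35.0):
    """Return the frequency (Hz) carrying the most spectral power within
    the typical SSVEP range. In an SSVEP BCI this frequency would be
    matched against the known flicker rates of the on-screen targets."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= fmin) & (freqs <= fmax)
    return float(freqs[band][np.argmax(power[band])])
```

With a 2-second window at a 256 Hz sampling rate, the frequency resolution is 0.5 Hz, which is why target flicker rates in such systems are chosen to be well separated.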