20-03-2012, 11:45 AM
The Blue Brain
markham.pdf (Size: 959.78 KB / Downloads: 38)
Concepts of intelligence
IBM built the computer Deep Blue [3] to compete against, and eventually beat, Garry Kasparov at chess, shaking the foundations of our concepts of intelligence. Deep Blue combined conventional methods from computer science, but won essentially by brute force, considering 200 million moves per second using if–then-like routines (BOX 1). Nevertheless, this defeat of a human master by a computer on such a complex cognitive task posed the question of whether the relevant world of an organism could simply be described by enough if–then conditions. It could perhaps be argued that artificial intelligence, robotics and even the most advanced computational neuroscience approaches that have been used to model brain function are merely if–then-like conditions in various forms. Adaptation and learning algorithms have massively enhanced the power of these systems, but it could also be claimed that such approaches merely enable the system to acquire more if–then rules automatically. Regardless of the complexity of such an operation, its quality is much the same at any stage of the computation, and this form of intelligence could therefore be considered 'linear intelligence'.
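The brute-force, if–then style of play described above can be sketched in a few lines. This is a hypothetical toy example using the game of Nim rather than chess, and is nothing like Deep Blue's actual engine: the program "plays perfectly" purely by exhaustively applying if–then rules to every reachable position.

```python
# Brute-force game search as nothing but exhaustively applied if-then rules.
# Toy game (Nim): players alternately take 1..max_take stones; a player who
# cannot move (no stones left) loses.

def best_move(stones, max_take=3):
    """Return (move, wins) for the player to act in a position
    with `stones` stones remaining."""
    if stones == 0:
        return None, False              # no move left: current player loses
    for take in range(1, min(max_take, stones) + 1):
        _, opponent_wins = best_move(stones - take, max_take)
        if not opponent_wins:           # if this move leaves a losing position...
            return take, True           # ...then take it
    return 1, False                     # every move loses; play on regardless

move, wins = best_move(10)
```

Every "decision" here is a condition checked against an exhaustively enumerated game tree; the search never changes in kind, only in depth, which is the sense in which the text calls this linear intelligence.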
Detailed models
In 1952, Hodgkin and Huxley published their highly successful model of ionic currents, which allowed simulation of the action potential [4]. These simulations revealed the emergent behaviour of ion channels, and showed how only two types of ion channel can give rise to the action potential, the currency of the brain. These insights fuelled experiments and simulations for decades, and now explain how different combinations of ion channels underlie electrical diversity in the nervous system.
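The two-channel mechanism can be made concrete with a minimal simulation. The sketch below uses the standard published Hodgkin–Huxley squid-axon parameters with simple forward-Euler integration; it is an illustrative single-compartment toy, not one of the detailed multi-channel models referred to above.

```python
import math

# Minimal Hodgkin-Huxley simulation: a Na+ channel (gates m, h), a K+
# channel (gate n) and a passive leak are enough to produce action
# potentials in response to a constant injected current.

def hh_simulate(i_ext=10.0, t_max=50.0, dt=0.01):
    g_na, g_k, g_l = 120.0, 36.0, 0.3       # max conductances (mS/cm^2)
    e_na, e_k, e_l = 50.0, -77.0, -54.387   # reversal potentials (mV)
    c_m = 1.0                               # membrane capacitance (uF/cm^2)

    def rates(v):
        a_m = 0.1 * (v + 40) / (1 - math.exp(-(v + 40) / 10))
        b_m = 4.0 * math.exp(-(v + 65) / 18)
        a_h = 0.07 * math.exp(-(v + 65) / 20)
        b_h = 1.0 / (1 + math.exp(-(v + 35) / 10))
        a_n = 0.01 * (v + 55) / (1 - math.exp(-(v + 55) / 10))
        b_n = 0.125 * math.exp(-(v + 65) / 80)
        return a_m, b_m, a_h, b_h, a_n, b_n

    v = -65.0                               # start at rest
    a_m, b_m, a_h, b_h, a_n, b_n = rates(v)
    m, h, n = a_m / (a_m + b_m), a_h / (a_h + b_h), a_n / (a_n + b_n)
    trace = []
    for _ in range(round(t_max / dt)):
        a_m, b_m, a_h, b_h, a_n, b_n = rates(v)
        i_na = g_na * m**3 * h * (v - e_na)
        i_k = g_k * n**4 * (v - e_k)
        i_l = g_l * (v - e_l)
        v += dt * (i_ext - i_na - i_k - i_l) / c_m
        m += dt * (a_m * (1 - m) - b_m * m)
        h += dt * (a_h * (1 - h) - b_h * h)
        n += dt * (a_n * (1 - n) - b_n * n)
        trace.append(v)
    return trace

trace = hh_simulate()   # spikes: the trace crosses 0 mV repeatedly
```

The "emergent behaviour" the text mentions is visible directly: nothing in the code says "fire a spike", yet the interaction of the m, h and n gates produces one.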
Wilfrid Rall realized that the complexity of the dendritic and axonal arborizations of neurons would profoundly affect neuronal processing, and developed cable theory for neurons [5] despite fierce resistance from a community that argued against the need to consider such complexity.
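Rall's insight is easy to demonstrate with a toy compartmental model. The sketch below is a hypothetical passive cable with made-up conductance values, not a fit to any real neuron: the dendrite is discretized into electrically coupled RC compartments, and a current injected at the distal tip reaches the "soma" strongly attenuated, so morphology shapes the signal.

```python
# Passive compartmental cable in the spirit of Rall's cable theory.
# Hypothetical parameters: g_leak couples each compartment to rest (0 mV),
# g_axial couples neighbouring compartments. Steady state is found by
# Gauss-Seidel relaxation of the current-balance equations.

def cable_steady_state(n=20, g_leak=0.05, g_axial=1.0, i_inj=1.0):
    """Steady-state voltages of an n-compartment passive cable with
    current i_inj injected into the last (distal) compartment."""
    v = [0.0] * n
    for _ in range(20000):              # relax until converged
        for i in range(n):
            g, flux = g_leak, 0.0       # total conductance, incoming current
            if i > 0:
                g += g_axial
                flux += g_axial * v[i - 1]
            if i < n - 1:
                g += g_axial
                flux += g_axial * v[i + 1]
            if i == n - 1:
                flux += i_inj           # injection at the distal tip
            v[i] = flux / g             # current balance for compartment i
    return v

v = cable_steady_state()
# v decays monotonically from the distal tip (v[-1]) toward the soma (v[0]).
```

Even this crude discretization shows why tens of thousands of compartments matter: where an input lands on the tree determines how much of it the soma ever sees.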
The quantum leap
Neurons receive inputs from thousands of other neurons, which are intricately mapped onto different branches of highly complex dendritic trees; tens of thousands of compartments are needed to represent these trees accurately. There is therefore a minimal size of microcircuit, and a minimal complexity of neuronal morphology, below which a model cannot fully sustain a neuron. Making this quantum leap requires a massive increase in computational power, an increase that is provided by IBM's Blue Gene supercomputer [2] (FIG. 1). By exploiting the computing power of Blue Gene, the Blue Brain Project [1] aims to build accurate models of the mammalian brain from first principles.
Building the Blue Column
Building the Blue Column requires a series of data manipulations (FIG. 4). The first step is to parse each three-dimensional morphology and correct errors due to the in vitro preparation and reconstruction. The repaired neurons are placed in a database from which statistics for the different anatomical classes of neurons are obtained. These statistics are then used to clone an indefinite number of neurons in each class to capture the full morphological diversity. The next step is to take each neuron and insert ion channel models in order to produce the array of electrical types.
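The pipeline of steps just described can be sketched as a chain of small functions. Everything below is a hypothetical illustration: the class names, the single "total dendrite length" statistic and the repair factor are invented stand-ins, not the Blue Brain Project's actual data structures or algorithms.

```python
import random
from dataclasses import dataclass

# Hypothetical sketch of the repair -> statistics -> clone -> channel-insertion
# pipeline. A real morphology is a full 3D branch structure; here one number
# stands in for it.

@dataclass
class Morphology:
    anatomical_class: str        # e.g. "pyramidal", "basket"
    total_dendrite_um: float     # toy stand-in for the full 3D reconstruction

def repair(raw: Morphology) -> Morphology:
    """Step 1: correct shrinkage/cut artefacts from the in vitro slice
    (here, an invented 10% shrinkage correction)."""
    return Morphology(raw.anatomical_class, raw.total_dendrite_um * 1.1)

def class_statistics(db):
    """Step 2: per-anatomical-class statistics over the repaired database."""
    grouped = {}
    for m in db:
        grouped.setdefault(m.anatomical_class, []).append(m.total_dendrite_um)
    return {cls: sum(vals) / len(vals) for cls, vals in grouped.items()}

def clone(stats, anatomical_class, n, jitter=0.2):
    """Step 3: sample new morphologies around the class statistics,
    capturing diversity via random jitter."""
    mean = stats[anatomical_class]
    return [Morphology(anatomical_class,
                       mean * random.uniform(1 - jitter, 1 + jitter))
            for _ in range(n)]

def insert_channels(m: Morphology, electrical_type: str):
    """Step 4: pair a morphology with an ion-channel model."""
    return (m, electrical_type)

db = [repair(Morphology("pyramidal", d)) for d in (4000.0, 5000.0)]
stats = class_statistics(db)
neurons = [insert_channels(m, "regular_spiking")
           for m in clone(stats, "pyramidal", 10)]
```

The point of the sketch is the data flow: raw reconstructions are never used directly, and the cloning step means the model circuit can contain far more neurons than were ever recorded.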