01-05-2013, 01:02 PM
Bioinspired Computing - the Evolving Scenario
This article aims to give a very generic introduction to bioinspired
computing, not only from the established viewpoint but also from the
perspective of how the area has evolved. Biological inspiration in
computing is, of course, nothing new. That we so often compare our
desktops, laptops and palmtops with the unbeatable (in the tasks that
really matter) neck-top tells the whole story. Subconsciously, the
whole idea of a digital computer is bio-inspired. The metaphor of the
brain as CPU is as old as the modern digital computer, and the
input-output devices as the indriyas (sense organs) of the computer
are also well known. Less familiar metaphors include the RAM as the
mind and the operating system as culture.
In earlier days, possibly due to the insulation between disciplines,
bio-inspiration in the computing field was not so profound. The
Dartmouth Summer Research Conference on Artificial Intelligence (1956)
is considered by many as a seminal event in the field. It featured
giants like John McCarthy, Marvin Minsky and Claude Shannon, and
proposed that “a 2 month, 10 man study of artificial intelligence be
carried out during the summer of 1956 at Dartmouth College in Hanover,
New Hampshire.”
The human brain is of course a very natural metaphor for a computer.
Modelling the human brain is an attempt that can be traced back
thousands of years, to the model of sahasrara padma, the
thousand-petal lotus. After the initial excitement over perceptrons,
the field suffered a setback with the realisation that a single
perceptron cannot solve even simple problems, such as XOR, whose
classes are not linearly separable. Interest was rekindled after
Rumelhart (who passed away early this year) developed the error
back-propagation algorithm for multi-layer perceptrons in the 1980s.
The area of artificial neural networks (ANNs) was then firmly
established, with a number of paradigms emerging, such as Kohonen’s
self-organising network and the Boltzmann machine. They all try to
capture some essential behaviours of the human brain, specifically the
distributed and interconnected nature of the neuronal cells. Learning
in such networks was abstracted as adjusting the weights associated
with synaptic connections. Various refinements of these methodologies
have appeared over the last two decades.
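The idea that learning amounts to adjusting synaptic weights can be seen in the original perceptron learning rule. The sketch below is illustrative (the data, function name and learning rate are made-up, not from the article): a single perceptron nudges its weights after each mistake until it classifies a linearly separable problem (logical AND) correctly.

```python
# Minimal sketch of the perceptron learning rule: "learning" here is
# nothing more than adjusting connection weights in response to errors.
# All names and values are illustrative assumptions.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), target) pairs with 0/1 targets."""
    w = [0.0, 0.0]   # "synaptic" weights
    b = 0.0          # bias (threshold)
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # Step activation: fire if the weighted sum exceeds threshold
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Weight update: nudge each weight to reduce the error
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Logical AND is linearly separable, so a single perceptron learns it;
# XOR is not, which is exactly the limitation noted above.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
predictions = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
               for (x1, x2), _ in and_data]
```

Running the same loop on XOR data would never converge, no matter how many epochs are allowed, which is what triggered the period of setback described above.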
Support vector machines are one of the successful refinements: they
ensure that the pattern classification is characterised by an optimal
band (margin) of separation around the decision boundary, rather than
by an arbitrary separating plane.
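The "band of separation" can be made concrete with a toy computation (not the SVM training algorithm itself; the points and candidate lines below are made-up values): for any candidate separating line, the margin is twice the smallest distance from a training point to that line, and SVM training chooses the line that maximises it.

```python
# Illustrative sketch: measure the margin of two candidate separators
# for two toy 2-D clusters. Both lines classify the data correctly,
# but SVM would prefer the one with the wider band of separation.
import math

def margin(w, b, points):
    """Half-width of the separating band: min point-to-line distance
    for the line w[0]*x + w[1]*y + b = 0."""
    norm = math.hypot(w[0], w[1])
    return min(abs(w[0] * x + w[1] * y + b) / norm for x, y in points)

points = [(0, 0), (0, 1), (3, 3), (3, 4)]  # two small clusters

narrow = margin((1, 0), -0.5, points)  # line x = 0.5, hugs one cluster
wide   = margin((1, 1), -3.5, points)  # line x + y = 3.5, centred between them
```

Here `wide > narrow`: the centred line leaves more room on both sides, and that maximal band is exactly what distinguishes an SVM's decision boundary from an arbitrary separating plane.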