Neuromorphic engineering, also known as neuromorphic computing, is a concept developed by Carver Mead in the late 1980s, describing the use of very-large-scale integration (VLSI) systems containing electronic analog circuits to mimic neuro-biological architectures present in the nervous system. In recent times the term neuromorphic has been used to describe analog, digital, and mixed-mode analog/digital VLSI and software systems that implement models of neural systems (for perception, motor control, or multisensory integration). At the hardware level, neuromorphic computing can be realized with oxide-based memristors, threshold switches, and transistors.

A key aspect of neuromorphic engineering is understanding how the morphology of individual neurons, circuits, applications, and overall architectures creates desirable computations, affects how information is represented, influences robustness to damage, incorporates learning and development, adapts to local change (plasticity), and facilitates evolutionary change.

Neuromorphic engineering is an interdisciplinary subject that takes inspiration from biology, physics, mathematics, computer science, and electronic engineering to design artificial neural systems, such as vision systems, head-eye systems, auditory processors, and autonomous robots, whose physical architecture and design principles are based on those of biological nervous systems.
Neuromorphic Chips
The biological brain implements massively parallel computation using an architecture that is fundamentally different from that of today's computers. Even a rudimentary comparison reveals some obvious differences. The vast majority of silicon microprocessors are organized around a global clock, whereas brain circuits operate in an asynchronous, event-driven fashion. Instructions in present-day computer architectures are executed sequentially, whereas neural computations are massively parallel and distributed. There is a clear separation between processor and memory in microprocessors, whereas memory and computation are tightly integrated at the device (cellular) level in biology. A silicon logic gate communicates with only a few of its neighbors, while each neuron may talk to thousands of others.
Current microprocessors can crunch numbers seamlessly, but they are terrible at tasks that brains do with ease. What is it about the architecture of neural systems that makes them excel at pattern recognition, sensory integration, and cognitive tasks? How can we use insights from these systems to design better computing machines? Designing neuromorphic chips will help us investigate these questions.
Another motivation behind designing these chips is to build a platform for running large-scale, real-time simulations to aid neuroscience research. Studying a system as complex as the brain invariably leads to analytically intractable circuits or time-consuming, surgically invasive laboratory experiments. VLSI implementations of biologically accurate neural network models can result in powerful tools for experimentation, where theoretical insights can be put to the test through large-scale simulations. Hardware acceleration will allow researchers to run real-time simulations beyond the scales possible on supercomputers, and at a fraction of the cost (once commercialized) and power consumption.
A reconfigurable event-driven neuromorphic chip that we designed in collaboration with IBM researchers. It consists of 256 neurons each with 1024 dendritic inputs and 255 axonal targets.
Some areas I'm investigating are:
Digital vs Analog Silicon Neurons: Analog circuits closely mimic the continuous time operation of neurons but are noisy and imprecise, making it difficult to make them correspond to mathematical models of neurons. In addition, they do not scale well to the latest fabrication technologies. Digital circuits reliably approximate neural operations. This is sufficient for the discrete-time simulations used in computational neuroscience research. Which representation is best for a given application?
Communication circuits for neuromorphic systems: Address-Event Representations, Spike-based routing, Chip-to-chip communication.
Reconfigurable neuromorphic architectures: Chips where networks are not hardwired but set up manually by the user.
Learning circuits through spike-timing-dependent plasticity.
What level of detail is useful for computational research? Are the rich dynamics in Hodgkin-Huxley models necessary? Or do reduced models help computational tractability? How should a synapse be modeled? Binary switches? Exponential rises and decays? Neurotransmitter diffusion processes? Is molecular-level analysis required for certain phenomena (e.g. long-term memory)?
Models of the retina in a configurable chip
Models of the olfactory bulb in a configurable chip
Dendritic Trees: A good amount of processing may be carried out in the dendritic branches of neurons. How should these computations be modeled and what are efficient circuits for them?
Simulation tradeoffs: Custom circuits vs FPGAs vs GPUs vs Parallel CPU clusters
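To make the digital, discrete-time approach (and the exponential rise-and-decay synapse option) concrete, here is a minimal Python sketch of a leaky integrate-and-fire neuron driven by an exponential synaptic current. All parameter names and values are illustrative, not taken from any particular chip or published model.

```python
# Minimal sketch: discrete-time leaky integrate-and-fire (LIF) neuron
# with an exponential synapse. Parameters are illustrative only.
import math

def simulate_lif(spike_times_in, t_end=100.0, dt=0.1,
                 tau_m=10.0, tau_s=5.0, v_thresh=1.0, v_reset=0.0, w=0.6):
    """Return output spike times (ms) for a train of input spike times."""
    v = 0.0          # membrane potential
    i_syn = 0.0      # synaptic current
    out_spikes = []
    decay_m = math.exp(-dt / tau_m)   # per-step membrane leak
    decay_s = math.exp(-dt / tau_s)   # per-step synaptic decay
    input_steps = {round(t / dt) for t in spike_times_in}
    for step in range(int(t_end / dt)):
        if step in input_steps:       # incoming spike adds weighted current
            i_syn += w
        v = v * decay_m + i_syn * dt  # leaky integration of synaptic drive
        i_syn *= decay_s              # exponential synaptic decay
        if v >= v_thresh:             # threshold crossing emits a spike
            out_spikes.append(step * dt)
            v = v_reset
    return out_spikes
```

This is exactly the kind of reduced model the questions above ask about: the same neuron could instead be simulated with Hodgkin-Huxley conductances at far greater cost.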
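The address-event representation mentioned above can be sketched in software: each spike travels on a shared channel as a (timestamp, address) pair, and a receiver reconstructs per-neuron spike trains from the stream. This toy encoder/decoder illustrates only the representation, not any chip's arbitration or routing protocol; all names are hypothetical.

```python
# Minimal sketch of an address-event representation (AER) stream.
import heapq

def encode_aer(spike_trains):
    """Merge per-neuron spike times into one time-ordered event stream.

    spike_trains: dict mapping neuron address -> sorted list of spike times.
    Returns a list of (time, address) events.
    """
    streams = [[(t, addr) for t in times]
               for addr, times in spike_trains.items()]
    return list(heapq.merge(*streams))  # merge pre-sorted per-neuron streams

def decode_aer(events):
    """Rebuild per-address spike time lists from an AER event stream."""
    trains = {}
    for t, addr in events:
        trains.setdefault(addr, []).append(t)
    return trains
```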
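Spike-timing-dependent plasticity, listed above as a learning mechanism, is commonly modeled as a pair-based rule with exponential windows: a synapse is potentiated when the presynaptic spike precedes the postsynaptic one and depressed otherwise. A minimal sketch, with illustrative constants:

```python
# Pair-based STDP weight update; constants are illustrative, not fitted.
import math

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012,
            tau_plus=20.0, tau_minus=20.0):
    """Weight change for a single pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre before post -> potentiation (LTP)
        return a_plus * math.exp(-dt / tau_plus)
    elif dt < 0:  # post before pre -> depression (LTD)
        return -a_minus * math.exp(dt / tau_minus)
    return 0.0
```

Hardware STDP circuits typically implement this exponential dependence with decaying trace variables rather than explicit pairwise time differences.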