Artificial speech synthesizer control by brain-computer interface



Introduction

Artificial speech synthesizers have, for the most part, been used as a theoretical research tool for investigating normal speech production in humans. Notable exceptions include devices that replace glottal pulse activity for laryngectomy patients and text-to-speech synthesizers incorporated into augmentative and alternative communication (AAC) devices. Recent progress in the field of brain-computer interfacing (BCI) has broadened the user base for such AAC devices to include those with extreme, near-complete paralysis. In particular, methods involving electromyographic (EMG) [1] and electroencephalographic (EEG) [2-6] potentials are bridging the gap between individuals with near-total paralysis and the external, communicating world.


Subjects and Methods

Speech restoration by wireless intracortical microelectrode BCI requires implantation of an extracellular recording electrode along with electrical hardware for amplification and transmission of brain activity. For our methods, the choice of implantation site, electrode type, and decoding algorithm is crucially important to the success of the neural prosthesis. The complete neural prosthesis is illustrated in Figure 1, including the implant location, wireless telemetry subsystems, and the signal processing, neural decoding, and artificial speech synthesis modules.
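To make the flow of Figure 1 concrete, the following is a minimal, self-contained sketch of the closed-loop chain (telemetry, signal processing, decoding, synthesis). All class and function names here are hypothetical stand-ins, not the device's actual firmware, and each stage is stubbed with simulated data just enough to run.

```python
import numpy as np

class Receiver:
    """Stub for the wireless telemetry link: yields raw voltage samples."""
    def read(self, n_samples):
        return np.random.randn(n_samples)      # stands in for RF-received data

class Decoder:
    """Stub decoder: maps a per-bin firing rate to formants (F1, F2) in Hz."""
    def step(self, rate):
        return 500.0 + 10.0 * rate, 1500.0 + 20.0 * rate

def bin_rate(voltage, threshold=3.5):
    """Crude per-bin spike count via upward threshold crossings (noise units)."""
    z = (voltage - voltage.mean()) / voltage.std()
    return float(np.sum((z[1:] > threshold) & (z[:-1] <= threshold)))

receiver, decoder = Receiver(), Decoder()
for _ in range(10):                            # ten simulated 20 ms bins
    raw = receiver.read(600)                   # e.g. 20 ms at 30 kHz
    f1, f2 = decoder.step(bin_rate(raw))
    print(f"F1 = {f1:.0f} Hz, F2 = {f2:.0f} Hz")  # would drive the synthesizer
```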


Subject

The participant is a 25-year-old male with locked-in syndrome resulting from a brain stem stroke incurred at age 16. Locked-in syndrome is characterized by near-complete motor paralysis, with only small amounts of voluntary control over the ocular muscles, while the remainder of the brain responsible for cognition and sensation is left largely intact. Indeed, our subject retains voluntary motor behavior only through slow vertical movements of the eyes, allowing him to answer yes/no questions. His audition and somatic sensation were not noticeably impaired, though his visual perception has suffered due to an inability to control the eyes in a conjugate fashion.


Decoding Algorithm

We chose to decode neural signals into speech representations using a discrete-time filtering method, as opposed to discrete classification of intended phoneme or word productions (e.g., the EMG and EEG methodologies). Previous neuroimaging and computational modeling studies of speech production [17,18] found that the speech premotor cortex represents acoustic features of intended speech sounds. We therefore used formant frequencies, the resonant frequencies of the vocal tract, as continuously variable parameters for control of a formant-based artificial speech synthesizer [19].
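One standard realization of such a discrete-time filter is a Kalman filter in which the hidden state holds the formant pair (F1, F2) and its velocities, and each bin of firing rates is the observation. The sketch below is illustrative only: the matrices A, H, Q, and R are hand-set toy values, whereas in practice they would be fit from training data.

```python
import numpy as np

class KalmanDecoder:
    def __init__(self, A, H, Q, R, x0, P0):
        self.A, self.H, self.Q, self.R = A, H, Q, R
        self.x, self.P = x0, P0

    def step(self, y):
        """One predict/update cycle; y is the firing-rate vector for one bin."""
        # Predict: propagate state and covariance through the dynamics model.
        x_pred = self.A @ self.x
        P_pred = self.A @ self.P @ self.A.T + self.Q
        # Update: correct the prediction with the neural observation.
        S = self.H @ P_pred @ self.H.T + self.R
        K = P_pred @ self.H.T @ np.linalg.inv(S)
        self.x = x_pred + K @ (y - self.H @ x_pred)
        self.P = (np.eye(len(self.x)) - K @ self.H) @ P_pred
        return self.x[:2]                      # decoded (F1, F2) in Hz

# Toy dimensions: 4-D state [F1, F2, dF1, dF2], 10 recorded units, 20 ms bins.
dt, n_units = 0.02, 10
A = np.eye(4); A[0, 2] = A[1, 3] = dt          # constant-velocity dynamics
H = np.random.randn(n_units, 4) * 0.01         # stand-in tuning matrix
dec = KalmanDecoder(A, H, Q=np.eye(4) * 1e-2, R=np.eye(n_units),
                    x0=np.array([500.0, 1500.0, 0.0, 0.0]), P0=np.eye(4))
print(dec.step(np.random.poisson(5, n_units).astype(float)))
```

The decoded (F1, F2) pair is then passed to the formant synthesizer every bin, so the subject hears the acoustic consequence of the decoded state essentially continuously.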



Implant

The implanted electronics of the neural prosthesis consist of a recording electrode and amplification and wireless telemetry systems. The recording electrode used in the device is a chronically implantable two-channel Neurotrophic Electrode [9,10], which is unique among multiunit extracellular electrodes.
Standard multiunit extracellular electrodes come in an array (multielectrode array, MEA) configuration with many tens of penetrating tips, each capable of recording one to two neural action potentials, or spikes [11-14]. These electrode arrays are implanted on the surface of the cerebral cortex, where they float while attached to amplification and transmission hardware; most such systems use wired connections, whereas our implant is the only wireless one.
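For illustration, a common way such per-tip spikes are extracted from the amplified extracellular trace is upward crossing of a threshold set against a robust noise estimate, with a brief lockout so a single action potential is not counted twice. The threshold and lockout values below are illustrative assumptions, not the parameters of this device.

```python
import numpy as np

def detect_spikes(v, fs, thresh_sd=4.0, lockout_ms=1.0):
    """Return spike times (s) from a single-channel voltage trace v."""
    sigma = np.median(np.abs(v)) / 0.6745      # robust noise-SD estimate
    thresh = thresh_sd * sigma
    idx = np.flatnonzero((v[1:] > thresh) & (v[:-1] <= thresh)) + 1
    lockout = int(lockout_ms * 1e-3 * fs)
    spikes, last = [], -lockout
    for i in idx:
        if i - last >= lockout:                # enforce refractory lockout
            spikes.append(i)
            last = i
    return np.asarray(spikes) / fs

fs = 30_000                                    # assumed sampling rate (Hz)
v = np.random.randn(fs)                        # 1 s of simulated noise
print(detect_spikes(v, fs)[:5])                # few (if any) false crossings
```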



Discussion
The neural prosthesis described in this report is the first brain-computer interface for continuous control over an artificial speech synthesizer. Our results establish the feasibility of real-time, continuous brain control of speech restoration by profoundly paralyzed and mute individuals. Our subject was able to produce vowel sounds at average success rates approaching 80% within each testing session (approximately two hours), reaching a maximum of 89% by the end of the final block of the final session.