Cochlear Implants: A Tutorial from Sound to Stimulus

Katherine J. Herrick and Jane K. Ueda



Introduction

Welcome to our tutorial, "Cochlear Implants: A Tutorial from Sound to Stimulus". This tutorial is intended as an educational tool for prospective cochlear implant recipients and their families. It includes basic information about auditory perception and describes what the cochlear implant is, how it works, and how it is implanted.

Conversion of acoustic signals into electrode stimulation in cochlear implants is accomplished through signal processing techniques. Current research involves adjusting these techniques to improve auditory perception based on spectral content. Featured in this tutorial is an in-depth explanation of the signal processing involved, as well as a demonstration. Using simulations created with Matlab and Goldwave, a digital signal processing method for audio signals is presented in a step-by-step fashion. By increasing comprehension of the processing in these prostheses, recipients should be better equipped to interpret the electrical stimuli from their cochlear implants.

To use the tutorial, simply click on the item of interest in the Table of Contents. To choose another item, click on the 'back' button on Netscape (under File) and you will return to the Table of Contents. The Speech Processing Demo is a simulation feature. Directions on its use are within the text for that section. We hope you find this to be a very educational tool. Enjoy!

Table of Contents

  • How the ear works
  • The Cochlear Implant
  • Auditory Assessment
  • Implantation of a Multichannel Cochlear Implant
  • Strategies of Cochlear Implant Speech Processing
  • Speech Processing Demo
  • References


How the ear works

Sound is produced from the time-varying motion, or compression and rarefaction, of air molecules. A sound wave of regular compressions and rarefactions is analogous to a sine wave, and such a sound wave is heard as a steady musical note. A louder note differs from a softer note in that the compressed volumes of the former are more compressed, and the rarefied volumes more rarefied. The more the air is compressed, the more energy it contains and can expend. The loudness, or intensity, of sound is measured in terms of the quantity of energy passing each second through one square centimeter of area, the area being perpendicular to the direction of propagation of the sound. Energy expended per unit time is power, and the amount of power involved in sound is quite small. A watt is the MKS unit of power and is equal to one joule per second. In comparison to an everyday 75-watt light bulb, ordinary conversational sounds carry a power of only 1000 microwatts.

The ear detects differences in loudness by ratios of power rather than by actual differences. Thus a 2000-microwatt sound will seem a certain amount louder than a 1000-microwatt sound, but a 3000-microwatt sound will not appear louder by as much again. This means that the ear responds not to the power of sound, but to the logarithm of that power, where the common logarithm of a number is its exponent when it is expressed as a power of 10. For example, the logarithm of 100 is 2 and that of 1000 is 3. So if one stimulation is 100,000 times as intense as another, the ear, working by logarithms, detects it as five times as intense. In this way, the ear is useful over an enormous range of intensity. This difference in intensity is described by the decibel: one sound is a decibel louder than another when the first is 1.26 times as powerful as the second (the logarithm of 1.26 is approximately 0.1).
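As a rough illustration of the decibel arithmetic described above, the short Matlab fragment below compares the example power levels. The specific values are illustrative, not measurements.

    % A minimal sketch of the decibel calculation (illustrative values only)
    P_ref = 1000e-6;                 % 1000 microwatts, the reference sound
    P1    = 2000e-6;                 % a sound with twice the power
    P2    = 100000 * P_ref;          % a sound 100,000 times as powerful

    dB1 = 10 * log10(P1 / P_ref);    % about 3 dB louder
    dB2 = 10 * log10(P2 / P_ref);    % 50 dB louder, since log10(100000) = 5
    fprintf('Twice the power:         %.1f dB\n', dB1);
    fprintf('100,000 times the power: %.1f dB\n', dB2);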

The auditory system can be divided into two large subsystems, central and peripheral. The peripheral auditory system converts the condensations and rarefactions that produce sound into neural codes, which are then interpreted by the central auditory system. The peripheral auditory system is subdivided into the external ear, the middle ear, and the inner ear. The external ear collects sound energy as pressure waves, which are converted to mechanical motion at the eardrum. This motion is transformed across the middle ear and transferred to the inner ear, where it is frequency analyzed and converted into neural codes that are carried by the eighth cranial nerve, or auditory nerve, to the central auditory system. This sound information, which is encoded as discharges in an array of thousands of auditory nerve fibers, is processed in the nuclei of the central auditory system.

For this tutorial, the inner ear anatomy is the most pertinent. Figure 1 shows the location of the cochlea (blue spiral structure) with respect to the external ear canal. Figure 2 displays the anatomy of the inner ear only, which includes the cochlea. The inner ear is responsible for converting the mechanical vibrations of sounds entering the ear canal into electrical signals. The key link is the receptor cells, or hair cells, which have cilia on their apical surfaces and neural synapses on their lateral walls and base. In general, mechanical displacement of the cilia toward the tallest cilia causes the generation of electrical impulses in the nerves. These electrical signals, which code the sound's characteristics, are carried to the brain by the auditory nerve. For frequencies in the hearing range, the cochlea provides the correct forcing of the hair cell cilia. For humans this range is typically 20 Hz to 20 kHz in adolescence, and its upper limit decreases naturally with age.

Figure 1: Location of the cochlea relative to the external ear canal. Figure 2: Anatomy of the inner ear.

The basilar membrane (BM) supports the hair cells. A major role of the BM is to decompose sound into its frequency components. A transient sound initiates a traveling wave of displacement in the BM, and this motion has frequency-dependent characteristics which arise from properties of the membrane and its surrounding structures. The membrane's width varies as it traverses the cochlear duct: it is narrower at its basal end than at its apical end. It is also stiffer at the base than at the apex, with stiffness varying by about two orders of magnitude. Because the membrane is a distributed structure, the BM shows a distance-dependent displacement when the ear is excited sinusoidally. The distance from the apex to the point of maximum displacement is logarithmically related to the frequency.
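This logarithmic frequency-to-place relationship can be sketched numerically. The Matlab fragment below uses the Greenwood frequency-position function, a formula from the cochlear literature rather than from this tutorial, with commonly quoted constants for the human cochlea; treat the numbers as illustrative.

    % Illustrative sketch of the logarithmic frequency-to-place map
    % (Greenwood function for the human cochlea; constants are assumed)
    x = linspace(0, 1, 5);                   % proportional distance from the apex
    f = 165.4 * (10 .^ (2.1 * x) - 0.88);    % characteristic frequency in Hz
    disp([x' f']);                           % ~20 Hz at the apex, ~20 kHz at the base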


The Cochlear Implant

The cochlear implant is a prosthetic replacement for the inner ear, or cochlea. It bypasses damaged parts of the inner ear and electronically stimulates the auditory nerve. Part of the device is surgically implanted in the skull behind the ear and tiny electrode wires are inserted into the cochlea. The other part of the device is external and has a microphone, a speech processor (to convert sound into electrical impulses), and connecting cables. The speech processor is battery powered and adjustable.

The following describes the location and function of the various components:

1. The small directional microphone picks up sounds from the environment.
2. A thin cord sends the sounds from the microphone to the speech processor.
3. The speech processor amplifies, filters, and digitizes the sound into coded digital signals.
4. These coded signals are sent to the transmitting coil via the cables.
5. The transmitting coil sends the signals transdermally to the implanted receiver/stimulator via an FM radio signal.
6. The receiver/stimulator delivers the appropriate amount of electrical stimulation to the appropriate electrodes on the array.
7. The electrodes along the array stimulate the remaining auditory nerve fibers in the cochlea.
8. The resulting electrical sound information is sent through the auditory system to the brain for interpretation.

Today's cochlear implants are much more advanced than the devices studied when the first research on cochlear implants was conducted 30 years ago. Since then, cochlear implant technology has evolved from single-electrode devices to multiple-electrode devices. These devices, such as the Nucleus 22 Channel Cochlear Implant, can stimulate the auditory nerve at a variety of places along the cochlea, yielding better pitch information.


Auditory Assessment

The cochlear implant is designed to provide sound information for post-lingually deafened adults and children with a profound sensorineural hearing loss who show no ability to understand speech through hearing aids.

Candidate Criteria

Success of the implant can depend on many factors, the most important being the functionality of the auditory nerve. The cochlear implant can only bridge the gap in the signal path to the auditory nerve; it cannot itself carry the information to the brain. Patients who have spent only a short time without hearing prior to the implant generally progress at a more rapid rate. Other factors include support from family and friends, as well as self-motivation.

The cochlear implant process begins with patient evaluation and review, followed by surgery, fitting, and follow-ups. The evaluation may include the following:


Implantation of a Multichannel Cochlear Implant

To take advantage of the spatial frequency representation in the cochlea, an electrode array is placed within the scala tympani of the basal turn of the cochlea. This electrode array is typically connected to a receiver-stimulator package. The surgery takes approximately 1 1/2 hours, with a 1-2 day hospital stay. The risks are considered minimal. More information on this topic can be found in:

  • Cooper, Huw, Cochlear Implants: A Practical Guide, Whurr Publishers, Ltd., London, 1991.

Strategies of Cochlear Implant Speech Processing

The original wideband single-channel devices operate similarly to a hearing aid. Sound information obtained from the microphone is automatically gain-controlled, filtered, and given frequency-dependent amplification. Since the near (basal) end of the basilar membrane responds to high frequencies, these are emphasized in the frequency-dependent amplification. Thus the high-frequency sound components are made audible to the implant patients.
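A rough feel for this frequency-dependent amplification can be had from the Matlab sketch below, which applies a crude gain control and then a high-frequency emphasis filter. The sampling rate, cutoff, and filter design are assumptions for illustration (fir1 requires the Signal Processing Toolbox); this is not the circuit of any particular device.

    % Minimal sketch of gain control plus high-frequency emphasis (assumed values)
    fs = 16000;                                 % assumed sampling rate, Hz
    t  = (0:1/fs:0.05)';
    x  = sin(2*pi*300*t) + sin(2*pi*3000*t);    % one low and one high frequency tone

    x  = x / max(abs(x));                       % crude automatic gain control
    b  = fir1(64, 1000/(fs/2), 'high');         % emphasize components above ~1 kHz
    y  = filter(b, 1, x);                       % high frequencies now dominate y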

Single-channel devices that use feature extraction code the fundamental frequency. The average fundamental frequency of adult males is 132 Hz, of adult females is 223 Hz, and of children is 264 Hz. Pulses are sent at a rate proportional to the fundamental frequency. The level of the pulses is fixed at a constant loudness, independent of rate. The perception of differences in pulse repetition rate for electrical stimulation becomes poorer above about 150 or 200 Hz, depending on the individual.
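The Matlab fragment below sketches this idea: fixed-level pulses spaced at the period of the fundamental frequency. The one-pulse-per-period rule and the 50 ms window are assumptions for illustration.

    % Minimal sketch of fundamental-frequency-driven pulse timing (illustrative)
    F0 = 132;                             % e.g. average adult male voice, Hz
    duration = 0.05;                      % 50 ms of stimulation
    pulse_times = 0 : 1/F0 : duration;    % one pulse per period of F0
    pulse_level = 1;                      % constant level, independent of rate
    fprintf('%d pulses in %.0f ms at F0 = %d Hz\n', ...
            numel(pulse_times), duration*1000, F0);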

Wideband multichannel devices separate the incoming signal into a set number of frequency regions using bandpass filters, and the filter outputs are used to determine spectral peaks. Pulses of fixed amplitude and duration are then produced sequentially on the electrodes that correspond to the filters having spectral peaks. The interpulse interval is determined by the center frequency of the filter: lower-frequency filters correspond to longer interpulse intervals.
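The interpulse-interval rule at the end of that paragraph amounts to one pulse per period of the filter's center frequency. The center frequencies in the short Matlab fragment below are assumptions for illustration.

    % Minimal sketch of the interpulse-interval rule (assumed center frequencies)
    centres = [400 1000 2500 6000];       % bandpass filter center frequencies, Hz
    ipi_ms  = 1000 ./ centres;            % interpulse interval in milliseconds
    disp([centres' ipi_ms']);             % lower-frequency filters -> longer intervals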


Speech Processing Demo

The purpose of this demo is to explain the signal processing involved in transforming sound information into electrical stimuli. First let's review the task at hand. Hair cells are attached to the basilar membrane and synapse with the nerve fibers underlying it. In cochlear implant patients, these hair cells have died, leaving a gap in the connection to the auditory nerve; the connecting mechanism required for auditory function is missing. The cochlear implant attempts to bridge that gap. Since the basilar membrane maps frequency logarithmically along its length, an electrode array carrying current can be inserted into the cochlea alongside the BM and stimulate the nerves at the appropriate locations. The problem is that the array can only be safely inserted through the first turn of the cochlea. Recall that the near end responds to high frequencies, specifically 1-20 kHz. Speech is in general limited to 0.3-3 kHz. Therefore, sound information must be frequency shifted for the implant.
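One way to picture this frequency shift is as a table that assigns each speech-band analysis frequency to an electrode whose place in the cochlea would normally respond to a higher frequency. The specific frequencies below are assumptions for illustration, not the map used by any particular processor.

    % Illustrative sketch of the frequency shift (assumed analysis and place frequencies)
    analysis_f  = [300 600 1200 2400 3000];       % speech-band analysis frequencies, Hz
    electrode_f = [1000 2000 4000 8000 16000];    % place frequencies at the electrodes, Hz
    disp([analysis_f' electrode_f']);             % band k is delivered on electrode k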

In this demo, the function of a multichannel signal processor will be displayed. What typically happens is the following. Every 4-5 msec, for example, a frame of sound waveform information is sampled. This information is bandpass filtered into a specific number of channels (typically 2-16). Given 16 different frequency bands of information, the processor might pick the 5 or 6 with the largest energy. Knowing where the electrodes are placed in the cochlea, the frequency information can be divided up, mapped, and sent to the appropriate locations.
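The Matlab sketch below walks through one such frame: estimate the energy in 16 bands, keep the 6 largest, and note which electrodes they map to. The frame length, band count, and number of maxima are taken from the text; the FFT-based band energies and logarithmic band edges are assumptions, and this is not the code in sim2.m.

    % Minimal sketch of maxima selection for one analysis frame (assumptions noted above)
    fs     = 16000;                      % assumed sampling rate, Hz
    frame  = round(0.004 * fs);          % one 4 ms frame of samples
    x      = randn(frame, 1);            % stand-in for one frame of speech
    nBands = 16;                         % number of bandpass channels
    nMax   = 6;                          % number of maxima to keep

    edges  = logspace(log10(200), log10(7000), nBands + 1);   % band edges, Hz (assumed)
    X      = abs(fft(x, 256));
    f      = (0:255)' * fs / 256;
    E      = zeros(1, nBands);
    for k = 1:nBands
        E(k) = sum(X(f >= edges(k) & f < edges(k+1)).^2);     % energy in band k
    end
    [~, order] = sort(E, 'descend');
    electrodes = sort(order(1:nMax));    % the electrodes stimulated this frame
    levels     = E(electrodes);          % band energies to be mapped to current levels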

The signal processor to be modeled is the spectral maxima sound processor (SMSP), which has been developed for the University of Melbourne/Nucleus Limited multielectrode cochlear implant (J. Acoust. Soc. Am., 91(6): 3367-3371, 1992).

First, download the waveform data by shift-left-clicking on the waveform data link and saving it to your working directory. By clicking on Simulation of the word Dilemma (sim2.m), you will view the source code for the Matlab simulation. Simply save this code using Save As under the File menu. Be sure you are in the directory where you saved the file. Open a new shell and run Matlab by typing 'matlab'. Then simply type 'sim2', and the simulation will begin. The following sequence of events will be seen and heard:


References