Title: CSD 5400 REHABILITATION PROCEDURES FOR THE HARD OF HEARING
1. CSD 5400 REHABILITATION PROCEDURES FOR THE HARD OF HEARING
- Auditory Perception of Speech and the Consequences of Hearing Loss
2. Overview
- The goal of aural rehabilitation is to remediate the effects of a hearing impairment
- Ultimately this comes down to the effect of the hearing loss on speech recognition and perception
- Develop a general understanding of what a hearing loss does to the speech signal
3. The Auditory System in Review
- The primary purpose of the auditory system is to
take the speech code at the periphery and convert
it to a representation used by the CNS to extract
meaning
4. The Auditory System in Review
- Speech arrives at the auditory periphery as a series of pressure variations as a function of time
- The normal auditory periphery converts these pressure variations into physical movement of the middle ear structures, which in turn causes fluid movement in the cochlea
5. The Auditory System in Review
- Cochlear fluid movement gives rise to the traveling wave along the basilar membrane
- Spectral code
- Depending on the site of maximum displacement of the traveling wave, certain auditory nerve fibers are activated
- Neural activity
- Critical band theory (see the sketch after this list)
- Spectral and temporal code
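As a rough numerical companion to the critical band idea mentioned above, the sketch below uses the Glasberg and Moore equivalent rectangular bandwidth (ERB) approximation of auditory filter width. The formula and the example frequencies are an outside illustration, not material from these slides.

```python
# Illustrative sketch: approximate auditory filter (critical band) widths using
# the Glasberg & Moore ERB approximation, ERB(f) = 24.7 * (4.37*f/1000 + 1).
# The example frequencies are arbitrary, not values from the slides.

def erb_hz(f_hz: float) -> float:
    """Equivalent rectangular bandwidth (Hz) of the auditory filter centered at f_hz."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

if __name__ == "__main__":
    for f in (250, 500, 1000, 2000, 4000):
        print(f"{f:>5} Hz -> ERB ~ {erb_hz(f):6.1f} Hz")
    # Filters broaden with frequency, so spectral resolution is coarser in the
    # high frequencies -- one aspect of the "spectral code" listed above.
```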
6. The Auditory System in Review
- As the signal moves higher into the central pathways, more complicated processing occurs
- Binaural processing
- Temporal processing
- By the time the signal reaches the cortex, it has been analyzed and re-coded in a number of different ways
- The cortex recognizes these various forms of analysis and extracts what is necessary, given the job at hand
7. When the Auditory System is Impaired
- Speech is inaccurately coded at the periphery
- Distorted
- Missing
- Attenuated
- Loss of redundancy
- When the signal reaches the cortex, the coded
representation may be unrecognizable
8. Who's Making Use of the Signal?
- Important consideration
- Adults
- Rely very heavily on the linguistic, contextual, and nonverbal cues available
- Children
- No extensive language base
9. Acoustic Cues of Speech
- Frequency
- Intensity
- Temporal Characteristics
10. Flexer's Analogy
11. Illustrating Hearing Loss
12. Acoustic Cues of Speech
- Short Term Characteristics
- Long Term Characteristics
13. Long Term Characteristics of Speech
- Average changes over relatively long periods of time
- Provides the general acoustic characteristics of speech
14. Long Term Characteristics of Speech
- The mean intensity level of conversational speech is 65-70 dB SPL
- Individual speech segments fluctuate around this mean by 40 dB
15. Long Term Speech Spectrum
- Long-interval acoustic spectrum of male voices taken 17 inches from the speaker's lips
- Maximum energy is at approximately 500 Hz
- Roll-off rate of 9 dB/octave
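A minimal sketch of this idealized long-term spectrum: a peak near 500 Hz with a 9 dB/octave roll-off above it. The 65 dB SPL anchor, the flat treatment below the peak, and the octave-band centers are simplifying assumptions for illustration.

```python
import math

# Idealized long-term speech spectrum: maximum energy near 500 Hz, rolling off
# at roughly 9 dB per octave above that (per the slide). The 65 dB SPL anchor
# is an assumption; frequencies below the peak are held flat for simplicity.

PEAK_HZ = 500.0
PEAK_LEVEL_DB = 65.0            # assumed overall conversational level
ROLLOFF_DB_PER_OCTAVE = 9.0

def ltass_level(f_hz: float) -> float:
    """Approximate band level (dB SPL) at f_hz for this idealized spectrum."""
    octaves_above_peak = max(0.0, math.log2(f_hz / PEAK_HZ))
    return PEAK_LEVEL_DB - ROLLOFF_DB_PER_OCTAVE * octaves_above_peak

if __name__ == "__main__":
    for f in (250, 500, 1000, 2000, 4000, 8000):
        print(f"{f:>5} Hz: ~{ltass_level(f):5.1f} dB SPL")
```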
16. Phonemes
- The smallest unit of speech to have linguistic meaning
- The traditional unit of speech for studying short term acoustic characteristics
17. Phonemes
- Classification system
- Vowels
- Consonants
18. Differences Between Vowels and Consonants
- These two classes of sounds differ in the manner they are produced and in the way we perceive them
- Vowels are considered more prime
- Rhyming
- Speech Errors
- Vocal tract configuration
- Voicing
19. Short Term Acoustic Characteristics of Vowels
- Vowels are always voiced
- The vocal tract is relatively open
- Source-Filter Theory of vowel production
20. Sound Source of Vowels
- The glottal pulse
- The lowest component is the fundamental frequency (f0)
- Harmonics are labeled H1, H2, H3, and so on
- Maximum energy is at the fundamental frequency of the speaker
- Above the fundamental frequency, the spectrum rolls off at 10-12 dB/octave
21. Filter of Vowels
- The vocal tract, which can be thought of as a
tube open at one end, closed at the other, and of
a specified length
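For a uniform tube closed at one end (the glottis) and open at the other (the lips), the resonances fall at odd multiples of c/4L. Below is a quick sketch of that textbook calculation, assuming round-number values of roughly 35,000 cm/s for the speed of sound and 17.5 cm for an adult male vocal tract; both numbers are assumptions, not figures from the slide.

```python
# Resonances of a uniform tube closed at one end (glottis) and open at the
# other (lips): F_n = (2n - 1) * c / (4 * L).
# c and L below are round-number textbook assumptions.

SPEED_OF_SOUND_CM_S = 35_000.0   # ~35,000 cm/s
TRACT_LENGTH_CM = 17.5           # approximate adult male vocal tract

def tube_resonance(n: int, c: float = SPEED_OF_SOUND_CM_S,
                   length: float = TRACT_LENGTH_CM) -> float:
    """n-th resonance (Hz) of a quarter-wavelength resonator."""
    return (2 * n - 1) * c / (4.0 * length)

if __name__ == "__main__":
    for n in (1, 2, 3):
        print(f"R{n} ~ {tube_resonance(n):6.0f} Hz")
    # -> roughly 500, 1500, 2500 Hz: the formant pattern of a neutral vowel.
```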
22. Putting the Source and Filter Together
23. Putting the Source and Filter Together
- The panel at the left shows the glottal source. The panel at the right shows the spectrum of the source after filtering by a filter representing a neutral vocal tract. The spectral characteristics of the filter are shown in the middle panel
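A small code sketch of the same idea: a harmonic glottal source rolling off about 12 dB/octave, shaped by a filter that peaks at the neutral-tract resonances. The fundamental frequency, formant values, and the simple Gaussian peak shape are illustrative assumptions, not the actual filter shown in the figure.

```python
import math

# Source-filter sketch: a glottal source (harmonics of F0 falling ~12 dB/octave)
# shaped by a vocal tract filter with peaks at the neutral-tract resonances.
# F0, the formant frequencies, and the Gaussian peak shape are assumptions.

F0 = 100.0                            # assumed male fundamental frequency, Hz
FORMANTS = (500.0, 1500.0, 2500.0)    # neutral-tract resonances

def source_db(f_hz: float) -> float:
    """Glottal source spectrum: 0 dB at F0, falling ~12 dB per octave above it."""
    return -12.0 * math.log2(f_hz / F0)

def filter_db(f_hz: float, peak_gain: float = 20.0, bw_hz: float = 200.0) -> float:
    """Crude filter: Gaussian-shaped gain bumps (in dB) centered on each formant."""
    return sum(peak_gain * math.exp(-((f_hz - fmt) / bw_hz) ** 2) for fmt in FORMANTS)

def output_db(f_hz: float) -> float:
    """Radiated spectrum = source shaped by the vocal tract filter (dB addition)."""
    return source_db(f_hz) + filter_db(f_hz)

if __name__ == "__main__":
    # Relative level of each harmonic; peaks appear near 500, 1500, 2500 Hz.
    for k in range(1, 31):
        f = k * F0
        print(f"H{k:02d} {f:6.0f} Hz  source {source_db(f):7.1f} dB  output {output_db(f):7.1f} dB")
```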
24. Changing the Effects of the Filter
- In order to produce these three different vowels,
we change the characteristics of the vocal tract.
This will alter the resonant frequency
characteristics of the tube and change the
combined spectrum of the glottal pulse and the
vocal tract
25. Changing the Effects of the Source
- This is what happens when the same vowel is
produced by a man, a woman and a child
26. An Important Short Term Acoustic Characteristic of Vowels
- Formants are the regions of increased spectral energy
- They are a characteristic only of vowels
- The frequency regions they occupy, as well as their relative intensities, change as the vocal tract changes with each vowel production
- All English vowels have 5-7 formants
- Vowels can be distinguished from one another using the lowest two or three formants (by frequency)
27. Vocal Tract Shapes and Spectra
- Vocal tract shapes and corresponding spectra (F1
and F2 only) for four back vowels
28. Vocal Tract Shapes and Spectra
- Vocal tract shapes and corresponding spectra (F1
and F2 only) for four front vowels
29. Peterson & Barney (1952)
- Landmark spectrographic study of 76 men, women, and children producing vowels in isolation
- Measured and reported the average fundamental frequency and the frequency/intensity of the first three formants of the ten English vowels
30. A Summary of Peterson & Barney's Results
31. Articulation and the Formant Frequencies
- F1 corresponds to the degree of tongue constriction in the vocal tract
- F2 corresponds to how far forward in the mouth the tongue is
- F3 is not related in a simple way to articulatory parameters
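To illustrate how far the first two formants alone can go, the sketch below classifies a measured (F1, F2) pair by nearest neighbor against a few reference vowels. The reference values are rounded, adult-male-style averages in the spirit of Peterson & Barney's data, not the published numbers.

```python
# Nearest-neighbor vowel lookup from (F1, F2), in the spirit of Peterson & Barney.
# The reference values below are rounded, illustrative adult-male averages.

REFERENCE_F1_F2 = {
    "i (heed)": (270, 2300),
    "ae (had)": (660, 1700),
    "a (hod)": (730, 1100),
    "u (who'd)": (300, 870),
}

def classify_vowel(f1: float, f2: float) -> str:
    """Return the reference vowel whose (F1, F2) pair is closest (Euclidean distance)."""
    return min(
        REFERENCE_F1_F2,
        key=lambda v: (REFERENCE_F1_F2[v][0] - f1) ** 2 + (REFERENCE_F1_F2[v][1] - f2) ** 2,
    )

if __name__ == "__main__":
    print(classify_vowel(290, 2200))   # -> "i (heed)"
    print(classify_vowel(700, 1150))   # -> "a (hod)"
```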
32. Vowel Normalization
- Vowel quadrilaterals for men, women, and children
- What's thought to be important for vowel perception is the relative spacing between F1 and F2, not their absolute frequencies
33. Consequences of Hearing Loss on Vowel Perception
- Vowel perception is impaired when a hearing loss erodes the acoustic information in the F2 range
- Generally 1000 Hz and above
- Vowels are generally robust to the effects of hearing loss
34. Short Term Acoustic Characteristics of Consonants
- Differences Between Vowels and Consonants
- Consonants
- Have a shorter duration
- Can't be isolated
- Don't have just one noise source
- Aren't static
- Identification seems to rely primarily on the vowel that precedes or follows
- Have a variety of methods of production and places in the vocal tract where they are produced
35. Spectral Regions of Various Speech Sounds
- A common spectral representation of the major speech sounds
- Related to the threshold of audibility curve
36. Spectral Regions of Various Speech Sounds
- Another example
- Lines A, B, and C represent three different configurations and degrees of hearing loss
- What predictions can you make?
37. Spectral Regions of Various Speech Sounds
- Intensity and frequency distribution of speech sounds overlaid on an audiogram
- Predictions based on the characteristics of the hearing loss
38. Predicting the Degree and Type of Phoneme Errors
- These types of charts are often used to help predict the effect that a particular degree and configuration of hearing loss might have on speech understanding
- This works somewhat, but it only looks at the influence of the hearing loss as a filter (see the sketch after this list)
- Sensorineural hearing loss is more complicated than this
- Attenuation and distortion
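A rough sketch of that filter-only view: compare average speech band levels (in dB HL) against a listener's thresholds and keep only the bands that remain audible. The speech band levels and the example audiogram are invented for illustration, and, as noted above, this ignores the distortion component of sensorineural loss.

```python
# "Filter view" prediction: a speech band counts as audible only if its average
# level (dB HL) exceeds the listener's threshold at that frequency. Both the
# speech band levels and the example audiogram below are invented for
# illustration; real procedures (e.g., the SII) are far more detailed.

SPEECH_BAND_LEVELS_DB_HL = {   # hypothetical average conversational speech levels
    250: 45, 500: 50, 1000: 40, 2000: 35, 4000: 30,
}

def audible_bands(thresholds_db_hl: dict[int, int]) -> list[int]:
    """Frequencies (Hz) where average speech exceeds the audiometric threshold."""
    return [
        f for f, speech in SPEECH_BAND_LEVELS_DB_HL.items()
        if speech > thresholds_db_hl.get(f, 0)
    ]

if __name__ == "__main__":
    sloping_loss = {250: 15, 500: 20, 1000: 35, 2000: 55, 4000: 70}  # example audiogram
    print(audible_bands(sloping_loss))   # high-frequency bands drop out first
```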
39. Hearing Loss as a Loss of Redundancy
- Illustrates the reduction of pattern details
(redundancy)
40. The Consonant Classification System
- Every American English consonant can be uniquely identified according to its
- Manner of articulation
- Place of articulation
- Voicing
41. Consonant Feature Classification System
- Classification of the consonants of American
English according to the articulatory feature
system
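A small data-structure sketch of the feature system: each consonant maps to a (voicing, place, manner) triple, and any two consonants can be compared feature by feature. Only a handful of consonants are listed as examples; the chart on the slide covers the full inventory.

```python
# A few American English consonants coded by the three articulatory features.
# Only a sample of the full chart; feature labels follow common textbook usage.

from typing import NamedTuple

class Features(NamedTuple):
    voicing: str   # "voiced" or "voiceless"
    place: str     # where in the vocal tract the constriction is made
    manner: str    # how the constriction shapes the airstream

CONSONANTS = {
    "p": Features("voiceless", "bilabial", "stop"),
    "b": Features("voiced", "bilabial", "stop"),
    "t": Features("voiceless", "alveolar", "stop"),
    "d": Features("voiced", "alveolar", "stop"),
    "s": Features("voiceless", "alveolar", "fricative"),
    "z": Features("voiced", "alveolar", "fricative"),
    "m": Features("voiced", "bilabial", "nasal"),
    "k": Features("voiceless", "velar", "stop"),
}

def differing_features(a: str, b: str) -> list[str]:
    """Which of the three features distinguish consonant a from consonant b?"""
    return [name for name, x, y in zip(Features._fields, CONSONANTS[a], CONSONANTS[b]) if x != y]

if __name__ == "__main__":
    print(differing_features("p", "b"))   # ['voicing']
    print(differing_features("t", "k"))   # ['place']
    print(differing_features("t", "s"))   # ['manner']
```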
42. Acoustic Properties of Articulatory Features
- Voicing
- Energy is broadband and extends from 100-4000 Hz
43. Acoustic Properties of Articulatory Features
- Place of Articulation
- Energy is very high frequency and confined to
1000-8000 Hz
44. Acoustic Properties of Articulatory Features
- Manner of Articulation
- Energy is spread through the mid frequencies
(250-3500 Hz)
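Putting these three frequency ranges together, the sketch below estimates how much of each feature's range survives when usable hearing extends only up to some cutoff frequency. The octave-proportion metric and the 2000 Hz cutoff are simplifying assumptions, but the pattern anticipates the error data that follow.

```python
import math

# Frequency ranges for each articulatory feature, taken from the slides above.
# The "fraction audible" metric (proportion of the range, in octaves, lying
# below a usable-hearing cutoff) is a simplifying assumption.

FEATURE_RANGES_HZ = {
    "voicing": (100, 4000),
    "manner": (250, 3500),
    "place": (1000, 8000),
}

def fraction_audible(low: float, high: float, cutoff_hz: float) -> float:
    """Octave-wise proportion of [low, high] that falls below cutoff_hz."""
    if cutoff_hz <= low:
        return 0.0
    top = min(high, cutoff_hz)
    return math.log2(top / low) / math.log2(high / low)

if __name__ == "__main__":
    cutoff = 2000.0   # e.g., usable hearing only below ~2000 Hz (hypothetical)
    for feature, (lo, hi) in FEATURE_RANGES_HZ.items():
        print(f"{feature:8s}: {fraction_audible(lo, hi, cutoff):.0%} of its range audible")
    # Place cues lose the most, consistent with the error patterns that follow.
```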
45. Consonants and Vowels Together
- Schematic oral tract movements, etc., for the phrases "a buy" and "a pie" in the top spectrograms and "a dye" and "a tie" in the lower spectrograms
46. Formant Transitions
- Schematic of a transition and steady-state
portion of a formant frequency
47. F2 Formant Transitions
- The second formant transition provides a lot of information about the consonant
- Place of articulation is related to the direction of the transition
- Manner of articulation is related to the rate of the transition
48. Error Patterns with SNHL
- Place of articulation and manner of articulation error rates for 38 SNHI listeners
- Place of articulation errors are more prevalent, followed by manner of articulation errors
49. Feature Recognition as a Function of Degree of HL
- Auditory identification of temporal patterns, vowels, and consonant features by 121 HI children as a function of PTA
- Notice how recognition of the place of articulation feature is adversely affected by HL
- Voicing and vowel identification are better preserved
50. Summary of Findings
- General findings from studies of phoneme perception in SNHL using meaningful CVC stimuli
- Relatively few errors are made with the vowel
- When vowel errors do occur, they occur more often for front vowels
- Higher F2 frequency
- More errors are made with consonants
- The final position is extremely vulnerable
- The most common error type is place of articulation, followed by manner of articulation
- Voicing errors are rare
51. In Closing...
- Phoneme error types appear to relate somewhat to the frequency region and degree of the hearing loss
- If the hearing loss is primarily confined to the high frequencies, then we tend to see more errors with articulatory features that are weighted toward the high frequencies (e.g., place)
- Our predictive ability stops here
52. In Closing...
- In fact, we see tremendous variability among hard of hearing listeners in their ability to perceive and understand speech
- The amount of information coded, and the way it is coded, varies from listener to listener
- Varying degrees of distortion not related to the characteristics of the audiogram
- If information is restricted at the phonetic or spectral level, it is also probably restricted at the linguistic level
- How well individuals are able to integrate information varies