In speech communication, intelligibility is a measure of how comprehensible speech is in given conditions. Intelligibility is affected by the level (loud but not too loud) and quality of the speech signal, the type and level of background noise, reverberation (some reflections but not too many), and, for speech over communication devices, the properties of the communication system. A common standard measure of speech intelligibility is the Speech Transmission Index (STI). The concept of speech intelligibility is relevant to several fields, including phonetics, human factors, acoustical engineering, and audiometry.
Important influences
Speech is the principal means of communication between humans. People alter the way they speak and listen according to many factors, including the age, gender, native language, and social relationship of talker and listener. Speech intelligibility may also be affected by pathologies such as speech and hearing disorders.[1][2]
Finally, speech intelligibility is influenced by the environment and by limitations of the communication channel. How well a spoken message can be understood in a room is influenced mainly by the amount of background noise and the degree of reverberation.
The relationship between speech and noise levels is generally described as a signal-to-noise ratio. With a background noise level between 35 and 100 dB, the threshold for 100% intelligibility is usually a signal-to-noise ratio of about 12 dB.[3] A 12 dB ratio corresponds to a speech signal whose sound pressure is roughly four times that of the background noise. The speech signal occupies roughly 200–8,000 Hz, while human hearing spans about 20–20,000 Hz, so the effect of masking depends on the frequency range of the masking noise. In addition, different speech sounds use different parts of the speech frequency spectrum, so a continuous background noise such as white or pink noise affects intelligibility differently than a variable or modulated background noise such as competing speech, multi-talker ("cocktail party") babble, or industrial machinery.
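The factor of four follows from the standard decibel relation for sound pressures (a general acoustics formula, not a result specific to the cited study):

```latex
\mathrm{SNR}_{\mathrm{dB}} = 20\log_{10}\!\frac{p_{\text{speech}}}{p_{\text{noise}}}
\qquad\Rightarrow\qquad
\frac{p_{\text{speech}}}{p_{\text{noise}}} = 10^{\mathrm{SNR}_{\mathrm{dB}}/20} = 10^{12/20} \approx 4
```

In terms of acoustic power the same 12 dB corresponds to a ratio of about 16, and in perceived loudness to roughly a doubling, so the factor of four refers specifically to sound pressure.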
Reverberation also affects the speech signal by blurring speech sounds over time. This has the effect of enhancing the steady-state portions of vowels while masking stops, glides, vowel transitions, and prosodic cues such as pitch and duration.[4]
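This blurring can be pictured as the convolution of the dry speech signal with the room's impulse response. The sketch below is a minimal illustration of that idea only, not drawn from the cited study: it convolves a recording with a synthetic, exponentially decaying impulse response, which smears each sound into the ones that follow. The file names, reverberation time, and mono input are assumptions made for the example.

```python
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

# Load a (hypothetical) dry, mono speech recording.
dry, sr = sf.read("speech_dry.wav")          # assumed file name

# Synthetic room impulse response: exponentially decaying noise,
# a crude stand-in for a reverberant room.
rt60 = 1.0                                   # assumed reverberation time in seconds
t = np.arange(int(rt60 * sr)) / sr
rir = np.random.randn(t.size) * np.exp(-6.91 * t / rt60)   # ~60 dB decay over rt60

# Convolution smears each speech sound into the ones that follow,
# which is the "blurring over time" described above.
wet = fftconvolve(dry, rir)[: dry.size]
wet /= np.max(np.abs(wet))                   # normalize to avoid clipping

sf.write("speech_reverberant.wav", wet, sr)
```

Longer decay times (larger rt60) smear the signal more heavily, which is why highly reverberant rooms are harder to understand speech in.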
The fact that background noise compromises intelligibility is exploited in speech audiometry and in some speech-perception experiments: adding noise makes the listening task harder and so counteracts ceiling effects.
Word articulation scores remain high even when only 1–2% of the waveform is unaffected by distortion.[5]
Intelligibility with different types of speech
Lombard speech
Speakers involuntarily modify their speech when talking in noise, a process called the Lombard effect. Such speech is more intelligible than normal speech: it is not only louder, but its fundamental frequency is raised and its vowel durations are lengthened. Speakers also tend to make more pronounced facial movements.[6][7]
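As a rough illustration only, and not a model of the Lombard effect itself, the acoustic changes described above can be crudely mimicked with standard signal-processing operations: a level boost, a small upward pitch shift, and a modest slowdown. The file names and amounts below are arbitrary assumptions for the sketch.

```python
import numpy as np
import librosa
import soundfile as sf

# Load a (hypothetical) quiet-condition recording.
y, sr = librosa.load("plain_speech.wav", sr=None)   # assumed file name

# Crude stand-ins for the changes described above:
y_mod = librosa.effects.pitch_shift(y, sr=sr, n_steps=1.0)   # raise pitch ~1 semitone
y_mod = librosa.effects.time_stretch(y_mod, rate=0.9)        # slow down ~10% (longer vowels)
y_mod *= 10 ** (6 / 20)                                      # boost level by ~6 dB

# Prevent clipping before writing the result.
y_mod /= max(1.0, np.max(np.abs(y_mod)))
sf.write("lombard_like.wav", y_mod, sr)
```

Real Lombard speech involves coordinated articulatory and spectral changes that such simple post-processing does not capture; the sketch only makes the listed modifications concrete.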
Screaming
Shouted speech is less intelligible than Lombard speech because increased vocal energy produces decreased phonetic information.[8] However, "infinite peak clipping of shouted speech makes it almost as intelligible as normal speech."[9]
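"Infinite peak clipping" reduces the waveform to its sign, discarding all amplitude detail and keeping only the zero-crossing information. A minimal sketch of the operation follows; the file names and output scaling are assumptions for the example.

```python
import numpy as np
import soundfile as sf

# Load a (hypothetical) recording of shouted speech.
y, sr = sf.read("shouted.wav")       # assumed file name

# Infinite peak clipping: keep only the sign of the waveform,
# i.e. its zero crossings.
clipped = np.sign(y)

# Scale down so the square-wave-like result is not unpleasantly loud.
sf.write("shouted_clipped.wav", 0.3 * clipped, sr)
```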
Clear speech
Clear speech is used when talking to a person with a hearing impairment. It is characterized by a slower speaking rate, more and longer pauses, elevated speech intensity, increased word duration, "targeted" vowel formants, increased consonant intensity compared to adjacent vowels, and a number of phonological changes (including fewer reduced vowels and more released stop bursts).[10][11]
Infant-directed speech
Infant-directed speech, or baby talk, uses simplified syntax and a smaller, easier-to-understand vocabulary than speech directed to adults.[12] Compared to adult-directed speech, it has a higher fundamental frequency, an exaggerated pitch range, and a slower rate.[13]
Citation speech
Citation speech occurs when people engage self-consciously in spoken language research. It has a slower tempo and fewer connected speech processes (e.g., shortening of nuclear vowels, devoicing of word-final consonants) than normal speech.[14]
Hyperspace speech
Hyperspace speech, also known as the hyperspace effect, occurs when people are misled about the presence of environmental noise. It involves modifying the F1 and F2 formants of vowel targets to make it easier for the listener to recover information from the acoustic signal.[14]
References
Robinson, G. S., & Casali, J. G. (2003). Speech communication and signal detection in noise. In E. H. Berger, L. H. Royster, J. D. Royster, D. P. Driscoll, & M. Layne (Eds.), The noise manual (5th ed., pp. 567–600). Fairfax, VA: American Industrial Hygiene Association.
Moore, B. C. J. (1997). An introduction to the psychology of hearing (4th ed.). London: Academic Press. ISBN 978-0-12-505628-1.
Junqua, J. C. (1993). "The Lombard reflex and its role on human listeners and automatic speech recognizers". The Journal of the Acoustical Society of America. 93 (1): 510–524. Bibcode:1993ASAJ...93..510J. doi:10.1121/1.405631. PMID 8423266.
Pickett, J. M. (1956). "Effects of Vocal Force on the Intelligibility of Speech Sounds". The Journal of the Acoustical Society of America. 28 (5): 902–905. Bibcode:1956ASAJ...28..902P. doi:10.1121/1.1908510.
Picheny, M. A.; Durlach, N. I.; Braida, L. D. (1985). "Speaking clearly for the hard of hearing I: Intelligibility differences between clear and conversational speech". Journal of Speech and Hearing Research. 28 (1): 96–103. doi:10.1044/jshr.2801.96. PMID 3982003.
Picheny, M. A.; Durlach, N. I.; Braida, L. D. (1986). "Speaking clearly for the hard of hearing II: Acoustic characteristics of clear and conversational speech". Journal of Speech and Hearing Research. 29 (4): 434–446. doi:10.1044/jshr.2904.434. PMID 3795886.
Snow, C. E., & Ferguson, C. A. (1977). Talking to Children: Language Input and Acquisition. Cambridge University Press. ISBN 978-0-521-29513-0.
Kuhl, P. K.; Andruski, J. E.; Chistovich, I. A.; Chistovich, L. A.; Kozhevnikova, E. V.; Ryskina, V. L.; Stolyarova, E. I.; Sundberg, U.; Lacerda, F. (1997). "Cross-language analysis of phonetic units in language addressed to infants". Science. 277 (5326): 684–686. doi:10.1126/science.277.5326.684. PMID 9235890. S2CID 32048191.
Johnson, K.; Flemming, E.; Wright, R. (1993). "The hyperspace effect: Phonetic targets are hyperarticulated". Language. 69 (3): 505–528. doi:10.2307/416697. JSTOR 416697.