Babies Recognize Key Vowel and Consonant Sounds of Their Native Language by Perceptual Learning
Peter D. Eimas, in Psychology of Learning and Motivation, 1997
III. The Processing and Representational Units of Speech
A characteristic of spoken language is its very nearly continuous nature. When discontinuities do occur, in the form of short periods of silence or abrupt changes in the nature of the acoustic energy, they tend to bear no relation to our percepts, be they words or prelexical units. Moreover, there would appear to be no other acoustic cues that reliably provide information about word junctures (but see, e.g., Gow & Gordon, 1995; Nakatani & Dukes, 1977; Nakatani & Schaffer, 1978, for evidence that segmental and prosodic sources of information for word boundaries exist under some circumstances). In addition, the speech signal is marked by a lack of invariant spectral and temporal information for the identification of specific phonetic contrasts (Liberman et al., 1967; but see Stevens & Blumstein, 1981) and, to a lesser extent, their syllabic combinations, again making the process of word recognition by means of its phonetic or syllabic components difficult to describe.
Nevertheless, conventional wisdom holds that a direct mapping between the acoustics of speech and words—that is, lexical access in the absence of prelexical units (see, e.g., Klatt, 1979, 1989)—is impractical at best. Presumably, the variation in the production of words both between and within speakers would create an indeterminate mapping between the signal and the mental instantiation of words. Consequently, processing hypotheses, despite the nearly continuous nature of speech and the absence of invariant cues for sublexical segmentations, have centered on the constraint that the speech stream is nonetheless initially segmented into smaller, prelexical representational units that serve as the basis for lexical access (e.g., Pisoni & Luce, 1987).
Phonemes and syllables are the most frequently proposed units.1 Phonemes provide the smaller inventory of units, and given the relatively small number of phonemes in any language, a phoneme-by-phoneme comparison of the input with members of the mental lexicon would be relatively efficient. However, the effects of coarticulation, among other contextual effects, make the physical realization of the phoneme highly variable (Liberman et al., 1967) and consequently not easily segmented and identified. The repertoire of syllables in any language is considerably larger, especially in languages with complex syllabic structures such as English. Nevertheless, coarticulatory effects would be less of a problem—the coarticulatory effects across segments within syllables are (or can be) informative and a part of the matching and word identification processes, not simply variation that must be overcome. Of course, coarticulatory effects across syllables remain a problem. In addition, if lexical searches can be initiated at any identified phoneme or any identified syllable, greater cognitive economy will be achieved when the search begins with syllabic structures, as the ratio of successful to unsuccessful searches will be considerably higher for syllable-initiated searches.
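To make the cognitive-economy point concrete, here is a minimal sketch in Python. The word list and its (initial phoneme, initial syllable) segmentations are hypothetical toy values, not data from the chapter; the sketch only shows that indexing the same lexicon by initial syllable yields a much smaller candidate set than indexing by initial phoneme.

```python
# Toy illustration of the cognitive-economy argument above: a search that
# starts from an identified syllable activates far fewer lexical candidates
# than one that starts from an identified phoneme. The lexicon and its
# segmentations are invented for illustration only.
from collections import defaultdict

# word -> (initial phoneme, initial syllable), in a rough ad hoc transcription
lexicon = {
    "cat":     ("k", "kae"),
    "captain": ("k", "kaep"),
    "kitten":  ("k", "kih"),
    "carpet":  ("k", "kar"),
    "coffee":  ("k", "kaa"),
    "pepper":  ("p", "peh"),
    "paper":   ("p", "pey"),
    "pillow":  ("p", "pih"),
}

by_phoneme = defaultdict(set)
by_syllable = defaultdict(set)
for word, (phoneme, syllable) in lexicon.items():
    by_phoneme[phoneme].add(word)
    by_syllable[syllable].add(word)

# A phoneme-initiated search for a /k/-initial word must consider every
# /k/-initial entry; a syllable-initiated search narrows the set at once,
# so a larger share of comparisons succeed.
print("candidates after hearing /k/:  ", sorted(by_phoneme["k"]))
print("candidates after hearing /kae/:", sorted(by_syllable["kae"]))
```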
Research over the past several decades has focused on the role of phonemes versus syllables as possible prelexical units—units that provide the forms by which members of the lexicon are encoded and directly accessed. Which of these units (or both) provides the basis for word recognition is, I believe, also highly relevant to the search for the initial representational units of speech in young infant listeners. It is the view taken here that the processing characteristics necessary for lexical access derive from the inherent processing properties and representational units of speech in very young infants, which are later tempered by experience with the parental language. Given this, the nature of these representational structures in infants will be described first, after which I examine the recent evidence for prelexical units in adult listeners.
URL: https://www.sciencedirect.com/science/article/pii/S0079742108602832
Phonological Aspects of Aphasia
Sheila E. Blumstein, in Acquired Aphasia (Third Edition), 1998
Phonological Patterns of Speech Production
Clinical evidence shows that nearly all aphasic patients produce phonological errors in their speech output. These errors can be characterized according to four main types:
1. Phoneme substitution errors, in which a phoneme is substituted for a different phoneme of the language, for example, teams → /kimz/.
2. Simplification errors, in which a phoneme or syllable is deleted, for example, brown → /bawn/.
3. Addition errors, in which an extra phoneme or syllable is added to a word, for example, papa → [paprə].
4. Environment errors, in which the occurrence of a particular phoneme is influenced by the surrounding phonetic context. The order of the segments may be changed, for example, degree → [gədri], or the presence of one sound may influence the occurrence of another, for example, Crete → [trit].
Within each of the four error categories, systematic patterns have been observed among the aphasic patients studied, and these patterns provide clues to the basis of the deficit. The majority of phoneme substitution errors are characterized by the replacement of a single phonetic feature. For example, patients may make errors involving the phonetic feature [voice], for example, peace → [bis], the phonetic feature [place of articulation], for example, pay → [tei], or manner of articulation such as [nasal], for example, day → [nei]. Rarely do they make errors involving more than one phonetic feature. Moreover, there is a hierarchy of phoneme substitution errors, with the greatest preponderance of errors involving place of articulation, then voicing, and the fewest involving manner of articulation. The overall pattern of sound substitutions is consistent with the view that incorrect phonetic features have been selected or activated but are then correctly implemented by the articulatory system. Most simplification and addition errors result in what is believed to be the simplest, and thus the canonical, syllable structure of language, Consonant-Vowel (CV). For example, consonants are more likely to be deleted in a word beginning with two consonants, sky → ky, and more likely to be added in a word beginning with a vowel, army → jarmy (Blumstein, 1990). Finally, environment errors that occur across word boundaries preserve the syllable-structure relations of the lexical candidates. That is, if the influencing phoneme is at the beginning of the target word, so is the influenced phoneme, for example, history books → bistory books. If the influencing phoneme is at the end of the target word, so is the influenced phoneme: roast beef → roaf beef.
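The single-feature character of most substitution errors can be made concrete with a small sketch. The feature table below is a deliberately simplified, hypothetical fragment (only voice, place, and nasality), chosen solely to illustrate how feature distance is counted; it is not the feature system assumed in the chapter.

```python
# Minimal sketch: count how many phonetic features separate a target
# consonant from the consonant actually produced. The feature table is a
# simplified, hypothetical fragment (voice / place / nasal) for illustration.
FEATURES = {
    "p": {"voice": False, "place": "labial",   "nasal": False},
    "b": {"voice": True,  "place": "labial",   "nasal": False},
    "t": {"voice": False, "place": "alveolar", "nasal": False},
    "d": {"voice": True,  "place": "alveolar", "nasal": False},
    "k": {"voice": False, "place": "velar",    "nasal": False},
    "g": {"voice": True,  "place": "velar",    "nasal": False},
    "n": {"voice": True,  "place": "alveolar", "nasal": True},
}

def feature_distance(target: str, produced: str) -> int:
    """Number of features on which the two consonants differ."""
    t, p = FEATURES[target], FEATURES[produced]
    return sum(t[f] != p[f] for f in t)

# Errors cited above: peace -> [bis] (p->b), pay -> [tei] (p->t), day -> [nei] (d->n).
for target, produced in [("p", "b"), ("p", "t"), ("d", "n")]:
    print(f"/{target}/ -> /{produced}/: {feature_distance(target, produced)} feature(s) changed")
```

Each of the cited substitutions comes out as a one-feature change (voicing, place, or nasality), matching the pattern described above.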
The stability of these patterns is evidenced by their occurrence across languages: French (Bouman & Grunbaum, 1925; Lecours & Lhermitte, 1969), German (Bouman & Grunbaum, 1925; Goldstein, 1948), English (Blumstein, 1973; Green, 1969), Turkish (Peuser & Fittschen, 1977), Russian (Luria, 1966), and Finnish (Niemi, Koivuselka-Sallinen, & Hanninen, 1985). Despite the systematicity and regularity of these phonological errors, their particular occurrence cannot be predicted. That is, sometimes the patient may make an error on a particular word, and at other times she or he will produce it correctly. Moreover, the patterns of errors are bidirectional (Blumstein, 1973; Hatfield & Walton, 1975). A voiced stop consonant may become voiceless, /d/ → /t/, and a voiceless stop consonant may become voiced, /t/ → /d/.
Taken together, these results suggest that the patient has not "lost" the ability to produce particular phonemes or to instantiate particular features. Rather, his or her speech output mechanism does not seem able to encode consistently the correct phonemic (i.e., phonetic feature) representation of the word. As a consequence, the patient may produce an utterance that is articulatorily correct but deviates phonologically from the target word. On other occasions, the patient may produce the same target word correctly. These results are consistent with the view that the underlying phonological representations are intact, but there are deficits in accessing these representations (Butterworth, 1992). As such, these patients have a selection or phonological planning deficit (Blumstein, 1973, 1994; cf. also Nespoulous & Villiard, 1990). To return to the model for speech production in Figure 5.1, a word candidate is selected from the lexicon. To produce the word requires that its sound properties (i.e., its segments and features) be specified so that they can be "planned" for articulation and ultimately translated into neuromuscular commands relating to the speech apparatus. Phonological deficits, then, seem to relate to changes in the activation patterns of the nodes corresponding to the phonetic representations themselves (e.g., features, syllable structure) as the word candidate is selected, as well as to deficits in the processes involved in storage in the short-term lexical buffer and in phonological planning (cf. also Schwartz et al., 1994; Waters & Caplan, 1995).
The similar patterns of performance are particularly striking given the very different clinical characteristics and neuropathology of the patients investigated. The groups studied have included both anterior and posterior patients. Anterior aphasics, especially Broca's aphasics, show a profound expressive deficit in the face of relatively preserved auditory language comprehension. Speech output is nonfluent in that it is slow, labored, and often dysarthric, and the melody pattern is often flat. Furthermore, speech output is often agrammatic. This agrammatism is characterized by the omission of grammatical words, such as the and is, as well as the substitution of grammatical inflectional endings marking number, tense, and so forth.
In contrast to the nonfluent speech output of the anterior aphasic, the posterior patient's speech output is fluent. Among the posterior aphasias, Wernicke's and conduction aphasia are perhaps the most studied in relation to phonology (cf. Ardila, 1992; Buckingham & Kertesz, 1976; Kohn, 1992; Schwartz et al., 1994). The characteristic features of the language abilities of Wernicke's aphasia include well articulated but paraphasic speech in the context of severe auditory language comprehension deficits. Paraphasias include literal paraphasias (sound substitutions), verbal paraphasias (word substitutions), or neologisms (productions that are phonologically possible but have no meaning associated with them). Speech output, although grammatically full, is often empty of semantic content and is marked by the overuse of high-frequency "contentless" nouns and verbs, such as thing and be. Another frequent characteristic of this disorder is logorrhea, or a press for speech.
Conduction aphasia refers to the syndrome in which there is a disproportionately severe repetition deficit in relation to the relative fluency and ease of spontaneous speech production and to the generally good auditory language comprehension of the patient. Speech output contains many literal paraphasias and some verbal paraphasias.
The results of the studies of the phonological patterns of speech production challenge the classical view of the clinical/neurological basis of language disorders in adult aphasics. The classical view has typically characterized the aphasia syndromes in broad anatomical (anterior and posterior) and functional (expressive and receptive) dichotomies (cf. Geschwind, 1965). To a first approximation, the anterior/posterior anatomical dichotomy corresponds well with the functional expressive/receptive dichotomy, as anterior patients are typically nonfluent and posterior patients are typically fluent, and anterior patients typically have good comprehension and posterior patients typically have poor comprehension. Nonetheless, the similar patterns of performance across these aphasic syndromes indicate that both anterior and posterior brain structures contribute to the selection of phonological representations as well as to phonological planning in speech production.
An interesting syndrome from the perspective of phonological output disorders is jargon aphasia (those Wernicke's aphasics who produce neologisms or jargon, defined as the production of nonwords that do not derive from any obvious literal paraphasia or phonologically distorted semantic paraphasia). Phonological analyses reveal that neologisms follow the phonological patterns of the language. They respect the sound structure, stress rules, syllable structure, and phonotactics (allowable order of sounds). Although it is not clear what the source of these jargon productions is, their phonological characteristics are consistent with the general observation that the processes of lexical activation and retrieval are the source of the problem, not the more abstract phonological shape or organizational principles of the lexicon (Christman, 1994; Hanlon & Edmondson, 1996; Kohn, Melvold, & Smith, 1995).
URL: https://www.sciencedirect.com/science/article/pii/B9780126193220500087
Preverbal Development and Speech Perception
R. Panneton, ... N. Bhullar, in Encyclopedia of Infant and Early Childhood Development, 2008
Perception of Phonology
Phonemes are the basic sound units in any given language that have become incorporated into formal language systems. For many of the world's languages, phonemes consist of various combinations of consonants (C) and vowels (V). For other languages, a phoneme can also be defined as a CV+tone combination. For example, in Thai, ma (rising pitch) is a different phoneme from ma (falling pitch). Phonemes can be differentiated at many levels, such as: (1) their place and/or manner of articulation (e.g., whether the lips are closed or open during production), (2) their voicing properties (e.g., whether activity in the larynx begins prior to full production), and (3) their degree of aspiration (or airflow) during production.
From the newborn period onward, infants from all language cultures appear capable of discriminating phonemes (noticing a change from one to another), with two features of early phoneme perception being especially noteworthy. First, infants (like adults) perceive phonemes categorically. That is, they discriminate the phonemes /ba/ and /pa/ because they come from two distinct categories (defined by an acoustic feature called voice onset time). However, they do not discriminate two versions of /pa/ [pa1 vs. pa2] or two versions of /ba/ [ba1 vs. ba2], even though acoustically these pairs are just as distinct as the ba/pa contrast, because the within-category pairs do not cross a category boundary.
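A toy way to see the categorical pattern: suppose identification depends only on whether voice onset time (VOT) falls above or below a single boundary. The 25 ms boundary and the stimulus values in the sketch below are illustrative assumptions, not measured data; the point is simply that pairs with the same acoustic separation either do or do not cross the boundary.

```python
# Toy categorical-perception sketch: a single voice-onset-time (VOT) boundary
# turns equal acoustic differences into unequal perceptual ones. The 25 ms
# boundary and the stimulus values are illustrative assumptions only.
BOUNDARY_MS = 25.0

def category(vot_ms: float) -> str:
    """Label a stimulus /ba/ or /pa/ from its VOT alone."""
    return "ba" if vot_ms < BOUNDARY_MS else "pa"

def discriminated(vot1: float, vot2: float) -> bool:
    """Under strict categorical perception, only cross-category pairs differ."""
    return category(vot1) != category(vot2)

pairs = {
    "ba1 vs ba2 (within /ba/)": (0.0, 20.0),
    "ba  vs pa  (across)     ": (20.0, 40.0),
    "pa1 vs pa2 (within /pa/)": (40.0, 60.0),
}
for label, (v1, v2) in pairs.items():
    print(f"{label}: 20 ms apart, discriminated = {discriminated(v1, v2)}")
```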
The second interesting aspect of phoneme perception is that younger infants respond categorically to speech contrasts that are present in their native language, and also to those that are not present in their native language (i.e., non-native phonemes they have not previously heard). This is true for both consonants and vowels, suggesting that early phonetic perception derives from more general auditory competencies. However, with age and experience, infants continue to discriminate native phonemes, but have more difficulty discriminating non-native speech sounds. This has generally been referred to as perceptual attunement, resulting from infants' increasing attention to and encoding of native language information.
Interestingly, this pattern of initial perceptual openness followed by progressive narrowing across infancy is seen in other domains. For example, younger infants discriminate between pairs of human faces as well as pairs of primate faces, but older infants maintain discrimination only of human faces, even if the primate faces are accompanied by distinct vocalizations. Most recently, infants have even shown discrimination of video presentations of both native and non-native phonemes (with no sound track) at younger ages, but not at older ages: in the older group, only discrimination of native visual phonemes was evident.
Neurophysiological studies support these general behavioral patterns (categorical perception and perceptual attunement). Newborns and slightly older infants show distinct event-related potentials (ERPs) to categorical changes in consonants, especially those involving voice-onset-time differences. Such category-specific ERPs have been observed over several cortical areas, some involving the right or left hemisphere, and some involving both. Interestingly, distinct ERPs occur in infants listening to phonemes with place-of-articulation differences, but these effects are observed primarily over the left temporal areas (a more adult-like pattern). Discrimination of changes in phoneme categories (both consonants and vowels) has also been observed using mismatch negativity (MMN) measures. For example, newborns show a distinct MMN pattern when presented with two Finnish vowels. MMN has also been observed in English-learning 8-month-olds to the CV pairs /da/ and /ta/. In a similar study, MMN was recorded from Finnish infants at 6 and 12 months of age in a longitudinal design, and from Estonian 12-month-old infants. Both groups of infants were tested for their discrimination of changes in Finnish and in Estonian vowels. The results showed a significant MMN response in the 6-month-olds to both native (Finnish) and non-native (Estonian) vowels, and also in the 12-month-old Estonian infants to their native vowels. However, all infants at 12 months showed diminished MMN to non-native vowels.
Likewise, in a longitudinal ERP study, American infants at 7 and 11 months of age were presented with native and non-native speech contrasts. The results showed no difference in ERP latency or magnitude in speech-related components to native versus non-native contrasts at 7 months of age, but only the native contrasts elicited these same ERP patterns at 11 months of age. This is consistent with the behavioral data reported for non-native speech discrimination. We might expect, then, that the underlying biological substrates that subserve language processing are the same across infants and adults. Infants between 13 and 17 months of age also show larger-amplitude ERP responses to known than to unknown words, with this difference evident in both hemispheres in the frontal, parietal, and temporal lobes. By 20 months of age, however, this ERP enhancement is restricted to the left hemisphere over the temporal and parietal lobes only, indicating a gradual specialization of the neural systems for processing words (much more akin to the pattern seen in adults). These results were further corroborated in a study with 14- and 20-month-olds who heard known words, unknown words that were phonetically similar to the known words, and unknown words that were phonetically dissimilar from the known words. Both age groups showed higher-amplitude ERP responses to known than to unknown words. However, the 14-month-olds' ERP responses were similar in amplitude for known words and phonetically similar unknown words, implying that these words were confused with one another. In contrast, ERP responses in the 20-month-olds to the phonetically similar unknown words were the same as those to the other unknown words. These findings suggest that, with experience, the older infants had improved their processing of phonetic detail relative to the younger infants.
Thus, young infants show consistent brain-related responses to different speech sounds (supporting the behavioral evidence), but their brain-localization patterns in response to different phonemes appear to depend (at least to some extent) on the nature of the information that makes the speech sounds distinct. Vocal-timing differences appear to be represented more diffusely in the infant brain, whereas place/manner of articulation takes on a more adult-like representation (left-temporal localization). This could be due to less cortical specificity for timing in general (because timing is a process involved in many domains of perceptual functioning) and/or to the multiple pathways available for speech processing in the developing nervous system, given that the infant is less experienced with speech in general. This latter possibility may help to explain one study that examined infants' processing of their native language compared with a non-native language and with backwards speech. The results showed that the areas of cortex in infants' brains that are activated by the native language are not completely confined to the primary auditory areas but include those similar to adults in their localization (temporal region) and lateralization (left hemisphere). This early lack of specificity has also been found using ERP methods with 6-month-olds, in whom same-component ERPs to words are equally large over temporal and occipital (typically referred to as visual cortex) brain regions. Interestingly, between 6 and 36 months of age, there is a gradual decrease in ERP amplitude to spoken words (i.e., decreased processing) over occipital areas, but the amplitude remains unchanged over temporal areas.
Such findings point to a common process of perceptual attunement to culturally relevant information throughout the first postnatal year. In terms of language development, infants begin building linguistic representations of phonemes so that their subsequent perception is guided by the fit between an incoming speech sound and these phonemic representations. We have also seen that prosodic information appears to assist infants in focusing on important elements of speech (prosody bootstraps the discovery of phonemic detail). But what happens if the speech stream that infants hear is prosodically attenuated, as is more likely the case when the caregiver seeks to soothe and calm a distressed infant? Such soothing speech is more likely to be lower in pitch, pitch variance, and amplitude, and slower. Are infants, therefore, not as perceptually attuned to language in these instances? Would infants not learn language if only an adult-directed speech (ADS) style of speaking were available to them?
URL: https://www.sciencedirect.com/science/article/pii/B9780123708779001316
Recovery and Treatment of Acquired Reading and Spelling Disorders
ANNA BASSO, in Handbook of the Neuroscience of Language, 2008
41.4.2. Conversion Mechanisms
The phoneme-to-grapheme (component L) and grapheme-to-phoneme (component M) conversion routes are dedicated to reading and spelling. The two conversion routes are functionally independent and can be impaired separately but in most subjects they are both impaired. A third conversion route, the input-to-output phoneme conversion, which allows for the repetition of nonwords, is more resistant to functional damage, probably because the relationship between input and output phonemes is always one-to-one, without exception.
The rehabilitation program illustrated here involves all three routines at the same time. It may seem a waste of time to include a process that is intact, as is frequently the case for the input-to-output phoneme conversion, but the time the subject spends performing a task he or she can easily perform is trivial compared with the advantage of a varied and stimulating therapy in which subjects are encouraged by their successes.
In order to involve only the conversion mechanisms, it is suggested that one work with nonwords; these should be short in cases of severe damage to any of the three routes and become longer and more difficult (from an orthographic and phonological point of view) as the impairment becomes less severe. If the subject has difficulty repeating, reading, or spelling even single phonemes or letters, then simple CV syllables, in which only the consonant varies while the vowel [a] is kept constant, should be used.
The subject is first asked to repeat a syllable; if he fails, the stimulus is presented again. The subject is then required to write the syllable and to check whether what he has written corresponds to the syllable he has just repeated. If this is not the case and the subject does not spot the error, the therapist should draw the subject's attention to the error and ask him to read what he has written. If the subject fails, the therapist provides the correct answer and helps him to write the syllable. The subject is then invited to pay attention to the correct spelling and to copy the syllable after a short delay. A new stimulus is then given and the whole procedure starts again. After having repeated and written 3–4 syllables, the subject is asked to read them aloud in random order. Correct repetition of the syllable ensures that the subject has correctly identified the heard phonemes and can translate them from input to output phonemes. Writing the syllable requires the conversion of phonemes into graphemes, and reading it requires the conversion of graphemes into phonemes. After the subject is able to spell most of the single syllables, two-syllable nonwords are introduced.
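As a practical illustration, the sketch below generates the kind of item lists this program calls for: CV syllables with the vowel held constant at /a/, then two-syllable nonwords for the later stage. The particular consonant set, the session size, and the random ordering for the read-back step are assumptions for illustration, not prescriptions from the chapter.

```python
# Sketch of drill-list generation for the program described above: CV
# syllables with the vowel held constant at /a/, then two-syllable nonwords.
# The consonant set and the shuffling policy are illustrative assumptions.
import random

CONSONANTS = ["b", "d", "f", "l", "m", "n", "p", "r", "s", "t"]
VOWEL = "a"  # kept constant, as suggested for severely impaired subjects

def cv_syllables():
    return [c + VOWEL for c in CONSONANTS]

def two_syllable_nonwords(n_items, seed=0):
    """Random CV+CV nonwords for the later stage of the program."""
    rng = random.Random(seed)
    syllables = cv_syllables()
    return [rng.choice(syllables) + rng.choice(syllables) for _ in range(n_items)]

# One session: 3-4 syllables are repeated and written, then read back in
# random order, mirroring the repeat -> write -> read cycle described above.
session = random.Random(1).sample(cv_syllables(), 4)
print("repeat and write:", session)
read_back = session[:]
random.Random(2).shuffle(read_back)
print("read aloud in random order:", read_back)
print("later stage, two-syllable nonwords:", two_syllable_nonwords(3))
```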
Phonological awareness is held to be important for reading acquisition and is frequently impaired in phonological dyslexics. For this reason when two-syllable nonwords are introduced, exercises for phonological awareness are also proposed. After hearing the nonword, the subject has to repeat either the first or the second syllable, say what the last letter is, and so on. After this, the nonword should be written and then read aloud as illustrated before.
An interesting aspect of such a program is that it can be carried out at home with the help of any naïve person who has been adequately instructed, which allows for more intensive treatment.
The program is easily applied in Italian and Spanish, whose orthographies are transparent: very few phonemes must be rendered by two letters, and very few letters correspond to more than one phoneme. Other languages, such as English and French, have more opaque orthographies with less transparent conversion rules that must be explained and trained one by one. Carrying out the program in an opaque orthography will probably require more time for the learning of specific conversion rules.
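The transparent/opaque contrast can be sketched as the number and kind of rules a grapheme-to-phoneme routine needs. The two rule sets below are tiny hypothetical fragments (not complete rules for Italian or English) meant only to show why a transparent mapping is nearly one letter to one phoneme while an opaque one needs multi-letter, context-dependent rules.

```python
# Toy contrast between a transparent and an opaque orthography. Both rule
# sets are tiny hypothetical fragments for illustration only.

TRANSPARENT = {  # Italian-like: each letter maps to one phoneme
    "c": "k", "a": "a", "s": "s", "o": "o", "l": "l", "e": "e",
}

OPAQUE = [  # English-like: longest-match, multi-letter graphemes
    ("ough", "ʌf"),   # as in "tough" (one of several pronunciations)
    ("gh",   ""),     # silent, as in "light"
    ("ph",   "f"),
    ("c",    "k"),
    ("a",    "æ"),
    ("t",    "t"),
]

def transparent_read(word):
    return "".join(TRANSPARENT.get(ch, "?") for ch in word)

def opaque_read(word):
    phonemes, i = [], 0
    while i < len(word):
        for grapheme, phoneme in OPAQUE:       # try longer graphemes first
            if word.startswith(grapheme, i):
                phonemes.append(phoneme)
                i += len(grapheme)
                break
        else:
            phonemes.append("?")
            i += 1
    return "".join(phonemes)

print(transparent_read("casa"))    # one rule per letter suffices
print(opaque_read("tough"))        # needs the multi-letter 'ough' rule
```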
URL: https://www.sciencedirect.com/science/article/pii/B9780080453521000410
Brain Language Mechanisms Built on Action and Perception
Friedemann Pulvermüller, Luciano Fadiga, in Neurobiology of Language, 2016
26.2 Phonemes
Phonemes, the smallest units of speech that distinguish between meaningful spoken words, are a characteristic ingredient of spoken human languages. The world's languages include approximately 100 phonemes overall, and each language has approximately 50 of them. In comparison, apes have only a few oral gestures (Call & Tomasello, 2007). Why is this so? The normal learning of phonemes requires that speech sounds be produced, which babies start doing at approximately 6 months and continue throughout the so-called babbling phase until the end of their first year of life (Locke, 1993). Babbling consists of productions of meaningless syllable sequences. The brain mechanisms implicated by such activity include nerve cell activity in motor regions, where the articulatory gestures are initiated and controlled, and activity in the auditory and somatosensory cortex, because the infants also perceive the tactile self-stimulations produced by articulating along with the self-produced sounds. This means coactivation of nerve cells (or neurons) across a range of cortical areas close to the sylvian fissure (perisylvian areas: articulatory motor and somatosensory cortex, auditory cortex; see Figure 26.1D). If there were direct connections between the specific neurons involved in controlling articulatory movements and those responding to sensory aspects of the sounds produced, then the neuroscience principle of correlation learning would imply that this distributed set of neurons strengthens those links (Fry, 1966). However, it seems that the frontotemporal connections available in the left hemisphere of humans do not strongly interlink these areas directly, but rather primarily connect areas adjacent to motor and auditory cortices—in inferior premotor and prefrontal areas and in superior and middle temporal cortex (Figure 26.1C; Braitenberg & Schüz, 1992). This means that the correlated neuronal activity in sensory and motor cortex can be linked only indirectly, by way of neurons in "higher" zones adjacent to the relevant primary areas (other hatched areas in Figure 26.1D). The connection structure of inferior-frontal and superior-temporal areas, sometimes called the language areas of Broca and Wernicke, is illustrated schematically in Figure 26.2A, where these larger areas are each further subdivided into premotor and prefrontal areas and auditory belt and parabelt areas, respectively. The corresponding connection diagram (Figure 26.2B) illustrates the available between-area connections, including long-distance connections through the arcuate fascicle, and points to the crucial role of the language areas as connection hubs within the language cortex (Garagnani & Pulvermüller, 2013; Garagnani, Wennekers, & Pulvermüller, 2008). Antonio Damasio called such hub areas convergence zones (Damasio, 1989).
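A minimal sketch of the correlation-learning (Hebbian) principle invoked here: connections between units that are repeatedly active together during babbling get stronger, linking "articulatory" and "auditory" units for the same syllable. The unit labels, learning rate, and activation patterns are illustrative assumptions, not a model of the actual perisylvian circuitry or of its indirect, multi-area routing.

```python
# Minimal Hebbian (correlation-learning) sketch: links between units that
# are coactive during babbling are strengthened; links between units that
# are never coactive stay at zero. All values are illustrative assumptions.

UNITS = ["motor_ba", "motor_da", "auditory_ba", "auditory_da"]
weights = {(i, j): 0.0 for i in UNITS for j in UNITS if i != j}
LEARNING_RATE = 0.1

def hebbian_step(active):
    """Strengthen links between every pair of simultaneously active units."""
    for i in UNITS:
        for j in UNITS:
            if i != j and i in active and j in active:
                weights[(i, j)] += LEARNING_RATE

# Babbling: producing "ba" activates its motor program and, via self-hearing,
# the corresponding auditory units; likewise for "da".
for _ in range(20):
    hebbian_step({"motor_ba", "auditory_ba"})
for _ in range(20):
    hebbian_step({"motor_da", "auditory_da"})

print("motor_ba -> auditory_ba:", round(weights[("motor_ba", "auditory_ba")], 2))
print("motor_ba -> auditory_da:", round(weights[("motor_ba", "auditory_da")], 2))
```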
Within this circuit structure, the arcuate fascicle may provide a prerequisite for building the more elaborate phoneme repertoire available to humans, but not to apes or monkeys. Together with the extreme capsule, which seems equally developed in apes and humans, the arcuate provides a powerful connection between a range of inferior-frontal (including prefrontal and premotor) and superior-temporal (including auditory belt and parabelt) areas (Rilling, 2014). The availability of a powerful data highway between articulation and auditory perception and the resultant more elaborate repertoire of articulatory gestures may have constituted a significant selection advantage for humans in their phylogenetic development.
URL: https://www.sciencedirect.com/science/article/pii/B9780124077942000262
Improving Quality of Life With Hearing Aids and Cochlear Implants
Jos J. Eggermont, in The Auditory Brain and Age-Related Hearing Impairment, 2019
10.3.4 Summary Auditory Learning
Phoneme discrimination training benefits some, but not all, people with mild hearing loss, and the improvement is modest (Ferguson et al., 2014). Normal-hearing participants learned more than participants with presbycusis in the speech-in-noise condition, but showed similar patterns of learning in the other conditions. Some training-related changes also occurred at the level of phonemic representations in the presbycusis group, consistent with an interaction between bottom-up and top-down processes (Karawani et al., 2016). Older hearing-impaired listeners could improve their open-set recognition of words in noise after following the speech perception–training regimen. Training generalized to other talkers saying the same trained words, but only slight improvements (7%–10%) occurred for new words. Improvements from training were retained over periods as long as 6 months. Similar gains were observed only when there was feedback. The effects of the word-based training transferred to novel sentences and talkers when the sentences were composed primarily of words used during training (Humes et al., 2014). Training may not engender domain-general plasticity but instead may train a series of systems with different parameters and limits for change (Anderson et al., 2013).
Behavioral training effects were reflected in the MMN, which showed an increase in duration and area when elicited by the training stimuli, as well as a decrease in onset latency when elicited by the transfer stimuli (Tremblay et al., 1997). The P2 enhancement in that study was observed in participants who listened passively to the stimuli during the recordings and were not required to complete a task. The long-lasting increase in P2 amplitude indicates that the auditory P2 response is potentially an important biomarker of auditory learning, memory, and training (Ross and Tremblay, 2009). There was no significant P3 difference between stimuli, indicating once more that behavior and the putative underlying electrophysiological responses reflect different mechanisms (Morais et al., 2015).
URL: https://www.sciencedirect.com/science/article/pii/B9780128153048000104
Reading Disorders, Developmental
Virginia A. Mann, in Encyclopedia of the Human Brain, 2002
VI.C Morpheme Awareness: Another Problem for Poor Readers
Although deficient phoneme awareness is the most reliable attribute of disabled reading in the early elementary grades, deficient morpheme awareness begins to play an increasingly important role as children reach the later grades. Several researchers have shown that disabled readers in later grades have problems with the morphological aspects of both written and spoken language. The difficulties of the disabled readers are seen in spelling errors, in performance on cloze tasks, and in vocabulary exercises such as defining a word or giving its derivational forms. The difficulties are linked to poor phoneme awareness and poor vocabulary, but we have found that, after the third grade, they play a role of their own when phoneme awareness and vocabulary are controlled. The link between morphological abilities and reading ability should come as no surprise given the morphophonological nature of the English alphabet (see Section III).
URL: https://www.sciencedirect.com/science/article/pii/B0122272102003976
Screening and Assessment Tools
GLEN P. AYLWARD, ... LYNN M. JEFFRIES, in Developmental-Behavioral Pediatrics, 2008
BASIC DEFINITIONS
Language is the main medium through which humans share ideas, thoughts, emotions, and beliefs. Unlike other methods of communication, language is symbolic; meaning is conveyed by arbitrary signs. Cries and giggles are signs that arise as reflexive responses or as emissions from the emotional or motivational state that they represent. Therefore, cries and giggles are communicative but not language. In contrast, words and sentences are arbitrary and therefore can vary from language to language. A dog can be labeled perro in Spanish, chien in French, and so forth. Language is also rule-governed. In English, for example, the order of words in a sentence cannot be significantly altered without changing meaning or rendering the sequence ungrammatical. For example, "Bill kissed Sue" has a different meaning than "Sue kissed Bill." Moreover, "See the dog" is a grammatical sentence, but "dog the see" is not. These features of language—the use of symbols in a systematic manner to convey meanings—provide people with the ability to create and understand an infinite number of messages.
Language is distinct from speech. Speech refers specifically to articulation of sounds and syllables created by the complex interaction of the respiratory system, the larynx, the pharynx, the mouth structures, and the nose. Sign languages also meet the definition of language but entail the configuration and movement of hands, arms, facial muscles, and body to articulate meaning. Similarly, written languages convey meaning through the use of arbitrary symbols on a page. It is possible to have a speech disorder without a language disorder and the converse. However, children may exhibit disorders in speech and language concurrently. In this chapter, discussion focuses on the assessment of verbal language and speech.
For purposes of understanding mature language use, language is subdivided into several components. Table 7D-1 lists some of the terms used to describe components of language and their definitions. In terms of language, an important division is between receptive and expressive language. Receptive language refers to the ability to understand or comprehend another person's language. Expressive language refers to the ability to produce language. Receptive language typically begins to develop before expressive language. The two components typically progress in relative synchrony. In some toddlers, however, the ability to produce language lags significantly behind the ability to understand language. Older children may show uneven skills in their abilities to understand and produce, with either domain more advanced than the other. Therefore, comprehensive assessments of language usually include separate evaluations of receptive language or comprehension and expressive language or production. Some standardized measures include separate subtests for comprehension and production. Some measures focus on one or the other component.
Language is also subdivided into subsystems or components, in large part on the basis of the size of units. Comprehensive assessments evaluate multiple subsystems of language in terms of both comprehension and production.
- Phonemes are the smallest units in the sound system of a language that serve to change the meaning of a word. For example, in English, bat, pat, bit, and bid are all recognized as different words. Therefore, the single sounds that differentiate among them—/b/, /p/, /a/, /i/, /t/, and /d/—all represent different phonemes in English. The phonological system of a language is composed of the inventory of phonemes and the rules by which phonemes can interact with each other. For example, if a new word in English were needed, the sounds represented by /i/ and /b/ could be combined to create the word ib, but the sounds /b/ and /d/ could not be combined, because that combination violates the phonological rules of English. (A minimal code sketch of this minimal-pair idea follows this list.)
- Morphemes are considered the smallest units of meaning in oral and written language. Words are free-standing morphemes that are the meaningful building blocks of larger units, such as sentences. Meaningful parts of words, such as the plural "-s" or past tense "-ed" markers, are bound morphemes, which, when attached to another morpheme, alter the meaning of the word. In English, there are relatively few bound morphemes, but in other languages, such as Hebrew and Italian, there are many morphemes that can be attached to other morphemes and change the meaning of words.
- Syntax comprises the rules for combining morphemes and words into organized and meaningful sentences. In English, most sentences begin with a noun phrase, such as "The boy," followed by a verb phrase, such as "gave the girl a red book." In addition, the adjective red should come before the noun book, but that arrangement is reversed in some languages. In other languages, such as Italian and German, the syntactic rules require a different arrangement of words: for example, the adjective occurring after the noun it describes, and the verb appearing at the end of the sentence rather than in the middle.
- Semantics refers to the meaning of words and sentences. The number of words that a child produces and understands can be considered one element of the child's semantic knowledge. The meaning of sentences is described in such terms as agents and actions, as distinct from syntax, in which sentences may be described in terms of noun and verb phrases. In the sentence "The boy gave the girl a red book," the boy is the agent, gave is the action, and the girl is the recipient or dative. Semantics also includes meaning at concrete and abstract levels, word definitions, and word categories such as synonyms and antonyms. During school age, semantic skills that are learned include knowledge of metaphorical language as in idioms, proverbs, and similes.
- Pragmatics refers to social aspects or actual use of language. Pragmatic skills address three broad areas of using language: discourse rules, communicative functions, and presuppositional skills. Examples of discourse rules include features such as appropriate use of intonation and tone of voice, as well as the inclusion of politeness markers in communication. Discourse guidelines also consider the ability to initiate, respond, maintain topics, and appropriately take turns. Discourse rules also cover aspects such as varying the language used in relation to different environments and social interactions. Children vary the style and tone of voice when asking an adult for a favor in comparison with asking a peer. Communicative functions examine the purpose behind a communication act (e.g., requesting, commenting, protesting). Presuppositional skills address the ability to provide a listener with appropriate background information. For example, once a speaker realizes that his or her listeners do not know that "Bob" is his or her cousin, the speaker needs to tell listeners who he is, to increase their understanding of the message.
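Returning to the phoneme item above, here is the promised minimal sketch: it finds pairs of words that differ in exactly one segment, which is the operational sense in which a phoneme "changes the meaning of a word." The tiny word list and the simplification of treating each letter as one phoneme are assumptions made only for illustration.

```python
# Minimal-pair sketch for the phoneme definition above: two words form a
# minimal pair when they differ in exactly one segment, which is what makes
# the differing segments distinct phonemes. Toy word list; each letter is
# treated as one phoneme, a simplifying assumption.
from itertools import combinations

words = ["bat", "pat", "bit", "bid", "bad"]

def differs_in_one_segment(w1, w2):
    if len(w1) != len(w2):
        return False
    return sum(a != b for a, b in zip(w1, w2)) == 1

minimal_pairs = [(w1, w2) for w1, w2 in combinations(words, 2)
                 if differs_in_one_segment(w1, w2)]
print(minimal_pairs)
# e.g. ('bat', 'pat') shows /b/ and /p/ are separate phonemes of English.
```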
Several aspects of verbal production are considered parts of speech. Table 7D-1 includes definitions and examples of these components. Speech includes the accuracy of speech sound production. Assessments of speech typically include analysis of the types of speech sound errors. Estimates of intelligibility are used to describe the functional consequences of speech sound errors. Another component of speech production is fluency, defined as the forward flow of speech. Stuttering is a type of dysfluency, characterized by repetition or prolongation of sounds and other fragmentation of the sounds, often accompanied by a sense of effort and by secondary behavioral characteristics that the speaker uses to attempt to reinitiate forward flow of speech. Voice and resonance also affect speech. The flow of air through the vocal cords into the nose and mouth affects the quality of speech. Voice disorders include hoarseness, which may be caused by temporary inflammation of the larynx or by nodules from vocal abuse. Resonance disorders include hyponasality, which is a reduction in the usual amount of air through the nose and may be caused by adenoidal hypertrophy, and hypernasality, which results from excessive air through the nose and may be secondary to a cleft palate.
URL: https://www.sciencedirect.com/science/article/pii/B9780323040259500106
Word Recognition
J. Zevin, in Encyclopedia of Neuroscience, 2009
Phoneme Restoration and the Ganong Effect
Lexical effects on phoneme perception can, in appropriate circumstances, result in rather compelling auditory illusions. For example, when presented with a sentence in which a consonant has been replaced with white noise, people readily identify all the words in the sentence and, furthermore, are frequently unable to determine which sound was manipulated. This is termed the 'phoneme restoration effect' because it is as if the manipulated speech sound were restored by the listener. A similar effect is named after its discoverer, William F. Ganong III: when ambiguous speech sounds are presented as stimuli in categorical perception experiments (e.g., an utterance halfway between 'pink' and 'bink'), people are more likely to identify the ambiguous sound in a way that makes the stimulus a word (here, 'pink'). This is interesting because, unlike the tasks typically used to study the phoneme restoration effect, in this case the experiment explicitly requires participants to pay attention to individual speech sounds, and yet there is a strong influence of lexical knowledge.
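One toy way to picture the Ganong bias, offered as a sketch under assumptions rather than a model from this article: combine graded acoustic evidence for /p/ versus /b/ with a small lexical preference for whichever reading forms a real word, so the same ambiguous token is labeled differently in different word frames. The word list, bias weight, and the 'beef/peef' frame are all invented for illustration.

```python
# Toy sketch of the Ganong bias: identification of an ambiguous /p/-/b/ onset
# combines graded acoustic evidence with a lexical preference for whichever
# reading makes a real word. All values are illustrative assumptions.

LEXICON = {"pink", "beef"}        # 'bink' and 'peef' are nonwords here
LEXICAL_BIAS = 0.15               # assumed strength of the lexical pull

def identify(p_evidence, frame):
    """Return 'p' or 'b' for an onset heard in a given word frame.

    p_evidence: acoustic support for /p/ on a 0-1 scale (0.5 = ambiguous).
    frame: the rest of the word, e.g. 'ink' yields candidates 'pink'/'bink'.
    """
    score_p = p_evidence
    score_b = 1.0 - p_evidence
    if "p" + frame in LEXICON:
        score_p += LEXICAL_BIAS
    if "b" + frame in LEXICON:
        score_b += LEXICAL_BIAS
    return "p" if score_p >= score_b else "b"

# The same perfectly ambiguous onset is resolved differently by the frame.
print("ambiguous onset + 'ink':", identify(0.5, "ink"))   # -> 'p' (pink is a word)
print("ambiguous onset + 'eef':", identify(0.5, "eef"))   # -> 'b' (beef is a word)
```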
Although these effects clearly demonstrate that lexical knowledge influences how speech sounds are perceived, there is also abundant evidence that fine phonetic detail has an influence on how rapidly words are recognized. To some extent we hear what we are somehow 'prepared' to hear, but there is also an effect of what is actually impinging on our ears. Manipulations much subtler than replacing an entire phoneme with white noise – for example, splicing in the same sound from a different word – result in slower performance in a wide range of word recognition tasks. Furthermore, the strength of both the Ganong and phoneme restoration effects can be influenced by various stimulus parameters. Finally, there is some question as to whether or not identification of these 'smaller units' is a necessary precursor to identifying spoken words. In fact, a number of distinct theories share the assumption that identification of individual speech sounds is epiphenomenal to the goal of distinguishing words from one another.
URL: https://www.sciencedirect.com/science/article/pii/B9780080450469018817
Speech Production
Rosaleen A. McCarthy, Elizabeth K. Warrington, in Cognitive Neuropsychology, 1990
Phonemic Disorders
Errors in phoneme sequencing and selection are relatively common in aphasic disorders (e.g., Blumstein, 1973). As an isolated deficit they are associated with the classical syndromes of "conduction aphasia" and "transcortical motor aphasia" (see Chapter 1). The framework put forward by Wernicke and Lichtheim suggested that the critical lesion site in conduction aphasia (speech-production errors predominantly in repetition) should affect the main pathways between Wernicke's area in the temporal lobe and Broca's area in the frontal lobe. Geschwind (e.g., 1970) argued that damage to this tract, the arcuate fasciculus, was critical for the syndrome of conduction aphasia to arise. Green & Howes (1977) reviewed 25 published cases of conduction aphasia for whom pathological evidence was available either surgically or at post-mortem (from Lichtheim, 1885, to Benson, Sheremata, Bouchard, Segarra, Price, & Geschwind, 1973). Their findings are summarised in Table 9.2 and Fig. 9.3. The highest incidence of damage was found in the supramarginal gyrus and adjacent areas, consistent with Geschwind's hypothesis. Subsequent single case reports are in accord with this analysis (e.g., Damasio & Damasio, 1980; McCarthy & Warrington, 1984; Caplan, Vanier, & Baker, 1986a).
Table 9.2. Surgical and Post-mortem Reports of Lesions in 25 Cases of Conduction Aphasia
Columns 1–8 give the locus of damage.a
Author and date of case report | Type of lesion | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
---|---|---|---|---|---|---|---|---|---
Lichtheim (1884) | Infarct | 1 | 2 | 2 | 1 | ||||
Pick (1898) | Infarct | 2 | 1 | ||||||
Pershing (1900) | Infarct | 1 | 1 | ||||||
Goldstein (1911) | Tumour | 1 | 1 | ||||||
Liepmann & Pappenheim (1914) | Infarct | 1 | 1 | 2 | 1 | 1 | |||
Bonhoeffer (1923) | Infarct | 1 | 2 | 2 | 1 | ||||
Pötzl (1925) | Infarct | 2 | |||||||
Hilpert (1930) | Abscess | 2 | 2 | ||||||
Stengel (1933) | Tumour | 1 | 2 | 1 | |||||
Pötzl & Stengel (1937) | Infarct | 1 | 1 | 2 | 2 | 1 | |||
Goldstein & Marmor (1938) | Infarct | 1 | 2 | 1 | 1 | 1 | |||
Coenen (1940) | Infarct | 1 | 1 | 2 | 2 | ||||
Stengel & Lodge Patch (1955) | Infarct | 1 | 2 | 2 | 1 | 1 | |||
Hécaen et al. (1955) | Tumour | 1 | 2 | 1 | |||||
Hoeft (1957) | Infarct | 2 | 2 | ||||||
Konorski et al. (1961) | Tumour | 2 | |||||||
Kleist (1962) | Infarct | 2 | 2 | 1 | 1 | ||||
Kleist (1962) | Infarct | 1 | 1 | 2 | 2 | ||||
Caraceni (1962) | Tumour | 2 | |||||||
Warrington et al. (1971) | Tumour | 2 | 2 | ||||||
Warrington et al. (1971) | Tumour | 2 | 2 | 2 | 1 | ||||
Brown (1972) | Tumour | 1 | 2 | ||||||
Benson et al. (1973) | Infarct | 2 | 1 | 2 | |||||
Benson et al. (1973) | Infarct | 2 | 2 | 1 | |||||
Benson et al. (1973) | Infarct | 1 | 2 | 1 | 1 | ||||
Total cases of partial damage | 5 | 4 | 7 | 2 | 5 | 9 | 2 | 6 | |
Total cases of severe damage | 0 | 1 | 8 | 16 | 9 | 3 | 0 | 0 | |
Total damage (out of possible 50) | 5 | 6 | 23 | 34 | 23 | 15 | 2 | 6 |
Key to cell numbers: 1, partial damage; 2, severe damage. All 1s and 2s are summed separately to obtain the column totals for partial and severe damage. Figures for total overall damage were obtained by multiplying each column's total for severe damage by two and adding that column's total for partial damage (for example, for locus 3: 2 × 8 + 7 = 23).
a Key to locus numbers: 1, Heschl's gyrus; 2, planum temporale (posterior); 3, first temporal gyrus (posterior); 4, supramarginal gyrus; 5, supramarginal gyrus; 6, angular gyrus; 7, parietal operculum; 8, insula.
(Green & Howes, 1977)
Copyright © 1977
Group studies have also been largely consistent with this localisation. Kertesz found both supra- and infra-sylvian lesions were present in 11 cases with clinically defined conduction aphasia. A comparable localisation was identified in three cases with chronic and persistent deficits (Kertesz, 1979).
There is very little evidence as to the precise localising significance of phonemic errors confined to spontaneous speech. Goldstein (1948) thought that this type of disorder could be attributed to mild damage affecting Broca's area; however, there is insufficient evidence from group studies or from single case studies to make this more than a plausible speculation.
URL: https://www.sciencedirect.com/science/article/pii/B9780124818453500126
Source: https://www.sciencedirect.com/topics/psychology/phonemes