How the Brain Learns to Read



  ASSESSING YOUR CURRENT KNOWLEDGE OF READING

  The value of this book can be measured in part by how much it enhances your knowledge about reading. This might be a good time for you to take the following true-false test and assess your current understanding of some concepts related to language, learning to read, reading difficulties, and reading instruction. Decide whether the statements are generally true or false and circle T or F. Explanations for the answers are identified throughout the book in special boxes.

  What’s Coming?

  Children must learn to speak before they can learn to read. How well they learn to speak and what prereading vocabulary they acquire can have a great impact on how quickly and how successfully they will learn to read with comprehension. Just how we acquire spoken language is the subject of the first chapter.

  1

  Learning Spoken Language

  The richer and more copious one’s vocabulary and the greater one’s awareness of fine distinctions and subtle nuances of meaning, the more fertile and precise is likely to be one’s thinking.

  —Henry Hazlitt, Thinking as a Science

  How quickly and successfully the young brain learns to read is greatly influenced by the development of two capabilities: speech comprehension and visual recognition. By recognizing and trying out speech sounds, the child’s brain establishes the neural networks needed to manipulate sounds; to acquire and comprehend vocabulary; to detect a language’s accents, tone, and stress; and to map out sentence structure. A few years later, the brain will call on its visual recognition system to connect the sounds it has been practicing to abstract visual symbols—we call them the alphabet—so that it can learn to read.

  How broad is the child’s vocabulary? How many grammatical errors appear in speech? How sophisticated is the sentence structure? How well does the child comprehend variations in sentence structure? The answers to these questions help in determining the breadth and depth of the child’s spoken language networks and become the starting points for assessing how well the child will learn to read. Therefore, it is important to understand what cognitive neuroscience has revealed about how the brain acquires and processes spoken words. Table 1.1 presents a general timeline for the development of spoken language and visual recognition during the first three years of growth. The table is a rough approximation based on numerous research studies with young children. Obviously, some children will progress faster or slower than the table indicates. Nonetheless, it is a useful guide to show the progression of skills acquired during the process of learning language and developing visual recognition skills.

  Table 1.1 Typical Development of Language and Visual Recognition Systems from Birth to 36 Months of Age

  SOURCES: Mehler et al., 1988; de Haan et al., 2003; Kuhl, 2004; Robinson & Pascalis, 2004; Pascalis et al., 2005; Bhatt et al., 2006; Kraebel et al., 2007; Son et al., 2008; Southgate et al., 2008; Wang & Baillargeon, 2008; Dehaene, 2009; American Speech-Language-Hearing Association, 2013

  One of the most extraordinary features of the human brain is its ability to acquire spoken language quickly and accurately. We are born with an innate capacity to distinguish the distinct sounds (phonemes) of all the languages on this planet. Eventually, we are able to associate those sounds with arbitrary written symbols to express our thoughts and emotions to others.

  Other animals have developed ways to communicate with members of their species. Birds and apes bow and wave appendages; honeybees dance to map out the location of food; and even one-celled animals can signal neighbors by emitting an array of different chemicals. By contrast, human beings have developed an elaborate and complex means of spoken communication that many say is largely responsible for our place as the dominant species on this planet. Spoken language is truly a marvelous accomplishment for many reasons. At the very least, it gives form to our memories and words to express our thoughts. A single human voice can pronounce all the hundreds of vowel and consonant sounds that allow it to speak any of the nearly 7,000 languages that exist today. With practice, the voice becomes so fine-tuned that it makes only about one sound error per million sounds and one word error per million words (Pinker, 1994).

  Before the advent of scanning technologies, we explained how the brain produced spoken language on the basis of evidence from injured brains. In 1861, French physician Paul Broca noticed that patients with brain damage to an area near the left temple understood language but had difficulty speaking, a condition known as aphasia. This region of the brain is commonly referred to as Broca’s area (Figure 1.1).

  In 1874, German neurologist Carl Wernicke described a different type of aphasia—one in which patients could not make sense of words they spoke or heard. These patients had damage in the left temporal lobe. Now called Wernicke’s area, as Figure 1.1 shows, it is located just above and slightly to the rear of the left ear. Those with damage to Wernicke’s area could speak fluently, but what they said was quite meaningless. Ever since Broca discovered that the left hemisphere of the brain was specialized for language, researchers have attempted to understand how normal human beings acquire and process their native language.

  Figure 1.1 The language system in the left hemisphere comprises mainly Broca’s area and Wernicke’s area. The four lobes of the brain are also identified.

  Processing Spoken Language

  An infant’s ability to perceive and discriminate sounds in the environment begins after just a few months of life and develops rapidly. Recent research using brain scanners indicates that spoken language production is a far more complex process than previously thought. When preparing to produce a spoken sentence, the brain not only uses Broca’s and Wernicke’s areas but also calls on several other neural networks scattered throughout the left hemisphere. Nouns are processed through one set of neural networks; verbs are processed through another. The more complex the sentence structure, the more areas are activated, including some in the right hemisphere.

  In most people, the left hemisphere is home to the major components of the language processing system. Broca’s area is a region of the left frontal lobe that is believed to be responsible for processing vocabulary, syntax (how word order affects meaning), and rules of grammar. Recent imaging studies indicate that this area, in addition to helping construct language, is involved in determining the meaning of sentences (Caplan, 2006). Wernicke’s area is part of the left temporal lobe and is thought to process the sense and meaning of language. It works closely with Broca’s area whenever the brain is processing the elements of language. However, the emotional content of language is governed by areas in the right hemisphere.

  Brain imaging studies of infants as young as four months of age confirm that the brain possesses neural networks that specialize in responding to the auditory components of language. Dehaene-Lambertz (2000) used electroencephalograph (EEG) recordings to measure the brain activity of 16 four-month-old infants as they listened to language syllables and acoustic tones. After numerous trials, the data showed that syllables and tones were processed primarily in different areas of the left hemisphere, although there was also some right hemisphere activity. For language input, various features, such as the voice and the phonetic category of a syllable, were encoded by separate neural networks into sensory memory. These remarkable findings suggest that, even at this early age, the brain is already organized into functional networks that can distinguish between language fragments and other sounds. Another study of families with severe speech and language disorders has isolated a mutated gene believed to be responsible for their deficits (Lai, Fisher, Hurst, Vargha-Khadem, & Monaco, 2001). This and subsequent studies support the notion that the ability to acquire spoken language is encoded in our genes (Graham & Fisher, 2013).

  The apparent genetic predisposition of the brain to the sounds of language explains why normal young children respond to and acquire spoken language quickly. After the first year in a language environment, the child becomes increasingly able to differentiate those sounds heard in the native language and begins to lose the ability to perceive other sounds. Imaging studies show that when children grow up learning two languages, all language activity is found in the same areas of the brain. How long the brain retains this responsiveness to the sounds of language is still open to question. However, there does seem to be general agreement among researchers that the window of opportunity for acquiring language within the language-specific areas of the brain begins to diminish for most people during the middle years of adolescence. Obviously, one can still acquire a new language after that age, but it takes more effort because the new language will be spatially separated in the brain from the native language areas (Bloch et al., 2009; Hernandez & Li, 2007).

  Gender Differences in Language Processing

  One of the earliest and most interesting discoveries neuroscientists made with functional imaging was that there were differences in the way male and female brains process language. Male brains tend to process language in the left hemisphere, while most female brains process language in both hemispheres (Burman, Bitan, & Booth, 2008; Clements et al., 2006). Of even greater interest was that these same cerebral areas in both genders were also activated during reading.

  Another interesting gender difference is the observation that the large bundle of neurons that connects the two hemispheres and allows them to communicate (called the corpus callosum) is proportionately larger and thicker in the female than in the male. Assuming function follows form, this difference implies that information travels between the two cerebral hemispheres more efficiently in females than in males. The combination of dual-hemisphere language processing and more efficient between-hemisphere communication may account for why young girls generally acquire spoken language more easily and quickly than young boys.

  There is still debate among scientists and psychologists over what these differences really mean. Some researchers suggest that the gender differences are minimal and that they decline in importance as we age (e.g., Sommer, Aleman, Somers, Boks, & Kahn, 2008; Wallentin, 2009). Others maintain that these differences continue to affect the way each gender uses and interacts with language, even as adults (e.g., Guiller & Durndell, 2007; Jaušovec & Jaušovec, 2009).

  Answer to Test Question #1

  Question: The brain’s ability to learn spoken language improves for most people as they age.

  Answer: False. Numerous studies show that the brain’s ability to acquire spoken language is best during the early adolescent years. Of course, people can learn a new language anytime during their lives. It just takes more effort and motivation.

  STRUCTURE OF LANGUAGE

  Considering that there are almost 7,000 distinct languages—not counting dialects—spoken on this planet, one might think that describing the structure of language is an impossible task. Certainly, the structures of these thousands of languages vary widely, but there are some surprisingly common elements. Obviously, all spoken language begins with sounds, so we start this discussion by looking at sound patterns and how they are combined to make words. The next step is to examine the rules that govern how words are merged into sentences that make sense and communicate information to others.

  Learning Phonemes

  All languages consist of distinct units of sound called phonemes. Although each language has its own unique set of phonemes, only about 170 phonemes make up all the world’s languages. These phonemes consist of all the speech sounds that can be made by the human vocal apparatus. Phonemes combine to form syllables. For example, in English, the consonant sound “t” and the vowel sound “o” are both phonemes that combine to form the syllable to-, as in tomato. Although the infant’s brain can perceive the entire range of phonemes, only those that are repeated get attention, as the neurons reacting to the unique sound patterns are continually stimulated and reinforced.

  At birth, or some researchers say even before birth (e.g., Porcaro et al., 2006; Voegtline, Costigan, Pater, & DiPietro, 2013), babies respond first to the prosody—the rhythm, cadence, and pitch—of their caregiver’s voice, not the words. Around the age of six months, infants start babbling, an early sign of language acquisition. The production of phonemes by infants is the result of genetically determined neural programs; however, language exposure is environmental. These two components interact to produce an individual’s language system and, assuming no abnormal conditions, sufficient competence to eventually communicate clearly with others.

  Infants’ babbling consists of all those phonemes, even ones they have never heard. Here the baby’s brain is developing a competence called phonemic awareness. Within a few months, the baby’s brain calculates which bits of speech are occurring more frequently than others. Pruning of the phonemes begins, and by about one year of age, the neural networks focus on the sounds of the language—or languages—being spoken most often in the infant’s environment (Dehaene, 2009). In fact, it will soon be very difficult for the baby to pronounce sounds not spoken in the environment, such as the four-letter consonant combinations found in Russian but not in English, or the guttural sounds in Dutch. The brain cells originally sensitive to these sounds have been either recruited for the native language or pruned away.

  Learning Words and Morphemes

  The next step for the brain is to detect words from the stream of sounds it is processing. This is not an easy task because people don’t pause between words when speaking. Yet the brain has to recognize differences between, say, green house and greenhouse. Studies show that parents help this process along by slipping automatically into a different speech pattern when talking to their babies than when speaking to adults. Mothers tend to go into a teaching mode with the vowels elongated and emphasized, what some researchers call parentese. They speak to their babies in a higher pitch, with a special intonation, rhythm, and feeling. These researchers suggest that mothers are instinctively attempting to help their babies recognize the sounds of language. The same pattern has been found in other languages as well, such as Russian, Swedish, and Japanese (Burnham, Kitamura, & Vollmer-Conna, 2002).

  Remarkably, babies begin to distinguish word boundaries by the age of 8 months even though they don’t know what the words mean (Singh, 2008; Yeung & Werker, 2009). They now begin to acquire new vocabulary words at the rate of about 7 to 10 a day, helping to establish their working cerebral dictionary called the mental lexicon. By the age of 10 to 12 months, the toddler’s brain has begun to distinguish and remember phonemes of the native language and to ignore foreign sounds. For example, one study showed that at the age of 6 months, American and Japanese babies are equally good at discriminating between the “l” and “r” sounds, even though Japanese has no “l” sound. However, by age 10 months, Japanese babies have a tougher time making the distinction, while American babies have become much better at it. During this and subsequent periods of growth, one’s ability to distinguish native sounds improves, while the ability to distinguish nonnative speech sounds diminishes (Cheour et al., 1998).

  Soon, morphemes, such as -s, -ed, and -ing, are added to babies’ speaking vocabulary. Morphemes are the smallest units of language that carry some meaning, such as prefixes and suffixes. For example, the prefix un- almost always means not or opposite (unaware), and -ing often indicates an ongoing action (eating, walking). We will see in the next chapter the valuable contribution that morphemes make when learning to read.

  At the same time, working memory and Wernicke’s area are becoming fully functional, so the child can now attach meaning to words. Of course, learning words is one skill; putting them together to make sense is another, more complex skill.

  Verbal- and Image-Based Words

  How quickly a child understands words may be closely related to whether the word can generate a clear mental image. A word like elephant generates a picture in the mind’s eye and thus can be more easily understood than an abstract word like justice. Could it be that the brain maintains two distinct systems to process image-loaded words and abstract words?

 
  To further investigate this point, Tamara Y. Swaab and her colleagues used EEG recordings to measure the brain’s response to concrete and abstract words in a dozen young adults (Swaab, Baynes, & Knight, 2002). EEGs measure changes in brain wave activity, called event-related potentials (ERPs), when the brain experiences a stimulus. The researchers found that image-loaded words produced more ERP activity in the front area (frontal lobe—the part thought to be associated with imagery), while abstract words produced more ERP activity in the top central (parietal lobe) and rear (occipital lobe) areas. Furthermore, there was little interaction between these disparate areas when processing any of the words (Figure 1.2). The results support the idea that the brain may hold two separate stores for semantics (meaning), one for verbal-based information and the other for image-based information. This discovery has implications for language instruction. Teachers should use concrete images when presenting an abstract concept. For example, teaching the idea of justice could be accompanied by pictures of a judge in robes, the scales of justice, and a courtroom scene.

  Implication for Teaching and Learning: “Teachers should use concrete images when presenting an abstract concept to young learners.”

  Vocabulary and Language Gaps in Toddlers

  In the early years, toddlers acquire most of their vocabulary words from their parents. Consequently, children who experience frequent adult-to-toddler conversations that contain a wide variety of words will build much larger vocabularies than those who experience infrequent conversations that contain fewer words. The incremental effect of this vocabulary difference grows exponentially and can lead to an enormous word gap during the child’s first three years.
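  The scale of that gap can be made concrete with a rough, purely illustrative calculation. The short Python sketch below uses hypothetical words-per-hour figures, not data from any particular study, simply to show how a modest difference in daily conversation accumulates over the first three years.

# Purely illustrative arithmetic with hypothetical figures,
# not data from any specific study.
WAKING_HOURS_PER_DAY = 12            # assumed waking hours for a toddler
DAYS = 3 * 365                       # roughly the first three years

words_per_hour_rich = 2000           # hypothetical: frequent, varied adult-to-toddler talk
words_per_hour_sparse = 600          # hypothetical: infrequent, limited conversation

heard_rich = words_per_hour_rich * WAKING_HOURS_PER_DAY * DAYS      # about 26 million words
heard_sparse = words_per_hour_sparse * WAKING_HOURS_PER_DAY * DAYS  # about 7.9 million words
gap = heard_rich - heard_sparse                                      # about 18 million words

print(f"Cumulative word gap after three years: {gap:,}")

  Even with these made-up numbers, the cumulative difference runs into the millions of words heard, which illustrates how quickly the gap between talk-rich and talk-poor environments can grow.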