5:25am July 16, 2014
karalianne:

neurosciencestuff:

Months before their first words, babies’ brains rehearse speech mechanics
Infants can tell the difference between the sounds of all languages until about 8 months of age, when their brains start to focus only on the sounds they hear around them. It’s been unclear how this transition occurs, but social interactions and caregivers’ use of an exaggerated “parentese” style of speech seem to help.
University of Washington research in 7- and 11-month-old infants shows that speech sounds stimulate areas of the brain that coordinate and plan motor movements for speech.
The study, published July 14 in the Proceedings of the National Academy of Sciences, suggests that baby brains start laying down the groundwork of how to form words long before they actually begin to speak, and this may affect the developmental transition.
“Most babies babble by 7 months, but don’t utter their first words until after their first birthdays,” said lead author Patricia Kuhl, who is the co-director of the UW’s Institute for Learning and Brain Sciences. “Finding activation in motor areas of the brain when infants are simply listening is significant, because it means the baby brain is engaged in trying to talk back right from the start and suggests that 7-month-olds’ brains are already trying to figure out how to make the right movements that will produce words.”
Kuhl and her research team believe this practice at motor planning contributes to the transition when infants become more sensitive to their native language.
The results emphasize the importance of talking to kids during social interactions even if they aren’t talking back yet.
“Hearing us talk exercises the action areas of infants’ brains, going beyond what we thought happens when we talk to them,” Kuhl said. “Infants’ brains are preparing them to act on the world by practicing how to speak before they actually say a word.”
In the experiment, infants sat in a brain scanner that measures brain activation through a noninvasive technique called magnetoencephalography. Nicknamed MEG, the brain scanner resembles an egg-shaped vintage hair dryer and is completely safe for infants. The Institute for Learning and Brain Sciences was the first in the world to use such a tool to study babies while they engaged in a task.
The 57 babies, aged either 7 months or 11-12 months, each listened to a series of native- and foreign-language syllables such as “da” and “ta” as researchers recorded brain responses. They listened to sounds in English and in Spanish.
The researchers observed brain activity in an auditory area of the brain called the superior temporal gyrus, as well as in Broca’s area and the cerebellum, cortical regions responsible for planning the motor movements required for producing speech.
This pattern of brain activation occurred for sounds in the 7-month-olds’ native language (English) as well as in a non-native language (Spanish), showing that at this early age infants are responding to all speech sounds, whether or not they have heard the sounds before.
In the older infants, brain activation was different. By 11-12 months, infants’ brains increase motor activation to the non-native speech sounds relative to native speech, which the researchers interpret as showing that it takes more effort for the baby brain to predict which movements create non-native speech. This reflects an effect of experience between 7 and 11 months, and suggests that activation in motor brain areas is contributing to the transition in early speech perception.
The study has social implications, suggesting that the slow and exaggerated parentese speech – “Hiiiii! How are youuuuu?” – may actually prompt infants to try to synthesize utterances themselves and imitate what they heard, uttering something like “Ahhh bah bah baaah.”
“Parentese is very exaggerated, and when infants hear it, their brains may find it easier to model the motor movements necessary to speak,” Kuhl said.

This is the scientific reason why Suzuki music lessons work so well and why the kids start so young.

Oh, also, apparently I never went through that phase, or I went through it very, very late.  Because when I was nine, my first French teacher called my parents demanding to know when I’d been exposed to French – I could not only differentiate particular sounds, I could repeat them with a perfect French accent (the teacher was from Paris).  She would not take “she’s never been exposed to French” for an answer because “everyone knows that past infancy, people can’t differentiate the sounds of languages they haven’t been exposed to early, let alone repeat them perfectly”.  I’ve heard of this with other autistic people too.
This is one reason that I don’t always like the fact that certain parts of autistic development are called regressions, whereas certain parts of nonautistic development are not.  This is clearly a loss of a language skill – and the fact that it goes along with learning other language skills doesn’t make it any less of a loss (often autistic “regressions” happen when we’re learning something too, nobody ever pays attention to that).  Yet you will never hear a scientist calling this a language regression.  Ever.  Even though that’s what it is, if you apply the same standards to nonautistic children as autistic ones.

