Talk:Language learning strategies

Wiki Education Foundation-supported course assignment[edit]

This article was the subject of a Wiki Education Foundation-supported course assignment, between 15 January 2021 and 14 April 2021. Further details are available on the course page. Student editor(s): Neast024.

Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 23:54, 17 January 2022 (UTC)[reply]

A neurological approach[edit]

Until now, most language learning strategies have focused on psychological techniques such as rote memorization and repetition. There is nothing wrong with these approaches, but by themselves they do not draw on any understanding of the inner workings of the brain. Such an understanding is not needed to benefit from repetition; people centuries ago took advantage of repetition without it. I am about to introduce a neurological approach that may hold promise for improving listening comprehension.

One of the most difficult aspects of learning a new language is listening comprehension. People generally store their native language on the left side of the brain, while languages learned as an adult are generally stored on the right. Someone who suffers an injury to the left side of the brain, say in a car accident, may lose the ability to speak their native language and be left with whatever language they learned as an adult (though there is no evidence that brain injury improves the second language). People who learn a new language after the age of about eleven also tend to retain an accent.

Ninety to ninety-five percent of right-handers process language on the left side of the brain (the exact figure varies with gender). Information heard in the right ear is generally processed in the left hemisphere. The ear you usually hold the phone to when speaking your native language is often the ear associated with the side of the brain that processes that language. If you can work out which ear sends information to the side of the brain associated with your native language and listen to the target foreign language in that ear only, you may improve listening comprehension by having the information processed and stored on the appropriate side of the brain.

A better suggestion I found is to listen to a passage in the foreign language in one ear, then in the other, and then in both. Over time this does seem to improve listening comprehension and retention of the true sounds in a way that listening with both ears at once does not. You will find that some words sound clearer in one ear and other words sound clearer in the other. Exercising each ear individually until all the words are clear in each, and separately listening with both ears until the words are also clear that way, seems to improve overall listening comprehension.

The order does not particularly matter; it is probably best to mix it up. Sometimes start with the right ear and continue with the left, at other times start with the left and continue with the right, and at other times start with both ears before moving on to each ear individually. This trains the brain to perceive the words correctly no matter which side they come from in a real-life situation, since reality is less predictable and does not occur under controlled conditions. Even if you cannot understand every word in the short run, listening to speech in each ear individually and in both ears together (as separate exercises) should improve long-term listening comprehension by ensuring that both sides of the brain are exercised to work independently and together. Repetition has not gone anywhere; we are simply making sure that both sides of the brain receive the repetition needed to learn the material.
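For anyone who wants to try this drill with recorded audio, here is a minimal sketch of how the three listening variants (left ear only, right ear only, both ears) could be generated and shuffled. It assumes a mono clip named phrase.wav (a hypothetical file name) and the third-party Python packages numpy and soundfile; only the drill itself comes from the text above, the rest is illustration.

    # Minimal sketch of the one-ear / other-ear / both-ears listening drill.
    # Assumes a mono WAV clip named "phrase.wav" (hypothetical) and the
    # third-party numpy and soundfile packages.
    import random

    import numpy as np
    import soundfile as sf

    mono, rate = sf.read("phrase.wav")      # mono samples, shape (n,)
    if mono.ndim > 1:                       # fold an accidental stereo file down to mono
        mono = mono.mean(axis=1)

    silence = np.zeros_like(mono)

    variants = {
        "left_only":  np.column_stack([mono, silence]),   # phrase in the left channel only
        "right_only": np.column_stack([silence, mono]),   # phrase in the right channel only
        "both_ears":  np.column_stack([mono, mono]),      # phrase in both channels
    }

    # Mix up the presentation order, as suggested above, so that no single
    # ear always hears the phrase first.
    order = list(variants)
    random.shuffle(order)

    for name in order:
        sf.write(f"drill_{name}.wav", variants[name], rate)
        print(f"wrote drill_{name}.wav")

Played back over headphones, the three files give the one-ear and two-ear presentations in a fresh random order each time the script is run.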


It is unclear why languages learned as an adult are stored in a different part of the brain than languages learned as a child. One possibility is that the sounds heard by one ear are processed out of sync with the same sounds heard by the other, perhaps partly because they arrive at each ear at slightly different times. This might cause the signals in the brain to collide out of phase, so that they mesh wrongly and partly cancel out. It might also partly explain why listening is harder when the speaker talks fast: the effects of desynchronization become more pronounced. There has been limited research showing that the brain can distinguish the difference in sound arrival time from one ear to the other and use it to its advantage (e.g. to determine the location of a sound, or to recover the true sounds arriving from one direction when they are interfered with by sounds from another direction), though that research has mostly found effects for low-pitched sounds. The brain's ability to exploit and properly process these arrival-time differences might work better at a younger age, when the ears are more finely tuned, and may diminish with age.
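For a sense of scale, the arrival-time difference mentioned above can be roughly estimated with the common far-field approximation ITD ≈ (head width / speed of sound) × sin(angle from straight ahead). The head width and speed of sound below are generic textbook values, not figures from any study cited here.

    # Back-of-the-envelope interaural time difference (ITD) estimate,
    # using the simple far-field approximation ITD = (d / c) * sin(angle).
    # 0.17 m and 343 m/s are typical textbook values, not data from this page.
    import math

    HEAD_WIDTH_M = 0.17
    SPEED_OF_SOUND_M_S = 343.0

    for angle_deg in (15, 45, 90):
        itd_s = (HEAD_WIDTH_M / SPEED_OF_SOUND_M_S) * math.sin(math.radians(angle_deg))
        print(f"{angle_deg:>3} degrees off-centre -> {itd_s * 1000:.2f} ms")

Even for a sound coming from directly to one side, the difference works out to only about half a millisecond.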

With your native language the brain more or less fills in the 'unfocused' sounds with what it already knows, but with a new language it cannot. If you listen with one ear at a time this unfocused effect seems to go away and the sound becomes 'focused'. With headphones, since both signals reach the ears at exactly the same time, the problem may be less pronounced. But if the delay originates in the brain, the ears, or the ear-brain pathway, then perhaps an artificial delay could be introduced into one headphone channel relative to the other to bring the signals back into sync by the time they reach the brain. The right amount of delay may vary from person to person, much as prescription glasses do.
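If someone wanted to experiment with that artificial-delay idea, a sketch along these lines would shift one headphone channel by a small, adjustable amount. The 0.3 ms figure is an arbitrary placeholder rather than a recommendation, the stereo file name is hypothetical, and numpy and soundfile are assumed as in the earlier sketch.

    # Sketch of the "artificial delay in one ear" idea: delay the right
    # channel of a stereo clip by a small, user-tunable amount.
    import numpy as np
    import soundfile as sf

    DELAY_MS = 0.3                               # arbitrary placeholder, tune per listener

    stereo, rate = sf.read("phrase_stereo.wav")  # assumed stereo file, shape (n, 2)
    delay_samples = int(round(rate * DELAY_MS / 1000.0))

    left = stereo[:, 0]
    # Delay the right channel by padding its start and trimming its end,
    # so both channels stay the same length.
    right = np.concatenate([np.zeros(delay_samples), stereo[:, 1]])[: len(left)]

    sf.write("phrase_delayed.wav", np.column_stack([left, right]), rate)

The sign and size of the delay would be exactly the kind of per-person setting the comparison with prescription glasses suggests.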

In retrospect, any latency delays may be associated with the brain's attempt to toggle between the two hemispheres in search of relevant information. Another possible explanation is that the left side of the brain becomes more efficient in adulthood, so the brain allocates less attention to it, since less attention is needed for it to process language. When a new language is introduced, more attention is needed to process it. Because the brain is no longer used to allocating so much attention to the left side, the right side attempts to process the new language instead. With most of the attention going to the right side, the left side does not get the chance to process much with the little bandwidth allocated to it; normally it does not need much bandwidth, and the little it gets is sufficient for one's native language. Or perhaps the right hemisphere in adults is more developed than in children, and a more developed right hemisphere demands more bandwidth, depriving the left hemisphere of bandwidth and interfering with its ability to process new information. None of this diminishes the left hemisphere's potential to learn and process information if it is given the bandwidth it needs.

By exercising and stimulating the left hemisphere separately and introducing the new language more directly to it (with less interference from the right side), you encourage it to process more information. As it processes new information it becomes more efficient, using less bandwidth, and the brain also becomes used to allocating more bandwidth to the left side. The next time material from the new language is introduced, the left side will be able to process more of it, even with less bandwidth allocated to it, and the brain will be more willing to allocate bandwidth to it. Since the left hemisphere is better at processing language than the right, this might be the better long-term strategy. It may also suggest that left-brain activity, such as math, could help stimulate and exercise the left hemisphere and so help it learn new languages. Writing by hand, as opposed to typing, may also be useful, since writing with the right hand stimulates the left hemisphere (the left hemisphere controls the right hand).

See also: "Writing by Hand Helps Train the Brain"

http://science.slashdot.org/story/10/10/06/0313224/writing-by-hand-helps-train-the-brain

http://online.wsj.com/article/SB10001424052748704631504575531932754922518.html?mod=WSJ_LifeStyle_LeadStoryNA

Another relevant article is the following; read the section "Advantages of brain lateralization":

http://en.wikipedia.org/wiki/Lateralization_of_brain_function

That section gives supporting evidence suggesting that information entering one eye is processed in a specific side of the brain: if the eye associated with that side of the brain is used, the task can be performed, but if it is not, the task may not be performed as well. Encouraging language to be processed in the correct side of the brain as an adult, without interference from the other side, could help improve language learning.

Another possible explanation for why language learning is more difficult for adults is that adult brains are more interconnected than children's (i.e. adults are less lateralized), which may explain why information from one ear interferes with information from the other. More research into the development of the corpus callosum may be needed to determine this.


"Our findings indicate that language lateralization to the dominant hemisphere increases between the ages 5 and 20 years, plateaus between 20 and 25 years, and slowly decreases between 25 and 70 years."

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1464420/

If the brains of adults are less lateralized than those of children, this could explain why using both ears at once causes interference in adults but not in children. (I suspected the difference in lateralization before reading the above study, searched for it, and the results concur.)

It may also help to close or cover the left eye, or close both eyes, while listening to the target language. By reducing stimuli to the right brain you reduce its (relative) activity, and hence the extent to which it interferes with the left brain's learning process.


The above ideas are not subject to patent or copyright protection, and any program utilizing any of the above information cannot be subject to intellectual property claims. Anyone may create a language learning tool that uses these ideas (e.g. by playing material in one ear, then the other, then both, or by synchronizing the sounds, or by using any of the other suggestions above), but no one may use intellectual property law to prevent others from doing the same.

— Preceding unsigned comment added by 99.109.144.159 (talk) 17:22, 25 July 2013 (UTC)[reply]

Very careful with extrapolations and inferences.[edit]

Science is based on inferential statistics above all. Making generalizations from results obtained on non-representative samples is dangerous and can cause problems for people's health. Is that happening in neuroscience? We must be cautious. — Preceding unsigned comment added by Ruben Zelaya-Vargas (talkcontribs) 07:05, 29 September 2019 (UTC)[reply]