Why we have been teaching listening skills wrongly for decades

Dr Gianfranco Conti
12th April 2017 at 11:15

Subject Genius, Dr Gianfranco Conti, Why we have been teaching listening skills wrongly for decades

Introduction

Listening is often described as the ‘Cinderella skill’, as it is by far the area of language instruction that language teachers neglect the most. The reasons for this neglect are manifold. First and foremost, as much research has shown, listening is the skill MFL teachers understand the least and consequently do not feel confident teaching. Add to this the fact that the listening materials in course-books are often uninspiring, poorly designed and under-exploited. To cap it all, possibly as a result of all of the above, many MFL students fail at listening tasks, with serious consequences for their self-efficacy as listeners and their motivation in general.

 

Listening – a pivotal skill

Yet listening is by far the most important skill-set, especially if we are preparing our students for the real world, where 45% of communication occurs through the aural medium and only 25% through reading and writing. Moreover, the human brain is naturally wired to acquire languages through listening, as it is through our caregivers’ aural input that we acquired our mother tongue in the first place. Not to mention the fact that in this day and age poor listening skills hinder access to an enormous wealth of freely available audio-visual online information in the target culture – a huge missed opportunity!

 

Why traditional listening instruction is ineffective

The main reason why many students fail at listening is that MFL teachers do not actually teach aural skills. In fact, as I found out through several surveys I conducted over the years, although MFL teachers claim to be teaching listening skills, when asked to make a list of the aural skills they impart in their lessons they are unable to provide an answer. It is difficult to teach what you do not fully understand.

I, myself, started to understand listening very late in my career, after coming across work by Richards and Field a few years back. Up until then, like many other colleagues, I had been teaching listening by playing an audio-track and quizzing students on its content through true-or-false or who-has-done-what questions. In other words: I had been teaching through testing, not through modelling.

 

From quizzing to modelling

Yet listening instruction ought to be about modelling: firstly, modelling language through the aural medium, as caregivers do with their children in first language acquisition; secondly, effectively modelling listening strategies and skills. But what skills?

The answer to this question is crucial for effective instruction to take place, as teaching listening should be about training students in the execution of each of the major skills involved in comprehension, just as football coaches train their players in each discrete skill-set (passing, dribbling, shooting, etc.), building up their craft gradually but steadily until they are ready to play a full match. What a lot of MFL teachers do, on the other hand, is the opposite of what football coaches do: they start straight from the full-match bit, neglecting the all-important skill-training part.


Listening comprehension unpacked

Since my teaching approach is rooted in Skill-theory accounts of language acquisition, understanding the skills L2 students need to master in order to comprehend aural input effectively has made all the difference. It has enhanced the quality of my teaching not merely in the area of listening instruction but across all four skills, as I firmly believe that successful listening skills prime successful acquisition – a finding confirmed by Feyten’s (1991) study.

As Skill-theory posits, instruction in any skill-set must proceed bottom-up, so to speak, starting from the automatization of lower-order skills. The premise of this approach is that you need to be able to execute lower-order skills automatically first, so as to free up the cognitive space the brain needs to focus on executing higher-order skills. To use car-driving as an analogy, one can only effectively focus on the road after one has fully automatized lower-order operations such as changing gear, braking, indicating and accelerating.

 

John Field’s five-phase listening comprehension model

In my opinion, the best and most teacher-friendly skill-based account of listening comprehension to date was provided by Field (2014), who identifies the following phases as crucial to the effective processing of aural input:

A decoding phase – in which the input is ‘translated’ into the sounds of the language

A lexical search phase – in which the listener searches long-term memory for words which match or nearly match these sounds

A parsing phase – in which the listener must recognise a grammar pattern in a string of words and fit each word to the linguistic context surrounding it

A meaning-building phase – in which, having ‘broken up’ the speech flow, identified the words heard and worked out how they fit together grammatically, the listener finally makes sense of the sentence

A discourse-construction phase – in which the understanding of each unit of meaning (e.g. a sentence) is connected to the larger context of the narrative. In this phase, one’s background knowledge helps enhance comprehension.
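For readers who like to think procedurally, the five phases above can be caricatured as a pipeline in which each stage consumes the previous stage’s output. The Python sketch below is purely illustrative: the phase names come from Field, but the toy data and function bodies are my own invention and stand in for cognitive processes, not real algorithms.

```python
# A toy caricature of Field's five phases of listening comprehension.
# Each function stands in for a cognitive process, not a real algorithm.

LEXICON = {"komɑ̃": "comment", "sava": "ça va"}  # sound shapes -> known words

def decode(signal):
    # Decoding: translate the raw input into language sounds (trivially split here)
    return signal.split("|")

def lexical_search(sounds):
    # Lexical search: match sound shapes against words held in long-term memory
    return [LEXICON.get(s, "?") for s in sounds]

def parse(words):
    # Parsing: fit the words into a recognised grammar pattern
    return {"pattern": "question", "words": words}

def build_meaning(parsed):
    # Meaning-building: derive the sense of the utterance from the parse
    return "greeting enquiry" if parsed["pattern"] == "question" else "statement"

def construct_discourse(meaning, context):
    # Discourse construction: link the unit of meaning to the wider narrative
    return f"{meaning} (in context: {context})"

result = construct_discourse(
    build_meaning(parse(lexical_search(decode("komɑ̃|sava")))),
    "meeting a friend",
)
print(result)  # greeting enquiry (in context: meeting a friend)
```

The point of the pipeline shape is the one made below: a failure at a lower stage (decoding, lexical search) starves every stage above it of input.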

Field’s account provides MFL teachers with a useful reference framework for curriculum design, as it suggests the direction that listening instruction should take.

In my approach, which I call L.A.M. (Listening-As-Modelling), instruction with novice-to-intermediate learners concerns itself with all five levels of aural processing identified by Field, starting from extensive practice in decoding skills and lexical search and then gradually moving on to the higher-order skills.

 

The importance of decoding skills

Decoding skills are of paramount importance, as without the ability to identify sounds students cannot identify words. Segmentation – the ability to break down the stream of sounds they hear into meaningful units (lexical items) – is the most crucial micro-skill involved in aural comprehension.

Decoding skills are rarely taught systematically, and when they are taught, they are often imparted through slideshows of phonics or through kinaesthetic methods which anchor sounds to specific gestures but not to meaningful input or output. They are never integrated with the other four skills – a must for effective decoding-skills acquisition – and the focus is more on production than reception: students are trained to imitate sounds rather than to discriminate them from the other sounds they compete with during aural comprehension. On TES I have posted a wide range of L.A.M. activities that focus on decoding skills (e.g. these).

 

Lexical search skills

MFL teachers typically teach vocabulary through worksheets, apps or internet-based games. Nothing wrong with that, except that students rarely get to process vocabulary aurally. Why not do odd-one-outs, sorting, gap-fills and all the other vocabulary-building tasks aurally rather than in writing? They require less preparation and no photocopying. Example (sorting task): the teacher utters a set of words (e.g. means of transport) and the students sort them on their mini whiteboards into two categories (two-wheel vehicles and four-wheel vehicles). I usually do two or three aural vocabulary tasks in every lesson.

 

Parsing skills and the two-second ‘now-or-never’ bottleneck

Automatization of the above-mentioned five skill-sets is necessary because the human brain’s phonological store is very limited: only around two seconds. To make things worse, the listener is usually only 0.25 seconds behind the speaker. This means that the listener has an extremely small amount of time in which to execute all of the above skills. Christiansen and Chater call this crucial two-second time-window the ‘now-or-never bottleneck’. During this time-window, the listener makes a first-pass hypothesis based on a mechanism called ‘syntactic priming’: on hearing specific words, the listener’s brain automatically predicts what sentence is likely to come next, based on previous linguistic experiences associated with those words and on other contextual cues. Example: on meeting someone for the first time, on hearing the person say ‘Comment…’, your brain will activate ‘…ça va ?’ or ‘…vous appelez-vous ?’ rather than ‘…serait ta femme idéale ?’. In other words, the predictions our brain makes are statistical: they reflect how frequently we have processed a specific sentence in a specific context.
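The statistical character of this first-pass prediction can be illustrated in a few lines of Python. This is only a sketch of the idea – a frequency table built from past ‘linguistic experience’ which, given an opening chunk, returns the continuation heard most often; the tiny corpus is invented for the example.

```python
from collections import Counter, defaultdict

# Invented 'linguistic experience': opening chunks paired with heard continuations
experience = [
    ("Comment", "ça va ?"),
    ("Comment", "ça va ?"),
    ("Comment", "vous appelez-vous ?"),
    ("Comment", "serait ta femme idéale ?"),
]

# Build frequency counts of the continuations that followed each opening
priming = defaultdict(Counter)
for opening, continuation in experience:
    priming[opening][continuation] += 1

def predict(opening):
    # First-pass hypothesis: the continuation most frequently heard after this opening
    return priming[opening].most_common(1)[0][0]

print(predict("Comment"))  # ça va ?
```

Because ‘ça va ?’ has been heard twice and the other continuations only once, it wins the prediction – exactly the frequency effect described above.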


Chunking and pattern recognition

This system is extremely efficient, because it relies on syntactic chunks stored in the brain which are deployed very rapidly once one hears key words. By applying whole ready-made chunks containing several words, the brain does not have to process every single word one by one, a much lengthier and more cumbersome process. This is analogous to what Artificial Intelligence voice-recognition systems such as Siri do: they match what they hear to patterns stored in their memory.
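A greedy longest-match over a stored chunk inventory gives a feel for why this is efficient: each successful chunk match replaces several word-by-word lookups. The chunk inventory and the matching strategy below are my own illustrative assumptions, not a model of what the brain (or Siri) actually does.

```python
# Chunks 'stored in memory'; the matcher tries the longest ones first
CHUNKS = ["comment ça va", "je voudrais", "ça va"]

def match_chunks(words):
    # Greedily consume the longest known chunk at each position;
    # fall back to single-word processing when no chunk matches.
    chunks_sorted = sorted(CHUNKS, key=lambda c: -len(c.split()))
    units, i = [], 0
    while i < len(words):
        for chunk in chunks_sorted:
            parts = chunk.split()
            if words[i:i + len(parts)] == parts:
                units.append(chunk)
                i += len(parts)
                break
        else:
            units.append(words[i])
            i += 1
    return units

heard = "comment ça va ce matin".split()
print(match_chunks(heard))  # ['comment ça va', 'ce', 'matin']
```

Five heard words are processed as three units, because ‘comment ça va’ is retrieved as a single ready-made chunk – the saving grows with every chunk the listener has stored.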

 

Teaching patterns through listening

The two phenomena of chunking and priming have huge implications for listening instruction, especially with regard to the parsing phase of aural comprehension. The first implication is that we need to teach words not in isolation, as many teachers do, but in high-frequency chunks of language; secondly, we need to recycle these chunks as often, and across as many contexts, as possible, rather than keeping them confined to a single unit of work, as course-books typically do; thirdly, we need to train students in the art of identifying and analysing patterns aurally, not simply through the written medium, as is common practice. In one of my most widely read Language Gym posts, ‘Teaching grammar through listening’, I provide many examples of how this can be done.

Two of my favourite activities for focusing my students on patterns through listening are sentence builders and sentence puzzles. I use sentence builders (figure 1 below) in every single lesson, and my students report that they find them extremely useful; I use them to teach grammar or, rather, to model language patterns. I make up sentences which the students translate on mini whiteboards as I utter them at a moderate pace (as I am modelling, not testing). After several examples, it is their turn to make up sentences: first for the class, then for a partner.

 

Fig. 1 – Sentence builder


 

After an oral-interaction task such as ‘Find someone who...’, I focus them on patterns again through sentence puzzles (see figure 2 below).

Fig. 2 – Sentence puzzle


 

Conclusion

In conclusion, for years listening instruction has concerned itself with testing MFL students rather than training them in the effective execution of the five skill-sets crucial to comprehension. Listening comprehension tasks are tests through and through. They do have their place in the MFL classroom – as plenaries, at the end of a sequence of listening tasks which has modelled the sounds, lexis and morphological and syntactic patterns that the to-be-administered comprehension task contains.

MFL teachers need to reconsider the way they teach listening. First of all, they need to increase drastically their students’ exposure to aural input, whether through L.A.M. tasks or oral-interaction tasks. Secondly, they need to teach decoding skills extensively from the very early stages of instruction. Thirdly, they must teach vocabulary aurally, as much as possible, in high-frequency chunks rather than in the isolated words that typically appear in word lists. Finally, and most importantly, we must train our students in grammatical-pattern recognition and analysis through the aural medium, not only to facilitate the development of listening skills but, above all, because pattern recognition promotes acquisition.

Steve Smith and I are currently working on a book entitled ‘Breaking the Sound Barrier’, in which we propose an integrated framework for listening instruction based on the principles laid out in this article and suggest a wide range of listening-as-modelling activities. In the meantime, if you want to know more about my L.A.M. approach, do read the related articles on my blog.

 

 

Gianfranco Conti, PhD, is a French and Spanish teacher at Garden International School, co-author of ‘The Language Teacher Toolkit’, and founder and owner of www.language-gym.com.