THE EUROCALL REVIEW

 

Issue number 8, November 2005

Editor: Ana Gimeno

ISSN: 1695-2618


Table of Contents
ReCALL Journal
Projects: LIPSTIC: a Limited Intelligence Parser Seeking Typical Interference Constructions
Recommended website:
English as a Foreign/Second Language in Secondary Education
Book review: Designing Authenticity into Language Learning Materials
Book review: Integrating Computers in Teaching
Events Calendar

ReCALL Journal

The forthcoming issue of ReCALL (Vol. 17, Part 2) will be distributed to EUROCALL members in November/December 2005. Please send articles, software reviews, details of relevant events or other items of interest for future issues to June Thompson, Editor of ReCALL (d.j.thompson@hull.ac.uk).

The journal contents are listed at:

All articles are considered by an international panel of referees. Notes for contributors can be found at:

http://www.upv.es/eurocall/contrib.htm


Projects

LIPSTIC
A Limited Intelligence Parser Seeking Typical Interference Constructions

Introduction
Several people have asked me why I am embarking on a major project for trapping errors in learners' free texts when there is a perfectly good Grammar Checker in Microsoft Word. Unfortunately, the Word Grammar Checker is ill-suited to learner English. I put the following text through it and it failed completely.

My friends who last time saw you wants that you come soon. Do you can get very early next week the bus because we must you see? I not see you for very long time.

In contrast, the errors that LIPSTIC will pick up and point out here are the following:
1: You cannot put adverbials of time like "last time" between a subject ("who") and the following verb.
2: The expression "my friends" is plural, but "wants" is 3rd singular.
3: After "want" you cannot have "that" and a finite clause.
4: You cannot use "can" after "do".
5: The expression "the bus" should come next to "get".
6: "You" should come after "see".
7: An auxiliary verb is needed in "I not see".

It probably will not be able to tell me that I must use the perfect auxiliary for "I not see you". However, the fact that there is "for" followed by a recognisable time expression means that it could suggest that the perfect may be required here. It definitely will not be able to suggest that "would like" would be more appropriate than "must".

The Word Grammar Checker accepted this text except for one thing. It told me that if I use "bus", then later on I should use "sees", or alternatively, change "bus" to "buses". It has a bad reputation, and not only for learner English, as you will see if you check this website: http://faculty.washington.edu/sandeep/check/

In recent years there have been papers at EUROCALL conferences on parsing, error trapping, mistake catching, or whatever you would like to call it. Many of them were concerned with the major task of parsing English sentences, and then adding procedures which would be able to continue the parse even when a sentence violated the grammatical rules of English, with the ultimate aim of giving the learner some kind of feedback. It is certainly not my wish to criticise these various excellent projects, but I have not yet seen one which was going to be usable for my goals. I want to have a tool which will look at a freely written text by a learner, in the first instance a German learner. Taking into account interference errors based on other languages and the large number of typical cross-linguistic errors, it would be possible to adapt it later on to learners from other language backgrounds. It will look at the text sentence by sentence, check it and see if it can say anything helpful about mistakes in the grammar. This tool would be usable within a web-based or CD-based language teaching program, or could also function as an Add-In for Microsoft Word.

My disappointment with projects I have seen to date is that they seemed to be nowhere near ready for real use. And, in all humility, I must say that the same goes for my own contribution. The reason I wish to discuss it now is that I need feedback and ideas from other researchers who are interested in this topic. So my aim here is to describe my program, set it in the context of its future use, explain a little about how I intend to approach various issues, and show how the provisional User Interface works.

My belief is that for what I want to do, short cuts rather than full-blown parsing can achieve results. I would like to draw your attention to the acronym in the title of my article, LIPSTIC, which stands for "Limited Intelligence Parser Seeking Typical Interference Constructions". The acronym is there to pull it all together, but the full title makes a number of claims which describe how I have been going about this project. It has "limited intelligence", by which I mean that it does not attempt to be successful in parsing all sentences, in finding all the errors in every sentence, or even in giving any feedback at all on every sentence. But it will undoubtedly identify quite a number of grammatical errors in a freely written text. And those who use it will be aware that it identifies some errors, but not all. Clearly it will not be able to deal with semantic oddities, and its feedback on the use of tense will be limited to an attempt to see whether past or perfect has been correctly selected.

It does, of necessity, have to partially parse a sentence, to identify subjects and verb complements, and much else. Otherwise it could do nothing. But once it has found major sentence elements (assuming that it is usually successful in that), it then abandons the elusive complete parse, and starts searching for typical interference errors. My aim here is to give some insight into how it deals with text pre-editing, lexical assignment, preliminary reduction of lexical ambiguity, further reduction of ambiguity, template application and appropriate feedback levels.

Text pre-editing
The first problem that one encounters is that learners' texts will not be the kinds of texts that we would like to have as input to our procedures. For example, one may hope to identify a sentence by a sentence-final punctuation mark. But a typical text will often begin with a title, or, if it is a letter, an address with telephone numbers. Learners may punctuate wrongly, leaving a space between a word and a punctuation mark, use inverted commas carelessly, or use strange signs or characters which our search mechanisms cannot cope with. Well, this is simple but necessary housekeeping. In an example text, the following sentence is encountered by the Mistake Catcher.
The Hands and Labels are interesting , very interesting!

And then the following messages are delivered in turn, after the User has made the necessary correction:

  1. Surely the first word of the sentence should have a capital letter?
  2. You have a space before a punctuation mark. Please close "," up to the word before it.
  3. There are two words in the sentence (not the first) beginning with a capital letter. Are they names? If NO, please make the capitals into small letters.

Learner texts are full of such surface level irregularities, and these are just a few simple examples of the kind of pre-editing which is undertaken. The first paragraph, which appears in the topmost window of the tool below, contains odd symbols, wrongly placed inverted commas, numerals joined up to words, etc. These surface phenomena are worked through for each sentence before the parsing and real grammatical mistake catching takes place.

An explanation of this screenshot would perhaps be helpful. The whole text which has been written by the User appears in the upper window. The sentence currently being processed appears below it in red. Feedback messages appear in the window below that, as well as the operator button. In the first shaded box below the feedback message, the corrected text appears. Note that the title "Example Text" is copied in automatically from the topmost box because it is paragraph-like, but without punctuation, and is assumed to be a title, part of an address in a letter, or some other such thing. This is an important facility when processing the typical texts which learners are likely to produce. Finally, at the very bottom there is a table with numbers, which is there only as feedback for the developer, and can be turned off by clicking the relevant box at the bottom. This interface is only for development purposes, and does not represent what the final interface will be like.
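To make this housekeeping concrete, here is a minimal sketch in Python of the kind of surface checks illustrated above. The function name, the rules and the message wording are my own illustration of the idea, not LIPSTIC's actual code.

    import re

    def pre_edit_messages(sentence):
        """Surface-level checks of the kind described above: a minimal sketch.
        The real tool works interactively, correcting one problem at a time."""
        messages = []
        words = sentence.split()
        # Check 1: the first word of a sentence should be capitalised.
        if words and words[0][0].islower():
            messages.append("Surely the first word of the sentence should "
                            "have a capital letter?")
        # Check 2: no space is allowed before a punctuation mark.
        for match in re.finditer(r"\s([,.;:!?])", sentence):
            messages.append('You have a space before a punctuation mark. Please '
                            'close "%s" up to the word before it.' % match.group(1))
        # Check 3: capitalised words after the first may not be names
        # (a typical slip for German learners, who capitalise all nouns).
        if any(w[0].isupper() for w in words[1:]):
            messages.append("There are words in the sentence (not the first) "
                            "beginning with a capital letter. Are they names?")
        return messages

    for message in pre_edit_messages("The Hands and Labels are interesting , very interesting!"):
        print(message)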

Lexical Assignment
One of the reasons that my immediate goal is to provide this tool for intermediate learners of English is that I want to manage with a dictionary of about five thousand words. Apart from the huge task of relatively standard grammatical coding for these words, extensive attention has to be paid to the coding of verbs with respect to the complements that they can have, or, in the terminology of the Oxford Advanced Learner's Dictionary (OALD), the verb patterns that can be associated with each of these verbs.

Another really important reason for restricting LIPSTIC in the first instance to intermediate learners is that the complexity of the sentences that they write is fairly limited. Although I am building LIPSTIC so that it will be able to deal with higher levels of complexity, e.g. multiply embedded relative clauses, I don't think I have to worry about more than one level of embedding in the first instance.

Obviously, I have a specific goal in mind, to deal with learner English, and to this end I have divided the dictionary up in two, or even three, ways. The first dictionary contains all parts of speech, but excludes all the base regular forms of nouns, verbs and adjectives, and it looks for exact matches in the sentence under consideration. Words which it does not find, and also words which are irregular forms (like "put" as past tense) where the form is also a regular (present or infinitive) form, are put into a new search list for a pass through the second dictionary. The second dictionary only contains the base form of each word, so we do not have lots of multiple entries. The interaction of these two dictionaries makes for a very economical dictionary search.

Common misspellings or common error forms can also be included in the first dictionary search, so that more informative feedback can be given for such errors than would normally be provided by a spelling checker. So, for the lexeme "put", the following forms occur in the first dictionary, which searches for exact matches:

put#98 (Participle)
put#99 (Past Tense)
putted#100 (predicted learner error)

If "put" is encountered in the sentence being processed, forms 98 and 99 are entered into the active parse, but a code attached to these entries indicates that the dictionary search for instances of the lexeme "put" should not yet be considered complete. So when the program proceeds to the second dictionary search, it will encounter another instance of "put":

put#33 (base or infinitive form)

If any of the following regular forms are encountered in a sentence, they will be attached to the lexeme "put", together with the appropriate coding for their grammatical characteristics: "put", "puts", "putting". If the form "put" is found in this second lexical pass, it will be a third lexical entry for this word, because "put" will already have been entered as being potentially a past tense form or a past participle when the first dictionary was consulted.
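To make the two-pass search concrete, the following Python sketch shows how the two dictionaries might interact for the lexeme "put". The data structures, the "search not yet complete" flag and the simplified regular morphology are my own assumptions for illustration; the article does not specify LIPSTIC's internal format.

    # First dictionary: exact matches only (irregular forms, predicted errors).
    # The final flag marks entries for which the search should not yet be
    # considered complete, so the word also goes to the second dictionary.
    EXACT = {
        "put":    [("put#98", "Participle", True), ("put#99", "Past Tense", True)],
        "putted": [("putted#100", "predicted learner error", False)],
    }

    # Second dictionary: one base form per lexeme, so no multiple entries.
    BASE = {"put": ("put#33", "base or infinitive form")}

    def regular_forms(base):
        # Very simplified regular morphology: final-consonant doubling,
        # as in "putting", is applied unconditionally here.
        return {base, base + "s", base + base[-1] + "ing"}

    def lookup(word):
        """Two-pass dictionary search: exact matches first, base forms second."""
        entries = list(EXACT.get(word, []))
        if not entries or any(incomplete for (_, _, incomplete) in entries):
            for base, entry in BASE.items():
                if word in regular_forms(base):
                    entries.append(entry + (False,))
        return entries

    print(lookup("put"))  # three entries: participle, past tense, base/infinitive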

Finally, a word on the Proper Names dictionary. It makes sense to have one because we have proper names for time expressions which I need to recognise, like days of the week and months of the year. For common first names, it will usually allow us to know what kind of pronouns should relate to them, although I don't think we will be able to do much with gender. In addition, any words beginning with a capital letter will be classified as Proper Names, so they don't have to be in the dictionary. I should mention that this only happens after a special warning message for German learners: we have to make sure that they have not mistakenly written a common noun with a capital letter.

Sometimes learners will misspell words in ways not predicted in the dictionary. I originally planned to have a spelling checker go over the whole text before the individual sentences were checked, but I think it makes more didactic sense to deal with such words as apparently unknown words, on a sentence-by-sentence basis, in context, and ask the learner to check whether the spelling is correct. This is above all intended to be a learning tool, and not just an automatic correcting tool. A whole series of rapid learner decisions about spelling mistakes via a spelling checker is not very helpful didactically. Another argument in favour of not using a conventional spelling checker, already mentioned above, is that in cases where we have predicted a common German-based spelling error, we can give a useful feedback message on it.

Dealing with lexical ambiguity and reducing structures
Words which seem totally unambiguous to us in context are often multiply ambiguous from the lexical point of view. Take the apparently harmless sentence:

She let me put a bandage on her cut.

Five of the words in this sentence are lexically ambiguous, with 5, 4, 3, 2 and 5 possible grammatical forms respectively. Multiplying these together, we see that there are 600 possible analyses of this sentence. Many of them will be ruled out as soon as we start trying to parse it. But if a nine-word sentence can have so many variants, we would have a huge parsing task, so for practical reasons I try to reduce the number of ambiguities at the very time that I am building my parsing grid. Thus, although "bandage" is in the dictionary as either a finite or non-finite lexical verb, the fact that it is also in the dictionary as a noun and here in this sentence is adjacent to an article allows me NOT to build structures with "bandage" as a verb, even before I start. You will see some examples of this now, and how it can already lead to useful error messages at the pre-parsing stage. And I stress that these are NOT meant to be examples of German English errors, but are simply sentences which have been thought up to check various aspects of lexical search and pre-parsing.

It looks at the sentence "The manages has managed". With the aim of reducing impossible structures, the pre-parsing analysis, which works simply by looking at contiguous words rather than at words in a structural relationship, notices that "manages", which can only be a third person singular verb, directly follows an article, and gives the following message:

1. It looks as if you have put the verb "manages" directly after an article or determiner. This can't be right. Please change it.

Preliminary analysis also deals with predicted spelling errors, so when it encounters the word "refering" in "Is the man refering to the hand?", it points out:

2: When the second syllable is stressed, as in "refer", you have to double the final 'r' before you add the ending. Please correct it.

And its examination of contiguous words leads it to comment on "James was breathing and still does breathes" as follows:

3: It looks as if you have put a finite verb, "breathes", after an auxiliary or helping verb, "does". This can't be right. Please change it.

I could go on giving similar instances from the Example Text, but the point should be clear. It is essential to limit as much as possible the number of potential sentences that the Mistake Catcher will have to parse, so we have a device here which eliminates a large number of unacceptable variants even before we build up the parsing grid.
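As a rough illustration of how this pre-parsing reduction might work, the Python sketch below prunes candidate analyses by checking adjacent pairs of grammatical tags. The tag inventory and the forbidden pairs are invented for the purpose; LIPSTIC's actual rules are not published.

    from itertools import product

    # Candidate grammatical forms per word (invented inventory).
    CANDIDATES = {
        "the":     ["ARTICLE"],
        "bandage": ["NOUN", "VERB_FINITE", "VERB_NONFINITE"],
        "manages": ["VERB_3SG"],
    }

    # Pairs of tags that can never stand next to each other in English.
    FORBIDDEN = {("ARTICLE", "VERB_FINITE"), ("ARTICLE", "VERB_NONFINITE"),
                 ("ARTICLE", "VERB_3SG")}

    def prune(words):
        """Keep only tag sequences whose adjacent pairs are all permitted."""
        survivors = []
        for tags in product(*(CANDIDATES[w] for w in words)):
            if all(pair not in FORBIDDEN for pair in zip(tags, tags[1:])):
                survivors.append(tags)
        return survivors

    print(prune(["the", "bandage"]))  # [('ARTICLE', 'NOUN')]: verb readings removed
    print(prune(["the", "manages"]))  # []: nothing survives, so feedback is due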

How to deal with remaining ambiguity
Let us call what I have just discussed with reference to "bandage", and demonstrated with the other examples, 'adjacency parsing'. It is quite reliable, and its purpose is simply to get rid of obviously unnecessary structures. However, I calculate that even using that technique, I could still have up to 100 possible analyses of a fairly simple sentence. How do I deal with that? Well, it is both simple and difficult.

There is a whole series of parsing and error trapping procedures to be applied one after the other to each version of the sentence. If the first procedure gets stuck on some of them, but not all of them, those that it gets stuck on are deleted. We then move on to the next procedure, applying it to all the remaining alternatives.

What happens if it gets stuck on all of them? Well, it will give a feedback message on the first (or any one) of them. However, it may be necessary to decide between possible feedback messages, i.e. one may be too severe for the immediate environment, e.g. "You have used a noun and you need a verb here", while another is more accurate and relevant: "You have used the wrong form of the verb here". The fairly difficult task in such a situation is to select the best message. But the really important thing is that even if the best feedback message isn't chosen, LIPSTIC will have identified a point where something needs to be changed.
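The elimination cascade might be sketched in Python as follows. The checking procedures and their return convention are hypothetical; the article describes the strategy but not the implementation.

    def run_cascade(analyses, procedures):
        """Apply each checking procedure in turn to all surviving analyses.
        Analyses a procedure gets stuck on are deleted; if it gets stuck on
        every remaining analysis, a feedback message is due at that point."""
        for procedure in procedures:
            survivors = [a for a in analyses if procedure(a)]
            if not survivors:
                # Selecting the best of the competing messages is the hard
                # part; here we simply report on the first remaining analysis.
                return analyses[:1], "feedback triggered by " + procedure.__name__
            analyses = survivors
        return analyses, None

    # Hypothetical checking procedures, for illustration only.
    def agreement_ok(analysis):
        return analysis["agrees"]

    def complements_ok(analysis):
        return analysis["complement_ok"]

    analyses = [{"agrees": True, "complement_ok": False},
                {"agrees": True, "complement_ok": True}]
    print(run_cascade(analyses, [agreement_ok, complements_ok]))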

This kind of potential ambiguity amongst various errors is probably the most difficult thing that I have to deal with in the coming months.

Application of templates with examples
One of the most common learner errors which we are familiar with is inserting some kind of adverbial between the verb and the object. If the verb takes an obligatory object, the Mistake Catcher will search for a likely candidate, then examine anything which comes between the lexical verb and that object, suggesting that the object string immediately follow the verb. But it will also count the words in the object string, and if they significantly exceed in number the intervening words, it will assume that the User has correctly postponed a heavy object, and give no message. An example of this is the following sentence, where the short item intervening between the verb and the long object is "yesterday".

I phoned yesterday the boy who offered to repair my computer.
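A sketch of this heuristic in Python follows, under my own assumptions: token indices stand in for the parsed constituents, and the "significantly exceeds" test is an invented factor of two.

    def object_position_message(verb_index, object_span, tokens):
        """Flag material between a verb and its obligatory object, unless the
        object is heavy enough to have been correctly postponed."""
        start, end = object_span
        intervening = tokens[verb_index + 1:start]
        # Assumed threshold: the object must be more than twice as long as
        # the intervening string for postponement to pass without comment.
        if intervening and (end - start) <= 2 * len(intervening):
            return ('The expression "%s" should come next to the verb; "%s" '
                    'should not separate them.'
                    % (" ".join(tokens[start:end]), " ".join(intervening)))
        return None

    heavy = "I phoned yesterday the boy who offered to repair my computer".split()
    print(object_position_message(1, (3, 11), heavy))  # None: heavy object, no message

    light = "I remembered last week her birthday".split()
    print(object_position_message(1, (4, 6), light))   # message: object should follow the verb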

It should be pointed out that the correct recognition of time expressions like "last week" is absolutely essential for catching mistakes. Compare these sentences, where "last week" could be either an object or an adverbial in i, can only be an adverbial in iii and iv, and is ungrammatical on account of its position only in iv:

i: I remembered last week.
ii: I remembered her birthday.
iii: I remembered her birthday last week.
iv: * I remembered last week her birthday.

The verbal part of the sentence is, even in English, the source of many morphemic or grammatical errors, but the structure of the auxiliary phrase is extremely rigid, and will not present many difficulties. Needless to say, we have to identify subjects efficiently in order to deal with questions where one verbal element is separated from the rest by the subject, which may also contain a verbal element in a relative clause, and we have to make due allowances for intervening adverbials.

I aim to deal with floating quantifiers, because in learner English quantifiers often float into positions where this is not permitted, e.g. iii and iv below.

i: All the girls have arrived.
ii: The girls have all arrived.
iii: *The girls have arrived all.
iv: *The girls all have arrived.

This phenomenon will only occur in relation to words such as "all" or "both", and is a good example of where my "template search" will identify common errors, as the sketch below shows.
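To illustrate the template idea, here are a couple of invented surface patterns for stranded quantifiers in Python. Real templates would be stated over the parsed grid rather than over raw strings, so these regular expressions are only a sketch.

    import re

    QUANTIFIER = r"(?:all|both)"

    BAD_TEMPLATES = [
        # iii: quantifier stranded after the participle ("... have arrived all").
        re.compile(r"\b\w+ed\s+" + QUANTIFIER + r"\b", re.IGNORECASE),
        # iv: quantifier between subject and auxiliary ("The girls all have ...").
        re.compile(r"\b" + QUANTIFIER + r"\s+(?:have|has|had)\b", re.IGNORECASE),
    ]

    def floated_quantifier_errors(sentence):
        return [t.pattern for t in BAD_TEMPLATES if t.search(sentence)]

    print(floated_quantifier_errors("The girls have arrived all."))  # template iii fires
    print(floated_quantifier_errors("The girls have all arrived."))  # []: acceptable float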

Adverb position in general will be difficult to give helpful feedback on, except in some well-defined positions where any kind of adverb will be unacceptable. However, the position of frequency adverbs (e.g. "usually", "never") is extremely fixed in English. There is a limited number of them, and they are a frequent source of word order errors. Again, the template-based approach will work well here.

Finally, with respect to the identification of mistakes, an error-based "phrase matching" lexicon is planned. What I mean by this is simply a search procedure which will look for what are almost always unacceptable phrases in English, but are often written by learners on the basis of transfer from their own language. An obvious one would be the use of "on" rather than "at", as for example in "She's on the grammar school" rather than "She's at the grammar school". A good phrasal error database will provide opportunities for feedback on this kind of error.
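A phrase-matching lexicon could be as simple as the Python sketch below; the single entry and its message are invented from the example just given.

    # An error-based phrase-matching lexicon: one invented entry, drawn from
    # the transfer error discussed above.
    PHRASE_ERRORS = {
        "on the grammar school":
            'With schools, English uses "at" rather than "on": '
            '"at the grammar school".',
    }

    def phrase_feedback(sentence):
        """Return feedback for any known unacceptable phrase in the sentence."""
        lowered = sentence.lower()
        return [message for phrase, message in PHRASE_ERRORS.items()
                if phrase in lowered]

    print(phrase_feedback("She's on the grammar school."))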

I do not wish to underestimate the task of catching errors. But I hope that this gives some insight into the approach I am adopting. Finally, a word on feedback.

Pedagogical aspects
The feedback messages presented in the examples above are not intended to be evaluated at this point. They were written to make things clearer for the developer. I have nevertheless tried to write them in a user-friendly style. I plan to have at least two levels of feedback, one for those who know common grammatical terms, and one for those to whom such terminology is arcane. But the real question to conclude with is this: to what extent will such an Error Trapper help people in writing free texts?

There is always the danger that it will give them a false sense of security, because it will undoubtedly accept lots of unacceptable sentences. But it has the advantage of giving the User immediate feedback on lots of errors. It works at the press of a button, and it helps learners to learn. I do not really wish to use the "A" word, "autonomy", which has been so dominant in EUROCALL as in so many other academic groups in recent years, but perhaps I should. It makes the learner more independent, more autonomous. When learners have to write a text, they will get useful information immediately, without a teacher intervening, and will be in control of making decisions about whether to accept or reject proposals. And I think it will help them. Speaking for myself, I live and work in Germany, and would love to have such a tool for when I have to try to write in German.

James Pankhurst
Freelance software author and Editor Emeritus of Second Language Research



Recommended website

English as a Foreign/Second Language in Secondary Education
Materials and Links for Teaching and Learning

http://www.isabelperez.com

Site at a glance
Isabel's ESL site is an educational website that provides plenty of useful online materials for teachers and learners of ESL/EFL at secondary education level. It is addressed to both learners and teachers, who will find hundreds of resources here, including interactive exercises, teaching ideas and various links of interest, together with papers and other academic materials to be disseminated among teaching practitioners. The site was first set up back in 1997 by Isabel Pérez Torres, an experienced CALL researcher and practitioner. Isabel herself maintains the site and frequently updates it.

The site's main objective is to provide grammar instruction and miscellaneous linguistic practice to complement ESL learning and teaching. Although many of the activities are aimed at young learners (teenagers), the resources included are varied and flexible enough to be successfully applied to other educational contexts. Among the other main purposes of the website are language education, contextualized practice and the dissemination of information on ESL/EFL issues. The vehicular languages of the site are English and, to a lesser extent, Spanish. At first sight, the website may resemble yet another personal homepage, but in actual fact it is a very well developed educational site, i.e. a pedagogically sound resource full of interactive possibilities for teaching and learning English.

Site contact information
Isabel Pérez Torres
iperez@ugr.es
Centro de Enseñanzas Virtuales de la Universidad de Granada
Vicerrectorado de Nuevas Tecnologías
Universidad de Granada (Spain)


[Screenshot: isabelperez.com homepage]

Site description and analysis
The opening homepage acts as a site map, and displays a well organised list of sections from which the user can navigate very easily. The welcoming screen also includes some metadata about the site, such as monthly statistics, the e-mail address of the webmaster and information about recent updates. The title of each of these sections is self-explanatory. A brief summary of the sections follows:

Regarding the communicative possibilities of the website, students and teachers may take part in a number of communicative tasks, including weblogs, e-mail interaction and online surveys and questionnaires. Although the kind of communication catered for within the site is asynchronous, rather than synchronous, interaction between the user and the site is high. The number of links to other sites of interest is also substantial.

The feedback provided on the resource is also interactive, varied (both verbal and non-verbal) and, in many cases, even metalinguistic and informative (comments, explanations…), which could be considered highly motivating for language learning.

Language skills, bar listening and speaking, are directly covered by interactive practice and exercises. This apparent lack is skilfully counterbalanced by the integration of language skills which are widely present in tasks such as Webquests, Treasure Hunts and a considerable number of other Web-based activities.

The coverage of linguistic skills and components within the website includes the following:

Authenticity is generally present in language tasks, although many of them are clearly pedagogical and linguistically oriented. The coverage of linguistic topics is quite wide, including discourse, syntax, lexis and morphology, as well as information transfer tasks. On the other hand, the linguistic input (i.e. the materials available to students) is made comprehensible by means of explanations of words or concepts, attached audio files, pictures and graphics. Grammar is nearly always contextualised, and linguistic practice can be controlled, contextualised and communicative.

Motivation is sought through games, by providing a wide variety of exercises, including exploratory activities or development of web skills (in activities such as Webquests, for example). Another key educational aspect in language learning, that of language awareness, is also promoted. This and other methodological features of the site show its communicative approach to language learning and teaching quite clearly.

Several sections, tasks and characteristics within the resource can be regarded as very innovative. To give just a few examples: the task called "The Happy Verby Gang", the use of TrackStar and Treasure Hunt activities, the use of online songs in ESL teaching, some discovery web-based projects, and the publishing of students' work online.


[Screenshot: The Happy Verby Gang activity]

Technical and pedagogical summary
From a pedagogical point of view, Isabel Pérez's ESL site follows sound principles based on the communicative approach to language learning on the one hand, and on constructivist learning theories, on the other. Interaction is basically provided by JavaScript programming although the site also makes interesting and imaginative use of authoring tools such as Hot Potatoes. Variety and entertainment, plus the use of humour are among its outstanding motivational features.

From a more technological standpoint, the website is efficient, well designed and organised, and it is therefore straightforward to use. It is a robust tool that successfully integrates content and organises links. It does not, however, make use of multimedia as much as one would expect (e.g. video is lacking on the site). On the other hand, the fact that it sometimes provides different links for the same exercise does not seem to be much of a problem.

Summing up, it is a highly recommendable resource for learning and teaching English, and it is also useful for researchers. The site developer is an experienced CALL practitioner who also co-ordinates several international e-learning projects at the Virtual Learning Centre of the University of Granada, in Spain. Moreover, one of the website activities (The Happy Verby Gang) has received a pedagogical award, and, at the time of writing this review, the website project has just been awarded a prize by the European Commission and the Spanish Ministry of Education (the European Label for innovative projects in language teaching and learning).

Rafael Seiz
Universidad Politécnica de Valencia, Spain



Book review

Designing Authenticity into Language Learning Materials

Author: Freda Mishan

Intellect Books, Bristol, UK

ISBN 1-84150-080-1
Paperback 224 pages 230x174mm
Published November 2004
Price £19.95, $39.95

This book aims to be 'a comprehensive approach to exploiting authentic texts in the language classroom' (p. ix). By 'text' Mishan does not simply mean a piece of written or spoken discourse, but a 'cultural product'. She explores how seven audio, visual, graphic and printed cultural products can be used in language classrooms as a basis for authentic input and for authentic tasks, providing not only a rationale for their use based on second language acquisition (SLA) research findings and on sound pedagogical theory, but also practical classroom tasks for a range of levels, which language teachers can use as a resource as written or adapt to their own teaching/learning situations.

The book is divided into two parts. The first three chapters of Part One establish the rationale behind what Mishan calls 'the authenticity-centred approach to language learning materials design' (p. 67). Chapter 4 then takes the theory and applies it to the construction of a 'task authenticity framework' whose purpose is to guide and rationalise the design of authentic materials for language learning, where 'materials' is understood as the combination of the text used plus the activities which are based on it. The practical applications of the task authenticity framework (and of the authenticity-centred approach established in Chapters 1-4) are developed in Part Two of the book. In a series of seven chapters Mishan explores how Literature, The broadcast media, Newspapers, Advertising, Music and song, Film and Information and communications technologies (ICT) can best be exploited for language learning materials and provides sets of classroom tasks most suited to each type of text.

Part One. The theoretical grounding.
The first chapter presents the historical background to the concept of authenticity in language learning and teaching. Mishan shows us that the use of authentic materials for communicative purposes has been around for centuries and she classifies the many precedents into three broad groups which she terms 'communicative' approaches, 'materials-focused' approaches and 'humanistic' approaches.

Given the ongoing efforts to establish common standards of second language learning in multilingual environments and the search for ways of defining and assessing levels of language proficiency in terms of communicative competence (see, for example, the Common European Framework of Reference for Languages developed by the Council of Europe (2001), or the Proficiency Guidelines developed by the American Council on the Teaching of Foreign Languages (ACTFL, 1986)), Mishan's discussion of the notion of communicativeness and its relation to authenticity is a useful summary of current thinking and trends. She notes how 'our perceptions of authenticity are beginning to shift to accommodate the nascent linguistic varieties' (p. 19) emerging from the use of Internet technologies, and suggests a set of criteria (on p. 18) by which to assess the authenticity of texts for language learning materials.

The second and third chapters develop the arguments in favour of using authentic texts for language learning. Chapter Two does this by examining some of the recent SLA research evidence on how the use of such texts impacts on various affective factors (relating these to autonomous learning practices which, as Mishan has earlier noted, 'are subtly displacing' (p. 10) Communicative Language Teaching), input factors and language processing factors. The evidence from SLA research set out in Chapter 2 is then drawn on in her development of the pedagogical rationale for using authentic texts in the next chapter.

In Chapter 3, Mishan introduces three terms which she feels encapsulate the pedagogical arguments for using authentic texts in language learning: culture, currency and challenge. She points out that 'culture and language are indivisible' (p. 44), but argues that many ELT coursebooks designed for the global market are not as effective as they could be due to (amongst other reasons) cultural and geographic distance of the content. Her main point in the section on culture is that 'texts and materials should be culturally and experientially appropriate for learners' (Prodromou, 1988: 76, cited on p. 54). This is one of the advantages offered by the authenticity approach; namely, teachers can design their own materials to suit their own learners, taking into account their (the learners') background knowledge and preferred learning styles. Fortunately, she goes on to suggest ways of doing this later in the book.

Nearly twenty-five years ago, Nuttall (1982: 20) pointed out that many language teaching courses use texts 'which deal with over-familiar topics ... recounting facts that have long been part of the reader's general knowledge', and argued for using texts which 'have a message that is fresh and interesting' (ibid.). In the second and third sections of Chapter 3, Mishan echoes Nuttall's complaint, although her arguments are more developed. By currency Mishan means more than just 'up-to-date-ness' and topicality. The term also refers to relevance and interest to the learner. Mishan notes how commercial interests have diluted the authenticity of the language in many ELT coursebooks and discusses how we might go about building a more learner-centred, text-driven syllabus based on authentic texts. As regards her notion of challenge, Mishan argues that authentic texts can be, and should be, used in different ways depending on the proficiency level of the learners; thus it is the task which is graded rather than the text, allowing even low-level learners access to stimulating materials which produce genuine reaction and interest. Moreover, access to authentic texts (and remember that 'text' can mean a range of material from songs to literature, films to e-mail, television programmes to corpora) is now facilitated through the Internet and related technologies. 'There are', says Mishan, 'myriad ways of maintaining text and task authenticity while providing a suitably gauged level of challenge at any proficiency level' (p. 63).

In Chapter 4, Mishan's preliminary discussion of the notion of task in the context of language learning is drawn from several influences, ranging from genre and discourse analysis to work on task-based learning. Having set the scene, in which the focus of authenticity is on the task and not only on the text, she constructs what she calls the 'task authenticity framework', which is based on three sets of parameters: (1) Guidelines for task authenticity, (2) Communicative purpose, and (3) Task typologies. The guidelines for task authenticity (on p. 75) can be used either 'as a sort of checklist to be applied selectively while conceiving and designing tasks' (p. 75), or as 'a set of criteria for evaluating the authenticity of learning tasks produced by others' (ibid.). That each of these parameters is based on research and on pedagogical practice (Mishan's own as well as that of others) comes through very clearly. Mishan is also aware of the fact that taxonomies, classifications and typologies are really only 'crutches' to help us get about, not to restrict or dominate our activity. And they should never be regarded as exclusive. She notes that the set of seven communicative purposes she identifies is not exhaustive but will provide 'sufficiently broad categories to enable the materials developer/task designer to assign a fairly accurate communicative purpose' (p. 79) to a chosen text, while the taxonomy of task typologies she works out on pages 83-92 is based on the 'broad linguistic, cognitive and/or physical activity/ies they entail' (p. 92). Thus, despite, and because of, the theory, the task authenticity framework is allowed a certain flexibility in its practical application, as 'any experienced teacher knows that no amount of theorising can predict what happens in the classroom' (p. 93).

Part Two. The resource section.
Chapters 5-10 each deal with a different kind of text, respectively, Literature, The broadcast media, Newspapers, Advertising, Music and song, and Film, and each is divided into two sections. In the first section, Mishan discusses the pedagogical issues involved in using the specific kind of text for language learning. This discussion is then summarised and condensed into a set of 6-8 principles which, together with the references for further reading she provides, are intended to act as a 'quick reference guide' to using the cultural product for language learning/teaching. The second section of each of these chapters is designed to be 'a practical teaching resource' (p. 95). It contains classroom tasks which could either be used as supplementary material on existing courses or as the basis of a learner-centred, text-driven syllabus.

The structure of Chapter 11, which deals with ICT, is rather different. There are separate sections on three technologies, chosen on the basis of being the 'most used for language learning' (p. 242): The Web, E-mail, and Corpora and Concordancing - although later we learn that the corpus and the concordancer 'are probably the ICT systems least exploited for language learning' (p. 256). No summarising set of principles for the use of ICT is provided - perhaps because of the relative newness of the technologies, perhaps because of the greater degree of learner autonomy made possible by working online, or perhaps because teacher control is maintained through the task, and the design of authentic tasks is covered at length in Chapter 4. Mishan does, however, discuss the pedagogical practices emerging from these technologies, as well as the effect the new varieties of language arising from them may have on the concept of authenticity. Like the previous chapters in Part Two of the book, this one concludes with a set of carefully described tasks for use in the classroom.

No review of a resources book would be complete without some discussion of the tasks presented, so to the question: Do the tasks work? my answer is: Well, there are 162 of them and the editor of The EUROCALL Review is not prepared to give me the twelve months or so I estimate I would need to try them all out. All I can say is that the few I have tried fully engaged, absorbed and stimulated my students. I would add, though, that the resources Mishan offers us are not, in fact, limited to the tasks. They also include numerous checklists, criteria, summaries, principles and guidelines for use, not to mention the task authenticity framework itself, as well as the benefit of her practical experience and knowledge of theory.

Mishan states that the book is 'in affectionate homage' (p. xi) to her teaching background, and this affection, together with a certain underlying humour, comes through in her writing. We can sense that Mishan has enjoyed writing this book, and that she enjoys teaching and devising materials. In another excellent book, Urquhart & Weir (1998: 234) point out that teachers "... need a sound grasp of practical matters and an educated framework on which to base and to evaluate their methods". It is clear that Mishan has both these requirements in abundance. Designing Authenticity into Language Learning Materials will provide its readers with an opportunity to extend and develop the latter, and a basis on which to acquire the former.

To be truly 'comprehensive' it seems to me that the book needs some discussion of evaluation. Yet even if it stopped at the end of Part One, it would still be well worth reading. It goes on, however, showing us the task authenticity framework 'in action' in the database of classroom tasks, making the book a valuable source of ready-made materials as well as, and perhaps more importantly, a source of ideas and good practice based on sound, pedagogical principles and evidence from recent SLA research, and from which teachers can also derive ideas with which to design their own materials suited to their own teaching/learning situations.

References

American Council on the Teaching of Foreign Languages (ACTFL). (1986). Proficiency Guidelines. Hastings-on-Hudson, NY: ACTFL Materials Center.

Council of Europe. (2001). The Common European Framework of Reference for Languages: Learning, Teaching, Assessment. Cambridge: Cambridge University Press.

Nuttall, C. (1982). Teaching Reading Skills in a Foreign Language. Oxford: Heinemann.

Prodromou, L. (1988). English as cultural action. ELT Journal, 42, 2: 73-82.

Urquhart, S. and Weir, C. (1998). Reading in a Second Language. Harlow: Longman.

David Perry
Universidad Politécnica de Valencia, Spain



Book review

ICT - Integrating Computers in Teaching
Creating a Computer-Based Language-Learning Environment

Author: David Barr

Peter Lang Publishing, 2004
Paperback 240 pages
ISBN 3-03910-191-9 / US-ISBN 0-8204-7176-3
Price: $52.95

Many teachers will, I feel sure, be attracted by the main title, and will then perhaps have second thoughts when they find that the sub-title is "Creating a Computer-Based Language-Learning Environment". My advice to those who are not language teachers, as well as to those who are, is that they will find this a rewarding read. Certainly the context is language learning, but it is a context within which many transferable points and suggestions are effectively made. I warmed to the fact that the first third of the book deals readably and usefully with the main features of today's computer-based learning environment, in a way which does not require one to be a computer buff to assimilate. I found the use of language learning as an example extremely helpful, as well as interesting. Admittedly, the writer did not tell me, or perhaps other interested readers, much of which we were not already aware - but he managed to make and highlight some good general points, to which we have probably not given sufficient thought. The presentation style demonstrates how clearly such points can be made when they are presented in generic terms, illustrated perhaps by a particular example, but never overly subject-specific.

There follow three very full case-studies illustrating the contribution of computers to language learning in Ulster, Cambridge and Toronto. The non-linguist will find these over-detailed, I suspect, but the style is such that skimming by those who so wish is simple; and, after all, there must be few texts on educational topics where some sections are not of less interest to a particular reader than are others. The comparison which follows, for instance, sufficed for me with relatively little amplification from the case-study detail. The writer thoughtfully compares and contrasts these very different examples and draws from that analysis conclusions which I found thoughtful, generally relevant, and of interest and use to me. However, I imagine that those who teach languages will read the case studies, which are written in a reader-friendly style, with keen interest, picking out similarities and contrasts to their own present practices, and identifying useful ideas in the process.

The review with which the text then closes I found thoughtful and thought-provoking, though I could have wished for less reliance on the three case-studies and rather more reference to other examples, of which even readers from this discipline may be unaware. If you teach languages, you will find much that is stimulating in this book, and useful frameworks within which to organise your thinking and planning. I would commend it as an item to put on your shopping list. For non-linguists who may be tempted by the promise of the main title, I would advise that the promise will be fulfilled - and that they will find this text readable and helpful, provided they are prepared to skim a little on the case studies, and so well worth borrowing for selective reading.

John Cowan
Educational Development Unit, Napier University, UK


Events Calendar

For information on events, please refer to http://www.eurocall-languages.org/resources/calendar.html, which is regularly updated.


