Impressions, Reflections and Ideas by Sharon Hartle

IATEFL TEASIG/CRELLA Conference: Building Practical Assessment Skills for Teachers

Impressions and Themes

As usual, when I returned from the recent TEASIG/CRELLA conference on assessment skills for teachers, my head was buzzing with impressions, reflections and ideas, so I decided to put some of them down on paper and share them with you. The first impression that always strikes me is how friendly the TEASIG group is, and how small conferences like this one (approx. 100 participants) really give you the chance to see old faces and to meet new people. The audiences, as usual, were generally very supportive of the presenters, and there was a genuine feeling of intellectual curiosity. Here were people who had come all the way to Luton to find out more about assessment and how to apply it in their contexts. I always find it useful, at any conference, to choose a few themes to focus on because, even at a two-day event like this one, there is so much on offer that it is a good idea to concentrate on specific areas. My main focus, then, was on feedback, which is something I am personally very interested in, and on technology. That is not to say, however, that I did not go to other sessions as well.

John Field’s plenary, for instance, was, as always, a thought-provoking discussion of what it means for a learner to listen to a new language, what the pitfalls are, and how some of the strategies we commonly use when teaching, such as comprehension questions, only go so far in assessing understanding. After all, you can only answer a comprehension question once you have understood, and it will not help you if you have missed the context, or have understood the context but completely misheard one of the key words. He asked whether we, as teachers, are aware of, or take into consideration, ‘the psychological processes that underlie the skill’, and how far conventional test items actually engage learners.
He described the listening process as ranging from lower-level operations such as ‘decoding’, which distinguishes one item from another, through ‘word search’ and ‘parsing’, to more complex skills such as ‘meaning construction’ and ‘discourse construction’, which can only be applied at higher levels. To be aware that ‘he’, for instance, refers back to someone who was introduced some time earlier is an extremely complex juggling act that we can only perform when we are expert listeners.


I attended two presentations on feedback, one by Christian Krekeler and one by Clare Maas. Christian examined task-based speaking tests, such as video and news presentations. He stressed the need for clear assessment criteria to be set at the task design phase, so that feedback during the learning process can be useful. Giving only a grade or empty praise will not help learners to know what went well and where the problem areas were. Clare, on the other hand, presented an in-depth view of her multimodal feedback technique, which included different types of feedback on written tasks. These ranged from comment bubbles posted online to longer, more discursive emails which, as she explained, are taking the place of traditional, paper-based written feedback. She found, in fact, that learners experienced different types of feedback as beneficial to different areas of language learning. Many of her learners prefer comment bubbles for accuracy-related issues concerning surface features, but favour emails when it comes to matters of critical thinking or the development of the writing itself.

I also considered feedback as part of my presentation on applying formative assessment techniques systematically in the classroom. What emerges clearly from the formative assessment literature is the need for feedback to be much more than simply a grade or empty praise. It needs to be specific, pointing the learner to exactly what the problem is and suggesting ways of working to solve it. Feedback that is simply correction is, in short, not enough: it needs to be supported by analysis and further experimentation with the language point, pronunciation area or discourse management feature in question. Combining positive comments on those features of writing that are successful with comments on those which are more problematic is also useful for learners, and is by no means empty praise.


Technology is one area that moves so fast that it is notoriously hard to keep up with. Thom Kiddle, in both his workshop and his plenary, explored ways in which technology has changed and is changing. Whereas just a few years ago using technology in teaching involved having expensive equipment in the classroom, it now involves the ‘bring your own device’ (BYOD) approach, using devices such as tablets or smartphones that we all carry around with us. In the workshop, in particular, he introduced us to six extremely useful freemium interfaces that teachers and learners can use for a whole range of activities, not limited to assessment: Padlet, Socrative, Edpuzzle, Screencast-o-matic, Plickers and Flipgrid. Freemium interfaces are those which have a free version but offer premium features that can be purchased. He used Padlet with us, and I noticed that he was not alone, so it seems worth looking at this tool in slightly more depth. Padlet, for those who remember, used to be known as Wallwisher,
but it has recently revamped its look and now includes interesting new features, such as the chance to choose different layouts. It is, in its most basic form, a space where ideas can be brainstormed, questions can be asked and answered, and resources and ideas can be shared, thus providing a record that can then be brought back into the classroom and commented on or extended. Thom explored six assessment-related activities that these tools can be used for: online brainstorming (Padlet); mobile-based digital assessment (Socrative); collaborative video commentaries (Edpuzzle); screencasting (Screencast-o-matic); teacher-led live digital assessment (Plickers), a highly effective way of giving feedback in lessons where digital options may not be available to all learners (although many of these tools can also be used with learners working in pairs or small groups sharing a single device); and collaborative video discussion, as well as sharing questions, resources and other digital items with different online communities (Flipgrid). This was very practical, and everyone who attended went away feeling that they had been given a ‘going home present’.

Mary Whiteside also introduced a range of digital tools, including Padlet, which has to be the number one tool of the conference. Her topic was where assessment meets digital literacy, so in some ways it followed on very aptly from Thom’s workshop. One of the main tools she discussed was Cambridge English’s ‘Write and Improve’. This is a freely accessible interface where learners can input their written work and receive instant feedback on their performance at various levels. It has proved popular, particularly at B1 and B2 levels, with both learners and teachers. Learners become more independent when writing, and teachers are able to set them more work without placing an additional burden on themselves through traditional marking. ‘Write and Improve’ is a work in progress and is constantly being updated with new features; a recent update, for instance, added IELTS test questions.

Appeals and Malpractice

Two other interesting sessions I attended were related to appeals and malpractice. Judith Mader talked about the appeals procedure in her own institution and looked at how appeals, in which students contest the grades they have been awarded, differ from culture to culture. It was surprising for me to learn, for instance, that very large numbers of students appeal in Germany, as that is not generally the case in Italy, where I work. Students at my institution, for instance, have the right to accept or refuse the mark they have been awarded, to discuss the test they have taken, and then to retake the exam if they so wish. This sparked off an interesting discussion of cultural norms. Judith underlined the fact that tests need to be designed in a principled way, with assessment criteria that are clear to all involved. Perhaps the other side of the coin to appeals is malpractice, or cheating, which Anna Soltyska talked about in her workshop. Once again, the workshop audience was multicultural, another strength of the conference, and this meant that it was possible to compare malpractice in different cultures.
What is cheating in one context, in fact, may not necessarily be perceived as such in another. My own take on this is that there is a clear line between dishonesty, where issues such as plagiarism rear their ugly heads, and cooperative activities, where learners work together collaboratively and are then assessed. This, in fact, brings us back to Christian’s question of how learners should be assessed when they are working together. I feel the answer lies, as Judith said, in clear assessment criteria, and in tasks that are in line with those criteria.
Unfortunately, there were so many sessions that I could not attend them all, much as I would have liked to. Socialising and talking to people during coffee and lunch breaks, however, is also a crucial part of any conference, as impressions are exchanged there too. Those I talked to seemed to find the sessions just as thought-provoking as the ones I attended, and all of them led to a wealth of discussion. All in all, it was an intensive but very fruitful two days in Luton, enjoyed by everyone. The collaboration between IATEFL TEASIG and CRELLA is obviously a winning combination!
