Get the Picture: teaching with multi-modal texts

Philip McConnell
English Language Institute of Singapore

To be literate in the 21st century, learners need to be able to interpret their increasingly prevalent and complex media environment, understanding that all media messages are, sometimes despite appearances, merely representations of reality, not ‘reality’ itself. That is, such messages are inevitably selective, partial and incomplete. They are constructed to entertain, to inform and to persuade, often with a political or economic purpose. Different types of media and forms of text have their own distinctive features, which viewers must understand in order to respond with discrimination. Increasingly, texts such as websites and videos are non-linear: messages are presented simultaneously, and viewers are expected to make their own choices about what to view and in what order. These messages often use stereotypes and emotive images and language, which are open to different interpretations and may be misleading. Learners in the 21st century should therefore be able to interpret and evaluate meanings expressed in multiple modalities, and to use multiple modalities themselves to create meaning. Pictures can be used very effectively to engage students at any level in many kinds of learning activities, including higher-order thinking; speaking and listening; literary devices such as irony and metaphor; and grammar and vocabulary. They are also a powerful stimulus for speaking, writing and representing. This paper offers a research-based rationale for pedagogies using pictures and other multimodal texts, together with a set of teaching strategies intended for the English classroom.
Theoretical frameworks

According to cognitive psychology, learning occurs when individuals interact with people, objects and events and then reflect on those interactions. The learner actively constructs understanding by deciding what these experiences mean, building a personal set of mental models which in turn determine how new experiences are understood. About a quarter of the brain is devoted to processing visual information, far more than for any other sense. Rudolf Arnheim (1969) showed how, from infancy, we learn to recognise and classify all kinds of objects, people, actions and phenomena such as weather, colours or moods. Jean Piaget showed that we learn from interactions with our physical environment, which comes to include not only its physical aspects but also their representations in images and signs. Visual literacy thus includes everything from facial expressions and body language to drawing, websites and films.

Research into the factors that motivate children to read and write at home (Burnett and Myers, 2002) suggests that children write and read as part of their imaginative play. Eight pupils from Years 3 and 6 were invited to use disposable cameras to capture examples of the reading and writing they did at home. The results showed that children used shared books and writing as a way of building friendships. They used computers to explore school topics or research areas of personal interest; created displays of pictures, certificates, religious texts or prayer calendars; wrote notes to themselves; and made props for make-believe play. Raising Boys’ Achievements in Writing (United Kingdom Literacy Association, 2004) found reliable evidence that the use of visual images, such as videos, DVDs and photographs, was effective in motivating boys and increased both the quantity and quality of their writing. Using visual literacy can also develop boys’ ability to articulate their understanding of the writing process using metalanguage. A follow-up research project by the Department of Education and Science in 2005 showed that the boys saw themselves as being more in control of their own writing. Visual literacy can lead to:
- increased quantity of writing
- increased quality of writing
- wider...