Tag Archives: National University of Singapore

Kay O’Halloran interview on multimodal discourse: Part 1 of 3

I am thrilled to announce that Kay O’Halloran, an expert on multimodal discourse analysis, has given me an interview. She recently spoke at the 2009 Congress of the Humanities and Social Sciences in Ottawa as a featured speaker (invited by the Canadian Association for the Study of Discourse and Writing). Kay is an Associate Professor in the Department of English Language and Literature at the National University of Singapore and she is the Director of the Multimodal Analysis Lab. (More details about Kay in future installments.)

Before going on with the introduction and the interview, I want to explain why I think this work is important. (Forgive me if I gush.) We have so much media coming at us at any one time, and it is increasingly being ‘mashed up’, remixed, reused, and repurposed. How important is text going to be when we have icons and videos and audio materials to choose from? Take, for example, the bubble charts on Andrew Maynard’s 2020 Science blog, which are a means of representing science Twitter activity. How do you interpret the information? Could the charts be used for in-depth analysis? (I commented earlier about the bubble charts on June 23 and 24, 2009, and Maynard’s post is here. You might also want to check out the comments, where Maynard explains a few things that puzzled me.)

As Kay points out in her responses to my questions, we have more to interpret than just a new type of chart or data visualization.

1. I was quite intrigued by the title of your talk (A Multimodal Approach to Discourse Studies: A paradigm with new research questions, agendas and directions for the digital age) at the 2009 Congress for the Humanities and Social Sciences held in Ottawa, Canada this May. Could you briefly describe a multimodal approach for people who aren’t necessarily in the field of education?

Traditionally, language has been studied in isolation, largely due to an emphasis on the study of printed linguistic texts and existing technologies such as print media, telephone and radio, where language was the primary resource used. However, various forms of images, animations and videos form the basis for sharing information in the digital age, and thus it has become necessary to move beyond the study of language to understand contemporary communicative practices. In a sense, the study of language alone was never really sufficient, because analysing what people wrote or said missed significant choices such as the typography, layout and images which appeared in written texts, and the intonation, actions and gestures which accompanied spoken language. In addition, disciplinary knowledge (e.g. mathematics, science and social science disciplines) involves mathematical symbolism and various kinds of images, in addition to language. Therefore, researchers in language studies and education are moving beyond the study of language to multimodal approaches in order to investigate how linguistic choices combine with choices from other meaning-making resources.

Basically, multimodal research explores the various roles which language, visual images, movement, gesture, sound, music and other resources play, and the ways those resources integrate across modalities (visual, auditory, tactile, olfactory, etc.) to create meaning in the artefacts and events which form and transform culture. For example, the focus may be written texts, day-to-day interactions, internet sites, videos and films, and 3-D objects and sites. In fact, one can think of knowledge and culture as specific choices from meaning-making resources which combine and unfold in patterns familiar to members of groups and communities.

Moreover, there is now explicit acknowledgement in educational research that disciplinary knowledge is multimodal and that literacy extends beyond language.

The shift to multimodal research has taken place not only because digital media serve as the object of study, but also because digital media technologies offer new research tools for studying multimodal texts. Such technologies have become available and affordable, and increasingly they are being utilised by multimodal researchers in order to make complex multimodal analysis possible. Lastly, scientists and engineers are increasingly looking to social scientists to solve important problems involving multimodal phenomena, for example, data analysis, search and retrieval, and human-computer interface design. Computer scientists and social scientists face similar problems in today’s world of digital media, and interdisciplinary collaboration is the promise of the future in what has become the age of information.

Have a nice weekend. There’ll be more of the interview next week, including a bibliography that Kay very kindly provided.

Sensing, nanotechnology and multimodal discourse analysis

Michael Berger has an interesting article on carbon nanotubes and how the act of observing them may cause damage. It’s part of the Nanowerk Spotlight series here,

A few days ago we ran a Nanowerk Spotlight (“Nanotechnology structuring of materials with atomic precision”) on a nanostructuring technique that uses an extremely narrow electron beam to knock individual carbon atoms from carbon nanotubes with atomic precision, a technique that could potentially be used to change the properties of the nanotubes. In contrast to this deliberately created defect, researchers are concerned about unintentional defects created by electron beams during examination of carbon nanomaterials with transmission electron microscopes like a high-resolution transmission electron microscope (HRTEM).

The concern is that the electrons in the beam will accidentally knock an atom out of place. It was believed that lowering the beam energy to 80 kV would address the problem, but the new research suggests that’s not the case.
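For readers who like numbers, here’s a back-of-the-envelope sketch (mine, not from the Nanowerk article) of why 80 kV was thought to be safe. It uses the standard relativistic formula for the maximum energy a beam electron can transfer to a nucleus in a head-on collision; the ~15–20 eV displacement threshold for carbon in nanotubes is a commonly cited ballpark figure, not a value taken from this research.

```python
# Maximum kinetic energy an electron of beam energy E can transfer to a
# nucleus of rest energy M*c^2 in a single head-on collision:
#   E_max = 2*E*(E + 2*m_e*c^2) / (M*c^2)

M_E_C2 = 0.511e6            # electron rest energy, eV
C12_C2 = 12 * 931.494e6     # rest energy of a carbon-12 nucleus, eV

def max_energy_transfer_ev(beam_energy_ev, nucleus_rest_energy_ev=C12_C2):
    """Maximum energy (eV) handed to the nucleus in one collision."""
    e = beam_energy_ev
    return 2 * e * (e + 2 * M_E_C2) / nucleus_rest_energy_ev

# At 80 keV the most an electron can transfer to a carbon atom is ~15.8 eV,
# near the low end of the ~15-20 eV range often cited as the displacement
# threshold for carbon in nanotubes -- hence the belief that 80 kV was safe.
print(round(max_energy_transfer_ev(80e3), 1))
```

The margin is thin, which is consistent with the finding that damage still occurs at 80 kV.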

If you go to Nanowerk to read more about this, you’ll find some images of what’s going on at the nanoscale. The images you see are not pictures per se. They are visual representations based on data that is being sensed at the nanoscale. The microscopes used to gather the data are not optical: a transmission electron microscope forms its images from electrons passing through the sample rather than from light, while some other nanoscale instruments, such as the atomic force microscope, are haptic, sensing by touch rather than sight. (If someone knows differently, please do correct me.) Scientists even have a term for interpreting this kind of data: blobology.

I’ve been reading up on these things and it’s gotten me to thinking about how we understand and interpret not just the macroworld that our senses let us explore but the micro/nano/pico/xxx scale worlds which we cannot sense directly. In that light, the work that Kay O’Halloran, an associate professor in English Language and Literature and the Director of the Multimodal Analysis Lab at the National University of Singapore, is doing in the area of multimodal discourse analysis looks promising. From her article in Visual Communication, vol. 7 (4),

Mathematics and science, for example, produce a new space of interpretance through mixed-mode semiosis, i.e. the use of language, visual imagery, and mathematical symbolism to create a new world view which extends beyond the possible using language. (p. 454)