
Kay O’Halloran interview on multimodal discourse: Part 2 of 3

Before going on to the second part of her interview, here’s a little more about Kay O’Halloran. She has a Ph.D. in Communication Studies from Murdoch University (Australia), a B.Sc. in Mathematics, a Dip. Ed., and a B.Ed. (First Class Honours) from the University of Western Australia.

The Multimodal Analysis Lab, of which she is the Director, brings together researchers from engineering, the performing arts, medicine, computer science, arts and social sciences, architecture, and science, all working together in an interdisciplinary environment. (This is the first instance where I’ve seen the word interdisciplinary and can wholeheartedly agree with its use. In my experience, interdisciplinary can mean an organic chemist collaborating with an inorganic chemist, or a historian working with an anthropologist. There are real leaps between, for example, history and anthropology, but compared with engineering and the performing arts, the leap just isn’t that big.)

There’s more on Kay O’Halloran’s page here and more on the Multimodal Analysis Lab here.

2. Could you describe the research questions, agendas and directions that are most compelling to you at this time?

Multimodal research involves new questions and problems such as:

– What are the functionalities of the resources (e.g. language versus image)?

– How do choices combine to make meaning in artefacts and events?

– What types of reconstruals take place within and across semiotic artefacts and events and what type of metaphors consequently arise?

– How is digital media expanding our meaning-making potential?

The most compelling agendas and directions in multimodal research include developing new approaches to annotating, analysing, modelling, and interpreting semiotic patterns using digital media technologies, particularly in dynamic contexts (e.g. videos, film, website browsing, online learning materials). The development of new practices for multimodal analysis (e.g. multimodal corpus approaches) means we can investigate sociocultural patterns and trends, and the nature of knowledge and contemporary life in the age of digital media, together with its limitations. Surely new media offers us the potential for new research paradigms and for making new types of meaning, which will lead us to new ways of thinking about the world. Multimodal approaches also offer the promise of new paradigms for educational research, where classroom and pedagogical practices and disciplinary knowledge can be investigated in their entirety. Multimodal research opens up an exciting new world, one which is being eagerly embraced by academic researchers and postgraduate students as the way forward (in my experience at least).

Sensing, nanotechnology and multimodal discourse analysis

Michael Berger has an interesting article on carbon nanotubes and how the act of observing them may cause damage. It’s part of the Nanowerk Spotlight series here,

A few days ago we ran a Nanowerk Spotlight (“Nanotechnology structuring of materials with atomic precision”) on a nanostructuring technique that uses an extremely narrow electron beam to knock individual carbon atoms from carbon nanotubes with atomic precision, a technique that could potentially be used to change the properties of the nanotubes. In contrast to this deliberately created defect, researchers are concerned about unintentional defects created by electron beams during examination of carbon nanomaterials with transmission electron microscopes like a high-resolution transmission electron microscope (HRTEM).

The concern is that electrons in the beam will accidentally knock an atom out of place. It was believed that lowering the beam voltage to 80 kV would address the problem but new research suggests that’s not the case.
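For anyone wondering where the 80 kV figure comes from, here is a rough back-of-the-envelope sketch of the standard knock-on damage argument (my own illustration, not something from the Nanowerk piece); the roughly 17 eV displacement threshold for a carbon atom in a pristine nanotube wall is an assumed, commonly quoted value.

E_{\max} = \frac{2\,T\,(T + 2 m_e c^2)}{M c^2}, \qquad
E_{\max}\big|_{T = 80\ \text{keV}} \approx \frac{2 \times 80 \times (80 + 1022)}{1.12 \times 10^{7}}\ \text{keV} \approx 16\ \text{eV}

Here T is the electron’s kinetic energy, m_e c^2 ≈ 511 keV is the electron rest energy, and M c^2 ≈ 11.2 GeV is the rest energy of a carbon-12 nucleus. On this estimate the maximum energy an 80 keV electron can transfer to a carbon atom sits just below the ~17 eV threshold (at 100 keV the same formula gives roughly 20 eV, comfortably above it), which is why 80 kV was long assumed to be safe, and why the findings described above are notable.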

If you go to Nanowerk to read more about this, you’ll find some images of what’s going on at the nanoscale. The images you see are not pictures per se; they are visual representations built from data sensed at the nanoscale. The microscopes used to gather the data are not optical. As I understand it, electron microscopes such as the HRTEM detect electrons passing through the sample rather than light, while scanning probe microscopes sense the surface by something closer to touch than sight. (If someone knows differently, please do correct me.) Scientists even have a term for interpreting this kind of data: blobology.

I’ve been reading up on these things and it’s gotten me to thinking about how we understand and interpret not just the macroworld that our senses let us explore but the micro/nano/pico/xxx scale worlds which we cannot sense directly. In that light, the work that Kay O’Halloran, an associate professor in English Language and Literature and the Director of the Multimodal Analysis Lab at the National University of Singapore, is doing in the area of multimodal discourse analysis looks promising. From her article in Visual Communication, vol. 7 (4),

Mathematics and science, for example, produce a new space of interpretance through mixed-mode semiosis, i.e. the use of language, visual imagery, and mathematical symbolism to create a new world view which extends beyond the possible using language. (p. 454)