Tag Archives: multimodal discourse

Cool science; where are the women?; biology discovers graphical notations

Popular Science’s Future of .., a programme developed in response to a question (“What’s missing from science programming?”) posed by Debbie Myers, the US Science Channel’s general manager, was launched last night (Aug. 11, 2009). From the Fast Company posting by Lynne D. Johnston,

The overall response from the 50-plus room full of mostly New York digerati, was resoundingly, “a show that was both entertaining and smart–not dumbed down.”

The show’s host, Baratunde Thurston, offers an interesting combination of skills: he is a comedian, political pundit, and author. If you go to the posting, you can find the trailer. (It’s gorgeous and, I suspect, quite expensive due to the effects, and, as you’d expect from a teaser, it’s short on science content.)

It does seem as if there’s some sort of campaign to make science ‘cool’ in the US. I say campaign because there was also, a few months ago, the World Science Festival in New York (mentioned in my June 12, 2009 posting). Thanks to Darren Barefoot’s blog I see they have posted some highlights and videos from the festival. Barefoot features one of musician Bobby McFerrin’s presentations here.

Barefoot comments on the oddity of having a musician presenting at a science event. The clip doesn’t clarify why McFerrin would be on the panel, but neuroscientists have been expressing a lot of interest in musicians’ brains and I noticed that there was at least one neuroscientist on the panel. Still, it would have been nice to have understood the thinking behind the panel composition. If you’re interested in more clips and information about the World Science Festival, go here.

Back to my thoughts on the ‘cool’ science campaign, there have been other initiatives including the ‘Dancing with scientists’ video contest put on by the American Association for the Advancement of Science and the nanotechnology video contests put on by the American Chemical Society. All of these initiatives have taken place this year. By contrast, nothing of a similar nature appears to be taking place in Canada. (If you know of a ‘cool science’ project in Canada, please do contact me as I’d be happy to feature it here.)

On the subject of putting together panels, there’s an interesting blog posting by Allyson Kapin (Fast Company) on the dearth of women on technology and/or social media panels. She points out that the problem has many aspects and requires more than one tactic for viable solutions.

She starts by talking about the lack of diversity but very quickly shifts her primary focus to women. (I’ve seen this before in other writing and I think it happens because the diversity topic is huge, so writers want to acknowledge the breadth but have time and expertise to discuss only a small piece of it.) On another tack altogether, I’ve been in the position of assembling a panel, and getting a diverse group of people together can be incredibly difficult. That said, I think more work needs to be done to make sure that panels are as diverse as possible.

Following on my interest in multimodal discourse and new ways of communicating science, a new set of standards for graphically representing biology has been announced. From Physorg.com,

Researchers at the European Molecular Biology Laboratory’s European Bioinformatics Institute (EMBL-EBI) and their colleagues in 30 labs worldwide have released a new set of standards for graphically representing biological information – the biology equivalent of the circuit diagram in electronics. This visual language should make it easier to exchange complex information, so that models are accurate, efficient and readily understandable. The new standard, called the Systems Biology Graphical Notation (SBGN), is published today (August 11, 2009) in Nature Biotechnology.

There’s more here and the article in Nature Biotechnology is here (keep scrolling).

Reimagining prosthetic arms; touchable holograms and brief thoughts on multimodal science communication; and nanoscience conference in Seattle

Reimagining the prosthetic arm, an article by Cliff Kuang in Fast Company (here), highlights a student design project at New York’s School of Visual Arts. Students were asked to improve prosthetic arms and were given four categories: decorative, playful, utilitarian, and awareness. This one by Tonya Douraghey and Carli Pierce caught my fancy; after all, who hasn’t thought of growing wings? (From the Fast Company website),

Feathered cuff and wing arm

I suggest reading Kuang’s article before heading off to the project website to see more student projects.

At the end of yesterday’s posting about MICA and multidimensional data visualization in spaces with up to 12 dimensions (here) in virtual worlds such as Second Life, I made a comment about multimodal discourse, which is something I think will become increasingly important. I’m not sure I can imagine 12 dimensions, but I don’t expect that our usual means of visualizing or understanding data will be sufficient for the task. Consequently, I’ve been noticing more projects which engage some of our other senses, notably touch. For example, the SIGGRAPH 2009 conference in New Orleans featured a hologram that you can touch, covered in another article by Cliff Kuang in Fast Company, Holograms that you can touch and feel. For anyone unfamiliar with SIGGRAPH, the show has introduced a number of important innovations, notably clickable icons. It’s hard to believe, but there was a time when everything was done by keyboard.

My August newsletter from NISE Net (Nanoscale Informal Science Education Network) brings news of a conference at the Pacific Science Center in Seattle, WA, Sept. 8 – 11, 2009. It will feature (from the NISE Net blog),

Members of the NISE Net Program group and faculty and students at the Center for Nanotechnology in Society at Arizona State University are teaming up to demonstrate and discuss potential collaborations between the social science community and the informal science education community at a conference of the Society for the Study of Nanoscience and Emerging Technologies in Seattle in early September.

There’s more at the NISE Net blog here, including a link to the conference site. (I gather the Society for the Study of Nanoscience and Emerging Technologies is in the very early stages of organizing, so this is a fairly informal call for registrants.)

The NISE Net nano haiku this month is,


Surface plasmon resonance
Silver looks yellow

by Dr. Katie D. Cadwell of the University of Wisconsin-Madison MRSEC.

Have a nice weekend!

Flies carry nanoparticles; EPA invites comments; scientific collaboration in virtual worlds

A new study suggests that flies exposed to nanoparticles in manufacturing areas, or other places with heavy concentrations, could accumulate the particles on their bodies and transport them elsewhere. From the media release on Nanowerk News,

During the experiments, the researchers noted that contaminated flies transferred nanoparticles to other flies, and realized that such transfer could also occur between flies and humans in the future. The transfer involved very low levels of nanoparticles, which did not have adverse effects on the fruit flies.

It makes perfect sense when you think about it. Flies pick up and transport all manner of entities so why wouldn’t they pick up nanoparticles in their vicinity?

In other news, the US Environmental Protection Agency (EPA) has asked for comments on case studies of nanoscale titanium dioxide in water treatment and sunscreens. Presumably you have to be a US citizen to participate. For more information on the call for comments, check out this item on Nanowerk News. From the item,

EPA is announcing a 45-day public comment period for the draft document, Nanomaterial Case Studies: Nanoscale Titanium Dioxide in Water Treatment and Topical Sunscreen (External Review Draft), as announced in the July 31, 2009 Federal Register Notice. The deadline for comments is September 14, 2009.

Yesterday, I came across an announcement about scientific collaboration in a virtual world (specifically Second Life). The Meta Institute for Computational Astrophysics (MICA) is billed as the first professional scientific organization based entirely in a virtual world.

This idea contrasts somewhat with the NanoLands concept from the National Physical Laboratory in the UK, where an organization with a physical location creates a virtual location. (You can see my interview with Troy McConaghy, part of the original NanoLands design team, here.) The project blog seems to have been newly revived and you can find out more about NanoLands and their latest machinima movies there. (If you want to see the machinima, you need a Second Life account.)

What I found particularly interesting about MICA is this bit from their media release on Physorg.com,

In addition to getting people together in a free and convenient way, virtual worlds can offer new possibilities for scientific visualization or “visual analytics.” As data sets become larger and more complex, visualization can help researchers better understand different phenomena. Virtual worlds not only offer visualization, but also enable researchers to become immersed in data and simulations, which may help scientists think differently about data and patterns. Multi-dimensional data visualization can provide further advantages for certain types of data. The researchers found that they can encode data in spaces with up to 12 dimensions, although they run into the challenge of getting the human mind to easily grasp the encoded content.
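The researchers’ claim about encoding data in up to 12 dimensions is easier to grasp with a concrete sketch. A single object in a 3-D virtual world offers many independent visual channels, and each channel can carry one dimension of a data point. The channel assignments below are my own illustration, not MICA’s actual encoding scheme:

```python
# Hypothetical sketch: one way to encode a 12-dimensional data point
# as the visual properties of a single object in a 3-D virtual world.
# The channel assignments are illustrative, not MICA's real scheme.

def encode_point(v):
    """Map a 12-element vector (values normalized to 0..1) onto
    twelve independent visual channels of one rendered object."""
    if len(v) != 12:
        raise ValueError("expected exactly 12 dimensions")
    return {
        "position": (v[0], v[1], v[2]),   # 3 spatial dimensions
        "color_rgb": (v[3], v[4], v[5]),  # 3 colour dimensions
        "rotation": (v[6], v[7], v[8]),   # 3 orientation dimensions
        "size": v[9],                     # 1 dimension as object scale
        "transparency": v[10],            # 1 dimension as alpha
        "pulse_rate": v[11],              # 1 dimension as animation speed
    }

# One data point becomes one object with twelve visual properties.
props = encode_point([0.0, 0.1, 0.2, 0.3, 0.4, 0.5,
                      0.6, 0.7, 0.8, 0.9, 1.0, 0.25])
```

The challenge the researchers mention is visible even in this toy version: the first nine channels are reasonably easy to read off an object, but distinguishing transparency from size, or tracking an animation speed across hundreds of objects, quickly strains human perception.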

Shades of multimodal discourse! More tomorrow.

Science’s exquisite corpse and other interesting science communication developments

The ‘exquisite corpse’ is a game that surrealists started playing in the earlyish part of the 20th century, according to the Wikipedia entry here. I first came across the game in a poetry context. I was part of an online poetry organization and someone suggested (as I recall) that we start an exquisite corpse project on our website. Nothing much came of it, but I’ve always found the phrase quite intriguing. The idea is that a group of people play with words or images individually, then put the pieces together to construct a final work.

Andrew Maynard’s 2020 Science blog has been featuring an art/science exquisite corpse project by Tim Jones. Billed as an experiment in science engagement, the project by Jones and his colleagues (at Imperial College) features videos of two members of the public, a science communicator, and a scientist talking about a drawing they’ve each created that expresses what they each think is important about science. What you’ll see are the interviews, the pictures that the people drew, and an exquisite corpse of science, if you go here.

Tim Jones has now invited more people to participate for the biggest art/science project in history (maybe) to create a bigger exquisite corpse of science. If you’re interested go here to Tim Jones’s site or you can read about it here at 2020 Science.

I came across a way for scientists to publish workflows and experiment plans at myExperiment.

BBC4 has been conducting an experiment of their own, visualising radio. In this case, it’s a science show that’s cast over the internet. They’ve blogged about the project here.

All of this makes me think back to the interview that Kay O’Halloran (July 3, 6, and 7, 2009 postings) gave me on multimodal discourse analysis and Andrew Maynard’s bubble charts (June 24 and 29, 2009). It’s exciting to explore these new and rediscovered techniques and to think about how we perceive the information being conveyed to us.

One last bit: there’s been an announcement from Lord Drayson, the UK’s Science and Innovation Minister and Chair of the Ministerial Group on Nanotechnologies, that the government is seeking advice for a national nanotechnology strategy. From the announcement on Nanowerk News,

Industry, academia and consumer groups were invited to use a new website to help develop the strategy, building on and consolidating the existing research and consultations that have already taken place. The website will gather views on core issues including research, regulation, innovation and commercialisation, measurement and standards and information as well as on the anticipated impact of nanotechnologies on a wide range of sectors. The aim of the strategy is to describe the actions necessary to ensure that the UK obtains maximum economic, environmental and societal benefit from nanotechnologies while keeping the risks properly managed.

The rest of the announcement is here and the project website is here. (NOTE: Consumer groups will have their own website; although members of the public are welcome, the new website is really intended for academia, industry, and NGOs.)

Happy weekend!

The other side of the multimodal discourse coin

Bill Thompson has an article, Giving life a shape, on BBC News which touches tangentially on approaching the world in a multimodal fashion. He takes a kind of digital approach, i.e. when he uses the word technology he actually means digital technology, and his examples come from social networking, Second Life, social gaming, and other activities mediated through the Internet and computers. From the article,

… because in working through the creative potential of new technologies artists of all types are helping us to find new ways to think about these tools and working out how to integrate them into our wider cultural and commercial practice.

They are helping us to explore the latest chapter in the ongoing conversation between human psychology and the capabilities of modern technology, something which will matter more and more as the network becomes pervasive and digital devices penetrate every area of our lives.

Different modalities (audio files, graphics files, animation in Second Life, and others) are referred to indirectly in the course of Thompson’s article, which is why I’ve picked up on it. In light of the Kay O’Halloran interviews (on this blog, July 3, 6, and 7, 2009), Thompson’s description of how artists are helping us find new ways to think about these tools reveals the other side of the multimodal discourse coin.

While O’Halloran and her colleagues develop a framework for analyzing and understanding multimodal discourse, it’s artists (I define that word broadly) who enact and explore that discourse through their work.

One quibble: I think Thompson’s definition could be broadened so that technology includes nanotechnology, biotechnology, synthetic biology, and other emerging technologies. Now back to Thompson and a comment that works no matter how you define technology,

One problem in talking about this is that relatively few people understand the underlying technology sufficiently well to be comfortable with it. We have few stories that talk about technology and few workable metaphors or analogies that let us convey complex technological issues in ways that people really grasp.

Metaphors came up in the O’Halloran interview (July 6, 2009 posting) too and I got this in the comments (from inkbat),

I was struck by the point on metaphor. When you come right down to it, isn’t it sad that so many of our concepts are the result of some designer or advertiser or whoever deciding to create some kind of shortcut for us .. which would work if it was just in the one instance but then it takes on a life of its own and suddenly we no longer think of the heart AS IF it is a pump but as though it IS a pump. Or the brain as a computer. …

Unfortunately, as inkbat points out, we forget we’ve created a metaphor and we treat it ‘as if it were so’, sometimes with disastrous results. Still, I think that creating metaphors and then having to ‘break’ or ‘see through’ them, ultimately discarding the old metaphor and developing a new one, is part of the human condition.

Back to my nanotech ways tomorrow.

Kay O’Halloran interview on multimodal discourse: Part 3 of 3

Thanks to Kay O’Halloran for kindly giving me this interview and here’s the last part which also includes a bibliography.

3. I notice that you have a project examining PowerPoint in the classroom and in corporate settings which you are conducting for the Australian Research Council. Could you explain a little bit about the project?

The project ‘Towards a Social Theory of Semiotic Technology: Exploring PowerPoint’s Design and its Use in Higher Education and Corporate Settings’, awarded by the Australian Research Council (ARC) (Discovery Grant No. DP09889939), is a collaborative project between Chief Investigator Professor Theo van Leeuwen (Dean of the Faculty of Arts and Social Sciences, University of Technology, Sydney), Dr Emilia Djonov (Post-doctoral Fellow, University of Technology, Sydney) and myself. The following description of the project is drawn from our research proposal.

PowerPoint has become the dominant technology for designing and delivering presentations, particularly in education and business settings where success often depends on skills in the use of the application. PowerPoint is the subject of much debate and it creates strong reactions, both positive and negative. It’s either praised for increasing presenters’ confidence and eloquence (e.g. Gold 2002) or condemned for limiting users’ ability to present complex ideas through an over-simplification of information presented in bullet points, linear slide-by-slide formats and illegible graphics (e.g. Tufte 2003).

From the multimodal perspective, PowerPoint is a semiotic technology which has a range of options (i.e. a grammar) from which presenters make selections with regards to the linguistic text, images, animations and sounds. There are default themes which the presenter may choose as well. These choices integrate in multimodal presentations which are recontextualised by the speaker during the presentation. Most studies of PowerPoint adopt a different approach, however, either exploring lecturers’ and students’ perceptions of PowerPoint to support learning, or conducting experimental studies which investigate the effects of PowerPoint versus transparency-supported lectures on learning.

Our project adopts a multimodal approach to (a) conceptualise the grammar of PowerPoint through the study of its systems of meaning; (b) analyse and compare the choices which are made in higher education and corporate settings; and (c) investigate how these choices are contextualised in presentations. In this way, we will explore how the design of PowerPoint supports or hinders the achievement of the various goals of the presenters. At the moment, there are no studies which investigate differences in the use of PowerPoint across educational and corporate settings. Furthermore, there is no evidence for arguments that PowerPoint cannot support the representation of knowledge in technical disciplines such as engineering (Tufte, 2003) or the rich narrative and interpretative skills required for social science disciplines (Adams, 2006), nor is there evidence that PowerPoint has introduced corporate rhetoric into educational practices (Turkle, 2004). In addition, the study will provide guidelines for evaluating and improving the design and use of PowerPoint and other similar presentation software.


Adams, C. (2006). PowerPoint, habits of mind, and classroom culture. Journal of Curriculum Studies, 38(4), 389 – 411.

Gold, R. (2002). Reading PowerPoint. In N. J. Allen (Ed.), Working with words and images: New steps in an old dance. (pp. 256-270). Westport, Connecticut: Ablex.

Tufte, E. R. (2003). The cognitive style of PowerPoint (2nd edition). Cheshire, Connecticut: Graphics Press.

Turkle, S. (2004). The fellowship of the microchip: global technologies as evocative objects. In M. Suárez-Orozco & D.B. Qin-Hilliard (Eds.), Globalization: Culture and Education in the New Millennium (pp. 97-113). Berkeley, CA: University of California Press.

Kay O’Halloran interview on multimodal discourse: Part 2 of 3

Before going on to the second part of her interview, here’s a little more about Kay O’Halloran. She has a Ph.D. in Communication Studies from Murdoch University (Australia), a B.Sc. in Mathematics and a Dip. Ed. and B.Ed. (First Class Honours) from the University of Western Australia.

The Multimodal Analysis Lab, of which she is the Director, brings together researchers from engineering, the performing arts, medicine, computer science, arts and social sciences, architecture, and science working together in an interdisciplinary environment. (This is the first instance where I’ve seen the word interdisciplinary used in a way I can wholeheartedly agree with. As I have found, interdisciplinary can mean that an organic chemist is collaborating with an inorganic chemist or an historian is working with an anthropologist. I understand that there are leaps between, for example, history and anthropology, but by comparison with engineering and the performing arts, the leap just isn’t that big.)

There’s more on Kay O’Halloran’s page here and more on the Multimodal Analysis Lab here.

2. Could you describe the research questions, agendas and directions that are most compelling to you at this time?

Multimodal research involves new questions and problems such as:

– What are the functionalities of the resources (e.g. language versus image)?

– How do choices combine to make meaning in artefacts and events?

– What types of reconstruals take place within and across semiotic artefacts and events and what type of metaphors consequently arise?

– How is digital meaning expanding our meaning-making potential?

The most compelling agendas and directions in multimodal research include developing new approaches to annotating, analysing, modeling, and interpreting semiotic patterns using digital media technologies, particularly in dynamic contexts (e.g. videos, film, website browsing, online learning materials). The development of new practices for multimodal analysis (e.g. multimodal corpus approaches) means we can investigate social cultural patterns and trends and the nature of knowledge and contemporary life in the age of digital media, together with its limitations. Surely new media offers us the potential for new research paradigms and new types of meaning-making which will lead us to new ways of thinking about the world. Also, multimodal approaches offer the promise of new paradigms for educational research where classroom and pedagogical practices and disciplinary knowledge can be investigated in their entirety. Multimodal research opens up an exciting new world, one which is being eagerly embraced by academic researchers and postgraduate students as the way forward (in my experience at least).
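The multimodal corpus approaches mentioned above can be pictured with a small sketch: time-aligned annotation ‘tiers’ for different semiotic resources, which can then be queried for cross-modal patterns (for instance, which gestures co-occur with which stretches of speech). The tier names and data structure below are my own illustration, not any established annotation standard:

```python
# Hypothetical sketch of a multimodal corpus annotation: time-aligned
# "tiers" for different semiotic resources. Illustrative only -- real
# multimodal annotation tools use richer, standardized formats.

from dataclasses import dataclass

@dataclass
class Annotation:
    tier: str      # e.g. "speech", "gesture", "slide_image"
    start: float   # seconds from the start of the recording
    end: float
    label: str

def overlapping(corpus, tier_a, tier_b):
    """Find pairs of annotations on two tiers that overlap in time --
    the kind of cross-modal pattern multimodal analysis looks for."""
    pairs = []
    for a in corpus:
        if a.tier != tier_a:
            continue
        for b in corpus:
            if b.tier == tier_b and a.start < b.end and b.start < a.end:
                pairs.append((a.label, b.label))
    return pairs

corpus = [
    Annotation("speech", 0.0, 2.5, "this graph shows growth"),
    Annotation("gesture", 1.0, 2.0, "points at screen"),
    Annotation("speech", 3.0, 4.0, "as you can see"),
]

print(overlapping(corpus, "speech", "gesture"))
# -> [('this graph shows growth', 'points at screen')]
```

The point of the sketch is that once several modalities share a common timeline, questions about how choices "combine to make meaning" become queryable patterns rather than impressions.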

Kay O’Halloran interview on multimodal discourse: Part 1 of 3

I am thrilled to announce that Kay O’Halloran, an expert on multimodal discourse analysis, has given me an interview. She recently spoke at the 2009 Congress of the Humanities and Social Sciences in Ottawa as a featured speaker (invited by the Canadian Association for the Study of Discourse and Writing). Kay is an Associate Professor in the Department of English Language and Literature at the National University of Singapore and the Director of the Multimodal Analysis Lab. (More details about Kay in future installments.)

Before going on to the introduction and the interview, I want to explain why I think this work is important. (Forgive me if I gush?) We have so much media coming at us at any one time and it is increasingly being ‘mashed up’, remixed, reused, and repurposed. How important is text going to be when we have icons and videos and audio materials to choose from? Take, for example, the bubble charts on Andrew Maynard’s 2020 blog, which are a means of representing science-related Twitter feeds. How do you interpret the information? Could they be used for in-depth analysis? (I commented earlier about the bubble charts on June 23 and 24, 2009, and Maynard’s post is here. You might also want to check out the comments, where Maynard explains a few things that puzzled me.)

As Kay points out in her responses to my questions, we have more to interpret than just a new type of chart or data visualization.

1. I was quite intrigued by the title of your talk (A Multimodal Approach to Discourse Studies: A paradigm with new research questions, agendas and directions for the digital age) at the 2009 Congress for the Humanities and Social Sciences held in Ottawa, Canada this May. Could you briefly describe a multimodal approach for people who aren’t necessarily in the field of education?

Traditionally, language has been studied in isolation, largely due to an emphasis on the study of printed linguistic texts and existing technologies such as print media, telephone and radio where language was the primary resource which was used. However, various forms of images, animations and videos form the basis for sharing information in the digital age, and thus it has become necessary to move beyond the study of language to understand contemporary communicative practices. In a sense, the study of language alone was never really sufficient because analysing what people wrote or said missed significant choices such as typography, layout and the images which appeared in the written texts, and the intonation, actions and gestures which accompanied spoken language. In addition, disciplinary knowledge (e.g. mathematics, science and social science disciplines) involves mathematical symbolism and various kinds of images, in addition to language. Therefore, researchers in language studies and education are moving beyond the study of language to multimodal approaches in order to investigate how linguistic choices combine with choices from other meaning-making resources.

Basically multimodal research explores the various roles which language, visual images, movement, gesture, sound, music and other resources play, and the ways those resources integrate across modalities (visual, auditory, tactile, olfactory etc) to create meaning in artefacts and events which form and transform culture. For example, the focus may be written texts, day-to-day interactions, internet sites, videos and films and 3-D objects and sites. In fact, one can think of knowledge and culture as specific choices from meaning-making resources which combine and unfold in patterns which are familiar to members of groups and communities.

Moreover, there is now explicit acknowledgement in educational research that disciplinary knowledge is multimodal and that literacy extends beyond language.

The shift to multimodal research has taken place as a result of digital media, which not only serves as the object of study, but also because digital media technologies offer new research tools to study multimodal texts. Such technologies have become available and affordable, and increasingly they are being utilised by multimodal researchers in order to make complex multimodal analysis possible. Lastly, scientists and engineers are increasingly looking to social scientists to solve important problems involving multimodal phenomena, for example, data analysis, search and retrieval, and human-computer interface design. Computer scientists and social scientists face similar problems in today’s world of digital media, and interdisciplinary collaboration is the promise of the future in what has become the age of information.

Have a nice weekend. There’ll be more of the interview next week, including a bibliography that Kay very kindly provided.

Sensing, nanotechnology and multimodal discourse analysis

Michael Berger has an interesting article on carbon nanotubes and how the act of observing them may cause damage. It’s part of the Nanowerk Spotlight series here,

A few days ago we ran a Nanowerk Spotlight (“Nanotechnology structuring of materials with atomic precision”) on a nanostructuring technique that uses an extremely narrow electron beam to knock individual carbon atoms from carbon nanotubes with atomic precision, a technique that could potentially be used to change the properties of the nanotubes. In contrast to this deliberately created defect, researchers are concerned about unintentional defects created by electron beams during examination of carbon nanomaterials with transmission electron microscopes like a high-resolution transmission electron microscope (HRTEM)

The concern is that electrons in the beam will accidentally knock an atom out of place. It was believed that lowering the beam energy to 80 kV would address the problem, but new research suggests that’s not the case.

If you go to Nanowerk to read more about this, you’ll find some images of what’s going on at the nanoscale. The images you see are not pictures per se; they are visual representations based on data sensed at the nanoscale. The microscopes used to gather the data are not optical: a transmission electron microscope forms its images from a beam of electrons rather than light, while other nanoscale instruments, such as scanning probe microscopes, sense by touch rather than sight. (If someone knows differently, please do correct me.) Scientists even have a term for interpreting this data: blobology.
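To make the point that these ‘images’ are renderings of measured data, here is a minimal sketch: a grid of raw signal intensities (standing in for whatever the instrument actually measures) mapped to grayscale pixel values. The readings are invented for illustration:

```python
# Minimal sketch of rendering sensed data as an image: normalize a
# 2-D grid of raw instrument readings to 0-255 grayscale pixels.
# The readings below are invented; real instrument data is far noisier.

def render_grayscale(grid):
    """Linearly rescale a 2-D grid of raw readings to 0-255 pixel values."""
    flat = [v for row in grid for v in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1  # avoid dividing by zero on a flat signal
    return [[round(255 * (v - lo) / span) for v in row] for row in grid]

readings = [
    [0.2, 0.8, 0.2],
    [0.8, 1.4, 0.8],   # a "bump" in the signal at the centre
    [0.2, 0.8, 0.2],
]
pixels = render_grayscale(readings)
# the centre reading (1.4, the maximum) becomes the brightest pixel (255)
```

Every choice in that mapping (the scale, the colour ramp, what counts as background) is an interpretive decision, which is exactly why terms like blobology exist.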

I’ve been reading up on these things and it’s gotten me to thinking about how we understand and interpret not just the macroworld that our senses let us explore but the micro/nano/pico/xxx scale worlds which we cannot sense directly. In that light, the work that Kay O’Halloran, an associate professor in English Language and Literature and the Director of the Multimodal Analysis Lab at the National University of Singapore, is doing in the area of multimodal discourse analysis looks promising. From her article in Visual Communication, vol. 7 (4),

Mathematics and science, for example, produce a new space of interpretance through mixed-mode semiosis, i.e. the use of language, visual imagery, and mathematical symbolism to create a new world view which extends beyond the possible using language. (p. 454)

Nano bubbles and other bubbles laced with salt

I’ve not heard of nanobubbles before, but apparently it is possible to form them from conventional microbubbles. Researchers in Japan have figured out how to make the nanobubbles more stable by using salt. Nanowerk has the media release, which includes a pretty graphic, here. Potential applications for nanobubbles include preventing arteriosclerosis, improving food preservation, and serving as cleaning agents.

Researchers have discovered that salt can be stretched physically. (That’s not what my science teachers told me!) The unexpected discovery may help researchers better understand sea salt aerosols which have been implicated in ozone depletion, smog formation, and as triggers for asthma. The full media release can be read here on Nanowerk News.

I mentioned the bubble charts on Andrew Maynard’s 2020 Science blog yesterday and noted that I have some difficulty fully understanding the information they convey. I’m much more comfortable with standard bar charts. I know how to read them and can tell if the information is being manipulated.

I noted that in Maynard’s screencast he describes them as “classic” bubble charts. I haven’t come across them before, but that doesn’t preclude their use in sectors that are not familiar to me. At any rate, it got me to thinking about a paper I just wrote called ‘Nanotechnology, storytelling, sensing, and materiality‘. In it I suggest that we will need modes other than the purely visual to understand nanotechnology (or science at quantum scales) and imply that we rely too much on the visual. Then yesterday I posted here that I think visual data will become increasingly important. My suspicion is that both are somewhat true and I think the answer lies in a multimodal approach. More about that tomorrow.