Tag Archives: Congress of the Humanities and Social Sciences

Human-Computer interfaces: flying with thoughtpower, reading minds, and wrapping a telephone around your wrist

This time I’ve decided to explore a few of the human/computer interface stories I’ve run across lately. So this posting is largely speculative and rambling as I’m not driving towards a conclusion.

My first item is a May 3, 2011 news item on physorg.com. It concerns an art installation at Rensselaer Polytechnic Institute, The Ascent. From the news item,

A team of Rensselaer Polytechnic Institute students has created a system that pairs an EEG headset with a 3-D theatrical flying harness, allowing users to “fly” by controlling their thoughts. The “Infinity Simulator” will make its debut with an art installation [The Ascent] in which participants rise into the air – and trigger light, sound, and video effects – by calming their thoughts.

I found a video of someone demonstrating this project:
http://blog.makezine.com/archive/2011/03/eeg-controlled-wire-flight.html

Please do watch:

I’ve seen this a few times and it still absolutely blows me away.

If you should be near Rensselaer on May 12, 2011, you could have a chance to fly using your own thoughtpower, a harness, and an EEG helmet. From the event webpage,

Come ride The Ascent, a playful mash-up of theatrics, gaming and mind-control. The Ascent is a live-action, theatrical ride experience created for almost anyone to try. Individual riders wear an EEG headset, which reads brainwaves, along with a waist harness, and by marshaling their calm, focus, and concentration, try to levitate themselves thirty feet into the air as a small audience watches from below. The experience is full of obstacles – as a rider ascends via the power of concentration, sound and light also respond to brain activity, creating a storm of stimuli that conspires to distract the rider from achieving the goal: levitating into “transcendence.” The paradox is that in order to succeed, you need to release your desire for achievement, and contend with what might be the biggest obstacle: yourself.

Theater Artist and Experience Designer Yehuda Duenyas (XXXY) presents his MFA Thesis project The Ascent, and its operating platform the Infinity System, a new user driven experience created specifically for EMPAC’s automated rigging system.

The Infinity System is a new platform and user interface for 3D flying which combines aspects of thrill-ride, live-action video game, and interactive installation.

Using a unique and intuitive interface, the Infinity System uses 3D rigging to move bodies creatively through space, while employing wearable sensors to manipulate audio and visual content.

Like a live-action stunt-show crossed with a video game, the user is given the superhuman ability to safely and freely fly, leap, bound, flip, run up walls, fall from great heights, swoop, buzz, drop, soar, and otherwise creatively defy gravity.

“The effect is nothing short of movie magic.” – Sean Hollister, Engadget

Here’s a brief description of the technology behind this ‘Ascent’ (from the news item on physorg.com),

Ten computer programs running simultaneously link the commercially available EEG headset to the computer-controlled 3-D flying harness and various theater systems, said Todd. [Michael Todd, a Rensselaer 2010 graduate in computer science]

Within the theater, the rigging – including the harness – is controlled by a Stage Tech NOMAD console; lights are controlled by an ION console running MIDI show control; sound through MAX/MSP; and video through Isadora and Jitter. The “Infinity Simulator,” a series of three C programs written by Todd, acts as intermediary between the headset and the theater systems, connecting and conveying all input and output.

“We’ve built a software system on top of the rigging control board and now have control of it through an iPad, and since we have the iPad control, we can have anything control it,” said Duenyas. “The ‘Infinity Simulator’ is the center; everything talks to the ‘Infinity Simulator.’”
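Setting aside the specific hardware, the architecture described here is a classic hub-and-spoke (publish/subscribe) design: one intermediary reads the headset and fans the value out to rigging, lights, sound, and video. Here’s a minimal Python sketch of that idea; the subsystem names, value ranges, and mappings are my own invention, not details of Todd’s actual C programs.

```python
# Hypothetical sketch of an EEG-to-theatre hub, loosely modeled on the
# description of the Infinity Simulator. All names and ranges are invented.

def scale(value, lo, hi):
    """Clamp a 0-100 EEG 'calm' score and map it onto the range [lo, hi]."""
    value = max(0, min(100, value))
    return lo + (hi - lo) * value / 100.0

class Hub:
    def __init__(self):
        self.subscribers = []  # callables, one per theatre subsystem

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, calm_score):
        """Fan one headset reading out to every subscribed subsystem."""
        for callback in self.subscribers:
            callback(calm_score)

log = []
hub = Hub()
# Rigging: a calmer rider rises higher (0 to 30 feet).
hub.subscribe(lambda c: log.append(("harness_feet", round(scale(c, 0, 30), 1))))
# Lights: a calmer rider gets brighter light (0-255 DMX-style intensity).
hub.subscribe(lambda c: log.append(("light_level", int(scale(c, 0, 255)))))

hub.publish(50)  # one middling calm score reaches both subsystems
```

The appeal of this design, and presumably why “we can have anything control it,” is that swapping the input (an iPad instead of an EEG headset) or adding an output only means changing one subscriber, not rewiring the whole system.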

This May 3, 2011 article (Mystery Man Gives Mind-Reading Tech More Early Cash Than Facebook, Google Combined) by Kit Eaton on Fast Company also concerns itself with a brain/computer interface. From the article,

Imagine the money that could be made by a drug company that accurately predicted and treated the onset of Alzheimer’s before any symptoms surfaced. That may give us an idea why NeuroVigil, a company specializing in non-invasive, wireless brain-recording tech, just got a cash injection that puts it at a valuation “twice the combined seed valuations of Google’s and Facebook’s first rounds,” according to a company announcement.

NeuroVigil’s key product at the moment is the iBrain, a slim device in a flexible head-cap that’s designed to be worn for continuous EEG monitoring of a patient’s brain function–mainly during sleep. It’s non-invasive, and replaces older technology that could only access these kinds of brain functions via surgically implanted electrodes actually on the brain itself. The idea is, first, to record how brain function changes over time, perhaps as a particular combination of drugs is administered or to help diagnose particular brain pathologies–such as epilepsy.

But the other half of the potentially lucrative equation is the ability to analyze the trove of data coming from iBrain. And that’s where NeuroVigil’s SPEARS algorithm enters the picture. Not only is the company simplifying collection of brain data with a device that can be relatively comfortably worn during all sorts of tasks–sleeping, driving, watching advertising–but the combination of iBrain and SPEARS multiplies the efficiency of data analysis [emphasis mine].
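NeuroVigil hasn’t published how SPEARS works, so the sketch below is emphatically not its algorithm; it just illustrates the standard first step in automated EEG analysis, which is measuring how a signal’s energy is distributed across the classic delta/theta/alpha/beta frequency bands. It uses a synthetic 10 Hz ‘alpha’ oscillation and a naive DFT so that it needs no external libraries:

```python
import math

FS = 128  # sampling rate in Hz (a plausible rate for ambulatory EEG)
N = 256   # two seconds of samples

# Synthetic signal: a pure 10 Hz oscillation, i.e. a textbook alpha rhythm.
signal = [math.sin(2 * math.pi * 10 * n / FS) for n in range(N)]

def band_power(x, f_lo, f_hi):
    """Sum of squared DFT magnitudes for bins with frequency in [f_lo, f_hi)."""
    power = 0.0
    for k in range(N // 2):
        freq = k * FS / N
        if f_lo <= freq < f_hi:
            re = sum(x[n] * math.cos(2 * math.pi * k * n / N) for n in range(N))
            im = -sum(x[n] * math.sin(2 * math.pi * k * n / N) for n in range(N))
            power += re * re + im * im
    return power

# The conventional clinical EEG bands, in Hz.
bands = {
    "delta": band_power(signal, 0.5, 4),
    "theta": band_power(signal, 4, 8),
    "alpha": band_power(signal, 8, 13),
    "beta":  band_power(signal, 13, 30),
}
dominant = max(bands, key=bands.get)
```

On real sleep EEG the interesting signal is how these band ratios shift over the night (slow-wave delta dominating deep sleep, for instance); an algorithm like SPEARS presumably layers far more sophisticated feature extraction on top of basics like this.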

I assume it’s the notion of combining the two technologies (iBrain and SPEARS) that spawned the ‘mind-reading’ part of this article’s title. The technology could be used for early detection and diagnosis, as well as other possibilities, as Eaton notes,

It’s also possible it could develop its technology into non-medicinal uses such as human-computer interfaces–in an earlier announcement, NeuroVigil noted, “We plan to make these kinds of devices available to the transportation industry, biofeedback, and defense. Applications regarding pandemics and bioterrorism are being considered but cannot be shared in this format.” And there’s even a popular line of kids’ toys that use an essentially similar technique, powered by NeuroSky sensors–themselves destined for future uses as games console controllers or even input devices for computers.

What these two technologies have in common is that, in some fashion or other, they have (shy of implanting a computer chip) a relatively direct interface with our brains, which means (to me anyway) a very different relationship between humans and computers.

In the next couple of items I’m going to profile two very similar technologies that allow for more traditional human/computer interactions, one of which I’ve posted about previously: the Nokia Morph (most recently in my Sept. 29, 2010 posting).

It was first introduced as a type of flexible phone with other capabilities. Since then, they seem to have elaborated on those capabilities. Here’s a description of what they now call the ‘Morph concept’ in a [ETA May 12, 2011: inserted correct link information] May 4, 2011 news item on Nanowerk,

Morph is a joint nanotechnology concept developed by Nokia Research Center (NRC) and the University of Cambridge (UK). Morph is a concept that demonstrates how future mobile devices might be stretchable and flexible, allowing the user to transform their mobile device into radically different shapes. It demonstrates the ultimate functionality that nanotechnology might be capable of delivering: flexible materials, transparent electronics and self-cleaning surfaces.

Morph will act as a gateway. It will connect the user to the local environment as well as the global internet. It is an attentive device that adapts to the context – it shapes according to the context. The device can change its form from rigid to flexible and stretchable. Buttons of the user interface can grow up from a flat surface when needed. Users will never have to worry about battery life. It is a device that will help us in our everyday life, to keep ourselves connected and in shape. It is one significant piece of a system that will help us to look after the environment.

Without the new materials, i.e. the new structures enabled by the novel materials and manufacturing methods, it would be impossible to build a Morph kind of device. Graphene has an important role in different components of the new device and the ecosystem needed to make the gateway and context awareness possible in an energy efficient way.

Graphene will enable the evolution of current technology, e.g. the continuation of ever-increasing computing power at the point where that performance would require sub-nanometer scale transistors, something conventional materials cannot deliver.

For someone who’s been following news of the Morph for the last few years, this news item doesn’t give you any new information. Still, it’s nice to be reminded of the Morph project. Here’s a video produced by the University of Cambridge that illustrates some of the project’s hopes for the Morph concept,

While the folks at the Nokia Research Centre and University of Cambridge have been working on their project, it appears the team at the Human Media Lab in the School of Computing at Queen’s University (Kingston, Ontario, Canada), in cooperation with teams from Arizona State University and E Ink Corporation, has been able to produce a prototype of something remarkably similar, albeit with fewer functions. The PaperPhone is being introduced at the Association of Computing Machinery’s CHI 2011 (Computer Human Interaction) conference in Vancouver, Canada next Tuesday, May 10, 2011.

Here’s more about it from a May 4, 2011 news item on Nanowerk,

The world’s first interactive paper computer is set to revolutionize the world of interactive computing.

“This is the future. Everything is going to look and feel like this within five years,” says creator Roel Vertegaal, the director of Queen’s University’s Human Media Lab. “This computer looks, feels and operates like a small sheet of interactive paper. You interact with it by bending it into a cell phone, flipping the corner to turn pages, or writing on it with a pen.”

The smartphone prototype, called PaperPhone, is best described as a flexible iPhone – it does everything a smartphone does, like store books, play music or make phone calls. But its display consists of a 9.5 cm diagonal thin film flexible E Ink display. The flexible form of the display makes it much more portable than any current mobile computer: it will shape with your pocket.
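The interaction problem hiding in that description is turning a continuous bend-sensor reading into discrete commands like ‘turn the page.’ The team’s actual paper studies which bend gestures users prefer; the thresholds, sign convention, and gesture names below are purely illustrative guesses, sketched in Python:

```python
# Hypothetical mapping from one bend-sensor reading to a navigation command.
# Assumed sign convention: positive = corner bent toward the user,
# negative = bent away; magnitude is an arbitrary 0-100 flex value.

def gesture(bend):
    """Classify one bend reading; all thresholds are invented for illustration."""
    if bend > 60:
        return "next_page"      # hard flip of the corner toward you
    if bend > 20:
        return "scroll_down"    # gentle bend toward you
    if bend < -60:
        return "previous_page"  # hard flip away from you
    if bend < -20:
        return "scroll_up"      # gentle bend away from you
    return "idle"               # inside the dead zone: treat as sensor noise

commands = [gesture(b) for b in (75, 30, -30, -75, 5)]
```

A real implementation would also need to debounce the sensor so that a single corner flip doesn’t register as two page turns.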

For anyone who knows the novel, it’s very Diamond Age (by Neal Stephenson). On a more technical note, I would have liked more information about the display’s technology. What is E Ink using? Graphene? Carbon nanotubes?

(That does not look like paper to me but I suppose you could call it ‘paperlike’.)

In reviewing all these news items, it seems to me there are two themes: the computer as bodywear and the computer as an extension of our thoughts. Both of these are more intimate relationships, the latter far more so than the former, than we’ve had with the computer till now. If any of you have any thoughts on this, please do leave a comment as I would be delighted to engage in some discussion about this.

You can get more information about the Association of Computing Machinery’s CHI 2011 (Computer Human Interaction) conference where Dr. Vertegaal will be presenting here.

You can find more about Dr. Vertegaal and the Human Media Lab at Queen’s University here.

The academic paper being presented at the Vancouver conference is here.

Also, if you are interested in the hardware end of things, you can check out E Ink Corporation, the company that partnered with the team from Queen’s and Arizona State University to create the PaperPhone. Interestingly, E Ink is a spin off company from the Massachusetts Institute of Technology (MIT).

Arts scholar in residence at National Institute of Nanotechnology: Heather Graves

Early in the new year, the University of Alberta announced the appointment of its first Scholar in Residence for Arts in Nanotechnology, Heather Graves (mentioned in my Jan. 19, 2011 posting). I contacted Dr. Graves for an interview which she very kindly gave. Before proceeding, here’s a little bit of biographical information (from the WRS webpage) about her [ETA Mar.11.11: photo and information about WRS webpage added],

Heather Graves is an Associate Professor of Writing Studies in the Department of English and Film Studies. She is the author of Rhetoric in(to) Science: Style as Invention in Inquiry (Cresskill, NJ: Hampton, 2005); co-editor with Roger Graves of Writing Centres, Writing Seminars, Writing Culture: Teaching Writing in Anglo-Canadian Universities (Winnipeg: Inkshed Publications, 2006) and Inkshed: Newsletter of the Canadian Association for the Study of Language and Learning; and co-author of the Canadian Edition of The Brief Penguin Handbook (Pearson/Longman, 2007) and A Strategic Guide to Technical Communication (Peterborough: Broadview, 2007). …

As co-president of the Canadian Association for the Study of Discourse and Writing (CASDW)/ L’Association canadienne de rédactologie (ACR) (formerly the Canadian Association of Teachers of Technical Writing (CATTW)/ L’Association canadienne de professeurs de rédaction technique et scientifique (ACPRTS)), she has served as program chair of the annual conference held at the Congress of the Humanities and Social Sciences.

Professor Heather Graves, Canada's new Arts Scholar in Residence in Nanotechnology (photo from WRS website)

Heather will be working with scientists at the National Institute of Nanotechnology (NINT) which is located in Edmonton at the University of Alberta. The interview starts here:

(a) I was thrilled to see that a ‘scholar in residence for arts research in nanotechnology’ had been appointed. How do you feel about the appointment?

It’s a real opportunity to be invited into a community of practicing scientists. A number of them have been quite generous with their time to help me with my project. I have worked with scientists before but this is the first time that the invitation came, basically, from them, rather than me inviting myself in. It is wonderful to learn new things and to extend my understanding of science and how science people use rhetoric and writing in their work and professional lives.

(b) I believe this is the first such appointment in Canada, is that right? Why was the position created?

I am not aware of any other such appointments (there is only one National Institute in Canada, but the various centres for nanotechnology being built at various Canadian universities could also follow suit). I think the position was created because someone at NINT wished to develop closer links between Arts and Science, specifically nanoscience/technology. The hope is that greater knowledge of what scientists are doing with their research in nanotechnology will get a bit more publicity through this position (it will get more play on campus for sure, and likely a bit more exposure to the broader public). The position is sponsored by the Vice President of Research here at U of A but I’m not exactly clear on where the money came from (to buy out my teaching for this term, to give me some development money with which I am employing a Graduate Research Assistant, and a modest travel budget to present a conference paper or two). I expect the university and the National Institute for Nanotechnology (NINT) are sharing the costs.

(c) What will you be doing as a ‘scholar in residence for arts research in nanotechnology’? (i. e., Are there deliverables for this project and what might they be?)

I am conducting a research project on language and writing in the work of scientists doing research in nanotechnology/nanoscience. There are several strands to the project: interviews with scientists about their research and about how they use writing in their professional work and how they teach writing to the graduate students who work with them; attending meetings between supervisors and their graduate students as they meet regularly to talk about their progress on individual experimental work; attending seminars by visiting researchers about their recent work; and analyzing drafts of research reports to identify the discursive conventions of the discipline, including the features of argument structure. My focus is on how scientists use language and writing to communicate about their research; how they understand the process of drafting a convincing argument for their interpretations of the research findings, and how they structure that argument; and how newcomers to the field acculturate into the norms and conventions of the discourse in this field. The discourse conventions of nanotechnology (as an emerging discipline) are still being negotiated: they evolve out of the collaborative efforts of the interdisciplinary scientists who work together on various projects, as well as between writers and editors for scholarly journals in nanotechnology. I’m interested in documenting, as far as possible, some of this negotiation from the scientists’ perspectives and from studies of their published (and in some cases draft) reports of research. This study also analyses the linguistic constructions that the scientists use to conceptualize and communicate the scientific phenomena that they are studying. Research on the nanoscale is mediated by both technology and language, making it a fascinating site for exploring how these mediations are translated into knowledge and eventually commercial products. 
I expect that these different strands of the project will result in a series of conference papers and then several academic articles or even a book-length manuscript on rhetoric and nanotechnology. I also expect that some of these insights will be valuable in writing textbooks on writing in disciplines other than Arts and Humanities. I may also write some articles on nanotechnology for more popular audiences.

(d) What aspects of your previous work are you bringing to this position (e.g., rhetorical function of visuals in science research and/or model of argumentation in scientific discourse)?

Much of the work that I’ve done earlier on rhetoric of science and on argument in the disciplines is relevant to this project. For example, many discussions of scientific phenomena take place based on visuals, so a better understanding of the relationship between the visuals and their rhetorical purpose is crucial to understanding the processes of knowledge creation engaged in by scientists. The visuals in science generally function as evidence supporting the claims made for new knowledge in the arguments constructed in oral presentations of work as well as in journal publications. These aspects tie in to my long-standing interest in argument in the disciplines and especially science-related disciplines. Since I also teach writing to first year science majors and to graduate students in science disciplines, this study will enable me to develop new and better teaching materials for these audiences of learners. So on a practical level this research project could well translate eventually into better instructional material for writers in science and better writers of scientific discourse in Canada.

(e) Do you have colleagues, i.e. other ‘scholars in residence for arts research in nanotechnology’, internationally and who might they be? In other words, how does this position fit within the international scene?

I am not aware of any other “scholars in residence for arts research in nanotechnology” elsewhere at this point. Please let me know if you encounter any more! I am working pretty much in isolation; of course it would be great to have colleagues to talk to who are in similar circumstances but when you are carving your own path it’s also freeing, in a way, because there is no standard procedure or approach. You can invent your project and its execution any way you want. This is generally how I have proceeded in the past because my area of interest (the study of the language and rhetoric/writing of working scientists) was sparsely populated by other scholars, especially from 1995 to the early 2000s. In the last five years or so, however, I have met a number of other rhetoric of science and writing in science scholars who are addressing some of the same issues.

Beyond the “Scholar in Residence . . .” title, however, I know there is significant interest in nanoscience and nanotechnology from many different types of people from both academic and more popular perspectives, but this collaboration between the University of Alberta and the National Institute for Nanotechnology does seem like a brand new idea. It certainly encourages interaction between two areas that don’t generally mix professionally, and it will be interesting to see what comes of this interaction in the long term, since the “Scholar in Residence for Arts Research in Nanotechnology” pilot project is slated to run for two more years after me and perhaps to be made permanent if it is deemed a success. I look forward to also hearing about subsequent research projects that follow mine. Perhaps other Centres for Nanotechnology across Canada and around the world might follow the lead here by University of Alberta and NINT. I certainly hope so.

(f) Is there anything you’d like to add?

I think many people have little idea about what is required to do this kind of research project successfully, at least from the perspective of the number of hours it takes. You do have to commit significant numbers of hours regularly over a period of time to get to know anyone in the community and to gain a reasonable level of understanding of the community. This means just hanging out for several hours a day as often as possible and collecting information as you hang out. The more information you collect the better you understand your area of study and the more data you have to work with, but processing all of this information also becomes a huge task. For example, sifting through interviews and research presentations and meeting transcripts takes a lot of time and energy. Transcribing digital recordings of key interchanges also takes time (although voice recognition software has improved immensely in the last few years, one still can’t devote one-third of a half-hour interview with a busy person to getting the technology up to speed). What I’m trying to say is that you cannot do this kind of research while also teaching a full load of classes; this type of research is only practical and possible if you have the luxury of time, which is what a program such as the Scholar in Residence for Arts Research in Nanotechnology provides. More people might conduct this kind of research if such a program were more widely available, but in the absence of this type of support other types of less time-intensive research have to be undertaken, changing the types of research questions that you can ask and re-directing to somewhere else the advance of knowledge from this area.

Thank you Heather. I look forward to hearing and reading more about your work as the project progresses. I wish you the best of luck with it.

Science and multimodal media approaches

There’s an interesting article on an experiment being conducted at Fortune magazine. For anyone who’s not aware, the publishing industry is in a serious quandary and many publishers are struggling for survival. This explains why Fortune magazine has a multimodal media version of its print cover story available on the web. From the article by Andrew Vanacore on the Physorg.com site here,

Dispensing advice on finding a job during a recession, the piece had a soundtrack, a troupe of improv actors from Chicago and about 4,000 fewer words than your average magazine feature. Instead of scrolling through a column of text, readers (if the term can be applied) flipped through nine pages that told the story with a mix of text, photo-illustrations, interactive graphics and video clips.

I like that bit about “readers (if the term can be applied)” because I’ve been coming to the conclusion that, with less and less text (think Twitter), we may be returning to a more oral society as opposed to our still literate-dominant society. I’ve been thinking about this since some time in the early 1990s when a communications professor (Paul Heyer) at Simon Fraser University first made the suggestion to us in class.

Following on this idea that we will be less and less text oriented, the work that Kay O’Halloran is doing at her Multimodal Lab (situated at the National University of Singapore) casts an interesting light on where this all may be going with regard to science communication. An associate professor in the Dept. of English Language and Literature, O’Halloran is speaking tomorrow (in Ottawa, Canada) at the 2009 Congress of the Humanities and Social Sciences about reading, mathematics, and digital media. I hope there will be a webcast of her talk available afterwards (I suggested it to the folks from the Canadian Association for the Study of Discourse and Writing (CASDW) who are sponsoring her talk). If there is a webcast, I’ll post a link.

Meanwhile, for those of us not lucky enough to be there, from the programme,

To understand digital texts we need theories that study more than words alone. This talk will show how images, mathematical and scientific symbols, gestures, actions, music, and sound can all be studied along with words using examples from the classroom, digital media, and mathematics.

I believe that more and more of our communication, science and otherwise, is moving in a multimodal direction. It seems so obvious to me that it surprises me that it’s not commonly accepted wisdom.

Later this week, I will have more about science funding and notice of another synthetic biology event coming up at the Project on Emerging Nanotechnologies.