Tag Archives: musings

Human-Computer interfaces: flying with thoughtpower, reading minds, and wrapping a telephone around your wrist

This time I’ve decided to explore a few of the human/computer interface stories I’ve run across lately. So this posting is largely speculative and rambling as I’m not driving towards a conclusion.

My first item is a May 3, 2011 news item on physorg.com. It concerns an art installation at Rensselaer Polytechnic Institute, The Ascent. From the news item,

A team of Rensselaer Polytechnic Institute students has created a system that pairs an EEG headset with a 3-D theatrical flying harness, allowing users to “fly” by controlling their thoughts. The “Infinity Simulator” will make its debut with an art installation [The Ascent] in which participants rise into the air – and trigger light, sound, and video effects – by calming their thoughts.

I found a video of someone demonstrating this project; please do watch:

http://blog.makezine.com/archive/2011/03/eeg-controlled-wire-flight.html

I’ve seen this a few times and it still absolutely blows me away.

If you should be near Rensselaer on May 12, 2011, you could have a chance to fly using your own thoughtpower, a harness, and an EEG helmet. From the event webpage,

Come ride The Ascent, a playful mash-up of theatrics, gaming and mind-control. The Ascent is a live-action, theatrical ride experience created for almost anyone to try. Individual riders wear an EEG headset, which reads brainwaves, along with a waist harness, and by marshaling their calm, focus, and concentration, try to levitate themselves thirty feet into the air as a small audience watches from below. The experience is full of obstacles-as a rider ascends via the power of concentration, sound and light also respond to brain activity, creating a storm of stimuli that conspires to distract the rider from achieving the goal: levitating into “transcendence.” The paradox is that in order to succeed, you need to release your desire for achievement, and contend with what might be the biggest obstacle: yourself.

Theater Artist and Experience Designer Yehuda Duenyas (XXXY) presents his MFA Thesis project The Ascent, and its operating platform the Infinity System, a new user driven experience created specifically for EMPAC’s automated rigging system.

The Infinity System is a new platform and user interface for 3D flying which combines aspects of thrill-ride, live-action video game, and interactive installation.

Using a unique and intuitive interface, the Infinity System uses 3D rigging to move bodies creatively through space, while employing wearable sensors to manipulate audio and visual content.

Like a live-action stunt-show crossed with a video game, the user is given the superhuman ability to safely and freely fly, leap, bound, flip, run up walls, fall from great heights, swoop, buzz, drop, soar, and otherwise creatively defy gravity.

“The effect is nothing short of movie magic.” – Sean Hollister, Engadget

Here’s a brief description of the technology behind this ‘Ascent’ (from the news item on physorg.com),

Ten computer programs running simultaneously link the commercially available EEG headset to the computer-controlled 3-D flying harness and various theater systems, said Todd. [Michael Todd, a Rensselaer 2010 graduate in computer science]

Within the theater, the rigging – including the harness – is controlled by a Stage Tech NOMAD console; lights are controlled by an ION console running MIDI show control; sound through MAX/MSP; and video through Isadora and Jitter. The “Infinity Simulator,” a series of three C programs written by Todd, acts as intermediary between the headset and the theater systems, connecting and conveying all input and output.

“We’ve built a software system on top of the rigging control board and now have control of it through an iPad, and since we have the iPad control, we can have anything control it,” said Duenyas. “The ‘Infinity Simulator’ is the center; everything talks to the ‘Infinity Simulator.’”
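For the technically inclined, here’s a minimal sketch (in Python, rather than the C the team actually used) of the hub pattern Todd and Duenyas describe: one intermediary reads a calm/focus score from the headset and fans it out to the rigging, lighting, sound, and video systems. All of the function and device names below are my own invention for illustration; the real Infinity Simulator talks to a Stage Tech NOMAD console, MIDI show control, MAX/MSP, Isadora, and Jitter as described above.

# Hypothetical sketch of the hub pattern described above. Function and
# device names are invented for illustration only.
import time

MAX_HEIGHT_METRES = 9.0  # roughly the thirty feet mentioned in the event blurb

def read_calm_score() -> float:
    """Stub for the EEG headset driver: returns a 0.0-1.0 calm/focus value."""
    return 0.5  # placeholder; a real driver would poll the headset here

def send_rig_height(metres: float) -> None:
    """Stub for the rigging interface (the NOMAD console in the real system)."""
    print(f"rig -> target height {metres:.2f} m")

def send_show_cues(intensity: float) -> None:
    """Stub for the lighting/sound/video cues keyed to the same score."""
    print(f"cues -> distraction intensity {intensity:.2f}")

def run_hub(iterations: int = 10, poll_seconds: float = 0.1) -> None:
    """Central loop: everything talks to this hub, as Duenyas describes."""
    for _ in range(iterations):
        calm = read_calm_score()
        send_rig_height(calm * MAX_HEIGHT_METRES)  # the calmer the rider, the higher the rig
        send_show_cues(1.0 - calm)                 # distractions ramp up as calm drops
        time.sleep(poll_seconds)

if __name__ == "__main__":
    run_hub()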

This May 3, 2011 article (Mystery Man Gives Mind-Reading Tech More Early Cash Than Facebook, Google Combined) by Kit Eaton on Fast Company also concerns itself with a brain/computer interface. From the article,

Imagine the money that could be made by a drug company that accurately predicted and treated the onset of Alzheimer’s before any symptoms surfaced. That may give us an idea why NeuroVigil, a company specializing in non-invasive, wireless brain-recording tech, just got a cash injection that puts it at a valuation “twice the combined seed valuations of Google’s and Facebook’s first rounds,” according to a company announcement.

NeuroVigil’s key product at the moment is the iBrain, a slim device in a flexible head-cap that’s designed to be worn for continuous EEG monitoring of a patient’s brain function–mainly during sleep. It’s non-invasive, and replaces older technology that could only access these kind of brain functions via critically implanted electrodes actually on the brain itself. The idea is, first, to record how brain function changes over time, perhaps as a particular combination of drugs is administered or to help diagnose particular brain pathologies–such as epilepsy.

But the other half of the potentially lucrative equation is the ability to analyze the trove of data coming from iBrain. And that’s where NeuroVigil’s SPEARS algorithm enters the picture. Not only is the company simplifying collection of brain data with a device that can be relatively comfortably worn during all sorts of tasks–sleeping, driving, watching advertising–but the combination of iBrain and SPEARS multiplies the efficiency of data analysis [emphasis mine].

I assume it’s the notion of combining the two technologies (iBrain and SPEARS) that spawned the ‘mind-reading’ part of this article’s title. The technology could be used for early detection and diagnosis, as well as other possibilities, as Eaton notes,

It’s also possible it could develop its technology into non-medicinal uses such as human-computer interfaces–in an earlier announcement, NeuroVigil noted, “We plan to make these kinds of devices available to the transportation industry, biofeedback, and defense. Applications regarding pandemics and bioterrorism are being considered but cannot be shared in this format.” And there’s even a popular line of kid’s toys that use an essentially similar technique, powered by NeuroSky sensors–themselves destined for future uses as games console controllers or even input devices for computers.

What these two technologies have in common is that, in some fashion or other, they have (shy of implanting a computer chip) a relatively direct interface with our brains, which means (to me anyway) a very different relationship between humans and computers.

In the next couple of items I’m going to profile two technologies that are very similar to each other and that allow for more traditional human/computer interactions; I’ve posted about one of them, the Nokia Morph, previously (most recently in my Sept. 29, 2010 posting).

It was first introduced as a type of flexible phone with other capabilities. Since then, they seem to have elaborated on those capabilities. Here’s a description of what they now call the ‘Morph concept’ in a [ETA May 12, 2011: inserted correct link information] May 4, 2011 news item on Nanowerk,

Morph is a joint nanotechnology concept developed by Nokia Research Center (NRC) and the University of Cambridge (UK). Morph is a concept that demonstrates how future mobile devices might be stretchable and flexible, allowing the user to transform their mobile device into radically different shapes. It demonstrates the ultimate functionality that nanotechnology might be capable of delivering: flexible materials, transparent electronics and self-cleaning surfaces.

Morph, will act as a gateway. It will connect the user to the local environment as well as the global internet. It is an attentive device that adapts to the context – it shapes according to the context. The device can change its form from rigid to flexible and stretchable. Buttons of the user interface can grow up from a flat surface when needed. User will never have to worry about the battery life. It is a device that will help us in our everyday life, to keep our self connected and in shape. It is one significant piece of a system that will help us to look after the environment.

Without the new materials, i.e. new structures enabled by the novel materials and manufacturing methods it would be impossible to build Morph kind of device. Graphene has an important role in different components of the new device and the ecosystem needed to make the gateway and context awareness possible in an energy efficient way.

Graphene will enable evolution of the current technology e.g. continuation of the ever increasing computing power when the performance of the computing would require sub nanometer scale transistors by using conventional materials.

For anyone who’s been following news of the Morph for the last few years, this news item doesn’t offer any new information. Still, it’s nice to be reminded of the Morph project. Here’s a video produced by the University of Cambridge that illustrates some of the project’s hopes for the Morph concept,

http://www.youtube.com/watch?v=PKihhDC7-bI

While the folks at the Nokia Research Centre and University of Cambridge have been working on their project, it appears the team at the Human Media Lab at the School of Computing at Queen’s University (Kingston, Ontario, Canada), in cooperation with a team from Arizona State University and E Ink Corporation, has been able to produce a prototype of something remarkably similar, albeit with fewer functions. The PaperPhone is being introduced at the Association for Computing Machinery’s CHI 2011 (Computer Human Interaction) conference in Vancouver, Canada next Tuesday, May 10, 2011.

Here’s more about it from a May 4, 2011 news item on Nanowerk,

The world’s first interactive paper computer is set to revolutionize the world of interactive computing.

“This is the future. Everything is going to look and feel like this within five years,” says creator Roel Vertegaal, the director of Queen’s University Human Media Lab. “This computer looks, feels and operates like a small sheet of interactive paper. You interact with it by bending it into a cell phone, flipping the corner to turn pages, or writing on it with a pen.”

The smartphone prototype, called PaperPhone, is best described as a flexible iPhone – it does everything a smartphone does, like store books, play music or make phone calls. But its display consists of a 9.5 cm diagonal thin film flexible E Ink display. The flexible form of the display makes it much more portable than any current mobile computer: it will shape with your pocket.

For anyone who knows the novel, it’s very Diamond Age (by Neal Stephenson). On a more technical note, I would have liked more information about the display’s technology. What is E Ink using? Graphene? Carbon nanotubes?

(That does not look like paper to me but I suppose you could call it ‘paperlike’.)

In reviewing all these news items, it seems to me there are two themes: the computer as bodywear and the computer as an extension of our thoughts. Both represent a more intimate relationship with the computer than we’ve had till now, the latter far more so than the former. If any of you have any thoughts on this, please do leave a comment as I would be delighted to engage in some discussion about this.

You can get more information about the Association for Computing Machinery’s CHI 2011 (Computer Human Interaction) conference, where Dr. Vertegaal will be presenting, here.

You can find more about Dr. Vertegaal and the Human Media Lab at Queen’s University here.

The academic paper being presented at the Vancouver conference is here.

Also, if you are interested in the hardware end of things, you can check out E Ink Corporation, the company that partnered with the team from Queen’s and Arizona State University to create the PaperPhone. Interestingly, E Ink is a spin-off company from the Massachusetts Institute of Technology (MIT).

Women in nanoscience and other sciences too

Last week, three women were honoured for their work in nanoscience with L’Oréal Singapore for Women in Science Fellowships (from the news item on Nanowerk),

In its second year, the Fellowships is organised with the support of the Singapore National Commission for UNESCO and in partnership with the Agency for Science, Technology and Research (A*STAR). The Fellowships aim to recognise the significant contribution of talented women to scientific progress, encourage young women to pursue science as a career and promote their effective participation in the scientific development of Singapore.

The three outstanding women were awarded fellowships worth S$20,000 to support them in their doctorate or post-doctorate research. This year’s National Fellows are:

– Dr. Low Hong Yee, 2010 L’Oréal Singapore For Women in Science National Fellow and Senior Scientist at A*STAR’s Institute of Materials Research and Engineering. Her work in nanoimprint technology, an emerging technique in nanotechnology, focuses on eco solutions and brings to reality the ability to mimic and apply on synthetic surfaces the structure found in naturally occurring exteriors or skin such as the iridescent colours of a butterfly’s wings or the water-proofing of lotus leaves. This new development offers an eco-friendly, non-chemical method to improve the properties and functionalities of common plastic film.

– Dr. Madhavi Srinivasan, 2010 L’Oréal Singapore For Women in Science National Fellow and Assistant Professor at the Nanyang Technological University. Dr Srinivasan seeks to harness the power of nanoscale materials for the answer to the future of energy storage. Such technologies are vital for the future of a clean energy landscape. Its applications include powering electric vehicles, thus reducing overall CO2 emission, and reducing global warming or enhancing renewable energy sources (solar/wind), thus reducing pollution and tapping on alternative energy supplies.

– Dr. Yang Huiying, 2010 L’Oréal Singapore For Women in Science National Fellow and Assistant Professor at Singapore University of Technology and Design. Dr Yang’s fascination with the beauty of the nano-world prompted her research into the fabrication of metal oxide nanostructures, investigation of their optical properties, and the development of nanophotonics devices. These light emitting devices will potentially be an answer to the need for energy-saving and lower cost display screens, LED bulbs, TV and DVD players etc.

This announcement reminded me of a question I occasionally ask myself: why aren’t there more women mentioned prominently in the nanotechnology/nanoscience narratives? There are a few (the ones I’ve heard of are from the US: Christine Peterson/Foresight Institute; Mildred Dresselhaus, advisor to former US Pres. Bill Clinton; Kristen Kulinowski/Rice University and the Good Nano Guide; please let me know of any others that should be added to this list), just not as many as I would have expected.

On a somewhat related note, there was this blog post by one of the co-authors of the article, The Internet as a resource and support network for diverse geoscientists, which focused largely on women,

In the September issue of GSA Today, you can find our article on The Internet as a resource and support network for diverse geoscientists. We wrote the article with the idea of reaching beyond the audience that already reads blogs (or attends education/diversity sessions at GSA), with the view that we might be able to open some eyes as to why time spent on-line reading and writing blogs and participating in Twitter might be a valuable thing for geoscientists to be doing. And, of course, we had some data to support our assertions.

As a white woman geoscientist in academia, I have definitely personally and professionally benefited from my blog reading and writing time. (I even have a publication to show for it!) But I would love to hear more from minority and outside-of-academia geoscientists about what blogs, Twitter, and other internet-based forms of support could be doing to better support you. As you can see from the paragraph above, what we ended up advocating was that institutional support for blogging and blog-reading would help increase participation. We thought that, with increased participation, more minority and outside-of-academia geosciences voices would emerge, helping others find support, community, role models, and mentoring in voices similar to their own. Meanwhile those of us closer to the white/academic end of the spectrum could learn from all that a diverse geoscientist community has to offer.

The 2-page article is open access and can be found here.

Meanwhile, women in technology should be taking this tack according to an article by Allyson Kapin on the Fast Company website,

We have a rampant problem in the tech world. It’s called the blame game. Here’s how it works. You ask the question, “Why aren’t there enough women in tech or launching startups?” From some you get answers like, “Because it’s an exclusive white boys club.” But others say, “Not true! It’s because women don’t promote their expertise enough and they are more risk averse.” How can we truly address the lack of women in tech and startups and develop realistic solutions if we continue to play this silly blame game?

Yesterday, Michael Arrington of TechCrunch wrote a blog post saying, “It doesn’t matter how old you are, what sex you are, what politics you support or what color you are. If your idea rocks and you can execute, you can change the world and/or get really, stinking rich.”

That’s a nice idea and if it were true then the amount of wealthy entrepreneurs would better match our population’s racial and gender demographics. The fact remains that in 2009 angel investors dished out $17.6 billion to fund startups. Wonder how many funded startups were women-run? 9.4%, according to the 2009 angel investor report from Center for Venture Research at University of New Hampshire. And only 6% of investor money funded startups run by people of color.

Yet Arrington says it’s because women just don’t want it enough and that he is sick and tired of being blamed for it. He also says TechCrunch has “beg[ged] women to come and speak” and participate in their events and reached out to communities but many women still decline.

Unfortunately, the article presents two different ideas (thank you, Allyson Kapin, for refuting Arrington’s thesis) without relating them to each other. First, there is a ‘blame game’ which isn’t getting anyone anywhere; second, there are issues with getting women to speak on technology panels. There are some good suggestions in the article for how to deal with the second problem, while the first is left to rest.

Kapin is right: the blame game doesn’t work in anyone’s favour, but then we have to develop some alternatives. I have something here from Science Cheerleader which offers a stereotype-breaking approach to dealing with some of the issues that women in science confront. Meet Christine,

Meet Christine (image found on sciencecheerleader.com)

Meet Erica,

Meet Erica (image found on sciencecheerleader.com)

One of these women is a software engineer and the other is a biomedical engineer.  Do visit Science Cheerleader to figure out which woman does what.

Changing the way women are perceived is a slow and arduous process and requires a great number of strategies along with the recognition that the strategies have to be adjusted as the nature of the prejudice/discrimination also changes in response to the strategies designed to counter it in the first place.  For example, efforts like the L’Oréal fellowships for women have been described as reverse-discrimination since men don’t have access to the awards by reason of their gender while standard fellowship programmes are open to all. It’s true the programmes are open to all but we need to use a variety of ways (finding speakers for panels, special financial awards programmes, stereotype-breaking articles, refuting an uninformed statement, etc.) to encourage greater participation by women and the members of other groups that have traditionally not been included. After all, there’s a reason why most of the prominent Nobel science prize winners  are white males and it’s not because they are naturally better at science.

Oil in the Gulf of Mexico, science, and not taking sides

Linda Hooper-Bui is a professor in Louisiana who studies insects. She’s also one of the scientists who have been denied access to areas of the Gulf of Mexico wetlands that are usually freely accessible. She and her students want to gather data for examination about the impact that the oil spill has had on the insect populations. BP Oil and the US federal government are going to court over the oil spill and both sides want scientific evidence to buttress their respective cases. Scientists wanting access to areas controlled by either of the parties are required to sign nondisclosure agreements (NDAs) by either BP Oil or the Natural Resource Damage Assessment federal agency. The NDAs extend not just to the publication of data but also to informal sharing.

From the article by Hooper-Bui in The Scientist,

The ants, crickets, flies, bees, dragon flies, and spiders I study are important components of the coastal food web. They function as soil aerators, seed dispersers, pollinators, and food sources in complex ecosystems of the Gulf.

Insects were not a primary concern when oil was gushing into the Gulf, but now they may be the best indicator of stressor effects on the coastal northern Gulf of Mexico. Those stressors include oil, dispersants, and cleanup activities. If insect populations survive, then frogs, fish, and birds will survive. If frogs, fish, and birds are there, the fishermen and the birdwatchers will be there. The Gulf’s coastal communities will survive. But if the bugs suffer, so too will the people of the Gulf Coast.

This is why my continued research is important: to give us an idea of just how badly the health of the Gulf Coast ecosystems has been damaged and what, if anything, we can do to stave off a full-blown ecological collapse. But I am having trouble conducting my research without signing confidentiality agreements or agreeing to other conditions that restrict my ability to tell a robust and truthful scientific story.

I want to collect data to answer scientific questions absent a corporate or governmental agenda. I won’t collect data specifically to support the government’s lawsuit against BP nor will I collect data only to be used in BP’s defense. Whereas I think damage assessment is important, it’s my job to be independent — to tell an accurate, unbiased story. But because I choose not to work for BP’s consultants or NRDA, my job is difficult and access to study sites is limited.

Hooper-Bui goes on to describe a situation where she and her students had to surrender samples to a US Fish and Wildlife officer because their project (on public lands, which therefore should have been freely accessible) had not been approved. Do read the article before it disappears behind a paywall but, if you prefer, you can listen to a panel discussion with her and colleagues Christopher D’Elia and Cary Nelson on the US National Public Radio (NPR) website, here. One of the people who calls in to the show is another professor, this one from Texas, who has the same problem collecting data. He too refused to sign any NDAs. One group of nonaligned scientists has been able to get access and that’s largely because they acted before the bureaucracy snapped into place. They got permission (without having to sign NDAs) while the federal bureaucracy was still organizing itself in the early days of the spill.

These practices are antithetical to the practice of science. Meanwhile, the contrast between this situation and the move to increase access and make peer review a more open process (in my August 20, 2010 posting) could not be more glaring. Very simply, the institutions want more control while the grassroots science practitioners want a more open environment in which to work.

Hooper-Bui comments on NPR that she views her work as public service. It’s all that and more; it’s global public service.

What happens in the Gulf over the next decades will have a global impact. For example, there’s a huge colony of birds that make their way from the Gulf of Mexico to the Gaspé Peninsula in Québec for the summer returning to the Gulf in the winter.  They should start making their way back in the next few months. Who knows what’s going to happen to that colony and the impact this will have on other ecosystems?

We need policies that protect scientists and ensure, as much as possible, that their work be conducted in the public interest.

ASME’s introductory nanotechnology podcast doesn’t mention the word billionth

It’s a landmark moment: I have never before come across an introductory nanotechnology presentation that makes no reference to ‘billionth’, as in, a nanometre is one billionth of a metre.

The American Society of Mechanical Engineers, now known as ASME, offers a series of podcasts about nanotechnology on its website. This page is where you can sign up to get free access. (You might want to take a look at that agreement before submitting it. More about that later.) I saw the first installment on Andrew Maynard’s 2020 Science blog here. Andrew is prominently featured in this first podcast.

I enjoyed the podcast and found this new approach to introducing nanotechnology quite intriguing and I suspect they’re going in the right direction. 1 billionth of a metre or of a second doesn’t really convey that much information for most of us. Personally, I visualize the existence of alternate realities, tiny worlds of atoms and molecules which I believe to be present but are not perceptible to me through my senses.

It’s been decades since I first saw a representation of an atom or a molecule but the resemblance to planets has often played in my imagination since. They will always be planets for me, regardless of the fact that more accurate representations exist than the ones I saw so many years ago.

I think it’s the poetic aspect of it all, as if we carry worlds within us while our own planet may be simply an atom in someone else’s universe. One of these days when I have a better handle on what I’m trying to say here,  I will write a poem about it.

Actually, I’ve been meaning to do a series of poems based on the periodic table of elements ever since I saw a revisioning of the periodic table, The Chemical Galaxy by Philip Stewart. The desire was reawakened recently on finding Sam Kean’s series Blogging the Periodic Table, for Slate Magazine. From Kean’s first entry,

I’m blogging about the periodic table this month in conjunction with my new book, The Disappearing Spoon: And Other True Tales of Madness, Love, and the History of the World From the Periodic Table of the Elements. Now, I know not everyone has fond memories of the periodic table, but it got to me early—thanks to one element, mercury. I used to break those old-fashioned mercury thermometers all the time as a kid (accidentally, I swear), and I was always fascinated to see the little balls of liquid metal rolling around on the floor. My mother used to sweep them up with a toothpick, and we kept a jar with a pecan-size glob of all the mercury from all the broken thermometers on a knickknack shelf in our house.

But what really reinforced my love of mercury—and got me interested in the periodic table as a whole—was learning about all the places that mercury popped up in history. Lewis and Clark hauled 600 mercury-laced laxative tablets with them when they explored the interior of America—historians have tracked down some places where they stayed based on deposits in the soil. The so-called mad hatters (like the one in Alice in Wonderland) went crazy because of the mercury in the vats in which they cleaned fur pelts.

Mercury made me see how many different areas of life the periodic table intersects with, and I wrote The Disappearing Spoon because I realized that you can say the same about every single element on the table. There are hidden tales about familiar elements like gold, carbon, and lead and even obscure elements like tellurium and molybdenum have wonderful, often wild back stories.

There are eight more entries as of 11:25 am PST, July 15, 2010. I wish Kean good luck as he sells his book. By the way, he’ll be blogging until early August 2010.

Getting back to ASME and its nanotechnology podcasts: I haven’t signed up and am not sure I will. They are asserting copyright in their user agreement (link to page),

Copyrights. All rights, including copyright and database right, in this Site and its contents (including, but not limited to, all text, images, software, video clips, audio clips) (collectively, “Content”), are owned by the American Society of Mechanical Engineers (ASME), or otherwise used by ASME as permitted by applicable law or agreement.

Content Displayed on the Website. User shall not remove, obscure or alter the Content. User shall not distribute, rent, lease, transfer or otherwise make the Content available to any third party, or use the Content for systematic downloading, and/or the making of print or electronic copies for transmission to non-subscribers. User may download only the video clips designated on the Website as downloadable and may not share video URLs with non-subscribers. [emphases mine]

If I read those passages correctly, I’m prevented from copying any portion of the materials from their website and reproducing them on this blog for nonsubscribers. (I trust reproducing portions of their ‘user agreement’ won’t land me in trouble.) Since I copy and excerpt with a very high rate of frequency (being careful to give attribution and links while excerpting portions only), I don’t want to be placed in the position of having to ask for permission each and every time I’d like to copy something from the ASME site. A lot of my entries are timely, so I don’t want to wait and, frankly, I don’t understand what their problem with activities such as mine might be. I suspect that this agreement will prove overly restrictive and I hope the ASME folks will reconsider their approach to copyright. I really would like to view a few of their podcasts.

Comments on the Golden Triangle workshop for PCAST’s PITAC

I didn’t catch the entire webcast as it was live streaming but what I caught was fascinating to observe. For those who don’t know, PCAST is the US President’s Council of Advisors on Science and Technology and PITAC is the President’s Innovation and Technology Advisory Committee. This morning they held a workshop mentioned in yesterday’s posting here that was focused on innovation in the US regarding information technology, nanotechnology, and biotechnology (the Golden Triangle). You can go to the PCAST website for information about this morning’s workshop and hopefully find a copy of the webcast once they’ve posted it.

A few items from the webcast caught my attention, such as a comment by Judith Estrin (invitee and businesswoman). She talked about a laboratory gap (aka the valley of death) while referencing the loss of large industrial labs such as Bell Labs, where, as of Aug. 2008, the focus shifted from basic science to more easily commercialized applications.

I think there’s a significant difference between doing basic research in an academic environment and doing it in an industrial environment. I believe what Estrin is referencing is the support an industrial laboratory can offer a scientist who wants to pursue an avenue of basic research which might not find initial support within the academic structure and/or ongoing support as it makes its arduous way to commercialization.

With the loss of a number of large laboratories, start-up companies are under pressure to fill the gap but they have a big problem trying to support that interstitial space between basic research and applied research as they don’t have sufficient capitalization.

The similarity to the Canadian situation with its lack of industrial laboratories really caught my attention.

Franco Vitiliano, President and CEO of ExQor Technologies Inc., reiterated a point made earlier and afterwards about the interdisciplinary nature of the work and the difficulty of operating in a business environment that is suspicious and/or fails to understand that kind of work. I was captivated by his story about bio-nanolasers and how these were developed from observations made about water drops.

Anita Goel, Chairman and CEO of Nanobiosym Inc., noted that another problem with financing lies with the current financial models, which are increasingly short-term focused and risk-averse. As well, the current venture capital model is designed to support one technology application for one market. This presents a problem for the interdisciplinary work currently taking place in the biotechnology, nanotechnology, and information technology fields, where applications are being considered for multiple markets.

There were many astute and interesting speakers. I can’t always remember who said what and sometimes I couldn’t see the person’s placard so I apologize if I’ve wrongly attributed some of the comments. If someone could correct me, I’d be more than happy to edit the changes in.

I was surprised that there were no individuals from the venture capital community or representatives from some of the large companies such as HP Labs, IBM, etc. Most of the start-ups represented at the meeting came from the biomedical sector. I did not hear anyone discuss energy, clean water, site remediation, or other such applications. As far as I could tell there weren’t any nongovernmental agencies present either. Nonetheless, it was a very crowded table and I imagine that more people would have necessitated a much longer session.

I found the webcast stimulating, but the acid test for this meeting and others of its type is always whether or not action is taken.

As for the Canadian situation with its ‘innovation gap’, there’s more in Rob Annan’s posting, Research policy odds and sods, where he highlights a number of recent articles about Canadian innovation laced with some of his observations. It’s a good roundup of the latest and I encourage you to check it out.

ETA June 23 2010: Dexter Johnson at Nanoclast offers his thoughts on the webcast and notes that while the promotional material suggested a discussion about public engagement, the workshop itself was focused on the ‘innovation gap’. He highlights comments from speakers I did not mention, as well as some of the questions received via Facebook and Twitter. For someone who doesn’t have the time to sit through the webcast, I strongly suggest that you check out Dexter’s posting as he adds insight borne of more intimate knowledge than mine of the US situation.

Interacting with stories and/or with data

A researcher, Ivo Swartjes, at the University of Twente in The Netherlands is developing a means of allowing viewers to enter into a story (via avatar) and affect the plotline, in what seems like a combination of what you’d see in Second Life and gaming. The project also brings to mind The Diamond Age by Neal Stephenson and its intelligent nanotechnology-enabled book, along with Stephenson’s latest publishing project, Mongoliad (which I blogged about here).

The article about Swartjes’ project on physorg.com by Rianne Wanders goes on to note,

The ‘Virtual Storyteller’, developed by Ivo Swartjes of the University of Twente, is a computer-controlled system that generates stories automatically. Soon it will be possible for you as a player to take on the role of a character and ‘step inside’ the story, which then unfolds on the basis of what you as a player do. In the gaming world there are already ‘branching storylines’ in which the gamer can influence the development of a story, but Swartjes’ new system goes a step further. [emphasis mine] The world of the story is populated with various virtual figures, each with their own emotions, plans and goals. ‘Rules’ drawn up in advance determine the characters’ behaviour, and the story comes about as the different characters interact.

There’s a video with the article if you want to see this project for yourself.

On another related front, Cliff Kuang, in an article (The Genius Behind Minority Report’s Interfaces Resurfaces, With Mind-blowing New Tech) on the Fast Company site, describes a new human-computer interface. This story provides a contrast to the one about the ‘Virtual Storyteller’ because this time you don’t have to become an avatar to interact with the content. From the article,

It’s a cliche to say that Minority Report-style interfaces are just around the corner. But not when John Underkoffler [founder of Oblong Industries] is involved. As tech advisor on the film, he was the guy whose work actually inspired the interfaces that Tom Cruise used. The real-life system he’s been developing, called g-speak, is unbelievable.

Oblong hasn’t previously revealed most of the features you see in the latter half of the video [available in the article’s web page or on YouTube], including the ability to zoom in and fly through a virtual, 3-D image environment (6:30); the ability to navigate an SQL database in 3-D (8:40); the gestural wand that lets you manipulate and disassemble 3-D models (10:00); and the stunning movie-editing system, called Tamper (11:00).

Do go see the video. At one point, Underkoffler (who was speaking at the February 2010 TED) drags data from the big screen in front of him onto a table set up on the stage where he’s speaking.

Perhaps most shocking (at least for me) was the information that this interface is already in use commercially (probably in a limited way).

These developments and many others suggest that the printed word’s primacy is seriously on the wane, something I first heard 20 years ago. Oftentimes when ideas about how technology will affect us are discussed, there’s a kind of hysterical reaction which is remarkably similar across at least two centuries. Dave Bruggeman at his Pasco Phronesis blog has a posting about the similarities between Twitter and 19th century diaries,

Lee Humphreys, a Cornell University communications professor, has reviewed several 18th and 19th century diaries as background to her ongoing work in classifying Twitter output (H/T Futurity). These were relatively small journals, necessitating short messages. And those messages bear a resemblance to the kinds of Twitter messages that focus on what people are doing (as opposed to the messages where people are reacting to things).

Dave goes on to recommend The Shock of the Old: Technology and Global History since 1900 by David Edgerton as an antidote to our general ignorance (from the book’s web page),

Edgerton offers a startling new and fresh way of thinking about the history of technology, radically revising our ideas about the interaction of technology and society in the past and in the present.

I’d also recommend Carolyn Marvin’s book, When old technologies were new, where she discusses the introduction of telecommunications technology and includes the electric light with these then new technologies (telegraph and telephone). She includes cautionary commentary from the newspapers, magazines, and books of the day which is remarkably similar to what’s available in our contemporary media environment.

Adding a little more fuel is Stephen Hume in a June 12, 2010 article about Shakespeare for the Vancouver Sun who asks,

But is the Bard relevant in an age of atom bombs; a world of instant communication gratified by movies based on comic books, sex-saturated graphic novels, gory video games, the television soaps and the hip tsunami of fan fiction that swashes around the Internet?

[and answers]

So, the Bard may be stereotyped as the bane of high school students, symbol of snooty, barely comprehensible language, disparaged as sexist, racist, anti-Semitic, representative of an age in which men wore tights and silly codpieces to inflate their egos, but Shakespeare trumps his critics by remaining unassailably popular.

His plays have been performed on every continent in every major language. He’s been produced as classic opera in China; as traditional kabuki in Japan. He’s been enthusiastically embraced and sparked an artistic renaissance in South Asia. In St. Petersburg, Russia, there can be a dozen Shakespeare plays running simultaneously. Shakespeare festivals occur in Austria, Belgium, Finland, Portugal, Sweden and Turkey, to list but a few.

Yes to Pasco Phronesis, David Edgerton, Carolyn Marvin, and Stephen Hume, I agree that we have much in common with our ancestors but there are also some profound and subtle differences not easily articulated. I suspect that if time travel were possible and we could visit Shakespeare’s time, we would find that the basic human experience doesn’t change that much but that we would be hard-pressed to fit into that society, as our ideas wouldn’t just be outlandish, they would be unthinkable. I mean literally unthinkable.

As Walter Ong noted in his book, Orality and Literacy, the concept of a certain type of list is a product of literacy. Have you ever done that test where you pick out the item that doesn’t belong on the list? Try: hammer, saw, nails, tree. The correct answer, as anybody knows, is tree, since it’s not a tool. However, someone from an oral culture would view the exclusion of the tree as crazy since you need both tools and wood to build something, and clearly the tree provides wood. (I’ll see if I can find the citation in Ong’s book, as he provides research to prove his point.) A list is a particular way of organizing information and thinking about it.

Science and politics

I was gobsmacked by a link I followed from a Foresight Institute posting about a nanotechnologist running for the US Congress. From the Foresight posting (which was kept rigidly nonpartisan),

Bill McDonald brings to our attention the U.S. Congressional campaign of Mike Stopa, a Harvard nanotechnologist and physicist.

This is probably the first time that a nanotechnologist has run for Congress.

However, his profession may not get much attention, as his campaign is focusing on other issues.

I too am going to be rigidly nonpartisan as my interest here is in a kind of thought experiment: What happens if you read the campaign literature and realize that the  scientist running for political office can’t manage a logical thought process or argument outside her or his own specialty?

I think there’s an assumption that because someone is a scientist, the individual will be able to present logical arguments and come to thoughtful decisions. I’m not saying that one has to agree with the scientist, just that the thinking and decision-making process should be cohesive. But that’s not fair. Humans are messy. We can hold competing and incompatible opinions and we rationalize them when challenged. Since scientists are human (for the near future anyway), they too are prey to both the messiness of the human condition and, by extension, the democratic process.

I’m going to continue ruminating on science and politics, as I am increasingly struck by a sense that there is a trend toward incorporating more and more voices into processes (public consultation on science issues, on housing issues, on cultural issues, etc.) that were once the domain of experts or policymakers, simultaneous with attempts either to suppress that participation by arranging consultations on matters that are already decided, or to suggest that too much participation is taking us into a state of chaos and rendering democracy, as per public consultations, untenable. Well, that was a mouthful.

As for scientists and politics in other countries, do take a look at this Pasco Phronesis posting,

The Conservative Party [UK], when it was still shadowing the Brown government, indicated that it would require all new Members of Parliament in the party to take some training in basic science concepts [emphases mine] as part of their new member training. This was back in 2008, and would take place after the next election (which was to happen at some unspecified point in the future when the announcement was made).

While there is a new person responsible for science for the Conservatives, the plan will be put into action…and expanded.

This notion is along the lines of what Preston Manning (founder of the Reform Party and the Canadian Reform Conservative Alliance Party [now absorbed by the Conservative Party] in Canada and opposition science critic) has been suggesting. Since leaving political life, Manning has founded the Manning Centre and continues with his commentary on science and other issues.

That’s it for today.

What would happen if Canadian universities outsourced the grading of science, technology and mathematics assignments?

While I don’t usually blog about formal education, it is an important part of the science ‘environment’ and, last week, I came across a new development in US post-secondary education which could have an impact on Canadian post-secondary education. What could be the possible impact of outsourcing some of the duties usually associated with teaching assistants, such as grading papers? That question arose while reading this article by Andrea Belz in Beta News,

…  College professors are now outsourcing grading, The Chronicle of Higher Education reported in April.

Teaching assistants (TAs) have provided that service for generations, but now it is going overseas. Recession-hit universities get even better deals outsourcing than they did with notoriously underpaid graduate students. Now, this work often ends up in the hands of credentialed Indian stay-at-home moms eager to work part-time.

In the sciences, department operating funds paid graduate students while they completed years of coursework; usually, doctoral students then eventually transitioned to receiving stipends from a professor’s research grants. In the humanities, the TA phase could extend even longer as students toiled away on their theses.

As Belz goes on to note, there’s a danger that graduate students from all faculties will simply abandon their studies. Since Belz roused my curiosity, I tracked down the original article by Audrey Williams June in The Chronicle of Higher Education and found that, to date, it’s mostly business faculties (although the practice is not confined to them) who seem to have explored this option. From the article,

Lori Whisenant knows that one way to improve the writing skills of undergraduates is to make them write more. But as each student in her course in business law and ethics at the University of Houston began to crank out—often awkwardly—nearly 5,000 words a semester, it became clear to her that what would really help them was consistent, detailed feedback.

Her seven teaching assistants, some of whom did not have much experience, couldn’t deliver. Their workload was staggering: About 1,000 juniors and seniors enroll in the course each year. “Our graders were great,” she says, “but they were not experts in providing feedback.”

That shortcoming led Ms. Whisenant, director of business law and ethics studies at Houston, to a novel solution last fall. She outsourced assignment grading to a company whose employees are mostly in Asia.

June goes on to interview both critics and supporters of this practice and does reiterate the point that students are more likely to persevere and improve their performance when they’re given substantive feedback on their efforts. She does not, as Belz does, speculate as to the possible impact on the education system as a whole.

I have a question: is anyone suggesting that 1,000 students in a class is too many? That number suggests a theatre or music performance, not a teaching situation.

As for Whisenant, the professor featured in the excerpt, I find her reasoning odd and here’s why. If she has seven teaching assistants (TAs) and 1,000 students, and assuming that she grades some of the papers herself, that works out to 125 papers to mark for each assignment. Presumably, her TAs are graduate students who are also taking courses and writing papers and/or working on their theses. So, reading/marking 125 assignments plus doing your own student work is a huge workload. So here’s my next question: how can Whisenant make this comment? “Our graders were great,” she says, “but they were not experts in providing feedback.” With 125 papers to mark one or more times in a semester (each student was producing 5,000 words), the issue can’t be expertise in providing feedback. I don’t care how expert you are, you’ll never be able to give adequate feedback with that kind of a workload.
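A quick back-of-the-envelope calculation, using only the figures quoted in the article (1,000 students, seven TAs plus Whisenant herself, roughly 5,000 words per student per semester), makes the point:

# Back-of-the-envelope grading workload from the figures quoted above
students = 1000
graders = 7 + 1            # seven TAs plus the professor
words_per_student = 5000   # per semester, per the Chronicle article

papers_per_grader = students / graders
words_per_grader = papers_per_grader * words_per_student

print(f"{papers_per_grader:.0f} students' work per grader")               # 125
print(f"{words_per_grader:,.0f} words to read per grader per semester")   # 625,000

That’s 625,000 words of undergraduate prose per grader per semester, which is several novels’ worth of reading, and rather proves the point about feedback.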

In speculating on which disciplines might lend themselves most easily to this type of outsourcing, I would have thought that sciences, mathematics, and technology (i.e. engineering and the like) programmes would be the most likely candidates. I find it fascinating that the uptake, so far, is with courses that heavily require written assignments.

I can see the advantages for undergraduate students in classes that are huge and/or online but the impact on graduate students seems nothing short of devastating unless remedies are applied before a crisis occurs.

In any event, I’m sure Canadian universities are watching with great interest.

Measuring professional and national scientific achievements; Canadian science policy conferences

I’m going to start with an excellent study about publication bias in science papers and careerism that I stumbled across this morning on physorg.com (from the news item),

Dr [Daniele] Fanelli [University of Edinburgh] analysed over 1300 papers that declared to have tested a hypothesis in all disciplines, from physics to sociology, the principal author of which was based in a U.S. state. Using data from the National Science Foundation, he then verified whether the papers’ conclusions were linked to the states’ productivity, measured by the number of papers published on average by each academic.

Findings show that papers whose authors were based in more “productive” states were more likely to support the tested hypothesis, independent of discipline and funding availability. This suggests that scientists working in more competitive and productive environments are more likely to make their results look “positive”. It remains to be established whether they do this by simply writing the papers differently or by tweaking and selecting their data.

I was happy to find out that Fanelli’s paper has been published in PLoS [Public Library of Science] ONE, an open access journal. From the paper [numbers in square brackets are citations found at the end of the published paper],

Quantitative studies have repeatedly shown that financial interests can influence the outcome of biomedical research [27], [28] but they appear to have neglected the much more widespread conflict of interest created by scientists’ need to publish. Yet, fears that the professionalization of research might compromise its objectivity and integrity had been expressed already in the 19th century [29]. Since then, the competitiveness and precariousness of scientific careers have increased [30], and evidence that this might encourage misconduct has accumulated. Scientists in focus groups suggested that the need to compete in academia is a threat to scientific integrity [1], and those guilty of scientific misconduct often invoke excessive pressures to produce as a partial justification for their actions [31]. Surveys suggest that competitive research environments decrease the likelihood to follow scientific ideals [32] and increase the likelihood to witness scientific misconduct [33] (but see [34]). However, no direct, quantitative study has verified the connection between pressures to publish and bias in the scientific literature, so the existence and gravity of the problem are still a matter of speculation and debate [35].

Fanelli goes on to describe his research methods and how he came to his conclusion that the pressure to publish may have a significant impact on ‘scientific objectivity’.
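For readers curious about what a test like this looks like in practice, here’s a toy illustration of the general approach (emphatically not Fanelli’s actual code, model, or data): code each paper as supporting its tested hypothesis or not, attach the per-capita publication productivity of the first author’s state, and test for an association. The numbers below are invented.

# Toy illustration only: do papers from more "productive" states support
# their hypotheses more often? Fanelli's real analysis used ~1300 papers
# and NSF productivity data, with a more careful statistical model.
from scipy.stats import pearsonr

# 1 = the paper reports support for its tested hypothesis, 0 = it does not
supports_hypothesis = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1]
# average papers per academic in the first author's state (fabricated values)
state_productivity = [2.1, 1.8, 0.9, 2.4, 1.1, 1.9, 2.2, 0.8, 1.7, 2.0, 1.0, 1.6]

# point-biserial correlation (Pearson r with one binary variable)
r, p_value = pearsonr(state_productivity, supports_hypothesis)
print(f"r = {r:.2f}, p = {p_value:.3f}")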

This paper provides an interesting counterpoint to a discussion about science metrics or bibliometrics taking place on (the journal) Nature’s website here. It was stimulated by Julia Lane’s recent article titled, Let’s Make Science Metrics More Scientific. The article is open access and comments are invited. From the article [numbers in square brackets refer to citations found at the end of the article],

Measuring and assessing academic performance is now a fact of scientific life. Decisions ranging from tenure to the ranking and funding of universities depend on metrics. Yet current systems of measurement are inadequate. Widely used metrics, from the newly-fashionable Hirsch index to the 50-year-old citation index, are of limited use [1]. Their well-known flaws include favouring older researchers, capturing few aspects of scientists’ jobs and lumping together verified and discredited science. Many funding agencies use these metrics to evaluate institutional performance, compounding the problems [2]. Existing metrics do not capture the full range of activities that support and transmit scientific ideas, which can be as varied as mentoring, blogging or creating industrial prototypes.

The range of comments is quite interesting; I was particularly taken by something Martin Fenner said,

Science metrics are not only important for evaluating scientific output, they are also great discovery tools, and this may indeed be their more important use. Traditional ways of discovering science (e.g. keyword searches in bibliographic databases) are increasingly superseded by non-traditional approaches that use social networking tools for awareness, evaluations and popularity measurements of research findings.

(Fenner’s blog along with more of his comments about science metrics can be found here. If this link doesn’t work, you can get to Fenner’s blog by going to Lane’s Nature article and finding him in the comments section.)

There are a number of issues here: how do we measure science work (citations in other papers?), how do we define the impact of science work (do we use social networks?), and, in turn, how do we measure impact when we’re talking about a social network?
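Since Lane’s article singles out the Hirsch index, here’s a minimal sketch of how that particular metric is computed; it also illustrates one of the flaws she mentions, namely that the index can only grow as citations accumulate, which favours older researchers.

# Hirsch index: the largest h such that the author has h papers with at
# least h citations each.
def h_index(citation_counts):
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# example: five papers with these citation counts
print(h_index([25, 8, 5, 3, 1]))  # -> 3 (three papers have at least 3 citations each)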

Now, I’m going to add timeline as an issue. Over what period of time are we measuring the impact? I ask the question because of the memristor story. Dr. Leon Chua wrote a paper in 1971 that, apparently, didn’t receive all that much attention at the time but was cited in a 2008 paper which received widespread attention. Meanwhile, Chua had continued to theorize about memristors in a 2003 paper that received so little attention that he abandoned plans to write part 2. Since the recent burst of renewed interest in the memristor and his 2003 paper, Chua has decided to follow up with part 2, hopefully some time in 2011 (as per this April 13, 2010 posting). There’s one more piece to the puzzle: an earlier paper by F. Argall. From Blaise Mouttet’s April 5, 2010 comment here on this blog,

In addition HP’s papers have ignored some basic research in TiO2 multi-state resistance switching from the 1960’s which disclose identical results. See F. Argall, “Switching Phenomena in Titanium Oxide thin Films,” Solid State Electronics, 1968.
http://pdf.com.ru/a/ky1300.pdf

[ETA: April 22, 2010: Blaise Mouttet has provided a link to an article which provides more historical insight into the memristor story. http://knol.google.com/k/memistors-memristors-and-the-rise-of-strong-artificial-intelligence#]

How do you measure or even track all of that, short of some science writer taking the time to pursue the story and write a nonfiction book about it?

I’m not counselling that the process be abandoned but, since it seems that people are revisiting the issues, it’s an opportune time to get all the questions on the table.

As for its importance, this process of trying to establish better and new science metrics may seem irrelevant to most people, but it has a much larger impact than even the participants appear to realize. Governments measure their scientific progress by touting the number of papers their scientists have produced, amongst other measures such as patents. Measuring the number of published papers has an impact on how governments want to be perceived internationally and within their own borders. Take, for example, something which has both international and national impact: the recent US National Nanotechnology Initiative (NNI) report to the President’s Council of Advisors on Science and Technology (PCAST). The NNI used the number of papers published as a way of measuring the US’s possibly eroding leadership in the field. (China published about 5,000 while the US published about 3,000.)

I don’t have much more to say other than I hope to see some new metrics.

Canadian science policy conferences

We have two such conferences and both are two years old in 2010. The first one is being held in Gatineau, Québec, May 12 – 14, 2010. Called Public Science in Canada: Strengthening Science and Policy to Protect Canadians [ed. note: protecting us from what?], the target audience for the conference seems to be government employees. David Suzuki (TV host, scientist, environmentalist, author, etc.) and Preston Manning (ex-politico) will be co-presenting a keynote address titled: Speaking Science to Power.

The second conference takes place in Montréal, Québec, Oct. 20-22, 2010. It’s being produced by the Canadian Science Policy Centre. Other than a notice on the home page, there’s not much information about their upcoming conference yet.

I did note that Adam Holbrook (aka J. Adam Holbrook) is both speaking at the May conference and is an advisory committee member for the folks who are organizing the October conference. At the May conference, he will be participating in a session titled: Fostering innovation: the role of public S&T. Holbrook is a local (to me) professor as he works at Simon Fraser University, Vancouver, Canada.

That’s all for today.