Tag Archives: Kit Eaton

‘Scotch-tape’ technique for isolating graphene

The ‘scotch-tape’ technique is mythologized in the graphene origin story, which has scientists Andre Geim and Konstantin Novoselov first isolating the material by using adhesive tape (aka ‘sticky’ tape or ‘scotch’ tape), as per my Oct. 7, 2010 posting,

The technique that Geim and Novoselov used to create the first graphene sheets both amuses and fascinates me (from the article by Kit Eaton on the Fast Company website),

The two scientists came up with the technique that first resulted in samples of graphene–peeling individual atoms-deep sheets of the material from a bigger block of pure graphite. The science here seems almost foolishly simple, but it took a lot of lateral thinking to dream up, and then some serious science to investigate: Geim and Novoselov literally “ripped” single sheets off the graphite by using regular adhesive tape. Once they’d confirmed they had grabbed micro-flakes of the material, Geim and Novoselov were responsible for some of the very early experiments into the material’s properties. Novel stuff indeed, but perhaps not so unexpected from a scientist (Geim) who the Nobel Committee notes once managed to make a frog levitate in a magnetic field.

A May 21, 2014 Fast Company article by Sarah Lewis about graphene and Geim, who has won both a Nobel and an Ig Nobel (the only scientist to do so), offers more details about the discovery,

The graphene FNE [Friday Night Experiments] began when Geim asked Da Jiang, a doctoral student from China, to polish a piece of graphite an inch across and a few millimeters thick down to 10 microns using a specialized machine. Partly due to a language barrier, Jiang polished the graphite down to dust, but not the ultimate thinness Geim wanted.

Helpfully, the Geim lab was also observing graphite using scanning tunneling microscopy (STM). The experimenters would clean the samples beforehand using Scotch tape, which they would then discard. “We took it out of the trash and just used it,” Novoselov said. The flakes of graphite on the tape from the waste bin were finer and thinner than what Jiang had found using the fancy machine. They weren’t one layer thick—that achievement came by ripping them some more with Scotch tape.

They swapped the adhesive for Japanese Nitto tape, “probably because the whole process is so simple and cheap we wanted to fancy it up a little and use this blue tape,” Geim said. Yet “the method is called the ‘Scotch tape technique.’ I fought against this name, but lost.”

Scientists elsewhere have been inspired to investigate the process in minute detail as per a June 27, 2014 news item on Nanowerk,

The simplest mechanical cleavage technique using a primitive “Scotch” tape has resulted in the Nobel-awarded discovery of graphenes and is currently under worldwide use for assembling graphenes and other two-dimensional (2D) graphene-like structures toward their utilization in novel high-performance nanoelectronic devices.

The simplicity of this method has initiated booming research on 2D materials. However, the atomistic processes behind the micromechanical cleavage have remained poorly understood.

A June 27, 2014 MANA (International Center for Materials Nanoarchitectonics) news release, which originated the news item, provides more information,

A joint team of experimentalists and theorists from the International Center for Young Scientists, International Center for Materials Nanoarchitectonics and Surface Physics and Structure Unit of the National Institute for Materials Science, National University of Science and Technology “MISiS” (Moscow, Russia), Rice University (USA) and University of Jyväskylä (Finland), led by Daiming Tang and Dmitri Golberg, has for the first time succeeded in fully understanding the physics, kinetics and energetics behind the much-used “Scotch-tape” technique, using molybdenum disulphide (MoS2) atomic layers as a model material.

The researchers developed a direct in situ probing technique in a high-resolution transmission electron microscope (HRTEM) to investigate the mechanical cleavage processes and associated mechanical behaviors. By precisely manipulating an ultra-sharp metal probe to contact the pre-existing crystalline steps of the MoS2 single crystals, atomically thin flakes were delicately peeled off, selectively ranging from single and double layers to more than 20 atomic layers. The team found that the mechanical behaviors are strongly dependent on the number of layers. A combination of in situ HRTEM and molecular dynamics simulations reveals a transformation of bending behavior from spontaneous rippling (fewer than 5 atomic layers) to homogeneous curving (~10 layers), and finally to kinking (20 or more layers).

By considering the force balance near the contact point, the specific surface energy of a MoS2 monoatomic layer was calculated to be ~0.11 N/m. This is the first time that this fundamentally important property has been directly measured.
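To give a feel for the kind of estimate involved, here is a minimal sketch of a peel-type force balance. The simple 90° peel geometry and the example flake width are my own illustrative assumptions, not the paper’s actual (more detailed) analysis of the force balance near the contact point,

```latex
% Illustrative Kendall-style peel balance (my assumption, not the paper's
% exact treatment). Peeling a flake of width w at 90 degrees with force F
% supplies F/w of work per unit area of freshly exposed surface, so at the
% threshold of cleavage
\[
  \frac{F}{w} \;\approx\; \gamma \;\approx\; 0.11\ \mathrm{N/m}.
\]
% For a hypothetical flake one micrometre wide this gives
% F ~ 0.11 N/m * 1e-6 m ~ 1e-7 N (about 0.1 micronewton), which suggests
% why an ultra-sharp probe inside a TEM can peel layers off so delicately.
```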

After initial isolation from the mother crystal, the MoS2 monolayer could be readily restacked onto the surface of the crystal, demonstrating the possibility of van der Waals epitaxy. MoS2 atomic layers could be bent to ultimately small radii (1.3–3.0 nm) reversibly and without fracture. Such ultra-reversibility and extreme flexibility proves that they could be mechanically robust candidates for advanced flexible electronic devices, even under extreme folding conditions.

Here’s a link to and a citation for the research paper,

Nanomechanical cleavage of molybdenum disulphide atomic layers by Dai-Ming Tang, Dmitry G. Kvashnin, Sina Najmaei, Yoshio Bando, Koji Kimoto, Pekka Koskinen, Pulickel M. Ajayan, Boris I. Yakobson, Pavel B. Sorokin, Jun Lou, & Dmitri Golberg. Nature Communications 5, Article number: 3631 (2014). doi:10.1038/ncomms4631 Published 03 April 2014.

This paper is behind a paywall but there is a free preview available through ReadCube Access.

About GoldieBlox, the Beastie Boys, girls in science, and intellectual property

This story about GoldieBlox was supposed to be a ‘feel good’ piece about the company, girls, and STEM (science, technology, engineering, and mathematics)—but that was last week. At this point (Nov. 26, 2013), we can add a squabble over intellectual property (copyright) to the mix.

GoldieBlox, a company that makes engineering toys for girls (previously mentioned in my Dec. 6, 2012 posting), has produced an advertisement that has been attracting a lot of interest on the internet, including this Nov. 19, 2013 story by Katy Waldman for Slate (Note: Links have been removed),

This is a stupendously awesome commercial from a toy company called GoldieBlox, which has developed a set of interactive books and games to “disrupt the pink aisle and inspire the future generation of female engineers.” The CEO, Debbie Sterling, studied engineering at Stanford, where she was dismayed by the lack of women in her program. (For a long look at the Gordian knot that is women’s underrepresentation in STEM fields, … .) Sterling wants to light girls’ inventive spark early, supplementing the usual diet of glittery princess products with construction toys “from a female perspective.”

We love this video because it subverts a bunch of dumb gender stereotypes—all to the strains of a repurposed Beastie Boys song. [emphasis mine] In it, a trio of smart girls could not be less impressed by the flouncing beauty queens in the commercial they’re watching. So they use a motley collection of toys and household items (including a magenta feather boa and a pink plastic tea set) to assemble a huge Rube Goldberg machine. …

Here’s the video (no longer available with the Beastie Boys parody song as of Nov. 27, 2013; I have placed the latest version at the end of this posting),

You can find GoldieBlox here.

Things have turned a little since Waldman’s rapturous story. The Beastie Boys do not want their music to be used in advertisements of any kind. From Christina Chaey’s Nov. 25, 2013 article for Fast Company,

Beastie Boys members Mike D and Ad-Rock, who survive the late Adam “MCA” Yauch, have issued the following open letter addressed to GoldieBlox:

Like many of the millions of people who have seen your toy commercial “GoldieBlox, Rube Goldberg & the Beastie Boys,” we were very impressed by the creativity and the message behind your ad. We strongly support empowering young girls, breaking down gender stereotypes and igniting a passion for technology and engineering.

As creative as it is, make no mistake, your video is an advertisement that is designed to sell a product, and long ago, we made a conscious decision not to permit our music and/or name to be used in product ads. When we tried to simply ask how and why our song “Girls” had been used in your ad without our permission, YOU sued US.

Chaey’s article goes on to document responses from other musicians to this incident and notes that GoldieBlox has not commented.

Techdirt’s Mike Masnick also has a Nov. 25, 2013 article on the topic, where he notes that, contrary to some press reports, it was GoldieBlox, not the Beastie Boys, that filed suit (and then only for a declaratory judgment),

Now, it is true that some in the press have mistakenly stated that the Beastie Boys sued GoldieBlox, and that’s clearly not the case. GoldieBlox filed for declaratory judgment, which is a fairly standard move after someone claims that you violated their rights. It’s not a lawsuit seeking money — just to declare that the use is fair use. While the Beastie Boys say they made no threat or demand, the lawsuit notes that their letter (which still has not been revealed in full) made a direct claim that the video was copyright infringement, and also that this was a “big problem” that has a “very significant impact.”

As Masnick goes on to mention (Note: A link has been removed),

… in fact, that in Adam Yauch’s [deceased band member] will, it explicitly stated that none of their music was ever to be used in advertising. And, from the Beastie Boys’ open letter, it appears that was their main concern.

But, here’s the thing: as principled as Yauch was about this, and as admirable as it may be for him and the band to not want their music appearing in advertisements, that does not matter under the law. If the use is considered fair use, then it can be used. Period. There is no clause in fair use law that says “except if someone’s will says otherwise.” The very point of fair use is that you don’t need permission and you don’t need a license.

Sometimes (often) the resolution of these disagreements has more to do with who can best afford legal costs and less to do with points of law, even when they are in your favour. From Masnick’s article,

I’ve spoken to a bunch of copyright lawyers about this, and almost all of them agree that this is likely fair use (with some arguing that it’s a totally clear-cut case). Some have argued that because it’s an advertisement for a company that precludes any possibility of fair use, but that’s absolutely not true. Plenty of commercial efforts have been considered fair use, and, in fact, many of the folks who rely the most on fair use are large media companies who are using things in a commercial context.

It’s nice when the good guys are clearly distinguishable from the bad guys but it appears this may not entirely be the case with GoldieBlox, which apparently believes it can grant licences to link to its website, as per Mike Masnick’s Nov. 26, 2013 Techdirt posting on the topic (Note: Links have been removed),

… as noted in Jeff Roberts’ coverage of the case over at Gigaom, it appears that Goldieblox might want to take a closer look at their own terms of service, which makes some absolutely ridiculous and laughable claims about how you can’t link to their website …

… Because just as you don’t need a license to create a parody song, you don’t need a license to link to someone’s website.

I do hope things work out with regard to the parody song and, as for licensing links to their website, that’s just silly. One final note: Canadians do not have ‘fair use’ provisions under the law; we have ‘fair dealing’ and that is a little different. From the Wikipedia essay on Fair Dealing (Note: Links have been removed),

Fair dealing is a statutory exception to copyright infringement. It is a defence, with the burden of proof upon the defendant.

Should I ever learn of the outcome of this GoldieBlox/Beastie Boys conflict, I will provide an update.

ETA Nov. 27, 2013: GoldieBlox has changed the soundtrack for its video as per the Nov. 27, 2013 article by Kit Eaton for Fast Company,

The company explains it has replaced the video and is ready to quash its lawsuit “as long as this means we will no longer be under threat from [the band’s] legal team.”

Eaton has more quotes from the letter written by the GoldieBlox team in his article. For the curious, I have the latest version of the commercial here,

I don’t think the new music is as effective but, if I remember the video properly, they’ve made some changes and I like those.

ETA Nov. 27, 2013 (2): I can’t believe I’m adding material to this posting for the second time today. Ah well. Katy Waldman over at Slate has weighed in for the second time with a Nov. 27, 2013 article discussing the Beastie Boys situation briefly while focussing primarily on whether or not the company actually does produce toys that encourage girls in their engineering and science efforts. It seems the consensus, such as it is, would be: not really. Not having played with the toys myself, I have no worthwhile opinion to offer on the topic but you might want to check Waldman’s article to see what more informed folks have to say.

Mary Elizabeth Williams, in her Nov. 27, 2013 article for Salon.com, seems more supportive of the Beastie Boys’ position than Mike Masnick at Techdirt. She’s also quite critical of GoldieBlox’s open letter, mentioned in today’s first ETA. I agree with many of her criticisms.

Hopefully, this will be it for this story.

Material changes

A few items have caught my attention lately and the easiest way to categorize them is with the term ‘materials’. First, there’s a June 7, 2012 article by Jane Wakefield about fashion and technology on the BBC News website that features a designer, Suzanne Lee, who grows clothing. I’m glad to see Lee is still active (I first mentioned her work with bacteria and green tea in a July 13, 2010 posting). From Wakefield’s 2012 article,

“I had a conversation with a biologist who raised the idea of growing a garment in a laboratory,” she [Biocouture designer, Suzanne Lee] told the BBC.

In her workshop in London, she is doing just that.

Using a recipe of green tea, sugar, bacteria and yeast she is able to ‘grow’ a material which she describes as a kind of “vegetable leather”.

The material takes about two weeks to grow and can then be folded around a mould – she has made a dress from a traditional tailor’s model but handbags and furniture are also possibilities.

Bio-biker image courtesy of Bio Couture (http://www.biocouture.co.uk/)

Designer Suzanne Lee’s website is http://www.biocouture.co.uk

Wakefield’s article goes on to discuss technologies being integrated into design,

While computer-aided design and drafting (CADD) is not a new technology, it has rarely been used in the fashion world before, but French fashion designer Julien Fournié wants to change that.

Mr Fournié began working in the fashion industry under Jean-Paul Gaultier but these days he is more likely to be found hanging out with engineers than with fashionistas.

He has teamed up with engineers at Dassault Systèmes, a French software company which more usually creates 3D designs for the car and aerospace industries.

Recently Mr Fournié has been experimenting with making clothes from neoprene, a type of rubber.

It is a difficult material to work with and Mr Fournié’s seamstresses suggested that the only way to stitch it would be to use glue.

“To my mind a glued dress wasn’t very sexy,” he said.

So he handed the problem over to the engineers.

“They found the right pressure for the needle so it didn’t break the material,” he said.

Wakefield discusses more of Fournié’s work as well as a ‘magic mirror’ being developed by the FashionLab at Dassault Systèmes,

“A store may have a magic mirror with a personal avatar that can use your exact body measurements to show you how new clothes would look on you,” explained Jerome Bergeret, director of FashionLab.

There is more in the Wakefield article, including the ‘future of fashion shopping’.

Still on the materials theme but in a completely different category: flat screens that are tactile. From the June 6, 2012 news item by Nancy Owano on the physorg.com website,

Why settle for flat? That is the question highlighted on the home page of Tactus Technology, which does not want device users to settle for any of today’s tactile limitations on flatscreen devices. The Fremont, California-based company has figured out how to put physical buttons on a display when we want them and no buttons when we don’t. Tactus has announced its tactile user interface for touchscreen devices: real, physical buttons that can rise up from the touchscreen surface on demand.

The customizable buttons can appear in a range of shapes and configurations. Buttons may run across the display, or appear as a collection of round buttons to represent a gamepad for playing games. “We are a user interface technology where people can take our technology and create whatever kind of interface they want,” said Nate Saal, VP business development. He said it could be any shape or construct on the surface.

Lakshmi Sandhana also wrote about Tactus and its new keyboard in a June 6, 2012 article for Fast Company,

The idea of a deformable touchscreen surface came to Craig Ciesla, CEO of Tactus, way back in 2007, when he found himself using his BlackBerry instead of the newly released iPhone because of its keyboard. …

“I realized that this question could be answered by using microfluidics,” Ciesla says. Their design calls for a thin transparent cover layer with some very special properties to be laid on top of a touchscreen display. Made of glass or plastic, the 1mm-thick slightly elastic layer has numerous micro-channels filled with a non-toxic fluid. Increasing fluid pressure with the aid of a small internal controller causes transparent physical buttons to grow out of the surface of the layer in less than a second. Once formed, you can feel the buttons, rest your fingers or type on them, just like a mechanical keyboard. “When you don’t want the buttons, you reduce the fluid pressure, draw the fluid out and the buttons recede back to their original flat state.” (No messy cleanup–the minimal amount of fluid is all contained within the device.) “You’re left with a surface where you don’t see anything,” Ciesla explains.
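To make the control loop Ciesla describes a little more concrete, here’s a toy sketch of the idea: raise the fluid pressure to form buttons, lower it to let them recede. Tactus hasn’t published any firmware or API, so every name and number below is hypothetical,

```python
# Toy model of the microfluidic button idea described above. This is NOT
# Tactus's actual firmware or API, just an illustrative sketch of the
# behaviour the article implies. All names and values are hypothetical.

BUTTON_UP_PRESSURE = 1.0   # normalized pressure at which buttons are fully formed
FLAT_PRESSURE = 0.0        # pressure at which the cover layer lies flat

class TactileLayer:
    def __init__(self):
        self.pressure = FLAT_PRESSURE

    def show_keyboard(self):
        """Pump fluid into the micro-channels so buttons rise (< 1 s per the article)."""
        self.pressure = BUTTON_UP_PRESSURE

    def hide_keyboard(self):
        """Draw the fluid back out; the slightly elastic layer relaxes flat again."""
        self.pressure = FLAT_PRESSURE

    def button_height(self):
        # Height simply tracks pressure in this toy model; the real behaviour
        # would depend on channel geometry and the cover layer's elasticity.
        return self.pressure

layer = TactileLayer()
layer.show_keyboard()         # e.g. when a text field gains focus
print(layer.button_height())  # 1.0 -> buttons up
layer.hide_keyboard()         # back to a flat screen for video playback
```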

The company, Tactus Technology Inc., does have a product video,

It’s a little bit on the dramatic side; I think their professional voiceover actor could have a future career as a Rod Serling (Twilight Zone) sound-alike. Regardless, I do like the idea of a product that can function as a flat screen and as a screen with buttons.

My last item is about emotion-recognizing voice systems. Kit Eaton, who writes for Fast Company on a pretty regular basis, posted a June 7, 2012 article about systems that recognize your emotions (Note: I have removed links from the excerpt),

Nunance [sic], which makes PC voice recognition systems and the tech that powers Apple’s famous Siri digital PA, have revealed their next tech is voice recognition in cars and for TVs. But the firm also wants to add more than voice recognition in an attempt to build a real-life KITT–it wants to add emotion detection so its system can tell how you’re feeling while you gab away. …

Nuance’s chief of marketing Peter Mahoney spoke to the Boston Globe last week about the future of the company’s tech, and noted that in a driving environment emotion detection could be a vital tool. For example, if your car thinks you sound stressed, it may SMS your office to say you’re late or even automatically suggest another route that avoids traffic. (Or how about a voice-controlled Ford system that starts playing you, say, Enya to calm the nerves.) Soon enough, you may deviate from your existing “shortest route” algorithms, while being whisked to parts of the city you never otherwise visit. Along the way, you might discover a more pleasant route to the office, or a new place to buy coffee.

But Nuance says it has far bigger plans to make your emotional input valuable: It’s looking into ways to monetize its voice systems, including your emotional input, to directly recommend services and venues to you.
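Nuance has not published how its in-car emotion detection works or what its interface looks like, so as a purely illustrative sketch, here is how the kinds of responses Eaton’s article imagines (texting the office, rerouting, calming music) might be wired to a hypothetical stress score,

```python
# Illustrative only: not Nuance's actual system or API. This just encodes
# the example responses from Eaton's article against a hypothetical
# stress score in [0, 1] produced by some voice-analysis stage.

def respond_to_driver_state(stress_level, running_late):
    """Map a hypothetical stress score to the article's example actions."""
    actions = []
    if stress_level > 0.7:                       # threshold is invented
        actions.append("play calming music")     # the article's Enya example
        actions.append("suggest a less congested alternate route")
        if running_late:
            actions.append("send SMS to office: running late")
    return actions

print(respond_to_driver_state(stress_level=0.8, running_late=True))
# -> all three actions fire for a stressed, late driver
```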

There are more details and a video demonstrating Nuance’s Dragon Drive product in Eaton’s article. As for me, I’m not excited about decreasing my personal agency in an attempt to sell me yet more products and services. But perhaps I’m being overly pessimistic.

Since my weekend is about to start and these items got me to thinking about materials, it seems only right that I end this posting with,


It takes about one minute before the singing starts but it’s worth the wait. Happy weekend!

Eye, arm, & leg prostheses, cyborgs, eyeborgs, Deus Ex, and ableism

Companies are finding more ways to publicize and promote themselves and their products. For example, there’s Intel, which seems to have been especially active lately with its Tomorrow Project (my August 22, 2011 posting) and its sponsorship (being one of only four companies to do so) of the Discovery Channel’s Curiosity television programme (my July 15, 2011 posting). What I find interesting in these efforts is their range and the use of old and new techniques.

Today I found (via an August 30, 2011 article by Nancy Owano) a documentary made by Robert Spence, Canadian filmmaker and eyeborg, for the recently released Deus Ex: Human Revolution game (both the game and Spence are mentioned in my August 18, 2011 posting) from the company, Eidos Montréal. If you’re squeamish (a medical operation is featured), you might want to miss the first few minutes,

I found it quite informative but curiously US-centric. How could they discuss prostheses for the legs and not mention Oscar Pistorius, the history-making South African double amputee runner who successfully petitioned the Court of Arbitration for Sport for the right to compete with able-bodied athletes? (In July this year, Pistorius qualified for the 2012 Olympics.) By the way, they do mention the Icelandic company, Össur, which created Pistorius’ “cheetah” legs. (There’s more about Pistorius and human enhancement in my Feb. 2, 2010 posting. [scroll down about 1/3 of the way])

There’s some very interesting material about augmented reality masks for firefighters in this documentary. Once functional and commercially available, the masks would give firefighters information about toxic gases, temperature, etc. as they move through a burning building. There’s a lot of interest in making augmented reality commercially available via smartphones as Kit Eaton notes in an August 29, 2011 article for Fast Company,

Junaio’s 3.0 release is a big transformation for the software–it included limited object recognition powers for about a year, but the new system is far more sophisticated. As well as relying on the usual AR sensor suite of GPS (to tell the software where the smartphone is on the planet), compass, and gyros to work out what angle the phone’s camera is looking, it also uses feature tracking to give it a better idea of the objects in its field of view. As long as one of Junaio’s channels or databases or the platforms of its developer partners has information on the object, it’ll pop up on screen.

When it recognizes a barcode, for example, the software “combines and displays data sources from various partner platforms to provide useful consumer information on a given product,” which can be a “website, a shopping micro-site or other related information” such as finding recipes based on the ingredients. It’s sophisticated enough so you can scan numerous barcoded items from your fridge and add in extras like “onions” and then get it to find a recipe that uses them.
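The recipe-from-barcodes step is easy to picture in code. Junaio’s actual channel and partner-platform lookups are proprietary, so the recipe data and the matching rule here are invented for illustration,

```python
# Sketch of the recipe-matching step described above. Junaio's real
# lookups are proprietary; this data and logic are invented.

RECIPES = {
    "french onion soup": {"onions", "beef stock", "bread", "cheese"},
    "omelette": {"eggs", "butter", "cheese"},
}

def find_recipes(scanned_ingredients, extras=()):
    """Return recipes whose ingredients are covered by what the user has.

    scanned_ingredients: items identified from barcodes on fridge contents.
    extras: manually added items such as "onions", per the article.
    """
    available = set(scanned_ingredients) | set(extras)
    return [name for name, needed in RECIPES.items() if needed <= available]

scanned = {"beef stock", "bread", "cheese"}      # from barcode recognition
print(find_recipes(scanned, extras={"onions"}))  # -> ['french onion soup']
```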

Eaton notes that people might have an objection to holding up their smartphones for long periods of time. That’s a problem that could be solved of course if we added a prosthetic to the eye or replaced an organic eye with a bionic eye as they do in the game and as they suggest in the documentary.

Not everyone is quite so sanguine about this bright new future. I featured a documentary, Fixed, about some of the discussion regarding disability, ability, and human enhancement in my August 3, 2010 posting. One of the featured academics is Gregor Wolbring, assistant professor, Dept of Community Health Sciences, Program in Community Rehabilitation and Disability Studies, University of Calgary; and president of the Canadian Disability Studies Association.  From Gregor’s June 17, 2011 posting on the FedCan blog,

The term ableism evolved from the disabled people rights movements in the United States and Britain during the 1960s and 1970s. It questions and highlights the prejudice and discrimination experienced by persons whose body structure and ability functioning were labelled as ‘impaired’, as sub species-typical. Ableism of this flavor is a set of beliefs, processes and practices which favors species-typical, normative, body-structure-based abilities. It labels ‘sub-normative’ species-typical biological structures as ‘deficient’, as not able to perform as expected.

The disabled people rights discourse and disability studies scholars question the assumption of deficiency intrinsic to ‘below the norm’ labeled body abilities and the favoritism for normative species-typical body abilities. The discourse around deafness and Deaf Culture would be one example where many hearing people expect the ability to hear. This expectation leads them to see deafness as a deficiency to be treated through medical means. In contrast, many Deaf people see hearing as an irrelevant ability and do not perceive themselves as ill and in need of gaining the ability to hear. Within the disabled people rights framework ableism was set up as a term to be used like sexism and racism to highlight unjust and inequitable treatment.

Ableism is, however, much more pervasive.

Ableism based on biological structure is not limited to the species-typical/ sub species-typical dichotomy. With recent science and technology advances, and envisioned advances to come, we will see the dichotomy of people exhibiting species-typical and the so-called sub species-typical abilities labeled as impaired, and in ill health. On the other side we will see people exhibiting beyond species-typical abilities as the new expectation norm. An ableism that favours beyond species-typical abilities over species-typical and sub species-typical abilities will enable a change in meaning and scope of concepts such as health, illness, rehabilitation, disability adjusted life years, medicine, health care, and health insurance. For example, one will only be labeled as healthy if one has received the newest upgrade to one’s body – meaning one would by default be ill until one receives the upgrade.

Here’s an excerpt from my Feb. 2, 2010 posting which reinforces what Gregor is saying,

This influx of R&D cash, combined with breakthroughs in materials science and processor speed, has had a striking visual and social result: an emblem of hurt and loss has become a paradigm of the sleek, modern, and powerful. Which is why Michael Bailey, a 24-year-old student in Duluth, Georgia, is looking forward to the day when he can amputate the last two fingers on his left hand.

“I don’t think I would have said this if it had never happened,” says Bailey, referring to the accident that tore off his pinkie, ring, and middle fingers. “But I told Touch Bionics I’d cut the rest of my hand off if I could make all five of my fingers robotic.” [originally excerpted from Paul Hochman’s Feb. 1, 2010 article, Bionic Legs, i-Limbs, and Other Super Human Prostheses You’ll Envy for Fast Company]

I don’t really know how to take the fact that the documentary is in fact product placement for the game, Deus Ex: Human Revolution. On the up side, it opens up a philosophical discussion in a very engaging way. On the down side, it closes down the discussion because drawbacks are not seriously mentioned.

Science research spending and innovation in Europe and reflections on the Canadian situation

I thought I’d pull together some information about science funding and innovation for closer examination. First, in early July 2011 the European Union announced plans for a huge spending increase, approximately 45%, for science. The current programme, the Seventh Framework Programme (US$79B budget), is coming to an end in 2013 and the next iteration will be called Horizon 2020 (proposed US$114B budget). Here’s more from Kit Eaton’s July 6, 2011 article on Fast Company,

The proposal still awaits approval by the E.U.’s parliament and member states, but just getting this far is a milestone. The next phase is to forge spending into the next generation of the E.U.’s Framework Programme, which is its main research spending entity, to produce a plan called Horizon 2020. The spending shift has been championed by E.U. research commissioner Máire Geoghegan-Quinn, and means that the share of the E.U. budget portioned out for scientific research will eventually double from its 4.5% figure in 2007 to 9% in 2020.

How will Europe pay for it? This is actually the biggest trick being pulled off: More than €4.5 billion would be transferred from the E.U.’s farm subsidies program, the Common Agricultural Policy. This is the enormous pile of cash paid by E.U. authorities to farmers each year to keep them in business, to keep food products rolling off the production line, and to keep fields fallow–as well as to diversify their businesses.

Nature journal also covered the news in a July 5, 2011 article by Colin Macilwain,

Other research advocates say that the proposal — although falling short of the major realignment of funding priorities they had been hoping for — was as good as could be expected in the circumstances. “Given the times we’re in, we couldn’t realistically have hoped for much more,” says Dieter Imboden, president of Eurohorcs, the body representing Europe’s national research agencies.

Geoghegan-Quinn told Nature that the proposal was “a big vote of confidence in science” but also called on researchers to push to get the proposal implemented — especially in their home countries. “The farmers will be out there lobbying, and scientists and researchers need to do the same,” she says.

While the European Union wrangles over a budget that could double their investment in science research, Canadians evince, at best, a mild interest in science research.

The latest Science, Technology and Innovation Council report, State of the Nation 2010: Canada’s Science, Technology and Innovation System, was released in June 2011 and has, so far, occasioned little interest despite an article in the Globe & Mail and a Maclean’s blog posting by Paul Wells. Hopefully, The Black Hole Blog, where Beth Swan and David Kent are writing a series about the report, will be able to stimulate some discussion.

From Beth’s July 12, 2011 posting,

The report – at least the section I’m talking about today – is based on data from the Organisation for Economic Co-operation and Development’s (OECD) Programme for International Student Assessment and Statistics Canada. Some of the interesting points include:

  • 15-year-old Canadians rank in the top 10 of OECD countries for math and science in 2009.
  • 80% of 15-19 year-old Canadians are pursuing a formal education, which is lower than the OECD average
  • But Canada ranks 1st in OECD countries for adults (ages 25–64 years) in terms of the percentage of the population with a post-secondary education (49%)
  • The numbers of Canadian students in science and engineering at the undergraduate level increased (18% increase in the number of science undergraduate degrees, 9% increase in the number of engineering undergraduate degrees) in 2008 compared to 2005

This all begs the question, though, of what those science-based graduates do once they graduate. It’s something that we’ve talked about a fair bit here on the Black Hole and the STIC report gives us some unhappy data on it. Canada had higher unemployment rates for science-based PhDs (~3–4%) compared to other OECD countries (e.g., in the US, it’s about ~1–1.5%). Specifically, in 2006 Canada had the highest rates of unemployment among the OECD countries for the medical sciences (3%) and engineering (4%), and the third highest rate of unemployment for the natural sciences (3%).

David, in his July 16, 2011 posting, focuses on direct and indirect Canadian federal government Research & Development (R&D) spending,

It appears from a whole host of statistics, reports, etc. that Canada lags in innovation, but what is the government’s role in helping to nurture its advancement? Is it simply to create fertile ground for “the market” to do its work? Or is it a more interventionist style of determining what sorts of projects the country needs and investing as such? Perhaps it involves altering the way we train and inspire our young people?

Beth then comments on Canadian business R&D investment, which has always been a low priority according to the material I’ve read, in her July 25, 2011 posting,

Taken together, this shows a rather unfavourable trend in Canadian businesses not investing in research & development – i.e., not contributing to innovation. We know from Dave’s last posting that Canada is not very good at contributing direct funds to research and my first posting in this series illustrated that while Canada is pretty good at getting PhDs trained, we are not so good at having jobs for those PhDs once they are done their schooling.

David’s latest posting, from July 27, 2011, asks the age-old question: why does Canada lag in R&D spending?

Many reports have been written over the past 30 years about Canada and its R&D spending, and they clamour one after the other about Canada’s relative lack of investment into R&D.  We’ve been through periods of deep cutbacks and periods of very strong growth, yet one thing remains remarkably consistent – Canada underspends on R&D relative to other countries.

The waters around such questions are extremely murky and tangible outcomes are tough to identify and quantify when so many factors are at play.  What does seem reasonable though is to ask where this investment gap is filled from in other countries that currently outstrip Canada’s spending – is it public money, private money, foreign money, or domestic money?  Hopefully these questions are being asked and answered before we set forth on another 30 year path of poor relative investment.

As I stated in my submission to the federal government’s R&D review panel and noted in my March 15, 2011 posting about the ‘Innovation’ consultation, I think we need to approach the issues in more imaginative ways.

Informal science education, DARPA and NASA style

I like to mention imaginative science education projects from time to time and this one caught my attention. The US National Aeronautics and Space Administration (NASA) and the Defense Advanced Research Projects Agency (DARPA) are offering students the opportunity to have one of their experiments tested under live conditions in outer space. From the Kit Eaton June 20, 2011 article (How NASA, DARPA Are Keeping Kids Interested In Space),

To keep folks interested [now that the Space Shuttle era is over], NASA and DARPA are pushing (a little) money into a program that’s directly aimed at students themselves.

Synchronized Position Hold, Engage, Reorient, Experimental Satellites (SPHERES) is an existing experiment that uses tiny ball-shaped robots that fly inside the International Space Station. They test techniques for keeping real satellites maneuvering in sync so that they can rendezvous and work as part of a swarm–a task that’s useful for autonomous satellite servicing, and even the building of future spacecraft.

The offer that NASA’s making is that if you design an interesting experiment, and it wins their approval, it’ll be used to fly the SPHERES robots for real. In space.

There are more details about the 2011 SPHERES Challenge tournament at the Massachusetts Institute of Technology’s (MIT) Zero Robotics website. Here’s a little of the information available on that site,

“Zero Robotics” is a robotics programming competition that opens the world-class research facilities on the International Space Station (ISS) to high-school students. Students will actually write programs at their High School that may control a satellite in space! The goal is to build critical engineering skills for students, such as problem solving, design thought process, operations training, and team work. Ultimately we hope to inspire future scientists and engineers so that they will view working in space as “normal”, and will grow up pushing the limits of engineering and space exploration.
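For a flavour of what “a program that may control a satellite in space” involves, here is a toy position-hold controller. The real Zero Robotics players are written against the SPHERES programming interface, which I’m not reproducing here; the gains, time step, and function names below are all hypothetical,

```python
# Toy sketch of a position-hold controller, NOT the actual SPHERES API.
# Each control tick we command a force that pulls the satellite toward a
# target point while damping its velocity. Gains and time step are invented.

KP, KD = 0.5, 1.2   # illustrative proportional and damping gains
DT = 1.0            # control period in seconds

def position_hold_step(pos, vel, target):
    """One control tick: return a 3-axis force command steering pos toward target."""
    return [KP * (t - p) - KD * v for p, v, t in zip(pos, vel, target)]

# Toy simulation of a unit-mass satellite commanded to hold at (1, 0, 0):
pos, vel, target = [0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [1.0, 0.0, 0.0]
for _ in range(30):
    force = position_hold_step(pos, vel, target)
    vel = [v + f * DT for v, f in zip(vel, force)]   # integrate acceleration
    pos = [p + v * DT for p, v in zip(pos, vel)]     # then position
print([round(p, 2) for p in pos])  # settles near the target point
```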

They’ve had annual challenges since 2009 and this year’s is the SPHERES challenge. There are six stages to this year’s competition,

The 2011 SPHERES Challenge tournament has 6 stages:

  1. Learn to program / tutorials / initial programming
  2. 2D Simulation: the game will be played in 2-dimensions. All teams will submit a player and will compete, in a full round robin simulation, against all other teams. Their score will count towards elimination later on, but no teams will be eliminated in this round.
  3. 2D Ground Competition: the top scorers from the 2D simulation will see their players compete against each other on the SPHERES ground satellites, learning directly some of the important differences between simulation and real hardware. Scores in this round will not count towards elimination, as not all teams will compete. All teams will be able to watch the competition at MIT via webcast.
  4. 3D Simulation: all participating teams will extend their game to 3 dimensions and submit their final individual player. MIT will run a full round robin simulation. The score of this round will be combined with the score of the 2D simulation to seed all teams.
  5. 3D Semi-Finals: the top 48 teams will be required to form alliances of 3 teams per player, creating a total of 16 players. Preference will be given to the choices of higher seeds. These alliances will compete in a full round-robin simulation. The top scoring players/alliances will be invited to submit an entry for the ISS finals.
  6. ISS Finals: the top 9 players of the semi-finals will be invited to participate in the ISS finals (a total of 27 teams, as there will be 3 teams per player).  Teams may visit MIT to see the live feed, or watch via the webcast. Players will compete in a bracketed round-robin aboard the ISS and a champion will be declared.   (note: date depends on astronaut time availability)

This is a competition for US high school students in grades 9–12. The application deadline is Sept. 5, 2011.

British Library’s new iPad app

How do I love thee, British Library? Let me count the ways. (I know it’s a cheap move paraphrasing these lines from Elizabeth Barrett Browning but it’s a compromise since it can take me years to come up with the perfect poetic line by which time this news will be ancient history.)

The British Library has announced an iPad application (app) which will make over 60,000 19th-century titles from its collection available through Apple’s iTunes store later this summer. At this point, there are approximately 1,000 titles available in the app, which they are calling the 19th Century Historical Collection. Neal Ungerleider on the Fast Company website writes in his June 15, 2011 article,

The British Library is launching a new library-in-an-iPad application that gives tablet users access to tens of thousands of 19th-century books in their original form. The app, called the 19th Century Historical Collection, is taking a notably different tack to putting classic literature online than rivals such as the Kindle platform: Antiquarian books viewed through the British Library application will come in their original form–complete with illustrations, typefaces, pull-out maps and even the occasional paper wear.

This project follows from the British Library’s previous mobile app project, Treasures. Here’s a video about that one,

Getting back to the most recent project, the 19th Century Historical Collection (from the British Library’s June 7, 2011 news release),

The British Library 19th Century Historical Collection App forms a treasure trove of classics and lesser known titles in fields ranging from travel writing and natural history to fiction and philosophy. The app represents the latest landmark in the British Library’s progress towards its long-term vision of making more of its historic collections available to many more users through innovative technology. [emphasis mine]

I’m happy to see that the staff at the British Library remain open to ideas and experimentation. As I noted in my July 29, 2010 posting (Love letter to the British Library) about copyright, I’ve been having an affair with the British Library since 2000. Here’s an excerpt from that posting which relates directly to these latest initiatives,

Dame Lynne Brindley, the Chief Executive Officer for the British Library had this to say in her introduction to the [British Library’s] paper [Driving UK Research — Is copyright a help or a hindrance?],

There is a supreme irony that just as technology is allowing greater access to books and other creative works than ever before for education and research, new restrictions threaten to lock away digital content in a way we would never countenance for printed material.

Let’s not wake up in five years’ time and realise we have unwittingly lost a fundamental building block for innovation, education and research in the UK. Who is protecting the public interest in the digital world? We need to redefine copyright in the digital age and find a balance to benefit creators, educators, researchers, the creative industries – and the knowledge economy. (p. 3)

In this case, the action matches what’s been said. Bravo!

ETA June 21, 2011: The British Library has recently made a deal with Google to digitize 250,000 texts. All of the books are in the public domain. You can read more about the project/deal in Kit Eaton’s June 20, 2011 article for Fast Company, Pulp, Non-Fiction: On The British Library’s Book-Digitizing Deal With Google. From the article,

Google’s got several other high-profile deals with other libraries, but the British Library deal is significant because the BL is the second biggest library in the world, after the Library of Congress (if you’re counting books, rather than periodicals). There are 14 million books among 150 million texts in a variety of formats and three million are added every year–because the BL is a legal deposit library, so it gets a copy of all books produced in the U.K. and Ireland, including many books from overseas that are published in Britain.

The Library’s chief executive Dame Lynne Brindley has commented on the new deal, highlighting the original mission of the Library to make knowledge accessible to everyone–the Google deal is “building on this proud tradition.” Since anyone with a browser can now access the material for free from anywhere in the world, the deal sets an important precedent that may be expanded in the future.

Making 60,000 texts immediately readable on your iPad is one thing, and adding another 250,000 is another. The British Library is sending a big signal out about historic texts, and it could subtly change how you think about books. For one thing, students’ essays are going to be peppered with even more esoteric quotes from obscure publications as they ill-advisedly Google their way through writing term papers. It also boosts Google’s standing in the “free” books stakes compared to competitors like Amazon, and it does imply that in the future even more of the 150 million texts in the British Library may make it online.

Interesting development!


Human-Computer interfaces: flying with thoughtpower, reading minds, and wrapping a telephone around your wrist

This time I’ve decided to explore a few of the human/computer interface stories I’ve run across lately. So this posting is largely speculative and rambling as I’m not driving towards a conclusion.

My first item is a May 3, 2011 news item on physorg.com. It concerns an art installation at Rensselaer Polytechnic Institute, The Ascent. From the news item,

A team of Rensselaer Polytechnic Institute students has created a system that pairs an EEG headset with a 3-D theatrical flying harness, allowing users to “fly” by controlling their thoughts. The “Infinity Simulator” will make its debut with an art installation [The Ascent] in which participants rise into the air – and trigger light, sound, and video effects – by calming their thoughts.

I found a video of someone demonstrating this project:
http://blog.makezine.com/archive/2011/03/eeg-controlled-wire-flight.html

Please do watch:

I’ve seen this a few times and it still absolutely blows me away.

If you should be near Rensselaer on May 12, 2011, you could have a chance to fly using your own thoughtpower, a harness, and an EEG helmet. From the event webpage,

Come ride The Ascent, a playful mash-up of theatrics, gaming and mind-control. The Ascent is a live-action, theatrical ride experience created for almost anyone to try. Individual riders wear an EEG headset, which reads brainwaves, along with a waist harness, and by marshaling their calm, focus, and concentration, try to levitate themselves thirty feet into the air as a small audience watches from below. The experience is full of obstacles–as a rider ascends via the power of concentration, sound and light also respond to brain activity, creating a storm of stimuli that conspires to distract the rider from achieving the goal: levitating into “transcendence.” The paradox is that in order to succeed, you need to release your desire for achievement, and contend with what might be the biggest obstacle: yourself.

Theater Artist and Experience Designer Yehuda Duenyas (XXXY) presents his MFA Thesis project The Ascent, and its operating platform the Infinity System, a new user driven experience created specifically for EMPAC’s automated rigging system.

The Infinity System is a new platform and user interface for 3D flying which combines aspects of thrill-ride, live-action video game, and interactive installation.

Using a unique and intuitive interface, the Infinity System uses 3D rigging to move bodies creatively through space, while employing wearable sensors to manipulate audio and visual content.

Like a live-action stunt-show crossed with a video game, the user is given the superhuman ability to safely and freely fly, leap, bound, flip, run up walls, fall from great heights, swoop, buzz, drop, soar, and otherwise creatively defy gravity.

“The effect is nothing short of movie magic.” – Sean Hollister, Engadget

Here’s a brief description of the technology behind this ‘Ascent’ (from the news item on physorg.com),

Ten computer programs running simultaneously link the commercially available EEG headset to the computer-controlled 3-D flying harness and various theater systems, said Todd [Michael Todd, a Rensselaer 2010 graduate in computer science].

Within the theater, the rigging – including the harness – is controlled by a Stage Tech NOMAD console; lights are controlled by an ION console running MIDI show control; sound through MAX/MSP; and video through Isadora and Jitter. The “Infinity Simulator,” a series of three C programs written by Todd, acts as intermediary between the headset and the theater systems, connecting and conveying all input and output.

“We’ve built a software system on top of the rigging control board and now have control of it through an iPad, and since we have the iPad control, we can have anything control it,” said Duenyas. “The ‘Infinity Simulator’ is the center; everything talks to the ‘Infinity Simulator.’”
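The actual “Infinity Simulator” is three C programs gluing the headset to a Stage Tech NOMAD rigging console, MIDI-controlled lights, MAX/MSP sound, and Isadora/Jitter video. Purely to illustrate the intermediary idea, here is a sketch of one loop iteration; every name and number in it is hypothetical, not Todd’s code,

```python
# Illustrative sketch of the intermediary pattern described above, not the
# real Infinity Simulator. It maps an EEG calm/focus score to a target
# height and fans the same signal out to the theatre systems.

MAX_HEIGHT_M = 9.0   # the ride levitates riders about thirty feet

def intermediary_tick(calm_score, send_to_rigging, send_to_lights, send_to_sound):
    """One loop iteration: translate an EEG calm score in [0, 1] to outputs."""
    target_height = max(0.0, min(1.0, calm_score)) * MAX_HEIGHT_M
    send_to_rigging(target_height)   # harness rises as the rider calms down
    send_to_lights(calm_score)       # lighting responds to brain activity
    send_to_sound(calm_score)        # as does the sound design
    return target_height

# Example wiring with stub outputs:
log = []
intermediary_tick(0.6, log.append, lambda x: None, lambda x: None)
print(log)  # [5.4] -> rider at 5.4 m of the 9 m travel
```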

This May 3, 2011 article (Mystery Man Gives Mind-Reading Tech More Early Cash Than Facebook, Google Combined) by Kit Eaton on Fast Company also concerns itself with a brain/computer interface. From the article,

Imagine the money that could be made by a drug company that accurately predicted and treated the onset of Alzheimer’s before any symptoms surfaced. That may give us an idea why NeuroVigil, a company specializing in non-invasive, wireless brain-recording tech, just got a cash injection that puts it at a valuation “twice the combined seed valuations of Google’s and Facebook’s first rounds,” according to a company announcement.

NeuroVigil’s key product at the moment is the iBrain, a slim device in a flexible head-cap that’s designed to be worn for continuous EEG monitoring of a patient’s brain function–mainly during sleep. It’s non-invasive, and replaces older technology that could only access these kinds of brain functions via critically implanted electrodes actually on the brain itself. The idea is, first, to record how brain function changes over time, perhaps as a particular combination of drugs is administered or to help diagnose particular brain pathologies–such as epilepsy.

But the other half of the potentially lucrative equation is the ability to analyze the trove of data coming from iBrain. And that’s where NeuroVigil’s SPEARS algorithm enters the picture. Not only is the company simplifying collection of brain data with a device that can be relatively comfortably worn during all sorts of tasks–sleeping, driving, watching advertising–but the combination of iBrain and SPEARS multiplies the efficiency of data analysis [emphasis mine].
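Nothing about SPEARS itself is public beyond the claims quoted above, so as a stand-in here is the most conventional kind of EEG analysis: estimating power in the classic frequency bands from a single channel, the sort of raw material sleep-staging algorithms typically start from,

```python
# Generic single-channel EEG band-power estimate. This is NOT NeuroVigil's
# proprietary SPEARS algorithm, just a conventional baseline analysis.

import numpy as np

FS = 256  # assumed sampling rate in Hz
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(signal, fs=FS):
    """Return mean spectral power per classic EEG band for a 1-D signal."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

# Synthetic 10 Hz (alpha-range) test signal, 4 seconds long:
t = np.arange(0, 4, 1.0 / FS)
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(len(t))
powers = band_powers(eeg)
print(max(powers, key=powers.get))  # -> 'alpha'
```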

I assume it’s the notion of combining the two technologies (iBrain and SPEARS) that spawned the ‘mind-reading’ part of this article’s title. The technology could be used for early detection and diagnosis, as well as other possibilities, as Eaton notes,

It’s also possible it could develop its technology into non-medicinal uses such as human-computer interfaces–in an earlier announcement, NeuroVigil noted, “We plan to make these kinds of devices available to the transportation industry, biofeedback, and defense. Applications regarding pandemics and bioterrorism are being considered but cannot be shared in this format.” And there’s even a popular line of kids’ toys that use an essentially similar technique, powered by NeuroSky sensors–themselves destined for future uses as games console controllers or even input devices for computers.

What these two technologies have in common is that, in some fashion or other, they have (shy of implanting a computer chip) a relatively direct interface with our brains, which means (to me anyway) a very different relationship between humans and computers.

In the next couple of items, I’m going to profile two very similar technologies that allow for more traditional human/computer interactions, one of which I’ve posted about previously: the Nokia Morph (most recently in my Sept. 29, 2010 posting).

It was first introduced as a type of flexible phone with other capabilities. Since then, they seem to have elaborated on those capabilities. Here’s a description of what they now call the ‘Morph concept’ in a [ETA May 12, 2011: inserted correct link information] May 4, 2011 news item on Nanowerk,

Morph is a joint nanotechnology concept developed by Nokia Research Center (NRC) and the University of Cambridge (UK). Morph is a concept that demonstrates how future mobile devices might be stretchable and flexible, allowing the user to transform their mobile device into radically different shapes. It demonstrates the ultimate functionality that nanotechnology might be capable of delivering: flexible materials, transparent electronics and self-cleaning surfaces.

Morph will act as a gateway. It will connect the user to the local environment as well as the global internet. It is an attentive device that adapts to the context – it shapes itself according to the context. The device can change its form from rigid to flexible and stretchable. Buttons of the user interface can grow up from a flat surface when needed. The user will never have to worry about battery life. It is a device that will help us in our everyday life, to keep ourselves connected and in shape. It is one significant piece of a system that will help us to look after the environment.

Without the new materials, i.e. the new structures enabled by the novel materials and manufacturing methods, it would be impossible to build a Morph kind of device. Graphene has an important role in different components of the new device and the ecosystem needed to make the gateway and context awareness possible in an energy-efficient way.

Graphene will enable evolution of the current technology, e.g. continuation of the ever-increasing computing power, where such performance would require sub-nanometer-scale transistors if conventional materials were used.

For someone who’s been following news of the Morph for the last few years, this news item doesn’t give you any new information. Still, it’s nice to be reminded of the Morph project. Here’s a video produced by the University of Cambridge that illustrates some of the project’s hopes for the Morph concept,

While the folks at the Nokia Research Centre and University of Cambridge have been working on their project, it appears the team at the Human Media Lab in the School of Computing at Queen’s University (Kingston, Ontario, Canada), in cooperation with a team from Arizona State University and E Ink Corporation, has been able to produce a prototype of something remarkably similar, albeit with fewer functions. The PaperPhone is being introduced at the Association for Computing Machinery’s CHI 2011 (Computer Human Interaction) conference in Vancouver, Canada next Tuesday, May 10, 2011.

Here’s more about it from a May 4, 2011 news item on Nanowerk,

The world’s first interactive paper computer is set to revolutionize the world of interactive computing.

“This is the future. Everything is going to look and feel like this within five years,” says creator Roel Vertegaal, the director of Queen’s University’s Human Media Lab. “This computer looks, feels and operates like a small sheet of interactive paper. You interact with it by bending it into a cell phone, flipping the corner to turn pages, or writing on it with a pen.”

The smartphone prototype, called PaperPhone, is best described as a flexible iPhone – it does everything a smartphone does, like store books, play music or make phone calls. But its display consists of a 9.5 cm diagonal thin film flexible E Ink display. The flexible form of the display makes it much more portable than any current mobile computer: it will shape with your pocket.
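The bend interactions are easy to sketch in code. The sensor layout and thresholds below are invented for illustration and are not the actual gesture recognizer described by Vertegaal’s group,

```python
# Hypothetical bend-gesture classifier, invented for illustration; not the
# PaperPhone's actual sensing pipeline. Assume two flex sensors reporting
# normalized curvature in [-1, 1]: one at a corner, one along the spine.

def classify_bend(corner_flex, spine_flex, threshold=0.3):
    """Map two hypothetical flex-sensor readings to a gesture name."""
    if corner_flex > threshold:
        return "page forward"   # flip the corner, like turning a page
    if corner_flex < -threshold:
        return "page back"
    if spine_flex > threshold:
        return "answer call"    # bend the whole sheet, phone-style
    return "no gesture"

print(classify_bend(corner_flex=0.5, spine_flex=0.0))  # -> 'page forward'
```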

For anyone who knows the novel, it’s very Diamond Age (by Neal Stephenson). On a more technical note, I would have liked more information about the display’s technology. What is E Ink using? Graphene? Carbon nanotubes?

(That does not look like paper to me but I suppose you could call it ‘paperlike’.)

In reviewing all these news items, it seems to me there are two themes: the computer as bodywear and the computer as an extension of our thoughts. Both of these are more intimate relationships than we’ve had with the computer till now, the latter far more so than the former. If any of you have any thoughts on this, please do leave a comment as I would be delighted to engage in some discussion about this.

You can get more information about the Association for Computing Machinery’s CHI 2011 (Computer Human Interaction) conference, where Dr. Vertegaal will be presenting, here.

You can find more about Dr. Vertegaal and the Human Media Lab at Queen’s University here.

The academic paper being presented at the Vancouver conference is here.

Also, if you are interested in the hardware end of things, you can check out E Ink Corporation, the company that partnered with the team from Queen’s and Arizona State University to create the PaperPhone. Interestingly, E Ink is a spin off company from the Massachusetts Institute of Technology (MIT).

Blood, memristors, cyborgs plus brain-controlled computers, prosthetics, and art

The memristor, a circuit element that quite interests me [April 7, 2010 posting], seems to be moving from being a purely electrical engineering term to one that’s used metaphorically to describe biological processes in a way that is transforming my understanding of machine/human (and other animal) interfaces from a science fiction concept to reality.

On March 2, 2011, Kate McAlpine wrote an article for New Scientist suggesting that skin has memristive properties, while noting that the same has been said of the brain. From “Sweat ducts make skin a memristor”,

Synapses, junctions between neurons in the brain, display electrical behaviour that depends on past activity and are said to behave like memristors. This has raised the prospect of using memristors as the basis of an artificial brain.

Now, by re-examining data from the early 1980s on the electrical conductivity of human skin in response to various voltages, Gorm Johnsen and his colleagues at the University of Oslo in Norway have uncovered a more prosaic example of memristive behaviour in nature.

They found that when a negative electrical potential is applied to skin on various parts of the arm, creating a current, that stretch of skin exhibits a low resistance to a subsequent current flowing through the skin. But if the first potential is positive relative to the skin, then a subsequent potential produces a current that meets with a much higher resistance. In other words, the skin has a memory of previous currents. The finding is due to be published in Physical Review E.
The researchers attribute skin’s memristive behaviour to its sweat ducts.

More recently, there’s been some excitement about a research team in India that’s working with blood so they can eventually create a ‘liquid memristor’. Rachel Courtland wrote a brief item on the ‘blood memristor’ on April 1, 2011 for the IEEE Tech Talk blog,

S.P. Kosta of the Education Campus Changa in Gujarat, India and colleagues have published a paper in the International Journal of Medical Engineering and Informatics showing that human blood changes its electrical resistance depending on how much voltage is applied. It also seems to retain memory of this resistance for at least five minutes.

The team says that makes human blood a memristor: the fourth in the family of fundamental circuit elements that includes the resistor, the capacitor, and the inductor. Proposed in 1971, the memristor wasn’t demonstrated until 2008, when HP senior fellow Stanley Williams and colleagues built a memristor device made of doped titanium dioxide.
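Since the term anchors this whole posting, it’s worth seeing what makes a memristor different from an ordinary resistor: its resistance depends on the history of the charge that has flowed through it, which is exactly the ‘memory of previous currents’ the Oslo team saw in skin. Here’s a minimal simulation sketch based on the linear drift model HP published for its titanium dioxide device; the code is my own toy illustration and the parameter values are ballpark assumptions, not HP’s measured figures,

```python
# Toy memristor simulation based on the HP linear drift model
# (Strukov et al., 2008). Parameter values are rough assumptions
# chosen for illustration only.
import numpy as np

R_ON, R_OFF = 100.0, 16e3  # resistance when fully doped / undoped (ohms)
D = 10e-9                  # device thickness (m)
MU_V = 1e-14               # dopant mobility (m^2/(V*s)), ballpark

def simulate(voltages, dt=1e-3, w0=0.5):
    """Integrate the doped-region width w under an applied voltage waveform."""
    w = w0 * D
    currents = []
    for v in voltages:
        # Memristance mixes R_ON and R_OFF weighted by the state w/D,
        # so resistance depends on the charge that has already flowed.
        m = R_ON * (w / D) + R_OFF * (1 - w / D)
        i = v / m
        w += MU_V * (R_ON / D) * i * dt  # linear dopant drift
        w = min(max(w, 0.0), D)          # clip to physical bounds
        currents.append(i)
    return np.array(currents)

# Driving the device with a sine wave traces out the pinched
# current-voltage hysteresis loop that is the memristor's signature.
t = np.linspace(0.0, 2.0 * np.pi, 1000)
currents = simulate(np.sin(t))
```

Run the simulated device forward and backward and it responds differently depending on what came before, which is the same asymmetry the skin and blood experiments report.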

There was also a March 30, 2011 news item on Nanowerk about the Indian research, titled “Blood simple circuitry for cyborgs”, which provided this information,

They [the research team] constructed the laboratory-based biological memristor using a 10 ml test tube filled with human blood held at 37 °C, into which two electrodes were inserted; appropriate measuring instrumentation was attached. The experimental memristor shows that resistance varies with applied voltage polarity and magnitude, and this memory effect is sustained for at least five minutes in the device.

Having demonstrated memristor behavior in blood, the team next tested whether the same behavior would be observed in a device through which blood is flowing. This step was also successful. The next stage will be to develop a micro-channel version of the flow memristor device and to integrate several to carry out particular logic functions. This research is still a long way from an electronic-to-biological interface, but it bodes well for the development of such devices in the future.

Kit Eaton in an April 4, 2011 article (Electronics Made from Human Blood Cells Suggest Cyborg Interfaces, Spark Nightmares) on the Fast Company website gives more details about possible future applications,

Ultimately, the fact that a biological system could be used to interact with a hard semiconductor system could revolutionize biomechanics. That’s because wiring devices like cochlear implants, nerve-triggered artificial limbs and artificial eyeballs into the body at the moment involves a terribly difficult integration of metal wiring–with all the associated risk of infection and rejection. Plus it’s really a very promising first step toward making a cyborg. Countdown to military interest in this tech in 5…4…3…

It should be noted that the team in India is working towards applications in neuroprosthetics. As for the Norwegian team with their ‘sweat duct/skin memristor’, the article did not specify what types of applications, if any, their work might lead to.

As evidenced by the research covered in these news items, the term memristor seems to be drifting or, more accurately, developing a second, ghost identity as it is applied to biological processes.

The body as a machine is a notion that’s been around for a while, as has the notion of combining the two. The first is a metaphor, while the second is a staple of science fiction which, in a minor way, has found a home in the real-life practice of body hacking, where someone implants a magnet or computer chip into their body (my May 27, 2010 posting). So the memristor becoming a metaphor for certain biological processes doesn’t seem like something new but rather the next step in a process that’s well on its way.

Two students at Ryerson University (Toronto, Canada) recently announced that they had developed a brain-controlled prosthetic. From the March 30, 2011 news item on Nanowerk,

Two Ryerson University undergraduate biomedical engineering students are changing the world of medical prosthetics with a newly developed prosthetic arm that is controlled by brain signals. The Artificial Muscle-Operated (AMO) Arm not only gives amputees more range of movement than other prosthetic arms but also allows them to avoid invasive surgeries and could potentially save hundreds of thousands of dollars. The AMO Arm is controlled by the user’s brain signals and is powered by ‘artificial muscles’ – simple pneumatic pumps and valves – to create movements. In contrast, traditional prosthetic limbs – which typically offer more limited movements – rely on intricate and expensive electrical and mechanical components.

Developed by third-year student Thiago Caires and second-year student Michal Prywata, the AMO Arm is controlled by the brain and uses compressed air as the main source of power. The digital device makes use of signals in the brain that continue to fire even after a limb is amputated. Users wear a headset that senses a signal – for example, the thought “up” – and sends it wirelessly to a miniature computer in the arm. The computer then compares the signal to others in a database. The resulting information is sent to the pneumatic system, which, in turn, activates the arm to create the correct movement. Simulating the expansion and contraction of real muscles, the system makes use of compressed air from a small, refillable tank in the user’s pocket.

I think what they mean is that the components are not traditionally electrical and mechanical but are in fact informed by emerging technologies and the science that supports them. After all, the computer must run on some kind of electricity, and brain activity (wireless signals from the brain will be controlling the prosthetic) is often described as electrical. The result is that the human and the machine are effectively made one, since the prosthetic arm is controlled as if it were a ‘biological’ arm.
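To make that human-machine loop a little more concrete, here’s a hypothetical sketch of the pipeline described in the news item: headset signal in, comparison against a database of trained patterns, pneumatic command out. Every name, vector, and command below is my own invention for illustration; the students’ actual software is not described in the news item,

```python
# Hypothetical sketch of a thought-to-pneumatics control loop.
# All feature vectors, labels, and command names are invented;
# this is not the Ryerson team's software.
import numpy as np

# Pretend database: feature vectors recorded while the user rehearsed
# each "thought" command during a training session.
TEMPLATES = {
    "up":   np.array([0.8, 0.1, 0.3]),
    "down": np.array([0.2, 0.9, 0.4]),
    "grip": np.array([0.5, 0.4, 0.9]),
}

VALVE_COMMANDS = {"up": "open_valve_1", "down": "open_valve_2", "grip": "open_valve_3"}

def classify(features):
    """Nearest-template match: the simplest possible 'compare to a database' step."""
    return min(TEMPLATES, key=lambda label: np.linalg.norm(features - TEMPLATES[label]))

def control_step(headset_features):
    """One pass through the loop: classify the signal, emit a pneumatic command."""
    return VALVE_COMMANDS[classify(headset_features)]

print(control_step(np.array([0.75, 0.15, 0.35])))  # -> open_valve_1 (the "up" thought)
```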

On another part of the spectrum, Iraqi artist Wafaa Bilal made headlines recently when he had a camera implanted into the back of his head, creating a third eye. Designed as a one-year project, it was cut short when the artist developed an infection at the site of one of the metal posts used to anchor the camera to his head and had to remove it. From the Feb. 11, 2011 BBC news item,

An artist who had a camera implanted into the back of his head has been forced to remove it after his body rejected part of the device.

Iraqi-born Wafaa Bilal had surgery last week to remove one of three posts holding the camera in place as it posed a risk of infection.

The camera had been taking a photo every minute as part of a year-long project.

Wafaa Bilal and camera (image downloaded from BBC website)

(The artist would like to try it again but, in the meantime, has slung the camera around his neck as a substitute.)

In Bilal’s case, the body is being significantly altered, as the machine (a camera) is implanted in a place (the back of the head) where no animal has one.

What I’m getting at with all of this is that at the same time we are expanding the memristor’s meaning from a term for a concept in electrical engineering to one that includes biological processes, we are exploring new ways of integrating machinery into our bodies. In effect, our relationships to our bodies and to machines are changing, and that change can be traced in the language we use to describe ourselves.
 

Finger pinches today, heartbeats tomorrow and electricity forever

Devices powered by energy generated and harvested from one’s own body have been of tremendous interest to me. Last year I mentioned some research in this area by Professor Zhong Lin Wang at Georgia Tech (Georgia Institute of Technology) in a July 12, 2010 posting. Well, Wang and his team recently announced that they have developed the first commercially viable nanogenerator. From the March 29, 2011 news item on Physorg.com,

After six years of intensive effort, scientists are reporting development of the first commercially viable nanogenerator, a flexible chip that can use body movements — a finger pinch now en route to a pulse beat in the future — to generate electricity. Speaking here today at the 241st National Meeting & Exposition of the American Chemical Society, they described boosting the device’s power output by thousands of times and its voltage by 150 times to finally move it out of the lab and toward everyday life.

“This development represents a milestone toward producing portable electronics that can be powered by body movements without the use of batteries or electrical outlets,” said lead scientist Zhong Lin Wang, Ph.D. “Our nanogenerators are poised to change lives in the future. Their potential is only limited by one’s imagination.”

Here’s how it works (from Kit Eaton’s article on Fast Company),

The trick used by Dr. Zhong Lin Wang’s team has been to utilize nanowires made of zinc oxide (ZnO). ZnO is a piezoelectric material–meaning it changes shape slightly when an electrical field is applied across it, or a current is generated when it’s flexed by an external force. By combining nanoscopic wires (each 500 times narrower than a human hair) of ZnO into a flexible bundle, the team found it could generate truly workable amounts of energy. The bundle is actually bonded to a flexible polymer slice, and in the experimental setup five pinky-nail-size nanogenerators were stacked up to create a power supply that can push out 1 microamp at about 3 volts. That doesn’t sound like a lot, but it was enough to power an LED and an LCD screen in a demonstration of the technology’s effectiveness.
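Those figures invite a quick back-of-the-envelope check: 1 microamp at 3 volts is about 3 microwatts,

```python
# Back-of-the-envelope arithmetic on the demo numbers quoted above:
# five stacked nanogenerators delivering ~1 microamp at ~3 volts.
current = 1e-6   # amps (1 microamp)
voltage = 3.0    # volts
power = current * voltage
print(f"Output power: {power * 1e6:.0f} microwatts")  # -> 3 microwatts
```

Three microwatts won’t drive a typical indicator LED continuously (those usually draw milliamps), so I assume the demonstration either used a very low-current LED or stored charge up between flashes; the article doesn’t say which.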

Dexter Johnson at Nanoclast on the IEEE (Institute of Electrical and Electronics Engineers) website notes in his March 30, 2011 posting (http://spectrum.ieee.org/nanoclast/semiconductors/nanotechnology/powering-our-electronic-devices-with-nanogenerators-looks-more-feasible) that the nanogenerator’s commercial viability is dependent on work being done at the University of Illinois,

I would have happily chalked this story [about the nanogenerator] up to one more excellent job of getting nanomaterial research into the mainstream press, but because of recent work by Eric Pop and his colleagues at the University of Illinois’s Beckman Institute in reducing the energy consumed by electronic devices it seems a bit more intriguing now.

So low is the energy consumption of the electronics proposed by the University of Illinois research that a mobile device may not need a battery but could possibly operate on the energy generated from piezoelectric-enabled nanogenerators contained within such devices, like those proposed by Wang.

I have a suspicion it’s going to be a while before I will be wearing nanogenerators to harvest the electricity my body produces. Meanwhile, I have some questions about the possible uses for nanogenerators (from the Kit Eaton article),

The search for tiny power generator technology has slowly inched forward for years for good reason–there are a trillion medical and surveillance uses–not to mention countless consumer electronics applications– for a system that could grab electrical power from something nearby that’s moving even just a tiny bit. Imagine an implanted insulin pump, or a pacemaker that’s powered by the throbbing of the heart or blood vessels nearby (and then imagine the pacemaker powering the heart, which is powered by the pacemaker, and so on and so on….) and you see how useful such a system could be.

It’s the reference to surveillance that makes me a little uneasy.