Tag Archives: Nokia Research Centre

Skin as a touchscreen (“smart” hands)

An April 11, 2016 news item on phys.org highlights some research presented at the IEEE (Institute of Electrical and Electronics Engineers) Haptics (touch) Symposium 2016,

Using your skin as a touchscreen has been brought a step closer after UK scientists successfully created tactile sensations on the palm using ultrasound sent through the hand.

The University of Sussex-led study – funded by the Nokia Research Centre and the European Research Council – is the first to find a way for users to feel what they are doing when interacting with displays projected on their hand.

This solves one of the biggest challenges for technology companies who see the human body, particularly the hand, as the ideal display extension for the next generation of smartwatches and other smart devices.

Current ideas rely on vibrations or pins, which both need contact with the palm to work, interrupting the display.

However, this new innovation, called SkinHaptics, sends sensations to the palm from the other side of the hand, leaving the palm free to display the screen.

An April 11, 2016 University of Sussex press release (also on EurekAlert) by James Hakner, which originated the news item, provides more detail,

The device uses ‘time-reversal’ processing to send ultrasound waves through the hand. This technique is effectively like ripples in water but in reverse – the waves become more targeted as they travel through the hand, ending at a precise point on the palm.
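To get a feel for what ‘time-reversal’ processing means in practice, here is a minimal Python sketch of the underlying signal-processing idea. This is my own toy illustration, not the Sussex team's code: the randomly generated ‘impulse response’ stands in for ultrasound scattering through the hand, and the point is that transmitting its time-reverse makes all the scattered paths add up at a single instant,

```python
# Toy 1-D illustration of time-reversal focusing (not SkinHaptics code).
# A random, decaying impulse response stands in for ultrasound scattering
# through the hand; transmitting its time-reverse makes the paths add up
# coherently at one sample -- the focused "tactile point" on the palm.
import numpy as np

rng = np.random.default_rng(42)

# Step 1: measure (here: invent) the impulse response from the emitter,
# through the hand, to the target point on the palm.
n_taps = 200
h = rng.normal(0.0, 1.0, n_taps) * np.exp(-np.arange(n_taps) / 40.0)

# Step 2: transmit the time-reversed response back through the same medium.
transmitted = h[::-1]

# Step 3: the medium convolves the transmission with h again; by reciprocity
# the result is the autocorrelation of h, which peaks sharply at one sample.
received = np.convolve(transmitted, h)

peak = np.max(np.abs(received))
sidelobe = np.median(np.abs(received))
print(f"focusing gain: {peak / sidelobe:.1f}x above the typical sidelobe")
```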

It draws on a rapidly growing field of technology called haptics, which is the science of applying touch sensation and control to interaction with computers and technology.

Professor Sriram Subramanian, who leads the research team at the University of Sussex, says that technologies will inevitably need to engage other senses, such as touch, as we enter what designers are calling an ‘eye-free’ age of technology.

He says: “Wearables are already big business and will only get bigger. But as we wear technology more, it gets smaller and we look at it less, and therefore multisensory capabilities become much more important.

“If you imagine you are on your bike and want to change the volume control on your smartwatch, the interaction space on the watch is very small. So companies are looking at how to extend this space to the hand of the user.

“What we offer people is the ability to feel their actions when they are interacting with the hand.”

The findings were presented at the IEEE Haptics Symposium [April 8 – 11] 2016 in Philadelphia, USA, by the study’s co-author Dr Daniel Spelmezan, a research assistant in the Interact Lab.

There is a video of the work (I was not able to activate the sound, if there is any accompanying this video),

The consequence of watching this silent video was that I found the whole thing somewhat mysterious.

Flexible, graphene-based display: first ever?

It seems like there’s been a lot of discussion about flexible displays, graphene or not, over the years, so the announcement of the first graphene-based flexible display might seem a little anticlimactic. That’s one of the problems with the technology and science communities: sometimes there’s so much talk about an idea or concept that by the time it becomes reality, people think it’s already been done and isn’t news.

So, kudos to the folks at the University of Cambridge who have been working on this development for a long time. From a Sept. 10, 2014 news release on EurekAlert,

The partnership between the two organisations combines the graphene expertise of the Cambridge Graphene Centre (CGC), with the transistor and display processing steps that Plastic Logic has already developed for flexible electronics. This prototype is a first example of how the partnership will accelerate the commercial development of graphene, and is a first step towards the wider implementation of graphene and graphene-like materials into flexible electronics.

The new prototype is an active matrix electrophoretic display, similar to the screens used in today’s e-readers, except it is made of flexible plastic instead of glass. In contrast to conventional displays, the pixel electronics, or backplane, of this display includes a solution-processed graphene electrode, which replaces the sputtered metal electrode layer within Plastic Logic’s conventional devices, bringing product and process benefits.

Graphene is more flexible than conventional ceramic alternatives like indium-tin oxide (ITO) and more transparent than metal films. The ultra-flexible graphene layer may enable a wide range of products, including foldable electronics. Graphene can also be processed from solution bringing inherent benefits of using more efficient printed and roll-to-roll manufacturing approaches.

The new 150 pixel per inch (150 ppi) backplane was made at low temperatures (less than 100°C) using Plastic Logic’s Organic Thin Film Transistor (OTFT) technology. The graphene electrode was deposited from solution and subsequently patterned with micron-scale features to complete the backplane.

For this prototype, the backplane was combined with an electrophoretic imaging film to create an ultra-low power and durable display. Future demonstrations may incorporate liquid crystal display (LCD) and organic light-emitting diode (OLED) technology to achieve full colour and video functionality. Lightweight flexible active-matrix backplanes may also be used for sensors, with novel digital medical imaging and gesture recognition applications already in development.

“We are happy to see our collaboration with Plastic Logic resulting in the first graphene-based electrophoretic display exploiting graphene in its pixels’ electronics,” said Professor Andrea Ferrari, Director of the Cambridge Graphene Centre. “This is a significant step forward to enable fully wearable and flexible devices. This cements the Cambridge graphene-technology cluster and shows how an effective academic-industrial partnership is key to help move graphene from the lab to the factory floor.”

As an example of how long this development has been in the works, I have a Nov. 7, 2011 posting about a University of Cambridge stretchable, electronic skin produced by what was then the university’s Nokia Research Centre. That ‘skin’ was a big step toward achieving a phone/device/flexible display wrappable around your wrist (the Morph), first publicized in 2008, as I noted in a March 30, 2010 posting.

According to the news release, there should be some more news soon,

This joint effort between Plastic Logic and the CGC was also recently boosted by a grant from the UK Technology Strategy Board, within the ‘realising the graphene revolution’ initiative. This will target the realisation of an advanced, full colour, OLED-based display within the next 12 months.

My colleague Dexter Johnson has offered some business-oriented insight into this development at Cambridge in his Sept. 9, 2014 posting on the Nanoclast blog on the IEEE (Institute of Electrical and Electronics Engineers) website (Note: Links have been removed),

In the UK’s concerted efforts to become a hub for graphene commercialization, one of the key partnerships between academic research and industry has been the one between the Cambridge Graphene Centre located at the University of Cambridge and a number of companies, including Nokia, Dyson, BAE Systems, Philips and Plastic Logic. The last on this list, Plastic Logic, was spun out originally from the University of Cambridge in 2000. However, since its beginnings it has required a $200 million investment from RusNano to keep itself afloat back in 2011, and for a time it called Mountain View, California, home.

The post is well worth reading for anyone interested in the twists and turns of graphene commercialization in the UK.

Graphene, IBM’s first graphene-based integrated circuit, and the European Union’s pathfinder programme in information technologies

A flat layer of carbon atoms packed into a two-dimensional honeycomb arrangement, graphene is being touted as a miracle material (it seems) which will enable new kinds of electronic products. Recently, there have been a number of news items and articles featuring graphene research.

Here’s my roundup of the latest and greatest graphene news. I’m starting with an application that is the closest to commercialization: IBM recently announced the creation of the first graphene-based integrated circuit. From the Bob Yirka article dated June 10, 2011 on physorg.com,

Taking a giant step forward in the creation and production of graphene based integrated circuits, IBM has announced in Science the fabrication of a graphene based integrated circuit [IC] on a single chip. The demonstration chip, known as a radio frequency “mixer,” is capable of producing frequencies up to 10 GHz, and demonstrates that it is possible to overcome the adhesion problems that have stymied researchers’ efforts in creating graphene based ICs that can be used in analog applications such as cell phones or more likely military communications.

The graphene circuits were started by growing a two or three layer graphene film on a silicon surface which was then heated to 1400°C. The graphene IC was then fabricated by employing top gated, dual fingered graphene FETs (field-effect transistors) which were then integrated with inductors. The active channels were made by spin-coating the wafer with a thin polymer and then applying a layer of hydrogen silsesquioxane. The channels were then carved by e-beam lithography. Next, the excess graphene was removed with an oxygen plasma, and then the whole works was cleaned with acetone. The result is an integrated circuit that is less than 1 mm² in total size.

Meanwhile, there’s a graphene research project in contention for a major research prize in Europe. Worth 1B Euros, the European Union’s 2011 pathfinder programme in information technology (Future and Emerging Technologies [Fet11]) will select two of six pilot actions currently under way to be awarded a Flagship Initiative prize. From the Fet11 flagships project page,

FET Flagships are large-scale, science-driven and mission oriented initiatives that aim to achieve a visionary technological goal. The scale of ambition is over 10 years of coordinated effort, and a budget of up to one billion Euro for each Flagship. The initiatives are coordinated between national and EU programmes and present global dimensions to foster European leadership and excellence in frontier research.

To prepare the launch of the FET Flagships, 6 Pilot Actions are funded for a 12-month period starting in May 2011. In the second half of 2012 two of the Pilots will be selected and launched as full FET Flagship Initiatives in 2013.

Here’s the description of the Graphene Science and technology for ICT and beyond pilot action,

Graphene, a new substance from the world of atomic and molecular scale manipulation of matter, could be the wonder material of the 21st century. Discovering just how important this material will be for Information and Communication Technologies is the long term focus of the Flagship Initiative, simply called, GRAPHENE. This aims to explore revolutionary potentials, in terms of both conventional as well as radically new fields of Information and Communication Technologies applications.

Bringing together multiple disciplines and addressing research across a whole range of issues, from fundamental understandings of material properties to Graphene production, the Flagship will provide the platform for establishing European scientific and technological leadership in the application of Graphene to Information and Communication Technologies. The proposed research includes coverage of electronics, spintronics, photonics, plasmonics and mechanics, all based on Graphene.

[Project Team:]

Andrea Ferrari, Cambridge University, UK
Jari Kinaret, Chalmers University, Sweden
Vladimir Falko, Lancaster University, UK
Jani Kivioja, NOKIA, Finland [emphases mine]

Not so coincidentally (given that one member of the team is associated with Nokia and another with Cambridge University), the Nokia Research Centre and Cambridge University jointly issued a May 4, 2011 news release (I highlighted it in my May 6, 2011 posting [scroll down past the theatre project information]) about the Morph concept (a rigid, flexible, and stretchable phone/blood pressure cuff/calculator and other electronic devices in one product), which they have been publicizing for years now. The news release concerned itself with how graphene would enable the researchers to take the Morph from idea to actuality. The webpage for the Graphene Pilot Action is here.

There’s something breathtaking about the willingness to invest up to 1B Euros in a project that spans 10 years with no guarantee of success. We’ll have to wait until 2013 before learning whether the graphene project will be one of the two selected as Flagship Initiatives.

I must say the timing of the 2010 Nobel Prize in Physics, which went to two scientists (Andre Geim and Konstantin Novoselov) for their groundbreaking work with graphene (featured in my Oct. 7, 2010 posting), seems interesting in light of this graphene activity.

The rest of these graphene items are about research that could lay the groundwork for future commercialization.

On June 13, 2011, there was a news item about foaming graphene on Nanowerk (from the news item),

Hui-Ming Cheng and co-workers from the Chinese Academy of Sciences’ Institute of Metal Research at Shenyang have now devised a chemical vapor deposition (CVD) method for turning graphene sheets into porous three-dimensional ‘foams’ with extremely high conductivity (“Three-dimensional flexible and conductive interconnected graphene networks grown by chemical vapour deposition” [published in Nature Materials 10, 424–428 (2011) doi:10.1038/nmat3001 Published online 10 April 2011]). By permeating this foam with a siloxane-based polymer, the researchers have produced a composite that can be twisted, stretched and bent without harming its electrical or mechanical properties.

Here’s an image from the Nature Publishing Group (NPG) of both the foam and the bendable, twistable, stretchable composite (downloaded from the news item on Nanowerk where you can find a larger version of the image),

A scanning electron microscopy image of the net-like structure of graphene foam (left), and a photograph of a highly conductive elastic conductor produced from the foam. (© 2011 NPG)

The ‘elastic’ conductor (image to the right) reminds me of the ‘paper’ phone which I wrote about on May 8, 2011 and May 12, 2011. (It’s a project where teams from Queen’s University [in Ontario] and Arizona State University are working to create flexible screens that give you telephony, music playing and other capabilities, much like the Morph concept.)

Researchers in Singapore have developed a graphene quantum dot using a C60 (a buckminsterfullerene). From the June 13, 2011 news item (Graphene: from spheres to perfect dots) on Nanowerk,

An electron trapped in a space of just a few nanometers across behaves very differently to one that is free. Structures that confine electrons in all three dimensions can produce some useful optical and electronic effects. Known as quantum dots, such structures are being widely investigated for use in new types of optical and electronics technologies, but because they are so small it is difficult to fabricate quantum dots reproducibly in terms of shape and size. Researchers from the National University of Singapore (NUS) and A*STAR have now developed a technique that enables graphene quantum dots of a known size to be created repeatedly and quickly (“Transforming C60 molecules into graphene quantum dots” [published in Nature Nanotechnology 6, 247–252 (2011) doi:10.1038/nnano.2011.30 Published online 20 March 2011]).

This final bit is about a nano PacMan that allows for more precise patterning, from a June 13, 2011 article written by Michael Berger,

A widely discussed method for the patterning of graphene is the channelling of graphite by metal nanoparticles in oxidizing or reducing environments (see for instance: “Nanotechnology PacMan cuts straight graphene edges”).

“All previous studies of channelling behavior have been limited by the need to perform the experiment ex situ, i.e. comparing single ‘before’ and ‘after’ images,” Peter Bøggild, an associate professor at DTU [Danish Technical University] Nanotech, explains to Nanowerk. “In these and other ex situ experiments the dynamic behavior must be inferred from the length of channels and heating time after completion of the experiment, with the rate of formation of the channel assumed to be consistent over the course of the experiment.”

In new work, reported in the June 9, 2011 advance online edition of Nano Letters (“Discrete dynamics of nanoparticle channelling in suspended graphene” [published in Nano Letters, Article ASAP, DOI: 10.1021/nl200928k, Publication Date (Web): June 9, 2011]), Bøggild and his team report the nanoscale observation of this channelling process by silver nanoparticles in an oxygen atmosphere in-situ on suspended mono- and bilayer graphene in an environmental transmission electron microscope, enabling direct concurrent observation of the process, impossible in ex-situ experiments.

Personally, I love the YouTube video I’ve included here, largely because it features blobs (as many of these videos do) but adds music and titles (which many of these videos do not), so you can better appreciate the excitement,

From the article by Michael Berger,

As a result of watching this process occur live in a transmission electron microscope, the researchers say they have seen many details that were hidden before, and video really brings the “nano pacman” behavior to life …

There’s a reason why they’re so interested in cutting graphene,

“With a deeper understanding of the fine details we hope to one day use this nanoscale channelling behavior to directly cut desired patterns out of suspended graphene sheets, with a resolution and accuracy that isn’t achievable with any other technique,” says Bøggild. “A critical advantage here is that the graphene crystal structure guides the patterning, and in our case all of the cut edges of the graphene are ‘zigzag’ edges.”

So there you have it. IBM creates the first graphene-based integrated circuit; there’s the prospect of a huge cash prize for a 10-year project on graphene that could produce the long-awaited Morph concept and other graphene-based electronics products; and a number of research teams around the world continue teasing out graphene’s secrets with ‘foam’ projects, quantum dots, and nano PacMen that cut graphene’s zigzag edges with precision.

ETA June 16, 2011: For those interested in the business end of things, i.e. market value of graphene-based products, Cameron Chai features a report, Graphene: Technologies, Applications, and Markets, in his June 16, 2011 news item on Azonano.

From the bleeding edge to the cutting edge to ubiquitous? The PaperPhone, an innovation case study in progress

This story has it all: military, patents, international competition and cooperation, sex (well, not according to the academics but I think it’s possible), and a bizarre device – the PaperPhone (last mentioned in my May 6, 2011 posting on Human-Computer Interfaces).

“If you want to know what technologies people will be using 10 years in the future, talk to the people who’ve been working on a lab project for 10 years,” said Dr. Roel Vertegaal, Director of the Human Media Lab at Queen’s University in Kingston, Ontario. By the way, 10 years is roughly the length of time Vertegaal and his team have been working on a flexible/bendable phone/computer and he believes that it will be another five to 10 years before the device is available commercially.

Image from Human Media Lab press kit

As you can see in the image, the prototype device looks like a thin piece of plastic that displays a menu. In real life that black bit to the left of the image is the head of a cable with many wires connecting it to a computer. Here’s a physical description of the device copied from the paper (PaperPhone: Understanding the Use of Bend Gestures in Mobile Devices with Flexible Electronic Paper Displays) written by Byron Lahey, Audrey Girouard, Winslow Burleson and Vertegaal,

PaperPhone consists of an Arizona State University Flexible Display Center 3.7” Bloodhound flexible electrophoretic display, augmented with a layer of 5 Flexpoint 2” bidirectional bend sensors. The prototype is driven by an E Ink Broadsheet AM 300 Kit featuring a Gumstix processor. The prototype has a refresh rate of 780 ms for a typical full screen gray scale image.

An Arduino microcontroller obtains data from the Flexpoint bend sensors at a frequency of 20 Hz. Figure 2 shows the back of the display, with the bend sensor configuration mounted on a flexible printed circuit (FPC) of our own design. We built the FPC by printing its design on DuPont Pyralux flexible circuit material with a solid ink printer, then etching the result to obtain a fully functional flexible circuit substrate. PaperPhone is not fully wireless. This is because of the supporting rigid electronics that are required to drive the display. A single, thin cable bundle connects the AM300 and Arduino hardware to the display and sensors. This design maximizes the flexibility and mobility of the display, while keeping its weight to a minimum. The AM300 and Arduino are connected to a laptop running a Max 5 patch that processes sensor data, performs bend gesture recognition and sends images to the display. p. 3
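To make the sensing side a little more concrete, here is a hedged Python sketch of the simplest kind of bend-gesture recognition one could run over five bidirectional bend sensors sampled at 20 Hz. The threshold, sensor layout, and gesture names are my own inventions for illustration; they are not the classifier or the values used in the PaperPhone paper,

```python
# Minimal sketch of threshold-based bend-gesture recognition, loosely
# modelled on PaperPhone's five bidirectional bend sensors at 20 Hz.
# Sensor layout, threshold, and gesture names are illustrative guesses.
from typing import Optional, Sequence

BEND_THRESHOLD = 0.3  # normalized reading treated as a deliberate bend

def classify_bend(readings: Sequence[float]) -> Optional[str]:
    """Map five normalized readings (-1 = bent down, +1 = bent up)
    to a gesture label, or None if no sensor passes the threshold."""
    idx = max(range(len(readings)), key=lambda i: abs(readings[i]))
    value = readings[idx]
    if abs(value) < BEND_THRESHOLD:
        return None
    position = "corner" if idx in (0, 4) else "side"
    direction = "up" if value > 0 else "down"
    return f"{position}_{direction}"  # e.g. "corner_up" might mean 'next page'

# Three simulated 20 Hz samples: a user flicking a corner upward.
for sample in ([0.0] * 5, [0.5, 0.1, 0.0, 0.0, 0.0], [0.9, 0.2, 0.0, 0.0, 0.0]):
    print(classify_bend(sample))  # None, corner_up, corner_up
```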

It may look ungainly but it represents a significant step forward for the technology as this team (composed of researchers from Queen’s University, Arizona State University, and E Ink Corporation) appears to have produced the only working prototype in the world for a personal portable flexible device that will let you make phone calls, play music, read a book, and more by bending it. As they continue to develop the product, the device will become wireless.

The PaperPhone and the research about ‘bending’, i.e., the kinds of bending gestures people would find easiest and most intuitive to use when activating the device, were presented in Vancouver in an early session at the CHI 2011 Conference where I got a chance to speak to Dr. Vertegaal and his team.

Amongst other nuggets, I found out the US Department of Defense (not DARPA [Defense Advanced Research Projects Agency], oddly enough) has provided funding for the project. Military interest is focused on the device’s low energy requirements, low-light screen, and light weight, in addition to its potential ability to be folded up and carried like a piece of paper (i.e., it could mould itself to fit a number of tight spaces), as opposed to the rigid, ungiving borders of a standard mobile device. Of course, all of these factors are quite attractive to consumers too.

As is imperative these days, the ‘bends’ that activate the device have been patented and Vertegaal is in the process of developing a startup company that will bring this device and others to market. Queen’s University has an ‘industrial transfer’ office (they probably call it something else) which is assisting him with the startup.

There is international interest in the PaperPhone that is collaborative and competitive. Vertegaal’s team at Queen’s is partnered with a team at Arizona State University led by Dr. Winslow Burleson, professor in the Computer Systems Engineering and the Arts, Media, and Engineering graduate program and with Michael McCreary, Vice President Research & Development of E Ink Corporation representing an industry partner.

On the competitive side of things, the UK’s University of Cambridge and the Finnish Nokia Research Centre have been working on the Morph, which, as I noted in my May 6, 2011 posting, still seems to be more concept than project.

Vertegaal noted that the idea of a flexible screen is not new and that North American companies have gone bankrupt trying to bring the screens to market. These days, you have to go to Taiwan for industrial production of flexible screens such as the PaperPhone’s.

One of my last questions to the team was about pornography. (In the early days of the Internet [which had its origins in military research], there were only two industries that made money online, pornography and gambling. The gambling opportunities seem pretty similar to what we already enjoy.) After an amused response, the consensus was that like gambling it’s highly unlikely a flexible phone could lend itself to anything new in the field of pornography. Personally, I’m not convinced about that one.

So there you have a case study for innovation. Work considered bleeding edge 10 years ago is now cutting edge and, in the next five to 10 years, that work will become a consumer product. Along the way you have military investment, international collaboration and competition, failure and success, and, possibly, sex.

Human-computer interfaces: flying with thoughtpower, reading minds, and wrapping a telephone around your wrist

This time I’ve decided to explore a few of the human/computer interface stories I’ve run across lately. So this posting is largely speculative and rambling as I’m not driving towards a conclusion.

My first item is a May 3, 2011 news item on physorg.com. It concerns an art installation at Rensselaer Polytechnic Institute, The Ascent. From the news item,

A team of Rensselaer Polytechnic Institute students has created a system that pairs an EEG headset with a 3-D theatrical flying harness, allowing users to “fly” by controlling their thoughts. The “Infinity Simulator” will make its debut with an art installation [The Ascent] in which participants rise into the air – and trigger light, sound, and video effects – by calming their thoughts.

I found a video of someone demonstrating this project:
http://blog.makezine.com/archive/2011/03/eeg-controlled-wire-flight.html

Please do watch:

I’ve seen this a few times and it still absolutely blows me away.

If you should be near Rensselaer on May 12, 2011, you could have a chance to fly using your own thoughtpower, a harness, and an EEG helmet. From the event webpage,

Come ride The Ascent, a playful mash-up of theatrics, gaming and mind-control. The Ascent is a live-action, theatrical ride experience created for almost anyone to try. Individual riders wear an EEG headset, which reads brainwaves, along with a waist harness, and by marshaling their calm, focus, and concentration, try to levitate themselves thirty feet into the air as a small audience watches from below. The experience is full of obstacles – as a rider ascends via the power of concentration, sound and light also respond to brain activity, creating a storm of stimuli that conspires to distract the rider from achieving the goal: levitating into “transcendence.” The paradox is that in order to succeed, you need to release your desire for achievement, and contend with what might be the biggest obstacle: yourself.

Theater Artist and Experience Designer Yehuda Duenyas (XXXY) presents his MFA Thesis project The Ascent, and its operating platform the Infinity System, a new user driven experience created specifically for EMPAC’s automated rigging system.

The Infinity System is a new platform and user interface for 3D flying which combines aspects of thrill-ride, live-action video game, and interactive installation.

Using a unique and intuitive interface, the Infinity System uses 3D rigging to move bodies creatively through space, while employing wearable sensors to manipulate audio and visual content.

Like a live-action stunt-show crossed with a video game, the user is given the superhuman ability to safely and freely fly, leap, bound, flip, run up walls, fall from great heights, swoop, buzz, drop, soar, and otherwise creatively defy gravity.

“The effect is nothing short of movie magic.” – Sean Hollister, Engadget

Here’s a brief description of the technology behind this ‘Ascent’ (from the news item on physorg.com),

Ten computer programs running simultaneously link the commercially available EEG headset to the computer-controlled 3-D flying harness and various theater systems, said Todd. [Michael Todd, a Rensselaer 2010 graduate in computer science]

Within the theater, the rigging – including the harness – is controlled by a Stage Tech NOMAD console; lights are controlled by an ION console running MIDI show control; sound through MAX/MSP; and video through Isadora and Jitter. The “Infinity Simulator,” a series of three C programs written by Todd, acts as an intermediary between the headset and the theater systems, connecting and conveying all input and output.

“We’ve built a software system on top of the rigging control board and now have control of it through an iPad, and since we have the iPad control, we can have anything control it,” said Duenyas. “The ‘Infinity Simulator’ is the center; everything talks to the ‘Infinity Simulator.’”
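In other words, the architecture is a hub: one brain-derived number fans out to the rigging, lights, sound, and video. Here is a rough Python sketch of what such a routing loop might look like. All the device interfaces below are hypothetical stand-ins of my own; the actual intermediary is Todd’s three C programs talking to the theatre consoles,

```python
# Hypothetical sketch of an "Infinity Simulator"-style mediator: read a
# calm/focus score from an EEG headset and fan it out to rigging, light,
# and sound controls. Every interface here is a stand-in, not a real API.
import random
import time

MAX_HEIGHT_FT = 30.0  # riders levitate up to thirty feet

def read_calm_score() -> float:
    """Stand-in for the EEG headset driver: returns a 0..1 calmness score."""
    return random.random()

def update_show(calm: float) -> None:
    height = calm * MAX_HEIGHT_FT  # rigging target rises as the rider calms
    brightness = calm              # lights respond to brain activity
    volume = 1.0 - calm            # the distracting 'storm' fades with calm
    print(f"rig -> {height:4.1f} ft | light {brightness:.2f} | sound {volume:.2f}")

if __name__ == "__main__":
    for _ in range(5):             # a few ticks of the control loop
        update_show(read_calm_score())
        time.sleep(0.1)
```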

This May 3, 2011 article (Mystery Man Gives Mind-Reading Tech More Early Cash Than Facebook, Google Combined) by Kit Eaton on Fast Company also concerns itself with a brain/computer interface. From the article,

Imagine the money that could be made by a drug company that accurately predicted and treated the onset of Alzheimer’s before any symptoms surfaced. That may give us an idea why NeuroVigil, a company specializing in non-invasive, wireless brain-recording tech, just got a cash injection that puts it at a valuation “twice the combined seed valuations of Google’s and Facebook’s first rounds,” according to a company announcement.

NeuroVigil’s key product at the moment is the iBrain, a slim device in a flexible head-cap that’s designed to be worn for continuous EEG monitoring of a patient’s brain function–mainly during sleep. It’s non-invasive, and replaces older technology that could only access these kinds of brain functions via critically implanted electrodes actually on the brain itself. The idea is, first, to record how brain function changes over time, perhaps as a particular combination of drugs is administered or to help diagnose particular brain pathologies–such as epilepsy.

But the other half of the potentially lucrative equation is the ability to analyze the trove of data coming from iBrain. And that’s where NeuroVigil’s SPEARS algorithm enters the picture. Not only is the company simplifying collection of brain data with a device that can be relatively comfortably worn during all sorts of tasks–sleeping, driving, watching advertising–but the combination of iBrain and SPEARS multiplies the efficiency of data analysis [emphasis mine].
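NeuroVigil hasn’t published how SPEARS works, but a generic first step in almost any sleep-EEG analysis is measuring power in the classic frequency bands. Here is a short Python sketch of that standard technique; it is only meant to suggest the flavour of such processing (it is emphatically not NeuroVigil’s algorithm, and the sampling rate is an assumption),

```python
# Generic single-channel EEG band-power extraction -- a standard first step
# in sleep analysis. This suggests the flavour of such processing only;
# it is not NeuroVigil's proprietary SPEARS algorithm.
import numpy as np

FS = 256  # assumed sampling rate in Hz
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(signal: np.ndarray, fs: int = FS) -> dict:
    """Return mean spectral power in each classic EEG band."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

# Thirty seconds of fake data: a 10 Hz (alpha) rhythm buried in noise.
t = np.arange(0, 30, 1.0 / FS)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
print(band_powers(eeg))  # the alpha band should dominate
```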

I assume it’s the notion of combining the two technologies (iBrain and SPEARS) that spawned the ‘mind-reading’ part of this article’s title. The technology could be used for early detection and diagnosis, as well as other possibilities, as Eaton notes,

It’s also possible it could develop its technology into non-medicinal uses such as human-computer interfaces–in an earlier announcement, NeuroVigil noted, “We plan to make these kinds of devices available to the transportation industry, biofeedback, and defense. Applications regarding pandemics and bioterrorism are being considered but cannot be shared in this format.” And there’s even a popular line of kids’ toys that use an essentially similar technique, powered by NeuroSky sensors–themselves destined for future uses as games console controllers or even input devices for computers.

What these two technologies have in common is that, in some fashion or other, they have (shy of implanting a computer chip) a relatively direct interface with our brains, which means (to me anyway) a very different relationship between humans and computers.

In the next couple of items I’m going to profile two technologies, very similar to each other, that allow for more traditional human/computer interactions, one of which I’ve posted about previously: the Nokia Morph (most recently in my Sept. 29, 2010 posting).

It was first introduced as a type of flexible phone with other capabilities. Since then, they seem to have elaborated on those capabilities. Here’s a description of what they now call the ‘Morph concept’ in a [ETA May 12, 2011: inserted correct link information] May 4, 2011 news item on Nanowerk,

Morph is a joint nanotechnology concept developed by Nokia Research Center (NRC) and the University of Cambridge (UK). Morph is a concept that demonstrates how future mobile devices might be stretchable and flexible, allowing the user to transform their mobile device into radically different shapes. It demonstrates the ultimate functionality that nanotechnology might be capable of delivering: flexible materials, transparent electronics and self-cleaning surfaces.

Morph will act as a gateway. It will connect the user to the local environment as well as the global internet. It is an attentive device that adapts to the context – it shapes according to the context. The device can change its form from rigid to flexible and stretchable. Buttons of the user interface can grow up from a flat surface when needed. The user will never have to worry about the battery life. It is a device that will help us in our everyday life, to keep ourselves connected and in shape. It is one significant piece of a system that will help us to look after the environment.

Without the new materials, i.e. the new structures enabled by the novel materials and manufacturing methods, it would be impossible to build a Morph kind of device. Graphene has an important role in different components of the new device and the ecosystem needed to make the gateway and context awareness possible in an energy efficient way.

Graphene will enable the evolution of current technology, e.g. the continuation of ever-increasing computing power at the point when conventional materials would require sub-nanometre-scale transistors.

For someone who’s been following news of the Morph for the last few years, this news item doesn’t give you any new information. Still, it’s nice to be reminded of the Morph project. Here’s a video produced by the University of Cambridge that illustrates some of the project’s hopes for the Morph concept,

While the folks at the Nokia Research Centre and University of Cambridge have been working on their project, it appears the team at the Human Media Lab at the School of Computing at Queen’s University (Kingston, Ontario, Canada), in cooperation with a team from Arizona State University and E Ink Corporation, has been able to produce a prototype of something remarkably similar, albeit with fewer functions. The PaperPhone is being introduced at the Association for Computing Machinery’s CHI 2011 (Computer Human Interaction) conference in Vancouver, Canada next Tuesday, May 10, 2011.

Here’s more about it from a May 4, 2011 news item on Nanowerk,

The world’s first interactive paper computer is set to revolutionize the world of interactive computing.

“This is the future. Everything is going to look and feel like this within five years,” says creator Roel Vertegaal, the director of Queen’s University’s Human Media Lab. “This computer looks, feels and operates like a small sheet of interactive paper. You interact with it by bending it into a cell phone, flipping the corner to turn pages, or writing on it with a pen.”

The smartphone prototype, called PaperPhone, is best described as a flexible iPhone – it does everything a smartphone does, like store books, play music or make phone calls. But its display consists of a 9.5 cm diagonal thin film flexible E Ink display. The flexible form of the display makes it much more portable than any current mobile computer: it will shape with your pocket.

For anyone who knows the novel, it’s very Diamond Age (by Neal Stephenson). On a more technical note, I would have liked more information about the display’s technology. What is E Ink using? Graphene? Carbon nanotubes?

(That does not look like paper to me, but I suppose you could call it ‘paperlike’.)

In reviewing all these news items, it seems to me there are two themes: the computer as bodywear and the computer as an extension of our thoughts. Both of these are more intimate relationships than we’ve had with the computer till now, the latter far more so than the former. If any of you have any thoughts on this, please do leave a comment as I would be delighted to engage in some discussion about this.

You can get more information about the Association for Computing Machinery’s CHI 2011 (Computer Human Interaction) conference, where Dr. Vertegaal will be presenting, here.

You can find more about Dr. Vertegaal and the Human Media Lab at Queen’s University here.

The academic paper being presented at the Vancouver conference is here.

Also, if you are interested in the hardware end of things, you can check out E Ink Corporation, the company that partnered with the team from Queen’s and Arizona State University to create the PaperPhone. Interestingly, E Ink is a spin-off company from the Massachusetts Institute of Technology (MIT).