Tag Archives: Association of Computing Machinery

Teaching kids to code with cultural research and embroidery machines

Caption: University of Washington researchers taught a group of high schoolers to code by combining cultural research into various embroidery traditions with “computational embroidery.” The method teaches kids to encode embroidery patterns on a computer through a coding language called Turtlestitch. Here, a student stitched plants with code, then hand-embroidered a bee. Credit: Kivuva et al./SIGCSE

Textiles and computing are more closely linked than most of us realize. It was a surprise (to me, anyway) to learn that the Jacquard loom was influential in the development of the computer (see this June 25, 2019 essay “Programming patterns: the story of the Jacquard loom” on the Science and Industry Museum in Manchester [UK] website). As for embroidery, that too has an historical link to computing (see my May 22, 2023 posting “Ada Lovelace’s skills (embroidery, languages, and more) led to her pioneering computer work in the 19th century“).

The latest embroidery link to computing was announced in a March 14, 2024 news item on phys.org, Note: A link has been removed,

Even in tech-heavy Washington state, the numbers of students with access to computer science classes aren’t higher than national averages: In the 2022–2023 school year, 48% of public high schools offered foundational CS [computer science] classes and 5% of middle school and high school students took such classes.

Those numbers have inched up, but historically marginalized populations are still less likely to attend schools teaching computer science, and certain groups—such as Latinx students and young women—are less likely than their peers to be enrolled in the classes even if the school offers them.

To reach a greater diversity of grade-school students, University of Washington researchers have taught a group of high schoolers to code by combining cultural research into various embroidery traditions—such as Mexican, Arab and Japanese—with “computational embroidery.” The method lets users encode embroidery patterns on a computer through an open-source coding language called Turtlestitch, in which they fit visual blocks together. An electronic embroidery machine then stitches the patterns into fabric.

A March 14, 2024 University of Washington news release (also on EurekAlert), which originated the news item, describes the research in more detail, Note: Links have been removed,

“We’ve come a long way as a country in offering some computer science courses in schools,” said co-lead author F. Megumi Kivuva, a UW doctoral student in the Information School. “But we’re learning that access doesn’t necessarily mean equity. It doesn’t mean underrepresented minority groups are always getting the opportunity to learn. And sometimes all it means is that if there’s one 20-student CS class, all 3,000 students at the school count as having ‘access.’ [emphases mine] Our computational embroidery class was really a way to engage diverse groups of students and show that their identities have a place in the classroom.”

In designing the course, the researchers aimed to make coding accessible to a demographically diverse group of 12 students. To make space for them to explore their curiosities, the team used a method called “co-construction” where the students had a say each week in what they learned and how they’d be assessed.

“We wanted to dispel the myth that a coder is someone sitting in a corner, not being very social, typing on their computer,” Kivuva said.

Before delving into Turtlestitch, students spent a week exploring cultural traditions in embroidery — whether those connected to their own cultures or those they were curious about. For one student, bringing his identity into the work meant taking inspiration from his Mexican heritage; for another, it meant embroidering an image of bubble tea because it’s her favorite drink; for a third, stitching a corgi.

Students also spent a week learning to embroider by hand. The craft is an easy fit for coding because both rely on structures of repetition. But embroidery is tactile, so students were able to see their code move from the screen into the physical world. They were also able to augment what they coded with hand stitching, letting them distinguish what the human and the machine were good at. For instance, one student decided to code the design for a flower, then add a bee by hand.
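
Turtlestitch itself is block-based (it is built on the Snap! visual language), so there is no text syntax to quote here, but the repeat-and-rotate logic behind a stitched motif looks much the same in any turtle-graphics environment. A rough, purely illustrative sketch in Python’s standard turtle module (the petal counts and sizes are arbitrary, not taken from the students’ work):

```python
import turtle

def petal(t, size=60):
    """Draw one petal as two 60-degree arcs -- a single repeated motif."""
    for _ in range(2):
        t.circle(size, 60)   # arc with radius `size`, sweeping 60 degrees
        t.left(120)

def flower(t, petals=12, size=60):
    """Repeat the petal motif around a full circle, like a stitch pattern."""
    for _ in range(petals):
        petal(t, size)
        t.left(360 / petals)

if __name__ == "__main__":
    t = turtle.Turtle()
    t.speed(0)        # draw as fast as possible
    flower(t)
    turtle.done()     # keep the window open
```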

“There’s a long history of overlooking crafts that have traditionally been perceived as feminized,” said co-lead author Jayne Everson, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. “So combining this overlooked art that is deeply technical with computing was really fun, because I don’t see computing as more or less technical than embroidery.”

The class ran for six weeks over the summer, and researchers were impressed by the interest it elicited. In fact, one of the main drawbacks researchers found was that six weeks felt too short, given the curiosity the students showed. Since the technology is affordable — the embroidery machine is $400 and the software is free — Kivuva plans to tailor the course to be approachable for kindergarteners to 5th-grade refugee students. Since they were so pleased with the high student engagement, Kivuva and Everson will also run a workshop on their method at the Computer Science Teachers Association [CSTA] conference this summer.

“I was constantly blown away by the way students were engaging when they were given freedom. Some were staying after class to keep working,” said Everson. “I come from a math and science teaching background. To get students to stick around after class is kind of like, ‘Alright, we’ve done it. That’s all I want.’”

Additional co-authors on the paper were Camilo Montes De Haro, a UW undergraduate researcher in the iSchool, and Amy J. Ko, a UW professor in the iSchool. This research was funded by the National Science Foundation, Microsoft, Adobe and Google.

I wanted to know a little more about equity and access and found this in the introduction to the paper (link to and citation for the paper follow or there’s the PDF of the paper),

Efforts to broaden participation in computing at the K-12 level have led to an increasing number of schools (53%) offering CS, however, participation is low. Code.org reports that 6% of high school, 3.9% of middle school, and 7.3% of primary school students are enrolled [4]. Furthermore, historically marginalized populations are also underrepresented in K-12 CS [4, 9]. Prior work suggests that there are systemic barriers like sexism, racism, and classism that lead to inequities in primary and secondary computing education [9].

Here’s a link to and a citation for the paper,

Cultural-Centric Computational Embroidery by F. Megumi Kivuva, Jayne Everson, Camilo Montes De Haro, and Amy J. Ko. SIGCSE 2024: Proceedings of the 55th ACM [Association for Computing Machinery] Technical Symposium on Computer Science Education, Vol. 1, March 2024, pages 673–679. DOI: https://doi.org/10.1145/3626252.3630818 Published: 07 March 2024

This paper is open access.

The Computer Science Teachers Association (CSTA) 2024 conference mentioned in the news release is being held in Las Vegas, Nevada, July 16–19, 2024.

Shapeshifting on demand but no stretching yet: morphees

This research (Morphees) is from the University of Bristol, where researchers have created prototypes for shapeshifting mobile devices,

A high-fidelity prototype using projection and tracking on wood tiles that are actuated with thin shape-memory alloy wires [downloaded from http://www.bris.ac.uk/news/2013/9332.html/]

The Apr. 28, 2013 news release on EurekAlert provides more detail,

The research, led by Dr Anne Roudaut and Professor Sriram Subramanian from the University of Bristol’s Department of Computer Science, has used ‘shape resolution’ to compare the resolution of six prototypes the team has built using the latest technologies in shape-changing materials, such as shape memory alloy and electroactive polymer.

One example of a device is the team’s concept of Morphees, self-actuated flexible mobile devices that can change shape on-demand to better fit the many services they are likely to support.

The team believe Morphees will be the next generation of mobile devices, where users can download applications that embed a dedicated form factor, for instance the “stress ball app” that collapses the device in on itself or the “game app” that makes it adopt a console-like shape.

Dr Anne Roudaut, Research Assistant in the Department of Computer Science’s Bristol Interaction and Graphics group, said: “The interesting thing about our work is that we are a step towards enabling our mobile devices to change shape on-demand. Imagine downloading a game application on the app-store and that the mobile phone would shape-shift into a console-like shape in order to help the device to be grasped properly. The device could also transform into a sphere to serve as a stress ball, or bend itself to hide the screen when a password is being typed so passers-by can’t see private information.”
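
To make the ‘dedicated form factor’ idea concrete, here is a purely speculative sketch, with invented names that do not come from the Bristol prototypes, of how an app might request a shape and a simple controller might resolve it:

```python
# Speculative sketch only: none of these names come from the Bristol prototypes.
# Each app declares the shape it would like the device to take; the controller
# falls back to flat if the hardware doesn't support the request.

SUPPORTED_SHAPES = {"flat", "console", "sphere", "screen_shield"}

APP_FORM_FACTORS = {
    "game": "console",                  # wrap into a gamepad-like shape
    "stress_ball": "sphere",            # collapse in on itself
    "password_entry": "screen_shield",  # bend to hide the screen from passers-by
}

def requested_shape(app_name: str) -> str:
    """Return the shape an app asks for, defaulting to flat."""
    return APP_FORM_FACTORS.get(app_name, "flat")

def morph_to(app_name: str) -> str:
    """Resolve an app's request against what the actuators can actually do."""
    shape = requested_shape(app_name)
    if shape not in SUPPORTED_SHAPES:
        shape = "flat"
    return f"actuating shape-memory wires -> {shape}"

if __name__ == "__main__":
    for app in ("game", "stress_ball", "email"):
        print(app, "->", morph_to(app))
```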

By comparing the shape resolution of their prototypes, the researchers have created insights to help designers towards creating high shape resolution Morphees.

In the future the team hope to build higher shape resolution Morphees by investigating the flexibility of materials. They are also interested in exploring other kinds of deformations that the prototypes did not explore, such as porosity and stretchability.

Here’s the video where the researchers demonstrate their morphees,


The work will be presented at ACM CHI 2013, sometime between Saturday, 27 April and Thursday, 2 May 2013, in Paris, France. For those who’d like to see the paper to be presented, here’s a link to it,

Morphees: Toward High “Shape Resolution” in Self-Actuated Flexible Mobile Devices by Anne Roudaut, Abhijit Karnik, Markus Löchtefeld, and Sriram Subramanian

After reading the news release and watching the video, I am reminded of the ‘morph’ concept, a shapeshifting, wearable device proposed by Cambridge University and Nokia. Last I wrote about that project, they had announced a stretchable skin, as per my Nov. 7, 2011 posting.

For those who are interested in what ACM CHI 2013 is all about, from the home page,

The ACM SIGCHI Conference on Human Factors in Computing Systems is the premier international conference on human-computer interaction. CHI 2013 is about changing perspectives: we draw from the constantly changing perspectives of the diverse CHI community and beyond, but we also change perspectives, offering new visions of people interacting with technology. The conference is multidisciplinary, drawing from science, engineering and design, with contributions from research and industry in 15 different venues. CHI brings together students and experts from over 60 countries, representing different cultures and different application areas, whose diverse perspectives influence each other.

CHI 2013 is located in vibrant Paris, France, the most visited city in the world. The conference will be held at the Palais des Congrès de Paris. First in Europe in research and development, with the highest concentration of higher education students in Europe, Paris is a world-class center for business and culture, with over 3800 historical monuments. The Louvre’s pyramid captures the spirit of CHI’13, offering diverse perspectives on design and technology, contrasting the old and new. The simple glass sides reveal inner complexity, sometimes transparent, sometimes reflecting the people and buildings that surround it, in the constantly changing Paris light.

CHI 2013 welcomes works addressing research on all aspects of human-computer interaction (HCI), as well as case studies of interactive system designs, innovative proof-of-concept, and presentations by experts on the latest challenges and innovations in the field. In addition to a long-standing focus on professionals in design, engineering, management, and user experience, this year’s conference has made special efforts to serve communities in the areas of design, management, engineering, user experience, arts, sustainability, children, games and health. We look forward to seeing you at CHI 2013 in Paris!

As I recall, ACM stands for Association for Computing Machinery, CHI for Computer-Human Interaction, and SIG for Special Interest Group.

ETA May 13, 2013: I meant to do this two weeks ago (Apr. 30, 2013), ah well. Roel Vertegaal and his team at Canada’s Queen’s University introduced something called a MorePhone, which can curl up and change shape, at CHI 2013. From the Apr. 30, 2013 news release on EurekAlert*,

Researchers at Queen’s University’s Human Media Lab have developed a new smartphone – called MorePhone – which can morph its shape to give users a silent yet visual cue of an incoming phone call, text message or email.

“This is another step in the direction of radically new interaction techniques afforded by smartphones based on thin film, flexible display technologies” says Roel Vertegaal (School of Computing), director of the Human Media Lab at Queen’s University who developed the flexible PaperPhone and PaperTab.

“Users are familiar with hearing their phone ring or feeling it vibrate in silent mode. One of the problems with current silent forms of notification is that users often miss notifications when not holding their phone. With MorePhone, they can leave their smartphone on the table and observe visual shape changes when someone is trying to contact them.”

MorePhone is not a traditional smartphone. It is made of a thin, flexible electrophoretic display manufactured by Plastic Logic – a British company and a world leader in plastic electronics. Sandwiched beneath the display are a number of shape memory alloy wires that contract when the phone notifies the user. This allows the phone to curl either its entire body or up to three individual corners. Each corner can be tailored to convey a particular message. For example, users can set the top right corner of the MorePhone to bend when receiving a text message, and the bottom right corner when receiving an email. Corners can also repeatedly bend up and down to convey messages of greater urgency.
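
As a thought experiment only (the corner names and actuator interface below are invented, not Plastic Logic’s or the Human Media Lab’s code), the corner-per-notification scheme might be sketched like this:

```python
import time

# Hypothetical sketch: the corner names and the actuate() interface are
# invented stand-ins for the shape-memory alloy wires described above.

CORNER_FOR = {
    "text_message": "top_right",
    "email": "bottom_right",
    "phone_call": "whole_body",
}

def actuate(part: str, bend: bool) -> None:
    """Stand-in for energizing (bend) or relaxing the wires at one corner."""
    print(f"{'bend' if bend else 'relax'} {part}")

def notify(kind: str, urgent: bool = False) -> None:
    part = CORNER_FOR.get(kind, "top_left")
    if urgent:
        # Repeatedly bend up and down to signal greater urgency.
        for _ in range(3):
            actuate(part, True)
            time.sleep(0.2)
            actuate(part, False)
            time.sleep(0.2)
    else:
        actuate(part, True)

if __name__ == "__main__":
    notify("text_message")
    notify("email", urgent=True)
```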

I have written about Vertegaal and his team’s ‘paper’ devices previously. The most recent piece is this Jan. 9, 2013 posting, Canada’s Queen’s University strikes again with its ‘paper’ devices. You can find out more about Plastic Logic here.

*’Eurkealert’ changed to ‘EurekAlert’ on Feb. 17, 2016.

Skills training: get ready for the robots

If the boffins at the Massachusetts Institute of Technology (MIT) are right, soon we may be learning alongside robots and using the same techniques.  Helen Knight’s Feb. 11, 2013 news release for MIT highlights a recent study showing that robots, like humans, learn better if they cross-train. From the news release,

Robots are increasingly being used in the manufacturing industry to perform tasks that bring them into closer contact with humans. But while a great deal of work is being done to ensure robots and humans can operate safely side-by-side, more effort is needed to make robots smart enough to work effectively with people, says Julie Shah, an assistant professor of aeronautics and astronautics at MIT and head of the Interactive Robotics Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL).

“People aren’t robots, they don’t do things the same way every single time,” Shah says. “And so there is a mismatch between the way we program robots to perform tasks in exactly the same way each time and what we need them to do if they are going to work in concert with people.”

Most existing research into making robots better team players is based on the concept of interactive reward, in which a human trainer gives a positive or negative response each time a robot performs a task.

However, human studies carried out by the military have shown that simply telling people they have done well or badly at a task is a very inefficient method of encouraging them to work well as a team.

Here’s the experiment Shah and her student performed,

So Shah and PhD student Stefanos Nikolaidis began to investigate whether techniques that have been shown to work well in training people could also be applied to mixed teams of humans and robots. One such technique, known as cross-training, sees team members swap roles with each other on given days. “This allows people to form a better idea of how their role affects their partner and how their partner’s role affects them,” Shah says.

In a paper to be presented at the International Conference on Human-Robot Interaction in Tokyo in March [2013], Shah and Nikolaidis will present the results of experiments they carried out with a mixed group of humans and robots, demonstrating that cross-training is an extremely effective team-building tool.

More specifically,

To allow robots to take part in the cross-training experiments, the pair first had to design a new algorithm to allow the devices to learn from their role-swapping experiences. So they modified existing reinforcement-learning algorithms to allow the robots to take in not only information from positive and negative rewards, but also information gained through demonstration. In this way, by watching their human counterparts switch roles to carry out their work, the robots were able to learn how the humans wanted them to perform the same task.
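
The paper describes the actual algorithm; purely as a hedged illustration of the general idea of blending scalar rewards with demonstrated choices, a toy tabular learner might look like the sketch below. All names and parameters are invented, not the authors’:

```python
import random
from collections import defaultdict

# Toy illustration only -- not the algorithm from the MIT paper. A tabular
# learner that blends two signals: scalar rewards ("good robot"/"bad robot")
# and demonstrated state-action pairs observed while roles are swapped.

class BlendedLearner:
    def __init__(self, actions, alpha=0.1, gamma=0.9, demo_weight=1.0):
        self.q = defaultdict(float)     # (state, action) -> estimated value
        self.actions = list(actions)
        self.alpha = alpha              # learning rate
        self.gamma = gamma              # discount factor
        self.demo_weight = demo_weight  # how strongly demonstrations count

    def update_from_reward(self, state, action, reward, next_state):
        """Standard Q-learning update driven by interactive reward."""
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

    def update_from_demonstration(self, state, demonstrated_action):
        """Nudge the value of the action the human actually chose in this state."""
        self.q[(state, demonstrated_action)] += self.alpha * self.demo_weight

    def act(self, state, epsilon=0.1):
        """Mostly pick the best-known action, occasionally explore."""
        if random.random() < epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

if __name__ == "__main__":
    robot = BlendedLearner(actions=["place_bolt", "wait", "hand_tool"])
    robot.update_from_demonstration("bolt_ready", "place_bolt")
    robot.update_from_reward("bolt_ready", "place_bolt", reward=1.0, next_state="done")
    print(robot.act("bolt_ready", epsilon=0.0))
```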

Each human-robot team then carried out a simulated task in a virtual environment, with half of the teams using the conventional interactive reward approach, and half using the cross-training technique of switching roles halfway through the session. Once the teams had completed this virtual training session, they were asked to carry out the task in the real world, but this time sticking to their own designated roles.

Shah and Nikolaidis found that the period in which human and robot were working at the same time — known as concurrent motion — increased by 71 percent in teams that had taken part in cross-training, compared to the interactive reward teams. They also found that the amount of time the humans spent doing nothing — while waiting for the robot to complete a stage of the task, for example — decreased by 41 percent.

What’s more, when the pair studied the robots themselves, they found that the learning algorithms recorded a much lower level of uncertainty about what their human teammate was likely to do next — a measure known as the entropy level — if they had been through cross-training.
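
For readers unfamiliar with the term, entropy here is just a number summarizing how unsure the robot is about its teammate’s next action. A quick illustration with made-up probabilities:

```python
import math

# Illustrative numbers only: entropy of the robot's predicted distribution
# over which action its human teammate will take next.

def entropy_bits(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

before = [0.25, 0.25, 0.25, 0.25]   # no idea which of 4 actions comes next
after = [0.85, 0.05, 0.05, 0.05]    # fairly sure after cross-training

print(entropy_bits(before))  # 2.0 bits -- maximum uncertainty over 4 options
print(entropy_bits(after))   # ~0.85 bits -- much lower uncertainty
```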

Finally, when responding to a questionnaire after the experiment, human participants in cross-training were far more likely to say the robot had carried out the task according to their preferences than those in the reward-only group, and reported greater levels of trust in their robotic teammate. “This is the first evidence that human-robot teamwork is improved when a human and robot train together by switching roles, in a manner similar to effective human team training practices,” Nikolaidis says.

Shah believes this improvement in team performance could be due to the greater involvement of both parties in the cross-training process. “When the person trains the robot through reward it is one-way: The person says ‘good robot’ or the person says ‘bad robot,’ and it’s a very one-way passage of information,” Shah says. “But when you switch roles the person is better able to adapt to the robot’s capabilities and learn what it is likely to do, and so we think that it is adaptation on the person’s side that results in a better team performance.”

The work shows that strategies that are successful in improving interaction among humans can often do the same for humans and robots, says Kerstin Dautenhahn, a professor of artificial intelligence at the University of Hertfordshire in the U.K. “People easily attribute human characteristics to a robot and treat it socially, so it is not entirely surprising that this transfer from the human-human domain to the human-robot domain not only made the teamwork more efficient, but also enhanced the experience for the participants, in terms of trusting the robot,” Dautenhahn says.

The paper (Human-Robot Cross-Training: Computational Formulation, Modeling and Evaluation of a Human Team Training Strategy) written by Nikolaidis and Shah can be found here, and the website for the conference (International Conference on Human-Robot Interaction [HRI]; 8th ACM [Association for Computing Machinery]/IEEE [Institute of Electrical and Electronics Engineers] Conference on Human-Robot Interaction) where it will be presented is here.

Human-Computer interfaces: flying with thoughtpower, reading minds, and wrapping a telephone around your wrist

This time I’ve decided to explore a few of the human/computer interface stories I’ve run across lately. So this posting is largely speculative and rambling as I’m not driving towards a conclusion.

My first item is a May 3, 2011 news item on physorg.com. It concerns an art installation at Rensselaer Polytechnic Institute, The Ascent. From the news item,

A team of Rensselaer Polytechnic Institute students has created a system that pairs an EEG headset with a 3-D theatrical flying harness, allowing users to “fly” by controlling their thoughts. The “Infinity Simulator” will make its debut with an art installation [The Ascent] in which participants rise into the air – and trigger light, sound, and video effects – by calming their thoughts.

I found a video of someone demonstrating this project:
http://blog.makezine.com/archive/2011/03/eeg-controlled-wire-flight.html

Please do watch:

I’ve seen this a few times and it still absolutely blows me away.

If you should be near Rensselaer on May 12, 2011, you could have a chance to fly using your own thoughtpower, a harness, and an EEG helmet. From the event webpage,

Come ride The Ascent, a playful mash-up of theatrics, gaming and mind-control. The Ascent is a live-action, theatrical ride experience created for almost anyone to try. Individual riders wear an EEG headset, which reads brainwaves, along with a waist harness, and by marshaling their calm, focus, and concentration, try to levitate themselves thirty feet into the air as a small audience watches from below. The experience is full of obstacles – as a rider ascends via the power of concentration, sound and light also respond to brain activity, creating a storm of stimuli that conspires to distract the rider from achieving the goal: levitating into “transcendence.” The paradox is that in order to succeed, you need to release your desire for achievement, and contend with what might be the biggest obstacle: yourself.

Theater Artist and Experience Designer Yehuda Duenyas (XXXY) presents his MFA Thesis project The Ascent, and its operating platform the Infinity System, a new user driven experience created specifically for EMPAC’s automated rigging system.

The Infinity System is a new platform and user interface for 3D flying which combines aspects of thrill-ride, live-action video game, and interactive installation.

Using a unique and intuitive interface, the Infinity System uses 3D rigging to move bodies creatively through space, while employing wearable sensors to manipulate audio and visual content.

Like a live-action stunt-show crossed with a video game, the user is given the superhuman ability to safely and freely fly, leap, bound, flip, run up walls, fall from great heights, swoop, buzz, drop, soar, and otherwise creatively defy gravity.

“The effect is nothing short of movie magic.” – Sean Hollister, Engadget

Here’s a brief description of the technology behind this ‘Ascent’ (from the news item on physorg.com),

Ten computer programs running simultaneously link the commercially available EEG headset to the computer-controlled 3-D flying harness and various theater systems, said Todd. [Michael Todd, a Rensselaer 2010 graduate in computer science]

Within the theater, the rigging – including the harness – is controlled by a Stage Tech NOMAD console; lights are controlled by an ION console running MIDI show control; sound through MAX/MSP; and video through Isadora and Jitter. The “Infinity Simulator,” a series of three C programs written by Todd, acts as intermediary between the headset and the theater systems, connecting and conveying all input and output.

“We’ve built a software system on top of the rigging control board and now have control of it through an iPad, and since we have the iPad control, we can have anything control it,” said Duenyas. “The ‘Infinity Simulator’ is the center; everything talks to the ‘Infinity Simulator.’”
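
The real ‘Infinity Simulator’ is a set of C programs talking to theater consoles; purely to illustrate the kind of glue logic being described, here is a speculative Python sketch that maps a (faked) EEG calm score to a rigging height and some cue intensities. Every name and threshold is invented:

```python
# Speculative Python stand-in for the C "Infinity Simulator" glue described
# above. The calm score, the 30-foot ceiling and the cue formulas are all
# invented for illustration; nothing here talks to real consoles.

MAX_HEIGHT_M = 9.0  # roughly thirty feet

def target_height(calm_score: float) -> float:
    """Map a 0..1 calm score from the EEG headset to a harness height."""
    clamped = max(0.0, min(1.0, calm_score))
    return clamped * MAX_HEIGHT_M

def theater_cues(calm_score: float) -> dict:
    """Derive simple intensity values for the light and sound systems."""
    return {
        "rigging_height_m": round(target_height(calm_score), 2),
        "light_intensity": round(1.0 - calm_score, 2),  # more distraction when less calm
        "sound_level": round(1.0 - calm_score, 2),
    }

if __name__ == "__main__":
    for score in (0.2, 0.5, 0.9):
        print(score, theater_cues(score))
```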

This May 3, 2011 article (Mystery Man Gives Mind-Reading Tech More Early Cash Than Facebook, Google Combined) by Kit Eaton on Fast Company also concerns itself with a brain/computer interface. From the article,

Imagine the money that could be made by a drug company that accurately predicted and treated the onset of Alzheimer’s before any symptoms surfaced. That may give us an idea why NeuroVigil, a company specializing in non-invasive, wireless brain-recording tech, just got a cash injection that puts it at a valuation “twice the combined seed valuations of Google’s and Facebook’s first rounds,” according to a company announcement.

NeuroVigil’s key product at the moment is the iBrain, a slim device in a flexible head-cap that’s designed to be worn for continuous EEG monitoring of a patient’s brain function–mainly during sleep. It’s non-invasive, and replaces older technology that could only access these kinds of brain functions via critically implanted electrodes actually on the brain itself. The idea is, first, to record how brain function changes over time, perhaps as a particular combination of drugs is administered or to help diagnose particular brain pathologies–such as epilepsy.

But the other half of the potentially lucrative equation is the ability to analyze the trove of data coming from iBrain. And that’s where NeuroVigil’s SPEARS algorithm enters the picture. Not only is the company simplifying collection of brain data with a device that can be relatively comfortably worn during all sorts of tasks–sleeping, driving, watching advertising–but the combination of iBrain and SPEARS multiplies the efficiency of data analysis [emphasis mine].

I assume it’s the notion of combining the two technologies (iBrain and SPEARS) that spawned the ‘mind-reading’ part of this article’s title. The technology could be used for early detection and diagnosis, as well as for other possibilities, as Eaton notes,

It’s also possible it could develop its technology into non-medicinal uses such as human-computer interfaces–in an earlier announcement, NeuroVigil noted, “We plan to make these kinds of devices available to the transportation industry, biofeedback, and defense. Applications regarding pandemics and bioterrorism are being considered but cannot be shared in this format.” And there’s even a popular line of kid’s toys that use an essentially similar technique, powered by NeuroSky sensors–themselves destined for future uses as games console controllers or even input devices for computers.

What these two technologies have in common is that, in some fashion or other, they have (shy of implanting a computer chip) a relatively direct interface with our brains, which means (to me anyway) a very different relationship between humans and computers.

In the next couple of items I’m going to profile two very similar technologies that allow for more traditional human/computer interactions; I’ve posted about one of them, the Nokia Morph, previously (most recently in my Sept. 29, 2010 posting).

It was first introduced as a type of flexible phone with other capabilities. Since then, they seem to have elaborated on those capabilities. Here’s a description of what they now call the ‘Morph concept’ in a [ETA May 12, 2011: inserted correct link information] May 4, 2011 news item on Nanowerk,

Morph is a joint nanotechnology concept developed by Nokia Research Center (NRC) and the University of Cambridge (UK). Morph is a concept that demonstrates how future mobile devices might be stretchable and flexible, allowing the user to transform their mobile device into radically different shapes. It demonstrates the ultimate functionality that nanotechnology might be capable of delivering: flexible materials, transparent electronics and self-cleaning surfaces.

Morph will act as a gateway. It will connect the user to the local environment as well as the global internet. It is an attentive device that adapts to the context – it shapes according to the context. The device can change its form from rigid to flexible and stretchable. Buttons of the user interface can grow up from a flat surface when needed. The user will never have to worry about battery life. It is a device that will help us in our everyday life, to keep ourselves connected and in shape. It is one significant piece of a system that will help us to look after the environment.

Without the new materials, i.e., new structures enabled by the novel materials and manufacturing methods, it would be impossible to build a Morph kind of device. Graphene has an important role in different components of the new device and the ecosystem needed to make the gateway and context awareness possible in an energy-efficient way.

Graphene will enable the evolution of current technology, e.g., the continuation of ever-increasing computing power when maintaining performance would require sub-nanometer-scale transistors using conventional materials.

For someone who’s been following news of the Morph for the last few years, this news item doesn’t give you any new information. Still, it’s nice to be reminded of the Morph project. Here’s a video produced by the University of Cambridge that illustrates some of the project’s hopes for the Morph concept,

http://www.youtube.com/watch?v=PKihhDC7-bI

While the folks at the Nokia Research Centre and University of Cambridge have been working on their project, it appears the team at the Human Media Lab at the School of Computing at Queen’s University (Kingston, Ontario, Canada), in cooperation with a team from Arizona State University and E Ink Corporation, has been able to produce a prototype of something remarkably similar, albeit with fewer functions. The PaperPhone is being introduced at the Association for Computing Machinery’s CHI 2011 (Computer-Human Interaction) conference in Vancouver, Canada, next Tuesday, May 10, 2011.

Here’s more about it from a May 4, 2011 news item on Nanowerk,

The world’s first interactive paper computer is set to revolutionize the world of interactive computing.

“This is the future. Everything is going to look and feel like this within five years,” says creator Roel Vertegaal, the director of Queen’s University’s Human Media Lab. “This computer looks, feels and operates like a small sheet of interactive paper. You interact with it by bending it into a cell phone, flipping the corner to turn pages, or writing on it with a pen.”

The smartphone prototype, called PaperPhone, is best described as a flexible iPhone – it does everything a smartphone does, like store books, play music or make phone calls. But its display consists of a 9.5 cm diagonal thin-film flexible E Ink display. The flexible form of the display makes it much more portable than any current mobile computer: it will shape with your pocket.

For anyone who knows the novel, it’s very Diamond Age (by Neal Stephenson). On a more technical note, I would have liked more information about the display’s technology. What is E Ink using? Graphene? Carbon nanotubes?

(That does not look like paper to me, but I suppose you could call it ‘paperlike’.)
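
Before moving on: the ‘bend to interact’ idea in Vertegaal’s quote can be pictured as a small gesture classifier. The following sketch is speculative; sensor names, thresholds and actions are all invented, not the PaperPhone’s actual code:

```python
# Speculative sketch: flex-sensor names, thresholds and actions are invented,
# not the PaperPhone's actual bend-gesture code.

def classify_bend(corner_flex: float, body_flex: float) -> str:
    """Turn raw flex readings (0 = flat, 1 = fully bent) into a gesture."""
    if body_flex > 0.6:
        return "answer_call"   # bending the whole sheet, like flexing a phone
    if corner_flex > 0.4:
        return "turn_page"     # flipping a corner, like dog-earing paper
    return "none"

if __name__ == "__main__":
    print(classify_bend(corner_flex=0.5, body_flex=0.1))  # turn_page
    print(classify_bend(corner_flex=0.1, body_flex=0.8))  # answer_call
```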

In reviewing all these news items, it seems to me there are two themes, the computer as bodywear and the computer as an extension of our thoughts. Both of these are more intimate relationships, the latter far more so than the former, than we’ve had with the computer till now. If any of you have any thoughts on this, please do leave a comment, as I would be delighted to engage in some discussion.

You can get more information about the Association for Computing Machinery’s CHI 2011 (Computer-Human Interaction) conference, where Dr. Vertegaal will be presenting, here.

You can find more about Dr. Vertegaal and the Human Media Lab at Queen’s University here.

The academic paper being presented at the Vancouver conference is here.

Also, if you are interested in the hardware end of things, you can check out E Ink Corporation, the company that partnered with the team from Queen’s and Arizona State University to create the PaperPhone. Interestingly, E Ink is a spin-off company from the Massachusetts Institute of Technology (MIT).