Category Archives: New Media

What about the heart? and the quest to make androids lifelike

Japanese scientist Hiroshi Ishiguro has been mentioned here several times in the context of ‘lifelike’ robots. Accordingly, it’s no surprise to see Ishiguro’s name in a June 24, 2014 news item about uncannily lifelike robotic tour guides in a Tokyo museum (CBC (Canadian Broadcasting Corporation) News online),

The new robot guides at a Tokyo museum look so eerily human and speak so smoothly they almost outdo people — almost.

Japanese robotics expert Hiroshi Ishiguro, an Osaka University professor, says they will be useful for research on how people interact with robots and on what differentiates the person from the machine.

“Making androids is about exploring what it means to be human,” he told reporters Tuesday [June 24, 2014], “examining the question of what is emotion, what is awareness, what is thinking.”

In a demonstration, the remote-controlled machines moved their pink lips in time to a voice-over, twitched their eyebrows, blinked and swayed their heads from side to side. They stay seated but can move their hands.

Ishiguro and his robots were also mentioned in a May 29, 2014 article by Carey Dunne for Fast Company. The article concerned a photographic project of Luisa Whitton’s.

In her series “What About the Heart?,” British photographer Luisa Whitton documents one of the creepiest niches of the Japanese robotics industry–androids. Here, an eerily lifelike face made for a robot. [downloaded from http://www.fastcodesign.com/3031125/exposure/japans-uncanny-quest-to-humanize-robots?partner=rss]

From Dunne’s May 29, 2014 article (Note: Links have been removed),

We’re one step closer to a robot takeover. At least, that’s one interpretation of “What About the Heart?” a new series by British photographer Luisa Whitton. In 17 photos, Whitton documents one of the creepiest niches of the Japanese robotics industry–androids. These are the result of a growing group of scientists trying to make robots look like living, breathing people. Their efforts pose a question that’s becoming more relevant as Siri and her robot friends evolve: what does it mean to be human as technology progresses?

Whitton spent several months in Japan working with Hiroshi Ishiguro, a scientist who has constructed a robotic copy of himself. Ishiguro’s research focused on whether his robotic double could somehow possess his “Sonzai-Kan,” a Japanese term that translates to the “presence” or “spirit” of a person. It’s work that blurs the line between technology, philosophy, psychology, and art, using real-world studies to examine existential issues once reserved for speculation by the likes of Philip K. Dick or Sigmund Freud. And if this sounds like a sequel to Blade Runner, it gets weirder: after Ishiguro aged, he had plastic surgery so that his face still matched that of his younger, mechanical doppelganger.

I profiled Ishiguro’s robots (then called Geminoids) in a March 10, 2011 posting which featured a Danish philosopher, Henrik Scharfe, who’d commissioned a Geminoid identical to himself for research purposes. He doesn’t seem to have published any papers about his experience but there is this interview of Scharfe and his Geminoid twin by Aldith Hunkar (she’s very good) at a 2011 TEDxAmsterdam,

Mary King’s 2007 research project, Robots and AI in Japan and The West, notes a contrast between the two cultures and provides an excellent primer (Note: A link has been removed),

The Japanese scientific approach and expectations of robots and AI are far more down to earth than those of their Western counterparts. Certainly, future predictions made by Japanese scientists are far less confrontational or sci-fi-like. In an interview via email, Canadian technology journalist Tim N. Hornyak described the Japanese attitude towards robots as being “that of the craftsman, not the philosopher” and cited this as the reason for “so many rosy imaginings of a future Japan in which robots are a part of people’s everyday lives.”

Hornyak, who is author of “Loving the Machine: The Art and Science of Japanese Robots,” acknowledges that apocalyptic visions do appear in manga and anime, but emphasizes that such forecasts do not exist in government circles or within Japanese companies. Hornyak also added that while AI has for many years taken a back seat to robot development in Japan, this situation is now changing. Honda, for example, is working on giving better brains to Asimo, which is already the world’s most advanced humanoid robot. Japan is also already legislating early versions of Asimov’s laws by introducing design requirements for next-generation mobile robots.

It does seem there might be more interest in the philosophical issues in Japan these days, or possibly it’s a reflection of Ishiguro’s own current concerns (from Dunne’s May 29, 2014 article),

The project’s title derives from a discussion with Ishiguro about what it means to be human. “The definition of human will be more complicated,” Ishiguro said.

Dunne reproduces a portion of Whitton’s statement describing her purpose for these photographs,

Through Ishiguro, Whitton got in touch with a number of other scientists working on androids. “In the photographs, I am trying to subvert the traditional formula of portraiture and allure the audience into a debate on the boundaries that determine the dichotomy of the human/not human,” she writes in her artist statement. “The photographs become documents of objects that sit between scientific tool and horrid simulacrum.”

I’m not sure what she means by “horrid simulacrum” but she seems to be touching on the concept of the ‘uncanny valley’. Here’s a description I provided in a May 31, 2013 posting about animator Chris Landreth and his explorations of that valley within the context of his animated film, Subconscious Password,

Landreth also discusses the ‘uncanny valley’ and how he deliberately cast his film into that valley. For anyone who’s unfamiliar with the ‘uncanny valley’ I wrote about it in a Mar. 10, 2011 posting concerning Geminoid robots,

It seems that researchers believe that the ‘uncanny valley’ doesn’t necessarily have to exist forever and at some point, people will accept humanoid robots without hesitation. In the meantime, here’s a diagram of the ‘uncanny valley’,

From the article on Android Science by Masahiro Mori (translated by Karl F. MacDorman and Takashi Minato)

Here’s what Mori (the person who coined the term) had to say about the ‘uncanny valley’ (from Android Science),

Recently there are many industrial robots, and as we know the robots do not have a face or legs, and just rotate or extend or contract their arms, and they bear no resemblance to human beings. Certainly the policy for designing these kinds of robots is based on functionality. From this standpoint, the robots must perform functions similar to those of human factory workers, but their appearance is not evaluated. If we plot these industrial robots on a graph of familiarity versus appearance, they lie near the origin (see Figure 1 [above]). So they bear little resemblance to a human being, and in general people do not find them to be familiar. But if the designer of a toy robot puts importance on a robot’s appearance rather than its function, the robot will have a somewhat humanlike appearance with a face, two arms, two legs, and a torso. This design lets children enjoy a sense of familiarity with the humanoid toy. So the toy robot is approaching the top of the first peak.

Of course, human beings themselves lie at the final goal of robotics, which is why we make an effort to build humanlike robots. For example, a robot’s arms may be composed of a metal cylinder with many bolts, but to achieve a more humanlike appearance, we paint over the metal in skin tones. These cosmetic efforts cause a resultant increase in our sense of the robot’s familiarity. Some readers may have felt sympathy for handicapped people they have seen who attach a prosthetic arm or leg to replace a missing limb. But recently prosthetic hands have improved greatly, and we cannot distinguish them from real hands at a glance. Some prosthetic hands attempt to simulate veins, muscles, tendons, finger nails, and finger prints, and their color resembles human pigmentation. So maybe the prosthetic arm has achieved a degree of human verisimilitude on par with false teeth. But this kind of prosthetic hand is too real and when we notice it is prosthetic, we have a sense of strangeness. So if we shake the hand, we are surprised by the lack of soft tissue and cold temperature. In this case, there is no longer a sense of familiarity. It is uncanny. In mathematical terms, strangeness can be represented by negative familiarity, so the prosthetic hand is at the bottom of the valley. So in this case, the appearance is quite human like, but the familiarity is negative. This is the uncanny valley.
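Mori’s graph is qualitative rather than data-driven, but its shape is easy to reproduce. Below is a minimal Python sketch of the curve; the function and every constant in it are illustrative assumptions, not values from Mori’s article.

```python
# Qualitative sketch of Mori's uncanny valley: familiarity plotted
# against human likeness. The curve shape and constants are invented
# for illustration; Mori drew his diagram by intuition, not from data.
import numpy as np
import matplotlib.pyplot as plt

likeness = np.linspace(0, 1, 500)   # 0 = industrial robot, 1 = healthy human

# A first peak (the humanoid toy), a sharp dip near "almost human"
# (the prosthetic hand), then a recovery at full human likeness.
familiarity = (np.sin(likeness * np.pi) * 0.6
               - 1.2 * np.exp(-((likeness - 0.85) ** 2) / 0.004)
               + likeness ** 4)

plt.plot(likeness, familiarity)
plt.axhline(0, color="grey", linewidth=0.5)
plt.annotate("humanoid toy", xy=(0.42, 0.7))
plt.annotate("uncanny valley", xy=(0.68, -0.3))
plt.xlabel("human likeness")
plt.ylabel("familiarity")
plt.title("The uncanny valley (qualitative)")
plt.show()
```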

It seems that Mori is suggesting that as the differences between the original and the simulacrum become fewer and fewer, the ‘uncanny valley’ will disappear. It’s possible but I suspect before that day occurs those of us who were brought up in a world without synthetic humans (androids) may experience an intensification of the feelings aroused by an encounter with the uncanny valley even as it disappears. For those who’d like a preview, check out Luisa Whitton’s What About The Heart? project.

Tweet the International Space Station on the solstice, June 21, 2014

On the heels of the nanosatellite project (see this June 19, 2014 posting) here’s an email announcement about a very interesting project for the Summer Solstice (June 21, 2014),

The June Solstice (Saturday, June 21) is the best time to view the International Space Station [ISS] in the northern hemisphere.

But now there’s another way.

Crowdsource the pictures via Twitter.

Space enthusiasts are being encouraged to tag their tweets with #SpotTheStation and include a location name and it will go on an interactive map.

Astronaut Reid Wiseman had the idea while on the International Space Station. His tweet, for example, was “During #Exp40, spot the #ISS & tweet your town, country-or-state w/ #spotthestation (pics welcome); we’ll map it! bit.ly/SpotTheStation2”

Here’s a little more detail as to the company and agency behind this project,

Esri, a GIS mapping software provider, has partnered with the Center of Geographic Sciences in Canada to develop a Twitter app to pinpoint the exact location of the ISS sightings around the world in order to give a complete view. The global map documenting the recent ISS sightings is already live.

I have looked at the live map and tweeters have been active. You can check to see the locations. For example, as of June 19, 2014 1000 hours PDT, Canada has some 26 tweets while Florida has 40 and Munich tops them both with 132 tweets.
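Esri hasn’t published the app’s code, but the collection step is easy to picture. The sketch below is a hypothetical illustration only: it pulls recent #SpotTheStation tweets with the tweepy library (the v3-era `api.search` call) and logs them to a CSV. The credentials are placeholders, and the place-name extraction and geocoding that Esri’s app performs are only noted in a comment.

```python
# Hypothetical collection step for a #SpotTheStation map.
# Requires Twitter API credentials; tweepy v3.x provides api.search.
import csv
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")   # placeholders
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

with open("spotthestation.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["user", "time", "text"])
    for tweet in api.search(q="#SpotTheStation", count=100):
        # The real app also extracts the tweeted place name and geocodes
        # it before plotting; here we just log the raw fields.
        writer.writerow([tweet.user.screen_name, tweet.created_at, tweet.text])
```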

I have looked up the company, Esri, and found this on the About Esri History page,

Jack and Laura Dangermond founded Esri in 1969 as a small research group focused on land-use planning. The company’s early mission was to organize and analyze geographic information to help land planners and land resource managers make well-informed environmental decisions.

There’s a very interesting article on the Esri website, which provides some insight into the origins of the June 21, 2014 ‘#SpotTheStation’ project. Written by Carla Wheeler (an Esri writer), it is undated but there is mention of Chris Hadfield’s sojourn on the ISS and his attendance at an event in June 2013 after he landed. From Wheeler’s 2013 (?) article, A Map App Odyssey,

Today social media, with doses of humor, are very much a part of the space mission, with the National Aeronautics and Space Administration (NASA), the Canadian Space Agency (CSA), and many astronauts sending messages, videos, and photos back to Earth via Twitter, Facebook, and YouTube. Followers post messages for the astronauts too, making interaction about space interactive.

The photos Hadfield and fellow ISS astronaut Thomas Marshburn sent via Twitter inspired their follower David MacLean, a faculty member at the Centre of Geographic Sciences (COGS), Nova Scotia Community College, and his students to create a mapping app called Our World from the ISS. It used Esri ArcGIS Online to map more than 950 photographs of interesting places on Earth that Hadfield and Marshburn shot from space. They took the photos during their December 2012–May 2013 mission and posted the images on Twitter with their observations of each scene (in 140 characters or fewer, of course). Hadfield, a Canadian, was especially prolific and poetic. …

MacLean, also a Canadian, was intrigued by the astronauts’ unique perspective as they orbited 400 kilometers (250 miles) above earth, photographing everything from cities to barrier reefs and sand formations to smoke from brush fires. He didn’t want their geologically and geographically interesting images and descriptions—such as “taffy-twisted African rock” and the “yin and yang of ice and land”—to quickly get swallowed and lost in the fast-moving Twitterverse.

“[Hadfield] took pictures all over the earth, with wonderful prose as he described the outback of Australia and parts of Mauritania and Algeria that no one would [otherwise] get to see,” MacLean said. “Unfortunately, Twitter seems to be a very temporal medium, and all these wonderful pictures—these rich resources—slip away and you have to really look to find them.”

MacLean wondered if there was a way to preserve the images and messages in the Tweets in a form that was easy for people to find and view. He decided to try building a mapping app, which he and his students created using geographic information system (GIS) technology from Esri, online comma-separated value (CSV) files, and Google Docs spreadsheets in Google Drive. Their map displays icons, provided courtesy of the Canadian Space Agency, that look like small space stations. These show the approximate (or, at times, quite accurate) locations of each photograph. Viewers can pan the map, zoom in to any area of interest, and tap an icon. A pop-up window will appear that includes a thumbnail of the picture and the message from the astronaut. You can also click the thumbnail to see the full-size Tweet in the astronauts’ Twitter feed. (Clicking the photo in Twitter will then bring up a larger, sharper image.) It’s a little like seeing photos of landscapes in National Geographic—only taken from space.

Tap an icon north of Medina, Saudi Arabia, to see Hadfield’s May 3 [2013?] photo of the Harrat Khaybar volcanic lava field and read his post: “The Earth bubbled and spat, like boiling porridge, long ago in Saudi Arabia.” Another geologic wonder caught his eye Down Under: “A splash of dry salt, white on seared red, in Australia’s agonizingly beautiful Outback.”
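Wheeler’s article names the ingredients (tweeted photos, CSV files, ArcGIS Online) without showing them. As a rough illustration of the data side, a point layer like the one the COGS students built can start as a plain CSV with one row per photo; the column names and coordinates below are assumptions for the example, not the app’s actual schema.

```python
# Illustrative CSV layer for an "Our World from the ISS"-style map.
# ArcGIS Online can ingest a CSV with latitude/longitude columns and
# turn each row into a clickable point with a pop-up.
import csv

photos = [
    # (lat, lon, caption, tweet_url) -- approximate/placeholder values
    (25.7, 39.9, "Harrat Khaybar volcanic lava field, Saudi Arabia", "https://twitter.com/..."),
]

with open("iss_photos.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["latitude", "longitude", "caption", "tweet_url"])
    writer.writerows(photos)
```

Uploading such a file to ArcGIS Online yields a point layer whose pop-ups can hold the caption, thumbnail, and link back to the astronaut’s tweet, much as described above.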

So, on June 21, 2014 get ready to tweet ‘#SpotTheStation’ and have a joyous Summer Solstice!

The Space, a new digital museum opens with an international splash

Erica Berger in a June 14, 2014 article for Fast Company provides a fascinating account of a project where Arts Council England, the BBC, Open Data Institute, and other cultural groups partnered to create The Space (Note: Links have been removed),

This Space is no final frontier. Rather, it’s just begun as a new place for digital and experimental art.

A free and public website aimed at discovering the best emerging digital artistic talent around the world, The Space opened yesterday and is launching with a weekend [June 14 - 15, 2014] hackathon hosted by the Tate Modern in London, a first for the formidable institution. Born from a partnership between Arts Council England, the BBC, Open Data Institute, and other cultural groups, it’s “a gallery without walls,” says Alex Graham, chair of The Space. The Space is putting out an international open call for projects, the first round of which is due July 11. The projects will be funded by the partnering groups with amounts ranging from £20,000 (about $34,000) to £60,000 ($101,000) for an individual commission, and up to 50% of the total cost. Each Friday, new collaborations will launch.

Among the first installations are pieces from high-profile artists, including Marina Abramovic, who broadcasted live on the site at midnight last night, and Ai Weiwei, who has an interactive piece on The Space. There will also be a live, Google hangout theater project with actors in London, Barcelona, and Lagos and directed by Erin Gilley.

The Space can be found here,

About The Space

The Space is a free website for artists and audiences to create and explore exciting new art, commissioned by us and shared around the Whole Wide World.

We commission new talent and great artists from all art forms, creative industries, technical and digital backgrounds, through Open Calls and partnerships. The Space is one of the most exciting places on the internet to find new art to explore and enjoy.

An open call was launched on June 12, 2014,

The Space launches first Open Call
Posted … on 12 June 2014

The Space Open Call is looking for original, groundbreaking ideas for digital art. We are encouraging artists to take risks and do crazy things with technology!

This is a great opportunity for artists to be bold, ambitious and experimental, creating a work which can communicate with people round the World via mobile, tablets and desktops.

We are seeking artists working across a range of art forms and industries including, creative and digital, technology and coding, art and culture sectors, to pitch the very best original ideas to the Open Call.

If you have an idea for The Space, please go to thespace.org/opencall and complete the online form before the closing date: 12 noon (GMT) 11 July 2014.

Organizers have produced an inspirational video for this call,

I don’t know if this offer is still available (from Erica Berger’s Fast Company article about The Space) but here it is,

Sign up to be one of the first 10,000 newsletter subscribers to The Space and receive a free digital work of art from Turner Prize winner Jeremy Deller.

I availed myself of the offer at approximately 1000 hours PDT, June 16, 2014.

Call for submissions for two Electronic Literature Organization (ELO) prizes

Nothing is more heartbreaking than to be late for a submission, so here’s the deadline for the Electronic Literature prizes: May 10, 2014. The Electronic Literature Organization gives more details on its call for prize submissions webpage,

The ELO is proud to announce “The N. Katherine Hayles Award for Criticism of Electronic Literature” and “The Robert Coover Award for a Work of Electronic Literature.” Below is information including guidelines for submissions for each.

“The N. Katherine Hayles Award for Criticism of Electronic Literature”

“The N. Katherine Hayles Award for Criticism of Electronic Literature” is an award given for the best work of criticism, of any length, on the topic of electronic literature. Bestowed by the Electronic Literature Organization and funded through a generous donation from N. Katherine Hayles and others, this $1000 annual prize aims to recognize excellence in the field. The prize comes with a plaque showing the name of the winner and an acknowledgement of the achievement, and a one-year membership in the Electronic Literature Organization at the Associate Level.

We invite critical works of any length. Submissions must follow these guidelines:

1. This is an open submission. Self nominations and nominations are both welcome. Membership in the Electronic Literature Organization is not required.
2. There is no cost involved in nominations. This is a free and open award aimed at rewarding excellence.
3. ELO Board Members serving their term of office on the Board are ineligible for nomination for the award. Members of the Jury are also not allowed to be nominated for the award.
4. Three finalists for the award will be selected by a jury of specialists in electronic literature; N. Katherine Hayles will choose the winner from among the finalists.
5. Because of the nature of online publishing, it is not possible to conduct a blind review of the submissions; the jury will be responsible for fair assessment of the work.
6. Those nominated may only have one work considered for the prize. In the event that several works are identified for a nominee, the nominee will choose the work that he or she wishes to be juried.
7. All works must have already been published or made available to the public within 18 months, no earlier than December 2012.
8. All print articles must be submitted in .pdf format. Books can be sent either in .pdf format or in print format. Online articles should be submitted as a link to an online site.
9. Nominations by self or others must include a 250-word explanation of the work’s impact in the field. The winner selected for the prize must also include a professional bio and a headshot or avatar.
10. All digital materials should be emailed to [email protected] by May 15, 2014; three copies of the book should be mailed to Dr. Dene Grigar, Creative Media & Digital Culture, Washington State University Vancouver, 14204 NE Salmon Creek Ave., Vancouver, WA 98686 by May 15, 2014. [emphasis mine] Those making the nomination or the nominees themselves are responsible for mailing materials for jurying. Print materials will be returned via a self-addressed mailer.
11. Nominees and the winner retain all rights to their works. If copyright allows, ELO will be given permission to share the work or portions of it on the award webpage. Journals and presses that have published the winning work will be acknowledged on the award webpage.
12. The winner is not expected to attend the ELO conference banquet. The award will be mailed to the winner.

Timeline
Call for Nominations: April 15-May 10
Jury Deliberations: May 15-June 10
Award Announcement: ELO Conference Banquet

For more information, contact Dr. Dene Grigar, President, Electronic Literature Organization.

“The Robert Coover Award for a Work of Electronic Literature”

“The Robert Coover Award for a Work of Electronic Literature” is an award given for the best work of electronic literature of any length or genre. Bestowed by the Electronic Literature Organization and funded through a generous donation from supporters and members of the ELO, this $1000 annual prize aims to recognize creative excellence. The prize comes with a plaque showing the name of the winner and an acknowledgement of the achievement, and a one-year membership in the Electronic Literature Organization at the Associate Level.

We invite creative works of any length and genre. Submissions must follow these guidelines:

1. This is an open submission. Self nominations and nominations are both welcome. Membership in the Electronic Literature Organization is not required.
2. There is no cost involved in nominations. This is a free and open award aimed at rewarding excellence.
3. ELO Board Members serving their term of office on the Board are ineligible for nomination for the award. Members of the Jury are also not allowed to be nominated for the award.
4. Three finalists for the award will be selected by a jury of specialists in electronic literature; Robert Coover or a representative of his will choose the winner from among the finalists.
5. Because of the nature of online publishing, it is not possible to conduct a blind review of the submissions; the jury will be responsible for fair assessment of the work.
6. Those nominated may only have one work considered for the prize. In the event that several works are identified for a nominee, the nominee will choose the work that he or she wishes to be juried.
7. All works must have already been published or made available to the public within 18 months, no earlier than December 2012.
8. Works should be submitted either as a link to an online site or in the case of non-web work, available via Dropbox or sent as a CD/DVD or flash drive.
9. Nominations by self or others must include a 250-word explanation of the work’s impact in the field. The winner selected for the prize must also include a professional bio and a headshot or avatar.
10. Links to the digital materials or to Dropbox should be emailed to [email protected] by May 15, 2014; three copies of the CD/DVDs and flash drives should be mailed to Dr. Dene Grigar, Creative Media & Digital Culture, Washington State University Vancouver, 14204 NE Salmon Creek Ave., Vancouver, WA 98686 by May 15, 2014. [emphasis mine] Those making the nomination or the nominees themselves are responsible for mailing materials for jurying. Physical materials will be returned via a self-addressed mailer.
11. Nominees and the winner retain all rights to their works. If copyright allows, ELO will be given permission to share the work or portions of it on the award webpage. Journals and presses that have published the winning work will be acknowledged on the award webpage.
12. The winner is not expected to attend the ELO conference banquet. The award will be mailed to the winner.

Timeline
Call for Nominations: April 19-May 10
Jury Deliberations: May 15-June 10
Award Announcement: ELO Conference Banquet

For more information, contact Dr. Dene Grigar, President, Electronic Literature Organization.

Good luck and please note the mailing address in the submission guidelines is for Vancouver, US and not for Vancouver, Canada. Finally, thank you to Christine Wilks of crissxross for the heads up via LinkedIn.

The human body as a musical instrument: performance at the University of British Columbia on April 10, 2014

It’s called The Bang! Festival of interactive music, with performances of one kind or another scheduled throughout the day on April 10, 2014 (12 pm: MUSC 320; 1:30 pm: Grad Work; 2 pm: Research) and a finale featuring the Laptop Orchestra at 8 pm at the University of British Columbia’s (UBC) School of Music (Barnett Recital Hall on the Vancouver campus, Canada).

Here’s more about Bob Pritchard, professor of music, and the students who have put this programme together (from an April 7, 2014 UBC news release; Note: Links have been removed),

Pritchard, a professor of music at the University of British Columbia, is using technologies that capture physical movement to transform the human body into a musical instrument.

Pritchard and the music and engineering students who make up the UBC Laptop Orchestra wanted to inject more human performance in digital music after attending one too many uninspiring laptop music sets. “Live electronic music can be a bit of an oxymoron,” says Pritchard, referring to artists gazing at their laptops and a heavy reliance on backing tracks.

“Emerging tools and techniques can help electronic musicians find more creative and engaging ways to present their work. What results is a richer experience, which can create a deeper, more emotional connection with your audience.”

The Laptop Orchestra, which will perform a free public concert on April 10, is an extension of a music technology course at UBC’s School of Music. Comprised of 17 students from Arts, Science and Engineering, its members act as musicians, dancers, composers, programmers and hardware specialists. They create adventurous electroacoustic music using programmed and acoustic instruments, including harp, piano, clarinet and violin.

Despite its name, surprisingly few laptops are actually touched onstage. “That’s one of our rules,” says Pritchard, who is helping to launch UBC’s new minor degree in Applied Music Technology in September with Laptop Orchestra co-director Keith Hamel. “Avoid touching the laptop!”

Instead, students use body movements to trigger programmed synthetic instruments or modify the sound of their live instruments in real-time. They strap motion sensors to their bodies and instruments, play wearable iPhone instruments, swing Nintendo Wiis or PlayStation Moves, while Kinect video cameras from Microsoft Xboxes track their movements.

“Adding movement to our creative process has been awesome,” says Kiran Bhumber, a fourth-year music student and clarinet player. The program helped attract her back to Vancouver after attending a performing arts high school in Toronto. “I really wanted to do something completely different. When I heard of the Laptop Orchestra, I knew it was perfect for me. I begged Bob to let me in.”

The Laptop Orchestra has partnered with UBC’s Department of Electrical and Computer Engineering (from the news release),

The engineers come with expertise in programming and wireless systems and the musicians bring their performance and composition chops, and program code as well.

Besides creating their powerful music, the students have invented a series of interfaces and musical gadgets. The first is the app sensorUDP, which transforms musicians’ smartphones into motion sensors. Available in the Android app store and compatible with iPhones, it allows performers to layer up to eight programmable sounds and modify them by moving their phone.

Music student Pieteke MacMahon modified the app to create an iPhone Piano, which she plays on her wrist, thanks to a mount created by engineering classmates. As she moves her hands up, the piano notes go up in pitch. When she drops her hands, the sound gets lower, and a delay effect increases if her palm faces up. “Audiences love how intuitive it is,” says the composition major. “It creates music in a way that really makes sense to people, and it looks pretty cool onstage.”
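The news release doesn’t include any of the students’ code, but the sensorUDP idea is simple to sketch: the phone streams sensor readings over UDP and a listener maps them to notes. The port, packet format, and pitch mapping below are all assumptions for illustration.

```python
# Minimal sketch of the receiving end of a sensorUDP-style instrument:
# read accelerometer packets over UDP and map hand height to pitch,
# roughly as the iPhone Piano does. Port and packet format are assumed.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5555))   # arbitrary port for the example

A4 = 69   # MIDI note number for A440

while True:
    data, _ = sock.recvfrom(1024)
    # Assume comma-separated "x,y,z" acceleration values in each packet.
    x, y, z = (float(v) for v in data.decode().split(",")[:3])
    # Map the vertical axis (roughly -10..10 m/s^2) onto two octaves:
    # raising the hand raises the note.
    note = int(A4 + (y / 10.0) * 12)
    print("would trigger MIDI note", note)
```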

Here’s a video of the iPhone Piano (aka PietekeIPhoneSensor) in action,

The members of the Laptop Orchestra have travelled to collaborate internationally (Note: Links have been removed),

Earlier this year, the ensemble’s unique music took them to Europe. The class spent 10 days this February in Belgium where they collaborated and performed in concert with researchers at the University of Mons, a leading institution for research on gesture-tracking technology.

The Laptop Orchestra’s trip was sponsored by UBC’s Go Global and Arts Research Abroad, which together send hundreds of students on international learning experiences each year.

In Belgium, the ensemble’s dancer Diana Brownie wore a body suit covered head-to-toe in motion sensors as part of a University of Mons research project on body movement. The researchers – one a former student of Pritchard’s – will use the suit’s data to help record and preserve cultural folk dances.

For anyone who needs directions, here’s a link to UBC’s Vancouver Campus Maps, Directions, & Tours webpage.

Call for abstracts; Volume 2 of the International Handbook of Internet Research

This call for abstracts (received from my Writing and the Digital Life list) has a deadline of June 1, 2014. From the call,

Call for Abstracts for Chapters
Volume 2 of the International Handbook of Internet Research
(editors Jeremy Hunsinger, Lisbeth Klastrup, and Matthew Allen)

Abstracts due June 1 2014; full chapters due Sept. 1 2015

After the remarkable success of the first International Handbook of Internet Research (2010), Springer has contracted with its editors to produce a second volume. This new volume will be arranged in three sections that each address a different aspect of internet research: foundations, futures, and critiques. Each of these meta-themes will have its own section of the new handbook.

Foundations will approach a method, a theory, a perspective, a topic or field that has been and is still a location of significant internet research. These chapters will engage with the current and historical scholarly literature through extended reviews and also as a way of developing insights into the internet and internet research. Futures will engage with the directions the field of internet research might take over the next five years. These chapters will engage current methods, topics, perspectives, or fields that will expand and re-invent the field of internet research, particularly in light of emerging social and technological trends. The material for these chapters will define the topic they describe within the framework of internet research so that it can be understood as a place of future inquiry. Critique chapters will define and develop critical positions in the field of internet research. They can engage a theoretical perspective, a methodological perspective, a historical trend or topic in internet research and provide a critical perspective. These chapters might also define one type of critical perspective, tradition, or field in the field of internet research.

We value the way in which this call for papers will itself shape the contents, themes, and coverage of the Handbook. We encourage potential authors to present abstracts that will consolidate current internet research, critically analyse its directions past and future, and re-invent the field for the decade to come. Contributions about the internet and internet research are sought from scholars in any discipline, and from many points of view. We therefore invite internet researchers working within the fields of communication, culture, politics, sociology, law and privacy, aesthetics, games and play, surveillance and mobility, amongst others, to consider contributing to the volume.

Initially, we ask scholars and researchers to submit a 500-word abstract detailing their own chapter for one of the three sections outlined above. The abstract must follow the format presented below. After the initial round of submissions, there may be a further call for papers and/or approaches to individuals to complete the volume. The final chapters will be chosen from the submitted abstracts by the editors or invited by the editors. The chapter writers will be notified of acceptance by January 1st, 2015. The chapters will be due September 2015, and should be between 6,000 and 10,000 words (inclusive of references, biographical statement and all other text).

Each abstract needs to be presented in the following form:

· Section (Either Foundations, Futures, or Critiques)

· Title of chapter

· Author name/s, institutional details

· Corresponding author’s email address

· Keywords (no more than 5)

· Abstract (no more than 500 words)

· References

Please e-mail your abstract/s to: [email protected]

We look forward to your submissions and working with you to produce another definitive collection of thought-provoking internet research. Please feel free to distribute this CfP widely.

As I recall (accurately I hope), I met Jeremy Hunsinger some years ago at an Association of Internet Researchers (AoIR) conference held in Vancouver in 2007 with the theme, Let’s Play. He’s an academic based at Wilfrid Laurier University in Waterloo, Ontario, Canada.

Good luck with your submission!

For the smell of it

Some years ago I had a tussle with a fellow student about what constituted multimedia: I wanted to discuss smell as a possible means of communication and he adamantly disagreed (he won). Accordingly, these two items that feature the sense of smell are of particular interest, especially (tongue firmly in cheek) as one of them may indicate I was* ahead of my time.

The first is about a phone-like device that sends scent (from a Feb. 11, 2014 news item on ScienceDaily),

A Paris laboratory under the direction of David Edwards, Michigan Technological University alumnus, has created the oPhone, which will allow odors — oNotes — to be sent, via Bluetooth and smartphone attachments, to oPhones across the state, country or ocean, where the recipient can enjoy American Beauties or any other variety of rose.

It can be sent via email, tweet, or text.

Edwards says the idea started with student designers in his class at Harvard, where he is a professor.

“We invite young students to bring their design dreams,” he says. “We have a different theme each year, and that year it was virtual worlds.”

The all-female team came up with virtual aromas, and he brought two of the students to Paris to work on the project. Normally, he says, there’s a clear end in sight, but with their project no one had a clue who was going to pay for the research or if there was even a market.

A Feb. 11, 2014 Michigan Technological University news release by Dennis Walikainen, which originated the news item, provides more details about the project development and goals,

“We create unique aromatic profiles,” says Blake Armstrong, director of business communications at Vapor Communications, an organization operating out of Le Laboratoire (Le Lab) in Paris. “We put that into the oChip that faithfully renders that smell.”

Edwards said that the initial four chips that will come with the first oPhones can be combined into thousands of different odors—produced for 20 to 30 seconds—creating what he calls “an evolution of odor.”

The secret is in accurate scent reproduction, locked in those chips plugged into the devices. Odors are first captured in wax after they are perfected using “The Nose”– an aroma expert at Le Lab, Marlène Staiger — who deconstructs the scents.

For example, with coffee, “the most universally recognized aroma,” she replaces words like “citrus” or “berry” with actual scents that will be created by ordering molecules and combining them in different percentages.

In fact, Le Lab is working with Café Coutume, the premier coffee shop in Paris, housing baristas in their building and using oPhones to create full sensory experiences.

“Imagine you are online and want to know what a particular brand of coffee would smell like,” Edwards says. “Or, you are in an actual long line waiting to order. You just tap on the oNote and get the experience.”

The result for Coutume, and all oPhone recipients, is a pure cloud of scent close to the device. Perhaps six inches in diameter, it is released and then disappears, retaining its personal and subtle aura.

And there are other sectors that could benefit, Edwards says.

“Fragrance houses, of course, culinary, travel, but also healthcare.”

He cites an example at an exhibition last fall in London when someone with brain damage came forward. He had lost his memory, and with it his sense of taste and smell. The oPhone can help bring that memory back, Edwards says.

“We think there could be help for Alzheimer’s patients, related to the decline and loss of memory and olfactory sensation,” he says.
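Stepping back to the claim that four oChips yield thousands of odors: the release doesn’t show the arithmetic, but simple combinatorics makes it plausible. The per-chip aroma count below is an assumption for illustration, not a published spec.

```python
# Back-of-envelope check on the "thousands of odors" claim.
# Assume each oChip carries 8 base aromas (an assumption, not a spec).
from math import comb

aromas_per_chip = 8
chips = 4
primitives = aromas_per_chip * chips   # 32 base aromas

# Distinct blends that mix anywhere from 1 to 4 primitives:
blends = sum(comb(primitives, k) for k in range(1, 5))
print(blends)   # 41448 -- thousands, even before varying percentages
```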

There is an image accompanying the news release which, I believe, shows variations of the oPhone device,

Sending scents is closer than you think. [downloaded from http://www.mtu.edu/news/stories/2014/february/story102876.html]

You can find David Edwards’ Paris lab, Le Laboratoire (Le Lab), ici. From Le Lab’s homepage,

Open since 2007, Le Laboratoire is a contemporary art and design center in central Paris, where artists and designers experiment at the frontiers of science. Exhibitions of works-in-progress from these experiments are frequently first steps toward larger scale cultural, humanitarian and commercial works of art and design.

Le Laboratoire was founded in 2007 by David Edwards as the core-cultural lab of the international network, Artscience Labs.

Le Lab also offers a Mar. ?, 2013 news release describing the project then known as The Olfactive Project Or, The Third Dimension Global Communication (English language version ou en français).

The second item is concerned with some research from l’Université de Montréal as a Feb. 11, 2014 news item on ScienceDaily notes,

According to Simona Manescu and Johannes Frasnelli of the University of Montreal’s Department of Psychology, an odour is judged differently depending on whether it is accompanied by a positive or negative description when it is smelled. When associated with a pleasant label, we enjoy the odour more than when it is presented with a negative label. To put it another way, we also smell with our eyes!

This was demonstrated by researchers in a study recently published in the journal Chemical Senses.

A Feb. 11, 2014 Université de Montréal news release, which originated the news item, offers details about the research methodology and the conclusions,

For their study, they recruited 50 participants who were asked to smell the odours of four odorants (essential oil of pine, geraniol, cumin, as well as parmesan cheese). Each odour (administered through a mask) was randomly presented with a positive or negative label displayed on a computer screen. In this way, pine oil was presented either with the label “Pine Needles” or the label “Old Solvent”; geraniol was presented with the label “Fresh Flowers” or “Cheap Perfume”; cumin was presented with the label “Indian Food” or “Dirty Clothes”; and finally, parmesan cheese was presented with the label of either the cheese or dried vomit.

The result was that all participants rated the four odours more positively when they were presented with positive labels than when presented with negative labels. Specifically, participants described the odours as pleasant and edible (even those associated with non-food items) when associated with positive labels. Conversely, the same odours were considered unpleasant and inedible when associated with negative labels – even the food odours. “It shows that odour perception is not objective: it is affected by the cognitive interpretation that occurs when one looks at a label,” says Manescu. “Moreover, this is the first time we have been able to influence the edibility perception of an odour, even though the positive and negative labels accompanying the odours showed non-food words,” adds Frasnelli.
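The design is easy to picture in code. Here’s a minimal sketch of the randomized odor-label pairing the study describes; the odorants and labels come from the news release, while the trial-building logic is an assumption for illustration.

```python
# Sketch of the study's randomized odor-label pairing.
import random

conditions = {
    "pine oil": ("Pine Needles", "Old Solvent"),
    "geraniol": ("Fresh Flowers", "Cheap Perfume"),
    "cumin": ("Indian Food", "Dirty Clothes"),
    "parmesan cheese": ("Parmesan Cheese", "Dried Vomit"),
}

def build_trials():
    """One pass: each odorant is paired with a randomly chosen label."""
    trials = []
    for odorant, (positive, negative) in conditions.items():
        valence, label = random.choice(
            [("positive", positive), ("negative", negative)]
        )
        trials.append((odorant, valence, label))
    random.shuffle(trials)   # randomize presentation order
    return trials

print(build_trials())
```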

Here’s a link to and a citation for the paper,

Now You Like Me, Now You Don’t: Impact of Labels on Odor Perception by Simona Manescu, Johannes Frasnelli, Franco Lepore, and Jelena Djordjevic. Chem. Senses (2013) doi: 10.1093/chemse/bjt066. First published online: December 13, 2013.

This paper is behind a paywall.

* Added ‘I was’ to sentence June 18, 2014. (sigh) Maybe I should spend less time with my tongue in cheek and give more time to my grammar.

A wearable book (The Girl Who Was Plugged In) makes you feel the protagonist’s pain

A team of students taking an MIT (Massachusetts Institute of Technology) course called ‘Science Fiction to Science Fabrication’ has created a new category of book: sensory fiction. John Brownlee in his Feb. 10, 2014 article for Fast Company describes it this way,

Have you ever felt your pulse quicken when you read a book, or your skin go clammy during a horror story? A new student project out of MIT wants to deepen those sensations. They have created a wearable book that uses inexpensive technology and neuroscientific hacking to create a sort of cyberpunk Neverending Story that blurs the line between the bodies of a reader and protagonist.

Called Sensory Fiction, the project was created by a team of four MIT students–Felix Heibeck, Alexis Hope, Julie Legault, and Sophia Brueckner …

Here’s the MIT video demonstrating the book in use (from the course’s sensory fiction page),

Here’s how the students have described their sensory book, from the project page,

Sensory fiction is about new ways of experiencing and creating stories.

Traditionally, fiction creates and induces emotions and empathy through words and images.  By using a combination of networked sensors and actuators, the Sensory Fiction author is provided with new means of conveying plot, mood, and emotion while still allowing space for the reader’s imagination. These tools can be wielded to create an immersive storytelling experience tailored to the reader.

To explore this idea, we created a connected book and wearable. The ‘augmented’ book portrays the scenery and sets the mood, and the wearable allows the reader to experience the protagonist’s physiological emotions.

The book cover animates to reflect the book’s changing atmosphere, while certain passages trigger vibration patterns.

Changes in the protagonist’s emotional or physical state trigger discrete feedback in the wearable, whether by changing the heartbeat rate, creating constriction through air pressure bags, or causing localized temperature fluctuations.

Our prototype story, ‘The Girl Who Was Plugged In’ by James Tiptree showcases an incredible range of settings and emotions. The main protagonist experiences both deep love and ultimate despair, the freedom of Barcelona sunshine and the captivity of a dark damp cellar.

The book and wearable support the following outputs:

  • Light (the book cover has 150 programmable LEDs to create ambient light based on changing setting and mood)
  • Sound
  • Personal heating device to change skin temperature (through a Peltier junction secured at the collarbone)
  • Vibration to influence heart rate
  • Compression system (to convey tightness or loosening through pressurized airbags)
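The students haven’t released the prototype’s control code, so the following is only a conceptual sketch of how passages might be mapped to the actuators listed above; every field name, value range, and preset is invented for illustration.

```python
# Hypothetical controller: map an annotated passage to actuator settings.
from dataclasses import dataclass

@dataclass
class ActuatorState:
    heartbeat_bpm: int        # vibration-driven heart-rate cue
    airbag_pressure: float    # 0.0 (loose) .. 1.0 (tight compression)
    peltier_delta_c: float    # skin-temperature offset at the collarbone
    led_rgb: tuple            # ambient cover lighting

PRESETS = {
    "deep love": ActuatorState(90, 0.3, 1.5, (255, 80, 80)),
    "ultimate despair": ActuatorState(55, 0.8, -2.0, (20, 20, 60)),
    "barcelona sunshine": ActuatorState(70, 0.1, 2.5, (255, 200, 80)),
    "dark damp cellar": ActuatorState(100, 0.9, -3.0, (10, 10, 10)),
}

def on_page_turn(passage_tag: str) -> ActuatorState:
    # The real book senses which page is open; a tag stands in for that.
    return PRESETS.get(passage_tag, ActuatorState(70, 0.0, 0.0, (128, 128, 128)))

print(on_page_turn("barcelona sunshine"))
```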

One of the earliest stories about this project was a Jan. 28, 2014 piece written by Alison Flood for the Guardian, where she explains how vibration, etc. are used to convey/stimulate the reader’s sensations and emotions,

MIT scientists have created a ‘wearable’ book using temperature and lighting to mimic the experiences of a book’s protagonist

The book, explain the researchers, senses the page a reader is on, and changes ambient lighting and vibrations to “match the mood”. A series of straps form a vest which contains a “heartbeat and shiver simulator”, a body compression system, temperature controls and sound.

“Changes in the protagonist’s emotional or physical state trigger discrete feedback in the wearable [vest], whether by changing the heartbeat rate, creating constriction through air pressure bags, or causing localised temperature fluctuations,” say the academics.

Flood goes on to illuminate how science fiction has explored the notion of ‘sensory books’ (Note: Links have been removed) and how at least one science fiction novelist is responding to this new type of book,

The Arthur C Clarke award-winning science fiction novelist Chris Beckett wrote about a similar invention in his novel Marcher, although his “sensory” experience comes in the form of a video game:

Adam Roberts, another prize-winning science fiction writer, found the idea of “sensory” fiction “amazing”, but also “infantilising, like reverting to those sorts of books we buy for toddlers that have buttons in them to generate relevant sound-effects”.

Elise Hu in her Feb. 6, 2014 posting on the US National Public Radio (NPR) blog, All Tech Considered, takes a different approach to the topic,

The prototype does work, but it won’t be manufactured anytime soon. The creation was only “meant to provoke discussion,” Hope says. It was put together as part of a class in which designers read science fiction and make functional prototypes to explore the ideas in the books.

If it ever does become more widely available, sensory fiction could have an unintended consequence. When I shared this idea with NPR editor Ellen McDonnell, she quipped, “If these device things are helping ‘put you there,’ it just means the writing won’t have to be as good.”

I hope the students are successful at provoking discussion as so far they seem to have primarily provoked interest.

As for my two cents, I think that in a world where making personal connections seems increasingly difficult (i.e., people are becoming more isolated), sensory fiction that stimulates people into feeling something as they read a book is a logical progression. It’s also interesting to me that all of the focus is on the reader, with no mention as to what writers might produce (other than McDonnell’s cheeky comment) if they knew their books were going to be given the ‘sensory treatment’. One more musing: I wonder if there might be a difference in how males and females, writers and readers, respond to sensory fiction.

Now for a bit of wordplay. Feeling can be emotional but, in English, it can also refer to touch and researchers at MIT have also been investigating new touch-oriented media.  You can read more about that project in my Reaching beyond the screen with the Tangible Media Group at the Massachusetts Institute of Technology (MIT) posting dated Nov. 13, 2013. One final thought, I am intrigued by how interested scientists at MIT seem to be in feelings of all kinds.

1st code poetry slam at Stanford University

It’s code as in computer code and slam as in performance competition, which, when added to the word poetry, takes most of us into uncharted territory. Here’s a video clip featuring the winning entry, Say 23 by Leslie Wu, competing in Stanford University’s (located in California) 1st code poetry slam,


If you listen closely (this clip does not have the best sound quality), you can hear the words to Psalm 23 (from the bible).

Thanks to this Dec. 29, 2013 news item on phys.org for bringing this code poetry slam to my attention (Note: Links have been removed),

Leslie Wu, a doctoral student in computer science at Stanford, took an appropriately high-tech approach to presenting her poem “Say 23” at the first Stanford Code Poetry Slam.

Wu wore Google Glass as she typed 16 lines of computer code that were projected onto a screen while she simultaneously recited the code aloud. She then stopped speaking and ran the script, which prompted the computer program to read a stream of words from Psalm 23 out loud three times, each in a different pre-recorded computer voice.

Wu, whose multimedia presentation earned her first place, was one of eight finalists to present at the Code Poetry Slam. Organized by Melissa Kagen, a graduate student in German studies, and Kurt James Werner, a graduate student in computer-based music theory and acoustics, the event was designed to explore the creative aspects of computer programming.
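Wu’s actual 16 lines (reportedly Ruby) don’t appear in the coverage, but the effect described is easy to approximate. Here’s a rough Python analogue using the macOS `say` command, which accepts a voice via its -v flag; the word list and voice names are assumptions, not Wu’s code.

```python
# Rough analogue of "Say 23": speak a stream of words from Psalm 23
# three times, each pass in a different synthesized voice.
import subprocess

PSALM_23 = "The Lord is my shepherd I shall not want"
VOICES = ["Alex", "Victoria", "Fred"]   # stock macOS voices

for voice in VOICES:
    for word in PSALM_23.split():
        subprocess.call(["say", "-v", voice, word])
```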

The Dec. 27, 2013 Stanford University news release by Mariana Lage, which originated the news item, goes on to describe the concept, the competition, and the organizers’ aims,

With presentations that ranged from poems written in a computer language format to those that incorporated digital media, the slam demonstrated the entrants’ broad interpretation of the definition of “code poetry.”

Kagen and Werner developed the code poetry slam as a means of investigating the poetic potentials of computer-programming languages.

“Code poetry has been around a while, at least in programming circles, but the conjunction of oral presentation and performance sounded really interesting to us,” said Werner. Added Kagen, “What we are interested in is the poetic aspect of code used as language to program a computer.”

Sponsored by the Division of Literatures, Cultures, and Languages, the slam drew online submissions from Stanford and beyond.

High school students and professors, graduate students and undergraduates from engineering, computer science, music, language and literature incorporated programming concepts into poem-like forms. Some of the works were written entirely in executable code, such as Ruby and C++ languages, while others were presented in multimedia formats. The works of all eight finalists can be viewed on the Code Poetry Slam website.

Kagen, Werner and Wu agree that code poetry requires some knowledge of programming from the spectators.

“I feel it’s like trying to read a poem in a language with which you are not comfortable. You get the basics, but to really get into the intricacies you really need to know that language,” said Kagen, who studies the traversal of musical space in Wagner and Schoenberg.

Wu noted that when she was typing the code most people didn’t know what she was doing. “They were probably confused and curious. But when I executed the poem, the program interpreted the code and they could hear words,” she said, adding that her presentation “gave voice to the code.”

“The code itself had its own synthesized voice, and its own poetics of computer code and singsong spoken word,” Wu said.

One of the contenders showed a poem that was “misread” by the computer.

“There was a bug in his poem, but more interestingly, there was the notion of a correct interpretation which is somewhat unique to computer code. Compared to human language, code generally has few interpretations or, in most cases, just one,” Wu said.

So what exactly is code poetry? According to Kagen, “Code poetry can mean a lot of different things depending on whom you ask.

“It can be a piece of text that can be read as code and run as program, but also read as poetry. It can mean a human language poetry that has mathematical elements and codes in it, or even code that aims for elegant expression within severe constraints, like a haiku or a sonnet, or code that generates automatic poetry. Poems that are readable to humans and readable to computers perform a kind of cyborg double coding.”

Werner noted that “Wu’s poem incorporated a lot of different concepts, languages and tools. It had Ruby language, Japanese and English, was short, compact and elegant. It did a lot for a little code.” Werner served as one of the four judges along with Kagen; Caroline Egan, a doctoral student in comparative literature; and Mayank Sanganeria, a master’s student at the Center for Computer Research in Music and Acoustics (CCRMA).

Kagen and Werner got some expert advice on judging from Michael Widner, the academic technology specialist for the Division of Literatures, Cultures and Languages.

Widner, who reviewed all of the submissions, noted that the slam allowed scholars and the public to “probe the connections between the act of writing poetry and the act of writing code, which as anyone who has done both can tell you are oddly similar enterprises.”

A scholar who specializes in the study of both medieval and machine languages, Widner said that “when we realize that coding is a creative act, we not only value that part of the coder’s labor, but we also realize that the technologies in which we swim have assumptions and ideologies behind them that, perhaps, we should challenge.”

I first encountered code poetry in 2006, and I don’t think it was new at that time, but this is the first time I’ve encountered a code poetry slam. For the curious, here’s more about code poetry from the Digital poetry essay in Wikipedia (Note: Links have been removed),

… There are many types of ‘digital poetry’ such as hypertext, kinetic poetry, computer generated animation, digital visual poetry, interactive poetry, code poetry, holographic poetry (holopoetry), experimental video poetry, and poetries that take advantage of the programmable nature of the computer to create works that are interactive, or use generative or combinatorial approach to create text (or one of its states), or involve sound poetry, or take advantage of things like listservs, blogs, and other forms of network communication to create communities of collaborative writing and publication (as in poetical wikis).

The Stanford organizers have been sufficiently delighted with the response to their 1st code poetry slam that they are organizing a 2nd slam (from the Code Poetry Slam 1.1. homepage),

Call for Works 1.1

Submissions for the second Slam are now open! Submit your code/poetry to the Stanford Code Poetry Slam, sponsored by the Department of Literatures, Cultures, and Languages! Submissions due February 12th, finalists invited to present their work at a poetry slam (place and time TBA). Cash prizes and free pizza!

Stanford University’s Division of Literatures, Cultures, and Languages (DLCL) sponsors a series of Code Poetry Slams. Code Poetry Slam 1.0 was held on November 20th, 2013, and Code Poetry Slam 1.1 will be held Winter quarter 2014.

According to Lage’s news release you don’t have to be associated with Stanford University to be a competitor but, given that you will be performing your poetry there, you will likely have to live in some proximity to the university.

Reaching beyond the screen with the Tangible Media Group at the Massachusetts Institute of Technology (MIT)

Researchers at MIT’s (Massachusetts Institute of Technology) Tangible Media Group are quite literally reaching beyond the screen with inFORM, their Dynamic Shape Display,

John Brownlee’s Nov. 12, 2013 article for Fast Company describes the project this way (Note: A link has been removed),

Created by Daniel Leithinger and Sean Follmer and overseen by Professor Hiroshi Ishii, the technology behind the inFORM isn’t that hard to understand. It’s basically a fancy Pinscreen, one of those executive desk toys that allows you to create a rough 3-D model of an object by pressing it into a bed of flattened pins. With inFORM, each of those “pins” is connected to a motor controlled by a nearby laptop, which can not only move the pins to render digital content physically, but can also register real-life objects interacting with its surface thanks to the sensors of a hacked Microsoft Kinect.

To put it in the simplest terms, the inFORM is a self-aware computer monitor that doesn’t just display light, but shape as well. Remotely, two people Skyping could physically interact by playing catch, for example, or manipulating an object together, or even slapping high five from across the planet.
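The paper (linked at the end of this post) gives the real engineering details; purely as a conceptual sketch of the pipeline Brownlee describes, the fragment below downsamples a Kinect-style depth frame to a coarse pin grid and converts depth to pin heights. The grid size and travel range are assumptions.

```python
# Conceptual inFORM-style pipeline: depth frame -> pin heights.
import numpy as np

PINS_X, PINS_Y = 30, 30          # assumed pin-grid resolution
MAX_PIN_HEIGHT_MM = 100.0        # assumed actuator travel

def depth_to_pin_heights(depth_mm: np.ndarray) -> np.ndarray:
    """Map a depth image (mm, larger = farther) to pin heights (mm)."""
    h, w = depth_mm.shape
    # Average-pool the frame down to the pin grid (cropping any remainder).
    cropped = depth_mm[: h // PINS_Y * PINS_Y, : w // PINS_X * PINS_X]
    pooled = cropped.reshape(PINS_Y, h // PINS_Y, PINS_X, w // PINS_X).mean(axis=(1, 3))
    # Nearer surfaces push pins up: invert and normalize.
    near, far = pooled.min(), pooled.max()
    return (far - pooled) / max(far - near, 1e-6) * MAX_PIN_HEIGHT_MM

frame = np.random.uniform(500, 1500, size=(480, 640))   # stand-in for Kinect data
print(depth_to_pin_heights(frame).shape)   # (30, 30)
```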

I found this bit in Brownlee’s article particularly interesting,

As the world increasingly embraces touch screens, the pullable knobs, twisting dials, and pushable buttons that defined the interfaces of the past have become digital ghosts. The tactile is gone and the Tangible Media Group sees that as a huge problem.

I echo what the researchers suggest about the loss of the tactile. Many years ago, when I worked in libraries, we digitized the card catalogues and it was, for me, the beginning of the end for my career in the world of libraries. To this day, I still miss the cards. (I suspect there’s a subtle relationship between tactile cues and memory.)

Research in libraries was a more physical pursuit then. Now, almost everything can be done with a computer screen; you need never leave your chair to research and retrieve your documents. Of course, there are some advantages to this world of screens; I can access documents in a way that would have been unthinkable in a world dominated by library card catalogues. Still, I am pleased to see work being done to reintegrate the tactile into our digitized world as I agree with the researchers who view this loss as a problem. It’s not just exercise that we’re missing with our current regime.

The researchers have produced a paper for a SIGCHI (Special Interest Group on Computer-Human Interaction; Association for Computing Machinery) conference, but it appears to be unpublished and it is undated,

inFORM: Dynamic Physical Affordances and Constraints through Shape and Object Actuation by Sean Follmer, Daniel Leithinger, Alex Olwal, Akimitsu Hogge, and Hiroshi Ishii.

The researchers have made this paper freely available.