Category Archives: New Media

The human body as a musical instrument: performance at the University of British Columbia on April 10, 2014

It’s called The Bang! Festival of interactive music, with performances of one kind or another scheduled throughout the day on April 10, 2014 (12 pm: MUSC 320; 1:30 pm: Grad Work; 2 pm: Research) and a finale featuring the Laptop Orchestra at 8 pm at the University of British Columbia’s (UBC) School of Music (Barnett Recital Hall on the Vancouver campus, Canada).

Here’s more about Bob Pritchard, professor of music, and the students who have put this programme together (from an April 7, 2014 UBC news release; Note: Links have been removed),

Pritchard [Bob Prichard], a professor of music at the University of British Columbia, is using technologies that capture physical movement to transform the human body into a musical instrument.

Pritchard and the music and engineering students who make up the UBC Laptop Orchestra wanted to inject more human performance in digital music after attending one too many uninspiring laptop music sets. “Live electronic music can be a bit of an oxymoron,” says Pritchard, referring to artists gazing at their laptops and a heavy reliance on backing tracks.

“Emerging tools and techniques can help electronic musicians find more creative and engaging ways to present their work. What results is a richer experience, which can create a deeper, more emotional connection with your audience.”

The Laptop Orchestra, which will perform a free public concert on April 10, is an extension of a music technology course at UBC’s School of Music. Comprised of 17 students from Arts, Science and Engineering, its members act as musicians, dancers, composers, programmers and hardware specialists. They create adventurous electroacoustic music using programmed and acoustic instruments, including harp, piano, clarinet and violin.

Despite its name, surprisingly few laptops are actually touched onstage. “That’s one of our rules,” says Pritchard, who is helping to launch UBC’s new minor degree in Applied Music Technology in September with Laptop Orchestra co-director Keith Hamel. “Avoid touching the laptop!”

Instead, students use body movements to trigger programmed synthetic instruments or modify the sound of their live instruments in real time. They strap motion sensors to their bodies and instruments, play wearable iPhone instruments, swing Nintendo Wiis or PlayStation Moves, while Kinect video cameras from Microsoft Xboxes track their movements.

“Adding movement to our creative process has been awesome,” says Kiran Bhumber, a fourth-year music student and clarinet player. The program helped attract her back to Vancouver after attending a performing arts high school in Toronto. “I really wanted to do something completely different. When I heard of the Laptop Orchestra, I knew it was perfect for me. I begged Bob to let me in.”

The Laptop Orchestra has partnered with UBC’s Dept. of Computer and Electrical Engineering (from the news release),

The engineers come with expertise in programming and wireless systems and the musicians bring their performance and composition chops, and program code as well.

Besides creating their powerful music, the students have invented a series of interfaces and musical gadgets. The first is the app sensorUDP, which transforms musicians’ smartphones into motion sensors. Available in the Android app store and compatible with iPhones, it allows performers to layer up to eight programmable sounds and modify them by moving their phone.

Music student Pieteke MacMahon modified the app to create an iPhone Piano, which she plays on her wrist, thanks to a mount created by engineering classmates. As she moves her hands up, the piano notes go up in pitch. When she drops her hands, the sound gets lower, and a delay effect increases if her palm faces up. “Audiences love how intuitive it is,” says the composition major. “It creates music in a way that really makes sense to people, and it looks pretty cool onstage.”
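The students haven’t published their patches, but the mapping MacMahon describes (hand height controls pitch, palm orientation controls a delay effect) can be sketched in a few lines. To be clear, this is my own illustrative sketch; the function name, MIDI range, and delay ceiling are all invented, not taken from the Laptop Orchestra’s code:

```python
# Illustrative sketch only (not the UBC students' code): mapping the kind of
# motion data the iPhone Piano responds to -- hand height and palm orientation --
# onto synthesis parameters. Ranges and constants are invented.

def map_motion_to_sound(hand_height, palm_up_fraction,
                        base_midi=48, note_range=24, max_delay=0.8):
    """Map normalized motion readings to synthesis parameters.

    hand_height: 0.0 (hand lowered) .. 1.0 (hand raised)
    palm_up_fraction: 0.0 (palm down) .. 1.0 (palm facing up)
    Returns (midi_note, delay_seconds).
    """
    midi_note = base_midi + round(hand_height * note_range)  # higher hand, higher pitch
    delay = palm_up_fraction * max_delay                     # palm up increases the delay
    return midi_note, delay

print(map_motion_to_sound(1.0, 0.0))  # raised hand, palm down: top note, no delay
```

With these assumed ranges, a fully raised hand with the palm down yields the top note of the two-octave span and no delay at all.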

Here’s a video of the iPhone Piano (aka PietekeIPhoneSensor) in action,

The members of the Laptop Orchestra have travelled to collaborate internationally (Note: Links have been removed),

Earlier this year, the ensemble’s unique music took them to Europe. The class spent 10 days this February in Belgium where they collaborated and performed in concert with researchers at the University of Mons, a leading institution for research on gesture-tracking technology.

The Laptop Orchestra’s trip was sponsored by UBC’s Go Global and Arts Research Abroad, which together send hundreds of students on international learning experiences each year.

In Belgium, the ensemble’s dancer Diana Brownie wore a body suit covered head-to-toe in motion sensors as part of a University of Mons research project on body movement. The researchers – one a former student of Pritchard’s – will use the suit’s data to help record and preserve cultural folk dances.

For anyone who needs directions, here’s a link to UBC’s Vancouver Campus Maps, Directions, & Tours webpage.

Call for abstracts; Volume 2 of the International Handbook of Internet Research

This call for abstracts (received from my Writing and the Digital Life list) has a deadline of June 1, 2014. From the call,

Call for Abstracts for Chapters
Volume 2 of the International Handbook of Internet Research
(editors Jeremy Hunsinger, Lisbeth Klastrup, and Matthew Allen)

Abstracts due June 1, 2014; full chapters due Sept. 1, 2015

After the remarkable success of the first International Handbook of Internet Research (2010), Springer has contracted with its editors to produce a second volume. This new volume will be arranged in three sections that address three different aspects of internet research: foundations, futures, and critiques. Each of these meta-themes will have its own section of the new handbook.

Foundations will approach a method, a theory, a perspective, a topic or field that has been and is still a location of significant internet research. These chapters will engage with the current and historical scholarly literature through extended reviews and also as a way of developing insights into the internet and internet research. Futures will engage with the directions the field of internet research might take over the next five years. These chapters will engage current methods, topics, perspectives, or fields that will expand and re-invent the field of internet research, particularly in light of emerging social and technological trends. The material for these chapters will define the topic they describe within the framework of internet research so that it can be understood as a place of future inquiry. Critique chapters will define and develop critical positions in the field of internet research. They can engage a theoretical perspective, a methodological perspective, a historical trend or topic in internet research and provide a critical perspective. These chapters might also define one type of critical perspective, tradition, or field in the field of internet research.

We value the way in which this call for papers will itself shape the contents, themes, and coverage of the Handbook. We encourage potential authors to present abstracts that will consolidate current internet research, critically analyse its directions past and future, and re-invent the field for the decade to come. Contributions about the internet and internet research are sought from scholars in any discipline, and from many points of view. We therefore invite internet researchers working within the fields of communication, culture, politics, sociology, law and privacy, aesthetics, games and play, surveillance and mobility, amongst others, to consider contributing to the volume.

Initially, we ask scholars and researchers to submit a 500-word abstract detailing their chapter for one of the three sections outlined above. The abstract must follow the format presented below. After the initial round of submissions, there may be a further call for papers and/or approaches to individuals to complete the volume. The final chapters will be chosen from the submitted abstracts by the editors or invited by the editors. Chapter writers will be notified of acceptance by January 1st, 2015. The chapters, due September 2015, should be between 6,000 and 10,000 words (inclusive of references, biographical statement and all other text).

Each abstract needs to be presented in the following form:

· Section (Either Foundations, Futures, or Critiques)

· Title of chapter

· Author name/s, institutional details

· Corresponding author’s email address

· Keywords (no more than 5)

· Abstract (no more than 500 words)

· References

Please e-mail your abstract/s to: [email protected]

We look forward to your submissions and working with you to produce another definitive collection of thought-provoking internet research. Please feel free to distribute this CfP widely.

As I recall (accurately I hope), I met Jeremy Hunsinger some years ago at an Association of Internet Researchers (AoIR) conference held in Vancouver in 2007 with the theme, Let’s Play. He’s an academic based at Wilfrid Laurier University in Waterloo, Ontario, Canada.

Good luck with your submission!

For the smell of it

Years ago I had a tussle with a fellow student about what constituted multimedia: I wanted to discuss smell as a possible means of communication and he adamantly disagreed (he won). So these two items featuring the sense of smell are of particular interest to me, especially (tongue firmly in cheek) as one of them may indicate I was ahead of my time.

The first is about a phone-like device that sends scent (from a Feb. 11, 2014 news item on ScienceDaily),

A Paris laboratory under the direction of David Edwards, Michigan Technological University alumnus, has created the oPhone, which will allow odors — oNotes — to be sent, via Bluetooth and smartphone attachments, to oPhones across the state, country or ocean, where the recipient can enjoy American Beauties or any other variety of rose.

It can be sent via email, tweet, or text.

Edwards says the idea started with student designers in his class at Harvard, where he is a professor.

“We invite young students to bring their design dreams,” he says. “We have a different theme each year, and that year it was virtual worlds.”

The all-female team came up with virtual aromas, and he brought two of the students to Paris to work on the project. Normally, he says, there’s a clear end in sight, but with their project no one had a clue who was going to pay for the research or if there was even a market.

A Feb. 11, 2014 Michigan Technological University news release by Dennis Walikainen, which originated the news item, provides more details about the project development and goals,

“We create unique aromatic profiles,” says Blake Armstrong, director of business communications at Vapor Communications, an organization operating out of Le Laboratoire (Le Lab) in Paris. “We put that into the oChip that faithfully renders that smell.”

Edwards said that the initial four chips that will come with the first oPhones can be combined into thousands of different odors—produced for 20 to 30 seconds—creating what he calls “an evolution of odor.”

The secret is in accurate scent reproduction, locked in those chips plugged into the devices. Odors are first captured in wax after they are perfected using “The Nose” — Marlène Staiger, an aroma expert at Le Lab — who deconstructs the scents.

For example, with coffee, “the most universally recognized aroma,” she replaces words like “citrus” or “berry” with actual scents that will be created by ordering molecules and combining them in different percentages.
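Le Lab hasn’t disclosed the oChip chemistry, but the ‘aromatic profile’ idea — a target scent expressed as percentages of a few base odorants — can be sketched abstractly. The chip names and numbers below are entirely my own, purely for illustration:

```python
# Illustrative sketch only: a scent "profile" as percentages of a handful of
# base odorant chips. Chip names and the coffee profile are invented; the
# real oChip molecule ordering is not public.

def blend(profile, chips=("floral", "citrus", "roasted", "earthy")):
    """profile: dict mapping chip name -> percentage of the blend.
    Returns a normalized blend over the available chips; the
    percentages must sum to 100."""
    total = sum(profile.get(c, 0) for c in chips)
    if total != 100:
        raise ValueError("percentages must sum to 100")
    return {c: profile.get(c, 0) / 100 for c in chips}

coffee = blend({"roasted": 70, "earthy": 20, "citrus": 10})
print(coffee["roasted"])  # dominant share of the invented coffee profile
```

The point of the sketch is only that a small fixed palette, mixed in varying proportions, can span a very large space of composite scents, which is how four chips could plausibly yield thousands of odors.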

In fact, Le Lab is working with Café Coutume, the premier coffee shop in Paris, housing baristas in their building and using oPhones to create full sensory experiences.

“Imagine you are online and want to know what a particular brand of coffee would smell like,” Edwards says. “Or, you are in an actual long line waiting to order. You just tap on the oNote and get the experience.”

The result for Coutume, and all oPhone recipients, is a pure cloud of scent close to the device. Perhaps six inches in diameter, it is released and then disappears, retaining its personal and subtle aura.

And there are other sectors that could benefit, Edwards says.

“Fragrance houses, of course, culinary, travel, but also healthcare.”

He cites an example at an exhibition last fall in London when someone with brain damage came forward. He had lost memory, and with it his sense of taste and smell. The oPhone can help bring that memory back, Edwards says.

“We think there could be help for Alzheimer’s patients, related to the decline and loss of memory and olfactory sensation,” he says.

There is an image accompanying the news release which I believe shows variations of the oPhone device,

Sending scents is closer than you think. [downloaded from http://www.mtu.edu/news/stories/2014/february/story102876.html]

You can find David Edwards’ Paris lab, Le Laboratoire (Le Lab), ici. From Le Lab’s homepage,

Open since 2007, Le Laboratoire is a contemporary art and design center in central Paris, where artists and designers experiment at the frontiers of science. Exhibitions of works-in-progress from these experiments are frequently first steps toward larger-scale cultural, humanitarian, and commercial works of art and design.

Le Laboratoire was founded in 2007 by David Edwards as the core-cultural lab of the international network, Artscience Labs.

Le Lab also offers a Mar. ?, 2013 news release describing the project then known as The Olfactive Project Or, The Third Dimension Global Communication (English language version ou en français).

The second item is concerned with some research from l’Université de Montréal as a Feb. 11, 2014 news item on ScienceDaily notes,

According to Simona Manescu and Johannes Frasnelli of the University of Montreal’s Department of Psychology, an odour is judged differently depending on whether it is accompanied by a positive or negative description when it is smelled. When associated with a pleasant label, we enjoy the odour more than when it is presented with a negative label. To put it another way, we also smell with our eyes!

This was demonstrated by researchers in a study recently published in the journal Chemical Senses.

A Feb. 11, 2014 Université de Montréal news release, which originated the news item, offers details about the research methodology and the conclusions,

For their study, they recruited 50 participants who were asked to smell the odours of four odorants (essential oil of pine, geraniol, cumin, as well as parmesan cheese). Each odour (administered through a mask) was randomly presented with a positive or negative label displayed on a computer screen. In this way, pine oil was presented either with the label “Pine Needles” or the label “Old Solvent”; geraniol was presented with the label “Fresh Flowers” or “Cheap Perfume”; cumin was presented with the label “Indian Food” or “Dirty Clothes”; and finally, parmesan cheese was presented with the label of either the cheese or dried vomit.

The result was that all participants rated the four odours more positively when they were presented with positive labels than when presented with negative labels. Specifically, participants described the odours as pleasant and edible (even those associated with non-food items) when associated with positive labels. Conversely, the same odours were considered unpleasant and inedible when associated with negative labels – even the food odours. “It shows that odour perception is not objective: it is affected by the cognitive interpretation that occurs when one looks at a label,” says Manescu. “Moreover, this is the first time we have been able to influence the edibility perception of an odour, even though the positive and negative labels accompanying the odours showed non-food words,” adds Frasnelli.
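For readers who like to see the arithmetic, the study’s core measure is a within-subject comparison: each participant rates the same odour under both labels, and the effect is the average gap between the two ratings. Here’s a sketch with invented numbers (the actual data and rating scale are in the paywalled paper):

```python
# Illustrative sketch of the within-subject design: each pair holds one
# participant's pleasantness rating of the same odour under a positive and
# a negative label. All values below are invented, not the study's data.

def label_effect(ratings):
    """ratings: list of (positive_label_rating, negative_label_rating) pairs.
    Returns the mean within-subject difference; a positive number means
    the positively labelled odour was rated as more pleasant."""
    diffs = [pos - neg for pos, neg in ratings]
    return sum(diffs) / len(diffs)

sample = [(7, 3), (6, 4), (8, 2), (5, 5)]  # invented pleasantness scores
print(label_effect(sample))  # positive mean: positive labels rated higher
```

Pairing each participant with themselves is what lets a design like this attribute the rating gap to the label rather than to differences between sniffers.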

Here’s a link to and a citation for the paper,

Now You Like Me, Now You Don’t: Impact of Labels on Odor Perception by Simona Manescu, Johannes Frasnelli, Franco Lepore, and Jelena Djordjevic. Chem. Senses (2013) doi: 10.1093/chemse/bjt066 First published online: December 13, 2013

This paper is behind a paywall.

A wearable book (The Girl Who Was Plugged In) makes you feel the protagonist’s pain

A team of students taking an MIT (Massachusetts Institute of Technology) course called ‘Science Fiction to Science Fabrication’ has created a new category of book: sensory fiction. John Brownlee in his Feb. 10, 2014 article for Fast Company describes it this way,

Have you ever felt your pulse quicken when you read a book, or your skin go clammy during a horror story? A new student project out of MIT wants to deepen those sensations. They have created a wearable book that uses inexpensive technology and neuroscientific hacking to create a sort of cyberpunk Neverending Story that blurs the line between the bodies of a reader and protagonist.

Called Sensory Fiction, the project was created by a team of four MIT students–Felix Heibeck, Alexis Hope, Julie Legault, and Sophia Brueckner …

Here’s the MIT video demonstrating the book in use (from the course’s sensory fiction page),

Here’s how the students have described their sensory book, from the project page,

Sensory fiction is about new ways of experiencing and creating stories.

Traditionally, fiction creates and induces emotions and empathy through words and images.  By using a combination of networked sensors and actuators, the Sensory Fiction author is provided with new means of conveying plot, mood, and emotion while still allowing space for the reader’s imagination. These tools can be wielded to create an immersive storytelling experience tailored to the reader.

To explore this idea, we created a connected book and wearable. The ‘augmented’ book portrays the scenery and sets the mood, and the wearable allows the reader to experience the protagonist’s physiological emotions.

The book cover animates to reflect the book’s changing atmosphere, while certain passages trigger vibration patterns.

Changes in the protagonist’s emotional or physical state trigger discrete feedback in the wearable, whether by changing the heartbeat rate, creating constriction through air pressure bags, or causing localized temperature fluctuations.

Our prototype story, ‘The Girl Who Was Plugged In’ by James Tiptree Jr., showcases an incredible range of settings and emotions. The main protagonist experiences both deep love and ultimate despair, the freedom of Barcelona sunshine and the captivity of a dark damp cellar.

The book and wearable support the following outputs:

  • Light (the book cover has 150 programmable LEDs to create ambient light based on changing setting and mood)
  • Sound
  • Personal heating device to change skin temperature (through a Peltier junction secured at the collarbone)
  • Vibration to influence heart rate
  • Compression system (to convey tightness or loosening through pressurized airbags)
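The team’s control software isn’t public, but the idea of dispatching the protagonist’s state to those output channels can be sketched. The channel names, value ranges, and formulas below are my own assumptions for illustration, not the students’ implementation:

```python
# Illustrative sketch only: turning a protagonist's emotional state into
# settings for the wearable's actuators (heartbeat simulator, compression
# airbags, Peltier heating element). All names and ranges are invented.

def wearable_commands(state):
    """state: dict with 'arousal' and 'valence', each in 0.0..1.0.
    Returns per-actuator settings derived from the protagonist's state."""
    arousal, valence = state["arousal"], state["valence"]
    return {
        "heartbeat_bpm": round(60 + 80 * arousal),             # faster pulse when excited
        "airbag_pressure": round(arousal * (1 - valence), 2),  # tighter when tense and unhappy
        "peltier_delta_c": round((valence - 0.5) * 4, 1),      # warm for comfort, cool for dread
    }

print(wearable_commands({"arousal": 0.9, "valence": 0.1}))  # despair: fast, tight, cold
```

Something like this per-passage mapping, driven by whichever page the book senses the reader is on, is all the “discrete feedback” description seems to require.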

One of the earliest stories about this project was a Jan. 28, 2014 piece written by Alison Flood for the Guardian where she explains how vibration, etc. are used to convey/stimulate the reader’s sensations and emotions,

MIT scientists have created a ‘wearable’ book using temperature and lighting to mimic the experiences of a book’s protagonist

The book, explain the researchers, senses the page a reader is on, and changes ambient lighting and vibrations to “match the mood”. A series of straps form a vest which contains a “heartbeat and shiver simulator”, a body compression system, temperature controls and sound.

“Changes in the protagonist’s emotional or physical state trigger discrete feedback in the wearable [vest], whether by changing the heartbeat rate, creating constriction through air pressure bags, or causing localised temperature fluctuations,” say the academics.

Flood goes on to illuminate how science fiction has explored the notion of ‘sensory books’ (Note: Links have been removed) and how at least one science fiction novelist is responding to this new type of book,

The Arthur C Clarke award-winning science fiction novelist Chris Beckett wrote about a similar invention in his novel Marcher, although his “sensory” experience comes in the form of a video game:

Adam Roberts, another prize-winning science fiction writer, found the idea of “sensory” fiction “amazing”, but also “infantilising, like reverting to those sorts of books we buy for toddlers that have buttons in them to generate relevant sound-effects”.

Elise Hu in her Feb. 6, 2014 posting on the US National Public Radio (NPR) blog, All Tech Considered, takes a different approach to the topic,

The prototype does work, but it won’t be manufactured anytime soon. The creation was only “meant to provoke discussion,” Hope says. It was put together as part of a class in which designers read science fiction and make functional prototypes to explore the ideas in the books.

If it ever does become more widely available, sensory fiction could have an unintended consequence. When I shared this idea with NPR editor Ellen McDonnell, she quipped, “If these device things are helping ‘put you there,’ it just means the writing won’t have to be as good.”

I hope the students are successful at provoking discussion as so far they seem to have primarily provoked interest.

As for my two cents, I think that in a world where making personal connections seems increasingly difficult (i.e., people becoming more isolated), sensory fiction that stimulates people into feeling something as they read a book is a logical progression. It’s also interesting to me that all of the focus is on the reader, with no mention of what writers might produce (other than McDonnell’s cheeky comment) if they knew their books were going to be given the ‘sensory treatment’. One more musing: I wonder if there might be a difference in how males and females, writers and readers, respond to sensory fiction.

Now for a bit of wordplay. Feeling can be emotional but, in English, it can also refer to touch and researchers at MIT have also been investigating new touch-oriented media.  You can read more about that project in my Reaching beyond the screen with the Tangible Media Group at the Massachusetts Institute of Technology (MIT) posting dated Nov. 13, 2013. One final thought, I am intrigued by how interested scientists at MIT seem to be in feelings of all kinds.

1st code poetry slam at Stanford University

It’s code as in computer code and slam as in performance competition, which, when added to the word poetry, takes most of us into uncharted territory. Here’s a video clip featuring the winning entry, Say 23 by Leslie Wu, competing in Stanford University’s (located in California) 1st code poetry slam,


If you listen closely (this clip does not have the best sound quality), you can hear the words to Psalm 23 (from the bible).

Thanks to this Dec. 29, 2013 news item on phys.org for bringing this code poetry slam to my attention (Note: Links have been removed),

Leslie Wu, a doctoral student in computer science at Stanford, took an appropriately high-tech approach to presenting her poem “Say 23” at the first Stanford Code Poetry Slam.

Wu wore Google Glass as she typed 16 lines of computer code that were projected onto a screen while she simultaneously recited the code aloud. She then stopped speaking and ran the script, which prompted the computer program to read a stream of words from Psalm 23 out loud three times, each one in a different pre-recorded computer voice.

Wu, whose multimedia presentation earned her first place, was one of eight finalists to present at the Code Poetry Slam. Organized by Melissa Kagen, a graduate student in German studies, and Kurt James Werner, a graduate student in computer-based music theory and acoustics, the event was designed to explore the creative aspects of computer programming.

The Dec. 27, 2013 Stanford University news release by Mariana Lage, which originated the news item, goes on to describe the concept, the competition, and the organizers’ aims,

With presentations that ranged from poems written in a computer language format to those that incorporated digital media, the slam demonstrated the entrants’ broad interpretation of the definition of “code poetry.”

Kagen and Werner developed the code poetry slam as a means of investigating the poetic potentials of computer-programming languages.

“Code poetry has been around a while, at least in programming circles, but the conjunction of oral presentation and performance sounded really interesting to us,” said Werner. Added Kagen, “What we are interested is in the poetic aspect of code used as language to program a computer.”

Sponsored by the Division of Literatures, Cultures, and Languages, the slam drew online submissions from Stanford and beyond.

High school students and professors, graduate students and undergraduates from engineering, computer science, music, language and literature incorporated programming concepts into poem-like forms. Some of the works were written entirely in executable code, such as Ruby and C++ languages, while others were presented in multimedia formats. The works of all eight finalists can be viewed on the Code Poetry Slam website.

Kagen, Werner and Wu agree that code poetry requires some knowledge of programming from the spectators.

“I feel it’s like trying to read a poem in a language with which you are not comfortable. You get the basics, but to really get into the intricacies you really need to know that language,” said Kagen, who studies the traversal of musical space in Wagner and Schoenberg.

Wu noted that when she was typing the code most people didn’t know what she was doing. “They were probably confused and curious. But when I executed the poem, the program interpreted the code and they could hear words,” she said, adding that her presentation “gave voice to the code.”

“The code itself had its own synthesized voice, and its own poetics of computer code and singsong spoken word,” Wu said.

One of the contenders showed a poem that was “misread” by the computer.

“There was a bug in his poem, but more interestingly, there was the notion of a correct interpretation which is somewhat unique to computer code. Compared to human language, code generally has few interpretations or, in most cases, just one,” Wu said.

So what exactly is code poetry? According to Kagen, “Code poetry can mean a lot of different things depending on whom you ask.

“It can be a piece of text that can be read as code and run as program, but also read as poetry. It can mean a human language poetry that has mathematical elements and codes in it, or even code that aims for elegant expression within severe constraints, like a haiku or a sonnet, or code that generates automatic poetry. Poems that are readable to humans and readable to computers perform a kind of cyborg double coding.”
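To make Kagen’s first sense of the term concrete, here’s a toy example of my own (emphatically not a slam entry): a few lines of Python that read as a small verse and still run as a program,

```python
# My own toy "code poem", offered only to illustrate Kagen's definition of
# text that reads as verse yet executes as a program.

the_rain = ["falls", "falls", "falls"]
verse = ["softly it " + drop for drop in the_rain]
print("\n".join(verse))
```

Even in a throwaway example like this, the double reading Kagen describes is at work: a human parses the repetition as rhythm while the interpreter parses it as a list to iterate over.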

Werner noted that “Wu’s poem incorporated a lot of different concepts, languages and tools. It had Ruby language, Japanese and English, was short, compact and elegant. It did a lot for a little code.” Werner served as one of the four judges along with Kagen; Caroline Egan, a doctoral student in comparative literature; and Mayank Sanganeria, a master’s student at the Center for Computer Research in Music and Acoustics (CCRMA).

Kagen and Werner got some expert advice on judging from Michael Widner, the academic technology specialist for the Division of Literatures, Cultures and Languages.

Widner, who reviewed all of the submissions, noted that the slam allowed scholars and the public to “probe the connections between the act of writing poetry and the act of writing code, which as anyone who has done both can tell you are oddly similar enterprises.”

A scholar who specializes in the study of both medieval and machine languages, Widner said that “when we realize that coding is a creative act, we not only value that part of the coder’s labor, but we also realize that the technologies in which we swim have assumptions and ideologies behind them that, perhaps, we should challenge.”

I first encountered code poetry in 2006 and I don’t think it was new at that time but this is the first time I’ve encountered a code poetry slam. For the curious, here’s more about code poetry from the Digital poetry essay in Wikipedia (Note: Links have been removed),

… There are many types of ‘digital poetry’ such as hypertext, kinetic poetry, computer generated animation, digital visual poetry, interactive poetry, code poetry, holographic poetry (holopoetry), experimental video poetry, and poetries that take advantage of the programmable nature of the computer to create works that are interactive, or use generative or combinatorial approach to create text (or one of its states), or involve sound poetry, or take advantage of things like listservs, blogs, and other forms of network communication to create communities of collaborative writing and publication (as in poetical wikis).

The Stanford organizers have been sufficiently delighted with the response to their 1st code poetry slam that they are organizing a 2nd slam (from the Code Poetry Slam 1.1 homepage),

Call for Works 1.1

Submissions for the second Slam are now open! Submit your code/poetry to the Stanford Code Poetry Slam, sponsored by the Department of Literatures, Cultures, and Languages! Submissions due February 12th, finalists invited to present their work at a poetry slam (place and time TBA). Cash prizes and free pizza!

Stanford University’s Division of Literatures, Cultures, and Languages (DLCL) sponsors a series of Code Poetry Slams. Code Poetry Slam 1.0 was held on November 20th, 2013, and Code Poetry Slam 1.1 will be held Winter quarter 2014.

According to Lage’s news release you don’t have to be associated with Stanford University to be a competitor but, given that you will be performing your poetry there, you will likely have to live in some proximity to the university.

Reaching beyond the screen with the Tangible Media Group at the Massachusetts Institute of Technology (MIT)

Researchers at MIT’s (Massachusetts Institute of Technology) Tangible Media Group are quite literally reaching beyond the screen with inFORM, their Dynamic Shape Display,

John Brownlee’s Nov. 12, 2013 article for Fast Company describes the project this way (Note: A link has been removed),

Created by Daniel Leithinger and Sean Follmer and overseen by Professor Hiroshi Ishii, the technology behind the inFORM isn’t that hard to understand. It’s basically a fancy Pinscreen, one of those executive desk toys that allows you to create a rough 3-D model of an object by pressing it into a bed of flattened pins. With inFORM, each of those “pins” is connected to a motor controlled by a nearby laptop, which can not only move the pins to render digital content physically, but can also register real-life objects interacting with its surface thanks to the sensors of a hacked Microsoft Kinect.

To put it in the simplest terms, the inFORM is a self-aware computer monitor that doesn’t just display light, but shape as well. Remotely, two people Skyping could physically interact by playing catch, for example, or manipulating an object together, or even slapping high five from across the planet.
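Brownlee’s Pinscreen analogy suggests how simple the core rendering step is: a heightmap (say, from the hacked Kinect’s depth data) becomes a target extension for each motorized pin. The grid values and the 100 mm travel figure below are my assumptions, not specifications from the Tangible Media Group:

```python
# Illustrative sketch of the inFORM rendering idea: convert a 2-D heightmap
# of values in 0..1 into per-pin extension targets in millimetres.
# Pin travel distance is an assumed figure, not a published spec.

def heights_to_pins(heightmap, pin_travel_mm=100):
    """heightmap: 2-D list with values in 0.0..1.0 (0 = flat, 1 = fully raised).
    Returns the matching grid of pin extensions in millimetres."""
    return [[round(h * pin_travel_mm, 1) for h in row] for row in heightmap]

bump = [[0.0, 0.5, 0.0],
        [0.5, 1.0, 0.5],
        [0.0, 0.5, 0.0]]
print(heights_to_pins(bump)[1][1])  # centre pin fully extended
```

Run in a loop against live depth frames, a mapping like this is what would let a remote hand appear to rise out of the table, with the Kinect closing the loop by sensing objects placed on the pins.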

I found this bit in Brownlee’s article particularly interesting,

As the world increasingly embraces touch screens, the pullable knobs, twisting dials, and pushable buttons that defined the interfaces of the past have become digital ghosts. The tactile is gone and the Tangible Media Group sees that as a huge problem.

I echo what the researchers suggest about the loss of the tactile. Many years ago, when I worked in libraries, we digitized the card catalogues and it was, for me, the beginning of the end for my career in the world of libraries. To this day, I still miss the cards. (I suspect there’s a subtle relationship between tactile cues and memory.)

Research in libraries was a more physical pursuit then. Now, almost everything can be done with a computer screen; you need never leave your chair to research and retrieve your documents. Of course, there are some advantages to this world of screens; I can access documents in a way that would have been unthinkable in a world dominated by library card catalogues. Still, I am pleased to see work being done to reintegrate the tactile into our digitized world as I agree with the researchers who view this loss as a problem. It’s not just exercise that we’re missing with our current regime.

The researchers have produced a paper for a SIGCHI (Special Interest Group on Computer-Human Interaction; Association for Computing Machinery) conference, although it appears to be unpublished and undated,

inFORM: Dynamic Physical Affordances and Constraints through Shape and Object Actuation by Sean Follmer, Daniel Leithinger, Alex Olwal, Akimitsu Hogge, and Hiroshi Ishii.

The researchers have made this paper freely available.

Oxford’s (UK) Bodleian Library gets a new chair while Vancouver’s (Canada) Public Library gets a ‘creative studio’

One of my interests vis à vis science and technology has to do with consequences, intended or otherwise. In this case, I’m considering the impact that the digital domain has had on one of my favourite analogue forms, books; more specifically, I’m interested in one of their homes, libraries.

It’s lovely being online and being able to access information and people in ways that were undreamed of even 20 years ago. There have also been some consequences as music, movies, books, etc. have entered the digital domain either directly or from their original analogue forms. Copyright law, access to science research papers, business models for writers, musicians, and other creative types, etc. have all been hugely affected by the advent of a digital domain enabled by the fields of computer science, mathematics, etc.

Before discussing the two library stories (Oxford and Vancouver), here’s a brief description of libraries from a Wikipedia essay on the topic (Note: Links have been removed),

A library (from French “librairie”; Latin “liber” = book) is an organized collection of information resources made accessible to a defined community for reference or borrowing. It provides physical or digital access to material, and may be a physical building or room, or a virtual space, or both.[1] A library’s collection can include books, periodicals, newspapers, manuscripts, films, maps, prints, documents, microform, CDs, cassettes, videotapes, DVDs, Blu-ray Discs, e-books, audiobooks, databases, and other formats. Libraries range in size from a few shelves of books to several million items. In Latin and Greek, the idea of bookcase is represented by Bibliotheca and Bibliothēkē (Greek: βιβλιοθήκη): derivatives of these mean library in many modern languages, e.g. French bibliothèque.

The first libraries consisted of archives of the earliest form of writing—the clay tablets in cuneiform script discovered in Sumer, some dating back to 2600 BC …

Keeping that definition in mind, it’s fascinating to note that Oxford’s Bodleian Library has just announced a winner for its chair competition. From an Oct. 15, 2013 article by John Pavlus for Fast Company,

The Bodleian Libraries at the University of Oxford have housed precious literature and scholarly documents for the past 400 years. It’s a special place, with its own special chairs–and over those last four centuries, only three chair designs have graced the Bodleian’s halls. The latest, designed by Barber and Osgerby, beat out competing designs by Herman Miller and four other firms. So how do you create a chair for the ages–something that can fit into Oxford’s storied history while updating it at the same time?

Oliver Wainwright in his Sept. 13, 2013 article for the Guardian provides some context for the chairs and their role in the Bodleian library and others,

Founded in 1602, the Bodleian Library rooms were always furnished with either raised reading lecterns – to study manuscripts standing up – or low wooden benches fixed to the bookshelves, to which the precious volumes were chained. It was not until the mid-18th century that the radical idea of the chair was introduced.

Records show that in 1756, three dozen Windsor chairs were bought from a Mr Munday, for the princely sum of 8s 6d each (about £120 in today’s money) – beginning a story of scholarly sitting that reaches its latest chapter this week.

They are competing for a prestigious commission that was last awarded to Giles Gilbert Scott in 1936, when he designed two seats to furnish his New Bodleian Library building, in the form of heavy leather-clad bucket chairs to match his stripped stone fortress of books. The building is currently undergoing a £78m renovation by Wilkinson Eyre architects – due to open next year – as a home for special collections. And special collections clearly need a very special chair.

“We wanted something that would be iconic and representative of the library,” says the Bodleian’s estates manager, Toby Kirtley. “It should be contemporary in style, but not out of place in a heritage setting – innovative and original, without being too experimental and risky.”

“People are now used to reading all over the place on their iPhones, while waiting for the bus or on the train, so there is a renewed attraction to coming back to the sanctity of a specific, static space.”

Perhaps surprisingly, Fletcher [Chris Fletcher, keeper of special collections] has also seen the use of the special collections increase, despite the wide availability of much of the material online.

“As digital information becomes more accessible, so the importance of the analogue also surfaces. It’s like vinyl, or 35mm film: people are interested in objects and the innate quality of things.”

By contrast, Vancouver Public Library’s chief librarian, Sandra Singh, wants to embark on a different approach to the library experience (I will get back to the Bodleian library and the winning chair). From an Oct. 1, 2013 article by Cheryl Rossi for the Vancouver Courier,

Once a bastion of silence, the Vancouver Public Library wants to build a creative technology lab that includes a recording studio with sound mixing equipment.

Open to library patrons, the Inspiration Lab would include a recording studio, digital devices to preserve and share stories, video editing software and self-publishing tools that include software and hardware to produce print or eBooks.

“What our hope would be down the road is if they come in and record an oral history or create a movie or a new piece of music or something, that we can actually add it to our collection,” said chief librarian Sandra Singh. “As a community we’re enriched when we learn about each other, we learn about each other’s experiences, we learn how to see the world through each other’s eyes. It helps build connectivity, trust, empathy and a sense of belonging.”

The library anticipates needing up to $600,000 to create the 3,000-square-foot lab on the third floor of the Central Branch at 350 West Georgia St. The lab is slated to open in late 2014.

Interestingly, it seems that anyone with an objection to this grand plan is going to be called old (and presumably described as ‘out of touch’) as per Rod Mickleburgh’s Dec. 21, 2012 article about the ‘inspiration’ lab for the Globe & Mail,

Under the direction of its enthusiastic chief librarian, Sandra Singh – at 39, the youngest head of a major public library system in Canada – visits to the VPL are up, circulation is up, and wireless use in particular, not surprisingly, is skyrocketing. [emphasis mine]

“This is a very exciting time to be in libraries. We are being transformed,” Ms. Singh said. “When they think of libraries, many still have the old mental model of books on shelves. Well, we are books on shelves, no doubt about it. But we are so much more.” [emphasis mine]

The evidence was clear on a wet, miserable afternoon this week at the VPL’s multi-storey main branch downtown. The place was packed. Most were not borrowing books.

Teenager Jerrison Oracion excitedly checked out a couple of dance video games. “It’s the only place where you can get video games for free,” he said, with a big grin.

Up on the fifth floor, a group of community college students sat around a table, talking over their opened binders. “There’s peace and quiet here,” Jen Hall said. “If I go to Starbucks, I get nothing done.”

Nearby, long rows of computer stations were full up. “Just browsing,” said fashion designer Jewelz Mills, one of the users. “I try to come in here once a week.”

While insisting that the books are all right, with a long life ahead of them, Ms. Singh said the library is meeting the challenges of the digital age head-on.

The popularity of e-books, which can be downloaded directly from the VPL’s website, is on the rise. The cost of most Internet paywalls is absorbed by the library, and computer courses abound, ranging from basic skills for seniors to surfing the net beyond the obvious.

“What libraries are really about is learning. It’s not really about the format,” Ms. Singh said.

Where to start? The youngest chief librarian talks about old mental models followed by anecdotes about teenagers and college students who are at the library because it’s quiet (pause for an ironic moment) and breathless excitement over e-books and the digital domain.

Having visited the Vancouver Public Library (central branch and my neighbourhood branch), I can tell you there is significantly less product on the floor and by product I don’t mean just books. There’s less of everything. I guess they’re making room for the new studio. How many of us are going to fit into that studio, which is located in the central branch only (discards going to the now ‘emptyish’ neighbourhood branches)? Who’s going to get access? I’m also curious about intellectual property. For example, if I make a movie that spawns much money, do I owe the library anything? What about my self-published book? Or am I paying the library for the privilege of using the equipment after I’ve paid in taxes to have the studio built?

As for Singh’s contention that libraries are for learning, she and I have a significant difference of opinion. I think their chief function is access and a public library is supposed to ensure access for everyone.

I have some issues with this grand studio plan but no doubt Ms. Singh (and I have met and talked with her, so I have no doubt) would ascribe my objections to my age rather than to any reasonable objections based on a lack of data and information. What statistics or data support the notion that the library should supply someone or other with an ‘inspiration’ lab? There are similar experiments in the US and elsewhere. Have these been successful and has anyone analyzed the reasons for success and/or failure?

Apparently, there was some sort of public consultation. According to Mickleburgh’s article,

An extensive series of Free For All sessions, seeking community opinion on what was wanted from their libraries, including the wishes of teens, produced more than 7,000 responses over 10 months. What emerged is that people still value the VPL’s extensive collections, and they treasure its space, a refuge from the density of modern living.

When and where? How were people notified and who was invited? Who crunched the data? Is it possible the data crunchers had an agenda (consciously or unconsciously)? The answer to that last question is yes and one always has to compensate for one’s own agenda.

Some of these questions could also be aimed at the Bodleian Library folks and this contention “As digital information becomes more accessible, so the importance of the analogue also surfaces. It’s like vinyl, or 35mm film: people are interested in objects and the innate quality of things.” Do you have data supporting your contention or is that what you want to believe?

Finally, here’s what Wainwright had to say about the winning design,

Finally we come to Barber Osgerby, working with classic English modernist manufacturer Isokon. Either the designers are fans of Christine de Pizan, or I have been looking at medieval illuminations for too long, but their chair has definite echoes of some of the low, round-backed seats the Renaissance feminist is depicted sitting in.

With a single straight spine that joins a continuous curving arm rest to a similarly-shaped rail on the floor, the form is also strongly reminiscent of Frank Lloyd Wright’s Barrel Chair, designed in 1937 for the Wingspread house in Wisconsin. Seen in a row from behind, as they will be installed in the library, they appear to form a line of little rooms around the readers, defining a series of individual territories from the floor to the desk.

As Barber Osgerby have cleverly done before with their Tip Ton school chair, the bottom rail is also angled to allow the chair to be subtly tilted forward, or leaned back to recline.

“That could be an important feature for the users of special collections,” says Fletcher. “You often want to get right in to see the variations in type, or annotations, or the chain lines in the paper.”

Sitting down, it appears to be the most comfortable, with broad armrests set at the right height; although, as I tilt forward – engrossed in the detail of a ligature – it feels like there might be a chance of being deposited head-first into the folio.

As a classic form that would sit at home in the Gilbert Scott interiors, yet which has its own distinctive identity as an elegant and ergonomic design, my money’s on Barber Osgerby.

You can see a photograph of the three finalist chairs and enjoy Wainwright’s full article here.

The UK’s Futurefest and an interview with Sue Thomas

Futurefest with “some of the planet’s most radical thinkers, makers and performers” is taking place in London next weekend on Sept. 28 – 29, 2013 and I am very pleased to be featuring an interview with one of Futurefest’s speakers, Sue Thomas, who among many other accomplishments was also the founder of the Creative Writing and New Media programme at De Montfort University, UK, where I got my master’s degree.

Here’s Sue,

[photo: Sue Thomas]

Sue Thomas was formerly Professor of New Media at De Montfort University. Now she writes and consults on digital well-being. Her new book ‘Technobiophilia: nature and cyberspace’ explains how contact with the natural world can help soothe our connected lives. http://www.suethomas.net @suethomas

  • I understand you are participating in Futurefest’s SciFi Writers’ Parliament; could you explain what that is and what the nature of your participation will be?

The premise of the session is to invite Science Fiction writers to play with the idea that they have been given the power to realise the kinds of new societies and cultures they imagine in their books. Each of us will present a brief proposal for the audience to vote on. The panel will be chaired by Robin Ince, a well-known comedian, broadcaster, and science enthusiast. The presenters are Cory Doctorow, Pat Cadigan, Ken MacLeod, Charles Stross, Roz Kaveney and myself.

  • Do you have expectations for who will be attending ‘Parliament’ and will they be participating as well as watching?

I’m expecting the audience for FutureFest http://www.futurefest.org/ to be people interested in future forecasting across the four themes of the event: Well-becoming, In the imaginarium, We are all gardeners now, and The value of everything. There are plenty of opportunities for them to participate, not just in discussing and voting in panels like ours, but also in The Daily Future, a Twitter game, and Playify, which will run around and across the weekend.

  • How are you preparing for ‘Parliament’?

I will propose A Global Environmental Protection Act for Cyberspace. The full text of the proposal is on my blog here http://suethomasnet.wordpress.com/2013/09/05/futurefest/ It’s based on the thinking and research around my new book Technobiophilia: nature and cyberspace http://suethomasnet.wordpress.com/technobiophilia/ which coincidentally comes out in the UK two days before FutureFest. In the runup to the event I’ll also be gathering people’s views and refining my thoughts.

[book cover: Technobiophilia]

  • Is there any other event you’re looking forward to in particular and why would that be?

The whole of FutureFest looks great and I’m excited about being there all weekend to enjoy it. The following week I’m doing a much smaller but equally interesting event at my local Cafe Scientifique, which is celebrating its first birthday with a talk from me about Technobiophilia. I’ve only recently moved to Bournemouth so this will be a great chance to meet the kinds of interesting local people who come to Cafe Scientifique in all parts of the world. http://suethomasnet.wordpress.com/2013/09/12/cafe-scientifique/


I’ll also be launching the book in North America with an online lecture in the Metaliteracy MOOC at SUNY Empire State University. The details are yet to be released but it’s booked for 18 November. http://metaliteracy.cdlprojects.com/index.html

  • Is there anything you’d like to add?

I’m also doing another event at FutureFest which might be of interest, especially to people interested in the future of death. It’s called xHumed and this is what it’s about: If we can archive and store our personal data, media, DNA and brain patterns, the question of whether we can bring back the dead is almost redundant. The right question is should we? It is the year 2050AD and great thought leaders from history have been “xHumed”. What could possibly go wrong? Through an interactive performance Five10Twelve will provoke and encourage the audience to consider the implications via soundbites and insights from eminent experts – both living and dead. I’m expecting some lively debate!

Thank you,  Sue for bringing Futurefest to life and congratulations on your new book!

You can find out more about Futurefest and its speakers here at the Futurefest website. I found Futurefest’s ticket webpage (which is associated with the National Theatre) a little more informative about the event as a whole,

Some of the planet’s most radical thinkers, makers and performers are gathering in East London this September to create an immersive experience of what the world will feel like over the next few decades.

From the bright and uplifting to the dark and dystopian, FutureFest will present a weekend of compelling talks, cutting-edge shows, and interactive performances that will inspire and challenge you to change the future.

Enter the wormhole in Shoreditch Town Hall on the weekend of 28 and 29 September 2013 and experience the next phase of being human.

FutureFest is split into four sessions, Saturday Morning, Saturday Afternoon, Sunday Morning and Sunday Afternoon. You can choose to come to one, two, three or all sessions. They all have a different flavour, but each one will immerse you deep in the future.

Please note that FutureFest is a living, breathing festival so sessions are subject to change. We’ll keep you up to date on our FutureFest website.

Saturday Morning will feature The Blind Giant author Nick Harkaway, bionic man Bertolt Meyer and techno-cellist Peter Gregson. There will also be secret agents, villages of the future and a crowd-sourced experiment in futurology with some dead futurists.

Saturday Afternoon has forecaster Tamar Kasriel helping to futurescape your life, and gamemaker Alex Fleetwood showing us what life will be like in the Gameful century. We’ve got top political scientists David Runciman and Diane Coyle exploring the future of democracy. There will also be a mass-deception experiment, more secret agents and a look forward to what the weather will be like in 2100.

Sunday Morning sees Sermons of the Future. Taking the pulpit will be Wikipedia’s Jimmy Wales, social entrepreneur and model Lily Cole, and Astronomer Royal Martin Rees. Meanwhile the comedian Robin Ince will be chairing a Science Fiction Parliament with top SF authors, Roberto Unger will be analysing the future of religion and one of the world’s top chefs, Andoni Aduriz, will be exploring how food will make us feel in the future.

Sunday Afternoon will feature a futuristic take on the Sunday lunch, with food futurologist Morgaine Gaye inviting you for lunch in the Gastrodome with insects and 3D meat print-outs on the menu. Smari McCarthy, founder of Iceland’s Pirate Party and Wikileaks worker, will be exploring life in a digitised world, and Charlie Leadbeater, Diane Coyle and Mark Stevenson will be imagining cities and states of the future.

I noticed that a few Futurefest speakers have been featured here:

Eric Drexler, ‘Mr. Nano’, was last mentioned in a May 6, 2013 posting about a talk he was giving in Seattle, Washington to promote his new book, Radical Abundance.

Martin Rees, Emeritus Professor of Cosmology and Astrophysics, was mentioned in a Nov. 26, 2012 posting about the Cambridge Project for Existential Risk (humans relative to robots).

Bertolt Meyer, a young researcher from Zurich University and a lifelong user of prosthetic technology, in a Jan. 30, 2013 posting about building a bionic man.

Cory Doctorow, a science fiction writer who ran afoul of James Moore, then Minister of Canadian Heritage and now Minister of Industry Canada, when Moore accused him of being a ‘radical extremist’ prior to the introduction of new copyright legislation for Canadians, was mentioned in a June 25, 2010 posting.

I wish I could be at London’s Futurefest; since I can’t, I will wish the organizers and participants all the best.

* On a purely cosmetic note, on Dec. 5, 2013, I changed the paragraph format in the responses.

Defiance, a transmedia project, goes nano (for one episode anyway)

Defiance sounds more like the name of a warship than the title of a transmedia (TV/games) science fiction project. It (both the TV series and the game) debuted with much fanfare in April 2013 on the US SyFy channel. Given the alien-invasion aspect of the show, I wasn’t expecting any nanotechnology, but episode eight, broadcast on June 3, 2013, has a character being ‘brought back to life’ by nanomachines, according to the Defiance recaplet by Jacob Clifton for Television Without Pity,

In fact, Sukar’s first death was the result of a bit of Ark that contained nanomachines and were piloting his body around to save the Votans in town. She [Irisa] takes his comatose body back to the Badlands tribe, and I guess deals with the fact that what little guidance she had for dealing with her coming godhood is now gone, which has to suck. But then too, she seems to understand that miracles never look like miracles — that just because it was nanomachines doesn’t mean it wasn’t also a miracle — so that’s comforting.

I’m not entirely sure how the nanomachines piloted a dead (?) character’s body around town but I don’t think that was the recapper’s main concern. However, curiosity aroused, I found some interviews with the science advisor for Defiance, Kevin Grazier. Here’s an excerpt from Grazier’s April 15, 2013 Q&A with Emilie Lorditch for Inside Science,

Kevin Grazier is a planetary physicist who worked at NASA’s Jet Propulsion Laboratory on the Cassini/Huygens Mission to Saturn and Titan, and is currently conducting research on long-term, large-scale computational simulations of Solar System dynamics and evolution. Grazier has also been a science advisor for numerous television shows such as “Eureka,” “Battlestar Galactica,” and the new SyFy show “Defiance.” …

IS: What is your typical day like?
KG:
My interaction with the writers and producers depends upon the show, and for each episode it frequently depends upon the writer. Some shows (“Eureka,” “Falling Skies”) have brought me in prior to the beginning of a season to recommend technology or elaborate on scientific concepts for the upcoming season. Some writers will have an idea for a story, and will chat with me before they even start writing. Sometimes writers solicit input at the story outline stage, sometimes at the first draft stage. Sometimes, on the less tech-heavy stories, I have no interaction until there is a completed script, and then I weigh in with my notes.
On a few occasions I’ve been called into the writers’ room to do a presentation when we’re planning a particularly big or blockbuster season finale. Sometimes I get called to help with the visual effects. That happened a lot on Eureka.
For two episodes of Eureka, I was even asked to write several pages of book chapters. In these episodes characters opened books and, since we shot in high-definition, fans could freeze the frame and read the text – so the text had to be original, not copyrighted, and, most importantly, correct.
On Defiance, I’ve had more telecons [telephone conferences] than I’ve had on previous series, primarily because our game designer, Trion Worlds, is located in San Diego. I’ve also been editing a lot of online content, which I’ve never got to do before. As I said, nothing is “typical.”
IS: What advice do you have for scientists who want to work as a science advisor?
KG: It’s actually a lot easier to break in these days than it was when I started. There is an organization, program of the National Academy of Sciences, called The Science and Entertainment Exchange. They pair up scientists as consultants to the productions that need expertise. If you’re a scientist, and are interested in consulting (usually non-paid, at least at first), they maintain a database of scientists and their areas of expertise. If science consulting is something that interests you, start there.
One of the most important recommendations I could offer is that to do the job well, to be able to relate to the writers with whom you’re working, it really pays to have taken a screenwriting class or three. When it was obvious that I was going to get continued work in the industry, I went to UCLA Extension and earned a certificate in television writing. That’s been supremely helpful.
When you have an inkling of how difficult it is to tell a story in 42 minutes, with a beginning, middle, and end, along with five act breaks, you’re a much better advisor.

That last response from Grazier gives me daymares as I imagine some science type who’s taken a few courses and decides s/he is not just a science advisor but also the head writer. I’ve seen the phenomenon at work. All some people need is a workshop or a course and suddenly, they’ve become experts.

The article about Defiance on ScriptPhD is not credited or dated but I’m assuming it was posted in the last few months,

ScriptPhD.com was very honored to have the opportunity to sit down with both series writer and co-creator and executive producer Michael Taylor, as well as the show’s scientific advisor Kevin Grazier, to get a better idea of the characters, storyline and what we can expect going forward.

Taylor, also a series writer and producer on breakout SyFy hit series Battlestar Galactica, was involved in the early development of the series, which took over one and a half years to re-conceptualize and bring to the small screen from its initial concept. “Keep in mind, the original draft [of the pilot] was very different,” Taylor says. “The Chief Lawkeeper role was prototyped as this older, wry Brian Dennehy-type of character, for example. Irathient warrior Irisa was more of a wide-eyed, naïve girl than she is in the current version. We even had about two to three episodes of the series done. But as we went along, we were finding it hard to keep thinking up episodes from week to week.” Which is when the series went back to the drawing boards.

And reimagine the series they did! Unlike the vast majority of sci-fi shows, which explore the process of warring factions integrating and co-existing, in Defiance, this has already occurred, something that Taylor calls a “cool experiment.” “The 30-year-war has already been fought, all that stuff is long in the past,” Taylor reminds us. “And now we are at the point where the 8 races are trying to co-exist together. …

As for integrating the video game concept, it predated the show by five years, which allowed writers to establish stories and character development that will happen separately from, albeit concurrently with, the action in Defiance onscreen. …

“We’ve seen time and time again small plot points that have become little tidbits, or plot points or even major points driving an episode when you get the science right,” Grazier notes. “Caring about the science [in a series plot] can be as much of a strength as it is a constraint.”

And while it’s true that the science of Defiance does seem a bit less obvious or upfront than in shows like BSG or Eureka, it’s no less important nor is it any less incorporated. “We have a really rich, really well thought-out backstory, and that is very much informed by the science,” Grazier says. “We know that the V-7 [Votan] races came from the Votan System. What happened to their system? Well, we have that [mapped out], we know that.” He also pointed to subtle implications such as in the first few minutes of the pilot. When Irisa looks up at the sleeper pods, she says, “All those hundreds of years in space just to die in your sleep.” Grazier notes: “The subtle implication is that the V-7 aliens don’t go FTL [faster than light]. So we have figured out where they’re from and how far away they’re from and which direction of the sky they’re from and how long it took to get here.”

In addition to its elemental role in the backstory, science has also had fun ‘little’ moments in the show, like the importance of the terrasphere in defending the Volge attack in the pilot or the hell bugs (a genetic amalgam of several earth critters) in episode 3. Some of these small scientific details were even able to result in cool visual effects. For example, when the table of writers was discussing the ark falls, Grazier, an astrophysicist by training, noted that the conservation of angular momentum meant that these things would not land vertically, but rather horizontally, using the screaming overhead comets in Deep Impact as a touchstone. Sure enough, in the first few minutes, you see Nolan and Irisa tracking what’s about to be an ark fall and you see them screaming overhead. “That will, by the way, come into play in a later episode,” Grazier teases. “We know where the ark belt is. Where the ships were when they blew up, how far away they are.”

Sadly, I couldn’t find any details about Defiance’s nanotechnology aspects but both the articles I’ve excerpted feature intriguing science and insider information.

Canadian filmmaker Chris Landreth’s Subconscious Password explores the uncanny valley

I gather Chris Landreth’s short animation, Subconscious Password, hasn’t been officially released yet by the National Film Board (NFB) of Canada but there are clips and trailers which hint at some of the filmmaker’s themes. Landreth in a May 23, 2013 guest post for the NFB.ca blog spells out one of them,

Subconscious Password, my latest short film, travels to the inner mind of a fellow named Charles Langford, as he struggles to remember the name of his friend at a party. In his subconscious, he encounters a game show, populated with special guest stars:  archetypes, icons, distant memories, who try to help him find the connection he needs: His friend’s name.

The film is a psychological romp into a person’s inner mind where (I hope) you will see something of your own mind working, thinking, feeling. Even during a mundane act like remembering the name of an acquaintance at a party, someone you only vaguely remember. To me, mundane accomplishments like these are miracles we all experience many times each day.

Landreth also discusses the ‘uncanny valley’ and how he deliberately cast his film into that valley. For anyone who’s unfamiliar with the ‘uncanny valley’, I wrote about it in a Mar. 10, 2011 posting concerning Geminoid robots,

It seems that researchers believe that the ‘uncanny valley’ doesn’t necessarily have to exist forever and at some point, people will accept humanoid robots without hesitation. In the meantime, here’s a diagram of the ‘uncanny valley’,

From the article on Android Science by Masahiro Mori (translated by Karl F. MacDorman and Takashi Minato)

Here’s what Mori (the person who coined the term) had to say about the ‘uncanny valley’ (from Android Science),

Recently there are many industrial robots, and as we know the robots do not have a face or legs, and just rotate or extend or contract their arms, and they bear no resemblance to human beings. Certainly the policy for designing these kinds of robots is based on functionality. From this standpoint, the robots must perform functions similar to those of human factory workers, but their appearance is not evaluated. If we plot these industrial robots on a graph of familiarity versus appearance, they lie near the origin (see Figure 1 [above]). So they bear little resemblance to a human being, and in general people do not find them to be familiar. But if the designer of a toy robot puts importance on a robot’s appearance rather than its function, the robot will have a somewhat humanlike appearance with a face, two arms, two legs, and a torso. This design lets children enjoy a sense of familiarity with the humanoid toy. So the toy robot is approaching the top of the first peak.

Of course, human beings themselves lie at the final goal of robotics, which is why we make an effort to build humanlike robots. For example, a robot’s arms may be composed of a metal cylinder with many bolts, but to achieve a more humanlike appearance, we paint over the metal in skin tones. These cosmetic efforts cause a resultant increase in our sense of the robot’s familiarity. Some readers may have felt sympathy for handicapped people they have seen who attach a prosthetic arm or leg to replace a missing limb. But recently prosthetic hands have improved greatly, and we cannot distinguish them from real hands at a glance. Some prosthetic hands attempt to simulate veins, muscles, tendons, finger nails, and finger prints, and their color resembles human pigmentation. So maybe the prosthetic arm has achieved a degree of human verisimilitude on par with false teeth. But this kind of prosthetic hand is too real and when we notice it is prosthetic, we have a sense of strangeness. So if we shake the hand, we are surprised by the lack of soft tissue and cold temperature. In this case, there is no longer a sense of familiarity. It is uncanny. In mathematical terms, strangeness can be represented by negative familiarity, so the prosthetic hand is at the bottom of the valley. So in this case, the appearance is quite human like, but the familiarity is negative. This is the uncanny valley.

Landreth discusses the ‘uncanny valley’ in relation to animated characters,

Many of you know what this is. The Uncanny Valley describes a common problem that audiences have with CG-animated characters. Here’s a graph that shows this:

Follow the curvy line from the lower left. If a character is simple (like a stick figure) we have little or no empathy with it. A more complex character, like Snow White or Pixar’s Mr. Incredible, gives us more human-like mannerisms for us to identify with.

But then the Uncanny Valley kicks in. That curvy line changes direction, plunging downwards. This is the pit into which many characters from The Polar Express, Final Fantasy and Mars Needs Moms fall. We stop empathizing with these characters. They are unintentionally disturbing, like moving corpses. This is a big problem with realistic CGI characters: that unshakable perception that they are animated zombies. [zombie emphasis mine]

You’ll notice that the diagram from my posting features a zombie at the very bottom of the curve.

Landreth goes on to compare the ‘land’ in the uncanny valley to real estate,

… The value of land in the Uncanny Valley has plunged to zero. There are no buyers.

Well, except perhaps me.

Some of you know that my films have a certain obsession with visual realism with their human characters. I like doing this. I find value in this realism that goes beyond simply copying what humans look and act like. If used intelligently and with imagination, realism can capture something deeper, something weird and emotional and psychological about our collective experience on this planet. But it has to be honest. That’s hard.

He also explains what he’s hoping to accomplish by inhabiting the uncanny valley,

When making this film, we knew we were going into the Uncanny Valley. We did it because your subconscious processes, and mine, are like this valley. We project our waking world into our subconscious minds. The ‘characters’ in this inner world are realistic approximations of actual people, without actually being real. This is the miracle of how we get by. My protagonist, Charles, has a mixture of both realistic approximations and crazy warped versions of the people and icons in his life. He is indeed a bit off-kilter. But he gets by, like most of us do. As you probably have guessed, both Charles and the Host are self-portraits. I want to be honest in showing you this world. My own Uncanny Valley. You have one too. It’s something to celebrate.

On that note, here’s a clip from Subconscious Password,

Subconscious Password (Clip) by Chris Landreth, National Film Board of Canada

I last wrote about Landreth and his work in an April 14, 2010 posting (scroll down about 1/4 of the way) regarding mathematics and the arts. That post features excerpts from an interview with University of Toronto (Ontario, Canada) mathematician Karan Singh, who worked with Landreth on their award-winning film, Ryan.