Category Archives: wearable electronics

Electrotactile rendering device virtualizes the sense of touch

I stumbled across this November 15, 2022 news item on Nanowerk highlighting work on the sense of touch in the virtual world, originally announced in October 2022,

A collaborative research team co-led by City University of Hong Kong (CityU) has developed a wearable tactile rendering system, which can mimic the sensation of touch with high spatial resolution and a rapid response rate. The team demonstrated its application potential in a braille display, adding the sense of touch in the metaverse for functions such as virtual reality shopping and gaming, and potentially facilitating the work of astronauts, deep-sea divers and others who need to wear thick gloves.

Here’s what you’ll need to wear for this virtual tactile experience,

Caption: The new wearable tactile rendering system can mimic touch sensations with high spatial resolution and a rapid response rate. Credit: Robotics X Lab and City University of Hong Kong

An October 20, 2022 City University of Hong Kong (CityU) press release (also on EurekAlert), which originated the news item, delves further into the research,

“We can hear and see our families over a long distance via phones and cameras, but we still cannot feel or hug them. We are physically isolated by space and time, especially during this long-lasting pandemic,” said Dr Yang Zhengbao, Associate Professor in the Department of Mechanical Engineering of CityU, who co-led the study. “Although there has been great progress in developing sensors that digitally capture tactile features with high resolution and high sensitivity, we still lack a system that can effectively virtualize the sense of touch that can record and playback tactile sensations over space and time.”

In collaboration with Chinese tech giant Tencent’s Robotics X Laboratory, the team developed a novel electrotactile rendering system for displaying various tactile sensations with high spatial resolution and a rapid response rate. Their findings were published in the scientific journal Science Advances under the title “Super-resolution Wearable Electro-tactile Rendering System”.

Limitations in existing techniques

Existing techniques to reproduce tactile stimuli can be broadly classified into two categories: mechanical and electrical stimulation. By applying a localised mechanical force or vibration on the skin, mechanical actuators can elicit stable and continuous tactile sensations. However, they tend to be bulky, limiting the spatial resolution when integrated into a portable or wearable device. Electrotactile stimulators, in contrast, which evoke touch sensations in the skin at the location of the electrode by passing a local electric current through the skin, can be light and flexible while offering higher resolution and a faster response. But most of them rely on high-voltage direct-current (DC) pulses (up to hundreds of volts) to penetrate the stratum corneum, the outermost layer of the skin, to stimulate the receptors and nerves, which poses a safety concern. The tactile rendering resolution also needed to be improved.

The latest electro-tactile actuator developed by the team is very thin and flexible and can be easily integrated into a finger cot. This fingertip wearable device can display different tactile sensations, such as pressure, vibration, and texture roughness, in high fidelity. Instead of using DC pulses, the team developed a high-frequency alternating stimulation strategy and succeeded in lowering the operating voltage to under 30 V, ensuring the tactile rendering is safe and comfortable.

They also proposed a novel super-resolution strategy that can render tactile sensations at locations between physical electrodes, instead of only at the electrode locations. This increases the spatial resolution of their stimulators by more than three times (from 25 to 105 points), so the user feels a more realistic tactile sensation.
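
The press release doesn’t spell out how the in-between points are rendered. A common way to create a ‘virtual’ stimulation point between two physical electrodes is to split the drive amplitude between neighbours in proportion to how close the target point is to each; here is a minimal Python sketch of that idea (an assumption about the mechanism, not the paper’s exact scheme),

def electrode_weights(target, e_left=0.0, e_right=1.0):
    """Split drive amplitude between two adjacent electrodes so the perceived
    stimulation point lands at `target` (0.0 = on the left electrode,
    1.0 = on the right electrode)."""
    t = (target - e_left) / (e_right - e_left)
    return 1.0 - t, t  # (weight for left electrode, weight for right electrode)

# A point midway between the two electrodes gets equal drive on both:
print(electrode_weights(0.5))   # (0.5, 0.5)
# A point three-quarters of the way across leans on the right electrode:
print(electrode_weights(0.75))  # (0.25, 0.75)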

Tactile stimuli with high spatial resolution

“Our new system can elicit tactile stimuli with both high spatial resolution (76 dots/cm2), similar to the density of related receptors in the human skin, and a rapid response rate (4 kHz),” said Mr Lin Weikang, a PhD student at CityU, who made and tested the device.

The team ran different tests to show various application possibilities of this new wearable electrotactile rendering system. For example, they proposed a new Braille strategy that is much easier for people with a visual impairment to learn.

The proposed strategy breaks down the alphabet and numerical digits into individual strokes, ordered in the same way they are written. By wearing the new electrotactile rendering system on a fingertip, the user can recognise the letters presented by feeling the direction and the sequence of the strokes with the fingertip sensor. “This would be particularly useful for people who lose their eyesight later in life, allowing them to continue to read and write using the same alphabetic system they are used to, without the need to learn the whole Braille dot system,” said Dr Yang.
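
Here is a rough sketch, in Python, of what a stroke-based letter encoding could look like; the stroke vocabulary and the letter-to-stroke mappings below are hypothetical, invented for illustration, not taken from the paper,

# Each letter or digit is an ordered sequence of stroke directions,
# mirroring the order in which it is handwritten. (Hypothetical mappings.)
STROKES = {
    "L": ["down", "right"],
    "T": ["right", "down"],
    "7": ["right", "down-left"],
}

def render_letter(letter, stimulate_stroke):
    """Play a letter back as a sequence of directional strokes on the fingertip."""
    for direction in STROKES[letter]:
        stimulate_stroke(direction)  # e.g. sweep a virtual stimulation point that way

# Example: print the strokes instead of driving actual electrodes.
render_letter("L", print)  # prints "down" then "right"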

Enabling touch in the metaverse

Second, the new system is well suited for VR/AR [virtual reality/augmented reality] applications and games, adding the sense of touch to the metaverse. The electrodes can be made highly flexible and scalable to cover larger areas, such as the palm. The team demonstrated that a user can virtually sense the texture of clothes in a virtual fashion shop. The user also experiences an itchy sensation in the fingertips when being licked by a VR cat. When stroking a virtual cat’s fur, the user can feel a variance in the roughness as the strokes change direction and speed.

The system can also be useful in transmitting fine tactile details through thick gloves. The team successfully integrated the thin, light electrodes of the electrotactile rendering system into flexible tactile sensors on a safety glove. The tactile sensor array captures the pressure distribution on the exterior of the glove and relays the information to the user in real time through tactile stimulation. In the experiment, the user could quickly and accurately locate a tiny steel washer, just 1 mm in radius and 0.44 mm thick, based on the tactile feedback from the glove with sensors and stimulators. This shows the system’s potential in enabling high-fidelity tactile perception, which is currently unavailable to astronauts, firefighters, deep-sea divers and others who need to wear thick protective suits or gloves.

“We expect our technology to benefit a broad spectrum of applications, such as information transmission, surgical training, teleoperation, and multimedia entertainment,” added Dr Yang.

Here’s a link to and a citation for the paper,

Super-resolution wearable electrotactile rendering system by Weikang Lin, Dongsheng Zhang, Wang Wei Lee, Xuelong Li, Ying Hong, Qiqi Pan, Ruirui Zhang, Guoxiang Peng, Hong Z. Tan, Zhengyou Zhang, Lei Wei, and Zhengbao Yang. Science Advances 9 Sep 2022 Vol 8, Issue 36 DOI: 10.1126/sciadv.abp8738

This paper is open access.

Wearable devices for plants

For those with a taste for text, a May 4, 2022 news item on ScienceDaily announces wearable technology for plants,

Plants can’t speak up when they are thirsty. And visual signs, such as shriveling or browning leaves, don’t start until most of their water is gone. To detect water loss earlier, researchers reporting in ACS Applied Materials & Interfaces have created a wearable sensor for plant leaves. The system wirelessly transmits data to a smartphone app, allowing for remote management of drought stress in gardens and crops.

A May 4, 2022 American Chemical Society (ACS) news release (also on EurekAlert), which originated the news item, provides more detail,

Newer wearable devices are more than simple step-counters. Some smart watches now monitor the electrical activity of the wearer’s heart with electrodes that sit against the skin. And because many devices can wirelessly share the data that are collected, physicians can monitor and assess their patients’ health from a distance. Similarly, plant-wearable devices could help farmers and gardeners remotely monitor their plants’ health, including leaf water content — the key marker of metabolism and drought stress. Previously, researchers had developed metal electrodes for this purpose, but the electrodes had problems staying attached, which reduced the accuracy of the data. So, Renato Lima and colleagues wanted to identify an electrode design that was reliable for long-term monitoring of plants’ water stress, while also staying put.

The researchers created two types of electrodes: one made of nickel deposited in a narrow, squiggly pattern, and the other cut from partially burnt paper that was coated with a waxy film. When the team affixed both electrodes to detached soybean leaves with clear adhesive tape, the nickel-based electrodes performed better, producing larger signals as the leaves dried out. The metal ones also adhered more strongly in the wind, which was likely because the thin squiggly design of the metallic film allowed more of the tape to connect with the leaf surface. Next, the researchers created a plant-wearable device with the metal electrodes and attached it to a living plant in a greenhouse. The device wirelessly shared data to a smartphone app and website, and a simple, fast machine learning technique successfully converted these data to the percent of water content lost. The researchers say that monitoring water content on leaves can indirectly provide information on exposure to pests and toxic agents. Because the plant-wearable device provides reliable data indoors, they now plan to test the devices in outdoor gardens and crops to determine when plants need to be watered, potentially saving resources and increasing yields.
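
The release only says a ‘simple, fast machine learning technique’ converted the electrode data to percent water loss; an ordinary least-squares fit on the sensor signal would be in that spirit. Here is a Python sketch with made-up numbers (the study’s actual features, model and data are not described here),

import numpy as np

# Synthetic example data: as the leaf dries out, the normalized electrode
# signal drops and the measured water loss (%) rises. These values are invented.
signal = np.array([1.00, 0.92, 0.81, 0.70, 0.58, 0.44])
water_loss_pct = np.array([0.0, 5.0, 12.0, 20.0, 30.0, 42.0])

slope, intercept = np.polyfit(signal, water_loss_pct, 1)  # simple linear fit

def estimate_water_loss(reading):
    """Convert a new electrode reading into an estimated percent water loss."""
    return slope * reading + intercept

print(round(estimate_water_loss(0.65), 1))  # roughly 25 (% water loss)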

The authors acknowledge support from the São Paulo Research Foundation and the Brazilian Synchrotron Light Laboratory. Two of the study’s authors are listed on a patent application filed for the technology.

…

Here’s a link to and a citation for the paper,

Biocompatible Wearable Electrodes on Leaves toward the On-Site Monitoring of Water Loss from Plants by Júlia A. Barbosa, Vitoria M. S. Freitas, Lourenço H. B. Vidotto, Gabriel R. Schleder, Ricardo A. G. de Oliveira, Jaqueline F. da Rocha, Lauro T. Kubota, Luis C. S. Vieira, Hélio C. N. Tolentino, Itamar T. Neckel, Angelo L. Gobbi, Murilo Santhiago, and Renato S. Lima. ACS Appl. Mater. Interfaces 2022, XXXX, XXX, XXX-XXX DOI: https://doi.org/10.1021/acsami.2c02943 Publication Date: March 21, 2022 © 2022 American Chemical Society

This paper is behind a paywall.

Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver (Canada) Art Gallery (2 of 2): Meditations

Dear friend,

I thought it best to break this up a bit. There are a couple of ‘objects’ still to be discussed but this is mostly the commentary part of this letter to you. (Here’s a link for anyone who stumbled here but missed Part 1.)

Ethics, the natural world, social justice, eeek, and AI

Dorothy Woodend, in her March 10, 2022 review for The Tyee, suggests some ethical issues in her critique of the ‘bee/AI collaboration’ and she’s not the only one with concerns. UNESCO (United Nations Educational, Scientific and Cultural Organization) has produced global recommendations for ethical AI (see my March 18, 2022 posting). More recently, there’s “Racist and sexist robots have flawed AI,” a June 23, 2022 posting, where researchers prepared a conference presentation and paper about deeply flawed AI still being used in robots.

Ultimately, the focus is always on humans and Woodend has extended the ethical AI conversation to include insects and the natural world. In short, something less human-centric.

My friend, this reference to the de Young exhibit may seem off topic but I promise it isn’t, in more ways than one. The de Young Museum in San Francisco also held an AI and art show, “Uncanny Valley: Being Human in the Age of AI” (February 22, 2020 – June 27, 2021). From the exhibitions page,

In today’s AI-driven world, increasingly organized and shaped by algorithms that track, collect, and evaluate our data, the question of what it means to be human [emphasis mine] has shifted. Uncanny Valley is the first major exhibition to unpack this question through a lens of contemporary art and propose new ways of thinking about intelligence, nature, and artifice. [emphasis mine]

Courtesy: de Young Museum [downloaded from https://deyoung.famsf.org/exhibitions/uncanny-valley]

As you can see, it hinted (perhaps?) at an attempt to see beyond human-centric AI. (BTW, I featured this ‘Uncanny Valley’ show in my February 25, 2020 posting where I mentioned Stephanie Dinkins [featured below] and other artists.)

Social justice

While the VAG show doesn’t see much past humans and AI, it does touch on social justice. In particular there’s Pod 15 featuring the Algorithmic Justice League (AJL). The group “combine[s] art and research to illuminate the social implications and harms of AI” as per their website’s homepage.

In Pod 9, Stephanie Dinkins’ video work with a robot (Bina48), which was also part of the de Young Museum ‘Uncanny Valley’ show, addresses some of the same issues.

Still of Stephanie Dinkins, “Conversations with Bina48,” 2014–present. Courtesy of the artist [downloaded from https://deyoung.famsf.org/stephanie-dinkins-conversations-bina48-0]

From the de Young Museum’s Stephanie Dinkins “Conversations with Bina48” April 23, 2020 article by Janna Keegan (Dinkins submitted the same work you see at the VAG show), Note: Links have been removed,

Transdisciplinary artist and educator Stephanie Dinkins is concerned with fostering AI literacy. The central thesis of her social practice is that AI, the internet, and other data-based technologies disproportionately impact people of color, LGBTQ+ people, women, and disabled and economically disadvantaged communities—groups rarely given a voice in tech’s creation. Dinkins strives to forge a more equitable techno-future by generating AI that includes the voices of multiple constituencies …

The artist’s ongoing Conversations with Bina48 takes the form of a series of interactions with the social robot Bina48 (Breakthrough Intelligence via Neural Architecture, 48 exaflops per second). The machine is the brainchild of Martine Rothblatt, an entrepreneur in the field of biopharmaceuticals who, with her wife, Bina, cofounded the Terasem Movement, an organization that seeks to extend human life through cybernetic means. In 2007 Martine commissioned Hanson Robotics to create a robot whose appearance and consciousness simulate Bina’s. The robot was released in 2010, and Dinkins began her work with it in 2014.

Part psychoanalytical discourse, part Turing test, Conversations with Bina48 also participates in a larger dialogue regarding bias and representation in technology. Although Bina Rothblatt is a Black woman, Bina48 was not programmed with an understanding of its Black female identity or with knowledge of Black history. Dinkins’s work situates this omission amid the larger tech industry’s lack of diversity, drawing attention to the problems that arise when a roughly homogenous population creates technologies deployed globally. When this occurs, writes art critic Tess Thackara, “the unconscious biases of white developers proliferate on the internet, mapping our social structures and behaviors onto code and repeating imbalances and injustices that exist in the real world.” One of the most appalling and public of these instances occurred when a Google Photos image-recognition algorithm mislabeled the faces of Black people as “gorillas.”

Eeek

As you go through the ‘imitation game’, you will find a pod with a screen showing your movements through the rooms in real time. The installation is called “Creepers” (2021-22). The student team from Vancouver’s Centre for Digital Media (CDM) describes their project this way, from the CDM’s AI-driven Installation Piece for the Vancouver Art Gallery webpage,

Project Description

Kaleidoscope [team name] is designing an installation piece that harnesses AI to collect and visualize exhibit visitor behaviours, and interactions with art, in an impactful and thought-provoking way.

There’s no warning that you’re being tracked and you can see they’ve used facial recognition software to track your movements through the show. It’s claimed on the pod’s signage that they are deleting the data once you’ve left.

‘Creepers’ is an interesting approach to the ethics of AI. The name suggests that even the student designers were aware it was problematic.

For the curious, there’s a description of the other VAG ‘imitation game’ installations provided by CDM students on the ‘Master of Digital Media Students Develop Revolutionary Installations for Vancouver Art Gallery AI Exhibition‘ webpage.

In recovery from an existential crisis (meditations)

There’s something greatly ambitious about “The Imitation Game: Visual Culture in the Age of Artificial Intelligence” and walking up the VAG’s grand staircase affirms that ambition. Bravo to the two curators, Grenville and Entis, for an exhibition that presents a survey (or overview) of artificial intelligence, and its use in and impact on creative visual culture.

I’ve already enthused over the history (specifically Turing, Lovelace, Ovid), admitted to being mesmerized by Scott Eaton’s sculpture/AI videos, and confessed to a fascination (and mild repulsion) regarding Oxman’s honeycombs.

It’s hard to remember all of the ‘objects’ as the curators have offered a jumble of work, almost all of them on screens. Already noted, there’s Norbert Wiener’s The Moth (1949) and there are also a number of other computer-based artworks from the 1960s and 1970s. Plus, you’ll find works utilizing a GAN (generative adversarial network), an AI agent that is explained in the exhibit.

It’s worth going more than once to the show as there is so much to experience.

Why did they do that?

Dear friend, I’ve already commented on the poor flow through the show. It’s hard to tell if the curators intended the experience to be disorienting, but it verges on chaos, especially when the exhibition is crowded.

I’ve seen Grenville’s shows before. In particular there was “MashUp: The Birth of Modern Culture, a massive survey documenting the emergence of a mode of creativity that materialized in the late 1800s and has grown to become the dominant model of cultural production in the 21st century” and there was “KRAZY! The Delirious World of Anime + Manga + Video Games + Art.” As you can see from the description, he pulls together disparate works and ideas into a show for you to ‘make sense’ of them.

One of the differences between those shows and “The Imitation Game: …” is that most of us have some familiarity, whether we like it or not, with modern art/culture and anime/manga/etc. and can try to ‘make sense’ of it.

By contrast, artificial intelligence (which even experts have difficulty defining) occupies an entirely different set of categories; all of them associated with science/technology. This makes for a different kind of show so the curators cannot rely on the audience’s understanding of basics. It’s effectively an art/sci or art/tech show and, I believe, the first of its kind at the Vancouver Art Gallery. Unfortunately, the curators don’t seem to have changed their approach to accommodate that difference.

AI is also at the centre of a current panic over job loss, loss of personal agency, automated racism and sexism, etc. which makes the experience of viewing the show a little tense. In this context, their decision to commission and use ‘Creepers’ seems odd.

Where were Ai-Da and Dall-E-2 and the others?

Oh friend, I was hoping for a robot. Those roomba paintbots didn’t do much for me. All they did was lie there on the floor.

To be blunt, I wanted some fun and perhaps a bit of wonder and maybe a little vitality. I wasn’t necessarily expecting Ai-Da, an artistic robot, but something three-dimensional and fun in this very flat, screen-oriented show would have been nice.

Ai-Da was at the Glastonbury Festival in the UK from June 23-26, 2022. Here’s Ai-Da and her Billie Eilish (one of the Glastonbury 2022 headliners) portrait. [downloaded from https://www.ai-darobot.com/exhibition]

Ai-Da was first featured here in a December 17, 2021 posting about performing poetry that she had written in honour of the 700th anniversary of poet Dante Alighieri’s death.

Named in honour of Ada Lovelace, Ai-Da visited the 2022 Venice Biennale as Leah Henrickson and Simone Natale describe in their May 12, 2022 article for Fast Company (Note: Links have been removed),

Ai-Da sits behind a desk, paintbrush in hand. She looks up at the person posing for her, and then back down as she dabs another blob of paint onto the canvas. A lifelike portrait is taking shape. If you didn’t know a robot produced it, this portrait could pass as the work of a human artist.

Ai-Da is touted as the “first robot to paint like an artist,” and an exhibition of her work, called Leaping into the Metaverse, opened at the Venice Biennale.

Ai-Da produces portraits of sitting subjects using a robotic hand attached to her lifelike feminine figure. She’s also able to talk, giving detailed answers to questions about her artistic process and attitudes toward technology. She even gave a TEDx talk about “The Intersection of Art and AI” in Oxford a few years ago. While the words she speaks are programmed, Ai-Da’s creators have also been experimenting with having her write and perform her own poetry.

She has her own website.

If not Ai-Da, what about Dall-E-2? Aaron Hertzmann’s June 20, 2022 commentary, “Give this AI a few words of description and it produces a stunning image – but is it art?” investigates for Salon (Note: Links have been removed),

DALL-E 2 is a new neural network [AI] algorithm that creates a picture from a short phrase or sentence that you provide. The program, which was announced by the artificial intelligence research laboratory OpenAI in April 2022, hasn’t been released to the public. But a small and growing number of people – myself included – have been given access to experiment with it.

As a researcher studying the nexus of technology and art, I was keen to see how well the program worked. After hours of experimentation, it’s clear that DALL-E – while not without shortcomings – is leaps and bounds ahead of existing image generation technology. It raises immediate questions about how these technologies will change how art is made and consumed. It also raises questions about what it means to be creative when DALL-E 2 seems to automate so much of the creative process itself.

A July 4, 2022 article “DALL-E, Make Me Another Picasso, Please” by Laura Lane for The New Yorker has a rebuttal to Ada Lovelace’s contention that creativity is uniquely human (Note: A link has been removed),

“There was this belief that creativity is this deeply special, only-human thing,” Sam Altman, OpenAI’s C.E.O., explained the other day. Maybe not so true anymore, he said. Altman, who wore a gray sweater and had tousled brown hair, was videoconferencing from the company’s headquarters, in San Francisco. DALL-E is still in a testing phase. So far, OpenAI has granted access to a select group of people—researchers, artists, developers—who have used it to produce a wide array of images: photorealistic animals, bizarre mashups, punny collages. Asked by a user to generate “a plate of various alien fruits from another planet photograph,” DALL-E returned something kind of like rambutans. “The rest of mona lisa” is, according to DALL-E, mostly just one big cliff. Altman described DALL-E as “an extension of your own creativity.”

There are other AI artists. In my August 16, 2019 posting, I had this,

AI artists first hit my radar in August 2018 when Christie’s Auction House advertised an art auction of a ‘painting’ by an algorithm (artificial intelligence). There’s more in my August 31, 2018 posting but, briefly, a French art collective, Obvious, submitted a painting, “Portrait of Edmond de Belamy,” that was created by an artificial intelligence agent and was expected to sell for an estimated $7,000 – $10,000. They weren’t even close. According to Ian Bogost’s March 6, 2019 article for The Atlantic, the painting sold for $432,500 in October 2018.

That posting also included AI artist, AICAN. Both artist-AI agents (Obvious and AICAN) are based on GANs (generative adversarial networks) for learning and eventual output. Both artist-AI agents work independently or with human collaborators on art works that are available for purchase.

As might be expected not everyone is excited about AI and visual art. Sonja Drimmer, Professor of Medieval Art, University of Massachusetts at Amherst, provides another perspective on AI, visual art, and, her specialty, art history in her November 1, 2021 essay for The Conversation (Note: Links have been removed),

Over the past year alone, I’ve come across articles highlighting how artificial intelligence recovered a “secret” painting of a “lost lover” of Italian painter Modigliani, “brought to life” a “hidden Picasso nude”, “resurrected” Austrian painter Gustav Klimt’s destroyed works and “restored” portions of Rembrandt’s 1642 painting “The Night Watch.” The list goes on.

As an art historian, I’ve become increasingly concerned about the coverage and circulation of these projects.

They have not, in actuality, revealed one secret or solved a single mystery.

What they have done is generate feel-good stories about AI.

Take the reports about the Modigliani and Picasso paintings.

These were projects executed by the same company, Oxia Palus, which was founded not by art historians but by doctoral students in machine learning.

In both cases, Oxia Palus relied upon traditional X-rays, X-ray fluorescence and infrared imaging that had already been carried out and published years prior – work that had revealed preliminary paintings beneath the visible layer on the artists’ canvases.

The company edited these X-rays and reconstituted them as new works of art by applying a technique called “neural style transfer.” This is a sophisticated-sounding term for a program that breaks works of art down into extremely small units, extrapolates a style from them and then promises to recreate images of other content in that same style.
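
For readers curious what ‘neural style transfer’ looks like in code, here is a minimal Python sketch of the general technique (the Gram-matrix style loss popularized by Gatys and colleagues), assuming PyTorch and torchvision. Oxia Palus hasn’t published its pipeline, so this illustrates only the named technique, not the company’s actual method,

import torch.nn.functional as F
import torchvision.models as models

# Pretrained VGG19 feature extractor (a standard choice for style transfer).
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()

def features(img, layers=(1, 6, 11, 20)):
    """Collect activations from a few early-to-middle layers of VGG19."""
    feats, x = [], img
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layers:
            feats.append(x)
        if i >= max(layers):
            break
    return feats

def gram(f):
    """Gram matrix: correlations between feature channels, a proxy for 'style'."""
    b, c, h, w = f.shape
    f = f.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_loss(generated, style_image):
    """How far the generated image's channel correlations are from the style image's."""
    return sum(F.mse_loss(gram(g), gram(s))
               for g, s in zip(features(generated), features(style_image)))

# In a full pipeline this loss (plus a content loss) is minimized by gradient
# descent on the pixels of the generated image; that optimization loop is omitted here.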

As you can ‘see’ my friend, the topic of AI and visual art is a juicy one. In fact, I have another example in my June 27, 2022 posting, which is titled, “Art appraised by algorithm.” So, Grenville’s and Entis’ decision to focus on AI and its impact on visual culture is quite timely.

Visual culture: seeing into the future

The VAG Imitation Game webpage lists these categories of visual culture “animation, architecture, art, fashion, graphic design, urban design and video games …” as being represented in the show. Movies and visual art, not mentioned in the write-up, are represented, while theatre and the other performing arts are neither mentioned nor represented. That’s not a surprise.

In addition to an area of science/technology that’s not well understood even by experts, the curators took on the truly amorphous (and overwhelming) topic of visual culture. Given that even writing this commentary has been a challenge, I imagine pulling the show together was quite the task.

Grenville often grounds his shows in a history of the subject and, this time, it seems especially striking. You’re in a building that is effectively a 19th century construct and in galleries that reflect a 20th century ‘white cube’ aesthetic, while looking for clues into the 21st century future of visual culture employing technology that has its roots in the 19th century and, to some extent, began to flower in the mid-20th century.

Chung’s collaboration is one of the only ‘optimistic’ notes about the future and, as noted earlier, it bears a resemblance to Wiener’s 1949 ‘Moth’.

Overall, it seems we are being cautioned about the future. For example, Oxman’s work seems bleak (bees with no flowers to pollinate and living in an eternal spring). Adding in ‘Creepers’ and surveillance along with issues of bias and social injustice reflects hesitation and concern about what we will see, who sees it, and how it will be represented visually.

Learning about robots, automatons, artificial intelligence, and more

I wish the Vancouver Art Gallery (and Vancouver’s other art galleries) would invest a little more in audience education. A couple of tours during the week, by someone who may or may not know what they’re talking about, do not suffice. The extra material about Stephanie Dinkins and her work (“Conversations with Bina48,” 2014–present) came from the de Young Museum’s website. In my July 26, 2021 commentary on North Vancouver’s Polygon Gallery 2021 show “Interior Infinite,” I found background information for artist Zanele Muholi on the Tate Modern’s website. There is nothing on the VAG website that helps you to gain some perspective on the artists’ works.

It seems to me that if the VAG wants to be considered world class, it should conduct itself accordingly and beefing up its website with background information about their current shows would be a good place to start.

Robots, automata, and artificial intelligence

Prior to 1921, robots were known exclusively as automatons. These days, the word ‘automaton’ (or ‘automata’ in the plural) seems to be used to describe purely mechanical representations of humans from over 100 years ago whereas the word ‘robot’ can be either ‘humanlike’ or purely machine, e.g. a mechanical arm that performs the same function over and over. I have a good February 24, 2017 essay on automatons by Miguel Barral for OpenMind BBVA*, which provides some insight into the matter,

The concept of robot is relatively recent. The idea was introduced in 1921 by the Czech writer Karel Capek in his work R.U.R to designate a machine that performs tasks in place of man. But their predecessors, the automatons (from the Greek automata, or “mechanical device that works by itself”), have been the object of desire and fascination since antiquity. Some of the greatest inventors in history, such as Leonardo Da Vinci, have contributed to our fascination with these fabulous creations:

The Al-Jazari automatons

The earliest examples of known automatons appeared in the Islamic world in the 12th and 13th centuries. In 1206, the Arab polymath Al-Jazari, whose creations were known for their sophistication, described some of his most notable automatons: an automatic wine dispenser, a soap and towels dispenser and an orchestra-automaton that operated by the force of water. This latter invention was meant to liven up parties and banquets with music while floating on a pond, lake or fountain.

As the water flowed, it started a rotating drum with pegs that, in turn, moved levers whose movement produced different sounds and movements. As the pegs responsible for the musical notes could be exchanged for different ones in order to interpret another melody, it is considered one of the first programmable machines in history.

If you’re curious about automata, my friend, I found this Sept. 26, 2016 ABC News Radio news item about singer Roger Daltrey and his wife Heather’s auction of their collection of 19th century French automata (there’s an embedded video showcasing these extraordinary works of art). For more about automata, robots, and androids, there’s an excellent May 4, 2022 article by James Vincent, ‘A visit to the human factory; How to build the world’s most realistic robot‘ for The Verge; Vincent’s article is about Engineered Arts, the UK-based company that built Ai-Da.

AI is often used interchangeably with ‘robot’ but they aren’t the same. Not all robots have AI integrated into their processes. At its simplest, AI is an algorithm or set of algorithms, which may ‘live’ in a CPU and be effectively invisible, or ‘live’ in or make use of some kind of machine and/or humanlike body. As the experts have noted, artificial intelligence is a slippery concept.

*OpenMind is a non-profit project (see its About us page) run by Banco Bilbao Vizcaya Argentaria (BBVA), a Spanish multinational financial services company, to disseminate information on robotics and so much more.*

You can’t always get what you want

My friend,

I expect many of the show’s shortcomings (as perceived by me) are due to money and/or scheduling issues. For example, Ai-Da was at the Venice Biennale and if there was a choice between the VAG and Biennale, I know where I’d be.

Even with those caveats in mind, it is a bit surprising that there were no examples of wearable technology. For example, Toronto’s Tapestry Opera recently performed R.U.R. A Torrent of Light (based on Karel Čapek’s play, R.U.R., ‘Rossumovi Univerzální Roboti’, which gave us the word ‘robot’), from my May 24, 2022 posting,

I have more about tickets prices, dates, and location later in this post but first, here’s more about the opera and the people who’ve created it from the Tapestry Opera’s ‘R.U.R. A Torrent of Light’ performance webpage,

“This stunning new opera combines dance, beautiful multimedia design, a chamber orchestra including 100 instruments creating a unique electronica-classical sound, and wearable technology [emphasis mine] created with OCAD University’s Social Body Lab, to create an immersive and unforgettable science-fiction experience.”

And, from later in my posting,

“Despite current stereotypes, opera was historically a launchpad for all kinds of applied design technologies. [emphasis mine] Having the opportunity to collaborate with OCAD U faculty is an invigorating way to reconnect to that tradition and foster connections between art, music and design, [emphasis mine]” comments the production’s Director Michael Hidetoshi Mori, who is also Tapestry Opera’s Artistic Director. 

That last quote brings me back to my comment about theatre and performing arts not being part of the show. Of course, the curators couldn’t do it all but a website with my hoped-for background and additional information could have helped to solve the problem.

The absence of the theatrical and performing arts in the VAG’s ‘Imitation Game’ is a bit surprising as the Council of Canadian Academies (CCA) in their third assessment, “Competing in a Global Innovation Economy: The Current State of R&D in Canada” released in 2018 noted this (from my April 12, 2018 posting),

Canada, relative to the world, specializes in subjects generally referred to as the humanities and social sciences (plus health and the environment), and does not specialize as much as others in areas traditionally referred to as the physical sciences and engineering. Specifically, Canada has comparatively high levels of research output in Psychology and Cognitive Sciences, Public Health and Health Services, Philosophy and Theology, Earth and Environmental Sciences, and Visual and Performing Arts. [emphasis mine] It accounts for more than 5% of world research in these fields. Conversely, Canada has lower research output than expected in Chemistry, Physics and Astronomy, Enabling and Strategic Technologies, Engineering, and Mathematics and Statistics. The comparatively low research output in core areas of the natural sciences and engineering is concerning, and could impair the flexibility of Canada’s research base, preventing research institutions and researchers from being able to pivot to tomorrow’s emerging research areas. [p. xix Print; p. 21 PDF]

US-centric

My friend,

I was a little surprised that the show was so centered on work from the US given that Grenville has curated at least one show where there was significant input from artists based in Asia. Both Japan and Korea are very active with regard to artificial intelligence and it’s hard to believe that their artists haven’t kept pace. (I’m not as familiar with China and its AI efforts, other than in the field of facial recognition, but it’s hard to believe their artists aren’t experimenting.)

The Americans, of course, are very important developers in the field of AI but they are not alone and it would have been nice to have seen something from Asia and/or Africa and/or something from one of the other Americas. In fact, anything which takes us out of the same old, same old. (Luba Elliott wrote this (2019/2020/2021?) essay, “Artificial Intelligence Art from Africa and Black Communities Worldwide” on Aya Data if you want to get a sense of some of the activity on the African continent. Elliott does seem to conflate Africa and Black Communities; for some clarity you may want to check out the Wikipedia entry on Africanfuturism, which contrasts with this August 12, 2020 essay by Donald Maloba, “What is Afrofuturism? A Beginner’s Guide.” Maloba also conflates the two.)

As it turns out, Luba Elliott presented at the 2019 Montréal Digital Spring event, which brings me to Canada’s artificial intelligence and arts scene.

I promise I haven’t turned into a flag-waving zealot, my friend. It’s just odd there isn’t a bit more given that machine learning was pioneered at the University of Toronto. Here’s more about that (from the Wikipedia entry for Geoffrey Hinton),

Geoffrey Everest Hinton CC FRS FRSC[11] (born 6 December 1947) is a British-Canadian cognitive psychologist and computer scientist, most noted for his work on artificial neural networks.

Hinton received the 2018 Turing Award, together with Yoshua Bengio [Canadian scientist] and Yann LeCun, for their work on deep learning.[24] They are sometimes referred to as the “Godfathers of AI” and “Godfathers of Deep Learning“,[25][26] and have continued to give public talks together.[27][28]

Some of Hinton’s work was started in the US but since 1987, he has pursued his interests at the University of Toronto. He wasn’t proven right until 2012. Katrina Onstad’s February 29, 2018 article (Mr. Robot) for Toronto Life is a gripping read about Hinton and his work on neural networks. BTW, Yoshua Bengio (co-Godfather) is a Canadian scientist at the Université de Montréal and Yann LeCun (co-Godfather) is a French scientist at New York University.

Then there’s another contribution: our government was the first in the world to develop a national artificial intelligence strategy. Adding those developments to the CCA ‘State of Science’ report findings about visual arts and performing arts, is there another word besides ‘odd’ to describe the lack of Canadian voices?

You’re going to point out the installation by Ben Bogart (a member of Simon Fraser University’s Metacreation Lab for Creative AI and instructor at the Emily Carr University of Art + Design (ECU)) but it’s based on the iconic US scifi film, 2001: A Space Odyssey. As for the other Canadian, Sougwen Chung, she left Canada pretty quickly to get her undergraduate degree in the US and has since moved to the UK. (You could describe hers as the quintessential success story, i.e., moving from Canada only to get noticed here after success elsewhere.)

Of course, there are the CDM student projects but the projects seem less like an exploration of visual culture than an exploration of technology and industry requirements, from the ‘Master of Digital Media Students Develop Revolutionary Installations for Vancouver Art Gallery AI Exhibition‘ webpage, Note: A link has been removed,

In 2019, Bruce Grenville, Senior Curator at Vancouver Art Gallery, approached [the] Centre for Digital Media to collaborate on several industry projects for the forthcoming exhibition. Four student teams tackled the project briefs over the course of the next two years and produced award-winning installations that are on display until October 23 [2022].

Basically, my friend, it would have been nice to see other voices or, at the least, an attempt at representing other voices and visual cultures informed by AI. As for Canadian contributions, maybe put something on the VAG website?

Playing well with others

It’s always a mystery to me why the Vancouver cultural scene seems to consist of a set of silos or closely guarded kingdoms. Reaching out to the public library and other institutions such as Science World might have cost time but could have enhanced the show.

For example, one of the branches of the New York Public Library ran a programme called, “We are AI” in March 2022 (see my March 23, 2022 posting about the five-week course, which was run as a learning circle). The course materials are available for free (We are AI webpage) and I imagine that adding a ‘visual culture module’ wouldn’t be that difficult.

There is one (rare) example of some Vancouver cultural institutions getting together to offer an art/science programme: in 2017, the Morris and Helen Belkin Gallery (at the University of British Columbia; UBC) hosted an exhibition of Santiago Ramon y Cajal’s work (see my Sept. 11, 2017 posting about the gallery show). Alongside that show, the folks at Café Scientifique held an ancillary event at Science World featuring a panel of professionals from UBC’s Faculty of Medicine and Dept. of Psychology, who discussed Cajal’s work.

In fact, where were the science and technology communities for this show?

On a related note, the 2022 ACM SIGGRAPH conference (August 7 – 11, 2022) is being held in Vancouver. (ACM is the Association for Computing Machinery; SIGGRAPH is for Special Interest Group on Computer Graphics and Interactive Techniques.) SIGGRAPH has been holding conferences in Vancouver every few years since at least 2011.

At this year’s conference, they have at least two sessions that indicate interests similar to the VAG’s. First, there’s Immersive Visualization for Research, Science and Art which includes AI and machine learning along with other related topics. There’s also, Frontiers Talk: Art in the Age of AI: Can Computers Create Art?

This is both an international conference and an exhibition (of art) and the whole thing seems to have kicked off on July 25, 2022. If you’re interested, the programme can be found here and registration here.

The last time SIGGRAPH was here, the organizers seemed interested in outreach and offered some free events.

In the end

It was good to see the show. The curators brought together some exciting material. As is always the case, there were some missed opportunities and a few blind spots. But all is not lost.

On July 27, 2022, the VAG held a virtual event with an artist,

Gwenyth Chao to learn more about what happened to the honeybees and hives in Oxman’s Synthetic Apiary project. As a transdisciplinary artist herself, Chao will also discuss the relationship between art, science, technology and design. She will then guide participants to create a space (of any scale, from insect to human) inspired by patterns found in nature.

Hopefully there will be more events inspired by specific ‘objects’. Meanwhile, on August 12, 2022, the VAG is hosting,

… in partnership with the Canadian Music Centre BC, New Music at the Gallery is a live concert series hosted by the Vancouver Art Gallery that features an array of musicians and composers who draw on contemporary art themes.

Highlighting a selection of twentieth- and twenty-first-century music compositions, this second concert, inspired by the exhibition The Imitation Game: Visual Culture in the Age of Artificial Intelligence, will spotlight the Illiac Suite (1957), the first piece ever written using only a computer, and Kaija Saariaho’s Terra Memoria (2006), which is in a large part dependent on a computer-generated musical process.

It would be lovely if they could include an Ada Lovelace Day event. This is an international celebration held on October 11, 2022.

Do go. Do enjoy, my friend.

Sci-fi opera: R.U.R. A Torrent of Light opens May 28, 2022 in Toronto, Canada

Even though it’s a little late, I guess you could call the opera opening in Toronto on May 28, 2022 a 100th anniversary celebration of the word ‘robot’. Introduced in 1920 by Czech playwright Karel Čapek in his play, R.U.R., which stands for ‘Rossumovi Univerzální Roboti’ or, in English, ‘Rossum’s Universal Robots’, the word was first coined by Čapek’s brother, Josef (see more about the play and the word in the R.U.R. Wikipedia entry).

The opera, R.U.R. A Torrent of Light, is scheduled to open at 8 pm ET on Saturday, May 28, 2022 (after being rescheduled due to a COVID case in the cast) at OCAD University’s (formerly the Ontario College of Art and Design) The Great Hall.

I have more about tickets prices, dates, and location later in this post but first, here’s more about the opera and the people who’ve created it from the Tapestry Opera’s ‘R.U.R. A Torrent of Light’ performance webpage,

This stunning new opera combines dance, beautiful multimedia design, a chamber orchestra including 100 instruments creating a unique electronica-classical sound, and wearable technology [emphasis mine] created with OCAD University’s Social Body Lab, to create an immersive and unforgettable science-fiction experience.

As for the opera’s story,

The fictional tech company R.U.R., founded by couple Helena and Dom, dominates the A.I. software market and powers the now-ubiquitous androids that serve their human owners. 

As Dom becomes more focused on growing R.U.R’s profits, Helena’s creative research leads to an unexpected technological breakthrough that pits the couples’ visions squarely against each other. They’ve reached a turning point for humanity, but is humanity ready? 

Inspired by Karel Čapek’s 1920’s science-fiction play Rossum’s Universal Robots (which introduced the word “robot” to the English language), composer Nicole Lizée’s and writer Nicolas Billon’s R.U.R. A Torrent of Light grapples with one of our generation’s most fascinating questions. [emphasis mine]

So, what is the fascinating question? The answer is here in a March 7, 2022 OCAD news release,

Last Wednesday [March 2, 2022], OCAD U’s Great Hall at 100 McCaul St. was filled with all manner of sound making objects. Drum kits, gongs, chimes, typewriters and most exceptionally, a cello bow that produces bird sounds when glided across any surface were being played while musicians, dancers and opera singers moved among them.  

All were abuzz preparing for Tapestry Opera’s new production, R.U.R. A Torrent of Light, which will be presented this spring in collaboration with OCAD University. 

An immersive, site-specific experience, the new chamber opera explores humanity’s relationship to technology. [emphasis mine] Inspired by Karel Čapek’s 1920s science-fiction play Rossum’s Universal Robots, this latest version is set 20 years in the future when artificial intelligence (AI) has become fully sewn into our everyday lives and is set in the offices of a fictional tech company.

Čapek’s original script brought the word robot into the English language and begins in a factory that manufactures artificial people. Eventually these entities revolt and render humanity extinct.  

The innovative adaptation will be a unique addition to Tapestry Opera’s more than 40-year history of producing operatic stage performances. It is the only company in the country dedicated solely to the creation and performance of original Canadian opera. 

The March 7, 2022 OCAD news release goes on to describe the Social Body Lab’s involvement,

OCAD U’s Social Body Lab, whose mandate is to question the relationship between humans and technology, is helping to bring Tapestry’s vision of the not-so-distant future to the stage. Director of the Lab and Associate Professor in the Faculty of Arts & Science, Kate Hartman, along with Digital Futures Associate Professors Nick Puckett and Dr. Adam Tindale have developed wearable technology prototypes that will be integrated into the performers’ costumes. They have collaborated closely with the opera’s creative team to embrace the possibilities innovative technologies can bring to live performance. 

“This collaboration with Tapestry Opera has been incredibly unique and productive. Working in dialogue with their designers has enabled us to translate their ideas into cutting edge technological objects that we would have never arrived at individually,” notes Professor Puckett. 

The uncanny bow that was being tested last week is one of the futuristic devices that will be featured in the performance and is the invention of Dr. Tindale, who is himself a classically trained musician. He has also developed a set of wearable speakers for R.U.R. A Torrent of Light that when donned by the dancers will allow sound to travel across the stage in step with their choreography. 

Hartman and Puckett, along with the production’s costume, light and sound designers, have developed an LED-based prototype that will be worn around the necks of the actors who play robots and will be activated using WIFI. These collar pieces will function as visual indicators to the audience of various plot points, including the moments when the robots receive software updates.  

“Despite current stereotypes, opera was historically a launchpad for all kinds of applied design technologies. [emphasis mine] Having the opportunity to collaborate with OCAD U faculty is an invigorating way to reconnect to that tradition and foster connections between art, music and design,” comments the production’s Director Michael Hidetoshi Mori, who is also Tapestry Opera’s Artistic Director. 

“New music and theatre are perfect canvases for iterative experimentation. We look forward to the unique fruits of this collaboration and future ones,” he continues. 

Unfortunately, I cannot find a preview but there is this video highlighting the technology being used in the opera (there are three other videos highlighting the choreography, the music, and the story, respectively, if you scroll about 40% down this page),


As I promised, here are the logistics,

University address:

OCAD University
100 McCaul Street,
Toronto, Ontario, Canada, M5T 1W1

Performance venue:

The Great Hall at OCAD University
Level 2, beside the Anniversary Gallery

Ticket prices:

The following seating sections are available for this performance. Tickets are from $10 to $100. All tickets are subject to a $5 transaction fee.

Orchestra Centre
Orchestra Sides
Orchestra Rear
Balcony (standing room)

Performances:

May 28 at 8:00 pm

May 29 at 4:00 pm

June 01 at 8:00 pm

June 02 at 8:00 pm

June 03 at 8:00 pm

June 04 at 8:00 pm

June 05 at 4:00 pm

Tapestry Opera’s ‘R.U.R. A Torrent of Light’ performance webpage offers a link to buy tickets but it lands on a page that doesn’t seem to be functioning properly. I have contacted (as of Tuesday, May 24, 2022 at about 10:30 am PT) the Tapestry Opera folks to let them know about the problem. Hopefully soon, I will be able to update this page when they’ve handled the issue.

ETA May 30, 2022: You can buy tickets here. There are tickets available for only two of the performances left, Thursday, June 2, 2022 at 8 pm and Sunday, June 5, 2022 at 4 pm.

Printing wearable circuits onto skin

It seems that this new technique for creating wearable electronics will be more like getting a permanent tattoo, where the circuits are applied directly to your skin, as opposed to a temporary tattoo, where the circuits are printed onto a substrate that is then applied to, and worn on, your skin.

Caption: On-body sensors, such as electrodes and temperature sensors, were directly printed and sintered on the skin surface. Credit: Adapted from ACS Applied Materials & Interfaces 2020, DOI: 10.1021/acsami.0c11479

An Oct. 14, 2020 American Chemical Society (ACS) news release (also on EurekAlert) announced this latest development in wearable electronics,

Wearable electronics are getting smaller, more comfortable and increasingly capable of interfacing with the human body. To achieve a truly seamless integration, electronics could someday be printed directly on people’s skin. As a step toward this goal, researchers reporting in ACS Applied Materials & Interfaces have safely placed wearable circuits directly onto the surface of human skin to monitor health indicators, such as temperature, blood oxygen, heart rate and blood pressure.

The latest generation of wearable electronics for health monitoring combines soft on-body sensors with flexible printed circuit boards (FPCBs) for signal readout and wireless transmission to health care workers. However, before the sensor is attached to the body, it must be printed or lithographed onto a carrier material, which can involve sophisticated fabrication approaches. To simplify the process and improve the performance of the devices, Peng He, Weiwei Zhao, Huanyu Cheng and colleagues wanted to develop a room-temperature method to sinter metal nanoparticles onto paper or fabric for FPCBs and directly onto human skin for on-body sensors. Sintering — the process of fusing metal or other particles together — usually requires heat, which wouldn’t be suitable for attaching circuits directly to skin.

The researchers designed an electronic health monitoring system that consisted of sensor circuits printed directly on the back of a human hand, as well as a paper-based FPCB attached to the inside of a shirt sleeve. To make the FPCB part of the system, the researchers coated a piece of paper with a novel sintering aid and used an inkjet printer with silver nanoparticle ink to print circuits onto the coating. As solvent evaporated from the ink, the silver nanoparticles sintered at room temperature to form circuits. A commercially available chip was added to wirelessly transmit the data, and the resulting FPCB was attached to a volunteer’s sleeve. The team used the same process to sinter circuits on the volunteer’s hand, except printing was done with a polymer stamp. As a proof of concept, the researchers made a full electronic health monitoring system that sensed temperature, humidity, blood oxygen, heart rate, blood pressure and electrophysiological signals and analyzed its performance. The signals obtained by these sensors were comparable to or better than those measured by conventional commercial devices. 

Here’s a link to and a citation for the paper,

Wearable Circuits Sintered at Room Temperature Directly on the Skin Surface for Health Monitoring by Ling Zhang, Hongjun Ji, Houbing Huang, Ning Yi, Xiaoming Shi, Senpei Xie, Yaoyin Li, Ziheng Ye, Pengdong Feng, Tiesong Lin, Xiangli Liu, Xuesong Leng, Mingyu Li, Jiaheng Zhang, Xing Ma, Peng He, Weiwei Zhao, and Huanyu Cheng. ACS Appl. Mater. Interfaces 2020, 12, 40, 45504–45515 Publication Date: September 11, 2020 DOI: https://doi.org/10.1021/acsami.0c11479 Copyright © 2020 American Chemical Society

This paper is behind a paywall.

Humans can distinguish molecular differences by touch

Yesterday, in my December 18, 2017 post about medieval textiles, I posed the question, “How did medieval artisans create nanoscale and microscale gilding when they couldn’t see it?” I realized afterwards that an answer to that question might be in this December 13, 2017 news item on ScienceDaily,

How sensitive is the human sense of touch? Sensitive enough to feel the difference between surfaces that differ by just a single layer of molecules, a team of researchers at the University of California San Diego has shown.

“This is the greatest tactile sensitivity that has ever been shown in humans,” said Darren Lipomi, a professor of nanoengineering and member of the Center for Wearable Sensors at the UC San Diego Jacobs School of Engineering, who led the interdisciplinary project with V. S. Ramachandran, director of the Center for Brain and Cognition and distinguished professor in the Department of Psychology at UC San Diego.

So perhaps those medieval artisans were able to feel the difference before it could be seen in the textiles they were producing?

Getting back to the matter at hand, a December 13, 2017 University of California at San Diego (UCSD) news release (also on EurekAlert) by Liezel Labios offers more detail about the work,

Humans can easily feel the difference between many everyday surfaces such as glass, metal, wood and plastic. That’s because these surfaces have different textures or draw heat away from the finger at different rates. But UC San Diego researchers wondered, if they kept all these large-scale effects equal and changed only the topmost layer of molecules, could humans still detect the difference using their sense of touch? And if so, how?

Researchers say this fundamental knowledge will be useful for developing electronic skin, prosthetics that can feel, advanced haptic technology for virtual and augmented reality and more.

Unsophisticated haptic technologies exist in the form of rumble packs in video game controllers or smartphones that shake, Lipomi added. “But reproducing realistic tactile sensations is difficult because we don’t yet fully understand the basic ways in which materials interact with the sense of touch.”

“Today’s technologies allow us to see and hear what’s happening, but we can’t feel it,” said Cody Carpenter, a nanoengineering Ph.D. student at UC San Diego and co-first author of the study. “We have state-of-the-art speakers, phones and high-resolution screens that are visually and aurally engaging, but what’s missing is the sense of touch. Adding that ingredient is a driving force behind this work.”

This study is the first to combine materials science and psychophysics to understand how humans perceive touch. “Receptors processing sensations from our skin are phylogenetically the most ancient, but far from being primitive they have had time to evolve extraordinarily subtle strategies for discerning surfaces—whether a lover’s caress or a tickle or the raw tactile feel of metal, wood, paper, etc. This study is one of the first to demonstrate the range of sophistication and exquisite sensitivity of tactile sensations. It paves the way, perhaps, for a whole new approach to tactile psychophysics,” Ramachandran said.

Super-Sensitive Touch

In a paper published in Materials Horizons, UC San Diego researchers tested whether human subjects could distinguish—by dragging or tapping a finger across the surface—between smooth silicon wafers that differed only in their single topmost layer of molecules. One surface was a single oxidized layer made mostly of oxygen atoms. The other was a single Teflon-like layer made of fluorine and carbon atoms. Both surfaces looked identical and felt similar enough that some subjects could not differentiate between them at all.

According to the researchers, human subjects can feel these differences because of a phenomenon known as stick-slip friction, which is the jerking motion that occurs when two objects at rest start to slide against each other. This phenomenon is responsible for the musical notes played by running a wet finger along the rim of a wine glass, the sound of a squeaky door hinge or the noise of a stopping train. In this case, each surface has a different stick-slip frequency due to the identity of the molecules in the topmost layer.
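For readers who like to tinker, here is a minimal Python sketch of the classic spring-slider picture of stick-slip friction. To be clear, this is my own toy model, not the researchers’ apparatus, and every parameter value in it is invented; it simply shows that dragging a block at the same speed over two surfaces with different friction coefficients produces different stick-slip patterns.

```python
def stick_slip_events(mu_static, mu_kinetic, k=100.0, m=0.01, v_drive=0.01,
                      g=9.81, dt=1e-5, t_end=1.0):
    """Count slip events for a block dragged across a surface by a spring.

    Toy spring-slider model (all values hypothetical): the far end of the
    spring advances at constant speed; the block sticks until the spring
    force beats static friction, then slides against kinetic friction
    until it comes to rest and re-sticks.
    """
    x, v = 0.0, 0.0          # block position and velocity
    stuck, slips = True, 0
    steps = int(t_end / dt)
    for i in range(1, steps + 1):
        x_drive = v_drive * dt * i                  # driven end of the spring
        f_spring = k * (x_drive - x)
        if stuck:
            if abs(f_spring) > mu_static * m * g:   # static friction overcome
                stuck = False
                slips += 1
            else:
                continue                            # block stays put
        direction = 1.0 if (v > 0 or (v == 0 and f_spring > 0)) else -1.0
        f_friction = -direction * mu_kinetic * m * g
        v += (f_spring + f_friction) / m * dt
        x += v * dt
        if abs(v) < 1e-4 and abs(f_spring) <= mu_static * m * g:
            v, stuck = 0.0, True                    # block re-sticks
    return slips

# Two made-up surfaces: same drag speed, different friction coefficients,
# different numbers of stick-slip cycles per second of dragging.
print(stick_slip_events(mu_static=0.60, mu_kinetic=0.40))
print(stick_slip_events(mu_static=0.50, mu_kinetic=0.45))
```

The two calls return different slip counts, which is the sort of frequency difference the fingertip apparently picks up, although the real mechanics of skin on a silicon wafer are of course far more complicated than a single block on a spring.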

In one test, 15 subjects were tasked with feeling three surfaces and identifying the one surface that differed from the other two. Subjects correctly identified the differences 71 percent of the time.

In another test, subjects were given three different strips of silicon wafer, each strip containing a different sequence of 8 patches of oxidized and Teflon-like surfaces. Each sequence represented an 8-digit string of 0s and 1s, which encoded for a particular letter in the ASCII alphabet. Subjects were asked to “read” these sequences by dragging a finger from one end of the strip to the other and noting which patches in the sequence were the oxidized surfaces and which were the Teflon-like surfaces. In this experiment, 10 out of 11 subjects decoded the bits needed to spell the word “Lab” (with the correct upper and lowercase letters) more than 50 percent of the time. Subjects spent an average of 4.5 minutes to decode each letter.
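To make the encoding concrete, here is a short Python sketch (my own illustration, not the researchers’ code) of how ‘L’, ‘a’ and ‘b’ become the 8-bit patterns the subjects were asked to read. Which surface stood for 0 and which for 1 is my assumption; the paper may assign them the other way around.

```python
SURFACE = {"0": "oxidized", "1": "Teflon-like"}   # assumed bit-to-surface mapping

def to_patches(word):
    """Print the 8-bit ASCII pattern and patch sequence for each letter."""
    for ch in word:
        bits = format(ord(ch), "08b")             # e.g. 'L' -> '01001100'
        patches = ", ".join(SURFACE[b] for b in bits)
        print(f"{ch!r}: {bits} -> {patches}")

to_patches("Lab")   # 'L': 01001100, 'a': 01100001, 'b': 01100010
```

Each letter therefore needs its own strip of eight patches, which fits with subjects spending several minutes per letter.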

“A human may be slower than a nanobit per second in terms of reading digital information, but this experiment shows a potentially neat way to do chemical communications using our sense of touch instead of sight,” Lipomi said.

Basic Model of Touch

The researchers also found that these surfaces can be differentiated depending on how fast the finger drags and how much force it applies across the surface. The researchers modeled the touch experiments using a “mock finger,” a finger-like device made of an organic polymer that’s connected by a spring to a force sensor. The mock finger was dragged across the different surfaces using multiple combinations of force and swiping velocity. The researchers plotted the data and found that the surfaces could be distinguished given certain combinations of velocity and force. Meanwhile, other combinations made the surfaces indistinguishable from each other.

“Our results reveal a remarkable human ability to quickly home in on the right combinations of forces and swiping velocities required to feel the difference between these surfaces. They don’t need to reconstruct an entire matrix of data points one by one as we did in our experiments,” Lipomi said.

“It’s also interesting that the mock finger device, which doesn’t have anything resembling the hundreds of nerves in our skin, has just one force sensor and is still able to get the information needed to feel the difference in these surfaces. This tells us it’s not just the mechanoreceptors in the skin, but receptors in the ligaments, knuckles, wrist, elbow and shoulder that could be enabling humans to sense minute differences using touch,” he added.

This work was supported by member companies of the Center for Wearable Sensors at UC San Diego: Samsung, Dexcom, Sabic, Cubic, Qualcomm and Honda.

For those who prefer their news by video,

Here’s a link to and a citation for the paper,

Human ability to discriminate surface chemistry by touch by Cody W. Carpenter, Charles Dhong, Nicholas B. Root, Daniel Rodriquez, Emily E. Abdo, Kyle Skelil, Mohammad A. Alkhadra, Julian Ramírez, Vilayanur S. Ramachandran and Darren J. Lipomi. Mater. Horiz., 2018, Advance Article DOI: 10.1039/C7MH00800G

This paper is open access but you do need to have opened a free account on the website.

A watch that conducts sound through your body and into your ear

Apparently, all you have to do is tap your ear to access your telephone calls. A Jan. 8, 2016 article by Mark Wilson for Fast Company describes the technology and the experience of using Samsung’s TipTalk device,

It’s not so helpful to see a call on your smartwatch when you have to pull out your phone to take it anyway. And therein lies the problem with products like the Apple Watch: They’re often not a replacement for your phone, but an intermediary to inevitably using it.

But at this year’s Consumer Electronics Show [held in Las Vegas (Nevada, US) annually (Jan. 6 – 9, 2016)], Samsung’s secret R&D lab … showed off a promising concept to fix one of the biggest problems with smartwatches. Called TipTalk, it’s technology that can send sound from your smartwatch through your arm so when you touch your finger to your ear, you can hear a call or a voicemail—no headphones required.

Engineering breakthroughs like these can be easy to dismiss as a gimmick rather than revolutionary UX [user experience], but I like TipTalk for a few reasons. First, it maps hearing UI [user interface] into a gesture that we already might use to hear something better … . Second, it could be practical in real world use. You see a new voicemail on your watch, and without even a button press, you listen—but crucially, you still opt-in to hear the message rather than just have it play. And third, the gesture conveys to people around you that you’re occupied.

Ulrich Rozier, in his Jan. 8, 2016 article for frandroid.com, also raves, albeit in French,

Samsung a développé un bracelet que l’on peut utiliser sur n’importe quelle montre.

Ce bracelet vibre lorsque l’on reçoit un appel… il est ainsi possible de décrocher. Il faut ensuite positionner son doigt au niveau du pavillon de l’oreille. C’est là que la magie opère. On se retrouve à entendre des sons. Contrairement à ce que je pensais, le son ne se transmet pas par conduction osseuse, mais grâce à des vibrations envoyées à partir de votre poignet à travers votre corps. Vous pouvez l’utiliser pour prendre des appels ou pour lire vos SMS et autres messages. Et ça fonctionne.

Here’s my very rough translation,

Samsung has developed a bracelet that can be worn with any watch.

It’s the ‘bracelet’ that vibrates when you get a phone call. If you want to answer the call, reach up and tap your ear. That’s when the magic happens and sound is transmitted to your ear, not through your bones as I thought, but via vibrations sent from your wrist through your body. This way you can answer your calls or read SMS and other messages [?]. It works.

I get sound vibration being transmitted to your ear but I don’t understand how you’d be able to read SMS or other messages.

Wearable tech for Christmas 2015 and into 2016

This is a roundup post of four items to cross my path this morning (Dec. 17, 2015), all of them concerned with wearable technology.

The first, a Dec. 16, 2015 news item on phys.org, is a fluffy little piece concerning the imminent arrival of a new generation of wearable technology,

It’s not every day that there’s a news story about socks. But in November [2015], a pair won the Best New Wearable Technology Device Award at a Silicon Valley conference. The smart socks, which track foot landings and cadence, are at the forefront of a new generation of wearable electronics, according to an article in Chemical & Engineering News (C&EN), the weekly newsmagazine of the American Chemical Society [ACS].

That news item was originated by a Dec. 16, 2015 ACS news release on EurekAlert which adds this,

Marc S. Reisch, a senior correspondent at C&EN, notes that stiff wristbands like the popular FitBit that measure heart rate and the number of steps people take have become common. But the long-touted technology needed to create more flexible monitoring devices has finally reached the market. Developers have successfully figured out how to incorporate stretchable wiring and conductive inks in clothing fabric, program them to transmit data wirelessly and withstand washing.

In addition to smart socks, fitness shirts and shoe insoles are on the market already or are nearly there. Although athletes are among the first to gain from the technology, the less fitness-oriented among us could also benefit. One fabric concept product — designed not for covering humans but a car steering-wheel — could sense driver alertness and make roads safer.

Reisch’s Dec. 7, 2015 article (C&EN vol. 93, issue 48, pp. 28-90) provides more detailed information and market information such as this,

Materials suppliers, component makers, and apparel developers gathered at a printed-electronics conference in Santa Clara, Calif., within a short drive of tech giants such as Google and Apple, to compare notes on embedding electronics into the routines of daily life. A notable theme was the effort to stealthily [emphasis mine] place sensors on exercise shirts, socks, and shoe soles so that athletes and fitness buffs can wirelessly track their workouts and doctors can monitor the health of their patients.

“Wearable technology is becoming more wearable,” said Raghu Das, chief executive officer of IDTechEx [emphasis mine], the consulting firm that organized the conference. By that he meant the trend is toward thinner and more flexible devices that include not just wrist-worn fitness bands but also textiles printed with stretchable wiring and electronic sensors, thanks to advances in conductive inks.

Interesting use of the word ‘stealthy’, which often suggests something sneaky as opposed to merely secretive. I imagine what’s being suggested is that the technology will not impose itself on the user (i.e., you won’t have to learn how to use it as you did with phones and computers).

Leading into my second item, IDC (International Data Corporation), not to be confused with IDTechEx, is mentioned in a Dec. 17, 2015 news item about wearable technology markets on phys.org,

The global market for wearable technology is seeing a surge, led by watches, smart clothing and other connected gadgets, a research report said Thursday [Dec. 16, 2015].

IDC said its forecast showed the worldwide wearable device market will reach a total of 111.1 million units in 2016, up 44.4 percent from this year.

By 2019, IDC sees some 214.6 million units, or a growth rate averaging 28 percent.

A Dec. 17, 2015 IDC press release, which originated the news item, provides more details about the market forecast,

“The most common type of wearables today are fairly basic, like fitness trackers, but over the next few years we expect a proliferation of form factors and device types,” said Jitesh Ubrani, Senior Research Analyst for IDC Mobile Device Trackers. “Smarter clothing, eyewear, and even hearables (ear-worn devices) are all in their early stages of mass adoption. Though at present these may not be significantly smarter than their analog counterparts, the next generation of wearables are on track to offer vastly improved experiences and perhaps even augment human abilities.”

One of the most popular types of wearables will be smartwatches, reaching a total of 34.3 million units shipped in 2016, up from the 21.3 million units expected to ship in 2015. By 2019, the final year of the forecast, total shipments will reach 88.3 million units, resulting in a five-year CAGR of 42.8%.

“In a short amount of time, smartwatches have evolved from being extensions of the smartphone to wearable computers capable of communications, notifications, applications, and numerous other functionalities,” noted Ramon Llamas, Research Manager for IDC’s Wearables team. “The smartwatch we have today will look nothing like the smartwatch we will see in the future. Cellular connectivity, health sensors, not to mention the explosive third-party application market all stand to change the game and will raise both the appeal and value of the market going forward.

“Smartwatch platforms will lead the evolution,” added Llamas. “As the brains of the smartwatch, platforms manage all the tasks and processes, not the least of which are interacting with the user, running all of the applications, and connecting with the smartphone. Once that third element is replaced with cellular connectivity, the first two elements will take on greater roles to make sense of all the data and connections.”

Top Five Smartwatch Platform Highlights

Apple’s watchOS will lead the smartwatch market throughout our forecast, with a loyal fanbase of Apple product owners and a rapidly growing application selection, including both native apps and Watch-designed apps. Very quickly, watchOS has become the measuring stick against which other smartwatches and platforms are compared. While there is much room for improvement and additional features, there is enough momentum to keep it ahead of the rest of the market.

Android/Android Wear will be a distant second behind watchOS even as its vendor list grows to include technology companies (ASUS, Huawei, LG, Motorola, and Sony) and traditional watchmakers (Fossil and Tag Heuer). The user experience on Android Wear devices has been largely the same from one device to the next, leaving little room for OEMs to develop further and users left to select solely on price and smartwatch design.

Smartwatch pioneer Pebble will cede market share to Android Wear and watchOS but will not disappear altogether. Its simple user interface and devices make for an easy-to-understand use case, and its price point relative to other platforms makes Pebble one of the most affordable smartwatches on the market.

Samsung’s Tizen stands to be the dark horse of the smartwatch market and poses a threat to Android Wear, including compatibility with most flagship Android smartphones and an application selection rivaling Android Wear. Moreover, with Samsung, Tizen has benefited from technology developments including a QWERTY keyboard on a smartwatch screen, cellular connectivity, and new user interfaces. It’s a combination that helps Tizen stand out, but not enough to keep up with Android Wear and watchOS.

There will be a small, but nonetheless significant market for smart wristwear running on a Real-Time Operating System (RTOS), which is capable of running third-party applications, but not on any of these listed platforms. These tend to be proprietary operating systems and OEMs will use them when they want to champion their own devices. These will help within specific markets or devices, but will not overtake the majority of the market.

The company has provided a table with five-year CAGR (compound annual growth rate) growth estimates, which can be found with the Dec. 17, 2015 IDC press release.
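For anyone who wants to sanity-check the growth figures above, the compound annual growth rate is simply (end value / start value) raised to the power 1/number of years, minus 1. Here is a quick Python check against the shipment numbers quoted in the press release; note that IDC calls 42.8% a five-year CAGR, so their base year is presumably earlier than 2015, but measured from the 2015 smartwatch figure over four year-on-year steps the result lands in roughly the same place.

```python
def cagr(start, end, years):
    """Compound annual growth rate: (end / start) ** (1 / years) - 1."""
    return (end / start) ** (1 / years) - 1

# Smartwatches: 21.3M units (2015) to 88.3M units (2019), four year-on-year steps
print(f"smartwatch CAGR: {cagr(21.3, 88.3, 4):.1%}")            # ~42.7%, vs IDC's 42.8%

# All wearables: 111.1M units in 2016, described as up 44.4% from 2015
wearables_2015 = 111.1 / 1.444                                  # implies ~76.9M units in 2015
print(f"implied 2015 wearables: {wearables_2015:.1f}M units")
print(f"wearables CAGR to 2019: {cagr(wearables_2015, 214.6, 4):.1%}")  # ~29%, near the 28% average
```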

Disclaimer: I am not endorsing IDC’s claims regarding the market for wearable technology.

For the third and fourth items, it’s back to the science. A Dec. 17, 2015 news item on Nanowerk describes, in general terms, some recent wearable technology research at the University of Manchester (UK) (Note: A link has been removed),

Cheap, flexible, wireless graphene communication devices such as mobile phones and healthcare monitors can be directly printed into clothing and even skin, University of Manchester academics have demonstrated.

In a breakthrough paper in Scientific Reports (“Highly Flexible and Conductive Printed Graphene for Wireless Wearable Communications Applications”), the researchers show how graphene could be crucial to wearable electronic applications because it is highly-conductive and ultra-flexible.

The research could pave the way for smart, battery-free healthcare and fitness monitoring, phones, internet-ready devices and chargers to be incorporated into clothing and ‘smart skin’ applications – printed graphene sensors integrated with other 2D materials stuck onto a patient’s skin to monitor temperature, strain and moisture levels.

Detail is provided in a Dec. 17, 2015 University of Manchester press release, which originated the news item (Note: Links have been removed),

Examples of communication devices include:

• In a hospital, a patient wears a printed graphene RFID tag on his or her arm. The tag, integrated with other 2D materials, can sense the patient’s body temperature and heartbeat and sends them back to the reader. The medical staff can monitor the patient’s conditions wirelessly, greatly simplifying the patient’s care.

• In a care home, battery-free printed graphene sensors can be printed on elderly peoples’ clothes. These sensors could detect and collect elderly people’s health conditions and send them back to the monitoring access points when they are interrogated, enabling remote healthcare and improving quality of life.

Existing materials used in wearable devices are either too expensive, such as silver nanoparticles, or not adequately conductive to have an effect, such as conductive polymers.

Graphene, the world’s thinnest, strongest and most conductive material, is perfect for the wearables market because of its broad range of superlative qualities. Graphene conductive ink can be cheaply mass produced and printed onto various materials, including clothing and paper.

The researchers, led by Dr Zhirun Hu, printed graphene to construct transmission lines and antennas and experimented with these in communication devices, such as mobile and Wifi connectivity.

Using a mannequin, they attached graphene-enabled antennas on each arm. The devices were able to ‘talk’ to each other, effectively creating an on-body communications system.

The results proved that graphene enabled components have the required quality and functionality for wireless wearable devices.

Dr Hu, from the School of Electrical and Electronic Engineering, said: “This is a significant step forward – we can expect to see a truly all graphene enabled wireless wearable communications system in the near future.

“The potential applications for this research are huge – whether it be for health monitoring, mobile communications or applications attached to skin for monitoring or messaging.

“This work demonstrates that this revolutionary scientific material is bringing a real change into our daily lives.”

Co-author Sir Kostya Novoselov, who with his colleague Sir Andre Geim first isolated graphene at the University in 2004, added: “Research into graphene has thrown up significant potential applications, but to see evidence that cheap, scalable wearable communication devices are on the horizon is excellent news for graphene commercial applications.”

Here’s a link to and a citation for the paper,

Highly Flexible and Conductive Printed Graphene for Wireless Wearable Communications Applications by Xianjun Huang, Ting Leng, Mengjian Zhu, Xiao Zhang, JiaCing Chen, KuoHsin Chang, Mohammed Aqeeli, Andre K. Geim, Kostya S. Novoselov, & Zhirun Hu. Scientific Reports 5, Article number: 18298 (2015) doi:10.1038/srep18298 Published online: 17 December 2015

This is an open access paper.

The next and final item concerns supercapacitors for wearable tech, which makes it slightly different from the other items and is why, despite the date, this is the final item. The research comes from Case Western Reserve University (CWRU; US) according to a Dec. 16, 2015 news item on Nanowerk (Note: A link has been removed),

Wearable power sources for wearable electronics are limited by the size of garments.

With that in mind, researchers at Case Western Reserve University have developed flexible wire-shaped microsupercapacitors that can be woven into a jacket, shirt or dress (Energy Storage Materials, “Flexible and wearable wire-shaped microsupercapacitors based on highly aligned titania and carbon nanotubes”).

A Dec. 16, 2015 CWRU news release (on EurekAlert), which originated the news item, provides more detail about a device that would make wearable tech more wearable (after all, you don’t want to recharge your clothes the same way you do your phone and other mobile devices),

By their design or by connecting the capacitors in series or parallel, the devices can be tailored to match the charge storage and delivery needs of electronics donned.

While there’s been progress in development of those electronics–body cameras, smart glasses, sensors that monitor health, activity trackers and more–one challenge remaining is providing less obtrusive and cumbersome power sources.

“The area of clothing is fixed, so to generate the power density needed in a small area, we grew radially-aligned titanium oxide nanotubes on a titanium wire used as the main electrode,” said Liming Dai, the Kent Hale Smith Professor of Macromolecular Science and Engineering. “By increasing the surface area of the electrode, you increase the capacitance.”

Dai and Tao Chen, a postdoctoral fellow in molecular science and engineering at Case Western Reserve, published their research on the microsupercapacitor in the journal Energy Storage Materials this week. The study builds on earlier carbon-based supercapacitors.

A capacitor is cousin to the battery, but offers the advantage of charging and releasing energy much faster.

How it works

In this new supercapacitor, the modified titanium wire is coated with a solid electrolyte made of polyvinyl alcohol and phosphoric acid. The wire is then wrapped with either yarn or a sheet made of aligned carbon nanotubes, which serves as the second electrode. The titanium oxide nanotubes, which are semiconducting, separate the two active portions of the electrodes, preventing a short circuit.

In testing, capacitance–the capability to store charge–increased from 0.57 to 0.9 to 1.04 milliFarads per micrometer as the strands of carbon nanotube yarn were increased from 1 to 2 to 3.

When wrapped with a sheet of carbon nanotubes, which increases the effective area of the electrode, the microsupercapacitor stored 1.84 milliFarads per micrometer. Energy density was 0.16 x 10^-3 milliwatt-hours per cubic centimeter and power density 0.01 milliwatts per cubic centimeter.

Whether wrapped with yarn or a sheet, the microsupercapacitor retained at least 80 percent of its capacitance after 1,000 charge-discharge cycles. To match various specific power needs of wearable devices, the wire-shaped capacitors can be connected in series or parallel to raise voltage or current, the researchers say.
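The series-versus-parallel point is standard capacitor bookkeeping: identical cells wired in parallel add their capacitances (more charge at the same voltage), while cells in series divide the capacitance but stack their voltage ratings. Here is a tiny Python illustration; the per-cell values are made up for the example, since the press release doesn’t give a per-cell voltage rating.

```python
def combine(c_cell, v_cell, n, mode):
    """Equivalent capacitance and voltage rating for n identical cells."""
    if mode == "parallel":
        return n * c_cell, v_cell        # capacitance adds, voltage rating unchanged
    if mode == "series":
        return c_cell / n, n * v_cell    # capacitance divides, voltage rating adds
    raise ValueError(f"unknown mode: {mode}")

# Illustrative numbers only (not from the paper): 1 mF cells rated at 1 V
for mode in ("parallel", "series"):
    c, v = combine(c_cell=1e-3, v_cell=1.0, n=3, mode=mode)
    print(f"3 cells in {mode}: {c * 1e3:.2f} mF at {v:.0f} V")
```

For ideal identical cells the stored energy, E = ½CV², works out the same either way; what the wiring changes is whether that energy is delivered at higher voltage or with higher capacitance, which is presumably what the researchers mean by matching the power needs of a given device.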

When bent up to 180 degrees hundreds of times, the capacitors showed no loss of performance. Those wrapped in sheets showed more mechanical strength.

“They’re very flexible, so they can be integrated into fabric or textile materials,” Dai said. “They can be a wearable, flexible power source for wearable electronics and also for self-powered biosensors or other biomedical devices, particularly for applications inside the body.” [emphasis mine]

Dai’s lab is in the process of weaving the wire-like capacitors into fabric and integrating them with a wearable device.

So one day we may be carrying supercapacitors in our bodies? I’m not sure how I feel about that goal. In any event, here’s a link and a citation for the paper,

Flexible and wearable wire-shaped microsupercapacitors based on highly aligned titania and carbon nanotubes by Tao Chen, Liming Dai. Energy Storage Materials Volume 2, January 2016, Pages 21–26 doi:10.1016/j.ensm.2015.11.004

This paper appears to be open access.

A wearable book (The Girl Who Was Plugged In) makes you feel the protagonist’s pain

A team of students taking an MIT (Massachusetts Institute of Technology) course called ‘Science Fiction to Science Fabrication’ has created a new category of book: sensory fiction. John Brownlee, in his Feb. 10, 2014 article for Fast Company, describes it this way,

Have you ever felt your pulse quicken when you read a book, or your skin go clammy during a horror story? A new student project out of MIT wants to deepen those sensations. They have created a wearable book that uses inexpensive technology and neuroscientific hacking to create a sort of cyberpunk Neverending Story that blurs the line between the bodies of a reader and protagonist.

Called Sensory Fiction, the project was created by a team of four MIT students–Felix Heibeck, Alexis Hope, Julie Legault, and Sophia Brueckner …

Here’s the MIT video demonstrating the book in use (from the course’s sensory fiction page),

Here’s how the students have described their sensory book, from the project page,

Sensory fiction is about new ways of experiencing and creating stories.

Traditionally, fiction creates and induces emotions and empathy through words and images.  By using a combination of networked sensors and actuators, the Sensory Fiction author is provided with new means of conveying plot, mood, and emotion while still allowing space for the reader’s imagination. These tools can be wielded to create an immersive storytelling experience tailored to the reader.

To explore this idea, we created a connected book and wearable. The ‘augmented’ book portrays the scenery and sets the mood, and the wearable allows the reader to experience the protagonist’s physiological emotions.

The book cover animates to reflect the book’s changing atmosphere, while certain passages trigger vibration patterns.

Changes in the protagonist’s emotional or physical state trigger discrete feedback in the wearable, whether by changing the heartbeat rate, creating constriction through air pressure bags, or causing localized temperature fluctuations.

Our prototype story, ‘The Girl Who Was Plugged In’ by James Tiptree, showcases an incredible range of settings and emotions. The main protagonist experiences both deep love and ultimate despair, the freedom of Barcelona sunshine and the captivity of a dark damp cellar.

The book and wearable support the following outputs:

  • Light (the book cover has 150 programmable LEDs to create ambient light based on changing setting and mood)
  • Sound
  • Personal heating device to change skin temperature (through a Peltier junction secured at the collarbone)
  • Vibration to influence heart rate
  • Compression system (to convey tightness or loosening through pressurized airbags)
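Out of curiosity, here is a rough Python sketch of how passages might be mapped to those outputs. To be clear, this is my own invention for illustration, not the MIT team’s code; every field name and value below is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ActuatorState:
    """Hypothetical settings for the vest and book cover (names are mine, not MIT's)."""
    heartbeat_bpm: int          # pacing of the heartbeat/vibration simulator
    airbag_pressure: float      # 0.0 (loose) to 1.0 (tight compression)
    skin_temp_offset_c: float   # Peltier heating/cooling relative to baseline
    cover_light: str            # ambient LED colour on the book cover

# Invented annotations for two very different moments in the prototype story
PASSAGE_MOODS = {
    "barcelona_sunshine": ActuatorState(72, 0.1, +1.5, "warm amber"),
    "dark_damp_cellar":   ActuatorState(110, 0.8, -2.0, "dim blue"),
}

def render(passage_key):
    """Look up and 'apply' the actuator settings for the current passage."""
    state = PASSAGE_MOODS[passage_key]
    print(f"{passage_key}: {state}")

render("barcelona_sunshine")
render("dark_damp_cellar")
```

In a real system the lookup would presumably be driven by sensing which page the reader is on, but the basic idea of annotating passages and translating those annotations into actuator commands would be much the same.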

One of the earliest stories about this project was a Jan. 28, 2014 piece written by Alison Flood for the Guardian, where she explains how vibration, etc. are used to convey/stimulate the reader’s sensations and emotions,

MIT scientists have created a ‘wearable’ book using temperature and lighting to mimic the experiences of a book’s protagonist

The book, explain the researchers, senses the page a reader is on, and changes ambient lighting and vibrations to “match the mood”. A series of straps form a vest which contains a “heartbeat and shiver simulator”, a body compression system, temperature controls and sound.

“Changes in the protagonist’s emotional or physical state trigger discrete feedback in the wearable [vest], whether by changing the heartbeat rate, creating constriction through air pressure bags, or causing localised temperature fluctuations,” say the academics.

Flood goes on to illuminate how science fiction has explored the notion of ‘sensory books’ (Note: Links have been removed) and how at least one science fiction novelist is responding to this new type of book,

The Arthur C Clarke award-winning science fiction novelist Chris Beckett wrote about a similar invention in his novel Marcher, although his “sensory” experience comes in the form of a video game:

Adam Roberts, another prize-winning science fiction writer, found the idea of “sensory” fiction “amazing”, but also “infantilising, like reverting to those sorts of books we buy for toddlers that have buttons in them to generate relevant sound-effects”.

Elise Hu in her Feb. 6, 2014 posting on the US National Public Radio (NPR) blog, All Tech Considered, takes a different approach to the topic,

The prototype does work, but it won’t be manufactured anytime soon. The creation was only “meant to provoke discussion,” Hope says. It was put together as part of a class in which designers read science fiction and make functional prototypes to explore the ideas in the books.

If it ever does become more widely available, sensory fiction could have an unintended consequence. When I shared this idea with NPR editor Ellen McDonnell, she quipped, “If these device things are helping ‘put you there,’ it just means the writing won’t have to be as good.”

I hope the students are successful at provoking discussion as so far they seem to have primarily provoked interest.

As for my two cents, I think that in a world where making personal connections seems increasingly difficult (i.e., people becoming more isolated), sensory fiction that stimulates people into feeling something as they read a book is a logical progression. It’s also interesting to me that all of the focus is on the reader, with no mention of what writers might produce (other than McDonnell’s cheeky comment) if they knew their books were going to be given the ‘sensory treatment’. One more musing: I wonder if there might be a difference in how males and females, writers and readers, respond to sensory fiction.

Now for a bit of wordplay. Feeling can be emotional but, in English, it can also refer to touch, and researchers at MIT have also been investigating new touch-oriented media. You can read more about that project in my Reaching beyond the screen with the Tangible Media Group at the Massachusetts Institute of Technology (MIT) posting dated Nov. 13, 2013. One final thought: I am intrigued by how interested scientists at MIT seem to be in feelings of all kinds.

Internet of Things 2012 conference: call for papers

The 3rd International Conference on the Internet of Things will be held Oct. 24 – 26, 2012 in Wuxi, China. From the Call for papers page,

In what is called the Internet of Things (IoT), sensors and actuators embedded in physical objects — from containers to pacemakers — are linked through both wired and wireless networks to the Internet. When objects in the IoT can sense the environment, interpret the data, and communicate with each other, they become tools for understanding complexity and for responding to events and irregularities swiftly. The IoT is therefore seen by many as the ultimate solution for getting fine grained insights into business processes — in the real-world and in real-time. Started one decade ago as a wild academic idea, this interlinking of the physical world and cyberspace foreshadows an exciting endeavour that is highly relevant to researchers, corporations, and individuals.

The IoT2012 conference will focus on these core research challenges.

IoT 2012

The IoT conference series has become the major biennial event that brings internationally leading researchers and practitioners from both academia and industry together to facilitate the sharing of applications, research results, and knowledge. Building on the success of the last two conferences (2008 in Zurich and 2010 in Tokyo), the 3rd International Conference on the Internet of Things (IoT2012) will include a highly selective dual-track program for technical papers, accompanied by reports on business projects from seasoned practitioners, poster sessions summarizing late-breaking results, and hands-on demos of current technology.  We invite submissions of original and unpublished work covering areas related to the IoT, in one or more of the following three categories: technical papers, posters, and demonstrations.

IoT Topics of Interest

IoT 2012 welcomes submissions on the following topics:

* IoT architectures and system design
* IoT networking and communication
* Circuit and system design for smart objects in the IoT
* Novel IoT services and applications for society/corporations/individuals
* Emerging IoT business models and corresponding process changes
* Cooperative data processing for IoT
* Social impacts such as security, privacy, and trust in the IoT

Work addressing real-world implementation and deployment issues is encouraged.

The deadlines (according to the newsletter I received) are:

papers: May 1, 2012 | posters, demos: August 1, 2012

Given last week’s flutter of interest (see Brian Braiker’s April 5, 2012 posting for the Guardian, etc.) in the Google goggles or, as they prefer to call it, the Google Project Glass, this conference would offer information about the practical aspects of implementation for at least one of these scenarios,

Assuming you’ve watched the video, imagine the number of embedded sensors and tracking information needed to give the user up-to-date instructions on his walking route to the bookstore. On that note, I’m glad to see there’s one IoT 2012 conference theme devoted to social impacts such as security and privacy.