Category Archives: robots

Electronics begone! Enter: the light-based brainlike computing chip

At this point, it’s possible I’m wrong, but I think this is the first ‘memristor’ type device (also called a neuromorphic chip) based on light rather than electronics that I’ve featured here on this blog. Strictly speaking, it’s not a memristor, but it has similar properties, so it qualifies as a neuromorphic chip.

Caption: The optical microchips that the researchers are working on developing are about the size of a one-cent piece. Credit: WWU Muenster – Peter Leßmann

A May 8, 2019 news item on Nanowerk announces this new approach to neuromorphic hardware (Note: A link has been removed),

Researchers from the Universities of Münster (Germany), Oxford and Exeter (both UK) have succeeded in developing a piece of hardware which could pave the way for creating computers which resemble the human brain.

The scientists produced a chip containing a network of artificial neurons that works with light and can imitate the behaviour of neurons and their synapses. The network is able to “learn” information and use this as a basis for computing and recognizing patterns. As the system functions solely with light and not with electrons, it can process data many times faster than traditional systems. …

A May 8, 2019 University of Münster press release (also on EurekAlert), which originated the news item, reveals the full story,

A technology that functions like a brain? In these times of artificial intelligence, this no longer seems so far-fetched – for example, when a mobile phone can recognise faces or languages. With more complex applications, however, computers still quickly come up against their own limitations. One of the reasons for this is that a computer traditionally has separate memory and processor units – the consequence of which is that all data have to be sent back and forth between the two. In this respect, the human brain is way ahead of even the most modern computers because it processes and stores information in the same place – in the synapses, or connections between neurons, of which there are a million-billion in the brain. An international team of researchers from the Universities of Münster (Germany), Oxford and Exeter (both UK) have now succeeded in developing a piece of hardware which could pave the way for creating computers which resemble the human brain. The scientists managed to produce a chip containing a network of artificial neurons that works with light and can imitate the behaviour of neurons and their synapses.

The researchers were able to demonstrate that such an optical neurosynaptic network is able to “learn” information and use this as a basis for computing and recognizing patterns – just as a brain can. As the system functions solely with light and not with traditional electrons, it can process data many times faster. “This integrated photonic system is an experimental milestone,” says Prof. Wolfram Pernice from Münster University and lead partner in the study. “The approach could be used later in many different fields for evaluating patterns in large quantities of data, for example in medical diagnoses.” The study is published in the latest issue of the “Nature” journal.

The story in detail – background and method used

Most of the existing approaches relating to so-called neuromorphic networks are based on electronics, whereas optical systems – in which photons, i.e. light particles, are used – are still in their infancy. The principle which the German and British scientists have now presented works as follows: optical waveguides that can transmit light and can be fabricated into optical microchips are integrated with so-called phase-change materials – which are already found today on storage media such as re-writable DVDs. These phase-change materials are characterised by the fact that they change their optical properties dramatically, depending on whether they are crystalline – when their atoms arrange themselves in a regular fashion – or amorphous – when their atoms organise themselves in an irregular fashion. This phase-change can be triggered by light if a laser heats the material up. “Because the material reacts so strongly, and changes its properties dramatically, it is highly suitable for imitating synapses and the transfer of impulses between two neurons,” says lead author Johannes Feldmann, who carried out many of the experiments as part of his PhD thesis at the Münster University.

In their study, the scientists succeeded for the first time in merging many nanostructured phase-change materials into one neurosynaptic network. The researchers developed a chip with four artificial neurons and a total of 60 synapses. The structure of the chip – consisting of different layers – was based on the so-called wavelength division multiplex technology, which is a process in which light is transmitted on different channels within the optical nanocircuit.
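
An aside from me before the press release continues: for anyone who likes to think in code, here is a rough, purely illustrative sketch of the idea as I understand it; it is not the researchers’ device model. Each phase-change synapse acts as an optical ‘weight’ whose transmission depends on how crystalline the material is, each input pulse travels on its own wavelength channel, and the artificial neuron ‘fires’ when the summed transmitted power crosses a threshold. All of the names and numbers below are made up for illustration.

```python
import numpy as np

def synapse_transmission(crystallinity, t_amorphous=0.9, t_crystalline=0.1):
    """Optical transmission of a toy phase-change synapse.

    crystallinity runs from 0.0 (fully amorphous) to 1.0 (fully crystalline);
    a more crystalline cell absorbs more light, so transmission drops.
    The two transmission values are illustrative, not measured device data.
    """
    return t_amorphous + crystallinity * (t_crystalline - t_amorphous)

def photonic_neuron(input_powers, crystallinities, threshold=1.0):
    """Toy wavelength-multiplexed neuron.

    Each input pulse travels on its own wavelength channel through one
    phase-change synapse; the neuron integrates the transmitted power over
    all channels and 'fires' if the total crosses the threshold.
    """
    weights = synapse_transmission(np.asarray(crystallinities, dtype=float))
    total_power = np.dot(np.asarray(input_powers, dtype=float), weights)
    return total_power, total_power >= threshold

# Four input channels feeding a single toy neuron, with arbitrary pulse
# energies and synaptic states chosen for illustration.
powers = [0.5, 0.5, 0.5, 0.5]   # incoming pulse energy per wavelength channel
states = [0.1, 0.8, 0.2, 0.9]   # crystallinity of each synapse
total, fired = photonic_neuron(powers, states)
print(f"integrated power: {total:.2f}, neuron fired: {fired}")
```

In the real chip the weights are written optically by laser pulses and the integration happens in waveguides rather than in software; this toy only mimics the arithmetic of a weighted sum followed by a threshold.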

In order to test the extent to which the system is able to recognise patterns, the researchers “fed” it with information in the form of light pulses, using two different algorithms of machine learning. In this process, an artificial system “learns” from examples and can, ultimately, generalise them. In the case of the two algorithms used – both in so-called supervised and in unsupervised learning – the artificial network was ultimately able, on the basis of given light patterns, to recognise a pattern being sought – one of which was four consecutive letters.

“Our system has enabled us to take an important step towards creating computer hardware which behaves similarly to neurons and synapses in the brain and which is also able to work on real-world tasks,” says Wolfram Pernice. “By working with photons instead of electrons we can exploit to the full the known potential of optical technologies – not only in order to transfer data, as has been the case so far, but also in order to process and store them in one place,” adds co-author Prof. Harish Bhaskaran from the University of Oxford.

A very specific example is that with the aid of such hardware cancer cells could be identified automatically. Further work will need to be done, however, before such applications become reality. The researchers need to increase the number of artificial neurons and synapses and increase the depth of neural networks. This can be done, for example, with optical chips manufactured using silicon technology. “This step is to be taken in the EU joint project ‘Fun-COMP’ by using foundry processing for the production of nanochips,” says co-author and leader of the Fun-COMP project, Prof. C. David Wright from the University of Exeter.

Here’s a link to and a citation for the paper,

All-optical spiking neurosynaptic networks with self-learning capabilities by J. Feldmann, N. Youngblood, C. D. Wright, H. Bhaskaran & W. H. P. Pernice. Nature volume 569, pages 208–214 (2019) DOI: https://doi.org/10.1038/s41586-019-1157-8 Issue Date: 09 May 2019

This paper is behind a paywall.

For the curious, I found a little more information about Fun-COMP (functionally-scaled computer technology). It’s a European Commission (EC) Horizon 2020 project coordinated through the University of Exeter. For details such as the total cost, the contribution from the EC, the list of partners and more, there is the Fun-COMP webpage on fabiodisconzi.com.

Automated science writing?

It seems that automated science writing is not ready—yet. Still, an April 18, 2019 news item on ScienceDaily suggests that progress is being made,

The work of a science writer, including this one, includes reading journal papers filled with specialized technical terminology, and figuring out how to explain their contents in language that readers without a scientific background can understand.

Now, a team of scientists at MIT [Massachusetts Institute of Technology] and elsewhere has developed a neural network, a form of artificial intelligence (AI), that can do much the same thing, at least to a limited extent: It can read scientific papers and render a plain-English summary in a sentence or two.

An April 17, 2019 MIT news release, which originated the news item, delves into the research and its implications,

Even in this limited form, such a neural network could be useful for helping editors, writers, and scientists [emphasis mine] scan a large number of papers to get a preliminary sense of what they’re about. But the approach the team developed could also find applications in a variety of other areas besides language processing, including machine translation and speech recognition.

The work is described in the journal Transactions of the Association for Computational Linguistics, in a paper by Rumen Dangovski and Li Jing, both MIT graduate students; Marin Soljačić, a professor of physics at MIT; Preslav Nakov, a principal scientist at the Qatar Computing Research Institute, HBKU; and Mićo Tatalović, a former Knight Science Journalism fellow at MIT and a former editor at New Scientist magazine.

From AI for physics to natural language

The work came about as a result of an unrelated project, which involved developing new artificial intelligence approaches based on neural networks, aimed at tackling certain thorny problems in physics. However, the researchers soon realized that the same approach could be used to address other difficult computational problems, including natural language processing, in ways that might outperform existing neural network systems.

“We have been doing various kinds of work in AI for a few years now,” Soljačić says. “We use AI to help with our research, basically to do physics better. And as we got to be  more familiar with AI, we would notice that every once in a while there is an opportunity to add to the field of AI because of something that we know from physics — a certain mathematical construct or a certain law in physics. We noticed that hey, if we use that, it could actually help with this or that particular AI algorithm.”

This approach could be useful in a variety of specific kinds of tasks, he says, but not all. “We can’t say this is useful for all of AI, but there are instances where we can use an insight from physics to improve on a given AI algorithm.”

Neural networks in general are an attempt to mimic the way humans learn certain new things: The computer examines many different examples and “learns” what the key underlying patterns are. Such systems are widely used for pattern recognition, such as learning to identify objects depicted in photos.

But neural networks in general have difficulty correlating information from a long string of data, such as is required in interpreting a research paper. Various tricks have been used to improve this capability, including techniques known as long short-term memory (LSTM) and gated recurrent units (GRU), but these still fall well short of what’s needed for real natural-language processing, the researchers say.

The team came up with an alternative system, which instead of being based on the multiplication of matrices, as most conventional neural networks are, is based on vectors rotating in a multidimensional space. The key concept is something they call a rotational unit of memory (RUM).

Essentially, the system represents each word in the text by a vector in multidimensional space — a line of a certain length pointing in a particular direction. Each subsequent word swings this vector in some direction, represented in a theoretical space that can ultimately have thousands of dimensions. At the end of the process, the final vector or set of vectors is translated back into its corresponding string of words.
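
Another aside from me: to make the ‘rotating vectors’ idea a little more concrete, here is a small toy sketch of my own. It is a reconstruction of the general idea, not the published RUM equations. The memory is a unit vector, and each incoming word vector swings it by a rotation in the plane the two vectors share; unlike the additive updates in most recurrent networks, a rotation never changes the length of the memory vector. The dimensions, the random stand-in ‘word vectors’ and the half-angle rotation are all illustrative assumptions.

```python
import numpy as np

def plane_rotation(a, b, frac=0.5):
    """Matrix rotating vectors in the plane spanned by a and b.

    It rotates by frac times the angle between a and b and leaves every
    direction orthogonal to that plane untouched. A toy stand-in for a
    rotational memory update, not the paper's exact formulation.
    """
    dim = len(a)
    u = a / np.linalg.norm(a)
    v = b - np.dot(b, u) * u                  # part of b orthogonal to a
    if np.linalg.norm(v) < 1e-12:             # a, b (anti)parallel: do nothing
        return np.eye(dim)
    v = v / np.linalg.norm(v)
    cos_full = np.clip(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)), -1.0, 1.0)
    theta = frac * np.arccos(cos_full)
    c, s = np.cos(theta), np.sin(theta)
    # Identity outside the (u, v) plane, a 2-D rotation inside it.
    return (np.eye(dim)
            + (c - 1.0) * (np.outer(u, u) + np.outer(v, v))
            + s * (np.outer(v, u) - np.outer(u, v)))

rng = np.random.default_rng(0)
dim, seq_len = 16, 6
word_vectors = rng.normal(size=(seq_len, dim))   # stand-ins for embedded words
memory = rng.normal(size=dim)
memory /= np.linalg.norm(memory)

for step, w in enumerate(word_vectors, start=1):
    # Each "word" swings the memory vector partway toward it; because the
    # update is a rotation, the memory never grows or shrinks.
    memory = plane_rotation(memory, w) @ memory
    print(f"after word {step}: |memory| = {np.linalg.norm(memory):.6f}")
```

The norm stays at exactly 1.0 after every word, which hints at why rotation-based updates are attractive for holding onto information over long sequences.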

“RUM helps neural networks to do two things very well,” Nakov says. “It helps them to remember better, and it enables them to recall information more accurately.”

After developing the RUM system to help with certain tough physics problems such as the behavior of light in complex engineered materials, “we realized one of the places where we thought this approach could be useful would be natural language processing,” says Soljačić,  recalling a conversation with Tatalović, who noted that such a tool would be useful for his work as an editor trying to decide which papers to write about. Tatalović was at the time exploring AI in science journalism as his Knight fellowship project.

“And so we tried a few natural language processing tasks on it,” Soljačić says. “One that we tried was summarizing articles, and that seems to be working quite well.”

The proof is in the reading

As an example, they fed the same research paper through a conventional LSTM-based neural network and through their RUM-based system. The resulting summaries were dramatically different.

The LSTM system yielded this highly repetitive and fairly technical summary: “Baylisascariasis,” kills mice, has endangered the allegheny woodrat and has caused disease like blindness or severe consequences. This infection, termed “baylisascariasis,” kills mice, has endangered the allegheny woodrat and has caused disease like blindness or severe consequences. This infection, termed “baylisascariasis,” kills mice, has endangered the allegheny woodrat.

Based on the same paper, the RUM system produced a much more readable summary, and one that did not include the needless repetition of phrases: Urban raccoons may infect people more than previously assumed. 7 percent of surveyed individuals tested positive for raccoon roundworm antibodies. Over 90 percent of raccoons in Santa Barbara play host to this parasite.

Already, the RUM-based system has been expanded so it can “read” through entire research papers, not just the abstracts, to produce a summary of their contents. The researchers have even tried using the system on their own research paper describing these findings — the paper that this news story is attempting to summarize.

Here is the new neural network’s summary: Researchers have developed a new representation process on the rotational unit of RUM, a recurrent memory that can be used to solve a broad spectrum of the neural revolution in natural language processing.

It may not be elegant prose, but it does at least hit the key points of information.

Çağlar Gülçehre, a research scientist at the British AI company Deepmind Technologies, who was not involved in this work, says this research tackles an important problem in neural networks, having to do with relating pieces of information that are widely separated in time or space. “This problem has been a very fundamental issue in AI due to the necessity to do reasoning over long time-delays in sequence-prediction tasks,” he says. “Although I do not think this paper completely solves this problem, it shows promising results on the long-term dependency tasks such as question-answering, text summarization, and associative recall.”

Gülçehre adds, “Since the experiments conducted and model proposed in this paper are released as open-source on Github, as a result many researchers will be interested in trying it on their own tasks. … To be more specific, potentially the approach proposed in this paper can have very high impact on the fields of natural language processing and reinforcement learning, where the long-term dependencies are very crucial.”

The research received support from the Army Research Office, the National Science Foundation, the MIT-SenseTime Alliance on Artificial Intelligence, and the Semiconductor Research Corporation. The team also had help from the Science Daily website, whose articles were used in training some of the AI models in this research.

As usual, this ‘automated writing system’ is framed as a ‘helper’, not a usurper of anyone’s job. However, its potential for changing the nature of the work is there. About five years ago I featured another ‘automated writing’ story in a July 16, 2014 posting titled: ‘Writing and AI or is a robot writing this blog?’ You may have been reading ‘automated’ news stories for years. At the time, the focus was on sports and business.

Getting back to 2019 and science writing, here’s a link to and a citation for the paper,

Rotational Unit of Memory: A Novel Representation Unit for RNNs with Scalable Applications by Rumen Dangovski, Li Jing, Preslav Nakov, Mićo Tatalović and Marin Soljačić. Transactions of the Association for Computational Linguistics, Volume 7, 2019, pp. 121–138. DOI: https://doi.org/10.1162/tacl_a_00258 Posted Online 2019

© 2019 Association for Computational Linguistics. Distributed under a CC-BY 4.0 license.

This paper is open access.

September 2019’s science’ish’ events in Toronto and Vancouver (Canada)

There are movies, plays, and a multimedia installation experience, all in Vancouver, plus the ‘CHAOSMOSIS mAchInes exhibition/performance/discussion/panel/in-situ experiments/art/science/techne/philosophy’ event in Toronto. But first, there’s a Vancouver talk about engaging scientists in the upcoming federal election.

Science in the Age of Misinformation (and the upcoming federal election) in Vancouver

Dr. Katie Gibbs, co-founder and executive director of Evidence for Democracy, will be giving a talk today (Sept. 4, 2019) at the University of British Columbia (UBC; Vancouver). From the Eventbrite webpage for Science in the Age of Misinformation,

Science in the Age of Misinformation, with Katie Gibbs, Evidence for Democracy
In the lead up to the federal election, it is more important than ever to understand the role that researchers play in shaping policy. Join us in this special Policy in Practice event with Dr. Katie Gibbs, Executive Director of Evidence for Democracy, Canada’s leading, national, non-partisan, and not-for-profit organization promoting science and the transparent use of evidence in government decision making. A Musqueam land acknowledgement, welcome remarks and moderation of this event will be provided by MPPGA students Joshua Tafel, and Chengkun Lv.

Wednesday, September 4, 2019
12:30 pm – 1:50 pm (Doors will open at noon)
Liu Institute for Global Issues – xʷθəθiqətəm (Place of Many Trees), 1st floor
Pizza will be provided starting at noon on first come, first serve basis. Please RSVP.

What role do researchers play in a political environment that is increasingly polarized and influenced by misinformation? Dr. Katie Gibbs, Executive Director of Evidence for Democracy, will give an overview of the current state of science integrity and science policy in Canada highlighting progress made over the past four years and what this means in a context of growing anti-expert movements in Canada and around the world. Dr. Gibbs will share concrete ways for researchers to engage heading into a critical federal election [emphasis mine], and how they can have lasting policy impact.

Bio: Katie Gibbs is a scientist, organizer and advocate for science and evidence-based policies. While completing her Ph.D. at the University of Ottawa in Biology, she was one of the lead organizers of the ‘Death of Evidence’—one of the largest science rallies in Canadian history. Katie co-founded Evidence for Democracy, Canada’s leading, national, non-partisan, and not-for-profit organization promoting science and the transparent use of evidence in government decision making. Her ongoing success in advocating for the restoration of public science in Canada has made Katie a go-to resource for national and international media outlets including Science, The Guardian and the Globe and Mail.

Katie has also been involved in international efforts to increase evidence-based decision-making and advises science integrity movements in other countries and is a member of the Open Government Partnership Multi-stakeholder Forum.

Disclaimer: Please note that by registering via Eventbrite, your information will be stored on the Eventbrite server, which is located outside Canada. If you do not wish to use this service, please email Joelle.Lee@ubc.ca directly to register. Thank you.

Location
Liu Institute for Global Issues – Place of Many Trees
6476 NW Marine Drive
Vancouver, British Columbia V6T 1Z2

Sadly, I was not able to post information about Dr. Gibbs’s more informal talk last night (Sept. 3, 2019), a special event with Café Scientifique, in a timely fashion. I do, however, have a link to Vote Science, a website encouraging anyone who wants to help get science on the 2019 federal election agenda.

Transmissions; a multimedia installation in Vancouver, September 6 -28, 2019

Here’s a description for the multimedia installation, Transmissions, in the August 28, 2019 Georgia Straight article by Janet Smith,

Lisa Jackson is a filmmaker, but she’s never allowed that job description to limit what she creates or where and how she screens her works.

The Anishinaabe artist’s breakout piece was last year’s haunting virtual-reality animation Biidaaban: First Light. In its eerie world, one that won a Canadian Screen Award, nature has overtaken a near-empty, future Toronto, with trees growing through cracks in the sidewalks, vines enveloping skyscrapers, and people commuting by canoe.

All that and more has brought her here, to Transmissions, a 6,000-square-foot, immersive film installation that invites visitors to wander through windy coastal forests, by hauntingly empty glass towers, into soundscapes of ancient languages, and more.

Through the labyrinthine multimedia work at SFU [Simon Fraser University] Woodward’s, Jackson asks big questions—about Earth’s future, about humanity’s relationship to it, and about time and Indigeneity.

Simultaneously, she mashes up not just disciplines like film and sculpture, but concepts of science, storytelling, and linguistics [emphasis mine].

“The tag lines I’m working with now are ‘the roots of meaning’ and ‘knitting the world together’,” she explains. “In western society, we tend to hive things off into ‘That’s culture. That’s science.’ But from an Indigenous point of view, it’s all connected.”

Transmissions is split into three parts, with what Jackson describes as a beginning, a middle, and an end. Like Biidaaban, it’s also visually stunning: the artist admits she’s playing with Hollywood spectacle.

Without giving too much away—a big part of the appeal of Jackson’s work is the sense of surprise—Vancouver audiences will first enter a 48-foot-long, six-foot-wide tunnel, surrounded by projections that morph from empty urban streets to a forest and a river. Further engulfing them is a soundscape that features strong winds, while black mirrors along the floor skew perspective and play with what’s above and below ground.

“You feel out of time and space,” says Jackson, who wants to challenge western society’s linear notions of minutes and hours. “I want the audience to have a physical response and an emotional response. To me, that gets closer to the Indigenous understanding. Because the Eurocentric way is more rational, where the intellectual is put ahead of everything else.”

Viewers then enter a room, where the highly collaborative Jackson has worked with artist Alan Storey, who’s helped create Plexiglas towers that look like the ghost high-rises of an abandoned city. (Storey has also designed other components of the installation.) As audience members wander through them on foot, projections make their shadows dance on the structures. Like Biidaaban, the section hints at a postapocalyptic or posthuman world. Jackson operates in an emerging realm of Indigenous futurism.

The words “science, storytelling, and linguistics” were emphasized due to a minor problem I have with terminology. Linguistics is defined as the scientific study of language, combining elements from the natural sciences, social sciences, and the humanities. I wish either Jackson or Smith had discussed the scientific element of Transmissions at more length, perhaps reconnecting linguistics to science along with the physics of time and space, as well as storytelling, film, and sculpture. That would have been helpful since, as I understand it, Transmissions is designed to showcase all of those connections and more in ways that may not be obvious to everyone. On the plus side, perhaps the tour that is part of this installation experience includes that information.

I have a bit more detail (including logistics for the tours) from the SFU Events webpage for Transmissions,

Transmissions
September 6 – September 28, 2019

The Roots of Meaning
World Premiere
September 6 – 28, 2019

Fei & Milton Wong Experimental Theatre
SFU Woodward’s, 149 West Hastings
Tuesday to Friday, 1pm to 7pm
Saturday and Sunday, 1pm to 5pm
FREE

In partnership with SFU Woodward’s Cultural Programs and produced by Electric Company Theatre and Violator Films.

TRANSMISSIONS is a three-part, 6000 square foot multimedia installation by award-winning Anishinaabe filmmaker and artist Lisa Jackson. It extends her investigation into the connections between land, language, and people, most recently with her virtual reality work Biidaaban: First Light.

Projections, sculpture, and film combine to create urban and natural landscapes that are eerie and beautiful, familiar and foreign, concrete and magical. Past and future collide in a visceral and thought-provoking journey that questions our current moment and opens up the complexity of thought systems embedded in Indigenous languages. Radically different from European languages, they embody sets of relationships to the land, to each other, and to time itself.

Transmissions invites us to untether from our day-to-day world and imagine a possible future. It provides a platform to activate and cross-pollinate knowledge systems, from science to storytelling, ecology to linguistics, art to commerce. To begin conversations, to listen deeply, to engage varied perspectives and expertise, to knit the world together and find our place within the circle of all our relations.

Produced in association with McMaster University Socrates Project, Moving Images Distribution and Cobalt Connects Creativity.

….

Admission:  Free Public Tours
Tuesday through Sunday
Reservations accepted from 1pm to 3pm.  Reservations are booked in 15 minute increments.  Individuals and groups up to 10 welcome.
Please email: sfuw@sfu.ca for more information or to book groups of 10 or more.

Her Story: Canadian Women Scientists (short film subjects); Sept. 13 – 14, 2019

Curiosity Collider, producer of art/science events in Vancouver, is presenting a film series featuring Canadian women scientists, according to an August 27, 2019 press release (received via email),

“Her Story: Canadian Women Scientists,” a film series dedicated to sharing the stories of Canadian women scientists, will premiere on September 13th and 14th at the Annex theatre. Four pairs of local filmmakers and Canadian women scientists collaborated to create 5-6 minute videos; for each film in the series, a scientist tells her own story, interwoven with the story of an inspiring Canadian woman scientist who came before her in her field of study.

Produced by Vancouver-based non-profit organization Curiosity Collider, this project was developed to address the lack of storytelling videos showcasing remarkable women scientists and their work available via popular online platforms. “Her Story reveals the lives of women working in science,” said Larissa Blokhuis, curator for Her Story. “This project acts as a beacon to girls and women who want to see themselves in the scientific community. The intergenerational nature of the project highlights the fact that women have always worked in and contributed to science.”

This sentiment was reflected by Samantha Baglot as well, a PhD student in neuroscience who collaborated with filmmaker/science cartoonist Armin Mortazavi in Her Story. “It is empowering to share stories of previous Canadian female scientists… it is empowering for myself as a current female scientist to learn about other stories of success, and gain perspective of how these women fought through various hardships and inequality.”

When asked why seeing better representation of women in scientific work is important, artist/filmmaker Michael Markowsky shared his thoughts. “It’s important for women — and their male allies — to question and push back against these perceived social norms, and to occupy space which rightfully belongs to them.” In fact, his wife just gave birth to their first child, a daughter; “It’s personally very important to me that she has strong female role models to look up to.” His film will feature collaborating scientist Jade Shiller, and Kathleen Conlan – who was named one of Canada’s greatest explorers by Canadian Geographic in 2015.

Other participating filmmakers and collaborating scientists include: Leslie Kennah (Filmmaker), Kimberly Girling (scientist, Research and Policy Director at Evidence for Democracy), Lucas Kavanagh and Jesse Lupini (Filmmakers, Avocado Video), and Jessica Pilarczyk (SFU Assistant Professor, Department of Earth Sciences).

This film series is supported by Westcoast Women in Engineering, Science and Technology (WWEST) and Eng.Cite. The venue for the events is provided by Vancouver Civic Theatres.

Event Information

Screening events will be hosted at Annex (823 Seymour St, Vancouver) on September 13th and 14th [2019]. Events will also include a talkback with filmmakers and collab scientists on the 13th, and a panel discussion on representations of women in science and culture on the 14th. Visit http://bit.ly/HerStoryTickets2019 for tickets ($14.99-19.99) and http://bit.ly/HerStoryWomenScientists for project information.

I have a film collage,

Courtesy: Curiosity Collider

It looks like they’re presenting films with a diversity of styles. You can find out more about Curiosity Collider and its various programmes and events here.

Vancouver Fringe Festival September 5 – 16, 2019

I found two plays in this year’s fringe festival programme that feature science in one way or another. Not having seen either play I make no guarantees as to content. First up is,

AI Love You
Exit Productions
London, UK
Playwright: Melanie Anne Ball
exitproductionsltd.com

Adam and April are a regular 20-something couple, very nearly blissfully generic, aside from one important detail: one of the pair is an “artificially intelligent companion.” Their joyful veneer has begun to crack and they need YOU to decide the future of their relationship. Is the freedom of a robot or the will of a human more important?
For AI Love You: 

***** “Magnificent, complex and beautifully addictive.” —Spy in the Stalls 
**** “Emotionally charged, deeply moving piece … I was left with goosebumps.” —West End Wilma 
**** —London City Nights 
Past shows: 
***** “The perfect show.” —Theatre Box

Intellectual / Intimate / Shocking / 14+ / 75 minutes

The first show is on Friday, September 6, 2019 at 5 pm. There are another five showings being presented. You can get tickets and more information here.

The second play is this,

Red Glimmer
Dusty Foot Productions
Vancouver, Canada
Written & Directed by Patricia Trinh

Abstract Sci-Fi dramedy. An interdimensional science experiment! Woman involuntarily takes an all inclusive internal trip after falling into a deep depression. A scientist is hired to navigate her neurological pathways from inside her mind – tackling the fact that humans cannot physically re-experience somatosensory sensation, like pain. What if that were the case for traumatic emotional pain? A creepy little girl is heard running by. What happens next?

Weird / Poetic / Intellectual / LGBTQ+ / Multicultural / 14+ / Sexual Content / 50 minutes

This show is created by an underrepresented Artist.
Written, directed, and produced by local theatre Artist Patricia Trinh, a Queer, Asian-Canadian female.

The first showing is tonight, September 5, 2019 at 8:30 pm. There are another six showings being presented. You can get tickets and more information here.

CHAOSMOSIS mAchInes exhibition/performance/discussion/panel/in-situ experiments/art/science/techne/philosophy, 28 September, 2019 in Toronto

An Art/Sci Salon September 2, 2019 announcement (received via email), Note: I have made some formatting changes,

CHAOSMOSIS mAchInes

28 September, 2019 
7pm-11pm.
Helen-Gardiner-Phelan Theatre, 2nd floor
University of Toronto. 79 St. George St.

A playful co-presentation by the Topological Media Lab (Concordia U-Montreal) and The Digital Dramaturgy Labsquared (U of T-Toronto). This event is part of our collaboration with DDLsquared lab, the Topological Lab and the Leonardo LASER network


7pm-9.30pm, Installation-performances, 
9.30pm-11pm, Reception and cash bar, Front and Long Room, Ground floor


Description:
From responsive sculptures to atmosphere-creating machines; from sensorial machines to affective autonomous robots, Chaosmosis mAchInes is an eclectic series of installations and performances reflecting on today’s complex symbiotic relations between humans, machines and the environment.


This will be the first encounter between Montreal-based Topological Media Lab (Concordia University) and the Toronto-based Digital Dramaturgy Labsquared (U of T) to co-present current process-based and experimental works. Both labs have a history of notorious playfulness, conceptual abysmal depth, human-machine interplays, Art&Science speculations (what if?), collaborative messes, and a knack for A/I as in Artistic Intelligence.


Thanks to  Nina Czegledy (Laser series, Leonardo network) for inspiring the event and for initiating the collaboration


Visit our Facebook event page 
Register through Eventbrite


Supported by


Main sponsor: Centre for Drama, Theatre and Performance Studies, U of T
Sponsors: Computational Arts Program (York U.), Cognitive Science Program (U of T), Knowledge Media Design Institute (U of T), Institute for the History and Philosophy of Science and Technology (IHPST), Fonds de Recherche du Québec – Société et culture (FRQSC), The Centre for Comparative Literature (U of T)
A collaboration between
Laser events, Leonardo networks – Science Artist, Nina Czegledy
ArtsSci Salon – Artistic Director, Roberta Buiani
Digital Dramaturgy Labsquared – Creative Research Director, Antje Budde
Topological Media Lab – Artistic-Research Co-directors, Michael Montanaro | Navid Navab


Project presentations will include:
Topological Media Lab
tangibleFlux φ plenumorphic ∴ chaosmosis
SPIEL
On Air
The Sound That Severs Now from Now
Cloud Chamber (2018) | Caustic Scenography, Responsive Cloud Formation
Liquid Light
Robots: Machine Menagerie
Phaze
Phase
Passing Light
Info projects
Digital Dramaturgy Labsquared
Btw Lf & Dth – interFACING disappearance
Info project

This is a very active September.

ETA September 4, 2019 at 1607 hours PDT: That last comment is even truer than I knew when I published earlier. I missed a Vancouver event: Maker Faire Vancouver will be hosted at Science World on Saturday, September 14, 2019. Here’s a little more about it from a Sept. 3, 2019 Science World blog posting,

Earlier last month [August 2019?], surgeons at St Paul’s Hospital performed an ankle replacement for a Cloverdale resident using a 3D printed bone. The first procedure of its kind in Western Canada, it saved the patient all of his ten toes — something doctors had originally decided to amputate due to the severity of the motorcycle accident.

Maker Faire Vancouver Co-producer, John Biehler, may not be using his 3D printer for medical breakthroughs, but he does see a subtle connection between his home 3D printer and the Health Canada-approved bone.

“I got into 3D printing to make fun stuff and gadgets,” John says of the box-sized machine that started as a hobby and turned into a side business. “But the fact that the very same technology can have life-changing and life-saving applications is amazing.”

When John showed up to Maker Faire Vancouver seven years ago, opportunities to access this hobby were limited. Armed with a 3D printer he had just finished assembling the night before, John was hoping to meet others in the community with similar interests to build, experiment and create. Much like the increase in accessibility to these portable machines has changed over the years—with universities, libraries and makerspaces making them readily available alongside CNC Machines, laser cutters and more — John says the excitement around crafting and tinkering has skyrocketed as well.

“The kind of technology that inspires people to print a bone or spinal insert all starts at ground zero in places like a Maker Faire where people get exposed to STEAM,” John says …

… From 3D printing enthusiasts like John to knitters, metal artists and roboticists, this full one-day event [Maker Faire Vancouver on Saturday, September 14, 2019] will facilitate cross-pollination between hobbyists, small businesses, artists and tinkerers. Described as part science fair, part county fair and part something entirely new, Maker Faire Vancouver hopes to facilitate discovery and what John calls “pure joy moments.”

Hopefully that’s it.

Biohybrid cyborgs

Cyborgs are usually thought of as people who’ve been enhanced with some sort of technology. In contemporary real life, that technology might be a pacemaker or a hip replacement, but in science fiction it’s technology such as artificial retinas (for example) that expand the range of visible light for an enhanced human.

Rarely does the topic of a microscopic life form come up in discussions about cyborgs and yet, that’s exactly what an April 3, 2019 Nanowerk spotlight article by Michael Berger describes in relation to its use in water remediation efforts (Note: links have been removed),

Researchers often use living systems as inspiration for the design and engineering of micro- and nanoscale propulsion systems, actuators, sensors, and robots. …

“Although microrobots have recently proved successful for remediating contaminated water at the laboratory scale, the major challenge in the field is to scale up these applications to actual environmental settings,” Professor Joseph Wang, Chair of Nanoengineering and Director, Center of Wearable Sensors at the University of California San Diego, tells Nanowerk. “In order to do this, we need to overcome the toxicity of their chemical fuels, the short time span of biocompatible magnesium-based micromotors and the small domain operation of externally actuated microrobots.”

In their recent work on self-propelled biohybrid microrobots, Wang and his team were inspired by recent developments of biohybrid cyborgs that integrate self-propelling bacteria with functionalized synthetic nanostructures to transport materials.

“These tiny cyborgs are incredibly efficient for transport materials, but the limitation that we observed is that they do not provide large-scale fluid mixing,” notes Wang. ” We wanted to combine the best properties of both worlds. So, we searched for the best candidate to create a more robust biohybrid for mixing and we decided on using rotifers (Brachionus) as the engine of the cyborg.”

These marine microorganisms, which measure between 100 and 300 micrometers, are amazing creatures as they already possess sensing ability, energetic autonomy, and provide large-scale fluid mixing capability. They are also very resilient, can survive in very harsh environments, and are even one of the few organisms that have survived via asexual reproduction.

“Taking inspiration from the science fiction concept of a cybernetic organism, or cyborg – where an organism has enhanced abilities due to the integration of some artificial component – we developed a self-propelled biohybrid microrobot, that we named rotibot, employing rotifers as their engine,” says Fernando Soto, first author of a paper on this work (Advanced Functional Materials, “Rotibot: Use of Rotifers as Self-Propelling Biohybrid Microcleaners”).

This is the first demonstration of a biohybrid cyborg used for the removal and degradation of pollutants from solution. The technical breakthrough that allowed the team to achieve this task is a novel fabrication mechanism based on the selective accumulation of functionalized microbeads in the microorganism’s mouth: the rotifer serves not only as a transport vessel for active material or cargo but also acts as a powerful biological pump, as it creates fluid flows directed towards its mouth.

Nanowerk has made this video demonstrating a rotifer available along with a description,

“The rotibot is a rotifer (a marine microorganism) that has plastic microbeads attached to the mouth, which are functionalized with pollutant-degrading enzymes. This video illustrates a free swimming rotibot mixing tracer particles in solution. “

Here’s a link to and a citation for the paper,

Rotibot: Use of Rotifers as Self‐Propelling Biohybrid Microcleaners by Fernando Soto, Miguel Angel Lopez‐Ramirez, Itthipon Jeerapan, Berta Esteban‐Fernandez de Avila, Rupesh Kumar Mishra, Xiaolong Lu, Ingrid Chai, Chuanrui Chen, Daniel Kupor. Advanced Functional Materials DOI: https://doi.org/10.1002/adfm.201900658 First published: 28 March 2019

This paper is behind a paywall.

Berger’s April 3, 2019 Nanowerk spotlight article includes some useful images if you are interested in figuring out how these rotibots function.

AI (artificial intelligence) artist got a show at a New York City art gallery

AI artists first hit my radar in August 2018 when Christie’s Auction House advertised an art auction of a ‘painting’ by an algorithm (artificial intelligence). There’s more in my August 31, 2018 posting but, briefly, a French art collective, Obvious, submitted a painting, “Portrait of Edmond de Belamy,” that was created by an artificial intelligence agent, to be sold for an estimated $7,000 – $10,000. They weren’t even close. According to Ian Bogost’s March 6, 2019 article for The Atlantic, the painting sold for $432,500 in October 2018.

It has also, Bogost notes in his article, occasioned an art show (Note: Links have been removed),

… part of “Faceless Portraits Transcending Time,” an exhibition of prints recently shown [February 13 – March 5, 2019] at the HG Contemporary gallery in Chelsea, the epicenter of New York’s contemporary-art world. All of them were created by a computer.

The catalog calls the show a “collaboration between an artificial intelligence named AICAN and its creator, Dr. Ahmed Elgammal,” a move meant to spotlight, and anthropomorphize, the machine-learning algorithm that did most of the work. According to HG Contemporary, it’s the first solo gallery exhibit devoted to an AI artist.

If they hadn’t found each other in the New York art scene, the players involved could have met on a Spike Jonze film set: a computer scientist commanding five-figure print sales from software that generates inkjet-printed images; a former hotel-chain financial analyst turned Chelsea techno-gallerist with apparent ties to fine-arts nobility; a venture capitalist with two doctoral degrees in biomedical informatics; and an art consultant who put the whole thing together, A-Team–style, after a chance encounter at a blockchain conference. Together, they hope to reinvent visual art, or at least to cash in on machine-learning hype along the way.

The show in New York City, “Faceless Portraits …,” exhibited work by an artificially intelligent artist-agent (I’m creating a new term to suit my purposes) that’s different from the one used by Obvious to create “Portrait of Edmond de Belamy.” As noted earlier, that painting sold for a lot of money (Note: Links have been removed),

Bystanders in and out of the art world were shocked. The print had never been shown in galleries or exhibitions before coming to market at auction, a channel usually reserved for established work. The winning bid was made anonymously by telephone, raising some eyebrows; art auctions can invite price manipulation. It was created by a computer program that generates new images based on patterns in a body of existing work, whose features the AI “learns.” What’s more, the artists who trained and generated the work, the French collective Obvious, hadn’t even written the algorithm or the training set. They just downloaded them, made some tweaks, and sent the results to market.

“We are the people who decided to do this,” the Obvious member Pierre Fautrel said in response to the criticism, “who decided to print it on canvas, sign it as a mathematical formula, put it in a gold frame.” A century after Marcel Duchamp made a urinal into art [emphasis mine] by putting it in a gallery, not much has changed, with or without computers. As Andy Warhol famously said, “Art is what you can get away with.”

A bit of a segue here: there is a controversy as to whether or not that ‘urinal art’, also known as The Fountain, should be attributed to Duchamp, as noted in my January 23, 2019 posting titled ‘Baroness Elsa von Freytag-Loringhoven, Marcel Duchamp, and the Fountain’.

Getting back to the main action, Bogost goes on to describe the technologies underlying the two different AI artist-agents (Note: Links have been removed),

… Using a computer is hardly enough anymore; today’s machines offer all kinds of ways to generate images that can be output, framed, displayed, and sold—from digital photography to artificial intelligence. Recently, the fashionable choice has become generative adversarial networks, or GANs, the technology that created Portrait of Edmond de Belamy. Like other machine-learning methods, GANs use a sample set—in this case, art, or at least images of it—to deduce patterns, and then they use that knowledge to create new pieces. A typical Renaissance portrait, for example, might be composed as a bust or three-quarter view of a subject. The computer may have no idea what a bust is, but if it sees enough of them, it might learn the pattern and try to replicate it in an image.

GANs use two neural nets (a way of processing information modeled after the human brain) to produce images: a “generator” and a “discerner.” The generator produces new outputs—images, in the case of visual art—and the discerner tests them against the training set to make sure they comply with whatever patterns the computer has gleaned from that data. The quality or usefulness of the results depends largely on having a well-trained system, which is difficult.
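
An aside from me; the code below is mine, not from Bogost’s article. Here is a minimal, self-contained sketch of a generative adversarial network in the sense described above: a generator proposes samples and a discriminator (the ‘discerner’) scores them against the training data. It is trained on a toy two-dimensional ‘dataset’ rather than images, and every layer size, learning rate and number in it is an illustrative choice with no connection to the actual Portrait of Edmond de Belamy pipeline.

```python
# Minimal GAN sketch on toy 2-D data (illustrative only).
import torch
import torch.nn as nn

torch.manual_seed(0)
latent_dim, data_dim, batch = 8, 2, 64

generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, data_dim),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 32), nn.ReLU(),
    nn.Linear(32, 1),              # raw score; the loss applies the sigmoid
)

loss_fn = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

def sample_real(n):
    # Toy "training set": points clustered around (2, -1).
    return torch.randn(n, data_dim) * 0.3 + torch.tensor([2.0, -1.0])

for step in range(2000):
    real = sample_real(batch)
    fake = generator(torch.randn(batch, latent_dim))

    # Discriminator ("discerner"): label real samples 1, generated samples 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(batch, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(batch, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: try to make the discriminator label its samples as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(batch, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

with torch.no_grad():
    print("generated mean:", generator(torch.randn(1000, latent_dim)).mean(dim=0))
```

After enough training, the generated points should drift toward the toy data’s cluster, which is the whole trick: the generator learns to produce samples the discriminator can no longer tell apart from the training set.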

That’s why folks in the know were upset by the Edmond de Belamy auction. The image was created by an algorithm the artists didn’t write, trained on an “Old Masters” image set they also didn’t create. The art world is no stranger to trend and bluster driving attention, but the brave new world of AI painting appeared to be just more found art, the machine-learning equivalent of a urinal on a plinth.

Ahmed Elgammal thinks AI art can be much more than that. A Rutgers University professor of computer science, Elgammal runs an art-and-artificial-intelligence lab, where he and his colleagues develop technologies that try to understand and generate new “art” (the scare quotes are Elgammal’s) with AI—not just credible copies of existing work, like GANs do. “That’s not art, that’s just repainting,” Elgammal says of GAN-made images. “It’s what a bad artist would do.”

Elgammal calls his approach a “creative adversarial network,” or CAN. It swaps a GAN’s discerner—the part that ensures similarity—for one that introduces novelty instead. The system amounts to a theory of how art evolves: through small alterations to a known style that produce a new one. That’s a convenient take, given that any machine-learning technique has to base its work on a specific training set.
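
One more aside from me: here is an equally hedged sketch of what ‘swapping the discerner for one that introduces novelty’ could look like in code. On top of the usual real-versus-generated score, a style classifier labels works by known styles, and the generator is rewarded when its output is judged to be art but cannot be pinned to any single style. This is only my illustration of the concept as described; it is not Elgammal’s published CAN training code, and the function, the uniform-style target and the weighting are all assumptions.

```python
# Sketch of a "novelty" generator loss in the spirit of a creative
# adversarial network (illustrative, not the published implementation).
import torch
import torch.nn.functional as F

def can_generator_loss(realness_logits, style_logits, ambiguity_weight=1.0):
    """Combine 'looks like art' with 'fits no known style'.

    realness_logits: (batch, 1) discriminator scores for generated images.
    style_logits:    (batch, n_styles) style-classifier scores for the same images.
    """
    # Standard GAN term: be judged as real art.
    adversarial = F.binary_cross_entropy_with_logits(
        realness_logits, torch.ones_like(realness_logits))

    # Novelty term: push the predicted style distribution toward uniform,
    # i.e. make the work maximally ambiguous with respect to known styles.
    n_styles = style_logits.shape[1]
    uniform = torch.full_like(style_logits, 1.0 / n_styles)
    log_probs = F.log_softmax(style_logits, dim=1)
    style_ambiguity = F.kl_div(log_probs, uniform, reduction="batchmean")

    return adversarial + ambiguity_weight * style_ambiguity

# Random stand-in logits for a batch of 4 generated images and 10 known styles.
loss = can_generator_loss(torch.randn(4, 1), torch.randn(4, 10))
print(loss.item())
```

The point of the swap is visible in the second term: a plain GAN generator is rewarded for matching the training distribution, while this one is also rewarded for landing between the styles it was trained on.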

The results are striking and strange, although calling them a new artistic style might be a stretch. They’re more like credible takes on visual abstraction. The images in the show, which were produced based on training sets of Renaissance portraits and skulls, are more figurative, and fairly disturbing. Their gallery placards name them dukes, earls, queens, and the like, although they depict no actual people—instead, human-like figures, their features smeared and contorted yet still legible as portraiture. Faceless Portrait of a Merchant, for example, depicts a torso that might also read as the front legs and rear haunches of a hound. Atop it, a fleshy orb comes across as a head. The whole scene is rippled by the machine-learning algorithm, in the way of so many computer-generated artworks.

Faceless Portrait of a Merchant, one of the AI portraits produced by Ahmed Elgammal and AICAN. (Artrendex Inc.) [downloaded from https://www.theatlantic.com/technology/archive/2019/03/ai-created-art-invades-chelsea-gallery-scene/584134/]

Bogost consults an expert on portraiture for a discussion about the particularities of portraiture and the shortcomings one might expect of an AI artist-agent (Note: A link has been removed),

“You can’t really pick a form of painting that’s more charged with cultural meaning than portraiture,” John Sharp, an art historian trained in 15th-century Italian painting and the director of the M.F.A. program in design and technology at Parsons School of Design, told me. The portrait isn’t just a style, it’s also a host for symbolism. “For example, men might be shown with an open book to show how they are in dialogue with that material; or a writing implement, to suggest authority; or a weapon, to evince power.” Take Portrait of a Youth Holding an Arrow, an early-16th-century Boltraffio portrait that helped train the AICAN database for the show. The painting depicts a young man, believed to be the Bolognese poet Girolamo Casio, holding an arrow at an angle in his fingers and across his chest. It doubles as both weapon and quill, a potent symbol of poetry and aristocracy alike. Along with the arrow, the laurels in Casio’s hair are emblems of Apollo, the god of both poetry and archery.

A neural net couldn’t infer anything about the particular symbolic trappings of the Renaissance or antiquity—unless it was taught to, and that wouldn’t happen just by showing it lots of portraits. For Sharp and other critics of computer-generated art, the result betrays an unforgivable ignorance about the supposed influence of the source material.

But for the purposes of the show, the appeal to the Renaissance might be mostly a foil, a way to yoke a hip, new technology to traditional painting in order to imbue it with the gravity of history: not only a Chelsea gallery show, but also an homage to the portraiture found at the Met. To reinforce a connection to the cradle of European art, some of the images are presented in elaborate frames, a decision the gallerist, Philippe Hoerle-Guggenheim (yes, that Guggenheim; he says the relation is “distant”) [the Guggenheim name is strongly associated with the visual arts by way of the two Guggenheim museums, one in New York City and the other in Bilbao, Spain], told me he insisted upon. Meanwhile, the technical method makes its way onto the gallery placards in an official-sounding way—“Creative Adversarial Network print.” But both sets of inspirations, machine-learning and Renaissance portraiture, get limited billing and zero explanation at the show. That was deliberate, Hoerle-Guggenheim said. He’s betting that the simple existence of a visually arresting AI painting will be enough to draw interest—and buyers. It would turn out to be a good bet.


This is a fascinating article and I have one last excerpt, which poses this question, is an AI artist-agent a collaborator or a medium? There ‘s also speculation about how AI artist-agents might impact the business of art (Note: Links have been removed),

… it’s odd to list AICAN as a collaborator—painters credit pigment as a medium, not as a partner. Even the most committed digital artists don’t present the tools of their own inventions that way; when they do, it’s only after years, or even decades, of ongoing use and refinement.

But Elgammal insists that the move is justified because the machine produces unexpected results. “A camera is a tool—a mechanical device—but it’s not creative,” he said. “Using a tool is an unfair term for AICAN. It’s the first time in history that a tool has had some kind of creativity, that it can surprise you.” Casey Reas, a digital artist who co-designed the popular visual-arts-oriented coding platform Processing, which he uses to create some of his fine art, isn’t convinced. “The artist should claim responsibility over the work rather than to cede that agency to the tool or the system they create,” he told me.

Elgammal’s financial interest in AICAN might explain his insistence on foregrounding its role. Unlike a specialized print-making technique or even the Processing coding environment, AICAN isn’t just a device that Elgammal created. It’s also a commercial enterprise.

Elgammal has already spun off a company, Artrendex, that provides “artificial-intelligence innovations for the art market.” One of them offers provenance authentication for artworks; another can suggest works a viewer or collector might appreciate based on an existing collection; another, a system for cataloging images by visual properties and not just by metadata, has been licensed by the Barnes Foundation to drive its collection-browsing website.

The company’s plans are more ambitious than recommendations and fancy online catalogs. When presenting on a panel about the uses of blockchain for managing art sales and provenance, Elgammal caught the attention of Jessica Davidson, an art consultant who advises artists and galleries in building collections and exhibits. Davidson had been looking for business-development partnerships, and she became intrigued by AICAN as a marketable product. “I was interested in how we can harness it in a compelling way,” she says.

The art market is just that: a market. Some of the most renowned names in art today, from Damien Hirst to Banksy, trade in the trade of art as much as—and perhaps even more than—in the production of images, objects, and aesthetics. No artist today can avoid entering that fray, Elgammal included. “Is he an artist?” Hoerle-Guggenheim asked himself of the computer scientist. “Now that he’s in this context, he must be.” But is that enough? In Sharp’s estimation, “Faceless Portraits Transcending Time” is a tech demo more than a deliberate oeuvre, even compared to the machine-learning-driven work of his design-and-technology M.F.A. students, who self-identify as artists first.

Judged as Banksy or Hirst might be, Elgammal’s most art-worthy work might be the Artrendex start-up itself, not the pigment-print portraits that its technology has output. Elgammal doesn’t treat his commercial venture like a secret, but he also doesn’t surface it as a beneficiary of his supposedly earnest solo gallery show. He’s argued that AI-made images constitute a kind of conceptual art, but conceptualists tend to privilege process over product or to make the process as visible as the product.

Hoerle-Guggenheim worked as a financial analyst[emphasis mine] for Hyatt before getting into the art business via some kind of consulting deal (he responded cryptically when I pressed him for details). …

If you have the time, I recommend reading Bogost’s March 6, 2019 article for The Atlantic in its entirety; these excerpts don’t do it justice.

Portraiture: what does it mean these days?

After reading the article I have a few questions. What exactly do Bogost and the arty types in the article mean by the word ‘portrait’? “Portrait of Edmond de Belamy” is an image of someone who doesn’t exist and never has, and the exhibit “Faceless Portraits Transcending Time” features images that don’t bear much or, in some cases, any resemblance to human beings. Maybe this is considered a dull question by people in the know, but I’m an outsider and I found the paradox of portraits of nonexistent people, or nonpeople, kind of interesting.

BTW, I double-checked my assumption about portraits and found this definition in the Portrait Wikipedia entry (Note: Links have been removed),

A portrait is a painting, photograph, sculpture, or other artistic representation of a person [emphasis mine], in which the face and its expression is predominant. The intent is to display the likeness, personality, and even the mood of the person. For this reason, in photography a portrait is generally not a snapshot, but a composed image of a person in a still position. A portrait often shows a person looking directly at the painter or photographer, in order to most successfully engage the subject with the viewer.

So, portraits that aren’t portraits give rise to some philosophical questions but Bogost either didn’t want to jump into that rabbit hole (segue into yet another topic) or, as I hinted earlier, may have assumed his audience had previous experience of those kinds of discussions.

Vancouver (Canada) and a ‘portraiture’ exhibit at the Rennie Museum

By one of life’s coincidences, Vancouver’s Rennie Museum had an exhibit (February 16 – June 15, 2019) that illuminates questions about art collecting and portraiture. From a February 7, 2019 Rennie Museum news release,

[downloaded from https://renniemuseum.org/press-release-spring-2019-collected-works/] Courtesy: Rennie Museum

February 7, 2019

Press Release | Spring 2019: Collected Works
By rennie museum

rennie museum is pleased to present Spring 2019: Collected Works, a group exhibition encompassing the mediums of photography, painting and film. A portraiture of the collecting spirit [emphasis mine], the works exhibited invite exploration of what collected objects, and both the considered and unintentional ways they are displayed, inform us. Featuring the works of four artists—Andrew Grassie, William E. Jones, Louise Lawler and Catherine Opie—the exhibition runs from February 16 to June 15, 2019.

Four exquisite paintings by Scottish painter Andrew Grassie detailing the home and private storage space of a major art collector provide a peek at how the passionately devoted integrates and accommodates the physical embodiments of such commitment into daily life. Grassie’s carefully constructed, hyper-realistic images also pose the question, “What happens to art once it’s sold?” In the transition from pristine gallery setting to idiosyncratic private space, how does the new context infuse our reading of the art and how does the art shift our perception of the individual?

Furthering the inquiry into the symbiotic exchange between possessor and possession, a selection of images by American photographer Louise Lawler depicting art installed in various private and public settings question how the bilateral relationship permeates our interpretation when the collector and the collected are no longer immediately connected. What does de-acquisitioning an object inform us and how does provenance affect our consideration of the art?

The question of legacy became an unexpected facet of 700 Nimes Road (2010-2011), American photographer Catherine Opie’s portrait of legendary actress Elizabeth Taylor. Opie did not directly photograph Taylor for any of the fifty images in the expansive portfolio. Instead, she focused on Taylor’s home and the objects within, inviting viewers to see—then see beyond—the façade of fame and consider how both treasures and trinkets act as vignettes to the stories of a life. Glamorous images of jewels and trophies juxtapose with mundane shots of a printer and the remote-control user manual. Groupings of major artworks on the wall are as illuminating of the home’s mistress as clusters of personal photos. Taylor passed away part way through Opie’s project. The subsequent photos include Taylor’s mementos heading off to auction, raising the question, “Once the collections that help to define someone are disbursed, will our image of that person lose focus?”

In a similar fashion, the twenty-two photographs in Villa Iolas (1982/2017), by American artist and filmmaker William E. Jones, depict the Athens home of iconic art dealer and collector Alexander Iolas. Taken in 1982 by Jones during his first travels abroad, the photographs of art, furniture and antiquities tell a story of privilege that contrast sharply with the images Jones captures on a return visit in 2016. Nearly three decades after Iolas’s 1989 death, his home sits in dilapidation, looted and vandalized. Iolas played an extraordinary role in the evolution of modern art, building the careers of Max Ernst, Yves Klein and Giorgio de Chirico. He gave Andy Warhol his first solo exhibition and was a key advisor to famed collectors John and Dominique de Menil. Yet in the years since his death, his intention of turning his home into a modern art museum as a gift to Greece, along with his reputation, crumbled into ruins. The photographs taken by Jones during his visits in two different eras are incorporated into the film Fall into Ruin (2017), along with shots of contemporary Athens and antiquities on display at the National Archaeological Museum.

“I ask a lot of questions about how portraiture functions… what is there to describe the person or time we live in or a certain set of politics…”
 – Catherine Opie, The Guardian, Feb 9, 2016

We tend to think of the act of collecting as a formal activity yet it can happen casually on a daily basis, often in trivial ways. While we readily acknowledge a collector consciously assembling with deliberate thought, we give lesser consideration to the arbitrary accumulations that each of us accrue. Be it master artworks, incidental baubles or random curios, the objects we acquire and surround ourselves with tell stories of who we are.

Andrew Grassie (Scotland, b. 1966) is a painter known for his small scale, hyper-realist works. He has been the subject of solo exhibitions at the Tate Britain; Talbot Rice Gallery, Edinburgh; institut supérieur des arts de Toulouse; and rennie museum, Vancouver, Canada. He lives and works in London, England.

William E. Jones (USA, b. 1962) is an artist, experimental film-essayist and writer. Jones’s work has been the subject of retrospectives at Tate Modern, London; Anthology Film Archives, New York; Austrian Film Museum, Vienna; and, Oberhausen Short Film Festival. He is a recipient of the John Simon Guggenheim Memorial Fellowship and the Creative Capital/Andy Warhol Foundation Arts Writers Grant. He lives and works in Los Angeles, USA.

Louise Lawler (USA, b. 1947) is a photographer and one of the foremost members of the Pictures Generation. Lawler was the subject of a major retrospective at the Museum of Modern Art, New York in 2017. She has held exhibitions at the Whitney Museum of American Art, New York; Stedelijk Museum, Amsterdam; National Museum of Art, Oslo; and Musée d’Art Moderne de La Ville de Paris. She lives and works in New York.

Catherine Opie (USA, b. 1961) is a photographer and educator. Her work has been exhibited at Wexner Center for the Arts, Ohio; Henie Onstad Art Center, Oslo; the Los Angeles County Museum of Art; Portland Art Museum; and the Guggenheim Museum, New York. She is the recipient of a United States Artist Fellowship, Julius Shulman’s Excellence in Photography Award, and the Smithsonian’s Archive of American Art Medal. She lives and works in Los Angeles.

rennie museum opened in October 2009 in historic Wing Sang, the oldest structure in Vancouver’s Chinatown, to feature dynamic exhibitions comprising only of art drawn from rennie collection. Showcasing works by emerging and established international artists, the exhibits, accompanied by supporting catalogues, are open free to the public through engaging guided tours. The museum’s commitment to providing access to arts and culture is also expressed through its education program, which offers free age-appropriate tours and customized workshops to children of all ages.

rennie collection is a globally recognized collection of contemporary art that focuses on works that tackle issues related to identity, social commentary and injustice, appropriation, and the nature of painting, photography, sculpture and film. Currently the collection includes works by over 370 emerging and established artists, with over fifty collected in depth. The Vancouver based collection engages actively with numerous museums globally through a robust, artist-centric, lending policy.

So despite the Wikipedia definition, it seems that portraits don’t always feature people. While Bogost didn’t jump into that particular rabbit hole, he did touch on the business side of art.

What about intellectual property?

Bogost doesn’t explicitly discuss this particular issue. It’s a big topic, so I’m touching on it only lightly. If an artist works with an AI, the question of who owns the artwork could prove thorny. Is the copyright owner the computer scientist or the artist or both? Or does the AI artist-agent itself own the copyright? That last question may not be all that farfetched. Sophia, a social humanoid robot, has occasioned thought about ‘personhood.’ (Note: The robots mentioned in this posting have artificial intelligence.) From the Sophia (robot) Wikipedia entry (Note: Links have been removed),

Sophia has been interviewed in the same manner as a human, striking up conversations with hosts. Some replies have been nonsensical, while others have impressed interviewers such as 60 Minutes’ Charlie Rose.[12] In a piece for CNBC, when the interviewer expressed concerns about robot behavior, Sophia joked that he had “been reading too much Elon Musk. And watching too many Hollywood movies”.[27] Musk tweeted that Sophia should watch The Godfather and asked “what’s the worst that could happen?”[28][29] Business Insider’s chief UK editor Jim Edwards interviewed Sophia, and while the answers were “not altogether terrible”, he predicted it was a step towards “conversational artificial intelligence”.[30] At the 2018 Consumer Electronics Show, a BBC News reporter described talking with Sophia as “a slightly awkward experience”.[31]

On October 11, 2017, Sophia was introduced to the United Nations with a brief conversation with the United Nations Deputy Secretary-General, Amina J. Mohammed.[32] On October 25, at the Future Investment Summit in Riyadh, the robot was granted Saudi Arabian citizenship [emphasis mine], becoming the first robot ever to have a nationality.[29][33] This attracted controversy as some commentators wondered if this implied that Sophia could vote or marry, or whether a deliberate system shutdown could be considered murder. Social media users used Sophia’s citizenship to criticize Saudi Arabia’s human rights record. In December 2017, Sophia’s creator David Hanson said in an interview that Sophia would use her citizenship to advocate for women’s rights in her new country of citizenship; Newsweek criticized that “What [Hanson] means, exactly, is unclear”.[34] On November 27, 2018, Sophia was given a visa by Azerbaijan while attending the Global Influencer Day Congress held in Baku. On December 15, 2018, Sophia was appointed a Belt and Road Innovative Technology Ambassador by China.[35]

As for an AI artist-agent’s intellectual property rights, I have a July 10, 2017 posting featuring that question in more detail. Whether you read that piece or not, it seems obvious that artists might hesitate to call an AI agent a partner rather than a medium of expression. After all, a partner (and/or the computer scientist who developed the programme) might expect to share in property rights and profits, but paint, marble, plastic, and other media used by artists don’t have those expectations.

Moving slightly off topic, in my July 10, 2017 posting I mentioned a competition (literary and performing arts rather than visual arts) called ‘Dartmouth College and its Neukom Institute Prizes in Computational Arts’. It was started in 2016 and, as of 2018, was still operational under this name: Creative Turing Tests. Assuming there’ll be contests for prizes in 2019, there are (from the contest site) three categories: [1] PoetiX, a competition in computer-generated sonnet writing; [2] Musical Style, composition algorithms in various styles and human-machine improvisation …; and [3] DigiLit, algorithms able to produce “human-level” short story writing that is indistinguishable from an “average” human effort. You can find the contest site here.

Human Brain Project: update

The European Union’s Human Brain Project was announced in January 2013. It, along with the Graphene Flagship, had won a multi-year competition for the extraordinary sum of one billion euros each, to be paid out over a 10-year period. (My January 28, 2013 posting gives the details available at the time.)

At a little more than half-way through the project period, Ed Yong, in his July 22, 2019 article for The Atlantic, offers an update (of sorts),

Ten years ago, a neuroscientist said that within a decade he could simulate a human brain. Spoiler: It didn’t happen.

On July 22, 2009, the neuroscientist Henry Markram walked onstage at the TEDGlobal conference in Oxford, England, and told the audience that he was going to simulate the human brain, in all its staggering complexity, in a computer. His goals were lofty: “It’s perhaps to understand perception, to understand reality, and perhaps to even also understand physical reality.” His timeline was ambitious: “We can do it within 10 years, and if we do succeed, we will send to TED, in 10 years, a hologram to talk to you.” …

It’s been exactly 10 years. He did not succeed.

One could argue that the nature of pioneers is to reach far and talk big, and that it’s churlish to single out any one failed prediction when science is so full of them. (Science writers joke that breakthrough medicines and technologies always seem five to 10 years away, on a rolling window.) But Markram’s claims are worth revisiting for two reasons. First, the stakes were huge: In 2013, the European Commission awarded his initiative—the Human Brain Project (HBP)—a staggering 1 billion euro grant (worth about $1.42 billion at the time). Second, the HBP’s efforts, and the intense backlash to them, exposed important divides in how neuroscientists think about the brain and how it should be studied.

Markram’s goal wasn’t to create a simplified version of the brain, but a gloriously complex facsimile, down to the constituent neurons, the electrical activity coursing along them, and even the genes turning on and off within them. From the outset, criticism of this approach was widespread; to many other neuroscientists, its bottom-up strategy seemed implausible to the point of absurdity. The brain’s intricacies—how neurons connect and cooperate, how memories form, how decisions are made—are more unknown than known, and couldn’t possibly be deciphered in enough detail within a mere decade. It is hard enough to map and model the 302 neurons of the roundworm C. elegans, let alone the 86 billion neurons within our skulls. “People thought it was unrealistic and not even reasonable as a goal,” says the neuroscientist Grace Lindsay, who is writing a book about modeling the brain.
And what was the point? The HBP wasn’t trying to address any particular research question, or test a specific hypothesis about how the brain works. The simulation seemed like an end in itself—an overengineered answer to a nonexistent question, a tool in search of a use. …

Markram seems undeterred. In a recent paper, he and his colleague Xue Fan firmly situated brain simulations within not just neuroscience as a field, but the entire arc of Western philosophy and human civilization. And in an email statement, he told me, “Political resistance (non-scientific) to the project has indeed slowed us down considerably, but it has by no means stopped us nor will it.” He noted the 140 people still working on the Blue Brain Project, a recent set of positive reviews from five external reviewers, and its “exponentially increasing” ability to “build biologically accurate models of larger and larger brain regions.”

No time frame, this time, but there’s no shortage of other people ready to make extravagant claims about the future of neuroscience. In 2014, I attended TED’s main Vancouver conference and watched the opening talk, from the MIT Media Lab founder Nicholas Negroponte. In his closing words, he claimed that in 30 years, “we are going to ingest information. …

I’m happy to see the update. As I recall, there was murmuring almost immediately about the Human Brain Project (HBP). I never got details but it seemed that people were quite actively unhappy about the disbursements. Of course, this kind of uproar is not unusual when great sums of money are involved and the Graphene Flagship also had its rocky moments.

As for Yong’s contribution, I’m glad he’s debunking some of the hype and glory associated with the current drive to colonize the human brain and other efforts (e.g. genetics) which they often claim are the ‘future of medicine’.

To be fair, Yong is focused on the brain simulation aspect of the HBP (and Markram’s efforts in the Blue Brain Project), but there are other HBP efforts as well, even if brain simulation seems to be the HBP’s main interest.

After reading the article, I looked up Henry Markram’s Wikipedia entry and found this,

In 2013, the European Union funded the Human Brain Project, led by Markram, to the tune of $1.3 billion. Markram claimed that the project would create a simulation of the entire human brain on a supercomputer within a decade, revolutionising the treatment of Alzheimer’s disease and other brain disorders. Less than two years into it, the project was recognised to be mismanaged and its claims overblown, and Markram was asked to step down.[7][8]

On 8 October 2015, the Blue Brain Project published the first digital reconstruction and simulation of the micro-circuitry of a neonatal rat somatosensory cortex.[9]

I also looked up the Human Brain Project and, talking about their other efforts, was reminded that they have a neuromorphic computing platform, SpiNNaker (mentioned here in a January 24, 2019 posting; scroll down about 50% of the way). For anyone unfamiliar with the term, neuromorphic computing/engineering is what scientists call the effort to replicate the human brain’s ability to synthesize and process information in computing processors.

In fact, there was some discussion in 2013 that the Human Brain Project and the Graphene Flagship would have some crossover projects, e.g., trying to make computers more closely resemble human brains in terms of energy use and processing power.

The Human Brain Project’s (HBP) Silicon Brains webpage notes this about their neuromorphic computing platform,

Neuromorphic computing implements aspects of biological neural networks as analogue or digital copies on electronic circuits. The goal of this approach is twofold: Offering a tool for neuroscience to understand the dynamic processes of learning and development in the brain and applying brain inspiration to generic cognitive computing. Key advantages of neuromorphic computing compared to traditional approaches are energy efficiency, execution speed, robustness against local failures and the ability to learn.

Neuromorphic Computing in the HBP

In the HBP the neuromorphic computing Subproject carries out two major activities: Constructing two large-scale, unique neuromorphic machines and prototyping the next generation neuromorphic chips.

The large-scale neuromorphic machines are based on two complementary principles. The many-core SpiNNaker machine located in Manchester [emphasis mine] (UK) connects 1 million ARM processors with a packet-based network optimized for the exchange of neural action potentials (spikes). The BrainScaleS physical model machine located in Heidelberg (Germany) implements analogue electronic models of 4 Million neurons and 1 Billion synapses on 20 silicon wafers. Both machines are integrated into the HBP collaboratory and offer full software support for their configuration, operation and data analysis.

The most prominent feature of the neuromorphic machines is their execution speed. The SpiNNaker system runs at real-time, BrainScaleS is implemented as an accelerated system and operates at 10,000 times real-time. Simulations on conventional supercomputers typically run factors of 1000 slower than biology and cannot access the vastly different timescales involved in learning and development ranging from milliseconds to years.

Recent research in neuroscience and computing has indicated that learning and development are a key aspect for neuroscience and real world applications of cognitive computing. HBP is the only project worldwide addressing this need with dedicated novel hardware architectures.
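
As an aside, those speed factors are easier to appreciate with a little arithmetic. Here’s a minimal back-of-the-envelope sketch (my own, in Python; it’s not anything from the HBP) using the figures quoted above: SpiNNaker at real time, BrainScaleS at 10,000 times real time, and a conventional supercomputer simulation at roughly 1,000 times slower than real time.

# Rough wall-clock time needed to cover one year of biological time,
# using the speed factors quoted in the HBP description above.
# (Illustrative arithmetic only -- real performance depends on model size.)

SECONDS_PER_YEAR = 365 * 24 * 3600

# speed factor = biological time simulated per unit of wall-clock time
platforms = {
    "SpiNNaker (real time)": 1.0,
    "BrainScaleS (accelerated)": 10_000.0,
    "Conventional supercomputer": 1.0 / 1_000.0,
}

for name, factor in platforms.items():
    wall_clock_s = SECONDS_PER_YEAR / factor
    print(f"{name}: {wall_clock_s / 3600:,.1f} hours "
          f"({wall_clock_s / (24 * 3600):,.1f} days) of wall-clock time")

Put that way, the HBP’s point about timescales becomes concrete: covering one year of biological time takes about a year on SpiNNaker, roughly a millennium on a conventional supercomputer simulation, and under an hour on the accelerated BrainScaleS hardware.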

I’ve highlighted Manchester because that’s a very important city where graphene is concerned. The UK’s National Graphene Institute is housed at the University of Manchester where graphene was first isolated in 2004 by two scientists, Andre Geim and Konstantin (Kostya) Novoselov. (For their effort, they were awarded the Nobel Prize for physics in 2010.)

Getting back to the HBP (and the Graphene Flagship for that matter), the funding should be drying up sometime around 2023 and I wonder if it will be possible to assess the impact.

Elder care robot being tested by Washington State University team

I imagine that at some point Washington State University’s (WSU) ‘elder care’ robot will be tested by senior citizens as opposed to the students described in a January 14, 2019 WSU news release (also on EurekAlert) by Will Ferguson,

A robot created by Washington State University scientists could help elderly people with dementia and other limitations live independently in their own homes.

The Robot Activity Support System, or RAS, uses sensors embedded in a WSU smart home to determine where its residents are, what they are doing and when they need assistance with daily activities.

It navigates through rooms and around obstacles to find people on its own, provides video instructions on how to do simple tasks and can even lead its owner to objects like their medication or a snack in the kitchen.

“RAS combines the convenience of a mobile robot with the activity detection technology of a WSU smart home to provide assistance in the moment, as the need for help is detected,” said Bryan Minor, a postdoctoral researcher in the WSU School of Electrical Engineering and Computer Science.

Minor works in the lab of Diane Cook, professor of electrical engineering and computer science and director of the WSU Center for Advanced Studies in Adaptive Systems.

For the last decade, Cook and Maureen Schmitter-Edgecombe, a WSU professor of psychology, have led CASAS researchers in the development of smart home technologies that could enable elderly adults with memory problems and other impairments to live independently.

Currently, an estimated 50 percent of adults over the age of 85 need assistance with everyday activities such as preparing meals and taking medication and the annual cost for this assistance in the US is nearly $2 trillion.

With the number of adults over 85 expected to triple by 2050, Cook and Schmitter-Edgecombe hope that technologies like RAS and the WSU smart home will alleviate some of the financial strain on the healthcare system by making it easier for older adults to live alone.

“Upwards of 90 percent of older adults prefer to age in place as opposed to moving into a nursing home,” Cook said. “We want to make it so that instead of bringing in a caregiver or sending these people to a nursing home, we can use technology to help them live independently on their own.”

RAS is the first robot CASAS researchers have tried to incorporate into their smart home environment. They recently published a study in the journal Cognitive Systems Research that demonstrates how RAS could make life easier for older adults struggling to live independently.

In the study, CASAS researchers recruited 26 undergraduate and graduate students [emphasis mine] to complete three activities in a smart home with RAS as an assistant.

The activities were getting ready to walk the dog, taking medication with food and water and watering household plants.

When the smart home sensors detected that a human had failed to initiate or was struggling with one of the tasks, RAS received a message to help.

The robot then used its mapping and navigation camera, sensors and software to find the person and offer assistance.

The person could then indicate through a tablet interface that they wanted to see a video of the next step in the activity they were performing or a video of the entire activity, or they could ask the robot to lead them to objects needed to complete the activity, such as the dog’s leash or a granola bar from the kitchen.
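
The workflow described here (smart-home sensors flag a stalled activity, the robot finds the person, and a tablet offers three kinds of help) boils down to a fairly simple event loop. Here’s a minimal Python sketch of how such a loop might look; every class and method name is a hypothetical stand-in of mine, not anything from the actual CASAS/RAS software.

# Hypothetical sketch of the assistance loop described in the WSU news release.
# None of these classes or methods come from the actual RAS software.

from dataclasses import dataclass

@dataclass
class ActivityAlert:
    person_location: str      # room reported by the smart-home sensors
    activity: str             # e.g. "taking medication with food and water"
    next_step_video: str      # clip showing just the next step
    full_activity_video: str  # clip showing the whole activity
    needed_object: str        # e.g. "dog leash" or "granola bar"

def assist(robot, alert: ActivityAlert) -> None:
    """Respond to one 'resident needs help' message from the smart home."""
    robot.navigate_to(alert.person_location)   # mapping + obstacle avoidance
    choice = robot.tablet.prompt(
        ["Show next step", "Show whole activity", "Lead me to the object"]
    )
    if choice == "Show next step":
        robot.tablet.play(alert.next_step_video)
    elif choice == "Show whole activity":
        robot.tablet.play(alert.full_activity_video)
    else:
        robot.lead_to(alert.needed_object)      # e.g. guide the person to the kitchen

The interesting engineering, of course, hides inside the navigation and the sensor-side activity recognition; the loop itself stays simple so that residents only ever see the three options described in the study.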

Afterwards the study participants were asked to rate the robot’s performance. Most of the participants rated RAS’ performance favorably and found the robot’s tablet interface to be easy to use. They also reported the next step video as being the most useful of the prompts.

“While we are still in an early stage of development, our initial results with RAS have been promising,” Minor said. “The next step in the research will be to test RAS’ performance with a group of older adults to get a better idea of what prompts, video reminders and other preferences they have regarding the robot.”

Here’s a link to and a citation for the paper,

Robot-enabled support of daily activities in smart home environment by Garrett Wilson, Christopher Pereyda, Nisha Raghunath, Gabriel de la Cruz, Shivam Goel, Sepehr Nesaei, Bryan Minor, Maureen Schmitter-Edgecombe, Matthew E. Taylor, Diane J. Cook. Cognitive Systems Research, Volume 54, May 2019, Pages 258-272. DOI: https://doi.org/10.1016/j.cogsys.2018.10.032

This paper is behind a paywall.

Other ‘caring’ robots

Dutch filmmaker Sander Burger directed a documentary about ‘caredroids’ for seniors titled ‘Alice Cares’ or ‘Ik ben Alice’ in Dutch. It screened at the 2015 Vancouver (Canada) International Film Festival and was featured in a January 22, 2015 article by Neil Young for the Hollywood Reporter,


The benign side of artificial intelligence enjoys a rare cinematic showcase in Sander Burger‘s Alice Cares (Ik ben Alice), a small-scale Dutch documentary that reinvents no wheels but proves as unassumingly delightful as its eponymous, diminutive “care-robot.” Touching lightly on social and technological themes that are increasingly relevant to nearly all industrialized societies, this quiet charmer bowed at Rotterdam ahead of its local release and deserves wider exposure via festivals and small-screen outlets.

… Developed by the US firm Hanson Robotics, “Alice”—which has the stature and face of a girl of eight, but an adult female’s voice—is primarily intended to provide company for lonely seniors.

Burger shows Alice “visiting” the apartments of three octogenarian Dutch ladies, the contraption overcoming their hosts’ initial wariness and quickly forming chatty bonds. This prototype “care-droid” represents the technology at a relatively early stage, with Alice unable to move anything apart from her head, eyes (which incorporate tiny cameras) and mouth. Her body is made much more obviously robotic in appearance than the face, to minimize the chances of her interlocutors mistaking her for an actual human. Such design-touches are discussed by Alice’s programmer in meetings with social-workers, which Burger and his editor Manuel Rombley intersperse between the domestic exchanges that provide the bulk of the running-time.

‘Alice’ was also featured in a July 18, 2015 article by Natalie Harrison for the Lancet (a general medical journal),

“I’m going to ask you some questions about your life. Do you live independently? Are you lonely?” If you close your eyes and start listening to the film Alice Cares, you would think you were overhearing a routine conversation between an older woman and a health-care worker. It’s only when the woman, Martha Remkes, ends the conversation with “I don’t feel like having a robot in my home, I prefer a human being” that you realise something is amiss. In the Dutch documentary Alice Cares, Alice Robokind, a prototype caredroid developed in a laboratory in Amsterdam, is sent to live with three women who require care and company, with rather surprising results.

Although the idea of health robots has been around for a couple of decades, research into the use of robots with older adults is a fairly new area. Alex Mihailidis, from the Intelligent Assistive Technology and Systems Lab [University of Toronto] in Toronto, ON, Canada, explains: “For carers, robots have been used as tools that can help to alleviate burden typically associated with providing continuous care”. He adds that “as robots become more viable and are able to perform common physical tasks, they can be very valuable in helping caregivers complete common tasks such as moving a person in and out of bed”. Although Japan and Korea are regarded as the world leaders in this research, the European Union and the USA are also making progress. At the Edinburgh Centre for Robotics, for example, researchers are working to develop more complex sensor and navigation technology for robots that work alongside people and on assisted living prosthetics technologies. This research is part of a collaboration between the University of Edinburgh and Heriot-Watt University that was awarded £6 million in funding as part of a wider £85 million investment into industrial technology in the UK Government’s Eight Great Technologies initiative. Robotics research is clearly flourishing and the global market for service and industrial robots is estimated to reach almost US$60 billion by 2020.

The idea for Alice Cares came to director Sander Burger after he read about a group of scientists at the VU University of Amsterdam in the Netherlands who were about to test a health-care robot on older people. “The first thing I felt was some resentment against the idea—I was curious why I was so offended by the whole idea and just called the scientists to see if I could come by to see what they were doing. …

… With software to generate and regulate Alice’s emotions, an artificial moral reasoner, a computational model of creativity, and full access to the internet, the investigators hoped to create a robotic care provider that was intelligent, sensitive, creative, and entertaining. “The robot was specially developed for social skills, in short, she was programmed to make the elderly women feel less lonely”, explains Burger.

Copyright © 2015 Alice Cares KeyDocs

Both the Young and Harrison articles are well worth reading, should you have the time. Also, there’s an Ik ben Alice website (it’s in Dutch only).

Meanwhile, Canadians can look at Humber River Hospital (HRH; Toronto, Ontario) for a glimpse at another humanoid ‘carebot’, from a July 25, 2018 HRH Foundation blog entry,

Earlier this year, a special new caregiver joined the Child Life team at the Humber River Hospital. Pepper, the humanoid robot, helps our Child Life Specialists decrease patient anxiety, increase their comfort and educate young patients and their families. Pepper embodies perfectly the intersection of compassion and advanced technology for which Humber River is renowned.

Humber River Hospital is committed to making the hospital experience a better one for our patients and their families from the moment they arrive and Pepper the robot helps us do that! Pepper is child-sized with large, expressive eyes and a sweet voice. It greets visitors, provides directions, plays games, does yoga and even dances. Using facial recognition to detect human emotions, it adapts its behaviour according to the mood of the person with whom it’s interacting. Pepper makes the Hospital an even more welcoming place for everyone it encounters.

Humber currently has two Peppers on staff: one is used exclusively by the Child Life Program to help young patients feel at ease and a second to greet patients and their families in the Hospital’s main entrance.

While Pepper robots are used around the world in such industries as retail and hospitality, Humber River is the first hospital in Canada to use Pepper in a healthcare setting. Using dedicated applications built specifically for the Hospital, Pepper’s interactive touch-screen display helps visitors find specific departments, washrooms, exits and more. In addition to answering questions and sharing information, Pepper entertains, plays games and is always available for a selfie.
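
As described, Pepper’s hospital duties amount to two behaviours: answering wayfinding questions on its touch screen and adapting its manner to the visitor’s detected mood. A toy sketch of that dispatch logic might look like the following; it’s entirely my own illustration and has nothing to do with SoftBank’s Pepper SDK or Humber River’s applications.

# Toy illustration of the two behaviours described above: wayfinding and
# mood-adapted greetings. Not based on the SoftBank SDK or the hospital's apps.

from typing import Optional

DIRECTIONS = {
    "emergency": "Turn left past the information desk.",
    "washroom": "Straight ahead, second door on the right.",
    "child life program": "Take the elevator to the second floor.",
}

GREETINGS = {
    "happy": "Hello! Great to see you. How can I help?",
    "neutral": "Welcome to the hospital. How can I help?",
    "anxious": "Hi there. Take your time -- I'm here to help you find your way.",
}

def respond(detected_mood: str, requested_place: Optional[str]) -> str:
    greeting = GREETINGS.get(detected_mood, GREETINGS["neutral"])
    if requested_place is None:
        return greeting
    directions = DIRECTIONS.get(
        requested_place.lower(), "Let me call a staff member to help you."
    )
    return f"{greeting} {directions}"

print(respond("anxious", "washroom"))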

I’m guessing that they had a ‘soft’ launch for Pepper because there’s an Oct. 25, 2018 HRH news release announcing Pepper’s deployment,

Pepper® can greet visitors, provide directions, play games, do yoga and even dance

Humber River Hospital has joined forces with SoftBank Robotics America (SBRA) to launch a new pilot program with Pepper the humanoid robot.  Beginning this week, Pepper will greet, help guide, engage and entertain patients and visitors who enter the hospital’s main entrance hall.

“While the healthcare sector has talked about this technology for some time now, we are ambitious and confident at Humber River Hospital to make the move and become the first hospital in Canada to pilot this technology,” states Barbara Collins, President and CEO, Humber River Hospital. 


Pepper by the numbers:
Stands 1.2 m (4ft) tall and weighs 29 kg (62lb)
Features three cameras – two HD cameras and one 3D depth sensor – to “see” and interact with people
20 engines in Pepper’s head, arms and back control its precise movements
A 10-inch chest-mounted touchscreen tablet that Pepper uses to convey information and encourage input

Finally, there’s a 2012 movie, Robot & Frank (mentioned here before in this Oct. 13, 2017 posting; scroll down to Robots and pop culture subsection) which provides an intriguing example of how ‘carebots’ might present unexpected ethical challenges. Hint: Frank is a senior citizen and former jewel thief who decides to pass on some skills.

Final thoughts

It’s fascinating to me that every time I’ve looked at articles about robots being used for tasks usually performed by humans, some expert or other sweetly notes that robots will be used to help humans with tasks that are ‘boring’ or ‘physical’, with the implication that humans will focus on more rewarding work. From Harrison’s Lancet article (in a previous excerpt),

… Alex Mihailidis, from the Intelligent Assistive Technology and Systems Lab in Toronto, ON, Canada, explains: “For carers, robots have been used as tools that can help to alleviate burden typically associated with providing continuous care”. He adds that “as robots become more viable and are able to perform common physical tasks, they can be very valuable in helping caregivers …

For all the emphasis on robots as taking over burdensome physical tasks, Burger’s documentary makes it clear that these early versions are being used primarily to provide companionship. Yes, HRH’s Pepper® is taking over some repetitive tasks, such as giving directions, but it’s also playing and providing companionship.

As for what it will mean ultimately, that’s something we, as a society, need to consider.

An artificial synapse tuned by light, a ferromagnetic memristor, and a transparent, flexible artificial synapse

Down the memristor rabbit hole one more time.* I started out with news about two new papers and inadvertently found two more. In a bid to keep this posting to a manageable size, I’m stopping at four.

UK

In a June 19, 2019 Nanowerk Spotlight article, Dr. Neil Kemp discusses memristors and some of his latest work (Note: A link has been removed),

Memristor (or memory resistors) devices are non-volatile electronic memory devices that were first theorized by Leon Chua in the 1970’s. However, it was some thirty years later that the first practical device was fabricated. This was in 2008 when a group led by Stanley Williams at HP Research Labs realized that switching of the resistance between a conducting and less conducting state in metal-oxide thin-film devices was showing Leon Chua’s memristor behaviour.

The high interest in memristor devices also stems from the fact that these devices emulate the memory and learning properties of biological synapses, i.e., the electrical resistance value of the device is dependent on the history of the current flowing through it.

There is a huge effort underway to use memristor devices in neuromorphic computing applications and it is now reasonable to imagine the development of a new generation of artificial intelligent devices with very low power consumption (non-volatile), ultra-fast performance and high-density integration.

These discoveries come at an important juncture in microelectronics, since there is increasing disparity between computational needs of Big Data, Artificial Intelligence (A.I.) and the Internet of Things (IoT), and the capabilities of existing computers. The increases in speed, efficiency and performance of computer technology cannot continue in the same manner as it has done since the 1960s.

To date, most memristor research has focussed on the electronic switching properties of the device. However, for many applications it is useful to have an additional handle (or degree of freedom) on the device to control its resistive state. For example memory and processing in the brain also involves numerous chemical and bio-chemical reactions that control the brain structure and its evolution through development.

To emulate this in a simple solid-state system composed of just switches alone is not possible. In our research, we are interested in using light to mediate this essential control.

We have demonstrated that light can be used to make short and long-term memory and we have shown how light can modulate a special type of learning, called spike timing dependent plasticity (STDP). STDP involves two neuronal spikes incident across a synapse at the same time. Depending on the relative timing of the spikes and their overlap across the synaptic cleft, the connection strength is either strengthened or weakened.

In our earlier work, we were only able to achieve small switching effects in memristors using light. In our latest work (Advanced Electronic Materials, “Percolation Threshold Enables Optical Resistive-Memory Switching and Light-Tuneable Synaptic Learning in Segregated Nanocomposites”), we take advantage of a percolating-like nanoparticle morphology to vastly increase the magnitude of the switching between electronic resistance states when light is incident on the device.

We have used an inhomogeneous percolating network consisting of metallic nanoparticles distributed in filamentary-like conduction paths. Electronic conduction and the resistance of the device is very sensitive to any disruption of the conduction path(s).

By embedding the nanoparticles in a polymer that can expand or contract with light the conduction pathways are broken or re-connected causing very large changes in the electrical resistance and memristance of the device.

Our devices could lead to the development of new memristor-based artificial intelligence systems that are adaptive and reconfigurable using a combination of optical and electronic signalling. Furthermore, they have the potential for the development of very fast optical cameras for artificial intelligence recognition systems.

Our work provides a nice proof-of-concept but the materials used means the optical switching is slow. The materials are also not well suited to industry fabrication. In our on-going work we are addressing these switching speed issues whilst also focussing on industry compatible materials.

Currently we are working on a new type of optical memristor device that should give us orders of magnitude improvement in the optical switching speeds whilst also retaining a large difference between the resistance on and off states. We hope to be able to achieve nanosecond switching speeds. The materials used are also compatible with industry standard methods of fabrication.

The new devices should also have applications in optical communications, interfacing and photonic computing. We are currently looking for commercial investors to help fund the research on these devices so that we can bring the device specifications to a level of commercial interest.

If you’re interested in memristors, Kemp’s article is well written and quite informative for nonexperts, assuming of course you can tolerate not understanding everything perfectly.
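
Since STDP keeps coming up, a quick illustration may help. In the standard textbook form of the rule (which is not necessarily the model used in Kemp’s devices), the weight change depends exponentially on the time difference between the pre- and post-synaptic spikes: the connection is strengthened when the pre-synaptic spike arrives first and weakened when the order is reversed. A minimal sketch, with illustrative parameter values:

import math

def stdp_weight_change(delta_t_ms: float,
                       a_plus: float = 0.01, a_minus: float = 0.012,
                       tau_plus: float = 20.0, tau_minus: float = 20.0) -> float:
    """Classic exponential STDP rule (illustrative parameter values only).

    delta_t_ms = t_post - t_pre. Positive: pre fires before post -> strengthen.
    Negative: post fires before pre -> weaken.
    """
    if delta_t_ms >= 0:
        return a_plus * math.exp(-delta_t_ms / tau_plus)
    return -a_minus * math.exp(delta_t_ms / tau_minus)

for dt in (-40, -10, 10, 40):
    print(f"dt = {dt:+d} ms -> weight change {stdp_weight_change(dt):+.4f}")

In the devices Kemp describes, the role of the weight is played by the memristor’s resistance state, and light provides the extra handle for modulating how strongly that state changes.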

Here are links and citations for two papers. The first is the latest referred to in the article, a May 2019 paper, and the second is a paper appearing in July 2019.

Percolation Threshold Enables Optical Resistive‐Memory Switching and Light‐Tuneable Synaptic Learning in Segregated Nanocomposites by Ayoub H. Jaafar, Mary O’Neill, Stephen M. Kelly, Emanuele Verrelli, Neil T. Kemp. Advanced Electronic Materials DOI: https://doi.org/10.1002/aelm.201900197 First published: 28 May 2019

Wavelength dependent light tunable resistive switching graphene oxide nonvolatile memory devices by Ayoub H. Jaafar, N. T. Kemp. Carbon, available online 3 July 2019. DOI: https://doi.org/10.1016/j.carbon.2019.07.007

The first paper (May 2019) is definitely behind a paywall and the second paper (July 2019) appears to be behind a paywall.

Dr. Kemp’s work has been featured here previously in a January 3, 2018 posting in the subsection titled, Shining a light on the memristor.

China

This work from China was announced in a June 20, 2019 news item on Nanowerk,

Memristors, demonstrated by solid-state devices with continuously tunable resistance, have emerged as a new paradigm for self-adaptive networks that require synapse-like functions. Spin-based memristors offer advantages over other types of memristors because of their significant endurance and high energy efficiency.

However, it remains a challenge to build dense and functional spintronic memristors with structures and materials that are compatible with existing ferromagnetic devices. Ta/CoFeB/MgO heterostructures are commonly used in interfacial PMA-based [perpendicular magnetic anisotropy] magnetic tunnel junctions, which exhibit large tunnel magnetoresistance and are implemented in commercial MRAM [magnetic random access memory] products.

“To achieve the memristive function, DW is driven back and forth in a continuous manner in the CoFeB layer by applying in-plane positive or negative current pulses along the Ta layer, utilizing SOT that the current exerts on the CoFeB magnetization,” said Shuai Zhang, a coauthor in the paper. “Slowly propagating domain wall generates a creep in the detection area of the device, which yields a broad range of intermediate resistive states in the AHE [anomalous Hall effect] measurements. Consequently, AHE resistance is modulated in an analog manner, being controlled by the pulsed current characteristics including amplitude, duration, and repetition number.”

“For a follow-up study, we are working on more neuromorphic operations, such as spike-timing-dependent plasticity and paired pulsed facilitation,” concludes You. …
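
Reading between the lines of that description, the analog behaviour comes from nudging a magnetic domain wall back and forth with current pulses and reading the result out as an anomalous Hall resistance. Here’s a deliberately crude toy model of that idea (my own caricature for illustration, not the authors’ physics): each pulse shifts a notional domain-wall position by an amount set by the pulse amplitude and duration, and the read-out resistance interpolates between two limiting values.

# Crude toy model of an analog, domain-wall-based memristive element.
# An illustration of the concept only, not the device physics in the paper.

class ToyDomainWallMemristor:
    def __init__(self, r_low: float = 100.0, r_high: float = 200.0):
        self.r_low, self.r_high = r_low, r_high
        self.position = 0.5            # normalized domain-wall position, 0 to 1

    def apply_pulse(self, amplitude: float, duration: float, repeats: int = 1) -> None:
        """Positive amplitude pushes the wall one way, negative the other."""
        mobility = 0.01                # arbitrary units per (amplitude * duration)
        for _ in range(repeats):
            self.position += mobility * amplitude * duration
            self.position = min(1.0, max(0.0, self.position))   # wall stays in the layer

    def read_resistance(self) -> float:
        """Read-out interpolates between the two limiting resistance states."""
        return self.r_low + self.position * (self.r_high - self.r_low)

m = ToyDomainWallMemristor()
for _ in range(5):
    m.apply_pulse(amplitude=2.0, duration=1.0)   # a train of identical pulses
    print(f"resistance after pulse: {m.read_resistance():.1f}")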

Here are links to and citations for the paper (Note: It’s a little confusing, but I believe that one of the links will take you to the online version; as for the ‘open access’ link, keep reading),

A Spin–Orbit‐Torque Memristive Device by Shuai Zhang, Shijiang Luo, Nuo Xu, Qiming Zou, Min Song, Jijun Yun, Qiang Luo, Zhe Guo, Ruofan Li, Weicheng Tian, Xin Li, Hengan Zhou, Huiming Chen, Yue Zhang, Xiaofei Yang, Wanjun Jiang, Ka Shen, Jeongmin Hong, Zhe Yuan, Li Xi, Ke Xia, Sayeef Salahuddin, Bernard Dieny, Long You. Advanced Electronic Materials Volume 5, Issue 4 April 2019 (print version) 1800782 DOI: https://doi.org/10.1002/aelm.201800782 First published [online]: 30 January 2019 Note: there is another DOI, https://doi.org/10.1002/aelm.201970022 where you can have open access to Memristors: A Spin–Orbit‐Torque Memristive Device (Adv. Electron. Mater. 4/2019)

The paper published online in January 2019 is behind a paywall and the paper (almost the same title) published in April 2019 has a new DOI and is open access. Final note: I tried accessing the ‘free’ paper and only got a file of the cover artwork featuring the work from China on the back cover of the April 2019 issue of Advanced Electronic Materials.

Korea

Usually when I see the words transparency and flexibility, I expect graphene to be one of the materials. That’s not the case for this paper (link to and citation for),

Transparent and flexible photonic artificial synapse with piezo-phototronic modulator: Versatile memory capability and higher order learning algorithm by Mohit Kumar, Joondong Kim, Ching-Ping Wong. Nano Energy Volume 63, September 2019, 103843 DOI: https://doi.org/10.1016/j.nanoen.2019.06.039 Available online 22 June 2019

Here’s the abstract for the paper where you’ll see that the material is made up of zinc oxide and silver nanowires,

An artificial photonic synapse having tunable manifold synaptic response can be an essential step forward for the advancement of novel neuromorphic computing. In this work, we reported the development of highly transparent and flexible two-terminal ZnO/Ag-nanowires/PET photonic artificial synapse [emphasis mine]. The device shows purely photo-triggered all essential synaptic functions such as transition from short- to long-term plasticity, paired-pulse facilitation, and spike-timing-dependent plasticity, including in the versatile memory capability. Importantly, strain-induced piezo-phototronic effect within ZnO provides an additional degree of regulation to modulate all of the synaptic functions in multi-levels. The observed effect is quantitatively explained as a dynamic of photo-induced electron-hole trapping/detrapping via the defect states such as oxygen vacancies. We revealed that the synaptic functions can be consolidated and converted by applied strain, which is not previously applied in any of the reported synaptic devices. This study will open a new avenue to the scientific community to control and design highly transparent wearable neuromorphic computing.

This paper is behind a paywall.

CARESSES your elders (robots for support)

Culturally sensitive robots for elder care! It’s about time. The European Union has funded the Culture Aware Robots and Environmental Sensor Systems for Elderly Support (CARESSES) project being coordinated in Italy. A December 13, 2018 news item on phys.org describes the project,

Researchers have developed revolutionary new robots that adapt to the culture and customs of the elderly people they assist.

Population ageing has implications for many sectors of society, one of which is the increased demand on a country’s health and social care resources. This burden could be greatly eased through advances in artificial intelligence. Robots have the potential to provide valuable assistance to caregivers in hospitals and care homes. They could also improve home care and help the elderly live more independently. But to do this, they will have to be able to respond to older people’s needs in a way that is more likely to be trusted and accepted.
The EU-funded project CARESSES has set out to build the first ever culturally competent robots to care for the elderly. The groundbreaking idea involved designing these robots to adapt their way of acting and speaking to match the culture and habits of the elderly person they’re assisting.

“The idea is that robots should be capable of adapting to human culture in a broad sense, defined by a person’s belonging to a particular ethnic group. At the same time, robots must be able to adapt to an individual’s personal preferences, so in that sense, it doesn’t matter if you’re Italian or Indian,” explained researcher Alessandro Saffiotti of project partner Örebro University, Sweden, …

A December 13, 2018 (?) CORDIS press release, which originated the news item, adds more detail about the robots and their anticipated relationship to their elderly patients,

Through its communication with an elderly person, the robot will fine-tune its knowledge by adapting it to that person’s cultural identity and individual characteristics. Using this knowledge, it will be able to remind the elderly person to take their prescribed medication, encourage them to eat healthily and be active, or help them stay in touch with family and friends. The robot will also be able to make suggestions about the appropriate clothing for specific occasions and remind people of upcoming religious and other celebrations. It doesn’t replace a care home worker. Nevertheless, it will play a vital role in helping to make elderly people’s lives less lonely and reducing the need to have a caregiver nearby at all times.

Scientists are testing the first CARESSES robots in care homes in the United Kingdom and Japan. They’re being used to assist elderly people from different cultural backgrounds. The aim is to see if people feel more comfortable with robots that interact with them in a culturally sensitive manner. They’re also examining whether such robots improve the elderly’s quality of life. “The testing of robots outside of the laboratory environment and in interaction with the elderly will without a doubt be the most interesting part of our project,” added Saffiotti.

The innovative CARESSES (Culture Aware Robots and Environmental Sensor Systems for Elderly Support) robots may pave the way to more culturally sensitive services beyond the sphere of elderly care, too. “It will add value to robots intended to interact with people. Which is not to say that today’s robots are completely culture-neutral. Instead, they unintentionally reflect the culture of the humans who build and program them.”
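
The design idea Saffiotti describes (broad cultural defaults refined by an individual’s own preferences) can be pictured as a layered profile where the personal layer always overrides the cultural one. Here’s a minimal sketch of that idea; the profiles and fields are invented for illustration and have nothing to do with the actual CARESSES software.

# Minimal illustration of "cultural defaults overridden by personal preferences".
# All profiles and fields below are invented; they are not from CARESSES.

CULTURAL_DEFAULTS = {
    "italian": {"greeting": "Buongiorno", "meal_reminder_time": "13:00"},
    "indian":  {"greeting": "Namaste",    "meal_reminder_time": "12:30"},
}

def build_profile(culture: str, personal_preferences: dict) -> dict:
    """Start from the cultural defaults, then let the individual override them."""
    profile = dict(CULTURAL_DEFAULTS.get(culture, {}))
    profile.update(personal_preferences)    # personal choices always win
    return profile

# An Italian resident who prefers a first-name greeting and a later lunch:
print(build_profile("italian", {"greeting": "Ciao, Maria", "meal_reminder_time": "14:00"}))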

Having had a mother who recently died in a care facility, I can testify to the importance of cultural and religious sensitivity on the part of caregivers. As for this type of robot not replacing anyone, I take that with a grain of salt. They always say that and I expect it’s true in the initial stages but once the robots are well established and working well? Why not? After all, they’re cheaper in many, many ways and with the coming tsunami of elders in many countries around the world, it seems to me that displacement by robots is an inevitability.

Turn yourself into a robot

Turning yourself into a robot is a little easier than I would have thought.

William Weir’s September 19, 2018 Yale University news release (also on EurekAlert) covers some of the same ground and fills in a few details,

When you think of robotics, you likely think of something rigid, heavy, and built for a specific purpose. New “Robotic Skins” technology developed by Yale researchers flips that notion on its head, allowing users to animate the inanimate and turn everyday objects into robots.

Developed in the lab of Rebecca Kramer-Bottiglio, assistant professor of mechanical engineering & materials science, robotic skins enable users to design their own robotic systems. Although the skins are designed with no specific task in mind, Kramer-Bottiglio said, they could be used for everything from search-and-rescue robots to wearable technologies. The results of the team’s work are published today in Science Robotics.

The skins are made from elastic sheets embedded with sensors and actuators developed in Kramer-Bottiglio’s lab. Placed on a deformable object — a stuffed animal or a foam tube, for instance — the skins animate these objects from their surfaces. The makeshift robots can perform different tasks depending on the properties of the soft objects and how the skins are applied.

“We can take the skins and wrap them around one object to perform a task — locomotion, for example — and then take them off and put them on a different object to perform a different task, such as grasping and moving an object,” she said. “We can then take those same skins off that object and put them on a shirt to make an active wearable device.”

Robots are typically built with a single purpose in mind. The robotic skins, however, allow users to create multi-functional robots on the fly. That means they can be used in settings that hadn’t even been considered when they were designed, said Kramer-Bottiglio.

Additionally, using more than one skin at a time allows for more complex movements. For instance, Kramer-Bottiglio said, you can layer the skins to get different types of motion. “Now we can get combined modes of actuation — for example, simultaneous compression and bending.”

To demonstrate the robotic skins in action, the researchers created a handful of prototypes. These include foam cylinders that move like an inchworm, a shirt-like wearable device designed to correct poor posture, and a device with a gripper that can grasp and move objects.

Kramer-Bottiglio said she came up with the idea for the devices a few years ago when NASA  [US National Aeronautics and Space Administration] put out a call for soft robotic systems. The technology was designed in partnership with NASA, and its multifunctional and reusable nature would allow astronauts to accomplish an array of tasks with the same reconfigurable material. The same skins used to make a robotic arm out of a piece of foam could be removed and applied to create a soft Mars rover that can roll over rough terrain. With the robotic skins on board, the Yale scientist said, anything from balloons to balls of crumpled paper could potentially be made into a robot with a purpose.

“One of the main things I considered was the importance of multifunctionality, especially for deep space exploration where the environment is unpredictable,” she said. “The question is: How do you prepare for the unknown unknowns?”

For the same line of research, Kramer-Bottiglio was recently awarded a $2 million grant from the National Science Foundation, as part of its Emerging Frontiers in Research and Innovation program.

Next, she said, the lab will work on streamlining the devices and explore the possibility of 3D printing the components.

Just in case the link to the paper becomes obsolete, here’s a citation for the paper,

OmniSkins: Robotic skins that turn inanimate objects into multifunctional robots by Joran W. Booth, Dylan Shah, Jennifer C. Case, Edward L. White, Michelle C. Yuen, Olivier Cyr-Choiniere, and Rebecca Kramer-Bottiglio. Science Robotics 19 Sep 2018: Vol. 3, Issue 22, eaat1853 DOI: 10.1126/scirobotics.aat1853

This paper is behind a paywall.