
Brain cell-like nanodevices

Given R. Stanley Williams’s presence on the author list, it’s a bit surprising that there’s no mention of memristors. If I read the signs rightly, interest is shifting, in some cases, from the memristor to a more comprehensive grouping of circuit elements referred to as ‘neuristors’ or, more likely, ‘nanocircuit elements’ in the effort to achieve brainlike (neuromorphic) computing. (Williams led the HP Labs team that offered proof, and more, of the memristor’s existence, which I mentioned here in an April 5, 2010 posting. There are many, many postings on this topic here; try ‘memristors’ or ‘brainlike computing’ as your search terms.)

A September 24, 2020 news item on ScienceDaily announces a recent development in the field of neuromorphic engineering,

In the September [2020] issue of the journal Nature, scientists from Texas A&M University, Hewlett Packard Labs and Stanford University have described a new nanodevice that acts almost identically to a brain cell. Furthermore, they have shown that these synthetic brain cells can be joined together to form intricate networks that can then solve problems in a brain-like manner.

“This is the first study where we have been able to emulate a neuron with just a single nanoscale device, which would otherwise need hundreds of transistors,” said Dr. R. Stanley Williams, senior author on the study and professor in the Department of Electrical and Computer Engineering. “We have also been able to successfully use networks of our artificial neurons to solve toy versions of a real-world problem that is computationally intense even for the most sophisticated digital technologies.”

In particular, the researchers have demonstrated proof of concept that their brain-inspired system can identify possible mutations in a virus, which is highly relevant for ensuring the efficacy of vaccines and medications for strains exhibiting genetic diversity.

A September 24, 2020 Texas A&M University news release (also on EurekAlert) by Vandana Suresh, which originated the news item, provides some context for the research,

Over the past decades, digital technologies have become smaller and faster largely because of the advancements in transistor technology. However, these critical circuit components are fast approaching their limit of how small they can be built, initiating a global effort to find a new type of technology that can supplement, if not replace, transistors.

In addition to this “scaling-down” problem, transistor-based digital technologies have other well-known challenges. For example, they struggle at finding optimal solutions when presented with large sets of data.

“Let’s take a familiar example of finding the shortest route from your office to your home. If you have to make a single stop, it’s a fairly easy problem to solve. But if for some reason you need to make 15 stops in between, you have 43 billion routes to choose from,” said Dr. Suhas Kumar, lead author on the study and researcher at Hewlett Packard Labs. “This is now an optimization problem, and current computers are rather inept at solving it.”
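
A quick aside from me: that 43 billion is the factorial explosion at the heart of the traveling salesman problem. Here’s a sketch, mine rather than the researchers’, which reproduces the figure if you treat one of the 15 stops as fixed and count a route and its exact reverse once,

```python
from math import factorial

# With one of the 15 stops fixed (say, the last errand before home),
# the other 14 can be visited in any order; a route and its exact
# reverse cover the same ground, so divide by two.
routes = factorial(14) // 2
print(f"{routes:,}")  # 43,589,145,600 -- roughly the quoted 43 billion
```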

Kumar added that another arduous task for digital machines is pattern recognition, such as identifying a face as the same regardless of viewpoint or recognizing a familiar voice buried within a din of sounds.

But tasks that can send digital machines into a computational tizzy are ones at which the brain excels. In fact, brains are not just quick at recognition and optimization problems, but they also consume far less energy than digital systems. Hence, by mimicking how the brain solves these types of tasks, Williams said brain-inspired or neuromorphic systems could potentially overcome some of the computational hurdles faced by current digital technologies.

To build the fundamental building block of the brain or a neuron, the researchers assembled a synthetic nanoscale device consisting of layers of different inorganic materials, each with a unique function. However, they said the real magic happens in the thin layer made of the compound niobium dioxide.

When a small voltage is applied to this region, its temperature begins to increase. But when the temperature reaches a critical value, niobium dioxide undergoes a quick change in personality, turning from an insulator to a conductor. But as it begins to conduct electric currents, its temperature drops and niobium dioxide switches back to being an insulator.

These back-and-forth transitions enable the synthetic devices to generate a pulse of electrical current that closely resembles the profile of electrical spikes, or action potentials, produced by biological neurons. Further, by changing the voltage across their synthetic neurons, the researchers reproduced a rich range of neuronal behaviors observed in the brain, such as sustained, burst and chaotic firing of electrical spikes.
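
Another aside: the heat-driven flip between insulating and conducting states described here is what circuit designers call a relaxation oscillation. Here’s a toy electro-thermal simulation of the idea; all the parameter values are my own illustrative inventions, not numbers from the paper,

```python
import numpy as np

# Toy model of an insulator-metal relaxation oscillator, loosely
# inspired by the niobium dioxide behaviour described above.
I_drive = 10e-6                   # constant drive current (A)
R_ins, R_met = 1e5, 1e3           # device resistance in each phase (ohm)
T_amb, T_up, T_dn = 300.0, 340.0, 320.0  # ambient / switch-up / switch-down (K)
C_th, G_th = 1e-12, 5e-8          # thermal capacitance (J/K), conductance (W/K)
dt, steps = 1e-7, 20000

T, metallic = T_amb, False
v_out = np.empty(steps)
for i in range(steps):
    R = R_met if metallic else R_ins
    P = I_drive**2 * R                      # Joule heating
    T += dt * (P - G_th * (T - T_amb)) / C_th
    if not metallic and T >= T_up:          # insulator -> metal on heating
        metallic = True
    elif metallic and T <= T_dn:            # metal -> insulator on cooling
        metallic = False
    v_out[i] = I_drive * R                  # voltage across the device

# v_out now holds a self-sustained spike train: the hysteresis between
# T_up and T_dn turns a steady drive into periodic firing.
```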

“Capturing the dynamical behavior of neurons is a key goal for brain-inspired computers,” said Kumar. “Altogether, we were able to recreate around 15 types of neuronal firing profiles, all using a single electrical component and at much lower energies compared to transistor-based circuits.”

To evaluate if their synthetic neurons [neuristor?] can solve real-world problems, the researchers first wired 24 such nanoscale devices together in a network inspired by the connections between the brain’s cortex and thalamus, a well-known neural pathway involved in pattern recognition. Next, they used this system to solve a toy version of the viral quasispecies reconstruction problem, where mutant variations of a virus are identified without a reference genome.

By means of data inputs, the researchers introduced the network to short gene fragments. Then, by programming the strength of connections between the artificial neurons within the network, they established basic rules about joining these genetic fragments. The jigsaw puzzle-like task for the network was to list mutations in the virus’ genome based on these short genetic segments.

The researchers found that within a few microseconds, their network of artificial neurons settled down in a state that was indicative of the genome for a mutant strain.
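
That ‘settling down’ is the hallmark of Hopfield-style networks, where programmed connection strengths define an energy landscape and the dynamics slide downhill to an answer. Here’s a generic sketch of that principle; it is not the authors’ 24-device thalamocortical circuit,

```python
import numpy as np

# Minimal Hopfield-style network: programmed weights encode constraints
# and the network relaxes into a low-energy state representing a solution.
rng = np.random.default_rng(0)
n = 24
target = rng.choice([-1, 1], size=n)          # the "answer" to be recovered
W = np.outer(target, target) - np.eye(n)      # Hebbian weights, no self-loops

state = target * rng.choice([1, -1], size=n, p=[0.8, 0.2])  # corrupted input
for _ in range(10):                           # asynchronous update sweeps
    for i in rng.permutation(n):
        state[i] = 1 if W[i] @ state >= 0 else -1

print(np.array_equal(state, target))          # True: settled on the answer
```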

Williams and Kumar noted this result is proof of principle that their neuromorphic systems can quickly perform tasks in an energy-efficient way.

The researchers said the next steps in their research will be to expand the repertoire of the problems that their brain-like networks can solve by incorporating other firing patterns and some hallmark properties of the human brain like learning and memory. They also plan to address hardware challenges for implementing their technology on a commercial scale.

“Calculating the national debt or solving some large-scale simulation is not the type of task the human brain is good at and that’s why we have digital computers. Alternatively, we can leverage our knowledge of neuronal connections for solving problems that the brain is exceptionally good at,” said Williams. “We have demonstrated that depending on the type of problem, there are different and more efficient ways of doing computations other than the conventional methods using digital computers with transistors.”

If you look at the news release on EurekAlert, you’ll see this informative image is titled: NeuristerSchematic [sic],

Caption: Networks of artificial neurons connected together can solve toy versions of the viral quasispecies reconstruction problem. Credit: Texas A&M University College of Engineering

(On the university website, the image is credited to Rachel Barton.) You can see one of the first mentions of a ‘neuristor’ here in an August 24, 2017 posting.

Here’s a link to and a citation for the paper,

Third-order nanocircuit elements for neuromorphic engineering by Suhas Kumar, R. Stanley Williams & Ziwen Wang. Nature volume 585, pages 518–523 (2020) DOI: https://doi.org/10.1038/s41586-020-2735-5 Published: 23 September 2020 Issue Date: 24 September 2020

This paper is behind a paywall.

A robot that sucks up oil spills

I was surprised to find out that between 1989, when the Exxon Valdez oil spill fouled the coastline of Alaska and northern British Columbia, and 2010, when the BP (British Petroleum) oil spill fouled the Gulf of Mexico along with the coastlines of the bordering US states and Mexico, there had been virtually no improvement in the environmental remediation technologies for oil spills (see my June 4, 2010 posting).

This summer we’ve had two major oil spills: one in the Russian Arctic (as noted in my August 14, 2020 posting; scroll down to the subhead ‘As for the Russian Arctic oil spill‘) and one in the Indian Ocean near Mauritius, close to a coral reef and marine protected areas (see this August 13, 2020 news item on the Canadian Broadcasting Corporation [CBC] news online website).

No word yet on whether or not remediation techniques have improved but this August 6, 2020 article by Adele Peters for Fast Company highlights a new robotic approach to cleaning marine oil spills,

A decade after a BP drilling rig exploded in the Gulf of Mexico, sending an estimated 168 million gallons of oil gushing into the water over the course of months, local wildlife are still struggling to recover. Many of the people who worked to clean up the spill are still experiencing health effects. At the time, the “cleanup” strategy involved setting oil slicks on fire and spraying mass quantities of a chemical meant to disperse it, both of which helped get rid of the oil, but also worsened pollution [emphasis mine].

A new robot designed to clean oil spills, now in development, demonstrates how future spills could be handled differently. The robot navigates autonomously on the ocean surface, running on solar power. When oil sensors on the device detect a spill, it triggers a pump that pushes oil and water inside, where a custom nanomaterial sucks up the oil and releases clean water.

Kabra [Tejas Sanjay Kabra, a graduate student at North Carolina State University] 3D-printed a small prototype of the robot, which he tested in a lab, a swimming pool, and then the open ocean. (The small version, about two feet across, can collect 20 gallons of oil at a time; the same device can be scaled up to much larger sizes). He now hopes to bring the product to market as quickly as possible, as major oil spills continue to occur—such as the spill in Russia in June that sent more than 20,000 metric tons of diesel into a pristine part of the Arctic.

Peters’s article provides more details and features an embedded video.
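
For the software-minded, the sense-and-pump behaviour described in the article suggests a simple control loop. This is purely my own hypothetical sketch; the sensor and pump functions, and the threshold, are invented stand-ins for whatever drivers the real robot uses,

```python
import time

# Hypothetical control loop for an autonomous oil-skimming robot.
OIL_THRESHOLD = 0.3   # normalized sensor reading treated as "slick detected"

def control_loop(read_oil_sensor, set_pump, poll_seconds=1.0):
    while True:
        if read_oil_sensor() > OIL_THRESHOLD:
            set_pump(True)     # draw the oil-water mix over the sponge
        else:
            set_pump(False)    # idle to conserve the solar-charged battery
        time.sleep(poll_seconds)
```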

Kabra calls his technology SoilioS (Spilled OIL recovery by Isis & Oleophilic Sponge) and he entered it in the 2020 James Dyson Awards. The undated James Dyson Award news release announcing the 2020 national winners does not include Kabra’s entry. Mind you, over 1700 inventors entered the 2020 competition.

I hope Kabra perseveres as his robot project looks quite interesting for a number of reasons as can be seen in his entry submission (from the James Dyson Award website),

Initially, I started with a literature review on various Nanomaterials made from tree leaves with specific properties of Hydrophobicity and oleophilicity. Then I narrowed down my research to four different types of leaves, i.e., holy basil, betel, subabul, and mango. Nanoparticles from these leaves were made by the green synthesis method and SEM, EDX and XRD tests were conducted. From these tests, I found that the efficiency of the material made from the subabul tree was the highest (82.5%). In order to carry out surface cleaning at sea, different robot designs were studied. Initially, the robot was built in a box structure with arms. The arms contained Nano-capillaries; however, the prototype was bulky and inefficient. A new model was devised to reduce the weight as well as increase the efficiency of absorbing the oil spill. The new robot was designed to be in a meta-stable state. The curves of the robot are designed in such a way that they give stability as well as hold all the components. The top part of the robot is a hollow dome to improve stability in water. The robot is 3D printed to reduce weight. The 3D printed robot was tested in a pool. Further, work is ongoing to build a 222 feet robot to test with hardware suitable for sea.

Here’s what SoilioS looks like,

[downloaded from https://www.jamesdysonaward.org/en-US/2020/project/soilios/]

Kabra described what makes his technology different from the current state of the art and outlined his future plans (from the James Dyson Award website),

The current technology uses carbon Nano-particles, and some others use plastic PVC with a chemical adhesive, which is harmful to the environment. On the other hand, SoilioS uses Nano-material made from tree leaves. The invented technology absorbs the oil and stores it inside the container with a recovery rate of 80%. The recovered oil can be used for further applications; the current products, on the other hand, burn the oil [emphasis mine] at the cleaning site itself without any recovery, thereby increasing pollution. The durability of the invented technology is 8-10 years, and the Nanomaterial used for cleaning the oil spill is reusable for 180 cycles. On the other hand, the durability of the current technology is up to 3-5 years, and the material used is non-reusable. The cost of the invented product is only $5 while the existing technology costs up to $750.

I aim to develop, manufacture, and practically test the robot prototype in the sea so that it can be used to solve oil spill issues and can save billions of dollars. I hope this device will help the environment in a lot of ways and eventually decrease the side effects caused by oil spills such as leukemia and dying marine life. Currently, I am testing the product on different grades of oil to improve its efficiency further and improving its scope of application so that it can also be used in industries and for household purposes.

I wish Kabra good luck as he works to bring his technology to market.

Toronto’s ArtSci Salon and its Kaleidoscopic Imaginations on Oct 27, 2020 – 7:30 pm (EDT)

The ArtSci Salon is getting quite active these days. Here’s the latest from an Oct. 22, 2020 ArtSci Salon announcement (received via email), which can also be viewed on their Kaleidoscope event page,

Kaleidoscopic Imaginations

Performing togetherness in empty spaces

An experimental  collaboration between the ArtSci Salon, the Digital Dramaturgy Lab_squared/ DDL2 and Sensorium: Centre for Digital Arts and Technology, York University (Toronto, Ontario, Canada)

Tuesday, October 27, 2020

7:30 pm [EDT]

Join our evening of live-streamed, multi-media  performances, following a kaleidoscopic dramaturgy of complexity discourses as inspired by computational complexity theory gatherings.

We are presenting installations, site-specific artistic interventions and media experiments, featuring networked audio and video, dance and performances as we repopulate spaces – The Fields Institute and surroundings – forced to lie empty due to the pandemic. Respecting physical distance and new sanitation and safety rules can be challenging, but it can also open up new ideas and opportunities.

NOTE: DDL2 contributions to this event are sourced or inspired by their recent kaleidoscopic performance “Rattling the Curve – Paradoxical ECODATA performances of A/I (artistic intelligence), and facial recognition of humans and trees”

Virtual space/live streaming concept and design: DDL2  Antje Budde, Karyn McCallum and Don Sinclair

Virtual space and streaming pilot: Don Sinclair

Here are specific programme details (from the announcement),

  1. Signing the Virus – Video (2 min.)
    Collaborators: DDL2 Antje Budde, Felipe Cervera, Grace Whiskin
  2. Niimi II – Performance and outdoor video projection (15 min.)
    (Nimii means in Anishinaabemowin: s/he dances); Collaborators: DDL2 Candy Blair, Antje Budde, Jill Carter, Lars Crosby, Nina Czegledy, Dave Kemp
  3. Oracle Jane (Scene 2) – A partial playreading on the politics of AI (30 min.)
    Playwright: DDL2 Oracle; Collaborators: DDL2 Antje Budde, Frans Robinow, George Bwanika Seremba, Amy Wong and AI ethics consultant Vicki Zhang
  4. Vriksha/Tree – Dance video and outdoor projection (8 min.)
    Collaborators: DDL2 Antje Budde, Lars Crosby, Astad Deboo, Dave Kemp, Amit Kumar
  5. Facial Recognition – Performing a Plate Camera from a Distance (3 min.)
    Collaborators: DDL2 Antje Budde, Jill Carter, Felipe Cervera, Nina Czegledy, Karyn McCallum, Lars Crosby, Martin Kulinna, Montgomery C. Martin, George Bwanika Seremba, Don Sinclair, Heike Sommer
  6. Cutting Edge – Growing Data (6 min.)
    DDL2 A performance by Antje Budde
  7. “void * ambience” – Architectural and instrumental acoustics, projection mapping
    Concept: Sensorium: The Centre for Digital Art and Technology, York University; Collaborators: Michael Palumbo, Ilze Briede [Kavi], Debashis Sinha, Joel Ong

This performance is part of a series (from the announcement),

These three performances are part of Boundary-Crossings: Multiscalar Entanglements in Art, Science and Society, a public Outreach program supported by the Fiends [sic] Institute for Research in Mathematical Science. Boundary Crossings is a series exploring how the notion of boundaries can be transcended and dissolved in the arts and the humanities, the biological and the mathematical sciences, as well as human geography and political economy. Boundaries are used to establish delimitations among disciplines; to discriminate between the human and the non-human (body and technologies, body and bacteria); and to indicate physical and/or artificial boundaries, separating geographical areas and nation states. Our goal is to cross these boundaries by proposing new narratives to show how the distinctions, and the barriers that science, technology, society and the state have created can in fact be re-interpreted as porous and woven together.

This event is curated and produced by ArtSci Salon; Digital Dramaturgy Lab_squared/ DDL2; Sensorium: Centre for Digital Arts and Technology, York University; and Ryerson University; it is supported by The Fields Institute for Research in Mathematical Sciences

Streaming Link 

Finally, the announcement includes biographical information about all of the ‘boundary-crossers’,

Candy Blair (Tkaron:to/Toronto)
Candy Blair/Otsίkh:èta (they/them) is a mixed First Nations/European, 2-spirit interdisciplinary visual and performing artist from Tio’tía:ke – where the group split (“Montreal”) – in Québec.

While continuing their work as an artist they also finished their Creative Arts, Literature, and Languages program at Marianopolis College (cégep), their 1st year in the Theatre program at York University, and their 3rd year Acting Conservatory Program at the Centre For Indigenous Theatre in Tsí Tkaròn:to – Where the trees stand in water (“Toronto”).

Some of Candy’s notable performances are Jill Carter’s Encounters at the Edge of the Woods, exploring a range of issues with colonization; Ange Loft’s project Talking Treaties, discussing the treaties of the “Toronto” purchase; Cheri Maracle’s The Story of Six Nations, exploring Six Nations’ origin story through dance/combat choreography; and several other performances, exploring various topics around Indigenous language, land, and cultural restoration through various mediums such as dance, modelling, painting, theatre, directing, song, etc. As an activist and soon-to-be entrepreneur, Candy also enjoys teaching workshops around promoting Indigenous resurgence such as Indigenous hand drumming, food sovereignty, beading, medicine knowledge, etc.

Working with their collectives like Weave and Mend, they were responsible for the design, land purification, and installation process of the four medicine plots and a community space with their 3 other members. Candy aspires to continue exploring ways of decolonization through healthy traditional practices from their mixed background and the arts in the hopes of eventually supporting Indigenous relations worldwide.

Antje Budde
Antje Budde is a conceptual, queer-feminist, interdisciplinary experimental scholar-artist and an Associate Professor of Theatre Studies, Cultural Communication and Modern Chinese Studies at the Centre for Drama, Theatre and Performance Studies, University of Toronto. Antje has created multi-disciplinary artistic works in Germany, China and Canada and works tri-lingually in German, English and Mandarin. She is the founder of a number of queerly feminist performing art projects including most recently the (DDL)2 or (Digital Dramaturgy Lab)Squared – a platform for experimental explorations of digital culture, creative labor, integration of arts and science, and technology in performance. She is interested in the intersections of natural sciences, the arts, engineering and computer science.

Roberta Buiani
Roberta Buiani (MA; PhD York University) is the Artistic Director of the ArtSci Salon at the Fields Institute for Research in Mathematical Sciences (Toronto). Her artistic work has travelled to art festivals (Transmediale; Hemispheric Institute Encuentro, Brazil), community centres and galleries (the Free Gallery Toronto; Immigrant Movement International, Queens; Myseum of Toronto), and science institutions (RPI; the Fields Institute). Her writing has appeared in Space and Culture, Cultural Studies and The Canadian Journal of Communication, among others. With the ArtSci Salon she has launched a series of experiments in “squatting academia”, by re-populating abandoned spaces and cabinets across university campuses with SciArt installations.

Currently, she is a research associate at the Centre for Feminist Research and a Scholar in Residence at Sensorium: Centre for Digital Arts and Technology at York University [Toronto, Ontario, Canada].

Jill Carter (Tkaron:to/ Toronto)
Jill (Anishinaabe/Ashkenazi) is a theatre practitioner and researcher, currently cross-appointed to the Centre for Drama, Theatre and Performance Studies; the Transitional Year Programme; and Indigenous Studies at the University of Toronto. She works with many members of Tkaron:to’s Indigenous theatre community to support the development of new works and to disseminate artistic objectives, process, and outcomes through community-driven research projects. Her scholarly research, creative projects, and activism are built upon ongoing relationships with Indigenous Elders, Artists and Activists, positioning her as witness to, participant in, and disseminator of oral histories that speak to the application of Indigenous aesthetic principles and traditional knowledge systems to contemporary performance. The research questions she pursues revolve around the mechanics of story creation, the processes of delivery and the manufacture of affect.

More recently, she has concentrated upon Indigenous pedagogical models for the rehearsal studio and the lecture hall; the application of Indigenous [insurgent] research methods within performance studies; the politics of land acknowledgements; and land-based dramaturgies/activations/interventions.

Jill also works as a researcher and tour guide with First Story Toronto; facilitates Land Acknowledgement, Devising, and Land-based Dramaturgy Workshops for theatre makers in this city; and performs with the Talking Treaties Collective (Jumblies Theatre, Toronto).

In September 2019, Jill directed Encounters at the Edge of the Woods. This was a devised show, featuring Indigenous and Settler voices, and it opened Hart House Theatre’s 100th season; it is the first instance of Indigenous presence on Hart House Theatre’s stage in its 100 years of existence as the cradle for Canadian theatre.

Nina Czegledy
(Toronto) artist, curator, educator; works internationally on collaborative art, science & technology projects. The changing perception of the human body and its environment, as well as paradigm shifts in the arts, inform her projects. She has exhibited and published widely, won awards for her artwork, and has initiated, led and participated in workshops, forums and festivals worldwide.

Astad Deboo (Mumbai, India)
Astad Deboo is a contemporary dancer and choreographer who employs his training in the Indian classical dance forms of Kathak and Kathakali to create a dance form that is unique to him. He has become a pioneer of modern dance in India. Astad describes his style as “contemporary in vocabulary and traditional in restraints.” Throughout his long and illustrious career, he has worked with various prominent performers such as Pina Bausch, Alison Becker Chase and Pink Floyd and performed in many parts of the world. He has been awarded the Sangeet Natak Akademi Award (1996) and the Padma Shri (2007), awarded by the Government of India. In January 2005, along with 12 young women with hearing impairment supported by the Astad Deboo Dance Foundation, he performed at the 20th Annual Deaf Olympics in Melbourne, Australia. Astad has a long record of working with disadvantaged youth.

Ilze Briede [Kavi]
Ilze Briede [artist name: Kavi] is a Latvian/Canadian artist and researcher with broad and diverse interests. Her artistic practice, a hybrid of video, image and object making, investigates the phenomenon of perception and the constraints and boundaries between the senses and knowing. Kavi is currently pursuing a PhD degree in Digital Media at York University with a research focus on computational creativity and generative art. She sees computer-generated systems and algorithms as a potentiality for co-creation and collaboration between human and machine. Kavi has previously worked and exhibited with Fashion Art Toronto, Kensington Market Art Fair, Toronto Burlesque Festival, Nuit Blanche, Sidewalk Toronto and the Toronto Symphony Orchestra.

Dave Kemp
Dave Kemp is a visual artist whose practice looks at the intersections and interactions between art, science and technology: particularly at how these fields shape our perception and understanding of the world. His artworks have been exhibited widely at venues such as the McIntosh Gallery, the Agnes Etherington Art Centre, the Art Gallery of Mississauga, the Ontario Science Centre, York Quay Gallery, InterAccess, Modern Fuel Artist-Run Centre, and as part of the Switch video festival in Nenagh, Ireland. His works are also included in the permanent collections of the Agnes Etherington Art Centre and the Canada Council Art Bank.

Stephen Morris
Stephen Morris is Professor of experimental non-linear physics in the Department of Physics at the University of Toronto. He is the scientific director of the ArtSci Salon at the Fields Institute for Research in Mathematical Sciences. He often collaborates with artists and has himself performed and produced art involving his own scientific instruments and experiments in non-linear physics and pattern formation.

Michael Palumbo
Michael Palumbo (MA, BFA) is an electroacoustic music improviser, coder, and researcher. His PhD research spans distributed creativity and version control systems, and is expressed through “git show”, a distributed electroacoustic music composition and design experiment, and “Mischmasch”, a collaborative modular synthesizer in virtual reality. He studies with Dr. Doug Van Nort as a researcher in the Distributed Performance and Sensorial Immersion Lab, and Dr. Graham Wakefield at the Alice Lab for Computational Worldmaking. His works have been presented internationally, including at ISEA, AES, NIME, Expo ’74, TIES, and the Network Music Festival. He performs regularly with a modular synthesizer, runs the Exit Points electroacoustic improvisation series, and is an enthusiastic gardener and yoga practitioner.

Joel Ong (PhD, Digital Arts and Experimental Media (DXARTS), University of Washington)

Joel Ong is a media artist whose works connect scientific and artistic approaches to the environment, particularly with respect to sound and physical space.  Professor Ong’s work explores the way objects and spaces can function as repositories of ‘frozen sound’, and in elucidating these, he is interested in creating what systems theorist Jack Burnham (1968) refers to as “art (that) does not reside in material entities, but in relations between people and between people and the components of their environment”.

A serial collaborator, Professor Ong is invested in the broader scope of Art-Science collaborations and is engaged constantly in the discourses and processes that facilitate viewing these two polemical disciplines on similar ground.  His graduate interdisciplinary work in nanotechnology and sound was conducted at SymbioticA, the Center of Excellence for Biological Arts at the University of Western Australia and supervised by BioArt pioneers and TCA (The Tissue Culture and Art Project) artists Dr Ionat Zurr and Oron Catts.

George Bwanika Seremba
George Bwanika Seremba is an actor, playwright and scholar. He was born in Uganda. George holds an M.Phil. and a Ph.D. in Theatre Studies from Trinity College Dublin. In 1980, having barely survived a botched execution by the Military Intelligence, he fled into exile, resettling in Canada (1983). He has performed in numerous plays, including his own “Come Good Rain”, which was awarded a Dora Award (1993). In addition, he published a number of edited play collections, including “Beyond the pale: dramatic writing from First Nations writers & writers of colour” (1996), co-edited with Yvette Nolan and Betty Quan.

George was nominated for the Irish Times’ Best Actor award for his role in Athol Fugard’s “Master Harold and the boys” at Dublin’s Calypso Theatre. In addition to theatre, he has performed in several movies and on television. His doctoral thesis (2008), entitled “Robert Serumaga and the Golden Age of Uganda’s Theatre (1968-1978): (Solipsism, Activism, Innovation)”, will be published as a monograph by CSP (U.K.) in 2021.

Don Sinclair (Toronto)
Don is Associate Professor in the Department of Computational Arts at York University. His creative research areas include interactive performance, projections for dance, sound art, web and data art, cycling art, sustainability, and choral singing most often using code and programming. Don is particularly interested in processes of artistic creation that integrate digital creative coding-based practices with performance in dance and theatre. As well, he is an enthusiastic cyclist.

Debashis Sinha
Driven by a deep commitment to the primacy of sound in creative expression, Debashis Sinha has realized projects in radiophonic art, music, sound art, audiovisual performance, theatre, dance, and music across Canada and internationally. Sound design and composition credits include numerous works for Peggy Baker Dance Projects and productions with Canada’s premiere theatre companies including The Stratford Festival, Soulpepper, Volcano Theatre, Young People’s Theatre, Project Humanity, The Theatre Centre, Nightwood Theatre, Why Not Theatre, MTC Warehouse and Necessary Angel. His live sound practice on the concert stage has led to appearances at MUTEK Montreal, MUTEK Japan, the Guelph Jazz Festival, the Banff Centre, The Music Gallery, and other venues. Sinha teaches sound design at York University and the National Theatre School, and is currently working on a multi-part audio/performance work incorporating machine learning and AI funded by the Canada Council for the Arts.

Vicki (Jingjing) Zhang (Toronto)
Vicki Zhang is a faculty member at University of Toronto’s statistics department. She is the author of Uncalculated Risks (Canadian Scholar’s Press, 2014). She is also a playwright, whose plays have been produced or stage read in various festivals and venues in Canada including Toronto’s New Ideas Festival, Winnipeg’s FemFest, Hamilton Fringe Festival, Ergo Pink Fest, InspiraTO festival, Toronto’s Festival of Original Theatre (FOOT), Asper Center for Theatre and Film, Canadian Museum for Human Rights, Cultural Pluralism in the Arts Movement Ontario (CPAMO), and the Canadian Play Thing. She has also written essays and short fiction for Rookie Magazine and Thread.

If you can’t attend this Oct. 27, 2020 event, there’s still the Oct. 29, 2020 Boundary-Crossings event: Beauty Kit (see my Oct. 12, 2020 posting for more).

As for Kaleidoscopic Imaginations, you can access the Streaming Link on Oct. 27, 2020 at 7:30 pm EDT (4 pm PDT).

Technical University of Munich: embedded ethics approach for AI (artificial intelligence) and storing a TV series in synthetic DNA

I stumbled across two news bits of interest from the Technical University of Munich in one day (Sept. 1, 2020, I think). The topics: artificial intelligence (AI) and synthetic DNA (deoxyribonucleic acid).

Embedded ethics and artificial intelligence (AI)

An August 27, 2020 Technical University of Munich (TUM) press release (also on EurekAlert but published Sept. 1, 2020) features information about a proposal to embed ethicists in with AI development teams,

The increasing use of AI (artificial intelligence) in the development of new medical technologies demands greater attention to ethical aspects. An interdisciplinary team at the Technical University of Munich (TUM) advocates the integration of ethics from the very beginning of the development process of new technologies. Alena Buyx, Professor of Ethics in Medicine and Health Technologies, explains the embedded ethics approach.

Professor Buyx, the discussions surrounding a greater emphasis on ethics in AI research have greatly intensified in recent years, to the point where one might speak of “ethics hype” …

Prof. Buyx: … and many committees in Germany and around the world such as the German Ethics Council or the EU Commission High-Level Expert Group on Artificial Intelligence have responded. They are all in agreement: We need more ethics in the development of AI-based health technologies. But how do things look in practice for engineers and designers? Concrete solutions are still few and far between. In a joint pilot project with two Integrative Research Centers at TUM, the Munich School of Robotics and Machine Intelligence (MSRM) with its director, Prof. Sami Haddadin, and the Munich Center for Technology in Society (MCTS), with Prof. Ruth Müller, we want to try out the embedded ethics approach. We published the proposal in Nature Machine Intelligence at the end of July [2020].

What exactly is meant by the “embedded ethics approach”?

Prof. Buyx: The idea is to make ethics an integral part of the research process by integrating ethicists into the AI development team from day one. For example, they attend team meetings on a regular basis and create a sort of “ethical awareness” for certain issues. They also raise and analyze specific ethical and social issues.

Is there an example of this concept in practice?

Prof. Buyx: The Geriatronics Research Center, a flagship project of the MSRM in Garmisch-Partenkirchen, is developing robot assistants to enable people to live independently in old age. The center’s initiatives will include the construction of model apartments designed to try out residential concepts where seniors share their living space with robots. At a joint meeting with the participating engineers, it was noted that the idea of using an open concept layout everywhere in the units – with few doors or individual rooms – would give the robots considerable range of motion. With the seniors, however, this living concept could prove upsetting because they are used to having private spaces. At the outset, the engineers had not given explicit consideration to this aspect.

The approach sounds promising. But how can we keep “embedded ethics” from turning into an “ethics washing” exercise, offering companies a comforting sense of “being on the safe side” when developing new AI technologies?

Prof. Buyx: That’s not something we can be certain of avoiding. The key is mutual openness and a willingness to listen, with the goal of finding a common language – and subsequently being prepared to effectively implement the ethical aspects. At TUM we are ideally positioned to achieve this. Prof. Sami Haddadin, the director of the MSRM, is also a member of the EU High-Level Group of Artificial Intelligence. In his research, he is guided by the concept of human-centered engineering. Consequently, he has supported the idea of embedded ethics from the very beginning. But one thing is certain: Embedded ethics alone will not suddenly make AI “turn ethical”. Ultimately, that will require laws, codes of conduct and possibly state incentives.

Here’s a link to and a citation for the paper espousing the embedded ethics for AI development approach,

An embedded ethics approach for AI development by Stuart McLennan, Amelia Fiske, Leo Anthony Celi, Ruth Müller, Jan Harder, Konstantin Ritt, Sami Haddadin & Alena Buyx. Nature Machine Intelligence (2020) DOI: https://doi.org/10.1038/s42256-020-0214-1 Published 31 July 2020

This paper is behind a paywall.

Religion, ethics and AI

For some reason embedded ethics and AI got me to thinking about Pope Francis and other religious leaders.

The Roman Catholic Church and AI

There was a recent announcement that the Roman Catholic Church will be working with Microsoft and IBM on AI and ethics (from a February 28, 2020 article by Jen Copestake for British Broadcasting Corporation [BBC] news online; Note: A link has been removed),

Leaders from the two tech giants met senior church officials in Rome, and agreed to collaborate on “human-centred” ways of designing AI.

Microsoft president Brad Smith admitted some people may “think of us as strange bedfellows” at the signing event.

“But I think the world needs people from different places to come together,” he said.

The call was supported by Pope Francis, in his first detailed remarks about the impact of artificial intelligence on humanity.

The Rome Call for Ethics [sic] was co-signed by Mr Smith, IBM executive vice-president John Kelly and president of the Pontifical Academy for Life Archbishop Vincenzo Paglia.

It puts humans at the centre of new technologies, asking for AI to be designed with a focus on the good of the environment and “our common and shared home and of its human inhabitants”.

Framing the current era as a “renAIssance”, the speakers said the invention of artificial intelligence would be as significant to human development as the invention of the printing press or combustion engine.

UN Food and Agricultural Organization director Qu Dongyu and Italy’s technology minister Paola Pisano were also co-signatories.

Hannah Brockhaus’s February 28, 2020 article for the Catholic News Agency provides some details missing from the BBC report and I found it quite helpful when trying to understand the various pieces that make up this initiative,

The Pontifical Academy for Life signed Friday [February 28, 2020], alongside presidents of IBM and Microsoft, a call for ethical and responsible use of artificial intelligence technologies.

According to the document, “the sponsors of the call express their desire to work together, in this context and at a national and international level, to promote ‘algor-ethics.’”

“Algor-ethics,” according to the text, is the ethical use of artificial intelligence according to the principles of transparency, inclusion, responsibility, impartiality, reliability, security, and privacy.

The signing of the “Rome Call for AI Ethics [PDF]” took place as part of the 2020 assembly of the Pontifical Academy for Life, which was held Feb. 26-28 [2020] on the theme of artificial intelligence.

One part of the assembly was dedicated to private meetings of the academics of the Pontifical Academy for Life. The second was a workshop on AI and ethics that drew 356 participants from 41 countries.

On the morning of Feb. 28 [2020], a public event took place called “renAIssance. For a Humanistic Artificial Intelligence” and included the signing of the AI document by Microsoft President Brad Smith, and IBM Executive Vice-president John Kelly III.

The Director General of FAO, Dongyu Qu, and politician Paola Pisano, representing the Italian government, also signed.

The president of the European Parliament, David Sassoli, was also present Feb. 28.

Pope Francis canceled his scheduled appearance at the event due to feeling unwell. His prepared remarks were read by Archbishop Vincenzo Paglia, president of the Academy for Life.

You can find Pope Francis’s comments about the document here (if you’re not comfortable reading Italian, hopefully, the English translation which follows directly afterward will be helpful). The Pope’s AI initiative has a dedicated website, Rome Call for AI ethics, and while most of the material dates from the February 2020 announcement, they are keeping up a blog. It has two entries, one dated in May 2020 and another in September 2020.

Buddhism and AI

The Dalai Lama is well known for having an interest in science and having hosted scientists for various dialogues. So, I was able to track down a November 10, 2016 article by Ariel Conn for the futureoflife.org website, which features his insights on the matter,

The question of what it means and what it takes to feel needed is an important problem for ethicists and philosophers, but it may be just as important for AI researchers to consider. The Dalai Lama argues that lack of meaning and purpose in one’s work increases frustration and dissatisfaction among even those who are gainfully employed.

“The problem,” says the Dalai Lama, “is … the growing number of people who feel they are no longer useful, no longer needed, no longer one with their societies. … Feeling superfluous is a blow to the human spirit. It leads to social isolation and emotional pain, and creates the conditions for negative emotions to take root.”

If feeling needed and feeling useful are necessary for happiness, then AI researchers may face a conundrum. Many researchers hope that job loss due to artificial intelligence and automation could, in the end, provide people with more leisure time to pursue enjoyable activities. But if the key to happiness is feeling useful and needed, then a society without work could be just as emotionally challenging as today’s career-based societies, and possibly worse.

I also found a talk on the topic by The Venerable Tenzin Priyadarshi. First, here’s a description from his bio at the Dalai Lama Center for Ethics and Transformative Values webspace on the Massachusetts Institute of Technology (MIT) website,

… an innovative thinker, philosopher, educator and a polymath monk. He is Director of the Ethics Initiative at the MIT Media Lab and President & CEO of The Dalai Lama Center for Ethics and Transformative Values at the Massachusetts Institute of Technology. Venerable Tenzin’s unusual background encompasses entering a Buddhist monastery at the age of ten and receiving graduate education at Harvard University with degrees ranging from Philosophy to Physics to International Relations. He is a Tribeca Disruptive Fellow and a Fellow at the Center for Advanced Study in Behavioral Sciences at Stanford University. Venerable Tenzin serves on the boards of a number of academic, humanitarian, and religious organizations. He is the recipient of several recognitions and awards and received Harvard’s Distinguished Alumni Honors for his visionary contributions to humanity.

He gave the 2018 Roger W. Heyns Lecture in Religion and Society at Stanford University on the topic, “Religious and Ethical Dimensions of Artificial Intelligence.” The video runs over one hour but he is a sprightly speaker (in comparison to other Buddhist speakers I’ve listened to over the years).

Judaism, Islam, and other Abrahamic faiths examine AI and ethics

I was delighted to find this January 30, 2020 Artificial Intelligence: Implications for Ethics and Religion event as it brought together a range of thinkers from various faiths and disciplines,

New technologies are transforming our world every day, and the pace of change is only accelerating.  In coming years, human beings will create machines capable of out-thinking us and potentially taking on such uniquely-human traits as empathy, ethical reasoning, perhaps even consciousness.  This will have profound implications for virtually every human activity, as well as the meaning we impart to life and creation themselves.  This conference will provide an introduction for non-specialists to Artificial Intelligence (AI):

What is it?  What can it do and be used for?  And what will be its implications for choice and free will; economics and worklife; surveillance economies and surveillance states; the changing nature of facts and truth; and the comparative intelligence and capabilities of humans and machines in the future? 

Leading practitioners, ethicists and theologians will provide cross-disciplinary and cross-denominational perspectives on such challenges as technology addiction, inherent biases and resulting inequalities, the ethics of creating destructive technologies and of turning decision-making over to machines from self-driving cars to “autonomous weapons” systems in warfare, and how we should treat the suffering of “feeling” machines.  The conference ultimately will address how we think about our place in the universe and what this means for both religious thought and theological institutions themselves.

UTS [Union Theological Seminary] is the oldest independent seminary in the United States and has long been known as a bastion of progressive Christian scholarship.  JTS [Jewish Theological Seminary] is one of the academic and spiritual centers of Conservative Judaism and a major center for academic scholarship in Jewish studies. The Riverside Church is an interdenominational, interracial, international, open, welcoming, and affirming church and congregation that has served as a focal point of global and national activism for peace and social justice since its inception and continues to serve God through word and public witness. The annual Greater Good Gathering, the following week at Columbia University’s School of International & Public Affairs, focuses on how technology is changing society, politics and the economy – part of a growing nationwide effort to advance conversations promoting the “greater good.”

They have embedded a video of the event (it runs a little over seven hours) on the January 30, 2020 Artificial Intelligence: Implications for Ethics and Religion event page. For anyone who finds that a daunting amount of information, you may want to check out the speaker list for ideas about who might be writing and thinking on this topic.

As for Islam, I did track down this November 29, 2018 article by Shahino Mah Abdullah, a fellow at the Institute of Advanced Islamic Studies (IAIS) Malaysia,

As the global community continues to work together on the ethics of AI, there are still vast opportunities to offer ethical inputs, including the ethical principles based on Islamic teachings.

This is in line with Islam’s encouragement for its believers to convey beneficial messages, including to share its ethical principles with society.

In Islam, ethics, or akhlak (virtuous character traits) in Arabic, is sometimes employed interchangeably in the Arabic language with adab, which means the manner, attitude, behaviour, and etiquette of putting things in their proper places. Islamic ethics cover all the legal concepts ranging from syariah (Islamic law), fiqh (jurisprudence), qanun (ordinance), and ‘urf (customary practices).

Adopting and applying moral values based on the Islamic ethical concept or applied Islamic ethics could be a way to address various issues in today’s societies.

At the same time, this approach is in line with the higher objectives of syariah (maqasid al-syariah), which aim at conserving human benefit through the protection of human values, including faith (hifz al-din), life (hifz al-nafs), lineage (hifz al-nasl), intellect (hifz al-‘aql), and property (hifz al-mal). This approach could be very helpful to address contemporary issues, including those related to the rise of AI and intelligent robots.

…

Part of the difficulty with tracking down more about AI, ethics, and various religions is linguistic. I simply don’t have the language skills to search for the commentaries and, even in English, I may not have the best or most appropriate search terms.

Television (TV) episodes stored on DNA?

According to a Sept. 1, 2020 news item on Nanowerk, the first episode of a TV series, ‘Biohackers’, has been stored on synthetic DNA (deoxyribonucleic acid) by a researcher at TUM and colleagues at another institution,

The first episode of the newly released series “Biohackers” was stored in the form of synthetic DNA. This was made possible by the research of Prof. Reinhard Heckel of the Technical University of Munich (TUM) and his colleague Prof. Robert Grass of ETH Zürich.

They have developed a method that permits the stable storage of large quantities of data on DNA for over 1000 years.

A Sept. 1, 2020 TUM press release, which originated the news item, proceeds with more detail in an interview format,

Prof. Heckel, Biohackers is about a medical student seeking revenge on a professor with a dark past – and the manipulation of DNA with biotechnology tools. You were commissioned to store the series on DNA. How does that work?

First, I should mention that what we’re talking about is artificially generated – in other words, synthetic – DNA. DNA consists of four building blocks: the nucleotides adenine (A), thymine (T), guanine (G) and cytosine (C). Computer data, meanwhile, are coded as zeros and ones. The first episode of Biohackers consists of a sequence of around 600 million zeros and ones. To code the sequence 01 01 11 00 in DNA, for example, we decide which number combinations will correspond to which letters. For example: 00 is A, 01 is C, 10 is G and 11 is T. Our example then produces the DNA sequence CCTA. Using this principle of DNA data storage, we have stored the first episode of the series on DNA.
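
Heckel’s mapping is easy to try out; here are a few lines of Python implementing exactly the example he gives,

```python
# Two bits per nucleotide, using the mapping from the interview:
# 00 -> A, 01 -> C, 10 -> G, 11 -> T.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(bits: str) -> str:
    bits = bits.replace(" ", "")
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(seq: str) -> str:
    return "".join(BASE_TO_BITS[base] for base in seq)

assert encode("01 01 11 00") == "CCTA"   # the example from the interview
assert decode("CCTA") == "01011100"
```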

And to view the series – is it just a matter of “reverse translation” of the letters?

In a very simplified sense, you can visualize it like that. When writing, storing and reading the DNA, however, errors occur. If these errors are not corrected, the data stored on the DNA will be lost. To solve the problem, I have developed an algorithm based on channel coding. This method involves correcting errors that take place during information transfers. The underlying idea is to add redundancy to the data. Think of language: When we read or hear a word with missing or incorrect letters, the computing power of our brain is still capable of understanding the word. The algorithm follows the same principle: It encodes the data with sufficient redundancy to ensure that even highly inaccurate data can be restored later.
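
To make the redundancy idea concrete, here is the crudest possible error-correcting code: a three-fold repetition code that survives one flipped bit per triplet. The actual scheme is far more efficient than this; the snippet only illustrates the principle Heckel describes,

```python
# 3x repetition code: each bit is written three times, and a majority
# vote on reading corrects any single error per triplet.
def rep3_encode(bits):
    return [b for b in bits for _ in range(3)]

def rep3_decode(coded):
    return [int(sum(coded[i:i + 3]) >= 2)    # majority vote per triplet
            for i in range(0, len(coded), 3)]

data = [0, 1, 0, 1, 1, 1, 0, 0]
coded = rep3_encode(data)
coded[4] ^= 1                                # corrupt one stored symbol
assert rep3_decode(coded) == data            # the original data survives
```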

Channel coding is used in many fields, including in telecommunications. What challenges did you face when developing your solution?

The first challenge was to create an algorithm specifically geared to the errors that occur in DNA. The second one was to make the algorithm so efficient that the largest possible quantities of data can be stored on the smallest possible quantity of DNA, so that only the absolutely necessary amount of redundancy is added. We demonstrated that our algorithm is optimized in that sense.

DNA data storage is very expensive because of the complexity of DNA production as well as the reading process. What makes DNA an attractive storage medium despite these challenges?

First, DNA has a very high information density. This permits the storage of enormous data volumes in a minimal space. In the case of the TV series, we stored “only” 100 megabytes on a picogram – or a billionth of a gram of DNA. Theoretically, however, it would be possible to store up to 200 exabytes on one gram of DNA. And DNA lasts a long time. By comparison: If you never turned on your PC or wrote data to the hard disk it contains, the data would disappear after a couple of years. By contrast, DNA can remain stable for many thousands of years if it is packed right.
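
A back-of-envelope check of my own on those density figures,

```python
# 100 MB on a picogram, scaled up to one gram:
achieved = 100e6 / 1e-12          # bytes per gram
print(f"{achieved:.0e} bytes/g")  # 1e+20, i.e. 100 exabytes per gram

# Rough theoretical ceiling: two bits per nucleotide at ~330 g/mol per
# nucleotide. Practical schemes with redundancy and packaging overhead
# land below this, hence quoted figures like 200 exabytes per gram.
N_A = 6.022e23                    # Avogadro's number
ceiling = (N_A / 330) * 2 / 8     # bytes per gram
print(f"{ceiling:.1e} bytes/g")   # ~4.6e+20, a few hundred exabytes per gram
```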

And the method you have developed also makes the DNA strands durable – practically indestructible.

My colleague Robert Grass was the first to develop a process for the “stable packing” of DNA strands by encapsulating them in nanometer-scale spheres made of silica glass. This ensures that the DNA is protected against mechanical influences. In a joint paper in 2015, we presented the first robust DNA data storage concept with our algorithm and the encapsulation process developed by Prof. Grass. Since then we have continuously improved our method. In our most recent publication in Nature Protocols of January 2020, we passed on what we have learned.

What are your next steps? Does data storage on DNA have a future?

We’re working on a way to make DNA data storage cheaper and faster. “Biohackers” was a milestone en route to commercialization. But we still have a long way to go. If this technology proves successful, big things will be possible. Entire libraries, all movies, photos, music and knowledge of every kind – provided it can be represented in the form of data – could be stored on DNA and would thus be available to humanity for eternity.

Here’s a link to and a citation for the paper,

Reading and writing digital data in DNA by Linda C. Meiser, Philipp L. Antkowiak, Julian Koch, Weida D. Chen, A. Xavier Kohll, Wendelin J. Stark, Reinhard Heckel & Robert N. Grass. Nature Protocols volume 15, pages 86–101 (2020) Issue Date: January 2020 DOI: https://doi.org/10.1038/s41596-019-0244-5 Published [online] 29 November 2019

This paper is behind a paywall.

As for ‘Biohackers’, it’s a German science fiction television series and you can find out more about it here on the Internet Movie Database.

Non-invasive chemical imaging reveals the Eyckian Lamb of God’s secrets

Left: color image after the 1950s treatment. The ears of the Eyckian Lamb were revealed after removal of the 16th-century overpaint obscuring the background. Right: color image after the 2019 treatment that removed all of the 16th century overpaint, revealing the face of the Eyckian Lamb. The dotted lines indicate the outline of the head before removal of 16th-century overpaint.

Fascinating, yes? More than one person has noticed that the ‘new’ lamb is “disturbingly human-like.” First, here’s more about this masterpiece and the technology used to restore it (from a July 29, 2020 University of Antwerp (Belgium) press release (Note: I do not have all of the figures (images) described in this press release embedded here),

Two non-invasive chemical imaging modalities were employed to help understand the changes made over time to the Lamb of God, the focal point of the Ghent Altarpiece (1432) by Hubert and Jan Van Eyck. Two major results were obtained: a prediction of the facial features of the Lamb of God that had been hidden beneath non-original overpaint dating from the 16th century (and later), and evidence for a smaller earlier version of the Lamb’s body with a more naturalistic build. These non-invasive imaging methods, combined with analysis of paint cross-sections and magnified examination of the paint surface, provide objective chemical evidence to understand the extent of overpaints and the state of preservation of the original Eyckian paint underneath.

The Ghent Altarpiece is one of the founding masterpieces of Western European painting. The central panel, The Adoration of the Lamb, represents the sacrifice of Christ with a depiction of the Lamb of God standing on an altar, blood pouring into a chalice. During conservation treatment and technical analysis in the 1950s, conservators recognized the presence of overpaint on the Lamb and the surrounding area. But based on the evidence available at that time, the decision was made to remove only the overpaint obscuring the background immediately surrounding the head. As a result, the ears of the Eyckian Lamb were uncovered, leading to the surprising effect of a head with four ears (Figure 1).

Figure 1: Left: Color image after the 1950s treatment. The ears of the Eyckian Lamb were revealed after removal of the 16th century overpaint obscuring the background. (© Lukasweb.be – Art in Flanders vzw). Right: Color image after the 2019 treatment that removed all of the 16th century overpaint, revealing the face of the Eyckian Lamb. The dotted lines indicate the outline of the head before removal of 16th century overpaint. (© Lukasweb.be – Art in Flanders vzw).

During the recent conservation treatment of the central panel, chemical images collected before 16th century overpaint was removed revealed facial features that predicted aspects of the Eyckian Lamb, at that time still hidden below the overpaint. For example, the smaller, v-shaped nostrils of the Eyckian Lamb are situated higher than the 16th century nose, as revealed in the map for mercury, an element associated with the red pigment vermilion (Figure 2, red arrow). A pair of eyes that look forward, slightly lower than the 16th century eyes, can be seen in a false-color hyperspectral infrared reflectance image (Figure 2, right). This image also shows dark preparatory underdrawing lines that define pursed lips, and in conjunction with the presence of mercury in this area, suggest the Eyckian lips were more prominent. In addition, the higher, 16th century ears were painted over the gilded rays of the halo (Figure 2, yellow rays). Gilding is typically the artist’s final touch when working on a painting, which supports the conclusion that the lower set of ears is the Eyckian original. Collectively, these facial features indicate that, compared to the 16th century restorer’s overpainted face, the Eyckian Lamb has a smaller face with a distinctive expression.

Figure 2: Left: Colorized composite elemental map showing the distribution of gold (in yellow), mercury (in red), and lead (in white). The red arrow indicates the position of the Eyckian Lamb’s nostrils. (University of Antwerp). Right: Composite false-color infrared reflectance image (blue – 1000 nm, green – 1350 nm, red – 1650 nm) shows underdrawn lines indicating the position of facial features of the Eyckian Lamb, including forward-gazing eyes, the division between the lips, and the jawline. (National Gallery of Art, Washington). The dotted lines indicate the outline of the head before removal of 16th century overpaint.
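
For readers who want a feel for how such a false-color composite is assembled, here’s a minimal sketch of my own (not the researchers’ code): three reflectance bands from the hyperspectral cube are normalized and stacked into the blue, green, and red channels of an ordinary image, following the 1000/1350/1650 nm convention in the caption above.

```python
import numpy as np

def false_color_composite(cube, wavelengths, bands_nm=(1000, 1350, 1650)):
    """Assign three reflectance bands to the blue, green, and red channels.

    cube        : 3-D array (rows, cols, bands) of reflectance values
    wavelengths : 1-D array giving the wavelength (nm) of each band
    bands_nm    : wavelengths mapped to blue, green, red respectively
    """
    channels = []
    for target in bands_nm:
        idx = int(np.argmin(np.abs(wavelengths - target)))  # nearest band
        band = cube[:, :, idx].astype(float)
        # Stretch each band to [0, 1] so the three channels are comparable
        band = (band - band.min()) / (band.max() - band.min() + 1e-12)
        channels.append(band)
    blue, green, red = channels
    return np.dstack([red, green, blue])  # RGB order for display

# Toy usage with a synthetic four-band cube
wl = np.array([900.0, 1000.0, 1350.0, 1650.0])
cube = np.random.rand(64, 64, wl.size)
rgb = false_color_composite(cube, wl)
print(rgb.shape)  # (64, 64, 3)
```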

The new imaging also revealed previously unrecognized revisions to the size and shape of the Lamb’s body: a more naturalistically shaped Lamb, with slightly sagging back, more rounded hindquarters and a smaller tail. The artist’s underdrawing lines used to lay out the design of the smaller shape can be seen in the false-color hyperspectral infrared reflectance image (Figure 3, lower left, white arrows). Mathematical processing of the reflectance dataset to emphasize a spectral feature associated with the pigment lead white resulted in a clearer image of the smaller Lamb (Figure 3, lower right). Differences between the paint handling of the fleece in the initial small Lamb and the revised area of the larger Lamb also were found upon reexamination of the x-radiograph and the paint surface under the microscope.

Figure 3: Upper left: Color image before removal of all 16th century overpaint. (© Lukasweb.be – Art in Flanders vzw). Upper right: Color image after removal of all 16th century overpaint. (© Lukasweb.be – Art in Flanders vzw). Lower left: False-color infrared reflectance image (blue – 1000 nm, green – 1350 nm, red – 1650 nm) reveals underdrawing lines that denote the smaller hindquarters of the initial Lamb. Lower right: Map derived from processing the infrared reflectance image cube showing the initial Lamb with a slightly sagging back, more rounded hindquarters and a smaller tail. Brighter areas of the map indicate stronger absorption from the -OH group associated with one of the forms of lead white. (National Gallery of Art, Washington).
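
The ‘mathematical processing’ mentioned in the caption, emphasizing a single absorption feature across the painting, is conceptually similar to computing a band-depth map: estimate a local continuum from wavelengths flanking the feature and measure how far the reflectance dips below it. Here’s a hedged sketch of that general technique; the wavelengths are placeholders, not the study’s actual values.

```python
import numpy as np

def band_depth_map(cube, wavelengths, feature_nm, left_nm, right_nm):
    """Depth of an absorption feature relative to a linear continuum.

    Brighter output = stronger absorption (cf. the -OH feature of one form
    of lead white in the Figure 3 caption). The wavelengths passed in below
    are illustrative placeholders only.
    """
    def nearest(target):
        return int(np.argmin(np.abs(wavelengths - target)))

    f, l, r = nearest(feature_nm), nearest(left_nm), nearest(right_nm)
    left = cube[:, :, l].astype(float)
    right = cube[:, :, r].astype(float)
    # Linear continuum interpolated at the feature wavelength
    t = (wavelengths[f] - wavelengths[l]) / (wavelengths[r] - wavelengths[l])
    continuum = (1 - t) * left + t * right
    feature = cube[:, :, f].astype(float)
    return 1.0 - feature / (continuum + 1e-12)  # 0 = no dip, higher = deeper

# Toy usage on a synthetic reflectance cube
wl = np.linspace(1300, 1500, 21)              # synthetic wavelength axis (nm)
cube = np.random.rand(32, 32, wl.size) + 0.5  # keep reflectance positive
depth = band_depth_map(cube, wl, feature_nm=1400, left_nm=1340, right_nm=1460)
print(depth.shape)  # (32, 32)
```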

During the conservation treatment completed in 2019, decisions were informed by well-established conservation methods (high-resolution color photography, X-radiography, infrared imaging, paint sample analysis) as well as the new chemical imaging. In this way, the conservation treatment uncovered the smaller face of the Eyckian Lamb, with forward-facing eyes that meet the viewer’s gaze. Only overpaints that could be identified as being later additions dating from the 16th century onward were carefully and safely removed. The body of the Lamb, however, has not changed. The material evidence indicates that the lead white paint layer used to define the larger squared-off hindquarters was applied prior to the 16th century restoration, but because analysis at the present time cannot definitively establish whether this was a change by the original artist(s) or a very early restoration or alteration by another artist, the enlarged contour of the Lamb was left untouched.

Chemical imaging technologies can be used to build confidence about the state of preservation of original paint and help guide the decision to remove overpaint. Combined with the conservators’ thorough optical examination, informed by years of experience and insights derived from paint cross-sections, chemical imaging methods will no doubt be central to ongoing interdisciplinary research, helping to resolve long-standing art-historical issues on the Ghent Altarpiece as well as other works of art. These findings were obtained by researchers from the University of Antwerp using macroscale X-ray fluorescence imaging and researchers at the National Gallery of Art, Washington using infrared reflectance imaging spectroscopy, interpreted in conjunction with the observations of the scientists and the conservation team from The Royal Institute for Cultural Heritage (KIK-IRPA), Brussels.

A January 22, 2020 British Broadcasting Corporation (BBC) online news item notes some of the response to the ‘new’ lamb (Note: A link has been removed),

Restorers found that the central panel of the artwork, known as the Adoration of the Mystic Lamb, had been painted over in the 16th Century.

Another artist had altered the Lamb of God, a symbol for Jesus depicted at the centre of the panel.

Now conservationists have stripped away the overpaint, revealing the lamb’s “intense gaze” and “large frontal eyes”.

Hélène Dubois, the head of the restoration project, told the Art Newspaper the original lamb had a more “intense interaction with the onlookers”.

She said the lamb’s “cartoonish” depiction, which departs from the painting’s naturalistic style, required more research.

The lamb has been described as having an “alarmingly humanoid face” with “penetrating, close-set eyes, full pink lips and flared nostrils” by the Smithsonian Magazine.

These features are “eye-catching, if not alarmingly anthropomorphic”, said the magazine, the official journal of the Smithsonian Institution.

There was also disbelief on social media, where the lamb was called “disturbing” by some and compared to an “alien creature”. Some said they felt it would have been better to not restore the lamb’s original face.

The painter of the panel, Jan Van Eyck, is considered to be one of the most technical and talented artists of his generation. However, it is widely believed that The Ghent Altarpiece was started by his brother, Hubert Van Eyck.

Taken away by the Nazis during World War Two and Napoleon’s troops in the 1700s, the altarpiece is thought to be one of the most frequently stolen artworks of all time.

If you have the time, do read the January 22, 2020 BBC news item in its entirety as it conveys more of the controversy.

Jennifer Ouellette’s July 29, 2020 article for Ars Technica delves further into the technical details along with some history of this particular 21st-century restoration. The conservators and experts used artificial intelligence (AI) to assist.

Here’s a link to and a citation for the paper,

Dual mode standoff imaging spectroscopy documents the painting process of the Lamb of God in the Ghent Altarpiece by J. and H. Van Eyck by Geert Van der Snickt, Kathryn A. Dooley, Jana Sanyova, Hélène Dubois, John K. Delaney, E. Melanie Gifford, Stijn Legrand, Nathalie Laquiere and Koen Janssens. Science Advances 29 Jul 2020: Vol. 6, no. 31, eabb3379 DOI: 10.1126/sciadv.abb3379

This paper is open access.

Turning brain-controlled wireless electronic prostheses into reality plus some ethical points

Researchers at Stanford University (California, US) believe they have a solution for a problem with neuroprosthetics (Note: I have included brief comments about neuroprosthetics and possible ethical issues at the end of this posting) according to an August 5, 2020 news item on ScienceDaily,

The current generation of neural implants record enormous amounts of neural activity, then transmit these brain signals through wires to a computer. But, so far, when researchers have tried to create wireless brain-computer interfaces to do this, it took so much power to transmit the data that the implants generated too much heat to be safe for the patient. A new study suggests how to solve this problem — and thus cut the wires.

Caption: Photo of a current neural implant that uses wires to transmit information and receive power. New research suggests how to one day cut the wires. Credit: Sergey Stavisky

An August 3, 2020 Stanford University news release (also on EurekAlert but published August 4, 2020) by Tom Abate, which originated the news item, details the problem and the proposed solution,

Stanford researchers have been working for years to advance a technology that could one day help people with paralysis regain use of their limbs, and enable amputees to use their thoughts to control prostheses and interact with computers.

The team has been focusing on improving a brain-computer interface, a device implanted beneath the skull on the surface of a patient’s brain. This implant connects the human nervous system to an electronic device that might, for instance, help restore some motor control to a person with a spinal cord injury, or someone with a neurological condition like amyotrophic lateral sclerosis, also called Lou Gehrig’s disease.

The current generation of these devices record enormous amounts of neural activity, then transmit these brain signals through wires to a computer. But when researchers have tried to create wireless brain-computer interfaces to do this, it took so much power to transmit the data that the devices would generate too much heat to be safe for the patient.

Now, a team led by electrical engineers and neuroscientists Krishna Shenoy, PhD, and Boris Murmann, PhD, and neurosurgeon and neuroscientist Jaimie Henderson, MD, have shown how it would be possible to create a wireless device, capable of gathering and transmitting accurate neural signals, but using a tenth of the power required by current wire-enabled systems. These wireless devices would look more natural than the wired models and give patients freer range of motion.

Graduate student Nir Even-Chen and postdoctoral fellow Dante Muratore, PhD, describe the team’s approach in a Nature Biomedical Engineering paper.

The team’s neuroscientists identified the specific neural signals needed to control a prosthetic device, such as a robotic arm or a computer cursor. The team’s electrical engineers then designed the circuitry that would enable a future, wireless brain-computer interface to process and transmit these carefully identified and isolated signals, using less power and thus making it safe to implant the device on the surface of the brain.

To test their idea, the researchers collected neuronal data from three nonhuman primates and one human participant in a (BrainGate) clinical trial.

As the subjects performed movement tasks, such as positioning a cursor on a computer screen, the researchers took measurements. The findings validated their hypothesis that a wireless interface could accurately control an individual’s motion by recording a subset of action-specific brain signals, rather than acting like the wired device and collecting brain signals in bulk.
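
The power-saving idea, decoding from a small, carefully chosen subset of signals instead of streaming everything, can be illustrated with a toy greedy channel-selection loop: keep adding whichever channel most improves a simple linear decoder until the gains plateau. This is my own sketch of the general principle, not the Stanford team’s actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 96 recording channels, but only a handful carry
# movement-related signal (a stand-in for "action-specific" activity).
n_trials, n_channels = 500, 96
informative = [3, 17, 42, 64, 80]
velocity = rng.normal(size=n_trials)               # 1-D movement target
X = rng.normal(size=(n_trials, n_channels))
for ch in informative:
    X[:, ch] += 0.8 * velocity                     # inject signal

def r2_of_subset(chs):
    """R^2 of a least-squares decoder restricted to the given channels."""
    A = X[:, chs]
    w, *_ = np.linalg.lstsq(A, velocity, rcond=None)
    resid = velocity - A @ w
    return 1 - resid.var() / velocity.var()

selected, best = [], 0.0
for _ in range(10):                                # keep at most 10 channels
    gains = [(r2_of_subset(selected + [c]), c)
             for c in range(n_channels) if c not in selected]
    score, ch = max(gains)
    if score - best < 0.01:                        # gains plateau: stop
        break
    selected.append(ch)
    best = score

print(f"channels kept: {sorted(selected)}, decoder R^2 = {best:.2f}")
```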

The next step will be to build an implant based on this new approach and proceed through a series of tests toward the ultimate goal.

Here’s a link to and a citation for the paper,

Power-saving design opportunities for wireless intracortical brain–computer interfaces by Nir Even-Chen, Dante G. Muratore, Sergey D. Stavisky, Leigh R. Hochberg, Jaimie M. Henderson, Boris Murmann & Krishna V. Shenoy. Nature Biomedical Engineering (2020) DOI: https://doi.org/10.1038/s41551-020-0595-9 Published: 03 August 2020

This paper is behind a paywall.

Comments about ethical issues

As I found out while investigating, ethical issues in this area abound. My first thought was to look at how someone with a focus on ability studies might view the complexities.

My ‘go to’ resource for human enhancement and ethical issues is Gregor Wolbring, an associate professor at the University of Calgary (Alberta, Canada). His profile lists these areas of interest: ability studies, disability studies, governance of emerging and existing sciences and technologies (e.g. neuromorphic engineering, genetics, synthetic biology, robotics, artificial intelligence, automatization, brain machine interfaces, sensors) and more.

I can’t find anything more recent from him on this particular topic but I did find an August 10, 2017 essay for The Conversation where he comments on the ethical issues of human enhancement technology, in that case gene-editing. Regardless, he makes points that are applicable to brain-computer interfaces (human enhancement) (Note: Links have been removed),

Ability expectations have been and still are used to disable, or disempower, many people, not only people seen as impaired. They’ve been used to disable or marginalize women (men making the argument that rationality is an important ability and women don’t have it). They also have been used to disable and disempower certain ethnic groups (one ethnic group argues they’re smarter than another ethnic group) and others.

A recent Pew Research survey on human enhancement revealed that an increase in the ability to be productive at work was seen as a positive. What does such ability expectation mean for the “us” in an era of scientific advancements in gene-editing, human enhancement and robotics?

Which abilities are seen as more important than others?

The ability expectations among “us” will determine how gene-editing and other scientific advances will be used.

And so how we govern ability expectations, and who influences that governance, will shape the future. Therefore, it’s essential that ability governance and ability literacy play a major role in shaping all advancements in science and technology.

One of the reasons I find Gregor’s commentary so valuable is that he writes lucidly about ability and disability as concepts and poses what can be provocative questions about expectations and what it is to be truly abled or disabled. You can find more of his writing here on his eponymous (more or less) blog.

Ethics of clinical trials for testing brain implants

This October 31, 2017 article by Emily Underwood for Science was revelatory,

In 2003, neurologist Helen Mayberg of Emory University in Atlanta began to test a bold, experimental treatment for people with severe depression, which involved implanting metal electrodes deep in the brain in a region called area 25 [emphases mine]. The initial data were promising; eventually, they convinced a device company, St. Jude Medical in Saint Paul, to sponsor a 200-person clinical trial dubbed BROADEN.

This month [October 2017], however, Lancet Psychiatry reported the first published data on the trial’s failure. The study stopped recruiting participants in 2012, after a 6-month study in 90 people failed to show statistically significant improvements between those receiving active stimulation and a control group, in which the device was implanted but switched off.

… a tricky dilemma for companies and research teams involved in deep brain stimulation (DBS) research: If trial participants want to keep their implants [emphases mine], who will take responsibility—and pay—for their ongoing care? And participants in last week’s meeting said it underscores the need for the growing corps of DBS researchers to think long-term about their planned studies.

… participants bear financial responsibility for maintaining the device should they choose to keep it, and for any additional surgeries that might be needed in the future, Mayberg says. “The big issue becomes cost [emphasis mine],” she says. “We transition from having grants and device donations” covering costs, to patients being responsible. And although the participants agreed to those conditions before enrolling in the trial, Mayberg says she considers it a “moral responsibility” to advocate for lower costs for her patients, even if it means “begging for charity payments” from hospitals. And she worries about what will happen to trial participants if she is no longer around to advocate for them. “What happens if I retire, or get hit by a bus?” she asks.

There’s another uncomfortable possibility: that the hypothesis was wrong [emphases mine] to begin with. A large body of evidence from many different labs supports the idea that area 25 is “key to successful antidepressant response,” Mayberg says. But “it may be too simple-minded” to think that zapping a single brain node and its connections can effectively treat a disease as complex as depression, Krakauer [John Krakauer, a neuroscientist at Johns Hopkins University in Baltimore, Maryland] says. Figuring that out will likely require more preclinical research in people—a daunting prospect that raises additional ethical dilemmas, Krakauer says. “The hardest thing about being a clinical researcher,” he says, “is knowing when to jump.”

Brain-computer interfaces, symbiosis, and ethical issues

This was the most recent and most directly applicable work that I could find. From a July 24, 2019 article by Liam Drew for Nature Outlook: The brain,

“It becomes part of you,” Patient 6 said, describing the technology that enabled her, after 45 years of severe epilepsy, to halt her disabling seizures. Electrodes had been implanted on the surface of her brain that would send a signal to a hand-held device when they detected signs of impending epileptic activity. On hearing a warning from the device, Patient 6 knew to take a dose of medication to halt the coming seizure.

“You grow gradually into it and get used to it, so it then becomes a part of every day,” she told Frederic Gilbert, an ethicist who studies brain–computer interfaces (BCIs) at the University of Tasmania in Hobart, Australia. “It became me,” she said. [emphasis mine]

Gilbert was interviewing six people who had participated in the first clinical trial of a predictive BCI to help understand how living with a computer that monitors brain activity directly affects individuals psychologically1. Patient 6’s experience was extreme: Gilbert describes her relationship with her BCI as a “radical symbiosis”.

Symbiosis is a term, borrowed from ecology, that means an intimate co-existence of two species for mutual advantage. As technologists work towards directly connecting the human brain to computers, it is increasingly being used to describe humans’ potential relationship with artificial intelligence.

Interface technologies are divided into those that ‘read’ the brain to record brain activity and decode its meaning, and those that ‘write’ to the brain to manipulate activity in specific regions and affect their function.

Commercial research is opaque, but scientists at social-media platform Facebook are known to be pursuing brain-reading techniques for use in headsets that would convert users’ brain activity into text. And neurotechnology companies such as Kernel in Los Angeles, California, and Neuralink, founded by Elon Musk in San Francisco, California, predict bidirectional coupling in which computers respond to people’s brain activity and insert information into their neural circuitry. [emphasis mine]

Already, it is clear that melding digital technologies with human brains can have provocative effects, not least on people’s agency — their ability to act freely and according to their own choices. Although neuroethicists’ priority is to optimize medical practice, their observations also shape the debate about the development of commercial neurotechnologies.

Neuroethicists began to note the complex nature of the therapy’s side effects. “Some effects that might be described as personality changes are more problematic than others,” says Maslen [Hannah Maslen, a neuroethicist at the University of Oxford, UK]. A crucial question is whether the person who is undergoing stimulation can reflect on how they have changed. Gilbert, for instance, describes a DBS patient who started to gamble compulsively, blowing his family’s savings and seeming not to care. He could only understand how problematic his behaviour was when the stimulation was turned off.

Such cases present serious questions about how the technology might affect a person’s ability to give consent to be treated, or for treatment to continue. [emphases mine] If the person who is undergoing DBS is happy to continue, should a concerned family member or doctor be able to overrule them? If someone other than the patient can terminate treatment against the patient’s wishes, it implies that the technology degrades people’s ability to make decisions for themselves. It suggests that if a person thinks in a certain way only when an electrical current alters their brain activity, then those thoughts do not reflect an authentic self.

To observe a person with tetraplegia bringing a drink to their mouth using a BCI-controlled robotic arm is spectacular. [emphasis mine] This rapidly advancing technology works by implanting an array of electrodes either on or in a person’s motor cortex — a brain region involved in planning and executing movements. The activity of the brain is recorded while the individual engages in cognitive tasks, such as imagining that they are moving their hand, and these recordings are used to command the robotic limb.

If neuroscientists could unambiguously discern a person’s intentions from the chattering electrical activity that they record in the brain, and then see that it matched the robotic arm’s actions, ethical concerns would be minimized. But this is not the case. The neural correlates of psychological phenomena are inexact and poorly understood, which means that signals from the brain are increasingly being processed by artificial intelligence (AI) software before reaching prostheses. [emphasis mine]
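
To make the ‘AI between brain and prosthesis’ point concrete, here is a deliberately simplified decoding loop of the kind the passage alludes to: bin spike counts, smooth them, then pass them through a trained linear map to produce a velocity command for a robotic arm or cursor. Everything here, from the data to the ridge-regression decoder, is an illustrative stand-in; real systems are far more elaborate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Offline "calibration": learn a map from smoothed spike counts to
# intended 2-D hand velocity (all data here is synthetic).
T, n_units = 2000, 50
intended = rng.normal(size=(T, 2))           # (vx, vy) per time bin
tuning = rng.normal(size=(n_units, 2))       # each unit's preferred direction
rates = np.clip(intended @ tuning.T
                + rng.normal(scale=2.0, size=(T, n_units)), 0, None)

def smooth(x, alpha=0.2):
    """Exponential smoothing along time, mimicking a causal filter."""
    out = np.empty_like(x)
    out[0] = x[0]
    for t in range(1, len(x)):
        out[t] = alpha * x[t] + (1 - alpha) * out[t - 1]
    return out

S = smooth(rates)
# Ridge regression: W minimizes ||S W - intended||^2 + lam * ||W||^2
lam = 1.0
W = np.linalg.solve(S.T @ S + lam * np.eye(n_units), S.T @ intended)

# "Online" use: each new bin of smoothed spike counts becomes a command.
new_bin = smooth(rates[-50:])[-1]
vx, vy = new_bin @ W
print(f"decoded velocity command: ({vx:+.2f}, {vy:+.2f})")
```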

But, he [Philipp Kellmeyer, a neurologist and neuroethicist at the University of Freiburg, Germany] says, using AI tools also introduces ethical issues of which regulators have little experience. [emphasis mine] Machine-learning software learns to analyse data by generating algorithms that cannot be predicted and that are difficult, or impossible, to comprehend. This introduces an unknown and perhaps unaccountable process between a person’s thoughts and the technology that is acting on their behalf.

Maslen is already helping to shape BCI-device regulation. She is in discussion with the European Commission about regulations it will implement in 2020 that cover non-invasive brain-modulating devices that are sold straight to consumers. [emphases mine; Note: There is a Canadian company selling this type of product, MUSE] Maslen became interested in the safety of these devices, which were covered by only cursory safety regulations. Although such devices are simple, they pass electrical currents through people’s scalps to modulate brain activity. Maslen found reports of them causing burns, headaches and visual disturbances. She also says clinical studies have shown that, although non-invasive electrical stimulation of the brain can enhance certain cognitive abilities, this can come at the cost of deficits in other aspects of cognition.

Regarding my note about MUSE, the company is InteraXon and its product is MUSE. They advertise the product as “Brain Sensing Headbands That Improve Your Meditation Practice.” The company website and the product seem to be one entity, Choose Muse. The company’s product has been used in some serious research papers, which can be found here. I did not see any research papers concerning safety issues.

Getting back to Drew’s July 24, 2019 article and Patient 6,

… He [Gilbert] is now preparing a follow-up report on Patient 6. The company that implanted the device in her brain to help free her from seizures went bankrupt. The device had to be removed.

… Patient 6 cried as she told Gilbert about losing the device. … “I lost myself,” she said.

“It was more than a device,” Gilbert says. “The company owned the existence of this new person.”

I strongly recommend reading Drew’s July 24, 2019 article in its entirety.

Finally

It’s easy to forget, in all the excitement over technologies ‘making our lives better’, that there can be a dark side or two. Some of the points brought forth in the articles by Wolbring, Underwood, and Drew confirmed my uneasiness as reasonable and gave me some specific examples of how these technologies raise new issues or old issues in new ways.

What I find interesting is that no one is using the term ‘cyborg’, which would seem quite applicable. There is an April 20, 2012 posting here titled ‘My mother is a cyborg‘ where I noted that by at least one definition people with joint replacements, pacemakers, etc. are considered cyborgs. In short, cyborgs, or technology integrated into bodies, have been amongst us for quite some time.

Interestingly, no one seems to care much when insects are turned into cyborgs (I can’t remember who pointed this out), but it is a popular area of research, especially for military and search-and-rescue applications.

I’ve sometimes used the terms ‘machine/flesh’ and/or ‘augmentation’ to describe technologies integrated with bodies, human or otherwise. You can find lots on the topic here, however I’ve tagged or categorized it.

Amongst other pieces you can find here, there’s the August 8, 2016 posting, ‘Technology, athletics, and the ‘new’ human‘ featuring Oscar Pistorius when he was still best known as the ‘blade runner’ and a remarkably successful paralympic athlete. It’s about his efforts to compete against able-bodied athletes at the London Olympic Games in 2012. It is fascinating to read about technology and elite athletes of any kind as they are often the first to try out ‘enhancements’.

Gregor Wolbring has a number of essays on The Conversation looking at Paralympic athletes and their pursuit of enhancements and how all of this is affecting our notions of abilities and disabilities. By extension, one has to assume that ‘abled’ athletes are also affected with the trickle-down effect on the rest of us.

Regardless of where we start the investigation, there is a sameness to the participants in neuroethics discussions with a few experts and commercial interests deciding on how the rest of us (however you define ‘us’ as per Gregor Wolbring’s essay) will live.

This paucity of perspectives is something I was getting at in my COVID-19 editorial for the Canadian Science Policy Centre. My thesis is that we need a range of ideas and insights that cannot be culled from small groups of people who’ve trained and read the same materials or entrepreneurs who too often seem to put profit over thoughtful implementations of new technologies. (See the PDF May 2020 edition [you’ll find me under Policy Development] or see my May 15, 2020 posting here with all the sources listed.)

As for this new research at Stanford, it’s exciting news, which raises questions, as it offers the hope of independent movement for people diagnosed as tetraplegic (sometimes known as quadriplegic).

Hydrogel (a soft, wet material) can memorize, retrieve, and forget information like a human brain

This is fascinating and it’s not a memristor. (You can find out more about memristors here on the Nanowerk website.) Getting back to the research, scientists at Hokkaido University (Japan) are training squishy hydrogel to remember, according to a July 28, 2020 news item on phys.org (Note: Links have been removed),

Hokkaido University researchers have found a soft and wet material that can memorize, retrieve, and forget information, much like the human brain. They report their findings in the journal Proceedings of the National Academy of Sciences (PNAS).

The human brain learns things, but tends to forget them when the information is no longer important. Recreating this dynamic memory process in manmade materials has been a challenge. Hokkaido University researchers now report a hydrogel that mimics the dynamic memory function of the brain: encoding information that fades with time depending on the memory intensity.

Hydrogels are flexible materials composed of a large percentage of water—in this case about 45%—along with other chemicals that provide a scaffold-like structure to contain the water. Professor Jian Ping Gong, Assistant Professor Kunpeng Cui and their students and colleagues in Hokkaido University’s Institute for Chemical Reaction Design and Discovery (WPI-ICReDD) are seeking to develop hydrogels that can serve biological functions.

“Hydrogels are excellent candidates to mimic biological functions because they are soft and wet like human tissues,” says Gong. “We are excited to demonstrate how hydrogels can mimic some of the memory functions of brain tissue.”

Caption: The hydrogel’s memorizing-forgetting behavior is achieved based on fast water uptake (swelling) at high temperature and slow water release (shrinking) at low temperature, which is enabled by dynamic bonds in the gel. The swelling part turns from transparent to opaque when cooled, enabling memory retrieval. (Chengtao Yu et al., PNAS, July 27, 2020) Credit: Chengtao Yu et al., PNAS, July 27, 2020

A July 27, 2020 Hokkaido University press release (also on EurekAlert but published July 28, 2020), which originated the news item, investigates just how the scientists trained the hydrogel,

In this study, the researchers placed a thin hydrogel between two plastic plates; the top plate had a shape or letters cut out, leaving only that area of the hydrogel exposed. For example, patterns included an airplane and the word “GEL.” They initially placed the gel in a cold water bath to establish equilibrium. Then they moved the gel to a hot bath. The gel absorbed water into its structure causing a swell, but only in the exposed area. This imprinted the pattern, which is like a piece of information, onto the gel. When the gel was moved back to the cold water bath, the exposed area turned opaque, making the stored information visible, due to what they call “structure frustration.” At the cold temperature, the hydrogel gradually shrank, releasing the water it had absorbed. The pattern slowly faded. The longer the gel was left in the hot water, the darker or more intense the imprint would be, and therefore the longer it took to fade or “forget” the information. The team also showed hotter temperatures intensified the memories.

“This is similar to humans,” says Cui. “The longer you spend learning something or the stronger the emotional stimuli, the longer it takes to forget it.”

The team showed that the memory established in the hydrogel is stable against temperature fluctuation and large physical stretching. More interestingly, the forgetting processes can be programmed by tuning the thermal learning time or temperature. For example, when they applied different learning times to each letter of “GEL,” the letters disappeared sequentially.
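
The learning-forgetting asymmetry described here, fast water uptake when hot and slow release when cold, can be caricatured as two first-order processes with very different time constants. The toy model below uses made-up constants, yet reproduces the qualitative behavior, including letters that fade in the order of their learning times.

```python
import numpy as np

TAU_LEARN, TAU_FORGET = 5.0, 120.0   # made-up time constants (arbitrary units)

def memory_after(learn_time, forget_time):
    """Imprint intensity after learning (hot bath) then forgetting (cold bath).

    Learning saturates quickly; forgetting decays slowly, so longer
    learning times survive longer, as in the 'GEL' demonstration.
    """
    imprint = 1.0 - np.exp(-learn_time / TAU_LEARN)       # fast swelling
    return imprint * np.exp(-forget_time / TAU_FORGET)    # slow shrinking

# Different learning times per letter: shorter-learned letters fade first.
for letter, t_learn in [("G", 2.0), ("E", 6.0), ("L", 15.0)]:
    visible_until = next(t for t in range(0, 1000)
                         if memory_after(t_learn, t) < 0.05)
    print(f"{letter}: initial imprint {memory_after(t_learn, 0):.2f}, "
          f"fades below threshold at t ~ {visible_until}")
```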

The team used a hydrogel containing materials called polyampholytes or PA gels. The memorizing-forgetting behavior is achieved based on fast water uptake and slow water release, which is enabled by dynamic bonds in the hydrogels. “This approach should work for a variety of hydrogels with physical bonds,” says Gong.

“The hydrogel’s brain-like memory system could be explored for some applications, such as disappearing messages for security,” Cui added.

Here’s a link to and a citation for the paper,

Hydrogels as dynamic memory with forgetting ability by Chengtao Yu, Honglei Guo, Kunpeng Cui, Xueyu Li, Ya Nan Ye, Takayuki Kurokawa, and Jian Ping Gong. PNAS August 11, 2020 117 (32) 18962-18968 DOI: https://doi.org/10.1073/pnas.2006842117 First published July 27, 2020

This paper is behind a paywall.

Neurotransistor for brainlike (neuromorphic) computing

According to researchers at Helmholtz-Zentrum Dresden-Rossendorf and the rest of the international team collaborating on the work, it’s time to look more closely at plasticity in the neuronal membrane.

From the abstract for their paper, Intrinsic plasticity of silicon nanowire neurotransistors for dynamic memory and learning functions by Eunhye Baek, Nikhil Ranjan Das, Carlo Vittorio Cannistraci, Taiuk Rim, Gilbert Santiago Cañón Bermúdez, Khrystyna Nych, Hyeonsu Cho, Kihyun Kim, Chang-Ki Baek, Denys Makarov, Ronald Tetzlaff, Leon Chua, Larysa Baraban & Gianaurelio Cuniberti. Nature Electronics volume 3, pages 398–408 (2020) DOI: https://doi.org/10.1038/s41928-020-0412-1 Published online: 25 May 2020 Issue Date: July 2020

Neuromorphic architectures merge learning and memory functions within a single unit cell and in a neuron-like fashion. Research in the field has been mainly focused on the plasticity of artificial synapses. However, the intrinsic plasticity of the neuronal membrane is also important in the implementation of neuromorphic information processing. Here we report a neurotransistor made from a silicon nanowire transistor coated by an ion-doped sol–gel silicate film that can emulate the intrinsic plasticity of the neuronal membrane.

Caption: Neurotransistors: from silicon chips to neuromorphic architecture. Credit: TU Dresden / E. Baek Courtesy: Helmholtz-Zentrum Dresden-Rossendorf

A July 14, 2020 news item on Nanowerk announced the research (Note: A link has been removed),

Especially activities in the field of artificial intelligence, like teaching robots to walk or precise automatic image recognition, demand ever more powerful, yet at the same time more economical computer chips. While the optimization of conventional microelectronics is slowly reaching its physical limits, nature offers us a blueprint how information can be processed and stored quickly and efficiently: our own brain.

For the very first time, scientists at TU Dresden and the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) have now successfully imitated the functioning of brain neurons using semiconductor materials. They have published their research results in the journal Nature Electronics (“Intrinsic plasticity of silicon nanowire neurotransistors for dynamic memory and learning functions”).

A July 14, 2020 Helmholtz-Zentrum Dresden-Rossendorf press release (also on EurekAlert), which originated the news item, delves further into the research,

Today, enhancing the performance of microelectronics is usually achieved by reducing component size, especially of the individual transistors on the silicon computer chips. “But that can’t go on indefinitely – we need new approaches”, Larysa Baraban asserts. The physicist, who has been working at HZDR since the beginning of the year, is one of the three primary authors of the international study, which involved a total of six institutes. One approach is based on the brain, combining data processing with data storage in an artificial neuron.

“Our group has extensive experience with biological and chemical electronic sensors,” Baraban continues. “So, we simulated the properties of neurons using the principles of biosensors and modified a classical field-effect transistor to create an artificial neurotransistor.” The advantage of such an architecture lies in the simultaneous storage and processing of information in a single component. In conventional transistor technology, they are separated, which slows processing time and hence ultimately also limits performance.

Silicon wafer + polymer = chip capable of learning

Modeling computers on the human brain is no new idea. Scientists made attempts to hook up nerve cells to electronics in Petri dishes decades ago. “But a wet computer chip that has to be fed all the time is of no use to anybody,” says Gianaurelio Cuniberti from TU Dresden. The Professor for Materials Science and Nanotechnology is one of the three brains behind the neurotransistor alongside Ronald Tetzlaff, Professor of Fundamentals of Electrical Engineering in Dresden, and Leon Chua [emphasis mine] from the University of California at Berkeley, who had already postulated similar components in the early 1970s.

Now, Cuniberti, Baraban and their team have been able to implement it: “We apply a viscous substance – called solgel – to a conventional silicon wafer with circuits. This polymer hardens and becomes a porous ceramic,” the materials science professor explains. “Ions move between the holes. They are heavier than electrons and slower to return to their position after excitation. This delay, called hysteresis, is what causes the storage effect.” As Cuniberti explains, this is a decisive factor in the functioning of the transistor. “The more an individual transistor is excited, the sooner it will open and let the current flow. This strengthens the connection. The system is learning.”
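
A back-of-the-envelope way to picture this ion-driven hysteresis: treat the slow ions as a state variable that charges quickly while the gate is driven and relaxes slowly afterwards, so each new pulse finds the channel partly open. The sketch below is my own caricature of the behavior described, not the team’s device model.

```python
import numpy as np

def simulate_neurotransistor(pulses, dt=0.01, tau_up=0.05, tau_down=1.0):
    """Toy ion-gated transistor: state rises fast when driven, relaxes slowly.

    pulses   : array of gate drive (0 or 1) per time step
    tau_up   : fast charging time constant (excitation)
    tau_down : slow ionic relaxation (the hysteresis/'memory' effect)
    Returns the effective conductance over time (illustrative units).
    """
    state = 0.0
    conductance = np.empty(len(pulses))
    for i, drive in enumerate(pulses):
        tau = tau_up if drive > 0 else tau_down
        state += dt / tau * (drive - state)
        conductance[i] = state ** 2        # nonlinear 'opening' of the channel
    return conductance

# Repeated excitation: each pulse finds the channel still partly open,
# so the device 'opens sooner' with every repetition -- it is learning.
t = np.arange(0, 5, 0.01)
pulses = ((t % 1.0) < 0.1).astype(float)   # brief pulse every 1 time unit
g = simulate_neurotransistor(pulses)
peaks = [g[(t > k) & (t < k + 0.15)].max() for k in range(5)]
print("peak conductance per pulse:", np.round(peaks, 3))
```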

Cuniberti and his team are not focused on conventional issues, though. “Computers based on our chip would be less precise and tend to estimate mathematical computations rather than calculating them down to the last decimal,” the scientist explains. “But they would be more intelligent. For example, a robot with such processors would learn to walk or grasp; it would possess an optical system and learn to recognize connections. And all this without having to develop any software.” But these are not the only advantages of neuromorphic computers. Thanks to their plasticity, which is similar to that of the human brain, they can adapt to changing tasks during operation and, thus, solve problems for which they were not originally programmed.

I highlighted Dr. Leon Chua’s name as he was one of the first to conceptualize the notion of a memristor (memory resistor), which is what the press release seems to be referencing with the mention of artificial synapses. Dr. Chua very kindly answered a few questions for me about his work which I published in an April 13, 2010 posting (scroll down about 40% of the way).

Brain-inspired computer with optimized neural networks

Caption: Left to right: The experiment was performed on a prototype of the BrainScales-2 chip; Schematic representation of a neural network; Results for simple and complex tasks. Credit: Heidelberg University

I don’t often stumble across research from the European Union’s flagship Human Brain Project. So, this is a delightful occurrence especially with my interest in neuromorphic computing. From a July 22, 2020 Human Brain Project press release (also on EurekAlert),

Many computational properties are maximized when the dynamics of a network are at a “critical point”, a state where systems can quickly change their overall characteristics in fundamental ways, transitioning e.g. between order and chaos or stability and instability. Therefore, the critical state is widely assumed to be optimal for any computation in recurrent neural networks, which are used in many AI [artificial intelligence] applications.

Researchers from the HBP [Human Brain Project] partner Heidelberg University and the Max-Planck-Institute for Dynamics and Self-Organization challenged this assumption by testing the performance of a spiking recurrent neural network on a set of tasks with varying complexity at – and away from critical dynamics. They instantiated the network on a prototype of the analog neuromorphic BrainScaleS-2 system. BrainScaleS is a state-of-the-art brain-inspired computing system with synaptic plasticity implemented directly on the chip. It is one of two neuromorphic systems currently under development within the European Human Brain Project.

First, the researchers showed that the distance to criticality can be easily adjusted in the chip by changing the input strength, and then demonstrated a clear relation between criticality and task-performance. The assumption that criticality is beneficial for every task was not confirmed: whereas the information-theoretic measures all showed that network capacity was maximal at criticality, only the complex, memory intensive tasks profited from it, while simple tasks actually suffered. The study thus provides a more precise understanding of how the collective network state should be tuned to different task requirements for optimal performance.
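
For readers new to ‘criticality’, the notion of distance to a critical point can be made concrete with a branching-network toy model: each spike triggers on average m follow-up spikes, and an external input h keeps activity alive. At m = 1 the dynamics are critical; smaller m (sustained by stronger input) is subcritical. A hedged sketch of that general idea, not the BrainScaleS-2 setup:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_branching(m, h, steps=5000):
    """Branching network: activity A[t+1] ~ Poisson(m * A[t] + h).

    m : branching ratio (m = 1 is the critical point)
    h : external input rate; stronger input sustains the network
        further from criticality in this regime
    """
    A = np.empty(steps)
    A[0] = h
    for t in range(1, steps):
        A[t] = rng.poisson(m * A[t - 1] + h)
    return A

def estimate_m(A):
    """Regression estimator of the branching ratio from activity."""
    x, y = A[:-1], A[1:]
    return np.cov(x, y)[0, 1] / np.var(x)

# Sweep operating points: from near-critical to clearly subcritical.
for m, h in [(0.99, 1.0), (0.9, 10.0), (0.7, 30.0)]:
    A = simulate_branching(m, h)
    print(f"true m = {m:.2f}, estimated m = {estimate_m(A):.2f}, "
          f"mean activity = {A.mean():.1f}")
```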

Mechanistically, the optimal working point for each task can be set very easily under homeostatic plasticity by adapting the mean input strength. The theory behind this mechanism was developed very recently at the Max Planck Institute. “Putting it to work on neuromorphic hardware shows that these plasticity rules are very capable in tuning network dynamics to varying distances from criticality”, says senior author Viola Priesemann, group leader at MPIDS. Thereby tasks of varying complexity can be solved optimally within that space.

The finding may also explain why biological neural networks operate not necessarily at criticality, but in the dynamically rich vicinity of a critical point, where they can tune their computation properties to task requirements. Furthermore, it establishes neuromorphic hardware as a fast and scalable avenue to explore the impact of biological plasticity rules on neural computation and network dynamics.

“As a next step, we now study and characterize the impact of the spiking network’s working point on classifying artificial and real-world spoken words”, says first author Benjamin Cramer of Heidelberg University.

Here’s a link to and a citation for the paper,

Control of criticality and computation in spiking neuromorphic networks with plasticity by Benjamin Cramer, David Stöckel, Markus Kreft, Michael Wibral, Johannes Schemmel, Karlheinz Meier & Viola Priesemann. Nature Communications volume 11, Article number: 2853 (2020) DOI: https://doi.org/10.1038/s41467-020-16548-3 Published: 05 June 2020

This paper is open access.

Improving neuromorphic devices with ion conducting polymer

A July 1, 2020 news item on ScienceDaily announces work which researchers are hopeful will allow them to exert more control over neuromorphic devices’ speed of response,

“Neuromorphic” refers to mimicking the behavior of brain neural cells. When one speaks of neuromorphic computers, they are talking about making computers think and process more like human brains, operating at high speed with low energy consumption.

Despite a growing interest in polymer-based neuromorphic devices, researchers have yet to establish an effective method for controlling the response speed of devices. Researchers from Tohoku University and the University of Cambridge, however, have overcome this obstacle by mixing the polymers PSS-Na and PEDOT:PSS, discovering that adding an ion-conducting polymer improves neuromorphic device response times.

A June 24, 2020 Tohoku University press release (also on EurekAlert), which originated the news item, provides a few more technical details,

Polymers are materials composed of long molecular chains and play a fundamental role in modern life, from the rubber in tires to water bottles to polystyrene. Mixing polymers together results in the creation of new materials with their own distinct physical properties.

Most studies on neuromorphic devices based on polymer focus exclusively on the application of PEDOT:PSS, a mixed conductor that transports both electrons and ions. PSS-Na, on the other hand, transports ions only. By blending these two polymers, the researchers could enhance the ion diffusivity in the active layer of the device. Their measurements confirmed an improvement in device response time, achieving a five-fold shortening at maximum. The results also proved how closely related response time is to the diffusivity of ions in the active layer.
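
The relation between ion diffusivity and response time follows from simple diffusion scaling: the time for ions to traverse an active layer of thickness L goes roughly as tau ~ L^2 / D, so a five-fold boost to the effective diffusivity D shortens the response roughly five-fold. A back-of-the-envelope sketch with invented numbers:

```python
# Rough scaling: response time of an ion-transport layer ~ L^2 / D.
# All numbers below are invented for illustration only.

L = 100e-9                     # hypothetical active-layer thickness: 100 nm
D_pedot_pss = 1e-14            # hypothetical ion diffusivity, m^2/s
D_blend = 5e-14                # hypothetical 5x higher diffusivity in the blend

tau_before = L**2 / D_pedot_pss
tau_after = L**2 / D_blend

print(f"response time, PEDOT:PSS alone: {tau_before * 1e3:.0f} ms")
print(f"response time, polymer blend  : {tau_after * 1e3:.0f} ms")
print(f"speed-up factor               : {tau_before / tau_after:.0f}x")
```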

“Our study paves the way for a deeper understanding behind the science of conducting polymers.” explains co-author Shunsuke Yamamoto from the Department of Biomolecular Engineering at Tohoku University’s Graduate School of Engineering. “Moving forward, it may be possible to create artificial neural networks composed of multiple neuromorphic devices,” he adds.

Here’s a link to and a citation for the paper,

Controlling the Neuromorphic Behavior of Organic Electrochemical Transistors by Blending Mixed and Ion Conductors by Shunsuke Yamamoto and George G. Malliaras. ACS [American Chemical Society] Appl. Electron. Mater. 2020, XXXX, XXX, XXX-XXX DOI: https://doi.org/10.1021/acsaelm.0c00203 Publication Date:June 15, 2020 Copyright © 2020 American Chemical Society

This paper is behind a paywall.