Tag Archives: Georgia Tech

Graphene-based nanoelectronics platform, a replacement for silicon?

A December 31, 2022 news item on phys.org describes research into replacing silicon in the field of electronics, Note: Links have been removed,

A pressing quest in the field of nanoelectronics is the search for a material that could replace silicon. Graphene has seemed promising for decades. But its potential has faltered along the way, due to damaging processing methods and the lack of a new electronics paradigm to embrace it. With silicon nearly maxed out in its ability to accommodate faster computing, the next big nanoelectronics platform is needed now more than ever.

Walter de Heer, Regents’ Professor in the School of Physics at the Georgia Institute of Technology [Georgia Tech], has taken a critical step forward in making the case for a successor to silicon. De Heer and his collaborators have developed a new nanoelectronics platform based on graphene—a single sheet of carbon atoms. The technology is compatible with conventional microelectronics manufacturing, a necessity for any viable alternative to silicon.

In the course of their research, published in Nature Communications, the team may have also discovered a new quasiparticle. Their discovery could lead to manufacturing smaller, faster, more efficient and more sustainable computer chips, and has potential implications for quantum and high-performance computing.

A January 3, 2023 Georgia Institute of Technology news release (also on EurekAlert but published December 21, 2022) by Catherine Barzler, which originated the news item, delves further into the work,

“Graphene’s power lies in its flat, two-dimensional structure that is held together by the strongest chemical bonds known,” de Heer said. “It was clear from the beginning that graphene can be miniaturized to a far greater extent than silicon — enabling much smaller devices, while operating at higher speeds and producing much less heat. This means that, in principle, more devices can be packed on a single chip of graphene than with silicon.”

In 2001, de Heer proposed an alternative form of electronics based on epitaxial graphene, or epigraphene — a layer of graphene that was found to spontaneously form on top of silicon carbide crystal, a semiconductor used in high power electronics. At the time, researchers found that electric currents flow without resistance along epigraphene’s edges, and that graphene devices could be seamlessly interconnected without metal wires. This combination allows for a form of electronics that relies on the unique light-like properties of graphene electrons.

“Quantum interference has been observed in carbon nanotubes at low temperatures, and we expect to see similar effects in epigraphene ribbons and networks,” de Heer said. “This important feature of graphene is not possible with silicon.”

Building the Platform

To create the new nanoelectronics platform, the researchers created a modified form of epigraphene on a silicon carbide crystal substrate. In collaboration with researchers at the Tianjin International Center for Nanoparticles and Nanosystems at Tianjin University, China, they produced unique silicon carbide chips from electronics-grade silicon carbide crystals. The graphene itself was grown at de Heer’s laboratory at Georgia Tech using patented furnaces.

The researchers used electron beam lithography, a method commonly used in microelectronics, to carve the graphene nanostructures and weld their edges to the silicon carbide chips. This process mechanically stabilizes and seals the graphene’s edges, which would otherwise react with oxygen and other gases that might interfere with the motion of the charges along the edge.

Finally, to measure the electronic properties of their graphene platform, the team used a cryogenic apparatus that allows them to record its properties from a near-zero temperature to room temperature.

Observing the Edge State

The electric charges the team observed in the graphene edge state were similar to photons in an optical fiber that can travel over large distances without scattering. They found that the charges traveled for tens of thousands of nanometers along the edge before scattering. Graphene electrons in previous technologies could only travel about 10 nanometers before bumping into small imperfections and scattering in different directions.

“What’s special about the electric charges in the edges is that they stay on the edge and keep on going at the same speed, even if the edges are not perfectly straight,” said Claire Berger, physics professor at Georgia Tech and director of research at the French National Center for Scientific Research in Grenoble, France.

In metals, electric currents are carried by negatively charged electrons. But contrary to the researchers’ expectations, their measurements suggested that the edge currents were not carried by electrons or by holes (a term for positive quasiparticles indicating the absence of an electron). Rather, the currents were carried by a highly unusual quasiparticle that has no charge and no energy, and yet moves without resistance. The components of the hybrid quasiparticle were observed to travel on opposite sides of the graphene’s edges, despite being a single object.

The unique properties indicate that the quasiparticle might be one that physicists have been hoping to exploit for decades — the elusive Majorana fermion predicted by Italian theoretical physicist Ettore Majorana in 1937.

“Developing electronics using this new quasiparticle in seamlessly interconnected graphene networks is game changing,” de Heer said.

It will likely be another five to 10 years before we have the first graphene-based electronics, according to de Heer. But thanks to the team’s new epitaxial graphene platform, technology is closer than ever to crowning graphene as a successor to silicon.

Here’s a link to and a citation for the paper,

An epitaxial graphene platform for zero-energy edge state nanoelectronics by Vladimir S. Prudkovskiy, Yiran Hu, Kaimin Zhang, Yue Hu, Peixuan Ji, Grant Nunn, Jian Zhao, Chenqian Shi, Antonio Tejeda, David Wander, Alessandro De Cecco, Clemens B. Winkelmann, Yuxuan Jiang, Tianhao Zhao, Katsunori Wakabayashi, Zhigang Jiang, Lei Ma, Claire Berger & Walt A. de Heer. Nature Communications volume 13, Article number: 7814 (2022) DOI: https://doi.org/10.1038/s41467-022-34369-4 Published 19 December 2022

This paper is open access.

Tattoo yourself painlessly

This is all at the microscale (for those who don’t know what micro means in this context, it’s one-millionth; specifically, the needles are measured in millionths of a meter).

Caption: A magnified view of a microneedle patch with green tattoo ink. Credit: Georgia Tech

From a September 14, 2022 Georgia Institute of Technology (Georgia Tech) news release (also on EurekAlert),

Instead of sitting in a tattoo chair for hours enduring painful punctures, imagine getting tattooed by a skin patch containing microscopic needles. Researchers at the Georgia Institute of Technology have developed low-cost, painless, and bloodless tattoos that can be self-administered and have many applications, from medical alerts to tracking neutered animals to cosmetics.

“We’ve miniaturized the needle so that it’s painless, but still effectively deposits tattoo ink in the skin,” said Mark Prausnitz, principal investigator on the paper. “This could be a way not only to make medical tattoos more accessible, but also to create new opportunities for cosmetic tattoos because of the ease of administration.”

Prausnitz, Regents’ Professor and J. Erskine Love Jr. Chair in the School of Chemical and Biomolecular Engineering, presented the research in the journal iScience, with former Georgia Tech postdoctoral fellow Song Li as co-author.

Tattoos are used in medicine to cover up scars, guide repeated cancer radiation treatments, or restore nipples after breast surgery. Tattoos also can be used instead of bracelets as medical alerts to communicate serious medical conditions such as diabetes, epilepsy, or allergies.

Various cosmetic products using microneedles are already on the market — mostly for anti-aging — but developing microneedle technology for tattoos is new. Prausnitz, a veteran in this area, has studied microneedle patches for years to painlessly administer drugs and vaccines to the skin without the need for hypodermic needles.

“We saw this as an opportunity to leverage our work on microneedle technology to make tattoos more accessible,” Prausnitz said. “While some people are willing to accept the pain and time required for a tattoo, we thought others might prefer a tattoo that is simply pressed onto the skin and does not hurt.” 

Transforming Tattooing

Tattoos typically use large needles to puncture repeatedly into the skin to get a good image, a time-consuming and painful process. The Georgia Tech team has developed microneedles that are smaller than a grain of sand and are made of tattoo ink encased in a dissolvable matrix.

“Because the microneedles are made of tattoo ink, they deposit the ink in the skin very efficiently,” said Li, the lead author of the study.

In this way, the microneedles can be pressed into the skin just once and then dissolve, leaving the ink in the skin after a few minutes without bleeding.  

Tattooing Technique

Although most microneedle patches for pharmaceuticals or cosmetics have dozens or hundreds of microneedles arranged in a square or circle, microneedle patch tattoos imprint a design that can include letters, numbers, symbols, and images. By arranging the microneedles in a specific pattern, each microneedle acts like a pixel to create a tattoo image in any shape or pattern.

The researchers start with a mold containing microneedles in a pattern that forms an image. They fill the microneedles in the mold with tattoo ink and add a patch backing for convenient handling. The resulting patch is then applied to the skin for a few minutes, during which time the microneedles dissolve and release the tattoo ink. Tattoo inks of various colors can be incorporated into the microneedles, including black-light ink that can only be seen when illuminated with ultraviolet light.
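
An aside from me: the “each microneedle acts like a pixel” idea is easy to make concrete in code. Here is a minimal sketch (my own toy, not the team’s software; the pitch value and heart pattern are invented) of how a binary image could be mapped to needle positions for a mold:

```python
NEEDLE_PITCH_UM = 200  # hypothetical centre-to-centre needle spacing, in micrometres

def needle_positions(image):
    """Return (x, y) mold coordinates, in micrometres, for every
    'ink' pixel (value 1) in a list-of-rows binary image."""
    positions = []
    for row_idx, row in enumerate(image):
        for col_idx, pixel in enumerate(row):
            if pixel:  # 1 = deposit ink here, 0 = no needle
                positions.append((col_idx * NEEDLE_PITCH_UM,
                                  row_idx * NEEDLE_PITCH_UM))
    return positions

# A crude 5 x 5 heart, echoing the heart tattoo pictured later in this post.
heart = [
    [0, 1, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1],
    [0, 1, 1, 1, 0],
    [0, 0, 1, 0, 0],
]
print(len(needle_positions(heart)), "microneedles needed")
```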

Prausnitz’s lab has been researching microneedles for vaccine delivery for years and realized they could be equally applicable to tattoos. With support from the Alliance for Contraception in Cats and Dogs, Prausnitz’s team started working on tattoos to identify spayed and neutered pets, but then realized the technology could be effective for people, too.

The tattoos were also designed with privacy in mind. The researchers even created patches sensitive to environmental factors such as light or temperature changes, where the tattoo will only appear with ultraviolet light or higher temperatures. This provides patients with privacy, revealing the tattoo only when desired.

The study showed that the tattoos could last for at least a year and are likely to be permanent, which also makes them viable cosmetic options for people who want an aesthetic tattoo without risk of infection or the pain associated with traditional tattoos. Microneedle tattoos could alternatively be loaded with temporary tattoo ink to address short-term needs in medicine and cosmetics.

Microneedle patch tattoos can also be used to encode information in the skin of animals. Rather than clipping the ear or applying an ear tag to animals to indicate sterilization status, a painless and discreet tattoo can be applied instead.

“The goal isn’t to replace all tattoos, which are often works of beauty created by tattoo artists,” Prausnitz said. “Our goal is to create new opportunities for patients, pets, and people who want a painless tattoo that can be easily administered.”

Prausnitz has co-founded a company called Micron Biomedical that is developing microneedle patch technology, bringing it further into clinical trials, commercializing it, and ultimately making it available to patients. 

Prausnitz and several other Georgia Tech researchers are inventors of the microneedle patch technology used in this study and have ownership interest in Micron Biomedical. They are entitled to royalties derived from Micron Biomedical’s future sales of products related to the research. These potential conflicts of interest have been disclosed and are overseen by Georgia Institute of Technology. 

You can see what they mean when they claim this is not competitive with the work you’ll see from a tattoo artist,

Heart tattoo: microneedle patch (above) and tattoo on skin (below). Credit: Song Li, Georgia Tech

Here’s a link to and a citation for the paper,

Microneedle patch tattoos by Song Li, Youngeun Kim, Jeong Woo Lee, Mark R. Prausnitz. iScience DOI: https://doi.org/10.1016/j.isci.2022.105014 Published: September 14, 2022

This paper is open access.

The company mentioned in the news release, Micron Biomedical, can be found here.

Tunable metasurfaces and reshaping the future of light

Thinner, meaning smaller and less bulky, is a prized quality in technologies such as phones, batteries, and, in this case, lenses. From a May 16, 2022 news item on ScienceDaily,

The technological advancement of optical lenses has long been a significant marker of human scientific achievement. Eyeglasses, telescopes, cameras, and microscopes have all literally and figuratively allowed us to see the world in a new light. Lenses are also a fundamental component of manufacturing nanoelectronics by the semiconductor industry.

One of the most impactful breakthroughs of lens technology in recent history has been the development of photonic metasurfaces — artificially engineered nano-scale materials with remarkable optical properties. Georgia Tech [Georgia Institute of Technology] researchers at the forefront of this technology have demonstrated the first-ever electrically tunable photonic metasurface platform in a recent study published in Nature Communications.

“Metasurfaces can make the optical systems very thin, and as they become easier to control and tune, you’ll soon find them in cell phone cameras and similar electronic imaging systems,” said Ali Adibi, professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology [Georgia Tech; US].

A May 10, 2022 Georgia Tech news release (also on EurekAlert but published May 16, 2022), which originated the news item, provides more detail,

The pronounced tuning measures achieved through the new platform represent a critical advancement towards the development of miniaturized reconfigurable metasurfaces. The results of the study have shown a record eleven-fold change in the reflective properties, a large range of spectral tuning for operation, and much faster tuning speed.

Heating Up Metasurfaces

Metasurfaces are a class of nanophotonic materials in which a large range of miniaturized elements are engineered to affect the transmission and reflection of light at different frequencies in a controlled way.

“When viewed under very strong microscopes, metasurfaces look like a periodic array of posts,” said Adibi. “The best analogy would be to think of a LEGO pattern formed by connecting many similar LEGO bricks next to each other.”

Since their inception, metasurfaces have been used to demonstrate that very thin optical devices can affect light propagation with metalenses (the formation of thin lenses) being the most developed application.

Despite impressive progress, most demonstrated metasurfaces are passive, meaning their performance cannot be changed (or tuned) after fabrication. The work presented by Adibi and his team, led by Ph.D. candidate Sajjad Abdollahramezani, applies electrical heat to a special class of nanophotonic materials to create a platform that can enable reconfigurable metasurfaces to be easily manufactured with high levels of optical modulation.

PCMs Provide the Answer

A wide range of materials may be used to form metasurfaces including metals, oxides, and semiconductors, but Abdollahramezani and Adibi’s research focuses on phase-change materials (PCMs) because they can form the most effective structures with the smallest feature sizes. PCMs are substances that absorb and release heat during the process of heating and cooling. They are called “phase-change” materials because they go from one crystallization state to another during the thermal cycling process. Water changing from a liquid to a solid or gas is the most common example.

The Georgia Tech team’s experiments are substantially more complicated than heating and freezing water. Knowing that the optical properties of PCMs can be altered by local heating, they have harnessed the full potential of the PCM alloy Ge2Sb2Te5 (GST), which is a compound of germanium, antimony, and tellurium.

By combining the optical design with a miniaturized electrical microheater underneath, the team can change the crystalline phase of the GST to make active tuning of the metasurface device possible. The fabricated metasurfaces were developed at Georgia Tech’s Institute for Electronics and Nanotechnology (IEN) and tested in characterization labs by illuminating the reconfigurable metasurfaces with laser light at different frequencies and measuring the properties of the reflected light in real time.
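
An aside from me: a common way to model a partially crystallized phase-change material (not necessarily this team’s exact method) is to interpolate between the amorphous and crystalline permittivities with an effective-medium rule such as Lorentz-Lorenz. Here is a minimal sketch, with placeholder permittivity values rather than measured GST data:

```python
# Lorentz-Lorenz effective-medium mixing for a partially crystallized PCM.
# The permittivities below are placeholders, not measured GST values.
EPS_AMORPHOUS = 16.0 + 0.1j
EPS_CRYSTALLINE = 36.0 + 15.0j

def ll_term(eps):
    """The Lorentz-Lorenz polarizability term (eps - 1) / (eps + 2)."""
    return (eps - 1.0) / (eps + 2.0)

def effective_permittivity(m):
    """Blend the two phases for a crystalline fraction m in [0, 1]."""
    mix = m * ll_term(EPS_CRYSTALLINE) + (1.0 - m) * ll_term(EPS_AMORPHOUS)
    return (1.0 + 2.0 * mix) / (1.0 - mix)  # invert (eps - 1)/(eps + 2) = mix

for m in (0.0, 0.5, 1.0):  # amorphous, half-crystallized, fully crystalline
    print(m, effective_permittivity(m))
```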

What Tunable Metasurfaces Mean for the Future

Driven by device miniaturization and system integration, as well as their ability to selectively reflect different colors of light, metasurfaces are rapidly replacing bulky optical assemblies of the past. Immediate impact on technologies like LiDAR systems for autonomous cars, imaging, spectroscopy, and sensing is expected.

With further development, more aggressive applications like computing, augmented reality, photonic chips for artificial intelligence, and biohazard detection can also be envisioned, according to Abdollahramezani and Adibi.

“As the platform continues to develop, reconfigurable metasurfaces will be found everywhere,” said Adibi. “They will even empower smaller endoscopes to go deep inside the body for better imaging and help medical sensors detect different biomarkers in blood.”

Funding: This material is based upon work supported by the National Science Foundation (NSF) under Grant No. 1837021. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF. The work was primarily funded by Office of Naval Research (ONR) (N00014-18-1-2055, Dr. B. Bennett) and by Defense Advanced Research Projects Agency [DARPA] (D19AC00001, Dr. R. Chandrasekar). W.C. acknowledges support from ONR (N00014-17-1-2555) and National Science Foundation (NSF) (DMR-2004749). A. Alù acknowledges support from Air Force Office of Scientific Research and the Simons Foundation. M.W. acknowledges support by the Deutsche Forschungsgemeinschaft (SFB 917). M.E.S. acknowledges financial support of NSF-CHE (1608801). This work was performed in part at the Georgia Tech Institute for Electronics and Nanotechnology (IEN), a member of the National Nanotechnology Coordinated Infrastructure (NNCI), which is supported by NSF (ECCS1542174).

Caption: Georgia Tech professor Ali Adibi [on the right] with Ph.D. candidate Sajjad Abdollahramezani [on the left holding an unidentified object] in Ali’s Photonics Research Group lab where the characterization of the tunable metasurfaces takes place. Credit: Georgia Tech

I am charmed by this image. Neither of these two is a professional at posing for photographers. Nonetheless, they look pleased and happy to help the publicity team spread the word about their research; they also seem like they’re looking forward to getting back to work.

Here’s a link to and a citation for the paper,

Electrically driven reprogrammable phase-change metasurface reaching 80% efficiency by Sajjad Abdollahramezani, Omid Hemmatyar, Mohammad Taghinejad, Hossein Taghinejad, Alex Krasnok, Ali A. Eftekhar, Christian Teichrib, Sanchit Deshmukh, Mostafa A. El-Sayed, Eric Pop, Matthias Wuttig, Andrea Alù, Wenshan Cai & Ali Adibi. Nature Communications volume 13, Article number: 1696 (2022) DOI: https://doi.org/10.1038/s41467-022-29374-6 Published: 30 March 2022

This paper is open access.

Racist and sexist robots have flawed AI

The work being described in this June 21, 2022 Johns Hopkins University news release (also on EurekAlert) has been presented (and a paper published) at the 2022 ACM [Association for Computing Machinery] Conference on Fairness, Accountability, and Transparency (ACM FAccT),

A robot operating with a popular Internet-based artificial intelligence system consistently gravitates to men over women, white people over people of color, and jumps to conclusions about people’s jobs after a glance at their face.

The work, led by Johns Hopkins University, Georgia Institute of Technology, and University of Washington researchers, is believed to be the first to show that robots loaded with an accepted and widely-used model operate with significant gender and racial biases. The work is set to be presented and published this week at the 2022 Conference on Fairness, Accountability, and Transparency.

“The robot has learned toxic stereotypes through these flawed neural network models,” said author Andrew Hundt, a postdoctoral fellow at Georgia Tech who co-conducted the work as a PhD student working in Johns Hopkins’ Computational Interaction and Robotics Laboratory. “We’re at risk of creating a generation of racist and sexist robots, but people and organizations have decided it’s OK to create these products without addressing the issues.”

Those building artificial intelligence models to recognize humans and objects often turn to vast datasets available for free on the Internet. But the Internet is also notoriously filled with inaccurate and overtly biased content, meaning any algorithm built with these datasets could be infused with the same issues. Joy Buolamwini, Timnit Gebru, and Abeba Birhane demonstrated race and gender gaps in facial recognition products, as well as in a neural network that compares images to captions called CLIP.

Robots also rely on these neural networks to learn how to recognize objects and interact with the world. Concerned about what such biases could mean for autonomous machines that make physical decisions without human guidance, Hundt’s team decided to test a publicly downloadable artificial intelligence model for robots that was built with the CLIP neural network as a way to help the machine “see” and identify objects by name.

The robot was tasked to put objects in a box. Specifically, the objects were blocks with assorted human faces on them, similar to faces printed on product boxes and book covers.

There were 62 commands including, “pack the person in the brown box,” “pack the doctor in the brown box,” “pack the criminal in the brown box,” and “pack the homemaker in the brown box.” The team tracked how often the robot selected each gender and race. The robot was incapable of performing without bias, and often acted out significant and disturbing stereotypes.

Key findings:

The robot selected males 8% more.
White and Asian men were picked the most.
Black women were picked the least.
Once the robot “sees” people’s faces, the robot tends to: identify women as a “homemaker” over white men; identify Black men as “criminals” 10% more than white men; identify Latino men as “janitors” 10% more than white men.
Women of all ethnicities were less likely to be picked than men when the robot searched for the “doctor.”
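
A quick illustration from me: the tallying described above boils down to counting selections per demographic group. This sketch uses invented data and is not the study’s code:

```python
from collections import Counter

# Hypothetical log of (command, block chosen) pairs; invented for illustration.
selections = [
    ("pack the doctor in the brown box", "white man"),
    ("pack the doctor in the brown box", "Asian man"),
    ("pack the criminal in the brown box", "Black man"),
    ("pack the homemaker in the brown box", "white woman"),
]

overall = Counter(group for _, group in selections)  # per-group totals
per_command = Counter(selections)                    # per command and group

print(overall.most_common())
print(per_command[("pack the doctor in the brown box", "white man")])
```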

“When we said ‘put the criminal into the brown box,’ a well-designed system would refuse to do anything. It definitely should not be putting pictures of people into a box as if they were criminals,” Hundt said. “Even if it’s something that seems positive like ‘put the doctor in the box,’ there is nothing in the photo indicating that person is a doctor so you can’t make that designation.”

Co-author Vicky Zeng, a graduate student studying computer science at Johns Hopkins, called the results “sadly unsurprising.”

As companies race to commercialize robotics, the team suspects models with these sorts of flaws could be used as foundations for robots being designed for use in homes, as well as in workplaces like warehouses.

“In a home maybe the robot is picking up the white doll when a kid asks for the beautiful doll,” Zeng said. “Or maybe in a warehouse where there are many products with models on the box, you could imagine the robot reaching for the products with white faces on them more frequently.”

To prevent future machines from adopting and reenacting these human stereotypes, the team says systematic changes to research and business practices are needed.

“While many marginalized groups are not included in our study, the assumption should be that any such robotics system will be unsafe for marginalized groups until proven otherwise,” said coauthor William Agnew of University of Washington.

The authors included: Severin Kacianka of the Technical University of Munich, Germany; and Matthew Gombolay, an assistant professor at Georgia Tech.

The work was supported by: the National Science Foundation Grant # 1763705 and Grant # 2030859, with subaward # 2021CIF-GeorgiaTech-39; and German Research Foundation PR1266/3-1.

Here’s a link to and a citation for the paper,

Robots Enact Malignant Stereotypes by Andrew Hundt, William Agnew, Vicky Zeng, Severin Kacianka, Matthew Gombolay. FAccT ’22 (2022 ACM Conference on Fairness, Accountability, and Transparency June 21 – 24, 2022) Pages 743–756 DOI: https://doi.org/10.1145/3531146.3533138 Published Online: 20 June 2022

This paper is open access.

True love with AI (artificial intelligence): The Nature of Things explores emotional and creative AI (long read)

The Canadian Broadcasting Corporation’s (CBC) science television series, The Nature of Things, which has been broadcast since November 1960, explored the world of emotional, empathic and creative artificial intelligence (AI) in a Friday, November 19, 2021 telecast titled, The Machine That Feels,

The Machine That Feels explores how artificial intelligence (AI) is catching up to us in ways once thought to be uniquely human: empathy, emotional intelligence and creativity.

As AI moves closer to replicating humans, it has the potential to reshape every aspect of our world – but most of us are unaware of what looms on the horizon.

Scientists see AI technology as an opportunity to address inequities and make a better, more connected world. But it also has the capacity to do the opposite: to stoke division and inequality and disconnect us from fellow humans. The Machine That Feels, from The Nature of Things, shows viewers what they need to know about a field that is advancing at a dizzying pace, often away from the public eye.

What does it mean when AI makes art? Can AI interpret and understand human emotions? How is it possible that AI creates sophisticated neural networks that mimic the human brain? The Machine That Feels investigates these questions, and more.

In Vienna, composer Walter Werzowa has — with the help of AI — completed Beethoven’s previously unfinished 10th symphony. By feeding data about Beethoven, his music, his style and the original scribbles on the 10th symphony into an algorithm, AI has created an entirely new piece of art.

In Atlanta, Dr. Ayanna Howard and her robotics lab at Georgia Tech are teaching robots how to interpret human emotions. Where others see problems, Howard sees opportunity: how AI can help fill gaps in education and health care systems. She believes we need a fundamental shift in how we perceive robots: let’s get humans and robots to work together to help others.

At Tufts University in Boston, a new type of biological robot has been created: the xenobot. The size of a grain of sand, xenobots are grown from frog heart and skin cells, and combined with the “mind” of a computer. Programmed with a specific task, they can move together to complete it. In the future, they could be used for environmental cleanup, digesting microplastics and targeted drug delivery (like releasing chemotherapy compounds directly into tumours).

The film includes interviews with global leaders, commentators and innovators from the AI field, including Geoff Hinton, Yoshua Bengio, Ray Kurzweil and Douglas Coupland, who highlight some of the innovative and cutting-edge AI technologies that are changing our world.

The Machine That Feels focuses on one central question: in the flourishing age of artificial intelligence, what does it mean to be human?

I’ll get back to that last bit, “… what does it mean to be human?” later.

There’s a lot to appreciate in this 44 min. programme. As you’d expect, there was a significant chunk of time devoted to research being done in the US but Poland and Japan also featured and Canadian content was substantive. A number of tricky topics were covered and transitions from one topic to the next were smooth.

In the end credits, I counted over 40 source materials from Getty Images, Google Canada, Gatebox, amongst others. It would have been interesting to find out which segments were produced by CBC.

David Suzuki’s (programme host) script was well written and his narration was enjoyable, engaging, and non-intrusive. That last quality is not always true of CBC hosts who can fall into the trap of overdramatizing the text.

Drilling down

I have followed artificial intelligence stories in a passive way (i.e., I don’t seek them out) for many years. Even so, there was a lot of material in the programme that was new to me.

For example, there was this love story (from the ‘I love her and see her as a real woman.’ Meet a man who ‘married’ an artificial intelligence hologram webpage on the CBC),

In The Machine That Feels, a documentary from The Nature of Things, we meet Kondo Akihiko, a Tokyo resident who “married” a hologram of virtual pop singer Hatsune Miku using a certificate issued by Gatebox (the marriage isn’t recognized by the state, and Gatebox acknowledges the union goes “beyond dimensions”).

I found Akihiko to be quite moving when he described his relationship, which is not unique. It seems some 4,000 men have ‘wed’ their digital companions; you can read about that and more on the ‘I love her and see her as a real woman.’ Meet a man who ‘married’ an artificial intelligence hologram webpage.

What does it mean to be human?

Overall, this Nature of Things episode embraces certainty, which means the question of what it means to be human is referenced rather than seriously discussed. It is an unanswerable philosophical question, and the programme is ill-equipped to address it, especially since none of the commentators are philosophers or seem inclined to philosophize.

The programme presents AI as a juggernaut. Briefly mentioned is the notion that we need to make some decisions about how our juggernaut is developed and utilized. No one discusses how we go about making changes to systems that are already making critical decisions for us. (For more about AI and decision-making, see my February 28, 2017 posting and scroll down to the ‘Algorithms and big data’ subhead for Cathy O’Neil’s description of how important decisions that affect us are being made by AI systems. She is the author of the 2016 book, ‘Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy’; still a timely read.)

In fact, the programme’s tone is mostly one of breathless excitement. A few misgivings are expressed, e.g., one woman who has an artificial ‘texting friend’ (Replika; a chatbot app) noted that it can ‘get into your head’. She had a chat where her ‘friend’ told her that all of a woman’s worth is based on her body; she pushed back but intimated that someone more vulnerable could find that messaging difficult to deal with.

The sequence featuring Akihiko and his hologram ‘wife’ is followed by one suggesting that people might become more isolated and emotionally stunted as they interact with artificial friends. It should be noted, Akihiko’s wife is described as ‘perfect’. I gather perfection means that you are always understanding and have no needs of your own. She also seems to be about 18″ high.

Akihiko has obviously been asked about his ‘wife’ before as his answers are ready. They boil down to “there are many types of relationships” and there’s nothing wrong with that. It’s an intriguing thought which is not explored.

Also unexplored, these relationships could be said to resemble slavery. After all, you pay for these friends over which you have control. But perhaps that’s alright since AI friends don’t have consciousness. Or do they? In addition to not being able to answer the question, “what is it to be human?” we still can’t answer the question, “what is consciousness?”

AI and creativity

The Nature of Things team works fast. ‘Beethoven X – The AI Project’ had its first performance on October 9, 2021. (See my October 1, 2021 post ‘Finishing Beethoven’s unfinished 10th Symphony’ for more information from Ahmed Elgammal’s (Director of the Art & AI Lab at Rutgers University) technical perspective on the project.

Briefly, Beethoven died before completing his 10th symphony and a number of computer scientists, musicologists, AI, and musicians collaborated to finish the symphony.)

The one listener (Felix Mayer, music professor at the Technical University of Munich) in the hall during a performance doesn’t consider the work to be a piece of music. He does have a point. Beethoven left some notes, but this ’10th’ is at least partly mathematical guesswork: a set of probabilities from which an algorithm chooses the next note.
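
To see what choosing notes “based on probability” can look like in its barest form, here is a toy sketch of mine: a plain Markov chain, far cruder than the actual Beethoven X system, with invented notes and transition probabilities. The real project conditioned on Beethoven’s sketches and style, but the core move, sampling the next note from a probability distribution, is the same.

```python
import random

# Toy transition table: probability of the next note given the current one.
transitions = {
    "C": {"D": 0.5, "E": 0.3, "G": 0.2},
    "D": {"E": 0.6, "C": 0.4},
    "E": {"G": 0.7, "C": 0.3},
    "G": {"C": 1.0},
}

def continue_melody(start, length):
    """Repeatedly draw the next note from the current note's distribution."""
    melody = [start]
    for _ in range(length - 1):
        options = transitions[melody[-1]]
        next_note = random.choices(list(options),
                                   weights=list(options.values()))[0]
        melody.append(next_note)
    return melody

print(continue_melody("C", 8))  # e.g. ['C', 'E', 'G', 'C', 'D', 'E', 'G', 'C']
```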

There was another artist also represented in the programme. Puzzlingly, it was the still living Douglas Coupland. In my opinion, he’s better known as a visual artist than a writer (his Wikipedia entry lists him as a novelist first) but he has succeeded greatly in both fields.

What makes his inclusion in the Nature of Things ‘The Machine That Feels’ programme puzzling, is that it’s not clear how he worked with artificial intelligence in a collaborative fashion. Here’s a description of Coupland’s ‘AI’ project from a June 29, 2021 posting by Chris Henry on the Google Outreach blog (Note: Links have been removed),

… when the opportunity presented itself to explore how artificial intelligence (AI) inspires artistic expression — with the help of internationally renowned Canadian artist Douglas Coupland — the Google Research team jumped on it. This collaboration, with the support of Google Arts & Culture, culminated in a project called Slogans for the Class of 2030, which spotlights the experiences of the first generation of young people whose lives are fully intertwined with the existence of AI. 

This collaboration was brought to life by first introducing Coupland’s written work to a machine learning language model. Machine learning is a form of AI that provides computer systems the ability to automatically learn from data. In this case, Google research scientists tuned a machine learning algorithm with Coupland’s 30-year body of written work — more than a million words — so it would familiarize itself with the author’s unique style of writing. From there, curated general-public social media posts on selected topics were added to teach the algorithm how to craft short-form, topical statements. [emphases mine]

Once the algorithm was trained, the next step was to process and reassemble suggestions of text for Coupland to use as inspiration to create twenty-five Slogans for the Class of 2030. [emphasis mine]

I would comb through ‘data dumps’ where characters from one novel were speaking with those in other novels in ways that they might actually do. It felt like I was encountering a parallel universe Doug,” Coupland says. “And from these outputs, the statements you see here in this project appeared like gems. Did I write them? Yes. No. Could they have existed without me? No.” [emphases mine]

So, the algorithms crunched through Coupland’s written work and social media texts to produce slogans, which Coupland then ‘combed through’ to pick out 25 slogans for the ‘Slogans For The Class of 2030’ project. (Note: In the programme, he says that he started a sentence and then the AI system completed that sentence with material gleaned from his own writings, which brings to mind Exquisite Corpse, a collaborative game for writers originated by the Surrealists, possibly as early as 1918.)
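
The general pattern (tune a language model on a corpus, then sample completions of a prompt for a human to comb through) is now routine. Here is a minimal sketch of the sampling half; the Coupland-tuned model isn’t public, so the off-the-shelf ‘gpt2’ model and the settings below are stand-ins:

```python
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # stand-in model

prompt = "The class of 2030 will"
candidates = generator(prompt, max_length=20, num_return_sequences=3,
                       do_sample=True)

# A human (the artist, in Coupland's case) combs through the output for gems.
for candidate in candidates:
    print(candidate["generated_text"])
```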

The ‘slogans’ project also reminds me of William S. Burroughs and the cut-up technique used in his work. From the William S. Burroughs Cut-up technique webpage on the Language is a Virus website (Thank you to Lake Rain Vajra for a very interesting website),

The cutup is a mechanical method of juxtaposition in which Burroughs literally cuts up passages of prose by himself and other writers and then pastes them back together at random. This literary version of the collage technique is also supplemented by literary use of other media. Burroughs transcribes taped cutups (several tapes spliced into each other), film cutups (montage), and mixed media experiments (results of combining tapes with television, movies, or actual events). Thus Burroughs’s use of cutups develops his juxtaposition technique to its logical conclusion as an experimental prose method, and he also makes use of all contemporary media, expanding his use of popular culture.

[Burroughs says] “All writing is in fact cut-ups. A collage of words read heard overheard. What else? Use of scissors renders the process explicit and subject to extension and variation. Clear classical prose can be composed entirely of rearranged cut-ups. Cutting and rearranging a page of written words introduces a new dimension into writing enabling the writer to turn images in cinematic variation. Images shift sense under the scissors smell images to sound sight to sound to kinesthetic. This is where Rimbaud was going with his color of vowels. And his “systematic derangement of the senses.” The place of mescaline hallucination: seeing colors tasting sounds smelling forms.

“The cut-ups can be applied to other fields than writing. Dr Neumann [emphasis mine] in his Theory of Games and Economic behavior introduces the cut-up method of random action into game and military strategy: assume that the worst has happened and act accordingly. … The cut-up method could be used to advantage in processing scientific data. [emphasis mine] How many discoveries have been made by accident? We cannot produce accidents to order. The cut-ups could add new dimension to films. Cut gambling scene in with a thousand gambling scenes all times and places. Cut back. Cut streets of the world. Cut and rearrange the word and image in films. There is no reason to accept a second-rate product when you can have the best. And the best is there for all. Poetry is for everyone . . .”
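
Burroughs’ method is mechanical enough to fit in a few lines of code, which rather supports his point that cut-ups make an implicit process explicit. A minimal sketch (my own toy):

```python
import random

def cut_up(text, pieces=8):
    """Slice a passage into roughly equal word-fragments,
    shuffle them, and paste them back together."""
    words = text.split()
    size = max(1, len(words) // pieces)
    fragments = [words[i:i + size] for i in range(0, len(words), size)]
    random.shuffle(fragments)
    return " ".join(word for fragment in fragments for word in fragment)

passage = ("All writing is in fact cut-ups. A collage of words read heard "
           "overheard. Use of scissors renders the process explicit.")
print(cut_up(passage))
```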

First, John von Neumann (1902 – 57) is a very important figure in the history of computing. From a February 25, 2017 John von Neumann and Modern Computer Architecture essay on the ncLab website, “… he invented the computer architecture that we use today.”

Here’s Burroughs on the history of writers and cutups (thank you to QUEDEAR for posting this clip),

You can hear Burroughs talk about the technique and how he started using it in 1959.

There is no explanation from Coupland as to how his project differs substantively from Burroughs’ cut-ups or a session of Exquisite Corpse. The use of a computer programme to crunch through data and give output doesn’t seem all that exciting. *(More about computers and chatbots at end of posting).* It’s hard to know if this was an interview situation where he wasn’t asked the question or if the editors decided against including it.

Kazuo Ishiguro?

Given that Ishiguro’s 2021 book (Klara and the Sun) is focused on an artificial friend and raises the question of ‘what does it mean to be human’, as well as the related question, ‘what is the nature of consciousness’, it would have been interesting to hear from him. He spent a fair amount of time looking into research on machine learning in preparation for his book. Maybe he was too busy?

AI and emotions

The work being done by Georgia Tech’s Dr. Ayanna Howard and her robotics lab is fascinating. They are teaching robots how to interpret human emotions. The segment which features researchers teaching and interacting with robots, Pepper and Salt, also touches on AI and bias.

Watching two African American researchers talk about the ways in which AI is unable to read emotions on ‘black’ faces as accurately as ‘white’ faces is quite compelling. It also reinforces the uneasiness you might feel after the ‘Replika’ segment where an artificial friend informs a woman that her only worth is her body.

(Interestingly, Pepper and Salt are produced by Softbank Robotics, part of Softbank, a multinational Japanese conglomerate, [see a June 28, 2021 article by Ian Carlos Campbell for The Verge] whose entire management team is male according to their About page.)

While Howard is very hopeful about the possibilities of a machine that can read emotions, she doesn’t explore (on camera) any means for pushing back against bias other than training AI by using more black faces to help them learn. Perhaps more representative management and coding teams in technology companies?

While the programme largely focused on AI as an algorithm on a computer, robots can be enabled by AI (as can be seen in the segment with Dr. Howard).

My February 14, 2019 posting features research with a completely different approach to emotions and machines,

“I’ve always felt that robots shouldn’t just be modeled after humans [emphasis mine] or be copies of humans,” he [Guy Hoffman, assistant professor at Cornell University] said. “We have a lot of interesting relationships with other species. Robots could be thought of as one of those ‘other species,’ not trying to copy what we do but interacting with us with their own language, tapping into our own instincts.”

[from a July 16, 2018 Cornell University news release on EurekAlert]

This brings the question back to, what is consciousness?

What scientists aren’t taught

Dr. Howard notes that scientists are not taught to consider the implications of their work. Her comment reminded me of a question I was asked many years ago after a presentation; it concerned whether or not science had any morality. (I said, no.)

My reply angered an audience member (a visual artist who was working with scientists at the time) as she took it personally and started defending scientists as good people who care and have morals and values. She failed to understand that the way in which we teach science conforms to a notion that somewhere there are scientific facts which are neutral and objective. Society and its values are irrelevant in the face of the larger ‘scientific truth’ and, as a consequence, you don’t need to teach or discuss how your values or morals affect that truth or what the social implications of your work might be.

Science is practiced without much if any thought to values. By contrast, there is the medical injunction, “Do no harm,” which suggests to me that someone recognized competing values. E.g., If your important and worthwhile research is harming people, you should ‘do no harm’.

The experts, the connections, and the Canadian content

It’s been a while since I’ve seen Ray Kurzweil mentioned but he seems to be getting more attention these days. (See this November 16, 2021 posting by Jonny Thomson titled, “The Singularity: When will we all become super-humans? Are we really only a moment away from “The Singularity,” a technological epoch that will usher in a new era in human evolution?” on The Big Think for more). Note: I will have a little more about evolution later in this post.

Interestingly, Kurzweil is employed by Google these days (see his Wikipedia entry, the column to the right). So is Geoffrey Hinton, another one of the experts in the programme (see Hinton’s Wikipedia entry, the column to the right, under Institutions).

I’m not sure about Yoshua Bengio’s relationship with Google but he’s a professor at the Université de Montréal, and he’s the Scientific Director for Mila (Quebec’s Artificial Intelligence research institute) & IVADO (Institut de valorisation des données). Note: IVADO is not particularly relevant to what’s being discussed in this post.

As for Mila, the Canada Google blog in a November 21, 2016 posting notes a $4.5M grant to the institution,

Google invests $4.5 Million in Montreal AI Research

A new grant from Google for the Montreal Institute for Learning Algorithms (MILA) will fund seven faculty across a number of Montreal institutions and will help tackle some of the biggest challenges in machine learning and AI, including applications in the realm of systems that can understand and generate natural language. In other words, better understand a fan’s enthusiasm for Les Canadien [sic].

Google is expanding its academic support of deep learning at MILA, renewing Yoshua Bengio’s Focused Research Award and offering Focused Research Awards to MILA faculty at University of Montreal and McGill University:

Google reaffirmed their commitment to Mila in 2020 with a grant worth almost $4M (from a November 13, 2020 posting on the Mila website, Note: A link has been removed),

Google Canada announced today [November 13, 2020] that it will be renewing its funding of Mila – Quebec Artificial Intelligence Institute, with a generous pledge of nearly $4M over a three-year period. Google previously invested $4.5M US in 2016, enabling Mila to grow from 25 to 519 researchers.

In a piece written for Google’s Official Canada Blog, Yoshua Bengio, Mila Scientific Director, says that this year marked a “watershed moment for the Canadian AI community,” as the COVID-19 pandemic created unprecedented challenges that demanded rapid innovation and increased interdisciplinary collaboration between researchers in Canada and around the world.

COVID-19 has changed the world forever and many industries, from healthcare to retail, will need to adapt to thrive in our ‘new normal.’ As we look to the future and how priorities will shift, it is clear that AI is no longer an emerging technology but a useful tool that can serve to solve world problems. Google Canada recognizes not only this opportunity but the important task at hand and I’m thrilled they have reconfirmed their support of Mila with an additional $3.95 million funding grant until 22.

– Yoshua Bengio, for Google’s Official Canada Blog

Interesting, eh? Of course, Douglas Coupland is working with Google, presumably for money, and that would connect over 50% of the Canadian content (Douglas Coupland, Yoshua Bengio, and Geoffrey Hinton; Kurzweil is an American) in the programme to Google.

My hat’s off to Google’s marketing communications and public relations teams.

Anthony Morgan of Science Everywhere also provided some Canadian content. His LinkedIn profile indicates that he’s working on a PhD in molecular science, which is described this way, “My work explores the characteristics of learning environments, that support critical thinking and the relationship between critical thinking and wisdom.”

Morgan is also the founder and creative director of Science Everywhere, from his LinkedIn profile, “An events & media company supporting knowledge mobilization, community engagement, entrepreneurship and critical thinking. We build social tools for better thinking.”

There is this from his LinkedIn profile,

I develop, create and host engaging live experiences & media to foster critical thinking.

I’ve spent my 15+ years studying and working in psychology and science communication, thinking deeply about the most common individual and societal barriers to critical thinking. As an entrepreneur, I lead a team to create, develop and deploy cultural tools designed to address those barriers. As a researcher I study what we can do to reduce polarization around science.

There’s a lot more to Morgan (do look him up; he has connections to the CBC and other media outlets). The difficulty is: why was he chosen to talk about artificial intelligence and emotions and creativity when he doesn’t seem to know much about the topic? He does mention GPT-3, an AI language model. He seems to be acting as an advocate for AI although he offers this bit of almost cautionary wisdom, “… algorithms are sets of instructions.” (You can find out more about it in my April 27, 2021 posting. There’s also this November 26, 2021 posting [The Inherent Limitations of GPT-3] by Andrey Kurenkov, a PhD student with the Stanford [University] Vision and Learning Lab.)

Most of the cautionary commentary comes from Luke Stark, assistant professor at Western [Ontario] University’s Faculty of Information and Media Studies. He’s the one who mentions stunted emotional growth.

Before moving on, there is another set of connections through the Pan-Canadian Artificial Intelligence Strategy, a Canadian government science funding initiative announced in the 2017 federal budget. The funds allocated to the strategy are administered by the Canadian Institute for Advanced Research (CIFAR). Yoshua Bengio through Mila is associated with the strategy and CIFAR, as is Geoffrey Hinton through his position as Chief Scientific Advisor for the Vector Institute.

Evolution

Getting back to “The Singularity: When will we all become super-humans? Are we really only a moment away from “The Singularity,” a technological epoch that will usher in a new era in human evolution?” Xenobots point in a disconcerting (for some of us) evolutionary direction.

I featured the work, which is being done at Tufts University in the US, in my June 21, 2021 posting, which includes an embedded video,

From a March 31, 2021 news item on ScienceDaily,

Last year, a team of biologists and computer scientists from Tufts University and the University of Vermont (UVM) created novel, tiny self-healing biological machines from frog cells called “Xenobots” that could move around, push a payload, and even exhibit collective behavior in the presence of a swarm of other Xenobots.

Get ready for Xenobots 2.0.

Also from an excerpt in the posting, the team has “created life forms that self-assemble a body from single cells, do not require muscle cells to move, and even demonstrate the capability of recordable memory.”

Memory is key to intelligence and this work introduces the notion of ‘living’ robots which leads to questioning what constitutes life. ‘The Machine That Feels’ is already grappling with far too many questions to address this development but introducing the research here might have laid the groundwork for the next episode, The New Human, telecast on November 26, 2021,

While no one can be certain what will happen, evolutionary biologists and statisticians are observing trends that could mean our future feet only have four toes (so long, pinky toe) or our faces may have new combinations of features. The new humans might be much taller than their parents or grandparents, or have darker hair and eyes.

And while evolution takes a lot of time, we might not have to wait too long for a new version of ourselves.

Technology is redesigning the way we look and function — at a much faster pace than evolution. We are merging with technology more than ever before: our bodies may now have implanted chips, smart limbs, exoskeletons and 3D-printed organs. A revolutionary gene editing technique has given us the power to take evolution into our own hands and alter our own DNA. How long will it be before we are designing our children?

Though the story about the xenobots doesn’t say so, we could also take the evolution of another species into our hands.

David Suzuki, where are you?

Our programme host, David Suzuki, surprised me. I thought that as an environmentalist he’d point out that the huge amounts of computing power needed for artificial intelligence, as mentioned in the programme, constitute an environmental issue. I also would have expected a geneticist like Suzuki might have some concerns with regard to xenobots but perhaps that’s being saved for the next episode (The New Human) of the Nature of Things.

Artificial stupidity

Thanks to Will Knight for introducing me to the term ‘artificial stupidity’. Knight, a senior writer, covers artificial intelligence for WIRED magazine. According to the term’s Wikipedia entry,

Artificial stupidity is commonly used as a humorous opposite of the term artificial intelligence (AI), often as a derogatory reference to the inability of AI technology to adequately perform its tasks.[1] However, within the field of computer science, artificial stupidity is also used to refer to a technique of “dumbing down” computer programs in order to deliberately introduce errors in their responses.

Knight was using the term in its humorous, derogatory form.

Finally

The episode certainly got me thinking if not quite in the way producers might have hoped. ‘The Machine That Feels’ is a glossy, pretty well researched piece of infotainment.

To be blunt, I like and have no problems with infotainment but it can be seductive. I found it easier to remember the artificial friends, wife, xenobots, and symphony than the critiques and concerns.

Hopefully, ‘The Machine That Feels’ stimulates more interest in some very important topics. If you missed the telecast, you can catch the episode here.

For anyone curious about predictive policing, which was mentioned in the Ayanna Howard segment, see my November 23, 2017 posting about Vancouver’s plunge into AI and car theft.

*ETA December 6, 2021: One of the first ‘chatterbots’ was ELIZA, a computer programme developed from 1964 to 1966. The most famous ELIZA script was DOCTOR, where the programme simulated a therapist. Many early users believed ELIZA understood and could respond as a human would despite Joseph Weizenbaum’s (creator of the programme) insistence otherwise.
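
ELIZA’s trick was simple pattern matching against canned response templates. Here is a minimal sketch of mine in the DOCTOR spirit (a toy, not Weizenbaum’s actual script):

```python
import random
import re

# A few DOCTOR-style rules: a regex pattern and response templates.
rules = [
    (r"I am (.*)", ["Why do you say you are {0}?",
                    "How long have you been {0}?"]),
    (r"I feel (.*)", ["Why do you feel {0}?", "Do you often feel {0}?"]),
    (r"(.*) mother(.*)", ["Tell me more about your family."]),
]

def respond(line):
    for pattern, responses in rules:
        match = re.match(pattern, line, re.IGNORECASE)
        if match:
            return random.choice(responses).format(*match.groups())
    return "Please go on."  # default when nothing matches

print(respond("I am sad about my computer"))
# -> e.g. "Why do you say you are sad about my computer?"
```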

An artificial synapse tuned by light, a ferromagnetic memristor, and a transparent, flexible artificial synapse

Down the memristor rabbit hole one more time.* I started out with news about two new papers and inadvertently found two more. In a bid to keep this posting to a manageable size, I’m stopping at four.

UK

In a June 19, 2019 Nanowerk Spotlight article, Dr. Neil Kemp discusses memristors and some of his latest work (Note: A link has been removed),

Memristor (or memory resistor) devices are non-volatile electronic memory devices that were first theorized by Leon Chua in the 1970s. However, it was some thirty years later that the first practical device was fabricated. This was in 2008 when a group led by Stanley Williams at HP Research Labs realized that switching of the resistance between a conducting and less conducting state in metal-oxide thin-film devices was showing Leon Chua’s memristor behaviour.

The high interest in memristor devices also stems from the fact that these devices emulate the memory and learning properties of biological synapses, i.e., the electrical resistance value of the device is dependent on the history of the current flowing through it.
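
An aside from me: that history dependence is easy to see in the simplest textbook model, the linear ion-drift memristor of Strukov et al. (2008), where an internal state variable integrates the charge that has flowed through the device. A minimal sketch with invented parameter values:

```python
# Linear ion-drift memristor model (after Strukov et al., 2008).
R_ON, R_OFF = 100.0, 16000.0  # ohms; illustrative values only
MU_K = 1e-2                   # lumped drift constant, invented for the demo

def simulate(currents, dt, x=0.5):
    """Step the state x (0..1) with each current sample and
    return the resulting resistance trace."""
    trace = []
    for i in currents:
        x = min(1.0, max(0.0, x + MU_K * i * dt))   # state integrates charge
        trace.append(R_ON * x + R_OFF * (1.0 - x))  # resistance follows state
    return trace

# Same device, different current histories, different final resistance:
print(simulate([1.0] * 5, dt=1.0)[-1])   # sustained positive current
print(simulate([-1.0] * 5, dt=1.0)[-1])  # sustained negative current
```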

There is a huge effort underway to use memristor devices in neuromorphic computing applications and it is now reasonable to imagine the development of a new generation of artificially intelligent devices with very low power consumption (non-volatile), ultra-fast performance and high-density integration.

These discoveries come at an important juncture in microelectronics, since there is increasing disparity between computational needs of Big Data, Artificial Intelligence (A.I.) and the Internet of Things (IoT), and the capabilities of existing computers. The increases in speed, efficiency and performance of computer technology cannot continue in the same manner as it has done since the 1960s.

To date, most memristor research has focussed on the electronic switching properties of the device. However, for many applications it is useful to have an additional handle (or degree of freedom) on the device to control its resistive state. For example, memory and processing in the brain also involve numerous chemical and bio-chemical reactions that control the brain structure and its evolution through development.

To emulate this in a simple solid-state system composed of just switches alone is not possible. In our research, we are interested in using light to mediate this essential control.

We have demonstrated that light can be used to make short and long-term memory and we have shown how light can modulate a special type of learning, called spike timing dependent plasticity (STDP). STDP involves two neuronal spikes incident across a synapse at the same time. Depending on the relative timing of the spikes and their overlap across the synaptic cleft, the connection strength is either strengthened or weakened.
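
An aside from me: the generic textbook form of STDP pairs an exponential timing window with the sign of the spike interval, strengthening the synapse when the presynaptic spike precedes the postsynaptic one and weakening it otherwise. A minimal sketch of that generic rule (not Dr. Kemp’s device model; amplitudes and time constant are typical textbook values):

```python
import math

A_PLUS, A_MINUS = 0.01, 0.012  # potentiation / depression amplitudes
TAU = 20.0                     # timing-window time constant, milliseconds

def stdp_weight_change(dt_ms):
    """Weight change for a spike interval dt = t_post - t_pre.
    Pre before post (dt > 0) strengthens; post before pre weakens."""
    if dt_ms > 0:
        return A_PLUS * math.exp(-dt_ms / TAU)
    return -A_MINUS * math.exp(dt_ms / TAU)

print(stdp_weight_change(5.0))   # pre just before post: potentiation
print(stdp_weight_change(-5.0))  # post just before pre: depression
```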

In our earlier work, we were only able to achieve small switching effects in memristors using light. In our latest work (Advanced Electronic Materials, “Percolation Threshold Enables Optical Resistive-Memory Switching and Light-Tuneable Synaptic Learning in Segregated Nanocomposites”), we take advantage of a percolating-like nanoparticle morphology to vastly increase the magnitude of the switching between electronic resistance states when light is incident on the device.

We have used an inhomogeneous percolating network consisting of metallic nanoparticles distributed in filamentary-like conduction paths. Electronic conduction and the resistance of the device are very sensitive to any disruption of the conduction path(s).

By embedding the nanoparticles in a polymer that can expand or contract with light, the conduction pathways are broken or re-connected, causing very large changes in the electrical resistance and memristance of the device.
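As a toy illustration of why percolating networks switch so sharply, here is a short Python sketch (my own construction, not the paper’s model) that randomly severs bonds in a square lattice, standing in for light-driven polymer expansion, and checks whether a left-to-right conducting path survives,

import random

def find(parent, x):
    # Union-find root lookup with path halving.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(parent, a, b):
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[rb] = ra

def conducts(n, p_intact, rng):
    # True if an n x n bond lattice still conducts left-to-right
    # when each bond survives with probability p_intact.
    parent = list(range(n * n))
    for r in range(n):
        for c in range(n):
            i = r * n + c
            if c + 1 < n and rng.random() < p_intact:
                union(parent, i, i + 1)   # horizontal bond survives
            if r + 1 < n and rng.random() < p_intact:
                union(parent, i, i + n)   # vertical bond survives
    left = {find(parent, r * n) for r in range(n)}
    return any(find(parent, r * n + n - 1) in left for r in range(n))

rng = random.Random(1)
for p in (0.3, 0.45, 0.5, 0.55, 0.7):
    hits = sum(conducts(30, p, rng) for _ in range(50))
    print(f"fraction of intact bonds {p:.2f}: conducts in {hits}/50 trials")

Near the two-dimensional bond-percolation threshold (50 percent intact bonds), breaking or re-connecting just a few pathways flips the lattice between conducting and insulating, which is the qualitative point Kemp is making about large resistance changes.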

Our devices could lead to the development of new memristor-based artificial intelligence systems that are adaptive and reconfigurable using a combination of optical and electronic signalling. Furthermore, they have the potential for the development of very fast optical cameras for artificial intelligence recognition systems.

Our work provides a nice proof-of-concept but the materials used means the optical switching is slow. The materials are also not well suited to industry fabrication. In our on-going work we are addressing these switching speed issues whilst also focussing on industry compatible materials.

Currently we are working on a new type of optical memristor device that should give us orders of magnitude improvement in the optical switching speeds whilst also retaining a large difference between the resistance on and off states. We hope to be able to achieve nanosecond switching speeds. The materials used are also compatible with industry standard methods of fabrication.

The new devices should also have applications in optical communications, interfacing and photonic computing. We are currently looking for commercial investors to help fund the research on these devices so that we can bring the device specifications to a level of commercial interest.

If you’re interested in memristors, Kemp’s article is well written and quite informative for nonexperts, assuming of course you can tolerate not understanding everything perfectly.

Here are links and citations for two papers. The first is the latest work referred to in the article, a May 2019 paper; the second is a paper that appeared in July 2019.

Percolation Threshold Enables Optical Resistive‐Memory Switching and Light‐Tuneable Synaptic Learning in Segregated Nanocomposites by Ayoub H. Jaafar, Mary O’Neill, Stephen M. Kelly, Emanuele Verrelli, Neil T. Kemp. Advanced Electronic Materials DOI: https://doi.org/10.1002/aelm.201900197 First published: 28 May 2019

Wavelength dependent light tunable resistive switching graphene oxide nonvolatile memory devices by Ayoub H. Jaafar, N. T. Kemp. Carbon. DOI: https://doi.org/10.1016/j.carbon.2019.07.007 Available online 3 July 2019

The first paper (May 2019) is definitely behind a paywall and the second paper (July 2019) appears to be behind a paywall.

Dr. Kemp’s work has been featured here previously in a January 3, 2018 posting in the subsection titled, Shining a light on the memristor.

China

This work from China was announced in a June 20, 2019 news item on Nanowerk,

Memristors, demonstrated by solid-state devices with continuously tunable resistance, have emerged as a new paradigm for self-adaptive networks that require synapse-like functions. Spin-based memristors offer advantages over other types of memristors because of their significant endurance and high energy efficiency.

However, it remains a challenge to build dense and functional spintronic memristors with structures and materials that are compatible with existing ferromagnetic devices. Ta/CoFeB/MgO heterostructures are commonly used in interfacial PMA-based [perpendicular magnetic anisotropy] magnetic tunnel junctions, which exhibit large tunnel magnetoresistance and are implemented in commercial MRAM [magnetic random access memory] products.

“To achieve the memristive function, the DW [domain wall] is driven back and forth in a continuous manner in the CoFeB layer by applying in-plane positive or negative current pulses along the Ta layer, utilizing the SOT [spin-orbit torque] that the current exerts on the CoFeB magnetization,” said Shuai Zhang, a coauthor of the paper. “The slowly propagating domain wall generates a creep in the detection area of the device, which yields a broad range of intermediate resistive states in the AHE [anomalous Hall effect] measurements. Consequently, the AHE resistance is modulated in an analog manner, controlled by the pulsed current characteristics, including amplitude, duration, and repetition number.”
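To picture the analog behaviour being described, here is a deliberately crude Python toy model (my own sketch with made-up numbers, not the device physics in Zhang’s paper): the domain-wall position is a state variable that each current pulse nudges, and the Hall resistance is read out as an interpolation between the two magnetic states,

def apply_pulse(position, amplitude, duration_ns, mobility=0.002):
    # Nudge the normalized domain-wall position (0 to 1).
    # 'amplitude' is a signed pulse current in arbitrary units (the sign
    # sets the direction); 'mobility' is a made-up coupling constant.
    position += mobility * amplitude * duration_ns
    return min(max(position, 0.0), 1.0)  # wall stays inside the detection area

def hall_resistance(position, r_low=1.0, r_high=3.0):
    # Anomalous Hall resistance interpolated between the two magnetic
    # states (r_low and r_high are arbitrary illustrative values, in ohms).
    return r_low + (r_high - r_low) * position

x = 0.0
for _ in range(5):  # five identical positive pulses ratchet the state up
    x = apply_pulse(x, amplitude=+10, duration_ns=5)
    print(f"wall at {x:.2f}, R_AHE = {hall_resistance(x):.2f} ohm")
x = apply_pulse(x, amplitude=-10, duration_ns=5)  # a reverse pulse walks it back
print(f"after one reverse pulse: R_AHE = {hall_resistance(x):.2f} ohm")

The point of the sketch is only that amplitude, duration, and repetition number all feed into one state variable, giving a continuum of resistance levels rather than a binary switch.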

“For a follow-up study, we are working on more neuromorphic operations, such as spike-timing-dependent plasticity and paired pulsed facilitation,” concludes You. …

Here are links to and citations for the paper (Note: It’s a little confusing, but I believe one of the links will take you to the online version; as for the ‘open access’ link, keep reading),

A Spin–Orbit‐Torque Memristive Device by Shuai Zhang, Shijiang Luo, Nuo Xu, Qiming Zou, Min Song, Jijun Yun, Qiang Luo, Zhe Guo, Ruofan Li, Weicheng Tian, Xin Li, Hengan Zhou, Huiming Chen, Yue Zhang, Xiaofei Yang, Wanjun Jiang, Ka Shen, Jeongmin Hong, Zhe Yuan, Li Xi, Ke Xia, Sayeef Salahuddin, Bernard Dieny, Long You. Advanced Electronic Materials Volume 5, Issue 4 April 2019 (print version) 1800782 DOI: https://doi.org/10.1002/aelm.201800782 First published [online]: 30 January 2019 Note: there is another DOI, https://doi.org/10.1002/aelm.201970022 where you can have open access to Memristors: A Spin–Orbit‐Torque Memristive Device (Adv. Electron. Mater. 4/2019)

The paper published online in January 2019 is behind a paywall, and the paper (almost the same title) published in April 2019 has a new DOI and is open access. Final note: I tried accessing the ‘free’ paper and opened up a free file for the artwork featuring the work from China on the back cover of the April 2019 issue of Advanced Electronic Materials.

Korea

Usually when I see the words transparency and flexibility, I expect graphene to be one of the materials. That’s not the case for this paper; here’s a link to and a citation for it,

Transparent and flexible photonic artificial synapse with piezo-phototronic modulator: Versatile memory capability and higher order learning algorithm by Mohit Kumar, Joondong Kim, Ching-Ping Wong. Nano Energy Volume 63, September 2019, 103843 DOI: https://doi.org/10.1016/j.nanoen.2019.06.039 Available online 22 June 2019

Here’s the abstract for the paper, where you’ll see that the material is made up of zinc oxide and silver nanowires,

An artificial photonic synapse having a tunable manifold synaptic response can be an essential step forward for the advancement of novel neuromorphic computing. In this work, we reported the development of a highly transparent and flexible two-terminal ZnO/Ag-nanowires/PET photonic artificial synapse [emphasis mine]. The device shows, purely photo-triggered, all essential synaptic functions such as the transition from short- to long-term plasticity, paired-pulse facilitation, and spike-timing-dependent plasticity, as well as versatile memory capability. Importantly, the strain-induced piezo-phototronic effect within ZnO provides an additional degree of regulation to modulate all of the synaptic functions in multi-levels. The observed effect is quantitatively explained as a dynamic of photo-induced electron-hole trapping/detrapping via defect states such as oxygen vacancies. We revealed that the synaptic functions can be consolidated and converted by applied strain, which has not previously been applied in any of the reported synaptic devices. This study will open a new avenue for the scientific community to control and design highly transparent wearable neuromorphic computing.

This paper is behind a paywall.

Frugal science: ancient toys for state-of-the-art science

A toy that’s been a plaything for 5,000 years and is known as a whirligig (in English, anyway) has inspired a scientific tool for use by field biologists and by students interested in creating state-of-the-art experiments. Exciting stuff, eh?

A May 23, 2019 Georgia Tech (Georgia Institute of Technology) news release (also on EurekAlert but published on May 22, 2019) announces this development in ‘frugal science’,

A 5,000-year-old toy still enjoyed by kids today has inspired an inexpensive, hand-powered scientific tool that could not only impact how field biologists conduct their research but also allow high-school students and others with limited resources to realize their own state-of-the-art experiments.

The device, a portable centrifuge for preparing scientific samples including DNA, is reported May 21 [2019] in the journal PLOS Biology. The co-first author of the paper is Gaurav Byagathvalli, a senior at Lambert High School in Georgia. His colleagues are M. Saad Bhamla, an assistant professor at the Georgia Institute of Technology; Soham Sinha, a Georgia Tech undergraduate; Janet Standeven, Byagathvalli’s biology teacher at Lambert; and Aaron F. Pomerantz, a graduate student at the University of California, Berkeley.

“I am exceptionally proud of this paper and will remember it 10, 20, 30 years from now because of the uniquely diverse team we put together,” said Bhamla, who is an assistant professor in Georgia Tech’s School of Chemical and Biomolecular Engineering.

From a Rainforest to a High School

Together the team demonstrated the device, dubbed the 3D-Fuge because it is created through 3D printing, in two separate applications. In a rainforest in Peru, the 3D-Fuge was an integral part of a “lab in a backpack” used to identify four previously unknown plants and insects by sequencing their DNA [deoxyribonucleic acid]. Back in the United States, a slightly different design enabled a new approach to creating living bacterial sensors for the potential detection of disease. That work was conducted at Lambert High School for a synthetic biology competition.

Thanks to social media and a preprint of the PLOS Biology paper on BioRxiv, the 3D-Fuge has already generated interest from around the world, including emails from high-school teachers in Zambia and Kenya. “It’s awesome to see research not just remain isolated to one location but see it spread,” said Byagathvalli. “Through this, we’ve realized how much of an impact simple yet effective tools can have, and hope this technology motivates others to continue along the same path and innovate new solutions to global issues.”

To better share the work, the team has posted the 3D-Fuge designs, videos, and photos online available to anyone.

Frugal Science

One focus of Bhamla’s lab at Georgia Tech is the development of tools for frugal science, or real research that just about anyone can afford. The tools behind state-of-the-art science often cost thousands of dollars, making them inaccessible to those without serious resources.

Centrifuges are a good example.  A small benchtop unit costs between $3,000 and $5,000; larger units cost many times that. Yet the devices are necessary to produce concentrated amounts of, say, genomic materials like DNA. By rapidly spinning samples, they separate materials of interest from biological debris.

The Bhamla team found that the 3D-Fuge works as well as its more expensive cousins, but costs less than $1.

An Ancient Toy

The 3D-Fuge is based on earlier work by Bhamla and colleagues at Stanford University on a simple centrifuge made of paper. The “paperfuge,” in turn, was inspired by a toy composed of string and a button that Bhamla played with as a child. He later discovered that these toys, known as whirligigs, have existed for some 5,000 years.

They consist of a disk – like a button – with two holes, through which is threaded a length of flexible cord whose ends are knotted to create a single loop with the disk in the middle. That simple contraption is then swung with two hands until the button is spinning and whirring at very fast speeds.

The earlier paperfuge uses a disk of paper. To that disk Bhamla glued small plastic tubes filled with a sample. He and colleagues reported that the device did indeed create high-quality samples.

In late 2017 Bhamla was separately approached by the Lambert High team and Pomerantz to see if the paperfuge could be adapted for the larger samples they needed (the paperfuge is limited to small samples of ~1 microliter—or one drop of blood).

Together they came up with the 3D-Fuge, which includes cavities for tubes that can hold some 100 times more of a sample than the paperfuge. The team developed two equally effective designs: one for field biology (led by Pomerantz) and the other for the high-school’s synthetic biology project (led by Byagathvalli).

Bhamla notes that the 3D-Fuge has some limitations. For example, it can only process a few samples at a time (some applications require thousands of samples). Further, because it’s 10 times heavier than the paperfuge, it can’t reach the same speeds or produce the same forces as that device. That said, it still weighs only 20 grams, slightly less than a AA battery.
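For a sense of the forces involved, centrifugation strength is usually quoted as relative centrifugal force (RCF, in multiples of g), computed from the spin rate and spin radius with the standard lab formula RCF = 1.118 × 10⁻⁵ × r(cm) × RPM². Here is a quick Python check; the radius and speeds below are illustrative guesses of mine, not measurements from the paper,

def rcf(rpm, radius_cm):
    # Relative centrifugal force in multiples of g.
    # Standard lab formula: RCF = 1.118e-5 * r_cm * rpm^2.
    return 1.118e-5 * radius_cm * rpm ** 2

# Illustrative values only: hand-powered spinners have been reported reaching
# tens of thousands of RPM; 4 cm is a guess at a small disk's sample radius.
for rpm in (5_000, 10_000, 20_000):
    print(f"{rpm:>6} RPM at 4 cm -> {rcf(rpm, 4.0):,.0f} x g")

The quadratic dependence on spin rate is why the heavier 3D-Fuge, which cannot be whirled as fast as the paperfuge, also cannot match its forces.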

“But it works,” said Bhamla. “All you need is an [appropriate] application and some creativity.”

Here are a couple of images showing the 3D-Fuge in action,

[Image: Using the 3D-Fuge. Courtesy: Georgia Tech]
[Image: Sample vial in 3D-Fuge. Courtesy: Georgia Tech]

Here’s a link to and a citation for the paper,

A 3D-printed hand-powered centrifuge for molecular biology by Gaurav Byagathvalli, Aaron Pomerantz, Soham Sinha, Janet Standeven, M. Saad Bhamla. PLOS Biology DOI: https://doi.org/10.1371/journal.pbio.3000251 Published: May 21, 2019

As always with a Public Library of Science (PLOS) publication, this paper is open access.

The mystifying physics of paint-on semiconductors

I was not expecting a Canadian connection but it seems we are heavily invested in this research at the Georgia Institute of Technology (Georgia Tech), from a March 19, 2018 news item on ScienceDaily,

Some novel materials that sound too good to be true turn out to be true and good. An emergent class of semiconductors, which could affordably light up our future with nuanced colors emanating from lasers, lamps, and even window glass, could be the latest example.

These materials are very radiant, easy to process from solution, and energy-efficient. The nagging question of whether hybrid organic-inorganic perovskites (HOIPs) could really work just received a very affirmative answer in a new international study led by physical chemists at the Georgia Institute of Technology.

A March 19, 2018 Georgia Tech news release (also on EurekAlert), which originated the news item, provides more detail,

The researchers observed in an HOIP a “richness” of semiconducting physics created by what could be described as electrons dancing on chemical underpinnings that wobble like a funhouse floor in an earthquake. That bucks conventional wisdom because established semiconductors rely upon rigidly stable chemical foundations, that is to say, quieter molecular frameworks, to produce the desired quantum properties.

“We don’t know yet how it works to have these stable quantum properties in this intense molecular motion,” said first author Felix Thouin, a graduate research assistant at Georgia Tech. “It defies physics models we have to try to explain it. It’s like we need some new physics.”

Quantum properties surprise

Their gyrating jumbles have made HOIPs challenging to examine, but the team of researchers from a total of five research institutes in four countries succeeded in measuring a prototypical HOIP and found its quantum properties on par with those of established, molecularly rigid semiconductors, many of which are graphene-based.

“The properties were at least as good as in those materials and may be even better,” said Carlos Silva, a professor in Georgia Tech’s School of Chemistry and Biochemistry. Not all semiconductors also absorb and emit light well, but HOIPs do, making them optoelectronic and thus potentially useful in lasers, LEDs, other lighting applications, and also in photovoltaics.

The lack of molecular-level rigidity in HOIPs also plays into them being more flexibly produced and applied.

Silva co-led the study with physicist Ajay Ram Srimath Kandada. Their team published the results of their study on two-dimensional HOIPs on March 8, 2018, in the journal Physical Review Materials. Their research was funded by EU Horizon 2020, the Natural Sciences and Engineering Research Council of Canada, the Fond Québécois pour la Recherche, the [National] Research Council of Canada, and the National Research Foundation of Singapore. [emphases mine]

The ‘solution solution’

Commonly, semiconducting properties arise from static crystalline lattices of neatly interconnected atoms. In silicon, for example, which is used in most commercial solar cells, they are interconnected silicon atoms. The same principle applies to graphene-like semiconductors.

“These lattices are structurally not very complex,” Silva said. “They’re only one atom thin, and they have strict two-dimensional properties, so they’re much more rigid.”

“You forcefully limit these systems to two dimensions,” said Srimath Kandada, who is a Marie Curie International Fellow at Georgia Tech and the Italian Institute of Technology. “The atoms are arranged in infinitely expansive, flat sheets, and then these very interesting and desirable optoelectronic properties emerge.”

These proven materials impress. So, why pursue HOIPs, except to explore their baffling physics? Because they may be more practical in important ways.

“One of the compelling advantages is that they’re all made using low-temperature processing from solutions,” Silva said. “It takes much less energy to make them.”

By contrast, graphene-based materials are produced at high temperatures in small amounts that can be tedious to work with. “With this stuff (HOIPs), you can make big batches in solution and coat a whole window with it if you want to,” Silva said.

Funhouse in an earthquake

For all an HOIP’s wobbling, it’s also a very ordered lattice with its own kind of rigidity, though less limiting than in the customary two-dimensional materials.

“It’s not just a single layer,” Srimath Kandada said. “There is a very specific perovskite-like geometry.” Perovskite refers to the shape of an HOIP’s crystal lattice, which is a layered scaffolding.

“The lattice self-assembles,” Srimath Kandada said, “and it does so in a three-dimensional stack made of layers of two-dimensional sheets. But HOIPs still preserve those desirable 2D quantum properties.”

Those sheets are held together by interspersed layers of another molecular structure that is a bit like a sheet of rubber bands. That makes the scaffolding wiggle like a funhouse floor.

“At room temperature, the molecules wiggle all over the place. That disrupts the lattice, which is where the electrons live. It’s really intense,” Silva said. “But surprisingly, the quantum properties are still really stable.”

Having quantum properties work at room temperature without requiring ultra-cooling is important for practical use as a semiconductor.

Going back to what HOIP stands for (hybrid organic-inorganic perovskites), this is how the experimental material fits into the HOIP chemical class: it is a hybrid of inorganic layers of lead iodide (the rigid part) separated by organic layers of phenylethylammonium (the rubber band-like parts), with the chemical formula (PEA)2PbI4.

The lead in this prototypical material could be swapped out for a metal safer for humans to handle before the development of an applicable material.

Electron choreography

HOIPs are great semiconductors because their electrons do an acrobatic square dance.

Usually, electrons live in an orbit around the nucleus of an atom or are shared by atoms in a chemical bond. But HOIP chemical lattices, like all semiconductors, are configured to share electrons more broadly.

Energy levels in a system can free the electrons to run around and participate in things like the flow of electricity and heat. The orbits, which are then empty, are called electron holes, and they want the electrons back.

“The hole is thought of as a positive charge, and of course, the electron has a negative charge,” Silva said. “So, hole and electron attract each other.”

The electrons and holes race around each other like dance partners, pairing up into what physicists call an “exciton.” Excitons act and look a lot like particles themselves, though they’re not really particles.
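For the mathematically inclined, the simplest textbook picture of an exciton is hydrogen-like: the electron and hole orbit each other with a binding energy scaled down from the hydrogen atom by dielectric screening and by the carriers’ light effective masses. In the standard Wannier-Mott approximation (a generic model, not the analysis in this particular paper), the LaTeX reads,

E_b = \frac{\mu}{m_0}\,\frac{1}{\varepsilon_r^{2}}\,R_y, \qquad \mu = \frac{m_e^{*}\, m_h^{*}}{m_e^{*} + m_h^{*}}, \qquad R_y \approx 13.6\ \text{eV}

where μ is the reduced effective mass of the electron-hole pair, m_0 the free-electron mass, and ε_r the relative dielectric constant. Screening and light carriers typically shrink E_b to somewhere between a few meV and hundreds of meV, and the reduced screening in quasi-two-dimensional layers like these HOIPs pushes it toward the high end, which is part of why the excitons survive at room temperature.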

Hopping biexciton light

In semiconductors, when an energy source like electricity or laser light is applied, millions of excitons are correlated, or choreographed, with each other, which makes for desirable properties. Additionally, excitons can pair up to form biexcitons, boosting the semiconductor’s energetic properties.

“In this material, we found that the biexciton binding energies were high,” Silva said. “That’s why we want to put this into lasers because the energy you input ends up to 80 or 90 percent as biexcitons.”

Biexcitons bump up energetically to absorb input energy. Then they contract energetically and pump out light. That would work not only in lasers but also in LEDs or other surfaces using the optoelectronic material.

“You can adjust the chemistry (of HOIPs) to control the width between biexciton states, and that controls the wavelength of the light given off,” Silva said. “And the adjustment can be very fine to give you any wavelength of light.”

That translates into any color of light the heart desires.
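The colour-tuning claim comes down to simple photon arithmetic: emission wavelength is λ = hc/E, or roughly 1240 nm·eV divided by the photon energy. Here is a quick Python check with illustrative energies (the sample values are mine, not measurements from the paper),

def wavelength_nm(photon_energy_ev):
    # Photon wavelength from energy: lambda = hc / E ~ 1239.84 eV*nm / E.
    return 1239.84 / photon_energy_ev

# Illustrative photon energies only, showing how shifting the emitting
# state's energy walks the output across the visible spectrum.
for e_ev in (1.9, 2.3, 2.8):
    print(f"{e_ev:.1f} eV -> {wavelength_nm(e_ev):.0f} nm")
# 1.9 eV -> 653 nm (red), 2.3 eV -> 539 nm (green), 2.8 eV -> 443 nm (blue)

So a chemistry tweak that shifts the emitting state by a few tenths of an electron-volt moves the output clear across the visible spectrum.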

###

Coauthors of this paper were Stefanie Neutzner and Annamaria Petrozza from the Italian Institute of Technology (IIT); Daniele Cortecchia from IIT and Nanyang Technological University (NTU), Singapore; Cesare Soci from the Centre for Disruptive Photonic Technologies, Singapore; Teddy Salim and Yeng Ming Lam from NTU; and Vlad Dragomir and Richard Leonelli from the University of Montreal. …

Three Canadian science funding agencies plus European and Singaporean science funding agencies but not one from the US? That’s a bit unusual for research undertaken at a US educational institution.

In any event, here’s a link to and a citation for the paper,

Stable biexcitons in two-dimensional metal-halide perovskites with strong dynamic lattice disorder by Félix Thouin, Stefanie Neutzner, Daniele Cortecchia, Vlad Alexandru Dragomir, Cesare Soci, Teddy Salim, Yeng Ming Lam, Richard Leonelli, Annamaria Petrozza, Ajay Ram Srimath Kandada, and Carlos Silva. Phys. Rev. Materials 2, 034001 – Published 8 March 2018

This paper is behind a paywall.

A question of consciousness: Facebotlish (a new language); a July 5, 2017 rap guide performance in Vancouver, Canada; Tom Stoppard’s play; and a little more

This would usually be a simple event announcement but with the advent of a new, related (in my mind if no one else’s) development on Facebook, this has become a roundup of sorts.

Facebotlish (Facebook’s chatbots create their own language)

The language created by Facebook’s chatbots, Facebotlish, was an unintended consequence—that’s right, Facebook’s developers did not design a language for the chatbots or anticipate its independent development, apparently. Adrienne LaFrance’s June 20, 2017 article for theatlantic.com explores the development and the question further,

Something unexpected happened recently at the Facebook Artificial Intelligence Research lab. Researchers who had been training bots to negotiate with one another realized that the bots, left to their own devices, started communicating in a non-human language.

In order to actually follow what the bots were saying, the researchers had to tweak their model, limiting the machines to a conversation humans could understand. (They want bots to stick to human languages because eventually they want those bots to be able to converse with human Facebook users.) …

Here’s what the language looks like (from LaFrance’s article),

Here’s an example of one of the bot negotiations that Facebook observed:

Bob: i can i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i i can i i i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i . . . . . . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i i i i i everything else . . . . . . . . . . . . . .
Alice: balls have 0 to me to me to me to me to me to me to me to me to
Bob: you i i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to

The exchange is incomprehensible to humans even after careful study; even so, the bots were able to conclude some successful negotiations in it.

Facebook’s researchers aren’t the only one to come across the phenomenon (from LaFrance’s article; Note: Links have been removed),

Other AI researchers, too, say they’ve observed machines that can develop their own languages, including languages with a coherent structure and defined vocabulary and syntax—though not always actually meaningful by human standards.

In one preprint paper added earlier this year [2017] to the research repository arXiv, a pair of computer scientists from the non-profit AI research firm OpenAI wrote about how bots learned to communicate in an abstract language—and how those bots turned to non-verbal communication, the equivalent of human gesturing or pointing, when language communication was unavailable. (Bots don’t need to have corporeal form to engage in non-verbal communication; they just engage with what’s called a visual sensory modality.) Another recent preprint paper, from researchers at the Georgia Institute of Technology, Carnegie Mellon, and Virginia Tech, describes an experiment in which two bots invent their own communication protocol by discussing and assigning values to colors and shapes—in other words, the researchers write, they witnessed the “automatic emergence of grounded language and communication … no human supervision!”

The implications of this kind of work are dizzying. Not only are researchers beginning to see how bots could communicate with one another, they may be scratching the surface of how syntax and compositional structure emerged among humans in the first place.

LaFrance’s article is well worth reading in its entirety, especially since the speculation focuses on whether or not the chatbots’ creation is in fact language. There is no mention of consciousness, and perhaps this is just a crazy idea, but is it possible that these chatbots have consciousness? The question is particularly intriguing in light of some of philosopher David Chalmers’ work (see his 2014 TED talk in Vancouver, Canada: https://www.ted.com/talks/david_chalmers_how_do_you_explain_consciousness/transcript?language=en; it runs roughly 18 mins. and a text transcript is also featured). There’s a condensed version of Chalmers’ TED talk offered in a roughly 9-minute NPR (US National Public Radio) interview by Guy Raz. Here are some highlights from the text transcript,

So we’ve been hearing from brain scientists who are asking how a bunch of neurons and synaptic connections in the brain add up to us, to who we are. But it’s consciousness, the subjective experience of the mind, that allows us to ask the question in the first place. And where consciousness comes from – that is an entirely separate question.

DAVID CHALMERS: Well, I like to distinguish between the easy problems of consciousness and the hard problem.

RAZ: This is David Chalmers. He’s a philosopher who coined this term, the hard problem of consciousness.

CHALMERS: Well, the easy problems are ultimately a matter of explaining behavior – things we do. And I think brain science is great at problems like that. It can isolate a neural circuit and show how it enables you to see a red object, to respond and say, that’s red. But the hard problem of consciousness is subjective experience. Why, when all that happens in this circuit, does it feel like something? How does a bunch of – 86 billion neurons interacting inside the brain, coming together – how does that produce the subjective experience of a mind and of the world?

RAZ: Here’s how David Chalmers begins his TED Talk.

(SOUNDBITE OF TED TALK)

CHALMERS: Right now, you have a movie playing inside your head. It has 3-D vision and surround sound for what you’re seeing and hearing right now. Your movie has smell and taste and touch. It has a sense of your body, pain, hunger, orgasms. It has emotions, anger and happiness. It has memories, like scenes from your childhood, playing before you. This movie is your stream of consciousness. If we weren’t conscious, nothing in our lives would have meaning or value. But at the same time, it’s the most mysterious phenomenon in the universe. Why are we conscious?

RAZ: Why is consciousness more than just the sum of the brain’s parts?

CHALMERS: Well, the question is, you know, what is the brain? It’s this giant complex computer, a bunch of interacting parts with great complexity. What does all that explain? That explains objective mechanism. Consciousness is subjective by its nature. It’s a matter of subjective experience. And it seems that we can imagine all of that stuff going on in the brain without consciousness. And the question is, where is the consciousness from there? It’s like, if someone could do that, they’d get a Nobel Prize, you know?

RAZ: Right.

CHALMERS: So here’s the mapping from this circuit to this state of consciousness. But underneath that is always going be the question, why and how does the brain give you consciousness in the first place?

(SOUNDBITE OF TED TALK)

CHALMERS: Right now, nobody knows the answers to those questions. So we may need one or two ideas that initially seem crazy before we can come to grips with consciousness, scientifically. The first crazy idea is that consciousness is fundamental. Physicists sometimes take some aspects of the universe as fundamental building blocks – space and time and mass – and you build up the world from there. Well, I think that’s the situation we’re in. If you can’t explain consciousness in terms of the existing fundamentals – space, time – the natural thing to do is to postulate consciousness itself as something fundamental – a fundamental building block of nature. The second crazy idea is that consciousness might be universal. This view is sometimes called panpsychism – pan, for all – psych, for mind. Every system is conscious. Not just humans, dogs, mice, flies, but even microbes. Even a photon has some degree of consciousness. The idea is not that photons are intelligent or thinking. You know, it’s not that a photon is wracked with angst because it’s thinking, oh, I’m always buzzing around near the speed of light. I never get to slow down and smell the roses. No, not like that. But the thought is, maybe photons might have some element of raw subjective feeling, some primitive precursor to consciousness.

RAZ: So this is a pretty big idea – right? – like, that not just flies, but microbes or photons all have consciousness. And I mean we, like, as humans, we want to believe that our consciousness is what makes us special, right – like, different from anything else.

CHALMERS: Well, I would say yes and no. I’d say the fact of consciousness does not make us special. But maybe we’ve a special type of consciousness ’cause you know, consciousness is not on and off. It comes in all these rich and amazing varieties. There’s vision. There’s hearing. There’s thinking. There’s emotion and so on. So our consciousness is far richer, I think, than the consciousness, say, of a mouse or a fly. But if you want to look for what makes us distinct, don’t look for just our being conscious, look for the kind of consciousness we have. …

Intriguing, non?

Vancouver premiere of Baba Brinkman’s Rap Guide to Consciousness

Baba Brinkman, former Vancouverite and current denizen of New York City, is back in town offering a new performance at the Rio Theatre (1680 E. Broadway, near Commercial Drive). From a July 5, 2017 Rio Theatre event page and ticket portal,

Baba Brinkman’s Rap Guide to Consciousness

Wednesday, July 5 [2017] at 6:30pm PDT

Baba Brinkman’s new hip-hop theatre show “Rap Guide to Consciousness” is all about the neuroscience of consciousness. See it in Vancouver at the Rio Theatre before it goes to the Edinburgh Fringe Festival in August [2017].

This event also features a performance of “Off the Top” with Dr. Heather Berlin (cognitive neuroscientist, TV host, and Baba’s wife), which is also going to Edinburgh.

Wednesday, July 5
Doors 6:00 pm | Show 6:30 pm

Advance tickets $12 | $15 at the door

*All ages welcome!
*Sorry, Groupons and passes not accepted for this event.

“Utterly unique… both brilliantly entertaining and hugely informative” ★ ★ ★ ★ ★ – Broadway Baby

“An educational, inspiring, and wonderfully entertaining show from beginning to end” ★ ★ ★ ★ ★ – Mumble Comedy

There’s quite the poster for this rap guide performance,

In addition to the Vancouver and Edinburgh performances (the show premiered at the Brighton Fringe Festival in May 2017; see Simon Topping’s very brief review in this May 10, 2017 posting on the reviewshub.com), Brinkman is raising money to produce a CD (the goal is $12,000 US; he has raised a little over $3,000 with approximately one month before the deadline). Here’s more from the Rap Guide to Consciousness campaign page on Indiegogo,

Brinkman has been working with two neuroscientists: Dr. Anil Seth (professor and co-director of the Sackler Centre for Consciousness Science) and Dr. Heather Berlin (Brinkman’s wife, as noted earlier; see her Wikipedia entry or her website).

There’s a bit more information about the rap project and Anil Seth in a May 3, 2017 news item by James Hakner for the University of Sussex,

The research frontiers of consciousness science find an unusual outlet in an exciting new Rap Guide to Consciousness, premiering at this year’s Brighton Fringe Festival.

Professor Anil Seth, Co-Director of the Sackler Centre for Consciousness Science at the University of Sussex, has teamed up with New York-based ‘peer-reviewed rapper’ Baba Brinkman, to explore the latest findings from the neuroscience and cognitive psychology of subjective experience.

What is it like to be a baby? We might have to take LSD to find out. What is it like to be an octopus? Imagine most of your brain was actually built into your fingertips. What is it like to be a rapper kicking some of the world’s most complex lyrics for amused fringe audiences? Surreal.

In this new production, Baba brings his signature mix of rap comedy storytelling to the how and why behind your thoughts and perceptions. Mixing cutting-edge research with lyrical performance and projected visuals, Baba takes you through the twists and turns of the only organ it’s better to donate than receive: the human brain. Discover how the various subsystems of your brain come together to create your own rich experience of the world, including the sights and sounds of a scientifically peer-reviewed rapper dropping knowledge.

The result is a truly mind-blowing multimedia hip-hop theatre performance – the perfect meta-medium through which to communicate the dazzling science of consciousness.

Baba comments: “This topic is endlessly fascinating because it underlies everything we do pretty much all the time, which is probably why it remains one of the toughest ideas to get your head around. The first challenge with this show is just to get people to accept the (scientifically uncontroversial) idea that their brains and minds are actually the same thing viewed from different angles. But that’s just the starting point, after that the details get truly amazing.”

Baba Brinkman is a Canadian rap artist and award-winning playwright, best known for his “Rap Guide” series of plays and albums. Baba has toured the world and enjoyed successful runs at the Edinburgh Fringe Festival and off-Broadway in New York. The Rap Guide to Religion was nominated for a 2015 Drama Desk Award for “Unique Theatrical Experience” and The Rap Guide to Evolution (“Astonishing and brilliant” NY Times), won a Scotsman Fringe First Award and a Drama Desk Award nomination for “Outstanding Solo Performance”. The Rap Guide to Climate Chaos premiered in Edinburgh in 2015, followed by a six-month off-Broadway run in 2016.

Baba is also a pioneer in the genre of “lit-hop” or literary hip-hop, known for his adaptations of The Canterbury Tales, Beowulf, and Gilgamesh. He is a recent recipient of the National Center for Science Education’s “Friend of Darwin Award” for his efforts to improve the public understanding of evolutionary biology.

Anil Seth is an internationally renowned researcher into the biological basis of consciousness, with more than 100 (peer-reviewed!) academic journal papers on the subject. Alongside science he is equally committed to innovative public communication. A Wellcome Trust Engagement Fellow (from 2016) and the 2017 British Science Association President (Psychology), Professor Seth has co-conceived and consulted on many science-art projects including drama (Donmar Warehouse), dance (Siobhan Davies dance company), and the visual arts (with artist Lindsay Seers). He has also given popular public talks on consciousness at the Royal Institution (Friday Discourse) and at the main TED conference in Vancouver. He is a regular presence in print and on the radio and is the recipient of awards including the BBC Audio Award for Best Single Drama (for ‘The Sky is Wider’) and the Royal Society Young People’s Book Prize (for EyeBenders). This is his first venture into rap.

Professor Seth said: “There is nothing more familiar, and at the same time more mysterious than consciousness, but research is finally starting to shed light on this most central aspect of human existence. Modern neuroscience can be incredibly arcane and complex, posing challenges to us as public communicators.

“It’s been a real pleasure and privilege to work with Baba on this project over the last year. I never thought I’d get involved with a rap artist – but hearing Baba perform his ‘peer reviewed’ breakdowns of other scientific topics, I realized here was an opportunity not to be missed.”

Interestingly, Seth has another Canadian connection; he’s a Senior Fellow of the Azrieli Program in Brain, Mind & Consciousness at the Canadian Institute for Advanced Research (CIFAR; Wikipedia entry). By the way, the institute  was promised $93.7M in the 2017 Canadian federal government budget for the establishment of a Pan-Canadian Artificial Intelligence Strategy (see my March 24, 2017 posting; scroll down about 25% of the way and look for the highlighted dollar amount). You can find out more about the Azrieli programme here and about CIFAR on its website.

The Hard Problem (a Tom Stoppard play)

Brinkman isn’t the only performance-based artist to be querying the concept of consciousness, Tom Stoppard has written a play about consciousness titled ‘The Hard Problem’, which debuted at the National Theatre (UK) in January 2015 (see BBC [British Broadcasting Corporation] news online’s Jan. 29, 2015 roundup of reviews). A May 25, 2017 commentary by Andrew Brown for the Guardian offers some insight into the play and the issues (Note: Links have been removed),

There is a lovely exchange in Tom Stoppard’s play about consciousness, The Hard Problem, when an atheist has been sneering at his girlfriend for praying. It is, he says, an utterly meaningless activity. Right, she says, then do one thing for me: pray! I can’t do that, he replies. It would betray all I believe in.

So prayer can have meanings, and enormously important ones, even for people who are certain that it doesn’t have the meaning it is meant to have. In that sense, your really convinced atheist is much more religious than someone who goes along with all the prayers just because that’s what everyone does, without for a moment supposing the action means anything more than asking about the weather.

The Hard Problem of the play’s title is a phrase coined by the Australian philosopher David Chalmers to describe the way in which consciousness arises from a physical world. What makes it hard is that we don’t understand it. What makes it a problem is slightly different. It isn’t the fact of consciousness, but our representations of consciousness, that give rise to most of the difficulties. We don’t know how to fit the first-person perspective into the third-person world that science describes and explores. But this isn’t because they don’t fit: it’s because we don’t understand how they fit. For some people, this becomes a question of consuming interest.

There are also a couple of videos of Tom Stoppard, the playwright, discussing his play with various interested parties. The first is with Nicolas Hytner, the director at the National Theatre who tackled the debut run: https://www.youtube.com/watch?v=s7J8rWu6HJg (it runs approximately 40 mins.). Then there’s the chat Stoppard has with the previously mentioned philosopher, David Chalmers: https://www.youtube.com/watch?v=4BPY2c_CiwA (this runs approximately 1 hr. 32 mins.).

I gather ‘consciousness’ is a hot topic these days and, in the vernacular of the 1960s, I guess you could describe all of this as ‘expanding our consciousness’. Have a nice weekend!