Tag Archives: Columbia University

Portable and non-invasive (?) mind-reading AI (artificial intelligence) turns thoughts into text and some thoughts about the near future

First, here’s some of the latest research. If by ‘non-invasive’ you mean that electrodes are not being implanted in your brain, then this December 12, 2023 University of Technology Sydney (UTS) press release (also on EurekAlert) highlights non-invasive mind-reading AI via a brain-computer interface (BCI). Note: Links have been removed,

In a world-first, researchers from the GrapheneX-UTS Human-centric Artificial Intelligence Centre at the University of Technology Sydney (UTS) have developed a portable, non-invasive system that can decode silent thoughts and turn them into text. 

The technology could aid communication for people who are unable to speak due to illness or injury, including stroke or paralysis. It could also enable seamless communication between humans and machines, such as the operation of a bionic arm or robot.

The study has been selected as the spotlight paper at the NeurIPS conference, a top-tier annual meeting that showcases world-leading research on artificial intelligence and machine learning, held in New Orleans on 12 December 2023.

The research was led by Distinguished Professor CT Lin, Director of the GrapheneX-UTS HAI Centre, together with first author Yiqun Duan and fellow PhD candidate Jinzhou Zhou from the UTS Faculty of Engineering and IT.

In the study, participants silently read passages of text while wearing a cap that recorded electrical brain activity through their scalp using an electroencephalogram (EEG). A demonstration of the technology can be seen in this video [See UTS press release].

The EEG wave is segmented into distinct units that capture specific characteristics and patterns from the human brain. This is done by an AI model called DeWave developed by the researchers. DeWave translates EEG signals into words and sentences by learning from large quantities of EEG data. 
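
The press release does not give implementation details for DeWave, but the general idea of a discrete encoding step can be sketched as a vector-quantization codebook: each windowed EEG segment (represented here as a feature vector) is mapped to its nearest codebook entry, producing the discrete tokens a language model could then translate. This is a toy illustration only; the codebook values, dimensions, and names below are hypothetical placeholders, not the researchers' actual model.

```python
import numpy as np

# Toy discrete encoder: map each EEG segment (a feature vector) to the
# index of its nearest codebook entry, yielding token IDs that a
# language model could consume. Codebook values are random placeholders.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(512, 64))  # 512 discrete codes, 64-dim features

def encode(segments):
    """Return the codebook index (discrete token) for each segment."""
    # Pairwise distances: shape (n_segments, n_codes).
    d = np.linalg.norm(segments[:, None, :] - codebook[None, :, :], axis=-1)
    return d.argmin(axis=1)

eeg_segments = rng.normal(size=(10, 64))  # 10 windowed EEG feature vectors
tokens = encode(eeg_segments)
print(tokens.shape)  # (10,)
```

In a trained system the codebook would be learned from large quantities of EEG data rather than sampled at random, and the resulting token sequence would be fed to the translation model.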

“This research represents a pioneering effort in translating raw EEG waves directly into language, marking a significant breakthrough in the field,” said Distinguished Professor Lin.

“It is the first to incorporate discrete encoding techniques in the brain-to-text translation process, introducing an innovative approach to neural decoding. The integration with large language models is also opening new frontiers in neuroscience and AI,” he said.

Previous technology to translate brain signals to language has either required surgery to implant electrodes in the brain, such as Elon Musk’s Neuralink [emphasis mine], or scanning in an MRI machine, which is large, expensive, and difficult to use in daily life.

These methods also struggle to transform brain signals into word level segments without additional aids such as eye-tracking, which restrict the practical application of these systems. The new technology is able to be used either with or without eye-tracking.

The UTS research was carried out with 29 participants. This means it is likely to be more robust and adaptable than previous decoding technology that has only been tested on one or two individuals, because EEG waves differ between individuals. 

The use of EEG signals received through a cap, rather than from electrodes implanted in the brain, means that the signal is noisier. In terms of EEG translation however, the study reported state-of-the-art performance, surpassing previous benchmarks.

“The model is more adept at matching verbs than nouns. However, when it comes to nouns, we saw a tendency towards synonymous pairs rather than precise translations, such as ‘the man’ instead of ‘the author’,” said Duan. [emphases mine; synonymous, eh? what about ‘woman’ or ‘child’ instead of the ‘man’?]

“We think this is because when the brain processes these words, semantically similar words might produce similar brain wave patterns. Despite the challenges, our model yields meaningful results, aligning keywords and forming similar sentence structures,” he said.

The translation accuracy score is currently around 40% on BLEU-1. The BLEU score is a number between zero and one that measures the similarity of the machine-translated text to a set of high-quality reference translations. The researchers hope to see this improve to a level that is comparable to traditional language translation or speech recognition programs, which is closer to 90%.
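
To make the metric concrete: BLEU-1 is, at its core, clipped unigram precision, i.e. the fraction of words in the machine-produced text that also appear in the reference, with repeated words counted no more often than they occur in the reference. The sketch below implements just that core (the full BLEU score also applies a brevity penalty and, for BLEU-n, higher-order n-grams, both omitted here); the example sentences are made up.

```python
from collections import Counter

def bleu1(candidate: str, reference: str) -> float:
    """Clipped unigram precision (the core of BLEU-1): fraction of
    candidate words that also appear in the reference, with counts
    clipped to the reference's counts."""
    cand = candidate.lower().split()
    if not cand:
        return 0.0
    ref_counts = Counter(reference.lower().split())
    matches = sum(min(count, ref_counts[word])
                  for word, count in Counter(cand).items())
    return matches / len(cand)

# 'man' vs 'author' mismatch costs one of five words:
print(bleu1("the man wrote a book", "the author wrote a book"))  # 0.8
```

A score around 0.4 on this scale, as reported, means roughly 40% of decoded words overlap with the reference text.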

The research follows on from previous brain-computer interface technology developed by UTS in association with the Australian Defence Force [ADF] that uses brainwaves to command a quadruped robot, which is demonstrated in this ADF video [See my June 13, 2023 posting, “Mind-controlled robots based on graphene: an Australian research story” for the story and embedded video].

About one month after the research announcement regarding the University of Technology Sydney’s ‘non-invasive’ brain-computer interface (BCI), I stumbled across an in-depth piece about the field of ‘non-invasive’ mind-reading research.

Neurotechnology and neurorights

Fletcher Reveley’s January 18, 2024 article on salon.com (originally published January 3, 2024 on Undark) shows how quickly the field is developing and raises concerns, Note: Links have been removed,

One afternoon in May 2020, Jerry Tang, a Ph.D. student in computer science at the University of Texas at Austin, sat staring at a cryptic string of words scrawled across his computer screen:

“I am not finished yet to start my career at twenty without having gotten my license I never have to pull out and run back to my parents to take me home.”

The sentence was jumbled and agrammatical. But to Tang, it represented a remarkable feat: A computer pulling a thought, however disjointed, from a person’s mind.

For weeks, ever since the pandemic had shuttered his university and forced his lab work online, Tang had been at home tweaking a semantic decoder — a brain-computer interface, or BCI, that generates text from brain scans. Prior to the university’s closure, study participants had been providing data to train the decoder for months, listening to hours of storytelling podcasts while a functional magnetic resonance imaging (fMRI) machine logged their brain responses. Then, the participants had listened to a new story — one that had not been used to train the algorithm — and those fMRI scans were fed into the decoder, which used GPT-1, a predecessor to the ubiquitous AI chatbot ChatGPT, to spit out a text prediction of what it thought the participant had heard. For this snippet, Tang compared it to the original story:

“Although I’m twenty-three years old I don’t have my driver’s license yet and I just jumped out right when I needed to and she says well why don’t you come back to my house and I’ll give you a ride.”

The decoder was not only capturing the gist of the original, but also producing exact matches of specific words — twenty, license. When Tang shared the results with his adviser, a UT Austin neuroscientist named Alexander Huth who had been working towards building such a decoder for nearly a decade, Huth was floored. “Holy shit,” Huth recalled saying. “This is actually working.” By the fall of 2021, the scientists were testing the device with no external stimuli at all — participants simply imagined a story and the decoder spat out a recognizable, albeit somewhat hazy, description of it. “What both of those experiments kind of point to,” said Huth, “is the fact that what we’re able to read out here was really like the thoughts, like the idea.”

The scientists brimmed with excitement over the potentially life-altering medical applications of such a device — restoring communication to people with locked-in syndrome, for instance, whose near full-body paralysis made talking impossible. But just as the potential benefits of the decoder snapped into focus, so too did the thorny ethical questions posed by its use. Huth himself had been one of the three primary test subjects in the experiments, and the privacy implications of the device now seemed visceral: “Oh my god,” he recalled thinking. “We can look inside my brain.”

Huth’s reaction mirrored a longstanding concern in neuroscience and beyond: that machines might someday read people’s minds. And as BCI technology advances at a dizzying clip, that possibility and others like it — that computers of the future could alter human identities, for example, or hinder free will — have begun to seem less remote. “The loss of mental privacy, this is a fight we have to fight today,” said Rafael Yuste, a Columbia University neuroscientist. “That could be irreversible. If we lose our mental privacy, what else is there to lose? That’s it, we lose the essence of who we are.”

Spurred by these concerns, Yuste and several colleagues have launched an international movement advocating for “neurorights” — a set of five principles Yuste argues should be enshrined in law as a bulwark against potential misuse and abuse of neurotechnology. But he may be running out of time.

Reveley’s January 18, 2024 article provides fascinating context and is well worth reading if you have the time.

For my purposes, I’m focusing on ethics, Note: Links have been removed,

… as these and other advances propelled the field forward, and as his own research revealed the discomfiting vulnerability of the brain to external manipulation, Yuste found himself increasingly concerned by the scarce attention being paid to the ethics of these technologies. Even Obama’s multi-billion-dollar BRAIN Initiative, a government program designed to advance brain research, which Yuste had helped launch in 2013 and supported heartily, seemed to mostly ignore the ethical and societal consequences of the research it funded. “There was zero effort on the ethical side,” Yuste recalled.

Yuste was appointed to the rotating advisory group of the BRAIN Initiative in 2015, where he began to voice his concerns. That fall, he joined an informal working group to consider the issue. “We started to meet, and it became very evident to me that the situation was a complete disaster,” Yuste said. “There was no guidelines, no work done.” Yuste said he tried to get the group to generate a set of ethical guidelines for novel BCI technologies, but the effort soon became bogged down in bureaucracy. Frustrated, he stepped down from the committee and, together with a University of Washington bioethicist named Sara Goering, decided to independently pursue the issue. “Our aim here is not to contribute to or feed fear for doomsday scenarios,” the pair wrote in a 2016 article in Cell, “but to ensure that we are reflective and intentional as we prepare ourselves for the neurotechnological future.”

In the fall of 2017, Yuste and Goering called a meeting at the Morningside Campus of Columbia, inviting nearly 30 experts from all over the world in such fields as neurotechnology, artificial intelligence, medical ethics, and the law. By then, several other countries had launched their own versions of the BRAIN Initiative, and representatives from Australia, Canada [emphasis mine], China, Europe, Israel, South Korea, and Japan joined the Morningside gathering, along with veteran neuroethicists and prominent researchers. “We holed ourselves up for three days to study the ethical and societal consequences of neurotechnology,” Yuste said. “And we came to the conclusion that this is a human rights issue. These methods are going to be so powerful, that enable to access and manipulate mental activity, and they have to be regulated from the angle of human rights. That’s when we coined the term ‘neurorights.’”

The Morningside group, as it became known, identified four principal ethical priorities, which were later expanded by Yuste into five clearly defined neurorights: The right to mental privacy, which would ensure that brain data would be kept private and its use, sale, and commercial transfer would be strictly regulated; the right to personal identity, which would set boundaries on technologies that could disrupt one’s sense of self; the right to fair access to mental augmentation, which would ensure equality of access to mental enhancement neurotechnologies; the right of protection from bias in the development of neurotechnology algorithms; and the right to free will, which would protect an individual’s agency from manipulation by external neurotechnologies. The group published their findings in an often-cited paper in Nature.

But while Yuste and the others were focused on the ethical implications of these emerging technologies, the technologies themselves continued to barrel ahead at a feverish speed. In 2014, the first kick of the World Cup was made by a paraplegic man using a mind-controlled robotic exoskeleton. In 2016, a man fist bumped Obama using a robotic arm that allowed him to “feel” the gesture. The following year, scientists showed that electrical stimulation of the hippocampus could improve memory, paving the way for cognitive augmentation technologies. The military, long interested in BCI technologies, built a system that allowed operators to pilot three drones simultaneously, partially with their minds. Meanwhile, a confusing maelstrom of science, science-fiction, hype, innovation, and speculation swept the private sector. By 2020, over $33 billion had been invested in hundreds of neurotech companies — about seven times what the NIH [US National Institutes of Health] had envisioned for the 12-year span of the BRAIN Initiative itself.

Now back to Tang and Huth (from Reveley’s January 18, 2024 article), Note: Links have been removed,

Central to the ethical questions Huth and Tang grappled with was the fact that their decoder, unlike other language decoders developed around the same time, was non-invasive — it didn’t require its users to undergo surgery. Because of that, their technology was free from the strict regulatory oversight that governs the medical domain. (Yuste, for his part, said he believes non-invasive BCIs pose a far greater ethical challenge than invasive systems: “The non-invasive, the commercial, that’s where the battle is going to get fought.”) Huth and Tang’s decoder faced other hurdles to widespread use — namely that fMRI machines are enormous, expensive, and stationary. But perhaps, the researchers thought, there was a way to overcome that hurdle too.

The information measured by fMRI machines — blood oxygenation levels, which indicate where blood is flowing in the brain — can also be measured with another technology, functional Near-Infrared Spectroscopy, or fNIRS. Although lower resolution than fMRI, several expensive, research-grade, wearable fNIRS headsets do approach the resolution required to work with Huth and Tang’s decoder. In fact, the scientists were able to test whether their decoder would work with such devices by simply blurring their fMRI data to simulate the resolution of research-grade fNIRS. The decoded result “doesn’t get that much worse,” Huth said.
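
The article says the scientists simulated fNIRS resolution by blurring their fMRI data, without specifying the method. One simple degradation model, shown here purely as an illustration (the function name, block size, and volume dimensions are all assumptions), is block-averaging a 3-D volume down to a coarser spatial grid:

```python
import numpy as np

def degrade_resolution(vol, factor):
    """Simulate a lower-resolution scan by block-averaging an fMRI-like
    3-D volume over factor x factor x factor neighbourhoods."""
    # Trim each axis to a multiple of `factor`, then fold each axis
    # into (coarse, fine) pairs and average over the fine axes.
    x, y, z = (s - s % factor for s in vol.shape)
    v = vol[:x, :y, :z]
    v = v.reshape(x // factor, factor, y // factor, factor, z // factor, factor)
    return v.mean(axis=(1, 3, 5))

# A random stand-in for an fMRI volume (64 x 64 x 40 voxels):
vol = np.random.default_rng(1).normal(size=(64, 64, 40))
coarse = degrade_resolution(vol, 4)
print(coarse.shape)  # (16, 16, 10)
```

Running the decoder on data degraded this way (or by Gaussian smoothing, another common choice) gives a rough preview of how it might perform on a genuinely lower-resolution wearable device.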

And while such research-grade devices are currently cost-prohibitive for the average consumer, more rudimentary fNIRS headsets have already hit the market. Although these devices provide far lower resolution than would be required for Huth and Tang’s decoder to work effectively, the technology is continually improving, and Huth believes it is likely that an affordable, wearable fNIRS device will someday provide high enough resolution to be used with the decoder. In fact, he is currently teaming up with scientists at Washington University to research the development of such a device.

Even comparatively primitive BCI headsets can raise pointed ethical questions when released to the public. Devices that rely on electroencephalography, or EEG, a commonplace method of measuring brain activity by detecting electrical signals, have now become widely available — and in some cases have raised alarm. In 2019, a school in Jinhua, China, drew criticism after trialing EEG headbands that monitored the concentration levels of its pupils. (The students were encouraged to compete to see who concentrated most effectively, and reports were sent to their parents.) Similarly, in 2018 the South China Morning Post reported that dozens of factories and businesses had begun using “brain surveillance devices” to monitor workers’ emotions, in the hopes of increasing productivity and improving safety. The devices “caused some discomfort and resistance in the beginning,” Jin Jia, then a brain scientist at Ningbo University, told the reporter. “After a while, they got used to the device.”

But the primary problem with even low-resolution devices is that scientists are only just beginning to understand how information is actually encoded in brain data. In the future, powerful new decoding algorithms could discover that even raw, low-resolution EEG data contains a wealth of information about a person’s mental state at the time of collection. Consequently, nobody can definitively know what they are giving away when they allow companies to collect information from their brains.

Huth and Tang concluded that brain data, therefore, should be closely guarded, especially in the realm of consumer products. In an article on Medium from last April, Tang wrote that “decoding technology is continually improving, and the information that could be decoded from a brain scan a year from now may be very different from what can be decoded today. It is crucial that companies are transparent about what they intend to do with brain data and take measures to ensure that brain data is carefully protected.” (Yuste said the Neurorights Foundation recently surveyed the user agreements of 30 neurotech companies and found that all of them claim ownership of users’ brain data — and most assert the right to sell that data to third parties. [emphases mine]) Despite these concerns, however, Huth and Tang maintained that the potential benefits of these technologies outweighed their risks, provided the proper guardrails [emphasis mine] were put in place.

It would seem the first guardrails are being set up in South America (from Reveley’s January 18, 2024 article), Note: Links have been removed,

On a hot summer night in 2019, Yuste sat in the courtyard of an adobe hotel in the north of Chile with his close friend, the prominent Chilean doctor and then-senator Guido Girardi, observing the vast, luminous skies of the Atacama Desert and discussing, as they often did, the world of tomorrow. Girardi, who every year organizes the Congreso Futuro, Latin America’s preeminent science and technology event, had long been intrigued by the accelerating advance of technology and its paradigm-shifting impact on society — “living in the world at the speed of light,” as he called it. Yuste had been a frequent speaker at the conference, and the two men shared a conviction that scientists were birthing technologies powerful enough to disrupt the very notion of what it meant to be human.

Around midnight, as Yuste finished his pisco sour, Girardi made an intriguing proposal: What if they worked together to pass an amendment to Chile’s constitution, one that would enshrine protections for mental privacy as an inviolable right of every Chilean? It was an ambitious idea, but Girardi had experience moving bold pieces of legislation through the senate; years earlier he had spearheaded Chile’s famous Food Labeling and Advertising Law, which required companies to affix health warning labels on junk food. (The law has since inspired dozens of countries to pursue similar legislation.) With BCI, here was another chance to be a trailblazer. “I said to Rafael, ‘Well, why don’t we create the first neuro data protection law?’” Girardi recalled. Yuste readily agreed.

… Girardi led the political push, promoting a piece of legislation that would amend Chile’s constitution to protect mental privacy. The effort found surprising purchase across the political spectrum, a remarkable feat in a country famous for its political polarization. In 2021, Chile’s congress unanimously passed the constitutional amendment, which Piñera [Sebastián Piñera] swiftly signed into law. (A second piece of legislation, which would establish a regulatory framework for neurotechnology, is currently under consideration by Chile’s congress.) “There was no divide between the left or right,” recalled Girardi. “This was maybe the only law in Chile that was approved by unanimous vote.” Chile, then, had become the first country in the world to enshrine “neurorights” in its legal code.

Even before the passage of the Chilean constitutional amendment, Yuste had begun meeting regularly with Jared Genser, an international human rights lawyer who had represented such high-profile clients as Desmond Tutu, Liu Xiaobo, and Aung San Suu Kyi. (The New York Times Magazine once referred to Genser as “the extractor” for his work with political prisoners.) Yuste was seeking guidance on how to develop an international legal framework to protect neurorights, and Genser, though he had just a cursory knowledge of neurotechnology, was immediately captivated by the topic. “It’s fair to say he blew my mind in the first hour of discussion,” recalled Genser. Soon thereafter, Yuste, Genser, and a private-sector entrepreneur named Jamie Daves launched the Neurorights Foundation, a nonprofit whose first goal, according to its website, is “to protect the human rights of all people from the potential misuse or abuse of neurotechnology.”

To accomplish this, the organization has sought to engage all levels of society, from the United Nations and regional governing bodies like the Organization of American States, down to national governments, the tech industry, scientists, and the public at large. Such a wide-ranging approach, said Genser, “is perhaps insanity on our part, or grandiosity. But nonetheless, you know, it’s definitely the Wild West as it comes to talking about these issues globally, because so few people know about where things are, where they’re heading, and what is necessary.”

This general lack of knowledge about neurotech, in all strata of society, has largely placed Yuste in the role of global educator — he has met several times with U.N. Secretary-General António Guterres, for example, to discuss the potential dangers of emerging neurotech. And these efforts are starting to yield results. Guterres’s 2021 report, “Our Common Agenda,” which sets forth goals for future international cooperation, urges “updating or clarifying our application of human rights frameworks and standards to address frontier issues,” such as “neuro-technology.” Genser attributes the inclusion of this language in the report to Yuste’s advocacy efforts.

But updating international human rights law is difficult, and even within the Neurorights Foundation there are differences of opinion regarding the most effective approach. For Yuste, the ideal solution would be the creation of a new international agency, akin to the International Atomic Energy Agency — but for neurorights. “My dream would be to have an international convention about neurotechnology, just like we had one about atomic energy and about certain things, with its own treaty,” he said. “And maybe an agency that would essentially supervise the world’s efforts in neurotechnology.”

Genser, however, believes that a new treaty is unnecessary, and that neurorights can be codified most effectively by extending interpretation of existing international human rights law to include them. The International Covenant of Civil and Political Rights, for example, already ensures the general right to privacy, and an updated interpretation of the law could conceivably clarify that that clause extends to mental privacy as well.

There is no need for immediate panic (from Reveley’s January 18, 2024 article),

… while Yuste and the others continue to grapple with the complexities of international and national law, Huth and Tang have found that, for their decoder at least, the greatest privacy guardrails come not from external institutions but rather from something much closer to home — the human mind itself. Following the initial success of their decoder, as the pair read widely about the ethical implications of such a technology, they began to think of ways to assess the boundaries of the decoder’s capabilities. “We wanted to test a couple kind of principles of mental privacy,” said Huth. Simply put, they wanted to know if the decoder could be resisted.

In late 2021, the scientists began to run new experiments. First, they were curious if an algorithm trained on one person could be used on another. They found that it could not — the decoder’s efficacy depended on many hours of individualized training. Next, they tested whether the decoder could be thrown off simply by refusing to cooperate with it. Instead of focusing on the story that was playing through their headphones while inside the fMRI machine, participants were asked to complete other mental tasks, such as naming random animals, or telling a different story in their head. “Both of those rendered it completely unusable,” Huth said. “We didn’t decode the story they were listening to, and we couldn’t decode anything about what they were thinking either.”

Given how quickly this field of research is progressing, it seems like a good idea to increase efforts to establish neurorights (from Reveley’s January 18, 2024 article),

For Yuste, however, technologies like Huth and Tang’s decoder may only mark the beginning of a mind-boggling new chapter in human history, one in which the line between human brains and computers will be radically redrawn — or erased completely. A future is conceivable, he said, where humans and computers fuse permanently, leading to the emergence of technologically augmented cyborgs. “When this tsunami hits us I would say it’s not likely it’s for sure that humans will end up transforming themselves — ourselves — into maybe a hybrid species,” Yuste said. He is now focused on preparing for this future.

In the last several years, Yuste has traveled to multiple countries, meeting with a wide assortment of politicians, supreme court justices, U.N. committee members, and heads of state. And his advocacy is beginning to yield results. In August, Mexico began considering a constitutional reform that would establish the right to mental privacy. Brazil is currently considering a similar proposal, while Spain, Argentina, and Uruguay have also expressed interest, as has the European Union. In September [2023], neurorights were officially incorporated into Mexico’s digital rights charter, while in Chile, a landmark Supreme Court ruling found that Emotiv Inc, a company that makes a wearable EEG headset, violated Chile’s newly minted mental privacy law. That suit was brought by Yuste’s friend and collaborator, Guido Girardi.

“This is something that we should take seriously,” he [Huth] said. “Because even if it’s rudimentary right now, where is that going to be in five years? What was possible five years ago? What’s possible now? Where’s it gonna be in five years? Where’s it gonna be in 10 years? I think the range of reasonable possibilities includes things that are — I don’t want to say like scary enough — but like dystopian enough that I think it’s certainly a time for us to think about this.”

You can find The Neurorights Foundation here and/or read Reveley’s January 18, 2024 article on salon.com or as originally published January 3, 2024 on Undark. Finally, thank you for the article, Fletcher Reveley!

2023 Nobel prizes (medicine, physics, and chemistry)

For the first time in the 15 years this blog has been around, the Nobel prizes awarded in medicine, physics, and chemistry are all in areas discussed here at one time or another. As usual where people are concerned, some of these scientists had a tortuous journey to this prestigious outcome.

Medicine

Two people (Katalin Karikó and Drew Weissman) were awarded the prize in medicine according to the October 2, 2023 Nobel Prize press release, Note: Links have been removed,

The Nobel Assembly at Karolinska Institutet [Sweden]

has today decided to award

the 2023 Nobel Prize in Physiology or Medicine

jointly to

Katalin Karikó and Drew Weissman

for their discoveries concerning nucleoside base modifications that enabled the development of effective mRNA vaccines against COVID-19

The discoveries by the two Nobel Laureates were critical for developing effective mRNA vaccines against COVID-19 during the pandemic that began in early 2020. Through their groundbreaking findings, which have fundamentally changed our understanding of how mRNA interacts with our immune system, the laureates contributed to the unprecedented rate of vaccine development during one of the greatest threats to human health in modern times.

Vaccines before the pandemic

Vaccination stimulates the formation of an immune response to a particular pathogen. This gives the body a head start in the fight against disease in the event of a later exposure. Vaccines based on killed or weakened viruses have long been available, exemplified by the vaccines against polio, measles, and yellow fever. In 1951, Max Theiler was awarded the Nobel Prize in Physiology or Medicine for developing the yellow fever vaccine.

Thanks to the progress in molecular biology in recent decades, vaccines based on individual viral components, rather than whole viruses, have been developed. Parts of the viral genetic code, usually encoding proteins found on the virus surface, are used to make proteins that stimulate the formation of virus-blocking antibodies. Examples are the vaccines against the hepatitis B virus and human papillomavirus. Alternatively, parts of the viral genetic code can be moved to a harmless carrier virus, a “vector.” This method is used in vaccines against the Ebola virus. When vector vaccines are injected, the selected viral protein is produced in our cells, stimulating an immune response against the targeted virus.

Producing whole virus-, protein- and vector-based vaccines requires large-scale cell culture. This resource-intensive process limits the possibilities for rapid vaccine production in response to outbreaks and pandemics. Therefore, researchers have long attempted to develop vaccine technologies independent of cell culture, but this proved challenging.

Figure 1. Methods for vaccine production before the COVID-19 pandemic. © The Nobel Committee for Physiology or Medicine. Ill. Mattias Karlén

mRNA vaccines: A promising idea

In our cells, genetic information encoded in DNA is transferred to messenger RNA (mRNA), which is used as a template for protein production. During the 1980s, efficient methods for producing mRNA without cell culture were introduced, called in vitro transcription. This decisive step accelerated the development of molecular biology applications in several fields. Ideas of using mRNA technologies for vaccine and therapeutic purposes also took off, but roadblocks lay ahead. In vitro transcribed mRNA was considered unstable and challenging to deliver, requiring the development of sophisticated carrier lipid systems to encapsulate the mRNA. Moreover, in vitro-produced mRNA gave rise to inflammatory reactions. Enthusiasm for developing the mRNA technology for clinical purposes was, therefore, initially limited.

These obstacles did not discourage the Hungarian biochemist Katalin Karikó, who was devoted to developing methods to use mRNA for therapy. During the early 1990s, when she was an assistant professor at the University of Pennsylvania, she remained true to her vision of realizing mRNA as a therapeutic despite encountering difficulties in convincing research funders of the significance of her project. A new colleague of Karikó at her university was the immunologist Drew Weissman. He was interested in dendritic cells, which have important functions in immune surveillance and the activation of vaccine-induced immune responses. Spurred by new ideas, a fruitful collaboration between the two soon began, focusing on how different RNA types interact with the immune system.

The breakthrough

Karikó and Weissman noticed that dendritic cells recognize in vitro transcribed mRNA as a foreign substance, which leads to their activation and the release of inflammatory signaling molecules. They wondered why the in vitro transcribed mRNA was recognized as foreign while mRNA from mammalian cells did not give rise to the same reaction. Karikó and Weissman realized that some critical properties must distinguish the different types of mRNA.

RNA contains four bases, abbreviated A, U, G, and C, corresponding to A, T, G, and C in DNA, the letters of the genetic code. Karikó and Weissman knew that bases in RNA from mammalian cells are frequently chemically modified, while in vitro transcribed mRNA is not. They wondered if the absence of altered bases in the in vitro transcribed RNA could explain the unwanted inflammatory reaction. To investigate this, they produced different variants of mRNA, each with unique chemical alterations in their bases, which they delivered to dendritic cells. The results were striking: The inflammatory response was almost abolished when base modifications were included in the mRNA. This was a paradigm change in our understanding of how cells recognize and respond to different forms of mRNA. Karikó and Weissman immediately understood that their discovery had profound significance for using mRNA as therapy. These seminal results were published in 2005, fifteen years before the COVID-19 pandemic.

Figure 2. mRNA contains four different bases, abbreviated A, U, G, and C. The Nobel Laureates discovered that base-modified mRNA can be used to block activation of inflammatory reactions (secretion of signaling molecules) and increase protein production when mRNA is delivered to cells.  © The Nobel Committee for Physiology or Medicine. Ill. Mattias Karlén

In further studies published in 2008 and 2010, Karikó and Weissman showed that the delivery of mRNA generated with base modifications markedly increased protein production compared to unmodified mRNA. The effect was due to the reduced activation of an enzyme that regulates protein production. Through their discoveries that base modifications both reduced inflammatory responses and increased protein production, Karikó and Weissman had eliminated critical obstacles on the way to clinical applications of mRNA.

mRNA vaccines realized their potential

Interest in mRNA technology began to pick up, and in 2010, several companies were working on developing the method. Vaccines against Zika virus and MERS-CoV were pursued; the latter is closely related to SARS-CoV-2. After the outbreak of the COVID-19 pandemic, two base-modified mRNA vaccines encoding the SARS-CoV-2 surface protein were developed at record speed. Protective effects of around 95% were reported, and both vaccines were approved as early as December 2020.

The impressive flexibility and speed with which mRNA vaccines can be developed pave the way for using the new platform also for vaccines against other infectious diseases. In the future, the technology may also be used to deliver therapeutic proteins and treat some cancer types.

Several other vaccines against SARS-CoV-2, based on different methodologies, were also rapidly introduced, and together, more than 13 billion COVID-19 vaccine doses have been given globally. The vaccines have saved millions of lives and prevented severe disease in many more, allowing societies to open and return to normal conditions. Through their fundamental discoveries of the importance of base modifications in mRNA, this year’s Nobel laureates critically contributed to this transformative development during one of the biggest health crises of our time.

Read more about this year’s prize

Scientific background: Discoveries concerning nucleoside base modifications that enabled the development of effective mRNA vaccines against COVID-19

Katalin Karikó was born in 1955 in Szolnok, Hungary. She received her PhD from the University of Szeged in 1982 and performed postdoctoral research at the Hungarian Academy of Sciences in Szeged until 1985. She then conducted postdoctoral research at Temple University, Philadelphia, and the University of Health Science, Bethesda. In 1989, she was appointed Assistant Professor at the University of Pennsylvania, where she remained until 2013. After that, she became vice president and later senior vice president at BioNTech RNA Pharmaceuticals. Since 2021, she has been a Professor at the University of Szeged and an Adjunct Professor at the Perelman School of Medicine at the University of Pennsylvania.

Drew Weissman was born in 1959 in Lexington, Massachusetts, USA. He received his MD, PhD degrees from Boston University in 1987. He did his clinical training at Beth Israel Deaconess Medical Center at Harvard Medical School and postdoctoral research at the National Institutes of Health. In 1997, Weissman established his research group at the Perelman School of Medicine at the University of Pennsylvania. He is the Roberts Family Professor in Vaccine Research and Director of the Penn Institute for RNA Innovations.

The University of Pennsylvania October 2, 2023 news release is a very interesting announcement (more about why it’s interesting afterwards), Note: Links have been removed,

The University of Pennsylvania messenger RNA pioneers whose years of scientific partnership unlocked understanding of how to modify mRNA to make it an effective therapeutic—enabling a platform used to rapidly develop lifesaving vaccines amid the global COVID-19 pandemic—have been named winners of the 2023 Nobel Prize in Physiology or Medicine. They become the 28th and 29th Nobel laureates affiliated with Penn, and join nine previous Nobel laureates with ties to the University of Pennsylvania who have won the Nobel Prize in Medicine.

Nearly three years after the rollout of mRNA vaccines across the world, Katalin Karikó, PhD, an adjunct professor of Neurosurgery in Penn’s Perelman School of Medicine, and Drew Weissman, MD, PhD, the Roberts Family Professor of Vaccine Research in the Perelman School of Medicine, are recipients of the prize announced this morning by the Nobel Assembly in Solna, Sweden.

After a chance meeting in the late 1990s while photocopying research papers, Karikó and Weissman began investigating mRNA as a potential therapeutic. In 2005, they published a key discovery: mRNA could be altered and delivered effectively into the body to activate the body’s protective immune system. The mRNA-based vaccines elicited a robust immune response, including high levels of antibodies that attack a specific infectious disease that has not previously been encountered. Unlike other vaccines, a live or attenuated virus is not injected or required at any point.

When the COVID-19 pandemic struck, the true value of the pair’s lab work was revealed in the most timely of ways, as companies worked to quickly develop and deploy vaccines to protect people from the virus. Both Pfizer/BioNTech and Moderna utilized Karikó and Weissman’s technology to build their highly effective vaccines to protect against severe illness and death from the virus. In the United States alone, mRNA vaccines make up more than 655 million total doses of SARS-CoV-2 vaccines that have been administered since they became available in December 2020.

Editor’s Note: The Pfizer/BioNTech and Moderna COVID-19 mRNA vaccines both use licensed University of Pennsylvania technology. As a result of these licensing relationships, Penn, Karikó and Weissman have received and may continue to receive significant financial benefits in the future based on the sale of these products. BioNTech provides funding for Weissman’s research into the development of additional infectious disease vaccines.

Science can be brutal

Now for the interesting bit: it’s in my March 5, 2021 posting (mRNA, COVID-19 vaccines, treating genetic diseases before birth, and the scientist who started it all),

Before messenger RNA was a multibillion-dollar idea, it was a scientific backwater. And for the Hungarian-born scientist behind a key mRNA discovery, it was a career dead-end.

Katalin Karikó spent the 1990s collecting rejections. Her work, attempting to harness the power of mRNA to fight disease, was too far-fetched for government grants, corporate funding, and even support from her own colleagues.

“Every night I was working: grant, grant, grant,” Karikó remembered, referring to her efforts to obtain funding. “And it came back always no, no, no.”

By 1995, after six years on the faculty at the University of Pennsylvania, Karikó got demoted. [emphasis mine] She had been on the path to full professorship, but with no money coming in to support her work on mRNA, her bosses saw no point in pressing on.

She was back to the lower rungs of the scientific academy.

“Usually, at that point, people just say goodbye and leave because it’s so horrible,” Karikó said.

There’s no opportune time for demotion, but 1995 had already been uncommonly difficult. Karikó had recently endured a cancer scare, and her husband was stuck in Hungary sorting out a visa issue. Now the work to which she’d devoted countless hours was slipping through her fingers.

In time, those better experiments came together. After a decade of trial and error, Karikó and her longtime collaborator at Penn — Drew Weissman [emphasis mine], an immunologist with a medical degree and Ph.D. from Boston University — discovered a remedy for mRNA’s Achilles’ heel.

You can get the whole story from my March 5, 2021 posting, scroll down to the “mRNA—it’s in the details, plus, the loneliness of pioneer researchers, a demotion, and squabbles” subhead. If you are very curious about mRNA and the rough and tumble of the world of science, there’s my August 20, 2021 posting “Getting erased from the mRNA/COVID-19 story” where Ian MacLachlan is featured as a researcher who got erased and where Karikó credits his work.

‘Rowing Mom Wins Nobel’ (credit: rowing website Row 2K)

Karikó’s daughter, Susan Francia, is a two-time Olympic gold medallist, as the Canadian Broadcasting Corporation’s (CBC) radio programme As It Happens notes in an interview with her. From an October 4, 2023 As It Happens article (with embedded audio programme excerpt) by Sheena Goodyear,

Olympic gold medallist Susan Francia is coming to terms with the fact that she’s no longer the most famous person in her family.

That’s because the retired U.S. rower’s mother, Katalin Karikó, just won a Nobel Prize in Medicine. The biochemist was awarded alongside her colleague, vaccine researcher Drew Weissman, for their groundbreaking work that led to the development of COVID-19 vaccines. 

“Now I’m like, ‘Shoot! All right, I’ve got to work harder,'” Francia said with a laugh during an interview with As It Happens host Nil Köksal. 

But in all seriousness, Francia says she’s immensely proud of her mother’s accomplishments. In fact, it was Karikó’s fierce dedication to science that inspired Francia to win Olympic gold medals in 2008 and 2012.

“Sport is a lot like science in that, you know, you have a passion for something and you just go and you train, attain your goal, whether it be making this discovery that you truly believe in, or for me, it was trying to be the best in the world,” Francia said.

“It’s a grind and, honestly, I love that grind. And my mother did too.”

… one of her [Karikó’s] favourite headlines so far comes from a little blurb on the rowing website Row 2K: “Rowing Mom Wins Nobel.”

Nowadays, scientists are trying to harness the power of mRNA to fight cancer, malaria, influenza and rabies. But when Karikó first began her work, it was a fringe concept. For decades, she toiled in relative obscurity, struggling to secure funding for her research.

“That’s also that same passion that I took into my rowing,” Francia said.

But even as Karikó struggled to make a name for herself, she says her own mother, Zsuzsanna, always believed she would earn a Nobel Prize one day.

Every year, as the Nobel Prize announcement approached, she would tell Karikó she’d be watching for her name. 

“I was laughing [and saying] that, ‘Mom, I am not getting anything,'” she said. 

But her mother, who died a few years ago, ultimately proved correct. 

Congratulations to both Katalin Karikó and Drew Weissman and thank you both for persisting!

Physics

This prize is for physics at the attoscale.

Aaron W. Harrison (Assistant Professor of Chemistry, Austin College, Texas, US) attempts an explanation of an attosecond in his October 3, 2023 essay (in English “What is an attosecond? A physical chemist explains the tiny time scale behind Nobel Prize-winning research” and in French “Nobel de physique : qu’est-ce qu’une attoseconde?”) for The Conversation, Note: Links have been removed,

“Atto” is the scientific notation prefix that represents 10⁻¹⁸, which is a decimal point followed by 17 zeroes and a 1. So a flash of light lasting an attosecond, or 0.000000000000000001 of a second, is an extremely short pulse of light.

In fact, there are approximately as many attoseconds in one second as there are seconds in the age of the universe.

Previously, scientists could study the motion of heavier and slower-moving atomic nuclei with femtosecond (10⁻¹⁵ second) light pulses. There are one thousand attoseconds in one femtosecond. But researchers couldn’t see movement on the electron scale until they could generate attosecond light pulses – electrons move too fast for scientists to parse exactly what they are up to at the femtosecond level.
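Harrison’s comparison – roughly as many attoseconds in one second as seconds in the age of the universe – is easy to sanity-check with a few lines of arithmetic. This is a rough back-of-the-envelope sketch; the 13.8-billion-year age of the universe is the standard cosmological estimate, not a figure from the essay:

```python
# Check: are there about as many attoseconds in one second
# as there have been seconds since the Big Bang?
ATTOSECONDS_PER_SECOND = 1e18      # 1 as = 1e-18 s
AGE_OF_UNIVERSE_YEARS = 13.8e9     # ~13.8 billion years (standard estimate)
SECONDS_PER_YEAR = 365.25 * 24 * 3600

age_in_seconds = AGE_OF_UNIVERSE_YEARS * SECONDS_PER_YEAR
ratio = ATTOSECONDS_PER_SECOND / age_in_seconds

print(f"Seconds since the Big Bang: ~{age_in_seconds:.2e}")
print(f"Attoseconds in one second:  {ATTOSECONDS_PER_SECOND:.0e}")
print(f"Ratio: ~{ratio:.1f}")  # same order of magnitude
```

The two quantities agree to within a small factor (about 4 × 10¹⁷ seconds versus 10¹⁸ attoseconds), which is why “approximately as many” is the right hedge.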

Harrison does a very good job of explaining something that requires a leap of imagination. He also explains why scientists engage in attosecond research. h/t October 4, 2023 news item on phys.org

Amelle Zaïr (Imperial College London) offers a more technical explanation in her October 4, 2023 essay about the 2023 prize winners for The Conversation. h/t October 4, 2023 news item on phys.org

Main event

Here’s the October 3, 2023 Nobel Prize press release, Note: A link has been removed,

The Royal Swedish Academy of Sciences has decided to award the Nobel Prize in Physics 2023 to

Pierre Agostini
The Ohio State University, Columbus, USA

Ferenc Krausz
Max Planck Institute of Quantum Optics, Garching and Ludwig-Maximilians-Universität München, Germany

Anne L’Huillier
Lund University, Sweden

“for experimental methods that generate attosecond pulses of light for the study of electron dynamics in matter”

Experiments with light capture the shortest of moments

The three Nobel Laureates in Physics 2023 are being recognised for their experiments, which have given humanity new tools for exploring the world of electrons inside atoms and molecules. Pierre Agostini, Ferenc Krausz and Anne L’Huillier have demonstrated a way to create extremely short pulses of light that can be used to measure the rapid processes in which electrons move or change energy.

Fast-moving events flow into each other when perceived by humans, just like a film that consists of still images is perceived as continual movement. If we want to investigate really brief events, we need special technology. In the world of electrons, changes occur in a few tenths of an attosecond – an attosecond is so short that there are as many in one second as there have been seconds since the birth of the universe.

The laureates’ experiments have produced pulses of light so short that they are measured in attoseconds, thus demonstrating that these pulses can be used to provide images of processes inside atoms and molecules.

In 1987, Anne L’Huillier discovered that many different overtones of light arose when she transmitted infrared laser light through a noble gas. Each overtone is a light wave with a given number of cycles for each cycle in the laser light. They are caused by the laser light interacting with atoms in the gas; it gives some electrons extra energy that is then emitted as light. Anne L’Huillier has continued to explore this phenomenon, laying the ground for subsequent breakthroughs.

In 2001, Pierre Agostini succeeded in producing and investigating a series of consecutive light pulses, in which each pulse lasted just 250 attoseconds. At the same time, Ferenc Krausz was working with another type of experiment, one that made it possible to isolate a single light pulse that lasted 650 attoseconds.

The laureates’ contributions have enabled the investigation of processes that are so rapid they were previously impossible to follow.

“We can now open the door to the world of electrons. Attosecond physics gives us the opportunity to understand mechanisms that are governed by electrons. The next step will be utilising them,” says Eva Olsson, Chair of the Nobel Committee for Physics.

There are potential applications in many different areas. In electronics, for example, it is important to understand and control how electrons behave in a material. Attosecond pulses can also be used to identify different molecules, such as in medical diagnostics.

Read more about this year’s prize

Popular science background: Electrons in pulses of light (pdf)
Scientific background: “For experimental methods that generate attosecond pulses of light for the study of electron dynamics in matter” (pdf)

Pierre Agostini. PhD 1968 from Aix-Marseille University, France. Professor at The Ohio State University, Columbus, USA.

Ferenc Krausz, born 1962 in Mór, Hungary. PhD 1991 from Vienna University of Technology, Austria. Director at Max Planck Institute of Quantum Optics, Garching and Professor at Ludwig-Maximilians-Universität München, Germany.

Anne L’Huillier, born 1958 in Paris, France. PhD 1986 from University Pierre and Marie Curie, Paris, France. Professor at Lund University, Sweden.

A Canadian connection?

An October 3, 2023 CBC online news item from the Associated Press reveals a Canadian connection of sorts,

Three scientists have won the Nobel Prize in physics Tuesday for giving us the first split-second glimpse into the superfast world of spinning electrons, a field that could one day lead to better electronics or disease diagnoses.

The award went to French-Swedish physicist Anne L’Huillier, French scientist Pierre Agostini and Hungarian-born Ferenc Krausz for their work with the tiny part of each atom that races around the centre, and that is fundamental to virtually everything: chemistry, physics, our bodies and our gadgets.

Electrons move around so fast that they have been out of reach of human efforts to isolate them. But by looking at the tiniest fraction of a second possible, scientists now have a “blurry” glimpse of them, and that opens up whole new sciences, experts said.

“The electrons are very fast, and the electrons are really the workforce in everywhere,” Nobel Committee member Mats Larsson said. “Once you can control and understand electrons, you have taken a very big step forward.”

L’Huillier is the fifth woman to receive a Nobel in Physics.

L’Huillier was teaching basic engineering physics to about 100 undergraduates at Lund when she got the call that she had won, but her phone was on silent and she didn’t pick up. She checked it during a break and called the Nobel Committee.

Then she went back to teaching.

Agostini, an emeritus professor at Ohio State University, was in Paris and could not be reached by the Nobel Committee before it announced his win to the world.

Here’s the Canadian connection (from the October 3, 2023 CBC online news item),

Krausz, of the Max Planck Institute of Quantum Optics and Ludwig Maximilian University of Munich, told reporters that he was bewildered.

“I have been trying to figure out since 11 a.m. whether I’m in reality or it’s just a long dream,” the 61-year-old said.

Last year, Krausz and L’Huillier won the prestigious Wolf prize in physics for their work, sharing it with University of Ottawa scientist Paul Corkum [emphasis mine]. Nobel prizes are limited to only three winners and Krausz said it was a shame that it could not include Corkum.

Corkum was key to how the split-second laser flashes could be measured [emphasis mine], which was crucial, Krausz said.

Congratulations to Pierre Agostini, Ferenc Krausz and Anne L’Huillier and a bow to Paul Corkum!

For those who are curious, a ‘Paul Corkum’ search should bring up a few postings on this blog, but I missed this piece of news: a May 4, 2023 University of Ottawa news release about Corkum and the 2022 Wolf Prize, which he shared with Krausz and L’Huillier.

Chemistry

There was a little drama where this prize was concerned: it was announced too early, according to an October 4, 2023 news item on phys.org and, again, another October 4, 2023 news item on phys.org (from the October 4, 2023 story by Karl Ritter for the Associated Press),

Oops! Nobel chemistry winners are announced early in a rare slip-up

The most prestigious and secretive prize in science ran headfirst into the digital era Wednesday when Swedish media got an emailed press release revealing the winners of the Nobel Prize in chemistry and the news prematurely went public.

Here’s the fully sanctioned October 4, 2023 Nobel Prize press release, Note: A link has been removed,

The Royal Swedish Academy of Sciences has decided to award the Nobel Prize in Chemistry 2023 to

Moungi G. Bawendi
Massachusetts Institute of Technology (MIT), Cambridge, MA, USA

Louis E. Brus
Columbia University, New York, NY, USA

Alexei I. Ekimov
Nanocrystals Technology Inc., New York, NY, USA

“for the discovery and synthesis of quantum dots”

They planted an important seed for nanotechnology

The Nobel Prize in Chemistry 2023 rewards the discovery and development of quantum dots, nanoparticles so tiny that their size determines their properties. These smallest components of nanotechnology now spread their light from televisions and LED lamps, and can also guide surgeons when they remove tumour tissue, among many other things.

Everyone who studies chemistry learns that an element’s properties are governed by how many electrons it has. However, when matter shrinks to nano-dimensions quantum phenomena arise; these are governed by the size of the matter. The Nobel Laureates in Chemistry 2023 have succeeded in producing particles so small that their properties are determined by quantum phenomena. The particles, which are called quantum dots, are now of great importance in nanotechnology.

“Quantum dots have many fascinating and unusual properties. Importantly, they have different colours depending on their size,” says Johan Åqvist, Chair of the Nobel Committee for Chemistry.

Physicists had long known that in theory size-dependent quantum effects could arise in nanoparticles, but at that time it was almost impossible to sculpt in nanodimensions. Therefore, few people believed that this knowledge would be put to practical use.

However, in the early 1980s, Alexei Ekimov succeeded in creating size-dependent quantum effects in coloured glass. The colour came from nanoparticles of copper chloride and Ekimov demonstrated that the particle size affected the colour of the glass via quantum effects.

A few years later, Louis Brus was the first scientist in the world to prove size-dependent quantum effects in particles floating freely in a fluid.

In 1993, Moungi Bawendi revolutionised the chemical production of quantum dots, resulting in almost perfect particles. This high quality was necessary for them to be utilised in applications.

Quantum dots now illuminate computer monitors and television screens based on QLED technology. They also add nuance to the light of some LED lamps, and biochemists and doctors use them to map biological tissue.

Quantum dots are thus bringing the greatest benefit to humankind. Researchers believe that in the future they could contribute to flexible electronics, tiny sensors, thinner solar cells and encrypted quantum communication – so we have just started exploring the potential of these tiny particles.

Read more about this year’s prize

Popular science background: They added colour to nanotechnology (pdf)
Scientific background: Quantum dots – seeds of nanoscience (pdf)

Moungi G. Bawendi, born 1961 in Paris, France. PhD 1988 from University of Chicago, IL, USA. Professor at Massachusetts Institute of Technology (MIT), Cambridge, MA, USA.

Louis E. Brus, born 1943 in Cleveland, OH, USA. PhD 1969 from Columbia University, New York, NY, USA. Professor at Columbia University, New York, NY, USA.

Alexei I. Ekimov, born 1945 in the former USSR. PhD 1974 from Ioffe Physical-Technical Institute, Saint Petersburg, Russia. Formerly Chief Scientist at Nanocrystals Technology Inc., New York, NY, USA.


The most recent ‘quantum dot’ (a particular type of nanoparticle) story here is a January 5, 2023 posting, “Can I have a beer with those carbon quantum dots?”

Proving yet again that scientists can have a bumpy trip to a Nobel prize, an October 4, 2023 news item on phys.org describes how one of the winners flunked his first undergraduate chemistry test, Note: Links have been removed,

Talk about bouncing back. MIT professor Moungi Bawendi is a co-winner of this year’s Nobel chemistry prize for helping develop “quantum dots”—nanoparticles that are now found in next generation TV screens and help illuminate tumors within the body.

But as an undergraduate, he flunked his very first chemistry exam, recalling that the experience nearly “destroyed” him.

The 62-year-old of Tunisian and French heritage excelled at science throughout high school, without ever having to break a sweat.

But when he arrived at Harvard University as an undergraduate in the late 1970s, he was in for a rude awakening.

You can find more about the winners and quantum dots in an October 4, 2023 news item on Nanowerk and in Dr. Andrew Maynard’s (Professor of Advanced Technology Transitions, Arizona State University) October 4, 2023 essay for The Conversation (h/t October 4, 2023 news item on phys.org), Note: Links have been removed,

This year’s prize recognizes Moungi Bawendi, Louis Brus and Alexei Ekimov for the discovery and development of quantum dots. For many years, these precisely constructed nanometer-sized particles – just a few hundred thousandths the width of a human hair in diameter – were the darlings of nanotechnology pitches and presentations. As a researcher and adviser on nanotechnology [emphasis mine], I’ve [Dr. Andrew Maynard] even used them myself when talking with developers, policymakers, advocacy groups and others about the promise and perils of the technology.

The origins of nanotechnology predate Bawendi, Brus and Ekimov’s work on quantum dots – the physicist Richard Feynman speculated on what could be possible through nanoscale engineering as early as 1959, and engineers like Eric Drexler were speculating about the possibilities of atomically precise manufacturing in the 1980s. However, this year’s trio of Nobel laureates were part of the earliest wave of modern nanotechnology where researchers began putting breakthroughs in material science to practical use.

Quantum dots brilliantly fluoresce: They absorb one color of light and reemit it nearly instantaneously as another color. A vial of quantum dots, when illuminated with broad spectrum light, shines with a single vivid color. What makes them special, though, is that their color is determined by how large or small they are. Make them small and you get an intense blue. Make them larger, though still nanoscale, and the color shifts to red.

The wavelength of light a quantum dot emits depends on its size. Maysinger, Ji, Hutter, Cooper, CC BY
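The size-to-colour relationship Maynard describes follows from quantum confinement: squeezing the electron into a smaller box raises its energy, widening the effective band gap and shifting emission toward blue. A minimal sketch of that trend, using the textbook particle-in-a-box (Brus-type) confinement term with illustrative CdSe parameters – a toy model, not the laureates’ actual calculations:

```python
# Toy estimate of quantum-dot emission colour vs. size.
# Material constants below are illustrative textbook values for CdSe.
H = 6.626e-34       # Planck constant, J*s
M0 = 9.109e-31      # electron rest mass, kg
EV = 1.602e-19      # joules per electronvolt

E_GAP_BULK = 1.74   # bulk CdSe band gap, eV
M_ELECTRON = 0.13   # effective electron mass, in units of M0
M_HOLE = 0.45       # effective hole mass, in units of M0

def emission_wavelength_nm(diameter_nm: float) -> float:
    """Approximate emission wavelength of a CdSe dot of the given diameter."""
    L = diameter_nm * 1e-9
    # Confinement energy scales as 1/L^2: smaller dot -> wider gap -> bluer light.
    e_conf = (H**2 / (8 * L**2)) * (1 / M_ELECTRON + 1 / M_HOLE) / M0 / EV
    e_total = E_GAP_BULK + e_conf          # total gap in eV
    return 1240.0 / e_total                # lambda(nm) ~ 1240 / E(eV)

for d in (2.0, 4.0, 6.0):
    print(f"{d:.0f} nm dot -> ~{emission_wavelength_nm(d):.0f} nm emission")
```

The numbers are rough, but the trend is exactly the one in the quote: a ~2 nm dot emits in the blue, while a ~6 nm dot of the same material emits in the red.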

There’s also an October 4, 2023 overview article by Tekla S. Perry and Margo Anderson for the IEEE Spectrum about the magazine’s almost twenty-five years of reporting on quantum dots.


Image credit: Brandon Palacio/IEEE Spectrum

Your Guide to the Newest Nobel Prize: Quantum Dots

What you need to know—and what we’ve reported—about this year’s Chemistry award

It’s not a long article, and it focuses heavily on the IEEE’s (Institute of Electrical and Electronics Engineers) reporting on the road quantum dots have taken to becoming commercialized applications.

Congratulations to Moungi Bawendi, Louis Brus, and Alexei Ekimov!

A robot with body image and self awareness

This research is a rather interesting direction for robotics to take (from a July 13, 2022 news item on ScienceDaily),

As every athletic or fashion-conscious person knows, our body image is not always accurate or realistic, but it’s an important piece of information that determines how we function in the world. When you get dressed or play ball, your brain is constantly planning ahead so that you can move your body without bumping, tripping, or falling over.

We humans acquire our body-model as infants, and robots are following suit. A Columbia Engineering team announced today they have created a robot that — for the first time — is able to learn a model of its entire body from scratch, without any human assistance. In a new study published in Science Robotics, the researchers demonstrate how their robot created a kinematic model of itself, and then used its self-model to plan motion, reach goals, and avoid obstacles in a variety of situations. It even automatically recognized and then compensated for damage to its body.

Courtesy Columbia University School of Engineering and Applied Science

A July 13, 2022 Columbia University news release by Holly Evarts (also on EurekAlert), which originated the news item, describes the research in more detail, Note: Links have been removed,

Robot watches itself like an infant exploring itself in a hall of mirrors

The researchers placed a robotic arm inside a circle of five streaming video cameras. The robot watched itself through the cameras as it undulated freely. Like an infant exploring itself for the first time in a hall of mirrors, the robot wiggled and contorted to learn how exactly its body moved in response to various motor commands. After about three hours, the robot stopped. Its internal deep neural network had finished learning the relationship between the robot’s motor actions and the volume it occupied in its environment. 

“We were really curious to see how the robot imagined itself,” said Hod Lipson, professor of mechanical engineering and director of Columbia’s Creative Machines Lab, where the work was done. “But you can’t just peek into a neural network, it’s a black box.” After the researchers struggled with various visualization techniques, the self-image gradually emerged. “It was a sort of gently flickering cloud that appeared to engulf the robot’s three-dimensional body,” said Lipson. “As the robot moved, the flickering cloud gently followed it.” The robot’s self-model was accurate to about 1% of its workspace.
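The core idea — collect (motor command, observed outcome) pairs, then fit a model that predicts the outcome from the command — can be illustrated at toy scale. The actual study used five cameras and a deep network to learn the volume the robot occupies; the sketch below is a deliberately tiny, hypothetical stand-in in which a simulated 2-link arm “observes” its own tip position for random motor commands and recovers its hidden link lengths by gradient descent:

```python
# Hypothetical toy version of visual self-modeling: a simulated 2-link arm
# learns its own (hidden) link lengths purely from observed tip positions.
import math
import random

random.seed(0)
L1, L2 = 1.0, 0.7  # true link lengths the robot must discover about itself

def observe_tip(theta1, theta2):
    """Ground truth the cameras would report: tip position of a 2-link arm."""
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

# Gather "self-observation" data: random motor commands and resulting tip positions.
data = []
for _ in range(500):
    t1 = random.uniform(-3, 3)
    t2 = random.uniform(-3, 3)
    data.append((t1, t2, *observe_tip(t1, t2)))

# Self-model: x ~ a*cos(t1) + b*cos(t1+t2), y ~ a*sin(t1) + b*sin(t1+t2),
# where the parameters a and b (the link lengths) start out unknown.
a, b = random.random(), random.random()
lr = 0.05
for _ in range(200):  # gradient descent on the squared prediction error
    ga = gb = 0.0
    for t1, t2, x, y in data:
        px = a * math.cos(t1) + b * math.cos(t1 + t2)
        py = a * math.sin(t1) + b * math.sin(t1 + t2)
        ga += (px - x) * math.cos(t1) + (py - y) * math.sin(t1)
        gb += (px - x) * math.cos(t1 + t2) + (py - y) * math.sin(t1 + t2)
    a -= lr * ga / len(data)
    b -= lr * gb / len(data)

print(f"learned link lengths: a={a:.3f}, b={b:.3f} (true: {L1}, {L2})")
```

The real system learns a far richer model (a full 3D occupancy function rather than two scalars), but the loop is the same: act, observe yourself, reduce the gap between prediction and observation.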

Self-modeling robots will lead to more self-reliant autonomous systems

The ability of robots to model themselves without being assisted by engineers is important for many reasons: Not only does it save labor, but it also allows the robot to keep up with its own wear-and-tear, and even detect and compensate for damage. The authors argue that this ability is important as we need autonomous systems to be more self-reliant. A factory robot, for instance, could detect that something isn’t moving right, and compensate or call for assistance.

“We humans clearly have a notion of self,” explained the study’s first author Boyuan Chen, who led the work and is now an assistant professor at Duke University. “Close your eyes and try to imagine how your own body would move if you were to take some action, such as stretch your arms forward or take a step backward. Somewhere inside our brain we have a notion of self, a self-model that informs us what volume of our immediate surroundings we occupy, and how that volume changes as we move.”

Self-awareness in robots

The work is part of Lipson’s decades-long quest to find ways to grant robots some form of self-awareness.  “Self-modeling is a primitive form of self-awareness,” he explained. “If a robot, animal, or human, has an accurate self-model, it can function better in the world, it can make better decisions, and it has an evolutionary advantage.” 

The researchers are aware of the limits, risks, and controversies surrounding granting machines greater autonomy through self-awareness. Lipson is quick to admit that the kind of self-awareness demonstrated in this study is, as he noted, “trivial compared to that of humans, but you have to start somewhere. We have to go slowly and carefully, so we can reap the benefits while minimizing the risks.”  

Here’s a link to and a citation for the paper,

Full-body visual self-modeling of robot morphologies by Boyuan Chen, Robert Kwiatkowski, Carl Vondrick and Hod Lipson. Science Robotics, 13 Jul 2022, Vol. 7, Issue 68. DOI: 10.1126/scirobotics.abn1944

This paper is behind a paywall.

If you follow the link to the July 13, 2022 Columbia University news release, you’ll find an approximately 25 min. video of Hod Lipson showing you how they did it. As Lipson notes, discussion of self-awareness and sentience is not found in robotics programmes. Plus, there are more details and links if you follow the EurekAlert link.

Pulling water from the air

Adele Peters’ May 27, 2022 article for Fast Company describes some research into harvesting water from the air (Note: Links have been removed),

In Ethiopia, where an ongoing drought is the worst in 40 years, getting drinking water for the day can involve walking for eight hours. Some wells are drying up. As climate change progresses, water scarcity keeps getting worse. But new technology in development at the University of Texas at Austin could help: Using simple, low-cost materials, it harvests water from the air, even in the driest climates.

“The advantage of taking water moisture from the air is that it’s not limited geographically,” says Youhong “Nancy” Guo, lead author of a new study in Nature Communications that describes the technology.

It’s a little surprising that Peters doesn’t mention the megadrought in the US Southwest, which has made quite a splash in the news, from a February 15, 2022 article by Denise Chow for NBC [US National Broadcasting Corporation] news online (Note: Links have been removed),

The megadrought that has gripped the southwestern United States for the past 22 years is the worst since at least 800 A.D., according to a new study that examined shifts in water availability and soil moisture over the past 12 centuries.

The research, which suggests that the past two decades in the American Southwest have been the driest period in 1,200 years, pointed to human-caused climate change as a major reason for the current drought’s severity. The findings were published Monday in the journal Nature Climate Change.

Jason Smerdon, one of the study’s authors and a climate scientist at Columbia University’s Lamont-Doherty Earth Observatory, said global warming has made the megadrought more extreme because it creates a “thirstier” atmosphere that is better able to pull moisture out of forests, vegetation and soil.

Over the past two decades, temperatures in the Southwest were around 1.64 degrees Fahrenheit higher than the average from 1950 to 1999, according to the researchers. Globally, the world has warmed by about 2 degrees Fahrenheit since the late 1800s.

It’s getting drier even here in the Pacific Northwest. Maybe it’s time to start looking at drought and water shortages as a global issue rather than as a regional issue.

Caption: An example of a different shape the water-capturing film can take. Credit: The University of Texas at Austin / Cockrell School of Engineering

Getting back to the topic, a May 23, 2022 University of Texas at Austin news release (also on EurekAlert), which originated Peters’s article, announces the work,

More than a third of the world’s population lives in drylands, areas that experience significant water shortages. Scientists and engineers at The University of Texas at Austin have developed a solution that could help people in these areas access clean drinking water.

The team developed a low-cost gel film made of abundant materials that can pull water from the air in even the driest climates. The materials that facilitate this reaction cost a mere $2 per kilogram, and a single kilogram can produce more than 6 liters of water per day in areas with less than 15% relative humidity and 13 liters in areas with up to 30% relative humidity.
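The yield and cost figures quoted above are easy to put together in a back-of-envelope calculation. Here is a minimal sketch; the two humidity bands and per-kilogram numbers come straight from the press release, while the function name and the 2 kg example film are my own illustrative assumptions:

```python
# Back-of-envelope yield and material cost for the hygroscopic gel film,
# using the figures quoted in the press release:
#   ~$2 (USD) per kilogram of material,
#   ~6 L of water per kg per day below 15% relative humidity,
#   ~13 L per kg per day at up to 30% relative humidity.

COST_PER_KG_USD = 2.0

def daily_yield_liters(film_kg: float, relative_humidity_pct: float) -> float:
    """Estimate daily water yield from the film, per the release's two bands."""
    liters_per_kg = 6.0 if relative_humidity_pct < 15 else 13.0
    return film_kg * liters_per_kg

# Example: a hypothetical 2 kg film in a 25% relative-humidity climate.
film_kg = 2.0
yield_l = daily_yield_liters(film_kg, 25.0)   # 26.0 L/day
material_cost = film_kg * COST_PER_KG_USD     # $4.00 one-time material cost
print(f"{yield_l:.1f} L/day for ${material_cost:.2f} of material")
```

Even treated this loosely, the numbers make the researchers’ point: a few dollars of material could cover one person’s daily drinking water in a dry climate.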

The research builds on previous breakthroughs from the team, including the ability to pull water out of the atmosphere and the application of that technology to create self-watering soil. However, these technologies were designed for relatively high-humidity environments.

“This new work is about practical solutions that people can use to get water in the hottest, driest places on Earth,” said Guihua Yu, professor of materials science and mechanical engineering in the Cockrell School of Engineering’s Walker Department of Mechanical Engineering. “This could allow millions of people without consistent access to drinking water to have simple, water generating devices at home that they can easily operate.”

The researchers used renewable cellulose and a common kitchen ingredient, konjac gum, as the main hydrophilic (attracted to water) skeleton. The open-pore structure of the gum speeds the moisture-capturing process. Another designed component, a thermo-responsive cellulose that becomes hydrophobic (resistant to water) when heated, helps release the collected water immediately, so that the overall energy input to produce water is minimized.

Other attempts at pulling water from desert air are typically energy-intensive and do not produce much. And although 6 liters does not sound like much, the researchers say that creating thicker films or absorbent beds or arrays with optimization could drastically increase the amount of water they yield.

The reaction itself is a simple one, the researchers said, which reduces the challenges of scaling it up and achieving mass usage.

“This is not something you need an advanced degree to use,” said Youhong “Nancy” Guo, the lead author on the paper and a former doctoral student in Yu’s lab, now a postdoctoral researcher at the Massachusetts Institute of Technology. “It’s straightforward enough that anyone can make it at home if they have the materials.”

The film is flexible and can be molded into a variety of shapes and sizes, depending on the need of the user. Making the film requires only the gel precursor, which includes all the relevant ingredients poured into a mold.

“The gel simply takes 2 minutes to set. Then, it just needs to be freeze-dried, and it can be peeled off the mold and used immediately after that,” said Weixin Guan, a doctoral student on Yu’s team and a lead researcher on the work.

The research was funded by the U.S. Department of Defense’s Defense Advanced Research Projects Agency (DARPA), and drinking water for soldiers in arid climates is a big part of the project. However, the researchers also envision this as something that people could someday buy at a hardware store and use in their homes because of the simplicity.

Yu directed the project. Guo and Guan co-led experimental efforts on synthesis, characterization of the samples and device demonstration. Other team members are Chuxin Lei, Hengyi Lu and Wen Shi.

Here’s a link to and a citation for the paper,

Scalable super hygroscopic polymer films for sustainable moisture harvesting in arid environments by Youhong Guo, Weixin Guan, Chuxin Lei, Hengyi Lu, Wen Shi & Guihua Yu. Nature Communications volume 13, Article number: 2761 (2022) DOI: https://doi.org/10.1038/s41467-022-30505-2 Published: 19 May 2022

This paper is open access.

Philosophy and science in Tokyo, Japan from Dec. 1-2, 2022

I have not seen a more timely and à propos overview for a meeting/conference/congress than this one for Tokyo Forum 2022 (hosted by the University of Tokyo and South Korea’s Chey Institute for Advanced Studies),

Dialogue between Philosophy and Science: In a World Facing War, Pandemic, and Climate Change

In the face of war, a pandemic, and climate change, we cannot repeat the history of the last century, in which our ancestors headed down the road to division, global conflict, and environmental destruction.

How can we live more fully and how do we find a new common understanding about what our society should be? Tokyo Forum 2022 will tackle these questions through a series of in-depth dialogues between philosophy and science. The dialogues will weave together the latest findings and deep contemplation, and explore paths that could lead us to viable answers and solutions.

Philosophy of the 21st century must contribute to the construction of a new universality based on locality and diversity. It should be a universality that is open to co-existing with other non-human elements, such as ecosystems and nature, while severely criticizing the understanding of history that unreflectively identifies anthropocentrism with universality.

Science in the 21st century also needs to dispense with its overarching aura of supremacy and lack of self-criticism. There is a need for scientists to make efforts to demarcate their own limits. This also means reexamining what ethics means for science.

Tokyo Forum 2022 will offer multifaceted dialogues between philosophers, scientists, and scholars from various fields of study on the state and humanity in the 21st century, with a view to imagining and proposing a vision of the society we need.

Here are some details about the hybrid event from a November 4, 2022 University of Tokyo press release on EurekAlert,

The University of Tokyo and South Korea’s Chey Institute for Advanced Studies will host Tokyo Forum 2022 from Dec. 1-2, 2022. Under this year’s theme “Dialogue between Philosophy and Science,” the annual symposium will bring together philosophers, scientists and scholars in various fields from around the world for multifaceted dialogues on humanity and the state in the 21st century, while envisioning the society we need.

The event is free and open to the public, and will be held both on site at Yasuda Auditorium of the University of Tokyo and online via livestream. [emphases mine]

Keynote speakers lined up for the first day of the two-day symposium are former U.N. Secretary-General Ban Ki-moon, University of Chicago President Paul Alivisatos and Mariko Hasegawa, president of the Graduate University for Advanced Studies in Japan.

Other featured speakers on the event’s opening day include renowned modern thinker and author Professor Markus Gabriel of the University of Bonn, and physicist Hirosi Ooguri, director of the Kavli Institute for the Physics and Mathematics of the Universe at the University of Tokyo and professor at the California Institute of Technology, who are scheduled to participate in the high-level discussion on the dialogue between philosophy and science.

Columbia University Professor Jeffrey Sachs will take part in a panel discussion, also on Day 1, on tackling global environmental issues with stewardship of the global commons — the stable and resilient Earth system that sustains our lives — as a global common value.

The four panel discussions slated for Day 2 will cover the role of world philosophy in addressing the problems of a globalized world; transformative change for a sustainable future by understanding the diverse values of nature and its contributions to people; the current and future impacts of autonomous robots on society; and finding collective solutions and universal values to pursue equitable and sustainable futures for humanity by looking at interconnections among various fields of inquiry.

Opening remarks will be delivered by University of Tokyo President Teruo Fujii and South Korea’s SK Group Chairman Chey Tae-won, on Day 1. Fujii and Chey Institute President Park In-kook will make closing remarks following the wrap-up session on the second and final day.

Tokyo Forum, with its overarching theme “Shaping the Future,” has been held annually since 2019 to stimulate discussions on finding the best ideas for shaping the world and humanity in the face of complex situations where conventional wisdom can no longer provide answers.

For more information about the program and speakers of Tokyo Forum 2022, visit the event website and social media accounts:

Website: https://www.tokyoforum.tc.u-tokyo.ac.jp/en/index.html

Twitter: https://twitter.com/UTokyo_forum

Facebook: https://www.facebook.com/UTokyo.tokyo.forum/

To register, fill out the registration form on the Tokyo Forum 2022 website (registration is free but required [emphasis mine] to attend the event): https://www.tokyo-forum-form.com/apply/audiences/en

I’m not sure how they are handling languages. I’m guessing that people are speaking in the language they choose and translations (subtitles or dubbing) are available. For anyone who may have difficulty attending due to timezone issues, there are archives for previous Tokyo Forums. Presumably 2022 will be added at some point in the future.

A computer simulation inside a computer simulation?

Stumbling across an entry from the National Film Board of Canada for the Venice VR (virtual reality) Expanded section at the 77th Venice International Film Festival (September 2 to 12, 2020) and a recent Scientific American article on computer simulations provoked a memory of a passage from Frank Herbert’s 1965 novel, Dune. From an Oct. 3, 2007 posting on Equivocality; A journal of self-discovery, healing, growth, and growing pains,

Knowing where the trap is — that’s the first step in evading it. This is like single combat, Son, only on a larger scale — a feint within a feint within a feint [emphasis mine]…seemingly without end. The task is to unravel it.

—Duke Leto Atreides, Dune [Note: Dune is a 1965 science-fiction novel by US author Frank Herbert]

Now, onto what provoked memory of that phrase.

The first computer simulation “Agence”

Here’s a description of “Agence” and its creators from an August 11, 2020 Canada National Film Board (NFB) news release,

Two-time Emmy Award-winning storytelling pioneer Pietro Gagliano’s new work Agence (Transitional Forms/National Film Board of Canada) is an industry-first dynamic film that integrates cinematic storytelling, artificial intelligence, and user interactivity to create a different experience each time.

Agence is premiering in official competition in the Venice VR Expanded section at the 77th Venice International Film Festival (September 2 to 12), and accessible worldwide via the online Venice VR Expanded platform.

About the experience

Would you play god to intelligent life? Agence places the fate of artificially intelligent creatures in your hands. In their simulated universe, you have the power to observe, and to interfere. Maintain the balance of their peaceful existence or throw them into a state of chaos as you move from planet to planet. Watch closely and you’ll see them react to each other and their emerging world.

About the creators

Created by Pietro Gagliano, Agence is a co-production between his studio lab Transitional Forms and the NFB. Pietro is a pioneer of new forms of media that allow humans to understand what it means to be machine, and machines what it means to be human. Previously, Pietro co-founded digital studio Secret Location, and with his team, made history in 2015 by winning the first ever Emmy Award for a virtual reality project. His work has been recognized through hundreds of awards and nominations, including two Emmy Awards, 11 Canadian Screen Awards, 31 FWAs, two Webby Awards, a Peabody-Facebook Award, and a Cannes Lion.

Agence is produced by Casey Blustein (Transitional Forms) and David Oppenheim (NFB) and executive produced by Pietro Gagliano (Transitional Forms) and Anita Lee (NFB). 

About Transitional Forms

Transitional Forms is a studio lab focused on evolving entertainment formats through the use of artificial intelligence. Through their innovative approach to content and tool creation, their interdisciplinary team transforms valuable research into dynamic, culturally relevant experiences across a myriad of emerging platforms. Dedicated to the intersection of technology and art, Transitional Forms strives to make humans more creative, and machines more human.

About the NFB

David Oppenheim and Anita Lee’s recent VR credits also include the acclaimed virtual reality/live performance piece Draw Me Close and The Book of Distance, which premiered at the Sundance Film Festival and is in the “Best of VR” section at Venice this year. Canada’s public producer of award-winning creative documentaries, auteur animation, interactive stories and participatory experiences, the NFB has won over 7,000 awards, including 21 Webbys and 12 Academy Awards.

The line that caught my eye? “Would you play god to intelligent life?” For the curious, here’s the film’s trailer,

Now for the second computer simulation (the feint within the feint).

Are we living in a computer simulation?

According to some thinkers in the field, the chances are about 50/50 that we are living in a computer simulation, which makes “Agence” a particularly piquant experience.

An October 13, 2020 article, ‘Do We Live in a Simulation? Chances Are about 50–50,’ by Anil Ananthaswamy for Scientific American poses the question with an answer that’s unexpectedly uncertain, Note: Links have been removed,

It is not often that a comedian gives an astrophysicist goose bumps when discussing the laws of physics. But comic Chuck Nice managed to do just that in a recent episode of the podcast StarTalk. The show’s host Neil deGrasse Tyson had just explained the simulation argument—the idea that we could be virtual beings living in a computer simulation. If so, the simulation would most likely create perceptions of reality on demand rather than simulate all of reality all the time—much like a video game optimized to render only the parts of a scene visible to a player. “Maybe that’s why we can’t travel faster than the speed of light, because if we could, we’d be able to get to another galaxy,” said Nice, the show’s co-host, prompting Tyson to gleefully interrupt. “Before they can program it,” the astrophysicist said, delighting at the thought. “So the programmer put in that limit.”

Such conversations may seem flippant. But ever since Nick Bostrom of the University of Oxford wrote a seminal paper about the simulation argument in 2003, philosophers, physicists, technologists and, yes, comedians have been grappling with the idea of our reality being a simulacrum. Some have tried to identify ways in which we can discern if we are simulated beings. Others have attempted to calculate the chance of us being virtual entities. Now a new analysis shows that the odds that we are living in base reality—meaning an existence that is not simulated—are pretty much even. But the study also demonstrates that if humans were to ever develop the ability to simulate conscious beings, the chances would overwhelmingly tilt in favor of us, too, being virtual denizens inside someone else’s computer. (A caveat to that conclusion is that there is little agreement about what the term “consciousness” means, let alone how one might go about simulating it.)

In 2003 Bostrom imagined a technologically adept civilization that possesses immense computing power and needs a fraction of that power to simulate new realities with conscious beings in them. Given this scenario, his simulation argument showed that at least one proposition in the following trilemma must be true: First, humans almost always go extinct before reaching the simulation-savvy stage. Second, even if humans make it to that stage, they are unlikely to be interested in simulating their own ancestral past. And third, the probability that we are living in a simulation is close to one.
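The observer-counting logic behind that third proposition can be sketched in a few lines. This is a toy illustration only, not Bostrom’s formal argument or Kipping’s Bayesian analysis; the civilization and simulation counts are arbitrary assumptions:

```python
# Toy illustration of the simulation argument's arithmetic: if a base
# reality hosts `civs` civilizations and each runs `sims_per_civ`
# ancestor simulations containing as many conscious observers as the
# original, what fraction of all observers sits in base reality?

def base_reality_fraction(civs: int, sims_per_civ: int) -> float:
    real = civs
    simulated = civs * sims_per_civ
    return real / (real + simulated)

# With 1,000 simulations per civilization, only about 0.1% of observers
# would be un-simulated -- the intuition driving proposition three.
print(base_reality_fraction(civs=1, sims_per_civ=1000))  # 1/1001, ~0.000999
```

The sketch also shows why the trilemma has teeth: the fraction stays near one only if simulations are never run at all (propositions one or two), and collapses toward zero as soon as they are.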

Before Bostrom, the movie The Matrix had already done its part to popularize the notion of simulated realities. And the idea has deep roots in Western and Eastern philosophical traditions, from Plato’s cave allegory to Zhuang Zhou’s butterfly dream. More recently, Elon Musk gave further fuel to the concept that our reality is a simulation: “The odds that we are in base reality is one in billions,” he said at a 2016 conference.

For him [astronomer David Kipping of Columbia University], there is a more obvious answer: Occam’s razor, which says that in the absence of other evidence, the simplest explanation is more likely to be correct. The simulation hypothesis is elaborate, presuming realities nested upon realities, as well as simulated entities that can never tell that they are inside a simulation. “Because it is such an overly complicated, elaborate model in the first place, by Occam’s razor, it really should be disfavored, compared to the simple natural explanation,” Kipping says.

Maybe we are living in base reality after all—The Matrix, Musk and weird quantum physics notwithstanding.

It’s all a little mind-boggling (a computer simulation creating and playing with a computer simulation?) and I’m not sure how far I want to go in thinking about the implications (the feint within the feint within the feint). Still, it seems that the idea could be useful as a kind of thought experiment designed to have us rethink our importance in the world. Or maybe, as a way to have a laugh at our own absurdity.

Technical University of Munich: embedded ethics approach for AI (artificial intelligence) and storing a tv series in synthetic DNA

I stumbled across two news bits of interest from the Technical University of Munich in one day (Sept. 1, 2020, I think). The topics: artificial intelligence (AI) and synthetic DNA (deoxyribonucleic acid).

Embedded ethics and artificial intelligence (AI)

An August 27, 2020 Technical University of Munich (TUM) press release (also on EurekAlert but published Sept. 1, 2020) features information about a proposal to embed ethicists in with AI development teams,

The increasing use of AI (artificial intelligence) in the development of new medical technologies demands greater attention to ethical aspects. An interdisciplinary team at the Technical University of Munich (TUM) advocates the integration of ethics from the very beginning of the development process of new technologies. Alena Buyx, Professor of Ethics in Medicine and Health Technologies, explains the embedded ethics approach.

Professor Buyx, the discussions surrounding a greater emphasis on ethics in AI research have greatly intensified in recent years, to the point where one might speak of “ethics hype” …

Prof. Buyx: … and many committees in Germany and around the world such as the German Ethics Council or the EU Commission High-Level Expert Group on Artificial Intelligence have responded. They are all in agreement: We need more ethics in the development of AI-based health technologies. But how do things look in practice for engineers and designers? Concrete solutions are still few and far between. In a joint pilot project with two Integrative Research Centers at TUM, the Munich School of Robotics and Machine Intelligence (MSRM) with its director, Prof. Sami Haddadin, and the Munich Center for Technology in Society (MCTS), with Prof. Ruth Müller, we want to try out the embedded ethics approach. We published the proposal in Nature Machine Intelligence at the end of July [2020].

What exactly is meant by the “embedded ethics approach”?

Prof. Buyx: The idea is to make ethics an integral part of the research process by integrating ethicists into the AI development team from day one. For example, they attend team meetings on a regular basis and create a sort of “ethical awareness” for certain issues. They also raise and analyze specific ethical and social issues.

Is there an example of this concept in practice?

Prof. Buyx: The Geriatronics Research Center, a flagship project of the MSRM in Garmisch-Partenkirchen, is developing robot assistants to enable people to live independently in old age. The center’s initiatives will include the construction of model apartments designed to try out residential concepts where seniors share their living space with robots. At a joint meeting with the participating engineers, it was noted that the idea of using an open concept layout everywhere in the units – with few doors or individual rooms – would give the robots considerable range of motion. With the seniors, however, this living concept could prove upsetting because they are used to having private spaces. At the outset, the engineers had not given explicit consideration to this aspect.

The approach sounds promising. But how can we prevent “embedded ethics” from turning into an “ethics washing” exercise, offering companies a comforting sense of “being on the safe side” when developing new AI technologies?

Prof. Buyx: That’s not something we can be certain of avoiding. The key is mutual openness and a willingness to listen, with the goal of finding a common language – and subsequently being prepared to effectively implement the ethical aspects. At TUM we are ideally positioned to achieve this. Prof. Sami Haddadin, the director of the MSRM, is also a member of the EU High-Level Group of Artificial Intelligence. In his research, he is guided by the concept of human centered engineering. Consequently, he has supported the idea of embedded ethics from the very beginning. But one thing is certain: Embedded ethics alone will not suddenly make AI “turn ethical”. Ultimately, that will require laws, codes of conduct and possibly state incentives.

Here’s a link to and a citation for the paper espousing the embedded ethics for AI development approach,

An embedded ethics approach for AI development by Stuart McLennan, Amelia Fiske, Leo Anthony Celi, Ruth Müller, Jan Harder, Konstantin Ritt, Sami Haddadin & Alena Buyx. Nature Machine Intelligence (2020) DOI: https://doi.org/10.1038/s42256-020-0214-1 Published 31 July 2020

This paper is behind a paywall.

Religion, ethics, and AI

For some reason embedded ethics and AI got me to thinking about Pope Francis and other religious leaders.

The Roman Catholic Church and AI

There was a recent announcement that the Roman Catholic Church will be working with Microsoft and IBM on AI and ethics, from a February 28, 2020 article by Jen Copestake for British Broadcasting Corporation (BBC) news online (Note: A link has been removed),

Leaders from the two tech giants met senior church officials in Rome, and agreed to collaborate on “human-centred” ways of designing AI.

Microsoft president Brad Smith admitted some people may “think of us as strange bedfellows” at the signing event.

“But I think the world needs people from different places to come together,” he said.

The call was supported by Pope Francis, in his first detailed remarks about the impact of artificial intelligence on humanity.

The Rome Call for Ethics [sic] was co-signed by Mr Smith, IBM executive vice-president John Kelly and president of the Pontifical Academy for Life Archbishop Vincenzo Paglia.

It puts humans at the centre of new technologies, asking for AI to be designed with a focus on the good of the environment and “our common and shared home and of its human inhabitants”.

Framing the current era as a “renAIssance”, the speakers said the invention of artificial intelligence would be as significant to human development as the invention of the printing press or combustion engine.

UN Food and Agricultural Organization director Qu Dongyu and Italy’s technology minister Paola Pisano were also co-signatories.

Hannah Brockhaus’s February 28, 2020 article for the Catholic News Agency provides some details missing from the BBC report, and I found it quite helpful when trying to understand the various pieces that make up this initiative,

The Pontifical Academy for Life signed Friday [February 28, 2020], alongside presidents of IBM and Microsoft, a call for ethical and responsible use of artificial intelligence technologies.

According to the document, “the sponsors of the call express their desire to work together, in this context and at a national and international level, to promote ‘algor-ethics.’”

“Algor-ethics,” according to the text, is the ethical use of artificial intelligence according to the principles of transparency, inclusion, responsibility, impartiality, reliability, security, and privacy.

The signing of the “Rome Call for AI Ethics [PDF]” took place as part of the 2020 assembly of the Pontifical Academy for Life, which was held Feb. 26-28 [2020] on the theme of artificial intelligence.

One part of the assembly was dedicated to private meetings of the academics of the Pontifical Academy for Life. The second was a workshop on AI and ethics that drew 356 participants from 41 countries.

On the morning of Feb. 28 [2020], a public event took place called “renAIssance. For a Humanistic Artificial Intelligence” and included the signing of the AI document by Microsoft President Brad Smith, and IBM Executive Vice-president John Kelly III.

The Director General of FAO, Dongyu Qu, and politician Paola Pisano, representing the Italian government, also signed.

The president of the European Parliament, David Sassoli, was also present Feb. 28.

Pope Francis canceled his scheduled appearance at the event due to feeling unwell. His prepared remarks were read by Archbishop Vincenzo Paglia, president of the Academy for Life.

You can find Pope Francis’s comments about the document here (if you’re not comfortable reading Italian, hopefully, the English translation which follows directly afterward will be helpful). The Pope’s AI initiative has a dedicated website, Rome Call for AI ethics, and while most of the material dates from the February 2020 announcement, they are keeping up a blog. It has two entries, one dated in May 2020 and another in September 2020.

Buddhism and AI

The Dalai Lama is well known for having an interest in science and having hosted scientists for various dialogues. So, I was able to track down a November 10, 2016 article by Ariel Conn for the futureoflife.org website, which features his insights on the matter,

The question of what it means and what it takes to feel needed is an important problem for ethicists and philosophers, but it may be just as important for AI researchers to consider. The Dalai Lama argues that lack of meaning and purpose in one’s work increases frustration and dissatisfaction among even those who are gainfully employed.

“The problem,” says the Dalai Lama, “is … the growing number of people who feel they are no longer useful, no longer needed, no longer one with their societies. … Feeling superfluous is a blow to the human spirit. It leads to social isolation and emotional pain, and creates the conditions for negative emotions to take root.”

If feeling needed and feeling useful are necessary for happiness, then AI researchers may face a conundrum. Many researchers hope that job loss due to artificial intelligence and automation could, in the end, provide people with more leisure time to pursue enjoyable activities. But if the key to happiness is feeling useful and needed, then a society without work could be just as emotionally challenging as today’s career-based societies, and possibly worse.

I also found a talk on the topic by The Venerable Tenzin Priyadarshi. First, here’s a description from his bio at the Dalai Lama Center for Ethics and Transformative Values webspace on the Massachusetts Institute of Technology (MIT) website,

… an innovative thinker, philosopher, educator and a polymath monk. He is Director of the Ethics Initiative at the MIT Media Lab and President & CEO of The Dalai Lama Center for Ethics and Transformative Values at the Massachusetts Institute of Technology. Venerable Tenzin’s unusual background encompasses entering a Buddhist monastery at the age of ten and receiving graduate education at Harvard University with degrees ranging from Philosophy to Physics to International Relations. He is a Tribeca Disruptive Fellow and a Fellow at the Center for Advanced Study in Behavioral Sciences at Stanford University. Venerable Tenzin serves on the boards of a number of academic, humanitarian, and religious organizations. He is the recipient of several recognitions and awards and received Harvard’s Distinguished Alumni Honors for his visionary contributions to humanity.

He gave the 2018 Roger W. Heyns Lecture in Religion and Society at Stanford University on the topic, “Religious and Ethical Dimensions of Artificial Intelligence.” The video runs over one hour but he is a sprightly speaker (in comparison to other Buddhist speakers I’ve listened to over the years).

Judaism, Islam, and other Abrahamic faiths examine AI and ethics

I was delighted to find this January 30, 2020 Artificial Intelligence: Implications for Ethics and Religion event as it brought together a range of thinkers from various faiths and disciplines,

New technologies are transforming our world every day, and the pace of change is only accelerating.  In coming years, human beings will create machines capable of out-thinking us and potentially taking on such uniquely-human traits as empathy, ethical reasoning, perhaps even consciousness.  This will have profound implications for virtually every human activity, as well as the meaning we impart to life and creation themselves.  This conference will provide an introduction for non-specialists to Artificial Intelligence (AI):

What is it?  What can it do and be used for?  And what will be its implications for choice and free will; economics and worklife; surveillance economies and surveillance states; the changing nature of facts and truth; and the comparative intelligence and capabilities of humans and machines in the future? 

Leading practitioners, ethicists and theologians will provide cross-disciplinary and cross-denominational perspectives on such challenges as technology addiction, inherent biases and resulting inequalities, the ethics of creating destructive technologies and of turning decision-making over to machines from self-driving cars to “autonomous weapons” systems in warfare, and how we should treat the suffering of “feeling” machines.  The conference ultimately will address how we think about our place in the universe and what this means for both religious thought and theological institutions themselves.

UTS [Union Theological Seminary] is the oldest independent seminary in the United States and has long been known as a bastion of progressive Christian scholarship.  JTS [Jewish Theological Seminary] is one of the academic and spiritual centers of Conservative Judaism and a major center for academic scholarship in Jewish studies. The Riverside Church is an interdenominational, interracial, international, open, welcoming, and affirming church and congregation that has served as a focal point of global and national activism for peace and social justice since its inception and continues to serve God through word and public witness. The annual Greater Good Gathering, the following week at Columbia University’s School of International & Public Affairs, focuses on how technology is changing society, politics and the economy – part of a growing nationwide effort to advance conversations promoting the “greater good.”

They have embedded a video of the event (it runs a little over seven hours) on the January 30, 2020 Artificial Intelligence: Implications for Ethics and Religion event page. For anyone who finds that a daunting amount of information, you may want to check out the speaker list for ideas about who might be writing and thinking on this topic.

As for Islam, I did track down this November 29, 2018 article by Shahino Mah Abdullah, a fellow at the Institute of Advanced Islamic Studies (IAIS) Malaysia,

As the global community continues to work together on the ethics of AI, there are still vast opportunities to offer ethical inputs, including the ethical principles based on Islamic teachings.

This is in line with Islam’s encouragement for its believers to convey beneficial messages, including to share its ethical principles with society.

In Islam, ethics or akhlak (virtuous character traits) in Arabic, is sometimes employed interchangeably in the Arabic language with adab, which means the manner, attitude, behaviour, and etiquette of putting things in their proper places. Islamic ethics cover all the legal concepts ranging from syariah (Islamic law), fiqh (jurisprudence), qanun (ordinance), and ‘urf (customary practices).

Adopting and applying moral values based on the Islamic ethical concept or applied Islamic ethics could be a way to address various issues in today’s societies.

At the same time, this approach is in line with the higher objectives of syariah (maqasid al-syariah) that is aimed at conserving human benefit by the protection of human values, including faith (hifz al-din), life (hifz al-nafs), lineage (hifz al-nasl), intellect (hifz al-‘aql), and property (hifz al-mal). This approach could be very helpful to address contemporary issues, including those related to the rise of AI and intelligent robots.

…

Part of the difficulty with tracking down more about AI, ethics, and various religions is linguistic. I simply don’t have the language skills to search for the commentaries and, even in English, I may not have the best or most appropriate search terms.

Television (TV) episodes stored on DNA?

According to a Sept. 1, 2020 news item on Nanowerk, the first episode of the TV series ‘Biohackers’ has been stored on synthetic DNA (deoxyribonucleic acid) by a researcher at the Technical University of Munich (TUM) and a colleague at ETH Zürich,

The first episode of the newly released series “Biohackers” was stored in the form of synthetic DNA. This was made possible by the research of Prof. Reinhard Heckel of the Technical University of Munich (TUM) and his colleague Prof. Robert Grass of ETH Zürich.

They have developed a method that permits the stable storage of large quantities of data on DNA for over 1000 years.

A Sept. 1, 2020 TUM press release, which originated the news item, proceeds with more detail in an interview format,

Prof. Heckel, Biohackers is about a medical student seeking revenge on a professor with a dark past – and the manipulation of DNA with biotechnology tools. You were commissioned to store the series on DNA. How does that work?

First, I should mention that what we’re talking about is artificially generated – in other words, synthetic – DNA. DNA consists of four building blocks: the nucleotides adenine (A), thymine (T), guanine (G) and cytosine (C). Computer data, meanwhile, are coded as zeros and ones. The first episode of Biohackers consists of a sequence of around 600 million zeros and ones. To code the sequence 01 01 11 00 in DNA, for example, we decide which number combinations will correspond to which letters. For example: 00 is A, 01 is C, 10 is G and 11 is T. Our example then produces the DNA sequence CCTA. Using this principle of DNA data storage, we have stored the first episode of the series on DNA.
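The two-bits-per-base mapping Heckel describes can be sketched in a few lines of Python (a minimal illustration of the principle only, not the researchers’ actual encoding pipeline):

```python
# Map each pair of bits to one DNA base, as in the interview's example:
# 00 -> A, 01 -> C, 10 -> G, 11 -> T
BIT_PAIR_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BIT_PAIR = {base: pair for pair, base in BIT_PAIR_TO_BASE.items()}

def bits_to_dna(bits: str) -> str:
    """Encode a binary string (even length) as a DNA base sequence."""
    return "".join(BIT_PAIR_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_to_bits(seq: str) -> str:
    """Decode a DNA base sequence back into the original binary string."""
    return "".join(BASE_TO_BIT_PAIR[base] for base in seq)

print(bits_to_dna("01011100"))  # -> CCTA, matching the example in the interview
```

Running the sequence 01 01 11 00 through the encoder reproduces the CCTA sequence from the interview, and decoding CCTA returns the original bits.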

And to view the series – is it just a matter of “reverse translation” of the letters?

In a very simplified sense, you can visualize it like that. When writing, storing and reading the DNA, however, errors occur. If these errors are not corrected, the data stored on the DNA will be lost. To solve the problem, I have developed an algorithm based on channel coding. This method involves correcting errors that take place during information transfers. The underlying idea is to add redundancy to the data. Think of language: When we read or hear a word with missing or incorrect letters, the computing power of our brain is still capable of understanding the word. The algorithm follows the same principle: It encodes the data with sufficient redundancy to ensure that even highly inaccurate data can be restored later.

Channel coding is used in many fields, including in telecommunications. What challenges did you face when developing your solution?

The first challenge was to create an algorithm specifically geared to the errors that occur in DNA. The second one was to make the algorithm so efficient that the largest possible quantities of data can be stored on the smallest possible quantity of DNA, so that only the absolutely necessary amount of redundancy is added. We demonstrated that our algorithm is optimized in that sense.

DNA data storage is very expensive because of the complexity of DNA production as well as the reading process. What makes DNA an attractive storage medium despite these challenges?

First, DNA has a very high information density. This permits the storage of enormous data volumes in a minimal space. In the case of the TV series, we stored “only” 100 megabytes on a picogram – or a billionth of a gram of DNA. Theoretically, however, it would be possible to store up to 200 exabytes on one gram of DNA. And DNA lasts a long time. By comparison: If you never turned on your PC or wrote data to the hard disk it contains, the data would disappear after a couple of years. By contrast, DNA can remain stable for many thousands of years if it is packed right.
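The two figures quoted are consistent with each other: a back-of-the-envelope check (using the press release’s numbers and decimal megabytes/exabytes) shows that 100 megabytes per picogram scales to exactly 100 exabytes per gram, in the same range as the stated theoretical maximum of 200 exabytes:

```python
MB = 10**6                  # bytes in a (decimal) megabyte
EB = 10**18                 # bytes in a (decimal) exabyte
PICOGRAMS_PER_GRAM = 10**12

stored_per_picogram = 100 * MB  # what was stored for the TV episode
stored_per_gram = stored_per_picogram * PICOGRAMS_PER_GRAM

print(stored_per_gram / EB)  # -> 100.0 exabytes per gram at the demonstrated density
```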

And the method you have developed also makes the DNA strands durable – practically indestructible.

My colleague Robert Grass was the first to develop a process for the “stable packing” of DNA strands by encapsulating them in nanometer-scale spheres made of silica glass. This ensures that the DNA is protected against mechanical influences. In a joint paper in 2015, we presented the first robust DNA data storage concept with our algorithm and the encapsulation process developed by Prof. Grass. Since then we have continuously improved our method. In our most recent publication in Nature Protocols of January 2020, we passed on what we have learned.

What are your next steps? Does data storage on DNA have a future?

We’re working on a way to make DNA data storage cheaper and faster. “Biohackers” was a milestone en route to commercialization. But we still have a long way to go. If this technology proves successful, big things will be possible. Entire libraries, all movies, photos, music and knowledge of every kind – provided it can be represented in the form of data – could be stored on DNA and would thus be available to humanity for eternity.

Here’s a link to and a citation for the paper,

Reading and writing digital data in DNA by Linda C. Meiser, Philipp L. Antkowiak, Julian Koch, Weida D. Chen, A. Xavier Kohll, Wendelin J. Stark, Reinhard Heckel & Robert N. Grass. Nature Protocols volume 15, pages 86–101 (2020). DOI: https://doi.org/10.1038/s41596-019-0244-5 Published online 29 November 2019

This paper is behind a paywall.

As for ‘Biohackers’, it’s a German science fiction television series and you can find out more about it here on the Internet Movie Database.

Bringing a technique from astronomy down to the nanoscale

A January 2, 2020 Columbia University news release on EurekAlert (also on phys.org but published Jan. 3, 2020) describes research that takes the inter-galactic down to the quantum level,

Researchers at Columbia University and University of California, San Diego, have introduced a novel “multi-messenger” approach to quantum physics that signifies a technological leap in how scientists can explore quantum materials.

The findings appear in a recent article published in Nature Materials, led by A. S. McLeod, postdoctoral researcher, Columbia Nano Initiative, with co-authors Dmitri Basov and A. J. Millis at Columbia and R.A. Averitt at UC San Diego.

“We have brought a technique from the inter-galactic scale down to the realm of the ultra-small,” said Basov, Higgins Professor of Physics and Director of the Energy Frontier Research Center at Columbia. “Equipped with multi-modal nanoscience tools we can now routinely go places no one thought would be possible as recently as five years ago.”

The work was inspired by “multi-messenger” astrophysics, which emerged during the last decade as a revolutionary technique for the study of distant phenomena like black hole mergers. Simultaneous measurements from instruments, including infrared, optical, X-ray and gravitational-wave telescopes can, taken together, deliver a physical picture greater than the sum of their individual parts.

The search is on for new materials that can supplement the current reliance on electronic semiconductors. Control over material properties using light can offer improved functionality, speed, flexibility and energy efficiency for next-generation computing platforms.

Experimental papers on quantum materials have typically reported results obtained by using only one type of spectroscopy. The researchers have shown the power of using a combination of measurement techniques to simultaneously examine electrical and optical properties.

The researchers performed their experiment by focusing laser light onto the sharp tip of a needle probe coated with magnetic material. When thin films of metal oxide are subject to a unique strain, ultra-fast light pulses can trigger the material to switch into an unexplored phase of nanometer-scale domains, and the change is reversible.

By scanning the probe over the surface of their thin film sample, the researchers were able to trigger the change locally and simultaneously manipulate and record the electrical, magnetic and optical properties of these light-triggered domains with nanometer-scale precision.

The study reveals how unanticipated properties can emerge in long-studied quantum materials at ultra-small scales when scientists tune them by strain.

“It is relatively common to study these nano-phase materials with scanning probes. But this is the first time an optical nano-probe has been combined with simultaneous magnetic nano-imaging, and all at the very low temperatures where quantum materials show their merits,” McLeod said. “Now, investigation of quantum materials by multi-modal nanoscience offers a means to close the loop on programs to engineer them.”

The excitement is palpable.

Caption: The discovery of multi-messenger nanoprobes allows scientists to simultaneously probe multiple properties of quantum materials at nanometer-scale spatial resolutions. Credit: Ella Maru Studio

Here’s a link to and a citation for the paper,

Multi-messenger nanoprobes of hidden magnetism in a strained manganite by A. S. McLeod, Jingdi Zhang, M. Q. Gu, F. Jin, G. Zhang, K. W. Post, X. G. Zhao, A. J. Millis, W. B. Wu, J. M. Rondinelli, R. D. Averitt & D. N. Basov. Nature Materials (2019). DOI: https://doi.org/10.1038/s41563-019-0533-y Published 16 December 2019

This paper is behind a paywall.

Soft things for your brain

A March 5, 2018 news item on Nanowerk describes the latest stretchable electrode (Note: A link has been removed),

Klas Tybrandt, principal investigator at the Laboratory of Organic Electronics at Linköping University [Sweden], has developed new technology for long-term stable neural recording. It is based on a novel elastic material composite, which is biocompatible and retains high electrical conductivity even when stretched to double its original length.

The result has been achieved in collaboration with colleagues in Zürich and New York. The breakthrough, which is crucial for many applications in biomedical engineering, is described in an article published in the prestigious scientific journal Advanced Materials (“High-Density Stretchable Electrode Grids for Chronic Neural Recording”).

A March 5, 2018 Linköping University press release, which originated the news item, gives more detail (you can find additional details in the abstract for the paper; link and citation will be provided later in this posting),

The coupling between electronic components and nerve cells is crucial not only to collect information about cell signalling, but also to diagnose and treat neurological disorders and diseases, such as epilepsy.

It is very challenging to achieve long-term stable connections that do not damage neurons or tissue, since the two systems, the soft and elastic tissue of the body and the hard and rigid electronic components, have completely different mechanical properties.

Caption: The soft electrode stretched to twice its length. Photo credit: Thor Balkhed

“As human tissue is elastic and mobile, damage and inflammation arise at the interface with rigid electronic components. It not only causes damage to tissue; it also attenuates neural signals,” says Klas Tybrandt, leader of the Soft Electronics group at the Laboratory of Organic Electronics, Linköping University, Campus Norrköping.

New conductive material

Klas Tybrandt has developed a new conductive material that is as soft as human tissue and can be stretched to twice its length. The material consists of gold coated titanium dioxide nanowires, embedded into silicone rubber. The material is biocompatible – which means it can be in contact with the body without adverse effects – and its conductivity remains stable over time.

“The microfabrication of soft electrically conductive composites involves several challenges. We have developed a process to manufacture small electrodes that also preserves the biocompatibility of the materials. The process uses very little material, and this means that we can work with a relatively expensive material such as gold, without the cost becoming prohibitive,” says Klas Tybrandt.

The electrodes are 50 µm [microns or micrometres] in size and are located at a distance of 200 µm from each other. The fabrication procedure allows 32 electrodes to be placed onto a very small surface. The final probe, shown in the photograph, has a width of 3.2 mm and a thickness of 80 µm.

The soft microelectrodes have been developed at Linköping University and ETH Zürich, and researchers at New York University and Columbia University have subsequently implanted them in the brain of rats. The researchers were able to collect high-quality neural signals from the freely moving rats for 3 months. The experiments have been subject to ethical review, and have followed the strict regulations that govern animal experiments.

Important future applications

Caption: Klas Tybrandt, researcher at the Laboratory of Organic Electronics. Photo credit: Thor Balkhed

“When the neurons in the brain transmit signals, a voltage is formed that the electrodes detect and transmit onwards through a tiny amplifier. We can also see which electrodes the signals came from, which means that we can estimate the location in the brain where the signals originated. This type of spatiotemporal information is important for future applications. We hope to be able to see, for example, where the signal that causes an epileptic seizure starts, a prerequisite for treating it. Another area of application is brain-machine interfaces, by which future technology and prostheses can be controlled with the aid of neural signals. There are also many interesting applications involving the peripheral nervous system in the body and the way it regulates various organs,” says Klas Tybrandt.

The breakthrough is the foundation of the research area Soft Electronics, currently being established at Linköping University, with Klas Tybrandt as principal investigator.
liu.se/soft-electronics

A video has been made available (Note: For those who find any notion of animal testing disturbing; don’t watch the video even though it is an animation and does not feature live animals),

Here’s a link to and a citation for the paper,

High-Density Stretchable Electrode Grids for Chronic Neural Recording by Klas Tybrandt, Dion Khodagholy, Bernd Dielacher, Flurin Stauffer, Aline F. Renz, György Buzsáki, and János Vörös. Advanced Materials 2018. DOI: https://doi.org/10.1002/adma.201706520 First published 28 February 2018

This paper is open access.

Narrating neuroscience in Toronto (Canada) on Oct. 20, 2017 and knitting a neuron

What is it with the Canadian neuroscience community? First, there’s The Beautiful Brain, an exhibition of the extraordinary drawings of Santiago Ramón y Cajal (1852–1934) at the Belkin Gallery on the University of British Columbia (UBC) campus in Vancouver and a series of events marking the exhibition (for more see my Sept. 11, 2017 posting; scroll down about 30% for information about the drawings and the events still to come).

I guess there must be some money floating around for raising public awareness because now there’s a neuroscience and ‘storytelling’ event (Narrating Neuroscience) in Toronto, Canada. From a Sept. 25, 2017 ArtSci Salon announcement (received via email),

With NARRATING NEUROSCIENCE we plan to initiate a discussion on the role and the use of storytelling and art (both in verbal and visual forms) to communicate abstract and complex concepts in neuroscience to very different audiences, ranging from fellow scientists, clinicians and patients, to social scientists and the general public. We invited four guests to share their research through case studies and experiences stemming directly from their research or from other practices they have adopted and incorporated into their research, where storytelling and the arts have played a crucial role not only in communicating cutting edge research in neuroscience, but also in developing and advancing it.

OUR GUESTS

MATTEO FARINELLA, PhD, Presidential Scholar in Society and Neuroscience – Columbia University

SHELLEY WALL, AOCAD, MSc, PhD – Assistant professor, Biomedical Communications Graduate Program and Department of Biology, UTM

ALFONSO FASANO, MD, PhD, Associate Professor – University of Toronto Clinician Investigator – Krembil Research Institute Movement Disorders Centre – Toronto Western Hospital

TAHANI BAAKDHAH, MD, MSc, PhD candidate – University of Toronto

DATE: October 20, 2017
TIME: 6:00-8:00 pm
LOCATION: The Fields Institute for Research in Mathematical Sciences
222 College Street, Toronto, ON

Events Facilitators: Roberta Buiani and Stephen Morris (ArtSci Salon) and Nina Czegledy (Leonardo Network)

TAHANI BAAKDHAH is a PhD student at the University of Toronto studying how the stem cells built our retina during development, the mechanism by which the light sensing cells inside the eye enable us to see this beautiful world and how we can regenerate these cells in case of disease or injury.

MATTEO FARINELLA combines a background in neuroscience with a lifelong passion for drawing, making comics and illustrations about the brain. He is the author of _Neurocomic_ (Nobrow 2013) published with the support of the Wellcome Trust, _Cervellopoli_ (Editoriale Scienza 2017) and he has collaborated with universities and educational institutions around the world to make science more clear and accessible. In 2016 Matteo joined Columbia University as a Presidential Scholar in Society and Neuroscience, where he investigates the role of visual narratives in science communication. Working with science journalists, educators and cognitive neuroscientists he aims to understand how these tools may affect the public perception of science and increase scientific literacy (cartoonscience.org).

ALFONSO FASANO graduated from the Catholic University of Rome, Italy, in 2002 and became a neurologist in 2007. After a 2-year fellowship at the University of Kiel, Germany, he completed a PhD in neuroscience at the Catholic University of Rome. In 2013 he joined the Movement Disorder Centre at Toronto Western Hospital, where he is the co-director of the surgical program for movement disorders. He is also an associate professor of medicine in the Division of Neurology at the University of Toronto and clinician investigator at the Krembil Research Institute. Dr. Fasano’s main areas of interest are the treatment of movement disorders with advanced technology (infusion pumps and neuromodulation), pathophysiology and treatment of tremor and gait disorders. He is author of more than 170 papers and book chapters. He is principal investigator of several clinical trials.

SHELLEY WALL is an assistant professor in the University of Toronto’s Biomedical Communications graduate program, a certified medical illustrator, and inaugural Illustrator-in-Residence in the Faculty of Medicine, University of Toronto. One of her primary areas of research, teaching, and creation is graphic medicine—the intersection of comics with illness, medicine, and caregiving—and one of her ongoing projects is a series of comics about caregiving and young onset Parkinson’s disease.

You can register for this free Toronto event here.

One brief observation: there aren’t any writers (other than academics) or storytellers included in this ‘storytelling’ event. The ‘storytelling’ being featured is visual. To be blunt, I’m not of the ‘one picture is worth a thousand words’ school of thinking (see my Feb. 22, 2011 posting). Yes, sometimes pictures are all you need, but that tiresome aphorism, which suggests communication can be reduced to one means of communication, really needs to be retired. As for academic writing, it’s not noted for its storytelling qualities or experimentation. Academics are not judged on their writing or storytelling skills, although there are some who are very good.

Getting back to the Toronto event, they seem to have the visual part of their focus ” … discussion on the role and the use of storytelling and art (both in verbal and visual forms) … ” covered. Having recently attended a somewhat similar event in Vancouver, which was announced in my Sept. 11, 2017 posting, there were some exciting images and ideas presented.

The ArtSci Salon folks also announced this (from the Sept. 25, 2017 ArtSci Salon announcement; received via email),

ATTENTION ARTSCI SALONISTAS AND FANS OF ART AND SCIENCE!!
CALL FOR KNITTING AND CROCHET LOVERS!

In addition to being a PhD student at the University of Toronto, Tahani Baakdhah is a prolific knitter and crocheter and has been the motor behind two successful Knit-a-Neuron Toronto initiatives. We invite all Knitters and Crocheters among our ArtSci Salonistas to pick a pattern (link below) and knit a neuron (or 2! Or as many as you want!!)

http://bit.ly/2y05hRR

BRING THEM TO OUR OCTOBER 20 ARTSCI SALON!
Come to the ArtSci Salon and knit there!
You can’t come?
Share a picture with @ArtSci_Salon @SciCommTO #KnitANeuronTO on social media
Or…Drop us a line at artscisalon@gmail.com !

I think it’s been a few years since my last science knitting post. No, it was Oct. 18, 2016. Moving on, I found more neuron knitting while researching this piece. Here’s the Neural Knitworks group, which is part of Australia’s National Science Week (11-19 August 2018) initiative (from the Neural Knitworks webpage),

Neural Knitworks is a collaborative project about mind and brain health.

Whether you’re a whiz with yarn, or just discovering the joy of craft, now you can crochet wrap, knit or knot—and find out about neuroscience.

During 2014 an enormous number of handmade neurons were donated (1665 in total!) and used to build a giant walk-in brain, as seen here at Hazelhurst Gallery [scroll to end of this post]. Since then Neural Knitworks have been held in dozens of communities across Australia, with installations created in Queensland, the ACT, Singapore, as part of the Cambridge Science Festival in the UK and in Philadelphia, USA.

In 2017, the Neural Knitworks team again invites you to host your own home-grown Neural Knitwork for National Science Week*. Together we’ll create a giant ‘virtual’ neural network by linking your displays visually online.

* If you wish to host a Neural Knitwork event outside of National Science Week or internationally we ask that you contact us to seek permission to use the material, particularly if you intend to create derivative works or would like to exhibit the giant brain. Please outline your plans in an email.

Your creation can be big or small, part of a formal display, or simply consist of neighbourhood neuron ‘yarn-bombings’. Knitworks can be created at home, at work or at school. No knitting experience is required and all ages can participate.

See below for how to register your event and download our scientifically informed patterns.

What is a neuron?

Neurons are electrically excitable cells of the brain, spinal cord and peripheral nerves. The billions of neurons in your body connect to each other in neural networks. They receive signals from every sense, control movement, create memories, and form the neural basis of every thought.

Check out the neuron microscopy gallery for some real-world inspiration.

What happens at a Neural Knitwork?

Neural Knitworks are based on the principle that yarn craft, with its mental challenges, social connection and mindfulness, helps keep our brains and minds sharp, engaged and healthy.

Have fun as you

  • design your own woolly neurons, or get inspired by our scientifically-informed knitting, crochet or knot patterns;
  • natter with neuroscientists and teach them a few of your crafty tricks;
  • contribute to a travelling textile brain exhibition;
  • increase your attention span and test your memory.

Calm your mind and craft your own brain health as you

  • forge friendships;
  • solve creative and mental challenges;
  • practice mindfulness and relaxation;
  • teach and learn;
  • develop eye-hand coordination and fine motor dexterity.

Interested in hosting a Neural Knitwork?

  1. Log your event on the National Science Week calendar to take advantage of multi-channel promotion.
  2. Share the link for this Neural Knitwork page on your own website or online newsletter and add your own event details.
  3. Use this flyer template (2.5 MB .docx) to promote your event in local shop windows and on noticeboards.
  4. Read our event organisers toolbox for tips on hosting a successful event.
  5. You’ll need plenty of yarn, needles, copies of our scientifically-based neuron crafting pattern books (3.4 MB PDF) and a comfy spot in which to create.
  6. Gather together a group of friends who knit, crochet, design, spin, weave and anyone keen to give it a go. Those who know how to knit can teach others how to do it, and there’s even an easy no knit pattern that you can knot.
  7. Download a neuroscience podcast to listen to, and you’ve got a Neural Knitwork!
  8. Join the Neural Knitworks community on Facebook to share and find information about events including public talks featuring neuroscientists.
  9. Tweet #neuralknitworks to show us your creations.
  10. Find display ideas in the pattern book and on our Facebook page.

Finally, the knitted neurons from Australia’s 2014 National Science Week brain exhibit,

[downloaded from https://www.scienceweek.net.au/neural-knitworks/]

ETA Oct. 24, 2017: If you’re interested on how the talk was received, there’s an Oct. 24, 2017 posting by Magosia Pakulska for the Research2Reality blog.