Tag Archives: Columbia University

A robot with body image and self-awareness

This research is a rather interesting direction for robotics to take (from a July 13, 2022 news item on ScienceDaily),

As every athletic or fashion-conscious person knows, our body image is not always accurate or realistic, but it’s an important piece of information that determines how we function in the world. When you get dressed or play ball, your brain is constantly planning ahead so that you can move your body without bumping, tripping, or falling over.

We humans acquire our body-model as infants, and robots are following suit. A Columbia Engineering team announced today they have created a robot that — for the first time — is able to learn a model of its entire body from scratch, without any human assistance. In a new study published in Science Robotics, the researchers demonstrate how their robot created a kinematic model of itself, and then used its self-model to plan motion, reach goals, and avoid obstacles in a variety of situations. It even automatically recognized and then compensated for damage to its body.

Courtesy Columbia University School of Engineering and Applied Science

A July 13, 2022 Columbia University news release by Holly Evarts (also on EurekAlert), which originated the news item, describes the research in more detail, Note: Links have been removed,

Robot watches itself like an infant exploring itself in a hall of mirrors

The researchers placed a robotic arm inside a circle of five streaming video cameras. The robot watched itself through the cameras as it undulated freely. Like an infant exploring itself for the first time in a hall of mirrors, the robot wiggled and contorted to learn how exactly its body moved in response to various motor commands. After about three hours, the robot stopped. Its internal deep neural network had finished learning the relationship between the robot’s motor actions and the volume it occupied in its environment. 

“We were really curious to see how the robot imagined itself,” said Hod Lipson, professor of mechanical engineering and director of Columbia’s Creative Machines Lab, where the work was done. “But you can’t just peek into a neural network, it’s a black box.” After the researchers struggled with various visualization techniques, the self-image gradually emerged. “It was a sort of gently flickering cloud that appeared to engulf the robot’s three-dimensional body,” said Lipson. “As the robot moved, the flickering cloud gently followed it.” The robot’s self-model was accurate to about 1% of its workspace.

Self-modeling robots will lead to more self-reliant autonomous systems

The ability of robots to model themselves without being assisted by engineers is important for many reasons: Not only does it save labor, but it also allows the robot to keep up with its own wear-and-tear, and even detect and compensate for damage. The authors argue that this ability is important as we need autonomous systems to be more self-reliant. A factory robot, for instance, could detect that something isn’t moving right, and compensate or call for assistance.

“We humans clearly have a notion of self,” explained the study’s first author Boyuan Chen, who led the work and is now an assistant professor at Duke University. “Close your eyes and try to imagine how your own body would move if you were to take some action, such as stretch your arms forward or take a step backward. Somewhere inside our brain we have a notion of self, a self-model that informs us what volume of our immediate surroundings we occupy, and how that volume changes as we move.”

Self-awareness in robots

The work is part of Lipson’s decades-long quest to find ways to grant robots some form of self-awareness.  “Self-modeling is a primitive form of self-awareness,” he explained. “If a robot, animal, or human, has an accurate self-model, it can function better in the world, it can make better decisions, and it has an evolutionary advantage.” 

The researchers are aware of the limits, risks, and controversies surrounding granting machines greater autonomy through self-awareness. Lipson is quick to admit that the kind of self-awareness demonstrated in this study is, as he noted, “trivial compared to that of humans, but you have to start somewhere. We have to go slowly and carefully, so we can reap the benefits while minimizing the risks.”  
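For readers who like to see the machinery, here is a minimal sketch of what “learning the relationship between the robot’s motor actions and the volume it occupied” could look like in code. It assumes an implicit, occupancy-style network (joint angles plus a 3D query point go in; the probability that the point lies inside the robot’s body comes out). The class name, layer sizes, and stand-in training data are my own illustrative choices, not the Columbia team’s published architecture.

```python
# Hypothetical sketch of an occupancy-style self-model (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfModel(nn.Module):
    def __init__(self, num_joints: int = 4, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_joints + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # logit for "this point is inside the body"
        )

    def forward(self, joint_angles: torch.Tensor, points: torch.Tensor) -> torch.Tensor:
        # joint_angles: (batch, num_joints); points: (batch, 3) query coordinates
        return torch.sigmoid(self.net(torch.cat([joint_angles, points], dim=-1)))

# In the real experiment, occupancy labels would come from the five camera views;
# random stand-in data is used here just to show the shape of the training loop.
model = SelfModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
joint_angles = torch.rand(64, 4) * 3.14                     # fake motor commands
points = torch.rand(64, 3) * 2.0 - 1.0                      # query points in a unit workspace
labels = (points.norm(dim=-1, keepdim=True) < 0.5).float()  # fake occupancy labels

loss = F.binary_cross_entropy(model(joint_angles, points), labels)
loss.backward()
optimizer.step()  # one step of the robot "learning its own body"
```

Once a network like this is trained, sweeping query points through the workspace for a fixed set of joint angles would produce something like the “gently flickering cloud” Lipson describes.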

Here’s a link to and a citation for the paper,

Fully body visual self-modeling of robot morphologies by Boyuan Chen, Robert Kwiatkowski, Carl Vondrick and Hod Lipson. Science Robotics 13 Jul 2022 Vol 7, Issue 68 DOI: 10.1126/scirobotics.abn1944

This paper is behind a paywall.

If you follow the link to the July 13, 2022 Columbia University news release, you’ll find an approximately 25 min. video of Hod Lipson showing you how they did it. As Lipson notes, discussion of self-awareness and sentience is not found in robotics programmes. Plus, there are more details and links if you follow the EurekAlert link.

Pulling water from the air

Adele Peters’ May 27, 2022 article for Fast Company describes some research into harvesting water from the air (Note: Links have been removed),

In Ethiopia, where an ongoing drought is the worst in 40 years, getting drinking water for the day can involve walking for eight hours. Some wells are drying up. As climate change progresses, water scarcity keeps getting worse. But new technology in development at the University of Texas at Austin could help: Using simple, low-cost materials, it harvests water from the air, even in the driest climates.

“The advantage of taking water moisture from the air is that it’s not limited geographically,” says Youhong “Nancy” Guo, lead author of a new study in Nature Communications that describes the technology.

It’s a little surprising that Peters doesn’t mention the megadrought in the US Southwest, which has made quite a splash in the news. From a February 15, 2022 article by Denise Chow for NBC [US National Broadcasting Corporation] news online (Note: Links have been removed),

The megadrought that has gripped the southwestern United States for the past 22 years is the worst since at least 800 A.D., according to a new study that examined shifts in water availability and soil moisture over the past 12 centuries.

The research, which suggests that the past two decades in the American Southwest have been the driest period in 1,200 years, pointed to human-caused climate change as a major reason for the current drought’s severity. The findings were published Monday in the journal Nature Climate Change.

Jason Smerdon, one of the study’s authors and a climate scientist at Columbia University’s Lamont-Doherty Earth Observatory, said global warming has made the megadrought more extreme because it creates a “thirstier” atmosphere that is better able to pull moisture out of forests, vegetation and soil.

Over the past two decades, temperatures in the Southwest were around 1.64 degrees Fahrenheit higher than the average from 1950 to 1999, according to the researchers. Globally, the world has warmed by about 2 degrees Fahrenheit since the late 1800s.

It’s getting drier even here in the Pacific Northwest. Maybe it’s time to start looking at drought and water shortages as a global issue rather than as a regional issue.

Caption: An example of a different shape the water-capturing film can take. Credit: The University of Texas at Austin / Cockrell School of Engineering

Getting back to the topic, a May 23, 2022 University of Texas at Austin news release (also on EurekAlert), which originated Peters’ article, announces the work,

More than a third of the world’s population lives in drylands, areas that experience significant water shortages. Scientists and engineers at The University of Texas at Austin have developed a solution that could help people in these areas access clean drinking water.

The team developed a low-cost gel film made of abundant materials that can pull water from the air in even the driest climates. The materials that facilitate this reaction cost a mere $2 per kilogram, and a single kilogram can produce more than 6 liters of water per day in areas with less than 15% relative humidity and 13 liters in areas with up to 30% relative humidity.

The research builds on previous breakthroughs from the team, including the ability to pull water out of the atmosphere and the application of that technology to create self-watering soil. However, these technologies were designed for relatively high-humidity environments.

“This new work is about practical solutions that people can use to get water in the hottest, driest places on Earth,” said Guihua Yu, professor of materials science and mechanical engineering in the Cockrell School of Engineering’s Walker Department of Mechanical Engineering. “This could allow millions of people without consistent access to drinking water to have simple, water generating devices at home that they can easily operate.”

The researchers used renewable cellulose and a common kitchen ingredient, konjac gum, as a main hydrophilic (attracted to water) skeleton. The open-pore structure of gum speeds the moisture-capturing process. Another designed component, thermo-responsive cellulose with hydrophobic (resistant to water) interaction when heated, helps release the collected water immediately so that overall energy input to produce water is minimized.

Other attempts at pulling water from desert air are typically energy-intensive and do not produce much. And although 6 liters does not sound like much, the researchers say that creating thicker films or absorbent beds or arrays with optimization could drastically increase the amount of water they yield.

The reaction itself is a simple one, the researchers said, which reduces the challenges of scaling it up and achieving mass usage.

“This is not something you need an advanced degree to use,” said Youhong “Nancy” Guo, the lead author on the paper and a former doctoral student in Yu’s lab, now a postdoctoral researcher at the Massachusetts Institute of Technology. “It’s straightforward enough that anyone can make it at home if they have the materials.”

The film is flexible and can be molded into a variety of shapes and sizes, depending on the need of the user. Making the film requires only the gel precursor, which includes all the relevant ingredients poured into a mold.

“The gel takes 2 minutes to set simply. Then, it just needs to be freeze-dried, and it can be peeled off the mold and used immediately after that,” said Weixin Guan, a doctoral student on Yu’s team and a lead researcher of the work.

The research was funded by the U.S. Department of Defense’s Defense Advanced Research Projects Agency (DARPA), and drinking water for soldiers in arid climates is a big part of the project. However, the researchers also envision this as something that people could someday buy at a hardware store and use in their homes because of the simplicity.

Yu directed the project. Guo and Guan co-led experimental efforts on synthesis, characterization of the samples and device demonstration. Other team members are Chuxin Lei, Hengyi Lu and Wen Shi.

Here’s a link to and a citation for the paper,

Scalable super hygroscopic polymer films for sustainable moisture harvesting in arid environments by Youhong Guo, Weixin Guan, Chuxin Lei, Hengyi Lu, Wen Shi & Guihua Yu. Nature Communications volume 13, Article number: 2761 (2022) DOI: https://doi.org/10.1038/s41467-022-30505-2 Published: 19 May 2022

This paper is open access.

Philosophy and science in Tokyo, Japan from Dec. 1-2, 2022

I have not seen a more timely and à propos overview for a meeting/conference/congress than this one for Tokyo Forum 2022 (hosted by the University of Tokyo and South Korea’s Chey Institute for Advanced Studies),

Dialogue between Philosophy and Science: In a World Facing War, Pandemic, and Climate Change

In the face of war, a pandemic, and climate change, we cannot repeat the history of the last century, in which our ancestors headed down the road to division, global conflict, and environmental destruction.

How can we live more fully and how do we find a new common understanding about what our society should be? Tokyo Forum 2022 will tackle these questions through a series of in-depth dialogues between philosophy and science. The dialogues will weave together the latest findings and deep contemplation, and explore paths that could lead us to viable answers and solutions.

Philosophy of the 21st century must contribute to the construction of a new universality based on locality and diversity. It should be a universality that is open to co-existing with other non-human elements, such as ecosystems and nature, while severely criticizing the understanding of history that unreflectively identifies anthropocentrism with universality.

Science in the 21st century also needs to dispense with its overarching aura of supremacy and lack of self-criticism. There is a need for scientists to make efforts to demarcate their own limits. This also means reexamining what ethics means for science.

Tokyo Forum 2022 will offer multifaceted dialogues between philosophers, scientists, and scholars from various fields of study on the state and humanity in the 21st century, with a view to imagining and proposing a vision of the society we need.

Here are some details about the hybrid event from a November 4, 2022 University of Tokyo press release on EurekAlert,

The University of Tokyo and South Korea’s Chey Institute for Advanced Studies will host Tokyo Forum 2022 from Dec. 1-2, 2022. Under this year’s theme “Dialogue between Philosophy and Science,” the annual symposium will bring together philosophers, scientists and scholars in various fields from around the world for multifaceted dialogues on humanity and the state in the 21st century, while envisioning the society we need.

The event is free and open to the public, and will be held both on site at Yasuda Auditorium of the University of Tokyo and online via livestream. [emphases mine]

Keynote speakers lined up for the first day of the two-day symposium are former U.N. Secretary-General Ban Ki-moon, University of Chicago President Paul Alivisatos and Mariko Hasegawa, president of the Graduate University for Advanced Studies in Japan.

Other featured speakers on the event’s opening day include renowned modern thinker and author Professor Markus Gabriel of the University of Bonn, and physicist Hirosi Ooguri, director of the Kavli Institute for the Physics and Mathematics of the Universe at the University of Tokyo and professor at the California Institute of Technology, who are scheduled to participate in the high-level discussion on the dialogue between philosophy and science.

Columbia University Professor Jeffrey Sachs will take part in a panel discussion, also on Day 1, on tackling global environmental issues with stewardship of the global commons — the stable and resilient Earth system that sustains our lives — as a global common value.

The four panel discussions slated for Day 2 will cover the role of world philosophy in addressing the problems of a globalized world; transformative change for a sustainable future by understanding the diverse values of nature and its contributions to people; the current and future impacts of autonomous robots on society; and finding collective solutions and universal values to pursue equitable and sustainable futures for humanity by looking at interconnections among various fields of inquiry.

Opening remarks will be delivered by University of Tokyo President Teruo Fujii and South Korea’s SK Group Chairman Chey Tae-won, on Day 1. Fujii and Chey Institute President Park In-kook will make closing remarks following the wrap-up session on the second and final day.

Tokyo Forum with its overarching theme “Shaping the Future” is held annually since 2019 to stimulate discussions on finding the best ideas for shaping the world and humanity in the face of complex situations where the conventional wisdom can no longer provide answers.

For more information about the program and speakers of Tokyo Forum 2022, visit the event website and social media accounts:

Website: https://www.tokyoforum.tc.u-tokyo.ac.jp/en/index.html

Twitter: https://twitter.com/UTokyo_forum

Facebook: https://www.facebook.com/UTokyo.tokyo.forum/

To register, fill out the registration form on the Tokyo Forum 2022 website (registration is free but required [emphasis mine] to attend the event): https://www.tokyo-forum-form.com/apply/audiences/en

I’m not sure how they are handling languages. I’m guessing that people will speak in the language of their choice and that translations (subtitles or dubbing) will be available. For anyone who may have difficulty attending due to time zone issues, there are archives for previous Tokyo Forums. Presumably 2022 will be added at some point in the future.

A computer simulation inside a computer simulation?

Stumbling across an entry from the National Film Board of Canada for the Venice VR (virtual reality) Expanded section at the 77th Venice International Film Festival (September 2 to 12, 2020) and a recent Scientific American article on computer simulations provoked a memory from Frank Herbert’s 1965 novel, Dune. From an Oct. 3, 2007 posting on Equivocality: A journal of self-discovery, healing, growth, and growing pains,

Knowing where the trap is — that’s the first step in evading it. This is like single combat, Son, only on a larger scale — a feint within a feint within a feint [emphasis mine]…seemingly without end. The task is to unravel it.

—Duke Leto Atreides, Dune [Note: Dune is a 1965 science-fiction novel by US author Frank Herbert]

Now, onto what provoked memory of that phrase.

The first computer simulation: “Agence”

Here’s a description of “Agence” and its creators from an August 11, 2020 Canada National Film Board (NFB) news release,

Two-time Emmy Award-winning storytelling pioneer Pietro Gagliano’s new work Agence (Transitional Forms/National Film Board of Canada) is an industry-first dynamic film that integrates cinematic storytelling, artificial intelligence, and user interactivity to create a different experience each time.

Agence is premiering in official competition in the Venice VR Expanded section at the 77th Venice International Film Festival (September 2 to 12), and accessible worldwide via the online Venice VR Expanded platform.

About the experience

Would you play god to intelligent life? Agence places the fate of artificially intelligent creatures in your hands. In their simulated universe, you have the power to observe, and to interfere. Maintain the balance of their peaceful existence or throw them into a state of chaos as you move from planet to planet. Watch closely and you’ll see them react to each other and their emerging world.

About the creators

Created by Pietro Gagliano, Agence is a co-production between his studio lab Transitional Forms and the NFB. Pietro is a pioneer of new forms of media that allow humans to understand what it means to be machine, and machines what it means to be human. Previously, Pietro co-founded digital studio Secret Location, and with his team, made history in 2015 by winning the first ever Emmy Award for a virtual reality project. His work has been recognized through hundreds of awards and nominations, including two Emmy Awards, 11 Canadian Screen Awards, 31 FWAs, two Webby Awards, a Peabody-Facebook Award, and a Cannes Lion.

Agence is produced by Casey Blustein (Transitional Forms) and David Oppenheim (NFB) and executive produced by Pietro Gagliano (Transitional Forms) and Anita Lee (NFB). 

About Transitional Forms

Transitional Forms is a studio lab focused on evolving entertainment formats through the use of artificial intelligence. Through their innovative approach to content and tool creation, their interdisciplinary team transforms valuable research into dynamic, culturally relevant experiences across a myriad of emerging platforms. Dedicated to the intersection of technology and art, Transitional Forms strives to make humans more creative, and machines more human.

About the NFB

David Oppenheim and Anita Lee’s recent VR credits also include the acclaimed virtual reality/live performance piece Draw Me Close and The Book of Distance, which premiered at the Sundance Film Festival and is in the “Best of VR” section at Venice this year. Canada’s public producer of award-winning creative documentaries, auteur animation, interactive stories and participatory experiences, the NFB has won over 7,000 awards, including 21 Webbys and 12 Academy Awards.

The line that caught my eye? “Would you play god to intelligent life?” For the curious, here’s the film’s trailer,

Now for the second computer simulation (the feint within the feint).

Are we living in a computer simulation?

According to some thinkers in the field, the chances are about 50/50 that we are computer simulations, which makes “Agence” a particularly piquant experience.

An October 13, 2020 article ‘Do We Live in a Simulation? Chances are about 50–50’ by Anil Ananthaswamy for Scientific American poses the question with an answer that’s unexpectedly uncertain, Note: Links have been removed,

It is not often that a comedian gives an astrophysicist goose bumps when discussing the laws of physics. But comic Chuck Nice managed to do just that in a recent episode of the podcast StarTalk. The show’s host Neil deGrasse Tyson had just explained the simulation argument—the idea that we could be virtual beings living in a computer simulation. If so, the simulation would most likely create perceptions of reality on demand rather than simulate all of reality all the time—much like a video game optimized to render only the parts of a scene visible to a player. “Maybe that’s why we can’t travel faster than the speed of light, because if we could, we’d be able to get to another galaxy,” said Nice, the show’s co-host, prompting Tyson to gleefully interrupt. “Before they can program it,” the astrophysicist said, delighting at the thought. “So the programmer put in that limit.”

Such conversations may seem flippant. But ever since Nick Bostrom of the University of Oxford wrote a seminal paper about the simulation argument in 2003, philosophers, physicists, technologists and, yes, comedians have been grappling with the idea of our reality being a simulacrum. Some have tried to identify ways in which we can discern if we are simulated beings. Others have attempted to calculate the chance of us being virtual entities. Now a new analysis shows that the odds that we are living in base reality—meaning an existence that is not simulated—are pretty much even. But the study also demonstrates that if humans were to ever develop the ability to simulate conscious beings, the chances would overwhelmingly tilt in favor of us, too, being virtual denizens inside someone else’s computer. (A caveat to that conclusion is that there is little agreement about what the term “consciousness” means, let alone how one might go about simulating it.)

In 2003 Bostrom imagined a technologically adept civilization that possesses immense computing power and needs a fraction of that power to simulate new realities with conscious beings in them. Given this scenario, his simulation argument showed that at least one proposition in the following trilemma must be true: First, humans almost always go extinct before reaching the simulation-savvy stage. Second, even if humans make it to that stage, they are unlikely to be interested in simulating their own ancestral past. And third, the probability that we are living in a simulation is close to one.

Before Bostrom, the movie The Matrix had already done its part to popularize the notion of simulated realities. And the idea has deep roots in Western and Eastern philosophical traditions, from Plato’s cave allegory to Zhuang Zhou’s butterfly dream. More recently, Elon Musk gave further fuel to the concept that our reality is a simulation: “The odds that we are in base reality is one in billions,” he said at a 2016 conference.

For him [astronomer David Kipping of Columbia University], there is a more obvious answer: Occam’s razor, which says that in the absence of other evidence, the simplest explanation is more likely to be correct. The simulation hypothesis is elaborate, presuming realities nested upon realities, as well as simulated entities that can never tell that they are inside a simulation. “Because it is such an overly complicated, elaborate model in the first place, by Occam’s razor, it really should be disfavored, compared to the simple natural explanation,” Kipping says.

Maybe we are living in base reality after all—The Matrix, Musk and weird quantum physics notwithstanding.
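To see, very roughly, where near-even odds can come from, here is a toy calculation of my own; it is a deliberate simplification, not Kipping’s actual Bayesian analysis. Give the proposition “simulations of conscious beings are ever run” a noncommittal 50% prior, and assume that if they are run, simulated observers vastly outnumber unsimulated ones.

```python
# Toy arithmetic only; a back-of-the-envelope simplification, not the published analysis.
prior_sims_are_run = 0.5       # noncommittal prior that such simulations ever exist
frac_simulated_if_run = 0.999  # if they do, almost every observer is simulated

p_base_reality = (1 - prior_sims_are_run) * 1.0 \
    + prior_sims_are_run * (1 - frac_simulated_if_run)
print(f"P(we are in base reality) ~ {p_base_reality:.4f}")  # ~0.5, i.e., near-even odds
```

The point is only that the 50% prior on the first branch dominates the answer; Kipping’s full analysis is considerably more careful than this back-of-the-envelope version.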

It’s all a little mind-boggling (a computer simulation creating and playing with a computer simulation?) and I’m not sure how far I want to start thinking about the implications (the feint within the feint within the feint). Still, it seems that the idea could be useful as a kind of thought experiment designed to have us rethink our importance in the world. Or maybe, as a way to have a laugh at our own absurdity.

Technical University of Munich: embedded ethics approach for AI (artificial intelligence) and storing a tv series in synthetic DNA

I stumbled across two news bits of interest from the Technical University of Munich in one day (Sept. 1, 2020, I think). The topics: artificial intelligence (AI) and synthetic DNA (deoxyribonucleic acid).

Embedded ethics and artificial intelligence (AI)

An August 27, 2020 Technical University of Munich (TUM) press release (also on EurekAlert but published Sept. 1, 2020) features information about a proposal to embed ethicists in with AI development teams,

The increasing use of AI (artificial intelligence) in the development of new medical technologies demands greater attention to ethical aspects. An interdisciplinary team at the Technical University of Munich (TUM) advocates the integration of ethics from the very beginning of the development process of new technologies. Alena Buyx, Professor of Ethics in Medicine and Health Technologies, explains the embedded ethics approach.

Professor Buyx, the discussions surrounding a greater emphasis on ethics in AI research have greatly intensified in recent years, to the point where one might speak of “ethics hype” …

Prof. Buyx: … and many committees in Germany and around the world such as the German Ethics Council or the EU Commission High-Level Expert Group on Artificial Intelligence have responded. They are all in agreement: We need more ethics in the development of AI-based health technologies. But how do things look in practice for engineers and designers? Concrete solutions are still few and far between. In a joint pilot project with two Integrative Research Centers at TUM, the Munich School of Robotics and Machine Intelligence (MSRM) with its director, Prof. Sami Haddadin, and the Munich Center for Technology in Society (MCTS), with Prof. Ruth Müller, we want to try out the embedded ethics approach. We published the proposal in Nature Machine Intelligence at the end of July [2020].

What exactly is meant by the “embedded ethics approach”?

Prof. Buyx: The idea is to make ethics an integral part of the research process by integrating ethicists into the AI development team from day one. For example, they attend team meetings on a regular basis and create a sort of “ethical awareness” for certain issues. They also raise and analyze specific ethical and social issues.

Is there an example of this concept in practice?

Prof. Buyx: The Geriatronics Research Center, a flagship project of the MSRM in Garmisch-Partenkirchen, is developing robot assistants to enable people to live independently in old age. The center’s initiatives will include the construction of model apartments designed to try out residential concepts where seniors share their living space with robots. At a joint meeting with the participating engineers, it was noted that the idea of using an open concept layout everywhere in the units – with few doors or individual rooms – would give the robots considerable range of motion. With the seniors, however, this living concept could prove upsetting because they are used to having private spaces. At the outset, the engineers had not given explicit consideration to this aspect.

The approach sounds promising. But how can we keep “embedded ethics” from turning into an “ethics washing” exercise, offering companies a comforting sense of “being on the safe side” when developing new AI technologies?

Prof. Buyx: That’s not something we can be certain of avoiding. The key is mutual openness and a willingness to listen, with the goal of finding a common language – and subsequently being prepared to effectively implement the ethical aspects. At TUM we are ideally positioned to achieve this. Prof. Sami Haddadin, the director of the MSRM, is also a member of the EU High-Level Group of Artificial Intelligence. In his research, he is guided by the concept of human centered engineering. Consequently, he has supported the idea of embedded ethics from the very beginning. But one thing is certain: Embedded ethics alone will not suddenly make AI “turn ethical”. Ultimately, that will require laws, codes of conduct and possibly state incentives.

Here’s a link to and a citation for the paper espousing the embedded ethics for AI development approach,

An embedded ethics approach for AI development by Stuart McLennan, Amelia Fiske, Leo Anthony Celi, Ruth Müller, Jan Harder, Konstantin Ritt, Sami Haddadin & Alena Buyx. Nature Machine Intelligence (2020) DOI: https://doi.org/10.1038/s42256-020-0214-1 Published 31 July 2020

This paper is behind a paywall.

Religion, ethics, and AI

For some reason embedded ethics and AI got me to thinking about Pope Francis and other religious leaders.

The Roman Catholic Church and AI

There was a recent announcement that the Roman Catholic Church will be working with Microsoft and IBM on AI and ethics (from a February 28, 2020 article by Jen Copestake for British Broadcasting Corporation (BBC) news online; Note: A link has been removed),

Leaders from the two tech giants met senior church officials in Rome, and agreed to collaborate on “human-centred” ways of designing AI.

Microsoft president Brad Smith admitted some people may “think of us as strange bedfellows” at the signing event.

“But I think the world needs people from different places to come together,” he said.

The call was supported by Pope Francis, in his first detailed remarks about the impact of artificial intelligence on humanity.

The Rome Call for Ethics [sic] was co-signed by Mr Smith, IBM executive vice-president John Kelly and president of the Pontifical Academy for Life Archbishop Vincenzo Paglia.

It puts humans at the centre of new technologies, asking for AI to be designed with a focus on the good of the environment and “our common and shared home and of its human inhabitants”.

Framing the current era as a “renAIssance”, the speakers said the invention of artificial intelligence would be as significant to human development as the invention of the printing press or combustion engine.

UN Food and Agricultural Organization director Qu Dongyu and Italy’s technology minister Paola Pisano were also co-signatories.

Hannah Brockhaus’s February 28, 2020 article for the Catholic News Agency provides some details missing from the BBC report and I found it quite helpful when trying to understand the various pieces that make up this initiative,

The Pontifical Academy for Life signed Friday [February 28, 2020], alongside presidents of IBM and Microsoft, a call for ethical and responsible use of artificial intelligence technologies.

According to the document, “the sponsors of the call express their desire to work together, in this context and at a national and international level, to promote ‘algor-ethics.’”

“Algor-ethics,” according to the text, is the ethical use of artificial intelligence according to the principles of transparency, inclusion, responsibility, impartiality, reliability, security, and privacy.

The signing of the “Rome Call for AI Ethics [PDF]” took place as part of the 2020 assembly of the Pontifical Academy for Life, which was held Feb. 26-28 [2020] on the theme of artificial intelligence.

One part of the assembly was dedicated to private meetings of the academics of the Pontifical Academy for Life. The second was a workshop on AI and ethics that drew 356 participants from 41 countries.

On the morning of Feb. 28 [2020], a public event took place called “renAIssance. For a Humanistic Artificial Intelligence” and included the signing of the AI document by Microsoft President Brad Smith, and IBM Executive Vice-president John Kelly III.

The Director General of FAO, Dongyu Qu, and politician Paola Pisano, representing the Italian government, also signed.

The president of the European Parliament, David Sassoli, was also present Feb. 28.

Pope Francis canceled his scheduled appearance at the event due to feeling unwell. His prepared remarks were read by Archbishop Vincenzo Paglia, president of the Academy for Life.

You can find Pope Francis’s comments about the document here (if you’re not comfortable reading Italian, hopefully, the English translation which follows directly afterward will be helpful). The Pope’s AI initiative has a dedicated website, Rome Call for AI ethics, and while most of the material dates from the February 2020 announcement, they are keeping up a blog. It has two entries, one dated in May 2020 and another in September 2020.

Buddhism and AI

The Dalai Lama is well known for having an interest in science and having hosted scientists for various dialogues. So, I was able to track down a November 10, 2016 article by Ariel Conn for the futureoflife.org website, which features his insights on the matter,

The question of what it means and what it takes to feel needed is an important problem for ethicists and philosophers, but it may be just as important for AI researchers to consider. The Dalai Lama argues that lack of meaning and purpose in one’s work increases frustration and dissatisfaction among even those who are gainfully employed.

“The problem,” says the Dalai Lama, “is … the growing number of people who feel they are no longer useful, no longer needed, no longer one with their societies. … Feeling superfluous is a blow to the human spirit. It leads to social isolation and emotional pain, and creates the conditions for negative emotions to take root.”

If feeling needed and feeling useful are necessary for happiness, then AI researchers may face a conundrum. Many researchers hope that job loss due to artificial intelligence and automation could, in the end, provide people with more leisure time to pursue enjoyable activities. But if the key to happiness is feeling useful and needed, then a society without work could be just as emotionally challenging as today’s career-based societies, and possibly worse.

I also found a talk on the topic by The Venerable Tenzin Priyadarshi. First, here’s a description from his bio at the Dalai Lama Center for Ethics and Transformative Values webspace on the Massachusetts Institute of Technology (MIT) website,

… an innovative thinker, philosopher, educator and a polymath monk. He is Director of the Ethics Initiative at the MIT Media Lab and President & CEO of The Dalai Lama Center for Ethics and Transformative Values at the Massachusetts Institute of Technology. Venerable Tenzin’s unusual background encompasses entering a Buddhist monastery at the age of ten and receiving graduate education at Harvard University with degrees ranging from Philosophy to Physics to International Relations. He is a Tribeca Disruptive Fellow and a Fellow at the Center for Advanced Study in Behavioral Sciences at Stanford University. Venerable Tenzin serves on the boards of a number of academic, humanitarian, and religious organizations. He is the recipient of several recognitions and awards and received Harvard’s Distinguished Alumni Honors for his visionary contributions to humanity.

He gave the 2018 Roger W. Heyns Lecture in Religion and Society at Stanford University on the topic, “Religious and Ethical Dimensions of Artificial Intelligence.” The video runs over one hour but he is a sprightly speaker (in comparison to other Buddhist speakers I’ve listened to over the years).

Judaism, Islam, and other Abrahamic faiths examine AI and ethics

I was delighted to find this January 30, 2020 Artificial Intelligence: Implications for Ethics and Religion event as it brought together a range of thinkers from various faiths and disciplines,

New technologies are transforming our world every day, and the pace of change is only accelerating.  In coming years, human beings will create machines capable of out-thinking us and potentially taking on such uniquely-human traits as empathy, ethical reasoning, perhaps even consciousness.  This will have profound implications for virtually every human activity, as well as the meaning we impart to life and creation themselves.  This conference will provide an introduction for non-specialists to Artificial Intelligence (AI):

What is it?  What can it do and be used for?  And what will be its implications for choice and free will; economics and worklife; surveillance economies and surveillance states; the changing nature of facts and truth; and the comparative intelligence and capabilities of humans and machines in the future? 

Leading practitioners, ethicists and theologians will provide cross-disciplinary and cross-denominational perspectives on such challenges as technology addiction, inherent biases and resulting inequalities, the ethics of creating destructive technologies and of turning decision-making over to machines from self-driving cars to “autonomous weapons” systems in warfare, and how we should treat the suffering of “feeling” machines.  The conference ultimately will address how we think about our place in the universe and what this means for both religious thought and theological institutions themselves.

UTS [Union Theological Seminary] is the oldest independent seminary in the United States and has long been known as a bastion of progressive Christian scholarship.  JTS [Jewish Theological Seminary] is one of the academic and spiritual centers of Conservative Judaism and a major center for academic scholarship in Jewish studies. The Riverside Church is an interdenominational, interracial, international, open, welcoming, and affirming church and congregation that has served as a focal point of global and national activism for peace and social justice since its inception and continues to serve God through word and public witness. The annual Greater Good Gathering, the following week at Columbia University’s School of International & Public Affairs, focuses on how technology is changing society, politics and the economy – part of a growing nationwide effort to advance conversations promoting the “greater good.”

They have embedded a video of the event (it runs a little over seven hours) on the January 30, 2020 Artificial Intelligence: Implications for Ethics and Religion event page. For anyone who finds that a daunting amount of information, you may want to check out the speaker list for ideas about who might be writing and thinking on this topic.

As for Islam, I did track down this November 29, 2018 article by Shahino Mah Abdullah, a fellow at the Institute of Advanced Islamic Studies (IAIS) Malaysia,

As the global community continues to work together on the ethics of AI, there are still vast opportunities to offer ethical inputs, including the ethical principles based on Islamic teachings.

This is in line with Islam’s encouragement for its believers to convey beneficial messages, including to share its ethical principles with society.

In Islam, ethics or akhlak (virtuous character traits) in Arabic, is sometimes employed interchangeably in the Arabic language with adab, which means the manner, attitude, behaviour, and etiquette of putting things in their proper places. Islamic ethics cover all the legal concepts ranging from syariah (Islamic law), fiqh (jurisprudence), qanun (ordinance), and ‘urf (customary practices).

Adopting and applying moral values based on the Islamic ethical concept or applied Islamic ethics could be a way to address various issues in today’s societies.

At the same time, this approach is in line with the higher objectives of syariah (maqasid al-syariah) that is aimed at conserving human benefit by the protection of human values, including faith (hifz al-din), life (hifz al-nafs), lineage (hifz al-nasl), intellect (hifz al-‘aql), and property (hifz al-mal). This approach could be very helpful to address contemporary issues, including those related to the rise of AI and intelligent robots.

…

Part of the difficulty with tracking down more about AI, ethics, and various religions is linguistic. I simply don’t have the language skills to search for the commentaries and, even in English, I may not have the best or most appropriate search terms.

Television (TV) episodes stored on DNA?

According to a Sept. 1, 2020 news item on Nanowerk, the first episode of a tv series, ‘Biohackers’, has been stored on synthetic DNA (deoxyribonucleic acid) by a researcher at TUM and colleagues at another institution,

The first episode of the newly released series “Biohackers” was stored in the form of synthetic DNA. This was made possible by the research of Prof. Reinhard Heckel of the Technical University of Munich (TUM) and his colleague Prof. Robert Grass of ETH Zürich.

They have developed a method that permits the stable storage of large quantities of data on DNA for over 1000 years.

A Sept. 1, 2020 TUM press release, which originated the news item, proceeds with more detail in an interview format,

Prof. Heckel, Biohackers is about a medical student seeking revenge on a professor with a dark past – and the manipulation of DNA with biotechnology tools. You were commissioned to store the series on DNA. How does that work?

First, I should mention that what we’re talking about is artificially generated – in other words, synthetic – DNA. DNA consists of four building blocks: the nucleotides adenine (A), thymine (T), guanine (G) and cytosine (C). Computer data, meanwhile, are coded as zeros and ones. The first episode of Biohackers consists of a sequence of around 600 million zeros and ones. To code the sequence 01 01 11 00 in DNA, for example, we decide which number combinations will correspond to which letters. For example: 00 is A, 01 is C, 10 is G and 11 is T. Our example then produces the DNA sequence CCTA. Using this principle of DNA data storage, we have stored the first episode of the series on DNA.

And to view the series – is it just a matter of “reverse translation” of the letters?

In a very simplified sense, you can visualize it like that. When writing, storing and reading the DNA, however, errors occur. If these errors are not corrected, the data stored on the DNA will be lost. To solve the problem, I have developed an algorithm based on channel coding. This method involves correcting errors that take place during information transfers. The underlying idea is to add redundancy to the data. Think of language: When we read or hear a word with missing or incorrect letters, the computing power of our brain is still capable of understanding the word. The algorithm follows the same principle: It encodes the data with sufficient redundancy to ensure that even highly inaccurate data can be restored later.

Channel coding is used in many fields, including in telecommunications. What challenges did you face when developing your solution?

The first challenge was to create an algorithm specifically geared to the errors that occur in DNA. The second one was to make the algorithm so efficient that the largest possible quantities of data can be stored on the smallest possible quantity of DNA, so that only the absolutely necessary amount of redundancy is added. We demonstrated that our algorithm is optimized in that sense.

DNA data storage is very expensive because of the complexity of DNA production as well as the reading process. What makes DNA an attractive storage medium despite these challenges?

First, DNA has a very high information density. This permits the storage of enormous data volumes in a minimal space. In the case of the TV series, we stored “only” 100 megabytes on a picogram – or a billionth [sic; a picogram is actually a trillionth] of a gram of DNA. Theoretically, however, it would be possible to store up to 200 exabytes on one gram of DNA. And DNA lasts a long time. By comparison: If you never turned on your PC or wrote data to the hard disk it contains, the data would disappear after a couple of years. By contrast, DNA can remain stable for many thousands of years if it is packed right.

And the method you have developed also makes the DNA strands durable – practically indestructible.

My colleague Robert Grass was the first to develop a process for the “stable packing” of DNA strands by encapsulating them in nanometer-scale spheres made of silica glass. This ensures that the DNA is protected against mechanical influences. In a joint paper in 2015, we presented the first robust DNA data storage concept with our algorithm and the encapsulation process developed by Prof. Grass. Since then we have continuously improved our method. In our most recent publication in Nature Protocols of January 2020, we passed on what we have learned.

What are your next steps? Does data storage on DNA have a future?

We’re working on a way to make DNA data storage cheaper and faster. “Biohackers” was a milestone en route to commercialization. But we still have a long way to go. If this technology proves successful, big things will be possible. Entire libraries, all movies, photos, music and knowledge of every kind – provided it can be represented in the form of data – could be stored on DNA and would thus be available to humanity for eternity.
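To make the bit-pair mapping Heckel describes concrete, here is a minimal sketch of the encoding and “reverse translation” steps. The mapping itself (00 is A, 01 is C, 10 is G, 11 is T) is quoted from the interview; the helper functions are my own and leave out the channel coding that makes real DNA storage robust to the errors introduced while writing, storing and reading the DNA.

```python
# Bit-pair to nucleotide mapping as quoted in the interview above.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(bits: str) -> str:
    """Translate an even-length bit string into a DNA sequence."""
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(dna: str) -> str:
    """'Reverse translation' from nucleotides back to bits."""
    return "".join(BASE_TO_BITS[base] for base in dna)

print(encode("01011100"))  # -> CCTA, reproducing Heckel's example
print(decode("CCTA"))      # -> 01011100
# A production pipeline would add redundancy (channel coding) before synthesis
# so that read/write errors can be corrected on readout.
```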

Here’s a link to and a citation for the paper,

Reading and writing digital data in DNA by Linda C. Meiser, Philipp L. Antkowiak, Julian Koch, Weida D. Chen, A. Xavier Kohll, Wendelin J. Stark, Reinhard Heckel & Robert N. Grass. Nature Protocols volume 15, pages 86–101 (2020) Issue Date: January 2020 DOI: https://doi.org/10.1038/s41596-019-0244-5 Published [online] 29 November 2019

This paper is behind a paywall.

As for ‘Biohackers’, it’s a German science fiction television series and you can find out more about it here on the Internet Movie Database.

Bringing a technique from astronomy down to the nanoscale

A January 2, 2020 Columbia University news release on EurekAlert (also on phys.org but published Jan. 3, 2020) describes research that takes the inter-galactic down to the quantum level,

Researchers at Columbia University and University of California, San Diego, have introduced a novel “multi-messenger” approach to quantum physics that signifies a technological leap in how scientists can explore quantum materials.

The findings appear in a recent article published in Nature Materials, led by A. S. McLeod, postdoctoral researcher, Columbia Nano Initiative, with co-authors Dmitri Basov and A. J. Millis at Columbia and R.A. Averitt at UC San Diego.

“We have brought a technique from the inter-galactic scale down to the realm of the ultra-small,” said Basov, Higgins Professor of Physics and Director of the Energy Frontier Research Center at Columbia. “Equipped with multi-modal nanoscience tools we can now routinely go places no one thought would be possible as recently as five years ago.”

The work was inspired by “multi-messenger” astrophysics, which emerged during the last decade as a revolutionary technique for the study of distant phenomena like black hole mergers. Simultaneous measurements from instruments, including infrared, optical, X-ray and gravitational-wave telescopes can, taken together, deliver a physical picture greater than the sum of their individual parts.

The search is on for new materials that can supplement the current reliance on electronic semiconductors. Control over material properties using light can offer improved functionality, speed, flexibility and energy efficiency for next-generation computing platforms.

Experimental papers on quantum materials have typically reported results obtained by using only one type of spectroscopy. The researchers have shown the power of using a combination of measurement techniques to simultaneously examine electrical and optical properties.

The researchers performed their experiment by focusing laser light onto the sharp tip of a needle probe coated with magnetic material. When thin films of metal oxide are subject to a unique strain, ultra-fast light pulses can trigger the material to switch into an unexplored phase of nanometer-scale domains, and the change is reversible.

By scanning the probe over the surface of their thin film sample, the researchers were able to trigger the change locally and simultaneously manipulate and record the electrical, magnetic and optical properties of these light-triggered domains with nanometer-scale precision.

The study reveals how unanticipated properties can emerge in long-studied quantum materials at ultra-small scales when scientists tune them by strain.

“It is relatively common to study these nano-phase materials with scanning probes. But this is the first time an optical nano-probe has been combined with simultaneous magnetic nano-imaging, and all at the very low temperatures where quantum materials show their merits,” McLeod said. “Now, investigation of quantum materials by multi-modal nanoscience offers a means to close the loop on programs to engineer them.”

The excitement is palpable.

Caption: The discovery of multi-messenger nanoprobes allows scientists to simultaneously probe multiple properties of quantum materials at nanometer-scale spatial resolutions. Credit: Ella Maru Studio

Here’s a link to and a citation for the paper,

Multi-messenger nanoprobes of hidden magnetism in a strained manganite by A. S. McLeod, Jingdi Zhang, M. Q. Gu, F. Jin, G. Zhang, K. W. Post, X. G. Zhao, A. J. Millis, W. B. Wu, J. M. Rondinelli, R. D. Averitt & D. N. Basov. Nature Materials (2019) doi:10.1038/s41563-019-0533-y Published: 16 December 2019

This paper is behind a paywall.

Soft things for your brain

A March 5, 2018 news item on Nanowerk describes the latest stretchable electrode (Note: A link has been removed),

Klas Tybrandt, principal investigator at the Laboratory of Organic Electronics at Linköping University [Sweden], has developed new technology for long-term stable neural recording. It is based on a novel elastic material composite, which is biocompatible and retains high electrical conductivity even when stretched to double its original length.

The result has been achieved in collaboration with colleagues in Zürich and New York. The breakthrough, which is crucial for many applications in biomedical engineering, is described in an article published in the prestigious scientific journal Advanced Materials (“High-Density Stretchable Electrode Grids for Chronic Neural Recording”).

A March 5, 2018 Linköping University press release, which originated the news item, gives more detail but does not mention that the nanowires are composed of titanium dioxide (you can find additional details in the abstract for the paper; a link and citation will be provided later in this posting),

The coupling between electronic components and nerve cells is crucial not only to collect information about cell signalling, but also to diagnose and treat neurological disorders and diseases, such as epilepsy.

It is very challenging to achieve long-term stable connections that do not damage neurons or tissue, since the two systems, the soft and elastic tissue of the body and the hard and rigid electronic components, have completely different mechanical properties.

Caption: The soft electrode stretched to twice its length. Credit: Thor Balkhed

“As human tissue is elastic and mobile, damage and inflammation arise at the interface with rigid electronic components. It not only causes damage to tissue; it also attenuates neural signals,” says Klas Tybrandt, leader of the Soft Electronics group at the Laboratory of Organic Electronics, Linköping University, Campus Norrköping.

New conductive material

Klas Tybrandt has developed a new conductive material that is as soft as human tissue and can be stretched to twice its length. The material consists of gold coated titanium dioxide nanowires, embedded into silicone rubber. The material is biocompatible – which means it can be in contact with the body without adverse effects – and its conductivity remains stable over time.

“The microfabrication of soft electrically conductive composites involves several challenges. We have developed a process to manufacture small electrodes that also preserves the biocompatibility of the materials. The process uses very little material, and this means that we can work with a relatively expensive material such as gold, without the cost becoming prohibitive,” says Klas Tybrandt.

The electrodes are 50 µm [microns or micrometres] in size and are located at a distance of 200 µm from each other. The fabrication procedure allows 32 electrodes to be placed onto a very small surface. The final probe, shown in the photograph, has a width of 3.2 mm and a thickness of 80 µm.

The soft microelectrodes have been developed at Linköping University and ETH Zürich, and researchers at New York University and Columbia University have subsequently implanted them in the brain of rats. The researchers were able to collect high-quality neural signals from the freely moving rats for 3 months. The experiments have been subject to ethical review, and have followed the strict regulations that govern animal experiments.

Important future applications

Caption: Klas Tybrandt, researcher at the Laboratory of Organic Electronics. Credit: Thor Balkhed

“When the neurons in the brain transmit signals, a voltage is formed that the electrodes detect and transmit onwards through a tiny amplifier. We can also see which electrodes the signals came from, which means that we can estimate the location in the brain where the signals originated. This type of spatiotemporal information is important for future applications. We hope to be able to see, for example, where the signal that causes an epileptic seizure starts, a prerequisite for treating it. Another area of application is brain-machine interfaces, by which future technology and prostheses can be controlled with the aid of neural signals. There are also many interesting applications involving the peripheral nervous system in the body and the way it regulates various organs,” says Klas Tybrandt.

The breakthrough is the foundation of the research area Soft Electronics, currently being established at Linköping University, with Klas Tybrandt as principal investigator.
liu.se/soft-electronics

A video has been made available (Note: for those who find any notion of animal testing disturbing, don’t watch the video even though it is an animation and does not feature live animals),

Here’s a link to and a citation for the paper,

High-Density Stretchable Electrode Grids for Chronic Neural Recording by Klas Tybrandt, Dion Khodagholy, Bernd Dielacher, Flurin Stauffer, Aline F. Renz, György Buzsáki, and János Vörös. Advanced Materials 2018. DOI: 10.1002/adma.201706520 First published 28 February 2018

This paper is open access.

Narrating neuroscience in Toronto (Canada) on Oct. 20, 2017 and knitting a neuron

What is it with the Canadian neuroscience community? First, there’s The Beautiful Brain, an exhibition of the extraordinary drawings of Santiago Ramón y Cajal (1852–1934) at the Belkin Gallery on the University of British Columbia (UBC) campus in Vancouver and a series of events marking the exhibition (for more see my Sept. 11, 2017 posting; scroll down about 30% for information about the drawings and the events still to come).

I guess there must be some money floating around for raising public awareness because now there’s a neuroscience and ‘storytelling’ event (Narrating Neuroscience) in Toronto, Canada. From a Sept. 25, 2017 ArtSci Salon announcement (received via email),

With NARRATING NEUROSCIENCE we plan to initiate a discussion on the  role and the use of storytelling and art (both in verbal and visual  forms) to communicate abstract and complex concepts in neuroscience to  very different audiences, ranging from fellow scientists, clinicians and patients, to social scientists and the general public. We invited four guests to share their research through case studies and experiences stemming directly from their research or from other practices they have adopted and incorporated into their research, where storytelling and the arts have played a crucial role not only in communicating cutting edge research in neuroscience, but also in developing and advancing it.

OUR GUESTS

MATTEO FARINELLA, PhD, Presidential Scholar in Society and Neuroscience – Columbia University

SHELLEY WALL, AOCAD, MSc, PhD – Assistant professor, Biomedical Communications Graduate Program and Department of Biology, UTM

ALFONSO FASANO, MD, PhD, Associate Professor – University of Toronto Clinician Investigator – Krembil Research Institute Movement Disorders Centre – Toronto Western Hospital

TAHANI BAAKDHAH, MD, MSc, PhD candidate – University of Toronto

DATE: October 20, 2017
TIME: 6:00-8:00 pm
LOCATION: The Fields Institute for Research in Mathematical Sciences
222 College Street, Toronto, ON

Events Facilitators: Roberta Buiani and Stephen Morris (ArtSci Salon) and Nina Czegledy (Leonardo Network)

TAHANI BAAKDHAH is a PhD student at the University of Toronto studying how stem cells build our retina during development, the mechanism by which the light-sensing cells inside the eye enable us to see this beautiful world, and how we can regenerate these cells in case of disease or injury.

MATTEO FARINELLA combines a background in neuroscience with a lifelong passion for drawing, making comics and illustrations about the brain. He is the author of _Neurocomic_ (Nobrow 2013), published with the support of the Wellcome Trust, and _Cervellopoli_ (Editoriale Scienza 2017), and he has collaborated with universities and educational institutions around the world to make science more clear and accessible. In 2016 Matteo joined Columbia University as a Presidential Scholar in Society and Neuroscience, where he investigates the role of visual narratives in science communication. Working with science journalists, educators and cognitive neuroscientists, he aims to understand how these tools may affect the public perception of science and increase scientific literacy (cartoonscience.org).

ALFONSO FASANO graduated from the Catholic University of Rome, Italy, in 2002 and became a neurologist in 2007. After a two-year fellowship at the University of Kiel, Germany, he completed a PhD in neuroscience at the Catholic University of Rome. In 2013 he joined the Movement Disorder Centre at Toronto Western Hospital, where he is the co-director of the surgical program for movement disorders. He is also an associate professor of medicine in the Division of Neurology at the University of Toronto and a clinician investigator at the Krembil Research Institute. Dr. Fasano's main areas of interest are the treatment of movement disorders with advanced technology (infusion pumps and neuromodulation) and the pathophysiology and treatment of tremor and gait disorders. He is the author of more than 170 papers and book chapters and the principal investigator of several clinical trials.

SHELLEY WALL is an assistant professor in the University of Toronto’s Biomedical Communications graduate program, a certified medical illustrator, and inaugural Illustrator-in-Residence in the Faculty of Medicine, University of Toronto. One of her primary areas of research, teaching, and creation is graphic medicine—the intersection of comics with illness, medicine, and caregiving—and one of her ongoing projects is a series of comics about caregiving and young onset Parkinson’s disease.

You can register for this free Toronto event here.

One brief observation: there aren't any writers (other than academics) or storytellers included in this 'storytelling' event; the 'storytelling' being featured is visual. To be blunt, I'm not of the 'one picture is worth a thousand words' school of thinking (see my Feb. 22, 2011 posting). Yes, sometimes pictures are all you need, but that tiresome aphorism, which suggests communication can be reduced to a single means, really needs to be retired. As for academic writing, it's not noted for its storytelling qualities or experimentation. Academics are not judged on their writing or storytelling skills, although some are very good.

Getting back to the Toronto event, they seem to have the visual part of their focus (" … discussion on the role and the use of storytelling and art (both in verbal and visual forms) … ") covered. Having recently attended a somewhat similar event in Vancouver, which was announced in my Sept. 11, 2017 posting, I can say there were some exciting images and ideas presented.

The ArtSci Salon folks also announced this (from the Sept. 25, 2017 ArtSci Salon announcement; received via email),

ATTENTION ARTSCI SALONISTAS AND FANS OF ART AND SCIENCE!!
CALL FOR KNITTING AND CROCHET LOVERS!

In addition to being a PhD student at the University of Toronto, Tahani Baakdhah is a prolific knitter and crocheter and has been the motor behind two successful Knit-a-Neuron Toronto initiatives. We invite all Knitters and Crocheters among our ArtSci Salonistas to pick a pattern (link below) and knit a neuron (or 2! Or as many as you want!!)

http://bit.ly/2y05hRR

BRING THEM TO OUR OCTOBER 20 ARTSCI SALON!
Come to the ArtSci Salon and knit there!
You can’t come?
Share a picture with @ArtSci_Salon @SciCommTO #KnitANeuronTO [3] on
social media
Or…Drop us a line at artscisalon@gmail.com !

I think it’s been a few years since my last science knitting post. No, it was Oct. 18, 2016. Moving on, I found more neuron knitting while researching this piece. Here’s the Neural Knitworks group, which is part of Australia’s National Science Week (11-19 August 2018) initiative (from the Neural Knitworks webpage),

Neural Knitworks is a collaborative project about mind and brain health.

Whether you’re a whiz with yarn, or just discovering the joy of craft, now you can crochet wrap, knit or knot—and find out about neuroscience.

During 2014 an enormous number of handmade neurons were donated (1665 in total!) and used to build a giant walk-in brain, as seen here at Hazelhurst Gallery [scroll to end of this post]. Since then Neural Knitworks have been held in dozens of communities across Australia, with installations created in Queensland, the ACT, Singapore, as part of the Cambridge Science Festival in the UK and in Philadelphia, USA.

In 2017, the Neural Knitworks team again invites you to host your own home-grown Neural Knitwork for National Science Week*. Together we’ll create a giant ‘virtual’ neural network by linking your displays visually online.

* If you wish to host a Neural Knitwork event outside of National Science Week or internationally we ask that you contact us to seek permission to use the material, particularly if you intend to create derivative works or would like to exhibit the giant brain. Please outline your plans in an email.

Your creation can be big or small, part of a formal display, or simply consist of neighbourhood neuron ‘yarn-bombings’. Knitworks can be created at home, at work or at school. No knitting experience is required and all ages can participate.

See below for how to register your event and download our scientifically informed patterns.

What is a neuron?

Neurons are electrically excitable cells of the brain, spinal cord and peripheral nerves. The billions of neurons in your body connect to each other in neural networks. They receive signals from every sense, control movement, create memories, and form the neural basis of every thought.

Check out the neuron microscopy gallery for some real-world inspiration.

What happens at a Neural Knitwork?

Neural Knitworks are based on the principle that yarn craft, with its mental challenges, social connection and mindfulness, helps keep our brains and minds sharp, engaged and healthy.

Have fun as you

  • design your own woolly neurons, or get inspired by our scientifically-informed knitting, crochet or knot patterns;
  • natter with neuroscientists and teach them a few of your crafty tricks;
  • contribute to a travelling textile brain exhibition;
  • increase your attention span and test your memory.

Calm your mind and craft your own brain health as you

  • forge friendships;
  • solve creative and mental challenges;
  • practice mindfulness and relaxation;
  • teach and learn;
  • develop eye-hand coordination and fine motor dexterity.

Interested in hosting a Neural Knitwork?

  1. Log your event on the National Science Week calendar to take advantage of multi-channel promotion.
  2. Share the link for this Neural Knitwork page on your own website or online newsletter and add your own event details.
  3. Use this flyer template (2.5 MB .docx) to promote your event in local shop windows and on noticeboards.
  4. Read our event organisers toolbox for tips on hosting a successful event.
  5. You’ll need plenty of yarn, needles, copies of our scientifically-based neuron crafting pattern books (3.4 MB PDF) and a comfy spot in which to create.
  6. Gather together a group of friends who knit, crochet, design, spin, weave and anyone keen to give it a go. Those who know how to knit can teach others how to do it, and there’s even an easy no knit pattern that you can knot.
  7. Download a neuroscience podcast to listen to, and you’ve got a Neural Knitwork!
  8. Join the Neural Knitworks community on Facebook  to share and find information about events including public talks featuring neuroscientists.
  9. Tweet #neuralknitworks to show us your creations.
  10. Find display ideas in the pattern book and on our Facebook page.

Finally, the knitted neurons from Australia's 2014 National Science Week brain exhibit,

[downloaded from https://www.scienceweek.net.au/neural-knitworks/]

ETA Oct. 24, 2017: If you're interested in how the talk was received, there's an Oct. 24, 2017 posting by Magosia Pakulska for the Research2Reality blog.

A biocompatible (implantable) micromachine (microrobot)

I appreciate the detail and information in this well-written Jan. 4, 2017 Columbia University news release (h/t Jan. 4, 2017 Nanowerk; Note: Links have been removed),

A team of researchers led by Biomedical Engineering Professor Sam Sia has developed a way to manufacture microscale-sized machines from biomaterials that can safely be implanted in the body. Working with hydrogels, which are biocompatible materials that engineers have been studying for decades, Sia has invented a new technique that stacks the soft material in layers to make devices that have three-dimensional, freely moving parts. The study, published online January 4, 2017, in Science Robotics, demonstrates a fast manufacturing method Sia calls “implantable microelectromechanical systems” (iMEMS).

By exploiting the unique mechanical properties of hydrogels, the researchers developed a “locking mechanism” for precise actuation and movement of freely moving parts, which can provide functions such as valves, manifolds, rotors, pumps, and drug delivery. They were able to tune the biomaterials within a wide range of mechanical and diffusive properties and to control them after implantation without a sustained power supply such as a toxic battery. They then tested the “payload” delivery in a bone cancer model and found that the triggering of release of doxorubicin from the device over 10 days showed high treatment efficacy and low toxicity, at 1/10 of the standard systemic chemotherapy dose.

“Overall, our iMEMS platform enables development of biocompatible implantable microdevices with a wide range of intricate moving components that can be wirelessly controlled on demand and solves issues of device powering and biocompatibility,” says Sia, also a member of the Data Science Institute. “We’re really excited about this because we’ve been able to connect the world of biomaterials with that of complex, elaborate medical devices. Our platform has a large number of potential applications, including the drug delivery system demonstrated in our paper which is linked to providing tailored drug doses for precision medicine.”

I particularly like this bit about hydrogels being a challenge to work with and the difficulties of integrating both rigid and soft materials,

Most current implantable microdevices have static components rather than moving parts and, because they require batteries or other toxic electronics, have limited biocompatibility. Sia’s team spent more than eight years working on how to solve this problem. “Hydrogels are difficult to work with, as they are soft and not compatible with traditional machining techniques,” says Sau Yin Chin, lead author of the study who worked with Sia. “We have tuned the mechanical properties and carefully matched the stiffness of structures that come in contact with each other within the device. Gears that interlock have to be stiff in order to allow for force transmission and to withstand repeated actuation. Conversely, structures that form locking mechanisms have to be soft and flexible to allow for the gears to slip by them during actuation, while at the same time they have to be stiff enough to hold the gears in place when the device is not actuated. We also studied the diffusive properties of the hydrogels to ensure that the loaded drugs do not easily diffuse through the hydrogel layers.”

The team used light to polymerize sheets of gel and incorporated a stepper mechanism to control the z-axis and pattern the sheets layer by layer, giving them three-dimensionality. Controlling the z-axis enabled the researchers to create composite structures within one layer of the hydrogel while managing the thickness of each layer throughout the fabrication process. They were able to stack multiple layers that are precisely aligned and, because they could polymerize a layer at a time, one right after the other, the complex structure was built in under 30 minutes.
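To picture the workflow described in that paragraph, here is a minimal sketch of a layer-by-layer photopolymerization loop. It is an illustration under my own assumptions, not the team's actual fabrication code; the `Layer`, `ZStage`, and `Projector` names and all numeric values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Layer:
    mask: str            # name of the photomask pattern for this layer (placeholder)
    thickness_um: float  # layer thickness in micrometres
    exposure_s: float    # light-exposure time in seconds

class ZStage:
    """Stand-in for the z-axis stepper; real hardware drivers would go here."""
    def move_to(self, z_um: float) -> None:
        print(f"  stage -> z = {z_um:.0f} um")

class Projector:
    """Stand-in for the light-patterning optics."""
    def expose(self, mask: str, seconds: float) -> None:
        print(f"  exposing mask '{mask}' for {seconds:.1f} s")

def fabricate(layers: list[Layer], stage: ZStage, projector: Projector) -> None:
    """Sketch of a layer-by-layer build: step the stage up by one layer thickness,
    then pattern and cure the hydrogel before moving on to the next layer."""
    z_um = 0.0
    for i, layer in enumerate(layers, start=1):
        z_um += layer.thickness_um
        stage.move_to(z_um)
        projector.expose(layer.mask, layer.exposure_s)
        print(f"  layer {i} done")

# Illustrative three-layer build (all values invented)
fabricate(
    [Layer("gear_base", 100, 8.0), Layer("gear_teeth", 100, 8.0), Layer("lid", 50, 5.0)],
    ZStage(),
    Projector(),
)
```

The point of the sketch is simply that controlling the z-position layer by layer is what lets aligned, three-dimensional structures emerge from a sequence of flat exposures.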

Sia’s iMEMS technique addresses several fundamental considerations in building biocompatible microdevices, micromachines, and microrobots: how to power small robotic devices without using toxic batteries, how to make small biocompatible moveable components that are not silicon which has limited biocompatibility, and how to communicate wirelessly once implanted (radio frequency microelectronics require power, are relatively large, and are not biocompatible). The researchers were able to trigger the iMEMS device to release additional payloads over days to weeks after implantation. They were also able to achieve precise actuation by using magnetic forces to induce gear movements that, in turn, bend structural beams made of hydrogels with highly tunable properties. (Magnetic iron particles are commonly used and FDA-approved for human use as contrast agents.)

In collaboration with Francis Lee, an orthopedic surgeon at Columbia University Medical Center at the time of the study, the team tested the drug delivery system on mice with bone cancer. The iMEMS system delivered chemotherapy adjacent to the cancer, and limited tumor growth while showing less toxicity than chemotherapy administered throughout the body.

“These microscale components can be used for microelectromechanical systems, for larger devices ranging from drug delivery to catheters to cardiac pacemakers, and soft robotics,” notes Sia. “People are already making replacement tissues and now we can make small implantable devices, sensors, or robots that we can talk to wirelessly. Our iMEMS system could bring the field a step closer in developing soft miniaturized robots that can safely interact with humans and other living systems.”

Here’s a link to and a citation for the paper,

Additive manufacturing of hydrogel-based materials for next-generation implantable medical devices by Sau Yin Chin, Yukkee Cheung Poh, Anne-Céline Kohler, Jocelyn T. Compton, Lauren L. Hsu, Kathryn M. Lau, Sohyun Kim, Benjamin W. Lee, Francis Y. Lee, and Samuel K. Sia. Science Robotics 04 Jan 2017: Vol. 2, Issue 2, DOI: 10.1126/scirobotics.aah6451

This paper appears to be open access.

The researchers have provided a video demonstrating their work (you may want to read the caption below before watching),

Magnetic actuation of the Geneva drive device. A magnet is placed about 1 cm below the device, without making contact with it. The rotating magnet results in the rotational movement of the smaller driving gear. With each full rotation of this driving gear, the larger driven gear is engaged and rotates by 60°, exposing the next reservoir to the aperture on the top layer of the device.

—Video courtesy of Sau Yin Chin/Columbia Engineering

You can hear some background conversation in the video, but it doesn't appear to have been included for informational purposes.
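The arithmetic behind that Geneva drive is straightforward: a 60° step per full rotation of the driving gear means six stations on the driven gear, so the reservoir sitting under the aperture after a given number of rotations is just the rotation count modulo six. A toy calculation (my own, not from the paper):

```python
STEP_DEG = 60                 # driven gear advances 60 degrees per driving-gear rotation
RESERVOIRS = 360 // STEP_DEG  # -> 6 stations around the driven gear

def exposed_reservoir(rotations: int) -> int:
    """Index (0-5) of the reservoir under the aperture after N full driving-gear rotations."""
    return rotations % RESERVOIRS

for n in range(0, 13, 3):
    print(f"after {n:2d} rotations: reservoir {exposed_reservoir(n)}")
```

This intermittent, one-station-at-a-time motion is exactly what makes a Geneva mechanism attractive for metering out discrete drug payloads.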

Montreal Neuro creates a new paradigm for technology transfer?

It's one heck of a Christmas present. Canadian businessman Larry Tanenbaum and his wife Judy have given the Montreal Neurological Institute (Montreal Neuro), which is affiliated with McGill University, a $20M donation. From a Dec. 16, 2016 McGill University news release,

The Prime Minister of Canada, Justin Trudeau, was present today at the Montreal Neurological Institute and Hospital (MNI) for the announcement of an important donation of $20 million by the Larry and Judy Tanenbaum family. This transformative gift will help to establish the Tanenbaum Open Science Institute, a bold initiative that will facilitate the sharing of neuroscience findings worldwide to accelerate the discovery of leading edge therapeutics to treat patients suffering from neurological diseases.

‟Today, we take an important step forward in opening up new horizons in neuroscience research and discovery,” said Mr. Larry Tanenbaum. ‟Our digital world provides for unprecedented opportunities to leverage advances in technology to the benefit of science.  That is what we are celebrating here today: the transformation of research, the removal of barriers, the breaking of silos and, most of all, the courage of researchers to put patients and progress ahead of all other considerations.”

Neuroscience has reached a new frontier, and advances in technology now allow scientists to better understand the brain and all its complexities in ways that were previously deemed impossible. The sharing of research findings amongst scientists is critical, not only due to the sheer scale of data involved, but also because diseases of the brain and the nervous system are amongst the most compelling unmet medical needs of our time.

Neurological diseases, mental illnesses, addictions, and brain and spinal cord injuries directly impact 1 in 3 Canadians, representing approximately 11 million people across the country.

“As internationally-recognized leaders in the field of brain research, we are uniquely placed to deliver on this ambitious initiative and reinforce our reputation as an institution that drives innovation, discovery and advanced patient care,” said Dr. Guy Rouleau, Director of the Montreal Neurological Institute and Hospital and Chair of McGill University’s Department of Neurology and Neurosurgery. “Part of the Tanenbaum family’s donation will be used to incentivize other Canadian researchers and institutions to adopt an Open Science model, thus strengthening the network of like-minded institutes working in this field.”

What they don't mention in the news release is that they will not be pursuing any patents on their work (for five years, according to one of the people in the video, but I can't find text to substantiate that time limit*; there are no time limits noted elsewhere). For this detail and others, you have to listen to the video they've created,

The CBC (Canadian Broadcasting Corporation) news online Dec. 16, 2016 posting (with files from Sarah Leavitt and Justin Hayward) adds a few personal details about Tanenbaum,

“Our goal is simple: to accelerate brain research and discovery to relieve suffering,” said Tanenbaum.

Tanenbaum, a Canadian businessman and chairman of Maple Leaf Sports and Entertainment, said many of his loved ones suffered from neurological disorders.

“I lost my mother to Alzheimer’s, my father to a stroke, three dear friends to brain cancer, and a brilliant friend and scientist to clinical depression,” said Tanenbaum.

He hopes the institute will serve as the template for science research across the world, a thought that Trudeau echoed.

“This vision around open science, recognizing the role that Canada can and should play, the leadership that Canadians can have in this initiative is truly, truly exciting,” said Trudeau.

The Neurological Institute says the pharmaceutical industry is supportive of the open science concept because it will provide crucial base research that can later be used to develop drugs to fight an array of neurological conditions.

Jack Stilgoe in a Dec. 16, 2016 posting on the Guardian blogs explains what this donation could mean (Note: Links have been removed),

With the help of Tanenbaum’s gift of 20 million Canadian dollars (£12million) the ‘Neuro’, the Montreal Neurological Institute and Hospital, is setting up an experiment in experimentation, an Open Science Initiative with the express purpose of finding out the best way to realise the potential of scientific research.

Governments in science-rich countries are increasingly concerned that they do not appear to be reaping the economic returns they feel they deserve from investments in scientific research. Their favoured response has been to try to bridge what they see as a 'valley of death' between basic scientific research and industrial applications. This has meant more funding for 'translational research' and the flowering of technology transfer offices within universities.

… There are some success stories, particularly in the life sciences. Patents from the work of Richard Axel at Columbia University at one point brought the university almost $100 million per year. The University of Florida received more than $150 million for inventing Gatorade in the 1960s. The stakes are high in the current battle between Berkeley and MIT/Harvard over who owns the rights to the CRISPR/Cas9 system that has revolutionised genetic engineering and could be worth billions.

Policymakers imagine a world in which universities pay for themselves just as a pharmaceutical research lab does. However, for critics of technology transfer, such stories blind us to the reality of universities' entrepreneurial abilities.

For most universities, evidence of their money-making prowess is, to put it charitably, mixed. A recent Bloomberg report shows how quickly university patent incomes plunge once we look beyond the megastars. In 2014, just 15 US universities earned 70% of all patent royalties. British science policy researchers Paul Nightingale and Alex Coad conclude that ‘Roughly 9/10 US universities lose money on their technology transfer offices… MIT makes more money from selling T-shirts than it does from licensing’. A report from the Brookings institute concluded that the model of technology transfer ‘is unprofitable for most universities and sometimes even risks alienating the private sector’. In the UK, the situation is even worse. Businesses who have dealings with universities report that their technology transfer offices are often unrealistic in negotiations. In many cases, academics are, like a small child who refuses to let others play with a brand new football, unable to make the most of their gifts. And areas of science outside the life sciences are harder to patent than medicines, sports drinks and genetic engineering techniques. Trying too hard to force science towards the market may be, to use the phrase of science policy professor Keith Pavitt, like pushing a piece of string.

Science policy is slowly waking up to the realisation that the value of science may lie in people and places rather than papers and patents. It’s an idea that the Neuro, with the help of Tanenbaum’s gift, is going to test. By sharing data and giving away intellectual property, the initiative aims to attract new private partners to the institute and build Montreal as a hub for knowledge and innovation. The hypothesis is that this will be more lucrative than hoarding patents.

This experiment is not wishful thinking. It will be scientifically measured. It is the job of Richard Gold, a McGill University law professor, to see whether it works. He told me that his first task is 'to figure out what counts… There's going to be a gap between what we would like to measure and what we can measure'. However, he sees an open-mindedness among his colleagues that is unusual. Some are evangelists for open science; some are sceptics. But they share a curiosity about new approaches and a recognition of a problem in neuroscience: 'We haven't come up with a new drug for Parkinson's in 30 years. We don't even understand the biological basis for many of these diseases. So whatever we're doing at the moment doesn't work'. …

Montreal Neuro made news on the ‘open science’ front in January 2016 when it formally announced its research would be freely available and that researchers would not be pursuing patents (see my January 22, 2016 posting).

I recommend reading Stilgoe's posting in its entirety and, for those who don't know or have forgotten, Prime Minister Trudeau's family has some experience with mental illness. His mother has been very open about her travails. This makes his presence at the announcement perhaps a bit more meaningful than the usual political presence at a major funding announcement.

*The five-year time limit is confirmed in a Feb. 17, 2017 McGill University news release (on EurekAlert) about their presentations at the AAAS (American Association for the Advancement of Science) 2017 annual meeting,

Jumpstarting Neurological Research through Open Science – MNI & McGill University

Friday, February 17, 2017, 1:30-2:30 PM/ Room 208

Neurological research is advancing too slowly according to Dr. Guy Rouleau, director of the Montreal Neurological Institute (MNI) of McGill University. To speed up discovery, MNI has become the first ever Open Science academic institution in the world. In a five-year experiment, MNI is opening its books and making itself transparent to an international group of social scientists, policymakers, industrial partners, and members of civil society. They hope, by doing so, to accelerate research and the discovery of new treatments for patients with neurological diseases, and to encourage other leading institutions around the world to consider a similar model. A team led by McGill Faculty of Law’s Professor Richard Gold will monitor and evaluate how well the MNI Open Science experiment works and provide the scientific and policy worlds with insight into 21st century university-industry partnerships. At this workshop, Rouleau and Gold will discuss the benefits and challenges of this open-science initiative.