Tag Archives: Columbia University

A computer simulation inside a computer simulation?

Stumbling across an entry from the National Film Board of Canada for the Venice VR (virtual reality) Expanded section at the 77th Venice International Film Festival (September 2 to 12, 2020) and a recent Scientific American article on computer simulations provoked a memory from Frank Herbert’s 1965 novel, Dune. From an Oct. 3, 2007 posting on Equivocality: A journal of self-discovery, healing, growth, and growing pains,

Knowing where the trap is — that’s the first step in evading it. This is like single combat, Son, only on a larger scale — a feint within a feint within a feint [emphasis mine]…seemingly without end. The task is to unravel it.

—Duke Leto Atreides, Dune [Note: Dune is a 1965 science-fiction novel by US author Frank Herbert]

Now, onto what provoked memory of that phrase.

The first computer simulation: “Agence”

Here’s a description of “Agence” and its creators from an August 11, 2020 Canada National Film Board (NFB) news release,

Two-time Emmy Award-winning storytelling pioneer Pietro Gagliano’s new work Agence (Transitional Forms/National Film Board of Canada) is an industry-first dynamic film that integrates cinematic storytelling, artificial intelligence, and user interactivity to create a different experience each time.

Agence is premiering in official competition in the Venice VR Expanded section at the 77th Venice International Film Festival (September 2 to 12), and accessible worldwide via the online Venice VR Expanded platform.

About the experience

Would you play god to intelligent life? Agence places the fate of artificially intelligent creatures in your hands. In their simulated universe, you have the power to observe, and to interfere. Maintain the balance of their peaceful existence or throw them into a state of chaos as you move from planet to planet. Watch closely and you’ll see them react to each other and their emerging world.

About the creators

Created by Pietro Gagliano, Agence is a co-production between his studio lab Transitional Forms and the NFB. Pietro is a pioneer of new forms of media that allow humans to understand what it means to be machine, and machines what it means to be human. Previously, Pietro co-founded digital studio Secret Location, and with his team, made history in 2015 by winning the first ever Emmy Award for a virtual reality project. His work has been recognized through hundreds of awards and nominations, including two Emmy Awards, 11 Canadian Screen Awards, 31 FWAs, two Webby Awards, a Peabody-Facebook Award, and a Cannes Lion.

Agence is produced by Casey Blustein (Transitional Forms) and David Oppenheim (NFB) and executive produced by Pietro Gagliano (Transitional Forms) and Anita Lee (NFB). 

About Transitional Forms

Transitional Forms is a studio lab focused on evolving entertainment formats through the use of artificial intelligence. Through their innovative approach to content and tool creation, their interdisciplinary team transforms valuable research into dynamic, culturally relevant experiences across a myriad of emerging platforms. Dedicated to the intersection of technology and art, Transitional Forms strives to make humans more creative, and machines more human.

About the NFB

David Oppenheim and Anita Lee’s recent VR credits also include the acclaimed virtual reality/live performance piece Draw Me Close and The Book of Distance, which premiered at the Sundance Film Festival and is in the “Best of VR” section at Venice this year. Canada’s public producer of award-winning creative documentaries, auteur animation, interactive stories and participatory experiences, the NFB has won over 7,000 awards, including 21 Webbys and 12 Academy Awards.

The line that caught my eye? “Would you play god to intelligent life?” For the curious, here’s the film’s trailer,

Now for the second computer simulation (the feint within the feint).

Are we living in a computer simulation?

According to some thinkers in the field, the chances are about 50/50 that we are living in a computer simulation, which makes “Agence” a particularly piquant experience.

An October 13, 2020 article, ‘Do We Live in a Simulation? Chances Are about 50–50,’ by Anil Ananthaswamy for Scientific American poses the question and arrives at an unexpectedly uncertain answer (Note: Links have been removed),

It is not often that a comedian gives an astrophysicist goose bumps when discussing the laws of physics. But comic Chuck Nice managed to do just that in a recent episode of the podcast StarTalk. The show’s host Neil deGrasse Tyson had just explained the simulation argument—the idea that we could be virtual beings living in a computer simulation. If so, the simulation would most likely create perceptions of reality on demand rather than simulate all of reality all the time—much like a video game optimized to render only the parts of a scene visible to a player. “Maybe that’s why we can’t travel faster than the speed of light, because if we could, we’d be able to get to another galaxy,” said Nice, the show’s co-host, prompting Tyson to gleefully interrupt. “Before they can program it,” the astrophysicist said, delighting at the thought. “So the programmer put in that limit.”

Such conversations may seem flippant. But ever since Nick Bostrom of the University of Oxford wrote a seminal paper about the simulation argument in 2003, philosophers, physicists, technologists and, yes, comedians have been grappling with the idea of our reality being a simulacrum. Some have tried to identify ways in which we can discern if we are simulated beings. Others have attempted to calculate the chance of us being virtual entities. Now a new analysis shows that the odds that we are living in base reality—meaning an existence that is not simulated—are pretty much even. But the study also demonstrates that if humans were to ever develop the ability to simulate conscious beings, the chances would overwhelmingly tilt in favor of us, too, being virtual denizens inside someone else’s computer. (A caveat to that conclusion is that there is little agreement about what the term “consciousness” means, let alone how one might go about simulating it.)

In 2003 Bostrom imagined a technologically adept civilization that possesses immense computing power and needs a fraction of that power to simulate new realities with conscious beings in them. Given this scenario, his simulation argument showed that at least one proposition in the following trilemma must be true: First, humans almost always go extinct before reaching the simulation-savvy stage. Second, even if humans make it to that stage, they are unlikely to be interested in simulating their own ancestral past. And third, the probability that we are living in a simulation is close to one.

Before Bostrom, the movie The Matrix had already done its part to popularize the notion of simulated realities. And the idea has deep roots in Western and Eastern philosophical traditions, from Plato’s cave allegory to Zhuang Zhou’s butterfly dream. More recently, Elon Musk gave further fuel to the concept that our reality is a simulation: “The odds that we are in base reality is one in billions,” he said at a 2016 conference.

For him [astronomer David Kipping of Columbia University], there is a more obvious answer: Occam’s razor, which says that in the absence of other evidence, the simplest explanation is more likely to be correct. The simulation hypothesis is elaborate, presuming realities nested upon realities, as well as simulated entities that can never tell that they are inside a simulation. “Because it is such an overly complicated, elaborate model in the first place, by Occam’s razor, it really should be disfavored, compared to the simple natural explanation,” Kipping says.

Maybe we are living in base reality after all—The Matrix, Musk and weird quantum physics notwithstanding.

It’s all a little mind-boggling (a computer simulation creating and playing with a computer simulation?) and I’m not sure how far I want to go in thinking through the implications (the feint within the feint within the feint). Still, it seems the idea could be useful as a kind of thought experiment designed to have us rethink our importance in the world. Or maybe, as a way to have a laugh at our own absurdity.

Technical University of Munich: embedded ethics approach for AI (artificial intelligence) and storing a tv series in synthetic DNA

I stumbled across two news bits of interest from the Technical University of Munich in one day (Sept. 1, 2020, I think). The topics: artificial intelligence (AI) and synthetic DNA (deoxyribonucleic acid).

Embedded ethics and artificial intelligence (AI)

An August 27, 2020 Technical University of Munich (TUM) press release (also on EurekAlert but published Sept. 1, 2020) features information about a proposal to embed ethicists in with AI development teams,

The increasing use of AI (artificial intelligence) in the development of new medical technologies demands greater attention to ethical aspects. An interdisciplinary team at the Technical University of Munich (TUM) advocates the integration of ethics from the very beginning of the development process of new technologies. Alena Buyx, Professor of Ethics in Medicine and Health Technologies, explains the embedded ethics approach.

Professor Buyx, the discussions surrounding a greater emphasis on ethics in AI research have greatly intensified in recent years, to the point where one might speak of “ethics hype” …

Prof. Buyx: … and many committees in Germany and around the world such as the German Ethics Council or the EU Commission High-Level Expert Group on Artificial Intelligence have responded. They are all in agreement: We need more ethics in the development of AI-based health technologies. But how do things look in practice for engineers and designers? Concrete solutions are still few and far between. In a joint pilot project with two Integrative Research Centers at TUM, the Munich School of Robotics and Machine Intelligence (MSRM) with its director, Prof. Sami Haddadin, and the Munich Center for Technology in Society (MCTS), with Prof. Ruth Müller, we want to try out the embedded ethics approach. We published the proposal in Nature Machine Intelligence at the end of July [2020].

What exactly is meant by the “embedded ethics approach”?

Prof. Buyx: The idea is to make ethics an integral part of the research process by integrating ethicists into the AI development team from day one. For example, they attend team meetings on a regular basis and create a sort of “ethical awareness” for certain issues. They also raise and analyze specific ethical and social issues.

Is there an example of this concept in practice?

Prof. Buyx: The Geriatronics Research Center, a flagship project of the MSRM in Garmisch-Partenkirchen, is developing robot assistants to enable people to live independently in old age. The center’s initiatives will include the construction of model apartments designed to try out residential concepts where seniors share their living space with robots. At a joint meeting with the participating engineers, it was noted that the idea of using an open concept layout everywhere in the units – with few doors or individual rooms – would give the robots considerable range of motion. With the seniors, however, this living concept could prove upsetting because they are used to having private spaces. At the outset, the engineers had not given explicit consideration to this aspect.

The approach sounds promising. But how can we prevent “embedded ethics” from turning into an “ethics washing” exercise, offering companies a comforting sense of “being on the safe side” when developing new AI technologies?

Prof. Buyx: That’s not something we can be certain of avoiding. The key is mutual openness and a willingness to listen, with the goal of finding a common language – and subsequently being prepared to effectively implement the ethical aspects. At TUM we are ideally positioned to achieve this. Prof. Sami Haddadin, the director of the MSRM, is also a member of the EU High-Level Group of Artificial Intelligence. In his research, he is guided by the concept of human centered engineering. Consequently, he has supported the idea of embedded ethics from the very beginning. But one thing is certain: Embedded ethics alone will not suddenly make AI “turn ethical”. Ultimately, that will require laws, codes of conduct and possibly state incentives.

Here’s a link to and a citation for the paper espousing the embedded ethics for AI development approach,

An embedded ethics approach for AI development by Stuart McLennan, Amelia Fiske, Leo Anthony Celi, Ruth Müller, Jan Harder, Konstantin Ritt, Sami Haddadin & Alena Buyx. Nature Machine Intelligence (2020) DOI: https://doi.org/10.1038/s42256-020-0214-1 Published 31 July 2020

This paper is behind a paywall.

Religion, ethics, and AI

For some reason embedded ethics and AI got me to thinking about Pope Francis and other religious leaders.

The Roman Catholic Church and AI

There was a recent announcement that the Roman Catholic Church will be working with Microsoft and IBM on AI and ethics, from a February 28, 2020 article by Jen Copestake for British Broadcasting Corporation (BBC) news online (Note: A link has been removed),

Leaders from the two tech giants met senior church officials in Rome, and agreed to collaborate on “human-centred” ways of designing AI.

Microsoft president Brad Smith admitted some people may “think of us as strange bedfellows” at the signing event.

“But I think the world needs people from different places to come together,” he said.

The call was supported by Pope Francis, in his first detailed remarks about the impact of artificial intelligence on humanity.

The Rome Call for Ethics [sic] was co-signed by Mr Smith, IBM executive vice-president John Kelly and president of the Pontifical Academy for Life Archbishop Vincenzo Paglia.

It puts humans at the centre of new technologies, asking for AI to be designed with a focus on the good of the environment and “our common and shared home and of its human inhabitants”.

Framing the current era as a “renAIssance”, the speakers said the invention of artificial intelligence would be as significant to human development as the invention of the printing press or combustion engine.

UN Food and Agricultural Organization director Qu Dongyu and Italy’s technology minister Paola Pisano were also co-signatories.

Hannah Brockhaus’s February 28, 2020 article for the Catholic News Agency provides some details missing from the BBC report and I found it quite helpful when trying to understand the various pieces that make up this initiative,

The Pontifical Academy for Life signed Friday [February 28, 2020], alongside presidents of IBM and Microsoft, a call for ethical and responsible use of artificial intelligence technologies.

According to the document, “the sponsors of the call express their desire to work together, in this context and at a national and international level, to promote ‘algor-ethics.’”

“Algor-ethics,” according to the text, is the ethical use of artificial intelligence according to the principles of transparency, inclusion, responsibility, impartiality, reliability, security, and privacy.

The signing of the “Rome Call for AI Ethics [PDF]” took place as part of the 2020 assembly of the Pontifical Academy for Life, which was held Feb. 26-28 [2020] on the theme of artificial intelligence.

One part of the assembly was dedicated to private meetings of the academics of the Pontifical Academy for Life. The second was a workshop on AI and ethics that drew 356 participants from 41 countries.

On the morning of Feb. 28 [2020], a public event took place called “renAIssance. For a Humanistic Artificial Intelligence” and included the signing of the AI document by Microsoft President Brad Smith, and IBM Executive Vice-president John Kelly III.

The Director General of FAO, Dongyu Qu, and politician Paola Pisano, representing the Italian government, also signed.

The president of the European Parliament, David Sassoli, was also present Feb. 28.

Pope Francis canceled his scheduled appearance at the event due to feeling unwell. His prepared remarks were read by Archbishop Vincenzo Paglia, president of the Academy for Life.

You can find Pope Francis’s comments about the document here (if you’re not comfortable reading Italian, hopefully, the English translation which follows directly afterward will be helpful). The Pope’s AI initiative has a dedicated website, Rome Call for AI ethics, and while most of the material dates from the February 2020 announcement, they are keeping up a blog. It has two entries, one dated in May 2020 and another in September 2020.

Buddhism and AI

The Dalai Lama is well known for having an interest in science and having hosted scientists for various dialogues. So, I was able to track down a November 10, 2016 article by Ariel Conn for the futureoflife.org website, which features his insights on the matter,

The question of what it means and what it takes to feel needed is an important problem for ethicists and philosophers, but it may be just as important for AI researchers to consider. The Dalai Lama argues that lack of meaning and purpose in one’s work increases frustration and dissatisfaction among even those who are gainfully employed.

“The problem,” says the Dalai Lama, “is … the growing number of people who feel they are no longer useful, no longer needed, no longer one with their societies. … Feeling superfluous is a blow to the human spirit. It leads to social isolation and emotional pain, and creates the conditions for negative emotions to take root.”

If feeling needed and feeling useful are necessary for happiness, then AI researchers may face a conundrum. Many researchers hope that job loss due to artificial intelligence and automation could, in the end, provide people with more leisure time to pursue enjoyable activities. But if the key to happiness is feeling useful and needed, then a society without work could be just as emotionally challenging as today’s career-based societies, and possibly worse.

I also found a talk on the topic by The Venerable Tenzin Priyadarshi, first here’s a description from his bio at the Dalai Lama Center for Ethics and Transformative Values webspace on the Massachusetts Institute of Technology (MIT) website,

… an innovative thinker, philosopher, educator and a polymath monk. He is Director of the Ethics Initiative at the MIT Media Lab and President & CEO of The Dalai Lama Center for Ethics and Transformative Values at the Massachusetts Institute of Technology. Venerable Tenzin’s unusual background encompasses entering a Buddhist monastery at the age of ten and receiving graduate education at Harvard University with degrees ranging from Philosophy to Physics to International Relations. He is a Tribeca Disruptive Fellow and a Fellow at the Center for Advanced Study in Behavioral Sciences at Stanford University. Venerable Tenzin serves on the boards of a number of academic, humanitarian, and religious organizations. He is the recipient of several recognitions and awards and received Harvard’s Distinguished Alumni Honors for his visionary contributions to humanity.

He gave the 2018 Roger W. Heyns Lecture in Religion and Society at Stanford University on the topic, “Religious and Ethical Dimensions of Artificial Intelligence.” The video runs over one hour but he is a sprightly speaker (in comparison to other Buddhist speakers I’ve listened to over the years).

Judaism, Islam, and other Abrahamic faiths examine AI and ethics

I was delighted to find this January 30, 2020 Artificial Intelligence: Implications for Ethics and Religion event as it brought together a range of thinkers from various faiths and disciplines,

New technologies are transforming our world every day, and the pace of change is only accelerating.  In coming years, human beings will create machines capable of out-thinking us and potentially taking on such uniquely-human traits as empathy, ethical reasoning, perhaps even consciousness.  This will have profound implications for virtually every human activity, as well as the meaning we impart to life and creation themselves.  This conference will provide an introduction for non-specialists to Artificial Intelligence (AI):

What is it?  What can it do and be used for?  And what will be its implications for choice and free will; economics and worklife; surveillance economies and surveillance states; the changing nature of facts and truth; and the comparative intelligence and capabilities of humans and machines in the future? 

Leading practitioners, ethicists and theologians will provide cross-disciplinary and cross-denominational perspectives on such challenges as technology addiction, inherent biases and resulting inequalities, the ethics of creating destructive technologies and of turning decision-making over to machines from self-driving cars to “autonomous weapons” systems in warfare, and how we should treat the suffering of “feeling” machines.  The conference ultimately will address how we think about our place in the universe and what this means for both religious thought and theological institutions themselves.

UTS [Union Theological Seminary] is the oldest independent seminary in the United States and has long been known as a bastion of progressive Christian scholarship.  JTS [Jewish Theological Seminary] is one of the academic and spiritual centers of Conservative Judaism and a major center for academic scholarship in Jewish studies. The Riverside Church is an interdenominational, interracial, international, open, welcoming, and affirming church and congregation that has served as a focal point of global and national activism for peace and social justice since its inception and continues to serve God through word and public witness. The annual Greater Good Gathering, the following week at Columbia University’s School of International & Public Affairs, focuses on how technology is changing society, politics and the economy – part of a growing nationwide effort to advance conversations promoting the “greater good.”

They have embedded a video of the event (it runs a little over seven hours) on the January 30, 2020 Artificial Intelligence: Implications for Ethics and Religion event page. For anyone who finds that a daunting amount of information, you may want to check out the speaker list for ideas about who might be writing and thinking on this topic.

As for Islam, I did track down this November 29, 2018 article by Shahino Mah Abdullah, a fellow at the Institute of Advanced Islamic Studies (IAIS) Malaysia,

As the global community continues to work together on the ethics of AI, there are still vast opportunities to offer ethical inputs, including the ethical principles based on Islamic teachings.

This is in line with Islam’s encouragement for its believers to convey beneficial messages, including to share its ethical principles with society.

In Islam, ethics, or akhlak (virtuous character traits) in Arabic, is sometimes employed interchangeably with adab, which means the manner, attitude, behaviour, and etiquette of putting things in their proper places. Islamic ethics cover all the legal concepts ranging from syariah (Islamic law), fiqh (jurisprudence), qanun (ordinance), and ‘urf (customary practices).

Adopting and applying moral values based on the Islamic ethical concept or applied Islamic ethics could be a way to address various issues in today’s societies.

At the same time, this approach is in line with the higher objectives of syariah (maqasid al-syariah) that is aimed at conserving human benefit by the protection of human values, including faith (hifz al-din), life (hifz al-nafs), lineage (hifz al-nasl), intellect (hifz al-‘aql), and property (hifz al-mal). This approach could be very helpful to address contemporary issues, including those related to the rise of AI and intelligent robots.

…

Part of the difficulty with tracking down more about AI, ethics, and various religions is linguistic. I simply don’t have the language skills to search for the commentaries and, even in English, I may not have the best or most appropriate search terms.

Television (TV) episodes stored on DNA?

According to a Sept. 1, 2020 news item on Nanowerk, the first episode of a tv series, ‘Biohackers’ has been stored on synthetic DNA (deoxyribonucleic acid) by a researcher at TUM and colleagues at another institution,

The first episode of the newly released series “Biohackers” was stored in the form of synthetic DNA. This was made possible by the research of Prof. Reinhard Heckel of the Technical University of Munich (TUM) and his colleague Prof. Robert Grass of ETH Zürich.

They have developed a method that permits the stable storage of large quantities of data on DNA for over 1000 years.

A Sept. 1, 2020 TUM press release, which originated the news item, proceeds with more detail in an interview format,

Prof. Heckel, Biohackers is about a medical student seeking revenge on a professor with a dark past – and the manipulation of DNA with biotechnology tools. You were commissioned to store the series on DNA. How does that work?

First, I should mention that what we’re talking about is artificially generated – in other words, synthetic – DNA. DNA consists of four building blocks: the nucleotides adenine (A), thymine (T), guanine (G) and cytosine (C). Computer data, meanwhile, are coded as zeros and ones. The first episode of Biohackers consists of a sequence of around 600 million zeros and ones. To code the sequence 01 01 11 00 in DNA, for example, we decide which number combinations will correspond to which letters. For example: 00 is A, 01 is C, 10 is G and 11 is T. Our example then produces the DNA sequence CCTA. Using this principle of DNA data storage, we have stored the first episode of the series on DNA.
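The mapping Prof. Heckel describes can be sketched in a few lines of Python. This is only a minimal illustration of the 2-bits-per-nucleotide idea from the interview, not the researchers’ actual pipeline:

```python
# Map each 2-bit pair to a nucleotide, as in the interview's example:
# 00 -> A, 01 -> C, 10 -> G, 11 -> T
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def bits_to_dna(bits: str) -> str:
    """Encode a binary string (even length) as a DNA sequence."""
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_to_bits(dna: str) -> str:
    """Decode a DNA sequence back into the binary string."""
    return "".join(BASE_TO_BITS[base] for base in dna)

# The example from the interview: 01 01 11 00 encodes as CCTA.
assert bits_to_dna("01011100") == "CCTA"
assert dna_to_bits("CCTA") == "01011100"
```

At two bits per nucleotide, the roughly 600 million zeros and ones of the first Biohackers episode would correspond to about 300 million nucleotides.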

And to view the series – is it just a matter of “reverse translation” of the letters?

In a very simplified sense, you can visualize it like that. When writing, storing and reading the DNA, however, errors occur. If these errors are not corrected, the data stored on the DNA will be lost. To solve the problem, I have developed an algorithm based on channel coding. This method involves correcting errors that take place during information transfers. The underlying idea is to add redundancy to the data. Think of language: When we read or hear a word with missing or incorrect letters, the computing power of our brain is still capable of understanding the word. The algorithm follows the same principle: It encodes the data with sufficient redundancy to ensure that even highly inaccurate data can be restored later.

Channel coding is used in many fields, including in telecommunications. What challenges did you face when developing your solution?

The first challenge was to create an algorithm specifically geared to the errors that occur in DNA. The second one was to make the algorithm so efficient that the largest possible quantities of data can be stored on the smallest possible quantity of DNA, so that only the absolutely necessary amount of redundancy is added. We demonstrated that our algorithm is optimized in that sense.

DNA data storage is very expensive because of the complexity of DNA production as well as the reading process. What makes DNA an attractive storage medium despite these challenges?

First, DNA has a very high information density. This permits the storage of enormous data volumes in a minimal space. In the case of the TV series, we stored “only” 100 megabytes on a picogram – or a billionth of a gram of DNA. Theoretically, however, it would be possible to store up to 200 exabytes on one gram of DNA. And DNA lasts a long time. By comparison: If you never turned on your PC or wrote data to the hard disk it contains, the data would disappear after a couple of years. By contrast, DNA can remain stable for many thousands of years if it is packed right.
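The density figures in the quote can be checked with a quick back-of-the-envelope calculation, taking the 100 megabytes per picogram at face value:

```python
# 100 megabytes stored on one picogram of DNA
stored_bytes = 100e6          # 100 MB
mass_grams = 1e-12            # one picogram

bytes_per_gram = stored_bytes / mass_grams    # ~1e20 bytes per gram
exabytes_per_gram = bytes_per_gram / 1e18     # roughly 100 EB per gram

print(round(exabytes_per_gram), "exabytes per gram")
```

So the achieved density is on the order of 100 exabytes per gram, the same order of magnitude as the 200-exabyte theoretical limit cited in the interview.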

And the method you have developed also makes the DNA strands durable – practically indestructible.

My colleague Robert Grass was the first to develop a process for the “stable packing” of DNA strands by encapsulating them in nanometer-scale spheres made of silica glass. This ensures that the DNA is protected against mechanical influences. In a joint paper in 2015, we presented the first robust DNA data storage concept with our algorithm and the encapsulation process developed by Prof. Grass. Since then we have continuously improved our method. In our most recent publication in Nature Protocols of January 2020, we passed on what we have learned.

What are your next steps? Does data storage on DNA have a future?

We’re working on a way to make DNA data storage cheaper and faster. “Biohackers” was a milestone en route to commercialization. But we still have a long way to go. If this technology proves successful, big things will be possible. Entire libraries, all movies, photos, music and knowledge of every kind – provided it can be represented in the form of data – could be stored on DNA and would thus be available to humanity for eternity.

Here’s a link to and a citation for the paper,

Reading and writing digital data in DNA by Linda C. Meiser, Philipp L. Antkowiak, Julian Koch, Weida D. Chen, A. Xavier Kohll, Wendelin J. Stark, Reinhard Heckel & Robert N. Grass. Nature Protocols volume 15, pages 86–101 (2020). Issue Date: January 2020. DOI: https://doi.org/10.1038/s41596-019-0244-5 Published online 29 November 2019

This paper is behind a paywall.

As for ‘Biohackers’, it’s a German science fiction television series and you can find out more about it here on the Internet Movie Database.

Bringing a technique from astronomy down to the nanoscale

A January 2, 2020 Columbia University news release on EurekAlert (also on phys.org but published Jan. 3, 2020) describes research that takes the inter-galactic down to the quantum level,

Researchers at Columbia University and University of California, San Diego, have introduced a novel “multi-messenger” approach to quantum physics that signifies a technological leap in how scientists can explore quantum materials.

The findings appear in a recent article published in Nature Materials, led by A. S. McLeod, postdoctoral researcher, Columbia Nano Initiative, with co-authors Dmitri Basov and A. J. Millis at Columbia and R.A. Averitt at UC San Diego.

“We have brought a technique from the inter-galactic scale down to the realm of the ultra-small,” said Basov, Higgins Professor of Physics and Director of the Energy Frontier Research Center at Columbia. “Equipped with multi-modal nanoscience tools we can now routinely go places no one thought would be possible as recently as five years ago.”

The work was inspired by “multi-messenger” astrophysics, which emerged during the last decade as a revolutionary technique for the study of distant phenomena like black hole mergers. Simultaneous measurements from instruments, including infrared, optical, X-ray and gravitational-wave telescopes can, taken together, deliver a physical picture greater than the sum of their individual parts.

The search is on for new materials that can supplement the current reliance on electronic semiconductors. Control over material properties using light can offer improved functionality, speed, flexibility and energy efficiency for next-generation computing platforms.

Experimental papers on quantum materials have typically reported results obtained by using only one type of spectroscopy. The researchers have shown the power of using a combination of measurement techniques to simultaneously examine electrical and optical properties.

The researchers performed their experiment by focusing laser light onto the sharp tip of a needle probe coated with magnetic material. When thin films of metal oxide are subject to a unique strain, ultra-fast light pulses can trigger the material to switch into an unexplored phase of nanometer-scale domains, and the change is reversible.

By scanning the probe over the surface of their thin film sample, the researchers were able to trigger the change locally and simultaneously manipulate and record the electrical, magnetic and optical properties of these light-triggered domains with nanometer-scale precision.
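For readers who like a concrete picture, here's a rough sketch (mine, not the researchers' instrument code) of what a multi-modal raster scan looks like in principle: at each probe position several channels are recorded at once, and the combined stack can reveal correlations no single channel would show. Every function name and number below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def measure(x, y):
    """Hypothetical stand-ins for optical, electrical and magnetic channels."""
    optical = np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.02)
    electrical = 1.0 / (1.0 + 5.0 * optical)        # channel correlated with optical
    magnetic = 0.3 * optical + rng.normal(0, 0.01)  # noisy third channel
    return optical, electrical, magnetic

n = 64  # 64 x 64 scan grid over the sample surface
xs = np.linspace(0, 1, n)
channels = np.zeros((3, n, n))
for i, x in enumerate(xs):
    for j, y in enumerate(xs):
        channels[:, i, j] = measure(x, y)  # all three channels per position

# The combined picture: a per-pixel stack of all three modalities
print(channels.shape)  # (3, 64, 64)
```

The point of the sketch is only the data structure: one scan, several co-registered maps.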

The study reveals how unanticipated properties can emerge in long-studied quantum materials at ultra-small scales when scientists tune them by strain.

“It is relatively common to study these nano-phase materials with scanning probes. But this is the first time an optical nano-probe has been combined with simultaneous magnetic nano-imaging, and all at the very low temperatures where quantum materials show their merits,” McLeod said. “Now, investigation of quantum materials by multi-modal nanoscience offers a means to close the loop on programs to engineer them.”

The excitement is palpable.

Caption: The discovery of multi-messenger nanoprobes allows scientists to simultaneously probe multiple properties of quantum materials at nanometer-scale spatial resolutions. Credit: Ella Maru Studio

Here’s a link to and a citation for the paper,

Multi-messenger nanoprobes of hidden magnetism in a strained manganite by A. S. McLeod, Jingdi Zhang, M. Q. Gu, F. Jin, G. Zhang, K. W. Post, X. G. Zhao, A. J. Millis, W. B. Wu, J. M. Rondinelli, R. D. Averitt & D. N. Basov. Nature Materials (2019) doi:10.1038/s41563-019-0533-y Published: 16 December 2019

This paper is behind a paywall.

Soft things for your brain

A March 5, 2018 news item on Nanowerk describes the latest stretchable electrode (Note: A link has been removed),

Klas Tybrandt, principal investigator at the Laboratory of Organic Electronics at Linköping University [Sweden], has developed new technology for long-term stable neural recording. It is based on a novel elastic material composite, which is biocompatible and retains high electrical conductivity even when stretched to double its original length.

The result has been achieved in collaboration with colleagues in Zürich and New York. The breakthrough, which is crucial for many applications in biomedical engineering, is described in an article published in the prestigious scientific journal Advanced Materials (“High-Density Stretchable Electrode Grids for Chronic Neural Recording”).

A March 5, 2018 Linköping University press release, which originated the news item, gives more detail but does not mention that the nanowires are composed of titanium dioxide (you can find additional details in the abstract for the paper; link and citation will be provided later in this posting),

The coupling between electronic components and nerve cells is crucial not only to collect information about cell signalling, but also to diagnose and treat neurological disorders and diseases, such as epilepsy.

It is very challenging to achieve long-term stable connections that do not damage neurons or tissue, since the two systems, the soft and elastic tissue of the body and the hard and rigid electronic components, have completely different mechanical properties.

Caption: The soft electrode stretched to twice its length. Photo credit: Thor Balkhed

“As human tissue is elastic and mobile, damage and inflammation arise at the interface with rigid electronic components. It not only causes damage to tissue; it also attenuates neural signals,” says Klas Tybrandt, leader of the Soft Electronics group at the Laboratory of Organic Electronics, Linköping University, Campus Norrköping.

New conductive material

Klas Tybrandt has developed a new conductive material that is as soft as human tissue and can be stretched to twice its length. The material consists of gold coated titanium dioxide nanowires, embedded into silicone rubber. The material is biocompatible – which means it can be in contact with the body without adverse effects – and its conductivity remains stable over time.

“The microfabrication of soft electrically conductive composites involves several challenges. We have developed a process to manufacture small electrodes that also preserves the biocompatibility of the materials. The process uses very little material, and this means that we can work with a relatively expensive material such as gold, without the cost becoming prohibitive,” says Klas Tybrandt.

The electrodes are 50 µm [microns or micrometres] in size and are located at a distance of 200 µm from each other. The fabrication procedure allows 32 electrodes to be placed onto a very small surface. The final probe, shown in the photograph, has a width of 3.2 mm and a thickness of 80 µm.
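As a quick sanity check on those numbers (a back-of-envelope sketch of my own; the paper specifies the actual electrode layout), you can compute the length a row of electrodes occupies from the size and pitch:

```python
electrode_size_um = 50  # electrode size quoted in the press release
pitch_um = 200          # centre-to-centre spacing
n_electrodes = 32

def span_um(n_sites, pitch=pitch_um, size=electrode_size_um):
    """Length occupied by a row of n_sites electrodes at a given pitch."""
    return (n_sites - 1) * pitch + size

# A hypothetical 2 x 16 arrangement (the real geometry is in the paper):
cols, rows = 2, 16
assert cols * rows == n_electrodes
print(span_um(cols), span_um(rows))  # 250 3050
```

A 16-site row spans about 3.05 mm, which is consistent with the 3.2 mm probe width quoted above.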

The soft microelectrodes have been developed at Linköping University and ETH Zürich, and researchers at New York University and Columbia University have subsequently implanted them in the brain of rats. The researchers were able to collect high-quality neural signals from the freely moving rats for 3 months. The experiments have been subject to ethical review, and have followed the strict regulations that govern animal experiments.

Important future applications

Caption: Klas Tybrandt, researcher at the Laboratory of Organic Electronics. Photo credit: Thor Balkhed

“When the neurons in the brain transmit signals, a voltage is formed that the electrodes detect and transmit onwards through a tiny amplifier. We can also see which electrodes the signals came from, which means that we can estimate the location in the brain where the signals originated. This type of spatiotemporal information is important for future applications. We hope to be able to see, for example, where the signal that causes an epileptic seizure starts, a prerequisite for treating it. Another area of application is brain-machine interfaces, by which future technology and prostheses can be controlled with the aid of neural signals. There are also many interesting applications involving the peripheral nervous system in the body and the way it regulates various organs,” says Klas Tybrandt.
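The localization idea Tybrandt describes can be illustrated with a toy calculation (not the analysis used in the study): given each electrode's known position, a weighted centroid of the signal amplitudes gives a crude estimate of where a signal originated. Positions and amplitudes below are made up.

```python
import numpy as np

# Four electrode positions (mm) and the signal amplitude each one recorded
positions = np.array([[0.0, 0.0], [0.2, 0.0], [0.0, 0.2], [0.2, 0.2]])
amplitudes = np.array([0.1, 0.4, 0.1, 0.4])

def estimate_source(positions, amplitudes):
    """Amplitude-weighted centroid: a crude source-location estimate."""
    weights = amplitudes / amplitudes.sum()
    return weights @ positions

# The estimate is pulled toward the electrodes with the strongest signals
print(estimate_source(positions, amplitudes))
```

Real spatiotemporal localization is far more sophisticated, but the principle (more electrodes, finer spatial estimates) is the same.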

The breakthrough is the foundation of the research area Soft Electronics, currently being established at Linköping University, with Klas Tybrandt as principal investigator.
liu.se/soft-electronics

A video has been made available (Note: For those who find any notion of animal testing disturbing; don’t watch the video even though it is an animation and does not feature live animals),

Here’s a link to and a citation for the paper,

High-Density Stretchable Electrode Grids for Chronic Neural Recording by Klas Tybrandt, Dion Khodagholy, Bernd Dielacher, Flurin Stauffer, Aline F. Renz, György Buzsáki, and János Vörös. Advanced Materials 2018. DOI: 10.1002/adma.201706520 First published: 28 February 2018

This paper is open access.

Narrating neuroscience in Toronto (Canada) on Oct. 20, 2017 and knitting a neuron

What is it with the Canadian neuroscience community? First, there’s The Beautiful Brain, an exhibition of the extraordinary drawings of Santiago Ramón y Cajal (1852–1934) at the Belkin Gallery on the University of British Columbia (UBC) campus in Vancouver, and a series of events marking the exhibition (for more, see my Sept. 11, 2017 posting; scroll down about 30% for information about the drawings and the events still to come).

I guess there must be some money floating around for raising public awareness because now there’s a neuroscience and ‘storytelling’ event (Narrating Neuroscience) in Toronto, Canada. From a Sept. 25, 2017 ArtSci Salon announcement (received via email),

With NARRATING NEUROSCIENCE we plan to initiate a discussion on the role and the use of storytelling and art (both in verbal and visual forms) to communicate abstract and complex concepts in neuroscience to very different audiences, ranging from fellow scientists, clinicians and patients, to social scientists and the general public. We invited four guests to share their research through case studies and experiences stemming directly from their research or from other practices they have adopted and incorporated into their research, where storytelling and the arts have played a crucial role not only in communicating cutting edge research in neuroscience, but also in developing and advancing it.

OUR GUESTS

MATTEO FARINELLA, PhD, Presidential Scholar in Society and Neuroscience – Columbia University

SHELLEY WALL , AOCAD, MSc, PhD – Assistant professor, Biomedical Communications Graduate Program and Department of Biology, UTM

ALFONSO FASANO, MD, PhD, Associate Professor – University of Toronto Clinician Investigator – Krembil Research Institute Movement Disorders Centre – Toronto Western Hospital

TAHANI BAAKDHAH, MD, MSc, PhD candidate – University of Toronto

DATE: October 20, 2017
TIME: 6:00-8:00 pm
LOCATION: The Fields Institute for Research in Mathematical Sciences
222 College Street, Toronto, ON

Events Facilitators: Roberta Buiani and Stephen Morris (ArtSci Salon) and Nina Czegledy (Leonardo Network)

TAHANI BAAKDHAH is a PhD student at the University of Toronto studying how the stem cells built our retina during development, the mechanism by which the light sensing cells inside the eye enable us to see this beautiful world and how we can regenerate these cells in case of disease or injury.

MATTEO FARINELLA combines a background in neuroscience with a lifelong passion for drawing, making comics and illustrations about the brain. He is the author of _Neurocomic_ (Nobrow 2013) published with the support of the Wellcome Trust, _Cervellopoli_ (Editoriale Scienza 2017) and he has collaborated with universities and educational institutions around the world to make science more clear and accessible. In 2016 Matteo joined Columbia University as a Presidential Scholar in Society and Neuroscience, where he investigates the role of visual narratives in science communication. Working with science journalists, educators and cognitive neuroscientists he aims to understand how these tools may affect the public perception of science and increase scientific literacy (cartoonscience.org [2]).

ALFONSO FASANO graduated from the Catholic University of Rome, Italy, in 2002 and became a neurologist in 2007. After a 2-year fellowship at the University of Kiel, Germany, he completed a PhD in neuroscience at the Catholic University of Rome. In 2013 he joined the Movement Disorder Centre at Toronto Western Hospital, where he is the co-director of the surgical program for movement disorders. He is also an associate professor of medicine in the Division of Neurology at the University of Toronto and clinician investigator at the Krembil Research Institute. Dr. Fasano’s main areas of interest are the treatment of movement disorders with advanced technology (infusion pumps and neuromodulation), pathophysiology and treatment of tremor and gait disorders. He is author of more than 170 papers and book chapters. He is principal investigator of several clinical trials.

SHELLEY WALL is an assistant professor in the University of Toronto’s Biomedical Communications graduate program, a certified medical illustrator, and inaugural Illustrator-in-Residence in the Faculty of Medicine, University of Toronto. One of her primary areas of research, teaching, and creation is graphic medicine—the intersection of comics with illness, medicine, and caregiving—and one of her ongoing projects is a series of comics about caregiving and young onset Parkinson’s disease.

You can register for this free Toronto event here.

One brief observation: there aren’t any writers (other than academics) or storytellers included in this ‘storytelling’ event. The ‘storytelling’ being featured is visual. To be blunt, I’m not of the ‘one picture is worth a thousand words’ school of thinking (see my Feb. 22, 2011 posting). Yes, sometimes pictures are all you need, but that tiresome aphorism, which suggests communication can be reduced to a single means, really needs to be retired. As for academic writing, it’s not noted for its storytelling qualities or experimentation. Academics are not judged on their writing or storytelling skills, although there are some who are very good.

Getting back to the Toronto event, they seem to have the visual part of their focus ” … discussion on the role and the use of storytelling and art (both in verbal and visual forms) … ” covered. Having recently attended a somewhat similar event in Vancouver, which was announced in my Sept. 11, 2017 posting, I can say there were some exciting images and ideas presented.

The ArtSci Salon folks also announced this (from the Sept. 25, 2017 ArtSci Salon announcement; received via email),

ATTENTION ARTSCI SALONISTAS AND FANS OF ART AND SCIENCE!!
CALL FOR KNITTING AND CROCHET LOVERS!

In addition to being a PhD student at the University of Toronto, Tahani Baakdhah is a prolific knitter and crocheter and has been the motor behind two successful Knit-a-Neuron Toronto initiatives. We invite all knitters and crocheters among our ArtSci Salonistas to pick a pattern (link below) and knit a neuron (or 2! Or as many as you want!!)

http://bit.ly/2y05hRR

BRING THEM TO OUR OCTOBER 20 ARTSCI SALON!
Come to the ArtSci Salon and knit there!
You can’t come?
Share a picture with @ArtSci_Salon @SciCommTO #KnitANeuronTO [3] on
social media
Or…Drop us a line at artscisalon@gmail.com !

I think it’s been a few years since my last science knitting post. No, it was Oct. 18, 2016. Moving on, I found more neuron knitting while researching this piece. Here’s the Neural Knitworks group, which is part of Australia’s National Science Week (11-19 August 2018) initiative (from the Neural Knitworks webpage),

Neural Knitworks is a collaborative project about mind and brain health.

Whether you’re a whiz with yarn, or just discovering the joy of craft, now you can crochet wrap, knit or knot—and find out about neuroscience.

During 2014 an enormous number of handmade neurons were donated (1665 in total!) and used to build a giant walk-in brain, as seen here at Hazelhurst Gallery [scroll to end of this post]. Since then Neural Knitworks have been held in dozens of communities across Australia, with installations created in Queensland, the ACT, Singapore, as part of the Cambridge Science Festival in the UK and in Philadelphia, USA.

In 2017, the Neural Knitworks team again invites you to host your own home-grown Neural Knitwork for National Science Week*. Together we’ll create a giant ‘virtual’ neural network by linking your displays visually online.

* If you wish to host a Neural Knitwork event outside of National Science Week or internationally we ask that you contact us to seek permission to use the material, particularly if you intend to create derivative works or would like to exhibit the giant brain. Please outline your plans in an email.

Your creation can be big or small, part of a formal display, or simply consist of neighbourhood neuron ‘yarn-bombings’. Knitworks can be created at home, at work or at school. No knitting experience is required and all ages can participate.

See below for how to register your event and download our scientifically informed patterns.

What is a neuron?

Neurons are electrically excitable cells of the brain, spinal cord and peripheral nerves. The billions of neurons in your body connect to each other in neural networks. They receive signals from every sense, control movement, create memories, and form the neural basis of every thought.

Check out the neuron microscopy gallery for some real-world inspiration.
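For the crafty and curious, the “electrically excitable” behaviour described above is often introduced through the classic leaky integrate-and-fire model. Here's a minimal sketch (a textbook toy, with parameter values chosen purely for illustration, not tied to any study mentioned in this post):

```python
def simulate_lif(input_current, dt=1.0, tau=10.0, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: returns the list of spike times."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # membrane potential leaks toward rest while integrating input
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_threshold:        # threshold crossed: the neuron "fires"
            spikes.append(step * dt)
            v = v_reset             # and its potential resets
    return spikes

spikes = simulate_lif([1.5] * 100)  # constant suprathreshold input
print(len(spikes) > 0)              # the neuron fires repeatedly
```

With input below threshold the neuron never fires; above it, the membrane charges, spikes, resets, and repeats.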

What happens at a Neural Knitwork?

Neural Knitworks are based on the principle that yarn craft, with its mental challenges, social connection and mindfulness, helps keep our brains and minds sharp, engaged and healthy.

Have fun as you

  • design your own woolly neurons, or get inspired by our scientifically-informed knitting, crochet or knot patterns;
  • natter with neuroscientists and teach them a few of your crafty tricks;
  • contribute to a travelling textile brain exhibition;
  • increase your attention span and test your memory.

Calm your mind and craft your own brain health as you

  • forge friendships;
  • solve creative and mental challenges;
  • practice mindfulness and relaxation;
  • teach and learn;
  • develop eye-hand coordination and fine motor dexterity.

Interested in hosting a Neural Knitwork?

  1. Log your event on the National Science Week calendar to take advantage of multi-channel promotion.
  2. Share the link^ for this Neural Knitwork page on your own website or online newsletter and add your own event details.
  3. Use this flyer template (2.5 MB .docx) to promote your event in local shop windows and on noticeboards.
  4. Read our event organisers toolbox for tips on hosting a successful event.
  5. You’ll need plenty of yarn, needles, copies of our scientifically-based neuron crafting pattern books (3.4 MB PDF) and a comfy spot in which to create.
  6. Gather together a group of friends who knit, crochet, design, spin, weave and anyone keen to give it a go. Those who know how to knit can teach others how to do it, and there’s even an easy no knit pattern that you can knot.
  7. Download a neuroscience podcast to listen to, and you’ve got a Neural Knitwork!
  8. Join the Neural Knitworks community on Facebook  to share and find information about events including public talks featuring neuroscientists.
  9. Tweet #neuralknitworks to show us your creations.
  10. Find display ideas in the pattern book and on our Facebook page.

Finally, the knitted neurons from Australia’s 2014 National Science Week brain exhibit,

[downloaded from https://www.scienceweek.net.au/neural-knitworks/]

ETA Oct. 24, 2017: If you’re interested on how the talk was received, there’s an Oct. 24, 2017 posting by Magosia Pakulska for the Research2Reality blog.

A biocompatible (implantable) micromachine (microrobot)

I appreciate the detail and information in this well written Jan. 4, 2017 Columbia University news release (h/t Jan. 4, 2017 Nanowerk; Note: Links have been removed),

A team of researchers led by Biomedical Engineering Professor Sam Sia has developed a way to manufacture microscale-sized machines from biomaterials that can safely be implanted in the body. Working with hydrogels, which are biocompatible materials that engineers have been studying for decades, Sia has invented a new technique that stacks the soft material in layers to make devices that have three-dimensional, freely moving parts. The study, published online January 4, 2017, in Science Robotics, demonstrates a fast manufacturing method Sia calls “implantable microelectromechanical systems” (iMEMS).

By exploiting the unique mechanical properties of hydrogels, the researchers developed a “locking mechanism” for precise actuation and movement of freely moving parts, which can provide functions such as valves, manifolds, rotors, pumps, and drug delivery. They were able to tune the biomaterials within a wide range of mechanical and diffusive properties and to control them after implantation without a sustained power supply such as a toxic battery. They then tested the “payload” delivery in a bone cancer model and found that the triggering of release of doxorubicin from the device over 10 days showed high treatment efficacy and low toxicity, at 1/10 of the standard systemic chemotherapy dose.

“Overall, our iMEMS platform enables development of biocompatible implantable microdevices with a wide range of intricate moving components that can be wirelessly controlled on demand and solves issues of device powering and biocompatibility,” says Sia, also a member of the Data Science Institute. “We’re really excited about this because we’ve been able to connect the world of biomaterials with that of complex, elaborate medical devices. Our platform has a large number of potential applications, including the drug delivery system demonstrated in our paper which is linked to providing tailored drug doses for precision medicine.”

I particularly like this bit about hydrogels being a challenge to work with and the difficulties of integrating both rigid and soft materials,

Most current implantable microdevices have static components rather than moving parts and, because they require batteries or other toxic electronics, have limited biocompatibility. Sia’s team spent more than eight years working on how to solve this problem. “Hydrogels are difficult to work with, as they are soft and not compatible with traditional machining techniques,” says Sau Yin Chin, lead author of the study who worked with Sia. “We have tuned the mechanical properties and carefully matched the stiffness of structures that come in contact with each other within the device. Gears that interlock have to be stiff in order to allow for force transmission and to withstand repeated actuation. Conversely, structures that form locking mechanisms have to be soft and flexible to allow for the gears to slip by them during actuation, while at the same time they have to be stiff enough to hold the gears in place when the device is not actuated. We also studied the diffusive properties of the hydrogels to ensure that the loaded drugs do not easily diffuse through the hydrogel layers.”

The team used light to polymerize sheets of gel and incorporated a stepper mechanization to control the z-axis and pattern the sheets layer by layer, giving them three-dimensionality. Controlling the z-axis enabled the researchers to create composite structures within one layer of the hydrogel while managing the thickness of each layer throughout the fabrication process. They were able to stack multiple layers that are precisely aligned and, because they could polymerize a layer at a time, one right after the other, the complex structure was built in under 30 minutes.

Sia’s iMEMS technique addresses several fundamental considerations in building biocompatible microdevices, micromachines, and microrobots: how to power small robotic devices without using toxic batteries, how to make small biocompatible moveable components that are not silicon which has limited biocompatibility, and how to communicate wirelessly once implanted (radio frequency microelectronics require power, are relatively large, and are not biocompatible). The researchers were able to trigger the iMEMS device to release additional payloads over days to weeks after implantation. They were also able to achieve precise actuation by using magnetic forces to induce gear movements that, in turn, bend structural beams made of hydrogels with highly tunable properties. (Magnetic iron particles are commonly used and FDA-approved for human use as contrast agents.)

In collaboration with Francis Lee, an orthopedic surgeon at Columbia University Medical Center at the time of the study, the team tested the drug delivery system on mice with bone cancer. The iMEMS system delivered chemotherapy adjacent to the cancer, and limited tumor growth while showing less toxicity than chemotherapy administered throughout the body.

“These microscale components can be used for microelectromechanical systems, for larger devices ranging from drug delivery to catheters to cardiac pacemakers, and soft robotics,” notes Sia. “People are already making replacement tissues and now we can make small implantable devices, sensors, or robots that we can talk to wirelessly. Our iMEMS system could bring the field a step closer in developing soft miniaturized robots that can safely interact with humans and other living systems.”

Here’s a link to and a citation for the paper,

Additive manufacturing of hydrogel-based materials for next-generation implantable medical devices by Sau Yin Chin, Yukkee Cheung Poh, Anne-Céline Kohler, Jocelyn T. Compton, Lauren L. Hsu, Kathryn M. Lau, Sohyun Kim, Benjamin W. Lee, Francis Y. Lee, and Samuel K. Sia. Science Robotics  04 Jan 2017: Vol. 2, Issue 2, DOI: 10.1126/scirobotics.aah6451

This paper appears to be open access.

The researchers have provided a video demonstrating their work (you may want to read the caption below before watching),

Magnetic actuation of the Geneva drive device. A magnet is placed about 1 cm below and without contact with the device. The rotating magnet results in the rotational movement of the smaller driving gear. With each full rotation of this driving gear, the larger driven gear is engaged and rotates by 60°, exposing the next reservoir to the aperture on the top layer of the device.

—Video courtesy of Sau Yin Chin/Columbia Engineering

You can hear some background conversation but it doesn’t seem to have been included for informational purposes.
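For anyone curious about how the Geneva drive in the caption meters out its payloads, here's a toy model (my own sketch, not the device's actual control scheme): each full rotation of the driving gear advances the driven gear by 60°, so the aperture steps through six reservoirs in sequence.

```python
def exposed_reservoir(driver_rotations, step_deg=60):
    """Index (0-5) of the reservoir exposed after n full driver rotations."""
    n_reservoirs = 360 // step_deg  # six 60-degree steps per full turn
    return driver_rotations % n_reservoirs

# The aperture cycles through reservoirs 0..5, then wraps around:
print([exposed_reservoir(n) for n in range(8)])  # [0, 1, 2, 3, 4, 5, 0, 1]
```

This intermittent, one-step-per-rotation motion is exactly what makes a Geneva mechanism attractive for releasing discrete doses on demand.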

Montreal Neuro creates a new paradigm for technology transfer?

It’s one heck of a Christmas present. Canadian businessman Larry Tanenbaum and his wife Judy have given the Montreal Neurological Institute (Montreal Neuro), which is affiliated with McGill University, a $20M donation. From a Dec. 16, 2016 McGill University news release,

The Prime Minister of Canada, Justin Trudeau, was present today at the Montreal Neurological Institute and Hospital (MNI) for the announcement of an important donation of $20 million by the Larry and Judy Tanenbaum family. This transformative gift will help to establish the Tanenbaum Open Science Institute, a bold initiative that will facilitate the sharing of neuroscience findings worldwide to accelerate the discovery of leading edge therapeutics to treat patients suffering from neurological diseases.

“Today, we take an important step forward in opening up new horizons in neuroscience research and discovery,” said Mr. Larry Tanenbaum. “Our digital world provides for unprecedented opportunities to leverage advances in technology to the benefit of science. That is what we are celebrating here today: the transformation of research, the removal of barriers, the breaking of silos and, most of all, the courage of researchers to put patients and progress ahead of all other considerations.”

Neuroscience has reached a new frontier, and advances in technology now allow scientists to better understand the brain and all its complexities in ways that were previously deemed impossible. The sharing of research findings amongst scientists is critical, not only due to the sheer scale of data involved, but also because diseases of the brain and the nervous system are amongst the most compelling unmet medical needs of our time.

Neurological diseases, mental illnesses, addictions, and brain and spinal cord injuries directly impact 1 in 3 Canadians, representing approximately 11 million people across the country.

“As internationally-recognized leaders in the field of brain research, we are uniquely placed to deliver on this ambitious initiative and reinforce our reputation as an institution that drives innovation, discovery and advanced patient care,” said Dr. Guy Rouleau, Director of the Montreal Neurological Institute and Hospital and Chair of McGill University’s Department of Neurology and Neurosurgery. “Part of the Tanenbaum family’s donation will be used to incentivize other Canadian researchers and institutions to adopt an Open Science model, thus strengthening the network of like-minded institutes working in this field.”

What they don’t mention in the news release is that they will not be pursuing any patents (for five years according to one of the people in the video but I can’t find text to substantiate that time limit*; there are no time limits noted elsewhere) on their work. For this detail and others, you have to listen to the video they’ve created,

The CBC (Canadian Broadcasting Corporation) news online Dec. 16, 2016 posting (with files from Sarah Leavitt and Justin Hayward) adds a few personal details about Tanenbaum,

“Our goal is simple: to accelerate brain research and discovery to relieve suffering,” said Tanenbaum.

Tanenbaum, a Canadian businessman and chairman of Maple Leaf Sports and Entertainment, said many of his loved ones suffered from neurological disorders.

“I lost my mother to Alzheimer’s, my father to a stroke, three dear friends to brain cancer, and a brilliant friend and scientist to clinical depression,” said Tanenbaum.

He hopes the institute will serve as the template for science research across the world, a thought that Trudeau echoed.

“This vision around open science, recognizing the role that Canada can and should play, the leadership that Canadians can have in this initiative is truly, truly exciting,” said Trudeau.

The Neurological Institute says the pharmaceutical industry is supportive of the open science concept because it will provide crucial base research that can later be used to develop drugs to fight an array of neurological conditions.

Jack Stilgoe in a Dec. 16, 2016 posting on the Guardian blogs explains what this donation could mean (Note: Links have been removed),

With the help of Tanenbaum’s gift of 20 million Canadian dollars (£12million) the ‘Neuro’, the Montreal Neurological Institute and Hospital, is setting up an experiment in experimentation, an Open Science Initiative with the express purpose of finding out the best way to realise the potential of scientific research.

Governments in science-rich countries are increasingly concerned that they do not appear to be reaping the economic returns they feel they deserve from investments in scientific research. Their favoured response has been to try to bridge what they see as a ‘valley of death’ between basic scientific research and industrial applications. This has meant more funding for ‘translational research’ and the flowering of technology transfer offices within universities.

… There are some success stories, particularly in the life sciences. Patents from the work of Richard Axel at Columbia University at one point brought the university almost $100 million per year. The University of Florida received more than $150 million for inventing Gatorade in the 1960s. The stakes are high in the current battle between Berkely and MIT/Harvard over who owns the rights to the CRISPR/Cas9 system that has revolutionised genetic engineering and could be worth billions.

Policymakers imagine a world in which universities pay for themselves just as a pharmaceutical research lab does. However, for critics of technology transfer, such stories blind us to the reality of university’s entrepreneurial abilities.

For most universities, evidence of their money-making prowess is, to put it charitably, mixed. A recent Bloomberg report shows how quickly university patent incomes plunge once we look beyond the megastars. In 2014, just 15 US universities earned 70% of all patent royalties. British science policy researchers Paul Nightingale and Alex Coad conclude that ‘Roughly 9/10 US universities lose money on their technology transfer offices… MIT makes more money from selling T-shirts than it does from licensing’. A report from the Brookings institute concluded that the model of technology transfer ‘is unprofitable for most universities and sometimes even risks alienating the private sector’. In the UK, the situation is even worse. Businesses who have dealings with universities report that their technology transfer offices are often unrealistic in negotiations. In many cases, academics are, like a small child who refuses to let others play with a brand new football, unable to make the most of their gifts. And areas of science outside the life sciences are harder to patent than medicines, sports drinks and genetic engineering techniques. Trying too hard to force science towards the market may be, to use the phrase of science policy professor Keith Pavitt, like pushing a piece of string.

Science policy is slowly waking up to the realisation that the value of science may lie in people and places rather than papers and patents. It’s an idea that the Neuro, with the help of Tanenbaum’s gift, is going to test. By sharing data and giving away intellectual property, the initiative aims to attract new private partners to the institute and build Montreal as a hub for knowledge and innovation. The hypothesis is that this will be more lucrative than hoarding patents.

This experiment is not wishful thinking. It will be scientifically measured. It is the job of Richard Gold, a McGill University law professor, to see whether it works. He told me that his first task is ‘to figure out what to count… There’s going to be a gap between what we would like to measure and what we can measure’. However, he sees an open-mindedness among his colleagues that is unusual. Some are evangelists for open science; some are sceptics. But they share a curiosity about new approaches and a recognition of a problem in neuroscience: ‘We haven’t come up with a new drug for Parkinson’s in 30 years. We don’t even understand the biological basis for many of these diseases. So whatever we’re doing at the moment doesn’t work’. …

Montreal Neuro made news on the ‘open science’ front in January 2016 when it formally announced its research would be freely available and that researchers would not be pursuing patents (see my January 22, 2016 posting).

I recommend reading Stilgoe’s posting in its entirety and, for those who don’t know or have forgotten, Prime Minister Trudeau’s family has some experience with mental illness. His mother has been very open about her travails. This makes his presence at the announcement perhaps a bit more meaningful than the usual political presence at a major funding announcement.

*The five-year time limit is confirmed in a Feb. 17, 2017 McGill University news release about their presentations at the AAAS (American Association for the Advancement of Science) 2017 annual meeting on EurekAlert,

Jumpstarting Neurological Research through Open Science – MNI & McGill University

Friday, February 17, 2017, 1:30-2:30 PM/ Room 208

Neurological research is advancing too slowly according to Dr. Guy Rouleau, director of the Montreal Neurological Institute (MNI) of McGill University. To speed up discovery, MNI has become the first ever Open Science academic institution in the world. In a five-year experiment, MNI is opening its books and making itself transparent to an international group of social scientists, policymakers, industrial partners, and members of civil society. They hope, by doing so, to accelerate research and the discovery of new treatments for patients with neurological diseases, and to encourage other leading institutions around the world to consider a similar model. A team led by McGill Faculty of Law’s Professor Richard Gold will monitor and evaluate how well the MNI Open Science experiment works and provide the scientific and policy worlds with insight into 21st century university-industry partnerships. At this workshop, Rouleau and Gold will discuss the benefits and challenges of this open-science initiative.

The dangers of metaphors when applied to science

Metaphors can be powerful in both good ways and bad. I once read that a ‘lighthouse’ metaphor used to explain a scientific concept to high school students later caused problems for them when they studied the biological sciences as university students. It seems there’s now research to back up assertions about metaphors and their powers. From an Oct. 7, 2016 news item on phys.org,

Whether ideas are “like a light bulb” or come forth as “nurtured seeds,” how we describe discovery shapes people’s perceptions of both inventions and inventors. Notably, Kristen Elmore (Bronfenbrenner Center for Translational Research at Cornell University) and Myra Luna-Lucero (Teachers College, Columbia University) have shown that discovery metaphors influence our perceptions of the quality of an idea and of the ability of the idea’s creator. The research appears in the journal Social Psychological and Personality Science.

While the metaphor that ideas appear “like light bulbs” is popular and appealing, new research shows that discovery metaphors influence our understanding of the scientific process and perceptions of the ability of inventors based on their gender. [downloaded from http://www.spsp.org/news-center/press-release/metaphors-bias-perception]

An Oct. 7, 2016 Society for Personality and Social Psychology news release (also on EurekAlert), which originated the news item, provides more insight into the work,

While those involved in research know there are many trials and errors and years of work before something is understood, discovered or invented, our use of words for inspiration may have an unintended and underappreciated effect of portraying good ideas as a sudden and exceptional occurrence.

In a series of experiments, Elmore and Luna-Lucero tested how people responded to ideas that were described as being “like a light bulb,” “nurtured like a seed,” or a neutral description. 

According to the authors, the “light bulb metaphor implies that ‘brilliant’ ideas result from sudden and spontaneous inspiration, bestowed upon a chosen few (geniuses) while the seed metaphor implies that ideas are nurtured over time, ‘cultivated’ by anyone willing to invest effort.”

The first study looked at how people reacted to a description of Alan Turing’s invention of a precursor to the modern computer. It turns out light bulbs are more remarkable than seeds.

“We found that an idea was seen as more exceptional when described as appearing like a light bulb rather than nurtured like a seed,” said Elmore.

But this pattern changed when these metaphors described a female inventor’s ideas: the researchers found that “women were judged as better idea creators than men when ideas were described as nurtured over time like seeds.”

The results suggest gender stereotypes play a role in how people perceived the inventors.

In the third study, the researchers presented participants with descriptions of the work of either a female (Hedy Lamarr) or a male (George Antheil) inventor, who together created the idea for spread-spectrum technology (a precursor to modern wireless communications). Indeed, the seed metaphor “increased perceptions that a female inventor was a genius, while the light bulb metaphor was more consistent with stereotypical views of male genius,” stated Elmore.

Elmore plans to expand upon their research on metaphors by examining the interactions of teachers and students in real world classroom settings.

“The ways that teachers and students talk about ideas may impact students’ beliefs about how good ideas are created and who is likely to have them,” said Elmore. “Having good ideas is relevant across subjects—whether students are creating a hypothesis in science or generating a thesis for their English paper—and language that stresses the role of effort rather than inspiration in creating ideas may have real benefits for students’ motivation.”

Here’s a link to and a citation for the paper,

Light Bulbs or Seeds? How Metaphors for Ideas Influence Judgments About Genius by Kristen C. Elmore and Myra Luna-Lucero. Social Psychological and Personality Science doi: 10.1177/1948550616667611 Published online before print October 7, 2016

This paper is behind a paywall.

While Elmore and Luna-Lucero are focused on a nuanced analysis of specific metaphors, Richard Holmes’s book, ‘The Age of Wonder: How the Romantic Generation Discovered the Beauty and Terror of Science’, notes that the ‘Eureka’ (light bulb) moment for scientific discovery and the notion of a ‘single great man’ (a singular genius) as the discoverer have their roots in Romantic (Shelley, Keats, etc.) poetry.

How might artificial intelligence affect urban life in 2030? A study

Peering into the future is always a chancy business, as anyone knows who has seen those film shorts from the 1950s and ’60s that speculate exuberantly about what the future will bring.

A sober approach (appropriate to our times) has been taken in a study about the impact that artificial intelligence might have by 2030. From a Sept. 1, 2016 Stanford University news release (also on EurekAlert) by Tom Abate (Note: Links have been removed),

A panel of academic and industrial thinkers has looked ahead to 2030 to forecast how advances in artificial intelligence (AI) might affect life in a typical North American city – in areas as diverse as transportation, health care and education – and to spur discussion about how to ensure the safe, fair and beneficial development of these rapidly emerging technologies.

Titled “Artificial Intelligence and Life in 2030,” this year-long investigation is the first product of the One Hundred Year Study on Artificial Intelligence (AI100), an ongoing project hosted by Stanford to inform societal deliberation and provide guidance on the ethical development of smart software, sensors and machines.

“We believe specialized AI applications will become both increasingly common and more useful by 2030, improving our economy and quality of life,” said Peter Stone, a computer scientist at the University of Texas at Austin and chair of the 17-member panel of international experts. “But this technology will also create profound challenges, affecting jobs and incomes and other issues that we should begin addressing now to ensure that the benefits of AI are broadly shared.”

The new report traces its roots to a 2009 study that brought AI scientists together in a process of introspection that became ongoing in 2014, when Eric and Mary Horvitz created the AI100 endowment through Stanford. AI100 formed a standing committee of scientists and charged this body with commissioning periodic reports on different aspects of AI over the ensuing century.

“This process will be a marathon, not a sprint, but today we’ve made a good start,” said Russ Altman, a professor of bioengineering and the Stanford faculty director of AI100. “Stanford is excited to host this process of introspection. This work makes a practical contribution to the public debate on the roles and implications of artificial intelligence.”

The AI100 standing committee first met in 2015, led by chairwoman and Harvard computer scientist Barbara Grosz. It sought to convene a panel of scientists with diverse professional and personal backgrounds and enlist their expertise to assess the technological, economic and policy implications of potential AI applications in a societally relevant setting.

“AI technologies can be reliable and broadly beneficial,” Grosz said. “Being transparent about their design and deployment challenges will build trust and avert unjustified fear and suspicion.”

The report investigates eight domains of human activity in which AI technologies are beginning to affect urban life in ways that will become increasingly pervasive and profound by 2030.

The 28,000-word report includes a glossary to help nontechnical readers understand how AI applications such as computer vision might help screen tissue samples for cancers or how natural language processing will allow computerized systems to grasp not simply the literal definitions, but the connotations and intent, behind words.

The report is broken into eight sections focusing on applications of AI. Five examine application arenas such as transportation, where there is already buzz about self-driving cars. Three other sections treat technological impacts, like the section on employment and workplace trends, which touches on the likelihood of rapid changes in jobs and incomes.

“It is not too soon for social debate on how the fruits of an AI-dominated economy should be shared,” the researchers write in the report, noting also the need for public discourse.

“Currently in the United States, at least sixteen separate agencies govern sectors of the economy related to AI technologies,” the researchers write, highlighting issues raised by AI applications: “Who is responsible when a self-driven car crashes or an intelligent medical device fails? How can AI applications be prevented from [being used for] racial discrimination or financial cheating?”

The eight sections discuss:

Transportation: Autonomous cars, trucks and, possibly, aerial delivery vehicles may alter how we commute, work and shop and create new patterns of life and leisure in cities.

Home/service robots: Like the robotic vacuum cleaners already in some homes, specialized robots will clean and provide security in live/work spaces that will be equipped with sensors and remote controls.

Health care: Devices to monitor personal health and robot-assisted surgery are hints of things to come if AI is developed in ways that gain the trust of doctors, nurses, patients and regulators.

Education: Interactive tutoring systems already help students learn languages, math and other skills. More is possible if technologies like natural language processing platforms develop to augment instruction by humans.

Entertainment: The conjunction of content creation tools, social networks and AI will lead to new ways to gather, organize and deliver media in engaging, personalized and interactive ways.

Low-resource communities: Investments in uplifting technologies like predictive models to prevent lead poisoning or improve food distributions could spread AI benefits to the underserved.

Public safety and security: Cameras, drones and software to analyze crime patterns should use AI in ways that reduce human bias and enhance safety without loss of liberty or dignity.

Employment and workplace: Work should start now on how to help people adapt as the economy undergoes rapid changes as many existing jobs are lost and new ones are created.

“Until now, most of what is known about AI comes from science fiction books and movies,” Stone said. “This study provides a realistic foundation to discuss how AI technologies are likely to affect society.”

Grosz said she hopes the AI 100 report “initiates a century-long conversation about ways AI-enhanced technologies might be shaped to improve life and societies.”

You can find the AI100 website here, and the group’s first paper, “Artificial Intelligence and Life in 2030,” here. Unfortunately, I don’t have time to read the report but I hope to do so soon.

The AI100 website’s About page offered a surprise,

This effort, called the One Hundred Year Study on Artificial Intelligence, or AI100, is the brainchild of computer scientist and Stanford alumnus Eric Horvitz who, among other credits, is a former president of the Association for the Advancement of Artificial Intelligence.

In that capacity Horvitz convened a conference in 2009 at which top researchers considered advances in artificial intelligence and its influences on people and society, a discussion that illuminated the need for continuing study of AI’s long-term implications.

Now, together with Russ Altman, a professor of bioengineering and computer science at Stanford, Horvitz has formed a committee that will select a panel to begin a series of periodic studies on how AI will affect automation, national security, psychology, ethics, law, privacy, democracy and other issues.

“Artificial intelligence is one of the most profound undertakings in science, and one that will affect every aspect of human life,” said Stanford President John Hennessy, who helped initiate the project. “Given Stanford’s pioneering role in AI and our interdisciplinary mindset, we feel obliged and qualified to host a conversation about how artificial intelligence will affect our children and our children’s children.”

Five leading academicians with diverse interests will join Horvitz and Altman in launching this effort. They are:

  • Barbara Grosz, the Higgins Professor of Natural Sciences at Harvard University and an expert on multi-agent collaborative systems;
  • Deirdre K. Mulligan, a lawyer and a professor in the School of Information at the University of California, Berkeley, who collaborates with technologists to advance privacy and other democratic values through technical design and policy;
  • Yoav Shoham, a professor of computer science at Stanford, who seeks to incorporate common sense into AI;
  • Tom Mitchell, the E. Fredkin University Professor and chair of the machine learning department at Carnegie Mellon University, whose studies include how computers might learn to read the Web;
  • and Alan Mackworth, a professor of computer science at the University of British Columbia [emphases mine] and the Canada Research Chair in Artificial Intelligence, who built the world’s first soccer-playing robot.

I wasn’t expecting to see a Canadian listed as a member of the AI100 standing committee and then I got another surprise (from the AI100 People webpage),

Study Panels

Study Panels are planned to convene every 5 years to examine some aspect of AI and its influences on society and the world. The first study panel was convened in late 2015 to study the likely impacts of AI on urban life by the year 2030, with a focus on typical North American cities.

2015 Study Panel Members

  • Peter Stone, UT Austin, Chair
  • Rodney Brooks, Rethink Robotics
  • Erik Brynjolfsson, MIT
  • Ryan Calo, University of Washington
  • Oren Etzioni, Allen Institute for AI
  • Greg Hager, Johns Hopkins University
  • Julia Hirschberg, Columbia University
  • Shivaram Kalyanakrishnan, IIT Bombay
  • Ece Kamar, Microsoft
  • Sarit Kraus, Bar Ilan University
  • Kevin Leyton-Brown, [emphasis mine] UBC [University of British Columbia]
  • David Parkes, Harvard
  • Bill Press, UT Austin
  • AnnaLee (Anno) Saxenian, Berkeley
  • Julie Shah, MIT
  • Milind Tambe, USC
  • Astro Teller, Google[X]

I see they have representation from Israel, India, and the private sector as well. Refreshingly, there’s more than one woman on the standing committee and in this first study group. It’s good to see these efforts at inclusiveness and I’m particularly delighted with the inclusion of an organization from Asia. All too often inclusiveness means Europe, especially the UK. So, it’s good (and I think important) to see a different range of representation.

As for the content of the report, should anyone have opinions about it, please do let me know your thoughts in the blog comments.

Gold-144 is a polymorph

Au-144 (also known as Gold-144) is an iconic gold nanocluster according to a June 14, 2016 news item announcing its polymorphic nature on ScienceDaily,

Chemically the same, graphite and diamonds are as physically distinct as two minerals can be, one opaque and soft, the other translucent and hard. What makes them unique is their differing arrangement of carbon atoms.

Polymorphs, or materials with the same composition but different structures, are common in bulk materials, and now a new study in Nature Communications confirms they exist in nanomaterials, too. Researchers describe two unique structures for the iconic gold nanocluster Au144(SR)60, better known as Gold-144, including a version never seen before. Their discovery gives engineers a new material to explore, along with the possibility of finding other polymorphic nanoparticles.

A June 14, 2016 Columbia University news release (also on EurekAlert), which originated the news item, provides more insight into the work,

“This took four years to unravel,” said Simon Billinge, a physics professor at Columbia Engineering and a member of the Data Science Institute. “We weren’t expecting the clusters to take on more than one atomic arrangement. But this discovery gives us more handles to turn when trying to design clusters with new and useful properties.”

Gold has been used in coins and jewelry for thousands of years for its durability, but shrink it to a size 10,000 times smaller than a human hair [at one time one billionth of a meter or a nanometer was said to be 1/50,000, 1/60,000 or 1/100,000 of the diameter of a human hair], and it becomes wildly unstable and unpredictable. At the nanoscale, gold likes to split apart other particles and molecules, making it a useful material for purifying water, imaging and killing tumors, and making solar panels more efficient, among other applications.

Though a variety of nanogold particles and molecules have been made in the lab, very few have had their secret atomic arrangements revealed. But recently, new technologies have begun bringing these minuscule structures into focus.

Under one approach, high-energy x-ray beams are fired at a sample of nanoparticles. Advanced data analytics are used to interpret the x-ray scattering data and infer the sample’s structure, which is key to understanding how strong, reactive or durable the particles might be.

Billinge and his lab have pioneered a method, atomic Pair Distribution Function (PDF) analysis, for interpreting this scattering data. To test the PDF method, Billinge asked chemists at Colorado State University to make tiny samples of Gold-144, a molecule-sized nanogold cluster first isolated in 1995. Its structure had been theoretically predicted in 2009, and though never confirmed, Gold-144 has found numerous applications, including in tissue imaging.
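For readers curious about what a “pair distribution function” actually captures, here is a toy Python sketch of my own (not the Billinge group’s software, which works backward from measured x-ray scattering data rather than forward from a known model): it simply histograms all interatomic distances in a small, hypothetical cluster. All function names and parameters are illustrative assumptions.

```python
# Toy illustration of pair-distance analysis: for a set of atomic
# coordinates, compute every interatomic distance and bin them into a
# crude histogram. Two polymorphs with the same atom count but different
# arrangements would produce visibly different histograms.
import itertools
import math


def pair_distances(coords):
    """Return all pairwise interatomic distances for (x, y, z) positions."""
    return [math.dist(a, b) for a, b in itertools.combinations(coords, 2)]


def pdf_histogram(coords, r_max=10.0, dr=0.5):
    """Bin the distances into a crude histogram (a stand-in for G(r))."""
    bins = [0] * int(r_max / dr)
    for d in pair_distances(coords):
        i = int(d / dr)
        if i < len(bins):
            bins[i] += 1
    return bins


# A hypothetical cubic cluster of 8 atoms with 2-angstrom edges.
cube = [(x, y, z) for x in (0, 2) for y in (0, 2) for z in (0, 2)]
hist = pdf_histogram(cube, r_max=5.0, dr=0.5)
print(hist)  # peaks at the edge, face-diagonal and body-diagonal distances
```

In real PDF analysis, a histogram like this (derived from a candidate structure model) is compared against the experimentally derived distance distribution to decide which atomic arrangement best explains the scattering data.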

Hoping the test would confirm Gold-144’s structure, they analyzed the clusters at the European Synchrotron Radiation Facility in Grenoble and used the PDF method to infer their structure. To their surprise, they found an angular core, not the sphere-like icosahedral core predicted. When they made a new sample and tried the experiment again, this time using synchrotrons at Brookhaven and Argonne national laboratories, the structure came back spherical.

“We didn’t understand what was going on, but digging deeper, we realized we had a polymorph,” said study coauthor Kirsten Jensen, formerly a postdoctoral researcher at Columbia, now a chemistry professor at the University of Copenhagen.

Further experiments confirmed the cluster had two versions, sometimes found together, each with a unique structure indicating they behave differently. The researchers are still unsure if Gold-144 can switch from one version to the other or, what exactly, differentiates the two forms.

To make their discovery, the researchers solved what physicists call the nanostructure inverse problem. How can the structure of a tiny nanoparticle in a sample be inferred from an x-ray signal that has been averaged over millions of particles, each with different orientations?

“The signal is noisy and highly degraded,” said Billinge. “It’s the equivalent of trying to recognize if the bird in the tree is a robin or a cardinal, but the image in your binoculars is too blurry and distorted to tell.”

“Our results demonstrate the power of PDF analysis to reveal the structure of very tiny particles,” added study coauthor Christopher Ackerson, a chemistry professor at Colorado State. “I’ve been trying, off and on, for more than 10 years to get the single-crystal x-ray structure of Gold-144. The presence of polymorphs helps to explain why this molecule has been so resistant to traditional methods.”

The PDF approach is one of several rival methods being developed to bring nanoparticle structure into focus. Now that it has proven itself, it could help speed up the work of describing other nanostructures.

The eventual goal is to design nanoparticles by their desired properties, rather than through trial and error, by understanding how form and function relate. Databases of known and predicted structures could make it possible to design new materials with a few clicks of a mouse.

The study is a first step.

“We’ve had a structure model for this iconic gold molecule for years and then this study comes along and says the structure is basically right but it’s got a doppelgänger,” said Robert Whetten, a professor of chemical physics at the University of Texas, San Antonio, who led the team that first isolated Gold-144. “It seemed preposterous, to have two distinct structures that underlie its ubiquity, but this is a beautiful paper that will persuade a lot of people.”

Here’s an image illustrating the two shapes,

Setting out to confirm the predicted structure of Gold-144, researchers discovered an entirely unexpected atomic arrangement (right). The two structures, described in detail for the first time, each have 144 gold atoms, but are uniquely shaped, suggesting they also behave differently. (Courtesy of Kirsten Ørnsbjerg Jensen)

Here’s a link to and a citation for the paper,

Polymorphism in magic-sized Au144(SR)60 clusters by Kirsten M.Ø. Jensen, Pavol Juhas, Marcus A. Tofanelli, Christine L. Heinecke, Gavin Vaughan, Christopher J. Ackerson, & Simon J. L. Billinge. Nature Communications 7, Article number: 11859 doi:10.1038/ncomms11859 Published 14 June 2016

This is an open access paper.