Tag Archives: Stanford University

Brain cell-like nanodevices

Given R. Stanley Williams’s presence on the author list, it’s a bit surprising that there’s no mention of memristors. If I read the signs rightly, the interest is shifting, in some cases, from the memristor to a more comprehensive grouping of circuit elements referred to as ‘neuristors’ or, more likely, ‘nanocircuit elements’ in the effort to achieve brainlike (neuromorphic) computing and engineering. (Williams was the leader of the HP Labs team that offered proof and more of the memristor’s existence, which I mentioned here in an April 5, 2010 posting. There are many, many postings on this topic here; try ‘memristors’ or ‘brainlike computing’ for your search terms.)

A September 24, 2020 news item on ScienceDaily announces a recent development in the field of neuromorphic engineering,

In the September [2020] issue of the journal Nature, scientists from Texas A&M University, Hewlett Packard Labs and Stanford University have described a new nanodevice that acts almost identically to a brain cell. Furthermore, they have shown that these synthetic brain cells can be joined together to form intricate networks that can then solve problems in a brain-like manner.

“This is the first study where we have been able to emulate a neuron with just a single nanoscale device, which would otherwise need hundreds of transistors,” said Dr. R. Stanley Williams, senior author on the study and professor in the Department of Electrical and Computer Engineering. “We have also been able to successfully use networks of our artificial neurons to solve toy versions of a real-world problem that is computationally intense even for the most sophisticated digital technologies.”

In particular, the researchers have demonstrated proof of concept that their brain-inspired system can identify possible mutations in a virus, which is highly relevant for ensuring the efficacy of vaccines and medications for strains exhibiting genetic diversity.

A September 24, 2020 Texas A&M University news release (also on EurekAlert) by Vandana Suresh, which originated the news item, provides some context for the research,

Over the past decades, digital technologies have become smaller and faster largely because of the advancements in transistor technology. However, these critical circuit components are fast approaching their limit of how small they can be built, initiating a global effort to find a new type of technology that can supplement, if not replace, transistors.

In addition to this “scaling-down” problem, transistor-based digital technologies have other well-known challenges. For example, they struggle at finding optimal solutions when presented with large sets of data.

“Let’s take a familiar example of finding the shortest route from your office to your home. If you have to make a single stop, it’s a fairly easy problem to solve. But if for some reason you need to make 15 stops in between, you have 43 billion routes to choose from,” said Dr. Suhas Kumar, lead author on the study and researcher at Hewlett Packard Labs. “This is now an optimization problem, and current computers are rather inept at solving it.”
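Out of curiosity, that figure is easy to reproduce with some back-of-envelope arithmetic (mine, not the study’s). If you fix the starting stop and count a route and its reversal as one, 15 stops give 14!/2 ≈ 43.6 billion orderings, which matches the ballpark Kumar quotes; the exact counting convention behind his number isn’t spelled out, so take this as one plausible reading.

```python
from math import factorial

def route_count(stops, fixed_first=True, count_reversals_once=True):
    """Count distinct orderings for a multi-stop trip.

    With 15 intermediate stops, fixing the first stop and treating a
    route and its reversal as the same gives 14!/2, roughly 43.6
    billion, in line with the figure quoted above.
    """
    n = stops - 1 if fixed_first else stops
    total = factorial(n)
    return total // 2 if count_reversals_once else total

print(route_count(15))  # prints 43589145600, i.e. ~43.6 billion
```

The point of the exercise is the growth rate: each extra stop multiplies the count, which is exactly why brute-force search defeats conventional computers so quickly.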

Kumar added that another arduous task for digital machines is pattern recognition, such as identifying a face as the same regardless of viewpoint or recognizing a familiar voice buried within a din of sounds.

But tasks that can send digital machines into a computational tizzy are ones at which the brain excels. In fact, brains are not just quick at recognition and optimization problems, but they also consume far less energy than digital systems. Hence, by mimicking how the brain solves these types of tasks, Williams said brain-inspired or neuromorphic systems could potentially overcome some of the computational hurdles faced by current digital technologies.

To build the fundamental building block of the brain or a neuron, the researchers assembled a synthetic nanoscale device consisting of layers of different inorganic materials, each with a unique function. However, they said the real magic happens in the thin layer made of the compound niobium dioxide.

When a small voltage is applied to this region, its temperature begins to increase. But when the temperature reaches a critical value, niobium dioxide undergoes a quick change in personality, turning from an insulator to a conductor. But as it begins to conduct electric currents, its temperature drops and niobium dioxide switches back to being an insulator.

These back-and-forth transitions enable the synthetic devices to generate a pulse of electrical current that closely resembles the profile of electrical spikes, or action potentials, produced by biological neurons. Further, by changing the voltage across their synthetic neurons, the researchers reproduced a rich range of neuronal behaviors observed in the brain, such as sustained, burst and chaotic firing of electrical spikes.
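The heat-driven switching cycle just described can be mimicked with a toy relaxation-oscillator simulation. This is my own illustrative sketch, not the model from the paper, and every parameter value is made up; it only shows how hysteretic insulator/conductor switching, Joule heating, and cooling together produce a spontaneous train of current pulses.

```python
def simulate_nbo2_neuron(v_in=30.0, steps=20000, dt=1e-3):
    """Toy model of a heat-driven insulator/conductor oscillator.

    All parameter values are illustrative, not taken from the paper.
    Returns the current trace and the number of phase switches.
    """
    r_series = 30.0            # series resistor (arbitrary units)
    r_ins, r_con = 100.0, 1.0  # device resistance in each phase
    t_hi, t_lo = 1.0, 0.3      # switching thresholds (hysteresis)
    k_cool, c_heat = 5.0, 1.0  # cooling rate, heat capacity
    temp, conducting, switches = 0.0, False, 0
    currents = []
    for _ in range(steps):
        r_dev = r_con if conducting else r_ins
        i = v_in / (r_series + r_dev)              # series-circuit current
        p = i * i * r_dev                          # Joule heating in device
        temp += dt * (p - k_cool * temp) / c_heat  # heating minus cooling
        if not conducting and temp >= t_hi:
            conducting = True                      # insulator -> conductor
            switches += 1
        elif conducting and temp <= t_lo:
            conducting = False                     # conductor -> insulator
            switches += 1
        currents.append(i)
    return currents, switches

currents, switches = simulate_nbo2_neuron()
print(switches > 10, max(currents) > 0.9, min(currents) < 0.25)  # prints True True True
```

In the insulating phase the device heats toward a temperature above the upper threshold; once it switches to conducting, most of the voltage drops across the series resistor, the device cools below the lower threshold, and the cycle repeats, yielding the spiking current pulses.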

“Capturing the dynamical behavior of neurons is a key goal for brain-inspired computers,” said Kumar. “Altogether, we were able to recreate around 15 types of neuronal firing profiles, all using a single electrical component and at much lower energies compared to transistor-based circuits.”

To evaluate if their synthetic neurons [neuristor?] can solve real-world problems, the researchers first wired 24 such nanoscale devices together in a network inspired by the connections between the brain’s cortex and thalamus, a well-known neural pathway involved in pattern recognition. Next, they used this system to solve a toy version of the viral quasispecies reconstruction problem, where mutant variations of a virus are identified without a reference genome.

By means of data inputs, the researchers introduced the network to short gene fragments. Then, by programming the strength of connections between the artificial neurons within the network, they established basic rules about joining these genetic fragments. The jigsaw puzzle-like task for the network was to list mutations in the virus’ genome based on these short genetic segments.

The researchers found that within a few microseconds, their network of artificial neurons settled down in a state that was indicative of the genome for a mutant strain.
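That “settling” behaviour is reminiscent of a Hopfield network, where connection weights encode constraints and the dynamics relax toward a consistent low-energy state. As a loose illustration only (my own toy with made-up weights and size, not the 24-device cortex–thalamus network in the paper), here is an eight-neuron Hopfield sketch recovering a stored pattern from a corrupted input:

```python
import random

def settle(weights, state, sweeps=20, seed=0):
    """Repeatedly update each +/-1 neuron to align with its local field."""
    rng = random.Random(seed)
    n = len(state)
    for _ in range(sweeps):
        for i in rng.sample(range(n), n):  # random update order
            field = sum(weights[i][j] * state[j] for j in range(n))
            state[i] = 1 if field >= 0 else -1
    return state

# Store one pattern with the Hebbian rule: w_ij = p_i * p_j (no self-links).
pattern = [1, -1, 1, 1, -1, -1, 1, -1]
n = len(pattern)
weights = [[0 if i == j else pattern[i] * pattern[j] for j in range(n)]
           for i in range(n)]

noisy = pattern[:]
noisy[0] = -noisy[0]         # corrupt one "measurement"
recovered = settle(weights, noisy)
print(recovered == pattern)  # prints True: the network settles on the stored state
```

The analogy is only partial, but it captures the flavour of the reported result: program the connections, feed in imperfect data, and let the dynamics relax to the answer rather than computing it step by step.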

Williams and Kumar noted this result is proof of principle that their neuromorphic systems can quickly perform tasks in an energy-efficient way.

The researchers said the next steps in their research will be to expand the repertoire of the problems that their brain-like networks can solve by incorporating other firing patterns and some hallmark properties of the human brain like learning and memory. They also plan to address hardware challenges for implementing their technology on a commercial scale.

“Calculating the national debt or solving some large-scale simulation is not the type of task the human brain is good at and that’s why we have digital computers. Alternatively, we can leverage our knowledge of neuronal connections for solving problems that the brain is exceptionally good at,” said Williams. “We have demonstrated that depending on the type of problem, there are different and more efficient ways of doing computations other than the conventional methods using digital computers with transistors.”

If you look at the news release on EurekAlert, you’ll see this informative image is titled: NeuristerSchematic [sic],

Caption: Networks of artificial neurons connected together can solve toy versions of the viral quasispecies reconstruction problem. Credit: Texas A&M University College of Engineering

(On the university website, the image is credited to Rachel Barton.) You can see one of the first mentions of a ‘neuristor’ here in an August 24, 2017 posting.

Here’s a link to and a citation for the paper,

Third-order nanocircuit elements for neuromorphic engineering by Suhas Kumar, R. Stanley Williams & Ziwen Wang. Nature volume 585, pages 518–523 (2020) DOI: https://doi.org/10.1038/s41586-020-2735-5 Published: 23 September 2020 Issue Date: 24 September 2020

This paper is behind a paywall.

Technical University of Munich: embedded ethics approach for AI (artificial intelligence) and storing a tv series in synthetic DNA

I stumbled across two news bits of interest from the Technical University of Munich in one day (Sept. 1, 2020, I think). The topics: artificial intelligence (AI) and synthetic DNA (deoxyribonucleic acid).

Embedded ethics and artificial intelligence (AI)

An August 27, 2020 Technical University of Munich (TUM) press release (also on EurekAlert but published Sept. 1, 2020) features information about a proposal to embed ethicists in with AI development teams,

The increasing use of AI (artificial intelligence) in the development of new medical technologies demands greater attention to ethical aspects. An interdisciplinary team at the Technical University of Munich (TUM) advocates the integration of ethics from the very beginning of the development process of new technologies. Alena Buyx, Professor of Ethics in Medicine and Health Technologies, explains the embedded ethics approach.

Professor Buyx, the discussions surrounding a greater emphasis on ethics in AI research have greatly intensified in recent years, to the point where one might speak of “ethics hype” …

Prof. Buyx: … and many committees in Germany and around the world such as the German Ethics Council or the EU Commission High-Level Expert Group on Artificial Intelligence have responded. They are all in agreement: We need more ethics in the development of AI-based health technologies. But how do things look in practice for engineers and designers? Concrete solutions are still few and far between. In a joint pilot project with two Integrative Research Centers at TUM, the Munich School of Robotics and Machine Intelligence (MSRM) with its director, Prof. Sami Haddadin, and the Munich Center for Technology in Society (MCTS), with Prof. Ruth Müller, we want to try out the embedded ethics approach. We published the proposal in Nature Machine Intelligence at the end of July [2020].

What exactly is meant by the “embedded ethics approach”?

Prof. Buyx: The idea is to make ethics an integral part of the research process by integrating ethicists into the AI development team from day one. For example, they attend team meetings on a regular basis and create a sort of “ethical awareness” for certain issues. They also raise and analyze specific ethical and social issues.

Is there an example of this concept in practice?

Prof. Buyx: The Geriatronics Research Center, a flagship project of the MSRM in Garmisch-Partenkirchen, is developing robot assistants to enable people to live independently in old age. The center’s initiatives will include the construction of model apartments designed to try out residential concepts where seniors share their living space with robots. At a joint meeting with the participating engineers, it was noted that the idea of using an open concept layout everywhere in the units – with few doors or individual rooms – would give the robots considerable range of motion. With the seniors, however, this living concept could prove upsetting because they are used to having private spaces. At the outset, the engineers had not given explicit consideration to this aspect.

The approach sounds promising. But how can we keep “embedded ethics” from turning into an “ethics washing” exercise, offering companies a comforting sense of “being on the safe side” when developing new AI technologies?

Prof. Buyx: That’s not something we can be certain of avoiding. The key is mutual openness and a willingness to listen, with the goal of finding a common language – and subsequently being prepared to effectively implement the ethical aspects. At TUM we are ideally positioned to achieve this. Prof. Sami Haddadin, the director of the MSRM, is also a member of the EU High-Level Group of Artificial Intelligence. In his research, he is guided by the concept of human centered engineering. Consequently, he has supported the idea of embedded ethics from the very beginning. But one thing is certain: Embedded ethics alone will not suddenly make AI “turn ethical”. Ultimately, that will require laws, codes of conduct and possibly state incentives.

Here’s a link to and a citation for the paper espousing the embedded ethics for AI development approach,

An embedded ethics approach for AI development by Stuart McLennan, Amelia Fiske, Leo Anthony Celi, Ruth Müller, Jan Harder, Konstantin Ritt, Sami Haddadin & Alena Buyx. Nature Machine Intelligence (2020) DOI: https://doi.org/10.1038/s42256-020-0214-1 Published 31 July 2020

This paper is behind a paywall.

Religion, ethics, and AI

For some reason embedded ethics and AI got me to thinking about Pope Francis and other religious leaders.

The Roman Catholic Church and AI

There was a recent announcement that the Roman Catholic Church will be working with Microsoft and IBM on AI and ethics, according to a February 28, 2020 article by Jen Copestake for British Broadcasting Corporation (BBC) news online (Note: A link has been removed),

Leaders from the two tech giants met senior church officials in Rome, and agreed to collaborate on “human-centred” ways of designing AI.

Microsoft president Brad Smith admitted some people may “think of us as strange bedfellows” at the signing event.

“But I think the world needs people from different places to come together,” he said.

The call was supported by Pope Francis, in his first detailed remarks about the impact of artificial intelligence on humanity.

The Rome Call for Ethics [sic] was co-signed by Mr Smith, IBM executive vice-president John Kelly and president of the Pontifical Academy for Life Archbishop Vincenzo Paglia.

It puts humans at the centre of new technologies, asking for AI to be designed with a focus on the good of the environment and “our common and shared home and of its human inhabitants”.

Framing the current era as a “renAIssance”, the speakers said the invention of artificial intelligence would be as significant to human development as the invention of the printing press or combustion engine.

UN Food and Agricultural Organization director Qu Dongyu and Italy’s technology minister Paola Pisano were also co-signatories.

Hannah Brockhaus’s February 28, 2020 article for the Catholic News Agency provides some details missing from the BBC report and I found it quite helpful when trying to understand the various pieces that make up this initiative,

The Pontifical Academy for Life signed Friday [February 28, 2020], alongside presidents of IBM and Microsoft, a call for ethical and responsible use of artificial intelligence technologies.

According to the document, “the sponsors of the call express their desire to work together, in this context and at a national and international level, to promote ‘algor-ethics.’”

“Algor-ethics,” according to the text, is the ethical use of artificial intelligence according to the principles of transparency, inclusion, responsibility, impartiality, reliability, security, and privacy.

The signing of the “Rome Call for AI Ethics [PDF]” took place as part of the 2020 assembly of the Pontifical Academy for Life, which was held Feb. 26-28 [2020] on the theme of artificial intelligence.

One part of the assembly was dedicated to private meetings of the academics of the Pontifical Academy for Life. The second was a workshop on AI and ethics that drew 356 participants from 41 countries.

On the morning of Feb. 28 [2020], a public event took place called “renAIssance. For a Humanistic Artificial Intelligence” and included the signing of the AI document by Microsoft President Brad Smith, and IBM Executive Vice-president John Kelly III.

The Director General of FAO, Dongyu Qu, and politician Paola Pisano, representing the Italian government, also signed.

The president of the European Parliament, David Sassoli, was also present Feb. 28.

Pope Francis canceled his scheduled appearance at the event due to feeling unwell. His prepared remarks were read by Archbishop Vincenzo Paglia, president of the Academy for Life.

You can find Pope Francis’s comments about the document here (if you’re not comfortable reading Italian, hopefully, the English translation which follows directly afterward will be helpful). The Pope’s AI initiative has a dedicated website, Rome Call for AI ethics, and while most of the material dates from the February 2020 announcement, they are keeping up a blog. It has two entries, one dated in May 2020 and another in September 2020.

Buddhism and AI

The Dalai Lama is well known for having an interest in science and having hosted scientists for various dialogues. So, I was able to track down a November 10, 2016 article by Ariel Conn for the futureoflife.org website, which features his insights on the matter,

The question of what it means and what it takes to feel needed is an important problem for ethicists and philosophers, but it may be just as important for AI researchers to consider. The Dalai Lama argues that lack of meaning and purpose in one’s work increases frustration and dissatisfaction among even those who are gainfully employed.

“The problem,” says the Dalai Lama, “is … the growing number of people who feel they are no longer useful, no longer needed, no longer one with their societies. … Feeling superfluous is a blow to the human spirit. It leads to social isolation and emotional pain, and creates the conditions for negative emotions to take root.”

If feeling needed and feeling useful are necessary for happiness, then AI researchers may face a conundrum. Many researchers hope that job loss due to artificial intelligence and automation could, in the end, provide people with more leisure time to pursue enjoyable activities. But if the key to happiness is feeling useful and needed, then a society without work could be just as emotionally challenging as today’s career-based societies, and possibly worse.

I also found a talk on the topic by The Venerable Tenzin Priyadarshi, first here’s a description from his bio at the Dalai Lama Center for Ethics and Transformative Values webspace on the Massachusetts Institute of Technology (MIT) website,

… an innovative thinker, philosopher, educator and a polymath monk. He is Director of the Ethics Initiative at the MIT Media Lab and President & CEO of The Dalai Lama Center for Ethics and Transformative Values at the Massachusetts Institute of Technology. Venerable Tenzin’s unusual background encompasses entering a Buddhist monastery at the age of ten and receiving graduate education at Harvard University with degrees ranging from Philosophy to Physics to International Relations. He is a Tribeca Disruptive Fellow and a Fellow at the Center for Advanced Study in Behavioral Sciences at Stanford University. Venerable Tenzin serves on the boards of a number of academic, humanitarian, and religious organizations. He is the recipient of several recognitions and awards and received Harvard’s Distinguished Alumni Honors for his visionary contributions to humanity.

He gave the 2018 Roger W. Heyns Lecture in Religion and Society at Stanford University on the topic, “Religious and Ethical Dimensions of Artificial Intelligence.” The video runs over one hour but he is a sprightly speaker (in comparison to other Buddhist speakers I’ve listened to over the years).

Judaism, Islam, and other Abrahamic faiths examine AI and ethics

I was delighted to find this January 30, 2020 Artificial Intelligence: Implications for Ethics and Religion event as it brought together a range of thinkers from various faiths and disciplines,

New technologies are transforming our world every day, and the pace of change is only accelerating.  In coming years, human beings will create machines capable of out-thinking us and potentially taking on such uniquely-human traits as empathy, ethical reasoning, perhaps even consciousness.  This will have profound implications for virtually every human activity, as well as the meaning we impart to life and creation themselves.  This conference will provide an introduction for non-specialists to Artificial Intelligence (AI):

What is it?  What can it do and be used for?  And what will be its implications for choice and free will; economics and worklife; surveillance economies and surveillance states; the changing nature of facts and truth; and the comparative intelligence and capabilities of humans and machines in the future? 

Leading practitioners, ethicists and theologians will provide cross-disciplinary and cross-denominational perspectives on such challenges as technology addiction, inherent biases and resulting inequalities, the ethics of creating destructive technologies and of turning decision-making over to machines from self-driving cars to “autonomous weapons” systems in warfare, and how we should treat the suffering of “feeling” machines.  The conference ultimately will address how we think about our place in the universe and what this means for both religious thought and theological institutions themselves.

UTS [Union Theological Seminary] is the oldest independent seminary in the United States and has long been known as a bastion of progressive Christian scholarship.  JTS [Jewish Theological Seminary] is one of the academic and spiritual centers of Conservative Judaism and a major center for academic scholarship in Jewish studies. The Riverside Church is an interdenominational, interracial, international, open, welcoming, and affirming church and congregation that has served as a focal point of global and national activism for peace and social justice since its inception and continues to serve God through word and public witness. The annual Greater Good Gathering, the following week at Columbia University’s School of International & Public Affairs, focuses on how technology is changing society, politics and the economy – part of a growing nationwide effort to advance conversations promoting the “greater good.”

They have embedded a video of the event (it runs a little over seven hours) on the January 30, 2020 Artificial Intelligence: Implications for Ethics and Religion event page. For anyone who finds that a daunting amount of information, you may want to check out the speaker list for ideas about who might be writing and thinking on this topic.

As for Islam, I did track down this November 29, 2018 article by Shahino Mah Abdullah, a fellow at the Institute of Advanced Islamic Studies (IAIS) Malaysia,

As the global community continues to work together on the ethics of AI, there are still vast opportunities to offer ethical inputs, including the ethical principles based on Islamic teachings.

This is in line with Islam’s encouragement for its believers to convey beneficial messages, including to share its ethical principles with society.

In Islam, ethics or akhlak (virtuous character traits) in Arabic, is sometimes employed interchangeably in the Arabic language with adab, which means the manner, attitude, behaviour, and etiquette of putting things in their proper places. Islamic ethics cover all the legal concepts ranging from syariah (Islamic law), fiqh (jurisprudence), qanun (ordinance), and ‘urf (customary practices).

Adopting and applying moral values based on the Islamic ethical concept or applied Islamic ethics could be a way to address various issues in today’s societies.

At the same time, this approach is in line with the higher objectives of syariah (maqasid al-syariah) that is aimed at conserving human benefit by the protection of human values, including faith (hifz al-din), life (hifz al-nafs), lineage (hifz al-nasl), intellect (hifz al-‘aql), and property (hifz al-mal). This approach could be very helpful to address contemporary issues, including those related to the rise of AI and intelligent robots.


Part of the difficulty with tracking down more about AI, ethics, and various religions is linguistic. I simply don’t have the language skills to search for the commentaries and, even in English, I may not have the best or most appropriate search terms.

Television (TV) episodes stored on DNA?

According to a Sept. 1, 2020 news item on Nanowerk, the first episode of a tv series, ‘Biohackers’ has been stored on synthetic DNA (deoxyribonucleic acid) by a researcher at TUM and colleagues at another institution,

The first episode of the newly released series “Biohackers” was stored in the form of synthetic DNA. This was made possible by the research of Prof. Reinhard Heckel of the Technical University of Munich (TUM) and his colleague Prof. Robert Grass of ETH Zürich.

They have developed a method that permits the stable storage of large quantities of data on DNA for over 1000 years.

A Sept. 1, 2020 TUM press release, which originated the news item, proceeds with more detail in an interview format,

Prof. Heckel, Biohackers is about a medical student seeking revenge on a professor with a dark past – and the manipulation of DNA with biotechnology tools. You were commissioned to store the series on DNA. How does that work?

First, I should mention that what we’re talking about is artificially generated – in other words, synthetic – DNA. DNA consists of four building blocks: the nucleotides adenine (A), thymine (T), guanine (G) and cytosine (C). Computer data, meanwhile, are coded as zeros and ones. The first episode of Biohackers consists of a sequence of around 600 million zeros and ones. To code the sequence 01 01 11 00 in DNA, for example, we decide which number combinations will correspond to which letters. For example: 00 is A, 01 is C, 10 is G and 11 is T. Our example then produces the DNA sequence CCTA. Using this principle of DNA data storage, we have stored the first episode of the series on DNA.
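Heckel’s example is easy to verify in a few lines. Here’s a minimal sketch of that exact two-bits-per-base mapping (just the mapping; the real pipeline adds error-correcting redundancy on top):

```python
# The mapping quoted above: 00 -> A, 01 -> C, 10 -> G, 11 -> T,
# applied to a bit string two bits at a time.
BIT_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BIT = {base: bits for bits, base in BIT_TO_BASE.items()}

def bits_to_dna(bits):
    assert len(bits) % 2 == 0, "pad to an even number of bits"
    return "".join(BIT_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_to_bits(seq):
    return "".join(BASE_TO_BIT[base] for base in seq)

print(bits_to_dna("01011100"))            # prints CCTA, matching the example
print(dna_to_bits("CCTA") == "01011100")  # prints True: the round trip works
```

At this rate each base carries two bits, so the episode’s roughly 600 million zeros and ones would come out to about 300 million bases before any redundancy is added.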

And to view the series – is it just a matter of “reverse translation” of the letters?

In a very simplified sense, you can visualize it like that. When writing, storing and reading the DNA, however, errors occur. If these errors are not corrected, the data stored on the DNA will be lost. To solve the problem, I have developed an algorithm based on channel coding. This method involves correcting errors that take place during information transfers. The underlying idea is to add redundancy to the data. Think of language: When we read or hear a word with missing or incorrect letters, the computing power of our brain is still capable of understanding the word. The algorithm follows the same principle: It encodes the data with sufficient redundancy to ensure that even highly inaccurate data can be restored later.
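As a deliberately crude stand-in for Heckel’s channel code (his algorithm is far more efficient; this is only my illustration of the redundancy principle), a 3x repetition code already shows how added redundancy lets corrupted symbols be recovered by majority vote:

```python
# Toy channel code: repeat every bit three times, decode by majority vote.
# This triples the data volume; real codes get the same protection with
# far less added redundancy.

def encode(bits):
    return "".join(b * 3 for b in bits)

def decode(received):
    out = []
    for i in range(0, len(received), 3):
        chunk = received[i:i + 3]
        out.append("1" if chunk.count("1") >= 2 else "0")  # majority vote
    return "".join(out)

data = "0101"
sent = encode(data)               # "000111000111"
corrupted = "010111000110"        # two symbols flipped "in transit"
print(decode(corrupted) == data)  # prints True: both errors corrected
```

Like the missing-letters-in-a-word analogy, each symbol is backed up by enough context that isolated errors can be voted away.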

Channel coding is used in many fields, including in telecommunications. What challenges did you face when developing your solution?

The first challenge was to create an algorithm specifically geared to the errors that occur in DNA. The second one was to make the algorithm so efficient that the largest possible quantities of data can be stored on the smallest possible quantity of DNA, so that only the absolutely necessary amount of redundancy is added. We demonstrated that our algorithm is optimized in that sense.

DNA data storage is very expensive because of the complexity of DNA production as well as the reading process. What makes DNA an attractive storage medium despite these challenges?

First, DNA has a very high information density. This permits the storage of enormous data volumes in a minimal space. In the case of the TV series, we stored “only” 100 megabytes on a picogram – or a trillionth of a gram of DNA. Theoretically, however, it would be possible to store up to 200 exabytes on one gram of DNA. And DNA lasts a long time. By comparison: If you never turned on your PC or wrote data to the hard disk it contains, the data would disappear after a couple of years. By contrast, DNA can remain stable for many thousands of years if it is packed right.
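Those densities are easy to sanity-check (my arithmetic, not the researchers’, and assuming decimal units: 1 MB = 10^6 bytes, 1 EB = 10^18 bytes):

```python
# Back-of-envelope check of the quoted DNA storage densities.
stored_bytes = 100 * 10**6       # 100 MB of video, as quoted
picograms_per_gram = 10**12      # a picogram is a trillionth of a gram
bytes_per_gram = stored_bytes * picograms_per_gram
exabyte = 10**18

print(bytes_per_gram // exabyte)        # prints 100 -> the demo ran at ~100 EB per gram
print(200 * exabyte // bytes_per_gram)  # prints 2 -> the quoted ceiling is ~2x higher
```

So the demonstration already sits within a factor of about two of the quoted theoretical maximum of 200 exabytes per gram.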

And the method you have developed also makes the DNA strands durable – practically indestructible.

My colleague Robert Grass was the first to develop a process for the “stable packing” of DNA strands by encapsulating them in nanometer-scale spheres made of silica glass. This ensures that the DNA is protected against mechanical influences. In a joint paper in 2015, we presented the first robust DNA data storage concept with our algorithm and the encapsulation process developed by Prof. Grass. Since then we have continuously improved our method. In our most recent publication in Nature Protocols of January 2020, we passed on what we have learned.

What are your next steps? Does data storage on DNA have a future?

We’re working on a way to make DNA data storage cheaper and faster. “Biohackers” was a milestone en route to commercialization. But we still have a long way to go. If this technology proves successful, big things will be possible. Entire libraries, all movies, photos, music and knowledge of every kind – provided it can be represented in the form of data – could be stored on DNA and would thus be available to humanity for eternity.

Here’s a link to and a citation for the paper,

Reading and writing digital data in DNA by Linda C. Meiser, Philipp L. Antkowiak, Julian Koch, Weida D. Chen, A. Xavier Kohll, Wendelin J. Stark, Reinhard Heckel & Robert N. Grass. Nature Protocols volume 15, pages 86–101 (2020) Issue Date: January 2020 DOI: https://doi.org/10.1038/s41596-019-0244-5 Published [online] 29 November 2019

This paper is behind a paywall.

As for ‘Biohackers’, it’s a German science fiction television series and you can find out more about it here on the Internet Movie Database.

Turning brain-controlled wireless electronic prostheses into reality plus some ethical points

Researchers at Stanford University (California, US) believe they have a solution for a problem with neuroprosthetics (Note: I have included brief comments about neuroprosthetics and possible ethical issues at the end of this posting), according to an August 5, 2020 news item on ScienceDaily,

The current generation of neural implants record enormous amounts of neural activity, then transmit these brain signals through wires to a computer. But, so far, when researchers have tried to create wireless brain-computer interfaces to do this, it took so much power to transmit the data that the implants generated too much heat to be safe for the patient. A new study suggests how to solve this problem — and thus cut the wires.

Caption: Photo of a current neural implant that uses wires to transmit information and receive power. New research suggests how to one day cut the wires. Credit: Sergey Stavisky

An August 3, 2020 Stanford University news release (also on EurekAlert but published August 4, 2020) by Tom Abate, which originated the news item, details the problem and the proposed solution,

Stanford researchers have been working for years to advance a technology that could one day help people with paralysis regain use of their limbs, and enable amputees to use their thoughts to control prostheses and interact with computers.

The team has been focusing on improving a brain-computer interface, a device implanted beneath the skull on the surface of a patient’s brain. This implant connects the human nervous system to an electronic device that might, for instance, help restore some motor control to a person with a spinal cord injury, or someone with a neurological condition like amyotrophic lateral sclerosis, also called Lou Gehrig’s disease.

The current generation of these devices record enormous amounts of neural activity, then transmit these brain signals through wires to a computer. But when researchers have tried to create wireless brain-computer interfaces to do this, it took so much power to transmit the data that the devices would generate too much heat to be safe for the patient.

Now, a team led by electrical engineers and neuroscientists Krishna Shenoy, PhD, and Boris Murmann, PhD, and neurosurgeon and neuroscientist Jaimie Henderson, MD, have shown how it would be possible to create a wireless device, capable of gathering and transmitting accurate neural signals, but using a tenth of the power required by current wire-enabled systems. These wireless devices would look more natural than the wired models and give patients freer range of motion.

Graduate student Nir Even-Chen and postdoctoral fellow Dante Muratore, PhD, describe the team’s approach in a Nature Biomedical Engineering paper.

The team’s neuroscientists identified the specific neural signals needed to control a prosthetic device, such as a robotic arm or a computer cursor. The team’s electrical engineers then designed the circuitry that would enable a future, wireless brain-computer interface to process and transmit these carefully identified and isolated signals, using less power and thus making it safe to implant the device on the surface of the brain.

To test their idea, the researchers collected neuronal data from three nonhuman primates and one human participant in a BrainGate clinical trial.

As the subjects performed movement tasks, such as positioning a cursor on a computer screen, the researchers took measurements. The findings validated their hypothesis that a wireless interface could accurately control an individual’s motion by recording a subset of action-specific brain signals, rather than acting like the wired device and collecting brain signals in bulk.
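To get a feel for why transmitting only a subset of action-specific signals saves so much power, here’s a back-of-envelope data-rate comparison. (The channel counts, sampling rates, and spike rates below are my own illustrative assumptions, not figures from the paper.)

```python
# Back-of-envelope comparison (illustrative numbers only, not from the
# paper): streaming raw broadband neural data versus transmitting only
# a decoded subset of action-specific events.

def data_rate_bps(channels, samples_per_s, bits_per_sample):
    """Transmission rate in bits per second."""
    return channels * samples_per_s * bits_per_sample

# Hypothetical wired-style system: stream everything from every channel.
raw = data_rate_bps(channels=96, samples_per_s=30_000, bits_per_sample=12)

# Hypothetical wireless-style system: send only threshold-crossing
# events (say ~100 events/s per channel at ~16 bits per event).
events = data_rate_bps(channels=96, samples_per_s=100, bits_per_sample=16)

print(f"raw stream:   {raw / 1e6:.1f} Mbit/s")   # tens of Mbit/s
print(f"event stream: {events / 1e3:.1f} kbit/s") # hundreds of kbit/s
print(f"reduction:    ~{raw // events}x")
```

Even with generous assumptions for the event stream, the reduction is two orders of magnitude, which is the kind of headroom that makes a low-heat wireless implant plausible.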

The next step will be to build an implant based on this new approach and proceed through a series of tests toward the ultimate goal.

Here’s a link to and a citation for the paper,

Power-saving design opportunities for wireless intracortical brain–computer interfaces by Nir Even-Chen, Dante G. Muratore, Sergey D. Stavisky, Leigh R. Hochberg, Jaimie M. Henderson, Boris Murmann & Krishna V. Shenoy. Nature Biomedical Engineering (2020) DOI: https://doi.org/10.1038/s41551-020-0595-9 Published: 03 August 2020

This paper is behind a paywall.

Comments about ethical issues

As I found out while investigating, ethical issues in this area abound. My first thought was to look at how someone with a focus on ability studies might view the complexities.

My ‘go to’ resource for human enhancement and ethical issues is Gregor Wolbring, an associate professor at the University of Calgary (Alberta, Canada). His profile lists these areas of interest: ability studies, disability studies, governance of emerging and existing sciences and technologies (e.g. neuromorphic engineering, genetics, synthetic biology, robotics, artificial intelligence, automatization, brain machine interfaces, sensors) and more.

I can’t find anything more recent from him on this particular topic but I did find an August 10, 2017 essay for The Conversation where he comments on the ethical issues raised by another human enhancement technology, gene-editing. Regardless, he makes points that are applicable to brain-computer interfaces (Note: Links have been removed),

Ability expectations have been and still are used to disable, or disempower, many people, not only people seen as impaired. They’ve been used to disable or marginalize women (men making the argument that rationality is an important ability and women don’t have it). They also have been used to disable and disempower certain ethnic groups (one ethnic group argues they’re smarter than another ethnic group) and others.

A recent Pew Research survey on human enhancement revealed that an increase in the ability to be productive at work was seen as a positive. What does such ability expectation mean for the “us” in an era of scientific advancements in gene-editing, human enhancement and robotics?

Which abilities are seen as more important than others?

The ability expectations among “us” will determine how gene-editing and other scientific advances will be used.

And so how we govern ability expectations, and who influences that governance, will shape the future. Therefore, it’s essential that ability governance and ability literacy play a major role in shaping all advancements in science and technology.

One of the reasons I find Gregor’s commentary so valuable is that he writes lucidly about ability and disability as concepts and poses what can be provocative questions about expectations and what it is to be truly abled or disabled. You can find more of his writing here on his eponymous (more or less) blog.

Ethics of clinical trials for testing brain implants

This October 31, 2017 article by Emily Underwood for Science was revelatory,

In 2003, neurologist Helen Mayberg of Emory University in Atlanta began to test a bold, experimental treatment for people with severe depression, which involved implanting metal electrodes deep in the brain in a region called area 25 [emphases mine]. The initial data were promising; eventually, they convinced a device company, St. Jude Medical in Saint Paul, to sponsor a 200-person clinical trial dubbed BROADEN.

This month [October 2017], however, Lancet Psychiatry reported the first published data on the trial’s failure. The study stopped recruiting participants in 2012, after a 6-month study in 90 people failed to show statistically significant improvements between those receiving active stimulation and a control group, in which the device was implanted but switched off.

… a tricky dilemma for companies and research teams involved in deep brain stimulation (DBS) research: If trial participants want to keep their implants [emphases mine], who will take responsibility—and pay—for their ongoing care? And participants in last week’s meeting said it underscores the need for the growing corps of DBS researchers to think long-term about their planned studies.

… participants bear financial responsibility for maintaining the device should they choose to keep it, and for any additional surgeries that might be needed in the future, Mayberg says. “The big issue becomes cost [emphasis mine],” she says. “We transition from having grants and device donations” covering costs, to patients being responsible. And although the participants agreed to those conditions before enrolling in the trial, Mayberg says she considers it a “moral responsibility” to advocate for lower costs for her patients, even it if means “begging for charity payments” from hospitals. And she worries about what will happen to trial participants if she is no longer around to advocate for them. “What happens if I retire, or get hit by a bus?” she asks.

There’s another uncomfortable possibility: that the hypothesis was wrong [emphases mine] to begin with. A large body of evidence from many different labs supports the idea that area 25 is “key to successful antidepressant response,” Mayberg says. But “it may be too simple-minded” to think that zapping a single brain node and its connections can effectively treat a disease as complex as depression, Krakauer [John Krakauer, a neuroscientist at Johns Hopkins University in Baltimore, Maryland] says. Figuring that out will likely require more preclinical research in people—a daunting prospect that raises additional ethical dilemmas, Krakauer says. “The hardest thing about being a clinical researcher,” he says, “is knowing when to jump.”

Brain-computer interfaces, symbiosis, and ethical issues

This was the most recent and most directly applicable work that I could find. From a July 24, 2019 article by Liam Drew for Nature Outlook: The brain,

“It becomes part of you,” Patient 6 said, describing the technology that enabled her, after 45 years of severe epilepsy, to halt her disabling seizures. Electrodes had been implanted on the surface of her brain that would send a signal to a hand-held device when they detected signs of impending epileptic activity. On hearing a warning from the device, Patient 6 knew to take a dose of medication to halt the coming seizure.

“You grow gradually into it and get used to it, so it then becomes a part of every day,” she told Frederic Gilbert, an ethicist who studies brain–computer interfaces (BCIs) at the University of Tasmania in Hobart, Australia. “It became me,” she said. [emphasis mine]

Gilbert was interviewing six people who had participated in the first clinical trial of a predictive BCI to help understand how living with a computer that monitors brain activity directly affects individuals psychologically1. Patient 6’s experience was extreme: Gilbert describes her relationship with her BCI as a “radical symbiosis”.

Symbiosis is a term, borrowed from ecology, that means an intimate co-existence of two species for mutual advantage. As technologists work towards directly connecting the human brain to computers, it is increasingly being used to describe humans’ potential relationship with artificial intelligence.

Interface technologies are divided into those that ‘read’ the brain to record brain activity and decode its meaning, and those that ‘write’ to the brain to manipulate activity in specific regions and affect their function.

Commercial research is opaque, but scientists at social-media platform Facebook are known to be pursuing brain-reading techniques for use in headsets that would convert users’ brain activity into text. And neurotechnology companies such as Kernel in Los Angeles, California, and Neuralink, founded by Elon Musk in San Francisco, California, predict bidirectional coupling in which computers respond to people’s brain activity and insert information into their neural circuitry. [emphasis mine]

Already, it is clear that melding digital technologies with human brains can have provocative effects, not least on people’s agency — their ability to act freely and according to their own choices. Although neuroethicists’ priority is to optimize medical practice, their observations also shape the debate about the development of commercial neurotechnologies.

Neuroethicists began to note the complex nature of the therapy’s side effects. “Some effects that might be described as personality changes are more problematic than others,” says Maslen [Hannah Maslen, a neuroethicist at the University of Oxford, UK]. A crucial question is whether the person who is undergoing stimulation can reflect on how they have changed. Gilbert, for instance, describes a DBS patient who started to gamble compulsively, blowing his family’s savings and seeming not to care. He could only understand how problematic his behaviour was when the stimulation was turned off.

Such cases present serious questions about how the technology might affect a person’s ability to give consent to be treated, or for treatment to continue. [emphases mine] If the person who is undergoing DBS is happy to continue, should a concerned family member or doctor be able to overrule them? If someone other than the patient can terminate treatment against the patient’s wishes, it implies that the technology degrades people’s ability to make decisions for themselves. It suggests that if a person thinks in a certain way only when an electrical current alters their brain activity, then those thoughts do not reflect an authentic self.

To observe a person with tetraplegia bringing a drink to their mouth using a BCI-controlled robotic arm is spectacular. [emphasis mine] This rapidly advancing technology works by implanting an array of electrodes either on or in a person’s motor cortex — a brain region involved in planning and executing movements. The activity of the brain is recorded while the individual engages in cognitive tasks, such as imagining that they are moving their hand, and these recordings are used to command the robotic limb.

If neuroscientists could unambiguously discern a person’s intentions from the chattering electrical activity that they record in the brain, and then see that it matched the robotic arm’s actions, ethical concerns would be minimized. But this is not the case. The neural correlates of psychological phenomena are inexact and poorly understood, which means that signals from the brain are increasingly being processed by artificial intelligence (AI) software before reaching prostheses. [emphasis mine]

But, he [Philipp Kellmeyer, a neurologist and neuroethicist at the University of Freiburg, Germany] says, using AI tools also introduces ethical issues of which regulators have little experience. [emphasis mine] Machine-learning software learns to analyse data by generating algorithms that cannot be predicted and that are difficult, or impossible, to comprehend. This introduces an unknown and perhaps unaccountable process between a person’s thoughts and the technology that is acting on their behalf.

Maslen is already helping to shape BCI-device regulation. She is in discussion with the European Commission about regulations it will implement in 2020 that cover non-invasive brain-modulating devices that are sold straight to consumers. [emphases mine; Note: There is a Canadian company selling this type of product, MUSE] Maslen became interested in the safety of these devices, which were covered by only cursory safety regulations. Although such devices are simple, they pass electrical currents through people’s scalps to modulate brain activity. Maslen found reports of them causing burns, headaches and visual disturbances. She also says clinical studies have shown that, although non-invasive electrical stimulation of the brain can enhance certain cognitive abilities, this can come at the cost of deficits in other aspects of cognition.

Regarding my note about MUSE, the company is InteraXon and its product is MUSE. They advertise the product as “Brain Sensing Headbands That Improve Your Meditation Practice.” The company website and the product seem to be one entity, Choose Muse. The company’s product has been used in some serious research; the papers can be found here. I did not see any research papers concerning safety issues.

Getting back to Drew’s July 24, 2019 article and Patient 6,

… He [Gilbert] is now preparing a follow-up report on Patient 6. The company that implanted the device in her brain to help free her from seizures went bankrupt. The device had to be removed.

… Patient 6 cried as she told Gilbert about losing the device. … “I lost myself,” she said.

“It was more than a device,” Gilbert says. “The company owned the existence of this new person.”

I strongly recommend reading Drew’s July 24, 2019 article in its entirety.


It’s easy to forget that in all the excitement over technologies ‘making our lives better’ that there can be a dark side or two. Some of the points brought forth in the articles by Wolbring, Underwood, and Drew confirmed my uneasiness as reasonable and gave me some specific examples of how these technologies raise new issues or old issues in new ways.

What I find interesting is that no one is using the term ‘cyborg’, which would seem quite applicable. There is an April 20, 2012 posting here titled ‘My mother is a cyborg‘ where I noted that by at least one definition people with joint replacements, pacemakers, etc. are considered cyborgs. In short, cyborgs, or technology integrated into bodies, have been amongst us for quite some time.

Interestingly, no one seems to care much when insects are turned into cyborgs (I can’t remember who pointed this out) but it is a popular area of research, especially for military and search-and-rescue applications.

I’ve sometimes used the terms ‘machine/flesh’ and ‘augmentation’ to describe technologies integrated with bodies, human or otherwise. You can find lots on the topic here, however I’ve tagged or categorized it.

Amongst other pieces you can find here, there’s the August 8, 2016 posting, ‘Technology, athletics, and the ‘new’ human‘ featuring Oscar Pistorius when he was still best known as the ‘blade runner’ and a remarkably successful paralympic athlete. It’s about his efforts to compete against able-bodied athletes at the London Olympic Games in 2012. It is fascinating to read about technology and elite athletes of any kind as they are often the first to try out ‘enhancements’.

Gregor Wolbring has a number of essays on The Conversation looking at Paralympic athletes and their pursuit of enhancements and how all of this is affecting our notions of abilities and disabilities. By extension, one has to assume that ‘abled’ athletes are also affected with the trickle-down effect on the rest of us.

Regardless of where we start the investigation, there is a sameness to the participants in neuroethics discussions with a few experts and commercial interests deciding on how the rest of us (however you define ‘us’ as per Gregor Wolbring’s essay) will live.

This paucity of perspectives is something I was getting at in my COVID-19 editorial for the Canadian Science Policy Centre. My thesis being that we need a range of ideas and insights that cannot be culled from small groups of people who’ve trained and read the same materials or entrepreneurs who too often seem to put profit over thoughtful implementations of new technologies. (See the May 2020 PDF edition [you’ll find me under Policy Development] or see my May 15, 2020 posting here, with all the sources listed.)

As for this new research at Stanford, it’s exciting news that raises questions, even as it offers the hope of independent movement for people diagnosed as tetraplegic (sometimes known as quadriplegic).

A biohybrid artificial synapse that can communicate with living cells

As I noted in my June 16, 2020 posting, we may have more than one kind of artificial brain in our future. This latest work features a biohybrid. From a June 15, 2020 news item on ScienceDaily,

In 2017, Stanford University researchers presented a new device that mimics the brain’s efficient and low-energy neural learning process [see my March 8, 2017 posting for more]. It was an artificial version of a synapse — the gap across which neurotransmitters travel to communicate between neurons — made from organic materials. In 2019, the researchers assembled nine of their artificial synapses together in an array, showing that they could be simultaneously programmed to mimic the parallel operation of the brain [see my Sept. 17, 2019 posting].

Now, in a paper published June 15 [2020] in Nature Materials, they have tested the first biohybrid version of their artificial synapse and demonstrated that it can communicate with living cells. Future technologies stemming from this device could function by responding directly to chemical signals from the brain. The research was conducted in collaboration with researchers at Istituto Italiano di Tecnologia (Italian Institute of Technology — IIT) in Italy and at Eindhoven University of Technology (Netherlands).

“This paper really highlights the unique strength of the materials that we use in being able to interact with living matter,” said Alberto Salleo, professor of materials science and engineering at Stanford and co-senior author of the paper. “The cells are happy sitting on the soft polymer. But the compatibility goes deeper: These materials work with the same molecules neurons use naturally.”

While other brain-integrated devices require an electrical signal to detect and process the brain’s messages, the communications between this device and living cells occur through electrochemistry — as though the material were just another neuron receiving messages from its neighbor.

A June 15, 2020 Stanford University news release (also on EurekAlert) by Taylor Kubota, which originated the news item, delves further into this recent work,

How neurons learn

The biohybrid artificial synapse consists of two soft polymer electrodes, separated by a trench filled with electrolyte solution – which plays the part of the synaptic cleft that separates communicating neurons in the brain. When living cells are placed on top of one electrode, neurotransmitters that those cells release can react with that electrode to produce ions. Those ions travel across the trench to the second electrode and modulate the conductive state of this electrode. Some of that change is preserved, simulating the learning process occurring in nature.
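The description above suggests a simple way to think about the device: each neurotransmitter reaction nudges the electrode’s conductance and the change sticks. Here is a toy model of that behaviour (my own sketch, not the authors’ device physics; the class name, rate, and units are all made up for illustration),

```python
# Toy model (illustrative only, not the published device physics): a
# synapse-like element whose conductance is permanently shifted each
# time a neurotransmitter event reacts with the electrode.

class BiohybridSynapse:
    def __init__(self, g=1.0, g_max=10.0):
        self.g = g          # conductance, arbitrary units
        self.g_max = g_max  # saturation level

    def neurotransmitter_event(self, dose=1.0, rate=0.1):
        # Irreversible reaction: conductance moves toward g_max and the
        # change is retained, standing in for the preserved "learning".
        self.g += rate * dose * (self.g_max - self.g)
        return self.g

syn = BiohybridSynapse()
for _ in range(5):
    syn.neurotransmitter_event()
print(round(syn.g, 3))  # conductance has shifted and stays shifted
```

The key property the toy model captures is nonvolatility: there is no step that relaxes the conductance back to its starting value, matching the “permanent change” the researchers report.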

“In a biological synapse, essentially everything is controlled by chemical interactions at the synaptic junction. Whenever the cells communicate with one another, they’re using chemistry,” said Scott Keene, a graduate student at Stanford and co-lead author of the paper. “Being able to interact with the brain’s natural chemistry gives the device added utility.”

This process mimics the same kind of learning seen in biological synapses, which is highly efficient in terms of energy because computing and memory storage happen in one action. In more traditional computer systems, the data is processed first and then later moved to storage.

To test their device, the researchers used rat neuroendocrine cells that release the neurotransmitter dopamine. Before they ran their experiment, they were unsure how the dopamine would interact with their material – but they saw a permanent change in the state of their device upon the first reaction.

“We knew the reaction is irreversible, so it makes sense that it would cause a permanent change in the device’s conductive state,” said Keene. “But, it was hard to know whether we’d achieve the outcome we predicted on paper until we saw it happen in the lab. That was when we realized the potential this has for emulating the long-term learning process of a synapse.”

A first step

This biohybrid design is in such early stages that the main focus of the current research was simply to make it work.

“It’s a demonstration that this communication melding chemistry and electricity is possible,” said Salleo. “You could say it’s a first step toward a brain-machine interface, but it’s a tiny, tiny very first step.”

Now that the researchers have successfully tested their design, they are figuring out the best paths for future research, which could include work on brain-inspired computers, brain-machine interfaces, medical devices or new research tools for neuroscience. Already, they are working on how to make the device function better in more complex biological settings that contain different kinds of cells and neurotransmitters.

Here’s a link to and a citation for the paper,

A biohybrid synapse with neurotransmitter-mediated plasticity by Scott T. Keene, Claudia Lubrano, Setareh Kazemzadeh, Armantas Melianas, Yaakov Tuchman, Giuseppina Polino, Paola Scognamiglio, Lucio Cinà, Alberto Salleo, Yoeri van de Burgt & Francesca Santoro. Nature Materials (2020) DOI: https://doi.org/10.1038/s41563-020-0703-y Published: 15 June 2020

This paper is behind a paywall.

Brain scan variations

The Scientist is a magazine I do not feature here often enough. The latest issue (June 2020) features a May 20, 2020 opinion piece by Ruth Williams on a recent study about interpreting brain scans—70 different teams of neuroimaging experts were involved (Note: Links have been removed),

In a test of scientific reproducibility, multiple teams of neuroimaging experts from across the globe were asked to independently analyze and interpret the same functional magnetic resonance imaging dataset. The results of the test, published in Nature today (May 20), show that each team performed the analysis in a subtly different manner and that their conclusions varied as a result. While highlighting the cause of the irreproducibility—human methodological decisions—the paper also reveals ways to safeguard future studies against it.

Problems with reproducibility plague all areas of science, and have been particularly highlighted in the fields of psychology and cancer through projects run in part by the Center for Open Science. Now, neuroimaging has come under the spotlight thanks to a collaborative project by neuroimaging experts around the world called the Neuroimaging Analysis Replication and Prediction Study (NARPS).

Neuroimaging, specifically functional magnetic resonance imaging (fMRI), which produces pictures of blood flow patterns in the brain that are thought to relate to neuronal activity, has been criticized in the past for problems such as poor study design and statistical methods, and specifying hypotheses after results are known (SHARKing), says neurologist Alain Dagher of McGill University who was not involved in the study. A particularly memorable criticism of the technique was a paper demonstrating that, without needed statistical corrections, it could identify apparent brain activity in a dead fish.

Perhaps because of such criticisms, nowadays fMRI “is a field that is known to have a lot of cautiousness about statistics and . . . about the sample sizes,” says neuroscientist Tom Schonberg of Tel Aviv University, an author of the paper and co-coordinator of NARPS. Also, unlike in many areas of biology, he adds, the image analysis is computational, not manual, so fewer biases might be expected to creep in.

Schonberg was therefore a little surprised to see the NARPS results, admitting, “it wasn’t easy seeing this variability, but it was what it was.”

The study, led by Schonberg together with psychologist Russell Poldrack of Stanford University and neuroimaging statistician Thomas Nichols of the University of Oxford, recruited independent teams of researchers around the globe to analyze and interpret the same raw neuroimaging data—brain scans of 108 healthy adults taken while the subjects were at rest and while they performed a simple decision-making task about whether to gamble a sum of money.

Each of the 70 research teams taking part used one of three different image analysis software packages. But variations in the final results didn’t depend on these software choices, says Nichols. Instead, they came down to numerous steps in the analysis that each require a human’s decision, such as how to correct for motion of the subjects’ heads, how signal-to-noise ratios are enhanced, how much image smoothing to apply—that is, how strictly the anatomical regions of the brain are defined—and which statistical approaches and thresholds to use.
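To see how easily such human choices change an outcome, here’s a toy illustration (in no way the NARPS analysis; just a synthetic one-dimensional ‘activation map’ run through two made-up pipelines that differ only in smoothing and threshold),

```python
# Toy demonstration: identical data, two analysis pipelines, two answers.
import random

# Synthetic data: a weak "activation" riding on noise (seeded so the
# example is repeatable).
random.seed(0)
signal = [0.3 if 40 <= i < 60 else 0.0 for i in range(100)]
data = [s + random.gauss(0, 0.5) for s in signal]

def smooth(xs, width):
    """Moving-average smoother (a crude stand-in for Gaussian smoothing)."""
    half = width // 2
    out = []
    for i in range(len(xs)):
        window = xs[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

def count_active(xs, threshold):
    """Number of 'voxels' declared active at a given threshold."""
    return sum(1 for x in xs if x > threshold)

# Two hypothetical teams analyzing the identical dataset:
team_a = count_active(smooth(data, 3), threshold=0.4)   # light smoothing, lenient
team_b = count_active(smooth(data, 15), threshold=0.6)  # heavy smoothing, strict
print(team_a, team_b)  # the counts generally differ
```

Neither pipeline is “wrong,” which is exactly the problem the NARPS project documents: defensible, undocumented choices accumulate into different conclusions.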

If this topic interests you, I strongly suggest you read Williams’ article in its entirety.

Here are two links to the paper,

Variability in the analysis of a single neuroimaging dataset by many teams. Nature DOI: https://doi.org/10.1038/s41586-020-2314-9 Published online: 20 May 2020

This first one seems to be a free version of the paper.

Variability in the analysis of a single neuroimaging dataset by many teams by R. Botvinik-Nezer, F. Holzmeister, C. F. Camerer, et al. (at least 70 authors in total) Nature 582, 84–88 (2020). DOI: https://doi.org/10.1038/s41586-020-2314-9 Published 20 May 2020 Issue Date 04 June 2020

This version is behind a paywall.

Are nano electronics as good as gold?

“As good as gold” was a behavioural goal when I was a child. It turns out, the same can be said of gold in electronic devices according to the headline for a March 26, 2020 news item on Nanowerk (Note: Links have been removed),

As electronics shrink to nanoscale, will they still be good as gold?

Deep inside computer chips, tiny wires made of gold and other conductive metals carry the electricity used to process data.

But as these interconnected circuits shrink to nanoscale, engineers worry that pressure, such as that caused by thermal expansion when current flows through these wires, might cause gold to behave more like a liquid than a solid, making nanoelectronics unreliable. That, in turn, could force chip designers to hunt for new materials to make these critical wires.

But according to a new paper in Physical Review Letters (“Nucleation of Dislocations in 3.9 nm Nanocrystals at High Pressure”), chip designers can rest easy. “Gold still behaves like a solid at these small scales,” says Stanford mechanical engineer Wendy Gu, who led a team that figured out how to pressurize gold particles just 4 nanometers in length — the smallest particles ever measured — to assess whether current flows might cause the metal’s atomic structure to collapse.

I have seen the issue about gold as a metal or liquid before but I can’t find it here (search engines, sigh). However, I found this somewhat related story from almost five years ago. In my April 14, 2015 posting (Gold atoms: sometimes they’re a metal and sometimes they’re a molecule), there was news that the number of gold atoms present makes the difference between being a metal and being a molecule. This could have implications as circuit elements (which include some gold in their fabrication) shrink down past a certain point.

A March 24, 2020 Stanford University news release (also on Eurekalert but published on March 25, 2020) by Andrew Myers, which originated the news item, provides details about research designed to investigate a similar question, i.e., can we use gold as we shrink the scale?*,

To conduct the experiment, Gu’s team first had to devise a way to put tiny gold particles under extreme pressure, while simultaneously measuring how much that pressure damaged gold’s atomic structure.

To solve the first problem, they turned to the field of high-pressure physics to borrow a device known as a diamond anvil cell. As the name implies, both hammer and anvil are diamonds that are used to compress the gold. As Gu explained, a nanoparticle of gold is built like a skyscraper with atoms forming a crystalline lattice of neat rows and columns. She knew that pressure from the anvil would dislodge some atoms from the crystal and create tiny defects in the gold.

The next challenge was to detect these defects in nanoscale gold. The scientists shined X-rays through the diamond onto the gold. Defects in the crystal caused the X-rays to reflect at different angles than they would on uncompressed gold. By measuring variations in the angles at which the X-rays bounced off the particles before and after pressure was applied, the team was able to tell whether the particles retained the deformations or reverted to their original state when pressure was lifted.
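The principle behind that measurement is Bragg’s law, n·λ = 2d·sin(θ): X-rays diffract at an angle set by the spacing between atomic planes, so a defect- or strain-induced change in spacing shifts the angle. Here’s a sketch with plausible but assumed numbers (the X-ray wavelength and the strained spacing are my illustrations, not values from the paper; the gold (111) spacing is the standard bulk figure),

```python
# Bragg's law sketch: a small change in lattice spacing produces a
# measurable shift in the diffraction angle. Wavelength and strained
# spacing below are assumed for illustration.
import math

def bragg_angle_deg(wavelength_nm, d_spacing_nm, order=1):
    """Diffraction angle theta from Bragg's law: n*lambda = 2*d*sin(theta)."""
    return math.degrees(math.asin(order * wavelength_nm / (2 * d_spacing_nm)))

wavelength = 0.0496   # hard X-ray wavelength in nm (~25 keV, assumed)
d_pristine = 0.2355   # gold (111) plane spacing in nm (bulk value)
d_strained = 0.2330   # hypothetical spacing under pressure/defects

theta1 = bragg_angle_deg(wavelength, d_pristine)
theta2 = bragg_angle_deg(wavelength, d_strained)
print(f"pristine: {theta1:.3f} deg, strained: {theta2:.3f} deg, "
      f"shift: {theta2 - theta1:.3f} deg")
```

A ~1% compression of the plane spacing shifts the angle by a few hundredths of a degree, which synchrotron detectors can resolve; that is what lets the team compare particles before and after pressure.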

In practical terms, her findings mean that chipmakers can know with certainty that they’ll be able to design stable nanodevices using gold — a material they have known and trusted for decades — for years to come.

“For the foreseeable future, gold’s luster will not fade,” Gu says.

*The 2015 research measured the gold nanoclusters by the number of atoms within the cluster, with the changes occurring somewhere between 102 atoms and 144 atoms. This 2020 work measures the amount of gold in nanometers, as in 3.9 nm gold nanocrystals. So, how many gold atoms are in a nanometer-scale particle? Cathy Murphy provides the answer and the way to calculate it for yourself in a July 26, 2016 posting on the Sustainable Nano blog (a blog by the Center for Sustainable Nanotechnology),

Two years ago, I wrote a blog post called Two Ways to Make Nanoparticles, describing the difference between top-down and bottom-up methods for making nanoparticles. In the post I commented, “we can estimate, knowing how gold atoms pack into crystals, that there are about 2000 gold atoms in one 4 nm diameter gold nanoparticle.” Recently, a Sustainable Nano reader wrote in to ask about how this calculation is done. It’s a great question!

So, a 3.9 nm gold nanocrystal contains approximately 2000 gold atoms. (If you have time, do read Murphy’s description of how to determine the number of gold atoms in a gold nanoparticle.) So, this research does not answer the question posed by the 2015 research.
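Murphy’s estimate is easy to reproduce: gold packs into a face-centred cubic lattice with four atoms per unit cell and a lattice constant of about 0.408 nm, so you can simply ask how many unit cells fit inside a sphere of the given diameter,

```python
# Estimate the number of gold atoms in a spherical nanocrystal from
# bulk FCC packing (4 atoms per unit cell, lattice constant ~0.408 nm).
import math

LATTICE_CONSTANT_NM = 0.408
ATOMS_PER_UNIT_CELL = 4

def gold_atoms_in_sphere(diameter_nm):
    radius = diameter_nm / 2
    volume = (4 / 3) * math.pi * radius ** 3      # sphere volume, nm^3
    return round(ATOMS_PER_UNIT_CELL * volume / LATTICE_CONSTANT_NM ** 3)

print(gold_atoms_in_sphere(3.9))  # roughly 1800, i.e. "about 2000" atoms
print(gold_atoms_in_sphere(1.6))  # ~125 atoms: a 102-atom cluster is well under 2 nm
```

Note this is a bulk-packing estimate; for clusters of around 100 atoms, surface effects dominate and the real structures deviate from bulk packing, which is part of why the 2015 metal-versus-molecule question is a separate one.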

It may take years before researchers can devise tests for gold nanoclusters consisting of 102 atoms as opposed to nanoparticles consisting of 2000 atoms. In the meantime, here’s a link to and a citation for the latest on how gold reacts as we shrink the size of our electronics,

Nucleation of Dislocations in 3.9 nm Nanocrystals at High Pressure by Abhinav Parakh, Sangryun Lee, K. Anika Harkins, Mehrdad T. Kiani, David Doan, Martin Kunz, Andrew Doran, Lindsey A. Hanson, Seunghwa Ryu, and X. Wendy Gu. Phys. Rev. Lett. 124, 106104 DOI:https://doi.org/10.1103/PhysRevLett.124.106104 Published 13 March 2020 © 2020 American Physical Society

This paper is behind a paywall.

Bad battery, good synapse from Stanford University

A May 4, 2019 news item on ScienceDaily announces the latest advance made by Stanford University and Sandia National Laboratories in the field of neuromorphic (brainlike) computing,

The brain’s capacity for simultaneously learning and memorizing large amounts of information while requiring little energy has inspired an entire field to pursue brain-like — or neuromorphic — computers. Researchers at Stanford University and Sandia National Laboratories previously developed one portion of such a computer: a device that acts as an artificial synapse, mimicking the way neurons communicate in the brain.

In a paper published online by the journal Science on April 25 [2019], the team reports that a prototype array of nine of these devices performed even better than expected in processing speed, energy efficiency, reproducibility and durability.

Looking forward, the team members want to combine their artificial synapse with traditional electronics, which they hope could be a step toward supporting artificially intelligent learning on small devices.

“If you have a memory system that can learn with the energy efficiency and speed that we’ve presented, then you can put that in a smartphone or laptop,” said Scott Keene, co-author of the paper and a graduate student in the lab of Alberto Salleo, professor of materials science and engineering at Stanford who is co-senior author. “That would open up access to the ability to train our own networks and solve problems locally on our own devices without relying on data transfer to do so.”

An April 25, 2019 Stanford University news release (also on EurekAlert but published May 3, 2019) by Taylor Kubota, which originated the news item, expands on the theme,

A bad battery, a good synapse

The team’s artificial synapse is similar to a battery, modified so that the researchers can dial up or down the flow of electricity between the two terminals. That flow of electricity emulates how learning is wired in the brain. This is an especially efficient design because data processing and memory storage happen in one action, rather than a more traditional computer system where the data is processed first and then later moved to storage.

Seeing how these devices perform in an array is a crucial step because it allows the researchers to program several artificial synapses simultaneously. This is far less time consuming than having to program each synapse one-by-one and is comparable to how the brain actually works.

In previous tests of an earlier version of this device, the researchers found their processing and memory action requires about one-tenth as much energy as a state-of-the-art computing system needs in order to carry out specific tasks. Still, the researchers worried that the sum of all these devices working together in larger arrays could risk drawing too much power. So, they retooled each device to conduct less electrical current – making them much worse batteries but making the array even more energy efficient.

The 3-by-3 array relied on a second type of device – developed by Joshua Yang at the University of Massachusetts, Amherst, who is co-author of the paper – that acts as a switch for programming synapses within the array.

“Wiring everything up took a lot of troubleshooting and a lot of wires. We had to ensure all of the array components were working in concert,” said Armantas Melianas, a postdoctoral scholar in the Salleo lab. “But when we saw everything light up, it was like a Christmas tree. That was the most exciting moment.”

During testing, the array outperformed the researchers’ expectations. It performed with such speed that the team predicts the next version of these devices will need to be tested with special high-speed electronics. After measuring high energy efficiency in the 3-by-3 array, the researchers ran computer simulations of a larger 1024-by-1024 synapse array and estimated that it could be powered by the same batteries currently used in smartphones or small drones. The researchers were also able to switch the devices over a billion times – another testament to their speed – without seeing any degradation in their behavior.

“It turns out that polymer devices, if you treat them well, can be as resilient as traditional counterparts made of silicon. That was maybe the most surprising aspect from my point of view,” Salleo said. “For me, it changes how I think about these polymer devices in terms of reliability and how we might be able to use them.”

Room for creativity

The researchers haven’t yet submitted their array to tests that determine how well it learns but that is something they plan to study. The team also wants to see how their device weathers different conditions – such as high temperatures – and to work on integrating it with electronics. There are also many fundamental questions left to answer that could help the researchers understand exactly why their device performs so well.

“We hope that more people will start working on this type of device because there are not many groups focusing on this particular architecture, but we think it’s very promising,” Melianas said. “There’s still a lot of room for improvement and creativity. We only barely touched the surface.”

Here’s a link to and a citation for the paper,

Parallel programming of an ionic floating-gate memory array for scalable neuromorphic computing by Elliot J. Fuller, Scott T. Keene, Armantas Melianas, Zhongrui Wang, Sapan Agarwal, Yiyang Li, Yaakov Tuchman, Conrad D. James, Matthew J. Marinella, J. Joshua Yang, Alberto Salleo, A. Alec Talin. Science 25 Apr 2019: eaaw5581 DOI: 10.1126/science.aaw5581

This paper is behind a paywall.

For anyone interested in more about brainlike/brain-like/neuromorphic computing/neuromorphic engineering/memristors, use any or all of those terms in this blog’s search engine.

Sticky at any temperature and other American Chemical Society News

Just when I thought I’d seen all the carbon nanotube abbreviations, I find two new ones in my first news bit, about adhesion. Later, I’m including a second news bit that has to do with the upcoming American Chemical Society (ACS) meeting in San Diego, California.

Sticky carbon nanotubes (CNTs)

Scientists have developed an adhesive that retains its stickiness in extreme temperatures according to a July 10, 2019 news item on Nanowerk (Note: A link has been removed),

In very hot or cold environments, conventional tape can lose its stickiness and leave behind an annoying residue. But while most people can avoid keeping taped items in a hot car or freezer, those living in extreme environments such as deserts and the Antarctic often can’t avoid such conditions.

Now, researchers reporting in ACS’ journal Nano Letters (“Continuous, Ultra-lightweight, and Multipurpose Super-aligned Carbon Nanotube Tapes Viable over a Wide Range of Temperatures”) say they have developed a new nanomaterial tape that can function over a wide temperature range.

In previous work, researchers have explored using nanomaterials, such as vertically aligned multi-walled carbon nanotubes (VA-MWNTs), to make better adhesive tapes. Although VA-MWNTs are stronger than conventional tapes at both high and low temperatures, the materials are relatively thick, and large amounts can’t be made cost-effectively.

These are my first vertically aligned multi-walled carbon nanotubes (VA-MWNTs) and superaligned carbon nanotubes (SACNTs). I was a little surprised that VA-MWNTs didn’t include the C since these are carbon nanotubes (CNTs) and there are other types of nanotubes. So, I searched and found that inclusion of the letter ‘C’ for carbon seems to be discretionary. Moving on.

A July 10, 2019 ACS press release (also on EurekAlert), which originated the news item, provides more detail,

… Kai Liu, Xide Li, Wenhui Duan, Kaili Jiang and coworkers wondered if they could develop a new type of tape composed of superaligned carbon nanotube (SACNT) films. As their name suggests, SACNTs are nanotubes that are precisely aligned parallel to each other, capable of forming ultrathin but strong yarns or films.

To make their tape, the researchers pulled a film from the interior of an array of SACNTs — similar to pulling a strip of tape from a roll. The resulting double-sided tape could adhere to surfaces through van der Waals interactions, which are weak electric forces generated between two atoms or molecules that are close together. The ultrathin, ultra-lightweight and flexible tape outperformed conventional adhesives, at temperatures ranging from -321 F to 1,832 F. Researchers could remove the tape by peeling it off, soaking it in acetone or burning it, with no noticeable residues. The tape adhered to many different materials such as metals, nonmetals, plastics and ceramics, but it stuck more strongly to smooth than rough surfaces, similar to regular tape. The SACNT tape can be made cost-effectively in large amounts. In addition to performing well in extreme environments, the new tape might be useful for electronic components that heat up during use, the researchers say.

Here’s a link to and a citation for the paper,

Continuous, Ultra-lightweight, and Multipurpose Super-aligned Carbon Nanotube Tapes Viable over a Wide Range of Temperatures by Xiang Jin, Hengxin Tan, Zipeng Wu, Jiecun Liang, Wentao Miao, Chao-Sheng Lian, Jiangtao Wang, Kai Liu, Haoming Wei, Chen Feng, Peng Liu, Yang Wei, Qunqing Li, Jiaping Wang, Liang Liu, Xide Li, Shoushan Fan, Wenhui Duan, Kaili Jiang. Nano Lett. 2019 DOI: https://doi.org/10.1021/acs.nanolett.9b01629 Publication Date: June 16, 2019 Copyright © 2019 American Chemical Society

This paper is behind a paywall.

American Chemical Society (ACS) National Meeting in San Diego, Aug. 25 to 29, 2019: an invite to journalists

A July 18, 2019 ACS press release (received via email) announced their upcoming meeting and it included an invitation to journalists. (ACS has two meetings per year, one on the East Coast and the other on the West, roughly speaking).

Materials science and nanotechnology topics at the upcoming 2019 American Chemical Society national meeting in San Diego

WASHINGTON, July 18, 2019 — Journalists who register for the American Chemical Society’s (ACS’) Fall 2019 National Meeting & Exposition in San Diego will have access to more than 9,500 presentations on the meeting’s theme, “Chemistry & Water,” including nanotechnology and materials science topics. The meeting, one of the largest scientific conferences of the year, will be held Aug. 25 to 29 [2019] in San Diego.

Nobel Prize winner Frances Arnold, Ph.D., of the California Institute of Technology and Thomas Markland, DPhil, of Stanford University will deliver the two Kavli Foundation lectures on Aug. 26 [2019].

The more than 9,500 presentations will include presentations on nanotechnology and materials science, such as: 

Colloids and nanomaterials for water purification
Nanozymes for bioanalysis and beyond
The latest in wearable and implantable sensors
Nanoscale and molecular assemblies: designing matter to control energy transport
Colloidal quantum dots for solar and other emerging technologies
Nanoscience of bourbon
Targeted delivery of nanomedicines 
Advances in nanocellulose research for engineered functionality
Water sustainability through nanotechnology

Looking for something else? Search the meeting’s abstracts

ACS will operate a press center with press conferences, a news media workroom fully staffed to assist in arranging interviews and free Wi-Fi, computers and refreshments.

Embargoed copies of press releases and a press conference schedule will be available in mid-August.  Reporters planning to cover the meeting from their home bases will have access to the press conferences on YouTube at http://bit.ly/acs2019sandiego.

ACS considers requests for press credentials and complimentary registration to national meetings from reporters (staff and freelance) and public information officers at government, non-profit and educational institutions. See the website for details.

Here’s who does and doesn’t qualify for a free press registration (from the ACS complimentary registration webpage),

Press Registration Requirements

The ACS provides complimentary registration to national meetings to reporters (staff and freelancers) and public information officers from government, non-profit and educational institutions. Marketing and public relations professionals, lobbyists and scientists do not qualify as press and must register via the main meeting registration page. Journal managing editors, book commissioning editors, acquisitions editors, publishers and those who do not produce news for a publication or institution also do not qualify. We reserve the right to refuse press credentials for any reason.

No bloggers, eh? It’s been a long time since I’ve seen a press registration process that doesn’t mention bloggers at all.

Frugal science, foldable microscopes, and curiosity: a talk on June 3, 2019 at Simon Fraser University (Burnaby, Canada) … it’s in Metro Vancouver

This is the second frugal science item* I’m publishing today (May 29, 2019), which means that I’ve gone from complete ignorance of the topic to collecting news items about it. Manu Prakash, the developer behind a usable paper microscope that can be folded and kept in your pocket, is going to be giving a talk locally according to a May 28, 2019 announcement (received via email) from Simon Fraser University’s (SFU) Faculty of Science,

On June 3rd [2019], at 7:30 pm, Manu Prakash from Stanford University will give the Herzberg Public Lecture in conjunction with this year’s Canadian Association of Physicists (CAP) conference that the department is hosting. Dr. Prakash’s lecture is entitled “Frugal Science in the Age of Curiosity”. Tickets are free and can be obtained through Eventbrite: https://t.co/WNrPh9fop5

This presentation will be held at the Shrum Science Centre Chemistry C9001 Lecture Theatre, Burnaby campus (instead of the Diamond Family Auditorium).

There’s a synopsis of the talk on the Herzberg Public Lecture: Frugal Science in the Age of Curiosity webpage,

Science faces an accessibility challenge. Although information/knowledge is fast becoming available to everyone around the world, the experience of science is significantly limited. One approach to solving this challenge is to democratize access to scientific tools. Manu Prakash believes this can be achieved via “Frugal science”; a philosophy that inspires design, development, and deployment of ultra-affordable yet powerful scientific tools for the masses. Using examples from his own work (Foldscope: one-dollar origami microscope, Paperfuge: a twenty-cent high-speed centrifuge), Dr. Prakash will describe the process of identifying challenges, designing solutions, and deploying these tools globally to enable open ended scientific curiosity/inquiries in communities around the world. By connecting the dots between science education, global health and environmental monitoring, he will explore the role of “simple” tools in advancing access to better human and planetary health in a resource limited world.

If you’re curious there is a Foldscope website where you can find out more and/or get a Foldscope for yourself.

In addition to the talk, there is a day-long workshop for teachers (as part of the 2019 CAP Congress) with Dr. Donna Strickland, the University of Waterloo researcher who won the 2018 Nobel Prize in physics. If you want to learn how to make a Foldscope, there is also a one-hour session for which you can register separately from the day-long event. (I featured Strickland and her win in an October 3, 2018 posting.)

Getting back to the main event, Dr. Prakash’s evening talk: you can register here.

*ETA May 29, 2019 at 1120 hours PDT: My first posting on frugal science is Frugal science: ancient toys for state-of-the-art science. It’s about a 3D printable centrifuge based on a toy known (in English) as a whirligig.

First CRISPR gene-edited babies? Ethics and the science story

Scientists He Jiankui and Michael Deem may have created the first human babies born after being subjected to CRISPR (clustered regularly interspaced short palindromic repeats) gene editing. At this point, no one is entirely certain that these babies, as described, actually exist since the information was made public in a rather unusual (for scientists) fashion.

The news broke on Sunday, November 25, 2018 in the MIT Technology Review and the Associated Press rather than in journals associated with gene editing or high-impact journals such as Cell, Nature, or Science. Plus, this all happened just before the Second International Summit on Human Genome Editing (Nov. 27 – 29, 2018) in Hong Kong. He Jiankui was scheduled to speak today, Nov. 27, 2018.

Predictably, this news has caused quite a tizzy.

Breaking news

Antonio Regalado broke the news in a November 25, 2018 article for MIT [Massachusetts Institute of Technology] Technology Review (Note: Links have been removed),

According to Chinese medical documents posted online this month (here and here), a team at the Southern University of Science and Technology, in Shenzhen, has been recruiting couples in an effort to create the first gene-edited babies. They planned to eliminate a gene called CCR5 in hopes of rendering the offspring resistant to HIV, smallpox, and cholera.

The clinical trial documents describe a study in which CRISPR is employed to modify human embryos before they are transferred into women’s uteruses.

The scientist behind the effort, He Jiankui, did not reply to a list of questions about whether the undertaking had produced a live birth. Reached by telephone, he declined to comment.

However, data submitted as part of the trial listing shows that genetic tests have been carried out on fetuses as late as 24 weeks, or six months. It’s not known if those pregnancies were terminated, carried to term, or are ongoing.

Apparently He changed his mind because Marilynn Marchione in a November 26, 2018 article for the Associated Press confirms the news,

A Chinese researcher claims that he helped make the world’s first genetically edited babies — twin girls born this month whose DNA he said he altered with a powerful new tool capable of rewriting the very blueprint of life.

If true, it would be a profound leap of science and ethics.

A U.S. scientist [Dr. Michael Deem] said he took part in the work in China, but this kind of gene editing is banned in the United States because the DNA changes can pass to future generations and it risks harming other genes.

Many mainstream scientists think it’s too unsafe to try, and some denounced the Chinese report as human experimentation.

There is no independent confirmation of He’s claim, and it has not been published in a journal, where it would be vetted by other experts. He revealed it Monday [November 26, 2018] in Hong Kong to one of the organizers of an international conference on gene editing that is set to begin Tuesday [November 27, 2018], and earlier in exclusive interviews with The Associated Press.

“I feel a strong responsibility that it’s not just to make a first, but also make it an example,” He told the AP. “Society will decide what to do next” in terms of allowing or forbidding such science.

Some scientists were astounded to hear of the claim and strongly condemned it.

It’s “unconscionable … an experiment on human beings that is not morally or ethically defensible,” said Dr. Kiran Musunuru, a University of Pennsylvania gene editing expert and editor of a genetics journal.

“This is far too premature,” said Dr. Eric Topol, who heads the Scripps Research Translational Institute in California. “We’re dealing with the operating instructions of a human being. It’s a big deal.”

However, one famed geneticist, Harvard University’s George Church, defended attempting gene editing for HIV, which he called “a major and growing public health threat.”

“I think this is justifiable,” Church said of that goal.

h/t Cale Guthrie Weissman’s Nov. 26, 2018 article for Fast Company.

Diving into more detail

Ed Yong in a November 26, 2018 article for The Atlantic provides more details about the claims (Note: Links have been removed),

… “Two beautiful little Chinese girls, Lulu and Nana, came crying into the world as healthy as any other babies a few weeks ago,” He said in the first of five videos, posted yesterday [Nov. 25, 2018] to YouTube [link provided at the end of this section of the post]. “The girls are home now with their mom, Grace, and dad, Mark.” The claim has yet to be formally verified, but if true, it represents a landmark in the continuing ethical and scientific debate around gene editing.

Late last year, He reportedly enrolled seven couples in a clinical trial, and used their eggs and sperm to create embryos through in vitro fertilization. His team then used CRISPR to deactivate a single gene called CCR5 in the embryos, six of which they then implanted into mothers. CCR5 is a protein that the HIV virus uses to gain entry into human cells; by deactivating it, the team could theoretically reduce the risk of infection. Indeed, the fathers in all eight couples were HIV-positive.

Whether the experiment was successful or not, it’s intensely controversial. Scientists have already begun using CRISPR and other gene-editing technologies to alter human cells, in attempts to treat cancers, genetic disorders, and more. But in these cases, the affected cells stay within a person’s body. Editing an embryo [it’s often called germline editing] is very different: It changes every cell in the body of the resulting person, including the sperm or eggs that would pass those changes to future generations. Such work is banned in many European countries, and prohibited in the United States. “I understand my work will be controversial, but I believe families need this technology and I’m willing to take the criticism for them,” He said.

“Was this a reasonable thing to do? I would say emphatically no,” says Paula Cannon of the University of Southern California. She and others have worked on gene editing, and particularly on trials that knock out CCR5 as a way to treat HIV. But those were attempts to treat people who were definitively sick and had run out of other options. That wasn’t the case with Nana and Lulu.

“The idea that being born HIV-susceptible, which is what the vast majority of humans are, is somehow a disease state that requires the extraordinary intervention of gene editing blows my mind,” says Cannon. “I feel like he’s appropriating this potentially valuable therapy as a shortcut to doing something in the sphere of gene editing. He’s either very naive or very cynical.”

“I want someone to make sure that it has happened,” says Hank Greely, an ethicist at Stanford University. If it hasn’t, that “would be a pretty bald-faced fraud,” but such deceptions have happened in the past. “If it is true, I’m disappointed. It’s reckless on safety grounds, and imprudent and stupid on social grounds.” He notes that a landmark summit in 2015 (which included Chinese researchers) and a subsequent major report from the National Academies of Science, Engineering, and Medicine both argued that “public participation should precede any heritable germ-line editing.” That is: Society needs to work out how it feels about making gene-edited babies before any babies are edited. Absent that consensus, He’s work is “waving a red flag in front of a bull,” says Greely. “It provokes not just the regular bio-Luddites, but also reasonable people who just wanted to talk it out.”

Societally, the creation of CRISPR-edited babies is a binary moment—a Rubicon that has been crossed. But scientifically, the devil is in the details, and most of those are still unknown.

CRISPR is still inefficient. [emphasis mine] The Chinese teams who first used it to edit human embryos only did so successfully in a small proportion of cases, and even then, they found worrying levels of “off-target mutations,” where they had erroneously cut parts of the genome outside their targeted gene. He, in his video, claimed that his team had thoroughly sequenced Nana and Lulu’s genomes and found no changes in genes other than CCR5.

That claim is impossible to verify in the absence of a peer-reviewed paper, or even published data of any kind. “The paper is where we see whether the CCR5 gene was properly edited, what effect it had at the cellular level, and whether [there were] any off-target effects,” said Eric Topol of the Scripps Research Institute. “It’s not just ‘it worked’ as a binary declaration.”

In the video, He said that using CRISPR for human enhancement, such as enhancing IQ or selecting eye color, “should be banned.” Speaking about Nana and Lulu’s parents, he said that they “don’t want a designer baby, just a child who won’t suffer from a disease that medicine can now prevent.”

But his rationale is questionable. Huang [Junjiu Huang of Sun Yat-sen University], the first Chinese researcher to use CRISPR on human embryos, targeted the faulty gene behind an inherited disease called beta thalassemia. Mitalipov, likewise, tried to edit a gene called MYBPC3, whose faulty versions cause another inherited disease called hypertrophic cardiomyopathy (HCM). Such uses are still controversial, but they rank among the more acceptable applications for embryonic gene editing as ways of treating inherited disorders for which treatments are either difficult or nonexistent.

In contrast, He’s team disabled a normal gene in an attempt to reduce the risk of a disease that neither child had—and one that can be controlled. There are already ways of preventing fathers from passing HIV to their children. There are antiviral drugs that prevent infections. There’s safe-sex education. “This is not a plague for which we have no tools,” says Cannon.

As Marilynn Marchione of the AP reports, early tests suggest that He’s editing was incomplete [emphasis mine], and at least one of the twins is a mosaic, where some cells have silenced copies of CCR5 and others do not. If that’s true, it’s unlikely that they would be significantly protected from HIV. And in any case, deactivating CCR5 doesn’t confer complete immunity, because some HIV strains can still enter cells via a different protein called CXCR4.

Nana and Lulu might have other vulnerabilities. …

It is also unclear if the participants in He’s trial were fully aware of what they were signing up for. [emphasis mine] The team’s informed-consent document describes their work as an “AIDS vaccine development project,” and while it describes CRISPR gene editing, it does so in heavily technical language. It doesn’t mention any of the risks of disabling CCR5, and while it does note the possibility of off-target effects, it also says that the “project team is not responsible for the risk.”

He owns two genetics companies, and his collaborator, Michael Deem of Rice University, [emphasis mine] holds a small stake in, and sits on the advisory board of, both of them. The AP’s Marchione reports, “Both men are physics experts with no experience running human clinical trials.” [emphasis mine]

Yong’s article is well worth reading in its entirety. As for YouTube, here’s The He Lab’s webpage with relevant videos.


Gina Kolata, Sui-Lee Wee, and Pam Belluck writing in a Nov. 26, 2018 article for the New York Times chronicle some of the response to He’s announcement,

It is highly unusual for a scientist to announce a groundbreaking development without at least providing data that academic peers can review. Dr. He said he had gotten permission to do the work from the ethics board of the hospital Shenzhen Harmonicare, but the hospital, in interviews with Chinese media, denied being involved. Cheng Zhen, the general manager of Shenzhen Harmonicare, has asked the police to investigate what they suspect are “fraudulent ethical review materials,” according to the Beijing News.

The university that Dr. He is attached to, the Southern University of Science and Technology, said Dr. He has been on no-pay leave since February and that the school of biology believed that his project “is a serious violation of academic ethics and academic norms,” according to the state-run Beijing News.

In a statement late on Monday, China’s national health commission said it has asked the health commission in southern Guangdong province to investigate Mr. He’s claims.

“I think that’s completely insane,” said Shoukhrat Mitalipov, director of the Center for Embryonic Cell and Gene Therapy at Oregon Health and Science University. Dr. Mitalipov broke new ground last year by using gene editing to successfully remove a dangerous mutation from human embryos in a laboratory dish. [I wrote a three-part series about CRISPR, which included what was then the latest US news, Mitalipov’s announcement, along with a roundup of previous work in China. Links are at the end of this section.]

Dr. Mitalipov said that unlike his own work, which focuses on editing out mutations that cause serious diseases that cannot be prevented any other way, Dr. He did not do anything medically necessary. There are other ways to prevent H.I.V. infection in newborns.

Just three months ago, at a conference in late August on genome engineering at Cold Spring Harbor Laboratory in New York, Dr. He presented work on editing the CCR5 gene in the embryos of nine couples.

At the conference, whose organizers included Jennifer Doudna, one of the inventors of Crispr technology, Dr. He gave a careful talk about something that fellow attendees considered squarely within the realm of ethically approved research. But he did not mention that some of those embryos had been implanted in a woman and could result in genetically engineered babies.

“What we now know is that as he was talking, there was a woman in China carrying twins,” said Fyodor Urnov, deputy director of the Altius Institute for Biomedical Sciences and a visiting researcher at the Innovative Genomics Institute at the University of California. “He had the opportunity to say ‘Oh and by the way, I’m just going to come out and say it, people, there’s a woman carrying twins.’”

“I would never play poker against Dr. He,” Dr. Urnov quipped.

Richard Hynes, a cancer researcher at the Massachusetts Institute of Technology, who co-led an advisory group on human gene editing for the National Academy of Sciences and the National Academy of Medicine, said that group and a similar organization in Britain had determined that if human genes were to be edited, the procedure should only be done to address “serious unmet needs in medical treatment, it had to be well monitored, it had to be well followed up, full consent has to be in place.”

It is not clear why altering genes to make people resistant to H.I.V. is “a serious unmet need.” Men with H.I.V. do not infect embryos. …

Dr. He got his Ph.D., from Rice University, in physics and his postdoctoral training, at Stanford, was with Stephen Quake, a professor of bioengineering and applied physics who works on sequencing DNA, not editing it.

Experts said that using Crispr would actually be quite easy for someone like Dr. He.

After coming to Shenzhen in 2012, Dr. He, at age 28, established a DNA sequencing company, Direct Genomics, and listed Dr. Quake on its advisory board. But, in a telephone interview on Monday, Dr. Quake said he was never associated with the company.

Deem, the US scientist who worked in China with He, is currently being investigated (from a Nov. 26, 2018 article by Andrew Joseph in STAT),

Rice University said Monday that it had opened a “full investigation” into the involvement of one of its faculty members in a study that purportedly resulted in the creation of the world’s first babies born with edited DNA.

Michael Deem, a bioengineering professor at Rice, told the Associated Press in a story published Sunday that he helped work on the research in China.

Deem told the AP that he was in China when participants in the study consented to join the research. Deem also said that he had “a small stake” in and is on the scientific advisory boards of He’s two companies.

Megan Molteni in a Nov. 27, 2018 article for Wired admits she and her colleagues at the magazine may have dismissed CRISPR concerns about designer babies prematurely, while shedding more light on this latest development (Note: Links have been removed),

We said “don’t freak out,” when scientists first used Crispr to edit DNA in non-viable human embryos. When they tried it in embryos that could theoretically produce babies, we said “don’t panic.” Many years and years of boring bench science remain before anyone could even think about putting it near a woman’s uterus. Well, we might have been wrong. Permission to push the panic button granted.

Late Sunday night, a Chinese researcher stunned the world by claiming to have created the first human babies, a set of twins, with Crispr-edited DNA….

What’s perhaps most strange is not that He ignored global recommendations on conducting responsible Crispr research in humans. He also ignored his own advice to the world—guidelines that were published within hours of his transgression becoming public.

On Monday, He and his colleagues at Southern University of Science and Technology, in Shenzhen, published a set of draft ethical principles “to frame, guide, and restrict clinical applications that communities around the world can share and localize based on religious beliefs, culture, and public-health challenges.” Those principles included transparency and only performing the procedure when the risks are outweighed by serious medical need.

The piece appeared in The Crispr Journal, a young publication dedicated to Crispr research, commentary, and debate. Rodolphe Barrangou, the journal’s editor in chief, where the peer-reviewed perspective appeared, says that the article was one of two that it had published recently addressing the ethical concerns of human germline editing, the other by a bioethicist at the University of North Carolina. Both papers’ authors had requested that their writing come out ahead of a major gene editing summit taking place this week in Hong Kong. When half-rumors of He’s covert work reached Barrangou over the weekend, his team discussed pulling the paper, but ultimately decided that there was nothing too solid to discredit it, based on the information available at the time.

Now Barrangou and his team are rethinking that decision. For one thing, He did not disclose any conflicts of interest, which is standard practice among respectable journals. It’s since become clear that not only is He at the helm of several genetics companies in China, He was actively pursuing controversial human research long before writing up a scientific and moral code to guide it. “We’re currently assessing whether the omission was a matter of ill-management or ill-intent,” says Barrangou, who added that the journal is now conducting an audit to see if a retraction might be warranted. …

“There are all sorts of questions these issues raise, but the most fundamental is the risk-benefit ratio for the babies who are going to be born,” says Hank Greely, an ethicist at Stanford University. “And the risk-benefit ratio on this stinks. Any institutional review board that approved it should be disbanded if not jailed.”

Reporting by Stat indicates that He may have just gotten in over his head and tried to cram a self-guided ethics education into a few short months. The young scientist—records indicate He is just 34—has a background in biophysics, with stints studying in the US at Rice University and in bioengineer Stephen Quake’s lab at Stanford. His resume doesn’t read like someone steeped deeply in the nuances and ethics of human research. Barrangou says that came across in the many rounds of edits He’s framework went through.

… China’s central government in Beijing has yet to come down one way or another. Condemnation would make He a rogue and a scientific outcast. Anything else opens the door for a Crispr IVF cottage industry to emerge in China and potentially elsewhere. “It’s hard to imagine this was the only group in the world doing this,” says Paul Knoepfler, a stem cell researcher at UC Davis who wrote a book on the future of designer babies called GMO Sapiens. “Some might say this broke the ice. Will others forge ahead and go public with their results or stop what they’re doing and see how this plays out?”

Here’s some of the very latest information with the researcher attempting to explain himself.

What does He have to say?

After He’s appearance at the Second International Summit on Human Genome Editing today, Nov. 27, 2018, David Cyranoski produced this article for Nature,

He Jiankui, the Chinese scientist who claims to have helped produce the first people born with edited genomes — twin girls — appeared today at a gene-editing summit in Hong Kong to explain his experiment. He gave his talk amid threats of legal action and mounting questions, from the scientific community and beyond, about the ethics of his work and the way in which he released the results.

He had never before presented his work publicly outside of a handful of videos he posted on YouTube. Scientists welcomed the fact that he appeared at all — but his talk left many hungry for more answers, and still not completely certain that He has achieved what he claims.

“There’s no reason not to believe him,” says Robin Lovell-Badge, a developmental biologist at the Francis Crick Institute in London. “I’m just not completely convinced.”

Lovell-Badge, like others at the conference, says that an independent body should confirm the test results by performing an in-depth comparison of the parents’ and children’s genes.

Many scientists faulted He for a lack of transparency and the seemingly cavalier nature in which he embarked on such a landmark, and potentially risky, project.

“I’m happy he came but I was really horrified and stunned when he described the process he used,” says Jennifer Doudna, a biochemist at the University of California, Berkeley and a pioneer of the CRISPR/Cas-9 gene-editing technique that He used. “It was so inappropriate on so many levels.”

He seemed shaky approaching the stage and nervous during the talk. “I think he was scared,” says Matthew Porteus, who researches genome-editing at Stanford University in California and co-hosted a question-and-answer session with He after his presentation. Porteus attributes this either to the legal pressures that He faces or the mounting criticism from the scientists and media he was about to address.

He’s talk leaves a host of other questions unanswered, including whether the prospective parents were properly informed of the risks; why He selected CCR5 when there are other, proven ways to prevent HIV; why he chose to do the experiment with couples in which the fathers have HIV, rather than mothers who have a higher chance of passing the virus on to their children; and whether the risks of knocking out CCR5 — a gene normally present in people, which could have necessary but still unknown functions — outweighed the benefits in this case.

In the discussion following He’s talk, one scientist asked why He proceeded with the experiments despite the clear consensus among scientists worldwide that such research shouldn’t be done. He didn’t answer the question.

He’s attempts to justify his actions mainly fell flat. In response to questions about why the science community had not been informed of the experiments before the first women were impregnated, he cited presentations that he gave last year at meetings at the University of California, Berkeley, and at the Cold Spring Harbor Laboratory in New York. But Doudna, who organized the Berkeley meeting, says He did not present anything that showed he was ready to experiment in people. She called his defence “disingenuous at best”.

He also said he discussed the human experiment with unnamed scientists in the United States. But Porteus says that’s not enough for such an extraordinary experiment: “You need feedback not from your two closest friends but from the whole community.” …

Pressure was mounting on He ahead of the presentation. On 27 November, the Chinese national health commission ordered the Guangdong health commission, in the province where He’s university is located, to investigate.

On the same day, the Chinese Academy of Sciences issued a statement condemning his work, and the Genetics Society of China and the Chinese Society for Stem Cell Research jointly issued a statement saying the experiment “violates internationally accepted ethical principles regulating human experimentation and human rights law”.

The hospital cited in China’s clinical-trial registry as the one that gave ethical approval for He’s work posted a press release on 27 November saying it did not give any approval. It questioned the signatures on the approval form and said that the hospital’s medical-ethics committee never held a meeting related to He’s research. The hospital, which itself is under investigation by the Shenzhen health authorities following He’s revelations, wrote: “The Company does not condone the means of the Claimed Project, and has reservations as to the accuracy, reliability and truthfulness of its contents and results.”

He has not yet responded to requests for comment on these statements and investigations, nor on why the hospital was listed in the registry and on the claim of apparently forged signatures.

Alice Park’s Nov. 26, 2018 article for Time magazine includes an embedded video of He’s Nov. 27, 2018 presentation at the summit meeting.

What about the politics?

Mara Hvistendahl’s Nov. 27, 2018 article about this research for Slate.com poses some geopolitical questions (Note: Links have been removed),

The informed consent agreement for He Jiankui’s experiment describes it as an “AIDS vaccine development project” and used highly technical language to describe the procedure that patients would undergo. If the reality for some Chinese patients is that such agreements are glossed over, densely written, or never read, the reality for some researchers working in the country is that the appeal of cutting-edge trials is too great to resist. It is not just Chinese scientists who can be blinded by the lure of quick breakthroughs. Several of the most notable breaches of informed consent on the mainland have involved Western researchers or co-authors. … When people say that the usual rules don’t apply in China, they are really referring to authoritarian science, not some alternative communitarian ethics.

For the many scientists in China who adhere to recognized international standards, the incident comes as a disgrace. He Jiankui now faces an ethics investigation from provincial health authorities, and his institution, Southern University of Science and Technology, was quick to issue a statement noting that He was on unpaid leave. …

It would seem that US [and from elsewhere]* scientists wanting to avoid pesky ethics requirements in the US have found that going to China could be the answer to their problems. I gather it’s not just big business that prefers deregulated environments.

Guillaume Levrier’s (he’s studying for a PhD at the Université Sorbonne Paris Cité) November 16, 2018 essay for The Conversation sheds some light on political will and its impact on science (Note: Links have been removed),

… China has entered a “genome editing” race among great scientific nations and its progress didn’t come out of nowhere. China has invested heavily in the natural-sciences sector over the past 20 years. The Ninth Five-Year Plan (1996-2001) mentioned the crucial importance of biotechnologies. The current Thirteenth Five-Year Plan is even more explicit. It contains a section dedicated to “developing efficient and advanced biotechnologies” and lists key sectors such as “genome-editing technologies” intended to “put China at the bleeding edge of biotechnology innovation and become the leader in the international competition in this sector”.

Chinese embryo research is regulated by a legal framework, the “technical norms on human-assisted reproductive technologies”, published by the Science and Health Ministries. The guidelines theoretically forbid using sperm or eggs whose genome have been manipulated for procreative purposes. However, it’s hard to know how much value is actually placed on this rule in practice, especially in China’s intricate institutional and political context.

In theory, three major actors have authority on biomedical research in China: the Science and Technology Ministry, the Health Ministry, and the Chinese Food and Drug Administration. In reality, other agents also play a significant role. Local governments interpret and enforce the ministries’ “recommendations”, and their own interpretations can lead to significant variations in what researchers can and cannot do on the ground. The Chinese National Academy of Medicine is also a powerful institution that has its own network of hospitals, universities and laboratories.

Another prime actor is involved: the health section of the People’s Liberation Army (PLA), which has its own biomedical faculties, hospitals and research labs. The PLA makes its own interpretations of the recommendations and has proven its ability to work with the private sector on gene editing projects. …

One other thing from Levrier’s essay,

… And the media timing is just a bit too perfect, …

Do read the essay; there’s a twist at the end.

Final thoughts and some links

If I read this material rightly, there are suspicions there may be more of this work being done in China and elsewhere. In short, we likely don’t have the whole story.

As for the ethical issues, this is a discussion among experts only, so far. The great unwashed (thee and me) are being left at the wayside. Sure, we’ll be invited to public consultations, one day, after the big decisions have been made.

Anyone who’s read up on the history of science will tell you this kind of breach is very common at the beginning. Richard Holmes’ 2008 book, ‘The Age of Wonder: How the Romantic Generation Discovered the Beauty and Terror of Science’, recounts stories of early (European) scientists who did crazy things. Some died, some shortened their life spans, and some irreversibly damaged their health. They also experimented on other people. Informed consent had not yet been dreamed up.

In fact, I remember reading somewhere that the largest human clinical trial in history was held in Canada. The smallpox vaccine was highly contested in the US but the Canadian government thought it was a good idea, so they offered US scientists the option of coming here to vaccinate Canadian babies. This was in the 1950s and the vaccine seems to have been administered almost universally. That was a lot of Canadian babies. Thankfully, it seems to have worked out, but it does seem mind-boggling today.

For all the indignation and shock we’re seeing, this is not the first time nor will it be the last time someone steps over a line in order to conduct scientific research. And, that is the eternal problem.

Meanwhile I think some of the real action regarding CRISPR and germline editing is taking place in the field (pun!) of agriculture:

My Nov. 27, 2018 posting titled: ‘Designer groundcherries by CRISPR (clustered regularly interspaced short palindromic repeats)‘ and a more disturbing Nov. 27, 2018 post titled: ‘Agriculture and gene editing … shades of the AquAdvantage salmon‘. That second posting features a company which is trying to sell its gene-editing services to farmers who would like cows that never grow horns and pigs that never reach puberty.

Then there’s this,

The Genetic Revolution‘, a documentary that offers relatively up-to-date information about gene editing, which was broadcast on Nov. 11, 2018 as part of The Nature of Things series on CBC (Canadian Broadcasting Corporation).

My July 17, 2018 posting about research suggesting that scientists hadn’t done enough research on possible effects of CRISPR editing titled: ‘The CRISPR (clustered regularly interspaced short palindromic repeats)-CAS9 gene-editing technique may cause new genetic damage kerfuffle’.

My 2017 three-part series on CRISPR and germline editing:

CRISPR and editing the germline in the US (part 1 of 3): In the beginning

CRISPR and editing the germline in the US (part 2 of 3): ‘designer babies’?

CRISPR and editing the germline in the US (part 3 of 3): public discussions and pop culture

There you have it.

Added on November 30, 2018: David Cyranoski has written one final article (Nov. 30, 2018 for Nature) about He and the Second International Summit on Human Genome Editing. He did not make his second scheduled appearance at the summit, returning to China before the summit concluded. He was rebuked in a statement produced by the summit’s organizing committee at the end of the three-day meeting. The situation with regard to his professional status in China is ambiguous. Cyranoski ends his piece with the information that the third summit will take place in London (likely in the UK) in 2021. I encourage you to read Cyranoski’s Nov. 30, 2018 article in its entirety; it’s not long.

Added on Dec. 3, 2018: The story continues. Ed Yong has written a summary of the issues to date in a Dec. 3, 2018 article for The Atlantic (even if you know the story, it’s eye-opening to see all the parts put together).

J. Benjamin Hurlbut, Associate Professor of Life Sciences at Arizona State University (ASU) and Jason Scott Robert, Director of the Lincoln Center for Applied Ethics at Arizona State University have written a provocative (and true) Dec. 3, 2018 essay titled, CRISPR babies raise an uncomfortable reality – abiding by scientific standards doesn’t guarantee ethical research, for The Conversation. h/t phys.org

*[and from elsewhere] added January 17, 2019.

Added on January 23, 2019: He has been fired by his university (Southern University of Science and Technology in Shenzhen), as announced on January 21, 2019. David Cyranoski provides a detailed accounting in his January 22, 2019 article for Nature.