Tag Archives: University of Wisconsin-Madison

CRISPR and editing the germline in the US (part 3 of 3): public discussions and pop culture

After giving a basic explanation of the technology and some of the controversies in part 1, and offering more detail about the technology and about the possibility of designer babies in part 2, this part covers public discussion: a call for one, and the suggestion that one is already taking place in popular culture.

But a discussion does need to happen

In a move that is either an exquisite coincidence or has been carefully orchestrated (I vote for the latter), researchers from the University of Wisconsin-Madison have released a study about attitudes in the US toward human genome editing. From an Aug. 11, 2017 University of Wisconsin-Madison news release (also on EurekAlert),

In early August 2017, an international team of scientists announced they had successfully edited the DNA of human embryos. As people process the political, moral and regulatory issues of the technology — which nudges us closer to nonfiction than science fiction — researchers at the University of Wisconsin-Madison and Temple University show the time is now to involve the American public in discussions about human genome editing.

In a study published Aug. 11 in the journal Science, the researchers assessed what people in the United States think about the uses of human genome editing and how their attitudes may drive public discussion. They found a public divided on its uses but united in the importance of moving conversations forward.

“There are several pathways we can go down with gene editing,” says UW-Madison’s Dietram Scheufele, lead author of the study and member of a National Academy of Sciences committee that compiled a report focused on human gene editing earlier this year. “Our study takes an exhaustive look at all of those possible pathways forward and asks where the public stands on each one of them.”

Compared to previous studies on public attitudes about the technology, the new study takes a more nuanced approach, examining public opinion about the use of gene editing for disease therapy versus for human enhancement, and about editing that becomes hereditary versus editing that does not.

The research team, which included Scheufele and Dominique Brossard — both professors of life sciences communication — along with Michael Xenos, professor of communication arts, first surveyed study participants about the use of editing to treat disease (therapy) versus for enhancement (creating so-called “designer babies”). While about two-thirds of respondents expressed at least some support for therapeutic editing, only one-third expressed support for using the technology for enhancement.

Diving even deeper, researchers looked into public attitudes about gene editing on specific cell types — somatic or germline — either for therapy or enhancement. Somatic cells are non-reproductive, so edits made in those cells do not affect future generations. Germline cells, however, are heritable, and changes made in these cells would be passed on to children.

Public support of therapeutic editing was high both in cells that would be inherited and those that would not, with 65 percent of respondents supporting therapy in germline cells and 64 percent supporting therapy in somatic cells. When considering enhancement editing, however, support depended more upon whether the changes would affect future generations. Only 26 percent of people surveyed supported enhancement editing in heritable germline cells and 39 percent supported enhancement of somatic cells that would not be passed on to children.
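
Laid out as a grid, the split is easier to see. Here’s a minimal sketch in Python that tabulates the four figures quoted above; the percentages come straight from the news release, while the layout and labels are mine,

```python
# Reported US public support for human genome editing (Scheufele et al., 2017),
# by purpose (rows) and by whether the edits are heritable (columns).
# All four percentages are quoted in the news release above.
support = {
    ("therapy", "germline"): 65,
    ("therapy", "somatic"): 64,
    ("enhancement", "germline"): 26,
    ("enhancement", "somatic"): 39,
}

print(f"{'':14}{'germline':>10}{'somatic':>10}")
for purpose in ("therapy", "enhancement"):
    g = support[(purpose, "germline")]
    s = support[(purpose, "somatic")]
    print(f"{purpose:14}{g:>9}%{s:>9}%")
```

The drop from 65 percent to 26 percent down the germline column is the “invisible line” Scheufele describes below.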

“A majority of people are saying that germline enhancement is where the technology crosses that invisible line and becomes unacceptable,” says Scheufele. “When it comes to therapy, the public is more open, and that may partly be reflective of how severe some of those genetically inherited diseases are. The potential treatments for those diseases are something the public at least is willing to consider.”

Beyond questions of support, researchers also wanted to understand what was driving public opinions. They found that two factors were related to respondents’ attitudes toward gene editing as well as their attitudes toward the public’s role in its emergence: the level of religious guidance in their lives, and factual knowledge about the technology.

Those with a high level of religious guidance in their daily lives had lower support for human genome editing than those with low religious guidance. Additionally, those with high knowledge of the technology were more supportive of it than those with less knowledge.

While respondents with high religious guidance and those with high knowledge differed on their support for the technology, both groups highly supported public engagement in its development and use. These results suggest broad agreement that the public should be involved in questions of political, regulatory and moral aspects of human genome editing.

“The public may be split along lines of religiosity or knowledge with regard to what they think about the technology and scientific community, but they are united in the idea that this is an issue that requires public involvement,” says Scheufele. “Our findings show very nicely that the public is ready for these discussions and that the time to have the discussions is now, before the science is fully ready and while we have time to carefully think through different options regarding how we want to move forward.”

Here’s a link to and a citation for the paper,

U.S. attitudes on human genome editing by Dietram A. Scheufele, Michael A. Xenos, Emily L. Howell, Kathleen M. Rose, Dominique Brossard, and Bruce W. Hardy. Science 11 Aug 2017: Vol. 357, Issue 6351, pp. 553-554 DOI: 10.1126/science.aan3708

This paper is behind a paywall.

A couple of final comments

Briefly, I notice that there’s no mention of the ethics of patenting this technology in the news release about the study.

Moving on, it seems surprising that the first team to engage in germline editing in the US is in Oregon; I would have expected the work to come from Massachusetts, California, or Illinois, where a lot of bleeding-edge medical research is performed. However, given the dearth of financial support from federal funding institutions, it seems likely that only an outsider would dare to engage in the research. Given the timing, Mitalipov’s work was already well underway before the recent about-face from the US National Academy of Sciences (Note: Kaiser’s Feb. 14, 2017 article does note that for some the recent recommendations do not represent any change).

As for discussion on issues such as editing of the germline, I’ve often noted here that popular culture (including advertising, science fiction, and other dramas in various media) often provides an informal forum for discussion. Joelle Renstrom, in an Aug. 13, 2017 article for slate.com, writes that Orphan Black (a BBC America series) opened up a series of questions about science and ethics in the guise of a thriller about clones. She offers a précis of the first four seasons (Note: A link has been removed),

If you stopped watching a few seasons back, here’s a brief synopsis of how the mysteries wrap up. Neolution, an organization that seeks to control human evolution through genetic modification, began Project Leda, the cloning program, for two primary reasons: to see whether they could and to experiment with mutations that might allow people (i.e., themselves) to live longer. Neolution partnered with biotech companies such as Dyad, using its big pharma reach and deep pockets to harvest people’s genetic information and to conduct individual and germline (that is, genetic alterations passed down through generations) experiments, including infertility treatments that result in horrifying birth defects and body modification, such as tail-growing.

She then provides the article’s thesis (Note: Links have been removed),

Orphan Black demonstrates Carl Sagan’s warning of a time when “awesome technological powers are in the hands of a very few.” Neolutionists do whatever they want, pausing only to consider whether they’re missing an opportunity to exploit. Their hubris is straight out of Victor Frankenstein’s playbook. Frankenstein wonders whether he ought to first reanimate something “of simpler organisation” than a human, but starting small means waiting for glory. Orphan Black’s evil scientists embody this belief: if they’re going to play God, then they’ll control not just their own destinies, but the clones’ and, ultimately, all of humanity’s. Any sacrifices along the way are for the greater good—reasoning that culminates in Westmoreland’s eugenics fantasy to genetically sterilize 99 percent of the population he doesn’t enhance.

Orphan Black uses sci-fi tropes to explore real-world plausibility. Neolution shares similarities with transhumanism, the belief that humans should use science and technology to take control of their own evolution. While some transhumanists dabble in body modifications, such as microchip implants or night-vision eye drops, others seek to end suffering by curing human illness and aging. But even these goals can be seen as selfish, as access to disease-eradicating or life-extending technologies would be limited to the wealthy. Westmoreland’s goal to “sell Neolution to the 1 percent” seems frighteningly plausible—transhumanists, who statistically tend to be white, well-educated, and male, and their associated organizations raise and spend massive sums of money to help fulfill their goals. …

On Orphan Black, denial of choice is tantamount to imprisonment. That the clones have to earn autonomy underscores the need for ethics in science, especially when it comes to genetics. The show’s message here is timely given the rise of gene-editing techniques such as CRISPR. Recently, the National Academy of Sciences gave germline gene editing the green light, just one year after academy scientists from around the world argued it would be “irresponsible to proceed” without further exploring the implications. Scientists in the United Kingdom and China have already begun human genetic engineering and American scientists recently genetically engineered a human embryo for the first time. The possibility of Project Leda isn’t farfetched. Orphan Black warns us that money, power, and fear of death can corrupt both people and science. Once that happens, loss of humanity—of both the scientists and the subjects—is inevitable.

In Carl Sagan’s dark vision of the future, “people have lost the ability to set their own agendas or knowledgeably question those in authority.” This describes the plight of the clones at the outset of Orphan Black, but as the series continues, they challenge this paradigm by approaching science and scientists with skepticism, ingenuity, and grit. …

I hope there are discussions such as those Scheufele and Brossard are advocating but it might be worth considering that there is already some discussion underway, as informal as it is.

-30-

Part 1: CRISPR and editing the germline in the US (part 1 of 3): In the beginning

Part 2: CRISPR and editing the germline in the US (part 2 of 3): ‘designer babies’?

CRISPR and editing the germline in the US (part 2 of 3): ‘designer babies’?

Having included an explanation of CRISPR-Cas9 technology along with the news about the first US team to edit the germline and bits and pieces about ethics and a patent fight (part 1), this part homes in on the details of the work and worries about ‘designer babies’.

The interest flurry

I found three articles addressing the research and all three concur that despite some of the early reporting, this is not the beginning of a ‘designer baby’ generation.

First up was Nick Thieme in a July 28, 2017 article for Slate,

MIT Technology Review reported Thursday that a team of researchers from Portland, Oregon were the first team of U.S.-based scientists to successfully create a genetically modified human embryo. The researchers, led by Shoukhrat Mitalipov of Oregon Health and Science University, changed the DNA of—in MIT Technology Review’s words—“many tens” of genetically-diseased embryos by injecting the host egg with CRISPR, a DNA-based gene editing tool first discovered in bacteria, at the time of fertilization. CRISPR-Cas9, as the full editing system is called, allows scientists to change genes accurately and efficiently. As has happened with research elsewhere, the CRISPR-edited embryos weren’t implanted—they were kept sustained for only a couple of days.

In addition to being the first American team to complete this feat, the researchers also improved upon the work of the three Chinese research teams that beat them to editing embryos with CRISPR: Mitalipov’s team increased the proportion of embryonic cells that received the intended genetic changes, addressing an issue called “mosaicism,” which is when an embryo is comprised of cells with different genetic makeups. Increasing that proportion is essential to CRISPR work in eliminating inherited diseases, to ensure that the CRISPR therapy has the intended result. The Oregon team also reduced the number of genetic errors introduced by CRISPR, reducing the likelihood that a patient would develop cancer elsewhere in the body.

Separate from the scientific advancements, it’s a big deal that this work happened in a country with such intense politicization of embryo research. …

But there are a great number of obstacles between the current research and the future of genetically editing all children to be 12-foot-tall Einsteins.

Ed Yong in an Aug. 2, 2017 article for The Atlantic offered a comprehensive overview of the research and its implications (unusually for Yong, there seems to be a mildly condescending note, but it’s worth ignoring for the wealth of information in the article; Note: Links have been removed),

… the full details of the experiment, which are released today, show that the study is scientifically important but much less of a social inflection point than has been suggested. “This has been widely reported as the dawn of the era of the designer baby, making it probably the fifth or sixth time people have reported that dawn,” says Alta Charo, an expert on law and bioethics at the University of Wisconsin-Madison. “And it’s not.”

Given the persistent confusion around CRISPR and its implications, I’ve laid out exactly what the team did, and what it means.

Who did the experiments?

Shoukhrat Mitalipov is a Kazakhstani-born cell biologist with a history of breakthroughs—and controversy—in the stem cell field. He was the first to clone monkey embryos. He was the first to create human embryos by cloning adult cells—a move that could provide patients with an easy supply of personalized stem cells. He also pioneered a technique for creating embryos with genetic material from three biological parents, as a way of preventing a group of debilitating inherited diseases.

Although MIT Tech Review name-checked Mitalipov alone, the paper splits credit for the research between five collaborating teams—four based in the United States, and one in South Korea.

What did they actually do?

The project effectively began with an elevator conversation between Mitalipov and his colleague Sanjiv Kaul. Mitalipov explained that he wanted to use CRISPR to correct a disease-causing gene in human embryos, and was trying to figure out which disease to focus on. Kaul, a cardiologist, told him about hypertrophic cardiomyopathy (HCM)—an inherited heart disease that’s commonly caused by mutations in a gene called MYBPC3. HCM is surprisingly common, affecting 1 in 500 adults. Many of them lead normal lives, but in some, the walls of their hearts can thicken and suddenly fail. For that reason, HCM is the commonest cause of sudden death in athletes. “There really is no treatment,” says Kaul. “A number of drugs are being evaluated but they are all experimental,” and they merely treat the symptoms. The team wanted to prevent HCM entirely by removing the underlying mutation.

They collected sperm from a man with HCM and used CRISPR to change his mutant gene into its normal healthy version, while simultaneously using the sperm to fertilize eggs that had been donated by female volunteers. In this way, they created embryos that were completely free of the mutation. The procedure was effective, and avoided some of the critical problems that have plagued past attempts to use CRISPR in human embryos.

Wait, other human embryos have been edited before?

There have been three attempts in China. The first two—in 2015 and 2016—used non-viable embryos that could never have resulted in a live birth. The third—announced this March—was the first to use viable embryos that could theoretically have been implanted in a womb. All of these studies showed that CRISPR gene-editing, for all its hype, is still in its infancy.

The editing was imprecise. CRISPR is heralded for its precision, allowing scientists to edit particular genes of choice. But in practice, some of the Chinese researchers found worrying levels of off-target mutations, where CRISPR mistakenly cut other parts of the genome.

The editing was inefficient. The first Chinese team only managed to successfully edit a disease gene in 4 out of 86 embryos, and the second team fared even worse.

The editing was incomplete. Even in the successful cases, each embryo had a mix of modified and unmodified cells. This pattern, known as mosaicism, poses serious safety problems if gene-editing were ever to be used in practice. Doctors could end up implanting women with embryos that they thought were free of a disease-causing mutation, but were only partially free. The resulting person would still have many tissues and organs that carry those mutations, and might go on to develop symptoms.

What did the American team do differently?

The Chinese teams all used CRISPR to edit embryos at early stages of their development. By contrast, the Oregon researchers delivered the CRISPR components at the earliest possible point—minutes before fertilization. That neatly avoids the problem of mosaicism by ensuring that an embryo is edited from the very moment it is created. The team did this with 54 embryos and successfully edited the mutant MYBPC3 gene in 72 percent of them. In the other 28 percent, the editing didn’t work—a high failure rate, but far lower than in previous attempts. Better still, the team found no evidence of off-target mutations.

This is a big deal. Many scientists assumed that they’d have to do something more convoluted to avoid mosaicism. They’d have to collect a patient’s cells, which they’d revert into stem cells, which they’d use to make sperm or eggs, which they’d edit using CRISPR. “That’s a lot of extra steps, with more risks,” says Alta Charo. “If it’s possible to edit the embryo itself, that’s a real advance.” Perhaps for that reason, this is the first study to edit human embryos that was published in a top-tier scientific journal—Nature, which rejected some of the earlier Chinese papers.

Is this kind of research even legal?

Yes. In Western Europe, 15 countries out of 22 ban any attempts to change the human germ line—a term referring to sperm, eggs, and other cells that can transmit genetic information to future generations. No such stance exists in the United States but Congress has banned the Food and Drug Administration from considering research applications that make such modifications. Separately, federal agencies like the National Institutes of Health are banned from funding research that ultimately destroys human embryos. But the Oregon team used non-federal money from their institutions, and donations from several small non-profits. No taxpayer money went into their work. [emphasis mine]

Why would you want to edit embryos at all?

Partly to learn more about ourselves. By using CRISPR to manipulate the genes of embryos, scientists can learn more about the earliest stages of human development, and about problems like infertility and miscarriages. That’s why biologist Kathy Niakan from the Crick Institute in London recently secured a license from a British regulator to use CRISPR on human embryos.

Isn’t this a slippery slope toward making designer babies?

In terms of avoiding genetic diseases, it’s not conceptually different from PGD, which is already widely used. The bigger worry is that gene-editing could be used to make people stronger, smarter, or taller, paving the way for a new eugenics, and widening the already substantial gaps between the wealthy and poor. But many geneticists believe that such a future is fundamentally unlikely because complex traits like height and intelligence are the work of hundreds or thousands of genes, each of which have a tiny effect. The prospect of editing them all is implausible. And since genes are so thoroughly interconnected, it may be impossible to edit one particular trait without also affecting many others.
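
To see why geneticists consider that prospect implausible, a back-of-the-envelope calculation helps. The Python sketch below assumes, purely for illustration, that each individual edit succeeds with the 72 percent probability the Oregon team reported per embryo (a loud simplification, since that figure is not a per-gene rate); the gene counts are hypothetical,

```python
# Back-of-the-envelope: if each of n independent gene edits succeeds with
# probability p, then all of them succeed with probability p**n.
# p = 0.72 borrows the Oregon team's per-embryo efficiency as a stand-in
# per-edit rate (an assumption); the gene counts are hypothetical.
p = 0.72
for n_genes in (1, 10, 100, 1000):
    print(f"{n_genes:>5} genes: P(all edits succeed) = {p ** n_genes:.2e}")
```

Even at 100 contributing genes the probability collapses to roughly 5e-15, and that is before considering the interconnectedness Yong mentions.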

“There’s the worry that this could be used for enhancement, so society has to draw a line,” says Mitalipov. “But this is pretty complex technology and it wouldn’t be hard to regulate it.”

Does this discovery have any social importance at all?

“It’s not so much about designer babies as it is about geographical location,” says Charo. “It’s happening in the United States, and everything here around embryo research has high sensitivity.” She and others worry that the early report about the study, before the actual details were available for scrutiny, could lead to unnecessary panic. “Panic reactions often lead to panic-driven policy … which is usually bad policy,” wrote Greely [bioethicist Hank Greely].

As I understand it, despite the change in stance, there is no federal funding available for the research performed by Mitalipov and his team.

Finally, University College London (UCL) scientists Joyce Harper and Helen O’Neill wrote about CRISPR, the Oregon team’s work, and the possibilities in an Aug. 3, 2017 essay for The Conversation (Note: Links have been removed),

The genome editing tool used, CRISPR-Cas9, has transformed the field of biology in the short time since its discovery in that it not only promises, but delivers. CRISPR has surpassed all previous efforts to engineer cells and alter genomes at a fraction of the time and cost.

The technology, which works like molecular scissors to cut and paste DNA, is a natural defence system that bacteria use to fend off harmful infections. This system has the ability to recognise invading virus DNA, cut it and integrate this cut sequence into its own genome – allowing the bacterium to render itself immune to future infections of viruses with similar DNA. It is this ability to recognise and cut DNA that has allowed scientists to use it to target and edit specific DNA regions.
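
As a toy illustration of that ‘recognise and cut’ step (and only a toy: the real system works through an RNA guide base-pairing with DNA and tolerates some mismatches), here is a short Python sketch that scans a DNA string for a 20-letter target followed by the ‘NGG’ motif, the PAM, that Cas9 requires beside its cut site. The sequences are invented,

```python
# Toy model of CRISPR-Cas9 target recognition: find a 20-nt target sequence
# immediately followed by an "NGG" PAM motif, which Cas9 needs before it cuts.
# Real targeting is done by an RNA guide and tolerates mismatches; none of
# that is modeled here, and the sequences below are invented.
def find_cut_sites(dna: str, guide: str) -> list[int]:
    sites = []
    for i in range(len(dna) - len(guide) - 2):
        candidate = dna[i : i + len(guide)]
        pam = dna[i + len(guide) : i + len(guide) + 3]
        if candidate == guide and pam[1:] == "GG":  # "NGG": any base, then GG
            sites.append(i + len(guide) - 3)  # Cas9 cuts ~3 bp upstream of the PAM
    return sites

dna = "TTACG" + "GATCCGATTACAGGCTTAGC" + "AGG" + "ATTAC"  # target + PAM embedded
guide = "GATCCGATTACAGGCTTAGC"  # invented 20-nt guide
print(find_cut_sites(dna, guide))  # -> [22]
```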

When this technology is applied to “germ cells” – the sperm and eggs – or embryos, it changes the germline. That means that any alterations made would be permanent and passed down to future generations. This makes it more ethically complex, but there are strict regulations around human germline genome editing, which is predominantly illegal. The UK received a licence in 2016 to carry out CRISPR on human embryos for research into early development. But edited embryos are not allowed to be inserted into the uterus and develop into a fetus in any country.

Germline genome editing came into the global spotlight when Chinese scientists announced in 2015 that they had used CRISPR to edit non-viable human embryos – cells that could never result in a live birth. They did this to modify the gene responsible for the blood disorder β-thalassaemia. While it was met with some success, it received a lot of criticism because of the premature use of this technology in human embryos. The results showed a high number of potentially dangerous, off-target mutations created in the procedure.

Impressive results

The new study, published in Nature, is different because it deals with viable human embryos and shows that the genome editing can be carried out safely – without creating harmful mutations. The team used CRISPR to correct a mutation in the gene MYBPC3, which accounts for approximately 40% of the myocardial disease hypertrophic cardiomyopathy. This is a dominant disease, so an affected individual only needs one abnormal copy of the gene to be affected.

The researchers used sperm from a patient carrying one copy of the MYBPC3 mutation to create 54 embryos. They edited them using CRISPR-Cas9 to correct the mutation. Without genome editing, approximately 50% of the embryos would carry the patient’s normal gene and 50% would carry his abnormal gene.

After genome editing, the aim would be for 100% of embryos to be normal. In the first round of the experiments, they found that 66.7% of embryos – 36 out of 54 – were normal after being injected with CRISPR. Of the remaining 18 embryos, five had remained unchanged, suggesting editing had not worked. In 13 embryos, only a portion of cells had been edited.

The level of efficiency is affected by the type of CRISPR machinery used and, critically, the timing in which it is put into the embryo. The researchers therefore also tried injecting the sperm and the CRISPR-Cas9 complex into the egg at the same time, which resulted in more promising results. This was done for 75 mature donated human eggs using a common IVF technique called intracytoplasmic sperm injection. This time, impressively, 72.4% of embryos were normal as a result. The approach also lowered the number of embryos containing a mixture of edited and unedited cells (these embryos are called mosaics).

Finally, the team injected a further 22 embryos, which were grown into blastocysts – a later stage of embryo development. These were sequenced and the researchers found that the editing had indeed worked. Importantly, they could show that the level of off-target mutations was low.
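
Putting the reported numbers side by side clarifies the improvement between the two rounds. Here is a quick Python sketch using only the figures quoted above (the second round is echoed as a percentage because the essay does not give its raw counts),

```python
# Round 1: CRISPR injected after fertilization; counts quoted above.
round1 = {"corrected": 36, "unchanged": 5, "mosaic": 13}
total = sum(round1.values())  # 54 embryos
for outcome, n in round1.items():
    print(f"round 1 {outcome:>9}: {n:>2}/{total} = {100 * n / total:.1f}%")

# Round 2: CRISPR co-injected with the sperm into the egg. The essay reports
# only the headline figure, so it is echoed rather than derived from counts.
print("round 2 corrected: 72.4% (as reported), with fewer mosaic embryos")
```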

A brave new world?

So does this mean we finally have a cure for debilitating, heritable diseases? It’s important to remember that the study did not achieve a 100% success rate. Even the researchers themselves stress that further research is needed in order to fully understand the potential and limitations of the technique.

In our view, it is unlikely that genome editing would be used to treat the majority of inherited conditions anytime soon. We still can’t be sure how a child with a genetically altered genome will develop over a lifetime, so it seems unlikely that couples carrying a genetic disease would embark on gene editing rather than undergoing already available tests – such as preimplantation genetic diagnosis or prenatal diagnosis – where the embryos or fetus are tested for genetic faults.

-30-

As might be expected, there is now a call for public discussion about the ethics of this kind of work. See Part 3.

For anyone who started in the middle of this series, here’s Part 1 featuring an introduction to the technology and some of the issues.

CRISPR and editing the germline in the US (part 1 of 3): In the beginning

There’s been a minor flurry of interest in CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats; also known as CRISPR-Cas9), a gene-editing technique, since a team in Oregon announced a paper describing their work editing the germline. Since I’ve been following the CRISPR-Cas9 story for a while, this seems like a good juncture for a more in-depth look at the topic. In this first part I’m including an introduction to CRISPR, some information about the latest US work, and some previous writing about ethics issues raised when Chinese scientists first announced their work editing germlines in 2015 and during the patent dispute between the University of California at Berkeley and Harvard University’s Broad Institute.

Introduction to CRISPR

I’ve been searching for a good description of CRISPR and this helped to clear up some questions for me (Thank you to MIT Review),

For anyone who’s been reading about science for a while, this upbeat approach to explaining how a particular technology will solve all sorts of problems will seem quite familiar. It’s not the most hyperbolic piece I’ve seen but it barely mentions any problems associated with research (for some of the problems see: ‘The interest flurry’ later in part 2).

Oregon team

Steve Connor’s July 26, 2017 article for the MIT (Massachusetts Institute of Technology) Technology Review breaks the news (Note: Links have been removed),

The first known attempt at creating genetically modified human embryos in the United States has been carried out by a team of researchers in Portland, Oregon, MIT Technology Review has learned.

The effort, led by Shoukhrat Mitalipov of Oregon Health and Science University, involved changing the DNA of a large number of one-cell embryos with the gene-editing technique CRISPR, according to people familiar with the scientific results.

Until now, American scientists have watched with a combination of awe, envy, and some alarm as scientists elsewhere were first to explore the controversial practice. To date, three previous reports of editing human embryos were all published by scientists in China.

Now Mitalipov is believed to have broken new ground both in the number of embryos experimented upon and by demonstrating that it is possible to safely and efficiently correct defective genes that cause inherited diseases.

Although none of the embryos were allowed to develop for more than a few days—and there was never any intention of implanting them into a womb—the experiments are a milestone on what may prove to be an inevitable journey toward the birth of the first genetically modified humans.

In altering the DNA code of human embryos, the objective of scientists is to show that they can eradicate or correct genes that cause inherited disease, like the blood condition beta-thalassemia. The process is termed “germline engineering” because any genetically modified child would then pass the changes on to subsequent generations via their own germ cells—the egg and sperm.

Some critics say germline experiments could open the floodgates to a brave new world of “designer babies” engineered with genetic enhancements—a prospect bitterly opposed by a range of religious organizations, civil society groups, and biotech companies.

The U.S. intelligence community last year called CRISPR a potential “weapon of mass destruction.”

Here’s a link to and a citation for the groundbreaking paper,

Correction of a pathogenic gene mutation in human embryos by Hong Ma, Nuria Marti-Gutierrez, Sang-Wook Park, Jun Wu, Yeonmi Lee, Keiichiro Suzuki, Amy Koski, Dongmei Ji, Tomonari Hayama, Riffat Ahmed, Hayley Darby, Crystal Van Dyken, Ying Li, Eunju Kang, A.-Reum Park, Daesik Kim, Sang-Tae Kim, Jianhui Gong, Ying Gu, Xun Xu, David Battaglia, Sacha A. Krieg, David M. Lee, Diana H. Wu, Don P. Wolf, Stephen B. Heitner, Juan Carlos Izpisua Belmonte, Paula Amato, Jin-Soo Kim, Sanjiv Kaul, & Shoukhrat Mitalipov. Nature (2017) doi:10.1038/nature23305 Published online 02 August 2017

This paper appears to be open access.

CRISPR Issues: ethics and patents

In my May 14, 2015 posting I mentioned a ‘moratorium’ on germline research, the Chinese research paper, and the stance taken by the US National Institutes of Health (NIH),

The CRISPR technology has reignited a discussion about ethical and moral issues of human genetic engineering some of which is reviewed in an April 7, 2015 posting about a moratorium by Sheila Jasanoff, J. Benjamin Hurlbut and Krishanu Saha for the Guardian science blogs (Note: A link has been removed),

On April 3, 2015, a group of prominent biologists and ethicists writing in Science called for a moratorium on germline gene engineering: modifications to the human genome that will be passed on to future generations. The moratorium would apply to a technology called CRISPR/Cas9, which enables the removal of undesirable genes, insertion of desirable ones, and the broad recoding of nearly any DNA sequence.

Such modifications could affect every cell in an adult human being, including germ cells, and therefore be passed down through the generations. Many organisms across the range of biological complexity have already been edited in this way to generate designer bacteria, plants and primates. There is little reason to believe the same could not be done with human eggs, sperm and embryos. Now that the technology to engineer human germlines is here, the advocates for a moratorium declared, it is time to chart a prudent path forward. They recommend four actions: a hold on clinical applications; creation of expert forums; transparent research; and a globally representative group to recommend policy approaches.

The authors go on to review precedents and reasons for the moratorium while suggesting we need better ways for citizens to engage with and debate these issues,

An effective moratorium must be grounded in the principle that the power to modify the human genome demands serious engagement not only from scientists and ethicists but from all citizens. We need a more complex architecture for public deliberation, built on the recognition that we, as citizens, have a duty to participate in shaping our biotechnological futures, just as governments have a duty to empower us to participate in that process. Decisions such as whether or not to edit human genes should not be left to elite and invisible experts, whether in universities, ad hoc commissions, or parliamentary advisory committees. Nor should public deliberation be temporally limited by the span of a moratorium or narrowed to topics that experts deem reasonable to debate.

I recommend reading the post in its entirety as there are nuances that are best appreciated in the entirety of the piece.

Shortly after this essay was published, Chinese scientists announced they had genetically modified (nonviable) human embryos. From an April 22, 2015 article by David Cyranoski and Sara Reardon in Nature, where the research and some of the ethical issues are discussed,

In a world first, Chinese scientists have reported editing the genomes of human embryos. The results are published in the online journal Protein & Cell and confirm widespread rumours that such experiments had been conducted — rumours that sparked a high-profile debate last month about the ethical implications of such work.

In the paper, researchers led by Junjiu Huang, a gene-function researcher at Sun Yat-sen University in Guangzhou, tried to head off such concerns by using ‘non-viable’ embryos, which cannot result in a live birth, that were obtained from local fertility clinics. The team attempted to modify the gene responsible for β-thalassaemia, a potentially fatal blood disorder, using a gene-editing technique known as CRISPR/Cas9. The researchers say that their results reveal serious obstacles to using the method in medical applications.

“I believe this is the first report of CRISPR/Cas9 applied to human pre-implantation embryos and as such the study is a landmark, as well as a cautionary tale,” says George Daley, a stem-cell biologist at Harvard Medical School in Boston, Massachusetts. “Their study should be a stern warning to any practitioner who thinks the technology is ready for testing to eradicate disease genes.”

….

Huang says that the paper was rejected by Nature and Science, in part because of ethical objections; both journals declined to comment on the claim. (Nature’s news team is editorially independent of its research editorial team.)

He adds that critics of the paper have noted that the low efficiencies and high number of off-target mutations could be specific to the abnormal embryos used in the study. Huang acknowledges the critique, but because there are no examples of gene editing in normal embryos he says that there is no way to know if the technique operates differently in them.

Still, he maintains that the embryos allow for a more meaningful model — and one closer to a normal human embryo — than an animal model or one using adult human cells. “We wanted to show our data to the world so people know what really happened with this model, rather than just talking about what would happen without data,” he says.

This, too, is a good and thoughtful read.

There was an official response in the US to the publication of this research, from an April 29, 2015 post by David Bruggeman on his Pasco Phronesis blog (Note: Links have been removed),

In light of Chinese researchers reporting their efforts to edit the genes of ‘non-viable’ human embryos, the National Institutes of Health (NIH) Director Francis Collins issued a statement (H/T Carl Zimmer).

“NIH will not fund any use of gene-editing technologies in human embryos. The concept of altering the human germline in embryos for clinical purposes has been debated over many years from many different perspectives, and has been viewed almost universally as a line that should not be crossed. Advances in technology have given us an elegant new way of carrying out genome editing, but the strong arguments against engaging in this activity remain. These include the serious and unquantifiable safety issues, ethical issues presented by altering the germline in a way that affects the next generation without their consent, and a current lack of compelling medical applications justifying the use of CRISPR/Cas9 in embryos.” …

The US has modified its stance according to a February 14, 2017 article by Jocelyn Kaiser for Science Magazine (Note: Links have been removed),

Editing the DNA of a human embryo to prevent a disease in a baby could be ethically allowable one day—but only in rare circumstances and with safeguards in place, says a widely anticipated report released today.

The report from an international committee convened by the U.S. National Academy of Sciences (NAS) and the National Academy of Medicine in Washington, D.C., concludes that such a clinical trial “might be permitted, but only following much more research” on risks and benefits, and “only for compelling reasons and under strict oversight.” Those situations could be limited to couples who both have a serious genetic disease and for whom embryo editing is “really the last reasonable option” if they want to have a healthy biological child, says committee co-chair Alta Charo, a bioethicist at the University of Wisconsin in Madison.

Some researchers are pleased with the report, saying it is consistent with previous conclusions that safely altering the DNA of human eggs, sperm, or early embryos—known as germline editing—to create a baby could be possible eventually. “They have closed the door to the vast majority of germline applications and left it open for a very small, well-defined subset. That’s not unreasonable in my opinion,” says genome researcher Eric Lander of the Broad Institute in Cambridge, Massachusetts. Lander was among the organizers of an international summit at NAS in December 2015 who called for more discussion before proceeding with embryo editing.

But others see the report as lowering the bar for such experiments because it does not explicitly say they should be prohibited for now. “It changes the tone to an affirmative position in the absence of the broad public debate this report calls for,” says Edward Lanphier, chairman of the DNA editing company Sangamo Therapeutics in Richmond, California. Two years ago, he co-authored a Nature commentary calling for a moratorium on clinical embryo editing.

One advocacy group opposed to embryo editing goes further. “We’re very disappointed with the report. It’s really a pretty dramatic shift from the existing and widespread agreement globally that human germline editing should be prohibited,” says Marcy Darnovsky, executive director of the Center for Genetics and Society in Berkeley, California.

Interestingly, this change of stance occurred just prior to a CRISPR patent decision (from my March 15, 2017 posting),

I have written about the CRISPR patent tussle (Harvard & MIT’s [Massachusetts Institute of Technology] Broad Institute vs the University of California at Berkeley) previously in a Jan. 6, 2015 posting and in a more detailed May 14, 2015 posting. I also mentioned (in a Jan. 17, 2017 posting) CRISPR and its patent issues in the context of a posting about a Slate.com series on Frankenstein and the novel’s applicability to our own time. This patent fight is being bitterly fought as fortunes are at stake.

It seems a decision has been made regarding the CRISPR patent claims. From a Feb. 17, 2017 article by Charmaine Distor for The Science Times,

After an intense court battle, the US Patent and Trademark Office (USPTO) released its ruling on February 15 [2017]. The rights for the CRISPR-Cas9 gene editing technology was handed over to the Broad Institute of Harvard University and the Massachusetts Institute of Technology (MIT).

According to an article in Nature, the said court battle was between the Broad Institute and the University of California. The two institutions are fighting over the intellectual property right for the CRISPR patent. The case between the two started when the patent was first awarded to the Broad Institute despite having the University of California apply first for the CRISPR patent.

Heidi Ledford’s Feb. 17, 2017 article for Nature provides more insight into the situation (Note: Links have been removed),

It [USPTO] ruled that the Broad Institute of Harvard and MIT in Cambridge could keep its patents on using CRISPR–Cas9 in eukaryotic cells. That was a blow to the University of California in Berkeley, which had filed its own patents and had hoped to have the Broad’s thrown out.

The fight goes back to 2012, when Jennifer Doudna at Berkeley, Emmanuelle Charpentier, then at the University of Vienna, and their colleagues outlined how CRISPR–Cas9 could be used to precisely cut isolated DNA. In 2013, Feng Zhang at the Broad and his colleagues — and other teams — showed how it could be adapted to edit DNA in eukaryotic cells such as plants, livestock and humans.

Berkeley filed for a patent earlier, but the USPTO granted the Broad’s patents first — and this week upheld them. There are high stakes involved in the ruling. The holder of key patents could make millions of dollars from CRISPR–Cas9’s applications in industry: already, the technique has sped up genetic research, and scientists are using it to develop disease-resistant livestock and treatments for human diseases.

….

I also noted this eyebrow-lifting statistic: “As for Ledford’s 3rd point, there are an estimated 763 patent families (groups of related patents) claiming CAS9, leading to the distinct possibility that the Broad Institute will be fighting many patent claims in the future.”

-30-

Part 2 covers three critical responses to the reporting, which between them describe the technology in more detail and discuss the possibility of ‘designer babies’. CRISPR and editing the germline in the US (part 2 of 3): ‘designer babies’?

Part 3 is all about public discussion or, rather, the lack of one and the need for one, according to a couple of social scientists. Informally, there is already some discussion via pop culture, as Joelle Renstrom notes in her piece on the television series Orphan Black and as I touch on in my final comments. CRISPR and editing the germline in the US (part 3 of 3): public discussions and pop culture

Repairing a ‘broken’ heart with a 3D printed patch

The idea of using stem cells to help heal your heart so you don’t have scar tissue seems to be a step closer to reality. From an April 14, 2017 news item on ScienceDaily, which announces the research and explains why scar tissue in your heart is a problem,

A team of biomedical engineering researchers, led by the University of Minnesota, has created a revolutionary 3D-bioprinted patch that can help heal scarred heart tissue after a heart attack. The discovery is a major step forward in treating patients with tissue damage after a heart attack.

According to the American Heart Association, heart disease is the No. 1 cause of death in the U.S. killing more than 360,000 people a year. During a heart attack, a person loses blood flow to the heart muscle and that causes cells to die. Our bodies can’t replace those heart muscle cells so the body forms scar tissue in that area of the heart, which puts the person at risk for compromised heart function and future heart failure.

An April 13, 2017 University of Minnesota news release (also on EurekAlert but dated April 14, 2017), which originated the news item, describes the work in more detail,

In this study, researchers from the University of Minnesota-Twin Cities, University of Wisconsin-Madison, and University of Alabama-Birmingham used laser-based 3D-bioprinting techniques to incorporate stem cells derived from adult human heart cells on a matrix that began to grow and beat synchronously in a dish in the lab.

When the cell patch was placed on a mouse following a simulated heart attack, the researchers saw significant increase in functional capacity after just four weeks. Since the patch was made from cells and structural proteins native to the heart, it became part of the heart and absorbed into the body, requiring no further surgeries.

“This is a significant step forward in treating the No. 1 cause of death in the U.S.,” said Brenda Ogle, an associate professor of biomedical engineering at the University of Minnesota. “We feel that we could scale this up to repair hearts of larger animals and possibly even humans within the next several years.”

Ogle said that this research is different from previous research in that the patch is modeled after a digital, three-dimensional scan of the structural proteins of native heart tissue.  The digital model is made into a physical structure by 3D printing with proteins native to the heart and further integrating cardiac cell types derived from stem cells.  Only with 3D printing of this type can we achieve one micron resolution needed to mimic structures of native heart tissue.

“We were quite surprised by how well it worked given the complexity of the heart,” Ogle said.  “We were encouraged to see that the cells had aligned in the scaffold and showed a continuous wave of electrical signal that moved across the patch.”

Ogle said they are already beginning the next step to develop a larger patch that they would test on a pig heart, which is similar in size to a human heart.

The researchers have made this video of beating heart cells in a petri dish available,

Published: Apr 14, 2017

Caption: Researchers used laser-based 3D-bioprinting techniques to incorporate stem cells derived from adult human heart cells on a matrix that began to grow and beat synchronously in a dish in the lab. Credit: Brenda Ogle, University of Minnesota

Here’s a link to and a citation for the paper,

Myocardial Tissue Engineering With Cells Derived From Human-Induced Pluripotent Stem Cells and a Native-Like, High-Resolution, 3-Dimensionally Printed Scaffold by Ling Gao, Molly E. Kupfer, Jangwook P. Jung, Libang Yang, Patrick Zhang, Yong Da Sie, Quyen Tran, Visar Ajeti, Brian T. Freeman, Vladimir G. Fast, Paul J. Campagnola, Brenda M. Ogle, Jianyi Zhang. Circulation Research, April 14, 2017, Volume 120, Issue 8, pp. 1318-1325. https://doi.org/10.1161/CIRCRESAHA.116.310277 Originally published online January 9, 2017

This paper appears to be open access.

Better bioimaging accuracy with direct radiolabeling of nanomaterials

Even I can tell the image is improved when the chelator is omitted,

Courtesy: Wiley

A Feb. 9, 2017 news item on phys.org describes a new, chelator-free technique for increased bioimaging accuracy,

Positron emission tomography (PET) plays a pivotal role for monitoring the distribution and accumulation of radiolabeled nanomaterials in living subjects. The radioactive metals are usually connected to the nanomaterial through an anchor, a so-called chelator, but this chemical binding can be omitted if nanographene is used, as American scientists report in the journal Angewandte Chemie. The replacement of chelator-based labeling by intrinsic labeling significantly enhances the bioimaging accuracy and reduces biases.

A Feb. 9, 2017 Wiley press release (also on EurekAlert), which originated the news item, provides more detail,

Nanoparticles are very promising substances for biodiagnostics (e.g., detecting cancerous tissue) and biotherapy (e.g., destroying tumors by molecular agents), because they are not as fast [sic] metabolized as normal pharmaceuticals and they particularly enrich [sic] in tumors through an effect called enhanced permeability and retention (EPR). Chelators, which have a macrocyclic structure, are used to anchor the radioactive element (e.g., copper-64) onto the nanoparticles’ surface. The tracers are then detected and localized in the body with the help of a positron emission tomography (PET) scanner. However, the use of a chelator can also be problematic, because it can detach from the nanoparticles or bias the imaging. Therefore, the group of Weibo Cai at University of Wisconsin-Madison, USA, sought for chelator-free solutions—and found it in nanographene, one of the most promising substances in nanotechnology.

Nanographene offers the electronic system to provide special binding electrons for some transition metal ions. “π bonds of nanographene are able to provide the additional electron to stably incorporate the 64Cu2+ acceptor ions onto the surface of graphene,” the authors wrote. Thus, it was possible to directly and stably attach the copper isotope to reduced graphene oxide nanomaterials stabilized by poly(ethylene glycol) (PEG), and this system was used for several bioimaging tests including the detection of tumors in mice.

After injection in the mouse model, the scientists observed long blood circulation and high tumor uptake. “Prolonged blood circulation of 64Cu-RGO-PEG […] induced a prompt and persistent tumor uptake via EPR effect,” they wrote. Moreover, the directly radiolabeled nanographene was readily prepared by simply mixing both components and heating them. This simple chelator-free, intrinsically labeled system may provide an attractive alternative to the chelator-based radiolabeling, which is still the “gold standard” in bioimaging.
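
One practical point the press release doesn’t spell out: copper-64 decays quickly, which is part of why prolonged blood circulation and prompt tumor uptake matter so much for PET imaging. Here’s a quick sketch of the decay arithmetic, using the standard reference half-life for 64Cu of about 12.7 hours (a textbook value, not a number from the paper),

```python
# Radioactive decay: the fraction of 64Cu remaining after t hours is
# 2 ** (-t / t_half). The ~12.7 h half-life is a standard reference value
# for copper-64, not a figure taken from the paper itself.
T_HALF_HOURS = 12.7

def fraction_remaining(hours: float) -> float:
    return 2 ** (-hours / T_HALF_HOURS)

for t in (1, 6, 12, 24, 48):  # plausible PET imaging time points, in hours
    print(f"after {t:>2} h: {100 * fraction_remaining(t):5.1f}% of the 64Cu remains")
```

By two days only about 7 percent of the isotope is left, so a tracer that reaches the tumor promptly and stays there, as reported above, makes the most of the available signal.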

Here’s a link to and a citation for the paper,

Chelator-Free Radiolabeling of Nanographene: Breaking the Stereotype of Chelation by Sixiang Shi, Cheng Xu, Kai Yang, Shreya Goel, Hector F. Valdovinos, Haiming Luo, Emily B. Ehlerding, Christopher G. England, Liang Cheng, Feng Chen, Robert J. Nickles, Zhuang Liu, and Weibo Cai. Angewandte Chemie International Edition DOI: 10.1002/anie.201610649 Version of Record online: 7 FEB 2017

This paper is behind a paywall.

A guide to producing transparent electronics

A blue light shines through a clear, implantable medical sensor onto a brain model. See-through sensors, which have been developed by a team of UW–Madison engineers, should help neural researchers better view brain activity. Credit: Justin Williams research group

Read this Oct. 13, 2016 news item on ScienceDaily if you want to find out how to make your own transparent electronics,

When University of Wisconsin-Madison engineers announced in the journal Nature Communications that they had developed transparent sensors for use in imaging the brain, researchers around the world took notice.

Then the requests came flooding in. “So many research groups started asking us for these devices that we couldn’t keep up,” says Zhenqiang (Jack) Ma, the Lynn H. Matthias Professor and Vilas Distinguished Achievement Professor in electrical and computer engineering at UW-Madison.

As a result, in a paper published in the journal Nature Protocols, the researchers have described in great detail how to fabricate and use transparent graphene neural electrode arrays in applications in electrophysiology, fluorescent microscopy, optical coherence tomography, and optogenetics. “We described how to do these things so we can start working on the next generation,” says Ma.

Although he and collaborator Justin Williams, the Vilas Distinguished Achievement Professor in biomedical engineering and neurological surgery at UW-Madison, patented the technology through the Wisconsin Alumni Research Foundation, they saw its potential for advancements in research. “That little step has already resulted in an explosion of research in this field,” says Williams. “We didn’t want to keep this technology in our lab. We wanted to share it and expand the boundaries of its applications.”

An Oct. 13, 2016 University of Wisconsin-Madison news release, which originated the news item, provides more detail about the paper and the researchers,

“This paper is a gateway for other groups to explore the huge potential from here,” says Ma. “Our technology demonstrates one of the key in vivo applications of graphene. We expect more revolutionary research will follow in this interdisciplinary field.”

Ma’s group is a world leader in developing revolutionary flexible electronic devices. The see-through, implantable micro-electrode arrays were light years beyond anything ever created.

Here’s a link to and a citation for the paper,

Fabrication and utility of a transparent graphene neural electrode array for electrophysiology, in vivo imaging, and optogenetics by Dong-Wook Park, Sarah K Brodnick, Jared P Ness, Farid Atry, Lisa Krugner-Higby, Amelia Sandberg, Solomon Mikael, Thomas J Richner, Joseph Novello, Hyungsoo Kim, Dong-Hyun Baek, Jihye Bong, Seth T Frye, Sanitta Thongpang, Kyle I Swanson, Wendell Lake, Ramin Pashaie, Justin C Williams, & Zhenqiang Ma. Nature Protocols 11, 2201–2222 (2016) doi:10.1038/nprot.2016.127 Published online 13 October 2016

Of course this paper is open access. The team’s previous paper published in 2014 was featured here in an Oct. 23, 2014 posting.

Carbon nanotubes that can outperform silicon

According to a Sept. 2, 2016 news item on phys.org, researchers at the University of Wisconsin-Madison have produced carbon nanotube transistors that outperform state-of-the-art silicon transistors,

For decades, scientists have tried to harness the unique properties of carbon nanotubes to create high-performance electronics that are faster or consume less power—resulting in longer battery life, faster wireless communication and faster processing speeds for devices like smartphones and laptops.

But a number of challenges have impeded the development of high-performance transistors made of carbon nanotubes, tiny cylinders made of carbon just one atom thick. Consequently, their performance has lagged far behind semiconductors such as silicon and gallium arsenide used in computer chips and personal electronics.

Now, for the first time, University of Wisconsin-Madison materials engineers have created carbon nanotube transistors that outperform state-of-the-art silicon transistors.

Led by Michael Arnold and Padma Gopalan, UW-Madison professors of materials science and engineering, the team’s carbon nanotube transistors achieved current that’s 1.9 times higher than silicon transistors. …

A Sept. 2, 2016 University of Wisconsin-Madison news release (also on EurekAlert) by Adam Malecek, which originated the news item, describes the research in more detail and notes that the technology has been patented,

“This achievement has been a dream of nanotechnology for the last 20 years,” says Arnold. “Making carbon nanotube transistors that are better than silicon transistors is a big milestone. This breakthrough in carbon nanotube transistor performance is a critical advance toward exploiting carbon nanotubes in logic, high-speed communications, and other semiconductor electronics technologies.”

This advance could pave the way for carbon nanotube transistors to replace silicon transistors and continue delivering the performance gains the computer industry relies on and that consumers demand. The new transistors are particularly promising for wireless communications technologies that require a lot of current flowing across a relatively small area.

As some of the best electrical conductors ever discovered, carbon nanotubes have long been recognized as a promising material for next-generation transistors.

Carbon nanotube transistors should be able to perform five times faster or use five times less energy than silicon transistors, according to extrapolations from single nanotube measurements. The nanotube’s ultra-small dimension makes it possible to rapidly change a current signal traveling across it, which could lead to substantial gains in the bandwidth of wireless communications devices.

But researchers have struggled to isolate purely semiconducting carbon nanotubes, which is crucial, because metallic nanotube impurities act like copper wires and disrupt the semiconducting properties — like a short in an electronic device.

The UW–Madison team used polymers to selectively sort out the semiconducting nanotubes, achieving a solution of ultra-high-purity semiconducting carbon nanotubes.

“We’ve identified specific conditions in which you can get rid of nearly all metallic nanotubes, where we have less than 0.01 percent metallic nanotubes,” says Arnold.

Placement and alignment of the nanotubes is also difficult to control.

To make a good transistor, the nanotubes need to be aligned in just the right order, with just the right spacing, when assembled on a wafer. In 2014, the UW–Madison researchers overcame that challenge when they announced a technique, called “floating evaporative self-assembly,” that gives them this control.

The nanotubes must make good electrical contacts with the metal electrodes of the transistor. Because the polymer the UW–Madison researchers use to isolate the semiconducting nanotubes also acts like an insulating layer between the nanotubes and the electrodes, the team “baked” the nanotube arrays in a vacuum oven to remove the insulating layer. The result: excellent electrical contacts to the nanotubes.

The researchers also developed a treatment that removes residues from the nanotubes after they’re processed in solution.

“In our research, we’ve shown that we can simultaneously overcome all of these challenges of working with nanotubes, and that has allowed us to create these groundbreaking carbon nanotube transistors that surpass silicon and gallium arsenide transistors,” says Arnold.

The researchers benchmarked their carbon nanotube transistor against a silicon transistor of the same size, geometry and leakage current in order to make an apples-to-apples comparison.
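
For readers who want to see what that kind of matched comparison looks like in practice, here is a minimal sketch in Python. It assumes you have measured transfer curves (gate voltage versus drain current) for each device; the function name, the fixed-gate-swing convention, and the numbers in the usage note are my own illustrative assumptions, not the team’s actual benchmarking code.

```python
import numpy as np

def on_current_at_matched_leakage(vg, i_d, i_off_target, gate_swing):
    """On-state current measured a fixed gate swing above the gate
    voltage at which the device leaks i_off_target.

    vg, i_d: transfer-curve arrays (volts, amps per unit width),
    sorted so that i_d increases monotonically with vg.
    """
    # Interpolate on log(I): subthreshold current is roughly
    # exponential in gate voltage, so log-space is better behaved.
    v_off = np.interp(np.log(i_off_target), np.log(i_d), vg)
    return np.interp(v_off + gate_swing, vg, i_d)

# Hypothetical usage: same leakage target and gate swing for both devices.
# ratio = (on_current_at_matched_leakage(vg_cnt, id_cnt, 1e-9, 1.0) /
#          on_current_at_matched_leakage(vg_si, id_si, 1e-9, 1.0))
```

Pinning both devices to the same leakage current before reading off the on-current is what makes the comparison apples-to-apples: otherwise a leakier transistor can look deceptively strong.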

They are continuing to work on adapting their device to match the geometry used in silicon transistors, which get smaller with each new generation. Work is also underway to develop high-performance radio frequency amplifiers that may be able to boost a cellphone signal. While the researchers have already scaled their alignment and deposition process to 1 inch by 1 inch wafers, they’re working on scaling the process up for commercial production.

Arnold says it’s exciting to finally reach the point where researchers can exploit the nanotubes to attain performance gains in actual technologies.

“There has been a lot of hype about carbon nanotubes that hasn’t been realized, and that has kind of soured many people’s outlook,” says Arnold. “But we think the hype is deserved. It has just taken decades of work for the materials science to catch up and allow us to effectively harness these materials.”

The researchers have patented their technology through the Wisconsin Alumni Research Foundation.

Interestingly, at least some of the research was publicly funded, according to the news release,

Funding from the National Science Foundation, the Army Research Office and the Air Force supported their work.

Will the public ever benefit financially from this research?

A treasure trove of molecule and battery data released to the public

Scientists working on The Materials Project have taken the notion of open science to heart and opened up access to their data, according to a June 9, 2016 news item on Nanowerk,

The Materials Project, a Google-like database of material properties aimed at accelerating innovation, has released an enormous trove of data to the public, giving scientists working on fuel cells, photovoltaics, thermoelectrics, and a host of other advanced materials a powerful tool to explore new research avenues. But it has become a particularly important resource for researchers working on batteries. Co-founded and directed by Lawrence Berkeley National Laboratory (Berkeley Lab) scientist Kristin Persson, the Materials Project uses supercomputers to calculate the properties of materials based on first-principles quantum-mechanical frameworks. It was launched in 2011 by the U.S. Department of Energy’s (DOE) Office of Science.

A June 8, 2016 Berkeley Lab news release, which originated the news item, provides more explanation about The Materials Project,

The idea behind the Materials Project is that it can save researchers time by predicting material properties without needing to synthesize the materials first in the lab. It can also suggest new candidate materials that experimentalists had not previously dreamed up. With a user-friendly web interface, users can look up the calculated properties, such as voltage, capacity, band gap, and density, for tens of thousands of materials.

Two sets of data were released last month: nearly 1,500 compounds investigated for multivalent intercalation electrodes and more than 21,000 organic molecules relevant for liquid electrolytes as well as a host of other research applications. Batteries with multivalent cathodes (which have multiple electrons per mobile ion available for charge transfer) are promising candidates for reducing cost and achieving higher energy density than that available with current lithium-ion technology.
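
The energy-density argument is easy to check with the standard theoretical-capacity formula, Q = nF/(3.6M) in mAh/g, where n is the number of electrons transferred per formula unit, F is the Faraday constant, and M is the molar mass in g/mol. A back-of-the-envelope sketch (the “same-mass divalent host” is hypothetical, for comparison only, not a material from the study):

```python
F = 96485.0  # Faraday constant, C/mol

def capacity_mAh_per_g(n_electrons, molar_mass_g_mol):
    # Q = n * F / (3.6 * M); the factor 3.6 converts coulombs to mAh
    return n_electrons * F / (3.6 * molar_mass_g_mol)

print(capacity_mAh_per_g(1, 157.76))  # LiFePO4, one Li+ per formula unit: ~170 mAh/g
print(capacity_mAh_per_g(2, 157.76))  # hypothetical same-mass host, 2+ ion: ~340 mAh/g
```

Doubling the charge carried per ion doubles the theoretical capacity for the same host mass, which is the basic appeal of multivalent chemistry; in practice, sluggish multivalent-ion mobility eats into that advantage, which is why the cathode search matters.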

The sheer volume and scope of the data is unprecedented, said Persson, who is also a professor in UC Berkeley’s Department of Materials Science and Engineering. “As far as the multivalent cathodes, there’s nothing similar in the world that exists,” she said. “To give you an idea, experimentalists are usually able to focus on one of these materials at a time. Using calculations, we’ve added data on 1,500 different compositions.”

While other research groups have made their data publicly available, what makes the Materials Project so useful are the online tools to search all that data. The recent release includes two new web apps—the Molecules Explorer and the Redox Flow Battery Dashboard—plus an add-on to the Battery Explorer web app enabling researchers to work with other ions in addition to lithium.

“Not only do we give the data freely, we also give algorithms and software to interpret or search over the data,” Persson said.
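
Those tools include a public web API with a Python client (the MPRester class in the pymatgen library). Here is a minimal sketch of a property lookup; it follows the legacy query interface as I understand it, and the API key is a placeholder (free keys are issued at materialsproject.org):

```python
from pymatgen.ext.matproj import MPRester

# "YOUR_API_KEY" is a placeholder; free keys are issued at materialsproject.org
with MPRester("YOUR_API_KEY") as mpr:
    # Mongo-style criteria: every computed Li-Fe-P-O compound,
    # returning a handful of calculated properties for each
    entries = mpr.query(
        criteria={"elements": {"$all": ["Li", "Fe", "P", "O"]}},
        properties=["material_id", "pretty_formula", "band_gap", "density"],
    )
    for entry in entries[:5]:
        print(entry["material_id"], entry["pretty_formula"],
              entry["band_gap"], entry["density"])
```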

The Redox Flow Battery app gives scientific parameters as well as techno-economic ones, so battery designers can quickly rule out a molecule that might work well but be prohibitively expensive. The Molecules Explorer app will be useful to researchers far beyond the battery community.

“For multivalent batteries it’s so hard to get good experimental data,” Persson said. “The calculations provide rich and robust benchmarks to assess whether the experiments are actually measuring a valid intercalation process or a side reaction, which is particularly difficult for multivalent energy technology because there are so many problems with testing these batteries.”

Here’s a screen capture from the Battery Explorer app,

The Materials Project’s Battery Explorer app now allows researchers to work with other ions in addition to lithium. Courtesy: The Materials Project

The news release goes on to describe a new discovery made possible by The Materials Project (Note: A link has been removed),

Together with Persson, Berkeley Lab scientist Gerbrand Ceder, postdoctoral associate Miao Liu, and MIT graduate student Ziqin Rong, the Materials Project team investigated some of the more promising materials in detail for high multivalent ion mobility, which is the most difficult property to achieve in these cathodes. This led the team to materials known as thiospinels. One of these thiospinels has double the capacity of the currently known multivalent cathodes and was recently synthesized and tested in the lab by JCESR researcher Linda Nazar of the University of Waterloo, Canada.

“These materials may not work well the first time you make them,” Persson said. “You have to be persistent; for example you may have to make the material very phase pure or smaller than a particular particle size and you have to test them under very controlled conditions. There are people who have actually tried this material before and discarded it because they thought it didn’t work particularly well. The power of the computations and the design metrics we have uncovered with their help is that it gives us the confidence to keep trying.”

The researchers were able to double the energy capacity of what had previously been achieved for this kind of multivalent battery. The study has been published in the journal Energy & Environmental Science in an article titled, “A High Capacity Thiospinel Cathode for Mg Batteries.”

“The new multivalent battery works really well,” Persson said. “It’s a significant advance and an excellent proof-of-concept for computational predictions as a valuable new tool for battery research.”

Here’s a link to and a citation for the paper,

A high capacity thiospinel cathode for Mg batteries by Xiaoqi Sun, Patrick Bonnick, Victor Duffort, Miao Liu, Ziqin Rong, Kristin A. Persson, Gerbrand Ceder and Linda F. Nazar. Energy Environ. Sci., 2016, Advance Article DOI: 10.1039/C6EE00724D First published online 24 May 2016

This paper seems to be behind a paywall.

Getting back to the news release, there’s more about The Materials Project in relationship to its membership,

The Materials Project has attracted more than 20,000 users since launching five years ago. Every day about 20 new users register and 300 to 400 people log in to do research.

One of those users is Dane Morgan, a professor of engineering at the University of Wisconsin-Madison who develops new materials for a wide range of applications, including highly active catalysts for fuel cells, stable low-work function electron emitter cathodes for high-powered microwave devices, and efficient, inexpensive, and environmentally safe solar materials.

“The Materials Project has enabled some of the most exciting research in my group,” said Morgan, who also serves on the Materials Project’s advisory board. “By providing easy access to a huge database, as well as tools to process that data for thermodynamic predictions, the Materials Project has enabled my group to rapidly take on materials design projects that would have been prohibitive just a few years ago.”

More materials are being calculated and added to the database every day. In two years, Persson expects another trove of data to be released to the public.

“This is the way to reach a significant part of the research community, to reach students while they’re still learning material science,” she said. “It’s a teaching tool. It’s a science tool. It’s unprecedented.”

Supercomputing clusters at the National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science User Facility hosted at Berkeley Lab, provide the infrastructure for the Materials Project.

Funding for the Materials Project is provided by the Office of Science [US Department of Energy], including support through JCESR [Joint Center for Energy Storage Research].

Happy researching!

Science literacy, science advice, the US Supreme Court, and Britain’s House of Commons

This ‘think’ piece is going to cover a fair bit of ground, including science literacy in the general public and in the US Supreme Court, and what that might mean for science advice and for UK Members of Parliament (MPs).

Science literacy generally and in the US Supreme Court

A science literacy report for the US National Academy of Sciences (NAS), due sometime in early to mid-2017, is being crafted with an eye to capturing a different perspective, according to a March 24, 2016 University of Wisconsin-Madison news release by Terry Devitt,

What does it mean to be science literate? How science literate is the American public? How do we stack up against other countries? What are the civic implications of a public with limited knowledge of science and how it works? How is science literacy measured?

These and other questions are under the microscope of a 12-member National Academy of Sciences (NAS) panel — including University of Wisconsin-Madison Life Sciences Communication Professor Dominique Brossard and School of Education Professor Noah Feinstein — charged with sorting through the existing data on American science and health literacy and exploring the association between knowledge of science and public perception of and support for science.

The committee — composed of educators, scientists, physicians and social scientists — will take a hard look at the existing data on the state of U.S. science literacy, the questions asked, and the methods used to measure what Americans know and don’t know about science and how that knowledge has changed over time. Critically for science, the panel will explore whether a lack of science literacy is associated with decreased public support for science or research.

Historically, policymakers and leaders in the scientific community have fretted over a perceived lack of knowledge among Americans about science and how it works. A prevailing fear is that an American public unequipped to come to terms with modern science will ultimately have serious economic, security and civic consequences, especially when it comes to addressing complex and nuanced issues like climate change, antibiotic resistance, emerging diseases, environment and energy choices.

While the prevailing wisdom, inspired by past studies, is that Americans don’t stack up well in terms of understanding science, Brossard is not so convinced. Much depends on what kinds of questions are asked, how they are asked, and how the data is analyzed.

It is very easy, she argues, to do bad social science, and past studies may have measured the wrong things or otherwise created a perception about the state of U.S. science literacy that may or may not be accurate.

“How do you conceptualize scientific literacy? What do people need to know? Some argue that scientific literacy may be as simple as an understanding of how science works, the nature of science, [emphasis mine]” Brossard explains. “For others it may be a kind of ‘civic science literacy,’ where people have enough knowledge to be informed and make good decisions in a civics context.”

Science literacy may not be just for the public; it would seem that US Supreme Court judges may not have a basic understanding of how science works. David Bruggeman’s March 24, 2016 posting (on his Pasco Phronesis blog) describes a then-current case before the Supreme Court (Justice Antonin Scalia has since died) (Note: Links have been removed),

It’s a case concerning aspects of the University of Texas admissions process for undergraduates and the case is seen as a possible means of restricting race-based considerations for admission. While I think the arguments in the case will likely revolve around factors far removed from science and/or technology, there were comments raised by two Justices that struck a nerve with many scientists and engineers.

Both Justice Antonin Scalia and Chief Justice John Roberts raised questions about the validity of having diversity where science and scientists are concerned [emphasis mine]. Justice Scalia seemed to imply that diversity wasn’t essential for the University of Texas, as most African-American scientists didn’t come from schools at the level of the University of Texas (considered the best university in Texas). Chief Justice Roberts was a bit more plain about not understanding the benefits of diversity. He stated, “What unique perspective does a black student bring to a class in physics?”

To that end, Dr. S. James Gates, theoretical physicist at the University of Maryland, and member of the President’s Council of Advisers on Science and Technology (and commercial actor) has an editorial in the March 25 [2016] issue of Science explaining that the value of having diversity in science does not accrue *just* to those who are underrepresented.

Dr. Gates relates his personal experience as a researcher and teacher, showing how people’s backgrounds inform their practice of science, and how two different people may use the same scientific method but think about the problem differently.

I’m guessing that both Scalia and Roberts, and possibly others, believe that science is the discovery and accumulation of facts. In this worldview, science facts such as gravity are waiting for discovery and formulation into a ‘law’. They do not recognize that most science is a collection of working beliefs that may be influenced by personal perspective. For example, we believe we’ve proved the existence of the Higgs boson, but no one associated with the research has ever stated unequivocally that it exists.

For judges who are under the impression that scientific facts are out there somewhere waiting to be discovered, diversity must seem irrelevant. It is not. Who you are affects the questions you ask and how you approach science. The easiest example is to look at how women were viewed when they were subjects in medical research. The fact that women’s physiology is significantly different (and not just in child-bearing ways) was never considered relevant when reporting results. Today, researchers consider not only gender, but also age (to some extent), ethnicity, and more when examining results. It’s still not perfect, but it was a step forward.

So when Brossard included “… an understanding of how science works, the nature of science …” as an aspect of science literacy, the judges seemed to present a good example of how not understanding science can have a major impact on how others live.

I’d almost forgotten this science literacy piece as I’d started the draft some months ago but then I spotted a news item about a science advice/MP ‘dating’ service in the UK.

Science advice and UK MPs

First, the news, then, the speculation (from a June 6, 2016 news item on ScienceDaily),

MPs have expressed an overwhelming willingness to use a proposed new service to swiftly link them with academics in relevant areas to help ensure policy is based on the latest evidence.

A June 6, 2016 University of Exeter press release, which originated the news item, provides more detail about the proposed service and the research providing the supporting evidence (Note: A link has been removed),

The government is pursuing a drive towards evidence-based policy, yet policy makers still struggle to incorporate evidence into their decisions. One reason for this is limited easy access to the latest research findings or to academic experts who can respond to questions about evidence quickly.

Researchers at Cardiff University, the University of Exeter and University College London have today published results of the largest study to date reporting MPs’ attitudes to evidence in policy making and their reactions to a proposed Evidence Information Service (EIS) – a rapid match-making advisory service that would work alongside existing systems to put MPs in touch with relevant academic experts.

Dr Natalia Lawrence, of the University of Exeter, said: “It’s clear from our study that politicians want to ensure their decisions incorporate the most reliable evidence, but it can sometimes be very difficult for them to know how to access the latest research findings. This new matchmaking service could be a quick and easy way for them to seek advice from cutting-edge researchers and to check their understanding and facts. It could provide a useful complement to existing highly-valued information services.”

The research, published today in the journal Evidence and Policy, reports the findings of a national consultation exercise between politicians and the public. The researchers recruited members of the public to interview their local parliamentary representative. In total, 86 politicians were contacted, with 56 interviews completed. The MPs indicated an overwhelming willingness to use a service such as the EIS, with 85% supporting the idea, but noted a number of potential reservations related to the logistics of the EIS, such as response time and familiarity with the service. Yet the MPs indicated that their logistical reservations could be overcome by accessing the EIS via existing highly-valued parliamentary information services, such as those provided by the House of Commons and Lords Libraries. Furthermore, prior to rolling out the EIS on a nationwide basis, it would first need to be piloted.

Developing the proposed EIS in line with feedback from this consultation of MPs would offer the potential to provide policy makers with rapid, reliable and confidential evidence from willing volunteers from the research community.

Professor Chris Chambers, of Cardiff University, said: “The government has given a robust steer that MPs need to link in more with academics to ensure decisions shaping the future of the country are evidence-based. It’s heartening to see that there is a will to adopt this system and we now need to move into a phase of developing a service that is both simple and effective to meet this need.”

The next steps for the project are parallel consultations of academics and members of the public, and a pilot of the EIS, using funding from the GW4 alliance of universities, made up of Bath, Bristol, Cardiff and Exeter.

What this study shows:
• The consultation shows that politicians recognise the importance of evidence-based policy making and agree on the need for an easier and more direct linkage between academic experts and policy makers.
• Politicians would welcome the creation of the EIS as a provider of rapid, reliable and confidential evidence.

What this study does not show:
• This study does not show how academics would provide evidence. This was a small-scale study which consulted politicians and has not attempted to give voice to the academic community.
• This study does not detail the mechanism of an operational EIS. Instead it indicates the need for a service such as the EIS and suggests ways in which the EIS can be operationalized.

Here’s a link to and a citation for the paper,

The ‘Evidence Information Service’ as a new platform for supporting evidence-based policy: a consultation of UK parliamentarians by Natalia Lawrence, Jemma Chambers, Sinead Morrison, Sven Bestmann, Gerard O’Grady, Christopher Chambers, Andrew Kythreotis. Evidence & Policy: A Journal of Research, Debate and Practice DOI: http://dx.doi.org/10.1332/174426416X14643531912169 Appeared or available online: June 6, 2016

This paper is open access. *Corrected June 17, 2016.*

It’s an interesting idea and I can understand the appeal. However, operationalizing this ‘dating’ or ‘matchmaking’ service could prove quite complex. I appreciate the logistics issues, but I’m a little more concerned about the MPs’ science literacy. Are they going to be like the two US justices who believe that science is the pursuit of immutable facts? What happens if two MPs are matched with different scientists and those scientists don’t agree about what the evidence says? Or what happens if one scientist is more cautious than the other? There are all kinds of pitfalls. I’m not arguing against the idea, but it’s going to require a lot of careful consideration.

Diamond-based electronics?

A May 24, 2016 news item on ScienceDaily describes the latest research on using diamonds as semiconductors,

Along with being a “girl’s best friend,” diamonds also have remarkable properties that could make them ideal semiconductors. This is welcome news for electronics; semiconductors are needed to meet the rising demand for more efficient electronics that deliver and convert power.

The thirst for electronics is unlikely to cease and almost every appliance or device requires a suite of electronics that transfer, convert and control power. Now, researchers have taken an important step toward that technology with a new way to dope single crystals of diamonds, a crucial process for building electronic devices.

A May 24, 2016 American Institute of Physics (AIP) news release (also on EurekAlert), which originated the news item, provides more detail,

For power electronics, diamonds could serve as the perfect material. They are thermally conductive, which means diamond-based devices would dissipate heat quickly and easily, foregoing the need for bulky and expensive methods for cooling. Diamond can also handle high voltages and power. Electrical currents also flow through diamonds quickly, meaning the material would make for energy efficient devices.

But among the biggest challenges to making diamond-based devices is doping, a process in which other elements are integrated into the semiconductor to change its properties. Because of diamond’s rigid crystalline structure, doping is difficult.

Currently, you can dope diamond by coating the crystal with boron and heating it to 1450 degrees Celsius. But it’s difficult to remove the boron coating at the end. This method only works on diamonds consisting of multiple crystals stuck together. Because such polycrystalline diamonds have irregularities between the crystals, single crystals would be superior semiconductors.

You can dope single crystals by injecting boron atoms while growing the crystals artificially. The problem is the process requires powerful microwaves that can degrade the quality of the crystal.

Now, Ma [Zhengqiang (Jack) Ma, an electrical and computer engineering professor at the University of Wisconsin-Madison] and his colleagues have found a way to dope single-crystal diamonds with boron at relatively low temperatures and without any degradation. The researchers discovered if you bond a single-crystal diamond with a piece of silicon doped with boron, and heat it to 800 degrees Celsius, which is low compared to the conventional techniques, the boron atoms will migrate from the silicon to the diamond. It turns out that the boron-doped silicon has defects such as vacancies, where an atom is missing in the lattice structure. Carbon atoms from the diamond will fill those vacancies, leaving empty spots for boron atoms.
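
To see why dropping from 1450 to 800 degrees Celsius is a meaningful change of regime, it helps to remember that solid-state diffusion is exponentially activated: D = D0·exp(-Ea/kT), and the depth a dopant reaches in time t scales roughly as the square root of Dt. A toy Python sketch follows; the prefactor, activation energy, and anneal time are placeholders chosen for illustration, not values from the paper:

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def diffusion_length_um(d0_cm2_s, ea_eV, temp_C, hours):
    """Characteristic diffusion length L = 2*sqrt(D*t) for an
    Arrhenius-activated diffusivity D = D0 * exp(-Ea / (k*T))."""
    temp_K = temp_C + 273.15
    diffusivity = d0_cm2_s * math.exp(-ea_eV / (K_B * temp_K))  # cm^2/s
    length_cm = 2.0 * math.sqrt(diffusivity * hours * 3600.0)
    return length_cm * 1e4  # convert cm to micrometres

# Placeholder parameters; same anneal time at the two process temperatures
print(diffusion_length_um(1e-3, 2.0, 800, 5))   # roughly 2 micrometres
print(diffusion_length_um(1e-3, 2.0, 1450, 5))  # roughly 100 micrometres
```

The toy numbers matter only for their shape: a few hundred degrees shifts diffusion lengths by roughly two orders of magnitude, which is why a low-temperature route that still moves boron across the bonded interface is notable.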

This technique also allows for selective doping, which means more control when making devices. You can choose where to dope a single-crystal diamond simply by bonding the silicon to that spot.

The new method only works for P-type doping, where the semiconductor is doped with an element that provides positive charge carriers (in this case, the absence of electrons, called holes).
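
For a rough sense of what those positive carriers provide, the textbook estimate of conductivity is sigma = q·p·mu_p, where p is the hole concentration and mu_p the hole mobility. A toy calculation with illustrative numbers (not measurements from this study); note that it assumes every boron acceptor contributes a hole, an oversimplification for diamond, where the boron acceptor level is relatively deep:

```python
Q_E = 1.602e-19  # elementary charge, C

def p_type_conductivity(acceptor_density_cm3, hole_mobility_cm2_Vs):
    """sigma = q * p * mu_p in S/cm, assuming full acceptor
    ionization (p ~ N_A), which overstates things for diamond."""
    return Q_E * acceptor_density_cm3 * hole_mobility_cm2_Vs

# Illustrative values only
print(p_type_conductivity(1e18, 1000))  # ~160 S/cm
```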

“We feel like we found a very easy, inexpensive, and effective way to do it,” Ma said. The researchers are already working on a simple device using P-type single-crystal diamond semiconductors.

But to make electronic devices like transistors, you need N-type doping that gives the semiconductor negative charge carriers (electrons). And other barriers remain. Diamond is expensive and single crystals are very small.

Still, Ma says, achieving P-type doping is an important step, and might inspire others to find solutions for the remaining challenges. Eventually, he said, single-crystal diamond could be useful everywhere — perfect, for instance, for delivering power through the grid.

Here’s an image the researchers have released,

Optical image of a diode array on a natural single crystalline diamond plate. (The image looks blurred due to light scattering by the array of small pads on top of the diamond plate.) Inset shows the deposited anode metal on top of heavy doped Si nanomembrane that is bonded to natural single crystalline diamond. CREDIT: Jung-Hun Seo Courtesy: American Institute of Physics

Here’s a link to and a citation for the paper,

Thermal diffusion boron doping of single-crystal natural diamond by Jung-Hun Seo, Henry Wu, Solomon Mikael, Hongyi Mi, James P. Blanchard, Giri Venkataramanan, Weidong Zhou, Shaoqin Gong, Dane Morgan, and Zhenqiang Ma. J. Appl. Phys. 119, 205703 (2016); http://dx.doi.org/10.1063/1.4949327

This paper appears to be open access.