Tag Archives: University of California at Berkeley

Body-on-a-chip (10 organs)

Also known as human-on-a-chip, the 10-organ body-on-a-chip was being discussed at the 9th World Congress on Alternatives to Animal Testing in the Life Sciences in 2014 in Prague, Czech Republic (see this July 1, 2015 posting for more). At the time, scientists were predicting success at achieving their goal of 10 organs-on-a-chip in 2017 (the best at the time was four organs). Only a few months past that deadline, scientists from the Massachusetts Institute of Technology (MIT) seem to have announced a ’10-organ chip’ in a March 14, 2018 news item on ScienceDaily,

MIT engineers have developed new technology that could be used to evaluate new drugs and detect possible side effects before the drugs are tested in humans. Using a microfluidic platform that connects engineered tissues from up to 10 organs, the researchers can accurately replicate human organ interactions for weeks at a time, allowing them to measure the effects of drugs on different parts of the body.

Such a system could reveal, for example, whether a drug that is intended to treat one organ will have adverse effects on another.

A March 14, 2018 MIT news release (also on EurekAlert), which originated the news item, expands on the theme,

“Some of these effects are really hard to predict from animal models because the situations that lead to them are idiosyncratic,” says Linda Griffith, the School of Engineering Professor of Teaching Innovation, a professor of biological engineering and mechanical engineering, and one of the senior authors of the study. “With our chip, you can distribute a drug and then look for the effects on other tissues, and measure the exposure and how it is metabolized.”

These chips could also be used to evaluate antibody drugs and other immunotherapies, which are difficult to test thoroughly in animals because they are designed to interact with the human immune system.

David Trumper, an MIT professor of mechanical engineering, and Murat Cirit, a research scientist in the Department of Biological Engineering, are also senior authors of the paper, which appears in the journal Scientific Reports. The paper’s lead authors are former MIT postdocs Collin Edington and Wen Li Kelly Chen.

Modeling organs

When developing a new drug, researchers identify drug targets based on what they know about the biology of the disease, and then create compounds that affect those targets. Preclinical testing in animals can offer information about a drug’s safety and effectiveness before human testing begins, but those tests may not reveal potential side effects, Griffith says. Furthermore, drugs that work in animals often fail in human trials.

“Animals do not represent people in all the facets that you need to develop drugs and understand disease,” Griffith says. “That is becoming more and more apparent as we look across all kinds of drugs.”

Complications can also arise due to variability among individual patients, including their genetic background, environmental influences, lifestyles, and other drugs they may be taking. “A lot of the time you don’t see problems with a drug, particularly something that might be widely prescribed, until it goes on the market,” Griffith says.

As part of a project spearheaded by the Defense Advanced Research Projects Agency (DARPA), Griffith and her colleagues decided to pursue a technology that they call a “physiome on a chip,” which they believe could offer a way to model potential drug effects more accurately and rapidly. To achieve this, the researchers needed new equipment — a platform that would allow tissues to grow and interact with each other — as well as engineered tissue that would accurately mimic the functions of human organs.

Before this project was launched, no one had succeeded in connecting more than a few different tissue types on a platform. Furthermore, most researchers working on this kind of chip were working with closed microfluidic systems, which allow fluid to flow in and out but do not offer an easy way to manipulate what is happening inside the chip. These systems also require external pumps.

The MIT team decided to create an open system, which essentially removes the lid and makes it easier to manipulate the system and remove samples for analysis. Their system, adapted from technology they previously developed and commercialized through U.K.-based CN BioInnovations, also incorporates several on-board pumps that can control the flow of liquid between the “organs,” replicating the circulation of blood, immune cells, and proteins through the human body. The pumps also allow larger engineered tissues, for example tumors within an organ, to be evaluated.

Complex interactions

The researchers created several versions of their chip, linking up to 10 organ types: liver, lung, gut, endometrium, brain, heart, pancreas, kidney, skin, and skeletal muscle. Each “organ” consists of clusters of 1 million to 2 million cells. These tissues don’t replicate the entire organ, but they do perform many of its important functions. Significantly, most of the tissues come directly from patient samples rather than from cell lines that have been developed for lab use. These so-called “primary cells” are more difficult to work with but offer a more representative model of organ function, Griffith says.

Using this system, the researchers showed that they could deliver a drug to the gastrointestinal tissue, mimicking oral ingestion of a drug, and then observe as the drug was transported to other tissues and metabolized. They could measure where the drugs went, the effects of the drugs on different tissues, and how the drugs were broken down. In a related publication, the researchers modeled how drugs can cause unexpected stress on the liver by making the gastrointestinal tract “leaky,” allowing bacteria to enter the bloodstream and produce inflammation in the liver.

Kevin Healy, a professor of bioengineering and materials science and engineering at the University of California at Berkeley, says that this kind of system holds great potential for accurate prediction of complex adverse drug reactions.

“While microphysiological systems (MPS) featuring single organs can be of great use for both pharmaceutical testing and basic organ-level studies, the huge potential of MPS technology is revealed by connecting multiple organ chips in an integrated system for in vitro pharmacology. This study beautifully illustrates that multi-MPS “physiome-on-a-chip” approaches, which combine the genetic background of human cells with physiologically relevant tissue-to-media volumes, allow accurate prediction of drug pharmacokinetics and drug absorption, distribution, metabolism, and excretion,” says Healy, who was not involved in the research.

Griffith believes that the most immediate applications for this technology involve modeling two to four organs. Her lab is now developing a model system for Parkinson’s disease that includes brain, liver, and gastrointestinal tissue, which she plans to use to investigate the hypothesis that bacteria found in the gut can influence the development of Parkinson’s disease.

Other applications include modeling tumors that metastasize to other parts of the body, she says.

“An advantage of our platform is that we can scale it up or down and accommodate a lot of different configurations,” Griffith says. “I think the field is going to go through a transition where we start to get more information out of a three-organ or four-organ system, and it will start to become cost-competitive because the information you’re getting is so much more valuable.”

The research was funded by the U.S. Army Research Office and DARPA.

Caption: MIT engineers have developed new technology that could be used to evaluate new drugs and detect possible side effects before the drugs are tested in humans. Using a microfluidic platform that connects engineered tissues from up to 10 organs, the researchers can accurately replicate human organ interactions for weeks at a time, allowing them to measure the effects of drugs on different parts of the body. Credit: Felice Frankel

Here’s a link to and a citation for the paper,

Interconnected Microphysiological Systems for Quantitative Biology and Pharmacology Studies by Collin D. Edington, Wen Li Kelly Chen, Emily Geishecker, Timothy Kassis, Luis R. Soenksen, Brij M. Bhushan, Duncan Freake, Jared Kirschner, Christian Maass, Nikolaos Tsamandouras, Jorge Valdez, Christi D. Cook, Tom Parent, Stephen Snyder, Jiajie Yu, Emily Suter, Michael Shockley, Jason Velazquez, Jeremy J. Velazquez, Linda Stockdale, Julia P. Papps, Iris Lee, Nicholas Vann, Mario Gamboa, Matthew E. LaBarge, Zhe Zhong, Xin Wang, Laurie A. Boyer, Douglas A. Lauffenburger, Rebecca L. Carrier, Catherine Communal, Steven R. Tannenbaum, Cynthia L. Stokes, David J. Hughes, Gaurav Rohatgi, David L. Trumper, Murat Cirit, Linda G. Griffith. Scientific Reports, 2018; 8 (1) DOI: 10.1038/s41598-018-22749-0

This paper, which describes testing of four-, seven-, and ten-organ chips, is open access. From the paper’s Discussion,

In summary, we have demonstrated a generalizable approach to linking MPSs [microphysiological systems] within a fluidic platform to create a physiome-on-a-chip approach capable of generating complex molecular distribution profiles for advanced drug discovery applications. This adaptable, reusable system has unique and complementary advantages to existing microfluidic and PDMS-based approaches, especially for applications involving high logD substances (drugs and hormones), those requiring precise and flexible control over inter-MPS flow partitioning and drug distribution, and those requiring long-term (weeks) culture with reliable fluidic and sampling operation. We anticipate this platform can be applied to a wide range of problems in disease modeling and pre-clinical drug development, especially for tractable lower-order (2–4) interactions.

Congratulations to the researchers!

Yes! Art, genetic modifications, gene editing, and xenotransplantation at the Vancouver Biennale (Canada)

Patricia Piccinini’s Curious Imaginings Courtesy: Vancouver Biennale [downloaded from http://dailyhive.com/vancouver/vancouver-biennale-unsual-public-art-2018/]

Up to this point, I’ve been a little jealous of the Art/Sci Salon’s (Toronto, Canada) January 2018 workshops for artists and discussions about CRISPR (clustered regularly interspaced short palindromic repeats)/Cas9 and its social implications. (See my January 10, 2018 posting for more about the events.) Now, it seems Vancouver may be in line for its ‘own’ discussion about CRISPR and the implications of gene editing. The image you saw (above) represents one of the installations being hosted by the 2018 – 2020 edition of the Vancouver Biennale.

While this posting is mostly about the Biennale and Piccinini’s work, there is a ‘science’ subsection featuring the science of CRISPR and xenotransplantation. Getting back to the Biennale and Piccinini: A major public art event since 1988, the Vancouver Biennale has hosted over 91 outdoor sculptures and new media works by more than 78 participating artists from over 25 countries and from 4 continents.

Quickie description of the 2018 – 2020 Vancouver Biennale

The latest edition of the Vancouver Biennale was featured in a June 6, 2018 news item on the Daily Hive (Vancouver),

The Vancouver Biennale will be bringing new —and unusual— works of public art to the city beginning this June.

The theme for this season’s Vancouver Biennale exhibition is “re-IMAGE-n” and it kicks off on June 20 [2018] in Vanier Park with Saudi artist Ajlan Gharem’s Paradise Has Many Gates.

Gharem’s architectural chain-link sculpture resembles a traditional mosque; the piece is meant to challenge notions of religious orthodoxy and encourages individuals to imagine a space free of Islamophobia.

Melbourne artist Patricia Piccinini’s Curious Imaginings is expected to be one of the most talked about installations of the exhibit. Her style of “oddly captivating, somewhat grotesque, human-animal hybrid creature” is meant to be shocking and thought-provoking.

Piccinini’s interactive [emphasis mine] experience will “challenge us to explore the social impacts of emerging biotechnology and our ethical limits in an age where genetic engineering and digital technologies are already pushing the boundaries of humanity.”

Piccinini’s work will be displayed in the 105-year-old Patricia Hotel in Vancouver’s Strathcona neighbourhood. The 90-day ticketed exhibition [emphasis mine] is scheduled to open this September [2018].

Given that this blog is focused on nanotechnology and other emerging technologies such as CRISPR, I’m focusing on Piccinini’s work and its art/science or sci-art status. This image from the GOMA Gallery where Piccinini’s ‘Curious Affection‘ installation is being shown from March 24 – Aug. 5, 2018 in Brisbane, Queensland, Australia may give you some sense of what one of her installations is like,

Courtesy: Queensland Art Gallery | Gallery of Modern Art (QAGOMA)

I spoke with Serena at the Vancouver Biennale office and asked about the ‘interactive’ aspect of Piccinini’s installation. She suggested the term ‘immersive’ as an alternative. In other words, you won’t be playing with the sculptures or pressing buttons and interacting with computer screens or robots. She also noted that the ticket prices have not been set yet and that they are currently developing events focused on the issues raised by the installation. She knew that 2018 is the 200th anniversary of the publication of Mary Shelley’s Frankenstein, but I’m not sure how the Biennale folks plan (or don’t plan) to integrate any recognition of the novel’s impact on the discussions about ‘new’ technologies. They expect Piccinini will visit Vancouver. (Note 1: Piccinini’s work can also be seen in a group exhibition titled Frankenstein’s Birthday Party at the Hosfelt Gallery in San Francisco (California, US) from June 23 – August 11, 2018. Note 2: I featured a number of international events commemorating the 200th anniversary of the publication of Mary Shelley’s novel, Frankenstein, in my Feb. 26, 2018 posting. Note 3: The term ‘Frankenfoods’ helped to shape the discussion of genetically modified organisms and the food supply on this planet. It was a wildly successful campaign for activists, affecting legislation in some areas of research. Scientists have not been as enthusiastic about the effects. My January 15, 2009 posting briefly traces a history of the term.)

The 2018 – 2020 Vancouver Biennale and science

A June 7, 2018 Vancouver Biennale news release provides more detail about the current series of exhibitions,

The Biennale is also committed to presenting artwork at the cutting edge of discussion and in keeping with the STEAM (science, technology, engineering, arts, math[ematics]) approach to integrating the arts and sciences. In August [2018], Colombian/American visual artist Jessica Angel will present her monumental installation Dogethereum Bridge at Hinge Park in Olympic Village. Inspired by blockchain technology, the artwork’s design was created through the integration of scientific algorithms, new developments in technology, and the arts. This installation, which will serve as an immersive space and collaborative hub for artists and technologists, will host a series of activations with blockchain as the inspirational jumping-off point.

In what is expected to become one of North America’s most talked-about exhibitions of the year, Melbourne artist Patricia Piccinini’s Curious Imaginings will see the intersection of art, science, and ethics. For the first time in the Biennale’s fifteen years of creating transformative experiences, and in keeping with the 2018-2020 theme of “re-IMAGE-n,” the Biennale will explore art in unexpected places by exhibiting in unconventional interior spaces. The hyperrealist “world of oddly captivating, somewhat grotesque, human-animal hybrid creatures” will be the artist’s first exhibit in a non-museum setting, transforming a wing of the 105-year-old Patricia Hotel. Situated in Vancouver’s oldest neighbourhood of Strathcona, Piccinini’s interactive experience will “challenge us to explore the social impacts of emerging bio-technology and our ethical limits in an age where genetic engineering and digital technologies are already pushing the boundaries of humanity.” In this intimate hotel setting located in a neighbourhood continually undergoing its own change, Curious Imaginings will empower visitors to personally consider questions posed by the exhibition, including the promises and consequences of genetic research and human interference. …

There are other pieces being presented at the Biennale but my special interest is in the art/sci pieces and, at this point, CRISPR.

Piccinini in more depth

You can find out more about Patricia Piccinini in her biography on the Vancouver Biennale website but I found this Char Larsson April 7, 2018 article for the Independent (UK) more informative (Note: A link has been removed),

Patricia Piccinini’s sculptures are deeply disquieting. Walking through Curious Affection, her new solo exhibition at Brisbane’s Gallery of Modern Art, is akin to entering a science laboratory full of DNA experiments. Made from silicone, fibreglass and even human hair, her sculptures are breathtakingly lifelike; however, we can’t be sure what life they are like. The artist creates an exuberant parallel universe where transgenic experiments flourish and human evolution has given way to genetic engineering and DNA splicing.

Curious Affection is a timely and welcome recognition of Piccinini’s enormous contribution, reaching back to the mid-1990s. Working across a variety of mediums including photography, video and drawing, she is perhaps best known for her hyperreal creations.

As a genre, hyperrealism depends on the skill of the artist to create the illusion of reality. To be truly successful, it must convince the spectator of its realness. Piccinini acknowledges this demand, but with a delightful twist. The excruciating attention to detail deliberately solicits our desire to look, only to generate unease, as her sculptures are imbued with a fascinating otherness. Part human, part animal, the works are uncannily familiar, but also alarmingly “other”.

Inspired by advances in genetically modified pigs to generate replacement organs for humans [also known as xenotransplantation], we are reminded that Piccinini has always been at the forefront of debates concerning the possibilities of science, technology and DNA cloning. She does so, however, with a warm affection and sense of humour, eschewing the hysterical anxiety frequently accompanying these scientific developments.

Beyond the astonishing level of detail achieved by working with silicone and fibreglass, there is an ethics at work here. Piccinini is asking us not to avert our gaze from the other, and in doing so, to develop empathy and understanding through the encounter.

I encourage anyone who’s interested to read Larsson’s entire piece (April 7, 2018 article).

According to her Wikipedia entry, Piccinini works in a variety of media including video, sound, sculpture, and more. She also has her own website.

Gene editing and xenotransplantation

Sarah Zhang’s June 8, 2018 article for The Atlantic provides a peek at the extraordinary degree of interest and competition in the field of gene editing and CRISPR (clustered regularly interspaced short palindromic repeats)/Cas9 research (Note: A link has been removed),

China Is Genetically Engineering Monkeys With Brain Disorders

Guoping Feng applied to college the first year that Chinese universities reopened after the Cultural Revolution. It was 1977, and more than a decade’s worth of students—5.7 million—sat for the entrance exams. Feng was the only one in his high school to get in. He was assigned—by chance, essentially—to medical school. Like most of his contemporaries with scientific ambitions, he soon set his sights on graduate studies in the United States. “China was really like 30 to 50 years behind,” he says. “There was no way to do cutting-edge research.” So in 1989, he left for Buffalo, New York, where for the first time he saw snow piled several feet high. He completed his Ph.D. in genetics at the State University of New York at Buffalo.

Feng is short and slim, with a monk-like placidity and a quick smile, and he now holds an endowed chair in neuroscience at MIT, where he focuses on the genetics of brain disorders. His 45-person lab is part of the McGovern Institute for Brain Research, which was established in 2000 with the promise of a $350 million donation, the largest ever received by the university. In short, his lab does not lack for much.

Yet Feng now travels to China several times a year, because there, he can pursue research he has not yet been able to carry out in the United States. [emphasis mine] …

Feng had organized a symposium at SIAT [Shenzhen Institutes of Advanced Technology], and he was not the only scientist who traveled all the way from the United States to attend: He invited several colleagues as symposium speakers, including a fellow MIT neuroscientist interested in tree shrews, a tiny mammal related to primates and native to southern China, and Chinese-born neuroscientists who study addiction at the University of Pittsburgh and SUNY Upstate Medical University. Like Feng, they had left China in the ’80s and ’90s, part of a wave of young scientists in search of better opportunities abroad. Also like Feng, they were back in China to pursue a type of cutting-edge research too expensive and too impractical—and maybe too ethically sensitive—in the United States.

Here’s what precipitated Feng’s work in China, (from Zhang’s article; Note: Links have been removed)

At MIT, Feng’s lab worked on genetically engineering a monkey species called marmosets, which are very small and genuinely bizarre-looking. They are cheaper to keep due to their size, but they are a relatively new lab animal, and they can be difficult to train on lab tasks. For this reason, Feng also wanted to study Shank3 on macaques in China. Scientists have been cataloging the social behavior of macaques for decades, making it an obvious model for studies of disorders like autism that have a strong social component. Macaques are also more closely related to humans than marmosets, making their brains a better stand-in for those of humans.

The process of genetically engineering a macaque is not trivial, even with the advanced tools of CRISPR. Researchers begin by dosing female monkeys with the same hormones used in human in vitro fertilization. They then collect and fertilize the eggs, and inject the resulting embryos with CRISPR proteins using a long, thin glass needle. Monkey embryos are far more sensitive than mice embryos, and can be affected by small changes in the pH of the injection or the concentration of CRISPR proteins. Only some of the embryos will have the desired mutation, and only some will survive once implanted in surrogate mothers. It takes dozens of eggs to get to just one live monkey, so making even a few knockout monkeys required the support of a large breeding colony.

The first Shank3 macaque was born in 2015. Four more soon followed, bringing the total to five.

To visit his research animals, Feng now has to fly 8,000 miles across 12 time zones. It would be a lot more convenient to carry out his macaque research in the United States, of course, but so far, he has not been able to.

He originally inquired about making Shank3 macaques at the New England Primate Research Center, one of eight national primate research centers then funded by the National Institutes of Health in partnership with a local institution (Harvard Medical School, in this case). The center was conveniently located in Southborough, Massachusetts, just 20 miles west of the MIT campus. But in 2013, Harvard decided to shutter the center.

The decision came as a shock to the research community, and it was widely interpreted as a sign of waning interest in primate research in the United States. While the national primate centers have been important hubs of research on HIV, Zika, Ebola, and other diseases, they have also come under intense public scrutiny. Animal-rights groups like the Humane Society of the United States have sent investigators to work undercover in the labs, and the media has reported on monkey deaths in grisly detail. Harvard officially made its decision to close for “financial” reasons. But the announcement also came after the high-profile deaths of four monkeys from improper handling between 2010 and 2012. The deaths sparked a backlash; demonstrators showed up at the gates. The university gave itself two years to wind down their primate work, officially closing the center in 2015.

“They screwed themselves,” Michael Halassa, the MIT neuroscientist who spoke at Feng’s symposium, told me in Shenzhen. Wei-Dong Yao, another one of the speakers, chimed in, noting that just two years later CRISPR has created a new wave of interest in primate research. Yao was one of the researchers at Harvard’s primate center before it closed; he now runs a lab at SUNY Upstate Medical University that uses genetically engineered mouse and human stem cells, and he had come to Shenzhen to talk about restarting his addiction research on primates.

Here comes the competition (from Zhang’s article; Note: Links have been removed),

While the U.S. government’s biomedical research budget has been largely flat, both national and local governments in China are eager to raise their international scientific profiles, and they are shoveling money into research. A long-rumored, government-sponsored China Brain Project is supposed to give neuroscience research, and primate models in particular, a big funding boost. Chinese scientists may command larger salaries, too: Thanks to funding from the Shenzhen local government, a new principal investigator returning from overseas can get 3 million yuan—almost half a million U.S. dollars—over his or her first five years. China is even finding success in attracting foreign researchers from top U.S. institutions like Yale.

In the past few years, China has seen a miniature explosion of genetic engineering in monkeys. In Kunming, Shanghai, and Guangzhou, scientists have created monkeys engineered to show signs of Parkinson’s, Duchenne muscular dystrophy, autism, and more. And Feng’s group is not even the only one in China to have created Shank3 monkeys. Another group—a collaboration primarily between researchers at Emory University and scientists in China—has done the same.

Chinese scientists’ enthusiasm for CRISPR also extends to studies of humans, which are moving much more quickly, and in some cases under less oversight, than in the West. The first studies to edit human embryos and first clinical trials for cancer therapies using CRISPR have all happened in China. [emphases mine]

Some ethical issues are also covered (from Zhang’s article),

Parents with severely epileptic children had asked him if it would be possible to study the condition in a monkey. Feng told them what he thought would be technically possible. “But I also said, ‘I’m not sure I want to generate a model like this,’” he recalled. Maybe if there were a drug to control the monkeys’ seizures, he said: “I cannot see them seizure all the time.”

But is it ethical, he continued, to let these babies die without doing anything? Is it ethical to generate thousands or millions of mutant mice for studies of brain disorders, even when you know they will not elucidate much about human conditions?

Primates should only be used if other models do not work, says Feng, and only if a clear path forward is identified. The first step in his work, he says, is to use the Shank3 monkeys to identify the changes the mutations cause in the brain. Then, researchers might use that information to find targets for drugs, which could be tested in the same monkeys. He’s talking with the Oregon National Primate Research Center about carrying out similar work in the United States. … [Note: I have a three-part series about CRISPR and germline editing* in the US, precipitated by research coming out of Oregon; Part 1, which links to the other parts, is here.]

Zhang’s June 8, 2018 article is excellent and I highly recommend reading it.

I touched on the topic of xenotransplantation in a commentary on a book about the science of the television series, Orphan Black, in a January 31, 2018 posting (Note: A chimera is what you use to incubate a ‘human’ organ for transplantation or, more accurately, xenotransplantation),

On the subject of chimeras, the Canadian Broadcasting Corporation (CBC) featured a January 26, 2017 article about the pig-human chimeras on its website along with a video.

The end

I am very excited to see Piccinini’s work come to Vancouver. There have been a number of wonderful art and art/science installations and discussions here but this is the first one (I believe) to tackle the emerging gene editing technologies and the issues they raise. (It also fits in rather nicely with the 200th anniversary of the publication of Mary Shelley’s Frankenstein which continues to raise issues and stimulate discussion.)

In addition to the ethical issues raised in Zhang’s article, there are some other philosophical questions:

  • what does it mean to be human?
  • if we are going to edit genes to create human/animal hybrids, what are they, and how do they fit into our current animal/human schema?
  • are you still human if you’ve had an organ transplant where the organ was incubated in a pig?

There are also going to be legal issues. In addition to any questions about legal status, there are fights about intellectual property, such as the one involving Harvard and MIT’s Broad Institute vs. the University of California at Berkeley (March 15, 2017 posting).

While I’m thrilled about the Piccinini installation, it should be noted that the issues raised by other artworks hosted in this edition of the Biennale are important. Happily, they have been broached here in Vancouver before, and I suspect this will result in more nuanced ‘conversations’ than are possible when a ‘new’ issue is introduced.

Bravo 2018 – 2020 Vancouver Biennale!

* Germline editing is gene editing that affects subsequent generations, as opposed to editing out a mutated gene for the lifetime of a single individual.

Art/sci and CRISPR links

This art/science posting may prove of some interest:

The connectedness of living things: an art/sci project in Saskatchewan: evolutionary biology (February 16, 2018)

A selection of my CRISPR posts:

CRISPR and editing the germline in the US (part 1 of 3): In the beginning (August 15, 2017)

NOTE: An introductory CRISPR video describing how CRISPR/Cas9 works was embedded in part 1.

Why don’t you CRISPR yourself? (January 25, 2018)

Editing the genome with CRISPR (clustered regularly interspaced short palindromic repeats)-carrying nanoparticles (January 26, 2018)

Immune to CRISPR? (April 10, 2018)

CRISPR-Cas12a as a new diagnostic tool

Similar to Cas9, Cas12a has an added feature, as noted in this February 15, 2018 news item on ScienceDaily,

Utilizing an unsuspected activity of the CRISPR-Cas12a protein, researchers created a simple diagnostic system called DETECTR to analyze cells, blood, saliva, urine and stool to detect genetic mutations, cancer and antibiotic resistance and also diagnose bacterial and viral infections. The scientists discovered that when Cas12a binds its double-stranded DNA target, it indiscriminately chews up all single-stranded DNA. They then created reporter molecules attached to single-stranded DNA to signal when Cas12a finds its target.

A February 15, 2018 University of California at Berkeley (UC Berkeley) news release by Robert Sanders, which originated the news item, provides more detail and history,

CRISPR-Cas12a, one of the DNA-cutting proteins revolutionizing biology today, has an unexpected side effect that makes it an ideal enzyme for simple, rapid and accurate disease diagnostics.

[Image: blood in a test tube (iStock)]

Cas12a, discovered in 2015 and originally called Cpf1, is like the well-known Cas9 protein that UC Berkeley’s Jennifer Doudna and colleague Emmanuelle Charpentier turned into a powerful gene-editing tool in 2012.

CRISPR-Cas9 has supercharged biological research in a mere six years, speeding up exploration of the causes of disease and sparking many potential new therapies. Cas12a was a major addition to the gene-cutting toolbox, able to cut double-stranded DNA at places that Cas9 can’t, and, because it leaves ragged edges, perhaps easier to use when inserting a new gene at the DNA cut.

But co-first authors Janice Chen, Enbo Ma and Lucas Harrington in Doudna’s lab discovered that when Cas12a binds and cuts a targeted double-stranded DNA sequence, it unexpectedly unleashes indiscriminate cutting of all single-stranded DNA in a test tube.

Most of the DNA in a cell is in the form of a double-stranded helix, so this is not necessarily a problem for gene-editing applications. But it does allow researchers to use a single-stranded “reporter” molecule with the CRISPR-Cas12a protein, which produces an unambiguous fluorescent signal when Cas12a has found its target.

“We continue to be fascinated by the functions of bacterial CRISPR systems and how mechanistic understanding leads to opportunities for new technologies,” said Doudna, a professor of molecular and cell biology and of chemistry and a Howard Hughes Medical Institute investigator.

DETECTR diagnostics

The new DETECTR system based on CRISPR-Cas12a can analyze cells, blood, saliva, urine and stool to detect genetic mutations, cancer and antibiotic resistance as well as diagnose bacterial and viral infections. Target DNA is amplified by recombinase polymerase amplification (RPA) to make it easier for Cas12a to find it and bind, unleashing indiscriminate cutting of single-stranded DNA, including DNA attached to a fluorescent marker (gold star) that tells researchers that Cas12a has found its target.

The UC Berkeley researchers, along with their colleagues at UC San Francisco, will publish their findings Feb. 15 [2018] via the journal Science’s fast-track service, First Release.

The researchers developed a diagnostic system they dubbed the DNA Endonuclease Targeted CRISPR Trans Reporter, or DETECTR, for quick and easy point-of-care detection of even small amounts of DNA in clinical samples. It involves adding all reagents in a single reaction: CRISPR-Cas12a and its RNA targeting sequence (guide RNA), fluorescent reporter molecule and an isothermal amplification system called recombinase polymerase amplification (RPA), which is similar to polymerase chain reaction (PCR). When warmed to body temperature, RPA rapidly multiplies the number of copies of the target DNA, boosting the chances Cas12a will find one of them, bind and unleash single-strand DNA cutting, resulting in a fluorescent readout.
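The single-tube logic described above can be sketched as a toy simulation. This is purely illustrative: the function names, the example sequences, and the cycle count are invented for the sketch and do not come from the paper.

```python
# Toy model of the DETECTR workflow (illustrative only; all names and
# numbers here are invented, not taken from the Science paper).

def rpa_amplify(copies: int, cycles: int = 20) -> int:
    """Isothermal RPA roughly doubles the target count each cycle."""
    return copies * 2 ** cycles

def detectr_fluorescent(sample_dna: set, guide_target: str) -> bool:
    """Return True if the reaction lights up.

    When Cas12a (directed by its guide RNA) binds its double-stranded
    DNA target, it indiscriminately cuts single-stranded DNA, including
    the ssDNA reporter, separating fluorophore from quencher.
    """
    if guide_target not in sample_dna:
        return False                # no target bound: reporter stays intact
    copies = rpa_amplify(1)         # amplification boosts the odds of binding
    return copies > 0               # target found: reporter cleaved, signal on

sample = {"HPV16_fragment", "human_background_DNA"}
print(detectr_fluorescent(sample, "HPV16_fragment"))  # True  (fluorescent)
print(detectr_fluorescent(sample, "HPV18_fragment"))  # False (dark)
```

The point of the sketch is the single-reaction design: amplification, target recognition and reporter cleavage all happen in one tube, so a positive sample reads out directly as fluorescence.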

The UC Berkeley researchers tested this strategy using patient samples containing human papilloma virus (HPV), in collaboration with Joel Palefsky’s lab at UC San Francisco. Using DETECTR, they were able to demonstrate accurate detection of the “high-risk” HPV types 16 and 18 in samples infected with many different HPV types.

“This protein works as a robust tool to detect DNA from a variety of sources,” Chen said. “We want to push the limits of the technology, which is potentially applicable in any point-of-care diagnostic situation where there is a DNA component, including cancer and infectious disease.”

The indiscriminate cutting of all single-stranded DNA, which the researchers discovered holds true for all related Cas12 molecules but not Cas9, may have unwanted effects in genome-editing applications, though more research is needed on this topic, Chen said. During the transcription of genes, for example, the cell briefly creates single strands of DNA that could accidentally be cut by Cas12a.

The activity of the Cas12 proteins is similar to that of another family of CRISPR enzymes, Cas13a, which chew up RNA after binding to a target RNA sequence. Various teams, including Doudna’s, are developing diagnostic tests using Cas13a that could, for example, detect the RNA genome of HIV.

[Infographic: the DETECTR system (Howard Hughes Medical Institute)]

These new tools have been repurposed from their original role in microbes where they serve as adaptive immune systems to fend off viral infections. In these bacteria, Cas proteins store records of past infections and use these “memories” to identify harmful DNA during infections. Cas12a, the protein used in this study, then cuts the invading DNA, saving the bacteria from being taken over by the virus.

The chance discovery of Cas12a’s unusual behavior highlights the importance of basic research, Chen said, since it came from a basic curiosity about the mechanism Cas12a uses to cleave double-stranded DNA.

“It’s cool that, by going after the question of the cleavage mechanism of this protein, we uncovered what we think is a very powerful technology useful in an array of applications,” Chen said.

Here’s a link to and a citation for the paper,

CRISPR-Cas12a target binding unleashes indiscriminate single-stranded DNase activity by Janice S. Chen, Enbo Ma, Lucas B. Harrington, Maria Da Costa, Xinran Tian, Joel M. Palefsky, Jennifer A. Doudna. Science 15 Feb 2018: eaar6245 DOI: 10.1126/science.aar6245

This paper is behind a paywall.

AI (artificial intelligence) for Good Global Summit from May 15 – 17, 2018 in Geneva, Switzerland: details and an interview with Frederic Werner

With all the talk about artificial intelligence (AI), a lot more attention seems to be paid to apocalyptic scenarios: loss of jobs, financial hardship, loss of personal agency and privacy, and more, with all of these impacts described as global. Still, there are some folks who are considering and working on ‘AI for good’.

If you’d asked me, the International Telecommunication Union (ITU) would not have been my first guess (my choice would have been the United Nations Educational, Scientific and Cultural Organization [UNESCO]) as the agency likely to host the 2018 AI for Good Global Summit. But it turns out the ITU is a UN (United Nations) agency and, according to its Wikipedia entry, an intergovernmental public-private partnership, which may explain the nature of the participants in the upcoming summit.

The news

First, there’s a May 4, 2018 ITU media advisory (received via email or you can find the full media advisory here) about the upcoming summit,

Artificial Intelligence (AI) is now widely identified as being able to address the greatest challenges facing humanity – supporting innovation in fields ranging from crisis management and healthcare to smart cities and communications networking.

The second annual ‘AI for Good Global Summit’ will take place 15-17 May [2018] in Geneva, and seeks to leverage AI to accelerate progress towards the United Nations’ Sustainable Development Goals and ultimately benefit humanity.

WHAT: Global event to advance ‘AI for Good’ with the participation of internationally recognized AI experts. The programme will include interactive high-level panels, while ‘AI Breakthrough Teams’ will propose AI strategies able to create impact in the near term, guided by an expert audience of mentors representing government, industry, academia and civil society – through interactive sessions. The summit will connect AI innovators with public and private-sector decision-makers, building collaboration to take promising strategies forward.

A special demo & exhibit track will feature innovative applications of AI designed to: protect women from sexual violence, avoid infant crib deaths, end child abuse, predict oral cancer, and improve mental health treatments for depression – as well as interactive robots including: Alice, a Dutch invention designed to support the aged; iCub, an open-source robot; and Sophia, the humanoid AI robot.

WHEN: 15-17 May 2018, beginning daily at 9 AM

WHERE: ITU Headquarters, 2 Rue de Varembé, Geneva, Switzerland (Please note: entrance to ITU is now limited for all visitors to the Montbrillant building entrance only on rue Varembé).

WHO: Confirmed participants to date include expert representatives from: Association for Computing Machinery, Bill and Melinda Gates Foundation, Cambridge University, Carnegie Mellon, Chan Zuckerberg Initiative, Consumer Technology Association, Facebook, Fraunhofer, Google, Harvard University, IBM Watson, IEEE, Intellectual Ventures, ITU, Microsoft, Massachusetts Institute of Technology (MIT), Partnership on AI, Planet Labs, Shenzhen Open Innovation Lab, University of California at Berkeley, University of Tokyo, XPRIZE Foundation, Yale University – and the participation of “Sophia” the humanoid robot and “iCub” the EU open-source robot.

The interview

Frederic Werner, Senior Communications Officer at the International Telecommunication Union and** one of the organizers of the AI for Good Global Summit 2018 kindly took the time to speak to me and provide a few more details about the upcoming event.

Werner noted that the 2018 event grew out of a much smaller 2017 ‘workshop’, the first of its kind on beneficial AI, and has ballooned this year to 91 countries (about 15 participants are expected from Canada), 32 UN agencies, and substantive representation from the private sector. Dr. Yoshua Bengio of the University of Montreal (Université de Montréal) was a featured speaker at the 2017 event.

“This year, we’re focused on action-oriented projects that will help us reach our Sustainable Development Goals (SDGs) by 2030. We’re looking at near-term practical AI applications,” says Werner. “We’re matchmaking problem-owners and solution-owners.”

Academics, industry professionals, government officials, and representatives from UN agencies are gathering to work on four tracks/themes:

In advance of this meeting, the group launched an AI repository (an action item from the 2017 meeting) on April 25, 2018, inviting people to list their AI projects (from the ITU’s April 25, 2018 AI repository news announcement),

ITU has just launched an AI Repository where anyone working in the field of artificial intelligence (AI) can contribute key information about how to leverage AI to help solve humanity’s greatest challenges.

This is the only global repository that identifies AI-related projects, research initiatives, think-tanks and organizations that aim to accelerate progress on the 17 United Nations’ Sustainable Development Goals (SDGs).

To submit a project, just press ‘Submit’ on the AI Repository site and fill in the online questionnaire, providing all relevant details of your project. You will also be asked to map your project to the relevant World Summit on the Information Society (WSIS) action lines and the SDGs. Approved projects will be officially registered in the repository database.

Benefits of participation in the AI Repository include:

WSIS Prizes recognize individuals, governments, civil society, local, regional and international agencies, research institutions and private-sector companies for outstanding success in implementing development oriented strategies that leverage the power of AI and ICTs.

Creating the AI Repository was one of the action items of last year’s AI for Good Global Summit.

We are looking forward to your submissions.

If you have any questions, please send an email to: ai@itu.int

“Your project won’t be visible immediately as we have to vet the submissions to weed out spam-type material and projects that are not in line with our goals,” says Werner. That said, there are already 29 projects in the repository. As you might expect, the UK, China, and US are in the repository but also represented are Egypt, Uganda, Belarus, Serbia, Peru, Italy, and other countries not commonly cited when discussing AI research.

Werner also pointed out, in response to my surprise over the ITU’s role in this AI initiative, that the ITU is the only UN agency with 192* member states (countries), 150 universities, and over 700 industry members, as well as other member entities, which gives it tremendous breadth of reach. As well, the organization, founded in 1865 as the International Telegraph Union, has extensive experience with global standardization in the information technology and telecommunications industries. (See more in its Wikipedia entry.)

Finally

There is a bit more about the summit on the ITU’s AI for Good Global Summit 2018 webpage,

The 2nd edition of the AI for Good Global Summit will be organized by ITU in Geneva on 15-17 May 2018, in partnership with XPRIZE Foundation, the global leader in incentivized prize competitions, the Association for Computing Machinery (ACM) and sister United Nations agencies including UNESCO, UNICEF, UNCTAD, UNIDO, Global Pulse, UNICRI, UNODA, UNIDIR, UNODC, WFP, IFAD, UNAIDS, WIPO, ILO, UNITAR, UNOPS, OHCHR, UN University, WHO, UNEP, ICAO, UNDP, The World Bank, UN DESA, CTBTO, UNISDR, UNOG, UNOOSA, UNFPA, UNECE, UNDPA, and UNHCR.

The AI for Good series is the leading United Nations platform for dialogue on AI. The action​​-oriented 2018 summit will identify practical applications of AI and supporting strategies to improve the quality and sustainability of life on our planet. The summit will continue to formulate strategies to ensure trusted, safe and inclusive development of AI technologies and equitable access to their benefits.

While the 2017 summit sparked the first-ever inclusive global dialogue on beneficial AI, the action-oriented 2018 summit will focus on impactful AI solutions able to yield long-term benefits and help achieve the Sustainable Development Goals. ‘Breakthrough teams’ will demonstrate the potential of AI to map poverty and aid natural-disaster response using satellite imagery, show how AI could assist the delivery of citizen-centric services in smart cities, explore new opportunities for AI to help achieve Universal Health Coverage, and, finally, help achieve transparency and explainability in AI algorithms.

Teams will propose impactful AI strategies able to be enacted in the near term, guided by an expert audience of mentors representing government, industry, academia and civil society. Strategies will be evaluated by the mentors according to their feasibility and scalability, potential to address truly global challenges, degree of supporting advocacy, and applicability to market failures beyond the scope of government and industry. The exercise will connect AI innovators with public and private-sector decision-makers, building collaboration to take promising strategies forward.

“As the UN specialized agency for information and communication technologies, ITU is well placed to guide AI innovation towards the achievement of the UN Sustainable Development Goals. We are providing a neutral platform for international dialogue aimed at building a common understanding of the capabilities of emerging AI technologies.” – Houlin Zhao, Secretary-General of ITU

Should you be close to Geneva, it seems that registration is still open. Just go to the ITU’s AI for Good Global Summit 2018 webpage, scroll the page down to ‘Documentation’ and you will find a link to the invitation and a link to online registration. Participation is free but I expect that you are responsible for your travel and accommodation costs.

For anyone unable to attend in person, the summit will be livestreamed (webcast in real time) and you can watch the sessions by following the link below,

https://www.itu.int/en/ITU-T/AI/2018/Pages/webcast.aspx

For those of us on the West Coast of Canada and in other parts distant from Geneva, you will want to take the nine-hour difference between Geneva (Switzerland) and here into account when viewing the proceedings. If you can’t manage the time difference, the sessions are being recorded and will be posted at a later date.
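A quick check with Python’s standard-library zoneinfo module (available from Python 3.9 on) shows what a 9 AM Geneva start means in Pacific time; the date and start time come from the media advisory.

```python
# Convert the summit's 9 AM Geneva start to Pacific time using the
# standard-library zoneinfo module (Python 3.9+).
from datetime import datetime
from zoneinfo import ZoneInfo

geneva_start = datetime(2018, 5, 15, 9, 0, tzinfo=ZoneInfo("Europe/Zurich"))
pacific_start = geneva_start.astimezone(ZoneInfo("America/Vancouver"))
print(pacific_start)  # 2018-05-15 00:00:00-07:00, i.e. midnight PDT
```

In other words, a daily 9 AM start in Geneva means tuning in at midnight on the West Coast.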

*’132 member states’ corrected to ‘192 member states’ on May 11, 2018 at 1500 hours PDT.

*Redundant ‘and’ removed on July 19, 2018.

CRISPR-Cas9 and gold

As so often happens in the sciences, now that the initial euphoria has expended itself, problems (and solutions) with CRISPR (clustered regularly interspaced short palindromic repeats)-Cas9 are being disclosed to those of us who are not experts. From an Oct. 3, 2017 article by Bob Yirka for phys.org,

A team of researchers from the University of California and the University of Tokyo has found a way to use the CRISPR gene editing technique that does not rely on a virus for delivery. In their paper published in the journal Nature Biomedical Engineering, the group describes the new technique, how well it works and improvements that need to be made to make it a viable gene editing tool.

CRISPR-Cas9 has been in the news a lot lately because it allows researchers to directly edit genes—either disabling unwanted parts or replacing them altogether. But despite many success stories, the technique still suffers from a major deficit that prevents it from being used as a true medical tool—it sometimes makes mistakes. Those mistakes can cause small or big problems for a host depending on what goes wrong. Prior research has suggested that the majority of mistakes are due to delivery problems, which means that a replacement for the virus part of the technique is required. In this new effort, the researchers report that they have discovered just such a replacement, and it worked so well that it was able to repair a gene mutation in a Duchenne muscular dystrophy mouse model. The team has named the new technique CRISPR-Gold, because a gold nanoparticle was used to deliver the gene editing molecules instead of a virus.

An Oct. 2, 2017 article by Abby Olena for The Scientist lays out the CRISPR-CAS9 problems the scientists are trying to solve (Note: Links have been removed),

While promising, applications of CRISPR-Cas9 gene editing have so far been limited by the challenges of delivery—namely, how to get all the CRISPR parts to every cell that needs them. In a study published today (October 2) in Nature Biomedical Engineering, researchers have successfully repaired a mutation in the gene for dystrophin in a mouse model of Duchenne muscular dystrophy by injecting a vehicle they call CRISPR-Gold, which contains the Cas9 protein, guide RNA, and donor DNA, all wrapped around a tiny gold ball.

The authors have made “great progress in the gene editing area,” says Tufts University biomedical engineer Qiaobing Xu, who did not participate in the work but penned an accompanying commentary. Because their approach is nonviral, Xu explains, it will minimize the potential off-target effects that result from constant Cas9 activity, which occurs when users deliver the Cas9 template with a viral vector.

Duchenne muscular dystrophy is a degenerative disease of the muscles caused by a lack of the protein dystrophin. In about a third of patients, the gene for dystrophin has small deletions or single base mutations that render it nonfunctional, which makes this gene an excellent candidate for gene editing. Researchers have previously used viral delivery of CRISPR-Cas9 components to delete the mutated exon and achieve clinical improvements in mouse models of the disease.

“In this paper, we were actually able to correct [the gene for] dystrophin back to the wild-type sequence” via homology-directed repair (HDR), coauthor Niren Murthy, a drug delivery researcher at the University of California, Berkeley, tells The Scientist. “The other way of treating this is to do something called exon skipping, which is where you delete some of the exons and you can get dystrophin to be produced, but it’s not [as functional as] the wild-type protein.”

The research team created CRISPR-Gold by covering a central gold nanoparticle with DNA that they modified so it would stick to the particle. This gold-conjugated DNA bound the donor DNA needed for HDR, which the Cas9 protein and guide RNA bound to in turn. They coated the entire complex with a polymer that seems to trigger endocytosis and then facilitate escape of the Cas9 protein, guide RNA, and template DNA from endosomes within cells.

In order to do HDR, “you have to provide the cell [with] the Cas9 enzyme, guide RNA by which you target Cas9 to a particular part of the genome, and a big chunk of DNA, which will be used as a template to edit the mutant sequence to wild-type,” explains coauthor Irina Conboy, who studies tissue repair at the University of California, Berkeley. “They all have to be present at the same time and at the same place, so in our system you have a nanoparticle which simultaneously delivers all of those three key components in their active state.”
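The layered assembly Conboy describes can be summarized, from the inside out, as a simple data structure. This is an illustration of the ordering only: the class and field names are hypothetical, not code from the study.

```python
# Illustrative sketch of the CRISPR-Gold layers, core to surface
# (names are hypothetical, not from the Nature Biomedical Engineering paper).
from dataclasses import dataclass, field

@dataclass
class CrisprGold:
    """CRISPR-Gold's layered assembly, listed from core to surface."""
    core: str = "gold nanoparticle"
    layers: list = field(default_factory=lambda: [
        "thiol-modified DNA conjugated to the gold core",
        "single-stranded donor DNA (HDR template) bound to the gold-conjugated DNA",
        "Cas9 protein + guide RNA complexed with the donor DNA",
        "polymer coat that triggers endocytosis and endosomal escape",
    ])

    def assembly_order(self):
        return [self.core, *self.layers]

for step in CrisprGold().assembly_order():
    print(step)
```

The design point the structure captures is that all three HDR ingredients (Cas9, guide RNA, donor DNA) ride on one particle, so they arrive in the cell at the same time and place.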

Olena’s article carries on to describe how the team created CRISPR-Gold and more.

Additional technical details are available in an Oct. 3, 2017 University of California at Berkeley news release by Brett Israel (also on EurekAlert), which originated the news item (Note: A link has been removed),

Scientists at the University of California, Berkeley, have engineered a new way to deliver CRISPR-Cas9 gene-editing technology inside cells and have demonstrated in mice that the technology can repair the mutation that causes Duchenne muscular dystrophy, a severe muscle-wasting disease. A new study shows that a single injection of CRISPR-Gold, as the new delivery system is called, into mice with Duchenne muscular dystrophy led to an 18-times-higher correction rate and a two-fold increase in a strength and agility test compared to control groups.

CRISPR–Gold is composed of 15 nanometer gold nanoparticles that are conjugated to thiol-modified oligonucleotides (DNA-Thiol), which are hybridized with single-stranded donor DNA and subsequently complexed with Cas9 and encapsulated by a polymer that disrupts the endosome of the cell.

Since 2012, when study co-author Jennifer Doudna, a professor of molecular and cell biology and of chemistry at UC Berkeley, and colleague Emmanuelle Charpentier, of the Max Planck Institute for Infection Biology, repurposed the Cas9 protein to create a cheap, precise and easy-to-use gene editor, researchers have hoped that therapies based on CRISPR-Cas9 would one day revolutionize the treatment of genetic diseases. Yet developing treatments for genetic diseases remains a big challenge in medicine. This is because most genetic diseases can be cured only if the disease-causing gene mutation is corrected back to the normal sequence, and this is impossible to do with conventional therapeutics.

CRISPR/Cas9, however, can correct gene mutations by cutting the mutated DNA and triggering homology-directed DNA repair. However, strategies for safely delivering the necessary components (Cas9, guide RNA that directs Cas9 to a specific gene, and donor DNA) into cells need to be developed before the potential of CRISPR-Cas9-based therapeutics can be realized. A common technique to deliver CRISPR-Cas9 into cells employs viruses, but that technique has a number of complications. CRISPR-Gold does not need viruses.

In the new study, research lead by the laboratories of Berkeley bioengineering professors Niren Murthy and Irina Conboy demonstrated that their novel approach, called CRISPR-Gold because gold nanoparticles are a key component, can deliver Cas9 – the protein that binds and cuts DNA – along with guide RNA and donor DNA into the cells of a living organism to fix a gene mutation.

“CRISPR-Gold is the first example of a delivery vehicle that can deliver all of the CRISPR components needed to correct gene mutations, without the use of viruses,” Murthy said.

The study was published October 2 [2017] in the journal Nature Biomedical Engineering.

CRISPR-Gold repairs DNA mutations through a process called homology-directed repair. Scientists have struggled to develop homology-directed repair-based therapeutics because they require activity at the same place and time as Cas9 protein, an RNA guide that recognizes the mutation and donor DNA to correct the mutation.

To overcome these challenges, the Berkeley scientists invented a delivery vessel that binds all of these components together, and then releases them when the vessel is inside a wide variety of cell types, triggering homology directed repair. CRISPR-Gold’s gold nanoparticles coat the donor DNA and also bind Cas9. When injected into mice, their cells recognize a marker in CRISPR-Gold and then import the delivery vessel. Then, through a series of cellular mechanisms, CRISPR-Gold is released into the cells’ cytoplasm and breaks apart, rapidly releasing Cas9 and donor DNA.

[Schematic: CRISPR-Gold’s method of action]

A single injection of CRISPR-Gold into muscle tissue of mice that model Duchenne muscular dystrophy restored 5.4 percent of the dystrophin gene, whose mutation causes the disease, to the wild-type, or normal, sequence. This correction rate was approximately 18 times higher than in mice treated with Cas9 and donor DNA by themselves, which experienced only a 0.3 percent correction rate.
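The fold-change quoted in the release is simply the ratio of the two correction rates; a quick sanity check (using the study's reported percentages) reproduces it:

```python
# Sanity-check the reported fold-change from the two correction rates.
crispr_gold_rate = 5.4  # percent corrected with CRISPR-Gold (from the study)
control_rate = 0.3      # percent corrected with Cas9 + donor DNA alone
fold_change = crispr_gold_rate / control_rate
print(round(fold_change))  # 18, matching "approximately 18 times higher"
```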

Importantly, the study authors note, CRISPR-Gold faithfully restored the normal sequence of dystrophin, which is a significant improvement over previously published approaches that only removed the faulty part of the gene, making it shorter and converting one disease into another, milder disease.

CRISPR-Gold was also able to reduce tissue fibrosis – the hallmark of diseases where muscles do not function properly – and enhanced strength and agility in mice with Duchenne muscular dystrophy. CRISPR-Gold-treated mice showed a two-fold increase in hanging time in a common test for mouse strength and agility, compared to mice injected with a control.

“These experiments suggest that it will be possible to develop non-viral CRISPR therapeutics that can safely correct gene mutations, via the process of homology-directed repair, by simply developing nanoparticles that can simultaneously encapsulate all of the CRISPR components,” Murthy said.

[Image: CRISPR in action, a model of the Cas9 protein cutting a double-stranded piece of DNA]

The study found that CRISPR-Gold’s approach to Cas9 protein delivery is safer than viral delivery of CRISPR, which, in addition to toxicity, amplifies the side effects of Cas9 through continuous expression of this DNA-cutting enzyme. When the research team tested CRISPR-Gold’s gene-editing capability in mice, they found that CRISPR-Gold efficiently corrected the DNA mutation that causes Duchenne muscular dystrophy, with minimal collateral DNA damage.

The researchers quantified CRISPR-Gold’s off-target DNA damage and found damage levels similar to that of a typical DNA sequencing error in a typical cell that was not exposed to CRISPR (0.005-0.2 percent). To test for possible immunogenicity, the blood stream cytokine profiles of mice were analyzed at 24 hours and two weeks after the CRISPR-Gold injection. CRISPR-Gold did not cause an acute up-regulation of inflammatory cytokines in plasma, after multiple injections, or weight loss, suggesting that CRISPR-Gold can be used multiple times safely, and that it has a high therapeutic window for gene editing in muscle tissue.

“CRISPR-Gold and, more broadly, CRISPR-nanoparticles open a new way for safer, accurately controlled delivery of gene-editing tools,” Conboy said. “Ultimately, these techniques could be developed into a new medicine for Duchenne muscular dystrophy and a number of other genetic diseases.”

A clinical trial will be needed to discern whether CRISPR-Gold is an effective treatment for genetic diseases in humans. Study co-authors Kunwoo Lee and Hyo Min Park have formed a start-up company, GenEdit (Murthy has an ownership stake in GenEdit), which is focused on translating the CRISPR-Gold technology into humans. The labs of Murthy and Conboy are also working on the next generation of particles that can deliver CRISPR into tissues from the blood stream and would preferentially target adult stem cells, which are considered the best targets for gene correction because stem and progenitor cells are capable of gene editing, self-renewal and differentiation.

“Genetic diseases cause devastating levels of mortality and morbidity, and new strategies for treating them are greatly needed,” Murthy said. “CRISPR-Gold was able to correct disease-causing gene mutations in vivo, via the non-viral delivery of Cas9 protein, guide RNA and donor DNA, and therefore has the potential to develop into a therapeutic for treating genetic diseases.”

The study was funded by the National Institutes of Health, the W.M. Keck Foundation, the Moore Foundation, the Li Ka Shing Foundation, Calico, Packer, Roger’s and SENS, and the Center of Innovation (COI) Program of the Japan Science and Technology Agency.

Here’s a link to and a citation for the paper,

Nanoparticle delivery of Cas9 ribonucleoprotein and donor DNA in vivo induces homology-directed DNA repair by Kunwoo Lee, Michael Conboy, Hyo Min Park, Fuguo Jiang, Hyun Jin Kim, Mark A. Dewitt, Vanessa A. Mackley, Kevin Chang, Anirudh Rao, Colin Skinner, Tamanna Shobha, Melod Mehdipour, Hui Liu, Wen-chin Huang, Freeman Lan, Nicolas L. Bray, Song Li, Jacob E. Corn, Kazunori Kataoka, Jennifer A. Doudna, Irina Conboy, & Niren Murthy. Nature Biomedical Engineering (2017) doi:10.1038/s41551-017-0137-2 Published online: 02 October 2017

This paper is behind a paywall.

Cyborg bacteria to reduce carbon dioxide

This video is a bit technical, but then it is about work being presented to chemists at the American Chemical Society’s (ACS) 254th National Meeting & Exposition, Aug. 20-24, 2017,

For a more plain language explanation, there’s an August 22, 2017 ACS news release (also on EurekAlert),

Photosynthesis provides energy for the vast majority of life on Earth. But chlorophyll, the green pigment that plants use to harvest sunlight, is relatively inefficient. To enable humans to capture more of the sun’s energy than natural photosynthesis can, scientists have taught bacteria to cover themselves in tiny, highly efficient solar panels to produce useful compounds.

“Rather than rely on inefficient chlorophyll to harvest sunlight, I’ve taught bacteria how to grow and cover their bodies with tiny semiconductor nanocrystals,” says Kelsey K. Sakimoto, Ph.D., who carried out the research in the lab of Peidong Yang, Ph.D. “These nanocrystals are much more efficient than chlorophyll and can be grown at a fraction of the cost of manufactured solar panels.”

Humans increasingly are looking to find alternatives to fossil fuels as sources of energy and feedstocks for chemical production. Many scientists have worked to create artificial photosynthetic systems to generate renewable energy and simple organic chemicals using sunlight. Progress has been made, but the systems are not efficient enough for commercial production of fuels and feedstocks.

Research in Yang’s lab at the University of California, Berkeley, where Sakimoto earned his Ph.D., focuses on harnessing inorganic semiconductors that can capture sunlight to organisms such as bacteria that can then use the energy to produce useful chemicals from carbon dioxide and water. “The thrust of research in my lab is to essentially ‘supercharge’ nonphotosynthetic bacteria by providing them energy in the form of electrons from inorganic semiconductors, like cadmium sulfide, that are efficient light absorbers,” Yang says. “We are now looking for more benign light absorbers than cadmium sulfide to provide bacteria with energy from light.”

Sakimoto worked with a naturally occurring, nonphotosynthetic bacterium, Moorella thermoacetica, which, as part of its normal respiration, produces acetic acid from carbon dioxide (CO2). Acetic acid is a versatile chemical that can be readily upgraded to a number of fuels, polymers, pharmaceuticals and commodity chemicals through complementary, genetically engineered bacteria.

When Sakimoto fed cadmium and the amino acid cysteine, which contains a sulfur atom, to the bacteria, they synthesized cadmium sulfide (CdS) nanoparticles, which function as solar panels on their surfaces. The hybrid organism, M. thermoacetica-CdS, produces acetic acid from CO2, water and light. “Once covered with these tiny solar panels, the bacteria can synthesize food, fuels and plastics, all using solar energy,” Sakimoto says. “These bacteria outperform natural photosynthesis.”

The bacteria operate at an efficiency of more than 80 percent, and the process is self-replicating and self-regenerating, making this a zero-waste technology. “Synthetic biology and the ability to expand the product scope of CO2 reduction will be crucial to poising this technology as a replacement, or one of many replacements, for the petrochemical industry,” Sakimoto says.

So, do the inorganic-biological hybrids have commercial potential? “I sure hope so!” he says. “Many current systems in artificial photosynthesis require solid electrodes, which is a huge cost. Our algal biofuels are much more attractive, as the whole CO2-to-chemical apparatus is self-contained and only requires a big vat out in the sun.” But he points out that the system still requires some tweaking to tune both the semiconductor and the bacteria. He also suggests that it is possible that the hybrid bacteria he created may have some naturally occurring analog. “A future direction, if this phenomenon exists in nature, would be to bioprospect for these organisms and put them to use,” he says.
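An aside of my own before moving on: the CO2-to-acetic-acid conversion that the photoexcited electrons from the CdS 'solar panels' feed is usually written as the acetogenesis half-reaction below. (This is standard biochemistry for acetogens like M. thermoacetica, not an equation taken from the news release.)

```latex
2\,\mathrm{CO_2} + 8\,\mathrm{H^+} + 8\,e^- \;\longrightarrow\; \mathrm{CH_3COOH} + 2\,\mathrm{H_2O}
```

The bacterium runs this reduction through its normal metabolic machinery; the nanocrystals simply supply the electrons that photosynthetic organisms would get from chlorophyll.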

For more insight into the work, check out Dexter Johnson’s Aug. 22, 2017 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website),

“It’s actually a natural, overlooked feature of their biology,” explains Sakimoto in an e-mail interview with IEEE Spectrum. “This bacterium has a detoxification pathway, meaning if it encounters a toxic metal, like cadmium, it will try to precipitate it out, thereby detoxifying it. So when we introduce cadmium ions into the growth medium in which M. thermoacetica is hanging out, it will convert the amino acid cysteine into sulfide, which precipitates out cadmium as cadmium sulfide. The crystals then assemble and stick onto the bacterium through normal electrostatic interactions.”

I’ve just excerpted one bit, there’s more in Dexter’s posting.
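To put the detoxification pathway Sakimoto describes in chemical shorthand (my gloss, not his): the cysteine-derived sulfide captures dissolved cadmium ions as insoluble cadmium sulfide, which then sticks to the cell surface.

```latex
\mathrm{Cd^{2+}} + \mathrm{HS^-} \;\longrightarrow\; \mathrm{CdS}\!\downarrow\; + \;\mathrm{H^+}
```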

Canadian science policy news and doings (also: some US science envoy news)

I have a couple of notices from the Canadian Science Policy Centre (CSPC), a twitter feed, and an article in an online magazine to thank for this bumper crop of news.

 Canadian Science Policy Centre: the conference

The 2017 Canadian Science Policy Conference, being held Nov. 1 – 3, 2017 in Ottawa, Ontario (the third year in a row for Ottawa), has a super saver rate available until Sept. 3, 2017, according to an August 14, 2017 announcement (received via email).

Time is running out: you have until September 3rd before prices go up from the SuperSaver rate.

Savings off the regular price with the SuperSaver rate:
Up to 26% for General admission
Up to 29% for Academic/Non-Profit Organizations
Up to 40% for Students and Post-Docs

Before giving you the link to the registration page and assuming that you might want to check out what is on offer at the conference, here’s a link to the programme. They don’t seem to have any events celebrating Canada’s 150th anniversary although they do have a session titled, ‘The Next 150 years of Science in Canada: Embedding Equity, Delivering Diversity/Les 150 prochaine années de sciences au Canada:  Intégrer l’équité, promouvoir la diversité‘,

Enhancing equity, diversity, and inclusivity (EDI) in science, technology, engineering and math (STEM) has been described as being a human rights issue and an economic development issue by various individuals and organizations (e.g. OECD). Recent federal policy initiatives in Canada have focused on increasing participation of women (a designated under-represented group) in science through increased reporting, program changes, and institutional accountability. However, the Employment Equity Act requires employers to act to ensure the full representation of the three other designated groups: Aboriginal peoples, persons with disabilities and members of visible minorities. Significant structural and systemic barriers to full participation and employment in STEM for members of these groups still exist in Canadian institutions. Since data support the positive role of diversity in promoting innovation and economic development, failure to capture the full intellectual capacity of a diverse population limits provincial and national potential and progress in many areas. A diverse international panel of experts from designated groups will speak to the issue of accessibility and inclusion in STEM. In addition, the discussion will focus on evidence-based recommendations for policy initiatives that will promote full EDI in science in Canada to ensure local and national prosperity and progress for Canada over the next 150 years.

There’s also this list of speakers. Curiously, I don’t see Kirsty Duncan, Canada’s Minister of Science, on the list, nor do I see any other politicians in the banner for the conference website. This divergence from the CSPC’s usual approach to promoting the conference is interesting.

Moving onto the conference, the organizers have added two panels to the programme (from the announcement received via email),

Friday, November 3, 2017
10:30AM-12:00PM
Open Science and Innovation
Organizer: Tiberius Brastaviceanu
Organization: ACES-CAKE

10:30AM- 12:00PM
The Scientific and Economic Benefits of Open Science
Organizer: Arij Al Chawaf
Organization: Structural Genomics

I think this is the first time there’s been a ‘Tiberius’ on this blog and teamed with the organization’s name, well, I just had to include it.

Finally, here’s the link to the registration page and a page that details travel deals.

Canadian Science Policy Conference: a compendium of documents and articles on Canada’s Chief Science Advisor and Ontario’s Chief Scientist and the pre-2018 budget submissions

The deadline for applications for the Chief Science Advisor position was extended to Feb. 2017 and, so far, there’s no word as to who it might be. Perhaps Minister of Science Kirsty Duncan wants to make a splash with a surprise announcement at the CSPC’s 2017 conference? As for Ontario’s Chief Scientist, this move will make the province the third (?) to have one, after Québec and Alberta. Alberta apparently has a chief scientist, although there doesn’t seem to be a government webpage for the position and his LinkedIn profile doesn’t include the title. In any event, Dr. Fred Wrona is mentioned as Alberta’s Chief Scientist in a May 31, 2017 Alberta government announcement. *ETA Aug. 25, 2017: I missed the Yukon, which has a Senior Science Advisor. The position is currently held by Dr. Aynslie Ogden.*

Getting back to the compendium, here’s the CSPC’s A Comprehensive Collection of Publications Regarding Canada’s Federal Chief Science Advisor and Ontario’s Chief Scientist webpage. Here’s a little background provided on the page,

On June 2nd, 2017, the House of Commons Standing Committee on Finance commenced the pre-budget consultation process for the 2018 Canadian Budget. These consultations provide Canadians the opportunity to communicate their priorities with a focus on Canadian productivity in the workplace and community in addition to entrepreneurial competitiveness. Organizations from across the country submitted their priorities on August 4th, 2017 to be selected as witness for the pre-budget hearings before the Committee in September 2017. The process will result in a report to be presented to the House of Commons in December 2017 and considered by the Minister of Finance in the 2018 Federal Budget.

NEWS & ANNOUNCEMENT

House of Commons- PRE-BUDGET CONSULTATIONS IN ADVANCE OF THE 2018 BUDGET

https://www.ourcommons.ca/Committees/en/FINA/StudyActivity?studyActivityId=9571255

CANADIANS ARE INVITED TO SHARE THEIR PRIORITIES FOR THE 2018 FEDERAL BUDGET

https://www.ourcommons.ca/DocumentViewer/en/42-1/FINA/news-release/9002784

The deadline for pre-2018 budget submissions was Aug. 4, 2017, and the committee hasn’t yet scheduled any meetings, although they are to be held in September. (People can meet with the Standing Committee on Finance in various locations across Canada to discuss their submissions.) I’m not sure where the CSPC got its list of ‘science’ submissions, but it’s definitely worth checking as there are some odd omissions, such as TRIUMF (Canada’s National Laboratory for Particle and Nuclear Physics), Genome Canada, the Pan-Canadian Artificial Intelligence Strategy, CIFAR (Canadian Institute for Advanced Research), the Perimeter Institute, Canadian Light Source, etc.

Twitter and the Naylor Report under a microscope

This news came from University of British Columbia President Santa Ono’s twitter feed,

 I will join Jon [sic] Borrows and Janet Rossant on Sept 19 in Ottawa at a Mindshare event to discuss the importance of the Naylor Report

The Mindshare event Ono is referring to is being organized by Universities Canada (formerly the Association of Universities and Colleges of Canada) and the Institute for Research on Public Policy. It is titled, ‘The Naylor report under the microscope’. Here’s more from the event webpage,

Join Universities Canada and Policy Options for a lively discussion moderated by editor-in-chief Jennifer Ditchburn on the report from the Fundamental Science Review Panel and why research matters to Canadians.

Moderator

Jennifer Ditchburn, editor, Policy Options.

Jennifer Ditchburn

Editor-in-chief, Policy Options

Jennifer Ditchburn is the editor-in-chief of Policy Options, the online policy forum of the Institute for Research on Public Policy. An award-winning parliamentary correspondent, Jennifer began her journalism career at the Canadian Press in Montreal as a reporter-editor during the lead-up to the 1995 referendum. From 2001 to 2006 she was a national reporter with CBC TV on Parliament Hill, and in 2006 she returned to the Canadian Press. She is a three-time winner of a National Newspaper Award: twice in the politics category, and once in the breaking news category. In 2015 she was awarded the prestigious Charles Lynch Award for outstanding coverage of national issues. Jennifer has been a frequent contributor to television and radio public affairs programs, including CBC’s Power and Politics, the “At Issue” panel, and The Current. She holds a bachelor of arts from Concordia University, and a master of journalism from Carleton University.

@jenditchburn

Tuesday, September 19, 2017

 12-2 pm

Fairmont Château Laurier,  Laurier  Room
 1 Rideau Street, Ottawa

 rsvp@univcan.ca

I can’t tell if they’re offering lunch or if there is a cost associated with this event so you may want to contact the organizers.

As for the Naylor report, I posted a three-part series on June 8, 2017, which features my comments and the other comments I was able to find on the report:

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 1 of 3

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 2 of 3

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 3 of 3

One piece not mentioned in my three-part series is Paul Wells’ provocatively titled June 29, 2017 article for MacLean’s magazine, Why Canadian scientists aren’t happy (Note: Links have been removed),

Much hubbub this morning over two interviews Kirsty Duncan, the science minister, has given the papers. The subject is Canada’s Fundamental Science Review, commonly called the Naylor Report after David Naylor, the former University of Toronto president who was its main author.

Other authors include BlackBerry founder Mike Lazaridis, who has bankrolled much of the Waterloo renaissance, and Canadian Nobel physicist Arthur McDonald. It’s as blue-chip as a blue-chip panel could be.

Duncan appointed the panel a year ago. It’s her panel, delivered by her experts. Why does it not seem to be… getting anywhere? Why does it seem to have no champion in government? Therein lies a tale.

Note, first, that Duncan’s interviews—her first substantive comment on the report’s recommendations!—come nearly three months after its April release, which in turn came four months after Duncan asked Naylor to deliver his report, last December. (By March I had started to make fun of the Trudeau government in print for dragging its heels on the report’s release. That column was not widely appreciated in the government, I’m told.)

Anyway, the report was released, at an event attended by no representative of the Canadian government. Here’s the gist of what I wrote at the time:

 

Naylor’s “single most important recommendation” is a “rapid increase” in federal spending on “independent investigator-led research” instead of the “priority-driven targeted research” that two successive federal governments, Trudeau’s and Stephen Harper’s, have preferred in the last 8 or 10 federal budgets.

In English: Trudeau has imitated Harper in favouring high-profile, highly targeted research projects, on areas of study selected by political staffers in Ottawa, that are designed to attract star researchers from outside Canada so they can bolster the image of Canada as a research destination.

That’d be great if it wasn’t achieved by pruning budgets for the less spectacular research that most scientists do.

Naylor has numbers. “Between 2007-08 and 2015-16, the inflation-adjusted budgetary envelope for investigator-led research fell by 3 per cent while that for priority-driven research rose by 35 per cent,” he and his colleagues write. “As the number of researchers grew during this period, the real resources available per active researcher to do investigator-led research declined by about 35 per cent.”

And that’s not even taking into account the way two new programs—the $10-million-per-recipient Canada Excellence Research Chairs and the $1.5 billion Canada First Research Excellence Fund—are “further concentrating resources in the hands of smaller numbers of individuals and institutions.”

That’s the context for Duncan’s remarks. In the Globe, she says she agrees with Naylor on “the need for a research system that promotes equity and diversity, provides a better entry for early career researchers and is nimble in response to new scientific opportunities.” But she also “disagreed” with the call for a national advisory council that would give expert advice on the government’s entire science, research and innovation policy.

This is an asinine statement. When taking three months to read a report, it’s a good idea to read it. There is not a single line in Naylor’s overlong report that calls for the new body to make funding decisions. Its proposed name is NACRI, for National Advisory Council on Research and Innovation. A for Advisory. Its responsibilities, listed on Page 19 if you’re reading along at home, are restricted to “advice… evaluation… public reporting… advice… advice.”

Duncan also didn’t promise to meet Naylor’s requested funding levels: $386 million for research in the first year, growing to $1.3 billion in new money in the fourth year. That’s a big concern for researchers, who have been warning for a decade that two successive governments—Harper’s and Trudeau’s—have been more interested in building new labs than in ensuring there’s money to do research in them.

The minister has talking points. She gave the same answer to both reporters about whether Naylor’s recommendations will be implemented in time for the next federal budget. “It takes time to turn the Queen Mary around,” she said. Twice. I’ll say it does: She’s reacting three days before Canada Day to a report that was written before Christmas. Which makes me worry when she says elected officials should be in charge of being nimble.

Here’s what’s going on.

The Naylor report represents Canadian research scientists’ side of a power struggle. The struggle has been continuing since Jean Chrétien left office. After early cuts, he presided for years over very large increases to the budgets of the main science granting councils. But since 2003, governments have preferred to put new funding dollars to targeted projects in applied sciences. …

Naylor wants that trend reversed, quickly. He is supported in that call by a frankly astonishingly broad coalition of university administrators and working researchers, who until his report were more often at odds. So you have the group representing Canada’s 15 largest research universities and the group representing all universities and a new group representing early-career researchers and, as far as I can tell, every Canadian scientist on Twitter. All backing Naylor. All fundamentally concerned that new money for research is of no particular interest if it does not back the best science as chosen by scientists, through peer review.

The competing model, the one preferred by governments of all stripes, might best be called superclusters. Very large investments into very large projects with loosely defined scientific objectives, whose real goal is to retain decorated veteran scientists and to improve the Canadian high-tech industry. Vast and sprawling labs and tech incubators, cabinet ministers nodding gravely as world leaders in sexy trendy fields sketch the golden path to Jobs of Tomorrow.

You see the imbalance. On one side, ribbons to cut. On the other, nerds experimenting on tapeworms. Kirsty Duncan, a shaky political performer, transparently a junior minister to the supercluster guy, with no deputy minister or department reporting to her, is in a structurally weak position: her title suggests she’s science’s emissary to the government, but she is not equipped to be anything more than government’s emissary to science.

A government that consistently buys into the market for intellectual capital at the very top of the price curve is a factory for producing white elephants. But don’t take my word for it. Ask Geoffrey Hinton [University of Toronto’s Geoffrey Hinton, a Canadian leader in machine learning].

“There is a lot of pressure to make things more applied; I think it’s a big mistake,” he said in 2015. “In the long run, curiosity-driven research just works better… Real breakthroughs come from people focusing on what they’re excited about.”

I keep saying this, like a broken record. If you want the science that changes the world, ask the scientists who’ve changed it how it gets made. This government claims to be interested in what scientists think. We’ll see.

Incisive and acerbic, the article is worth making time to read in its entirety.
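One back-of-envelope note on the Naylor figures Wells quotes: a 3 per cent real-dollar drop in the investigator-led envelope alongside a 35 per cent drop per active researcher implies the researcher population grew substantially over the same period. The growth figure below is my own inference from those two numbers, not a figure stated in the report excerpt.

```python
# Back-of-envelope check of the Naylor report figures quoted above.
# The two inputs come from the excerpt; the researcher-growth figure
# is inferred from them, not stated in the report excerpt itself.
budget_change = -0.03          # investigator-led envelope, 2007-08 to 2015-16
per_researcher_change = -0.35  # real resources per active researcher

# If total funding fell 3% while per-researcher resources fell 35%,
# the implied growth in the number of active researchers is:
implied_researcher_growth = (1 + budget_change) / (1 + per_researcher_change) - 1
print(f"implied growth in researcher numbers: {implied_researcher_growth:.0%}")
# prints: implied growth in researcher numbers: 49%
```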

Getting back to the ‘The Naylor report under the microscope’ event, I wonder if anyone will be as tough and direct as Wells. Going back even further, I wonder if this is why there’s no mention of Duncan as a speaker at the conference. It could go either way: surprise announcement of a Chief Science Advisor, as I first suggested, or avoidance of a potentially angry audience.

For anyone curious about Geoffrey Hinton, there’s more here in my March 31, 2017 post (scroll down about 20% of the way) and for more about the 2017 budget and allocations for targeted science projects there’s my March 24, 2017 post.

US science envoy quits

An Aug. 23, 2017 article by Matthew Rosza for salon.com notes the resignation of one of the US science envoys,

President Donald Trump’s infamous response to the Charlottesville riots — namely, saying that both sides were to blame and that there were “very fine people” marching as white supremacists — has prompted yet another high profile resignation from his administration.

Daniel M. Kammen, who served as a science envoy for the State Department and focused on renewable energy development in the Middle East and Northern Africa, submitted a letter of resignation on Wednesday. Notably, the first letters of its paragraphs spelled out I-M-P-E-A-C-H. That followed a letter earlier this month in which writer Jhumpa Lahiri and actor Kal Penn similarly spelled out R-E-S-I-S-T in their joint resignation from the President’s Committee on Arts and Humanities.

Jeremy Berke’s Aug. 23, 2017 article for BusinessInsider.com provides a little more detail (Note: Links have been removed),

A State Department climate science envoy resigned Wednesday in a public letter posted on Twitter over what he says is President Donald Trump’s “attacks on the core values” of the United States with his response to violence in Charlottesville, Virginia.

“My decision to resign is in response to your attacks on the core values of the United States,” wrote Daniel Kammen, a professor of energy at the University of California, Berkeley, who was appointed as one of five science envoys in 2016. “Your failure to condemn white supremacists and neo-Nazis has domestic and international ramifications.”

“Your actions to date have, sadly, harmed the quality of life in the United States, our standing abroad, and the sustainability of the planet,” Kammen writes.

Science envoys work with the State Department to establish and develop energy programs in countries around the world. Kammen specifically focused on renewable energy development in the Middle East and North Africa.

That’s it.

3-D integration of nanotechnologies on a single computer chip

By integrating nanomaterials, researchers have developed a new technique for building a 3D computer chip capable of handling today’s huge amounts of data. Weirdly, the first two paragraphs of a July 5, 2017 news item on Nanowerk do not convey the main point (Note: A link has been removed),

As embedded intelligence is finding its way into ever more areas of our lives, fields ranging from autonomous driving to personalized medicine are generating huge amounts of data. But just as the flood of data is reaching massive proportions, the ability of computer chips to process it into useful information is stalling.

Now, researchers at Stanford University and MIT have built a new chip to overcome this hurdle. The results are published today in the journal Nature (“Three-dimensional integration of nanotechnologies for computing and data storage on a single chip”), by lead author Max Shulaker, an assistant professor of electrical engineering and computer science at MIT. Shulaker began the work as a PhD student alongside H.-S. Philip Wong and his advisor Subhasish Mitra, professors of electrical engineering and computer science at Stanford. The team also included professors Roger Howe and Krishna Saraswat, also from Stanford.

This image helps to convey the main points,

Instead of relying on silicon-based devices, a new chip uses carbon nanotubes and resistive random-access memory (RRAM) cells. The two are built vertically over one another, making a new, dense 3-D computer architecture with interleaving layers of logic and memory. Courtesy MIT

As I have been quite impressed with the Massachusetts Institute of Technology’s (MIT) science writing, it was a bit surprising to find that this news release didn’t follow the ‘rules’, i.e., cover as many of the journalistic questions (Who, What, Where, When, Why, and, sometimes, How) as possible in the first sentence/paragraph. It is written more in the style of a magazine article, so the details take a while to emerge. From the July 5, 2017 MIT news release, which originated the news item,

Computers today comprise different chips cobbled together. There is a chip for computing and a separate chip for data storage, and the connections between the two are limited. As applications analyze increasingly massive volumes of data, the limited rate at which data can be moved between different chips is creating a critical communication “bottleneck.” And with limited real estate on the chip, there is not enough room to place them side-by-side, even as they have been miniaturized (a phenomenon known as Moore’s Law).

To make matters worse, the underlying devices, transistors made from silicon, are no longer improving at the historic rate that they have for decades.

The new prototype chip is a radical change from today’s chips. It uses multiple nanotechnologies, together with a new computer architecture, to reverse both of these trends.

Instead of relying on silicon-based devices, the chip uses carbon nanotubes, which are sheets of 2-D graphene formed into nanocylinders, and resistive random-access memory (RRAM) cells, a type of nonvolatile memory that operates by changing the resistance of a solid dielectric material. The researchers integrated over 1 million RRAM cells and 2 million carbon nanotube field-effect transistors, making the most complex nanoelectronic system ever made with emerging nanotechnologies.

The RRAM and carbon nanotubes are built vertically over one another, making a new, dense 3-D computer architecture with interleaving layers of logic and memory. By inserting ultradense wires between these layers, this 3-D architecture promises to address the communication bottleneck.

However, such an architecture is not possible with existing silicon-based technology, according to the paper’s lead author, Max Shulaker, who is a core member of MIT’s Microsystems Technology Laboratories. “Circuits today are 2-D, since building conventional silicon transistors involves extremely high temperatures of over 1,000 degrees Celsius,” says Shulaker. “If you then build a second layer of silicon circuits on top, that high temperature will damage the bottom layer of circuits.”

The key in this work is that carbon nanotube circuits and RRAM memory can be fabricated at much lower temperatures, below 200 C. “This means they can be built up in layers without harming the circuits beneath,” Shulaker says.

This provides several simultaneous benefits for future computing systems. “The devices are better: Logic made from carbon nanotubes can be an order of magnitude more energy-efficient compared to today’s logic made from silicon, and similarly, RRAM can be denser, faster, and more energy-efficient compared to DRAM,” Wong says, referring to a conventional memory known as dynamic random-access memory.

“In addition to improved devices, 3-D integration can address another key consideration in systems: the interconnects within and between chips,” Saraswat adds.

“The new 3-D computer architecture provides dense and fine-grained integration of computing and data storage, drastically overcoming the bottleneck from moving data between chips,” Mitra says. “As a result, the chip is able to store massive amounts of data and perform on-chip processing to transform a data deluge into useful information.”

To demonstrate the potential of the technology, the researchers took advantage of the ability of carbon nanotubes to also act as sensors. On the top layer of the chip they placed over 1 million carbon nanotube-based sensors, which they used to detect and classify ambient gases.

Due to the layering of sensing, data storage, and computing, the chip was able to measure each of the sensors in parallel, and then write directly into its memory, generating huge bandwidth, Shulaker says.

Three-dimensional integration is the most promising approach to continue the technology scaling path set forth by Moore’s Law, allowing an increasing number of devices to be integrated per unit volume, according to Jan Rabaey, a professor of electrical engineering and computer science at the University of California at Berkeley, who was not involved in the research.

“It leads to a fundamentally different perspective on computing architectures, enabling an intimate interweaving of memory and logic,” Rabaey says. “These structures may be particularly suited for alternative learning-based computational paradigms such as brain-inspired systems and deep neural nets, and the approach presented by the authors is definitely a great first step in that direction.”

“One big advantage of our demonstration is that it is compatible with today’s silicon infrastructure, both in terms of fabrication and design,” says Howe.

“The fact that this strategy is both CMOS [complementary metal-oxide-semiconductor] compatible and viable for a variety of applications suggests that it is a significant step in the continued advancement of Moore’s Law,” says Ken Hansen, president and CEO of the Semiconductor Research Corporation, which supported the research. “To sustain the promise of Moore’s Law economics, innovative heterogeneous approaches are required as dimensional scaling is no longer sufficient. This pioneering work embodies that philosophy.”

The team is working to improve the underlying nanotechnologies, while exploring the new 3-D computer architecture. For Shulaker, the next step is working with Massachusetts-based semiconductor company Analog Devices to develop new versions of the system that take advantage of its ability to carry out sensing and data processing on the same chip.

So, for example, the devices could be used to detect signs of disease by sensing particular compounds in a patient’s breath, says Shulaker.

“The technology could not only improve traditional computing, but it also opens up a whole new range of applications that we can target,” he says. “My students are now investigating how we can produce chips that do more than just computing.”

“This demonstration of the 3-D integration of sensors, memory, and logic is an exceptionally innovative development that leverages current CMOS technology with the new capabilities of carbon nanotube field–effect transistors,” says Sam Fuller, CTO emeritus of Analog Devices, who was not involved in the research. “This has the potential to be the platform for many revolutionary applications in the future.”

This work was funded by the Defense Advanced Research Projects Agency [DARPA], the National Science Foundation, Semiconductor Research Corporation, STARnet SONIC, and member companies of the Stanford SystemX Alliance.

Here’s a link to and a citation for the paper,

Three-dimensional integration of nanotechnologies for computing and data storage on a single chip by Max M. Shulaker, Gage Hills, Rebecca S. Park, Roger T. Howe, Krishna Saraswat, H.-S. Philip Wong, & Subhasish Mitra. Nature 547, 74–78 (06 July 2017) doi:10.1038/nature22994 Published online 05 July 2017

This paper is behind a paywall.

CRISPR and editing the germline in the US (part 3 of 3): public discussions and pop culture

After giving a basic explanation of the technology and some of the controversies in part 1 and offering more detail about the technology and about the possibility of designer babies in part 2; this part covers public discussion, a call for one and the suggestion that one is taking place in popular culture.

But a discussion does need to happen

In a move that is either an exquisite coincidence or has been carefully orchestrated (I vote for the latter), researchers from the University of Wisconsin-Madison have released a study about attitudes in the US to human genome editing. From an Aug. 11, 2017 University of Wisconsin-Madison news release (also on EurekAlert),

In early August 2017, an international team of scientists announced they had successfully edited the DNA of human embryos. As people process the political, moral and regulatory issues of the technology — which nudges us closer to nonfiction than science fiction — researchers at the University of Wisconsin-Madison and Temple University show the time is now to involve the American public in discussions about human genome editing.

In a study published Aug. 11 in the journal Science, the researchers assessed what people in the United States think about the uses of human genome editing and how their attitudes may drive public discussion. They found a public divided on its uses but united in the importance of moving conversations forward.

“There are several pathways we can go down with gene editing,” says UW-Madison’s Dietram Scheufele, lead author of the study and member of a National Academy of Sciences committee that compiled a report focused on human gene editing earlier this year. “Our study takes an exhaustive look at all of those possible pathways forward and asks where the public stands on each one of them.”

Compared to previous studies on public attitudes about the technology, the new study takes a more nuanced approach, examining public opinion about the use of gene editing for disease therapy versus for human enhancement, and about editing that becomes hereditary versus editing that does not.

The research team, which included Scheufele and Dominique Brossard — both professors of life sciences communication — along with Michael Xenos, professor of communication arts, first surveyed study participants about the use of editing to treat disease (therapy) versus for enhancement (creating so-called “designer babies”). While about two-thirds of respondents expressed at least some support for therapeutic editing, only one-third expressed support for using the technology for enhancement.

Diving even deeper, researchers looked into public attitudes about gene editing on specific cell types — somatic or germline — either for therapy or enhancement. Somatic cells are non-reproductive, so edits made in those cells do not affect future generations. Germline cells, however, are heritable, and changes made in these cells would be passed on to children.

Public support of therapeutic editing was high both in cells that would be inherited and those that would not, with 65 percent of respondents supporting therapy in germline cells and 64 percent supporting therapy in somatic cells. When considering enhancement editing, however, support depended more upon whether the changes would affect future generations. Only 26 percent of people surveyed supported enhancement editing in heritable germline cells and 39 percent supported enhancement of somatic cells that would not be passed on to children.

“A majority of people are saying that germline enhancement is where the technology crosses that invisible line and becomes unacceptable,” says Scheufele. “When it comes to therapy, the public is more open, and that may partly be reflective of how severe some of those genetically inherited diseases are. The potential treatments for those diseases are something the public at least is willing to consider.”

Beyond questions of support, researchers also wanted to understand what was driving public opinions. They found that two factors were related to respondents’ attitudes toward gene editing as well as their attitudes toward the public’s role in its emergence: the level of religious guidance in their lives, and factual knowledge about the technology.

Those with a high level of religious guidance in their daily lives had lower support for human genome editing than those with low religious guidance. Additionally, those with high knowledge of the technology were more supportive of it than those with less knowledge.

While respondents with high religious guidance and those with high knowledge differed on their support for the technology, both groups highly supported public engagement in its development and use. These results suggest broad agreement that the public should be involved in questions of political, regulatory and moral aspects of human genome editing.

“The public may be split along lines of religiosity or knowledge with regard to what they think about the technology and scientific community, but they are united in the idea that this is an issue that requires public involvement,” says Scheufele. “Our findings show very nicely that the public is ready for these discussions and that the time to have the discussions is now, before the science is fully ready and while we have time to carefully think through different options regarding how we want to move forward.”

Here’s a link to and a citation for the paper,

U.S. attitudes on human genome editing by Dietram A. Scheufele, Michael A. Xenos, Emily L. Howell, Kathleen M. Rose, Dominique Brossard, and Bruce W. Hardy. Science 11 Aug 2017: Vol. 357, Issue 6351, pp. 553-554 DOI: 10.1126/science.aan3708

This paper is behind a paywall.

A couple of final comments

Briefly, I notice that there’s no mention of the ethics of patenting this technology in the news release about the study.

Moving on, it seems surprising that the first team to engage in germline editing in the US is in Oregon; I would have expected the work to come from Massachusetts, California, or Illinois, where a lot of bleeding-edge medical research is performed. However, given the dearth of financial support from federal funding institutions, it seems likely that only an outsider would dare to engage in the research. Given the timing, Mitalipov’s work was already well underway before the recent about-face from the US National Academy of Sciences (Note: Kaiser’s Feb. 14, 2017 article does note that for some the recent recommendations do not represent any change).

As for discussion of issues such as editing the germline, I’ve often noted here that popular culture (including advertising, science fiction, and other dramas in various media) provides an informal forum for discussion. Joelle Renstrom, in an Aug. 13, 2017 article for slate.com, writes that Orphan Black (a BBC America series featuring clones) opened up a series of questions about science and ethics in the guise of a thriller about clones. She offers a précis of the first four seasons (Note: A link has been removed),

If you stopped watching a few seasons back, here’s a brief synopsis of how the mysteries wrap up. Neolution, an organization that seeks to control human evolution through genetic modification, began Project Leda, the cloning program, for two primary reasons: to see whether they could and to experiment with mutations that might allow people (i.e., themselves) to live longer. Neolution partnered with biotech companies such as Dyad, using its big pharma reach and deep pockets to harvest people’s genetic information and to conduct individual and germline (that is, genetic alterations passed down through generations) experiments, including infertility treatments that result in horrifying birth defects and body modification, such as tail-growing.

She then provides the article’s thesis (Note: Links have been removed),

Orphan Black demonstrates Carl Sagan’s warning of a time when “awesome technological powers are in the hands of a very few.” Neolutionists do whatever they want, pausing only to consider whether they’re missing an opportunity to exploit. Their hubris is straight out of Victor Frankenstein’s playbook. Frankenstein wonders whether he ought to first reanimate something “of simpler organisation” than a human, but starting small means waiting for glory. Orphan Black’s evil scientists embody this belief: if they’re going to play God, then they’ll control not just their own destinies, but the clones’ and, ultimately, all of humanity’s. Any sacrifices along the way are for the greater good—reasoning that culminates in Westmoreland’s eugenics fantasy to genetically sterilize 99 percent of the population he doesn’t enhance.

Orphan Black uses sci-fi tropes to explore real-world plausibility. Neolution shares similarities with transhumanism, the belief that humans should use science and technology to take control of their own evolution. While some transhumanists dabble in body modifications, such as microchip implants or night-vision eye drops, others seek to end suffering by curing human illness and aging. But even these goals can be seen as selfish, as access to disease-eradicating or life-extending technologies would be limited to the wealthy. Westmoreland’s goal to “sell Neolution to the 1 percent” seems frighteningly plausible—transhumanists, who statistically tend to be white, well-educated, and male, and their associated organizations raise and spend massive sums of money to help fulfill their goals. …

On Orphan Black, denial of choice is tantamount to imprisonment. That the clones have to earn autonomy underscores the need for ethics in science, especially when it comes to genetics. The show’s message here is timely given the rise of gene-editing techniques such as CRISPR. Recently, the National Academy of Sciences gave germline gene editing the green light, just one year after academy scientists from around the world argued it would be “irresponsible to proceed” without further exploring the implications. Scientists in the United Kingdom and China have already begun human genetic engineering and American scientists recently genetically engineered a human embryo for the first time. The possibility of Project Leda isn’t farfetched. Orphan Black warns us that money, power, and fear of death can corrupt both people and science. Once that happens, loss of humanity—of both the scientists and the subjects—is inevitable.

In Carl Sagan’s dark vision of the future, “people have lost the ability to set their own agendas or knowledgeably question those in authority.” This describes the plight of the clones at the outset of Orphan Black, but as the series continues, they challenge this paradigm by approaching science and scientists with skepticism, ingenuity, and grit. …

I hope discussions such as those Scheufele and Brossard are advocating do take place, but it is worth noting that some discussion, however informal, is already underway.

-30-

Part 1: CRISPR and editing the germline in the US (part 1 of 3): In the beginning

Part 2: CRISPR and editing the germline in the US (part 2 of 3): ‘designer babies’?

CRISPR and editing the germline in the US (part 2 of 3): ‘designer babies’?

Having included an explanation of CRISPR-Cas9 technology along with the news about the first US team to edit the germline, and bits and pieces about ethics and a patent fight (part 1), this part homes in on the details of the work and worries about ‘designer babies’.

The interest flurry

I found three articles addressing the research and all three concur that despite some of the early reporting, this is not the beginning of a ‘designer baby’ generation.

First up was Nick Thieme in a July 28, 2017 article for Slate,

MIT Technology Review reported Thursday that a team of researchers from Portland, Oregon were the first team of U.S.-based scientists to successfully create a genetically modified human embryo. The researchers, led by Shoukhrat Mitalipov of Oregon Health and Science University, changed the DNA of—in MIT Technology Review’s words—“many tens” of genetically-diseased embryos by injecting the host egg with CRISPR, a DNA-based gene editing tool first discovered in bacteria, at the time of fertilization. CRISPR-Cas9, as the full editing system is called, allows scientists to change genes accurately and efficiently. As has happened with research elsewhere, the CRISPR-edited embryos weren’t implanted—they were kept sustained for only a couple of days.

In addition to being the first American team to complete this feat, the researchers also improved upon the work of the three Chinese research teams that beat them to editing embryos with CRISPR: Mitalipov’s team increased the proportion of embryonic cells that received the intended genetic changes, addressing an issue called “mosaicism,” which is when an embryo is comprised of cells with different genetic makeups. Increasing that proportion is essential to CRISPR work in eliminating inherited diseases, to ensure that the CRISPR therapy has the intended result. The Oregon team also reduced the number of genetic errors introduced by CRISPR, reducing the likelihood that a patient would develop cancer elsewhere in the body.

Separate from the scientific advancements, it’s a big deal that this work happened in a country with such intense politicization of embryo research. …

But there are a great number of obstacles between the current research and the future of genetically editing all children to be 12-foot-tall Einsteins.

Ed Yong, in an Aug. 2, 2017 article for The Atlantic, offered a comprehensive overview of the research and its implications (unusually for Yong, there seems to be a mildly condescending note, but it’s worth ignoring for the wealth of information in the article; Note: Links have been removed),

… the full details of the experiment, which are released today, show that the study is scientifically important but much less of a social inflection point than has been suggested. “This has been widely reported as the dawn of the era of the designer baby, making it probably the fifth or sixth time people have reported that dawn,” says Alta Charo, an expert on law and bioethics at the University of Wisconsin-Madison. “And it’s not.”

Given the persistent confusion around CRISPR and its implications, I’ve laid out exactly what the team did, and what it means.

Who did the experiments?

Shoukhrat Mitalipov is a Kazakhstani-born cell biologist with a history of breakthroughs—and controversy—in the stem cell field. He was the first scientist to clone monkeys. He was the first to create human embryos by cloning adult cells—a move that could provide patients with an easy supply of personalized stem cells. He also pioneered a technique for creating embryos with genetic material from three biological parents, as a way of preventing a group of debilitating inherited diseases.

Although MIT Tech Review name-checked Mitalipov alone, the paper splits credit for the research between five collaborating teams—four based in the United States, and one in South Korea.

What did they actually do?

The project effectively began with an elevator conversation between Mitalipov and his colleague Sanjiv Kaul. Mitalipov explained that he wanted to use CRISPR to correct a disease-causing gene in human embryos, and was trying to figure out which disease to focus on. Kaul, a cardiologist, told him about hypertrophic cardiomyopathy (HCM)—an inherited heart disease that’s commonly caused by mutations in a gene called MYBPC3. HCM is surprisingly common, affecting 1 in 500 adults. Many of them lead normal lives, but in some, the walls of their hearts can thicken and suddenly fail. For that reason, HCM is the commonest cause of sudden death in athletes. “There really is no treatment,” says Kaul. “A number of drugs are being evaluated but they are all experimental,” and they merely treat the symptoms. The team wanted to prevent HCM entirely by removing the underlying mutation.

They collected sperm from a man with HCM and used CRISPR to change his mutant gene into its normal healthy version, while simultaneously using the sperm to fertilize eggs that had been donated by female volunteers. In this way, they created embryos that were completely free of the mutation. The procedure was effective, and avoided some of the critical problems that have plagued past attempts to use CRISPR in human embryos.

Wait, other human embryos have been edited before?

There have been three attempts in China. The first two—in 2015 and 2016—used non-viable embryos that could never have resulted in a live birth. The third—announced this March—was the first to use viable embryos that could theoretically have been implanted in a womb. All of these studies showed that CRISPR gene-editing, for all its hype, is still in its infancy.

The editing was imprecise. CRISPR is heralded for its precision, allowing scientists to edit particular genes of choice. But in practice, some of the Chinese researchers found worrying levels of off-target mutations, where CRISPR mistakenly cut other parts of the genome.

The editing was inefficient. The first Chinese team only managed to successfully edit a disease gene in 4 out of 86 embryos, and the second team fared even worse.

The editing was incomplete. Even in the successful cases, each embryo had a mix of modified and unmodified cells. This pattern, known as mosaicism, poses serious safety problems if gene-editing were ever to be used in practice. Doctors could end up implanting women with embryos that they thought were free of a disease-causing mutation, but were only partially free. The resulting person would still have many tissues and organs that carry those mutations, and might go on to develop symptoms.

What did the American team do differently?

The Chinese teams all used CRISPR to edit embryos at early stages of their development. By contrast, the Oregon researchers delivered the CRISPR components at the earliest possible point—minutes before fertilization. That neatly avoids the problem of mosaicism by ensuring that an embryo is edited from the very moment it is created. The team did this with 54 embryos and successfully edited the mutant MYBPC3 gene in 72 percent of them. In the other 28 percent, the editing didn’t work—a high failure rate, but far lower than in previous attempts. Better still, the team found no evidence of off-target mutations.

This is a big deal. Many scientists assumed that they’d have to do something more convoluted to avoid mosaicism. They’d have to collect a patient’s cells, which they’d revert into stem cells, which they’d use to make sperm or eggs, which they’d edit using CRISPR. “That’s a lot of extra steps, with more risks,” says Alta Charo. “If it’s possible to edit the embryo itself, that’s a real advance.” Perhaps for that reason, this is the first study to edit human embryos that was published in a top-tier scientific journal—Nature, which rejected some of the earlier Chinese papers.

Is this kind of research even legal?

Yes. In Western Europe, 15 countries out of 22 ban any attempts to change the human germ line—a term referring to sperm, eggs, and other cells that can transmit genetic information to future generations. No such stance exists in the United States but Congress has banned the Food and Drug Administration from considering research applications that make such modifications. Separately, federal agencies like the National Institutes of Health are banned from funding research that ultimately destroys human embryos. But the Oregon team used non-federal money from their institutions, and donations from several small non-profits. No taxpayer money went into their work. [emphasis mine]

Why would you want to edit embryos at all?

Partly to learn more about ourselves. By using CRISPR to manipulate the genes of embryos, scientists can learn more about the earliest stages of human development, and about problems like infertility and miscarriages. That’s why biologist Kathy Niakan from the Crick Institute in London recently secured a license from a British regulator to use CRISPR on human embryos.

Isn’t this a slippery slope toward making designer babies?

In terms of avoiding genetic diseases, it’s not conceptually different from PGD, which is already widely used. The bigger worry is that gene-editing could be used to make people stronger, smarter, or taller, paving the way for a new eugenics, and widening the already substantial gaps between the wealthy and poor. But many geneticists believe that such a future is fundamentally unlikely because complex traits like height and intelligence are the work of hundreds or thousands of genes, each of which have a tiny effect. The prospect of editing them all is implausible. And since genes are so thoroughly interconnected, it may be impossible to edit one particular trait without also affecting many others.

“There’s the worry that this could be used for enhancement, so society has to draw a line,” says Mitalipov. “But this is pretty complex technology and it wouldn’t be hard to regulate it.”

Does this discovery have any social importance at all?

“It’s not so much about designer babies as it is about geographical location,” says Charo. “It’s happening in the United States, and everything here around embryo research has high sensitivity.” She and others worry that the early report about the study, before the actual details were available for scrutiny, could lead to unnecessary panic. “Panic reactions often lead to panic-driven policy … which is usually bad policy,” wrote Greely [bioethicist Hank Greely].

As I understand it, despite the change in stance, there is no federal funding available for the research performed by Mitalipov and his team.

Finally, University College London (UCL) scientists Joyce Harper and Helen O’Neill wrote about CRISPR, the Oregon team’s work, and the possibilities in an Aug. 3, 2017 essay for The Conversation (Note: Links have been removed),

The genome editing tool used, CRISPR-Cas9, has transformed the field of biology in the short time since its discovery in that it not only promises, but delivers. CRISPR has surpassed all previous efforts to engineer cells and alter genomes at a fraction of the time and cost.

The technology, which works like molecular scissors to cut and paste DNA, is a natural defence system that bacteria use to fend off harmful infections. This system has the ability to recognise invading virus DNA, cut it and integrate this cut sequence into its own genome – allowing the bacterium to render itself immune to future infections of viruses with similar DNA. It is this ability to recognise and cut DNA that has allowed scientists to use it to target and edit specific DNA regions.

When this technology is applied to “germ cells” – the sperm and eggs – or embryos, it changes the germline. That means that any alterations made would be permanent and passed down to future generations. This makes it more ethically complex, but there are strict regulations around human germline genome editing, which is predominantly illegal. The UK received a licence in 2016 to carry out CRISPR on human embryos for research into early development. But edited embryos are not allowed to be inserted into the uterus and develop into a fetus in any country.

Germline genome editing came into the global spotlight when Chinese scientists announced in 2015 that they had used CRISPR to edit non-viable human embryos – cells that could never result in a live birth. They did this to modify the gene responsible for the blood disorder β-thalassaemia. While it was met with some success, it received a lot of criticism because of the premature use of this technology in human embryos. The results showed a high number of potentially dangerous, off-target mutations created in the procedure.

Impressive results

The new study, published in Nature, is different because it deals with viable human embryos and shows that the genome editing can be carried out safely – without creating harmful mutations. The team used CRISPR to correct a mutation in the gene MYBPC3, which accounts for approximately 40% of the myocardial disease hypertrophic cardiomyopathy. This is a dominant disease, so an affected individual only needs one abnormal copy of the gene to be affected.

The researchers used sperm from a patient carrying one copy of the MYBPC3 mutation to create 54 embryos. They edited them using CRISPR-Cas9 to correct the mutation. Without genome editing, approximately 50% of the embryos would carry the patient’s normal gene and 50% would carry his abnormal gene.

After genome editing, the aim would be for 100% of embryos to be normal. In the first round of the experiments, they found that 66.7% of embryos – 36 out of 54 – were normal after being injected with CRISPR. Of the remaining 18 embryos, five had remained unchanged, suggesting editing had not worked. In 13 embryos, only a portion of cells had been edited.

The level of efficiency is affected by the type of CRISPR machinery used and, critically, the timing in which it is put into the embryo. The researchers therefore also tried injecting the sperm and the CRISPR-Cas9 complex into the egg at the same time, which resulted in more promising results. This was done for 75 mature donated human eggs using a common IVF technique called intracytoplasmic sperm injection. This time, impressively, 72.4% of embryos were normal as a result. The approach also lowered the number of embryos containing a mixture of edited and unedited cells (these embryos are called mosaics).

Finally, the team injected a further 22 embryos, which were grown into blastocysts – a later stage of embryo development. These were sequenced and the researchers found that the editing had indeed worked. Importantly, they could show that the level of off-target mutations was low.

A brave new world?

So does this mean we finally have a cure for debilitating, heritable diseases? It’s important to remember that the study did not achieve a 100% success rate. Even the researchers themselves stress that further research is needed in order to fully understand the potential and limitations of the technique.

In our view, it is unlikely that genome editing would be used to treat the majority of inherited conditions anytime soon. We still can’t be sure how a child with a genetically altered genome will develop over a lifetime, so it seems unlikely that couples carrying a genetic disease would embark on gene editing rather than undergoing already available tests – such as preimplantation genetic diagnosis or prenatal diagnosis – where the embryos or fetus are tested for genetic faults.

-30-

As might be expected there is now a call for public discussion about the ethics about this kind of work. See Part 3.

For anyone who started in the middle of this series, here’s Part 1 featuring an introduction to the technology and some of the issues.