Tag Archives: Jennifer Kuzma

The CRISPR yogurt story and a hornless cattle update

Clustered regularly interspaced short palindromic repeats (CRISPR) has never made much sense to me as a phrase. I understand each word individually; it’s just that I’ve never thought they made much sense strung together that way. It’s taken years, but I’ve finally found out what the words (when strung together that way) mean and the origins of the phrase. Hint: it’s all about the phages.

Apparently, it all started with yogurt, as Cynthia Graber and Nicola Twilley of Gastropod discuss on their podcast, “4 CRISPR experts on how gene editing is changing the future of food.” During the course of the podcast they explain the ‘phraseology’ issue, mention hornless cattle (I have an update to the information in the podcast later in this posting), and so much more.

CRISPR started with yogurt

You’ll find the podcast (almost 50 minutes long) here, in an Oct. 11, 2019 posting on the Genetic Literacy Project. If you need a little more encouragement, here’s how the podcast is described,

To understand how CRISPR will transform our food, we begin our episode at Dupont’s yoghurt culture facility in Madison, Wisconsin. Senior scientist Dennis Romero tells us the story of CRISPR’s accidental discovery—and its undercover but ubiquitous presence in the dairy aisles today.

Jennifer Kuzma and Yiping Qi help us understand the technology’s potential, both good and bad, as well as how it might be regulated and labeled. And Joyce Van Eck, a plant geneticist at the Boyce Thompson Institute in Ithaca, New York, tells us the story of how she is using CRISPR, combined with her understanding of tomato genetics, to fast-track the domestication of one of the Americas’ most delicious orphan crops [groundcherries].

I featured Van Eck’s work with groundcherries last year in a November 28, 2018 posting, and I don’t think she’s published any new work about the fruit since. As for Kuzma’s point that there should be more transparency where genetically modified food is concerned, Canadian consumers were surprised (shocked) in 2017 to find out that genetically modified Atlantic salmon had been introduced into the food market without any notification (my September 13, 2017 posting; scroll down to the Fish subheading; Note: The WordPress ‘updated version from Hell’ has affected some of the formatting on the page).

The earliest article on CRISPR and yogurt that I’ve found is a January 1, 2015 article by Kerry Grens for The Scientist,

Two years ago, a genome-editing tool referred to as CRISPR (clustered regularly interspaced short palindromic repeats) burst onto the scene and swept through laboratories faster than you can say “adaptive immunity.” Bacteria and archaea evolved CRISPR eons before clever researchers harnessed the system to make very precise changes to pretty much any sequence in just about any genome.

But life scientists weren’t the first to get hip to CRISPR’s potential. For nearly a decade, cheese and yogurt makers have been relying on CRISPR to produce starter cultures that are better able to fend off bacteriophage attacks. “It’s a very efficient way to get rid of viruses for bacteria,” says Martin Kullen, the global R&D technology leader of Health and Protection at DuPont Nutrition & Health. “CRISPR’s been an important part of our solution to avoid food waste.”

Phage infection of starter cultures is a widespread and significant problem in the dairy-product business, one that’s been around as long as people have been making cheese. Patrick Derkx, senior director of innovation at Denmark-based Chr. Hansen, one of the world’s largest culture suppliers, estimates that the quality of about two percent of cheese production worldwide suffers from phage attacks. Infection can also slow the acidification of milk starter cultures, thereby reducing creameries’ capacity by up to about 10 percent, Derkx estimates.

In the early 2000s, Philippe Horvath and Rodolphe Barrangou of Danisco (later acquired by DuPont) and their colleagues were first introduced to CRISPR while sequencing Streptococcus thermophilus, a workhorse of yogurt and cheese production. Initially, says Barrangou, they had no idea of the purpose of the CRISPR sequences. But as his group sequenced different strains of the bacteria, they began to realize that CRISPR might be related to phage infection and subsequent immune defense. “That was an eye-opening moment when we first thought of the link between CRISPR sequencing content and phage resistance,” says Barrangou, who joined the faculty of North Carolina State University in 2013.
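
A brief aside on the name itself, now that I finally understand it: a CRISPR locus in a bacterium’s genome is literally an array of short, nearly identical repeats with unique ‘spacer’ sequences sandwiched between them, and the spacers are snippets the bacterium has kept from phages that attacked it before. For anyone who finds a picture in code helpful, here’s a minimal, purely illustrative Python sketch of that repeat-and-spacer structure (the sequences are invented, not real S. thermophilus DNA, and real CRISPR analysis uses dedicated bioinformatics tools rather than string splitting),

```python
# Toy illustration of a CRISPR array: a short repeat sequence
# interleaved with unique "spacer" sequences captured from phages.
# The repeat and spacers below are invented for demonstration only.

REPEAT = "GTTTTAGAGCTATGCT"  # hypothetical repeat unit

crispr_array = (
    "GTTTTAGAGCTATGCT" "ACGTACCGGTTAACCG"   # spacer 1 (from phage A)
    "GTTTTAGAGCTATGCT" "TTGCAGGCCTATAGCC"   # spacer 2 (from phage B)
    "GTTTTAGAGCTATGCT" "CCATTGGACGTTAGCA"   # spacer 3 (from phage C)
    "GTTTTAGAGCTATGCT"
)

def extract_spacers(array: str, repeat: str) -> list[str]:
    """Split a CRISPR array on the repeat and return the spacers between copies."""
    pieces = array.split(repeat)
    # Drop the empty strings left by leading/trailing repeats.
    return [p for p in pieces if p]

if __name__ == "__main__":
    for i, spacer in enumerate(extract_spacers(crispr_array, REPEAT), start=1):
        print(f"spacer {i}: {spacer}")
```

Each spacer is effectively a record of a phage the culture has already fought off, which is why Barrangou and Horvath could connect the CRISPR content of different strains to their phage resistance.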

One last bit before getting to the hornless cattle: scientist Yi Li has a November 15, 2018 posting on the GLP website about his work with gene editing and food crops,

I’m a plant geneticist and one of my top priorities is developing tools to engineer woody plants such as citrus trees that can resist the greening disease, Huanglongbing (HLB), which has devastated these trees around the world. First detected in Florida in 2005, the disease has decimated the state’s US$9 billion citrus crop, leading to a 75 percent decline in its orange production in 2017. Because citrus trees take five to 10 years before they produce fruits, our new technique – which has been nominated by many editors-in-chief as one of the groundbreaking approaches of 2017 that has the potential to change the world – may accelerate the development of non-GMO citrus trees that are HLB-resistant.

Genetically modified vs. gene edited

You may wonder why the plants we create with our new DNA editing technique are not considered GMO? It’s a good question.

Genetically modified refers to plants and animals that have been altered in a way that wouldn’t have arisen naturally through evolution. A very obvious example of this involves transferring a gene from one species to another to endow the organism with a new trait – like pest resistance or drought tolerance.

But in our work, we are not cutting and pasting genes from animals or bacteria into plants. We are using genome editing technologies to introduce new plant traits by directly rewriting the plants’ genetic code.

This is faster and more precise than conventional breeding, is less controversial than GMO techniques, and can shave years or even decades off the time it takes to develop new crop varieties for farmers.

There is also another incentive to opt for using gene editing to create designer crops. On March 28, 2018, U.S. Secretary of Agriculture Sonny Perdue announced that the USDA wouldn’t regulate new plant varieties developed with new technologies like genome editing that would yield plants indistinguishable from those developed through traditional breeding methods. By contrast, a plant that includes a gene or genes from another organism, such as bacteria, is considered a GMO. This is another reason why many researchers and companies prefer using CRISPR in agriculture whenever it is possible.

As the Gastropod’casters note, there’s more than one side to the gene editing story and not everyone is comfortable with the notion of cavalierly changing genetic codes when so much is still unknown.

Hornless cattle update

First mentioned here in a November 28, 2018 posting, hornless cattle have been in the news again. From an October 7, 2019 news item on ScienceDaily,

For the past two years, researchers at the University of California, Davis, have been studying six offspring of a dairy bull, genome-edited to prevent it from growing horns. This technology has been proposed as an alternative to dehorning, a common management practice performed to protect other cattle and human handlers from injuries.

UC Davis scientists have just published their findings in the journal Nature Biotechnology. They report that none of the bull’s offspring developed horns, as expected, and blood work and physical exams of the calves found they were all healthy. The researchers also sequenced the genomes of the calves and their parents and analyzed these genomic sequences, looking for any unexpected changes.

An October 7, 2019 UC Davis news release (also on EurekAlert), which originated the news item, provides more detail about the research (I have checked the UC Davis website here and the October 2019 update appears to be the latest available publicly as of February 5, 2020),

All data were shared with the U.S. Food and Drug Administration. Analysis by FDA scientists revealed a fragment of bacterial DNA, used to deliver the hornless trait to the bull, had integrated alongside one of the two hornless genetic variants, or alleles, that were generated by genome-editing in the bull. UC Davis researchers further validated this finding.

“Our study found that two calves inherited the naturally-occurring hornless allele and four calves additionally inherited a fragment of bacterial DNA, known as a plasmid,” said corresponding author Alison Van Eenennaam, with the UC Davis Department of Animal Science.

Plasmid integration can be addressed by screening and selection, in this case, selecting the two offspring of the genome-edited hornless bull that inherited only the naturally occurring allele.

“This type of screening is routinely done in plant breeding where genome editing frequently involves a step that includes a plasmid integration,” said Van Eenennaam.

Van Eenennaam said the plasmid does not harm the animals, but the integration technically made the genome-edited bull a GMO, because it contained foreign DNA from another species, in this case a bacterial plasmid.

“We’ve demonstrated that healthy hornless calves with only the intended edit can be produced, and we provided data to help inform the process for evaluating genome-edited animals,” said Van Eenennaam. “Our data indicates the need to screen for plasmid integration when they’re used in the editing process.”

Since the original work in 2013, initiated by the Minnesota-based company Recombinetics, new methods have been developed that no longer use donor template plasmid or other extraneous DNA sequence to bring about introgression of the hornless allele.

Scientists did not observe any other unintended genomic alterations in the calves, and all animals remained healthy during the study period. Neither the bull, nor the calves, entered the food supply as per FDA guidance for genome-edited livestock.

WHY THE NEED FOR HORNLESS COWS?

Many dairy breeds naturally grow horns. But on dairy farms, the horns are typically removed, or the calves “disbudded” at a young age. Animals that don’t have horns are less likely to harm animals or dairy workers and have fewer aggressive behaviors. The dehorning process is unpleasant and has implications for animal welfare. Van Eenennaam said genome-editing offers a pain-free genetic alternative to removing horns by introducing a naturally occurring genetic variant, or allele, that is present in some breeds of beef cattle such as Angus.
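
Before the citation, a note on what Van Eenennaam’s ‘screening and selection’ amounts to in practice: each calf is genotyped to check (a) that it inherited the intended hornless (polled) allele and (b) that none of the bacterial plasmid used in the original editing came along with it. Here’s a toy Python sketch of that selection logic; the sequences and calf records are invented for illustration, and the real work relies on PCR assays and whole-genome sequencing as described in the paper,

```python
# Toy sketch of post-editing screening: keep only offspring that carry
# the intended hornless (polled) allele and no plasmid backbone DNA.
# All sequences and records below are invented for illustration.

POLLED_ALLELE_MARKER = "AAGGCTTCAACC"   # hypothetical marker for the polled allele
PLASMID_BACKBONE = "CTGCAGGTCGAC"       # hypothetical plasmid backbone signature

calves = {
    "calf_1": "...AAGGCTTCAACC...",                 # polled allele only
    "calf_2": "...AAGGCTTCAACC...CTGCAGGTCGAC...",  # polled allele plus plasmid
    "calf_3": "...AAGGCTTCAACC...",                 # polled allele only
}

def keep_for_breeding(genome: str) -> bool:
    """True if the genome carries the polled marker and no plasmid backbone."""
    return POLLED_ALLELE_MARKER in genome and PLASMID_BACKBONE not in genome

selected = [name for name, seq in calves.items() if keep_for_breeding(seq)]
print("selected for breeding:", selected)   # calf_1 and calf_3 in this toy example
```

In the actual study, a screen like this is what separated the two calves carrying only the naturally occurring allele from the four that also inherited the plasmid fragment.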

Here’s a link to and a citation for the paper,

Genomic and phenotypic analyses of six offspring of a genome-edited hornless bull by Amy E. Young, Tamer A. Mansour, Bret R. McNabb, Joseph R. Owen, Josephine F. Trott, C. Titus Brown & Alison L. Van Eenennaam. Nature Biotechnology (2019) DOI: https://doi.org/10.1038/s41587-019-0266-0 Published 07 October 2019

This paper is open access.

Emerging technology and the law

I have three news bits about legal issues that are arising as a consequence of emerging technologies.

Deep neural networks, art, and copyright

Caption: The rise of automated art opens new creative avenues, coupled with new problems for copyright protection. Credit: Alexander Mordvintsev, Christopher Olah and Mike Tyka

Presumably this artwork is a demonstration of automated art, although they never really do explain how in the news item/news release. An April 26, 2017 news item on ScienceDaily announces research into copyright and the latest in using neural networks to create art,

In 1968, sociologist Jean Baudrillard wrote on automatism that “contained within it is the dream of a dominated world […] that serves an inert and dreamy humanity.”

With the growing popularity of Deep Neural Networks (DNN’s), this dream is fast becoming a reality.

Dr. Jean-Marc Deltorn, researcher at the Centre d’études internationales de la propriété intellectuelle in Strasbourg, argues that we must remain a responsive and responsible force in this process of automation — not inert dominators. As he demonstrates in a recent Frontiers in Digital Humanities paper, the dream of automation demands a careful study of the legal problems linked to copyright.

An April 26, 2017 Frontiers (publishing) news release on EurekAlert, which originated the news item, describes the research in more detail,

For more than half a century, artists have looked to computational processes as a way of expanding their vision. DNN’s are the culmination of this cross-pollination: by learning to identify a complex number of patterns, they can generate new creations.

These systems are made up of complex algorithms modeled on the transmission of signals between neurons in the brain.

DNN creations rely in equal measure on human inputs and the non-human algorithmic networks that process them.

Inputs are fed into the system, which is layered. Each layer provides an opportunity for a more refined knowledge of the inputs (shape, color, lines). Neural networks compare actual outputs to expected ones, and correct the predictive error through repetition and optimization. They train their own pattern recognition, thereby optimizing their learning curve and producing increasingly accurate outputs.

The deeper the layers are, the higher the level of abstraction. The highest layers are able to identify the contents of a given input with reasonable accuracy, after extended periods of training.

Creation thus becomes increasingly automated through what Deltorn calls “the arcane traceries of deep architecture”. The results are sufficiently abstracted from their sources to produce original creations that have been exhibited in galleries, sold at auction and performed at concerts.

The originality of DNN’s is a combined product of technological automation on one hand, human inputs and decisions on the other.

DNN’s are gaining popularity. Various platforms (such as DeepDream) now allow internet users to generate their very own new creations. This popularization of the automation process calls for a comprehensive legal framework that ensures a creator’s economic and moral rights with regards to his work – copyright protection.

Form, originality and attribution are the three requirements for copyright. And while DNN creations satisfy the first of these three, the claim to originality and attribution will depend largely on a given country’s legislation and on the traceability of the human creator.

Legislation usually sets a low threshold to originality. As DNN creations could in theory be able to create an endless number of riffs on source materials, the uncurbed creation of original works could inflate the existing number of copyright protections.

Additionally, a small number of national copyright laws confers attribution to what UK legislation defines loosely as “the person by whom the arrangements necessary for the creation of the work are undertaken.” In the case of DNN’s, this could mean anybody from the programmer to the user of a DNN interface.

Combined with an overly supple take on originality, this view on attribution would further increase the number of copyrightable works.

The risk, in both cases, is that artists will be less willing to publish their own works, for fear of infringement of DNN copyright protections.

In order to promote creativity – one seminal aim of copyright protection – the issue must be limited to creations that manifest a personal voice “and not just the electric glint of a computational engine,” to quote Deltorn. A delicate act of discernment.

DNN’s promise new avenues of creative expression for artists – with potential caveats. Copyright protection – a “catalyst to creativity” – must be contained. Many of us gently bask in the glow of an increasingly automated form of technology. But if we want to safeguard the ineffable quality that defines much art, it might be a good idea to hone in more closely on the differences between the electric and the creative spark.

This research is and will be part of a broader Frontiers Research Topic collection of articles on Deep Learning and Digital Humanities.
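
For readers who want a more concrete feel for what the release means by layers correcting ‘the predictive error through repetition and optimization,’ here’s a minimal NumPy sketch of that training loop: a tiny two-layer network repeatedly compares its outputs to the expected ones and nudges its weights to shrink the gap. It’s a bare-bones illustration of the learning idea only, nothing like the deep networks behind DeepDream-style art,

```python
# Minimal illustration of the training loop described above:
# feed inputs through layers, compare actual outputs to expected ones,
# and reduce the predictive error by gradient descent, repeated many times.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR, a simple pattern a single layer cannot capture.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights (and biases): input -> hidden -> output.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 1.0
for step in range(10000):
    # Forward pass: each layer re-describes the input at a higher level.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Compare actual outputs to the expected ones (the "predictive error").
    error = output - y

    # Backward pass: nudge weights and biases to shrink the error, then repeat.
    grad_out = error * output * (1 - output)
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= learning_rate * hidden.T @ grad_out
    b2 -= learning_rate * grad_out.sum(axis=0, keepdims=True)
    W1 -= learning_rate * X.T @ grad_hid
    b1 -= learning_rate * grad_hid.sum(axis=0, keepdims=True)

# After many repetitions the outputs should sit close to [0, 1, 1, 0].
print(np.round(output, 2))
```

The ‘deeper’ networks the release describes simply stack many more such layers, which is where the higher levels of abstraction – and the automated creations at issue here – come from.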

Here’s a link to and a citation for the paper,

Deep Creations: Intellectual Property and the Automata by Jean-Marc Deltorn. Front. Digit. Humanit., 01 February 2017 | https://doi.org/10.3389/fdigh.2017.00003

This paper is open access.

Conference on governance of emerging technologies

I received an April 17, 2017 notice via email about this upcoming conference. Here’s more from the Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics webpage,

The Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics

held at the new Beus Center for Law & Society in Phoenix, AZ

May 17-19, 2017!

Call for Abstracts – Now Closed

The conference will consist of plenary and session presentations and discussions on regulatory, governance, legal, policy, social and ethical aspects of emerging technologies, including (but not limited to) nanotechnology, synthetic biology, gene editing, biotechnology, genomics, personalized medicine, human enhancement technologies, telecommunications, information technologies, surveillance technologies, geoengineering, neuroscience, artificial intelligence, and robotics. The conference is premised on the belief that there is much to be learned and shared from and across the governance experience and proposals for these various emerging technologies.

Keynote Speakers:

Gillian Hadfield, Richard L. and Antoinette Schamoi Kirtland Professor of Law and Professor of Economics, USC [University of Southern California] Gould School of Law

Shobita Parthasarathy, Associate Professor of Public Policy and Women’s Studies; Director, Science, Technology, and Public Policy Program, University of Michigan

Stuart Russell, Professor at [University of California] Berkeley, is a computer scientist known for his contributions to artificial intelligence

Craig Shank, Vice President for Corporate Standards Group in Microsoft’s Corporate, External and Legal Affairs (CELA)

Plenary Panels:

Innovation – Responsible and/or Permissionless

Ellen-Marie Forsberg, Senior Researcher/Research Manager at Oslo and Akershus University College of Applied Sciences

Adam Thierer, Senior Research Fellow with the Technology Policy Program at the Mercatus Center at George Mason University

Wendell Wallach, Consultant, ethicist, and scholar at Yale University’s Interdisciplinary Center for Bioethics

 Gene Drives, Trade and International Regulations

Greg Kaebnick, Director, Editorial Department; Editor, Hastings Center Report; Research Scholar, Hastings Center

Jennifer Kuzma, Goodnight-North Carolina GlaxoSmithKline Foundation Distinguished Professor in Social Sciences in the School of Public and International Affairs (SPIA) and co-director of the Genetic Engineering and Society (GES) Center at North Carolina State University

Andrew Maynard, Senior Sustainability Scholar, Julie Ann Wrigley Global Institute of Sustainability; Director, Risk Innovation Lab; Professor, School for the Future of Innovation in Society, Arizona State University

Gary Marchant, Regents’ Professor of Law; Faculty Director and Faculty Fellow, Center for Law, Science & Innovation, Arizona State University

Marc Saner, Inaugural Director of the Institute for Science, Society and Policy, and Associate Professor, University of Ottawa Department of Geography

Big Data

Anupam Chander, Martin Luther King, Jr. Professor of Law and Director, California International Law Center, UC Davis School of Law

Pilar Ossorio, Professor of Law and Bioethics, University of Wisconsin, School of Law and School of Medicine and Public Health; Morgridge Institute for Research, Ethics Scholar-in-Residence

George Poste, Chief Scientist, Complex Adaptive Systems Initiative (CASI) (http://www.casi.asu.edu/), Regents’ Professor and Del E. Webb Chair in Health Innovation, Arizona State University

Emily Shuckburgh, climate scientist and deputy head of the Polar Oceans Team at the British Antarctic Survey, University of Cambridge

 Responsible Development of AI

Spring Berman, Ira A. Fulton Schools of Engineering, Arizona State University

John Havens, The IEEE [Institute of Electrical and Electronics Engineers] Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems

Subbarao Kambhampati, Senior Sustainability Scientist, Julie Ann Wrigley Global Institute of Sustainability, Professor, School of Computing, Informatics and Decision Systems Engineering, Ira A. Fulton Schools of Engineering, Arizona State University

Wendell Wallach, Consultant, Ethicist, and Scholar at Yale University’s Interdisciplinary Center for Bioethics

Existential and Catastrophic Ricks [sic]

Tony Barrett, Co-Founder and Director of Research of the Global Catastrophic Risk Institute

Haydn Belfield,  Academic Project Administrator, Centre for the Study of Existential Risk at the University of Cambridge

Margaret E. Kosal, Associate Director, Sam Nunn School of International Affairs, Georgia Institute of Technology

Catherine Rhodes, Academic Project Manager, Centre for the Study of Existential Risk (CSER), University of Cambridge

These are the panels of interest to me; there are others on the conference homepage.

Here’s some information from the Conference registration webpage,

Early Bird Registration – $50 off until May 1! Enter discount code: earlybirdGETs50

New: Group Discount – Register 2+ attendees together and receive an additional 20% off for all group members!

Click Here to Register!

Conference registration fees are as follows:

  • General (non-CLE) Registration: $150.00
  • CLE Registration: $350.00
  • *Current Student / ASU Law Alumni Registration: $50.00
  • ^Cybersecurity sessions only (May 19): $100 CLE / $50 General / Free for students (registration info coming soon)

There you have it.

Neuro-techno future laws

I’m pretty sure this isn’t the first exploration of potential legal issues arising from research into neuroscience, although it’s the first one I’ve stumbled across. From an April 25, 2017 news item on phys.org,

New human rights laws to prepare for advances in neurotechnology that put the ‘freedom of the mind’ at risk have been proposed today in the open access journal Life Sciences, Society and Policy.

The authors of the study suggest four new human rights laws could emerge in the near future to protect against exploitation and loss of privacy. The four laws are: the right to cognitive liberty, the right to mental privacy, the right to mental integrity and the right to psychological continuity.

An April 25, 2017 Biomed Central news release on EurekAlert, which originated the news item, describes the work in more detail,

Marcello Ienca, lead author and PhD student at the Institute for Biomedical Ethics at the University of Basel, said: “The mind is considered to be the last refuge of personal freedom and self-determination, but advances in neural engineering, brain imaging and neurotechnology put the freedom of the mind at risk. Our proposed laws would give people the right to refuse coercive and invasive neurotechnology, protect the privacy of data collected by neurotechnology, and protect the physical and psychological aspects of the mind from damage by the misuse of neurotechnology.”

Advances in neurotechnology, such as sophisticated brain imaging and the development of brain-computer interfaces, have led to these technologies moving away from a clinical setting and into the consumer domain. While these advances may be beneficial for individuals and society, there is a risk that the technology could be misused and create unprecedented threats to personal freedom.

Professor Roberto Andorno, co-author of the research, explained: “Brain imaging technology has already reached a point where there is discussion over its legitimacy in criminal court, for example as a tool for assessing criminal responsibility or even the risk of reoffending. Consumer companies are using brain imaging for ‘neuromarketing’, to understand consumer behaviour and elicit desired responses from customers. There are also tools such as ‘brain decoders’ which can turn brain imaging data into images, text or sound. All of these could pose a threat to personal freedom which we sought to address with the development of four new human rights laws.”

The authors explain that as neurotechnology improves and becomes commonplace, there is a risk that the technology could be hacked, allowing a third-party to ‘eavesdrop’ on someone’s mind. In the future, a brain-computer interface used to control consumer technology could put the user at risk of physical and psychological damage caused by a third-party attack on the technology. There are also ethical and legal concerns over the protection of data generated by these devices that need to be considered.

International human rights laws make no specific mention to neuroscience, although advances in biomedicine have become intertwined with laws, such as those concerning human genetic data. Similar to the historical trajectory of the genetic revolution, the authors state that the on-going neurorevolution will force a reconceptualization of human rights laws and even the creation of new ones.

Marcello Ienca added: “Science-fiction can teach us a lot about the potential threat of technology. Neurotechnology featured in famous stories has in some cases already become a reality, while others are inching ever closer, or exist as military and commercial prototypes. We need to be prepared to deal with the impact these technologies will have on our personal freedom.”

Here’s a link to and a citation for the paper,

Towards new human rights in the age of neuroscience and neurotechnology by Marcello Ienca and Roberto Andorno. Life Sciences, Society and Policy 2017, 13:5 DOI: 10.1186/s40504-017-0050-1 Published: 26 April 2017

©  The Author(s). 2017

This paper is open access.

Who would buy foods that were nanotechnology-enabled or genetically modified?

A research survey conducted by scientists at North Carolina State University (NCSU) and the University of Minnesota suggests that under certain conditions, consumers in the US would be likely to purchase nanotechnology-enabled or genetically modified food. From a Dec. 2, 2014 news item on Nanowerk,

New research from North Carolina State University and the University of Minnesota shows that the majority of consumers will accept the presence of nanotechnology or genetic modification (GM) technology in foods – but only if the technology enhances the nutrition or improves the safety of the food.

A Dec. 2, 2014 NCSU news release (also on EurekAlert), which originated the news item, notes that while many people will pay more to avoid nanotechnology-enabled or genetically modified food, there is an exception of sorts,

“In general, people are willing to pay more to avoid GM or nanotech in foods, and people were more averse to GM tech than to nanotech,” says Dr. Jennifer Kuzma, senior author of a paper on the research and co-director of the Genetic Engineering in Society Center at NC State. “However, it’s not really that simple. There were some qualifiers, indicating that many people would be willing to buy GM or nanotech in foods if there were health or safety benefits.”

The researchers conducted a nationally representative survey of 1,117 U.S. consumers. Participants were asked to answer an array of questions that explored their willingness to purchase foods that contained GM tech and foods that contained nanotech. The questions also explored the price of the various foods and whether participants would buy foods that contained nanotech or GM tech if the foods had enhanced nutrition, improved taste, improved food safety, or if the production of the food had environmental benefits.

The researchers found that survey participants could be broken into four groups.

Eighteen percent of participants belonged to a group labeled the “new technology rejecters,” which would not buy GM or nanotech foods under any circumstances. Nineteen percent of participants belonged to a group labeled the “technology averse,” which would buy GM or nanotech foods only if those products conveyed food safety benefits. Twenty-three percent of participants were “price oriented,” basing their shopping decisions primarily on the cost of the food – regardless of the presence of GM or nanotech. And 40 percent of participants were “benefit oriented,” meaning they would buy GM or nanotech foods if the foods had enhanced nutrition or food safety.

“This tells us that GM or nanotech food products have greater potential to be viable in the marketplace if companies focus on developing products that have safety and nutrition benefits – because a majority of consumers would be willing to buy those products,” Kuzma says.

“From a policy standpoint, it also argues that GM and nanotech foods should be labeled, so that the technology rejecters can avoid them,” Kuzma adds.

Here’s a link to and a citation for the paper,

Heterogeneous Consumer Preferences for Nanotechnology and Genetic-modification Technology in Food Products by Chengyan Yue, Shuoli Zhao, and Jennifer Kuzma. Journal of Agricultural Economics DOI: 10.1111/1477-9552.12090 Article first published online: 12 NOV 2014

© 2014 The Agricultural Economics Society

This paper is behind a paywall.

I have mentioned Jennifer Kuzma’s work previously in an Oct. 29, 2013 posting titled, Nano info on food labels wanted by public in the US?