Tag Archives: University of California at Davis

Agriculture and gene editing … shades of the AquAdvantage salmon

Salmon are not the only food animals being genetically altered (more about that later in this post); we can now add cows, pigs, and more.

This November 15, 2018 article by Candice Choi on the Huffington Post website illustrates some of the excitement and terror associated with gene editing farm animals,

A company wants to alter farm animals by adding and subtracting genetic traits in a lab. It sounds like science fiction, but Recombinetics sees opportunity for its technology in the livestock industry.

But first, it needs to convince regulators that gene-edited animals are no different than conventionally bred ones. To make the technology appealing and to ease any fears that it may be creating Franken-animals, [emphasis mine] Recombinetics isn’t starting with productivity. Instead, it’s introducing gene-edited traits as a way to ease animal suffering.

“It’s a better story to tell,” said Tammy Lee, CEO of the St. Paul, Minnesota-based company.

For instance, animal welfare advocates have long criticized the way farmers use caustic paste or hot irons to dehorn dairy cows so the animals don’t harm each other. Recombinetics snips out the gene for growing horns so the procedure is unnecessary. [emphases mine]

Last year, a bull gene-edited by Recombinetics to have the dominant hornless trait sired several offspring. All were born hornless as expected, and are being raised at the University of California, Davis. Once the female offspring starts lactating, its milk will be tested for any abnormalities.

Another Recombinetics project: castration-free pigs.

When male piglets go through puberty, their meat can take on an unpleasant odour, something known as “boar taint.” To combat it, farmers castrate pigs, a procedure animal welfare advocates say is commonly performed without painkillers. Editing genes so that pigs never go through puberty would make castration unnecessary.

Also in development are dairy cows that could withstand higher temperatures, so the animals don’t suffer in hotter climates. [emphasis mine]

…

Before food from gene-edited animals can land on dinner tables, however, Recombinetics has to overcome any public unease about the technology.

Beyond worries about “playing God,” it may be an uncomfortable reminder of how modern food production already treats animals, said Paul Thompson, a professor of agriculture at Michigan State University.

“There’s an ethical question that’s been debated for at least the last 20 years, of whether you need to change the animal or change the system,” Thompson said.

Support for gene editing will also likely depend on how the technology is used: whether it’s for animal welfare, productivity or disease resistance. In August, a Pew study found 43 per cent of Americans supported genetically engineered animals for more nutritious meat.

Choi has written an interesting article, which includes a picture of the hornless cows. One note: Choi makes reference to a milk glut. As far as I’m aware, that’s not the case in Canada (at this time), but it is a problem in the US, where in 2015 (?) farmers dumped some 43 million gallons of milk (October 12, 2016 article by Martha C. White for Money magazine).

As for the salmon, I’ve covered that story a few times, from its journey to being approved for human consumption in Canada (my May 20, 2016 posting) to the discovery in 2017 that the genetically modified product, AquAdvantage salmon, had been introduced into the market (from my Sept. 13, 2017 posting; scroll down about 40% of the way),

“Since the 2016 approval, 4.5 tonnes of AquAdvantage salmon has been sold in Canada according to an Aug. 8, 2017 article by Sima Shakeri for Huffington Post …”

After decades of trying to get approval in North America, genetically modified Atlantic salmon has been sold to consumers in Canada.

AquaBounty Technologies, an American company that produces the Atlantic salmon, confirmed it had sold 4.5 tonnes of the modified fish on August 4 [2017], the Scientific American reported.

The fish have been engineered with a growth hormone gene from Chinook salmon to grow faster than regular salmon and require less food. They take about 18 months to reach market size, which is much quicker than the 30 months or so for conventional salmon.

The Washington Post wrote AquaBounty’s salmon also contains a gene from the ocean pout that makes the salmon produce the growth hormone gene all-year-round.

The company produces the eggs in a facility in P.E.I. [Prince Edward Island; a province in Canada], which is currently being expanded, and then they’re shipped to Panama where the fish are raised.

….

There was a bit of a kerfuffle about the whole affair but it seems Canadians have gone on to embrace the genetically modified product. At least that’s Christine Blank’s perspective in her Sept. 13, 2018 article (Canada, US embrace AquAdvantage GMO salmon, Brazil and China may be next) for the Genetic Literacy Project website,

Genetically modified salmon firm AquaBounty has found “very enthusiastic” buyers in Canada, according to president and CEO Ronald Stotish.

The first sale of the Maynard, Massachusetts, U.S.A.-based firm’s AquAdvantage salmon was made last June [2017], when unnamed buyers in Canada bought five metric tons at the going rate of traditional farmed Atlantic salmon, according to the company. Since then, AquaBounty has sold 10 additional metric tons of its AquAdvantage salmon to buyers in Canada.

Meanwhile, Stotish revealed that AquAdvantage will be sold in the U.S. through established distributors.

“Once [AquaBounty salmon] is established in the market, the option for branding as a ‘sustainably produced’ food item can be considered,” he told investors.

Alex Gillis’ June 5, 2018 article for Maclean’s magazine suggests that Canadians may be a bit more doubtful about GM (genetically modified) salmon than Stotish seems to believe,

An Ipsos Reid poll conducted for the Canadian Biotechnology Action Network in 2015 suggested that Canadians are concerned about GM foods, in spite of government assurances that they’re safe. About 60 per cent of respondents opposed genetically modifying crops and animals for food; nearly half supported a ban on all GM food. More than 20 years of surveys indicate that the vast majority of Canadians want to know when they’re eating GMOs. Fully 88 per cent of those polled in the 2015 survey said they want mandatory labelling.

Their concern hasn’t escaped the notice of those who raise and sell much of the salmon consumed in this country. Five years ago, Marine Harvest, one of the world’s largest producers of farmed salmon, called for labelling of GMOs. Today, it says that it doesn’t grow, sell or research GM salmon, a policy it shares with major salmon producers in Canada. And most big grocery retailers have stated they don’t want GM salmon. When contacted by Maclean’s for this story, Metro, Sobeys, Wal-Mart and Loblaws—four of Canada’s five largest food retailers—declared that none of AquaBounty’s GM salmon from 2017 was sold in their stores, saying neither Sea Delight Canada nor Montreal Fish Co. supplied them with Atlantic salmon at the time.

“I’m happy to report that we don’t source salmon from these two companies,” says Geneviève Grégoire, communications adviser with Metro Richelieu Inc., which operates or supplies 948 food stores in Quebec and Ontario, including Metro, Super C, Food Basics, Adonis and Première Moisson. “As we said before, we didn’t and will not sell GM Atlantic salmon.”

If you’re looking for a more comprehensive and critical examination of the issue, read Lucy Sharratt’s Sept. 1, 2018 article for the Canadian Centre for Policy Alternatives (CCPA).

Gold’s origin in the universe due to cosmic collision

A hypothesis for gold’s origins was first mentioned here in a May 26, 2016 posting,

The link between this research and my side project on gold nanoparticles is a bit tenuous but this work on the origins for gold and other precious metals being found in the stars is so fascinating and I’m determined to find a connection.

An artist’s impression of two neutron stars colliding. (Credit: Dana Berry / Skyworks Digital, Inc.) Courtesy: Kavli Foundation

From a May 19, 2016 news item on phys.org,

The origin of many of the most precious elements on the periodic table, such as gold, silver and platinum, has perplexed scientists for more than six decades. Now a recent study has an answer, evocatively conveyed in the faint starlight from a distant dwarf galaxy.

In a roundtable discussion, published today [May 19, 2016?], The Kavli Foundation spoke to two of the researchers behind the discovery about why the source of these heavy elements, collectively called “r-process” elements, has been so hard to crack.

From the Spring 2016 Kavli Foundation webpage hosting the “Galactic ‘Gold Mine’ Explains the Origin of Nature’s Heaviest Elements” Roundtable,

Astronomers studying a galaxy called Reticulum II have just discovered that its stars contain whopping amounts of these metals—collectively known as “r-process” elements (See “What is the R-Process?”). Of the 10 dwarf galaxies that have been similarly studied so far, only Reticulum II bears such strong chemical signatures. The finding suggests some unusual event took place billions of years ago that created ample amounts of heavy elements and then strew them throughout the galaxy’s reservoir of gas and dust. This r-process-enriched material then went on to form Reticulum II’s standout stars.

Based on the new study, from a team of researchers at the Kavli Institute at the Massachusetts Institute of Technology, the unusual event in Reticulum II was likely the collision of two, ultra-dense objects called neutron stars. Scientists have hypothesized for decades that these collisions could serve as a primary source for r-process elements, yet the idea had lacked solid observational evidence. Now armed with this information, scientists can further hope to retrace the histories of galaxies based on the contents of their stars, in effect conducting “stellar archeology.”

Researchers have confirmed the hypothesis, according to an Oct. 16, 2017 news item on phys.org,

Gold’s origin in the Universe has finally been confirmed, after a gravitational wave source was seen and heard for the first time ever by an international collaboration of researchers, with astronomers at the University of Warwick playing a leading role.

Members of Warwick’s Astronomy and Astrophysics Group, Professor Andrew Levan, Dr Joe Lyman, Dr Sam Oates and Dr Danny Steeghs, led observations which captured the light of two colliding neutron stars, shortly after being detected through gravitational waves – perhaps the most eagerly anticipated phenomenon in modern astronomy.

Marina Koren’s Oct. 16, 2017 article for The Atlantic presents a richly evocative view (Note: Links have been removed),

Some 130 million years ago, in another galaxy, two neutron stars spiraled closer and closer together until they smashed into each other in spectacular fashion. The violent collision produced gravitational waves, cosmic ripples powerful enough to stretch and squeeze the fabric of the universe. There was a brief flash of light a million trillion times as bright as the sun, and then a hot cloud of radioactive debris. The afterglow hung for several days, shifting from bright blue to dull red as the ejected material cooled in the emptiness of space.

Astronomers detected the aftermath of the merger on Earth on August 17. For the first time, they could see the source of universe-warping forces Albert Einstein predicted a century ago. Unlike with black-hole collisions, they had visible proof, and it looked like a bright jewel in the night sky.

But the merger of two neutron stars is more than fireworks. It’s a factory.

Using infrared telescopes, astronomers studied the spectra—the chemical composition of cosmic objects—of the collision and found that the plume ejected by the merger contained a host of newly formed heavy chemical elements, including gold, silver, platinum, and others. Scientists estimate the amount of cosmic bling totals about 10,000 Earth-masses of heavy elements.

I’m not sure exactly what this image signifies, but it did accompany Koren’s article, so presumably it’s a representation of colliding neutron stars,

NSF / LIGO / Sonoma State University /A. Simonnet. Downloaded from: https://www.theatlantic.com/science/archive/2017/10/the-making-of-cosmic-bling/543030/

An Oct. 16, 2017 University of Warwick press release (also on EurekAlert), which originated the news item on phys.org, provides more detail,

Huge amounts of gold, platinum, uranium and other heavy elements were created in the collision of these compact stellar remnants, and were pumped out into the universe – unlocking the mystery of how gold on wedding rings and jewellery is originally formed.

The collision produced as much gold as the mass of the Earth. [emphasis mine]

This discovery has also confirmed conclusively that short gamma-ray bursts are directly caused by the merging of two neutron stars.

The neutron stars were very dense – as heavy as our Sun yet only 10 kilometres across – and they collided with each other 130 million years ago, when dinosaurs roamed the Earth, in a relatively old galaxy that was no longer forming many stars.

They drew towards each other over millions of light years, and revolved around each other increasingly quickly as they got closer – eventually spinning around each other five hundred times per second.

Their merging sent ripples through the fabric of space and time – and these ripples are the elusive gravitational waves spotted by the astronomers.

The gravitational waves were detected by the Advanced Laser Interferometer Gravitational-Wave Observatory (Adv-LIGO) on 17 August this year [2017], with a short duration gamma-ray burst detected by the Fermi satellite just two seconds later.

This led to a flurry of observations as night fell in Chile, with a first report of a new source from the Swope 1m telescope.

Longstanding collaborators Professor Levan and Professor Nial Tanvir (from the University of Leicester) used the facilities of the European Southern Observatory to pinpoint the source in infrared light.

Professor Levan’s team was the first one to get observations of this new source with the Hubble Space Telescope. It comes from a galaxy called NGC 4993, 130 million light years away.

Andrew Levan, Professor in the Astronomy & Astrophysics group at the University of Warwick, commented: “Once we saw the data, we realised we had caught a new kind of astrophysical object. This ushers in the era of multi-messenger astronomy, it is like being able to see and hear for the first time.”

Dr Joe Lyman, who was observing at the European Southern Observatory at the time, was the first to alert the community that the source was unlike any seen before.

He commented: “The exquisite observations obtained in a few days showed we were observing a kilonova, an object whose light is powered by extreme nuclear reactions. This tells us that the heavy elements, like the gold or platinum in jewellery are the cinders, forged in the billion degree remnants of a merging neutron star.”

Dr Samantha Oates added: “This discovery has answered three questions that astronomers have been puzzling for decades: what happens when neutron stars merge? What causes the short duration gamma-ray bursts? Where are the heavy elements, like gold, made? In the space of about a week all three of these mysteries were solved.”

Dr Danny Steeghs said: “This is a new chapter in astrophysics. We hope that in the next few years we will detect many more events like this. Indeed, in Warwick we have just finished building a telescope designed to do just this job, and we expect it to pinpoint these sources in this new era of multi-messenger astronomy”.
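As a side note, the press release’s figure of a solar mass squeezed into a sphere roughly 10 kilometres across invites a quick sanity check. Here’s my own back-of-envelope sketch (the solar-mass constant is a standard value, not taken from the release):

```python
# Back-of-envelope check (my own arithmetic, not from the press release):
# how dense is an object with the Sun's mass packed into a ~10 km sphere?
import math

M_SUN = 1.989e30      # kg, mass of the Sun (standard value)
RADIUS = 5.0e3        # m, half of the quoted ~10 km diameter

volume = (4.0 / 3.0) * math.pi * RADIUS**3   # sphere volume, m^3
density = M_SUN / volume                     # kg/m^3

print(f"density ~ {density:.1e} kg/m^3")     # roughly 4e18 kg/m^3
```

That works out to around 4 × 10^18 kg/m³; a teaspoon of such material would weigh billions of tonnes, which is why these objects are called ultra-dense.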

Congratulations to all of the researchers involved in this work!

Many, many research teams were involved. Here’s a sampling of their news releases which focus on their areas of research,

University of the Witwatersrand (South Africa)

https://www.eurekalert.org/pub_releases/2017-10/uotw-wti101717.php

Weizmann Institute of Science (Israel)

https://www.eurekalert.org/pub_releases/2017-10/wios-cns101717.php

Carnegie Institution for Science (US)

https://www.eurekalert.org/pub_releases/2017-10/cifs-dns101217.php

Northwestern University (US)

https://www.eurekalert.org/pub_releases/2017-10/nu-adc101617.php

National Radio Astronomy Observatory (US)

https://www.eurekalert.org/pub_releases/2017-10/nrao-ru101317.php

Max-Planck-Gesellschaft (Germany)

https://www.eurekalert.org/pub_releases/2017-10/m-gwf101817.php

Penn State (Pennsylvania State University; US)

https://www.eurekalert.org/pub_releases/2017-10/ps-stl101617.php

University of California – Davis

https://www.eurekalert.org/pub_releases/2017-10/uoc–cns101717.php

The American Association for the Advancement of Science’s (AAAS) magazine, Science, has published seven papers on this research. Here’s an Oct. 16, 2017 AAAS news release with an overview of the papers,

https://www.eurekalert.org/pub_releases/2017-10/aaft-btf101617.php

I’m sure there are more news releases out there and that there will be many more papers published in many journals, so if this interests you, I encourage you to keep looking.

Two final pieces I’d like to draw your attention to: one answers basic questions and another focuses on how artists knew what to draw when neutron stars collide.

Keith A Spencer’s Oct. 18, 2017 piece on salon.com answers a lot of basic questions for those of us who don’t have a background in astronomy. Here are a couple of examples,

What is a neutron star?

Okay, you know how atoms have protons, neutrons, and electrons in them? And you know how protons are positively charged, and electrons are negatively charged, and neutrons are neutral?

Yeah, I remember that from watching Bill Nye as a kid.

Totally. Anyway, have you ever wondered why the negatively-charged electrons and the positively-charged protons don’t just merge into each other and form a neutral neutron? I mean, they’re sitting there in the atom’s nucleus pretty close to each other. Like, if you had two magnets that close, they’d stick together immediately.

I guess now that you mention it, yeah, it is weird.

Well, it’s because there’s another force deep in the atom that’s preventing them from merging.

It’s really really strong.

The only way to overcome this force is to have a huge amount of matter in a really hot, dense space — basically shove them into each other until they give up and stick together and become a neutron. This happens in very large stars that have been around for a while — the core collapses, and in the aftermath, the electrons in the star are so close to the protons, and under so much pressure, that they suddenly merge. There’s a big explosion and the outer material of the star is sloughed off.

Okay, so you’re saying under a lot of pressure and in certain conditions, some stars collapse and become big balls of neutrons?

Pretty much, yeah.

So why do the neutrons just stick around in a huge ball? Aren’t they neutral? What’s keeping them together? 

Gravity, mostly. But also the strong nuclear force, that aforementioned weird strong force. This isn’t something you’d encounter on a macroscopic scale — the strong force only really works at the type of distances typified by particles in atomic nuclei. And it’s different, fundamentally, than the electromagnetic force, which is what makes magnets attract and repel and what makes your hair stick up when you rub a balloon on it.

So these neutrons in a big ball are bound by gravity, but also sticking together by virtue of the strong nuclear force. 

So basically, the new ball of neutrons is really small, at least, compared to how heavy it is. That’s because the neutrons are all clumped together as if this neutron star is one giant atomic nucleus — which it kinda is. It’s like a giant atom made only of neutrons. If our sun were a neutron star, it would be less than 20 miles wide. It would also not be something you would ever want to get near.

Got it. That means two giant balls of neutrons that weighed like, more than our sun and were only ten-ish miles wide, suddenly smashed into each other, and in the aftermath created a black hole, and we are just now detecting it on Earth?

Exactly. Pretty weird, no?

Spencer does a good job of gradually taking you through increasingly complex explanations.

For those with artistic interests, Neel V. Patel tries to answer a question about how artists knew what to draw when neutron stars collided in his Oct. 18, 2017 piece for Slate.com,

All of these things make this discovery easy to marvel at and somewhat impossible to picture. Luckily, artists have taken up the task of imagining it for us, which you’ve likely seen if you’ve already stumbled on coverage of the discovery. Two bright, furious spheres of light and gas spiraling quickly into one another, resulting in a massive swell of lit-up matter along with light and gravitational waves rippling off speedily in all directions, towards parts unknown. These illustrations aren’t just alluring interpretations of a rare phenomenon; they are, to some extent, the translation of raw data and numbers into a tangible visual that gives scientists and nonscientists alike some way of grasping what just happened. But are these visualizations realistic? Is this what it actually looked like? No one has any idea. Which is what makes the scientific illustrators’ work all the more fascinating.

“My goal is to represent what the scientists found,” says Aurore Simmonet, a scientific illustrator based at Sonoma State University in Rohnert Park, California. Even though she said she doesn’t have a rigorous science background (she certainly didn’t know what a kilonova was before being tasked to illustrate one), she also doesn’t believe that type of experience is an absolute necessity. More critical, she says, is for the artist to have an interest in the subject matter and in learning new things, as well as a capacity to speak directly to scientists about their work.

Illustrators like Simmonet usually start off work on an illustration by asking the scientist what’s the biggest takeaway a viewer should grasp when looking at a visual. Unfortunately, this latest discovery yielded a multitude of papers emphasizing different conclusions and highlights. With so many scientific angles, there’s a stark challenge in trying to cram every important thing into a single drawing.

Clearly, however, the illustrations needed to center around the kilonova. Simmonet loves colors, so she began by discussing with the researchers what kind of color scheme would work best. The smash of two neutron stars lends itself well to deep, vibrant hues. Simmonet and Robin Dienel at the Carnegie Institution for Science elected to use a wide array of colors and drew bright cracking to show pressure forming at the merging. Others, like Luis Calcada at the European Southern Observatory, limited the color scheme in favor of emphasizing the bright moment of collision and the signal waves created by the kilonova.

Animators have even more freedom to show the event, since they have much more than a single frame to play with. The Conceptual Image Lab at NASA’s [US National Aeronautics and Space Administration] Goddard Space Flight Center created a short video about the new findings, and lead animator Brian Monroe says the video he and his colleagues designed shows off the evolution of the entire process: the rising action, climax, and resolution of the kilonova event.

The illustrators try to adhere to what the likely physics of the event entailed, soliciting feedback from the scientists to make sure they’re getting it right. The swirling of gas, the direction of ejected matter upon impact, the reflection of light, the proportions of the objects—all of these things are deliberately framed such that they make scientific sense. …

Do take a look at Patel’s piece, if for no other reason than to see all of the images he has embedded there. You may recognize Aurore Simmonet’s name from the credit line in the second image I have embedded here.

Emerging technology and the law

I have three news bits about legal issues that are arising as a consequence of emerging technologies.

Deep neural networks, art, and copyright

Caption: The rise of automated art opens new creative avenues, coupled with new problems for copyright protection. Credit: Provided by: Alexander Mordvintsev, Christopher Olah and Mike Tyka

Presumably this artwork is a demonstration of automated art, although they never really explain how in the news item/news release. An April 26, 2017 news item on ScienceDaily announces research into copyright and the latest in using neural networks to create art,

In 1968, sociologist Jean Baudrillard wrote on automatism that “contained within it is the dream of a dominated world […] that serves an inert and dreamy humanity.”

With the growing popularity of Deep Neural Networks (DNN’s), this dream is fast becoming a reality.

Dr. Jean-Marc Deltorn, researcher at the Centre d’études internationales de la propriété intellectuelle in Strasbourg, argues that we must remain a responsive and responsible force in this process of automation — not inert dominators. As he demonstrates in a recent Frontiers in Digital Humanities paper, the dream of automation demands a careful study of the legal problems linked to copyright.

An April 26, 2017 Frontiers (publishing) news release on EurekAlert, which originated the news item, describes the research in more detail,

For more than half a century, artists have looked to computational processes as a way of expanding their vision. DNN’s are the culmination of this cross-pollination: by learning to identify a complex number of patterns, they can generate new creations.

These systems are made up of complex algorithms modeled on the transmission of signals between neurons in the brain.

DNN creations rely in equal measure on human inputs and the non-human algorithmic networks that process them.

Inputs are fed into the system, which is layered. Each layer provides an opportunity for a more refined knowledge of the inputs (shape, color, lines). Neural networks compare actual outputs to expected ones, and correct the predictive error through repetition and optimization. They train their own pattern recognition, thereby optimizing their learning curve and producing increasingly accurate outputs.

The deeper the layers are, the higher the level of abstraction. The highest layers are able to identify the contents of a given input with reasonable accuracy, after extended periods of training.

Creation thus becomes increasingly automated through what Deltorn calls “the arcane traceries of deep architecture”. The results are sufficiently abstracted from their sources to produce original creations that have been exhibited in galleries, sold at auction and performed at concerts.

The originality of DNN’s is a combined product of technological automation on one hand, human inputs and decisions on the other.

DNN’s are gaining popularity. Various platforms (such as DeepDream) now allow internet users to generate their very own new creations. This popularization of the automation process calls for a comprehensive legal framework that ensures a creator’s economic and moral rights with regards to his work – copyright protection.

Form, originality and attribution are the three requirements for copyright. And while DNN creations satisfy the first of these three, the claim to originality and attribution will depend largely on a given country’s legislation and on the traceability of the human creator.

Legislation usually sets a low threshold to originality. As DNN creations could in theory be able to create an endless number of riffs on source materials, the uncurbed creation of original works could inflate the existing number of copyright protections.

Additionally, a small number of national copyright laws confers attribution to what UK legislation defines loosely as “the person by whom the arrangements necessary for the creation of the work are undertaken.” In the case of DNN’s, this could mean anybody from the programmer to the user of a DNN interface.

Combined with an overly supple take on originality, this view on attribution would further increase the number of copyrightable works.

The risk, in both cases, is that artists will be less willing to publish their own works, for fear of infringement of DNN copyright protections.

In order to promote creativity – one seminal aim of copyright protection – the issue must be limited to creations that manifest a personal voice “and not just the electric glint of a computational engine,” to quote Deltorn. A delicate act of discernment.

DNN’s promise new avenues of creative expression for artists – with potential caveats. Copyright protection – a “catalyst to creativity” – must be contained. Many of us gently bask in the glow of an increasingly automated form of technology. But if we want to safeguard the ineffable quality that defines much art, it might be a good idea to hone in more closely on the differences between the electric and the creative spark.
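The training loop the release describes — compare actual outputs to expected ones, then correct the predictive error through repetition and optimization — can be sketched in a few lines. This toy example is my own illustration (real deep networks have many layers and millions of parameters); a single linear “neuron” learns y = 2x by gradient descent:

```python
# Toy illustration (mine, not from the paper) of the loop the release
# describes: compare actual output to expected output, then correct the
# predictive error through repetition and optimization.
weight = 0.0
learning_rate = 0.1
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # (input, expected output)

for _ in range(100):                          # repetition
    for x, expected in data:
        actual = weight * x                   # the network's prediction
        error = actual - expected             # predictive error
        weight -= learning_rate * error * x   # gradient-descent correction

print(round(weight, 3))   # → 2.0
```

The weight converges to 2.0, i.e. the system has “learned” the pattern in its training data — the same principle, vastly scaled up and layered, that lets a DNN learn shapes, colors and lines.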

This research is and will be part of a broader Frontiers Research Topic collection of articles on Deep Learning and Digital Humanities.

Here’s a link to and a citation for the paper,

Deep Creations: Intellectual Property and the Automata by Jean-Marc Deltorn. Front. Digit. Humanit., 01 February 2017 | https://doi.org/10.3389/fdigh.2017.00003

This paper is open access.

Conference on governance of emerging technologies

I received an April 17, 2017 notice via email about this upcoming conference. Here’s more from the Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics webpage,

The Fifth Annual Conference on Governance of Emerging Technologies:

Law, Policy and Ethics held at the new

Beus Center for Law & Society in Phoenix, AZ

May 17-19, 2017!

Call for Abstracts – Now Closed

The conference will consist of plenary and session presentations and discussions on regulatory, governance, legal, policy, social and ethical aspects of emerging technologies, including (but not limited to) nanotechnology, synthetic biology, gene editing, biotechnology, genomics, personalized medicine, human enhancement technologies, telecommunications, information technologies, surveillance technologies, geoengineering, neuroscience, artificial intelligence, and robotics. The conference is premised on the belief that there is much to be learned and shared from and across the governance experience and proposals for these various emerging technologies.

Keynote Speakers:

Gillian Hadfield, Richard L. and Antoinette Schamoi Kirtland Professor of Law and Professor of Economics, USC [University of Southern California] Gould School of Law

Shobita Parthasarathy, Associate Professor of Public Policy and Women’s Studies, Director, Science, Technology, and Public Policy Program, University of Michigan

Stuart Russell, Professor at [University of California] Berkeley, is a computer scientist known for his contributions to artificial intelligence

Craig Shank, Vice President for Corporate Standards Group in Microsoft’s Corporate, External and Legal Affairs (CELA)

Plenary Panels:

Innovation – Responsible and/or Permissionless

Ellen-Marie Forsberg, Senior Researcher/Research Manager at Oslo and Akershus University College of Applied Sciences

Adam Thierer, Senior Research Fellow with the Technology Policy Program at the Mercatus Center at George Mason University

Wendell Wallach, Consultant, ethicist, and scholar at Yale University’s Interdisciplinary Center for Bioethics

Gene Drives, Trade and International Regulations

Greg Kaebnick, Director, Editorial Department; Editor, Hastings Center Report; Research Scholar, Hastings Center

Jennifer Kuzma, Goodnight-North Carolina GlaxoSmithKline Foundation Distinguished Professor in Social Sciences in the School of Public and International Affairs (SPIA) and co-director of the Genetic Engineering and Society (GES) Center at North Carolina State University

Andrew Maynard, Senior Sustainability Scholar, Julie Ann Wrigley Global Institute of Sustainability; Director, Risk Innovation Lab; Professor, School for the Future of Innovation in Society, Arizona State University

Gary Marchant, Regents’ Professor of Law; Faculty Director and Faculty Fellow, Center for Law, Science & Innovation, Arizona State University

Marc Saner, Inaugural Director of the Institute for Science, Society and Policy, and Associate Professor, University of Ottawa Department of Geography

Big Data

Anupam Chander, Martin Luther King, Jr. Professor of Law and Director, California International Law Center, UC Davis School of Law

Pilar Ossorio, Professor of Law and Bioethics, University of Wisconsin, School of Law and School of Medicine and Public Health; Morgridge Institute for Research, Ethics Scholar-in-Residence

George Poste, Chief Scientist, Complex Adaptive Systems Initiative (CASI) (http://www.casi.asu.edu/), Regents’ Professor and Del E. Webb Chair in Health Innovation, Arizona State University

Emily Shuckburgh, climate scientist and deputy head of the Polar Oceans Team at the British Antarctic Survey, University of Cambridge

Responsible Development of AI

Spring Berman, Ira A. Fulton Schools of Engineering, Arizona State University

John Havens, The IEEE [Institute of Electrical and Electronics Engineers] Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems

Subbarao Kambhampati, Senior Sustainability Scientist, Julie Ann Wrigley Global Institute of Sustainability, Professor, School of Computing, Informatics and Decision Systems Engineering, Ira A. Fulton Schools of Engineering, Arizona State University

Wendell Wallach, Consultant, Ethicist, and Scholar at Yale University’s Interdisciplinary Center for Bioethics

Existential and Catastrophic Ricks [sic]

Tony Barrett, Co-Founder and Director of Research of the Global Catastrophic Risk Institute

Haydn Belfield, Academic Project Administrator, Centre for the Study of Existential Risk at the University of Cambridge

Margaret E. Kosal, Associate Director, Sam Nunn School of International Affairs, Georgia Institute of Technology

Catherine Rhodes, Academic Project Manager, Centre for the Study of Existential Risk (CSER), University of Cambridge

These are the panels of interest to me; there are others on the homepage.

Here’s some information from the Conference registration webpage,

Early Bird Registration – $50 off until May 1! Enter discount code: earlybirdGETs50

New: Group Discount – Register 2+ attendees together and receive an additional 20% off for all group members!

Click Here to Register!

Conference registration fees are as follows:

  • General (non-CLE) Registration: $150.00
  • CLE Registration: $350.00
  • *Current Student / ASU Law Alumni Registration: $50.00
  • ^Cybersecurity sessions only (May 19): $100 CLE / $50 General / Free for students (registration info coming soon)

There you have it.

Neuro-techno future laws

I’m pretty sure this isn’t the first exploration of potential legal issues arising from research into neuroscience although it’s the first one I’ve stumbled across. From an April 25, 2017 news item on phys.org,

New human rights laws to prepare for advances in neurotechnology that put the ‘freedom of the mind’ at risk have been proposed today in the open access journal Life Sciences, Society and Policy.

The authors of the study suggest four new human rights laws could emerge in the near future to protect against exploitation and loss of privacy. The four laws are: the right to cognitive liberty, the right to mental privacy, the right to mental integrity and the right to psychological continuity.

An April 25, 2017 Biomed Central news release on EurekAlert, which originated the news item, describes the work in more detail,

Marcello Ienca, lead author and PhD student at the Institute for Biomedical Ethics at the University of Basel, said: “The mind is considered to be the last refuge of personal freedom and self-determination, but advances in neural engineering, brain imaging and neurotechnology put the freedom of the mind at risk. Our proposed laws would give people the right to refuse coercive and invasive neurotechnology, protect the privacy of data collected by neurotechnology, and protect the physical and psychological aspects of the mind from damage by the misuse of neurotechnology.”

Advances in neurotechnology, such as sophisticated brain imaging and the development of brain-computer interfaces, have led to these technologies moving away from a clinical setting and into the consumer domain. While these advances may be beneficial for individuals and society, there is a risk that the technology could be misused and create unprecedented threats to personal freedom.

Professor Roberto Andorno, co-author of the research, explained: “Brain imaging technology has already reached a point where there is discussion over its legitimacy in criminal court, for example as a tool for assessing criminal responsibility or even the risk of reoffending. Consumer companies are using brain imaging for ‘neuromarketing’, to understand consumer behaviour and elicit desired responses from customers. There are also tools such as ‘brain decoders’ which can turn brain imaging data into images, text or sound. All of these could pose a threat to personal freedom which we sought to address with the development of four new human rights laws.”

The authors explain that as neurotechnology improves and becomes commonplace, there is a risk that the technology could be hacked, allowing a third-party to ‘eavesdrop’ on someone’s mind. In the future, a brain-computer interface used to control consumer technology could put the user at risk of physical and psychological damage caused by a third-party attack on the technology. There are also ethical and legal concerns over the protection of data generated by these devices that need to be considered.

International human rights laws make no specific mention of neuroscience, although advances in biomedicine have become intertwined with laws, such as those concerning human genetic data. Similar to the historical trajectory of the genetic revolution, the authors state that the ongoing neurorevolution will force a reconceptualization of human rights laws and even the creation of new ones.

Marcello Ienca added: “Science-fiction can teach us a lot about the potential threat of technology. Neurotechnology featured in famous stories has in some cases already become a reality, while others are inching ever closer, or exist as military and commercial prototypes. We need to be prepared to deal with the impact these technologies will have on our personal freedom.”

Here’s a link to and a citation for the paper,

Towards new human rights in the age of neuroscience and neurotechnology by Marcello Ienca and Roberto Andorno. Life Sciences, Society and Policy 2017, 13:5 DOI: 10.1186/s40504-017-0050-1 Published: 26 April 2017

©  The Author(s). 2017

This paper is open access.

Growing shells atom-by-atom

The University of California at Davis (UC Davis) and the University of Washington (state) collaborated on research into fundamental questions about how aquatic animals grow. From an Oct. 24, 2016 news item on ScienceDaily,

For the first time scientists can see how the shells of tiny marine organisms grow atom-by-atom, a new study reports. The advance provides new insights into the mechanisms of biomineralization and will improve our understanding of environmental change in Earth’s past.

An Oct. 24, 2016 UC Davis news release by Becky Oskin, which originated the news item, provides more detail,

Led by researchers from the University of California, Davis and the University of Washington, with key support from the U.S. Department of Energy’s Pacific Northwest National Laboratory, the team examined an organic-mineral interface where the first calcium carbonate crystals start to appear in the shells of foraminifera, a type of plankton.

“We’ve gotten the first glimpse of the biological event horizon,” said Howard Spero, a study co-author and UC Davis geochemistry professor. …

Foraminifera’s Final Frontier

The researchers zoomed into shells at the atomic level to better understand how growth processes may influence the levels of trace impurities in shells. The team looked at a key stage — the interaction between the biological ‘template’ and the initiation of shell growth. The scientists produced an atom-scale map of the chemistry at this crucial interface in the foraminifera Orbulina universa. This is the first-ever measurement of the chemistry of a calcium carbonate biomineralization template, Spero said.

Among the new findings are elevated levels of sodium and magnesium in the organic layer. This is surprising because the two elements are not considered important architects in building shells, said lead study author Oscar Branson, a former postdoctoral researcher at UC Davis who is now at the Australian National University in Canberra. Also, the greater concentrations of magnesium and sodium in the organic template may need to be considered when investigating past climate with foraminifera shells.

Calibrating Earth’s Climate

Most of what we know about past climate (beyond ice core records) comes from chemical analyses of shells made by the tiny, one-celled creatures called foraminifera, or “forams.” When forams die, their shells sink and are preserved in seafloor mud. The chemistry preserved in ancient shells chronicles climate change on Earth, an archive that stretches back nearly 200 million years.

The calcium carbonate shells incorporate elements from seawater — such as calcium, magnesium and sodium — as the shells grow. The amount of trace impurities in a shell depends on both the surrounding environmental conditions and how the shells are made. For example, the more magnesium a shell has, the warmer the ocean was where that shell grew.

“Finding out how much magnesium there is in a shell can allow us to find out the temperature of seawater going back up to 150 million years,” Branson said.

But magnesium levels also vary within a shell, because of nanometer-scale growth bands. Each band is one day’s growth (similar to the seasonal variations in tree rings). Branson said considerable gaps persist in understanding what exactly causes the daily bands in the shells.

“We know that shell formation processes are important for shell chemistry, but we don’t know much about these processes or how they might have changed through time,” he said. “This adds considerable uncertainty to climate reconstructions.”
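To make the proxy idea concrete: published Mg/Ca thermometry relates the shell’s magnesium-to-calcium ratio to calcification temperature through an exponential calibration of the form Mg/Ca = B·exp(A·T). The sketch below is my own illustration, not from the article, and the constants A and B are placeholder values of the kind seen in the literature, not ones you should use for real reconstructions.

```python
import math

# Illustrative Mg/Ca paleothermometry: shell Mg/Ca rises roughly
# exponentially with calcification temperature, Mg/Ca = B * exp(A * T).
# Both constants are hypothetical placeholders for this sketch.
A = 0.09   # temperature sensitivity (per deg C), hypothetical
B = 0.38   # pre-exponential constant (mmol/mol), hypothetical

def mgca_from_temperature(temp_c: float) -> float:
    """Forward model: expected shell Mg/Ca at a given temperature (deg C)."""
    return B * math.exp(A * temp_c)

def temperature_from_mgca(mgca_mmol_mol: float) -> float:
    """Invert the exponential calibration to estimate temperature (deg C)."""
    return math.log(mgca_mmol_mol / B) / A

# A shell with higher Mg/Ca implies warmer water than one with lower Mg/Ca
warm = temperature_from_mgca(3.0)
cool = temperature_from_mgca(1.5)
```

The daily growth bands Branson describes are exactly why this inversion is uncertain in practice: if biological processes (not just temperature) set part of the Mg signal, the simple calibration above misattributes that part to climate.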

Atomic Maps

The researchers used two cutting-edge techniques: Time-of-Flight Secondary Ionization Mass Spectrometry (ToF-SIMS) and Laser-Assisted Atom Probe Tomography (APT). ToF-SIMS is a two-dimensional chemical mapping technique which shows the elemental composition of the surface of a polished sample. The technique was developed for the elemental analysis of complex polymer materials, and is just starting to be applied to natural samples like shells.

APT is an atomic-scale three-dimensional mapping technique, developed for looking at internal structures in advanced alloys, silicon chips and superconductors. The APT imaging was performed at the Environmental Molecular Sciences Laboratory, a U.S. Department of Energy Office of Science User Facility at the Pacific Northwest National Laboratory.

This foraminifera is just starting to form its adult spherical shell. The calcium carbonate spherical shell first forms on a thin organic template, shown here in white, around the dark juvenile skeleton. Calcium carbonate spines then extend from the juvenile skeleton through the new sphere and outward. The bright flecks are algae that the foraminifera “farm” for sustenance. Howard Spero/University of California, Davis


An Oct. 24, 2016 University of Washington (state) news release (also on EurekAlert) adds more information (there is a little repetition),

Unseen out in the ocean, countless single-celled organisms grow protective shells to keep them safe as they drift along, living off other tiny marine plants and animals. Taken together, the shells are so plentiful that when they sink they provide one of the best records for the history of ocean chemistry.

Oceanographers at the University of Washington and the University of California, Davis, have used modern tools to provide an atomic-scale look at how that shell first forms. Results could help answer fundamental questions about how these creatures grow under different ocean conditions, in the past and in the future. …

“There’s this debate among scientists about whether shelled organisms are slaves to the chemistry of the ocean, or whether they have the physiological capacity to adapt to changing environmental conditions,” said senior author Alex Gagnon, a UW assistant professor of oceanography.

The new work shows, he said, that they do exert some biologically-based control over shell formation.

“I think it’s just incredible that we were able to peer into the intricate details of those first moments that set how a seashell forms,” Gagnon said. “And that’s what sets how much of the rest of the skeleton will grow.”

The results could eventually help understand how organisms at the base of the marine food chain will respond to more acidic waters. And while the study looked at one organism, Orbulina universa, which is important for understanding past climate, the same method could be used for other plankton, corals and shellfish.

The study used tools developed for materials science and semiconductor research to view the shell formation in the most detail yet to see how the organisms turn seawater into solid mineral.

“We’re interested more broadly in the question ‘How do organisms make shells?'” said first author Oscar Branson, a former postdoctoral researcher at the University of California, Davis who is now at Australian National University in Canberra. “We’ve focused on a key stage in mineral formation — the interaction between biological template materials and the initiation of shell growth by an organism.”

These tiny single-celled animals, called foraminifera, can’t reproduce anywhere but in their natural surroundings, which prevents breeding them in captivity. The researchers caught juvenile foraminifera by diving in deep water off Southern California. They then raised them in the lab, using tiny pipettes to feed them brine shrimp during their weeklong lives.

Marine shells are made from calcium carbonate, drawing the calcium and carbon from surrounding seawater. But the animal first grows a soft template for the mineral to grow over. Because this template is trapped within the growing skeleton, it acts as a snapshot of the chemical conditions during the first part of skeletal growth.

To see this chemical picture, the authors analyzed tiny sections of foraminifera template with a technique called atom probe tomography at the Pacific Northwest National Laboratory. This tool creates an atom-by-atom picture of the organic template, which was located using a chemical tag.

Results show that the template contains more magnesium and sodium atoms than expected, and that this could influence how the mineral in the shell begins to grow around it.

“One of the key stages in growing a skeleton is when you make that first bit, when you build that first bit of structure. Anything that changes that process is a key control point,” Gagnon said.

This clustering of magnesium and sodium atoms suggests they play a role in the first stages of shell growth. If their availability changes for any reason, that could influence how the shell grows beyond what simple chemistry would predict.

“We can say who the players are — further experiments will have to tell us exactly how important each of them is,” Gagnon said.

Follow-up work will try to grow the shells and create models of their formation to see how the template affects growth under different conditions, such as more acidic water.

“Translating that into, ‘Can these forams survive ocean acidification?’ is still many steps down the line,” Gagnon cautioned. “But you can’t do that until you have a picture of what that surface actually looks like.”

The researchers also hope that by better understanding the exact mechanism of shell growth they could tease apart different aspects of seafloor remains so the shells can be used to reconstruct more than just the ocean’s past temperature. In the study, they showed that the template was responsible for causing fine lines in the shells — one example of the rich chemical information encoded in fossil shells.

“There are ways that you could separate the effects of temperature from other things and learn much more about the past ocean,” Gagnon said.

Here’s a link to and a citation for the paper,

Nanometer-Scale Chemistry of a Calcite Biomineralization Template: Implications for Skeletal Composition and Nucleation, Proceedings of the National Academy of Sciences, www.pnas.org/cgi/doi/10.1073/pnas.1522864113

This paper is behind a paywall.

Robots, Dallas (US), ethics, and killing

I’ve waited a while before posting this piece in the hope that the situation would calm. Sadly, it took longer than hoped, as there was an additional shooting of police officers in Baton Rouge on July 17, 2016. (There’s more about that shooting in a July 18, 2016 news posting by Steve Visser for CNN.)

Finally, to the topic at hand: in the wake of the Thursday, July 7, 2016 shooting in Dallas (Texas, US) and the subsequent use of a robot armed with a bomb to kill the suspect, a discussion about ethics has arisen.

This discussion comes at a difficult period. In the same week as the targeted shooting of white police officers in Dallas, two African-American males were shot and killed in two apparently unprovoked shootings by police. The victims were Alton Sterling in Baton Rouge, Louisiana on Tuesday, July 5, 2016, and Philando Castile in Minnesota on Wednesday, July 6, 2016. (There’s more detail about the shootings prior to Dallas in a July 7, 2016 news item on CNN.) The suspect in Dallas, Micah Xavier Johnson, a 25-year-old African-American male, had served in the US Army Reserve and been deployed in Afghanistan (there’s more in a July 9, 2016 news item by Emily Shapiro, Julia Jacobo, and Stephanie Wash for abcnews.go.com). All of this has taken place within the context of a movement started in 2013 in the US, Black Lives Matter.

Getting back to robots, most of the material I’ve seen about ‘killing or killer’ robots has so far involved industrial accidents (very few to date) and ethical issues for self-driven cars (see a May 31, 2016 posting by Noah J. Goodall on the IEEE [Institute of Electrical and Electronics Engineers] Spectrum website).

The incident in Dallas is apparently the first time a US police organization has used a robot as a bomb, although it has been an occasional practice by US Armed Forces in combat situations. Rob Lever in a July 8, 2016 Agence France-Presse piece on phys.org focuses on the technology aspect,

The “bomb robot” killing of a suspected Dallas shooter may be the first lethal use of an automated device by American police, and underscores the growing role of technology in law enforcement.

Regardless of the methods in Dallas, the use of robots is expected to grow, to handle potentially dangerous missions in law enforcement and the military.


Researchers at Florida International University meanwhile have been working on a TeleBot that would allow disabled police officers to control a humanoid robot.

The robot, described in some reports as similar to the “RoboCop” in films from 1987 and 2014, was designed “to look intimidating and authoritative enough for citizens to obey the commands,” but with a “friendly appearance” that makes it “approachable to citizens of all ages,” according to a research paper.

Robot developers downplay the potential for the use of automated lethal force by the devices, but some analysts say debate on this is needed, both for policing and the military.

A July 9, 2016 Associated Press piece by Michael Liedtke and Bree Fowler on phys.org focuses more closely on ethical issues raised by the Dallas incident,

When Dallas police used a bomb-carrying robot to kill a sniper, they also kicked off an ethical debate about technology’s use as a crime-fighting weapon.

The strategy opens a new chapter in the escalating use of remote and semi-autonomous devices to fight crime and protect lives. It also raises new questions over when it’s appropriate to dispatch a robot to kill dangerous suspects instead of continuing to negotiate their surrender.

“If lethally equipped robots can be used in this situation, when else can they be used?” says Elizabeth Joh, a University of California at Davis law professor who has followed U.S. law enforcement’s use of technology. “Extreme emergencies shouldn’t define the scope of more ordinary situations where police may want to use robots that are capable of harm.”

In approaching the question about the ethics, Mike Masnick’s July 8, 2016 posting on Techdirt provides a surprisingly sympathetic reading for the Dallas Police Department’s actions, as well as, asking some provocative questions about how robots might be better employed by police organizations (Note: Links have been removed),

The Dallas Police, who have a long history of engaging in community policing designed to de-escalate situations rather than encourage antagonism between police and the community, have been handling all of this with astounding restraint, frankly. Many other police departments would be lashing out, and yet the Dallas Police Dept, while obviously grieving for a horrible situation, appear to be handling this tragic situation professionally. And it appears that they did everything they could in a reasonable manner. They first tried to negotiate with Johnson, but after that failed and they feared more lives would be lost, they went with the robot + bomb option. And, obviously, considering he had already shot many police officers, I don’t think anyone would question the police justification if they had shot Johnson.

But, still, at the very least, the whole situation raises a lot of questions about the legality of police using a bomb offensively to blow someone up. And, it raises some serious questions about how other police departments might use this kind of technology in the future. The situation here appears to be one where people reasonably concluded that this was the most effective way to stop further bloodshed. And this is a police department with a strong track record of reasonable behavior. But what about other police departments where they don’t have that kind of history? What are the protocols for sending in a robot or drone to kill someone? Are there any rules at all?

Furthermore, it actually makes you wonder, why isn’t there a focus on using robots to de-escalate these situations? What if, instead of buying military surplus bomb robots, there were robots being designed to disarm a shooter, or detain him in a manner that would make it easier for the police to capture him alive? Why should the focus of remote robotic devices be to kill him? This isn’t faulting the Dallas Police Department for its actions last night. But, rather, if we’re going to enter the age of robocop, shouldn’t we be looking for ways to use such robotic devices in a manner that would help capture suspects alive, rather than dead?

Gordon Corera’s July 12, 2016 article on the BBC’s (British Broadcasting Corporation) news website provides an overview of the use of automation and of ‘killing/killer robots’,

Remote killing is not new in warfare. Technology has always been driven by military application, including allowing killing to be carried out at distance – prior examples might be the introduction of the longbow by the English at Crecy in 1346, then later the Nazi V1 and V2 rockets.

More recently, unmanned aerial vehicles (UAVs) or drones such as the Predator and the Reaper have been used by the US outside of traditional military battlefields.

Since 2009, the official US estimate is that about 2,500 “combatants” have been killed in 473 strikes, along with perhaps more than 100 non-combatants. Critics dispute those figures as being too low.

Back in 2008, I visited the Creech Air Force Base in the Nevada desert, where drones are flown from.

During our visit, the British pilots from the RAF deployed their weapons for the first time.

One of the pilots visibly bristled when I asked him if it ever felt like playing a video game – a question that many ask.

The military uses encrypted channels to control its ordnance disposal robots, but – as any hacker will tell you – there is almost always a flaw somewhere that a determined opponent can find and exploit.

We have already seen cars being taken control of remotely while people are driving them, and the nightmare of the future might be someone taking control of a robot and sending a weapon in the wrong direction.

The military is at the cutting edge of developing robotics, but domestic policing is also a different context in which greater separation from the community being policed risks compounding problems.

The balance between risks and benefits of robots, remote control and automation remain unclear.

But Dallas suggests that the future may be creeping up on us faster than we can debate it.

The excerpts here do not do justice to the articles; if you’re interested in this topic and have the time, I encourage you to read all the articles cited here in their entirety.

*(ETA: July 25, 2016 at 1405 hours PDT: There is a July 25, 2016 essay by Carrie Sheffield for Salon.com which may provide some insight into the Black Lives Matter movement and some of the generational issues within the US African-American community as revealed by the movement.)*

A bioelectronic future made possible with DNA-based electromechanical switch

DNA-based electronics are discussed in the context of a Dec. 14, 2015 news item by Beth Ellison for Azonano about research into electromechanical switches at the University of California at Davis,

Researchers from the University of California, Davis (UC Davis) and the University of Washington have shown the possibility of using DNA-based electromechanical switches for nanoscale computing.

DNA is considered to be the molecule of life, and researchers have shown considerable interest in utilizing DNA as a nanoscale material in various applications.

A Dec. 14, 2015 UC Davis news release on EurekAlert, which originated the news item, provides more detail,

In their paper published in Nature Communications, the team demonstrated that changing the structure of the DNA double helix by modifying its environment allows the conductance (the ease with which an electric current passes) to be reversibly controlled. This ability to structurally modulate the charge transport properties may enable the design of unique nanodevices based on DNA. These devices would operate using a completely different paradigm than today’s conventional electronics.

“As electronics get smaller they are becoming more difficult and expensive to manufacture, but DNA-based devices could be designed from the bottom-up using directed self-assembly techniques such as ‘DNA origami’,” said Josh Hihath, assistant professor of electrical and computer engineering at UC Davis and senior author on the paper. DNA origami is the folding of DNA to create two- and three-dimensional shapes at the nanoscale level.

“Considerable progress has been made in understanding DNA’s mechanical, structural, and self-assembly properties and the use of these properties to design structures at the nanoscale. The electrical properties, however, have generally been difficult to control,” said Hihath.

New Twist on DNA? Possible Paradigms for Computing

In addition to potential advantages in fabrication at the nanoscale level, such DNA-based devices may also improve the energy efficiency of electronic circuits. The size of devices has been significantly reduced over the last 40 years, but as the size has decreased, the power density on-chip has increased. Scientists and engineers have been exploring novel solutions to improve the efficiency.

“There’s no reason that computation must be done with traditional transistors. Early computers were fully mechanical and later worked on relays and vacuum tubes,” said Hihath. “Moving to an electromechanical platform may eventually allow us to improve the energy efficiency of electronic devices at the nanoscale.”

This work demonstrates that DNA is capable of operating as an electromechanical switch and could lead to new paradigms for computing.

To develop DNA into a reversible switch, the scientists focused on switching between two stable conformations of DNA, known as the A-form and the B-form. In DNA, the B-form is the conventional DNA duplex that is commonly associated with these molecules. The A-form is a more compact version with different spacing and tilting between the base pairs. Exposure to ethanol forces the DNA into the A-form conformation resulting in an increased conductance. Similarly, by removing the ethanol, the DNA can switch back to the B-form and return to its original reduced conductance value.
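The switching behaviour described above can be summarized as a two-state model: ethanol drives the duplex into the compact A-form (higher conductance), and removing it relaxes the duplex back to the B-form (lower conductance). This is a minimal sketch of my own, and the conductance values are arbitrary placeholders; the paper reports only the relative change, not these numbers.

```python
# Two-state sketch of the reported electromechanical switch. Conductance
# values are hypothetical relative units, not measurements from the paper.
CONDUCTANCE = {"B": 1.0, "A": 4.0}

class DNASwitch:
    def __init__(self):
        self.form = "B"  # conventional duplex in aqueous solution

    def add_ethanol(self):
        """Ethanol exposure forces the duplex into the compact A-form."""
        self.form = "A"

    def remove_ethanol(self):
        """Removing ethanol lets the duplex relax back to the B-form."""
        self.form = "B"

    @property
    def conductance(self) -> float:
        return CONDUCTANCE[self.form]

switch = DNASwitch()
low = switch.conductance       # B-form baseline
switch.add_ethanol()
high = switch.conductance      # A-form: increased conductance
switch.remove_ethanol()
restored = switch.conductance  # reversible: back to the baseline value
```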

One Step Toward Molecular Computing

In order to develop this finding into a technologically viable platform for electronics, the authors also noted that there is still a great deal of work to be done. Although this discovery provides a proof-of-principle demonstration of electromechanical switching in DNA, there are generally two major hurdles yet to be overcome in the field of molecular electronics. First, billions of active molecular devices must be integrated into the same circuit as is done currently in conventional electronics. Next, scientists must be able to gate specific devices individually in such a large system.

“Eventually, the environmental gating aspect of this work will have to be replaced with a mechanical or electrical signal in order to locally address a single device,” noted Hihath.

Here’s a link to and a citation for the paper,

Conformational gating of DNA conductance by Juan Manuel Artés, Yuanhui Li, Jianqing Qi, M. P. Anantram, & Joshua Hihath. Nature Communications 6, Article number: 8870 doi:10.1038/ncomms9870 Published 09 December 2015

This paper is open access.

Sponges made of nanoporous gold and DNA detection

This work from the University of California at Davis seems to represent a step forward for better detection of diseases and pathogens. From a Sept. 4, 2015 news item on ScienceDaily,

Sponge-like nanoporous gold could be key to new devices to detect disease-causing agents in humans and plants, according to UC Davis researchers.

In two recent papers in Analytical Chemistry, a group from the UC Davis Department of Electrical and Computer Engineering demonstrated that they could detect nucleic acids using nanoporous gold, a novel sensor coating material, in mixtures of other biomolecules that would gum up most detectors. This method enables sensitive detection of DNA [deoxyribonucleic acid] in complex biological samples, such as serum from whole blood.

A Sept. 4, 2015 UC Davis news release on EurekAlert, which originated the news item, offers more detail,

“Nanoporous gold can be imagined as a porous metal sponge with pore sizes that are a thousand times smaller than the diameter of a human hair,” said Erkin Şeker, assistant professor of electrical and computer engineering at UC Davis and the senior author on the papers. “What happens is the debris in biological samples, such as proteins, is too large to go through those pores, but the fiber-like nucleic acids that we want to detect can actually fit through them. It’s almost like a natural sieve.”
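Şeker's sieve analogy can be sketched in a few lines: species larger than the pore size never reach the sensor surface, while thin, fibre-like nucleic acids pass through. The pore diameter and species sizes below are rough illustrative numbers, not measured values from the papers:

```python
# Minimal sketch of size-exclusion by a nanoporous sensor coating.
# All sizes are hypothetical, in nanometres.

PORE_DIAMETER_NM = 40  # illustrative nanoporous-gold pore size

sample = [
    {"name": "serum protein", "effective_size_nm": 60},
    {"name": "cell debris",   "effective_size_nm": 500},
    {"name": "target DNA",    "effective_size_nm": 2},   # thin, fibre-like
]

# Only species narrower than the pores reach the electrode surface.
reaches_sensor = [s["name"] for s in sample
                  if s["effective_size_nm"] < PORE_DIAMETER_NM]

print(reaches_sensor)  # → ['target DNA']
```

This is why the coating resists biofouling without a separate purification step: the exclusion happens physically, at the pore, rather than chemically in sample preparation.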

Rapid and sensitive detection of nucleic acids plays a crucial role in early identification of pathogenic microbes and disease biomarkers. Current sensor approaches usually require nucleic acid purification that relies on multiple steps and specialized laboratory equipment, which limit the sensors’ use in the field. The researchers’ method reduces the need for purification.

“So now we hope to have largely eliminated the need for extensive sample clean-up, which makes the process conducive to use in the field,” Şeker said.

The result is a faster and more efficient process that can be applied in many settings.

The researchers hope the technology can be translated into the development of miniature point-of-care diagnostic platforms for agricultural and clinical applications.

“The applications of the sensor are quite broad ranging from detection of plant pathogens to disease biomarkers,” said Şeker.

For example, in agriculture, scientists could detect whether a certain pathogen exists on a plant without seeing any symptoms. And in sepsis cases in humans, doctors might determine bacterial contamination much more quickly than at present, preventing any unnecessary treatments.

Here are links to and citations for two recent published papers about this work,

Effect of Nanoporous Gold Thin Film Morphology on Electrochemical DNA Sensing by Pallavi Daggumati, Zimple Matharu, and Erkin Şeker. Anal. Chem., 2015, 87 (16), pp 8149–8156 DOI: 10.1021/acs.analchem.5b00846 Publication Date (Web): April 30, 2015

Copyright © 2015 American Chemical Society

Biofouling-Resilient Nanoporous Gold Electrodes for DNA Sensing by Pallavi Daggumati, Zimple Matharu, Ling Wang, and Erkin Şeker. Anal. Chem., 2015, 87 (17), pp 8618–8622 DOI: 10.1021/acs.analchem.5b02969 Publication Date (Web): August 14, 2015

Copyright © 2015 American Chemical Society

These papers are behind a paywall.

Nanopollution of marine life

Concerns are being raised about nanosunscreens and nanotechnology-enabled marine paints and their effect on marine life, specifically, sea urchins. From a May 13, 2015 news item on Nanowerk (Note: A link has been removed),

Nanomaterials commonly used in sunscreens and boat-bottom paints are making sea urchin embryos more vulnerable to toxins, according to a study from the University of California, Davis [UC Davis]. The authors said this could pose a risk to coastal, marine and freshwater environments.

The study, published in the journal Environmental Science and Technology (“Copper Oxide and Zinc Oxide Nanomaterials Act as Inhibitors of Multidrug Resistance Transport in Sea Urchin Embryos: Their Role as Chemosensitizers”), is the first to show that the nanomaterials work as chemosensitizers. In cancer treatments, a chemosensitizer makes tumor cells more sensitive to the effects of chemotherapy.

Similarly, nanozinc and nanocopper made developing sea urchin embryos more sensitive to other chemicals, blocking transporters that would otherwise defend them by pumping toxins out of cells.

A May 12, 2015 UC Davis news release, which originated the news item, includes some cautions,

Nanozinc oxide is used as an additive in cosmetics such as sunscreens, toothpastes and beauty products. Nanocopper oxide is often used for electronics and technology, but also for antifouling paints, which prevent things like barnacles and mussels from attaching to boats.

“At low levels, both of these nanomaterials are nontoxic,” said co-author Gary Cherr, professor and interim director of the UC Davis Bodega Marine Laboratory, and an affiliate of the UC Davis Coastal Marine Sciences Institute. “However, for sea urchins in sensitive life stages, they disrupt the main defense mechanism that would otherwise protect them from environmental toxins.”

Science for safe design

Nanomaterials are tiny chemical substances measured in nanometers, which are about 100,000 times smaller than the diameter of a human hair. Nano-sized particles can enter the body through the skin, ingestion, or inhalation. They are being rapidly introduced across the fields of electronics, medicine and technology, where they are being used to make energy efficient batteries, clean up oil spills, and fight cancer, among many other uses. However, relatively little is known about nanomaterials with respect to the environment and health.

Here’s a link to and a citation for the paper,

Copper Oxide and Zinc Oxide Nanomaterials Act as Inhibitors of Multidrug Resistance Transport in Sea Urchin Embryos: Their Role as Chemosensitizers by Bing Wu, Cristina Torres-Duarte, Bryan J. Cole, and Gary N. Cherr. Environ. Sci. Technol., 2015, 49 (9), pp 5760–5770 DOI: 10.1021/acs.est.5b00345 Publication Date (Web): April 7, 2015

Copyright © 2015 American Chemical Society

This paper is behind a paywall.

While this research into nanoparticles as chemosensitizers is, according to UC Davis, the first of its kind, the concern over nanosunscreens and marine waters has been gaining traction over the last few years. For example, there's research featured in a June 10, 2013 article by Roberta Kwok for the University of Washington's 'Conservation This Week' magazine,

Sunscreen offers protection from UV rays, reduces the risk of skin cancer, and even slows down signs of aging. Unfortunately, researchers have found that sunscreen also pollutes the ocean.

Although people have been using these products for decades, “the effect of sunscreens, as a source of introduced chemicals to the coastal marine system, has not yet been addressed,” a research team writes in PLOS ONE. Sunscreens contain chemicals not only for UV protection, but also for coloring, fragrance, and texture. And beaches are becoming ever-more-popular vacation spots; for example, nearly 10 million tourists visited Majorca Island in the Mediterranean Sea in 2010.

Here’s a link to the 2013 PLOS ONE paper,

Sunscreen Products as Emerging Pollutants to Coastal Waters by Antonio Tovar-Sánchez, David Sánchez-Quiles, Gotzon Basterretxea, Juan L. Benedé, Alberto Chisvert, Amparo Salvador, Ignacio Moreno-Garrido, and Julián Blasco. PLOS ONE DOI: 10.1371/journal.pone.0065451 Published: June 5, 2013

This is an open access journal.

Gold and your neurons

Should you need any electrode implants for your neurons at some point in the future, it’s possible they could be coated with gold. Researchers at the Lawrence Livermore National Laboratory (LLNL) and at the University of California at Davis (UC Davis) have discovered that electrodes covered in nanoporous gold could prevent scarring (from a May 5, 2015 news item on Azonano),

A team of researchers from Lawrence Livermore and UC Davis have found that covering an implantable neural electrode with nanoporous gold could eliminate the risk of scar tissue forming over the electrode’s surface.

The team demonstrated that the nanostructure of nanoporous gold achieves close physical coupling of neurons by maintaining a high neuron-to-astrocyte surface coverage ratio. Close physical coupling between neurons and the electrode plays a crucial role in recording fidelity of neural electrical activity.

An April 30, 2015 LLNL news release, which originated the news item, details the scarring issue and offers more information about the proposed solution,

Neural interfaces (e.g., implantable electrodes or multiple-electrode arrays) have emerged as transformative tools to monitor and modify neural electrophysiology, both for fundamental studies of the nervous system, and to diagnose and treat neurological disorders. These interfaces require low electrical impedance to reduce background noise and close electrode-neuron coupling for enhanced recording fidelity.

Designing neural interfaces that maintain close physical coupling of neurons to an electrode surface remains a major challenge for both implantable and in vitro neural recording electrode arrays. An important obstacle in maintaining robust neuron-electrode coupling is the encapsulation of the electrode by scar tissue.

Typically, low-impedance nanostructured electrode coatings rely on chemical cues from pharmaceuticals or surface-immobilized peptides to suppress glial scar tissue formation over the electrode surface, which is an obstacle to reliable neuron−electrode coupling.

However, the team found that nanoporous gold, produced by an alloy corrosion process, is a promising candidate to reduce scar tissue formation on the electrode surface solely through topography by taking advantage of its tunable length scale.

“Our results show that nanoporous gold topography, not surface chemistry, reduces astrocyte surface coverage,” said Monika Biener, one of the LLNL authors of the paper.

Nanoporous gold has attracted significant interest for its use in electrochemical sensors, catalytic platforms, fundamental structure−property studies at the nanoscale and tunable drug release. It also features high effective surface area, tunable pore size, well-defined conjugate chemistry, high electrical conductivity and compatibility with traditional fabrication techniques.

“We found that nanoporous gold reduces scar coverage but also maintains high neuronal coverage in an in vitro neuron-glia co-culture model,” said Juergen Biener, the other LLNL author of the paper. “More broadly, the study demonstrates a novel surface for supporting neuronal cultures without the use of culture medium supplements to reduce scar overgrowth.”

Here’s a link to and a citation for the paper,

Nanoporous Gold as a Neural Interface Coating: Effects of Topography, Surface Chemistry, and Feature Size by Christopher A. R. Chapman, Hao Chen, Marianna Stamou, Juergen Biener, Monika M. Biener, Pamela J. Lein, and Erkin Seker. ACS Appl. Mater. Interfaces, 2015, 7 (13), pp 7093–7100 DOI: 10.1021/acsami.5b00410 Publication Date (Web): February 23, 2015

Copyright © 2015 American Chemical Society

This paper is behind a paywall.

The researchers have provided this image to illustrate their work,

The image depicts a neuronal network growing on a novel nanotextured gold electrode coating. The topographical cues presented by the coating preferentially favor spreading of neurons as opposed to scar tissue. This feature has the potential to enhance the performance of neural interfaces. Image by Ryan Chen/LLNL.

Nanozen: protecting us from nanoparticles (maybe)

On Friday, Oct. 24, 2014, the Vancouver Sun (Canada) featured a local nanotechnology company, Nanozen, in an article by 'digital life' writer Gillian Shaw. Unfortunately, the article is misleading. Before noting the issues, it should be said that most reporters don't have much time to prepare stories and are often asked to write on topics that are new or relatively unknown to them. It is a stressful position to be in, especially when one is reliant on the interviewee's expertise and agenda. As for the interviewee, sometimes scientists get excited and enthused and don't speak with their usual caution.

The article starts off in an unexceptionable manner,

Vancouver startup Nanozen is creating a real-time, wearable particle sensor for use in mines, mills and other industrial locations where dust and other particles can lead to dangerous explosions and debilitating respiratory diseases.

The company founder and, presumably, lead researcher Winnie Chu is described as a former professor of environmental health at the University of British Columbia who has devoted herself to developing a new means of monitoring particles, in particular nanoparticles. Chu is quoted as saying this,

“The current technology is not sufficient to protect workers or the community when concentrations exceed the acceptable level,” she said.

It seems ominous and is made more so with this,

Chu said more than 90 per cent of the firefighters who responded to the 9/11 disaster developed lung disease, having walked into a site full of small and very damaging particles in the air.

“Those nanoparticles go deep into your lungs and cause inflammation and other problems,” Chu said.

It seems odd to mention this particular disaster. The lung issues for the firefighters, first responders, and people living close to the site of the World Trade Center's collapse are due to a complex mix of materials in the air. Most of the research I can find focuses on microscale particles, such as the work from the University of California at Davis's Delta Group (Detection and Evaluation of the Long-Range Transport of Aerosols). From the Group's World Trade Center webpage,

The fuming World Trade Center debris pile was a chemical factory that exhaled pollutants in particularly dangerous forms that could penetrate deep into the lungs of workers at Ground Zero, says a new study by UC Davis air-quality experts.

You can find the group's presentation (-Presentation download (WTC aersols ACS 2003.ppt; 7,500kb)) to an American Chemical Society meeting in 2003, along with more details such as this, on their webpage,

The conditions would have been “brutal” for people working at Ground Zero without respirators and slightly less so for those working or living in immediately adjacent buildings, said the study’s lead author, Thomas Cahill, a UC Davis professor emeritus of physics and atmospheric science and research professor in engineering.

“Now that we have a model of how the debris pile worked, it gives us a much better idea of what the people working on and near the pile were actually breathing,” Cahill said. “Our first report was based on particles that we collected one mile away. This report gives a reasonable estimate of what type of pollutants were actually present at Ground Zero.

“The debris pile acted like a chemical factory. It cooked together the components of the buildings and their contents, including enormous numbers of computers, and gave off gases of toxic metals, acids and organics for at least six weeks.”

The materials found by this group were not at the nanoscale. In fact, the focus was then, and subsequently, on materials such as glass shards, asbestos, and metallic aerosols at the microscale, all of which can cause well-documented health problems. No doubt effective monitoring would have been helpful. It seems the critical issue in the early stages of the disaster was access to a respirator. Effective monitoring at later stages, which does not seem to have happened, would also have been a good idea.

A 2004 (?) New York Magazine article by Jennifer Senior titled ‘Fallout‘ had this to say about the air content,

Here, today, is what we know about the dust and air at ground zero: It contained glass shards, pulverized concrete, and many carcinogens, including hundreds of thousands of pounds of asbestos, tens of thousands of pounds of lead, mercury, cadmium, dioxins, PCBs, and polycyclic aromatic hydrocarbons, or PAHs. It also contained benzene. According to a study done by the U.S. Geological Survey, the dust was so caustic in places that its pH exceeded that of ammonia. Thomas Cahill, a scientist who analyzed the plumes from a rooftop one mile away, says that the levels of acids, insoluble particles, high-temperature organic materials, and metals were in most cases higher in very fine particles (which can slip deep into the lungs) than anyplace ever recorded on earth, including the oil fires of Kuwait.

The article describes at some length the problems for first responders and for those who later moved back into their homes nearby the disaster site under the impression the air was clean.

Getting back to the nanoscale, there were carbon nanotubes (CNTs) present as this 2009 research paper, Case Report: Lung Disease in World Trade Center Responders Exposed to Dust and Smoke: Carbon Nanotubes Found in the Lungs of World Trade Center Patients and Dust Samples, noted in relation to a sample of seven patients,

It may well be the most frequent injury pattern in exposed patients with severe respiratory impairment. b) Interstitial disease was present in four cases (Patients A, B, C, and E), characterized by a generally bronchiolocentric pattern of interstitial inflammation and fibrosis of variable severity. The lungs of these patients contained large amounts of silicates, and three of them showed nanotubes.

CNT of commercial origin, common now, would not have been present in substantial numbers in the WTC complex before the disaster in 2001. However, the high temperatures generated during the WTC disaster as a result of the combustion of fuel in the presence of carbon and metals would have been sufficient to locally generate large numbers of CNT. This scenario could have caused the generation of CNT that we have noted in the dust samples and in the lung biopsy specimens.

Given that CNTs are more common now, a monitor for nanoscale materials such as Chu's proposed equipment could be an excellent idea. Unfortunately, it's not clear what Chu is trying to achieve, as she appears to make a blunder in the article,

Chu said environmental agencies require testing to distinguish between particles equal to or less than 10 microns and smaller particles 2.5 microns or less.

“When we inhale we inhale both size particles but they go into different parts of the lung,” said Chu, who said research shows the smaller the particle the higher the toxicity. [emphasis mine] The monitor she has developed can detect particles as small as one micron and even less.
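The size fractions Chu refers to are the standard regulatory categories: PM10 (aerodynamic diameter of 10 microns or less) and PM2.5 (2.5 microns or less). Note that PM2.5 is a subset of PM10, as this small sketch illustrates:

```python
# Sketch of the standard particulate-matter size fractions mentioned above.

def size_fractions(diameter_um):
    """Return which regulatory PM fractions a particle of the given
    aerodynamic diameter (in microns) falls into."""
    fractions = []
    if diameter_um <= 10:
        fractions.append("PM10")
    if diameter_um <= 2.5:
        # every PM2.5 particle is, by definition, also part of PM10
        fractions.append("PM2.5")
    return fractions


for d in [15, 8, 1]:
    print(d, size_fractions(d))
# 15 falls outside both fractions; 8 is PM10 only; 1 is both PM10 and PM2.5
```

A one-micron particle is still a thousand times larger than a one-nanometre particle, which is relevant to the next point: detecting "particles as small as one micron" is not the same as detecting nanoparticles.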

The word 'nanoparticle' is often used generically to include CNTs, quantum dots, silver nanoparticles, etc., as Chu seems to be doing throughout the article. The only nanomaterials that researchers unequivocally agree cause lung problems are long carbon nanotubes, which resemble asbestos fibres. This is precisely the opposite of Chu's statement.

For validation, you can conduct your own search, or you can check Swiss toxicologist Harald Krug's (mentioned in my Nanosafety research: a quality control issue posting of Oct. 30, 2014) statement that most health and safety research on nanomaterials, and the resultant conclusions, are problematic. But he too is unequivocal with regard to long carbon nanotubes (from Krug's study, Nanosafety Research—Are We on the Right Track?).

Comparison of instillation and inhalation experiments: instillation studies have to be carried out with relatively high local doses and, thus, more often meet overload conditions than inhalation studies. Transient inflammatory effects have been observed frequently in both types of lung exposure, irrespective of the type of ENMs used for the experiment. This finding suggests an unspecific particle effect; moreover, the biological response seems to be comparable to a scenario involving exposure to fine dust. Prominent exceptions are long and rigid carbon nanotube (CNT) bundles, which induce a severe tissue reaction (chronic inflammation) that may ultimately result in tumor formation. Overall, the evaluated studies showed no indication of a “nanospecific” effect in the lung. [from the Summary section; 2nd bulleted point]

You can find the Nanozen website here, but there doesn't appear to be any information on the site yet. The search terms 'about', 'team', 'technology', and 'product' yielded no results on the website as of Oct. 30, 2014 at 1000 hours PDT.