Category Archives: synthetic biology

“Innovation and its enemies” and “Science in Wonderland”: a commentary on two books and a few thoughts about fish (1 of 2)

There’s more than one way to introduce emerging technologies and sciences to ‘the public’. Calestous Juma, in his 2016 book “Innovation and Its Enemies: Why People Resist New Technologies,” takes a direct approach, as the title suggests, while Melanie Keene’s 2015 book, “Science in Wonderland: The Scientific Fairy Tales of Victorian Britain,” presents a more fantastical one. The fish in the headline tie both books, thematically and tenuously, to a real-life situation.

Innovation and Its Enemies

Calestous Juma, the author of “Innovation and Its Enemies” has impressive credentials,

  • Professor of the Practice of International Development,
  • Director of the Science, Technology, and Globalization Project at the Harvard Kennedy School’s Belfer Center for Science and International Affairs,
  • Founding Director of the African Centre for Technology Studies in Nairobi (Kenya),
  • Fellow of the Royal Society of London, and
  • Foreign Associate of the US National Academy of Sciences.

Even better, Juma is an excellent storyteller, perhaps too much so for a book that presents a series of science and technology adoption case histories. (Given the range of historical periods, geographies, and innovations covered, he always has to cut the story short.) The breadth is breathtaking, and Juma manages it with aplomb. For example, the innovations covered include coffee, electricity, mechanical refrigeration, margarine, recorded sound, farm mechanization, and the printing press. He also covers two recently emerging technologies/innovations: transgenic crops and AquAdvantage salmon (more about the salmon later).

Juma provides an analysis of the various ways in which the public and institutions panic over innovation and goes on to offer solutions. He also injects a subtle note of humour from time to time. Here’s how Juma describes various countries’ response to risks and benefits,

In the United States products are safe until proven risky.

In France products are risky until proven safe.

In the United Kingdom products are risky even when proven safe.

In India products are safe when proven risky.

In Canada products are neither safe nor risky.

In Japan products are either safe or risky.

In Brazil products are both safe and risky.

In sub-Saharan Africa products are risky even if they do not exist. (pp. 4-5)

To Calestous Juma, thank you for mentioning Canada and for so aptly describing the quintessentially Canadian approach not just to products and innovation but to life itself: ‘we just don’t know; it could be this or it could be that or it could be something entirely different; we just don’t know and probably never will.’

One of the aspects I most appreciated in this book was the broadening of the geographical perspective on innovation and emerging technologies to include the Middle East, China, and other regions/countries. As I’ve noted in past postings, much of the discussion here in Canada is Eurocentric and/or US-centric. For example, the Council of Canadian Academies, which conducts assessments of various science questions at the request of Canadian and regional governments, routinely fills the ‘international’ slot(s) on its expert panels with academics from Europe (mostly Great Britain) and/or the US (or sometimes from Australia and/or New Zealand).

A good example of Juma’s expanded perspective on emerging technology is offered in Art Carden’s July 7, 2017 book review for Forbes.com (Note: A link has been removed),

In the chapter on coffee, Juma discusses how Middle Eastern and European societies resisted the beverage and, in particular, worked to shut down coffeehouses. Islamic jurists debated whether the kick from coffee is the same as intoxication and therefore something to be prohibited. Appealing to “the principle of original permissibility — al-ibaha, al-asliya — under which products were considered acceptable until expressly outlawed,” the fifteenth-century jurist Muhamad al-Dhabani issued several fatwas in support of keeping coffee legal.

This wasn’t the last word on coffee, which was banned and permitted and banned and permitted and banned and permitted in various places over time. Some rulers were skeptical of coffee because it was brewed and consumed in public coffeehouses — places where people could indulge in vices like gambling and tobacco use or perhaps exchange unorthodox ideas that were a threat to their power. It seems absurd in retrospect, but political control of all things coffee is no laughing matter.

The bans extended to Europe, where coffee threatened beverages like tea, wine, and beer. Predictably, and all in the name of public safety (of course!), European governments with the counsel of experts like brewers, vintners, and the British East India Tea Company regulated coffee importation and consumption. The list of affected interest groups is long, as is the list of meddlesome governments. Charles II of England would issue A Proclamation for the Suppression of Coffee Houses in 1675. Sweden prohibited coffee imports on five separate occasions between 1756 and 1817. In the late seventeenth century, France required that all coffee be imported through Marseilles so that it could be more easily monopolized and taxed.

Carden, who teaches economics at Samford University (Alabama, US), focuses on issues of individual liberty and the rule of law with regard to innovation. I can appreciate the need to focus tightly when you have a limited word count, but Carden could have spared a few words to do more justice to Juma’s comprehensive and focused work.

At the risk of being accused of the fault I’ve attributed to Carden, I must mention the printing press chapter. While it was good to see a history of the printing press and its attendant social upheavals that notes the technology’s discovery and impact in regions other than Europe, it was shocking to someone educated in Canada to find Marshall McLuhan entirely ignored. Even now, I believe it’s virtually impossible to discuss the printing press as a technology, in Canada anyway, without mentioning our ‘communications god’ Marshall McLuhan and his 1962 book, The Gutenberg Galaxy.

Getting back to Juma’s book, his breadth and depth of knowledge, history, and geography is packaged in a relatively succinct 316 pages. As a writer, I admire his ability to distill the salient points and to devote chapters to two emerging technologies. It’s notoriously difficult to write about a currently emerging technology, and Juma even managed to include a reference published only months (in early 2016) before “Innovation and Its Enemies” appeared in July 2016.

Irrespective of Marshall McLuhan, I feel there are a few flaws. The book is intended for policy makers and industry (lobbyists, anyone?), and it reaffirms a tendency in academia, industry, and government toward a top-down approach to eliminating resistance. From Juma’s perspective, there needs to be better science education because no one who is properly informed should have any objections to an emerging/new technology. He never considers the possibility that resistance to a new technology might be a reasonable response. As well, while there is some mention of corporate resistance to new technologies that might threaten profits and revenue, Juma offers no comment on how corporate sovereignty and/or intellectual property claims are used, quite successfully, to stifle innovation.

My concerns aside, testimony to the book’s worth is Carden’s review, written almost a year after publication. As well, Sir Peter Gluckman, Chief Science Advisor to the Prime Minister of New Zealand, mentions Juma’s book in his January 16, 2017 talk, Science Advice in a Troubled World, for the Canadian Science Policy Centre.

Science in Wonderland

Melanie Keene’s 2015 book, “Science in Wonderland: The Scientific Fairy Tales of Victorian Britain,” surveys the Victorian fashion for writing and reading scientific and mathematical fairy tales and, inadvertently, sketches a public science education programme,

A fairy queen (Victoria) sat on the throne of Victoria’s Britain, and she presided over a fairy tale age. The nineteenth century witnessed an unprecedented interest in fairies and in their tales, as they were used as an enchanted mirror in which to reflect, question, and distort contemporary society.  …  Fairies could be found disporting themselves throughout the century on stage and page, in picture and print, from local haunts to global transports. There were myriad ways in which authors, painters, illustrators, advertisers, pantomime performers, singers, and more, captured this contemporary enthusiasm and engaged with fairyland and folklore; books, exhibitions, and images for children were one of the most significant. (p. 13)

… Anthropologists even made fairies the subject of scientific analysis, as ‘fairyology’ determined whether fairies should be part of natural history or part of supernatural lore; just one aspect of the revival of interest in folklore. Was there a tribe of fairy creatures somewhere out there waiting to be discovered, across the globe or in the fossil record? Were fairies some kind of folk memory of an extinct race? (p. 14)

Scientific engagement with fairyland was widespread, and not just as an attractive means of packaging new facts for Victorian children. … The fairy tales of science had an important role to play in conceiving of new scientific disciplines; in celebrating new discoveries; in criticizing lofty ambitions; in inculcating habits of mind and body; in inspiring wonder; in positing future directions; and in the consideration of what the sciences were, and should be. A close reading of these tales provides a more sophisticated understanding of the content and status of the Victorian sciences; they give insights into what these new scientific disciplines were trying to do; how they were trying to cement a certain place in the world; and how they hoped to recruit and train new participants. (p. 18)

Segue: Should you be inclined to believe that society has moved on from fairies, note that it is possible to become a certified fairyologist (check out the fairyologist.com website).

“Science in Wonderland,” the title being a reference to Lewis Carroll’s Alice, was marketed quite differently from “Innovation and Its Enemies”. There is no description of the author, as is the protocol for academic tomes, so here’s more from her webpage on the University of Cambridge (Homerton College) website,

Role:
Fellow, Graduate Tutor, Director of Studies for History and Philosophy of Science

Getting back to Keene’s book, she makes the point that the fairy tales were based on science and integrated scientific terminology in imaginative ways, although some books managed this with more success than others. Topics ranged from paleontology, botany, and astronomy to microscopy and more.

This book, with its overview of the fairy narratives, provides a contrast to Juma’s direct focus on policy makers. Keene is primarily interested in children’s literature, but her book casts a wider net: “… they give insights into what these new scientific disciplines were trying to do; how they were trying to cement a certain place in the world; and how they hoped to recruit and train new participants.”

In a sense, both authors are describing how technologies are introduced and integrated into society. Keene provides a view that must seem almost halcyon to many contemporary innovation enthusiasts. As her topic area is children’s literature, any resistance she notes is primarily literary, invoking a debate about whether or not science was killing imagination and whimsy.

It would probably help to have taken a course in 19th-century children’s literature before reading Keene’s book. Even if you haven’t, it’s still quite accessible, although I was left wondering about ‘Alice in Wonderland’ and its relationship to mathematics (see Melanie Bayley’s December 16, 2009 story for the New Scientist for a detailed rundown).

As an added bonus, fairy tale illustrations are included throughout the book along with a section of higher quality reproductions.

One of the unexpected delights of Keene’s book was the section on L. Frank Baum and his electricity fairy tale, “The Master Key.” She stretches to include “The Wizard of Oz,” which doesn’t really fit, but I can’t see how she could have avoided mentioning Baum’s most famous creation. There’s also a surprising (to me) focus on water, which, when paired with the interest in microscopy, makes sense. Keene isn’t the only one who has to stretch to make things fit into a narrative, and so from water I move on to fish, bringing me back to one of Juma’s emerging technologies.

Part 2: Fish and final comments

Cyborg bacteria to reduce carbon dioxide

This video is a bit technical, but then it is about work presented to chemists at the American Chemical Society’s (ACS) 254th National Meeting & Exposition, Aug. 20-24, 2017,

For a more plain language explanation, there’s an August 22, 2017 ACS news release (also on EurekAlert),

Photosynthesis provides energy for the vast majority of life on Earth. But chlorophyll, the green pigment that plants use to harvest sunlight, is relatively inefficient. To enable humans to capture more of the sun’s energy than natural photosynthesis can, scientists have taught bacteria to cover themselves in tiny, highly efficient solar panels to produce useful compounds.

“Rather than rely on inefficient chlorophyll to harvest sunlight, I’ve taught bacteria how to grow and cover their bodies with tiny semiconductor nanocrystals,” says Kelsey K. Sakimoto, Ph.D., who carried out the research in the lab of Peidong Yang, Ph.D. “These nanocrystals are much more efficient than chlorophyll and can be grown at a fraction of the cost of manufactured solar panels.”

Humans increasingly are looking to find alternatives to fossil fuels as sources of energy and feedstocks for chemical production. Many scientists have worked to create artificial photosynthetic systems to generate renewable energy and simple organic chemicals using sunlight. Progress has been made, but the systems are not efficient enough for commercial production of fuels and feedstocks.

Research in Yang’s lab at the University of California, Berkeley, where Sakimoto earned his Ph.D., focuses on harnessing inorganic semiconductors that can capture sunlight to organisms such as bacteria that can then use the energy to produce useful chemicals from carbon dioxide and water. “The thrust of research in my lab is to essentially ‘supercharge’ nonphotosynthetic bacteria by providing them energy in the form of electrons from inorganic semiconductors, like cadmium sulfide, that are efficient light absorbers,” Yang says. “We are now looking for more benign light absorbers than cadmium sulfide to provide bacteria with energy from light.”

Sakimoto worked with a naturally occurring, nonphotosynthetic bacterium, Moorella thermoacetica, which, as part of its normal respiration, produces acetic acid from carbon dioxide (CO2). Acetic acid is a versatile chemical that can be readily upgraded to a number of fuels, polymers, pharmaceuticals and commodity chemicals through complementary, genetically engineered bacteria.

When Sakimoto fed cadmium and the amino acid cysteine, which contains a sulfur atom, to the bacteria, they synthesized cadmium sulfide (CdS) nanoparticles, which function as solar panels on their surfaces. The hybrid organism, M. thermoacetica-CdS, produces acetic acid from CO2, water and light. “Once covered with these tiny solar panels, the bacteria can synthesize food, fuels and plastics, all using solar energy,” Sakimoto says. “These bacteria outperform natural photosynthesis.”

The bacteria operate at an efficiency of more than 80 percent, and the process is self-replicating and self-regenerating, making this a zero-waste technology. “Synthetic biology and the ability to expand the product scope of CO2 reduction will be crucial to poising this technology as a replacement, or one of many replacements, for the petrochemical industry,” Sakimoto says.

So, do the inorganic-biological hybrids have commercial potential? “I sure hope so!” he says. “Many current systems in artificial photosynthesis require solid electrodes, which is a huge cost. Our algal biofuels are much more attractive, as the whole CO2-to-chemical apparatus is self-contained and only requires a big vat out in the sun.” But he points out that the system still requires some tweaking to tune both the semiconductor and the bacteria. He also suggests that it is possible that the hybrid bacteria he created may have some naturally occurring analog. “A future direction, if this phenomenon exists in nature, would be to bioprospect for these organisms and put them to use,” he says.
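A quick way to get a feel for the carbon arithmetic in the release above is to work out the stoichiometry. The sketch below is my own illustration (not anything from the Berkeley group): it assumes the overall acetogenesis reaction, in which two molecules of CO2 are fixed for every molecule of acetic acid produced, and converts masses using standard molar values.

```python
# Back-of-envelope stoichiometry for CO2-to-acetic-acid conversion.
# Overall acetogenesis fixes two moles of CO2 per mole of acetic acid
# (2 CO2 + 8 [H] -> CH3COOH + 2 H2O). Illustration only; not a model
# of the M. thermoacetica-CdS hybrid itself.

M_CO2 = 44.01          # g/mol, carbon dioxide
M_ACETIC_ACID = 60.05  # g/mol, CH3COOH

def co2_per_kg_acetic_acid() -> float:
    """Kilograms of CO2 fixed per kilogram of acetic acid produced."""
    return (2 * M_CO2) / M_ACETIC_ACID

def acetic_acid_from_co2(kg_co2: float) -> float:
    """Kilograms of acetic acid obtainable from a given mass of CO2,
    assuming all fixed carbon ends up in acetate (an idealization)."""
    return kg_co2 / co2_per_kg_acetic_acid()

print(f"{co2_per_kg_acetic_acid():.2f} kg CO2 per kg acetic acid")        # ~1.47
print(f"1 tonne CO2 -> ~{acetic_acid_from_co2(1000):.0f} kg acetic acid")  # ~682
```

At that ratio, roughly 1.5 kg of CO2 is consumed for every kilogram of acetic acid, before accounting for any carbon the bacteria keep for themselves.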

For more insight into the work, check out Dexter Johnson’s Aug. 22, 2017 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website),

“It’s actually a natural, overlooked feature of their biology,” explains Sakimoto in an e-mail interview with IEEE Spectrum. “This bacterium has a detoxification pathway, meaning if it encounters a toxic metal, like cadmium, it will try to precipitate it out, thereby detoxifying it. So when we introduce cadmium ions into the growth medium in which M. thermoacetica is hanging out, it will convert the amino acid cysteine into sulfide, which precipitates out cadmium as cadmium sulfide. The crystals then assemble and stick onto the bacterium through normal electrostatic interactions.”

I’ve just excerpted one bit; there’s more in Dexter’s posting.

Science for the global citizen course at McMaster University in Winter 2018

It’s never too early to start planning for your course load if a June 20, 2017 McMaster University (Ontario, Canada) news release is to be believed,

How can science help address the key challenges in our society? How does society affect the way that science is conducted? Do citizens have a strong enough understanding of science and its methods to answer these and other similar questions? In the Winter 2018 term, the School of Interdisciplinary Science is offering Science 2M03: Science for the Global Citizen, a new course designed to explore those questions and more. In this blended-learning course, students from all Faculties will examine the links between science and the larger society through live guest lecturers and evidence-based online discussions. This course is open to students enrolled in Level II or above in any program. No scientific background is needed, only an interest in becoming a more engaged and informed citizen.

The new course will cover a broad range of contemporary scientific issues with significant political, economic, social, and health implications. Topics range from artificial intelligence (AI) to genetically modified organisms (GMOs) to space exploration.

Course instructors, Dr. Kim Dej, Dr. Chad Harvey, Dr. Rosa da Silva, and Dr. Sarah Symons, all from the School of Interdisciplinary Science, will examine the basic scientific theories and concepts behind these topical issues, and highlight the application and interpretation of science in popular media and public policy.

After taking this course, students from all academic backgrounds will have a better understanding of how science is conducted, how knowledge changes, and how we can become better consumers of scientific information and more informed citizens.

I’m glad to see this kind of course being offered. It does seem a bit odd that none of the instructors involved with this course appear to be from the social sciences or humanities. Drs. Dej, Harvey, and da Silva all have a background in biological sciences and Dr. Symons is a physicist. Taking another look at this line from the course description, “The new course will cover a broad range of contemporary scientific issues with significant political, economic, social, and health implications,” has me wondering how these scientists are going to cover the material, especially as I couldn’t find any papers on these topics written by any of these instructors. This section puzzles me even more, “… highlight the application and interpretation of science in popular media and public policy.” Again none of these instructors seem to have published on the topic of science in popular media or science public policy.

Guest speakers can help to fill in the blanks but with four instructors (and I would imagine a tight budget) it’s hard to believe there are going to be that many guests.

I appreciate that this is more of what they used to call a ‘survey course’, meant to introduce a number of ideas rather than convey any in-depth information, but I do find the instructors’ apparent lack of theoretical grounding in anything other than their respective fields of science somewhat disconcerting.

Regardless, I wish both the instructors and the students all the best.

Emerging technology and the law

I have three news bits about legal issues that are arising as a consequence of emerging technologies.

Deep neural networks, art, and copyright

Caption: The rise of automated art opens new creative avenues, coupled with new problems for copyright protection. Credit: Provided by: Alexander Mordvintsev, Christopher Olah and Mike Tyka

Presumably this artwork is a demonstration of automated art, although neither the news item nor the news release ever really explains how it was made. An April 26, 2017 news item on ScienceDaily announces research into copyright and the latest in using neural networks to create art,

In 1968, sociologist Jean Baudrillard wrote on automatism that “contained within it is the dream of a dominated world […] that serves an inert and dreamy humanity.”

With the growing popularity of Deep Neural Networks (DNN’s), this dream is fast becoming a reality.

Dr. Jean-Marc Deltorn, researcher at the Centre d’études internationales de la propriété intellectuelle in Strasbourg, argues that we must remain a responsive and responsible force in this process of automation — not inert dominators. As he demonstrates in a recent Frontiers in Digital Humanities paper, the dream of automation demands a careful study of the legal problems linked to copyright.

An April 26, 2017 Frontiers (publishing) news release on EurekAlert, which originated the news item, describes the research in more detail,

For more than half a century, artists have looked to computational processes as a way of expanding their vision. DNN’s are the culmination of this cross-pollination: by learning to identify a complex number of patterns, they can generate new creations.

These systems are made up of complex algorithms modeled on the transmission of signals between neurons in the brain.

DNN creations rely in equal measure on human inputs and the non-human algorithmic networks that process them.

Inputs are fed into the system, which is layered. Each layer provides an opportunity for a more refined knowledge of the inputs (shape, color, lines). Neural networks compare actual outputs to expected ones, and correct the predictive error through repetition and optimization. They train their own pattern recognition, thereby optimizing their learning curve and producing increasingly accurate outputs.

The deeper the layers are, the higher the level of abstraction. The highest layers are able to identify the contents of a given input with reasonable accuracy, after extended periods of training.

Creation thus becomes increasingly automated through what Deltorn calls “the arcane traceries of deep architecture”. The results are sufficiently abstracted from their sources to produce original creations that have been exhibited in galleries, sold at auction and performed at concerts.

The originality of DNN’s is a combined product of technological automation on one hand, human inputs and decisions on the other.

DNN’s are gaining popularity. Various platforms (such as DeepDream) now allow internet users to generate their very own new creations. This popularization of the automation process calls for a comprehensive legal framework that ensures a creator’s economic and moral rights with regards to his work – copyright protection.

Form, originality and attribution are the three requirements for copyright. And while DNN creations satisfy the first of these three, the claim to originality and attribution will depend largely on a given country’s legislation and on the traceability of the human creator.

Legislation usually sets a low threshold to originality. As DNN creations could in theory be able to create an endless number of riffs on source materials, the uncurbed creation of original works could inflate the existing number of copyright protections.

Additionally, a small number of national copyright laws confers attribution to what UK legislation defines loosely as “the person by whom the arrangements necessary for the creation of the work are undertaken.” In the case of DNN’s, this could mean anybody from the programmer to the user of a DNN interface.

Combined with an overly supple take on originality, this view on attribution would further increase the number of copyrightable works.

The risk, in both cases, is that artists will be less willing to publish their own works, for fear of infringement of DNN copyright protections.

In order to promote creativity – one seminal aim of copyright protection – the issue must be limited to creations that manifest a personal voice “and not just the electric glint of a computational engine,” to quote Deltorn. A delicate act of discernment.

DNN’s promise new avenues of creative expression for artists – with potential caveats. Copyright protection – a “catalyst to creativity” – must be contained. Many of us gently bask in the glow of an increasingly automated form of technology. But if we want to safeguard the ineffable quality that defines much art, it might be a good idea to hone in more closely on the differences between the electric and the creative spark.

This research is and will be part of a broader Frontiers Research Topic collection of articles on Deep Learning and Digital Humanities.
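The learning loop described in the release (feed inputs through layers, compare actual outputs to expected ones, and correct the predictive error through repetition and optimization) can be made concrete with a toy example. The sketch below is a minimal two-layer network trained on the XOR problem in plain NumPy; it illustrates the general training principle, not the deep architectures behind the artworks discussed in the paper.

```python
# Toy illustration of the compare-and-correct training loop described above:
# a tiny two-layer network learns XOR by repeatedly comparing its outputs
# to the expected outputs and adjusting its weights to shrink the error.
import numpy as np

rng = np.random.default_rng(0)

# Inputs and expected outputs for the XOR problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights for the two layers.
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 0.5
for step in range(20_000):
    # Forward pass: each layer produces a more refined representation.
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # Compare actual outputs to expected ones (the predictive error).
    error = output - y

    # Backward pass: correct the error through repetition and optimization.
    grad_output = error * output * (1 - output)
    grad_W2 = hidden.T @ grad_output
    grad_hidden = (grad_output @ W2.T) * hidden * (1 - hidden)
    grad_W1 = X.T @ grad_hidden

    W2 -= learning_rate * grad_W2
    W1 -= learning_rate * grad_W1

# After training, the outputs should approach [0, 1, 1, 0].
print(np.round(sigmoid(sigmoid(X @ W1) @ W2).ravel(), 2))
```

Deep networks stack many more such layers, which is where the ‘higher levels of abstraction’ mentioned in the release come from; the basic compare-and-correct loop stays the same.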

Here’s a link to and a citation for the paper,

Deep Creations: Intellectual Property and the Automata by Jean-Marc Deltorn. Front. Digit. Humanit., 01 February 2017 | https://doi.org/10.3389/fdigh.2017.00003

This paper is open access.

Conference on governance of emerging technologies

I received an April 17, 2017 notice via email about this upcoming conference. Here’s more from the Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics webpage,

The Fifth Annual Conference on Governance of Emerging Technologies:

Law, Policy and Ethics held at the new

Beus Center for Law & Society in Phoenix, AZ

May 17-19, 2017!

Call for Abstracts – Now Closed

The conference will consist of plenary and session presentations and discussions on regulatory, governance, legal, policy, social and ethical aspects of emerging technologies, including (but not limited to) nanotechnology, synthetic biology, gene editing, biotechnology, genomics, personalized medicine, human enhancement technologies, telecommunications, information technologies, surveillance technologies, geoengineering, neuroscience, artificial intelligence, and robotics. The conference is premised on the belief that there is much to be learned and shared from and across the governance experience and proposals for these various emerging technologies.

Keynote Speakers:

Gillian Hadfield, Richard L. and Antoinette Schamoi Kirtland Professor of Law and Professor of Economics, USC [University of Southern California] Gould School of Law

Shobita Parthasarathy, Associate Professor of Public Policy and Women’s Studies; Director, Science, Technology, and Public Policy Program, University of Michigan

Stuart Russell, Professor at [University of California] Berkeley, is a computer scientist known for his contributions to artificial intelligence

Craig Shank, Vice President for Corporate Standards Group in Microsoft’s Corporate, External and Legal Affairs (CELA)

Plenary Panels:

Innovation – Responsible and/or Permissionless

Ellen-Marie Forsberg, Senior Researcher/Research Manager at Oslo and Akershus University College of Applied Sciences

Adam Thierer, Senior Research Fellow with the Technology Policy Program at the Mercatus Center at George Mason University

Wendell Wallach, Consultant, ethicist, and scholar at Yale University’s Interdisciplinary Center for Bioethics

 Gene Drives, Trade and International Regulations

Greg Kaebnick, Director, Editorial Department; Editor, Hastings Center Report; Research Scholar, Hastings Center

Jennifer Kuzma, Goodnight-North Carolina GlaxoSmithKline Foundation Distinguished Professor in Social Sciences in the School of Public and International Affairs (SPIA) and co-director of the Genetic Engineering and Society (GES) Center at North Carolina State University

Andrew Maynard, Senior Sustainability Scholar, Julie Ann Wrigley Global Institute of Sustainability; Director, Risk Innovation Lab, School for the Future of Innovation in Society; Professor, School for the Future of Innovation in Society, Arizona State University

Gary Marchant, Regents’ Professor of Law; Faculty Director and Faculty Fellow, Center for Law, Science & Innovation, Arizona State University

Marc Saner, Inaugural Director of the Institute for Science, Society and Policy, and Associate Professor, University of Ottawa Department of Geography

Big Data

Anupam Chander, Martin Luther King, Jr. Professor of Law and Director, California International Law Center, UC Davis School of Law

Pilar Ossorio, Professor of Law and Bioethics, University of Wisconsin, School of Law and School of Medicine and Public Health; Morgridge Institute for Research, Ethics Scholar-in-Residence

George Poste, Chief Scientist, Complex Adaptive Systems Initiative (CASI) (http://www.casi.asu.edu/), Regents’ Professor and Del E. Webb Chair in Health Innovation, Arizona State University

Emily Shuckburgh, climate scientist and deputy head of the Polar Oceans Team at the British Antarctic Survey, University of Cambridge

 Responsible Development of AI

Spring Berman, Ira A. Fulton Schools of Engineering, Arizona State University

John Havens, The IEEE [Institute of Electrical and Electronics Engineers] Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems

Subbarao Kambhampati, Senior Sustainability Scientist, Julie Ann Wrigley Global Institute of Sustainability, Professor, School of Computing, Informatics and Decision Systems Engineering, Ira A. Fulton Schools of Engineering, Arizona State University

Wendell Wallach, Consultant, Ethicist, and Scholar at Yale University’s Interdisciplinary Center for Bioethics

Existential and Catastrophic Ricks [sic]

Tony Barrett, Co-Founder and Director of Research of the Global Catastrophic Risk Institute

Haydn Belfield, Academic Project Administrator, Centre for the Study of Existential Risk at the University of Cambridge

Margaret E. Kosal, Associate Director, Sam Nunn School of International Affairs, Georgia Institute of Technology

Catherine Rhodes, Academic Project Manager, Centre for the Study of Existential Risk (CSER), University of Cambridge

These are the panels of interest to me; there are others on the conference homepage.

Here’s some information from the Conference registration webpage,

Early Bird Registration – $50 off until May 1! Enter discount code: earlybirdGETs50

New: Group Discount – Register 2+ attendees together and receive an additional 20% off for all group members!

Click Here to Register!

Conference registration fees are as follows:

  • General (non-CLE) Registration: $150.00
  • CLE Registration: $350.00
  • *Current Student / ASU Law Alumni Registration: $50.00
  • ^Cybersecurity sessions only (May 19): $100 CLE / $50 General / Free for students (registration info coming soon)

There you have it.

Neuro-techno future laws

I’m pretty sure this isn’t the first exploration of potential legal issues arising from research into neuroscience although it’s the first one I’ve stumbled across. From an April 25, 2017 news item on phys.org,

New human rights laws to prepare for advances in neurotechnology that put the ‘freedom of the mind’ at risk have been proposed today in the open access journal Life Sciences, Society and Policy.

The authors of the study suggest four new human rights laws could emerge in the near future to protect against exploitation and loss of privacy. The four laws are: the right to cognitive liberty, the right to mental privacy, the right to mental integrity and the right to psychological continuity.

An April 25, 2017 Biomed Central news release on EurekAlert, which originated the news item, describes the work in more detail,

Marcello Ienca, lead author and PhD student at the Institute for Biomedical Ethics at the University of Basel, said: “The mind is considered to be the last refuge of personal freedom and self-determination, but advances in neural engineering, brain imaging and neurotechnology put the freedom of the mind at risk. Our proposed laws would give people the right to refuse coercive and invasive neurotechnology, protect the privacy of data collected by neurotechnology, and protect the physical and psychological aspects of the mind from damage by the misuse of neurotechnology.”

Advances in neurotechnology, such as sophisticated brain imaging and the development of brain-computer interfaces, have led to these technologies moving away from a clinical setting and into the consumer domain. While these advances may be beneficial for individuals and society, there is a risk that the technology could be misused and create unprecedented threats to personal freedom.

Professor Roberto Andorno, co-author of the research, explained: “Brain imaging technology has already reached a point where there is discussion over its legitimacy in criminal court, for example as a tool for assessing criminal responsibility or even the risk of reoffending. Consumer companies are using brain imaging for ‘neuromarketing’, to understand consumer behaviour and elicit desired responses from customers. There are also tools such as ‘brain decoders’ which can turn brain imaging data into images, text or sound. All of these could pose a threat to personal freedom which we sought to address with the development of four new human rights laws.”

The authors explain that as neurotechnology improves and becomes commonplace, there is a risk that the technology could be hacked, allowing a third-party to ‘eavesdrop’ on someone’s mind. In the future, a brain-computer interface used to control consumer technology could put the user at risk of physical and psychological damage caused by a third-party attack on the technology. There are also ethical and legal concerns over the protection of data generated by these devices that need to be considered.

International human rights laws make no specific mention to neuroscience, although advances in biomedicine have become intertwined with laws, such as those concerning human genetic data. Similar to the historical trajectory of the genetic revolution, the authors state that the on-going neurorevolution will force a reconceptualization of human rights laws and even the creation of new ones.

Marcello Ienca added: “Science-fiction can teach us a lot about the potential threat of technology. Neurotechnology featured in famous stories has in some cases already become a reality, while others are inching ever closer, or exist as military and commercial prototypes. We need to be prepared to deal with the impact these technologies will have on our personal freedom.”

Here’s a link to and a citation for the paper,

Towards new human rights in the age of neuroscience and neurotechnology by Marcello Ienca and Roberto Andorno. Life Sciences, Society and Policy 2017, 13:5 DOI: 10.1186/s40504-017-0050-1 Published: 26 April 2017

©  The Author(s). 2017

This paper is open access.

Semi-living gloves as sensors

Researchers at the Massachusetts Institute of Technology (MIT) are calling it a new ‘living material’ according to a Feb. 16, 2017 news item on Nanowerk,

Engineers and biologists at MIT have teamed up to design a new “living material” — a tough, stretchy, biocompatible sheet of hydrogel injected with live cells that are genetically programmed to light up in the presence of certain chemicals.

Researchers have found that the hydrogel’s mostly watery environment helps keep nutrients and programmed bacteria alive and active. When the bacteria react to a certain chemical, they are programmed to light up, as seen on the left. Courtesy of the researchers

A Feb. 15, 2017 MIT news release, which originated the news item, provides more information about this work,

In a paper published this week in the Proceedings of the National Academy of Sciences, the researchers demonstrate the new material’s potential for sensing chemicals, both in the environment and in the human body.

The team fabricated various wearable sensors from the cell-infused hydrogel, including a rubber glove with fingertips that glow after touching a chemically contaminated surface, and bandages that light up when pressed against chemicals on a person’s skin.

Xuanhe Zhao, the Robert N. Noyce Career Development associate professor of mechanical engineering at MIT, says the group’s living material design may be adapted to sense other chemicals and contaminants, for uses ranging from crime scene investigation and forensic science, to pollution monitoring and medical diagnostics.

“With this design, people can put different types of bacteria in these devices to indicate toxins in the environment, or disease on the skin,” says Timothy Lu, associate professor of biological engineering and of electrical engineering and computer science. “We’re demonstrating the potential for living materials and devices.”

The paper’s co-authors are graduate students Xinyue Liu, Tzu-Chieh Tang, Eleonore Tham, Hyunwoo Yuk, and Shaoting Lin.

Infusing life in materials

Lu and his colleagues in MIT’s Synthetic Biology Group specialize in creating biological circuits, genetically reprogramming the biological parts in living cells such as E. coli to work together in sequence, much like logic steps in an electrical circuit. In this way, scientists can reengineer living cells to carry out specific functions, including the ability to sense and signal the presence of viruses and toxins.

However, many of these newly programmed cells have only been demonstrated in situ, within Petri dishes, where scientists can carefully control the nutrient levels necessary to keep the cells alive and active — an environment that has proven extremely difficult to replicate in synthetic materials.

“The challenge to making living materials is how to maintain those living cells, to make them viable and functional in the device,” Lu says. “They require humidity, nutrients, and some require oxygen. The second challenge is how to prevent them from escaping from the material.”

To get around these roadblocks, others have used freeze-dried chemical extracts from genetically engineered cells, incorporating them into paper to create low-cost, virus-detecting diagnostic strips. But extracts, Lu says, are not the same as living cells, which can maintain their functionality over a longer period of time and may have higher sensitivity for detecting pathogens.

Other groups have seeded heart muscle cells onto thin rubber films to make soft, “living” actuators, or robots. When bent repeatedly, however, these films can crack, allowing the live cells to leak out.

A lively host

Zhao’s group in MIT’s Soft Active Materials Laboratory has developed a material that may be ideal for hosting living cells. For the past few years, his team has come up with various formulations of hydrogel — a tough, highly stretchable, biocompatible material made from a mix of polymer and water. Their latest designs have contained up to 95 percent water, providing an environment which Zhao and Lu recognized might be suitable for sustaining living cells. The material also resists cracking even when repeatedly stretched and pulled — a property that could help contain cells within the material.

The two groups teamed up to integrate Lu’s genetically programmed bacterial cells into Zhao’s sheets of hydrogel material. They first fabricated layers of hydrogel and patterned narrow channels within the layers using 3-D printing and micromolding techniques. They fused the hydrogel to a layer of elastomer, or rubber, that is porous enough to let in oxygen. They then injected E. coli cells into the hydrogel’s channels. The cells were programmed to fluoresce, or light up, when in contact with certain chemicals that pass through the hydrogel, in this case a natural compound known as DAPG (2,4-diacetylphloroglucinol).

The researchers then soaked the hydrogel/elastomer material in a bath of nutrients which infused throughout the hydrogel and helped to keep the bacterial cells alive and active for several days.

To demonstrate the material’s potential uses, the researchers first fabricated a sheet of the material with four separate, narrow channels, each containing a type of bacteria engineered to glow green in response to a different chemical compound. They found each channel reliably lit up when exposed to its respective chemical.

Next, the team fashioned the material into a bandage, or “living patch,” patterned with channels containing bacteria sensitive to rhamnose, a naturally occurring sugar. The researchers swabbed a volunteer’s wrist with a cotton ball soaked in rhamnose, then applied the hydrogel patch, which instantly lit up in response to the chemical.

Finally, the researchers fabricated a hydrogel/elastomer glove whose fingertips contained swirl-like channels, each of which they filled with different chemical-sensing bacterial cells. Each fingertip glowed in response to picking up a cotton ball soaked with a respective compound.

The group has also developed a theoretical model to help guide others in designing similar living materials and devices.

“The model helps us to design living devices more efficiently,” Zhao says. “It tells you things like the thickness of the hydrogel layer you should use, the distance between channels, how to pattern the channels, and how much bacteria to use.”

Ultimately, Zhao envisions products made from living materials, such as gloves and rubber soles lined with chemical-sensing hydrogel, or bandages, patches, and even clothing that may detect signs of infection or disease.
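The design questions Zhao mentions, such as how thick the hydrogel layer should be and how far apart to pattern the channels, hinge partly on how long a chemical takes to diffuse through the gel to reach the embedded bacteria. The group’s own theoretical model isn’t reproduced in the news release, so the snippet below is only a generic back-of-envelope estimate using the standard one-dimensional diffusion relation t ≈ L²/(2D); the diffusion coefficient is an assumed, typical value for a small molecule in a mostly-water gel, not a number from the paper.

```python
# Rough estimate of how long a small molecule takes to diffuse across a
# hydrogel layer, using the standard 1-D relation t ~ L^2 / (2 D).
# The diffusion coefficient is an assumed, typical order of magnitude for a
# small molecule in a mostly-water gel; it is not taken from the MIT paper.

def diffusion_time_seconds(thickness_m: float, D_m2_per_s: float = 5e-10) -> float:
    """Characteristic time to diffuse across a layer of the given thickness."""
    return thickness_m ** 2 / (2 * D_m2_per_s)

for thickness_um in (10, 100, 1000):  # 10 micrometres to 1 millimetre
    t = diffusion_time_seconds(thickness_um * 1e-6)
    print(f"{thickness_um:5d} um layer: ~{t:,.1f} s")
```

Even this crude estimate shows why layer thickness matters: going from a 100-micrometre layer to a 1-millimetre one stretches the response time from seconds to roughly a quarter of an hour.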

Here’s a link to and a citation for the paper,

Stretchable living materials and devices with hydrogel–elastomer hybrids hosting programmed cells by Xinyue Liu, Tzu-Chieh Tang, Eléonore Tham, Hyunwoo Yuk, Shaoting Lin, Timothy K. Lu, and Xuanhe Zhao. PNAS February 15, 2017 doi: 10.1073/pnas.1618307114 Published online before print February 15, 2017

This paper appears to be open access.

What’s a science historian doing in the field of synthetic biology?

Dominic Berry’s essay on why he, a science historian, is involved in a synthetic biology project takes some interesting twists and turns, from a Sept. 2, 2016 news item on phys.org,

What are synthetic biologists doing to plants, and what are plants doing to synthetic biology? This question frames a series of laboratory observations that I am pursuing across the UK as part of the Engineering Life project, which is dedicated to exploring what it might mean to engineer biology. I contribute to the project through a focus on plant scientists and my training in the history and philosophy of science. For plant scientists the engineering of biology can take many forms not all of which are captured by the category ‘synthetic biology’. Scientists that aim to create modified organisms are more inclined to refer to themselves as the latter, while other plant scientists will emphasise an integration of biological work with methods or techniques from engineering without adopting the identity of synthetic biologist. Accordingly, different legacies in the biosciences (from molecular biology to biomimetics) can be drawn upon depending on the features of the project at hand. These category and naming problems are all part of a larger set of questions that social and natural scientists continue to explore together. For the purposes of this post the distinctions between synthetic biology and the broader engineering of biology do not matter greatly, so I will simply refer to synthetic biology throughout.

Berry’s piece was originally posted Sept. 1, 2016 by Stephen Burgess on the PLOS (Public Library of Science) Synbio (Synthetic Biology blog). In this next bit Berry notes briefly why science historians and scientists might find interaction and collaboration fruitful (Note: Links have been removed),

It might seem strange that a historian is focused so closely on the present. However, I am not alone, and one recent author has picked out projects that suggest it is becoming a trend. This is only of interest for readers of the PLOS Synbio blog because it flags up that there are historians of science available for collaboration (hello!), and plenty of historical scholarship to draw upon to see your work in a new light, or rediscover forgotten research programs, or reconsider current practices, precisely as a recent Nature editorial emphasised for all sciences.

The May 17, 2016 Nature editorial ‘Second Thoughts’, mentioned in Berry’s piece, opens provocatively and continues in that vein (Note: A link has been removed),

The thought experiment has a noble place in research, but some thoughts are deemed more noble than others. Darwin and Einstein could let their minds wander and imagine the consequences of certain actions or natural laws. But scientists and historians who try to estimate what might have happened if, say, Darwin had fallen off the Beagle and drowned, are often accused of playing parlour games.

What if Darwin had toppled overboard before he joined the evolutionary dots? That discussion seems useful, because it raises interesting questions about the state of knowledge, then and now, and how it is communicated and portrayed. In his 2013 book Darwin Deleted — in which the young Charles is, indeed, lost in a storm — the historian Peter Bowler argued that the theory of evolution would have emerged just so, but with the pieces perhaps placed in a different order, and therefore less antagonistic to religious society.

In this week’s World View, another historian offers an alternative pathway for science: what if the ideas of Gregor Mendel on the inheritance of traits had been challenged more robustly and more successfully by a rival interpretation by the scientist W. F. R. Weldon? Gregory Radick argues that a twentieth-century genetics driven more by Weldon’s emphasis on environmental context would have weakened the dominance of the current misleading impression that nature always trumps nurture.

Here is Berry on the importance of questions,

The historian can ask: What traditions and legacies are these practitioners either building on or reacting against? How do these ideas cohere (or remain incoherent) for individuals and laboratories? Is a new way of understanding and investigating biology being created, and if so, where can we find evidence of it? Have biologists become increasingly concerned with controlling biological phenomena rather than understanding them? How does the desire to integrate engineering with biology sit within the long history of the establishment of biological science over the course of the 19th and 20th centuries?

Berry is an academic and his piece reflects an academic writing style with its complicated sentence structures and muted conclusions. If you have the patience, it is a good read on a topic that isn’t discussed all that often.

Doing math in a test tube using analog DNA

Basically, scientists at Duke University (US) have created an analog computer at the nanoscale, which can perform basic arithmetic. From an Aug. 23, 2016 news item on ScienceDaily,

Often described as the blueprint of life, DNA contains the instructions for making every living thing from a human to a house fly.

But in recent decades, some researchers have been putting the letters of the genetic code to a different use: making tiny nanoscale computers.

In a new study, a Duke University team led by professor John Reif created strands of synthetic DNA that, when mixed together in a test tube in the right concentrations, form an analog circuit that can add, subtract and multiply as they form and break bonds.

Rather than voltage, DNA circuits use the concentrations of specific DNA strands as signals.

An Aug. 23, 2016 Duke University news release (also on EurekAlert), which originated the news item, describes how most DNA-based circuits operate and what makes the one from Duke different,

Other teams have designed DNA-based circuits that can solve problems ranging from calculating square roots to playing tic-tac-toe. But most DNA circuits are digital, where information is encoded as a sequence of zeroes and ones.

Instead, the new Duke device performs calculations in an analog fashion by measuring the varying concentrations of specific DNA molecules directly, without requiring special circuitry to convert them to zeroes and ones first.

Unlike the silicon-based circuits used in most modern day electronics, commercial applications of DNA circuits are still a long way off, Reif said.

For one, the test tube calculations are slow. It can take hours to get an answer.

“We can do some limited computing, but we can’t even begin to think of competing with modern-day PCs or other conventional computing devices,” Reif said.

But DNA circuits can be far tinier than those made of silicon. And unlike electronic circuits, DNA circuits work in wet environments, which might make them useful for computing inside the bloodstream or the soupy, cramped quarters of the cell.

The technology takes advantage of DNA’s natural ability to zip and unzip to perform computations. Just like Velcro and magnets have complementary hooks or poles, the nucleotide bases of DNA pair up and bind in a predictable way.

The researchers first create short pieces of synthetic DNA, some single-stranded and some double-stranded with single-stranded ends, and mix them in a test tube.

When a single strand encounters a perfect match at the end of one of the partially double-stranded ones, it latches on and binds, displacing the previously bound strand and causing it to detach, like someone cutting in on a dancing couple.

The newly released strand can in turn pair up with other complementary DNA molecules downstream in the circuit, creating a domino effect.

The researchers solve math problems by measuring the concentrations of specific outgoing strands as the reaction reaches equilibrium.

To see how their circuit would perform over time as the reactions proceeded, Reif and Duke graduate student Tianqi Song used computer software to simulate the reactions over a range of input concentrations. They have also been testing the circuit experimentally in the lab.

Besides addition, subtraction and multiplication, the researchers are also designing more sophisticated analog DNA circuits that can do a wider range of calculations, such as logarithms and exponentials.

Conventional computers went digital decades ago. But for DNA computing, the analog approach has its advantages, the researchers say. For one, analog DNA circuits require fewer strands of DNA than digital ones, Song said.

Analog circuits are also better suited for sensing signals that don’t lend themselves to simple on-off, all-or-none values, such as vital signs and other physiological measurements involved in diagnosing and treating disease.

The hope is that, in the distant future, such devices could be programmed to sense whether particular blood chemicals lie inside or outside the range of values considered normal, and release a specific DNA or RNA — DNA’s chemical cousin — that has a drug-like effect.

Reif’s lab is also beginning to work on DNA-based devices that could detect molecular signatures of particular types of cancer cells, and release substances that spur the immune system to fight back.

“Even very simple DNA computing could still have huge impacts in medicine or science,” Reif said.
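To make the analog idea concrete, here is a toy simulation of how addition can emerge from strand displacement: two input strands each displace the same reporter strand from separate gate complexes, so when the reactions run to completion the reporter concentration approximately equals the sum of the two input concentrations. This is my own illustrative sketch with made-up rate constants and concentrations, not the circuit design reported by the Duke team.

```python
# Toy mass-action simulation of an analog "adder" built from strand
# displacement:
#   inputA + gateA -> output + wasteA
#   inputB + gateB -> output + wasteB
# With the gate complexes in excess and the reactions run to completion,
# the final concentration of the shared output strand is approximately
# [inputA]_0 + [inputB]_0. Rate constants and concentrations are made up
# for illustration; this is not the Duke group's circuit.

def simulate_adder(a0: float, b0: float, gate0: float = 200.0,
                   k: float = 1e-3, dt: float = 0.01, steps: int = 10_000) -> float:
    """Euler-integrate the two displacement reactions (concentrations in nM)."""
    a, b = a0, b0
    gate_a, gate_b = gate0, gate0
    output = 0.0
    for _ in range(steps):
        rate_a = k * a * gate_a   # nM/s consumed by the first reaction
        rate_b = k * b * gate_b   # nM/s consumed by the second reaction
        a -= rate_a * dt
        gate_a -= rate_a * dt
        b -= rate_b * dt
        gate_b -= rate_b * dt
        output += (rate_a + rate_b) * dt
    return output

print(round(simulate_adder(10.0, 25.0), 2))  # ~35.0 nM, i.e. 10 + 25
```

Subtraction and multiplication need more elaborate gate designs, but the readout principle is the same: the answer is a concentration rather than a string of bits.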

Here’s a link to and a citation for the paper,

Analog Computation by DNA Strand Displacement Circuits by Tianqi Song, Sudhanshu Garg, Reem Mokhtar, Hieu Bui, and John Reif. ACS Synth. Biol., 2016, 5 (8), pp 898–912 DOI: 10.1021/acssynbio.6b00144 Publication Date (Web): July 01, 2016

Copyright © 2016 American Chemical Society

This paper is behind a paywall.

Combat cells (Robot Wars for cells) and a plea from Concordia University

Students at Concordia University (located in Montréal, Québec, Canada) are requesting help (financial or laboratory supplies) for their submission to the 2016 iGEM (International Genetically Engineered Machine) competition.

Here’s a little about their entry (from a June 16, 2016 request received via email),

For this year’s project, we plan to design a biological system that mimics the concept of the popular TV series Robot Wars. We will be engineering cellular species to wear nanoparticles as battle shields and then use microfluidics to guide them through an obstacle course leading to a battledome, where both cells will engage into a duel. Essentially, we want to test the interactions between nanoparticles and cell membranes, as well as their protective abilities against varying environmental conditions and other equipped cells. The method in which we will adapt Robot Wars for synthetic biology is by creating a web series that will visualize the cell battle and communicate the research behind it. This web series will serve as an entertaining medium to educate and inspire the audience to develop an interest in science. We are incorporating the emerging fields of synthetic biology, nanotechnology and microfluidics to make this process possible. Furthermore, this study will contribute to the advancement of nanotechnology, an interdisciplinary field aiming to make applicable improvements in other fields such as medicine, optics and cosmetics.

Here’s a little more about iGEM (from the organization’s homepage),

The iGEM Foundation is dedicated to education and competition, advancement of synthetic biology, and the development of open community and collaboration.

The main program at the iGEM Foundation is the International Genetically Engineered Machine (iGEM) Competition. The iGEM Competition is the premiere student competition in Synthetic Biology. Since 2004, participants of the competition have experienced education, teamwork, sharing, and more in a unique competition setting.

The deadline for donations/sponsorships is the end of September 2016 and sponsors/donors will be acknowledged on “our website, all of our social media accounts (Facebook, Instagram, Twitter), at our community outreach events and at the competition [from the June 16, 2016 email].”

For more information contact:

Maria Salouros
iGEM Concordia
igem.concordia@gmail.com

Finally, there’s this:

We are excited to make this year’s project a reality and we are determined to win gold. Any help, either financially or by the donation of laboratory supplies, would contribute to the development of our project and would be greatly appreciated.

Good luck to the students! Hopefully one or more of my readers will be able to help. In which case, thank you!