Category Archives: intellectual property

Making a trademark claim memorable and fun

Usually when I write about intellectual property, it concerns technology and/or science disputes, but this particular response to an alleged trademark violation amuses me greatly, swipes at a few Canadian stereotypes, and could act as a model for anyone who wants to lodge such a protest. Before getting to the video, here are some details about the dispute from a July 13, 2017 posting by Mike Masnick for Techdirt,

… — a few years ago, there was a virally popular rap song and video, by Brendan “B.Rich” Richmond, called Out for a Rip, spoofing Canadian culture/stereotypes. It got over 12 million views, and has become a bit of an anthem.

So, yeah. Coca-Cola is using the phrase “out for a rip” on its Coke bottles, and Richmond and his lawyer Kittredge decided the best way to respond was to write a song calling out Coca-Cola on this and then record a whole video. At the end of the video there’s an actual letter (part of which is dictated in the song itself) which is also pretty damn amusing:

Dear Coke,

I represent Brendan (B.Rich) Richmond (a.k.a. Friggin’ Buddy). You jacked his catchphrase, but you already know that.

Buddy owns the registered trademark “OUT FOR A RIP” in Canada (TMA934277). The music video for buddy’s original composition “OUT FOR A RIP” has been viewed more than 12 million times. Canadians associate the phrase “OUT FOR A RIP” with him.

Personally, I’m pretty psyched about this once-in-a-career opportunity to send a demand letter in the form of a rap video. Nonetheless, unlicensed use of OUT FOR A RIP violates my client’s rights. From what I understand, you guys do fairly well for yourselves – at least in comparison to most other multinational corporations, the GDP of most countries, or, say, the average musician, right? No room in your budget to clear IP rights?

Contact me no later than August 1, 2017 to discuss settlement of this matter. If you do not wish to discuss settlement, we require that you immediately cease using the OUT FOR A RIP mark, recall all OUT FOR A RIP bottles, and take immediate steps to preserve all relevant evidence in anticipation of possible litigation.

Regards,
Rob Kittredge

….

Here’s the ‘cease and desist’ video,

Enjoy!

Robot artists—should they get copyright protection?

Clearly a lawyer wrote this June 26, 2017 essay on theconversation.com (Note: A link has been removed),

When a group of museums and researchers in the Netherlands unveiled a portrait entitled The Next Rembrandt, it was something of a tease to the art world. It wasn’t a long lost painting but a new artwork generated by a computer that had analysed thousands of works by the 17th-century Dutch artist Rembrandt Harmenszoon van Rijn.

The computer used something called machine learning [emphasis mine] to analyse and reproduce technical and aesthetic elements in Rembrandt’s works, including lighting, colour, brush-strokes and geometric patterns. The result is a portrait produced based on the styles and motifs found in Rembrandt’s art but produced by algorithms.

But who owns creative works generated by artificial intelligence? This isn’t just an academic question. AI is already being used to generate works in music, journalism and gaming, and these works could in theory be deemed free of copyright because they are not created by a human author.

This would mean they could be freely used and reused by anyone and that would be bad news for the companies selling them. Imagine you invest millions in a system that generates music for video games, only to find that music isn’t protected by law and can be used without payment by anyone in the world.

Unlike with earlier computer-generated works of art, machine learning software generates truly creative works without human input or intervention. AI is not just a tool. While humans program the algorithms, the decision making – the creative spark – comes almost entirely from the machine.

It could have been written by someone involved in the technology, but nobody with that background would write “… something called machine learning ….” Andres Guadamuz, lecturer in Intellectual Property Law at the University of Sussex, goes on to say (Note: Links have been removed),

That doesn’t mean that copyright should be awarded to the computer, however. Machines don’t (yet) have the rights and status of people under the law. But that doesn’t necessarily mean there shouldn’t be any copyright either. Not all copyright is owned by individuals, after all.

Companies are recognised as legal people and are often awarded copyright for works they don’t directly create. This occurs, for example, when a film studio hires a team to make a movie, or a website commissions a journalist to write an article. So it’s possible copyright could be awarded to the person (company or human) that has effectively commissioned the AI to produce work for it.

Things are likely to become yet more complex as AI tools are more commonly used by artists and as the machines get better at reproducing creativity, making it harder to discern if an artwork is made by a human or a computer. Monumental advances in computing and the sheer amount of computational power becoming available may well make the distinction moot. At that point, we will have to decide what type of protection, if any, we should give to emergent works created by intelligent algorithms with little or no human intervention.

The most sensible move seems to follow those countries that grant copyright to the person who made the AI’s operation possible, with the UK’s model looking like the most efficient. This will ensure companies keep investing in the technology, safe in the knowledge they will reap the benefits. What happens when we start seriously debating whether computers should be given the status and rights of people is a whole other story.

The team that developed a ‘new’ Rembrandt produced a video about the process,

Mark Brown’s April 5, 2016 article about this project (which was unveiled on April 5, 2016 in Amsterdam, Netherlands) for the Guardian newspaper provides more detail such as this,

It [Next Rembrandt project] is the result of an 18-month project which asks whether new technology and data can bring back to life one of the greatest, most innovative painters of all time.

Advertising executive [Bas] Korsten, whose brainchild the project was, admitted that there were many doubters. “The idea was greeted with a lot of disbelief and scepticism,” he said. “Also coming up with the idea is one thing, bringing it to life is another.”

The project has involved data scientists, developers, engineers and art historians from organisations including Microsoft, Delft University of Technology, the Mauritshuis in The Hague and the Rembrandt House Museum in Amsterdam.

The final 3D printed painting consists of more than 148 million pixels and is based on 168,263 Rembrandt painting fragments.

Some of the challenges have been in designing a software system that could understand Rembrandt based on his use of geometry, composition and painting materials. A facial recognition algorithm was then used to identify and classify the most typical geometric patterns used to paint human features.
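The project’s code has not been published, so purely as an illustrative sketch, the “identify and classify the most typical geometric patterns” step can be imagined as a nearest-centroid comparison: reduce each painted feature to a small vector of geometric measurements and assign it to the closest known pattern. All the feature names, numbers, and the `classify` helper below are invented for illustration and are not from the Next Rembrandt project.

```python
import math

# Hypothetical "typical pattern" centroids for painted facial features,
# each described by three invented geometric measurements:
# (width ratio, curvature, stroke angle). Purely illustrative values.
typical_patterns = {
    "eye":   [0.30, 0.80, 0.10],
    "nose":  [0.20, 0.40, 0.90],
    "mouth": [0.50, 0.60, 0.05],
}

def classify(fragment):
    """Assign a fragment's feature vector to the nearest typical pattern."""
    def dist(a, b):
        # Euclidean distance between two feature vectors.
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(typical_patterns, key=lambda name: dist(fragment, typical_patterns[name]))

# A fragment whose measurements sit close to the "eye" centroid.
print(classify([0.28, 0.78, 0.12]))
```

A real pipeline would learn such centroids from the hundreds of thousands of measured painting fragments rather than hard-coding them, but the classification principle is the same.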

It sounds like it was a fascinating project but I don’t believe ‘The Next Rembrandt’ is an example of AI creativity or of the ‘creative spark’ Guadamuz discusses. This seems more like the kind of work that could be done by a talented forger or fraudster. As I understand it, even when a human creates this type of artwork (a newly discovered and unknown xxx masterpiece), the piece is not considered a creative work in its own right. Some pieces are outright fraudulent, while others are described as “in the manner of xxx.”

Taking a somewhat different approach to mine, Timothy Geigner at Techdirt has also commented on the question of copyright and AI in relation to Guadamuz’s essay in a July 7, 2017 posting,

Unlike with earlier computer-generated works of art, machine learning software generates truly creative works without human input or intervention. AI is not just a tool. While humans program the algorithms, the decision making – the creative spark – comes almost entirely from the machine.

Let’s get the easy part out of the way: the culminating sentence in the quote above is not true. The creative spark is not the artistic output. Rather, the creative spark has always been known as the need to create in the first place. This isn’t a trivial quibble, either, as it factors into the simple but important reasoning for why AI and machines should certainly not receive copyright rights on their output.

That reasoning is the purpose of copyright law itself. Far too many see copyright as a reward system for those that create art rather than what it actually was meant to be: a boon to an artist to compensate for that artist to create more art for the benefit of the public as a whole. Artificial intelligence, however far progressed, desires only what it is programmed to desire. In whatever hierarchy of needs an AI might have, profit via copyright would factor either laughably low or not at all into its future actions. Future actions of the artist, conversely, are the only item on the agenda for copyright’s purpose. If receiving a copyright wouldn’t spur AI to create more art beneficial to the public, then copyright ought not to be granted.

In his July 7, 2017 posting, Geigner goes on to elucidate other issues with the ideas expressed in the general debate over AI and ‘rights’, as well as the EU’s solution.

The ultimate natural sunscreen

For those of us in the northern hemisphere, sunscreen season is on the horizon. While the “ultimate natural sunscreen” researchers from the University of California at San Diego (UCSD) have developed is a long way from the marketplace, this is encouraging news (from a May 17, 2017 news item on Nanowerk),

Chemists, materials scientists and nanoengineers at UC San Diego have created what may be the ultimate natural sunscreen.

In a paper published in the American Chemical Society journal ACS Central Science, they report the development of nanoparticles that mimic the behavior of natural melanosomes, melanin-producing cell structures that protect our skin, eyes and other tissues from the harmful effects of ultraviolet radiation.

“Basically, we succeeded in making a synthetic version of the nanoparticles that our skin uses to produce and store melanin and demonstrated in experiments in skin cells that they mimic the behavior of natural melanosomes,” said Nathan Gianneschi, a professor of chemistry and biochemistry, materials science and engineering and nanoengineering at UC San Diego, who headed the team of researchers. The achievement has practical applications.

A May 17, 2017 UCSD news release, which originated the news item, delves into the research,

“Defects in melanin production in humans can cause diseases such as vitiligo and albinism that lack effective treatments,” Gianneschi added.

Vitiligo develops when the immune system wrongly attempts to clear normal melanocytes from the skin, effectively stopping the production of melanin. Albinism is due to genetic defects that lead to either the absence of, or a chemical defect in, tyrosinase, a copper-containing enzyme involved in the production of melanin. Both of these diseases lack effective treatments and result in a significant risk of skin cancer for patients.

“The widespread prevalence of these melanin-related diseases and an increasing interest in the performance of various polymeric materials related to melanin prompted us to look for novel synthetic routes for preparing melanin-like materials,” Gianneschi said.

Caption: The scientists found that the synthetic nanoparticles were taken up in tissue culture by keratinocytes, the predominant cell type found in the epidermis, the outer layer of skin. Photo by Yuran Huang and Ying Jones/UC San Diego

Melanin particles are produced naturally in many different sizes and shapes by animals—for iridescent feathers in birds or the pigmented eyes and skin of some reptiles. But scientists have discovered that extracting melanins from natural sources is a difficult and potentially more complex process than producing them synthetically.

Gianneschi and his team discovered two years ago that synthetic melanin-like nanoparticles could be developed in a precisely controllable manner to mimic the performance of natural melanins used in bird feathers.

“We hypothesized that synthetic melanin-like nanoparticles would mimic naturally occurring melanosomes and be taken up by keratinocytes, the predominant cell type found in the epidermis, the outer layer of skin,” said Gianneschi.

In healthy humans, melanin is delivered to keratinocytes in the skin after being excreted as melanosomes from melanocytes.

The UC San Diego scientists prepared melanin-like nanoparticles through the spontaneous oxidation of dopamine—developing biocompatible, synthetic analogues of naturally occurring melanosomes. Then they studied their uptake, transport, distribution and ultraviolet radiation-protective capabilities in human keratinocytes in tissue culture.

The researchers found that these synthetic nanoparticles were not only taken up and distributed normally, like natural melanosomes, within the keratinocytes, they protected the skin cells from DNA damage due to ultraviolet radiation.

“Considering limitations in the treatment of melanin-defective related diseases and the biocompatibility of these synthetic melanin-like nanoparticles in terms of uptake and degradation, these systems have potential as artificial melanosomes for the development of novel therapies, possibly supplementing the biological functions of natural melanins,” the researchers said in their paper.

The other co-authors of the study were Yuran Huang and Ziying Hu of UC San Diego’s Materials Science and Engineering Program, Yiwen Li and Maria Proetto of the Department of Chemistry and Biochemistry; Xiujun Yue of the Department of Nanoengineering; and Ying Jones of the Electron Microscopy Core Facility.

The UC San Diego Office of Innovation and Commercialization has filed a patent application on the use of polydopamine-based artificial melanins as an intracellular UV-shield. Companies interested in commercializing this invention should contact Skip Cynar at invent@ucsd.edu.

Here’s a link to and a citation for the paper,

Mimicking Melanosomes: Polydopamine Nanoparticles as Artificial Microparasols by Yuran Huang, Yiwen Li, Ziying Hu, Xiujun Yue, Maria T. Proetto, Ying Jones, and Nathan C. Gianneschi. ACS Cent. Sci., Article ASAP. DOI: 10.1021/acscentsci.6b00230. Publication Date (Web): May 18, 2017

Copyright © 2017 American Chemical Society

This is an open access paper.

Patent Politics: a June 23, 2017 book launch at the Wilson Center (Washington, DC)

I received a June 12, 2017 notice (via email) from the Wilson Center (also known as the Woodrow Wilson International Center for Scholars) about a book examining patents and policies in the United States and in Europe and its upcoming launch,

Patent Politics: Life Forms, Markets, and the Public Interest in the United States and Europe

Over the past thirty years, the world’s patent systems have experienced pressure from civil society like never before. From farmers to patient advocates, new voices are arguing that patents impact public health, economic inequality, morality—and democracy. These challenges, to domains that we usually consider technical and legal, may seem surprising. But in Patent Politics, Shobita Parthasarathy argues that patent systems have always been deeply political and social.

To demonstrate this, Parthasarathy takes readers through a particularly fierce and prolonged set of controversies over patents on life forms linked to important advances in biology and agriculture and potentially life-saving medicines. Comparing battles over patents on animals, human embryonic stem cells, human genes, and plants in the United States and Europe, she shows how political culture, ideology, and history shape patent system politics. Clashes over whose voices and which values matter in the patent system, as well as what counts as knowledge and whose expertise is important, look quite different in these two places. And through these debates, the United States and Europe are developing very different approaches to patent and innovation governance. Not just the first comprehensive look at the controversies swirling around biotechnology patents, Patent Politics is also the first in-depth analysis of the political underpinnings and implications of modern patent systems, and provides a timely analysis of how we can reform these systems around the world to maximize the public interest.

Join us on June 23 [2017] from 4-6 pm [elsewhere the time is listed as 4-7 pm] for a discussion on the role of the patent system in governing emerging technologies, marking the launch of Shobita Parthasarathy’s Patent Politics: Life Forms, Markets, and the Public Interest in the United States and Europe (University of Chicago Press, 2017).

You can find more information such as this on the Patent Politics event page,

Speakers

Keynote


  • Shobita Parthasarathy

    Fellow
    Associate Professor of Public Policy and Women’s Studies, and Director of the Science, Technology, and Public Policy Program, at University of Michigan

Moderator


  • Eleonore Pauwels

    Senior Program Associate and Director of Biology Collectives, Science and Technology Innovation Program
    Formerly European Commission, Directorate-General for Research and Technological Development, Directorate on Science, Economy and Society

Panelists


  • Daniel Sarewitz

    Co-Director, Consortium for Science, Policy & Outcomes; Professor of Science and Society, School for the Future of Innovation in Society

  • Richard Harris

    Award-winning journalist, National Public Radio; author of “Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions”

For those who cannot attend in person, there will be a live webcast. If you can be there in person, you can RSVP here. (Note: The time frame for the event is listed in some places as 4-7 pm.) I cannot find any reason for the time frame disparity. My best guess is that the discussion is scheduled for two hours, with a one-hour reception afterwards for those who can attend in person.

Emerging technology and the law

I have three news bits about legal issues that are arising as a consequence of emerging technologies.

Deep neural networks, art, and copyright

Caption: The rise of automated art opens new creative avenues, coupled with new problems for copyright protection. Credit: Provided by: Alexander Mordvintsev, Christopher Olah and Mike Tyka

Presumably this artwork is a demonstration of automated art, although neither the news item nor the news release really explains how. An April 26, 2017 news item on ScienceDaily announces research into copyright and the latest in using neural networks to create art,

In 1968, sociologist Jean Baudrillard wrote on automatism that “contained within it is the dream of a dominated world […] that serves an inert and dreamy humanity.”

With the growing popularity of Deep Neural Networks (DNNs), this dream is fast becoming a reality.

Dr. Jean-Marc Deltorn, researcher at the Centre d’études internationales de la propriété intellectuelle in Strasbourg, argues that we must remain a responsive and responsible force in this process of automation — not inert dominators. As he demonstrates in a recent Frontiers in Digital Humanities paper, the dream of automation demands a careful study of the legal problems linked to copyright.

An April 26, 2017 Frontiers (publishing) news release on EurekAlert, which originated the news item, describes the research in more detail,

For more than half a century, artists have looked to computational processes as a way of expanding their vision. DNNs are the culmination of this cross-pollination: by learning to identify a complex number of patterns, they can generate new creations.

These systems are made up of complex algorithms modeled on the transmission of signals between neurons in the brain.

DNN creations rely in equal measure on human inputs and the non-human algorithmic networks that process them.

Inputs are fed into the system, which is layered. Each layer provides an opportunity for a more refined knowledge of the inputs (shape, color, lines). Neural networks compare actual outputs to expected ones, and correct the predictive error through repetition and optimization. They train their own pattern recognition, thereby optimizing their learning curve and producing increasingly accurate outputs.

The deeper the layers are, the higher the level of abstraction. The highest layers are able to identify the contents of a given input with reasonable accuracy, after extended periods of training.
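The compare-and-correct loop described above can be made concrete with a tiny network. This is a generic sketch, not code from Deltorn’s paper: a two-layer network learns the XOR pattern by repeatedly comparing its actual output to the expected one and nudging its weights to shrink the predictive error.

```python
import math
import random

random.seed(42)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# A tiny network: 2 inputs -> 3 hidden units -> 1 output.
# Each hidden row holds two input weights plus a bias;
# out_w holds one weight per hidden unit plus a bias.
H = 3
hidden_w = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]
out_w = [random.uniform(-1, 1) for _ in range(H + 1)]

def forward(x):
    """Feed an input through both layers; return (hidden activations, output)."""
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in hidden_w]
    o = sigmoid(sum(out_w[i] * h[i] for i in range(H)) + out_w[H])
    return h, o

# XOR: a pattern no single-layer network can represent.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def total_error():
    return sum((target - forward(x)[1]) ** 2 for x, target in data)

before = total_error()

lr = 0.5
for _ in range(10000):
    for x, target in data:
        h, o = forward(x)
        # Compare the actual output to the expected one ...
        err = target - o
        # ... and correct the predictive error by adjusting every weight.
        d_o = err * o * (1 - o)
        for i in range(H):
            d_h = d_o * out_w[i] * h[i] * (1 - h[i])
            hidden_w[i][0] += lr * d_h * x[0]
            hidden_w[i][1] += lr * d_h * x[1]
            hidden_w[i][2] += lr * d_h
        for i in range(H):
            out_w[i] += lr * d_o * h[i]
        out_w[H] += lr * d_o

after = total_error()
print(f"squared error before training: {before:.3f}, after: {after:.3f}")
```

Deep networks stack many more such layers, which is where the increasing abstraction described in the release comes from.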

Creation thus becomes increasingly automated through what Deltorn calls “the arcane traceries of deep architecture”. The results are sufficiently abstracted from their sources to produce original creations that have been exhibited in galleries, sold at auction and performed at concerts.

The originality of DNNs is a combined product of technological automation on one hand and human inputs and decisions on the other.

DNNs are gaining popularity. Various platforms (such as DeepDream) now allow internet users to generate their very own new creations. This popularization of the automation process calls for a comprehensive legal framework that ensures a creator’s economic and moral rights with regard to his work – copyright protection.

Form, originality and attribution are the three requirements for copyright. And while DNN creations satisfy the first of these three, the claim to originality and attribution will depend largely on a given country’s legislation and on the traceability of the human creator.

Legislation usually sets a low threshold for originality. As DNN creations could in theory produce an endless number of riffs on source materials, the uncurbed creation of original works could inflate the existing number of copyright protections.

Additionally, a small number of national copyright laws confers attribution to what UK legislation defines loosely as “the person by whom the arrangements necessary for the creation of the work are undertaken.” In the case of DNNs, this could mean anybody from the programmer to the user of a DNN interface.

Combined with an overly supple take on originality, this view on attribution would further increase the number of copyrightable works.

The risk, in both cases, is that artists will be less willing to publish their own works, for fear of infringement of DNN copyright protections.

In order to promote creativity – one seminal aim of copyright protection – the issue must be limited to creations that manifest a personal voice “and not just the electric glint of a computational engine,” to quote Deltorn. A delicate act of discernment.

DNNs promise new avenues of creative expression for artists – with potential caveats. Copyright protection – a “catalyst to creativity” – must be contained. Many of us gently bask in the glow of an increasingly automated form of technology. But if we want to safeguard the ineffable quality that defines much art, it might be a good idea to home in more closely on the differences between the electric and the creative spark.

This research is, and will be, part of a broader Frontiers Research Topic collection of articles on Deep Learning and Digital Humanities.

Here’s a link to and a citation for the paper,

Deep Creations: Intellectual Property and the Automata by Jean-Marc Deltorn. Front. Digit. Humanit., 01 February 2017 | https://doi.org/10.3389/fdigh.2017.00003

This paper is open access.

Conference on governance of emerging technologies

I received an April 17, 2017 notice via email about this upcoming conference. Here’s more from the Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics webpage,

The Fifth Annual Conference on Governance of Emerging Technologies:

Law, Policy and Ethics held at the new

Beus Center for Law & Society in Phoenix, AZ

May 17-19, 2017!

Call for Abstracts – Now Closed

The conference will consist of plenary and session presentations and discussions on regulatory, governance, legal, policy, social and ethical aspects of emerging technologies, including (but not limited to) nanotechnology, synthetic biology, gene editing, biotechnology, genomics, personalized medicine, human enhancement technologies, telecommunications, information technologies, surveillance technologies, geoengineering, neuroscience, artificial intelligence, and robotics. The conference is premised on the belief that there is much to be learned and shared from and across the governance experience and proposals for these various emerging technologies.

Keynote Speakers:

Gillian Hadfield, Richard L. and Antoinette Schamoi Kirtland Professor of Law and Professor of Economics, USC [University of Southern California] Gould School of Law

Shobita Parthasarathy, Associate Professor of Public Policy and Women’s Studies, Director, Science, Technology, and Public Policy Program, University of Michigan

Stuart Russell, Professor at [University of California] Berkeley, is a computer scientist known for his contributions to artificial intelligence

Craig Shank, Vice President for Corporate Standards Group in Microsoft’s Corporate, External and Legal Affairs (CELA)

Plenary Panels:

Innovation – Responsible and/or Permissionless

Ellen-Marie Forsberg, Senior Researcher/Research Manager at Oslo and Akershus University College of Applied Sciences

Adam Thierer, Senior Research Fellow with the Technology Policy Program at the Mercatus Center at George Mason University

Wendell Wallach, Consultant, ethicist, and scholar at Yale University’s Interdisciplinary Center for Bioethics

 Gene Drives, Trade and International Regulations

Greg Kaebnick, Director, Editorial Department; Editor, Hastings Center Report; Research Scholar, Hastings Center

Jennifer Kuzma, Goodnight-North Carolina GlaxoSmithKline Foundation Distinguished Professor in Social Sciences in the School of Public and International Affairs (SPIA) and co-director of the Genetic Engineering and Society (GES) Center at North Carolina State University

Andrew Maynard, Senior Sustainability Scholar, Julie Ann Wrigley Global Institute of Sustainability; Director, Risk Innovation Lab; Professor, School for the Future of Innovation in Society, Arizona State University

Gary Marchant, Regents’ Professor of Law; Faculty Director and Faculty Fellow, Center for Law, Science & Innovation, Arizona State University

Marc Saner, Inaugural Director of the Institute for Science, Society and Policy, and Associate Professor, University of Ottawa Department of Geography

Big Data

Anupam Chander, Martin Luther King, Jr. Professor of Law and Director, California International Law Center, UC Davis School of Law

Pilar Ossorio, Professor of Law and Bioethics, University of Wisconsin, School of Law and School of Medicine and Public Health; Morgridge Institute for Research, Ethics Scholar-in-Residence

George Poste, Chief Scientist, Complex Adaptive Systems Initiative (CASI) (http://www.casi.asu.edu/), Regents’ Professor and Del E. Webb Chair in Health Innovation, Arizona State University

Emily Shuckburgh, climate scientist and deputy head of the Polar Oceans Team at the British Antarctic Survey, University of Cambridge

 Responsible Development of AI

Spring Berman, Ira A. Fulton Schools of Engineering, Arizona State University

John Havens, The IEEE [Institute of Electrical and Electronics Engineers] Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems

Subbarao Kambhampati, Senior Sustainability Scientist, Julie Ann Wrigley Global Institute of Sustainability; Professor, School of Computing, Informatics and Decision Systems Engineering, Ira A. Fulton Schools of Engineering, Arizona State University

Wendell Wallach, Consultant, Ethicist, and Scholar at Yale University’s Interdisciplinary Center for Bioethics

Existential and Catastrophic Ricks [sic]

Tony Barrett, Co-Founder and Director of Research of the Global Catastrophic Risk Institute

Haydn Belfield, Academic Project Administrator, Centre for the Study of Existential Risk at the University of Cambridge

Margaret E. Kosal, Associate Director, Sam Nunn School of International Affairs, Georgia Institute of Technology

Catherine Rhodes, Academic Project Manager, Centre for the Study of Existential Risk (CSER), University of Cambridge

These are the panels of interest to me; there are others on the conference homepage.

Here’s some information from the Conference registration webpage,

Early Bird Registration – $50 off until May 1! Enter discount code: earlybirdGETs50

New: Group Discount – Register 2+ attendees together and receive an additional 20% off for all group members!

Click Here to Register!

Conference registration fees are as follows:

  • General (non-CLE) Registration: $150.00
  • CLE Registration: $350.00
  • *Current Student / ASU Law Alumni Registration: $50.00
  • ^Cybersecurity sessions only (May 19): $100 CLE / $50 General / Free for students (registration info coming soon)

There you have it.

Neuro-techno future laws

I’m pretty sure this isn’t the first exploration of potential legal issues arising from research into neuroscience although it’s the first one I’ve stumbled across. From an April 25, 2017 news item on phys.org,

New human rights laws to prepare for advances in neurotechnology that put the ‘freedom of the mind’ at risk have been proposed today in the open access journal Life Sciences, Society and Policy.

The authors of the study suggest four new human rights laws could emerge in the near future to protect against exploitation and loss of privacy. The four laws are: the right to cognitive liberty, the right to mental privacy, the right to mental integrity and the right to psychological continuity.

An April 25, 2017 Biomed Central news release on EurekAlert, which originated the news item, describes the work in more detail,

Marcello Ienca, lead author and PhD student at the Institute for Biomedical Ethics at the University of Basel, said: “The mind is considered to be the last refuge of personal freedom and self-determination, but advances in neural engineering, brain imaging and neurotechnology put the freedom of the mind at risk. Our proposed laws would give people the right to refuse coercive and invasive neurotechnology, protect the privacy of data collected by neurotechnology, and protect the physical and psychological aspects of the mind from damage by the misuse of neurotechnology.”

Advances in neurotechnology, such as sophisticated brain imaging and the development of brain-computer interfaces, have led to these technologies moving away from a clinical setting and into the consumer domain. While these advances may be beneficial for individuals and society, there is a risk that the technology could be misused and create unprecedented threats to personal freedom.

Professor Roberto Andorno, co-author of the research, explained: “Brain imaging technology has already reached a point where there is discussion over its legitimacy in criminal court, for example as a tool for assessing criminal responsibility or even the risk of reoffending. Consumer companies are using brain imaging for ‘neuromarketing’, to understand consumer behaviour and elicit desired responses from customers. There are also tools such as ‘brain decoders’ which can turn brain imaging data into images, text or sound. All of these could pose a threat to personal freedom which we sought to address with the development of four new human rights laws.”

The authors explain that as neurotechnology improves and becomes commonplace, there is a risk that the technology could be hacked, allowing a third-party to ‘eavesdrop’ on someone’s mind. In the future, a brain-computer interface used to control consumer technology could put the user at risk of physical and psychological damage caused by a third-party attack on the technology. There are also ethical and legal concerns over the protection of data generated by these devices that need to be considered.

International human rights laws make no specific mention of neuroscience, although advances in biomedicine have become intertwined with laws, such as those concerning human genetic data. Similar to the historical trajectory of the genetic revolution, the authors state that the ongoing neurorevolution will force a reconceptualization of human rights laws and even the creation of new ones.

Marcello Ienca added: “Science-fiction can teach us a lot about the potential threat of technology. Neurotechnology featured in famous stories has in some cases already become a reality, while others are inching ever closer, or exist as military and commercial prototypes. We need to be prepared to deal with the impact these technologies will have on our personal freedom.”

Here’s a link to and a citation for the paper,

Towards new human rights in the age of neuroscience and neurotechnology by Marcello Ienca and Roberto Andorno. Life Sciences, Society and Policy 2017 13:5 DOI: 10.1186/s40504-017-0050-1 Published: 26 April 2017

©  The Author(s). 2017

This paper is open access.

The Canadian science scene and the 2017 Canadian federal budget

There’s not much happening in the 2017-18 budget in terms of new spending according to Paul Wells’ March 22, 2017 article for TheStar.com,

This is the 22nd or 23rd federal budget I’ve covered. And I’ve never seen the like of the one Bill Morneau introduced on Wednesday [March 22, 2017].

Not even in the last days of the Harper Conservatives did a budget provide for so little new spending — $1.3 billion in the current budget year, total, in all fields of government. That’s a little less than half of one per cent of all federal program spending for this year.

But times are tight. The future is a place where we can dream. So the dollars flow more freely in later years. In 2021-22, the budget’s fifth planning year, new spending peaks at $8.2 billion. Which will be about 2.4 per cent of all program spending.

He’s not alone in this 2017 federal budget analysis; CBC (Canadian Broadcasting Corporation) pundits Chantal Hébert, Andrew Coyne, and Jennifer Ditchburn said much the same during their ‘At Issue’ segment of the March 22, 2017 broadcast of The National (news).

Before I focus on the science and technology budget, here are some general highlights from the CBC’s March 22, 2017 article on the 2017-18 budget announcement (Note: Links have been removed),

Here are highlights from the 2017 federal budget:

  • Deficit: $28.5 billion, up from $25.4 billion projected in the fall.
  • Trend: Deficits gradually decline over next five years — but still at $18.8 billion in 2021-22.
  • Housing: $11.2 billion over 11 years, already budgeted, will go to a national housing strategy.
  • Child care: $7 billion over 10 years, already budgeted, for new spaces, starting 2018-19.
  • Indigenous: $3.4 billion in new money over five years for infrastructure, health and education.
  • Defence: $8.4 billion in capital spending for equipment pushed forward to 2035.
  • Care givers: New care-giving benefit up to 15 weeks, starting next year.
  • Skills: New agency to research and measure skills development, starting 2018-19.
  • Innovation: $950 million over five years to support business-led “superclusters.”
  • Startups: $400 million over three years for a new venture capital catalyst initiative.
  • AI: $125 million to launch a pan-Canadian Artificial Intelligence Strategy.
  • Coding kids: $50 million over two years for initiatives to teach children to code.
  • Families: Option to extend parental leave up to 18 months.
  • Uber tax: GST to be collected on ride-sharing services.
  • Sin taxes: One cent more on a bottle of wine, five cents on a case of 24 beers.
  • Bye-bye: No more Canada Savings Bonds.
  • Transit credit killed: 15 per cent non-refundable public transit tax credit phased out this year.

You can find the entire 2017-18 budget here.

Science and the 2017-18 budget

For anyone interested in the science news, you’ll find most of that in the 2017 budget’s Chapter 1 — Skills, Innovation and Middle Class jobs. As well, Wayne Kondro has written up a précis in his March 22, 2017 article for Science (magazine),

Finance officials, who speak on condition of anonymity during the budget lock-up, indicated the budgets of the granting councils, the main source of operational grants for university researchers, will be “static” until the government can assess recommendations that emerge from an expert panel formed in 2015 and headed by former University of Toronto President David Naylor to review basic science in Canada [highlighted in my June 15, 2016 posting; $2M has been allocated for the advisor and associated secretariat]. Until then, the officials said, funding for the Natural Sciences and Engineering Research Council of Canada (NSERC) will remain at roughly $848 million, whereas that for the Canadian Institutes of Health Research (CIHR) will remain at $773 million, and for the Social Sciences and Humanities Research Council [SSHRC] at $547 million.

NSERC, though, will receive $8.1 million over 5 years to administer a PromoScience Program that introduces youth, particularly underrepresented groups like Aboriginal people and women, to science, technology, engineering, and mathematics through measures like “space camps and conservation projects.” CIHR, meanwhile, could receive modest amounts from separate plans to identify climate change health risks and to reduce drug and substance abuse, the officials added.

… Canada’s Innovation and Skills Plan, would funnel $600 million over 5 years allocated in 2016, and $112.5 million slated for public transit and green infrastructure, to create Silicon Valley–like “super clusters,” which the budget defined as “dense areas of business activity that contain large and small companies, post-secondary institutions and specialized talent and infrastructure.” …

… The Canadian Institute for Advanced Research will receive $93.7 million [emphasis mine] to “launch a Pan-Canadian Artificial Intelligence Strategy … (to) position Canada as a world-leading destination for companies seeking to invest in artificial intelligence and innovation.”

… Among more specific measures are vows to: Use $87.7 million in previous allocations to the Canada Research Chairs program to create 25 “Canada 150 Research Chairs” honoring the nation’s 150th year of existence, provide $1.5 million per year to support the operations of the office of the as-yet-unappointed national science adviser [see my Dec. 7, 2016 post for information about the job posting, which is now closed]; provide $165.7 million [emphasis mine] over 5 years for the nonprofit organization Mitacs to create roughly 6300 more co-op positions for university students and grads, and provide $60.7 million over five years for new Canadian Space Agency projects, particularly for Canadian participation in the National Aeronautics and Space Administration’s next Mars Orbiter Mission.

Kondro was either reading an earlier version of the budget or made an error regarding Mitacs (from the budget in the “A New, Ambitious Approach to Work-Integrated Learning” subsection),

Mitacs has set an ambitious goal of providing 10,000 work-integrated learning placements for Canadian post-secondary students and graduates each year—up from the current level of around 3,750 placements. Budget 2017 proposes to provide $221 million [emphasis mine] over five years, starting in 2017–18, to achieve this goal and provide relevant work experience to Canadian students.

As well, the budget item for the Pan-Canadian Artificial Intelligence Strategy is $125M.

Moving on from Kondro’s précis, the budget (in the “Positioning National Research Council Canada Within the Innovation and Skills Plan” subsection) announces support for these specific areas of science,

Stem Cell Research

The Stem Cell Network, established in 2001, is a national not-for-profit organization that helps translate stem cell research into clinical applications, commercial products and public policy. Its research holds great promise, offering the potential for new therapies and medical treatments for respiratory and heart diseases, cancer, diabetes, spinal cord injury, multiple sclerosis, Crohn’s disease, auto-immune disorders and Parkinson’s disease. To support this important work, Budget 2017 proposes to provide the Stem Cell Network with renewed funding of $6 million in 2018–19.

Space Exploration

Canada has a long and proud history as a space-faring nation. As our international partners prepare to chart new missions, Budget 2017 proposes investments that will underscore Canada’s commitment to innovation and leadership in space. Budget 2017 proposes to provide $80.9 million on a cash basis over five years, starting in 2017–18, for new projects through the Canadian Space Agency that will demonstrate and utilize Canadian innovations in space, including in the field of quantum technology as well as for Mars surface observation. The latter project will enable Canada to join the National Aeronautics and Space Administration’s (NASA’s) next Mars Orbiter Mission.

Quantum Information

The development of new quantum technologies has the potential to transform markets, create new industries and produce leading-edge jobs. The Institute for Quantum Computing is a world-leading Canadian research facility that furthers our understanding of these innovative technologies. Budget 2017 proposes to provide the Institute with renewed funding of $10 million over two years, starting in 2017–18.

Social Innovation

Through community-college partnerships, the Community and College Social Innovation Fund fosters positive social outcomes, such as the integration of vulnerable populations into Canadian communities. Following the success of this pilot program, Budget 2017 proposes to invest $10 million over two years, starting in 2017–18, to continue this work.

International Research Collaborations

The Canadian Institute for Advanced Research (CIFAR) connects Canadian researchers with collaborative research networks led by eminent Canadian and international researchers on topics that touch all humanity. Past collaborations facilitated by CIFAR are credited with fostering Canada’s leadership in artificial intelligence and deep learning. Budget 2017 proposes to provide renewed and enhanced funding of $35 million over five years, starting in 2017–18.

Earlier this week, I highlighted Canada’s strength in the field of regenerative medicine, specifically stem cells, in a March 21, 2017 posting. The $6M in the current budget doesn’t look like increased funding but rather a one-year extension. I’m sure they’re happy to receive it, but I imagine it’s a little hard to plan major research projects when you’re not sure how long your funding will last.

As for Canadian leadership in artificial intelligence, that was news to me. Here’s more from the budget,

Canada a Pioneer in Deep Learning in Machines and Brains

CIFAR’s Learning in Machines & Brains program has shaken up the field of artificial intelligence by pioneering a technique called “deep learning,” a computer technique inspired by the human brain and neural networks, which is now routinely used by the likes of Google and Facebook. The program brings together computer scientists, biologists, neuroscientists, psychologists and others, and the result is rich collaborations that have propelled artificial intelligence research forward. The program is co-directed by one of Canada’s foremost experts in artificial intelligence, the Université de Montréal’s Yoshua Bengio, and for his many contributions to the program, the University of Toronto’s Geoffrey Hinton, another Canadian leader in this field, was awarded the title of Distinguished Fellow by CIFAR in 2014.

Meanwhile, from chapter 1 of the budget in the subsection titled “Preparing for the Digital Economy,” there is this provision for children,

Providing educational opportunities for digital skills development to Canadian girls and boys—from kindergarten to grade 12—will give them the head start they need to find and keep good, well-paying, in-demand jobs. To help provide coding and digital skills education to more young Canadians, the Government intends to launch a competitive process through which digital skills training organizations can apply for funding. Budget 2017 proposes to provide $50 million over two years, starting in 2017–18, to support these teaching initiatives.

I wonder if BC Premier Christy Clark is heaving a sigh of relief. At the 2016 #BCTECH Summit, she announced that students in BC would learn to code at school and in newly enhanced coding camp programmes (see my Jan. 19, 2016 posting). Interestingly, there was no mention of additional funding to support her initiative. I guess this money from the federal government comes at a good time as we will have a provincial election later this spring where she can announce the initiative again and, this time, mention there’s money for it.

Attracting brains from afar

Ivan Semeniuk in his March 23, 2017 article (for the Globe and Mail) reads between the lines to analyze the budget’s possible impact on Canadian science,

But a between-the-lines reading of the budget document suggests the government also has another audience in mind: uneasy scientists from the United States and Britain.

The federal government showed its hand at the 2017 #BCTECH Summit. From a March 16, 2017 article by Meera Bains for the CBC news online,

At the B.C. tech summit, Navdeep Bains, Canada’s minister of innovation, said the government will act quickly to fast track work permits to attract highly skilled talent from other countries.

“We’re taking the processing time, which takes months, and reducing it to two weeks for immigration processing for individuals [who] need to come here to help companies grow and scale up,” Bains said.

“So this is a big deal. It’s a game changer.”

That change will happen through the Global Talent Stream, a new program under the federal government’s temporary foreign worker program.  It’s scheduled to begin on June 12, 2017.

U.S. companies are taking notice and a Canadian firm, True North, is offering to help them set up shop.

“What we suggest is that they think about moving their operations, or at least a chunk of their operations, to Vancouver, set up a Canadian subsidiary,” said the company’s founder, Michael Tippett.

“And that subsidiary would be able to house and accommodate those employees.”

Industry experts say that while the future is unclear for the tech sector in the U.S., it’s clear high tech in B.C. is gearing up to take advantage.

US business attempts to take advantage of Canada’s relative stability and openness to immigration would seem to be the motive for at least one cross border initiative, the Cascadia Urban Analytics Cooperative. From my Feb. 28, 2017 posting,

There was some big news about the smallest version of the Cascadia region on Thursday, Feb. 23, 2017 when the University of British Columbia (UBC) , the University of Washington (state; UW), and Microsoft announced the launch of the Cascadia Urban Analytics Cooperative. From the joint Feb. 23, 2017 news release (read on the UBC website or read on the UW website),

In an expansion of regional cooperation, the University of British Columbia and the University of Washington today announced the establishment of the Cascadia Urban Analytics Cooperative to use data to help cities and communities address challenges from traffic to homelessness. The largest industry-funded research partnership between UBC and the UW, the collaborative will bring faculty, students and community stakeholders together to solve problems, and is made possible thanks to a $1-million gift from Microsoft.

Today’s announcement follows last September’s [2016] Emerging Cascadia Innovation Corridor Conference in Vancouver, B.C. The forum brought together regional leaders for the first time to identify concrete opportunities for partnerships in education, transportation, university research, human capital and other areas.

A Boston Consulting Group study unveiled at the conference showed the region between Seattle and Vancouver has “high potential to cultivate an innovation corridor” that competes on an international scale, but only if regional leaders work together. The study says that could be possible through sustained collaboration aided by an educated and skilled workforce, a vibrant network of research universities and a dynamic policy environment.

It gets better, it seems Microsoft has been positioning itself for a while if Matt Day’s analysis is correct (from my Feb. 28, 2017 posting),

Matt Day in a Feb. 23, 2017 article for The Seattle Times provides additional perspective (Note: Links have been removed),

Microsoft’s effort to nudge Seattle and Vancouver, B.C., a bit closer together got an endorsement Thursday [Feb. 23, 2017] from the leading university in each city.

The partnership has its roots in a September [2016] conference in Vancouver organized by Microsoft’s public affairs and lobbying unit [emphasis mine]. That gathering was aimed at tying business, government and educational institutions in Microsoft’s home region in the Seattle area closer to its Canadian neighbor.

Microsoft last year [2016] opened an expanded office in downtown Vancouver with space for 750 employees, an outpost partly designed to draw to the Northwest more engineers than the company can get through the U.S. guest worker system [emphasis mine].

This was all prior to President Trump’s legislative moves in the US, which have at least one Canadian observer a little more gleeful than I’m comfortable with. From a March 21, 2017 article by Susan Lum  for CBC News online,

U.S. President Donald Trump’s efforts to limit travel into his country while simultaneously cutting money from science-based programs provides an opportunity for Canada’s science sector, says a leading Canadian researcher.

“This is Canada’s moment. I think it’s a time we should be bold,” said Alan Bernstein, president of CIFAR [which on March 22, 2017 was awarded $125M to launch the Pan-Canadian Artificial Intelligence Strategy in the Canadian federal budget announcement], a global research network that funds hundreds of scientists in 16 countries.

Bernstein believes there are many reasons why Canada has become increasingly attractive to scientists around the world, including the political climate in the United States and the Trump administration’s travel bans.

Thankfully, Bernstein calms down a bit,

“It used to be if you were a bright young person anywhere in the world, you would want to go to Harvard or Berkeley or Stanford, or what have you. Now I think you should give pause to that,” he said. “We have pretty good universities here [emphasis mine]. We speak English. We’re a welcoming society for immigrants.”​

Bernstein cautions that Canada should not be seen to be poaching scientists from the United States — but there is an opportunity.

“It’s as if we’ve been in a choir of an opera in the back of the stage and all of a sudden the stars all left the stage. And the audience is expecting us to sing an aria. So we should sing,” Bernstein said.

Bernstein said the federal government, with this week’s so-called innovation budget, can help Canada hit the right notes.

“Innovation is built on fundamental science, so I’m looking to see if the government is willing to support, in a big way, fundamental science in the country.”

Pretty good universities, eh? Thank you, Dr. Bernstein, for keeping some of the boosterism in check. Let’s leave the chest thumping to President Trump and his cronies.

Ivan Semeniuk’s March 23, 2017 article (for the Globe and Mail) provides more details about the situation in the US and in Britain,

Last week, Donald Trump’s first budget request made clear the U.S. President would significantly reduce or entirely eliminate research funding in areas such as climate science and renewable energy if permitted by Congress. Even the National Institutes of Health, which spearheads medical research in the United States and is historically supported across party lines, was unexpectedly targeted for a $6-billion (U.S.) cut that the White House said could be achieved through “efficiencies.”

In Britain, a recent survey found that 42 per cent of academics were considering leaving the country over worries about a less welcoming environment and the loss of research money that a split with the European Union is expected to bring.

In contrast, Canada’s upbeat language about science in the budget makes a not-so-subtle pitch for diversity and talent from abroad, including $117.6-million to establish 25 research chairs with the aim of attracting “top-tier international scholars.”

For good measure, the budget also includes funding for science promotion and $2-million annually for Canada’s yet-to-be-hired Chief Science Advisor, whose duties will include ensuring that government researchers can speak freely about their work.

“What we’ve been hearing over the last few months is that Canada is seen as a beacon, for its openness and for its commitment to science,” said Ms. Duncan [Kirsty Duncan, Minister of Science], who did not refer directly to either the United States or Britain in her comments.

Providing a less optimistic note, Erica Alini in her March 22, 2017 online article for Global News mentions a perennial problem, the Canadian brain drain,

The budget includes a slew of proposed reforms and boosted funding for existing training programs, as well as new skills-development resources for unemployed and underemployed Canadians not covered under current EI-funded programs.

There are initiatives to help women and indigenous people get degrees or training in science, technology, engineering and mathematics (the so-called STEM subjects) and even to teach kids as young as kindergarten-age to code.

But there was no mention of how to make sure Canadians with the right skills remain in Canada, TD’s DePratto [Toronto Dominion Bank; TD is currently experiencing a scandal {March 13, 2017 Huffington Post news item}] told Global News.

Canada ranks in the middle of the pack compared to other advanced economies when it comes to its share of graduates in STEM fields, but the U.S. doesn’t shine either, said DePratto [Brian DePratto, senior economist at TD].

The key difference between Canada and the U.S. is the ability to retain domestic talent and attract brains from all over the world, he noted.

To be blunt, there may be some opportunities for Canadian science but it would be well to remember that (a) US businesses have no particular loyalty to Canada and (b) all it takes is an election to change any perceived advantages into disadvantages.

Digital policy and intellectual property issues

Dubbed by some as the ‘innovation’ budget (official title: Building a Strong Middle Class), there is an attempt to address a longstanding innovation issue. From a March 22, 2017 posting by Michael Geist on his eponymous blog (Note: Links have been removed),

The release of today’s [March 22, 2017] federal budget is expected to include a significant emphasis on innovation, with the government revealing how it plans to spend (or re-allocate) hundreds of millions of dollars that is intended to support innovation. Canada’s dismal innovation record needs attention, but spending our way to a more innovative economy is unlikely to yield the desired results. While Navdeep Bains, the Innovation, Science and Economic Development Minister, has talked for months about the importance of innovation, Toronto Star columnist Paul Wells today delivers a cutting but accurate assessment of those efforts:

“This government is the first with a minister for innovation! He’s Navdeep Bains. He frequently posts photos of his meetings on Twitter, with the hashtag “#innovation.” That’s how you know there is innovation going on. A year and a half after he became the minister for #innovation, it’s not clear what Bains’s plans are. It’s pretty clear that within the government he has less than complete control over #innovation. There’s an advisory council on economic growth, chaired by the McKinsey guru Dominic Barton, which periodically reports to the government urging more #innovation.

There’s a science advisory panel, chaired by former University of Toronto president David Naylor, that delivered a report to Science Minister Kirsty Duncan more than three months ago. That report has vanished. One presumes that’s because it offered some advice. Whatever Bains proposes, it will have company.”

Wells is right. Bains has been very visible with plenty of meetings and public photo shoots but no obvious innovation policy direction. This represents a missed opportunity since Bains has plenty of policy tools at his disposal that could advance Canada’s innovation framework without focusing on government spending.

For example, Canada’s communications system – wireless and broadband Internet access – falls directly within his portfolio and is crucial for both business and consumers. Yet Bains has been largely missing in action on the file. He gave approval for the Bell – MTS merger that virtually everyone concedes will increase prices in the province and make the communications market less competitive. There are potential policy measures that could bring new competitors into the market (MVNOs [mobile virtual network operators] and municipal broadband) and that could make it easier for consumers to switch providers (ban on unlocking devices). Some of this falls to the CRTC, but government direction and emphasis would make a difference.

Even more troubling has been his near total invisibility on issues relating to new fees or taxes on Internet access and digital services. Canadian Heritage Minister Mélanie Joly has taken control of the issue with the possibility that Canadians could face increased costs for their Internet access or digital services through mandatory fees to contribute to Canadian content.  Leaving aside the policy objections to such an approach (reducing affordable access and the fact that foreign sources now contribute more toward Canadian English language TV production than Canadian broadcasters and distributors), Internet access and e-commerce are supposed to be Bains’ issue and they have a direct connection to the innovation file. How is it possible for the Innovation, Science and Economic Development Minister to have remained silent for months on the issue?

Bains has been largely missing on trade related innovation issues as well. My Globe and Mail column today focuses on a digital-era NAFTA, pointing to likely U.S. demands on data localization, data transfers, e-commerce rules, and net neutrality.  These are all issues that fall under Bains’ portfolio and will impact investment in Canadian networks and digital services. There are innovation opportunities for Canada here, but Bains has been content to leave the policy issues to others, who will be willing to sacrifice potential gains in those areas.

Intellectual property policy is yet another area that falls directly under Bains’ mandate with an obvious link to innovation, but he has done little on the file. Canada won a huge NAFTA victory late last week involving the Canadian patent system, which was challenged by pharmaceutical giant Eli Lilly. Why has Bains not promoted the decision as an affirmation of Canada’s intellectual property rules?

On the copyright front, the government is scheduled to conduct a review of the Copyright Act later this year, but it is not clear whether Bains will take the lead or again cede responsibility to Joly. The Copyright Act is statutorily under the Industry Minister and reform offers the chance to kickstart innovation. …

For anyone who’s not familiar with this area, innovation is often code for commercialization of science and technology research efforts. These days, digital service and access policies and intellectual property policies are all key to research and innovation efforts.

The country that’s most often (except in mainstream Canadian news media) held up as an example of leadership in innovation is Estonia. The Economist profiled the country in a July 31, 2013 article and a July 7, 2016 article on apolitical.co provides an update.

Conclusions

Science monies for the tri-council science funding agencies (NSERC, SSHRC, and CIHR) are more or less flat but there were a number of line items in the federal budget which qualify as science funding. The $221M over five years for Mitacs, the $125M for the Pan-Canadian Artificial Intelligence Strategy, additional funding for the Canada research chairs, and some of the digital funding could also be included as part of the overall haul. This is in line with the former government’s (Stephen Harper’s Conservatives) penchant for keeping the tri-council’s budgets under control while spreading largesse elsewhere (notably the Perimeter Institute, TRIUMF [Canada’s National Laboratory for Particle and Nuclear Physics], and, in the 2015 budget, $243.5-million towards the Thirty Metre Telescope [TMT], a massive astronomical observatory to be constructed on the summit of Mauna Kea, Hawaii, as part of a $1.5-billion project). This has led to some hard feelings in the past with regard to ‘big science’ projects getting what some have felt is an undeserved boost in finances while the ‘small fish’ are left scrabbling for the ever-diminishing (due to budget cuts in years past and inflation) pittances available from the tri-council agencies.

Mitacs, which started life as a federally funded Network Centre for Excellence focused on mathematics, has since shifted focus to become an innovation ‘champion’. You can find Mitacs here and you can find the organization’s March 2016 budget submission to the House of Commons Standing Committee on Finance here. At the time, they did not request a specific amount of money; they just asked for more.

The amount Mitacs expects to receive this year is over $40M, which represents more than double what they received from the federal government and almost 1/2 of their total income in the 2015-16 fiscal year, according to their 2015-16 annual report (see p. 327 for the Mitacs Statement of Operations to March 31, 2016). In fact, the federal government forked over $39,900,189 in the 2015-16 fiscal year, making it Mitacs’ largest supporter, while Mitacs’ total income (receipts) was $81,993,390.
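For readers who want to check the “almost 1/2” figure, here is a quick back-of-the-envelope calculation using only the two numbers quoted above from the Mitacs 2015-16 Statement of Operations:

```python
# Figures from Mitacs' 2015-16 annual report (Statement of Operations, to March 31, 2016)
federal_contribution = 39_900_189  # federal government, Mitacs' largest supporter
total_receipts = 81_993_390        # Mitacs' total income (receipts) for 2015-16

share = federal_contribution / total_receipts
print(f"Federal share of Mitacs' total income: {share:.1%}")  # about 48.7%
```

So the federal government supplied just under half of Mitacs’ income that year, consistent with the characterization above.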

It’s a strange thing, but too much money can be as bad as too little. I wish the folks at Mitacs nothing but good luck with their windfall.

I don’t see anything in the budget that encourages innovation and investment from the industrial sector in Canada.

Finally, innovation is a cultural issue as much as a financial one and, having worked with a number of developers and start-up companies, I can say the most popular business model is to develop a successful business that will be acquired by a large enterprise, thereby allowing the entrepreneurs to retire before the age of 30 (or 40 at the latest). I don’t see anything from the government acknowledging the problem, let alone any attempts to tackle it.

All in all, it was a decent budget with nothing in it to seriously offend anyone.

CRISPR patent decision: Harvard’s and MIT’s Broad Institute victorious—for now

I have written about the CRISPR patent tussle (Harvard & MIT’s [Massachusetts Institute of Technology] Broad Institute vs the University of California at Berkeley) previously in a Jan. 6, 2015 posting and in a more detailed May 14, 2015 posting. I also mentioned (in a Jan. 17, 2017 posting) CRISPR and its patent issues in the context of a posting about a Slate.com series on Frankenstein and the novel’s applicability to our own time. This patent fight is being bitterly fought as fortunes are at stake.

It seems a decision has been made regarding the CRISPR patent claims. From a Feb. 17, 2017 article by Charmaine Distor for The Science Times,

After an intense court battle, the US Patent and Trademark Office (USPTO) released its ruling on February 15 [2017]. The rights for the CRISPR-Cas9 gene editing technology was handed over to the Broad Institute of Harvard University and the Massachusetts Institute of Technology (MIT).

According to an article in Nature, the said court battle was between the Broad Institute and the University of California. The two institutions are fighting over the intellectual property right for the CRISPR patent. The case between the two started when the patent was first awarded to the Broad Institute despite having the University of California apply first for the CRISPR patent.

Heidi Ledford’s Feb. 17, 2017 article for Nature provides more insight into the situation (Note: Links have been removed),

It [USPTO] ruled that the Broad Institute of Harvard and MIT in Cambridge could keep its patents on using CRISPR–Cas9 in eukaryotic cells. That was a blow to the University of California in Berkeley, which had filed its own patents and had hoped to have the Broad’s thrown out.

The fight goes back to 2012, when Jennifer Doudna at Berkeley, Emmanuelle Charpentier, then at the University of Vienna, and their colleagues outlined how CRISPR–Cas9 could be used to precisely cut isolated DNA. In 2013, Feng Zhang at the Broad and his colleagues — and other teams — showed how it could be adapted to edit DNA in eukaryotic cells such as plants, livestock and humans.

Berkeley filed for a patent earlier, but the USPTO granted the Broad’s patents first — and this week upheld them. There are high stakes involved in the ruling. The holder of key patents could make millions of dollars from CRISPR–Cas9’s applications in industry: already, the technique has sped up genetic research, and scientists are using it to develop disease-resistant livestock and treatments for human diseases.

But the fight for patent rights to CRISPR technology is by no means over. Here are four reasons why.

1. Berkeley can appeal the ruling

2. European patents are still up for grabs

3. Other parties are also claiming patent rights on CRISPR–Cas9

4. CRISPR technology is moving beyond what the patents cover

As for Ledford’s third point, there are an estimated 763 patent families (groups of related patents) claiming Cas9, raising the distinct possibility that the Broad Institute will be fighting many patent claims in the future.

Once you’ve read Distor’s and Ledford’s articles, you may want to check out Adam Rogers’ and Eric Niiler’s Feb. 16, 2017 CRISPR patent article for Wired,

The fight over who owns the most promising technique for editing genes—cutting and pasting the stuff of life to cure disease and advance scientific knowledge—has been a rough one. A team on the West Coast, at UC Berkeley, filed patents on the method, Crispr-Cas9; a team on the East Coast, based at MIT and the Broad Institute, filed their own patents in 2014 after Berkeley’s, but got them granted first. The Berkeley group contended that this constituted “interference,” and that Berkeley deserved the patent.

At stake: millions, maybe billions of dollars in biotech money and licensing fees, the future of medicine, the future of bioscience. Not nothing. Who will benefit depends on who owns the patents.

On Wednesday [Feb. 15, 2017], the US Patent Trial and Appeal Board kind of, sort of, almost began to answer that question. Berkeley will get the patent for using the system called Crispr-Cas9 in any living cell, from bacteria to blue whales. Broad/MIT gets the patent in eukaryotic cells, which is to say, plants and animals.

It’s … confusing. “The patent that the Broad received is for the use of Crispr gene-editing technology in eukaryotic cells. The patent for the University of California is for all cells,” says Jennifer Doudna, the UC geneticist and co-founder of Caribou Biosciences who co-invented Crispr, on a conference call. Her metaphor: “They have a patent on green tennis balls; we have a patent for all tennis balls.”

Observers didn’t quite buy that topspin. If Caribou is playing tennis, it’s looking like Broad/MIT is Serena Williams.

“UC does not necessarily lose everything, but they’re no doubt spinning the story,” says Robert Cook-Deegan, an expert in genetic policy at Arizona State University’s School for the Future of Innovation in Society. “UC’s claims to eukaryotic uses of Crispr-Cas9 will not be granted in the form they sought. That’s a big deal, and UC was the big loser.”

UC officials said Wednesday [Feb. 15, 2017] that they are studying the 51-page decision and considering whether to appeal. That leaves members of the biotechnology sector wondering who they will have to pay to use Crispr as part of a business—and scientists hoping the outcome won’t somehow keep them from continuing their research.

….

Happy reading!

Detonating (exploding) your way to graphene

Physicists at Kansas State University use controlled detonation to make graphene according to a Jan. 25, 2017 news item on Nanowerk (Note: A link has been removed),

Forget chemicals, catalysts and expensive machinery — a Kansas State University team of physicists has discovered a way to mass-produce graphene with three ingredients: hydrocarbon gas, oxygen and a spark plug.

Their method is simple: Fill a chamber with acetylene or ethylene gas and oxygen. Use a vehicle spark plug to create a contained detonation. Collect the graphene that forms afterward.
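The three-step recipe above relies on running the detonation fuel-rich. As an illustrative sketch (my own back-of-the-envelope chemistry, not the team’s published stoichiometry), an oxygen-lean acetylene detonation leaves elemental carbon behind along with water vapour:

```latex
% Oxygen-lean (fuel-rich) detonation: there is not enough O2 to fully
% oxidize the carbon, so elemental carbon condenses from the product gas.
\mathrm{C_2H_2} \;+\; \tfrac{1}{2}\,\mathrm{O_2} \;\longrightarrow\; 2\,\mathrm{C} \;+\; \mathrm{H_2O}
```

Any mixture richer in fuel than the fully oxidizing ratio leaves excess carbon; the actual acetylene-to-oxygen ratio used in the chamber would govern how much of that carbon emerges as graphene rather than ordinary soot.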

Chris Sorensen, Cortelyou-Rust university distinguished professor of physics, is the lead inventor of the recently issued patent, “Process for high-yield production of graphene via detonation of carbon-containing material”. Other Kansas State University researchers involved include Arjun Nepal, postdoctoral researcher and instructor of physics, and Gajendra Prasad Singh, former visiting scientist.

For further reading here’s the Jan. 25, 2017 Kansas State University news release, which originated the news item,

“We have discovered a viable process to make graphene,” Sorensen said. “Our process has many positive properties, from the economic feasibility, the possibility for large-scale production and the lack of nasty chemicals. What might be the best property of all is that the energy required to make a gram of graphene through our process is much less than other processes because all it takes is a single spark.”

Graphene is a single atom-thick sheet of hexagonally coordinated carbon atoms, which makes it the world’s thinnest material. Since graphene was isolated in 2004, scientists have found it has valuable physical and electronic properties with many possible applications, such as more efficient rechargeable batteries or better electronics.

For Sorensen’s research team, the serendipitous path to creating graphene started when they were developing and patenting carbon soot aerosol gels. They created the gels by filling a 17-liter aluminum chamber with acetylene gas and oxygen. Using a spark plug, they created a detonation in the chamber. The soot from the detonation formed aerosol gels that looked like “black angel food cake,” Sorensen said.

But after further analysis, the researchers found that the aerosol gel was more than lookalike dark angel food cake — it was graphene.

“We made graphene by serendipity,” Sorensen said. “We didn’t plan on making graphene. We planned on making the aerosol gel and we got lucky.”

But unlike other methods of creating graphene, Sorensen’s method is simple, efficient, low-cost and scalable for industry.

Other methods of creating graphene involve “cooking” the mineral graphite with chemicals — such as sulfuric acid, sodium nitrate, potassium permanganate or hydrazine — for a long time at precisely prescribed temperatures. Additional methods involve heating hydrocarbons to 1,000 degrees Celsius in the presence of catalysts.

Such methods are energy intensive — and even dangerous — and have low yield, while Sorensen and his team’s method makes larger quantities with minimal energy and no dangerous chemicals.

“The real charm of our experiment is that we can produce graphene in the quantity of grams rather than milligrams,” Nepal said.

Now the research team — including Justin Wright, doctoral student in physics, Camp Hill, Pennsylvania — is working to improve the quality of the graphene and scale the laboratory process to an industrial level. They are upgrading some of the equipment to make it easier to get graphene from the chamber seconds — rather than minutes — after the detonation. Accessing the graphene more quickly could improve the quality of the material, Sorensen said.

The patent was issued to the Kansas State University Research Foundation, a nonprofit corporation responsible for managing technology transfer activities at the university.

I wish they’d filmed one of their graphene explosions even if it meant that all we’d get is the sight of a canister and the sound of a boom. Still, they did show a brief spark from the spark plug.

nano tech 2017 being held in Tokyo from February 15-17, 2017

I found some news about the Alberta technology scene in the programme for Japan’s nano tech 2017 exhibition and conference to be held Feb. 15 – 17, 2017 in Tokyo. First, here’s more about the show in Japan from a Jan. 17, 2017 nano tech 2017 press release on Business Wire (also on Yahoo News),

The nano tech executive committee (chairman: Tomoji Kawai, Specially Appointed Professor, Osaka University) will be holding “nano tech 2017” – one of the world’s largest nanotechnology exhibitions, now in its 16th year – on February 15, 2017, at the Tokyo Big Sight convention center in Japan. 600 organizations (including over 40 first-time exhibitors) from 23 countries and regions are set to exhibit at the event in 1,000 booths, demonstrating revolutionary and cutting edge core technologies spanning such industries as automotive, aerospace, environment/energy, next-generation sensors, cutting-edge medicine, and more. Including attendees at the concurrently held exhibitions, the total number of visitors to the event is expected to exceed 50,000.

The theme of this year’s nano tech exhibition is “Open Nano Collaboration.” By bringing together organizations working in a wide variety of fields, the business matching event aims to promote joint development through cross-field collaboration.

Special Symposium: “Nanotechnology Contributing to the Super Smart Society”

Each year nano tech holds Special Symposium, in which industry specialists from top organizations from Japan and abroad speak about the issues surrounding the latest trends in nanotech. The themes of this year’s Symposium are Life Nanotechnology, Graphene, AI/IoT, Cellulose Nanofibers, and Materials Informatics.

Notable sessions include:

Life Nanotechnology
“Development of microRNA liquid biopsy for early detection of cancer”
Takahiro Ochiya, National Cancer Center Research Institute Division of Molecular and Cellular Medicine, Chief

AI / IoT
“AI Embedded in the Real World”
Hideki Asoh, AIST Deputy Director, Artificial Intelligence Research Center

Cellulose Nanofibers [emphasis mine]
“The Current Trends and Challenges for Industrialization of Nanocellulose”
Satoshi Hirata, Nanocellulose Forum Secretary-General

Materials Informatics
“Perspective of Materials Research”
Hideo Hosono, Tokyo Institute of Technology Professor

View the full list of sessions:
>> http://nanotech2017.icsbizmatch.jp/Presentation/en/Info/List#main_theater

nano tech 2017 Homepage:
>> http://nanotechexpo.jp/

nano tech 2017, the 16th International Nanotechnology Exhibition & Conference
Date: February 15-17, 2017, 10:00-17:00
Venue: Tokyo Big Sight (East Halls 4-6 & Conference Tower)
Organizer: nano tech Executive Committee, JTB Communication Design

As you may have guessed, the Alberta information can be found in the Cellulose Nanofibers session. From the conference/seminar program page (scroll down about 25% of the way to find the Alberta presentation),

Production and Applications Development of Cellulose Nanocrystals (CNC) at InnoTech Alberta

Behzad (Benji) Ahvazi
InnoTech Alberta Team Lead, Cellulose Nanocrystals (CNC)

[ Abstract ]

The production and use of cellulose nanocrystals (CNC) is an emerging technology that has gained considerable interest from a range of industries that are working towards increased use of “green” biobased materials. The construction of one-of-a-kind CNC pilot plant [emphasis mine] at InnoTech Alberta and production of CNC samples represents a critical step for introducing the cellulosic based biomaterials to industrial markets and provides a platform for the development of novel high value and high volume applications. Major key components including feedstock, acid hydrolysis formulation, purification, and drying processes were optimized significantly to reduce the operation cost. Fully characterized CNC samples were provided to a large number of academic and research laboratories including various industries domestically and internationally for applications development.

[ Profile ]

Dr. Ahvazi completed his Bachelor of Science in Honours program at the Department of Chemistry and Biochemistry and graduated with distinction at Concordia University in Montréal, Québec. His Ph.D. program was completed in 1998 at McGill Pulp and Paper Research Centre in the area of macromolecules with solid background in Lignocellulosic, organic wood chemistry as well as pulping and paper technology. After completing his post-doctoral fellowship, he joined FPInnovations formally [formerly?] known as PAPRICAN as a research scientist (R&D) focusing on a number of confidential chemical pulping and bleaching projects. In 2006, he worked at Tembec as a senior research scientist and as a Leader in Alcohol and Lignin (R&D). In April 2009, he held a position as a Research Officer in both National Bioproducts (NBP1 & NBP2) and Industrial Biomaterials Flagship programs at National Research Council Canada (NRC). During his tenure, he had directed and performed innovative R&D activities within both programs on extraction, modification, and characterization of biomass as well as polymer synthesis and formulation for industrial applications. Currently, he is working at InnoTech Alberta as Team Lead for Biomass Conversion and Processing Technologies.

Canada scene update

InnoTech Alberta was until Nov. 1, 2016 known as Alberta Innovates – Technology Futures. Here’s more about InnoTech Alberta from the Alberta Innovates … home page,

Effective November 1, 2016, Alberta Innovates – Technology Futures is one of four corporations now consolidated into Alberta Innovates and a wholly owned subsidiary called InnoTech Alberta.

You will find all the existing programs, services and information offered by InnoTech Alberta on this website. To access the basic research funding and commercialization programs previously offered by Alberta Innovates – Technology Futures, explore here. For more information on Alberta Innovates, visit the new Alberta Innovates website.

As for InnoTech Alberta’s “one-of-a-kind CNC pilot plant,” I’d like to know more about its one-of-a-kind status since there are two other CNC production plants in Canada. (Is the status a consequence of regional chauvinism or a writer unfamiliar with the topic?) Getting back to the topic, the largest company (and, I believe, the first) with a CNC plant was CelluForce, which started as a joint venture between Domtar and FPInnovations, powered by some very heavy investment from the government of Canada. (See my July 16, 2010 posting about the construction of the plant in Quebec and my June 6, 2011 posting about the newly named CelluForce.) Interestingly, CelluForce will have a booth at nano tech 2017 (according to its Jan. 27, 2017 news release), although the company doesn’t seem to have any presentations on the schedule. The other Canadian company is Blue Goose Biorefineries in Saskatchewan. Here’s more about Blue Goose from the company website’s home page,

Blue Goose Biorefineries Inc. (Blue Goose) is pleased to introduce our R3TM process. R3TM technology incorporates green chemistry to fractionate renewable plant biomass into high value products.

Traditionally, separating lignocellulosic biomass required high temperatures, harsh chemicals, and complicated processes. R3TM breaks this costly compromise to yield high quality cellulose, lignin and hemicellulose products.

The robust and environmentally friendly R3TM technology has numerous applications. Our current product focus is cellulose nanocrystals (CNC). Cellulose nanocrystals are “Mother Nature’s Building Blocks” possessing unique properties. These unique properties encourage the design of innovative products from a safe, inherently renewable, sustainable, and carbon neutral resource.

Blue Goose assists companies and research groups in the development of applications for CNC, by offering CNC for sale without Intellectual Property restrictions. [emphasis mine]

Bravo to Blue Goose! Unfortunately, I was not able to determine if the company will be at nano tech 2017.

One final comment: there was some excitement about CNC a while back when more than one person contacted me asking for information about how to buy CNC. I wasn’t able to be helpful because there was, apparently, an attempt by producers to control sales and limit CNC access to a select few for competitive advantage. Coincidentally or not, CelluForce developed a stockpile that has persisted for some years, as I noted in my Aug. 17, 2016 posting (scroll down about 70% of the way), where the company announced, amongst other events, that it expected to deplete its stockpile by mid-2017.

Essays on Frankenstein

Slate.com is dedicating a month (January 2017) to Frankenstein. This means there will be one or more essays each week on one aspect or another of Frankenstein and science. These essays are one of a series of initiatives jointly supported by Slate, Arizona State University, and an organization known as New America. It gets confusing since the essays are listed as part of two initiatives: Futurography and Future Tense.

The really odd part, as far as I’m concerned, is that there is no mention of Arizona State University’s (ASU) The Frankenstein Bicentennial Project (mentioned in my Oct. 26, 2016 posting). Perhaps they’re concerned that people will think ASU is advertising the project?

Introductions

Getting back to the essays, a Jan. 3, 2017 article by Jacob Brogan explains, in a question-and-answer format, why the book and the monster maintain popular interest after two centuries (Note: we never do find out who, or how many people, are supplying the answers),

OK, fine. I get that this book is important, but why are we talking about it in a series about emerging technology?

Though people still tend to weaponize it as a simple anti-scientific screed, Frankenstein, which was first published in 1818, is much richer when we read it as a complex dialogue about our relationship to innovation—both our desire for it and our fear of the changes it brings. Mary Shelley was just a teenager when she began to compose Frankenstein, but she was already grappling with our complex relationship to new forces. Almost two centuries on, the book is just as propulsive and compelling as it was when it was first published. That’s partly because it’s so thick with ambiguity—and so resistant to easy interpretation.

Is it really ambiguous? I mean, when someone calls something frankenfood, they aren’t calling it “ethically ambiguous food.”

It’s a fair point. For decades, Frankenstein has been central to discussions in and about bioethics. Perhaps most notably, it frequently crops up as a reference point in discussions of genetically modified organisms, where the prefix Franken- functions as a sort of convenient shorthand for human attempts to meddle with the natural order. Today, the most prominent flashpoint for those anxieties is probably the clustered regularly interspaced short palindromic repeats, or CRISPR, gene-editing technique [emphasis mine]. But it’s really oversimplifying to suggest Frankenstein is a cautionary tale about monkeying with life.

As we’ll see throughout this month on Futurography, it’s become a lens for looking at the unintended consequences of things like synthetic biology, animal experimentation, artificial intelligence, and maybe even social networking. Facebook, for example, has arguably taken on a life of its own, as its algorithms seem to influence the course of elections. Mark Zuckerberg, who’s sometimes been known to disavow the power of his own platform, might well be understood as a Frankensteinian figure, amplifying his creation’s monstrosity by neglecting its practical needs.

But this book is almost 200 years old! Surely the actual science in it is bad.

Shelley herself would probably be the first to admit that the science in the novel isn’t all that accurate. Early in the novel, Victor Frankenstein meets with a professor who castigates him for having read the wrong works of “natural philosophy.” Shelley’s protagonist has mostly been studying alchemical tomes and otherwise fantastical works, the sort of things that were recognized as pseudoscience, even by the standards of the day. Near the start of the novel, Frankenstein attends a lecture in which the professor declaims on the promise of modern science. He observes that where the old masters “promised impossibilities and performed nothing,” the new scientists achieve far more in part because they “promise very little; they know that metals cannot be transmuted and that the elixir of life is a chimera.”

Is it actually about bad science, though?

Not exactly, but it has been read as a story about bad scientists.

Ultimately, Frankenstein outstrips his own teachers, of course, and pulls off the very feats they derided as mere fantasy. But Shelley never seems to confuse fact and fiction, and, in fact, she largely elides any explanation of how Frankenstein pulls off the miraculous feat of animating dead tissue. We never actually get a scene of the doctor awakening his creature. The novel spends far more dwelling on the broader reverberations of that act, showing how his attempt to create one life destroys countless others. Read in this light, Frankenstein isn’t telling us that we shouldn’t try to accomplish new things, just that we should take care when we do.

This speaks to why the novel has stuck around for so long. It’s not about particular scientific accomplishments but the vagaries of scientific progress in general.

Does that make it into a warning against playing God?

It’s probably a mistake to suggest that the novel is just a critique of those who would usurp the divine mantle. Instead, you can read it as a warning about the ways that technologists fall short of their ambitions, even in their greatest moments of triumph.

Look at what happens in the novel: After bringing his creature to life, Frankenstein effectively abandons it. Later, when it entreats him to grant it the rights it thinks it deserves, he refuses. Only then—after he reneges on his responsibilities—does his creation really go bad. We all know that Frankenstein is the doctor and his creation is the monster, but to some extent it’s the doctor himself who’s made monstrous by his inability to take responsibility for what he’s wrought.

I encourage you to read Brogan’s piece in its entirety and perhaps supplement the reading; Mary Shelley has a pretty interesting history. She ran off in 1814, at the age of seventeen, with Percy Bysshe Shelley, who was married to another woman. Her parents, William Godwin and Mary Wollstonecraft, were both well-known and respected intellectuals and philosophers. By the time Mary Shelley wrote her book, her first baby had died and she had given birth to a second child, a boy. Percy Shelley was to die a few years later, as were her son and a third child she’d given birth to. (Her fourth child, born in 1819, did survive.) I mention the births because one analysis I read suggests the novel is also a commentary on childbirth. In fact, the Frankenstein narrative has been examined from many perspectives other than science, including feminism and LGBTQ studies.

Getting back to the science fiction end of things, the next part of the Futurography series is titled “A Cheat-Sheet Guide to Frankenstein” and that too is written by Jacob Brogan with a publication date of Jan. 3, 2017,

Key Players

Marilyn Butler: Butler, a literary critic and English professor at the University of Cambridge, authored the seminal essay “Frankenstein and Radical Science.”

Jennifer Doudna: A professor of chemistry and biology at the University of California, Berkeley, Doudna helped develop the CRISPR gene-editing technique [emphasis mine].

Stephen Jay Gould: Gould is an evolutionary biologist and has written in defense of Frankenstein’s scientific ambitions, arguing that hubris wasn’t the doctor’s true fault.

Seán Ó hÉigeartaigh: As executive director of the Center for Existential Risk at the University of Cambridge, Ó hÉigeartaigh leads research into technologies that threaten the existence of our species.

Jim Hightower: This columnist and activist helped popularize the term frankenfood to describe genetically modified crops.

Mary Shelley: Shelley, the author of Frankenstein, helped create science fiction as we now know it.

J. Craig Venter: A leading genomic researcher, Venter has pursued a variety of human biotechnology projects.

Lingo

….

Debates

Popular Culture

Further Reading

….

‘Franken’ and CRISPR

The first essay is a Jan. 6, 2017 article by Katy Waldman focusing on the ‘franken’ prefix (Note: links have been removed),

In a letter to the New York Times on June 2, 1992, an English professor named Paul Lewis lopped off the top of Victor Frankenstein’s surname and sewed it onto a tomato. Railing against genetically modified crops, Lewis put a new generation of natural philosophers on notice: “If they want to sell us Frankenfood, perhaps it’s time to gather the villagers, light some torches and head to the castle,” he wrote.

William Safire, in a 2000 New York Times column, tracked the creation of the franken- prefix to this moment: an academic channeling popular distrust of science by invoking the man who tried to improve upon creation and ended up disfiguring it. “There’s no telling where or how it will end,” he wrote wryly, referring to the spread of the construction. “It has enhanced the sales of the metaphysical novel that Ms. Shelley’s husband, the poet Percy Bysshe Shelley, encouraged her to write, and has not harmed sales at ‘Frank’n’Stein,’ the fast-food chain whose hot dogs and beer I find delectably inorganic.” Safire went on to quote the American Dialect Society’s Laurence Horn, who lamented that despite the ’90s flowering of frankenfruits and frankenpigs, people hadn’t used Frankensense to describe “the opposite of common sense,” as in “politicians’ motivations for a creatively stupid piece of legislation.”

A year later, however, Safire returned to franken- in dead earnest. In an op-ed for the Times avowing the ethical value of embryonic stem cell research, the columnist suggested that a White House conference on bioethics would salve the fears of Americans concerned about “the real dangers of the slippery slope to Frankenscience.”

All of this is to say that franken-, the prefix we use to talk about human efforts to interfere with nature, flips between “funny” and “scary” with ease. Like Shelley’s monster himself, an ungainly patchwork of salvaged parts, it can seem goofy until it doesn’t—until it taps into an abiding anxiety that technology raises in us, a fear of overstepping.

Waldman’s piece hints at how language can shape discussions while retaining a rather playful quality.

This series looks to be a good introduction while being a bit problematic in spots, which roughly sums up my conclusion about their ‘nano’ series in my Oct. 7, 2016 posting titled: Futurography’s nanotechnology series: a digest.

By the way, I noted the mention of CRISPR as it brought up an issue that they don’t appear to be addressing in this series (perhaps they will do this elsewhere?): intellectual property.

There’s a patent dispute over CRISPR, as noted in this Jan. 9, 2017 video from the American Chemical Society’s Chemical & Engineering News,

Playing God

This series on Frankenstein is taking on other contentious issues. A perennial favourite is ‘playing God’ as noted in Bina Venkataraman’s Jan. 11, 2017 essay on the topic,

Since its publication nearly 200 years ago, Shelley’s gothic novel has been read as a cautionary tale of the dangers of creation and experimentation. James Whale’s 1931 film took the message further, assigning explicitly the hubris of playing God to the mad scientist. As his monster comes to life, Dr. Frankenstein, played by Colin Clive, triumphantly exclaims: “Now I know what it feels like to be God!”

The admonition against playing God has since been ceaselessly invoked as a rhetorical bogeyman. Secular and religious, critic and journalist alike have summoned the term to deride and outright dismiss entire areas of research and technology, including stem cells, genetically modified crops, recombinant DNA, geoengineering, and gene editing. As we near the two-century commemoration of Shelley’s captivating story, we would be wise to shed this shorthand lesson—and to put this part of the Frankenstein legacy to rest in its proverbial grave.

The trouble with the term arises first from its murkiness. What exactly does it mean to play God, and why should we find it objectionable on its face? All but zealots would likely agree that it’s fine to create new forms of life through selective breeding and grafting of fruit trees, or to use in-vitro fertilization to conceive life outside the womb to aid infertile couples. No one objects when people intervene in what some deem “acts of God,” such as earthquakes, to rescue victims and provide relief. People get fully behind treating patients dying of cancer with “unnatural” solutions like chemotherapy. Most people even find it morally justified for humans to mete out decisions as to who lives or dies in the form of organ transplant lists that prize certain people’s survival over others.

So what is it—if not the imitation of a deity or the creation of life—that inspires people to invoke the idea of “playing God” to warn against, or even stop, particular technologies? A presidential commission charged in the early 1980s with studying the ethics of genetic engineering of humans, in the wake of the recombinant DNA revolution, sheds some light on underlying motivations. The commission sought to understand the concerns expressed by leaders of three major religious groups in the United States—representing Protestants, Jews, and Catholics—who had used the phrase “playing God” in a 1980 letter to President Jimmy Carter urging government oversight. Scholars from the three faiths, the commission concluded, did not see a theological reason to flat-out prohibit genetic engineering. Their concerns, it turned out, weren’t exactly moral objections to scientists acting as God. Instead, they echoed those of the secular public; namely, they feared possible negative effects from creating new human traits or new species. In other words, the religious leaders who called recombinant DNA tools “playing God” wanted precautions taken against bad consequences but did not inherently oppose the use of the technology as an act of human hubris.

She presents an interesting argument and offers this as a solution,

The lesson for contemporary science, then, is not that we should cease creating and discovering at the boundaries of current human knowledge. It’s that scientists and technologists ought to steward their inventions into society, and to more rigorously participate in public debate about their work’s social and ethical consequences. Frankenstein’s proper legacy today would be to encourage researchers to address the unsavory implications of their technologies, whether it’s the cognitive and social effects of ubiquitous smartphone use or the long-term consequences of genetically engineered organisms on ecosystems and biodiversity.

Some will undoubtedly argue that this places an undue burden on innovators. Here, again, Shelley’s novel offers a lesson. Scientists who cloister themselves as Dr. Frankenstein did—those who do not fully contemplate the consequences of their work—risk later encounters with the horror of their own inventions.

At a guess, Venkataraman seems to be assuming that if scientists communicate and make their case, the public will cease to panic over moralistic and other concerns. My understanding is that social scientists have found this is not the case. Someone may understand the technology quite well and still oppose it.

Frankenstein and anti-vaxxers

The Jan. 16, 2017 essay by Charles Kenny is the weakest of the lot, so far (Note: Links have been removed),

In 1780, University of Bologna physician Luigi Galvani found something peculiar: When he applied an electric current to the legs of a dead frog, they twitched. Thirty-seven years later, Mary Shelley had Galvani’s experiments in mind as she wrote her fable of Faustian overreach, wherein Dr. Victor Frankenstein plays God by reanimating flesh.

And a little less than halfway between those two dates, English physician Edward Jenner demonstrated the efficacy of a vaccine against smallpox—one of the greatest killers of the age. Given the suspicion with which Romantic thinkers like Shelley regarded scientific progress, it is no surprise that many at the time damned the procedure as against the natural order. But what is surprising is how that suspicion continues to endure, even after two centuries of spectacular successes for vaccination. This anti-vaccination stance—which now infects even the White House—demonstrates the immense harm that can be done by excessive distrust of technological advance.

Kenny employs history as a framing device. Crudely, Galvani’s experiments led to Mary Shelley’s Frankenstein, which is a fable about ‘playing God’. (Kenny seems unaware there are many other readings of, and perspectives on, the book.) As for his statement ” … the suspicion with which Romantic thinkers like Shelley regarded scientific progress … ,” I’m not sure how he arrived at his conclusion about Romantic thinkers. According to Richard Holmes (in his book, The Age of Wonder: How the Romantic Generation Discovered the Beauty and Terror of Science), their relationship to science was more complex. Percy Bysshe Shelley ran ballooning experiments and wrote poetry about science, which included footnotes for the literature and concepts he was referencing; John Keats was a medical student prior to his establishment as a poet; and Samuel Taylor Coleridge (The Rime of the Ancient Mariner, etc.) maintained a healthy correspondence with scientists of the day, sometimes influencing their research. In fact, when you analyze the matter, you realize even scientists are, on occasion, suspicious of science.

As for the anti-vaccination wars, I wish this essay had been more thoughtful. Yes, Andrew Wakefield’s research showing a link between MMR (measles, mumps, and rubella) vaccinations and autism was a sham. However, having concerns and suspicions about technology does not render you a fool who hasn’t progressed beyond 18th/19th century concerns and suspicions about science and technology. For example, vaccines are being touted for all kinds of things, the latest being a possible antidote to opiate addiction (see Susan Gaidos’s June 28, 2016 article for ScienceNews). Are we going to be vaccinated for everything? What happens when you keep piling vaccination on top of vaccination? Instead of a debate, the discussion has devolved to: “I’m right and you’re wrong.”

For the record, I’m grateful for the vaccinations I’ve had and the diminishment of diseases that were devastating and seem to be making a comeback with this current anti-vaccination fever. That said, I think there are some important questions about vaccines.

Kenny’s essay could have been a nuanced discussion of vaccines that have clearly raised the bar for public health and some of the concerns regarding the current pursuit of yet more vaccines. Instead, he’s been quite dismissive of anyone who questions vaccination orthodoxy.

The end of this piece

There will be more essays in Slate’s Frankenstein series but I don’t have time to digest and write commentary for all of them.

Please use this piece as a critical counterpoint to some of the series and, if I’ve done my job, you’ll critique this critique. Please do let me know if you find any errors, want to add an opinion, or have a critique of your own in the Comments of this blog.

ETA Jan. 25, 2017: Here’s the Frankenstein webspace on Slate’s Futurography, which lists all the essays in this series. It’s well worth looking at the list. There are several that were not covered here.