Tag Archives: Jean Baudrillard

Emerging technology and the law

I have three news bits about legal issues that are arising as a consequence of emerging technologies.

Deep neural networks, art, and copyright

Caption: The rise of automated art opens new creative avenues, coupled with new problems for copyright protection. Credit: Alexander Mordvintsev, Christopher Olah and Mike Tyka

Presumably this artwork is a demonstration of automated art, although they never really explain how in the news item/news release. An April 26, 2017 news item on ScienceDaily announces research into copyright and the latest in using neural networks to create art,

In 1968, sociologist Jean Baudrillard wrote on automatism that “contained within it is the dream of a dominated world […] that serves an inert and dreamy humanity.”

With the growing popularity of Deep Neural Networks (DNNs), this dream is fast becoming a reality.

Dr. Jean-Marc Deltorn, researcher at the Centre d’études internationales de la propriété intellectuelle in Strasbourg, argues that we must remain a responsive and responsible force in this process of automation — not inert dominators. As he demonstrates in a recent Frontiers in Digital Humanities paper, the dream of automation demands a careful study of the legal problems linked to copyright.

An April 26, 2017 Frontiers (publishing) news release on EurekAlert, which originated the news item, describes the research in more detail,

For more than half a century, artists have looked to computational processes as a way of expanding their vision. DNNs are the culmination of this cross-pollination: by learning to identify complex patterns, they can generate new creations.

These systems are made up of complex algorithms modeled on the transmission of signals between neurons in the brain.

DNN creations rely in equal measure on human inputs and the non-human algorithmic networks that process them.

Inputs are fed into the system, which is layered. Each layer provides an opportunity for a more refined knowledge of the inputs (shape, color, lines). Neural networks compare actual outputs to expected ones, and correct the predictive error through repetition and optimization. They train their own pattern recognition, thereby optimizing their learning curve and producing increasingly accurate outputs.

The deeper the layers are, the higher the level of abstraction. The highest layers are able to identify the contents of a given input with reasonable accuracy, after extended periods of training.
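For anyone who wants to see that “compare actual outputs to expected ones and correct the predictive error” loop in concrete form, here is a minimal sketch in Python/NumPy. The two-layer network, the XOR-style toy data, the learning rate and the iteration count are my own illustrative assumptions; none of it comes from Deltorn’s paper.

```python
# A minimal sketch of the "compare actual outputs to expected ones and correct
# the predictive error" loop described above. The two-layer network, the
# XOR-style toy data, the learning rate and the iteration count are all
# illustrative assumptions, not anything taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # expected outputs

W1 = rng.normal(size=(2, 8))   # first layer: a coarse re-description of the input
W2 = rng.normal(size=(8, 1))   # deeper layer: a more abstract re-description
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):                            # repetition ...
    h = sigmoid(X @ W1)                          # pass inputs through layer 1
    out = sigmoid(h @ W2)                        # ... then through layer 2
    error = out - y                              # compare actual vs. expected outputs
    grad_out = error * out * (1 - out)           # ... and optimization:
    grad_h = (grad_out @ W2.T) * h * (1 - h)     # push the error back through the layers
    W2 -= 0.5 * h.T @ grad_out                   # nudge each layer's weights
    W1 -= 0.5 * X.T @ grad_h                     # to shrink the predictive error

print(np.round(out, 2))  # outputs drift toward the expected [0, 1, 1, 0]
```

Each pass nudges the weights so the outputs drift a little closer to the expected ones – the “repetition and optimization” the release describes – and a deeper network simply stacks more of these learned re-descriptions.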

Creation thus becomes increasingly automated through what Deltorn calls “the arcane traceries of deep architecture”. The results are sufficiently abstracted from their sources to produce original creations that have been exhibited in galleries, sold at auction and performed at concerts.

The originality of DNNs is a combined product of technological automation on the one hand and human inputs and decisions on the other.

DNNs are gaining popularity. Various platforms (such as DeepDream) now allow internet users to generate their very own new creations. This popularization of the automation process calls for a comprehensive legal framework that ensures a creator’s economic and moral rights with regard to his work – copyright protection.
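For readers curious what a DeepDream-style platform is doing under the hood, here is a hedged sketch of the usual gradient-ascent trick, assuming PyTorch and torchvision are available. The choice of network (VGG16), layer index, step size and iteration count are illustrative only, not the settings of DeepDream or any other platform.

```python
# A hedged sketch of the DeepDream-style "gradient ascent on the image" idea.
# Assumes PyTorch and torchvision are installed; the network, layer index,
# step size and iteration count are illustrative choices.
import torch
import torchvision.models as models

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
target_layer = model.features[20]          # an arbitrary mid-level convolutional layer

activations = {}
def grab(_module, _inputs, output):        # forward hook: remember what the layer "saw"
    activations["feat"] = output
target_layer.register_forward_hook(grab)

image = torch.rand(1, 3, 224, 224, requires_grad=True)   # start from random noise
optimizer = torch.optim.Adam([image], lr=0.05)

for _ in range(100):
    optimizer.zero_grad()
    model(image)                                   # run the image through the network
    loss = -activations["feat"].norm()             # ascend: amplify the layer's response
    loss.backward()
    optimizer.step()                               # adjust the image, not the weights
    with torch.no_grad():
        image.clamp_(0, 1)                         # keep pixel values in a valid range
```

Rather than adjusting the network’s weights, the loop adjusts the image itself to amplify whatever patterns the chosen layer responds to, which is why the outputs read as riffs on the network’s training material – and why the questions of originality and attribution discussed below are not straightforward.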

Form, originality and attribution are the three requirements for copyright. And while DNN creations satisfy the first of these three, the claim to originality and attribution will depend largely on a given country’s legislation and on the traceability of the human creator.

Legislation usually sets a low threshold for originality. As DNNs could in theory create an endless number of riffs on source materials, the uncurbed creation of original works could inflate the existing number of copyright protections.

Additionally, a small number of national copyright laws confer attribution to what UK legislation defines loosely as “the person by whom the arrangements necessary for the creation of the work are undertaken.” In the case of DNNs, this could mean anybody from the programmer to the user of a DNN interface.

Combined with an overly supple take on originality, this view on attribution would further increase the number of copyrightable works.

The risk, in both cases, is that artists will be less willing to publish their own works, for fear of infringement of DNN copyright protections.

In order to promote creativity – one seminal aim of copyright protection – the issue must be limited to creations that manifest a personal voice “and not just the electric glint of a computational engine,” to quote Deltorn. A delicate act of discernment.

DNNs promise new avenues of creative expression for artists – with potential caveats. Copyright protection – a “catalyst to creativity” – must be contained. Many of us gently bask in the glow of an increasingly automated form of technology. But if we want to safeguard the ineffable quality that defines much art, it might be a good idea to home in more closely on the differences between the electric and the creative spark.

This research is and will be part of a broader Frontiers Research Topic collection of articles on Deep Learning and Digital Humanities.

Here’s a link to and a citation for the paper,

Deep Creations: Intellectual Property and the Automata by Jean-Marc Deltorn. Front. Digit. Humanit., 01 February 2017 | https://doi.org/10.3389/fdigh.2017.00003

This paper is open access.

Conference on governance of emerging technologies

I received an April 17, 2017 notice via email about this upcoming conference. Here’s more from the Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics webpage,

The Fifth Annual Conference on Governance of Emerging Technologies:

Law, Policy and Ethics held at the new

Beus Center for Law & Society in Phoenix, AZ

May 17-19, 2017!

Call for Abstracts – Now Closed

The conference will consist of plenary and session presentations and discussions on regulatory, governance, legal, policy, social and ethical aspects of emerging technologies, including (but not limited to) nanotechnology, synthetic biology, gene editing, biotechnology, genomics, personalized medicine, human enhancement technologies, telecommunications, information technologies, surveillance technologies, geoengineering, neuroscience, artificial intelligence, and robotics. The conference is premised on the belief that there is much to be learned and shared from and across the governance experience and proposals for these various emerging technologies.

Keynote Speakers:

Gillian Hadfield, Richard L. and Antoinette Schamoi Kirtland Professor of Law and Professor of Economics, USC [University of Southern California] Gould School of Law

Shobita Parthasarathy, Associate Professor of Public Policy and Women’s Studies, Director, Science, Technology, and Public Policy Program, University of Michigan

Stuart Russell, Professor at [University of California] Berkeley, is a computer scientist known for his contributions to artificial intelligence

Craig Shank, Vice President for Corporate Standards Group in Microsoft’s Corporate, External and Legal Affairs (CELA)

Plenary Panels:

Innovation – Responsible and/or Permissionless

Ellen-Marie Forsberg, Senior Researcher/Research Manager at Oslo and Akershus University College of Applied Sciences

Adam Thierer, Senior Research Fellow with the Technology Policy Program at the Mercatus Center at George Mason University

Wendell Wallach, Consultant, ethicist, and scholar at Yale University’s Interdisciplinary Center for Bioethics

Gene Drives, Trade and International Regulations

Greg Kaebnick, Director, Editorial Department; Editor, Hastings Center Report; Research Scholar, Hastings Center

Jennifer Kuzma, Goodnight-North Carolina GlaxoSmithKline Foundation Distinguished Professor in Social Sciences in the School of Public and International Affairs (SPIA) and co-director of the Genetic Engineering and Society (GES) Center at North Carolina State University

Andrew Maynard, Senior Sustainability Scholar, Julie Ann Wrigley Global Institute of Sustainability; Director, Risk Innovation Lab, School for the Future of Innovation in Society; Professor, School for the Future of Innovation in Society, Arizona State University

Gary Marchant, Regents’ Professor of Law; Faculty Director and Faculty Fellow, Center for Law, Science & Innovation, Arizona State University

Marc Saner, Inaugural Director of the Institute for Science, Society and Policy, and Associate Professor, University of Ottawa Department of Geography

Big Data

Anupam Chander, Martin Luther King, Jr. Professor of Law and Director, California International Law Center, UC Davis School of Law

Pilar Ossorio, Professor of Law and Bioethics, University of Wisconsin, School of Law and School of Medicine and Public Health; Morgridge Institute for Research, Ethics Scholar-in-Residence

George Poste, Chief Scientist, Complex Adaptive Systems Initiative (CASI) (http://www.casi.asu.edu/), Regents’ Professor and Del E. Webb Chair in Health Innovation, Arizona State University

Emily Shuckburgh, climate scientist and deputy head of the Polar Oceans Team at the British Antarctic Survey, University of Cambridge

Responsible Development of AI

Spring Berman, Ira A. Fulton Schools of Engineering, Arizona State University

John Havens, The IEEE [Institute of Electrical and Electronics Engineers] Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems

Subbarao Kambhampati, Senior Sustainability Scientist, Julie Ann Wrigley Global Institute of Sustainability, Professor, School of Computing, Informatics and Decision Systems Engineering, Ira A. Fulton Schools of Engineering, Arizona State University

Wendell Wallach, Consultant, Ethicist, and Scholar at Yale University’s Interdisciplinary Center for Bioethics

Existential and Catastrophic Ricks [sic]

Tony Barrett, Co-Founder and Director of Research of the Global Catastrophic Risk Institute

Haydn Belfield, Academic Project Administrator, Centre for the Study of Existential Risk at the University of Cambridge

Margaret E. Kosal, Associate Director, Sam Nunn School of International Affairs, Georgia Institute of Technology

Catherine Rhodes, Academic Project Manager, Centre for the Study of Existential Risk (CSER), University of Cambridge

These are the panels of interest to me; there are others on the conference homepage.

Here’s some information from the Conference registration webpage,

Early Bird Registration – $50 off until May 1! Enter discount code: earlybirdGETs50

New: Group Discount – Register 2+ attendees together and receive an additional 20% off for all group members!

Click Here to Register!

Conference registration fees are as follows:

  • General (non-CLE) Registration: $150.00
  • CLE Registration: $350.00
  • *Current Student / ASU Law Alumni Registration: $50.00
  • ^Cybersecurity sessions only (May 19): $100 CLE / $50 General / Free for students (registration info coming soon)

There you have it.

Neuro-techno future laws

I’m pretty sure this isn’t the first exploration of potential legal issues arising from research into neuroscience although it’s the first one I’ve stumbled across. From an April 25, 2017 news item on phys.org,

New human rights laws to prepare for advances in neurotechnology that put the ‘freedom of the mind’ at risk have been proposed today in the open access journal Life Sciences, Society and Policy.

The authors of the study suggest four new human rights laws could emerge in the near future to protect against exploitation and loss of privacy. The four laws are: the right to cognitive liberty, the right to mental privacy, the right to mental integrity and the right to psychological continuity.

An April 25, 2017 Biomed Central news release on EurekAlert, which originated the news item, describes the work in more detail,

Marcello Ienca, lead author and PhD student at the Institute for Biomedical Ethics at the University of Basel, said: “The mind is considered to be the last refuge of personal freedom and self-determination, but advances in neural engineering, brain imaging and neurotechnology put the freedom of the mind at risk. Our proposed laws would give people the right to refuse coercive and invasive neurotechnology, protect the privacy of data collected by neurotechnology, and protect the physical and psychological aspects of the mind from damage by the misuse of neurotechnology.”

Advances in neurotechnology, such as sophisticated brain imaging and the development of brain-computer interfaces, have led to these technologies moving away from a clinical setting and into the consumer domain. While these advances may be beneficial for individuals and society, there is a risk that the technology could be misused and create unprecedented threats to personal freedom.

Professor Roberto Andorno, co-author of the research, explained: “Brain imaging technology has already reached a point where there is discussion over its legitimacy in criminal court, for example as a tool for assessing criminal responsibility or even the risk of reoffending. Consumer companies are using brain imaging for ‘neuromarketing’, to understand consumer behaviour and elicit desired responses from customers. There are also tools such as ‘brain decoders’ which can turn brain imaging data into images, text or sound. All of these could pose a threat to personal freedom which we sought to address with the development of four new human rights laws.”

The authors explain that as neurotechnology improves and becomes commonplace, there is a risk that the technology could be hacked, allowing a third-party to ‘eavesdrop’ on someone’s mind. In the future, a brain-computer interface used to control consumer technology could put the user at risk of physical and psychological damage caused by a third-party attack on the technology. There are also ethical and legal concerns over the protection of data generated by these devices that need to be considered.

International human rights laws make no specific mention of neuroscience, although advances in biomedicine have become intertwined with laws, such as those concerning human genetic data. Similar to the historical trajectory of the genetic revolution, the authors state that the on-going neurorevolution will force a reconceptualization of human rights laws and even the creation of new ones.

Marcello Ienca added: “Science-fiction can teach us a lot about the potential threat of technology. Neurotechnology featured in famous stories has in some cases already become a reality, while others are inching ever closer, or exist as military and commercial prototypes. We need to be prepared to deal with the impact these technologies will have on our personal freedom.”

Here’s a link to and a citation for the paper,

Towards new human rights in the age of neuroscience and neurotechnology by Marcello Ienca and Roberto Andorno. Life Sciences, Society and Policy 2017 13:5 DOI: 10.1186/s40504-017-0050-1 Published: 26 April 2017

© The Author(s). 2017

This paper is open access.

North Carolina universities go beyond organ-on-a-chip

The researchers in the North Carolina universities involved in this project have high hopes according to an Oct. 9, 2015 news item on Nanowerk,

A team of researchers from the University of North Carolina at Chapel Hill and NC State University has received a $5.3 million, five-year Transformative Research (R01) Award from the National Institutes of Health (NIH) to create fully functioning versions of the human gut that fit on a chip the size of a dime.

Such “organs-on-a-chip” have become vital for biomedical research, as researchers seek alternatives to animal models for drug discovery and testing. The new grant will fund a technology that represents a major step forward for the field, overcoming limitations that have mired other efforts.

The technology will use primary cells derived directly from human biopsies, which are known to provide more relevant results than the immortalized cell lines used in current approaches. In addition, the device will sculpt these cells into the sophisticated architecture of the gut, rather than the disorganized ball of cells that are created in other miniature organ systems.

“We are building a device that goes far beyond the organ-on-a-chip,” said Nancy L. Allbritton, MD, PhD, professor and chair of the UNC-NC State joint department of biomedical engineering and one of four principal investigators on the NIH grant. “We call it a ‘simulacrum,’ [emphasis mine] a term used in science fiction to describe a duplicate. The idea is to create something that is indistinguishable from your own gut.”

I’ve come across the term ‘simulacrum’ in relation to philosophy so it’s a bit of a surprise to find it in a news release about an organ-on-a-chip where it seems to have been redefined somewhat. Here’s more from the Simulacrum entry on Wikipedia (Note: Links have been removed),

A simulacrum (plural: simulacra from Latin: simulacrum, which means “likeness, similarity”), is a representation or imitation of a person or thing.[1] The word was first recorded in the English language in the late 16th century, used to describe a representation, such as a statue or a painting, especially of a god. By the late 19th century, it had gathered a secondary association of inferiority: an image without the substance or qualities of the original.[2] Philosopher Fredric Jameson offers photorealism as an example of artistic simulacrum, where a painting is sometimes created by copying a photograph that is itself a copy of the real.[3] Other art forms that play with simulacra include trompe-l’œil,[4] pop art, Italian neorealism, and French New Wave.[3]

Philosophy

The simulacrum has long been of interest to philosophers. In his Sophist, Plato speaks of two kinds of image making. The first is a faithful reproduction, attempted to copy precisely the original. The second is intentionally distorted in order to make the copy appear correct to viewers. He gives the example of Greek statuary, which was crafted larger on the top than on the bottom so that viewers on the ground would see it correctly. If they could view it in scale, they would realize it was malformed. This example from the visual arts serves as a metaphor for the philosophical arts and the tendency of some philosophers to distort truth so that it appears accurate unless viewed from the proper angle.[5] Nietzsche addresses the concept of simulacrum (but does not use the term) in the Twilight of the Idols, suggesting that most philosophers, by ignoring the reliable input of their senses and resorting to the constructs of language and reason, arrive at a distorted copy of reality.[6]

Postmodernist French social theorist Jean Baudrillard argues that a simulacrum is not a copy of the real, but becomes truth in its own right: the hyperreal. Where Plato saw two types of representation—faithful and intentionally distorted (simulacrum)—Baudrillard sees four: (1) basic reflection of reality; (2) perversion of reality; (3) pretence of reality (where there is no model); and (4) simulacrum, which “bears no relation to any reality whatsoever”.[7] In Baudrillard’s concept, like Nietzsche’s, simulacra are perceived as negative, but another modern philosopher who addressed the topic, Gilles Deleuze, takes a different view, seeing simulacra as the avenue by which an accepted ideal or “privileged position” could be “challenged and overturned”.[8] Deleuze defines simulacra as “those systems in which different relates to different by means of difference itself. What is essential is that we find in these systems no prior identity, no internal resemblance”.[9]

Getting back to the proposed research, an Oct. (?), 2015 University of North Carolina news release, which originated the news item, describes the proposed work in more detail,

Allbritton is an expert at microfabrication and microengineering. Also on the team are intestinal stem cell expert Scott T. Magness, associate professor of medicine, biomedical engineering, and cell and molecular physiology in the UNC School of Medicine; microbiome expert Scott Bultman, associate professor of genetics in the UNC School of Medicine; and bioinformatics expert Shawn Gomez, associate professor of biomedical engineering in UNC’s College of Arts and Sciences and NC State.

The impetus for the “organ-on-chip” movement comes largely from the failings of the pharmaceutical industry. For just a single drug to go through the discovery, testing, and approval process can take as long as 15 years and as much as $5 billion. Animal models are expensive to work with and often don’t respond to drugs and diseases the same way humans do. Human cells grown in flat sheets on Petri dishes are also a poor proxy. Three-dimensional “organoids” are an improvement, but these hollow balls are made of a mishmash of cells that doesn’t accurately mimic the structure and function of the real organ.

Basically, the human gut is a 30-foot-long hollow tube made up of a continuous single layer of specialized cells. Regenerative stem cells reside deep inside millions of small pits or “crypts” along the tube, and mature differentiated cells are linked to the pits and live further out toward the surface. The gut also contains trillions of microbes, which are estimated to outnumber human cells by ten to one. These diverse microbial communities – collectively known as the microbiota – process toxins and pharmaceuticals, stimulate immunity, and even release hormones to impact behavior.

To create a dime-sized version of this complex microenvironment, the UNC-NC State team borrowed fabrication technologies from the electronics and microfluidics world. The device is composed of a polymer base containing an array of imprinted or shaped “hydrogels,” a mesh of molecules that can absorb water like a sponge. These hydrogels are specifically engineered to provide the structural support and biochemical cues for growing cells from the gut. Plugged into the device will be various kinds of plumbing that bring in chemicals, fluids, and gases to provide cues that tell the cells how and where to differentiate and grow. For example, the researchers will engineer a steep oxygen gradient into the device that will enable oxygen-loving human cells and anaerobic microbes to coexist in close proximity.

“The underlying concept – to simply grow a piece of human tissue in a dish – doesn’t seem that groundbreaking,” said Magness. “We have been doing that for a long time with cancer cells, but those efforts do not replicate human physiology. Using native stem cells from the small intestine or colon, we can now develop gut tissue layers in a dish that contains stem cells and all the differentiated cells of the gut. That is the thing stem cell biologists and engineers have been shooting for, to make real tissue behave properly in a dish to create better models for drug screening and cell-based therapies. With this work, we made a big leap toward that goal.”

Right now, the team has a working prototype that can physically and chemically guide mouse intestinal stem cells into the appropriate structure and function of the gut. For several years, Magness has been isolating and banking human stem cells from samples from patients undergoing routine colonoscopies at UNC Hospitals.

As part of the grant, he will work with the rest of the team to apply these stem cells to the new device and create “simulacra” that are representative of each patient’s individual gut. The approach will enable researchers to explore in a personalized way how both the human and microbial cells of the gut behave during healthy and diseased states.

“Having a system like this will advance microbiota research tremendously,” said Bultman. “Right now microbiota studies involve taking samples, doing sequencing, and then compiling an inventory of all the microbes in the disease cases and healthy controls. These studies just draw associations, so it is difficult to glean cause and effect. This device will enable us to probe the microbiota, and gain a better understanding of whether changes in these microbial communities are the cause or the consequence of disease.”

I wish them good luck with their work. To end on another interesting note, the concept of organs-on-a-chip won a design award. From a June 22, 2015 article by Oliver Wainwright for the Guardian (Note: Links have been removed),

Meet the Lung-on-a-chip, a simulation of the biological processes inside the human lung, developed by the Wyss Institute for Biologically Inspired Engineering at Harvard University – and now crowned Design of the Year by London’s Design Museum.

Lined with living human cells, the “organs-on-chips” mimic the tissue structures and mechanical motions of human organs, promising to accelerate drug discovery, decrease development costs and potentially usher in a future of personalised medicine.

“This is the epitome of design innovation,” says Paola Antonelli, design curator at New York’s Museum of Modern Art [MOMA], who nominated the project for the award and recently acquired organs-on-chips for MoMA’s permanent collection. “Removing some of the pitfalls of human and animal testing means, theoretically, that drug trials could be conducted faster and their viable results disseminated more quickly.”

Whodathunkit? (For those unfamiliar with slang written in this form: Who would have thought it?)