
Emerging technology and the law

I have three news bits about legal issues that are arising as a consequence of emerging technologies.

Deep neural networks, art, and copyright

Caption: The rise of automated art opens new creative avenues, coupled with new problems for copyright protection. Credit: Alexander Mordvintsev, Christopher Olah and Mike Tyka

Presumably this artwork is a demonstration of automated art, although neither the news item nor the news release ever explains how it was produced. An April 26, 2017 news item on ScienceDaily announces research into copyright and the latest in using neural networks to create art,

In 1968, sociologist Jean Baudrillard wrote on automatism that “contained within it is the dream of a dominated world […] that serves an inert and dreamy humanity.”

With the growing popularity of Deep Neural Networks (DNNs), this dream is fast becoming a reality.

Dr. Jean-Marc Deltorn, researcher at the Centre d’études internationales de la propriété intellectuelle in Strasbourg, argues that we must remain a responsive and responsible force in this process of automation — not inert dominators. As he demonstrates in a recent Frontiers in Digital Humanities paper, the dream of automation demands a careful study of the legal problems linked to copyright.

An April 26, 2017 Frontiers (publishing) news release on EurekAlert, which originated the news item, describes the research in more detail,

For more than half a century, artists have looked to computational processes as a way of expanding their vision. DNNs are the culmination of this cross-pollination: by learning to identify complex patterns, they can generate new creations.

These systems are made up of complex algorithms modeled on the transmission of signals between neurons in the brain.

DNN creations rely in equal measure on human inputs and the non-human algorithmic networks that process them.

Inputs are fed into the system, which is layered. Each layer provides an opportunity for a more refined knowledge of the inputs (shape, color, lines). Neural networks compare actual outputs to expected ones, and correct the predictive error through repetition and optimization. They train their own pattern recognition, thereby optimizing their learning curve and producing increasingly accurate outputs.

The deeper the layers are, the higher the level of abstraction. The highest layers are able to identify the contents of a given input with reasonable accuracy, after extended periods of training.
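
To make that compare-and-correct loop a little more concrete, here is a minimal sketch in Python. It is my own toy illustration (a two-layer network learning a simple pattern with plain NumPy), not code from Deltorn’s paper or from the DNN art systems discussed here, and every name and number in it is purely illustrative.

```python
# Toy sketch of the training loop described above: inputs pass through layers,
# actual outputs are compared to expected ones, and the predictive error is
# corrected through repetition and optimization. NumPy only, not a real DNN.
import numpy as np

rng = np.random.default_rng(0)

# Toy inputs and expected outputs (the XOR pattern, which needs a hidden layer).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights: the second layer works on the more abstract
# features produced by the first.
W1 = rng.normal(scale=0.5, size=(2, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 1.0
for step in range(5000):
    hidden = sigmoid(X @ W1)        # layer 1: a first refinement of the input
    output = sigmoid(hidden @ W2)   # layer 2: the network's actual output

    error = output - y              # compare actual output to expected output

    # Correct the predictive error by nudging the weights (gradient descent).
    grad_out = error * output * (1 - output)
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= learning_rate * hidden.T @ grad_out
    W1 -= learning_rate * X.T @ grad_hid

# The error should shrink toward zero over the repeated passes.
print("mean error after training:", float(np.abs(error).mean()))
```

Real DNNs used to generate artwork stack many more layers and train on vastly larger inputs, but the cycle of comparing actual outputs to expected ones and correcting the error is the same.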

Creation thus becomes increasingly automated through what Deltorn calls “the arcane traceries of deep architecture”. The results are sufficiently abstracted from their sources to produce original creations that have been exhibited in galleries, sold at auction and performed at concerts.

The originality of DNN creations is a combined product of technological automation on the one hand and human inputs and decisions on the other.

DNNs are gaining popularity. Various platforms (such as DeepDream) now allow internet users to generate their very own new creations. This popularization of the automation process calls for a comprehensive legal framework that ensures a creator’s economic and moral rights with regard to their work – copyright protection.

Form, originality and attribution are the three requirements for copyright. And while DNN creations satisfy the first of these three, the claim to originality and attribution will depend largely on a given country’s legislation and on the traceability of the human creator.

Legislation usually sets a low threshold for originality. As DNNs could in theory produce an endless number of riffs on source materials, the uncurbed creation of original works could inflate the existing number of copyright protections.

Additionally, a small number of national copyright laws confer attribution on what UK legislation defines loosely as “the person by whom the arrangements necessary for the creation of the work are undertaken.” In the case of DNNs, this could mean anybody from the programmer to the user of a DNN interface.

Combined with an overly supple take on originality, this view on attribution would further increase the number of copyrightable works.

The risk, in both cases, is that artists will be less willing to publish their own works, for fear of infringement of DNN copyright protections.

In order to promote creativity – one seminal aim of copyright protection – the issue must be limited to creations that manifest a personal voice “and not just the electric glint of a computational engine,” to quote Deltorn. A delicate act of discernment.

DNNs promise new avenues of creative expression for artists – with potential caveats. Copyright protection – a “catalyst to creativity” – must be contained. Many of us gently bask in the glow of an increasingly automated form of technology. But if we want to safeguard the ineffable quality that defines much art, it might be a good idea to home in more closely on the differences between the electric and the creative spark.

This research is part of a broader Frontiers Research Topic collection of articles on Deep Learning and Digital Humanities.

Here’s a link to and a citation for the paper,

Deep Creations: Intellectual Property and the Automata by Jean-Marc Deltorn. Front. Digit. Humanit., 01 February 2017 | https://doi.org/10.3389/fdigh.2017.00003

This paper is open access.

Conference on governance of emerging technologies

I received an April 17, 2017 notice via email about this upcoming conference. Here’s more from the Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics webpage,

The Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics will be held at the new Beus Center for Law & Society in Phoenix, AZ, May 17-19, 2017!

Call for Abstracts – Now Closed

The conference will consist of plenary and session presentations and discussions on regulatory, governance, legal, policy, social and ethical aspects of emerging technologies, including (but not limited to) nanotechnology, synthetic biology, gene editing, biotechnology, genomics, personalized medicine, human enhancement technologies, telecommunications, information technologies, surveillance technologies, geoengineering, neuroscience, artificial intelligence, and robotics. The conference is premised on the belief that there is much to be learned and shared from and across the governance experience and proposals for these various emerging technologies.

Keynote Speakers:

Gillian Hadfield, Richard L. and Antoinette Schamoi Kirtland Professor of Law and Professor of Economics, USC [University of Southern California] Gould School of Law

Shobita Parthasarathy, Associate Professor of Public Policy and Women’s Studies, Director, Science, Technology, and Public Policy Program, University of Michigan

Stuart Russell, Professor at [University of California] Berkeley, is a computer scientist known for his contributions to artificial intelligence

Craig Shank, Vice President for Corporate Standards Group in Microsoft’s Corporate, External and Legal Affairs (CELA)

Plenary Panels:

Innovation – Responsible and/or Permissionless

Ellen-Marie Forsberg, Senior Researcher/Research Manager at Oslo and Akershus University College of Applied Sciences

Adam Thierer, Senior Research Fellow with the Technology Policy Program at the Mercatus Center at George Mason University

Wendell Wallach, Consultant, ethicist, and scholar at Yale University’s Interdisciplinary Center for Bioethics

Gene Drives, Trade and International Regulations

Greg Kaebnick, Director, Editorial Department; Editor, Hastings Center Report; Research Scholar, Hastings Center

Jennifer Kuzma, Goodnight-North Carolina GlaxoSmithKline Foundation Distinguished Professor in Social Sciences in the School of Public and International Affairs (SPIA) and co-director of the Genetic Engineering and Society (GES) Center at North Carolina State University

Andrew Maynard, Senior Sustainability Scholar, Julie Ann Wrigley Global Institute of Sustainability; Director, Risk Innovation Lab, School for the Future of Innovation in Society; Professor, School for the Future of Innovation in Society, Arizona State University

Gary Marchant, Regents’ Professor of Law; Faculty Director and Faculty Fellow, Center for Law, Science & Innovation, Arizona State University

Marc Saner, Inaugural Director of the Institute for Science, Society and Policy, and Associate Professor, University of Ottawa Department of Geography

Big Data

Anupam Chander, Martin Luther King, Jr. Professor of Law and Director, California International Law Center, UC Davis School of Law

Pilar Ossorio, Professor of Law and Bioethics, University of Wisconsin, School of Law and School of Medicine and Public Health; Morgridge Institute for Research, Ethics Scholar-in-Residence

George Poste, Chief Scientist, Complex Adaptive Systems Initiative (CASI) (http://www.casi.asu.edu/), Regents’ Professor and Del E. Webb Chair in Health Innovation, Arizona State University

Emily Shuckburgh, climate scientist and deputy head of the Polar Oceans Team at the British Antarctic Survey, University of Cambridge

Responsible Development of AI

Spring Berman, Ira A. Fulton Schools of Engineering, Arizona State University

John Havens, The IEEE [Institute of Electrical and Electronics Engineers] Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems

Subbarao Kambhampati, Senior Sustainability Scientist, Julie Ann Wrigley Global Institute of Sustainability, Professor, School of Computing, Informatics and Decision Systems Engineering, Ira A. Fulton Schools of Engineering, Arizona State University

Wendell Wallach, Consultant, Ethicist, and Scholar at Yale University’s Interdisciplinary Center for Bioethics

Existential and Catastrophic Ricks [sic]

Tony Barrett, Co-Founder and Director of Research of the Global Catastrophic Risk Institute

Haydn Belfield, Academic Project Administrator, Centre for the Study of Existential Risk at the University of Cambridge

Margaret E. Kosal, Associate Director, Sam Nunn School of International Affairs, Georgia Institute of Technology

Catherine Rhodes, Academic Project Manager, Centre for the Study of Existential Risk (CSER), University of Cambridge

These are the panels of interest to me; there are others listed on the conference homepage.

Here’s some information from the Conference registration webpage,

Early Bird Registration – $50 off until May 1! Enter discount code: earlybirdGETs50

New: Group Discount – Register 2+ attendees together and receive an additional 20% off for all group members!

Click Here to Register!

Conference registration fees are as follows:

  • General (non-CLE) Registration: $150.00
  • CLE Registration: $350.00
  • *Current Student / ASU Law Alumni Registration: $50.00
  • ^Cybersecurity sessions only (May 19): $100 CLE / $50 General / Free for students (registration info coming soon)

There you have it.

Neuro-techno future laws

I’m pretty sure this isn’t the first exploration of potential legal issues arising from research into neuroscience although it’s the first one I’ve stumbled across. From an April 25, 2017 news item on phys.org,

New human rights laws to prepare for advances in neurotechnology that put the ‘freedom of the mind’ at risk have been proposed today in the open access journal Life Sciences, Society and Policy.

The authors of the study suggest four new human rights laws could emerge in the near future to protect against exploitation and loss of privacy. The four laws are: the right to cognitive liberty, the right to mental privacy, the right to mental integrity and the right to psychological continuity.

An April 25, 2017 Biomed Central news release on EurekAlert, which originated the news item, describes the work in more detail,

Marcello Ienca, lead author and PhD student at the Institute for Biomedical Ethics at the University of Basel, said: “The mind is considered to be the last refuge of personal freedom and self-determination, but advances in neural engineering, brain imaging and neurotechnology put the freedom of the mind at risk. Our proposed laws would give people the right to refuse coercive and invasive neurotechnology, protect the privacy of data collected by neurotechnology, and protect the physical and psychological aspects of the mind from damage by the misuse of neurotechnology.”

Advances in neurotechnology, such as sophisticated brain imaging and the development of brain-computer interfaces, have led to these technologies moving away from a clinical setting and into the consumer domain. While these advances may be beneficial for individuals and society, there is a risk that the technology could be misused and create unprecedented threats to personal freedom.

Professor Roberto Andorno, co-author of the research, explained: “Brain imaging technology has already reached a point where there is discussion over its legitimacy in criminal court, for example as a tool for assessing criminal responsibility or even the risk of reoffending. Consumer companies are using brain imaging for ‘neuromarketing’, to understand consumer behaviour and elicit desired responses from customers. There are also tools such as ‘brain decoders’ which can turn brain imaging data into images, text or sound. All of these could pose a threat to personal freedom which we sought to address with the development of four new human rights laws.”

The authors explain that as neurotechnology improves and becomes commonplace, there is a risk that the technology could be hacked, allowing a third-party to ‘eavesdrop’ on someone’s mind. In the future, a brain-computer interface used to control consumer technology could put the user at risk of physical and psychological damage caused by a third-party attack on the technology. There are also ethical and legal concerns over the protection of data generated by these devices that need to be considered.

International human rights laws make no specific mention of neuroscience, although advances in biomedicine have become intertwined with laws, such as those concerning human genetic data. Similar to the historical trajectory of the genetic revolution, the authors state that the ongoing neurorevolution will force a reconceptualization of human rights laws and even the creation of new ones.

Marcello Ienca added: “Science-fiction can teach us a lot about the potential threat of technology. Neurotechnology featured in famous stories has in some cases already become a reality, while others are inching ever closer, or exist as military and commercial prototypes. We need to be prepared to deal with the impact these technologies will have on our personal freedom.”

Here’s a link to and a citation for the paper,

Towards new human rights in the age of neuroscience and neurotechnology by Marcello Ienca and Roberto Andorno. Life Sciences, Society and Policy 2017, 13:5 DOI: 10.1186/s40504-017-0050-1 Published: 26 April 2017

© The Author(s). 2017

This paper is open access.

The mathematics of Disney’s ‘Moana’

The hit Disney movie “Moana” features stunning visual effects, including the animation of water to such a degree that it becomes a distinct character in the film. Courtesy of Walt Disney Animation Studios

Few people think to marvel over the mathematics when watching an animated feature, but without mathematicians the artists would not be able to achieve their artistic goals, as a Jan. 4, 2017 news item on phys.org makes clear (Note: A link has been removed),

UCLA [University of California at Los Angeles] mathematics professor Joseph Teran, a Walt Disney consultant on animated movies since 2007, is under no illusion that artists want lengthy mathematics lessons, but many of them realize that the success of animated movies often depends on advanced mathematics.

“In general, the animators and artists at the studios want as little to do with mathematics and physics as possible, but the demands for realism in animated movies are so high,” Teran said. “Things are going to look fake if you don’t at least start with the correct physics and mathematics for many materials, such as water and snow. If the physics and mathematics are not simulated accurately, it will be very glaring that something is wrong with the animation of the material.”

Teran and his research team have helped infuse realism into several Disney movies, including “Frozen,” where they used science to animate snow scenes. Most recently, they applied their knowledge of math, physics and computer science to enliven the new 3-D computer-animated hit, “Moana,” a tale about an adventurous teenage girl who is drawn to the ocean and is inspired to leave the safety of her island on a daring journey to save her people.

A Jan. 3, 2017 UCLA news release, which originated the news item, explains in further nontechnical detail,

Alexey Stomakhin, a former UCLA doctoral student of Teran’s and Andrea Bertozzi’s, played an important role in the making of “Moana.” After earning his Ph.D. in applied mathematics in 2013, he became a senior software engineer at Walt Disney Animation Studios. Working with Disney’s effects artists, technical directors and software developers, Stomakhin led the development of the code that was used to simulate the movement of water in “Moana,” enabling it to play a role as one of the characters in the film.

“The increased demand for realism and complexity in animated movies makes it preferable to get assistance from computers; this means we have to simulate the movement of the ocean surface and how the water splashes, for example, to make it look believable,” Stomakhin explained. “There is a lot of mathematics, physics and computer science under the hood. That’s what we do.”

“Moana” has been praised for its stunning visual effects in words the mathematicians love hearing. “Everything in the movie looks almost real, so the movement of the water has to look real too, and it does,” Teran said. “’Moana’ has the best water effects I’ve ever seen, by far.”

Stomakhin said his job is fun and “super-interesting, especially when we cheat physics and step beyond physics. It’s almost like building your own universe with your own laws of physics and trying to simulate that universe.

“Disney movies are about magic, so magical things happen which do not exist in the real world,” said the software engineer. “It’s our job to add some extra forces and other tricks to help create those effects. If you have an understanding of how the real physical laws work, you can push parameters beyond physical limits and change equations slightly; we can predict the consequences of that.”

To make animated movies these days, movie studios need to solve, or nearly solve, partial differential equations. Stomakhin, Teran and their colleagues build the code that solves the partial differential equations. More accurately, they write algorithms that closely approximate the partial differential equations because they cannot be solved perfectly. “We try to come up with new algorithms that have the highest-quality metrics in all possible categories, including preserving angular momentum perfectly and preserving energy perfectly. Many algorithms don’t have these properties,” Teran said.
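
For a taste of what “nearly solving” a partial differential equation looks like in practice, here is a toy Python sketch of my own (not Disney or UCLA code): the one-dimensional wave equation, a crude stand-in for a water surface, marched forward in time with finite differences. All the numbers are illustrative.

```python
# Toy illustration: approximate the 1-D wave equation u_tt = c^2 u_xx with
# finite differences. Production fluid solvers are 3-D and far more elaborate,
# but the principle is the same: the PDE cannot be solved exactly, so an
# algorithm marches an approximate solution forward in time.
import numpy as np

n = 200                       # grid points along the "surface"
c, dx, dt = 1.0, 0.01, 0.005  # wave speed, grid spacing, time step
assert c * dt / dx <= 1.0     # stability (CFL) condition for this scheme

x = np.linspace(0.0, (n - 1) * dx, n)
u_prev = np.exp(-((x - 1.0) ** 2) / 0.01)   # initial bump: a small "splash"
u = u_prev.copy()

coeff = (c * dt / dx) ** 2
for step in range(400):
    u_next = np.zeros_like(u)
    # Central differences approximate the second spatial derivative.
    u_next[1:-1] = (2.0 * u[1:-1] - u_prev[1:-1]
                    + coeff * (u[2:] - 2.0 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next      # ends of the domain stay pinned at zero

print("sampled surface heights after 400 steps:", np.round(u[::40], 3))
```

Designing schemes so that physical quantities such as energy and angular momentum survive this kind of discretization is exactly the algorithmic work Teran describes.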

Stomakhin was also involved in creating the ocean’s crashing waves that have to break at a certain place and time. That task required him to get creative with physics and use other tricks. “You don’t allow physics to completely guide it,” he said.  “You allow the wave to break only when it needs to break.”

Depicting boats on waves posed additional challenges for the scientists.

“It’s easy to simulate a boat traveling through a static lake, but a boat on waves is much more challenging to simulate,” Stomakhin said. “We simulated the fluid around the boat; the challenge was to blend that fluid with the rest of the ocean. It can’t look like the boat is splashing in a little swimming pool — the blend needs to be seamless.”

Stomakhin spent more than a year developing the code and understanding the physics that allowed him to achieve this effect.

“It’s nice to see the great visual effect, something you couldn’t have achieved if you hadn’t designed the algorithm to solve physics accurately,” said Teran, who has taught an undergraduate course on scientific computing for the visual-effects industry.

While Teran loves spectacular visual effects, he said the research has many other scientific applications as well. It could be used to simulate plasmas, simulate 3-D printing or for surgical simulation, for example. Teran is using a related algorithm to build virtual livers to substitute for the animal livers that surgeons train on. He is also using the algorithm to study traumatic leg injuries.

Teran describes the work with Disney as “bread-and-butter, high-performance computing for simulating materials, as mechanical engineers and physicists at national laboratories would. Simulating water for a movie is not so different, but there are, of course, small tweaks to make the water visually compelling. We don’t have a separate branch of research for computer graphics. We create new algorithms that work for simulating wide ranges of materials.”

Teran, Stomakhin and three other applied mathematicians — Chenfanfu Jiang, Craig Schroeder and Andrew Selle — also developed a state-of-the-art simulation method for fluids in graphics, called APIC, based on months of calculations. It allows for better realism and stunning visual results. Jiang is a UCLA postdoctoral scholar in Teran’s laboratory, who won a 2015 UCLA best dissertation prize.  Schroeder is a former UCLA postdoctoral scholar who worked with Teran and is now at UC Riverside. Selle, who worked at Walt Disney Animation Studios, is now at Google.

Their newest version of APIC has been accepted for publication by the peer-reviewed Journal of Computational Physics.
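
For readers curious what a particle-in-cell transfer looks like in code, here is a heavily simplified 2-D sketch of my own, loosely patterned on published descriptions of APIC rather than on the authors’ implementation: each particle carries a velocity and a small affine matrix, and both are splatted onto a background grid. Every name and value below is illustrative.

```python
# Simplified APIC-style particle-to-grid transfer (my own sketch, not the
# authors' code). Each particle deposits momentum onto nearby grid nodes via
# quadratic B-spline weights, plus an affine term C_p @ (node - particle).
import numpy as np

rng = np.random.default_rng(1)

n_grid = 32
dx = 1.0 / n_grid
inv_dx = 1.0 / dx

n_particles = 500
pos = 0.25 + 0.5 * rng.random((n_particles, 2))      # keep particles off the border
vel = rng.normal(size=(n_particles, 2))
C = rng.normal(scale=0.1, size=(n_particles, 2, 2))  # affine (locally linear) velocity
mass = np.full(n_particles, 1.0)

grid_m = np.zeros((n_grid + 3, n_grid + 3))
grid_mv = np.zeros((n_grid + 3, n_grid + 3, 2))

for p in range(n_particles):
    xp = pos[p] * inv_dx
    base = np.floor(xp - 0.5).astype(int)
    fx = xp - base                                   # fractional offset in [0.5, 1.5)
    # Quadratic B-spline weights over a 3x3 neighbourhood of grid nodes.
    w = [0.5 * (1.5 - fx) ** 2,
         0.75 - (fx - 1.0) ** 2,
         0.5 * (fx - 0.5) ** 2]
    for i in range(3):
        for j in range(3):
            weight = w[i][0] * w[j][1]
            dpos = (np.array([i, j]) - fx) * dx      # vector from particle to node
            node = (base[0] + i, base[1] + j)
            grid_m[node] += weight * mass[p]
            grid_mv[node] += weight * mass[p] * (vel[p] + C[p] @ dpos)

# Total momentum on the grid matches total particle momentum (up to round-off).
print(np.allclose(grid_mv.reshape(-1, 2).sum(axis=0),
                  (mass[:, None] * vel).sum(axis=0)))
```

In the full method the affine matrix is gathered back from the grid at every time step; the sketch only shows the extra affine term, which is what lets APIC-style transfers retain rotational motion that simpler particle-grid transfers tend to smear away, while the final check confirms the transfer conserves total momentum.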

“Alexey is using ideas from high-performance computing to make movies,” Teran said, “and we are contributing to the scientific community by improving the algorithm.”

Unfortunately, the paper does not seem to have been published early online so I cannot offer a link.

Final comment: it would have been interesting to include a comment from one of the film’s artists or animators in the article, but that may not have been possible due to time or space constraints.

Beakerhead’s Big Bang (art/engineering) Residency in Alberta, Canada

I am sorry for the late notice; the deadline for submissions is Oct. 9, 2015, so there’s not much time to prepare. In any event, here’s more information about the Big Bang Residency Program call for proposals,

Every September, Beakerhead erupts onto the streets and venues of Calgary with cultural works that have science or engineering at their core. This is a call for proposals to build a creative work through an initiative called the Big Bang Residency Program. The work will be built over the course of a year with a collaborative team and will premiere on September 14, 2016, at Beakerhead in Calgary, Canada.

About the Big Bang Residency Program

The Big Bang Residency Program is funded by the Remarkable Experience Accelerator, a joint initiative of Calgary Arts Development and the Calgary Hotel Association. The program is led by Beakerhead with partnership support from the internationally renowned Banff Centre.

The program will support the creation of a total of three major new artworks over three years that will premiere internationally in Calgary during Beakerhead each year. This residency program will support:

  • One team per year each consisting of no less than four and no more than five individuals (additional support members are possible; however, the maximum size of the core team in residence will be five).
  • Two weeks in residence total; one week in the late fall and one week the following summer, with exact dates to be arranged with The Banff Centre and the selected team in residence. The production of the work is expected to take place in-between these two residency periods in Calgary.
    Call for Proposals

    Beakerhead and The Banff Centre will support the design and build of a work to be shared with the world during Beakerhead, September 14 to 18, 2016. It will be created over the course of the year, which will include two weeks in residence at The Banff Centre with an interdisciplinary team of collaborators.

    Who is Eligible?

    This Call for Proposals is open to international artists, engineers, architects, designers, scientists and others. In addition to meeting the requirements for team composition below, the team must have a connection to Calgary: the work will be built in Calgary, developed in Banff, premiere in Calgary and call Calgary its home base. The proposal need not be submitted by a complete team: individuals may apply. The team can be assembled with support from The Banff Centre and Beakerhead to ensure that the collaboration of artists and engineers will result in a project that is created in Calgary/Banff over the course of the year.

    Team Composition 

    Each team must include:

    1. At least one individual who has received specialized art training (degree from a recognized art institution) and has developed and exhibited a body of work;
    2. At least one individual who has received specialized engineering training (degree from an accredited engineering school) and has previous experience in any artistic medium;
    3. Other members of the team should bring additional art and design skills, technical skills and project management skills. They may include emerging and professional roles.

    Staging and Exhibition

    The engineered artworks produced during the residency will be presented during Beakerhead in an unprecedented spectacle of performance and public engagement. The staging of the premiere may be developed in partnership with other venues, as dictated by the artworks. Many Beakerhead events take place in partnership with existing venues, such as theatres, galleries, public spaces, business revitalization zones, universities and libraries. The artistic disciplines may include installation, performance, visual art, music or any other media.

    The Details

    Design Criteria

    The successful proposal will meet the following criteria.

    • Location: The installation will be in a public location or available venue in Calgary, Alberta, from September 14 to 18, 2016, and can be toured afterwards. Park-like settings and public roadways may be possible.
    • Dimension: There is no limit on dimension. However, proposals for works that can engage larger numbers of people at the scale of public art will be given preference.
    • Scope: Preference will be given to works that are both arresting to view and interesting to experience first-hand.
    • Install and De-install: Up to four days can be provided to install and de-install. The successful team must be capable of completing this work with volunteer crews.
    • Material: All materials must meet North American and European building and fire safety codes.

    Budget

    A budget of CAD 24,000 is available for materials and supplies. The artist/collaborator fee is CAD 5,000 per team member up to CAD 25,000. Two weeks in residence will be provided for a five-person team, including accommodation and meals at The Banff Centre. Support for venue rental over the winter for build space will be provided, as well as heavy equipment costs.

    The budget may include:

    • All additional materials costs
    • Equipment services/rental for installation and de-installation
    • Contracted labour for specialized services
    • Documentation expenses
    • Stipend per team member (CAD 5,000 per member up to CAD 25,000)
    • Workshop and fabrication space rental in Calgary

    The budget may not include:

    • Travel costs
    • Salaries and wages

    If the budget proposed exceeds the amount of funding available, please detail your plans for acquiring additional funds to make up any projected shortfall.

    Additional

    Preference will be given to projects that consider:

    • Delightful and thought-provoking experiences at the crossroads of art and engineering
    • Use of public space
    • Assembly, strike and touring ability
    • Engagement of a large volume of viewers
    • Durability for multiple days of high volume public interaction

    Timeline

    Important 2015/16 Dates

    • Aug 6, 2015:  Call for proposals
    • Oct 9: Deadline for submissions
    • Nov 6: Announcement of the successful proposal
    • Dec 6: Presentation of the successful team at the annual Beakerhead partners meeting
    • Dec 7-12*: Residency Week 1 in Banff: Detailed production plan completed
    • Jan 20, 2016: Concept unveiled to public and build volunteers engaged
    • Feb-August: Build period in Calgary
    • Aug 22-27*: Residency Week 2 in Banff: Presentation planning and rehearsals
    • Sept 14 – 18: International premiere at Beakerhead!

    *dates may change

    Timeline Details

    The program will lift off with an announcement in August 2015, and the first major artwork will premiere in September 2016. A second round will be announced in the summer of 2016, and a third in the summer of 2017.

    Interested applicants are encouraged to attend Beakerhead 2015 (September 16 – 20), or have an associate attend, to fully understand the presentation opportunities. The final team will be announced in the fall, and will commence the term with a one-week period “in residence” at the Banff Centre (a week to work full-time on the project) to develop the detailed design and production plan. The partnership with The Banff Centre will support the development of design drawings and a business strategy.

    The build will then take place over the winter and summer in Calgary. Beakerhead will support the successful team by making introductions to local resources and facilities.

    The team in residence will be strongly encouraged to engage an expanded team of volunteers in the building process to create a community of support around the spectacle element.

There are more details here, including information on how to make a submission.