Tag Archives: privacy

Internet of toys, the robotification of childhood, and privacy issues

Leave it to the European Commission’s (EC) Joint Research Centre (JRC) to look into the future of toys. As far as I’m aware there are no such moves in either Canada or the US despite the ubiquity of robot toys and other such devices. From a March 23, 2017 EC JRC press release (also on EurekAlert),

Action is needed to monitor and control the emerging Internet of Toys, concludes a new JRC report. Privacy and security are highlighted as main areas of concern.

Large numbers of connected toys have been put on the market over the past few years, and the turnover is expected to reach €10 billion by 2020 – up from just €2.6 billion in 2015.

Connected toys come in many different forms, from smart watches to teddy bears that interact with their users. They are connected to the internet and together with other connected appliances they form the Internet of Things, which is bringing technology into our daily lives more than ever.

However, the toys’ ability to record, store and share information about their young users raises concerns about children’s safety, privacy and social development.

A team of JRC scientists and international experts looked at the safety, security, privacy and societal questions emerging from the rise of the Internet of Toys. The report invites policymakers, industry, parents and teachers to study connected toys in more depth in order to provide a framework which ensures that these toys are safe and beneficial for children.

Robotification of childhood

Robots are no longer used only in industry to carry out repetitive or potentially dangerous tasks. In recent years, robots have entered our everyday lives, and children are increasingly likely to encounter robotic or artificial intelligence-enhanced toys.

We still know relatively little about the consequences of children’s interaction with robotic toys. However, it is conceivable that they represent both opportunities and risks for children’s cognitive, socio-emotional and moral-behavioural development.

For example, social robots may further the acquisition of foreign language skills by compensating for the lack of native speakers as language tutors or by removing the barriers and peer pressure encountered in the classroom. There is also evidence of the benefits of child-robot interaction for children with developmental problems, such as autism or learning difficulties, who may find human interaction difficult.

However, the internet-based personalization of children’s education via filtering algorithms may also increase the risk of ‘educational bubbles’ where children only receive information that fits their pre-existing knowledge and interests – similar to adult interaction on social media networks.
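The ‘educational bubble’ the release describes is, at its core, a filtering loop: content is selected to match existing interests, so the interest set never widens. A minimal hypothetical sketch (the catalogue, titles and function names are invented purely for illustration):

```python
# Hypothetical sketch of interest-based filtering (an "educational bubble").
# The recommender only returns items tagged with interests the child
# already has, so nothing new ever enters the feed.

def recommend(items, interests):
    """Return only the items whose topic matches an existing interest."""
    return [item for item in items if item["topic"] in interests]

catalogue = [
    {"title": "Counting with dinosaurs", "topic": "dinosaurs"},
    {"title": "Intro to French", "topic": "languages"},
    {"title": "Dinosaur habitats", "topic": "dinosaurs"},
    {"title": "Simple machines", "topic": "physics"},
]

child_interests = {"dinosaurs"}
picks = recommend(catalogue, child_interests)
print([p["title"] for p in picks])  # only dinosaur titles are ever shown;
# "languages" and "physics" never enter the feed, and the bubble persists.
```

The fix the report implies is equally simple in principle: deliberately mix in content from outside the matched set, something a pure engagement-maximizing filter has no incentive to do.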

Safety and security considerations

The rapid rise in internet connected toys also raises concerns about children’s safety and privacy. In particular, the way that data gathered by connected toys is analysed, manipulated and stored is not transparent, which poses an emerging threat to children’s privacy.

The data provided by children while they play, i.e. the sounds, images and movements recorded by connected toys, is personal data protected by the EU data protection framework, as well as by the new General Data Protection Regulation (GDPR). However, information on how this data is stored, analysed and shared might be hidden in long privacy statements or policies and often goes unnoticed by parents.

Whilst children’s right to privacy is the most immediate concern linked to connected toys, there is also a long term concern: growing up in a culture where the tracking, recording and analysing of children’s everyday choices becomes a normal part of life is also likely to shape children’s behaviour and development.

Usage framework to guide the use of connected toys

The report calls for industry and policymakers to create a connected toys usage framework to act as a guide for their design and use.

This would also help toymakers meet the challenge of complying with the new European General Data Protection Regulation (GDPR), which comes into force in May 2018 and will increase citizens’ control over their personal data.

The report also calls for the connected toy industry and academic researchers to work together to produce better designed and safer products.

Advice for parents

The report concludes that it is paramount that we understand how children interact with connected toys and which risks and opportunities they entail for children’s development.

“These devices come with really interesting possibilities and the more we use them, the more we will learn about how to best manage them. Locking them up in a cupboard is not the way to go. We as adults have to understand how they work – and how they might ‘misbehave’ – so that we can provide the right tools and the right opportunities for our children to grow up happy in a secure digital world”, says Stéphane Chaudron, the report’s lead researcher at the Joint Research Centre (JRC).

The authors of the report encourage parents to get informed about the capabilities, functions, security measures and privacy settings of toys before buying them. They also urge parents to focus on the quality of play by observing their children, talking to them about their experiences and playing alongside and with their children.

Protecting and empowering children

Through the Alliance to better protect minors online and with the support of UNICEF, NGOs, Toy Industries Europe and other industry and stakeholder groups, European and global ICT and media companies are working to improve the protection and empowerment of children when using connected toys. This self-regulatory initiative is facilitated by the European Commission and aims to create a safer and more stimulating digital environment for children.

There’s an engaging video accompanying this press release,

You can find the report (Kaleidoscope on the Internet of Toys: Safety, security, privacy and societal insights) here and both the PDF and print versions are free (although I imagine you’ll have to pay postage for the print version). This report was published in 2016; the authors are Stéphane Chaudron, Rosanna Di Gioia, Monica Gemo, Donell Holloway, Jackie Marsh, Giovanna Mascheroni, Jochen Peter and Dylan Yamada-Rice, and organizations involved include European Cooperation in Science and Technology (COST), Digital Literacy and Multimodal Practices of Young Children (DigiLitEY), and COST Action IS1410. DigiLitEY is a European network of 33 countries focusing on research in this area (2015-2019).

Emerging technology and the law

I have three news bits about legal issues that are arising as a consequence of emerging technologies.

Deep neural networks, art, and copyright

Caption: The rise of automated art opens new creative avenues, coupled with new problems for copyright protection. Credit: Provided by: Alexander Mordvintsev, Christopher Olah and Mike Tyka

Presumably this artwork is a demonstration of automated art although they never really explain how in the news item/news release. An April 26, 2017 news item on ScienceDaily announces research into copyright and the latest in using neural networks to create art,

In 1968, sociologist Jean Baudrillard wrote on automatism that “contained within it is the dream of a dominated world […] that serves an inert and dreamy humanity.”

With the growing popularity of Deep Neural Networks (DNNs), this dream is fast becoming a reality.

Dr. Jean-Marc Deltorn, researcher at the Centre d’études internationales de la propriété intellectuelle in Strasbourg, argues that we must remain a responsive and responsible force in this process of automation — not inert dominators. As he demonstrates in a recent Frontiers in Digital Humanities paper, the dream of automation demands a careful study of the legal problems linked to copyright.

An April 26, 2017 Frontiers (publishing) news release on EurekAlert, which originated the news item, describes the research in more detail,

For more than half a century, artists have looked to computational processes as a way of expanding their vision. DNNs are the culmination of this cross-pollination: by learning to identify complex patterns, they can generate new creations.

These systems are made up of complex algorithms modeled on the transmission of signals between neurons in the brain.

DNN creations rely in equal measure on human inputs and the non-human algorithmic networks that process them.

Inputs are fed into the system, which is layered. Each layer provides an opportunity for a more refined knowledge of the inputs (shape, color, lines). Neural networks compare actual outputs to expected ones, and correct the predictive error through repetition and optimization. They train their own pattern recognition, thereby optimizing their learning curve and producing increasingly accurate outputs.

The deeper the layers are, the higher the level of abstraction. The highest layers are able to identify the contents of a given input with reasonable accuracy, after extended periods of training.
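The training process the release describes, compare actual outputs to expected ones, correct the predictive error, repeat, can be sketched as a toy two-layer network in plain NumPy. This is purely illustrative and not any system from Deltorn’s paper; all sizes and learning rates are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn XOR, a pattern a single layer cannot represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))  # first layer: raw inputs -> hidden features
W2 = rng.normal(size=(8, 1))  # second layer: hidden features -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1)       # each layer gives a more refined view
    return h, sigmoid(h @ W2)

_, out = forward(X)
loss_before = float(np.mean((out - y) ** 2))

for _ in range(5000):
    h, out = forward(X)
    err = out - y                             # actual vs expected output
    grad_out = err * out * (1 - out)          # correct the predictive error...
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out                # ...through repetition and
    W1 -= 0.5 * X.T @ grad_h                  # optimization (gradient descent)

_, out = forward(X)
loss_after = float(np.mean((out - y) ** 2))
print(loss_before, "->", loss_after)          # the error shrinks with training
```

The deeper architectures used for DNN art work on the same principle, just with many more layers, so each layer’s output is a progressively more abstract description of the input.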

Creation thus becomes increasingly automated through what Deltorn calls “the arcane traceries of deep architecture”. The results are sufficiently abstracted from their sources to produce original creations that have been exhibited in galleries, sold at auction and performed at concerts.

The originality of DNNs is a combined product of technological automation on the one hand, and human inputs and decisions on the other.

DNNs are gaining popularity. Various platforms (such as DeepDream) now allow internet users to generate their very own new creations. This popularization of the automation process calls for a comprehensive legal framework that ensures a creator’s economic and moral rights with regard to their work – copyright protection.

Form, originality and attribution are the three requirements for copyright. And while DNN creations satisfy the first of these three, the claim to originality and attribution will depend largely on a given country’s legislation and on the traceability of the human creator.

Legislation usually sets a low threshold for originality. As DNNs could in theory create an endless number of riffs on source materials, the uncurbed creation of original works could inflate the existing number of copyright protections.

Additionally, a small number of national copyright laws confer attribution to what UK legislation defines loosely as “the person by whom the arrangements necessary for the creation of the work are undertaken.” In the case of DNNs, this could mean anybody from the programmer to the user of a DNN interface.

Combined with an overly supple take on originality, this view on attribution would further increase the number of copyrightable works.

The risk, in both cases, is that artists will be less willing to publish their own works, for fear of infringement of DNN copyright protections.

In order to promote creativity – one seminal aim of copyright protection – the issue must be limited to creations that manifest a personal voice “and not just the electric glint of a computational engine,” to quote Deltorn. A delicate act of discernment.

DNNs promise new avenues of creative expression for artists – with potential caveats. Copyright protection – a “catalyst to creativity” – must be contained. Many of us gently bask in the glow of an increasingly automated form of technology. But if we want to safeguard the ineffable quality that defines much art, it might be a good idea to home in more closely on the differences between the electric and the creative spark.

This research is and will be part of a broader Frontiers Research Topic collection of articles on Deep Learning and Digital Humanities.

Here’s a link to and a citation for the paper,

Deep Creations: Intellectual Property and the Automata by Jean-Marc Deltorn. Front. Digit. Humanit., 01 February 2017 | https://doi.org/10.3389/fdigh.2017.00003

This paper is open access.

Conference on governance of emerging technologies

I received an April 17, 2017 notice via email about this upcoming conference. Here’s more from the Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics webpage,

The Fifth Annual Conference on Governance of Emerging Technologies:

Law, Policy and Ethics held at the new

Beus Center for Law & Society in Phoenix, AZ

May 17-19, 2017!

Call for Abstracts – Now Closed

The conference will consist of plenary and session presentations and discussions on regulatory, governance, legal, policy, social and ethical aspects of emerging technologies, including (but not limited to) nanotechnology, synthetic biology, gene editing, biotechnology, genomics, personalized medicine, human enhancement technologies, telecommunications, information technologies, surveillance technologies, geoengineering, neuroscience, artificial intelligence, and robotics. The conference is premised on the belief that there is much to be learned and shared from and across the governance experience and proposals for these various emerging technologies.

Keynote Speakers:

Gillian Hadfield, Richard L. and Antoinette Schamoi Kirtland Professor of Law and Professor of Economics, USC [University of Southern California] Gould School of Law

Shobita Parthasarathy, Associate Professor of Public Policy and Women’s Studies; Director, Science, Technology, and Public Policy Program, University of Michigan

Stuart Russell, Professor at [University of California] Berkeley, is a computer scientist known for his contributions to artificial intelligence

Craig Shank, Vice President for Corporate Standards Group in Microsoft’s Corporate, External and Legal Affairs (CELA)

Plenary Panels:

Innovation – Responsible and/or Permissionless

Ellen-Marie Forsberg, Senior Researcher/Research Manager at Oslo and Akershus University College of Applied Sciences

Adam Thierer, Senior Research Fellow with the Technology Policy Program at the Mercatus Center at George Mason University

Wendell Wallach, Consultant, ethicist, and scholar at Yale University’s Interdisciplinary Center for Bioethics

 Gene Drives, Trade and International Regulations

Greg Kaebnick, Director, Editorial Department; Editor, Hastings Center Report; Research Scholar, Hastings Center

Jennifer Kuzma, Goodnight-North Carolina GlaxoSmithKline Foundation Distinguished Professor in Social Sciences in the School of Public and International Affairs (SPIA) and co-director of the Genetic Engineering and Society (GES) Center at North Carolina State University

Andrew Maynard, Senior Sustainability Scholar, Julie Ann Wrigley Global Institute of Sustainability; Director, Risk Innovation Lab; Professor, School for the Future of Innovation in Society, Arizona State University

Gary Marchant, Regents’ Professor of Law; Faculty Director and Faculty Fellow, Center for Law, Science & Innovation, Arizona State University

Marc Saner, Inaugural Director of the Institute for Science, Society and Policy, and Associate Professor, University of Ottawa Department of Geography

Big Data

Anupam Chander, Martin Luther King, Jr. Professor of Law and Director, California International Law Center, UC Davis School of Law

Pilar Ossorio, Professor of Law and Bioethics, University of Wisconsin, School of Law and School of Medicine and Public Health; Morgridge Institute for Research, Ethics Scholar-in-Residence

George Poste, Chief Scientist, Complex Adaptive Systems Initiative (CASI) (http://www.casi.asu.edu/), Regents’ Professor and Del E. Webb Chair in Health Innovation, Arizona State University

Emily Shuckburgh, climate scientist and deputy head of the Polar Oceans Team at the British Antarctic Survey, University of Cambridge

 Responsible Development of AI

Spring Berman, Ira A. Fulton Schools of Engineering, Arizona State University

John Havens, The IEEE [Institute of Electrical and Electronics Engineers] Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems

Subbarao Kambhampati, Senior Sustainability Scientist, Julie Ann Wrigley Global Institute of Sustainability, Professor, School of Computing, Informatics and Decision Systems Engineering, Ira A. Fulton Schools of Engineering, Arizona State University

Wendell Wallach, Consultant, Ethicist, and Scholar at Yale University’s Interdisciplinary Center for Bioethics

Existential and Catastrophic Ricks [sic]

Tony Barrett, Co-Founder and Director of Research of the Global Catastrophic Risk Institute

Haydn Belfield, Academic Project Administrator, Centre for the Study of Existential Risk at the University of Cambridge

Margaret E. Kosal, Associate Director, Sam Nunn School of International Affairs, Georgia Institute of Technology

Catherine Rhodes, Academic Project Manager, Centre for the Study of Existential Risk (CSER), University of Cambridge

These are the panels of interest to me; there are others on the homepage.

Here’s some information from the Conference registration webpage,

Early Bird Registration – $50 off until May 1! Enter discount code: earlybirdGETs50

New: Group Discount – Register 2+ attendees together and receive an additional 20% off for all group members!

Click Here to Register!

Conference registration fees are as follows:

  • General (non-CLE) Registration: $150.00
  • CLE Registration: $350.00
  • *Current Student / ASU Law Alumni Registration: $50.00
  • ^Cybersecurity sessions only (May 19): $100 CLE / $50 General / Free for students (registration info coming soon)

There you have it.

Neuro-techno future laws

I’m pretty sure this isn’t the first exploration of potential legal issues arising from research into neuroscience although it’s the first one I’ve stumbled across. From an April 25, 2017 news item on phys.org,

New human rights laws to prepare for advances in neurotechnology that put the ‘freedom of the mind’ at risk have been proposed today in the open access journal Life Sciences, Society and Policy.

The authors of the study suggest four new human rights laws could emerge in the near future to protect against exploitation and loss of privacy. The four laws are: the right to cognitive liberty, the right to mental privacy, the right to mental integrity and the right to psychological continuity.

An April 25, 2017 Biomed Central news release on EurekAlert, which originated the news item, describes the work in more detail,

Marcello Ienca, lead author and PhD student at the Institute for Biomedical Ethics at the University of Basel, said: “The mind is considered to be the last refuge of personal freedom and self-determination, but advances in neural engineering, brain imaging and neurotechnology put the freedom of the mind at risk. Our proposed laws would give people the right to refuse coercive and invasive neurotechnology, protect the privacy of data collected by neurotechnology, and protect the physical and psychological aspects of the mind from damage by the misuse of neurotechnology.”

Advances in neurotechnology, such as sophisticated brain imaging and the development of brain-computer interfaces, have led to these technologies moving away from a clinical setting and into the consumer domain. While these advances may be beneficial for individuals and society, there is a risk that the technology could be misused and create unprecedented threats to personal freedom.

Professor Roberto Andorno, co-author of the research, explained: “Brain imaging technology has already reached a point where there is discussion over its legitimacy in criminal court, for example as a tool for assessing criminal responsibility or even the risk of reoffending. Consumer companies are using brain imaging for ‘neuromarketing’, to understand consumer behaviour and elicit desired responses from customers. There are also tools such as ‘brain decoders’ which can turn brain imaging data into images, text or sound. All of these could pose a threat to personal freedom which we sought to address with the development of four new human rights laws.”

The authors explain that as neurotechnology improves and becomes commonplace, there is a risk that the technology could be hacked, allowing a third-party to ‘eavesdrop’ on someone’s mind. In the future, a brain-computer interface used to control consumer technology could put the user at risk of physical and psychological damage caused by a third-party attack on the technology. There are also ethical and legal concerns over the protection of data generated by these devices that need to be considered.

International human rights laws make no specific mention of neuroscience, although advances in biomedicine have become intertwined with laws, such as those concerning human genetic data. The authors state that, similar to the historical trajectory of the genetic revolution, the ongoing neurorevolution will force a reconceptualization of human rights laws and even the creation of new ones.

Marcello Ienca added: “Science-fiction can teach us a lot about the potential threat of technology. Neurotechnology featured in famous stories has in some cases already become a reality, while others are inching ever closer, or exist as military and commercial prototypes. We need to be prepared to deal with the impact these technologies will have on our personal freedom.”

Here’s a link to and a citation for the paper,

Towards new human rights in the age of neuroscience and neurotechnology by Marcello Ienca and Roberto Andorno. Life Sciences, Society and Policy 2017, 13:5. DOI: 10.1186/s40504-017-0050-1 Published: 26 April 2017

© The Author(s). 2017

This paper is open access.

Offering privacy and light control via smart windows

There have been quite a few ‘smart’ window stories here on this blog but this one is the first to feature a privacy option. From a Nov. 17, 2016 news item on Nanowerk,

Smart windows get darker to filter out the sun’s rays on bright days, and turn clear on cloudy days to let more light in. This feature can help control indoor temperatures and offers some privacy without resorting to aids such as mini-blinds.

Now scientists report a new development in this growing niche: solar smart windows that can turn opaque on demand and even power other devices. …

A Nov. 17, 2016 American Chemical Society (ACS) news release, which originated the news item, goes on to explain the work,

Most existing solar-powered smart windows are designed to respond automatically to changing conditions, such as light or heat. But this means that on cool or cloudy days, consumers can’t flip a switch and tint the windows for privacy. Also, these devices often operate on a mere fraction of the light energy they are exposed to while the rest gets absorbed by the windows. This heats them up, which can add warmth to a room that the windows are supposed to help keep cool. Jeremy Munday and colleagues wanted to address these limitations.

The researchers created a new smart window by sandwiching a polymer matrix containing microdroplets of liquid crystal materials, and an amorphous silicon layer — the type often used in solar cells — between two glass panes. When the window is “off,” the liquid crystals scatter light, making the glass opaque. The silicon layer absorbs the light and provides the low power needed to align the crystals so light can pass through and make the window transparent when the window is turned “on” by the user. The extra energy that doesn’t go toward operating the window is harvested and could be redirected to power other devices, such as lights, TVs or smartphones, the researchers say.
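As described, the window has two user-controlled states: ‘off’ scatters light and stays opaque, while ‘on’ spends a small slice of the harvested solar power to align the liquid crystals for transparency, leaving the remainder for other devices. A toy energy-budget sketch of that logic, with every number hypothetical rather than taken from the paper:

```python
# Toy energy-budget model of the switchable solar window described above.
# All figures (10% harvesting efficiency, 0.5 W switching cost) are
# hypothetical illustrations, not values from the ACS Photonics paper.

def window_power(incident_w, state, switching_cost_w=0.5, efficiency=0.1):
    """Return (transparent?, surplus watts available for other devices).

    The amorphous-silicon layer harvests `efficiency` of the incident
    light; aligning the liquid crystals ("on") consumes a small constant
    switching power, and whatever is left over is surplus.
    """
    harvested = incident_w * efficiency
    if state == "on":                      # user requests transparency
        surplus = max(0.0, harvested - switching_cost_w)
        return True, surplus
    return False, harvested               # "off": opaque, all power spare

transparent, spare = window_power(incident_w=100.0, state="on")
print(transparent, spare)  # transparent, with roughly 9.5 W to spare
```

The point the researchers make survives even in this cartoon: because switching takes only a fraction of what the silicon layer harvests, the window can pay for its own opacity control and still export power.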

For anyone who finds reading text a bit onerous, there’s this video,

Here’s a link to and a citation for the paper,

Electrically Controllable Light Trapping for Self-Powered Switchable Solar Windows by Joseph Murray, Dakang Ma, and Jeremy N. Munday. ACS Photonics, Article ASAP DOI: 10.1021/acsphotonics.6b00518 Publication Date (Web): October 26, 2016

Copyright © 2016 American Chemical Society

This paper is behind a paywall.

Does more nano-enabled security = more nano-enabled surveillance?

A May 6, 2014 essay by Brandon Engel published on Nanotechnology Now poses an interesting question about the use of nanotechnology-enabled security and surveillance measures (Note: Links have been removed),

Security is of prime importance in an increasingly globalized society. It has a role to play in protecting citizens and states from myriad malevolent forces, such as organized crime or terrorist acts, and in responding, as well as preventing, both natural and man-made disasters. Research and development in this field often focuses on certain broad areas, including security of infrastructures and utilities; intelligence surveillance and border security; and stability and safety in cases of crisis. …

Nanotechnology is coming to play an ever greater role in these applications. Whether it’s used for detecting potentially harmful materials for homeland security, finding pathogens in water supply systems, or for early warning and detoxification of harmful airborne substances, its usefulness and efficiency are becoming more evident by the day.

He’s quite right about these applications. For example, I’ve just published a May 9, 2014 piece, ‘Textiles laced with carbon nanotubes for clothing that protects against poison gas‘.

Engel goes on to describe a dark side to nanotechnology-enabled security,

On the other hand, more and more unsettling scenarios are fathomable with the advent of this new technology, such as covertly infiltrated devices, as small as tiny insects, being used to coordinate and execute a disarming attack on obsolete weapons systems, information apparatuses, or power grids.

Engel is also right about the potential surveillance issues. In a Dec. 18, 2013 posting I featured a special issue of SIGNAL Magazine (which covers the latest trends and techniques in topics that include C4ISR, information security, intelligence, electronics, homeland security, cyber technologies, …) focusing on nanotechnology-enabled security and surveillance,

The Dec. 1, 2013 article by Rita Boland (h/t Dec. 13, 2013 Azonano news item) does a good job of presenting a ‘big picture’ approach including nonmilitary and military nanotechnology applications by interviewing the main players in the US,

Nanotechnology is the new cyber, according to several major leaders in the field. Just as cyber is entrenched across global society now, nano is poised to be the major capabilities enabler of the next decades. Expert members from the National Nanotechnology Initiative representing government and science disciplines say nano has great significance for the military and the general public.

For anyone who may think Engel is exaggerating when he mentions tiny insects being used for surveillance, there’s this May 8, 2014 post (Cyborg Beetles Detect Nerve Gas) by Dexter Johnson on his Nanoclast blog (Note: Dexter is an engineer who describes the technology in a somewhat detailed, technical fashion). I have a less technical description of some then-current research in an Aug. 12, 2011 posting featuring some military experiments, for example, a surveillance camera disguised as a hummingbird (I have a brief video of a demonstration) and some research into how smartphones can be used for surveillance.

Engel comes to an interesting conclusion (Note: A link has been removed),

The point is this: whatever conveniences are seemingly afforded by these sort of technological advances, there is persistent ambiguity about the extent to which this technology actually protects or makes us more vulnerable. Striking the right balance between respecting privacy and security is an ever-elusive goal, and at such an early point in the development of nanotech, must be approached on a case by case basis. … [emphasis mine]

I don’t understand what Engel means when he says “case by case.” Are these individual applications that he feels are prone to misuse or specific usages of these applications? In any event, while I appreciate the concerns (I share many of them), I don’t think his proposed approach is practicable, and that leads to another question: what can be done? Sadly, I have no answers, but I am glad to see the question being asked in the ‘nanotechnology webspace’.

I did some searching for Brandon Engel online and found this January 17, 2014 guest post (about a Dean Koontz book) on The Belle’s Tales blog. He also has a blog of his own, Brandon Engel, where he describes himself this way,

Musician, filmmaker, multimedia journalist, puppeteer, and professional blogger based in Chicago.

The man clearly has a wide range of interests and concerns.

As for the question posed in this post’s head, I don’t think there is a simple one-to-one equivalency where one more security procedure results in one more surveillance procedure. However, I do believe there is a relationship between the two and that sometimes increased security is an argument used to support increased surveillance procedures. While Engel doesn’t state that explicitly in his piece, I think it is implied.

One final thought: surveillance is not new, and one of the more interesting examples of the ‘art’ is featured in a description of the Parisian constabulary of the 18th century written by Nina Kushner in,

The Case of the Closely Watched Courtesans
The French police obsessively tracked the kept women of 18th-century Paris. Why? (Slate.com, April 15, 2014)

or

Republished as: French police obsessively tracked elite sex workers of 18th-century Paris — and well-to-do men who hired them (National Post, April 16, 2014)

Kushner starts her article by describing contemporary sex workers and a 2014 Urban Institute study and then draws parallels between now and 18th Century Parisian sex workers while detailing advances in surveillance reports,

… One of the very first police forces in the Western world emerged in 18th-century Paris, and one of its vice units asked many of the same questions as the Urban Institute authors: How much do sex workers earn? Why do they turn to sex work in the first place? What are their relationships with their employers?

The vice unit, which operated from 1747 to 1771, turned out thousands of hand-written pages detailing what these dames entretenues [kept women] did. …

… They gathered biographical and financial data on the men who hired kept women — princes, peers of the realm, army officers, financiers, and their sons, a veritable “who’s who” of high society, or le monde. Assembling all of this information required cultivating extensive spy networks. Making it intelligible required certain bureaucratic developments: These inspectors perfected the genre of the report and the information management system of the dossier. These forms of “police writing,” as one scholar has described them, had been emerging for a while. But they took a giant leap forward at midcentury, with the work of several Paris police inspectors, including Inspector Jean-Baptiste Meusnier, the officer in charge of this vice unit from its inception until 1759. Meusnier and his successor also had clear literary talent; the reports are extremely well written, replete with irony, clever turns of phrase, and even narrative tension — at times, they read like novels.

If you have the time, Kushner’s well written article offers fascinating insight.