Tag Archives: privacy

Smart City tech brief: facial recognition, cybersecurity, privacy protection, and transparency

This May 10, 2022 Association for Computing Machinery (ACM) announcement (received via email) has an eye-catching head,

Should Smart Cities Adopt Facial Recognition, Remote Monitoring Software+Social Media to Police [verb] Info?

The Association for Computing Machinery, the largest and most prestigious computer science society worldwide (100,000 members), has released a report, ACM TechBrief: Smart Cities, which urges smart city planners to address 1) cybersecurity; 2) privacy protections; 3) fairness and transparency; and 4) sustainability, including climate impact, when planning and designing systems.

There’s a May 3, 2022 ACM news release about the latest technical brief,

The Association for Computing Machinery’s global Technology Policy Council (ACM TPC) just released “ACM TechBrief: Smart Cities,” which highlights the challenges involved in deploying information and communication technology to create smart cities and calls for policy leaders planning such projects to do so without compromising security, privacy, fairness and sustainability. The TechBrief includes a primer on smart cities, key statistics about the growth and use of these technologies, and a short list of important policy implications.

“Smart cities” are municipalities that use a network of physical devices and computer technologies to make the delivery of public services more efficient and/or more environmentally friendly. Examples of smart city applications include using sensors to turn off streetlights when no one is present, monitoring traffic patterns to reduce roadway congestion and air pollution, or keeping track of home-bound medical patients in order to dispatch emergency responders when needed. Smart cities are an outgrowth of the Internet of Things (IoT), the rapidly growing infrastructure of literally billions of physical devices embedded with sensors that are connected to computers and the Internet.
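To make the streetlight example concrete, here’s a toy sketch in Python (my own illustration, not anything from the ACM report; the sensor inputs and the lux threshold are invented):

```python
# Toy sketch of the "turn off streetlights when no one is present" idea.
# The sensor readings and the darkness threshold below are invented.

def streetlight_should_be_on(motion_detected: bool,
                             ambient_lux: float,
                             lux_threshold: float = 10.0) -> bool:
    """Light up only when it is dark AND someone is nearby."""
    is_dark = ambient_lux < lux_threshold
    return is_dark and motion_detected

print(streetlight_should_be_on(motion_detected=True, ambient_lux=2.5))   # True: dark street, pedestrian present
print(streetlight_should_be_on(motion_detected=False, ambient_lux=2.5))  # False: dark but empty, light stays off
```

That same if-this-then-that shape, multiplied across thousands of networked sensors, is what makes a city service ‘smart’ and is also what generates the data the TechBrief worries about.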

The deployment of smart city technology is growing across the world, and these technologies offer significant benefits. For example, the TechBrief notes that “investing in smart cities could contribute significantly to achieving greenhouse gas emissions reduction targets,” and that “smart cities use digital innovation to make urban service delivery more efficient.”

Because of the meteoric growth and clear benefits of smart city technologies, the TechBrief notes that now is an urgent time to address some of the important public policy concerns that smart city technologies raise. The TechBrief lists four key policy implications that government officials, as well as the private companies that develop these technologies, should consider.

These include:

Cybersecurity risks must be considered at every stage of every smart city technology’s life cycle.

Effective privacy protection mechanisms must be an essential component of any smart city technology deployed.

Such mechanisms should be transparently fair to all city users, not just residents.

The climate impact of smart city infrastructures must be fully understood as they are being designed and regularly assessed after they are deployed.

“Smart cities are fast becoming a reality around the world,” explains Chris Hankin, a Professor at Imperial College London and lead author of the ACM TechBrief on Smart Cities. “By 2025, 26% of all internet-connected devices will be used in a smart city application. As technologists, we feel we have a responsibility to raise important questions to ensure that these technologies best serve the public interest. For example, many people are unaware that some smart city technologies involve the collection of personally identifiable data. We developed this TechBrief to familiarize the public and lawmakers with this topic and present some key issues for consideration. Our overarching goal is to guide enlightened public policy in this area.”

“Our new TechBrief series builds on earlier and ongoing work by ACM’s technology policy committees,” added James Hendler, Professor at Rensselaer Polytechnic Institute and Chair of the ACM Technology Policy Council. “Because many smart city applications involve algorithms making decisions which impact people directly, this TechBrief calls for methods to ensure fairness and transparency in how these systems are developed. This reinforces an earlier statement we issued that outlined seven principles for algorithmic transparency and accountability. We also note that smart city infrastructures are especially vulnerable to malicious attacks.”

This TechBrief is the third in a series of short technical bulletins by ACM TPC that present scientifically grounded perspectives on the impact of specific developments or applications of technology. Designed to complement ACM’s activities in the policy arena, TechBriefs aim to inform policymakers, the public, and others about the nature and implications of information technologies. The first ACM TechBrief focused on climate change, while the second addressed facial recognition. Topics under consideration for future issues include quantum computing, election security, and encryption.

About the ACM Technology Policy Council

ACM’s global Technology Policy Council sets the agenda for ACM’s global policy activities and serves as the central convening point for ACM’s interactions with government organizations, the computing community, and the public in all matters of public policy related to computing and information technology. The Council’s members are drawn from ACM’s global membership. It coordinates the activities of ACM’s regional technology policy groups and sets the agenda for global initiatives to address evolving technology policy issues.

About ACM

ACM, the Association for Computing Machinery, is the world’s largest educational and scientific computing society, uniting educators, researchers and professionals to inspire dialogue, share resources and address the field’s challenges. ACM strengthens the computing profession’s collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence. ACM supports the professional growth of its members by providing opportunities for life-long learning, career development, and professional networking.

This is indeed a brief. I recommend reading it as it provides a very good overview of the topic of ‘smart cities’ and raises a question or two. For example, there’s this passage from the April 2022 Issue 3 Technical Brief on p. 2,

… policy makers should target broad and fair access and application of AI and, in general, ICT [information and communication technologies]. This can be achieved through transparent planning and decision-making processes for smart city infrastructure and application developments, such as open hearings, focus groups, and advisory panels. The goal must be to minimize potential harm while maximizing the benefits that algorithmic decision-making [emphasis mine] can bring

Is this algorithmic decision-making under human supervision? It doesn’t seem to be specified in the brief itself. It’s possible the answer lies elsewhere. After all, this is the third in the series.

AI x 2: the Amnesty International and Artificial Intelligence story

Amnesty International and artificial intelligence seem like an unexpected combination but it all makes sense when you read a June 13, 2018 article by Steven Melendez for Fast Company (Note: Links have been removed),

If companies working on artificial intelligence don’t take steps to safeguard human rights, “nightmare scenarios” could unfold, warns Rasha Abdul Rahim, an arms control and artificial intelligence researcher at Amnesty International in a blog post. Those scenarios could involve armed, autonomous systems choosing military targets with little human oversight, or discrimination caused by biased algorithms, she warns.

Rahim pointed at recent reports of Google’s involvement in the Pentagon’s Project Maven, which involves harnessing AI image recognition technology to rapidly process photos taken by drones. Google recently unveiled new AI ethics policies and has said it won’t continue with the project once its current contract expires next year after high-profile employee dissent over the project. …

“Compliance with the laws of war requires human judgement [sic] –the ability to analyze the intentions behind actions and make complex decisions about the proportionality or necessity of an attack,” Rahim writes. “Machines and algorithms cannot recreate these human skills, and nor can they negotiate, produce empathy, or respond to unpredictable situations. In light of these risks, Amnesty International and its partners in the Campaign to Stop Killer Robots are calling for a total ban on the development, deployment, and use of fully autonomous weapon systems.”

Rasha Abdul Rahim’s June 14, 2018 posting (I’m putting the discrepancy in publication dates down to timezone differences) on the Amnesty International website (Note: Links have been removed),

Last week [June 7, 2018] Google released a set of principles to govern its development of AI technologies. They include a broad commitment not to design or deploy AI in weaponry, and come in the wake of the company’s announcement that it will not renew its existing contract for Project Maven, the US Department of Defense’s AI initiative, when it expires in 2019.

The fact that Google maintains its existing Project Maven contract for now raises an important question. Does Google consider that continuing to provide AI technology to the US government’s drone programme is in line with its new principles? Project Maven is a litmus test that allows us to see what Google’s new principles mean in practice.

As details of the US drone programme are shrouded in secrecy, it is unclear precisely what role Google plays in Project Maven. What we do know is that US drone programme, under successive administrations, has been beset by credible allegations of unlawful killings and civilian casualties. The cooperation of Google, in any capacity, is extremely troubling and could potentially implicate it in unlawful strikes.

As AI technology advances, the question of who will be held accountable for associated human rights abuses is becoming increasingly urgent. Machine learning, and AI more broadly, impact a range of human rights including privacy, freedom of expression and the right to life. It is partly in the hands of companies like Google to safeguard these rights in relation to their operations – for us and for future generations. If they don’t, some nightmare scenarios could unfold.

Warfare has already changed dramatically in recent years – a couple of decades ago the idea of remote controlled bomber planes would have seemed like science fiction. While the drones currently in use are still controlled by humans, China, France, Israel, Russia, South Korea, the UK and the US are all known to be developing military robots which are getting smaller and more autonomous.

For example, the UK is developing a number of autonomous systems, including the BAE [Systems] Taranis, an unmanned combat aircraft system which can fly in autonomous mode and automatically identify a target within a programmed area. Kalashnikov, the Russian arms manufacturer, is developing a fully automated, high-calibre gun that uses artificial neural networks to choose targets. The US Army Research Laboratory in Maryland, in collaboration with BAE Systems and several academic institutions, has been developing micro drones which weigh less than 30 grams, as well as pocket-sized robots that can hop or crawl.

Of course, it’s not just in conflict zones that AI is threatening human rights. Machine learning is already being used by governments in a wide range of contexts that directly impact people’s lives, including policing [emphasis mine], welfare systems, criminal justice and healthcare. Some US courts use algorithms to predict future behaviour of defendants and determine their sentence lengths accordingly. The potential for this approach to reinforce power structures, discrimination or inequalities is huge.

In July 2017, the Vancouver Police Department announced its use of predictive policing software, making Vancouver the first jurisdiction in Canada to make use of the technology. My Nov. 23, 2017 posting featured the announcement.

The almost too aptly named Campaign to Stop Killer Robots can be found here. Their About Us page provides a brief history,

Formed by the following non-governmental organizations (NGOs) at a meeting in New York on 19 October 2012 and launched in London in April 2013, the Campaign to Stop Killer Robots is an international coalition working to preemptively ban fully autonomous weapons. See the Chronology charting our major actions and achievements to date.

Steering Committee

The Steering Committee is the campaign’s principal leadership and decision-making body. It is comprised of five international NGOs, a regional NGO network, and four national NGOs that work internationally:

Human Rights Watch
Article 36
Association for Aid and Relief Japan
International Committee for Robot Arms Control
Mines Action Canada
Nobel Women’s Initiative
PAX (formerly known as IKV Pax Christi)
Pugwash Conferences on Science & World Affairs
Seguridad Humana en América Latina y el Caribe (SEHLAC)
Women’s International League for Peace and Freedom

For more information, see this Overview. A Terms of Reference is also available on request, detailing the committee’s selection process, mandate, decision-making, meetings and communication, and expected commitments.

For anyone who may be interested in joining Amnesty International, go here.

Getting chipped

A January 23, 2018 article by John Converse Townsend for Fast Company highlights the author’s experience of ‘getting chipped’ in Wisconsin (US),

I have an RFID, or radio frequency ID, microchip implanted in my hand. Now with a wave, I can unlock doors, fire off texts, login to my computer, and even make credit card payments.

There are others like me: The majority of employees at the Wisconsin tech company Three Square Market (or 32M) have RFID implants, too. Last summer, with the help of Andy “Gonzo” Whitehead, a local body piercer with 17 years of experience, the company hosted a “chipping party” for employees who’d volunteered to test the technology in the workplace.

“We first presented the concept of being chipped to the employees, thinking we might get a few people interested,” CEO [Chief Executive Officer] Todd Westby, who has implants in both hands, told me. “Literally out of the box, we had 40 people out of close to 90 that were here that said, within 10 minutes, ‘I would like to be chipped.’”

Westby’s left hand can get him into the office, make phone calls, and store his living will and driver’s license information, while the chip in his right hand is used for testing new applications. (The CEO’s entire family is chipped, too.) Other employees said they have bitcoin wallets and photos stored on their devices.

The legendary Gonzo Whitehead was waiting for me when I arrived at Three Square Market HQ, located in quiet River Falls, 40 minutes east of Minneapolis. The minutes leading up to the big moment were a bit nervy, after seeing the size of the needle (it’s huge), but the experience was easier than I could have imagined. The RFID chip is the size of a grain of basmati rice, but the pain wasn’t so bad–comparable to a bee sting, and maybe less so. I experienced a bit of bruising afterward (no bleeding), and today the last remaining mark of trauma is a tiny, fading scar between my thumb and index finger. Unless you were looking for it, the chip resting under my skin is invisible.

Truth is, the applications for RFID implants are pretty cool. But right now, they’re also limited. Without a near-field communication (NFC) writer/reader, which powers on a “passive” RFID chip to write and read information to the device’s memory, an implant isn’t of much use. But that’s mostly a hardware issue. As NFC technology becomes available, which is increasingly everywhere thanks to Samsung Pay and Apple Pay and new contactless “tap-and-go” credit cards, the possibilities become limitless. [emphasis mine]
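Townsend’s point about passive chips being inert without a reader is easier to see in code. Here’s a minimal toy simulation (entirely my own; the class names, UID, and stored records are invented and have nothing to do with 32M’s actual system):

```python
# Toy simulation of a passive RFID/NFC exchange. No real radio is involved;
# the point is that a passive tag has no battery and can only answer
# while a reader's field is powering it.

class PassiveTag:
    def __init__(self, uid: str, records: dict):
        self.uid = uid
        self.records = records   # e.g. a door token, contact card, payment handle
        self.powered = False

    def energize(self):          # the reader's RF field "wakes" the chip
        self.powered = True

    def read(self, key: str):
        if not self.powered:
            raise RuntimeError("tag is unpowered: no reader field present")
        return self.records.get(key)

class NFCReader:
    def scan(self, tag: PassiveTag, key: str):
        tag.energize()           # without this step the implant does nothing
        try:
            return tag.read(key)
        finally:
            tag.powered = False  # leaving the field powers the chip back down

implant = PassiveTag(uid="04:a3:2f", records={"door": "unlock-token-123"})
print(NFCReader().scan(implant, "door"))  # -> unlock-token-123
```

Which is why the spread of NFC readers (phones, payment terminals) matters more to the implant’s usefulness than the chip itself.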

Health and privacy?

Townsend does cover a few possible downsides to the ‘limitless possibilities’ offered by RFID implants combined with NFC technology,

From a health perspective, the RFID implants are biologically safe–not so different from birth control implants [emphasis mine]. [US Food and Drug Administration] FDA-sanctioned for use in humans since 2004, the chips neither trigger metal detectors nor disrupt [magnetic resonance imaging] MRIs, and their glass casings hold up to pressure testing, whether that’s being dropped from a rooftop or being run over by a pickup truck.

The privacy side of things is a bit more complicated, but the undeniable reality is that privacy isn’t as prized as we’d like to think [emphasis mine]. It’s already a regular concession to convenience.

“Your information’s for sale every day,” McMullen [Patrick McMullen, president, Three Square Market] says. “Thirty-four billion avenues exist for your information to travel down every single day, whether you’re checking Facebook, checking out at the supermarket, driving your car . . . your information’s everywhere.”

Townsend may not be fully up-to-date on the subject of birth control implants. I think ‘safeish’ might be a better description in light of this news from almost two years ago (from a March 1, 2016 news item on CBS [Columbia Broadcasting Service] News online; Note: Links have been removed),

[US] Federal health regulators plan to warn consumers more strongly about Essure, a contraceptive implant that has drawn thousands of complaints from women reporting chronic pain, bleeding and other health problems.

The Food and Drug Administration announced Monday it would add a boxed warning — its most serious type — to alert doctors and patients to problems reported with the nickel-titanium implant.

But the FDA stopped short of removing the device from the market, a step favored by many women who have petitioned the agency in the last year. Instead, the agency is requiring manufacturer Bayer to conduct studies of the device to further assess its risks in different groups of women.

The FDA is requiring Bayer to conduct a study of 2,000 patients comparing problems like unplanned pregnancy and pelvic pain between patients getting Essure and those receiving traditional “tube tying” surgery. Agency officials said they have reviewed more than 600 reports of women becoming pregnant after receiving Essure. Women are supposed to get a test after three months to make sure Essure is working appropriately, but the agency noted some women do not follow-up for the test.

FDA officials acknowledged the proposed study would take years to complete, but said Bayer would be expected to submit interim results by mid-2017.

According to a Sept. 25, 2017 article by Kerri O’Brien for WRIC.com, Bayer had suspended sales of its device in all countries except the US,

Bayer, the manufacturer of Essure, has announced it’s halting sales of Essure in all countries outside of the U.S. In a statement, Bayer told 8News it’s due to a lack of interest in the product outside of the U.S.

“Bayer made a commercial decision this Spring to discontinue the distribution of Essure® outside of the U.S. where there is not as much patient interest in permanent birth control,” the statement read.

The move also comes after the European Union suspended sales of the device. The suspension was prompted by the National Standards Authority of Ireland declining to renew Essure’s CE marking. “CE,” according to the European Commission website, signifies products sold in the EEA that have been assessed to meet “high safety, health, and environmental protection requirements.”

These excerpts are about the Essure birth control implant. Perhaps others are safer? That noted, it does seem that Townsend was a bit dismissive of safety concerns.

As for privacy, he does investigate further to discover this,

As technology evolves and becomes more sophisticated, the methods to break it also evolve and get more sophisticated, says D.C.-based privacy expert Michelle De Mooy. Even so, McMullen believes that our personal information is safer in our hand than in our wallets. He says the smartphone you touch 2,500 times a day does 100 times more reporting of data than does an RFID implant, plus the chip can save you from pickpockets and avoid credit card skimmers altogether.

Well, the first sentence suggests some caution. As for De Mooy, there’s this from her profile page on the Center for Democracy and Technology website (Note: A link has been removed),

Michelle De Mooy is Director of the Privacy & Data Project at the Center for Democracy & Technology. She advocates for data privacy rights and protections in legislation and regulation, works closely with industry and other stakeholders to investigate good data practices and controls, as well as identifying and researching emerging technology that impacts personal privacy. She leads CDT’s health privacy work, chairing the Health Privacy Working Group and focusing on the intersection between individual privacy, health information and technology. Michelle’s current research is focused on ethical and privacy-aware internal research and development in wearables, the application of data analytics to health information found on non-traditional platforms, like social media, and the growing market for genetic data. She has testified before Congress on health policy, spoken about native advertising at the Federal Trade Commission, and written about employee wellness programs for US News & World Report’s “Policy Dose” blog. Michelle is a frequent media contributor, appearing in the New York Times, the Guardian, the Wall Street Journal, Vice, and the Los Angeles Times, as well as on The Today Show, Voice of America, and Government Matters TV programs.

Ethics, anyone?

Townsend does raise some ethical issues (Note: A link has been removed),

… Word from CEO Todd Westby is that parents in Wisconsin have been asking whether (and when) they can have their children implanted with GPS-enabled devices (which, incidentally, is the subject of the “Arkangel” episode in the new season of Black Mirror [US television programme]). But that, of course, raises ethical questions: What if a kid refused to be chipped? What if they never knew?

Final comments on implanted RFID chips and bodyhacking

It doesn’t seem that implantable chips have changed much since I first wrote about them in a May 27, 2010 posting titled ‘Researcher infects self with virus’. In that instance, Dr. Mark Gasson, a researcher at the University of Reading, introduced a virus into a computer chip implanted in his body.

Of course, since 2010, additional implantable items such as computer chips and more have been making their way into our bodies, and there doesn’t seem to be much public discussion (other than in popular culture) about the implications.

Presumably, there are policy makers tracking these developments. I have to wonder whether the technology gurus will continue to tout these technologies as already here, or as having made such inroads that we (the public) are presented with a fait accompli, with the policy makers following behind.

Internet of toys, the robotification of childhood, and privacy issues

Leave it to the European Commission’s (EC) Joint Research Centre (JRC) to look into the future of toys. As far as I’m aware, there are no such moves in either Canada or the US despite the ubiquity of robot toys and other such devices. From a March 23, 2017 EC JRC press release (also on EurekAlert),

Action is needed to monitor and control the emerging Internet of Toys, concludes a new JRC report. Privacy and security are highlighted as main areas of concern.

Large numbers of connected toys have been put on the market over the past few years, and the turnover is expected to reach €10 billion by 2020 – up from just €2.6 billion in 2015.

Connected toys come in many different forms, from smart watches to teddy bears that interact with their users. They are connected to the internet and together with other connected appliances they form the Internet of Things, which is bringing technology into our daily lives more than ever.

However, the toys’ ability to record, store and share information about their young users raises concerns about children’s safety, privacy and social development.

A team of JRC scientists and international experts looked at the safety, security, privacy and societal questions emerging from the rise of the Internet of Toys. The report invites policymakers, industry, parents and teachers to study connected toys more in depth in order to provide a framework which ensures that these toys are safe and beneficial for children.

Robotification of childhood

Robots are no longer only used in industry to carry out repetitive or potentially dangerous tasks. In the past years, robots have entered our everyday lives and also children are more and more likely to encounter robotic or artificial intelligence-enhanced toys.

We still know relatively little about the consequences of children’s interaction with robotic toys. However, it is conceivable that they represent both opportunities and risks for children’s cognitive, socio-emotional and moral-behavioural development.

For example, social robots may further the acquisition of foreign language skills by compensating for the lack of native speakers as language tutors or by removing the barriers and peer pressure encountered in the classroom. There is also evidence about the benefits of child-robot interaction for children with developmental problems, such as autism or learning difficulties, who may find human interaction difficult.

However, the internet-based personalization of children’s education via filtering algorithms may also increase the risk of ‘educational bubbles’ where children only receive information that fits their pre-existing knowledge and interest – similar to adult interaction on social media networks.
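The ‘educational bubble’ mechanism is easy to sketch. Here’s a deliberately naive recommender of my own devising (the topics are invented) that ranks content purely by what a child has already clicked, so unfamiliar topics can never surface:

```python
from collections import Counter

# Deliberately naive filtering: rank items only by how often the child
# has already chosen that topic, so the feed narrows and never widens.

def recommend(history, catalogue, n=3):
    interests = Counter(history)
    ranked = sorted(catalogue, key=lambda topic: interests[topic], reverse=True)
    return ranked[:n]

catalogue = ["dinosaurs", "space", "music", "history", "math"]
history = ["dinosaurs", "dinosaurs", "space"]
print(recommend(history, catalogue))  # ['dinosaurs', 'space', 'music']
# After a few rounds of clicking only what is recommended, "history" and
# "math" effectively vanish from the child's feed: an educational bubble.
```

Real recommenders are far more sophisticated, but the feedback loop, past clicks shaping future options, is the same one the report is flagging.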

Safety and security considerations

The rapid rise in internet connected toys also raises concerns about children’s safety and privacy. In particular, the way that data gathered by connected toys is analysed, manipulated and stored is not transparent, which poses an emerging threat to children’s privacy.

The data provided by children while they play, i.e. the sounds, images and movements recorded by connected toys, is personal data protected by the EU data protection framework, as well as by the new General Data Protection Regulation (GDPR). However, information on how this data is stored, analysed and shared might be hidden in long privacy statements or policies and often goes unnoticed by parents.

Whilst children’s right to privacy is the most immediate concern linked to connected toys, there is also a long term concern: growing up in a culture where the tracking, recording and analysing of children’s everyday choices becomes a normal part of life is also likely to shape children’s behaviour and development.

Usage framework to guide the use of connected toys

The report calls for industry and policymakers to create a connected toys usage framework to act as a guide for their design and use.

This would also help toymakers to meet the challenge of complying with the new European General Data Protection Regulation (GDPR), which comes into force in May 2018 and will increase citizens’ control over their personal data.

The report also calls for the connected toy industry and academic researchers to work together to produce better designed and safer products.

Advice for parents

The report concludes that it is paramount that we understand how children interact with connected toys and which risks and opportunities they entail for children’s development.

“These devices come with really interesting possibilities and the more we use them, the more we will learn about how to best manage them. Locking them up in a cupboard is not the way to go. We as adults have to understand how they work – and how they might ‘misbehave’ – so that we can provide the right tools and the right opportunities for our children to grow up happy in a secure digital world,” said Stéphane Chaudron, the report’s lead researcher at the Joint Research Centre (JRC).

The authors of the report encourage parents to get informed about the capabilities, functions, security measures and privacy settings of toys before buying them. They also urge parents to focus on the quality of play by observing their children, talking to them about their experiences and playing alongside and with their children.

Protecting and empowering children

Through the Alliance to better protect minors online and with the support of UNICEF, NGOs, Toy Industries Europe and other industry and stakeholder groups, European and global ICT and media companies are working to improve the protection and empowerment of children when using connected toys. This self-regulatory initiative is facilitated by the European Commission and aims to create a safer and more stimulating digital environment for children.

There’s an engaging video accompanying this press release,

You can find the report (Kaleidoscope on the Internet of Toys: Safety, security, privacy and societal insights) here and both the PDF and print versions are free (although I imagine you’ll have to pay postage for the print version). This report was published in 2016; the authors are Stéphane Chaudron, Rosanna Di Gioia, Monica Gemo, Donell Holloway, Jackie Marsh, Giovanna Mascheroni, Jochen Peter, and Dylan Yamada-Rice, and organizations involved include European Cooperation in Science and Technology (COST), Digital Literacy and Multimodal Practices of Young Children (DigiLitEY), and COST Action IS1410. DigiLitEY is a European network of 33 countries focusing on research in this area (2015-2019).

Emerging technology and the law

I have three news bits about legal issues that are arising as a consequence of emerging technologies.

Deep neural networks, art, and copyright

Caption: The rise of automated art opens new creative avenues, coupled with new problems for copyright protection. Credit: Alexander Mordvintsev, Christopher Olah and Mike Tyka

Presumably this artwork is a demonstration of automated art, although neither the news item nor the news release really explains how. An April 26, 2017 news item on ScienceDaily announces research into copyright and the latest in using neural networks to create art,

In 1968, sociologist Jean Baudrillard wrote on automatism that “contained within it is the dream of a dominated world […] that serves an inert and dreamy humanity.”

With the growing popularity of Deep Neural Networks (DNN’s), this dream is fast becoming a reality.

Dr. Jean-Marc Deltorn, researcher at the Centre d’études internationales de la propriété intellectuelle in Strasbourg, argues that we must remain a responsive and responsible force in this process of automation — not inert dominators. As he demonstrates in a recent Frontiers in Digital Humanities paper, the dream of automation demands a careful study of the legal problems linked to copyright.

An April 26, 2017 Frontiers (publishing) news release on EurekAlert, which originated the news item, describes the research in more detail,

For more than half a century, artists have looked to computational processes as a way of expanding their vision. DNN’s are the culmination of this cross-pollination: by learning to identify a complex number of patterns, they can generate new creations.

These systems are made up of complex algorithms modeled on the transmission of signals between neurons in the brain.

DNN creations rely in equal measure on human inputs and the non-human algorithmic networks that process them.

Inputs are fed into the system, which is layered. Each layer provides an opportunity for a more refined knowledge of the inputs (shape, color, lines). Neural networks compare actual outputs to expected ones, and correct the predictive error through repetition and optimization. They train their own pattern recognition, thereby optimizing their learning curve and producing increasingly accurate outputs.

The deeper the layers are, the higher the level of abstraction. The highest layers are able to identify the contents of a given input with reasonable accuracy, after extended periods of training.
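For readers who want to see that compare-and-correct loop in code, here’s a minimal sketch of my own in Python/NumPy (not code from the paper; the network size, learning rate, and XOR task are arbitrary choices for illustration):

```python
import numpy as np

# A tiny two-layer network that compares actual outputs to expected ones
# and corrects the predictive error through repetition (here, learning XOR).

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # inputs
y = np.array([[0.], [1.], [1.], [0.]])                  # expected outputs

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    hidden = sigmoid(X @ W1 + b1)        # lower layer: crude patterns
    output = sigmoid(hidden @ W2 + b2)   # deeper layer: more abstract combination
    error = output - y                   # compare actual vs expected outputs
    grad_out = error * output * (1 - output)
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out      # correct the predictive error...
    b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_hid           # ...at every layer, and repeat
    b1 -= 0.5 * grad_hid.sum(axis=0)

print(output.round(2).ravel())  # approaches [0. 1. 1. 0.]
```

Scale that loop up to millions of parameters fed with images rather than four binary rows and you have the ‘deep architectures’ whose outputs are now being exhibited and sold.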

Creation thus becomes increasingly automated through what Deltorn calls “the arcane traceries of deep architecture”. The results are sufficiently abstracted from their sources to produce original creations that have been exhibited in galleries, sold at auction and performed at concerts.

The originality of DNN’s is a combined product of technological automation on one hand, human inputs and decisions on the other.

DNN’s are gaining popularity. Various platforms (such as DeepDream) now allow internet users to generate their very own new creations. This popularization of the automation process calls for a comprehensive legal framework that ensures a creator’s economic and moral rights with regards to his work – copyright protection.

Form, originality and attribution are the three requirements for copyright. And while DNN creations satisfy the first of these three, the claim to originality and attribution will depend largely on a given country’s legislation and on the traceability of the human creator.

Legislation usually sets a low threshold to originality. As DNN creations could in theory be able to create an endless number of riffs on source materials, the uncurbed creation of original works could inflate the existing number of copyright protections.

Additionally, a small number of national copyright laws confers attribution to what UK legislation defines loosely as “the person by whom the arrangements necessary for the creation of the work are undertaken.” In the case of DNN’s, this could mean anybody from the programmer to the user of a DNN interface.

Combined with an overly supple take on originality, this view on attribution would further increase the number of copyrightable works.

The risk, in both cases, is that artists will be less willing to publish their own works, for fear of infringement of DNN copyright protections.

In order to promote creativity – one seminal aim of copyright protection – the issue must be limited to creations that manifest a personal voice “and not just the electric glint of a computational engine,” to quote Deltorn. A delicate act of discernment.

DNN’s promise new avenues of creative expression for artists – with potential caveats. Copyright protection – a “catalyst to creativity” – must be contained. Many of us gently bask in the glow of an increasingly automated form of technology. But if we want to safeguard the ineffable quality that defines much art, it might be a good idea to hone in more closely on the differences between the electric and the creative spark.

This research is and will be part of a broader Frontiers Research Topic collection of articles on Deep Learning and Digital Humanities.

Here’s a link to and a citation for the paper,

Deep Creations: Intellectual Property and the Automata by Jean-Marc Deltorn. Front. Digit. Humanit., 01 February 2017 | https://doi.org/10.3389/fdigh.2017.00003

This paper is open access.

Conference on governance of emerging technologies

I received an April 17, 2017 notice via email about this upcoming conference. Here’s more from the Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics webpage,

The Fifth Annual Conference on Governance of Emerging Technologies:

Law, Policy and Ethics held at the new

Beus Center for Law & Society in Phoenix, AZ

May 17-19, 2017!

Call for Abstracts – Now Closed

The conference will consist of plenary and session presentations and discussions on regulatory, governance, legal, policy, social and ethical aspects of emerging technologies, including (but not limited to) nanotechnology, synthetic biology, gene editing, biotechnology, genomics, personalized medicine, human enhancement technologies, telecommunications, information technologies, surveillance technologies, geoengineering, neuroscience, artificial intelligence, and robotics. The conference is premised on the belief that there is much to be learned and shared from and across the governance experience and proposals for these various emerging technologies.

Keynote Speakers:

Gillian Hadfield, Richard L. and Antoinette Schamoi Kirtland Professor of Law and Professor of Economics, USC [University of Southern California] Gould School of Law

Shobita Parthasarathy, Associate Professor of Public Policy and Women’s Studies; Director, Science, Technology, and Public Policy Program, University of Michigan

Stuart Russell, Professor at [University of California] Berkeley, a computer scientist known for his contributions to artificial intelligence

Craig Shank, Vice President for Corporate Standards Group in Microsoft’s Corporate, External and Legal Affairs (CELA)

Plenary Panels:

Innovation – Responsible and/or Permissionless

Ellen-Marie Forsberg, Senior Researcher/Research Manager at Oslo and Akershus University College of Applied Sciences

Adam Thierer, Senior Research Fellow with the Technology Policy Program at the Mercatus Center at George Mason University

Wendell Wallach, Consultant, ethicist, and scholar at Yale University’s Interdisciplinary Center for Bioethics

Gene Drives, Trade and International Regulations

Greg Kaebnick, Director, Editorial Department; Editor, Hastings Center Report; Research Scholar, Hastings Center

Jennifer Kuzma, Goodnight-North Carolina GlaxoSmithKline Foundation Distinguished Professor in Social Sciences in the School of Public and International Affairs (SPIA) and co-director of the Genetic Engineering and Society (GES) Center at North Carolina State University

Andrew Maynard, Senior Sustainability Scholar, Julie Ann Wrigley Global Institute of Sustainability; Director, Risk Innovation Lab; Professor, School for the Future of Innovation in Society, Arizona State University

Gary Marchant, Regents’ Professor of Law; Faculty Director and Faculty Fellow, Center for Law, Science & Innovation, Arizona State University

Marc Saner, Inaugural Director of the Institute for Science, Society and Policy, and Associate Professor, University of Ottawa Department of Geography

Big Data

Anupam Chander, Martin Luther King, Jr. Professor of Law and Director, California International Law Center, UC Davis School of Law

Pilar Ossorio, Professor of Law and Bioethics, University of Wisconsin, School of Law and School of Medicine and Public Health; Morgridge Institute for Research, Ethics Scholar-in-Residence

George Poste, Chief Scientist, Complex Adaptive Systems Initiative (CASI) (http://www.casi.asu.edu/), Regents’ Professor and Del E. Webb Chair in Health Innovation, Arizona State University

Emily Shuckburgh, climate scientist and deputy head of the Polar Oceans Team at the British Antarctic Survey, University of Cambridge

Responsible Development of AI

Spring Berman, Ira A. Fulton Schools of Engineering, Arizona State University

John Havens, The IEEE [Institute of Electrical and Electronics Engineers] Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems

Subbarao Kambhampati, Senior Sustainability Scientist, Julie Ann Wrigley Global Institute of Sustainability, Professor, School of Computing, Informatics and Decision Systems Engineering, Ira A. Fulton Schools of Engineering, Arizona State University

Wendell Wallach, Consultant, Ethicist, and Scholar at Yale University’s Interdisciplinary Center for Bioethics

Existential and Catastrophic Ricks [sic]

Tony Barrett, Co-Founder and Director of Research of the Global Catastrophic Risk Institute

Haydn Belfield, Academic Project Administrator, Centre for the Study of Existential Risk at the University of Cambridge

Margaret E. Kosal, Associate Director, Sam Nunn School of International Affairs, Georgia Institute of Technology

Catherine Rhodes, Academic Project Manager, Centre for the Study of Existential Risk (CSER), University of Cambridge

These are the panels of interest to me; there are others on the homepage.

Here’s some information from the Conference registration webpage,

Early Bird Registration – $50 off until May 1! Enter discount code: earlybirdGETs50

New: Group Discount – Register 2+ attendees together and receive an additional 20% off for all group members!

Click Here to Register!

Conference registration fees are as follows:

  • General (non-CLE) Registration: $150.00
  • CLE Registration: $350.00
  • *Current Student / ASU Law Alumni Registration: $50.00
  • ^Cybersecurity sessions only (May 19): $100 CLE / $50 General / Free for students (registration info coming soon)

There you have it.

Neuro-techno future laws

I’m pretty sure this isn’t the first exploration of potential legal issues arising from research into neuroscience, although it’s the first one I’ve stumbled across. From an April 25, 2017 news item on phys.org,

New human rights laws to prepare for advances in neurotechnology that put the ‘freedom of the mind’ at risk have been proposed today in the open access journal Life Sciences, Society and Policy.

The authors of the study suggest four new human rights laws could emerge in the near future to protect against exploitation and loss of privacy. The four laws are: the right to cognitive liberty, the right to mental privacy, the right to mental integrity and the right to psychological continuity.

An April 25, 2017 Biomed Central news release on EurekAlert, which originated the news item, describes the work in more detail,

Marcello Ienca, lead author and PhD student at the Institute for Biomedical Ethics at the University of Basel, said: “The mind is considered to be the last refuge of personal freedom and self-determination, but advances in neural engineering, brain imaging and neurotechnology put the freedom of the mind at risk. Our proposed laws would give people the right to refuse coercive and invasive neurotechnology, protect the privacy of data collected by neurotechnology, and protect the physical and psychological aspects of the mind from damage by the misuse of neurotechnology.”

Advances in neurotechnology, such as sophisticated brain imaging and the development of brain-computer interfaces, have led to these technologies moving away from a clinical setting and into the consumer domain. While these advances may be beneficial for individuals and society, there is a risk that the technology could be misused and create unprecedented threats to personal freedom.

Professor Roberto Andorno, co-author of the research, explained: “Brain imaging technology has already reached a point where there is discussion over its legitimacy in criminal court, for example as a tool for assessing criminal responsibility or even the risk of reoffending. Consumer companies are using brain imaging for ‘neuromarketing’, to understand consumer behaviour and elicit desired responses from customers. There are also tools such as ‘brain decoders’ which can turn brain imaging data into images, text or sound. All of these could pose a threat to personal freedom which we sought to address with the development of four new human rights laws.”

The authors explain that as neurotechnology improves and becomes commonplace, there is a risk that the technology could be hacked, allowing a third-party to ‘eavesdrop’ on someone’s mind. In the future, a brain-computer interface used to control consumer technology could put the user at risk of physical and psychological damage caused by a third-party attack on the technology. There are also ethical and legal concerns over the protection of data generated by these devices that need to be considered.

International human rights laws make no specific mention of neuroscience, although advances in biomedicine have become intertwined with laws, such as those concerning human genetic data. Similar to the historical trajectory of the genetic revolution, the authors state that the on-going neurorevolution will force a reconceptualization of human rights laws and even the creation of new ones.

Marcello Ienca added: “Science-fiction can teach us a lot about the potential threat of technology. Neurotechnology featured in famous stories has in some cases already become a reality, while others are inching ever closer, or exist as military and commercial prototypes. We need to be prepared to deal with the impact these technologies will have on our personal freedom.”

Here’s a link to and a citation for the paper,

Towards new human rights in the age of neuroscience and neurotechnology by Marcello Ienca and Roberto Andorno. Life Sciences, Society and Policy 2017, 13:5. DOI: 10.1186/s40504-017-0050-1 Published: 26 April 2017

© The Author(s). 2017

This paper is open access.

Offering privacy and light control via smart windows

There have been quite a few ‘smart’ window stories here on this blog, but this one is the first to feature a privacy option. From a Nov. 17, 2016 news item on Nanowerk,

Smart windows get darker to filter out the sun’s rays on bright days, and turn clear on cloudy days to let more light in. This feature can help control indoor temperatures and offers some privacy without resorting to aids such as mini-blinds.

Now scientists report a new development in this growing niche: solar smart windows that can turn opaque on demand and even power other devices. …

A Nov. 17, 2016 American Chemical Society (ACS) news release, which originated the news item, goes on to explain the work,

Most existing solar-powered smart windows are designed to respond automatically to changing conditions, such as light or heat. But this means that on cool or cloudy days, consumers can’t flip a switch and tint the windows for privacy. Also, these devices often operate on a mere fraction of the light energy they are exposed to while the rest gets absorbed by the windows. This heats them up, which can add warmth to a room that the windows are supposed to help keep cool. Jeremy Munday and colleagues wanted to address these limitations.

The researchers created a new smart window by sandwiching a polymer matrix containing microdroplets of liquid crystal materials, and an amorphous silicon layer — the type often used in solar cells — between two glass panes. When the window is “off,” the liquid crystals scatter light, making the glass opaque. The silicon layer absorbs the light and provides the low power needed to align the crystals so light can pass through and make the window transparent when the window is turned “on” by the user. The extra energy that doesn’t go toward operating the window is harvested and could be redirected to power other devices, such as lights, TVs or smartphones, the researchers say.
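Here’s how that ‘off means opaque, on means powered-and-clear’ logic might look as a toy model (my own sketch; the wattage figures are invented placeholders, not numbers from the ACS paper):

```python
# Toy model of the switchable solar window described above.
# ALIGN_POWER_W and the harvested wattage are invented for illustration.

class SolarSmartWindow:
    ALIGN_POWER_W = 0.5              # assumed power to align the liquid crystals

    def __init__(self):
        self.transparent = False     # "off": crystals scatter light -> opaque

    def update(self, harvested_w: float, user_wants_clear: bool) -> float:
        """Return surplus watts available for other devices."""
        if user_wants_clear and harvested_w >= self.ALIGN_POWER_W:
            self.transparent = True  # "on": silicon layer powers the alignment
            return harvested_w - self.ALIGN_POWER_W
        self.transparent = False     # opaque: privacy mode, nothing consumed
        return harvested_w           # all harvested power goes to other loads

window = SolarSmartWindow()
surplus = window.update(harvested_w=3.0, user_wants_clear=True)
print(window.transparent, f"{surplus} W to spare")  # True 2.5 W to spare
```

The design point the release emphasizes is the order of priorities: the window powers its own switching first, and only the leftover energy goes to lights, TVs or smartphones.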

For anyone who finds reading text a bit onerous, there’s this video,

Here’s a link to and a citation for the paper,

Electrically Controllable Light Trapping for Self-Powered Switchable Solar Windows by Joseph Murray, Dakang Ma, and Jeremy N. Munday. ACS Photonics, Article ASAP DOI: 10.1021/acsphotonics.6b00518 Publication Date (Web): October 26, 2016

Copyright © 2016 American Chemical Society

This paper is behind a paywall.

Does more nano-enabled security = more nano-enabled surveillance?

A May 6, 2014 essay by Brandon Engel published on Nanotechnology Now poses an interesting question about the use of nanotechnology-enabled security and surveillance measures (Note: Links have been removed),

Security is of prime importance in an increasingly globalized society. It has a role to play in protecting citizens and states from myriad malevolent forces, such as organized crime or terrorist acts, and in responding, as well as preventing, both natural and man-made disasters. Research and development in this field often focuses on certain broad areas, including security of infrastructures and utilities; intelligence surveillance and border security; and stability and safety in cases of crisis. …

Nanotechnology is coming to play an ever greater role in these applications. Whether it’s used for detecting potentially harmful materials for homeland security, finding pathogens in water supply systems, or for early warning and detoxification of harmful airborne substances, its usefulness and efficiency are becoming more evident by the day.

He’s quite right about these applications. For example, I’ve just published a May 9, 2014 piece, ‘Textiles laced with carbon nanotubes for clothing that protects against poison gas’.

Engel goes on to describe a dark side to nanotechnology-enabled security,

On the other hand, more and more unsettling scenarios are fathomable with the advent of this new technology, such as covertly infiltrated devices, as small as tiny insects, being used to coordinate and execute a disarming attack on obsolete weapons systems, information apparatuses, or power grids.

Engel is also right about the potential surveillance issues. In a Dec. 18, 2013 posting, I featured a special issue of SIGNAL Magazine (which covers the latest trends and techniques in topics that include C4ISR, information security, intelligence, electronics, homeland security, cyber technologies, …) focusing on nanotechnology-enabled security and surveillance,

The Dec. 1, 2013 article by Rita Boland (h/t Dec. 13, 2013 Azonano news item) does a good job of presenting a ‘big picture’ approach including nonmilitary and military nanotechnology applications by interviewing the main players in the US,

Nanotechnology is the new cyber, according to several major leaders in the field. Just as cyber is entrenched across global society now, nano is poised to be the major capabilities enabler of the next decades. Expert members from the National Nanotechnology Initiative representing government and science disciplines say nano has great significance for the military and the general public.

For anyone who may think Engel is exaggerating when he mentions tiny insects being used for surveillance, there’s this May 8, 2014 post (Cyborg Beetles Detect Nerve Gas) by Dexter Johnson on his Nanoclast blog (Note: Dexter is an engineer who describes the technology in a somewhat detailed, technical fashion). I have a less technical description of some then-current research in an Aug. 12, 2011 posting featuring some military experiments, for example, a surveillance camera disguised as a hummingbird (I have a brief video of a demonstration) and some research into how smartphones can be used for surveillance.

Engel comes to an interesting conclusion (Note: A link has been removed),

The point is this: whatever conveniences are seemingly afforded by these sort of technological advances, there is persistent ambiguity about the extent to which this technology actually protects or makes us more vulnerable. Striking the right balance between respecting privacy and security is an ever-elusive goal, and at such an early point in the development of nanotech, must be approached on a case by case basis. … [emphasis mine]

I don’t understand what Engel means when he says “case by case.” Are these individual applications that he feels are prone to misuse or specific usages of these applications? In any event, while I appreciate the concerns (I share many of them), I don’t think his proposed approach is practicable and that leads to another question, what can be done? Sadly, I have no answers but I am glad to see the question being asked in the ‘nanotechnology webspace’.

I did some searching for Brandon Engel online and found this January 17, 2014 guest post (about a Dean Koontz book) on The Belle’s Tales blog. He also has a blog of his own, Brandon Engel, where he describes himself this way,

Musician, filmmaker, multimedia journalist, puppeteer, and professional blogger based in Chicago.

The man clearly has a wide range of interests and concerns.

As for the question posed in this post’s head, I don’t think there is a simple one-to-one equivalency where one more security procedure results in one more surveillance procedure. However, I do believe there is a relationship between the two and that sometimes increased security is an argument used to support increased surveillance procedures. While Engel doesn’t state that explicitly in his piece, I think it is implied.

One final thought: surveillance is not new, and one of the more interesting examples of the ‘art’ is featured in a description of the Parisian constabulary of the 18th century written by Nina Kushner in,

The Case of the Closely Watched Courtesans
The French police obsessively tracked the kept women of 18th-century Paris. Why? (Slate.com, April 15, 2014)

or

Republished as: French police obsessively tracked elite sex workers of 18th-century Paris — and well-to-do men who hired them (National Post, April 16, 2014)

Kushner starts her article by describing contemporary sex workers and a 2014 Urban Institute study, and then draws parallels between now and 18th-century Parisian sex workers while detailing advances in surveillance reports,

… One of the very first police forces in the Western world emerged in 18th-century Paris, and one of its vice units asked many of the same questions as the Urban Institute authors: How much do sex workers earn? Why do they turn to sex work in the first place? What are their relationships with their employers?

The vice unit, which operated from 1747 to 1771, turned out thousands of hand-written pages detailing what these dames entretenues [kept women] did. …

… They gathered biographical and financial data on the men who hired kept women — princes, peers of the realm, army officers, financiers, and their sons, a veritable “who’s who” of high society, or le monde. Assembling all of this information required cultivating extensive spy networks. Making it intelligible required certain bureaucratic developments: These inspectors perfected the genre of the report and the information management system of the dossier. These forms of “police writing,” as one scholar has described them, had been emerging for a while. But they took a giant leap forward at midcentury, with the work of several Paris police inspectors, including Inspector Jean-Baptiste Meusnier, the officer in charge of this vice unit from its inception until 1759. Meusnier and his successor also had clear literary talent; the reports are extremely well written, replete with irony, clever turns of phrase, and even narrative tension — at times, they read like novels.

If you have the time, Kushner’s well written article offers fascinating insight.