
Artificial intelligence and metaphors

This is a different approach to artificial intelligence. From a June 27, 2017 news item on ScienceDaily,

Ask Siri to find a math tutor to help you “grasp” calculus and she’s likely to respond that your request is beyond her abilities. That’s because metaphors like “grasp” are difficult for Apple’s voice-controlled personal assistant to, well, grasp.

But new UC Berkeley research suggests that Siri and other digital helpers could someday learn the algorithms that humans have used for centuries to create and understand metaphorical language.

Mapping 1,100 years of metaphoric English language, researchers at UC Berkeley and Lehigh University in Pennsylvania have detected patterns in how English speakers have added figurative word meanings to their vocabulary.

The results, published in the journal Cognitive Psychology, demonstrate how throughout history humans have used language that originally described palpable experiences such as “grasping an object” to describe more intangible concepts such as “grasping an idea.”

Unfortunately, this image is not the best quality,

Scientists have created historical maps showing the evolution of metaphoric language. (Image courtesy of Mahesh Srinivasan)

A June 27, 2017 University of California at Berkeley (or UC Berkeley) news release by Yasmin Anwar, which originated the news item, provides more detail,

“The use of concrete language to talk about abstract ideas may unlock mysteries about how we are able to communicate and conceptualize things we can never see or touch,” said study senior author Mahesh Srinivasan, an assistant professor of psychology at UC Berkeley. “Our results may also pave the way for future advances in artificial intelligence.”

The findings provide the first large-scale evidence that the creation of new metaphorical word meanings is systematic, researchers said. They can also inform efforts to design natural language processing systems like Siri to help them understand creativity in human language.

“Although such systems are capable of understanding many words, they are often tripped up by creative uses of words that go beyond their existing, pre-programmed vocabularies,” said study lead author Yang Xu, a postdoctoral researcher in linguistics and cognitive science at UC Berkeley.

“This work brings opportunities toward modeling metaphorical words at a broad scale, ultimately allowing the construction of artificial intelligence systems that are capable of creating and comprehending metaphorical language,” he added.

Srinivasan and Xu conducted the study with Lehigh University psychology professor Barbara Malt.

Using the Metaphor Map of English database, researchers examined more than 5,000 examples from the past millennium in which word meanings from one semantic domain, such as “water,” were extended to another semantic domain, such as “mind.”

Researchers called the original semantic domain the “source domain” and the domain that the metaphorical meaning was extended to, the “target domain.”

More than 1,400 online participants were recruited to rate semantic domains such as “water” or “mind” according to the degree to which they were related to the external world (light, plants), animate things (humans, animals), or intense emotions (excitement, fear).

These ratings were fed into computational models that the researchers had developed to predict which semantic domains had been the sources or targets of metaphorical extension.

In comparing their computational predictions against the actual historical record provided by the Metaphor Map of English, researchers found that their models correctly forecast about 75 percent of recorded metaphorical language mappings over the past millennium.

Furthermore, they found that the degree to which a domain is tied to experience in the external world, such as “grasping a rope,” was the primary predictor of how a word would take on a new metaphorical meaning such as “grasping an idea.”

For example, time and again, researchers found that words associated with textiles, digestive organs, wetness, solidity and plants were more likely to provide sources for metaphorical extension, while mental and emotional states, such as excitement, pride and fear were more likely to be the targets of metaphorical extension.
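
For readers who’d like to see how that kind of directional prediction might work, here’s a small Python sketch of my own; it is not the researchers’ actual model. It assumes each semantic domain has a single ‘externality’ rating and simply guesses that the more concrete domain of a pair served as the metaphor’s source; the domain names, ratings and mappings below are invented for illustration.

```python
# Toy sketch of the directional prediction described above (not the paper's model).
# Assumption: each domain gets one "externality" rating (higher = more concrete),
# standing in for the participant ratings mentioned in the news release.
externality = {
    "water": 0.92,
    "plants": 0.88,
    "textiles": 0.85,
    "mind": 0.18,
    "excitement": 0.12,
    "fear": 0.10,
}

# Hypothetical recorded mappings (source, target), standing in for entries
# from the Metaphor Map of English.
recorded_mappings = [
    ("water", "mind"),
    ("plants", "excitement"),
    ("textiles", "fear"),
]

def predict_direction(domain_a, domain_b):
    """Predict (source, target): the more concrete domain is taken as the source."""
    if externality[domain_a] >= externality[domain_b]:
        return domain_a, domain_b
    return domain_b, domain_a

correct = sum(
    predict_direction(src, tgt) == (src, tgt) for src, tgt in recorded_mappings
)
print(f"{correct}/{len(recorded_mappings)} mappings predicted in the right direction")
```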


Here’s a link to and a citation for the paper,

Evolution of word meanings through metaphorical mapping: Systematicity over the past millennium by Yang Xu, Barbara C. Malt, Mahesh Srinivasan. Cognitive Psychology Volume 96, August 2017, Pages 41–53 DOI: https://doi.org/10.1016/j.cogpsych.2017.05.005

The early web version of this paper is behind a paywall.

For anyone interested in the ‘Metaphor Map of English’ database mentioned in the news release, you can find it here on the University of Glasgow website. By the way, it also seems to be known as ‘Mapping Metaphor with the Historical Thesaurus’.

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 1 of 3

This sucker (INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research, also known as Canada’s Fundamental Science Review 2017 or the Naylor report) is a 280 pp. document (PDF) and was released on Monday, April 10, 2017. I didn’t intend for this commentary to stretch out into three parts (sigh). Them’s the breaks. This first part provides an introduction to the panel and the report as well as some ‘first thoughts’. Part 2 offers more detailed thoughts, and Part 3 offers ‘special cases’ and sums up some of the ideas first introduced in Part 1.

I first wrote about this review in a June 15, 2017 posting where amongst other comments I made this one,

Getting back to the review and more specifically, the panel, it’s good to see that four of the nine participants are women, but other than that there doesn’t seem to be much diversity, i.e., the majority (five) spring from the Ontario/Québec nexus of power and all the Canadians are from the southern part of the country. Back to diversity, there is one businessman, Mike Lazaridis, known primarily as the founder of Research in Motion (RIM or, more popularly, the BlackBerry company), making the panel not a wholly ivory tower affair. Still, I hope one day these panels will have members from the Canadian North and international members who come from somewhere other than the US, Great Britain, and/or if they’re having a particularly wild day, Germany. Here are some candidate countries for other places to look for panel members: Japan, Israel, China, South Korea, and India. Other possibilities include one of the South American countries, African countries, and/or the Middle Eastern countries.

Take the continent of Africa, for example, where many countries seem to have successfully tackled one of the issues we face. Specifically, the problem of encouraging young researchers. …

Here’s a quick summary of the newly released report from the April 10, 2017 federal government news release on Canada’s Public Policy Forum,

Today [April 10, 2017], the Government of Canada published the final report of the expert panel on Canada’s Fundamental Science Review. Commissioned by the Honourable Kirsty Duncan, Minister of Science, the report by the blue-ribbon panel offers a comprehensive review of the mechanisms for federal funding that supports research undertaken at academic institutions and research institutes across Canada, as well as the levels of that funding. It provides a multi-year blueprint for improving the oversight and governance of what the panelists call the “research ecosystem.” The report also recommends making major new investments to restore support for front-line research and strengthen the foundations of Canadian science and research at this pivotal point in global history.

The review is the first of its type in more than 40 years. While it focused most closely on the four major federal agencies that support science and scholarly inquiry across all disciplines, the report also takes a wide-angle view of governance mechanisms ranging from smaller agencies to big science facilities. Another issue closely examined by the panel was the effect of the current configuration of funding on the prospects of early career researchers—a group that includes a higher proportion of women and is more diverse than previous generations of scientists and scholars.

The panel’s deliberations were informed by a broad consultative process. The panel received 1,275 written submissions [emphasis mine] from individuals, associations and organizations. It also held a dozen round tables in five cities, engaging some 230 researchers [emphasis mine] at different career stages.

Among the findings:

  • Basic research worldwide has led to most of the technological, medical and social advances that make our quality of life today so much better than a century ago. Canadian scientists and scholars have contributed meaningfully to these advances through the decades; however, by various measures, Canada’s research competitiveness has eroded in recent years.
  • This trend emerged during a period when there was a drop of more than 30 percent in real per capita funding for independent or investigator-led research by front-line scientists and scholars in universities, colleges, institutes and research hospitals. This drop occurred as a result of caps on federal funding to the granting councils and a dramatic change in the balance of funding toward priority-driven and partnership-oriented research.
  • Canada is an international outlier in that funding from federal government sources accounts for less than 25 percent of total spending on research and development in the higher education sector. While governments sometimes highlight that, relative to GDP, Canada leads the G7 in total spending by this sector, institutions themselves now underwrite 50 percent of these costs—with adverse effects on both research and education.
  • Coordination and collaboration among the four key federal research agencies [Canada Foundation for Innovation {CFI}; Social Sciences and Humanities Research Council {SSHRC}; Natural Sciences and Engineering Research Council {NSERC}; Canadian Institutes of Health Research {CIHR}] is suboptimal, with poor alignment of supports for different aspects of research such as infrastructure, operating costs and personnel awards. Governance and administrative practices vary inexplicably, and support for areas such as international partnerships or multidisciplinary research is uneven.
  • Early career researchers are struggling in some disciplines, and Canada lacks a career-spanning strategy for supporting both research operations and staff.
  • Flagship personnel programs such as the Canada Research Chairs have had the same value since 2000. Levels of funding and numbers of awards for students and post-doctoral fellows have not kept pace with inflation, peer nations or the size of applicant pools.

The report also outlines a comprehensive agenda to strengthen the foundations of Canadian extramural research. Recommended improvements in oversight include:

  • legislation to create an independent National Advisory Council on Research and Innovation (NACRI) that would work closely with Canada’s new Chief Science Advisor (CSA) to raise the bar in terms of ongoing evaluations of all research programming;
  • wide-ranging improvements to oversight and governance of the four agencies, including the appointment of a coordinating board chaired by the CSA; and
  • lifecycle governance of national-scale research facilities as well as improved methods for overseeing and containing the growth in ad-hoc funding of smaller non-profit research entities.

With regard to funding, the panel recommends a major multi-year reinvestment in front-line research, targeting several areas of identified need. Each recommendation is benchmarked and is focused on making long-term improvements in Canada’s research capacity. The panel’s recommendations, to be phased in over four years, would raise annual spending across the four major federal agencies and other key entities from approximately $3.5 billion today to $4.8 billion in 2022. The goal is to ensure that Canada benefits from an outsized concentration of world-leading scientists and scholars who can make exciting discoveries and generate novel insights while educating and inspiring the next generation of researchers, innovators and leaders.

Given global competition, the current conditions in the ecosystem, the role of research in underpinning innovation and educating innovators, and the need for research to inform evidence-based policy-making, the panel concludes that this is among the highest-yield investments in Canada’s future that any government could make.

The full report is posted on www.sciencereview.ca.

Quotes

“In response to the request from Prime Minister Trudeau and Minister Duncan, the Science Review panel has put together a comprehensive roadmap for Canadian pre-eminence in science and innovation far into the future. The report provides creative pathways for optimizing Canada’s investments in fundamental research in the physical, life and social sciences as well as the humanities in a cost effective way. Implementation of the panel’s recommendations will make Canada the destination of choice for the world’s best talent. It will also guarantee that young Canadian researchers can fulfill their dreams in their own country, bringing both Nobel Prizes and a thriving economy to Canada. American scientists will look north with envy.”

– Robert J. Birgeneau, Silverman Professor of Physics and Public Policy, University of California, Berkeley

“We have paid close attention not only to hard data on performance and funding but also to the many issues raised by the science community in our consultations. I sincerely hope the report will serve as a useful guide to policy-makers for years to come.”

– Martha Crago, Vice-President, Research and Professor of Human Communication Disorders, Dalhousie University

“Science is the bedrock of modern civilization. Our report’s recommendations to increase and optimize government investments in fundamental scientific research will help ensure that Canada’s world-class researchers can continue to make their critically important contributions to science, industry and society in Canada while educating and inspiring future generations. At the same time, such investments will enable Canada to attract top researchers from around the world. Canada must strategically build critical density in our researcher communities to elevate its global competitiveness. This is the path to new technologies, new businesses, new jobs and new value creation for Canada.”

– Mike Lazaridis, Founder and Managing Partner, Quantum Valley Investments

“This was a very comprehensive review. We heard from a wide range of researchers—from the newest to those with ambitious, established and far-reaching research careers. At all these levels, researchers spoke of their gratitude for federal funding, but they also described enormous barriers to their success. These ranged from personal career issues like gaps in parental leave to a failure to take gender, age, geographic location and ethnicity into account. They also included mechanical and economic issues like gaps between provincial and federal granting timelines and priorities, as well as a lack of money for operating and maintaining critical equipment.”

– Claudia Malacrida, Associate Vice-President, Research and Professor of Sociology, University of Lethbridge

“We would like to thank the community for its extensive participation in this review. We reflect that community perspective in recommending improvements to funding and governance for fundamental science programs to restore the balance with recent industry-oriented programs and improve both science and innovation in Canada.”

– Arthur B. McDonald, Professor Emeritus, Queen’s University

“This report sets out a multi-year agenda that, if implemented, could transform Canadian research capacity and have enormous long-term impacts across the nation. It proffers a legacy-building opportunity for a new government that has boldly nailed its colours to the mast of science and evidence-informed policy-making. I urge the Prime Minister to act decisively on our recommendations.”

– C. David Naylor, Professor of Medicine, University of Toronto (Chair)

“This report outlines all the necessary ingredients to advance basic research, thereby positioning Canada as a leading ‘knowledge’ nation. Rarely does a country have such a unique opportunity to transform the research landscape and lay the foundation for a future of innovation, prosperity and well-being.”

– Martha C. Piper, President Emeritus, University of British Columbia

“Our report shows a clear path forward. Now it is up to the government to make sure that Canada truly becomes a world leader in how it both organizes and financially supports fundamental research.”

– Rémi Quirion, Le scientifique en chef du Québec

“The government’s decision to initiate this review reflected a welcome commitment to fundamental research. I am hopeful that the release of our report will energize the government and research community to take the next steps needed to strengthen Canada’s capacity for discovery and research excellence. A research ecosystem that supports a diversity of scholars at every career stage conducting research in every discipline will best serve Canada and the next generation of students and citizens as we move forward to meet social, technological, economic and ecological challenges.”

– Anne Wilson, Professor of Psychology, Wilfrid Laurier University

Quick facts

  • The Fundamental Science Review Advisory Panel is an independent and non-partisan body whose mandate was to provide advice and recommendations to the Minister of Science on how to improve federal science programs and initiatives.
  • The panel was asked to consider whether there are gaps in the federal system of support for fundamental research and recommend how to address them.
  • The scope of the review included the federal granting councils along with some federally funded organizations such as the Canada Foundation for Innovation.

First thoughts

Getting to the report itself: I have quickly skimmed through it, but before getting to that and for full disclosure purposes, please note that I made a submission to the panel. That said, I’m a little disappointed. I would have liked to have seen a little more imagination in the recommendations that set forth future directions, although, admittedly, the questions themselves would not seem to encourage much creativity,

Our mandate was summarized in two broad questions:

1. Are there any overall program gaps in Canada’s fundamental research funding ecosystem that need to be addressed?

2. Are there elements or programming features in other countries that could provide a useful example for the Government of Canada in addressing these gaps? (p. 1 print; p. 35 PDF)

A new agency to replace the STIC (Science, Technology and Innovation Council)

There are no big surprises. Of course they’ve recommended another organization, NACRI [National Advisory Council on Research and Innovation], most likely to replace the Conservative government’s advisory group, the Science, Technology and Innovation Council (STIC), which seems to have died as of Nov. 2015, one month after the Liberals won. There was no Chief Science Advisor under the Conservatives. As I recall, the STIC replaced a previous Liberal government’s advisory group and Chief Science Advisor (Arthur Carty, now the executive director of the Waterloo [as in University of Waterloo] Institute of Nanotechnology).

Describing the NACRI as peopled by volunteers doesn’t exactly describe the situation. This is the sort of ‘volunteer opportunity’ a dedicated careerist salivates over because it’s a career builder where you rub shoulders with movers and shakers in other academic institutions, in government, and in business. BTW, flights to meetings will be paid for along with per diems (accommodations and meals). These volunteers will also have a staff. Admittedly, it will be unpaid extra time for the ‘volunteer’ but the payoff promises to be considerable.

Canada’s eroding science position

There is considerable concern evinced over Canada’s eroding position, although we still have bragging rights in some areas (regenerative medicine and artificial intelligence, to name two). As for the erosion, the OECD (Organization for Economic Cooperation and Development) dates it back to 2001 (from my June 2, 2014 posting),

Interestingly, the OECD (Organization for Economic Cooperation and Development) Science, Technology and Industry Scoreboard 2013 dates the decline to 2001. From my Oct. 30, 2013 posting (excerpted from the scorecard),

Canada is among the few OECD countries where R&D expenditure declined between 2000 and 2011 (Figure 1). This decline was mainly due to reduced business spending on R&D. It occurred despite relatively generous public support for business R&D, primarily through tax incentives. In 2011, Canada was amongst the OECD countries with the most generous tax support for R&D and the country with the largest share of government funding for business R&D being accounted for by tax credits (Figure 2). …

It should be noted that the Liberals have introduced another budget with flat funding for science (for a scathing review, see Nassif Ghoussoub’s April 10, 2017 posting on his Piece of Mind blog; he’s a professor of mathematics at the University of British Columbia), although the funding isn’t quite as flat as it might seem at first glance (see my March 24, 2017 posting about the 2017 budget). The government explained that the science funding agencies didn’t receive increased funding because it was waiting on this report, which was released only weeks later (couldn’t they have had a sneak preview?). In any event, it seems it will be at least a year before the funding issues described in the report can be addressed through another budget, unless there’s some ‘surprise’ funding ahead.

Again, here are links to the other parts:

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report) Commentaries

Part 2

Part 3

Emerging technology and the law

I have three news bits about legal issues that are arising as a consequence of emerging technologies.

Deep neural networks, art, and copyright

Caption: The rise of automated art opens new creative avenues, coupled with new problems for copyright protection. Credit: Provided by: Alexander Mordvintsev, Christopher Olah and Mike Tyka

Presumably this artwork is a demonstration of automated art although they never really do explain how in the news item/news release. An April 26, 2017 news item on ScienceDaily announces research into copyright and the latest in using neural networks to create art,

In 1968, sociologist Jean Baudrillard wrote on automatism that “contained within it is the dream of a dominated world […] that serves an inert and dreamy humanity.”

With the growing popularity of Deep Neural Networks (DNN’s), this dream is fast becoming a reality.

Dr. Jean-Marc Deltorn, researcher at the Centre d’études internationales de la propriété intellectuelle in Strasbourg, argues that we must remain a responsive and responsible force in this process of automation — not inert dominators. As he demonstrates in a recent Frontiers in Digital Humanities paper, the dream of automation demands a careful study of the legal problems linked to copyright.

An April 26, 2017 Frontiers (publishing) news release on EurekAlert, which originated the news item, describes the research in more detail,

For more than half a century, artists have looked to computational processes as a way of expanding their vision. DNN’s are the culmination of this cross-pollination: by learning to identify a complex number of patterns, they can generate new creations.

These systems are made up of complex algorithms modeled on the transmission of signals between neurons in the brain.

DNN creations rely in equal measure on human inputs and the non-human algorithmic networks that process them.

Inputs are fed into the system, which is layered. Each layer provides an opportunity for a more refined knowledge of the inputs (shape, color, lines). Neural networks compare actual outputs to expected ones, and correct the predictive error through repetition and optimization. They train their own pattern recognition, thereby optimizing their learning curve and producing increasingly accurate outputs.

The deeper the layers are, the higher the level of abstraction. The highest layers are able to identify the contents of a given input with reasonable accuracy, after extended periods of training.
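
To make that learning loop a little more concrete, here’s a minimal Python sketch of my own; it has nothing to do with the paper or any art-generating system, and the toy task and network sizes are arbitrary. It shows a tiny two-layer network repeatedly comparing its actual outputs to the expected ones and nudging its weights to shrink the error, which is the compare-and-correct cycle the release describes.

```python
# Minimal two-layer network trained by comparing outputs to expected values
# and repeatedly correcting the prediction error (a toy illustration only).
import numpy as np

rng = np.random.default_rng(0)

# Toy inputs and expected outputs (the XOR pattern).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights (input -> hidden -> output) plus biases.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    hidden = sigmoid(X @ W1 + b1)        # each layer refines the representation
    output = sigmoid(hidden @ W2 + b2)   # the network's actual output

    error = output - y                   # compare actual vs. expected output

    # Correct the predictive error: push both layers' weights downhill.
    grad_out = error / len(X)
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_hid
    b1 -= lr * grad_hid.sum(axis=0)

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))  # typically close to [0, 1, 1, 0]
```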

Creation thus becomes increasingly automated through what Deltorn calls “the arcane traceries of deep architecture”. The results are sufficiently abstracted from their sources to produce original creations that have been exhibited in galleries, sold at auction and performed at concerts.

The originality of DNN’s is a combined product of technological automation on one hand, human inputs and decisions on the other.

DNN’s are gaining popularity. Various platforms (such as DeepDream) now allow internet users to generate their very own new creations. This popularization of the automation process calls for a comprehensive legal framework that ensures a creator’s economic and moral rights with regards to his work – copyright protection.

Form, originality and attribution are the three requirements for copyright. And while DNN creations satisfy the first of these three, the claim to originality and attribution will depend largely on a given country’s legislation and on the traceability of the human creator.

Legislation usually sets a low threshold for originality. As DNN creations could in theory create an endless number of riffs on source materials, the uncurbed creation of original works could inflate the existing number of copyright protections.

Additionally, a small number of national copyright laws confers attribution to what UK legislation defines loosely as “the person by whom the arrangements necessary for the creation of the work are undertaken.” In the case of DNN’s, this could mean anybody from the programmer to the user of a DNN interface.

Combined with an overly supple take on originality, this view on attribution would further increase the number of copyrightable works.

The risk, in both cases, is that artists will be less willing to publish their own works, for fear of infringement of DNN copyright protections.

In order to promote creativity – one seminal aim of copyright protection – the issue must be limited to creations that manifest a personal voice “and not just the electric glint of a computational engine,” to quote Deltorn. A delicate act of discernment.

DNN’s promise new avenues of creative expression for artists – with potential caveats. Copyright protection – a “catalyst to creativity” – must be contained. Many of us gently bask in the glow of an increasingly automated form of technology. But if we want to safeguard the ineffable quality that defines much art, it might be a good idea to hone in more closely on the differences between the electric and the creative spark.

This research is and will be part of a broader Frontiers Research Topic collection of articles on Deep Learning and Digital Humanities.

Here’s a link to and a citation for the paper,

Deep Creations: Intellectual Property and the Automata by Jean-Marc Deltorn. Front. Digit. Humanit., 01 February 2017 | https://doi.org/10.3389/fdigh.2017.00003

This paper is open access.

Conference on governance of emerging technologies

I received an April 17, 2017 notice via email about this upcoming conference. Here’s more from the Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics webpage,

The Fifth Annual Conference on Governance of Emerging Technologies:

Law, Policy and Ethics held at the new

Beus Center for Law & Society in Phoenix, AZ

May 17-19, 2017!

Call for Abstracts – Now Closed

The conference will consist of plenary and session presentations and discussions on regulatory, governance, legal, policy, social and ethical aspects of emerging technologies, including (but not limited to) nanotechnology, synthetic biology, gene editing, biotechnology, genomics, personalized medicine, human enhancement technologies, telecommunications, information technologies, surveillance technologies, geoengineering, neuroscience, artificial intelligence, and robotics. The conference is premised on the belief that there is much to be learned and shared from and across the governance experience and proposals for these various emerging technologies.

Keynote Speakers:

Gillian Hadfield, Richard L. and Antoinette Schamoi Kirtland Professor of Law and Professor of Economics, USC [University of Southern California] Gould School of Law

Shobita Parthasarathy, Associate Professor of Public Policy and Women’s Studies, Director, Science, Technology, and Public Policy Program University of Michigan

Stuart Russell, Professor at [University of California] Berkeley, is a computer scientist known for his contributions to artificial intelligence

Craig Shank, Vice President for Corporate Standards Group in Microsoft’s Corporate, External and Legal Affairs (CELA)

Plenary Panels:

Innovation – Responsible and/or Permissionless

Ellen-Marie Forsberg, Senior Researcher/Research Manager at Oslo and Akershus University College of Applied Sciences

Adam Thierer, Senior Research Fellow with the Technology Policy Program at the Mercatus Center at George Mason University

Wendell Wallach, Consultant, ethicist, and scholar at Yale University’s Interdisciplinary Center for Bioethics

 Gene Drives, Trade and International Regulations

Greg Kaebnick, Director, Editorial Department; Editor, Hastings Center Report; Research Scholar, Hastings Center

Jennifer Kuzma, Goodnight-North Carolina GlaxoSmithKline Foundation Distinguished Professor in Social Sciences in the School of Public and International Affairs (SPIA) and co-director of the Genetic Engineering and Society (GES) Center at North Carolina State University

Andrew Maynard, Senior Sustainability Scholar, Julie Ann Wrigley Global Institute of Sustainability; Director, Risk Innovation Lab, School for the Future of Innovation in Society; Professor, School for the Future of Innovation in Society, Arizona State University

Gary Marchant, Regents’ Professor of Law; Faculty Director and Faculty Fellow, Center for Law, Science & Innovation, Arizona State University

Marc Saner, Inaugural Director of the Institute for Science, Society and Policy, and Associate Professor, University of Ottawa Department of Geography

Big Data

Anupam Chander, Martin Luther King, Jr. Professor of Law and Director, California International Law Center, UC Davis School of Law

Pilar Ossorio, Professor of Law and Bioethics, University of Wisconsin, School of Law and School of Medicine and Public Health; Morgridge Institute for Research, Ethics Scholar-in-Residence

George Poste, Chief Scientist, Complex Adaptive Systems Initiative (CASI) (http://www.casi.asu.edu/), Regents’ Professor and Del E. Webb Chair in Health Innovation, Arizona State University

Emily Shuckburgh, climate scientist and deputy head of the Polar Oceans Team at the British Antarctic Survey, University of Cambridge

 Responsible Development of AI

Spring Berman, Ira A. Fulton Schools of Engineering, Arizona State University

John Havens, The IEEE [Institute of Electrical and Electronics Engineers] Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems

Subbarao Kambhampati, Senior Sustainability Scientist, Julie Ann Wrigley Global Institute of Sustainability, Professor, School of Computing, Informatics and Decision Systems Engineering, Ira A. Fulton Schools of Engineering, Arizona State University

Wendell Wallach, Consultant, Ethicist, and Scholar at Yale University’s Interdisciplinary Center for Bioethics

Existential and Catastrophic Ricks [sic]

Tony Barrett, Co-Founder and Director of Research of the Global Catastrophic Risk Institute

Haydn Belfield, Academic Project Administrator, Centre for the Study of Existential Risk at the University of Cambridge

Margaret E. Kosal, Associate Director, Sam Nunn School of International Affairs, Georgia Institute of Technology

Catherine Rhodes, Academic Project Manager, Centre for the Study of Existential Risk (CSER), University of Cambridge

These are the panels of interest to me; there are others on the homepage.

Here’s some information from the Conference registration webpage,

Early Bird Registration – $50 off until May 1! Enter discount code: earlybirdGETs50

New: Group Discount – Register 2+ attendees together and receive an additional 20% off for all group members!

Click Here to Register!

Conference registration fees are as follows:

  • General (non-CLE) Registration: $150.00
  • CLE Registration: $350.00
  • *Current Student / ASU Law Alumni Registration: $50.00
  • ^Cybersecurity sessions only (May 19): $100 CLE / $50 General / Free for students (registration info coming soon)

There you have it.

Neuro-techno future laws

I’m pretty sure this isn’t the first exploration of potential legal issues arising from research into neuroscience although it’s the first one I’ve stumbled across. From an April 25, 2017 news item on phys.org,

New human rights laws to prepare for advances in neurotechnology that put the ‘freedom of the mind’ at risk have been proposed today in the open access journal Life Sciences, Society and Policy.

The authors of the study suggest four new human rights laws could emerge in the near future to protect against exploitation and loss of privacy. The four laws are: the right to cognitive liberty, the right to mental privacy, the right to mental integrity and the right to psychological continuity.

An April 25, 2017 Biomed Central news release on EurekAlert, which originated the news item, describes the work in more detail,

Marcello Ienca, lead author and PhD student at the Institute for Biomedical Ethics at the University of Basel, said: “The mind is considered to be the last refuge of personal freedom and self-determination, but advances in neural engineering, brain imaging and neurotechnology put the freedom of the mind at risk. Our proposed laws would give people the right to refuse coercive and invasive neurotechnology, protect the privacy of data collected by neurotechnology, and protect the physical and psychological aspects of the mind from damage by the misuse of neurotechnology.”

Advances in neurotechnology, such as sophisticated brain imaging and the development of brain-computer interfaces, have led to these technologies moving away from a clinical setting and into the consumer domain. While these advances may be beneficial for individuals and society, there is a risk that the technology could be misused and create unprecedented threats to personal freedom.

Professor Roberto Andorno, co-author of the research, explained: “Brain imaging technology has already reached a point where there is discussion over its legitimacy in criminal court, for example as a tool for assessing criminal responsibility or even the risk of reoffending. Consumer companies are using brain imaging for ‘neuromarketing’, to understand consumer behaviour and elicit desired responses from customers. There are also tools such as ‘brain decoders’ which can turn brain imaging data into images, text or sound. All of these could pose a threat to personal freedom which we sought to address with the development of four new human rights laws.”

The authors explain that as neurotechnology improves and becomes commonplace, there is a risk that the technology could be hacked, allowing a third-party to ‘eavesdrop’ on someone’s mind. In the future, a brain-computer interface used to control consumer technology could put the user at risk of physical and psychological damage caused by a third-party attack on the technology. There are also ethical and legal concerns over the protection of data generated by these devices that need to be considered.

International human rights laws make no specific mention of neuroscience, although advances in biomedicine have become intertwined with laws, such as those concerning human genetic data. Similar to the historical trajectory of the genetic revolution, the authors state that the on-going neurorevolution will force a reconceptualization of human rights laws and even the creation of new ones.

Marcello Ienca added: “Science-fiction can teach us a lot about the potential threat of technology. Neurotechnology featured in famous stories has in some cases already become a reality, while others are inching ever closer, or exist as military and commercial prototypes. We need to be prepared to deal with the impact these technologies will have on our personal freedom.”

Here’s a link to and a citation for the paper,

Towards new human rights in the age of neuroscience and neurotechnology by Marcello Ienca and Roberto Andorno. Life Sciences, Society and Policy 2017 13:5 DOI: 10.1186/s40504-017-0050-1 Published: 26 April 2017

©  The Author(s). 2017

This paper is open access.

Formation of a time (temporal) crystal

It’s a crystal arranged in time, according to a March 8, 2017 University of Texas at Austin news release (also on EurekAlert; Note: Links have been removed),

Salt, snowflakes and diamonds are all crystals, meaning their atoms are arranged in 3-D patterns that repeat. Today scientists are reporting in the journal Nature on the creation of a phase of matter, dubbed a time crystal, in which atoms move in a pattern that repeats in time rather than in space.

The atoms in a time crystal never settle down into what’s known as thermal equilibrium, a state in which they all have the same amount of heat. It’s one of the first examples of a broad new class of matter, called nonequilibrium phases, that have been predicted but until now have remained out of reach. Like explorers stepping onto an uncharted continent, physicists are eager to explore this exotic new realm.

“This opens the door to a whole new world of nonequilibrium phases,” says Andrew Potter, an assistant professor of physics at The University of Texas at Austin. “We’ve taken these theoretical ideas that we’ve been poking around for the last couple of years and actually built it in the laboratory. Hopefully, this is just the first example of these, with many more to come.”

Some of these nonequilibrium phases of matter may prove useful for storing or transferring information in quantum computers.

Potter is part of the team led by researchers at the University of Maryland who successfully created the first time crystal from ions, or electrically charged atoms, of the element ytterbium. By applying just the right electrical field, the researchers levitated 10 of these ions above a surface like a magician’s assistant. Next, they whacked the atoms with a laser pulse, causing them to flip head over heels. Then they hit them again and again in a regular rhythm. That set up a pattern of flips that repeated in time.

Crucially, Potter noted, the pattern of atom flips repeated only half as fast as the laser pulses. This would be like pounding on a bunch of piano keys twice a second and notes coming out only once a second. This weird quantum behavior was a signature that he and his colleagues predicted, and helped confirm that the result was indeed a time crystal.
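
If the piano analogy isn’t enough, here’s a trivial Python sketch of my own making the same point: the laser drive repeats with every pulse, but because each pulse flips the spins, the spin pattern only repeats every two pulses, i.e. at half the drive frequency.

```python
# Toy illustration of the period-doubled ("half as fast") response described above.
spin = "up"
for pulse in range(1, 9):
    spin = "down" if spin == "up" else "up"   # each laser pulse flips the spins
    print(f"pulse {pulse}: spins {spin}")
# The drive repeats every pulse, but the spin pattern (down, up, down, up, ...)
# returns to the same value only every 2 pulses: the period-doubling signature.
```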

The team also consists of researchers at the National Institute of Standards and Technology, the University of California, Berkeley and Harvard University, in addition to the University of Maryland and UT Austin.

Frank Wilczek, a Nobel Prize-winning physicist at the Massachusetts Institute of Technology, was teaching a class about crystals in 2012 when he wondered whether a phase of matter could be created such that its atoms move in a pattern that repeats in time, rather than just in space.

Potter and his colleague Norman Yao at UC Berkeley created a recipe for building such a time crystal and developed ways to confirm that, once you had built such a crystal, it was in fact the real deal. That theoretical work was announced publicly last August and then published in January in the journal Physical Review Letters.

A team led by Chris Monroe of the University of Maryland in College Park built a time crystal, and Potter and Yao helped confirm that it indeed had the properties they predicted. The team announced that breakthrough—constructing a working time crystal—last September and is publishing the full, peer-reviewed description today in Nature.

A team led by Mikhail Lukin at Harvard University created a second time crystal a month after the first team, in that case, from a diamond.

Here’s a link to and a citation for the paper,

Observation of a discrete time crystal by J. Zhang, P. W. Hess, A. Kyprianidis, P. Becker, A. Lee, J. Smith, G. Pagano, I.-D. Potirniche, A. C. Potter, A. Vishwanath, N. Y. Yao, & C. Monroe. Nature 543, 217–220 (09 March 2017) doi:10.1038/nature21413 Published online 08 March 2017

This paper is behind a paywall.

CRISPR patent decision: Harvard’s and MIT’s Broad Institute victorious—for now

I have written about the CRISPR patent tussle (Harvard & MIT’s [Massachusetts Institute of Technology] Broad Institute vs the University of California at Berkeley) previously in a Jan. 6, 2015 posting and in a more detailed May 14, 2015 posting. I also mentioned (in a Jan. 17, 2017 posting) CRISPR and its patent issues in the context of a posting about a Slate.com series on Frankenstein and the novel’s applicability to our own time. This patent fight is being bitterly fought as fortunes are at stake.

It seems a decision has been made regarding the CRISPR patent claims. From a Feb. 17, 2017 article by Charmaine Distor for The Science Times,

After an intense court battle, the US Patent and Trademark Office (USPTO) released its ruling on February 15 [2017]. The rights for the CRISPR-Cas9 gene editing technology were handed over to the Broad Institute of Harvard University and the Massachusetts Institute of Technology (MIT).

According to an article in Nature, the said court battle was between the Broad Institute and the University of California. The two institutions are fighting over the intellectual property right for the CRISPR patent. The case between the two started when the patent was first awarded to the Broad Institute despite having the University of California apply first for the CRISPR patent.

Heidi Ledford’s Feb. 17, 2017 article for Nature provides more insight into the situation (Note: Links have been removed),

It [USPTO] ruled that the Broad Institute of Harvard and MIT in Cambridge could keep its patents on using CRISPR–Cas9 in eukaryotic cells. That was a blow to the University of California in Berkeley, which had filed its own patents and had hoped to have the Broad’s thrown out.

The fight goes back to 2012, when Jennifer Doudna at Berkeley, Emmanuelle Charpentier, then at the University of Vienna, and their colleagues outlined how CRISPR–Cas9 could be used to precisely cut isolated DNA1. In 2013, Feng Zhang at the Broad and his colleagues — and other teams — showed2 how it could be adapted to edit DNA in eukaryotic cells such as plants, livestock and humans.

Berkeley filed for a patent earlier, but the USPTO granted the Broad’s patents first — and this week upheld them. There are high stakes involved in the ruling. The holder of key patents could make millions of dollars from CRISPR–Cas9’s applications in industry: already, the technique has sped up genetic research, and scientists are using it to develop disease-resistant livestock and treatments for human diseases.

But the fight for patent rights to CRISPR technology is by no means over. Here are four reasons why.

1. Berkeley can appeal the ruling

2. European patents are still up for grabs

3. Other parties are also claiming patent rights on CRISPR–Cas9

4. CRISPR technology is moving beyond what the patents cover

As for Ledford’s 3rd point, there are an estimated 763 patent families (groups of related patents) claiming Cas9, leading to the distinct possibility that the Broad Institute will be fighting many patent claims in the future.

Once you’ve read Distor’s and Ledford’s articles, you may want to check out Adam Rogers’ and Eric Niiler’s Feb. 16, 2017 CRISPR patent article for Wired,

The fight over who owns the most promising technique for editing genes—cutting and pasting the stuff of life to cure disease and advance scientific knowledge—has been a rough one. A team on the West Coast, at UC Berkeley, filed patents on the method, Crispr-Cas9; a team on the East Coast, based at MIT and the Broad Institute, filed their own patents in 2014 after Berkeley’s, but got them granted first. The Berkeley group contended that this constituted “interference,” and that Berkeley deserved the patent.

At stake: millions, maybe billions of dollars in biotech money and licensing fees, the future of medicine, the future of bioscience. Not nothing. Who will benefit depends on who owns the patents.

On Wednesday [Feb. 15, 2017], the US Patent Trial and Appeal Board kind of, sort of, almost began to answer that question. Berkeley will get the patent for using the system called Crispr-Cas9 in any living cell, from bacteria to blue whales. Broad/MIT gets the patent in eukaryotic cells, which is to say, plants and animals.

It’s … confusing. “The patent that the Broad received is for the use of Crispr gene-editing technology in eukaryotic cells. The patent for the University of California is for all cells,” says Jennifer Doudna, the UC geneticist and co-founder of Caribou Biosciences who co-invented Crispr, on a conference call. Her metaphor: “They have a patent on green tennis balls; we have a patent for all tennis balls.”

Observers didn’t quite buy that topspin. If Caribou is playing tennis, it’s looking like Broad/MIT is Serena Williams.

“UC does not necessarily lose everything, but they’re no doubt spinning the story,” says Robert Cook-Deegan, an expert in genetic policy at Arizona State University’s School for the Future of Innovation in Society. “UC’s claims to eukaryotic uses of Crispr-Cas9 will not be granted in the form they sought. That’s a big deal, and UC was the big loser.”

UC officials said Wednesday [Feb. 15, 2017] that they are studying the 51-page decision and considering whether to appeal. That leaves members of the biotechnology sector wondering who they will have to pay to use Crispr as part of a business—and scientists hoping the outcome won’t somehow keep them from continuing their research.

….

Happy reading!

Sustainable Nanotechnologies (SUN) project draws to a close in March 2017

Two Oct. 31, 2016 news items on Nanowerk signal the impending sunset date for the European Union’s Sustainable Nanotechnologies (SUN) project. The first Oct. 31, 2016 news item on Nanowerk describes the project’s latest achievements,

The results from the 3rd SUN annual meeting showed great advancement of the project. The meeting was held in Edinburgh, Scotland, UK on 4-5 October 2016 where the project partners presented the results obtained during the second reporting period of the project.

SUN is a three and a half year EU project, running from 2013 to 2017, with a budget of about €14 million. Its main goal is to evaluate the risks along the supply chain of engineered nanomaterials and incorporate the results into tools and guidelines for sustainable manufacturing.

The ultimate goal of the SUN Project is the development of an online software Decision Support System – SUNDS – aimed at estimating and managing occupational, consumer, environmental and public health risks from nanomaterials in real industrial products along their lifecycles. The SUNDS beta prototype was released in October 2015, and since then the main focus has been on refining the methodologies and testing them on selected case studies, i.e. nano-copper oxide based wood preserving paint and nano-sized colourants for plastic car parts: organic pigment and carbon black. Obtained results and open issues were discussed during the third annual meeting in order to collect feedback from the consortium that will inform, in the coming months, the implementation of the final version of the SUNDS software system, due by March 2017.

An Oct. 27, 2016 SUN project press release, which originated the news item, adds more information,

Significant interest has been paid to the results obtained in WP2 (Lifecycle Thinking), whose main objectives are to assess the environmental impacts arising from each life cycle stage of the SUN case studies (i.e. Nano-WC-Cobalt (Tungsten Carbide-cobalt) sintered ceramics, Nanocopper wood preservatives, Carbon Nano Tube (CNT) in plastics, Silicon Dioxide (SiO2) as food additive, Nano-Titanium Dioxide (TiO2) air filter system, Organic pigment in plastics and Nanosilver (Ag) in textiles), and compare them to conventional products with similar uses and functionality, in order to develop and validate criteria and guiding principles for green nano-manufacturing. Specifically, the consortium partner COLOROBBIA CONSULTING S.r.l. expressed its willingness to exploit the results obtained from the life cycle assessment analysis related to nanoTiO2 in their industrial applications.

On 6th October [2016], the discussions about the SUNDS advancement continued during a Stakeholder Workshop, where representatives from the industry, regulatory and insurance sectors shared their feedback on the use of the decision support system. The recommendations collected during the workshop will be used for further refinement and implemented in the final version of the software, which will be released by March 2017.

The second Oct. 31, 2016 news item on Nanowerk led me to this Oct. 27, 2016 SUN project press release about the activities in the upcoming final months,

The project has designed its final events to serve as an effective platform to communicate the main results achieved in its course within the Nanosafety community and bridge them to a wider audience addressing the emerging risks of Key Enabling Technologies (KETs).

The series of events include the New Tools and Approaches for Nanomaterial Safety Assessment: A joint conference organized by NANOSOLUTIONS, SUN, NanoMILE, GUIDEnano and eNanoMapper to be held on 7 – 9 February 2017 in Malaga, Spain, the SUN-CaLIBRAte Stakeholders workshop to be held on 28 February – 1 March 2017 in Venice, Italy and the SRA Policy Forum: Risk Governance for Key Enabling Technologies to be held on 1- 3 March in Venice, Italy.

Jointly organized by the Society for Risk Analysis (SRA) and the SUN Project, the SRA Policy Forum will address current efforts put towards refining the risk governance of emerging technologies through the integration of traditional risk analytic tools alongside considerations of social and economic concerns. The parallel sessions will be organized in 4 tracks:  Risk analysis of engineered nanomaterials along product lifecycle, Risks and benefits of emerging technologies used in medical applications, Challenges of governing SynBio and Biotech, and Methods and tools for risk governance.

The SRA Policy Forum has announced its speakers and preliminary Programme. Confirmed speakers include:

  • Keld Alstrup Jensen (National Research Centre for the Working Environment, Denmark)
  • Elke Anklam (European Commission, Belgium)
  • Adam Arkin (University of California, Berkeley, USA)
  • Phil Demokritou (Harvard University, USA)
  • Gerard Escher (École polytechnique fédérale de Lausanne, Switzerland)
  • Lisa Friedersdorf (National Nanotechnology Initiative, USA)
  • James Lambert (President, Society for Risk Analysis, USA)
  • Andre Nel (The University of California, Los Angeles, USA)
  • Bernd Nowack (EMPA, Switzerland)
  • Ortwin Renn (University of Stuttgart, Germany)
  • Vicki Stone (Heriot-Watt University, UK)
  • Theo Vermeire (National Institute for Public Health and the Environment (RIVM), Netherlands)
  • Tom van Teunenbroek (Ministry of Infrastructure and Environment, The Netherlands)
  • Wendel Wohlleben (BASF, Germany)

The New Tools and Approaches for Nanomaterial Safety Assessment (NMSA) conference aims at presenting the main results achieved in the course of the organizing projects, fostering a discussion about their impact in the nanosafety field and possibilities for future research programmes. The conference welcomes consortium partners, as well as representatives from other EU projects, industry, government, civil society and media. Accordingly, the conference topics include: Hazard assessment along the life cycle of nano-enabled products, Exposure assessment along the life cycle of nano-enabled products, Risk assessment & management, Systems biology approaches in nanosafety, Categorization & grouping of nanomaterials, Nanosafety infrastructure, Safe by design. The NMSA conference keynote speakers include:

  • Harri Alenius (University of Helsinki, Finland)
  • Antonio Marcomini (Ca’ Foscari University of Venice, Italy)
  • Wendel Wohlleben (BASF, Germany)
  • Danail Hristozov (Ca’ Foscari University of Venice, Italy)
  • Eva Valsami-Jones (University of Birmingham, UK)
  • Socorro Vázquez-Campos (LEITAT Technological Center, Spain)
  • Barry Hardy (Douglas Connect GmbH, Switzerland)
  • Egon Willighagen (Maastricht University, Netherlands)
  • Nina Jeliazkova (IDEAconsult Ltd., Bulgaria)
  • Haralambos Sarimveis (The National Technical University of Athens, Greece)

During the SUN-caLIBRAte Stakeholder workshop, the final version of the SUN user-friendly, software-based Decision Support System (SUNDS) for managing the environmental, economic and social impacts of nanotechnologies will be presented and discussed with its end users: industries, regulators and insurance sector representatives. The results from the discussion will be used as a foundation for the development of caLIBRAte’s Risk Governance framework for assessment and management of human and environmental risks of MN and MN-enabled products.

The SRA Policy Forum: Risk Governance for Key Enabling Technologies and the New Tools and Approaches for Nanomaterial Safety Assessment conference are now open for registration. Abstracts for the SRA Policy Forum can be submitted till 15th November 2016.
For further information go to:
www.sra.org/riskgovernanceforum2017
http://www.nmsaconference.eu/

There you have it.

Creating multiferroic material at room temperature

A Sept. 23, 2016 news item on ScienceDaily describes some research from Cornell University (US),

Multiferroics — materials that exhibit both magnetic and electric order — are of interest for next-generation computing but difficult to create because the conditions conducive to each of those states are usually mutually exclusive. And in most multiferroics found to date, their respective properties emerge only at extremely low temperatures.

Two years ago, researchers in the labs of Darrell Schlom, the Herbert Fisk Johnson Professor of Industrial Chemistry in the Department of Materials Science and Engineering, and Dan Ralph, the F.R. Newman Professor in the College of Arts and Sciences, in collaboration with professor Ramamoorthy Ramesh at UC Berkeley, published a paper announcing a breakthrough in multiferroics involving the only known material in which magnetism can be controlled by applying an electric field at room temperature: the multiferroic bismuth ferrite.

Schlom’s group has partnered with David Muller and Craig Fennie, professors of applied and engineering physics, to take that research a step further: The researchers have combined two non-multiferroic materials, using the best attributes of both to create a new room-temperature multiferroic.

Their paper, “Atomically engineered ferroic layers yield a room-temperature magnetoelectric multiferroic,” was published — along with a companion News & Views piece — Sept. 22 [2016] in Nature. …

A Sept. 22, 2016 Cornell University news release by Tom Fleischman, which originated the news item, details more about the work (Note: A link has been removed),

The group engineered thin films of hexagonal lutetium iron oxide (LuFeO3), a material known to be a robust ferroelectric but not strongly magnetic. The LuFeO3 consists of alternating single monolayers of lutetium oxide and iron oxide, and differs from a strong ferrimagnetic oxide (LuFe2O4), which consists of alternating monolayers of lutetium oxide with double monolayers of iron oxide.

The researchers found, however, that they could combine these two materials at the atomic scale to create a new compound that was not only multiferroic but had better properties than either of the individual constituents. In particular, they found they needed to add just one extra monolayer of iron oxide to every 10 atomic repeats of the LuFeO3 to dramatically change the properties of the system.
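The stacking pattern described in the two preceding paragraphs can be sketched in a few lines of code. The snippet below is only a toy illustration of a superlattice in which one double iron-oxide block is inserted after every ten LuFeO3 repeats; the function names and the choice of three superlattice periods are my own assumptions for illustration, not details from the paper.

```python
# Hypothetical sketch of the layer stacking described in the news release:
# LuFeO3 = alternating single monolayers of lutetium oxide and iron oxide;
# LuFe2O4 = a lutetium oxide monolayer followed by a double iron-oxide layer.
# "One extra iron-oxide monolayer per ten repeats" is modelled here as
# appending one double Fe-O block after every tenth LuFeO3 repeat.

def lufeo3_repeat():
    """One LuFeO3 repeat as a pair of monolayers."""
    return ["LuO monolayer", "FeO monolayer"]

def lufe2o4_block():
    """One LuFe2O4-like block: LuO monolayer plus a double FeO layer."""
    return ["LuO monolayer", "FeO monolayer", "FeO monolayer"]

def superlattice(n_repeats=10, n_periods=3):
    """Build a (LuFeO3)_n / (LuFe2O4)_1 stacking, repeated n_periods times."""
    layers = []
    for _ in range(n_periods):
        for _ in range(n_repeats):
            layers.extend(lufeo3_repeat())
        layers.extend(lufe2o4_block())  # the one "extra" iron-oxide monolayer
    return layers

if __name__ == "__main__":
    stack = superlattice()
    print(f"{len(stack)} monolayers in the sketched stack")
    print(stack[:8], "...")
```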

That precision engineering was done via molecular-beam epitaxy (MBE), a specialty of the Schlom lab. A technique Schlom likens to “atomic spray painting,” MBE let the researchers design and assemble the two different materials in layers, a single atom at a time.

The combination of the two materials produced a strongly ferrimagnetic layer near room temperature. They then tested the new material at the Lawrence Berkeley National Laboratory (LBNL) Advanced Light Source in collaboration with co-author Ramesh to show that the ferrimagnetic atoms followed the alignment of their ferroelectric neighbors when switched by an electric field.

“It was when our collaborators at LBNL demonstrated electrical control of magnetism in the material that we made that things got super exciting,” Schlom said. “Room-temperature multiferroics are exceedingly rare and only multiferroics that enable electrical control of magnetism are relevant to applications.”

In electronic devices, the advantages of multiferroics include their reversible polarization in response to low-power electric fields – as opposed to heat-generating and power-sapping electrical currents – and their ability to hold their polarized state without the need for continuous power. High-performance memory chips make use of ferroelectric or ferromagnetic materials.

“Our work shows that an entirely different mechanism is active in this new material,” Schlom said, “giving us hope for even better – higher-temperature and stronger – multiferroics for the future.”

Collaborators hailed from the University of Illinois at Urbana-Champaign, the National Institute of Standards and Technology, the University of Michigan and Penn State University.

Here is a link and a citation to the paper and to a companion piece,

Atomically engineered ferroic layers yield a room-temperature magnetoelectric multiferroic by Julia A. Mundy, Charles M. Brooks, Megan E. Holtz, Jarrett A. Moyer, Hena Das, Alejandro F. Rébola, John T. Heron, James D. Clarkson, Steven M. Disseler, Zhiqi Liu, Alan Farhan, Rainer Held, Robert Hovden, Elliot Padgett, Qingyun Mao, Hanjong Paik, Rajiv Misra, Lena F. Kourkoutis, Elke Arenholz, Andreas Scholl, Julie A. Borchers, William D. Ratcliff, Ramamoorthy Ramesh, Craig J. Fennie, Peter Schiffer et al. Nature 537, 523–527 (22 September 2016) doi:10.1038/nature19343 Published online 21 September 2016

Condensed-matter physics: Multitasking materials from atomic templates by Manfred Fiebig. Nature 537, 499–500  (22 September 2016) doi:10.1038/537499a Published online 21 September 2016

Both the paper and its companion piece are behind a paywall.

Westworld: a US television programme investigating AI (artificial intelligence) and consciousness

The US television network, Home Box Office (HBO) is getting ready to première Westworld, a new series based on a movie first released in 1973. Here’s more about the movie from its Wikipedia entry (Note: Links have been removed),

Westworld is a 1973 science fiction Western thriller film written and directed by novelist Michael Crichton and produced by Paul Lazarus III about amusement park robots that malfunction and begin killing visitors. It stars Yul Brynner as an android in a futuristic Western-themed amusement park, and Richard Benjamin and James Brolin as guests of the park.

Westworld was the first theatrical feature directed by Michael Crichton. It was also the first feature film to use digital image processing, to pixellate photography to simulate an android point of view. The film was nominated for Hugo, Nebula and Saturn awards, and was followed by a sequel film, Futureworld, and a short-lived television series, Beyond Westworld. In August 2013, HBO announced plans for a television series based on the original film.

The latest version is due to start broadcasting in the US on Sunday, Oct. 2, 2016 and as part of the publicity effort the producers are profiled by Sean Captain for Fast Company in a Sept. 30, 2016 article,

As Game of Thrones marches into its final seasons, HBO is debuting this Sunday what it hopes—and is betting millions of dollars on—will be its new blockbuster series: Westworld, a thorough reimagining of Michael Crichton’s 1973 cult classic film about a Western theme park populated by lifelike robot hosts. A philosophical prelude to Jurassic Park, Crichton’s Westworld is a cautionary tale about technology gone very wrong: the classic tale of robots that rise up and kill the humans. HBO’s new series, starring Evan Rachel Wood, Anthony Hopkins, and Ed Harris, is subtler and also darker: The humans are the scary ones.

“We subverted the entire premise of Westworld in that our sympathies are meant to be with the robots, the hosts,” says series co-creator Lisa Joy. She’s sitting on a couch in her Burbank office next to her partner in life and on the show—writer, director, producer, and husband Jonathan Nolan—who goes by Jonah. …

Their Westworld, which runs in the revered Sunday-night 9 p.m. time slot, combines present-day production values and futuristic technological visions—thoroughly revamping Crichton’s story with hybrid mechanical-biological robots [emphasis mine] fumbling along the blurry line between simulated and actual consciousness.

Captain never does explain the “hybrid mechanical-biological robots.” For example, do they have human skin or other organs grown for use in a robot? In other words, how are they hybrid?

That nitpick aside, the article provides some interesting nuggets of information and insight into the themes and ideas 2016 Westworld’s creators are exploring (Note: A link has been removed),

… Based on the four episodes I previewed (which get progressively more interesting), Westworld does a good job with the trope—which focused especially on the awakening of Dolores, an old soul of a robot played by Evan Rachel Wood. Dolores is also the catchall Spanish word for suffering, pain, grief, and other displeasures. “There are no coincidences in Westworld,” says Joy, noting that the name is also a play on Dolly, the first cloned mammal.

The show operates on a deeper, though hard-to-define level, that runs beneath the shoot-em and screw-em frontier adventure and robotic enlightenment narratives. It’s an allegory of how even today’s artificial intelligence is already taking over, by cataloging and monetizing our lives and identities. “Google and Facebook, their business is reading your mind in order to advertise shit to you,” says Jonah Nolan. …

“Exist free of rules, laws or judgment. No impulse is taboo,” reads a spoof home page for the resort that HBO launched a few weeks ago. That’s lived to the fullest by the park’s utterly sadistic loyal guest, played by Ed Harris and known only as the Man in Black.

The article also features some quotes from scientists on the topic of artificial intelligence (Note: Links have been removed),

“In some sense, being human, but less than human, it’s a good thing,” says Jon Gratch, professor of computer science and psychology at the University of Southern California [USC]. Gratch directs research at the university’s Institute for Creative Technologies on “virtual humans,” AI-driven onscreen avatars used in military-funded training programs. One of the projects, SimSensei, features an avatar of a sympathetic female therapist, Ellie. It uses AI and sensors to interpret facial expressions, posture, tension in the voice, and word choices by users in order to direct a conversation with them.
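The article doesn’t describe SimSensei’s internals, so the following is purely a hypothetical Python sketch of the general idea it conveys: several sensor-derived signals (facial expression, posture, voice tension) plus a crude word-choice score are blended to decide how the avatar steers the conversation. Every name, weight, and threshold here is invented for illustration and is not SimSensei’s real code or API.

```python
# Hypothetical sketch of a multimodal "virtual human" decision step, loosely
# modelled on the description of SimSensei in the article. Scores, weights,
# and prompts are invented for illustration.

from dataclasses import dataclass

@dataclass
class Observation:
    facial_distress: float   # 0..1, e.g. from an expression classifier
    posture_tension: float   # 0..1, e.g. from a pose estimator
    voice_tension: float     # 0..1, e.g. from prosody features
    negative_words: int      # count of negative words in the last utterance

def distress_score(obs: Observation) -> float:
    """Blend the modalities into a single rough distress estimate."""
    word_score = min(obs.negative_words / 5.0, 1.0)
    return (0.3 * obs.facial_distress + 0.2 * obs.posture_tension
            + 0.3 * obs.voice_tension + 0.2 * word_score)

def next_prompt(obs: Observation) -> str:
    """Pick how the avatar steers the conversation (toy policy)."""
    score = distress_score(obs)
    if score > 0.7:
        return "That sounds really hard. Can you tell me more about when it started?"
    if score > 0.4:
        return "How has that been affecting your sleep or your day-to-day life?"
    return "Tell me about something that went well for you recently."

if __name__ == "__main__":
    obs = Observation(facial_distress=0.8, posture_tension=0.5,
                      voice_tension=0.7, negative_words=4)
    print(next_prompt(obs))
```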

“One of the things that we’ve found is that people don’t feel like they’re being judged by this character,” says Gratch. In work with a National Guard unit, Ellie elicited more honest responses about their psychological stresses than a web form did, he says. Other data show that people are more honest when they know the avatar is controlled by an AI versus being told that it was controlled remotely by a human mental health clinician.

“If you build it like a human, and it can interact like a human. That solves a lot of the human-computer or human-robot interaction issues,” says professor Paul Rosenbloom, also with USC’s Institute for Creative Technologies. He works on artificial general intelligence, or AGI—the effort to create a human-like or human level of intellect.

Rosenbloom is building an AGI platform called Sigma that models human cognition, including emotions. These could make a more effective robotic tutor, for instance, “There are times you want the person to know you are unhappy with them, times you want them to know that you think they’re doing great,” he says, where “you” is the AI programmer. “And there’s an emotional component as well as the content.”

Achieving full AGI could take a long time, says Rosenbloom, perhaps a century. Bernie Meyerson, IBM’s chief innovation officer, is also circumspect in predicting if or when Watson could evolve into something like HAL or Her. “Boy, we are so far from that reality, or even that possibility, that it becomes ludicrous trying to get hung up there, when we’re trying to get something to reasonably deal with fact-based data,” he says.

Gratch, Rosenbloom, and Meyerson are talking about screen-based entities and concepts of consciousness and emotions. Then, there’s a scientist who’s talking about the difficulties with robots,

… Ken Goldberg, an artist and professor of engineering at UC [University of California] Berkeley, calls the notion of cyborg robots in Westworld “a pretty common trope in science fiction.” (Joy will take up the theme again, as the screenwriter for a new Battlestar Galactica movie.) Goldberg’s lab is struggling just to build and program a robotic hand that can reliably pick things up. But a sympathetic, somewhat believable Dolores in a virtual setting is not so farfetched.

Captain delves further into a thorny issue,

“Can simulations, at some point, become the real thing?” asks Patrick Lin, director of the Ethics + Emerging Sciences Group at California Polytechnic State University. “If we perfectly simulate a rainstorm on a computer, it’s still not a rainstorm. We won’t get wet. But is the mind or consciousness different? The jury is still out.”

While artificial consciousness is still in the dreamy phase, today’s level of AI is serious business. “What was sort of a highfalutin philosophical question a few years ago has become an urgent industrial need,” says Jonah Nolan. It’s not clear yet how the Delos management intends, beyond entrance fees, to monetize Westworld, although you get a hint when Ford tells Theresa Cullen “We know everything about our guests, don’t we? As we know everything about our employees.”

AI has a clear moneymaking model in this world, according to Nolan. “Facebook is monetizing your social graph, and Google is advertising to you.” Both companies (and others) are investing in AI to better understand users and find ways to make money off this knowledge. …

As my colleague David Bruggeman has often noted on his Pasco Phronesis blog, there’s a lot of science on television.

For anyone who’s interested in artificial intelligence and the effects it may have on urban life, see my Sept. 27, 2016 posting featuring the ‘One Hundred Year Study on Artificial Intelligence (AI100)’, hosted by Stanford University.

Points to anyone who recognized Jonah (Jonathan) Nolan as the producer for the US television series, Person of Interest, a programme based on the concept of a supercomputer with intelligence and personality and the ability to continuously monitor the population 24/7.

How might artificial intelligence affect urban life in 2030? A study

Peering into the future is always a chancy business, as anyone who’s seen those film shorts from the 1950s and ’60s that speculate exuberantly about what the future will bring can attest.

A sober approach (appropriate to our times) has been taken in a study about the impact that artificial intelligence might have by 2030. From a Sept. 1, 2016 Stanford University news release (also on EurekAlert) by Tom Abate (Note: Links have been removed),

A panel of academic and industrial thinkers has looked ahead to 2030 to forecast how advances in artificial intelligence (AI) might affect life in a typical North American city – in areas as diverse as transportation, health care and education – and to spur discussion about how to ensure the safe, fair and beneficial development of these rapidly emerging technologies.

Titled “Artificial Intelligence and Life in 2030,” this year-long investigation is the first product of the One Hundred Year Study on Artificial Intelligence (AI100), an ongoing project hosted by Stanford to inform societal deliberation and provide guidance on the ethical development of smart software, sensors and machines.

“We believe specialized AI applications will become both increasingly common and more useful by 2030, improving our economy and quality of life,” said Peter Stone, a computer scientist at the University of Texas at Austin and chair of the 17-member panel of international experts. “But this technology will also create profound challenges, affecting jobs and incomes and other issues that we should begin addressing now to ensure that the benefits of AI are broadly shared.”

The new report traces its roots to a 2009 study that brought AI scientists together in a process of introspection that became ongoing in 2014, when Eric and Mary Horvitz created the AI100 endowment through Stanford. AI100 formed a standing committee of scientists and charged this body with commissioning periodic reports on different aspects of AI over the ensuing century.

“This process will be a marathon, not a sprint, but today we’ve made a good start,” said Russ Altman, a professor of bioengineering and the Stanford faculty director of AI100. “Stanford is excited to host this process of introspection. This work makes practical contribution to the public debate on the roles and implications of artificial intelligence.”

The AI100 standing committee first met in 2015, led by chairwoman and Harvard computer scientist Barbara Grosz. It sought to convene a panel of scientists with diverse professional and personal backgrounds and enlist their expertise to assess the technological, economic and policy implications of potential AI applications in a societally relevant setting.

“AI technologies can be reliable and broadly beneficial,” Grosz said. “Being transparent about their design and deployment challenges will build trust and avert unjustified fear and suspicion.”

The report investigates eight domains of human activity in which AI technologies are beginning to affect urban life in ways that will become increasingly pervasive and profound by 2030.

The 28,000-word report includes a glossary to help nontechnical readers understand how AI applications such as computer vision might help screen tissue samples for cancers or how natural language processing will allow computerized systems to grasp not simply the literal definitions, but the connotations and intent, behind words.

The report is broken into eight sections focusing on applications of AI. Five examine application arenas such as transportation where there is already buzz about self-driving cars. Three other sections treat technological impacts, like the section on employment and workplace trends which touches on the likelihood of rapid changes in jobs and incomes.

“It is not too soon for social debate on how the fruits of an AI-dominated economy should be shared,” the researchers write in the report, noting also the need for public discourse.

“Currently in the United States, at least sixteen separate agencies govern sectors of the economy related to AI technologies,” the researchers write, highlighting issues raised by AI applications: “Who is responsible when a self-driven car crashes or an intelligent medical device fails? How can AI applications be prevented from [being used for] racial discrimination or financial cheating?”

The eight sections discuss:

Transportation: Autonomous cars, trucks and, possibly, aerial delivery vehicles may alter how we commute, work and shop and create new patterns of life and leisure in cities.

Home/service robots: Like the robotic vacuum cleaners already in some homes, specialized robots will clean and provide security in live/work spaces that will be equipped with sensors and remote controls.

Health care: Devices to monitor personal health and robot-assisted surgery are hints of things to come if AI is developed in ways that gain the trust of doctors, nurses, patients and regulators.

Education: Interactive tutoring systems already help students learn languages, math and other skills. More is possible if technologies like natural language processing platforms develop to augment instruction by humans.

Entertainment: The conjunction of content creation tools, social networks and AI will lead to new ways to gather, organize and deliver media in engaging, personalized and interactive ways.

Low-resource communities: Investments in uplifting technologies like predictive models to prevent lead poisoning or improve food distributions could spread AI benefits to the underserved.

Public safety and security: Cameras, drones and software to analyze crime patterns should use AI in ways that reduce human bias and enhance safety without loss of liberty or dignity.

Employment and workplace: Work should start now on how to help people adapt as the economy undergoes rapid changes as many existing jobs are lost and new ones are created.

“Until now, most of what is known about AI comes from science fiction books and movies,” Stone said. “This study provides a realistic foundation to discuss how AI technologies are likely to affect society.”

Grosz said she hopes the AI 100 report “initiates a century-long conversation about ways AI-enhanced technologies might be shaped to improve life and societies.”

You can find the AI100 website here and the group’s first paper, “Artificial Intelligence and Life in 2030,” here. Unfortunately, I don’t have time to read the report now, but I hope to do so soon.

The AI100 website’s About page offered a surprise,

This effort, called the One Hundred Year Study on Artificial Intelligence, or AI100, is the brainchild of computer scientist and Stanford alumnus Eric Horvitz who, among other credits, is a former president of the Association for the Advancement of Artificial Intelligence.

In that capacity Horvitz convened a conference in 2009 at which top researchers considered advances in artificial intelligence and its influences on people and society, a discussion that illuminated the need for continuing study of AI’s long-term implications.

Now, together with Russ Altman, a professor of bioengineering and computer science at Stanford, Horvitz has formed a committee that will select a panel to begin a series of periodic studies on how AI will affect automation, national security, psychology, ethics, law, privacy, democracy and other issues.

“Artificial intelligence is one of the most profound undertakings in science, and one that will affect every aspect of human life,” said Stanford President John Hennessy, who helped initiate the project. “Given Stanford’s pioneering role in AI and our interdisciplinary mindset, we feel obliged and qualified to host a conversation about how artificial intelligence will affect our children and our children’s children.”

Five leading academicians with diverse interests will join Horvitz and Altman in launching this effort. They are:

  • Barbara Grosz, the Higgins Professor of Natural Sciences at Harvard University and an expert on multi-agent collaborative systems;
  • Deirdre K. Mulligan, a lawyer and a professor in the School of Information at the University of California, Berkeley, who collaborates with technologists to advance privacy and other democratic values through technical design and policy;
  • Yoav Shoham, a professor of computer science at Stanford, who seeks to incorporate common sense into AI;
  • Tom Mitchell, the E. Fredkin University Professor and chair of the machine learning department at Carnegie Mellon University, whose studies include how computers might learn to read the Web;
  • and Alan Mackworth, a professor of computer science at the University of British Columbia [emphases mine] and the Canada Research Chair in Artificial Intelligence, who built the world’s first soccer-playing robot.

I wasn’t expecting to see a Canadian listed as a member of the AI100 standing committee and then I got another surprise (from the AI100 People webpage),

Study Panels

Study Panels are planned to convene every 5 years to examine some aspect of AI and its influences on society and the world. The first study panel was convened in late 2015 to study the likely impacts of AI on urban life by the year 2030, with a focus on typical North American cities.

2015 Study Panel Members

  • Peter Stone, UT Austin, Chair
  • Rodney Brooks, Rethink Robotics
  • Erik Brynjolfsson, MIT
  • Ryan Calo, University of Washington
  • Oren Etzioni, Allen Institute for AI
  • Greg Hager, Johns Hopkins University
  • Julia Hirschberg, Columbia University
  • Shivaram Kalyanakrishnan, IIT Bombay
  • Ece Kamar, Microsoft
  • Sarit Kraus, Bar Ilan University
  • Kevin Leyton-Brown, [emphasis mine] UBC [University of British Columbia]
  • David Parkes, Harvard
  • Bill Press, UT Austin
  • AnnaLee (Anno) Saxenian, Berkeley
  • Julie Shah, MIT
  • Milind Tambe, USC
  • Astro Teller, Google[X]

I see they have representation from Israel, India, and the private sector as well. Refreshingly, there’s more than one woman on the standing committee and in this first study group. It’s good to see these efforts at inclusiveness and I’m particularly delighted with the inclusion of an organization from Asia. All too often inclusiveness means Europe, especially the UK. So, it’s good (and I think important) to see a different range of representation.

As for the content of report, should anyone have opinions about it, please do let me know your thoughts in the blog comments.

‘Neural dust’ could lead to introduction of electroceuticals

In case anyone is wondering, the woman who’s manipulating a prosthetic arm so she can eat or have a drink of coffee probably has a bulky implant/docking station in her head. Right now, that bulky implant is the latest and greatest innovation for tetraplegics (aka quadriplegics), as it frees, to some extent, people who’ve had no independent movement of any kind. By juxtaposing the footage of the woman with the ‘neural dust’ footage, the video’s producers seem to be suggesting that neural dust might someday accomplish the same type of connection. At this point, hopes for ‘neural dust’ are more modest.

An Aug. 3, 2016 news item on ScienceDaily announces the ‘neural dust’,

University of California, Berkeley engineers have built the first dust-sized, wireless sensors that can be implanted in the body, bringing closer the day when a Fitbit-like device could monitor internal nerves, muscles or organs in real time.

Because these batteryless sensors could also be used to stimulate nerves and muscles, the technology also opens the door to “electroceuticals” to treat disorders such as epilepsy or to stimulate the immune system or tamp down inflammation.

An Aug. 3, 2016 University of California at Berkeley news release (also on EurekAlert) by Robert Sanders, which originated the news item, explains further and describes the researchers’ hope that one day the neural dust could be used to control implants and prosthetics,

The so-called neural dust, which the team implanted in the muscles and peripheral nerves of rats, is unique in that ultrasound is used both to power and read out the measurements. Ultrasound technology is already well-developed for hospital use, and ultrasound vibrations can penetrate nearly anywhere in the body, unlike radio waves, the researchers say.

“I think the long-term prospects for neural dust are not only within nerves and the brain, but much broader,” said Michel Maharbiz, an associate professor of electrical engineering and computer sciences and one of the study’s two main authors. “Having access to in-body telemetry has never been possible because there has been no way to put something supertiny superdeep. But now I can take a speck of nothing and park it next to a nerve or organ, your GI tract or a muscle, and read out the data.”

Maharbiz, neuroscientist Jose Carmena, a professor of electrical engineering and computer sciences and a member of the Helen Wills Neuroscience Institute, and their colleagues will report their findings in the August 3 [2016] issue of the journal Neuron.

The sensors, which the researchers have already shrunk to a 1 millimeter cube – about the size of a large grain of sand – contain a piezoelectric crystal that converts ultrasound vibrations from outside the body into electricity to power a tiny, on-board transistor that is in contact with a nerve or muscle fiber. A voltage spike in the fiber alters the circuit and the vibration of the crystal, which changes the echo detected by the ultrasound receiver, typically the same device that generates the vibrations. The slight change, called backscatter, allows them to determine the voltage.
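One way to picture the backscatter readout described above is as a tiny reflector whose echo strength shifts slightly with the voltage on the adjacent fiber, so that the external transceiver can recover the voltage from the change in echo amplitude. The toy model below uses made-up reflectivity and sensitivity numbers and is not the team’s actual signal chain, just a sketch of the principle.

```python
import numpy as np

# Toy model of ultrasonic backscatter readout (illustrative numbers only):
# the mote's reflectivity is assumed to vary linearly with the nerve voltage,
# so the external transceiver can invert echo amplitude back to voltage.

BASE_REFLECTIVITY = 0.10   # fraction of pulse energy echoed at 0 mV (assumed)
SENSITIVITY = 0.0005       # reflectivity change per millivolt (assumed)

def echo_amplitude(nerve_voltage_mv, pulse_amplitude=1.0, noise_std=1e-4):
    """Echo seen by the transceiver for a given nerve voltage (toy model)."""
    reflectivity = BASE_REFLECTIVITY + SENSITIVITY * nerve_voltage_mv
    return pulse_amplitude * reflectivity + np.random.normal(0.0, noise_std)

def decode_voltage(echo, pulse_amplitude=1.0):
    """Invert the toy model to recover the voltage from the backscatter."""
    return (echo / pulse_amplitude - BASE_REFLECTIVITY) / SENSITIVITY

if __name__ == "__main__":
    true_spike_mv = 80.0  # a made-up action-potential peak
    echo = echo_amplitude(true_spike_mv)
    print(f"decoded ≈ {decode_voltage(echo):.1f} mV (true {true_spike_mv} mV)")
```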

Motes sprinkled throughout the body

In their experiment, the UC Berkeley team powered up the passive sensors every 100 microseconds with six 540-nanosecond ultrasound pulses, which gave them a continual, real-time readout. They coated the first-generation motes – 3 millimeters long, 1 millimeter high and 4/5 millimeter thick – with surgical-grade epoxy, but they are currently building motes from biocompatible thin films which would potentially last in the body without degradation for a decade or more.
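Working through the pulse-timing figures quoted above gives a sense of the readout rate and how little of the time the ultrasound is actually on; the short calculation below simply turns the release’s numbers (a readout every 100 microseconds, six 540-nanosecond pulses) into a sampling rate and duty cycle.

```python
# Quick arithmetic from the figures in the news release: interrogating the
# mote every 100 microseconds with six 540-nanosecond ultrasound pulses.

interrogation_period_s = 100e-6   # one readout every 100 microseconds
pulses_per_readout = 6
pulse_length_s = 540e-9           # each pulse lasts 540 nanoseconds

readout_rate_hz = 1.0 / interrogation_period_s
duty_cycle = pulses_per_readout * pulse_length_s / interrogation_period_s

print(f"readout rate ≈ {readout_rate_hz:.0f} Hz")           # ~10,000 readouts per second
print(f"ultrasound duty cycle ≈ {duty_cycle * 100:.2f} %")  # ~3.24 % of the time
```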

While the experiments so far have involved the peripheral nervous system and muscles, the neural dust motes could work equally well in the central nervous system and brain to control prosthetics, the researchers say. Today’s implantable electrodes degrade within 1 to 2 years, and all connect to wires that pass through holes in the skull. Wireless sensors – dozens to a hundred – could be sealed in, avoiding infection and unwanted movement of the electrodes.

“The original goal of the neural dust project was to imagine the next generation of brain-machine interfaces, and to make it a viable clinical technology,” said neuroscience graduate student Ryan Neely. “If a paraplegic wants to control a computer or a robotic arm, you would just implant this electrode in the brain and it would last essentially a lifetime.”

In a paper published online in 2013, the researchers estimated that they could shrink the sensors down to a cube 50 microns on a side – about 2 thousandths of an inch, or half the width of a human hair. At that size, the motes could nestle up to just a few nerve axons and continually record their electrical activity.

“The beauty is that now, the sensors are small enough to have a good application in the peripheral nervous system, for bladder control or appetite suppression, for example,” Carmena said. “The technology is not really there yet to get to the 50-micron target size, which we would need for the brain and central nervous system. Once it’s clinically proven, however, neural dust will just replace wire electrodes. This time, once you close up the brain, you’re done.”

The team is working now to miniaturize the device further, find more biocompatible materials and improve the surface transceiver that sends and receives the ultrasounds, ideally using beam-steering technology to focus the sound waves on individual motes. They are now building little backpacks for rats to hold the ultrasound transceiver that will record data from implanted motes.

They’re also working to expand the motes’ ability to detect non-electrical signals, such as oxygen or hormone levels.

“The vision is to implant these neural dust motes anywhere in the body, and have a patch over the implanted site send ultrasonic waves to wake up and receive necessary information from the motes for the desired therapy you want,” said Dongjin Seo, a graduate student in electrical engineering and computer sciences. “Eventually you would use multiple implants and one patch that would ping each implant individually, or all simultaneously.”

Ultrasound vs radio

Maharbiz and Carmena conceived of the idea of neural dust about five years ago, but attempts to power an implantable device and read out the data using radio waves were disappointing. Radio attenuates very quickly with distance in tissue, so communicating with devices deep in the body would be difficult without using potentially damaging high-intensity radiation.

Maharbiz hit on the idea of ultrasound, and in 2013 published a paper with Carmena, Seo and their colleagues describing how such a system might work. “Our first study demonstrated that the fundamental physics of ultrasound allowed for very, very small implants that could record and communicate neural data,” said Maharbiz. He and his students have now created that system.

“Ultrasound is much more efficient when you are targeting devices that are on the millimeter scale or smaller and that are embedded deep in the body,” Seo said. “You can get a lot of power into it and a lot more efficient transfer of energy and communication when using ultrasound as opposed to electromagnetic waves, which has been the go-to method for wirelessly transmitting power to miniature implants.”
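A rough wavelength comparison helps explain Seo’s point: at the same frequency, ultrasound in tissue has a wavelength comparable to a millimetre-scale mote, while an electromagnetic wave is metres long, so a tiny device couples to ultrasound far more efficiently. The numbers below (speed of sound in soft tissue of about 1540 m/s, a tissue relative permittivity of roughly 50, a 10 MHz carrier) are textbook ballpark assumptions of mine, not figures from the paper.

```python
import math

# Rough wavelength comparison in soft tissue at the same frequency.
# Assumed ballpark values, not figures from the Neuron paper:
SPEED_OF_SOUND_TISSUE = 1540.0   # m/s, typical soft-tissue value
SPEED_OF_LIGHT = 3.0e8           # m/s in vacuum
RELATIVE_PERMITTIVITY = 50.0     # rough value for tissue at MHz-GHz frequencies

def ultrasound_wavelength(freq_hz):
    return SPEED_OF_SOUND_TISSUE / freq_hz

def em_wavelength(freq_hz):
    return SPEED_OF_LIGHT / (math.sqrt(RELATIVE_PERMITTIVITY) * freq_hz)

if __name__ == "__main__":
    f = 10e6  # a 10 MHz carrier, chosen only for illustration
    print(f"ultrasound wavelength ≈ {ultrasound_wavelength(f) * 1e6:.0f} µm")  # ~154 µm
    print(f"EM wavelength ≈ {em_wavelength(f):.2f} m")                          # ~4.2 m
```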

“Now that you have a reliable, minimally invasive neural pickup in your body, the technology could become the driver for a whole gamut of applications, things that today don’t even exist,” Carmena said.

Here’s a link to and a citation for the team’s latest paper,

Wireless Recording in the Peripheral Nervous System with Ultrasonic Neural Dust by Dongjin Seo, Ryan M. Neely, Konlin Shen, Utkarsh Singhal, Elad Alon, Jan M. Rabaey, Jose M. Carmena, and Michel M. Maharbiz. Neuron Volume 91, Issue 3, p529–539, 3 August 2016 DOI: http://dx.doi.org/10.1016/j.neuron.2016.06.034

This paper appears to be open access.