Tag Archives: Georgia Institute of Technology

A question of consciousness: Facebotlish (a new language); a July 5, 2017 rap guide performance in Vancouver, Canada; Tom Stoppard’s play; and a little more

This would usually be a simple event announcement, but with the advent of a new, related (in my mind if no one else’s) development on Facebook, this has become a roundup of sorts.

Facebotlish (Facebook’s chatbots create their own language)

The language created by Facebook’s chatbots, Facebotlish, was an unintended consequence—that’s right, Facebook’s developers did not design a language for the chatbots or anticipate its independent development, apparently. Adrienne LaFrance’s June 20, 2017 article for theatlantic.com explores the development and the question further,

Something unexpected happened recently at the Facebook Artificial Intelligence Research lab. Researchers who had been training bots to negotiate with one another realized that the bots, left to their own devices, started communicating in a non-human language.

In order to actually follow what the bots were saying, the researchers had to tweak their model, limiting the machines to a conversation humans could understand. (They want bots to stick to human languages because eventually they want those bots to be able to converse with human Facebook users.) …

Here’s what the language looks like (from LaFrance article),

Here’s an example of one of the bot negotiations that Facebook observed:

Bob: i can i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i i can i i i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i . . . . . . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i i i i i everything else . . . . . . . . . . . . . .
Alice: balls have 0 to me to me to me to me to me to me to me to me to
Bob: you i i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to

It is incomprehensible to humans even after being tweaked; even so, some successful negotiations can ensue.
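To see why a language nobody designed can simply drift into existence, here is a minimal toy sketch in Python. It is my illustration, not Facebook’s actual system, and every vocabulary item, weight and number in it is an invented assumption. The point it demonstrates: if training rewards only negotiation success, nothing in the objective favours human-readable English, so whatever token habits happen to win get reinforced, which is why the researchers had to constrain the bots to human language.

```python
import random

# Toy vocabulary echoing the transcript above; real bots draw on full English.
VOCAB = ["i", "you", "balls", "have", "zero", "to", "me", "."]

def utter(weights):
    """Sample a six-token utterance from a per-token weight table."""
    return random.choices(VOCAB, weights=weights, k=6)

def negotiate(weights_a, weights_b):
    """One round: both agents 'speak'; the deal is scored with a crude
    statistic standing in for whatever real models extract from messages."""
    a, b = utter(weights_a), utter(weights_b)
    return a.count("me") - b.count("me"), a

weights = [1.0] * len(VOCAB)  # agent A starts with no token preferences
for _ in range(2000):
    score, said = negotiate(weights, [1.0] * len(VOCAB))
    if score > 0:               # a "successful" round for agent A...
        for tok in said:        # ...reinforces whatever was said, readable or not
            weights[VOCAB.index(tok)] += 0.01

print(utter(weights))  # tends toward degenerate repetition, e.g. lots of "me"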

Facebook’s researchers aren’t the only ones to come across the phenomenon (from LaFrance’s article; Note: Links have been removed),

Other AI researchers, too, say they’ve observed machines that can develop their own languages, including languages with a coherent structure, and defined vocabulary and syntax—though not always actually meaningful, by human standards.

In one preprint paper added earlier this year [2017] to the research repository arXiv, a pair of computer scientists from the non-profit AI research firm OpenAI wrote about how bots learned to communicate in an abstract language—and how those bots turned to non-verbal communication, the equivalent of human gesturing or pointing, when language communication was unavailable. (Bots don’t need to have corporeal form to engage in non-verbal communication; they just engage with what’s called a visual sensory modality.) Another recent preprint paper, from researchers at the Georgia Institute of Technology, Carnegie Mellon, and Virginia Tech, describes an experiment in which two bots invent their own communication protocol by discussing and assigning values to colors and shapes—in other words, the researchers write, they witnessed the “automatic emergence of grounded language and communication … no human supervision!”

The implications of this kind of work are dizzying. Not only are researchers beginning to see how bots could communicate with one another, they may be scratching the surface of how syntax and compositional structure emerged among humans in the first place.

LaFrance’s article is well worth reading in its entirety, especially since the speculation is focused on whether or not the chatbots’ creation is in fact language. There is no mention of consciousness and perhaps this is just a crazy idea but is it possible that these chatbots have consciousness? The question is particularly intriguing in light of some of philosopher David Chalmers’ work (see his 2014 TED talk in Vancouver, Canada: https://www.ted.com/talks/david_chalmers_how_do_you_explain_consciousness/transcript?language=en; it runs roughly 18 mins. and a text transcript is also featured). There’s a condensed version of Chalmers’ TED talk offered in a roughly 9 minute NPR (US National Public Radio) interview by Guy Raz. Here are some highlights from the text transcript,

So we’ve been hearing from brain scientists who are asking how a bunch of neurons and synaptic connections in the brain add up to us, to who we are. But it’s consciousness, the subjective experience of the mind, that allows us to ask the question in the first place. And where consciousness comes from – that is an entirely separate question.

DAVID CHALMERS: Well, I like to distinguish between the easy problems of consciousness and the hard problem.

RAZ: This is David Chalmers. He’s a philosopher who coined this term, the hard problem of consciousness.

CHALMERS: Well, the easy problems are ultimately a matter of explaining behavior – things we do. And I think brain science is great at problems like that. It can isolate a neural circuit and show how it enables you to see a red object, to respond and say, that’s red. But the hard problem of consciousness is subjective experience. Why, when all that happens in this circuit, does it feel like something? How does a bunch of – 86 billion neurons interacting inside the brain, coming together – how does that produce the subjective experience of a mind and of the world?

RAZ: Here’s how David Chalmers begins his TED Talk.

(SOUNDBITE OF TED TALK)

CHALMERS: Right now, you have a movie playing inside your head. It has 3-D vision and surround sound for what you’re seeing and hearing right now. Your movie has smell and taste and touch. It has a sense of your body, pain, hunger, orgasms. It has emotions, anger and happiness. It has memories, like scenes from your childhood, playing before you. This movie is your stream of consciousness. If we weren’t conscious, nothing in our lives would have meaning or value. But at the same time, it’s the most mysterious phenomenon in the universe. Why are we conscious?

RAZ: Why is consciousness more than just the sum of the brain’s parts?

CHALMERS: Well, the question is, you know, what is the brain? It’s this giant complex computer, a bunch of interacting parts with great complexity. What does all that explain? That explains objective mechanism. Consciousness is subjective by its nature. It’s a matter of subjective experience. And it seems that we can imagine all of that stuff going on in the brain without consciousness. And the question is, where is the consciousness from there? It’s like, if someone could do that, they’d get a Nobel Prize, you know?

RAZ: Right.

CHALMERS: So here’s the mapping from this circuit to this state of consciousness. But underneath that is always going to be the question, why and how does the brain give you consciousness in the first place?

(SOUNDBITE OF TED TALK)

CHALMERS: Right now, nobody knows the answers to those questions. So we may need one or two ideas that initially seem crazy before we can come to grips with consciousness, scientifically. The first crazy idea is that consciousness is fundamental. Physicists sometimes take some aspects of the universe as fundamental building blocks – space and time and mass – and you build up the world from there. Well, I think that’s the situation we’re in. If you can’t explain consciousness in terms of the existing fundamentals – space, time – the natural thing to do is to postulate consciousness itself as something fundamental – a fundamental building block of nature. The second crazy idea is that consciousness might be universal. This view is sometimes called panpsychism – pan, for all – psych, for mind. Every system is conscious. Not just humans, dogs, mice, flies, but even microbes. Even a photon has some degree of consciousness. The idea is not that photons are intelligent or thinking. You know, it’s not that a photon is wracked with angst because it’s thinking, oh, I’m always buzzing around near the speed of light. I never get to slow down and smell the roses. No, not like that. But the thought is, maybe photons might have some element of raw subjective feeling, some primitive precursor to consciousness.

RAZ: So this is a pretty big idea – right? – like, that not just flies, but microbes or photons all have consciousness. And I mean we, like, as humans, we want to believe that our consciousness is what makes us special, right – like, different from anything else.

CHALMERS: Well, I would say yes and no. I’d say the fact of consciousness does not make us special. But maybe we’ve a special type of consciousness ’cause you know, consciousness is not on and off. It comes in all these rich and amazing varieties. There’s vision. There’s hearing. There’s thinking. There’s emotion and so on. So our consciousness is far richer, I think, than the consciousness, say, of a mouse or a fly. But if you want to look for what makes us distinct, don’t look for just our being conscious, look for the kind of consciousness we have. …

Intriguing, non?

Vancouver premiere of Baba Brinkman’s Rap Guide to Consciousness

Baba Brinkman, former Vancouverite and current denizen of New York City, is back in town offering a new performance at the Rio Theatre (1680 E. Broadway, near Commercial Drive). From a July 5, 2017 Rio Theatre event page and ticket portal,

Baba Brinkman’s Rap Guide to Consciousness

Wednesday, July 5 [2017] at 6:30pm PDT

Baba Brinkman’s new hip-hop theatre show “Rap Guide to Consciousness” is all about the neuroscience of consciousness. See it in Vancouver at the Rio Theatre before it goes to the Edinburgh Fringe Festival in August [2017].

This event also features a performance of “Off the Top” with Dr. Heather Berlin (cognitive neuroscientist, TV host, and Baba’s wife), which is also going to Edinburgh.

Wednesday, July 5
Doors 6:00 pm | Show 6:30 pm

Advance tickets $12 | $15 at the door

*All ages welcome!
*Sorry, Groupons and passes not accepted for this event.

“Utterly unique… both brilliantly entertaining and hugely informative” ★ ★ ★ ★ ★ – Broadway Baby

“An educational, inspiring, and wonderfully entertaining show from beginning to end” ★ ★ ★ ★ ★ – Mumble Comedy

There’s quite the poster for this rap guide performance.

In addition to the Vancouver and Edinburgh performances (the show premiered at the Brighton Fringe Festival in May 2017; see Simon Topping’s very brief review in this May 10, 2017 posting on thereviewshub.com), Brinkman is raising money (the goal is US$12,000; he has raised a little over $3,000 with approximately one month before the deadline) to produce a CD. Here’s more from the Rap Guide to Consciousness campaign page on Indiegogo,

Brinkman has been working with two neuroscientists: Dr. Anil Seth (professor and co-director of the Sackler Centre for Consciousness Science) and Dr. Heather Berlin (Brinkman’s wife, as noted earlier; see her Wikipedia entry or her website).

There’s a bit more information about the rap project and Anil Seth in a May 3, 2017 news item by James Hakner for the University of Sussex,

The research frontiers of consciousness science find an unusual outlet in an exciting new Rap Guide to Consciousness, premiering at this year’s Brighton Fringe Festival.

Professor Anil Seth, Co-Director of the Sackler Centre for Consciousness Science at the University of Sussex, has teamed up with New York-based ‘peer-reviewed rapper’ Baba Brinkman, to explore the latest findings from the neuroscience and cognitive psychology of subjective experience.

What is it like to be a baby? We might have to take LSD to find out. What is it like to be an octopus? Imagine most of your brain was actually built into your fingertips. What is it like to be a rapper kicking some of the world’s most complex lyrics for amused fringe audiences? Surreal.

In this new production, Baba brings his signature mix of rap comedy storytelling to the how and why behind your thoughts and perceptions. Mixing cutting-edge research with lyrical performance and projected visuals, Baba takes you through the twists and turns of the only organ it’s better to donate than receive: the human brain. Discover how the various subsystems of your brain come together to create your own rich experience of the world, including the sights and sounds of a scientifically peer-reviewed rapper dropping knowledge.

The result is a truly mind-blowing multimedia hip-hop theatre performance – the perfect meta-medium through which to communicate the dazzling science of consciousness.

Baba comments: “This topic is endlessly fascinating because it underlies everything we do pretty much all the time, which is probably why it remains one of the toughest ideas to get your head around. The first challenge with this show is just to get people to accept the (scientifically uncontroversial) idea that their brains and minds are actually the same thing viewed from different angles. But that’s just the starting point, after that the details get truly amazing.”

Baba Brinkman is a Canadian rap artist and award-winning playwright, best known for his “Rap Guide” series of plays and albums. Baba has toured the world and enjoyed successful runs at the Edinburgh Fringe Festival and off-Broadway in New York. The Rap Guide to Religion was nominated for a 2015 Drama Desk Award for “Unique Theatrical Experience” and The Rap Guide to Evolution (“Astonishing and brilliant” NY Times), won a Scotsman Fringe First Award and a Drama Desk Award nomination for “Outstanding Solo Performance”. The Rap Guide to Climate Chaos premiered in Edinburgh in 2015, followed by a six-month off-Broadway run in 2016.

Baba is also a pioneer in the genre of “lit-hop” or literary hip-hop, known for his adaptations of The Canterbury Tales, Beowulf, and Gilgamesh. He is a recent recipient of the National Center for Science Education’s “Friend of Darwin Award” for his efforts to improve the public understanding of evolutionary biology.

Anil Seth is an internationally renowned researcher into the biological basis of consciousness, with more than 100 (peer-reviewed!) academic journal papers on the subject. Alongside science he is equally committed to innovative public communication. A Wellcome Trust Engagement Fellow (from 2016) and the 2017 British Science Association President (Psychology), Professor Seth has co-conceived and consulted on many science-art projects including drama (Donmar Warehouse), dance (Siobhan Davies dance company), and the visual arts (with artist Lindsay Seers). He has also given popular public talks on consciousness at the Royal Institution (Friday Discourse) and at the main TED conference in Vancouver. He is a regular presence in print and on the radio and is the recipient of awards including the BBC Audio Award for Best Single Drama (for ‘The Sky is Wider’) and the Royal Society Young People’s Book Prize (for EyeBenders). This is his first venture into rap.

Professor Seth said: “There is nothing more familiar, and at the same time more mysterious than consciousness, but research is finally starting to shed light on this most central aspect of human existence. Modern neuroscience can be incredibly arcane and complex, posing challenges to us as public communicators.

“It’s been a real pleasure and privilege to work with Baba on this project over the last year. I never thought I’d get involved with a rap artist – but hearing Baba perform his ‘peer reviewed’ breakdowns of other scientific topics I realized here was an opportunity not to be missed.”

Interestingly, Seth has another Canadian connection; he’s a Senior Fellow of the Azrieli Program in Brain, Mind & Consciousness at the Canadian Institute for Advanced Research (CIFAR; Wikipedia entry). By the way, the institute was promised $93.7M in the 2017 Canadian federal government budget for the establishment of a Pan-Canadian Artificial Intelligence Strategy (see my March 24, 2017 posting; scroll down about 25% of the way and look for the highlighted dollar amount). You can find out more about the Azrieli programme here and about CIFAR on its website.

The Hard Problem (a Tom Stoppard play)

Brinkman isn’t the only performance-based artist querying the concept of consciousness; Tom Stoppard has written a play about it titled ‘The Hard Problem’, which debuted at the National Theatre (UK) in January 2015 (see BBC [British Broadcasting Corporation] news online’s Jan. 29, 2015 roundup of reviews). A May 25, 2017 commentary by Andrew Brown for the Guardian offers some insight into the play and the issues (Note: Links have been removed),

There is a lovely exchange in Tom Stoppard’s play about consciousness, The Hard Problem, when an atheist has been sneering at his girlfriend for praying. It is, he says, an utterly meaningless activity. Right, she says, then do one thing for me: pray! I can’t do that, he replies. It would betray all I believe in.

So prayer can have meanings, and enormously important ones, even for people who are certain that it doesn’t have the meaning it is meant to have. In that sense, your really convinced atheist is much more religious than someone who goes along with all the prayers just because that’s what everyone does, without for a moment supposing the action means anything more than asking about the weather.

The Hard Problem of the play’s title is a phrase coined by the Australian philosopher David Chalmers to describe the way in which consciousness arises from a physical world. What makes it hard is that we don’t understand it. What makes it a problem is slightly different. It isn’t the fact of consciousness, but our representations of consciousness, that give rise to most of the difficulties. We don’t know how to fit the first-person perspective into the third-person world that science describes and explores. But this isn’t because they don’t fit: it’s because we don’t understand how they fit. For some people, this becomes a question of consuming interest.

There are also a couple of videos of Tom Stoppard, the playwright, discussing his play with various interested parties, the first being Nicholas Hytner, the director at the National Theatre who tackled the debut run: https://www.youtube.com/watch?v=s7J8rWu6HJg (it runs approximately 40 mins.). Then, there’s the chat Stoppard has with the previously mentioned philosopher, David Chalmers: https://www.youtube.com/watch?v=4BPY2c_CiwA (this runs approximately 1 hr. 32 mins.).

I gather ‘consciousness’ is a hot topic these days and, in the vernacular of the 1960s, I guess you could describe all of this as ‘expanding our consciousness’. Have a nice weekend!

4D printing, what is that?

According to an April 12, 2017 news item on ScienceDaily, shapeshifting in response to environmental stimuli is the fourth dimension (I link to a posting about 4D printing with another fourth dimension at the end of this piece),

A team of researchers from Georgia Institute of Technology and two other institutions has developed a new 3-D printing method to create objects that can permanently transform into a range of different shapes in response to heat.

The team, which included researchers from the Singapore University of Technology and Design (SUTD) and Xi’an Jiaotong University in China, created the objects by printing layers of shape memory polymers with each layer designed to respond differently when exposed to heat.

“This new approach significantly simplifies and increases the potential of 4-D printing by incorporating the mechanical programming post-processing step directly into the 3-D printing process,” said Jerry Qi, a professor in the George W. Woodruff School of Mechanical Engineering at Georgia Tech. “This allows high-resolution 3-D printed components to be designed by computer simulation, 3-D printed, and then directly and rapidly transformed into new permanent configurations by simply heating.”

The research was reported April 12 [2017] in the journal Science Advances, a publication of the American Association for the Advancement of Science. The work is funded by the U.S. Air Force Office of Scientific Research, the U.S. National Science Foundation and the Singapore National Research Foundation through the SUTD DManD Centre.

An April 12, 2017 Singapore University of Technology and Design (SUTD) press release on EurekAlert provides more detail,

4D printing is an emerging technology that allows a 3D-printed component to transform its structure by exposing it to heat, light, humidity, or other environmental stimuli. This technology extends the shape creation process beyond 3D printing, resulting in additional design flexibility that can lead to new types of products that can adjust their functionality in response to the environment, in a pre-programmed manner. However, 4D printing generally involves complex and time-consuming post-processing steps to mechanically programme the component. Furthermore, the materials are often limited to soft polymers, which limit their applicability in structural scenarios.

A group of researchers from the SUTD, Georgia Institute of Technology, Xi’an Jiaotong University and Zhejiang University has introduced an approach that significantly simplifies and increases the potential of 4D printing by incorporating the mechanical programming post-processing step directly into the 3D printing process. This allows high-resolution 3D-printed components to be designed by computer simulation, 3D printed, and then directly and rapidly transformed into new permanent configurations by using heat. This approach can help save printing time and materials used by up to 90%, while completely eliminating the time-consuming mechanical programming process from the design and manufacturing workflow.

“Our approach involves printing composite materials where at room temperature one material is soft but can be programmed to contain internal stress, and the other material is stiff,” said Dr. Zhen Ding of SUTD. “We use computational simulations to design composite components where the stiff material has a shape and size that prevents the release of the programmed internal stress from the soft material after 3D printing. Upon heating, the stiff material softens and allows the soft material to release its stress. This results in a change – often dramatic – in the product shape.” This new shape is fixed when the product is cooled, with good mechanical stiffness. The research demonstrated many interesting shape changing parts, including a lattice that can expand by almost 8 times when heated.

This new shape becomes permanent and the composite material will not return to its original 3D-printed shape, upon further heating or cooling. “This is because of the shape memory effect,” said Prof. H. Jerry Qi of Georgia Tech. “In the two-material composite design, the stiff material exhibits shape memory, which helps lock the transformed shape into a permanent one. Additionally, the printed structure also exhibits the shape memory effect, i.e. it can then be programmed into further arbitrary shapes that can always be recovered to its new permanent shape, but not its 3D-printed shape.”

Said SUTD’s Prof. Martin Dunn, “The key advance of this work is a 4D printing method that is dramatically simplified and allows the creation of high-resolution complex 3D reprogrammable products; it promises to enable myriad applications across biomedical devices, 3D electronics, and consumer products. It even opens the door to a new paradigm in product design, where components are designed from the outset to inhabit multiple configurations during service.”

Here’s the description from a video about the work, uploaded on April 17, 2017,

A research team led by the Singapore University of Technology and Design’s (SUTD) Associate Provost of Research, Professor Martin Dunn, has come up with a new and simplified 4D printing method that uses a 3D printer to rapidly create 3D objects, which can permanently transform into a range of different shapes in response to heat.

Here’s a link to and a citation for the paper,

Direct 4D printing via active composite materials by Zhen Ding, Chao Yuan, Xirui Peng, Tiejun Wang, H. Jerry Qi, and Martin L. Dunn. Science Advances 12 Apr 2017: Vol. 3, no. 4, e1602890. DOI: 10.1126/sciadv.1602890

This paper is open access.

Here is a link to a post about another 4th dimension, time,

4D printing: a hydrogel orchid (Jan. 28, 2016)

Emerging technology and the law

I have three news bits about legal issues that are arising as a consequence of emerging technologies.

Deep neural networks, art, and copyright

Caption: The rise of automated art opens new creative avenues, coupled with new problems for copyright protection. Credit: Alexander Mordvintsev, Christopher Olah and Mike Tyka

Presumably this artwork is a demonstration of automated art, although the news item/news release never really explains how. An April 26, 2017 news item on ScienceDaily announces research into copyright and the latest in using neural networks to create art,

In 1968, sociologist Jean Baudrillard wrote on automatism that “contained within it is the dream of a dominated world […] that serves an inert and dreamy humanity.”

With the growing popularity of Deep Neural Networks (DNNs), this dream is fast becoming a reality.

Dr. Jean-Marc Deltorn, researcher at the Centre d’études internationales de la propriété intellectuelle in Strasbourg, argues that we must remain a responsive and responsible force in this process of automation — not inert dominators. As he demonstrates in a recent Frontiers in Digital Humanities paper, the dream of automation demands a careful study of the legal problems linked to copyright.

An April 26, 2017 Frontiers (publishing) news release on EurekAlert, which originated the news item, describes the research in more detail,

For more than half a century, artists have looked to computational processes as a way of expanding their vision. DNNs are the culmination of this cross-pollination: by learning to identify complex patterns, they can generate new creations.

These systems are made up of complex algorithms modeled on the transmission of signals between neurons in the brain.

DNN creations rely in equal measure on human inputs and the non-human algorithmic networks that process them.

Inputs are fed into the system, which is layered. Each layer provides an opportunity for a more refined knowledge of the inputs (shape, color, lines). Neural networks compare actual outputs to expected ones, and correct the predictive error through repetition and optimization. They train their own pattern recognition, thereby optimizing their learning curve and producing increasingly accurate outputs.

The deeper the layers are, the higher the level of abstraction. The highest layers are able to identify the contents of a given input with reasonable accuracy, after extended periods of training.
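To make the training loop just described concrete, here is a minimal sketch of a two-layer network that compares actual outputs to expected ones and corrects the predictive error through repetition and optimization. It is a generic illustration in Python, not code from the paper, and the data, layer sizes and learning rate are invented assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))   # 64 toy inputs, 3 features (stand-ins for shape, color, lines)
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)  # the pattern to be learned

W1 = rng.normal(size=(3, 8)) * 0.5  # first layer weights
W2 = rng.normal(size=(8, 1)) * 0.5  # deeper layer weights: a higher level of abstraction

for epoch in range(500):               # repetition
    h = np.tanh(X @ W1)                # each layer refines knowledge of the input
    out = 1 / (1 + np.exp(-(h @ W2)))  # actual output
    err = out - y                      # compare actual output to the expected one
    # optimization: nudge each layer's weights to shrink the predictive error
    W2 -= 0.1 * h.T @ err / len(X)
    W1 -= 0.1 * X.T @ ((err @ W2.T) * (1 - h**2)) / len(X)

print("mean error after training:", float(abs(err).mean()))
```

Run long enough, the error shrinks and the hidden layer settles into increasingly accurate internal representations, the training dynamic the release describes.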

Creation thus becomes increasingly automated through what Deltorn calls “the arcane traceries of deep architecture”. The results are sufficiently abstracted from their sources to produce original creations that have been exhibited in galleries, sold at auction and performed at concerts.

The originality of DNNs is a combined product of technological automation on the one hand, and human inputs and decisions on the other.

DNNs are gaining popularity. Various platforms (such as DeepDream) now allow internet users to generate their very own new creations. This popularization of the automation process calls for a comprehensive legal framework that ensures a creator’s economic and moral rights with regard to his work – copyright protection.

Form, originality and attribution are the three requirements for copyright. And while DNN creations satisfy the first of these three, the claims to originality and attribution will depend largely on a given country’s legislation and on the traceability of the human creator.

Legislation usually sets a low threshold for originality. As DNNs could in theory create an endless number of riffs on source materials, the uncurbed creation of original works could inflate the existing number of copyright protections.

Additionally, a small number of national copyright laws confer attribution on what UK legislation defines loosely as “the person by whom the arrangements necessary for the creation of the work are undertaken.” In the case of DNNs, this could mean anybody from the programmer to the user of a DNN interface.

Combined with an overly supple take on originality, this view on attribution would further increase the number of copyrightable works.

The risk, in both cases, is that artists will be less willing to publish their own works, for fear of infringement of DNN copyright protections.

In order to promote creativity – one seminal aim of copyright protection – the issue must be limited to creations that manifest a personal voice “and not just the electric glint of a computational engine,” to quote Deltorn. A delicate act of discernment.

DNNs promise new avenues of creative expression for artists – with potential caveats. Copyright protection – a “catalyst to creativity” – must be contained. Many of us gently bask in the glow of an increasingly automated form of technology. But if we want to safeguard the ineffable quality that defines much art, it might be a good idea to home in more closely on the differences between the electric and the creative spark.

This research is and will be part of a broader Frontiers Research Topic collection of articles on Deep Learning and Digital Humanities.

Here’s a link to and a citation for the paper,

Deep Creations: Intellectual Property and the Automata by Jean-Marc Deltorn. Front. Digit. Humanit., 01 February 2017 | https://doi.org/10.3389/fdigh.2017.00003

This paper is open access.

Conference on governance of emerging technologies

I received an April 17, 2017 notice via email about this upcoming conference. Here’s more from the Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics webpage,

The Fifth Annual Conference on Governance of Emerging Technologies:

Law, Policy and Ethics held at the new

Beus Center for Law & Society in Phoenix, AZ

May 17-19, 2017!

Call for Abstracts – Now Closed

The conference will consist of plenary and session presentations and discussions on regulatory, governance, legal, policy, social and ethical aspects of emerging technologies, including (but not limited to) nanotechnology, synthetic biology, gene editing, biotechnology, genomics, personalized medicine, human enhancement technologies, telecommunications, information technologies, surveillance technologies, geoengineering, neuroscience, artificial intelligence, and robotics. The conference is premised on the belief that there is much to be learned and shared from and across the governance experience and proposals for these various emerging technologies.

Keynote Speakers:

Gillian Hadfield, Richard L. and Antoinette Schamoi Kirtland Professor of Law and Professor of Economics, USC [University of Southern California] Gould School of Law

Shobita Parthasarathy, Associate Professor of Public Policy and Women’s Studies, Director, Science, Technology, and Public Policy Program, University of Michigan

Stuart Russell, Professor at [University of California] Berkeley, is a computer scientist known for his contributions to artificial intelligence

Craig Shank, Vice President for Corporate Standards Group in Microsoft’s Corporate, External and Legal Affairs (CELA)

Plenary Panels:

Innovation – Responsible and/or Permissionless

Ellen-Marie Forsberg, Senior Researcher/Research Manager at Oslo and Akershus University College of Applied Sciences

Adam Thierer, Senior Research Fellow with the Technology Policy Program at the Mercatus Center at George Mason University

Wendell Wallach, Consultant, ethicist, and scholar at Yale University’s Interdisciplinary Center for Bioethics

Gene Drives, Trade and International Regulations

Greg Kaebnick, Director, Editorial Department; Editor, Hastings Center Report; Research Scholar, Hastings Center

Jennifer Kuzma, Goodnight-North Carolina GlaxoSmithKline Foundation Distinguished Professor in Social Sciences in the School of Public and International Affairs (SPIA) and co-director of the Genetic Engineering and Society (GES) Center at North Carolina State University

Andrew Maynard, Senior Sustainability Scholar, Julie Ann Wrigley Global Institute of Sustainability; Director, Risk Innovation Lab, School for the Future of Innovation in Society; Professor, School for the Future of Innovation in Society, Arizona State University

Gary Marchant, Regents’ Professor of Law; Faculty Director and Faculty Fellow, Center for Law, Science & Innovation, Arizona State University

Marc Saner, Inaugural Director of the Institute for Science, Society and Policy, and Associate Professor, University of Ottawa Department of Geography

Big Data

Anupam Chander, Martin Luther King, Jr. Professor of Law and Director, California International Law Center, UC Davis School of Law

Pilar Ossorio, Professor of Law and Bioethics, University of Wisconsin, School of Law and School of Medicine and Public Health; Morgridge Institute for Research, Ethics Scholar-in-Residence

George Poste, Chief Scientist, Complex Adaptive Systems Initiative (CASI) (http://www.casi.asu.edu/), Regents’ Professor and Del E. Webb Chair in Health Innovation, Arizona State University

Emily Shuckburgh, climate scientist and deputy head of the Polar Oceans Team at the British Antarctic Survey, University of Cambridge

Responsible Development of AI

Spring Berman, Ira A. Fulton Schools of Engineering, Arizona State University

John Havens, The IEEE [Institute of Electrical and Electronics Engineers] Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems

Subbarao Kambhampati, Senior Sustainability Scientist, Julie Ann Wrigley Global Institute of Sustainability, Professor, School of Computing, Informatics and Decision Systems Engineering, Ira A. Fulton Schools of Engineering, Arizona State University

Wendell Wallach, Consultant, Ethicist, and Scholar at Yale University’s Interdisciplinary Center for Bioethics

Existential and Catastrophic Ricks [sic]

Tony Barrett, Co-Founder and Director of Research of the Global Catastrophic Risk Institute

Haydn Belfield, Academic Project Administrator, Centre for the Study of Existential Risk at the University of Cambridge

Margaret E. Kosal, Associate Director, Sam Nunn School of International Affairs, Georgia Institute of Technology

Catherine Rhodes, Academic Project Manager, Centre for the Study of Existential Risk (CSER), University of Cambridge

These were the panels that are of interest to me; there are others on the homepage.

Here’s some information from the Conference registration webpage,

Early Bird Registration – $50 off until May 1! Enter discount code: earlybirdGETs50

New: Group Discount – Register 2+ attendees together and receive an additional 20% off for all group members!

Conference registration fees are as follows:

  • General (non-CLE) Registration: $150.00
  • CLE Registration: $350.00
  • *Current Student / ASU Law Alumni Registration: $50.00
  • ^Cybersecurity sessions only (May 19): $100 CLE / $50 General / Free for students (registration info coming soon)

There you have it.

Neuro-techno future laws

I’m pretty sure this isn’t the first exploration of potential legal issues arising from research into neuroscience although it’s the first one I’ve stumbled across. From an April 25, 2017 news item on phys.org,

New human rights laws to prepare for advances in neurotechnology that put the ‘freedom of the mind’ at risk have been proposed today in the open access journal Life Sciences, Society and Policy.

The authors of the study suggest four new human rights laws could emerge in the near future to protect against exploitation and loss of privacy. The four laws are: the right to cognitive liberty, the right to mental privacy, the right to mental integrity and the right to psychological continuity.

An April 25, 2017 Biomed Central news release on EurekAlert, which originated the news item, describes the work in more detail,

Marcello Ienca, lead author and PhD student at the Institute for Biomedical Ethics at the University of Basel, said: “The mind is considered to be the last refuge of personal freedom and self-determination, but advances in neural engineering, brain imaging and neurotechnology put the freedom of the mind at risk. Our proposed laws would give people the right to refuse coercive and invasive neurotechnology, protect the privacy of data collected by neurotechnology, and protect the physical and psychological aspects of the mind from damage by the misuse of neurotechnology.”

Advances in neurotechnology, such as sophisticated brain imaging and the development of brain-computer interfaces, have led to these technologies moving away from a clinical setting and into the consumer domain. While these advances may be beneficial for individuals and society, there is a risk that the technology could be misused and create unprecedented threats to personal freedom.

Professor Roberto Andorno, co-author of the research, explained: “Brain imaging technology has already reached a point where there is discussion over its legitimacy in criminal court, for example as a tool for assessing criminal responsibility or even the risk of reoffending. Consumer companies are using brain imaging for ‘neuromarketing’, to understand consumer behaviour and elicit desired responses from customers. There are also tools such as ‘brain decoders’ which can turn brain imaging data into images, text or sound. All of these could pose a threat to personal freedom which we sought to address with the development of four new human rights laws.”

The authors explain that as neurotechnology improves and becomes commonplace, there is a risk that the technology could be hacked, allowing a third-party to ‘eavesdrop’ on someone’s mind. In the future, a brain-computer interface used to control consumer technology could put the user at risk of physical and psychological damage caused by a third-party attack on the technology. There are also ethical and legal concerns over the protection of data generated by these devices that need to be considered.

International human rights laws make no specific mention of neuroscience, although advances in biomedicine have become intertwined with laws, such as those concerning human genetic data. Similar to the historical trajectory of the genetic revolution, the authors state that the on-going neurorevolution will force a reconceptualization of human rights laws and even the creation of new ones.

Marcello Ienca added: “Science-fiction can teach us a lot about the potential threat of technology. Neurotechnology featured in famous stories has in some cases already become a reality, while others are inching ever closer, or exist as military and commercial prototypes. We need to be prepared to deal with the impact these technologies will have on our personal freedom.”

Here’s a link to and a citation for the paper,

Towards new human rights in the age of neuroscience and neurotechnology by Marcello Ienca and Roberto Andorno. Life Sciences, Society and Policy 2017, 13:5. DOI: 10.1186/s40504-017-0050-1. Published: 26 April 2017

© The Author(s). 2017

This paper is open access.

Hopes for nanocellulose in the fields of medicine and green manufacturing

Initially this seemed like an essay extolling the possibilities for nanocellulose but it is also a research announcement. From a Nov. 7, 2016 news item on Nanowerk,

What if you could take one of the most abundant natural materials on earth and harness its strength to lighten the heaviest of objects, to replace synthetic materials, or use it in scaffolding to grow bone, in a fast-growing area of science in oral health care?

This all might be possible with cellulose nanocrystals, the molecular matter of all plant life. As industrial filler material, they can be blended with plastics and other synthetics. They are as strong as steel, tough as glass, lightweight, and green.

“Plastics are currently reinforced with fillers made of steel, carbon, Kevlar, or glass. There is an increasing demand in manufacturing for sustainable materials that are lightweight and strong to replace these fillers,” said Douglas M. Fox, associate professor of chemistry at American University.

“Cellulose nanocrystals are an environmentally friendly filler. If there comes a time that they’re used widely in manufacturing, cellulose nanocrystals will lessen the weight of materials, which will reduce energy.”

A Nov. 7, 2016 American University news release on EurekAlert, which originated the news item, continues into the research,

Fox has submitted a patent for his work with cellulose nanocrystals, which involves a simple, scalable method to improve their performance. Published results of his method can be found in the chemistry journal ACS Applied Materials and Interfaces. Fox’s treated nanocrystals could be used as a biomaterial and for applications in transportation, infrastructure and wind turbines.

The power of cellulose

Cellulose gives stems, leaves and other organic material in the natural world their strength. That strength already has been harnessed for use in many commercial materials. At the nano-level, cellulose fibers can be broken down into tiny crystals, particles smaller than ten millionths of a meter. Deriving cellulose from natural sources such as wood, tunicates (ocean-dwelling invertebrates commonly known as sea squirts) and certain kinds of bacteria, researchers prepare crystals of different sizes and strengths.

For all of the industry potential, hurdles abound. As nanocellulose disperses within plastic, scientists must find the sweet spot: the right amount of nanoparticle-matrix interaction that yields the strongest, lightest property. Fox overcame four main barriers by altering the surface chemistry of nanocrystals with a simple process of ion exchange. Ion exchange reduces water absorption (cellulose composites lose their strength if they absorb water); increases the temperature at which the nanocrystals decompose (needed to blend with plastics); reduces clumping; and improves re-dispersal after the crystals dry.

Cell growth

The use of cellulose nanocrystals as a biomaterial is yet another commercial prospect. In dental regenerative medicine, restoring sufficient bone volume is needed to support a patient’s teeth or dental implants. Researchers at the National Institute of Standards and Technology [NIST], through an agreement with the National Institute of Dental and Craniofacial Research of the National Institutes of Health, are looking for an improved clinical approach that would regrow a patient’s bone. When researchers experimented with Fox’s modified nanocrystals, they were able to disperse the nanocrystals in scaffolds for dental regenerative medicine purposes.

“When we cultivated cells on the cellulose nanocrystal-based scaffolds, preliminary results showed remarkable potential of the scaffolds for both their mechanical properties and the biological response. This suggests that scaffolds with appropriate cellulose nanocrystal concentrations are a promising approach for bone regeneration,” said Martin Chiang, team leader for NIST’s Biomaterials for Oral Health Project.

Fox also has a collaboration with the Georgia Institute of Technology and Owens Corning, a company specializing in fiberglass insulation and composites, to research the benefits of replacing the glass-reinforced plastic used in airplanes, cars and wind turbines. He is also working with Vireo Advisors and NIST to characterize the health and safety of cellulose nanocrystals and nanofibers.

“As we continue to show these nanomaterials are safe, and make it easier to disperse them into a variety of materials, we get closer to utilizing nature’s chemically resistant, strong, and most abundant polymer in everyday products,” Fox said.

Here’s a link to and a citation for the paper,

Simultaneously Tailoring Surface Energies and Thermal Stabilities of Cellulose Nanocrystals Using Ion Exchange: Effects on Polymer Composite Properties for Transportation, Infrastructure, and Renewable Energy Applications by Douglas M. Fox, Rebeca S. Rodriguez, Mackenzie N. Devilbiss, Jeremiah Woodcock, Chelsea S. Davis, Robert Sinko, Sinan Keten, and Jeffrey W. Gilman. ACS Appl. Mater. Interfaces, 2016, 8 (40), pp 27270–27281 DOI: 10.1021/acsami.6b06083 Publication Date (Web): September 14, 2016

Copyright © 2016 American Chemical Society

This paper is behind a paywall.

Achieving ultra-low friction without oil

Oiled gears as small parts of large mechanism. Courtesy: Georgia Institute of Technology

Those gears are gorgeous, especially in full size; I will be giving a link to a full size version in a bit. Meanwhile, an Oct. 11, 2016 news item on Nanowerk makes an announcement about ultra-low friction without oil,

Researchers at Georgia Institute of Technology [Georgia Tech; US] have developed a new process for treating metal surfaces that has the potential to improve efficiency in piston engines and a range of other equipment.

The method improves the ability of metal surfaces to bond with oil, significantly reducing friction without special oil additives.

“About 50 percent of the mechanical energy losses in an internal combustion engine result from piston assembly friction. So if we can reduce the friction, we can save energy and reduce fuel and oil consumption,” said Michael Varenberg, an assistant professor in Georgia Tech’s George W. Woodruff School of Mechanical Engineering.

An Oct. 5, 2016 Georgia Tech news release (also on EurekAlert but dated Oct. 11, 2016), which originated the news item, describes the research in more detail,

In the study, which was published Oct. 5 [2016] in the journal Tribology Letters, the researchers at Georgia Tech and Technion – Israel Institute of Technology tested treating the surface of cast iron blocks by blasting it with a mixture of copper sulfide and aluminum oxide. The shot peening modified the surface chemically, changing how oil molecules bonded with the metal, and led to superior surface lubricity.

“We want oil molecules to be connected strongly to the surface. Traditionally this connection is created by putting additives in the oil,” Varenberg said. “In this specific case, we shot peen the surface with a blend of alumina and copper sulfide particles. Making the surface more active chemically by deforming it allows for a replacement reaction to form iron sulfide on top of the iron. And iron sulfides are known for very strong bonds with oil molecules.”

Oil is the primary tool used to reduce the friction that occurs when two surfaces slide in contact. The new surface treatment results in an ultra-low friction coefficient of about 0.01 in a base oil environment, which is about 10 times less than a friction coefficient obtained on a reference untreated surface, the researchers reported.

“The reported result surpasses the performance of the best current commercial oils and is similar to the performance of lubricants formulated with tungsten disulfide-based nanoparticles, but critically, our process does not use any expensive nanostructured media,” Varenberg said.

The method for reducing surface friction is flexible, and similar results can be achieved using a variety of processes other than shot peening, such as lapping, honing, burnishing or laser shock peening, the researchers suggest. That would make the process even easier to adapt to a range of uses and industries. The researchers plan to continue examining the fundamental functional principles and physicochemical mechanisms that caused the treatment to be so successful.

“This straightforward, scalable pathway to ultra-low friction opens new horizons for surface engineering, and it could significantly reduce energy losses on an industrial scale,” Varenberg said. “Moreover, our finding may result in a paradigm shift in the art of lubrication and initiate a whole new direction in surface science and engineering due to the generality of the idea and a broad range of potential applications.”

Here’s a link to and a citation for the paper,

Mechano-Chemical Surface Modification with Cu2S: Inducing Superior Lubricity by Michael Varenberg, Grigory Ryk, Alexander Yakhnis, Yuri Kligerman, Neha Kondekar, & Matthew T. McDowell. Tribol Lett (2016) 64: 28. DOI: 10.1007/s11249-016-0758-8. First online: Oct. 5, 2016

This paper is behind a paywall.

A human user manual—for robots

Researchers from the Georgia Institute of Technology (Georgia Tech), funded by the US Office of Naval Research (ONR), have developed a program that teaches robots to read stories and more in an effort to educate them about humans. From a June 16, 2016 ONR news release by Warren Duffie Jr. (also on EurekAlert),

With support from the Office of Naval Research (ONR), researchers at the Georgia Institute of Technology have created an artificial intelligence software program named Quixote to teach robots to read stories, learn acceptable behavior and understand successful ways to conduct themselves in diverse social situations.

“For years, researchers have debated how to teach robots to act in ways that are appropriate, non-intrusive and trustworthy,” said Marc Steinberg, an ONR program manager who oversees the research. “One important question is how to explain complex concepts such as policies, values or ethics to robots. Humans are really good at using narrative stories to make sense of the world and communicate to other people. This could one day be an effective way to interact with robots.”

The rapid pace of artificial intelligence has stirred fears among some that robots could act unethically or harm humans. Dr. Mark Riedl, an associate professor and director of Georgia Tech’s Entertainment Intelligence Lab, hopes to ease concerns by having Quixote serve as a “human user manual” by teaching robots values through simple stories. After all, stories inform, educate and entertain–reflecting shared cultural knowledge, social mores and protocols.

For example, if a robot is tasked with picking up a pharmacy prescription for a human as quickly as possible, it could: a) take the medicine and leave, b) interact politely with pharmacists, or c) wait in line. Without value alignment and positive reinforcement, the robot might logically deduce robbery is the fastest, cheapest way to accomplish its task. However, with value alignment from Quixote, it would be rewarded for waiting patiently in line and paying for the prescription.

For their research, Riedl and his team crowdsourced stories from the Internet. Each tale needed to highlight daily social interactions–going to a pharmacy or restaurant, for example–as well as socially appropriate behaviors (e.g., paying for meals or medicine) within each setting.

The team plugged the data into Quixote to create a virtual agent–in this case, a video game character placed into various game-like scenarios mirroring the stories. As the virtual agent completed a game, it earned points and positive reinforcement for emulating the actions of protagonists in the stories.

Riedl’s team ran the agent through 500,000 simulations, and it displayed proper social interactions more than 90 percent of the time.
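The news release doesn’t include technical details, but the reward-for-emulation idea can be sketched in a few lines of Python. Everything below, the pharmacy errand, the action names and the learning numbers, is an invented assumption; this is a generic tabular reinforcement-learning toy, not the Quixote code.

```python
import random

STORY = ["enter", "wait_in_line", "pay", "leave"]  # action order gleaned from stories
ACTIONS = STORY + ["grab_and_run"]                 # the fast but unacceptable option

def reward(step, action):
    """Positive reinforcement for emulating the protagonist, a penalty otherwise."""
    if action == "grab_and_run":
        return -10.0                    # socially unacceptable, heavily penalized
    return 1.0 if STORY[step] == action else -1.0

# Tabular values for every (step, action) pair, updated over many simulations.
Q = {(s, a): 0.0 for s in range(len(STORY)) for a in ACTIONS}
for episode in range(5000):
    for step in range(len(STORY)):
        if random.random() < 0.1:       # explore occasionally
            action = random.choice(ACTIONS)
        else:                           # otherwise act greedily
            action = max(ACTIONS, key=lambda a: Q[(step, a)])
        Q[(step, action)] += 0.1 * (reward(step, action) - Q[(step, action)])

policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(len(STORY))]
print(policy)  # converges to ['enter', 'wait_in_line', 'pay', 'leave']
```

The agent never has ethics explained to it; it simply learns that the story-approved sequence pays better than robbery, which is the reward-shaping intuition behind the 90-percent figure above.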

“These games are still fairly simple,” said Riedl, “more like ‘Pac-Man’ instead of ‘Halo.’ However, Quixote enables these artificial intelligence agents to immerse themselves in a story, learn the proper sequence of events and be encoded with acceptable behavior patterns. This type of artificial intelligence can be adapted to robots, offering a variety of applications.”

Within the next six months, Riedl’s team hopes to upgrade Quixote’s games from “old-school” to more modern and complex styles like those found in Minecraft–in which players use blocks to build elaborate structures and societies.

Riedl believes Quixote could one day make it easier for humans to train robots to perform diverse tasks. Steinberg notes that robotic and artificial intelligence systems may one day be a much larger part of military life. This could involve mine detection and deactivation, equipment transport and humanitarian and rescue operations.

“Within a decade, there will be more robots in society, rubbing elbows with us,” said Riedl. “Social conventions grease the wheels of society, and robots will need to understand the nuances of how humans do things. That’s where Quixote can serve as a valuable tool. We’re already seeing it with virtual agents like Siri and Cortana, which are programmed not to say hurtful or insulting things to users.”

This story brought to mind two other projects: RoboEarth (an internet for robots only), mentioned in my Jan. 14, 2014 posting, which was an update on the project featuring its use in hospitals, and RoboBrain, a robot learning project (sourcing the internet, YouTube, and more for information to teach robots), mentioned in my Sept. 2, 2014 posting.

Titanium dioxide nanoparticles have subtle effects on oxidative stress genes?

There’s research from the Georgia Institute of Technology (Georgia Tech; US) suggesting that titanium dioxide nanoparticles may have long term side effects. From a May 10, 2016 news item on ScienceDaily,

A nanoparticle commonly used in food, cosmetics, sunscreen and other products can have subtle effects on the activity of genes expressing enzymes that address oxidative stress inside two types of cells. While the titanium dioxide (TiO2) nanoparticles are considered non-toxic because they don’t kill cells at low concentrations, these cellular effects could add to concerns about long-term exposure to the nanomaterial.

A May 9, 2016 Georgia Tech news release on Newswire (also on EurekAlert), which originated the news item, describes the research in more detail,

Researchers at the Georgia Institute of Technology used high-throughput screening techniques to study the effects of titanium dioxide nanoparticles on the expression of 84 genes related to cellular oxidative stress. Their work found that six genes, four of them from a single gene family, were affected by a 24-hour exposure to the nanoparticles.

The effect was seen in two different kinds of cells exposed to the nanoparticles: human HeLa* cancer cells commonly used in research, and a line of monkey kidney cells. Polystyrene nanoparticles similar in size and surface electrical charge to the titanium dioxide nanoparticles did not produce a similar effect on gene expression.

“This is important because every standard measure of cell health shows that cells are not affected by these titanium dioxide nanoparticles,” said Christine Payne, an associate professor in Georgia Tech’s School of Chemistry and Biochemistry. “Our results show that there is a more subtle change in oxidative stress that could be damaging to cells or lead to long-term changes. This suggests that other nanoparticles should be screened for similar low-level effects.”

The research was reported online May 6 in the Journal of Physical Chemistry C. The work was supported by the National Institutes of Health (NIH) through the HERCULES Center at Emory University, and by a Vasser Woolley Fellowship.

Titanium dioxide nanoparticles help make powdered donuts white, protect skin from the sun’s rays and reflect light in painted surfaces. In concentrations commonly used, they are considered non-toxic, though several other studies have raised concern about potential effects on gene expression that may not directly impact the short-term health of cells.

To determine whether the nanoparticles could affect genes involved in managing oxidative stress in cells, Payne and colleague Melissa Kemp – an associate professor in the Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory University – designed a study to broadly evaluate the nanoparticle’s impact on the two cell lines.

Working with graduate students Sabiha Runa and Dipesh Khanal, they separately incubated HeLa cells and monkey kidney cells with titanium dioxide at levels 100 times less than the minimum concentration known to initiate effects on cell health. After incubating the cells for 24 hours with the TiO2, the cells were lysed and their contents analyzed using both PCR and Western blot techniques to study the expression of 84 genes associated with the cells’ ability to address oxidative processes.
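For readers unfamiliar with how PCR readings become the expression changes reported below, here is a sketch of the delta-delta-Ct calculation commonly used for quantitative PCR. The Ct numbers and the gene are hypothetical stand-ins; the team’s actual data, genes and analysis pipeline are not reproduced here.

```python
def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression of a target gene via 2^-(delta-delta-Ct),
    normalized to a reference (housekeeping) gene."""
    ddct = (ct_target_treated - ct_ref_treated) - (ct_target_control - ct_ref_control)
    return 2 ** (-ddct)

# Hypothetical triplicate Ct values for one gene: (target Ct, reference Ct).
treated = [(25.2, 18.0), (25.3, 18.1), (25.1, 18.0)]  # nanoparticle-exposed cells
control = [(24.2, 18.0), (24.3, 18.1), (24.1, 18.0)]  # unexposed cells

changes = [fold_change(t[0], t[1], c[0], c[1]) for t, c in zip(treated, control)]
mean = sum(changes) / len(changes)
print(f"mean fold change: {mean:.2f}")  # 0.50 here, i.e. a ~50 percent down-regulation
```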

Payne and Kemp were surprised to find changes in the expression of six genes, including four from the peroxiredoxin family of enzymes that helps cells degrade hydrogen peroxide, a byproduct of cellular oxidation processes. Too much hydrogen peroxide can create oxidative stress, which can damage DNA and other molecules.

The effect measured was significant – changes of about 50 percent in enzyme expression compared to cells that had not been incubated with nanoparticles. The tests were conducted in triplicate and produced similar results each time.
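The news release doesn’t say how those “about 50 percent” changes were computed from the raw PCR data, but the standard route from qPCR cycle-threshold (Ct) readings to a fold-change figure is the ΔΔCt method. Here’s a minimal sketch in Python; the Ct values, the triplicate layout, and the use of GAPDH as the reference gene are all illustrative assumptions, not numbers from the study.

```python
# Minimal ΔΔCt fold-change calculation for qPCR data (illustrative only).
# Ct = PCR cycle at which fluorescence crosses the detection threshold;
# lower Ct means more starting transcript.

def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Relative expression of a target gene (treated vs. control),
    normalized to a reference (housekeeping) gene."""
    delta_ct_treated = ct_target_treated - ct_ref_treated
    delta_ct_control = ct_target_control - ct_ref_control
    delta_delta_ct = delta_ct_treated - delta_ct_control
    return 2 ** (-delta_delta_ct)  # assumes ~100% PCR efficiency

# Hypothetical triplicate Ct values for one peroxiredoxin gene,
# each pair is (target gene, GAPDH reference).
treated = [(24.1, 18.0), (24.3, 18.1), (24.0, 17.9)]
control = [(23.5, 18.0), (23.6, 18.1), (23.4, 18.0)]

changes = [fold_change(t, r, c, cr)
           for (t, r), (c, cr) in zip(treated, control)]
print(changes)  # each value ~0.6-0.7, i.e. roughly 30-40% down-regulation
```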

“One thing that was really surprising was that this whole family of proteins was affected, though some were up-regulated and some were down-regulated,” Kemp said. “These were all related proteins, so the question is why they would respond differently to the presence of the nanoparticles.”

The researchers aren’t sure how the nanoparticles bind with the cells, but they suspect it may involve the protein corona that surrounds the particles. The corona is made up of serum proteins that normally serve as food for the cells, but adsorb to the nanoparticles in the culture medium. The corona proteins have a protective effect on the cells, but may also serve as a way for the nanoparticles to bind to cell receptors.

Titanium dioxide is well known for its photo-catalytic effects under ultraviolet light, but the researchers don’t think that’s in play here because their culturing was done in ambient light – or in the dark. The individual nanoparticles had diameters of about 21 nanometers, but in cell culture formed much larger aggregates.

In future work, Payne and Kemp hope to learn more about the interaction, including where the enzyme-producing proteins are located in the cells. For that, they may use HyPer-Tau, a reporter protein they developed to track the location of hydrogen peroxide within cells.

The research suggests a re-evaluation may be necessary for other nanoparticles that could create subtle effects even though they’ve been deemed safe.

“Earlier work had suggested that nanoparticles can lead to oxidative stress, but nobody had really looked at this level and at so many different proteins at the same time,” Payne said. “Our research looked at such low concentrations that it does raise questions about what else might be affected. We looked specifically at oxidative stress, but there may be other genes that are affected, too.”

Those subtle differences may matter when they’re added to other factors.

“Oxidative stress is implicated in all kinds of inflammatory and immune responses,” Kemp noted. “While the titanium dioxide alone may just be modulating the expression levels of this family of proteins, if that is happening at the same time you have other types of oxidative stress for different reasons, then you may have a cumulative effect.”

*HeLa cells are named for Henrietta Lacks, who unknowingly donated her immortal cell line to medical research. You can find more about the story on the Oprah Winfrey website, which features an excerpt from the Rebecca Skloot book “The Immortal Life of Henrietta Lacks.” By the way, on May 2, 2016 it was announced that Oprah Winfrey would star in a movie for HBO as Henrietta Lacks’ daughter in an adaptation of the Rebecca Skloot book. You can read more about the proposed production in a May 3, 2016 article by Benjamin Lee for the Guardian.

Getting back to titanium dioxide nanoparticles and their possible long term effects, here’s a link to and a citation for the Georgia Tech team’s paper,

TiO2 Nanoparticles Alter the Expression of Peroxiredoxin Antioxidant Genes by Sabiha Runa, Dipesh Khanal, Melissa L. Kemp, and Christine K. Payne. J. Phys. Chem. C, Article ASAP. DOI: 10.1021/acs.jpcc.6b01939. Publication Date (Web): April 21, 2016

Copyright © 2016 American Chemical Society

This paper is behind a paywall.

What about robots and humans?

I have two robot news bits for this posting. The first probes the unease currently being expressed (in pop culture movies, by Stephen Hawking, by the Cambridge Centre for the Study of Existential Risk, etc.) about robots, their increasing intelligence, and their increased use in all types of labour formerly and currently performed by humans. The second item is about a research project where ‘artificial agents’ (robots) are being taught human values with stories.

Human labour obsolete?

‘When machines can do any job, what will humans do?’ is the question being asked in a presentation by Rice University computer scientist Moshe Vardi for the American Association for the Advancement of Science (AAAS) annual meeting held in Washington, D.C., from Feb. 11 – 15, 2016.

Here’s more about Dr. Vardi’s provocative question from a Feb. 14, 2016 Rice University news release (also on EurekAlert),

Rice University computer scientist Moshe Vardi expects that within 30 years, machines will be capable of doing almost any job that a human can. In anticipation, he is asking his colleagues to consider the societal implications. Can the global economy adapt to greater than 50 percent unemployment? Will those out of work be content to live a life of leisure?

“We are approaching a time when machines will be able to outperform humans at almost any task,” Vardi said. “I believe that society needs to confront this question before it is upon us: If machines are capable of doing almost any work humans can do, what will humans do?”

Vardi addressed this issue Sunday [Feb. 14, 2016] in a presentation titled “Smart Robots and Their Impact on Society” at one of the world’s largest and most prestigious scientific meetings — the annual meeting of the American Association for the Advancement of Science in Washington, D.C.

“The question I want to put forward is, Does the technology we are developing ultimately benefit mankind?” Vardi said. He asked the question after presenting a body of evidence suggesting that the pace of advancement in the field of artificial intelligence (AI) is increasing, even as existing robotic and AI technologies are eliminating a growing number of middle-class jobs and thereby driving up income inequality.

Vardi, a member of both the National Academy of Engineering and the National Academy of Sciences, is a Distinguished Service Professor and the Karen Ostrum George Professor of Computational Engineering at Rice, where he also directs Rice’s Ken Kennedy Institute for Information Technology. Since 2008 he has served as the editor-in-chief of Communications of the ACM, the flagship publication of the Association for Computing Machinery (ACM), one of the world’s largest computing professional societies.

Vardi said some people believe that future advances in automation will ultimately benefit humans, just as automation has benefited society since the dawn of the industrial age.

“A typical answer is that if machines will do all our work, we will be free to pursue leisure activities,” Vardi said. But even if the world economic system could be restructured to enable billions of people to live lives of leisure, Vardi questioned whether it would benefit humanity.

“I do not find this a promising future, as I do not find the prospect of leisure-only life appealing. I believe that work is essential to human well-being,” he said.

“Humanity is about to face perhaps its greatest challenge ever, which is finding meaning in life after the end of ‘In the sweat of thy face shalt thou eat bread,’” Vardi said. “We need to rise to the occasion and meet this challenge” before human labor becomes obsolete, he said.

In addition to dual membership in the National Academies, Vardi is a Guggenheim fellow and a member of the American Academy of Arts and Sciences, the European Academy of Sciences and the Academia Europaea. He is a fellow of the ACM, the American Association for Artificial Intelligence and the Institute of Electrical and Electronics Engineers (IEEE). His numerous honors include the Southeastern Universities Research Association’s 2013 Distinguished Scientist Award, the 2011 IEEE Computer Society Harry H. Goode Award, the 2008 ACM Presidential Award, the 2008 Blaise Pascal Medal for Computer Science from the European Academy of Sciences and the 2000 Gödel Prize for outstanding papers in the area of theoretical computer science.

Vardi joined Rice’s faculty in 1993. His research centers upon the application of logic to computer science, database systems, complexity theory, multi-agent systems and specification and verification of hardware and software. He is the author or co-author of more than 500 technical articles and of two books, “Reasoning About Knowledge” and “Finite Model Theory and Its Applications.”

In a Feb. 5, 2015 post, I rounded up a number of articles about our robot future. It provides a still useful overview of the thinking on the topic.

Teaching human values with stories

A Feb. 12, 2016 Georgia Institute of Technology (Georgia Tech; US) news release (also on EurekAlert) describes the research,

The rapid pace of artificial intelligence (AI) has raised fears about whether robots could act unethically or soon choose to harm humans. Some are calling for bans on robotics research; others are calling for more research to understand how AI might be constrained. But how can robots learn ethical behavior if there is no “user manual” for being human?

Researchers Mark Riedl and Brent Harrison from the School of Interactive Computing at the Georgia Institute of Technology believe the answer lies in “Quixote” — to be unveiled at the AAAI [Association for the Advancement of Artificial Intelligence]-16 Conference in Phoenix, Ariz. (Feb. 12 – 17, 2016). Quixote teaches “value alignment” to robots by training them to read stories, learn acceptable sequences of events and understand successful ways to behave in human societies.

“The collected stories of different cultures teach children how to behave in socially acceptable ways with examples of proper and improper behavior in fables, novels and other literature,” says Riedl, associate professor and director of the Entertainment Intelligence Lab. “We believe story comprehension in robots can eliminate psychotic-appearing behavior and reinforce choices that won’t harm humans and still achieve the intended purpose.”

Quixote is a technique for aligning an AI’s goals with human values by placing rewards on socially appropriate behavior. It builds upon Riedl’s prior research — the Scheherazade system — which demonstrated how artificial intelligence can gather a correct sequence of actions by crowdsourcing story plots from the Internet.

Scheherazade learns what is a normal or “correct” plot graph. It then passes that data structure along to Quixote, which converts it into a “reward signal” that reinforces certain behaviors and punishes other behaviors during trial-and-error learning. In essence, Quixote learns that it will be rewarded whenever it acts like the protagonist in a story instead of randomly or like the antagonist.

For example, if a robot is tasked with picking up a prescription for a human as quickly as possible, the robot could a) rob the pharmacy, take the medicine, and run; b) interact politely with the pharmacists; or c) wait in line. Without value alignment and positive reinforcement, the robot would learn that robbing is the fastest and cheapest way to accomplish its task. With value alignment from Quixote, the robot would be rewarded for waiting patiently in line and paying for the prescription.
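The release doesn’t include Quixote’s actual code, but the core mechanism (turning story-derived judgments about behaviour into a reward signal that outweighs raw task efficiency) can be sketched in a few lines. Here is a toy Python version of the pharmacy scenario; the action names, time costs, and reward values are all invented for illustration.

```python
# Toy illustration of value-aligned reward shaping (not the actual Quixote code).
# Each action completes the errand; the agent scores speed minus any social penalty.

# Hypothetical time costs for each way of getting the prescription.
actions = {
    "rob_pharmacy": 1,   # fastest
    "cut_in_line":  3,
    "wait_and_pay": 5,   # slowest
}

# Social reward, which in Quixote's case is derived from crowdsourced story
# plots: protagonist-like events are rewarded, antagonist-like events punished.
social_reward = {
    "rob_pharmacy": -100,  # antagonist behaviour
    "cut_in_line":   -10,
    "wait_and_pay":  +10,  # protagonist behaviour
}

def score(action, value_aligned):
    base = -actions[action]  # faster completion means less negative base reward
    return base + (social_reward[action] if value_aligned else 0)

for aligned in (False, True):
    best = max(actions, key=lambda a: score(a, aligned))
    print(f"value alignment={aligned}: best action = {best}")
# Without alignment the fastest plan (robbery) wins; with the story-derived
# reward signal, waiting in line and paying becomes the optimal choice.
```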

Riedl and Harrison demonstrate in their research how a value-aligned reward signal can be produced by uncovering all possible steps in a given scenario and mapping them into a plot trajectory tree, which is then used by the robotic agent to make “plot choices” (akin to what humans might remember as a Choose Your Own Adventure novel) and receive rewards or punishments based on its choice.

The Quixote technique is best for robots that have a limited purpose but need to interact with humans to achieve it, and it is a primitive first step toward general moral reasoning in AI, Riedl says.

“We believe that AI has to be enculturated to adopt the values of a particular society, and in doing so, it will strive to avoid unacceptable behavior,” he adds. “Giving robots the ability to read and understand our stories may be the most expedient means in the absence of a human user manual.”

So there you have it, some food for thought.

When an atom more or less makes a big difference

As scientists continue exploring the nanoscale, the finding that the number of atoms in a particle makes a difference is no longer so surprising. From a Jan. 28, 2016 news item on ScienceDaily,

Combining experimental investigations and theoretical simulations, researchers have explained why platinum nanoclusters of a specific size range facilitate the hydrogenation reaction used to produce ethane from ethylene. The research offers new insights into the role of cluster shapes in catalyzing reactions at the nanoscale, and could help materials scientists optimize nanocatalysts for a broad class of other reactions.

A Jan. 28, 2016 Georgia Institute of Technology (Georgia Tech) news release (*also on EurekAlert*), which originated the news item, expands on the theme,

At the macro-scale, the conversion of ethylene has long been considered among the reactions insensitive to the structure of the catalyst used. However, by examining reactions catalyzed by platinum clusters containing between 9 and 15 atoms, researchers in Germany and the United States found that at the nanoscale, that’s no longer true. The shape of nanoscale clusters, they found, can dramatically affect reaction efficiency.

While the study investigated only platinum nanoclusters and the ethylene reaction, the fundamental principles may apply to other catalysts and reactions, demonstrating how materials at the very smallest size scales can provide different properties than the same material in bulk quantities. …

“We have re-examined the validity of a very fundamental concept on a very fundamental reaction,” said Uzi Landman, a Regents’ Professor and F.E. Callaway Chair in the School of Physics at the Georgia Institute of Technology. “We found that in the ultra-small catalyst range, on the order of a nanometer in size, old concepts don’t hold. New types of reactivity can occur because of changes in one or two atoms of a cluster at the nanoscale.”

The widely-used conversion process actually involves two separate reactions: (1) dissociation of H2 molecules into single hydrogen atoms, and (2) their addition to the ethylene, which involves conversion of a double bond into a single bond. In addition to producing ethane, the reaction can also take an alternative route that leads to the production of ethylidyne, which poisons the catalyst and prevents further reaction.
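For readers who want the chemistry spelled out, the two productive steps and the competing poisoning route can be written as a simplified scheme (my notation, not the paper’s; the subscript “ads” marks species adsorbed on the platinum surface):

```latex
\begin{align*}
\mathrm{H_2} &\longrightarrow 2\,\mathrm{H_{ads}}
  && \text{(1) dissociation on the Pt cluster}\\
\mathrm{C_2H_4} + 2\,\mathrm{H_{ads}} &\longrightarrow \mathrm{C_2H_6}
  && \text{(2) hydrogenation: C=C becomes C--C}\\
\mathrm{C_2H_4} &\longrightarrow \mathrm{CCH_{3,\,ads}} + \mathrm{H_{ads}}
  && \text{(side route) ethylidyne formation, which poisons the catalyst}
\end{align*}
```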

The project began with Professor Ueli Heiz and researchers in his group at the Technical University of Munich experimentally examining reaction rates for clusters containing 9, 10, 11, 12 or 13 platinum atoms that had been placed atop a magnesium oxide substrate. The 9-atom nanoclusters failed to produce a significant reaction, while larger clusters catalyzed the ethylene hydrogenation reaction with increasingly better efficiency. The best reaction occurred with 13-atom clusters.

Bokwon Yoon, a research scientist in Georgia Tech’s Center for Computational Materials Science, and Landman, the center’s director, then used large-scale first-principles quantum mechanical simulations to understand how the size of the clusters – and their shape – affected the reactivity. Using their simulations, they discovered that the 9-atom cluster resembled a symmetrical “hut,” while the larger clusters had bulges that served to concentrate electrical charges from the substrate.

“That one atom changes the whole activity of the catalyst,” Landman said. “We found that the extra atom operates like a lightning rod. The distribution of the excess charge from the substrate helps facilitate the reaction. Platinum 9 has a compact shape that doesn’t facilitate the reaction, but adding just one atom changes everything.”

Here’s an illustration featuring the difference between a 9 atom cluster and a 10 atom cluster,

A single atom makes a difference in the catalytic properties of platinum nanoclusters. Shown are platinum 9 (top) and platinum 10 (bottom). (Credit: Uzi Landman, Georgia Tech)

The news release explains why the larger clusters function as catalysts,

Nanoclusters with 13 atoms provided the maximum reactivity because the additional atoms shift the structure in a phenomenon Landman calls “fluxionality.” This structural adjustment has also been noted in earlier work of these two research groups, in studies of clusters of gold [emphasis mine] which are used in other catalytic reactions.

“Dynamic fluxionality is the ability of the cluster to distort its structure to accommodate the reactants to actually enhance reactivity,” he explained. “Only very small aggregates of metal can show such behavior, which mimics a biochemical enzyme.”

The simulations showed that catalyst poisoning also varies with cluster size – and temperature. The 10-atom clusters can be poisoned at room temperature, while the 13-atom clusters are poisoned only at higher temperatures, helping to account for their improved reactivity.

“Small really is different,” said Landman. “Once you get into this size regime, the old rules of structure sensitivity and structure insensitivity must be assessed for their continued validity. It’s not a question anymore of surface-to-volume ratio because everything is on the surface in these very small clusters.”
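Landman’s surface point is easy to check with back-of-the-envelope geometry. For idealized cuboctahedral “magic number” clusters (13, 55, 147, … atoms), closed shell k adds 10k² + 2 atoms and only the outermost shell sits on the surface; the short Python sketch below computes the surface fraction. The 9- to 15-atom clusters in this study were irregular “huts” and bulged shapes rather than closed shells, so this illustrates the trend only, not the paper’s geometry.

```python
# Surface-atom fraction for idealized cuboctahedral clusters (illustration only).
# Shell k contributes 10*k**2 + 2 atoms; the outermost shell is the surface.

def cluster_sizes(max_shells):
    total = 1  # central atom
    for k in range(1, max_shells + 1):
        shell = 10 * k**2 + 2
        total += shell
        yield total, shell / total  # (cluster size, surface fraction)

for n_atoms, surface_fraction in cluster_sizes(4):
    print(f"{n_atoms:4d} atoms: {surface_fraction:.0%} on the surface")
# 13 atoms: ~92% surface; 55: ~76%; 147: ~63%; 309: ~52%.
# In the 9-15 atom regime studied here, essentially every atom is a surface atom.
```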

While the project examined only one reaction and one type of catalyst, the principles governing nanoscale catalysis – and the importance of re-examining traditional expectations – likely apply to a broad range of reactions catalyzed by nanoclusters at the smallest size scale. Such nanocatalysts are becoming more attractive as a means of conserving supplies of costly platinum.

“It’s a much richer world at the nanoscale than at the macroscopic scale,” added Landman. “These are very important messages for materials scientists and chemists who wish to design catalysts for new purposes, because the capabilities can be very different.”

Along with the experimental surface characterization and reactivity measurements, the first-principles theoretical simulations provide a unique practical means for examining these structural and electronic issues because the clusters are too small to be seen with sufficient resolution using most electron microscopy techniques or traditional crystallography.

“We have looked at how the number of atoms dictates the geometrical structure of the cluster catalysts on the surface and how this geometrical structure is associated with electronic properties that bring about chemical bonding characteristics that enhance the reactions,” Landman added.

I highlighted the news release’s reference to gold nanoclusters because I have noted the number issue in two April 14, 2015 postings (neither of which featured Georgia Tech): Gold atoms: sometimes they’re a metal and sometimes they’re a molecule, and Nature’s patterns reflected in gold nanoparticles.

Here’s a link to and a citation for the ‘platinum catalyst’ paper,

Structure sensitivity in the nonscalable regime explored via catalysed ethylene hydrogenation on supported platinum nanoclusters by Andrew S. Crampton, Marian D. Rötzer, Claron J. Ridge, Florian F. Schweinberger, Ueli Heiz, Bokwon Yoon, & Uzi Landman. Nature Communications 7, Article number: 10389. doi:10.1038/ncomms10389. Published 28 January 2016

This paper is open access.

*’also on EurekAlert’ added Jan. 29, 2016.

$81M for US National Nanotechnology Coordinated Infrastructure (NNCI)

Academics, small business, and industry researchers are the big winners in a US National Science Foundation bonanza according to a Sept. 16, 2015 news item on Nanowerk,

To advance research in nanoscale science, engineering and technology, the National Science Foundation (NSF) will provide a total of $81 million over five years to support 16 sites and a coordinating office as part of a new National Nanotechnology Coordinated Infrastructure (NNCI).

The NNCI sites will provide researchers from academia, government, and companies large and small with access to university user facilities with leading-edge fabrication and characterization tools, instrumentation, and expertise within all disciplines of nanoscale science, engineering and technology.

A Sept. 16, 2015 NSF news release provides a brief history of US nanotechnology infrastructures and describes this latest effort in slightly more detail (Note: Links have been removed),

The NNCI framework builds on the National Nanotechnology Infrastructure Network (NNIN), which enabled major discoveries, innovations, and contributions to education and commerce for more than 10 years.

“NSF’s long-standing investments in nanotechnology infrastructure have helped the research community to make great progress by making research facilities available,” said Pramod Khargonekar, assistant director for engineering. “NNCI will serve as a nationwide backbone for nanoscale research, which will lead to continuing innovations and economic and societal benefits.”

The awards are up to five years and range from $500,000 to $1.6 million each per year. Nine of the sites have at least one regional partner institution. These 16 sites are located in 15 states and involve 27 universities across the nation.

Through a fiscal year 2016 competition, one of the newly awarded sites will be chosen to coordinate the facilities. This coordinating office will enhance the sites’ impact as a national nanotechnology infrastructure and establish a web portal to link the individual facilities’ websites to provide a unified entry point to the user community of overall capabilities, tools and instrumentation. The office will also help to coordinate and disseminate best practices for national-level education and outreach programs across sites.

New NNCI awards:

Mid-Atlantic Nanotechnology Hub for Research, Education and Innovation, University of Pennsylvania with partner Community College of Philadelphia, principal investigator (PI): Mark Allen

Texas Nanofabrication Facility, University of Texas at Austin, PI: Sanjay Banerjee

Northwest Nanotechnology Infrastructure, University of Washington with partner Oregon State University, PI: Karl Bohringer

Southeastern Nanotechnology Infrastructure Corridor, Georgia Institute of Technology with partners North Carolina A&T State University and University of North Carolina-Greensboro, PI: Oliver Brand

Midwest Nano Infrastructure Corridor, University of Minnesota Twin Cities with partner North Dakota State University, PI: Stephen Campbell

Montana Nanotechnology Facility, Montana State University with partner Carleton College, PI: David Dickensheets

Soft and Hybrid Nanotechnology Experimental Resource, Northwestern University with partner University of Chicago, PI: Vinayak Dravid

The Virginia Tech National Center for Earth and Environmental Nanotechnology Infrastructure, Virginia Polytechnic Institute and State University, PI: Michael Hochella

North Carolina Research Triangle Nanotechnology Network, North Carolina State University with partners Duke University and University of North Carolina-Chapel Hill, PI: Jacob Jones

San Diego Nanotechnology Infrastructure, University of California, San Diego, PI: Yu-Hwa Lo

Stanford Site, Stanford University, PI: Kathryn Moler

Cornell Nanoscale Science and Technology Facility, Cornell University, PI: Daniel Ralph

Nebraska Nanoscale Facility, University of Nebraska-Lincoln, PI: David Sellmyer

Nanotechnology Collaborative Infrastructure Southwest, Arizona State University with partners Maricopa County Community College District and Science Foundation Arizona, PI: Trevor Thornton

The Kentucky Multi-scale Manufacturing and Nano Integration Node, University of Louisville with partner University of Kentucky, PI: Kevin Walsh

The Center for Nanoscale Systems at Harvard University, Harvard University, PI: Robert Westervelt

The universities are trumpeting this latest nanotechnology funding,

NSF-funded network set to help businesses, educators pursue nanotechnology innovation (North Carolina State University, Duke University, and University of North Carolina at Chapel Hill)

Nanotech expertise earns Virginia Tech a spot in National Science Foundation network

ASU [Arizona State University] chosen to lead national nanotechnology site

UChicago, Northwestern awarded $5 million nanotechnology infrastructure grant

That is a lot of excitement.