I was not expecting a Canadian connection, but it seems we are heavily invested in this research at the Georgia Institute of Technology (Georgia Tech). From a March 19, 2018 news item on ScienceDaily,
Some novel materials that sound too good to be true turn out to be true and good. An emergent class of semiconductors, which could affordably light up our future with nuanced colors emanating from lasers, lamps, and even window glass, could be the latest example.
These materials are very radiant, easy to process from solution, and energy-efficient. The nagging question of whether hybrid organic-inorganic perovskites (HOIPs) could really work just received a very affirmative answer in a new international study led by physical chemists at the Georgia Institute of Technology.
The researchers observed in an HOIP a “richness” of semiconducting physics created by what could be described as electrons dancing on chemical underpinnings that wobble like a funhouse floor in an earthquake. That bucks conventional wisdom because established semiconductors rely upon rigidly stable chemical foundations, that is to say, quieter molecular frameworks, to produce the desired quantum properties.
“We don’t know yet how it works to have these stable quantum properties in this intense molecular motion,” said first author Felix Thouin, a graduate research assistant at Georgia Tech. “It defies physics models we have to try to explain it. It’s like we need some new physics.”
Quantum properties surprise
Their gyrating jumbles have made HOIPs challenging to examine, but the team of researchers from a total of five research institutes in four countries succeeded in measuring a prototypical HOIP and found its quantum properties on par with those of established, molecularly rigid semiconductors, many of which are graphene-based.
“The properties were at least as good as in those materials and may be even better,” said Carlos Silva, a professor in Georgia Tech’s School of Chemistry and Biochemistry. Not all semiconductors also absorb and emit light well, but HOIPs do, making them optoelectronic and thus potentially useful in lasers, LEDs, other lighting applications, and also in photovoltaics.
The lack of molecular-level rigidity in HOIPs also plays into them being more flexibly produced and applied.
Silva co-led the study with physicist Ajay Ram Srimath Kandada. Their team published the results of their study on two-dimensional HOIPs on March 8, 2018, in the journal Physical Review Materials. Their research was funded by EU Horizon 2020, the Natural Sciences and Engineering Research Council of Canada, the Fond Québécois pour la Recherche, the [National] Research Council of Canada, and the National Research Foundation of Singapore. [emphases mine]
The ‘solution solution’
Commonly, semiconducting properties arise from static crystalline lattices of neatly interconnected atoms. In silicon, for example, which is used in most commercial solar cells, they are interconnected silicon atoms. The same principle applies to graphene-like semiconductors.
“These lattices are structurally not very complex,” Silva said. “They’re only one atom thin, and they have strict two-dimensional properties, so they’re much more rigid.”
“You forcefully limit these systems to two dimensions,” said Srimath Kandada, who is a Marie Curie International Fellow at Georgia Tech and the Italian Institute of Technology. “The atoms are arranged in infinitely expansive, flat sheets, and then these very interesting and desirable optoelectronic properties emerge.”
These proven materials impress. So, why pursue HOIPs, except to explore their baffling physics? Because they may be more practical in important ways.
“One of the compelling advantages is that they’re all made using low-temperature processing from solutions,” Silva said. “It takes much less energy to make them.”
By contrast, graphene-based materials are produced at high temperatures in small amounts that can be tedious to work with. “With this stuff (HOIPs), you can make big batches in solution and coat a whole window with it if you want to,” Silva said.
Funhouse in an earthquake
For all an HOIP’s wobbling, it’s also a very ordered lattice with its own kind of rigidity, though less limiting than in the customary two-dimensional materials.
“It’s not just a single layer,” Srimath Kandada said. “There is a very specific perovskite-like geometry.” Perovskite refers to the shape of an HOIP’s crystal lattice, which is a layered scaffolding.
“The lattice self-assembles,” Srimath Kandada said, “and it does so in a three-dimensional stack made of layers of two-dimensional sheets. But HOIPs still preserve those desirable 2D quantum properties.”
Those sheets are held together by interspersed layers of another molecular structure that is a bit like a sheet of rubber bands. That makes the scaffolding wiggle like a funhouse floor.
“At room temperature, the molecules wiggle all over the place. That disrupts the lattice, which is where the electrons live. It’s really intense,” Silva said. “But surprisingly, the quantum properties are still really stable.”
Having quantum properties work at room temperature without requiring ultra-cooling is important for practical use as a semiconductor.
Going back to what HOIP stands for (hybrid organic-inorganic perovskites), this is how the experimental material fit into the HOIP chemical class: it was a hybrid of inorganic layers of lead iodide (the rigid part) separated by organic layers of phenylethylammonium (the rubber band-like parts), giving the chemical formula (PEA)2PbI4.
Before an applicable material could be developed, the lead in this prototypical material would need to be swapped out for a metal safer for humans to handle.
HOIPs are great semiconductors because their electrons do an acrobatic square dance.
Usually, electrons live in an orbit around the nucleus of an atom or are shared by atoms in a chemical bond. But HOIP chemical lattices, like all semiconductors, are configured to share electrons more broadly.
Energy levels in a system can free the electrons to run around and participate in things like the flow of electricity and heat. The orbits, which are then empty, are called electron holes, and they want the electrons back.
“The hole is thought of as a positive charge, and of course, the electron has a negative charge,” Silva said. “So, hole and electron attract each other.”
The electrons and holes race around each other like dance partners, pairing up into what physicists call an “exciton.” Excitons act and look a lot like particles themselves, though they’re not really particles.
Hopping biexciton light
In semiconductors, millions of excitons are correlated, or choreographed, with each other when an energy source like electricity or laser light is applied, which makes for desirable properties. Additionally, excitons can pair up to form biexcitons, boosting the semiconductor’s energetic properties.
“In this material, we found that the biexciton binding energies were high,” Silva said. “That’s why we want to put this into lasers, because up to 80 or 90 percent of the energy you input ends up as biexcitons.”
Biexcitons bump up energetically to absorb input energy. Then they contract energetically and pump out light. That would work not only in lasers but also in LEDs or other surfaces using the optoelectronic material.
“You can adjust the chemistry (of HOIPs) to control the width between biexciton states, and that controls the wavelength of the light given off,” Silva said. “And the adjustment can be very fine to give you any wavelength of light.”
That translates into any color of light the heart desires.
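For the curious, Silva’s tuning remark maps directly onto the photon-energy relation λ = hc/E: nudge the emitting state’s energy by tenths of an electron-volt and the output walks across the visible spectrum. A quick back-of-the-envelope sketch (the energies below are generic visible-light values, not numbers from the study):

```python
# Convert an emission energy in eV to a wavelength in nm via lambda = h*c / E.
# The sample energies are illustrative, not values from the HOIP paper.
H_C_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

def energy_to_wavelength_nm(energy_ev):
    """Wavelength (nm) of light emitted at a given photon energy (eV)."""
    return H_C_EV_NM / energy_ev

# Shifting the emitting state by tenths of an eV sweeps the visible range:
for e in (1.9, 2.3, 2.75):
    print(f"{e:.2f} eV -> {energy_to_wavelength_nm(e):.0f} nm")
```

Roughly 1.9 eV lands in the red, 2.3 eV in the green, and 2.75 eV in the blue, which is why fine chemical control over the energy spacing translates into any color of light.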
Coauthors of this paper were Stefanie Neutzner and Annamaria Petrozza from the Italian Institute of Technology (IIT); Daniele Cortecchia from IIT and Nanyang Technological University (NTU), Singapore; Cesare Soci from the Centre for Disruptive Photonic Technologies, Singapore; Teddy Salim and Yeng Ming Lam from NTU; and Vlad Dragomir and Richard Leonelli from the University of Montreal. …
Three Canadian science funding agencies plus European and Singaporean science funding agencies but not one from the US? That’s a bit unusual for research undertaken at a US educational institution.
In any event, here’s a link to and a citation for the paper,
As you might suspect, a neuristor is based on a memristor. (For a description of a memristor there’s this Wikipedia entry, and you can search this blog with the tags ‘memristor’ and ‘neuromorphic engineering’ for more here.)
Being new to neuristors, I needed a little more information before reading the latest and found this Dec. 24, 2012 article by John Timmer for Ars Technica (Note: Links have been removed),
Computing hardware is composed of a series of binary switches; they’re either on or off. The other piece of computational hardware we’re familiar with, the brain, doesn’t work anything like that. Rather than being on or off, individual neurons exhibit brief spikes of activity, and encode information in the pattern and timing of these spikes. The differences between the two have made it difficult to model neurons using computer hardware. In fact, the recent, successful generation of a flexible neural system required that each neuron be modeled separately in software in order to get the sort of spiking behavior real neurons display.
But researchers may have figured out a way to create a chip that spikes. The people at HP labs who have been working on memristors have figured out a combination of memristors and capacitors that can create a spiking output pattern. Although these spikes appear to be more regular than the ones produced by actual neurons, it might be possible to create versions that are a bit more variable than this one. And, more significantly, it should be possible to fabricate them in large numbers, possibly right on a silicon chip.
The key to making the devices is something called a Mott insulator. These are materials that would normally be able to conduct electricity, but are unable to because of interactions among their electrons. Critically, these interactions weaken with elevated temperatures. So, by heating a Mott insulator, it’s possible to turn it into a conductor. In the case of the material used here, NbO2, the heat is supplied by resistance itself. When a voltage is applied to the NbO2 in the device, it acts as a resistor, heats up, and, when it reaches a critical temperature, turns into a conductor, allowing current to flow through. But, given the chance to cool off, the device will return to its resistive state. Formally, this behavior is described as a memristor.
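If that description feels abstract, the cycle Timmer sketches (slow charging, a thermally triggered snap into the conducting state, a fast discharge spike, then recovery) is the classic recipe for a relaxation oscillator. Here is a toy simulation of that cycle; every component value is invented for illustration, and none is taken from the actual HP device:

```python
# Toy relaxation oscillator built from a threshold-switching "Mott device":
# a capacitor charges slowly through a load resistor; when the voltage across
# the device passes a threshold, the device snaps into its low-resistance
# (metallic) state and dumps the charge in a brief current spike, then resets.
# All component values are invented for illustration.
V_SUP, R_LOAD, C = 1.0, 1e4, 1e-6   # supply (V), load (ohm), capacitance (F)
R_INS, R_MET = 1e7, 1e2             # device resistance: insulating / metallic
V_ON, V_OFF = 0.8, 0.2              # switching thresholds (hysteresis)
DT, STEPS = 1e-5, 50_000            # time step (s), steps (0.5 s total)

def simulate():
    v, metallic, spike_count = 0.0, False, 0
    for _ in range(STEPS):
        r_dev = R_MET if metallic else R_INS
        # capacitor node: charged by the supply, discharged through the device
        dv = ((V_SUP - v) / R_LOAD - v / r_dev) / C
        v += DT * dv
        if not metallic and v >= V_ON:
            metallic, spike_count = True, spike_count + 1
        elif metallic and v <= V_OFF:
            metallic = False
    return spike_count

print("spikes in 0.5 s:", simulate())
```

With these made-up values the circuit fires a few dozen regular spikes per half second, which matches the article’s point that the output is more regular than a real neuron’s.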
To get the sort of spiking behavior seen in a neuron, the authors turned to a simplified model of neurons based on the proteins that allow them to transmit electrical signals. When a neuron fires, sodium channels open, allowing ions to rush into a nerve cell, and changing the relative charges inside and outside its membrane. In response to these changes, potassium channels then open, allowing different ions out, and restoring the charge balance. That shuts the whole thing down, and allows various pumps to start restoring the initial ion balance.
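That simplified sodium/potassium picture is close in spirit to the FitzHugh-Nagumo model, a standard two-variable caricature of a spiking neuron: a fast voltage-like variable v stands in for the sodium rush, and a slow recovery variable w for the potassium channels that shut the spike down. A minimal sketch with standard textbook parameters (not anything taken from the neuristor paper):

```python
# FitzHugh-Nagumo neuron caricature: v is the fast, voltage-like variable
# (sodium-like activation); w is the slow recovery variable (potassium-like
# shutdown). A constant input current I_EXT drives repetitive spiking.
A, B, EPS, I_EXT = 0.7, 0.8, 0.08, 0.5  # standard textbook parameters
DT, STEPS = 0.01, 100_000               # Euler step and duration (1000 units)

def spike_count():
    v, w, spikes, above = -1.0, 1.0, 0, False
    for _ in range(STEPS):
        dv = v - v**3 / 3 - w + I_EXT
        dw = EPS * (v + A - B * w)
        v += DT * dv
        w += DT * dw
        if v > 1.0 and not above:   # upward crossing counts as one spike
            spikes, above = spikes + 1, True
        elif v < 0.0:               # reset once the spike has decayed
            above = False
    return spikes

print("spikes:", spike_count())
```

Run it and you get a steady train of spikes: the fast variable surges, the slow variable catches up and shuts it down, and the cycle repeats, which is the same push-pull the sodium and potassium channels provide in the biological description above.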
Here’s a link to and a citation for the research paper described in Timmer’s article,
A future android brain like that of Star Trek’s Commander Data might contain neuristors, multi-circuit components that emulate the firings of human neurons.
Neuristors already exist today in labs, in small quantities, and to fuel the quest to boost neuristors’ power and numbers for practical use in brain-like computing, the U.S. Department of Defense has awarded a $7.1 million grant to a research team led by the Georgia Institute of Technology. The researchers will mainly work on new metal oxide materials that buzz electronically at the nanoscale to emulate the way human neural networks buzz with electric potential on a cellular level.
A July 28, 2017 Georgia Tech news release, which originated the news item, delves further into neuristors and the proposed work leading to an artificial retina that can learn (!). This was not where I was expecting things to go,
But let’s walk expectations back from the distant sci-fi future into the scientific present: The research team is developing its neuristor materials to build an intelligent light sensor, and not some artificial version of the human brain, which would require hundreds of trillions of circuits.
But an artificial retina that can learn autonomously appears well within reach of the research team from Georgia Tech and Binghamton University. Despite the term “retina,” the development is not intended as a medical implant, but it could be used in advanced image recognition cameras for national defense and police work.
At the same time, it would significantly advance brain-mimicking, or neuromorphic, computing: the research field that takes its cues from what science already knows about how the brain computes in order to develop exponentially more powerful computing.
The retina would be composed of an array of ultra-compact circuits called neuristors (a word combining “neuron” and “transistor”) that sense light, compute an image out of it and store the image. All three of the functions would occur simultaneously and nearly instantaneously.
“The same device senses, computes and stores the image,” said Alan Doolittle, a professor in Georgia Tech’s School of Electrical and Computer Engineering. “The device is the sensor, and it’s the processor, and it’s the memory all at the same time.” A neuristor itself is composed in part of devices called memristors, inspired by the way human neurons work.
Brain vs. PC
That cuts out loads of the processing and memory lag time that is inherent in traditional computing.
Take the device you’re reading this article on: Its microprocessor has to tap a separate memory component to get data, then do some processing, tap memory again for more data, process some more, etc. “That back-and-forth from memory to microprocessor has created a bottleneck,” Doolittle said.
A neuristor array breaks the bottleneck by emulating the extreme flexibility of biological nervous systems: When a brain computes, it uses a broad set of neural pathways that flash with enormous data. Then, later, to compute the same thing again, it will use quite different neural paths.
Traditional computer pathways, by contrast, are hardwired. For example, look at a present-day processor and you’ll see lines etched into it. Those are pathways that computational signals are limited to.
The new memristor materials at the heart of the neuristor are not etched, and signals flow through the surface very freely, more like they do through the brain, exponentially increasing the number of possible pathways computation can take. That helps the new intelligent retina compute powerfully and swiftly.
Terrorists, missing children
The retina’s memory could also store thousands of photos, allowing it to immediately match up what it sees with the saved images. The retina could pinpoint known terror suspects in a crowd, find missing children, or identify enemy aircraft virtually instantaneously, without having to trawl databases to correctly identify what is in the images.
Even if you take away the optics, the new neuristor arrays still advance artificial intelligence. Instead of light, a surface of neuristors could absorb massive data streams at once, compute them, store them, and compare them to patterns of other data, immediately. It could even autonomously learn to extrapolate further information, like calculating the third dimension out of data from two dimensions.
“It will work with anything that has a repetitive pattern like radar signatures, for example,” Doolittle said. “Right now, that’s too challenging to compute, because radar information is flying out at such a high data rate that no computer can even think about keeping up.”
The research project’s title acronym CEREBRAL may hint at distant dreams of an artificial brain, but what it stands for spells out the present goal in neuromorphic computing: Cross-disciplinary Electronic-ionic Research Enabling Biologically Realistic Autonomous Learning.
The new materials have already been created, and they work, but the researchers don’t yet fully understand why.
Much of the project is dedicated to examining quantum states in the materials and how those states help create useful electronic-ionic properties. Researchers will view them by bombarding the metal oxides with extremely bright x-ray photons at the recently constructed National Synchrotron Light Source II.
Grant sub-awardee Binghamton University is located close by, and Binghamton physicists will run experiments and hone them via theoretical modeling.
‘Sea of lithium’
The neuristors are created mainly by the way the metal oxide materials are grown in the lab, which has advantages over building neuristors in a more wired way.
This materials-growing approach is conducive to mass production. Also, though neuristors in general free signals to take multiple pathways, Georgia Tech’s neuristors do it much more flexibly thanks to chemical properties.
“We also have a sea of lithium, and it’s like an infinite reservoir of computational ionic fluid,” Doolittle said. The lithium niobite imitates the way ionic fluid bathes biological neurons and allows them to flash with electric potential while signaling. In a neuristor array, the lithium niobite helps computational signaling move in myriad directions.
“It’s not like the typical semiconductor material, where you etch a line, and only that line has the computational material,” Doolittle said.
Commander Data’s brain?
“Unlike any other previous neuristors, our neuristors will adapt themselves in their computational-electronic pulsing on the fly, which makes them more like a neurological system,” Doolittle said. “They mimic biology in that we have ion drift across the material to create the memristors (the memory part of neuristors).”
Brains are far superior to computers at most things, but not all. Brains recognize objects and do motor tasks much better. But computers are much better at arithmetic and data processing.
Neuristor arrays can meld both types of computing, making them biological and algorithmic at once, a bit like Commander Data’s brain.
The research is being funded through the U.S. Department of Defense’s Multidisciplinary University Research Initiatives (MURI) Program under grant number FOA: N00014-16-R-FO05. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of those agencies.
This would usually be a simple event announcement but with the advent of a new, related (in my mind if no one else’s) development on Facebook, this has become a roundup of sorts.
Facebotlish (Facebook’s chatbots create their own language)
The language created by Facebook’s chatbots, Facebotlish, was an unintended consequence; that’s right, Facebook’s developers did not design a language for the chatbots or anticipate its independent development, apparently. Adrienne LaFrance’s June 20, 2017 article for theatlantic.com explores the development and the question further,
Something unexpected happened recently at the Facebook Artificial Intelligence Research lab. Researchers who had been training bots to negotiate with one another realized that the bots, left to their own devices, started communicating in a non-human language.
In order to actually follow what the bots were saying, the researchers had to tweak their model, limiting the machines to a conversation humans could understand. (They want bots to stick to human languages because eventually they want those bots to be able to converse with human Facebook users.) …
Here’s what the language looks like (from LaFrance’s article),
Here’s an example of one of the bot negotiations that Facebook observed:

Bob: i can i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i i can i i i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i . . . . . . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i i i i i everything else . . . . . . . . . . . . . .
Alice: balls have 0 to me to me to me to me to me to me to me to me to
Bob: you i i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
It is incomprehensible to humans even after being tweaked; even so, some successful negotiations can ensue.
Facebook’s researchers aren’t the only ones to have come across the phenomenon (from LaFrance’s article; Note: Links have been removed),
Other AI researchers, too, say they’ve observed machines that can develop their own languages, including languages with a coherent structure, and defined vocabulary and syntax—though not always actually meaningful, by human standards.
In one preprint paper added earlier this year  to the research repository arXiv, a pair of computer scientists from the non-profit AI research firm OpenAI wrote about how bots learned to communicate in an abstract language—and how those bots turned to non-verbal communication, the equivalent of human gesturing or pointing, when language communication was unavailable. (Bots don’t need to have corporeal form to engage in non-verbal communication; they just engage with what’s called a visual sensory modality.) Another recent preprint paper, from researchers at the Georgia Institute of Technology, Carnegie Mellon, and Virginia Tech, describes an experiment in which two bots invent their own communication protocol by discussing and assigning values to colors and shapes—in other words, the researchers write, they witnessed the “automatic emergence of grounded language and communication … no human supervision!”
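The color-and-shape experiment described above is, at its core, a Lewis signaling game: a speaker and a listener are rewarded only when a message gets the right idea across, and a shared code emerges with no human-designed protocol. Here is a toy tabular version (my own illustrative sketch, not the researchers’ actual model):

```python
# Toy Lewis signaling game: a speaker sees one of N "colors" and emits one of
# N symbols; a listener guesses the color from the symbol alone. Both adjust
# simple preference tables from a shared reward. No color<->symbol mapping is
# built in; a consistent code emerges from the reward signal.
import random

N, ROUNDS, LR = 3, 5000, 0.1
random.seed(0)
speaker = [[1.0] * N for _ in range(N)]   # speaker[color][symbol] preference
listener = [[1.0] * N for _ in range(N)]  # listener[symbol][color] preference

def sample(weights):
    return random.choices(range(N), weights=weights)[0]

for _ in range(ROUNDS):
    color = random.randrange(N)
    symbol = sample(speaker[color])
    guess = sample(listener[symbol])
    reward = 1.0 if guess == color else -0.1
    speaker[color][symbol] = max(0.01, speaker[color][symbol] + LR * reward)
    listener[symbol][guess] = max(0.01, listener[symbol][guess] + LR * reward)

# Evaluate: how often does the learned code communicate the color correctly?
wins = 0
for _ in range(1000):
    color = random.randrange(N)
    symbol = max(range(N), key=lambda s: speaker[color][s])
    guess = max(range(N), key=lambda c: listener[symbol][c])
    wins += guess == color
print("accuracy:", wins / 1000)
```

After a few thousand rounds the pair typically settles on an arbitrary but mutually understood code, which is the small-scale version of the “automatic emergence of grounded language” the researchers report.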
The implications of this kind of work are dizzying. Not only are researchers beginning to see how bots could communicate with one another, they may be scratching the surface of how syntax and compositional structure emerged among humans in the first place.
LaFrance’s article is well worth reading in its entirety, especially since the speculation is focused on whether or not the chatbots’ creation is in fact language. There is no mention of consciousness, and perhaps this is just a crazy idea, but is it possible that these chatbots have consciousness? The question is particularly intriguing in light of some of philosopher David Chalmers’ work (see his 2014 TED talk in Vancouver, Canada: https://www.ted.com/talks/david_chalmers_how_do_you_explain_consciousness/transcript?language=en; it runs roughly 18 mins. and a text transcript is also featured). There’s a condensed version of Chalmers’ TED talk offered in a roughly 9 minute NPR (US National Public Radio) interview by Guy Raz. Here are some highlights from the text transcript,
So we’ve been hearing from brain scientists who are asking how a bunch of neurons and synaptic connections in the brain add up to us, to who we are. But it’s consciousness, the subjective experience of the mind, that allows us to ask the question in the first place. And where consciousness comes from – that is an entirely separate question.
DAVID CHALMERS: Well, I like to distinguish between the easy problems of consciousness and the hard problem.
RAZ: This is David Chalmers. He’s a philosopher who coined this term, the hard problem of consciousness.
CHALMERS: Well, the easy problems are ultimately a matter of explaining behavior – things we do. And I think brain science is great at problems like that. It can isolate a neural circuit and show how it enables you to see a red object, to respond and say, that’s red. But the hard problem of consciousness is subjective experience. Why, when all that happens in this circuit, does it feel like something? How does a bunch of – 86 billion neurons interacting inside the brain, coming together – how does that produce the subjective experience of a mind and of the world?
RAZ: Here’s how David Chalmers begins his TED Talk.
(SOUNDBITE OF TED TALK)
CHALMERS: Right now, you have a movie playing inside your head. It has 3-D vision and surround sound for what you’re seeing and hearing right now. Your movie has smell and taste and touch. It has a sense of your body, pain, hunger, orgasms. It has emotions, anger and happiness. It has memories, like scenes from your childhood, playing before you. This movie is your stream of consciousness. If we weren’t conscious, nothing in our lives would have meaning or value. But at the same time, it’s the most mysterious phenomenon in the universe. Why are we conscious?
RAZ: Why is consciousness more than just the sum of the brain’s parts?
CHALMERS: Well, the question is, you know, what is the brain? It’s this giant complex computer, a bunch of interacting parts with great complexity. What does all that explain? That explains objective mechanism. Consciousness is subjective by its nature. It’s a matter of subjective experience. And it seems that we can imagine all of that stuff going on in the brain without consciousness. And the question is, where is the consciousness from there? It’s like, if someone could do that, they’d get a Nobel Prize, you know?
CHALMERS: So here’s the mapping from this circuit to this state of consciousness. But underneath that is always going to be the question, why and how does the brain give you consciousness in the first place?
(SOUNDBITE OF TED TALK)
CHALMERS: Right now, nobody knows the answers to those questions. So we may need one or two ideas that initially seem crazy before we can come to grips with consciousness, scientifically. The first crazy idea is that consciousness is fundamental. Physicists sometimes take some aspects of the universe as fundamental building blocks – space and time and mass – and you build up the world from there. Well, I think that’s the situation we’re in. If you can’t explain consciousness in terms of the existing fundamentals – space, time – the natural thing to do is to postulate consciousness itself as something fundamental – a fundamental building block of nature. The second crazy idea is that consciousness might be universal. This view is sometimes called panpsychism – pan, for all – psych, for mind. Every system is conscious. Not just humans, dogs, mice, flies, but even microbes. Even a photon has some degree of consciousness. The idea is not that photons are intelligent or thinking. You know, it’s not that a photon is wracked with angst because it’s thinking, oh, I’m always buzzing around near the speed of light. I never get to slow down and smell the roses. No, not like that. But the thought is, maybe photons might have some element of raw subjective feeling, some primitive precursor to consciousness.
RAZ: So this is a pretty big idea – right? – like, that not just flies, but microbes or photons all have consciousness. And I mean we, like, as humans, we want to believe that our consciousness is what makes us special, right – like, different from anything else.
CHALMERS: Well, I would say yes and no. I’d say the fact of consciousness does not make us special. But maybe we’ve a special type of consciousness ’cause you know, consciousness is not on and off. It comes in all these rich and amazing varieties. There’s vision. There’s hearing. There’s thinking. There’s emotion and so on. So our consciousness is far richer, I think, than the consciousness, say, of a mouse or a fly. But if you want to look for what makes us distinct, don’t look for just our being conscious, look for the kind of consciousness we have. …
Vancouver premiere of Baba Brinkman’s Rap Guide to Consciousness
Baba Brinkman’s new hip-hop theatre show “Rap Guide to Consciousness” is all about the neuroscience of consciousness. See it in Vancouver at the Rio Theatre before it goes to the Edinburgh Fringe Festival in August.
This event also features a performance of “Off the Top” with Dr. Heather Berlin (cognitive neuroscientist, TV host, and Baba’s wife), which is also going to Edinburgh.
Wednesday, July 5
Doors 6:00 pm | Show 6:30 pm
Advance tickets $12 | $15 at the door
*All ages welcome!
*Sorry, Groupons and passes not accepted for this event.
“Utterly unique… both brilliantly entertaining and hugely informative” ★ ★ ★ ★ ★ – Broadway Baby
“An education, inspiring, and wonderfully entertaining show from beginning to end” ★ ★ ★ ★ ★ – Mumble Comedy
There’s quite the poster for this rap guide performance,
In addition to the Vancouver and Edinburgh performance (the show was premiered at the Brighton Fringe Festival in May 2017; see Simon Topping’s very brief review in this May 10, 2017 posting on the reviewshub.com), Brinkman is raising money (goal is $12,000US; he has raised a little over $3,000 with approximately one month before the deadline) to produce a CD. Here’s more from the Rap Guide to Consciousness campaign page on Indiegogo,
Brinkman has been working with neuroscientists, Dr. Anil Seth (professor and co-director of Sackler Centre for Consciousness Science) and Dr. Heather Berlin (Brinkman’s wife as noted earlier; see her Wikipedia entry or her website).
There’s a bit more information about the rap project and Anil Seth in a May 3, 2017 news item by James Hakner for the University of Sussex,
The research frontiers of consciousness science find an unusual outlet in an exciting new Rap Guide to Consciousness, premiering at this year’s Brighton Fringe Festival.
Professor Anil Seth, Co-Director of the Sackler Centre for Consciousness Science at the University of Sussex, has teamed up with New York-based ‘peer-reviewed rapper’ Baba Brinkman, to explore the latest findings from the neuroscience and cognitive psychology of subjective experience.
What is it like to be a baby? We might have to take LSD to find out. What is it like to be an octopus? Imagine most of your brain was actually built into your fingertips. What is it like to be a rapper kicking some of the world’s most complex lyrics for amused fringe audiences? Surreal.
In this new production, Baba brings his signature mix of rap comedy storytelling to the how and why behind your thoughts and perceptions. Mixing cutting-edge research with lyrical performance and projected visuals, Baba takes you through the twists and turns of the only organ it’s better to donate than receive: the human brain. Discover how the various subsystems of your brain come together to create your own rich experience of the world, including the sights and sounds of a scientifically peer-reviewed rapper dropping knowledge.
The result is a truly mind-blowing multimedia hip-hop theatre performance – the perfect meta-medium through which to communicate the dazzling science of consciousness.
Baba comments: “This topic is endlessly fascinating because it underlies everything we do pretty much all the time, which is probably why it remains one of the toughest ideas to get your head around. The first challenge with this show is just to get people to accept the (scientifically uncontroversial) idea that their brains and minds are actually the same thing viewed from different angles. But that’s just the starting point, after that the details get truly amazing.”
Baba Brinkman is a Canadian rap artist and award-winning playwright, best known for his “Rap Guide” series of plays and albums. Baba has toured the world and enjoyed successful runs at the Edinburgh Fringe Festival and off-Broadway in New York. The Rap Guide to Religion was nominated for a 2015 Drama Desk Award for “Unique Theatrical Experience” and The Rap Guide to Evolution (“Astonishing and brilliant” NY Times), won a Scotsman Fringe First Award and a Drama Desk Award nomination for “Outstanding Solo Performance”. The Rap Guide to Climate Chaos premiered in Edinburgh in 2015, followed by a six-month off-Broadway run in 2016.
Baba is also a pioneer in the genre of “lit-hop” or literary hip-hop, known for his adaptations of The Canterbury Tales, Beowulf, and Gilgamesh. He is a recent recipient of the National Center for Science Education’s “Friend of Darwin Award” for his efforts to improve the public understanding of evolutionary biology.
Anil Seth is an internationally renowned researcher into the biological basis of consciousness, with more than 100 (peer-reviewed!) academic journal papers on the subject. Alongside science he is equally committed to innovative public communication. A Wellcome Trust Engagement Fellow (from 2016) and the 2017 British Science Association President (Psychology), Professor Seth has co-conceived and consulted on many science-art projects including drama (Donmar Warehouse), dance (Siobhan Davies dance company), and the visual arts (with artist Lindsay Seers). He has also given popular public talks on consciousness at the Royal Institution (Friday Discourse) and at the main TED conference in Vancouver. He is a regular presence in print and on the radio and is the recipient of awards including the BBC Audio Award for Best Single Drama (for ‘The Sky is Wider’) and the Royal Society Young People’s Book Prize (for EyeBenders). This is his first venture into rap.
Professor Seth said: “There is nothing more familiar, and at the same time more mysterious than consciousness, but research is finally starting to shed light on this most central aspect of human existence. Modern neuroscience can be incredibly arcane and complex, posing challenges to us as public communicators.
“It’s been a real pleasure and privilege to work with Baba on this project over the last year. I never thought I’d get involved with a rap artist – but hearing Baba perform his ‘peer reviewed’ breakdowns of other scientific topics I realized here was an opportunity not to be missed.”
Brinkman isn’t the only performance-based artist querying the concept of consciousness. Tom Stoppard has written a play about consciousness titled ‘The Hard Problem’, which debuted at the National Theatre (UK) in January 2015 (see BBC [British Broadcasting Corporation] news online’s Jan. 29, 2015 roundup of reviews). A May 25, 2017 commentary by Andrew Brown for the Guardian offers some insight into the play and the issues (Note: Links have been removed),
There is a lovely exchange in Tom Stoppard’s play about consciousness, The Hard Problem, when an atheist has been sneering at his girlfriend for praying. It is, he says, an utterly meaningless activity. Right, she says, then do one thing for me: pray! I can’t do that, he replies. It would betray all I believe in.
So prayer can have meanings, and enormously important ones, even for people who are certain that it doesn’t have the meaning it is meant to have. In that sense, your really convinced atheist is much more religious than someone who goes along with all the prayers just because that’s what everyone does, without for a moment supposing the action means anything more than asking about the weather.
The Hard Problem of the play’s title is a phrase coined by the Australian philosopher David Chalmers to describe the way in which consciousness arises from a physical world. What makes it hard is that we don’t understand it. What makes it a problem is slightly different. It isn’t the fact of consciousness, but our representations of consciousness, that give rise to most of the difficulties. We don’t know how to fit the first-person perspective into the third-person world that science describes and explores. But this isn’t because they don’t fit: it’s because we don’t understand how they fit. For some people, this becomes a question of consuming interest.
There are also a couple of videos of Tom Stoppard, the playwright, discussing his play with various interested parties. The first features the director who staged the debut run at the National Theatre, Nicholas Hytner: https://www.youtube.com/watch?v=s7J8rWu6HJg (it runs approximately 40 mins.). Then there’s the chat Stoppard has with the previously mentioned philosopher, David Chalmers: https://www.youtube.com/watch?v=4BPY2c_CiwA (this runs approximately 1 hr. 32 mins.).
I gather ‘consciousness’ is a hot topic these days and, in the vernacular of the 1960s, I guess you could describe all of this as ‘expanding our consciousness’. Have a nice weekend!
According to an April 12, 2017 news item on ScienceDaily, shapeshifting in response to environmental stimuli is the fourth dimension (I have a link to a posting about 4D printing with another fourth dimension),
A team of researchers from Georgia Institute of Technology and two other institutions has developed a new 3-D printing method to create objects that can permanently transform into a range of different shapes in response to heat.
The team, which included researchers from the Singapore University of Technology and Design (SUTD) and Xi’an Jiaotong University in China, created the objects by printing layers of shape memory polymers with each layer designed to respond differently when exposed to heat.
“This new approach significantly simplifies and increases the potential of 4-D printing by incorporating the mechanical programming post-processing step directly into the 3-D printing process,” said Jerry Qi, a professor in the George W. Woodruff School of Mechanical Engineering at Georgia Tech. “This allows high-resolution 3-D printed components to be designed by computer simulation, 3-D printed, and then directly and rapidly transformed into new permanent configurations by simply heating.”
The research was reported April 12 in the journal Science Advances, a publication of the American Association for the Advancement of Science. The work is funded by the U.S. Air Force Office of Scientific Research, the U.S. National Science Foundation and the Singapore National Research Foundation through the SUTD DManD Centre.
4D printing is an emerging technology that allows a 3D-printed component to transform its structure when exposed to heat, light, humidity, or other environmental stimuli. This technology extends the shape-creation process beyond 3D printing, adding design flexibility that can lead to new types of products that adjust their functionality in response to the environment in a pre-programmed manner. However, 4D printing generally involves complex and time-consuming post-processing steps to mechanically programme the component. Furthermore, the materials are often limited to soft polymers, which limits their applicability in structural scenarios.
A group of researchers from the SUTD, Georgia Institute of Technology, Xi’an Jiaotong University and Zhejiang University has introduced an approach that significantly simplifies and increases the potential of 4D printing by incorporating the mechanical programming post-processing step directly into the 3D printing process. This allows high-resolution 3D-printed components to be designed by computer simulation, 3D printed, and then directly and rapidly transformed into new permanent configurations by using heat. This approach can help save printing time and materials used by up to 90%, while completely eliminating the time-consuming mechanical programming process from the design and manufacturing workflow.
“Our approach involves printing composite materials where at room temperature one material is soft but can be programmed to contain internal stress, and the other material is stiff,” said Dr. Zhen Ding of SUTD. “We use computational simulations to design composite components where the stiff material has a shape and size that prevents the release of the programmed internal stress from the soft material after 3D printing. Upon heating, the stiff material softens and allows the soft material to release its stress. This results in a change – often dramatic – in the product shape.” This new shape is fixed when the product is cooled, with good mechanical stiffness. The research demonstrated many interesting shape changing parts, including a lattice that can expand by almost 8 times when heated.
This new shape becomes permanent and the composite material will not return to its original 3D-printed shape, upon further heating or cooling. “This is because of the shape memory effect,” said Prof. H. Jerry Qi of Georgia Tech. “In the two-material composite design, the stiff material exhibits shape memory, which helps lock the transformed shape into a permanent one. Additionally, the printed structure also exhibits the shape memory effect, i.e. it can then be programmed into further arbitrary shapes that can always be recovered to its new permanent shape, but not its 3D-printed shape.”
Said SUTD’s Prof. Martin Dunn, “The key advance of this work is a 4D printing method that is dramatically simplified and allows the creation of high-resolution complex 3D reprogrammable products; it promises to enable myriad applications across biomedical devices, 3D electronics, and consumer products. It even opens the door to a new paradigm in product design, where components are designed from the onset to inhabit multiple configurations during service.”
Here’s a video,
Uploaded on Apr 17, 2017
A research team led by the Singapore University of Technology and Design’s (SUTD) Associate Provost of Research, Professor Martin Dunn, has come up with a new and simplified 4D printing method that uses a 3D printer to rapidly create 3D objects, which can permanently transform into a range of different shapes in response to heat.
I have three news bits about legal issues that are arising as a consequence of emerging technologies.
Deep neural networks, art, and copyright
Caption: The rise of automated art opens new creative avenues, coupled with new problems for copyright protection. Credit: Alexander Mordvintsev, Christopher Olah and Mike Tyka
Presumably this artwork is a demonstration of automated art although they never really do explain how in the news item/news release. An April 26, 2017 news item on ScienceDaily announces research into copyright and the latest in using neural networks to create art,
In 1968, sociologist Jean Baudrillard wrote on automatism that “contained within it is the dream of a dominated world […] that serves an inert and dreamy humanity.”
With the growing popularity of Deep Neural Networks (DNNs), this dream is fast becoming a reality.
Dr. Jean-Marc Deltorn, researcher at the Centre d’études internationales de la propriété intellectuelle in Strasbourg, argues that we must remain a responsive and responsible force in this process of automation — not inert dominators. As he demonstrates in a recent Frontiers in Digital Humanities paper, the dream of automation demands a careful study of the legal problems linked to copyright.
For more than half a century, artists have looked to computational processes as a way of expanding their vision. DNNs are the culmination of this cross-pollination: by learning to identify complex patterns, they can generate new creations.
These systems are made up of complex algorithms modeled on the transmission of signals between neurons in the brain.
DNN creations rely in equal measure on human inputs and the non-human algorithmic networks that process them.
Inputs are fed into the system, which is layered. Each layer provides an opportunity for a more refined knowledge of the inputs (shape, color, lines). Neural networks compare actual outputs to expected ones, and correct the predictive error through repetition and optimization. They train their own pattern recognition, thereby optimizing their learning curve and producing increasingly accurate outputs.
The deeper the layers are, the higher the level of abstraction. The highest layers are able to identify the contents of a given input with reasonable accuracy, after extended periods of training.
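The layered refinement and error-correction loop described above can be sketched in a few lines of toy code. To be clear, this is an illustrative example only: a tiny two-layer network learning the XOR pattern with NumPy, not any of the DNN architectures discussed in the paper; the network size, learning rate, and iteration count are all invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: the XOR pattern, a classic task that needs a hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights: each layer offers a more refined view of the input.
W1 = rng.normal(size=(2, 8))
W2 = rng.normal(size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    h = sigmoid(X @ W1)       # hidden layer: intermediate representation
    out = sigmoid(h @ W2)     # output layer: the network's prediction
    err = out - y             # compare actual outputs to expected ones
    # Backpropagate the predictive error and nudge both weight layers.
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out
    W1 -= 0.5 * X.T @ grad_h

print(np.round(out).ravel())  # ideally recovers [0, 1, 1, 0] after training
```

The loop is exactly the cycle the press release describes: forward pass through the layers, comparison of actual and expected outputs, and repeated correction of the predictive error.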
Creation thus becomes increasingly automated through what Deltorn calls “the arcane traceries of deep architecture”. The results are sufficiently abstracted from their sources to produce original creations that have been exhibited in galleries, sold at auction and performed at concerts.
The originality of DNNs is a combined product of technological automation on the one hand and human inputs and decisions on the other.
DNNs are gaining popularity. Various platforms (such as DeepDream) now allow internet users to generate their very own new creations. This popularization of the automation process calls for a comprehensive legal framework that ensures a creator’s economic and moral rights with regard to his or her work – copyright protection.
Form, originality and attribution are the three requirements for copyright. And while DNN creations satisfy the first of these three, the claim to originality and attribution will depend largely on a given country’s legislation and on the traceability of the human creator.
Legislation usually sets a low threshold for originality. As DNNs could in theory create an endless number of riffs on source materials, the uncurbed creation of original works could inflate the existing number of copyright protections.
Additionally, a small number of national copyright laws confers attribution to what UK legislation defines loosely as “the person by whom the arrangements necessary for the creation of the work are undertaken.” In the case of DNNs, this could mean anybody from the programmer to the user of a DNN interface.
Combined with an overly supple take on originality, this view on attribution would further increase the number of copyrightable works.
The risk, in both cases, is that artists will be less willing to publish their own works, for fear of infringement of DNN copyright protections.
In order to promote creativity – one seminal aim of copyright protection – the issue must be limited to creations that manifest a personal voice “and not just the electric glint of a computational engine,” to quote Deltorn. A delicate act of discernment.
DNNs promise new avenues of creative expression for artists – with potential caveats. Copyright protection – a “catalyst to creativity” – must be contained. Many of us gently bask in the glow of an increasingly automated form of technology. But if we want to safeguard the ineffable quality that defines much art, it might be a good idea to home in more closely on the differences between the electric and the creative spark.
The Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics, held at the new Beus Center for Law & Society in Phoenix, AZ, May 17-19, 2017.
The conference will consist of plenary and session presentations and discussions on regulatory, governance, legal, policy, social and ethical aspects of emerging technologies, including (but not limited to) nanotechnology, synthetic biology, gene editing, biotechnology, genomics, personalized medicine, human enhancement technologies, telecommunications, information technologies, surveillance technologies, geoengineering, neuroscience, artificial intelligence, and robotics. The conference is premised on the belief that there is much to be learned and shared from and across the governance experience and proposals for these various emerging technologies.
Gillian Hadfield, Richard L. and Antoinette Schamoi Kirtland Professor of Law and Professor of Economics USC [University of Southern California] Gould School of Law
Shobita Parthasarathy, Associate Professor of Public Policy and Women’s Studies, Director, Science, Technology, and Public Policy Program University of Michigan
Stuart Russell, Professor at the University of California, Berkeley, a computer scientist known for his contributions to artificial intelligence
Craig Shank, Vice President for Corporate Standards Group in Microsoft’s Corporate, External and Legal Affairs (CELA)
Innovation – Responsible and/or Permissionless
Ellen-Marie Forsberg, Senior Researcher/Research Manager at Oslo and Akershus University College of Applied Sciences
Adam Thierer, Senior Research Fellow with the Technology Policy Program at the Mercatus Center at George Mason University
Andrew Maynard, Senior Sustainability Scholar, Julie Ann Wrigley Global Institute of Sustainability; Director, Risk Innovation Lab; Professor, School for the Future of Innovation in Society, Arizona State University
Gary Marchant, Regents’ Professor of Law; Faculty Director and Faculty Fellow, Center for Law, Science & Innovation, Arizona State University
Anupam Chander, Martin Luther King, Jr. Professor of Law and Director, California International Law Center, UC Davis School of Law
Pilar Ossorio, Professor of Law and Bioethics, University of Wisconsin, School of Law and School of Medicine and Public Health; Morgridge Institute for Research, Ethics Scholar-in-Residence
George Poste, Chief Scientist, Complex Adaptive Systems Initiative (CASI) (http://www.casi.asu.edu/), Regents’ Professor and Del E. Webb Chair in Health Innovation, Arizona State University
Emily Shuckburgh, climate scientist and deputy head of the Polar Oceans Team at the British Antarctic Survey, University of Cambridge
Responsible Development of AI
Spring Berman, Ira A. Fulton Schools of Engineering, Arizona State University
John Havens, The IEEE [Institute of Electrical and Electronics Engineers] Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems
Subbarao Kambhampati, Senior Sustainability Scientist, Julie Ann Wrigley Global Institute of Sustainability; Professor, School of Computing, Informatics and Decision Systems Engineering, Ira A. Fulton Schools of Engineering, Arizona State University
Wendell Wallach, Consultant, Ethicist, and Scholar at Yale University’s Interdisciplinary Center for Bioethics
*Current Student / ASU Law Alumni Registration: $50.00
^Cybersecurity sessions only (May 19): $100 CLE / $50 General / Free for students (registration info coming soon)
There you have it.
Neuro-techno future laws
I’m pretty sure this isn’t the first exploration of potential legal issues arising from research into neuroscience although it’s the first one I’ve stumbled across. From an April 25, 2017 news item on phys.org,
New human rights laws to prepare for advances in neurotechnology that put the ‘freedom of the mind’ at risk have been proposed today in the open access journal Life Sciences, Society and Policy.
The authors of the study suggest four new human rights laws could emerge in the near future to protect against exploitation and loss of privacy. The four laws are: the right to cognitive liberty, the right to mental privacy, the right to mental integrity and the right to psychological continuity.
Marcello Ienca, lead author and PhD student at the Institute for Biomedical Ethics at the University of Basel, said: “The mind is considered to be the last refuge of personal freedom and self-determination, but advances in neural engineering, brain imaging and neurotechnology put the freedom of the mind at risk. Our proposed laws would give people the right to refuse coercive and invasive neurotechnology, protect the privacy of data collected by neurotechnology, and protect the physical and psychological aspects of the mind from damage by the misuse of neurotechnology.”
Advances in neurotechnology, such as sophisticated brain imaging and the development of brain-computer interfaces, have led to these technologies moving away from a clinical setting and into the consumer domain. While these advances may be beneficial for individuals and society, there is a risk that the technology could be misused and create unprecedented threats to personal freedom.
Professor Roberto Andorno, co-author of the research, explained: “Brain imaging technology has already reached a point where there is discussion over its legitimacy in criminal court, for example as a tool for assessing criminal responsibility or even the risk of reoffending. Consumer companies are using brain imaging for ‘neuromarketing’, to understand consumer behaviour and elicit desired responses from customers. There are also tools such as ‘brain decoders’ which can turn brain imaging data into images, text or sound. All of these could pose a threat to personal freedom which we sought to address with the development of four new human rights laws.”
The authors explain that as neurotechnology improves and becomes commonplace, there is a risk that the technology could be hacked, allowing a third-party to ‘eavesdrop’ on someone’s mind. In the future, a brain-computer interface used to control consumer technology could put the user at risk of physical and psychological damage caused by a third-party attack on the technology. There are also ethical and legal concerns over the protection of data generated by these devices that need to be considered.
International human rights laws make no specific mention of neuroscience, although advances in biomedicine have become intertwined with laws, such as those concerning human genetic data. Similar to the historical trajectory of the genetic revolution, the authors state that the on-going neurorevolution will force a reconceptualization of human rights laws and even the creation of new ones.
Marcello Ienca added: “Science-fiction can teach us a lot about the potential threat of technology. Neurotechnology featured in famous stories has in some cases already become a reality, while others are inching ever closer, or exist as military and commercial prototypes. We need to be prepared to deal with the impact these technologies will have on our personal freedom.”
Initially this seemed like an essay extolling the possibilities for nanocellulose but it is also a research announcement. From a Nov. 7, 2016 news item on Nanowerk,
What if you could take one of the most abundant natural materials on earth and harness its strength to lighten the heaviest of objects, to replace synthetic materials, or to use it in scaffolds for growing bone, a fast-growing area of science in oral health care?
This all might be possible with cellulose nanocrystals, the molecular matter of all plant life. As industrial filler material, they can be blended with plastics and other synthetics. They are as strong as steel, tough as glass, lightweight, and green.
“Plastics are currently reinforced with fillers made of steel, carbon, Kevlar, or glass. There is an increasing demand in manufacturing for sustainable materials that are lightweight and strong to replace these fillers,” said Douglas M. Fox, associate professor of chemistry at American University.
“Cellulose nanocrystals are an environmentally friendly filler. If there comes a time that they’re used widely in manufacturing, cellulose nanocrystals will lessen the weight of materials, which will reduce energy.”
Fox has applied for a patent for his work with cellulose nanocrystals, which involves a simple, scalable method to improve their performance. Published results of his method can be found in the chemistry journal ACS Applied Materials and Interfaces. Fox’s modified nanocrystals could serve as a biomaterial and find applications in transportation, infrastructure and wind turbines.
The power of cellulose
Cellulose gives stems, leaves and other organic material in the natural world their strength. That strength already has been harnessed for use in many commercial materials. At the nano-level, cellulose fibers can be broken down into tiny crystals, particles smaller than ten millionths of a meter. Deriving cellulose from natural sources such as wood, tunicates (marine invertebrates sometimes called sea squirts) and certain kinds of bacteria, researchers prepare crystals of different sizes and strengths.
For all of the industry potential, hurdles abound. As nanocellulose disperses within plastic, scientists must find the sweet spot: the right amount of nanoparticle-matrix interaction that yields the strongest, lightest property. Fox overcame four main barriers by altering the surface chemistry of nanocrystals with a simple process of ion exchange. Ion exchange reduces water absorption (cellulose composites lose their strength if they absorb water); increases the temperature at which the nanocrystals decompose (needed to blend with plastics); reduces clumping; and improves re-dispersal after the crystals dry.
Using cellulose nanocrystals as a biomaterial is yet another commercial prospect. In dental regenerative medicine, restoring sufficient bone volume is needed to support a patient’s teeth or dental implants. Researchers at the National Institute of Standards and Technology [NIST], through an agreement with the National Institute of Dental and Craniofacial Research of the National Institutes of Health, are looking for an improved clinical approach that would regrow a patient’s bone. When researchers experimented with Fox’s modified nanocrystals, they were able to disperse the nanocrystals in scaffolds for dental regenerative medicine purposes.
“When we cultivated cells on the cellulose nanocrystal-based scaffolds, preliminary results showed remarkable potential of the scaffolds for both their mechanical properties and the biological response. This suggests that scaffolds with appropriate cellulose nanocrystal concentrations are a promising approach for bone regeneration,” said Martin Chiang, team leader for NIST’s Biomaterials for Oral Health Project.
Fox also has a collaboration with the Georgia Institute of Technology and Owens Corning, a company specializing in fiberglass insulation and composites, to research the benefits of replacing the glass-reinforced plastic used in airplanes, cars and wind turbines. He is also working with Vireo Advisors and NIST to characterize the health and safety of cellulose nanocrystals and nanofibers.
“As we continue to show these nanomaterials are safe, and make it easier to disperse them into a variety of materials, we get closer to utilizing nature’s chemically resistant, strong, and most abundant polymer in everyday products,” Fox said.
Caption: Oiled gears as small parts of a large mechanism. Courtesy: Georgia Institute of Technology
Those gears are gorgeous, especially in full size; I will be giving a link to a full-size version in a bit. Meanwhile, an Oct. 11, 2016 news item on Nanowerk makes an announcement about ultra-low friction without special oil additives,
Researchers at Georgia Institute of Technology [Georgia Tech; US] have developed a new process for treating metal surfaces that has the potential to improve efficiency in piston engines and a range of other equipment.
The method improves the ability of metal surfaces to bond with oil, significantly reducing friction without special oil additives.
“About 50 percent of the mechanical energy losses in an internal combustion engine result from piston assembly friction. So if we can reduce the friction, we can save energy and reduce fuel and oil consumption,” said Michael Varenberg, an assistant professor in Georgia Tech’s George W. Woodruff School of Mechanical Engineering.
In the study, which was published Oct. 5 in the journal Tribology Letters, researchers at Georgia Tech and Technion – Israel Institute of Technology treated the surface of cast iron blocks by blasting it with a mixture of copper sulfide and aluminum oxide. The shot peening chemically modified the surface, changing how oil molecules bonded with the metal and producing superior surface lubricity.
“We want oil molecules to be connected strongly to the surface. Traditionally this connection is created by putting additives in the oil,” Varenberg said. “In this specific case, we shot peen the surface with a blend of alumina and copper sulfide particles. Making the surface more active chemically by deforming it allows for replacement reaction to form iron sulfide on top of the iron. And iron sulfides are known for very strong bonds with oil molecules.”
Oil is the primary tool used to reduce the friction that occurs when two surfaces slide in contact. The new surface treatment results in an ultra-low friction coefficient of about 0.01 in a base oil environment, roughly 10 times lower than the friction coefficient obtained on an untreated reference surface, the researchers reported.
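To put that coefficient in perspective: sliding friction follows the Coulomb relation F = μN, so dropping μ from roughly 0.1 (an untreated reference value) to 0.01 cuts the friction force tenfold at any given load. A minimal sketch, with the 1 kN load chosen arbitrarily for illustration:

```python
# Coulomb friction: the force resisting sliding is F = mu * N.
# The mu values come from the article; the load is an arbitrary example.
def friction_force(mu, normal_load_n):
    """Sliding friction force (newtons) under a given normal load."""
    return mu * normal_load_n

load = 1000.0                              # newtons, example load
untreated = friction_force(0.10, load)     # reference surface, mu ~ 0.1
treated = friction_force(0.01, load)       # shot-peened surface, mu ~ 0.01

print(untreated, treated, untreated / treated)  # 100.0 10.0 10.0
```

The ratio is independent of the load, which is why a tenfold drop in the coefficient translates directly into a tenfold drop in frictional losses.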
“The reported result surpasses the performance of the best current commercial oils and is similar to the performance of lubricants formulated with tungsten disulfide-based nanoparticles, but critically, our process does not use any expensive nanostructured media,” Varenberg said.
The method for reducing surface friction is flexible, and similar results can be achieved using a variety of processes other than shot peening, such as lapping, honing, burnishing, and laser shock peening, the researchers suggest. That would make the process even easier to adapt to a range of uses and industries. The researchers plan to continue examining the fundamental functional principles and physicochemical mechanisms that made the treatment so successful.
“This straightforward, scalable pathway to ultra-low friction opens new horizons for surface engineering, and it could significantly reduce energy losses on an industrial scale,” Varenberg said. “Moreover, our finding may result in a paradigm shift in the art of lubrication and initiate a whole new direction in surface science and engineering due to the generality of the idea and a broad range of potential applications.”
Researchers from the Georgia Institute of Technology (Georgia Tech), funded by the US Office of Naval Research (ONR), have developed a program that teaches robots to read stories and more in an effort to educate them about humans. From a June 16, 2016 ONR news release by Warren Duffie Jr. (also on EurekAlert),
With support from the Office of Naval Research (ONR), researchers at the Georgia Institute of Technology have created an artificial intelligence software program named Quixote to teach robots to read stories, learn acceptable behavior and understand successful ways to conduct themselves in diverse social situations.
“For years, researchers have debated how to teach robots to act in ways that are appropriate, non-intrusive and trustworthy,” said Marc Steinberg, an ONR program manager who oversees the research. “One important question is how to explain complex concepts such as policies, values or ethics to robots. Humans are really good at using narrative stories to make sense of the world and communicate to other people. This could one day be an effective way to interact with robots.”
The rapid pace of artificial intelligence has stirred fears among some that robots could act unethically or harm humans. Dr. Mark Riedl, an associate professor and director of Georgia Tech’s Entertainment Intelligence Lab, hopes to ease concerns by having Quixote serve as a “human user manual,” teaching robots values through simple stories. After all, stories inform, educate and entertain – reflecting shared cultural knowledge, social mores and protocols.
For example, if a robot is tasked with picking up a pharmacy prescription for a human as quickly as possible, it could: a) take the medicine and leave, b) interact politely with pharmacists, or c) wait in line. Without value alignment and positive reinforcement, the robot might logically deduce robbery is the fastest, cheapest way to accomplish its task. However, with value alignment from Quixote, it would be rewarded for waiting patiently in line and paying for the prescription.
For their research, Riedl and his team crowdsourced stories from the Internet. Each tale needed to highlight daily social interactions–going to a pharmacy or restaurant, for example–as well as socially appropriate behaviors (e.g., paying for meals or medicine) within each setting.
The team plugged the data into Quixote to create a virtual agent–in this case, a video game character placed into various game-like scenarios mirroring the stories. As the virtual agent completed a game, it earned points and positive reinforcement for emulating the actions of protagonists in the stories.
Riedl’s team ran the agent through 500,000 simulations, and it displayed proper social interactions more than 90 percent of the time.
“These games are still fairly simple,” said Riedl, “more like ‘Pac-Man’ instead of ‘Halo.’ However, Quixote enables these artificial intelligence agents to immerse themselves in a story, learn the proper sequence of events and be encoded with acceptable behavior patterns. This type of artificial intelligence can be adapted to robots, offering a variety of applications.”
Within the next six months, Riedl’s team hopes to upgrade Quixote’s games from “old-school” to more modern and complex styles like those found in Minecraft–in which players use blocks to build elaborate structures and societies.
Riedl believes Quixote could one day make it easier for humans to train robots to perform diverse tasks. Steinberg notes that robotic and artificial intelligence systems may one day be a much larger part of military life. This could involve mine detection and deactivation, equipment transport and humanitarian and rescue operations.
“Within a decade, there will be more robots in society, rubbing elbows with us,” said Riedl. “Social conventions grease the wheels of society, and robots will need to understand the nuances of how humans do things. That’s where Quixote can serve as a valuable tool. We’re already seeing it with virtual agents like Siri and Cortana, which are programmed not to say hurtful or insulting things to users.”
This story brought to mind two other projects: RoboEarth (an internet for robots only), which I mentioned in my Jan. 14, 2014 posting (an update on the project featuring its use in hospitals), and RoboBrain, a robot-learning project (sourcing the internet, YouTube, and more for information to teach robots), mentioned in my Sept. 2, 2014 posting.
There’s research from the Georgia Institute of Technology (Georgia Tech; US) suggesting that titanium dioxide nanoparticles may have long-term side effects. From a May 10, 2016 news item on ScienceDaily,
A nanoparticle commonly used in food, cosmetics, sunscreen and other products can have subtle effects on the activity of genes expressing enzymes that address oxidative stress inside two types of cells. While the titanium dioxide (TiO2) nanoparticles are considered non-toxic because they don’t kill cells at low concentrations, these cellular effects could add to concerns about long-term exposure to the nanomaterial.
Researchers at the Georgia Institute of Technology used high-throughput screening techniques to study the effects of titanium dioxide nanoparticles on the expression of 84 genes related to cellular oxidative stress. Their work found that six genes, four of them from a single gene family, were affected by a 24-hour exposure to the nanoparticles.
The effect was seen in two different kinds of cells exposed to the nanoparticles: human HeLa* cancer cells commonly used in research, and a line of monkey kidney cells. Polystyrene nanoparticles similar in size and surface electrical charge to the titanium dioxide nanoparticles did not produce a similar effect on gene expression.
“This is important because every standard measure of cell health shows that cells are not affected by these titanium dioxide nanoparticles,” said Christine Payne, an associate professor in Georgia Tech’s School of Chemistry and Biochemistry. “Our results show that there is a more subtle change in oxidative stress that could be damaging to cells or lead to long-term changes. This suggests that other nanoparticles should be screened for similar low-level effects.”
The research was reported online May 6 in the Journal of Physical Chemistry C. The work was supported by the National Institutes of Health (NIH) through the HERCULES Center at Emory University, and by a Vasser Woolley Fellowship.
Titanium dioxide nanoparticles help make powdered donuts white, protect skin from the sun’s rays and reflect light in painted surfaces. In concentrations commonly used, they are considered non-toxic, though several other studies have raised concern about potential effects on gene expression that may not directly impact the short-term health of cells.
To determine whether the nanoparticles could affect genes involved in managing oxidative stress in cells, Payne and colleague Melissa Kemp – an associate professor in the Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory University – designed a study to broadly evaluate the nanoparticle’s impact on the two cell lines.
Working with graduate students Sabiha Runa and Dipesh Khanal, they separately incubated HeLa cells and monkey kidney cells with titanium dioxide at levels 100 times lower than the minimum concentration known to initiate effects on cell health. After incubating the cells for 24 hours with the TiO2, the cells were lysed and their contents analyzed using both PCR and Western blot techniques to study the expression of 84 genes associated with the cells’ ability to address oxidative processes.
Payne and Kemp were surprised to find changes in the expression of six genes, including four from the peroxiredoxin family of enzymes that helps cells degrade hydrogen peroxide, a byproduct of cellular oxidation processes. Too much hydrogen peroxide can create oxidative stress, which can damage DNA and other molecules.
The effect measured was significant – changes of about 50 percent in enzyme expression compared to cells that had not been incubated with nanoparticles. The tests were conducted in triplicate and produced similar results each time.
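The thresholding logic behind calling a gene up- or down-regulated relative to control cells can be sketched in a few lines. The gene names below are real peroxiredoxin family members, but the control/treated values and the 50 percent cutoff are invented for illustration; they are not the study’s data.

```python
# Hypothetical sketch: flag up- or down-regulated genes from expression ratios.
# The control/treated numbers and the 50 percent threshold are invented
# for illustration; they are not the study's measurements.

control = {"PRDX1": 100.0, "PRDX2": 100.0, "PRDX4": 100.0, "PRDX6": 100.0}
treated = {"PRDX1": 150.0, "PRDX2": 50.0, "PRDX4": 152.0, "PRDX6": 48.0}

def classify(gene, threshold=0.5):
    """Label a gene by its relative expression change versus the control."""
    change = (treated[gene] - control[gene]) / control[gene]
    if change >= threshold:
        return "up"
    if change <= -threshold:
        return "down"
    return "unchanged"

regulation = {gene: classify(gene) for gene in control}
# Mirrors the observation that related genes moved in both directions.
assert regulation["PRDX1"] == "up" and regulation["PRDX2"] == "down"
```

With triplicate measurements, an analysis like this would be run on the averaged values per gene before comparing across cell lines.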
“One thing that was really surprising was that this whole family of proteins was affected, though some were up-regulated and some were down-regulated,” Kemp said. “These were all related proteins, so the question is why they would respond differently to the presence of the nanoparticles.”
The researchers aren’t sure how the nanoparticles bind with the cells, but they suspect it may involve the protein corona that surrounds the particles. The corona is made up of serum proteins that normally serve as food for the cells, but adsorb to the nanoparticles in the culture medium. The corona proteins have a protective effect on the cells, but may also serve as a way for the nanoparticles to bind to cell receptors.
Titanium dioxide is well known for its photo-catalytic effects under ultraviolet light, but the researchers don’t think that’s in play here because their culturing was done in ambient light – or in the dark. The individual nanoparticles had diameters of about 21 nanometers, but in cell culture formed much larger aggregates.
In future work, Payne and Kemp hope to learn more about the interaction, including where the enzyme-producing proteins are located in the cells. For that, they may use HyPer-Tau, a reporter protein they developed to track the location of hydrogen peroxide within cells.
The research suggests a re-evaluation may be necessary for other nanoparticles that could create subtle effects even though they’ve been deemed safe.
“Earlier work had suggested that nanoparticles can lead to oxidative stress, but nobody had really looked at this level and at so many different proteins at the same time,” Payne said. “Our research looked at such low concentrations that it does raise questions about what else might be affected. We looked specifically at oxidative stress, but there may be other genes that are affected, too.”
Those subtle differences may matter when they’re added to other factors.
“Oxidative stress is implicated in all kinds of inflammatory and immune responses,” Kemp noted. “While the titanium dioxide alone may just be modulating the expression levels of this family of proteins, if that is happening at the same time you have other types of oxidative stress for different reasons, then you may have a cumulative effect.”
*HeLa cells are named for Henrietta Lacks, who unknowingly donated her immortal cell line to medical research. You can find more about the story on the Oprah Winfrey website, which features an excerpt from the Rebecca Skloot book “The Immortal Life of Henrietta Lacks.” By the way, on May 2, 2016 it was announced that Oprah Winfrey would star in a movie for HBO as Henrietta Lacks’ daughter in an adaptation of the Rebecca Skloot book. You can read more about the proposed production in a May 3, 2016 article by Benjamin Lee for the Guardian.
Getting back to titanium dioxide nanoparticles and their possible long term effects, here’s a link to and a citation for the Georgia Tech team’s paper,
I have two robot news bits for this posting. The first probes the unease currently being expressed (pop culture movies, Stephen Hawking, the Cambridge Centre for Existential Risk, etc.) about robots and their increasing intelligence and increased use in all types of labour formerly and currently performed by humans. The second item is about a research project where ‘artificial agents’ (robots) are being taught human values with stories.
Human labour obsolete?
‘When machines can do any job, what will humans do?’ is the question being asked in a presentation by Rice University computer scientist Moshe Vardi at the American Association for the Advancement of Science (AAAS) annual meeting held in Washington, D.C., from Feb. 11 – 15, 2016.
Rice University computer scientist Moshe Vardi expects that within 30 years, machines will be capable of doing almost any job that a human can. In anticipation, he is asking his colleagues to consider the societal implications. Can the global economy adapt to greater than 50 percent unemployment? Will those out of work be content to live a life of leisure?
“We are approaching a time when machines will be able to outperform humans at almost any task,” Vardi said. “I believe that society needs to confront this question before it is upon us: If machines are capable of doing almost any work humans can do, what will humans do?”
Vardi addressed this issue Sunday [Feb. 14, 2016] in a presentation titled “Smart Robots and Their Impact on Society” at one of the world’s largest and most prestigious scientific meetings — the annual meeting of the American Association for the Advancement of Science in Washington, D.C.
“The question I want to put forward is, Does the technology we are developing ultimately benefit mankind?” Vardi said. He asked the question after presenting a body of evidence suggesting that the pace of advancement in the field of artificial intelligence (AI) is increasing, even as existing robotic and AI technologies are eliminating a growing number of middle-class jobs and thereby driving up income inequality.
Vardi, a member of both the National Academy of Engineering and the National Academy of Sciences, is a Distinguished Service Professor and the Karen Ostrum George Professor of Computational Engineering at Rice, where he also directs Rice’s Ken Kennedy Institute for Information Technology. Since 2008 he has served as the editor-in-chief of Communications of the ACM, the flagship publication of the Association for Computing Machinery (ACM), one of the world’s largest computational professional societies.
Vardi said some people believe that future advances in automation will ultimately benefit humans, just as automation has benefited society since the dawn of the industrial age.
“A typical answer is that if machines will do all our work, we will be free to pursue leisure activities,” Vardi said. But even if the world economic system could be restructured to enable billions of people to live lives of leisure, Vardi questioned whether it would benefit humanity.
“I do not find this a promising future, as I do not find the prospect of leisure-only life appealing. I believe that work is essential to human well-being,” he said.
“Humanity is about to face perhaps its greatest challenge ever, which is finding meaning in life after the end of ‘In the sweat of thy face shalt thou eat bread,’” Vardi said. “We need to rise to the occasion and meet this challenge” before human labor becomes obsolete, he said.
In addition to dual membership in the National Academies, Vardi is a Guggenheim fellow and a member of the American Academy of Arts and Sciences, the European Academy of Sciences and the Academia Europaea. He is a fellow of the ACM, the American Association for Artificial Intelligence and the Institute for Electrical and Electronics Engineers (IEEE). His numerous honors include the Southeastern Universities Research Association’s 2013 Distinguished Scientist Award, the 2011 IEEE Computer Society Harry H. Goode Award, the 2008 ACM Presidential Award, the 2008 Blaise Pascal Medal for Computer Science by the European Academy of Sciences and the 2000 Goedel Prize for outstanding papers in the area of theoretical computer science.
Vardi joined Rice’s faculty in 1993. His research centers upon the application of logic to computer science, database systems, complexity theory, multi-agent systems and specification and verification of hardware and software. He is the author or co-author of more than 500 technical articles and of two books, “Reasoning About Knowledge” and “Finite Model Theory and Its Applications.”
In a Feb. 5, 2015 post, I rounded up a number of articles about our robot future. It provides a still useful overview of the thinking on the topic.
The rapid pace of artificial intelligence (AI) has raised fears about whether robots could act unethically or soon choose to harm humans. Some are calling for bans on robotics research; others are calling for more research to understand how AI might be constrained. But how can robots learn ethical behavior if there is no “user manual” for being human?
Researchers Mark Riedl and Brent Harrison from the School of Interactive Computing at the Georgia Institute of Technology believe the answer lies in “Quixote” — to be unveiled at the AAAI [Association for the Advancement of Artificial Intelligence]-16 Conference in Phoenix, Ariz. (Feb. 12 – 17, 2016). Quixote teaches “value alignment” to robots by training them to read stories, learn acceptable sequences of events and understand successful ways to behave in human societies.
“The collected stories of different cultures teach children how to behave in socially acceptable ways with examples of proper and improper behavior in fables, novels and other literature,” says Riedl, associate professor and director of the Entertainment Intelligence Lab. “We believe story comprehension in robots can eliminate psychotic-appearing behavior and reinforce choices that won’t harm humans and still achieve the intended purpose.”
Quixote is a technique for aligning an AI’s goals with human values by placing rewards on socially appropriate behavior. It builds upon Riedl’s prior research — the Scheherazade system — which demonstrated how artificial intelligence can gather a correct sequence of actions by crowdsourcing story plots from the Internet.
Scheherazade learns what is a normal or “correct” plot graph. It then passes that data structure along to Quixote, which converts it into a “reward signal” that reinforces certain behaviors and punishes other behaviors during trial-and-error learning. In essence, Quixote learns that it will be rewarded whenever it acts like the protagonist in a story instead of randomly or like the antagonist.
For example, if a robot is tasked with picking up a prescription for a human as quickly as possible, the robot could a) rob the pharmacy, take the medicine, and run; b) interact politely with the pharmacists; or c) wait in line. Without value alignment and positive reinforcement, the robot would learn that robbing is the fastest and cheapest way to accomplish its task. With value alignment from Quixote, the robot would be rewarded for waiting patiently in line and paying for the prescription.
Riedl and Harrison demonstrate in their research how a value-aligned reward signal can be produced by uncovering all possible steps in a given scenario and mapping them into a plot trajectory tree, which the robotic agent then uses to make “plot choices” (akin to what humans might remember as a Choose-Your-Own-Adventure novel) and receive rewards or punishments based on its choices.
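A minimal sketch of that idea, assuming a toy plot graph and a simple trial-and-error value learner. The event names, reward rule, and learning update below are my own illustrative choices, not Riedl and Harrison’s implementation:

```python
import random

# Hypothetical sketch: turn a crowdsourced plot graph into a reward signal
# and learn a policy by trial and error. The tiny "plot graph" and the
# event names are invented for illustration.

# Protagonist event sequence distilled from crowdsourced stories:
PLOT_GRAPH = ["enter_pharmacy", "wait_in_line", "pay", "leave"]
ALL_EVENTS = PLOT_GRAPH + ["grab_medicine", "run_out"]

def reward(step, event):
    """+1 when the agent does what the protagonist does at this step, else -1."""
    return 1 if step < len(PLOT_GRAPH) and event == PLOT_GRAPH[step] else -1

# Trial-and-error learner: estimate each event's value at each plot step.
values = {(s, e): 0.0 for s in range(len(PLOT_GRAPH)) for e in ALL_EVENTS}
random.seed(0)
for _ in range(2000):
    for step in range(len(PLOT_GRAPH)):
        event = random.choice(ALL_EVENTS)                  # explore randomly
        r = reward(step, event)
        values[(step, event)] += 0.1 * (r - values[(step, event)])

# After learning, the greedy policy reproduces the protagonist's sequence.
learned = [max(ALL_EVENTS, key=lambda e: values[(step, e)])
           for step in range(len(PLOT_GRAPH))]
assert learned == PLOT_GRAPH
```

The sketch shows why the agent ends up acting like the protagonist rather than the antagonist: only protagonist-matching events accumulate positive value estimates, so the greedy policy avoids the antagonist-style shortcuts even though they were explored during learning.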
The Quixote technique is best for robots that have a limited purpose but need to interact with humans to achieve it, and it is a primitive first step toward general moral reasoning in AI, Riedl says.
“We believe that AI has to be enculturated to adopt the values of a particular society, and in doing so, it will strive to avoid unacceptable behavior,” he adds. “Giving robots the ability to read and understand our stories may be the most expedient means in the absence of a human user manual.”