Gold’s origin in the universe due to cosmic collision

A hypothesis for gold’s origins was first mentioned here in a May 26, 2016 posting,

The link between this research and my side project on gold nanoparticles is a bit tenuous, but this work on the origins of gold and other precious metals found in the stars is so fascinating that I’m determined to find a connection.

An artist’s impression of two neutron stars colliding. (Credit: Dana Berry / Skyworks Digital, Inc.) Courtesy: Kavli Foundation

From a May 19, 2016 news item on phys.org,

The origin of many of the most precious elements on the periodic table, such as gold, silver and platinum, has perplexed scientists for more than six decades. Now a recent study has an answer, evocatively conveyed in the faint starlight from a distant dwarf galaxy.

In a roundtable discussion, published today [May 19, 2016?], The Kavli Foundation spoke to two of the researchers behind the discovery about why the source of these heavy elements, collectively called “r-process” elements, has been so hard to crack.

From the Spring 2016 Kavli Foundation webpage hosting the “Galactic ‘Gold Mine’ Explains the Origin of Nature’s Heaviest Elements” Roundtable,

Astronomers studying a galaxy called Reticulum II have just discovered that its stars contain whopping amounts of these metals—collectively known as “r-process” elements (See “What is the R-Process?”). Of the 10 dwarf galaxies that have been similarly studied so far, only Reticulum II bears such strong chemical signatures. The finding suggests some unusual event took place billions of years ago that created ample amounts of heavy elements and then strewed them throughout the galaxy’s reservoir of gas and dust. This r-process-enriched material then went on to form Reticulum II’s standout stars.

Based on the new study, from a team of researchers at the Kavli Institute at the Massachusetts Institute of Technology, the unusual event in Reticulum II was likely the collision of two, ultra-dense objects called neutron stars. Scientists have hypothesized for decades that these collisions could serve as a primary source for r-process elements, yet the idea had lacked solid observational evidence. Now armed with this information, scientists can further hope to retrace the histories of galaxies based on the contents of their stars, in effect conducting “stellar archeology.”

Researchers have confirmed the hypothesis according to an Oct. 16, 2017 news item on phys.org,

Gold’s origin in the Universe has finally been confirmed, after a gravitational wave source was seen and heard for the first time ever by an international collaboration of researchers, with astronomers at the University of Warwick playing a leading role.

Members of Warwick’s Astronomy and Astrophysics Group, Professor Andrew Levan, Dr Joe Lyman, Dr Sam Oates and Dr Danny Steeghs, led observations which captured the light of two colliding neutron stars, shortly after being detected through gravitational waves – perhaps the most eagerly anticipated phenomenon in modern astronomy.

Marina Koren’s Oct. 16, 2017 article for The Atlantic presents a richly evocative view (Note: Links have been removed),

Some 130 million years ago, in another galaxy, two neutron stars spiraled closer and closer together until they smashed into each other in spectacular fashion. The violent collision produced gravitational waves, cosmic ripples powerful enough to stretch and squeeze the fabric of the universe. There was a brief flash of light a million trillion times as bright as the sun, and then a hot cloud of radioactive debris. The afterglow hung for several days, shifting from bright blue to dull red as the ejected material cooled in the emptiness of space.

Astronomers detected the aftermath of the merger on Earth on August 17. For the first time, they could see the source of universe-warping forces Albert Einstein predicted a century ago. Unlike with black-hole collisions, they had visible proof, and it looked like a bright jewel in the night sky.

But the merger of two neutron stars is more than fireworks. It’s a factory.

Using infrared telescopes, astronomers studied the spectra—the chemical composition of cosmic objects—of the collision and found that the plume ejected by the merger contained a host of newly formed heavy chemical elements, including gold, silver, platinum, and others. Scientists estimate the amount of cosmic bling totals about 10,000 Earth-masses of heavy elements.

I’m not sure exactly what this image signifies, but it did accompany Koren’s article, so presumably it’s a representation of colliding neutron stars,

NSF / LIGO / Sonoma State University /A. Simonnet. Downloaded from: https://www.theatlantic.com/science/archive/2017/10/the-making-of-cosmic-bling/543030/

An Oct. 16, 2017 University of Warwick press release (also on EurekAlert), which originated the news item on phys.org, provides more detail,

Huge amounts of gold, platinum, uranium and other heavy elements were created in the collision of these compact stellar remnants, and were pumped out into the universe – unlocking the mystery of how gold on wedding rings and jewellery is originally formed.

The collision produced as much gold as the mass of the Earth. [emphasis mine]

This discovery has also confirmed conclusively that short gamma-ray bursts are directly caused by the merging of two neutron stars.

The neutron stars were very dense – as heavy as our Sun yet only 10 kilometres across – and they collided with each other 130 million years ago, when dinosaurs roamed the Earth, in a relatively old galaxy that was no longer forming many stars.

They drew towards each other over millions of light years, and revolved around each other increasingly quickly as they got closer – eventually spinning around each other five hundred times per second.

Their merging sent ripples through the fabric of space and time – and these ripples are the elusive gravitational waves spotted by the astronomers.

The gravitational waves were detected by the Advanced Laser Interferometer Gravitational-Wave Observatory (Adv-LIGO) on 17 August this year [2017], with a short duration gamma-ray burst detected by the Fermi satellite just two seconds later.

This led to a flurry of observations as night fell in Chile, with a first report of a new source from the Swope 1m telescope.

Longstanding collaborators Professor Levan and Professor Nial Tanvir (from the University of Leicester) used the facilities of the European Southern Observatory to pinpoint the source in infrared light.

Professor Levan’s team was the first one to get observations of this new source with the Hubble Space Telescope. It comes from a galaxy called NGC 4993, 130 million light years away.

Andrew Levan, Professor in the Astronomy & Astrophysics group at the University of Warwick, commented: “Once we saw the data, we realised we had caught a new kind of astrophysical object. This ushers in the era of multi-messenger astronomy, it is like being able to see and hear for the first time.”

Dr Joe Lyman, who was observing at the European Southern Observatory at the time, was the first to alert the community that the source was unlike any seen before.

He commented: “The exquisite observations obtained in a few days showed we were observing a kilonova, an object whose light is powered by extreme nuclear reactions. This tells us that the heavy elements, like the gold or platinum in jewellery are the cinders, forged in the billion degree remnants of a merging neutron star.”

Dr Samantha Oates added: “This discovery has answered three questions that astronomers have been puzzling for decades: what happens when neutron stars merge? What causes the short duration gamma-ray bursts? Where are the heavy elements, like gold, made? In the space of about a week all three of these mysteries were solved.”

Dr Danny Steeghs said: “This is a new chapter in astrophysics. We hope that in the next few years we will detect many more events like this. Indeed, in Warwick we have just finished building a telescope designed to do just this job, and we expect it to pinpoint these sources in this new era of multi-messenger astronomy”.

Congratulations to all of the researchers involved in this work!

Many, many research teams were involved. Here’s a sampling of their news releases which focus on their areas of research,

University of the Witwatersrand (South Africa)

https://www.eurekalert.org/pub_releases/2017-10/uotw-wti101717.php

Weizmann Institute of Science (Israel)

https://www.eurekalert.org/pub_releases/2017-10/wios-cns101717.php

Carnegie Institution for Science (US)

https://www.eurekalert.org/pub_releases/2017-10/cifs-dns101217.php

Northwestern University (US)

https://www.eurekalert.org/pub_releases/2017-10/nu-adc101617.php

National Radio Astronomy Observatory (US)

https://www.eurekalert.org/pub_releases/2017-10/nrao-ru101317.php

Max-Planck-Gesellschaft (Germany)

https://www.eurekalert.org/pub_releases/2017-10/m-gwf101817.php

Penn State (Pennsylvania State University; US)

https://www.eurekalert.org/pub_releases/2017-10/ps-stl101617.php

University of California – Davis

https://www.eurekalert.org/pub_releases/2017-10/uoc--cns101717.php

The American Association for the Advancement of Science’s (AAAS) magazine, Science, has published seven papers on this research. Here’s an Oct. 16, 2017 AAAS news release with an overview of the papers,

https://www.eurekalert.org/pub_releases/2017-10/aaft-btf101617.php

I’m sure there are more news releases out there and that there will be many more papers published in many journals, so if this interests you, I encourage you to keep looking.

Two final pieces I’d like to draw your attention to: one answers basic questions and another focuses on how artists knew what to draw when neutron stars collide.

Keith A Spencer’s Oct. 18, 2017 piece on salon.com answers a lot of basic questions for those of us who don’t have a background in astronomy. Here are a couple of examples,

What is a neutron star?

Okay, you know how atoms have protons, neutrons, and electrons in them? And you know how protons are positively charged, and electrons are negatively charged, and neutrons are neutral?

Yeah, I remember that from watching Bill Nye as a kid.

Totally. Anyway, have you ever wondered why the negatively-charged electrons and the positively-charged protons don’t just merge into each other and form a neutral neutron? I mean, they’re sitting there in the atom’s nucleus pretty close to each other. Like, if you had two magnets that close, they’d stick together immediately.

I guess now that you mention it, yeah, it is weird.

Well, it’s because there’s another force deep in the atom that’s preventing them from merging.

It’s really really strong.

The only way to overcome this force is to have a huge amount of matter in a really hot, dense space — basically shove them into each other until they give up and stick together and become a neutron. This happens in very large stars that have been around for a while — the core collapses, and in the aftermath, the electrons in the star are so close to the protons, and under so much pressure, that they suddenly merge. There’s a big explosion and the outer material of the star is sloughed off.

Okay, so you’re saying under a lot of pressure and in certain conditions, some stars collapse and become big balls of neutrons?

Pretty much, yeah.

So why do the neutrons just stick around in a huge ball? Aren’t they neutral? What’s keeping them together? 

Gravity, mostly. But also the strong nuclear force, that aforementioned weird strong force. This isn’t something you’d encounter on a macroscopic scale — the strong force only really works at the type of distances typified by particles in atomic nuclei. And it’s different, fundamentally, than the electromagnetic force, which is what makes magnets attract and repel and what makes your hair stick up when you rub a balloon on it.

So these neutrons in a big ball are bound by gravity, but also sticking together by virtue of the strong nuclear force. 

So basically, the new ball of neutrons is really small, at least, compared to how heavy it is. That’s because the neutrons are all clumped together as if this neutron star is one giant atomic nucleus — which it kinda is. It’s like a giant atom made only of neutrons. If our sun were a neutron star, it would be less than 20 miles wide. It would also not be something you would ever want to get near.

Got it. That means two giant balls of neutrons that weighed like, more than our sun and were only ten-ish miles wide, suddenly smashed into each other, and in the aftermath created a black hole, and we are just now detecting it on Earth?

Exactly. Pretty weird, no?

Spencer does a good job of gradually taking you through increasingly complex explanations.
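For anyone who wants to put numbers to Spencer’s description (roughly the mass of our Sun packed into a ball 10 to 20 kilometres across), here’s a rough back-of-the-envelope calculation; the figures are approximations for illustration, not values from the research papers,

```python
# Rough back-of-the-envelope density of a neutron star.
# Illustrative numbers only: ~1 solar mass in a sphere ~10 km across.
import math

solar_mass_kg = 1.989e30          # approximate mass of the Sun
radius_m = 5_000.0                # 10 km across -> 5 km radius

volume_m3 = (4.0 / 3.0) * math.pi * radius_m**3
density = solar_mass_kg / volume_m3          # kg per cubic metre

print(f"Average density: {density:.1e} kg/m^3")                 # roughly 4e18 kg/m^3
print(f"Mass of a teaspoon (~5 mL): {density * 5e-6:.1e} kg")   # tens of billions of tonnes
```

That teaspoon works out to something like 20 billion tonnes, which is why the press releases keep reaching for superlatives.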

For those with artistic interests, Neel V. Patel tries to answer a question about how artists knew what to draw when neutron stars collided in his Oct. 18, 2017 piece for Slate.com,

All of these things make this discovery easy to marvel at and somewhat impossible to picture. Luckily, artists have taken up the task of imagining it for us, which you’ve likely seen if you’ve already stumbled on coverage of the discovery. Two bright, furious spheres of light and gas spiraling quickly into one another, resulting in a massive swell of lit-up matter along with light and gravitational waves rippling off speedily in all directions, towards parts unknown. These illustrations aren’t just alluring interpretations of a rare phenomenon; they are, to some extent, the translation of raw data and numbers into a tangible visual that gives scientists and nonscientists alike some way of grasping what just happened. But are these visualizations realistic? Is this what it actually looked like? No one has any idea. Which is what makes the scientific illustrators’ work all the more fascinating.

“My goal is to represent what the scientists found,” says Aurore Simmonet, a scientific illustrator based at Sonoma State University in Rohnert Park, California. Even though she said she doesn’t have a rigorous science background (she certainly didn’t know what a kilonova was before being tasked to illustrate one), she also doesn’t believe that type of experience is an absolute necessity. More critical, she says, is for the artist to have an interest in the subject matter and in learning new things, as well as a capacity to speak directly to scientists about their work.

Illustrators like Simmonet usually start off work on an illustration by asking the scientist what’s the biggest takeaway a viewer should grasp when looking at a visual. Unfortunately, this latest discovery yielded a multitude of papers emphasizing different conclusions and highlights. With so many scientific angles, there’s a stark challenge in trying to cram every important thing into a single drawing.

Clearly, however, the illustrations needed to center around the kilonova. Simmonet loves colors, so she began by discussing with the researchers what kind of color scheme would work best. The smash of two neutron stars lends itself well to deep, vibrant hues. Simmonet and Robin Dienel at the Carnegie Institution for Science elected to use a wide array of colors and drew bright cracking to show pressure forming at the merging. Others, like Luis Calcada at the European Southern Observatory, limited the color scheme in favor of emphasizing the bright moment of collision and the signal waves created by the kilonova.

Animators have even more freedom to show the event, since they have much more than a single frame to play with. The Conceptual Image Lab at NASA’s [US National Aeronautics and Space Administration] Goddard Space Flight Center created a short video about the new findings, and lead animator Brian Monroe says the video he and his colleagues designed shows off the evolution of the entire process: the rising action, climax, and resolution of the kilonova event.

The illustrators try to adhere to what the likely physics of the event entailed, soliciting feedback from the scientists to make sure they’re getting it right. The swirling of gas, the direction of ejected matter upon impact, the reflection of light, the proportions of the objects—all of these things are deliberately framed such that they make scientific sense. …

Do take a look at Patel’s piece, if for no other reason than to see all of the images he has embedded there. You may recognize Aurore Simmonet’s name from the credit line in the second image I have embedded here.

From the memristor to the atomristor?

I’m going to let Michael Berger explain the memristor (from Berger’s Jan. 2, 2017 Nanowerk Spotlight article),

In trying to bring brain-like (neuromorphic) computing closer to reality, researchers have been working on the development of memory resistors, or memristors, which are resistors in a circuit that ‘remember’ their state even if you lose power.

Today, most computers use random access memory (RAM), which moves very quickly as a user works but does not retain unsaved data if power is lost. Flash drives, on the other hand, store information when they are not powered but work much slower. Memristors could provide a memory that is the best of both worlds: fast and reliable.

He goes on to discuss a team at the University of Texas at Austin’s work on creating an extraordinarily thin memristor: an atomristor,

The team’s work features the thinnest memory devices and it appears to be a universal effect available in all semiconducting 2D monolayers.

The scientists explain that the unexpected discovery of nonvolatile resistance switching (NVRS) in monolayer transition metal dichalcogenides (MoS2, MoSe2, WS2, WSe2) is likely due to the inherent layered crystalline nature that produces sharp interfaces and clean tunnel barriers. This prevents excessive leakage and affords a stable phenomenon so that NVRS can be used for existing memory and computing applications.

“Our work opens up a new field of research in exploiting defects at the atomic scale, and can advance existing applications such as future generation high density storage, and 3D cross-bar networks for neuromorphic memory computing,” notes Akinwande [Deji Akinwande, an Associate Professor at the University of Texas at Austin]. “We also discovered a completely new application, which is non-volatile switching for radio-frequency (RF) communication systems. This is a rapidly emerging field because of the massive growth in wireless technologies and the need for very low-power switches. Our devices consume no static power, an important feature for battery life in mobile communication systems.”

Here’s a link to and a citation for the Akinwande team’s paper,

Atomristor: Nonvolatile Resistance Switching in Atomic Sheets of Transition Metal Dichalcogenides by Ruijing Ge, Xiaohan Wu, Myungsoo Kim, Jianping Shi, Sushant Sonde, Li Tao, Yanfeng Zhang, Jack C. Lee, and Deji Akinwande. Nano Lett., Article ASAP DOI: 10.1021/acs.nanolett.7b04342 Publication Date (Web): December 13, 2017

Copyright © 2017 American Chemical Society

This paper appears to be open access.
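Since memristors come up again in the next two items, here’s a minimal sketch of the behaviour Berger describes: a resistor whose resistance depends on the charge that has flowed through it and that keeps that state when power is removed. It uses the generic textbook ‘linear drift’ picture with arbitrary parameter values; it is not a model of the atomristor devices themselves,

```python
# Minimal sketch of a charge-controlled memristor (textbook linear-drift style).
# Parameter values are arbitrary; this is illustrative, not the atomristor physics.
R_ON, R_OFF = 100.0, 16_000.0    # low- and high-resistance states (ohms)
Q_MAX = 1e-4                     # charge needed to sweep fully from OFF to ON (coulombs)

def memristance(q):
    """Resistance as a function of the total charge q that has passed through."""
    x = min(max(q / Q_MAX, 0.0), 1.0)         # internal state, clipped to [0, 1]
    return R_OFF - (R_OFF - R_ON) * x

# Drive the device with a constant current and watch the resistance drop.
q, dt, current = 0.0, 1e-3, 1e-3              # coulombs, seconds, amperes
for step in range(6):
    print(f"t={step * dt * 1e3:4.1f} ms  R={memristance(q):8.1f} ohm")
    q += current * dt                         # charge accumulates -> state is "remembered"
```

Stop driving it and the last resistance value stays put, which is the ‘memory’ in memristor.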

Leftover 2017 memristor news bits

I have two bits of news: one from October 2017 about using light to control a memristor’s learning properties and one from December 2017 about memristors and neural networks.

Shining a light on the memristor

Michael Berger wrote an October 30, 2017 Nanowerk Spotlight article about some of the latest work concerning memristors and light,

Memristors – or resistive memory – are nanoelectronic devices that are very promising components for next generation memory and computing devices. They are two-terminal electric elements similar to a conventional resistor – however, the electric resistance in a memristor is dependent on the charge passing through it; which means that its conductance can be precisely modulated by charge or flux through it. Its special property is that its resistance can be programmed (resistor function) and subsequently remains stored (memory function).

In this sense, a memristor is similar to a synapse in the human brain because it exhibits the same switching characteristics, i.e. it is able, with a high level of plasticity, to modify the efficiency of signal transfer between neurons under the influence of the transfer itself. That’s why researchers are hopeful to use memristors for the fabrication of electronic synapses for neuromorphic (i.e. brain-like) computing that mimics some of the aspects of learning and computation in human brains.

Human brains may be slow at pure number crunching but they are excellent at handling fast dynamic sensory information such as image and voice recognition. Walking is something that we take for granted but this is quite challenging for robots, especially over uneven terrain.

“Memristors present an opportunity to make new types of computers that are different from existing von Neumann architectures, which traditional computers are based upon,” Dr Neil T. Kemp, a Lecturer in Physics at the University of Hull [UK], tells Nanowerk. “Our team at the University of Hull is focussed on making memristor devices dynamically reconfigurable and adaptive – we believe this is the route to making a new generation of artificial intelligence systems that are smarter and can exhibit complex behavior. Such systems would also have the advantage of memristors, high density integration and lower power usage, so these systems would be more lightweight, portable and not need re-charging so often – which is something really needed for robots etc.”

In their new paper in Nanoscale (“Reversible Optical Switching Memristors with Tunable STDP Synaptic Plasticity: A Route to Hierarchical Control in Artificial Intelligent Systems”), Kemp and his team demonstrate the ability to reversibly control the learning properties of memristors via optical means.

The reversibility is achieved by changing the polarization of light. The researchers have used this effect to demonstrate tuneable learning in a memristor. One way this is achieved is through something called Spike Timing Dependent Plasticity (STDP), which is an effect known to occur in human brains and is linked with sensory perception, spatial reasoning, language and conscious thought in the neocortex.

STDP learning is based upon differences in the arrival time of signals from two adjacent neurons. The University of Hull team has shown that they can modulate the synaptic plasticity via optical means which enables the devices to have tuneable learning.

“Our research findings are important because it demonstrates that light can be used to control the learning properties of a memristor,” Kemp points out. “We have shown that light can be used in a reversible manner to change the connection strength (or conductivity) of artificial memristor synapses and as well control their ability to forget i.e. we can dynamically change device to have short-term or long-term memory.”

According to the team, there are many potential applications, such as adaptive electronic circuits controllable via light, or in more complex systems, such as neuromorphic computing, the development of optically reconfigurable neural networks.

Having optically controllable memristors can also facilitate the implementation of hierarchical control in larger artificial-brain like systems, whereby some of the key processes that are carried out by biological molecules in human brains can be emulated in solid-state devices through patterning with light.

Some of these processes include synaptic pruning, conversion of short term memory to long term memory, erasing of certain memories that are no longer needed or changing the sensitivity of synapses to be more adept at learning new information.

“The ability to control this dynamically, both spatially and temporally, is particularly interesting since it would allow neural networks to be reconfigurable on the fly through either spatial patterning or by adjusting the intensity of the light source,” notes Kemp.

Currently, the devices are more suited to neuromorphic computing applications, which do not need to be as fast. Optical control of memristors opens the route to dynamically tuneable and reprogrammable synaptic circuits, as well as the ability (via optical patterning) to have hierarchical control in larger and more complex artificial intelligent systems.

“Artificial Intelligence is really starting to come on strong in many areas, especially in the areas of voice/image recognition and autonomous systems – we could even say that this is the next revolution, similarly to what the industrial revolution was to farming and production processes,” concludes Kemp. “There are many challenges to overcome though. …

That excerpt should give you the gist of Berger’s article and, for those who need more information, there’s the full article itself. Also, here’s a link to and a citation for the paper,

Reversible optical switching memristors with tunable STDP synaptic plasticity: a route to hierarchical control in artificial intelligent systems by Ayoub H. Jaafar, Robert J. Gray, Emanuele Verrelli, Mary O’Neill, Stephen M. Kelly, and Neil T. Kemp. Nanoscale, 2017, 9, 17091–17098 DOI: 10.1039/C7NR06138B First published on 24 Oct 2017

This paper is behind a paywall.
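Since spike timing dependent plasticity (STDP) is central to the Hull work, here’s a small sketch of the generic STDP rule being emulated: a presynaptic spike that arrives just before a postsynaptic spike strengthens the connection, while one that arrives just after weakens it. The exponential window and the constants are textbook-style placeholders, not values from the Nanoscale paper,

```python
# Generic STDP weight update (textbook exponential window; values are illustrative).
import math

A_PLUS, A_MINUS = 0.05, 0.055   # maximum potentiation / depression
TAU = 20.0                      # time constant of the STDP window, in ms

def stdp_delta_w(t_pre_ms, t_post_ms):
    """Change in synaptic weight for one pre/post spike pair."""
    dt = t_post_ms - t_pre_ms
    if dt > 0:     # pre fires before post -> potentiation (strengthen)
        return A_PLUS * math.exp(-dt / TAU)
    elif dt < 0:   # pre fires after post -> depression (weaken)
        return -A_MINUS * math.exp(dt / TAU)
    return 0.0

for dt in (-40, -10, -2, 2, 10, 40):
    print(f"post - pre = {dt:+4d} ms  ->  delta_w = {stdp_delta_w(0, dt):+.4f}")
```

In a memristive synapse, that delta_w would be applied by nudging the device’s conductance up or down; in the Hull experiments, it is the shape and sign of this learning window that the polarized light tunes.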

The memristor and the neural network

It would seem machine learning could experience a significant upgrade if the work in Wei Lu’s University of Michigan laboratory can be scaled for general use. From a December 22, 2017 news item on ScienceDaily,

A new type of neural network made with memristors can dramatically improve the efficiency of teaching machines to think like humans.

The network, called a reservoir computing system, could predict words before they are said during conversation, and help predict future outcomes based on the present.

The research team that created the reservoir computing system, led by Wei Lu, professor of electrical engineering and computer science at the University of Michigan, recently published their work in Nature Communications.

A December 19, 2017 University of Michigan news release (also on EurekAlert) by Dan Newman, which originated the news item, expands on the theme,

Reservoir computing systems, which improve on a typical neural network’s capacity and reduce the required training time, have been created in the past with larger optical components. However, the U-M group created their system using memristors, which require less space and can be integrated more easily into existing silicon-based electronics.

Memristors are a special type of resistive device that can both perform logic and store data. This contrasts with typical computer systems, where processors perform logic separate from memory modules. In this study, Lu’s team used a special memristor that memorizes events only in the near history.

Inspired by brains, neural networks are composed of neurons, or nodes, and synapses, the connections between nodes.

To train a neural network for a task, a neural network takes in a large set of questions and the answers to those questions. In this process of what’s called supervised learning, the connections between nodes are weighted more heavily or lightly to minimize the amount of error in achieving the correct answer.

Once trained, a neural network can then be tested without knowing the answer. For example, a system can process a new photo and correctly identify a human face, because it has learned the features of human faces from other photos in its training set.

“A lot of times, it takes days or months to train a network,” says Lu. “It is very expensive.”

Image recognition is also a relatively simple problem, as it doesn’t require any information apart from a static image. More complex tasks, such as speech recognition, can depend highly on context and require neural networks to have knowledge of what has just occurred, or what has just been said.

“When transcribing speech to text or translating languages, a word’s meaning and even pronunciation will differ depending on the previous syllables,” says Lu.

This requires a recurrent neural network, which incorporates loops within the network that give the network a memory effect. However, training these recurrent neural networks is especially expensive, Lu says.

Reservoir computing systems built with memristors, however, can skip most of the expensive training process and still provide the network the capability to remember. This is because the most critical component of the system – the reservoir – does not require training.

When a set of data is inputted into the reservoir, the reservoir identifies important time-related features of the data, and hands it off in a simpler format to a second network. This second network then only needs training like simpler neural networks, changing weights of the features and outputs that the first network passed on until it achieves an acceptable level of error.

IMAGE:  Schematic of a reservoir computing system, showing the reservoir with internal dynamics and the simpler output. Only the simpler output needs to be trained, allowing for quicker and lower-cost training. Courtesy Wei Lu.

 

“The beauty of reservoir computing is that while we design it, we don’t have to train it,” says Lu.

The team proved the reservoir computing concept using a test of handwriting recognition, a common benchmark among neural networks. Numerals were broken up into rows of pixels, and fed into the computer with voltages like Morse code, with zero volts for a dark pixel and a little over one volt for a white pixel.

Using only 88 memristors as nodes to identify handwritten versions of numerals, compared to a conventional network that would require thousands of nodes for the task, the reservoir achieved 91% accuracy.

Reservoir computing systems are especially adept at handling data that varies with time, like a stream of data or words, or a function depending on past results.

To demonstrate this, the team tested a complex function that depended on multiple past results, which is common in engineering fields. The reservoir computing system was able to model the complex function with minimal error.

Lu plans on exploring two future paths with this research: speech recognition and predictive analysis.

“We can make predictions on natural spoken language, so you don’t even have to say the full word,” explains Lu.

“We could actually predict what you plan to say next.”

In predictive analysis, Lu hopes to use the system to take in signals with noise, like static from far-off radio stations, and produce a cleaner stream of data. “It could also predict and generate an output signal even if the input stopped,” he says.

IMAGE:  Wei Lu, Professor of Electrical Engineering & Computer Science at the University of Michigan holds a memristor he created. Photo: Marcin Szczepanski.

 

The work was published in Nature Communications in the article, “Reservoir computing using dynamic memristors for temporal information processing”, with authors Chao Du, Fuxi Cai, Mohammed Zidan, Wen Ma, Seung Hwan Lee, and Prof. Wei Lu.

The research is part of a $6.9 million DARPA [US Defense Advanced Research Projects Agency] project, called “Sparse Adaptive Local Learning for Sensing and Analytics [also known as SALLSA],” that aims to build a computer chip based on self-organizing, adaptive neural networks. The memristor networks are fabricated at Michigan’s Lurie Nanofabrication Facility.

Lu and his team previously used memristors in implementing “sparse coding,” which used a 32-by-32 array of memristors to efficiently analyze and recreate images.

Here’s a link to and a citation for the paper,

Reservoir computing using dynamic memristors for temporal information processing by Chao Du, Fuxi Cai, Mohammed A. Zidan, Wen Ma, Seung Hwan Lee & Wei D. Lu. Nature Communications 8, Article number: 2204 (2017) doi:10.1038/s41467-017-02337-y Published online: 19 December 2017

This is an open access paper.
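The division of labour described in the news release (a fixed reservoir that transforms time-varying inputs, plus a small readout layer that is the only part needing training) is easy to see in a software ‘echo state network’. The sketch below is a generic software analogue with made-up dimensions and a toy task; the Michigan system implements the reservoir in memristor hardware and was tested on handwritten digits, not a sine wave,

```python
# Tiny echo state network: a fixed random "reservoir" plus a trained linear readout.
# Dimensions, task and scaling are made up for illustration; the paper's reservoir
# is built from memristors, not simulated in software like this.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 100

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))          # fixed input weights
W_res = rng.uniform(-0.5, 0.5, (n_res, n_res))
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))      # scale for stable dynamics

def run_reservoir(u_seq):
    """Collect reservoir states for an input sequence (no training happens here)."""
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = np.tanh(W_in @ np.atleast_1d(u) + W_res @ x)   # short-term memory of past inputs
        states.append(x.copy())
    return np.array(states)

# Toy temporal task: predict the next sample of a sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
u, y = np.sin(t[:-1]), np.sin(t[1:])
X = run_reservoir(u)

W_out, *_ = np.linalg.lstsq(X, y, rcond=None)           # ONLY the readout is trained
print("readout training error:", np.sqrt(np.mean((X @ W_out - y) ** 2)))
```

Only the lstsq line does any training; everything inside run_reservoir stays fixed, which is where the savings in training time come from.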

Mott memristor

Mott memristors (mentioned in my Aug. 24, 2017 posting about neuristors and brainlike computing) get a fuller treatment in an Oct. 9, 2017 posting by Samuel K. Moore on the Nanoclast blog (found on the IEEE [Institute of Electrical and Electronics Engineers] website). Note 1: Links have been removed; Note 2: I quite like Moore’s writing style, but he’s not for the impatient reader,

When you’re really harried, you probably feel like your head is brimful of chaos. You’re pretty close. Neuroscientists say your brain operates in a regime termed the “edge of chaos,” and it’s actually a good thing. It’s a state that allows for fast, efficient analog computation of the kind that can solve problems that grow vastly more difficult as they become bigger in size.

The trouble is, if you’re trying to replicate that kind of chaotic computation with electronics, you need an element that both acts chaotically—how and when you want it to—and could scale up to form a big system.

“No one had been able to show chaotic dynamics in a single scalable electronic device,” says Suhas Kumar, a researcher at Hewlett Packard Labs, in Palo Alto, Calif. Until now, that is.

He, John Paul Strachan, and R. Stanley Williams recently reported in the journal Nature that a particular configuration of a certain type of memristor contains that seed of controlled chaos. What’s more, when they simulated wiring these up into a type of circuit called a Hopfield neural network, the circuit was capable of solving a ridiculously difficult problem—1,000 instances of the traveling salesman problem—at a rate of 10 trillion operations per second per watt.

(It’s not an apples-to-apples comparison, but the world’s most powerful supercomputer as of June 2017 managed 93,015 trillion floating point operations per second but consumed 15 megawatts doing it. So about 6 billion operations per second per watt.)

The device in question is called a Mott memristor. Memristors generally are devices that hold a memory, in the form of resistance, of the current that has flowed through them. The most familiar type is called resistive RAM (or ReRAM or RRAM, depending on who’s asking). Mott memristors have an added ability in that they can also reflect a temperature-driven change in resistance.

The HP Labs team made their memristor from an 8-nanometer-thick layer of niobium dioxide (NbO2) sandwiched between two layers of titanium nitride. The bottom titanium nitride layer was in the form of a 70-nanometer wide pillar. “We showed that this type of memristor can generate chaotic and nonchaotic signals,” says Williams, who invented the memristor based on theory by Leon Chua.

(The traveling salesman problem is one of these. In it, the salesman must find the shortest route that lets him visit all of his customers’ cities, without going through any of them twice. It’s a difficult problem because it becomes exponentially more difficult to solve with each city you add.)
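As Moore says, it’s not an apples-to-apples comparison, but the arithmetic behind his parenthetical is easy to check,

```python
# Reproducing the efficiency comparison in Moore's article (his figures, rounded).
hopfield_ops_per_s_per_watt = 10e12            # 10 trillion operations/s/W (simulated circuit)

supercomputer_flops = 93_015e12                # 93,015 trillion floating point ops/s
supercomputer_watts = 15e6                     # 15 megawatts
supercomputer_ops_per_s_per_watt = supercomputer_flops / supercomputer_watts

print(f"Supercomputer: {supercomputer_ops_per_s_per_watt:.2e} ops/s/W")   # ~6.2e9, i.e. ~6 billion
print(f"Ratio: {hopfield_ops_per_s_per_watt / supercomputer_ops_per_s_per_watt:.0f}x")
```

That’s roughly a factor of 1,600 in favour of the simulated memristor circuit, at least on that one problem.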

Here’s what the niobium dioxide-based Mott memristor looks like,

Photo: Suhas Kumar/Hewlett Packard Labs
A micrograph shows the construction of a Mott memristor composed of an 8-nanometer-thick layer of niobium dioxide between two layers of titanium nitride.

Here’s a link to and a citation for the paper,

Chaotic dynamics in nanoscale NbO2 Mott memristors for analogue computing by Suhas Kumar, John Paul Strachan & R. Stanley Williams. Nature 548, 318–321 (17 August 2017) doi:10.1038/nature23307 Published online: 09 August 2017

This paper is behind a paywall.

FrogHeart’s good-bye to 2017 and hello to 2018

This is going to be relatively short and sweet(ish). Starting with the 2017 review:

Nano blogosphere and the Canadian blogosphere

From my perspective there’s been a change taking place in the nano blogosphere over the last few years. There are fewer blogs along with fewer postings from those who still blog. Interestingly, some blogs are becoming more generalized. At the same time, Foresight Institute’s Nanodot blog (as has FrogHeart) has expanded its range of topics to include artificial intelligence and other subjects. Andrew Maynard’s 2020 Science blog now exists in an archived form but, before its demise, it, too, had started to include other topics, notably risk in its many forms as opposed to risk and nanomaterials. Dexter Johnson’s blog, Nanoclast (on the IEEE [Institute of Electrical and Electronics Engineers] website), maintains its 3x weekly postings. Tim Harper, who often wrote about nanotechnology on his Cientifica blog, appears to have found a more freewheeling approach that is dominated by his Twitter feed, although he also seems (I can’t confirm that the latest posts were written in 2017) to blog here on timharper.net.

The Canadian science blogosphere seems to be getting quieter, if Science Borealis (a blog aggregator) is any measure. My overall impression is that the bloggers have been a bit quieter this year, with fewer postings on the feed, or perhaps that’s due to some technical issues (sometimes FrogHeart posts do not get onto the feed). On the promising side, Science Borealis teamed with the Science Writers and Communicators of Canada Association to run a contest, “2017 People’s Choice Awards: Canada’s Favourite Science Online!” There were two categories (Favourite Science Blog and Favourite Science Site) and you can find a list of the finalists with links to the winners here.

Big congratulations to the winners: Body of Evidence won Canada’s Favourite Blog 2017 (see the Dec. 6, 2017 article by Alina Fisher for Science Borealis), and Let’s Talk Science won the Canada’s Favourite Science Online 2017 category, as per this announcement.

However, I can’t help wondering: where were ASAP Science, Acapella Science, Quirks & Quarks, IFLS (I f***ing love science), and others on the list of finalists? I would have thought any of these would have a lock on a position as a finalist. These are Canadian online science purveyors and they are hugely popular, which should mean they’d have no problem getting nominated and getting votes. I can’t find the criteria for nominations (or any hint there will be a 2018 contest), so I imagine their absence from the 2017 finalists list will remain a mystery to me.

Looking forward to 2018, I think that the nano blogosphere will continue with its transformation into a more general science/technology-oriented community. To some extent, I believe this reflects the fact that nanotechnology is being absorbed into the larger science/technology effort as foundational (something wiser folks than me predicted some years ago).

As for Science Borealis and the Canadian science online effort, I’m going to interpret the quieter feeds as a sign of a maturing community. After all, there are always ups and downs in terms of enthusiasm and participation and as I noted earlier the launch of an online contest is promising as is the collaboration with Science Writers and Communicators of Canada.

Canadian science policy

It was a big year.

Canada’s Chief Science Advisor

With the announcement of Canada’s first chief science advisor in many years, Dr. Mona Nemer stepped into her position sometime in Fall 2017. The official announcement was made on Sept. 26, 2017. I covered the event in my Sept. 26, 2017 posting, which includes a few more details than are found in the official announcement.

You’ll also find in that Sept. 26, 2017 posting a brief discourse on the Naylor report (also known as the Review of Fundamental Science) and some speculation on why, to my knowledge, there has been no action taken as a consequence.  The Naylor report was released April 10, 2017 and was covered here in a three-part review, published on June 8, 2017,

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 1 of 3

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 2 of 3

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 3 of 3

I have found another commentary (much briefer than mine) by Paul Dufour on the Canadian Science Policy Centre website (November 9, 2017).

Subnational and regional science funding

This began in 2016 with a workshop mentioned in my November 10, 2016 posting, “Council of Canadian Academies and science policy for Alberta.” By the time the report was published, the endeavour had been transformed into Science Policy: Considerations for Subnational Governments (report here and my June 22, 2017 commentary here).

I don’t know what will come of this but I imagine scientists will be supportive as it means more money, and they are always looking for more money. Still, the new government in British Columbia has only one ‘science entity’ and I’m not sure it’s still operational, but it was called the Premier’s Technology Council. To my knowledge, there is no ministry or other agency that is focused primarily or partially on science.

Meanwhile, a couple of representatives from the health sciences (neither of whom were involved in the production of the report) seem quite enthused about the prospects for provincial money in their October 27, 2017 opinion piece for the Canadian Science Policy Centre (Bev Holmes, interim CEO, Michael Smith Foundation for Health Research, British Columbia, and Patrick Odnokon, CEO, Saskatchewan Health Research Foundation).

Artificial intelligence and Canadians

An event which I find more interesting with time was the announcement of the Pan-Canadian Artificial Intelligence Strategy in the 2017 Canadian federal budget. Since then there has been a veritable gold rush mentality with regard to artificial intelligence in Canada, with one announcement after the next about various corporations opening new offices in Toronto or Montréal.

What has really piqued my interest recently is a report being written for Canada’s Treasury Board by Michael Karlin (you can learn more from his Twitter feed, although you may need to scroll down past some of his more personal tweets; something about cassoulet in the Dec. 29, 2017 tweets). As for Karlin’s report, which is a work in progress, you can find out more about the report and Karlin in a December 12, 2017 article by Rob Hunt for the Algorithmic Media Observatory (sponsored by the Social Sciences and Humanities Research Council of Canada [SSHRC], the Centre for the Study of Democratic Citizenship, and the Fonds de recherche du Québec: Société et culture).

You can ring in 2018 by reading and making comments, which could influence the final version, on Karlin’s “Responsible Artificial Intelligence in the Government of Canada” part of the government’s Digital Disruption White Paper Series.

As for other 2018 news, the Council of Canadian Academies is expected to publish “The State of Science and Technology and Industrial Research and Development in Canada” at some point soon (we hope). This report follows and incorporates two previous ‘states’, The State of Science and Technology in Canada, 2012 (the first of these was a 2006 report) and the 2013 version of The State of Industrial R&D in Canada. There is already some preliminary data for this latest ‘state of’  (you can find a link and commentary in my December 15, 2016 posting).

FrogHeart then (2017) and soon (2018)

On looking back I see that the year started out at quite a clip as I was attempting to hit the 5000th blog posting mark, which I did on March 3, 2017. I have cut back somewhat from the high of 3 postings/day to approximately 1 posting/day. It makes things more manageable, allowing me to focus on other matters.

By the way, you may note that the ‘Donate’ button has disappeared from my sidebar. I thank everyone who donated from the bottom of my heart. The money was more than currency; it also symbolized encouragement. On the sad side, I moved from one hosting service to a new one (Sibername) late in December 2016 and have been experiencing serious bandwidth issues, which result in FrogHeart’s disappearance from the web for days at a time. I am trying to resolve the issues and hope that such actions as removing the ‘Donate’ button will help.

I wish my readers all the best for 2018 as we explore nanotechnology and other emerging technologies!

(I apologize for any and all errors. I usually take a little more time to write this end-of-year and coming-year piece but, due to bandwidth issues, I was unable to access my draft and give it at least one review. And at this point, I’m too tired to try spotting errors. If you see any, please do let me know.)

A customized cruise experience with wearable technology (and decreased personal agency?)

The days when you went cruising to ‘get away from it all’ seem to have passed (if they ever really existed) with the introduction of wearable technology that will register your every preference and make life easier, according to Cliff Kuang’s Oct. 19, 2017 article for Fast Company,

This month [October 2017], the 141,000-ton Regal Princess will push out to sea after a nine-figure revamp of mind-boggling scale. Passengers won’t be greeted by new restaurants, swimming pools, or onboard activities, but will instead step into a future augured by the likes of Netflix and Uber, where nearly everything is on demand and personally tailored. An ambitious new customization platform has been woven into the ship’s 19 passenger decks: some 7,000 onboard sensors and 4,000 “guest portals” (door-access panels and touch-screen TVs), all of them connected by 75 miles of internal cabling. As the Carnival-owned ship cruises to Nassau, Bahamas, and Grand Turk, its 3,500 passengers will have the option of carrying a quarter-size device, called the Ocean Medallion, which can be slipped into a pocket or worn on the wrist and is synced with a companion app.

The platform will provide a new level of service for passengers; the onboard sensors record their tastes and respond to their movements, and the app guides them around the ship and toward activities aligned with their preferences. Carnival plans to roll out the platform to another seven ships by January 2019. Eventually, the Ocean Medallion could be opening doors, ordering drinks, and scheduling activities for passengers on all 102 of Carnival’s vessels across 10 cruise lines, from the mass-market Princess ships to the legendary ocean liners of Cunard.

Kuang goes on to explain the reasoning behind this innovation,

The Ocean Medallion is Carnival’s attempt to address a problem that’s become increasingly vexing to the $35.5 billion cruise industry. Driven by economics, ships have exploded in size: In 1996, Carnival Destiny was the world’s largest cruise ship, carrying 2,600 passengers. Today, Royal Caribbean’s MS Harmony of the Seas carries up to 6,780 passengers and 2,300 crew. Larger ships expend less fuel per passenger; the money saved can then go to adding more amenities—which, in turn, are geared to attracting as many types of people as possible. Today on a typical ship you can do practically anything—from attending violin concertos to bungee jumping. And that’s just onboard. Most of a cruise is spent in port, where each day there are dozens of experiences available. This avalanche of choice can bury a passenger. It has also made personalized service harder to deliver. …

Kuang also wrote this brief description of how the technology works from the passenger’s perspective in an Oct. 19, 2017 item for Fast Company,

1. Pre-trip

On the web or on the app, you can book experiences, log your tastes and interests, and line up your days. That data powers the recommendations you’ll see. The Ocean Medallion arrives by mail and becomes the key to ship access.

2. Stateroom

When you draw near, your cabin-room door unlocks without swiping. The room’s unique 43-inch TV, which doubles as a touch screen, offers a range of Carnival’s bespoke travel shows. Whatever you watch is fed into your excursion suggestions.

3. Food

When you order something, sensors detect where you are, allowing your server to find you. Your allergies and preferences are also tracked, and shape the choices you’re offered. In all, the back-end data has 45,000 allergens tagged and manages 250,000 drink combinations.

4. Activities

The right algorithms can go beyond suggesting wines based on previous orders. Carnival is creating a massive semantic database, so if you like pricey reds, you’re more apt to be guided to a violin concerto than a limbo competition. Your onboard choices—the casino, the gym, the pool—inform your excursion recommendations.
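Kuang’s fourth item is describing, in broad strokes, content-based recommendation: represent a passenger’s tastes and each activity as weighted features, then rank the activities by how well they match. Here’s a deliberately tiny sketch of that idea; the feature names, activities and weights are invented for illustration and have nothing to do with Carnival’s actual platform,

```python
# Toy content-based recommender in the spirit of Kuang's description.
# All features, activities and weights are invented for illustration only.
passenger_profile = {"fine_wine": 0.9, "classical_music": 0.7, "adrenaline": 0.1}

activities = {
    "violin concerto":   {"classical_music": 1.0, "fine_wine": 0.4},
    "limbo competition": {"adrenaline": 0.6, "party": 0.8},
    "wine tasting":      {"fine_wine": 1.0, "classical_music": 0.1},
    "bungee jumping":    {"adrenaline": 1.0},
}

def score(profile, features):
    """Simple dot product between the passenger's tastes and an activity's features."""
    return sum(profile.get(tag, 0.0) * weight for tag, weight in features.items())

ranked = sorted(activities, key=lambda name: score(passenger_profile, activities[name]),
                reverse=True)
for name in ranked:
    print(f"{name:18s} {score(passenger_profile, activities[name]):.2f}")
```

Run it and the imaginary pricey-reds passenger does indeed get steered toward the violin concerto rather than the limbo competition.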

In his Oct. 19, 2017 article, Kuang notes that the cruise ship line is putting a lot of effort into retraining its staff and emphasizing the ‘soft’ skills that aren’t going to be found in this iteration of the technology. No mention is made of whether there will be reductions in the number of staff members on this cruise ship, nor of the possibility that ‘soft’ skills may in future be incorporated into this technological marvel.

Personalization/customization is increasingly everywhere

How do you feel about customized news feeds? As it turns out, this is not a rhetorical question as Adrienne LaFrance notes in her Oct. 19, 2017 article for The Atlantic (Note: Links have been removed),

Today, a Google search for news runs through the same algorithmic filtration system as any other Google search: A person’s individual search history, geographic location, and other demographic information affects what Google shows you. Exactly how your search results differ from any other person’s is a mystery, however. Not even the computer scientists who developed the algorithm could precisely reverse engineer it, given the fact that the same result can be achieved through numerous paths, and that ranking factors—deciding which results show up first—are constantly changing, as are the algorithms themselves.

We now get our news in real time, on demand, tailored to our interests, across multiple platforms, without knowing just how much is actually personalized. It was technology companies like Google and Facebook, not traditional newsrooms, that made it so. But news organizations are increasingly betting that offering personalized content can help them draw audiences to their sites—and keep them coming back.

Personalization extends beyond how and where news organizations meet their readers. Already, smartphone users can subscribe to push notifications for the specific coverage areas that interest them. On Facebook, users can decide—to some extent—which organizations’ stories they would like to appear in their news feeds. At the same time, devices and platforms that use machine learning to get to know their users will increasingly play a role in shaping ultra-personalized news products. Meanwhile, voice-activated artificially intelligent devices, such as Google Home and Amazon Echo, are poised to redefine the relationship between news consumers and the news [emphasis mine].

While news personalization can help people manage information overload by making individuals’ news diets unique, it also threatens to incite filter bubbles and, in turn, bias [emphasis mine]. This “creates a bit of an echo chamber,” says Judith Donath, author of The Social Machine: Designs for Living Online and a researcher affiliated with Harvard University’s Berkman Klein Center for Internet and Society. “You get news that is designed to be palatable to you. It feeds into people’s appetite of expecting the news to be entertaining … [and] the desire to have news that’s reinforcing your beliefs, as opposed to teaching you about what’s happening in the world and helping you predict the future better.”

Still, algorithms have a place in responsible journalism. “An algorithm actually is the modern editorial tool,” says Tamar Charney, the managing editor of NPR One, the organization’s customizable mobile-listening app. A handcrafted hub for audio content from both local and national programs as well as podcasts from sources other than NPR, NPR One employs an algorithm to help populate users’ streams with content that is likely to interest them. But Charney assures there’s still a human hand involved: “The whole editorial vision of NPR One was to take the best of what humans do and take the best of what algorithms do and marry them together.” [emphasis mine]

The skimming and diving Charney describes sounds almost exactly like how Apple and Google approach their distributed-content platforms. With Apple News, users can decide which outlets and topics they are most interested in seeing, with Siri offering suggestions as the algorithm gets better at understanding your preferences. Siri now has help from Safari. The personal assistant can now detect browser history and suggest news items based on what someone’s been looking at—for example, if someone is searching Safari for Reykjavík-related travel information, they will then see Iceland-related news on Apple News. But the For You view of Apple News isn’t 100 percent customizable, as it still spotlights top stories of the day, and trending stories that are popular with other users, alongside those curated just for you.

Similarly, with Google’s latest update to Google News, readers can scan fixed headlines, customize sidebars on the page to their core interests and location—and, of course, search. The latest redesign of Google News makes it look newsier than ever, and adds to many of the personalization features Google first introduced in 2010. There’s also a place where you can preprogram your own interests into the algorithm.

Google says this isn’t an attempt to supplant news organizations, nor is it inspired by them. The design is rather an embodiment of Google’s original ethos, the product manager for Google News Anand Paka says: “Just due to the deluge of information, users do want ways to control information overload. In other words, why should I read the news that I don’t care about?” [emphasis mine]

Meanwhile, in May [2017?], Google briefly tested a personalized search filter that would dip into its trove of data about users with personal Google and Gmail accounts and include results exclusively from their emails, photos, calendar items, and other personal data related to their query. [emphasis mine] The “personal” tab was supposedly “just an experiment,” a Google spokesperson said, and the option was temporarily removed, but seems to have rolled back out for many users as of August [2017?].

Now, Google, in seeking to settle a class-action lawsuit alleging that scanning emails to offer targeted ads amounts to illegal wiretapping, is promising that for the next three years it won’t use the content of its users’ emails to serve up targeted ads in Gmail. The move, which will go into effect at an unspecified date, doesn’t mean users won’t see ads, however. Google will continue to collect data from users’ search histories, YouTube, and Chrome browsing habits, and other activity.

The fear that personalization will encourage filter bubbles by narrowing the selection of stories is a valid one, especially considering that the average internet user or news consumer might not even be aware of such efforts. Elia Powers, an assistant professor of journalism and news media at Towson University in Maryland, studied the awareness of news personalization among students after he noticed those in his own classes didn’t seem to realize the extent to which Facebook and Google customized users’ results. “My sense is that they didn’t really understand … the role that people that were curating the algorithms [had], how influential that was. And they also didn’t understand that they could play a pretty active role on Facebook in telling Facebook what kinds of news they want them to show and how to prioritize [content] on Google,” he says.

The results of Powers’s study, which was published in Digital Journalism in February [2017], showed that the majority of students had no idea that algorithms were filtering the news content they saw on Facebook and Google. When asked if Facebook shows every news item, posted by organizations or people, in a users’ newsfeed, only 24 percent of those surveyed were aware that Facebook prioritizes certain posts and hides others. Similarly, only a quarter of respondents said Google search results would be different for two different people entering the same search terms at the same time. [emphasis mine; Note: Respondents in this study were students.]

This, of course, has implications beyond the classroom, says Powers: “People as news consumers need to be aware of what decisions are being made [for them], before they even open their news sites, by algorithms and the people behind them, and also be able to understand how they can counter the effects or maybe even turn off personalization or make tweaks to their feeds or their news sites so they take a more active role in actually seeing what they want to see in their feeds.”

On Google and Facebook, the algorithm that determines what you see is invisible. With voice-activated assistants, the algorithm suddenly has a persona. “We are being trained to have a relationship with the AI,” says Amy Webb, founder of the Future Today Institute and an adjunct professor at New York University Stern School of Business. “This is so much more catastrophically horrible for news organizations than the internet. At least with the internet, I have options. The voice ecosystem is not built that way. It’s being built so I just get the information I need in a pleasing way.”

LaFrance’s article is thoughtful and well worth reading in its entirety. Now, onto some commentary.

Loss of personal agency

I have been concerned for some time about the increasingly dull results I get from a Google search, and while I realize the company has been gathering information about me via my searches, supposedly in service of giving me better search results, I had no idea how deeply the company can mine for personal data. It makes me wonder what would happen if Google and Facebook attempted a merger.

More cogently, I rather resent the search engines and artificial intelligence agents (e.g. Facebook bots) which have usurped my role as the arbiter of what interests me; in short, I resent my increasing loss of personal agency.

I’m also deeply suspicious of what these companies are going to do with my data. Will it be used to manipulate me in some way? Presumably, the data will be sold and used for some purpose. In the US, data firms have already married electoral data with consumer data, as Brent Bambury notes in an Oct. 13, 2017 article for his CBC (Canadian Broadcasting Corporation) Radio show,

How much of your personal information circulates in the free-market ether of metadata? It could be more than you imagine, and it might be enough to let others change the way you vote.

A data firm that specializes in creating psychological profiles of voters claims to have up to 5,000 data points on 220 million Americans. Cambridge Analytica has deep ties to the American right and was hired by the campaigns of Ben Carson, Ted Cruz and Donald Trump.

During the U.S. election, CNN called them “Donald Trump’s mind readers” and his secret weapon.

David Carroll is a Professor at the Parsons School of Design in New York City. He is one of the millions of Americans profiled by Cambridge Analytica and he’s taking legal action to find out where the company gets its masses of data and how they use it to create their vaunted psychographic profiles of voters.

On Day 6 [Bambury’s CBC radio programme], he explained why that’s important.

“They claim to have figured out how to project our voting behavior based on our consumer behavior. So it’s important for citizens to be able to understand this because it would affect our ability to understand how we’re being targeted by campaigns and how the messages that we’re seeing on Facebook and television are being directed at us to manipulate us.” [emphasis mine]

The parent company of Cambridge Analytica, SCL Group, is a U.K.-based data operation with global ties to military and political activities. David Carroll says the potential for sharing personal data internationally is a cause for concern.

“It’s the first time that this kind of data is being collected and transferred across geographic boundaries,” he says.

But that also gives Carroll an opening for legal action. An individual has more rights to access their personal information in the U.K., so that’s where he’s launching his lawsuit.

Reports link Michael Flynn, briefly Trump’s National Security Adviser, to SCL Group and indicate that former White House strategist Steve Bannon is a board member of Cambridge Analytica. Billionaire Robert Mercer, who has underwritten Bannon’s Breitbart operations and is a major Trump donor, also has a significant stake in Cambridge Analytica.

In the world of data, Mercer’s credentials are impeccable.

“He is an important contributor to the field of artificial intelligence,” says David Carroll.

“His work at IBM is seminal and really important in terms of the foundational ideas that go into big data analytics, so the relationship between AI and big data analytics. …

Bambury’s piece offers a lot more than I’ve included in that excerpt, including embedded videos, but I also wanted to add some material from Carole Cadwalladr’s Oct. 1, 2017 Guardian article about Carroll and his legal fight in the UK,

“There are so many disturbing aspects to this. One of the things that really troubles me is how the company can buy anonymous data completely legally from all these different sources, but as soon as it attaches it to voter files, you are re-identified. It means that every privacy policy we have ignored in our use of technology is a broken promise. It would be one thing if this information stayed in the US, if it was an American company and it only did voter data stuff.”

But, he [Carroll] argues, “it’s not just a US company and it’s not just a civilian company”. Instead, he says, it has ties with the military through SCL – “and it doesn’t just do voter targeting”. Carroll has provided information to the Senate intelligence committee and believes that the disclosures mandated by a British court could provide evidence helpful to investigators.

Frank Pasquale, a law professor at the University of Maryland, author of The Black Box Society and a leading expert on big data and the law, called the case a “watershed moment”.

“It really is a David and Goliath fight and I think it will be the model for other citizens’ actions against other big corporations. I think we will look back and see it as a really significant case in terms of the future of algorithmic accountability and data protection. …

Nobody is discussing personal agency directly, but if you’re only being exposed to certain kinds of messages, then your personal agency has been taken from you. Admittedly, we don’t have complete personal agency in our lives, but AI, combined with the data gathering done online and, increasingly, through wearable and smart technology, means that another layer of control has been added to our lives, and it is largely invisible. After all, the students in Elia Powers’s study didn’t realize their news feeds were being pre-curated.

Nano- and neuro- together for nanoneuroscience

This is not the first time I’ve posted about nanotechnology and neuroscience (see this April 2, 2013 piece about the then-new brain science initiative in the US and Michael Berger’s Nanowerk Spotlight review of an earlier paper on the same topic).

Interestingly, the European Union (EU) had announced its two €1B research initiatives, the Human Brain Project and the Graphene Flagship (see my Jan. 28, 2013 posting about them), months prior to the US brain research push. For those unfamiliar with the nanotechnology effort, graphene is a nanomaterial and there is high interest in its potential use in biomedical technology, thus partially connecting both EU projects.

In any event, Berger highlights the nanotechnology and neuroscience connection again in his Oct. 18, 2017 Nanowerk Spotlight article, an overview of a new paper that updates our understanding of the potential connections between the two fields (Note: A link has been removed),

Over the past several years, advances in nanoscale analysis tools and in the design and synthesis of nanomaterials have generated optical, electrical, and chemical methods that can readily be adapted for use in neuroscience and brain activity mapping.

A review paper in Advanced Functional Materials (“Nanotechnology for Neuroscience: Promising Approaches for Diagnostics, Therapeutics and Brain Activity Mapping”) summarizes the basic concepts associated with neuroscience and the current journey of nanotechnology towards the study of neuron function by addressing various concerns on the significant role of nanomaterials in neuroscience and by describing the future applications of this emerging technology.

The collaboration between nanotechnology and neuroscience, though still at the early stages, utilizes broad concepts, such as drug delivery, cell protection, cell regeneration and differentiation, imaging and surgery, to give birth to novel clinical methods in neuroscience.

Ultimately, the clinical translation of nanoneuroscience implicates that central nervous system (CNS) diseases, including neurodevelopmental, neurodegenerative and psychiatric diseases, have the potential to be cured, while the industrial translation of nanoneuroscience indicates the need for advancement of brain-computer interface technologies.

Future Developing Arenas in Nanoneuroscience

The Brain Activity Map (BAM) Project aims to map the neural activity of every neuron across all neural circuits with the ultimate aim of curing diseases associated with the nervous system. The announcement of this collaborative, public-private research initiative in 2013 by President Obama has driven the surge in developing methods to elucidate neural circuitry. Three current developing arenas in the context of nanoneuroscience applications that will push such initiative forward are 1) optogenetics, 2) molecular/ion sensing and monitoring and 3) piezoelectric effects.

In their review, the authors discuss these aspects in detail.

Neurotoxicity of Nanomaterials

By engineering particles on the scale of molecular-level entities – proteins, lipid bilayers and nucleic acids – we can stereotactically interface with many of the components of cell systems, and at the cutting edge of this technology, we can begin to devise ways in which we can manipulate these components to our own ends. However, interfering with the internal environment of cells, especially neurons, is by no means simple.

“If we are to continue to make great strides in nanoneuroscience, functional investigations of nanomaterials must be complemented with robust toxicology studies,” the authors point out. “A database on the toxicity of materials that fully incorporates these findings for use in future schema must be developed. These databases should include information and data on 1) the chemical nature of the nanomaterials in complex aqueous environments; 2) the biological interactions of nanomaterials with chemical specificity; 3) the effects of various nanomaterial properties on living systems; and 4) a model for the simulation and computation of possible effects of nanomaterials in living systems across varying time and space. If we can establish such methods, it may be possible to design nanopharmaceuticals for improved research as well as quality of life.”

“However, challenges in nanoneuroscience are present in many forms, such as neurotoxicity; the inability to cross the blood-brain barrier [emphasis mine]; the need for greater specificity, bioavailability and short half-lives; and monitoring of disease treatment,” the authors conclude their review. “The nanoneurotoxicity surrounding these nanomaterials is a barrier that must be overcome for the translation of these applications from bench-to-bedside. While the challenges associated with nanoneuroscience seem unending, they represent opportunities for future work.”

I have a March 26, 2015 posting about Canadian researchers breaching the blood-brain barrier and an April 13, 2016 posting about US researchers at Cornell University also breaching the blood-brain barrier. Perhaps the “inability” mentioned in this Spotlight article means that it can’t be done consistently or that it hasn’t been achieved on humans.

Here’s a link to and a citation for the paper,

Nanotechnology for Neuroscience: Promising Approaches for Diagnostics, Therapeutics and Brain Activity Mapping by Anil Kumar, Aaron Tan, Joanna Wong, Jonathan Clayton Spagnoli, James Lam, Brianna Diane Blevins, Natasha G, Lewis Thorne, Keyoumars Ashkan, Jin Xie, and Hong Liu. Advanced Functional Materials Volume 27, Issue 39, October 19, 2017 DOI: 10.1002/adfm.201700489 Version of Record online: 14 AUG 2017

© 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

I took a look at the authors’ information and found that most of these researchers are based in China and in the UK, with a sole researcher based in the US.

(Merry Christmas!) Japanese tree frogs inspire hardware for the highest of tech: a swarmalator

First, the frog,

[Japanese Tree Frog] By 池田正樹 (talk)masaki ikeda – Own work, Public Domain, https://commons.wikimedia.org/w/index.php?curid=4593224

I wish there were a recording of the Japanese tree frogs’ mating calls, since those calls inspired mathematicians at Cornell University (New York state, US), according to a November 17, 2017 news item on ScienceDaily,

How does the Japanese tree frog figure into the latest work of noted mathematician Steven Strogatz? As it turns out, quite prominently.

“We had read about these funny frogs that hop around and croak,” said Strogatz, the Jacob Gould Schurman Professor of Applied Mathematics. “They form patterns in space and time. Usually it’s about reproduction. And based on how the other guy or guys are croaking, they don’t want to be around another one that’s croaking at the same time as they are, because they’ll jam each other.”

A November 15, 2017 Cornell University news release (also on EurekAlert but dated November 17, 2017) by Tom Fleischman, which originated the news item, details how the calls led to ‘swarmalators’ (Note: Links have been removed),

Strogatz and Kevin O’Keeffe, Ph.D. ’17, used the curious mating ritual of male Japanese tree frogs as inspiration for their exploration of “swarmalators” – their term for systems in which both synchronization and swarming occur together.

Specifically, they considered oscillators whose phase dynamics and spatial dynamics are coupled. In the instance of the male tree frogs, they attempt to croak in exact anti-phase (one croaks while the other is silent) while moving away from a rival so as to be heard by females.

This opens up “a new class of math problems,” said Strogatz, a Stephen H. Weiss Presidential Fellow. “The question is, what do we expect to see when people start building systems like this or observing them in biology?”

Their paper, “Oscillators That Sync and Swarm,” was published Nov. 13 [2017] in Nature Communications. Strogatz and O’Keeffe – now a postdoctoral researcher with the Senseable City Lab at the Massachusetts Institute of Technology – collaborated with Hyunsuk Hong from Chonbuk National University in Jeonju, South Korea.

Swarming and synchronization both involve large, self-organizing groups of individuals interacting according to simple rules, but rarely have they been studied together, O’Keeffe said.

“No one had connected these two areas, in spite of the fact that there were all these parallels,” he said. “That was the theoretical idea that sort of seduced us, I suppose. And there were also a couple of concrete examples, which we liked – including the tree frogs.”

Studies of swarms focus on how animals move – think of birds flocking or fish schooling – while neglecting the dynamics of their internal states. Studies of synchronization do the opposite: They focus on oscillators’ internal dynamics. Strogatz long has been fascinated by fireflies’ synchrony and other similar phenomena, giving a TED Talk on the topic in 2004, but not on their motion.

“[Swarming and synchronization] are so similar, and yet they were never connected together, and it seems so obvious,” O’Keeffe said. “It’s a whole new landscape of possible behaviors that hadn’t been explored before.”

Using a pair of governing equations that assume swarmalators are free to move about, along with numerical simulations, the group found that a swarmalator system settles into one of five states:

  • Static synchrony – featuring circular symmetry, crystal-like distribution, fully synchronized in phase;
  • Static asynchrony – featuring uniform distribution, meaning that every phase occurs everywhere;
  • Static phase wave – swarmalators settle near others in a phase similar to their own, and phases are frozen at their initial values;
  • Splintered phase wave – nonstationary, disconnected clusters of distinct phases; and
  • Active phase wave – similar to bidirectional states found in biological swarms, where populations split into counter-rotating subgroups; also similar to vortex arrays formed by groups of sperm.

Through the study of simple models, the group found that the coupling of “sync” and “swarm” leads to rich patterns in both time and space, and could lead to further study of systems that exhibit this dual behavior.

“This opens up a lot of questions for many parts of science – there are a lot of things to try that people hadn’t thought of trying,” Strogatz said. “It’s science that opens doors for science. It’s inaugurating science, rather than culminating science.”
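
For anyone who wants to play with the idea, here is a rough numerical sketch of a swarmalator-style system: particles that move around in two dimensions while carrying an oscillator phase, with phase similarity shaping the movement and distance shaping the phase coupling. To be clear, this is my own toy approximation for illustration; the parameter names and values are arbitrary choices, not the authors’ exact equations.

# Toy "swarmalator" simulation: particles that both move (swarm) and
# carry an oscillator phase (sync), with each behaviour coupled to the other.
# A rough sketch loosely following the kind of model described by
# O'Keeffe, Hong & Strogatz (2017); parameters and details are my own
# guesses for illustration, not the authors' exact equations.

import numpy as np

rng = np.random.default_rng(0)

N = 100        # number of swarmalators
J = 1.0        # how strongly phase similarity modulates spatial attraction
K = -0.5       # how strongly spatial closeness couples the phases
dt = 0.05
steps = 2000

pos = rng.uniform(-1, 1, size=(N, 2))       # 2-D positions
phase = rng.uniform(0, 2 * np.pi, size=N)   # oscillator phases

for _ in range(steps):
    diff = pos[None, :, :] - pos[:, None, :]   # diff[i, j] = x_j - x_i
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, np.inf)             # ignore self-interaction

    dphase = phase[None, :] - phase[:, None]   # theta_j - theta_i

    # Spatial dynamics: attraction modulated by phase similarity, plus
    # short-range repulsion so the swarm does not collapse to a point.
    attract = (1 + J * np.cos(dphase))[:, :, None] * diff / dist[:, :, None]
    repel = diff / (dist ** 2)[:, :, None]
    vel = (attract - repel).sum(axis=1) / N

    # Phase dynamics: Kuramoto-style coupling weighted by inverse distance.
    dtheta = (K / N) * (np.sin(dphase) / dist).sum(axis=1)

    pos += dt * vel
    phase = (phase + dt * dtheta) % (2 * np.pi)

# Crude summary: how synchronized are the phases at the end?
order = np.abs(np.exp(1j * phase).mean())
print(f"final phase coherence (0 = async, 1 = sync): {order:.2f}")

Flipping the sign of K (the phase coupling) loosely pushes the end state between more and less synchronized regimes, which gives a feel for how the five states listed above can all emerge from one pair of coupled equations.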

Here’s a link to and a citation for the paper,

Oscillators that sync and swarm by Kevin P. O’Keeffe, Hyunsuk Hong, & Steven H. Strogatz. Nature Communications 8, Article number: 1504 (2017) doi:10.1038/s41467-017-01190-3 Published online: 15 November 2017

This paper is open access.

One last thing: these frogs have also inspired Wi-Fi improvements (from the Japanese tree frog Wikipedia entry; Note: Links have been removed),

Journalist Toyohiro Akiyama carried some Japanese tree frogs with him during his trip to the Mir space station in December 1990.[citation needed] Calling behavior of the species was used to create an algorithm for optimizing Wi-Fi networks.[3]

While it’s not clear in the Wikipedia entry, the frogs were part of an experiment. Here’s a link to and a citation for the paper about the experiment, along with an abstract,

The Frog in Space (FRIS) experiment onboard Space Station Mir: final report and follow-on studies by Yamashita, M.; Izumi-Kurotani, A.; Mogami, Y.; Okuno, M.; Naitoh, T.; Wassersug, R. J. Biol Sci Space. 1997 Dec;11(4):313-20.

Abstract

The “Frog in Space” (FRIS) experiment marked a major step for Japanese space life science, on the occasion of the first space flight of a Japanese cosmonaut. At the core of FRIS were six Japanese tree frogs, Hyla japonica, flown on Space Station Mir for 8 days in 1990. The behavior of these frogs was observed and recorded under microgravity. The frogs took up a “parachuting” posture when drifting in a free volume on Mir. When perched on surfaces, they typically sat with their heads bent backward. Such a peculiar posture, after long exposure to microgravity, is discussed in light of motion sickness in amphibians. Histological examinations and other studies were made on the specimens upon recovery. Some organs, such as the liver and the vertebra, showed changes as a result of space flight; others were unaffected. Studies that followed FRIS have been conducted to prepare for a second FRIS on the International Space Station. Interspecific diversity in the behavioral reactions of anurans to changes in acceleration is the major focus of these investigations. The ultimate goal of this research is to better understand how organisms have adapted to gravity through their evolution on earth.

The paper is open access.

NanoFARM: food, agriculture, and nanoparticles

The research focus for the NanoFARM consortium is on pesticides, according to an October 19, 2017 news item on Nanowerk,

The answer to the growing, worldwide food production problem may have a tiny solution—nanoparticles, which are being explored as both fertilizers and fungicides for crops.

NanoFARM – a research consortium formed by Carnegie Mellon University [US], the University of Kentucky [US], the University of Vienna [Austria], and Aveiro University [Portugal] – is studying the effects of nanoparticles on agriculture. The four universities received grants from their countries’ respective National Science Foundations to discover how these tiny particles – some just 4 nanometers in diameter – can revolutionize how farmers grow their food.

An October ??, 2017 Carnegie Mellon University news release by Adam Dove, which originated the news item, fills in a few more details,

“What we’re doing is getting a fundamental understanding of nanoparticle-to-plant interactions to enable future applications,” says Civil and Environmental Engineering (CEE) Professor Greg Lowry, the principal investigator for the nanoFARM project. “With pesticides, less than 5% goes into the crop—the rest just goes into the environment and does harmful things. What we’re trying to do is minimize that waste and corresponding environmental damage by doing a better job of targeting the delivery.”

The teams are looking at related questions: How much nanomaterial is needed to help crops when it comes to driving away pests and delivering nutrients, and how much could potentially hurt plants or surrounding ecosystems?

Applied pesticides and fertilizers are vulnerable to washing away—especially if there’s a rainstorm soon after application. But nanoparticles are not so easily washed off, making them extremely efficient for delivering micronutrients like zinc or copper to crops.

“If you put in zinc oxide nanoparticles instead, it might take days or weeks to dissolve, providing a slow, long-term delivery system.”

Gao researches the rate at which nanoparticles dissolve. His most recent finding is that nanoparticles of copper oxide take up to 20-30 days to dissolve in soil, meaning that they can deliver nutrients to plants at a steady rate over that time period.
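
To put that 20-30 day figure in perspective, here is a back-of-the-envelope sketch (my own toy numbers, not nanoFARM data) comparing a conventional soluble copper salt, which is fully available and washable away from day one, with a nanoparticle that releases its copper gradually through first-order dissolution.

# Back-of-the-envelope illustration with made-up numbers (not nanoFARM data):
# a soluble copper salt is available all at once and can wash away, while a
# CuO nanoparticle dissolving over roughly 20-30 days releases copper at a
# steadier rate.

import math

total_cu_mg = 10.0         # hypothetical copper dose per plant, in mg
dissolution_days = 25.0    # mid-range of the 20-30 day figure in the article
k = math.log(2) / (dissolution_days / 3)   # rough first-order rate constant,
                                           # chosen so most of the dose is released by ~25 days

print("day  cumulative Cu released from nano-CuO (mg)")
for day in range(0, 31, 5):
    released = total_cu_mg * (1 - math.exp(-k * day))
    print(f"{day:>3}  {released:5.2f}")

# A soluble salt would sit at the full 10 mg from day 0, so an early
# rainstorm could carry much of it away before the plant takes it up.

The real kinetics in soil are obviously messier, but it shows why a dissolution time measured in weeks reads as a slow-release delivery system.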

“In many developing countries, a huge number of people are starving,” says Gao. “This kind of technology can help provide food and save energy.”

But Gao’s research is only one piece of the NanoFARM puzzle. Lowry recently traveled to Australia with Ph.D. student Eleanor Spielman-Sun to explore how differently charged nanoparticles were absorbed into wheat plants.

They learned that negatively charged particles were able to move into the veins of a plant—making them a good fit for a farmer who wanted to apply a fungicide. Neutrally charged particles went into the tissue of the leaves, which would be beneficial for growers who wanted to fortify a food with nutritional value.

Lowry said they are still a long way from signing off on a finished product for all crops—right now they are concentrating on tomato and wheat plants. But with the help of their university partners, they are slowly creating new nano-enabled agrochemicals for more efficient and environmentally friendly agriculture.

For more information, you can find the NanoFARM website here.