Tag Archives: memory

More memory, less space and a walk down the cryptocurrency road

Libraries, archives, records management, oral history, and more: there are many institutions and names for how we manage collective and personal memory. You might call it a peculiarly human obsession stretching back into antiquity. For example, there’s the Library of Alexandria (Wikipedia entry), founded in the third (or possibly second) century BCE (before the common era) and reputed to store all the knowledge in the world. It was destroyed, although accounts differ as to when and how, and its loss remains a potent reminder of memory’s fragility.

These days, the technology community is terribly concerned with storing ever more bits of data on materials that are reaching their storage limits. I have news of a possible solution, an interview of sorts with the researchers working on this new technology, and some very recent research into policies for cryptocurrency mining and development. That bit about cryptocurrency makes more sense when you read the response to one of the interview questions.


It seems University of Alberta researchers may have found a way to increase memory exponentially, from a July 23, 2018 news item on ScienceDaily,

The most dense solid-state memory ever created could soon exceed the capabilities of current computer storage devices by 1,000 times, thanks to a new technique scientists at the University of Alberta have perfected.

“Essentially, you can take all 45 million songs on iTunes and store them on the surface of one quarter,” said Roshan Achal, PhD student in Department of Physics and lead author on the new research. “Five years ago, this wasn’t even something we thought possible.”

A July 23, 2018 University of Alberta news release (also on EurekAlert) by Jennifer-Anne Pascoe, which originated the news item, provides more information,

Previous discoveries were stable only at cryogenic conditions, meaning this new finding puts society light years closer to meeting the need for more storage for the current and continued deluge of data. One of the most exciting features of this memory is that it’s road-ready for real-world temperatures, as it can withstand normal use and transportation beyond the lab.

“What is often overlooked in the nanofabrication business is actual transportation to an end user, that simply was not possible until now given temperature restrictions,” continued Achal. “Our memory is stable well above room temperature and precise down to the atom.”

Achal explained that immediate applications will be data archival. Next steps will be increasing readout and writing speeds, meaning even more flexible applications.

More memory, less space

Achal works with University of Alberta physics professor Robert Wolkow, a pioneer in the field of atomic-scale physics. Wolkow perfected the science behind nanotip technology, which, thanks to Wolkow and his team’s continued work, has now reached a tipping point: scaling up atomic-scale manufacturing for commercialization.

“With this last piece of the puzzle now in-hand, atom-scale fabrication will become a commercial reality in the very near future,” said Wolkow. Wolkow’s Spin-off [sic] company, Quantum Silicon Inc., is hard at work on commercializing atom-scale fabrication for use in all areas of the technology sector.

To demonstrate the new discovery, Achal, Wolkow, and their fellow scientists not only fabricated the world’s smallest maple leaf, they also encoded the entire alphabet at a density of 138 terabytes per square inch, roughly equivalent to writing 350,000 letters across a grain of rice. For a playful twist, Achal also encoded music as an atom-sized song, the first 24 notes of which will make any video-game player of the 80s and 90s nostalgic for yesteryear but excited for the future of technology and society.

As noted in the news release, there is an atom-sized song, which is available in this video,

As for the nano-sized maple leaf, I highlighted that bit of whimsy in a June 30, 2017 posting.
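If you’re inclined to check the math (the back-of-envelope calculation below is mine, not the researchers’), 138 terabytes per square inch spread across the face of a quarter does land in the right ballpark for tens of millions of songs, depending on the file size you assume,

```python
import math

# Back-of-envelope check (my assumptions: a Canadian quarter's
# 23.88 mm diameter and roughly 4 MB per song).
density_tb_per_sq_in = 138
quarter_diameter_in = 23.88 / 25.4                  # mm to inches
quarter_area_sq_in = math.pi * (quarter_diameter_in / 2) ** 2

capacity_tb = density_tb_per_sq_in * quarter_area_sq_in
songs = capacity_tb * 1e12 / 4e6                    # bytes / bytes-per-song

print(f"~{capacity_tb:.0f} TB per quarter face, ~{songs / 1e6:.0f} million songs")
```

That works out to roughly 96 TB per quarter face, or about 24 million songs at 4 MB each; assume smaller 2 MB files and the 45-million-song claim checks out.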

Here’s a link to and a citation for the paper,

Lithography for robust and editable atomic-scale silicon devices and memories by Roshan Achal, Mohammad Rashidi, Jeremiah Croshaw, David Churchill, Marco Taucer, Taleana Huff, Martin Cloutier, Jason Pitters, & Robert A. Wolkow. Nature Communications 9, Article number: 2778 (2018) doi:10.1038/s41467-018-05171-y Published online: 23 July 2018

This paper is open access.

For interested parties, you can find Quantum Silicon (QSI) here. My Edmonton geography is all but nonexistent; still, it seems to me the company address on Saskatchewan Drive is a University of Alberta address. It’s also the address for the National Research Council of Canada. Perhaps this is a university/government spin-off company?

The ‘interview’

I sent some questions to the researchers at the University of Alberta who very kindly provided me with the following answers. Roshan Achal passed on one of the questions to his colleague Taleana Huff for her response. Both Achal and Huff are associated with QSI.

Unfortunately I could not find any pictures of all three researchers (Achal, Huff, and Wolkow) together.

Roshan Achal (left) used nanotechnology perfected by his PhD supervisor, Robert Wolkow (right) to create atomic-scale computer memory that could exceed the capacity of today’s solid-state storage drives by 1,000 times. (Photo: Faculty of Science)


I got started in this field about 6 years ago, when I undertook an MSc with Dr. Wolkow here at the University of Alberta. Before that point, I had only ever heard of a scanning tunneling microscope from what was taught in my classes. I was aware of the famous IBM logo made up from just a handful of atoms using this machine, but I didn’t know what else could be done. Here, Dr. Wolkow introduced me to his line of research, and I saw the immense potential for growth in this area and decided to pursue it further. I had the chance to interact with and learn from nanofabrication experts and gain the skills necessary to begin playing around with my own techniques and ideas during my PhD.


One of the things missing from the list, that we are currently working on, is the ability to easily communicate (electrically) from the macroscale (our world) to the nanoscale, without the use of a scanning tunneling microscope. With this, we would be able to then construct devices using the other pieces we’ve developed up to this point, and then integrate them with more conventional electronics. This would bring us yet another step closer to the realization of atomic-scale devices.


That is an interesting question. With the density we’ve achieved, there are not too many surfaces where atomic sites are more closely spaced to allow for another factor of two improvement. In that sense, it would be difficult to improve memory densities further using these techniques alone. In order to continue Moore’s law, new techniques or storage methods would have to be developed to move beyond atomic-scale storage.

The memory design itself does not have anything to do with quantum computing; however, the lithographic techniques developed through our work may enable the development of certain quantum-dot-based quantum computing schemes.


I am not very familiar with these topics; however, co-author Taleana Huff has provided some thoughts:

Taleana Huff (downloaded from https://ca.linkedin.com/in/taleana-huff)

“The memory, as we’ve designed it, might not have too much of an impact in and of itself. Cryptocurrencies fall into two categories: Proof of Work and Proof of Stake. Proof of Work relies on raw computational power to solve a difficult math problem. If you solve it, you get rewarded with a small amount of that coin. The problem is that it can take a lot of power and energy for your computer to crunch through that problem. Faster access to memory alone could perhaps streamline small parts of this slightly, but it would be very slight. Proof of Stake is already quite power efficient and wouldn’t really gain a drastic advantage from better, faster computers.

Now, atomic-scale circuitry built using these new lithographic techniques that we’ve developed, which could perform computations at significantly lower energy costs, would be huge for Proof of Work coins. One of the things holding bitcoin back, for example, is that mining it is now consuming power on the order of the annual energy consumption required by small countries. A more efficient way to mine while still taking the same amount of time to solve the problem would make bitcoin much more attractive as a currency.”
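For readers unfamiliar with Proof of Work, the “difficult math problem” Huff mentions is essentially a brute-force search for a number (a nonce) that makes a cryptographic hash come out below a target. Here’s a toy sketch in Python (a real bitcoin miner performs this loop trillions of times per second, which is where all that energy goes),

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Brute-force a nonce so the block's SHA-256 hash starts
    with `difficulty` zero hex digits (a toy Proof of Work)."""
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1  # every failed attempt is wasted electricity

nonce = mine("toy block", difficulty=4)
print(nonce)  # the winning nonce; its hash begins with "0000"
```

Adding one more hex digit to the difficulty makes the search roughly 16 times more expensive, which is how these networks keep block times constant as mining hardware improves.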

Thank you to Roshan Achal and Taleana Huff for helping me to further explore the implications of their work with Dr. Wolkow.


As usual, after receiving the replies I have more questions, but these people have other things to do, so I’ll content myself with noting that there is something extraordinary in the fact that we can imagine a near future where atomic-scale manufacturing is possible and where, as Achal says, ” … storage methods would have to be developed to move beyond atomic-scale [emphasis mine] storage”. In decades past it was the stuff of science fiction or of theorists who didn’t have the tools to turn the idea into a reality. With Wolkow’s, Achal’s, Huff’s, and their colleagues’ work, atomic-scale manufacturing is attainable in the foreseeable future.

Hopefully we’ll be wiser than we have been in the past in how we deploy these new manufacturing techniques. Of course, before we need the wisdom, scientists, as Achal notes, need to find a new way to communicate between the macroscale and the nanoscale.

As for Huff’s comments about cryptocurrency and blockchain technology, I stumbled across this very recent research, from a July 31, 2018 Elsevier press release (also on EurekAlert),

A study [behind a paywall] published in Energy Research & Social Science warns that failure to lower the energy use by Bitcoin and similar Blockchain designs may prevent nations from reaching their climate change mitigation obligations under the Paris Agreement.

The study, authored by Jon Truby, PhD, Assistant Professor, Director of the Centre for Law & Development, College of Law, Qatar University, Doha, Qatar, evaluates the financial and legal options available to lawmakers to moderate blockchain-related energy consumption and foster a sustainable and innovative technology sector. Based on this rigorous review and analysis of the technologies, ownership models, and jurisdictional case law and practices, the article recommends an approach that imposes new taxes, charges, or restrictions to reduce demand by users, miners, and miner manufacturers who employ polluting technologies, and offers incentives that encourage developers to create less energy-intensive/carbon-neutral Blockchain.

“Digital currency mining is the first major industry developed from Blockchain, because its transactions alone consume more electricity than entire nations,” said Dr. Truby. “It needs to be directed towards sustainability if it is to realize its potential advantages.

“Many developers have taken no account of the environmental impact of their designs, so we must encourage them to adopt consensus protocols that do not result in high emissions. Taking no action means we are subsidizing high energy-consuming technology and causing future Blockchain developers to follow the same harmful path. We need to de-socialize the environmental costs involved while continuing to encourage progress of this important technology to unlock its potential economic, environmental, and social benefits,” explained Dr. Truby.

As a digital ledger that is accessible to, and trusted by all participants, Blockchain technology decentralizes and transforms the exchange of assets through peer-to-peer verification and payments. Blockchain technology has been advocated as being capable of delivering environmental and social benefits under the UN’s Sustainable Development Goals. However, Bitcoin’s system has been built in a way that is reminiscent of physical mining of natural resources – costs and efforts rise as the system reaches the ultimate resource limit and the mining of new resources requires increasing hardware resources, which consume huge amounts of electricity.

Putting this into perspective, Dr. Truby said, “the processes involved in a single Bitcoin transaction could provide electricity to a British home for a month – with the environmental costs socialized for private benefit.

“Bitcoin is here to stay, and so, future models must be designed without reliance on energy consumption so disproportionate on their economic or social benefits.”

The study evaluates various Blockchain technologies by their carbon footprints and recommends how to tax or restrict Blockchain types at different phases of production and use to discourage polluting versions and encourage cleaner alternatives. It also analyzes the legal measures that can be introduced to encourage technology innovators to develop low-emissions Blockchain designs. The specific recommendations include imposing levies to prevent path-dependent inertia from constraining innovation:

  • Registration fees collected by brokers from digital coin buyers.
  • “Bitcoin Sin Tax” surcharge on digital currency ownership.
  • Green taxes and restrictions on machinery purchases/imports (e.g. Bitcoin mining machines).
  • Smart contract transaction charges.

According to Dr. Truby, these findings may lead to new taxes, charges or restrictions, but could also lead to financial rewards for innovators developing carbon-neutral Blockchain.

The press release doesn’t fully reflect Dr. Truby’s thoughtfulness or the incentives he has suggested; it’s not all surcharges, taxes, and fees. Here’s a sample from the conclusion,

The possibilities of Blockchain are endless and incentivisation can help solve various climate change issues, such as through the development of digital currencies to fund climate finance programmes. This type of public-private finance initiative is envisioned in the Paris Agreement, and fiscal tools can incentivize innovators to design financially rewarding Blockchain technology that also achieves environmental goals. Bitcoin, for example, has various utilitarian intentions in its White Paper, which may or may not turn out to be as envisioned, but it would not have been such a success without investors seeking remarkable returns. Embracing such technology, and promoting a shift in behaviour with such fiscal tools, can turn the industry itself towards achieving innovative solutions for environmental goals.

I realize Wolkow et al. are not focused on cryptocurrency and blockchain technology per se, but as Huff notes in her reply, “… new lithographic techniques that we’ve developed, which could perform computations at significantly lower energy costs, would be huge for Proof of Work coins.”

Whether or not there are implications for cryptocurrencies, energy needs, climate change, etc., it’s the kind of innovative work being done by scientists at the University of Alberta that may have implications in fields far beyond the researchers’ original intentions, such as more efficient computation and data storage.

ETA Aug. 6, 2018: Dexter Johnson weighed in with an August 3, 2018 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website),

Researchers at the University of Alberta in Canada have developed a new approach to rewritable data storage technology by using a scanning tunneling microscope (STM) to remove and replace hydrogen atoms from the surface of a silicon wafer. If this approach realizes its potential, it could lead to a data storage technology capable of storing 1,000 times more data than today’s hard drives, up to 138 terabytes per square inch.

As a bit of background, Gerd Binnig and Heinrich Rohrer developed the first STM in 1981, work for which they later received the 1986 Nobel Prize in physics. In the over 30 years since an STM first imaged an atom by exploiting a phenomenon known as tunneling—which causes electrons to jump from the surface atoms of a material to the tip of an ultrasharp electrode suspended a few angstroms above—the technology has become the backbone of so-called nanotechnology.

In addition to imaging the world on the atomic scale for the last thirty years, STMs have been experimented with as a potential data storage device. Last year, we reported on how IBM (where Binnig and Rohrer first developed the STM) used an STM in combination with an iron atom to serve as an electron-spin resonance sensor to read the magnetic pole of holmium atoms. The north and south poles of the holmium atoms served as the 0 and 1 of digital logic.

The Canadian researchers have taken a somewhat different approach to making an STM into a data storage device by automating a known technique that uses the ultrasharp tip of the STM to apply a voltage pulse above an atom to remove individual hydrogen atoms from the surface of a silicon wafer. Once the atom has been removed, there is a vacancy on the surface. These vacancies can be patterned on the surface to create devices and memories.
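One crude way to picture that kind of vacancy-patterned memory (a toy model of my own, not the team’s actual encoding scheme) is a grid of hydrogen sites where a removed atom stores a 1 and an undisturbed atom stores a 0,

```python
def write_bits(text: str) -> list[int]:
    """Encode text as a pattern of hydrogen sites:
    1 = atom removed (vacancy), 0 = atom left in place."""
    return [int(b) for ch in text for b in format(ord(ch), "08b")]

def read_bits(sites: list[int]) -> str:
    """Recover the text by reading the sites back eight at a time."""
    chars = [sites[i:i + 8] for i in range(0, len(sites), 8)]
    return "".join(chr(int("".join(map(str, byte)), 2)) for byte in chars)

surface = write_bits("Hi")          # two characters in 16 atomic sites
assert read_bits(surface) == "Hi"
print(surface)
```

Two characters occupy just 16 atomic sites here, which is the intuition behind the density figures quoted above: when each bit is a single atom, ordinary text becomes almost unimaginably compact.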

If you have the time, I recommend reading Dexter’s posting as he provides clear explanations, additional insight into the work, and more historical detail.

Thanks for the memory: the US National Institute of Standards and Technology (NIST) and memristors

In January 2018 it seemed like I was tripping across a lot of memristor stories. This came from a January 19, 2018 news item on Nanowerk,

In the race to build a computer that mimics the massive computational power of the human brain, researchers are increasingly turning to memristors, which can vary their electrical resistance based on the memory of past activity. Scientists at the National Institute of Standards and Technology (NIST) have now unveiled the long-mysterious inner workings of these semiconductor elements, which can act like the short-term memory of nerve cells.

A January 18, 2018 NIST news release (also on EurekAlert), which originated the news item, fills in the details,

Just as the ability of one nerve cell to signal another depends on how often the cells have communicated in the recent past, the resistance of a memristor depends on the amount of current that recently flowed through it. Moreover, a memristor retains that memory even when electrical power is switched off.

But despite the keen interest in memristors, scientists have lacked a detailed understanding of how these devices work and have yet to develop a standard toolset to study them.

Now, NIST scientists have identified such a toolset and used it to more deeply probe how memristors operate. Their findings could lead to more efficient operation of the devices and suggest ways to minimize the leakage of current.

Brian Hoskins of NIST and the University of California, Santa Barbara, along with NIST scientists Nikolai Zhitenev, Andrei Kolmakov, Jabez McClelland and their colleagues from the University of Maryland’s NanoCenter in College Park and the Institute for Research and Development in Microtechnologies in Bucharest, reported the findings in a recent issue of Nature Communications.

To explore the electrical function of memristors, the team aimed a tightly focused beam of electrons at different locations on a titanium dioxide memristor. The beam knocked free some of the device’s electrons, which formed ultrasharp images of those locations. The beam also induced four distinct currents to flow within the device. The team determined that the currents are associated with the multiple interfaces between materials in the memristor, which consists of two metal (conducting) layers separated by an insulator.

“We know exactly where each of the currents are coming from because we are controlling the location of the beam that is inducing those currents,” said Hoskins.

In imaging the device, the team found several dark spots—regions of enhanced conductivity—which indicated places where current might leak out of the memristor during its normal operation. These leakage pathways resided outside the memristor’s core—where it switches between the low and high resistance levels that are useful in an electronic device. The finding suggests that reducing the size of a memristor could minimize or even eliminate some of the unwanted current pathways. Although researchers had suspected that might be the case, they had lacked experimental guidance about just how much to reduce the size of the device.

Because the leakage pathways are tiny, involving distances of only 100 to 300 nanometers, “you’re probably not going to start seeing some really big improvements until you reduce dimensions of the memristor on that scale,” Hoskins said.

To their surprise, the team also found that the current that correlated with the memristor’s switch in resistance didn’t come from the active switching material at all, but the metal layer above it. The most important lesson of the memristor study, Hoskins noted, “is that you can’t just worry about the resistive switch, the switching spot itself, you have to worry about everything around it.” The team’s study, he added, “is a way of generating much stronger intuition about what might be a good way to engineer memristors.”
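The history-dependent resistance described in the news release can be sketched with a deliberately over-simplified model (my illustration; real titanium dioxide devices are far messier, as the NIST team’s leakage findings show),

```python
class ToyMemristor:
    """Idealized memristor: resistance drifts with the charge that
    has passed through it, and the state persists with power off."""
    def __init__(self, r_low=100.0, r_high=10_000.0):
        self.r_low, self.r_high = r_low, r_high
        self.state = 0.0  # 0 = fully high resistance, 1 = fully low

    def apply_current(self, current: float, dt: float = 1.0):
        # Charge nudges the internal state; clamp it to [0, 1].
        self.state = min(1.0, max(0.0, self.state + 0.001 * current * dt))

    @property
    def resistance(self) -> float:
        return self.r_high - (self.r_high - self.r_low) * self.state

m = ToyMemristor()
before = m.resistance          # no history yet: high resistance
for _ in range(500):
    m.apply_current(1.0)       # recent current lowers the resistance...
after = m.resistance
assert after < before          # ...and the device "remembers" it
print(f"{before:.0f} -> {after:.0f} ohms")
```

The point of the sketch is only the qualitative behavior: the device’s resistance now encodes how much current recently flowed, which is what makes memristors candidates for synapse-like memory.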

Here’s a link to and a citation for the paper,

Stateful characterization of resistive switching TiO2 with electron beam induced currents by Brian D. Hoskins, Gina C. Adam, Evgheni Strelcov, Nikolai Zhitenev, Andrei Kolmakov, Dmitri B. Strukov, & Jabez J. McClelland. Nature Communications 8, Article number: 1972 (2017) doi:10.1038/s41467-017-02116-9 Published online: 07 December 2017

This is an open access paper.

It might be my imagination but it seemed like a lot of papers from 2017 were being publicized in early 2018.

Finally, I borrowed much of my headline from the NIST’s headline for its news release, specifically, “Thanks for the memory,” which is a rather old song,

Bob Hope and Shirley Ross in “The Big Broadcast of 1938.”

Brain-like computing and memory with magnetoresistance

This is an approach to brain-like computing that’s new (to me, anyway). From a January 9, 2018 news item on Nanowerk (Note: A link has been removed),

From various magnetic tapes, floppy disks and computer hard disk drives, magnetic materials have been storing our electronic information along with our valuable knowledge and memories for well over half of a century.

In more recent years, the new types [sic] phenomena known as magnetoresistance, which is the tendency of a material to change its electrical resistance when an externally-applied magnetic field or its own magnetization is changed, has found its success in hard disk drive read heads, magnetic field sensors and the rising star in the memory technologies, the magnetoresistive random access memory.

A new discovery, led by researchers at the University of Minnesota, demonstrates the existence of a new kind of magnetoresistance involving topological insulators that could result in improvements in future computing and computer storage. The details of their research are published in the most recent issue of the scientific journal Nature Communications (“Unidirectional spin-Hall and Rashba-Edelstein magnetoresistance in topological insulator-ferromagnet layer heterostructures”).

This image illustrates the work,

The schematic figure illustrates the concept and behavior of magnetoresistance. The spins are generated in topological insulators. Those at the interface between ferromagnet and topological insulators interact with the ferromagnet and result in either high or low resistance of the device, depending on the relative directions of magnetization and spins. Credit: University of Minnesota

A January 9, 2018 University of Minnesota College of Science and Engineering news release, which originated the news item, expands on the theme,

“Our discovery is one missing piece of the puzzle to improve the future of low-power computing and memory for the semiconductor industry, including brain-like computing and chips for robots and 3D magnetic memory,” said University of Minnesota Robert F. Hartmann Professor of Electrical and Computer Engineering Jian-Ping Wang, director of the Center for Spintronic Materials, Interfaces, and Novel Structures (C-SPIN) based at the University of Minnesota and co-author of the study.

Emerging technology using topological insulators

While magnetic recording still dominates data storage applications, the magnetoresistive random access memory is gradually finding its place in the field of computing memory. From the outside, they are unlike the hard disk drives which have mechanically spinning disks and swinging heads—they are more like any other type of memory. They are chips (solid state) which you’d find being soldered on circuit boards in a computer or mobile device.

Recently, a group of materials called topological insulators has been found to further improve the writing energy efficiency of magnetoresistive random access memory cells in electronics. However, the new device geometry demands a new magnetoresistance phenomenon to accomplish the read function of the memory cell in 3D system and network.

Following the recent discovery of the unidirectional spin Hall magnetoresistance in conventional metal bilayer material systems, researchers at the University of Minnesota collaborated with colleagues at Pennsylvania State University and demonstrated for the first time the existence of such magnetoresistance in topological insulator-ferromagnet bilayers.

The study confirms the existence of such unidirectional magnetoresistance and reveals that the adoption of topological insulators, compared to heavy metals, doubles the magnetoresistance performance at 150 Kelvin (-123.15 Celsius). From an application perspective, this work provides the missing piece of the puzzle to create a proposed 3D and cross-bar type computing and memory device involving topological insulators by adding the previously missing or very inconvenient read functionality.

In addition to Wang, researchers involved in this study include Yang Lv, Delin Zhang and Mahdi Jamali from the University of Minnesota Department of Electrical and Computer Engineering and James Kally, Joon Sue Lee and Nitin Samarth from Pennsylvania State University Department of Physics.

This research was funded by the Center for Spintronic Materials, Interfaces and Novel Architectures (C-SPIN) at the University of Minnesota, a Semiconductor Research Corporation program sponsored by the Microelectronics Advanced Research Corp. (MARCO) and the Defense Advanced Research Projects Agency (DARPA).

Here’s a link to and a citation for the paper,

Unidirectional spin-Hall and Rashba−Edelstein magnetoresistance in topological insulator-ferromagnet layer heterostructures by Yang Lv, James Kally, Delin Zhang, Joon Sue Lee, Mahdi Jamali, Nitin Samarth, & Jian-Ping Wang. Nature Communications 9, Article number: 111 (2018) doi:10.1038/s41467-017-02491-3 Published online: 09 January 2018

This is an open access paper.

Memristive-like qualities with pectin

As the drive to create a synthetic neuronal network, as powered by memristors, continues, scientists are investigating pectin. From a Nov. 11, 2016 news item on ScienceDaily,

Most of us know pectin as a key ingredient for making delicious jellies and jams, not as a component for a complex hybrid device that links biological and electronic systems. But a team of Italian scientists have built on previous work in this field using pectin with a high degree of methylation as the medium to create a new architecture of hybrid device with a double-layered polyelectrolyte that alone drives memristive behavior.

A Nov. 11, 2016 American Institute of Physics news release on EurekAlert, which originated the news item, defines memristors and describes the research,

A memristive device can be thought of as a synapse analogue, a device that has a memory. Simply stated, its behavior in a certain moment depends on its previous activity, similar to the way information in the human brain is transmitted from one neuron to another.

In an article published this week in AIP Advances, from AIP Publishing, the team explains the creation of the hybrid device. “In this research, we applied materials generally used in the pharmaceutical and food industries in our electrochemical devices,” said Angelica Cifarelli, a doctoral candidate at the University of Parma in Italy. “The idea of using the ‘buffering’ capability of these biocompatible materials as solid polyelectrolyte is completely innovative and our work is the first time that these bio-polymers have been used in devices based on organic polymers and in a memristive device.”

Memristors can provide a bridge for interfacing electronic circuits with nervous systems, moving us closer to realization of a double-layer perceptron, an element that can perform classification functions after an appropriate learning procedure. The main difficulty the research team faced was understanding the complex electrochemical interplay that is the basis for the memristive behavior, which would give them the means to control it. The team addressed this challenge by using commercial polymers, and modifying their electrochemical properties at the macroscopic level. The most surprising result was that it was possible to check the electrochemical response of the device by changing the formulation of gels acting as polyelectrolytes, allowing study of the ionic exchanges relating to the biological object, which activates the electrochemical response of the conductive polymer.

“Our developments open the way to make compatible polyaniline based devices with an interface that should be naturally, biologically and electrochemically compatible and functional,” said Cifarelli. The next steps are interfacing the memristor network with other living beings, for example, plants and ultimately the realization of hybrid systems that can “learn” and perform logic/classification functions.

Here’s a link to and a citation for the paper,

Polysaccarides-based gels and solid-state electronic devices with memresistive properties: Synergy between polyaniline electrochemistry and biology by Angelica Cifarelli, Tatiana Berzina, Antonella Parisini, Victor Erokhin, and Salvatore Iannotta. AIP Advances 6, 111302 (2016); http://dx.doi.org/10.1063/1.4966559 Published Nov. 8, 2016

This paper appears to be open access.

Memristor-based electronic synapses for neural networks

Caption: Neuron connections in biological neural networks. Credit: MIPT press office

Russian scientists have recently published a paper about neural networks and electronic synapses based on ‘thin film’ memristors according to an April 19, 2016 news item on Nanowerk,

A team of scientists from the Moscow Institute of Physics and Technology (MIPT) have created prototypes of “electronic synapses” based on ultra-thin films of hafnium oxide (HfO2). These prototypes could potentially be used in fundamentally new computing systems.

An April 20, 2016 MIPT press release (also on EurekAlert), which originated the news item (the date inconsistency likely due to timezone differences) explains the connection between thin films and memristors,

The group of researchers from MIPT have made HfO2-based memristors measuring just 40×40 nm2. The nanostructures they built exhibit properties similar to biological synapses. Using newly developed technology, the memristors were integrated in matrices: in the future this technology may be used to design computers that function similar to biological neural networks.

Memristors (resistors with memory) are devices that are able to change their state (conductivity) depending on the charge passing through them, and they therefore have a memory of their “history”. In this study, the scientists used devices based on thin-film hafnium oxide, a material that is already used in the production of modern processors. This means that this new lab technology could, if required, easily be used in industrial processes.

“In a simpler version, memristors are promising binary non-volatile memory cells, in which information is written by switching the electric resistance – from high to low and back again. What we are trying to demonstrate are much more complex functions of memristors – that they behave similar to biological synapses,” said Yury Matveyev, the corresponding author of the paper, and senior researcher of MIPT’s Laboratory of Functional Materials and Devices for Nanoelectronics, commenting on the study.
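For anyone who'd like a feel for how a 'resistor with memory' behaves, here's a toy simulation in Python. It's loosely based on the well-known linear ion-drift sketch of a memristor rather than the MIPT team's actual hafnium oxide device physics, and every parameter value is illustrative,

```python
# Toy linear ion-drift memristor model, illustrating how the device's
# resistance depends on the history of charge that has passed through it.
# All parameter values are illustrative, not taken from the MIPT paper.

R_ON, R_OFF = 100.0, 16_000.0   # low/high resistance states (ohms)
DT = 1e-6                        # time step (s)
MU = 1e-10                       # mobility-like constant (illustrative)
D = 10e-9                        # oxide film thickness (m)

def simulate(voltages, w=0.5):
    """Step the internal state w in [0, 1]; return resistance over time."""
    history = []
    for v in voltages:
        r = R_ON * w + R_OFF * (1.0 - w)   # resistance mixes the two states
        i = v / r                           # current through the device
        w += MU * R_ON / D**2 * i * DT      # state drifts with charge passed
        w = min(max(w, 0.0), 1.0)           # state stays within the film
        history.append(r)
    return history

rs = simulate([1.0] * 2000)     # sustained positive bias
assert rs[-1] < rs[0]           # resistance drops: the device "remembers"
```

The point of the sketch is the middle line: the state variable `w` (and hence the conductivity) is an integral of the current that has flowed, which is exactly the "memory of its history" property described above.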

The press release offers a description of biological synapses and their relationship to learning and memory,

A synapse is a point of connection between neurons, the main function of which is to transmit a signal (a spike – a particular type of signal, see fig. 2) from one neuron to another. Each neuron may have thousands of synapses, i.e. connect with a large number of other neurons. This means that information can be processed in parallel, rather than sequentially (as in modern computers). This is the reason why “living” neural networks are so immensely effective both in terms of speed and energy consumption in solving a large range of tasks, such as image / voice recognition, etc.

Over time, synapses may change their “weight”, i.e. their ability to transmit a signal. This property is believed to be the key to understanding the learning and memory functions of the brain.

From the physical point of view, synaptic “memory” and “learning” in the brain can be interpreted as follows: the neural connection possesses a certain “conductivity”, which is determined by the previous “history” of signals that have passed through the connection. If a synapse transmits a signal from one neuron to another, we can say that it has high “conductivity”, and if it does not, we say it has low “conductivity”. However, synapses do not simply function in on/off mode; they can have any intermediate “weight” (intermediate conductivity value). Accordingly, if we want to simulate them using certain devices, these devices will also have to have analogous characteristics.

The researchers have provided an illustration of a biological synapse,

Fig.2 The type of electrical signal transmitted by neurons (a “spike”). The red lines are various other biological signals, the black line is the averaged signal. Source: MIPT press office

Now, the press release ties the memristor information together with the biological synapse information to describe the new work at the MIPT,

As in a biological synapse, the value of the electrical conductivity of a memristor is the result of its previous “life” – from the moment it was made.

There are a number of physical effects that can be exploited to design memristors. In this study, the authors used devices based on ultrathin-film hafnium oxide, which exhibit the effect of soft (reversible) electrical breakdown under an applied external electric field. Most often, these devices use only two different states encoding logic zero and one. However, in order to simulate biological synapses, a continuous spectrum of conductivities had to be used in the devices.

“The detailed physical mechanism behind the function of the memristors in question is still debated. However, the qualitative model is as follows: in the metal–ultrathin oxide–metal structure, charged point defects, such as vacancies of oxygen atoms, are formed and move around in the oxide layer when exposed to an electric field. It is these defects that are responsible for the reversible change in the conductivity of the oxide layer,” says the co-author of the paper and researcher of MIPT’s Laboratory of Functional Materials and Devices for Nanoelectronics, Sergey Zakharchenko.

The authors used the newly developed “analogue” memristors to model various learning mechanisms (“plasticity”) of biological synapses. In particular, this involved functions such as long-term potentiation (LTP) or long-term depression (LTD) of a connection between two neurons. It is generally accepted that these functions are the underlying mechanisms of memory in the brain.

The authors also succeeded in demonstrating a more complex mechanism – spike-timing-dependent plasticity, i.e. the dependence of the value of the connection between neurons on the relative time taken for them to be “triggered”. It had previously been shown that this mechanism is responsible for associative learning – the ability of the brain to find connections between different events.

To demonstrate this function in their memristor devices, the authors purposefully used an electric signal which reproduced, as far as possible, the signals in living neurons, and they obtained a dependency very similar to those observed in living synapses (see fig. 3).
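For the curious, the spike-timing rule described above can be sketched in a few lines of Python. This is the standard pair-based textbook form of STDP with made-up parameters, not the actual signals or measurements from the MIPT experiments,

```python
# Minimal pair-based STDP rule: the weight change depends on the time
# difference between pre- and post-synaptic spikes. Amplitudes and the
# time constant are illustrative, not fitted to the HfO2 memristors.
import math

A_PLUS, A_MINUS = 0.05, 0.055   # learning amplitudes (illustrative)
TAU = 20.0                       # decay time constant in ms (illustrative)

def stdp_dw(dt_ms):
    """Weight change for spike timing dt_ms = t_post - t_pre."""
    if dt_ms > 0:                        # pre fires before post: strengthen
        return A_PLUS * math.exp(-dt_ms / TAU)
    if dt_ms < 0:                        # post fires before pre: weaken
        return -A_MINUS * math.exp(dt_ms / TAU)
    return 0.0

assert stdp_dw(10.0) > 0     # causal pairing potentiates
assert stdp_dw(-10.0) < 0    # anti-causal pairing depresses
```

The curve this function traces out, positive and decaying on one side of zero, negative and decaying on the other, is the same general shape as the dependency the researchers report for their memristors in fig. 3.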

Fig.3. The change in conductivity of memristors depending on the temporal separation between “spikes” (right) and the change in potential of the neuron connections in biological neural networks. Source: MIPT press office

These results allowed the authors to confirm that the elements that they had developed could be considered a prototype of the “electronic synapse”, which could be used as a basis for the hardware implementation of artificial neural networks.

“We have created a baseline matrix of nanoscale memristors demonstrating the properties of biological synapses. Thanks to this research, we are now one step closer to building an artificial neural network. It may only be the very simplest of networks, but it is nevertheless a hardware prototype,” said the head of MIPT’s Laboratory of Functional Materials and Devices for Nanoelectronics, Andrey Zenkevich.

Here’s a link to and a citation for the paper,

Crossbar Nanoscale HfO2-Based Electronic Synapses by Yury Matveyev, Roman Kirtaev, Alena Fetisova, Sergey Zakharchenko, Dmitry Negrov and Andrey Zenkevich. Nanoscale Research Letters 2016, 11:147. DOI: 10.1186/s11671-016-1360-6

Published: 15 March 2016

This is an open access paper.

Does digitizing material mean it’s safe? A tale of Canada’s Fisheries and Oceans scientific libraries

As has been noted elsewhere the federal government of Canada has shut down a number of Fisheries and Oceans Canada libraries in a cost-saving exercise. The government is hoping to save some $440,000 in the 2014-15 fiscal year by digitizing, consolidating, and discarding the libraries and their holdings.

One would imagine that this is being done in a measured, thoughtful fashion but one would be wrong.

Andrew Nikiforuk in a December 23, 2013 article for The Tyee wrote one of the first articles about the closure of the fisheries libraries,

Scientists say the closure of some of the world’s finest fishery, ocean and environmental libraries by the Harper government has been so chaotic that irreplaceable collections of intellectual capital built by Canadian taxpayers for future generations has been lost forever.

Glyn Moody in a Jan. 7, 2014 post on Techdirt noted this,

What’s strange is that even though the rationale for this mass destruction is apparently in order to reduce costs, opportunities to sell off more valuable items have been ignored. A scientist is quoted as follows:

“Hundreds of bound journals, technical reports and texts still on the shelves, presumably meant for the garbage or shredding. I saw one famous monograph on zooplankton, which would probably fetch a pretty penny at a used science bookstore… anybody could go in and help themselves, with no record kept of who got what.”

Gloria Galloway in a Jan. 7, 2014 article for the Globe and Mail adds more details about what has been lost,

Peter Wells, an adjunct professor and senior research fellow at the International Ocean Institute at Dalhousie University in Halifax, said it is not surprising few members of the public used the libraries. But “the public benefits by the researchers and the different research labs being able to access the information,” he said.

Scientists say it is true that most modern research is done online.

But much of the material in the DFO libraries was not available digitally, Dr. Wells said, adding that some of it had great historical value. And some was data from decades ago that researchers use to determine how lakes and rivers have changed.

“I see this situation as a national tragedy, done under the pretext of cost savings, which, when examined closely, will prove to be a false motive,” Dr. Wells said. “A modern democratic society should value its information resources, not reduce, or worse, trash them.”

Dr. Ayles [Burton Ayles, a former DFO regional director and the former director of science for the Freshwater Institute in Winnipeg] said the Freshwater Institute had reports from the 1880s and some that were available nowhere else. “There was a whole core people who used that library on a regular basis,” he said.

Dr. Ayles pointed to a collection of three-ringed binders, occupying seven metres of shelf space, that contained the data collected during a study in the 1960s and 1970s of the proposed Mackenzie Valley pipeline. For a similar study in the early years of this century, he said, “scientists could go back to that information and say, ‘What was the baseline 30 years ago? What was there then and what is there now?’ ”

When asked how much of the discarded information has been digitized, the government did not provide an answer, but said the process continues.

Today, Margo McDiarmid’s Jan. 30, 2014 article for the Canadian Broadcasting Corporation (CBC) news online further explores digitization of the holdings,

Fisheries and Oceans is closing seven of its 11 libraries by 2015. It’s hoping to save more than $443,000 in 2014-15 by consolidating its collections into four remaining libraries.

Shea [Fisheries and Oceans Minister Gail Shea] told CBC News in a statement Jan. 6 that all copyrighted material has been digitized and the rest of the collection will be soon. The government says that putting material online is a more efficient way of handling it.

But documents from her office show there’s no way of really knowing that is happening.

“The Department of Fisheries and Oceans’ systems do not enable us to determine the number of items digitized by location and collection,” says the response by the minister’s office to MacAulay’s inquiry. [emphasis mine]

The documents also show that the department had to figure out what to do with 242,207 books and research documents from the libraries being closed. It kept 158,140 items and offered the remaining 84,067 to libraries outside the federal government.

Shea’s office told CBC that the books were also “offered to the general public and recycled in a ‘green fashion’ if there were no takers.”

The fate of thousands of books appears to be “unknown,” although the documents’ numbers show 160 items from the Maurice Lamontagne Library in Mont-Joli, Que., were “discarded.” A Radio-Canada story in June about the library showed piles of volumes in dumpsters.

And the numbers prove a lot more material was tossed out. The bill to discard material from four of the seven libraries totals $22,816.76.

Leaving aside the issue of whether or not rare books were given away or put in dumpsters, it’s not confidence-building when the government minister can’t offer information about which books have been digitized and where they might be located online.

Interestingly, Fisheries and Oceans is not the only department/ministry shutting down libraries (from McDiarmid’s CBC article),

Fisheries and Oceans is just one of 14 federal departments, including Health Canada and Environment Canada, that have been shutting physical libraries and digitizing or consolidating the material into closed central book vaults.

I was unaware of the problems with Health Canada’s libraries but Laura Payton’s and Max Paris’ Jan. 20, 2014 article for CBC news online certainly raised my eyebrows,

Health Canada scientists are so concerned about losing access to their research library that they’re finding workarounds, with one squirrelling away journals and books in his basement for colleagues to consult, says a report obtained by CBC News.

The draft report from a consultant hired by the department warned it not to close its library, but the report was rejected as flawed and the advice went unheeded.

Before the main library closed, the inter-library loan functions were outsourced to a private company called Infotrieve, the consultant wrote in a report ordered by the department. The library’s physical collection was moved to the National Science Library on the Ottawa campus of the National Research Council last year.

“Staff requests have dropped 90 per cent over in-house service levels prior to the outsource. This statistic has been heralded as a cost savings by senior HC [Health Canada] management,” the report said.

“However, HC scientists have repeatedly said during the interview process that the decrease is because the information has become inaccessible — either it cannot arrive in due time, or it is unaffordable due to the fee structure in place.”


The report noted the workarounds scientists used to overcome their access problems.

Mueller [Dr. Rudi Mueller, who left the department in 2012] used his contacts in industry for scientific literature. He also went to university libraries where he had a faculty connection.

The report said Health Canada scientists sometimes use the library cards of university students in co-operative programs at the department.

Unsanctioned libraries have been created by science staff.

“One group moved its 250 feet of published materials to an employee’s basement. When you need a book, you email ‘Fred,’ and ‘Fred’ brings the book in with him the next day,” the consultant wrote in his report.

“I think it’s part of being a scientist. You find a way around the problems,” Mueller told CBC News.

Unsanctioned, underground libraries aside, the assumption that digitizing documents and books ensures access is false. Glyn Moody in a Nov. 12, 2013 article for Techdirt gives a chastening example of how vulnerable our digital memories are,

The Internet Archive is the world’s online memory, holding the only copies of many historic (and not-so-historic) Web pages that have long disappeared from the Web itself.

Bad news:

This morning at about 3:30 a.m. a fire started at the Internet Archive’s San Francisco scanning center.

Good news:

no one was hurt and no data was lost. Our main building was not affected except for damage to one electrical run. This power issue caused us to lose power to some servers for a while.

Bad news:

Some physical materials were in the scanning center because they were being digitized, but most were in a separate locked room or in our physical archive and were not lost. Of those materials we did unfortunately lose, about half had already been digitized. We are working with our library partners now to assess.

That loss is unfortunate, but imagine if the fire had been in the main server room holding the Internet Archive’s 2 petabytes of data. Wisely, the project has placed copies at other locations …

That’s good to know, but it seems rather foolish for the world to depend on the Internet Archive always being able to keep all its copies up to date, especially as the quantity of data that it stores continues to rise. This digital library is so important in historical and cultural terms: surely it’s time to start mirroring the Internet Archive around the world in many locations, with direct and sustained support from multiple governments.

In addition to the issue of vulnerability, there’s also the issue of authenticity, from my June 5, 2013 posting about science, archives and memories,

… Luciana Duranti [Professor and Chair, MAS {Master of Archival Studies} Program at the University of British Columbia and Director, InterPARES] and her talk titled, Trust and Authenticity in the Digital Environment: An Increasingly Cloudy Issue, which took place in Vancouver (Canada) last year (mentioned in my May 18, 2012 posting).

Duranti raised many, many issues that most of us don’t consider when we blithely store information in the ‘cloud’ or create blogs that turn out to be repositories of a sort (and then don’t know what to do with them; ça c’est moi). She also previewed a Sept. 26 – 28, 2013 conference to be hosted in Vancouver by UNESCO (United Nations Educational, Scientific, and Cultural Organization), “Memory of the World in the Digital Age: Digitization and Preservation.” (UNESCO’s Memory of the World programme hosts a number of these themed conferences and workshops.)

The Sept. 2013 UNESCO ‘memory of the world’ conference in Vancouver seems rather timely in retrospect. The Council of Canadian Academies (CCA) announced that Dr. Doug Owram would be chairing their Memory Institutions and the Digital Revolution assessment (mentioned in my Feb. 22, 2013 posting; scroll down 80% of the way) and, after checking recently, I noticed that the Expert Panel has been assembled and it includes Duranti. Here’s the assessment description from the CCA’s ‘memory institutions’ webpage,

Library and Archives Canada has asked the Council of Canadian Academies to assess how memory institutions, which include archives, libraries, museums, and other cultural institutions, can embrace the opportunities and challenges of the changing ways in which Canadians are communicating and working in the digital age.


Over the past three decades, Canadians have seen a dramatic transformation in both personal and professional forms of communication due to new technologies. Where the early personal computer and word-processing systems were largely used and understood as extensions of the typewriter, advances in technology since the 1980s have enabled people to adopt different approaches to communicating and documenting their lives, culture, and work. Increased computing power, inexpensive electronic storage, and the widespread adoption of broadband computer networks have thrust methods of communication far ahead of our ability to grasp the implications of these advances.

These trends present both significant challenges and opportunities for traditional memory institutions as they work towards ensuring that valuable information is safeguarded and maintained for the long term and for the benefit of future generations. It requires that they keep track of new types of records that may be of future cultural significance, and of any changes in how decisions are being documented. As part of this assessment, the Council’s expert panel will examine the evidence as it relates to emerging trends, international best practices in archiving, and strengths and weaknesses in how Canada’s memory institutions are responding to these opportunities and challenges. Once complete, this assessment will provide an in-depth and balanced report that will support Library and Archives Canada and other memory institutions as they consider how best to manage and preserve the mass quantity of communications records generated as a result of new and emerging technologies.

The Council’s assessment is running concurrently with the Royal Society of Canada’s expert panel assessment on Libraries and Archives in 21st century Canada. Though similar in subject matter, these assessments have a different focus and follow a different process. The Council’s assessment is concerned foremost with opportunities and challenges for memory institutions as they adapt to a rapidly changing digital environment. In navigating these issues, the Council will draw on a highly qualified and multidisciplinary expert panel to undertake a rigorous assessment of the evidence and of significant international trends in policy and technology now underway. The final report will provide Canadians, policy-makers, and decision-makers with the evidence and information needed to consider policy directions. In contrast, the RSC panel focuses on the status and future of libraries and archives, and will draw upon a public engagement process.

So, the government is shutting down libraries in order to save money and they’re praying (?) that the materials have been digitized and adequate care has been taken to ensure that they will not be lost in some disaster or other. Meanwhile the Council of Canadian Academies is conducting an assessment of memory institutions in the digital age. The approach seems to be backwards.

On a more amusing note, Rick Mercer parodies at least one way scientists are finding to circumvent the cost-cutting exercise in an excerpt (approximately 1 min.) from his Jan. 29, 2014 Rick Mercer Report telecast (thanks Roz),

Mercer’s comment about sports and Canada’s Prime Minister, Stephen Harper’s preferences is a reference to Harper’s expressed desire to write a book about hockey and possibly a veiled reference to Harper’s successful move to prorogue parliament during the 2010 Winter Olympic games in Vancouver in what many observers suggested was a strategy allowing Harper to attend the games at his leisure.

Whether or not you agree with the decision to shutdown some libraries, the implementation seems to have been a remarkably sloppy affair.

Memory chips could get organic and a nod to singer, Dean Martin

Researchers from the University of Washington (located in Washington state) and Southeast University (China) have found a way to create organic ferroelectric molecules which offer the possibility of flexible, nontoxic memory chips according to a Jan. 24, 2013 news item on ScienceDaily,

At the heart of computing are tiny crystals that transmit and store digital information’s ones and zeroes. Today these are hard and brittle materials. But cheap, flexible, nontoxic organic molecules may play a role in the future of hardware.

A team led by the University of Washington in Seattle and the Southeast University in China discovered a molecule [diisopropylammonium bromide?] that shows promise as an organic alternative to today’s silicon-based semiconductors. The findings, published this week in the journal Science, display properties that make it well suited to a wide range of applications in memory, sensing and low-cost energy storage.

“This molecule is quite remarkable, with some of the key properties that are comparable with the most popular inorganic crystals,” said co-corresponding author Jiangyu Li, a UW associate professor of mechanical engineering.

The Jan. 24, 2013 University of Washington news release by Hannah Hickey, which originated the news item, details the advantages of these crystals while noting they are not likely to replace currently used ferroelectric materials as the new molecule is not suitable for all uses (Note: Links have been removed),

The carbon-based material could offer even cheaper ways to store digital information; provide a flexible, nontoxic material for medical sensors that would be implanted in the body; and create a less costly, lighter material to harvest energy from natural vibrations.

The new molecule is a ferroelectric, meaning it is positively charged on one side and negatively charged on the other, where the direction can be flipped by applying an electrical field. Synthetic ferroelectrics are now used in some displays, sensors and memory chips.

In the study the authors pitted their molecule against barium titanate, a long-known ferroelectric material that is a standard for performance. Barium titanate is a ceramic crystal and contains titanium; it has largely been replaced in industrial applications by better-performing but lead-containing alternatives.

The new molecule holds its own against the standard-bearer. It has a natural polarization, a measure of how strongly the molecules align to store information, of 23, compared to 26 for barium titanate. To Li’s knowledge this is the best organic ferroelectric discovered to date.
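For readers who enjoy the physics, the two stable polarization states of a ferroelectric are often pictured as a double-well 'Landau' free energy that an applied field can tilt. Here's a rough numerical sketch in Python with purely illustrative coefficients, not fitted to diisopropylammonium bromide or any real material,

```python
# Landau-style double-well sketch of ferroelectric switching: the free
# energy F(P) = -a*P**2 + b*P**4 - E*P has two minima (+P0 and -P0) at
# zero field E, and an applied field tilts the landscape so that one
# polarization state wins. Coefficients a and b are illustrative.

def equilibrium_polarization(E, a=1.0, b=1.0, p=1.0, steps=10_000, eta=1e-3):
    """Relax P by gradient descent on F(P); return the settled polarization."""
    for _ in range(steps):
        dF = -2*a*p + 4*b*p**3 - E     # dF/dP
        p -= eta * dF
    return p

p_zero = equilibrium_polarization(E=0.0, p=1.0)    # stays in the +P well
p_flip = equilibrium_polarization(E=-2.0, p=1.0)   # strong opposing field flips it
assert p_zero > 0 and p_flip < 0
```

The flip from `p_zero` to `p_flip` is the cartoon version of what the news release describes: the direction of the molecule's charge separation can be reversed by applying an electric field, which is what makes a ferroelectric usable as a memory element.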

A recent study in Nature announced an organic ferroelectric that works at room temperature. By contrast, this molecule retains its properties up to 153 degrees Celsius (307 degrees F), even higher than for barium titanate.

The new molecule also offers a full bag of electric tricks. Its dielectric constant – a measure of how well it can store energy – is more than 10 times higher than for other organic ferroelectrics. And it’s also a good piezoelectric, meaning it’s efficient at converting movement into electricity, which is useful in sensors.

The new molecule is made from bromine, a natural element isolated from sea salt, mixed with carbon, hydrogen and nitrogen (its full name is diisopropylammonium bromide). Researchers dissolved the elements in water and evaporated the liquid to grow the crystal. Because the molecule contains carbon, it is organic, and pivoting chemical bonds allow it to flex.

The molecule would not replace current inorganic materials, Li said, but it could be used in applications where cost, ease of manufacturing, weight, flexibility and toxicity are important.

Here’s a citation and link to the paper,

Diisopropylammonium Bromide Is a High-Temperature Molecular Ferroelectric Crystal by Da-Wei Fu, Hong-Ling Ci, Yuanming Liu, Qiong Ye, Wen Zhang, Yi Zhang, Xue-Yuan Chen, Gianluca Giovannetti, Massimo Capone, Jiangyu Li, Ren-Gen Xiong. Science 25 January 2013: Vol. 339 no. 6118, pp. 425-428. DOI: 10.1126/science.1229675

This paper, along with a few others about ferroelectric materials in the Jan. 2013 issue of Science, is behind a paywall. Given the title of the paper, I’ve made the assumption that the new molecule is diisopropylammonium bromide.

At any rate, all of this has led me to an old song by singer, Dean Martin, titled ‘Memories are made of this,’

I found this piece of information in the comments,

 neuro518 3 weeks ago

the guitarist is Terry Gilkyson and his group here is called the Easy Riders. He wrote this song and hundreds of others including “Fast Freight” performed by the Kingston Trio. He was in at the very beginning of the transition of American music from pop to folk and was one of the best. For some reason he never gets much credit, but he was one of the best.

Happy Friday, Jan. 25, 2013.