Tag Archives: memory

Two approaches to memristors

Within one day of each other in October 2018, two different teams working on memristors with applications to neuroprosthetics and neuromorphic computing (brainlike computing) announced their results.

Russian team

An October 15, 2018 Lobachevsky University press release (also published that day on EurekAlert) describes a new approach to memristors,

Biological neurons are coupled unidirectionally through a special junction called a synapse. An electrical signal is transmitted along a neuron after some biochemical reactions initiate a chemical release to activate an adjacent neuron. These junctions are crucial for cognitive functions, such as perception, learning and memory.

A group of researchers from Lobachevsky University in Nizhny Novgorod investigates the dynamics of an individual memristive device when it receives a neuron-like signal as well as the dynamics of a network of analog electronic neurons connected by means of a memristive device. According to Svetlana Gerasimova, junior researcher at the Physics and Technology Research Institute and at the Neurotechnology Department of Lobachevsky University, this system simulates the interaction between synaptically coupled brain neurons while the memristive device imitates a neuron axon.

A memristive device is a physical model of Chua’s [Dr. Leon Chua, University of California at Berkeley; see my May 9, 2008 posting for a brief description of Dr. Chua’s theory] memristor, which is an electric circuit element capable of changing its resistance depending on the electric signal received at the input. The device based on a Au/ZrO2(Y)/TiN/Ti structure demonstrates reproducible bipolar switching between the low and high resistance states. Resistive switching is determined by the oxidation and reduction of segments of conducting channels (filaments) in the oxide film when voltage of different polarity is applied to it. In the context of the present work, the ability of a memristive device to change conductivity under the action of pulsed signals makes it an almost ideal electronic analog of a synapse.

Lobachevsky University scientists and engineers supported by the Russian Science Foundation (project No.16-19-00144) have experimentally implemented and theoretically described the synaptic connection of neuron-like generators using the memristive interface and investigated the characteristics of this connection.

“Each neuron is implemented in the form of a pulse signal generator based on the FitzHugh-Nagumo model. This model provides a qualitative description of the main characteristics of neurons: the presence of an excitation threshold, and the presence of excitable and self-oscillatory regimes with the possibility of a changeover. At the initial time moment, the master generator is in the self-oscillatory mode, the slave generator is in the excitable mode, and the memristive device is used as a synapse. The signal from the master generator is conveyed to the input of the memristive device, and the signal from the output of the memristive device is transmitted to the input of the slave generator via the loading resistance. When the memristive device switches from a high resistance to a low resistance state, the connection between the two neuron-like generators is established. The slave generator goes into the oscillatory mode and the signals of the two generators are synchronized. Synchronization under different signal modulation modes was demonstrated for the Au/ZrO2(Y)/TiN/Ti memristive device,” says Svetlana Gerasimova.

UNN researchers believe that the next important stage in the development of neuromorphic systems based on memristive devices is to apply such systems in neuroprosthetics. Memristive systems will provide a highly efficient imitation of synaptic connection due to the stochastic nature of the memristive phenomenon and can be used to increase the flexibility of the connections for neuroprosthetic purposes. Lobachevsky University scientists have vast experience in the development of neurohybrid systems. In particular, a series of experiments was performed with the aim of connecting the FitzHugh-Nagumo oscillator with a biological object, a rat brain hippocampal slice. The signal from the electronic neuron generator was transmitted through the optic fiber communication channel to the bipolar electrode which stimulated Schaffer collaterals (axons of pyramidal neurons in the CA3 field) in the hippocampal slices. “We are going to combine our efforts in the design of artificial neuromorphic systems and our experience of working with living cells to improve flexibility of prosthetics,” concludes S. Gerasimova.

The results of this research were presented at the 38th International Conference on Nonlinear Dynamics (Dynamics Days Europe) at Loughborough University (Great Britain).

This diagram illustrates an aspect of the work,

Caption: Schematic of electronic neurons coupling via a memristive device. Credit: Lobachevsky University
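To make the quoted description more concrete, here is a minimal Python sketch of two FitzHugh-Nagumo generators coupled through a threshold-type memristive synapse. This is not the team’s circuit model: the switching rule for the device and every parameter value are illustrative assumptions on my part.

```python
# Two FitzHugh-Nagumo (FHN) pulse generators coupled by a hypothetical
# memristive synapse. Textbook FHN parameters; nothing here is taken
# from the Lobachevsky team's actual device measurements.
eps, a, b = 0.08, 0.7, 0.8

def fhn_rhs(v, w, i_ext):
    """Right-hand side of the FHN equations: dv/dt, dw/dt."""
    dv = v - v**3 / 3.0 - w + i_ext
    dw = eps * (v + a - b * w)
    return dv, dw

# Assumed threshold-type memristive synapse: conductance G drifts toward
# G_max while the presynaptic voltage exceeds a switching threshold,
# loosely mimicking filament formation in the oxide film.
G_min, G_max, v_th, tau_G = 0.0, 0.5, 1.0, 50.0

dt, steps = 0.05, 40000
v1, w1 = -1.0, 0.0   # master: self-oscillatory (driven by i_ext = 0.5)
v2, w2 = -1.2, -0.6  # slave: excitable, no external drive of its own
G = G_min            # the device starts in its high-resistance state

v2_trace = []
for _ in range(steps):
    if v1 > v_th:                       # presynaptic spikes switch the device
        G += dt * (G_max - G) / tau_G
    i_syn = G * (v1 - v2)               # current injected into the slave
    dv1, dw1 = fhn_rhs(v1, w1, 0.5)
    dv2, dw2 = fhn_rhs(v2, w2, i_syn)
    v1, w1 = v1 + dt * dv1, w1 + dt * dw1
    v2, w2 = v2 + dt * dv2, w2 + dt * dw2
    v2_trace.append(v2)

# count slave spikes via upward threshold crossings
spikes = sum(1 for p, q in zip(v2_trace, v2_trace[1:]) if p < 1.0 <= q)
print(f"final conductance: {G:.3f}, slave spikes: {spikes}")
```

The mechanism is the one in the quoted passage: while the device sits in its high-resistance state the slave stays quiet, and once repeated presynaptic pulses drive the conductance up, the slave begins firing in step with the master.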

US team

The American Institute of Physics (AIP) announced the publication of a ‘memristor paper’ by a team from the University of Southern California (USC) in an October 16, 2018 news item on phys.org,

Just like their biological counterparts, hardware that mimics the neural circuitry of the brain requires building blocks that can adjust how they synapse, with some connections strengthening at the expense of others. One such approach, called memristors, uses current resistance to store this information. New work looks to overcome reliability issues in these devices by scaling memristors to the atomic level.

An October 16, 2018 AIP news release (also on EurekAlert), which originated the news item, delves further into the particulars of this particular piece of memristor research,

A group of researchers demonstrated a new type of compound synapse that can achieve synaptic weight programming and conduct vector-matrix multiplication with significant advances over the current state of the art. Publishing its work in the Journal of Applied Physics, from AIP Publishing, the group’s compound synapse is constructed with atomically thin boron nitride memristors running in parallel to ensure efficiency and accuracy.

The article appears in a special topic section of the journal devoted to “New Physics and Materials for Neuromorphic Computation,” which highlights new developments in physical and materials science research that hold promise for developing the very large-scale, integrated “neuromorphic” systems of tomorrow that will carry computation beyond the limitations of current semiconductors today.

“There’s a lot of interest in using new types of materials for memristors,” said Ivan Sanchez Esqueda, an author on the paper. “What we’re showing is that filamentary devices can work well for neuromorphic computing applications, when constructed in new clever ways.”

Current memristor technology suffers from a wide variation in how signals are stored and read across devices, both for different types of memristors as well as different runs of the same memristor. To overcome this, the researchers ran several memristors in parallel. The combined output can achieve accuracies up to five times those of conventional devices, an advantage that compounds as devices become more complex.

The choice to go to the subnanometer level, Sanchez said, was born out of an interest in keeping all of these parallel memristors energy-efficient. An array of the group’s memristors was found to be 10,000 times more energy-efficient than memristors currently available.

“It turns out if you start to increase the number of devices in parallel, you can see large benefits in accuracy while still conserving power,” Sanchez said. Sanchez said the team next looks to further showcase the potential of the compound synapses by demonstrating their use completing increasingly complex tasks, such as image and pattern recognition.
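The accuracy argument in the preceding paragraphs is easy to check in simulation. Below is a rough Python sketch, not the group’s boron nitride device model: each synaptic weight is realized as several memristors programmed toward the same conductance, each with a large (arbitrarily chosen) device-to-device spread, and the crossbar’s vector-matrix multiplication reduces to a matrix product because output currents sum per Kirchhoff’s laws.

```python
import numpy as np

rng = np.random.default_rng(0)

def programmed_weights(target, n_parallel, spread=0.25):
    """Model each weight as n_parallel memristors programmed toward the
    same target conductance, each with device-to-device variation.
    Parallel conductances add, so their mean sets the effective weight."""
    noise = 1 + spread * rng.standard_normal((n_parallel,) + target.shape)
    return (target * noise).mean(axis=0)

W = rng.uniform(0.1, 1.0, size=(4, 3))   # target conductance matrix
v_in = rng.uniform(0.0, 1.0, size=3)     # input voltage vector
ideal = W @ v_in                         # ideal crossbar output currents

for n in (1, 4, 16):
    realized = programmed_weights(W, n)
    err = np.abs(realized @ v_in - ideal).mean()
    print(f"{n:>2} devices per synapse -> mean output-current error {err:.4f}")
```

The error shrinks roughly as the square root of the number of parallel devices, which is why accuracy gains of the kind the release describes can be had without changing the crossbar architecture itself.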

Here’s an image illustrating the parallel artificial synapses,

Caption: Hardware that mimics the neural circuitry of the brain requires building blocks that can adjust how they synapse. One such approach, called memristors, uses current resistance to store this information. New work looks to overcome reliability issues in these devices by scaling memristors to the atomic level. Researchers demonstrated a new type of compound synapse that can achieve synaptic weight programming and conduct vector-matrix multiplication with significant advances over the current state of the art. They discuss their work in this week’s Journal of Applied Physics. This image shows a conceptual schematic of the 3D implementation of compound synapses constructed with boron nitride oxide (BNOx) binary memristors, and the crossbar array with compound BNOx synapses for neuromorphic computing applications. Credit: Ivan Sanchez Esqueda

Here’s a link to and a citation for the paper,

Efficient learning and crossbar operations with atomically-thin 2-D material compound synapses by Ivan Sanchez Esqueda, Huan Zhao and Han Wang. The article will appear in the Journal of Applied Physics Oct. 16, 2018 (DOI: 10.1063/1.5042468).

This paper is behind a paywall.

*Title corrected from ‘Two approaches to memristors featuring’ to ‘Two approaches to memristors’ on May 31, 2019 at 1455 hours PDT.

Genes, intelligence, Chinese CRISPR (clustered regularly interspaced short palindromic repeats) babies, and other children

This started out as an update and now it’s something else. What follows is a brief introduction to the Chinese CRISPR twins; a brief examination of parents, children, and competitiveness; and, finally, a suggestion that genes may not be what we thought. I also include a discussion about how some think scientists should respond when they know beforehand that one of their own is crossing an ethical line. Basically, this is a complex topic and I am attempting to interweave a number of competing lines of inquiry into one narrative about human nature and the latest genetics obsession.

Introduction to the Chinese CRISPR twins

Back in November 2018 I covered the story about the Chinese scientist, He Jiankui, who had used CRISPR technology to edit genes in embryos that were subsequently implanted in a waiting mother (apparently there could be as many as eight mothers), with the babies being brought to term despite an international agreement (of sorts) not to do that kind of work. At this time, we know of the twins, Lulu and Nana, but, by now, there may be more babies. (I have much more detail about the initial controversies in my November 28, 2018 posting.)

It seems the drama has yet to finish unfolding. There may be another consequence of He’s genetic tinkering.

Could the CRISPR babies, Lulu and Nana, have enhanced cognitive abilities?

Yes, according to Antonio Regalado’s February 21, 2019 article (behind a paywall) for MIT’s (Massachusetts Institute of Technology) Technology Review, those engineered babies may have enhanced abilities for learning and remembering.

For those of us who can’t get beyond the paywall, others have been successful. Josh Gabbatiss in his February 22, 2019 article for independent.co.uk provides some detail,

The world’s first gene edited babies may have had their brains unintentionally altered – and perhaps cognitively enhanced – as a result of the controversial treatment undertaken by a team of Chinese scientists.

Dr He Jiankui and his team allegedly deleted a gene from a number of human embryos before implanting them in their mothers, a move greeted with horror by the global scientific community. The only known successful birth so far is the case of twin girls Nana and Lulu.

The now disgraced scientist claimed that he removed a gene called CCR5 [emphasis mine] from their embryos in an effort to make the twins resistant to infection by HIV.

But another twist in the saga has now emerged after a new paper provided more evidence that the impact of CCR5 deletion reaches far beyond protection against dangerous viruses – people who naturally lack this gene appear to recover more quickly from strokes, and even go further in school. [emphasis mine]

Dr Alcino Silva, a neurobiologist at the University of California, Los Angeles, who helped identify this role for CCR5 said the work undertaken by Dr Jiankui likely did change the girls’ brains.

“The simplest interpretation is that those mutations will probably have an impact on cognitive function in the twins,” he told the MIT Technology Review.

The connection immediately raised concerns that the gene was targeted due to its known links with intelligence, which Dr Silva said was his immediate response when he heard the news.

… there is no evidence that this was Dr Jiankui’s goal and at a press conference organised after the initial news broke, he said he was aware of the work but was “against using genome editing for enhancement”.

…

Claire Maldarelli’s February 22, 2019 article for Popular Science provides more information about the CCR5 gene/protein (Note: Links have been removed),

CCR5 is a protein that sits on the surface of white blood cells, a major component of the human immune system. There, it allows HIV to enter and infect a cell. A chunk of the human population naturally carries a mutation that makes CCR5 nonfunctional (one study found that 10 percent of Europeans have this mutation), which often results in a smaller protein size and one that isn’t located on the outside of the cell, preventing HIV from ever entering and infecting the human immune system.

The goal of the Chinese researchers’ work, led by He Jiankui of the Southern University of Science and Technology located in Shenzhen, was to tweak the embryos’ genome to lack CCR5, ensuring the babies would be immune to HIV.

But genetics is rarely that simple.

In recent years, the CCR5 gene has been a target of ongoing research, and not just for its relationship to HIV. In an attempt to understand what influences memory formation and learning in the brain, a group of researchers at UCLA found that lowering the levels of CCR5 production enhanced both learning and memory formation. This connection led those researchers to think that CCR5 could be a good drug target for helping stroke victims recover: Relearning how to move, walk, and talk is a key component to stroke rehabilitation.

… promising research, but it begs the question: What does that mean for the babies who had their CCR5 genes edited via CRISPR prior to their birth? Researchers speculate that the alteration will have effects on the children’s cognitive functioning. …

John Loeffler’s February 22, 2019 article for interestingengineering.com notes that there are still many questions about He’s (the scientist’s surname) research, including: did he (pronoun) do what he claimed? (Note: Links have been removed),

Considering that no one knows for sure whether He has actually done as he and his team claim, the swiftness of the condemnation of his work—unproven as it is—shows the sensitivity around this issue.

Whether He did in fact edit Lulu and Nana’s genes, it appears he didn’t intend to impact their cognitive capacities. According to MIT Technology Review, not a single researcher studying CCR5’s role in intelligence was contacted by He, even as other doctors and scientists were sought out for advice about his project.

This further adds to the alarm as there is every expectation that He should have known about the connection between CCR5 and cognition.

At a gathering of gene-editing researchers in Hong Kong two days after the birth of the potentially genetically-altered twins was announced, He was asked about the potential impact of erasing CCR5 from the twins’ DNA on their mental capacity.

He responded that he knew about the potential cognitive link shown in Silva’s 2016 research. “I saw that paper, it needs more independent verification,” He said, before adding that “I am against using genome editing for enhancement.”

The problem, as Silva sees it, is that He may be blazing the trail for exactly that outcome, whether He intends to or not. Silva says that after his 2016 research was published, he received an uncomfortable amount of attention from some unnamed, elite Silicon Valley leaders who seem to be expressing serious interest in using CRISPR to give their children’s brains a boost through gene editing. [emphasis mine]

As such, Silva can be forgiven for not quite believing He’s claims that he wasn’t intending to alter the human genome for enhancement. …

The idea of designer babies isn’t new. As far back as Plato, the thought of using science to “engineer” a better human has been tossed about, but other than selective breeding, there really hasn’t been a path forward.

In the late 1800s and early 1900s, eugenics made a real push to accomplish something along these lines, and the results were horrifying, even before Nazism. After eugenics midwifed the Holocaust in World War II, the concept of designer children has largely been left as fodder for science fiction, since few reputable scientists would openly declare their intention to dabble in something once championed and pioneered by the greatest monsters of the 20th century.

Memories have faded though, and CRISPR significantly changes this decades-old calculus. CRISPR makes it easier than ever to target specific traits in order to add or subtract them from an embryo’s genetic code. Embryonic research is also a diverse enough field that some scientist could see pioneering designer babies as a way to establish their star power in academia while getting their names in the history books, [emphasis mine] all while working in relative isolation. They only need to reveal their results after the fact and there is little the scientific community can do to stop them, unfortunately.

When He revealed his research and data two days after announcing the births of Lulu and Nana, the gene-scientists at the Hong Kong conference were not all that impressed with the quality of He’s work. He has not provided fellow researchers with access to his data on Lulu, Nana, and their family’s genetics so that others can verify that Lulu and Nana’s CCR5 genes were in fact eliminated.

This almost rudimentary verification and validation would normally accompany a major announcement such as this. Neither has He’s work undergone a peer-review process and it hasn’t been formally published in any scientific journal—possibly for good reason.

Researchers such as Eric Topol, a geneticist at the Scripps Research Institute, have been finding several troubling signs in what little data He has released. Topol says that the editing itself was not precise and shows “all kinds of glitches.”

Gaetan Burgio, a geneticist at the Australian National University, is likewise unimpressed with the quality of He’s work. Speaking of the slides He showed at the conference to support his claim, Burgio calls it amateurish, “I can believe that he did it because it’s so bad.”

Worst of all, it’s entirely possible that He actually succeeded in editing Lulu and Nana’s genetic code in an ad hoc, unethical, and medically substandard way. Sadly, there is no shortage of families with means who would be willing to spend a lot of money to design their idea of a perfect child, so there is certainly demand for such a “service.”

It’s nice to know (sarcasm icon) that the ‘Silicon Valley elite’ are willing to volunteer their babies for scientific experimentation in a bid to enhance intelligence.

The ethics of not saying anything

Natalie Kofler, a molecular biologist, wrote a February 26, 2019 Nature opinion piece and call to action on the subject of why scientists who were ‘in the know’ remained silent about He’s work prior to his announcements,

Millions [?] were shocked to learn of the birth of gene-edited babies last year, but apparently several scientists were already in the know. Chinese researcher He Jiankui had spoken with them about his plans to genetically modify human embryos intended for pregnancy. His work was done before adequate animal studies and in direct violation of the international scientific consensus that CRISPR–Cas9 gene-editing technology is not ready or appropriate for making changes to humans that could be passed on through generations.

Scholars who have spoken publicly about their discussions with He described feeling unease. They have defended their silence by pointing to uncertainty over He’s intentions (or reassurance that he had been dissuaded), a sense of obligation to preserve confidentiality and, perhaps most consistently, the absence of a global oversight body. Others who have not come forward probably had similar rationales. But He’s experiments put human health at risk; anyone with enough knowledge and concern could have posted to blogs or reached out to their deans, the US National Institutes of Health or relevant scientific societies, such as the Association for Responsible Research and Innovation in Genome Editing (see page 440). Unfortunately, I think that few highly established scientists would have recognized an obligation to speak up.

I am convinced that this silence is a symptom of a broader scientific cultural crisis: a growing divide between the values upheld by the scientific community and the mission of science itself.

A fundamental goal of the scientific endeavour is to advance society through knowledge and innovation. As scientists, we strive to cure disease, improve environmental health and understand our place in the Universe. And yet the dominant values ingrained in scientists centre on the virtues of independence, ambition and objectivity. That is a grossly inadequate set of skills with which to support a mission of advancing society.

Editing the genes of embryos could change our species’ evolutionary trajectory. Perhaps one day, the technology will eliminate heritable diseases such as sickle-cell anaemia and cystic fibrosis. But it might also eliminate deafness or even brown eyes. In this quest to improve the human race, the strengths of our diversity could be lost, and the rights of already vulnerable populations could be jeopardized.

Decisions about how and whether this technology should be used will require an expanded set of scientific virtues: compassion to ensure its applications are designed to be just, humility to ensure its risks are heeded and altruism to ensure its benefits are equitably distributed.

Calls for improved global oversight and robust ethical frameworks are being heeded. Some researchers who apparently knew of He’s experiments are under review by their universities. Chinese investigators have said He skirted regulations and will be punished. But punishment is an imperfect motivator. We must foster researchers’ sense of societal values.

Fortunately, initiatives popping up throughout the scientific community are cultivating a scientific culture informed by a broader set of values and considerations. The Scientific Citizenship Initiative at Harvard University in Cambridge, Massachusetts, trains scientists to align their research with societal needs. The Summer Internship for Indigenous Peoples in Genomics offers genomics training that also focuses on integrating indigenous cultural perspectives into gene studies. The AI Now Institute at New York University has initiated a holistic approach to artificial-intelligence research that incorporates inclusion, bias and justice. And Editing Nature, a programme that I founded, provides platforms that integrate scientific knowledge with diverse cultural world views to foster the responsible development of environmental genetic technologies.

Initiatives such as these are proof [emphasis mine] that science is becoming more socially aware, equitable and just. …

I’m glad to see there’s work being done on introducing a broader set of values into the scientific endeavour. That said, these programmes seem to be voluntary, i.e., people self-select, and those most likely to participate in these programmes are the ones who might be inclined to integrate social values into their work in the first place.

This doesn’t address the issue of how to deal with unscrupulous governments pressuring scientists to create designer babies, or with hypercompetitive and possibly unscrupulous individuals, such as the ‘Silicon Valley elite’ mentioned in Loeffler’s article, teaming up with scientists who will stop at nothing to get their place in the history books.

Like Kofler, I’m encouraged to see these programmes but I’m a little less convinced that they will be enough. I don’t know what form it might take, but I think something a little more punitive is also called for.

CCR5 and freedom from HIV

I’ve added this piece about the Berlin and London patients because, back in November 2018, I failed to realize how compelling the idea of eradicating susceptibility to AIDS/HIV might be. Reading about some real life remissions helped me to understand some of He’s stated motivations a bit better. Unfortunately, there’s a major drawback described here in a March 5, 2019 news item on CBC (Canadian Broadcasting Corporation) online news attributed to Reuters,

An HIV-positive man in Britain has become the second known adult worldwide to be cleared of the virus that causes AIDS after he received a bone marrow transplant from an HIV-resistant donor, his doctors said.

The therapy had an early success with a man known as “the Berlin patient,” Timothy Ray Brown, a U.S. man treated in Germany who is 12 years post-transplant and still free of HIV. Until now, Brown was the only person thought to have been cured of infection with HIV, the virus that causes AIDS.

Such transplants are dangerous and have failed in other patients. They’re also impractical to try to cure the millions already infected.

In the latest case, the man known as “the London patient” has no trace of HIV infection, almost three years after he received bone marrow stem cells from a donor with a rare genetic mutation that resists HIV infection — and more than 18 months after he came off antiretroviral drugs.

“There is no virus there that we can measure. We can’t detect anything,” said Ravindra Gupta, a professor and HIV biologist who co-led a team of doctors treating the man.

Gupta described his patient as “functionally cured” and “in remission,” but cautioned: “It’s too early to say he’s cured.”

Gupta, now at Cambridge University, treated the London patient when he was working at University College London. The man, who has asked to remain anonymous, had contracted HIV in 2003, Gupta said, and in 2012 was also diagnosed with a type of blood cancer called Hodgkin’s lymphoma.

In 2016, when he was very sick with cancer, doctors decided to seek a transplant match for him.

“This was really his last chance of survival,” Gupta told Reuters.

Doctors found a donor with a gene mutation known as CCR5 delta 32, which confers resistance to HIV. About one per cent of people descended from northern Europeans have inherited the mutation from both parents and are immune to most HIV. The donor had this double copy of the mutation.

That was “an improbable event,” Gupta said. “That’s why this has not been observed more frequently.”

Most experts say it is inconceivable such treatments could be a way of curing all patients. The procedure is expensive, complex and risky. To do this in others, exact match donors would have to be found in the tiny proportion of people who have the CCR5 mutation.

Specialists said it is also not yet clear whether the CCR5 resistance is the only key [emphasis mine] — or whether the graft-versus-host disease may have been just as important. Both the Berlin and London patients had this complication, which may have played a role in the loss of HIV-infected cells, Gupta said.
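As an aside, the ‘one per cent’ figure here and the ‘10 percent of Europeans’ figure in Maldarelli’s article fit together if you read the latter as the delta 32 allele frequency and apply textbook Hardy-Weinberg proportions. That reading is my own back-of-envelope assumption, sketched below.

```python
# Hardy-Weinberg back-of-envelope: assumes ~10% is the CCR5-delta-32
# allele frequency among people of northern European descent.
q = 0.10                          # assumed delta-32 allele frequency
heterozygotes = 2 * q * (1 - q)   # one copy: partial protection at best
homozygotes = q ** 2              # two copies: the HIV-resistant genotype
print(f"one copy: {heterozygotes:.1%}, two copies: {homozygotes:.1%}")
# two copies works out to 1.0%, matching the figure quoted above
```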

Not only is there some question as to what role the CCR5 gene plays, there’s also a question as to whether or not we know what role genes play.

A big question: are genes what we thought?

Ken Richardson’s January 3, 2019 article for Nautilus (I stumbled across it on May 14, 2019 so I’m late to the party) makes and supports a startling statement: “It’s the End of the Gene As We Know It: We are not nearly as determined by our genes as once thought” (Note: A link has been removed),

We’ve all seen the stark headlines: “Being Rich and Successful Is in Your DNA” (Guardian, July 12); “A New Genetic Test Could Help Determine Children’s Success” (Newsweek, July 10); “Our Fortunetelling Genes” (Wall Street Journal, Nov. 16); and so on.

The problem is, many of these headlines are not discussing real genes at all, but a crude statistical model of them, involving dozens of unlikely assumptions. Now, slowly but surely, that whole conceptual model of the gene is being challenged.

We have reached peak gene, and passed it.

The preferred dogma started to appear in different versions in the 1920s. It was aptly summarized by renowned physicist Erwin Schrödinger in a famous lecture in Dublin in 1943. He told his audience that chromosomes “contain, in some kind of code-script, the entire pattern of the individual’s future development and of its functioning in the mature state.”

Around that image of the code a whole world order of rank and privilege soon became reinforced. These genes, we were told, come in different “strengths,” different permutations forming ranks that determine the worth of different “races” and of different classes in a class-structured society. A whole intelligence testing movement was built around that preconception, with the tests constructed accordingly.

The image fostered the eugenics and Nazi movements of the 1930s, with tragic consequences. Governments followed a famous 1938 United Kingdom education commission in decreeing that, “The facts of genetic inequality are something that we cannot escape,” and that, “different children … require types of education varying in certain important respects.”

Today, 1930s-style policy implications are being drawn once again. Proposals include gene-testing at birth for educational intervention, embryo selection for desired traits, identifying which classes or “races” are fitter than others, and so on. And clever marketizing now sees millions of people scampering to learn their genetic horoscopes in DNA self-testing kits. [emphasis mine]

So the hype now pouring out of the mass media is popularizing what has been lurking in the science all along: a gene-god as an entity with almost supernatural powers. Today it’s the gene that, in the words of the Anglican hymn, “makes us high and lowly and orders our estate.”

… at the same time, a counter-narrative is building, not from the media but from inside science itself.

So it has been dawning on us that there is no prior plan or blueprint for development: Instructions are created on the hoof, far more intelligently than is possible from dumb DNA. That is why today’s molecular biologists are reporting “cognitive resources” in cells; “bio-information intelligence”; “cell intelligence”; “metabolic memory”; and “cell knowledge”—all terms appearing in recent literature.1,2 “Do cells think?” is the title of a 2007 paper in the journal Cellular and Molecular Life Sciences.3 On the other hand the assumed developmental “program” coded in a genotype has never been described.


It is such discoveries that are turning our ideas of genetic causation inside out. We have traditionally thought of cell contents as servants to the DNA instructions. But, as the British biologist Denis Noble insists in an interview with the writer Suzan Mazur,1 “The modern synthesis has got causality in biology wrong … DNA on its own does absolutely nothing [emphasis mine] until activated by the rest of the system … DNA is not a cause in an active sense. I think it is better described as a passive data base which is used by the organism to enable it to make the proteins that it requires.”

I highly recommend reading Richardson’s article in its entirety. As well, you may want to read his book, “Genes, Brains and Human Potential: The Science and Ideology of Intelligence.”

As for “DNA on its own doing absolutely nothing,” that might be a bit of an eye-opener for the Silicon Valley elite types investigating cognitive advantages attributed to the lack of a CCR5 gene. Meanwhile, there are scientists inserting a human gene associated with brain development into monkeys.

Transgenic monkeys and human intelligence

An April 2, 2019 news item on chinadaily.com describes research into transgenic monkeys,

Researchers from China and the United States have created transgenic monkeys carrying a human gene that is important for brain development, and the monkeys showed human-like brain development.

Scientists have identified several genes that are linked to primate brain size. MCPH1 is a gene that is expressed during fetal brain development. Mutations in MCPH1 can lead to microcephaly, a developmental disorder characterized by a small brain.

In the study published in the Beijing-based National Science Review, researchers from the Kunming Institute of Zoology, Chinese Academy of Sciences, the University of North Carolina in the United States and other research institutions reported that they successfully created 11 transgenic rhesus monkeys (eight first-generation and three second-generation) carrying human copies of MCPH1.

According to the research article, brain imaging and tissue section analysis showed an altered pattern of neuron differentiation and a delayed maturation of the neural system, which is similar to the developmental delay (neoteny) in humans.

Neoteny in humans is the retention of juvenile features into adulthood. One key difference between humans and nonhuman primates is that humans require a much longer time to shape their neuro-networks during development, greatly elongating childhood, which is the so-called “neoteny.”

Here’s a link to and a citation for the paper,

Transgenic rhesus monkeys carrying the human MCPH1 gene copies show human-like neoteny of brain development by Lei Shi, Xin Luo, Jin Jiang, Yongchang Chen, Cirong Liu, Ting Hu, Min Li, Qiang Lin, Yanjiao Li, Jun Huang, Hong Wang, Yuyu Niu, Yundi Shi, Martin Styner, Jianhong Wang, Yi Lu, Xuejin Sun, Hualin Yu, Weizhi Ji, Bing Su. National Science Review, nwz043, https://doi.org/10.1093/nsr/nwz043 Published: 27 March 2019

This appears to be an open access paper.

Transgenic monkeys and an ethical uproar

Predictably, this research set off alarms as Sharon Kirkey’s April 12, 2019 article for the National Post describes in detail (Note: A link has been removed),

Their brains may not be bigger than normal, but monkeys created with human brain genes are exhibiting cognitive changes that suggest they might be smarter — and the experiments have ethicists shuddering.

In the wake of the genetically modified human babies scandal, Chinese scientists [along with a scientist from the US] are drawing fresh condemnation from philosophers and ethicists, this time over the announcement they’ve created transgenic monkeys with elements of a human brain.

Six of the monkeys died; however, the five survivors “exhibited better short-term memory and shorter reaction time” compared to their wild-type controls, the researchers report in the journal.

According to the researchers, the experiments represent the first attempt to study the genetic basis of human brain origin using transgenic monkeys. The findings, they insist, “have the potential to provide important — and potentially unique — insights into basic questions of what actually makes humans unique.”

For others, the work provokes a profoundly moral and visceral uneasiness. Even one of the collaborators — University of North Carolina computer scientist Martin Styner — told MIT Technology Review he considered removing his name from the paper, which he said was unable to find a publisher in the West.

“Now we have created this animal which is different than it is supposed to be,” Styner said. “When we do experiments, we have to have a good understanding of what we are trying to learn, to help society, and that is not the case here.”

In an email to the National Post, Styner said he has an expertise in medical image analysis and was approached by the researchers back in 2011. He said he had no input on the science in the project, beyond how to best do the analysis of their MRI data. “At the time, I did not think deeply enough about the ethical consideration.”

…

When it comes to the scientific use of nonhuman primates, ethicists say the moral compass is skewed in cases like this.

Given the kind of beings monkeys are, “I certainly would have thought you would have had to have a reasonable expectation of high benefit to human beings to justify the harms that you are going to have for intensely social, cognitively complex, emotional animals like monkeys,” said Letitia Meynell, an associate professor in the department of philosophy at Dalhousie University in Halifax.

“It’s not clear that this kind of research has any reasonable expectation of having any useful application for human beings,” she said.

The science itself is also highly dubious and fundamentally flawed in its logic, she said.

“If you took Einstein as a baby and you raised him in the lab, he wouldn’t turn out to be Einstein,” Meynell said. “If you’re actually interested in studying the cognitive complexity of these animals, you’re not going to get a good representation of that by raising them in labs, because they can’t develop the kind of cognitive and social skills they would in their normal environment.”

The Chinese said the MCPH1 gene is one of the strongest candidates for human brain evolution. But looking at a single gene is just bad genetics, Meynell said. Multiple genes and their interactions affect the vast majority of traits.

My point is that there’s a lot of research focused on intelligence and genes when we don’t really know what role genes actually play and when there doesn’t seem to be any serious oversight.

Global plea for moratorium on heritable genome editing

A March 13, 2019 University of Otago (New Zealand) press release (also on EurekAlert) describes a global plea for a moratorium,

A University of Otago bioethicist has added his voice to a global plea for a moratorium on heritable genome editing from a group of international scientists and ethicists in the wake of the recent Chinese experiment aiming to produce HIV-immune children.

In an article in the latest issue of international scientific journal Nature, Professor Jing-Bao Nie together with another 16 [17] academics from seven countries, call for a global moratorium on all clinical uses of human germline editing to make genetically modified children.

They would like an international governance framework – in which nations voluntarily commit to not approve any use of clinical germline editing unless certain conditions are met – to be created potentially for a five-year period.

Professor Nie says the scientific scandal of the experiment that led to the world’s first genetically modified babies raises many intriguing ethical, social and transcultural/transglobal issues. His main personal concerns include what he describes as the “inadequacy” of the Chinese and international responses to the experiment.

“The Chinese authorities have conducted a preliminary investigation into the scientist’s genetic misadventure and issued a draft new regulation on the related biotechnologies. These are welcome moves. Yet, by putting blame completely on the rogue scientist individually, the institutional failings are overlooked,” Professor Nie explains.

“In the international discourse, partly due to the mentality of dichotomising China and the West, a tendency exists to characterise the scandal as just a Chinese problem. As a result, the global context of the experiment and Chinese science schemes have been far from sufficiently examined.”

The group of 17 [18] scientists and bioethicists say it is imperative that extensive public discussions about the technical, scientific, medical, societal, ethical and moral issues must be considered before germline editing is permitted. A moratorium would provide time to establish broad societal consensus and an international framework.

“For germline editing to even be considered for a clinical application, its safety and efficacy must be sufficient – taking into account the unmet medical need, the risks and potential benefits and the existence of alternative approaches,” the opinion article states.

Although techniques have improved in recent years, germline editing is not yet safe or effective enough to justify any use in the clinic, with the risk of failing to make the desired change or of introducing unintended mutations still unacceptably high, the scientists and ethicists say.

“No clinical application of germline editing should be considered unless its long-term biological consequences are sufficiently understood – both for individuals and for the human species.”

The proposed moratorium does not, however, apply to germline editing for research uses or in human somatic (non-reproductive) cells to treat diseases.

Professor Nie considers it significant that the current presidents of the UK Royal Society, the US National Academy of Medicine and the Director and Associate Director of the US National Institutes of Health have expressed their strong support for such a proposed global moratorium in two correspondences published in the same issue of Nature. The editorial in the issue also argues that the right decision can be reached “only through engaging more communities in the debate”.

“The most challenging questions are whether international organisations and different countries will adopt a moratorium and if yes, whether it will be effective at all,” Professor Nie says.

A March 14, 2019 news item on phys.org provides a précis of the Comment in Nature. Or, you can access the Comment with this link,

Adopt a moratorium on heritable genome editing; Eric Lander, Françoise Baylis, Feng Zhang, Emmanuelle Charpentier, Paul Berg and specialists from seven countries call for an international governance framework. Signed by: Eric S. Lander, Françoise Baylis, Feng Zhang, Emmanuelle Charpentier, Paul Berg, Catherine Bourgain, Bärbel Friedrich, J. Keith Joung, Jinsong Li, David Liu, Luigi Naldini, Jing-Bao Nie, Renzong Qiu, Bettina Schoene-Seifert, Feng Shao, Sharon Terry, Wensheng Wei & Ernst-Ludwig Winnacker. Nature 567, 165-168 (2019) doi: 10.1038/d41586-019-00726-5

This Comment in Nature is open access.

World Health Organization (WHO) chimes in

Better late than never, eh? The World Health Organization has called heritable gene editing of humans ‘irresponsible’ and made recommendations. From a March 19, 2019 news item on the Canadian Broadcasting Corporation’s Online news webpage,

A panel convened by the World Health Organization said it would be “irresponsible” for scientists to use gene editing for reproductive purposes, but stopped short of calling for a ban.

The experts also called for the U.N. health agency to create a database of scientists working on gene editing. The recommendation was announced Tuesday after a two-day meeting in Geneva to examine the scientific, ethical, social and legal challenges of such research.

“At this time, it is irresponsible for anyone to proceed” with making gene-edited babies since DNA changes could be passed down to future generations, the experts said in a statement.

Germline editing has been on my radar since 2015 (see my May 14, 2015 posting) and the probability that someone would experiment with viable embryos and bring them to term shouldn’t be that much of a surprise.

Slow science from Canada

Canada has banned germline editing but there is pressure to lift that ban. (I touched on the specifics of the campaign in an April 26, 2019 posting.) This March 17, 2019 essay on The Conversation by Landon J Getz and Graham Dellaire, both of Dalhousie University (Nova Scotia, Canada) elucidates some of the discussion about whether research into germline editing should be slowed down.

Naughty (or Haughty, if you prefer) scientists

There was scoffing from some, if not all, members of the scientific community about the potential for ‘designer babies’ that can be seen in an excerpt from an article by Ed Yong for The Atlantic (originally published in my August 15, 2017 posting titled: CRISPR and editing the germline in the US (part 2 of 3): ‘designer babies’?),

Ed Yong in an Aug. 2, 2017 article for The Atlantic offered a comprehensive overview of the research and its implications (unusually for Yong, there seems to be a mildly condescending note but it’s worth ignoring for the wealth of information in the article; Note: Links have been removed),

” … the full details of the experiment, which are released today, show that the study is scientifically important but much less of a social inflection point than has been suggested. “This has been widely reported as the dawn of the era of the designer baby, making it probably the fifth or sixth time people have reported that dawn,” says Alta Charo, an expert on law and bioethics at the University of Wisconsin-Madison. “And it’s not.”

Then about 15 months later, the possibility seemed to be realized.

Interesting that scientists scoffed at the public’s concerns (you can find similar arguments about robots and artificial intelligence not being a potentially catastrophic problem), yes? Often, nonscientists’ concerns are dismissed as being founded in science fiction.

To be fair, there are times when concerns are overblown, the difficulty is that it seems the scientific community’s default position is to uniformly dismiss concerns rather than approaching them in a nuanced fashion. If the scoffers had taken the time to think about it, germline editing on viable embryos seems like an obvious and inevitable next step (as I’ve noted previously).

At this point, no one seems to know if He actually succeeded at removing CCR5 from Lulu’s and Nana’s genomes. In November 2018, scientists were guessing that at least one of the twins was a ‘mosaic’. In other words, some of her cells did not include CCR5 while others did.

Parents, children, competition

A recent college admissions scandal in the US has highlighted the intense competition to get into high-profile educational institutions. (This scandal brought to mind the Silicon Valley elite who wanted to know more about gene editing that might result in improved cognitive skills.)

Since it can be easy to point the finger at people in other countries, I’d like to note that there was a Canadian parent among these wealthy US parents attempting to give their children advantages by any means, legal or not. (Note: These are alleged illegalities.) From a March 12, 2019 news article by Scott Brown, Kevin Griffin, and Keith Fraser for the Vancouver Sun,

Vancouver businessman and former CFL [Canadian Football League] player David Sidoo has been charged with conspiracy to commit mail and wire fraud in connection with a far-reaching FBI investigation into a criminal conspiracy that sought to help privileged kids with middling grades gain admission to elite U.S. universities.

In a 12-page indictment filed March 5 [2019] in the U.S. District Court of Massachusetts, Sidoo is accused of making two separate US$100,000 payments to have others take college entrance exams in place of his two sons.

Sidoo is also accused of providing documents for the purpose of creating falsified identification cards for the people taking the tests.

In what is being called the biggest college-admissions scam ever prosecuted by the U.S. Justice Department, Sidoo has been charged along with nearly 50 other people. Nine athletic coaches and 33 parents, including Hollywood actresses Felicity Huffman and Lori Loughlin, are among those charged in the investigation, dubbed Operation Varsity Blues.

According to the indictment, an unidentified person flew from Tampa, Fla., to Vancouver in 2011 to take the Scholastic Aptitude Test (SAT) in place of Sidoo’s older son and was directed not to obtain too high a score since the older son had previously taken the exam, obtaining a score of 1460 out of a possible 2400.

A copy of the resulting SAT score — 1670 out of 2400 — was mailed to Chapman University, a private university in Orange, Calif., on behalf of the older son, who was admitted to and ultimately enrolled in the university in January 2012, according to the indictment.

It’s also alleged that Sidoo arranged to have someone secretly take the older boy’s Canadian high school graduation exam, with the person posing as the boy taking the exam in June 2012.

The Vancouver businessman is also alleged to have paid another $100,000 to have someone take the SAT in place of his younger son.

Sidoo, an investment banker currently serving as CEO of Advantage Lithium, was awarded the Order of B.C. in 2016 for his philanthropic efforts.

He is a former star with the UBC [University of British Columbia] Thunderbirds football team and helped the school win its first Vanier Cup in 1982. He went on to play five seasons in the CFL with the Saskatchewan Roughriders and B.C. Lions.

Sidoo is a prominent donor to UBC and is credited with spearheading an alumni fundraising campaign, 13th Man Foundation, that resuscitated the school’s once struggling football team. He reportedly donated $2 million of his own money to support the program.

Sidoo Field at UBC’s Thunderbird Stadium is named in his honour.

In 2016, he received the B.C. [British Columbia] Sports Hall of Fame’s W.A.C. Bennett Award for his contributions to the sporting life of the province.

The question of whether or not people like the ‘Silicon Valley elite’ (mentioned in John Loeffler’s February 22, 2019 article) would choose to tinker with their children’s genomes if it gave them an advantage is still hypothetical, but it’s easy to believe that at least some might seriously consider the possibility, especially if the researcher or doctor didn’t fully explain just how little is known about the impact of tinkering with the genome. For example, there’s a big question about whether those parents in China fully understood what they signed up for.

By the way, cheating scandals aren’t new (see Vanity Fair’s Schools For Scandal: The Inside Dramas at 16 of America’s Most Elite Campuses—Plus Oxford!, edited by Graydon Carter, published in August 2018 and covering 25 years of the magazine’s reporting). On a similar line, there’s this March 13, 2019 essay, which picks apart some of the hierarchical and power issues at play in the US higher educational system that led to this latest (but likely not last) scandal.

Scientists under pressure

While Kofler’s February 26, 2019 Nature opinion piece and call to action seems to address the concerns regarding germline editing by advocating that scientists become more conscious of how their choices impact society, as I noted earlier, the ideas expressed seem a little ungrounded in harsh realities. Perhaps it’s time to give some recognition to the various pressures put on scientists, ranging from their own governments, to an academic environment that fosters ‘success’ at any cost, to peer pressure, etc. (For more about the costs of a science culture focused on success, read this March 2, 2019 blog posting by Jon Tennant on digital-science.com for a breakdown.)

One other thing I should mention: for some scientists, getting into the history books, winning Nobel prizes, etc., is a very important goal. Scientists are people too.

Some thoughts

There seems to be a great disjunction between what Richardson presents as an alternative narrative to the ‘gene-god’ and how genetic research is being performed and reported on. What is clear to me is that no one really understands genetics, and this business of inserting and deleting genes is essentially research designed to satisfy curiosity and/or allay fears about being left behind in a great scientific race to an unknown destination.

I’d like to see some better reporting and a more agile response by the scientific community, the various governments, and international agencies. What shape or form a more agile response might take, I don’t know but I’d like to see some efforts.

Back to the regular programme

There’s a lot about CRISPR here on this blog. A simple search of ‘CRISPR’ in the blog’s search engine should get you more than enough information about the technology and the various issues ranging from intellectual property to risks and more.

The three-part series (CRISPR and editing the germline in the US …), mentioned previously, was occasioned by the publication of a study on germline editing research with nonviable embryos in the US. The 2017 research was done at the Oregon Health and Science University by Shoukhrat Mitalipov following similar research published by Chinese scientists in 2015. The series gives relatively complete coverage of the issues along with an introduction to CRISPR and embedded video describing the technique. Here’s part 1 to get you started.

Artificial synapse based on tantalum oxide from Korean researchers

This memristor story comes from South Korea as we progress on the way to neuromorphic computing (brainlike computing). A Sept. 7, 2018 news item on ScienceDaily makes the announcement,

A research team led by Director Myoung-Jae Lee from the Intelligent Devices and Systems Research Group at DGIST (Daegu Gyeongbuk Institute of Science and Technology) has succeeded in developing an artificial synaptic device that mimics the function of the nerve cells (neurons) and synapses that are responsible for memory in human brains.

Synapses are where axons and dendrites meet so that neurons in the human brain can send and receive nerve signals; there are known to be hundreds of trillions of synapses in the human brain.

This chemical synapse information transfer system, which transfers information in the brain, can handle high-level parallel arithmetic with very little energy, so research on artificial synaptic devices, which mimic the biological function of a synapse, is under way worldwide.

Dr. Lee’s research team, through joint research with teams led by Professor Gyeong-Su Park from Seoul National University; Professor Sung Kyu Park from Chung-ang University; and Professor Hyunsang Hwang from Pohang University of Science and Technology (POSTEC), developed a high-reliability artificial synaptic device with multiple values by structuring tantalum oxide — a trans-metallic material — into two layers of Ta2O5-x and TaO2-x and by controlling its surface.

A September 7, 2018 DGIST press release (also on EurekAlert), which originated the news item, delves further into the work,

The artificial synaptic device developed by the research team is an electrical synaptic device that simulates the function of synapses in the brain as the resistance of the tantalum oxide layer gradually increases or decreases depending on the strength of the electric signals. It has succeeded in overcoming durability limitations of current devices by allowing current control only on one layer of Ta2O5-x.

In addition, the research team successfully implemented an experiment that realized synaptic plasticity, which is the process of creating, storing, and deleting memories, such as long-term strengthening of memory and long-term suppression of memory, by adjusting the strength of the synapse connection between neurons.

The non-volatile multiple-value data storage method applied by the research team has the technological advantage of having a small area of an artificial synaptic device system, reducing circuit connection complexity, and reducing power consumption by more than one-thousandth compared to data storage methods based on digital signals using 0 and 1 such as volatile CMOS (Complementary Metal Oxide Semiconductor).

The high-reliability artificial synaptic device developed by the research team can be used in ultra-low-power devices or circuits for processing massive amounts of big data due to its capability of low-power parallel arithmetic. It is expected to be applied to next-generation intelligent semiconductor device technologies such as development of artificial intelligence (AI) including machine learning and deep learning and brain-mimicking semiconductors.

Dr. Lee said, “This research secured the reliability of existing artificial synaptic devices and improved the areas pointed out as disadvantages. We expect to contribute to the development of AI based on the neuromorphic system that mimics the human brain by creating a circuit that imitates the function of neurons.”
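To make the release’s ‘long-term strengthening/suppression’ and ‘multiple-value data storage’ language concrete, here is a minimal Python sketch of a bounded analog synapse whose conductance is nudged up by potentiating pulses and down by depressing ones. It is a generic soft-bounds model of my own choosing, not DGIST’s measured device behaviour.

```python
# Generic soft-bounds model of an analog synaptic device: each pulse
# moves the conductance a fraction of the way toward a limit, so the
# device passes through many distinguishable intermediate states.
G_MIN, G_MAX, STEP = 0.0, 1.0, 0.08   # arbitrary illustrative values

def apply_pulse(g, potentiate=True):
    if potentiate:                     # long-term potentiation (LTP)
        return g + STEP * (G_MAX - g)
    return g - STEP * (g - G_MIN)      # long-term depression (LTD)

g, levels = 0.5, []
for _ in range(20):                    # 20 potentiating pulses...
    g = apply_pulse(g, True)
    levels.append(g)
for _ in range(20):                    # ...then 20 depressing pulses
    g = apply_pulse(g, False)
    levels.append(g)

print("first levels:", [round(x, 3) for x in levels[:4]])
print("last levels: ", [round(x, 3) for x in levels[-4:]])
```

Each distinguishable conductance level can stand in for a stored value, which is where the claimed savings in area and circuit complexity over binary storage come from: one device holds what would otherwise take several bits’ worth of circuitry.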

Here’s a link to and a citation for the paper,

Reliable Multivalued Conductance States in TaOx Memristors through Oxygen Plasma-Assisted Electrode Deposition with in Situ-Biased Conductance State Transmission Electron Microscopy Analysis by Myoung-Jae Lee, Gyeong-Su Park, David H. Seo, Sung Min Kwon, Hyeon-Jun Lee, June-Seo Kim, MinKyung Jung, Chun-Yeol You, Hyangsook Lee, Hee-Goo Kim, Su-Been Pang, Sunae Seo, Hyunsang Hwang, and Sung Kyu Park. ACS Appl. Mater. Interfaces, 2018, 10 (35), pp 29757–29765 DOI: 10.1021/acsami.8b09046 Publication Date (Web): July 23, 2018

Copyright © 2018 American Chemical Society

This paper is open access.

You can find other memristor and neuromorphic computing stories here by using the search terms I’ve highlighted. My latest (more or less) is an April 19, 2018 posting titled, New path to viable memristor/neuristor?

Finally, here’s an image from the Korean researchers that accompanied their work,

Caption: Representation of neurons and synapses in the human brain. The magnified synapse represents the portion mimicked using solid-state devices. Credit: Daegu Gyeongbuk Institute of Science and Technology (DGIST)

More memory, less space and a walk down the cryptocurrency road

Libraries, archives, records management, oral history, etc.: there are many institutions and names for how we manage collective and personal memory. You might call it a peculiarly human obsession stretching back into antiquity. For example, there’s the Library of Alexandria (Wikipedia entry), founded in the third or possibly second century BCE (before the common era) and reputed to store all the knowledge in the world. It was destroyed, although accounts differ as to when and how, but its loss remains a potent reminder of memory’s fragility.

These days, the technology community is terribly concerned with storing ever more bits of data on materials that are reaching their storage limits. I have news of a possible solution, an interview of sorts with the researchers working on this new technology, and some very recent research into policies for cryptocurrency mining and development. The bit about cryptocurrency makes more sense once you read the response to one of the interview questions.

Memory

It seems University of Alberta researchers may have found a way to increase memory density a thousandfold, from a July 23, 2018 news item on ScienceDaily,

The most dense solid-state memory ever created could soon exceed the capabilities of current computer storage devices by 1,000 times, thanks to a new technique scientists at the University of Alberta have perfected.

“Essentially, you can take all 45 million songs on iTunes and store them on the surface of one quarter,” said Roshan Achal, PhD student in Department of Physics and lead author on the new research. “Five years ago, this wasn’t even something we thought possible.”

A July 23, 2018 University of Alberta news release (also on EurekAlert) by Jennifer-Anne Pascoe, which originated the news item, provides more information,

Previous discoveries were stable only at cryogenic conditions, meaning this new finding puts society light years closer to meeting the need for more storage for the current and continued deluge of data. One of the most exciting features of this memory is that it’s road-ready for real-world temperatures, as it can withstand normal use and transportation beyond the lab.

“What is often overlooked in the nanofabrication business is actual transportation to an end user, that simply was not possible until now given temperature restrictions,” continued Achal. “Our memory is stable well above room temperature and precise down to the atom.”

Achal explained that immediate applications will be data archival. Next steps will be increasing readout and writing speeds, meaning even more flexible applications.

More memory, less space

Achal works with University of Alberta physics professor Robert Wolkow, a pioneer in the field of atomic-scale physics. Wolkow perfected the science behind nanotip technology, which, thanks to his team’s continued work, has now reached a tipping point, making it possible to scale up atomic-scale manufacturing for commercialization.

“With this last piece of the puzzle now in-hand, atom-scale fabrication will become a commercial reality in the very near future,” said Wolkow. Wolkow’s Spin-off [sic] company, Quantum Silicon Inc., is hard at work on commercializing atom-scale fabrication for use in all areas of the technology sector.

To demonstrate the new discovery, Achal, Wolkow, and their fellow scientists not only fabricated the world’s smallest maple leaf, they also encoded the entire alphabet at a density of 138 terabytes [per square inch], roughly equivalent to writing 350,000 letters across a grain of rice. For a playful twist, Achal also encoded music as an atom-sized song, the first 24 notes of which will make any video-game player of the 80s and 90s nostalgic for yesteryear but excited for the future of technology and society.
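
Out of curiosity, the quarter claim can be sanity-checked with back-of-envelope arithmetic. The sketch below assumes a 23.88 mm coin and roughly 2 MB per compressed song; neither figure comes from the release, so treat it as a plausibility check only:

```python
from math import pi

# Back-of-envelope check of "45 million songs on a quarter".
# Assumed, not from the release: 23.88 mm coin, ~2 MB per song.
density_tb_per_in2 = 138
diameter_in = 23.88 / 25.4               # mm to inches
area_in2 = pi * (diameter_in / 2) ** 2   # face of the coin
capacity_tb = density_tb_per_in2 * area_in2
songs = capacity_tb * 1e6 / 2            # 1 TB = 1e6 MB, 2 MB per song
print(f"{area_in2:.2f} sq in -> {capacity_tb:.0f} TB -> ~{songs / 1e6:.0f} million songs")
```

Under those assumptions the math works out to roughly 96 TB and about 48 million songs, so the release’s 45-million-song figure is in the right ballpark.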

As noted in the news release, there is an atom-sized song, which is available in this video,

As for the nano-sized maple leaf, I highlighted that bit of whimsy in a June 30, 2017 posting.

Here’s a link to and a citation for the paper,

Lithography for robust and editable atomic-scale silicon devices and memories by Roshan Achal, Mohammad Rashidi, Jeremiah Croshaw, David Churchill, Marco Taucer, Taleana Huff, Martin Cloutier, Jason Pitters, & Robert A. Wolkow. Nature Communications volume 9, Article number: 2778 (2018) DOI: https://doi.org/10.1038/s41467-018-05171-y Published 23 July 2018

This paper is open access.

For interested parties, you can find Quantum Silicon (QSI) here. My Edmonton geography is all but nonexistent; still, it seems to me the company address on Saskatchewan Drive is a University of Alberta address. It’s also the address for the National Research Council of Canada. Perhaps this is a university/government spin-off company?

The ‘interview’

I sent some questions to the researchers at the University of Alberta who very kindly provided me with the following answers. Roshan Achal passed on one of the questions to his colleague Taleana Huff for her response. Both Achal and Huff are associated with QSI.

Unfortunately I could not find any pictures of all three researchers (Achal, Huff, and Wolkow) together.

Roshan Achal (left) used nanotechnology perfected by his PhD supervisor, Robert Wolkow (right) to create atomic-scale computer memory that could exceed the capacity of today’s solid-state storage drives by 1,000 times. (Photo: Faculty of Science)

(1) SHRINKING THE MANUFACTURING PROCESS TO THE ATOMIC SCALE HAS ATTRACTED A LOT OF ATTENTION OVER THE YEARS, STARTING WITH SCIENCE FICTION OR RICHARD FEYNMAN OR K. ERIC DREXLER, ETC. IN ANY EVENT, THE ORIGINS ARE CONTESTED, SO I WON’T PUT YOU ON THE SPOT BY ASKING WHO STARTED IT ALL; INSTEAD, HOW DID YOU GET STARTED?

I got started in this field about 6 years ago, when I undertook an MSc
with Dr. Wolkow here at the University of Alberta. Before that point, I
had only ever heard of a scanning tunneling microscope from what was
taught in my classes. I was aware of the famous IBM logo made up from
just a handful of atoms using this machine, but I didn’t know what
else could be done. Here, Dr. Wolkow introduced me to his line of
research, and I saw the immense potential for growth in this area and
decided to pursue it further. I had the chance to interact with and
learn from nanofabrication experts and gain the skills necessary to
begin playing around with my own techniques and ideas during my PhD.

(2) AS I UNDERSTAND IT, THESE ARE THE PIECES YOU’VE BEEN
WORKING ON: (1) THE TUNGSTEN MICROSCOPE TIP, WHICH MAKE[s] (2) THE SMALLEST
QUANTUM DOTS (SINGLE ATOMS OF SILICON), (3) THE AUTOMATION OF THE
QUANTUM DOT PRODUCTION PROCESS, AND (4) THE “MOST DENSE SOLID-STATE
MEMORY EVER CREATED.” WHAT’S MISSING FROM THE LIST AND IS THAT WHAT
YOU’RE WORKING ON NOW?

One of the things missing from the list, that we are currently working
on, is the ability to easily communicate (electrically) from the
macroscale (our world) to the nanoscale, without the use of a scanning
tunneling microscope. With this, we would be able to then construct
devices using the other pieces we’ve developed up to this point, and
then integrate them with more conventional electronics. This would bring
us yet another step closer to the realization of atomic-scale
electronics.

(3) PERHAPS YOU COULD CLARIFY SOMETHING FOR ME. USUALLY WHEN SOLID STATE MEMORY IS MENTIONED, THERE’S GREAT CONCERN ABOUT MOORE’S LAW. IS THIS WORK GOING TO CREATE A NEW LAW? AND, WHAT IF ANYTHING DOES YOUR MEMORY DEVICE HAVE TO DO WITH QUANTUM COMPUTING?

That is an interesting question. With the density we’ve achieved,
there are not too many surfaces where atomic sites are more closely
spaced to allow for another factor of two improvement. In that sense, it
would be difficult to improve memory densities further using these
techniques alone. In order to continue Moore’s law, new techniques, or
storage methods would have to be developed to move beyond atomic-scale
storage.

The memory design itself does not have anything to do with quantum computing; however, the lithographic techniques developed through our work may enable the development of certain quantum-dot-based quantum computing schemes.

(4) THIS MAY BE A LITTLE OUT OF LEFT FIELD (OR FURTHER OUT THAN THE OTHERS), COULD YOUR MEMORY DEVICE HAVE AN IMPACT ON THE DEVELOPMENT OF CRYPTOCURRENCY AND BLOCKCHAIN? IF SO, WHAT MIGHT THAT IMPACT BE?

I am not very familiar with these topics; however, co-author Taleana Huff has provided some thoughts:

Taleana Huff (downloaded from https://ca.linkedin.com/in/taleana-huff)

“The memory, as we’ve designed it, might not have too much of an impact in and of itself. Cryptocurrencies fall into two categories: Proof of Work and Proof of Stake. Proof of Work relies on raw
computational power to solve a difficult math problem. If you solve it,
you get rewarded with a small amount of that coin. The problem is that
it can take a lot of power and energy for your computer to crunch
through that problem. Faster access to memory alone could perhaps
streamline small parts of this slightly, but it would be very slight.
Proof of Stake is already quite power efficient and wouldn’t really have a drastic advantage from better, faster computers.

Now, atomic-scale circuitry built using these new lithographic
techniques that we’ve developed, which could perform computations at
significantly lower energy costs, would be huge for Proof of Work coins.
One of the things holding bitcoin back, for example, is that mining it
is now consuming power on the order of the annual energy consumption
required by small countries. A more efficient way to mine while still
taking the same amount of time to solve the problem would make bitcoin
much more attractive as a currency.”
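
For readers unfamiliar with Proof of Work, the “difficult math problem” Huff mentions can be reduced to a toy: find a number (a nonce) that makes a cryptographic hash begin with enough zeros. The sketch below is mine, not how Bitcoin is implemented (Bitcoin double-hashes block headers at vastly higher difficulty), but it shows why the scheme burns computation:

```python
import hashlib
from itertools import count

# Toy Proof of Work: find a nonce so that SHA-256(data + nonce) starts
# with a given number of zero hex digits. Raising the difficulty by one
# digit multiplies the expected work (and energy) by sixteen.

def mine(data: str, difficulty: int = 4) -> int:
    target = "0" * difficulty
    for nonce in count():
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce

nonce = mine("example block")
print("winning nonce:", nonce)
```

The security of the ledger comes precisely from that wasted effort, which is why Huff distinguishes between faster memory (a marginal gain) and lower-energy computation (a large one).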

Thank you to Roshan Achal and Taleana Huff for helping me to further explore the implications of their work with Dr. Wolkow.

Comments

As usual, after receiving the replies I have more questions, but these people have other things to do, so I’ll content myself with noting that there is something extraordinary in the fact that we can imagine a near future where atomic-scale manufacturing is possible and where, as Achal says, ” … storage methods would have to be developed to move beyond atomic-scale [emphasis mine] storage”. In decades past it was the stuff of science fiction or of theorists who didn’t have the tools to turn the idea into a reality. With Wolkow’s, Achal’s, Huff’s, and their colleagues’ work, atomic-scale manufacturing is attainable in the foreseeable future.

Hopefully we’ll be wiser than we have been in the past in how we deploy these new manufacturing techniques. Of course, before we need the wisdom, scientists, as Achal notes, need to find a new way to communicate between the macroscale and the nanoscale.

As for Huff’s comments about cryptocurrency and blockchain technology, I stumbled across this very recent research, from a July 31, 2018 Elsevier press release (also on EurekAlert),

A study [behind a paywall] published in Energy Research & Social Science warns that failure to lower the energy use by Bitcoin and similar Blockchain designs may prevent nations from reaching their climate change mitigation obligations under the Paris Agreement.

The study, authored by Jon Truby, PhD, Assistant Professor, Director of the Centre for Law & Development, College of Law, Qatar University, Doha, Qatar, evaluates the financial and legal options available to lawmakers to moderate blockchain-related energy consumption and foster a sustainable and innovative technology sector. Based on this rigorous review and analysis of the technologies, ownership models, and jurisdictional case law and practices, the article recommends an approach that imposes new taxes, charges, or restrictions to reduce demand by users, miners, and miner manufacturers who employ polluting technologies, and offers incentives that encourage developers to create less energy-intensive/carbon-neutral Blockchain.

“Digital currency mining is the first major industry developed from Blockchain, because its transactions alone consume more electricity than entire nations,” said Dr. Truby. “It needs to be directed towards sustainability if it is to realize its potential advantages.

“Many developers have taken no account of the environmental impact of their designs, so we must encourage them to adopt consensus protocols that do not result in high emissions. Taking no action means we are subsidizing high energy-consuming technology and causing future Blockchain developers to follow the same harmful path. We need to de-socialize the environmental costs involved while continuing to encourage progress of this important technology to unlock its potential economic, environmental, and social benefits,” explained Dr. Truby.

As a digital ledger that is accessible to, and trusted by all participants, Blockchain technology decentralizes and transforms the exchange of assets through peer-to-peer verification and payments. Blockchain technology has been advocated as being capable of delivering environmental and social benefits under the UN’s Sustainable Development Goals. However, Bitcoin’s system has been built in a way that is reminiscent of physical mining of natural resources – costs and efforts rise as the system reaches the ultimate resource limit and the mining of new resources requires increasing hardware resources, which consume huge amounts of electricity.

Putting this into perspective, Dr. Truby said, “the processes involved in a single Bitcoin transaction could provide electricity to a British home for a month – with the environmental costs socialized for private benefit.

“Bitcoin is here to stay, and so, future models must be designed without reliance on energy consumption so disproportionate on their economic or social benefits.”

The study evaluates various Blockchain technologies by their carbon footprints and recommends how to tax or restrict Blockchain types at different phases of production and use to discourage polluting versions and encourage cleaner alternatives. It also analyzes the legal measures that can be introduced to encourage technology innovators to develop low-emissions Blockchain designs. The specific recommendations include imposing levies to prevent path-dependent inertia from constraining innovation:

  • Registration fees collected by brokers from digital coin buyers.
  • “Bitcoin Sin Tax” surcharge on digital currency ownership.
  • Green taxes and restrictions on machinery purchases/imports (e.g. Bitcoin mining machines).
  • Smart contract transaction charges.

According to Dr. Truby, these findings may lead to new taxes, charges or restrictions, but could also lead to financial rewards for innovators developing carbon-neutral Blockchain.

The press release doesn’t fully reflect Dr. Truby’s thoughtfulness or the incentives he has suggested; it’s not all surcharges, taxes, and fees, as some of his proposals constitute encouragement. Here’s a sample from the conclusion,

The possibilities of Blockchain are endless and incentivisation can help solve various climate change issues, such as through the development of digital currencies to fund climate finance programmes. This type of public-private finance initiative is envisioned in the Paris Agreement, and fiscal tools can incentivize innovators to design financially rewarding Blockchain technology that also achieves environmental goals. Bitcoin, for example, has various utilitarian intentions in its White Paper, which may or may not turn out to be as envisioned, but it would not have been such a success without investors seeking remarkable returns. Embracing such technology, and promoting a shift in behaviour with such fiscal tools, can turn the industry itself towards achieving innovative solutions for environmental goals.

I realize Wolkow et al. are not focused on cryptocurrency and blockchain technology per se, but as Huff notes in her reply, “… new lithographic techniques that we’ve developed, which could perform computations at significantly lower energy costs, would be huge for Proof of Work coins.”

Whether or not there are implications for cryptocurrencies, energy needs, climate change, etc., this is the kind of innovative work, being done by scientists at the University of Alberta, that may reach fields far beyond the researchers’ original goals of more efficient computation and data storage.

ETA Aug. 6, 2018: Dexter Johnson weighed in with an August 3, 2018 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website),

Researchers at the University of Alberta in Canada have developed a new approach to rewritable data storage technology by using a scanning tunneling microscope (STM) to remove and replace hydrogen atoms from the surface of a silicon wafer. If this approach realizes its potential, it could lead to a data storage technology capable of storing 1,000 times more data than today’s hard drives, up to 138 terabytes per square inch.

As a bit of background, Gerd Binnig and Heinrich Rohrer developed the first STM in 1981, for which they later received the Nobel Prize in physics. In the over 30 years since an STM first imaged an atom by exploiting a phenomenon known as tunneling—which causes electrons to jump from the surface atoms of a material to the tip of an ultrasharp electrode suspended a few angstroms above—the technology has become the backbone of so-called nanotechnology.

In addition to imaging the world on the atomic scale for the last thirty years, STMs have been experimented with as a potential data storage device. Last year, we reported on how IBM (where Binnig and Rohrer first developed the STM) used an STM in combination with an iron atom to serve as an electron-spin resonance sensor to read the magnetic pole of holmium atoms. The north and south poles of the holmium atoms served as the 0 and 1 of digital logic.

The Canadian researchers have taken a somewhat different approach to making an STM into a data storage device by automating a known technique that uses the ultrasharp tip of the STM to apply a voltage pulse above an atom to remove individual hydrogen atoms from the surface of a silicon wafer. Once the atom has been removed, there is a vacancy on the surface. These vacancies can be patterned on the surface to create devices and memories.

If you have the time, I recommend reading Dexter’s posting as he provides clear explanations, additional insight into the work, and more historical detail.

Thanks for the memory: the US National Institute of Standards and Technology (NIST) and memristors

In January 2018 it seemed like I was tripping across a lot of memristor stories. This one came from a January 19, 2018 news item on Nanowerk,

In the race to build a computer that mimics the massive computational power of the human brain, researchers are increasingly turning to memristors, which can vary their electrical resistance based on the memory of past activity. Scientists at the National Institute of Standards and Technology (NIST) have now unveiled the long-mysterious inner workings of these semiconductor elements, which can act like the short-term memory of nerve cells.

A January 18, 2018 NIST news release (also on EurekAlert), which originated the news item, fills in the details,

Just as the ability of one nerve cell to signal another depends on how often the cells have communicated in the recent past, the resistance of a memristor depends on the amount of current that recently flowed through it. Moreover, a memristor retains that memory even when electrical power is switched off.
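
That history dependence is often described with the textbook linear ion-drift model (Strukov et al., 2008). The sketch below uses that generic model with made-up parameters; it is not NIST’s titanium dioxide device:

```python
# Generic linear ion-drift memristor model (after Strukov et al., 2008),
# not NIST's device; all parameters are assumed. The state w in [0, 1]
# integrates past current, so resistance depends on the device's history.

R_ON, R_OFF, K, DT = 100.0, 16e3, 1e4, 1e-4   # ohms, ohms, lumped mobility, s

def resistance(w):
    return R_ON * w + R_OFF * (1 - w)

w = 0.1
print(f"before write: R = {resistance(w):.0f} ohms")
for _ in range(5000):                 # 0.5 s write pulse at +1 V
    i = 1.0 / resistance(w)           # current through the device
    w = min(max(w + K * i * DT, 0.0), 1.0)
print(f"after write:  R = {resistance(w):.0f} ohms")
# With the voltage removed, no current flows, w stops changing, and the
# new resistance is retained: that persistence is the memristor's "memory".
```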

But despite the keen interest in memristors, scientists have lacked a detailed understanding of how these devices work and have yet to develop a standard toolset to study them.

Now, NIST scientists have identified such a toolset and used it to more deeply probe how memristors operate. Their findings could lead to more efficient operation of the devices and suggest ways to minimize the leakage of current.

Brian Hoskins of NIST and the University of California, Santa Barbara, along with NIST scientists Nikolai Zhitenev, Andrei Kolmakov, Jabez McClelland and their colleagues from the University of Maryland’s NanoCenter in College Park and the Institute for Research and Development in Microtechnologies in Bucharest, reported the findings in a recent issue of Nature Communications.

To explore the electrical function of memristors, the team aimed a tightly focused beam of electrons at different locations on a titanium dioxide memristor. The beam knocked free some of the device’s electrons, which formed ultrasharp images of those locations. The beam also induced four distinct currents to flow within the device. The team determined that the currents are associated with the multiple interfaces between materials in the memristor, which consists of two metal (conducting) layers separated by an insulator.

“We know exactly where each of the currents are coming from because we are controlling the location of the beam that is inducing those currents,” said Hoskins.

In imaging the device, the team found several dark spots—regions of enhanced conductivity—which indicated places where current might leak out of the memristor during its normal operation. These leakage pathways resided outside the memristor’s core—where it switches between the low and high resistance levels that are useful in an electronic device. The finding suggests that reducing the size of a memristor could minimize or even eliminate some of the unwanted current pathways. Although researchers had suspected that might be the case, they had lacked experimental guidance about just how much to reduce the size of the device.

Because the leakage pathways are tiny, involving distances of only 100 to 300 nanometers, “you’re probably not going to start seeing some really big improvements until you reduce dimensions of the memristor on that scale,” Hoskins said.

To their surprise, the team also found that the current that correlated with the memristor’s switch in resistance didn’t come from the active switching material at all, but the metal layer above it. The most important lesson of the memristor study, Hoskins noted, “is that you can’t just worry about the resistive switch, the switching spot itself, you have to worry about everything around it.” The team’s study, he added, “is a way of generating much stronger intuition about what might be a good way to engineer memristors.”

Here’s a link to and a citation for the paper,

Stateful characterization of resistive switching TiO2 with electron beam induced currents by Brian D. Hoskins, Gina C. Adam, Evgheni Strelcov, Nikolai Zhitenev, Andrei Kolmakov, Dmitri B. Strukov, & Jabez J. McClelland. Nature Communications 8, Article number: 1972 (2017) doi:10.1038/s41467-017-02116-9 Published online: 07 December 2017

This is an open access paper.

It might be my imagination but it seemed like a lot of papers from 2017 were being publicized in early 2018.

Finally, I borrowed much of my headline from the NIST’s headline for its news release, specifically, “Thanks for the memory,” which is a rather old song,

Bob Hope and Shirley Ross in “The Big Broadcast of 1938.”

Brain-like computing and memory with magnetoresistance

This is an approach to brain-like computing that’s new (to me, anyway). From a January 9, 2018 news item on Nanowerk (Note: A link has been removed),

From various magnetic tapes, floppy disks and computer hard disk drives, magnetic materials have been storing our electronic information along with our valuable knowledge and memories for well over half of a century.

In more recent years, the new types [sic] phenomena known as magnetoresistance, which is the tendency of a material to change its electrical resistance when an externally-applied magnetic field or its own magnetization is changed, has found its success in hard disk drive read heads, magnetic field sensors and the rising star in the memory technologies, the magnetoresistive random access memory.

A new discovery, led by researchers at the University of Minnesota, demonstrates the existence of a new kind of magnetoresistance involving topological insulators that could result in improvements in future computing and computer storage. The details of their research are published in the most recent issue of the scientific journal Nature Communications (“Unidirectional spin-Hall and Rashba-Edelstein magnetoresistance in topological insulator-ferromagnet layer heterostructures”).

This image illustrates the work,

The schematic figure illustrates the concept and behavior of magnetoresistance. The spins are generated in topological insulators. Those at the interface between ferromagnet and topological insulators interact with the ferromagnet and result in either high or low resistance of the device, depending on the relative directions of magnetization and spins. Credit: University of Minnesota

A January 9, 2018 University of Minnesota College of Science and Engineering news release, which originated the news item, expands on the theme,

“Our discovery is one missing piece of the puzzle to improve the future of low-power computing and memory for the semiconductor industry, including brain-like computing and chips for robots and 3D magnetic memory,” said University of Minnesota Robert F. Hartmann Professor of Electrical and Computer Engineering Jian-Ping Wang, director of the Center for Spintronic Materials, Interfaces, and Novel Structures (C-SPIN) based at the University of Minnesota and co-author of the study.

Emerging technology using topological insulators

While magnetic recording still dominates data storage applications, the magnetoresistive random access memory is gradually finding its place in the field of computing memory. From the outside, they are unlike the hard disk drives which have mechanically spinning disks and swinging heads—they are more like any other type of memory. They are chips (solid state) which you’d find being soldered on circuit boards in a computer or mobile device.

Recently, a group of materials called topological insulators has been found to further improve the writing energy efficiency of magnetoresistive random access memory cells in electronics. However, the new device geometry demands a new magnetoresistance phenomenon to accomplish the read function of the memory cell in 3D system and network.

Following the recent discovery of the unidirectional spin Hall magnetoresistance in conventional metal bilayer material systems, researchers at the University of Minnesota collaborated with colleagues at Pennsylvania State University and demonstrated for the first time the existence of such magnetoresistance in topological insulator-ferromagnet bilayers.

The study confirms the existence of such unidirectional magnetoresistance and reveals that the adoption of topological insulators, compared to heavy metals, doubles the magnetoresistance performance at 150 Kelvin (-123.15 Celsius). From an application perspective, this work provides the missing piece of the puzzle to create a proposed 3D and cross-bar type computing and memory device involving topological insulators by adding the previously missing or very inconvenient read functionality.
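
As a cartoon of what a unidirectional magnetoresistance buys you: the device resistance gains a small term whose sign tracks the relative orientation of the magnetization and the current-induced spins, so a plain two-terminal resistance measurement reads out the stored bit. The linearized form and the numbers below are my illustration, not the paper’s model:

```python
# Cartoon of a unidirectional-magnetoresistance read-out; the linearized
# form and the numbers are illustrative, not from the Nature Communications
# paper. Resistance gains a small term whose sign flips with the relative
# orientation of magnetization and current-induced spins.

R0, DELTA_R = 1000.0, 2.0   # baseline and unidirectional term, ohms (assumed)

def read_resistance(current_sign: int, magnetization: int) -> float:
    """Both arguments are +1 or -1."""
    return R0 + DELTA_R * current_sign * magnetization

for m in (+1, -1):
    r = read_resistance(+1, m)        # probe with a fixed current direction
    bit = 1 if r > R0 else 0
    print(f"magnetization {m:+d}: R = {r:.1f} ohms -> stored bit {bit}")
```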

In addition to Wang, researchers involved in this study include Yang Lv, Delin Zhang and Mahdi Jamali from the University of Minnesota Department of Electrical and Computer Engineering and James Kally, Joon Sue Lee and Nitin Samarth from Pennsylvania State University Department of Physics.

This research was funded by the Center for Spintronic Materials, Interfaces and Novel Architectures (C-SPIN) at the University of Minnesota, a Semiconductor Research Corporation program sponsored by the Microelectronics Advanced Research Corp. (MARCO) and the Defense Advanced Research Projects Agency (DARPA).

Here’s a link to and a citation for the paper,

Unidirectional spin-Hall and Rashba−Edelstein magnetoresistance in topological insulator-ferromagnet layer heterostructures by Yang Lv, James Kally, Delin Zhang, Joon Sue Lee, Mahdi Jamali, Nitin Samarth, & Jian-Ping Wang. Nature Communications 9, Article number: 111 (2018) doi:10.1038/s41467-017-02491-3 Published online: 09 January 2018

This is an open access paper.

Memristive-like qualities with pectin

As the drive to create a synthetic neuronal network powered by memristors continues, scientists are investigating pectin. From a Nov. 11, 2016 news item on ScienceDaily,

Most of us know pectin as a key ingredient for making delicious jellies and jams, not as a component for a complex hybrid device that links biological and electronic systems. But a team of Italian scientists have built on previous work in this field using pectin with a high degree of methylation as the medium to create a new architecture of hybrid device with a double-layered polyelectrolyte that alone drives memristive behavior.

A Nov. 11, 2016 American Institute of Physics news release on EurekAlert, which originated the news item, defines memristors and describes the research,

A memristive device can be thought of as a synapse analogue, a device that has a memory. Simply stated, its behavior in a certain moment depends on its previous activity, similar to the way information in the human brain is transmitted from one neuron to another.

In an article published this week in AIP Advances, from AIP Publishing, the team explains the creation of the hybrid device. “In this research, we applied materials generally used in the pharmaceutical and food industries in our electrochemical devices,” said Angelica Cifarelli, a doctoral candidate at the University of Parma in Italy. “The idea of using the ‘buffering’ capability of these biocompatible materials as solid polyelectrolyte is completely innovative and our work is the first time that these bio-polymers have been used in devices based on organic polymers and in a memristive device.”

Memristors can provide a bridge for interfacing electronic circuits with nervous systems, moving us closer to realization of a double-layer perceptron, an element that can perform classification functions after an appropriate learning procedure. The main difficulty the research team faced was understanding the complex electrochemical interplay that is the basis for the memristive behavior, which would give them the means to control it. The team addressed this challenge by using commercial polymers, and modifying their electrochemical properties at the macroscopic level. The most surprising result was that it was possible to check the electrochemical response of the device by changing the formulation of gels acting as polyelectrolytes, allowing study of the ionic exchanges relating to the biological object, which activates the electrochemical response of the conductive polymer.

“Our developments open the way to make compatible polyaniline based devices with an interface that should be naturally, biologically and electrochemically compatible and functional,” said Cifarelli. The next steps are interfacing the memristor network with other living beings, for example, plants and ultimately the realization of hybrid systems that can “learn” and perform logic/classification functions.

Here’s a link to and a citation for the paper,

Polysaccarides-based gels and solid-state electronic devices with memresistive properties: Synergy between polyaniline electrochemistry and biology by Angelica Cifarelli, Tatiana Berzina, Antonella Parisini, Victor Erokhin, and Salvatore Iannotta. AIP Advances 6, 111302 (2016); http://dx.doi.org/10.1063/1.4966559 Published Nov. 8, 2016

This paper appears to be open access.

Memristor-based electronic synapses for neural networks

Caption: Neuron connections in biological neural networks. Credit: MIPT press office

Russian scientists have recently published a paper about neural networks and electronic synapses based on ‘thin film’ memristors according to an April 19, 2016 news item on Nanowerk,

A team of scientists from the Moscow Institute of Physics and Technology (MIPT) have created prototypes of “electronic synapses” based on ultra-thin films of hafnium oxide (HfO2). These prototypes could potentially be used in fundamentally new computing systems.

An April 20, 2016 MIPT press release (also on EurekAlert), which originated the news item (the date inconsistency likely due to timezone differences) explains the connection between thin films and memristors,

The group of researchers from MIPT have made HfO2-based memristors measuring just 40×40 nm2. The nanostructures they built exhibit properties similar to biological synapses. Using newly developed technology, the memristors were integrated in matrices: in the future this technology may be used to design computers that function similar to biological neural networks.

Memristors (resistors with memory) are devices that are able to change their state (conductivity) depending on the charge passing through them, and they therefore have a memory of their “history”. In this study, the scientists used devices based on thin-film hafnium oxide, a material that is already used in the production of modern processors. This means that this new lab technology could, if required, easily be used in industrial processes.

“In a simpler version, memristors are promising binary non-volatile memory cells, in which information is written by switching the electric resistance – from high to low and back again. What we are trying to demonstrate are much more complex functions of memristors – that they behave similar to biological synapses,” said Yury Matveyev, the corresponding author of the paper, and senior researcher of MIPT’s Laboratory of Functional Materials and Devices for Nanoelectronics, commenting on the study.

The press release offers a description of biological synapses and their relationship to learning and memory,

A synapse is a point of connection between neurons, the main function of which is to transmit a signal (a spike – a particular type of signal, see fig. 2) from one neuron to another. Each neuron may have thousands of synapses, i.e. connect with a large number of other neurons. This means that information can be processed in parallel, rather than sequentially (as in modern computers). This is the reason why “living” neural networks are so immensely effective both in terms of speed and energy consumption in solving a large range of tasks, such as image / voice recognition, etc.

Over time, synapses may change their “weight”, i.e. their ability to transmit a signal. This property is believed to be the key to understanding the learning and memory functions of the brain.

From the physical point of view, synaptic “memory” and “learning” in the brain can be interpreted as follows: the neural connection possesses a certain “conductivity”, which is determined by the previous “history” of signals that have passed through the connection. If a synapse transmits a signal from one neuron to another, we can say that it has high “conductivity”, and if it does not, we say it has low “conductivity”. However, synapses do not simply function in on/off mode; they can have any intermediate “weight” (intermediate conductivity value). Accordingly, if we want to simulate them using certain devices, these devices will also have to have analogous characteristics.

The researchers have provided an illustration of a biological synapse,

Fig.2 The type of electrical signal transmitted by neurons (a “spike”). The red lines are various other biological signals, the black line is the averaged signal. Source: MIPT press office

Now, the press release ties the memristor information together with the biological synapse information to describe the new work at the MIPT,

As in a biological synapse, the value of the electrical conductivity of a memristor is the result of its previous “life” – from the moment it was made.

There are a number of physical effects that can be exploited to design memristors. In this study, the authors used devices based on ultrathin-film hafnium oxide, which exhibit the effect of soft (reversible) electrical breakdown under an applied external electric field. Most often, these devices use only two different states encoding logic zero and one. However, in order to simulate biological synapses, a continuous spectrum of conductivities had to be used in the devices.

“The detailed physical mechanism behind the function of the memristors in question is still debated. However, the qualitative model is as follows: in the metal–ultrathin oxide–metal structure, charged point defects, such as vacancies of oxygen atoms, are formed and move around in the oxide layer when exposed to an electric field. It is these defects that are responsible for the reversible change in the conductivity of the oxide layer,” says the co-author of the paper and researcher of MIPT’s Laboratory of Functional Materials and Devices for Nanoelectronics, Sergey Zakharchenko.

The authors used the newly developed “analogue” memristors to model various learning mechanisms (“plasticity”) of biological synapses. In particular, this involved functions such as long-term potentiation (LTP) or long-term depression (LTD) of a connection between two neurons. It is generally accepted that these functions are the underlying mechanisms of memory in the brain.

The authors also succeeded in demonstrating a more complex mechanism – spike-timing-dependent plasticity, i.e. the dependence of the value of the connection between neurons on the relative time taken for them to be “triggered”. It had previously been shown that this mechanism is responsible for associative learning – the ability of the brain to find connections between different events.

To demonstrate this function in their memristor devices, the authors purposefully used an electric signal which reproduced, as far as possible, the signals in living neurons, and they obtained a dependency very similar to those observed in living synapses (see fig. 3).

Fig.3. The change in conductivity of memristors depending on the temporal separation between “spikes” (right) and the change in potential of the neuron connections in biological neural networks. Source: MIPT press office
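
Spike-timing-dependent plasticity is commonly summarized with an exponential learning window: pre-before-post spiking strengthens the connection, post-before-pre weakens it. Here is the generic textbook form (the amplitudes and time constant are assumed, not MIPT’s measured values):

```python
import math

# Generic exponential STDP window (textbook form, not MIPT's measured
# curve): pre-before-post spiking (dt > 0) potentiates the connection,
# post-before-pre (dt < 0) depresses it.

A_PLUS, A_MINUS = 0.10, 0.12   # assumed amplitudes
TAU = 20.0                     # assumed time constant, ms

def weight_change(dt_ms: float) -> float:
    if dt_ms > 0:
        return A_PLUS * math.exp(-dt_ms / TAU)    # long-term potentiation
    return -A_MINUS * math.exp(dt_ms / TAU)       # long-term depression

for dt in (-40, -10, 10, 40):
    print(f"dt = {dt:+3d} ms -> weight change {weight_change(dt):+.4f}")
```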

These results allowed the authors to confirm that the elements that they had developed could be considered a prototype of the “electronic synapse”, which could be used as a basis for the hardware implementation of artificial neural networks.

“We have created a baseline matrix of nanoscale memristors demonstrating the properties of biological synapses. Thanks to this research, we are now one step closer to building an artificial neural network. It may only be the very simplest of networks, but it is nevertheless a hardware prototype,” said the head of MIPT’s Laboratory of Functional Materials and Devices for Nanoelectronics, Andrey Zenkevich.

Here’s a link to and a citation for the paper,

Crossbar Nanoscale HfO2-Based Electronic Synapses by Yury Matveyev, Roman Kirtaev, Alena Fetisova, Sergey Zakharchenko, Dmitry Negrov and Andrey Zenkevich. Nanoscale Research Letters 2016 11:147 DOI: 10.1186/s11671-016-1360-6

Published: 15 March 2016

This is an open access paper.

Does digitizing material mean it’s safe? A tale of Canada’s Fisheries and Oceans scientific libraries

As has been noted elsewhere, the federal government of Canada has shut down a number of Fisheries and Oceans Canada libraries in a cost-saving exercise. The government is hoping to save some $440,000 in the 2014-15 fiscal year by digitizing, consolidating, and discarding the libraries and their holdings.

One would imagine that this is being done in a measured, thoughtful fashion but one would be wrong.

Andrew Nikiforuk in a December 23, 2013 article for The Tyee wrote one of the first articles about the closure of the fisheries libraries,

Scientists say the closure of some of the world’s finest fishery, ocean and environmental libraries by the Harper government has been so chaotic that irreplaceable collections of intellectual capital built by Canadian taxpayers for future generations has been lost forever.

Glyn Moody in a Jan. 7, 2014 post on Techdirt noted this,

What’s strange is that even though the rationale for this mass destruction is apparently in order to reduce costs, opportunities to sell off more valuable items have been ignored. A scientist is quoted as follows:

“Hundreds of bound journals, technical reports and texts still on the shelves, presumably meant for the garbage or shredding. I saw one famous monograph on zooplankton, which would probably fetch a pretty penny at a used science bookstore… anybody could go in and help themselves, with no record kept of who got what.”

Gloria Galloway in a Jan. 7, 2014 article for the Globe and Mail adds more details about what has been lost,

Peter Wells, an adjunct professor and senior research fellow at the International Ocean Institute at Dalhousie University in Halifax, said it is not surprising few members of the public used the libraries. But “the public benefits by the researchers and the different research labs being able to access the information,” he said.

Scientists say it is true that most modern research is done online.

But much of the material in the DFO libraries was not available digitally, Dr. Wells said, adding that some of it had great historical value. And some was data from decades ago that researchers use to determine how lakes and rivers have changed.

“I see this situation as a national tragedy, done under the pretext of cost savings, which, when examined closely, will prove to be a false motive,” Dr. Wells said. “A modern democratic society should value its information resources, not reduce, or worse, trash them.”

Dr. Ayles [Burton Ayles, a former DFO regional director and the former director of science for the Freshwater Institute in Winnipeg] said the Freshwater Institute had reports from the 1880s and some that were available nowhere else. “There was a whole core people who used that library on a regular basis,” he said.

Dr. Ayles pointed to a collection of three-ringed binders, occupying seven metres of shelf space, that contained the data collected during a study in the 1960s and 1970s of the proposed Mackenzie Valley pipeline. For a similar study in the early years of this century, he said, “scientists could go back to that information and say, ‘What was the baseline 30 years ago? What was there then and what is there now?’ ”

When asked how much of the discarded information has been digitized, the government did not provide an answer, but said the process continues.

Today, Margo McDiarmid’s Jan. 30, 2014 article for the Canadian Broadcasting Corporation (CBC) news online further explores digitization of the holdings,

Fisheries and Oceans is closing seven of its 11 libraries by 2015. It’s hoping to save more than $443,000 in 2014-15 by consolidating its collections into four remaining libraries.

Shea [Fisheries and Oceans Minister Gail Shea] told CBC News in a statement Jan. 6 that all copyrighted material has been digitized and the rest of the collection will be soon. The government says that putting material online is a more efficient way of handling it.

But documents from her office show there’s no way of really knowing that is happening.

“The Department of Fisheries and Oceans’ systems do not enable us to determine the number of items digitized by location and collection,” says the response by the minister’s office to MacAulay’s inquiry. [emphasis mine]

The documents also show that the department had to figure out what to do with 242,207 books and research documents from the libraries being closed. It kept 158,140 items and offered the remaining 84,067 to libraries outside the federal government.

Shea’s office told CBC that the books were also “offered to the general public and recycled in a ‘green fashion’ if there were no takers.”

The fate of thousands of books appears to be “unknown,” although the documents’ numbers show 160 items from the Maurice Lamontagne Library in Mont-Joli, Que., were “discarded.” A Radio-Canada story in June about the library showed piles of volumes in dumpsters.

And the numbers prove a lot more material was tossed out: the bill to discard material from four of the seven libraries totals $22,816.76.

Leaving aside the issue of whether or not rare books were given away or put in dumpsters, it’s not confidence-building when the government minister can’t offer information about which books have been digitized and where they might be located online.

Interestingly, Fisheries and Oceans is not the only department/ministry shutting down libraries (from McDiarmid’s CBC article),

Fisheries and Oceans is just one of the 14 federal departments, including Health Canada and Environment Canada, that have been shutting physical libraries and digitizing or consolidating the material into closed central book vaults.

I was unaware of the problems with Health Canada’s libraries but Laura Payton’s and Max Paris’ Jan. 20, 2014 article for CBC news online certainly raised my eyebrows,

Health Canada scientists are so concerned about losing access to their research library that they’re finding workarounds, with one squirrelling away journals and books in his basement for colleagues to consult, says a report obtained by CBC News.

The draft report from a consultant hired by the department warned it not to close its library, but the report was rejected as flawed and the advice went unheeded.

Before the main library closed, the inter-library loan functions were outsourced to a private company called Infotrieve, the consultant wrote in a report ordered by the department. The library’s physical collection was moved to the National Science Library on the Ottawa campus of the National Research Council last year.

“Staff requests have dropped 90 per cent over in-house service levels prior to the outsource. This statistic has been heralded as a cost savings by senior HC [Health Canada] management,” the report said.

“However, HC scientists have repeatedly said during the interview process that the decrease is because the information has become inaccessible — either it cannot arrive in due time, or it is unaffordable due to the fee structure in place.”

….

The report noted the workarounds scientists used to overcome their access problems.

Mueller [Dr. Rudi Mueller, who left the department in 2012] used his contacts in industry for scientific literature. He also went to university libraries where he had a faculty connection.

The report said Health Canada scientists sometimes use the library cards of university students in co-operative programs at the department.

Unsanctioned libraries have been created by science staff.

“One group moved its 250 feet of published materials to an employee’s basement. When you need a book, you email ‘Fred,’ and ‘Fred’ brings the book in with him the next day,” the consultant wrote in his report.

“I think it’s part of being a scientist. You find a way around the problems,” Mueller told CBC News.

Unsanctioned, underground libraries aside, the assumption that digitizing documents and books ensures access is false. Glyn Moody in a Nov. 12, 2013 article for Techdirt gives a chastening example of how vulnerable our digital memories are,

The Internet Archive is the world’s online memory, holding the only copies of many historic (and not-so-historic) Web pages that have long disappeared from the Web itself.

Bad news:

This morning at about 3:30 a.m. a fire started at the Internet Archive’s San Francisco scanning center.

Good news:

no one was hurt and no data was lost. Our main building was not affected except for damage to one electrical run. This power issue caused us to lose power to some servers for a while.

Bad news:

Some physical materials were in the scanning center because they were being digitized, but most were in a separate locked room or in our physical archive and were not lost. Of those materials we did unfortunately lose, about half had already been digitized. We are working with our library partners now to assess.

That loss is unfortunate, but imagine if the fire had been in the main server room holding the Internet Archive’s 2 petabytes of data. Wisely, the project has placed copies at other locations …

That’s good to know, but it seems rather foolish for the world to depend on the Internet Archive always being able to keep all its copies up to date, especially as the quantity of data that it stores continues to rise. This digital library is so important in historical and cultural terms: surely it’s time to start mirroring the Internet Archive around the world in many locations, with direct and sustained support from multiple governments.

In addition to the issue of vulnerability, there’s also the issue of authenticity, from my June 5, 2013 posting about science, archives and memories,

… Luciana Duranti [Professor and Chair, MAS (Master of Archival Studies) Program at the University of British Columbia and Director, InterPARES] and her talk titled, Trust and Authenticity in the Digital Environment: An Increasingly Cloudy Issue, which took place in Vancouver (Canada) last year (mentioned in my May 18, 2012 posting).

Duranti raised many, many issues that most of us don’t consider when we blithely store information in the ‘cloud’ or create blogs that turn out to be repositories of a sort (and then don’t know what to do with them; that would be me). She also previewed a Sept. 26 – 28, 2013 conference to be hosted in Vancouver by UNESCO (United Nations Educational, Scientific, and Cultural Organization), “Memory of the World in the Digital Age: Digitization and Preservation.” (UNESCO’s Memory of the World programme hosts a number of these themed conferences and workshops.)

The Sept. 2013 UNESCO ‘memory of the world’ conference in Vancouver seems rather timely in retrospect. The Council of Canadian Academies (CCA) announced that Dr. Doug Owram would be chairing their Memory Institutions and the Digital Revolution assessment (mentioned in my Feb. 22, 2013 posting; scroll down 80% of the way) and, after checking recently, I noticed that the Expert Panel has been assembled and it includes Duranti. Here’s the assessment description from the CCA’s ‘memory institutions’ webpage,

Library and Archives Canada has asked the Council of Canadian Academies to assess how memory institutions, which include archives, libraries, museums, and other cultural institutions, can embrace the opportunities and challenges of the changing ways in which Canadians are communicating and working in the digital age.

Background

Over the past three decades, Canadians have seen a dramatic transformation in both personal and professional forms of communication due to new technologies. Where the early personal computer and word-processing systems were largely used and understood as extensions of the typewriter, advances in technology since the 1980s have enabled people to adopt different approaches to communicating and documenting their lives, culture, and work. Increased computing power, inexpensive electronic storage, and the widespread adoption of broadband computer networks have thrust methods of communication far ahead of our ability to grasp the implications of these advances.

These trends present both significant challenges and opportunities for traditional memory institutions as they work towards ensuring that valuable information is safeguarded and maintained for the long term and for the benefit of future generations. It requires that they keep track of new types of records that may be of future cultural significance, and of any changes in how decisions are being documented. As part of this assessment, the Council’s expert panel will examine the evidence as it relates to emerging trends, international best practices in archiving, and strengths and weaknesses in how Canada’s memory institutions are responding to these opportunities and challenges. Once complete, this assessment will provide an in-depth and balanced report that will support Library and Archives Canada and other memory institutions as they consider how best to manage and preserve the mass quantity of communications records generated as a result of new and emerging technologies.

The Council’s assessment is running concurrently with the Royal Society of Canada’s expert panel assessment on Libraries and Archives in 21st century Canada. Though similar in subject matter, these assessments have a different focus and follow a different process. The Council’s assessment is concerned foremost with opportunities and challenges for memory institutions as they adapt to a rapidly changing digital environment. In navigating these issues, the Council will draw on a highly qualified and multidisciplinary expert panel to undertake a rigorous assessment of the evidence and of significant international trends in policy and technology now underway. The final report will provide Canadians, policy-makers, and decision-makers with the evidence and information needed to consider policy directions. In contrast, the RSC panel focuses on the status and future of libraries and archives, and will draw upon a public engagement process.

So, the government is shutting down libraries in order to save money and praying (?) that the materials have been digitized and that adequate care has been taken to ensure they will not be lost in some disaster or other. Meanwhile the Council of Canadian Academies is conducting an assessment of memory institutions in the digital age. The approach seems backwards.

On a more amusing note, Rick Mercer parodies at least one way scientists are finding to circumvent the cost-cutting exercise in an excerpt (approximately 1 min.) from his Jan. 29, 2014 Rick Mercer Report telecast (thanks Roz),

Mercer’s comment about sports and Canada’s Prime Minister, Stephen Harper’s preferences is a reference to Harper’s expressed desire to write a book about hockey and possibly a veiled reference to Harper’s successful move to prorogue parliament during the 2010 Winter Olympic games in Vancouver in what many observers suggested was a strategy allowing Harper to attend the games at his leisure.

Whether or not you agree with the decision to shutdown some libraries, the implementation seems to have been a remarkably sloppy affair.