Featured post

Brief note about changes

June 19, 2019: Hello! I apologize for this site’s unavailability over the last 10 days or so (June 7 – 18, 2019). Moving to a new web hosting service meant that the ‘law of unintended consequences’ came into play. Fingers crossed that all the problems have been resolved.

On another matter, I’ve accumulated quite a backlog of postings, which I will be releasing (publishing) over the next few months. I’ve been trying to bring that backlog down to a reasonable size for quite some time now but I see more drastic, focused action is required. I will continue posting some more recent news items along with my older pieces.

Colo(u)r-changing building surfaces thanks to gold nanoparticles

Gold has different properties at the nanoscale than it has at the macroscale, and research at the University of Cambridge has found a new way to exploit gold’s unique nanoscale properties, according to a May 13, 2019 news item on ScienceDaily,

The smallest pixels yet created — a million times smaller than those in smartphones, made by trapping particles of light under tiny rocks of gold — could be used for new types of large-scale flexible displays, big enough to cover entire buildings.

The colour pixels, developed by a team of scientists led by the University of Cambridge, are compatible with roll-to-roll fabrication on flexible plastic films, dramatically reducing their production cost. The results are reported in the journal Science Advances [May 10, 2019].

A May 10, 2019 University of Cambridge press release (also on EurekAlert), which originated the news item, delves further into the research,

It has been a long-held dream to mimic the colour-changing skin of octopus or squid, allowing people or objects to disappear into the natural background, but making large-area flexible display screens is still prohibitively expensive because they are constructed from highly precise multiple layers.

At the centre of the pixels developed by the Cambridge scientists is a tiny particle of gold a few billionths of a metre across. The grain sits on top of a reflective surface, trapping light in the gap in between. Surrounding each grain is a thin sticky coating which changes chemically when electrically switched, causing the pixel to change colour across the spectrum.

The team of scientists, from different disciplines including physics, chemistry and manufacturing, made the pixels by coating vats of golden grains with an active polymer called polyaniline and then spraying them onto flexible mirror-coated plastic, to dramatically drive down production cost.

The pixels are the smallest yet created, a million times smaller than typical smartphone pixels. They can be seen in bright sunlight and because they do not need constant power to keep their set colour, have an energy performance that makes large areas feasible and sustainable. “We started by washing them over aluminized food packets, but then found aerosol spraying is faster,” said co-lead author Hyeon-Ho Jeong from Cambridge’s Cavendish Laboratory.

“These are not the normal tools of nanotechnology, but this sort of radical approach is needed to make sustainable technologies feasible,” said Professor Jeremy J Baumberg of the NanoPhotonics Centre at Cambridge’s Cavendish Laboratory, who led the research. “The strange physics of light on the nanoscale allows it to be switched, even if less than a tenth of the film is coated with our active pixels. That’s because the apparent size of each pixel for light is many times larger than their physical area when using these resonant gold architectures.”

The pixels could enable a host of new application possibilities such as building-sized display screens, architecture which can switch off solar heat load, active camouflage clothing and coatings, as well as tiny indicators for coming internet-of-things devices.
The team are currently working on improving the colour range and are looking for partners to develop the technology further.

The research is funded as part of a UK Engineering and Physical Sciences Research Council (EPSRC) investment in the Cambridge NanoPhotonics Centre, as well as the European Research Council (ERC) and the China Scholarship Council.

This image accompanies the press release,

Caption: eNPoMs formed from gold nanoparticles (Au NPs) encapsulated in a conductive polymer shell. Credit: NanoPhotonics Cambridge/Hyeon-Ho Jeong, Jialong Peng

Here’s a link to and a citation for the paper,

Scalable electrochromic nanopixels using plasmonics by Jialong Peng, Hyeon-Ho Jeong, Qianqi Lin, Sean Cormier, Hsin-Ling Liang, Michael F. L. De Volder, Silvia Vignolini, and Jeremy J. Baumberg. Science Advances Vol. 5, no. 5, eaaw2205 DOI: 10.1126/sciadv.aaw2205 Published: 01 May 2019

This paper appears to be open access.
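Baumberg’s remark about switching the film even when less than a tenth of it is coated with active pixels comes down to simple arithmetic: if a resonant nanoparticle’s optical cross-section is many times its physical footprint, only a small areal coverage is needed to interact with essentially all the incident light. Here’s a toy calculation of that idea (the radius and cross-section ratio below are hypothetical illustrative numbers, not values from the paper),

```python
import math

def coverage_for_full_switching(radius_nm, cross_section_ratio):
    """Illustrative estimate: if each particle's optical (extinction)
    cross-section is cross_section_ratio times its geometric area,
    a film needs only 1/ratio areal coverage for every incident ray
    to 'see' a particle. Toy arithmetic, not the paper's model."""
    geometric_area = math.pi * radius_nm ** 2           # nm^2
    optical_area = cross_section_ratio * geometric_area  # nm^2
    return geometric_area / optical_area                 # fractional coverage

# Suppose a resonant gold nanoparticle has an optical cross-section
# roughly 10x its physical footprint (a hypothetical value):
print(coverage_for_full_switching(40, 10))  # roughly 0.1, i.e. a tenth
```

In other words, a tenfold optical-to-geometric ratio would let a tenth of the surface do the work of the whole film, which is the flavour of the “apparent size” argument in the press release.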

Electronics begone! Enter: the light-based brainlike computing chip

I may be wrong, but I think this is the first light-based (rather than electronic) ‘memristor’-type device, also called a neuromorphic chip, that I’ve featured here on this blog. Technically speaking, it’s not a memristor, but it does have the same properties, so it is a neuromorphic chip.

Caption: The optical microchips that the researchers are working on developing are about the size of a one-cent piece. Credit: WWU Muenster – Peter Leßmann

A May 8, 2019 news item on Nanowerk announces this new approach to neuromorphic hardware (Note: A link has been removed),

Researchers from the Universities of Münster (Germany), Oxford and Exeter (both UK) have succeeded in developing a piece of hardware which could pave the way for creating computers which resemble the human brain.

The scientists produced a chip containing a network of artificial neurons that works with light and can imitate the behaviour of neurons and their synapses. The network is able to “learn” information and use this as a basis for computing and recognizing patterns. As the system functions solely with light and not with electrons, it can process data many times faster than traditional systems. …

A May 8, 2019 University of Münster press release (also on EurekAlert), which originated the news item, reveals the full story,

A technology that functions like a brain? In these times of artificial intelligence, this no longer seems so far-fetched – for example, when a mobile phone can recognise faces or languages. With more complex applications, however, computers still quickly come up against their own limitations. One of the reasons for this is that a computer traditionally has separate memory and processor units – the consequence of which is that all data have to be sent back and forth between the two. In this respect, the human brain is way ahead of even the most modern computers because it processes and stores information in the same place – in the synapses, or connections between neurons, of which there are a million-billion in the brain. An international team of researchers from the Universities of Münster (Germany), Oxford and Exeter (both UK) have now succeeded in developing a piece of hardware which could pave the way for creating computers which resemble the human brain. The scientists managed to produce a chip containing a network of artificial neurons that works with light and can imitate the behaviour of neurons and their synapses.

The researchers were able to demonstrate that such an optical neurosynaptic network is able to “learn” information and use this as a basis for computing and recognizing patterns – just as a brain can. As the system functions solely with light and not with traditional electrons, it can process data many times faster. “This integrated photonic system is an experimental milestone,” says Prof. Wolfram Pernice from Münster University and lead partner in the study. “The approach could be used later in many different fields for evaluating patterns in large quantities of data, for example in medical diagnoses.” The study is published in the latest issue of the “Nature” journal.

The story in detail – background and method used

Most of the existing approaches relating to so-called neuromorphic networks are based on electronics, whereas optical systems – in which photons, i.e. light particles, are used – are still in their infancy. The principle which the German and British scientists have now presented works as follows: optical waveguides that can transmit light and can be fabricated into optical microchips are integrated with so-called phase-change materials – which are already found today on storage media such as re-writable DVDs. These phase-change materials are characterised by the fact that they change their optical properties dramatically, depending on whether they are crystalline – when their atoms arrange themselves in a regular fashion – or amorphous – when their atoms organise themselves in an irregular fashion. This phase-change can be triggered by light if a laser heats the material up. “Because the material reacts so strongly, and changes its properties dramatically, it is highly suitable for imitating synapses and the transfer of impulses between two neurons,” says lead author Johannes Feldmann, who carried out many of the experiments as part of his PhD thesis at the Münster University.

In their study, the scientists succeeded for the first time in merging many nanostructured phase-change materials into one neurosynaptic network. The researchers developed a chip with four artificial neurons and a total of 60 synapses. The structure of the chip – consisting of different layers – was based on the so-called wavelength division multiplex technology, which is a process in which light is transmitted on different channels within the optical nanocircuit.

In order to test the extent to which the system is able to recognise patterns, the researchers “fed” it with information in the form of light pulses, using two different algorithms of machine learning. In this process, an artificial system “learns” from examples and can, ultimately, generalise them. In the case of the two algorithms used – both in so-called supervised and in unsupervised learning – the artificial network was ultimately able, on the basis of given light patterns, to recognise a pattern being sought – one of which was four consecutive letters.

“Our system has enabled us to take an important step towards creating computer hardware which behaves similarly to neurons and synapses in the brain and which is also able to work on real-world tasks,” says Wolfram Pernice. “By working with photons instead of electrons we can exploit to the full the known potential of optical technologies – not only in order to transfer data, as has been the case so far, but also in order to process and store them in one place,” adds co-author Prof. Harish Bhaskaran from the University of Oxford.

A very specific example is that with the aid of such hardware cancer cells could be identified automatically. Further work will need to be done, however, before such applications become reality. The researchers need to increase the number of artificial neurons and synapses and increase the depth of neural networks. This can be done, for example, with optical chips manufactured using silicon technology. “This step is to be taken in the EU joint project ‘Fun-COMP’ by using foundry processing for the production of nanochips,” says co-author and leader of the Fun-COMP project, Prof. C. David Wright from the University of Exeter.

Here’s a link to and a citation for the paper,

All-optical spiking neurosynaptic networks with self-learning capabilities by J. Feldmann, N. Youngblood, C. D. Wright, H. Bhaskaran & W. H. P. Pernice. Nature volume 569, pages 208–214 (2019) DOI: https://doi.org/10.1038/s41586-019-1157-8 Issue Date: 09 May 2019

This paper is behind a paywall.

For the curious, I found a little more information about Fun-COMP (functionally-scaled computer technology). It’s a European Commission (EC) Horizon 2020 project coordinated through the University of Exeter. For details such as the total cost, the contribution from the EC, the list of partners and more, there is the Fun-COMP webpage on fabiodisconzi.com.
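The phase-change ‘synapse’ idea in the press release (crystalline versus amorphous states with very different optical properties, nudged by laser pulses) can be sketched in a few lines of code. This is a toy model of the general concept only, not of the device in the paper; the transmission values and pulse behaviour are hypothetical,

```python
class PhaseChangeSynapse:
    """Toy model of a phase-change optical 'synapse'. The crystalline
    fraction of the cell sets how strongly it attenuates light, i.e.
    the synaptic weight. Values are illustrative, not device physics."""

    T_AMORPHOUS = 0.95    # transmission when fully amorphous (hypothetical)
    T_CRYSTALLINE = 0.40  # transmission when fully crystalline (hypothetical)

    def __init__(self):
        self.crystalline_fraction = 0.0  # starts amorphous

    def write_pulse(self, energy):
        """A moderate laser pulse heats the material, nudging it
        toward the crystalline (ordered) state."""
        self.crystalline_fraction = min(1.0, self.crystalline_fraction + energy)

    def reset_pulse(self):
        """A strong, short pulse melts and quenches the cell back
        to the amorphous (disordered) state."""
        self.crystalline_fraction = 0.0

    def transmit(self, light_in):
        """Output light is weighted by the stored state."""
        t = (self.T_AMORPHOUS * (1 - self.crystalline_fraction)
             + self.T_CRYSTALLINE * self.crystalline_fraction)
        return light_in * t

syn = PhaseChangeSynapse()
syn.write_pulse(0.5)      # partially crystallize the cell
print(syn.transmit(1.0))  # light is attenuated by the stored weight
```

The key property mimicked here is that the weight persists without power and is adjusted in analog steps, which is what makes the material attractive for synapses.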

Bad battery, good synapse from Stanford University

A May 4, 2019 news item on ScienceDaily announces the latest advance made by Stanford University and Sandia National Laboratories in the field of neuromorphic (brainlike) computing,

The brain’s capacity for simultaneously learning and memorizing large amounts of information while requiring little energy has inspired an entire field to pursue brain-like — or neuromorphic — computers. Researchers at Stanford University and Sandia National Laboratories previously developed one portion of such a computer: a device that acts as an artificial synapse, mimicking the way neurons communicate in the brain.

In a paper published online by the journal Science on April 25 [2019], the team reports that a prototype array of nine of these devices performed even better than expected in processing speed, energy efficiency, reproducibility and durability.

Looking forward, the team members want to combine their artificial synapse with traditional electronics, which they hope could be a step toward supporting artificially intelligent learning on small devices.

“If you have a memory system that can learn with the energy efficiency and speed that we’ve presented, then you can put that in a smartphone or laptop,” said Scott Keene, co-author of the paper and a graduate student in the lab of Alberto Salleo, professor of materials science and engineering at Stanford who is co-senior author. “That would open up access to the ability to train our own networks and solve problems locally on our own devices without relying on data transfer to do so.”

An April 25, 2019 Stanford University news release (also on EurekAlert but published May 3, 2019) by Taylor Kubota, which originated the news item, expands on the theme,

A bad battery, a good synapse

The team’s artificial synapse is similar to a battery, modified so that the researchers can dial up or down the flow of electricity between the two terminals. That flow of electricity emulates how learning is wired in the brain. This is an especially efficient design because data processing and memory storage happen in one action, rather than a more traditional computer system where the data is processed first and then later moved to storage.

Seeing how these devices perform in an array is a crucial step because it allows the researchers to program several artificial synapses simultaneously. This is far less time consuming than having to program each synapse one-by-one and is comparable to how the brain actually works.

In previous tests of an earlier version of this device, the researchers found their processing and memory action requires about one-tenth as much energy as a state-of-the-art computing system needs in order to carry out specific tasks. Still, the researchers worried that the sum of all these devices working together in larger arrays could risk drawing too much power. So, they retooled each device to conduct less electrical current – making them much worse batteries but making the array even more energy efficient.

The 3-by-3 array relied on a second type of device – developed by Joshua Yang at the University of Massachusetts, Amherst, who is co-author of the paper – that acts as a switch for programming synapses within the array.

“Wiring everything up took a lot of troubleshooting and a lot of wires. We had to ensure all of the array components were working in concert,” said Armantas Melianas, a postdoctoral scholar in the Salleo lab. “But when we saw everything light up, it was like a Christmas tree. That was the most exciting moment.”

During testing, the array outperformed the researchers’ expectations. It performed with such speed that the team predicts the next version of these devices will need to be tested with special high-speed electronics. After measuring high energy efficiency in the 3-by-3 array, the researchers ran computer simulations of a larger 1024-by-1024 synapse array and estimated that it could be powered by the same batteries currently used in smartphones or small drones. The researchers were also able to switch the devices over a billion times – another testament to its speed – without seeing any degradation in its behavior.

“It turns out that polymer devices, if you treat them well, can be as resilient as traditional counterparts made of silicon. That was maybe the most surprising aspect from my point of view,” Salleo said. “For me, it changes how I think about these polymer devices in terms of reliability and how we might be able to use them.”

Room for creativity

The researchers haven’t yet submitted their array to tests that determine how well it learns but that is something they plan to study. The team also wants to see how their device weathers different conditions – such as high temperatures – and to work on integrating it with electronics. There are also many fundamental questions left to answer that could help the researchers understand exactly why their device performs so well.

“We hope that more people will start working on this type of device because there are not many groups focusing on this particular architecture, but we think it’s very promising,” Melianas said. “There’s still a lot of room for improvement and creativity. We only barely touched the surface.”

Here’s a link to and a citation for the paper,

Parallel programming of an ionic floating-gate memory array for scalable neuromorphic computing by Elliot J. Fuller, Scott T. Keene, Armantas Melianas, Zhongrui Wang, Sapan Agarwal, Yiyang Li, Yaakov Tuchman, Conrad D. James, Matthew J. Marinella, J. Joshua Yang, Alberto Salleo, A. Alec Talin. Science 25 Apr 2019: eaaw5581 DOI: 10.1126/science.aaw5581

This paper is behind a paywall.

For anyone interested in more about brainlike/brain-like/neuromorphic computing/neuromorphic engineering/memristors, use any or all of those terms in this blog’s search engine.

Automated science writing?

It seems that automated science writing is not ready—yet. Still, an April 18, 2019 news item on ScienceDaily suggests that progress is being made,

The work of a science writer, including this one, includes reading journal papers filled with specialized technical terminology, and figuring out how to explain their contents in language that readers without a scientific background can understand.

Now, a team of scientists at MIT [Massachusetts Institute of Technology] and elsewhere has developed a neural network, a form of artificial intelligence (AI), that can do much the same thing, at least to a limited extent: It can read scientific papers and render a plain-English summary in a sentence or two.

An April 17, 2019 MIT news release, which originated the news item, delves into the research and its implications,

Even in this limited form, such a neural network could be useful for helping editors, writers, and scientists [emphasis mine] scan a large number of papers to get a preliminary sense of what they’re about. But the approach the team developed could also find applications in a variety of other areas besides language processing, including machine translation and speech recognition.

The work is described in the journal Transactions of the Association for Computational Linguistics, in a paper by Rumen Dangovski and Li Jing, both MIT graduate students; Marin Soljačić, a professor of physics at MIT; Preslav Nakov, a principal scientist at the Qatar Computing Research Institute, HBKU; and Mićo Tatalović, a former Knight Science Journalism fellow at MIT and a former editor at New Scientist magazine.

From AI for physics to natural language

The work came about as a result of an unrelated project, which involved developing new artificial intelligence approaches based on neural networks, aimed at tackling certain thorny problems in physics. However, the researchers soon realized that the same approach could be used to address other difficult computational problems, including natural language processing, in ways that might outperform existing neural network systems.

“We have been doing various kinds of work in AI for a few years now,” Soljačić says. “We use AI to help with our research, basically to do physics better. And as we got to be  more familiar with AI, we would notice that every once in a while there is an opportunity to add to the field of AI because of something that we know from physics — a certain mathematical construct or a certain law in physics. We noticed that hey, if we use that, it could actually help with this or that particular AI algorithm.”

This approach could be useful in a variety of specific kinds of tasks, he says, but not all. “We can’t say this is useful for all of AI, but there are instances where we can use an insight from physics to improve on a given AI algorithm.”

Neural networks in general are an attempt to mimic the way humans learn certain new things: The computer examines many different examples and “learns” what the key underlying patterns are. Such systems are widely used for pattern recognition, such as learning to identify objects depicted in photos.

But neural networks in general have difficulty correlating information from a long string of data, such as is required in interpreting a research paper. Various tricks have been used to improve this capability, including techniques known as long short-term memory (LSTM) and gated recurrent units (GRU), but these still fall well short of what’s needed for real natural-language processing, the researchers say.

The team came up with an alternative system, which instead of being based on the multiplication of matrices, as most conventional neural networks are, is based on vectors rotating in a multidimensional space. The key concept is something they call a rotational unit of memory (RUM).

Essentially, the system represents each word in the text by a vector in multidimensional space — a line of a certain length pointing in a particular direction. Each subsequent word swings this vector in some direction, represented in a theoretical space that can ultimately have thousands of dimensions. At the end of the process, the final vector or set of vectors is translated back into its corresponding string of words.

“RUM helps neural networks to do two things very well,” Nakov says. “It helps them to remember better, and it enables them to recall information more accurately.”

After developing the RUM system to help with certain tough physics problems such as the behavior of light in complex engineered materials, “we realized one of the places where we thought this approach could be useful would be natural language processing,” says Soljačić,  recalling a conversation with Tatalović, who noted that such a tool would be useful for his work as an editor trying to decide which papers to write about. Tatalović was at the time exploring AI in science journalism as his Knight fellowship project.

“And so we tried a few natural language processing tasks on it,” Soljačić says. “One that we tried was summarizing articles, and that seems to be working quite well.”

The proof is in the reading

As an example, they fed the same research paper through a conventional LSTM-based neural network and through their RUM-based system. The resulting summaries were dramatically different.

The LSTM system yielded this highly repetitive and fairly technical summary: “Baylisascariasis,” kills mice, has endangered the allegheny woodrat and has caused disease like blindness or severe consequences. This infection, termed “baylisascariasis,” kills mice, has endangered the allegheny woodrat and has caused disease like blindness or severe consequences. This infection, termed “baylisascariasis,” kills mice, has endangered the allegheny woodrat.

Based on the same paper, the RUM system produced a much more readable summary, and one that did not include the needless repetition of phrases: Urban raccoons may infect people more than previously assumed. 7 percent of surveyed individuals tested positive for raccoon roundworm antibodies. Over 90 percent of raccoons in Santa Barbara play host to this parasite.

Already, the RUM-based system has been expanded so it can “read” through entire research papers, not just the abstracts, to produce a summary of their contents. The researchers have even tried using the system on their own research paper describing these findings — the paper that this news story is attempting to summarize.

Here is the new neural network’s summary: Researchers have developed a new representation process on the rotational unit of RUM, a recurrent memory that can be used to solve a broad spectrum of the neural revolution in natural language processing.

It may not be elegant prose, but it does at least hit the key points of information.

Çağlar Gülçehre, a research scientist at the British AI company Deepmind Technologies, who was not involved in this work, says this research tackles an important problem in neural networks, having to do with relating pieces of information that are widely separated in time or space. “This problem has been a very fundamental issue in AI due to the necessity to do reasoning over long time-delays in sequence-prediction tasks,” he says. “Although I do not think this paper completely solves this problem, it shows promising results on the long-term dependency tasks such as question-answering, text summarization, and associative recall.”

Gülçehre adds, “Since the experiments conducted and model proposed in this paper are released as open-source on Github, as a result many researchers will be interested in trying it on their own tasks. … To be more specific, potentially the approach proposed in this paper can have very high impact on the fields of natural language processing and reinforcement learning, where the long-term dependencies are very crucial.”

The research received support from the Army Research Office, the National Science Foundation, the MIT-SenseTime Alliance on Artificial Intelligence, and the Semiconductor Research Corporation. The team also had help from the Science Daily website, whose articles were used in training some of the AI models in this research.

As usual, this ‘automated writing system’ is framed as a ‘helper’ not a usurper of anyone’s job. However, its potential for changing the nature of the work is there. About five years ago I featured another ‘automated writing’ story in a July 16, 2014 posting titled: ‘Writing and AI or is a robot writing this blog?’ You may have been reading ‘automated’ news stories for years. At the time, the focus was on sports and business.

Getting back to 2019 and science writing, here’s a link to and a citation for the paper,

Rotational Unit of Memory: A Novel Representation Unit for RNNs with Scalable Applications by Rumen Dangovski, Li Jing, Preslav Nakov, Mićo Tatalović and Marin Soljačić. Transactions of the Association for Computational Linguistics Volume 07, 2019 pp. 121–138 DOI: https://doi.org/10.1162/tacl_a_00258 Posted Online 2019

© 2019 Association for Computational Linguistics. Distributed under a CC-BY 4.0 license.

This paper is open access.
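For anyone who wants a feel for the ‘rotational’ idea, here is a much-simplified sketch: each input vector swings the hidden state by an angle in the plane the two vectors span, and because rotations preserve length, the state can neither blow up nor fade away the way it can in conventional recurrent networks. The real RUM learns its rotations (see the paper); this only shows the geometry, and the dimensions and angle below are arbitrary,

```python
import numpy as np

def rotate_toward(state, target, theta):
    """Rotate `state` by angle `theta` in the plane spanned by `state`
    and `target`, leaving its norm unchanged. A simplified sketch of
    the rotational-memory idea, not the trained RUM update."""
    u = state / np.linalg.norm(state)
    w = target - (target @ u) * u        # component orthogonal to state
    if np.linalg.norm(w) < 1e-12:        # already colinear: nothing to do
        return state
    w = w / np.linalg.norm(w)
    return np.linalg.norm(state) * (np.cos(theta) * u + np.sin(theta) * w)

rng = np.random.default_rng(0)
state = rng.standard_normal(8)           # hidden state in 8 dimensions
for _ in range(5):                       # each "word" swings the vector
    word_vec = rng.standard_normal(8)
    state = rotate_toward(state, word_vec, theta=0.3)

print(np.linalg.norm(state))             # norm is preserved by rotations
```

That norm preservation is the geometric reason rotation-based units are claimed to “remember better” over long sequences than multiplication-based updates, whose repeated matrix products can shrink or explode the state.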

Dessert or computer screen?

Scientists at Japan’s Osaka University have developed a technique for creating higher-resolution computer and smartphone screens from the main ingredient of a dessert, nata de coco. From the nata de coco Wikipedia entry (Note: Links have been removed),

Nata de coco (also marketed as “coconut gel”) is a chewy, translucent, jelly-like food produced by the fermentation of coconut water,[1] which gels through the production of microbial cellulose by ‘Komagataeibacter xylinus’. Originating in the Philippines, nata de coco is most commonly sweetened as a candy or dessert, and can accompany a variety of foods, including pickles, drinks, ice cream, puddings, and fruit cocktails.[2]

An April 18, 2019 news item on Nanowerk announces the research (Note: A link has been removed),

A team at the Institute of Scientific and Industrial Research at Osaka University has determined the optical parameters of cellulose molecules with unprecedented precision. They found that cellulose’s intrinsic birefringence, which describes how a material reacts differently to light of various orientations, is powerful enough to be used in optical displays, such as flexible screens or electronic paper (ACS Macro Letters, “Estimation of the Intrinsic Birefringence of Cellulose Using Bacterial Cellulose Nanofiber Films”).

An April 18, 2019 Osaka University press release on AlphaGalileo, which originated the news item, provides some historical context for the use of cellulose along with additional detail about the research,

Cellulose is an ancient material that may be poised for a major comeback. It has been utilized for millennia as the primary component of paper books, cotton clothing, and nata de coco, a tropical dessert made from coconut water. While books made of dead trees and plain old shirts might seem passé in a world increasingly filled with tablets and smartphones, researchers at Osaka University have shown that cellulose might have just what it takes to make our modern electronic screens cheaper and provide sharper, more vibrant images.

Cellulose, a naturally occurring polymer, consists of many long molecular chains. Because of its rigidity and strength, cellulose helps maintain the structural integrity of the cell walls in plants. It makes up about 99% of the nanofibers that comprise nata de coco, and helps create its unique and tasty texture.

The team at Osaka University achieved better results using unidirectionally-aligned cellulose nanofiber films created by stretching hydrogels from nata de coco at various rates. Nata de coco nanofibers allow the cellulose chains to be straight on the molecular level, and this is helpful for the precise determination of the intrinsic birefringence – that is, the maximum birefringence of fully extended polymer chains. The researchers were also able to measure the birefringence more accurately through improvements in method. “Using high quality samples and methods, we were able to reliably determine the inherent birefringence of cellulose, for which very different values had been previously estimated,” says senior author Masaya Nogi.

The main application the researchers envision is as light compensation films for liquid crystal displays (LCDs), since they operate by controlling the brightness of pixels with filters that allow only one orientation of light to pass through. Potentially, any smartphone, computer, or television that has an LCD screen could see improved contrast, along with reduced color unevenness and light leakage with the addition of cellulose nanofiber films.

“Cellulose nanofibers are promising light compensation materials for optoelectronics, such as flexible displays and electronic paper, since they simultaneously have good transparency, flexibility, dimensional stability, and thermal conductivity,” says lead author Kojiro Uetani. “So look for this ancient material in your future high-tech devices.”

Here’s a link to and a citation for the paper,

Estimation of the Intrinsic Birefringence of Cellulose Using Bacterial Cellulose Nanofiber Films by Kojiro Uetani, Hirotaka Koga, and Masaya Nogi. ACS Macro Lett., 2019, 8 (3), pp 250–254 DOI: 10.1021/acsmacrolett.9b00024 Publication Date (Web): February 22, 2019 Copyright © 2019 American Chemical Society

This paper is behind a paywall.
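As a rough illustration of why intrinsic birefringence matters for compensation films: the retardance of a film is just its birefringence times its thickness, so a material with a higher Δn delivers the same optical effect in a thinner film. The numbers below are hypothetical, not the paper’s measured values,

```python
def retardance_nm(delta_n, thickness_nm):
    """Retardance of a birefringent film: the optical path difference
    between the fast and slow polarization axes, R = delta_n * d."""
    return delta_n * thickness_nm

def quarter_wave_thickness_nm(delta_n, wavelength_nm):
    """Film thickness needed to act as a quarter-wave (lambda/4)
    compensation plate at the given wavelength."""
    return (wavelength_nm / 4) / delta_n

# Hypothetical cellulose-like birefringence of 0.05, green light at 550 nm:
print(quarter_wave_thickness_nm(0.05, 550))  # a few thousand nanometres
```

This is the sense in which “powerful enough to be used in optical displays” cashes out: a sufficiently large intrinsic birefringence keeps compensation layers thin.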

Brainlike computing with spintronic devices

Adding to the body of ‘memristor’ research I have here, there’s an April 17, 2019 news item on Nanowerk announcing the development of ‘memristor’ hardware by Japanese researchers (Note: A link has been removed),

A research group from Tohoku University has developed spintronics devices which are promising for future energy-efficient and adaptive computing systems, as they behave like neurons and synapses in the human brain (Advanced Materials, “Artificial Neuron and Synapse Realized in an Antiferromagnet/Ferromagnet Heterostructure Using Dynamics of Spin–Orbit Torque Switching”).

Just because this ‘synapse’ is pretty,

Courtesy: Tohoku University

An April 16, 2019 Tohoku University press release, which originated the news item, expands on the theme,

Today’s information society is built on digital computers that have evolved drastically for half a century and are capable of executing complicated tasks reliably. The human brain, by contrast, operates under very limited power and is capable of executing complex tasks efficiently using an architecture that is vastly different from that of digital computers.

So the development of computing schemes or hardware inspired by the processing of information in the brain is of broad interest to scientists in fields ranging from physics, chemistry, material science and mathematics, to electronics and computer science.

In computing, there are various ways to implement brain-like information processing. A spiking neural network is an implementation that closely mimics the brain’s architecture and temporal information processing. Successful implementation of a spiking neural network requires dedicated hardware with artificial neurons and synapses that are designed to exhibit the dynamics of biological neurons and synapses.

Here, the artificial neuron and synapse would ideally be made of the same material system and operated under the same working principle. However, this has been a challenging issue due to the fundamentally different nature of the neuron and synapse in biological neural networks.

The research group – which includes Professor Hideo Ohno (currently the university president), Associate Professor Shunsuke Fukami, Dr. Aleksandr Kurenkov and Professor Yoshihiko Horio – created an artificial neuron and synapse by using spintronics technology. Spintronics is an academic field that aims to simultaneously use an electron’s electric (charge) and magnetic (spin) properties.

The research group had previously developed a functional material system consisting of antiferromagnetic and ferromagnetic materials. This time, they prepared artificial neuronal and synaptic devices microfabricated from that material system, which demonstrated the fundamental behaviors of biological neurons and synapses – leaky integrate-and-fire and spike-timing-dependent plasticity, respectively – based on the same spintronics concept.
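For readers who haven't met these two behaviors before, here is a toy sketch of the standard textbook models, leaky integrate-and-fire (LIF) and pairwise spike-timing-dependent plasticity (STDP). This is a generic illustration of the concepts, not a model of the Tohoku devices; all parameters are made up.

```python
import math

def lif_run(current, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0):
    """Leaky integrate-and-fire: integrate input current with leak;
    emit a spike time and reset whenever the threshold is crossed."""
    v, spikes = v_rest, []
    for step, i_in in enumerate(current):
        v += dt * (-(v - v_rest) + i_in) / tau  # leaky integration
        if v >= v_thresh:                       # fire and reset
            spikes.append(step * dt)
            v = v_rest
    return spikes

def stdp_dw(dt_pre_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pairwise STDP: positive interval (pre fires before post)
    strengthens the synapse; negative interval weakens it."""
    if dt_pre_post > 0:
        return a_plus * math.exp(-dt_pre_post / tau)
    return -a_minus * math.exp(dt_pre_post / tau)

spikes = lif_run([1.5] * 200)  # constant supra-threshold drive fires repeatedly
print(len(spikes), stdp_dw(5.0) > 0, stdp_dw(-5.0) < 0)
```

A device pair that natively reproduces these two dynamics is, in effect, the hardware building block of a spiking neural network.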

The spiking neural network is known to be advantageous over today’s artificial intelligence for the processing and prediction of temporal information. Expanding the developed technology to the unit-circuit, block and system levels is expected to lead to computers that can process time-varying information such as voice and video with a small amount of power, or to edge devices that have the ability to adapt to users and the environment through usage.

Here’s a link to and a citation for the paper,

Artificial Neuron and Synapse Realized in an Antiferromagnet/Ferromagnet Heterostructure Using Dynamics of Spin–Orbit Torque Switching by Aleksandr Kurenkov, Samik DuttaGupta, Chaoliang Zhang, Shunsuke Fukami, Yoshihiko Horio, Hideo Ohno. Advanced Materials https://doi.org/10.1002/adma.201900636 First published: 16 April 2019

This paper is behind a paywall.

Needle-free tattoos, smart and otherwise

Before getting to the research news from the University of Twente (Netherlands), there’s this related event, which took place on April 18, 2019 (from the Future Under Our Skin webpage on the University of Twente website; Note: I have made some formatting changes),

Why this event?

Our skin can give information about our health, mood and surroundings. Medical and recreational tattoos have decorated humans for centuries. But we can inject other materials besides ink, such as sensing devices and nano- or bio-responsive materials. With the increased percentage of the population tattooed in recent years, new health challenges have emerged; but there is also a unique possibility to “read from our own skin”, beyond an artistic design.
 
We have invited scientists, innovators, entrepreneurs, dermatologists, cosmetic permanent make-up technicians, tattoo artists, philosophers, and other experts. They will share with us their vision of the current and future role our skin has for improving the quality of life.

Open Event

This event is open to students and citizens in general, as well as to societal and governmental organisations interested in the different uses of our skin. The presence of scientists, medical doctors, tattoo artists and industry representatives is guaranteed. Then, we will all explore together the potential for co-creation with healthy citizens, patients, enterprises and other stakeholders.


If you want to hear from experts and share your own ideas, feel free to come to this Open Event!
 
It is possible to have the dish of the day (well-filled noodles with chicken thigh, satay sauce and prawn crackers) in restaurant The Gallery (same building as DesignLab) at your own cost (€7.85). Of course, it is also possible to eat à la carte in the Grand Café.

When: 18 April 2019
Time: 17:30 – 20:00
Organizer: University of Twente
Location: DesignLab, University of Twente
Hengelosestraat 500
7521 AN Enschede

Just days before, the University of Twente announced this research in an April 16, 2019 news item on Nanowerk (Note: A link has been removed),

A tattoo that warns you about too many hours of sunlight exposure, or alerts you to take your medication? Beyond their cosmetic role, tattoos could gain new functionality using intelligent ink. That would require a more precise and less invasive injection technique.

Researchers at the University of Twente are now developing a micro-jet injection technology that doesn’t use needles at all. Instead, an ultrafast liquid jet with the thickness of a human hair penetrates the skin. It isn’t painful and there is less waste.

In their new publication in the Journal of Applied Physics (“High speed imaging of solid needle and liquid micro-jet injections”), the scientists compare both the needle and the fluid-jet approach.

Here’s an image provided by the researchers which illustrates the technique they have developed,

Working principle of needle-free injection: laser heating the fluid. The growing bubble pushes out the fluid (medicine or ink) at very high speed. Courtesy: University of Twente

An April 15, 2019 University of Twente press release, which originated the news item, provides more detail about tattoos and the research leading to ‘needle-free’ tattoos,

Ötzi the Iceman already had, over 5,000 years ago, dozens of simple tattoos on his body, apparently for pain relief. Since the classic ‘anchor’ tattoo that sailors had on their arms, tattoos have become more and more common. About 44 million Europeans wear one or more of them. Despite their wider acceptance in society, the underlying technique hasn’t changed and still carries health risks. One or more moving needles put ink underneath the skin surface. This is painful and can damage the skin. Apart from that, needles have to be disposed of in a responsible way, and quite a lot of ink is wasted. The alternative that David Fernández Rivas and his colleagues are developing doesn’t use any needles. In their new paper, they compare this new approach with classic needle technology, on an artificial skin material and using high-speed images. Remarkably, according to Fernández Rivas, the classic needle technology has never before been the subject of such thorough research using high-speed images.

Fast fluid jet

The new technique employs a laser to rapidly heat a fluid inside a microchannel on a glass chip. Heated above the boiling point, a vapour bubble forms and grows, pushing the liquid out at speeds up to 100 metres per second (360 km/h). The jet, about the diameter of a human hair, is capable of going through human skin. “You don’t feel much of it, no more than a mosquito bite,” says Fernández Rivas.
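The press release's figures invite a quick sanity check. The sketch below verifies the speed conversion and estimates the jet's kinetic-energy flux; the jet diameter (taken as roughly 100 micrometres, "the thickness of a human hair") and water-like density are my assumptions, not values from the paper.

```python
import math

v = 100.0                      # jet speed, m/s (from the press release)
print(v * 3.6)                 # 100 m/s is indeed 360 km/h

d = 100e-6                     # assumed jet diameter, m (hair-thickness)
rho = 1000.0                   # assumed water-like density, kg/m^3
area = math.pi * (d / 2) ** 2  # jet cross-section, m^2
power = 0.5 * rho * v**2 * area * v  # kinetic-energy flux of the jet, W
print(f"kinetic power ~ {power:.1f} W")  # on the order of a few watts
```

A few watts carried in a hair-thin stream is consistent with the release's point that the micro-jet uses little energy compared with a mechanical tattoo machine.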

The researchers did their experiments with a number of commercially available inks. Compared to a tattoo machine, the micro-jet consumes a small amount of energy. More importantly, it minimizes skin damage and the injection efficiency is much higher: there is no loss of fluids. And there is no risk of contaminated needles. The current micro-jet is a single one, while tattooing is often done using multiple needles with different types or colours of ink. Also, the volume that can be ‘delivered’ by the micro-jet has to be increased. These are the next steps in developing the needle-free technology.

Skin treatment

In today’s medical world, tattoo-resembling techniques are used for treatment of skin, masking scars, or treating hair diseases. These are other areas in which the new technique can be used, as well as in vaccination. A challenging idea is using tattoos for cosmetic purposes and as health sensors at the same time. What if ink is light-sensitive or responds to certain substances that are present in the skin or in sweat?

Scientists, students, entrepreneurs and tattoo artists will discuss this new approach at a special event, ‘The Future Under Our Skin’, organized by David Fernández Rivas.

Research has been done in the Mesoscale Chemical Systems group, part of UT’s MESA+ Institute.

Here’s a link to and a citation for the paper,

High speed imaging of solid needle and liquid micro-jet injections by Loreto Oyarte Gálvez, Maria Brió Pérez, and David Fernández Rivas. Journal of Applied Physics 125, 144504 (2019) DOI: https://doi.org/10.1063/1.5074176 Published Online: 09 April 2019

This paper appears to be open access.

Art/science and a paintable diagnostic test for cancer

One of Joseph Cohen’s paintings incorporating carbon nanotubes, photographed in normal light. Photo courtesy of Joseph Cohen. [downloaded from https://news.artnet.com/art-world/carbon-nanotube-cancer-paint-1638340?utm_content=from_&utm_source=Sailthru&utm_medium=email&utm_campaign=Global%20September%202%20PM&utm_term=artnet%20News%20Daily%20Newsletter%20USE%20%2830%20Day%20Engaged%20Only%29]

The artist credited with the work seen above, Joseph Cohen, has done something remarkable with carbon nanotubes (CNTs), something even more remarkable than the painting itself, as Sarah Cascone recounts in her August 30, 2019 article for artnet.com (Note: A link has been removed),

Not every artist can say that his or her work is helping in the fight against cancer. But over the past several years, Joseph Cohen has done just that, working to develop a new, high-tech paint that can be used not only on canvas, but also to detect cancers and medical conditions such as hypertension and diabetes.

Sloan Kettering Institute scientist Daniel Heller first suggested that Cohen come work at his lab after seeing the artist’s work, which is often made with pigments that incorporate diamond dust and gold, at the DeBuck Gallery in New York.

“We initially thought that in working with an artist, we would make art to shed a little light on our science for the public,” Heller told the Memorial Sloan Kettering blog. “But the collaboration actually taught us something that could help us shine a light on cancer.”

For Cohen, the project was initially intended to develop a new way of art-making. In Heller’s lab, he worked with carbon nanotubes, which Heller was already employing in cancer research, for their optical properties. “They fluoresce in the infrared spectrum,” Cohen says. “That gives artists the opportunity to create paintings in a new spectrum, with a whole new palette of colors.”

Because human eyesight is limited, we can’t actually see infrared fluorescence. But using a special short-wave infrared camera, Cohen is able to document otherwise invisible effects, revealing the carbon nanotube paint’s hidden colors.

“What you’re perceiving as a static painting is actually in motion,” Cohen says. “I’m creating paintings that exist outside of the visible experience.”

Art Supplies—and a Diagnostic Tool

That same imaging technique can be used by doctors looking for microalbuminuria, a condition that causes the kidneys to leak trace amounts of albumin into urine, which is an early sign of several cancers, diabetes, and high blood pressure.

Cohen helped co-author a paper published this month in Nature Communications about using the nanosensor paint in litmus paper tests with patient urine samples. The study found that the paint, when viewed through infrared light, was able to reveal the presence of albumin based on changes in the paint’s fluorescence after being exposed to the urine sample.
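Conceptually, the readout described above reduces to comparing each sample's fluorescence against a baseline and flagging changes beyond a noise threshold. The sketch below illustrates that idea only; the peak wavelengths, baseline, and threshold are all hypothetical numbers, not data from the Nature Communications study.

```python
# Illustrative sketch, not the study's actual analysis: flag a urine
# sample as albumin-positive when the sensor's emission peak shifts
# beyond a noise threshold relative to an unexposed baseline.

def albumin_flag(baseline_nm: float, sample_nm: float,
                 threshold_nm: float = 2.0) -> bool:
    """Flag a sample whose emission peak shifted beyond the threshold."""
    return abs(sample_nm - baseline_nm) >= threshold_nm

baseline = 1200.0                                      # hypothetical peak, nm
readings = {"sample_A": 1203.5, "sample_B": 1200.4}    # hypothetical samples
for name, peak in readings.items():
    print(name, "positive" if albumin_flag(baseline, peak) else "negative")
```

The appeal of such a readout is exactly what the article notes: it needs only a dipstick, the paint, and an infrared camera, rather than a full diagnostic lab.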

“It’s easy to detect albumen with a dipstick if there’s a lot of levels in the urine, but that would be like looking at stage four cancer,” Cohen says. “This is early detection.”

What’s more, a nanosensor paint can be easily used around the world, even in poor areas that don’t have access to the best diagnostic technologies. Doctors may even be able to view the urine samples using an infrared imaging attachment on their smartphones.

One of Joseph Cohen’s paintings incorporating carbon nanotubes, shown both in visible light (left) and in UV fluorescence (right). Photo courtesy of Joseph Cohen. [downloaded from https://news.artnet.com/art-world/carbon-nanotube-cancer-paint-1638340?utm_content=from_&utm_source=Sailthru&utm_medium=email&utm_campaign=Global%20September%202%20PM&utm_term=artnet%20News%20Daily%20Newsletter%20USE%20%2830%20Day%20Engaged%20Only%29]

Amazing, eh? If you have the time, do read Cascone’s article in its entirety and, should your curiosity be insatiable, there’s also an August 22, 2019 posting by Jim Stallard on the Memorial Sloan Kettering Cancer Center blog.

Here’s a link to and a citation for the paper,

Synthetic molecular recognition nanosensor paint for microalbuminuria by Januka Budhathoki-Uprety, Janki Shah, Joshua A. Korsen, Alysandria E. Wayne, Thomas V. Galassi, Joseph R. Cohen, Jackson D. Harvey, Prakrit V. Jena, Lakshmi V. Ramanathan, Edgar A. Jaimes & Daniel A. Heller. Nature Communications volume 10, Article number: 3605 (2019) DOI: https://doi.org/10.1038/s41467-019-11583-1 Published: 09 August 2019

This paper is open access.

Joseph Cohen has graced this blog before in a May 3, 2019 posting titled, Where do I stand? a graphene artwork. It seems Cohen is very invested in using nanoscale carbon particles for his art.

First 3D heart printed using patient’s biological materials

This is very exciting news, although it’s likely to be at least 10 years before this technology could be made available to the public.

Caption: A 3D-printed, small-scaled human heart engineered from the patient’s own materials and cells. Credit: Advanced Science. © 2019 The Authors.

An April 15, 2019 news item on ScienceDaily makes a remarkable announcement,

In a major medical breakthrough, Tel Aviv University researchers have “printed” the world’s first 3D vascularised engineered heart using a patient’s own cells and biological materials. Their findings were published on April 15 [2019] in a study in Advanced Science.

Until now, scientists in regenerative medicine — a field positioned at the crossroads of biology and technology — have been successful in printing only simple tissues without blood vessels.

“This is the first time anyone anywhere has successfully engineered and printed an entire heart replete with cells, blood vessels, ventricles and chambers,” says Prof. Tal Dvir of TAU’s School of Molecular Cell Biology and Biotechnology, Department of Materials Science and Engineering, Center for Nanoscience and Nanotechnology and Sagol Center for Regenerative Biotechnology, who led the research for the study.

An April 15, 2019 American Friends of Tel Aviv University (TAU) news release (also on EurekAlert), which originated the news item, provides more detail,

Heart disease is the leading cause of death among both men and women in the United States. Heart transplantation is currently the only treatment available to patients with end-stage heart failure. Given the dire shortage of heart donors, the need to develop new approaches to regenerate the diseased heart is urgent.

“This heart is made from human cells and patient-specific biological materials. In our process these materials serve as the bioinks, substances made of sugars and proteins that can be used for 3D printing of complex tissue models,” Prof. Dvir says. “People have managed to 3D-print the structure of a heart in the past, but not with cells or with blood vessels. Our results demonstrate the potential of our approach for engineering personalized tissue and organ replacement in the future.”

Research for the study was conducted jointly by Prof. Dvir, Dr. Assaf Shapira of TAU’s Faculty of Life Sciences and Nadav Noor, a doctoral student in Prof. Dvir’s lab.

“At this stage, our 3D heart is small, the size of a rabbit’s heart, [emphasis mine] ” explains Prof. Dvir. “But larger human hearts require the same technology.”

For the research, a biopsy of fatty tissue was taken from patients. The cellular and a-cellular materials of the tissue were then separated. While the cells were reprogrammed to become pluripotent stem cells, the extracellular matrix (ECM), a three-dimensional network of extracellular macromolecules such as collagen and glycoproteins, was processed into a personalized hydrogel that served as the printing “ink.”

After being mixed with the hydrogel, the cells were efficiently differentiated to cardiac or endothelial cells to create patient-specific, immune-compatible cardiac patches with blood vessels and, subsequently, an entire heart.

According to Prof. Dvir, the use of “native” patient-specific materials is crucial to successfully engineering tissues and organs.

“The biocompatibility of engineered materials is crucial to eliminating the risk of implant rejection, which jeopardizes the success of such treatments,” Prof. Dvir says. “Ideally, the biomaterial should possess the same biochemical, mechanical and topographical properties of the patient’s own tissues. Here, we can report a simple approach to 3D-printed thick, vascularized and perfusable cardiac tissues that completely match the immunological, cellular, biochemical and anatomical properties of the patient.”

The researchers are now planning on culturing the printed hearts in the lab and “teaching them to behave” like hearts, Prof. Dvir says. They then plan to transplant the 3D-printed heart in animal models.

“We need to develop the printed heart further,” he concludes. “The cells need to form a pumping ability; they can currently contract, but we need them to work together. Our hope is that we will succeed and prove our method’s efficacy and usefulness.

“Maybe, in ten years, there will be organ printers in the finest hospitals around the world, and these procedures will be conducted routinely.”

Given the work still required to grow the heart to human size and to get the cells working together so the heart will pump, the 10 years Dvir imagines before organ printers in hospitals are routinely printing hearts seems a bit optimistic. Regardless, I hope he’s right. Bravo to these Israeli researchers!

Here’s a link to and a citation for the paper,

3D Printing of Personalized Thick and Perfusable Cardiac Patches and Hearts by Nadav Noor, Assaf Shapira, Reuven Edri, Idan Gal, Lior Wertheim, Tal Dvir. Advanced Science DOI: https://doi.org/10.1002/advs.201900344 First published: 15 April 2019

This paper is open access.

Breakthrough with Alpaca nanobodies

Caption: Bryson and Sanchez, two alpacas who produce unusually small antibodies. These ‘nanobodies’ could help highly promising CAR T-cell therapies kill solid tumors, where right now they work only in blood cancers. Credit: Courtesy of Boston Children’s Hospital

Bryson and Sanchez are not the first camelids to grace this blog. ‘Llam’ me lend you some antibodies—antibody particles extracted from camels and llamas, a June 12, 2014 posting, and Llama-derived nanobodies are good for solving crystal structure, a December 14, 2017 posting, both feature news about the medical breakthroughs that the antibodies found in llamas, camels, and other camelids (including alpacas) could enable.

The latest camelid-oriented medical research story is in an April 11, 2019 news item on phys.org (Note: A link has been removed),

In 1989, two undergraduate students at the Free University of Brussels were asked to test frozen blood serum from camels, and stumbled on a previously unknown kind of antibody. It was a miniaturized version of a human antibody, made up only of two heavy protein chains, rather than two light and two heavy chains. As they eventually reported, the antibodies’ presence was confirmed not only in camels, but also in llamas and alpacas.

Fast forward 30 years. In the journal PNAS [Proceedings of the National Academy of Sciences] this week [April 8 – 12, 2019], researchers at Boston Children’s Hospital and MIT [Massachusetts Institute of Technology] show that these mini-antibodies, shrunk further to create so-called nanobodies, may help solve a problem in the cancer field: making CAR T-cell therapies work in solid tumors.

An April 11, 2019 Boston Children’s Hospital news release on EurekAlert, which originated the news item, explores the technology,

Highly promising for blood cancers, chimeric antigen receptor (CAR) T-cell therapy genetically engineers a patient’s own T cells to make them better at attacking cancer cells. The Dana-Farber/Boston Children’s Cancer and Blood Disorders Center is currently using CAR T-cell therapy for relapsed acute lymphocytic leukemia (ALL), for example.

But CAR T cells haven’t been good at eliminating solid tumors. It’s been hard to find cancer-specific proteins on solid tumors that could serve as safe targets. Solid tumors are also protected by an extracellular matrix, a supportive web of proteins that acts as a barrier, as well as immunosuppressive molecules that weaken the T-cell attack.

Rethinking CAR T cells

That’s where nanobodies come in. For two decades, they largely remained in the hands of the Belgian team. But that changed after the patent expired in 2013. [emphases mine]

“A lot of people got into the game and began to appreciate nanobodies’ unique properties,” says Hidde Ploegh, PhD, an immunologist in the Program in Cellular and Molecular Medicine at Boston Children’s and senior investigator on the PNAS study.

One useful attribute is their enhanced targeting abilities. Ploegh and his team at Boston Children’s, in collaboration with Noor Jailkhani, PhD, and Richard Hynes, PhD at MIT’s Koch Institute for Integrative Cancer Research, have harnessed nanobodies to carry imaging agents, allowing precise visualization of metastatic cancers.

The Hynes team targeted the nanobodies to the tumors’ extracellular matrix, or ECM — aiming imaging agents not at the cancer cells themselves, but at the environment that surrounds them. Such markers are common to many tumors, but don’t typically appear on normal cells.

“Our lab and the Hynes lab are among the few actively pursuing this approach of targeting the tumor micro-environment,” says Ploegh. “Most labs are looking for tumor-specific antigens.”

Targeting tumor protectors

Ploegh’s lab took this idea to CAR T-cell therapy. His team, including members of the Hynes lab, took aim at the very factors that make solid tumors difficult to treat.

The CAR T cells they created were studded with nanobodies that recognize specific proteins in the tumor environment, bearing signals directing them to kill any cell they bound to. One protein, EIIIB, a variant of fibronectin, is found only on newly formed blood vessels that supply tumors with nutrients. Another, PD-L1, is an immunosuppressive protein that most cancers use to silence approaching T cells.

Biochemist Jessica Ingram, PhD of the Dana-Farber Cancer Institute, Ploegh’s partner and a coauthor on the paper, led the manufacturing pipeline. She would drive to Amherst, Mass., to gather T cells from two alpacas, Bryson and Sanchez, inject them with the antigen of interest and harvest their blood for further processing back in Boston to generate mini-antibodies.

Taking down melanoma and colon cancer

Tested in two separate melanoma mouse models, as well as a colon adenocarcinoma model in mice, the nanobody-based CAR T cells killed tumor cells, significantly slowed tumor growth and improved the animals’ survival, with no readily apparent side effects.

Ploegh thinks that the engineered T cells work through a combination of factors. They caused damage to tumor tissue, which tends to stimulate inflammatory immune responses. Targeting EIIIB may damage blood vessels in a way that decreases blood supply to tumors, while making them more permeable to cancer drugs.

“If you destroy the local blood supply and cause vascular leakage, you could perhaps improve the delivery of other things that might have a harder time getting in,” says Ploegh. “I think we should look at this as part of a combination therapy.”

Future directions

Ploegh thinks his team’s approach could be useful in many solid tumors. He’s particularly interested in testing nanobody-based CAR T cells in models of pancreatic cancer and cholangiocarcinoma, a bile duct cancer from which Ingram passed away in 2018.

The technology itself can be pushed even further, says Ploegh.

“Nanobodies could potentially carry a cytokine to boost the immune response to the tumor, toxic molecules that kill tumor and radioisotopes to irradiate the tumor at close range,” he says. “CAR T cells are the battering ram that would come in to open the door; the other elements would finish the job. In theory, you could equip a single T cell with multiple chimeric antigen receptors and achieve even more precision. That’s something we would like to pursue.”

So, the Belgian researchers held a patent for two decades and, only after it expired, could more researchers help take the work further. Hmm …

Moving on, here’s a link to and a citation for the paper,

Nanobody-based CAR T cells that target the tumor microenvironment inhibit the growth of solid tumors in immunocompetent mice by Yushu Joy Xie, Michael Dougan, Noor Jailkhani, Jessica Ingram, Tao Fang, Laura Kummer, Noor Momin, Novalia Pishesha, Steffen Rickelt, Richard O. Hynes, and Hidde Ploegh. PNAS DOI: https://doi.org/10.1073/pnas.1817147116 First published April 1, 2019

This paper is behind a paywall.