Tag Archives: Germany

Better anti-parasitic medicine delivery with chitosan-based nanocapsules

Image: The common liver fluke, which can cause fascioliasis. Credit: Wikimedia Creative Commons Courtesy: Leeds University

It looks like a pair of lips to me but, according to a December 12, 2018 news item on Nanowerk, this liver fluke heralds a serious health problem: flatworm infection,

An international team, led by Professor Francisco Goycoolea from the University of Leeds [UK] and Dr Claudio Salomon from the Universidad Nacional de Rosario, Argentina, and in collaboration with colleagues at the University of Münster, Germany, have developed a novel pharmaceutical formulation to administer triclabendazole – an anti-parasitic drug used to treat a type of flatworm infection – in billions of tiny capsules.

The World Health Organisation estimates that 2.4 million people are infected with fascioliasis, the disease caused by flatworms and treated with triclabendazole.

A December 12, 2018 University of Leeds press release (also on EurekAlert), which originated the news item, provides more detail,

Anti-parasitic drugs do not become effective until they dissolve and are absorbed. Traditionally, these medicines are highly insoluble and this limits their therapeutic effect.
In a bid to overcome this limitation and accomplish the new formulation, the team used “soft” nanotechnology and nanomedicine approaches, which utilise the self-assembly properties of organic nanostructures and rely on techniques in which components in solution, such as polymers and surfactants, play key roles.

Their formulation produces capsules that are less than one micron in size – the diameter of a human hair is roughly 75 microns. These tiny capsules are loaded with triclabendazole and then bundled together to deliver the required dose.

The team used chitosan, a naturally-occurring sugar polymer found in the exoskeleton of shellfish and the cell walls of certain fungi, to coat the oil-core of capsules and bind the drug together, while stabilising the capsule and helping to preserve it.
In its nanocapsule form, the drug would be 100 times more soluble than its current tablet form.

Professor Goycoolea, from the School of Food Science and Nutrition at Leeds, said: “Solubility is a critical challenge for effective anti-parasite medicine. We looked to tackle this problem at the particle level. Triclabendazole taken as a dose made up of billions of tiny capsules would mean the medicine would be more efficiently and quickly absorbed.

“Through the use of nanocapsules and nanoemulsions, drug efficiency can be enhanced and new solutions can be considered for the best ways to target medicine delivery.”
Dr Salomon said: “To date, this is the first report on triclabendazole nanoencapsulation and we believe this type of formulation could be applied to other anti-parasitic drugs as well. But more research is needed to ensure this new pharmaceutical formulation of the drug does not diminish the anti-parasitic effect. Our ongoing research is working to answer this very question.”

Although there have been cases of fascioliasis in more than 70 countries worldwide, with increasing reports from Europe and the Americas, it is considered a neglected disease, as it does not receive much attention and often goes untreated.
Symptoms of the disease when it reaches the chronic phase include intermittent pain, jaundice and anaemia. Patients can also experience hardening of the liver in the case of long-term inflammation.

Because of the highly insoluble nature of anti-parasitic drugs, they need to be administered in very high dosages to ensure enough of the active ingredient is absorbed. This is particularly problematic when treating children for parasites. Tablets need to be divided into smaller pieces to adjust the dosage and make swallowing easier, but this can cause side effects due to incorrect dosing.

The team’s technique to formulate triclabendazole into nanocapsules, published today [Dec. 12, 2018] in the journal PLOS ONE, would also allow for lower doses to be administered.
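The solubility gain the press release describes comes down to geometry: dissolution rate scales with exposed surface area (the Noyes–Whitney relation), so splitting a fixed dose into smaller particles multiplies that area. Here is a minimal sketch of that arithmetic; the particle sizes are assumptions of mine for illustration, not figures from the paper:

```python
# Illustrative only: why sub-micron capsules dissolve faster.
# The Noyes-Whitney equation says dissolution rate is proportional
# to the exposed surface area A. A fixed total drug volume V split
# into equal spheres of radius r has A_total = 3 * V / r, so shrinking
# r multiplies the surface area (and hence the dissolution rate).

def total_surface_area(volume_m3, radius_m):
    """Total surface area of a volume split into equal spheres of radius r."""
    return 3.0 * volume_m3 / radius_m

dose_volume = 1e-9      # 1 mm^3 of drug -- an arbitrary illustrative dose
tablet_grain = 50e-6    # assumed ~50 micron drug particles in a tablet
nanocapsule = 0.5e-6    # assumed ~500 nm capsules

a_tablet = total_surface_area(dose_volume, tablet_grain)
a_nano = total_surface_area(dose_volume, nanocapsule)

print(f"Surface-area gain: {a_nano / a_tablet:.0f}x")  # 100x for these radii
```

For these assumed radii the exposed area grows by the same factor of 100 quoted for the nanocapsule form, though the press release's "100 times more soluble" figure presumably comes from the team's measurements rather than this back-of-envelope geometry.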

Here’s a link to and a citation for the paper,

Chitosan-based nanodelivery systems applied to the development of novel triclabendazole formulations by Daniel Real, Stefan Hoffmann, Darío Leonardi, Claudio Salomon, Francisco M. Goycoolea. PLOS ONE DOI: https://doi.org/10.1371/journal.pone.0207625 Published: December 12, 2018

This paper is open access. BTW, I loved the title for the press release (Helping the anti-parasitic medicine go down) for its reference to the song, A spoonful of sugar helps the medicine go down, in the 1964 film musical, Mary Poppins, and the shout out for the sort of sequel, Mary Poppins Returns, released on Dec. 19, 2018.

Membrane stretching as a new transport mechanism for nanomaterials

This work comes from Catalonia, Spain by way of a collaboration between Chinese, German, and, of course, Spanish scientists. From a December 12, 2018 Universitat Rovira i Virgili press release (also on EurekAlert),

Increasing awareness of the bioeffects and toxicity of nanomaterials interacting with cells puts in focus the mechanisms by which nanomaterials can cross lipid membranes. Apart from the well-discussed energy-dependent endocytosis for large objects and passive diffusion through membranes by solute molecules, there may exist other transport mechanisms based on physical principles. Based on this hypothesis, the theoretical physics team at Universitat Rovira i Virgili in Tarragona, led by Dr. Vladimir Baulin, designed a research project to investigate the interaction between nanotubes and lipid membranes. In computer simulations, the researchers studied what they call a “model bilayer”, composed of only one type of lipid. Based on their calculations, Dr. Baulin’s team observed that ultra-short nanotubes (10 nm in length) can insert perpendicularly into the lipid bilayer core.

They observed that these nanotubes stay trapped in the cell membrane, as commonly accepted by the scientific community. But a surprise appeared when they stretched their model cell membrane: the nanotubes trapped in the bilayer suddenly started to escape from it on both sides. This means that it is possible to control the transport of nanomaterials across a cell membrane by tuning the membrane tension.

At this point, Dr. Baulin contacted Dr. Jean-Baptiste Fleury at Saarland University (Germany) to confirm this mechanism and to study this tension-mediated transport phenomenon experimentally. Dr. Fleury and his team designed a microfluidic experiment with a well-controlled phospholipid bilayer, an experimental model for cell membranes, and added ultra-small carbon nanotubes (10 nm in length) in solution. The nanotubes had an adsorbed lipid monolayer that guaranteed their stable dispersion and prevented their clustering. Using a combination of optical fluorescence microscopy and electrophysiological measurements, Dr. Fleury’s team could follow individual nanotubes crossing a bilayer and unravel their pathway at the molecular level. And as predicted by the simulations, they observed that nanotubes inserted into the bilayer by dissolving their lipid coating into the artificial membrane. When a tension of 4 mN/m was applied to the bilayer, the nanotubes spontaneously escaped it in just a few milliseconds, while at lower tensions they remained trapped inside the membrane.

This discovery of the translocation of tiny nanotubes through the barriers protecting cells, i.e. lipid bilayers, may raise concerns about the safety of nanomaterials for public health and suggests new mechanical mechanisms to control drug delivery.
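The tension-triggered escape can be caricatured with a toy energy balance; this is not the authors' simulation, just a sketch of the threshold logic: the tube stays put while the inserted state is energetically favorable, and applied tension adds a free-energy penalty that eventually tips it out. Both parameters below are invented for illustration, tuned only so that the qualitative threshold mirrors the experiment:

```python
# Toy energy-balance sketch (not the published model): a nanotube stays
# trapped while its trapped state is favorable, and escapes once membrane
# tension raises that state's free energy above the escaped state.
# binding_energy_kT and coupling_kT_per_mN are assumed, illustrative values.

def escapes(tension_mN_per_m, binding_energy_kT=20.0, coupling_kT_per_mN=6.0):
    """True if the applied tension (mN/m) destabilizes the trapped state."""
    penalty = coupling_kT_per_mN * tension_mN_per_m  # tension-induced cost, in kT
    return penalty > binding_energy_kT

for t in (0.0, 2.0, 4.0):
    print(f"tension {t} mN/m -> escapes: {escapes(t)}")
```

With these assumed numbers the sketch reproduces the qualitative finding only: trapped at low tension, escape at 4 mN/m. The real transition involves membrane thinning and the kinetics of the lipid coating, which this two-parameter model ignores.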

Caption: Nanotubes trapped inside the membrane. Credit: © URV

Here’s a link to and a citation for the paper,

Tension-Induced Translocation of an Ultrashort Carbon Nanotube through a Phospholipid Bilayer by Yachong Guo, Marco Werner, Ralf Seemann, Vladimir A. Baulin, and Jean-Baptiste Fleury. ACS Nano, Article ASAP DOI: 10.1021/acsnano.8b04657 Publication Date (Web): November 19, 2018

Copyright © 2018 American Chemical Society

This paper is behind a paywall.

Artificial synapse courtesy of nanowires

It looks like a popsicle to me,

Caption: Image captured by an electron microscope of a single nanowire memristor (highlighted in colour to distinguish it from other nanowires in the background image). Blue: silver electrode, orange: nanowire, yellow: platinum electrode. Blue bubbles are dispersed over the nanowire. They are made up of silver ions and form a bridge between the electrodes which increases the resistance. Credit: Forschungszentrum Jülich

Not a popsicle but a representation of a device (memristor) scientists claim mimics a biological nerve cell according to a December 5, 2018 news item on ScienceDaily,

Scientists from Jülich [Germany] together with colleagues from Aachen [Germany] and Turin [Italy] have produced a memristive element made from nanowires that functions in much the same way as a biological nerve cell. The component is able to both save and process information, as well as receive numerous signals in parallel. The resistive switching cell made from oxide crystal nanowires is thus proving to be the ideal candidate for use in building bioinspired “neuromorphic” processors, able to take over the diverse functions of biological synapses and neurons.

A Dec. 5, 2018 Forschungszentrum Jülich press release (also on EurekAlert), which originated the news item, provides more details,

Computers have learned a lot in recent years. Thanks to rapid progress in artificial intelligence they are now able to drive cars, translate texts, defeat world champions at chess, and much more besides. In doing so, one of the greatest challenges lies in the attempt to artificially reproduce the signal processing in the human brain. In neural networks, data are stored and processed to a high degree in parallel. Traditional computers on the other hand rapidly work through tasks in succession and clearly distinguish between the storing and processing of information. As a rule, neural networks can only be simulated in a very cumbersome and inefficient way using conventional hardware.

Systems with neuromorphic chips that imitate the way the human brain works offer significant advantages. Experts in the field describe this type of bioinspired computer as being able to work in a decentralised way, having at its disposal a multitude of processors, which, like neurons in the brain, are connected to each other by networks. If a processor breaks down, another can take over its function. What is more, just like in the brain, where practice leads to improved signal transfer, a bioinspired processor should have the capacity to learn.

“With today’s semiconductor technology, these functions are to some extent already achievable. These systems are however suitable for particular applications and require a lot of space and energy,” says Dr. Ilia Valov from Forschungszentrum Jülich. “Our nanowire devices made from zinc oxide crystals can inherently process and even store information, as well as being extremely small and energy efficient,” explains the researcher from Jülich’s Peter Grünberg Institute.

For years memristive cells have been ascribed the best chances of being capable of taking over the function of neurons and synapses in bioinspired computers. They alter their electrical resistance depending on the intensity and direction of the electric current flowing through them. In contrast to conventional transistors, their last resistance value remains intact even when the electric current is switched off. Memristors are thus fundamentally capable of learning.

In order to create these properties, scientists at Forschungszentrum Jülich and RWTH Aachen University used a single zinc oxide nanowire, produced by their colleagues from the polytechnic university in Turin. Measuring approximately one ten-thousandth of a millimeter in size, this type of nanowire is over a thousand times thinner than a human hair. The resulting memristive component not only takes up a tiny amount of space, but also is able to switch much faster than flash memory.

Nanowires offer promising novel physical properties compared to other solids and are used among other things in the development of new types of solar cells, sensors, batteries and computer chips. Their manufacture is comparatively simple. Nanowires result from the evaporation deposition of specified materials onto a suitable substrate, where they practically grow of their own accord.

In order to create a functioning cell, both ends of the nanowire must be attached to suitable metals, in this case platinum and silver. The metals function as electrodes, and in addition, release ions triggered by an appropriate electric current. The metal ions are able to spread over the surface of the wire and build a bridge to alter its conductivity.

Components made from single nanowires are, however, still too isolated to be of practical use in chips. Consequently, the next step being planned by the Jülich and Turin researchers is to produce and study a memristive element, composed of a larger, relatively easy to generate group of several hundred nanowires offering more exciting functionalities.
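The memristive behavior described above, resistance set by the history of current flow and retained when the current stops, can be sketched with the linear ion-drift model popularized by HP Labs in 2008. This is a generic textbook model, not the Jülich group's device physics, and the resistance and mobility values are illustrative:

```python
# Minimal memristor sketch (linear ion-drift model, not the Julich
# nanowire device): resistance depends on the charge that has flowed,
# and the last value persists when the current is switched off --
# the "memory" that makes memristors candidates for artificial synapses.

R_ON, R_OFF = 100.0, 16000.0  # limiting resistances in ohms (illustrative)

class Memristor:
    def __init__(self, x=0.5):
        self.x = x  # internal state in [0, 1]: fraction of the doped region

    def resistance(self):
        # Two regions in series, weighted by the state variable.
        return R_ON * self.x + R_OFF * (1.0 - self.x)

    def apply_current(self, i_amps, dt_s, mobility=1e4):
        # State drifts with the charge passed (i * dt); the current's
        # sign sets the direction, and the state saturates at the bounds.
        self.x = min(1.0, max(0.0, self.x + mobility * i_amps * dt_s))

m = Memristor()
r_before = m.resistance()
m.apply_current(1e-3, 0.02)   # a positive pulse lowers the resistance
r_after = m.resistance()
m.apply_current(0.0, 1.0)     # no current: state and resistance persist
print(r_before, "->", r_after, "| retained:", m.resistance() == r_after)
```

The persistence shown in the last two lines is what the press release means by the cell being "fundamentally capable of learning": the device's conductance encodes its stimulation history, much as a synapse's strength encodes past activity.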

The Italians have also written about the work in a December 4, 2018 news item for the Politecnico di Torino’s in-house magazine, PoliFlash. I like the image they’ve used better as it offers a bit more detail and looks less like a popsicle. First, the image,

Courtesy: Politecnico di Torino

Now, the news item, which includes some historical information about the memristor (Note: There is some repetition and links have been removed),

Emulating and understanding the human brain is one of the most important challenges for modern technology: on the one hand, the ability to artificially reproduce the processing of brain signals is one of the cornerstones for the development of artificial intelligence, while on the other the understanding of the cognitive processes at the base of the human mind is still far away.

And the research published in the prestigious journal Nature Communications by Gianluca Milano and Carlo Ricciardi, PhD student and professor, respectively, of the Applied Science and Technology Department of the Politecnico di Torino, represents a step forward in these directions. In fact, the study entitled “Self-limited single nanowire systems combining all-in-one memristive and neuromorphic functionalities” shows how it is possible to artificially emulate the activity of synapses, i.e. the connections between neurons that regulate the learning processes in our brain, in a single “nanowire” with a diameter thousands of times smaller than that of a hair.

It is a crystalline nanowire that takes the “memristor”, the electronic device able to artificially reproduce the functions of biological synapses, to a higher level of performance. Thanks to the use of nanotechnologies, which allow the manipulation of matter at the atomic level, it was for the first time possible to combine into one single device the synaptic functions that were previously emulated through separate devices. For this reason, the nanowire allows an extreme miniaturisation of the “memristor”, significantly reducing the complexity and energy consumption of the electronic circuits necessary for the implementation of learning algorithms.

Starting from the theorisation of the “memristor” in 1971 by Prof. Leon Chua – now visiting professor at the Politecnico di Torino, who was conferred an honorary degree by the University in 2015 – this new technology will not only allow smaller and higher-performing devices to be created for the implementation of increasingly “intelligent” computers, but is also a significant step forward for the emulation and understanding of the functioning of the brain.

“The nanowire memristor,” said Carlo Ricciardi, “represents a model system for the study of physical and electrochemical phenomena that govern biological synapses at the nanoscale. The work is the result of the collaboration between our research team and the RWTH University of Aachen in Germany, supported by INRiM, the National Institute of Metrological Research, and IIT, the Italian Institute of Technology.”

h/t to Nanowerk’s Dec. 10, 2018 news item for the Italian info.

Here’s a link to and a citation for the paper,

Self-limited single nanowire systems combining all-in-one memristive and neuromorphic functionalities by Gianluca Milano, Michael Luebben, Zheng Ma, Rafal Dunin-Borkowski, Luca Boarino, Candido F. Pirri, Rainer Waser, Carlo Ricciardi, & Ilia Valov. Nature Communications volume 9, Article number: 5151 (2018) DOI: https://doi.org/10.1038/s41467-018-07330-7 Published: 04 December 2018

This paper is open access.

Just use the search term “memristor” in the blog search engine if you’re curious about the multitudinous number of postings on the topic here.

Defending nanoelectronics from cyber attacks

There’s a new program at the University of Stuttgart (Germany) and their call for projects was recently announced. First, here’s a description of the program in a May 30, 2019 news item on Nanowerk,

Today’s societies critically depend on electronic systems. Past spectacular cyber-attacks have clearly demonstrated the vulnerability of existing systems and the need to prevent such attacks in the future. The majority of available cyber-defenses concentrate on protecting the software part of electronic systems or their communication interfaces.

However, advances in manufacturing technology and increasing hardware complexity pose a large number of challenges, so the focus of attackers has shifted towards the hardware level. We have already seen evidence of powerful and successful hardware-level attacks, including Rowhammer, Meltdown and Spectre.

These attacks happened on products built using state-of-the-art microelectronic technology. However, we are facing completely new security challenges due to the ongoing transition to radically new types of nanoelectronic devices, such as memristors, spintronics, or transistors based on carbon nanotubes and graphene.

The use of such emerging nanotechnologies is inevitable to address the key challenges related to energy efficiency, computing power and performance. Therefore, the entire industry is switching to emerging nanoelectronics alongside scaled CMOS technologies in heterogeneous integrated systems.

These technologies come with new properties and also facilitate the development of radically different computer architectures. The new technologies and architectures provide new opportunities for achieving security targets, but also raise questions about their vulnerabilities to new types of hardware attacks.

A May 28, 2019 University of Stuttgart press release provides more information about the program and the call for projects,

Whether it’s cars, industrial plants or the government network, spectacular cyber attacks over the past few months have shown how vulnerable modern electronic systems are. The aim of the new Priority Program “Nano Security”, which is coordinated by the University of Stuttgart, is to protect against and prevent the cyber attacks of the future. The program, which is funded by the German Research Foundation (DFG), emphasizes making the hardware into a reliable foundation of a system or a layer of security.

The challenges of nanoelectronics

Completely new challenges also emerge as a result of the switch to radically new nanoelectronic components, which for example are used to master future challenges in terms of energy efficiency, computing power and secure data transmission. Examples include memristors (components which are not just used to store information but also function as logic modules), spintronics, which exploits quantum-mechanical effects, and carbon nanotubes.

The new technologies, as well as the fundamentally different computer architecture associated with them, offer new opportunities for cryptographic primitives in order to achieve an even more secure data transmission. However, they also raise questions about their vulnerability to new types of hardware attacks.

The problem is part of the solution

In this context, a better understanding should be developed of what consequences the new nanoelectronic technologies have for the security of circuits and systems as part of the new Priority Program. Here, the hardware is not just thought of as part of the problem but also as an important and necessary part of the solution to security problems. The starting points here for example are the hardware-based generation of cryptographic keys, the secure storage and processing of sensitive data, and the isolation of system components which is guaranteed by the hardware. Lastly, it should be ensured that an attack cannot be spread further by the system.

In this process, the scientists want to assess the possible security risks and weaknesses which stem from the new type of nanoelectronics. Furthermore, they want to develop innovative approaches for system security which are based on nanoelectronics as a security anchor.

The Priority Program promotes cooperation between scientists, who develop innovative security solutions for the computer systems of the future on different levels of abstraction. Likewise, it makes methods available to system designers to keep ahead in the race between attackers and security measures over the next few decades.
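One of the starting points mentioned above, hardware-based generation of cryptographic keys, is commonly realized with physically unclonable functions (PUFs): each chip's manufacturing variation fixes a device-unique bit pattern, each read-out of that pattern is noisy, and repeated reads are combined into a stable key. The following toy simulation sketches only that majority-voting idea; the bit count, noise rate, and read count are all assumed for illustration:

```python
# Toy sketch of PUF-style key derivation (illustrative, not any real
# device's scheme): majority-vote noisy reads of a device-unique
# fingerprint into a stable key.
import random

random.seed(1)
N_BITS = 64

# The chip's intrinsic fingerprint, fixed at manufacture.
fingerprint = [random.randint(0, 1) for _ in range(N_BITS)]

def noisy_read(bits, flip_prob=0.05):
    """One measurement of the fingerprint; each bit may flip with flip_prob."""
    return [b ^ (random.random() < flip_prob) for b in bits]

def derive_key(reads):
    """Majority-vote each bit position across several reads."""
    return [int(sum(col) > len(reads) / 2) for col in zip(*reads)]

key = derive_key([noisy_read(fingerprint) for _ in range(9)])
print("key matches fingerprint:", key == fingerprint)
```

Real PUF designs replace the bare majority vote with fuzzy extractors and error-correcting codes, precisely because residual bit errors in a cryptographic key are unacceptable; the emerging nanodevices discussed here change the noise and entropy characteristics such schemes must handle.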

The call has started

The DFG Priority Program “Nano Security: From Nano-Electronics to Secure Systems” (SPP 2253) is scheduled to last for a period of six years. The call for projects for the first three-year funding period was advertised a few days ago, and the first projects are set to start at the beginning of 2020.

For more information go to the Nano Security: From Nano-Electronics to Secure Systems webpage on the University of Stuttgart website.

It’s a very ‘carbony’ time: graphene jacket, graphene-skinned airplane, and schwarzite

In August 2018, I stumbled across several stories about graphene-based products and a new form of carbon.

Graphene jacket

The company producing this jacket has as its goal “… creating bionic clothing that is both bulletproof and intelligent.” Well, ‘bionic‘ means biologically-inspired engineering and ‘intelligent‘ usually means there’s some kind of computing capability in the product. This jacket, which is the first step towards the company’s goal, is not bionic, bulletproof, or intelligent. Nonetheless, it represents a very interesting science experiment in which you, the consumer, are part of step two in the company’s R&D (research and development).

Onto Vollebak’s graphene jacket,

Courtesy: Vollebak

From an August 14, 2018 article by Jesus Diaz for Fast Company,

Graphene is the thinnest possible form of graphite, which you can find in your everyday pencil. It’s purely bi-dimensional, a single layer of carbon atoms that has unbelievable properties that have long threatened to revolutionize everything from aerospace engineering to medicine. …

Despite its immense promise, graphene still hasn’t found much use in consumer products, thanks to the fact that it’s hard to manipulate and manufacture in industrial quantities. The process of developing Vollebak’s jacket, according to the company’s cofounders, brothers Steve and Nick Tidball, took years of intensive research, during which the company worked with the same material scientists who built Michael Phelps’ 2008 Olympic Speedo swimsuit (which was famously banned for shattering records at the event).

The jacket is made out of a two-sided material, which the company invented during the extensive R&D process. The graphene side looks gunmetal gray, while the flipside appears matte black. To create it, the scientists turned raw graphite into something called graphene “nanoplatelets,” which are stacks of graphene that were then blended with polyurethane to create a membrane. That, in turn, is bonded to nylon to form the other side of the material, which Vollebak says alters the properties of the nylon itself. “Adding graphene to the nylon fundamentally changes its mechanical and chemical properties–a nylon fabric that couldn’t naturally conduct heat or energy, for instance, now can,” the company claims.

The company says that it’s reversible so you can enjoy graphene’s properties in different ways as the material interacts with either your skin or the world around you. “As physicists at the Max Planck Institute revealed, graphene challenges the fundamental laws of heat conduction, which means your jacket will not only conduct the heat from your body around itself to equalize your skin temperature and increase it, but the jacket can also theoretically store an unlimited amount of heat, which means it can work like a radiator,” Tidball explains.

He means it literally. You can leave the jacket out in the sun, or on another source of warmth, as it absorbs heat. Then, the company explains on its website, “If you then turn it inside out and wear the graphene next to your skin, it acts like a radiator, retaining its heat and spreading it around your body. The effect can be visibly demonstrated by placing your hand on the fabric, taking it away and then shooting the jacket with a thermal imaging camera. The heat of the handprint stays long after the hand has left.”

There’s a lot more to the article although it does feature some hype and I’m not sure I believe Diaz’s claim (August 14, 2018 article) that ‘graphene-based’ hair dye is perfectly safe (Note: A link has been removed),

Graphene is the thinnest possible form of graphite, which you can find in your everyday pencil. It’s purely bi-dimensional, a single layer of carbon atoms that has unbelievable properties that will one day revolutionize everything from aerospace engineering to medicine. Its diverse uses are seemingly endless: It can stop a bullet if you add enough layers. It can change the color of your hair with no adverse effects. [emphasis mine] It can turn the walls of your home into a giant fire detector. “It’s so strong and so stretchy that the fibers of a spider web coated in graphene could catch a falling plane,” as Vollebak puts it in its marketing materials.

Not unless things have changed greatly since March 2018. My August 2, 2018 posting featured the graphene-based hair dye announcement from March 2018 and a cautionary note from Dr. Andrew Maynard (scroll down about 50% of the way for a longer excerpt of Maynard’s comments),

Northwestern University’s press release proudly announced, “Graphene finds new application as nontoxic, anti-static hair dye.” The announcement spawned headlines like “Enough with the toxic hair dyes. We could use graphene instead,” and “’Miracle material’ graphene used to create the ultimate hair dye.”

From these headlines, you might be forgiven for getting the idea that the safety of graphene-based hair dyes is a done deal. Yet having studied the potential health and environmental impacts of engineered nanomaterials for more years than I care to remember, I find such overly optimistic pronouncements worrying – especially when they’re not backed up by clear evidence.

These studies need to be approached with care, as the precise risks of graphene exposure will depend on how the material is used, how exposure occurs and how much of it is encountered. Yet there’s sufficient evidence to suggest that this substance should be used with caution – especially where there’s a high chance of exposure or that it could be released into the environment.

The full text of Dr. Maynard’s comments about graphene hair dyes and risk can be found here.

Bearing in mind that graphene-based hair dye is an entirely different class of product from the jacket, I wouldn’t necessarily dismiss the risks; I would like to know what kind of risk assessment and safety testing has been done. Due to their understandable enthusiasm, the brothers Tidball have focused all their marketing on the benefits and the opportunity for the consumer to test their product (from the graphene jacket product webpage),

While it’s completely invisible and only a single atom thick, graphene is the lightest, strongest, most conductive material ever discovered, and has the same potential to change life on Earth as stone, bronze and iron once did. But it remains difficult to work with, extremely expensive to produce at scale, and lives mostly in pioneering research labs. So following in the footsteps of the scientists who discovered it through their own highly speculative experiments, we’re releasing graphene-coated jackets into the world as experimental prototypes. Our aim is to open up our R&D and accelerate discovery by getting graphene out of the lab and into the field so that we can harness the collective power of early adopters as a test group. No-one yet knows the true limits of what graphene can do, so the first edition of the Graphene Jacket is fully reversible with one side coated in graphene and the other side not. If you’d like to take part in the next stage of this supermaterial’s history, the experiment is now open. You can now buy it, test it and tell us about it. [emphasis mine]

How maverick experiments won the Nobel Prize

While graphene’s existence was first theorised in the 1940s, it wasn’t until 2004 that two maverick scientists, Andre Geim and Konstantin Novoselov, were able to isolate and test it. Through highly speculative and unfunded experimentation known as their ‘Friday night experiments,’ they peeled layer after layer off a shaving of graphite using Scotch tape until they produced a sample of graphene just one atom thick. After similarly leftfield thinking won Geim the 2000 Ig Nobel prize for levitating frogs using magnets, the pair won the Nobel prize in 2010 for the isolation of graphene.

Should you be interested in beta-testing the jacket, it will cost you $695 (presumably USD); order here. One last thing, Vollebak is based in the UK.

Graphene skinned plane

An August 14, 2018 news item (also published as an August 1, 2018 Haydale press release) by Sue Keighley on Azonano heralds a new technology for airplanes,

Haydale, (AIM: HAYD), the global advanced materials group, notes the announcement made yesterday from the University of Central Lancashire (UCLAN) about the recent unveiling of the world’s first graphene skinned plane at the internationally renowned Farnborough air show.

The prepreg material, developed by Haydale, has potential value for fuselage and wing surfaces in larger scale aero and space applications especially for the rapidly expanding drone market and, in the longer term, the commercial aerospace sector. By incorporating functionalised nanoparticles into epoxy resins, the electrical conductivity of fibre-reinforced composites has been significantly improved for lightning-strike protection, thereby achieving substantial weight saving and removing some manufacturing complexities.

Before getting to the photo, here’s a definition for pre-preg from its Wikipedia entry (Note: Links have been removed),

Pre-preg is “pre-impregnated” composite fibers where a thermoset polymer matrix material, such as epoxy, or a thermoplastic resin is already present. The fibers often take the form of a weave and the matrix is used to bond them together and to other components during manufacture.

Haydale has supplied graphene enhanced prepreg material for Juno, a three-metre wide graphene-enhanced composite skinned aircraft, that was revealed as part of the ‘Futures Day’ at Farnborough Air Show 2018. [downloaded from https://www.azonano.com/news.aspx?newsID=36298]

A July 31, 2018 University of Central Lancashire (UCLan) press release provides a tiny bit more (pun intended) detail,

The University of Central Lancashire (UCLan) has unveiled the world’s first graphene skinned plane at an internationally renowned air show.

Juno, a three-and-a-half-metre wide graphene skinned aircraft, was revealed on the North West Aerospace Alliance (NWAA) stand as part of the ‘Futures Day’ at Farnborough Air Show 2018.

The University’s aerospace engineering team has worked in partnership with the Sheffield Advanced Manufacturing Research Centre (AMRC), the University of Manchester’s National Graphene Institute (NGI), Haydale Graphene Industries (Haydale) and a range of other businesses to develop the unmanned aerial vehicle (UAV), which also includes graphene batteries and 3D printed parts.

Billy Beggs, UCLan’s Engineering Innovation Manager, said: “The industry reaction to Juno at Farnborough was superb with many positive comments about the work we’re doing. Having Juno at one of the world’s biggest air shows demonstrates the great strides we’re making in leading a programme to accelerate the uptake of graphene and other nano-materials into industry.

“The programme supports the objectives of the UK Industrial Strategy and the University’s Engineering Innovation Centre (EIC) to increase industry relevant research and applications linked to key local specialisms. Given that Lancashire represents the fourth largest aerospace cluster in the world, there is perhaps no better place to be developing next generation technologies for the UK aerospace industry.”

Previous graphene developments at UCLan have included the world’s first flight of a graphene skinned wing and the launch of a specially designed graphene-enhanced capsule into near space using high altitude balloons.

UCLan engineering students have been involved in the hands-on project, helping build Juno on the Preston Campus.

Haydale supplied much of the material and all the graphene used in the aircraft. Ray Gibbs, Chief Executive Officer, said: “We are delighted to be part of the project team. Juno has highlighted the capability and benefit of using graphene to meet key issues faced by the market, such as reducing weight to increase range and payload, defeating lightning strike and protecting aircraft skins against ice build-up.”

David Bailey, Chief Executive of the North West Aerospace Alliance, added: “The North West aerospace cluster contributes over £7 billion to the UK economy, accounting for one quarter of the UK aerospace turnover. It is essential that the sector continues to develop next generation technologies so that it can help the UK retain its competitive advantage. It has been a pleasure to support the Engineering Innovation Centre team at the University in developing the world’s first full graphene skinned aircraft.”

The Juno project team represents the latest phase in a long-term strategic partnership between the University and a range of organisations. The partnership is expected to go from strength to strength following the opening of the £32m EIC facility in February 2019.

The next step is to fly Juno and conduct further tests over the next two months.

Next item, a new carbon material.

Schwarzite

I love watching this gif of a schwarzite,

The three-dimensional cage structure of a schwarzite that was formed inside the pores of a zeolite. (Graphics by Yongjin Lee and Efrem Braun)

An August 13, 2018 news item on Nanowerk announces the new carbon structure,

The discovery of buckyballs [also known as fullerenes, C60, or buckminsterfullerenes] surprised and delighted chemists in the 1980s, nanotubes jazzed physicists in the 1990s, and graphene charged up materials scientists in the 2000s, but one nanoscale carbon structure – a negatively curved surface called a schwarzite – has eluded everyone. Until now.

University of California, Berkeley [UC Berkeley], chemists have proved that three carbon structures recently created by scientists in South Korea and Japan are in fact the long-sought schwarzites, which researchers predict will have unique electrical and storage properties like those now being discovered in buckminsterfullerenes (buckyballs or fullerenes for short), nanotubes and graphene.

An August 13, 2018 UC Berkeley news release by Robert Sanders, which originated the news item, describes how the Berkeley scientists and the members of their international collaboration from Germany, Switzerland, Russia, and Italy have contributed to the current state of schwarzite research,

The new structures were built inside the pores of zeolites, crystalline forms of silicon dioxide – sand – more commonly used as water softeners in laundry detergents and to catalytically crack petroleum into gasoline. Called zeolite-templated carbons (ZTC), the structures were being investigated for possible interesting properties, though the creators were unaware of their identity as schwarzites, which theoretical chemists have worked on for decades.

Based on this theoretical work, chemists predict that schwarzites will have unique electronic, magnetic and optical properties that would make them useful as supercapacitors, battery electrodes and catalysts, and with large internal spaces ideal for gas storage and separation.

UC Berkeley postdoctoral fellow Efrem Braun and his colleagues identified these ZTC materials as schwarzites based on their negative curvature, and developed a way to predict which zeolites can be used to make schwarzites and which can’t.

“We now have the recipe for how to make these structures, which is important because, if we can make them, we can explore their behavior, which we are working hard to do now,” said Berend Smit, an adjunct professor of chemical and biomolecular engineering at UC Berkeley and an expert on porous materials such as zeolites and metal-organic frameworks.

Smit, the paper’s corresponding author, Braun and their colleagues in Switzerland, China, Germany, Italy and Russia will report their discovery this week in the journal Proceedings of the National Academy of Sciences. Smit is also a faculty scientist at Lawrence Berkeley National Laboratory.

Playing with carbon

Diamond and graphite are well-known three-dimensional crystalline arrangements of pure carbon, but carbon atoms can also form two-dimensional “crystals” — hexagonal arrangements patterned like chicken wire. Graphene is one such arrangement: a flat sheet of carbon atoms that is not only the strongest material on Earth, but also has a high electrical conductivity that makes it a promising component of electronic devices.

schwarzite carbon cage

The cage structure of a schwarzite that was formed inside the pores of a zeolite. The zeolite is subsequently dissolved to release the new material. (Graphics by Yongjin Lee and Efrem Braun)

Graphene sheets can be wadded up to form soccer ball-shaped fullerenes – spherical carbon cages that can store molecules and are being used today to deliver drugs and genes into the body. Rolling graphene into a cylinder yields fullerenes called nanotubes, which are being explored today as highly conductive wires in electronics and storage vessels for gases like hydrogen and carbon dioxide. All of these are submicroscopic, 10,000 times smaller than the width of a human hair.

To date, however, only positively curved fullerenes and graphene, which has zero curvature, have been synthesized, feats rewarded by Nobel Prizes in 1996 and 2010, respectively.

In the 1880s, German physicist Hermann Schwarz investigated negatively curved structures that resemble soap-bubble surfaces, and when theoretical work on carbon cage molecules ramped up in the 1990s, Schwarz’s name became attached to the hypothetical negatively curved carbon sheets.

“The experimental validation of schwarzites thus completes the triumvirate of possible curvatures to graphene: positively curved, flat, and now negatively curved,” Braun added.

Minimize me

Like soap bubbles on wire frames, schwarzites are topologically minimal surfaces. When made inside a zeolite, a vapor of carbon-containing molecules is injected, allowing the carbon to assemble into a two-dimensional graphene-like sheet lining the walls of the pores in the zeolite. The surface is stretched tautly to minimize its area, which makes all the surfaces curve negatively, like a saddle. The zeolite is then dissolved, leaving behind the schwarzite.
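For readers who like to poke at the geometry, the Schwarz P surface (one of Hermann Schwarz’s triply periodic minimal surfaces mentioned above) has a commonly used nodal approximation: the zero set of cos x + cos y + cos z. The little Python sketch below is my own illustration, not something from the paper; it just confirms a point on the surface and the surface’s triple periodicity.

```python
import math

def schwarz_p(x, y, z):
    """Nodal (level-set) approximation of the Schwarz P surface.

    The surface is the zero set of cos x + cos y + cos z. It is triply
    periodic (period 2*pi in each direction) and approximates Schwarz's
    negatively curved minimal surface, the kind of geometry the
    schwarzites described above realize in carbon.
    """
    return math.cos(x) + math.cos(y) + math.cos(z)

# A point on the surface: all three cosines vanish at pi/2.
p = (math.pi / 2, math.pi / 2, math.pi / 2)
print(abs(schwarz_p(*p)) < 1e-12)  # True

# Triple periodicity: shifting by 2*pi along any axis leaves the value unchanged.
x, y, z = 0.3, 1.1, 2.5
print(abs(schwarz_p(x + 2 * math.pi, y, z) - schwarz_p(x, y, z)) < 1e-12)  # True
```

The real schwarzites are atomic realizations of surfaces like this one; the nodal formula is only a convenient stand-in for the true minimal surface.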

soap bubble schwarzite structure

A computer-rendered negatively curved soap bubble that exhibits the geometry of a carbon schwarzite. (Felix Knöppel image)

“These negatively-curved carbons have been very hard to synthesize on their own, but it turns out that you can grow the carbon film catalytically at the surface of a zeolite,” Braun said. “But the schwarzites synthesized to date have been made by choosing zeolite templates through trial and error. We provide very simple instructions you can follow to rationally make schwarzites and we show that, by choosing the right zeolite, you can tune schwarzites to optimize the properties you want.”

Researchers should be able to pack unusually large amounts of electrical charge into schwarzites, which would make them better capacitors than conventional ones used today in electronics. Their large interior volume would also allow storage of atoms and molecules, which is also being explored with fullerenes and nanotubes. And their large surface area, equivalent to the surface areas of the zeolites they’re grown in, could make them as versatile as zeolites for catalyzing reactions in the petroleum and natural gas industries.

Braun modeled ZTC structures computationally using the known structures of zeolites, and worked with topological mathematician Senja Barthel of the École Polytechnique Fédérale de Lausanne in Sion, Switzerland, to determine which of the minimal surfaces the structures resembled.

The team determined that, of the approximately 200 zeolites created to date, only 15 can be used as a template to make schwarzites, and only three of them have been used to date to produce schwarzite ZTCs. Over a million zeolite structures have been predicted, however, so there could be many more possible schwarzite carbon structures made using the zeolite-templating method.

Other co-authors of the paper are Yongjin Lee, Seyed Mohamad Moosavi and Barthel of the École Polytechnique Fédérale de Lausanne, Rocio Mercado of UC Berkeley, Igor Baburin of the Technische Universität Dresden in Germany and Davide Proserpio of the Università degli Studi di Milano in Italy and Samara State Technical University in Russia.

Here’s a link to and a citation for the paper,

Generating carbon schwarzites via zeolite-templating by Efrem Braun, Yongjin Lee, Seyed Mohamad Moosavi, Senja Barthel, Rocio Mercado, Igor A. Baburin, Davide M. Proserpio, and Berend Smit. PNAS, published ahead of print August 14, 2018. https://doi.org/10.1073/pnas.1805062115

This paper appears to be open access.

Build nanoparticles using techniques from the ancient Egyptians

Great Pyramid of Giza and Sphinx [downloaded from http://news.ifmo.ru/en/science/photonics/news/7731/]

Russian and German scientists have taken a closer look at the Great Pyramid as they investigate better ways of designing sensors and solar cells. From a July 30, 2018 news item on Nanowerk,

An international research group applied methods of theoretical physics to investigate the electromagnetic response of the Great Pyramid to radio waves. Scientists predicted that under resonance conditions the pyramid can concentrate electromagnetic energy in its internal chambers and under the base. The research group plans to use these theoretical results to design nanoparticles capable of reproducing similar effects in the optical range. Such nanoparticles may be used, for example, to develop sensors and highly efficient solar cells.

A July 30, 2018 ITMO University press release, which originated the news item,  expands on the theme,

While Egyptian pyramids are surrounded by many myths and legends, we have little scientifically reliable information about their physical properties. As it turns out, sometimes this information proves to be more fascinating than any fiction. This idea found confirmation in a new joint study undertaken by scientists from ITMO University and the Laser Zentrum Hannover. The physicists took an interest in how the Great Pyramid would interact with electromagnetic waves of a proportional, or resonant, length. Calculations showed that in the resonant state the pyramid can concentrate electromagnetic energy in its internal chambers as well as under its base, where the third unfinished chamber is located.

These conclusions were derived on the basis of numerical modeling and analytical methods of physics. The researchers first estimated that resonances in the pyramid can be induced by radio waves with a length ranging from 200 to 600 meters. Then they made a model of the electromagnetic response of the pyramid and calculated the extinction cross section. This value helps to estimate which part of the incident wave energy can be scattered or absorbed by the pyramid under resonant conditions. Finally, for the same conditions, the scientists obtained the electromagnetic fields distribution inside the pyramid.
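As a rough plausibility check (mine, not the paper’s full multipole calculation), one can compare the pyramid’s size with those wavelengths using the standard size parameter from scattering theory, x = 2πa/λ; resonant (Mie-type) scattering occurs when x is of order one. The dimensions below are approximate values I’ve plugged in for illustration.

```python
import math

# Approximate dimensions of the Great Pyramid (metres); these are my own
# rough inputs, not numbers taken from the paper.
base = 230.0
# Characteristic radius: half the base diagonal, a crude effective size.
a = base * math.sqrt(2) / 2

for wavelength in (200.0, 400.0, 600.0):
    # Size parameter used in scattering theory: x = 2*pi*a / lambda.
    x = 2 * math.pi * a / wavelength
    print(f"lambda = {wavelength:5.0f} m -> size parameter x = {x:.2f}")
```

Size parameters between roughly 1 and 5 across the 200 to 600 metre range put the pyramid squarely in the resonant scattering regime, consistent with the researchers’ estimate.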

3D model of the pyramid. Credit: cheops.SU

In order to explain the results, the scientists conducted a multipole analysis. This method is widely used in physics to study the interaction between a complex object and an electromagnetic field. The object scattering the field is replaced by a set of simpler sources of radiation: multipoles. The combined radiation of these multipoles coincides with the field scattered by the entire object. Therefore, by knowing the type of each multipole, it is possible to predict and explain the distribution and configuration of the scattered fields in the whole system.

The Great Pyramid attracted the researchers’ attention while they were studying the interaction between light and dielectric nanoparticles. The scattering of light by nanoparticles depends on their size, shape, and refractive index of the source material. By varying these parameters, it is possible to determine the resonance scattering regimes and use them to develop devices for controlling light at the nanoscale.

“Egyptian pyramids have always attracted great attention. We as scientists were interested in them as well, and so we decided to look at the Great Pyramid as a particle resonantly dissipating radio waves. Due to the lack of information about the physical properties of the pyramid, we had to make some assumptions. For example, we assumed that there are no unknown cavities inside, and the building material has the properties of an ordinary limestone and is evenly distributed in and out of the pyramid. With these assumptions, we obtained interesting results that can have important practical applications,” says Andrey Evlyukhin, DSc, scientific supervisor and coordinator of the research.

Now the scientists plan to use the results to reproduce similar effects at the nanoscale.

Polina Kapitanova

“By choosing a material with suitable electromagnetic properties, we can obtain pyramidal nanoparticles with a potential for practical application in nanosensors and effective solar cells,” says Polina Kapitanova, PhD, associate at the Faculty of Physics and Engineering of ITMO University.

The research was supported by the Russian Science Foundation and the Deutsche Forschungsgemeinschaft (grants № 17-79-20379 and №16-12-10287).

Here’s a link to and a citation for the paper,

Electromagnetic properties of the Great Pyramid: First multipole resonances and energy concentration by Mikhail Balezin, Kseniia V. Baryshnikova, Polina Kapitanova, and Andrey B. Evlyukhin. Journal of Applied Physics 124, 034903 (2018). https://doi.org/10.1063/1.5026556 Published online 20 July 2018

This paper is behind a paywall.

Artificial intelligence (AI) brings together International Telecommunications Union (ITU) and World Health Organization (WHO) and AI outperforms animal testing

Following on my May 11, 2018 posting about the International Telecommunications Union (ITU) and the 2018 AI for Good Global Summit in mid-May, there’s an announcement. My other bit of AI news concerns animal testing.

Leveraging the power of AI for health

A July 24, 2018 ITU press release (a shorter version was received via email) announces a joint initiative focused on improving health,

Two United Nations specialized agencies are joining forces to expand the use of artificial intelligence (AI) in the health sector to a global scale, and to leverage the power of AI to advance health for all worldwide. The International Telecommunication Union (ITU) and the World Health Organization (WHO) will work together through the newly established ITU Focus Group on AI for Health to develop an international “AI for health” standards framework and to identify use cases of AI in the health sector that can be scaled-up for global impact. The group is open to all interested parties.

“AI could help patients to assess their symptoms, enable medical professionals in underserved areas to focus on critical cases, and save great numbers of lives in emergencies by delivering medical diagnoses to hospitals before patients arrive to be treated,” said ITU Secretary-General Houlin Zhao. “ITU and WHO plan to ensure that such capabilities are available worldwide for the benefit of everyone, everywhere.”

The demand for such a platform was first identified by participants of the second AI for Good Global Summit held in Geneva, 15-17 May 2018. During the summit, AI and the health sector were recognized as a very promising combination, and it was announced that AI-powered technologies such as skin disease recognition and diagnostic applications based on symptom questions could be deployed on six billion smartphones by 2021.

The ITU Focus Group on AI for Health is coordinated through ITU’s Telecommunications Standardization Sector – which works with ITU’s 193 Member States and more than 800 industry and academic members to establish global standards for emerging ICT innovations. It will lead an intensive two-year analysis of international standardization opportunities towards delivery of a benchmarking framework of international standards and recommendations by ITU and WHO for the use of AI in the health sector.

“I believe the subject of AI for health is both important and useful for advancing health for all,” said WHO Director-General Tedros Adhanom Ghebreyesus.

The ITU Focus Group on AI for Health will also engage researchers, engineers, practitioners, entrepreneurs and policy makers to develop guidance documents for national administrations, to steer the creation of policies that ensure the safe, appropriate use of AI in the health sector.

“1.3 billion people have a mobile phone and we can use this technology to provide AI-powered health data analytics to people with limited or no access to medical care. AI can enhance health by improving medical diagnostics and associated health intervention decisions on a global scale,” said Thomas Wiegand, ITU Focus Group on AI for Health Chairman, and Executive Director of the Fraunhofer Heinrich Hertz Institute, as well as professor at TU Berlin.

He added, “The health sector is in many countries among the largest economic sectors or one of the fastest-growing, signalling a particularly timely need for international standardization of the convergence of AI and health.”

Data analytics are certain to form a large part of the ITU focus group’s work. AI systems are proving increasingly adept at interpreting laboratory results and medical imagery and extracting diagnostically relevant information from text or complex sensor streams.

As part of this, the ITU Focus Group for AI for Health will also produce an assessment framework to standardize the evaluation and validation of AI algorithms — including the identification of structured and normalized data to train AI algorithms. It will develop open benchmarks with the aim of these becoming international standards.

The ITU Focus Group for AI for Health will report to the ITU standardization expert group for multimedia, Study Group 16.

I got curious about Study Group 16 (from the Study Group 16 at a glance webpage),

Study Group 16 leads ITU’s standardization work on multimedia coding, systems and applications, including the coordination of related studies across the various ITU-T SGs. It is also the lead study group on ubiquitous and Internet of Things (IoT) applications; telecommunication/ICT accessibility for persons with disabilities; intelligent transport system (ITS) communications; e-health; and Internet Protocol television (IPTV).

Multimedia is at the core of the most recent advances in information and communication technologies (ICTs) – especially when we consider that most innovation today is agnostic of the transport and network layers, focusing rather on the higher OSI model layers.

SG16 is active in all aspects of multimedia standardization, including terminals, architecture, protocols, security, mobility, interworking and quality of service (QoS). It focuses its studies on telepresence and conferencing systems; IPTV; digital signage; speech, audio and visual coding; network signal processing; PSTN modems and interfaces; facsimile terminals; and ICT accessibility.

I wonder which group deals with artificial intelligence and, possibly, robots.

Chemical testing without animals

Thomas Hartung, professor of environmental health and engineering at Johns Hopkins University (US), describes in his July 25, 2018 essay (written for The Conversation) on phys.org the situation where chemical testing is concerned,

Most consumers would be dismayed with how little we know about the majority of chemicals. Only 3 percent of industrial chemicals – mostly drugs and pesticides – are comprehensively tested. Most of the 80,000 to 140,000 chemicals in consumer products have not been tested at all or just examined superficially to see what harm they may do locally, at the site of contact and at extremely high doses.

I am a physician and former head of the European Center for the Validation of Alternative Methods of the European Commission (2002-2008), and I am dedicated to finding faster, cheaper and more accurate methods of testing the safety of chemicals. To that end, I now lead a new program at Johns Hopkins University to revamp the safety sciences.

As part of this effort, we have now developed a computer method of testing chemicals that could save more than US$1 billion annually and more than 2 million animals. Especially in times where the government is rolling back regulations on the chemical industry, new methods to identify dangerous substances are critical for human and environmental health.

Having written on the topic of alternatives to animal testing on a number of occasions (my December 26, 2014 posting provides an overview of sorts), I was particularly interested to see this in Hartung’s July 25, 2018 essay on The Conversation (Note: Links have been removed),

Following the vision of Toxicology for the 21st Century, a movement led by U.S. agencies to revamp safety testing, important work was carried out by my Ph.D. student Tom Luechtefeld at the Johns Hopkins Center for Alternatives to Animal Testing. Teaming up with Underwriters Laboratories, we have now leveraged an expanded database and machine learning to predict toxic properties. As we report in the journal Toxicological Sciences, we developed a novel algorithm and database for analyzing chemicals and determining their toxicity – what we call read-across structure activity relationship, RASAR.

This graphic reveals a small part of the chemical universe. Each dot represents a different chemical. Chemicals that are close together have similar structures and often properties. Thomas Hartung, CC BY-SA

To do this, we first created an enormous database with 10 million chemical structures by adding more public databases filled with chemical data, which, if you crunch the numbers, represent 50 trillion pairs of chemicals. A supercomputer then created a map of the chemical universe, in which chemicals are positioned close together if they share many structures in common and far apart where they don’t. Most of the time, any molecule close to a toxic molecule is also dangerous; the more toxic substances sit nearby, and the farther away the harmless ones are, the more likely a substance is hazardous. Any substance can now be analyzed by placing it into this map.
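The “50 trillion pairs” figure checks out: the number of unordered pairs among n items is n(n-1)/2, which for 10 million chemicals gives

```python
# Quick arithmetic check of the figure quoted above: the number of
# unordered pairs among 10 million chemicals.
n = 10_000_000
pairs = n * (n - 1) // 2
print(pairs)  # 49999995000000, i.e. roughly the 50 trillion quoted
```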

If this sounds simple, it’s not. It requires half a billion mathematical calculations per chemical to see where it fits. The chemical neighborhood focuses on 74 characteristics which are used to predict the properties of a substance. Using the properties of the neighboring chemicals, we can predict whether an untested chemical is hazardous. For example, for predicting whether a chemical will cause eye irritation, our computer program not only uses information from similar chemicals, which were tested on rabbit eyes, but also information for skin irritation. This is because what typically irritates the skin also harms the eye.
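The neighborhood idea can be sketched in a few lines of Python. Everything here (the tiny feature sets standing in for chemical fingerprints, the Tanimoto similarity measure, the majority vote, the made-up data) is an illustrative toy of read-across in general, not the actual RASAR algorithm or its 74-characteristic feature space.

```python
# A toy sketch of read-across: predict a chemical's hazard from its
# nearest neighbours in a similarity space.

def tanimoto(a, b):
    """Tanimoto similarity between two binary fingerprints (sets of features)."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical training data: fingerprint -> known toxicity label.
known = [
    ({1, 2, 3, 4}, "toxic"),
    ({1, 2, 3, 5}, "toxic"),
    ({7, 8, 9},    "safe"),
    ({7, 8, 10},   "safe"),
]

def read_across(query, k=3):
    """Label a query chemical by majority vote of its k most similar neighbours."""
    ranked = sorted(known, key=lambda kv: tanimoto(query, kv[0]), reverse=True)
    votes = [label for _, label in ranked[:k]]
    return max(set(votes), key=votes.count)

# A query fingerprint sitting close to the toxic cluster is flagged toxic.
print(read_across({1, 2, 3, 6}))  # toxic
```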

How well does the computer identify toxic chemicals?

This method will be used for new untested substances. However, if you do this for chemicals for which you actually have data, and compare prediction with reality, you can test how well this prediction works. We did this for 48,000 chemicals that were well characterized for at least one aspect of toxicity, and we found the toxic substances in 89 percent of cases.

This is clearly more accurate than the corresponding animal tests, which only yield the correct answer 70 percent of the time. The RASAR shall now be formally validated by an interagency committee of 16 U.S. agencies, including the EPA [Environmental Protection Agency] and FDA [Food and Drug Administration], that will challenge our computer program with chemicals for which the outcome is unknown. This is a prerequisite for acceptance and use in many countries and industries.

The potential is enormous: The RASAR approach is in essence based on chemical data that was registered for the 2010 and 2013 REACH [Registration, Evaluation, Authorizations and Restriction of Chemicals] deadlines [in Europe]. If our estimates are correct and chemical producers had not registered chemicals after 2013, and had instead used our RASAR program, we would have saved 2.8 million animals and $490 million in testing costs – and received more reliable data. We have to admit that this is a very theoretical calculation, but it shows how valuable this approach could be for other regulatory programs and safety assessments.

In the future, a chemist could consult RASAR before even synthesizing their next chemical to check whether the new structure will have problems. Or a product developer could pick alternatives to toxic substances to use in their products. This is a powerful technology, which is only starting to show all its potential.

It’s been my experience that claims of having led a movement (Toxicology for the 21st Century) are often contested, with many others competing for the title of ‘leader’ or ‘first’. That said, this RASAR approach seems very exciting, especially in light of the skepticism about limiting and/or making animal testing unnecessary noted in my December 26, 2014 posting; that skepticism came from someone I thought knew better.

Here’s a link to and a citation for the paper mentioned in Hartung’s essay,

Machine learning of toxicological big data enables read-across structure activity relationships (RASAR) outperforming animal test reproducibility by Thomas Luechtefeld, Dan Marsh, Craig Rowlands, Thomas Hartung. Toxicological Sciences, kfy152, https://doi.org/10.1093/toxsci/kfy152 Published: 11 July 2018

This paper is open access.

Quantum back action and devil’s play

I always appreciate a reference to James Clerk Maxwell’s demon thought experiment (you can find out about it in the Maxwell’s demon Wikipedia entry). This time it comes from physicist Kater Murch in a July 23, 2018 Washington University in St. Louis (WUSTL) news release (published July 25, 2018 on EurekAlert) written by Brandie Jefferson (offering a good explanation of the thought experiment and more),

Thermodynamics is one of the most human of scientific enterprises, according to Kater Murch, associate professor of physics in Arts & Sciences at Washington University in St. Louis.

“It has to do with our fascination of fire and our laziness,” he said. “How can we get fire” — or heat — “to do work for us?”

Now, Murch and colleagues have taken that most human enterprise down to the intangible quantum scale — that of ultra low temperatures and microscopic systems — and discovered that, as in the macroscopic world, it is possible to use information to extract work.

There is a catch, though: Some information may be lost in the process.

“We’ve experimentally confirmed the connection between information in the classical case and the quantum case,” Murch said, “and we’re seeing this new effect of information loss.”

The results were published in the July 20 [2018] issue of Physical Review Letters.

The international team included Eric Lutz of the University of Stuttgart; J. J. Alonso of the University of Erlangen-Nuremberg; Alessandro Romito of Lancaster University; and Mahdi Naghiloo, a Washington University graduate research assistant in physics.

That we can get energy from information on a macroscopic scale was most famously illustrated in a thought experiment known as Maxwell’s Demon. [emphasis mine] The “demon” presides over a box filled with molecules. The box is divided in half by a wall with a door. If the demon knows the speed and direction of all of the molecules, it can open the door when a fast-moving molecule is moving from the left half of the box to the right side, allowing it to pass. It can do the same for slow particles moving in the opposite direction, opening the door when a slow-moving molecule is approaching from the right, headed left.

After a while, all of the quickly-moving molecules are on the right side of the box. Faster motion corresponds to higher temperature. In this way, the demon has created a temperature imbalance, where one side of the box is hotter. That temperature imbalance can be turned into work — to push on a piston as in a steam engine, for instance. At first the thought experiment seemed to show that it was possible to create a temperature difference without doing any work, and since temperature differences allow you to extract work, one could build a perpetual motion machine — a violation of the second law of thermodynamics.
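The sorting step of the thought experiment is easy to simulate. The sketch below is my own toy model (arbitrary energy distribution, threshold, and molecule counts), not anything from the study; it just shows the demon’s door producing a hot side and a cold side.

```python
import random

# A toy version of Maxwell's demon as described above: the demon lets a
# fast molecule cross left-to-right and a slow one cross right-to-left,
# sorting the gas into a hot side and a cold side.
random.seed(0)

left = [random.gauss(0, 1) ** 2 for _ in range(500)]   # molecular kinetic energies
right = [random.gauss(0, 1) ** 2 for _ in range(500)]
threshold = 1.0  # the demon's cut between "fast" and "slow"

for _ in range(200):
    fast_left = [e for e in left if e > threshold]
    slow_right = [e for e in right if e <= threshold]
    if fast_left:                      # open the door: a fast molecule goes right
        left.remove(fast_left[0])
        right.append(fast_left[0])
    if slow_right:                     # and a slow molecule comes back left
        right.remove(slow_right[0])
        left.append(slow_right[0])

mean = lambda xs: sum(xs) / len(xs)
# After sorting, the right side is hotter (higher mean energy) than the left.
print(mean(right) > mean(left))  # True
```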

“Eventually, scientists realized that there’s something about the information that the demon has about the molecules,” Murch said. “It has a physical quality like heat and work and energy.”

His team wanted to know if it would be possible to use information to extract work in this way on a quantum scale, too, but not by sorting fast and slow molecules. If a particle is in an excited state, they could extract work by moving it to a ground state. (If it was in a ground state, they wouldn’t do anything and wouldn’t expend any work).

But they wanted to know what would happen if the quantum particles were in an excited state and a ground state at the same time, analogous to being fast and slow at the same time. In quantum physics, this is known as a superposition.

“Can you get work from information about a superposition of energy states?” Murch asked. “That’s what we wanted to find out.”

There’s a problem, though. On a quantum scale, getting information about particles can be a bit … tricky.

“Every time you measure the system, it changes that system,” Murch said. And if they measured the particle to find out exactly what state it was in, it would revert to one of two states: excited, or ground.

This effect is called quantum backaction. To get around it, when looking at the system, researchers (who were the “demons”) didn’t take a long, hard look at their particle. Instead, they took what was called a “weak observation.” It still influenced the state of the superposition, but not enough to move it all the way to an excited state or a ground state; it was still in a superposition of energy states. This observation was enough, though, to allow the researchers to track, with fairly high accuracy, exactly which superposition the particle was in — and this is important, because the way the work is extracted from the particle depends on what superposition state it is in.
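One way to picture a “weak observation” is as a Bayesian update from a very noisy readout: each measurement shifts the observer’s confidence a little without forcing the state to a definite answer. The Gaussian readout model, the noise level, and the readout values below are my own illustrative assumptions, not the experiment’s actual calibration.

```python
import math

# Toy Bayesian picture of repeated weak measurements: each noisy readout
# nudges the observer's estimate instead of collapsing the state outright.

sigma = 2.0          # large readout noise -> each individual measurement is "weak"
p_excited = 0.5      # start maximally uncertain (an even superposition)

# Hypothetical noisy readouts; +1 would be a clean "excited" signal, -1 "ground".
readouts = [1.2, -0.4, 2.0, 0.3, -1.1, 1.5]

for r in readouts:
    # Likelihood of this readout if the state is excited (+1) or ground (-1).
    like_e = math.exp(-(r - 1) ** 2 / (2 * sigma ** 2))
    like_g = math.exp(-(r + 1) ** 2 / (2 * sigma ** 2))
    # Bayes' rule: update the probability that the state is excited.
    p_excited = (like_e * p_excited) / (like_e * p_excited + like_g * (1 - p_excited))

# No single readout was decisive, but together they favour "excited".
print(round(p_excited, 2))  # 0.85
```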

To get information, even using the weak observation method, the researchers still had to take a peek at the particle, which meant they needed light. So they sent some photons in, and observed the photons that came back.

“But the demon misses some photons,” Murch said. “It only gets about half. The other half are lost.” But — and this is the key — even though the researchers didn’t see the other half of the photons, those photons still interacted with the system, which means they still had an effect on it. The researchers had no way of knowing what that effect was.

They took a weak measurement and got some information, but because of quantum backaction, they might end up knowing less than they did before the measurement. On the balance, that’s negative information.

And that’s weird.

“Do the rules of thermodynamics for a macroscopic, classical world still apply when we talk about quantum superposition?” Murch asked. “We found that yes, they hold, except there’s this weird thing. The information can be negative.

“I think this research highlights how difficult it is to build a quantum computer,” Murch said.

“For a normal computer, it just gets hot and we need to cool it. In the quantum computer you are always at risk of losing information.”

Here’s a link to and a citation for the paper,

Information Gain and Loss for a Quantum Maxwell’s Demon by M. Naghiloo, J. J. Alonso, A. Romito, E. Lutz, and K. W. Murch. Phys. Rev. Lett. 121, 030604 (Vol. 121, Iss. 3 — 20 July 2018) DOI: https://doi.org/10.1103/PhysRevLett.121.030604 Published 17 July 2018

© 2018 American Physical Society

This paper is behind a paywall.

Brainy and brainy: a novel synaptic architecture and a neuromorphic computing platform called SpiNNaker

I have two items about brainlike computing. The first item hearkens back to memristors, a topic I have been following since 2008. (If you’re curious about the various twists and turns, just enter the term ‘memristor’ in this blog’s search engine.) The latest on memristors is from a team that includes IBM (US), École Polytechnique Fédérale de Lausanne (EPFL; Switzerland), and the New Jersey Institute of Technology (NJIT; US). The second bit comes from a Jülich Research Centre team in Germany and concerns an approach to brain-like computing that does not include memristors.

Multi-memristive synapses

In the inexorable march to make computers function more like human brains (neuromorphic engineering/computing), an international team has announced its latest results in a July 10, 2018 news item on Nanowerk,

Two New Jersey Institute of Technology (NJIT) researchers, working with collaborators from the IBM Research Zurich Laboratory and the École Polytechnique Fédérale de Lausanne, have demonstrated a novel synaptic architecture that could lead to a new class of information processing systems inspired by the brain.

The findings are an important step toward building more energy-efficient computing systems that also are capable of learning and adaptation in the real world. …

A July 10, 2018 NJIT news release (also on EurekAlert) by Tracey Regan, which originated the news item, adds more details,

The researchers, Bipin Rajendran, an associate professor of electrical and computer engineering, and S. R. Nandakumar, a graduate student in electrical engineering, have been developing brain-inspired computing systems that could be used for a wide range of big data applications.

Over the past few years, deep learning algorithms have proven to be highly successful in solving complex cognitive tasks such as controlling self-driving cars and language understanding. At the heart of these algorithms are artificial neural networks – mathematical models of the neurons and synapses of the brain – that are fed huge amounts of data so that the synaptic strengths are autonomously adjusted to learn the intrinsic features and hidden correlations in these data streams.

However, the implementation of these brain-inspired algorithms on conventional computers is highly inefficient, consuming huge amounts of power and time. This has prompted engineers to search for new materials and devices to build special-purpose computers that can incorporate the algorithms. Nanoscale memristive devices, electrical components whose conductivity depends approximately on prior signaling activity, can be used to represent the synaptic strength between the neurons in artificial neural networks.

While memristive devices could potentially lead to faster and more power-efficient computing systems, they are also plagued by several reliability issues that are common to nanoscale devices. Their efficiency stems from their ability to be programmed in an analog manner to store multiple bits of information; however, their electrical conductivities vary in a non-deterministic and non-linear fashion.

In the experiment, the team showed how multiple nanoscale memristive devices exhibiting these characteristics could nonetheless be configured to efficiently implement artificial intelligence algorithms such as deep learning. Prototype chips from IBM containing more than one million nanoscale phase-change memristive devices were used to implement a neural network for the detection of hidden patterns and correlations in time-varying signals.

“In this work, we proposed and experimentally demonstrated a scheme to obtain high learning efficiencies with nanoscale memristive devices for implementing learning algorithms,” Nandakumar says. “The central idea in our demonstration was to use several memristive devices in parallel to represent the strength of a synapse of a neural network, but only choose one of them to be updated at each step based on the neuronal activity.”
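The "several devices in parallel, update only one per step" scheme Nandakumar describes can be sketched in a few lines. The device model and every parameter below are illustrative stand-ins, not values from the paper:

```python
import random

class MultiMemristiveSynapse:
    """Toy model of a multi-memristive synapse: the synaptic weight is the
    average conductance of several devices, but each programming pulse is
    applied to only one device, selected by a shared round-robin counter.
    Device updates are noisy and granular, mimicking real phase-change
    devices. All parameters are illustrative, not taken from the paper."""

    G_MIN, G_MAX = 0.0, 1.0

    def __init__(self, n_devices=4, seed=0):
        self.g = [0.5] * n_devices      # conductance of each device
        self.counter = 0                # shared device-selection counter
        self.rng = random.Random(seed)

    @property
    def weight(self):
        return sum(self.g) / len(self.g)   # effective synaptic weight

    def update(self, direction):
        """Apply one potentiation (+1) or depression (-1) pulse to the
        single device the counter currently points at."""
        i = self.counter % len(self.g)
        self.counter += 1
        step = 0.1 * (1 + 0.3 * self.rng.gauss(0, 1))   # stochastic step size
        self.g[i] = min(self.G_MAX, max(self.G_MIN, self.g[i] + direction * step))

syn = MultiMemristiveSynapse()
for _ in range(12):
    syn.update(+1)                      # twelve potentiation pulses
print(f"weight after potentiation: {syn.weight:.2f}")
```

Spreading the pulses across several imperfect devices averages out their individual granularity and noise, which is the source of the improved learning efficiency the team reports.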

Here’s a link to and a citation for the paper,

Neuromorphic computing with multi-memristive synapses by Irem Boybat, Manuel Le Gallo, S. R. Nandakumar, Timoleon Moraitis, Thomas Parnell, Tomas Tuma, Bipin Rajendran, Yusuf Leblebici, Abu Sebastian, & Evangelos Eleftheriou. Nature Communications volume 9, Article number: 2514 (2018) DOI: https://doi.org/10.1038/s41467-018-04933-y Published 28 June 2018

This is an open access paper.

Also they’ve got a couple of very nice introductory paragraphs which I’m including here, (from the June 28, 2018 paper in Nature Communications; Note: Links have been removed),

The human brain with less than 20 W of power consumption offers a processing capability that exceeds the petaflops mark, and thus outperforms state-of-the-art supercomputers by several orders of magnitude in terms of energy efficiency and volume. Building ultra-low-power cognitive computing systems inspired by the operating principles of the brain is a promising avenue towards achieving such efficiency. Recently, deep learning has revolutionized the field of machine learning by providing human-like performance in areas, such as computer vision, speech recognition, and complex strategic games1. However, current hardware implementations of deep neural networks are still far from competing with biological neural systems in terms of real-time information-processing capabilities with comparable energy consumption.

One of the reasons for this inefficiency is that most neural networks are implemented on computing systems based on the conventional von Neumann architecture with separate memory and processing units. There are a few attempts to build custom neuromorphic hardware that is optimized to implement neural algorithms2,3,4,5. However, as these custom systems are typically based on conventional silicon complementary metal oxide semiconductor (CMOS) circuitry, the area efficiency of such hardware implementations will remain relatively low, especially if in situ learning and non-volatile synaptic behavior have to be incorporated. Recently, a new class of nanoscale devices has shown promise for realizing the synaptic dynamics in a compact and power-efficient manner. These memristive devices store information in their resistance/conductance states and exhibit conductivity modulation based on the programming history6,7,8,9. The central idea in building cognitive hardware based on memristive devices is to store the synaptic weights as their conductance states and to perform the associated computational tasks in place.

The two essential synaptic attributes that need to be emulated by memristive devices are the synaptic efficacy and plasticity. …

It gets more complicated from there.
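The "perform the associated computational tasks in place" idea in the excerpt above comes down to Ohm's and Kirchhoff's laws: weights sit in the array as conductances, and applying voltages reads out a whole multiply-accumulate at once. This is a plain-Python sketch of the principle, not of IBM's chip:

```python
def crossbar_dot(conductances, voltages):
    """In-memory multiply-accumulate on a memristive crossbar: weights are
    stored as device conductances G[i][j]; applying input voltages V[j] to
    the columns yields, by Ohm's and Kirchhoff's laws, a current on each
    row wire equal to the dot product I[i] = sum_j G[i][j] * V[j]."""
    return [sum(g * v for g, v in zip(row, voltages)) for row in conductances]

# A 2x3 weight matrix stored as conductances (arbitrary units)
G = [[0.2, 0.5, 0.1],
     [0.4, 0.0, 0.3]]
V = [1.0, 0.5, 2.0]   # input voltages encode the activations
print([round(i, 2) for i in crossbar_dot(G, V)])  # [0.65, 1.0]
```

Because the data never shuttles between a separate memory and processor, this sidesteps the von Neumann bottleneck the paper identifies.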

Now onto the next bit.

SpiNNaker

At a guess, those capitalized N’s are meant to indicate ‘neural networks’. As best I can determine, SpiNNaker is not based on the memristor. Moving on, a July 11, 2018 news item on phys.org announces work from a team examining how neuromorphic hardware and neuromorphic software work together,

A computer built to mimic the brain’s neural networks produces similar results to that of the best brain-simulation supercomputer software currently used for neural-signaling research, finds a new study published in the open-access journal Frontiers in Neuroscience. Tested for accuracy, speed and energy efficiency, this custom-built computer named SpiNNaker, has the potential to overcome the speed and power consumption problems of conventional supercomputers. The aim is to advance our knowledge of neural processing in the brain, to include learning and disorders such as epilepsy and Alzheimer’s disease.

A July 11, 2018 Frontiers Publishing news release on EurekAlert, which originated the news item, expands on the latest work,

“SpiNNaker can support detailed biological models of the cortex–the outer layer of the brain that receives and processes information from the senses–delivering results very similar to those from an equivalent supercomputer software simulation,” says Dr. Sacha van Albada, lead author of this study and leader of the Theoretical Neuroanatomy group at the Jülich Research Centre, Germany. “The ability to run large-scale detailed neural networks quickly and at low power consumption will advance robotics research and facilitate studies on learning and brain disorders.”

The human brain is extremely complex, comprising 100 billion interconnected brain cells. We understand how individual neurons and their components behave and communicate with each other and on the larger scale, which areas of the brain are used for sensory perception, action and cognition. However, we know less about the translation of neural activity into behavior, such as turning thought into muscle movement.

Supercomputer software has helped by simulating the exchange of signals between neurons, but even the best software run on the fastest supercomputers to date can only simulate 1% of the human brain.

“It is presently unclear which computer architecture is best suited to study whole-brain networks efficiently. The European Human Brain Project and Jülich Research Centre have performed extensive research to identify the best strategy for this highly complex problem. Today’s supercomputers require several minutes to simulate one second of real time, so studies on processes like learning, which take hours and days in real time, are currently out of reach,” explains Professor Markus Diesmann, co-author, head of the Computational and Systems Neuroscience department at the Jülich Research Centre.

He continues, “There is a huge gap between the energy consumption of the brain and today’s supercomputers. Neuromorphic (brain-inspired) computing allows us to investigate how close we can get to the energy efficiency of the brain using electronics.”

Developed over the past 15 years and based on the structure and function of the human brain, SpiNNaker — part of the Neuromorphic Computing Platform of the Human Brain Project — is a custom-built computer composed of half a million simple computing elements controlled by its own software. The researchers compared the accuracy, speed and energy efficiency of SpiNNaker with that of NEST–a specialist supercomputer software currently in use for brain neuron-signaling research.

“The simulations run on NEST and SpiNNaker showed very similar results,” reports Steve Furber, co-author and Professor of Computer Engineering at the University of Manchester, UK. “This is the first time such a detailed simulation of the cortex has been run on SpiNNaker, or on any neuromorphic platform. SpiNNaker comprises 600 circuit boards incorporating over 500,000 small processors in total. The simulation described in this study used just six boards–1% of the total capability of the machine. The findings from our research will improve the software to reduce this to a single board.”

Van Albada shares her future aspirations for SpiNNaker, “We hope for increasingly large real-time simulations with these neuromorphic computing systems. In the Human Brain Project, we already work with neuroroboticists who hope to use them for robotic control.”

Before getting to the link and citation for the paper, here’s a description of SpiNNaker’s hardware from the ‘Spiking neural network’ Wikipedia entry, Note: Links have been removed,

Neurogrid, built at Stanford University, is a board that can simulate spiking neural networks directly in hardware. SpiNNaker (Spiking Neural Network Architecture) [emphasis mine], designed at the University of Manchester, uses ARM processors as the building blocks of a massively parallel computing platform based on a six-layer thalamocortical model.[5]
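For readers wondering what such machines actually compute: the basic unit in NEST-style and SpiNNaker-style simulations is a spiking neuron model, often the leaky integrate-and-fire neuron. Here is a minimal sketch of that textbook model (not the cortical microcircuit model of the paper), with illustrative parameter values:

```python
def lif_simulate(input_current, v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0,
                 tau_m=10.0, r_m=10.0, dt=0.1):
    """Leaky integrate-and-fire neuron: the membrane voltage leaks toward
    rest, is driven up by the input current, and emits a spike (then
    resets) whenever it crosses threshold. Units are mV, ms, megohm and
    nA, all illustrative. Returns the spike times in milliseconds."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Euler step of tau_m * dV/dt = -(V - V_rest) + R_m * I
        v += dt / tau_m * (-(v - v_rest) + r_m * i_in)
        if v >= v_thresh:
            spikes.append(step * dt)   # record the spike time
            v = v_reset                # reset the membrane after firing
    return spikes

# 100 ms of constant 2 nA input drives a regular spike train
spikes = lif_simulate([2.0] * 1000)
print(len(spikes), "spikes in 100 ms")
```

Simulators like NEST evolve hundreds of thousands of such equations plus their synaptic couplings; SpiNNaker maps the same mathematics onto half a million small ARM processors instead.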

Now for the link and citation,

Performance Comparison of the Digital Neuromorphic Hardware SpiNNaker and the Neural Network Simulation Software NEST for a Full-Scale Cortical Microcircuit Model by Sacha J. van Albada, Andrew G. Rowley, Johanna Senk, Michael Hopkins, Maximilian Schmidt, Alan B. Stokes, David R. Lester, Markus Diesmann, and Steve B. Furber. Front. Neurosci. 12:291. doi: 10.3389/fnins.2018.00291 Published: 23 May 2018

As noted earlier, this is an open access paper.

Cellulose and natural nanofibres

Specifically, the researchers are describing these as cellulose nanofibrils. On the left of the image, the seed looks more like an egg waiting to be fried for breakfast, but the image on the right is definitely fibrous-looking,

Through contact with water, the seed of Neopallasia pectinata from the family of composite plants forms a slimy sheath. The white cellulose fibres anchor it to the seed surface. Courtesy: Kiel University (CAU)

A December 18, 2018 news item on Nanowerk describes the research into seeds and cellulose,

The seeds of some plants such as basil, watercress or plantain form a mucous envelope as soon as they come into contact with water. This cover consists chiefly of cellulose, an important structural component of the primary cell wall of green plants, and of swelling pectins, which are plant polysaccharides.

In order to be able to investigate its physical properties, a research team from the Zoological Institute at Kiel University (CAU) used a special drying method, which gently removes the water from the cellulosic mucous sheath. The team discovered that this method can produce extremely strong nanofibres from natural cellulose. In future, they could be especially interesting for applications in biomedicine.

A December 18, 2018 Kiel University press release, which originated the news item, offers further details about the work,

Thanks to their slippery mucous sheath, seeds can slide through the digestive tract of birds undigested. They are excreted unharmed, and can be dispersed in this way. It is presumed that the mucous layer provides protection. “In order to find out more about the function of the mucilage, we first wanted to study the structure and the physical properties of this seed envelope material,” said Zoology Professor Stanislav N. Gorb, head of the “Functional Morphology and Biomechanics” working group at the CAU. In doing so they discovered that its properties depend on the alignment of the fibres that anchor them to the seed surface.

Diverse properties: From slippery to sticky

The pectins in the shell of the seeds can absorb a large quantity of water, and thus form a gel-like capsule around the seed in a few minutes. It is anchored firmly to the surface of the seed by fine cellulose fibres with a diameter of just up to 100 nanometres, similar to the microscopic adhesive elements on the surface of highly-adhesive gecko feet. So in a sense, the fibres form the stabilising backbone of the mucous sheath.

The properties of the mucous change, depending on the water concentration. “The mucous actually makes the seeds very slippery. However, if we reduce the water content, it becomes sticky and begins to stick,” said Stanislav Gorb, summarising a result from previous studies together with Dr Agnieszka Kreitschitz. The adhesive strength is also increased by the forces acting between the individual vertically-arranged nanofibres of the seed and the adhesive surface.

Specially dried

In order to be able to investigate the mucous sheath under a scanning electron microscope, the Kiel research team used a particularly gentle method, so-called critical-point drying (CPD). They dehydrated the mucous sheath of various seeds step-by-step with liquid carbon dioxide – instead of the normal method using ethanol. The advantage of this method is that evaporation of liquid carbon dioxide can be controlled under certain pressure and temperature conditions, without surface tension developing within the sheath. As a result, the research team was able to precisely remove water from the mucous, without drying out the surface of the sheath and thereby destroying the original cell structure. Through the highly-precise drying, the structural arrangement of the individual cellulose fibres remained intact.

Almost as strongly-adhesive as carbon nanotubes

The research team tested the dried cellulose fibres regarding their friction and adhesion properties, and compared them with those of synthetically-produced carbon nanotubes. Due to their outstanding properties, such as their tensile strength, electrical conductivity or their friction, these microscopic structures are interesting for numerous industrial applications of the future.

“Our tests showed that the frictional and adhesive forces of the cellulose fibres are almost as strong as with vertically-arranged carbon nanotubes,” said Dr Clemens Schaber, first author of the study. The structural dimensions of the cellulose nanofibres are similar to those of vertically aligned carbon nanotubes. Through the special drying method, the researchers can also vary the adhesive strength in a targeted manner. In Gorb’s working group, the zoologist and biomechanist examines the functioning of biological nanofibres, and the potential to imitate them with technical means. “As a natural raw material, cellulose fibres have distinct advantages over carbon nanotubes, whose health effects have not yet been fully investigated,” continued Schaber. Nanocellulose is primarily found in biodegradable polymer composites, which are used in biomedicine, cosmetics or the food industry.

Here’s a link to and a citation for the paper,

Friction-Active Surfaces Based on Free-Standing Anchored Cellulose Nanofibrils by Clemens F. Schaber, Agnieszka Kreitschitz, and Stanislav N. Gorb. ACS Appl. Mater. Interfaces, 2018, 10 (43), pp 37566–37574 DOI: 10.1021/acsami.8b05972 Publication Date (Web): September 19, 2018

Copyright © 2018 American Chemical Society

This paper is behind a paywall.