Tag Archives: Michael Berger

All-natural agrochemicals

Michael Berger in his May 4, 2018 Nanowerk Spotlight article highlights research into creating all-natural agrochemicals,

Widespread use of synthetic agrochemicals in crop protection has led to serious concerns of environmental contamination and increased resistance in plant-based pathogenic microbes.

In an effort to develop bio-based and non-synthetic alternatives, nanobiotechnology researchers are looking to plants that possess natural antimicrobial properties.

Thyme is one such plant: thymol, an essential-oil component of thyme, is known for its antimicrobial activity. However, thymol has low water solubility, which reduces its biological activity and limits its application through aqueous media. In addition, thymol is physically and chemically unstable in the presence of oxygen, light and heat, which drastically reduces its effectiveness.

Scientists in India have overcome these obstacles by preparing thymol nanoemulsions where thymol is converted into nanoscale droplets using a plant-based surfactant known as saponin (a glycoside of the Quillaja tree). Due to this encapsulation, thymol becomes physically and chemically stable in the aqueous medium (the emulsion remained stable for three months).

In their work, the researchers show that nanoscale thymol’s antibacterial and antifungal properties not only prevent plant disease but also enhance plant growth.

“It is exciting how nanoscale thymol is more active,” says Saharan [Dr. Vinod Saharan from the Nano Research Facility Lab, Department of Molecular Biology and Biotechnology, at Maharana Pratap University of Agriculture and Technology], who led this work in collaboration with Washington University in St. Louis and Haryana Agricultural University, Hisar. “We found that nanoscale droplets of thymol can easily pass through the surfaces of bacteria, fungi and plants and exhibit much faster and strong activity. In addition nanodroplets of thymol have a larger surface area, i.e. more molecules on the surface, so thymol becomes more active at the target sites.”
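Saharan’s surface-area point can be made concrete: for a spherical droplet the surface-area-to-volume ratio scales as 3/r, so shrinking the droplets twentyfold puts twentyfold more of the thymol at the interface. A quick sketch (the droplet sizes below are illustrative, not values from the paper):

```python
import math

def surface_to_volume_ratio(radius_m: float) -> float:
    """Surface-area-to-volume ratio of a sphere (equals 3/r)."""
    area = 4 * math.pi * radius_m ** 2
    volume = (4 / 3) * math.pi * radius_m ** 3
    return area / volume

micro = surface_to_volume_ratio(1e-6)   # 1 µm droplet (conventional emulsion)
nano = surface_to_volume_ratio(50e-9)   # 50 nm droplet (nanoemulsion)

# Shrinking the droplet radius 20-fold raises the ratio 20-fold,
# exposing proportionally more thymol molecules at the target surface.
print(round(nano / micro))  # -> 20
```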

Here’s a link to and a citation for the paper,

Thymol nanoemulsion exhibits potential antibacterial activity against bacterial pustule disease and growth promotory effect on soybean by Sarita Kumari, R. V. Kumaraswamy, Ram Chandra Choudhary, S. S. Sharma, Ajay Pal, Ramesh Raliya, Pratim Biswas, & Vinod Saharan. Scientific Reports, volume 8, Article number: 6650 (2018) DOI: 10.1038/s41598-018-24871-5. Published: 27 April 2018

This paper is open access.

Final note

There is a Canadian company which specialises in nanoscale products for the agricultural sector, Vive Crop Protection. I don’t believe they claim their products are ‘green’ but due to the smaller quantities needed of Vive Crop Protection’s products, the environmental impact is less than that of traditional agrochemicals.

From the memristor to the atomristor?

I’m going to let Michael Berger explain the memristor (from Berger’s Jan. 2, 2017 Nanowerk Spotlight article),

In trying to bring brain-like (neuromorphic) computing closer to reality, researchers have been working on the development of memory resistors, or memristors, which are resistors in a circuit that ‘remember’ their state even if you lose power.

Today, most computers use random access memory (RAM), which moves very quickly as a user works but does not retain unsaved data if power is lost. Flash drives, on the other hand, store information when they are not powered but work much slower. Memristors could provide a memory that is the best of both worlds: fast and reliable.
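The “remembers its state without power” behaviour can be illustrated with the simplest textbook memristor model, linear ion drift (after the HP Labs formulation; all parameter values below are illustrative, not from Berger’s article):

```python
import numpy as np

def simulate_memristor(voltages, dt=1e-6, w0=0.5,
                       r_on=100.0, r_off=16e3, mu=1e-10, d=1e-8):
    """Linear ion-drift memristor model (illustrative parameters).

    w is the normalised width of the doped region (0..1); the device
    resistance is a mixture of r_on and r_off. At 0 V no current flows,
    so w -- and hence the resistance -- stays put: the device keeps its
    state even when unpowered.
    """
    w = w0
    resistances = []
    for v in voltages:
        r = r_on * w + r_off * (1 - w)
        resistances.append(r)
        i = v / r                            # Ohm's law at this instant
        w += (mu * r_on / d ** 2) * i * dt   # state drifts with charge
        w = min(max(w, 0.0), 1.0)
    return w, resistances

# A positive write pulse followed by an unpowered (0 V) interval:
pulse = np.concatenate([np.full(100, 1.0), np.zeros(100)])
w_final, rs = simulate_memristor(pulse)

# The low resistance reached during the pulse is retained at 0 V.
print(rs[0] > rs[99], rs[100] == rs[199])  # -> True True
```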

He goes on to discuss a team at the University of Texas at Austin’s work on creating an extraordinarily thin memristor: an atomristor,

The team’s work features the thinnest memory devices and it appears to be a universal effect available in all semiconducting 2D monolayers.

The scientists explain that the unexpected discovery of nonvolatile resistance switching (NVRS) in monolayer transitional metal dichalcogenides (MoS2, MoSe2, WS2, WSe2) is likely due to the inherent layered crystalline nature that produces sharp interfaces and clean tunnel barriers. This prevents excessive leakage and affords stable phenomenon so that NVRS can be used for existing memory and computing applications.

“Our work opens up a new field of research in exploiting defects at the atomic scale, and can advance existing applications such as future generation high density storage, and 3D cross-bar networks for neuromorphic memory computing,” notes Akinwande [Deji Akinwande, an Associate Professor at the University of Texas at Austin]. “We also discovered a completely new application, which is non-volatile switching for radio-frequency (RF) communication systems. This is a rapidly emerging field because of the massive growth in wireless technologies and the need for very low-power switches. Our devices consume no static power, an important feature for battery life in mobile communication systems.”

Here’s a link to and a citation for the Akinwande team’s paper,

Atomristor: Nonvolatile Resistance Switching in Atomic Sheets of Transition Metal Dichalcogenides by Ruijing Ge, Xiaohan Wu, Myungsoo Kim, Jianping Shi, Sushant Sonde, Li Tao, Yanfeng Zhang, Jack C. Lee, and Deji Akinwande. Nano Lett., Article ASAP. DOI: 10.1021/acs.nanolett.7b04342. Publication Date (Web): December 13, 2017

Copyright © 2017 American Chemical Society

This paper appears to be open access.

ETA January 23, 2018: There’s another account of the atomristor in Samuel K. Moore’s January 23, 2018 posting on the Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website).

Leftover 2017 memristor news bits

I have two bits of news: one from October 2017 about using light to control a memristor’s learning properties and one from December 2017 about memristors and neural networks.

Shining a light on the memristor

Michael Berger wrote an October 30, 2017 Nanowerk Spotlight article about some of the latest work concerning memristors and light,

Memristors – or resistive memory – are nanoelectronic devices that are very promising components for next generation memory and computing devices. They are two-terminal electric elements similar to a conventional resistor – however, the electric resistance in a memristor is dependent on the charge passing through it; which means that its conductance can be precisely modulated by charge or flux through it. Its special property is that its resistance can be programmed (resistor function) and subsequently remains stored (memory function).

In this sense, a memristor is similar to a synapse in the human brain because it exhibits the same switching characteristics, i.e. it is able, with a high level of plasticity, to modify the efficiency of signal transfer between neurons under the influence of the transfer itself. That’s why researchers are hopeful to use memristors for the fabrication of electronic synapses for neuromorphic (i.e. brain-like) computing that mimics some of the aspects of learning and computation in human brains.

Human brains may be slow at pure number crunching but they are excellent at handling fast dynamic sensory information such as image and voice recognition. Walking is something that we take for granted but this is quite challenging for robots, especially over uneven terrain.

“Memristors present an opportunity to make new types of computers that are different from existing von Neumann architectures, which traditional computers are based upon,” Dr Neil T. Kemp, a Lecturer in Physics at the University of Hull [UK], tells Nanowerk. “Our team at the University of Hull is focussed on making memristor devices dynamically reconfigurable and adaptive – we believe this is the route to making a new generation of artificial intelligence systems that are smarter and can exhibit complex behavior. Such systems would also have the advantage of memristors, high density integration and lower power usage, so these systems would be more lightweight, portable and not need re-charging so often – which is something really needed for robots etc.”

In their new paper in Nanoscale (“Reversible Optical Switching Memristors with Tunable STDP Synaptic Plasticity: A Route to Hierarchical Control in Artificial Intelligent Systems”), Kemp and his team demonstrate the ability to reversibly control the learning properties of memristors via optical means.

The reversibility is achieved by changing the polarization of light. The researchers have used this effect to demonstrate tuneable learning in a memristor. One way this is achieved is through something called Spike Timing Dependent Plasticity (STDP), which is an effect known to occur in human brains and is linked with sensory perception, spatial reasoning, language and conscious thought in the neocortex.

STDP learning is based upon differences in the arrival time of signals from two adjacent neurons. The University of Hull team has shown that they can modulate the synaptic plasticity via optical means which enables the devices to have tuneable learning.
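The STDP rule described above can be written down compactly: the weight change depends on the difference in arrival times of the two neurons’ spikes, decaying exponentially with that difference. A minimal sketch of the standard exponential form (the amplitudes and time constant are illustrative, not values from the Hull paper):

```python
import math

def stdp_weight_change(dt_ms, a_plus=0.10, a_minus=0.12, tau_ms=20.0):
    """Exponential spike-timing-dependent plasticity rule.

    dt_ms = t_post - t_pre. A causal pairing (pre fires just before
    post, dt_ms > 0) strengthens the synapse; the reverse order
    weakens it. The effect fades as the spikes move further apart.
    """
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)    # potentiation
    if dt_ms < 0:
        return -a_minus * math.exp(dt_ms / tau_ms)   # depression
    return 0.0

# Pre 5 ms before post strengthens; post 5 ms before pre weakens;
# a 100 ms gap barely registers.
print(stdp_weight_change(5.0) > 0, stdp_weight_change(-5.0) < 0)  # -> True True
```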

“Our research findings are important because it demonstrates that light can be used to control the learning properties of a memristor,” Kemp points out. “We have shown that light can be used in a reversible manner to change the connection strength (or conductivity) of artificial memristor synapses and as well control their ability to forget i.e. we can dynamically change device to have short-term or long-term memory.”

According to the team, there are many potential applications, such as adaptive electronic circuits controllable via light, or in more complex systems, such as neuromorphic computing, the development of optically reconfigurable neural networks.

Having optically controllable memristors can also facilitate the implementation of hierarchical control in larger artificial brain-like systems, whereby some of the key processes that are carried out by biological molecules in human brains can be emulated in solid-state devices through patterning with light.

Some of these processes include synaptic pruning, conversion of short term memory to long term memory, erasing of certain memories that are no longer needed or changing the sensitivity of synapses to be more adept at learning new information.

“The ability to control this dynamically, both spatially and temporally, is particularly interesting since it would allow neural networks to be reconfigurable on the fly through either spatial patterning or by adjusting the intensity of the light source,” notes Kemp.

Currently, the devices are more suited to neuromorphic computing applications, which do not need to be as fast. Optical control of memristors opens the route to dynamically tuneable and reprogrammable synaptic circuits, as well as the ability (via optical patterning) to exercise hierarchical control in larger and more complex artificial intelligent systems.

“Artificial Intelligence is really starting to come on strong in many areas, especially in the areas of voice/image recognition and autonomous systems – we could even say that this is the next revolution, similarly to what the industrial revolution was to farming and production processes,” concludes Kemp. “There are many challenges to overcome though. …

That excerpt should give you the gist of Berger’s article; those who need more information can read it in full. Here’s a link to and a citation for the paper,

Reversible optical switching memristors with tunable STDP synaptic plasticity: a route to hierarchical control in artificial intelligent systems by Ayoub H. Jaafar, Robert J. Gray, Emanuele Verrelli, Mary O’Neill, Stephen M. Kelly, and Neil T. Kemp. Nanoscale, 2017, 9, 17091–17098. DOI: 10.1039/C7NR06138B. First published on 24 Oct 2017

This paper is behind a paywall.

The memristor and the neural network

It would seem machine learning could experience a significant upgrade if the work in Wei Lu’s University of Michigan laboratory can be scaled for general use. From a December 22, 2017 news item on ScienceDaily,

A new type of neural network made with memristors can dramatically improve the efficiency of teaching machines to think like humans.

The network, called a reservoir computing system, could predict words before they are said during conversation, and help predict future outcomes based on the present.

The research team that created the reservoir computing system, led by Wei Lu, professor of electrical engineering and computer science at the University of Michigan, recently published their work in Nature Communications.

A December 19, 2017 University of Michigan news release (also on EurekAlert) by Dan Newman, which originated the news item, expands on the theme,

Reservoir computing systems, which improve on a typical neural network’s capacity and reduce the required training time, have been created in the past with larger optical components. However, the U-M group created their system using memristors, which require less space and can be integrated more easily into existing silicon-based electronics.

Memristors are a special type of resistive device that can both perform logic and store data. This contrasts with typical computer systems, where processors perform logic separate from memory modules. In this study, Lu’s team used a special memristor that memorizes events only in the near history.

Inspired by brains, neural networks are composed of neurons, or nodes, and synapses, the connections between nodes.

To train a neural network for a task, a neural network takes in a large set of questions and the answers to those questions. In this process of what’s called supervised learning, the connections between nodes are weighted more heavily or lightly to minimize the amount of error in achieving the correct answer.

Once trained, a neural network can then be tested without knowing the answer. For example, a system can process a new photo and correctly identify a human face, because it has learned the features of human faces from other photos in its training set.
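The weighting step described above is, at its simplest, gradient descent. A toy single-neuron version (our own illustration, not Lu’s code): fit the mapping y = 2x − 1 by repeatedly nudging a weight and bias to shrink the error.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy training set of 'questions' x and 'answers' y = 2x - 1.
x = rng.uniform(-1, 1, size=(100, 1))
y = 2 * x - 1

w, b = 0.0, 0.0   # untrained connection weight and bias
lr = 0.1          # learning rate

for _ in range(500):
    err = (w * x + b) - y
    # Weight the connection more heavily or lightly to reduce
    # the mean squared error (supervised learning).
    w -= lr * (2 * err * x).mean()
    b -= lr * (2 * err).mean()

print(round(w, 2), round(b, 2))  # -> 2.0 -1.0
```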

“A lot of times, it takes days or months to train a network,” says Lu. “It is very expensive.”

Image recognition is also a relatively simple problem, as it doesn’t require any information apart from a static image. More complex tasks, such as speech recognition, can depend highly on context and require neural networks to have knowledge of what has just occurred, or what has just been said.

“When transcribing speech to text or translating languages, a word’s meaning and even pronunciation will differ depending on the previous syllables,” says Lu.

This requires a recurrent neural network, which incorporates loops within the network that give the network a memory effect. However, training these recurrent neural networks is especially expensive, Lu says.

Reservoir computing systems built with memristors, however, can skip most of the expensive training process and still provide the network the capability to remember. This is because the most critical component of the system – the reservoir – does not require training.

When a set of data is inputted into the reservoir, the reservoir identifies important time-related features of the data, and hands it off in a simpler format to a second network. This second network then only needs training like simpler neural networks, changing weights of the features and outputs that the first network passed on until it achieves an acceptable level of error.
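That division of labour, a fixed reservoir whose internal dynamics need no training plus a small trained readout, is the structure of an echo state network. Here is a minimal software sketch with a random tanh network standing in for the memristor array (all sizes and constants are our own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n_res = 50                                   # reservoir nodes

W_in = rng.uniform(-0.5, 0.5, (n_res, 1))    # fixed input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))   # fixed recurrent weights
W *= 0.9 / max(abs(np.linalg.eigvals(W)))    # scale for fading memory

def run_reservoir(u):
    """Drive the fixed (never trained) reservoir; collect its states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in[:, 0] * u_t + W @ x)
        states.append(x.copy())
    return np.array(states)

# A task with a memory requirement: recall the input from 2 steps ago.
u = rng.uniform(-1, 1, 500)
X = run_reservoir(u)[10:]    # reservoir states (initial transient dropped)
y = u[8:-2]                  # target: u delayed by two steps

# Only the linear readout is trained (here by ridge regression).
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
print(np.corrcoef(X @ W_out, y)[0, 1])  # correlation close to 1
```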


IMAGE:  Schematic of a reservoir computing system, showing the reservoir with internal dynamics and the simpler output. Only the simpler output needs to be trained, allowing for quicker and lower-cost training. Courtesy Wei Lu.

 

“The beauty of reservoir computing is that while we design it, we don’t have to train it,” says Lu.

The team proved the reservoir computing concept using a test of handwriting recognition, a common benchmark among neural networks. Numerals were broken up into rows of pixels, and fed into the computer with voltages like Morse code, with zero volts for a dark pixel and a little over one volt for a white pixel.
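The pixel-to-voltage encoding described above is simple to sketch (the exact “a little over one volt” level is our assumption):

```python
def pixels_to_voltages(row, v_white=1.1, v_dark=0.0):
    """Map one row of binary pixels to the voltage pulse train fed to
    the memristor network: 0 V for a dark pixel, a little over 1 V
    for a white pixel (exact level assumed)."""
    return [v_white if p else v_dark for p in row]

# One 4-pixel row of a digitised numeral:
print(pixels_to_voltages([1, 0, 0, 1]))  # -> [1.1, 0.0, 0.0, 1.1]
```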

Using only 88 memristors as nodes to identify handwritten versions of numerals, compared to a conventional network that would require thousands of nodes for the task, the reservoir achieved 91% accuracy.

Reservoir computing systems are especially adept at handling data that varies with time, like a stream of data or words, or a function depending on past results.

To demonstrate this, the team tested a complex function that depended on multiple past results, which is common in engineering fields. The reservoir computing system was able to model the complex function with minimal error.

Lu plans on exploring two future paths with this research: speech recognition and predictive analysis.

“We can make predictions on natural spoken language, so you don’t even have to say the full word,” explains Lu.

“We could actually predict what you plan to say next.”

In predictive analysis, Lu hopes to use the system to take in signals with noise, like static from far-off radio stations, and produce a cleaner stream of data. “It could also predict and generate an output signal even if the input stopped,” he says.


IMAGE:  Wei Lu, Professor of Electrical Engineering & Computer Science at the University of Michigan holds a memristor he created. Photo: Marcin Szczepanski.

 

The work was published in Nature Communications in the article, “Reservoir computing using dynamic memristors for temporal information processing”, with authors Chao Du, Fuxi Cai, Mohammed Zidan, Wen Ma, Seung Hwan Lee, and Prof. Wei Lu.

The research is part of a $6.9 million DARPA [US Defense Advanced Research Projects Agency] project, called “Sparse Adaptive Local Learning for Sensing and Analytics [also known as SALLSA],” that aims to build a computer chip based on self-organizing, adaptive neural networks. The memristor networks are fabricated at Michigan’s Lurie Nanofabrication Facility.

Lu and his team previously used memristors in implementing “sparse coding,” which used a 32-by-32 array of memristors to efficiently analyze and recreate images.

Here’s a link to and a citation for the paper,

Reservoir computing using dynamic memristors for temporal information processing by Chao Du, Fuxi Cai, Mohammed A. Zidan, Wen Ma, Seung Hwan Lee & Wei D. Lu. Nature Communications 8, Article number: 2204 (2017) doi:10.1038/s41467-017-02337-y Published online: 19 December 2017

This is an open access paper.

Nano- and neuro- together for nanoneuroscience

This is not the first time I’ve posted about nanotechnology and neuroscience (see this April 2, 2013 piece about the then-new brain science initiative in the US and Michael Berger’s Nanowerk Spotlight review of an earlier paper covering the topic of nanotechnology and neuroscience).

Interestingly, the European Union (EU) had announced its two €1-billion research initiatives, the Human Brain Project and the Graphene Flagship (see my Jan. 28, 2013 posting about them), months prior to the US brain research push. For those unfamiliar with the nanotechnology effort, graphene is a nanomaterial, and there is high interest in its potential use in biomedical technology, thus partially connecting the two EU projects.

In any event, Berger is highlighting a nanotechnology and neuroscience connection again in his Oct. 18, 2017 Nanowerk Spotlight article, or overview of, a new paper, which updates our understanding of the potential connections between the two fields (Note: A link has been removed),

Over the past several years, advances in nanoscale analysis tools and in the design and synthesis of nanomaterials have generated optical, electrical, and chemical methods that can readily be adapted for use in neuroscience and brain activity mapping.

A review paper in Advanced Functional Materials (“Nanotechnology for Neuroscience: Promising Approaches for Diagnostics, Therapeutics and Brain Activity Mapping”) summarizes the basic concepts associated with neuroscience and the current journey of nanotechnology towards the study of neuron function by addressing various concerns on the significant role of nanomaterials in neuroscience and by describing the future applications of this emerging technology.

The collaboration between nanotechnology and neuroscience, though still at the early stages, utilizes broad concepts, such as drug delivery, cell protection, cell regeneration and differentiation, imaging and surgery, to give birth to novel clinical methods in neuroscience.

Ultimately, the clinical translation of nanoneuroscience implies that central nervous system (CNS) diseases, including neurodevelopmental, neurodegenerative and psychiatric diseases, have the potential to be cured, while the industrial translation of nanoneuroscience indicates the need for advancement of brain-computer interface technologies.

Future Developing Arenas in Nanoneuroscience

The Brain Activity Map (BAM) Project aims to map the neural activity of every neuron across all neural circuits with the ultimate aim of curing diseases associated with the nervous system. The announcement of this collaborative, public-private research initiative in 2013 by President Obama has driven the surge in developing methods to elucidate neural circuitry. Three current developing arenas in the context of nanoneuroscience applications that will push this initiative forward are 1) optogenetics, 2) molecular/ion sensing and monitoring and 3) piezoelectric effects.

In their review, the authors discuss these aspects in detail.

Neurotoxicity of Nanomaterials

By engineering particles on the scale of molecular-level entities – proteins, lipid bilayers and nucleic acids – we can stereotactically interface with many of the components of cell systems, and at the cutting edge of this technology, we can begin to devise ways in which we can manipulate these components to our own ends. However, interfering with the internal environment of cells, especially neurons, is by no means simple.

“If we are to continue to make great strides in nanoneuroscience, functional investigations of nanomaterials must be complemented with robust toxicology studies,” the authors point out. “A database on the toxicity of materials that fully incorporates these findings for use in future schema must be developed. These databases should include information and data on 1) the chemical nature of the nanomaterials in complex aqueous environments; 2) the biological interactions of nanomaterials with chemical specificity; 3) the effects of various nanomaterial properties on living systems; and 4) a model for the simulation and computation of possible effects of nanomaterials in living systems across varying time and space. If we can establish such methods, it may be possible to design nanopharmaceuticals for improved research as well as quality of life.”
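The four database requirements the authors list map naturally onto a record schema; the sketch below is purely our own illustration (all field names and sample values are invented), not anything proposed in the review itself:

```python
from dataclasses import dataclass

@dataclass
class NanomaterialToxicityRecord:
    """One hypothetical entry in the toxicity database the authors
    call for, with one field per requirement in their list."""
    material: str
    aqueous_chemistry: str   # 1) chemical nature in complex aqueous environments
    bio_interactions: str    # 2) biological interactions with chemical specificity
    property_effects: str    # 3) effects of material properties on living systems
    simulation_model: str    # 4) model for simulating effects across time and space

record = NanomaterialToxicityRecord(
    material="MoS2 monolayer",
    aqueous_chemistry="aggregates in high-ionic-strength media",
    bio_interactions="adsorbs serum albumin",
    property_effects="size-dependent cellular uptake",
    simulation_model="coarse-grained molecular dynamics",
)
print(record.material)  # -> MoS2 monolayer
```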

“However, challenges in nanoneuroscience are present in many forms, such as neurotoxicity; the inability to cross the blood-brain barrier [emphasis mine]; the need for greater specificity, bioavailability and short half-lives; and monitoring of disease treatment,” the authors conclude their review. “The nanoneurotoxicity surrounding these nanomaterials is a barrier that must be overcome for the translation of these applications from bench-to-bedside. While the challenges associated with nanoneuroscience seem unending, they represent opportunities for future work.”

I have a March 26, 2015 posting about Canadian researchers breaching the blood-brain barrier and an April 13, 2016 posting about US researchers at Cornell University also breaching the blood-brain barrier. Perhaps the “inability” mentioned in this Spotlight article means that it can’t be done consistently or that it hasn’t been achieved on humans.

Here’s a link to and a citation for the paper,

Nanotechnology for Neuroscience: Promising Approaches for Diagnostics, Therapeutics and Brain Activity Mapping by Anil Kumar, Aaron Tan, Joanna Wong, Jonathan Clayton Spagnoli, James Lam, Brianna Diane Blevins, Natasha G, Lewis Thorne, Keyoumars Ashkan, Jin Xie, and Hong Liu. Advanced Functional Materials Volume 27, Issue 39, October 19, 2017 DOI: 10.1002/adfm.201700489 Version of Record online: 14 AUG 2017

© 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

I took a look at the authors’ information and found that most of these researchers are based in China and the UK, with a sole researcher based in the US.

Nondirect relationships between the number of hydrogen bonds in DNA pairs and their relative strengths (three can be less than two)

Michael Berger’s Oct. 10, 2017 Nanowerk Spotlight article features research from the Institut Català de Nanociència i Nanotecnologia (Catalan Institute of Nanoscience and Nanotechnology; acronym ICN2), which has nothing to do with the recent vote on independence (for more on how regions such as Catalonia do and don’t gain independence, see this Oct. 23, 2017 article in The Atlantic),

This is the first report on the electrical characterization of DNA with intrabond resolution.

Specifically, it quantifies electrical forces due to a single hydrogen bond in DNA and provides proof of a non-direct relationship between the number of hydrogen bonds in DNA pairs and the relative strengths of such pairs.

Such understanding of the relative strengths of the forces involved in the specific bonding of DNA, as well as its electrical origin, provides physical foundations for controlling mechanisms associated with DNA stability. It could help to develop new methods for the understanding, characterization, and control of relevant events originating at intrabond scales, e.g., controlled DNA repair and damage; controlled modification of the expression of the genome; and communications below the single-bond limit.

A blackboard representation of the manuscript’s key message: the quantification of the relative strengths between base pairs in DNA due to zipping hydrogen bonds casts doubt on interpretations of the thermodynamic properties of DNA that rest on the assumption that A/T pairs are weaker than G/C pairs solely because of the difference in the number of hydrogen bonds, 2 and 3 respectively. (Image: Dr. Yamila García-Martínez)

Generally, being able to control DNA stability at the single bond level by means of electromagnetic interactions opens new avenues to induce modifications of the replication and transcription processes of DNA by means of noncontact methods.

….

Going forward, the researchers will study the effects of external electromagnetic fields on DNA at the level of single bond events. This will have not only an enormous interest in the medical field but also in nanotechnology where it would open the door to non-contact atomic manipulation of DNA – the analogue to the CRISPR gene editing method [emphasis mine] but using electromagnetic fields to drive changes in DNA.

Interesting stuff, eh?

Here’s a link to and a citation for the paper,

Unveiled electric profiles within hydrogen bonds suggest DNA base pairs with similar bond strengths by Y. B. Ruiz-Blanco, Y. Almeida, C. M. Sotomayor-Torres, Y. García. PLOS ONE [Public Library of Science], https://doi.org/10.1371/journal.pone.0185638. Published: October 5, 2017

This paper is open access.

An examination of nanomanufacturing and nanofabrication

Michael Berger has written an Aug. 11, 2016 Nanowerk Spotlight review of a paper about nanomanufacturing (Note: A link has been removed),

… the path to greater benefits – whether economic, social, or environmental – from nanomanufactured goods and services is not yet clear. A recent review article in ACS Nano (“Nanomanufacturing: A Perspective”) by J. Alexander Liddle and Gregg M. Gallatin, takes silicon integrated circuit manufacturing as a baseline in order to consider the factors involved in matching processes with products, examining the characteristics and potential of top-down and bottom-up processes, and their combination.

The authors also discuss how a careful assessment of the way in which function can be made to follow form can enable high-volume manufacturing of nanoscale structures with the desired useful, and exciting, properties.

Although often used interchangeably, it makes sense to distinguish between nanofabrication and nanomanufacturing using the criterion of economic viability, suggested by the connotations of industrial scale and profitability associated with the word ‘manufacturing’.

Here’s a link to and a citation for the paper Berger is reviewing,

Nanomanufacturing: A Perspective by J. Alexander Liddle and Gregg M. Gallatin. ACS Nano, 2016, 10 (3), pp 2995–3014 DOI: 10.1021/acsnano.5b03299 Publication Date (Web): February 10, 2016

Copyright: This article is not subject to U.S. Copyright. Published 2016 by the American Chemical Society.

This paper is behind a paywall.

Luckily for those who’d like a little more information before purchase, Berger’s review provides some insight into the study additional to what you’ll find in the abstract,

Nanomanufacturing, as the authors define it in their article, therefore, has the salient characteristic of being a source of money, while nanofabrication is often a sink.

To supply some background and indicate the scale of the nanomanufacturing challenge, the figure below shows the selling price ($·m-2) versus the annual production (m2) for a variety of nanoenabled or potentially nanoenabled products. The overall global market sizes are also indicated. It is interesting to note that the selling price spans five orders of magnitude, the production six, and the market size three, although there is no strong correlation between the variables. …
Log-log plot of the approximate product selling price ($·m-2) versus global annual production (m2) for a variety of nanoenabled, or potentially nanoenabled products. Approximate market sizes (2014) are shown next to each point. (Reprinted with permission by American Chemical Society)


I encourage anyone interested in nanomanufacturing to read Berger’s article in its entirety as there is more detail and there are more figures to illustrate the points being made. He ends his review with this,

“Perhaps the most exciting prospect is that of creating dynamical nanoscale systems that are capable of exhibiting much richer structures and functionality. Whether this is achieved by learning how to control and engineer biological systems directly, or by building systems based on the same principles, remains to be seen, but will undoubtedly be disruptive and quite probably revolutionary.”

I find the reference to biological systems quite interesting especially in light of the recent launch of DARPA’s (US Defense Advanced Research Projects Agency) Engineered Living Materials (ELM) program (see my Aug. 9, 2016 posting).

Book announcement: Nanotechnology: The Future is Tiny

The book has a pretty cover (carbon nanotubes in the left corner, nanoparticles? next, and a circuit board to complete the image),


The book, written by Michael Berger, publisher of the Nanowerk website, was announced in an Aug. 31, 2016 Nanowerk Spotlight article (Note: Links have been removed),

“Nanotechnology: The Future is Tiny” puts a spotlight on some of the scientists who are pushing the boundaries of technology and it gives examples of their work and how they are advancing knowledge one little step at a time.

Written by Nanowerk’s Michael Berger, this book is a collection of essays about researchers involved in all facets of nanotechnologies. Nanoscience and nanotechnology research are truly multidisciplinary and international efforts, covering a wide range of scientific disciplines such as medicine, materials sciences, chemistry, biology and biotechnology, physics and electronics.

Here’s more about the book before I comment on the marketing (from the Nanotechnology: The Future is Tiny webpage on the Royal Society of Chemistry’s website),

Nanotechnology: The Future is Tiny introduces 176 different research projects from around the world that are exploring the different areas of nanotechnologies. Using interviews and descriptions of the projects, the collection of essays provides a unique commentary on the current status of the field. From flexible electronics that you can wear to nanomaterials used for cancer diagnostics and therapeutics, the book gives a new perspective on the current work into developing new nanotechnologies. Each chapter delves into a specific area of nanotechnology research including graphene, energy storage, electronics, 3D printing, nanomedicine, nanorobotics as well as environmental implications.

Through the scientists’ own words, the book gives a personal perspective on how nanotechnologies are created and developed, and an exclusive look at how today’s research will create tomorrow’s products and applications. This book will appeal to anyone who has an interest in the research and future of nanotechnology.

Publication Details
Print publication date: 30 Aug 2016
Copyright: 2016
Print ISBN: 978-1-78262-526-1
PDF eISBN: 978-1-78262-887-3
EPUB eISBN: 978-1-78262-888-0
DOI: 10.1039/9781782628873

According to Berger’s description of his book (from the Aug. 31, 2016 Nanowerk Spotlight article),

Some stories are more like an introduction to nanotechnology, some are about understanding current developments, and some are advanced technical discussions of leading edge research. Reading this book will shatter the monolithic term “nanotechnology” into the myriad of facets that it really is.

Berger has taken on a very challenging task for a writer. It’s very difficult to produce a book that will satisfy the range of audiences described, and appealing to a different audience in each chapter is probably the only way to approach the task. I think the book may prove especially useful for beginner and intermediate readers because it lets you find your level: as you grow in confidence, you can tackle more challenging chapters. The mystery is which chapters are for beginners and intermediates?

A rather interesting marketing strategy has been adopted, which has direct bearing on this mystery. The publisher, the Royal Society of Chemistry (RSC), has made some material available for free (sort of). There is no direct charge for the Front Matter, the Preface, the Table of Contents, or Chapter 1: Generating Energy Becomes Personal, but you do need to register to access the materials. They also seem to be having a problem of some kind, as the same information came up each time I clicked, whether on the Front Matter, the Preface, or the Table of Contents. As for Chapter 1, you will get an abstract only.

You can purchase chapters individually or buy the hardback version of the book for £66.99 or the full ebook (EPUB) version for £200.97. Chapter 2: No More Rigid Boxes—Fully Flexible and Transparent Electronics (PDF) is available for £28.00. The pricing seems designed to encourage hardback purchases. It seems anyone who only wants one chapter is going to have to guess whether it was written for an expert, a beginner, or someone in between.

Depending on your circumstances, taking a chance may be worth it. Based on the Nanowerk Spotlight articles, Berger writes with clarity and understanding of his subject matter. I’ve found value even in some of his more challenging pieces.

Nanomedicine living up to its promise?

Michael Berger has written a March 10, 2015 Nanowerk Spotlight article reviewing nanomedicine’s progress, or lack thereof (Note: Links have been removed),

In early 2003, the European Science Foundation launched its Scientific Forward Look on Nanomedicine, a foresight study (report here; pdf) and in 2004, the U.S. National Institute[s] of Health (NIH) published its Roadmap (now Common Fund) of the Nanomedicine Initiative. This program began in 2005 with a national network of eight Nanomedicine Development Centers. Now, in the second half of this 10-year program, the four centers best positioned to effectively apply their findings to translational studies were selected to continue receiving support.

A generally accepted definition of nanomedicine refers to highly specific medical intervention at the molecular scale for curing disease or repairing damaged tissues, such as bone, muscle, or nerve.

Much of Berger’s article is based on a paper by Subbu Venkatraman, Director of the NTU (Nanyang Technological University)-Northwestern Nanomedicine Institute in Singapore: Has nanomedicine lived up to its promise?, 2014 Nanotechnology 25 372501, doi:10.1088/0957-4484/25/37/372501 (Note: Links have been removed),

… Historically, the approval of Doxil as the very first nanotherapeutic product in 1995 is generally regarded as the dawn of nanomedicine for human use. Since then, research activity in this area has been frenetic, with, for example, 2000 patents being generated in 2003, in addition to 1200 papers [2]. In the same time period, a total of 207 companies were involved in developing nanomedicinal products in diagnostics, imaging, drug delivery and implants. About 38 products loosely classified as nanomedicine products were in fact approved by 2004. Out of these, however, a number of products (five in all) were based on PEG-ylated proteins, which strictly speaking, are not so much nanomedicine products as molecular therapeutics. Nevertheless, the promise of nanomedicine was being translated into funding for small companies, and into clinical success, so that by 2013, the number of approved products had reached 54 in all, with another 150 in various stages of clinical trials [3]. The number of companies and institutions had risen to 241 (including research centres that were working on nanomedicine). A PubMed search on articles relating to nanomedicine shows 7400 hits over 10 years, of which 1874 were published in 2013 alone. Similarly, the US patent office database shows 409 patents (since 1976) that were granted in nanomedicine, with another 679 applications awaiting approval. So judging by research activity and funding the field of nanomedicine has been very fertile; however, when we use the yardstick of clinical success and paradigm shifts in treatment, the results appear more modest.

Both Berger’s spotlight article and Venkatraman’s review provide interesting reading and neither is especially long.

Insurance companies, the future, and perceptions about nanotechnology risks

Michael Berger has written a Dec. 15, 2014 Nanowerk Spotlight about a study examining perceptions of nanotechnology risks amongst members of the insurance industry,

Insurance companies are major stakeholders capable of contributing to the safer and more sustainable development of nanotechnologies and nanomaterials. This is owed to the fact that the insurance industry is one of the bearers of potential losses that can arise from the production and use of nanomaterials and nanotechnology applications.

Researchers at the University of Limerick in Ireland have examined how the insurance market perception of nanotechnology can influence the sustainability of technological advances and insurers’ concern for nanotechnology risks. They claim that, despite its role in sustaining technology development in modern society, insurers’ perception on nanomaterials has been largely overlooked by researchers and regulators alike.

I encourage you to read Berger’s piece in its entirety as it includes nuggets such as this,

… Over 64 per cent of surveyed insurers said they were vaguely familiar with nanotechnology and nanomaterial terms, and over 25 per cent said they had a moderate working knowledge and were able to define the terms. The interview data, however, suggests that this knowledge is at a basic level and there is a need for more information in order to allow this group to differentiate between distinct nanomaterial risks.

For those of you who would like to read the researchers’ paper in its entirety, you can find it in the Geneva Association Newsletter: Risk Management, No. 54, June 2014 where you will find a very interesting set of prognostications in Walter R. Stahel’s editorial,

In the editorial of the Risk Management newsletter of May 2013, I was looking back at 25 years of Risk Management Research of The Geneva Association. Today, this editorial and newsletter will look at some specific risks of the next 25 years.

If we first look back 25 years, to 1988, the PC had just been invented, Internet was still an internal network at the site of its invention the CERN [European Particle Physics Laboratory] in Geneva, cars were driven by people and mobile phones weighed five kilos and cost $5000, to give but a few technical examples. Dying forests, air pollution and retreating glaciers were the main environmental topics in the news, unemployment and sovereign debt were high on the agenda of politicians—some topics change, others remain.

Looking forward to 2039, the impacts of climate change will have amplified: invasive species—both plants such as ambrosia and animals such as the tiger mosquito—will have advanced further northward in Europe, while intensive agriculture in Scotland and Scandinavia will have become the norm—the European Union (EU) expects a 75 per cent increase in agricultural yields in these regions.

Other topics, such as bacteria which are resistant to antibiotics, represent a formidable challenge both as an opportunity for science and a risk to society. The European Commission estimates that today, 25,000 people die annually as a result of an infection with multi-drug-resistant bacteria.

The ageing population is another major opportunity and risk in the hands of policymakers, a topic which The Geneva Association started analysing more than 25 years ago. Yet the multiple benefits of continued activity by the elderly—such as lower health costs—are only starting to be recognised by politicians. And most companies, organisations and administrations are still extremely hesitant to keep able employees beyond the legal age of retirement.

No easy predictions can be made on the outcome of societal changes. Trends such as a shift from science-based policymaking to policy-based science, from evidence-based advocacy to advocacy-based evidence and from fault-based liability to need-based compensation could lead society down the wrong path, which may be irreversible.

The last paragraph from the excerpt is the most interesting to me, as it puts some of the current machinations within Canadian public life into context within the European (and I suspect the international) political scene.

I do have a comment or two about the research but first here’s a citation for it,

Insurance Market Perception of Nanotechnology and Nanomaterials Risks by Lijana Baublyte, Martin Mullins, Finbarr Murphy and Syed A.M. Tofai. Geneva Association Newsletter: Risk Management, No. 54, June 2014.

No date is offered for when the research was conducted and there is no indication in the newsletter that it was published prior to its June 2014 publication.

As for the research itself, first, the respondents are self-assessing their knowledge about nanotechnology. That presents an interesting problem for researchers since self-assessment in any area is highly dependent on various attributes such as confidence, perceived intelligence, etc. For example, someone who’s more knowledgeable might self-assess as being less so than someone who has more confidence in themselves. As for this statistic from the report,

… Over 40 per cent of surveyed laypeople heard nothing at all about nanotechnologies and nanomaterials, 47.5 per cent said they were vaguely familiar with the technology and the remaining 11.7 per cent of respondents reported having moderate working knowledge.

Generally, people won’t tell you that they know about nanotechnologies and nanomaterials from a video game (Deus Ex) or a comic book (Iron Man’s Extremis story line), as they may not consider that to be knowledge or are embarrassed to admit it. In the case of the video game, the information about nanotechnology is based on reputable scientific research, although it is somewhat massaged to fit into the game ethos. Nonetheless, information about emerging technologies is often conveyed through pop culture properties and/or advertising, and most researchers don’t take that into account.

One more thing about layperson awareness: the researchers cite a meta-analysis conducted by Terre Satterfield et al. (full citation: Satterfield, T., Kandlikar, M., Beaudrie, C.E.H., Conti, J., and Herr Harthorn, B. [2009]. Anticipating the perceived risk of nanotechnologies. Nature Nanotechnology, 4[11]: 752–758), which was published in 2009 (mentioned in my Sept. 22, 2009 post; scroll down about 35% of the way). As I recall, the meta-analysis fell a bit short, as the researchers didn’t provide in-depth analysis of the research instruments (questionnaires), instead analysing only the results. That said, one can’t ‘reinvent the wheel’ every time one writes a paper or analyses data, although I do wish just once I’d stumble across a study where researchers analysed the assumptions posed by the wording of the questions.

A review of nanotechnology in green technology

Michael Berger has written a Nov. 18, 2014 Nanowerk Spotlight article focusing on the ‘green’ in nanotechnology (Note: A link has been removed),

There is a general perception that nanotechnologies will have a significant impact on developing ‘green’ and ‘clean’ technologies with considerable environmental benefits. The associated concept of green nanotechnology aims to exploit nanotech-enabled innovations in materials science and engineering to generate products and processes that are energy efficient as well as economically and environmentally sustainable. These applications are expected to impact a large range of economic sectors, such as energy production and storage, clean-up technologies, as well as construction and related infrastructure industries.

A recent review article in Environmental Health (“Opportunities and challenges of nanotechnology in the green economy”) examines opportunities and practical challenges that nanotechnology applications pose in addressing the guiding principles for a green economy.

Here’s a link to and citation for the review article cited by Berger. It is more focused on occupational health and safety than the title suggests, which is not surprising when you realize all of the authors are employed by the US National Institute for Occupational Safety and Health (NIOSH),

Opportunities and challenges of nanotechnology in the green economy by Ivo Iavicoli, Veruscka Leso, Walter Ricciardi, Laura L Hodson, and Mark D Hoover. Environmental Health 2014, 13:78. doi:10.1186/1476-069X-13-78. Published 7 October 2014

© 2014 Iavicoli et al.; licensee BioMed Central Ltd.

This is an open access article.

Here’s the background to the work (from the article; Note: Links have been removed),

The “green economy” concept has been driven into the mainstream of policy debate by global economic crisis, expected increase in global demand for energy by more than one third between 2010 to 2035, rising commodity prices as well as the urgent need for addressing global challenges in domains such as energy, environment and health [1-3].

The term “green economy”, chiefly relating to the principles of sustainable development, was first coined in a pioneering 1989 report for the Government of the United Kingdom by a group of leading environmental economists [1]. The most widely used and reliable definition of “green economy” comes from the United Nations Environment Programme which states that “a green economy is one that results in improved human well-being and social equity, while significantly reducing environmental risks and ecological scarcities. It is low carbon, resource efficient, and socially inclusive” [4].

The green economy concept can indeed play a very useful role in changing the way that society manages the interaction of the environmental and economic domains. In this context, nanotechnology, which is the manipulation of matter in the dimension of 1 to 100 nm, offers the opportunity to produce new structures, materials and devices with unique physico-chemical properties (i.e. small size, large surface area to mass ratio) to be employed in energy efficient as well as economically and environmentally sustainable green innovations [8-12].

Although expected to exert a great impact on a large range of industrial and economic sectors, the sustainability of green nano-solutions is currently not completely clear, and it should be carefully faced. In fact, the benefits of incorporating nanomaterials (NMs) in processes and products that contribute to outcomes of sustainability, might bring with them environmental, health and safety risks, ethical and social issues, market and consumer acceptance uncertainty as well as a strong competition with traditional technologies [13].

The present review examines opportunities and practical challenges that nano-applications pose in addressing the guiding principles for a green economy. Examples are provided of the potential for nano-applications to address social and environmental challenges, particularly in energy production and storage thus reducing pressure on raw materials, clean-up technologies as well as in fostering sustainable manufactured products. Moreover, the review aims to critically assess the impact that green nanotechnology may have on the health and safety of workers involved in this innovative sector and proposes action strategies for the management of emerging occupational risks.
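As a back-of-the-envelope aside (my own illustration, not taken from the review), the “large surface area to mass ratio” mentioned in the excerpt above falls straight out of geometry: for spheres of diameter d and density ρ, the specific surface area is 6/(ρ·d), so shrinking particles from micron to nanometre scale multiplies the surface available per gram a hundredfold. The silica density figure below is an assumed, approximate value for the sake of the example.

```python
# Rough sketch: specific surface area (SSA) of monodisperse spherical
# particles. For a sphere, surface/volume = 6/d, so SSA = 6 / (rho * d):
# halving the diameter doubles the surface area per unit mass.

def specific_surface_area(diameter_m: float, density_kg_m3: float) -> float:
    """Return SSA of spheres in m^2 per gram."""
    ssa_m2_per_kg = 6.0 / (density_kg_m3 * diameter_m)
    return ssa_m2_per_kg / 1000.0  # m^2/kg -> m^2/g

# Silica (assumed density ~2200 kg/m^3) at two particle sizes:
for d in (10e-9, 1e-6):  # a 10 nm nanoparticle vs a 1 micron powder grain
    print(f"d = {d * 1e9:6.0f} nm -> SSA = {specific_surface_area(d, 2200):7.1f} m^2/g")
```

A 10 nm particle works out to roughly 270 m²/g versus under 3 m²/g for a micron-sized grain of the same material, which is the geometric fact behind many of the nano-enabled efficiencies the review discusses.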

The potential nanotechnology impact on green innovations

Green nanotechnology is expected to play a fundamental role in bringing a key functionality across the whole value chain of a product, both through the beneficial properties of NMs included as a small percentage in a final device, as well as through nano-enabled processes and applications without final products containing any NMs [13,14]. However, most of the potential green nano-solutions are still in the lab/start-up phase and very few products have reached the market to date. Further studies are necessary to assess the applicability, efficiency and sustainability of nanotechnologies under more realistic conditions, as well as to validate NM enabled systems in comparison to existing technologies. The following paragraphs will describe the potential fields of application for green nanotechnology innovations.

Intriguingly, there’s no mention (that I could find) of soil remediation (clean-up), although there is reference to water remediation. As for occupational health and safety and nanotechnology, the authors have this to say (Note: Links have been removed),

In this context according to the proposed principles for green economy, it is important that society, scientific community and industry take advantage of opportunities of nanotechnology while overcoming its practical challenges. However, not all revolutionary changes are sustainable per se and a cautious assessment of the benefits addressing economic, social and environmental implications, as well as the occupational health and safety impact is essential [95,96]. This latter aspect, in particular, should be carefully addressed, in consideration of the expected widespread use of nanotechnology and the consequent increasing likelihood of NM exposure in both living and occupational environments. Moreover, difficulties in nano-manufacturing and handling; uncertainty concerning stability of nano-innovations under aggressive or long-term operation (i.e. in the case of supercapacitors with nano-structured electrode materials or nano-enabled construction products); the lack of information regarding the release and fate of NMs in the environment (i.e. NMs released from water and wastewater treatment devices) as well as the limited knowledge concerning the NM toxicological profile, even further support the need for a careful consideration of the health and safety risks derived from NM exposure. Importantly, as shown in Figure 1, a number of potentially hazardous exposure conditions can be expected for workers involved in nanotechnology activities. In fact, NMs may have significant, still unknown, hazards that can pose risks for a wide range of workers: researchers, laboratory technicians, cleaners, production workers, transportation, storage and retail workers, employees in disposal and waste facilities and potentially, emergency responders who deal with spills and disasters of NMs, who may be differently exposed to these potential, innovative xenobiotics.

The review article is quite interesting, despite its precaution-heavy approach, but if you don’t have time, Berger summarizes it. He also provides links to related articles he has written on the subjects of energy storage, evaluating ‘green’ nanotechnology in a full life cycle assessment, and more.