Protecting soldiers from biological and chemical agents with a ‘second skin’ made of carbon nanotubes

There are lots of ‘second skins’ that promise to protect against various chemical and biological agents; the big plus for this ‘skin’ from the US Lawrence Livermore National Laboratory is breathability. From an Aug. 3, 2016 news item on Nanowerk (Note: A link has been removed),

This material is the first key component of futuristic smart uniforms that also will respond to and protect from environmental chemical hazards. The research appears in the July 27 [2016] edition of the journal Advanced Materials (“Carbon Nanotubes: Ultrabreathable and Protective Membranes with Sub-5 nm Carbon Nanotube Pores”).

An Aug. 3, 2016 Lawrence Livermore National Laboratory (LLNL) news release (also on EurekAlert), which originated the news item, explains further (Note: Links have been removed),

High breathability is a critical requirement for protective clothing to prevent heat-stress and exhaustion when military personnel are engaged in missions in contaminated environments. Current protective military uniforms are based on heavyweight full-barrier protection or permeable adsorptive protective garments that cannot meet the critical demand of simultaneous high comfort and protection, and provide a passive rather than active response to an environmental threat.

The LLNL team fabricated flexible polymeric membranes with aligned carbon nanotube (CNT) channels as moisture conductive pores. The size of these pores (less than 5 nanometers, nm) is 5,000 times smaller than the width of a human hair [if 1 nm is 1/100,000 or 1/60,000 of a human hair {the two most commonly used measurements}, then wouldn’t 5 nm be between 1/20,000 and 1/12,000 of a human hair?].
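
Spelling out that arithmetic (my check, using the hair widths cited in the aside above):

\[
\frac{100{,}000\ \mathrm{nm}}{5\ \mathrm{nm}} = 20{,}000, \qquad \frac{60{,}000\ \mathrm{nm}}{5\ \mathrm{nm}} = 12{,}000
\]

so a 5 nm pore is 12,000–20,000 times narrower than a hair. For the release’s figure of 5,000 to hold, the reference hair would have to be only 5,000 × 5 nm = 25,000 nm (25 micrometres) wide, at the very fine end of the usual range.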

“We demonstrated that these membranes provide rates of water vapor transport that surpass those of commercial breathable fabrics like GoreTex, even though the CNT pores are only a few nanometers wide,” said Ngoc Bui, the lead author of the paper.

To provide high breathability, the new composite material takes advantage of the unique transport properties of carbon nanotube pores. By quantifying the membrane permeability to water vapor, the team found for the first time that, when a concentration gradient is used as a driving force, CNT nanochannels can sustain gas-transport rates exceeding that of a well-known diffusion theory by more than one order of magnitude.
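
The release doesn’t name the theory, but for channels this narrow the standard benchmark is Knudsen diffusion, in which gas molecules collide with the pore walls far more often than with each other. As a sketch (my addition; the paper may use a different formulation), the Knudsen diffusivity for a cylindrical pore of radius r is

\[
D_K = \frac{2r}{3}\sqrt{\frac{8RT}{\pi M}}
\]

where R is the gas constant, T the temperature, and M the molar mass of the diffusing gas (18 g/mol for water vapour). Measured transport exceeding this kind of estimate by more than an order of magnitude is what makes the CNT result striking.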

These membranes also provide protection from biological agents due to their very small pore size — less than 5 nanometers (nm) wide. Biological threats like bacteria and viruses are much larger, typically more than 10 nm in size. Tests demonstrated that the CNT membranes repelled Dengue virus from aqueous solutions during filtration. This confirms that LLNL-developed CNT membranes provide effective protection from biological threats by size exclusion rather than by merely preventing wetting.

Furthermore, the results show that CNT pores combine high breathability and bio-protection in a single functional material.

However, chemical agents are much smaller in size and require the membrane pores to be able to react to block the threat. To encode the membrane with a smart and dynamic response to small chemical hazards, LLNL scientists and collaborators are surface modifying these prototype carbon nanotube membranes with chemical-threat-responsive functional groups. These functional groups will sense and block the threat like gatekeepers on the pore entrance. A second response scheme also is in development — similar to how living skin peels off when challenged with dangerous external factors. The fabric will exfoliate upon reaction with the chemical agent.

“The material will be like a smart second skin that responds to the environment,” said Kuang Jen Wu, leader of LLNL’s Biosecurity & Biosciences Group. “In this way, the fabric will be able to block chemical agents such as sulfur mustard (blister agent), GD and VX nerve agents, toxins such as staphylococcal enterotoxin and biological spores such as anthrax.”

Current work is directed toward designing this multifunctional material to undergo a rapid transition from the breathable state to the protective state.

“These responsive membranes are expected to be particularly effective in mitigating a physiological burden because a less breathable but protective state can be actuated locally and only when needed,” said Francesco Fornasiero, LLNL’s principal investigator of the project.

The new uniforms could be deployed in the field in less than 10 years.

“The goal of this science and technology program is to develop a focused, innovative technological solution for future chemical biological defense protective clothing,” said Tracee Whitfield, the DTRA [US Defense Threat Reduction Agency] science and technology manager for the Dynamic Multifunctional Material for a Second Skin Program. “Swatch-level evaluations will occur in early 2018 to demonstrate the concept of ‘second skin,’ a major milestone that is a key step in the maturation of this technology.”

The researchers have prepared a video describing their work.

Here’s a link to and a citation for the paper,

Ultrabreathable and Protective Membranes with Sub-5 nm Carbon Nanotube Pores by Ngoc Bui, Eric R. Meshot, Sangil Kim, José Peña, Phillip W. Gibson, Kuang Jen Wu, and Francesco Fornasiero. Advanced Materials, Volume 28, Issue 28, pages 5871–5877, July 27, 2016. DOI: 10.1002/adma.201600740. Version of Record online: 9 May 2016.

© 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

Better contrast agents for magnetic resonance imaging with nanoparticles

I wonder what’s going on in the field of magnetic resonance imaging. This is the third news item I’ve stumbled across on the topic in the last couple of months. (Links to the other two posts follow at the end of this post.) By comparison, that’s more than in the previous seven years (2008–2015) combined.

The latest research concerns a new and better contrast agent. From an Aug. 3, 2016 news item on Nanowerk,

Scientists at the University of Basel [Switzerland] have developed nanoparticles which can serve as efficient contrast agents for magnetic resonance imaging. This new type of nanoparticles [sic] produce around ten times more contrast than the actual [i.e., currently used] contrast agents and are responsive to specific environments.

An Aug. 3, 2016 University of Basel press release (also on EurekAlert), which originated the news item, explains further,

Contrast agents are usually based on the metal gadolinium, which is injected and serves to improve the imaging of various organs in an MRI. Gadolinium ions must be bound to a carrier compound to prevent the toxicity of the free ions to the human body. Therefore, highly efficient contrast agents requiring lower gadolinium concentrations represent an important step toward advancing diagnosis and improving patient prognosis.

Smart nanoparticles as contrast agents

The research groups of Prof. Cornelia Palivan and Prof. Wolfgang Meier from the Department of Chemistry at the University of Basel have introduced a new type of nanoparticles [sic], which combine multiple properties required for contrast agents: an increased MRI contrast for lower concentration, a potential for long blood circulation and responsiveness to different biochemical environments. These nanoparticles were obtained by co-assembly of heparin-functionalized polymers with trapped gadolinium ions and stimuli-responsive peptides.

The study shows that the nanoparticles can enhance the MRI signal tenfold compared with current agents. In addition, they have enhanced efficacy in reductive milieu, characteristic of specific regions such as cancerous tissues. These nanoparticles fulfill numerous key criteria for further development, such as absence of cellular toxicity, no apparent anticoagulation property, and high shelf stability. The concept developed by the researchers at the University of Basel to produce better contrast agents based on nanoparticles highlights a new direction in the design of MRI contrast agents and supports their implementation in future applications.
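
Some textbook background (mine, not the press release’s) on how contrast efficiency is quantified: an agent shortens the relaxation time T1 of nearby water protons in proportion to its concentration, with the proportionality constant r1 called the relaxivity (the “enhanced relaxivity” of the paper’s title refers to this quantity):

\[
\frac{1}{T_1} = \frac{1}{T_{1,0}} + r_1\,[\mathrm{Gd}]
\]

Here T1,0 is the relaxation time without the agent and [Gd] is the gadolinium concentration. A roughly tenfold gain in contrast at a fixed dose thus implies a roughly tenfold higher r1, or, equivalently, about a tenth of the gadolinium for the same signal, which is the safety argument made above.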

Here’s a link to and a citation for the paper,

Nanoparticle-based highly sensitive MRI contrast agents with enhanced relaxivity in reductive milieu by Severin J. Sigg, Francesco Santini, Adrian Najer, Pascal U. Richard, Wolfgang P. Meier, and Cornelia G. Palivan. Chem. Commun., 2016, 52, 9937–9940. DOI: 10.1039/C6CC03396B. First published online 13 Jul 2016.

This paper is behind a paywall.

The other two MRI items featured here are in a June 10, 2016 posting (pH dependent nanoparticle-based contrast agent for MRIs [magnetic resonance images]) and in an Aug. 1, 2016 posting (Nuclear magnetic resonance microscope breaks records).

US white paper on neuromorphic computing (or the nanotechnology-inspired Grand Challenge for future computing)

The US has embarked on a number of what is called “Grand Challenges.” I first came across the concept when reading about the Bill and Melinda Gates (of Microsoft fame) Foundation. I gather these challenges are intended to provide funding for research that advances bold visions.

There is the US National Strategic Computing Initiative, established on July 29, 2015; its first-anniversary results were announced one year later to the day. Within that initiative, a nanotechnology-inspired Grand Challenge for Future Computing was issued and, according to a July 29, 2016 news item on Nanowerk, a white paper on the topic has been released (Note: A link has been removed),

Today [July 29, 2016], Federal agencies participating in the National Nanotechnology Initiative (NNI) released a white paper (pdf) describing the collective Federal vision for the emerging and innovative solutions needed to realize the Nanotechnology-Inspired Grand Challenge for Future Computing.

The grand challenge, announced on October 20, 2015, is to “create a new type of computer that can proactively interpret and learn from data, solve unfamiliar problems using what it has learned, and operate with the energy efficiency of the human brain.” The white paper describes the technical priorities shared by the agencies, highlights the challenges and opportunities associated with these priorities, and presents a guiding vision for the research and development (R&D) needed to achieve key technical goals. By coordinating and collaborating across multiple levels of government, industry, academia, and nonprofit organizations, the nanotechnology and computer science communities can look beyond the decades-old approach to computing based on the von Neumann architecture and chart a new path that will continue the rapid pace of innovation beyond the next decade.

A July 29, 2016 US National Nanotechnology Coordination Office news release, which originated the news item, further and succinctly describes the contents of the paper,

“Materials and devices for computing have been and will continue to be a key application domain in the field of nanotechnology. As evident by the R&D topics highlighted in the white paper, this challenge will require the convergence of nanotechnology, neuroscience, and computer science to create a whole new paradigm for low-power computing with revolutionary, brain-like capabilities,” said Dr. Michael Meador, Director of the National Nanotechnology Coordination Office. …

The white paper was produced as a collaboration by technical staff at the Department of Energy, the National Science Foundation, the Department of Defense, the National Institute of Standards and Technology, and the Intelligence Community. …

The white paper titled “A Federal Vision for Future Computing: A Nanotechnology-Inspired Grand Challenge” is 15 pp. and it offers tidbits such as this (Note: Footnotes not included),

A new materials base may be needed for future electronic hardware. While most of today’s electronics use silicon, this approach is unsustainable if billions of disposable and short-lived sensor nodes are needed for the coming Internet-of-Things (IoT). To what extent can the materials base for the implementation of future information technology (IT) components and systems support sustainability through recycling and bio-degradability? More sustainable materials, such as compostable or biodegradable systems (polymers, paper, etc.) that can be recycled or reused, may play an important role. The potential role for such alternative materials in the fabrication of integrated systems needs to be explored as well. [p. 5]

The basic architecture of computers today is essentially the same as those built in the 1940s—the von Neumann architecture—with separate compute, high-speed memory, and high-density storage components that are electronically interconnected. However, it is well known that continued performance increases using this architecture are not feasible in the long term, with power density constraints being one of the fundamental roadblocks. Further advances in the current approach using multiple cores, chip multiprocessors, and associated architectures are plagued by challenges in software and programming models. Thus, research and development is required in radically new and different computing architectures involving processors, memory, input-output devices, and how they behave and are interconnected. [p. 7]

Neuroscience research suggests that the brain is a complex, high-performance computing system with low energy consumption and incredible parallelism. A highly plastic and flexible organ, the human brain is able to grow new neurons, synapses, and connections to cope with an ever-changing environment. Energy efficiency, growth, and flexibility occur at all scales, from molecular to cellular, and allow the brain, from early to late stage, to never stop learning and to act with proactive intelligence in both familiar and novel situations. Understanding how these mechanisms work and cooperate within and across scales has the potential to offer tremendous technical insights and novel engineering frameworks for materials, devices, and systems seeking to perform efficient and autonomous computing. This research focus area is the most synergistic with the national BRAIN Initiative. However, unlike the BRAIN Initiative, where the goal is to map the network connectivity of the brain, the objective here is to understand the nature, methods, and mechanisms for computation, and how the brain performs some of its tasks. Even within this broad paradigm, one can loosely distinguish between neuromorphic computing and artificial neural network (ANN) approaches. The goal of neuromorphic computing is oriented towards a hardware approach to reverse engineering the computational architecture of the brain. On the other hand, ANNs include algorithmic approaches arising from machine learning, which in turn could leverage advancements and understanding in neuroscience as well as novel cognitive, mathematical, and statistical techniques. Indeed, the ultimate intelligent systems may well be the result of merging existing ANN (e.g., deep learning) and bio-inspired techniques. [p. 8]
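
For readers who want a feel for what “neuromorphic” means computationally, here is a minimal leaky integrate-and-fire (LIF) neuron, the textbook building block that neuromorphic hardware implements in silicon. This is my illustrative sketch, not anything from the white paper, and every parameter value below is invented:

```python
# A minimal leaky integrate-and-fire (LIF) neuron. Illustrative only; the
# white paper prescribes no particular model, and these parameters are made up.

def lif_run(inputs, v_rest=0.0, v_thresh=1.0, leak=0.9, weight=0.5):
    """Integrate weighted input spikes with leak; emit a spike at threshold."""
    v, spikes = v_rest, []
    for x in inputs:
        v = leak * v + weight * x   # leaky integration of the input
        if v >= v_thresh:           # threshold crossing -> output spike
            spikes.append(1)
            v = v_rest              # reset membrane potential after spiking
        else:
            spikes.append(0)
    return spikes

print(lif_run([1, 1, 1, 0, 1, 1, 1, 1]))  # -> [0, 0, 1, 0, 0, 0, 1, 0]
```

Contrast this event-driven, stateful unit with the stateless weighted sums of a conventional ANN layer; the white paper’s point is that the research agenda spans both styles.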

As government documents go, this is quite readable.

For anyone interested in learning more about the future federal plans for computing in the US, there is a July 29, 2016 posting on the White House blog celebrating the first year of the US National Strategic Computing Initiative Strategic Plan (29 pp. PDF; awkward but that is the title).

Selecta Biosciences’ proprietary tolerogenic nanoparticles improve efficacy and safety of biologic drugs

Although it may not seem like it initially, there is a nanotechnology element to this piece of news. From an Aug. 1, 2016 news item on Nanotechnology Now,

Selecta Biosciences, Inc. (NASDAQ: SELB), a clinical-stage biopharmaceutical company developing targeted antigen-specific immune therapies for rare and serious diseases, announced today that Nature Nanotechnology has published an article that presents preclinical results from Selecta’s research which demonstrate the broad potential applicability of Selecta’s novel immune tolerance platform. Details that elucidate the mechanism of action of the company’s immune tolerance therapy, SVP [Synthetic Vaccine Particle]-Rapamycin (SEL-110), were also shown. Data in the publication support the Company’s lead clinical program, showing Selecta’s SVP-Rapamycin (SEL-110) induces antigen-specific immune tolerance and mitigates the formation of anti-drug antibodies (ADAs) to biologic drugs, including pegsiticase (for gout) and adalimumab (for rheumatoid arthritis).

An Aug. 1, 2016 Selecta Biosciences news release (also on EurekAlert), which originated the news item, provides more information,

“Undesired immune responses affect both the efficacy and safety of marketed biologic therapies and the development of otherwise promising new technologies. Selecta’s SVP platform positions the company to enhance biologic therapy and to advance a pipeline of proprietary products that meet the therapeutic needs of patients with rare and serious diseases,” said Werner Cautreels, PhD, Chairman of the Board, CEO and President of Selecta Biosciences. “This publication in Nature Nanotechnology highlights the mechanism by which Selecta’s proprietary nanoparticles induce lasting antigen-specific tolerance. We believe that SVP-Rapamycin has the potential to mitigate ADAs against a broad range of biologic therapies.”

In the Nature Nanotechnology journal article, Selecta presents validation of the immune tolerance mechanism of action of the company’s technology, demonstrating that poly(lactic-co-glycolic acid) (PLGA) nanoparticles encapsulating rapamycin, but not free rapamycin, are capable of inducing durable immunological tolerance to co-administered proteins. This robust immune tolerance is characterized immunologically by: (1) induction of tolerogenic dendritic cells; (2) an increase in regulatory T cells; (3) reduction in B cell activation and germinal center formation; and (4) inhibition of antigen-specific hypersensitivity reactions.

Data presented in the journal article support the Company’s clinical lead program in gout, showing that intravenous co-administration of tolerogenic nanoparticles with pegylated uricase inhibited the formation of ADAs in mice and nonhuman primates and normalized serum uric acid levels in uricase-deficient mice. Underscoring the broad potential of the approach, results additionally show that subcutaneous co-administration of nanoparticles with adalimumab durably inhibited ADAs, resulting in normalized pharmacokinetics of the anti-TNFα antibody and protection against arthritis in TNFα transgenic mice.

In the published research, the induction of specific immune tolerance by SVP-Rapamycin (SEL-110) versus chronic immune suppression is supported by the findings that: (1) antigen must be co-administered at the time of SVP-Rapamycin (SEL-110) treatment; (2) immune tolerance is durable to many challenges of antigen alone; (3) animals tolerized to a specific antigen are capable of responding to an unrelated antigen, meaning that SVP-Rapamycin (SEL-110) does not induce a broad immune suppression; and (4) activation of naïve T cells is inhibited when adoptively transferred into previously tolerized mice. In contrast, daily administration of free rapamycin, at five times the total weekly rapamycin dose administered as SVP-Rapamycin, was observed to transiently suppress the immune response but did not induce durable immunological tolerance.

Here’s a link to and a citation for the paper,

Improving the efficacy and safety of biologic drugs with tolerogenic nanoparticles by Takashi K. Kishimoto, Joseph D. Ferrari, Robert A. LaMothe, Pallavi N. Kolte, Aaron P. Griset, Conlin O’Neil, Victor Chan, Erica Browning, Aditi Chalishazar, William Kuhlman, Fen-ni Fu, Nelly Viseux, David H. Altreuter, Lloyd Johnston, & Roberto A. Maldonado. Nature Nanotechnology (2016). DOI: 10.1038/nnano.2016.135. Published online 01 August 2016.

This paper is behind a paywall.

Self-cleaning textiles and waterless toilets in our nanotechnology-enabled future

Whoever wrote the headline for an Aug. 1, 2016 article about our nanotechnology-enabled future by Jason Lam (headlines are not always written by the author) for South China Morning Post had some fun with words, “Scientists are flushed with success: sunshine to replace the need to wash clothes, while toilets will no longer need water,” (Note: A link has been removed)

Self-cleaning textiles are being explored at RMIT University in Melbourne, Australia. In this pioneering technology, researchers have been growing nanostructures on cotton fabric which, when exposed to light, release a burst of energy that then degrades organic matter. So a little ray of sunshine – or even a light bulb – could, in effect, clean your clothes for you.

As the scientists explain it, the nanostructure is metal-based, so it can absorb visible light. This creates energy, which is able to degrade organic matter on which it is present [the textile], “so that’s how it’ll get rid of stains”.

Tests on stains have proven promising, say the scientists, with results achieved within six to 30 minutes of light exposure, depending on the material. The research is now moving on to sweat testing.

Stain-free fabrics have been around for a while, but they haven’t felt as comfortable as traditional textiles. Dropel Fabrics, a creator of hydrophobic natural textiles, is working to overcome that. It has developed a patented nanotechnology process that infuses cotton fibres with water-, stain-, and odour-repellent properties while maintaining, the company says, the textile’s softness and breathability.

Apparently, it involves a “simple process” that encapsulates polymers within the textile fibres and creates a protective layer. Invisible to the hand and eye, this protective layer does not affect the fabric’s texture, so its softness and construction are maintained. The company says it is exploring partnerships with high-end fashion brands.

Two Mexican industrial designers are working on their own solution for a waterless toilet, this time designed for urban areas. Reasoning that even some apartment dwellers have no access to sewage, their concept turns human waste into greywater which can safely be disposed of down the household drain.

I’m glad to have found Lam’s article as getting the perspective from Asia helps to balance this US-, Canada-, Euro-, and UK-centric science blog.

Curbing police violence with machine learning

A rather fascinating Aug. 1, 2016 article by Hal Hodson about machine learning and curbing police violence has appeared in New Scientist (Note: Links have been removed),

None of their colleagues may have noticed, but a computer has. By churning through the police’s own staff records, it has caught signs that an officer is at high risk of initiating an “adverse event” – racial profiling or, worse, an unwarranted shooting.

The Charlotte-Mecklenburg Police Department in North Carolina is piloting the system in an attempt to tackle the police violence that has become a heated issue in the US in the past three years. A team at the University of Chicago is helping them feed their data into a machine learning system that learns to spot risk factors for unprofessional conduct. The department can then step in before risk transforms into actual harm.

The idea is to prevent incidents in which officers who are stressed behave aggressively, for example, such as one in Texas where an officer pulled his gun on children at a pool party after responding to two suicide calls earlier that shift. Ideally, early warning systems would be able to identify individuals who had recently been deployed on tough assignments, and divert them from other sensitive calls.

According to Hodson, there are already systems, both human and algorithmic, in place but the goal is to make them better,

The system being tested in Charlotte is designed to include all of the records a department holds on an individual – from details of previous misconduct and gun use to their deployment history, such as how many suicide or domestic violence calls they have responded to. It retrospectively caught 48 out of 83 adverse incidents between 2005 and now – 12 per cent more than Charlotte-Mecklenburg’s existing early intervention system.

More importantly, the false positive rate – the fraction of officers flagged as being under stress who do not go on to act aggressively – was 32 per cent lower than the existing system’s. “Right now the systems that claim to do this end up flagging the majority of officers,” says Rayid Ghani, who leads the Chicago team. “You can’t really intervene then.”
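
The arithmetic behind those figures is worth making explicit. A minimal sketch (mine; the Chicago team’s evaluation pipeline is not public) reproducing the reported detection rate:

```python
# Illustrative only: reproduces the retrospective evaluation arithmetic
# reported in the article; the actual Charlotte-Mecklenburg / University
# of Chicago system and its data are not public.

def recall(caught: int, total_incidents: int) -> float:
    """Fraction of actual adverse incidents the system flagged."""
    return caught / total_incidents

print(f"recall: {recall(48, 83):.1%}")  # ~57.8% of incidents since 2005

# The article reports this as 12 per cent more incidents caught, and a
# false positive rate (officers flagged who did not go on to act
# aggressively) 32 per cent lower, than the existing intervention system.
```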

There is some cautious optimism about this new algorithm (Note: Links have been removed),

Frank Pasquale, who studies the social impact of algorithms at the University of Maryland, is cautiously optimistic. “In many walks of life I think this algorithmic ranking of workers has gone too far – it troubles me,” he says. “But in the context of the police, I think it could work.”

Pasquale says that while such a system for tackling police misconduct is new, it’s likely that older systems created the problem in the first place. “The people behind this are going to say it’s all new,” he says. “But it could be seen as an effort to correct an earlier algorithmic failure. A lot of people say that the reason you have so much contact between minorities and police is because the CompStat system was rewarding officers who got the most arrests.”

CompStat, short for Computer Statistics, is a police management and accountability system that was used to implement the “broken windows” theory of policing, which proposes that coming down hard on minor infractions like public drinking and vandalism helps to create an atmosphere of law and order, bringing serious crime down in its wake. Many police researchers have suggested that the approach has led to the current dangerous tension between police and minority communities.

Ghani has not forgotten the human dimension,

One thing Ghani is certain of is that the interventions will need to be decided on and delivered by humans. “I would not want any of those to be automated,” he says. “As long as there is a human in the middle starting a conversation with them, we’re reducing the chance for things to go wrong.”

h/t Terkko Navigator

I have written about police and violence here before, in the context of the Dallas Police Department and its use of a robot in a violent confrontation with a sniper: see my July 25, 2016 posting titled Robots, Dallas (US), ethics, and killing.

Deriving graphene-like films from salt

This research comes from Russia (mostly). A July 29, 2016 news item on ScienceDaily describes a graphene-like structure derived from salt,

Researchers from Moscow Institute of Physics and Technology (MIPT), Skolkovo Institute of Science and Technology (Skoltech), the Technological Institute for Superhard and Novel Carbon Materials (TISNCM), the National University of Science and Technology MISiS (Russia), and Rice University (USA) used computer simulations to find how thin a slab of salt has to be in order for it to break up into graphene-like layers. Based on the computer simulation, they derived the equation for the number of layers in a crystal that will produce ultrathin films with applications in nanoelectronics. …

Caption: Transition from a cubic arrangement into several hexagonal layers. Credit: authors of the study

A July 29, 2016 Moscow Institute of Physics and Technology press release on EurekAlert, which originated the news item, provides more technical detail,

From 3D to 2D

The unique monoatomic thickness of graphene makes it an attractive and useful material. Its crystal lattice resembles a honeycomb, as the bonds between the constituent atoms form regular hexagons. Graphene is a single layer of a three-dimensional graphite crystal, and its properties (as well as the properties of any 2D crystal) are radically different from those of its 3D counterpart. Since the discovery of graphene, a large amount of research has been directed at new two-dimensional materials with intriguing properties. Ultrathin films have unusual properties that might be useful for applications such as nano- and microelectronics.

Previous theoretical studies suggested that films with a cubic structure and ionic bonding could spontaneously convert to a layered hexagonal graphitic structure in what is known as graphitisation. For some substances, this conversion has been experimentally observed. It was predicted that rock salt NaCl can be one of the compounds with graphitisation tendencies. Graphitisation of cubic compounds could produce new and promising structures for applications in nanoelectronics. However, no theory has been developed that would account for this process in the case of an arbitrary cubic compound and make predictions about its conversion into graphene-like salt layers.

For graphitisation to occur, the crystal layers need to be reduced along the main diagonal of the cubic structure. This will result in one crystal surface being made of sodium ions Na⁺ and the other of chloride ions Cl⁻. It is important to note that positive and negative ions (i.e. Na⁺ and Cl⁻)–and not neutral atoms–occupy the lattice points of the structure. This generates charges of opposite signs on the two surfaces. As long as the surfaces are remote from each other, all charges cancel out, and the salt slab shows a preference for a cubic structure. However, if the film is made sufficiently thin, this gives rise to a large dipole moment due to the opposite charges of the two crystal surfaces. The structure seeks to get rid of the dipole moment, which increases the energy of the system. To make the surfaces charge-neutral, the crystal undergoes a rearrangement of atoms.

Experiment vs model

To study how graphitisation tendencies vary depending on the compound, the researchers examined 16 binary compounds with the general formula AB, where A stands for one of the four alkali metals lithium Li, sodium Na, potassium K, and rubidium Rb. These are highly reactive elements found in Group 1 of the periodic table. The B in the formula stands for any of the four halogens fluorine F, chlorine Cl, bromine Br, and iodine I. These elements are in Group 17 of the periodic table and readily react with alkali metals.

All compounds in this study come in a number of different structures, also known as crystal lattices or phases. If atmospheric pressure is increased to 300,000 times its normal value, another phase (B2) of NaCl (represented by the yellow portion of the diagram) becomes more stable, effecting a change in the crystal lattice. To test their choice of methods and parameters, the researchers simulated two crystal lattices and calculated the pressure that corresponds to the phase transition between them. Their predictions agree with experimental data.

Just how thin should it be?

The compounds within the scope of this study can all have a hexagonal, “graphitic”, G phase (the red in the diagram) that is unstable in 3D bulk but becomes the most stable structure for ultrathin (2D or quasi-2D) films. The researchers identified the relationship between the surface energy of a film and the number of layers in it for both cubic and hexagonal structures. They graphed this relationship by plotting two lines with different slopes for each of the compounds studied. Each pair of lines associated with one compound has a common point that corresponds to the critical slab thickness that makes conversion from a cubic to a hexagonal structure energetically favourable. For example, the critical number of layers was found to be close to 11 for all sodium salts and between 19 and 27 for lithium salts.
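
To make the “two lines” picture concrete (my notation; the paper may parameterise it differently): if the energy of an n-layer slab is approximately linear in n for each phase,

\[
E_{\mathrm{cubic}}(n) \approx a_c\,n + b_c, \qquad E_{\mathrm{hex}}(n) \approx a_h\,n + b_h,
\]

then the lines cross at the critical layer number

\[
n_c = \frac{b_h - b_c}{a_c - a_h},
\]

below which the hexagonal, graphene-like stacking is energetically favourable. The values quoted above (about 11 layers for sodium salts, 19–27 for lithium salts) are these crossing points.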

Based on this data, the researchers established a relationship between the critical number of layers and two parameters that determine the strength of the ionic bonds in various compounds. The first parameter indicates the size of an ion of a given metal–its ionic radius. The second parameter is called electronegativity and is a measure of the A atom’s ability to attract the electrons of element B. Higher electronegativity means more powerful attraction of electrons by the atom, a more pronounced ionic nature of the bond, a larger surface dipole, and a lower critical slab thickness.

And there’s more

Pavel Sorokin, Dr. habil., [sic] is head of the Laboratory of New Materials Simulation at TISNCM. He explains the importance of the study, ‘This work has already attracted our colleagues from Israel and Japan. If they confirm our findings experimentally, this phenomenon [of graphitisation] will provide a viable route to the synthesis of ultrathin films with potential applications in nanoelectronics.’

The scientists intend to broaden the scope of their studies by examining other compounds. They believe that ultrathin films of different composition might also undergo spontaneous graphitisation, yielding new layered structures with properties that are even more intriguing.

Here’s a link to and a citation for the paper,

Ionic Graphitization of Ultrathin Films of Ionic Compounds by A. G. Kvashnin, E. Y. Pashkin, B. I. Yakobson, and P. B. Sorokin. J. Phys. Chem. Lett., 2016, 7 (14), pages 2659–2663. DOI: 10.1021/acs.jpclett.6b01214. Publication Date (Web): June 23, 2016.

Copyright © 2016 American Chemical Society

This paper is behind a paywall.

Strengthening science outreach initiatives

It’s a great idea but there are some puzzling aspects to the newly announced research outreach initiative from Sigma Xi, The Scientific Research Society. From a July 28, 2016 Sigma Xi news release on EurekAlert,

With 130 years of history and hundreds of thousands of inductees, Sigma Xi is the oldest and largest multidisciplinary honor society for scientists and engineers in the world. Its new program, the Research Communications Initiative (RCI), builds on the Society’s mission to enhance the health of the research enterprise, foster integrity in science and engineering, and promote the public’s understanding of science.

Through RCI, Sigma Xi will team up with researchers and partner institutions who wish to effectively tell general audiences, research administrators, and other investigators about their work. Sigma Xi will help its RCI partners develop a strategy for sharing their research and connect them with leading communication professionals who will develop content, including feature-length articles, videos, infographics, animations, podcasts, social media campaigns, and more. Sigma Xi will provide both digital and print publishing platforms so that partners may reach new audiences by the thousands. Finally, partners will receive a data-driven evaluation of the success of their communications.

“The Research Communications Initiative is an innovative program that calls on the expertise we’ve developed and perfected over our 100-year history of communicating science,” said Jamie Vernon, Sigma Xi’s director of science communications and publications. “We know that institutions can strengthen their reputation by sharing their research and that public and private funding agencies are asking for more outreach from their grant recipients. Sigma Xi is uniquely qualified to provide this service because of our emphasis on ethical research, our worldwide chapter and member network who can be an audience for our RCI partner communications, and our experience in publishing American Scientist.”

Sigma Xi’s award-winning magazine, American Scientist, contains articles for science enthusiasts that are written by researchers–scientists, engineers, and investigators of myriad disciplines–including Nobel laureates and other prominent investigators. The magazine is routinely recognized by researchers, educators, and the public for its trustworthy and engaging content. This editorial insight and expertise will help shape the future of science communication through RCI.

RCI partners will have the option to have their communications included in special sections or inserts in American Scientist or to have content on American Scientist‘s website as well as RCI digital platforms or partner’s sites. All RCI content will be fully disclosed as a product of the partnership program and will be published under a Creative Commons license, making it free to be republished. Sigma Xi has called upon its relationships with other like-minded organizations, such as the National Alliance for Broader Impacts, Council of Graduate Schools, and the U.S. Council on Competitiveness, to distribute the work created with RCI partners to leaders in the research community. The Society plans to have a variety of other organizations involved.

There is a lot to like about this initiative but it’s not immediately clear what is meant by the ‘partners’ who will be accessing this service. Is a partner simply a member, or is there a sponsorship fee or some other fee structure for institutions and individuals that wish to participate in the RCI? Is the effort confined to US science and/or English-language science? In any event, you can check out the Sigma Xi site here and the RCI webpage here.


Two nano workshops precede OpenTox Euro conference

The main conference, OpenTox Euro, is focused on novel materials, and it is preceded by two nano workshops. All of these events will take place in Germany in Oct. 2016. From an Aug. 11, 2016 posting by Lynn L. Bergeson on Nanotechnology Now,

The OpenTox Euro Conference, “Integrating Scientific Evidence Supporting Risk Assessment and Safer Design of Novel Substances,” will be held October 26-28, 2016. … The current topics for the Conference include: (1) computational modeling of mechanisms at the nanoscale; (2) translational bioinformatics applied to safety assessment; (3) advances in cheminformatics; (4) interoperability in action; (5) development and application of adverse outcome pathways; (6) open science applications showcase; (7) toxicokinetics and extrapolation; and (8) risk assessment.

On Oct. 24, 2016, two days before OpenTox Euro, the EU-US Nano EHS [Environmental Health and Safety] 2016 workshop will be held in Germany. The theme is: ‘Enabling a Sustainable Harmonised Knowledge Infrastructure supporting Nano Environmental and Health Safety Assessment’ and the objectives are,

The objective of the workshop is to facilitate networking, knowledge sharing and idea development on the requirements and implementation of a sustainable knowledge infrastructure for Nano Environmental and Health Safety Assessment and Communications. The infrastructure should support the needs required by different stakeholders including scientific researchers, industry, regulators, workers and consumers.

The workshop will also identify funding opportunities and financial models within and beyond current international and national programs. Specifically, the workshop will facilitate active discussions but also identify potential partners for future EU-US cooperation on the development of knowledge infrastructure in the NanoEHS field. Advances in the Nano Safety harmonisation process, including developing an ongoing working consensus on data management and ontology, will be discussed:

– Information needs of stakeholders and applications
– Data collection and management in the area of NanoEHS
– Developments in ontologies supporting NanoEHS goals
– Harmonisation efforts between EU and US programs
– Identify practice and infrastructure gaps and possible solutions
– Identify needs and solutions for different stakeholders
– Propose an overarching sustainable solution for the market and society

The presentations will be focused on the current efforts and concrete achievements within EU and US initiatives and their potential elaboration and extension.

The second workshop is being held by the eNanoMapper (ENM) project on Oct. 25, 2016 and concerns Nano Modelling. The objectives and workshop sessions are:

1. Give the opportunity to research groups working on computational nanotoxicology to disseminate their modelling tools based on hands-on examples and exercises
2. Present a collection of modelling tools that can span the entire lifecycle of nanotox research, from the design of experiments to the use of models for risk assessment in biological and environmental systems.
3. Engage the workshop participants in using different modelling tools and motivate them to contribute and share their knowledge.

Indicative workshop sessions

• Preparation of datasets to be used for modelling and risk assessment
• Ontologies and databases
• Computation of theoretical descriptors
• NanoQSAR Modelling
• Ab-initio modelling
• Mechanistic modelling
• Modelling based on Omics data
• Filling data gaps-Read Across
• Risk assessment
• Experimental design

We would encourage research teams that have developed tools in the areas of computational nanotoxicology and risk assessment to demonstrate their tools in this workshop.

That’s going to be a very full week in Germany.

You can register for OpenTox Euro and more here.

Two European surveys on disposal practices for manufactured nano-objects

Lynn L. Bergeson’s Aug. 10, 2016 post on Nanotechnology Now announces two surveys (one for producers of nanoscale objects and one for waste disposal companies) being conducted by the European Commission,

Under European Commission (EC) funding, the European Committee for Standardization Technical Committee (CEN/TC) 352 — Nanotechnologies is developing guidelines relating to the safe waste management and disposal of deliberately manufactured nano-objects.

Tatiana Correia has written a July 15, 2016 description of the committee’s surveys for Innovate UK Network,

Under European Commission funding, the CEN TC 352 European standardisation committee is developing guidelines relating to the safe waste management and disposal of deliberately manufactured nano-objects. These are discrete pieces of material with one or more dimensions in the nanoscale(1). These may also be referred to as nanoparticles, quantum dots, nanofibres, nanotubes and nanoplates. The guidelines will provide guidance for all waste management activities from the manufacturing and processing of manufactured nano-objects (MNOs). In order to ensure that the context for this document is correct, it is useful to gain an insight into current practice in the disposal of MNOs.

Here’s a link to the Questionnaire relating to current disposal practice for Manufactured Nano-objects in Waste – Companies manufacturing or processing manufactured nano-objects and to the Questionnaire relating to current disposal practice for Manufactured Nano-objects in Waste – Waste disposal companies.

The deadline for both surveys is Sept. 5, 2016.