# Two nano workshops precede OpenTox Euro conference

The main conference, OpenTox Euro, is focused on novel materials and is preceded by two nano workshops. All of these events will be taking place in Germany in Oct. 2016. From an Aug. 11, 2016 posting by Lynn L. Bergeson on Nanotechnology Now,

The OpenTox Euro Conference, “Integrating Scientific Evidence Supporting Risk Assessment and Safer Design of Novel Substances,” will be held October 26-28, 2016. … The current topics for the Conference include: (1) computational modeling of mechanisms at the nanoscale; (2) translational bioinformatics applied to safety assessment; (3) advances in cheminformatics; (4) interoperability in action; (5) development and application of adverse outcome pathways; (6) open science applications showcase; (7) toxicokinetics and extrapolation; and (8) risk assessment.

On Oct. 24, 2016, two days before OpenTox Euro, the EU-US Nano EHS [Environmental Health and Safety] 2016 workshop will be held in Germany. The theme is: ‘Enabling a Sustainable Harmonised Knowledge Infrastructure supporting Nano Environmental and Health Safety Assessment’ and the objectives are,

The objective of the workshop is to facilitate networking, knowledge sharing and idea development on the requirements and implementation of a sustainable knowledge infrastructure for Nano Environmental and Health Safety Assessment and Communications. The infrastructure should support the needs required by different stakeholders including scientific researchers, industry, regulators, workers and consumers.

The workshop will also identify funding opportunities and financial models within and beyond current international and national programs. Specifically, the workshop will not only facilitate active discussions but also identify potential partners for future EU-US cooperation on the development of knowledge infrastructure in the NanoEHS field. Advances in the Nano Safety harmonisation process, including developing an ongoing working consensus on data management and ontology, will be discussed:

– Information needs of stakeholders and applications
– Data collection and management in the area of NanoEHS
– Developments in ontologies supporting NanoEHS goals
– Harmonisation efforts between EU and US programs
– Identify practice and infrastructure gaps and possible solutions
– Identify needs and solutions for different stakeholders
– Propose an overarching sustainable solution for the market and society

The presentations will be focused on the current efforts and concrete achievements within EU and US initiatives and their potential elaboration and extension.

The second workshop is being held by the eNanoMapper (ENM) project on Oct. 25, 2016 and concerns Nano Modelling. The objectives and workshop sessions are:

1. Give the opportunity to research groups working on computational nanotoxicology to disseminate their modelling tools based on hands-on examples and exercises
2. Present a collection of modelling tools that can span the entire lifecycle of nanotox research, from the design of experiments to the use of models for risk assessment in biological and environmental systems.
3. Engage the workshop participants in using different modelling tools and motivate them to contribute and share their knowledge.

Indicative workshop sessions

• Preparation of datasets to be used for modelling and risk assessment
• Ontologies and databases
• Computation of theoretical descriptors
• NanoQSAR modelling
• Ab-initio modelling
• Mechanistic modelling
• Modelling based on omics data
• Filling data gaps – read-across
• Risk assessment
• Experimental design

We would encourage research teams that have developed tools in the areas of computational nanotoxicology and risk assessment to demonstrate their tools in this workshop.

That’s going to be a very full week in Germany.

# Generating clean fuel with individual gold atoms

A July 22, 2016 news item on Nanowerk highlights an international collaboration focused on producing clean fuel,

A combined experimental and theoretical study comprising researchers from the Chemistry Department and LCN [London Centre for Nanotechnology], along with groups in Argentina, China, Spain and Germany, has shed new light on the behaviour of individual gold atoms supported on defective thin cerium dioxide films – an important system for catalysis and the generation of clean hydrogen for fuel.

A July ??, 2016 LCN press release, which originated the news item, expands on the theme of catalysts, the research into individual gold atoms, and how all this could result in clean fuel,

Catalysis plays a vital role in our world; an estimated 80% of all chemicals and materials are made via processes that involve catalysts, which are commonly a mixture of metals and oxides. The standard motif for these heterogeneous catalysts (where the catalysts are solid and the reactants are in the gas phase) is a high-surface-area oxide support decorated with metal nanoparticles a few nanometres in diameter. Cerium dioxide (ceria, CeO2) is a widely used support material for many important industrial processes; metal nanoparticles supported on ceria have displayed high activities for applications including car catalytic converters, alcohol synthesis, and hydrogen production. There are two key attributes of ceria which make it an excellent active support material: its oxygen storage and release ability, and its ability to stabilise small metal particles under reaction conditions. A system that has recently attracted much interest is that of gold nanoparticles and single atoms on ceria, which has demonstrated high activity towards the water-gas-shift reaction (CO + H2O → CO2 + H2), a key stage in the generation of clean hydrogen for use in fuel cells.
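The water-gas-shift reaction mentioned above is exothermic, which is why its equilibrium (and hence the attainable hydrogen yield) worsens as temperature rises, and why catalysts that are active at low temperature matter so much. Here's a minimal sketch of that trade-off using approximate textbook thermodynamic values (my assumption, not figures from this study):

```python
import math

# Water-gas-shift: CO + H2O -> CO2 + H2
# Illustrative thermodynamics with approximate textbook values
# (assumed, not taken from the paper): dH ~ -41.1 kJ/mol, dS ~ -42.1 J/(mol K).
R = 8.314          # gas constant, J/(mol K)
DH = -41.1e3       # J/mol (exothermic)
DS = -42.1         # J/(mol K)

def equilibrium_constant(T):
    """K = exp(-dG/RT) with dG = dH - T*dS (dH, dS assumed T-independent)."""
    dG = DH - T * DS
    return math.exp(-dG / (R * T))

for T in (298.0, 500.0, 800.0):
    print(f"T = {T:5.0f} K  K_eq = {equilibrium_constant(T):.3g}")
```

Running this shows the equilibrium constant falling by several orders of magnitude between room temperature and 800 K, which is the thermodynamic motivation for hunting down catalysts, like the gold-ceria system here, that work at low temperature.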

The nature of the active sites of these catalysts and the role that defects play are still relatively poorly understood; in order to study them in a systematic fashion, the researchers prepared model systems which can be characterised on the atomic scale with a scanning tunnelling microscope.

Figure: STM images of CeO2-x(111) ultrathin films before and after the deposition of Au single atoms at 300 K. The bright lattice is from the oxygen atoms at the surface – vacancies appear as dark spots

These model systems comprised well-ordered, epitaxial ceria films less than 2 nm thick, prepared on a metal single crystal, onto which single atoms and small clusters of gold were evaporated under ultra-high vacuum (essential to prevent contamination of the surfaces). Oxygen vacancy defects – missing oxygen atoms in the top layer of the ceria – are relatively common at the surface and appear as dark spots in the STM images. By mapping the surface before and after the deposition of gold, it is possible to analyse the binding of the metal atoms; in particular, there does not appear to be any preference for binding in the vacancy sites at 300 K.

Publishing their results in Physical Review Letters, the researchers combined these experimental results with theoretical studies of the binding energies and diffusion rates across the surface. They showed that kinetic effects governed the behaviour of the gold atoms, prohibiting the expected occupation of the thermodynamically more stable oxygen vacancy sites. They also identified electron transfer between the gold atoms and the ceria, leading to a better understanding of the diffusion phenomena that occur at this scale, and demonstrated that the effect of individual surface defects may be more minor than is normally imagined.
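The kinetic argument boils down to Arrhenius hopping: an adatom only reaches the thermodynamically preferred vacancy site if it can clear the diffusion barriers along the way within the experimental timescale. A toy calculation (the barrier values below are illustrative assumptions, not the paper's computed ones) shows how sharply the hop rate collapses with barrier height at 300 K:

```python
import math

kB = 8.617e-5          # Boltzmann constant, eV/K

def hop_rate(barrier_eV, T, attempt_freq=1e13):
    """Arrhenius rate for a single thermally activated hop, in hops per second."""
    return attempt_freq * math.exp(-barrier_eV / (kB * T))

T = 300.0  # roughly the deposition temperature in the experiment
# Illustrative barrier heights (assumed values):
for Ea in (0.3, 0.6, 1.0):
    print(f"Ea = {Ea:.1f} eV -> ~{hop_rate(Ea, T):.2e} hops/s")
```

At 0.3 eV an atom hops tens of millions of times per second, while at 1 eV it is essentially frozen in place for hours, so even a modest barrier can lock atoms out of the more stable vacancy sites, which is the kind of kinetic trapping the researchers describe.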

Here’s a link to and a citation for the paper,

Diffusion Barriers Block Defect Occupation on Reduced CeO2(111) by P.G. Lustemberg, Y. Pan, B.-J. Shaw, D. Grinter, Chi Pang, G. Thornton, Rubén Pérez, M. V. Ganduglia-Pirovano, and N. Nilius. Phys. Rev. Lett. Vol. 116, Iss. 23 (10 June 2016). DOI: http://dx.doi.org/10.1103/PhysRevLett.116.236101 Published 9 June 2016

This paper is behind a paywall.

# Technology, athletics, and the ‘new’ human

There is a tension between Olympic and Paralympic athletes, as some able-bodied athletes feel that Paralympic athletes may have an advantage due to their prosthetics. Roger Pielke Jr. has written a fascinating account of the tensions as a means of asking what it all means. From Pielke Jr.’s Aug. 3, 2016 post on the Guardian Science blogs (Note: Links have been removed),

Athletes are humans too, and they sometimes look for a performance improvement through technological enhancements. In my forthcoming book, The Edge: The War Against Cheating and Corruption in the Cutthroat World of Elite Sports, I discuss a range of technological augmentations to both people and to sports, and the challenges that they pose for rule making. In humans, such improvements can be the result of surgery to reshape (like laser eye surgery) or strengthen (such as replacing a ligament with a tendon) the body to aid performance, or to add biological or non-biological parts that the individual wasn’t born with.

One well-known case of technological augmentation involved the South African sprinter Oscar Pistorius, who ran in the 2012 Olympic Games on prosthetic “blades” below his knees (during happier days for the athlete who is currently jailed in South Africa for the killing of his girlfriend, Reeva Steenkamp). Years before the London Games Pistorius began to have success on the track running against able-bodied athletes. As a consequence of this success and Pistorius’s interest in competing at the Olympic games, the International Association of Athletics Federations (or IAAF, which oversees elite track and field competitions) introduced a rule in 2007, focused specifically on Pistorius, prohibiting the “use of any technical device that incorporates springs, wheels, or any other element that provides the user with an advantage over another athlete not using such a device.” Under this rule, Pistorius was determined by the IAAF to be ineligible to compete against able-bodied athletes.

Pistorius appealed the decision to the Court of Arbitration for Sport. The appeal hinged on answering a metaphysical question—how fast would Pistorius have run had he been born with functioning legs below the knee? In other words, did the blades give him an advantage over other athletes that the hypothetical, able-bodied Oscar Pistorius would not have had? Because there never was an able-bodied Pistorius, the CAS looked to scientists to answer the question.

CAS concluded that the IAAF was in fact fixing the rules to prevent Pistorius from competing and that “at least some IAAF officials had determined that they did not want Mr. Pistorius to be acknowledged as eligible to compete in international IAAF-sanctioned events, regardless of the results that properly conducted scientific studies might demonstrate.” CAS determined that it was the responsibility of the IAAF to show “on the balance of probabilities” that Pistorius gained an advantage by running on his blades. CAS concluded that the research commissioned by the IAAF did not show conclusively such an advantage.

As a result, CAS ruled that Pistorius was able to compete in the London Games, where he reached the semifinals of the 400 meters. CAS concluded that resolving such disputes “must be viewed as just one of the challenges of 21st Century life.”

The story does not end with Oscar Pistorius as Pielke, Jr. notes. There has been another challenge, this time by Markus Rehm, a German long-jumper who leaps off a prosthetic leg. Interestingly, the rules have changed since Oscar Pistorius won his case (Note: Links have been removed),

In the Pistorius case, under the rules for inclusion in the Olympic games the burden of proof had been on the IAAF, not the athlete, to demonstrate the presence of an advantage provided by technology.

This precedent was overturned in 2015, when the IAAF quietly introduced a new rule that in such cases reverses the burden of proof. The switch placed the burden of proof on the athlete instead of the governing body. The new rule—which we might call the Rehm Rule, given its timing—states that an athlete with a prosthetic limb (specifically, any “mechanical aid”) cannot participate in IAAF events “unless the athlete can establish on the balance of probabilities that the use of an aid would not provide him with an overall competitive advantage over an athlete not using such aid.” This new rule effectively slammed the door on Paralympians with prosthetics participating in the Olympic Games.

Even if an athlete might have the resources to enlist researchers to carefully study his or her performance, the IAAF requires the athlete to do something that is very difficult, and often altogether impossible—to prove a negative.

If you have the time, I encourage you to read Pielke Jr.’s piece in its entirety as he notes the secrecy with which the Rehm rule was implemented and the implications for the future. Here’s one last excerpt (Note: A link has been removed),

We may be seeing only the beginning of debates over technological augmentation and sport. Silvia Camporesi, an ethicist at King’s College London, observed: “It is plausible to think that in 50 years, or maybe less, the ‘natural’ able-bodied athletes will just appear anachronistic.” She continues: “As our concept of what is ‘natural’ depends on what we are used to, and evolves with our society and culture, so does our concept of ‘purity’ of sport.”

I have written many times about human augmentation, and the possibility that what is now viewed as a ‘normal’ body may one day be viewed as subpar or inferior is not all that farfetched. David Epstein’s 2014 TED talk “Are athletes really getting faster, better, stronger?” points out that, in addition to sports technology innovations, athletes’ bodies have changed considerably since the beginning of the 20th century. He doesn’t discuss body augmentation but it seems increasingly likely, not just for athletes but for everyone.

As for athletes and augmentation, Epstein has an Aug. 7, 2016 Scientific American piece published on Salon.com in time for the 2016 Summer Olympics in Rio de Janeiro,

I knew Eero Mäntyranta had magic blood, but I hadn’t expected to see it in his face. I had tracked him down above the Arctic Circle in Finland where he was — what else? — a reindeer farmer.

He was all red. Not just the crimson sweater with knitted reindeer crossing his belly, but his actual skin. It was cardinal dappled with violet, his nose a bulbous purple plum. In the pictures I’d seen of him in Sports Illustrated in the 1960s — when he’d won three Olympic gold medals in cross-country skiing — he was still white. But now, as an older man, his special blood had turned him red.

Mäntyranta had about 50 percent more red blood cells than a normal man. If Armstrong [Lance Armstrong, cyclist] had as many red blood cells as Mäntyranta, cycling rules would have barred him from even starting a race, unless he could prove it was a natural condition.

During his career, Mäntyranta was accused of doping after his high red blood cell count was discovered. Two decades after he retired, Finnish scientists found his family’s mutation. …

Epstein also covers the Pistorius story, albeit with more detail about the science and controversy of determining whether someone with prosthetics may have an advantage over an able-bodied athlete. Scientists don’t agree about whether or not there is an advantage.

I have many other posts on the topic of augmentation. You can find them under the Human Enhancement category and you can also try the tag, machine/flesh.

# A method for producing two-dimensional quasicrystals from metal organic networks

A July 13, 2016 news item on ScienceDaily highlights an advance where quasicrystals are concerned,

Unlike classical crystals, quasicrystals do not comprise periodic units, even though they do have a superordinate structure. The formation of the fascinating mosaics that they produce is barely understood. In the context of an international collaborative effort, researchers at the Technical University of Munich (TUM) have now presented a methodology that allows the production of two-dimensional quasicrystals from metal-organic networks, opening the door to the development of promising new materials.

A July 13, 2016 TUM press release (also on EurekAlert), which originated the news item, explains further,

Physicist Daniel Shechtman [emphasis mine] merely put down three question marks in his laboratory journal, when he saw the results of his latest experiment one day in 1982. He was looking at a crystalline pattern that was considered impossible at the time. According to the canonical tenet of the day, crystals always had so-called translational symmetry. They comprise a single basic unit, the so-called unit cell, that is repeated in the exact same form in all spatial directions.

Although Shechtman’s pattern did contain global symmetry, the individual building blocks could not be mapped onto each other merely by translation. The first quasicrystal had been discovered. In spite of sometimes harsh criticism from reputable colleagues, Shechtman stood by his new concept and thus revolutionized the scientific understanding of crystals and solids. In 2011 he ultimately received the Nobel Prize in Chemistry. To this day, both the basic conditions and mechanisms by which these fascinating structures are formed remain largely shrouded in mystery.

A toolbox for quasicrystals

Now a group of scientists led by Wilhelm Auwärter and Johannes Barth, both professors in the Department of Surface Physics at TU Munich, in collaboration with Hong Kong University of Science and Technology (HKUST, Prof. Nian Lin, et al) and the Spanish research institute IMDEA Nanoscience (Dr. David Écija), have developed a new basis for producing two-dimensional quasicrystals, which might bring them a good deal closer to understanding these peculiar patterns.

The TUM doctoral candidate José Ignacio Urgel made the pioneering measurements in the course of a research fellowship at HKUST. “We now have a new set of building blocks that we can use to assemble many different new quasicrystalline structures. This diversity allows us to investigate how quasicrystals are formed,” explain the TUM physicists.

The researchers were successful in linking europium – a metal atom in the lanthanide series – with organic compounds, thereby constructing a two-dimensional quasicrystal that even has the potential to be extended into a three-dimensional quasicrystal. To date, scientists have managed to produce many periodic and in part highly complex structures from metal-organic networks, but never a quasicrystal.

The researchers were also able to thoroughly elucidate the new network geometry in unparalleled resolution using a scanning tunnelling microscope. They found a mosaic of four different basic elements comprising triangles and rectangles distributed irregularly on a substrate. Some of these basic elements assembled themselves into regular dodecagons that, however, cannot be mapped onto each other through parallel translation. The result is a complex pattern, a small work of art at the atomic level with dodecagonal symmetry.

Interesting optical and magnetic properties

In their future work, the researchers are planning to vary the interactions between the metal centers and the attached compounds using computer simulation and experiments in order to understand the conditions under which two-dimensional quasicrystals form. This insight could facilitate the future development of new tailored quasicrystalline layers.

These kinds of materials hold great promise. After all, the new metal-organic quasicrystalline networks may have properties that make them interesting in a wide variety of applications. “We have discovered a new playing field on which we can not only investigate quasicrystallinity, but also create new functionalities, especially in the fields of optics and magnetism,” says Dr. David Écija of IMDEA Nanoscience.

For one, scientists could one day use the new methodology to create quasicrystalline coatings that influence photons in such a manner that they are transmitted better or that only certain wavelengths can pass through the material.

In addition, the interactions of the lanthanide building blocks in the new quasicrystals could facilitate the development of magnetic systems with very special properties, so-called “frustrated systems”. Here, the individual atoms in a crystalline grid interfere with each other in a manner that prevents grid points from achieving a minimal energy state. The result: exotic magnetic ground states that can be investigated as information stores for future quantum computers.
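The frustration idea in that last paragraph can be illustrated with the simplest possible case (a toy model of mine, not the quasicrystalline lanthanide network itself): three Ising spins on a triangle with antiferromagnetic couplings. Each pair of spins wants to point in opposite directions, but no arrangement of three spins can satisfy all three bonds at once:

```python
from itertools import product

# Toy model: three antiferromagnetically coupled Ising spins on a triangle.
# J > 0 penalises aligned neighbours, so every bond "wants" anti-alignment,
# but three spins cannot be pairwise anti-aligned -- the hallmark of frustration.
J = 1.0

def energy(s1, s2, s3):
    """Ising energy of the triangle; each spin is +1 or -1."""
    return J * (s1 * s2 + s2 * s3 + s3 * s1)

configs = list(product((-1, 1), repeat=3))
energies = {c: energy(*c) for c in configs}
e_min = min(energies.values())
ground_states = [c for c, e in energies.items() if e == e_min]

print(f"ground-state energy: {e_min}")      # -J, not the -3J of a fully satisfied triangle
print(f"degeneracy: {len(ground_states)}")  # 6 configurations tie for the minimum
```

Brute-forcing all eight configurations gives a ground-state energy of -J rather than the -3J a fully satisfied triangle would have, with six configurations tied for that minimum; that degeneracy of ‘compromise’ states is what produces the exotic magnetic ground states mentioned above.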

The researchers have made an image available,

The quasicrystalline network built up with europium atoms linked with para-quaterphenyl–dicarbonitrile on a gold surface (yellow) – Image: Carlos A. Palma / TUM

Here’s a link to and a citation for the paper,

Quasicrystallinity expressed in two-dimensional coordination networks by José I. Urgel, David Écija, Guoqing Lyu, Ran Zhang, Carlos-Andres Palma, Willi Auwärter, Nian Lin, & Johannes V. Barth. Nature Chemistry 8, 657–662 (2016) doi:10.1038/nchem.2507 Published online 16 May 2016

This paper is behind a paywall.

For anyone interested in more about the Daniel Shechtman story and how he was reviled for his discovery of quasicrystals, there’s more in my Dec. 24, 2013 posting (scroll down about 60% of the way).

# D-PLACE: an open access database of places, language, culture, and environment

In an attempt to be a bit more broad in my interpretation of the ‘society’ part of my commentary I’m including this July 8, 2016 news item on ScienceDaily (Note: A link has been removed),

An international team of researchers has developed a website at d-place.org to help answer long-standing questions about the forces that shaped human cultural diversity.

D-PLACE — the Database of Places, Language, Culture and Environment — is an expandable, open access database that brings together a dispersed body of information on the language, geography, culture and environment of more than 1,400 human societies. It comprises information mainly on pre-industrial societies that were described by ethnographers in the 19th and early 20th centuries.

A July 8, 2016 University of Toronto news release (also on EurekAlert), which originated the news item, expands on the theme,

“Human cultural diversity is expressed in numerous ways: from the foods we eat and the houses we build, to our religious practices and political organisation, to who we marry and the types of games we teach our children,” said Kathryn Kirby, a postdoctoral fellow in the Departments of Ecology & Evolutionary Biology and Geography at the University of Toronto and lead author of the study. “Cultural practices vary across space and time, but the factors and processes that drive cultural change and shape patterns of diversity remain largely unknown.

“D-PLACE will enable a whole new generation of scholars to answer these long-standing questions about the forces that have shaped human cultural diversity.”

Co-author Fiona Jordan, senior lecturer in anthropology at the University of Bristol and one of the project leads said, “Comparative research is critical for understanding the processes behind cultural diversity. Over a century of anthropological research around the globe has given us a rich resource for understanding the diversity of humanity – but bringing different resources and datasets together has been a huge challenge in the past.

“We’ve drawn on the emerging big data sets from ecology, and combined these with cultural and linguistic data so researchers can visualise diversity at a glance, and download data to analyse in their own projects.”

D-PLACE allows users to search by cultural practice (e.g., monogamy vs. polygamy), environmental variable (e.g. elevation, mean annual temperature), language family (e.g. Indo-European, Austronesian), or region (e.g. Siberia). The search results can be displayed on a map, a language tree or in a table, and can also be downloaded for further analysis.
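At its heart, that kind of faceted search is just filtering records on matching attributes. Here’s a minimal sketch in Python; the field names and toy records below are my own illustrative assumptions, not D-PLACE’s actual export schema:

```python
# Toy records standing in for a D-PLACE download; the fields here
# (society, language_family, marriage, elevation_m) are illustrative
# assumptions, not the database's real column names.
societies = [
    {"society": "A", "language_family": "Austronesian",  "marriage": "monogamy", "elevation_m": 120},
    {"society": "B", "language_family": "Indo-European", "marriage": "polygyny", "elevation_m": 800},
    {"society": "C", "language_family": "Austronesian",  "marriage": "polygyny", "elevation_m": 30},
    {"society": "D", "language_family": "Uralic",        "marriage": "monogamy", "elevation_m": 450},
]

def query(records, **criteria):
    """Return the records matching every keyword criterion exactly."""
    return [r for r in records
            if all(r.get(k) == v for k, v in criteria.items())]

print([r["society"] for r in query(societies, language_family="Austronesian")])  # ['A', 'C']
print([r["society"] for r in query(societies, marriage="monogamy")])             # ['A', 'D']
```

The real site layers maps, language trees and downloadable tables on top, but the underlying operation of intersecting cultural, linguistic and environmental criteria is this simple.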

It aims to enable researchers to investigate the extent to which patterns in cultural diversity are shaped by different forces, including shared history, demographics, migration/diffusion, cultural innovations, and environmental and ecological conditions.

D-PLACE was developed by an international team of scientists interested in cross-cultural research. It includes researchers from the Max Planck Institute for the Science of Human History in Jena, Germany, the University of Auckland, Colorado State University, University of Toronto, University of Bristol, Yale, the Human Relations Area Files, Washington University in Saint Louis, University of Michigan, the American Museum of Natural History, and the City University of New York.

The diverse team included linguists, anthropologists, biogeographers, data scientists, ethnobiologists, and evolutionary ecologists, who employ a variety of research methods, including field-based primary data collection, compilation of cross-cultural data sources, and analyses of existing cross-cultural datasets.

“The team’s diversity is reflected in D-PLACE, which is designed to appeal to a broad user base,” said Kirby. “Envisioned users range from members of the public world-wide interested in comparing their cultural practices with those of other groups, to cross-cultural researchers interested in pushing the boundaries of existing research into the drivers of cultural change.”

Here’s a link to and a citation for the paper,

D-PLACE: A Global Database of Cultural, Linguistic and Environmental Diversity by Kathryn R. Kirby, Russell D. Gray, Simon J. Greenhill, Fiona M. Jordan, Stephanie Gomes-Ng, Hans-Jörg Bibiko, Damián E. Blasi, Carlos A. Botero, Claire Bowern, Carol R. Ember, Dan Leehr, Bobbi S. Low, Joe McCarter, William Divale, Michael C. Gavin. PLOS ONE, 2016; 11(7): e0158391. DOI: 10.1371/journal.pone.0158391 Published July 8, 2016.

This paper is open access.

You can find D-PLACE here.

While it might not seem that there would be a close link between anthropology and physics in the 19th and early 20th centuries, that information can be mined for more contemporary applications. For example, someone who wants to make a case for a more diverse scientific community may want to develop a social science approach to the discussion. The situation in my June 16, 2016 post titled: Science literacy, science advice, the US Supreme Court, and Britain’s House of Commons, could be extended into a discussion and educational process using data from D-PLACE and other sources to make the point,

Science literacy may not be just for the public, it would seem that US Supreme Court judges may not have a basic understanding of how science works. David Bruggeman’s March 24, 2016 posting (on his Pasco Phronesis blog) describes a then current case before the Supreme Court (Justice Antonin Scalia has since died), Note: Links have been removed,

It’s a case concerning aspects of the University of Texas admissions process for undergraduates, and the case is seen as a possible means of restricting race-based considerations for admission. While I think the arguments in the case will likely revolve around factors far removed from science and/or technology, there were comments raised by two Justices that struck a nerve with many scientists and engineers.

Both Justice Antonin Scalia and Chief Justice John Roberts raised questions about the validity of having diversity where science and scientists are concerned [emphasis mine]. Justice Scalia seemed to imply that diversity wasn’t essential for the University of Texas, as most African-American scientists didn’t come from schools at the level of the University of Texas (considered the best university in Texas). Chief Justice Roberts was a bit more plain about not understanding the benefits of diversity. He stated, “What unique perspective does a black student bring to a class in physics?”

To that end, Dr. S. James Gates, theoretical physicist at the University of Maryland, and member of the President’s Council of Advisers on Science and Technology (and commercial actor) has an editorial in the March 25 [2016] issue of Science explaining that the value of having diversity in science does not accrue *just* to those who are underrepresented.

Dr. Gates relates his personal experience as a researcher and teacher of how people’s background inform their practice of science, and that two different people may use the same scientific method, but think about the problem differently.

I’m guessing that both Scalia and Roberts and possibly others believe that science is the discovery and accumulation of facts. In this worldview science facts such as gravity are waiting for discovery and formulation into a ‘law’. They do not recognize that most science is a collection of beliefs and may be influenced by personal beliefs. For example, we believe we’ve proved the existence of the Higgs boson but no one associated with the research has ever stated unequivocally that it exists.

More generally, with D-PLACE and the recently announced Trans-Atlantic Platform (see my July 15, 2016 post about it), it seems Canada’s humanities and social sciences communities are taking strides toward greater international collaboration and a more profound investment in digital scholarship.

# Replicating brain’s neural networks with 3D nanoprinting

An announcement about European Union funding for a project to reproduce neural networks by 3D nanoprinting can be found in a June 10, 2016 news item on Nanowerk,

The MESO-BRAIN consortium has received a prestigious award of €3.3million in funding from the European Commission as part of its Future and Emerging Technology (FET) scheme. The project aims to develop three-dimensional (3D) human neural networks with specific biological architecture, and the inherent ability to interrogate the network’s brain-like activity both electrophysiologically and optically. It is expected that the MESO-BRAIN will facilitate a better understanding of human disease progression, neuronal growth and enable the development of large-scale human cell-based assays to test the modulatory effects of pharmacological and toxicological compounds on neural network activity. The use of more physiologically relevant human models will increase drug screening efficiency and reduce the need for animal testing.

A June 9, 2016 Institute of Photonic Sciences (ICFO) press release (also on EurekAlert), which originated the news item, provides more detail,

About the MESO-BRAIN project

The MESO-BRAIN project’s cornerstone will use human induced pluripotent stem cells (iPSCs) that have been differentiated into neurons upon a defined and reproducible 3D scaffold to support the development of human neural networks that emulate brain activity. The structure will be based on a brain cortical module and will be unique in that it will be designed and produced using nanoscale 3D-laser-printed structures incorporating nano-electrodes to enable downstream electrophysiological analysis of neural network function. Optical analysis will be conducted using cutting-edge light sheet-based, fast volumetric imaging technology to enable cellular resolution throughout the 3D network. The MESO-BRAIN project will allow for a comprehensive and detailed investigation of neural network development in health and disease.

Prof Edik Rafailov, Head of the MESO-BRAIN project (Aston University) said: “What we’re proposing to achieve with this project has, until recently, been the stuff of science fiction. Being able to extract and replicate neural networks from the brain through 3D nanoprinting promises to change this. The MESO-BRAIN project has the potential to revolutionise the way we are able to understand the onset and development of disease and discover treatments for those with dementia or brain injuries. We cannot wait to get started!”

The MESO-BRAIN project will launch in September 2016 and research will be conducted over three years.

About the MESO-BRAIN consortium

Each of the consortium partners has been chosen for the highly specific skills & knowledge that they bring to this project. These include technologies and expertise in stem cells, photonics, physics, 3D nanoprinting, electrophysiology, molecular biology, imaging and commercialisation.

Aston University (UK): the Aston Institute of Photonic Technologies (School of Engineering and Applied Science) is one of the largest photonics groups in the UK and an internationally recognised research centre in the fields of lasers, fibre-optics, high-speed optical communications, and nonlinear and biomedical photonics. The Cell & Tissue Biomedical Research Group (Aston Research Centre for Healthy Ageing) combines collective expertise in genetic manipulation, tissue engineering and neuronal modelling with the electrophysiological and optical analysis of human iPSC-derived neural networks.

Axol Bioscience Ltd. (UK) was founded to fulfil the unmet demand for high quality, clinically relevant human iPSC-derived cells for use in biomedical research and drug discovery.

The Laser Zentrum Hannover (Germany) is a leading research organisation in the fields of laser development, material processing, laser medicine, and laser-based nanotechnologies.

The Neurophysics Group (Physics Department) at the University of Barcelona (Spain) are experts in combining experiments with theoretical and computational modelling to infer functional connectivity in neuronal circuits.

The Institute of Photonic Sciences (ICFO) (Spain) is a world-leading research centre in photonics with expertise in several microscopy techniques, including light sheet imaging.

KITE Innovation (UK) helps to bridge the gap between the academic and business sectors in supporting collaboration, enterprise, and knowledge-based business development.

For anyone curious about the FET funding scheme, there’s this from the press release,

Horizon 2020 aims to ensure Europe produces world-class science by removing barriers to innovation through funding programmes such as the FET. The FET (Open) funds forward-looking collaborations between advanced multidisciplinary science and cutting-edge engineering for radically new future technologies. The published success rate is below 1.4%, making it amongst the toughest in the Horizon 2020 suite of funding schemes. The MESO-BRAIN proposal scored a perfect 5/5.

You can find out more about the MESO-BRAIN project on its ICFO webpage.

They don’t say anything about it but I can’t help wondering if the scientists aren’t also considering the possibility of creating an artificial brain.

# Lungs: EU SmartNanoTox and Pneumo NP

I have three news bits about lungs: one concerning relatively new techniques for testing the impact nanomaterials may have on lungs, and two concerning developments at PneumoNP: the first regarding a new technique for getting antibiotics into a lung infected with pneumonia and the second, a new antibiotic.

Predicting nanotoxicity in the lungs

From a June 13, 2016 news item on Nanowerk,

Scientists at the Helmholtz Zentrum München [German Research Centre for Environmental Health] have received more than one million euros in the framework of the European Horizon 2020 Initiative [a major European Commission science funding initiative, successor to the Framework Programme 7 initiative]. Dr. Tobias Stöger and Dr. Otmar Schmid from the Institute of Lung Biology and Disease and the Comprehensive Pneumology Center (CPC) will be using the funds to develop new tests to assess risks posed by nanomaterials in the airways. This could contribute to reducing the need for complex toxicity tests.

A June 13, 2016 Helmholtz Zentrum München (German Research Centre for Environmental Health) press release, which originated the news item, expands on the theme,

Nanoparticles are extremely small particles that can penetrate into remote parts of the body. While researchers are investigating various strategies for harvesting the potential of nanoparticles for medical applications, they could also pose inherent health risks*. Currently the hazard assessment of nanomaterials necessitates a complex and laborious procedure. In addition to complete material characterization, controlled exposure studies are needed for each nanomaterial in order to guarantee the toxicological safety.

As a part of the EU SmartNanoTox project, which has now been funded with a total of eight million euros, eleven European research partners, including the Helmholtz Zentrum München, want to develop a new concept for the toxicological assessment of nanomaterials.

Reference database for hazardous substances

Biologist Tobias Stöger and physicist Otmar Schmid, both research group heads at the Institute of Lung Biology and Disease, hope that the use of modern methods will help to advance the assessment procedure. “We hope to make more reliable nanotoxicity predictions by using modern approaches involving systems biology, computer modelling, and appropriate statistical methods,” states Stöger.

The lung experts are concentrating primarily on the respiratory tract. The approach involves defining a representative selection of toxic nanomaterials and conducting an in-depth examination of their structure and the various molecular modes of action that lead to their toxicity. These data are then digitalized and transferred to a reference database for new nanomaterials. Economical tests that are easy to conduct should then make it possible to assess the toxicological potential of these new nanomaterials by comparing the test results with what is already known from the database. “This should make it possible to predict whether or not a newly developed nanomaterial poses a health risk,” Otmar Schmid says.
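
The database-comparison idea can be pictured with a toy sketch. The Python snippet below is purely illustrative: the project’s actual database, features, and matching method are not described in the press release, so the materials, feature values, and nearest-neighbour rule here are all invented for the sake of the example.

```python
# Illustrative sketch only: SmartNanoTox's real database and matching
# method are not public. This shows the general idea of comparing a new
# material's test profile against reference materials with known hazards.
import math

# Hypothetical reference database: material -> (feature profile, hazard label)
reference = {
    "nano-TiO2": ([0.2, 0.1, 0.4], "low"),
    "nano-CuO":  ([0.9, 0.8, 0.7], "high"),
    "nano-SiO2": ([0.3, 0.2, 0.3], "low"),
}

def predict_hazard(profile):
    """Return the hazard label of the closest reference material."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    closest = min(reference.items(), key=lambda kv: distance(profile, kv[1][0]))
    return closest[1][1]

# A new material whose (invented) test profile resembles nano-CuO
print(predict_hazard([0.85, 0.75, 0.65]))  # -> high
```

In practice the project would use far richer data (material characterization plus molecular modes of action) and proper statistical models rather than a bare nearest-neighbour lookup, but the principle, predicting hazard by similarity to well-characterized materials, is the same.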

* Review: Schmid, O. and Stoeger, T. (2016). Surface area is the biologically most effective dose metric for acute nanoparticle toxicity in the lung. Journal of Aerosol Science, DOI:10.1016/j.jaerosci.2015.12.006

The SmartNanoTox webpage is here on the European Commission’s Cordis website.

Carrying antibiotics into lungs (PneumoNP)

I received this news from the European Commission’s PneumoNP project (I wrote about PneumoNP in a June 26, 2014 posting when it was first announced). This latest development is from a March 21, 2016 email (the original can be found here on the How to pack antibiotics in nanocarriers webpage on the PneumoNP website),

PneumoNP researchers work on a complex task: attaching or encapsulating antibiotics with nanocarriers that are stable enough to be included in an aerosol formulation, to pass through the respiratory tract, and finally to deliver the antibiotics to the areas of the lungs affected by pneumonia infections. The good news is that they have finally identified two promising methods to generate nanocarriers.

Until now, compacting polymer coils into single-chain nanoparticles in water under mild conditions was an unsolved problem. But in Spain, IK4-CIDETEC scientists developed a covalent-based method that produces nanocarriers with remarkable stability under those particular conditions. As a cherry on the cake, the preparation is scalable for industrial production. IK4-CIDETEC has patented the process.

Fig.: A polymer coil (step 1) compacts into a nanocarrier with cross-linkers (step 2). Then, antibiotics are attached to the nanocarrier (step 3).

At the same time, another route to produce lipidic nanocarriers has been developed by researchers from Utrecht University. In particular, they optimized a method that assembles lipids directly around a drug. As a result, the lipidic nanocarriers generated show encouraging stability properties and are able to carry a sufficient quantity of antibiotics.

Fig.: In the presence of antibiotics, a lipidic layer (step 1) aggregates around the drug (step 2) until the lipids form a capsule around the antibiotics (step 3).

Assays of both polymeric and lipidic nanocarriers are currently being performed by the ITEM Fraunhofer Institute in Germany, Ingeniatrics Tecnologias in Spain, and Erasmus Medical Centre in the Netherlands. Some of these tests check that the nanocarriers are not toxic to cells; others verify the efficiency of the antibiotics against Klebsiella pneumoniae bacteria when attached to nanocarriers.

A new antibiotic for pneumonia (PneumoNP)

A June 14, 2016 PneumoNP press release (received via email) announces work on a promising new approach to an antibiotic for pneumonia,

The antimicrobial peptide M33 may be the long-sought substitute to treat difficult lung infections, like multi-drug resistant pneumonia.

In 2013, the European Respiratory Society predicted 3 million cases of pneumonia in Europe every year [1]. The standard treatment for pneumonia is the intravenous administration of a combination of drugs. This leads to the development of antibiotic resistance in the population. Gradually, doctors are running out of solutions to cure patients. An Italian company suggests a new option: the M33 peptide.

A few years ago, the Italian company SetLance SRL decided to investigate the M33 peptide. The antimicrobial peptide is an optimized version of an artificial peptide sequence selected for its efficacy and stability. So far, it has shown encouraging in-vitro results against multidrug-resistant Gram-negative bacteria, including Klebsiella pneumoniae. With the support of EU funding to the PneumoNP project, SetLance SRL had the opportunity to develop a new formulation of M33 that enhances its antimicrobial activity.

The new formulation of M33 fights Gram-negative bacteria in three steps. First, the M33 binds to the lipopolysaccharides (LPS) on the outer membrane of the bacteria. Then the molecule forms a helix and finally disrupts the membrane, causing the cytoplasm to leak. The peptide enabled up to 80% of mice to survive Pseudomonas aeruginosa-based lung infections. Beyond these encouraging results, the toxicity of the new M33 formulation seems to be much lower than that of antimicrobial peptides currently used in clinical practice, such as colistin [2].

Lately, SetLance has scaled up the synthesis route and is now able to produce several hundred milligrams per batch. The molecule is robust enough for industrial production. We may expect this drug to enter clinical development and validation at the beginning of 2018.

[1] http://www.erswhitebook.org/chapters/acute-lower-respiratory-infections/pneumonia/
[2] Ceccherini et al., Antimicrobial activity of levofloxacin-M33 peptide conjugation or combination, Chem Med Comm. 2016; Brunetti et al., In vitro and in vivo efficacy, toxicity, bio-distribution and resistance selection of a novel antibacterial drug candidate. Scientific Reports 2016

I believe all the references are open access.

Brief final comment

The only element linking these news bits together is that they concern the lungs.

# The birth of carbon nanotubes (CNTs): a history

There is a comprehensive history of the carbon nanotube stretching back to prehistory and forward to recent times in a June 3, 2016 Nanowerk Spotlight article by C.K. Nisha and Yashwant Mahajan of the Center of Knowledge Management of Nanoscience & Technology (CKMNT) in India. The authors provide an introduction explaining the importance of CNTs,

Carbon nanotubes (CNTs) have been acknowledged as the material of the 21st century. They possess a unique combination of extraordinary mechanical, electronic, transport, electrical and optical properties and nanoscale sizes, making them suitable for a variety of applications ranging from engineering, electronics, optoelectronics, photonics, space, the defence industry and medicine to molecular and biological systems. Worldwide demand for CNTs is increasing at a rapid pace as applications for the material mature.

According to MarketsandMarkets (M&M), the global market for carbon nanotubes in 2015 was worth about $2.26 billion [1], an increase of 45% from 2009 (~$1.24 billion). This was due to the growing potential of CNTs in electronics, plastics and energy storage applications, and the projected market for CNTs is expected to be around $5.64 billion in 2020.

In view of the scientific and technological potential of CNTs, it is of immense importance to know who should be credited for their discovery. In the present article, we have made an attempt to give our readers a glimpse into the discovery and early history of this fascinating material. Thousands of papers are published every year on CNTs or related areas, and most of these papers give credit for the discovery of CNTs to Sumio Iijima of NEC Corporation, Japan, who, in 1991, published a ground-breaking paper in Nature reporting the discovery of multi-walled carbon nanotubes (MWCNTs) [2]. This paper has been cited over 27,105 times in the literature (as of January 12, 2016, based on the Scopus database). This discovery by Iijima triggered an avalanche of scientific publications and catapulted CNTs onto the global scientific stage.

Nisha and Mahajan then prepare to take us back in time,

In a guest editorial for the journal Carbon, Marc Monthioux and Vladimir L. Kuznetsov [3] have tried to clear the air by describing the chronological events that led to the discovery of carbon nanotubes. As one delves deeper into the history of carbon nanotubes, it becomes apparent that the origin of CNTs could even be pre-historic in nature.

Recently, Ponomarchuk et al. from Russia have reported the presence of micro- and nano-scale carbon tubes in igneous rocks formed about 250 million years ago [4-7]. They suggested the possibility that carbon nanotubes formed during magmatic processes. It is presumed that the migration of hydrocarbon fluids through the residual melt of the rock groundmass created gas-saturated areas (mostly CH4, CO2, CO) in which condensation and decomposition of hydrocarbons in the presence of metal elements resulted in the formation of micro- and sub-micron carbon tubes.

Other compelling evidence of prehistoric, naturally occurring carbon nanotubes (MWCNTs) comes from TEM studies by Esquivel and Murr [8], who analyzed 10,000-year-old Greenland ice core samples and suggested that the tubes could have formed during the combustion of natural gas/methane in natural processes.

However, the validity of this evidence is questionable owing to the lack of clear high-resolution TEM images, high-quality diffraction patterns or Raman spectroscopy data. In addition, [an]other interesting possibility is that carbon nanotubes could have formed directly from naturally occurring C60 fullerenes without the assistance of man, provided the right conditions prevailed. Suchanek et al. [9] have actually demonstrated this thesis in the laboratory by transforming C60 fullerenes into CNTs under hydrothermal conditions.

There is a large body of evidence in the literature for the existence of naturally occurring fullerenes, e.g., in coal, carbonaceous rocks, interstellar media, etc. Since the above experiments were conducted under a simulated geological environment, their results imply that CNTs may form in natural hydrothermal environments.

This hypothesis was further corroborated by Velasco-Santos and co-workers [10], who reported the presence of CNTs in a coal–petroleum mix obtained from an actual oil well, identified by PEMEX (the Mexican Petroleum Company) as P1, located on Mexico’s southeast shore. TEM studies revealed that the coal–petroleum mix contained predominantly end-capped CNTs that are nearly 2 µm long with outer diameters varying between a few and several tens of nanometers.

There’s another study supporting the notion that carbon nanotubes may be formed naturally,

In yet another study, researchers from Germany [11] have synthesized carbon nanotubes using igneous rock from Mount Etna lava as both support and catalyst. The naturally occurring iron oxide particles present in Etna lava rock make it an ideal material for growing and immobilizing nanocarbons.

When a mixture of ethylene and hydrogen was passed over the pulverized rocks, which had been reduced in a hydrogen atmosphere at 700°C, the iron particles catalyzed the decomposition of ethylene to elemental carbon, which was deposited on the lava rock in the form of tiny tubes and fibers.

This study showed that if a carbon source is available, CNTs/CNFs can grow on a mineral at moderate temperatures, which points towards the possibility of carbon nanotube formation in active suboceanic volcanoes or even in interstellar space, where methane, atomic hydrogen, carbon oxides, and metallic iron are present.

This fascinating and informative piece was originally published in the January 2016 edition of Nanotech Insights (the CKMNT newsletter; scroll down), but it may be more easily accessible as the June 3, 2016 Nanowerk Spotlight article, where it extends over five (Nanowerk) pages and includes a number of embedded images along with an extensive list of references at the end.

Enjoy!

# Observing nanostructures in attosecond time

German scientists have observed a phenomenon (a light-matter interaction) that lasts only attoseconds. (For anyone unfamiliar with that scale: micro = a millionth, nano = a billionth, pico = a trillionth, femto = a quadrillionth, and atto = a quintillionth.) A May 31, 2016 news item on Nanowerk announces the work (Note: A link has been removed),
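
For readers who like to see the arithmetic, here is a minimal Python sketch (purely illustrative) of how those SI prefixes relate to one another:

```python
# SI time-scale prefixes mentioned above, as powers of ten
prefixes = {
    "micro": 1e-6,   # a millionth
    "nano":  1e-9,   # a billionth
    "pico":  1e-12,  # a trillionth
    "femto": 1e-15,  # a quadrillionth
    "atto":  1e-18,  # a quintillionth
}

# One second contains roughly a quintillion (10^18) attoseconds
attoseconds_per_second = 1 / prefixes["atto"]
print(f"{attoseconds_per_second:.0e} attoseconds in one second")
```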

Physicists of the Laboratory for Attosecond Physics at the Max Planck Institute of Quantum Optics and the Ludwig-Maximilians-Universität Munich in collaboration with scientists from the Friedrich-Alexander-Universität Erlangen-Nürnberg have observed a light-matter phenomenon in nano-optics, which lasts only attoseconds (“Attosecond nanoscale near-field sampling”).

Here’s an illustration of the work,

When laser light interacts with a nanoneedle (yellow), electromagnetic near-fields are formed at its surface. A second laser pulse (purple) emits an electron (green) from the needle, making it possible to characterize the near-fields.
Image: Christian Hackenberger

A May 31, 2016 Max Planck Institute of Quantum Optics press release (also on EurekAlert) by Thorsten Naeser, which originated the news item, describes the phenomenon and the work in more detail,

The interaction between light and matter is of key importance in nature, the most prominent example being photosynthesis. Light-matter interactions have also been used extensively in technology, and will continue to be important in electronics of the future. A technology that could transfer and save data encoded on light waves would be 100,000 times faster than current systems. A light-matter interaction which could pave the way to such light-driven electronics has been investigated by scientists from the Laboratory for Attosecond Physics (LAP) at the Ludwig-Maximilians-Universität (LMU) and the Max Planck Institute of Quantum Optics (MPQ), in collaboration with colleagues from the Chair for Laser Physics at the Friedrich-Alexander-Universität Erlangen-Nürnberg. The researchers sent intense laser pulses onto a tiny nanowire made of gold. The ultrashort laser pulses excited vibrations of the freely moving electrons in the metal. This resulted in electromagnetic ‘near-fields’ at the surface of the wire. The near-fields oscillated with a shift of a few hundred attoseconds with respect to the exciting laser field (one attosecond is a billionth of a billionth of a second). This shift was measured using attosecond light pulses which the scientists subsequently sent onto the nanowire.

When light illuminates metals, it can result in curious things in the microcosm at the surface. The electromagnetic field of the light excites vibrations of the electrons in the metal. This interaction causes the formation of ‘near-fields’ – electromagnetic fields localized close to the surface of the metal.

How near-fields behave under the influence of light has now been investigated by an international team of physicists at the Laboratory for Attosecond Physics of the Ludwig-Maximilians-Universität and the Max Planck Institute of Quantum Optics in close collaboration with scientists of the Chair for Laser Physics at the Friedrich-Alexander-Universität Erlangen-Nürnberg.

The researchers sent strong infrared laser pulses onto a gold nanowire. These laser pulses are so short that they are composed of only a few oscillations of the light field. When the light illuminated the nanowire it excited collective vibrations of the conducting electrons surrounding the gold atoms. Through these electron motions, near-fields were created at the surface of the wire.

The physicists wanted to study the timing of the near-fields with respect to the light fields. To do this they sent a second light pulse with an extremely short duration of just a couple of hundred attoseconds onto the nanostructure shortly after the first light pulse. The second flash released individual electrons from the nanowire. When these electrons reached the surface, they were accelerated by the near-fields and detected. Analysis of the electrons showed that the near-fields were oscillating with a time shift of about 250 attoseconds with respect to the incident light, and that they were leading in their vibrations. In other words: the near-field vibrations reached their maximum amplitude 250 attoseconds earlier than the vibrations of the light field.
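
To put that 250-attosecond lead in perspective, one can estimate what fraction of an optical cycle it represents. The press release says only "infrared laser pulses", so the 800 nm wavelength below is an assumption (a common near-IR laser wavelength), and the numbers are a rough sketch, not figures from the paper:

```python
# Rough, illustrative estimate of the 250-attosecond near-field lead
# as a fraction of one optical cycle. The 800 nm wavelength is an
# assumption; the press release specifies only "infrared".
c = 2.998e8            # speed of light, m/s
wavelength = 800e-9    # assumed near-IR wavelength, m

period = wavelength / c    # one optical cycle, ~2.67 femtoseconds
shift = 250e-18            # measured near-field lead, 250 attoseconds

fraction = shift / period
print(f"optical period ≈ {period * 1e15:.2f} fs")
print(f"250 as ≈ {fraction:.1%} of a cycle (≈ {fraction * 360:.0f}°)")
```

Under this assumption the near-fields lead the driving light by roughly a tenth of an optical cycle, which is the kind of subtle timing the attosecond probe pulses make visible.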

“Fields and surface waves at nanostructures are of central importance for the development of lightwave electronics. With the demonstrated technique they can now be sharply resolved,” explained Prof. Matthias Kling, the leader of the team carrying out the experiments in Munich.

The experiments pave the way towards more complex studies of light-matter interaction in metals that are of interest in nano-optics and the light-driven electronics of the future. Such electronics would work at the frequencies of light. Light oscillates a million billion times per second, i.e. with petahertz frequencies – about 100,000 times faster than the electronics available at the moment. The ultimate limit of data processing could be reached.
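
The "100,000 times faster" comparison is easy to check with round numbers. The figures below are order-of-magnitude assumptions (1 PHz for light, ~10 GHz for present-day electronics), not values from the press release:

```python
# Order-of-magnitude sanity check of the speed comparison.
# Both figures are illustrative round numbers.
light_frequency = 1e15        # ~1 petahertz: "a million billion times per second"
electronics_frequency = 1e10  # ~10 GHz, an assumed clock rate for current electronics

ratio = light_frequency / electronics_frequency
print(f"light oscillates ~{ratio:,.0f}x faster")  # ~100,000x
```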

Here’s a link to and a citation for the paper,

Attosecond nanoscale near-field sampling by B. Förg, J. Schötz, F. Süßmann, M. Förster, M. Krüger, B. Ahn, W. A. Okell, K. Wintersperger, S. Zherebtsov, A. Guggenmos, V. Pervak, A. Kessel, S. A. Trushin, A. M. Azzeer, M. I. Stockman, D. Kim, F. Krausz, P. Hommelhoff, & M. F. Kling. Nature Communications 7, Article number: 11717. doi:10.1038/ncomms11717 Published 31 May 2016

This paper is open access.

# Scented video games: a nanotechnology project in Europe

Ten years ago, when I was working on a master’s degree (creative writing and new media), I was part of a group presentation on multimedia and, to prepare, I started a conversation about scent as part of a multimedia experience. Our group leader was somewhat outraged. He’d led international multimedia projects and, as far as he was concerned, the ‘scent’ discussion was a waste of time when we were trying to prepare a major presentation.

He was right and wrong. I think you’re supposed to have these discussions when you’re learning and exploring ideas but, in 2006, there wasn’t much work of that type to discuss. It seems things may be changing according to a May 21, 2016 news item on Nanowerk (Note: A link has been removed),

Controlled odour emission could transform video games and television viewing experiences and benefit industries such as pest control and medicine [emphasis mine]. The NANOSMELL project aims to switch smells on and off by tagging artificial odorants with nanoparticles exposed to an electromagnetic field.

I wonder if the medicinal possibilities include nanotechnology-enabled aroma therapy?

Getting back to the news, a May 10, 2016 European Commission press release, which originated the news item, expands on the theme,

The ‘smellyvision’ – a TV that offers olfactory as well as visual stimulation – has been a science fiction staple for years. However, realising this concept has proved difficult given the sheer complexity of how smell works and the technical challenges of emitting odours on demand.

NANOSMELL will specifically address these two challenges by developing artificial smells that can be switched on and off remotely. This would be achieved by tagging specific DNA-based artificial odorants – chemical compounds that give off smells – with nanoparticles that respond to external electromagnetic fields.

With the ability to remotely control these artificial odours, the project team would then be able to examine exactly how olfactory receptors respond. Sensory imaging to investigate the patterns of neural activity and behavioural tests will be carried out in animals.

The project would next apply artificial odorants to the human olfactory system and measure perceptions by switching artificial smells on and off. Researchers will also assess whether artificial odorants have a role to play in wound healing by placing olfactory receptors in skin.

The researchers aim to develop controllable odour-emitting components that will further understanding of smell and open the door to novel odour-emitting applications in fields ranging from entertainment to medicine.

Project details

• Project acronym: NanoSmell
• Participants: Israel (Coordinator), Spain, Germany, Switzerland
• Project Reference N° 662629
• Total cost: € 3 979 069
• EU contribution: € 3 979 069
• Duration: September 2015 – September 2019

You can find more information on the European Commission’s NANOSMELL project page.