Tag Archives: Netherlands

High-performance, low-energy artificial synapse for neural network computing

This artificial synapse is apparently an improvement on the standard memristor-based artificial synapse, but that doesn’t become clear until you read the abstract for the paper. First, there’s a Feb. 20, 2017 Stanford University news release by Taylor Kubota (dated Feb. 21, 2017 on EurekAlert). Note: Links have been removed,

For all the improvements in computer technology over the years, we still struggle to recreate the low-energy, elegant processing of the human brain. Now, researchers at Stanford University and Sandia National Laboratories have made an advance that could help computers mimic one piece of the brain’s efficient design – an artificial version of the space over which neurons communicate, called a synapse.

“It works like a real synapse but it’s an organic electronic device that can be engineered,” said Alberto Salleo, associate professor of materials science and engineering at Stanford and senior author of the paper. “It’s an entirely new family of devices because this type of architecture has not been shown before. For many key metrics, it also performs better than anything that’s been done before with inorganics.”

The new artificial synapse, reported in the Feb. 20 issue of Nature Materials, mimics the way synapses in the brain learn through the signals that cross them. This is a significant energy savings over traditional computing, which involves separately processing information and then storing it into memory. Here, the processing creates the memory.

This synapse may one day be part of a more brain-like computer, which could be especially beneficial for computing that works with visual and auditory signals. Examples of this are seen in voice-controlled interfaces and driverless cars. Past efforts in this field have produced high-performance neural networks supported by artificially intelligent algorithms but these are still distant imitators of the brain that depend on energy-consuming traditional computer hardware.

Building a brain

When we learn, electrical signals are sent between neurons in our brain. The most energy is needed the first time a synapse is traversed. Every time afterward, the connection requires less energy. This is how synapses efficiently facilitate both learning something new and remembering what we’ve learned. The artificial synapse, unlike most other versions of brain-like computing, also fulfills these two tasks simultaneously, and does so with substantial energy savings.

“Deep learning algorithms are very powerful but they rely on processors to calculate and simulate the electrical states and store them somewhere else, which is inefficient in terms of energy and time,” said Yoeri van de Burgt, former postdoctoral scholar in the Salleo lab and lead author of the paper. “Instead of simulating a neural network, our work is trying to make a neural network.”

The artificial synapse is based on a battery design. It consists of two thin, flexible films with three terminals, connected by an electrolyte of salty water. The device works as a transistor, with one of the terminals controlling the flow of electricity between the other two.

Like a neural path in a brain being reinforced through learning, the researchers program the artificial synapse by discharging and recharging it repeatedly. Through this training, they have been able to predict within 1 percent of uncertainty what voltage will be required to get the synapse to a specific electrical state and, once there, it remains at that state. In other words, unlike a common computer, where you save your work to the hard drive before you turn it off, the artificial synapse can recall its programming without any additional actions or parts.

Testing a network of artificial synapses

Only one artificial synapse has been produced, but researchers at Sandia used 15,000 measurements from experiments on that synapse to simulate how an array of them would work in a neural network. They tested the simulated network’s ability to recognize handwritten digits 0 through 9. Tested on three datasets, the simulated array was able to identify the handwritten digits with an accuracy between 93 and 97 percent.

Although this task would be relatively simple for a person, traditional computers have a difficult time interpreting visual and auditory signals.

“More and more, the kinds of tasks that we expect our computing devices to do require computing that mimics the brain because using traditional computing to perform these tasks is becoming really power hungry,” said A. Alec Talin, distinguished member of technical staff at Sandia National Laboratories in Livermore, California, and senior author of the paper. “We’ve demonstrated a device that’s ideal for running these types of algorithms and that consumes a lot less power.”

This device is extremely well suited for the kind of signal identification and classification that traditional computers struggle to perform. Whereas digital transistors can be in only two states, such as 0 and 1, the researchers successfully programmed 500 states in the artificial synapse, which is useful for neuron-type computation models. In switching from one state to another they used about one-tenth as much energy as a state-of-the-art computing system needs in order to move data from the processing unit to the memory.
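The jump from two digital states to ~500 analogue states is easy to demonstrate in simulation. Below is a minimal, hypothetical sketch (plain NumPy, not the researchers’ actual Sandia simulation): a toy linear classifier is trained on synthetic data, then its weights are snapped to a limited number of evenly spaced “conductance” levels, letting you compare a binary (2-state) device with a many-state one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, linearly separable 2-class data (a stand-in for the
# handwritten-digit task; the real simulation used 15,000 device measurements).
X = rng.normal(size=(1000, 20))
w_true = rng.normal(size=20)
y = (X @ w_true > 0).astype(int)

# Train a simple linear classifier via least squares on +/-1 targets.
w, *_ = np.linalg.lstsq(X, 2 * y - 1, rcond=None)

def quantize(weights, n_states):
    """Snap each weight to the nearest of n_states evenly spaced levels,
    mimicking a device with a finite number of conductance states."""
    levels = np.linspace(weights.min(), weights.max(), n_states)
    idx = np.abs(weights[:, None] - levels[None, :]).argmin(axis=1)
    return levels[idx]

def accuracy(weights):
    return ((X @ weights > 0).astype(int) == y).mean()

for n_states in (2, 8, 500):
    print(f"{n_states:3d} states: accuracy = {accuracy(quantize(w, n_states)):.3f}")
```

In this toy setup, 500 levels are effectively indistinguishable from full-precision weights, while 2 levels typically cost noticeable accuracy; that is why many non-volatile states per device matter for storing analogue neural-network weights.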

This, however, means they are still using about 10,000 times as much energy as the minimum a biological synapse needs in order to fire. The researchers are hopeful that they can attain neuron-level energy efficiency once they test the artificial synapse in smaller devices.
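That 10,000× figure follows directly from the numbers quoted elsewhere in this post (~1–100 fJ per biological synaptic event, versus roughly 10 pJ per switching event for the device); a quick back-of-the-envelope check:

```python
# Orders of magnitude taken from the article; treat them as rough figures.
FJ = 1e-15  # femtojoule, in joules
PJ = 1e-12  # picojoule, in joules

bio_min, bio_max = 1 * FJ, 100 * FJ   # biological synapse, per event
device = 10 * PJ                      # artificial synapse, per switching event

print(f"vs. most efficient biological synapse:  {device / bio_min:,.0f}x")
print(f"vs. least efficient biological synapse: {device / bio_max:,.0f}x")
```

The worst-case ratio (against the most efficient biological synapse) is the 10,000× quoted above; against the least efficient, the gap is only about 100×.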

Organic potential

Every part of the device is made of inexpensive organic materials. These aren’t found in nature but they are largely composed of hydrogen and carbon and are compatible with the brain’s chemistry. Cells have been grown on these materials and they have even been used to make artificial pumps for neural transmitters. The voltages applied to train the artificial synapse are also the same as those that move through human neurons.

All this means it’s possible that the artificial synapse could communicate with live neurons, leading to improved brain-machine interfaces. The softness and flexibility of the device also lends itself to being used in biological environments. Before any applications to biology, however, the team plans to build an actual array of artificial synapses for further research and testing.

Additional Stanford co-authors of this work include co-lead author Ewout Lubberman, also of the University of Groningen in the Netherlands, Scott T. Keene and Grégorio C. Faria, also of Universidade de São Paulo, in Brazil. Sandia National Laboratories co-authors include Elliot J. Fuller and Sapan Agarwal in Livermore and Matthew J. Marinella in Albuquerque, New Mexico. Salleo is an affiliate of the Stanford Precourt Institute for Energy and the Stanford Neurosciences Institute. Van de Burgt is now an assistant professor in microsystems and an affiliate of the Institute for Complex Molecular Studies (ICMS) at Eindhoven University of Technology in the Netherlands.

This research was funded by the National Science Foundation, the Keck Faculty Scholar Funds, the Neurofab at Stanford, the Stanford Graduate Fellowship, Sandia’s Laboratory-Directed Research and Development Program, the U.S. Department of Energy, the Holland Scholarship, the University of Groningen Scholarship for Excellent Students, the Hendrik Muller National Fund, the Schuurman Schimmel-van Outeren Foundation, the Foundation of Renswoude (The Hague and Delft), the Marco Polo Fund, the Instituto Nacional de Ciência e Tecnologia/Instituto Nacional de Eletrônica Orgânica in Brazil, the Fundação de Amparo à Pesquisa do Estado de São Paulo and the Brazilian National Council.

Here’s an abstract for the researchers’ paper (link to paper provided after abstract) and it’s where you’ll find the memristor connection explained,

The brain is capable of massively parallel information processing while consuming only ~1–100 fJ per synaptic event [1,2]. Inspired by the efficiency of the brain, CMOS-based neural architectures [3] and memristors [4,5] are being developed for pattern recognition and machine learning. However, the volatility, design complexity and high supply voltages for CMOS architectures, and the stochastic and energy-costly switching of memristors complicate the path to achieve the interconnectivity, information density, and energy efficiency of the brain using either approach. Here we describe an electrochemical neuromorphic organic device (ENODe) operating with a fundamentally different mechanism from existing memristors. ENODe switches at low voltage and energy (<10 pJ for 10³ μm² devices), displays >500 distinct, non-volatile conductance states within a ~1 V range, and achieves high classification accuracy when implemented in neural network simulations. Plastic ENODes are also fabricated on flexible substrates enabling the integration of neuromorphic functionality in stretchable electronic systems [6,7]. Mechanical flexibility makes ENODes compatible with three-dimensional architectures, opening a path towards extreme interconnectivity comparable to the human brain.

Here’s a link to and a citation for the paper,

A non-volatile organic electrochemical device as a low-voltage artificial synapse for neuromorphic computing by Yoeri van de Burgt, Ewout Lubberman, Elliot J. Fuller, Scott T. Keene, Grégorio C. Faria, Sapan Agarwal, Matthew J. Marinella, A. Alec Talin, & Alberto Salleo. Nature Materials (2017). DOI: 10.1038/nmat4856. Published online 20 February 2017.

This paper is behind a paywall.

ETA March 8, 2017 10:28 PST: You may find this piece on ferroelectricity and neuromorphic engineering of interest (March 7, 2017 posting titled: Ferroelectric roadmap to neuromorphic computing).

Nominations open for Kabiller Prizes in Nanoscience and Nanomedicine ($250,000 for visionary researcher and $10,000 for young investigator)

For a change I can publish something that doesn’t have a deadline in three days or less! Without more ado (from a Feb. 20, 2017 Northwestern University news release by Megan Fellman [h/t Nanowerk’s Feb. 20, 2017 news item]),

Northwestern University’s International Institute for Nanotechnology (IIN) is now accepting nominations for two prestigious international prizes: the $250,000 Kabiller Prize in Nanoscience and Nanomedicine and the $10,000 Kabiller Young Investigator Award in Nanoscience and Nanomedicine.

The deadline for nominations is May 15, 2017. Details are available on the IIN website.

“Our goal is to recognize the outstanding accomplishments in nanoscience and nanomedicine that have the potential to benefit all humankind,” said David G. Kabiller, a Northwestern trustee and alumnus. He is a co-founder of AQR Capital Management, a global investment management firm in Greenwich, Connecticut.

The two prizes, awarded every other year, were established in 2015 through a generous gift from Kabiller. Current Northwestern-affiliated researchers are not eligible for nomination until 2018 for the 2019 prizes.

The Kabiller Prize — the largest monetary award in the world for outstanding achievement in the field of nanomedicine — celebrates researchers who have made the most significant contributions to the field of nanotechnology and its application to medicine and biology.

The Kabiller Young Investigator Award recognizes young emerging researchers who have made recent groundbreaking discoveries with the potential to make a lasting impact in nanoscience and nanomedicine.

“The IIN at Northwestern University is a hub of excellence in the field of nanotechnology,” said Kabiller, chair of the IIN executive council and a graduate of Northwestern’s Weinberg College of Arts and Sciences and Kellogg School of Management. “As such, it is the ideal organization from which to launch these awards recognizing outstanding achievements that have the potential to substantially benefit society.”

Nanoparticles for medical use are typically no larger than 100 nanometers — comparable in size to the molecules in the body. At this scale, the essential properties (e.g., color, melting point, conductivity, etc.) of structures behave uniquely. Researchers are capitalizing on these unique properties in their quest to realize life-changing advances in the diagnosis, treatment and prevention of disease.

“Nanotechnology is one of the key areas of distinction at Northwestern,” said Chad A. Mirkin, IIN director and George B. Rathmann Professor of Chemistry in Weinberg. “We are very grateful for David’s ongoing support and are honored to be stewards of these prestigious awards.”

An international committee of experts in the field will select the winners of the 2017 Kabiller Prize and the 2017 Kabiller Young Investigator Award and announce them in September.

The recipients will be honored at an awards banquet Sept. 27 in Chicago. They also will be recognized at the 2017 IIN Symposium, which will include talks from prestigious speakers, including 2016 Nobel Laureate in Chemistry Ben Feringa, from the University of Groningen, the Netherlands.

2015 recipient of the Kabiller Prize

The winner of the inaugural Kabiller Prize, in 2015, was Joseph DeSimone, the Chancellor’s Eminent Professor of Chemistry at the University of North Carolina at Chapel Hill and the William R. Kenan Jr. Distinguished Professor of Chemical Engineering at North Carolina State University and of Chemistry at UNC-Chapel Hill.

DeSimone was honored for his invention of particle replication in non-wetting templates (PRINT) technology that enables the fabrication of precisely defined, shape-specific nanoparticles for advances in disease treatment and prevention. Nanoparticles made with PRINT technology are being used to develop new cancer treatments, inhalable therapeutics for treating pulmonary diseases, such as cystic fibrosis and asthma, and next-generation vaccines for malaria, pneumonia and dengue.

2015 recipient of the Kabiller Young Investigator Award

Warren Chan, professor at the Institute of Biomaterials and Biomedical Engineering at the University of Toronto, was the recipient of the inaugural Kabiller Young Investigator Award, also in 2015. Chan and his research group have developed an infectious disease diagnostic device for point-of-care use that can differentiate symptoms.

BTW, Warren Chan, winner of the ‘Young Investigator Award’, and/or his work have been featured here a few times, most recently in a Nov. 1, 2016 posting, which is mostly about another award he won but also includes links to some of his work, including my April 27, 2016 post about the discovery that fewer than 1% of nanoparticle-based drugs reach their destination.

How does ice melt? Layer by layer!

A Dec. 12, 2016 news item on ScienceDaily announces the answer to a problem scientists have been investigating for over a century, but first, here are the questions,

We all know that ice melts at 0°C. However, 150 years ago the famous physicist Michael Faraday discovered that at the surface of frozen ice, well below 0°C, a thin film of liquid-like water is present. This thin film makes ice slippery and is crucial for the motion of glaciers.

Since Faraday’s discovery, the properties of this liquid-like layer have been a research topic for scientists all over the world, and one that has entailed considerable controversy: at what temperature does the surface become liquid-like? How does the thickness of the layer depend on temperature? Does the thickness increase continuously or stepwise? Experiments to date have generally shown a very thin layer that continuously grows in thickness up to 45 nm right below the bulk melting point at 0°C. This also illustrates why it has been so challenging to study this layer of liquid-like water on ice: 45 nm is about 1/1000th the width of a human hair and is not discernible by eye.

Scientists of the Max Planck Institute for Polymer Research (MPI-P), in a collaboration with researchers from the Netherlands, the USA and Japan, have succeeded in studying the properties of this quasi-liquid layer on ice at the molecular level using advanced surface-specific spectroscopy and computer simulations. The results are published in the latest edition of the scientific journal Proceedings of the National Academy of Sciences (PNAS).

Caption: Ice melts, as described in the text, layer by layer. Credit: © MPIP

A Dec. 12, 2016 Max Planck Institute for Polymer Research press release (also on EurekAlert), which originated the news item, goes on to answer the questions,

The team of scientists around Ellen Backus, group leader at MPI-P, investigated how the thin liquid layer is formed on ice, how it grows with increasing temperature, and if it is distinguishable from normal liquid water. These studies required well-defined ice crystal surfaces. Therefore, much effort was put into creating ~10 cm large single crystals of ice, which could be cut in such a way that the surface structure was precisely known. To investigate whether the surface was solid or liquid, the team made use of the fact that water molecules in the liquid have a weaker interaction with each other compared to water molecules in ice. Using their interfacial spectroscopy, combined with the controlled heating of the ice crystal, the researchers were able to quantify the change in the interaction between water molecules directly at the interface between ice and air.

The experimental results, combined with the simulations, showed that the first molecular layer at the ice surface has already melted at temperatures as low as -38°C (235 K), the lowest temperature the researchers could experimentally investigate. Increasing the temperature to -16°C (257 K), the second layer becomes liquid. Contrary to popular belief, the surface melting of ice is not a continuous process, but occurs in a discontinuous, layer-by-layer fashion.

“A further important question for us was whether one could distinguish between the properties of the quasi-liquid layer and those of normal water,” says Mischa Bonn, co-author of the paper and director at the MPI-P. And indeed, the quasi-liquid layer at -4°C (269 K) shows a different spectroscopic response than supercooled water at the same temperature; in the quasi-liquid layer, the water molecules seem to interact more strongly than in liquid water.

The results are not only important for a fundamental understanding of ice, but also for climate science, where much research takes place on catalytic reactions on ice surfaces, for which the understanding of the ice surface structure is crucial.

Here’s a link to and a citation for the paper,

Experimental and theoretical evidence for bilayer-by-bilayer surface melting of crystalline ice by M. Alejandra Sánchez, Tanja Kling, Tatsuya Ishiyama, Marc-Jan van Zadel, Patrick J. Bisson, Markus Mezger, Mara N. Jochum, Jenée D. Cyran, Wilbert J. Smit, Huib J. Bakker, Mary Jane Shultz, Akihiro Morita, Davide Donadio, Yuki Nagata, Mischa Bonn, and Ellen H. G. Backus. Proceedings of the National Academy of Sciences, 2016. DOI: 10.1073/pnas.1612893114. Published online before print December 12, 2016.

This paper appears to be open access.

Fashion Week Netherlands and a conversation about nanotextiles

Marjolein Lammerts van Bueren has written up an interview with the principals of Nanonow consulting agency, in a Dec. 15, 2016 article for Amsterdam Fashion Week, where they focus on nanotextiles (Note: Links have been removed),

Strong, sustainable textiles created by combining chemical recycling and nanotechnology – for Vincent Franken and Roel Boekel, their nanotechstiles are already within reach. With their consulting firm, Nanonow, the two men help companies in a range of industries innovate in the field of nanotechnology. And yes, you guessed it, the fashion industry, too, is finding ways to use the technology to its advantage. Fashionweek.nl sat down with Franken to talk about textiles on a nano scale.

How did you come up with the idea for Nanonow?

“I studied Science, Business & Innovations at the VU in Amsterdam. That’s a ‘beta’ (science) programme that focuses on new technologies and how you can bring them to the market, and I specialised in nanotechnology within that. Because of the many – still untapped – opportunities and applications there are for nanotechnology, I started Nanonow with Roel Boekel after I graduated in 2014. We’re a consulting firm helping companies that still don’t really know how they can make use of nanotechnology, which can be used for a whole lot of things.”

Like the textile industry?

“Exactly. Over the last few years, we’ve done research into several different industries, like the waste and recycling industry. Six months ago we started looking at the textile industry, via Frankenhuis, an international textile recycler. When you throw your clothes in the recycling bin, a portion of them are sold on and a portion are recycled, or downcycled, as I call it. They pull the textiles apart, and those fibres – so the threads – are sold and repurposed into things like insulation. Roel and I thought that was a shame, because you’re deconstructing clothes that have often barely been worn just to make a low-value product out of them.”

So you’ve developed an alternative, Nanotechstiles. Tell us about it!

“We actually wanted to make new clothes from the deconstructed clothes. This is already happening via mechanical recycling, where you produce new clothes by reweaving the old textile fibres. But for me, the Holy Grail we’re looking for – I’m a tech guy after all – is the molecules inside the fibres.”

“First, we don’t want to use the existing thread, but instead we want to pull the thread apart completely then put it back together again. This is called chemical recycling and it’s already happening today. You can remove the cellulose fibres from cotton then put them back together to form viscose or lyocell. The downside of that is that the process is pretty expensive and the quality isn’t always that good.”

“Then you also have nanotechnologies, an area that’s developing rapidly and is already being used to strengthen textiles, which makes them last longer. But there are more options for making textiles no-iron, antibacterial – so that it doesn’t start to smell as quickly – or stain resistant. You can also integrate energy-saving electronics into them, or make them water resistant, as you saw last year on Valerio Zeno and Dennis Storm’s BNN TV programme, Proefkonijnen.”

“When you use nanotechnology to make materials smaller, you transform them, as it were, giving them completely different characteristics. So the fact that you can transform materials means that you can also do this with the threads themselves. We believe that when you combine chemical recycling with nanotechnology, what you get is the perfect thread. We call them nanotechstiles, and in the end, they lead to higher quality clothes that are sustainable, as well.”

“The fact that you can transform materials means that you can also do this with the threads themselves”

How far along are you in the research for nanotechstiles?

“We won the TKI Dinalog Take Off in the logistics sector last year with our nanotechstiles idea. That’s a prize for young talent with innovative ideas for economics and logistics. Since then, we’ve been trying to make the concept more concrete. Which recycling methods can we combine with which nanotechnologies? We’re already pretty far along in that research process, but there hasn’t been any clothing produced from it as yet. We’re focusing on cotton because it makes up the largest proportion of waste. At the moment, we’re in talks with the Institut für Textiltechnik at the University of Aachen about how we can produce clothes from our nanotechstiles.”

Have you also discovered some pitfalls as part of your research?

“The frustrating thing about nanotechnology is that the more you know about it, the less you can do with it. A lot of options are eliminated during the research process. I’ll give you an example. You want to make clothes that don’t smell as quickly? Well, on paper we know that silver kills 99.9% of bacteria, though we haven’t tested it. So then that leaves you with 0.1%, and that percentage can grow exponentially by using the nutrients from other bacteria. So the material in the clothing itself is safe, but what if a few particles come loose in the wash and get into the drinking water? What happens then? A lot of potential options are eliminated as you go through a process like that because they can be dangerous.”

What are the downsides and how can you guarantee that a design is safe?

“A tremendous amount of nanotechnologies are still in the research phase, so they’re too expensive to develop. We’d like to be using some of them now, but it turns out that there are still too many uncertainties to realistically put them into use. It’s essential to apply the principles of safety by design, only using nanotechnologies where the safety concerns have been well thought out. That’s something we’ve been in touch with the Rijksinstituut voor Volksgezondheid en Milieu (Royal Institute for Public Health and the Environment, RIVM) about. We take safety and the environment into account at every step in the production process for nanotechstiles.”

What’s the biggest challenge to your concept?

“We already know how certain nanotechnologies respond to cotton, but the biggest challenge is to figure out how they respond to recycled fabrics. You have to remember that nanotechnology isn’t just one thing. You can apply it to any material, which gives you thousands of possibilities. The question is, which one do you think is the most important? For example, you can add carbon nanotubes to make a fabric stronger, but then you’d be paying thousands of euros for a single shirt, and no one wants that.”

What’s the next step?

“Right now, we’re trying to get a sort of crowdfunding campaign started amongst businesses. We’re hoping to build relationships with companies like IKEA, who want to use our sustainable and stain-resistant textiles for things like their employee uniforms. So in addition to the subsidies, they’re helping to fund the research in that way. Based on that, we’ll eventually choose a nanotechnology that we can work up into an actual textile.”

I encourage you to read the original article with its embedded images, additional information, and links to more information.

One last comment, nanotechnology-enabled textiles are usually brand new materials so this is the first time I’ve seen a nanotechnology-based approach to recycling textiles. Bravo!

A European nanotechnology ‘dating’ event for researchers and innovators

A Dec. 13, 2016 Cambridge Network press release announces a networking (dating) event for nanotechnology researchers and industry partners,

The Enterprise Europe Network, in partnership with Innovate UK, the Dutch Ministry of Economic Affairs, the Netherlands Enterprise Agency, Knowledge Transfer Network and the UK Department of Business Energy & Industrial Strategy invite you to participate in an international partnering event and information day for the Nanotechnologies and Advanced Materials themes of the NMBP [Nanotechnologies, Advanced Materials, Biotechnology and Production] Work Programme within Horizon 2020.

This one-day event on 4th April 2017 will introduce the forthcoming calls for proposals, present insights and expectations from the European Commission, and offer a unique international networking experience to forge the winning partnerships of the future.

The programme will include presentations from the European Commission and its evaluators and an opportunity to build prospective project partnerships with leading research organisations and cutting-edge innovators from across industry.

A dedicated brokerage session will allow you to expand your international network and create strong consortia through scheduled one-to-one meetings. Participants will also have the opportunity to meet with National Contact Points (UK and Netherlands confirmed) and representatives of the Enterprise Europe Network and the UK’s Knowledge Transfer Network.

The day will also include an optional proposal writing workshop in which delegates will be given valuable tips and insight into the preparation of a winning proposal including a review of the key evaluation criteria.

This event is dedicated to Key Enabling Technologies and will target upcoming calls in the following thematic fields: Nanotechnologies; Advanced materials

Participation for the day is free of charge, but early registration is recommended as the number of participants is limited. Please note that participation may be limited to a maximum of two delegates per organization. To register, please use the b2match website: https://www.b2match.eu/h2020nmp2017

How does it work? Once you have registered, your profile will be screened by our event management team and once completed you will receive a validation email confirming your participation. You can browse the participant list and book meetings with organisations you are interested in, and a week before the event you will receive your personal meeting schedule.

Why attend? Improve your chances of success by understanding the main issues and expectations for upcoming H2020 calls based on feedback from previous rounds. It’s a great opportunity to raise your profile with future project partners from industry and research through pre-arranged one-to-one meetings. There is also the chance to hear from an experienced H2020 evaluator to gain tips and insight for the preparation of a strong proposal.

Good luck on getting registered for the event. By the way, the Enterprise Europe Network webpage for this event describes it as a Horizon 2020 Brokerage Event.

Sustainable Nanotechnologies (SUN) project draws to a close in March 2017

Two Oct. 31, 2016 news items on Nanowerk signal the impending sunset date for the European Union’s Sustainable Nanotechnologies (SUN) project. The first Oct. 31, 2016 news item on Nanowerk describes the project’s latest achievements,

The results from the 3rd SUN annual meeting showed great advancement of the project. The meeting was held in Edinburgh, Scotland, UK on 4-5 October 2016 where the project partners presented the results obtained during the second reporting period of the project.

SUN is a three and a half year EU project, running from 2013 to 2017, with a budget of about €14 million. Its main goal is to evaluate the risks along the supply chain of engineered nanomaterials and incorporate the results into tools and guidelines for sustainable manufacturing.

The ultimate goal of the SUN Project is the development of an online software Decision Support System – SUNDS – aimed at estimating and managing occupational, consumer, environmental and public health risks from nanomaterials in real industrial products along their lifecycles. The SUNDS beta prototype was released in October 2015, and since then the main focus has been on refining the methodologies and testing them on selected case studies, i.e. nano-copper oxide based wood preserving paint and nano-sized colourants for plastic car parts: organic pigment and carbon black. Obtained results and open issues were discussed during the third annual meeting in order to collect feedback from the consortium that will inform, in the next months, the implementation of the final version of the SUNDS software system, due by March 2017.

An Oct. 27, 2016 SUN project press release, which originated the news item, adds more information,

Significant interest has been paid towards the results obtained in WP2 (Lifecycle Thinking), whose main objectives are to assess the environmental impacts arising from each life cycle stage of the SUN case studies (i.e. Nano-WC-Cobalt (Tungsten Carbide-cobalt) sintered ceramics, Nanocopper wood preservatives, Carbon Nano Tube (CNT) in plastics, Silicon Dioxide (SiO2) as food additive, Nano-Titanium Dioxide (TiO2) air filter system, Organic pigment in plastics and Nanosilver (Ag) in textiles), and compare them to conventional products with similar uses and functionality, in order to develop and validate criteria and guiding principles for green nano-manufacturing. Specifically, the consortium partner COLOROBBIA CONSULTING S.r.l. expressed its willingness to exploit the results obtained from the life cycle assessment analysis related to nanoTiO2 in their industrial applications.

On 6th October [2016], the discussions about the SUNDS advancement continued during a Stakeholder Workshop, where representatives from the industry, regulatory and insurance sectors shared their feedback on the use of the decision support system. The recommendations collected during the workshop will be used for further refinement and will be implemented in the final version of the software, which will be released by March 2017.

The second Oct. 31, 2016 news item on Nanowerk led me to this Oct. 27, 2016 SUN project press release about the activities in the upcoming final months,

The project has designed its final events to serve as an effective platform to communicate the main results achieved in its course within the Nanosafety community and bridge them to a wider audience addressing the emerging risks of Key Enabling Technologies (KETs).

The series of events includes the New Tools and Approaches for Nanomaterial Safety Assessment conference, jointly organized by NANOSOLUTIONS, SUN, NanoMILE, GUIDEnano and eNanoMapper, to be held on 7–9 February 2017 in Malaga, Spain; the SUN-caLIBRAte Stakeholders workshop, to be held on 28 February – 1 March 2017 in Venice, Italy; and the SRA Policy Forum: Risk Governance for Key Enabling Technologies, to be held on 1–3 March 2017 in Venice, Italy.

Jointly organized by the Society for Risk Analysis (SRA) and the SUN Project, the SRA Policy Forum will address current efforts toward refining the risk governance of emerging technologies through the integration of traditional risk-analytic tools with considerations of social and economic concerns. The parallel sessions will be organized in four tracks: risk analysis of engineered nanomaterials along the product lifecycle, risks and benefits of emerging technologies used in medical applications, challenges of governing SynBio and Biotech, and methods and tools for risk governance.

The SRA Policy Forum has announced its speakers and preliminary Programme. Confirmed speakers include:

  • Keld Alstrup Jensen (National Research Centre for the Working Environment, Denmark)
  • Elke Anklam (European Commission, Belgium)
  • Adam Arkin (University of California, Berkeley, USA)
  • Phil Demokritou (Harvard University, USA)
  • Gerard Escher (École polytechnique fédérale de Lausanne, Switzerland)
  • Lisa Friedersdorf (National Nanotechnology Initiative, USA)
  • James Lambert (President, Society for Risk Analysis, USA)
  • Andre Nel (The University of California, Los Angeles, USA)
  • Bernd Nowack (EMPA, Switzerland)
  • Ortwin Renn (University of Stuttgart, Germany)
  • Vicki Stone (Heriot-Watt University, UK)
  • Theo Vermeire (National Institute for Public Health and the Environment (RIVM), Netherlands)
  • Tom van Teunenbroek (Ministry of Infrastructure and Environment, The Netherlands)
  • Wendel Wohlleben (BASF, Germany)

The New Tools and Approaches for Nanomaterial Safety Assessment (NMSA) conference aims to present the main results achieved in the course of the organizing projects, fostering a discussion about their impact on the nanosafety field and possibilities for future research programmes. The conference welcomes consortium partners, as well as representatives from other EU projects, industry, government, civil society and the media. Accordingly, the conference topics include: hazard assessment along the life cycle of nano-enabled products, exposure assessment along the life cycle of nano-enabled products, risk assessment & management, systems biology approaches in nanosafety, categorization & grouping of nanomaterials, nanosafety infrastructure, and safe by design. The NMSA conference keynote speakers include:

  • Harri Alenius (University of Helsinki, Finland)
  • Antonio Marcomini (Ca’ Foscari University of Venice, Italy)
  • Wendel Wohlleben (BASF, Germany)
  • Danail Hristozov (Ca’ Foscari University of Venice, Italy)
  • Eva Valsami-Jones (University of Birmingham, UK)
  • Socorro Vázquez-Campos (LEITAT Technological Center, Spain)
  • Barry Hardy (Douglas Connect GmbH, Switzerland)
  • Egon Willighagen (Maastricht University, Netherlands)
  • Nina Jeliazkova (IDEAconsult Ltd., Bulgaria)
  • Haralambos Sarimveis (The National Technical University of Athens, Greece)

During the SUN-caLIBRAte Stakeholder workshop, the final version of the SUN user-friendly, software-based Decision Support System (SUNDS) for managing the environmental, economic and social impacts of nanotechnologies will be presented and discussed with its end users: industry, regulatory and insurance sector representatives. The results of the discussion will be used as a foundation for the development of caLIBRAte’s risk governance framework for the assessment and management of the human and environmental risks of manufactured nanomaterials (MN) and MN-enabled products.

The SRA Policy Forum: Risk Governance for Key Enabling Technologies and the New Tools and Approaches for Nanomaterial Safety Assessment conference are now open for registration. Abstracts for the SRA Policy Forum can be submitted until 15 November 2016.
For further information go to:
www.sra.org/riskgovernanceforum2017
http://www.nmsaconference.eu/

There you have it.

2016 Nobel Chemistry Prize for molecular machines

Wednesday, Oct. 5, 2016 was the day three scientists received the Nobel Prize in Chemistry for their work on molecular machines, according to an Oct. 5, 2016 news item on phys.org,

Three scientists won the Nobel Prize in chemistry on Wednesday [Oct. 5, 2016] for developing the world’s smallest machines, 1,000 times thinner than a human hair but with the potential to revolutionize computer and energy systems.

Frenchman Jean-Pierre Sauvage, Scottish-born Fraser Stoddart and Dutch scientist Bernard “Ben” Feringa share the 8 million kronor ($930,000) prize for the “design and synthesis of molecular machines,” the Royal Swedish Academy of Sciences said.

Machines at the molecular level have taken chemistry to a new dimension and “will most likely be used in the development of things such as new materials, sensors and energy storage systems,” the academy said.

Practical applications are still far away—the academy said molecular motors are at the same stage that electrical motors were in the first half of the 19th century—but the potential is huge.

Dexter Johnson in an Oct. 5, 2016 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) provides some insight into the matter (Note: A link has been removed),

In what seems to have come both as a shock to some of the recipients and a confirmation to all those who envision molecular nanotechnology as the true future of nanotechnology, Bernard Feringa, Jean-Pierre Sauvage, and Sir J. Fraser Stoddart have been awarded the 2016 Nobel Prize in Chemistry for their development of molecular machines.

The Nobel Prize was awarded to all three of the scientists based on their complementary work over nearly three decades. First, in 1983, Sauvage (currently at Strasbourg University in France) was able to link two ring-shaped molecules to form a chain. Then, eight years later, Stoddart, a professor at Northwestern University in Evanston, Ill., demonstrated that a molecular ring could turn on a thin molecular axle. Then, eight years after that, Feringa, a professor at the University of Groningen, in the Netherlands, built on Stoddart’s work and fabricated a molecular rotor blade that could spin continually in the same direction.

Speaking of the Nobel committee’s selection, Donna Nelson, a chemist and president of the American Chemical Society told Scientific American: “I think this topic is going to be fabulous for science. When the Nobel Prize is given, it inspires a lot of interest in the topic by other researchers. It will also increase funding.” Nelson added that this line of research will be fascinating for kids. “They can visualize it, and imagine a nanocar. This comes at a great time, when we need to inspire the next generation of scientists.”

The Economist, which appears to be previewing an article about the 2016 Nobel prizes ahead of the print version, has this to say in its Oct. 8, 2016 article,

BIGGER is not always better. Anyone who doubts that has only to look at the explosion of computing power which has marked the past half-century. This was made possible by continual shrinkage of the components computers are made from. That success has, in turn, inspired a search for other areas where shrinkage might also yield dividends.

One such, which has been poised delicately between hype and hope since the 1990s, is nanotechnology. What people mean by this term has varied over the years—to the extent that cynics might be forgiven for wondering if it is more than just a fancy rebranding of the word “chemistry”—but nanotechnology did originally have a fairly clear definition. It was the idea that machines with moving parts could be made on a molecular scale. And in recognition of this goal Sweden’s Royal Academy of Science this week decided to award this year’s Nobel prize for chemistry to three researchers, Jean-Pierre Sauvage, Sir Fraser Stoddart and Bernard Feringa, who have never lost sight of nanotechnology’s original objective.

Optimists talk of manufacturing molecule-sized machines ranging from drug-delivery devices to miniature computers. Pessimists recall that nanotechnology is a field that has been puffed up repeatedly by both researchers and investors, only to deflate in the face of practical difficulties.

There is, though, reason to hope it will work in the end. This is because, as is often the case with human inventions, Mother Nature has got there first. One way to think of living cells is as assemblies of nanotechnological machines. For example, the enzyme that produces adenosine triphosphate (ATP)—a molecule used in almost all living cells to fuel biochemical reactions—includes a spinning molecular machine rather like Dr Feringa’s invention. This works well. The ATP generators in a human body turn out so much of the stuff that over the course of a day they create almost a body-weight’s-worth of it. Do something equivalent commercially, and the hype around nanotechnology might prove itself justified.

Congratulations to the three winners!

Sonifying a swimmer’s performance to improve technique by listening

I imagine since the 2016 Olympic Games are over that athletes and their coaches will soon start training for the 2020 Games. Researchers at Bielefeld University (Germany) have developed a new system to help swimmers improve their technique (Note: The following video is in German with English-language subtitles),

An Aug. 4, 2016 Bielefeld University press release (also on EurekAlert), tells more,

Since 1896, swimming has been an event in the Olympic games. Back then it was the swimmer’s physical condition that was decisive in securing a win, but today it is mostly technique that determines who takes home the title of world champion. Researchers at Bielefeld University have developed a system that professional swimmers can use to optimize their swimming technique. The system expands the athlete’s perception and feel for the water by enabling them to hear, in real time, how the pressure of the water flows created by the swimmer changes with their movements. This gives the swimmer an advantage over his competitors because he can refine the execution of his technique. This “Swimming Sonification” system was developed at the Cluster of Excellence Cognitive Interaction Technology (CITEC) of Bielefeld University. In a video, Bielefeld University’s own “research_tv” reports on the new system.

“Swimmers see the movements of their hands. They also feel how the water glides over their hands, and they sense how quickly they are moving forwards. However, the majority of swimmers are not very aware of one significant factor: how the pressure exerted by the flow of the water on their bodies changes,” says Dr. Thomas Hermann of the Cluster of Excellence Cognitive Interaction Technology (CITEC). The sound researcher is working on converting data into sounds that can be used to benefit the listener. This is called sonification, a process in which measured data values are systematically turned into audible sounds and noises. “In this project, we are using the pressure from water flows as the data source,” says Hermann, who heads CITEC research group Ambient Intelligence. “We convert into sound how the pressure of water flows changes while swimming – in real time. We play the sounds to the swimmer over headphones so that they can then adjust their movements based on what they hear,” explains Hermann.

For this research project on swimming sonification, Dr. Hermann is working together with Dr. Bodo Ungerechts of the Faculty of Psychology and Sports Science. As a biomechanist, Dr. Ungerechts deals with how human beings control their movements, particularly when swimming. “If a swimmer registers by hearing how the flow pressure changes, he can better judge, for instance, how he can produce more thrust at similar energy costs. This gives the swimmer a more encompassing perception of his movements in the water,” says Dr. Ungerechts. The researcher even tested the system out for himself. “I was surprised at just how well the sonification and the effects of the water flow, which I felt myself, corresponded with one another,” he says. The system is intuitive and easy to use. “You immediately start playing with the sounds to hear, for example, what tonal effect spreading your fingers apart or changing the position of your hand has,” says Ungerechts. The new system should open up new training possibilities for athletes. “By using this system, swimmers develop a harmony – a kind of melody. If a swimmer very quickly masters a lap, they can use the recording of the melody to mentally re-imagine and retrace the successful execution of this lap. This mental training can also help athletes perform successfully in competitions.” To this, Thomas Hermann adds: “The ear is great at perceiving rhythm and changes in rhythm. In this way, swimmers can find their own rhythm and use this to orient themselves in the water.”

The system includes two gloves with thin tube ends that serve as pressure sensors and are fixed between the fingers. The swimmer wears these gloves during practice. The tubes are linked to a measuring instrument, which is currently connected to the swimmer via a line while he or she is swimming. The measuring device transmits data about water flow pressure to a laptop. Custom-made software then sonifies the data, meaning that it turns the information into sound. “During repeated hand actions, for instance, the system can make rising and sinking flow pressure audible as increasing or decreasing tonal pitches,” says Thomas Hermann. Other settings, which sonify features such as symmetry or steadiness, can also be activated as needed.

The sounds are transmitted to the swimmer in real time over headphones. When the swimmer modifies a movement, he hears live how this also changes the sound. With the sonification of aquatic flow pressure, the swimmer can now practice the front crawl in a way that, for instance, both hands displace the water masses with the same water flow form – to do this, the swimmer just has to make sure that he generates the same sound pattern with each hand. Because the coach also hears the sounds over speakers, he can base the instructions he gives to the swimmer not only on the movements he observes, but also on the sounds generated by the swimmer and their rhythm (e.g. “Move your hands so that the tonal pitch increases faster”).
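As a rough illustration of the mapping the press release describes (rising flow pressure heard as rising pitch), here is a minimal sketch. The function names, pressure range and frequency band are hypothetical; the actual CITEC software is not public.

```python
# Hypothetical sketch of pressure-to-pitch sonification: a linear map
# from a flow-pressure reading onto an audible frequency band.

def pressure_to_frequency(pressure, p_min=0.0, p_max=5.0,
                          f_min=220.0, f_max=880.0):
    """Map a flow-pressure sample (arbitrary units) onto a pitch in Hz."""
    # Clamp so out-of-range sensor readings stay inside the audible band.
    p = max(p_min, min(p_max, pressure))
    span = (p - p_min) / (p_max - p_min)
    return f_min + span * (f_max - f_min)

def sonify_stream(pressures):
    """Turn a sequence of pressure samples into a sequence of pitches."""
    return [pressure_to_frequency(p) for p in pressures]

# Rising pressure -> rising pitch, falling pressure -> falling pitch.
pitches = sonify_stream([0.0, 1.0, 2.5, 5.0])
```

A real-time system would additionally buffer the sensor stream and synthesize audio with low latency, but the core idea of sonification is exactly this kind of systematic data-to-sound mapping.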

For this sonification project, Thomas Hermann and Bodo Ungerechts are working with Daniel Cesarini, Ph.D., a researcher from the Department of Information Engineering at the University of Pisa in Italy. Dr. Cesarini developed the measuring device that analyzes the aquatic flow pressure data.

In a practical workshop held in September 2015, professional swimmers tested the system out and confirmed that it indeed helped them to optimize their swimming technique. Of the 10 swimmers who participated, three of them qualify for international competitions, and one of the female swimmers is competing this year at the Paralympics in Rio de Janeiro, Brazil. The workshop was funded by the Cluster of Excellence Cognitive Interaction Technology (CITEC). In addition to this, swim teams at the PSV Eindhoven (Philips Sports Union Eindhoven) in the Netherlands tested the new system out for two months, using it as part of their technique training sessions. The PSV swim club competes in the top swimming league in the Netherlands.

“It is advantageous for swimmers to receive immediate feedback on their swimming form,” says Thomas Hermann. “People learn more quickly when they get direct feedback because they can immediately test how the feedback – in this case, the sound – changes when they try out something new.”

The researchers want to continue developing their current prototype. “We are planning to develop a wearable system that can be used independently by the user, without the help of others,” says Thomas Hermann. In addition to this, the new sonification method is planned to be incorporated into long-term training programs in cooperation with swim clubs.

My first post about sonification was this February 7, 2014 post titled, Data sonification: listening to your data instead of visualizing it.

As for this swimmer’s version of data sonification, you can find out more about the project here and/or here.

Nuclear magnetic resonance microscope breaks records

Dutch researchers have found a way to apply the principles underlying magnetic resonance imaging (MRI) to a microscope designed for examining matter and life at the nanoscale. From a July 15, 2016 news item on phys.org,

A new nuclear magnetic resonance (NMR) microscope gives researchers an improved instrument to study fundamental physical processes. It also offers new possibilities for medical science—for example, to better study proteins in Alzheimer’s patients’ brains. …

A Leiden Institute of Physics press release, which originated the news item, expands on the theme,

If you get a knee injury, physicians use an MRI machine to look right through the skin and see what exactly is the problem. For this trick, doctors make use of the fact that our body’s atomic nuclei are electrically charged and spin around their axes. Just like small electromagnets, they induce their own magnetic field. By placing the knee in a uniform magnetic field, the nuclei line up with their axes pointing in the same direction. The MRI machine then sends a specific type of radio wave through the knee, causing some axes to flip. After this signal is turned off, those nuclei flip back after some time, emitting a small radio wave of their own. Those waves give away the atoms’ locations and provide physicians with an accurate image of the knee.

NMR

MRI is the medical application of Nuclear Magnetic Resonance (NMR), which is based on the same principle and was invented by physicists to conduct fundamental research on materials. One of the things they study with NMR is the so-called relaxation time. This is the time scale at which the nuclei flip back and it gives a lot of information about a material’s properties.
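The relaxation time described above is typically the spin-lattice time T1, which governs an exponential recovery of the nuclear magnetization after the nuclei are flipped. A minimal sketch of the textbook formula follows; the symbol names are generic and not taken from the Leiden paper.

```python
import math

def longitudinal_magnetization(t, t1, m0=1.0):
    """Textbook spin-lattice (T1) recovery after a flip:
    M_z(t) = M0 * (1 - exp(-t / T1))."""
    return m0 * (1.0 - math.exp(-t / t1))

# The longer the wait relative to T1, the closer the nuclei are to
# fully relaxed alignment; after one T1 about 63% has recovered.
recovered_fraction = longitudinal_magnetization(t=1.0, t1=1.0)
```

Measuring how this recovery time varies is, in essence, what a relaxation-time measurement on a material like copper does; the Leiden instrument pushes it to the nanoscale at millikelvin temperatures.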

Microscope

To study materials on the smallest of scales as well, physicists go one step further and develop NMR microscopes, with which they study the mechanics behind physical processes at the level of a group of atoms. Now Leiden PhD students Jelmer Wagenaar and Arthur de Haan, together with principal investigator Tjerk Oosterkamp, have built an NMR microscope that operates at a record temperature of 42 millikelvin—close to absolute zero. In their article in Physical Review Applied they prove it works by measuring the relaxation time of copper. They achieved a thousand times higher sensitivity than existing NMR microscopes—also a world record.

Alzheimer

With their microscope, they give physicists an instrument to conduct fundamental research on many physical phenomena, like systems displaying strange behavior in extreme cold. And like NMR eventually led to MRI machines in hospitals, NMR microscopes have great potential too. Wagenaar: ‘One example is that you might be able to use our technique to study Alzheimer patients’ brains at the molecular level, in order to find out how iron is locked up in proteins.’

Here’s a link to and a citation for the paper,

Probing the Nuclear Spin-Lattice Relaxation Time at the Nanoscale by J. J. T. Wagenaar, A. M. J. den Haan, J. M. de Voogd, L. Bossoni, T. A. de Jong, M. de Wit, K. M. Bastiaans, D. J. Thoen, A. Endo, T. M. Klapwijk, J. Zaanen, and T. H. Oosterkamp. Phys. Rev. Applied 6, 014007. DOI: 10.1103/PhysRevApplied.6.014007. Published 15 July 2016.

This paper is open access.


Trans-Atlantic Platform (T-AP) is a unique collaboration of humanities and social science researchers from Europe and the Americas

Launched in 2013, the Trans-Atlantic Platform is co-chaired by Dr. Ted Hewitt, president of the Social Sciences and Humanities Research Council of Canada (SSHRC), and Dr. Renée van Kessel-Hagesteijn of the Netherlands Organisation for Scientific Research—Social Sciences (NWO—Social Sciences).

An EU (European Union) publication, International Innovation features an interview about T-AP with Ted Hewitt in a June 30, 2016 posting,

The Trans-Atlantic Platform is a unique collaboration of humanities and social science funders from Europe and the Americas. International Innovation’s Rebecca Torr speaks with Ted Hewitt, President of the Social Sciences and Humanities Research Council and Co-Chair of T-AP to understand more about the Platform and its pilot funding programme, Digging into Data.

Many commentators have called for better integration between natural and social scientists, to ensure that the societal benefits of STEM research are fully realised. Does the integration of diverse scientific disciplines form part of T-AP’s remit, and if so, how are you working to achieve this?

T-AP was designed primarily to promote and facilitate research across SSH. However, given the Platform’s thematic priorities and the funding opportunities being contemplated, we anticipate that a good number of non-SSH [emphasis mine] researchers will be involved.

As an example, on March 1, T-AP launched its first pilot funding opportunity: the T-AP Digging into Data Challenge. One of the sponsors is the Natural Sciences and Engineering Research Council of Canada (NSERC), Canada’s federal funding agency for research in the natural sciences and engineering. Their involvement ensures that the perspective of the natural sciences is included in the challenge. The Digging into Data Challenge is open to any project that addresses research questions in the SSH by using large-scale digital data analysis techniques, and is then able to show how these techniques can lead to new insights. And the challenge specifically aims to advance multidisciplinary collaborative projects.

When you tackle a research question or undertake research to address a social challenge, you need collaboration between various SSH disciplines or between SSH and STEM disciplines. So, while proposals must address SSH research questions, the individual teams often involve STEM researchers, such as computer scientists.

In previous rounds of the Digging into Data Challenge, this has led to invaluable research. One project looked at how the media shaped public opinion around the 1918 Spanish flu pandemic. Another used CT scans to examine hundreds of mummies, ultimately discovering that atherosclerosis, a form of heart disease, was prevalent 4,000 years ago. In both cases, these multidisciplinary historical research projects have helped inform our thinking of the present.

Of course, Digging into Data isn’t the only research area in which T-AP will be involved. Since its inception, T-AP partners have identified three priority areas beyond digital scholarship: diversity, inequality and difference; resilient and innovative societies; and transformative research on the environment. Each of these areas touches on a variety of SSH fields, while the transformative research on the environment area has strong connections with STEM fields. In September 2015, T-AP organised a workshop around this third priority area; environmental science researchers were among the workshop participants.

I wish Hewitt hadn’t described researchers from disciplines other than the humanities and social sciences as “non-SSH.” The designation divides the world in two: us and the “non-” group of your choice: non-Catholic/Muslim/American/STEM/SSH/etc.

Getting back to the interview, it is surprisingly Canuck-centric in places,

How does T-AP fit in with Social Sciences and Humanities Research Council of Canada (SSHRC)’s priorities?

One of the objectives in SSHRC’s new strategic plan is to develop partnerships that enable us to expand the reach of our funding. As T-AP provides SSHRC with links to 16 agencies across Europe and the Americas, it is an efficient mechanism for us to broaden the scope of our support and promotion of post-secondary-based research and training in SSH.

It also provides an opportunity to explore cutting edge areas of research, such as big data (as we did with the first call we put out, Digging into Data). The research enterprise is becoming increasingly international, by which I mean that researchers are working on issues with international dimensions or collaborating in international teams. In this globalised environment, SSHRC must partner with international funders to support research excellence. By developing international funding opportunities, T-AP helps researchers create teams better positioned to tackle the most exciting and promising research topics.

Finally, it is a highly effective way of broadly promoting the value of SSH research throughout Canada and around the globe. There are significant costs and complexities involved in international research, and uncoordinated funding from multiple national funders can actually create barriers to collaboration. A platform like T-AP helps funders coordinate and streamline processes.

The interview gets a little more international scope when it turns to the data project,

What is the significance of your pilot funding programme in digital scholarship and what types of projects will it support?

The T-AP Digging into Data Challenge is significant for several reasons. First, there is its geographic reach: with 16 participants from 11 countries, this round of Digging has significantly broader participation than previous rounds. This is also the first time Digging into Data has included funders from South America.

The T-AP Digging into Data Challenge is open to any research project that addresses questions in SSH. What those projects will end up being is anybody’s guess – projects from past competitions have involved fields ranging from musicology to anthropology to political science.

The Challenge’s main focus is, of course, the use of big data in research.

You may want to read the interview in its entirety here.

I have checked out the Trans-Atlantic Platform website but cannot determine how someone or some institution might consult that site for information on how to get involved in their projects or get funding. However, there is a T-AP Digging into Data website where there is evidence of the first international call for funding submissions. Sadly, the deadline for the 2016 call has passed if the website is to be believed (sometimes people are late when changing deadline dates).