
Robot artists—should they get copyright protection?

Clearly a lawyer wrote this June 26, 2017 essay on theconversation.com (Note: A link has been removed),

When a group of museums and researchers in the Netherlands unveiled a portrait entitled The Next Rembrandt, it was something of a tease to the art world. It wasn’t a long lost painting but a new artwork generated by a computer that had analysed thousands of works by the 17th-century Dutch artist Rembrandt Harmenszoon van Rijn.

The computer used something called machine learning [emphasis mine] to analyse and reproduce technical and aesthetic elements in Rembrandt’s works, including lighting, colour, brush-strokes and geometric patterns. The result is a portrait produced based on the styles and motifs found in Rembrandt’s art but produced by algorithms.

But who owns creative works generated by artificial intelligence? This isn’t just an academic question. AI is already being used to generate works in music, journalism and gaming, and these works could in theory be deemed free of copyright because they are not created by a human author.

This would mean they could be freely used and reused by anyone and that would be bad news for the companies selling them. Imagine you invest millions in a system that generates music for video games, only to find that music isn’t protected by law and can be used without payment by anyone in the world.

Unlike with earlier computer-generated works of art, machine learning software generates truly creative works without human input or intervention. AI is not just a tool. While humans program the algorithms, the decision making – the creative spark – comes almost entirely from the machine.
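For what it's worth, "something called machine learning" here boils down to fitting statistics over measured features of many works and then sampling new combinations from that fit. A toy sketch in Python (the feature names and numbers are invented for illustration, not taken from the actual Next Rembrandt pipeline):

```python
import random
import statistics

# Invented "feature measurements" summarizing a corpus of paintings:
# each work is reduced to brightness, warmth and stroke-width scores.
corpus = [
    {"brightness": 0.42, "warmth": 0.71, "stroke_width": 2.1},
    {"brightness": 0.38, "warmth": 0.75, "stroke_width": 2.4},
    {"brightness": 0.45, "warmth": 0.68, "stroke_width": 1.9},
    {"brightness": 0.40, "warmth": 0.73, "stroke_width": 2.2},
]

def fit_style(corpus):
    """Learn the 'style' as a per-feature mean and spread."""
    return {
        f: (statistics.mean(p[f] for p in corpus),
            statistics.stdev(p[f] for p in corpus))
        for f in corpus[0]
    }

def generate(style, rng):
    """Sample a new 'work' whose features match the learned statistics."""
    return {f: rng.gauss(mu, sigma) for f, (mu, sigma) in style.items()}

style = fit_style(corpus)
new_work = generate(style, random.Random(0))
```

The real system worked on far richer features (lighting, geometry, brushwork) with far more sophisticated models, but the fit-then-sample shape is the same.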

It could have been someone involved in the technology, but nobody with that background would write "… something called machine learning …." Andres Guadamuz, lecturer in Intellectual Property Law at the University of Sussex, goes on to say (Note: Links have been removed),


That doesn’t mean that copyright should be awarded to the computer, however. Machines don’t (yet) have the rights and status of people under the law. But that doesn’t necessarily mean there shouldn’t be any copyright either. Not all copyright is owned by individuals, after all.

Companies are recognised as legal people and are often awarded copyright for works they don’t directly create. This occurs, for example, when a film studio hires a team to make a movie, or a website commissions a journalist to write an article. So it’s possible copyright could be awarded to the person (company or human) that has effectively commissioned the AI to produce work for it.


Things are likely to become yet more complex as AI tools are more commonly used by artists and as the machines get better at reproducing creativity, making it harder to discern if an artwork is made by a human or a computer. Monumental advances in computing and the sheer amount of computational power becoming available may well make the distinction moot. At that point, we will have to decide what type of protection, if any, we should give to emergent works created by intelligent algorithms with little or no human intervention.

The most sensible move seems to follow those countries that grant copyright to the person who made the AI’s operation possible, with the UK’s model looking like the most efficient. This will ensure companies keep investing in the technology, safe in the knowledge they will reap the benefits. What happens when we start seriously debating whether computers should be given the status and rights of people is a whole other story.

The team that developed a ‘new’ Rembrandt produced a video about the process,

Mark Brown’s April 5, 2016 article about this project (which was unveiled on April 5, 2016 in Amsterdam, Netherlands) for the Guardian newspaper provides more detail such as this,

It [Next Rembrandt project] is the result of an 18-month project which asks whether new technology and data can bring back to life one of the greatest, most innovative painters of all time.

Advertising executive [Bas] Korsten, whose brainchild the project was, admitted that there were many doubters. “The idea was greeted with a lot of disbelief and scepticism,” he said. “Also coming up with the idea is one thing, bringing it to life is another.”

The project has involved data scientists, developers, engineers and art historians from organisations including Microsoft, Delft University of Technology, the Mauritshuis in The Hague and the Rembrandt House Museum in Amsterdam.

The final 3D printed painting consists of more than 148 million pixels and is based on 168,263 Rembrandt painting fragments.

Some of the challenges have been in designing a software system that could understand Rembrandt based on his use of geometry, composition and painting materials. A facial recognition algorithm was then used to identify and classify the most typical geometric patterns used to paint human features.
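The article doesn't say which algorithms were used beyond "facial recognition," but "identify and classify the most typical geometric patterns" is, at heart, a clustering problem. A minimal sketch with invented facial-geometry measurements (eye-spacing vs. nose-length ratios, grouped with a plain k-means):

```python
import random

# Invented geometric measurements extracted from many portrait faces;
# two loose groups by construction.
samples = [(0.30, 0.61), (0.32, 0.59), (0.31, 0.63),
           (0.45, 0.80), (0.47, 0.78), (0.44, 0.82)]

def kmeans(points, k, steps=20, seed=0):
    """Plain k-means: assign each point to its nearest centre, re-average."""
    rng = random.Random(seed)
    centres = rng.sample(points, k)
    for _ in range(steps):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centres[c])))
            clusters[i].append(p)
        centres = [
            tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centres[i]
            for i, cl in enumerate(clusters)
        ]
    return centres, clusters

centres, clusters = kmeans(samples, k=2)
```

On real paintings the features would be landmark distances and angles extracted from high-resolution scans, but the classify-into-typical-patterns step is this idea at scale.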

It sounds like it was a fascinating project but I don’t believe ‘The Next Rembrandt’ is an example of AI creativity or of the ‘creative spark’ Guadamuz discusses. This seems more like the kind of work that could be done by a talented forger or fraudster. As I understand it, even when a human creates this type of artwork (a newly discovered, previously unknown xxx masterpiece), the piece is not considered a creative work in its own right. Some pieces are outright fraudulent, while others are described as “in the manner of xxx.”

Taking a somewhat different approach to mine, Timothy Geigner at Techdirt has also commented on the question of copyright and AI in relation to Guadamuz’s essay in a July 7, 2017 posting,

Unlike with earlier computer-generated works of art, machine learning software generates truly creative works without human input or intervention. AI is not just a tool. While humans program the algorithms, the decision making – the creative spark – comes almost entirely from the machine.

Let’s get the easy part out of the way: the culminating sentence in the quote above is not true. The creative spark is not the artistic output. Rather, the creative spark has always been known as the need to create in the first place. This isn’t a trivial quibble, either, as it factors into the simple but important reasoning for why AI and machines should certainly not receive copyright rights on their output.

That reasoning is the purpose of copyright law itself. Far too many see copyright as a reward system for those that create art rather than what it actually was meant to be: a boon to an artist to compensate for that artist to create more art for the benefit of the public as a whole. Artificial intelligence, however far progressed, desires only what it is programmed to desire. In whatever hierarchy of needs an AI might have, profit via copyright would factor either laughably low or not at all into its future actions. Future actions of the artist, conversely, are the only item on the agenda for copyright’s purpose. If receiving a copyright wouldn’t spur AI to create more art beneficial to the public, then copyright ought not to be granted.

Geigner goes on (in the same July 7, 2017 posting) to elucidate other issues in the general debate over AI and ‘rights’, including the EU’s proposed solution.

Art masterpieces are turning into soap

This piece of research has made a winding trek through the online science world. First it was featured in an April 20, 2017 American Chemical Society news release on EurekAlert,

A good art dealer can really clean up in today’s market, but not when some weird chemistry wreaks havoc on masterpieces. Art conservators started to notice microscopic pockmarks forming on the surfaces of treasured oil paintings that cause the images to look hazy. It turns out the marks are eruptions of paint caused, weirdly, by soap that forms via chemical reactions. Since you have no time to watch paint dry, we explain how paintings from Rembrandts to O’Keeffes are threatened by their own compositions — and we don’t mean the imagery.

Here’s the video,

Interestingly, this seems to be based on a May 23, 2016 article by Sarah Everts for Chemical & Engineering News (an American Chemical Society publication). Note: Links have been removed,

When conservator Petria Noble first peered at Rembrandt’s “Anatomy Lesson of Dr. Nicolaes Tulp” under a microscope back in 1996, she was surprised to find pockmarks across the nearly 400-year-old painting’s surface.

Each tiny crater was just a few hundred micrometers in diameter, no wider than the period at the end of this sentence. The painting’s surface was entirely riddled with these curious structures, giving it “a dull, rather hazy, gritty surface,” Noble says.

The crystal structures of metal soaps vary: shown here is lead nonanoate, based on a structure solved by Cecil Dybowski at the University of Delaware and colleagues at the Metropolitan Museum of Art. Dashed lines are nearest oxygen neighbors.

This concerned Noble, who was tasked with cleaning the masterpiece with her then-colleague Jørgen Wadum at the Mauritshuis museum, the painting’s home in The Hague.

When Noble called physicist Jaap Boon, then at the Foundation for Fundamental Research on Matter in Amsterdam, to help figure out what was going on, the researchers unsuspectingly embarked on an investigation that would transform the art world’s understanding of aging paint.

More recently this ‘metal soaps in paintings’ story has made its way into a May 16, 2017 news item on phys.org,

An oil painting is not a permanent and unchangeable object, but undergoes a very slow change in the outer and inner structure. Metal soap formation is of great importance. Joen Hermans has managed to recreate the molecular structure of old oil paints: a big step towards better preservation of works of art. He graduated cum laude on Tuesday 9 May [2017] at the University of Amsterdam with NWO funding from the Science4Arts program.

A May 15, 2017 Netherlands Organization for Scientific Research (NWO) press release, which originated the phys.org news item, provides more information about Hermans’ work (albeit some of this is repetitive),

Johannes Vermeer, View of Delft, c. 1660 – 1661 (Mauritshuis, The Hague)

Paint can fade, varnish can discolour and paintings can collect dust and dirt. Joen Hermans has examined the chemical processes behind ageing processes in paints. ‘While restorers do their best to repair any damages that have occurred, the fact remains that at present we do not know enough about the molecular structure of ageing oil paint and the chemical processes they undergo’, says Hermans. ‘This makes it difficult to predict with confidence how paints will react to restoration treatments or to changes in a painting’s environment.’

‘Sand grains’ in the red tiles of ‘View of Delft’ by Johannes Vermeer are ‘lead soap spheres’ (Annelies van Loon, UvA/Mauritshuis)

Visible to the naked eye

Hermans explains that in its simplest form, oil paint is a mixture of pigment and drying oil, which forms the binding element. Colour pigments are often metal salts. ‘When the pigment and the drying oil are combined, an incredibly complicated chemical process begins’, says Hermans, ‘which continues for centuries’. The fatty acids in the oil form a polymer network when exposed to oxygen in the air. Meanwhile, metal ions react with the oil on the surface of the grains of pigment.

‘A common problem when conserving oil paintings is the formation of what are known as metal soaps’, Hermans continues. These are compounds of metal ions and fatty acids. The formation of metal soaps is linked to various ways in which paint deteriorates, as when it becomes increasingly brittle, transparent or forms a crust on the paint surface. Hermans: ‘You can see clumps of metal soap with the naked eye on some paintings, like Rembrandt’s Anatomy Lesson of Dr Nicolaes Tulp or Vermeer’s View of Delft. Around 70 per cent of all oil paintings show signs of metal soap formation.’
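Hermans studied the real chemistry in detail; as a crude caricature, the "metal ions react with fatty acids" step can be modelled as second-order kinetics and integrated numerically. The rate constant and concentrations below are invented, purely to show the centuries-long, ever-slowing shape of the process:

```python
def soap_formation(metal0=1.0, acid0=1.0, k=0.05, dt=0.1, steps=2000):
    """Euler integration of M + FA -> soap with rate k*[M]*[FA].

    Returns soap concentration over time (arbitrary units).
    All numbers are illustrative, not measured values.
    """
    metal, acid, soap = metal0, acid0, 0.0
    history = []
    for _ in range(steps):
        rate = k * metal * acid        # second-order: slows as reactants deplete
        metal -= rate * dt
        acid -= rate * dt
        soap += rate * dt
        history.append(soap)
    return history

curve = soap_formation()
```

The curve rises quickly at first and then creeps toward completion, which is roughly why a 400-year-old painting can still be actively forming soaps today.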

Conserving valuable paintings

Hermans has studied in detail how metal soaps form. He began by defining the structure of metal soaps. One of the things he discovered was that the process that causes metal ions to move in the painting is crucial to the speed at which the painting ages. Hermans also managed to recreate the molecular structure of old oil paints, making it possible to simulate and study the behaviour of old paints without actually having to remove samples from Rembrandt’s Night Watch. Hermans hopes this knowledge will contribute towards a solid foundation for the conservation of valuable works of art.

I imagine this will give anyone who owns an oil painting, or appreciates paintings in general, pause for thought and the inclination to utter a short prayer for conservators to find a solution.

Explaining the link between air pollution and heart disease?

An April 26, 2017 news item on Nanowerk announces research that may explain the link between heart disease and air pollution (Note: A link has been removed),

Tiny particles in air pollution have been associated with cardiovascular disease, which can lead to premature death. But how particles inhaled into the lungs can affect blood vessels and the heart has remained a mystery.

Now, scientists have found evidence in human and animal studies that inhaled nanoparticles can travel from the lungs into the bloodstream, potentially explaining the link between air pollution and cardiovascular disease. Their results appear in the journal ACS Nano (“Inhaled Nanoparticles Accumulate at Sites of Vascular Disease”).

An April 26, 2017 American Chemical Society news release on EurekAlert, which originated the news item,  expands on the theme,

The World Health Organization estimates that in 2012, about 72 percent of premature deaths related to outdoor air pollution were due to ischemic heart disease and strokes. Pulmonary disease, respiratory infections and lung cancer were linked to the other 28 percent. Many scientists have suspected that fine particles travel from the lungs into the bloodstream, but evidence supporting this assumption in humans has been challenging to collect. So Mark Miller and colleagues at the University of Edinburgh in the United Kingdom and the National Institute for Public Health and the Environment in the Netherlands used a selection of specialized techniques to track the fate of inhaled gold nanoparticles.

In the new study, 14 healthy volunteers, 12 surgical patients and several mouse models inhaled gold nanoparticles, which have been safely used in medical imaging and drug delivery. Soon after exposure, the nanoparticles were detected in blood and urine. Importantly, the nanoparticles appeared to preferentially accumulate at inflamed vascular sites, including carotid plaques in patients at risk of a stroke. The findings suggest that nanoparticles can travel from the lungs into the bloodstream and reach susceptible areas of the cardiovascular system where they could possibly increase the likelihood of a heart attack or stroke, the researchers say.
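The study tracked actual gold nanoparticles with specialized techniques; a back-of-envelope way to think about the lungs-to-bloodstream-to-plaque pathway is a simple compartment model. All rate constants below are invented for illustration, not fitted to the paper's data:

```python
def translocate(dose=1.0, k_lb=0.01, k_clear=0.05, k_acc=0.02,
                dt=0.1, steps=5000):
    """Toy compartments: lung burden, circulating blood, plaque deposit.

    k_lb: lung -> blood transfer; k_clear: clearance from blood (urine);
    k_acc: deposition from blood at an inflamed vascular site.
    Simple Euler stepping; total mass is conserved across compartments.
    """
    lung, blood, plaque, cleared = dose, 0.0, 0.0, 0.0
    for _ in range(steps):
        to_blood = k_lb * lung * dt
        out_clear = k_clear * blood * dt
        out_acc = k_acc * blood * dt
        lung -= to_blood
        blood += to_blood - out_clear - out_acc
        cleared += out_clear
        plaque += out_acc
    return lung, blood, plaque, cleared

lung, blood, plaque, cleared = translocate()
```

Even this toy version reproduces the qualitative finding: particles appear in blood and urine soon after exposure, and a fraction steadily accumulates at the vulnerable site.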

Here’s a link to and a citation for the paper,

Inhaled Nanoparticles Accumulate at Sites of Vascular Disease by Mark R. Miller, Jennifer B. Raftis, Jeremy P. Langrish, Steven G. McLean, Pawitrabhorn Samutrtai, Shea P. Connell, Simon Wilson, Alex T. Vesey, Paul H. B. Fokkens, A. John F. Boere, Petra Krystek, Colin J. Campbell, Patrick W. F. Hadoke, Ken Donaldson, Flemming R. Cassee, David E. Newby, Rodger Duffin, and Nicholas L. Mills. ACS Nano, Article ASAP DOI: 10.1021/acsnano.6b08551 Publication Date (Web): April 26, 2017

Copyright © 2017 American Chemical Society

This paper is behind a paywall.

2D printed transistors in Ireland

2D transistors seem to be a hot area for research these days. In Ireland, the AMBER Centre has announced a transistor consisting entirely of 2D nanomaterials in an April 6, 2017 news item on Nanowerk,

Researchers in AMBER, the Science Foundation Ireland-funded materials science research centre hosted in Trinity College Dublin, have fabricated printed transistors consisting entirely of 2-dimensional nanomaterials for the first time. These 2D materials combine exciting electronic properties with the potential for low-cost production.

This breakthrough could unlock the potential for applications such as food packaging that displays a digital countdown to warn you of spoiling, wine labels that alert you when your white wine is at its optimum temperature, or even a window pane that shows the day’s forecast. …

An April 7, 2017 AMBER Centre press release (also on EurekAlert), which originated the news item, expands on the theme,

Prof Jonathan Coleman, who is an investigator in AMBER and Trinity’s School of Physics, said, “In the future, printed devices will be incorporated into even the most mundane objects such as labels, posters and packaging.

Printed electronic circuitry (constructed from the devices we have created) will allow consumer products to gather, process, display and transmit information: for example, milk cartons could send messages to your phone warning that the milk is about to go out-of-date.

We believe that 2D nanomaterials can compete with the materials currently used for printed electronics. Compared to other materials employed in this field, our 2D nanomaterials have the capability to yield more cost effective and higher performance printed devices. However, while the last decade has underlined the potential of 2D materials for a range of electronic applications, only the first steps have been taken to demonstrate their worth in printed electronics. This publication is important because it shows that conducting, semiconducting and insulating 2D nanomaterials can be combined together in complex devices. We felt that it was critically important to focus on printing transistors as they are the electric switches at the heart of modern computing. We believe this work opens the way to print a whole host of devices solely from 2D nanosheets.”

Led by Prof Coleman, in collaboration with the groups of Prof Georg Duesberg (AMBER) and Prof. Laurens Siebbeles (TU Delft, Netherlands), the team used standard printing techniques to combine graphene nanosheets as the electrodes with two other nanomaterials, tungsten diselenide and boron nitride as the channel and separator (two important parts of a transistor) to form an all-printed, all-nanosheet, working transistor.

Printable electronics have developed over the last thirty years based mainly on printable carbon-based molecules. While these molecules can easily be turned into printable inks, such materials are somewhat unstable and have well-known performance limitations. There have been many attempts to surpass these obstacles using alternative materials, such as carbon nanotubes or inorganic nanoparticles, but these materials have also shown limitations in either performance or in manufacturability. While the performance of printed 2D devices cannot yet compare with advanced transistors, the team believe there is a wide scope to improve performance beyond the current state-of-the-art for printed transistors.

The ability to print 2D nanomaterials is based on Prof. Coleman’s scalable method of producing 2D nanomaterials, including graphene, boron nitride, and tungsten diselenide nanosheets, in liquids, a method he has licensed to Samsung and Thomas Swan. These nanosheets are flat nanoparticles that are a few nanometres thick but hundreds of nanometres wide. Critically, nanosheets made from different materials have electronic properties that can be conducting, insulating or semiconducting and so include all the building blocks of electronics. Liquid processing is especially advantageous in that it yields large quantities of high quality 2D materials in a form that is easy to process into inks. Prof. Coleman’s publication provides the potential to print circuitry at extremely low cost which will facilitate a range of applications from animated posters to smart labels.

Prof Coleman is a partner in the Graphene Flagship, a €1 billion EU initiative to boost new technologies and innovation during the next 10 years.

Here’s a link to and a citation for the paper,

All-printed thin-film transistors from networks of liquid-exfoliated nanosheets by Adam G. Kelly, Toby Hallam, Claudia Backes, Andrew Harvey, Amir Sajad Esmaeily, Ian Godwin, João Coelho, Valeria Nicolosi, Jannika Lauth, Aditya Kulkarni, Sachin Kinge, Laurens D. A. Siebbeles, Georg S. Duesberg, Jonathan N. Coleman. Science  07 Apr 2017: Vol. 356, Issue 6333, pp. 69-73 DOI: 10.1126/science.aal4062

This paper is behind a paywall.

High-performance, low-energy artificial synapse for neural network computing

This artificial synapse is apparently an improvement on the standard memristor-based artificial synapse but that doesn’t become clear until reading the abstract for the paper. First, there’s a Feb. 20, 2017 Stanford University news release by Taylor Kubota (dated Feb. 21, 2017 on EurekAlert), Note: Links have been removed,

For all the improvements in computer technology over the years, we still struggle to recreate the low-energy, elegant processing of the human brain. Now, researchers at Stanford University and Sandia National Laboratories have made an advance that could help computers mimic one piece of the brain’s efficient design – an artificial version of the space over which neurons communicate, called a synapse.

“It works like a real synapse but it’s an organic electronic device that can be engineered,” said Alberto Salleo, associate professor of materials science and engineering at Stanford and senior author of the paper. “It’s an entirely new family of devices because this type of architecture has not been shown before. For many key metrics, it also performs better than anything that’s been done before with inorganics.”

The new artificial synapse, reported in the Feb. 20 issue of Nature Materials, mimics the way synapses in the brain learn through the signals that cross them. This is a significant energy savings over traditional computing, which involves separately processing information and then storing it into memory. Here, the processing creates the memory.

This synapse may one day be part of a more brain-like computer, which could be especially beneficial for computing that works with visual and auditory signals. Examples of this are seen in voice-controlled interfaces and driverless cars. Past efforts in this field have produced high-performance neural networks supported by artificially intelligent algorithms but these are still distant imitators of the brain that depend on energy-consuming traditional computer hardware.

Building a brain

When we learn, electrical signals are sent between neurons in our brain. The most energy is needed the first time a synapse is traversed. Every time afterward, the connection requires less energy. This is how synapses efficiently facilitate both learning something new and remembering what we’ve learned. The artificial synapse, unlike most other versions of brain-like computing, also fulfills these two tasks simultaneously, and does so with substantial energy savings.

“Deep learning algorithms are very powerful but they rely on processors to calculate and simulate the electrical states and store them somewhere else, which is inefficient in terms of energy and time,” said Yoeri van de Burgt, former postdoctoral scholar in the Salleo lab and lead author of the paper. “Instead of simulating a neural network, our work is trying to make a neural network.”

The artificial synapse is based off a battery design. It consists of two thin, flexible films with three terminals, connected by an electrolyte of salty water. The device works as a transistor, with one of the terminals controlling the flow of electricity between the other two.

Like a neural path in a brain being reinforced through learning, the researchers program the artificial synapse by discharging and recharging it repeatedly. Through this training, they have been able to predict within 1 percent of uncertainty what voltage will be required to get the synapse to a specific electrical state and, once there, it remains at that state. In other words, unlike a common computer, where you save your work to the hard drive before you turn it off, the artificial synapse can recall its programming without any additional actions or parts.
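That train-by-pulsing loop (discharge/recharge until the device sits within 1 percent of the target, where it then stays) can be caricatured in a few lines. The "device law" here, with each pulse nudging the state a fixed fraction toward the applied voltage, is an invented stand-in, not the published device physics:

```python
def apply_pulse(state, voltage, alpha=0.1):
    """Invented device law: each pulse moves the state a fixed
    fraction of the way toward the applied voltage."""
    return state + alpha * (voltage - state)

def program_to(target, state=0.0, tol=0.01, max_pulses=1000):
    """Pulse repeatedly until the state is within tol of the target.

    The state then persists with no further action: non-volatile,
    like the processing itself creating the memory.
    """
    pulses = 0
    while (abs(state - target) > tol * max(abs(target), 1.0)
           and pulses < max_pulses):
        state = apply_pulse(state, target)
        pulses += 1
    return state, pulses

state, pulses = program_to(0.5)
```

A few dozen pulses suffice in this toy model; the real device's appeal is that reading, writing and holding the state all happen in the same cheap physical element.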

Testing a network of artificial synapses

Only one artificial synapse has been produced but researchers at Sandia used 15,000 measurements from experiments on that synapse to simulate how an array of them would work in a neural network. They tested the simulated network’s ability to recognize handwriting of digits 0 through 9. Tested on three datasets, the simulated array was able to identify the handwritten digits with an accuracy between 93 and 97 percent.

Although this task would be relatively simple for a person, traditional computers have a difficult time interpreting visual and auditory signals.

“More and more, the kinds of tasks that we expect our computing devices to do require computing that mimics the brain because using traditional computing to perform these tasks is becoming really power hungry,” said A. Alec Talin, distinguished member of technical staff at Sandia National Laboratories in Livermore, California, and senior author of the paper. “We’ve demonstrated a device that’s ideal for running these type of algorithms and that consumes a lot less power.”

This device is extremely well suited for the kind of signal identification and classification that traditional computers struggle to perform. Whereas digital transistors can be in only two states, such as 0 and 1, the researchers successfully programmed 500 states in the artificial synapse, which is useful for neuron-type computation models. In switching from one state to another they used about one-tenth as much energy as a state-of-the-art computing system needs in order to move data from the processing unit to the memory.

This, however, means they are still using about 10,000 times as much energy as the minimum a biological synapse needs in order to fire. The researchers are hopeful that they can attain neuron-level energy efficiency once they test the artificial synapse in smaller devices.
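To put the 500-states point in code: mapping a continuous neural-network weight onto a grid of device states loses at most half a grid step of precision, which is why hundreds of states (rather than a transistor's two) matter for analog computation. A sketch assuming a symmetric programmable range of [-1, 1], which is an assumption for illustration, not a published spec:

```python
N_STATES = 500

def quantize(w, lo=-1.0, hi=1.0, n=N_STATES):
    """Snap a continuous weight to the nearest of n device states in [lo, hi]."""
    w = min(max(w, lo), hi)            # clamp into the programmable range
    step = (hi - lo) / (n - 1)
    return lo + round((w - lo) / step) * step

# Worst-case rounding error is half a step: ~0.002 for 500 states over [-1, 1]
err = max(abs(quantize(w / 1000) - w / 1000) for w in range(-1000, 1001))
```

With only two states the same grid would round every weight to ±1, destroying the fine-grained strengths a neural network relies on; 500 states keep the error to a fraction of a percent.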

Organic potential

Every part of the device is made of inexpensive organic materials. These aren’t found in nature but they are largely composed of hydrogen and carbon and are compatible with the brain’s chemistry. Cells have been grown on these materials and they have even been used to make artificial pumps for neural transmitters. The voltages applied to train the artificial synapse are also the same as those that move through human neurons.

All this means it’s possible that the artificial synapse could communicate with live neurons, leading to improved brain-machine interfaces. The softness and flexibility of the device also lends itself to being used in biological environments. Before any applications to biology, however, the team plans to build an actual array of artificial synapses for further research and testing.

Additional Stanford co-authors of this work include co-lead author Ewout Lubberman, also of the University of Groningen in the Netherlands, Scott T. Keene and Grégorio C. Faria, also of Universidade de São Paulo, in Brazil. Sandia National Laboratories co-authors include Elliot J. Fuller and Sapan Agarwal in Livermore and Matthew J. Marinella in Albuquerque, New Mexico. Salleo is an affiliate of the Stanford Precourt Institute for Energy and the Stanford Neurosciences Institute. Van de Burgt is now an assistant professor in microsystems and an affiliate of the Institute for Complex Molecular Studies (ICMS) at Eindhoven University of Technology in the Netherlands.

This research was funded by the National Science Foundation, the Keck Faculty Scholar Funds, the Neurofab at Stanford, the Stanford Graduate Fellowship, Sandia’s Laboratory-Directed Research and Development Program, the U.S. Department of Energy, the Holland Scholarship, the University of Groningen Scholarship for Excellent Students, the Hendrik Muller National Fund, the Schuurman Schimmel-van Outeren Foundation, the Foundation of Renswoude (The Hague and Delft), the Marco Polo Fund, the Instituto Nacional de Ciência e Tecnologia/Instituto Nacional de Eletrônica Orgânica in Brazil, the Fundação de Amparo à Pesquisa do Estado de São Paulo and the Brazilian National Council.

Here’s an abstract for the researchers’ paper (link to paper provided after abstract) and it’s where you’ll find the memristor connection explained,

The brain is capable of massively parallel information processing while consuming only ~1–100 fJ per synaptic event. Inspired by the efficiency of the brain, CMOS-based neural architectures and memristors are being developed for pattern recognition and machine learning. However, the volatility, design complexity and high supply voltages for CMOS architectures, and the stochastic and energy-costly switching of memristors complicate the path to achieve the interconnectivity, information density, and energy efficiency of the brain using either approach. Here we describe an electrochemical neuromorphic organic device (ENODe) operating with a fundamentally different mechanism from existing memristors. ENODe switches at low voltage and energy (<10 pJ for 10³ μm² devices), displays >500 distinct, non-volatile conductance states within a ~1 V range, and achieves high classification accuracy when implemented in neural network simulations. Plastic ENODes are also fabricated on flexible substrates, enabling the integration of neuromorphic functionality in stretchable electronic systems. Mechanical flexibility makes ENODes compatible with three-dimensional architectures, opening a path towards extreme interconnectivity comparable to the human brain.

Here’s a link to and a citation for the paper,

A non-volatile organic electrochemical device as a low-voltage artificial synapse for neuromorphic computing by Yoeri van de Burgt, Ewout Lubberman, Elliot J. Fuller, Scott T. Keene, Grégorio C. Faria, Sapan Agarwal, Matthew J. Marinella, A. Alec Talin, & Alberto Salleo. Nature Materials (2017) doi:10.1038/nmat4856 Published online 20 February 2017

This paper is behind a paywall.

ETA March 8, 2017 10:28 PST: You may find this piece on ferroelectricity and neuromorphic engineering of interest (March 7, 2017 posting titled: Ferroelectric roadmap to neuromorphic computing).

Nominations open for Kabiller Prizes in Nanoscience and Nanomedicine ($250,000 for visionary researcher and $10,000 for young investigator)

For a change I can publish something that doesn’t have a deadline in three days or less! Without more ado (from a Feb. 20, 2017 Northwestern University news release by Megan Fellman [h/t Nanowerk’s Feb. 20, 2017 news item]),

Northwestern University’s International Institute for Nanotechnology (IIN) is now accepting nominations for two prestigious international prizes: the $250,000 Kabiller Prize in Nanoscience and Nanomedicine and the $10,000 Kabiller Young Investigator Award in Nanoscience and Nanomedicine.

The deadline for nominations is May 15, 2017. Details are available on the IIN website.

“Our goal is to recognize the outstanding accomplishments in nanoscience and nanomedicine that have the potential to benefit all humankind,” said David G. Kabiller, a Northwestern trustee and alumnus. He is a co-founder of AQR Capital Management, a global investment management firm in Greenwich, Connecticut.

The two prizes, awarded every other year, were established in 2015 through a generous gift from Kabiller. Current Northwestern-affiliated researchers are not eligible for nomination until 2018 for the 2019 prizes.

The Kabiller Prize — the largest monetary award in the world for outstanding achievement in the field of nanomedicine — celebrates researchers who have made the most significant contributions to the field of nanotechnology and its application to medicine and biology.

The Kabiller Young Investigator Award recognizes young emerging researchers who have made recent groundbreaking discoveries with the potential to make a lasting impact in nanoscience and nanomedicine.

“The IIN at Northwestern University is a hub of excellence in the field of nanotechnology,” said Kabiller, chair of the IIN executive council and a graduate of Northwestern’s Weinberg College of Arts and Sciences and Kellogg School of Management. “As such, it is the ideal organization from which to launch these awards recognizing outstanding achievements that have the potential to substantially benefit society.”

Nanoparticles for medical use are typically no larger than 100 nanometers — comparable in size to the molecules in the body. At this scale, the essential properties of structures (color, melting point, conductivity and so on) behave uniquely. Researchers are capitalizing on these unique properties in their quest to realize life-changing advances in the diagnosis, treatment and prevention of disease.

“Nanotechnology is one of the key areas of distinction at Northwestern,” said Chad A. Mirkin, IIN director and George B. Rathmann Professor of Chemistry in Weinberg. “We are very grateful for David’s ongoing support and are honored to be stewards of these prestigious awards.”

An international committee of experts in the field will select the winners of the 2017 Kabiller Prize and the 2017 Kabiller Young Investigator Award and announce them in September.

The recipients will be honored at an awards banquet Sept. 27 in Chicago. They also will be recognized at the 2017 IIN Symposium, which will feature talks from prestigious speakers, including 2016 Nobel Laureate in Chemistry Ben Feringa, from the University of Groningen, the Netherlands.

2015 recipient of the Kabiller Prize

The winner of the inaugural Kabiller Prize, in 2015, was Joseph DeSimone, the Chancellor’s Eminent Professor of Chemistry at the University of North Carolina at Chapel Hill and the William R. Kenan Jr. Distinguished Professor of Chemical Engineering at North Carolina State University and of Chemistry at UNC-Chapel Hill.

DeSimone was honored for his invention of particle replication in non-wetting templates (PRINT) technology that enables the fabrication of precisely defined, shape-specific nanoparticles for advances in disease treatment and prevention. Nanoparticles made with PRINT technology are being used to develop new cancer treatments, inhalable therapeutics for treating pulmonary diseases, such as cystic fibrosis and asthma, and next-generation vaccines for malaria, pneumonia and dengue.

2015 recipient of the Kabiller Young Investigator Award

Warren Chan, professor at the Institute of Biomaterials and Biomedical Engineering at the University of Toronto, was the recipient of the inaugural Kabiller Young Investigator Award, also in 2015. Chan and his research group have developed an infectious disease diagnostic device for point-of-care use that can differentiate among diseases with similar symptoms.

BTW, Warren Chan, winner of the ‘Young Investigator Award’, and/or his work have been featured here a few times, most recently in a Nov. 1, 2016 posting, which is mostly about another award he won but also includes links to some of his work, including my April 27, 2016 post about the discovery that fewer than 1% of nanoparticle-based drugs reach their destination.

How does ice melt? Layer by layer!

A Dec. 12, 2016 news item on ScienceDaily announces the answer to a problem scientists have been investigating for over a century. But first, here are the questions,

We all know that ice melts at 0°C. However, 150 years ago the famous physicist Michael Faraday discovered that at the surface of ice, well below 0°C, a thin film of liquid-like water is present. This thin film makes ice slippery and is crucial for the motion of glaciers.

Since Faraday’s discovery, the properties of this water-like layer have been a research topic for scientists all over the world, and the work has entailed considerable controversy: at what temperature does the surface become liquid-like? How does the thickness of the layer depend on temperature? Does it increase continuously or stepwise? Experiments to date have generally shown a very thin layer, which continuously grows in thickness up to 45 nm right below the bulk melting point at 0°C. This also illustrates why it has been so challenging to study this layer of liquid-like water on ice: 45 nm is about 1/1000th the width of a human hair and is not discernible by eye.

Scientists of the Max Planck Institute for Polymer Research (MPI-P), in a collaboration with researchers from the Netherlands, the USA and Japan, have succeeded in studying the properties of this quasi-liquid layer on ice at the molecular level using advanced surface-specific spectroscopy and computer simulations. The results are published in the latest edition of the scientific journal Proceedings of the National Academy of Sciences (PNAS).

Caption: Ice melts, as described in the text, layer by layer. Credit: © MPIP

A Dec. 12, 2016 Max Planck Institute for Polymer Research press release (also on EurekAlert), which originated the news item, goes on to answer the questions,

The team of scientists around Ellen Backus, group leader at MPI-P, investigated how the thin liquid layer is formed on ice, how it grows with increasing temperature, and whether it is distinguishable from normal liquid water. These studies required well-defined ice crystal surfaces. Therefore, much effort was put into creating single ice crystals roughly 10 cm in size, which could be cut in such a way that the surface structure was precisely known. To investigate whether the surface was solid or liquid, the team made use of the fact that water molecules in the liquid have a weaker interaction with each other compared to water molecules in ice. Using their interfacial spectroscopy, combined with the controlled heating of the ice crystal, the researchers were able to quantify the change in the interaction between water molecules directly at the interface between ice and air.

The experimental results, combined with the simulations, showed that the first molecular layer at the ice surface is already molten at temperatures as low as -38°C (235 K), the lowest temperature the researchers could experimentally investigate. Upon increasing the temperature to -16°C (257 K), the second layer becomes liquid. Contrary to popular belief, the surface melting of ice is not a continuous process, but occurs in a discontinuous, layer-by-layer fashion.

“A further important question for us was whether one could distinguish between the properties of the quasi-liquid layer and those of normal water,” says Mischa Bonn, co-author of the paper and director at the MPI-P. And indeed, the quasi-liquid layer at -4°C (269 K) shows a different spectroscopic response than supercooled water at the same temperature; in the quasi-liquid layer, the water molecules seem to interact more strongly than in liquid water.

The results are not only important for a fundamental understanding of ice, but also for climate science, where much research takes place on catalytic reactions on ice surfaces, for which the understanding of the ice surface structure is crucial.

Here’s a link to and a citation for the paper,

Experimental and theoretical evidence for bilayer-by-bilayer surface melting of crystalline ice by M. Alejandra Sánchez, Tanja Kling, Tatsuya Ishiyama, Marc-Jan van Zadel, Patrick J. Bisson, Markus Mezger, Mara N. Jochum, Jenée D. Cyran, Wilbert J. Smit, Huib J. Bakker, Mary Jane Shultz, Akihiro Morita, Davide Donadio, Yuki Nagata, Mischa Bonn, and Ellen H. G. Backus. Proceedings of the National Academy of Sciences, 2016 DOI: 10.1073/pnas.1612893114 Published online before print December 12, 2016

This paper appears to be open access.

Fashion Week Netherlands and a conversation about nanotextiles

Marjolein Lammerts van Bueren has written up an interview with the principals of the consulting agency Nanonow in a Dec. 15, 2016 article for Amsterdam Fashion Week, in which they focus on nanotextiles (Note: Links have been removed),

Strong, sustainable textiles created by combining chemical recycling and nanotechnology – for Vincent Franken and Roel Boekel, their nanotechstiles are there already. With their consulting firm, Nanonow, the two men help companies in a range of industries innovate in the field of nanotechnology. And yes, you guessed it, the fashion industry, too, is finding ways to use the technology to its advantage. Fashionweek.nl sat down with Franken to talk about textiles on a nano scale.

How did you come up with the idea for Nanonow?

“I studied Science, Business & Innovations at the VU in Amsterdam. That’s a beta course that focuses on new technologies and how you can bring them to the market, and I specialised in nanotechnology within that. Because of the many – still untapped – opportunities and applications there are for nanotechnology, I started Nanonow with Roel Boekel after I graduated in 2014. We’re a consulting firm helping companies that still don’t really know how they can make use of nanotechnology, which can be used for a whole lot of things.”

Like the textile industry?

“Exactly. Over the last few years, we’ve done research into several different industries, like the waste and recycling industry. Six months ago we started looking at the textile industry, via Frankenhuis, an international textile recycler. When you throw your clothes in the recycling bin, a portion of them are sold on and a portion are recycled, or downcycled, as I call it. They pull the textiles apart, and those fibres – so the threads – are sold and repurposed into things like insulation. Roel and I thought that was a shame, because you’re deconstructing clothes that have often barely been worn just to make a low-value product out of them.”

So you’ve developed an alternative, Nanotechstiles. Tell us about it!

“We actually wanted to make new clothes from the deconstructed clothes. This is already happening via mechanical recycling, where you produce new clothes by reweaving the old textile fibres. But for me, the Holy Grail we’re looking for – I’m a tech guy after all – is the molecules inside the fibres.”

“First, we don’t want to use the existing thread, but instead we want to pull the thread apart completely then put it back together again. This is called chemical recycling and it’s already happening today. You can remove the cellulose fibres from cotton then put them back together to form viscose or lyocell. The downside of that is that the process is pretty expensive and the quality isn’t always that good.”

“Then you also have nanotechnologies, an area that’s developing rapidly and is already being used to strengthen textiles, which makes them last longer. But there are more options for making textiles no-iron, antibacterial – so that it doesn’t start to smell as quickly – or stain resistant. You can also integrate energy-saving electronics into them, or make them water resistant, as you saw last year on Valerio Zeno and Dennis Storm’s BNN TV programme, Proefkonijnen.”

“When you use nanotechnology to make materials smaller, you transform them, as it were, giving them completely different characteristics. So the fact that you can transform materials means that you can also do this with the threads themselves. We believe that when you combine chemical recycling with nanotechnology, what you get is the perfect thread. We call them nanotechstiles, and in the end, they lead to higher quality clothes that are sustainable, as well.”

“The fact that you can transform materials means that you can also do this with the threads themselves”

How far along are you in the research for nanotechstiles?

“We won the TKI Dinalog Take Off in the logistics sector last year with our nanotechstiles idea. That’s a prize for young talent with innovative ideas for economics and logistics. Since then, we’ve been trying to make the concept more concrete. Which recycling methods can we combine with which nanotechnologies? We’re already pretty far along in that research process, but there hasn’t been any clothing produced from it as yet. We’re focusing on cotton because it makes up the largest proportion of waste. At the moment, we’re in talks with the Institut für Textiltechnik at RWTH Aachen University about how we can produce clothes from our nanotechstiles.”

Have you also discovered some pitfalls as part of your research?

“The frustrating thing about nanotechnology is that the more you know about it, the less you can do with it. A lot of options are eliminated during the research process. I’ll give you an example. You want to make clothes that don’t smell as quickly? Well, on paper we know that silver kills 99.9% of bacteria, though we haven’t tested it. So then that leaves you with 0.1%, and that percentage can grow exponentially by using the nutrients from other bacteria. So the material in the clothing itself is safe, but what if a few particles come loose in the wash and get into the drinking water? What happens then? A lot of potential options are eliminated as you go through a process like that because they can be dangerous.”

What are the downsides and how can you guarantee that a design is safe?

“A tremendous amount of nanotechnologies are still in the research phase, so they’re too expensive to develop. We’d like to be using some of them now, but it turns out that there are still too many uncertainties to realistically put them into use. It’s essential to apply the principles of safety by design, only using nanotechnologies where the safety concerns have been well thought out. That’s something we’ve been in touch with the Rijksinstituut voor Volksgezondheid en Milieu (Royal Institute for Public Health and the Environment, RIVM) about. We take safety and the environment into account at every step in the production process for nanotechstiles.”

What’s the biggest challenge to your concept?

“We already know how certain nanotechnologies respond to cotton, but the biggest challenge is to figure out how they respond to recycled fabrics. You have to remember that nanotechnology isn’t just one thing. You can apply it to any material, which gives you thousands of possibilities. The question is, which one do you think is the most important? For example, you can add carbon nanotubes to make a fabric stronger, but then you’d be paying thousands of euros for a single shirt, and no one wants that.”

What’s the next step?

“Right now, we’re trying to get a sort of crowdfunding campaign started amongst businesses. We’re hoping to build relationships with companies like IKEA, who want to use our sustainable and stain-resistant textiles for things like their employee uniforms. So in addition to the subsidies, they’re helping to fund the research in that way. Based on that, we’ll eventually choose a nanotechnology that we can work up into an actual textile.”

I encourage you to read the original article with its embedded images, additional information, and links to more information.

One last comment: nanotechnology-enabled textiles are usually brand new materials, so this is the first time I’ve seen a nanotechnology-based approach to recycling textiles. Bravo!

A European nanotechnology ‘dating’ event for researchers and innovators

A Dec. 13, 2016 Cambridge Network press release announces a networking (dating) event for nanotechnology researchers and industry partners,

The Enterprise Europe Network, in partnership with Innovate UK, the Dutch Ministry of Economic Affairs, the Netherlands Enterprise Agency, Knowledge Transfer Network and the UK Department for Business, Energy & Industrial Strategy invite you to participate in an international partnering event and information day for the Nanotechnologies and Advanced Materials themes of the NMBP [Nanotechnologies, Advanced Materials, Biotechnology and Production] Work Programme within Horizon 2020.

This one-day event on 4th April 2017 will introduce the forthcoming calls for proposals, present insights and expectations from the European Commission, and offer a unique international networking experience to forge the winning partnerships of the future.

The programme will include presentations from the European Commission and its evaluators and an opportunity to build prospective project partnerships with leading research organisations and cutting-edge innovators from across industry.

A dedicated brokerage session will allow you to expand your international network and create strong consortia through scheduled one-to-one meetings. Participants will also have the opportunity to meet with National Contact Points (UK and Netherlands confirmed) and representatives of the Enterprise Europe Network and the UK’s Knowledge Transfer Network.

The day will also include an optional proposal writing workshop in which delegates will be given valuable tips and insight into the preparation of a winning proposal including a review of the key evaluation criteria.

This event is dedicated to Key Enabling Technologies and will target upcoming calls in the following thematic fields: Nanotechnologies; Advanced materials

Participation for the day is free of charge, but early registration is recommended as the number of participants is limited. Please note that participation may be limited to a maximum of two delegates per organization. Please register via the b2match website using this link: https://www.b2match.eu/h2020nmp2017

How does it work? Once you have registered, your profile will be screened by our event management team and once completed you will receive a validation email confirming your participation. You can browse the participant list and book meetings with organisations you are interested in, and a week before the event you will receive your personal meeting schedule.

Why attend? Improve your chances of success by understanding the main issues and expectations for upcoming H2020 calls based on feedback from previous rounds. It’s a great opportunity to raise your profile with future project partners from industry and research through pre-arranged one-to-one meetings. There is also the chance to hear from an experienced H2020 evaluator to gain tips and insight for the preparation of a strong proposal.

Good luck on getting registered for the event. By the way, the Enterprise Europe Network webpage for this event describes it as a Horizon 2020 Brokerage Event.

Sustainable Nanotechnologies (SUN) project draws to a close in March 2017

Two Oct. 31, 2016 news items on Nanowerk signal the impending sunset date for the European Union’s Sustainable Nanotechnologies (SUN) project. The first Oct. 31, 2016 news item on Nanowerk describes the project’s latest achievements,

The results from the 3rd SUN annual meeting showed great advancement of the project. The meeting was held in Edinburgh, Scotland, UK on 4-5 October 2016 where the project partners presented the results obtained during the second reporting period of the project.

SUN is a three and a half year EU project, running from 2013 to 2017, with a budget of about €14 million. Its main goal is to evaluate the risks along the supply chain of engineered nanomaterials and incorporate the results into tools and guidelines for sustainable manufacturing.

The ultimate goal of the SUN Project is the development of an online software Decision Support System – SUNDS – aimed at estimating and managing occupational, consumer, environmental and public health risks from nanomaterials in real industrial products along their lifecycles. The SUNDS beta prototype was released in October 2015, and since then the main focus has been on refining the methodologies and testing them on selected case studies, i.e. nano-copper-oxide-based wood-preserving paint and nano-sized colourants for plastic car parts: organic pigment and carbon black. Obtained results and open issues were discussed during the third annual meeting in order to collect feedback from the consortium that will inform, in the next months, the implementation of the final version of the SUNDS software system, due by March 2017.

An Oct. 27, 2016 SUN project press release, which originated the news item, adds more information,

Significant interest has been paid to the results obtained in WP2 (Lifecycle Thinking), whose main objectives are to assess the environmental impacts arising from each life cycle stage of the SUN case studies (i.e. Nano-WC-Cobalt (Tungsten Carbide-cobalt) sintered ceramics, Nanocopper wood preservatives, Carbon Nano Tube (CNT) in plastics, Silicon Dioxide (SiO2) as food additive, Nano-Titanium Dioxide (TiO2) air filter system, Organic pigment in plastics and Nanosilver (Ag) in textiles), and to compare them to conventional products with similar uses and functionality, in order to develop and validate criteria and guiding principles for green nano-manufacturing. Specifically, the consortium partner COLOROBBIA CONSULTING S.r.l. expressed its willingness to exploit the results obtained from the life cycle assessment analysis related to nanoTiO2 in its industrial applications.

On 6th October [2016], the discussions about the SUNDS advancement continued during a Stakeholder Workshop, where representatives from industry, regulatory and insurance sectors shared their feedback on the use of the decision support system. The recommendations collected during the workshop will be used for further refinement and implemented in the final version of the software, which will be released by March 2017.

The second Oct. 31, 2016 news item on Nanowerk led me to this Oct. 27, 2016 SUN project press release about the activities in the upcoming final months,

The project has designed its final events to serve as an effective platform to communicate the main results achieved in its course within the Nanosafety community and bridge them to a wider audience addressing the emerging risks of Key Enabling Technologies (KETs).

The series of events includes the New Tools and Approaches for Nanomaterial Safety Assessment conference, jointly organized by NANOSOLUTIONS, SUN, NanoMILE, GUIDEnano and eNanoMapper, to be held on 7–9 February 2017 in Malaga, Spain; the SUN-caLIBRAte Stakeholders workshop to be held on 28 February – 1 March 2017 in Venice, Italy; and the SRA Policy Forum: Risk Governance for Key Enabling Technologies to be held on 1–3 March in Venice, Italy.

Jointly organized by the Society for Risk Analysis (SRA) and the SUN Project, the SRA Policy Forum will address current efforts put towards refining the risk governance of emerging technologies through the integration of traditional risk analytic tools alongside considerations of social and economic concerns. The parallel sessions will be organized in 4 tracks:  Risk analysis of engineered nanomaterials along product lifecycle, Risks and benefits of emerging technologies used in medical applications, Challenges of governing SynBio and Biotech, and Methods and tools for risk governance.

The SRA Policy Forum has announced its speakers and preliminary Programme. Confirmed speakers include:

  • Keld Alstrup Jensen (National Research Centre for the Working Environment, Denmark)
  • Elke Anklam (European Commission, Belgium)
  • Adam Arkin (University of California, Berkeley, USA)
  • Phil Demokritou (Harvard University, USA)
  • Gerard Escher (École polytechnique fédérale de Lausanne, Switzerland)
  • Lisa Friedersdorf (National Nanotechnology Initiative, USA)
  • James Lambert (President, Society for Risk Analysis, USA)
  • Andre Nel (The University of California, Los Angeles, USA)
  • Bernd Nowack (EMPA, Switzerland)
  • Ortwin Renn (University of Stuttgart, Germany)
  • Vicki Stone (Heriot-Watt University, UK)
  • Theo Vermeire (National Institute for Public Health and the Environment (RIVM), Netherlands)
  • Tom van Teunenbroek (Ministry of Infrastructure and Environment, The Netherlands)
  • Wendel Wohlleben (BASF, Germany)

The New Tools and Approaches for Nanomaterial Safety Assessment (NMSA) conference aims at presenting the main results achieved in the course of the organizing projects, fostering a discussion about their impact on the nanosafety field and possibilities for future research programmes. The conference welcomes consortium partners, as well as representatives from other EU projects, industry, government, civil society and media. Accordingly, the conference topics include: Hazard assessment along the life cycle of nano-enabled products, Exposure assessment along the life cycle of nano-enabled products, Risk assessment & management, Systems biology approaches in nanosafety, Categorization & grouping of nanomaterials, Nanosafety infrastructure, Safe by design. The NMSA conference keynote speakers include:

  • Harri Alenius (University of Helsinki, Finland)
  • Antonio Marcomini (Ca’ Foscari University of Venice, Italy)
  • Wendel Wohlleben (BASF, Germany)
  • Danail Hristozov (Ca’ Foscari University of Venice, Italy)
  • Eva Valsami-Jones (University of Birmingham, UK)
  • Socorro Vázquez-Campos (LEITAT Technological Center, Spain)
  • Barry Hardy (Douglas Connect GmbH, Switzerland)
  • Egon Willighagen (Maastricht University, Netherlands)
  • Nina Jeliazkova (IDEAconsult Ltd., Bulgaria)
  • Haralambos Sarimveis (The National Technical University of Athens, Greece)

During the SUN-caLIBRAte Stakeholder workshop, the final version of SUNDS, the SUN user-friendly, software-based Decision Support System for managing the environmental, economic and social impacts of nanotechnologies, will be presented and discussed with its end users: industry, regulator and insurance sector representatives. The results from the discussion will be used as the foundation for the development of caLIBRAte’s Risk Governance framework for assessment and management of the human and environmental risks of MN and MN-enabled products.

The SRA Policy Forum: Risk Governance for Key Enabling Technologies and the New Tools and Approaches for Nanomaterial Safety Assessment conference are now open for registration. Abstracts for the SRA Policy Forum can be submitted until 15th November 2016.
For further information go to:
www.sra.org/riskgovernanceforum2017
http://www.nmsaconference.eu/

There you have it.