Tag Archives: Netherlands

Historic and other buildings get protection from pollution?

This Sept. 15, 2017 news item on Nanowerk announces a new product for protecting buildings from pollution,

The ability of titanium dioxide (TiO2) to decompose organic pollutants has been known for about half a century. However, practical applications have been few and hard to develop; now, a Greek paint producer claims to have found a solution.

A Sept. 11, 2017 Youris (European Research Media Center) press release by Koen Mortelmans which originated the news item expands on the theme,

The photocatalytic properties of anatase, one of the three naturally occurring forms of titanium dioxide, were discovered in Japan in the late 1960s. Under the influence of the UV radiation in sunlight, it can decompose organic pollutants such as bacteria, fungi and nicotine into carbon dioxide, and can break down some inorganic materials as well. The catalytic effect is caused by the nanostructure of its crystals.
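
For readers who want the mechanism behind that catalytic effect, here is the textbook outline (my sketch, not part of the press release): an absorbed UV photon creates an electron-hole pair in the anatase crystal, and the separated charges react with adsorbed water and oxygen to produce radicals that break organic matter down to carbon dioxide and water,

    % Simplified textbook scheme for anatase (TiO2) photocatalysis
    \begin{align*}
    \mathrm{TiO_2} + h\nu\,(\mathrm{UV}) &\longrightarrow e^- + h^+ && \text{electron-hole pair}\\
    h^+ + \mathrm{H_2O} &\longrightarrow {}^{\bullet}\mathrm{OH} + \mathrm{H^+} && \text{hydroxyl radical}\\
    e^- + \mathrm{O_2} &\longrightarrow \mathrm{O_2^{\bullet -}} && \text{superoxide radical}\\
    {}^{\bullet}\mathrm{OH} + \text{organic matter} &\longrightarrow \cdots \longrightarrow \mathrm{CO_2} + \mathrm{H_2O} && \text{mineralization}
    \end{align*}

It is these radicals, rather than the TiO2 itself, that do the decomposing, which is also why the effect attacks the organic resin binders in paint, the very problem Arabatzis describes below.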

Applied outdoors, this affordable and widely available material could provide an efficient self-cleaning solution for buildings: the chemical reaction leaves a residue on building façades that is then washed away when it rains. Applying it to monuments in urban areas may help save our cultural heritage, which is threatened by pollutants.

However, “photocatalytic paints and additives have long been a challenge for the coating industry, because the catalytic action affects the durability of resin binders and oxidizes the paint components,” explains Ioannis Arabatzis, founder and managing director of NanoPhos, based in the Greek town of Lavrio, in one of the countries home to some of the most important monuments of human history. The Greek company is testing a paint called Kirei, inspired by a Japanese word meaning both clean and beautiful.

According to Arabatzis, it’s an innovative product because it combines the self-cleaning action of photocatalytic nanoparticles and the reflective properties of cool wall paints. “When applied on exterior surfaces this paint can reflect more than 94% of the incident infrared (IR) radiation, saving energy and reducing costs for heating and cooling”, he says. “The reflection values are enhanced by the self-cleaning ability. Compared to conventional paints, they remain unchanged for longer.”
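
To put that 94% figure in perspective, here is a rough, illustrative calculation (my own, using assumed round numbers for sunlight rather than anything from NanoPhos),

    # Rough illustration of what 94% infrared (IR) reflectance means for a
    # sunlit wall. Assumed round numbers, not NanoPhos data: peak solar
    # irradiance ~1000 W/m^2, with about half of that power arriving as IR.
    solar_irradiance = 1000.0   # W/m^2, assumed peak value
    ir_fraction = 0.5           # assumed fraction of solar power in the IR
    ir_flux = solar_irradiance * ir_fraction

    for paint, reflectance in [("conventional paint (assumed 30%)", 0.30),
                               ("Kirei (claimed 94%)", 0.94)]:
        absorbed = ir_flux * (1.0 - reflectance)
        print(f"{paint}: absorbs ~{absorbed:.0f} W/m^2 of IR")

On these assumptions a coated wall absorbs roughly 30 W/m² of infrared instead of roughly 350 W/m², which is where the claimed heating and cooling savings would come from; the self-cleaning action matters because accumulating dirt would otherwise drag the reflectance back down.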

The development of Kirei has been included in the European project BRESAER (BREakthrough Solutions for Adaptable Envelopes in building Refurbishment) which is studying a sustainable and adaptable “envelope system” to renovate buildings. The new paint was tested and subjected to quality controls following ISO standard procedures at the company’s own facilities and in other independent laboratories. “The lab results from testing in artificial, accelerated weathering conditions are reliable,” Arabatzis claims. “There was no sign of discolouration, chalking, cracking or any other paint defect during 2,000 hours of exposure to the simulated environmental conditions. We expect the coating’s service lifetime to be at least ten years.”

Many studies are being conducted to exploit the properties of titanium dioxide. Jan Duyzer, researcher at the Netherlands Organisation for Applied Scientific Research (TNO) in Utrecht, focused on depollution: “There is no doubt about the ability of anatase to decrease the levels of nitrogen oxides in the air. But in real situations, there are many differences in pollution, wind, light, and temperature. We were commissioned by the Dutch government specifically to find a way to take nitrogen oxides out of the air on roads and in traffic tunnels. We used anatase coated panels. Our results were disappointing, so the government decided to discontinue the research. Furthermore, we still don’t know what caused the difference between lab and life. Our best current hypothesis is that the total surface of the coated panels is very small compared to the large volumes of polluted air passing over them,” he tells youris.com.

Experimental deployment of titanium dioxide panels on an acoustic wall along a Dutch highway – Courtesy of Netherlands Organisation for Applied Scientific Research (TNO)

“In laboratory conditions the air is blown over the photocatalytic surface with a certain degree of turbulence. This results in the NOx-particles and the photocatalytic material coming into full contact with one another,” says engineer Anne Beeldens, visiting professor at KU Leuven, Belgium. Her experience with photocatalytic TiO2 is also limited to nitrogen oxide (NOx) pollution.

“In real applications, the air stream at the contact surface becomes laminar. This results in a lower velocity of the air at the surface and a lower depollution rate. Additionally, not all the air will be in contact with the photocatalytic surfaces. To ensure a good working application, the photocatalytic material needs to be positioned so that all the air is in contact with the surface and flows over it in a turbulent manner. This would allow as much of the NOx as possible to be in contact with the photocatalytic material. In view of this, a good working application could lead to a reduction of 5 to 10 percent of NOx in the air, which is significant compared to other measures to reduce pollutants.”
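
Duyzer’s surface-to-volume hypothesis and Beeldens’ turbulence point can be made concrete with a back-of-envelope model (entirely my own illustration; every number below is an assumption, not TNO or KU Leuven data). The controlling quantity is the deposition velocity of NOx onto the active surface, multiplied by the coated area and compared with the volume of air flowing past,

    import math

    # Back-of-envelope estimates of photocatalytic NOx removal. All numbers
    # are assumptions chosen for illustration, not measured values.
    v_d = 0.002  # m/s, assumed NOx deposition velocity onto an active surface

    # Case 1: a 4 m tall coated acoustic wall beside a highway, with a 3 m/s
    # wind and pollutants mixed through an assumed 50 m deep layer of air.
    area_per_metre = 4.0            # m^2 of coated panel per metre of road
    flow_per_metre = 3.0 * 50.0     # m^3/s of air passing per metre of road
    frac_highway = v_d * area_per_metre / flow_per_metre
    print(f"open highway panel: ~{100 * frac_highway:.3f}% of NOx removed")

    # Case 2: a 500 m tunnel (10 m x 10 m cross-section, both walls coated,
    # air pushed through at 2 m/s), so all the air stays near the surfaces.
    exponent = v_d * 20.0 * 500.0 / (2.0 * 100.0)  # v_d * perimeter * L / (U * A)
    frac_tunnel = 1.0 - math.exp(-exponent)
    print(f"coated tunnel: ~{100 * frac_tunnel:.0f}% of NOx removed")

With these assumptions the open-road panel removes a vanishingly small fraction of the passing NOx, consistent with TNO’s disappointing field results, while a closed geometry that forces all the air past the coated surfaces lands near the 5 to 10 percent Beeldens describes.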

The depollution capacity of TiO2 is undisputed, but most applications and tests have only involved specific kinds of substances. More research and measurements are required if we are to benefit more from the precious features of this material.

I think the most recent piece here on protecting buildings, i.e., the historic type, from pollution is an Oct. 21, 2014 posting: Heart of stone.

‘Nano-hashtags’ for Majorana particles?

The ‘nano-hashtags’ are in fact (assuming a minor leap of imagination) nanowires that resemble hashtags.

Scanning electron microscope image of the device, in which a ‘hashtag’ is clearly formed. Credit: Eindhoven University of Technology

An August 23, 2017 news item on ScienceDaily makes the announcement,

In Nature, an international team of researchers from Eindhoven University of Technology [Netherlands], Delft University of Technology [Netherlands] and the University of California — Santa Barbara presents an advanced quantum chip that will be able to provide definitive proof of the mysterious Majorana particles. These particles, first demonstrated in 2012, are simultaneously their own antiparticles. The chip, which comprises ultrathin networks of nanowires in the shape of ‘hashtags’, has all the qualities to allow Majorana particles to exchange places. This feature is regarded as the smoking gun for proving their existence and is a crucial step towards their use as a building block for future quantum computers.

An August 23, 2017 Eindhoven University press release (also on EurekAlert), which originated the news item, provides some context and information about the work,

In 2012 it was big news: researchers from Delft University of Technology and Eindhoven University of Technology presented the first experimental signatures for the existence of the Majorana fermion. This particle had been predicted in 1937 by the Italian physicist Ettore Majorana and has the distinctive property of also being its own anti-particle. The Majorana particles emerge at the ends of a semiconductor wire when it is in contact with a superconducting material.

Smoking gun

While the discovered particles may have properties typical of Majoranas, the most exciting proof could be obtained by allowing two Majorana particles to exchange places, or ‘braid’ as it is scientifically known. “That’s the smoking gun,” suggests Erik Bakkers, one of the researchers from Eindhoven University of Technology. “The behavior we then see could be the most conclusive evidence yet of Majoranas.”

Crossroads

In the Nature paper published today [August 23, 2017], Bakkers and his colleagues present a new device that should be able to show this exchanging of Majoranas. In the original experiment in 2012, two Majorana particles were found in a single wire, but they were not able to pass each other without immediately annihilating one another. Thus the researchers quite literally had to create space. In the presented experiment they formed intersections using the same kinds of nanowire, so that four of these intersections form a ‘hashtag’, #, and thus create a closed circuit along which Majoranas are able to move.

Etch and grow

The researchers built their hashtag device from scratch. The nanowires are grown from a specially etched substrate so that they form exactly the desired network, which is then exposed to a stream of aluminium particles, creating layers of aluminium, a superconductor, on specific spots on the wires – the contacts where the Majorana particles emerge. Places that lie ‘in the shadow’ of other wires stay uncovered.

Leap in quality

The entire process happens in a vacuum and at ultra-cold temperatures (around -273 degrees Celsius). “This ensures very clean, pure contacts,” says Bakkers, “and enables us to make a considerable leap in the quality of this kind of quantum device.” The measurements demonstrate, for a number of electronic and magnetic properties, that all the ingredients are present for the Majoranas to braid.

Quantum computers

If the researchers succeed in enabling the Majorana particles to braid, they will at once have killed two birds with one stone. Given their robustness, Majoranas are regarded as the ideal building block for future quantum computers that will be able to perform many calculations simultaneously and thus many times faster than current computers. The braiding of two Majorana particles could form the basis for a qubit, the calculation unit of these computers.

Travel around the world

An interesting detail is that the samples traveled around the world during fabrication, combining the unique and complementary capabilities of each research institution. The process started in Delft with patterning and etching of the substrate, moved to Eindhoven for nanowire growth and then to Santa Barbara for aluminium contact formation, before returning to Delft, via Eindhoven, for the measurements.

Here’s a link to and a citation for the paper,

Epitaxy of advanced nanowire quantum devices by Sasa Gazibegovic, Diana Car, Hao Zhang, Stijn C. Balk, John A. Logan, Michiel W. A. de Moor, Maja C. Cassidy, Rudi Schmits, Di Xu, Guanzhong Wang, Peter Krogstrup, Roy L. M. Op het Veld, Kun Zuo, Yoram Vos, Jie Shen, Daniël Bouman, Borzoyeh Shojaei, Daniel Pennachio, Joon Sue Lee, Petrus J. van Veldhoven, Sebastian Koelling, Marcel A. Verheijen, Leo P. Kouwenhoven, Chris J. Palmstrøm, & Erik P. A. M. Bakkers. Nature 548, 434–438 (24 August 2017) doi:10.1038/nature23468 Published online 23 August 2017

This paper is behind a paywall.

Dexter Johnson has some additional insight (an interview with one of the researchers) in an Aug. 29, 2017 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website).

Cosmopolitanism and the Local in Science and Nature (a three year Canadian project nearing its end date)

Working on a grant from Canada’s Social Sciences and Humanities Research Council (SSHRC), the Cosmopolitanism and the Local in Science and Nature project has been establishing a ‘cosmopolitanism’ research network that critiques the Eurocentric approach so beloved of Canadian academics, and has set up nodes across Canada and in India and Southeast Asia.

I first wrote about the project in a Dec. 12, 2014 posting which also featured a job listing. It seems I was there for the beginning and now for the end. For one of the project’s blog postings in its final months, they’re profiling one of their researchers (Dr. Letitia Meynell, Sept. 6, 2017 posting),

1. What is your current place of research?

I am an associate professor in philosophy at Dalhousie University, cross-appointed with gender and women’s studies.

2. Could you give us some details about your education background?

My 1st degree was in Theater, which I did at York University. I did, however, minor in Philosophy and I have always had a particular interest in philosophy of science. So, my minor was perhaps a little anomalous, comprising courses on philosophy of physics, philosophy of nature, and the philosophy of Karl Popper along with courses on aesthetics and existentialism. After taking a few more courses in philosophy at the University of Calgary, I enrolled there for a Master’s degree, writing a thesis on conceptualization, with a view to its role in aesthetics and epistemology. From there I moved to the University of Western Ontario where I brought these three interests together, writing a thesis on the epistemology of pictures in science. Throughout these studies I maintained a keen interest in feminist philosophy, especially the politics of knowledge, and I have always seen my work on pictures in science as fitting into broader feminist commitments.

3. What projects are you currently working on and what are some projects you’ve worked on in the past?

4. What’s one thing you particularly enjoy about working in your field?

5. How do you relate your work to the broader topic of ‘cosmopolitanism and the local’?

As feminist philosophers have long realized, having perspectives on a topic that are quite different to your own is incredibly powerful for critically assessing both your own views and those of others. So, for instance, if you want to address the exploitation of nonhuman animals in our society it is incredibly powerful to consider how people from, say, South Asian traditions have thought about the differences, similarities, and relationships between humans and other animals. Keeping non-western perspectives in mind, even as one works in a western philosophical tradition, helps one to be both more rigorous in one’s analyses and less dogmatic. Rigor and critical openness are, in my opinion, central virtues of philosophy and, indeed, science.

Dr. Meynell will be speaking at the ‘Bridging the Gap: Scientific Imagination Meets Aesthetic Imagination’ conference Oct. 5-6, 2017 at the London School of Economics,

On 5–6 October, this 2-day conference aims to connect work on artistic and scientific imagination, and to advance our understanding of the epistemic and heuristic roles that imagination can play.

Why, how, and when do scientists imagine, and what epistemological roles does the imagination play in scientific progress? Over the past few years, many philosophical accounts have emerged that are relevant to these questions. Roman Frigg, Arnon Levy, and Adam Toon have developed theories of scientific models that place imagination at the heart of modelling practice. And James R. Brown, Tamar Gendler, James McAllister, Letitia Meynell, and Nancy Nersessian have developed theories that recognize the indispensable role of the imagination in the performance of thought experiments. On the other hand, philosophers like Michael Weisberg dismiss imagination-based views of scientific modelling as mere “folk ontology”, and John D. Norton seems to claim that thought experiments are arguments whose imaginary components are epistemologically irrelevant.

In this conference we turn to aesthetics for help in addressing issues concerning the use of the scientific imagination. Aesthetics is said to have begun in 1712 with an essay called “The Pleasures of the Imagination” by Joseph Addison, and ever since, imagination has been what Michael Polanyi called “the cornerstone of aesthetic theory”. In recent years Kendall Walton has fruitfully explored the fundamental relevance of imagination for understanding literary, visual and auditory fictions. And many others have been inspired to do the same, including Greg Currie, David Davies, Peter Lamarque, Stein Olsen, and Kathleen Stock.

Specific topics may include:

  • What kinds of imagination are involved in science?
  • What is the relation between scientific imagination and aesthetic imagination?
  • What are the structure and limits of knowledge and understanding acquired through imagination?
  • From a methodological point of view, how can aesthetic considerations about imagination play a role in philosophical accounts of scientific reasoning?
  • What can considerations about scientific imagination contribute to our understanding of aesthetic imagination?

The conference will include eight invited talks and four contributed papers. Two of the four slots for contributed papers are being reserved for graduate students, each of whom will receive a travel bursary of £100.

Invited speakers

Margherita Arcangeli (Humboldt University, Berlin)

Andrej Bicanski (Institute of Cognitive Neuroscience, University College London)

Gregory Currie (University of York)

Jim Faeder (University of Pittsburgh School of Medicine)

Tim de Mey (Erasmus University of Rotterdam)

Letitia Meynell (Dalhousie University, Canada)

Adam Toon (University of Exeter)

Margot Strohminger (Humboldt University, Berlin)

This event is organised by LSE’s Centre for Philosophy of Natural and Social Science and it is co-sponsored by the British Society of Aesthetics, the Mind Association, the Aristotelian Society and the Marie Skłodowska-Curie grant agreement No 654034.

I wonder if they’ll be rubbing shoulders with Angelina Jolie? She is slated to be teaching there in Fall 2017 according to a May 23, 2016 news item in the Guardian (Note: Links have been removed),

The Hollywood actor and director has been appointed a visiting professor at the London School of Economics, teaching a course on the impact of war on women.

From 2017, Jolie will join the former foreign secretary William Hague as a “professor in practice”, the university announced on Monday, as part of a new MSc course on women, peace and security, which LSE says is the first of its kind in the world.

The course, it says, is intended to “[develop] strategies to promote gender equality and enhance women’s economic, social and political participation and security”, with visiting professors playing an active part in giving lectures, participating in workshops and undertaking their own research.

Getting back to ‘Cosmopolitanism’, some of the principals organized a summer 2017 event (from a Sept. 6, 2017 posting titled: Summer Events – 25th International Congress of History of Science and Technology),

CosmoLocal partners Lesley Cormack (University of Alberta, Canada), Gordon McOuat (University of King’s College, Halifax, Canada), and Dhruv Raina (Jawaharlal Nehru University, India) organized a symposium “Cosmopolitanism and the Local in Science and Nature” as part of the 25th International Congress of History of Science and Technology.  The conference was held July 23-29, 2017, in Rio de Janeiro, Brazil.  The abstract of the CosmoLocal symposium is below, and a pdf version can be found here.

Science, and its associated technologies, is typically viewed as “universal”. At the same time we were also assured that science can trace its genealogy to Europe in a period of rising European intellectual and imperial global force, ‘going outwards’ towards the periphery. As such, it is strikingly parochial. In a kind of sad irony, the ‘subaltern’ was left to retell that tale as one of centre-universalism dominating a traditionalist periphery. Self-described ‘modernity’ and ‘the west’ (two intertwined concepts of recent and mutually self-supporting origin) have erased much of the local engagement and as such represent science as emerging sui generis, moving in one direction. This story is now being challenged within sociology, political theory and history.

… Significantly, scholars who study the history of science in Asia and India have been examining different trajectories for the origin and meaning of science. It is now time for a dialogue between these approaches. Grounding the dialogue is the notion of a “cosmopolitical” science. “Cosmopolitics” is a term borrowed from Kant’s notion of perpetual peace and modern civil society, imagining shared political, moral and economic spaces within which trade, politics and reason get conducted.  …

The abstract is a little ‘high-falutin’ but I’m glad to see more efforts being made in Canada to understand science and its history as a global affair.

Robot artists—should they get copyright protection?

Clearly a lawyer wrote this June 26, 2017 essay on theconversation.com (Note: A link has been removed),

When a group of museums and researchers in the Netherlands unveiled a portrait entitled The Next Rembrandt, it was something of a tease to the art world. It wasn’t a long lost painting but a new artwork generated by a computer that had analysed thousands of works by the 17th-century Dutch artist Rembrandt Harmenszoon van Rijn.

The computer used something called machine learning [emphasis mine] to analyse and reproduce technical and aesthetic elements in Rembrandt’s works, including lighting, colour, brush-strokes and geometric patterns. The result is a portrait produced based on the styles and motifs found in Rembrandt’s art but produced by algorithms.

But who owns creative works generated by artificial intelligence? This isn’t just an academic question. AI is already being used to generate works in music, journalism and gaming, and these works could in theory be deemed free of copyright because they are not created by a human author.

This would mean they could be freely used and reused by anyone and that would be bad news for the companies selling them. Imagine you invest millions in a system that generates music for video games, only to find that music isn’t protected by law and can be used without payment by anyone in the world.

Unlike with earlier computer-generated works of art, machine learning software generates truly creative works without human input or intervention. AI is not just a tool. While humans program the algorithms, the decision making – the creative spark – comes almost entirely from the machine.

It could have been written by someone involved in the technology, but nobody with that background would write “… something called machine learning … .” Andres Guadamuz, lecturer in Intellectual Property Law at the University of Sussex, goes on to say (Note: Links have been removed),

That doesn’t mean that copyright should be awarded to the computer, however. Machines don’t (yet) have the rights and status of people under the law. But that doesn’t necessarily mean there shouldn’t be any copyright either. Not all copyright is owned by individuals, after all.

Companies are recognised as legal people and are often awarded copyright for works they don’t directly create. This occurs, for example, when a film studio hires a team to make a movie, or a website commissions a journalist to write an article. So it’s possible copyright could be awarded to the person (company or human) that has effectively commissioned the AI to produce work for it.

Things are likely to become yet more complex as AI tools are more commonly used by artists and as the machines get better at reproducing creativity, making it harder to discern if an artwork is made by a human or a computer. Monumental advances in computing and the sheer amount of computational power becoming available may well make the distinction moot. At that point, we will have to decide what type of protection, if any, we should give to emergent works created by intelligent algorithms with little or no human intervention.

The most sensible move seems to follow those countries that grant copyright to the person who made the AI’s operation possible, with the UK’s model looking like the most efficient. This will ensure companies keep investing in the technology, safe in the knowledge they will reap the benefits. What happens when we start seriously debating whether computers should be given the status and rights of people is a whole other story.

The team that developed a ‘new’ Rembrandt produced a video about the process,

Mark Brown’s April 5, 2016 article about this project (which was unveiled on April 5, 2016 in Amsterdam, Netherlands) for the Guardian newspaper provides more detail, such as this,

It [Next Rembrandt project] is the result of an 18-month project which asks whether new technology and data can bring back to life one of the greatest, most innovative painters of all time.

Advertising executive [Bas] Korsten, whose brainchild the project was, admitted that there were many doubters. “The idea was greeted with a lot of disbelief and scepticism,” he said. “Also coming up with the idea is one thing, bringing it to life is another.”

The project has involved data scientists, developers, engineers and art historians from organisations including Microsoft, Delft University of Technology, the Mauritshuis in The Hague and the Rembrandt House Museum in Amsterdam.

The final 3D printed painting consists of more than 148 million pixels and is based on 168,263 Rembrandt painting fragments.

Some of the challenges have been in designing a software system that could understand Rembrandt based on his use of geometry, composition and painting materials. A facial recognition algorithm was then used to identify and classify the most typical geometric patterns used to paint human features.
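
The article doesn’t detail the algorithms, but the flavour of ‘identify and classify the most typical geometric patterns’ can be sketched. Here is a minimal, hypothetical version (my own, not the project’s actual pipeline; random numbers stand in for the landmark data): reduce each painted face to a few landmark points, convert those to scale-free proportions, and take the centre of the resulting distribution as the ‘typical’ Rembrandt geometry,

    import numpy as np

    # Hypothetical sketch of extracting 'typical' facial geometry from a set
    # of portraits. Assumes a landmark detector has already reduced each face
    # to five 2-D points (two eyes, nose tip, two mouth corners); random
    # numbers stand in for that data here.
    rng = np.random.default_rng(0)
    n_faces, n_landmarks = 300, 5
    landmarks = rng.normal(size=(n_faces, n_landmarks, 2))  # stand-in data

    # Scale-free features: all pairwise distances between landmarks,
    # normalised by the inter-ocular distance (landmarks 0 and 1), so large
    # and small portraits become comparable.
    iod = np.linalg.norm(landmarks[:, 0] - landmarks[:, 1], axis=1)
    i, j = np.triu_indices(n_landmarks, k=1)
    dists = np.linalg.norm(landmarks[:, i] - landmarks[:, j], axis=2)
    features = dists / iod[:, None]

    # The 'most typical' pattern is the centre of the feature distribution;
    # a generated face would be constrained to match these proportions.
    print("typical proportions:", np.round(features.mean(axis=0), 2))
    print("variability:        ", np.round(features.std(axis=0), 2))

In the real project such proportions would be learned from Rembrandt’s actual portraits and used to constrain the generated face, alongside the lighting, colour and brushstroke analyses mentioned above.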

It sounds like it was a fascinating project but I don’t believe ‘The Next Rembrandt’ is an example of AI creativity or of the ‘creative spark’ Guadamuz discusses. This seems more like the kind of work that could be done by a talented forger or fraudster. As I understand it, even when a human creates this type of artwork (a newly discovered and unknown xxx masterpiece), the piece is not considered a creative work in its own right. Some pieces are outright fraudulent, and others are described as ‘in the manner of xxx.’

Taking a somewhat different approach to mine, Timothy Geigner at Techdirt has also commented on the question of copyright and AI in relation to Guadamuz’s essay in a July 7, 2017 posting,

Unlike with earlier computer-generated works of art, machine learning software generates truly creative works without human input or intervention. AI is not just a tool. While humans program the algorithms, the decision making – the creative spark – comes almost entirely from the machine.

Let’s get the easy part out of the way: the culminating sentence in the quote above is not true. The creative spark is not the artistic output. Rather, the creative spark has always been known as the need to create in the first place. This isn’t a trivial quibble, either, as it factors into the simple but important reasoning for why AI and machines should certainly not receive copyright rights on their output.

That reasoning is the purpose of copyright law itself. Far too many see copyright as a reward system for those that create art rather than what it actually was meant to be: a boon to an artist to compensate for that artist to create more art for the benefit of the public as a whole. Artificial intelligence, however far progressed, desires only what it is programmed to desire. In whatever hierarchy of needs an AI might have, profit via copyright would factor either laughably low or not at all into its future actions. Future actions of the artist, conversely, are the only item on the agenda for copyright’s purpose. If receiving a copyright wouldn’t spur AI to create more art beneficial to the public, then copyright ought not to be granted.

Geigner goes on, in the same July 7, 2017 posting, to elucidate other issues in the general debate over AI and ‘rights’, as well as the EU’s proposed solution.

Art masterpieces are turning into soap

This piece of research has made a winding trek through the online science world. First it was featured in an April 20, 2017 American Chemical Society news release on EurekAlert,

A good art dealer can really clean up in today’s market, but not when some weird chemistry wreaks havoc on masterpieces. Art conservators started to notice microscopic pockmarks forming on the surfaces of treasured oil paintings that cause the images to look hazy. It turns out the marks are eruptions of paint caused, weirdly, by soap that forms via chemical reactions. Since you have no time to watch paint dry, we explain how paintings from Rembrandts to O’Keeffes are threatened by their own compositions — and we don’t mean the imagery.

Here’s the video,

Interestingly, this seems to be based on a May 23, 2016 article by Sarah Everts for Chemical and Engineering News (an American Chemical Society publication). Note: Links have been removed,

When conservator Petria Noble first peered at Rembrandt’s “Anatomy Lesson of Dr. Nicolaes Tulp” under a microscope back in 1996, she was surprised to find pockmarks across the nearly 400-year-old painting’s surface.

Each tiny crater was just a few hundred micrometers in diameter, no wider than the period at the end of this sentence. The painting’s surface was entirely riddled with these curious structures, giving it “a dull, rather hazy, gritty surface,” Noble says.

The crystal structures of metal soaps vary: shown here is the structure of lead nonanoate, solved by Cecil Dybowski at the University of Delaware and colleagues at the Metropolitan Museum of Art. Dashed lines are nearest oxygen neighbors.

This concerned Noble, who was tasked with cleaning the masterpiece with her then-colleague Jørgen Wadum at the Mauritshuis museum, the painting’s home in The Hague.

When Noble called physicist Jaap Boon, then at the Foundation for Fundamental Research on Matter in Amsterdam, to help figure out what was going on, the researchers unsuspectingly embarked on an investigation that would transform the art world’s understanding of aging paint.

More recently this ‘metal soaps in paintings’ story has made its way into a May 16, 2017 news item on phys.org,

An oil painting is not a permanent and unchangeable object; it undergoes a very slow change in its outer and inner structure, in which metal soap formation is of great importance. Joen Hermans has managed to recreate the molecular structure of old oil paints: a big step towards better preservation of works of art. He graduated cum laude on Tuesday 9 May [2017] at the University of Amsterdam, with NWO funding from the Science4Arts program.

A May 15, 2017 Netherlands Organization for Scientific Research (NWO) press release, which originated the phys.org news item, provides more information about Hermans’ work (albeit some of this is repetitive),

Johannes Vermeer, View of Delft, c. 1660–1661 (Mauritshuis, The Hague)

Paint can fade, varnish can discolour and paintings can collect dust and dirt. Joen Hermans has examined the chemical processes behind ageing processes in paints. ‘While restorers do their best to repair any damages that have occurred, the fact remains that at present we do not know enough about the molecular structure of ageing oil paint and the chemical processes they undergo’, says Hermans. ‘This makes it difficult to predict with confidence how paints will react to restoration treatments or to changes in a painting’s environment.’

‘Sand grains’ in the red tiles of ‘View of Delft’ by Johannes Vermeer are ‘lead soap spheres’ (Annelies van Loon, UvA/Mauritshuis)

Visible to the naked eye

Hermans explains that in its simplest form, oil paint is a mixture of pigment and drying oil, which forms the binding element. Colour pigments are often metal salts. ‘When the pigment and the drying oil are combined, an incredibly complicated chemical process begins’, says Hermans, ‘which continues for centuries’. The fatty acids in the oil form a polymer network when exposed to oxygen in the air. Meanwhile, metal ions react with the oil on the surface of the grains of pigment.

‘A common problem when conserving oil paintings is the formation of what are known as metal soaps’, Hermans continues. These are compounds of metal ions and fatty acids. The formation of metal soaps is linked to various ways in which paint deteriorates, as when it becomes increasingly brittle, turns transparent or forms a crust on the paint surface. Hermans: ‘You can see clumps of metal soap with the naked eye on some paintings, like Rembrandt’s Anatomy Lesson of Dr Nicolaes Tulp or Vermeer’s View of Delft. Around 70 per cent of all oil paintings show signs of metal soap formation.’

Conserving valuable paintings

Hermans has studied in detail how metal soaps form. He began by defining the structure of metal soaps. One of the things he discovered was that the process that causes metal ions to move in the painting is crucial to the speed at which the painting ages. Hermans also managed to recreate the molecular structure of old oil paints, making it possible to simulate and study the behaviour of old paints without actually having to remove samples from Rembrandt’s Night Watch. Hermans hopes this knowledge will contribute towards a solid foundation for the conservation of valuable works of art.

I imagine this will give anyone who owns an oil painting, or appreciates paintings in general, pause for thought, along with the inclination to utter a short prayer for conservators to find a solution.

Explaining the link between air pollution and heart disease?

An April 26, 2017 news item on Nanowerk announces research that may explain the link between heart disease and air pollution (Note: A link has been removed),

Tiny particles in air pollution have been associated with cardiovascular disease, which can lead to premature death. But how particles inhaled into the lungs can affect blood vessels and the heart has remained a mystery.

Now, scientists have found evidence in human and animal studies that inhaled nanoparticles can travel from the lungs into the bloodstream, potentially explaining the link between air pollution and cardiovascular disease. Their results appear in the journal ACS Nano (“Inhaled Nanoparticles Accumulate at Sites of Vascular Disease”).

An April 26, 2017 American Chemical Society news release on EurekAlert, which originated the news item, expands on the theme,

The World Health Organization estimates that in 2012, about 72 percent of premature deaths related to outdoor air pollution were due to ischemic heart disease and strokes. Pulmonary disease, respiratory infections and lung cancer were linked to the other 28 percent. Many scientists have suspected that fine particles travel from the lungs into the bloodstream, but evidence supporting this assumption in humans has been challenging to collect. So Mark Miller and colleagues at the University of Edinburgh in the United Kingdom and the National Institute for Public Health and the Environment in the Netherlands used a selection of specialized techniques to track the fate of inhaled gold nanoparticles.

In the new study, 14 healthy volunteers, 12 surgical patients and several mouse models inhaled gold nanoparticles, which have been safely used in medical imaging and drug delivery. Soon after exposure, the nanoparticles were detected in blood and urine. Importantly, the nanoparticles appeared to preferentially accumulate at inflamed vascular sites, including carotid plaques in patients at risk of a stroke. The findings suggest that nanoparticles can travel from the lungs into the bloodstream and reach susceptible areas of the cardiovascular system where they could possibly increase the likelihood of a heart attack or stroke, the researchers say.

Here’s a link to and a citation for the paper,

Inhaled Nanoparticles Accumulate at Sites of Vascular Disease by Mark R. Miller, Jennifer B. Raftis, Jeremy P. Langrish, Steven G. McLean, Pawitrabhorn Samutrtai, Shea P. Connell, Simon Wilson, Alex T. Vesey, Paul H. B. Fokkens, A. John F. Boere, Petra Krystek, Colin J. Campbell, Patrick W. F. Hadoke, Ken Donaldson, Flemming R. Cassee, David E. Newby, Rodger Duffin, and Nicholas L. Mills. ACS Nano, Article ASAP DOI: 10.1021/acsnano.6b08551 Publication Date (Web): April 26, 2017

Copyright © 2017 American Chemical Society

This paper is behind a paywall.

2D printed transistors in Ireland

2D transistors seem to be a hot area for research these days. In Ireland, the AMBER Centre has announced a transistor consisting entirely of 2D nanomaterials in an April 6, 2017 news item on Nanowerk,

Researchers in AMBER, the Science Foundation Ireland-funded materials science research centre hosted in Trinity College Dublin, have fabricated printed transistors consisting entirely of 2-dimensional nanomaterials for the first time. These 2D materials combine exciting electronic properties with the potential for low-cost production.

This breakthrough could unlock the potential for applications such as food packaging that displays a digital countdown to warn you of spoiling, wine labels that alert you when your white wine is at its optimum temperature, or even a window pane that shows the day’s forecast. …

An April 7, 2017 AMBER Centre press release (also on EurekAlert), which originated the news item, expands on the theme,

Prof Jonathan Coleman, who is an investigator in AMBER and Trinity’s School of Physics, said, “In the future, printed devices will be incorporated into even the most mundane objects such as labels, posters and packaging.

“Printed electronic circuitry (constructed from the devices we have created) will allow consumer products to gather, process, display and transmit information: for example, milk cartons could send messages to your phone warning that the milk is about to go out-of-date.

“We believe that 2D nanomaterials can compete with the materials currently used for printed electronics. Compared to other materials employed in this field, our 2D nanomaterials have the capability to yield more cost effective and higher performance printed devices. However, while the last decade has underlined the potential of 2D materials for a range of electronic applications, only the first steps have been taken to demonstrate their worth in printed electronics. This publication is important because it shows that conducting, semiconducting and insulating 2D nanomaterials can be combined together in complex devices. We felt that it was critically important to focus on printing transistors as they are the electric switches at the heart of modern computing. We believe this work opens the way to print a whole host of devices solely from 2D nanosheets.”

Led by Prof Coleman, in collaboration with the groups of Prof Georg Duesberg (AMBER) and Prof. Laurens Siebbeles (TU Delft, Netherlands), the team used standard printing techniques to combine graphene nanosheets as the electrodes with two other nanomaterials, tungsten diselenide and boron nitride, as the channel and separator (two important parts of a transistor) to form an all-printed, all-nanosheet, working transistor.

Printable electronics have developed over the last thirty years based mainly on printable carbon-based molecules. While these molecules can easily be turned into printable inks, such materials are somewhat unstable and have well-known performance limitations. There have been many attempts to surpass these obstacles using alternative materials, such as carbon nanotubes or inorganic nanoparticles, but these materials have also shown limitations in either performance or in manufacturability. While the performance of printed 2D devices cannot yet compare with advanced transistors, the team believe there is a wide scope to improve performance beyond the current state-of-the-art for printed transistors.

The ability to print 2D nanomaterials is based on Prof. Coleman’s scalable method of producing 2D nanomaterials, including graphene, boron nitride, and tungsten diselenide nanosheets, in liquids, a method he has licensed to Samsung and Thomas Swan. These nanosheets are flat nanoparticles that are a few nanometres thick but hundreds of nanometres wide. Critically, nanosheets made from different materials have electronic properties that can be conducting, insulating or semiconducting and so include all the building blocks of electronics. Liquid processing is especially advantageous in that it yields large quantities of high quality 2D materials in a form that is easy to process into inks. Prof. Coleman’s publication provides the potential to print circuitry at extremely low cost which will facilitate a range of applications from animated posters to smart labels.

Prof Coleman is a partner in the Graphene Flagship, a €1 billion EU initiative to boost new technologies and innovation over the next 10 years.

Here’s a link to and a citation for the paper,

All-printed thin-film transistors from networks of liquid-exfoliated nanosheets by Adam G. Kelly, Toby Hallam, Claudia Backes, Andrew Harvey, Amir Sajad Esmaeily, Ian Godwin, João Coelho, Valeria Nicolosi, Jannika Lauth, Aditya Kulkarni, Sachin Kinge, Laurens D. A. Siebbeles, Georg S. Duesberg, Jonathan N. Coleman. Science  07 Apr 2017: Vol. 356, Issue 6333, pp. 69-73 DOI: 10.1126/science.aal4062

This paper is behind a paywall.

High-performance, low-energy artificial synapse for neural network computing

This artificial synapse is apparently an improvement on the standard memristor-based artificial synapse, but that doesn’t become clear until you read the abstract for the paper. First, there’s a Feb. 20, 2017 Stanford University news release by Taylor Kubota (dated Feb. 21, 2017 on EurekAlert), Note: Links have been removed,

For all the improvements in computer technology over the years, we still struggle to recreate the low-energy, elegant processing of the human brain. Now, researchers at Stanford University and Sandia National Laboratories have made an advance that could help computers mimic one piece of the brain’s efficient design – an artificial version of the space over which neurons communicate, called a synapse.

“It works like a real synapse but it’s an organic electronic device that can be engineered,” said Alberto Salleo, associate professor of materials science and engineering at Stanford and senior author of the paper. “It’s an entirely new family of devices because this type of architecture has not been shown before. For many key metrics, it also performs better than anything that’s been done before with inorganics.”

The new artificial synapse, reported in the Feb. 20 issue of Nature Materials, mimics the way synapses in the brain learn through the signals that cross them. This is a significant energy savings over traditional computing, which involves separately processing information and then storing it into memory. Here, the processing creates the memory.

This synapse may one day be part of a more brain-like computer, which could be especially beneficial for computing that works with visual and auditory signals. Examples of this are seen in voice-controlled interfaces and driverless cars. Past efforts in this field have produced high-performance neural networks supported by artificially intelligent algorithms but these are still distant imitators of the brain that depend on energy-consuming traditional computer hardware.

Building a brain

When we learn, electrical signals are sent between neurons in our brain. The most energy is needed the first time a synapse is traversed. Every time afterward, the connection requires less energy. This is how synapses efficiently facilitate both learning something new and remembering what we’ve learned. The artificial synapse, unlike most other versions of brain-like computing, also fulfills these two tasks simultaneously, and does so with substantial energy savings.

“Deep learning algorithms are very powerful but they rely on processors to calculate and simulate the electrical states and store them somewhere else, which is inefficient in terms of energy and time,” said Yoeri van de Burgt, former postdoctoral scholar in the Salleo lab and lead author of the paper. “Instead of simulating a neural network, our work is trying to make a neural network.”

The artificial synapse is based on a battery design. It consists of two thin, flexible films with three terminals, connected by an electrolyte of salty water. The device works as a transistor, with one of the terminals controlling the flow of electricity between the other two.

Like a neural path in a brain being reinforced through learning, the researchers program the artificial synapse by discharging and recharging it repeatedly. Through this training, they have been able to predict, within 1 percent uncertainty, what voltage will be required to get the synapse to a specific electrical state and, once there, it remains at that state. In other words, unlike a common computer, where you save your work to the hard drive before you turn it off, the artificial synapse can recall its programming without any additional actions or parts.
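
The ‘discharge and recharge until it holds the state you want’ procedure amounts to a write-and-verify loop. Here is a toy version (my illustration, with a made-up device model, not the Stanford/Sandia protocol),

    # Toy write-and-verify loop for programming a non-volatile synapse to a
    # target state. The device model is invented for illustration: each
    # voltage pulse shifts the stored state in proportion to the pulse.

    def apply_pulse(state: float, volts: float, gain: float = 0.1) -> float:
        """Hypothetical device response to one charge/discharge pulse."""
        return state + gain * volts

    def program(state: float, target: float, tolerance: float = 0.01) -> float:
        """Pulse repeatedly until the state is within 1% of the target."""
        while abs(state - target) > tolerance * abs(target):
            error = target - state                   # read-out (verify) step
            state = apply_pulse(state, volts=error)  # sign picks charge/discharge
        return state

    final_state = program(state=0.0, target=0.73)
    print(f"programmed to {final_state:.4f} (target 0.73, within 1%)")

The release’s point is actually stronger than this loop suggests: because the device’s response is predictable to within about 1 percent, the required voltage can largely be computed in advance rather than found by trial and error.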

Testing a network of artificial synapses

Only one artificial synapse has been produced, but researchers at Sandia used 15,000 measurements from experiments on that synapse to simulate how an array of them would work in a neural network. They tested the simulated network’s ability to recognize handwritten digits 0 through 9. Tested on three datasets, the simulated array was able to identify the handwritten digits with an accuracy of between 93 and 97 percent.
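
As a rough stand-in for ‘simulate an array of synapses from device measurements’, here is a minimal sketch (mine, not the Sandia code): train an ordinary classifier on a small handwritten-digit dataset, then snap its weights onto a finite grid of levels, the way a crossbar of multi-state devices would store them (the paper’s abstract, quoted below, reports more than 500 distinct conductance states),

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Minimal stand-in for a synapse-array simulation: a linear classifier on
    # 8x8 handwritten digits whose weights are restricted to a finite set of
    # 'conductance' levels. Illustrative only; not the Sandia simulation.
    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)

    model = LogisticRegression(max_iter=2000).fit(X_train, y_train)

    def quantize(w: np.ndarray, levels: int = 500) -> np.ndarray:
        """Snap each weight to the nearest of `levels` evenly spaced states."""
        grid = np.linspace(w.min(), w.max(), levels)
        return grid[np.abs(w[..., None] - grid).argmin(axis=-1)]

    model.coef_ = quantize(model.coef_)  # store the weights as device states
    print(f"accuracy with quantized weights: {model.score(X_test, y_test):.3f}")

With a few hundred levels the accuracy barely moves, which is the intuition behind the reported 93 to 97 percent: enough distinct conductance states let an analogue device array behave like a conventional weight matrix.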

Although this task would be relatively simple for a person, traditional computers have a difficult time interpreting visual and auditory signals.

“More and more, the kinds of tasks that we expect our computing devices to do require computing that mimics the brain because using traditional computing to perform these tasks is becoming really power hungry,” said A. Alec Talin, distinguished member of technical staff at Sandia National Laboratories in Livermore, California, and senior author of the paper. “We’ve demonstrated a device that’s ideal for running these type of algorithms and that consumes a lot less power.”

This device is extremely well suited for the kind of signal identification and classification that traditional computers struggle to perform. Whereas digital transistors can be in only two states, such as 0 and 1, the researchers successfully programmed 500 states in the artificial synapse, which is useful for neuron-type computation models. In switching from one state to another they used about one-tenth as much energy as a state-of-the-art computing system needs in order to move data from the processing unit to the memory.

This, however, means they are still using about 10,000 times as much energy as the minimum a biological synapse needs in order to fire. The researchers are hopeful that they can attain neuron-level energy efficiency once they test the artificial synapse in smaller devices.
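
The energy figures in the release are consistent with the numbers in the paper’s abstract (quoted below), as a quick check shows,

    # Quick cross-check of the '10,000 times' figure using numbers from the
    # paper's abstract: ENODe switching costs < 10 pJ per event, while a
    # biological synaptic event runs about 1-100 fJ.
    enode_energy_j = 10e-12   # joules, upper bound per ENODe switching event
    biology_energy_j = 1e-15  # joules, lower end of the biological range
    print(f"ENODe / biology: {enode_energy_j / biology_energy_j:,.0f}x")  # 10,000x

Closing that remaining factor of ten thousand is what the researchers hope scaling to smaller devices will achieve.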

Organic potential

Every part of the device is made of inexpensive organic materials. These aren’t found in nature but they are largely composed of hydrogen and carbon and are compatible with the brain’s chemistry. Cells have been grown on these materials and they have even been used to make artificial pumps for neural transmitters. The voltages applied to train the artificial synapse are also the same as those that move through human neurons.

All this means it’s possible that the artificial synapse could communicate with live neurons, leading to improved brain-machine interfaces. The softness and flexibility of the device also lends itself to being used in biological environments. Before any applications to biology, however, the team plans to build an actual array of artificial synapses for further research and testing.

Additional Stanford co-authors of this work include co-lead author Ewout Lubberman, also of the University of Groningen in the Netherlands, Scott T. Keene and Grégorio C. Faria, also of Universidade de São Paulo, in Brazil. Sandia National Laboratories co-authors include Elliot J. Fuller and Sapan Agarwal in Livermore and Matthew J. Marinella in Albuquerque, New Mexico. Salleo is an affiliate of the Stanford Precourt Institute for Energy and the Stanford Neurosciences Institute. Van de Burgt is now an assistant professor in microsystems and an affiliate of the Institute for Complex Molecular Studies (ICMS) at Eindhoven University of Technology in the Netherlands.

This research was funded by the National Science Foundation, the Keck Faculty Scholar Funds, the Neurofab at Stanford, the Stanford Graduate Fellowship, Sandia’s Laboratory-Directed Research and Development Program, the U.S. Department of Energy, the Holland Scholarship, the University of Groningen Scholarship for Excellent Students, the Hendrik Muller National Fund, the Schuurman Schimmel-van Outeren Foundation, the Foundation of Renswoude (The Hague and Delft), the Marco Polo Fund, the Instituto Nacional de Ciência e Tecnologia/Instituto Nacional de Eletrônica Orgânica in Brazil, the Fundação de Amparo à Pesquisa do Estado de São Paulo and the Brazilian National Council.

Here’s an abstract for the researchers’ paper (link to paper provided after abstract) and it’s where you’ll find the memristor connection explained,

The brain is capable of massively parallel information processing while consuming only ~1–100 fJ per synaptic event. Inspired by the efficiency of the brain, CMOS-based neural architectures and memristors are being developed for pattern recognition and machine learning. However, the volatility, design complexity and high supply voltages for CMOS architectures, and the stochastic and energy-costly switching of memristors complicate the path to achieve the interconnectivity, information density, and energy efficiency of the brain using either approach. Here we describe an electrochemical neuromorphic organic device (ENODe) operating with a fundamentally different mechanism from existing memristors. ENODe switches at low voltage and energy (<10 pJ for 10³ μm² devices), displays >500 distinct, non-volatile conductance states within a ~1 V range, and achieves high classification accuracy when implemented in neural network simulations. Plastic ENODes are also fabricated on flexible substrates, enabling the integration of neuromorphic functionality in stretchable electronic systems. Mechanical flexibility makes ENODes compatible with three-dimensional architectures, opening a path towards extreme interconnectivity comparable to the human brain.

Here’s a link to and a citation for the paper,

A non-volatile organic electrochemical device as a low-voltage artificial synapse for neuromorphic computing by Yoeri van de Burgt, Ewout Lubberman, Elliot J. Fuller, Scott T. Keene, Grégorio C. Faria, Sapan Agarwal, Matthew J. Marinella, A. Alec Talin, & Alberto Salleo. Nature Materials (2017) doi:10.1038/nmat4856 Published online 20 February 2017

This paper is behind a paywall.

ETA March 8, 2017 10:28 PST: You may find this piece on ferroelectricity and neuromorphic engineering of interest (March 7, 2017 posting titled: Ferroelectric roadmap to neuromorphic computing).

Nominations open for Kabiller Prizes in Nanoscience and Nanomedicine ($250,000 for visionary researcher and $10,000 for young investigator)

For a change I can publish something that doesn’t have a deadline in three days or less! Without more ado (from a Feb. 20, 2017 Northwestern University news release by Megan Fellman [h/t Nanowerk’s Feb. 20, 2017 news item]),

Northwestern University’s International Institute for Nanotechnology (IIN) is now accepting nominations for two prestigious international prizes: the $250,000 Kabiller Prize in Nanoscience and Nanomedicine and the $10,000 Kabiller Young Investigator Award in Nanoscience and Nanomedicine.

The deadline for nominations is May 15, 2017. Details are available on the IIN website.

“Our goal is to recognize the outstanding accomplishments in nanoscience and nanomedicine that have the potential to benefit all humankind,” said David G. Kabiller, a Northwestern trustee and alumnus. He is a co-founder of AQR Capital Management, a global investment management firm in Greenwich, Connecticut.

The two prizes, awarded every other year, were established in 2015 through a generous gift from Kabiller. Current Northwestern-affiliated researchers are not eligible for nomination until 2018 for the 2019 prizes.

The Kabiller Prize — the largest monetary award in the world for outstanding achievement in the field of nanomedicine — celebrates researchers who have made the most significant contributions to the field of nanotechnology and its application to medicine and biology.

The Kabiller Young Investigator Award recognizes young emerging researchers who have made recent groundbreaking discoveries with the potential to make a lasting impact in nanoscience and nanomedicine.

“The IIN at Northwestern University is a hub of excellence in the field of nanotechnology,” said Kabiller, chair of the IIN executive council and a graduate of Northwestern’s Weinberg College of Arts and Sciences and Kellogg School of Management. “As such, it is the ideal organization from which to launch these awards recognizing outstanding achievements that have the potential to substantially benefit society.”

Nanoparticles for medical use are typically no larger than 100 nanometers — comparable in size to the molecules in the body. At this scale, the essential properties of structures (e.g., color, melting point, conductivity) behave uniquely. Researchers are capitalizing on these unique properties in their quest to realize life-changing advances in the diagnosis, treatment and prevention of disease.

“Nanotechnology is one of the key areas of distinction at Northwestern,” said Chad A. Mirkin, IIN director and George B. Rathmann Professor of Chemistry in Weinberg. “We are very grateful for David’s ongoing support and are honored to be stewards of these prestigious awards.”

An international committee of experts in the field will select the winners of the 2017 Kabiller Prize and the 2017 Kabiller Young Investigator Award and announce them in September.

The recipients will be honored at an awards banquet Sept. 27 in Chicago. They also will be recognized at the 2017 IIN Symposium, which will include talks from prestigious speakers, including 2016 Nobel Laureate in Chemistry Ben Feringa, from the University of Groningen, the Netherlands.

2015 recipient of the Kabiller Prize

The winner of the inaugural Kabiller Prize, in 2015, was Joseph DeSimone, the Chancellor’s Eminent Professor of Chemistry at the University of North Carolina at Chapel Hill and the William R. Kenan Jr. Distinguished Professor of Chemical Engineering at North Carolina State University and of Chemistry at UNC-Chapel Hill.

DeSimone was honored for his invention of particle replication in non-wetting templates (PRINT) technology that enables the fabrication of precisely defined, shape-specific nanoparticles for advances in disease treatment and prevention. Nanoparticles made with PRINT technology are being used to develop new cancer treatments, inhalable therapeutics for treating pulmonary diseases, such as cystic fibrosis and asthma, and next-generation vaccines for malaria, pneumonia and dengue.

2015 recipient of the Kabiller Young Investigator Award

Warren Chan, professor at the Institute of Biomaterials and Biomedical Engineering at the University of Toronto, was the recipient of the inaugural Kabiller Young Investigator Award, also in 2015. Chan and his research group have developed an infectious disease diagnostic device for point-of-care use that can differentiate symptoms.

BTW, Warren Chan, winner of the ‘Young Investigator Award’, and/or his work have been featured here a few times, most recently in a Nov. 1, 2016 posting, which is mostly about another award he won but also includes links to some of his work, including my April 27, 2016 post about the discovery that fewer than 1% of nanoparticle-based drugs reach their destination.

How does ice melt? Layer by layer!

A Dec. 12, 2016 news item on ScienceDaily announces the answer to a problem scientists have been investigating for over a century. But first, here are the questions,

We all know that ice melts at 0°C. However, 150 years ago the famous physicist Michael Faraday discovered that at the surface of ice, well below 0°C, a thin film of liquid-like water is present. This thin film makes ice slippery and is crucial for the motion of glaciers.

Since Faraday’s discovery, the properties of this water-like layer have been a research topic for scientists all over the world, and a source of considerable controversy: at what temperature does the surface become liquid-like? How does the thickness of the layer depend on temperature? Does it grow continuously or stepwise? Experiments to date have generally shown a very thin layer that grows continuously in thickness, up to 45 nm just below the bulk melting point at 0°C. This also illustrates why it has been so challenging to study this layer of liquid-like water on ice: 45 nm is about 1/1000th of the width of a human hair and is not discernible by eye.

Scientists of the Max Planck Institute for Polymer Research (MPI-P), in a collaboration with researchers from the Netherlands, the USA and Japan, have succeeded to study the properties of this quasi-liquid layer on ice at the molecular level using advanced surface-specific spectroscopy and computer simulations. The results are published in the latest edition of the scientific journal Proceedings of the National Academy of Science (PNAS).

Caption: Ice melts layer by layer, as described in the text. Credit: © MPIP

A Dec. 12, 2016 Max Planck Institute for Polymer Research press release (also on EurekAlert), which originated the news item, goes on to answer the questions,

The team of scientists led by Ellen Backus, group leader at MPI-P, investigated how the thin liquid layer forms on ice, how it grows with increasing temperature, and whether it is distinguishable from normal liquid water. These studies required well-defined ice crystal surfaces, so much effort was put into creating ~10 cm large single crystals of ice, which could be cut in such a way that the surface structure was precisely known. To investigate whether the surface was solid or liquid, the team made use of the fact that water molecules in the liquid interact more weakly with each other than water molecules in ice do. Using their interfacial spectroscopy, combined with controlled heating of the ice crystal, the researchers were able to quantify the change in the interaction between water molecules directly at the interface between ice and air.

The experimental results, combined with the simulations, showed that the first molecular layer at the ice surface has already melted at temperatures as low as -38°C (235 K), the lowest temperature the researchers could experimentally investigate. Increasing the temperature to -16°C (257 K), the second layer becomes liquid. Contrary to popular belief, the surface melting of ice is not a continuous process but occurs in a discontinuous, layer-by-layer fashion.

“A further important question for us was, whether one could distinguish between the properties of the quasi-liquid layer and those of normal water” says Mischa Bonn, co-author of the paper and director at the MPI-P. And indeed, the quasi-liquid layer at -4° C (269 K) shows a different spectroscopic response than supercooled water at the same temperature; in the quasi-liquid layer, the water molecules seem to interact more strongly than in liquid water.

The results are not only important for a fundamental understanding of ice, but also for climate science, where much research takes place on catalytic reactions on ice surfaces, for which the understanding of the ice surface structure is crucial.

Here’s a link to and a citation for the paper,

Experimental and theoretical evidence for bilayer-by-bilayer surface melting of crystalline ice by M. Alejandra Sánchez, Tanja Kling, Tatsuya Ishiyama, Marc-Jan van Zadel, Patrick J. Bisson, Markus Mezger, Mara N. Jochum, Jenée D. Cyran, Wilbert J. Smit, Huib J. Bakker, Mary Jane Shultz, Akihiro Morita, Davide Donadio, Yuki Nagata, Mischa Bonn, and Ellen H. G. Backus. Proceedings of the National Academy of Science, 2016 DOI: 10.1073/pnas.1612893114 Published online before print December 12, 2016

This paper appears to be open access.