Tag Archives: University of Basel

Worried your ‘priceless’ art could be ruined? Genomics could be the answer

First, there was the story about art masterpieces turning into soap (my June 22, 2017 posting) and now, it seems that microbes may also constitute a problem. Before getting to the latest research, here are some images the researchers are using to illustrate their work,

Caption: Leonardo da Vinci noted that the fore and hind wings of a dragonfly are out of phase — verified centuries later by slow motion photography. Thaler suggests further study to compare individuals and species with high “flicker fusion frequency” ability. Credit: PXFuel

I’m not sure what that has to do with anything but I do love dragonflies. This next image seems more relevant to the research,

Caption: Photo summary of the various artworks sampled for the study “Characterizing microbial signatures on sculptures and paintings of similar provenance.” Circles indicate swabbed areas on each sample artwork. Credit: JCVI

It turns out the researchers are releasing two pieces of research in the same press release, neither having much to do with the other. They (art conservation research first and, then, research into vision [hence the dragonfly] and da Vinci’s eyes) are both described in a June 18, 2020 J. Craig Venter Institute (JCVI)-Leonardo Da Vinci DNA Project press release (also on EurekAlert),

A new study of the microbial settlers on old paintings, sculptures, and other forms of art charts a potential path for preserving, restoring, and confirming the geographic origin of some of humanity’s greatest treasures.

Genetics scientists with the J. Craig Venter Institute (JCVI), collaborating with the Leonardo da Vinci DNA Project and supported by the Richard Lounsbery Foundation, say identifying and managing communities of microbes on art may offer museums and collectors a new way to stem the deterioration of priceless possessions, and to unmask counterfeits in the $60 billion a year art market.

Manolito G. Torralba, Claire Kuelbs, Kelvin Jens Moncera, and Karen E. Nelson of the JCVI, La Jolla, California, and Rhonda Roby of the Alameda California County Sheriff’s Office Crime Laboratory, used small, dry polyester swabs to gently collect microbes from centuries-old, Renaissance-style art in a private collector’s home in Florence, Italy. Their findings are published in the journal Microbial Ecology.

The genetic detectives caution that additional time and research are needed to formally convict microbes as a culprit in artwork decay but consider their most interesting find to be “oxidase positive” microbes primarily on painted wood and canvas surfaces.

These species can dine on organic and inorganic compounds often found in paints, in glue, and in the cellulose in paper, canvas, and wood. Using oxygen for energy production, they can produce water or hydrogen peroxide, a chemical used in disinfectants and bleaches.

“Such byproducts are likely to influence the presence of mold and the overall rate of deterioration,” the paper says.

“Though prior studies have attempted to characterize the microbial composition associated with artwork decay, our results summarize the first large scale genomics-based study to understand the microbial communities associated with aging artwork.”

The study builds on an earlier one in which the authors compared hairs collected from people in the Washington, D.C. and San Diego, CA areas, finding that microbial signatures and patterns are geographically distinguishable.

In the art world context, studying microbes clinging to the surface of a work of art may help confirm its geographic origin and authenticity or identify counterfeits.

Lead author Manolito G. Torralba notes that, as art’s value continues to climb, preservation is increasingly important to museums and collectors alike, and typically involves mostly the monitoring and adjusting of lighting, heat, and moisture.

Adding genomics science to these efforts offers advantages of “immense potential.”

The study says microbial populations “were easily discernible between the different types of substrates sampled,” with those on stone and marble art more diverse than wood and canvas. This is “likely due to the porous nature of stone and marble harboring additional organisms and potentially moisture and nutrients, along with the likelihood of biofilm formation.”

As well, microbial diversity on paintings is likely lower because few organisms can metabolize the meagre nutrients offered by oil-based paint.
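
For the curious, the kind of diversity comparison being described is usually made with a standard ecological measure such as the Shannon index, computed from taxon abundance tables. Here’s a minimal sketch (the counts and the function are mine, purely for illustration; this is not the authors’ pipeline),

```python
import math

def shannon_diversity(counts):
    """Shannon index H' = -sum(p_i * ln(p_i)) over the observed taxa."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical taxon counts, not data from the paper:
stone_marble = [120, 95, 80, 60, 40, 30, 20, 10, 5]   # many taxa, fairly even
oil_on_canvas = [300, 20, 5, 2]                        # dominated by a few taxa

print(f"stone/marble  H' = {shannon_diversity(stone_marble):.2f}")
print(f"oil on canvas H' = {shannon_diversity(oil_on_canvas):.2f}")
# The porous, nutrient-richer substrate shows the higher diversity score.
```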

“Though our sample size is low, the novelty of our study has provided the art and scientific communities with evidence that microbial signatures are capable of differentiating artwork according to their substrate,” the paper says.

“Future studies would benefit from working with samples whose authorship, ownership, and care are well-documented, although documentation about care of works of art (e.g., whether and how they were cleaned) seems rare before the mid-twentieth century.”

“Of particular interest would be the presence and activity of oil-degrading enzymes. Such approaches will lead to fully understanding which organism(s) are responsible for the rapid decay of artwork while potentially using this information to target these organisms to prevent degradation.”

“Focusing on reducing the abundance of such destructive organisms has great potential in preserving and restoring important pieces of human history.”

Biology in Art

The paper was supported by the US-based Richard Lounsbery Foundation as part of its “biology in art” research theme, which has also included seed funding efforts to obtain and sequence the genome of Leonardo da Vinci.

The Leonardo da Vinci DNA Project involves scientists in France (where Leonardo lived during his final years and was buried), Italy (where his father and other relatives were buried, and descendants of his half-brothers still live), Spain (whose National Library holds 700 pages of his notebooks), and the US (where forensic DNA skills flourish).

The Leonardo project has convened molecular biologists, population geneticists, microbiologists, forensic experts, and physicians working together with other natural scientists and with genealogists, historians, artists, and curators to discover and decode previously inaccessible knowledge and to preserve cultural heritage.  

Related news release: Leonardo da Vinci’s DNA: Experts unite to shine modern light on a Renaissance master http://bit.ly/2FG4jJu

Measuring Leonardo da Vinci’s “quick eye” 500 years later.

Could he have played major-league baseball?

Famous art historians and biographers such as Sir Kenneth Clark and Walter Isaacson have written about Leonardo da Vinci’s “quick eye” because of the way he accurately captured fleeting expressions, wings during bird flight, and patterns in swirling water. But until now no one had tried to put a number on this aspect of Leonardo’s extraordinary visual acuity.

David S. Thaler of the University of Basel, and a guest investigator in the Program for the Human Environment at The Rockefeller University, has now done so, allowing comparison of Leonardo with modern measures. Leonardo fares quite well.

Thaler’s estimate hinges on Leonardo’s observation that the fore and hind wings of a dragonfly are out of phase — not verified until centuries later by slow motion photography (see e.g. https://youtu.be/Lw2dfjYENNE?t=44).

To quote Isaacson’s translation of Leonardo’s notebook: “The dragonfly flies with four wings, and when those in front are raised those behind are lowered.”

Thaler challenged himself and friends to try seeing if that’s true, but they all saw only blurs.

High-speed camera studies by others show the fore and hind wingbeats of dragonflies vary by 20 to 10 milliseconds — one fiftieth to one hundredth of a second — beyond average human perception.

Thaler notes that “flicker fusion frequency” (FFF) — akin to a motion picture’s frames per second — is used to quantify and measure “temporal acuity” in human vision.

When frames per second exceed the number of frames the viewer can perceive individually, the brain constructs the illusion of continuous movement. The average person’s FFF is between 20 and 40 frames per second; current motion pictures present 48 or 72 frames per second.

To accurately see the angle between dragonfly wings would require temporal acuity in the range of 50 to 100 frames per second.
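
The arithmetic behind that range is simply the reciprocal of the wingbeat offset: resolving events separated by 10 to 20 milliseconds needs roughly 50 to 100 “frames” per second. A quick sketch (the numbers come from the press release; the code is mine),

```python
def required_fff(interval_seconds):
    """Frames per second needed to resolve events separated by a given interval."""
    return 1.0 / interval_seconds

# Fore/hind wingbeat offsets of 10-20 ms, per the high-speed camera studies cited above
for offset_ms in (20, 10):
    print(f"{offset_ms} ms offset -> ~{required_fff(offset_ms / 1000):.0f} frames per second")
# 20 ms -> 50 fps and 10 ms -> 100 fps, i.e. the 50-100 fps range quoted above.
```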

Thaler believes genetics will account for variations in FFF among different species, which range from a low of 12 in some nocturnal insects to over 300 in Fire Beetles. We simply do not know what accounts for human variation. Training and genetics may both play important roles.

“Perhaps the clearest contemporary case for a fast flicker fusion frequency in humans is in American baseball, because it is said that elite batters can see the seams on a pitched baseball,” even when rotating 30 to 50 times per second with two or four seams facing the batter. A batter would need Leonardo-esque FFF to spot the seams on most inbound baseballs.  

Thaler suggests further study to compare the genome of individuals and species with unusually high FFF, including, if possible, Leonardo’s DNA.  

Flicker fusion for focus, attention, and affection   

In a companion paper, Thaler describes how Leonardo used psychophysics that would only be understood centuries later — and about which a lot remains to be learned today — to communicate deep beauty and emotion. 

Leonardo was master of a technique known as sfumato (the word derived from the Italian sfumare, “to tone down” or “to evaporate like smoke”), which describes a subtle blur of edges and blending of colors without sharp focus or distinct lines.

Leonardo expert Martin Kemp has noted that Leonardo’s sfumato sometimes involves a distance dependence which is akin to the focal plane of a camera. Yet, at other times, features at the same distance have selective sfumato, so a simple plane of focus is not the whole answer.

Thaler suggests that Leonardo achieved selective soft focus in portraits by painting in overcast or evening light, where the eyes’ pupils enlarge to let in more light but have a narrow plane of sharp focus. 

To quote Leonardo’s notebook, under the heading “Selecting the light which gives most grace to faces”: “In the evening and when the weather is dull, what softness and delicacy you may perceive in the faces of men and women.”  In dim light pupils enlarge to let in more light but their depth of field decreases.  

By measuring the size of the portrait’s pupils, Thaler inferred Leonardo’s depth of focus. He says Leonardo likely sensed this effect, perhaps unconsciously in the realm of his artistic sensibility. The pupil / aperture effect on depth of focus wasn’t explained until the mid-1800s, centuries after Leonardo’s birth in Vinci, Italy in 1452.
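
For readers who want the optics, the effect can be sketched with the usual thin-lens depth-of-field approximation, in which depth of field shrinks roughly in proportion to pupil diameter. The numbers below are illustrative guesses of my own, not Thaler’s measurements,

```python
def depth_of_field_mm(focal_mm, pupil_mm, subject_mm, blur_circle_mm):
    """Thin-lens approximation: DoF ~ 2 * N * c * u^2 / f^2, with N = f / pupil diameter."""
    n = focal_mm / pupil_mm
    return 2 * n * blur_circle_mm * subject_mm ** 2 / focal_mm ** 2

# Assumed values: ~17 mm effective focal length of the eye, sitter ~1 m away,
# and a nominal blur tolerance of 0.005 mm on the retina.
for pupil_mm in (2.0, 7.0):   # bright daylight vs. dim evening light
    dof = depth_of_field_mm(17, pupil_mm, 1000, 0.005)
    print(f"pupil {pupil_mm} mm -> depth of field roughly {dof:.0f} mm")
# The wide-open evening pupil gives a much shallower zone of sharp focus.
```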

What about selective focus at equal distance? In this case Leonardo may have taken advantage of the fovea, the small area on the back of the eye where detail is sharpest.

Most of us move our eyes around and, because of our slower flicker fusion frequency, construct a single 3D image of the world by jamming together many partially in-focus images. Leonardo recognized and “froze” the separate snapshots from which we construct ordinary perception.

Says Thaler: “We study Leonardo not only to learn about him but to learn about ourselves and further human potential.”

Thaler’s papers (at https://bit.ly/2WZ2cwo and https://bit.ly/2ZBj7Hi) evolved from talks at meetings of the Leonardo da Vinci DNA Project in Italy (2018), Spain and France (2019).

They form part of a collection of papers presented at a recent colloquium in Amboise, France, now being readied for publication in a book: Actes du Colloque International d’Amboise: Leonardo de Vinci, Anatomiste. Pionnier de l’Anatomie comparée, de la Biomécanique, de la Bionique et de la Physiognomonie, edited by Henry de Lumley, President, Institute of Human Paleontology, Paris. Originally planned for release in late spring 2020, the book’s publication was delayed by the global virus pandemic, but it should be available from CNRS Editions in the second half of the summer.

Other papers in the collection cover a range of topics, including how Leonardo used his knowledge of anatomy, gained by performing autopsies on dozens of cadavers, to achieve Mona Lisa’s enigmatic smile.

Leonardo also used it to exact revenge on academics and scientists who ridiculed him for lacking a classical education, sketching them with absurdly deformed faces to resemble birds, dogs, or goats. 

De Lumley earlier co-authored a 72-page monograph for the Leonardo DNA Project: “Leonardo da Vinci: Pioneer of comparative anatomy, biomechanics and physiognomy.”

Here’s a link to and a citation for the paper featuring microbes and art masterpieces,

Characterizing Microbial Signatures on Sculptures and Paintings of Similar Provenance by Manolito G. Torralba, Claire Kuelbs, Kelvin Jens Moncera, Rhonda Roby & Karen E. Nelson. Microbial Ecology (2020) DOI: https://doi.org/10.1007/s00248-020-01504-x Published: 21 May 2020

This paper is open access.

Nanocar Race winners!

In fact, there was a tie although it seems the Swiss winners were a little more excited. A May 1, 2017 news item on swissinfo.ch provides fascinating detail,

“Swiss Nano Dragster”, driven by scientists from Basel, has won the first international car race involving molecular machines. The race involved four nano cars zipping round a pure gold racetrack measuring 100 nanometres – or one ten-thousandth of a millimetre.

The two Swiss pilots, Rémy Pawlak and Tobias Meier from the Swiss Nanoscience Institute and the Department of Physics at the University of Basel, had to reach the chequered flag – negotiating two curves en route – within 38 hours. [emphasis mine*]

The winning drivers, who actually shared first place with a US-Austrian team, were not sitting behind a steering wheel but in front of a computer. They used this to propel their single-molecule vehicle with a small electric shock from a scanning tunnelling microscope.

During such a race, a tunnelling current flows between the tip of the microscope and the molecule, with the size of the current depending on the distance between molecule and tip. If the current is high enough, the molecule starts to move and can be steered over the racetrack, a bit like a hovercraft.
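
What makes the tunnelling current such a sensitive “steering wheel” is its exponential dependence on the tip–molecule gap. Here’s a back-of-the-envelope sketch, with a generic decay constant I’ve assumed rather than anything measured at the race,

```python
import math

def tunnel_current(gap_nm, i0=1.0, kappa_per_nm=10.0):
    """Tunnelling current falls off roughly as I = I0 * exp(-2 * kappa * d);
    kappa ~ 10 per nm is a typical order of magnitude for a vacuum gap (assumed)."""
    return i0 * math.exp(-2 * kappa_per_nm * gap_nm)

for gap in (0.5, 0.6, 0.7):   # nanometres
    print(f"gap {gap} nm -> relative current {tunnel_current(gap):.2e}")
# Each extra 0.1 nm of gap cuts the current by a factor of e^2 (about 7), which is
# why tiny changes in distance give such fine control over the molecule.
```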

….

The race track was maintained at a very low temperature (-268 degrees Celsius) so that the molecules didn’t move without the current.

What’s more, any nudging of the molecule by the microscope tip would have led to disqualification.

Miniature motors

The race, held in Toulouse, France, and organised by the National Centre for Scientific Research (CNRS), was originally going to be held in October 2016, but problems with some cars resulted in a slight delay. In the end, organisers selected four of nine applicants since there were only four racetracks.

The cars measured between one and three nanometres – about 30,000 times smaller than a human hair. The Swiss Nano Dragster is, in technical language, a 4′-(4-Tolyl)-2,2′:6′,2”-terpyridine molecule.

The Swiss and US-Austrian teams outraced rivals from the US and Germany.

The race is not just a bit of fun for scientists. The researchers hope to gain insights into how molecules move.

I believe this Basel University .gif is from the race,

*Emphasis added on May 9, 2017 at 12:26 pm PT. See my May 9, 2017 posting: Nanocar Race winners: The US-Austrian team for the other half of this story.

Emerging technology and the law

I have three news bits about legal issues that are arising as a consequence of emerging technologies.

Deep neural networks, art, and copyright

Caption: The rise of automated art opens new creative avenues, coupled with new problems for copyright protection. Credit: Provided by: Alexander Mordvintsev, Christopher Olah and Mike Tyka

Presumably this artwork is a demonstration of automated art although they never really do explain how in the news item/news release. An April 26, 2017 news item on ScienceDaily announces research into copyright and the latest in using neural networks to create art,

In 1968, sociologist Jean Baudrillard wrote on automatism that “contained within it is the dream of a dominated world […] that serves an inert and dreamy humanity.”

With the growing popularity of Deep Neural Networks (DNN’s), this dream is fast becoming a reality.

Dr. Jean-Marc Deltorn, researcher at the Centre d’études internationales de la propriété intellectuelle in Strasbourg, argues that we must remain a responsive and responsible force in this process of automation — not inert dominators. As he demonstrates in a recent Frontiers in Digital Humanities paper, the dream of automation demands a careful study of the legal problems linked to copyright.

An April 26, 2017 Frontiers (publishing) news release on EurekAlert, which originated the news item, describes the research in more detail,

For more than half a century, artists have looked to computational processes as a way of expanding their vision. DNN’s are the culmination of this cross-pollination: by learning to identify complex patterns, they can generate new creations.

These systems are made up of complex algorithms modeled on the transmission of signals between neurons in the brain.

DNN creations rely in equal measure on human inputs and the non-human algorithmic networks that process them.

Inputs are fed into the system, which is layered. Each layer provides an opportunity for a more refined knowledge of the inputs (shape, color, lines). Neural networks compare actual outputs to expected ones, and correct the predictive error through repetition and optimization. They train their own pattern recognition, thereby optimizing their learning curve and producing increasingly accurate outputs.

The deeper the layers are, the higher the level of abstraction. The highest layers are able to identify the contents of a given input with reasonable accuracy, after extended periods of training.
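
The loop described in the last two paragraphs — feed inputs through layers, compare actual outputs with expected ones, correct the predictive error, repeat — can be written down in a few lines. This is a generic toy network of my own for illustration, not the DeepDream-style systems the paper discusses,

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(64, 3))             # toy inputs (say, colour features)
y = (x.sum(axis=1, keepdims=True) > 0)   # toy target pattern to be learned

w1 = rng.normal(size=(3, 8)) * 0.1       # first layer: a more refined view of the inputs
w2 = rng.normal(size=(8, 1)) * 0.1       # output layer
for step in range(500):
    h = np.tanh(x @ w1)                  # hidden representation
    pred = 1 / (1 + np.exp(-(h @ w2)))   # predicted output
    err = pred - y                       # compare actual output with expected output
    grad_w2 = h.T @ err                  # backpropagate the predictive error...
    grad_w1 = x.T @ ((err @ w2.T) * (1 - h ** 2))
    w2 -= 0.01 * grad_w2                 # ...and correct it through repetition
    w1 -= 0.01 * grad_w1                 # and optimization
print("final mean error:", float(np.abs(err).mean()))
```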

Creation thus becomes increasingly automated through what Deltorn calls “the arcane traceries of deep architecture”. The results are sufficiently abstracted from their sources to produce original creations that have been exhibited in galleries, sold at auction and performed at concerts.

The originality of DNN’s is a combined product of technological automation on one hand, human inputs and decisions on the other.

DNN’s are gaining popularity. Various platforms (such as DeepDream) now allow internet users to generate their very own new creations. This popularization of the automation process calls for a comprehensive legal framework that ensures a creator’s economic and moral rights with regard to his work – copyright protection.

Form, originality and attribution are the three requirements for copyright. And while DNN creations satisfy the first of these three, the claim to originality and attribution will depend largely on a given country’s legislation and on the traceability of the human creator.

Legislation usually sets a low threshold to originality. As DNN creations could in theory be able to create an endless number of riffs on source materials, the uncurbed creation of original works could inflate the existing number of copyright protections.

Additionally, a small number of national copyright laws confers attribution to what UK legislation defines loosely as “the person by whom the arrangements necessary for the creation of the work are undertaken.” In the case of DNN’s, this could mean anybody from the programmer to the user of a DNN interface.

Combined with an overly supple take on originality, this view on attribution would further increase the number of copyrightable works.

The risk, in both cases, is that artists will be less willing to publish their own works, for fear of infringement of DNN copyright protections.

In order to promote creativity – one seminal aim of copyright protection – the issue must be limited to creations that manifest a personal voice “and not just the electric glint of a computational engine,” to quote Deltorn. A delicate act of discernment.

DNN’s promise new avenues of creative expression for artists – with potential caveats. Copyright protection – a “catalyst to creativity” – must be contained. Many of us gently bask in the glow of an increasingly automated form of technology. But if we want to safeguard the ineffable quality that defines much art, it might be a good idea to hone in more closely on the differences between the electric and the creative spark.

This research is, and will be, part of a broader Frontiers Research Topic collection of articles on Deep Learning and Digital Humanities.

Here’s a link to and a citation for the paper,

Deep Creations: Intellectual Property and the Automata by Jean-Marc Deltorn. Front. Digit. Humanit., 01 February 2017 | https://doi.org/10.3389/fdigh.2017.00003

This paper is open access.

Conference on governance of emerging technologies

I received an April 17, 2017 notice via email about this upcoming conference. Here’s more from the Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics webpage,

The Fifth Annual Conference on Governance of Emerging Technologies:

Law, Policy and Ethics held at the new

Beus Center for Law & Society in Phoenix, AZ

May 17-19, 2017!

Call for Abstracts – Now Closed

The conference will consist of plenary and session presentations and discussions on regulatory, governance, legal, policy, social and ethical aspects of emerging technologies, including (but not limited to) nanotechnology, synthetic biology, gene editing, biotechnology, genomics, personalized medicine, human enhancement technologies, telecommunications, information technologies, surveillance technologies, geoengineering, neuroscience, artificial intelligence, and robotics. The conference is premised on the belief that there is much to be learned and shared from and across the governance experience and proposals for these various emerging technologies.

Keynote Speakers:

Gillian Hadfield, Richard L. and Antoinette Schamoi Kirtland Professor of Law and Professor of Economics, USC [University of Southern California] Gould School of Law

Shobita Parthasarathy, Associate Professor of Public Policy and Women’s Studies, Director, Science, Technology, and Public Policy Program University of Michigan

Stuart Russell, Professor at [University of California] Berkeley, is a computer scientist known for his contributions to artificial intelligence

Craig Shank, Vice President for Corporate Standards Group in Microsoft’s Corporate, External and Legal Affairs (CELA)

Plenary Panels:

Innovation – Responsible and/or Permissionless

Ellen-Marie Forsberg, Senior Researcher/Research Manager at Oslo and Akershus University College of Applied Sciences

Adam Thierer, Senior Research Fellow with the Technology Policy Program at the Mercatus Center at George Mason University

Wendell Wallach, Consultant, ethicist, and scholar at Yale University’s Interdisciplinary Center for Bioethics

 Gene Drives, Trade and International Regulations

Greg Kaebnick, Director, Editorial Department; Editor, Hastings Center Report; Research Scholar, Hastings Center

Jennifer Kuzma, Goodnight-North Carolina GlaxoSmithKline Foundation Distinguished Professor in Social Sciences in the School of Public and International Affairs (SPIA) and co-director of the Genetic Engineering and Society (GES) Center at North Carolina State University

Andrew Maynard, Senior Sustainability Scholar, Julie Ann Wrigley Global Institute of Sustainability Director, Risk Innovation Lab, School for the Future of Innovation in Society Professor, School for the Future of Innovation in Society, Arizona State University

Gary Marchant, Regents’ Professor of Law, Professor of Law Faculty Director and Faculty Fellow, Center for Law, Science & Innovation, Arizona State University

Marc Saner, Inaugural Director of the Institute for Science, Society and Policy, and Associate Professor, University of Ottawa Department of Geography

Big Data

Anupam Chander, Martin Luther King, Jr. Professor of Law and Director, California International Law Center, UC Davis School of Law

Pilar Ossorio, Professor of Law and Bioethics, University of Wisconsin, School of Law and School of Medicine and Public Health; Morgridge Institute for Research, Ethics Scholar-in-Residence

George Poste, Chief Scientist, Complex Adaptive Systems Initiative (CASI) (http://www.casi.asu.edu/), Regents’ Professor and Del E. Webb Chair in Health Innovation, Arizona State University

Emily Shuckburgh, climate scientist and deputy head of the Polar Oceans Team at the British Antarctic Survey, University of Cambridge

 Responsible Development of AI

Spring Berman, Ira A. Fulton Schools of Engineering, Arizona State University

John Havens, The IEEE [Institute of Electrical and Electronics Engineers] Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems

Subbarao Kambhampati, Senior Sustainability Scientist, Julie Ann Wrigley Global Institute of Sustainability, Professor, School of Computing, Informatics and Decision Systems Engineering, Ira A. Fulton Schools of Engineering, Arizona State University

Wendell Wallach, Consultant, Ethicist, and Scholar at Yale University’s Interdisciplinary Center for Bioethics

Existential and Catastrophic Ricks [sic]

Tony Barrett, Co-Founder and Director of Research of the Global Catastrophic Risk Institute

Haydn Belfield,  Academic Project Administrator, Centre for the Study of Existential Risk at the University of Cambridge

Margaret E. Kosal, Associate Director, Sam Nunn School of International Affairs, Georgia Institute of Technology

Catherine Rhodes, Academic Project Manager, Centre for the Study of Existential Risk (CSER), University of Cambridge

These were the panels that are of interest to me; there are others on the homepage.

Here’s some information from the Conference registration webpage,

Early Bird Registration – $50 off until May 1! Enter discount code: earlybirdGETs50

New: Group Discount – Register 2+ attendees together and receive an additional 20% off for all group members!

Click Here to Register!

Conference registration fees are as follows:

  • General (non-CLE) Registration: $150.00
  • CLE Registration: $350.00
  • *Current Student / ASU Law Alumni Registration: $50.00
  • ^Cybersecurity sessions only (May 19): $100 CLE / $50 General / Free for students (registration info coming soon)

There you have it.

Neuro-techno future laws

I’m pretty sure this isn’t the first exploration of potential legal issues arising from research into neuroscience although it’s the first one I’ve stumbled across. From an April 25, 2017 news item on phys.org,

New human rights laws to prepare for advances in neurotechnology that put the ‘freedom of the mind’ at risk have been proposed today in the open access journal Life Sciences, Society and Policy.

The authors of the study suggest four new human rights laws could emerge in the near future to protect against exploitation and loss of privacy. The four laws are: the right to cognitive liberty, the right to mental privacy, the right to mental integrity and the right to psychological continuity.

An April 25, 2017 Biomed Central news release on EurekAlert, which originated the news item, describes the work in more detail,

Marcello Ienca, lead author and PhD student at the Institute for Biomedical Ethics at the University of Basel, said: “The mind is considered to be the last refuge of personal freedom and self-determination, but advances in neural engineering, brain imaging and neurotechnology put the freedom of the mind at risk. Our proposed laws would give people the right to refuse coercive and invasive neurotechnology, protect the privacy of data collected by neurotechnology, and protect the physical and psychological aspects of the mind from damage by the misuse of neurotechnology.”

Advances in neurotechnology, such as sophisticated brain imaging and the development of brain-computer interfaces, have led to these technologies moving away from a clinical setting and into the consumer domain. While these advances may be beneficial for individuals and society, there is a risk that the technology could be misused and create unprecedented threats to personal freedom.

Professor Roberto Andorno, co-author of the research, explained: “Brain imaging technology has already reached a point where there is discussion over its legitimacy in criminal court, for example as a tool for assessing criminal responsibility or even the risk of reoffending. Consumer companies are using brain imaging for ‘neuromarketing’, to understand consumer behaviour and elicit desired responses from customers. There are also tools such as ‘brain decoders’ which can turn brain imaging data into images, text or sound. All of these could pose a threat to personal freedom which we sought to address with the development of four new human rights laws.”

The authors explain that as neurotechnology improves and becomes commonplace, there is a risk that the technology could be hacked, allowing a third-party to ‘eavesdrop’ on someone’s mind. In the future, a brain-computer interface used to control consumer technology could put the user at risk of physical and psychological damage caused by a third-party attack on the technology. There are also ethical and legal concerns over the protection of data generated by these devices that need to be considered.

International human rights laws make no specific mention of neuroscience, although advances in biomedicine have become intertwined with laws, such as those concerning human genetic data. The authors state that, much like the historical trajectory of the genetic revolution, the ongoing neurorevolution will force a reconceptualization of human rights laws and even the creation of new ones.

Marcello Ienca added: “Science-fiction can teach us a lot about the potential threat of technology. Neurotechnology featured in famous stories has in some cases already become a reality, while others are inching ever closer, or exist as military and commercial prototypes. We need to be prepared to deal with the impact these technologies will have on our personal freedom.”

Here’s a link to and a citation for the paper,

Towards new human rights in the age of neuroscience and neurotechnology by Marcello Ienca and Roberto Andorno. Life Sciences, Society and Policy (2017) 13:5 DOI: 10.1186/s40504-017-0050-1 Published: 26 April 2017

©  The Author(s). 2017

This paper is open access.

Atomic force microscope with nanowire sensors

Measuring the size and direction of forces may become reality with a nanotechnology-enabled atomic force microscope designed by Swiss scientists, according to an Oct. 17, 2016 news item on phys.org,

A new type of atomic force microscope (AFM) uses nanowires as tiny sensors. Unlike standard AFM, the device with a nanowire sensor enables measurements of both the size and direction of forces. Physicists at the University of Basel and at the EPF Lausanne have described these results in the recent issue of Nature Nanotechnology.

A nanowire sensor measures size and direction of forces (Image: University of Basel, Department of Physics)

An Oct. 17, 2016 University of Basel press release (also on EurekAlert), which originated the news item, expands on the theme,

Nanowires are extremely tiny filamentary crystals which are built-up molecule by molecule from various materials and which are now being very actively studied by scientists all around the world because of their exceptional properties.

The wires normally have a diameter of 100 nanometers and are therefore only about one-thousandth the thickness of a hair. Because of this tiny dimension, they have a very large surface in comparison to their volume. This fact, together with their small mass and flawless crystal lattice, makes them very attractive in a variety of nanometer-scale sensing applications, including as sensors of biological and chemical samples, and as pressure or charge sensors.

Measurement of direction and size

The team of Argovia Professor Martino Poggio from the Swiss Nanoscience Institute (SNI) and the Department of Physics at the University of Basel has now demonstrated that nanowires can also be used as force sensors in atomic force microscopes. Based on their special mechanical properties, nanowires vibrate along two perpendicular axes at nearly the same frequency. When they are integrated into an AFM, the researchers can measure changes in the perpendicular vibrations caused by different forces. Essentially, they use the nanowires like tiny mechanical compasses that point out both the direction and size of the surrounding forces.
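
In very simplified form, the “compass” works because each of the two orthogonal vibration modes shifts in frequency according to the force gradient along its own axis, so two frequency measurements give a two-dimensional readout. The formula and numbers below are a rough illustration of that idea, with assumed values, not the Basel group’s actual analysis,

```python
import math

def mode_shift_hz(f0_hz, stiffness_n_per_m, force_gradient_n_per_m):
    """Small-shift approximation: delta_f ~ -(f0 / (2*k)) * dF/dx along one mode axis."""
    return -f0_hz * force_gradient_n_per_m / (2 * stiffness_n_per_m)

f0, k = 500e3, 5e-3            # assumed nanowire: ~500 kHz modes, ~5 mN/m stiffness
grad_x, grad_y = 2e-6, -1e-6   # hypothetical in-plane force gradients (N/m)

shift_x = mode_shift_hz(f0, k, grad_x)
shift_y = mode_shift_hz(f0, k, grad_y)
magnitude = math.hypot(grad_x, grad_y)
direction = math.degrees(math.atan2(grad_y, grad_x))
print(f"mode frequency shifts: {shift_x:.0f} Hz and {shift_y:.0f} Hz")
print(f"force-gradient vector: {magnitude:.1e} N/m pointing at {direction:.0f} degrees")
```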

Image of the two-dimensional force field

The scientists from Basel describe how they imaged a patterned sample surface using a nanowire sensor. Together with colleagues from the EPF Lausanne, who grew the nanowires, they mapped the two-dimensional force field above the sample surface using their nanowire “compass”. As a proof-of-principle, they also mapped out test force fields produced by tiny electrodes.

The most challenging technical aspect of the experiments was the realization of an apparatus that could simultaneously scan a nanowire above a surface and monitor its vibration along two perpendicular directions. With their study, the scientists have demonstrated a new type of AFM that could extend the technique’s numerous applications even further.

AFM – today widely used

The development of AFM 30 years ago was honored with the conferment of the Kavli Prize [2016 Kavli Prize in Nanoscience] at the beginning of September this year. Professor Christoph Gerber of the SNI and the Department of Physics at the University of Basel is one of the awardees; he has contributed substantially to the wide use of AFM in different fields, including solid-state physics, materials science, biology, and medicine.

The various types of AFM are most often carried out using cantilevers made from crystalline Si as the mechanical sensor. “Moving to much smaller nanowire sensors may now allow for even further improvements on an already amazingly successful technique,” Martino Poggio comments on his approach.

I featured an interview article with Christoph Gerber and Gerd Binnig about their shared Kavli prize and about inventing the AFM in a Sept. 20, 2016 posting.

As for the latest innovation, here’s a link to and a citation for the paper,

Vectorial scanning force microscopy using a nanowire sensor by Nicola Rossi, Floris R. Braakman, Davide Cadeddu, Denis Vasyukov, Gözde Tütüncüoglu, Anna Fontcuberta i Morral, & Martino Poggio. Nature Nanotechnology (2016) doi:10.1038/nnano.2016.189 Published online 17 October 2016

This paper is behind a paywall.

Better contrast agents for magnetic resonance imaging with nanoparticles

I wonder what’s going on in the field of magnetic resonance imaging. This is the third news item I’ve stumbled across related to the topic in the last couple of months. (Links to the other two posts follow at the end of this post.) By comparison, that’s more than in the previous seven years (2008–2015) combined.

The latest research concerns a new and better contrast agent. From an Aug. 3, 2016 news item on Nanowerk,

Scientists at the University of Basel [Switzerland] have developed nanoparticles which can serve as efficient contrast agents for magnetic resonance imaging. This new type of nanoparticles [sic] produce around ten times more contrast than the actual contrast agents and are responsive to specific environments.

An Aug. 3, 2016 University of Basel press release (also on EurekAlert), which originated the news item, explains further,

Contrast agents are usually based on the metal Gadolinium, which is injected and serves for an improved imaging of various organs in an MRI. Gadolinium ions should be bound with a carrier compound to avoid the toxicity to the human body of the free ions. Therefore, highly efficient contrast agents requiring lower Gadolinium concentrations represent an important step for advancing diagnosis and improving patient health prognosis.

Smart nanoparticles as contrast agents

The research groups of Prof. Cornelia Palivan and Prof. Wolfgang Meier from the Department of Chemistry at the University of Basel have introduced a new type of nanoparticles [sic], which combine multiple properties required for contrast agents: an increased MRI contrast for lower concentration, a potential for long blood circulation and responsiveness to different biochemical environments. These nanoparticles were obtained by co-assembly of heparin-functionalized polymers with trapped gadolinium ions and stimuli-responsive peptides.

The study shows that the nanoparticles have the capacity to enhance the MRI signal tenfold compared with current agents. In addition, they have enhanced efficacy in a reductive milieu, characteristic of specific regions such as cancerous tissues. These nanoparticles fulfill numerous key criteria for further development, such as absence of cellular toxicity, no apparent anticoagulation property, and high shelf stability. The concept developed by the researchers at the University of Basel to produce better contrast agents based on nanoparticles highlights a new direction in the design of MRI contrast agents, and supports their implementation in future applications.
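
Contrast-agent performance is commonly expressed as relaxivity, the extra relaxation rate per unit of gadolinium concentration (1/T1 = 1/T1,0 + r1·[Gd]). Here’s a hedged sketch of why roughly ten times the relaxivity means roughly a tenth of the gadolinium for the same effect; the numbers are illustrative, not taken from the paper,

```python
def relaxation_rate(r1_per_mM_s, conc_mM, baseline_rate_per_s=1.0):
    """Longitudinal relaxation rate: 1/T1 = 1/T1_0 + r1 * [Gd]."""
    return baseline_rate_per_s + r1_per_mM_s * conc_mM

conventional_r1 = 4.0    # roughly typical for small Gd chelates (assumption)
nanoparticle_r1 = 40.0   # "around ten times more contrast", per the press release

target_rate = relaxation_rate(conventional_r1, 0.5)      # 0.5 mM of a conventional agent
needed_conc = (target_rate - 1.0) / nanoparticle_r1      # concentration giving the same rate
print(f"the same relaxation rate needs only ~{needed_conc:.2f} mM of the nanoparticle agent")
# Tenfold relaxivity -> roughly a tenth of the gadolinium dose for equal contrast.
```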

Here’s a link to and a citation for the paper,

Nanoparticle-based highly sensitive MRI contrast agents with enhanced relaxivity in reductive milieu by Severin J. Sigg, Francesco Santini, Adrian Najer, Pascal U. Richard, Wolfgang P. Meier, and Cornelia G. Palivan. Chem. Commun., 2016, 52, 9937-9940 DOI: 10.1039/C6CC03396B First published online 13 Jul 2016

This paper is behind a paywall.

The other two MRI items featured here are in a June 10, 2016 posting (pH dependent nanoparticle-based contrast agent for MRIs [magnetic resonance images]) and in an Aug. 1, 2016 posting (Nuclear magnetic resonance microscope breaks records).

Measuring the van der Waals forces between individual atoms for the first time

A May 13, 2016 news item on Nanowerk heralds the first time measuring the van der Waals forces between individual atoms,

Physicists at the Swiss Nanoscience Institute and the University of Basel have succeeded in measuring the very weak van der Waals forces between individual atoms for the first time. To do this, they fixed individual noble gas atoms within a molecular network and determined the interactions with a single xenon atom that they had positioned at the tip of an atomic force microscope. As expected, the forces varied according to the distance between the two atoms; but, in some cases, the forces were several times larger than theoretically calculated.

A May 13, 2016 University of Basel press release (also on EurekAlert), which originated the news item, provides an explanation of van der Waals forces (the most comprehensive I’ve seen) and technical details about how the research was conducted,

Van der Waals forces act between non-polar atoms and molecules. Although they are very weak in comparison to chemical bonds, they are hugely significant in nature. They play an important role in all processes relating to cohesion, adhesion, friction or condensation and are, for example, essential for a gecko’s climbing skills.

Van der Waals interactions arise due to a temporary redistribution of electrons in the atoms and molecules. This results in the occasional formation of dipoles, which in turn induce a redistribution of electrons in closely neighboring molecules. Due to the formation of dipoles, the two molecules experience a mutual attraction, which is referred to as a van der Waals interaction. This only exists temporarily but is repeatedly re-formed. The individual forces are the weakest binding forces that exist in nature, but they add up to reach magnitudes that we can perceive very clearly on the macroscopic scale – as in the example of the gecko.

Fixed within the nano-beaker

To measure the van der Waals forces, scientists in Basel used a low-temperature atomic force microscope with a single xenon atom on the tip. They then fixed the individual argon, krypton and xenon atoms in a molecular network. This network, which is self-organizing under certain experimental conditions, contains so-called nano-beakers of copper atoms in which the noble gas atoms are held in place like a bird egg. Only with this experimental set-up is it possible to measure the tiny forces between microscope tip and noble gas atom, as a pure metal surface would allow the noble gas atoms to slide around.

Compared with theory

The researchers compared the measured forces with calculated values and displayed them graphically. As expected from the theoretical calculations, the measured forces fell dramatically as the distance between the atoms increased. While there was good agreement between measured and calculated curve shapes for all of the noble gases analyzed, the absolute measured forces were larger than had been expected from calculations according to the standard model. Above all for xenon, the measured forces were larger than the calculated values by a factor of up to two.
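
To get a feel for what “calculated values” look like here, the textbook starting point is a Lennard-Jones form of the van der Waals interaction, whose attraction falls off as 1/r⁶ and whose force is the negative derivative of the potential. The xenon-like parameters below are generic values I’ve assumed, not those used in the paper,

```python
def lj_force_pN(r_nm, epsilon_meV=24.0, sigma_nm=0.40):
    """Force from the Lennard-Jones potential V = 4*eps*((s/r)^12 - (s/r)^6):
    F = -dV/dr = (24*eps/r) * (2*(s/r)^12 - (s/r)^6), returned in piconewtons."""
    eps_joule = epsilon_meV * 1.602e-22    # meV -> J
    r_m, s_m = r_nm * 1e-9, sigma_nm * 1e-9
    sr6 = (s_m / r_m) ** 6
    return 24 * eps_joule / r_m * (2 * sr6 ** 2 - sr6) * 1e12

for r in (0.40, 0.45, 0.50, 0.60):   # nanometres
    print(f"r = {r:.2f} nm -> F = {lj_force_pN(r):+7.1f} pN")
# The attraction weakens rapidly with distance, as in the measured curves; the Basel
# result is that measured forces exceed such estimates by up to a factor of two for xenon.
```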

The scientists are working on the assumption that, even in the noble gases, charge transfer occurs and therefore weak covalent bonds are occasionally formed, which would explain the higher values.

Here’s a link to and a citation for the paper,

Van der Waals interactions and the limits of isolated atom models at interfaces by Shigeki Kawai, Adam S. Foster, Torbjörn Björkman, Sylwia Nowakowska, Jonas Björk, Filippo Federici Canova, Lutz H. Gade, Thomas A. Jung, & Ernst Meyer. Nature Communications 7, Article number: 11559  doi:10.1038/ncomms11559 Published 13 May 2016

This is an open access paper.

An atom without properties?

There’s rather intriguing Swiss research into atoms and so-called Bell Correlations according to an April 21, 2016 news item on ScienceDaily,

The microscopic world is governed by the rules of quantum mechanics, where the properties of a particle can be completely undetermined and yet strongly correlated with those of other particles. Physicists from the University of Basel have observed these so-called Bell correlations for the first time between hundreds of atoms. Their findings are published in the scientific journal Science.

Everyday objects possess properties independently of each other and regardless of whether we observe them or not. Einstein famously asked whether the moon still exists if no one is there to look at it; we answer with a resounding yes. This apparent certainty does not exist in the realm of small particles. The location, speed or magnetic moment of an atom can be entirely indeterminate and yet still depend greatly on the measurements of other distant atoms.

An April 21, 2016 University of Basel (Switzerland) press release (also on EurekAlert), which originated the news item, provides further explanation,

With the (false) assumption that atoms possess their properties independently of measurements and independently of each other, a so-called Bell inequality can be derived. If it is violated by the results of an experiment, it follows that the properties of the atoms must be interdependent. This is described as Bell correlations between atoms, which also imply that each atom takes on its properties only at the moment of the measurement. Before the measurement, these properties are not only unknown – they do not even exist.

A team of researchers led by professors Nicolas Sangouard and Philipp Treutlein from the University of Basel, along with colleagues from Singapore, have now observed these Bell correlations for the first time in a relatively large system, specifically among 480 atoms in a Bose-Einstein condensate. Earlier experiments showed Bell correlations with a maximum of four light particles or 14 atoms. The results mean that these peculiar quantum effects may also play a role in larger systems.

Large number of interacting particles

In order to observe Bell correlations in systems consisting of many particles, the researchers first had to develop a new method that does not require measuring each particle individually – which would require a level of control beyond what is currently possible. The team succeeded in this task with the help of a Bell inequality that was only recently discovered. The Basel researchers tested their method in the lab with small clouds of ultracold atoms cooled with laser light down to a few billionths of a degree above absolute zero. The atoms in the cloud constantly collide, causing their magnetic moments to become slowly entangled. When this entanglement reaches a certain magnitude, Bell correlations can be detected. Author Roman Schmied explains: “One would expect that random collisions simply cause disorder. Instead, the quantum-mechanical properties become entangled so strongly that they violate classical statistics.”

More specifically, each atom is first brought into a quantum superposition of two states. After the atoms have become entangled through collisions, researchers count how many of the atoms are actually in each of the two states. This division varies randomly between trials. If these variations fall below a certain threshold, it appears as if the atoms have ‘agreed’ on their measurement results; this agreement describes precisely the Bell correlations.
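
For readers who want a number to hang on to, the classic two-particle version of a Bell test (the CHSH inequality — not the many-particle inequality actually used in this experiment) shows what “violating classical statistics” means,

```python
import math

def chsh_value(a, a2, b, b2):
    """CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b'),
    using the quantum prediction E(x, y) = -cos(x - y) for a singlet pair."""
    e = lambda x, y: -math.cos(x - y)
    return e(a, b) - e(a, b2) + e(a2, b) + e(a2, b2)

# Local hidden-variable (classical) models obey |S| <= 2; quantum mechanics reaches 2*sqrt(2).
s = chsh_value(0, math.pi / 2, math.pi / 4, 3 * math.pi / 4)
print(f"|S| = {abs(s):.3f}  (classical bound 2, quantum bound {2 * math.sqrt(2):.3f})")
```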

New scientific territory

The work presented, which was funded by the National Centre of Competence in Research Quantum Science and Technology (NCCR QSIT), may open up new possibilities in quantum technology; for example, for generating random numbers or for quantum-secure data transmission. New prospects in basic research open up as well: “Bell correlations in many-particle systems are a largely unexplored field with many open questions – we are entering uncharted territory with our experiments,” says Philipp Treutlein.

Here’s a link to and a citation for the paper,

Bell correlations in a Bose-Einstein condensate by Roman Schmied, Jean-Daniel Bancal, Baptiste Allard, Matteo Fadel, Valerio Scarani, Philipp Treutlein, Nicolas Sangouard. Science  22 Apr 2016: Vol. 352, Issue 6284, pp. 441-444 DOI: 10.1126/science.aad8665

This paper is behind a paywall.

Viewing quantum entanglement with the naked eye

A Feb. 18, 2016 article by Bob Yirka for phys.org suggests there may be a way to see quantum entanglement with the naked eye,

A trio of physicists in Europe has come up with an idea that they believe would allow a person to actually witness entanglement. Valentina Caprara Vivoli, with the University of Geneva, Pavel Sekatski, with the University of Innsbruck and Nicolas Sangouard, with the University of Basel, have together written a paper describing a scenario where a human subject would be able to witness an instance of entanglement—they have uploaded it to the arXiv server for review by others.
Entanglement is, of course, where two quantum particles are intrinsically linked to the extent that they actually share the same existence, even though they can be separated and moved apart. The idea was first proposed nearly a century ago, and it has not only been proven, but researchers routinely cause it to occur. To date, however, not one single person has ever actually seen it happen—they only know it happens by conducting a series of experiments. It is not clear if anyone has ever actually tried to see it happen, but in this new effort, the research trio claim to have found a way to make it happen—if only someone else will carry out the experiment on a willing volunteer.

A Feb. 17, 2016 article for the MIT (Massachusetts Institute of Technology) Technology Review describes this proposed project in detail,

Finding a way for a human eye to detect entangled photons sounds straightforward. After all, the eye is a photon detector, so it ought to be possible for an eye to replace a photo detector in any standard entanglement detecting experiment.

Such an experiment might consist of a source of entangled pairs of photons, each of which is sent to a photo detector via an appropriate experimental setup.

By comparing the arrival of photons at each detector and by repeating the detecting process many times, it is possible to determine statistically whether entanglement is occurring.

It’s easy to imagine that this experiment can be easily repeated by replacing one of the photodetectors with an eye. But that turns out not to be the case.

The main problem is that the eye cannot detect single photons. Instead, each light-detecting rod at the back of the eye must be stimulated by a good handful of photons to trigger a detection. The lowest number of photons that can do the trick is thought to be about seven, but in practice, people usually see photons only when they arrive in the hundreds or thousands.

Even then, the eye is not a particularly efficient photodetector. A good optics lab will have photodetectors that are well over 90 percent efficient. By contrast, at the very lowest light levels, the eye is about 8 percent efficient. That means it misses lots of photons.

That creates a significant problem. If a human eye is ever to “see” entanglement in this way, then physicists will have to entangle not just two photons but at least seven, and ideally many hundreds or thousands of them.

And that simply isn’t possible with today’s technology. At best, physicists are capable of entangling half a dozen photons but even this is a difficult task.
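
The arithmetic behind that claim is a simple binomial estimate: with roughly 8 percent detection efficiency and a threshold of about seven detected photons, how many photons have to arrive before the eye reliably registers anything? A rough sketch of my own, using the figures quoted in the article,

```python
from math import comb

def p_detect(threshold, n_photons, efficiency):
    """Probability that at least `threshold` of `n_photons` arriving photons are detected,
    each independently with probability `efficiency`."""
    return sum(comb(n_photons, k) * efficiency**k * (1 - efficiency)**(n_photons - k)
               for k in range(threshold, n_photons + 1))

for n in (7, 50, 100, 200, 500):
    print(f"{n:4d} photons -> P(see something) = {p_detect(7, n, 0.08):.3f}")
# Seven arriving photons are hopeless at 8% efficiency; only in the hundreds does the eye
# cross its threshold reliably, which is the article's point about needing far more than
# two entangled photons.
```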

But the researchers have come up with a solution to the problem,

Vivoli and co say they have devised a trick that effectively amplifies a single entangled photon into many photons that the eye can see. Their trick depends on a technique called a displacement operation, in which two quantum objects interfere so that one changes the phase of another.

One way to do this with photons is with a beam splitter. Imagine a beam of coherent photons from a laser that is aimed at a beam splitter. The beam is transmitted through the splitter but a change of phase can cause it to be reflected instead.

Now imagine another beam of coherent photons that interferes with the first. This changes the phase of the first beam so that it is reflected rather than transmitted. In other words, the second beam can switch the reflection on and off.

Crucially, the switching beam needn’t be as intense as the main beam—it only needs to be coherent. Indeed, a single photon can do this trick of switching a more intense beam, at least in theory.

That’s the basis of the new approach. The idea is to use a single entangled photon to switch the passage of a more powerful beam through a beam splitter. And it is this more powerful beam that the eye detects and which still preserves the quantum nature of the original entanglement.

… this experiment will be hard to do. Ensuring that the optical amplifier works as they claim will be hard, for example.

And even if it does, reliably recording each detection in the eye will be even harder. The test for entanglement is a statistical one that requires many counts from both detectors. That means an individual would have to sit in the experiment registering a yes or no answer for each run, repeated thousands or tens of thousands of times. Volunteers will need to have plenty of time on their hands.

Of course, experiments like this will quickly take the glamor and romance out of the popular perception of entanglement. Indeed, it’s hard to see why anybody would want to be entangled with a photodetector over the time it takes to do this experiment.

There is a suggestion as to how to make this a more attractive proposition for volunteers,

One way to increase this motivation would be to modify the experiment so that it entangles two humans. It’s not hard to imagine people wanting to take part in such an experiment, perhaps even eagerly.

That will require a modified set up in which both detectors are human eyes, with their high triggering level and their low efficiency. Whether this will be possible with Vivoli and co’s setup isn’t yet clear.

Only then will volunteers be able to answer the question that sits uncomfortably with most physicists. What does it feel like to be entangled with another human?

Given the nature of this experiment, the answer will be “mind-numbingly boring.” But as Vivoli and co point out in their conclusion: “It is safe to say that probing human vision with quantum light is terra incognita. This makes it an attractive challenge on its own.”

You can read the arXiv paper,

What Does It Take to See Entanglement? by Valentina Caprara Vivoli, Pavel Sekatski, Nicolas Sangouard arxiv.org/abs/1602.01907 Submitted Feb. 5, 2016

This is an open access paper and this site encourages comments and peer review.

One final comment, the articles reminded me of a March 1, 2012 posting which posed this question Can we see entangled images? a question for physicists in the headline for a piece about a physicist’s (Geraldo Barbosa) challenge and his arXiv paper. Coincidentally, the source article was by Bob Yirka and was published on phys.org.

Nano (?) diamonds used in mechanical system to control quantum states

We do end up back in the world of spin but, first, there are the nano (I think) diamonds in an Aug. 3, 2015 news item on Nanotechnology Now,

Scientists at the Swiss Nanoscience Institute at the University of Basel have used resonators made from single-crystalline diamonds to develop a novel device in which a quantum system is integrated into a mechanical oscillating system. For the first time, the researchers were able to show that this mechanical system can be used to coherently manipulate an electron spin embedded in the resonator – without external antennas or complex microelectronic structures. …

A July 16, 2014 University of Basel press release (also on EurekAlert), which originated the news item, provides more detail about the work,

In previous publications, the research team led by Georg H. Endress Professor Patrick Maletinsky described how resonators made from single-crystalline diamonds with individually embedded electrons are highly suited to addressing the spin of these electrons. These diamond resonators were modified in multiple places so that a carbon atom in the crystal lattice was replaced with a nitrogen atom, with a missing atom directly adjacent. In these “nitrogen-vacancy centers,” individual electrons are trapped. Their “spin,” or intrinsic angular momentum, is examined in this research.

When the resonator now begins to oscillate, strain develops in the diamond’s crystal structure. This, in turn, influences the spin of the electrons, which can indicate two possible directions (“up” or “down”) when measured. The direction of the spin can be detected with the aid of fluorescence spectroscopy.

Extremely fast spin oscillation

In this latest publication, the scientists have shaken the resonators in a way that allows them to induce a coherent oscillation of the coupled spin for the first time. This means that the spin of the electrons switches from up to down and vice versa in a controlled and rapid rhythm and that the scientists can control the spin status at any time. This spin oscillation is fast compared with the frequency of the resonator. It also protects the spin against harmful decoherence mechanisms.

It is conceivable that this diamond resonator could be applied to sensors – potentially in a highly sensitive way – because the oscillation of the resonator can be recorded via the altered spin. These new findings also allow the spin to be coherently rotated over a very long period of close to 100 microseconds, making the measurement more precise. Nitrogen-vacancy centers could potentially also be used to develop a quantum computer. In this case, the quick manipulation of its quantum states demonstrated in this work would be a decisive advantage.
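
The “controlled and rapid rhythm” is a coherent (Rabi-type) oscillation: the probability of finding the spin flipped swings sinusoidally in time, slowly damped by decoherence. Here’s a generic illustration with made-up numbers, not the paper’s measured rates,

```python
import math

def flip_probability(t_us, rabi_mhz=1.0, coherence_us=100.0):
    """Coherently driven spin: P(flip) = sin^2(pi * f_Rabi * t), damped by decoherence."""
    return math.exp(-t_us / coherence_us) * math.sin(math.pi * rabi_mhz * t_us) ** 2

for t in (0.0, 0.25, 0.5, 1.0, 50.25):   # microseconds
    print(f"t = {t:6.2f} us -> P(spin flipped) = {flip_probability(t):.3f}")
# A half-period drive (here 0.5 us) flips the spin with near-unit probability, and the
# ~100 us coherence time quoted above sets how long such control remains clean.
```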

Unfortunately, the researchers do not indicate the measurement scale for the diamonds so I’m guessing, given the descriptions, that these were nanoscale diamonds being used in the research.

In any event, here’s a link to and a citation for the paper,

Strong mechanical driving of a single electron spin by A. Barfuss, J. Teissier, E. Neu, A. Nunnenkamp, & P. Maletinsky. Nature Physics (2015)  doi:10.1038/nphys3411 Published online 03 August 2015

This paper is behind a paywall.

An efficient method for signal transmission from nanocomponents

A May 23, 2015 news item on Nanotechnology Now describes research into perfecting the use of nanocomponents in electronic circuits,

Physicists have developed an innovative method that could enable the efficient use of nanocomponents in electronic circuits. To achieve this, they have developed a layout in which a nanocomponent is connected to two electrical conductors, which uncouple the electrical signal in a highly efficient manner. The scientists at the Department of Physics and the Swiss Nanoscience Institute at the University of Basel have published their results in the scientific journal Nature Communications together with their colleagues from ETH Zurich.

A May 22, 2015 University of Basel press release (also on EurekAlert) describes why there is interest in smaller components and some of the challenges once electrodes can be measured in atoms,

Electronic components are becoming smaller and smaller. Components measuring just a few nanometers – the size of around ten atoms – are already being produced in research laboratories. Thanks to miniaturization, numerous electronic components can be placed in restricted spaces, which will boost the performance of electronics even further in the future.

Teams of scientists around the world are investigating how to produce such nanocomponents with the aid of carbon nanotubes. These tubes have unique properties – they offer excellent heat conduction, can withstand strong currents, and are suitable for use as conductors or semiconductors. However, signal transmission between a carbon nanotube and a significantly larger electrical conductor remains problematic, as large portions of the electrical signal are lost to reflection.

Antireflex increases efficiency

A similar problem occurs with light sources inside a glass object. A large amount of light is reflected by the walls, which means that only a small proportion reaches the outside. This can be countered by using an antireflex coating on the walls.

The press release goes on to describe new technique for addressing the issue,

Led by Professor Christian Schönenberger, scientists in Basel are now taking a similar approach to nanoelectronics. They have developed an antireflex device for electrical signals to reduce the reflection that occurs during transmission from nanocomponents to larger circuits. To do so, they created a special formation of electrical conductors of a certain length, which are coupled with a carbon nanotube. The researchers were therefore able to efficiently uncouple a high-frequency signal from the nanocomponent.

Differences in impedance cause the problem

Coupling nanostructures with significantly larger conductors proved difficult because they have very different impedances. The greater the difference in impedance between two conducting structures, the greater the loss during transmission. The difference between nanocomponents and macroscopic conductors is so great that no signal will be transmitted unless countermeasures are taken. The antireflex device minimizes this effect and adjusts the impedances, leading to efficient coupling. This brings the scientists significantly closer to their goal of using nanocomponents to transmit signals in electronic parts.
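
The scale of the mismatch problem is easy to see from the standard reflection coefficient at an impedance step, Γ = (Z₂ − Z₁)/(Z₂ + Z₁). A rough sketch with a standard 50-ohm line and a nanotube-scale resistance; the values are illustrative assumptions, not the paper’s,

```python
def reflected_power_fraction(z_line_ohm, z_device_ohm):
    """Fraction of incident power reflected at an impedance step: |Gamma|^2."""
    gamma = (z_device_ohm - z_line_ohm) / (z_device_ohm + z_line_ohm)
    return gamma ** 2

line = 50.0            # standard microwave line impedance
nanotube = 25_000.0    # assumed order of magnitude for a carbon nanotube device

print(f"unmatched: {reflected_power_fraction(line, nanotube):.3f} of the power bounces back")
print(f"matched:   {reflected_power_fraction(line, 50.0):.3f} of the power bounces back")
# Without matching, essentially the whole signal is reflected; the matching circuit
# transforms the impedances so the high-frequency signal can actually couple out.
```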

Here’s a link to and a citation for the paper,

Clean carbon nanotubes coupled to superconducting impedance-matching circuits by V. Ranjan, G. Puebla-Hellmann, M. Jung, T. Hasler, A. Nunnenkamp, M. Muoth, C. Hierold, A. Wallraff, & C. Schönenberger. Nature Communications 6, Article number: 7165 doi:10.1038/ncomms8165 Published 15 May 2015

This paper is behind a paywall.