Tag Archives: University of Stuttgart

Defending nanoelectronics from cyber attacks

There’s a new program at the University of Stuttgart (Germany), and its call for projects was recently announced. First, here’s a description of the program in a May 30, 2019 news item on Nanowerk,

Today’s societies critically depend on electronic systems. Past spectacular cyber-attacks have clearly demonstrated the vulnerability of existing systems and the need to prevent such attacks in the future. The majority of available cyber-defenses concentrate on protecting the software part of electronic systems or their communication interfaces.

However, advances in manufacturing technology and increasing hardware complexity pose a large number of challenges, and the focus of attackers has shifted towards the hardware level. We have already seen evidence of powerful and successful hardware-level attacks, including Rowhammer, Meltdown and Spectre.

These attacks happened on products built using state-of-the-art microelectronic technology. However, we are facing completely new security challenges due to the ongoing transition to radically new types of nanoelectronic devices, such as memristors, spintronics, or carbon nanotube and graphene-based transistors.

The use of such emerging nanotechnologies is inevitable to address the key challenges related to energy efficiency, computing power and performance. Therefore, the entire industry is switching to emerging nanoelectronics alongside scaled CMOS technologies in heterogeneous integrated systems.

These technologies come with new properties and also facilitate the development of radically different computer architectures. The new technologies and architectures provide new opportunities for achieving security targets, but also raise questions about their vulnerabilities to new types of hardware attacks.

A May 28, 2019 University of Stuttgart press release provides more information about the program and the call for projects,

Whether it’s cars, industrial plants or the government network, spectacular cyber attacks over the past few months have shown how vulnerable modern electronic systems are. The aim of the new Priority Program “Nano Security”, which is coordinated by the University of Stuttgart, is to provide protection and prevent the cyber attacks of the future. The program, which is funded by the German Research Foundation (DFG), emphasizes making the hardware into a reliable foundation of a system, or into a security layer.

The challenges of nanoelectronics

Completely new challenges also emerge as a result of the switch to radically new nanoelectronic components, which are used, for example, to master the challenges of the future in terms of energy efficiency, computing power and secure data transmission. Examples include memristors (components which are not just used to store information but also function as logic modules), spintronics, which exploits quantum-mechanical effects, and carbon nanotubes.

The new technologies, as well as the fundamentally different computer architecture associated with them, offer new opportunities for cryptographic primitives in order to achieve an even more secure data transmission. However, they also raise questions about their vulnerability to new types of hardware attacks.

The problem is part of the solution

As part of the new Priority Program, a better understanding should be developed of the consequences the new nanoelectronic technologies have for the security of circuits and systems. Here, the hardware is not just thought of as part of the problem but also as an important and necessary part of the solution to security problems. Starting points include, for example, the hardware-based generation of cryptographic keys, the secure storage and processing of sensitive data, and the isolation of system components guaranteed by the hardware. Lastly, it should be ensured that an attack cannot spread further through the system.

In this process, the scientists want to assess the possible security risks and weaknesses which stem from the new type of nanoelectronics. Furthermore, they want to develop innovative approaches for system security which are based on nanoelectronics as a security anchor.

The Priority Program promotes cooperation between scientists, who develop innovative security solutions for the computer systems of the future on different levels of abstraction. Likewise, it makes methods available to system designers to keep ahead in the race between attackers and security measures over the next few decades.

The call has started

The DFG Priority Program “Nano Security. From Nano-Electronics to Secure Systems” (SPP 2253) is scheduled to last for a period of six years. The call for projects for the first three-year funding period was advertised a few days ago, and the first projects are set to start at the beginning of 2020.
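One aside from me (this isn’t in the press release): a common embodiment of the “hardware-based generation of cryptographic keys” mentioned above is the physical unclonable function (PUF), where uncontrollable manufacturing variation gives every chip its own fingerprint. Here’s a toy Python sketch of the idea; all names and numbers are purely illustrative:

# Toy sketch (not the SPP 2253 programme's method): an SRAM-style physical
# unclonable function. Each simulated memory cell has a fixed, chip-specific
# bias from manufacturing variation; its power-up value is mostly stable, so
# the power-up pattern acts as a fingerprint from which a key can be derived
# (real designs add error correction for the noisy, marginal cells).
import random

class ToySRAMPUF:
    def __init__(self, chip_seed):
        rng = random.Random(chip_seed)      # stands in for manufacturing variation
        self.bias = [rng.random() for _ in range(256)]

    def power_up(self, noise=0.02):
        # A cell settles to 1 when its bias (plus a little thermal noise) exceeds 0.5.
        return [int(b + random.uniform(-noise, noise) > 0.5) for b in self.bias]

chip_a, chip_b = ToySRAMPUF(1), ToySRAMPUF(2)
same_chip = sum(x == y for x, y in zip(chip_a.power_up(), chip_a.power_up()))
different = sum(x == y for x, y in zip(chip_a.power_up(), chip_b.power_up()))
print(same_chip, "/ 256 cells agree across two power-ups of the same chip")
print(different, "/ 256 cells agree between two different chips")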

For more information go to the Nano Security: From Nano-Electronics to Secure Systems webpage on the University of Stuttgart website.

Quantum back action and devil’s play

I always appreciate a reference to James Clerk Maxwell’s demon thought experiment (you can find out about it in the Maxwell’s demon Wikipedia entry). This time it comes from physicist Kater Murch in a July 23, 2018 Washington University in St. Louis (WUSTL) news release (published July 25, 2018 on EurekAlert) written by Brandie Jefferson (offering a good explanation of the thought experiment and more),

Thermodynamics is one of the most human of scientific enterprises, according to Kater Murch, associate professor of physics in Arts & Sciences at Washington University in St. Louis.

“It has to do with our fascination of fire and our laziness,” he said. “How can we get fire” — or heat — “to do work for us?”

Now, Murch and colleagues have taken that most human enterprise down to the intangible quantum scale — that of ultra low temperatures and microscopic systems — and discovered that, as in the macroscopic world, it is possible to use information to extract work.

There is a catch, though: Some information may be lost in the process.

“We’ve experimentally confirmed the connection between information in the classical case and the quantum case,” Murch said, “and we’re seeing this new effect of information loss.”

The results were published in the July 20 [2018] issue of Physical Review Letters.

The international team included Eric Lutz of the University of Stuttgart; J. J. Alonso of the University of Erlangen-Nuremberg; Alessandro Romito of Lancaster University; and Mahdi Naghiloo, a Washington University graduate research assistant in physics.

That we can get energy from information on a macroscopic scale was most famously illustrated in a thought experiment known as Maxwell’s Demon. [emphasis mine] The “demon” presides over a box filled with molecules. The box is divided in half by a wall with a door. If the demon knows the speed and direction of all of the molecules, it can open the door when a fast-moving molecule is moving from the left half of the box to the right side, allowing it to pass. It can do the same for slow particles moving in the opposite direction, opening the door when a slow-moving molecule is approaching from the right, headed left.

After a while, all of the quickly-moving molecules are on the right side of the box. Faster motion corresponds to higher temperature. In this way, the demon has created a temperature imbalance, where one side of the box is hotter. That temperature imbalance can be turned into work — to push on a piston as in a steam engine, for instance. At first the thought experiment seemed to show that it was possible to create a temperature difference without doing any work, and since temperature differences allow you to extract work, one could build a perpetual motion machine — a violation of the second law of thermodynamics.
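For readers who like to see the sorting rule in action, here’s a small toy simulation of my own (not from the paper or the news release): particles bounce around a one-dimensional box split in the middle, the “demon” only lets fast particles cross to the right and slow particles cross to the left, and the average kinetic energy of the two halves drifts apart:

# Toy Maxwell's demon: particles move in a 1-D box split by a door at x = 0.5.
# The demon opens the door only for fast particles moving left-to-right and
# slow particles moving right-to-left, so the right half ends up hotter.
import random

N, STEPS, DT = 500, 4000, 0.005
pos = [random.random() for _ in range(N)]
vel = [random.gauss(0.0, 1.0) for _ in range(N)]
FAST = 1.0                                  # demon's threshold (about the rms speed)

for _ in range(STEPS):
    for i in range(N):
        new_x = pos[i] + vel[i] * DT
        going_right = pos[i] < 0.5 <= new_x
        going_left = new_x <= 0.5 < pos[i]
        blocked = (going_right and abs(vel[i]) < FAST) or \
                  (going_left and abs(vel[i]) >= FAST)
        if blocked or not 0.0 < new_x < 1.0:
            vel[i] = -vel[i]                # door stays shut / wall bounce
        else:
            pos[i] = new_x

left = [vel[i] ** 2 for i in range(N) if pos[i] < 0.5]
right = [vel[i] ** 2 for i in range(N) if pos[i] >= 0.5]
print("mean v^2, left half :", round(sum(left) / len(left), 2))    # cooler
print("mean v^2, right half:", round(sum(right) / len(right), 2))  # hotter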

“Eventually, scientists realized that there’s something about the information that the demon has about the molecules,” Murch said. “It has a physical quality like heat and work and energy.”

His team wanted to know if it would be possible to use information to extract work in this way on a quantum scale, too, but not by sorting fast and slow molecules. If a particle is in an excited state, they could extract work by moving it to a ground state. (If it was in a ground state, they wouldn’t do anything and wouldn’t expend any work).

But they wanted to know what would happen if the quantum particles were in an excited state and a ground state at the same time, analogous to being fast and slow at the same time. In quantum physics, this is known as a superposition.

“Can you get work from information about a superposition of energy states?” Murch asked. “That’s what we wanted to find out.”

There’s a problem, though. On a quantum scale, getting information about particles can be a bit … tricky.

“Every time you measure the system, it changes that system,” Murch said. And if they measured the particle to find out exactly what state it was in, it would revert to one of two states: excited, or ground.

This effect is called quantum backaction. To get around it, when looking at the system, researchers (who were the “demons”) didn’t take a long, hard look at their particle. Instead, they took what was called a “weak observation.” It still influenced the state of the superposition, but not enough to move it all the way to an excited state or a ground state; it was still in a superposition of energy states. This observation was enough, though, to allow the researchers to track, with fairly high accuracy, exactly what superposition the particle was in — and this is important, because the way the work is extracted from the particle depends on what superposition state it is in.
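The news release stays qualitative, so here is one standard textbook way to model a weak measurement of a qubit (a generic sketch of the idea, not the superconducting-circuit protocol actually used in the experiment): two measurement operators that only slightly distinguish ground from excited, so each outcome nudges the superposition rather than collapsing it:

# Toy weak measurement of a qubit in the {|g>, |e>} basis (generic textbook
# model, not the experiment's homodyne protocol). The Kraus operators barely
# distinguish the two states, so the post-measurement state is nudged, not
# collapsed — the "quantum backaction" described above.
import numpy as np

g = np.array([1.0, 0.0])
e = np.array([0.0, 1.0])
psi = (g + e) / np.sqrt(2)           # equal superposition

strength = 0.1                       # 0 = no measurement, 0.5 = projective
# Outcome "+" is slightly more likely for |g>, outcome "-" for |e>.
M_plus  = np.diag([np.sqrt(0.5 + strength), np.sqrt(0.5 - strength)])
M_minus = np.diag([np.sqrt(0.5 - strength), np.sqrt(0.5 + strength)])
assert np.allclose(M_plus.T @ M_plus + M_minus.T @ M_minus, np.eye(2))

p_plus = np.linalg.norm(M_plus @ psi) ** 2
outcome, M = (("+", M_plus) if np.random.rand() < p_plus else ("-", M_minus))
psi_after = M @ psi / np.linalg.norm(M @ psi)

print("outcome:", outcome)
print("P(excited) before:", abs(psi[1]) ** 2)        # 0.5
print("P(excited) after :", abs(psi_after[1]) ** 2)  # nudged away from 0.5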

To get information, even using the weak observation method, the researchers still had to take a peek at the particle, which meant they needed light. So they sent some photons in, and observed the photons that came back.

“But the demon misses some photons,” Murch said. “It only gets about half. The other half are lost.” But — and this is the key — even though the researchers didn’t see the other half of the photons, those photons still interacted with the system, which means they still had an effect on it. The researchers had no way of knowing what that effect was.

They took a weak measurement and got some information, but because of quantum backaction, they might end up knowing less than they did before the measurement. On the balance, that’s negative information.

And that’s weird.

“Do the rules of thermodynamics for a macroscopic, classical world still apply when we talk about quantum superposition?” Murch asked. “We found that yes, they hold, except there’s this weird thing. The information can be negative.

“I think this research highlights how difficult it is to build a quantum computer,” Murch said.

“For a normal computer, it just gets hot and we need to cool it. In the quantum computer you are always at risk of losing information.”

Here’s a link to and a citation for the paper,

Information Gain and Loss for a Quantum Maxwell’s Demon by M. Naghiloo, J. J. Alonso, A. Romito, E. Lutz, and K. W. Murch. Phys. Rev. Lett. 121, 030604 (Vol. 121, Iss. 3, 20 July 2018). Published 17 July 2018. DOI: https://doi.org/10.1103/PhysRevLett.121.030604

© 2018 American Physical Society

This paper is behind a paywall.

Sustainable Nanotechnologies (SUN) project draws to a close in March 2017

Two Oct. 31, 2016 news items on Nanowerk signal the impending sunset date for the European Union’s Sustainable Nanotechnologies (SUN) project. The first Oct. 31, 2016 news item on Nanowerk describes the project’s latest achievements,

The results from the 3rd SUN annual meeting showed great advancement of the project. The meeting was held in Edinburgh, Scotland, UK on 4-5 October 2016 where the project partners presented the results obtained during the second reporting period of the project.

SUN is a three and a half year EU project, running from 2013 to 2017, with a budget of about €14 million. Its main goal is to evaluate the risks along the supply chain of engineered nanomaterials and incorporate the results into tools and guidelines for sustainable manufacturing.

The ultimate goal of the SUN Project is the development of an online software Decision Support System – SUNDS – aimed at estimating and managing occupational, consumer, environmental and public health risks from nanomaterials in real industrial products along their lifecycles. The SUNDS beta prototype was released in October 2015, and since then the main focus has been on refining the methodologies and testing them on selected case studies, i.e. nano-copper oxide based wood preserving paint and nano-sized colourants for plastic car parts: organic pigment and carbon black. Obtained results and open issues were discussed during the third annual meeting in order to collect feedback from the consortium that will inform, in the next months, the implementation of the final version of the SUNDS software system, due by March 2017.

An Oct. 27, 2016 SUN project press release, which originated the news item, adds more information,

Significant interest has been paid to the results obtained in WP2 (Lifecycle Thinking), whose main objectives are to assess the environmental impacts arising from each life cycle stage of the SUN case studies (i.e. Nano-WC-Cobalt (Tungsten Carbide-cobalt) sintered ceramics, Nanocopper wood preservatives, Carbon Nano Tube (CNT) in plastics, Silicon Dioxide (SiO2) as food additive, Nano-Titanium Dioxide (TiO2) air filter system, Organic pigment in plastics and Nanosilver (Ag) in textiles), and to compare them to conventional products with similar uses and functionality, in order to develop and validate criteria and guiding principles for green nano-manufacturing. Specifically, the consortium partner COLOROBBIA CONSULTING S.r.l. expressed its willingness to exploit the results obtained from the life cycle assessment analysis related to nanoTiO2 in their industrial applications.

On 6th October [2016], the discussions about the SUNDS advancement continued during a Stakeholder Workshop, where representatives from industry, regulatory and insurance sectors shared their feedback on the use of the decision support system. The recommendations collected during the workshop will be used for further refinement and will be implemented in the final version of the software, which will be released by March 2017.

The second Oct. 31, 2016 news item on Nanowerk led me to this Oct. 27, 2016 SUN project press release about the activities in the upcoming final months,

The project has designed its final events to serve as an effective platform to communicate the main results achieved in its course within the Nanosafety community and bridge them to a wider audience addressing the emerging risks of Key Enabling Technologies (KETs).

The series of events includes the New Tools and Approaches for Nanomaterial Safety Assessment: A joint conference organized by NANOSOLUTIONS, SUN, NanoMILE, GUIDEnano and eNanoMapper to be held on 7 – 9 February 2017 in Malaga, Spain, the SUN-caLIBRAte Stakeholders workshop to be held on 28 February – 1 March 2017 in Venice, Italy and the SRA Policy Forum: Risk Governance for Key Enabling Technologies to be held on 1 – 3 March 2017 in Venice, Italy.

Jointly organized by the Society for Risk Analysis (SRA) and the SUN Project, the SRA Policy Forum will address current efforts put towards refining the risk governance of emerging technologies through the integration of traditional risk analytic tools alongside considerations of social and economic concerns. The parallel sessions will be organized in 4 tracks:  Risk analysis of engineered nanomaterials along product lifecycle, Risks and benefits of emerging technologies used in medical applications, Challenges of governing SynBio and Biotech, and Methods and tools for risk governance.

The SRA Policy Forum has announced its speakers and preliminary Programme. Confirmed speakers include:

  • Keld Alstrup Jensen (National Research Centre for the Working Environment, Denmark)
  • Elke Anklam (European Commission, Belgium)
  • Adam Arkin (University of California, Berkeley, USA)
  • Phil Demokritou (Harvard University, USA)
  • Gerard Escher (École polytechnique fédérale de Lausanne, Switzerland)
  • Lisa Friedersdorf (National Nanotechnology Initiative, USA)
  • James Lambert (President, Society for Risk Analysis, USA)
  • Andre Nel (The University of California, Los Angeles, USA)
  • Bernd Nowack (EMPA, Switzerland)
  • Ortwin Renn (University of Stuttgart, Germany)
  • Vicki Stone (Heriot-Watt University, UK)
  • Theo Vermeire (National Institute for Public Health and the Environment (RIVM), Netherlands)
  • Tom van Teunenbroek (Ministry of Infrastructure and Environment, The Netherlands)
  • Wendel Wohlleben (BASF, Germany)

The New Tools and Approaches for Nanomaterial Safety Assessment (NMSA) conference aims at presenting the main results achieved in the course of the organizing projects, fostering a discussion about their impact in the nanosafety field and possibilities for future research programmes. The conference welcomes consortium partners, as well as representatives from other EU projects, industry, government, civil society and media. Accordingly, the conference topics include: Hazard assessment along the life cycle of nano-enabled products, Exposure assessment along the life cycle of nano-enabled products, Risk assessment & management, Systems biology approaches in nanosafety, Categorization & grouping of nanomaterials, Nanosafety infrastructure, and Safe by design. The NMSA conference keynote speakers include:

  • Harri Alenius (University of Helsinki, Finland)
  • Antonio Marcomini (Ca’ Foscari University of Venice, Italy)
  • Wendel Wohlleben (BASF, Germany)
  • Danail Hristozov (Ca’ Foscari University of Venice, Italy)
  • Eva Valsami-Jones (University of Birmingham, UK)
  • Socorro Vázquez-Campos (LEITAT Technological Center, Spain)
  • Barry Hardy (Douglas Connect GmbH, Switzerland)
  • Egon Willighagen (Maastricht University, Netherlands)
  • Nina Jeliazkova (IDEAconsult Ltd., Bulgaria)
  • Haralambos Sarimveis (The National Technical University of Athens, Greece)

During the SUN-caLIBRAte Stakeholder workshop, the final version of the SUN user-friendly, software-based Decision Support System (SUNDS) for managing the environmental, economic and social impacts of nanotechnologies will be presented and discussed with its end users: industries, regulators and insurance sector representatives. The results from the discussion will be used as a foundation for the development of caLIBRAte’s Risk Governance framework for assessment and management of human and environmental risks of manufactured nanomaterials (MN) and MN-enabled products.

The SRA Policy Forum: Risk Governance for Key Enabling Technologies and the New Tools and Approaches for Nanomaterial Safety Assessment conference are now open for registration. Abstracts for the SRA Policy Forum can be submitted until 15th November 2016.
For further information go to:
www.sra.org/riskgovernanceforum2017
http://www.nmsaconference.eu/

There you have it.

A Victoria & Albert Museum installation integrates biomimicry, robotic fabrication and new materials research in architecture

The Victoria & Albert Museum (V&A) in London, UK, opened its Engineering Season show on May 18, 2016 (it runs until Nov. 6, 2016) featuring a robot installation and an exhibition putting the spotlight on Ove Arup, “the most significant engineer of the 20th century” according to the V&A’s May ??, 2016 press release,

The first major retrospective of the most influential engineer of the 20th century and a site specific installation inspired by nature and fabricated by robots will be the highlights of the V&A’s first ever Engineering Season, complemented by displays, events and digital initiatives dedicated to global engineering design. The V&A Engineering Season will highlight the importance of engineering in our daily lives and consider engineers as the ‘unsung heroes’ of design, who play a vital and creative role in the creation of our built environment.

Before launching into the robot/biomimicry part of this story, here’s a very brief description of why Ove Arup is considered so significant and influential,

Engineering the World: Ove Arup and the Philosophy of Total Design will explore the work and legacy of Ove Arup (1895-1988), … . Ove pioneered a multidisciplinary approach to design that has defined the way engineering is understood and practiced today. Spanning 100 years of engineering and architectural design, the exhibition will be guided by Ove’s writings about design and include his early projects, such as the Penguin Pool at London Zoo, as well as renowned projects by the firm including Sydney Opera House [Australia] and the Centre Pompidou in Paris. Arup’s collaborations with major architects of the 20th century pioneered new approaches to design and construction that remain influential today, with the firm’s legacy visible in many buildings across London and around the world. It will also showcase recent work by Arup, from major infrastructure projects like Crossrail and novel technologies for acoustics and crowd flow analysis, to engineering solutions for open source housing design.

Robots, biomimicry and the Elytra Filament Pavilion

A May 18, 2016 article by Tim Master for BBC (British Broadcasting Corporation) news online describes the pavilion installation,

A robot has taken up residence at the Victoria & Albert Museum to construct a new installation at its London gardens.

The robot – which resembles something from a car assembly line – will build new sections of the Elytra Filament Pavilion over the coming months.

The futuristic structure will grow and change shape using data based on how visitors interact with it.

Elytra’s canopy is made up of 40 hexagonal cells – made from strips of carbon and glass fibre – which have been tightly wound into shape by the computer-controlled Kuka robot.

Each cell takes about three hours to build. On certain days, visitors to the V&A will be able to watch the robot create new cells that will be added to the canopy.

Here are some images made available by V&A,

Elytra Filament Pavilion arriving at the V&A, 2016. © Victoria and Albert Museum, London

Kuka robot weaving Elytra Filament Pavilion cell fibres, 2016. © Victoria and Albert Museum, London

[downloaded from http://www.bbc.com/news/entertainment-arts-36322731]

Elytra Filament Pavilion at the V&A, 2016. © Victoria and Albert Museum, London

Here’s more detail from the V&A’s Elytra Filament Pavilion installation description,

Elytra Filament Pavilion has been created by experimental German architect Achim Menges with Moritz Dörstelmann, structural engineer Jan Knippers and climate engineer Thomas Auer.

Menges and Knippers are leaders of research institutes at the University of Stuttgart that are pioneering the integration of biomimicry, robotic fabrication and new materials research in architecture. This installation emerges from their ongoing research projects and is their first-ever major commission in the UK.

The pavilion explores the impact of emerging robotic technologies on architectural design, engineering and making.

Its design is inspired by lightweight construction principles found in nature: the filament structures of the forewing shells of flying beetles, known as elytra. Made of glass and carbon fibre, each component of the undulating canopy is produced using an innovative robotic winding technique developed by the designers. Like beetle elytra, the pavilion’s filament structure is both very strong and very light – spanning over 200 m², it weighs less than 2.5 tonnes.

Elytra is a responsive shelter that will grow over the course of the V&A Engineering Season. Sensors in the canopy fibres will collect data on how visitors inhabit the pavilion and monitor the structure’s behaviour, ultimately informing how and where the canopy grows. During a series of special events as part of the Engineering Season, visitors will have the opportunity to witness the pavilion’s construction live, as new components are fabricated on-site by a Kuka robot.

Unfortunately, I haven’t been able to find more technical detail, particularly about the materials being used in the construction of the pavilion, on the V&A website.

One observation: I’m a little uncomfortable with how they’re gathering data (“Sensors in the canopy fibres will collect data on how visitors inhabit the pavilion … .”). It sounds like surveillance to me.

Nonetheless, the Engineering Season offers the promise of a very intriguing approach to fulfilling the V&A’s mandate as a museum dedicated to decorative arts and design.

Do artists see colour at the nanoscale? It would seem so

I’ve wondered how Japanese artists of the 16th to 18th centuries were able to beat gold down to the nanoscale for application to screens. How could they see what they were doing? I may have an answer at last. According to some new research, it seems that the human eye can detect colour at the nanoscale.

Before getting to the research, here’s the Namban screen story.

Japanese Namban Screen. ca. 1550. In Portugal-Japão: 450 anos de memórias. Embaixada de Portugal no Japão, 1993. [downloaded from http://www.indiana.edu/~liblilly/digital/exhibitions/exhibits/show/portuguese-speaking-diaspora/china-and-japan]

This image is from an Indiana University Bloomington website featuring a page titled “Portuguese-Speaking Diaspora”,

A detail from one of four large folding screens on display in the Museu de Arte Antiga in Lisbon. Namban was the word used to refer to Portuguese traders who, in this scene, are dressed in colorful pantaloons and accompanied by African slaves. Jesuits appear in black robes, while the Japanese observe the newcomers from inside their home. The screen materials included gold-covered copper and paper, tempera paint, silk, and lacquer.

Copyright © 2015 The Trustees of Indiana University

Getting back to the Japanese artists, here’s how their work was described in a July 2, 2014 Springer press release on EurekAlert,

Ancient Japanese gold leaf artists were truly masters of their craft. An analysis of six ancient Namban paper screens shows that these artifacts are gilded with gold leaf that was hand-beaten to the nanometer scale. [emphasis mine] Study leader Sofia Pessanha of the Atomic Physics Center of the University of Lisbon in Portugal believes that the X-ray fluorescence technique her team used in the analysis could also be used to date other artworks without causing any damage to them. The results are published in Springer’s journal Applied Physics A: Materials Science & Processing.

Gold leaf refers to a very thin sheet made from a combination of gold and other metals. It has almost no weight and can only be handled by specially designed tools. Even though the ancient Egyptians were probably the first to gild artwork with it, the Japanese have long been credited as being able to produce the thinnest gold leaf in the world. In Japanese traditional painting, decorating with gold leaf is named Kin-haku, and the finest examples of this craft are the Namban folding screens, or byobu. These were made during the late Momoyama (around 1573 to 1603) and early Edo (around 1603 to 1868) periods.

Pessanha’s team examined six screens that are currently either part of a museum collection or in a private collection in Portugal. Four screens belong to the Momoyama period, and two others were decorated during the early Edo period. The researchers used various X-ray fluorescence spectroscopy techniques to test the thickness and characteristics of the gold layers. The method is completely non-invasive, no samples needed to be taken, and therefore the artwork was not damaged in any way. Also, the apparatus needed to perform these tests is portable, so the measurements can be done outside of a laboratory.

The gilding was evaluated by taking the attenuation or weakening of the different characteristic lines of gold leaf layers into account. The methodology was tested to be suitable for high grade gold alloys with a maximum of 5 percent influence of silver, which is considered negligible.
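The press release doesn’t spell the method out, so here is a generic sketch of my own (placeholder numbers, not the paper’s exact model) of how a thickness can be pulled out of the relative attenuation of two characteristic gold lines: each line is self-absorbed on its way out of the leaf following the Beer-Lambert law, the two line energies are absorbed differently, and the measured intensity ratio can then be inverted numerically for the thickness:

# Illustrative only (not the paper's exact model): estimate leaf thickness
# from the measured ratio of two gold characteristic lines, assuming each
# line is generated uniformly through the leaf and attenuated on the way out
# (Beer-Lambert). MU1, MU2 are mass attenuation coefficients (cm^2/g) at the
# two line energies; the values here are placeholders, not tabulated data.
import math

RHO = 19.3                   # gold density, g/cm^3
MU1, MU2 = 100.0, 2000.0     # placeholder attenuation coefficients
SIN_PSI = math.sin(math.radians(45))   # detector take-off angle

def line_intensity(mu, t_cm):
    a = mu * RHO * t_cm / SIN_PSI
    return (1 - math.exp(-a)) / a      # relative self-attenuation factor

def ratio(t_cm):
    return line_intensity(MU2, t_cm) / line_intensity(MU1, t_cm)

def thickness_from_ratio(measured, lo=1e-8, hi=1e-4):
    # ratio() decreases monotonically with thickness, so bisect for t.
    for _ in range(60):
        mid = (lo + hi) / 2
        if ratio(mid) > measured:
            lo = mid
        else:
            hi = mid
    return mid

t = thickness_from_ratio(0.8)          # 0.8 is an invented "measured" ratio
print(f"estimated thickness: {t * 1e7:.1f} nm")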

The two screens from the early Edo period were initially thought to be of the same age. However, Pessanha’s team found that gold leaf on a screen kept at Museu Oriente in Lisbon was thinner, hence was made more recently. This is in line with the continued development of the gold beating techniques carried out in an effort to obtain ever thinner gold leaf.

So, how did these artists beat gold leaf down to the nanoscale and then use the sheets in their artwork? This July 10, 2015 news item on Azonano may help to answer that question,

The human eye is an amazing instrument and can accurately distinguish between the tiniest, most subtle differences in color. Where human vision excels in one area, it seems to fall short in others, such as perceiving minuscule details because of the natural limitations of human optics.

In a paper published today in The Optical Society’s new, high-impact journal Optica, a research team from the University of Stuttgart, Germany and the University of Eastern Finland, Joensuu, Finland, has harnessed the human eye’s color-sensing strengths to give the eye the ability to distinguish between objects that differ in thickness by no more than a few nanometers — about the thickness of a cell membrane or an individual virus.

A July 9, 2015 Optical Society news release (also on EurekAlert), which originated the news item, provides more details,

This ability to go beyond the diffraction limit of the human eye was demonstrated by teaching a small group of volunteers to identify the remarkably subtle color differences in light that has passed through thin films of titanium dioxide under highly controlled and precise lighting conditions. The result was a remarkably consistent series of tests that revealed a hitherto untapped potential, one that rivals sophisticated optics tools that can measure such minute thicknesses, such as ellipsometry.

“We were able to demonstrate that the unaided human eye is able to determine the thickness of a thin film — materials only a few nanometers thick — by simply observing the color it presents under specific lighting conditions,” said Sandy Peterhänsel, University of Stuttgart, Germany and principal author on the paper. The actual testing was conducted at the University of Eastern Finland.

The Color and Thickness of Thin Films

Thin films are essential for a variety of commercial and manufacturing applications, including anti-reflective coatings on solar panels. These films can be as thin as a few to tens of nanometers. The thin films used in this experiment were created by applying layer after layer of single atoms on a surface. Though highly accurate, this is a time-consuming procedure, and other techniques like vapor deposition are used in industry.

The optical properties of thin films mean that when light interacts with their surfaces it produces a wide range of colors. This is the same phenomenon that produces scintillating colors in soap bubbles and oil films on water.

The specific colors produced by this process depend strongly on the composition of the material, its thickness, and the properties of the incoming light. This high sensitivity to both the material and thickness has sometimes been used by skilled engineers to quickly estimate the thickness of films down to a level of approximately 10-20 nanometers.

This observation inspired the research team to test the limits of human vision to see how small of a variation could be detected under ideal conditions.

“Although the spatial resolving power of the human eye is orders of magnitude too weak to directly characterize film thicknesses, the interference colors are well known to be very sensitive to variations in the film,” said Peterhänsel.
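To see why the colour is such a sensitive thickness gauge, here is a minimal thin-film interference sketch of my own (simplified assumptions: normal incidence, constant refractive indices, a glass substrate, no absorption). Shifting the film thickness by just a few nanometres visibly moves the reflectance peak across the spectrum:

# Minimal thin-film interference sketch (simplified: normal incidence,
# constant refractive indices, no absorption). The reflected spectrum —
# and hence the perceived colour — shifts measurably when the TiO2 film
# thickness changes by only a few nanometres.
import numpy as np

N_AIR, N_FILM, N_SUB = 1.0, 2.4, 1.5     # air / TiO2 (approx.) / glass

def reflectance(thickness_nm, wavelength_nm):
    # Fresnel amplitude coefficients at the two interfaces (normal incidence).
    r12 = (N_AIR - N_FILM) / (N_AIR + N_FILM)
    r23 = (N_FILM - N_SUB) / (N_FILM + N_SUB)
    # Phase accumulated in one pass through the film.
    beta = 2 * np.pi * N_FILM * thickness_nm / wavelength_nm
    r = (r12 + r23 * np.exp(-2j * beta)) / (1 + r12 * r23 * np.exp(-2j * beta))
    return abs(r) ** 2

wavelengths = np.linspace(400, 700, 301)   # visible range, nm
for d in (50, 53):                         # two films only 3 nm apart
    R = reflectance(d, wavelengths)
    peak = wavelengths[np.argmax(R)]
    print(f"{d} nm film: reflectance peak near {peak:.0f} nm")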

Experimental Setup

The setup for this experiment was remarkably simple. A series of thin films of titanium dioxide were manufactured one layer at a time by atomic deposition. While time consuming, this method enabled the researchers to carefully control the thickness of the samples to test the limitations of how small a variation the research subjects could identify.

The samples were then placed on an LCD monitor that was set to display a pure white color, with the exception of a colored reference area that could be calibrated to match the apparent surface colors of the thin films with various thicknesses.

The color of the reference field was then changed by the test subject until it perfectly matched the reference sample: correctly identifying the color meant they also correctly determined its thickness. This could be done in as little as two minutes, and for some samples and test subjects their estimated thickness differed by only one to three nanometers from the actual value measured by conventional means. This level of precision is far beyond normal human vision.

Compared to traditional automated methods of determining the thickness of a thin film, which can take five to ten minutes per sample using some techniques, the human eye performance compared very favorably.

Since human eyes tire very easily, this process is unlikely to replace automated methods. It can, however, serve as a quick check by an experienced technician. “The intention of our study never was solely to compare the human color vision to much more sophisticated methods,” noted Peterhänsel. “Finding out how precise this approach can be was the main motivation for our work.”

The researchers speculate that it may be possible to detect even finer variations if other control factors are put in place. “People often underestimate human senses and their value in engineering and science. This experiment demonstrates that our natural born vision can achieve exceptional tasks that we normally would only assign to expensive and sophisticated machinery,” concludes Peterhänsel.

Here’s a link to and a citation for the paper,

Human color vision provides nanoscale accuracy in thin-film thickness characterization by Sandy Peterhänsel, Hannu Laamanen, Joonas Lehtolahti, Markku Kuittinen, Wolfgang Osten, and Jani Tervo. Optica Vol. 2, Issue 7, pp. 627-630 (2015). DOI: 10.1364/OPTICA.2.000627

This article appears to be open access.

It would seem that the artists creating the Namban screens exploited the ability to see at the nanoscale, which leads me to wonder how many people who work with color/colour all the time, such as visual artists, interior designers, graphic designers, printers, and more, can perceive at the nanoscale. These German and Finnish researchers may want to work with some of these professionals in their next study.

RoboEarth (robot internet) gets examined in hospital

RoboEarth, sometimes referred to as a robot internet or a robot world wide web, is being tested this week by a team of researchers at Eindhoven University of Technology (Technische Universiteit Eindhoven, Netherlands) and their colleagues at Philips, ETH Zürich, TU München and the universities of Zaragoza and Stuttgart, according to a Jan. 14, 2014 news item on BBC (British Broadcasting Corporation) news online,

A world wide web for robots to learn from each other and share information is being shown off for the first time.

Scientists behind RoboEarth will put it through its paces at Eindhoven University in a mocked-up hospital room.

Four robots will use the system to complete a series of tasks, including serving drinks to patients.

It is the culmination of a four-year project, funded by the European Union.

The eventual aim is that both robots and humans will be able to upload information to the cloud-based database, which would act as a kind of common brain for machines.

There’s a bit more detail in Victoria Turk’s Jan. 13 (?), 2014 article for motherboard.vice.com (Note: A link has been removed),

A hospital-like setting is an ideal test for the project, because where RoboEarth could come in handy is in helping out humans with household tasks. A big problem for robots at the moment is that human environments tend to change a lot, whereas robots are limited to the very specific movements and tasks they’ve been programmed to do.

“To enable robots to successfully lend a mechanical helping hand, they need to be able to deal flexibly with new situations and conditions,” explains a post by the University of Eindhoven. “For example you can teach a robot to bring you a cup of coffee in the living room, but if some of the chairs have been moved the robot won’t be able to find you any longer. Or it may get confused if you’ve just bought a different set of coffee cups.”

And of course, it wouldn’t just be limited to robots working explicitly together. The Wikipedia-like knowledge base is more like an internet for machines, connecting lonely robots across the globe.

A Jan. 10, 2014 Eindhoven University of Technology news release provides some insight into what the researchers want to accomplish,

“The problem right now is that robots are often developed specifically for one task”, says René van de Molengraft, TU/e  [Eindhoven University of Technology] researcher and RoboEarth project leader. “Everyday changes that happen all the time in our environment make all the programmed actions unusable. But RoboEarth simply lets robots learn new tasks and situations from each other. All their knowledge and experience are shared worldwide on a central, online database. As well as that, computing and ‘thinking’ tasks can be carried out by the system’s ‘cloud engine’, so the robot doesn’t need to have as much computing or battery power on‑board.”

It means, for example, that a robot can image a hospital room and upload the resulting map to RoboEarth. Another robot, which doesn’t know the room, can use that map on RoboEarth to locate a glass of water immediately, without having to search for it endlessly. In the same way a task like opening a box of pills can be shared on RoboEarth, so other robots can also do it without having to be programmed for that specific type of box.
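The news release doesn’t describe RoboEarth’s actual interfaces, but the sharing pattern it sketches (one robot uploads an annotated map, another downloads and reuses it) can be illustrated with a toy in-memory knowledge base. Everything below, class names and fields included, is hypothetical and purely illustrative:

# Hypothetical, purely illustrative sketch of the sharing pattern described
# above — NOT the real RoboEarth API. One robot uploads an annotated room
# map; a second robot that has never seen the room queries the shared store
# and goes straight to the object it needs.
class SharedKnowledgeBase:
    """Toy stand-in for a cloud database shared by many robots."""
    def __init__(self):
        self.maps = {}

    def upload_map(self, room_id, annotated_map):
        self.maps[room_id] = annotated_map

    def locate(self, room_id, object_name):
        room = self.maps.get(room_id, {})
        return room.get(object_name)       # (x, y) in the room frame, or None

cloud = SharedKnowledgeBase()

# Robot A explores the room and shares what it found.
cloud.upload_map("hospital_room_3", {
    "bed":            (1.0, 2.0),
    "glass_of_water": (0.4, 2.3),
    "door":           (0.0, 0.0),
})

# Robot B, which has never been in the room, reuses that knowledge.
target = cloud.locate("hospital_room_3", "glass_of_water")
print("Robot B drives to:", target)        # (0.4, 2.3) — no search needed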

There’s no word as to exactly when this test, a demonstration for a delegation from the European Commission (which financed the project) using four robots and two simulated hospital rooms, is being held.

I first wrote about* RoboEarth in a Feb. 14, 2011 posting (scroll down about 1/4 of the way) and again in a March 12, 2013 posting about the project’s cloud engine, Rapyuta.

* ‘abut’ corrected to ‘about’ on Sept. 2, 2014.

NanoRem: pollution, nanotechnology, and remediation

According to a July 6, 2013 news item on Nanowerk, nanoremediation is not the right term for referring to pollution cleanup technologies that are nanotechnology-enabled,

In the remediation of pollution in soil and groundwater, minute nanoparticles are increasingly being used to convert or break down pollutants on site. The process, often somewhat mistakenly described as “nano-remediation”, can also be used on contamination that has been hard to fight up to now, for example by heavy metals or the notorious, carcinogenic plasticizer PCB. Yet how do the various nanoparticles behave in the earth, are they in turn harmless for humans and the environment, and how can they be produced at a favourable price? These questions are being investigated by scientists from the Research Facility for Subsurface Remediation (VEGAS) of the University of Stuttgart together with 27 partners from 13 countries in the framework of the EU project “NanoRem”, planned to last four years. For this purpose the European Union is providing around 10.5 million Euros from the 7th research framework programme.

The July 6, 2013 news item on Nanotechnology Now (ordinarily, I’d quote from the University of Stuttgart press release which originated the Nanowerk and Nanotechnology Now news items but the university’s website seems to be experiencing technical problems) provides more details about treating pollution with ‘nanotechnology-enabled’ techniques and more information about NanoRem,

Nanotechnologies are particularly suited for treating groundwater aquifers but also contaminated soil at the site of the contamination (in situ). However, in remediation projects (reclamation of contaminated sites), they have only been used hesitantly, since effective and reliable application is not yet mature, the potential risks for the environment are difficult to assess, and nano-remediation is, in addition, comparatively expensive due to the still high manufacturing costs of nanoparticles. Nanotechnology, however, offers advantages: compared to the classic remediation processes, such as “Pump & Treat” (pumping off contaminated groundwater and cleaning it in a treatment plant) or chemical or microbiological in-situ remediation processes, the range of “treatable” contaminants is greater. In addition, a quick and targeted breakdown of pollutants can be achieved, for example also in industrial buildings without the production being interrupted. “Through nanotechnology we are expecting a significant improvement in the remediation service and the operational areas”, according to the Stuttgart coordinator Dr. Hans-Peter Koschitzky. This would not only be beneficial for the environment but would also be attractive from an economic point of view: the world market for the application of environmental nanotechnologies was estimated to be a total of six billion US Dollars in 2010.

Against this background, the scientists involved in NanoRem want to develop practical, efficient, safe and economical nanotechnologies for in-situ remediation, with the aim of enabling commercial use as well as wider application across Europe. The focus is on the best-suited nanotechnologies as well as favourably priced production techniques. For this purpose, questions on the mobility and reactivity of nanoparticles in the subsoil, as well as the possible risks for humans and the environment in particular, are to be investigated. A further aim is the provision of a comprehensive “tool box” for the planning and monitoring of the remediation as well as for verifying its success.

The Stuttgart researchers will be focusing on the use of nanoscale iron particles (aka nano zero valent iron, nZVI?; you can find out more about nZVI in my Mar. 20, 2012 posting) as per the news items,

The researchers from the Stuttgart Research Facility for Subsurface Remediation, VEGAS, are concentrating on the large-scale implementation of nano-iron particles within the project. Initially three large-scale tests are conducted: artificial aquifers are established with defined sand layers of various properties in large stainless steel containers in the experimental hall and flooded with groundwater. In each of these large-scale tests a defined source of pollution is incorporated, then various nanoparticles are injected. Probes in the container provide information on the concentrations of pollutants and nanoparticles as well as on the remediation progress at many sites in the aquifer. These tests are validated by Dutch and Italian partners with the help of a numerical groundwater flow and transport model. Finally, field tests at sites in need of remediation, with various requirement profiles, are conducted in several countries in Europe in order to verify the efficiency and profitability of nano-remediation. In particular, however, they also serve to build acceptance among public authorities and the public across Europe through transparency.

There is more information about the NanoRem project on the CORDIS website. The NanoRem website is currently (July 8, 2013) under construction but does offer more overview information on its landing page.

Nanodiamonds as imaging devices

Two different teams have recently published studies in Science magazine (Feb. 1, 2013 issue) about their work with nanodiamonds, flaws, and imaging in what seems to be a case of synchronicity as there are no obvious connections between the teams.

Sabrina Richards writes in her Jan. 31, 2013 article for The Scientist about the possibility of taking snapshots of molecules at some time in the future (Note: Links have been removed),

A minuscule diamond flaw—just two atoms different—could someday enable researchers to image single molecules without resorting to time-consuming and technically exacting X-ray crystallography. The new approach, published today (January 31 [sic]) in Science, relies on a single electron to detect perturbation in molecular magnetic fields, which can provide clues about the structures of proteins and other molecules.

The work was inspired by magnetic resonance imaging (MRI), which uses electromagnetic coils to detect the magnetic fields emitted by hydrogen atom protons.  But traditional MRI requires many trillions of protons to get a clear image—of a brain, for example—preventing scientists from visualizing anything much smaller than millimeters-wide structures. To detect just a few protons, such as those of a single molecule, scientists would need an atomic-scale sensor.

To construct such a sensor, physicists Daniel Rugar at IBM Research and David Awschalom at the University of California, Santa Barbara, turned to diamonds. A perfect diamond, made entirely of carbon atoms covalently bonded to each other, has no free electrons and therefore no magnetic properties, explained Hammel. But a special kind of defect, known as a nitrogen-vacancy (NV) center, confers unique magnetic properties.

Jyllian Kemsley’s Jan. 31, 2013 article for C&EN (Chemical and Engineering News) discusses the work from both teams and describes the technique they used,

To downscale NMR [the phenomenon underlying MRI], both groups used a detector made of diamond with a site defect called a single nitrogen-vacancy (NV) center, in which a nitrogen atom and a lattice hole replace two adjacent carbon atoms. Prior work had determined that NV centers are sensitive to the internal magnetic fields of the diamond. The new research demonstrates that the fluorescence of such centers can be used to detect magnetic fields emanating from just outside the diamond. Both groups were able to use NV centers to detect nuclear polarization of hydrogens in poly(methyl methacrylate) with a sample volume lower limit of about (5 nm)³. Further development is necessary to extract structural information.

Still, nothing much has happened with this technique, as Richards notes in her article,

So far, the study is “just a proof of principle,” noted Awschalom. The researchers haven’t actually imaged any molecules yet, but simply detected their presence. Still, Awschalom said, “we’ve shown it’s not a completely ridiculous idea to detect external nuclear magnetic fields with one electron.” …

Here’s a citation and a link to the article,

Nanoscale Nuclear Magnetic Resonance with a Nitrogen-Vacancy Spin Sensor by H. J. Mamin, M. Kim, M. H. Sherwood, C. T. Rettner, K. Ohno, D. D. Awschalom, D. Rugar. Science 1 February 2013: Vol. 339 no. 6119 pp. 557-560 DOI: 10.1126/science.1231540

The other research is described in a Feb. 14, 2013 news item on Azonano,

Magnetic resonance imaging (MRI) reveals details of living tissues, diseased organs and tumors inside the body without x-rays or surgery. What if the same technology could peer down to the level of atoms? Doctors could make visual diagnoses of a person’s molecules – examining damage on a strand of DNA, watching molecules misfold, or identifying a cancer cell by the proteins on its surface.

It is remarkably similar work, as Kemsley notes, and telling the two studies apart is not helped by the fact that the one-line description for both articles in Science magazine’s Table of Contents is identical. (One-line description: The optical response of the spin of a near-surface atomic defect in diamond can be used to sense proton magnetic fields.) The City College of New York’s Feb. 13, 2013 news release, which originated the Azonano news item about the other team, offers more details,

 … Dr. Carlos Meriles, associate professor of physics at The City College of New York, and an international team of researchers at the University of Stuttgart and elsewhere have opened the door for nanoscale MRI. They used tiny defects in diamonds to sense the magnetic resonance of molecules. They reported their results in the February 1 [2013] issue of Science.

“It is bringing MRI to a level comparable to an atomic force microscope,” said Professor Meriles, referring to the device that traces the contours of atoms or tugs on a molecule to measure its strength. A nanoscale MRI could display how a molecule moves without touching it.

“Standard MRI typically gets to a resolution of 100 microns,” about the width of a human hair, said Professor Meriles. “With extraordinary effort,” he said, “it can get down to about 10 microns” – the width of a couple of blood cells. Nanoscale MRI would have a resolution 1,000 to 10,000 times better.

To try to pick up magnetic resonance on such a small scale, the team took advantage of the spin of protons in an atom, a property usually used to investigate quantum computing. In particular, they used minute imperfections in diamonds.

Diamonds are crystals made up almost entirely of carbon atoms. When a nitrogen atom lodges next to a spot where a carbon atom is missing, however, it creates a defect known as a nitrogen-vacancy (NV) center.

“These imperfections turn out to have a spin – like a little compass – and have some remarkable properties,” noted Professor Meriles. In the last few years, researchers realized that these NV centers could serve as very sensitive sensors. They can pick up the magnetic resonance of nearby atoms in a cell, for example. But unlike the atoms in a cell, the NVs shine when a light is directed at them, signaling what their spin is. If you illuminate it with green light it flashes red back.

“It is a form of what is called optically detected magnetic resonance,” he said. Like a hiker flashing Morse code on a hillside, the sensor “sends back flashes to say it is alive and well.”

“The NV can also be thought of as an atomic magnet. You can manipulate the spin of that atomic magnet just like you do with MRI by applying a radio frequency or radio pulses,” Professor Meriles explained. The NV responds. Shine a green light at it when the spin is pointing up and it will respond with brighter red light. A down spin gives a dimmer red light.
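To make the “atomic magnet” picture concrete, here is the standard NV-centre arithmetic (textbook values, my addition rather than anything taken from either paper): the two spin resonances sit at the 2.87 GHz zero-field splitting plus and minus a Zeeman shift of roughly 28 MHz per millitesla, so the splitting between the two fluorescence dips directly gives the local magnetic field along the NV axis:

# Standard NV-center ODMR arithmetic (textbook numbers, not values from the
# papers above). The two spin resonances sit at D ± gamma*B, so the measured
# splitting between the two fluorescence dips gives the local magnetic field
# along the NV axis.
D_GHZ = 2.870                 # zero-field splitting of the NV ground state
GAMMA_MHZ_PER_MT = 28.0       # electron gyromagnetic ratio, ~28 MHz per mT

def field_from_odmr(f_low_ghz, f_high_ghz):
    """Axial magnetic field (mT) from the two ODMR dip frequencies (GHz)."""
    center = (f_low_ghz + f_high_ghz) / 2          # should sit near D_GHZ
    assert abs(center - D_GHZ) < 0.1, "dips not centred on the NV resonance"
    splitting_mhz = (f_high_ghz - f_low_ghz) * 1000.0
    return splitting_mhz / (2 * GAMMA_MHZ_PER_MT)

# Example: dips observed at 2.842 GHz and 2.898 GHz -> 56 MHz splitting -> 1 mT.
print(f"{field_from_odmr(2.842, 2.898):.2f} mT")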

In the lab, graduate student Tobias Staudacher — the first author in this work — used NVs that had been created just below the diamond’s surface by bombarding it with nitrogen atoms. The team detected magnetic resonance within a film of organic material applied to the surface, just as one might examine a thin film of cells or tissue.

“Ultimately,” said Professor Meriles, “One will use a nitrogen-vacancy mounted on the tip of an atomic force microscope – or an array of NVs distributed on the diamond surface – to allow a scanning view of a cell, for example, to probe nuclear spins with a resolution down to a nanometer or perhaps better.”

Here’s a citation and a link to this team’s study,

Nuclear Magnetic Resonance Spectroscopy on a (5-Nanometer)³ Sample Volume by T. Staudacher, F. Shi, S. Pezzagna, J. Meijer, J. Du, C. A. Meriles, F. Reinhard, J. Wrachtrup. Science 1 February 2013: Vol. 339 no. 6119 pp. 561-563 DOI: 10.1126/science.1231675

Both articles are behind paywalls.