
A bionic hybrid neurochip from the University of Calgary (Canada)

The University of Calgary is publishing some very exciting work these days as can be seen in my Sept. 21, 2016 posting about quantum teleportation. Today, the university announced this via an Oct. 26, 2016 news item on Nanowerk (Note: A link has been removed),

Brain functions are controlled by millions of brain cells. However, in order to understand how the brain controls functions, such as simple reflexes or learning and memory, we must be able to record the activity of large networks and groups of neurons. Conventional methods have allowed scientists to record the activity of neurons for minutes, but a new technology, developed by University of Calgary researchers, known as a bionic hybrid neurochip, is able to record activity in animal brain cells for weeks at a much higher resolution. The technological advancement was published in the journal Scientific Reports (“A novel bio-mimicking, planar nano-edge microelectrode enables enhanced long-term neural recording”).

There’s more from an Oct. 26, 2016 University of Calgary news release on EurekAlert, which originated the news item,

“These chips are 15 times more sensitive than conventional neuro chips,” says Naweed Syed, PhD, scientific director of the University of Calgary Cumming School of Medicine’s Alberta Children’s Hospital Research Institute, member of the Hotchkiss Brain Institute and senior author on the study. “This allows brain cell signals to be amplified more easily and to see real-time recordings of brain cell activity at a resolution that has never been achieved before.”

The development of this technology will allow researchers to investigate and understand in greater depth, in animal models, the origins of neurological diseases and conditions such as epilepsy, as well as other cognitive functions such as learning and memory.

“Recording this activity over a long period of time allows you to see changes that occur over time, in the activity itself,” says Pierre Wijdenes, a PhD student in the Biomedical Engineering Graduate Program and the study’s first author. “This helps to understand why certain neurons form connections with each other and why others won’t.”

The cross-faculty team created the chip to mimic the natural biological contact between brain cells, essentially tricking the brain cells into believing that they are connecting with other brain cells. As a result, the cells immediately connect with the chip, thereby allowing researchers to view and record the two-way communication that would go on between two normally functioning brain cells.

“We simulated what mother-nature does in nature and provided brain cells with an environment where they feel as if they are at home,” says Syed. “This has allowed us to increase the sensitivity of our readings and help neurons build a long-term relationship with our electronic chip.”

While the chip is currently used to analyze animal brain cells, this increased resolution and the ability to make long-term recordings is bringing the technology one step closer to being effective in the recording of human brain cell activity.

“Human brain cell signals are smaller and therefore require more sensitive electronic tools to be designed to pick up the signals,” says Colin Dalton, Adjunct Professor in the Department of Electrical and Computer Engineering at the Schulich School of Engineering and a co-author on this study. Dalton is also the Facility Manager of the University of Calgary’s Advanced Micro/nanosystems Integration Facility (AMIF), where the chips were designed and fabricated.

Researchers hope the technology will one day be used as a tool to bring personalized therapeutic options to patients facing neurological disease.

Here’s a link to and a citation for the paper,

A novel bio-mimicking, planar nano-edge microelectrode enables enhanced long-term neural recording by Pierre Wijdenes, Hasan Ali, Ryden Armstrong, Wali Zaidi, Colin Dalton & Naweed I. Syed. Scientific Reports 6, Article number: 34553 (2016) doi:10.1038/srep34553
Published online: 12 October 2016

This paper is open access.

News from Arizona State University’s The Frankenstein Bicentennial Project

I received a September 2016 newsletter (issued occasionally) from The Frankenstein Bicentennial Project at Arizona State University (ASU) which contained these two tidbits:

I, Artist

Bobby Zokaites converted a Roomba, a robotic vacuum, from a room-cleaning device into an art-maker by removing the dust collector and vacuuming system and replacing them with a paint reservoir. Artists have been playing with robots to make art since the 1950s. This work is an extension of a genre, repurposing a readily available commercial robot.

With this project, Bobby set out to create a self-portrait of a generation, one that grew up with access to a vast amount of information and constantly bombarded by advertisements. The Roomba paintings prove that a robot can paint a reasonably complex painting, and do it differently every time; thus this version of the Turing test was successful.

As in the story of Frankenstein, this work also interrogates questions of creativity and responsibility. Is this a truly creative work of art, and if so, who is the artist; man or machine?

Both the text description and the video are from: https://www.youtube.com/watch?v=0m5ihmwPWgY

Frankenstein at 200 Exhibit

From the September 2016 newsletter (Note: Links have been removed),

Just as the creature in Frankenstein [the monster is never named in the book; its creator, however, is Victor Frankenstein] was assembled from an assortment of materials, so too is the cultural understanding of the Frankenstein myth. Now a new, interdisciplinary exhibit at ASU Libraries examines how Mary Shelley’s 200-year-old science fiction story continues to inspire, educate, and frighten 21st century audiences.

Frankenstein at 200 is open now through December 10 on the first floor of ASU’s Hayden Library in Tempe, AZ.

Here’s more from the exhibit’s webpage on the ASU website,

No work of literature has done more to shape the way people imagine science and its moral consequences than “Frankenstein; or, The Modern Prometheus,” Mary Shelley’s enduring tale of creation and responsibility. The novel’s themes and tropes continue to resonate with contemporary audiences, influencing the way we confront emerging technologies, conceptualize the process of scientific research, and consider the ethical relationships between creators and their creations.

Two hundred years after Mary Shelley imagined the story that would become “Frankenstein,” ASU Libraries is exhibiting an interdisciplinary installation that contextualizes the conditions of the original tale while exploring its continued importance in our technological age. Featuring work by ASU faculty and students, this exhibition includes a variety of physical and digital artifacts, original art projects and interactive elements that examine “Frankenstein’s” colossal scientific, technological, cultural and social impacts.

About the Frankenstein Bicentennial Project: Launched by Drs. David Guston and Ed Finn in 2013, the Frankenstein Bicentennial Project is a global celebration of the bicentennial of the writing and publication of Mary Shelley’s Frankenstein, from 2016-2018. The project uses Frankenstein as a lens to examine the complex relationships between science, technology, ethics, and society. To learn more visit frankenstein.asu.edu and follow @FrankensteinASU on Twitter.

There are more informational tidbits at The Frankenstein Bicentennial Project website.

Discovering why nanoscale gold has catalytic properties

Gold’s glitter may have inspired poets and triggered wars, but its catalytic prowess has helped make chemical reactions greener and more efficient. (Image courtesy of iStock/sbayram) [downloaded from http://www1.lehigh.edu/news/scientists-uncover-secret-gold%E2%80%99s-catalytic-powers]

A Sept. 27, 2016 news item on phys.org describes a discovery made by scientists at Lehigh University (US),

Settling a decades-long debate, new research conclusively shows that a hierarchy of active species exists in gold on iron oxide catalysts designed for low-temperature carbon monoxide oxidation. Nanoparticles, sub-nanometer clusters and dispersed atoms—as well as how the material is prepared—are all important for determining catalytic activity.

A Sept. 27, 2016 Lehigh University news release by Lori Friedman, which originated the news item, provides more information about the discovery that gold nanoparticles can act as catalysts and about why that’s possible,

Christopher J. Kiely calls the 1982 discovery by Masatake Haruta that gold (Au) possessed a high level of catalytic activity for carbon monoxide (CO) oxidation when deposited on a metal-oxide “a remarkable turn of events in nanotechnology”—remarkable because gold had long been assumed to be inert for catalysis.

Haruta showed that gold dispersed on iron oxide effectively catalyzed the conversion of harmful carbon monoxide into more benign carbon dioxide (CO2) at room temperatures—a reaction that is critical for the construction of fire fighters’ breathing masks and for removal of CO from hydrogen feeds for fuel cells. In fact, today gold catalysts are being exploited in a major way for the greening of many important reactions in the chemical industry, because they can lead to cleaner, more efficient reactions with fewer by-products.
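For reference (my addition, not from the news release), the overall reaction being catalyzed here is the standard oxidation of carbon monoxide:

```latex
% Gold-catalyzed oxidation of carbon monoxide (balanced overall reaction)
2\,\mathrm{CO} + \mathrm{O}_2 \;\longrightarrow\; 2\,\mathrm{CO}_2
```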

Haruta and Graham J. Hutchings, who co-discovered the use of gold as a catalyst for different reactions, are noted as Thomson Reuters Citation Laureates and appear annually on the ScienceWatch Nobel Prize prediction list. Their pioneering work opened up a new area of scientific inquiry and kicked off a decades-long debate about which type of supported gold species is most effective for the CO oxidation reaction.

In 2008, using electron microscopy technology that was not yet available in the 1980s and ’90s, Hutchings, the director of the Cardiff Catalysis Institute at Cardiff University, worked with Kiely, the Harold B. Chambers Senior Professor of Materials Science and Engineering at Lehigh, to examine the structure of supported gold at the nanoscale. One nanometer (nm) is equal to one one-billionth of a meter, or about the diameter of five atoms.

Using what was then a rare piece of equipment—Lehigh’s aberration-corrected JEOL 2200 FS scanning transmission electron microscope (STEM)—the team identified the co-existence of three distinct gold species: facetted nanoparticles larger than one nanometer in size, sub-clusters containing fewer than 20 atoms and individual gold atoms strewn over the support. Because only the larger gold nanoparticles had previously been detected, this created debate as to which of these species were responsible for the good catalytic behavior.

Haruta, professor of applied chemistry at Tokyo Metropolitan University, Hutchings and Kiely have been working collaboratively on this problem over recent years and are now the first to demonstrate conclusively that it is not the particles or the individual atoms or the clusters which are solely responsible for the catalysis—but that they all contribute to different degrees. Their results have been published in an article in Nature Communications titled: “Population and hierarchy of active species in gold iron oxide catalysts for carbon monoxide oxidation.”

“All of the species tend to co-exist in conventionally prepared catalysts and show some level of activity,” says Kiely. “They all do something—but some less efficiently than others.”

Their research revealed the sub-nanometer clusters and 1–3 nm nanoparticles to be the most efficient for catalyzing this CO oxidation reaction, while larger particles were less so and the atoms even less. Nevertheless, Kiely cautions, all the species present need to be considered to fully explain the overall measured activity of the catalyst.

Among the team’s other key findings: the measured activity of gold on iron oxide catalysts is exquisitely dependent on exactly how the material is prepared. Very small changes in synthesis parameters influence the relative proportion and spatial distribution of these various Au species on the support material and thus have a big impact on its overall catalytic performance.

A golden opportunity

Building on their earlier work (published in a 2008 Science article), the team sought to find a robust way to quantitatively analyze the relative population distributions of nanoparticles of various sizes, sub-nm clusters and highly dispersed atoms in a given gold on iron oxide sample. By correlating this information with catalytic performance measurements, they then hoped to determine which species distribution would be optimal to produce the most efficient catalyst, in order to utilize the precious gold component in the most cost-effective way.

Ultimately, it was a catalyst synthesis problem the team faced that offered them a golden opportunity to do just that.

During the collaboration, Haruta’s and Hutchings’ teams each prepared gold on iron oxide samples in their home labs in Tokyo and Cardiff. Even though both groups nominally utilized the same ‘co-precipitation’ synthesis method, it turned out that a final heat treatment step was beneficial to the catalytic performance for one set of materials but detrimental to the other. This observation provided a fascinating scientific conundrum that detailed electron microscopy studies, performed by Qian He, one of Kiely’s PhD students at the time, were key to solving. Qian He is now a University Research Fellow at Cardiff University leading their electron microscopy effort.

“In the end, there were subtle differences in the order and speed in which each group added in their ingredients while preparing the material,” explains He. “When examined under the electron microscope, it was clear that the two slightly different methods produced quite different distributions of particles, clusters and dispersed atoms on the support.”

“Very small variations in the preparation route or thermal history of the sample can alter the relative balance of supported gold nanoparticles-to-clusters-to-atoms in the material and this manifests itself in the measured catalytic activity,” adds Kiely.

The group was able to compare this set of materials and correlate the Au species distributions with catalytic performance measurements, ultimately identifying the species distribution that was associated with greater catalytic efficiency.

Now that the team has identified the catalytic activity hierarchy associated with these supported gold species, the next step, says Kiely, will be to modify the synthesis method to positively influence that distribution to optimize the catalyst performance while making the most efficient use of the precious gold metal content.

“As a next stage to this study we would like to be able to observe gold on iron oxide materials in-situ within the electron microscope while the reaction is happening,” says Kiely.

Once again, it is next generation microscopy facilities that may hold the key to fulfilling gold’s promise as a pivotal player in green technology.

The new Nature Communications paper is already linked in the news release; here’s a link to and a citation for the team’s earlier 2008 Science paper,

Identification of Active Gold Nanoclusters on Iron Oxide Supports for CO Oxidation by Andrew A. Herzing, Christopher J. Kiely, Albert F. Carley, Philip Landon, Graham J. Hutchings. Science  05 Sep 2008: Vol. 321, Issue 5894, pp. 1331-1335 DOI: 10.1126/science.1159639

This paper is behind a paywall but free access can be gained if you register (for free) with Science.

Wearable microscopes

It never occurred to me that someone might want a wearable microscope but, apparently, there is a need. From a Sept. 27, 2016 news item on phys.org,

UCLA [University of California at Los Angeles] researchers working with a team at Verily Life Sciences have designed a mobile microscope that can detect and monitor fluorescent biomarkers inside the skin with a high level of sensitivity, an important tool in tracking various biochemical reactions for medical diagnostics and therapy.

A Sept. 26, 2016 UCLA news release by Meghan Steele Horan, which originated the news item, describes the work in more detail,

This new system weighs less than one-tenth of a pound, making it small and light enough for a person to wear around their bicep, among other parts of their body. In the future, technology like this could be used for continuous patient monitoring at home or at point-of-care settings.

The research, which was published in the journal ACS Nano, was led by Aydogan Ozcan, UCLA’s Chancellor’s Professor of Electrical Engineering and Bioengineering and associate director of the California NanoSystems Institute, and Vasiliki Demas of Verily Life Sciences (formerly Google Life Sciences).

Fluorescent biomarkers are routinely used for cancer detection and drug delivery and release among other medical therapies. Recently, biocompatible fluorescent dyes have emerged, creating new opportunities for noninvasive sensing and measuring of biomarkers through the skin.

However, detecting artificially added fluorescent objects under the skin is challenging. Collagen, melanin and other biological structures emit natural light in a process called autofluorescence. Various methods have been tried to investigate this problem using different sensing systems. Most are quite expensive and difficult to make small and cost-effective enough to be used in a wearable imaging system.

To test the mobile microscope, researchers first designed a tissue phantom — an artificially created material that mimics human skin optical properties, such as autofluorescence, absorption and scattering. The target fluorescent dye solution was injected into a micro-well with a volume of about one-hundredth of a microliter, thinner than a human hair, and subsequently implanted into the tissue phantom half a millimeter to 2 millimeters from the surface — which would be deep enough to reach blood and other tissue fluids in practice.

To measure the fluorescent dye, the wearable microscope created by Ozcan and his team used a laser to hit the skin at an angle. The fluorescent image at the surface of the skin was captured via the wearable microscope. The image was then uploaded to a computer where it was processed using a custom-designed algorithm, digitally separating the target fluorescent signal from the autofluorescence of the skin, at a very sensitive parts-per-billion level of detection.
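The custom algorithm itself isn’t spelled out in the news release, but the general idea of separating a localized fluorescent signal from a slowly varying autofluorescent background can be illustrated with a minimal sketch. This is entirely my own toy example, not the authors’ method; the image size, blur widths and signal levels are invented:

```python
# Illustrative toy example only: a generic background-subtraction scheme,
# NOT the custom algorithm described in the paper.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Synthetic 200x200 "skin" image: a smooth, bright autofluorescence
# background plus a weak, localized fluorescent target.
background = gaussian_filter(rng.normal(100.0, 5.0, (200, 200)), sigma=30)
image = background.copy()
image[95:105, 95:105] += 3.0  # faint implanted-dye signal

# Estimate the slowly varying autofluorescence with a wide blur, then
# subtract it so the localized target stands out.
estimated_bg = gaussian_filter(image, sigma=25)
residual = image - estimated_bg

print("mean residual at target:", residual[95:105, 95:105].mean())
```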

“We can place various tiny bio-sensors inside the skin next to each other, and through our imaging system, we can tell them apart,” Ozcan said. “We can monitor all these embedded sensors inside the skin in parallel, even understand potential misalignments of the wearable imager and correct it to continuously quantify a panel of biomarkers.”

This computational imaging framework might also be used in the future to continuously monitor various chronic diseases through the skin using an implantable or injectable fluorescent dye.

Here’s a link to and a citation for the paper,

Quantitative Fluorescence Sensing Through Highly Autofluorescent, Scattering, and Absorbing Media Using Mobile Microscopy by Zoltán Göröcs, Yair Rivenson, Hatice Ceylan Koydemir, Derek Tseng, Tamara L. Troy, Vasiliki Demas, and Aydogan Ozcan. ACS Nano, 2016, 10 (9), pp 8989–8999 DOI: 10.1021/acsnano.6b05129 Publication Date (Web): September 13, 2016


This paper is behind a paywall.

Removing gender-based stereotypes from algorithms

Most people don’t think of algorithms as having biases and stereotypes but James Zou in his Sept. 26, 2016 essay for The Conversation (h/t phys.org Sept. 26, 2016 news item) says otherwise, Note: Links have been removed,

Machine learning is ubiquitous in our daily lives. Every time we talk to our smartphones, search for images or ask for restaurant recommendations, we are interacting with machine learning algorithms. They take as input large amounts of raw data, like the entire text of an encyclopedia, or the entire archives of a newspaper, and analyze the information to extract patterns that might not be visible to human analysts. But when these large data sets include social bias, the machines learn that too.

A machine learning algorithm is like a newborn baby that has been given millions of books to read without being taught the alphabet or knowing any words or grammar. The power of this type of information processing is impressive, but there is a problem. When it takes in the text data, a computer observes relationships between words based on various factors, including how often they are used together.

We can test how well the word relationships are identified by using analogy puzzles. Suppose I ask the system to complete the analogy “He is to King as She is to X.” If the system comes back with “Queen,” then we would say it is successful, because it returns the same answer a human would.

Our research group trained the system on Google News articles, and then asked it to complete a different analogy: “Man is to Computer Programmer as Woman is to X.” The answer came back: “Homemaker.”
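Under the hood, such analogy puzzles are usually answered with simple vector arithmetic over learned word embeddings. Here is a toy sketch with a tiny hand-built embedding; the vectors are my own invention for illustration, not the Google News embedding used in the study:

```python
import numpy as np

# Toy 3-d embedding, hand-built for illustration; real systems (e.g.
# word2vec trained on Google News) learn ~300-d vectors from text.
emb = {
    "man":        np.array([ 1.0, 0.0, 0.2]),
    "woman":      np.array([-1.0, 0.0, 0.2]),
    "king":       np.array([ 1.0, 1.0, 0.3]),
    "queen":      np.array([-1.0, 1.0, 0.3]),
    "programmer": np.array([ 0.6, 0.1, 0.9]),  # gender-skewed on axis 0
    "homemaker":  np.array([-0.8, 0.1, 0.5]),
}

def analogy(a, b, c):
    """Answer 'a is to b as c is to ?' via the nearest neighbour of b - a + c."""
    target = emb[b] - emb[a] + emb[c]
    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    return max((w for w in emb if w not in (a, b, c)),
               key=lambda w: cos(emb[w], target))

print(analogy("man", "king", "woman"))        # -> queen
print(analogy("man", "programmer", "woman"))  # -> homemaker (learned bias)
```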

Zou explains how a machine (algorithm) learns and then notes this,

Not only can the algorithm reflect society’s biases – demonstrating how much those biases are contained in the input data – but the system can potentially amplify gender stereotypes. Suppose I search for “computer programmer” and the search program uses a gender-biased database that associates that term more closely with a man than a woman.

The search results could come back flawed by the bias. Because “John” as a male name is more closely related to “computer programmer” than the female name “Mary” in the biased data set, the search program could evaluate John’s website as more relevant to the search than Mary’s – even if the two websites are identical except for the names and gender pronouns.

It’s true that the biased data set could actually reflect factual reality – perhaps there are more “Johns” who are programmers than there are “Marys” – and the algorithms simply capture these biases. This does not absolve the responsibility of machine learning in combating potentially harmful stereotypes. The biased results would not just repeat but could even boost the statistical bias that most programmers are male, by moving the few female programmers lower in the search results. It’s useful and important to have an alternative that’s not biased.

According to Zou, there is a way to remove the stereotypes,

Our debiasing system uses real people to identify examples of the types of connections that are appropriate (brother/sister, king/queen) and those that should be removed. Then, using these human-generated distinctions, we quantified the degree to which gender was a factor in those word choices – as opposed to, say, family relationships or words relating to royalty.

Next we told our machine-learning algorithm to remove the gender factor from the connections in the embedding. This removes the biased stereotypes without reducing the overall usefulness of the embedding.

When that is done, we found that the machine learning algorithm no longer exhibits blatant gender stereotypes. We are investigating applying related ideas to remove other types of biases in the embedding, such as racial or cultural stereotypes.
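The neutralizing step described above amounts to projecting word vectors onto the subspace orthogonal to a learned gender direction. A standalone numpy sketch, again with invented toy vectors; the full published method also equalizes definitional pairs and is more involved:

```python
import numpy as np

# Standalone sketch of the projection ("neutralize") step. The toy
# vectors and word list are invented; real debiasing operates on
# learned ~300-d embeddings.
emb = {
    "man":        np.array([ 1.0, 0.0, 0.2]),
    "woman":      np.array([-1.0, 0.0, 0.2]),
    "programmer": np.array([ 0.6, 0.1, 0.9]),
    "homemaker":  np.array([-0.8, 0.1, 0.5]),
}

# Gender direction from one definitional pair (real work averages over
# many human-identified pairs such as he/she, brother/sister).
g = emb["man"] - emb["woman"]
g /= np.linalg.norm(g)

def neutralize(v, direction):
    """Remove the component of v along the bias direction."""
    return v - (v @ direction) * direction

for word in ("programmer", "homemaker"):  # gender-neutral occupations
    emb[word] = neutralize(emb[word], g)
    print(word, "gender component:", round(float(emb[word] @ g), 6))  # ~0
```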

If you have time, I encourage you to read the essay in its entirety and this June 14, 2016 posting about research into algorithms and how they make decisions for you about credit, medical diagnoses, job opportunities and more.

There’s also an Oct. 24, 2016 article by Michael Light on Salon.com on the topic (Note: Links have been removed),

In a recent book that was longlisted for the National Book Award, Cathy O’Neil, a data scientist, blogger and former hedge-fund quant, details a number of flawed algorithms to which we have given incredible power — she calls them “Weapons of Math Destruction.” We have entrusted these WMDs to make important, potentially life-altering decisions, yet in many cases, they embed human race and class biases; in other cases, they don’t function at all.
Among other examples, O’Neil examines a “value-added” model New York City used to decide which teachers to fire, even though, she writes, the algorithm was useless, functioning essentially as a random number generator, arbitrarily ending careers. She looks at models put to use by judges to assign recidivism scores to inmates that ended up having a racist inclination. And she looks at how algorithms are contributing to American partisanship, allowing political operatives to target voters with information that plays to their existing biases and fears.

I recommend reading Light’s article in its entirety.

Uganda and emerging technology

Matsiko Kahunga’s Sept. 26, 2016 piece from The Monitor (Uganda: Are We Hunter-Gatherers or a Nanotechnology Economy?) on allafrica.com provides some intriguing insight,

Our teacher of Agriculture in lower secondary school, (I can only remember his moniker: we called him Boxer) had a very intriguing definition of land, which we may today find instructive as the land question in Uganda rears its ugly head again. From his various definitions of land, what emerges is that land will mean different things to different people. Thus, to an aeropilot, land is a hard, flat surface onto which airports can be built to enable safe take off and landing; while to an equatorial forest hunter-gatherer, land is that lush green environment where fruits, berries and roots are ever in abundance and game animals plentiful. To the sedentary arable farmer, land is that medium in which crops can grow…it is useful if it can support crop life, and it is useless if it cannot support crop life.

The land question is up again. And already tempers are high and rising, building on the earlier intermittent squabbles across the country. Perhaps a simple reflection may send us rethinking our perception of land: does land mean the same thing to all Ugandans? If we are on the path to industrialisation as we ought to, does land in an industrial country carry the same meaning and importance it carries in a subsistence economy?

Kahunga then recounts this story,

A friend who recently returned from a tour of duty with a UN agency in an Asian Tiger, tells me that he lived on the 17th floor of an 81-storey skyscraper, which is basically a self-contained town: besides residential flats, the entire height of the building is punctuated by public arenas, kindergartens, shopping malls, clinics, temples, office blocks, police stations, municipal council and related services.

He then contrasts it with Seoul,

Another instructive case is Seoul, the South Korean capital. The Seoul National Capital Area houses 25 million people (as of 2012).

This is over half the population of South Korea, living on 0.6 per cent of the country’s land area, and generating 21 per cent of the country’s GDP (Leahy, 2012). Twenty five million is 73 per cent of Uganda’s population (2012 figures) or Burundi and Rwanda combined.

I am struck by the similarities between Kahunga’s depiction of Uganda’s issues and the current heated discussions about land use and density in Vancouver (Canada), not to mention our national climate change issues,

The tokenism of ‘carbon-fund’, ‘green development’ ‘mainstreaming’…, typical of conferences will not save us. Uganda is best placed to pioneer green industrial development with not only minimal impact on the climate, but also a reversal of the current catastrophe: plastic-choked soils, drying marshlands and river beds, changing season patterns and melting Rwenzori glaciers.

And no one is safe from this pending catastrophe: rich or poor, investor or squatter, powerful or powerless . …

Thought-provoking, eh?

Germany has released a review of its research strategy for nanomaterials

A Sept. 24, 2016 posting by Lynn L. Bergeson and Carla N. Hutton on The National Law Review blog features a new report from German authorities (Note: A link has been removed),

On September 19, 2016, the Federal Institute for Occupational Safety and Health (BAuA) published a report entitled Review of the joint research strategy of the higher federal authorities — Nanomaterials and other advanced materials:  Application safety and environmental compatibility.  The report states that in a long-term research strategy, the higher federal authorities responsible for human and environmental safety — the German Environment Agency (UBA), the Federal Institute for Risk Assessment (BfR), BAuA, the Federal Institute for Materials Research and Testing (BAM), and the National Metrology Institute (PTB) — are accompanying the rapid pace of development of new materials from the points of view of occupational safety and health, consumer protection, and environmental protection.

Here’s a link to Review of the joint research strategy of the higher federal authorities — Nanomaterials and other advanced materials:  Application safety and environmental compatibility (PDF) and excerpts from the foreword (Note: There are some differences in formatting between what you see here and what you’ll see in the report),

The research strategy builds on the outcomes so far of the joint research strategy of the higher federal authorities launched in 2008 and first evaluated in 2013, “Nanotechnology: Health and Environmental Risks of Nanomaterials” [1], while additionally covering other advanced materials where these pose similar risks to humans and the environment or where such risks need to be studied. It also takes up the idea of application safety of chemical products [2] from the New Quality of Work (INQA) initiative of the Federal Ministry of Labour and Social Affairs (BMAS) and the concept of sustainable chemistry [3] endorsed by the Federal Ministry for the Environment, Nature Conservation, Building and Nuclear Safety (BMUB). Application safety and environmental compatibility are the aims for advanced materials and derived products, in order to largely rule out unacceptable risks to humans and the environment. This can be achieved by:

Using safe materials without hazardous properties for humans and the environment (direct application safety); or

Product design for low emissions and environmental compatibility over the entire product lifecycle (integrated application safety); or

Product stewardship, where producers support users in taking technical, organizational, and personal safety measures for the safe use and disposal of products (supported application safety).

As a constituent part of the Federal Government’s Nanotechnology Action Plan 2020, the update of the joint research strategy aims to contribute to governmental research in the following main areas:

- Characterising and assessing the human and environmental risks of advanced materials
- Supporting research institutions and business enterprises
- Science-based revision of legal requirements and recommendations
- Public acceptance

The research strategy is to be implemented in projects and other research-related activities. These include governmental research, tendering and extramural research funding, and participation in mostly publicly supported projects with third-party funding. Additional activities will take place as part of policy advice and the ongoing sovereign tasks of the agencies involved. Interdisciplinary and transdisciplinary approaches will be used to better connect risk and safety research with innovation research and material development. In keeping with the rapid pace of development, the time horizon for the research strategy is up to 2020. The research objectives address the research approaches likely to be actionable in this period. The research strategy will be supported by a working group and will be evaluated and revised by the end of the Nanotechnology Action Plan 2020.

It’s always interesting to find out what’s happening elsewhere.

Tiny sensors produced by nanoscale 3D printing could lead to new generation of atomic force microscopes

A Sept. 26, 2016 news item on Nanowerk features research into producing smaller sensors for atomic force microscopes (AFMs) to achieve greater sensitivity,

Tiny sensors made through nanoscale 3D printing may be the basis for the next generation of atomic force microscopes. These nanosensors can enhance the microscopes’ sensitivity and detection speed by miniaturizing their detection component up to 100 times. The sensors were used in a real-world application for the first time at EPFL, and the results are published in Nature Communications.

A Sept. 26, 2016 École Polytechnique Fédérale de Lausanne (EPFL; Switzerland) press release by Laure-Anne Pessina, which originated the news item, expands on the theme (Note: A link has been removed),

Atomic force microscopy is based on powerful technology that works a little like a miniature turntable. A tiny cantilever with a nanometric tip passes over a sample and traces its relief, atom by atom. The tip’s infinitesimal up-and-down movements are picked up by a sensor so that the sample’s topography can be determined. (…)

One way to improve atomic force microscopes is to miniaturize the cantilever, as this will reduce inertia, increase sensitivity, and speed up detection. Researchers at EPFL’s Laboratory for Bio- and Nano-Instrumentation achieved this by equipping the cantilever with a 5-nanometer thick sensor made with a nanoscale 3D-printing technique. “Using our method, the cantilever can be 100 times smaller,” says Georg Fantner, the lab’s director.

Electrons that jump over obstacles

The nanometric tip’s up-and-down movements can be measured through the deformation of the sensor placed at the fixed end of the cantilever. But because the researchers were dealing with minute movements – smaller than an atom – they had to pull a trick out of their hat.

Together with Michael Huth’s lab at Goethe Universität at Frankfurt am Main, they developed a sensor made up of highly conductive platinum nanoparticles surrounded by an insulating carbon matrix. Under normal conditions, the carbon isolates the electrons. But at the nano-scale, a quantum effect comes into play: some electrons jump through the insulating material and travel from one nanoparticle to the next. “It’s sort of like if people walking on a path came up against a wall and only the courageous few managed to climb over it,” said Fantner.

When the shape of the sensor changes, the nanoparticles move further away from each other and the electrons jump between them less frequently. Changes in the current thus reveal the deformation of the sensor and the composition of the sample.
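To get a feel for why this mechanism makes such a sensitive strain gauge, here is a back-of-the-envelope model. It is my own simplification with guessed parameter values, not the paper’s analysis: inter-particle tunnelling current falls off exponentially with gap width, so even a tiny strain shifts the current measurably.

```python
import numpy as np

# Toy model of a nanogranular tunnelling strain sensor (my simplification,
# not the published analysis). Current between neighbouring platinum
# grains decays exponentially with the insulating gap width s:
#   I(s) ~ I0 * exp(-2 * kappa * s)
# so stretching the gaps by a strain eps changes the current by roughly
#   dI/I ~ -2 * kappa * s0 * eps.
kappa = 1.0e10   # assumed tunnelling decay constant, 1/m (order-of-magnitude guess)
s0 = 1.0e-9      # assumed mean inter-particle gap, 1 nm

def relative_current_change(eps):
    """Exact change for a gap of s0 * (1 + eps) under the exponential model."""
    return np.exp(-2 * kappa * s0 * eps) - 1

for eps in (1e-5, 1e-4, 1e-3):
    print(f"strain {eps:g} -> dI/I = {relative_current_change(eps):.3%}")
```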

Tailor-made sensors

The researchers’ real feat was in finding a way to produce these sensors in nanoscale dimensions while carefully controlling their structure and, by extension, their properties. “In a vacuum, we distribute a precursor gas containing platinum and carbon atoms over a substrate. Then we apply an electron beam. The platinum atoms gather and form nanoparticles, and the carbon atoms naturally form a matrix around them,” said Maja Dukic, the article’s lead author. “By repeating this process, we can build sensors with any thickness and shape we want. We have proven that we could build these sensors and that they work on existing infrastructures. Our technique can now be used for broader applications, ranging from biosensors, ABS sensors for cars, to touch sensors on flexible membranes in prosthetics and artificial skin.”

Here’s a link to and a citation for the paper,

Direct-write nanoscale printing of nanogranular tunnelling strain sensors for sub-micrometre cantilevers by Maja Dukic, Marcel Winhold, Christian H. Schwalb, Jonathan D. Adams, Vladimir Stavrov, Michael Huth, & Georg E. Fantner. Nature Communications 7, Article number: 12487 doi:10.1038/ncomms12487 Published 26 September 2016

This is an open access paper.

The Nine Dots Prize competition for creative thinking on social issues

A new prize is being inaugurated, the US$100,000 Nine Dots Prize for creative thinking, and it’s open to anyone anywhere in the world. Here’s more from an Oct. 21, 2016 article by Jane Tinkler for the Guardian (Note: Links have been removed),

In the debate over this year’s surprise award to Bob Dylan, it is easy to lose sight of the long history of prizes being used to recognise great writing (in whatever form), great research and other outstanding achievements.

The use of prizes dates back furthest in the sciences. In 1714, the British government famously offered an award of £20,000 (about £2.5 million at today’s value) to the person who could find a way of determining a ship’s longitude. British clockmaker John Harrison won the Longitude Prize and, by doing so, improved the safety of long-distance sea travel.

Prizes are now proliferating. Since 2000, more than sixty prizes of more than $100,000 (US dollars) have been created, and the field of philanthropic prize-giving is estimated to exceed £1 billion each year. Prizes are seen as ways to reward excellence, build networks, support collaboration and direct efforts towards practical and social goals. Those awarding them include philanthropists, governments and companies.

Today [Oct. 21, 2016] sees the launch of the newest kid on the prize-giving block. Drawing its name from a puzzle that can be solved only by lateral thinking, the Nine Dots prize wants to encourage creative thinking and writing that can help to tackle social problems. It is sponsored by the Kadas Prize Foundation, with the support of the Centre for Research in the Arts, Social Sciences and Humanities (CRASSH) at the University of Cambridge, and Cambridge University Press.

The Nine Dots prize is a hybrid of [three types of prizes]. There is a recognition [emphasis mine] aspect, but it doesn’t require an extensive back catalogue. The prize will be judged by a board of twelve renowned scholars, thinkers and writers. They will assess applications on an anonymised basis, so whoever wins will have done so not because of past work, but because of the strength of their ideas, and ability to communicate them effectively.

It is an incentive [emphasis mine] prize in that we ask applicants to respond to a defined question. The inaugural question is: “Are digital technologies making politics impossible?” [emphasis mine]. This is not prescriptive: applicants are encouraged to define what the question means to them, and to respond to that. We expect the submissions to be wildly varied. A new question will be set every two years, always with a focus on pressing issues that affect society. The prize’s disciplinary heartland lies in the social sciences, but responses from all fields, sectors and life experiences are welcome.

Finally, it is a resource [emphasis mine] prize in that it does not expect all the answers at the point of application. Applicants need to provide a 3,000-word summary of how they would approach the question. Board members will assess these, and the winner will then be invited to write their ideas up into a short, accessible book that will be published by Cambridge University Press. A prize award of $100,000 (£82,000) will support the winner to take time out to think and write over a nine-month period. The winner will also have the option of a term’s visiting fellowship at the University of Cambridge, to help with the writing process.

With this mix of elements, we hope the Nine Dots prize will encourage creative thinking about some of today’s most pressing issues. The winner’s book will be made freely accessible online; we hope it will capture the public’s imagination and spark a real debate.

The submission deadline is Jan. 31, 2017 and the winner announcement is May 2017. The winner’s book is to be published May 2018.

Good Luck! You can find out more about the prize and the contest rules on The Nine Dots Prize website.

Creating multiferroic material at room temperature

A Sept. 23, 2016 news item on ScienceDaily describes some research from Cornell University (US),

Multiferroics — materials that exhibit both magnetic and electric order — are of interest for next-generation computing but difficult to create because the conditions conducive to each of those states are usually mutually exclusive. And in most multiferroics found to date, their respective properties emerge only at extremely low temperatures.

Two years ago, researchers in the labs of Darrell Schlom, the Herbert Fisk Johnson Professor of Industrial Chemistry in the Department of Materials Science and Engineering, and Dan Ralph, the F.R. Newman Professor in the College of Arts and Sciences, in collaboration with professor Ramamoorthy Ramesh at UC Berkeley, published a paper announcing a breakthrough in multiferroics involving the only known material in which magnetism can be controlled by applying an electric field at room temperature: the multiferroic bismuth ferrite.

Schlom’s group has partnered with David Muller and Craig Fennie, professors of applied and engineering physics, to take that research a step further: The researchers have combined two non-multiferroic materials, using the best attributes of both to create a new room-temperature multiferroic.

Their paper, “Atomically engineered ferroic layers yield a room-temperature magnetoelectric multiferroic,” was published — along with a companion News & Views piece — Sept. 22 [2016] in Nature. …

A Sept. 22, 2016 Cornell University news release by Tom Fleischman, which originated the news item, details more about the work (Note: A link has been removed),

The group engineered thin films of hexagonal lutetium iron oxide (LuFeO3), a material known to be a robust ferroelectric but not strongly magnetic. The LuFeO3 consists of alternating single monolayers of lutetium oxide and iron oxide, and differs from a strong ferrimagnetic oxide (LuFe2O4), which consists of alternating monolayers of lutetium oxide with double monolayers of iron oxide.

The researchers found, however, that they could combine these two materials at the atomic scale to create a new compound that was not only multiferroic but had better properties than either of the individual constituents. In particular, they found they needed to add just one extra monolayer of iron oxide to every 10 atomic repeats of the LuFeO3 to dramatically change the properties of the system.

That precision engineering was done via molecular-beam epitaxy (MBE), a specialty of the Schlom lab. A technique Schlom likens to “atomic spray painting,” MBE let the researchers design and assemble the two different materials in layers, a single atom at a time.
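As a pure bookkeeping sketch of the stacking sequence the release describes (one extra iron oxide monolayer after every 10 LuFeO3 atomic repeats), here is a short illustration; the layer labels and counts are schematic, not an actual MBE growth recipe:

```python
# Schematic of the superlattice stacking described above: insert one
# extra FeO monolayer after every 10 repeats of the LuFeO3 unit, where
# each repeat is one LuO monolayer plus one FeO monolayer. The extra
# FeO layer creates a local LuFe2O4-like double-iron-oxide block.
def stacking(repeats=10, periods=2):
    layers = []
    for _ in range(periods):
        for _ in range(repeats):
            layers += ["LuO", "FeO"]  # one LuFeO3 atomic repeat
        layers += ["FeO"]             # extra FeO monolayer
    return layers

seq = stacking()
print(seq[:8], "...")
print("FeO fraction:", round(seq.count("FeO") / len(seq), 3))
```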

The combination of the two materials produced a strongly ferrimagnetic layer near room temperature. They then tested the new material at the Lawrence Berkeley National Laboratory (LBNL) Advanced Light Source in collaboration with co-author Ramesh to show that the ferrimagnetic atoms followed the alignment of their ferroelectric neighbors when switched by an electric field.

“It was when our collaborators at LBNL demonstrated electrical control of magnetism in the material that we made that things got super exciting,” Schlom said. “Room-temperature multiferroics are exceedingly rare and only multiferroics that enable electrical control of magnetism are relevant to applications.”

In electronics devices, the advantages of multiferroics include their reversible polarization in response to low-power electric fields – as opposed to heat-generating and power-sapping electrical currents – and their ability to hold their polarized state without the need for continuous power. High-performance memory chips make use of ferroelectric or ferromagnetic materials.

“Our work shows that an entirely different mechanism is active in this new material,” Schlom said, “giving us hope for even better – higher-temperature and stronger – multiferroics for the future.”

Collaborators hailed from the University of Illinois at Urbana-Champaign, the National Institute of Standards and Technology, the University of Michigan and Penn State University.

Here is a link and a citation to the paper and to a companion piece,

Atomically engineered ferroic layers yield a room-temperature magnetoelectric multiferroic by Julia A. Mundy, Charles M. Brooks, Megan E. Holtz, Jarrett A. Moyer, Hena Das, Alejandro F. Rébola, John T. Heron, James D. Clarkson, Steven M. Disseler, Zhiqi Liu, Alan Farhan, Rainer Held, Robert Hovden, Elliot Padgett, Qingyun Mao, Hanjong Paik, Rajiv Misra, Lena F. Kourkoutis, Elke Arenholz, Andreas Scholl, Julie A. Borchers, William D. Ratcliff, Ramamoorthy Ramesh, Craig J. Fennie, Peter Schiffer et al. Nature 537, 523–527 (22 September 2016) doi:10.1038/nature19343 Published online 21 September 2016

Condensed-matter physics: Multitasking materials from atomic templates by Manfred Fiebig. Nature 537, 499–500 (22 September 2016) doi:10.1038/537499a Published online 21 September 2016

Both the paper and its companion piece are behind a paywall.