Tag Archives: IEEE

From the memristor to the atomristor?

I’m going to let Michael Berger explain the memristor (from Berger’s Jan. 2, 2017 Nanowerk Spotlight article),

In trying to bring brain-like (neuromorphic) computing closer to reality, researchers have been working on the development of memory resistors, or memristors, which are resistors in a circuit that ‘remember’ their state even if you lose power.

Today, most computers use random access memory (RAM), which moves very quickly as a user works but does not retain unsaved data if power is lost. Flash drives, on the other hand, store information when they are not powered but work much slower. Memristors could provide a memory that is the best of both worlds: fast and reliable.
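The “remembering” behaviour Berger describes can be sketched in a few lines of code. The toy model below is a rough take on the textbook linear ion-drift memristor model; every parameter value is an illustrative assumption, not data from any real device:

```python
# Toy memristor: resistance depends on the history of applied current
# and persists when power is removed. Linear ion-drift style model;
# all parameter values are illustrative, not real device data.

R_ON, R_OFF = 100.0, 16000.0  # low/high resistance limits (ohms)
DT = 1e-3                     # timestep (seconds)
MOBILITY = 1e4                # drift rate of the internal state (illustrative)

def simulate(voltages, w=0.0):
    """Step the internal state w (0 = fully OFF, 1 = fully ON) through
    a sequence of applied voltages; return final state and resistances."""
    resistances = []
    for v in voltages:
        r = w * R_ON + (1.0 - w) * R_OFF   # resistance mixes the two limits
        i = v / r                          # current through the device
        w = min(1.0, max(0.0, w + MOBILITY * i * DT))  # state drifts with charge
        resistances.append(r)
    return w, resistances

# Apply a positive voltage pulse: the state drifts and resistance drops.
w_after, rs = simulate([1.0] * 5000)

# Remove power (v = 0): no current flows, so the state -- and therefore
# the resistance -- stays put. That persistence is the "memory",
# unlike RAM, which loses its contents without power.
w_idle, rs_idle = simulate([0.0] * 1000, w=w_after)
```

The point of the sketch is the second call: with the voltage at zero the state variable never moves, which is exactly the nonvolatile behaviour that distinguishes a memristor from ordinary RAM.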

He goes on to discuss a team at the University of Texas at Austin’s work on creating an extraordinarily thin memristor: an atomristor,

The team’s work features the thinnest memory devices and it appears to be a universal effect available in all semiconducting 2D monolayers.

The scientists explain that the unexpected discovery of nonvolatile resistance switching (NVRS) in monolayer transition metal dichalcogenides (MoS2, MoSe2, WS2, WSe2) is likely due to the inherent layered crystalline nature that produces sharp interfaces and clean tunnel barriers. This prevents excessive leakage and affords a stable phenomenon, so that NVRS can be used for existing memory and computing applications.

“Our work opens up a new field of research in exploiting defects at the atomic scale, and can advance existing applications such as future generation high density storage, and 3D cross-bar networks for neuromorphic memory computing,” notes Akinwande [Deji Akinwande, an Associate Professor at the University of Texas at Austin]. “We also discovered a completely new application, which is non-volatile switching for radio-frequency (RF) communication systems. This is a rapidly emerging field because of the massive growth in wireless technologies and the need for very low-power switches. Our devices consume no static power, an important feature for battery life in mobile communication systems.”

Here’s a link to and a citation for the Akinwande team’s paper,

Atomristor: Nonvolatile Resistance Switching in Atomic Sheets of Transition Metal Dichalcogenides by Ruijing Ge, Xiaohan Wu, Myungsoo Kim, Jianping Shi, Sushant Sonde, Li Tao, Yanfeng Zhang, Jack C. Lee, and Deji Akinwande. Nano Lett., Article ASAP DOI: 10.1021/acs.nanolett.7b04342 Publication Date (Web): December 13, 2017

Copyright © 2017 American Chemical Society

This paper appears to be open access.

ETA January 23, 2018: There’s another account of the atomristor in Samuel K. Moore’s January 23, 2018 posting on the Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website).

FrogHeart’s good-bye to 2017 and hello to 2018

This is going to be relatively short and sweet(ish). Starting with the 2017 review:

Nano blogosphere and the Canadian blogosphere

From my perspective, there’s been a change taking place in the nano blogosphere over the last few years. There are fewer blogs, along with fewer postings from those who still blog. Interestingly, some blogs are becoming more generalized. Foresight Institute’s Nanodot blog (as has FrogHeart) has expanded its range of topics to include artificial intelligence and other subjects. Andrew Maynard’s 2020 Science blog now exists in an archived form but, before its demise, it, too, had started to include other topics, notably risk in its many forms as opposed to risk and nanomaterials. Dexter Johnson’s blog, Nanoclast (on the IEEE [Institute of Electrical and Electronics Engineers] website), maintains its 3x weekly postings. Tim Harper, who often wrote about nanotechnology on his Cientifica blog, appears to have found a more freewheeling approach dominated by his Twitter feed, although he also seems to blog at timharper.net (I can’t confirm that the latest posts were written in 2017).

The Canadian science blogosphere seems to be getting quieter, if Science Borealis (a blog aggregator) is any measure. My overall impression is that the bloggers have been a bit quieter this year, with fewer postings on the feed, or perhaps that’s due to some technical issues (sometimes FrogHeart posts do not get onto the feed). On the promising side, Science Borealis teamed with the Science Writers and Communicators of Canada Association to run a contest, “2017 People’s Choice Awards: Canada’s Favourite Science Online!” There were two categories (Favourite Science Blog and Favourite Science Site), and you can find a list of the finalists with links to the winners here.

Big congratulations to the winners: Body of Evidence won Canada’s Favourite Blog 2017 (Dec. 6, 2017 article by Alina Fisher for Science Borealis), and Let’s Talk Science won the Canada’s Favourite Science Online 2017 category, as per this announcement.

However, I can’t help wondering: where were ASAP Science, Acapella Science, Quirks & Quarks, IFLS (I f***ing love science), and others on the list of finalists? I would have thought any of these would have a lock on a position as a finalist. These are Canadian online science purveyors and they are hugely popular, which should mean they’d have no problem getting nominated and getting votes. I can’t find the criteria for nominations (or any hint that there will be a 2018 contest), so I imagine their absence from the 2017 finalists list will remain a mystery to me.

Looking forward to 2018, I think that the nano blogosphere will continue with its transformation into a more general science/technology-oriented community. To some extent, I believe this reflects the fact that nanotechnology is being absorbed into the larger science/technology effort as foundational (something wiser folks than me predicted some years ago).

As for Science Borealis and the Canadian science online effort, I’m going to interpret the quieter feeds as a sign of a maturing community. After all, there are always ups and downs in terms of enthusiasm and participation and as I noted earlier the launch of an online contest is promising as is the collaboration with Science Writers and Communicators of Canada.

Canadian science policy

It was a big year.

Canada’s Chief Science Advisor

Canada’s first chief science advisor in many years, Dr. Mona Nemer, stepped into her position sometime in fall 2017. The official announcement was made on Sept. 26, 2017. I covered the event in my Sept. 26, 2017 posting, which includes a few more details than found in the official announcement.

You’ll also find in that Sept. 26, 2017 posting a brief discourse on the Naylor report (also known as the Review of Fundamental Science) and some speculation on why, to my knowledge, there has been no action taken as a consequence.  The Naylor report was released April 10, 2017 and was covered here in a three-part review, published on June 8, 2017,

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 1 of 3

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 2 of 3

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 3 of 3

I have found another, much briefer, commentary by Paul Dufour (November 9, 2017) on the Canadian Science Policy Centre website.

Subnational and regional science funding

This began in 2016 with a workshop mentioned in my November 10, 2016 posting, “Council of Canadian Academies and science policy for Alberta.” By the time the report was published, the endeavour had been transformed into Science Policy: Considerations for Subnational Governments (report here and my June 22, 2017 commentary here).

I don’t know what will come of this, but I imagine scientists will be supportive as it means more money, and they are always looking for more money. Still, the new government in British Columbia has only one ‘science entity’, the Premier’s Technology Council, and I’m not sure it’s still operational. To my knowledge, there is no ministry or other agency that is focused primarily or partially on science.

Meanwhile, a couple of representatives from the health sciences (neither of whom were involved in the production of the report) seem quite enthused about the prospects for provincial money in their October 27, 2017 opinion piece for the Canadian Science Policy Centre (Bev Holmes, Interim CEO, Michael Smith Foundation for Health Research, British Columbia, and Patrick Odnokon, CEO, Saskatchewan Health Research Foundation).

Artificial intelligence and Canadians

An event which I find more interesting with time was the announcement of the Pan-Canadian Artificial Intelligence Strategy in the 2017 Canadian federal budget. Since then, there has been a veritable gold rush mentality with regard to artificial intelligence in Canada, with one announcement after another about various corporations opening new offices in Toronto or Montréal in the months since.

What has really piqued my interest recently is a report being written for Canada’s Treasury Board by Michael Karlin (you can learn more from his Twitter feed, although you may need to scroll down past some of his more personal tweets, something about cassoulet in the Dec. 29, 2017 tweets). As for Karlin’s report, which is a work in progress, you can find out more about the report and Karlin in a December 12, 2017 article by Rob Hunt for the Algorithmic Media Observatory (sponsored by the Social Sciences and Humanities Research Council of Canada [SSHRC], the Centre for the Study of Democratic Citizenship, and the Fonds de recherche du Québec: Société et culture).

You can ring in 2018 by reading and making comments, which could influence the final version, on Karlin’s “Responsible Artificial Intelligence in the Government of Canada,” part of the government’s Digital Disruption White Paper Series.

As for other 2018 news, the Council of Canadian Academies is expected to publish “The State of Science and Technology and Industrial Research and Development in Canada” at some point soon (we hope). This report follows and incorporates two previous ‘states’, The State of Science and Technology in Canada, 2012 (the first of these was a 2006 report) and the 2013 version of The State of Industrial R&D in Canada. There is already some preliminary data for this latest ‘state of’ (you can find a link and commentary in my December 15, 2016 posting).

FrogHeart then (2017) and soon (2018)

On looking back, I see that the year started out at quite a clip as I was attempting to hit the 5000th blog posting mark, which I did on March 3, 2017. I have since cut back from the high of 3 postings/day to approximately 1 posting/day. It makes things more manageable, allowing me to focus on other matters.

By the way, you may note that the ‘Donate’ button has disappeared from my sidebar. I thank everyone who donated from the bottom of my heart. The money was more than currency; it also symbolized encouragement. On the sad side, I moved from one hosting service to a new one (Sibername) late in December 2016 and have been experiencing serious bandwidth issues, which result in FrogHeart’s disappearance from the web for days at a time. I am trying to resolve the issues and hope that such actions as removing the ‘Donate’ button will help.

I wish my readers all the best for 2018 as we explore nanotechnology and other emerging technologies!

(I apologize for any and all errors. I usually take a little more time to write this end-of-year and coming-year piece but, due to bandwidth issues, I was unable to access my draft and give it at least one review. And at this point, I’m too tired to try spotting errors. If you see any, please do let me know.)

Yarns that harvest and generate energy

The researchers involved in this work are confident enough about their prospects that they will be patenting their research into yarns. From an August 25, 2017 news item on Nanowerk,

An international research team led by scientists at The University of Texas at Dallas and Hanyang University in South Korea has developed high-tech yarns that generate electricity when they are stretched or twisted.

In a study published in the Aug. 25 [2017] issue of the journal Science (“Harvesting electrical energy from carbon nanotube yarn twist”), researchers describe “twistron” yarns and their possible applications, such as harvesting energy from the motion of ocean waves or from temperature fluctuations. When sewn into a shirt, these yarns served as a self-powered breathing monitor.

“The easiest way to think of twistron harvesters is, you have a piece of yarn, you stretch it, and out comes electricity,” said Dr. Carter Haines, associate research professor in the Alan G. MacDiarmid NanoTech Institute at UT Dallas and co-lead author of the article. The article also includes researchers from South Korea, Virginia Tech, Wright-Patterson Air Force Base and China.

An August 25, 2017 University of Texas at Dallas news release, which originated the news item, expands on the theme,

Yarns Based on Nanotechnology

The yarns are constructed from carbon nanotubes, which are hollow cylinders of carbon 10,000 times smaller in diameter than a human hair. The researchers first twist-spun the nanotubes into high-strength, lightweight yarns. To make the yarns highly elastic, they introduced so much twist that the yarns coiled like an over-twisted rubber band.

In order to generate electricity, the yarns must be either submerged in or coated with an ionically conducting material, or electrolyte, which can be as simple as a mixture of ordinary table salt and water.

“Fundamentally, these yarns are supercapacitors,” said Dr. Na Li, a research scientist at the NanoTech Institute and co-lead author of the study. “In a normal capacitor, you use energy — like from a battery — to add charges to the capacitor. But in our case, when you insert the carbon nanotube yarn into an electrolyte bath, the yarns are charged by the electrolyte itself. No external battery, or voltage, is needed.”

When a harvester yarn is twisted or stretched, the volume of the carbon nanotube yarn decreases, bringing the electric charges on the yarn closer together and increasing their energy, Haines said. This increases the voltage associated with the charge stored in the yarn, enabling the harvesting of electricity.
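Haines’ description can be checked with back-of-the-envelope arithmetic: treat the yarn as a capacitor holding a roughly fixed charge Q, so that a drop in capacitance C raises both the voltage V = Q/C and the stored energy E = Q²/2C, and the energy gain is what gets harvested. The numbers in this sketch are illustrative assumptions, not measurements from the paper:

```python
# Fixed-charge capacitor sketch of the twistron principle: stretching
# shrinks the yarn's volume, lowering its capacitance, which raises the
# voltage and the stored energy of the charge it holds. All numbers are
# illustrative assumptions, not values from the study.

def capacitor_state(q, c):
    """Return (voltage, stored energy) for charge q (C) on capacitance c (F)."""
    return q / c, q * q / (2.0 * c)

q = 1e-6               # charge supplied by the electrolyte bath (assumed)
c_relaxed = 1e-6       # capacitance of the relaxed yarn (assumed)
c_stretched = 0.8e-6   # stretching reduces volume -> lower capacitance (assumed)

v_relaxed, e_relaxed = capacitor_state(q, c_relaxed)
v_stretched, e_stretched = capacitor_state(q, c_stretched)

# The voltage and energy both rise on stretching; the difference in
# stored energy is the mechanical work made available as electricity.
harvestable = e_stretched - e_relaxed
```

With these made-up numbers a 20% drop in capacitance raises the voltage by 25%, which is the “increases the voltage associated with the charge stored in the yarn” effect described above.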

Stretching the coiled twistron yarns 30 times a second generated 250 watts per kilogram of peak electrical power when normalized to the harvester’s weight, said Dr. Ray Baughman, director of the NanoTech Institute and a corresponding author of the study.

“Although numerous alternative harvesters have been investigated for many decades, no other reported harvester provides such high electrical power or energy output per cycle as ours for stretching rates between a few cycles per second and 600 cycles per second.”

Lab Tests Show Potential Applications

In the lab, the researchers showed that a twistron yarn weighing less than a housefly could power a small LED, which lit up each time the yarn was stretched.

To show that twistrons can harvest waste thermal energy from the environment, Li connected a twistron yarn to a polymer artificial muscle that contracts and expands when heated and cooled. The twistron harvester converted the mechanical energy generated by the polymer muscle to electrical energy.

“There is a lot of interest in using waste energy to power the Internet of Things, such as arrays of distributed sensors,” Li said. “Twistron technology might be exploited for such applications where changing batteries is impractical.”

The researchers also sewed twistron harvesters into a shirt. Normal breathing stretched the yarn and generated an electrical signal, demonstrating its potential as a self-powered respiration sensor.

“Electronic textiles are of major commercial interest, but how are you going to power them?” Baughman said. “Harvesting electrical energy from human motion is one strategy for eliminating the need for batteries. Our yarns produced over a hundred times higher electrical power per weight when stretched compared to other weavable fibers reported in the literature.”

Electricity from Ocean Waves

“In the lab we showed that our energy harvesters worked using a solution of table salt as the electrolyte,” said Baughman, who holds the Robert A. Welch Distinguished Chair in Chemistry in the School of Natural Sciences and Mathematics. “But we wanted to show that they would also work in ocean water, which is chemically more complex.”

In a proof-of-concept demonstration, co-lead author Dr. Shi Hyeong Kim, a postdoctoral researcher at the NanoTech Institute, waded into the frigid surf off the east coast of South Korea to deploy a coiled twistron in the sea. He attached a 10 centimeter-long yarn, weighing only 1 milligram (about the weight of a mosquito), between a balloon and a sinker that rested on the seabed.

Every time an ocean wave arrived, the balloon would rise, stretching the yarn up to 25 percent, thereby generating measured electricity.

Even though the investigators used very small amounts of twistron yarn in the current study, they have shown that harvester performance is scalable, both by increasing twistron diameter and by operating many yarns in parallel.

“If our twistron harvesters could be made less expensively, they might ultimately be able to harvest the enormous amount of energy available from ocean waves,” Baughman said. “However, at present these harvesters are most suitable for powering sensors and sensor communications. Based on demonstrated average power output, just 31 milligrams of carbon nanotube yarn harvester could provide the electrical energy needed to transmit a 2-kilobyte packet of data over a 100-meter radius every 10 seconds for the Internet of Things.”

Researchers from the UT Dallas Erik Jonsson School of Engineering and Computer Science and Lintec of America’s Nano-Science & Technology Center also participated in the study.

The investigators have filed a patent on the technology.

In the U.S., the research was funded by the Air Force, the Air Force Office of Scientific Research, NASA, the Office of Naval Research and the Robert A. Welch Foundation. In Korea, the research was supported by the Korea-U.S. Air Force Cooperation Program and the Creative Research Initiative Center for Self-powered Actuation of the National Research Foundation and the Ministry of Science.

Here’s a link to and a citation for the paper,

Harvesting electrical energy from carbon nanotube yarn twist by Shi Hyeong Kim, Carter S. Haines, Na Li, Keon Jung Kim, Tae Jin Mun, Changsoon Choi, Jiangtao Di, Young Jun Oh, Juan Pablo Oviedo, Julia Bykova, Shaoli Fang, Nan Jiang, Zunfeng Liu, Run Wang, Prashant Kumar, Rui Qiao, Shashank Priya, Kyeongjae Cho, Moon Kim, Matthew Steven Lucas, Lawrence F. Drummy, Benji Maruyama, Dong Youn Lee, Xavier Lepró, Enlai Gao, Dawood Albarq, Raquel Ovalle-Robles, Seon Jeong Kim, Ray H. Baughman. Science 25 Aug 2017: Vol. 357, Issue 6353, pp. 773-778 DOI: 10.1126/science.aam8771

This paper is behind a paywall.

Dexter Johnson in an Aug. 25, 2017 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) delves further into the research,

“Basically what’s happening is when we stretch the yarn, we’re getting a change in capacitance of the yarn. It’s that change that allows us to get energy out,” explains Carter Haines, associate research professor at UT Dallas and co-lead author of the paper describing the research, in an interview with IEEE Spectrum.

This makes it similar in many ways to other types of energy harvesters. For instance, in other research, it has been demonstrated—with sheets of rubber with coated electrodes on both sides—that you can increase the capacitance of a material when you stretch it and it becomes thinner. As a result, if you have charge on that capacitor, you can change the voltage associated with that charge.

“We’re more or less exploiting the same effect but what we’re doing differently is we’re using an electrochemical cell to do this,” says Haines. “So we’re not changing double-layer capacitance in normal parallel plate capacitors. But we’re actually changing the electrochemical capacitance on the surface of a supercapacitor yarn.”

While there are other capacitance-based energy harvesters, those other devices require extremely high voltages to work because they’re using parallel plate capacitors, according to Haines.

Dexter asks good questions and his post is very informative.

Emerging technology and the law

I have three news bits about legal issues that are arising as a consequence of emerging technologies.

Deep neural networks, art, and copyright

Caption: The rise of automated art opens new creative avenues, coupled with new problems for copyright protection. Credit: Provided by: Alexander Mordvintsev, Christopher Olah and Mike Tyka

Presumably this artwork is a demonstration of automated art although they never really do explain how in the news item/news release. An April 26, 2017 news item on ScienceDaily announces research into copyright and the latest in using neural networks to create art,

In 1968, sociologist Jean Baudrillard wrote on automatism that “contained within it is the dream of a dominated world […] that serves an inert and dreamy humanity.”

With the growing popularity of Deep Neural Networks (DNN’s), this dream is fast becoming a reality.

Dr. Jean-Marc Deltorn, researcher at the Centre d’études internationales de la propriété intellectuelle in Strasbourg, argues that we must remain a responsive and responsible force in this process of automation — not inert dominators. As he demonstrates in a recent Frontiers in Digital Humanities paper, the dream of automation demands a careful study of the legal problems linked to copyright.

An April 26, 2017 Frontiers (publishing) news release on EurekAlert, which originated the news item, describes the research in more detail,

For more than half a century, artists have looked to computational processes as a way of expanding their vision. DNN’s are the culmination of this cross-pollination: by learning to identify a complex number of patterns, they can generate new creations.

These systems are made up of complex algorithms modeled on the transmission of signals between neurons in the brain.

DNN creations rely in equal measure on human inputs and the non-human algorithmic networks that process them.

Inputs are fed into the system, which is layered. Each layer provides an opportunity for a more refined knowledge of the inputs (shape, color, lines). Neural networks compare actual outputs to expected ones, and correct the predictive error through repetition and optimization. They train their own pattern recognition, thereby optimizing their learning curve and producing increasingly accurate outputs.

The deeper the layers are, the higher the level of abstraction. The highest layers are able to identify the contents of a given input with reasonable accuracy, after extended periods of training.
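The training loop sketched in the two paragraphs above (compare actual outputs to expected ones, then correct the predictive error through repetition and optimization) can be illustrated with a toy two-layer network. This is a generic sketch with arbitrary layer sizes and learning rate, not the systems used for the artworks discussed in the article:

```python
import numpy as np

# Toy two-layer network learning XOR: on every pass the network compares
# its actual outputs to the expected ones and corrects the predictive
# error by gradient descent. Layer sizes and learning rate are arbitrary.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # expected outputs

W1 = rng.normal(size=(2, 8))   # lower layer: detects simple patterns
W2 = rng.normal(size=(8, 1))   # higher layer: a more abstract combination
lr = 0.5
losses = []

for _ in range(5000):
    h = np.tanh(X @ W1)                    # each layer refines the inputs
    out = 1.0 / (1.0 + np.exp(-h @ W2))    # actual outputs
    err = out - y                          # predictive error
    losses.append(float(np.mean(err ** 2)))
    d_out = err * out * (1.0 - out)        # error propagated backwards...
    grad_W2 = h.T @ d_out
    grad_W1 = X.T @ ((d_out @ W2.T) * (1.0 - h ** 2))
    W2 -= lr * grad_W2                     # ...correcting the higher layer
    W1 -= lr * grad_W1                     # ...and the lower layer

# losses[-1] is well below losses[0]: repetition has optimized the weights,
# which is the "increasingly accurate outputs" described above.
```

The deep networks in the article have many more layers and parameters, but the correction step is the same in spirit: the error signal flows back through the layers and nudges each one toward a better prediction.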

Creation thus becomes increasingly automated through what Deltorn calls “the arcane traceries of deep architecture”. The results are sufficiently abstracted from their sources to produce original creations that have been exhibited in galleries, sold at auction and performed at concerts.

The originality of DNN’s is a combined product of technological automation on one hand, human inputs and decisions on the other.

DNN’s are gaining popularity. Various platforms (such as DeepDream) now allow internet users to generate their very own new creations. This popularization of the automation process calls for a comprehensive legal framework that ensures a creator’s economic and moral rights with regard to his work – copyright protection.

Form, originality and attribution are the three requirements for copyright. And while DNN creations satisfy the first of these three, the claim to originality and attribution will depend largely on a given country’s legislation and on the traceability of the human creator.

Legislation usually sets a low threshold to originality. As DNN creations could in theory be able to create an endless number of riffs on source materials, the uncurbed creation of original works could inflate the existing number of copyright protections.

Additionally, a small number of national copyright laws confers attribution to what UK legislation defines loosely as “the person by whom the arrangements necessary for the creation of the work are undertaken.” In the case of DNN’s, this could mean anybody from the programmer to the user of a DNN interface.

Combined with an overly supple take on originality, this view on attribution would further increase the number of copyrightable works.

The risk, in both cases, is that artists will be less willing to publish their own works, for fear of infringement of DNN copyright protections.

In order to promote creativity – one seminal aim of copyright protection – the issue must be limited to creations that manifest a personal voice “and not just the electric glint of a computational engine,” to quote Deltorn. A delicate act of discernment.

DNN’s promise new avenues of creative expression for artists – with potential caveats. Copyright protection – a “catalyst to creativity” – must be contained. Many of us gently bask in the glow of an increasingly automated form of technology. But if we want to safeguard the ineffable quality that defines much art, it might be a good idea to home in more closely on the differences between the electric and the creative spark.

This research is and will be part of a broader Frontiers Research Topic collection of articles on Deep Learning and Digital Humanities.

Here’s a link to and a citation for the paper,

Deep Creations: Intellectual Property and the Automata by Jean-Marc Deltorn. Front. Digit. Humanit., 01 February 2017 | https://doi.org/10.3389/fdigh.2017.00003

This paper is open access.

Conference on governance of emerging technologies

I received an April 17, 2017 notice via email about this upcoming conference. Here’s more from the Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics webpage,

The Fifth Annual Conference on Governance of Emerging Technologies:

Law, Policy and Ethics held at the new

Beus Center for Law & Society in Phoenix, AZ

May 17-19, 2017!

Call for Abstracts – Now Closed

The conference will consist of plenary and session presentations and discussions on regulatory, governance, legal, policy, social and ethical aspects of emerging technologies, including (but not limited to) nanotechnology, synthetic biology, gene editing, biotechnology, genomics, personalized medicine, human enhancement technologies, telecommunications, information technologies, surveillance technologies, geoengineering, neuroscience, artificial intelligence, and robotics. The conference is premised on the belief that there is much to be learned and shared from and across the governance experience and proposals for these various emerging technologies.

Keynote Speakers:

Gillian Hadfield, Richard L. and Antoinette Schamoi Kirtland Professor of Law and Professor of Economics, USC [University of Southern California] Gould School of Law

Shobita Parthasarathy, Associate Professor of Public Policy and Women’s Studies, Director, Science, Technology, and Public Policy Program University of Michigan

Stuart Russell, Professor at [University of California] Berkeley, is a computer scientist known for his contributions to artificial intelligence

Craig Shank, Vice President for Corporate Standards Group in Microsoft’s Corporate, External and Legal Affairs (CELA)

Plenary Panels:

Innovation – Responsible and/or Permissionless

Ellen-Marie Forsberg, Senior Researcher/Research Manager at Oslo and Akershus University College of Applied Sciences

Adam Thierer, Senior Research Fellow with the Technology Policy Program at the Mercatus Center at George Mason University

Wendell Wallach, Consultant, ethicist, and scholar at Yale University’s Interdisciplinary Center for Bioethics

Gene Drives, Trade and International Regulations

Greg Kaebnick, Director, Editorial Department; Editor, Hastings Center Report; Research Scholar, Hastings Center

Jennifer Kuzma, Goodnight-North Carolina GlaxoSmithKline Foundation Distinguished Professor in Social Sciences in the School of Public and International Affairs (SPIA) and co-director of the Genetic Engineering and Society (GES) Center at North Carolina State University

Andrew Maynard, Senior Sustainability Scholar, Julie Ann Wrigley Global Institute of Sustainability Director, Risk Innovation Lab, School for the Future of Innovation in Society Professor, School for the Future of Innovation in Society, Arizona State University

Gary Marchant, Regents’ Professor of Law, Professor of Law Faculty Director and Faculty Fellow, Center for Law, Science & Innovation, Arizona State University

Marc Saner, Inaugural Director of the Institute for Science, Society and Policy, and Associate Professor, University of Ottawa Department of Geography

Big Data

Anupam Chander, Martin Luther King, Jr. Professor of Law and Director, California International Law Center, UC Davis School of Law

Pilar Ossorio, Professor of Law and Bioethics, University of Wisconsin, School of Law and School of Medicine and Public Health; Morgridge Institute for Research, Ethics Scholar-in-Residence

George Poste, Chief Scientist, Complex Adaptive Systems Initiative (CASI) (http://www.casi.asu.edu/), Regents’ Professor and Del E. Webb Chair in Health Innovation, Arizona State University

Emily Shuckburgh, climate scientist and deputy head of the Polar Oceans Team at the British Antarctic Survey, University of Cambridge

Responsible Development of AI

Spring Berman, Ira A. Fulton Schools of Engineering, Arizona State University

John Havens, The IEEE [Institute of Electrical and Electronics Engineers] Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems

Subbarao Kambhampati, Senior Sustainability Scientist, Julie Ann Wrigley Global Institute of Sustainability, Professor, School of Computing, Informatics and Decision Systems Engineering, Ira A. Fulton Schools of Engineering, Arizona State University

Wendell Wallach, Consultant, Ethicist, and Scholar at Yale University’s Interdisciplinary Center for Bioethics

Existential and Catastrophic Ricks [sic]

Tony Barrett, Co-Founder and Director of Research of the Global Catastrophic Risk Institute

Haydn Belfield, Academic Project Administrator, Centre for the Study of Existential Risk at the University of Cambridge

Margaret E. Kosal, Associate Director, Sam Nunn School of International Affairs, Georgia Institute of Technology

Catherine Rhodes, Academic Project Manager, Centre for the Study of Existential Risk (CSER), University of Cambridge

These were the panels that are of interest to me; there are others on the homepage.

Here’s some information from the Conference registration webpage,

Early Bird Registration – $50 off until May 1! Enter discount code: earlybirdGETs50

New: Group Discount – Register 2+ attendees together and receive an additional 20% off for all group members!

Click Here to Register!

Conference registration fees are as follows:

  • General (non-CLE) Registration: $150.00
  • CLE Registration: $350.00
  • *Current Student / ASU Law Alumni Registration: $50.00
  • ^Cybersecurity sessions only (May 19): $100 CLE / $50 General / Free for students (registration info coming soon)

There you have it.

Neuro-techno future laws

I’m pretty sure this isn’t the first exploration of potential legal issues arising from research into neuroscience although it’s the first one I’ve stumbled across. From an April 25, 2017 news item on phys.org,

New human rights laws to prepare for advances in neurotechnology that put the ‘freedom of the mind’ at risk have been proposed today in the open access journal Life Sciences, Society and Policy.

The authors of the study suggest four new human rights laws could emerge in the near future to protect against exploitation and loss of privacy. The four laws are: the right to cognitive liberty, the right to mental privacy, the right to mental integrity and the right to psychological continuity.

An April 25, 2017 Biomed Central news release on EurekAlert, which originated the news item, describes the work in more detail,

Marcello Ienca, lead author and PhD student at the Institute for Biomedical Ethics at the University of Basel, said: “The mind is considered to be the last refuge of personal freedom and self-determination, but advances in neural engineering, brain imaging and neurotechnology put the freedom of the mind at risk. Our proposed laws would give people the right to refuse coercive and invasive neurotechnology, protect the privacy of data collected by neurotechnology, and protect the physical and psychological aspects of the mind from damage by the misuse of neurotechnology.”

Advances in neurotechnology, such as sophisticated brain imaging and the development of brain-computer interfaces, have led to these technologies moving away from a clinical setting and into the consumer domain. While these advances may be beneficial for individuals and society, there is a risk that the technology could be misused and create unprecedented threats to personal freedom.

Professor Roberto Andorno, co-author of the research, explained: “Brain imaging technology has already reached a point where there is discussion over its legitimacy in criminal court, for example as a tool for assessing criminal responsibility or even the risk of reoffending. Consumer companies are using brain imaging for ‘neuromarketing’, to understand consumer behaviour and elicit desired responses from customers. There are also tools such as ‘brain decoders’ which can turn brain imaging data into images, text or sound. All of these could pose a threat to personal freedom which we sought to address with the development of four new human rights laws.”

The authors explain that as neurotechnology improves and becomes commonplace, there is a risk that the technology could be hacked, allowing a third-party to ‘eavesdrop’ on someone’s mind. In the future, a brain-computer interface used to control consumer technology could put the user at risk of physical and psychological damage caused by a third-party attack on the technology. There are also ethical and legal concerns over the protection of data generated by these devices that need to be considered.

International human rights laws make no specific mention of neuroscience, although advances in biomedicine have become intertwined with laws, such as those concerning human genetic data. Similar to the historical trajectory of the genetic revolution, the authors state that the ongoing neurorevolution will force a reconceptualization of human rights laws and even the creation of new ones.

Marcello Ienca added: “Science-fiction can teach us a lot about the potential threat of technology. Neurotechnology featured in famous stories has in some cases already become a reality, while others are inching ever closer, or exist as military and commercial prototypes. We need to be prepared to deal with the impact these technologies will have on our personal freedom.”

Here’s a link to and a citation for the paper,

Towards new human rights in the age of neuroscience and neurotechnology by Marcello Ienca and Roberto Andorno. Life Sciences, Society and Policy 2017, 13:5 DOI: 10.1186/s40504-017-0050-1 Published: 26 April 2017

©  The Author(s). 2017

This paper is open access.

Nanoelectronic thread (NET) brain probes for long-term neural recording

A rendering of the ultra-flexible probe in neural tissue gives viewers a sense of the device’s tiny size and footprint in the brain. Image credit: Science Advances.

As long-time readers have likely noted, I’m not a big fan of this rush to ‘colonize’ the brain but it continues apace as a Feb. 15, 2017 news item on Nanowerk announces a new type of brain probe,

Engineering researchers at The University of Texas at Austin have designed ultra-flexible, nanoelectronic thread (NET) brain probes that can achieve more reliable long-term neural recording than existing probes and don’t elicit scar formation when implanted.

A Feb. 15, 2017 University of Texas at Austin news release, which originated the news item, provides more information about the new probes (Note: A link has been removed),

A team led by Chong Xie, an assistant professor in the Department of Biomedical Engineering in the Cockrell School of Engineering, and Lan Luan, a research scientist in the Cockrell School and the College of Natural Sciences, have developed new probes that have mechanical compliances approaching that of the brain tissue and are more than 1,000 times more flexible than other neural probes. This ultra-flexibility leads to an improved ability to reliably record and track the electrical activity of individual neurons for long periods of time. There is a growing interest in developing long-term tracking of individual neurons for neural interface applications, such as extracting neural-control signals for amputees to control high-performance prostheses. It also opens up new possibilities to follow the progression of neurovascular and neurodegenerative diseases such as stroke, Parkinson’s and Alzheimer’s diseases.

One of the problems with conventional probes is their size and mechanical stiffness; their larger dimensions and stiffer structures often cause damage around the tissue they encompass. Additionally, while it is possible for the conventional electrodes to record brain activity for months, they often provide unreliable and degrading recordings. It is also challenging for conventional electrodes to electrophysiologically track individual neurons for more than a few days.

In contrast, the UT Austin team’s electrodes are flexible enough that they comply with the microscale movements of tissue and still stay in place. The probe’s size also drastically reduces the tissue displacement, so the brain interface is more stable, and the readings are more reliable for longer periods of time. To the researchers’ knowledge, the UT Austin probe — which is as small as 10 microns at a thickness below 1 micron, and has a cross-section that is only a fraction of that of a neuron or blood capillary — is the smallest among all neural probes.
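That “only a fraction” claim is easy to sanity-check with a little arithmetic. The sketch below uses the probe dimensions quoted above; the neuron and capillary diameters are typical textbook values I’m assuming for illustration, not figures from the paper:

```python
import math

# NET probe cross-section: roughly 10 um wide by (at most) 1 um thick.
probe_area = 10.0 * 1.0  # um^2, an upper bound on the rectangular cross-section

# Typical diameters (illustrative assumptions, not from the paper).
for name, diameter in [("neuron soma (~15 um)", 15.0),
                       ("blood capillary (~8 um)", 8.0)]:
    area = math.pi * (diameter / 2) ** 2  # circular cross-section, um^2
    print(f"{name}: {area:.0f} um^2 -> probe is {probe_area / area:.0%} of it")
```

Even against the narrower capillary, the probe’s cross-section comes out at roughly a fifth, which squares with the researchers’ description.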

“What we did in our research is prove that we can suppress tissue reaction while maintaining a stable recording,” Xie said. “In our case, because the electrodes are very, very flexible, we don’t see any sign of brain damage — neurons stayed alive even in contact with the NET probes, glial cells remained inactive and the vasculature didn’t become leaky.”

In experiments in mouse models, the researchers found that the probe’s flexibility and size prevented the agitation of glial cells, which is the normal biological reaction to a foreign body and leads to scarring and neuronal loss.

“The most surprising part of our work is that the living brain tissue, the biological system, really doesn’t mind having an artificial device around for months,” Luan said.

The researchers also used advanced imaging techniques in collaboration with biomedical engineering professor Andrew Dunn and neuroscientists Raymond Chitwood and Jenni Siegel from the Institute for Neuroscience at UT Austin to confirm that the NET-enabled neural interface did not degrade in the mouse model for over four months of experiments. The researchers plan to continue testing their probes in animal models and hope to eventually engage in clinical testing. The research received funding from the UT BRAIN seed grant program, the Department of Defense and National Institutes of Health.

Here’s a link to and citation for the paper,

Ultraflexible nanoelectronic probes form reliable, glial scar–free neural integration by Lan Luan, Xiaoling Wei, Zhengtuo Zhao, Jennifer J. Siegel, Ojas Potnis, Catherine A Tuppen, Shengqing Lin, Shams Kazmi, Robert A. Fowler, Stewart Holloway, Andrew K. Dunn, Raymond A. Chitwood, and Chong Xie. Science Advances  15 Feb 2017: Vol. 3, no. 2, e1601966 DOI: 10.1126/sciadv.1601966

This paper is open access.

You can get more detail about the research in a Feb. 17, 2017 posting by Dexter Johnson on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website).

Drive to operationalize transistors that outperform silicon gets a boost

Dexter Johnson has written a Jan. 19, 2017 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers]) about work which could lead to supplanting silicon-based transistors with carbon nanotube-based transistors in the future (Note: Links have been removed),

The end appears nigh for scaling down silicon-based complementary metal-oxide semiconductor (CMOS) transistors, with some experts seeing the cutoff date as early as 2020.

While carbon nanotubes (CNTs) have long been among the nanomaterials investigated to serve as replacement for silicon in CMOS field-effect transistors (FETs) in a post-silicon future, they have always been bogged down by some frustrating technical problems. But, with some of the main technical showstoppers having been largely addressed—like sorting between metallic and semiconducting carbon nanotubes—the stage has been set for CNTs to start making their presence felt a bit more urgently in the chip industry.

Peking University scientists in China have now developed carbon nanotube field-effect transistors (CNT FETs) having a critical dimension—the gate length—of just five nanometers that would outperform silicon-based CMOS FETs at the same scale. The researchers claim in the journal Science that this marks the first time that sub-10 nanometer CNT CMOS FETs have been reported.

More importantly than just being the first, the Peking group showed that their CNT-based FETs can operate faster and at a lower supply voltage than their silicon-based counterparts.

A Jan. 20, 2017 article by Bob Yirka for phys.org provides more insight into the work at Peking University,

One of the most promising candidates is carbon nanotubes—due to their unique properties, transistors based on them could be smaller, faster and more efficient. Unfortunately, the difficulty in growing carbon nanotubes and their sometimes persnickety nature means that a way to make them and mass produce them has not been found. In this new effort, the researchers report on a method of creating carbon nanotube transistors that are suitable for testing, but not mass production.

To create the transistors, the researchers took a novel approach—instead of growing carbon nanotubes that had certain desired properties, they grew some and put them randomly on a silicon surface and then added electronics that would work with the properties they had—clearly not a strategy that would work for mass production, but one that allowed for building a carbon nanotube transistor that could be tested to see if it would verify theories about its performance. Realizing there would still be scaling problems using traditional electrodes, the researchers built a new kind by etching very tiny sheets of graphene. The result was a very tiny transistor, the team reports, capable of moving more current than a standard CMOS transistor using just half of the normal amount of voltage. It was also faster due to a much shorter switch delay, courtesy of a gate delay of just 70 femtoseconds.

Peking University has published an edited and more comprehensive version of an earlier phys.org article, first reported by Lisa Zyga and edited by Arthars, describing the group’s previous work,

Now in a new paper published in Nano Letters, researchers Tian Pei, et al., at Peking University in Beijing, China, have developed a modular method for constructing complicated integrated circuits (ICs) made from many FETs on individual CNTs. To demonstrate, they constructed an 8-bit BUS system, a circuit that is widely used for transferring data in computers, which contains 46 FETs on six CNTs. This is the most complicated CNT IC fabricated to date, and the fabrication process is expected to lead to even more complex circuits.

SEM image of an eight-transistor (8-T) unit that was fabricated on two CNTs (marked with two white dotted lines). The scale bar is 100 μm. (Copyright: 2014 American Chemical Society)

Ever since the first CNT FET was fabricated in 1998, researchers have been working to improve CNT-based electronics. As the scientists explain in their paper, semiconducting CNTs are promising candidates for replacing silicon wires because they are thinner, which offers better scaling-down potential, and also because they have a higher carrier mobility, resulting in higher operating speeds.

Yet CNT-based electronics still face challenges. One of the most significant challenges is obtaining arrays of semiconducting CNTs while removing the less-suitable metallic CNTs. Although scientists have devised a variety of ways to separate semiconducting and metallic CNTs, these methods almost always result in damaged semiconducting CNTs with degraded performance.

To get around this problem, researchers usually build ICs on single CNTs, which can be individually selected based on their condition. It’s difficult to use more than one CNT because no two are alike: they each have slightly different diameters and properties that affect performance. However, using just one CNT limits the complexity of these devices to simple logic and arithmetical gates.

The 8-T unit can be used as the basic building block of a variety of ICs other than BUS systems, making this modular method a universal and efficient way to construct large-scale CNT ICs. Building on their previous research, the scientists hope to explore these possibilities in the future.

“In our earlier work, we showed that a carbon nanotube based field-effect transistor is about five (n-type FET) to ten (p-type FET) times faster than its silicon counterparts, but uses much less energy, about a few percent of that of similar sized silicon transistors,” Peng said.

“In the future, we plan to construct large-scale integrated circuits that outperform silicon-based systems. These circuits are faster, smaller, and consume much less power. They can also work at extremely low temperatures (e.g., in space) and moderately high temperatures (potentially no cooling system required), on flexible and transparent substrates, and potentially be bio-compatible.”

Here’s a link to and a citation for the paper,

Scaling carbon nanotube complementary transistors to 5-nm gate lengths by Chenguang Qiu, Zhiyong Zhang, Mengmeng Xiao, Yingjun Yang, Donglai Zhong, Lian-Mao Peng. Science  20 Jan 2017: Vol. 355, Issue 6322, pp. 271-276 DOI: 10.1126/science.aaj1628

This paper is behind a paywall.

Nanotechnology cracks Wall Street (Daily)

David Dittman’s Jan. 11, 2017 article for wallstreetdaily.com portrays a great deal of excitement about nanotechnology and the possibilities (I’m highlighting the article because it showcases Dexter Johnson’s Nanoclast blog),

When we talk about next-generation aircraft, next-generation wearable biomedical devices, and next-generation fiber-optic communication, the consistent theme is nano: nanotechnology, nanomaterials, nanophotonics.

For decades, manufacturers have used carbon fiber to make lighter sports equipment, stronger aircraft, and better textiles.

Now, as Dexter Johnson of IEEE [Institute of Electrical and Electronics Engineers] Spectrum reports [on his Nanoclast blog], carbon nanotubes will help make aerospace composites more efficient:

Now researchers at the University of Surrey’s Advanced Technology Institute (ATI), the University of Bristol’s Advanced Composite Centre for Innovation and Science (ACCIS), and aerospace company Bombardier [headquartered in Montréal, Canada] have collaborated on the development of a carbon nanotube-enabled material set to replace the polymer sizing. The reinforced polymers produced with this new material have enhanced electrical and thermal conductivity, opening up new functional possibilities. It will be possible, say the British researchers, to embed gadgets such as sensors and energy harvesters directly into the material.

When it comes to flight, lighter is better, so building sensors and energy harvesters into the body of aircraft marks a significant leap forward.

Johnson also reports for IEEE Spectrum on a “novel hybrid nanomaterial” based on oscillations of electrons — a major advance in nanophotonics:

Researchers at the University of Texas at Austin have developed a hybrid nanomaterial that enables the writing, erasing and rewriting of optical components. The researchers believe that this nanomaterial and the techniques used in exploiting it could create a new generation of optical chips and circuits.

Of course, the concept of rewritable optics is not altogether new; it forms the basis of optical storage mediums like CDs and DVDs. However, CDs and DVDs require bulky light sources, optical media and light detectors. The advantage of the rewritable integrated photonic circuits developed here is that it all happens on a 2-D material.

“To develop rewritable integrated nanophotonic circuits, one has to be able to confine light within a 2-D plane, where the light can travel in the plane over a long distance and be arbitrarily controlled in terms of its propagation direction, amplitude, frequency and phase,” explained Yuebing Zheng, a professor at the University of Texas who led the research… “Our material, which is a hybrid, makes it possible to develop rewritable integrated nanophotonic circuits.”

Who knew that mixing graphene with homemade Silly Putty would create a potentially groundbreaking new material that could make “wearables” actually useful?

Next-generation biomedical devices will undoubtedly include some of this stuff:

A dash of graphene can transform the stretchy goo known as Silly Putty into a pressure sensor able to monitor a human pulse or even track the dainty steps of a small spider.

The material, dubbed G-putty, could be developed into a device that continuously monitors blood pressure, its inventors hope.

The guys who made G-putty often rely on “household stuff” in their research.

It’s nice to see a blogger’s work be highlighted. Congratulations Dexter.

G-putty was mentioned here in a Dec. 30, 2016 posting which also includes a link to Dexter’s piece on the topic.

Keeping up with science is impossible: ruminations on a nanotechnology talk

I think it’s time to give this suggestion again. Always hold a little doubt about the science information you read and hear. Everybody makes mistakes.

Here’s an example of what can happen. George Tulevski who gave a talk about nanotechnology in Nov. 2016 for TED@IBM is an accomplished scientist who appears to have made an error during his TED talk. From Tulevski’s The Next Step in Nanotechnology talk transcript page,

When I was a graduate student, it was one of the most exciting times to be working in nanotechnology. There were scientific breakthroughs happening all the time. The conferences were buzzing, there was tons of money pouring in from funding agencies. And the reason is when objects get really small, they’re governed by a different set of physics that govern ordinary objects, like the ones we interact with. We call this physics quantum mechanics. [emphases mine] And what it tells you is that you can precisely tune their behavior just by making seemingly small changes to them, like adding or removing a handful of atoms, or twisting the material. It’s like this ultimate toolkit. You really felt empowered; you felt like you could make anything.

In September 2016, scientists at Cambridge University (UK) announced they had concrete proof that the physics governing materials at the nanoscale is unique, i.e., it does not follow the rules of either classical or quantum physics. From my Oct. 27, 2016 posting,

A Sept. 29, 2016 University of Cambridge press release, which originated the news item, hones in on the peculiarities of the nanoscale,

In the middle, on the order of around 10–100,000 molecules, something different is going on. Because it’s such a tiny scale, the particles have a really big surface-area-to-volume ratio. This means the energetics of what goes on at the surface become very important, much as they do on the atomic scale, where quantum mechanics is often applied.

Classical thermodynamics breaks down. But because there are so many particles, and there are many interactions between them, the quantum model doesn’t quite work either.
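The press release’s point about surface-area-to-volume ratio is worth making concrete. For a sphere the ratio works out to 3/r, so shrinking the radius by a factor of a million multiplies the ratio by a million; here’s a quick back-of-the-envelope sketch (the particle sizes are my own illustrative choices):

```python
def surface_to_volume_ratio(radius_m):
    """For a sphere: (4*pi*r^2) / ((4/3)*pi*r^3) simplifies to 3/r."""
    return 3.0 / radius_m

bead = surface_to_volume_ratio(1e-2)      # a 1 cm bead
particle = surface_to_volume_ratio(1e-8)  # a 10 nm nanoparticle

print(f"1 cm bead:      {bead:.0f} m^-1")
print(f"10 nm particle: {particle:.0e} m^-1")
print(f"increase:       {particle / bead:.0e}x")
```

A million-fold jump in relative surface area is why surface energetics, negligible at everyday scales, come to dominate behaviour at the nanoscale.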

It is very, very easy to miss new developments no matter how tirelessly you scan for information.

Tulevski is a good, interesting, and informed speaker but I do have one other hesitation regarding his talk. He seems to think that over the last 15 years there should have been more practical applications arising from the field of nanotechnology. There are two aspects here. First, he seems to be dating the ‘nanotechnology’ effort from the beginning of the US National Nanotechnology Initiative, and many scientists would object to that as the starting point. Second, 15 or even 30 or more years is a brief period of time, especially when you are investigating that which hasn’t been investigated before. For example, you might want to check out “Leviathan and the Air-Pump: Hobbes, Boyle, and the Experimental Life” (published 1985) by Steven Shapin and Simon Schaffer (Wikipedia entry for the book). The amount of time (years) spent just on how to make the glue which held the various experimental apparatuses together was a revelation to me. Of course, it makes perfect sense that if you’re trying something new, you’re going to have to figure out everything.

By the way, I include my blog as one of the sources of information that can be faulty despite efforts to make corrections and to keep up with the latest. Even the scientists at Cambridge University can run into some problems as I noted in my Jan. 28, 2016 posting.

Getting back to Tulevski, here’s a link to his lively, informative talk:
https://www.ted.com/talks/george_tulevski_the_next_step_in_nanotechnology#t-562570

ETA Jan. 24, 2017: For some insight into how uncertain, tortuous, and expensive commercializing technology can be read Dexter Johnson’s Jan. 23, 2017 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website). Here’s an excerpt (Note: Links have been removed),

The brief description of this odyssey includes US $78 million in financing over 15 years and $50 million in revenues over that period through licensing of its technology and patents. That revenue includes a back-against-the-wall sell-off of a key business unit to Lockheed Martin in 2008. Another key moment occurred in 2012 when Belgian-based nanoelectronics powerhouse Imec took on the job of further developing Nantero’s carbon-nanotube-based memory. Despite the money and support from major electronics players, the big commercial breakout of their NRAM technology seemed ever less likely to happen with the passage of time.

Colours in bendable electronic paper

Scientists at Chalmers University of Technology (Sweden) are able to produce a rainbow of colours in a new electronic paper according to an Oct. 14, 2016 news item on Nanowerk,

Less than a micrometre thin, bendable and giving all the colours that a regular LED display does, it still needs ten times less energy than a Kindle tablet. Researchers at Chalmers University of Technology have developed the basis for a new electronic “paper.”

When Chalmers researcher Andreas Dahlin and his PhD student Kunli Xiong were working on placing conductive polymers on nanostructures, they discovered that the combination would be perfectly suited to creating electronic displays as thin as paper. A year later the results were ready for publication. A material that is less than a micrometre thin, flexible and giving all the colours that a standard LED display does.

An Oct. 14, 2016 Chalmers University of Technology press release (also on EurekAlert) by Mats Tiborn, which originated the news item, expands on the theme,

“The ’paper’ is similar to the Kindle tablet. It isn’t lit up like a standard display, but rather reflects the external light which illuminates it. Therefore it works very well where there is bright light, such as out in the sun, in contrast to standard LED displays that work best in darkness. At the same time it needs only a tenth of the energy that a Kindle tablet uses, which itself uses much less energy than a tablet LED display”, says Andreas Dahlin.

It all depends on the polymers’ ability to control how light is absorbed and reflected. The polymers that cover the whole surface lead the electric signals throughout the full display and create images in high resolution. The material is not yet ready for application, but the basis is there. The team has tested and built a few pixels. These use the same red, green and blue (RGB) colours that together can create all the colours in standard LED displays. The results so far have been positive, what remains now is to build pixels that cover an area as large as a display.

“We are working at a fundamental level but even so, the step to manufacturing a product out of it shouldn’t be too far away. What we need now are engineers”, says Andreas Dahlin.

One obstacle today is that there is gold and silver in the display.

“The gold surface is 20 nanometres thick so there is not that much gold in it. But at present there is a lot of gold wasted in manufacturing it. Either we reduce the waste or we find another way to reduce the production cost”, says Andreas Dahlin.

Caption: Chalmers’ e-paper contains gold, silver and PET plastic. The layer that produces the colours is less than a micrometre thin. Credit: Mats Tiborn

Here’s a link to and a citation for the paper,

Plasmonic Metasurfaces with Conjugated Polymers for Flexible Electronic Paper in Color by Kunli Xiong, Gustav Emilsson, Ali Maziz, Xinxin Yang, Lei Shao, Edwin W. H. Jager, and Andreas B. Dahlin. Advanced Materials DOI: 10.1002/adma.201603358 Version of Record online: 27 SEP 2016

© 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

Finally, Dexter Johnson in an Oct. 18, 2016 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) offers some broader insight into this development (Note: Links have been removed),

Plasmonic nanostructures leverage the oscillations in the density of electrons that are generated when photons hit a metal surface. Researchers have used these structures for applications including increasing the light absorption of solar cells and creating colors without the need for dyes. As a demonstration of how effective these nanostructures are as a replacement for color dyes, the technology has been used to produce a miniature copy of the Mona Lisa in a space smaller than the footprint taken up by a single pixel on an iPhone Retina display.

Cooling the skin with plastic clothing

Rather than cooling or heating an entire room, why not cool or heat the person? Engineers at Stanford University (California, US) have developed a material that helps with half of that premise: cooling. From a Sept. 1, 2016 news item on ScienceDaily,

Stanford engineers have developed a low-cost, plastic-based textile that, if woven into clothing, could cool your body far more efficiently than is possible with the natural or synthetic fabrics in clothes we wear today.

Describing their work in Science, the researchers suggest that this new family of fabrics could become the basis for garments that keep people cool in hot climates without air conditioning.

“If you can cool the person rather than the building where they work or live, that will save energy,” said Yi Cui, an associate professor of materials science and engineering and of photon science at Stanford.

A Sept. 1, 2016 Stanford University news release (also on EurekAlert) by Tom Abate, which originated the news item, further explains the work,

This new material works by allowing the body to discharge heat in two ways that would make the wearer feel nearly 4 degrees Fahrenheit cooler than if they wore cotton clothing.

The material cools by letting perspiration evaporate through the material, something ordinary fabrics already do. But the Stanford material provides a second, revolutionary cooling mechanism: allowing heat that the body emits as infrared radiation to pass through the plastic textile.

All objects, including our bodies, throw off heat in the form of infrared radiation, an invisible and benign wavelength of light. Blankets warm us by trapping infrared heat emissions close to the body. This thermal radiation escaping from our bodies is what makes us visible in the dark through night-vision goggles.

“Forty to 60 percent of our body heat is dissipated as infrared radiation when we are sitting in an office,” said Shanhui Fan, a professor of electrical engineering who specializes in photonics, which is the study of visible and invisible light. “But until now there has been little or no research on designing the thermal radiation characteristics of textiles.”
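Fan’s 40-to-60-percent figure can be roughly sanity-checked with the Stefan-Boltzmann law. In the sketch below, the emissivity, effective radiating area, and skin and room temperatures are all illustrative assumptions of mine, not numbers from the Stanford paper:

```python
# Net radiative heat loss from skin via the Stefan-Boltzmann law:
#   P = emissivity * sigma * A * (T_skin^4 - T_ambient^4)
SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
emissivity = 0.98   # human skin is nearly a black body in the infrared
area = 1.0          # m^2 effective radiating area (much of the skin
                    # faces other skin or a chair; assumed)
t_skin = 306.0      # K, ~33 C skin temperature (assumed)
t_ambient = 297.0   # K, ~24 C office (assumed)

p_radiated = emissivity * SIGMA * area * (t_skin**4 - t_ambient**4)
print(f"Net radiative loss: {p_radiated:.0f} W")
```

Against a resting metabolic output of roughly 100 W, an estimate in the 50 W range lands comfortably inside the 40-to-60-percent window Fan describes.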

Super-powered kitchen wrap

To develop their cooling textile, the Stanford researchers blended nanotechnology, photonics and chemistry to give polyethylene – the clear, clingy plastic we use as kitchen wrap – a number of characteristics desirable in clothing material: It allows thermal radiation, air and water vapor to pass right through, and it is opaque to visible light.

The easiest attribute was allowing infrared radiation to pass through the material, because this is a characteristic of ordinary polyethylene food wrap. Of course, kitchen plastic is impervious to water and is see-through as well, rendering it useless as clothing.

The Stanford researchers tackled these deficiencies one at a time.

First, they found a variant of polyethylene commonly used in battery making that has a specific nanostructure that is opaque to visible light yet is transparent to infrared radiation, which could let body heat escape. This provided a base material that was opaque to visible light for the sake of modesty but thermally transparent for purposes of energy efficiency.

They then modified the industrial polyethylene by treating it with benign chemicals to enable water vapor molecules to evaporate through nanopores in the plastic, said postdoctoral scholar and team member Po-Chun Hsu, allowing the plastic to breathe like a natural fiber.

Making clothes

That success gave the researchers a single-sheet material that met their three basic criteria for a cooling fabric. To make this thin material more fabric-like, they created a three-ply version: two sheets of treated polyethylene separated by a cotton mesh for strength and thickness.

To test the cooling potential of their three-ply construct versus a cotton fabric of comparable thickness, they placed a small swatch of each material on a surface that was as warm as bare skin and measured how much heat each material trapped.

“Wearing anything traps some heat and makes the skin warmer,” Fan said. “If dissipating thermal radiation were our only concern, then it would be best to wear nothing.”

The comparison showed that the cotton fabric made the skin surface 3.6 F warmer than their cooling textile. The researchers said this difference means that a person dressed in their new material might feel less inclined to turn on a fan or air conditioner.

The researchers are continuing their work on several fronts, including adding more colors, textures and cloth-like characteristics to their material. Adapting a material already mass produced for the battery industry could make it easier to create products.

“If you want to make a textile, you have to be able to make huge volumes inexpensively,” Cui said.

Fan believes that this research opens up new avenues of inquiry to cool or heat things, passively, without the use of outside energy, by tuning materials to dissipate or trap infrared radiation.

“In hindsight, some of what we’ve done looks very simple, but it’s because few have really been looking at engineering the radiation characteristics of textiles,” he said.

Dexter Johnson (Nanoclast blog on the IEEE [Institute of Electrical and Electronics Engineers] website) has written a Sept. 2, 2016 posting where he provides more technical detail about this work,

The nanoPE [nanoporous polyethylene] material is able to achieve this release of the IR heat because of the size of the interconnected pores. The pores can range in size from 50 to 1000 nanometers. They’re therefore comparable in size to wavelengths of visible light, which allows the material to scatter that light. However, because the pores are much smaller than the wavelength of infrared light, the nanoPE is transparent to the IR.

It is this combination of blocking visible light and allowing IR to pass through that distinguishes nanoPE from regular polyethylene, which transmits similar amounts of IR but blocks only 20 percent of visible light, compared with nanoPE's 99 percent opacity.

The Stanford researchers were also able to improve on the water wicking capability of the nanoPE material by using a microneedle punching technique and coating the material with a water-repelling agent. The result is that perspiration can evaporate through the material unlike with regular polyethylene.
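The size argument in the excerpt above can be sketched numerically. This is a back-of-envelope illustration, not the paper's calculation: the skin temperature and Wien's-law estimate of the body's infrared emission peak are my assumptions, while the pore and visible-wavelength ranges come from the text.

```python
def wien_peak_nm(temp_kelvin):
    """Wavelength of peak blackbody emission (Wien's displacement law)."""
    b = 2.898e6  # Wien's displacement constant in nm*K
    return b / temp_kelvin

skin_temp = 307.0                 # ~34 C skin temperature (assumed value)
ir_peak = wien_peak_nm(skin_temp)  # roughly 9400 nm
visible = (400.0, 700.0)          # visible band in nm
pores = (50.0, 1000.0)            # nanoPE pore-size range from the excerpt

# Pores overlap the visible band, so they scatter visible light strongly,
# but even the largest pore is ~9x smaller than the IR emission peak,
# so the material is effectively transparent to body heat.
print(f"Body IR emission peaks near {ir_peak:.0f} nm")
print(f"Pore range {pores} nm overlaps visible band {visible} nm")
print(f"Largest pore is ~{ir_peak / pores[1]:.0f}x smaller than the IR peak")
```

The take-away is that one material can look solid to the eye while being nearly invisible at thermal wavelengths, simply because of where its pore sizes fall relative to the two bands.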

For those who wish to further pursue their interest, Dexter has a lively writing style and he provides more detail and insight in his posting.

Here’s a link to and a citation for the paper,

Radiative human body cooling by nanoporous polyethylene textile by Po-Chun Hsu, Alex Y. Song, Peter B. Catrysse, Chong Liu, Yucan Peng, Jin Xie, Shanhui Fan, Yi Cui. Science 2 Sep 2016: Vol. 353, Issue 6303, pp. 1019-1023. DOI: 10.1126/science.aaf5471

This paper is open access.