Tag Archives: Institute of Electrical and Electronics Engineers

Popcorn-powered robots

A soft robotic device powered by popcorn, constructed by researchers in Cornell’s Collective Embodied Intelligence Lab. Courtesy: Cornell University

What an intriguing idea, popcorn-powered robots, and one I have difficulty imagining even with the help of the image above. A July 26, 2018 Cornell University news release (an edited version is on EurekAlert) by Melanie Lefkowitz describes the concept,

Cornell researchers have discovered how to power simple robots with a novel substance that, when heated, can expand more than 10 times in size, change its viscosity by a factor of 10 and transition from regular to highly irregular granules with surprising force.

You can also eat it with a little butter and salt.

“Popcorn-Driven Robotic Actuators,” a recent paper co-authored by doctoral student Steven Ceron, mechanical engineering, and Kirstin H. Petersen, assistant professor of electrical and computer engineering, examines how popcorn’s unique qualities can power inexpensive robotic devices that grip, expand or change rigidity.

“The goal of our lab is to try to make very minimalistic robots which, when deployed in high numbers, can still accomplish great things,” said Petersen, who runs Cornell’s Collective Embodied Intelligence Lab. “Simple robots are cheap and less prone to failures and wear, so we can have many operating autonomously over a long time. So we are always looking for new and innovative ideas that will permit us to have more functionalities for less, and popcorn is one of those.”

The study is the first to consider powering robots with popcorn, which is inexpensive, readily available, biodegradable and of course, edible. Since kernels can expand rapidly, exerting force and motion when heated, they could potentially power miniature jumping robots. Edible devices could be ingested for medical procedures. The mix of hard, unpopped granules and lighter popped corn could replace fluids in soft robots without the need for air pumps or compressors.

“Pumps and compressors tend to be more expensive, and they add a lot of weight and expense to your robot,” said Ceron, the paper’s lead author. “With popcorn, in some of the demonstrations that we showed, you just need to apply voltage to get the kernels to pop, so it would take all the bulky and expensive parts out of the robots.”

Since kernels can’t shrink once they’ve popped, a popcorn-powered mechanism can generally be used only once, though multiple uses are conceivable because popped kernels can dissolve in water, Ceron said.

The researchers experimented with Amish Country Extra Small popcorn, which they chose because the brand did not use additives. The extra-small variety had the highest expansion ratio of those they tested.

After studying popcorn’s properties using different types of heating, the researchers constructed three simple robotic actuators – devices used to perform a function.

For a jamming actuator, 36 kernels of popcorn heated with nichrome wire were used to stiffen a flexible silicone beam. For an elastomer actuator, they constructed a three-fingered soft gripper, whose silicone fingers were stuffed with popcorn heated by nichrome wire. When the kernels popped, the expansion exerted pressure against the outer walls of the fingers, causing them to curl. For an origami actuator, they folded recycled Newman’s Own organic popcorn bags into origami bellows folds, filled them with kernels and microwaved them. The expansion of the kernels was strong enough to support the weight of a nine-pound kettlebell.
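
A quick aside from me: the kettlebell demonstration is easier to appreciate with a little arithmetic. Here's a rough sketch (mine, not the researchers') converting the nine-pound kettlebell into the force the popped kernels had to support; the bag cross-section is a made-up value, purely for illustration.

```python
# Rough scale of the origami (popcorn-bag) actuator demonstration.
# The nine-pound kettlebell figure comes from the news release; the bag
# cross-section below is a hypothetical value for illustration only.

KETTLEBELL_LB = 9.0
LB_TO_KG = 0.4536            # pounds to kilograms
G = 9.81                     # gravitational acceleration, m/s^2

force_n = KETTLEBELL_LB * LB_TO_KG * G   # weight the popped kernels must support
print(f"Supported weight: {force_n:.0f} N")        # ~40 N

# Spread over a hypothetical 10 cm x 10 cm bag face, that is only a few kPa:
area_m2 = 0.10 * 0.10
print(f"Equivalent pressure: {force_n / area_m2 / 1000:.1f} kPa")   # ~4 kPa
```

Getting back to the news release,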

The paper was presented at the IEEE [Institute of Electrical and Electronics Engineers] International Conference on Robotics and Automation in May and co-authored with Aleena Kurumunda ’19, Eashan Garg ’20, Mira Kim ’20 and Tosin Yeku ’20. Petersen said she hopes it inspires researchers to explore the possibilities of other nontraditional materials.

“Robotics is really good at embracing new ideas, and we can be super creative about what we use to generate multifunctional properties,” she said. “In the end we come up with very simple solutions to fairly complex problems. We don’t always have to look for high-tech solutions. Sometimes the answer is right in front of us.”

The work was supported by the Cornell Engineering Learning Initiative, the Cornell Electrical and Computer Engineering Early Career Award and the Cornell Sloan Fellowship.

Here’s a link to and a citation for the paper,

Popcorn-Driven Robotic Actuators by Steven Ceron, Aleena Kurumunda, Eashan Garg, Mira Kim, Tosin Yeku, and Kirstin Petersen. Presented at the IEEE International Conference on Robotics and Automation, May 21-25, 2018, Brisbane, Australia.

The researchers have made this video demonstrating the technology,

Embedded AI (artificial intelligence) with a variant of a memristor?

I don’t entirely get how ReRAM (resistive random access memory) is a variant of a memristor but I’m assuming Samuel K. Moore knows what he’s writing about since his May 16, 2018 posting is on the Nanoclast blog (hosted by the IEEE [Institute of Electrical and Electronics Engineers]), Note: Links have been removed,

Resistive RAM technology developer Crossbar says it has inked a deal with aerospace chip maker Microsemi allowing the latter to embed Crossbar’s nonvolatile memory on future chips. The move follows selection of Crossbar’s technology by a leading foundry for advanced manufacturing nodes. Crossbar is counting on resistive RAM (ReRAM) to enable artificial intelligence systems whose neural networks are housed within the device rather than in the cloud.

ReRAM is a variant of the memristor, a nonvolatile memory device whose resistance can be set or reset by a pulse of voltage. The variant Crossbar qualified for advanced manufacturing is called a filament device. It’s built within the layers above a chip’s silicon, where the IC’s interconnects go, and it’s made up of three layers: from top to bottom—silver, amorphous silicon, and tungsten. Voltage across the amorphous silicon causes a filament of silver atoms to cross the gap to the tungsten, making the memory cell conductive. Reversing the voltage pushes the silver back into place, cutting off conduction.

“The filament itself is only three to four nanometers wide,” says Sylvain Dubois, vice president of marketing and business development at Crossbar. “So the cell itself will be able to scale below 10-nanometers.” What’s more, the ratio between the current that flows when the device is on to when it is off is 1,000 or higher. …
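
Based only on the behaviour Moore describes, here's a minimal toy model of a filamentary ReRAM cell: a voltage above a set threshold forms the silver filament (low resistance), a sufficiently negative voltage dissolves it (high resistance), and the state persists with no power applied. The thresholds and resistance values are placeholders I made up; this is not Crossbar's device model.

```python
# Toy model of the filamentary ReRAM cell described above: a voltage pulse
# forms or dissolves a silver filament, and the state persists with no power
# applied (nonvolatile). Thresholds and resistances are invented placeholders,
# not Crossbar's specifications.

class FilamentaryReRAMCell:
    V_SET = 1.5      # volts needed to grow the filament (hypothetical)
    V_RESET = -1.5   # volts needed to dissolve it (hypothetical)
    R_ON = 10e3      # ohms with the filament formed (low-resistance state)
    R_OFF = 10e6     # ohms with it broken; on/off ratio of 1,000, per the article

    def __init__(self):
        self.filament_formed = False          # pristine cell starts high-resistance

    def apply_voltage(self, volts):
        """Set or reset the cell; small read voltages change nothing."""
        if volts >= self.V_SET:
            self.filament_formed = True       # silver bridges to the tungsten
        elif volts <= self.V_RESET:
            self.filament_formed = False      # silver pushed back, conduction cut off

    @property
    def resistance(self):
        return self.R_ON if self.filament_formed else self.R_OFF


cell = FilamentaryReRAMCell()
cell.apply_voltage(2.0)     # write
print(cell.resistance)      # 10000.0  -> conductive, stores a '1'
cell.apply_voltage(0.0)     # power removed: nothing changes
print(cell.resistance)      # 10000.0  -> the nonvolatile part
cell.apply_voltage(-2.0)    # erase
print(cell.resistance)      # 10000000.0 -> back to the high-resistance state
```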

A May 14, 2018 Crossbar news release describes some of the technical AI challenges,

“The biggest challenge facing engineers for AI today is overcoming the memory speed and power bottleneck in the current architecture to get faster data access while lowering the energy cost,” said Dubois. “By enabling a new, memory-centric non-volatile architecture like ReRAM, the entire trained model or knowledge base can be on-chip, connected directly to the neural network with the potential to achieve massive energy savings and performance improvements, resulting in a greatly improved battery life and a better user experience.”

Crossbar’s May 16, 2018 news release provides more detail about their ‘strategic collaboration’ with Microsemi Products (Note: A link has been removed),

Crossbar Inc., the ReRAM technology leader, announced an agreement with Microsemi Corporation, the largest U.S. commercial supplier of military and aerospace semiconductors, in which Microsemi will license Crossbar’s ReRAM core intellectual property. As part of the agreement, Microsemi and Crossbar will collaborate in the research, development and application of Crossbar’s proprietary ReRAM technology in next generation products from Microsemi that integrate Crossbar’s embedded ReRAM with Microsemi products manufactured at the 1x nm process node.

Military and aerospace, eh?

From the memristor to the atomristor?

I’m going to let Michael Berger explain the memristor (from Berger’s Jan. 2, 2017 Nanowerk Spotlight article),

In trying to bring brain-like (neuromorphic) computing closer to reality, researchers have been working on the development of memory resistors, or memristors, which are resistors in a circuit that ‘remember’ their state even if you lose power.

Today, most computers use random access memory (RAM), which moves very quickly as a user works but does not retain unsaved data if power is lost. Flash drives, on the other hand, store information when they are not powered but work much slower. Memristors could provide a memory that is the best of both worlds: fast and reliable.
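
To make the 'remembers its state' point concrete, here's a minimal sketch, in the spirit of Leon Chua's original formulation, of a device whose resistance depends on the charge that has flowed through it and stays put when the current stops. The numbers are illustrative only, not a real device model.

```python
# Minimal memristor sketch: resistance is a function of the charge that has
# passed through the device (its history), and the value persists when no
# current flows. All numbers are illustrative placeholders.

class ToyMemristor:
    def __init__(self, r_min=100.0, r_max=16_000.0, q_max=1e-3):
        self.r_min, self.r_max = r_min, r_max   # ohms
        self.q_max = q_max                      # charge (C) that sweeps R fully
        self.q = 0.0                            # accumulated charge = the "memory"

    def drive(self, current_a, dt_s):
        """Pass a current for dt seconds; this updates the internal state."""
        self.q = min(max(self.q + current_a * dt_s, 0.0), self.q_max)

    @property
    def resistance(self):
        frac = self.q / self.q_max
        return self.r_max - frac * (self.r_max - self.r_min)


m = ToyMemristor()
print(round(m.resistance))          # 16000 ohms before any current has flowed
m.drive(current_a=1e-3, dt_s=0.5)   # "write" by pushing charge through
print(round(m.resistance))          # ~8050 ohms
m.drive(current_a=0.0, dt_s=3600)   # power off for an hour: nothing changes
print(round(m.resistance))          # still ~8050 ohms -- the nonvolatile memory
```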

He goes on to discuss a team at the University of Texas at Austin’s work on creating an extraordinarily thin memristor: an atomristor,

The team’s work features the thinnest memory devices, and it appears to be a universal effect available in all semiconducting 2D monolayers.

The scientists explain that the unexpected discovery of nonvolatile resistance switching (NVRS) in monolayer transitional metal dichalcogenides (MoS2, MoSe2, WS2, WSe2) is likely due to the inherent layered crystalline nature that produces sharp interfaces and clean tunnel barriers. This prevents excessive leakage and affords stable phenomenon so that NVRS can be used for existing memory and computing applications.

“Our work opens up a new field of research in exploiting defects at the atomic scale, and can advance existing applications such as future generation high density storage, and 3D cross-bar networks for neuromorphic memory computing,” notes Akinwande [Deji Akinwande, an Associate Professor at the University of Texas at Austin]. “We also discovered a completely new application, which is non-volatile switching for radio-frequency (RF) communication systems. This is a rapidly emerging field because of the massive growth in wireless technologies and the need for very low-power switches. Our devices consume no static power, an important feature for battery life in mobile communication systems.”

Here’s a link to and a citation for the Akinwande team’s paper,

Atomristor: Nonvolatile Resistance Switching in Atomic Sheets of Transition Metal Dichalcogenides by Ruijing Ge, Xiaohan Wu, Myungsoo Kim, Jianping Shi, Sushant Sonde, Li Tao, Yanfeng Zhang, Jack C. Lee, and Deji Akinwande. Nano Lett., Article ASAP DOI: 10.1021/acs.nanolett.7b04342 Publication Date (Web): December 13, 2017

Copyright © 2017 American Chemical Society

This paper appears to be open access.

ETA January 23, 2018: There’s another account of the atomristor in Samuel K. Moore’s January 23, 2018 posting on the Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website).

Emerging technology and the law

I have three news bits about legal issues that are arising as a consequence of emerging technologies.

Deep neural networks, art, and copyright

Caption: The rise of automated art opens new creative avenues, coupled with new problems for copyright protection. Credit: Provided by: Alexander Mordvintsev, Christopher Olah and Mike Tyka

Presumably this artwork is a demonstration of automated art although they never really do explain how in the news item/news release. An April 26, 2017 news item on ScienceDaily announces research into copyright and the latest in using neural networks to create art,

In 1968, sociologist Jean Baudrillard wrote on automatism that “contained within it is the dream of a dominated world […] that serves an inert and dreamy humanity.”

With the growing popularity of Deep Neural Networks (DNNs), this dream is fast becoming a reality.

Dr. Jean-Marc Deltorn, researcher at the Centre d’études internationales de la propriété intellectuelle in Strasbourg, argues that we must remain a responsive and responsible force in this process of automation — not inert dominators. As he demonstrates in a recent Frontiers in Digital Humanities paper, the dream of automation demands a careful study of the legal problems linked to copyright.

An April 26, 2017 Frontiers (publishing) news release on EurekAlert, which originated the news item, describes the research in more detail,

For more than half a century, artists have looked to computational processes as a way of expanding their vision. DNNs are the culmination of this cross-pollination: by learning to identify a complex number of patterns, they can generate new creations.

These systems are made up of complex algorithms modeled on the transmission of signals between neurons in the brain.

DNN creations rely in equal measure on human inputs and the non-human algorithmic networks that process them.

Inputs are fed into the system, which is layered. Each layer provides an opportunity for a more refined knowledge of the inputs (shape, color, lines). Neural networks compare actual outputs to expected ones, and correct the predictive error through repetition and optimization. They train their own pattern recognition, thereby optimizing their learning curve and producing increasingly accurate outputs.

The deeper the layers are, the higher the level of abstraction. The highest layers are able to identify the contents of a given input with reasonable accuracy, after extended periods of training.
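
A quick aside from me: for anyone who wants to see what "comparing actual outputs to expected ones and correcting the predictive error through repetition and optimization" looks like in code, here's a minimal two-layer network trained by gradient descent on a toy problem. It's a generic illustration I put together, not the systems Deltorn analyzes.

```python
# A minimal two-layer neural network learning the XOR function by repeatedly
# comparing its outputs to the expected ones and correcting the error
# (gradient descent). A generic toy illustration, not the DNNs in the paper.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # expected outputs

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)    # lower layer: coarse features
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)    # higher layer: more abstract
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(10_000):                # repetition
    h = sigmoid(X @ W1 + b1)              # layer 1 output
    out = sigmoid(h @ W2 + b2)            # actual output
    err = out - y                         # compare actual vs expected
    d_out = err * out * (1 - out)         # ... then propagate the error back,
    d_h = (d_out @ W2.T) * h * (1 - h)    # nudging every weight to reduce it
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())   # trends toward the expected [0, 1, 1, 0]
```

Back to the news release,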

Creation thus becomes increasingly automated through what Deltorn calls “the arcane traceries of deep architecture”. The results are sufficiently abstracted from their sources to produce original creations that have been exhibited in galleries, sold at auction and performed at concerts.

The originality of DNNs is a combined product of technological automation on the one hand and human inputs and decisions on the other.

DNNs are gaining popularity. Various platforms (such as DeepDream) now allow internet users to generate their very own new creations. This popularization of the automation process calls for a comprehensive legal framework that ensures a creator’s economic and moral rights with regard to his work – copyright protection.

Form, originality and attribution are the three requirements for copyright. And while DNN creations satisfy the first of these three, the claim to originality and attribution will depend largely on a given country’s legislation and on the traceability of the human creator.

Legislation usually sets a low threshold for originality. As DNNs could in theory create an endless number of riffs on source materials, the uncurbed creation of original works could inflate the existing number of copyright protections.

Additionally, a small number of national copyright laws confer attribution to what UK legislation defines loosely as “the person by whom the arrangements necessary for the creation of the work are undertaken.” In the case of DNNs, this could mean anybody from the programmer to the user of a DNN interface.

Combined with an overly supple take on originality, this view on attribution would further increase the number of copyrightable works.

The risk, in both cases, is that artists will be less willing to publish their own works, for fear of infringement of DNN copyright protections.

In order to promote creativity – one seminal aim of copyright protection – the issue must be limited to creations that manifest a personal voice “and not just the electric glint of a computational engine,” to quote Deltorn. A delicate act of discernment.

DNNs promise new avenues of creative expression for artists – with potential caveats. Copyright protection – a “catalyst to creativity” – must be contained. Many of us gently bask in the glow of an increasingly automated form of technology. But if we want to safeguard the ineffable quality that defines much art, it might be a good idea to home in more closely on the differences between the electric and the creative spark.

This research is and will be part of a broader Frontiers Research Topic collection of articles on Deep Learning and Digital Humanities.

Here’s a link to and a citation for the paper,

Deep Creations: Intellectual Property and the Automata by Jean-Marc Deltorn. Front. Digit. Humanit., 01 February 2017 | https://doi.org/10.3389/fdigh.2017.00003

This paper is open access.

Conference on governance of emerging technologies

I received an April 17, 2017 notice via email about this upcoming conference. Here’s more from the Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics webpage,

The Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics, held at the new Beus Center for Law & Society in Phoenix, AZ, May 17-19, 2017!

Call for Abstracts – Now Closed

The conference will consist of plenary and session presentations and discussions on regulatory, governance, legal, policy, social and ethical aspects of emerging technologies, including (but not limited to) nanotechnology, synthetic biology, gene editing, biotechnology, genomics, personalized medicine, human enhancement technologies, telecommunications, information technologies, surveillance technologies, geoengineering, neuroscience, artificial intelligence, and robotics. The conference is premised on the belief that there is much to be learned and shared from and across the governance experience and proposals for these various emerging technologies.

Keynote Speakers:

Gillian Hadfield, Richard L. and Antoinette Schamoi Kirtland Professor of Law and Professor of Economics, USC [University of Southern California] Gould School of Law

Shobita Parthasarathy, Associate Professor of Public Policy and Women’s Studies; Director, Science, Technology, and Public Policy Program, University of Michigan

Stuart Russell, Professor at [University of California] Berkeley, is a computer scientist known for his contributions to artificial intelligence

Craig Shank, Vice President for Corporate Standards Group in Microsoft’s Corporate, External and Legal Affairs (CELA)

Plenary Panels:

Innovation – Responsible and/or Permissionless

Ellen-Marie Forsberg, Senior Researcher/Research Manager at Oslo and Akershus University College of Applied Sciences

Adam Thierer, Senior Research Fellow with the Technology Policy Program at the Mercatus Center at George Mason University

Wendell Wallach, Consultant, ethicist, and scholar at Yale University’s Interdisciplinary Center for Bioethics

 Gene Drives, Trade and International Regulations

Greg Kaebnick, Director, Editorial Department; Editor, Hastings Center Report; Research Scholar, Hastings Center

Jennifer Kuzma, Goodnight-North Carolina GlaxoSmithKline Foundation Distinguished Professor in Social Sciences in the School of Public and International Affairs (SPIA) and co-director of the Genetic Engineering and Society (GES) Center at North Carolina State University

Andrew Maynard, Senior Sustainability Scholar, Julie Ann Wrigley Global Institute of Sustainability; Director, Risk Innovation Lab; Professor, School for the Future of Innovation in Society, Arizona State University

Gary Marchant, Regents’ Professor of Law; Faculty Director and Faculty Fellow, Center for Law, Science & Innovation, Arizona State University

Marc Saner, Inaugural Director of the Institute for Science, Society and Policy, and Associate Professor, University of Ottawa Department of Geography

Big Data

Anupam Chander, Martin Luther King, Jr. Professor of Law and Director, California International Law Center, UC Davis School of Law

Pilar Ossorio, Professor of Law and Bioethics, University of Wisconsin, School of Law and School of Medicine and Public Health; Morgridge Institute for Research, Ethics Scholar-in-Residence

George Poste, Chief Scientist, Complex Adaptive Systems Initiative (CASI) (http://www.casi.asu.edu/), Regents’ Professor and Del E. Webb Chair in Health Innovation, Arizona State University

Emily Shuckburgh, climate scientist and deputy head of the Polar Oceans Team at the British Antarctic Survey, University of Cambridge

 Responsible Development of AI

Spring Berman, Ira A. Fulton Schools of Engineering, Arizona State University

John Havens, The IEEE [Institute of Electrical and Electronics Engineers] Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems

Subbarao Kambhampati, Senior Sustainability Scientist, Julie Ann Wrigley Global Institute of Sustainability, Professor, School of Computing, Informatics and Decision Systems Engineering, Ira A. Fulton Schools of Engineering, Arizona State University

Wendell Wallach, Consultant, Ethicist, and Scholar at Yale University’s Interdisciplinary Center for Bioethics

Existential and Catastrophic Ricks [sic]

Tony Barrett, Co-Founder and Director of Research of the Global Catastrophic Risk Institute

Haydn Belfield,  Academic Project Administrator, Centre for the Study of Existential Risk at the University of Cambridge

Margaret E. Kosal, Associate Director, Sam Nunn School of International Affairs, Georgia Institute of Technology

Catherine Rhodes, Academic Project Manager, Centre for the Study of Existential Risk (CSER), University of Cambridge

These are the panels of interest to me; there are others on the homepage.

Here’s some information from the Conference registration webpage,

Early Bird Registration – $50 off until May 1! Enter discount code: earlybirdGETs50

New: Group Discount – Register 2+ attendees together and receive an additional 20% off for all group members!

Click Here to Register!

Conference registration fees are as follows:

  • General (non-CLE) Registration: $150.00
  • CLE Registration: $350.00
  • *Current Student / ASU Law Alumni Registration: $50.00
  • ^Cybersecurity sessions only (May 19): $100 CLE / $50 General / Free for students (registration info coming soon)

There you have it.

Neuro-techno future laws

I’m pretty sure this isn’t the first exploration of potential legal issues arising from research into neuroscience although it’s the first one I’ve stumbled across. From an April 25, 2017 news item on phys.org,

New human rights laws to prepare for advances in neurotechnology that put the ‘freedom of the mind’ at risk have been proposed today in the open access journal Life Sciences, Society and Policy.

The authors of the study suggest four new human rights laws could emerge in the near future to protect against exploitation and loss of privacy. The four laws are: the right to cognitive liberty, the right to mental privacy, the right to mental integrity and the right to psychological continuity.

An April 25, 2017 Biomed Central news release on EurekAlert, which originated the news item, describes the work in more detail,

Marcello Ienca, lead author and PhD student at the Institute for Biomedical Ethics at the University of Basel, said: “The mind is considered to be the last refuge of personal freedom and self-determination, but advances in neural engineering, brain imaging and neurotechnology put the freedom of the mind at risk. Our proposed laws would give people the right to refuse coercive and invasive neurotechnology, protect the privacy of data collected by neurotechnology, and protect the physical and psychological aspects of the mind from damage by the misuse of neurotechnology.”

Advances in neurotechnology, such as sophisticated brain imaging and the development of brain-computer interfaces, have led to these technologies moving away from a clinical setting and into the consumer domain. While these advances may be beneficial for individuals and society, there is a risk that the technology could be misused and create unprecedented threats to personal freedom.

Professor Roberto Andorno, co-author of the research, explained: “Brain imaging technology has already reached a point where there is discussion over its legitimacy in criminal court, for example as a tool for assessing criminal responsibility or even the risk of reoffending. Consumer companies are using brain imaging for ‘neuromarketing’, to understand consumer behaviour and elicit desired responses from customers. There are also tools such as ‘brain decoders’ which can turn brain imaging data into images, text or sound. All of these could pose a threat to personal freedom which we sought to address with the development of four new human rights laws.”

The authors explain that as neurotechnology improves and becomes commonplace, there is a risk that the technology could be hacked, allowing a third-party to ‘eavesdrop’ on someone’s mind. In the future, a brain-computer interface used to control consumer technology could put the user at risk of physical and psychological damage caused by a third-party attack on the technology. There are also ethical and legal concerns over the protection of data generated by these devices that need to be considered.

International human rights laws make no specific mention to neuroscience, although advances in biomedicine have become intertwined with laws, such as those concerning human genetic data. Similar to the historical trajectory of the genetic revolution, the authors state that the on-going neurorevolution will force a reconceptualization of human rights laws and even the creation of new ones.

Marcello Ienca added: “Science-fiction can teach us a lot about the potential threat of technology. Neurotechnology featured in famous stories has in some cases already become a reality, while others are inching ever closer, or exist as military and commercial prototypes. We need to be prepared to deal with the impact these technologies will have on our personal freedom.”

Here’s a link to and a citation for the paper,

Towards new human rights in the age of neuroscience and neurotechnology by Marcello Ienca and Roberto Andorno. Life Sciences, Society and Policy 2017 13:5 DOI: 10.1186/s40504-017-0050-1 Published: 26 April 2017

©  The Author(s). 2017

This paper is open access.

Drive to operationalize transistors that outperform silicon gets a boost

Dexter Johnson has written a Jan. 19, 2017 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers]) about work which could lead to supplanting silicon-based transistors with carbon nanotube-based transistors in the future (Note: Links have been removed),

The end appears nigh for scaling down silicon-based complementary metal-oxide semiconductor (CMOS) transistors, with some experts seeing the cutoff date as early as 2020.

While carbon nanotubes (CNTs) have long been among the nanomaterials investigated to serve as replacement for silicon in CMOS field-effect transistors (FETs) in a post-silicon future, they have always been bogged down by some frustrating technical problems. But, with some of the main technical showstoppers having been largely addressed—like sorting between metallic and semiconducting carbon nanotubes—the stage has been set for CNTs to start making their presence felt a bit more urgently in the chip industry.

Peking University scientists in China have now developed carbon nanotube field-effect transistors (CNT FETs) having a critical dimension—the gate length—of just five nanometers that would outperform silicon-based CMOS FETs at the same scale. The researchers claim in the journal Science that this marks the first time that sub-10 nanometer CNT CMOS FETs have been reported.

More importantly than just being the first, the Peking group showed that their CNT-based FETs can operate faster and at a lower supply voltage than their silicon-based counterparts.

A Jan. 20, 2017 article by Bob Yirka for phys.org provides more insight into the work at Peking University,

One of the most promising candidates is carbon nanotubes—due to their unique properties, transistors based on them could be smaller, faster and more efficient. Unfortunately, the difficulty in growing carbon nanotubes and their sometimes persnickety nature means that a way to make them and mass produce them has not been found. In this new effort, the researchers report on a method of creating carbon nanotube transistors that are suitable for testing, but not mass production.

To create the transistors, the researchers took a novel approach—instead of growing carbon nanotubes that had certain desired properties, they grew some and put them randomly on a silicon surface and then added electronics that would work with the properties they had—clearly not a strategy that would work for mass production, but one that allowed for building a carbon nanotube transistor that could be tested to see if it would verify theories about its performance. Realizing there would still be scaling problems using traditional electrodes, the researchers built a new kind by etching very tiny sheets of graphene. The result was a very tiny transistor, the team reports, capable of moving more current than a standard CMOS transistor using just half of the normal amount of voltage. It was also faster due to a much shorter switch delay: a gate delay of just 70 femtoseconds.
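
That 70-femtosecond figure can be unpacked with the standard intrinsic gate-delay estimate: delay ≈ gate capacitance × supply voltage ÷ on-current. The sketch below is mine; the capacitance and current values are hypothetical placeholders chosen only to show why a smaller capacitance and a lower supply voltage shorten the delay, not numbers from the Science paper.

```python
# Standard intrinsic gate-delay estimate for a FET: tau = C_gate * V_dd / I_on.
# The capacitance and on-current below are hypothetical placeholders chosen
# only to illustrate the scaling; they are not values from the Peking paper.

def gate_delay(c_gate_f, v_dd, i_on_a):
    """Time to charge the gate capacitance through the on-current."""
    return c_gate_f * v_dd / i_on_a

# Hypothetical 5-nm-class CNT FET: tiny capacitance, low supply voltage.
tau_cnt = gate_delay(c_gate_f=2e-18, v_dd=0.4, i_on_a=10e-6)
# Hypothetical silicon device at a similar node: larger capacitance, higher V_dd.
tau_si = gate_delay(c_gate_f=10e-18, v_dd=0.8, i_on_a=20e-6)

print(f"CNT FET delay ~ {tau_cnt*1e15:.0f} fs")   # ~80 fs, same order as reported
print(f"Si FET delay  ~ {tau_si*1e15:.0f} fs")    # ~400 fs with these placeholders
```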

Peking University has published an edited and more comprehensive version of the phys.org article first reported by Lisa Zyga and edited by Arthars,

Now in a new paper published in Nano Letters, researchers Tian Pei, et al., at Peking University in Beijing, China, have developed a modular method for constructing complicated integrated circuits (ICs) made from many FETs on individual CNTs. To demonstrate, they constructed an 8-bit bus system–a circuit that is widely used for transferring data in computers–that contains 46 FETs on six CNTs. This is the most complicated CNT IC fabricated to date, and the fabrication process is expected to lead to even more complex circuits.

SEM image of an eight-transistor (8-T) unit that was fabricated on two CNTs (marked with two white dotted lines). The scale bar is 100 μm. (Copyright: 2014 American Chemical Society)

Ever since the first CNT FET was fabricated in 1998, researchers have been working to improve CNT-based electronics. As the scientists explain in their paper, semiconducting CNTs are promising candidates for replacing silicon wires because they are thinner, which offers better scaling-down potential, and also because they have a higher carrier mobility, resulting in higher operating speeds.

Yet CNT-based electronics still face challenges. One of the most significant challenges is obtaining arrays of semiconducting CNTs while removing the less-suitable metallic CNTs. Although scientists have devised a variety of ways to separate semiconducting and metallic CNTs, these methods almost always result in damaged semiconducting CNTs with degraded performance.

To get around this problem, researchers usually build ICs on single CNTs, which can be individually selected based on their condition. It’s difficult to use more than one CNT because no two are alike: they each have slightly different diameters and properties that affect performance. However, using just one CNT limits the complexity of these devices to simple logic and arithmetical gates.

The 8-T unit can be used as the basic building block of a variety of ICs other than BUS systems, making this modular method a universal and efficient way to construct large-scale CNT ICs. Building on their previous research, the scientists hope to explore these possibilities in the future.

“In our earlier work, we showed that a carbon nanotube based field-effect transistor is about five (n-type FET) to ten (p-type FET) times faster than its silicon counterparts, but uses much less energy, about a few percent of that of similar sized silicon transistors,” Peng said.

“In the future, we plan to construct large-scale integrated circuits that outperform silicon-based systems. These circuits are faster, smaller, and consume much less power. They can also work at extremely low temperatures (e.g., in space) and moderately high temperatures (potentially no cooling system required), on flexible and transparent substrates, and potentially be bio-compatible.”

Here’s a link to and a citation for the paper,

Scaling carbon nanotube complementary transistors to 5-nm gate lengths by Chenguang Qiu, Zhiyong Zhang, Mengmeng Xiao, Yingjun Yang, Donglai Zhong, Lian-Mao Peng. Science  20 Jan 2017: Vol. 355, Issue 6322, pp. 271-276 DOI: 10.1126/science.aaj1628

This paper is behind a paywall.

Nanotechnology cracks Wall Street (Daily)

David Dittman’s Jan. 11, 2017 article for wallstreetdaily.com portrays a great deal of excitement about nanotechnology and the possibilities (I’m highlighting the article because it showcases Dexter Johnson’s Nanoclast blog),

When we talk about next-generation aircraft, next-generation wearable biomedical devices, and next-generation fiber-optic communication, the consistent theme is nano: nanotechnology, nanomaterials, nanophotonics.

For decades, manufacturers have used carbon fiber to make lighter sports equipment, stronger aircraft, and better textiles.

Now, as Dexter Johnson of IEEE [Institute of Electrical and Electronics Engineers] Spectrum reports [on his Nanoclast blog], carbon nanotubes will help make aerospace composites more efficient:

Now researchers at the University of Surrey’s Advanced Technology Institute (ATI), the University of Bristol’s Advanced Composite Centre for Innovation and Science (ACCIS), and aerospace company Bombardier [headquartered in Montréal, Canada] have collaborated on the development of a carbon nanotube-enabled material set to replace the polymer sizing. The reinforced polymers produced with this new material have enhanced electrical and thermal conductivity, opening up new functional possibilities. It will be possible, say the British researchers, to embed gadgets such as sensors and energy harvesters directly into the material.

When it comes to flight, lighter is better, so building sensors and energy harvesters into the body of aircraft marks a significant leap forward.

Johnson also reports for IEEE Spectrum on a “novel hybrid nanomaterial” based on oscillations of electrons — a major advance in nanophotonics:

Researchers at the University of Texas at Austin have developed a hybrid nanomaterial that enables the writing, erasing and rewriting of optical components. The researchers believe that this nanomaterial and the techniques used in exploiting it could create a new generation of optical chips and circuits.

Of course, the concept of rewritable optics is not altogether new; it forms the basis of optical storage mediums like CDs and DVDs. However, CDs and DVDs require bulky light sources, optical media and light detectors. The advantage of the rewritable integrated photonic circuits developed here is that it all happens on a 2-D material.

“To develop rewritable integrated nanophotonic circuits, one has to be able to confine light within a 2-D plane, where the light can travel in the plane over a long distance and be arbitrarily controlled in terms of its propagation direction, amplitude, frequency and phase,” explained Yuebing Zheng, a professor at the University of Texas who led the research… “Our material, which is a hybrid, makes it possible to develop rewritable integrated nanophotonic circuits.”

Who knew that mixing graphene with homemade Silly Putty would create a potentially groundbreaking new material that could make “wearables” actually useful?

Next-generation biomedical devices will undoubtedly include some of this stuff:

A dash of graphene can transform the stretchy goo known as Silly Putty into a pressure sensor able to monitor a human pulse or even track the dainty steps of a small spider.

The material, dubbed G-putty, could be developed into a device that continuously monitors blood pressure, its inventors hope.

The guys who made G-putty often rely on “household stuff” in their research.

It’s nice to see a blogger’s work be highlighted. Congratulations Dexter.

G-putty was mentioned here in a Dec. 30, 2016 posting which also includes a link to Dexter’s piece on the topic.

Keeping up with science is impossible: ruminations on a nanotechnology talk

I think it’s time to give this suggestion again. Always hold a little doubt about the science information you read and hear. Everybody makes mistakes.

Here’s an example of what can happen. George Tulevski who gave a talk about nanotechnology in Nov. 2016 for TED@IBM is an accomplished scientist who appears to have made an error during his TED talk. From Tulevski’s The Next Step in Nanotechnology talk transcript page,

When I was a graduate student, it was one of the most exciting times to be working in nanotechnology. There were scientific breakthroughs happening all the time. The conferences were buzzing, there was tons of money pouring in from funding agencies. And the reason is when objects get really small, they’re governed by a different set of physics that govern ordinary objects, like the ones we interact with. We call this physics quantum mechanics. [emphases mine] And what it tells you is that you can precisely tune their behavior just by making seemingly small changes to them, like adding or removing a handful of atoms, or twisting the material. It’s like this ultimate toolkit. You really felt empowered; you felt like you could make anything.

In September 2016, scientists at Cambridge University (UK) announced they had concrete proof that the physics governing materials at the nanoscale is unique, i.e., it does not follow the rules of either classical or quantum physics. From my Oct. 27, 2016 posting,

A Sept. 29, 2016 University of Cambridge press release, which originated the news item, hones in on the peculiarities of the nanoscale,

In the middle, on the order of around 10–100,000 molecules, something different is going on. Because it’s such a tiny scale, the particles have a really big surface-area-to-volume ratio. This means the energetics of what goes on at the surface become very important, much as they do on the atomic scale, where quantum mechanics is often applied.

Classical thermodynamics breaks down. But because there are so many particles, and there are many interactions between them, the quantum model doesn’t quite work either.
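
A quick calculation shows why that surface term takes over: the surface-area-to-volume ratio of a sphere is 3/r, so it grows a million-fold as the radius shrinks from a centimetre to ten nanometres. Here's the arithmetic:

```python
# Surface-area-to-volume ratio of a sphere is (4*pi*r^2)/((4/3)*pi*r^3) = 3/r,
# so the ratio grows as particles shrink -- the reason surface energetics
# dominate at the nanoscale, as the Cambridge release explains.
import math

def surface_to_volume(radius_m):
    area = 4 * math.pi * radius_m**2
    volume = (4 / 3) * math.pi * radius_m**3
    return area / volume          # equals 3 / radius_m

for label, r in [("1 cm bead", 1e-2), ("1 micron particle", 1e-6), ("10 nm particle", 1e-8)]:
    print(f"{label:18s} S/V = {surface_to_volume(r):.1e} per metre")
# The 10 nm particle's ratio is a million times the 1 cm bead's.
```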

It is very, very easy to miss new developments no matter how tirelessly you scan for information.

Tulevski is a good, interesting, and informed speaker but I do have one other hesitation regarding his talk. He seems to think that over the last 15 years there should have been more practical applications arising from the field of nanotechnology. There are two aspects here. First, he seems to be dating the ‘nanotechnology’ effort from the beginning of the US National Nanotechnology Initiative and there are many scientists who would object to that as the starting point. Second, 15 or even 30 or more years is a brief period of time, especially when you are investigating that which hasn’t been investigated before. For example, you might want to check out “Leviathan and the Air-Pump: Hobbes, Boyle, and the Experimental Life” (published 1985), a book by Steven Shapin and Simon Schaffer (Wikipedia entry for the book). The amount of time (years) spent on how to make just the glue which held the various experimental apparatuses together was a revelation to me. Of course, it makes perfect sense that if you’re trying something new, you’re going to have to figure out everything.

By the way, I include my blog as one of the sources of information that can be faulty despite efforts to make corrections and to keep up with the latest. Even the scientists at Cambridge University can run into some problems as I noted in my Jan. 28, 2016 posting.

Getting back to Tulevski, here’s a link to his lively, informative talk:
https://www.ted.com/talks/george_tulevski_the_next_step_in_nanotechnology#t-562570

ETA Jan. 24, 2017: For some insight into how uncertain, tortuous, and expensive commercializing technology can be read Dexter Johnson’s Jan. 23, 2017 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website). Here’s an excerpt (Note: Links have been removed),

The brief description of this odyssey includes US $78 million in financing over 15 years and $50 million in revenues over that period through licensing of its technology and patents. That revenue includes a back-against-the-wall sell-off of a key business unit to Lockheed Martin in 2008. Another key moment occurred in 2012 when Belgian-based nanoelectronics powerhouse Imec took on the job of further developing Nantero’s carbon-nanotube-based memory. Despite the money and support from major electronics players, the big commercial breakout of their NRAM technology seemed ever less likely to happen with the passage of time.

Colours in bendable electronic paper

Scientists at Chalmers University of Technology (Sweden) are able to produce a rainbow of colours in a new electronic paper according to an Oct. 14, 2016 news item on Nanowerk,

Less than a micrometre thin, bendable and giving all the colours that a regular LED display does, it still needs ten times less energy than a Kindle tablet. Researchers at Chalmers University of Technology have developed the basis for a new electronic “paper.”

When Chalmers researcher Andreas Dahlin and his PhD student Kunli Xiong were working on placing conductive polymers on nanostructures, they discovered that the combination would be perfectly suited to creating electronic displays as thin as paper. A year later the results were ready for publication. A material that is less than a micrometre thin, flexible and giving all the colours that a standard LED display does.

An Oct. 14, 2016 Chalmers University of Technology press release (also on EurekAlert) by Mats Tiborn, which originated the news item, expands on the theme,

“The ’paper’ is similar to the Kindle tablet. It isn’t lit up like a standard display, but rather reflects the external light which illuminates it. Therefore it works very well where there is bright light, such as out in the sun, in contrast to standard LED displays that work best in darkness. At the same time it needs only a tenth of the energy that a Kindle tablet uses, which itself uses much less energy than a tablet LED display”, says Andreas Dahlin.

It all depends on the polymers’ ability to control how light is absorbed and reflected. The polymers that cover the whole surface lead the electric signals throughout the full display and create images in high resolution. The material is not yet ready for application, but the basis is there. The team has tested and built a few pixels. These use the same red, green and blue (RGB) colours that together can create all the colours in standard LED displays. The results so far have been positive, what remains now is to build pixels that cover an area as large as a display.

“We are working at a fundamental level but even so, the step to manufacturing a product out of it shouldn’t be too far away. What we need now are engineers”, says Andreas Dahlin.

One obstacle today is that there is gold and silver in the display.

“The gold surface is 20 nanometres thick so there is not that much gold in it. But at present there is a lot of gold wasted in manufacturing it. Either we reduce the waste or we find another way to reduce the production cost”, says Andreas Dahlin.

Caption: Chalmers’ e-paper contains gold, silver and PET plastic. The layer that produces the colours is less than a micrometre thin. Credit: Mats Tiborn

Here’s a link to and a citation for the paper,

Plasmonic Metasurfaces with Conjugated Polymers for Flexible Electronic Paper in Color by Kunli Xiong, Gustav Emilsson, Ali Maziz, Xinxin Yang, Lei Shao, Edwin W. H. Jager, and Andreas B. Dahlin. Advanced Materials DOI: 10.1002/adma.201603358 Version of Record online: 27 SEP 2016

© 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

Finally, Dexter Johnson in an Oct. 18, 2016 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) offers some broader insight into this development (Note: Links have been removed),

Plasmonic nanostructures leverage the oscillations in the density of electrons that are generated when photons hit a metal surface. Researchers have used these structures for applications including increasing the light absorption of solar cells and creating colors without the need for dyes. As a demonstration of how effective these nanostructures are as a replacement for color dyes, the technology has been used to produce a miniature copy of the Mona Lisa in a space smaller than the footprint taken up by a single pixel on an iPhone Retina display.

A new memristor circuit

Apparently engineers at the University of Massachusetts at Amherst have developed a new kind of memristor. A Sept. 29, 2016 news item on Nanowerk makes the announcement (Note: A link has been removed),

Engineers at the University of Massachusetts Amherst are leading a research team that is developing a new type of nanodevice for computer microprocessors that can mimic the functioning of a biological synapse—the place where a signal passes from one nerve cell to another in the body. The work is featured in the advance online publication of Nature Materials (“Memristors with diffusive dynamics as synaptic emulators for neuromorphic computing”).

Such neuromorphic computing in which microprocessors are configured more like human brains is one of the most promising transformative computing technologies currently under study.

While it doesn’t sound different from any other memristor, that’s misleading. Do read on. A Sept. 27, 2016 University of Massachusetts at Amherst news release, which originated the news item, provides more detail about the researchers and the work,

J. Joshua Yang and Qiangfei Xia are professors in the electrical and computer engineering department in the UMass Amherst College of Engineering. Yang describes the research as part of collaborative work on a new type of memristive device.

Memristive devices are electrical resistance switches that can alter their resistance based on the history of applied voltage and current. These devices can store and process information and offer several key performance characteristics that exceed conventional integrated circuit technology.

“Memristors have become a leading candidate to enable neuromorphic computing by reproducing the functions in biological synapses and neurons in a neural network system, while providing advantages in energy and size,” the researchers say.

Neuromorphic computing—meaning microprocessors configured more like human brains than like traditional computer chips—is one of the most promising transformative computing technologies currently under intensive study. Xia says, “This work opens a new avenue of neuromorphic computing hardware based on memristors.”

They say that most previous work in this field with memristors has not implemented diffusive dynamics without using large standard technology found in integrated circuits commonly used in microprocessors, microcontrollers, static random access memory and other digital logic circuits.

The researchers say they proposed and demonstrated a bio-inspired solution to the diffusive dynamics that is fundamentally different from the standard technology for integrated circuits while sharing great similarities with synapses. They say, “Specifically, we developed a diffusive-type memristor where diffusion of atoms offers similar dynamics and the needed time-scales as its bio-counterpart, leading to a more faithful emulation of actual synapses, i.e., a true synaptic emulator.”

The researchers say, “The results here provide an encouraging pathway toward synaptic emulation using diffusive memristors for neuromorphic computing.”

Here’s a link to and a citation for the paper,

Memristors with diffusive dynamics as synaptic emulators for neuromorphic computing by Zhongrui Wang, Saumil Joshi, Sergey E. Savel’ev, Hao Jiang, Rivu Midya, Peng Lin, Miao Hu, Ning Ge, John Paul Strachan, Zhiyong Li, Qing Wu, Mark Barnell, Geng-Lin Li, Huolin L. Xin, R. Stanley Williams [emphasis mine], Qiangfei Xia, & J. Joshua Yang. Nature Materials (2016) doi:10.1038/nmat4756 Published online 26 September 2016

This paper is behind a paywall.

I’ve emphasized R. Stanley Williams’ name as he was the lead researcher on the HP Labs team that proved Leon Chua’s 1971 theory about the memristor and exerted engineering control of the memristor in 2008. (Bernard Widrow, in the 1960s,  predicted and proved the existence of something he termed a ‘memistor’. Chua arrived at his ‘memristor’ theory independently.)

Austin Silver in a Sept. 29, 2016 posting on The Human OS blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) delves into this latest memristor research (Note: Links have been removed),

In research published in Nature Materials on 26 September [2016], Yang and his team mimicked a crucial underlying component of how synaptic connections get stronger or weaker: the flow of calcium.

The movement of calcium into or out of the neuronal membrane, neuroscientists have found, directly affects the connection. Chemical processes move the calcium in and out— triggering a long-term change in the synapses’ strength. 2015 research in ACS NanoLetters and Advanced Functional Materials discovered that types of memristors can simulate some of the calcium behavior, but not all.

In the new research, Yang combined two types of memristors in series to create an artificial synapse. The hybrid device more closely mimics biological synapse behavior—the calcium flow in particular, Yang says.

The new memristor used–called a diffusive memristor because atoms in the resistive material move even without an applied voltage when the device is in the high resistance state—was a dielectric film sandwiched between Pt [platinum] or Au [gold] electrodes. The film contained Ag [silver] nanoparticles, which would play the role of calcium in the experiments.

By tracking the movement of the silver nanoparticles inside the diffusive memristor, the researchers noticed a striking similarity to how calcium functions in biological systems.

A voltage pulse to the hybrid device drove silver into the gap between the diffusive memristor’s two electrodes–creating a filament bridge. After the pulse died away, the filament started to break and the silver moved back— resistance increased.

Like the case with calcium, a force made silver go in and a force made silver go out.

To complete the artificial synapse, the researchers connected the diffusive memristor in series to another type of memristor that had been studied before.

When presented with a sequence of voltage pulses with particular timing, the artificial synapse showed the kind of long-term strengthening behavior a real synapse would, according to the researchers. “We think it is sort of a real emulation, rather than simulation because they have the physical similarity,” Yang says.
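
The behavioural contrast Yang describes, a fast element that relaxes on its own (the 'calcium') feeding a slow element that holds a lasting weight, can be captured in a toy simulation. The sketch below is mine, with invented time constants and thresholds; it is not the device model from the Nature Materials paper.

```python
# Qualitative sketch of the artificial synapse described above: a diffusive
# element whose conductance decays spontaneously after each voltage pulse
# (the "calcium-like" part), feeding a nonvolatile weight that only
# strengthens when pulses arrive close together. All numbers are invented
# for illustration; this is not the model from the Nature Materials paper.
import math

TAU_DIFFUSIVE = 5.0   # ms, spontaneous relaxation of the silver filament (hypothetical)
THRESHOLD = 0.5       # residual conductance needed to trigger strengthening (hypothetical)

def run(pulse_times_ms, total_ms=100.0, dt=0.1):
    g_diff, weight = 0.0, 0.0          # diffusive conductance, long-term weight
    pulses = {round(t, 1) for t in pulse_times_ms}
    for i in range(int(round(total_ms / dt))):
        t = round(i * dt, 1)
        if t in pulses:
            if g_diff > THRESHOLD:     # previous pulse's filament hasn't fully relaxed
                weight += 0.1          # -> lasting change in the drift-type memristor
            g_diff = 1.0               # the pulse drives silver into the gap again
        g_diff *= math.exp(-dt / TAU_DIFFUSIVE)   # silver diffuses back; filament decays
    return round(weight, 2)

print(run([0, 2, 4, 6, 8]))      # 0.4 -> closely spaced pulses strengthen the synapse
print(run([0, 20, 40, 60, 80]))  # 0.0 -> widely spaced pulses leave it unchanged
```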

I was glad to find some additional technical detail about this new memristor and to find the Human OS blog, which is new to me and according to its home page is a “biomedical blog, featuring the wearable sensors, big data analytics, and implanted devices that enable new ventures in personalized medicine.”

Cooling the skin with plastic clothing

Rather than cooling or heating an entire room, why not cool or heat the person? Engineers at Stanford University (California, US) have developed a material that helps with half of that premise: cooling. From a Sept. 1, 2016 news item on ScienceDaily,

Stanford engineers have developed a low-cost, plastic-based textile that, if woven into clothing, could cool your body far more efficiently than is possible with the natural or synthetic fabrics in clothes we wear today.

Describing their work in Science, the researchers suggest that this new family of fabrics could become the basis for garments that keep people cool in hot climates without air conditioning.

“If you can cool the person rather than the building where they work or live, that will save energy,” said Yi Cui, an associate professor of materials science and engineering and of photon science at Stanford.

A Sept. 1, 2016 Stanford University news release (also on EurekAlert) by Tom Abate, which originated the news item, explains the work in more detail,

This new material works by allowing the body to discharge heat in two ways that would make the wearer feel nearly 4 degrees Fahrenheit cooler than if they wore cotton clothing.

The material cools by letting perspiration evaporate through the material, something ordinary fabrics already do. But the Stanford material provides a second, revolutionary cooling mechanism: allowing heat that the body emits as infrared radiation to pass through the plastic textile.

All objects, including our bodies, throw off heat in the form of infrared radiation, an invisible and benign wavelength of light. Blankets warm us by trapping infrared heat emissions close to the body. This thermal radiation escaping from our bodies is what makes us visible in the dark through night-vision goggles.

“Forty to 60 percent of our body heat is dissipated as infrared radiation when we are sitting in an office,” said Shanhui Fan, a professor of electrical engineering who specializes in photonics, which is the study of visible and invisible light. “But until now there has been little or no research on designing the thermal radiation characteristics of textiles.”

Super-powered kitchen wrap

To develop their cooling textile, the Stanford researchers blended nanotechnology, photonics and chemistry to give polyethylene – the clear, clingy plastic we use as kitchen wrap – a number of characteristics desirable in clothing material: It allows thermal radiation, air and water vapor to pass right through, and it is opaque to visible light.

The easiest attribute was allowing infrared radiation to pass through the material, because this is a characteristic of ordinary polyethylene food wrap. Of course, kitchen plastic is impervious to water and is see-through as well, rendering it useless as clothing.

The Stanford researchers tackled these deficiencies one at a time.

First, they found a variant of polyethylene commonly used in battery making that has a specific nanostructure that is opaque to visible light yet is transparent to infrared radiation, which could let body heat escape. This provided a base material that was opaque to visible light for the sake of modesty but thermally transparent for purposes of energy efficiency.

They then modified the industrial polyethylene by treating it with benign chemicals to enable water vapor molecules to evaporate through nanopores in the plastic, said postdoctoral scholar and team member Po-Chun Hsu, allowing the plastic to breathe like a natural fiber.

Making clothes

That success gave the researchers a single-sheet material that met their three basic criteria for a cooling fabric. To make this thin material more fabric-like, they created a three-ply version: two sheets of treated polyethylene separated by a cotton mesh for strength and thickness.

To test the cooling potential of their three-ply construct versus a cotton fabric of comparable thickness, they placed a small swatch of each material on a surface that was as warm as bare skin and measured how much heat each material trapped.

“Wearing anything traps some heat and makes the skin warmer,” Fan said. “If dissipating thermal radiation were our only concern, then it would be best to wear nothing.”

The comparison showed that the cotton fabric made the skin surface 3.6 F warmer than their cooling textile. The researchers said this difference means that a person dressed in their new material might feel less inclined to turn on a fan or air conditioner.

The researchers are continuing their work on several fronts, including adding more colors, textures and cloth-like characteristics to their material. Adapting a material already mass produced for the battery industry could make it easier to create products.

“If you want to make a textile, you have to be able to make huge volumes inexpensively,” Cui said.

Fan believes that this research opens up new avenues of inquiry to cool or heat things, passively, without the use of outside energy, by tuning materials to dissipate or trap infrared radiation.

“In hindsight, some of what we’ve done looks very simple, but it’s because few have really been looking at engineering the radiation characteristics of textiles,” he said.

Dexter Johnson (Nanoclast blog on the IEEE [Institute of Electrical and Electronics Engineers] website) has written a Sept. 2, 2016 posting where he provides more technical detail about this work,

The nanoPE [nanoporous polyethylene] material is able to achieve this release of the IR heat because of the size of the interconnected pores. The pores can range in size from 50 to 1000 nanometers. They’re therefore comparable in size to wavelengths of visible light, which allows the material to scatter that light. However, because the pores are much smaller than the wavelength of infrared light, the nanoPE is transparent to the IR.

It is this combination of blocking visible light and allowing IR to pass through that distinguishes the nanoPE material from regular polyethylene, which allows similar amounts of IR to pass through, but can only block 20 percent of the visible light compared to nanoPE’s 99 percent opacity.

The Stanford researchers were also able to improve on the water wicking capability of the nanoPE material by using a microneedle punching technique and coating the material with a water-repelling agent. The result is that perspiration can evaporate through the material unlike with regular polyethylene.
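
Dexter's size argument is easy to check with Wien's displacement law: skin at about 33 °C radiates with a peak wavelength near 9.5 micrometres, roughly ten times the largest nanoPE pore, while visible light (0.4-0.7 micrometres) falls squarely inside the 50-1000 nanometre pore range. Here's the quick check:

```python
# Wien's displacement law gives the peak wavelength of thermal (blackbody)
# emission: lambda_peak = b / T. Skin near 33 C radiates around 9.5 um,
# roughly ten times the largest nanoPE pore (1000 nm), whereas visible light
# (400-700 nm) falls inside the 50-1000 nm pore range -- hence IR-transparent
# but visibly opaque, as described above.

WIEN_B = 2.898e-3          # m*K, Wien's displacement constant
skin_temp_k = 273.15 + 33  # approximate skin surface temperature in kelvin

peak_wavelength_um = WIEN_B / skin_temp_k * 1e6
print(f"Peak of body IR emission: {peak_wavelength_um:.1f} um")           # ~9.5 um

largest_pore_um = 1.0      # 1000 nm, the upper end of the quoted pore range
print(f"Largest pore / IR peak wavelength: {largest_pore_um / peak_wavelength_um:.2f}")  # ~0.11
print("Visible band 0.4-0.7 um lies within the 0.05-1.0 um pore-size range")
```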

For those who wish to further pursue their interest, Dexter has a lively writing style and he provides more detail and insight in his posting.

Here’s a link to and a citation for the paper,

Radiative human body cooling by nanoporous polyethylene textile by Po-Chun Hsu, Alex Y. Song, Peter B. Catrysse, Chong Liu, Yucan Peng, Jin Xie, Shanhui Fan, Yi Cui. Science  02 Sep 2016: Vol. 353, Issue 6303, pp. 1019-1023 DOI: 10.1126/science.aaf5471

This paper is open access.