Tag Archives: IEEE

AI (artificial intelligence) for Good Global Summit from May 15 – 17, 2018 in Geneva, Switzerland: details and an interview with Frederic Werner

With all the talk about artificial intelligence (AI), much of the attention seems to be paid to apocalyptic scenarios: loss of jobs, financial hardship, loss of personal agency and privacy, and more, with all of these impacts described as global. Still, there are some folks who are considering and working on ‘AI for good’.

If you’d asked me, the International Telecommunications Union (ITU) would not have been my first guess (my choice would have been the United Nations Educational, Scientific and Cultural Organization [UNESCO]) as an agency likely to host the 2018 AI for Good Global Summit. But it turns out the ITU is a UN (United Nations) agency and, according to its Wikipedia entry, an intergovernmental public-private partnership, which may explain the nature of the participants in the upcoming summit.

The news

First, there’s a May 4, 2018 ITU media advisory (received via email or you can find the full media advisory here) about the upcoming summit,

Artificial Intelligence (AI) is now widely identified as being able to address the greatest challenges facing humanity – supporting innovation in fields ranging from crisis management and healthcare to smart cities and communications networking.

The second annual ‘AI for Good Global Summit’ will take place 15-17 May [2018] in Geneva, and seeks to leverage AI to accelerate progress towards the United Nations’ Sustainable Development Goals and ultimately benefit humanity.

WHAT: Global event to advance ‘AI for Good’ with the participation of internationally recognized AI experts. The programme will include interactive high-level panels, while ‘AI Breakthrough Teams’ will propose AI strategies able to create impact in the near term, guided by an expert audience of mentors representing government, industry, academia and civil society – through interactive sessions. The summit will connect AI innovators with public and private-sector decision-makers, building collaboration to take promising strategies forward.

A special demo & exhibit track will feature innovative applications of AI designed to: protect women from sexual violence, avoid infant crib deaths, end child abuse, predict oral cancer, and improve mental health treatments for depression – as well as interactive robots including: Alice, a Dutch invention designed to support the aged; iCub, an open-source robot; and Sophia, the humanoid AI robot.

WHEN: 15-17 May 2018, beginning daily at 9 AM

WHERE: ITU Headquarters, 2 Rue de Varembé, Geneva, Switzerland (Please note: entrance to ITU is now limited for all visitors to the Montbrillant building entrance only on rue Varembé).

WHO: Confirmed participants to date include expert representatives from: Association for Computing Machinery, Bill and Melinda Gates Foundation, Cambridge University, Carnegie Mellon, Chan Zuckerberg Initiative, Consumer Trade Association, Facebook, Fraunhofer, Google, Harvard University, IBM Watson, IEEE, Intellectual Ventures, ITU, Microsoft, Massachusetts Institute of Technology (MIT), Partnership on AI, Planet Labs, Shenzhen Open Innovation Lab, University of California at Berkeley, University of Tokyo, XPRIZE Foundation, Yale University – and the participation of “Sophia” the humanoid robot and “iCub” the EU open source robotcub.

The interview

Frederic Werner, Senior Communications Officer at the International Telecommunication Union and one of the organizers of the AI for Good Global Summit 2018, kindly took the time to speak to me and provide a few more details about the upcoming event.

Werner noted that the 2018 event grew out of a much smaller, first-of-its-kind 2017 ‘workshop’ about beneficial AI. This year the event has ballooned in size to 91 countries (about 15 participants are expected from Canada), 32 UN agencies, and substantive representation from the private sector. Dr. Yoshua Bengio of the University of Montreal (Université de Montréal) was a featured speaker at the 2017 event.

“This year, we’re focused on action-oriented projects that will help us reach our Sustainable Development Goals (SDGs) by 2030. We’re looking at near-term practical AI applications,” says Werner. “We’re matchmaking problem-owners and solution-owners.”

Academics, industry professionals, government officials, and representatives from UN agencies are gathering to work on four tracks/themes.

In advance of this meeting, the group launched an AI repository (an action item from the 2017 meeting) on April 25, 2018, inviting people to list their AI projects (from the ITU’s April 25, 2018 AI repository news announcement),

ITU has just launched an AI Repository where anyone working in the field of artificial intelligence (AI) can contribute key information about how to leverage AI to help solve humanity’s greatest challenges.

This is the only global repository that identifies AI-related projects, research initiatives, think-tanks and organizations that aim to accelerate progress on the 17 United Nations’ Sustainable Development Goals (SDGs).

To submit a project, just press ‘Submit’ on the AI Repository site and fill in the online questionnaire, providing all relevant details of your project. You will also be asked to map your project to the relevant World Summit on the Information Society (WSIS) action lines and the SDGs. Approved projects will be officially registered in the repository database.

Benefits of participation on the AI Repository include:

WSIS Prizes recognize individuals, governments, civil society, local, regional and international agencies, research institutions and private-sector companies for outstanding success in implementing development oriented strategies that leverage the power of AI and ICTs.

Creating the AI Repository was one of the action items of last year’s AI for Good Global Summit.

We are looking forward to your submissions.

If you have any questions, please send an email to: ai@itu.int

“Your project won’t be visible immediately as we have to vet the submissions to weed out spam-type material and projects that are not in line with our goals,” says Werner. That said, there are already 29 projects in the repository. As you might expect, the UK, China, and US are in the repository but also represented are Egypt, Uganda, Belarus, Serbia, Peru, Italy, and other countries not commonly cited when discussing AI research.

Werner also pointed out in response to my surprise over the ITU’s role with regard to this AI initiative that the ITU is the only UN agency which has 192* member states (countries), 150 universities, and over 700 industry members as well as other member entities, which gives them tremendous breadth of reach. As well, the organization, founded originally in 1865 as the International Telegraph Convention, has extensive experience with global standardization in the information technology and telecommunications industries. (See more in their Wikipedia entry.)


There is a bit more about the summit on the ITU’s AI for Good Global Summit 2018 webpage,

The 2nd edition of the AI for Good Global Summit will be organized by ITU in Geneva on 15-17 May 2018, in partnership with XPRIZE Foundation, the global leader in incentivized prize competitions, the Association for Computing Machinery (ACM) and sister United Nations agencies including UNESCO, UNICEF, UNCTAD, UNIDO, Global Pulse, UNICRI, UNODA, UNIDIR, UNODC, WFP, IFAD, UNAIDS, WIPO, ILO, UNITAR, UNOPS, OHCHR, UN University, WHO, UNEP, ICAO, UNDP, The World Bank, UN DESA, CTBTO, UNISDR, UNOG, UNOOSA, UNFPA, UNECE, UNDPA, and UNHCR.

The AI for Good series is the leading United Nations platform for dialogue on AI. The action-oriented 2018 summit will identify practical applications of AI and supporting strategies to improve the quality and sustainability of life on our planet. The summit will continue to formulate strategies to ensure trusted, safe and inclusive development of AI technologies and equitable access to their benefits.

While the 2017 summit sparked the first ever inclusive global dialogue on beneficial AI, the action-oriented 2018 summit will focus on impactful AI solutions able to yield long-term benefits and help achieve the Sustainable Development Goals. ‘Breakthrough teams’ will demonstrate the potential of AI to map poverty and aid with natural disasters using satellite imagery, how AI could assist the delivery of citizen-centric services in smart cities, and new opportunities for AI to help achieve Universal Health Coverage, and finally to help achieve transparency and explainability in AI algorithms.

Teams will propose impactful AI strategies able to be enacted in the near term, guided by an expert audience of mentors representing government, industry, academia and civil society. Strategies will be evaluated by the mentors according to their feasibility and scalability, potential to address truly global challenges, degree of supporting advocacy, and applicability to market failures beyond the scope of government and industry. The exercise will connect AI innovators with public and private-sector decision-makers, building collaboration to take promising strategies forward.

“As the UN specialized agency for information and communication technologies, ITU is well placed to guide AI innovation towards the achievement of the UN Sustainable Development Goals. We are providing a neutral platform for international dialogue aimed at building a common understanding of the capabilities of emerging AI technologies.” – Houlin Zhao, Secretary-General of ITU

Should you be close to Geneva, it seems that registration is still open. Just go to the ITU’s AI for Good Global Summit 2018 webpage, scroll the page down to ‘Documentation’ and you will find a link to the invitation and a link to online registration. Participation is free but I expect that you are responsible for your travel and accommodation costs.

For anyone unable to attend in person, the summit will be livestreamed (webcast in real time) and you can watch the sessions by following the link below,


For those of us on the West Coast of Canada and other parts distant from Geneva, you will want to take the nine-hour time difference between Geneva (Switzerland) and here into account when viewing the proceedings. If you can’t manage the time difference, the sessions are being recorded and will be posted at a later date.

*’132 member states’ corrected to ‘192 member states’ on May 11, 2018 at 1500 hours PDT.

How small can a carbon nanotube get before it stops being ‘electrical’?

Research, which began as an attempt to get reproducible electronic measurements, yielded some unexpected results according to a January 3, 2018 news item on phys.org,

Carbon nanotubes bound for electronics not only need to be as clean as possible to maximize their utility in next-generation nanoscale devices, but contact effects may limit how small a nano device can be, according to researchers at the Energy Safety Research Institute (ESRI) at Swansea University [UK] in collaboration with researchers at Rice University [US].

ESRI Director Andrew Barron, also a professor at Rice University in the USA, and his team have figured out how to get nanotubes clean enough to obtain reproducible electronic measurements and in the process not only explained why the electrical properties of nanotubes have historically been so difficult to measure consistently, but have shown that there may be a limit to how “nano” future electronic devices can be using carbon nanotubes.

A January 3, 2018 Swansea University press release (also on EurekAlert), which originated the news item, explains the work in more detail,

Like any normal wire, semiconducting nanotubes are progressively more resistant to current along their length. But conductivity measurements of nanotubes over the years have been anything but consistent. The ESRI team wanted to know why.

“We are interested in the creation of nanotube-based conductors, and while people have been able to make wires, their conduction has not met expectations. We were interested in determining the basic science behind the variability observed by other researchers.”

They discovered that hard-to-remove contaminants — leftover iron catalyst, carbon and water — could easily skew the results of conductivity tests. Burning them away, Barron said, creates new possibilities for carbon nanotubes in nanoscale electronics.

The new study appears in the American Chemical Society journal Nano Letters.

The researchers first made multiwalled carbon nanotubes between 40 and 200 nanometers in diameter and up to 30 microns long. They then either heated the nanotubes in a vacuum or bombarded them with argon ions to clean their surfaces.

They tested individual nanotubes the same way one would test any electrical conductor: By touching them with two probes to see how much current passes through the material from one tip to the other. In this case, their tungsten probes were attached to a scanning tunneling microscope.

In clean nanotubes, resistance got progressively stronger as the distance increased, as it should. But the results were skewed when the probes encountered surface contaminants, which increased the electric field strength at the tip. And when measurements were taken within 4 microns of each other, regions of depleted conductivity caused by contaminants overlapped, further scrambling the results.

“We think this is why there’s such inconsistency in the literature,” Barron said.

“If nanotubes are to be the next generation lightweight conductor, then consistent results, batch-to-batch, and sample-to-sample, is needed for devices such as motors and generators as well as power systems.”

Annealing the nanotubes in a vacuum above 200 degrees Celsius (392 degrees Fahrenheit) reduced surface contamination, but not enough to eliminate inconsistent results, they found. Argon ion bombardment also cleaned the tubes, but led to an increase in defects that degrade conductivity.

Ultimately, they reported, vacuum annealing the nanotubes at 500 degrees Celsius (932 degrees Fahrenheit) reduced contamination enough to accurately measure resistance.

Until now, Barron said, engineers who use nanotube fibers or films in devices modify the material through doping or other means to get the conductive properties they require. But if the source nanotubes are sufficiently decontaminated, they should be able to get the right conductivity by simply putting their contacts in the right spot.

“A key result of our work was that if contacts on a nanotube are less than 1 micron apart, the electronic properties of the nanotube change from conductor to semiconductor, due to the presence of overlapping depletion zones,” said Barron. “This has a potential limiting factor on the size of nanotube-based electronic devices – this would limit the application of Moore’s law to nanotube devices.”
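To get a feel for why those overlapping depletion zones would scramble two-probe measurements, here’s a toy model of my own (not from the paper; the resistance figures and the 0.5-micron zone size are invented purely for illustration). A clean tube’s resistance rises linearly with probe separation, while contamination adds a depleted region around each contact that dominates once the probes sit closer than twice the zone width:

```python
# Toy model of a two-probe resistance measurement on a nanotube.
# All numbers are assumed for illustration -- not values from the paper.

RESISTANCE_PER_UM = 10.0   # ohms per micron of clean tube (assumed)
CONTACT_RESISTANCE = 50.0  # ohms at each probe contact (assumed)
DEPLETION_ZONE_UM = 0.5    # depleted region around each contaminated contact (assumed)

def measured_resistance(separation_um, contaminated):
    """Apparent two-probe resistance (ohms) at a given probe separation."""
    r = 2 * CONTACT_RESISTANCE + RESISTANCE_PER_UM * separation_um
    if contaminated and separation_um < 2 * DEPLETION_ZONE_UM:
        # Depletion zones overlap: the tube behaves like a semiconductor,
        # modeled here as a large extra resistance.
        r += 1000.0
    return r

for d in (0.5, 1.0, 4.0):
    print(d, measured_resistance(d, False), measured_resistance(d, True))
```

With these made-up numbers, clean and contaminated tubes agree at wide separations but diverge wildly once the contacts are closer than 1 micron, mirroring the conductor-to-semiconductor switch Barron describes.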

Chris Barnett of Swansea is lead author of the paper. Co-authors are Cathren Gowenlock and Kathryn Welsby, and Rice alumnus Alvin Orbaek White of Swansea. Barron is the Sêr Cymru Chair of Low Carbon Energy and Environment at Swansea and the Charles W. Duncan Jr.–Welch Professor of Chemistry and a professor of materials science and nanoengineering at Rice.

The Welsh Government Sêr Cymru National Research Network in Advanced Engineering and Materials, the Sêr Cymru Chair Program, the Office of Naval Research and the Robert A. Welch Foundation supported the research.

Rice University published its own January 4, 2018 news release (also on EurekAlert), which is almost (95%) identical to the press release from Swansea. That’s a bit unusual, as collaborating institutions usually like to focus on their unique contributions to the research, hence, multiple news/press releases.

Dexter Johnson, in a January 11, 2018 post on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website), adds a detail or two while writing in an accessible style.

Here’s a link to and a citation for the paper,

Spatial and Contamination-Dependent Electrical Properties of Carbon Nanotubes by Chris J. Barnett, Cathren E. Gowenlock, Kathryn Welsby, Alvin Orbaek White, and Andrew R. Barron. Nano Lett., Article ASAP DOI: 10.1021/acs.nanolett.7b03390 Publication Date (Web): December 19, 2017

Copyright © 2017 American Chemical Society

This paper is behind a paywall.

From the memristor to the atomristor?

I’m going to let Michael Berger explain the memristor (from Berger’s Jan. 2, 2017 Nanowerk Spotlight article),

In trying to bring brain-like (neuromorphic) computing closer to reality, researchers have been working on the development of memory resistors, or memristors, which are resistors in a circuit that ‘remember’ their state even if you lose power.

Today, most computers use random access memory (RAM), which moves very quickly as a user works but does not retain unsaved data if power is lost. Flash drives, on the other hand, store information when they are not powered but work much slower. Memristors could provide a memory that is the best of both worlds: fast and reliable.
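As an aside for readers who like to tinker, that ‘remembers its state even without power’ behavior is easy to see in the classic linear dopant-drift memristor model (the textbook HP Labs/Strukov model, not the atomristor; every parameter value below is arbitrary, chosen only so the numbers are easy to follow):

```python
# Minimal sketch of the linear dopant-drift memristor model.
# Parameter values are arbitrary (assumed), for illustration only.

R_ON, R_OFF = 100.0, 16000.0  # fully-doped / undoped resistance (ohms, assumed)
MU = 1e-14                    # dopant mobility (m^2 / (V*s), assumed)
D = 1e-8                      # device thickness (m, assumed)

def step(w, voltage, dt):
    """Advance the internal state w (doped fraction, 0..1) by one time step."""
    resistance = R_ON * w + R_OFF * (1 - w)
    current = voltage / resistance
    w += MU * R_ON / D**2 * current * dt   # linear dopant drift
    return min(max(w, 0.0), 1.0)

w = 0.1
for _ in range(1000):          # apply a positive bias...
    w = step(w, 1.0, 1e-4)
state_after_write = w
for _ in range(1000):          # ...then remove power entirely
    w = step(w, 0.0, 1e-4)
print(state_after_write, w)
```

Applying a bias drives the doped fraction w (and hence the resistance) to a new value; once the voltage is removed, no current flows and w, the ‘memory’, stays put.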

He goes on to discuss a team at the University of Texas at Austin’s work on creating an extraordinarily thin memristor: an atomristor,

The team’s work features the thinnest memory devices and it appears to be a universal effect available in all semiconducting 2D monolayers.

The scientists explain that the unexpected discovery of nonvolatile resistance switching (NVRS) in monolayer transitional metal dichalcogenides (MoS2, MoSe2, WS2, WSe2) is likely due to the inherent layered crystalline nature that produces sharp interfaces and clean tunnel barriers. This prevents excessive leakage and affords stable phenomenon so that NVRS can be used for existing memory and computing applications.

“Our work opens up a new field of research in exploiting defects at the atomic scale, and can advance existing applications such as future generation high density storage, and 3D cross-bar networks for neuromorphic memory computing,” notes Akinwande [Deji Akinwande, an Associate Professor at the University of Texas at Austin]. “We also discovered a completely new application, which is non-volatile switching for radio-frequency (RF) communication systems. This is a rapidly emerging field because of the massive growth in wireless technologies and the need for very low-power switches. Our devices consume no static power, an important feature for battery life in mobile communication systems.”

Here’s a link to and a citation for the Akinwande team’s paper,

Atomristor: Nonvolatile Resistance Switching in Atomic Sheets of Transition Metal Dichalcogenides by Ruijing Ge, Xiaohan Wu, Myungsoo Kim, Jianping Shi, Sushant Sonde, Li Tao, Yanfeng Zhang, Jack C. Lee, and Deji Akinwande. Nano Lett., Article ASAP DOI: 10.1021/acs.nanolett.7b04342 Publication Date (Web): December 13, 2017

Copyright © 2017 American Chemical Society

This paper appears to be open access.

ETA January 23, 2018: There’s another account of the atomristor in Samuel K. Moore’s January 23, 2018 posting on the Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website).

FrogHeart’s good-bye to 2017 and hello to 2018

This is going to be relatively short and sweet(ish). Starting with the 2017 review:

Nano blogosphere and the Canadian blogosphere

From my perspective there’s been a change taking place in the nano blogosphere over the last few years. There are fewer blogs along with fewer postings from those who still blog. Interestingly, some blogs are becoming more generalized: the Foresight Institute’s Nanodot blog (as has FrogHeart) has expanded its range of topics to include artificial intelligence and other subjects. Andrew Maynard’s 2020 Science blog now exists in an archived form but before its demise it, too, had started to include other topics, notably risk in its many forms as opposed to risk and nanomaterials. Dexter Johnson’s blog, Nanoclast (on the IEEE [Institute of Electrical and Electronics Engineers] website), maintains its 3x weekly postings. Tim Harper, who often wrote about nanotechnology on his Cientifica blog, appears to have found a more freewheeling approach dominated by his Twitter feed, although he also seems (I can’t confirm that the latest posts were written in 2017) to blog at timharper.net.

The Canadian science blogosphere seems to be getting quieter if Science Borealis (blog aggregator) is a measure. My overall impression is that the bloggers have been a bit quieter this year with fewer postings on the feed or perhaps that’s due to some technical issues (sometimes FrogHeart posts do not get onto the feed). On the promising side, Science Borealis teamed with the Science Writers and Communicators of Canada Association to run a contest, “2017 People’s Choice Awards: Canada’s Favourite Science Online!”  There were two categories (Favourite Science Blog and Favourite Science Site) and you can find a list of the finalists with links to the winners here.

Big congratulations to the winners: Body of Evidence won Canada’s Favourite Blog 2017 (Dec. 6, 2017 article by Alina Fisher for Science Borealis) and Let’s Talk Science won the Canada’s Favourite Science Online 2017 category, as per this announcement.

However, I can’t help wondering: where were ASAP Science, Acapella Science, Quirks & Quarks, IFLS (I f***ing love science), and others on the list of finalists? I would have thought any of these would have a lock on a position as a finalist. These are Canadian online science purveyors and they are hugely popular, which should mean they’d have no problem getting nominated and getting votes. I can’t find the criteria for nominations (or any hint there will be a 2018 contest), so I imagine their absence from the 2017 finalists list will remain a mystery to me.

Looking forward to 2018, I think that the nano blogosphere will continue with its transformation into a more general science/technology-oriented community. To some extent, I believe this reflects the fact that nanotechnology is being absorbed into the larger science/technology effort as foundational (something wiser folks than me predicted some years ago).

As for Science Borealis and the Canadian science online effort, I’m going to interpret the quieter feeds as a sign of a maturing community. After all, there are always ups and downs in terms of enthusiasm and participation and as I noted earlier the launch of an online contest is promising as is the collaboration with Science Writers and Communicators of Canada.

Canadian science policy

It was a big year.

Canada’s Chief Science Advisor

Canada’s first chief science advisor in many years, Dr. Mona Nemer, stepped into her position in Fall 2017. The official announcement was made on Sept. 26, 2017. I covered the event in my Sept. 26, 2017 posting, which includes a few more details than found in the official announcement.

You’ll also find in that Sept. 26, 2017 posting a brief discourse on the Naylor report (also known as the Review of Fundamental Science) and some speculation on why, to my knowledge, there has been no action taken as a consequence.  The Naylor report was released April 10, 2017 and was covered here in a three-part review, published on June 8, 2017,

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 1 of 3

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 2 of 3

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 3 of 3

I have found another commentary (much briefer than mine), a November 9, 2017 piece by Paul Dufour on the Canadian Science Policy Centre website.

Subnational and regional science funding

This began in 2016 with a workshop mentioned in my November 10, 2016 posting: ‘Council of Canadian Academies and science policy for Alberta.’ By the time the report was published, the endeavour had been transformed into: Science Policy: Considerations for Subnational Governments (report here and my June 22, 2017 commentary here).

I don’t know what will come of this but I imagine scientists will be supportive as it means more money and they are always looking for more money. Still, the new government in British Columbia has only one ‘science entity’, the Premier’s Technology Council, and I’m not sure it’s still operational. To my knowledge, there is no ministry or other agency that is focused primarily or even partially on science.

Meanwhile, a couple of representatives from the health sciences (neither of whom were involved in the production of the report) seem quite enthused about the prospects for provincial money in their October 27, 2017 opinion piece for the Canadian Science Policy Centre (Bev Holmes, Interim CEO, Michael Smith Foundation for Health Research, British Columbia, and Patrick Odnokon, CEO, Saskatchewan Health Research Foundation).

Artificial intelligence and Canadians

An event which I find more interesting with time was the announcement of the Pan-Canadian Artificial Intelligence Strategy in the 2017 Canadian federal budget. Since then there has been a veritable gold-rush mentality with regard to artificial intelligence in Canada; announcement after announcement has been made about various corporations opening new offices in Toronto or Montréal.

What has really piqued my interest recently is a report being written for Canada’s Treasury Board by Michael Karlin (you can learn more from his Twitter feed, although you may need to scroll down past some of his more personal tweets, such as something about cassoulet in the Dec. 29, 2017 tweets). As for Karlin’s report, which is a work in progress, you can find out more about the report and Karlin in a December 12, 2017 article by Rob Hunt for the Algorithmic Media Observatory (sponsored by the Social Sciences and Humanities Research Council of Canada [SSHRC], the Centre for Study of Democratic Citizenship, and the Fonds de recherche du Québec: Société et culture).

You can ring in 2018 by reading and making comments, which could influence the final version, on Karlin’s “Responsible Artificial Intelligence in the Government of Canada” part of the government’s Digital Disruption White Paper Series.

As for other 2018 news, the Council of Canadian Academies is expected to publish “The State of Science and Technology and Industrial Research and Development in Canada” at some point soon (we hope). This report follows and incorporates two previous ‘states’, The State of Science and Technology in Canada, 2012 (the first of these was a 2006 report) and the 2013 version of The State of Industrial R&D in Canada. There is already some preliminary data for this latest ‘state of’  (you can find a link and commentary in my December 15, 2016 posting).

FrogHeart then (2017) and soon (2018)

On looking back I see that the year started out at quite a clip as I was attempting to hit the 5000th blog posting mark, which I did on March 3,  2017. I have cut back somewhat from the 3 postings/day high to approximately 1 posting/day. It makes things more manageable allowing me to focus on other matters.

By the way, you may note that the ‘Donate’ button has disappeared from my sidebar. I thank everyone who donated from the bottom of my heart. The money was more than currency; it also symbolized encouragement. On the sad side, I moved from one hosting service to a new one (Sibername) late in December 2016 and have been experiencing serious bandwidth issues which result in FrogHeart’s disappearance from the web for days at a time. I am trying to resolve the issues and hope that such actions as removing the ‘Donate’ button will help.

I wish my readers all the best for 2018 as we explore nanotechnology and other emerging technologies!

(I apologize for any and all errors. I usually take a little more time to write this end-of-year and coming-year piece but due to bandwidth issues I was unable to access my draft and give it at least one review. At this point, I’m too tired to try spotting errors. If you see any, please do let me know.)

Yarns that harvest and generate energy

The researchers involved in this work are confident enough about their prospects that they are patenting their research into yarns. From an August 25, 2017 news item on Nanowerk,

An international research team led by scientists at The University of Texas at Dallas and Hanyang University in South Korea has developed high-tech yarns that generate electricity when they are stretched or twisted.

In a study published in the Aug. 25 [2017] issue of the journal Science (“Harvesting electrical energy from carbon nanotube yarn twist”), researchers describe “twistron” yarns and their possible applications, such as harvesting energy from the motion of ocean waves or from temperature fluctuations. When sewn into a shirt, these yarns served as a self-powered breathing monitor.

“The easiest way to think of twistron harvesters is, you have a piece of yarn, you stretch it, and out comes electricity,” said Dr. Carter Haines, associate research professor in the Alan G. MacDiarmid NanoTech Institute at UT Dallas and co-lead author of the article. The article also includes researchers from South Korea, Virginia Tech, Wright-Patterson Air Force Base and China.

An August 25, 2017 University of Texas at Dallas news release, which originated the news item, expands on the theme,

Yarns Based on Nanotechnology

The yarns are constructed from carbon nanotubes, which are hollow cylinders of carbon 10,000 times smaller in diameter than a human hair. The researchers first twist-spun the nanotubes into high-strength, lightweight yarns. To make the yarns highly elastic, they introduced so much twist that the yarns coiled like an over-twisted rubber band.

In order to generate electricity, the yarns must be either submerged in or coated with an ionically conducting material, or electrolyte, which can be as simple as a mixture of ordinary table salt and water.

“Fundamentally, these yarns are supercapacitors,” said Dr. Na Li, a research scientist at the NanoTech Institute and co-lead author of the study. “In a normal capacitor, you use energy — like from a battery — to add charges to the capacitor. But in our case, when you insert the carbon nanotube yarn into an electrolyte bath, the yarns are charged by the electrolyte itself. No external battery, or voltage, is needed.”

When a harvester yarn is twisted or stretched, the volume of the carbon nanotube yarn decreases, bringing the electric charges on the yarn closer together and increasing their energy, Haines said. This increases the voltage associated with the charge stored in the yarn, enabling the harvesting of electricity.
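That explanation is just constant-charge capacitor arithmetic, and it’s worth spelling out: with the charge Q pinned by the electrolyte, the voltage is V = Q/C and the stored energy is E = Q²/2C, so shrinking the yarn’s capacitance raises both. A quick sketch, with numbers I invented for illustration (nothing from the paper):

```python
# Constant-charge capacitor arithmetic behind the twistron mechanism.
# The charge Q is fixed by the electrolyte; twisting shrinks the yarn's
# volume and hence its capacitance C. All values below are assumed.

Q = 1e-3          # charge held on the yarn, coulombs (assumed)
C_RELAXED = 1e-2  # capacitance of the relaxed yarn, farads (assumed)
C_TWISTED = 7e-3  # capacitance after twisting, ~30% lower (assumed)

def voltage(q, c):
    return q / c

def stored_energy(q, c):
    return q * q / (2 * c)

dv = voltage(Q, C_TWISTED) - voltage(Q, C_RELAXED)
de = stored_energy(Q, C_TWISTED) - stored_energy(Q, C_RELAXED)
print(f"voltage rises by {dv:.3e} V, stored energy by {de:.3e} J per twist")
```

The energy increase is supplied by the mechanical work of twisting, and that increase is what the harvester extracts each cycle.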

Stretching the coiled twistron yarns 30 times a second generated 250 watts per kilogram of peak electrical power when normalized to the harvester’s weight, said Dr. Ray Baughman, director of the NanoTech Institute and a corresponding author of the study.

“Although numerous alternative harvesters have been investigated for many decades, no other reported harvester provides such high electrical power or energy output per cycle as ours for stretching rates between a few cycles per second and 600 cycles per second.”

Lab Tests Show Potential Applications

In the lab, the researchers showed that a twistron yarn weighing less than a housefly could power a small LED, which lit up each time the yarn was stretched.

To show that twistrons can harvest waste thermal energy from the environment, Li connected a twistron yarn to a polymer artificial muscle that contracts and expands when heated and cooled. The twistron harvester converted the mechanical energy generated by the polymer muscle to electrical energy.

“There is a lot of interest in using waste energy to power the Internet of Things, such as arrays of distributed sensors,” Li said. “Twistron technology might be exploited for such applications where changing batteries is impractical.”

The researchers also sewed twistron harvesters into a shirt. Normal breathing stretched the yarn and generated an electrical signal, demonstrating its potential as a self-powered respiration sensor.

“Electronic textiles are of major commercial interest, but how are you going to power them?” Baughman said. “Harvesting electrical energy from human motion is one strategy for eliminating the need for batteries. Our yarns produced over a hundred times higher electrical power per weight when stretched compared to other weavable fibers reported in the literature.”

Electricity from Ocean Waves

“In the lab we showed that our energy harvesters worked using a solution of table salt as the electrolyte,” said Baughman, who holds the Robert A. Welch Distinguished Chair in Chemistry in the School of Natural Sciences and Mathematics. “But we wanted to show that they would also work in ocean water, which is chemically more complex.”

In a proof-of-concept demonstration, co-lead author Dr. Shi Hyeong Kim, a postdoctoral researcher at the NanoTech Institute, waded into the frigid surf off the east coast of South Korea to deploy a coiled twistron in the sea. He attached a 10 centimeter-long yarn, weighing only 1 milligram (about the weight of a mosquito), between a balloon and a sinker that rested on the seabed.

Every time an ocean wave arrived, the balloon would rise, stretching the yarn up to 25 percent and generating measurable electricity.

Even though the investigators used very small amounts of twistron yarn in the current study, they have shown that harvester performance is scalable, both by increasing twistron diameter and by operating many yarns in parallel.

“If our twistron harvesters could be made less expensively, they might ultimately be able to harvest the enormous amount of energy available from ocean waves,” Baughman said. “However, at present these harvesters are most suitable for powering sensors and sensor communications. Based on demonstrated average power output, just 31 milligrams of carbon nanotube yarn harvester could provide the electrical energy needed to transmit a 2-kilobyte packet of data over a 100-meter radius every 10 seconds for the Internet of Things.”
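The figures quoted in this release can be combined for a quick back-of-envelope check. The sketch below applies the 250 watts-per-kilogram peak figure and the 30-stretches-per-second rate reported earlier to the 31-milligram harvester mentioned for Internet of Things use:

```python
# Back-of-envelope scaling of the reported twistron figures.
peak_power_density = 250.0   # W per kg of yarn when stretched 30 times per second
stretch_rate = 30.0          # stretch cycles per second
yarn_mass_kg = 31e-6         # the 31 mg harvester cited for IoT sensor duty

peak_power_w = peak_power_density * yarn_mass_kg       # W for a 31 mg yarn
energy_per_cycle = peak_power_density / stretch_rate   # J per kg per stretch, at peak

print(f"peak power of a 31 mg harvester: {peak_power_w * 1e3:.2f} mW")
print(f"peak energy per stretch: {energy_per_cycle:.1f} J/kg")
```

A few milliwatts of peak power is consistent with the sensor-scale applications Baughman describes, rather than grid-scale wave energy.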

Researchers from the UT Dallas Erik Jonsson School of Engineering and Computer Science and Lintec of America’s Nano-Science & Technology Center also participated in the study.

The investigators have filed a patent on the technology.

In the U.S., the research was funded by the Air Force, the Air Force Office of Scientific Research, NASA, the Office of Naval Research and the Robert A. Welch Foundation. In Korea, the research was supported by the Korea-U.S. Air Force Cooperation Program and the Creative Research Initiative Center for Self-powered Actuation of the National Research Foundation and the Ministry of Science.

Here’s a link to and a citation for the paper,

Harvesting electrical energy from carbon nanotube yarn twist by Shi Hyeong Kim, Carter S. Haines, Na Li, Keon Jung Kim, Tae Jin Mun, Changsoon Choi, Jiangtao Di, Young Jun Oh, Juan Pablo Oviedo, Julia Bykova, Shaoli Fang, Nan Jiang, Zunfeng Liu, Run Wang, Prashant Kumar, Rui Qiao, Shashank Priya, Kyeongjae Cho, Moon Kim, Matthew Steven Lucas, Lawrence F. Drummy, Benji Maruyama, Dong Youn Lee, Xavier Lepró, Enlai Gao, Dawood Albarq, Raquel Ovalle-Robles, Seon Jeong Kim, Ray H. Baughman. Science 25 Aug 2017: Vol. 357, Issue 6353, pp. 773-778 DOI: 10.1126/science.aam8771

This paper is behind a paywall.

Dexter Johnson in an Aug. 25, 2017 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) delves further into the research,

“Basically what’s happening is when we stretch the yarn, we’re getting a change in capacitance of the yarn. It’s that change that allows us to get energy out,” explains Carter Haines, associate research professor at UT Dallas and co-lead author of the paper describing the research, in an interview with IEEE Spectrum.

This makes it similar in many ways to other types of energy harvesters. For instance, in other research, it has been demonstrated—with sheets of rubber with coated electrodes on both sides—that you can increase the capacitance of a material when you stretch it and it becomes thinner. As a result, if you have charge on that capacitor, you can change the voltage associated with that charge.
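Those rubber-sheet harvesters can be modeled with the parallel-plate formula C = εA/d: charge is placed on the sheet while it is stretched and thin (high capacitance), and relaxing it lowers the capacitance, boosting the voltage and energy at fixed charge. A rough sketch with assumed dimensions and an idealized cycle that ignores the accompanying change in area:

```python
# Idealized cycle of a stretch-capacitor (dielectric elastomer) harvester:
# prime with charge while stretched and thin (high C), relax (C falls, V rises).
EPS0 = 8.854e-12                    # vacuum permittivity, F/m

def parallel_plate_c(eps_r, area, thickness):
    return eps_r * EPS0 * area / thickness

eps_r = 3.0                         # assumed relative permittivity of the rubber
area = 1e-2                         # 100 cm^2 sheet (m^2) -- assumed
t_rest, t_stretched = 1e-3, 0.5e-3  # stretching halves the thickness (area change ignored)

c_hi = parallel_plate_c(eps_r, area, t_stretched)   # stretched: thinner, higher C
c_lo = parallel_plate_c(eps_r, area, t_rest)        # relaxed: thicker, lower C
q = c_hi * 1000.0                   # charge locked in at a 1 kV priming voltage

net_gain = q**2 / (2 * c_lo) - q**2 / (2 * c_hi)    # energy gained over one cycle
print(f"C: {c_hi * 1e12:.0f} pF stretched -> {c_lo * 1e12:.0f} pF relaxed, "
      f"net gain {net_gain * 1e6:.0f} uJ")
```

The kilovolt priming value illustrates why Haines notes that parallel-plate harvesters need extremely high voltages, whereas the twistron's electrochemical charging needs none.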

“We’re more or less exploiting the same effect but what we’re doing differently is we’re using an electrochemical cell to do this,” says Haines. “So we’re not changing double-layer capacitance in normal parallel-plate capacitors. But we’re actually changing the electrochemical capacitance on the surface of a supercapacitor yarn.”

While there are other capacitance-based energy harvesters, those other devices require extremely high voltages to work because they’re using parallel plate capacitors, according to Haines.

Dexter asks good questions and his post is very informative.

Emerging technology and the law

I have three news bits about legal issues that are arising as a consequence of emerging technologies.

Deep neural networks, art, and copyright

Caption: The rise of automated art opens new creative avenues, coupled with new problems for copyright protection. Credit: Alexander Mordvintsev, Christopher Olah and Mike Tyka

Presumably this artwork is a demonstration of automated art, although neither the news item nor the news release ever explains how. An April 26, 2017 news item on ScienceDaily announces research into copyright and the latest in using neural networks to create art,

In 1968, sociologist Jean Baudrillard wrote on automatism that “contained within it is the dream of a dominated world […] that serves an inert and dreamy humanity.”

With the growing popularity of deep neural networks (DNNs), this dream is fast becoming a reality.

Dr. Jean-Marc Deltorn, researcher at the Centre d’études internationales de la propriété intellectuelle in Strasbourg, argues that we must remain a responsive and responsible force in this process of automation — not inert dominators. As he demonstrates in a recent Frontiers in Digital Humanities paper, the dream of automation demands a careful study of the legal problems linked to copyright.

An April 26, 2017 Frontiers (publishing) news release on EurekAlert, which originated the news item, describes the research in more detail,

For more than half a century, artists have looked to computational processes as a way of expanding their vision. DNNs are the culmination of this cross-pollination: by learning to identify complex patterns, they can generate new creations.

These systems are made up of complex algorithms modeled on the transmission of signals between neurons in the brain.

DNN creations rely in equal measure on human inputs and the non-human algorithmic networks that process them.

Inputs are fed into the system, which is layered. Each layer provides an opportunity for a more refined knowledge of the inputs (shape, color, lines). Neural networks compare actual outputs to expected ones, and correct the predictive error through repetition and optimization. They train their own pattern recognition, thereby optimizing their learning curve and producing increasingly accurate outputs.

The deeper the layers are, the higher the level of abstraction. The highest layers are able to identify the contents of a given input with reasonable accuracy, after extended periods of training.
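The training procedure sketched here (compare actual outputs to expected ones, then correct the predictive error through repetition and optimization) can be boiled down to a toy example. The single-parameter model below is a stand-in for a deep network, not anything from the paper:

```python
# Toy version of train-by-error-correction: a single "layer" y = w*x + b
# learns the pattern y = 2x + 1 by repeatedly comparing actual to expected
# output and nudging its parameters to shrink the predictive error.
data = [(i / 10, 2.0 * (i / 10) + 1.0) for i in range(-10, 11)]
w, b, lr = 0.0, 0.0, 0.1   # initial parameters and learning rate

for epoch in range(200):               # "repetition and optimization"
    for x, y_expected in data:
        y_actual = w * x + b           # forward pass
        err = y_actual - y_expected    # predictive error
        w -= lr * err * x              # gradient step on each parameter
        b -= lr * err

print(f"learned w = {w:.2f}, b = {b:.2f} (target: 2.00, 1.00)")
```

A real DNN stacks many such trainable layers with nonlinearities in between, which is what gives the higher layers their increasing abstraction.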

Creation thus becomes increasingly automated through what Deltorn calls “the arcane traceries of deep architecture”. The results are sufficiently abstracted from their sources to produce original creations that have been exhibited in galleries, sold at auction and performed at concerts.

The originality of DNNs is a combined product of technological automation on one hand, human inputs and decisions on the other.

DNNs are gaining popularity. Various platforms (such as DeepDream) now allow internet users to generate their very own new creations. This popularization of the automation process calls for a comprehensive legal framework that ensures a creator’s economic and moral rights with regard to their work – copyright protection.

Form, originality and attribution are the three requirements for copyright. And while DNN creations satisfy the first of these three, the claim to originality and attribution will depend largely on a given country’s legislation and on the traceability of the human creator.

Legislation usually sets a low threshold for originality. As DNNs could in theory create an endless number of riffs on source materials, the uncurbed creation of original works could inflate the existing number of copyright protections.

Additionally, a small number of national copyright laws confers attribution to what UK legislation defines loosely as “the person by whom the arrangements necessary for the creation of the work are undertaken.” In the case of DNNs, this could mean anybody from the programmer to the user of a DNN interface.

Combined with an overly supple take on originality, this view on attribution would further increase the number of copyrightable works.

The risk, in both cases, is that artists will be less willing to publish their own works, for fear of infringement of DNN copyright protections.

In order to promote creativity – one seminal aim of copyright protection – the issue must be limited to creations that manifest a personal voice “and not just the electric glint of a computational engine,” to quote Deltorn. A delicate act of discernment.

DNNs promise new avenues of creative expression for artists – with potential caveats. Copyright protection – a “catalyst to creativity” – must be contained. Many of us gently bask in the glow of an increasingly automated form of technology. But if we want to safeguard the ineffable quality that defines much art, it might be a good idea to home in more closely on the differences between the electric and the creative spark.

This research is and will be part of a broader Frontiers Research Topic collection of articles on Deep Learning and Digital Humanities.

Here’s a link to and a citation for the paper,

Deep Creations: Intellectual Property and the Automata by Jean-Marc Deltorn. Front. Digit. Humanit., 01 February 2017 | https://doi.org/10.3389/fdigh.2017.00003

This paper is open access.

Conference on governance of emerging technologies

I received an April 17, 2017 notice via email about this upcoming conference. Here’s more from the Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics webpage,

The Fifth Annual Conference on Governance of Emerging Technologies:

Law, Policy and Ethics held at the new

Beus Center for Law & Society in Phoenix, AZ

May 17-19, 2017!

Call for Abstracts – Now Closed

The conference will consist of plenary and session presentations and discussions on regulatory, governance, legal, policy, social and ethical aspects of emerging technologies, including (but not limited to) nanotechnology, synthetic biology, gene editing, biotechnology, genomics, personalized medicine, human enhancement technologies, telecommunications, information technologies, surveillance technologies, geoengineering, neuroscience, artificial intelligence, and robotics. The conference is premised on the belief that there is much to be learned and shared from and across the governance experience and proposals for these various emerging technologies.

Keynote Speakers:

Gillian Hadfield, Richard L. and Antoinette Schamoi Kirtland Professor of Law and Professor of Economics, USC [University of Southern California] Gould School of Law

Shobita Parthasarathy, Associate Professor of Public Policy and Women’s Studies, Director, Science, Technology, and Public Policy Program, University of Michigan

Stuart Russell, Professor at [University of California] Berkeley, is a computer scientist known for his contributions to artificial intelligence

Craig Shank, Vice President for Corporate Standards Group in Microsoft’s Corporate, External and Legal Affairs (CELA)

Plenary Panels:

Innovation – Responsible and/or Permissionless

Ellen-Marie Forsberg, Senior Researcher/Research Manager at Oslo and Akershus University College of Applied Sciences

Adam Thierer, Senior Research Fellow with the Technology Policy Program at the Mercatus Center at George Mason University

Wendell Wallach, Consultant, ethicist, and scholar at Yale University’s Interdisciplinary Center for Bioethics

Gene Drives, Trade and International Regulations

Greg Kaebnick, Director, Editorial Department; Editor, Hastings Center Report; Research Scholar, Hastings Center

Jennifer Kuzma, Goodnight-North Carolina GlaxoSmithKline Foundation Distinguished Professor in Social Sciences in the School of Public and International Affairs (SPIA) and co-director of the Genetic Engineering and Society (GES) Center at North Carolina State University

Andrew Maynard, Senior Sustainability Scholar, Julie Ann Wrigley Global Institute of Sustainability; Director, Risk Innovation Lab; Professor, School for the Future of Innovation in Society, Arizona State University

Gary Marchant, Regents’ Professor of Law; Faculty Director and Faculty Fellow, Center for Law, Science & Innovation, Arizona State University

Marc Saner, Inaugural Director of the Institute for Science, Society and Policy, and Associate Professor, University of Ottawa Department of Geography

Big Data

Anupam Chander, Martin Luther King, Jr. Professor of Law and Director, California International Law Center, UC Davis School of Law

Pilar Ossorio, Professor of Law and Bioethics, University of Wisconsin, School of Law and School of Medicine and Public Health; Morgridge Institute for Research, Ethics Scholar-in-Residence

George Poste, Chief Scientist, Complex Adaptive Systems Initiative (CASI) (http://www.casi.asu.edu/), Regents’ Professor and Del E. Webb Chair in Health Innovation, Arizona State University

Emily Shuckburgh, climate scientist and deputy head of the Polar Oceans Team at the British Antarctic Survey, University of Cambridge

Responsible Development of AI

Spring Berman, Ira A. Fulton Schools of Engineering, Arizona State University

John Havens, The IEEE [Institute of Electrical and Electronics Engineers] Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems

Subbarao Kambhampati, Senior Sustainability Scientist, Julie Ann Wrigley Global Institute of Sustainability; Professor, School of Computing, Informatics and Decision Systems Engineering, Ira A. Fulton Schools of Engineering, Arizona State University

Wendell Wallach, Consultant, Ethicist, and Scholar at Yale University’s Interdisciplinary Center for Bioethics

Existential and Catastrophic Ricks [sic]

Tony Barrett, Co-Founder and Director of Research of the Global Catastrophic Risk Institute

Haydn Belfield, Academic Project Administrator, Centre for the Study of Existential Risk at the University of Cambridge

Margaret E. Kosal, Associate Director, Sam Nunn School of International Affairs, Georgia Institute of Technology

Catherine Rhodes, Academic Project Manager, Centre for the Study of Existential Risk (CSER), University of Cambridge

These are the panels of interest to me; there are others on the homepage.

Here’s some information from the Conference registration webpage,

Early Bird Registration – $50 off until May 1! Enter discount code: earlybirdGETs50

New: Group Discount – Register 2+ attendees together and receive an additional 20% off for all group members!

Click Here to Register!

Conference registration fees are as follows:

  • General (non-CLE) Registration: $150.00
  • CLE Registration: $350.00
  • *Current Student / ASU Law Alumni Registration: $50.00
  • ^Cybersecurity sessions only (May 19): $100 CLE / $50 General / Free for students (registration info coming soon)

There you have it.

Neuro-techno future laws

I’m pretty sure this isn’t the first exploration of potential legal issues arising from research into neuroscience although it’s the first one I’ve stumbled across. From an April 25, 2017 news item on phys.org,

New human rights laws to prepare for advances in neurotechnology that put the ‘freedom of the mind’ at risk have been proposed today in the open access journal Life Sciences, Society and Policy.

The authors of the study suggest four new human rights laws could emerge in the near future to protect against exploitation and loss of privacy. The four laws are: the right to cognitive liberty, the right to mental privacy, the right to mental integrity and the right to psychological continuity.

An April 25, 2017 Biomed Central news release on EurekAlert, which originated the news item, describes the work in more detail,

Marcello Ienca, lead author and PhD student at the Institute for Biomedical Ethics at the University of Basel, said: “The mind is considered to be the last refuge of personal freedom and self-determination, but advances in neural engineering, brain imaging and neurotechnology put the freedom of the mind at risk. Our proposed laws would give people the right to refuse coercive and invasive neurotechnology, protect the privacy of data collected by neurotechnology, and protect the physical and psychological aspects of the mind from damage by the misuse of neurotechnology.”

Advances in neurotechnology, such as sophisticated brain imaging and the development of brain-computer interfaces, have led to these technologies moving away from a clinical setting and into the consumer domain. While these advances may be beneficial for individuals and society, there is a risk that the technology could be misused and create unprecedented threats to personal freedom.

Professor Roberto Andorno, co-author of the research, explained: “Brain imaging technology has already reached a point where there is discussion over its legitimacy in criminal court, for example as a tool for assessing criminal responsibility or even the risk of reoffending. Consumer companies are using brain imaging for ‘neuromarketing’, to understand consumer behaviour and elicit desired responses from customers. There are also tools such as ‘brain decoders’ which can turn brain imaging data into images, text or sound. All of these could pose a threat to personal freedom which we sought to address with the development of four new human rights laws.”

The authors explain that as neurotechnology improves and becomes commonplace, there is a risk that the technology could be hacked, allowing a third-party to ‘eavesdrop’ on someone’s mind. In the future, a brain-computer interface used to control consumer technology could put the user at risk of physical and psychological damage caused by a third-party attack on the technology. There are also ethical and legal concerns over the protection of data generated by these devices that need to be considered.

International human rights laws make no specific mention of neuroscience, although advances in biomedicine have become intertwined with laws, such as those concerning human genetic data. Similar to the historical trajectory of the genetic revolution, the authors state that the ongoing neurorevolution will force a reconceptualization of human rights laws and even the creation of new ones.

Marcello Ienca added: “Science-fiction can teach us a lot about the potential threat of technology. Neurotechnology featured in famous stories has in some cases already become a reality, while others are inching ever closer, or exist as military and commercial prototypes. We need to be prepared to deal with the impact these technologies will have on our personal freedom.”

Here’s a link to and a citation for the paper,

Towards new human rights in the age of neuroscience and neurotechnology by Marcello Ienca and Roberto Andorno. Life Sciences, Society and Policy 2017, 13:5. DOI: 10.1186/s40504-017-0050-1 Published: 26 April 2017

©  The Author(s). 2017

This paper is open access.

Nanoelectronic thread (NET) brain probes for long-term neural recording

A rendering of the ultra-flexible probe in neural tissue gives viewers a sense of the device’s tiny size and footprint in the brain. Image credit: Science Advances.

As long-time readers have likely noted, I’m not a big fan of this rush to ‘colonize’ the brain, but it continues apace as a Feb. 15, 2017 news item on Nanowerk announces a new type of brain probe,

Engineering researchers at The University of Texas at Austin have designed ultra-flexible, nanoelectronic thread (NET) brain probes that can achieve more reliable long-term neural recording than existing probes and don’t elicit scar formation when implanted.

A Feb. 15, 2017 University of Texas at Austin news release, which originated the news item, provides more information about the new probes (Note: A link has been removed),

A team led by Chong Xie, an assistant professor in the Department of Biomedical Engineering in the Cockrell School of Engineering, and Lan Luan, a research scientist in the Cockrell School and the College of Natural Sciences, have developed new probes that have mechanical compliances approaching that of the brain tissue and are more than 1,000 times more flexible than other neural probes. This ultra-flexibility leads to an improved ability to reliably record and track the electrical activity of individual neurons for long periods of time. There is a growing interest in developing long-term tracking of individual neurons for neural interface applications, such as extracting neural-control signals for amputees to control high-performance prostheses. It also opens up new possibilities to follow the progression of neurovascular and neurodegenerative diseases such as stroke, Parkinson’s and Alzheimer’s diseases.

One of the problems with conventional probes is their size and mechanical stiffness; their larger dimensions and stiffer structures often cause damage around the tissue they encompass. Additionally, while it is possible for the conventional electrodes to record brain activity for months, they often provide unreliable and degrading recordings. It is also challenging for conventional electrodes to electrophysiologically track individual neurons for more than a few days.

In contrast, the UT Austin team’s electrodes are flexible enough that they comply with the microscale movements of tissue and still stay in place. The probe’s size also drastically reduces the tissue displacement, so the brain interface is more stable, and the readings are more reliable for longer periods of time. To the researchers’ knowledge, the UT Austin probe — which is as small as 10 microns at a thickness below 1 micron, and has a cross-section that is only a fraction of that of a neuron or blood capillary — is the smallest among all neural probes.
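The 1,000-fold flexibility claim is plausible from beam mechanics alone, because the bending stiffness of a rectangular cross-section scales with the cube of its thickness. A rough sketch with assumed probe dimensions and a silicon-like modulus (none of these numbers come from the study):

```python
# Bending stiffness (flexural rigidity) of a rectangular beam: E * w * t^3 / 12.
# The cubic dependence on thickness t is what makes sub-micron probes so compliant.
def bending_stiffness(e_modulus, width, thickness):
    return e_modulus * width * thickness**3 / 12.0

E_SI = 170e9    # Pa, silicon-like Young's modulus -- assumed for both probes

conventional = bending_stiffness(E_SI, 100e-6, 15e-6)  # assumed conventional probe
net_probe = bending_stiffness(E_SI, 10e-6, 0.8e-6)     # ~10 um wide, sub-micron NET

print(f"stiffness ratio (conventional / NET): {conventional / net_probe:,.0f}x")
```

Even with generous assumptions, the ratio lands in the thousands, consistent with the "more than 1,000 times more flexible" figure quoted above.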

“What we did in our research is prove that we can suppress tissue reaction while maintaining a stable recording,” Xie said. “In our case, because the electrodes are very, very flexible, we don’t see any sign of brain damage — neurons stayed alive even in contact with the NET probes, glial cells remained inactive and the vasculature didn’t become leaky.”

In experiments in mouse models, the researchers found that the probe’s flexibility and size prevented the agitation of glial cells, which is the normal biological reaction to a foreign body and leads to scarring and neuronal loss.

“The most surprising part of our work is that the living brain tissue, the biological system, really doesn’t mind having an artificial device around for months,” Luan said.

The researchers also used advanced imaging techniques in collaboration with biomedical engineering professor Andrew Dunn and neuroscientists Raymond Chitwood and Jenni Siegel from the Institute for Neuroscience at UT Austin to confirm that the NET enabled neural interface did not degrade in the mouse model for over four months of experiments. The researchers plan to continue testing their probes in animal models and hope to eventually engage in clinical testing. The research received funding from the UT BRAIN seed grant program, the Department of Defense and National Institutes of Health.

Here’s a link to and citation for the paper,

Ultraflexible nanoelectronic probes form reliable, glial scar–free neural integration by Lan Luan, Xiaoling Wei, Zhengtuo Zhao, Jennifer J. Siegel, Ojas Potnis, Catherine A Tuppen, Shengqing Lin, Shams Kazmi, Robert A. Fowler, Stewart Holloway, Andrew K. Dunn, Raymond A. Chitwood, and Chong Xie. Science Advances  15 Feb 2017: Vol. 3, no. 2, e1601966 DOI: 10.1126/sciadv.1601966

This paper is open access.

You can get more detail about the research in a Feb. 17, 2017 posting by Dexter Johnson on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website).

Drive to operationalize transistors that outperform silicon gets a boost

Dexter Johnson has written a Jan. 19, 2017 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) about work that could lead to supplanting silicon-based transistors with carbon nanotube-based transistors in the future (Note: Links have been removed),

The end appears nigh for scaling down silicon-based complementary metal-oxide semiconductor (CMOS) transistors, with some experts seeing the cutoff date as early as 2020.

While carbon nanotubes (CNTs) have long been among the nanomaterials investigated to serve as replacement for silicon in CMOS field-effect transistors (FETs) in a post-silicon future, they have always been bogged down by some frustrating technical problems. But, with some of the main technical showstoppers having been largely addressed—like sorting between metallic and semiconducting carbon nanotubes—the stage has been set for CNTs to start making their presence felt a bit more urgently in the chip industry.

Peking University scientists in China have now developed carbon nanotube field-effect transistors (CNT FETs) having a critical dimension—the gate length—of just five nanometers that would outperform silicon-based CMOS FETs at the same scale. The researchers claim in the journal Science that this marks the first time that sub-10 nanometer CNT CMOS FETs have been reported.

More importantly than just being the first, the Peking group showed that their CNT-based FETs can operate faster and at a lower supply voltage than their silicon-based counterparts.

A Jan. 20, 2017 article by Bob Yirka for phys.org provides more insight into the work at Peking University,

One of the most promising candidates is carbon nanotubes—due to their unique properties, transistors based on them could be smaller, faster and more efficient. Unfortunately, the difficulty of growing carbon nanotubes and their sometimes persnickety nature means that a way to make and mass-produce them has not been found. In this new effort, the researchers report on a method of creating carbon nanotube transistors that are suitable for testing, but not mass production.

To create the transistors, the researchers took a novel approach—instead of growing carbon nanotubes that had certain desired properties, they grew some and put them randomly on a silicon surface and then added electronics that would work with the properties they had—clearly not a strategy that would work for mass production, but one that allowed for building a carbon nanotube transistor that could be tested to see if it would verify theories about its performance. Realizing there would still be scaling problems using traditional electrodes, the researchers built a new kind by etching very tiny sheets of graphene. The result was a very tiny transistor, the team reports, capable of moving more current than a standard CMOS transistor using just half of the normal amount of voltage. It was also faster, with a gate delay of just 70 femtoseconds.
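Since femtoseconds measure time, the 70-femtosecond figure is a switching delay. A FET's intrinsic delay is commonly estimated as gate capacitance times supply voltage divided by on-current; the values below are illustrative assumptions chosen to land at that 70-femtosecond scale, not numbers from the Peking paper:

```python
# Intrinsic FET switching-delay estimate: tau ~= C_gate * V_dd / I_on.
def intrinsic_delay(c_gate, v_dd, i_on):
    return c_gate * v_dd / i_on

c_gate = 3.5e-17   # 35 aF gate capacitance (F) -- assumed
v_dd = 0.4         # low supply voltage (V), the regime discussed for CNT FETs
i_on = 2e-4        # 0.2 mA on-current -- assumed

tau = intrinsic_delay(c_gate, v_dd, i_on)
print(f"estimated intrinsic delay: {tau * 1e15:.0f} fs")
```

The formula makes the trade-off explicit: a tiny gate capacitance and a healthy on-current at low voltage are exactly what shrink the delay.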

Peking University has published an edited and more comprehensive version of a phys.org article originally reported by Lisa Zyga and edited by Arthars,

Now in a new paper published in Nano Letters, researchers Tian Pei, et al., at Peking University in Beijing, China, have developed a modular method for constructing complicated integrated circuits (ICs) made from many FETs on individual CNTs. To demonstrate, they constructed an 8-bit BUS system (a circuit widely used for transferring data in computers) that contains 46 FETs on six CNTs. This is the most complicated CNT IC fabricated to date, and the fabrication process is expected to lead to even more complex circuits.

SEM image of an eight-transistor (8-T) unit that was fabricated on two CNTs (marked with two white dotted lines). The scale bar is 100 μm. (Copyright: 2014 American Chemical Society)

Ever since the first CNT FET was fabricated in 1998, researchers have been working to improve CNT-based electronics. As the scientists explain in their paper, semiconducting CNTs are promising candidates for replacing silicon wires because they are thinner, which offers better scaling-down potential, and also because they have a higher carrier mobility, resulting in higher operating speeds.

Yet CNT-based electronics still face challenges. One of the most significant challenges is obtaining arrays of semiconducting CNTs while removing the less-suitable metallic CNTs. Although scientists have devised a variety of ways to separate semiconducting and metallic CNTs, these methods almost always result in damaged semiconducting CNTs with degraded performance.

To get around this problem, researchers usually build ICs on single CNTs, which can be individually selected based on their condition. It’s difficult to use more than one CNT because no two are alike: they each have slightly different diameters and properties that affect performance. However, using just one CNT limits the complexity of these devices to simple logic and arithmetical gates.

The 8-T unit can be used as the basic building block of a variety of ICs other than BUS systems, making this modular method a universal and efficient way to construct large-scale CNT ICs. Building on their previous research, the scientists hope to explore these possibilities in the future.

“In our earlier work, we showed that a carbon nanotube based field-effect transistor is about five (n-type FET) to ten (p-type FET) times faster than its silicon counterparts, but uses much less energy, about a few percent of that of similar sized silicon transistors,” Peng said.

“In the future, we plan to construct large-scale integrated circuits that outperform silicon-based systems. These circuits are faster, smaller, and consume much less power. They can also work at extremely low temperatures (e.g., in space) and moderately high temperatures (potentially no cooling system required), on flexible and transparent substrates, and potentially be bio-compatible.”

Here’s a link to and a citation for the paper,

Scaling carbon nanotube complementary transistors to 5-nm gate lengths by Chenguang Qiu, Zhiyong Zhang, Mengmeng Xiao, Yingjun Yang, Donglai Zhong, Lian-Mao Peng. Science  20 Jan 2017: Vol. 355, Issue 6322, pp. 271-276 DOI: 10.1126/science.aaj1628

This paper is behind a paywall.

Nanotechnology cracks Wall Street (Daily)

David Dittman’s Jan. 11, 2017 article for wallstreetdaily.com portrays a great deal of excitement about nanotechnology and the possibilities (I’m highlighting the article because it showcases Dexter Johnson’s Nanoclast blog),

When we talk about next-generation aircraft, next-generation wearable biomedical devices, and next-generation fiber-optic communication, the consistent theme is nano: nanotechnology, nanomaterials, nanophotonics.

For decades, manufacturers have used carbon fiber to make lighter sports equipment, stronger aircraft, and better textiles.

Now, as Dexter Johnson of IEEE [Institute of Electrical and Electronics Engineers] Spectrum reports [on his Nanoclast blog], carbon nanotubes will help make aerospace composites more efficient:

Now researchers at the University of Surrey’s Advanced Technology Institute (ATI), the University of Bristol’s Advanced Composite Centre for Innovation and Science (ACCIS), and aerospace company Bombardier [headquartered in Montréal, Canada] have collaborated on the development of a carbon nanotube-enabled material set to replace the polymer sizing. The reinforced polymers produced with this new material have enhanced electrical and thermal conductivity, opening up new functional possibilities. It will be possible, say the British researchers, to embed gadgets such as sensors and energy harvesters directly into the material.

When it comes to flight, lighter is better, so building sensors and energy harvesters into the body of aircraft marks a significant leap forward.

Johnson also reports for IEEE Spectrum on a “novel hybrid nanomaterial” based on oscillations of electrons — a major advance in nanophotonics:

Researchers at the University of Texas at Austin have developed a hybrid nanomaterial that enables the writing, erasing and rewriting of optical components. The researchers believe that this nanomaterial and the techniques used in exploiting it could create a new generation of optical chips and circuits.

Of course, the concept of rewritable optics is not altogether new; it forms the basis of optical storage mediums like CDs and DVDs. However, CDs and DVDs require bulky light sources, optical media and light detectors. The advantage of the rewritable integrated photonic circuits developed here is that it all happens on a 2-D material.

“To develop rewritable integrated nanophotonic circuits, one has to be able to confine light within a 2-D plane, where the light can travel in the plane over a long distance and be arbitrarily controlled in terms of its propagation direction, amplitude, frequency and phase,” explained Yuebing Zheng, a professor at the University of Texas who led the research… “Our material, which is a hybrid, makes it possible to develop rewritable integrated nanophotonic circuits.”

Who knew that mixing graphene with homemade Silly Putty would create a potentially groundbreaking new material that could make “wearables” actually useful?

Next-generation biomedical devices will undoubtedly include some of this stuff:

A dash of graphene can transform the stretchy goo known as Silly Putty into a pressure sensor able to monitor a human pulse or even track the dainty steps of a small spider.

The material, dubbed G-putty, could be developed into a device that continuously monitors blood pressure, its inventors hope.

The guys who made G-putty often rely on “household stuff” in their research.

It’s nice to see a blogger’s work be highlighted. Congratulations Dexter.

G-putty was mentioned here in a Dec. 30, 2016 posting which also includes a link to Dexter’s piece on the topic.

Keeping up with science is impossible: ruminations on a nanotechnology talk

I think it’s time to give this suggestion again. Always hold a little doubt about the science information you read and hear. Everybody makes mistakes.

Here’s an example of what can happen. George Tulevski, who gave a talk about nanotechnology in Nov. 2016 for TED@IBM, is an accomplished scientist who appears to have made an error during his TED talk. From the transcript page for Tulevski’s talk, The Next Step in Nanotechnology,

When I was a graduate student, it was one of the most exciting times to be working in nanotechnology. There were scientific breakthroughs happening all the time. The conferences were buzzing, there was tons of money pouring in from funding agencies. And the reason is when objects get really small, they’re governed by a different set of physics that govern ordinary objects, like the ones we interact with. We call this physics quantum mechanics. [emphases mine] And what it tells you is that you can precisely tune their behavior just by making seemingly small changes to them, like adding or removing a handful of atoms, or twisting the material. It’s like this ultimate toolkit. You really felt empowered; you felt like you could make anything.

In September 2016, scientists at Cambridge University (UK) announced they had concrete proof that the physics governing materials at the nanoscale is unique, i.e., it does not follow the rules of either classical or quantum physics. From my Oct. 27, 2016 posting,

A Sept. 29, 2016 University of Cambridge press release, which originated the news item, hones in on the peculiarities of the nanoscale,

In the middle, on the order of around 10–100,000 molecules, something different is going on. Because it’s such a tiny scale, the particles have a really big surface-area-to-volume ratio. This means the energetics of what goes on at the surface become very important, much as they do on the atomic scale, where quantum mechanics is often applied.

Classical thermodynamics breaks down. But because there are so many particles, and there are many interactions between them, the quantum model doesn’t quite work either.
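To make the scale argument concrete, here is a minimal sketch (my own illustration, not from the press release) of why surface effects come to dominate as particles shrink: for a sphere of radius r, the surface area is 4πr² and the volume is (4/3)πr³, so the surface-area-to-volume ratio is 3/r and grows as r gets smaller.

```python
import math

def surface_to_volume(radius_m: float) -> float:
    """Return the surface-area-to-volume ratio (1/m) for a sphere.

    SA = 4*pi*r^2 and V = (4/3)*pi*r^3, so SA/V = 3/r: halving the
    radius doubles the ratio, which is why surface energetics matter
    so much at the nanoscale.
    """
    return 3.0 / radius_m

# Compare a 1 cm bead with a 10 nm nanoparticle (both hypothetical sizes
# chosen for illustration):
bead = surface_to_volume(1e-2)   # 1 cm radius
nano = surface_to_volume(1e-8)   # 10 nm radius
print(f"SA/V increases by a factor of {nano / bead:,.0f}")  # prints 1,000,000
```

Shrinking the radius by a factor of a million raises the ratio by the same factor, which is the regime the Cambridge researchers describe as sitting between classical thermodynamics and the quantum model.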

It is very, very easy to miss new developments no matter how tirelessly you scan for information.

Tulevski is a good, interesting, and informed speaker, but I do have one other hesitation regarding his talk. He seems to think that over the last 15 years there should have been more practical applications arising from the field of nanotechnology. There are two aspects here. First, he seems to date the ‘nanotechnology’ effort from the beginning of the US National Nanotechnology Initiative, and many scientists would object to that as the starting point. Second, 15 or even 30 or more years is a brief period of time, especially when you are investigating that which hasn’t been investigated before. For example, you might want to check out “Leviathan and the Air-Pump: Hobbes, Boyle, and the Experimental Life” (published 1985), a book by Steven Shapin and Simon Schaffer (Wikipedia entry for the book). The amount of time (years) spent on how to make just the glue which held the various experimental apparatuses together was a revelation to me. Of course, it makes perfect sense that if you’re trying something new, you’re going to have to figure out everything.

By the way, I include my blog as one of the sources of information that can be faulty despite efforts to make corrections and to keep up with the latest. Even the scientists at Cambridge University can run into some problems as I noted in my Jan. 28, 2016 posting.

Getting back to Tulevski, here’s a link to his lively, informative talk:

ETA Jan. 24, 2017: For some insight into how uncertain, tortuous, and expensive commercializing technology can be read Dexter Johnson’s Jan. 23, 2017 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website). Here’s an excerpt (Note: Links have been removed),

The brief description of this odyssey includes US $78 million in financing over 15 years and $50 million in revenues over that period through licensing of its technology and patents. That revenue includes a back-against-the-wall sell-off of a key business unit to Lockheed Martin in 2008. Another key moment occurred back in 2012 when Belgian-based nanoelectronics powerhouse Imec took on the job of further developing Nantero’s carbon-nanotube-based memory. Despite the money and support from major electronics players, the big commercial breakout of their NRAM technology seemed ever less likely to happen with the passage of time.