Tag Archives: Dexter Johnson

Announcing the ‘memtransistor’

Yet another advance toward ‘brainlike’ computing (how many times have I written this or a variation thereof in the last 10 years? See: Dexter Johnson’s take on the situation at the end of this post): Northwestern University announced their latest memristor research in a February 21, 2018 news item on Nanowerk,

Computer algorithms might be performing brain-like functions, such as facial recognition and language translation, but the computers themselves have yet to operate like brains.

“Computers have separate processing and memory storage units, whereas the brain uses neurons to perform both functions,” said Northwestern University’s Mark C. Hersam. “Neural networks can achieve complicated computation with significantly lower energy consumption compared to a digital computer.”

A February 21, 2018 Northwestern University news release (also on EurekAlert), which originated the news item, provides more information about the latest work from this team,

In recent years, researchers have searched for ways to make computers more neuromorphic, or brain-like, in order to perform increasingly complicated tasks with high efficiency. Now Hersam, a Walter P. Murphy Professor of Materials Science and Engineering in Northwestern’s McCormick School of Engineering, and his team are bringing the world closer to realizing this goal.

The research team has developed a novel device called a “memtransistor,” which operates much like a neuron by performing both memory and information processing. With combined characteristics of a memristor and transistor, the memtransistor also encompasses multiple terminals that operate more similarly to a neural network.

Supported by the National Institute of Standards and Technology and the National Science Foundation, the research was published online today, February 22 [2018], in Nature. Vinod K. Sangwan and Hong-Sub Lee, postdoctoral fellows advised by Hersam, served as the paper’s co-first authors.

The memtransistor builds upon work published in 2015, in which Hersam, Sangwan, and their collaborators used single-layer molybdenum disulfide (MoS2) to create a three-terminal, gate-tunable memristor for fast, reliable digital memory storage. Memristors, short for “memory resistors,” are resistors in a circuit that “remember” the voltage previously applied to them. Typical memristors are two-terminal electronic devices, which can only control one voltage channel. By transforming the memristor into a three-terminal device, Hersam paved the way for memristors to be used in more complex electronic circuits and systems, such as neuromorphic computing.
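As an aside (my illustration, not part of the Northwestern news release), the ‘memory’ in a memristor is easiest to see in a toy model: let the device’s resistance depend on an internal state variable that drifts with the applied voltage history, and a swept voltage then traces out the pinched hysteresis loop memristors are known for. The numbers below are invented purely for illustration.

```python
# Toy memristor model (illustrative only; not the Hersam group's MoS2 device):
# resistance depends on an internal state w that integrates the applied voltage,
# so the device "remembers" its voltage history.
import math

R_ON, R_OFF = 100.0, 16000.0   # assumed low/high resistance limits (ohms)
w = 0.0                        # internal state, 0 (high resistance) .. 1 (low resistance)
dt = 1e-4                      # time step (s)
drift = 40.0                   # assumed state-drift coefficient (1/(V*s))

def resistance(state):
    """Resistance interpolated between the two limiting values."""
    return R_ON * state + R_OFF * (1.0 - state)

for step in range(20000):
    t = step * dt
    v = 1.5 * math.sin(2 * math.pi * t)            # 1 Hz, 1.5 V sine drive
    i = v / resistance(w)
    w = min(1.0, max(0.0, w + drift * v * dt))     # state drifts with voltage, saturates at 0 and 1
    if step % 5000 == 0:
        print(f"t={t:.2f}s  V={v:+.2f}V  R={resistance(w):8.1f} ohm  I={i*1000:+.3f} mA")
```

Plotting current against voltage for a model like this gives the same (zero) current at V = 0 on both sweep directions but different currents elsewhere, the ‘pinched loop’ signature that marks a device as memristive.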

To develop the memtransistor, Hersam’s team again used atomically thin MoS2 with well-defined grain boundaries, which influence the flow of current. Similar to the way fibers are arranged in wood, atoms are arranged into ordered domains – called “grains” – within a material. When a large voltage is applied, the grain boundaries facilitate atomic motion, causing a change in resistance.

“Because molybdenum disulfide is atomically thin, it is easily influenced by applied electric fields,” Hersam explained. “This property allows us to make a transistor. The memristor characteristics come from the fact that the defects in the material are relatively mobile, especially in the presence of grain boundaries.”

But unlike his previous memristor, which used individual, small flakes of MoS2, Hersam’s memtransistor makes use of a continuous film of polycrystalline MoS2 that comprises a large number of smaller flakes. This enabled the research team to scale up the device from one flake to many devices across an entire wafer.

“When length of the device is larger than the individual grain size, you are guaranteed to have grain boundaries in every device across the wafer,” Hersam said. “Thus, we see reproducible, gate-tunable memristive responses across large arrays of devices.”

After fabricating memtransistors uniformly across an entire wafer, Hersam’s team added additional electrical contacts. Typical transistors and Hersam’s previously developed memristor each have three terminals. In their new paper, however, the team realized a seven-terminal device, in which one terminal controls the current among the other six terminals.

“This is even more similar to neurons in the brain,” Hersam said, “because in the brain, we don’t usually have one neuron connected to only one other neuron. Instead, one neuron is connected to multiple other neurons to form a network. Our device structure allows multiple contacts, which is similar to the multiple synapses in neurons.”
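A crude software analogy for the multi-terminal idea (my sketch, not the device physics from the paper): think of the gate as one knob that modulates a set of stored, non-volatile conductances, each acting like a synaptic weight between the device and one of its other terminals. The weights and values below are invented for illustration.

```python
# Toy analogy for a multi-terminal memtransistor (illustrative only):
# one gate value scales six stored, synapse-like conductances at once.
stored_weights = [0.2, 0.9, 0.5, 0.1, 0.7, 0.4]   # assumed non-volatile "synaptic" conductances

def terminal_currents(v_in, gate, weights):
    """Current delivered to each of the other terminals for a shared input voltage."""
    return [round(v_in * gate * w, 3) for w in weights]

print(terminal_currents(v_in=0.5, gate=1.0, weights=stored_weights))   # gate fully on
print(terminal_currents(v_in=0.5, gate=0.3, weights=stored_weights))   # gate turned down
```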

Next, Hersam and his team are working to make the memtransistor faster and smaller. Hersam also plans to continue scaling up the device for manufacturing purposes.

“We believe that the memtransistor can be a foundational circuit element for new forms of neuromorphic computing,” he said. “However, making dozens of devices, as we have done in our paper, is different than making a billion, which is done with conventional transistor technology today. Thus far, we do not see any fundamental barriers that will prevent further scale up of our approach.”

The researchers have made this illustration available,

Caption: This is the memtransistor symbol overlaid on an artistic rendering of a hypothetical circuit layout in the shape of a brain. Credit: Hersam Research Group

Here’s a link to and a citation for the paper,

Multi-terminal memtransistors from polycrystalline monolayer molybdenum disulfide by Vinod K. Sangwan, Hong-Sub Lee, Hadallia Bergeron, Itamar Balla, Megan E. Beck, Kan-Sheng Chen, & Mark C. Hersam. Nature volume 554, pages 500–504 (22 February 2018) doi:10.1038/nature25747 Published online: 21 February 2018

This paper is behind a paywall.

The team’s earlier work referenced in the news release was featured here in an April 10, 2015 posting.

Dexter Johnson

From a Feb. 23, 2018 posting by Dexter Johnson on the Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website),

While this all seems promising, one of the big shortcomings in neuromorphic computing has been that it doesn’t mimic the brain in a very important way. In the brain, for every neuron there are a thousand synapses—the electrical signal sent between the neurons of the brain. This poses a problem because a transistor only has a single terminal, hardly an accommodating architecture for multiplying signals.

Now researchers at Northwestern University, led by Mark Hersam, have developed a new device that combines memristors—two-terminal non-volatile memory devices based on resistance switching—with transistors to create what Hersam and his colleagues have dubbed a “memtransistor” that performs both memory storage and information processing.

This most recent research builds on work that Hersam and his team conducted back in 2015 in which the researchers developed a three-terminal, gate-tunable memristor that operated like a kind of synapse.

While this work was recognized as mimicking the low-power computing of the human brain, critics didn’t really believe that it was acting like a neuron since it could only transmit a signal from one artificial neuron to another. This was far short of a human brain that is capable of making tens of thousands of such connections.

“Traditional memristors are two-terminal devices, whereas our memtransistors combine the non-volatility of a two-terminal memristor with the gate-tunability of a three-terminal transistor,” said Hersam to IEEE Spectrum. “Our device design accommodates additional terminals, which mimic the multiple synapses in neurons.”

Hersam believes that these unique attributes of these multi-terminal memtransistors are likely to present a range of new opportunities for non-volatile memory and neuromorphic computing.

If you have the time and the interest, Dexter’s post provides more context,

How small can a carbon nanotube get before it stops being ‘electrical’?

Research that began as an attempt to get reproducible electronic measurements yielded some unexpected results, according to a January 3, 2018 news item on phys.org,

Carbon nanotubes bound for electronics not only need to be as clean as possible to maximize their utility in next-generation nanoscale devices, but contact effects may limit how small a nano device can be, according to researchers at the Energy Safety Research Institute (ESRI) at Swansea University [UK] in collaboration with researchers at Rice University [US].

ESRI Director Andrew Barron, also a professor at Rice University in the USA, and his team have figured out how to get nanotubes clean enough to obtain reproducible electronic measurements and in the process not only explained why the electrical properties of nanotubes have historically been so difficult to measure consistently, but have shown that there may be a limit to how “nano” future electronic devices can be using carbon nanotubes.

A January 3, 2018 Swansea University press release (also on EurekAlert), which originated the news item, explains the work in more detail,

Like any normal wire, semiconducting nanotubes are progressively more resistant to current along their length. But conductivity measurements of nanotubes over the years have been anything but consistent. The ESRI team wanted to know why.

“We are interested in the creation of nanotube-based conductors, and while people have been able to make wires, their conduction has not met expectations. We were interested in determining the basic science behind the variability observed by other researchers.”

They discovered that hard-to-remove contaminants — leftover iron catalyst, carbon and water — could easily skew the results of conductivity tests. Burning them away, Barron said, creates new possibilities for carbon nanotubes in nanoscale electronics.

The new study appears in the American Chemical Society journal Nano Letters.

The researchers first made multiwalled carbon nanotubes between 40 and 200 nanometers in diameter and up to 30 microns long. They then either heated the nanotubes in a vacuum or bombarded them with argon ions to clean their surfaces.

They tested individual nanotubes the same way one would test any electrical conductor: By touching them with two probes to see how much current passes through the material from one tip to the other. In this case, their tungsten probes were attached to a scanning tunneling microscope.

In clean nanotubes, resistance got progressively stronger as the distance increased, as it should. But the results were skewed when the probes encountered surface contaminants, which increased the electric field strength at the tip. And when measurements were taken within 4 microns of each other, regions of depleted conductivity caused by contaminants overlapped, further scrambling the results.

“We think this is why there’s such inconsistency in the literature,” Barron said.

“If nanotubes are to be the next-generation lightweight conductor, then consistent results, batch-to-batch and sample-to-sample, are needed for devices such as motors and generators as well as power systems.”

Annealing the nanotubes in a vacuum above 200 degrees Celsius (392 degrees Fahrenheit) reduced surface contamination, but not enough to eliminate inconsistent results, they found. Argon ion bombardment also cleaned the tubes, but led to an increase in defects that degrade conductivity.

Ultimately, they reported, vacuum annealing the nanotubes at 500 degrees Celsius (932 degrees Fahrenheit) reduced contamination enough to measure resistance accurately.

Until now, Barron said, engineers who use nanotube fibers or films in devices have modified the material through doping or other means to get the conductive properties they require. But if the source nanotubes are sufficiently decontaminated, they should be able to get the right conductivity simply by putting their contacts in the right spot.

“A key result of our work was that if contacts on a nanotube are less than 1 micron apart, the electronic properties of the nanotube changes from conductor to semiconductor, due to the presence of overlapping depletion zones” said Barron, “this has a potential limiting factor on the size of nanotube based electronic devices – this would limit the application of Moore’s law to nanotube devices.”
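To make the probe-separation argument concrete, here is a small, purely illustrative sketch (my addition, not the researchers’ data): model the nanotube as a wire with a fixed resistance per micron plus a contact term, and add a large penalty when the probes sit closer together than an assumed overlap length, mimicking the overlapping depletion zones described above. All numbers are invented for illustration.

```python
# Toy model of two-probe nanotube resistance versus probe separation (illustrative only).
R_PER_UM = 2.0          # assumed intrinsic resistance per micron (kilo-ohms/um)
R_CONTACT = 5.0         # assumed fixed probe/contact resistance (kilo-ohms)
OVERLAP_UM = 1.0        # assumed separation below which depletion zones overlap (um)
R_DEPLETION = 200.0     # assumed extra resistance when the zones overlap (kilo-ohms)

def two_probe_resistance(separation_um):
    r = R_CONTACT + R_PER_UM * separation_um
    if separation_um < OVERLAP_UM:
        r += R_DEPLETION    # overlapping depletion regions dominate the measurement
    return r

for d in (0.25, 0.5, 0.9, 1.1, 2.0, 4.0, 10.0):
    print(f"separation {d:5.2f} um -> apparent resistance {two_probe_resistance(d):7.1f} kOhm")
```

In this picture the apparent resistance rises smoothly with separation until the probes get too close, at which point it jumps, one way to visualize why closely spaced contacts could make the device behave more like a semiconductor than a conductor.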

Chris Barnett of Swansea is lead author of the paper. Co-authors are Cathren Gowenlock and Kathryn Welsby, and Rice alumnus Alvin Orbaek White of Swansea. Barron is the Sêr Cymru Chair of Low Carbon Energy and Environment at Swansea and the Charles W. Duncan Jr.–Welch Professor of Chemistry and a professor of materials science and nanoengineering at Rice.

The Welsh Government Sêr Cymru National Research Network in Advanced Engineering and Materials, the Sêr Cymru Chair Program, the Office of Naval Research and the Robert A. Welch Foundation supported the research.

Rice University published a January 4, 2018 news release (also on EurekAlert), which is almost (95%) identical to the press release from Swansea. That’s a bit unusual, as collaborating institutions usually like to focus on their unique contributions to the research, hence multiple news/press releases.

Dexter Johnson, in a January 11, 2018 post on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website), adds a detail or two while writing in an accessible style.

Here’s a link to and a citation for the paper,

Spatial and Contamination-Dependent Electrical Properties of Carbon Nanotubes by Chris J. Barnett, Cathren E. Gowenlock, Kathryn Welsby, Alvin Orbaek White, and Andrew R. Barron. Nano Lett., Article ASAP DOI: 10.1021/acs.nanolett.7b03390 Publication Date (Web): December 19, 2017

Copyright © 2017 American Chemical Society

This paper is behind a paywall.

Canada’s ‘Smart Cities’ will need new technology (5G wireless) and, maybe, graphene

I recently published [March 20, 2018] a piece on ‘smart cities’, covering both an art/science event in Toronto and a Canadian government initiative, without mentioning the new technology needed to support all of the grand plans. On that note, it seems the Canadian federal government and two provincial (Québec and Ontario) governments are prepared to invest in one of the necessary ‘new’ technologies, 5G wireless. The Canadian Broadcasting Corporation’s (CBC) Shawn Benjamin reports on Canada’s 5G plans in suitably breathless (even in text only) tones of excitement in a March 19, 2018 article,

The federal, Ontario and Quebec governments say they will spend $200 million to help fund research into 5G wireless technology, the next-generation networks with download speeds 100 times faster than current ones can handle.

The so-called “5G corridor,” known as ENCQOR, will see tech companies such as Ericsson, Ciena Canada, Thales Canada, IBM and CGI kick in another $200 million to develop facilities to get the project up and running.

The idea is to set up a network of linked research facilities and laboratories that these companies — and as many as 1,000 more across Canada — will be able to use to test products and services that run on 5G networks.

Benjamin’s description of 5G is focused on what it will make possible in the future,

If you think things are moving too fast, buckle up, because a new 5G cellular network is just around the corner and it promises to transform our lives by connecting nearly everything to a new, much faster, reliable wireless network.

The first networks won’t be operational for at least a few years, but technology and telecom companies around the world are already planning to spend billions to make sure they aren’t left behind, says Lawrence Surtees, a communications analyst with the research firm IDC.

The new 5G is no tentative baby step toward the future. Rather, as Surtees puts it, “the move from 4G to 5G is a quantum leap.”

In a downtown Toronto soundstage, Alan Smithson recently demonstrated a few virtual reality and augmented reality projects that his company MetaVRse is working on.

The potential for VR and AR technology is endless, he said, in large part for its potential to help hurdle some of the walls we are already seeing with current networks.

Virtual Reality technology on the market today is continually increasing things like frame rates and screen resolutions in a constant quest to make their devices even more lifelike.

… They [current 4G networks] can’t handle the load. But 5G can do so easily, Smithson said, so much so that the current era of bulky augmented reality headsets could be replaced by a pair of normal-looking glasses.

In a 5G world, those internet-connected glasses will automatically recognize everyone you meet, and possibly be able to overlay their name in your field of vision, along with a link to their online profile. …

Benjamin also mentions ‘smart cities’,

In a University of Toronto laboratory, Professor Alberto Leon-Garcia researches connected vehicles and smart power grids. “My passion right now is enabling smart cities — making smart cities a reality — and that means having much more immediate and detailed sense of the environment,” he said.

Faster 5G networks will assist his projects in many ways, by giving planners more instant data on things like traffic patterns, energy consumption, various carbon footprints and much more.

Leon-Garcia points to a brightly lit map of Toronto [image embedded in Benjamin’s article] in his office, and explains that every dot of light represents a sensor transmitting real time data.

Currently, the network is hooked up to things like city buses, traffic cameras and the city-owned fleet of shared bicycles. He currently has thousands of data points feeding him info on his map, but in a 5G world, the network will support about a million sensors per square kilometre.

Very exciting, but where is all this data going? What computers will be processing the information? Where are these sensors located? Benjamin does not venture into those waters, nor does The Economist in a February 13, 2018 article about 5G and the Olympic Games in Pyeongchang, South Korea, but the magazine does note another barrier to 5G implementation,

“FASTER, higher, stronger,” goes the Olympic motto. So it is only appropriate that the next generation of wireless technology, “5G” for short, should get its first showcase at the Winter Olympics  under way in Pyeongchang, South Korea. Once fully developed, it is supposed to offer download speeds of at least 20 gigabits per second (4G manages about half that at best) and response times (“latency”) of below 1 millisecond. So the new networks will be able to transfer a high-resolution movie in two seconds and respond to requests in less than a hundredth of the time it takes to blink an eye. But 5G is not just about faster and swifter wireless connections.
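A quick back-of-the-envelope check of those figures (my arithmetic, not The Economist’s): at 20 gigabits per second, two seconds of transfer moves 40 gigabits, or about 5 gigabytes, which is indeed in the range of a compressed high-resolution movie, and 1 millisecond is roughly 1/300 of a 300-millisecond eye blink (the blink duration is an assumed figure), consistent with “less than a hundredth” of a blink.

```python
# Sanity-check the quoted 5G numbers (simple arithmetic, not measurements).
download_gbps = 20           # claimed minimum 5G download speed (gigabits per second)
seconds = 2
gigabits = download_gbps * seconds
gigabytes = gigabits / 8
print(f"{download_gbps} Gb/s for {seconds} s -> {gigabits} Gb = {gigabytes:.0f} GB")

latency_ms = 1               # claimed 5G latency
blink_ms = 300               # assumed duration of an eye blink (milliseconds)
print(f"{latency_ms} ms latency is about 1/{blink_ms // latency_ms} of a {blink_ms} ms blink")
```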

The technology is meant to enable all sorts of new services. One such would offer virtual- or augmented-reality experiences. At the Olympics, for example, many contestants are being followed by 360-degree video cameras. At special venues sports fans can don virtual-reality goggles to put themselves right into the action. But 5G is also supposed to become the connective tissue for the internet of things, to link anything from smartphones to wireless sensors and industrial robots to self-driving cars. This will be made possible by a technique called “network slicing”, which allows operators quickly to create bespoke networks that give each set of devices exactly the connectivity they need.

Despite its versatility, it is not clear how quickly 5G will take off. The biggest brake will be economic. [emphasis mine] When the GSMA, an industry group, last year asked 750 telecoms bosses about the most salient impediment to delivering 5G, more than half cited the lack of a clear business case. People may want more bandwidth, but they are not willing to pay for it—an attitude even the lure of the fanciest virtual-reality applications may not change. …

That may not be the only brake. Dexter Johnson, in a March 19, 2018 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website), covers some of the others (Note: Links have been removed),

Graphene has been heralded as a “wonder material” for well over a decade now, and 5G has been marketed as the next big thing for at least the past five years. Analysts have suggested that 5G could be the golden ticket to virtual reality and artificial intelligence, and promised that graphene could improve technologies within electronics and optoelectronics.

But proponents of both graphene and 5G have also been accused of stirring up hype. There now seems to be a rising sense within industry circles that these glowing technological prospects will not come anytime soon.

At Mobile World Congress (MWC) in Barcelona last month [February 2018], some misgivings for these long promised technologies may have been put to rest, though, thanks in large part to each other.

In a meeting at MWC with Jari Kinaret, a professor at Chalmers University in Sweden and director of the Graphene Flagship, I took a guided tour around the Pavilion to see some of the technologies poised to have an impact on the development of 5G.

Being invited back to the MWC for three years is a pretty clear indication of how important graphene is to those who are trying to raise the fortunes of 5G. But just how important became more obvious to me in an interview with Frank Koppens, the leader of the quantum nano-optoelectronic group at Institute of Photonic Sciences (ICFO) just outside of Barcelona, last year.

He said: “5G cannot just scale. Some new technology is needed. And that’s why we have several companies in the Graphene Flagship that are putting a lot of pressure on us to address this issue.”

In a collaboration led by CNIT—a consortium of Italian universities and national laboratories focused on communication technologies—researchers from AMO GmbH, Ericsson, Nokia Bell Labs, and Imec have developed graphene-based photodetectors and modulators capable of receiving and transmitting optical data faster than ever before.

The aim of all this speed for transmitting data is to support the ultrafast data streams with extreme bandwidth that will be part of 5G. In fact, at another section during MWC, Ericsson was presenting the switching of a 100 Gigabits per second (Gbps) channel based on the technology.

“The fact that Ericsson is demonstrating another version of this technology demonstrates that from Ericsson’s point of view, this is no longer just research” said Kinaret.

It’s no mystery why the big mobile companies are jumping on this technology. Not only does it provide high-speed data transmission, but it also does it 10 times more efficiently than silicon or doped silicon devices, and will eventually do it more cheaply than those devices, according to Vito Sorianello, senior researcher at CNIT.

Interestingly, Ericsson is one of the tech companies mentioned with regard to Canada’s 5G project, ENCQOR, and Sweden’s Chalmers University, as Dexter Johnson notes, is the lead institution for the Graphene Flagship. One other fact to note: Canada’s resources include graphite mines with ‘premium’ flakes for producing graphene. Canada’s graphite mines are located (as far as I know) in only two Canadian provinces, Ontario and Québec, which also happen to be pitching money into ENCQOR. My March 21, 2018 posting describes the latest entry into the Canadian graphite mining stakes.

As for the questions I posed about processing power, etc. It seems the South Koreans have found answers of some kind but it’s hard to evaluate as I haven’t found any additional information about 5G and its implementation in South Korea. If anyone has answers, please feel free to leave them in the ‘comments’. Thank you.

FrogHeart’s good-bye to 2017 and hello to 2018

This is going to be relatively short and sweet(ish). Starting with the 2017 review:

Nano blogosphere and the Canadian blogosphere

From my perspective there’s been a change taking place in the nano blogosphere over the last few years. There are fewer blogs, along with fewer postings from those who still blog, and, interestingly, some blogs are becoming more generalized. The Foresight Institute’s Nanodot blog (like FrogHeart) has expanded its range of topics to include artificial intelligence and other subjects. Andrew Maynard’s 2020 Science blog now exists in an archived form but, before its demise, it too had started to include other topics, notably risk in its many forms as opposed to risk and nanomaterials. Dexter Johnson’s blog, Nanoclast (on the IEEE [Institute of Electrical and Electronics Engineers] website), maintains its 3x weekly postings. Tim Harper, who often wrote about nanotechnology on his Cientifica blog, appears to have adopted a more freewheeling approach dominated by his Twitter feed, although he also seems (I can’t confirm that the latest posts were written in 2017) to blog at timharper.net.

The Canadian science blogosphere seems to be getting quieter, if Science Borealis (a blog aggregator) is any measure. My overall impression is that the bloggers have been a bit quieter this year, with fewer postings on the feed, or perhaps that’s due to some technical issues (sometimes FrogHeart posts do not get onto the feed). On the promising side, Science Borealis teamed up with the Science Writers and Communicators of Canada Association to run a contest, “2017 People’s Choice Awards: Canada’s Favourite Science Online!” There were two categories (Favourite Science Blog and Favourite Science Site) and you can find a list of the finalists with links to the winners here.

Big congratulations to the winners: Body of Evidence was named Canada’s Favourite Blog 2017 (see the Dec. 6, 2017 article by Alina Fisher for Science Borealis), and Let’s Talk Science won the Canada’s Favourite Science Online 2017 category, as per this announcement.

However, I can’t help wondering: where were ASAP Science, Acapella Science, Quirks & Quarks, IFLS (I f***ing love science), and others on the list of finalists? I would have thought any of these would have a lock on a spot as a finalist. These are Canadian online science purveyors and they are hugely popular, which should mean they’d have no problem getting nominated and getting votes. I can’t find the criteria for nominations (or any hint there will be a 2018 contest), so I imagine their absence from the 2017 finalists list will remain a mystery to me.

Looking forward to 2018, I think that the nano blogosphere will continue with its transformation into a more general science/technology-oriented community. To some extent, I believe this reflects the fact that nanotechnology is being absorbed into the larger science/technology effort as foundational (something wiser folks than me predicted some years ago).

As for Science Borealis and the Canadian science online effort, I’m going to interpret the quieter feeds as a sign of a maturing community. After all, there are always ups and downs in terms of enthusiasm and participation and as I noted earlier the launch of an online contest is promising as is the collaboration with Science Writers and Communicators of Canada.

Canadian science policy

It was a big year.

Canada’s Chief Science Advisor

Canada’s first chief science advisor in many years, Dr. Mona Nemer, stepped into her position in Fall 2017. The official announcement was made on Sept. 26, 2017. I covered the event in my Sept. 26, 2017 posting, which includes a few more details than found in the official announcement.

You’ll also find in that Sept. 26, 2017 posting a brief discourse on the Naylor report (also known as the Review of Fundamental Science) and some speculation on why, to my knowledge, there has been no action taken as a consequence.  The Naylor report was released April 10, 2017 and was covered here in a three-part review, published on June 8, 2017,

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 1 of 3

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 2 of 3

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 3 of 3

I have found another commentary (much briefer than mine) by Paul Dufour, published November 9, 2017 on the Canadian Science Policy Centre website.

Subnational and regional science funding

This began in 2016 with a workshop mentioned in my November 10, 2016 posting, ‘Council of Canadian Academies and science policy for Alberta.’ By the time the report was published, the endeavour had been transformed into ‘Science Policy: Considerations for Subnational Governments’ (report here and my June 22, 2017 commentary here).

I don’t know what will come of this but I imagine scientists will be supportive as it means more money, and they are always looking for more money. Still, the new government in British Columbia has only one ‘science entity’, the Premier’s Technology Council, and I’m not sure it’s still operational. To my knowledge, there is no ministry or other agency that is focused primarily or partially on science.

Meanwhile, a couple of representatives from the health sciences (neither of whom was involved in producing the report) seem quite enthused about the prospects for provincial money in their October 27, 2017 opinion piece for the Canadian Science Policy Centre (Bev Holmes, interim CEO, Michael Smith Foundation for Health Research, British Columbia, and Patrick Odnokon, CEO, Saskatchewan Health Research Foundation).

Artificial intelligence and Canadians

An event which I find more interesting with time was the announcement of the Pan-Canadian Artificial Intelligence Strategy in the 2017 Canadian federal budget. Since then there has been a veritable gold rush mentality with regard to artificial intelligence in Canada, with one announcement after the next about various corporations opening new offices in Toronto or Montréal.

What has really piqued my interest recently is a report being written for Canada’s Treasury Board by Michael Karlin (you can learn more from his Twitter feed, although you may need to scroll down past some of his more personal tweets, such as something about cassoulet in the Dec. 29, 2017 tweets). As for Karlin’s report, which is a work in progress, you can find out more about the report and Karlin in a December 12, 2017 article by Rob Hunt for the Algorithmic Media Observatory (sponsored by the Social Sciences and Humanities Research Council of Canada [SSHRC], the Centre for the Study of Democratic Citizenship, and the Fonds de recherche du Québec: Société et culture).

You can ring in 2018 by reading and making comments, which could influence the final version, on Karlin’s “Responsible Artificial Intelligence in the Government of Canada” part of the government’s Digital Disruption White Paper Series.

As for other 2018 news, the Council of Canadian Academies is expected to publish “The State of Science and Technology and Industrial Research and Development in Canada” at some point soon (we hope). This report follows and incorporates two previous ‘states’, The State of Science and Technology in Canada, 2012 (the first of these was a 2006 report) and the 2013 version of The State of Industrial R&D in Canada. There is already some preliminary data for this latest ‘state of’  (you can find a link and commentary in my December 15, 2016 posting).

FrogHeart then (2017) and soon (2018)

On looking back, I see that the year started out at quite a clip as I was attempting to hit the 5,000th blog posting mark, which I did on March 3, 2017. I have since cut back from the high of 3 postings/day to approximately 1 posting/day. It makes things more manageable, allowing me to focus on other matters.

By the way, you may note that the ‘Donate’ button has disappeared from my sidebar. I thank everyone who donated from the bottom of my heart. The money was more than currency; it also symbolized encouragement. On the sad side, I moved from one hosting service to a new one (Sibername) late in December 2016 and have been experiencing serious bandwidth issues, which result in FrogHeart’s disappearance from the web for days at a time. I am trying to resolve the issues and hope that such actions as removing the ‘Donate’ button will help.

I wish my readers all the best for 2018 as we explore nanotechnology and other emerging technologies!

(I apologize for any and all errors. I usually take a little more time to write this end-of-year and coming-year piece but, due to bandwidth issues, I was unable to access my draft and give it at least one review. And at this point, I’m too tired to try spotting errors. If you see any, please do let me know.)

‘Nano-hashtags’ for Majorana particles?

The ‘nano-hashtags’ are in fact (assuming a minor leap of imagination) nanowires that resemble hashtags.

Scanning electron microscope image of the device wherein clearly a ‘hashtag’ is formed. Credit: Eindhoven University of Technology

An August 23, 2017 news item on ScienceDaily makes the announcement,

In Nature, an international team of researchers from Eindhoven University of Technology [Netherlands], Delft University of Technology [Netherlands] and the University of California — Santa Barbara presents an advanced quantum chip that will be able to provide definitive proof of the mysterious Majorana particles. These particles, first demonstrated in 2012, are their own antiparticle at one and the same time. The chip, which comprises ultrathin networks of nanowires in the shape of ‘hashtags’, has all the qualities to allow Majorana particles to exchange places. This feature is regarded as the smoking gun for proving their existence and is a crucial step towards their use as a building block for future quantum computers.

An August 23, 2017 Eindhoven University press release (also on EurekAlert), which originated the news item, provides some context and information about the work,

In 2012 it was big news: researchers from Delft University of Technology and Eindhoven University of Technology presented the first experimental signatures for the existence of the Majorana fermion. This particle had been predicted in 1937 by the Italian physicist Ettore Majorana and has the distinctive property of also being its own anti-particle. The Majorana particles emerge at the ends of a semiconductor wire, when in contact with a superconductor material.

Smoking gun

While the discovered particles may have properties typical to Majoranas, the most exciting proof could be obtained by allowing two Majorana particles to exchange places, or ‘braid’ as it is scientifically known. “That’s the smoking gun,” suggests Erik Bakkers, one of the researchers from Eindhoven University of Technology. “The behavior we then see could be the most conclusive evidence yet of Majoranas.”

Crossroads

In the Nature paper that is published today [August 23, 2017], Bakkers and his colleagues present a new device that should be able to show this exchanging of Majoranas. In the original experiment in 2012 two Majorana particles were found in a single wire but they were not able to pass each other without immediately destroying the other. Thus the researchers quite literally had to create space. In the presented experiment they formed intersections using the same kinds of nanowire so that four of these intersections form a ‘hashtag’, #, and thus create a closed circuit along which Majoranas are able to move.

Etch and grow

The researchers built their hashtag device starting from scratch. The nanowires are grown from a specially etched substrate such that they form exactly the desired network which they then expose to a stream of aluminium particles, creating layers of aluminium, a superconductor, on specific spots on the wires – the contacts where the Majorana particles emerge. Places that lie ‘in the shadow’ of other wires stay uncovered.

Leap in quality

The entire process happens in a vacuum and at ultra-cold temperature (around -273 degree Celsius). “This ensures very clean, pure contacts,” says Bakkers, “and enables us to make a considerable leap in the quality of this kind of quantum device.” The measurements demonstrate for a number of electronic and magnetic properties that all the ingredients are present for the Majoranas to braid.

Quantum computers

If the researchers succeed in enabling the Majorana particles to braid, they will at once have killed two birds with one stone. Given their robustness, Majoranas are regarded as the ideal building block for future quantum computers that will be able to perform many calculations simultaneously and thus many times faster than current computers. The braiding of two Majorana particles could form the basis for a qubit, the calculation unit of these computers.

Travel around the world

An interesting detail is that the samples have traveled around the world during the fabrication, combining unique and synergetic activities of each research institution. It started in Delft with patterning and etching the substrate, then to Eindhoven for nanowire growth and to Santa Barbara for aluminium contact formation. Finally back to Delft via Eindhoven for the measurements.

Here’s a link to and a citation for the paper,

Epitaxy of advanced nanowire quantum devices by Sasa Gazibegovic, Diana Car, Hao Zhang, Stijn C. Balk, John A. Logan, Michiel W. A. de Moor, Maja C. Cassidy, Rudi Schmits, Di Xu, Guanzhong Wang, Peter Krogstrup, Roy L. M. Op het Veld, Kun Zuo, Yoram Vos, Jie Shen, Daniël Bouman, Borzoyeh Shojaei, Daniel Pennachio, Joon Sue Lee, Petrus J. van Veldhoven, Sebastian Koelling, Marcel A. Verheijen, Leo P. Kouwenhoven, Chris J. Palmstrøm, & Erik P. A. M. Bakkers. Nature 548, 434–438 (24 August 2017) doi:10.1038/nature23468 Published online 23 August 2017

This paper is behind a paywall.

Dexter Johnson has some additional insight (an interview with one of the researchers) in an Aug. 29, 2017 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website).

Yarns that harvest and generate energy

The researchers involved in this work are confident enough about their prospects that they will be  patenting their research into yarns. From an August 25, 2017 news item on Nanowerk,

An international research team led by scientists at The University of Texas at Dallas and Hanyang University in South Korea has developed high-tech yarns that generate electricity when they are stretched or twisted.

In a study published in the Aug. 25 [2017] issue of the journal Science (“Harvesting electrical energy from carbon nanotube yarn twist”), researchers describe “twistron” yarns and their possible applications, such as harvesting energy from the motion of ocean waves or from temperature fluctuations. When sewn into a shirt, these yarns served as a self-powered breathing monitor.

“The easiest way to think of twistron harvesters is, you have a piece of yarn, you stretch it, and out comes electricity,” said Dr. Carter Haines, associate research professor in the Alan G. MacDiarmid NanoTech Institute at UT Dallas and co-lead author of the article. The article also includes researchers from South Korea, Virginia Tech, Wright-Patterson Air Force Base and China.

An August 25, 2017 University of Texas at Dallas news release, which originated the news item, expands on the theme,

Yarns Based on Nanotechnology

The yarns are constructed from carbon nanotubes, which are hollow cylinders of carbon 10,000 times smaller in diameter than a human hair. The researchers first twist-spun the nanotubes into high-strength, lightweight yarns. To make the yarns highly elastic, they introduced so much twist that the yarns coiled like an over-twisted rubber band.

In order to generate electricity, the yarns must be either submerged in or coated with an ionically conducting material, or electrolyte, which can be as simple as a mixture of ordinary table salt and water.

“Fundamentally, these yarns are supercapacitors,” said Dr. Na Li, a research scientist at the NanoTech Institute and co-lead author of the study. “In a normal capacitor, you use energy — like from a battery — to add charges to the capacitor. But in our case, when you insert the carbon nanotube yarn into an electrolyte bath, the yarns are charged by the electrolyte itself. No external battery, or voltage, is needed.”

When a harvester yarn is twisted or stretched, the volume of the carbon nanotube yarn decreases, bringing the electric charges on the yarn closer together and increasing their energy, Haines said. This increases the voltage associated with the charge stored in the yarn, enabling the harvesting of electricity.
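The underlying electrostatics can be written down in two lines (my illustration with made-up numbers, not data from the paper): for a fixed charge Q on a capacitor, V = Q/C and the stored energy is E = Q²/(2C), so squeezing the yarn so that its effective capacitance drops raises both the voltage and the stored energy, and that increment is what can be harvested.

```python
# Fixed-charge capacitor picture of the twistron harvester (illustrative numbers only).
Q = 1e-3            # assumed stored charge (coulombs)
C_RELAXED = 1.0     # assumed capacitance of the relaxed yarn (farads; supercapacitor-like)
C_STRETCHED = 0.8   # assumed capacitance when stretched (smaller: charges pushed closer together)

def voltage(q, c):
    return q / c

def stored_energy(q, c):
    return q * q / (2 * c)

delta_e = stored_energy(Q, C_STRETCHED) - stored_energy(Q, C_RELAXED)
print(f"V relaxed   = {voltage(Q, C_RELAXED) * 1e3:.3f} mV")
print(f"V stretched = {voltage(Q, C_STRETCHED) * 1e3:.3f} mV")
print(f"extra stored energy per stretch ~ {delta_e * 1e6:.2f} microjoules")
```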

Stretching the coiled twistron yarns 30 times a second generated 250 watts per kilogram of peak electrical power when normalized to the harvester’s weight, said Dr. Ray Baughman, director of the NanoTech Institute and a corresponding author of the study.

“Although numerous alternative harvesters have been investigated for many decades, no other reported harvester provides such high electrical power or energy output per cycle as ours for stretching rates between a few cycles per second and 600 cycles per second.”

Lab Tests Show Potential Applications

In the lab, the researchers showed that a twistron yarn weighing less than a housefly could power a small LED, which lit up each time the yarn was stretched.

To show that twistrons can harvest waste thermal energy from the environment, Li connected a twistron yarn to a polymer artificial muscle that contracts and expands when heated and cooled. The twistron harvester converted the mechanical energy generated by the polymer muscle to electrical energy.

“There is a lot of interest in using waste energy to power the Internet of Things, such as arrays of distributed sensors,” Li said. “Twistron technology might be exploited for such applications where changing batteries is impractical.”

The researchers also sewed twistron harvesters into a shirt. Normal breathing stretched the yarn and generated an electrical signal, demonstrating its potential as a self-powered respiration sensor.

“Electronic textiles are of major commercial interest, but how are you going to power them?” Baughman said. “Harvesting electrical energy from human motion is one strategy for eliminating the need for batteries. Our yarns produced over a hundred times higher electrical power per weight when stretched compared to other weavable fibers reported in the literature.”

Electricity from Ocean Waves

“In the lab we showed that our energy harvesters worked using a solution of table salt as the electrolyte,” said Baughman, who holds the Robert A. Welch Distinguished Chair in Chemistry in the School of Natural Sciences and Mathematics. “But we wanted to show that they would also work in ocean water, which is chemically more complex.”

In a proof-of-concept demonstration, co-lead author Dr. Shi Hyeong Kim, a postdoctoral researcher at the NanoTech Institute, waded into the frigid surf off the east coast of South Korea to deploy a coiled twistron in the sea. He attached a 10 centimeter-long yarn, weighing only 1 milligram (about the weight of a mosquito), between a balloon and a sinker that rested on the seabed.

Every time an ocean wave arrived, the balloon would rise, stretching the yarn up to 25 percent, thereby generating measured electricity.

Even though the investigators used very small amounts of twistron yarn in the current study, they have shown that harvester performance is scalable, both by increasing twistron diameter and by operating many yarns in parallel.

“If our twistron harvesters could be made less expensively, they might ultimately be able to harvest the enormous amount of energy available from ocean waves,” Baughman said. “However, at present these harvesters are most suitable for powering sensors and sensor communications. Based on demonstrated average power output, just 31 milligrams of carbon nanotube yarn harvester could provide the electrical energy needed to transmit a 2-kilobyte packet of data over a 100-meter radius every 10 seconds for the Internet of Things.”
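As a rough plausibility check on that last figure (my arithmetic, with an assumed radio energy cost that is not from the study): if transmitting a 2-kilobyte packet over a 100-metre radius costs on the order of a few millijoules with a low-power radio, then one packet every 10 seconds needs a few hundred microwatts on average, and 31 milligrams of harvester would only have to deliver roughly 10 watts per kilogram on average, comfortably below the 250 W/kg peak figure quoted earlier.

```python
# Rough plausibility check of the Internet-of-Things example above (assumed radio cost).
PACKET_ENERGY_J = 3e-3       # assumed energy to transmit a 2 kB packet ~100 m (a few millijoules)
INTERVAL_S = 10              # one packet every 10 seconds
HARVESTER_MASS_KG = 31e-6    # 31 milligrams of twistron yarn

average_power_w = PACKET_ENERGY_J / INTERVAL_S
required_specific_power = average_power_w / HARVESTER_MASS_KG   # watts per kilogram of yarn

print(f"average power needed   : {average_power_w * 1e6:.0f} microwatts")
print(f"required specific power: {required_specific_power:.1f} W/kg (vs. 250 W/kg peak quoted above)")
```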

Researchers from the UT Dallas Erik Jonsson School of Engineering and Computer Science and Lintec of America’s Nano-Science & Technology Center also participated in the study.

The investigators have filed a patent on the technology.

In the U.S., the research was funded by the Air Force, the Air Force Office of Scientific Research, NASA, the Office of Naval Research and the Robert A. Welch Foundation. In Korea, the research was supported by the Korea-U.S. Air Force Cooperation Program and the Creative Research Initiative Center for Self-powered Actuation of the National Research Foundation and the Ministry of Science.

Here’s a link to and a citation for the paper,

Harvesting electrical energy from carbon nanotube yarn twist by Shi Hyeong Kim, Carter S. Haines, Na Li, Keon Jung Kim, Tae Jin Mun, Changsoon Choi, Jiangtao Di, Young Jun Oh, Juan Pablo Oviedo, Julia Bykova, Shaoli Fang, Nan Jiang, Zunfeng Liu, Run Wang, Prashant Kumar, Rui Qiao, Shashank Priya, Kyeongjae Cho, Moon Kim, Matthew Steven Lucas, Lawrence F. Drummy, Benji Maruyama, Dong Youn Lee, Xavier Lepró, Enlai Gao, Dawood Albarq, Raquel Ovalle-Robles, Seon Jeong Kim, Ray H. Baughman. Science 25 Aug 2017: Vol. 357, Issue 6353, pp. 773-778 DOI: 10.1126/science.aam8771

This paper is behind a paywall.

Dexter Johnson in an Aug. 25, 2017 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) delves further into the research,

“Basically what’s happening is when we stretch the yarn, we’re getting a change in capacitance of the yarn. It’s that change that allows us to get energy out,” explains Carter Haines, associate research professor at UT Dallas and co-lead author of the paper describing the research, in an interview with IEEE Spectrum.

This makes it similar in many ways to other types of energy harvesters. For instance, in other research, it has been demonstrated—with sheets of rubber with coated electrodes on both sides—that you can increase the capacitance of a material when you stretch it and it becomes thinner. As a result, if you have charge on that capacitor, you can change the voltage associated with that charge.

“We’re more or less exploiting the same effect but what we’re doing differently is we’re using an electric chemical cell to do this,” says Haines. “So we’re not changing double layer capacitance in normal parallel plate capacitors. But we’re actually changing the electric chemical capacitance on the surface of a super capacitor yarn.”

While there are other capacitance-based energy harvesters, those other devices require extremely high voltages to work because they’re using parallel plate capacitors, according to Haines.

Dexter asks good questions and his post is very informative.

Cyborg bacteria to reduce carbon dioxide

This video is a bit technical but then it is about work being presented to chemists at the American Chemical Society’s (ACS) 254th National Meeting & Exposition, Aug. 20-24, 2017,

For a more plain language explanation, there’s an August 22, 2017 ACS news release (also on EurekAlert),

Photosynthesis provides energy for the vast majority of life on Earth. But chlorophyll, the green pigment that plants use to harvest sunlight, is relatively inefficient. To enable humans to capture more of the sun’s energy than natural photosynthesis can, scientists have taught bacteria to cover themselves in tiny, highly efficient solar panels to produce useful compounds.

“Rather than rely on inefficient chlorophyll to harvest sunlight, I’ve taught bacteria how to grow and cover their bodies with tiny semiconductor nanocrystals,” says Kelsey K. Sakimoto, Ph.D., who carried out the research in the lab of Peidong Yang, Ph.D. “These nanocrystals are much more efficient than chlorophyll and can be grown at a fraction of the cost of manufactured solar panels.”

Humans increasingly are looking to find alternatives to fossil fuels as sources of energy and feedstocks for chemical production. Many scientists have worked to create artificial photosynthetic systems to generate renewable energy and simple organic chemicals using sunlight. Progress has been made, but the systems are not efficient enough for commercial production of fuels and feedstocks.

Research in Yang’s lab at the University of California, Berkeley, where Sakimoto earned his Ph.D., focuses on harnessing inorganic semiconductors that can capture sunlight to organisms such as bacteria that can then use the energy to produce useful chemicals from carbon dioxide and water. “The thrust of research in my lab is to essentially ‘supercharge’ nonphotosynthetic bacteria by providing them energy in the form of electrons from inorganic semiconductors, like cadmium sulfide, that are efficient light absorbers,” Yang says. “We are now looking for more benign light absorbers than cadmium sulfide to provide bacteria with energy from light.”

Sakimoto worked with a naturally occurring, nonphotosynthetic bacterium, Moorella thermoacetica, which, as part of its normal respiration, produces acetic acid from carbon dioxide (CO2). Acetic acid is a versatile chemical that can be readily upgraded to a number of fuels, polymers, pharmaceuticals and commodity chemicals through complementary, genetically engineered bacteria.

When Sakimoto fed cadmium and the amino acid cysteine, which contains a sulfur atom, to the bacteria, they synthesized cadmium sulfide (CdS) nanoparticles, which function as solar panels on their surfaces. The hybrid organism, M. thermoacetica-CdS, produces acetic acid from CO2, water and light. “Once covered with these tiny solar panels, the bacteria can synthesize food, fuels and plastics, all using solar energy,” Sakimoto says. “These bacteria outperform natural photosynthesis.”

The bacteria operate at an efficiency of more than 80 percent, and the process is self-replicating and self-regenerating, making this a zero-waste technology. “Synthetic biology and the ability to expand the product scope of CO2 reduction will be crucial to poising this technology as a replacement, or one of many replacements, for the petrochemical industry,” Sakimoto says.

So, do the inorganic-biological hybrids have commercial potential? “I sure hope so!” he says. “Many current systems in artificial photosynthesis require solid electrodes, which is a huge cost. Our algal biofuels are much more attractive, as the whole CO2-to-chemical apparatus is self-contained and only requires a big vat out in the sun.” But he points out that the system still requires some tweaking to tune both the semiconductor and the bacteria. He also suggests that it is possible that the hybrid bacteria he created may have some naturally occurring analog. “A future direction, if this phenomenon exists in nature, would be to bioprospect for these organisms and put them to use,” he says.

For more insight into the work, check out Dexter Johnson’s Aug. 22, 2017 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website),

“It’s actually a natural, overlooked feature of their biology,” explains Sakimoto in an e-mail interview with IEEE Spectrum. “This bacterium has a detoxification pathway, meaning if it encounters a toxic metal, like cadmium, it will try to precipitate it out, thereby detoxifying it. So when we introduce cadmium ions into the growth medium in which M. thermoacetica is hanging out, it will convert the amino acid cysteine into sulfide, which precipitates out cadmium as cadmium sulfide. The crystals then assemble and stick onto the bacterium through normal electrostatic interactions.”

I’ve just excerpted one bit, there’s more in Dexter’s posting.

Training drugs

This summarizes some of what’s happening in nanomedicine and provides a plug (boost) for the  University of Cambridge’s nanotechnology programmes (from a June 26, 2017 news item on Nanowerk),

Nanotechnology is creating new opportunities for fighting disease – from delivering drugs in smart packaging to nanobots powered by the world’s tiniest engines.

Chemotherapy benefits a great many patients but the side effects can be brutal.
When a patient is injected with an anti-cancer drug, the idea is that the molecules will seek out and destroy rogue tumour cells. However, relatively large amounts need to be administered to reach the target in high enough concentrations to be effective. As a result of this high drug concentration, healthy cells may be killed as well as cancer cells, leaving many patients weak, nauseated and vulnerable to infection.

One way that researchers are attempting to improve the safety and efficacy of drugs is to use a relatively new area of research known as nanotherapeutics to target drug delivery just to the cells that need it.

Professor Sir Mark Welland is Head of the Electrical Engineering Division at Cambridge. In recent years, his research has focused on nanotherapeutics, working in collaboration with clinicians and industry to develop better, safer drugs. He and his colleagues don’t design new drugs; instead, they design and build smart packaging for existing drugs.

The University of Cambridge has produced a video interview (referencing a 1966 movie ‘Fantastic Voyage‘ in its title)  with Sir Mark Welland,

A June 23, 2017 University of Cambridge press release, which originated the news item, delves further into the topic of nanotherapeutics (nanomedicine) and nanomachines,

Nanotherapeutics come in many different configurations, but the easiest way to think about them is as small, benign particles filled with a drug. They can be injected in the same way as a normal drug, and are carried through the bloodstream to the target organ, tissue or cell. At this point, a change in the local environment, such as pH, or the use of light or ultrasound, causes the nanoparticles to release their cargo.

Nano-sized tools are increasingly being looked at for diagnosis, drug delivery and therapy. “There are a huge number of possibilities right now, and probably more to come, which is why there’s been so much interest,” says Welland. Using clever chemistry and engineering at the nanoscale, drugs can be ‘taught’ to behave like a Trojan horse, or to hold their fire until just the right moment, or to recognise the target they’re looking for.

“We always try to use techniques that can be scaled up – we avoid using expensive chemistries or expensive equipment, and we’ve been reasonably successful in that,” he adds. “By keeping costs down and using scalable techniques, we’ve got a far better chance of making a successful treatment for patients.”

In 2014, he and collaborators demonstrated that gold nanoparticles could be used to ‘smuggle’ chemotherapy drugs into cancer cells in glioblastoma multiforme, the most common and aggressive type of brain cancer in adults, which is notoriously difficult to treat. The team engineered nanostructures containing gold and cisplatin, a conventional chemotherapy drug. A coating on the particles made them attracted to tumour cells from glioblastoma patients, so that the nanostructures bound and were absorbed into the cancer cells.

Once inside, these nanostructures were exposed to radiotherapy. This caused the gold to release electrons that damaged the cancer cell’s DNA and its overall structure, enhancing the impact of the chemotherapy drug. The process was so effective that 20 days later, the cell culture showed no evidence of any revival, suggesting that the tumour cells had been destroyed.

While the technique is still several years away from use in humans, tests have begun in mice. Welland’s group is working with MedImmune, the biologics R&D arm of pharmaceutical company AstraZeneca, to study the stability of drugs and to design ways to deliver them more effectively using nanotechnology.

“One of the great advantages of working with MedImmune is they understand precisely what the requirements are for a drug to be approved. We would shut down lines of research where we thought it was never going to get to the point of approval by the regulators,” says Welland. “It’s important to be pragmatic about it so that only the approaches with the best chance of working in patients are taken forward.”

The researchers are also targeting diseases like tuberculosis (TB). With funding from the Rosetrees Trust, Welland and postdoctoral researcher Dr Íris da luz Batalha are working with Professor Andres Floto in the Department of Medicine to improve the efficacy of TB drugs.

Their solution has been to design and develop nontoxic, biodegradable polymers that can be ‘fused’ with TB drug molecules. As polymer molecules have a long, chain-like shape, drugs can be attached along the length of the polymer backbone, meaning that very large amounts of the drug can be loaded onto each polymer molecule. The polymers are stable in the bloodstream and release the drugs they carry when they reach the target cell. Inside the cell, the pH drops, which causes the polymer to release the drug.
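For a sense of how that kind of pH-triggered release is often modelled, here is a minimal sketch of my own. The sigmoidal form, the transition pH and the steepness are illustrative assumptions, not parameters from the Cambridge work,

```python
import math

def fraction_released(ph, ph_midpoint=5.5, steepness=4.0):
    """Illustrative sigmoidal model: fraction of drug released from a
    pH-sensitive polymer carrier as the local pH drops.

    ph_midpoint and steepness are hypothetical values chosen for
    illustration, not figures reported by the Cambridge group.
    """
    # Lower pH -> more release; modelled here with a logistic curve.
    return 1.0 / (1.0 + math.exp(steepness * (ph - ph_midpoint)))

# Bloodstream (~pH 7.4) versus an acidified intracellular compartment (~pH 5.0)
for ph in (7.4, 6.5, 5.0):
    print(f"pH {ph}: {fraction_released(ph):.0%} of the drug payload released")
```

The point of the toy model is simply that the carrier stays largely intact at bloodstream pH and lets go of most of its cargo once the pH drops inside the cell.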

In fact, the polymers worked so well for TB drugs that another of Welland’s postdoctoral researchers, Dr Myriam Ouberaï, has formed a start-up company, Spirea, which is raising funding to develop the polymers for use with oncology drugs. Ouberaï is hoping to establish a collaboration with a pharma company in the next two years.

“Designing these particles, loading them with drugs and making them clever so that they release their cargo in a controlled and precise way: it’s quite a technical challenge,” adds Welland. “The main reason I’m interested in the challenge is I want to see something working in the clinic – I want to see something working in patients.”

Could nanotechnology move beyond therapeutics to a time when nanomachines keep us healthy by patrolling, monitoring and repairing the body?

Nanomachines have long been a dream of scientists and public alike. But working out how to make them move has meant they’ve remained in the realm of science fiction.

Last year, however, Professor Jeremy Baumberg and colleagues at Cambridge and the University of Bath developed the world’s tiniest engine – just a few billionths of a metre [a few nanometres] in size. It’s biocompatible, cost-effective to manufacture, fast to respond and energy efficient.

The force per unit weight exerted by these ‘ANTs’ (short for ‘actuating nano-transducers’) is nearly a hundred times greater than that of any known device, motor or muscle. To make them, tiny charged particles of gold, bound together with a temperature-responsive polymer gel, are heated with a laser. As the polymer coatings expel water from the gel and collapse, a large amount of elastic energy is stored in a fraction of a second. On cooling, the particles spring apart and release energy.

The researchers hope to exploit the ANTs’ ability to produce very large forces relative to their weight to develop three-dimensional machines that can swim, pump in fluid to sense their environment, and are small enough to move through our bloodstream.

Working with Cambridge Enterprise, the University’s commercialisation arm, the team in Cambridge’s Nanophotonics Centre hopes to commercialise the technology for microfluidics bio-applications. The work is funded by the Engineering and Physical Sciences Research Council and the European Research Council.

“There’s a revolution happening in personalised healthcare, and for that we need sensors not just on the outside but on the inside,” explains Baumberg, who leads an interdisciplinary Strategic Research Network and Doctoral Training Centre focused on nanoscience and nanotechnology.

“Nanoscience is driving this. We are now building technology that allows us to even imagine these futures.”

I have featured Welland and his work here before and noted his penchant for wanting to insert nanodevices into humans as per this excerpt from an April 30, 2010 posting,
Getting back to the Cambridge University video, do go and watch it on the Nanowerk site. It is fun and very informative and approximately 17 mins. I noticed that they reused part of their Nokia morph animation (last mentioned on this blog here) and offered some thoughts from Professor Mark Welland, the team leader on that project. Interestingly, Welland was talking about yet another possibility. (Sometimes I think nano goes too far!) He was suggesting that we could have chips/devices in our brains that would allow us to think about phoning someone and an immediate connection would be made to that person. Bluntly—no. Just think what would happen if the marketers got access and I don’t even want to think what a person who suffers psychotic breaks (i.e., hearing voices) would do with even more input. Welland starts to talk at the 11 minute mark (I think). For an alternative take on the video and more details, visit Dexter Johnson’s blog, Nanoclast, for this posting. Hint, he likes the idea of a phone in the brain much better than I do.

I’m not sure what occasioned this latest press release and related video featuring Welland and nanotherapeutics; my best guess is that it was a slow news period.

IBM and a 5 nanometre chip

If this continues, they’re going to have to change the scale from nano to pico. IBM has announced work on a 5 nanometre (5nm) chip in a June 5, 2017 news item on Nanotechnology Now,

IBM (NYSE: IBM), its Research Alliance partners GLOBALFOUNDRIES and Samsung, and equipment suppliers have developed an industry-first process to build silicon nanosheet transistors that will enable 5 nanometer (nm) chips. The details of the process will be presented at the 2017 Symposia on VLSI Technology and Circuits conference in Kyoto, Japan. Less than two years after developing a 7nm test node chip with 20 billion transistors, scientists have paved the way for 30 billion switches on a fingernail-sized chip.

A June 5, 2017 IBM news release, which originated the news item, spells out some of the details about IBM’s latest breakthrough,

The resulting increase in performance will help accelerate cognitive computing [emphasis mine], the Internet of Things (IoT), and other data-intensive applications delivered in the cloud. The power savings could also mean that the batteries in smartphones and other mobile products could last two to three times longer than today’s devices, before needing to be charged.

Scientists working as part of the IBM-led Research Alliance at the SUNY Polytechnic Institute Colleges of Nanoscale Science and Engineering’s NanoTech Complex in Albany, NY achieved the breakthrough by using stacks of silicon nanosheets as the device structure of the transistor, instead of the standard FinFET architecture, which is the blueprint for the semiconductor industry up through 7nm node technology.

“For business and society to meet the demands of cognitive and cloud computing in the coming years, advancement in semiconductor technology is essential,” said Arvind Krishna, senior vice president, Hybrid Cloud, and director, IBM Research. “That’s why IBM aggressively pursues new and different architectures and materials that push the limits of this industry, and brings them to market in technologies like mainframes and our cognitive systems.”

The silicon nanosheet transistor demonstration, as detailed in the Research Alliance paper Stacked Nanosheet Gate-All-Around Transistor to Enable Scaling Beyond FinFET, and published by VLSI, proves that 5nm chips are possible, more powerful, and not too far off in the future.

Compared to the leading edge 10nm technology available in the market, a nanosheet-based 5nm technology can deliver 40 percent performance enhancement at fixed power, or 75 percent power savings at matched performance. This improvement enables a significant boost to meeting the future demands of artificial intelligence (AI) systems, virtual reality and mobile devices.

Building a New Switch

“This announcement is the latest example of the world-class research that continues to emerge from our groundbreaking public-private partnership in New York,” said Gary Patton, CTO and Head of Worldwide R&D at GLOBALFOUNDRIES. “As we make progress toward commercializing 7nm in 2018 at our Fab 8 manufacturing facility, we are actively pursuing next-generation technologies at 5nm and beyond to maintain technology leadership and enable our customers to produce a smaller, faster, and more cost efficient generation of semiconductors.”

IBM Research has explored nanosheet semiconductor technology for more than 10 years. This work is the first in the industry to demonstrate the feasibility to design and fabricate stacked nanosheet devices with electrical properties superior to FinFET architecture.

This same Extreme Ultraviolet (EUV) lithography approach used to produce the 7nm test node and its 20 billion transistors was applied to the nanosheet transistor architecture. Using EUV lithography, the width of the nanosheets can be adjusted continuously, all within a single manufacturing process or chip design. This adjustability permits the fine-tuning of performance and power for specific circuits – something not possible with today’s FinFET transistor architecture production, which is limited by its current-carrying fin height. Therefore, while FinFET chips can scale to 5nm, simply reducing the amount of space between fins does not provide increased current flow for additional performance.

“Today’s announcement continues the public-private model collaboration with IBM that is energizing SUNY-Polytechnic’s, Albany’s, and New York State’s leadership and innovation in developing next generation technologies,” said Dr. Bahgat Sammakia, Interim President, SUNY Polytechnic Institute. “We believe that enabling the first 5nm transistor is a significant milestone for the entire semiconductor industry as we continue to push beyond the limitations of our current capabilities. SUNY Poly’s partnership with IBM and Empire State Development is a perfect example of how Industry, Government and Academia can successfully collaborate and have a broad and positive impact on society.”

Part of IBM’s $3 billion, five-year investment in chip R&D (announced in 2014), the proof of nanosheet architecture scaling to a 5nm node continues IBM’s legacy of historic contributions to silicon and semiconductor innovation. They include the invention or first implementation of the single cell DRAM, the Dennard Scaling Laws, chemically amplified photoresists, copper interconnect wiring, Silicon on Insulator, strained engineering, multi core microprocessors, immersion lithography, high speed SiGe, High-k gate dielectrics, embedded DRAM, 3D chip stacking and Air gap insulators.

I last wrote about IBM and computer chips in a July 15, 2015 posting regarding their 7nm chip. You may want to scroll down approximately 55% of the way where I note research from MIT (Massachusetts Institute of Technology) about metal nanoparticles with unexpected properties possibly having an impact on nanoelectronics.

Getting back to IBM, they have produced a slick video about their 5nm chip breakthrough,

Meanwhile, Katherine Bourzac provides technical detail in a June 5, 2017 posting on the Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website), Note: A link has been removed,

Researchers at IBM believe the future of the transistor is in stacked nanosheets. …

Today’s state-of-the-art transistor is the finFET, named for the fin-like ridges of current-carrying silicon that project from the chip’s surface. The silicon fins are surrounded on their three exposed sides by a structure called the gate. The gate switches the flow of current on, and prevents electrons from leaking out when the transistor is off. This design is expected to last from this year’s bleeding-edge process technology, the “10-nanometer” node, through the next node, 7 nanometers. But any smaller, and these transistors will become difficult to switch off: electrons will leak out, even with the three-sided gates.

So the semiconductor industry has been working on alternatives for the upcoming 5 nanometer node. One popular idea is to use lateral silicon nanowires that are completely surrounded by the gate, preventing electron leaks and saving power. This design is called “gate all around.” IBM’s new design is a variation on this. In their test chips, each transistor is made up of three stacked horizontal sheets of silicon, each only a few nanometers thick and completely surrounded by a gate.

Why a sheet instead of a wire? Huiming Bu, director of silicon integration and devices at IBM, says nanosheets can bring back one of the benefits of pre-finFET, planar designs. Designers used to be able to vary the width of a transistor to prioritize fast operations or energy efficiency. Varying the amount of silicon in a finFET transistor is not practicable because it would mean making some fins taller and others shorter. Fins must all be the same height due to manufacturing constraints, says Bu.

IBM’s nanosheets can range from 8 to 50 nanometers in width. “Wider gives you better performance but takes more power, smaller width relaxes performance but reduces power use,” says Bu. This will allow circuit designers to pick and choose what they need, whether they are making a power efficient mobile chip processor or designing a bank of SRAM memory. “We are bringing flexibility back to the designers,” he says.

The test chips have 30 billion transistors. …

It was a struggle trying to edit Bourzac’s posting with its good detail and clear writing. I encourage you to read it (June 5, 2017 posting) in its entirety.
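To make Bu’s point about tunable sheet width a little more concrete, here’s a rough toy model of my own. The constants are made up for illustration and are not IBM process data; only the 8–50 nm sheet range and the three-sheet stack come from the articles quoted above,

```python
# Toy comparison of design flexibility: a finFET's drive width comes in
# integer multiples of a fixed fin, while a stacked-nanosheet transistor's
# effective width can be tuned continuously (IBM quotes 8-50 nm sheets).
# The per-fin width below is a hypothetical placeholder, not process data.

FIN_EFFECTIVE_WIDTH_NM = 30.0      # hypothetical effective width per fin
SHEETS_PER_STACK = 3               # three stacked sheets, per the article

def finfet_width(num_fins: int) -> float:
    """Effective channel width available to a finFET designer (quantized)."""
    return num_fins * FIN_EFFECTIVE_WIDTH_NM

def nanosheet_width(sheet_width_nm: float) -> float:
    """Effective channel width for a nanosheet stack (continuously tunable)."""
    if not 8.0 <= sheet_width_nm <= 50.0:
        raise ValueError("sheet width outside the 8-50 nm range cited by IBM")
    return SHEETS_PER_STACK * sheet_width_nm

print([finfet_width(n) for n in (1, 2, 3)])                      # coarse steps only
print(nanosheet_width(8.0), nanosheet_width(33.7), nanosheet_width(50.0))
```

The finFET designer gets a short menu of widths; the nanosheet designer can dial in anything in between, which is the “flexibility back to the designers” Bu is describing.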

As for where this drive downwards to the ‘ever smaller’ is going, there’s Dexter Johnson’s June 29, 2017 posting about another IBM team’s research on his Nanoclast blog on the IEEE website (Note: Links have been removed),

There have been increasing signs coming from the research community that carbon nanotubes are beginning to step up to the challenge of offering a real alternative to silicon-based complementary metal-oxide semiconductor (CMOS) transistors.

Now, researchers at IBM Thomas J. Watson Research Center have advanced carbon nanotube-based transistors another step toward meeting the demands of the International Technology Roadmap for Semiconductors (ITRS) for the next decade. The IBM researchers have fabricated a p-channel transistor based on carbon nanotubes that takes up less than half the space of leading silicon technologies while operating at a lower voltage.

In research described in the journal Science, the IBM scientists used a carbon nanotube p-channel to reduce the transistor footprint; their transistor confines all components to 40 square nanometers [emphasis mine], an ITRS roadmap benchmark for ten years out.

One of the keys to being able to reduce the transistor to such a small size is the use of the carbon nanotube as the channel in place of silicon. The nanotube is only 1 nanometer thick. Such thinness offers a significant advantage in electrostatics, so that it’s possible to reduce the device gate length to 10 nanometers without seeing the device performance adversely affected by short-channel effects. An additional benefit of the nanotubes is that the electrons travel much faster, which contributes to a higher level of device performance.
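For a rough sense of what a 40-square-nanometre footprint implies, here’s a quick back-of-the-envelope calculation of my own. It ignores interconnect, spacing and every other overhead, so treat it as an idealized upper bound rather than anything projected in the paper,

```python
# Back-of-the-envelope density implied by a 40 nm^2 transistor footprint.
# Ignores interconnect, spacing and other overheads, so this is an
# idealized upper bound for illustration only, not a projection.

footprint_nm2 = 40.0
mm2_in_nm2 = (1e6) ** 2            # 1 mm = 1e6 nm, so 1 mm^2 = 1e12 nm^2
transistors_per_mm2 = mm2_in_nm2 / footprint_nm2
print(f"~{transistors_per_mm2:.1e} transistors per mm^2 (idealized)")
```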

Happy reading!

Atomic force microscope (AFM) shrunk down to a dime-sized device?

Before getting to the announcement, here’s a little background from Dexter Johnson’s Feb. 21, 2017 posting on his NanoClast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website; Note: Links have been removed),

Ever since the 1980s, when Gerd Binnig of IBM first heard that “beautiful noise” made by the tip of the first scanning tunneling microscope (STM) dragging across the surface of an atom, and he later developed the atomic force microscope (AFM), these microscopy tools have been the bedrock of nanotechnology research and development.

AFMs have continued to evolve over the years, and at one time, IBM even looked into using them as the basis of a memory technology in the company’s Millipede project. Despite all this development, AFMs have remained bulky and expensive devices, costing as much as $50,000 [or more].

Now, here’s the announcement in a Feb. 15, 2017 news item on Nanowerk,

Researchers at The University of Texas at Dallas have created an atomic force microscope on a chip, dramatically shrinking the size — and, hopefully, the price tag — of a high-tech device commonly used to characterize material properties.

“A standard atomic force microscope is a large, bulky instrument, with multiple control loops, electronics and amplifiers,” said Dr. Reza Moheimani, professor of mechanical engineering at UT Dallas. “We have managed to miniaturize all of the electromechanical components down onto a single small chip.”

A Feb. 15, 2017 University of Texas at Dallas news release, which originated the news item, provides more detail,

An atomic force microscope (AFM) is a scientific tool that is used to create detailed three-dimensional images of the surfaces of materials, down to the nanometer scale — that’s roughly on the scale of individual molecules.

The basic AFM design consists of a tiny cantilever, or arm, that has a sharp tip attached to one end. As the apparatus scans back and forth across the surface of a sample, or the sample moves under it, the interactive forces between the sample and the tip cause the cantilever to move up and down as the tip follows the contours of the surface. Those movements are then translated into an image.
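In case it helps to picture how those tip movements become an image, here’s a tiny raster-scan sketch of my own; the ‘surface’ function is just a made-up stand-in for a real sample,

```python
import math

def surface_height(x_nm: float, y_nm: float) -> float:
    """Stand-in sample topography (a gentle sinusoidal bump pattern)."""
    return 2.0 * math.sin(x_nm / 10.0) * math.cos(y_nm / 10.0)

def raster_scan(size_nm: float = 100.0, pixels: int = 8):
    """Sweep the tip back and forth and record its height at each pixel,
    mimicking how cantilever deflections are assembled into an image."""
    step = size_nm / pixels
    image = []
    for row in range(pixels):
        line = [surface_height(col * step, row * step) for col in range(pixels)]
        image.append(line)
    return image

for line in raster_scan():
    print(" ".join(f"{h:5.1f}" for h in line))
```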

“An AFM is a microscope that ‘sees’ a surface kind of the way a visually impaired person might, by touching. You can get a resolution that is well beyond what an optical microscope can achieve,” said Moheimani, who holds the James Von Ehr Distinguished Chair in Science and Technology in the Erik Jonsson School of Engineering and Computer Science. “It can capture features that are very, very small.”

The UT Dallas team created its prototype on-chip AFM using a microelectromechanical systems (MEMS) approach.

“Classic examples of MEMS technology are the accelerometers and gyroscopes found in smartphones,” said Dr. Anthony Fowler, a research scientist in Moheimani’s Laboratory for Dynamics and Control of Nanosystems and one of the article’s co-authors. “These used to be big, expensive, mechanical devices, but using MEMS technology, accelerometers have shrunk down onto a single chip, which can be manufactured for just a few dollars apiece.”

The MEMS-based AFM is about 1 square centimeter in size, or a little smaller than a dime. It is attached to a small printed circuit board, about half the size of a credit card, which contains circuitry, sensors and other miniaturized components that control the movement and other aspects of the device.

Conventional AFMs operate in various modes. Some map out a sample’s features by maintaining a constant force as the probe tip drags across the surface, while others do so by maintaining a constant distance between the two.

“The problem with using a constant height approach is that the tip is applying varying forces on a sample all the time, which can damage a sample that is very soft,” Fowler said. “Or, if you are scanning a very hard surface, you could wear down the tip.”

The MEMS-based AFM operates in “tapping mode,” which means the cantilever and tip oscillate up and down perpendicular to the sample, and the tip alternately contacts then lifts off from the surface. As the probe moves back and forth across a sample material, a feedback loop maintains the height of that oscillation, ultimately creating an image.

“In tapping mode, as the oscillating cantilever moves across the surface topography, the amplitude of the oscillation wants to change as it interacts with the sample,” said Dr. Mohammad Maroufi, a research associate in mechanical engineering and co-author of the paper. “This device creates an image by maintaining the amplitude of oscillation.”
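The feedback idea is easier to see in a few lines of code: a simple integral controller nudges the tip height so the measured amplitude stays at its setpoint, and the record of those height corrections is the image. This is a minimal sketch with an invented plant model and gain, not the UT Dallas controller,

```python
def measured_amplitude(z_height: float, sample_height: float,
                       free_amplitude: float = 10.0) -> float:
    """Crude plant model: the closer the tip sits to the surface, the more
    the oscillation amplitude is damped. Purely illustrative."""
    gap = max(z_height - sample_height, 0.0)
    return min(free_amplitude, gap)

def track_surface(profile, setpoint=7.0, gain=0.5, z_start=20.0):
    """Integral feedback: adjust z so the amplitude stays at the setpoint.
    The recorded z values trace the sample topography (plus a constant offset)."""
    z = z_start
    image = []
    for sample_height in profile:
        for _ in range(50):                     # let the loop settle at each pixel
            error = setpoint - measured_amplitude(z, sample_height)
            z += gain * error                   # amplitude too low -> tip too close -> raise z
        image.append(z)
    return image

print(track_surface([0.0, 1.0, 3.0, 2.0, 0.5]))
```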

Because conventional AFMs require lasers and other large components to operate, their use can be limited. They’re also expensive.

“An educational version can cost about $30,000 or $40,000, and a laboratory-level AFM can run $500,000 or more,” Moheimani said. “Our MEMS approach to AFM design has the potential to significantly reduce the complexity and cost of the instrument.

“One of the attractive aspects about MEMS is that you can mass produce them, building hundreds or thousands of them in one shot, so the price of each chip would only be a few dollars. As a result, you might be able to offer the whole miniature AFM system for a few thousand dollars.”

A reduced size and price tag also could expand the AFMs’ utility beyond current scientific applications.

“For example, the semiconductor industry might benefit from these small devices, in particular companies that manufacture the silicon wafers from which computer chips are made,” Moheimani said. “With our technology, you might have an array of AFMs to characterize the wafer’s surface to find micro-faults before the product is shipped out.”

The lab prototype is a first-generation device, Moheimani said, and the group is already working on ways to improve and streamline the fabrication of the device.

“This is one of those technologies where, as they say, ‘If you build it, they will come.’ We anticipate finding many applications as the technology matures,” Moheimani said.

In addition to the UT Dallas researchers, Michael Ruppert, a visiting graduate student from the University of Newcastle in Australia, was a co-author of the journal article. Moheimani was Ruppert’s doctoral advisor.

So, an AFM that can cost a laboratory as much as $500,000 has been shrunk to the size shown below and become far less expensive,

A MEMS-based atomic force microscope developed by engineers at UT Dallas is about 1 square centimeter in size (top center). Here it is attached to a small printed circuit board that contains circuitry, sensors and other miniaturized components that control the movement and other aspects of the device. Courtesy: University of Texas at Dallas

Of course, there’s still more work to be done as you’ll note when reading Dexter’s Feb. 21, 2017 posting where he features answers to questions he directed to the researchers.

Here’s a link to and a citation for the paper,

On-Chip Dynamic Mode Atomic Force Microscopy: A Silicon-on-Insulator MEMS Approach by Michael G. Ruppert, Anthony G. Fowler, Mohammad Maroufi, S. O. Reza Moheimani. IEEE Journal of Microelectromechanical Systems, Volume 26, Issue 1, Feb. 2017. DOI: 10.1109/JMEMS.2016.2628890 Date of Publication: 06 December 2016

This paper is behind a paywall.