Tag Archives: Moore’s Law

New ingredient for computers: water!

A July 25, 2019 news item on Nanowerk provides a description of Moore’s Law and some ‘watery’ research that may upend it,

Moore’s law – which says the number of components that could be etched onto the surface of a silicon wafer would double every two years – has been the subject of recent debate. The quicker pace of computing advancements in the past decade has led some experts to say Moore’s law, the brainchild of Intel co-founder Gordon Moore in the 1960s, no longer applies. Particularly of concern, next-generation computing devices require features smaller than 10 nanometers – driving unsustainable increases in fabrication costs.
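The doubling Moore described is simple compounding growth. A minimal sketch, using hypothetical round numbers purely for illustration:

```python
# Toy projection of Moore's law: transistor count doubling every two years.
# The starting count and year below are hypothetical, chosen for illustration.
def projected_transistors(start_count, start_year, year, doubling_period=2):
    """Return the projected transistor count for a given year."""
    doublings = (year - start_year) / doubling_period
    return start_count * 2 ** doublings

# Starting from a (hypothetical) 1 million transistors in 1990:
for year in (1990, 2000, 2010, 2020):
    print(year, f"{projected_transistors(1_000_000, 1990, year):,.0f}")
    # 1990 -> 1,000,000; 2000 -> 32,000,000;
    # 2010 -> 1,024,000,000; 2020 -> 32,768,000,000
```

Fifteen doublings in thirty years multiplies the count by more than 32,000, which is why even a small slowdown in the doubling period is treated as big news.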

Biology creates features at sub-10nm scales routinely, but they are often structured in ways that are not useful for applications like computing. A Purdue University group has found ways of transforming structures that occur naturally in cell membranes to create other architectures, like parallel 1nm-wide line segments, more applicable to computing.

Inspired by biological cell membranes, Purdue researchers in the Claridge Research Group have developed surfaces that act as molecular-scale blueprints for unpacking and aligning nanoscale components for next-generation computers. The secret ingredient? Water, in tiny amounts.

A July 25, 2019 Purdue University news release (also on EurekAlert), expands on the theme,

“Biology has an amazing tool kit for embedding chemical information in a surface,” said Shelley Claridge, a recently tenured faculty member in chemistry and biomedical engineering at Purdue, who leads a group of nanomaterials researchers. “What we’re finding is that these instructions can become even more powerful in nonbiological settings, where water is scarce.”

In work just published in Chem, sister journal to Cell, the group has found that stripes of lipids can unpack and order flexible gold nanowires with diameters of just 2 nm, over areas corresponding to many millions of molecules in the template surface.

“The real surprise was the importance of water,” Claridge said. “Your body is mostly water, so the molecules in your cell membranes depend on it to function. Even after we transform the membrane structure in a way that’s very nonbiological and dry it out, these molecules can pull enough water out of dry winter air to do their job.”

Their work aligns with Purdue’s Giant Leaps celebration, celebrating the global advancements in sustainability as part of Purdue’s 150th anniversary. Sustainability is one of the four themes of the yearlong celebration’s Ideas Festival, designed to showcase Purdue as an intellectual center solving real-world issues.

The research team is working with the Purdue Research Foundation Office of Technology Commercialization to patent their work. They are looking for partners for continued research and to take the technology to market. [emphasis mine]

I wonder how close they are to taking this work to market. Usually they say it will be five to 10 years but perhaps we’ll see water-based computers in the near future. In the meantime, here’s a link to and a citation for the paper,

1-nm-Wide Hydrated Dipole Arrays Regulate AuNW Assembly on Striped Monolayers in Nonpolar Solvent by Ashlin G. Porter, Tianhong Ouyang, Tyler R. Hayes, John Biechele-Speziale, Shane R. Russell, Shelley A. Claridge. Chem DOI: https://doi.org/10.1016/j.chempr.2019.07.002 Published online: July 25, 2019

This paper is behind a paywall.

Making nanoscale transistor chips out of thin air—sort of

Caption: The nano-gap transistors operating in air. As gaps become smaller than the mean-free path of electrons in air, there is ballistic electron transport. Credit: RMIT University

A November 19, 2018 news item on Nanowerk describes the ‘airy’ work (Note: A link has been removed),

Researchers at RMIT University [Australia] have engineered a new type of transistor, the building block for all electronics. Instead of sending electrical currents through silicon, these transistors send electrons through narrow air gaps, where they can travel unimpeded as if in space.

The device unveiled in the materials science journal Nano Letters (“Metal–Air Transistors: Semiconductor-free field-emission air-channel nanoelectronics”) eliminates the use of any semiconductor at all, making it faster and less prone to heating up.

A November 19, 2018 RMIT University news release on EurekAlert, which originated the news item, describes the work and possibilities in more detail,

Lead author and PhD candidate in RMIT’s Functional Materials and Microsystems Research Group, Ms Shruti Nirantar, said this promising proof-of-concept design for nanochips, a combination of metal and air gaps, could revolutionise electronics.

“Every computer and phone has millions to billions of electronic transistors made from silicon, but this technology is reaching its physical limits where the silicon atoms get in the way of the current flow, limiting speed and causing heat,” Nirantar said.

“Our air channel transistor technology has the current flowing through air, so there are no collisions to slow it down and no resistance in the material to produce heat.”

The power of computer chips – or number of transistors squeezed onto a silicon chip – has increased on a predictable path for decades, roughly doubling every two years. But this rate of progress, known as Moore’s Law, has slowed in recent years as engineers struggle to make transistor parts, which are already smaller than the tiniest viruses, smaller still.

Nirantar says their research is a promising way forward for nanoelectronics in response to the limitations of silicon-based electronics.

“This technology simply takes a different pathway to the miniaturisation of a transistor in an effort to uphold Moore’s Law for several more decades,” Nirantar said.

Research team leader Associate Professor Sharath Sriram said the design solved a major flaw in traditional solid channel transistors – they are packed with atoms – which meant electrons passing through them collided, slowed down and wasted energy as heat.

“Imagine walking on a densely crowded street in an effort to get from point A to B. The crowd slows your progress and drains your energy,” Sriram said.

“Travelling in a vacuum on the other hand is like an empty highway where you can drive faster with higher energy efficiency.”

But while this concept is obvious, vacuum packaging solutions around transistors to make them faster would also make them much bigger, so they are not viable.

“We address this by creating a nanoscale gap between two metal points. The gap is only a few tens of nanometers, or 50,000 times smaller than the width of a human hair, but it’s enough to fool electrons into thinking that they are travelling through a vacuum and re-create a virtual outer-space for electrons within the nanoscale air gap,” he said.
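The “mean-free path” mentioned in the image caption can be estimated from kinetic theory. A back-of-envelope sketch using the standard textbook formula (the molecular diameter below is an approximate value for air, and electrons, being far smaller and faster than gas molecules, travel farther still between collisions):

```python
import math

# Kinetic-theory estimate of the mean free path of molecules in air:
#   lambda = k*T / (sqrt(2) * pi * d^2 * p)
k = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0          # room temperature, K
p = 101325.0       # atmospheric pressure, Pa
d = 3.7e-10        # approximate effective diameter of an air molecule, m

mfp = k * T / (math.sqrt(2) * math.pi * d**2 * p)
print(f"molecular mean free path ~ {mfp * 1e9:.0f} nm")
```

The result comes out in the tens of nanometres, which is why a gap of “a few tens of nanometers” is enough for electrons to cross without colliding with air molecules, the ballistic transport the caption describes.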

The nanoscale device is designed to be compatible with modern industry fabrication and development processes. It also has applications in space – both as electronics resistant to radiation and to use electron emission for steering and positioning ‘nano-satellites’.

“This is a step towards an exciting technology which aims to create something out of nothing to significantly increase speed of electronics and maintain pace of rapid technological progress,” Sriram said.

Here’s a link to and a citation for the paper,

Metal–Air Transistors: Semiconductor-free field-emission air-channel nanoelectronics by Shruti Nirantar, Taimur Ahmed, Guanghui Ren, Philipp Gutruf, Chenglong Xu, Madhu Bhaskaran, Sumeet Walia, and Sharath Sriram. Nano Lett. DOI: 10.1021/acs.nanolett.8b02849 Publication Date (Web): November 16, 2018

Copyright © 2018 American Chemical Society

This paper is behind a paywall.

7nm (nanometre) chip shakeup

From time to time I check out the latest on attempts to shrink computer chips. In my July 11, 2014 posting I noted IBM’s announcement about developing a 7nm computer chip and later, in my July 15, 2015 posting, I noted IBM’s announcement of a working 7nm chip (from a July 9, 2015 IBM news release, “The breakthrough, accomplished in partnership with GLOBALFOUNDRIES and Samsung at SUNY Polytechnic Institute’s Colleges of Nanoscale Science and Engineering (SUNY Poly CNSE), could result in the ability to place more than 20 billion tiny switches — transistors — on the fingernail-sized chips that power everything from smartphones to spacecraft.”)

I’m not sure what happened to the IBM/GlobalFoundries/Samsung partnership, but GlobalFoundries recently announced that it will no longer be working on 7nm chips. From an August 27, 2018 GlobalFoundries news release,

GLOBALFOUNDRIES [GF] today announced an important step in its transformation, continuing the trajectory launched with the appointment of Tom Caulfield as CEO earlier this year. In line with the strategic direction Caulfield has articulated, GF is reshaping its technology portfolio to intensify its focus on delivering truly differentiated offerings for clients in high-growth markets.

GF is realigning its leading-edge FinFET roadmap to serve the next wave of clients that will adopt the technology in the coming years. The company will shift development resources to make its 14/12nm FinFET platform more relevant to these clients, delivering a range of innovative IP and features including RF, embedded memory, low power and more. To support this transition, GF is putting its 7nm FinFET program on hold indefinitely [emphasis mine] and restructuring its research and development teams to support its enhanced portfolio initiatives. This will require a workforce reduction, however a significant number of top technologists will be redeployed on 14/12nm FinFET derivatives and other differentiated offerings.

I tried to find a definition for FinFET, but the reference to a MOSFET and multi-gate transistors was too much incomprehensible information packed into a tight space; see the FinFET Wikipedia entry for more, if you dare.

Getting back to the 7nm chip issue, Samuel K. Moore (I don’t think he’s related to the Moore of Moore’s law) wrote an Aug. 28, 2018 posting on the Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) which provides some insight (Note: Links have been removed),

In a major shift in strategy, GlobalFoundries is halting its development of next-generation chipmaking processes. It had planned to move to the so-called 7-nm node, then begin to use extreme-ultraviolet lithography (EUV) to make that process cheaper. From there, it planned to develop even more advanced lithography that would allow for 5- and 3-nanometer nodes. Despite having installed at least one EUV machine at its Fab 8 facility in Malta, N.Y., all those plans are now on indefinite hold, the company announced Monday.

The move leaves only three companies reaching for the highest rungs of the Moore’s Law ladder: Intel, Samsung, and TSMC.

It’s a huge turnabout for GlobalFoundries. …

GlobalFoundries’ rationale for the move is that there are not enough customers that need bleeding-edge 7-nm processes to make it profitable. “While the leading edge gets most of the headlines, fewer customers can afford the transition to 7 nm and finer geometries,” said Samuel Wang, research vice president at Gartner, in a GlobalFoundries press release.

“The vast majority of today’s fabless [emphasis mine] customers are looking to get more value out of each technology generation to leverage the substantial investments required to design into each technology node,” explained GlobalFoundries CEO Tom Caulfield in a press release. “Essentially, these nodes are transitioning to design platforms serving multiple waves of applications, giving each node greater longevity. This industry dynamic has resulted in fewer fabless clients designing into the outer limits of Moore’s Law. We are shifting our resources and focus by doubling down on our investments in differentiated technologies across our entire portfolio that are most relevant to our clients in growing market segments.”

(The dynamic Caulfield describes is something the U.S. Defense Advanced Research Projects Agency is working to disrupt with its $1.5-billion Electronics Resurgence Initiative. Darpa’s [DARPA] partners are trying to collapse the cost of design and allow older process nodes to keep improving by using 3D technology.)

Fabless manufacturing is where fabrication is outsourced and the company of record focuses on chip design and marketing, according to the Fabless manufacturing Wikipedia entry.

Roland Moore-Colyer (I don’t think he’s related to the Moore of Moore’s law either) has written an August 28, 2018 article for theinquirer.net which also explores this latest news from GlobalFoundries (Note: Links have been removed),

EVER PREPPED A SPREAD for a party to then have less than half the people you were expecting show up? That’s probably how GlobalFoundries [sic] feels at the moment.

The chip manufacturer, which was once part of AMD, had a fabrication process geared up for 7-nanometre chips which its customers – including AMD and Qualcomm – were expected to adopt.

But AMD has confirmed that it’s decided to move its 7nm GPU production to TSMC, and Intel is still stuck trying to make chips based on 10nm fabrication.

Arguably, this could mark a stymieing of innovation and cutting-edge designs for chips in the near future. But with processors like AMD’s Threadripper 2990WX overclocked to run at 6GHz across all its 32 cores, in the real world PC fans have no need to worry about consumer chips running out of puff anytime soon.

That’s all folks.

Maybe that’s not all

Steve Blank in a Sept. 10, 2018 posting on the Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) provides some provocative commentary on the Global Foundries announcement (Note: A link has been removed),

For most of our lives, the idea that computers and technology would get better, faster, and cheaper every year was as assured as the sun rising every morning. The story “GlobalFoundries Halts 7-nm Chip Development” doesn’t sound like the end of that era, but for you and anyone who uses an electronic device, it most certainly is.

Technology innovation is going to take a different direction.

This story just goes on and on

There was a new development, according to a Sept. 12, 2018 posting on the Nanoclast blog by, again, Samuel K. Moore (Note: Links have been removed),

At an event today [Sept. 12, 2018], Apple executives said that the new iPhone Xs and Xs Max will contain the first smartphone processor to be made using 7 nm manufacturing technology, the most advanced process node. Huawei made the same claim, to less fanfare, late last month and it’s unclear who really deserves the accolades. If anybody does, it’s TSMC, which manufactures both chips.

TSMC went into volume production with 7-nm tech in April, and rival Samsung is moving toward commercial 7-nm production later this year or in early 2019. GlobalFoundries recently abandoned its attempts to develop a 7 nm process, reasoning that the multibillion-dollar investment would never pay for itself. And Intel announced delays in its move to its next manufacturing technology, which it calls a 10-nm node but which may be equivalent to others’ 7-nm technology.

There’s a certain ‘soap opera’ quality to this with all the twists and turns.

More memory, less space and a walk down the cryptocurrency road

Libraries, archives, records management, oral history, etc.: there are many institutions and names for how we manage collective and personal memory. You might call it a peculiarly human obsession stretching back into antiquity. For example, there’s the Library of Alexandria (Wikipedia entry), founded in the third, or possibly second, century BCE (before the common era) and reputed to store all the knowledge in the world. It was destroyed, although accounts differ as to when and how, but its loss remains a potent reminder of memory’s fragility.

These days, the technology community is terribly concerned with storing ever more bits of data on materials that are reaching their limits for storage. I have news of a possible solution, an interview of sorts with the researchers working on this new technology, and some very recent research into policies for cryptocurrency mining and development. That bit about cryptocurrency makes more sense when you read the response to one of the interview questions.

Memory

It seems University of Alberta researchers may have found a way to increase memory exponentially, from a July 23, 2018 news item on ScienceDaily,

The most dense solid-state memory ever created could soon exceed the capabilities of current computer storage devices by 1,000 times, thanks to a new technique scientists at the University of Alberta have perfected.

“Essentially, you can take all 45 million songs on iTunes and store them on the surface of one quarter,” said Roshan Achal, PhD student in Department of Physics and lead author on the new research. “Five years ago, this wasn’t even something we thought possible.”

A July 23, 2018 University of Alberta news release (also on EurekAlert) by Jennifer-Anne Pascoe, which originated the news item, provides more information,

Previous discoveries were stable only at cryogenic conditions, meaning this new finding puts society light years closer to meeting the need for more storage for the current and continued deluge of data. One of the most exciting features of this memory is that it’s road-ready for real-world temperatures, as it can withstand normal use and transportation beyond the lab.

“What is often overlooked in the nanofabrication business is actual transportation to an end user, that simply was not possible until now given temperature restrictions,” continued Achal. “Our memory is stable well above room temperature and precise down to the atom.”

Achal explained that immediate applications will be data archival. Next steps will be increasing readout and writing speeds, meaning even more flexible applications.

More memory, less space

Achal works with University of Alberta physics professor Robert Wolkow, a pioneer in the field of atomic-scale physics. Wolkow perfected the art of the science behind nanotip technology, which, thanks to Wolkow and his team’s continued work, has now reached a tipping point, meaning scaling up atomic-scale manufacturing for commercialization.

“With this last piece of the puzzle now in-hand, atom-scale fabrication will become a commercial reality in the very near future,” said Wolkow. Wolkow’s Spin-off [sic] company, Quantum Silicon Inc., is hard at work on commercializing atom-scale fabrication for use in all areas of the technology sector.

To demonstrate the new discovery, Achal, Wolkow, and their fellow scientists not only fabricated the world’s smallest maple leaf, they also encoded the entire alphabet at a density of 138 terabytes per square inch, roughly equivalent to writing 350,000 letters across a grain of rice. For a playful twist, Achal also encoded music as an atom-sized song, the first 24 notes of which will make any video-game player of the 80s and 90s nostalgic for yesteryear but excited for the future of technology and society.
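A rough sanity check on the “all of iTunes on a quarter” claim, taking the reported density as 138 terabytes per square inch; the coin dimensions and average song size below are my own approximate assumptions:

```python
import math

# Back-of-envelope: how much would one face of a quarter hold at the
# reported density? Coin size and song size are rough assumptions.
density_tb_per_sq_inch = 138      # reported storage density
quarter_diameter_mm = 23.88       # approximate diameter of a quarter
song_size_mb = 4                  # assumed average compressed song

radius_inch = (quarter_diameter_mm / 2) / 25.4
area_sq_inch = math.pi * radius_inch ** 2
capacity_tb = density_tb_per_sq_inch * area_sq_inch
songs = capacity_tb * 1e6 / song_size_mb   # 1 TB = 1e6 MB

print(f"area ~ {area_sq_inch:.2f} sq in, capacity ~ {capacity_tb:.0f} TB")
print(f"~ {songs / 1e6:.0f} million songs")
```

One face works out to roughly 100 terabytes, or on the order of 25 million four-megabyte songs; with both faces, or smaller files, the tens-of-millions-of-songs claim is the right order of magnitude.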

As noted in the news release, there is an atom-sized song, which is available in this video,

As for the nano-sized maple leaf, I highlighted that bit of whimsy in a June 30, 2017 posting.

Here’s a link to and a citation for the paper,

Lithography for robust and editable atomic-scale silicon devices and memories by Roshan Achal, Mohammad Rashidi, Jeremiah Croshaw, David Churchill, Marco Taucer, Taleana Huff, Martin Cloutier, Jason Pitters, & Robert A. Wolkow. Nature Communications, volume 9, Article number: 2778 (2018) DOI: https://doi.org/10.1038/s41467-018-05171-y Published 23 July 2018

This paper is open access.

For interested parties, you can find Quantum Silicon (QSI) here. My Edmonton geography is all but nonexistent; still, it seems to me the company address on Saskatchewan Drive is a University of Alberta address. It’s also the address for the National Research Council of Canada. Perhaps this is a university/government spin-off company?

The ‘interview’

I sent some questions to the researchers at the University of Alberta who very kindly provided me with the following answers. Roshan Achal passed on one of the questions to his colleague Taleana Huff for her response. Both Achal and Huff are associated with QSI.

Unfortunately I could not find any pictures of all three researchers (Achal, Huff, and Wolkow) together.

Roshan Achal (left) used nanotechnology perfected by his PhD supervisor, Robert Wolkow (right) to create atomic-scale computer memory that could exceed the capacity of today’s solid-state storage drives by 1,000 times. (Photo: Faculty of Science)

(1) SHRINKING THE MANUFACTURING PROCESS TO THE ATOMIC SCALE HAS
ATTRACTED A LOT OF ATTENTION OVER THE YEARS, STARTING WITH SCIENCE
FICTION OR RICHARD FEYNMAN OR K. ERIC DREXLER, ETC. IN ANY EVENT, THE
ORIGINS ARE CONTESTED, SO I WON’T PUT YOU ON THE SPOT BY ASKING WHO
STARTED IT ALL; INSTEAD: HOW DID YOU GET STARTED?

I got started in this field about 6 years ago, when I undertook a MSc
with Dr. Wolkow here at the University of Alberta. Before that point, I
had only ever heard of a scanning tunneling microscope from what was
taught in my classes. I was aware of the famous IBM logo made up from
just a handful of atoms using this machine, but I didn’t know what
else could be done. Here, Dr. Wolkow introduced me to his line of
research, and I saw the immense potential for growth in this area and
decided to pursue it further. I had the chance to interact with and
learn from nanofabrication experts and gain the skills necessary to
begin playing around with my own techniques and ideas during my PhD.

(2) AS I UNDERSTAND IT, THESE ARE THE PIECES YOU’VE BEEN
WORKING ON: (1) THE TUNGSTEN MICROSCOPE TIP, WHICH MAKE[s] (2) THE SMALLEST
QUANTUM DOTS (SINGLE ATOMS OF SILICON), (3) THE AUTOMATION OF THE
QUANTUM DOT PRODUCTION PROCESS, AND (4) THE “MOST DENSE SOLID-STATE
MEMORY EVER CREATED.” WHAT’S MISSING FROM THE LIST AND IS THAT WHAT
YOU’RE WORKING ON NOW?

One of the things missing from the list, that we are currently working
on, is the ability to easily communicate (electrically) from the
macroscale (our world) to the nanoscale, without the use of a scanning
tunneling microscope. With this, we would be able to then construct
devices using the other pieces we’ve developed up to this point, and
then integrate them with more conventional electronics. This would bring
us yet another step closer to the realization of atomic-scale
electronics.

(3) PERHAPS YOU COULD CLARIFY SOMETHING FOR ME. USUALLY WHEN SOLID-STATE
MEMORY IS MENTIONED, THERE’S GREAT CONCERN ABOUT MOORE’S LAW. IS
THIS WORK GOING TO CREATE A NEW LAW? AND WHAT, IF ANYTHING, DOES
YOUR MEMORY DEVICE HAVE TO DO WITH QUANTUM COMPUTING?

That is an interesting question. With the density we’ve achieved,
there are not too many surfaces where atomic sites are more closely
spaced to allow for another factor of two improvement. In that sense, it
would be difficult to improve memory densities further using these
techniques alone. In order to continue Moore’s law, new techniques, or
storage methods would have to be developed to move beyond atomic-scale
storage.

The memory design itself does not have anything to do with quantum
computing, however, the lithographic techniques developed through our
work, may enable the development of certain quantum-dot-based quantum
computing schemes.

(4) THIS MAY BE A LITTLE OUT OF LEFT FIELD (OR FURTHER OUT THAN THE
OTHERS): COULD YOUR MEMORY DEVICE HAVE AN IMPACT ON THE
DEVELOPMENT OF CRYPTOCURRENCY AND BLOCKCHAIN? IF SO, WHAT MIGHT THAT
IMPACT BE?

I am not very familiar with these topics, however, co-author Taleana
Huff has provided some thoughts:

Taleana Huff (downloaded from https://ca.linkedin.com/in/taleana-huff)

“The memory, as we’ve designed it, might not have too much of an
impact in and of itself. Cryptocurrencies fall into two categories.
Proof of Work and Proof of Stake. Proof of Work relies on raw
computational power to solve a difficult math problem. If you solve it,
you get rewarded with a small amount of that coin. The problem is that
it can take a lot of power and energy for your computer to crunch
through that problem. Faster access to memory alone could perhaps
streamline small parts of this slightly, but it would be very slight.
Proof of Stake is already quite power efficient and wouldn’t really
have a drastic advantage from better faster computers.

Now, atomic-scale circuitry built using these new lithographic
techniques that we’ve developed, which could perform computations at
significantly lower energy costs, would be huge for Proof of Work coins.
One of the things holding bitcoin back, for example, is that mining it
is now consuming power on the order of the annual energy consumption
required by small countries. A more efficient way to mine while still
taking the same amount of time to solve the problem would make bitcoin
much more attractive as a currency.”
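Huff’s description of Proof of Work, burning computation on a puzzle that is hard to solve but trivial to verify, can be sketched as a toy hash puzzle. This is a deliberately tiny difficulty for illustration; Bitcoin’s real target is vastly harder and uses a different block format:

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Find a nonce such that sha256(block_data + nonce) begins with
    `difficulty` zero hex digits -- a toy stand-in for Proof of Work."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine("example block", difficulty=4)
# Verification is one hash; mining took ~16**4 attempts on average.
assert hashlib.sha256(f"example block{nonce}".encode()).hexdigest().startswith("0000")
print("winning nonce:", nonce)
```

The energy problem Huff and Truby point to comes from every miner on the network racing through this loop simultaneously, with only one winner’s work counting per block; everyone else’s electricity is simply discarded.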

Thank you to Roshan Achal and Taleana Huff for helping me to further explore the implications of their work with Dr. Wolkow.

Comments

As usual, after receiving the replies I have more questions, but these people have other things to do, so I’ll content myself with noting that there is something extraordinary in the fact that we can imagine a near future where atomic-scale manufacturing is possible and where, as Achal says, ” … storage methods would have to be developed to move beyond atomic-scale [emphasis mine] storage”. In decades past it was the stuff of science fiction or of theorists who didn’t have the tools to turn the idea into a reality. With Wolkow’s, Achal’s, Huff’s, and their colleagues’ work, atomic-scale manufacturing is attainable in the foreseeable future.

Hopefully we’ll be wiser than we have been in the past in how we deploy these new manufacturing techniques. Of course, before we need the wisdom, scientists, as Achal notes, need to find a new way to communicate between the macroscale and the nanoscale.

As for Huff’s comments about cryptocurrency and blockchain technology, I stumbled across this very recent research, from a July 31, 2018 Elsevier press release (also on EurekAlert),

A study [behind a paywall] published in Energy Research & Social Science warns that failure to lower the energy use by Bitcoin and similar Blockchain designs may prevent nations from reaching their climate change mitigation obligations under the Paris Agreement.

The study, authored by Jon Truby, PhD, Assistant Professor, Director of the Centre for Law & Development, College of Law, Qatar University, Doha, Qatar, evaluates the financial and legal options available to lawmakers to moderate blockchain-related energy consumption and foster a sustainable and innovative technology sector. Based on this rigorous review and analysis of the technologies, ownership models, and jurisdictional case law and practices, the article recommends an approach that imposes new taxes, charges, or restrictions to reduce demand by users, miners, and miner manufacturers who employ polluting technologies, and offers incentives that encourage developers to create less energy-intensive/carbon-neutral Blockchain.

“Digital currency mining is the first major industry developed from Blockchain, because its transactions alone consume more electricity than entire nations,” said Dr. Truby. “It needs to be directed towards sustainability if it is to realize its potential advantages.

“Many developers have taken no account of the environmental impact of their designs, so we must encourage them to adopt consensus protocols that do not result in high emissions. Taking no action means we are subsidizing high energy-consuming technology and causing future Blockchain developers to follow the same harmful path. We need to de-socialize the environmental costs involved while continuing to encourage progress of this important technology to unlock its potential economic, environmental, and social benefits,” explained Dr. Truby.

As a digital ledger that is accessible to, and trusted by all participants, Blockchain technology decentralizes and transforms the exchange of assets through peer-to-peer verification and payments. Blockchain technology has been advocated as being capable of delivering environmental and social benefits under the UN’s Sustainable Development Goals. However, Bitcoin’s system has been built in a way that is reminiscent of physical mining of natural resources – costs and efforts rise as the system reaches the ultimate resource limit and the mining of new resources requires increasing hardware resources, which consume huge amounts of electricity.

Putting this into perspective, Dr. Truby said, “the processes involved in a single Bitcoin transaction could provide electricity to a British home for a month – with the environmental costs socialized for private benefit.

“Bitcoin is here to stay, and so, future models must be designed without reliance on energy consumption so disproportionate on their economic or social benefits.”

The study evaluates various Blockchain technologies by their carbon footprints and recommends how to tax or restrict Blockchain types at different phases of production and use to discourage polluting versions and encourage cleaner alternatives. It also analyzes the legal measures that can be introduced to encourage technology innovators to develop low-emissions Blockchain designs. The specific recommendations include imposing levies to prevent path-dependent inertia from constraining innovation:

  • Registration fees collected by brokers from digital coin buyers.
  • “Bitcoin Sin Tax” surcharge on digital currency ownership.
  • Green taxes and restrictions on machinery purchases/imports (e.g. Bitcoin mining machines).
  • Smart contract transaction charges.

According to Dr. Truby, these findings may lead to new taxes, charges or restrictions, but could also lead to financial rewards for innovators developing carbon-neutral Blockchain.

The press release doesn’t fully reflect Dr. Truby’s thoughtfulness or the incentives he has suggested. It’s not all surcharges, taxes, and fees; some of his proposals constitute encouragement. Here’s a sample from the conclusion,

The possibilities of Blockchain are endless and incentivisation can help solve various climate change issues, such as through the development of digital currencies to fund climate finance programmes. This type of public-private finance initiative is envisioned in the Paris Agreement, and fiscal tools can incentivize innovators to design financially rewarding Blockchain technology that also achieves environmental goals. Bitcoin, for example, has various utilitarian intentions in its White Paper, which may or may not turn out to be as envisioned, but it would not have been such a success without investors seeking remarkable returns. Embracing such technology, and promoting a shift in behaviour with such fiscal tools, can turn the industry itself towards achieving innovative solutions for environmental goals.

I realize Wolkow et al. are not focused on cryptocurrency and blockchain technology per se, but as Huff notes in her reply, “… new lithographic techniques that we’ve developed, which could perform computations at significantly lower energy costs, would be huge for Proof of Work coins.”

Whether or not there are implications for cryptocurrencies, energy needs, climate change, etc., it’s the kind of innovative work being done by scientists at the University of Alberta that may have implications in fields far beyond the researchers’ original intentions, such as more efficient computation and data storage.

ETA Aug. 6, 2018: Dexter Johnson weighed in with an August 3, 2018 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website),

Researchers at the University of Alberta in Canada have developed a new approach to rewritable data storage technology by using a scanning tunneling microscope (STM) to remove and replace hydrogen atoms from the surface of a silicon wafer. If this approach realizes its potential, it could lead to a data storage technology capable of storing 1,000 times more data than today’s hard drives, up to 138 terabytes per square inch.

As a bit of background, Gerd Binnig and Heinrich Rohrer developed the first STM in 1981, for which they later received the Nobel Prize in physics. In the over 30 years since an STM first imaged an atom by exploiting a phenomenon known as tunneling—which causes electrons to jump from the surface atoms of a material to the tip of an ultrasharp electrode suspended a few angstroms above—the technology has become the backbone of so-called nanotechnology.
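
The angstrom-scale sensitivity mentioned above can be made concrete with the textbook one-dimensional barrier approximation, I ∝ exp(−2κd) with κ = √(2mφ)/ħ. This is a generic sketch, not a calculation from the paper; the 4.5 eV work function and the function name are my own assumed illustration.

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # joules per electronvolt

def tunneling_ratio(delta_d_m, work_function_ev=4.5):
    """Factor by which the tunneling current changes when the
    tip-sample gap shrinks by delta_d_m, using I ~ exp(-2*kappa*d)
    with kappa = sqrt(2*m*phi)/hbar (simple 1D barrier model)."""
    kappa = math.sqrt(2 * M_E * work_function_ev * EV) / HBAR
    return math.exp(2 * kappa * delta_d_m)

# moving the tip one angstrom (1e-10 m) closer boosts the current
# roughly ninefold, which is why STM gaps are quoted in angstroms
ratio = tunneling_ratio(1e-10)
```

That exponential dependence is what lets an STM resolve individual atoms: sub-angstrom height changes translate into easily measurable current changes.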

In addition to imaging the world on the atomic scale for the last thirty years, STMs have been experimented with as a potential data storage device. Last year, we reported on how IBM (where Binnig and Rohrer first developed the STM) used an STM in combination with an iron atom to serve as an electron-spin resonance sensor to read the magnetic pole of holmium atoms. The north and south poles of the holmium atoms served as the 0 and 1 of digital logic.

The Canadian researchers have taken a somewhat different approach to making an STM into a data storage device by automating a known technique that uses the ultrasharp tip of the STM to apply a voltage pulse above an atom to remove individual hydrogen atoms from the surface of a silicon wafer. Once the atom has been removed, there is a vacancy on the surface. These vacancies can be patterned on the surface to create devices and memories.
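
The 138-terabytes-per-square-inch figure quoted above can be roughly sanity-checked against silicon’s atomic spacing. This is back-of-envelope arithmetic of my own, not the researchers’ accounting:

```python
def area_per_bit_nm2(tb_per_sq_inch=138.0):
    """Area budget per stored bit, in nm^2, for a given areal density."""
    nm_per_inch = 2.54e7                 # 1 inch = 2.54 cm = 2.54e7 nm
    bits = tb_per_sq_inch * 1e12 * 8     # terabytes -> bits
    return nm_per_inch ** 2 / bits

# ~0.58 nm^2 per bit: about the footprint of a pair of atomic sites
# on Si(100), where neighboring positions sit ~0.384 nm apart --
# consistent with one hydrogen vacancy (plus spacing) encoding a bit
budget = area_per_bit_nm2()
```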

If you have the time, I recommend reading Dexter’s posting as he provides clear explanations, additional insight into the work, and more historical detail.

3-D integration of nanotechnologies on a single computer chip

By integrating nanomaterials, researchers have developed a new technique for building a 3D computer chip capable of handling today’s huge amounts of data. Weirdly, the first two paragraphs of a July 5, 2017 news item on Nanowerk do not convey the main point (Note: A link has been removed),

As embedded intelligence is finding its way into ever more areas of our lives, fields ranging from autonomous driving to personalized medicine are generating huge amounts of data. But just as the flood of data is reaching massive proportions, the ability of computer chips to process it into useful information is stalling.

Now, researchers at Stanford University and MIT have built a new chip to overcome this hurdle. The results are published today in the journal Nature (“Three-dimensional integration of nanotechnologies for computing and data storage on a single chip”), by lead author Max Shulaker, an assistant professor of electrical engineering and computer science at MIT. Shulaker began the work as a PhD student alongside H.-S. Philip Wong and his advisor Subhasish Mitra, professors of electrical engineering and computer science at Stanford. The team also included professors Roger Howe and Krishna Saraswat, also from Stanford.

This image helps to convey the main points,

Instead of relying on silicon-based devices, a new chip uses carbon nanotubes and resistive random-access memory (RRAM) cells. The two are built vertically over one another, making a new, dense 3-D computer architecture with interleaving layers of logic and memory. Courtesy MIT

As I have been quite impressed with their science writing, it was a bit surprising to find that the Massachusetts Institute of Technology (MIT) had issued a news release (news item) that didn’t follow the ‘rules’, i.e., cover as many of the journalistic questions (Who, What, Where, When, Why, and, sometimes, How) as possible in the first sentence/paragraph. This one is written more in the style of a magazine article, so the details take a while to emerge, from a July 5, 2017 MIT news release, which originated the news item,

Computers today comprise different chips cobbled together. There is a chip for computing and a separate chip for data storage, and the connections between the two are limited. As applications analyze increasingly massive volumes of data, the limited rate at which data can be moved between different chips is creating a critical communication “bottleneck.” And with limited real estate on the chip, there is not enough room to place them side-by-side, even as they have been miniaturized (a phenomenon known as Moore’s Law).

To make matters worse, the underlying devices, transistors made from silicon, are no longer improving at the historic rate that they have for decades.

The new prototype chip is a radical change from today’s chips. It uses multiple nanotechnologies, together with a new computer architecture, to reverse both of these trends.

Instead of relying on silicon-based devices, the chip uses carbon nanotubes, which are sheets of 2-D graphene formed into nanocylinders, and resistive random-access memory (RRAM) cells, a type of nonvolatile memory that operates by changing the resistance of a solid dielectric material. The researchers integrated over 1 million RRAM cells and 2 million carbon nanotube field-effect transistors, making the most complex nanoelectronic system ever made with emerging nanotechnologies.
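
The release’s one-line description of RRAM (a bit stored as the resistance state of a dielectric) can be sketched as a toy model. The class, resistance values, and threshold below are purely illustrative assumptions of mine, not the device physics from the Nature paper:

```python
class RRAMCell:
    """Toy resistive memory cell: a bit is encoded as a low- or
    high-resistance state of a solid dielectric (illustrative ohms)."""
    LOW_OHMS = 1e3    # "set": conductive path formed   -> logical 1
    HIGH_OHMS = 1e6   # "reset": conductive path broken -> logical 0

    def __init__(self):
        self.resistance = self.HIGH_OHMS  # nonvolatile; starts erased

    def write(self, bit):
        # a voltage pulse switches the dielectric's resistance state
        self.resistance = self.LOW_OHMS if bit else self.HIGH_OHMS

    def read(self, threshold_ohms=1e4):
        # sense the stored bit by comparing against a threshold
        return 1 if self.resistance < threshold_ohms else 0
```

Because reading and writing are just resistance operations, no refresh is needed, which is the “nonvolatile” contrast with DRAM drawn later in the release.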

The RRAM and carbon nanotubes are built vertically over one another, making a new, dense 3-D computer architecture with interleaving layers of logic and memory. By inserting ultradense wires between these layers, this 3-D architecture promises to address the communication bottleneck.

However, such an architecture is not possible with existing silicon-based technology, according to the paper’s lead author, Max Shulaker, who is a core member of MIT’s Microsystems Technology Laboratories. “Circuits today are 2-D, since building conventional silicon transistors involves extremely high temperatures of over 1,000 degrees Celsius,” says Shulaker. “If you then build a second layer of silicon circuits on top, that high temperature will damage the bottom layer of circuits.”

The key in this work is that carbon nanotube circuits and RRAM memory can be fabricated at much lower temperatures, below 200 C. “This means they can be built up in layers without harming the circuits beneath,” Shulaker says.

This provides several simultaneous benefits for future computing systems. “The devices are better: Logic made from carbon nanotubes can be an order of magnitude more energy-efficient compared to today’s logic made from silicon, and similarly, RRAM can be denser, faster, and more energy-efficient compared to DRAM,” Wong says, referring to a conventional memory known as dynamic random-access memory.

“In addition to improved devices, 3-D integration can address another key consideration in systems: the interconnects within and between chips,” Saraswat adds.

“The new 3-D computer architecture provides dense and fine-grained integration of computing and data storage, drastically overcoming the bottleneck from moving data between chips,” Mitra says. “As a result, the chip is able to store massive amounts of data and perform on-chip processing to transform a data deluge into useful information.”

To demonstrate the potential of the technology, the researchers took advantage of the ability of carbon nanotubes to also act as sensors. On the top layer of the chip they placed over 1 million carbon nanotube-based sensors, which they used to detect and classify ambient gases.

Due to the layering of sensing, data storage, and computing, the chip was able to measure each of the sensors in parallel, and then write directly into its memory, generating huge bandwidth, Shulaker says.

Three-dimensional integration is the most promising approach to continue the technology scaling path set forth by Moore’s Law, allowing an increasing number of devices to be integrated per unit volume, according to Jan Rabaey, a professor of electrical engineering and computer science at the University of California at Berkeley, who was not involved in the research.

“It leads to a fundamentally different perspective on computing architectures, enabling an intimate interweaving of memory and logic,” Rabaey says. “These structures may be particularly suited for alternative learning-based computational paradigms such as brain-inspired systems and deep neural nets, and the approach presented by the authors is definitely a great first step in that direction.”

“One big advantage of our demonstration is that it is compatible with today’s silicon infrastructure, both in terms of fabrication and design,” says Howe.

“The fact that this strategy is both CMOS [complementary metal-oxide-semiconductor] compatible and viable for a variety of applications suggests that it is a significant step in the continued advancement of Moore’s Law,” says Ken Hansen, president and CEO of the Semiconductor Research Corporation, which supported the research. “To sustain the promise of Moore’s Law economics, innovative heterogeneous approaches are required as dimensional scaling is no longer sufficient. This pioneering work embodies that philosophy.”

The team is working to improve the underlying nanotechnologies, while exploring the new 3-D computer architecture. For Shulaker, the next step is working with Massachusetts-based semiconductor company Analog Devices to develop new versions of the system that take advantage of its ability to carry out sensing and data processing on the same chip.

So, for example, the devices could be used to detect signs of disease by sensing particular compounds in a patient’s breath, says Shulaker.

“The technology could not only improve traditional computing, but it also opens up a whole new range of applications that we can target,” he says. “My students are now investigating how we can produce chips that do more than just computing.”

“This demonstration of the 3-D integration of sensors, memory, and logic is an exceptionally innovative development that leverages current CMOS technology with the new capabilities of carbon nanotube field-effect transistors,” says Sam Fuller, CTO emeritus of Analog Devices, who was not involved in the research. “This has the potential to be the platform for many revolutionary applications in the future.”

This work was funded by the Defense Advanced Research Projects Agency [DARPA], the National Science Foundation, Semiconductor Research Corporation, STARnet SONIC, and member companies of the Stanford SystemX Alliance.

Here’s a link to and a citation for the paper,

Three-dimensional integration of nanotechnologies for computing and data storage on a single chip by Max M. Shulaker, Gage Hills, Rebecca S. Park, Roger T. Howe, Krishna Saraswat, H.-S. Philip Wong, & Subhasish Mitra. Nature 547, 74–78 (06 July 2017) doi:10.1038/nature22994 Published online 05 July 2017

This paper is behind a paywall.

Book announcement: Atomistic Simulation of Quantum Transport in Nanoelectronic Devices

For anyone who’s curious about where we go after creating chips at the 7nm size, this may be the book for you. Here’s more from a July 27, 2016 news item on Nanowerk,

In the year 2015, Intel, Samsung and TSMC began to mass-market the 14nm technology called FinFETs. In the same year, IBM, working with Global Foundries, Samsung, SUNY, and various equipment suppliers, announced their success in fabricating 7nm devices. A 7nm silicon channel is about 50 atomic layers and these devices are truly atomic! It is clear that we have entered an era of atomic scale transistors. How do we model the carrier transport in such atomic scale devices?
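
The “about 50 atomic layers” claim checks out against silicon’s lattice geometry: the interlayer spacing along Si(100) is a quarter of the 0.543 nm lattice constant. A quick arithmetic sketch (my own check, not from the book):

```python
SI_LATTICE_NM = 0.543            # silicon cubic lattice constant
LAYER_NM = SI_LATTICE_NM / 4     # Si(100) interlayer spacing ~0.136 nm

def atomic_layers(channel_nm):
    """How many Si(100) atomic layers span a channel of given width."""
    return channel_nm / LAYER_NM

n_layers = atomic_layers(7.0)    # ~51 layers, matching "about 50"
```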

One way is to improve existing device models by including more and more parameters. This is called the top-down approach. However, as device sizes shrink, the number of parameters grows rapidly, making the top-down approach more and more sophisticated and challenging. Most importantly, to continue Moore’s law, electronic engineers are exploring new electronic materials and new operating mechanisms. These efforts are beyond the scope of well-established device models — hence significant changes are necessary to the top-down approach.

An alternative way is called the bottom-up approach. The idea is to build up nanoelectronic devices atom by atom on a computer, and predict the transport behavior from first principles. By doing so, one is allowed to go inside atomic structures and see what happens from there. The elegance of the approach comes from its unification and generality. Everything comes out naturally from the very basic principles of quantum mechanics and nonequilibrium statistics. The bottom-up approach is complementary to the top-down approach, and is extremely useful for testing innovative ideas of future technologies.
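
At its very simplest, the bottom-up recipe described above (atomic structure in, electric current out) reduces to a Green’s-function/Landauer calculation. Here is a minimal sketch for a single site bridging two semi-infinite one-dimensional tight-binding leads, using standard textbook formulas; this is emphatically not NanoDsim’s implementation, and the function name and parameters are my own:

```python
import math

def transmission(E, eps0=0.0, t=1.0):
    """Landauer transmission T(E) through one site coupled to two
    semi-infinite 1D tight-binding leads (onsite energy 0, hopping t).
    Inside the lead band (|E| < 2t) the lead surface Green's function
    is g = (E - i*sqrt(4t^2 - E^2)) / (2t^2)."""
    assert abs(E) < 2 * t, "energy must lie inside the lead band"
    g_surf = (E - 1j * math.sqrt(4 * t**2 - E**2)) / (2 * t**2)
    sigma = t**2 * g_surf            # self-energy from each lead
    gamma = -2 * sigma.imag          # broadening Gamma_L = Gamma_R
    G = 1 / (E - eps0 - 2 * sigma)   # retarded Green's function of site
    return gamma**2 * abs(G)**2      # T = Gamma_L * |G|^2 * Gamma_R

# a device level aligned with the leads transmits perfectly: T(0) = 1
t_resonant = transmission(0.0)
```

The appeal of the approach is visible even in this toy: nothing is fitted, and the current follows from the Hamiltonian alone once the self-energies of the leads are known.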

A July 27, 2016 World Scientific news release on EurekAlert, which originated the news item, delves into the topics covered by the book,

In recent decades, several device simulation tools using the bottom-up approach have been developed in universities and software companies. Some examples are McDcal, Transiesta, Atomistic Tool Kit, Smeagol, NanoDcal, NanoDsim, OpenMX, GPAW and NEMO-5. These software tools are capable of predicting electric current flowing through a nanostructure. Essentially the input is the atomic coordinates and the output is the electric current. These software tools have been applied extensively to study emerging electronic materials and devices.

However, developing such a software tool is extremely difficult. It takes years-long experiences and requires knowledge of and techniques in condensed matter physics, computer science, electronic engineering, and applied mathematics. In a library, one can find books on density functional theory, books on quantum transport, books on computer programming, books on numerical algorithms, and books on device simulation. But one can hardly find a book integrating all these fields for the purpose of nanoelectronic device simulation.

“Atomistic Simulation of Quantum Transport in Nanoelectronic Devices” (With CD-ROM) fills the chasm. Authors Yu Zhu and Lei Liu have experience in both academic research and software development. Yu Zhu is the project manager of NanoDsim, and Lei Liu is the project manager of NanoDcal. The content of the book is based on Zhu and Liu’s combined R&D experience of more than forty years.

In this book, the authors conduct an experiment and adopt a “paradigm” approach. Instead of organizing materials by fields, they focus on the development of one particular software tool called NanoDsim, and provide relevant knowledge and techniques whenever needed. The black box of NanoDsim is opened, and the complete procedure from theoretical derivation, to numerical implementation, all the way to device simulation is illustrated. The affiliated source code of NanoDsim also provides an open platform for new researchers.

I’m not recommending the book as I haven’t read it but it does seem intriguing. For anyone who wishes to purchase it, you can do that here.

I wrote about IBM and its 7nm chip in a July 15, 2015 post.

The world’s smallest diode is made from a single molecule

Both the University of Georgia (US) and the American Associates Ben-Gurion University of the Negev (Israel) have issued press releases about a joint research project resulting in the world’s smallest diode.

I stumbled across the April 4, 2016 University of Georgia news release on EurekAlert first,

Researchers at the University of Georgia and at Ben-Gurion University in Israel have demonstrated for the first time that nanoscale electronic components can be made from single DNA molecules. Their study, published in the journal Nature Chemistry, represents a promising advance in the search for a replacement for the silicon chip.

The finding may eventually lead to smaller, more powerful and more advanced electronic devices, according to the study’s lead author, Bingqian Xu.

“For 50 years, we have been able to place more and more computing power onto smaller and smaller chips, but we are now pushing the physical limits of silicon,” said Xu, an associate professor in the UGA College of Engineering and an adjunct professor in chemistry and physics. “If silicon-based chips become much smaller, their performance will become unstable and unpredictable.”

To find a solution to this challenge, Xu turned to DNA. He says DNA’s predictability, diversity and programmability make it a leading candidate for the design of functional electronic devices using single molecules.

In the Nature Chemistry paper, Xu and collaborators at Ben-Gurion University of the Negev describe using a single molecule of DNA to create the world’s smallest diode. A diode is a component vital to electronic devices that allows current to flow in one direction but prevents its flow in the other direction.

Xu and a team of graduate research assistants at UGA isolated a specifically designed single duplex DNA of 11 base pairs and connected it to an electronic circuit only a few nanometers in size. After the measured current showed no special behavior, the team site-specifically intercalated a small molecule named coralyne into the DNA. They found the current flowing through the DNA was 15 times stronger for negative voltages than for positive voltages, a necessary feature of a diode.
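
The 15-fold current asymmetry is what defines the junction’s rectification ratio, the standard figure of merit for a molecular diode. A tiny sketch with illustrative numbers (the picoampere values below are assumptions of mine, not the paper’s measured currents):

```python
def rectification_ratio(i_reverse, i_forward):
    """Diode figure of merit: ratio of current magnitudes at
    equal-and-opposite bias. A plain resistor gives 1; a diode
    gives a ratio well above 1."""
    return abs(i_reverse) / abs(i_forward)

# illustrative currents showing a 15x asymmetry like the one
# reported for the coralyne-intercalated DNA junction
ratio = rectification_ratio(-150e-12, 10e-12)
```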

“This finding is quite counterintuitive because the molecular structure is still seemingly symmetrical after coralyne intercalation,” Xu said.

A theoretical model developed by Yonatan Dubi of Ben-Gurion University indicated the diode-like behavior of DNA originates from the bias voltage-induced breaking of spatial symmetry inside the DNA molecule after the coralyne is inserted.

“Our discovery can lead to progress in the design and construction of nanoscale electronic elements that are at least 1,000 times smaller than current components,” Xu said.

The research team plans to continue its work, with the goal of constructing additional molecular devices and enhancing the performance of the molecular diode.

The April 4, 2016 American Associates Ben-Gurion University of the Negev press release on EurekAlert covers much of the same ground while providing some new details,

The world’s smallest diode, the size of a single molecule, has been developed collaboratively by U.S. and Israeli researchers from the University of Georgia and Ben-Gurion University of the Negev (BGU).

“Creating and characterizing the world’s smallest diode is a significant milestone in the development of molecular electronic devices,” explains Dr. Yoni Dubi, a researcher in the BGU Department of Chemistry and Ilse Katz Institute for Nanoscale Science and Technology. “It gives us new insights into the electronic transport mechanism.”

Continuous demand for more computing power is pushing the limitations of present day methods. This need is driving researchers to look for molecules with interesting properties and find ways to establish reliable contacts between molecular components and bulk materials in an electrode, in order to mimic conventional electronic elements at the molecular scale.

An example of such an element is the nanoscale diode (or molecular rectifier), which operates like a valve to facilitate electronic current flow in one direction. A collection of these nanoscale diodes, or molecules, has properties that resemble traditional electronic components such as a wire, transistor or rectifier. The emerging field of single molecule electronics may provide a way to overcome Moore’s Law – the observation that over the history of computing hardware the number of transistors in a dense integrated circuit has doubled approximately every two years – beyond the limits of conventional silicon integrated circuits.

Prof. Bingqian Xu’s group at the College of Engineering at the University of Georgia took a single DNA molecule constructed from 11 base pairs and connected it to an electronic circuit only a few nanometers in size. When they measured the current through the molecule, it did not show any special behavior. However, when layers of a molecule called “coralyne” were inserted (or intercalated) between layers of DNA, the behavior of the circuit changed drastically. The current became 15 times larger for negative than for positive voltages – a necessary feature for a nano diode. “In summary, we have constructed a molecular rectifier by intercalating specific, small molecules into designed DNA strands,” explains Prof. Xu.

Dr. Dubi and his student, Elinor Zerah-Harush, constructed a theoretical model of the DNA molecule inside the electric circuit to better understand the results of the experiment. “The model allowed us to identify the source of the diode-like feature, which originates from breaking spatial symmetry inside the DNA molecule after coralyne is inserted.”

There’s an April 4, 2016 posting on the Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) which provides a brief overview and a link to a previous essay, Whatever Happened to the Molecular Computer?

Here’s a link and citation for the paper,

Molecular rectifier composed of DNA with high rectification ratio enabled by intercalation by Cunlan Guo, Kun Wang, Elinor Zerah-Harush, Joseph Hamill, Bin Wang, Yonatan Dubi, & Bingqian Xu. Nature Chemistry (2016) doi:10.1038/nchem.2480 Published online 04 April 2016

This paper is behind a paywall.

IBM, the Cognitive Era, and carbon nanotube electronics

IBM has a storied position in the field of nanotechnology due to the scanning tunneling microscope developed in the company’s laboratories. It was a Nobel Prize-winning breakthrough which provided the impetus for nanotechnology applied research. Now, an Oct. 1, 2015 news item on Nanowerk trumpets another IBM breakthrough,

IBM Research today [Oct. 1, 2015] announced a major engineering breakthrough that could accelerate carbon nanotubes replacing silicon transistors to power future computing technologies.

IBM scientists demonstrated a new way to shrink transistor contacts without reducing performance of carbon nanotube devices, opening a pathway to dramatically faster, smaller and more powerful computer chips beyond the capabilities of traditional semiconductors.

While the Oct. 1, 2015 IBM news release, which originated the news item, does go on at length there’s not much technical detail (see the second to last paragraph in the excerpt for the little they do include) about the research breakthrough (Note: Links have been removed),

IBM’s breakthrough overcomes a major hurdle that silicon and any semiconductor transistor technologies face when scaling down. In any transistor, two things scale: the channel and its two contacts. As devices become smaller, increased contact resistance for carbon nanotubes has hindered performance gains until now. These results could overcome contact resistance challenges all the way to the 1.8 nanometer node – four technology generations away. [emphasis mine]

Carbon nanotube chips could greatly improve the capabilities of high performance computers, enabling Big Data to be analyzed faster, increasing the power and battery life of mobile devices and the Internet of Things, and allowing cloud data centers to deliver services more efficiently and economically.

Silicon transistors, tiny switches that carry information on a chip, have been made smaller year after year, but they are approaching a point of physical limitation. With Moore’s Law running out of steam, shrinking the size of the transistor – including the channels and contacts – without compromising performance has been a vexing challenge troubling researchers for decades.

IBM has previously shown that carbon nanotube transistors can operate as excellent switches at channel dimensions of less than ten nanometers – the equivalent to 10,000 times thinner than a strand of human hair and less than half the size of today’s leading silicon technology. IBM’s new contact approach overcomes the other major hurdle in incorporating carbon nanotubes into semiconductor devices, which could result in smaller chips with greater performance and lower power consumption.

Earlier this summer, IBM unveiled the first 7 nanometer node silicon test chip [emphasis mine], pushing the limits of silicon technologies and ensuring further innovations for IBM Systems and the IT industry. By advancing research of carbon nanotubes to replace traditional silicon devices, IBM is paving the way for a post-silicon future and delivering on its $3 billion chip R&D investment announced in July 2014.

“These chip innovations are necessary to meet the emerging demands of cloud computing, Internet of Things and Big Data systems,” said Dario Gil, vice president of Science & Technology at IBM Research. “As silicon technology nears its physical limits, new materials, devices and circuit architectures must be ready to deliver the advanced technologies that will be required by the Cognitive Computing era. This breakthrough shows that computer chips made of carbon nanotubes will be able to power systems of the future sooner than the industry expected.”

A New Contact for Carbon Nanotubes

Carbon nanotubes represent a new class of semiconductor materials that consist of single atomic sheets of carbon rolled up into a tube. The carbon nanotubes form the core of a transistor device whose superior electrical properties promise several generations of technology scaling beyond the physical limits of silicon.

Electrons in carbon transistors can move more easily than in silicon-based devices, and the ultra-thin body of carbon nanotubes provides additional advantages at the atomic scale. Inside a chip, contacts are the valves that control the flow of electrons from metal into the channels of a semiconductor. As transistors shrink in size, electrical resistance increases within the contacts, which impedes performance. Until now, decreasing the size of the contacts on a device caused a commensurate drop in performance – a challenge facing both silicon and carbon nanotube transistor technologies.

IBM researchers had to forego traditional contact schemes and invented a metallurgical process akin to microscopic welding that chemically binds the metal atoms to the carbon atoms at the ends of nanotubes. This ‘end-bonded contact scheme’ allows the contacts to be shrunken down to below 10 nanometers without deteriorating performance of the carbon nanotube devices.
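
A schematic way to see why end-bonding matters: in a conventional side contact, current transfers over a characteristic length, so resistance blows up roughly as 1/length once the contact shrinks below it, while an end-bonded contact is, ideally, length-independent. The toy model below is my own illustration with made-up parameters, not IBM’s measured data:

```python
import math

def side_contact_resistance(length_nm, rho_c=1.0e3, transfer_len_nm=40.0):
    """Toy transfer-length model of a conventional side-bonded contact:
    once the contact is shorter than the current-transfer length, the
    resistance grows roughly as 1/length (illustrative units, ohms)."""
    return rho_c / math.tanh(length_nm / transfer_len_nm)

def end_bonded_resistance(length_nm, r_end=2.0e3):
    """End-bonded scheme: current enters through the covalently bonded
    tube end, so resistance is ideally independent of contact length."""
    return r_end
```

That flat second curve is the substance of the “low, size-independent resistance” in the paper’s title: shrinking the contact no longer costs performance.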

“For any advanced transistor technology, the increase in contact resistance due to the decrease in the size of transistors becomes a major performance bottleneck,” Gil added. “Our novel approach is to make the contact from the end of the carbon nanotube, which we show does not degrade device performance. This brings us a step closer to the goal of a carbon nanotube technology within the decade.”

Every once in a while, the size gets to me and a 1.8nm node is amazing. As for IBM’s 7nm chip, which was previewed this summer, there’s more about that in my July 15, 2015 posting.

Here’s a link to and a citation for the IBM paper,

End-bonded contacts for carbon nanotube transistors with low, size-independent resistance by Qing Cao, Shu-Jen Han, Jerry Tersoff, Aaron D. Franklin, Yu Zhu, Zhen Zhang, George S. Tulevski, Jianshi Tang, and Wilfried Haensch. Science 2 October 2015: Vol. 350 no. 6256 pp. 68-72 DOI: 10.1126/science.aac8006

This paper is behind a paywall.

IBM weighs in with plans for a 7nm computer chip

On the heels of Intel’s announcement about a deal utilizing their 14nm low-power manufacturing process and speculations about a 10nm computer chip (my July 9, 2014 posting), IBM makes an announcement about a 7nm chip as per this July 10, 2014 news item on Azonano,

IBM today [July 10, 2014] announced it is investing $3 billion over the next 5 years in two broad research and early stage development programs to push the limits of chip technology needed to meet the emerging demands of cloud computing and Big Data systems. These investments will push IBM’s semiconductor innovations from today’s breakthroughs into the advanced technology leadership required for the future.

A very comprehensive July 10, 2014 news release lays out the company’s plans for this $3B investment representing 10% of IBM’s total research budget,

The first research program is aimed at so-called “7 nanometer and beyond” silicon technology that will address serious physical challenges that are threatening current semiconductor scaling techniques and will impede the ability to manufacture such chips. The second is focused on developing alternative technologies for post-silicon era chips using entirely different approaches, which IBM scientists and other experts say are required because of the physical limitations of silicon based semiconductors.

Cloud and big data applications are placing new challenges on systems, just as the underlying chip technology is facing numerous significant physical scaling limits.  Bandwidth to memory, high speed communication and device power consumption are becoming increasingly challenging and critical.

The teams will comprise IBM Research scientists and engineers from Albany and Yorktown, New York; Almaden, California; and Europe. In particular, IBM will be investing significantly in emerging areas of research that are already underway at IBM such as carbon nanoelectronics, silicon photonics, new memory technologies, and architectures that support quantum and cognitive computing. [emphasis mine]

These teams will focus on providing orders of magnitude improvement in system level performance and energy efficient computing. In addition, IBM will continue to invest in the nanosciences and quantum computing–two areas of fundamental science where IBM has remained a pioneer for over three decades.

7 nanometer technology and beyond

IBM Researchers and other semiconductor experts predict that while challenging, semiconductors show promise to scale from today’s 22 nanometers down to 14 and then 10 nanometers in the next several years.  However, scaling to 7 nanometers and perhaps below, by the end of the decade will require significant investment and innovation in semiconductor architectures as well as invention of new tools and techniques for manufacturing.

“The question is not if we will introduce 7 nanometer technology into manufacturing, but rather how, when, and at what cost?” said John Kelly, senior vice president, IBM Research. “IBM engineers and scientists, along with our partners, are well suited for this challenge and are already working on the materials science and device engineering required to meet the demands of the emerging system requirements for cloud, big data, and cognitive systems. This new investment will ensure that we produce the necessary innovations to meet these challenges.”

“Scaling to 7nm and below is a terrific challenge, calling for deep physics competencies in processing nano materials affinities and characteristics. IBM is one of a very few companies who has repeatedly demonstrated this level of science and engineering expertise,” said Richard Doherty, technology research director, The Envisioneering Group.

Bridge to a “Post-Silicon” Era

Silicon transistors, tiny switches that carry information on a chip, have been made smaller year after year, but they are approaching a point of physical limitation. Their increasingly small dimensions, now reaching the nanoscale, will prohibit any gains in performance due to the nature of silicon and the laws of physics. Within a few more generations, classical scaling and shrinkage will no longer yield the sizable benefits of lower power, lower cost and higher speed processors that the industry has become accustomed to.

With virtually all electronic equipment today built on complementary metal–oxide–semiconductor (CMOS) technology, there is an urgent need for new materials and circuit architecture designs compatible with this engineering process as the technology industry nears physical scalability limits of the silicon transistor.

Beyond 7 nanometers, the challenges dramatically increase, requiring a new kind of material to power systems of the future, and new computing platforms to solve problems that are unsolvable or difficult to solve today. Potential alternatives include new materials such as carbon nanotubes, and non-traditional computational approaches such as neuromorphic computing, cognitive computing, machine learning techniques, and the science behind quantum computing.

As the leader in advanced approaches that point beyond traditional silicon-based computing, IBM holds over 500 patents for technologies that will drive advancements at 7nm and beyond silicon, more than twice as many as its nearest competitor. These continued investments will accelerate invention and its introduction into product development for IBM’s highly differentiated computing systems for cloud and big data analytics.

Several exploratory research breakthroughs that could lead to major advancements in delivering dramatically smaller, faster and more powerful computer chips include quantum computing, neurosynaptic computing, silicon photonics, carbon nanotubes, III-V technologies, low-power transistors and graphene:

Quantum Computing

The most basic piece of information that a typical computer understands is a bit. Much like a light that can be switched on or off, a bit can have only one of two values: “1” or “0.” A quantum bit, or qubit, by contrast, can be a “1,” a “0,” or both at once. Described as superposition, this special property of qubits enables quantum computers to weed through millions of solutions all at once, while desktop PCs would have to consider them one at a time.
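As a rough illustration (not IBM code, and with illustrative amplitude values), the arithmetic behind superposition can be sketched in a few lines of Python. A qubit’s state is a pair of complex amplitudes, and the Born rule says each measurement outcome occurs with probability equal to the squared magnitude of its amplitude:

```python
import math

# A classical bit is 0 or 1. A qubit's state is a pair of complex
# amplitudes (a, b) for the outcomes "0" and "1", normalized so that
# |a|^2 + |b|^2 = 1.
a = 1 / math.sqrt(2)  # amplitude for "0"
b = 1 / math.sqrt(2)  # amplitude for "1": an equal superposition

# Born rule: the probability of each measurement outcome is the
# squared magnitude of its amplitude.
p0 = abs(a) ** 2
p1 = abs(b) ** 2

assert abs(p0 + p1 - 1) < 1e-12  # probabilities must sum to 1
print(round(p0, 6), round(p1, 6))  # 0.5 0.5
```

The point of the sketch: until measurement, both amplitudes are carried at once, which is why n qubits can represent 2^n amplitudes simultaneously while a classical n-bit register holds just one of the 2^n values.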

IBM is a world leader in superconducting qubit-based quantum computing science and is a pioneer in experimental and theoretical quantum information, fields that are still in the category of fundamental science but ones that, in the long term, may allow the solution of problems that are today either impossible or impractical to solve using conventional machines. The team recently demonstrated the first experimental realization of a parity check with three superconducting qubits, an essential building block for one type of quantum computer.

Neurosynaptic Computing

Bringing together nanoscience, neuroscience, and supercomputing, IBM and university partners have developed an end-to-end ecosystem including a novel non-von Neumann architecture, a new programming language, as well as applications. This novel technology allows for computing systems that emulate the brain’s computing efficiency, size and power usage. IBM’s long-term goal is to build a neurosynaptic system with ten billion neurons and a hundred trillion synapses, all while consuming only one kilowatt of power and occupying less than two liters of volume.

Silicon Photonics

IBM has been a pioneer in the area of CMOS-integrated silicon photonics for over 12 years, a technology that integrates functions for optical communications on a silicon chip, and the IBM team has recently designed and fabricated the world’s first monolithic silicon photonics-based transceiver with wavelength division multiplexing. Such transceivers will use light to transmit data between different components in a computing system at high data rates, low cost, and in an energetically efficient manner.

Silicon nanophotonics takes advantage of pulses of light for communication rather than traditional copper wiring and provides a super highway for large volumes of data to move at rapid speeds between computer chips in servers, large datacenters, and supercomputers, thus alleviating the limitations of congested data traffic and high-cost traditional interconnects.

Businesses are entering a new era of computing that requires systems to process and analyze, in real time, huge volumes of information known as Big Data. Silicon nanophotonics technology answers Big Data challenges by seamlessly connecting various parts of large systems, whether a few centimeters or a few kilometers apart from each other, and moving terabytes of data via pulses of light through optical fibers.

III-V technologies

IBM researchers have demonstrated the world’s highest transconductance on a self-aligned III-V channel metal-oxide-semiconductor (MOS) field-effect transistor (FET) device structure that is compatible with CMOS scaling. These material and structural innovations are expected to pave the path for technology scaling at 7nm and beyond. With more than an order of magnitude higher electron mobility than silicon, integrating III-V materials into CMOS enables higher performance at lower power density, allowing power/performance scaling to be extended to meet the demands of cloud computing and big data systems.

Carbon Nanotubes

IBM Researchers are working in the area of carbon nanotube (CNT) electronics and exploring whether CNTs can replace silicon beyond the 7 nm node.  As part of its activities for developing carbon nanotube based CMOS VLSI circuits, IBM recently demonstrated — for the first time in the world — 2-way CMOS NAND gates using 50 nm gate length carbon nanotube transistors.

IBM also has demonstrated the capability to purify carbon nanotubes to 99.99 percent, the highest verified purity demonstrated to date, and transistors at a 10 nm channel length that show no degradation due to scaling, a result unmatched by any other material system to date.

Carbon nanotubes are single atomic sheets of carbon rolled up into a tube. The carbon nanotubes form the core of a transistor device that will work in a fashion similar to the current silicon transistor, but will be better performing. They could be used to replace the transistors in chips that power data-crunching servers, high performing computers and ultra fast smart phones.

Carbon nanotube transistors can operate as excellent switches at molecular dimensions of less than ten nanometers, the equivalent of 10,000 times thinner than a strand of human hair and less than half the size of the leading silicon technology. Comprehensive modeling of the electronic circuits suggests that about a five- to ten-fold improvement in performance over silicon circuits is possible.

Graphene

Graphene is pure carbon in the form of a sheet one atomic layer thick. It is an excellent conductor of heat and electricity, and it is also remarkably strong and flexible. Electrons can move through graphene about ten times faster than in commonly used semiconductor materials such as silicon and silicon germanium. These characteristics offer the possibility of building faster-switching transistors than are possible with conventional semiconductors, particularly for the handheld wireless communications business, where graphene would be a more efficient switch than those currently used.

In 2013, IBM demonstrated the world’s first graphene-based integrated circuit receiver front end for wireless communications. The circuit consisted of a two-stage amplifier and a down-converter operating at 4.3 GHz.

Next Generation Low Power Transistors

In addition to new materials like CNTs, new architectures and innovative device concepts are required to boost future system performance. Power dissipation is a fundamental challenge for nanoelectronic circuits. To explain the challenge, consider a leaky water faucet: even after closing the valve as far as possible, water continues to drip. This is similar to today’s transistor, in that energy constantly “leaks,” or is lost or wasted, in the off-state.

So-called steep-slope devices are a potential alternative to today’s power-hungry silicon field-effect transistors; they could operate at much lower voltage and thus dissipate significantly less power. IBM scientists are researching tunnel field-effect transistors (TFETs). In this special type of transistor, the quantum-mechanical effect of band-to-band tunneling drives the current flow through the transistor. TFETs could achieve a 100-fold power reduction over CMOS transistors, so integrating TFETs with CMOS technology could improve low-power integrated circuits.

Recently, IBM developed a novel method to integrate III-V nanowires and heterostructures directly on standard silicon substrates and built the first-ever InAs/Si tunnel diodes and TFETs, using InAs as the source and Si as the channel with a wrap-around gate, as steep-slope devices for low-power-consumption applications.

“In the next ten years computing hardware systems will be fundamentally different as our scientists and engineers push the limits of semiconductor innovations to explore the post-silicon future,” said Tom Rosamilia, senior vice president, IBM Systems and Technology Group. “IBM Research and Development teams are creating breakthrough innovations that will fuel the next era of computing systems.”

IBM’s historic contributions to silicon and semiconductor innovation include the invention and/or first implementation of: the single-cell DRAM, the “Dennard scaling laws” underpinning “Moore’s Law,” chemically amplified photoresists, copper interconnect wiring, silicon-on-insulator, strain engineering, multi-core microprocessors, immersion lithography, high-speed silicon germanium (SiGe), high-k gate dielectrics, embedded DRAM, 3D chip stacking, and air gap insulators.

IBM researchers are also credited with initiating the era of nanodevices following the Nobel Prize-winning invention of the scanning tunneling microscope, which enabled nano- and atomic-scale invention and innovation.

IBM will also continue to fund and collaborate with university researchers to explore and develop future technologies for the semiconductor industry. In particular, IBM will continue to support and fund university research through private-public partnerships such as the Nanoelectronics Research Initiative (NRI), the Semiconductor Technology Advanced Research Network (STARnet), and the Global Research Collaboration (GRC) of the Semiconductor Research Corporation.

I highlighted ‘memory systems’ as this brings to mind HP Labs and their major investment in ‘memristive’ technologies noted in my June 26, 2014 posting,

… During a two-hour presentation held a year and a half ago, they laid out how the computer might work, its benefits, and the expectation that about 75 percent of HP Labs personnel would be dedicated to this one project. “At the end, Meg [Whitman, CEO of HP] turned to [Chief Financial Officer] Cathie Lesjak and said, ‘Find them more money,’” says John Sontag, the vice president of systems research at HP, who attended the meeting and is in charge of bringing the Machine to life. “People in Labs see this as a once-in-a-lifetime opportunity.”

The Machine is based on the memristor and other associated technologies.

Getting back to IBM, there’s this analysis of the $3B investment ($600M/year for five years) by Alex Konrad in a July 10, 2014 article for Forbes (Note: A link has been removed),

When IBM … announced a $3 billion commitment to even tinier semiconductor chips that no longer depended on silicon on Wednesday, the big news was that IBM’s putting a lot of money into a future for chips where Moore’s Law no longer applies. But on second glance, the move to spend billions on more experimental ideas like silicon photonics and carbon nanotubes shows that IBM’s finally shifting large portions of its research budget into more ambitious and long-term ideas.

… IBM tells Forbes the $3 billion isn’t additional money being added to its R&D spend, an area where analysts have told Forbes they’d like to see more aggressive cash commitments in the future. IBM will still spend about $6 billion a year on R&D, 6% of revenue. Ten percent of that research budget, however, now has to come from somewhere else to fuel these more ambitious chip projects.

Neal Ungerleider’s July 11, 2014 article for Fast Company focuses on the neuromorphic computing and quantum computing aspects of this $3B initiative (Note: Links have been removed),

The new R&D initiatives fall into two categories: Developing nanotech components for silicon chips for big data and cloud systems, and experimentation with “post-silicon” microchips. This will include research into quantum computers which don’t know binary code, neurosynaptic computers which mimic the behavior of living brains, carbon nanotubes, graphene tools and a variety of other technologies.

IBM’s investment is one of the largest for quantum computing to date; the company is one of the biggest researchers in the field, along with a Canadian company named D-Wave which is partnering with Google and NASA to develop quantum computer systems.

The curious can find D-Wave Systems here. There’s also a January 19, 2012 posting here which discusses D-Wave’s situation at that time.

Final observation: these are fascinating developments, especially for the insight they provide into the worries troubling HP Labs, Intel, and IBM as they jockey for position.

ETA July 14, 2014: Dexter Johnson has a July 11, 2014 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) about the IBM announcement, which features some responses he received from IBM officials to his queries,

While this may be a matter of fascinating speculation for investors, the impact on nanotechnology development  is going to be significant. To get a better sense of what it all means, I was able to talk to some of the key figures of IBM’s push in nanotechnology research.

I conducted e-mail interviews with Tze-Chiang (T.C.) Chen, vice president science & technology, IBM Fellow at the Thomas J. Watson Research Center and Wilfried Haensch, senior manager, physics and materials for logic and communications, IBM Research.

Silicon versus Nanomaterials

First, I wanted to get a sense for how long IBM envisioned sticking with silicon and when they expected the company would permanently make the move away from CMOS to alternative nanomaterials. Unfortunately, as expected, I didn’t get solid answers, except for them to say that new manufacturing tools and techniques need to be developed now.

He goes on to ask about carbon nanotubes and graphene. Interestingly, IBM does not have a wide range of electronics applications in mind for graphene. I encourage you to read Dexter’s posting, as he got answers to some very astute and pointed questions.

Graphene hype; the emerging story in an interview with Carla Alvial Palavicino (University of Twente, Netherlands)

I’m delighted to be publishing this interview with Carla Alvial Palavicino, PhD student at the University of Twente (Netherlands), as she is working on the topic of graphene ‘hype’. Here’s a bit more about the work from her University of Twente webpage (Note: A link has been removed),

From its origins, the field of nanotechnology has been populated with expectations. Pictured as “the new industrial revolution,” the economic promise holds strong, but nanotechnologies are also cast as a cure for almost all human ills, sustainers of future growth, prosperity and happiness. In contrast to these promises, the uncertainties associated with the introduction of such a new and revolutionary technology, and mainly the risks of nanomaterials, have elicited concerns among governments and the public. Nevertheless, the case of the public can be characterized as concerns about concerns, based on the experience of previous innovations (GMOs, etc.).

Expectations, both as promises and concerns, have played and continue playing a central role in the “real-time social and political constitution of nanotechnology” (Kearnes and Macnaghten 2006). A circulation of visions, promises and concerns is observed in the field, from broadly defined umbrella promises to more specific expectations, and references to grand challenges as moral imperatives. These expectations have become such an important part of the social repertoire of nano applications that we observe the proliferation of systematic and intentional modes of expectation building, such as roadmaps, technology assessment, etc., as well as a considerable group of reports on risks, concerns, and ethical and social aspects. These different modes of expectation building (Konrad 2010) co-exist and contribute to the articulation of the nano field.

This project seeks to identify, characterize and contextualize the existing modes of expectation building, whether intentional (i.e., foresight, TA, etc.) or implicit in arenas of public discourse, associated with ongoing and emerging social processes in the context of socio-technical change.

These dynamics are being explored in relation to the new material graphene.

Before getting to the interview, here’s Alvial Palavicino’s biography,

Carla Alvial Palavicino has a bachelor’s degree in Molecular Biology Engineering from the School of Science, University of Chile, and a master’s degree in Sustainability Sciences from the Graduate School of Frontier Sciences, University of Tokyo, Japan. She has worked in technology transfer and, more recently, in smart grids and local-scale renewable energy provision.

Finally, here’s the interview (Note: At the author’s request, there have been some grammatical changes made to conform with Canadian English.),

  • What is it that interests you about the ‘hype’ that some technologies receive and how did you come to focus on graphene in particular?

My research belongs to a field called the Sociology of Expectations, which deals with the role of promises, visions, concerns and ideas of the future in the development of technologies, and how these ideas actually affect people’s strategies in technology development. Part of the dynamics found for these expectations are hype-disappointment cycles, much like the ones the Gartner Group uses. And hype has become an expectation itself; people expect that there will be too many promises and that some, maybe many, of them are not going to be fulfilled, followed by disappointment.

I came to know about graphene because, initially, I was broadly interested in nanoelectronics (my research project is part of NanoNextNL, a large Dutch nano research programme), due to the strong future orientation of the electronics industry. The industry has been organizing, and continues to organize, around the promise of Moore’s law for more than 50 years! So I came across graphene as something thriving, to some extent, on the expectations around the end of Moore’s law, and because simply everybody was talking about it as the next big thing! Then I thought, this is a great opportunity to investigate hype in real time.

  • Is there something different about the hype for graphene or is this the standard ‘we’ve found a new material and it will change everything’?

I guess with every new technology and new material you find a portion of genuine enthusiasm, which might lead to big promises. But that doesn’t necessarily turn into big hype. One thing is that not all hype is the same; you might have technologies that disappeared after the hype, such as high-temperature superconductors, or technologies that go through a number of hype and disappointment cycles throughout their development (for example, fuel cells). Now with graphene, what you certainly have is very ‘loud’ hype; the amount of attention it has received in so little time is extraordinary. Whether that is a characteristic of graphene or a consequence of the conditions in which the hype has developed, such as faster ways of communication (social media, for example) or different incentives for science and innovation, well, this is part of what I am trying to find out.

Quite clearly, the hype around graphene seems to be more ‘reflexive’ than others; that is, people seem to be more conscious about hype now. We had the experience with carbon nanotubes only recently, and scientists, companies and investors are less naïve about what can be expected of the technology, and about what needs to be done to move it forward ‘in the right direction’. And they do act in ways that try to soften the slope of the hype-disappointment curve. Having said that, actors [Ed. Note: as in actor-network theory] are also aware of how they can take advantage of the hype (for funding, investment, or another interest), how to make use of it and, hopefully, leave safely before disappointment. In the end, it is rather hard to demand accountability for big promises over the long term.

  • In the description of your work you mention intentional and implicit modes of building expectations; could you explain the difference between the two?

One striking feature of technology development today is that we find more and more activities directed at learning about, assessing, and shaping the future, such as forecasts, foresight exercises, Delphi studies, roadmaps and so on. There are even specialized ‘future’ actors, such as consultancy organisations and foresight experts, Cientifica among them. And these formalized ways of anticipating the future are expected to be performative by those who produce and use them; that is, to influence the way the future, and the present, turns out. But this is not a linear story; it’s not as if 100% of a roadmap can be turned into practice (not even for the ITRS roadmap [Ed. Note: International Technology Roadmap for Semiconductors] that sustains Moore’s law; some expectations change quite radically between editions of the roadmap). Besides that, there are other forms of building expectations which are embedded in practices around new technologies. Think of the promises made in high-profile journals or grant applications, and of the expectations incorporated in patents and standards. All of these embody particular forms and directions for the future, and exclude others. These are implicit forms of expectation building, even if not primarily intended as such. These forms are shaped by particular expectations which themselves shape further development. So, in order to understand how these practices, both intentional and implicit, anticipate futures, you need to look at the interplay between the various types.

  • Do you see a difference internationally with regard to graphene hype? Is it more prevalent in Europe than in North America? Is it particularly prevalent in some jurisdictions, e.g. the UK?

I think the graphene ‘hype’ has been quite global, but it is moving to different communities, or actor groups, as Tim Harper from Cientifica has mentioned in his recent report about graphene.

What is interesting in relation to the different ‘geographical’ responses to graphene is that they exemplify nicely how a big promise (graphene, in this case) is connected to other circulating visions, expectations or concerns. In the case of the UK, the *Nobel Prize for graphene and the investment that followed were connected to the idea of a perceived crisis of innovation in the country. Thus, the decision to invest in graphene was presented and discussed in reference to global competitiveness, showing a political commitment to science and innovation that was in doubt at that time.

In the European case, with its *Graphene Flagship, something similar happened. While there is no doubt of the scientific excellence of the flagship project, the reasons why it finally became a winner in the flagship competition might have been related to the attention on graphene. The project itself started quite humbly, and it differed from the other flagship proposals, which were much more oriented towards economic or societal challenges. But the attention graphene received after the Nobel Prize, plus the engagement of some large companies, helped to frame the project in terms of its economic profitability. And this might have helped to bring attention and make sense of the project in the terms the European Commission was interested in.

In contrast, if you think of the US, the hype has been there (the number of companies engaged in graphene research keeps increasing), but it has not had a big echo in policy. One of the reasons might be that this idea of global competition and being left behind is not so present in the US. And in the case of Canada, for example, graphene has been taken up by the graphite (mining) community, which is a very local feature.

So, answering your question, the hype has been quite global and fed in a global way (developments in one place resonate in others), but different geographical areas have reacted to what this hype dynamic provided in relation to their own contingent expectations.

  • What do you think of graphene?

I think it’s the new material with the most YouTube videos (this one is particularly good at overpromising, for example) and the coolest superhero (Mr. G from the Flagship). But seriously, I often get asked that question when I do interviews with actors in the field, since they are curious to learn about the outsider perspective. But to be honest, I try to remain as neutral and distant as possible regarding my research object… and to avoid getting caught up in the hype!

Thanks so much for a fascinating interview, Carla, and I very much appreciate the inclusion of Canada in your response to the question about the international response to graphene hype. (Here are three of my postings on graphite and mining in Canada: Canada’s contribution to graphene research: big graphite flakes [Feb. 6, 2012]; A ‘graphite today, graphene tomorrow’ philosophy from Focus Graphite [April 17, 2013]; and Lomiko’s Quatre Milles graphite flakes—pure and ultra pure [April 17, 2013]. There are others you can find by searching ‘graphite’ in the blog’s search box.)

* For anyone curious about the Nobel Prize and graphene, there’s this Oct. 7, 2010 posting. Plus, the Graphene Flagship was one of several projects competing for one of the two 1B Euro research prizes awarded in January 2013 (the win is mentioned in my Jan. 28, 2013 posting).

Merry Christmas, Happy New Year, and Happy Holidays to all!