Tag Archives: qubits

Graphene can be used in quantum components

A November 3, 2022 news item on phys.org provides a brief history of graphene before announcing the latest work from ETH Zurich,

Less than 20 years ago, Konstantin Novoselov and Andre Geim first created two-dimensional crystals consisting of just one layer of carbon atoms. Known as graphene, this material has had quite a career since then.

Due to its exceptional strength, graphene is used today to reinforce products such as tennis rackets, car tires or aircraft wings. But it is also an interesting subject for fundamental research, as physicists keep discovering new, astonishing phenomena that have not been observed in other materials.

The right twist

Bilayer graphene crystals, in which the two atomic layers are slightly rotated relative to each other, are particularly interesting for researchers. About one year ago, a team of researchers led by Klaus Ensslin and Thomas Ihn at ETH Zurich’s Laboratory for Solid State Physics was able to demonstrate that twisted graphene could be used to create Josephson junctions, the fundamental building blocks of superconducting devices.

Based on this work, researchers were now able to produce the first superconducting quantum interference device, or SQUID, from twisted graphene for the purpose of demonstrating the interference of superconducting quasiparticles. Conventional SQUIDs are already being used, for instance in medicine, geology and archaeology. Their sensitive sensors are capable of measuring even the smallest changes in magnetic fields. However, SQUIDs work only in conjunction with superconducting materials, so they require cooling with liquid helium or nitrogen when in operation.

In quantum technology, SQUIDs can host quantum bits (qubits); that is, as elements for carrying out quantum operations. “SQUIDs are to superconductivity what transistors are to semiconductor technology—the fundamental building blocks for more complex circuits,” Ensslin explains.

A November 3, 2022 ETH Zurich news release by Felix Würsten, which originated the news item, delves further into the work,

The spectrum is widening

The graphene SQUIDs created by doctoral student Elías Portolés are not more sensitive than their conventional counterparts made from aluminium and also have to be cooled down to temperatures lower than 2 degrees above absolute zero. “So it’s not a breakthrough for SQUID technology as such,” Ensslin says. However, it does broaden graphene’s application spectrum significantly. “Five years ago, we were already able to show that graphene could be used to build single-electron transistors. Now we’ve added superconductivity,” Ensslin says.

What is remarkable is that the graphene’s behaviour can be controlled in a targeted manner by biasing an electrode. Depending on the voltage applied, the material can be insulating, conducting or superconducting. “The rich spectrum of opportunities offered by solid-state physics is at our disposal,” Ensslin says.

Also interesting is that the two fundamental building blocks of a semiconductor (transistor) and a superconductor (SQUID) can now be combined in a single material. This makes it possible to build novel control operations. “Normally, the transistor is made from silicon and the SQUID from aluminium,” Ensslin says. “These are different materials requiring different processing technologies.”

An extremely challenging production process

Superconductivity in graphene was discovered by an MIT [Massachusetts Institute of Technology] research group five years ago, yet there are only a dozen or so experimental groups worldwide that look at this phenomenon. Even fewer are capable of converting superconducting graphene into a functioning component.

The challenge is that scientists have to carry out several delicate work steps one after the other: First, they have to align the graphene sheets at the exact right angle relative to each other. The next steps then include connecting electrodes and etching holes. If the graphene were to be heated up, as often happens during cleanroom processing, the two layers re-align and the twist angle vanishes. “The entire standard semiconductor technology has to be readjusted, making this an extremely challenging job,” Portolés says.

The vision of hybrid systems

Ensslin is thinking one step ahead. Quite a variety of different qubit technologies are currently being assessed, each with its own advantages and disadvantages. For the most part, this is being done by various research groups within the National Center of Competence in Quantum Science and Technology (QSIT). If scientists succeed in coupling two of these systems using graphene, it might be possible to combine their benefits as well. “The result would be two different quantum systems on the same crystal,” Ensslin says.

This would also generate new possibilities for research on superconductivity. “With these components, we might be better able to understand how superconductivity in graphene comes about in the first place,” he adds. “All we know today is that there are different phases of superconductivity in this material, but we do not yet have a theoretical model to explain them.”

Here’s a link to and a citation for the paper,

A tunable monolithic SQUID in twisted bilayer graphene by Elías Portolés, Shuichi Iwakiri, Giulia Zheng, Peter Rickhaus, Takashi Taniguchi, Kenji Watanabe, Thomas Ihn, Klaus Ensslin & Folkert K. de Vries. Nature Nanotechnology volume 17, pages 1159–1164 (2022) Issue Date: November 2022 DOI: https://doi.org/10.1038/s41565-022-01222-0 Published online: 24 October 2022

This paper is behind a paywall.

Exotic magnetism: a quantum simulation from D-Wave Systems

Vancouver (Canada) area company D-Wave Systems is trumpeting itself (with good reason) again. This 2021 ‘milestone’ achievement builds on work from 2018 (see my August 23, 2018 posting for the earlier work). For me, the big excitement was finding the best explanation for quantum annealing and D-Wave’s quantum computers that I’ve seen yet (that explanation and a link to more are at the end of this posting).

A February 18, 2021 news item on phys.org announces the latest achievement,

D-Wave Systems Inc. today [February 18, 2021] published a milestone study in collaboration with scientists at Google, demonstrating a computational performance advantage, increasing with both simulation size and problem hardness, to over 3 million times that of corresponding classical methods. Notably, this work was achieved on a practical application with real-world implications, simulating the topological phenomena behind the 2016 Nobel Prize in Physics. This performance advantage, exhibited in a complex quantum simulation of materials, is a meaningful step in the journey toward applications advantage in quantum computing.

A February 18, 2021 D-Wave Systems press release (also on EurekAlert), which originated the news item, describes the work in more detail,

The work by scientists at D-Wave and Google also demonstrates that quantum effects can be harnessed to provide a computational advantage in D-Wave processors, at problem scale that requires thousands of qubits. Recent experiments performed on multiple D-Wave processors represent by far the largest quantum simulations carried out by existing quantum computers to date.

The paper, entitled “Scaling advantage over path-integral Monte Carlo in quantum simulation of geometrically frustrated magnets”, was published in the journal Nature Communications (DOI 10.1038/s41467-021-20901-5, February 18, 2021). D-Wave researchers programmed the D-Wave 2000Q™ system to model a two-dimensional frustrated quantum magnet using artificial spins. The behavior of the magnet was described by the Nobel-prize winning work of theoretical physicists Vadim Berezinskii, J. Michael Kosterlitz and David Thouless. They predicted a new state of matter in the 1970s characterized by nontrivial topological properties. This new research is a continuation of previous breakthrough work published by D-Wave’s team in a 2018 Nature paper entitled “Observation of topological phenomena in a programmable lattice of 1,800 qubits” (Vol. 560, Issue 7719, August 22, 2018). In this latest paper, researchers from D-Wave, alongside contributors from Google, utilize D-Wave’s lower noise processor to achieve superior performance and glean insights into the dynamics of the processor never observed before.

“This work is the clearest evidence yet that quantum effects provide a computational advantage in D-Wave processors,” said Dr. Andrew King, principal investigator for this work at D-Wave. “Tying the magnet up into a topological knot and watching it escape has given us the first detailed look at dynamics that are normally too fast to observe. What we see is a huge benefit in absolute terms, with the scaling advantage in temperature and size that we would hope for. This simulation is a real problem that scientists have already attacked using the algorithms we compared against, marking a significant milestone and an important foundation for future development. This wouldn’t have been possible today without D-Wave’s lower noise processor.”

“The search for quantum advantage in computations is becoming increasingly lively because there are special problems where genuine progress is being made. These problems may appear somewhat contrived even to physicists, but in this paper from a collaboration between D-Wave Systems, Google, and Simon Fraser University [SFU], it appears that there is an advantage for quantum annealing using a special purpose processor over classical simulations for the more ‘practical’ problem of finding the equilibrium state of a particular quantum magnet,” said Prof. Dr. Gabriel Aeppli, professor of physics at ETH Zürich and EPF Lausanne, and head of the Photon Science Division of the Paul Scherrer Institute. “This comes as a surprise given the belief of many that quantum annealing has no intrinsic advantage over path integral Monte Carlo programs implemented on classical processors.”

“Nascent quantum technologies mature into practical tools only when they leave classical counterparts in the dust in solving real-world problems,” said Hidetoshi Nishimori, Professor, Institute of Innovative Research, Tokyo Institute of Technology. “A key step in this direction has been achieved in this paper by providing clear evidence of a scaling advantage of the quantum annealer over an impregnable classical computing competitor in simulating dynamical properties of a complex material. I send sincere applause to the team.”

“Successfully demonstrating such complex phenomena is, on its own, further proof of the programmability and flexibility of D-Wave’s quantum computer,” said D-Wave CEO Alan Baratz. “But perhaps even more important is the fact that this was not demonstrated on a synthetic or ‘trick’ problem. This was achieved on a real problem in physics against an industry-standard tool for simulation–a demonstration of the practical value of the D-Wave processor. We must always be doing two things: furthering the science and increasing the performance of our systems and technologies to help customers develop applications with real-world business value. This kind of scientific breakthrough from our team is in line with that mission and speaks to the emerging value that it’s possible to derive from quantum computing today.”

The scientific achievements presented in Nature Communications further underpin D-Wave’s ongoing work with world-class customers to develop over 250 early quantum computing applications, with a number piloting in production applications, in diverse industries such as manufacturing, logistics, pharmaceutical, life sciences, retail and financial services. In September 2020, D-Wave brought its next-generation Advantage™ quantum system to market via the Leap™ quantum cloud service. The system includes more than 5,000 qubits and 15-way qubit connectivity, as well as an expanded hybrid solver service capable of running business problems with up to one million variables. The combination of Advantage’s computing power and scale with the hybrid solver service gives businesses the ability to run performant, real-world quantum applications for the first time.

That last paragraph seems more sales pitch than research oriented. It’s not unexpected in a company’s press release but I was surprised that the editors at EurekAlert didn’t remove it.

Here’s a link to and a citation for the latest paper,

Scaling advantage over path-integral Monte Carlo in quantum simulation of geometrically frustrated magnets by Andrew D. King, Jack Raymond, Trevor Lanting, Sergei V. Isakov, Masoud Mohseni, Gabriel Poulin-Lamarre, Sara Ejtemaee, William Bernoudy, Isil Ozfidan, Anatoly Yu. Smirnov, Mauricio Reis, Fabio Altomare, Michael Babcock, Catia Baron, Andrew J. Berkley, Kelly Boothby, Paul I. Bunyk, Holly Christiani, Colin Enderud, Bram Evert, Richard Harris, Emile Hoskinson, Shuiyuan Huang, Kais Jooya, Ali Khodabandelou, Nicolas Ladizinsky, Ryan Li, P. Aaron Lott, Allison J. R. MacDonald, Danica Marsden, Gaelen Marsden, Teresa Medina, Reza Molavi, Richard Neufeld, Mana Norouzpour, Travis Oh, Igor Pavlov, Ilya Perminov, Thomas Prescott, Chris Rich, Yuki Sato, Benjamin Sheldan, George Sterling, Loren J. Swenson, Nicholas Tsai, Mark H. Volkmann, Jed D. Whittaker, Warren Wilkinson, Jason Yao, Hartmut Neven, Jeremy P. Hilton, Eric Ladizinsky, Mark W. Johnson, Mohammad H. Amin. Nature Communications volume 12, Article number: 1113 (2021) DOI: https://doi.org/10.1038/s41467-021-20901-5 Published: 18 February 2021

This paper is open access.

Quantum annealing and more

Dr. Andrew King, one of the D-Wave researchers, has written a February 18, 2021 article on Medium explaining some of the work. I’ve excerpted one of King’s points,

Insight #1: We observed what actually goes on under the hood in the processor for the first time

Quantum annealing — the approach adopted by D-Wave from the beginning — involves setting up a simple but purely quantum initial state, and gradually reducing the “quantumness” until the system is purely classical. This takes on the order of a microsecond. If you do it right, the classical system represents a hard (NP-complete) computational problem, and the state has evolved to an optimal, or at least near-optimal, solution to that problem.

What happens at the beginning and end of the computation are about as simple as quantum computing gets. But the action in the middle is hard to get a handle on, both theoretically and experimentally. That’s one reason these experiments are so important: they provide high-fidelity measurements of the physical processes at the core of quantum annealing. Our 2018 Nature article introduced the same simulation, but without measuring computation time. To benchmark the experiment this time around, we needed lower-noise hardware (in this case, we used the D-Wave 2000Q lower noise quantum computer), and we needed, strangely, to slow the simulation down. Since the quantum simulation happens so fast, we actually had to make things harder. And we had to find a way to slow down both quantum and classical simulation in an equitable way. The solution? Topological obstruction.
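King’s description of the annealing schedule can be made concrete with a toy calculation. The sketch below is my own illustration (not D-Wave code, and the two-spin problem instance is made up): it builds the textbook transverse-field Ising Hamiltonian H(s) = (1 - s) * H_driver + s * H_problem and diagonalizes it at a few points along the anneal, showing the ground state move from an equal superposition of all classical states at s = 0 to the optimal classical configuration at s = 1,

```python
# Toy illustration of a quantum annealing schedule (not D-Wave's actual code).
# H(s) = (1 - s) * H_driver + s * H_problem, with s swept from 0 to 1.
import numpy as np

I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def on_site(op, site):
    """Embed a single-spin operator on one of two sites."""
    return np.kron(op, I) if site == 0 else np.kron(I, op)

# Driver Hamiltonian: a transverse field on both spins (the purely quantum start).
H_driver = -(on_site(X, 0) + on_site(X, 1))

# Problem Hamiltonian: a made-up two-spin Ising instance (values chosen for illustration).
J, h = -1.0, 0.3
H_problem = J * on_site(Z, 0) @ on_site(Z, 1) + h * on_site(Z, 0)

for s in (0.0, 0.25, 0.5, 0.75, 1.0):
    H = (1 - s) * H_driver + s * H_problem
    energies, states = np.linalg.eigh(H)
    ground = states[:, 0]
    probs = np.round(np.abs(ground) ** 2, 3)   # weights on |00>, |01>, |10>, |11>
    print(f"s = {s:.2f}  ground energy = {energies[0]:+.3f}  P(00,01,10,11) = {probs}")
```

On real hardware nothing is diagonalized, of course; the processor physically evolves while the schedule is swept, which is exactly the hard-to-observe dynamics King describes.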

If you have time and the inclination, I encourage you to read King’s piece.

Quantum supremacy

This ‘supremacy’ refers to an engineering milestone, and an October 23, 2019 news item on ScienceDaily announces the milestone has been reached,

Researchers in UC [University of California] Santa Barbara/Google scientist John Martinis’ group have made good on their claim to quantum supremacy. Using 53 entangled quantum bits (“qubits”), their Sycamore computer has taken on — and solved — a problem considered intractable for classical computers.

An October 23, 2019 UC Santa Barbara news release (also on EurekAlert) by Sonia Fernandez, which originated the news item, delves further into the work,

“A computation that would take 10,000 years on a classical supercomputer took 200 seconds on our quantum computer,” said Brooks Foxen, a graduate student researcher in the Martinis Group. “It is likely that the classical simulation time, currently estimated at 10,000 years, will be reduced by improved classical hardware and algorithms, but, since we are currently 1.5 trillion times faster, we feel comfortable laying claim to this achievement.”

The feat is outlined in a paper in the journal Nature.

The milestone comes after roughly two decades of quantum computing research conducted by Martinis and his group, from the development of a single superconducting qubit to systems including architectures of 72 and, with Sycamore, 54 qubits (one didn’t perform) that take advantage of both the awe-inspiring and bizarre properties of quantum mechanics.

“The algorithm was chosen to emphasize the strengths of the quantum computer by leveraging the natural dynamics of the device,” said Ben Chiaro, another graduate student researcher in the Martinis Group. That is, the researchers wanted to test the computer’s ability to hold and rapidly manipulate a vast amount of complex, unstructured data.

“We basically wanted to produce an entangled state involving all of our qubits as quickly as we can,” Foxen said, “and so we settled on a sequence of operations that produced a complicated superposition state that, when measured, returns a bitstring with a probability determined by the specific sequence of operations used to prepare that particular superposition. The exercise, which was to verify that the circuit’s output corresponds to the sequence used to prepare the state, sampled the quantum circuit a million times in just a few minutes, exploring all possibilities — before the system could lose its quantum coherence.”

‘A complex superposition state’

“We performed a fixed set of operations that entangles 53 qubits into a complex superposition state,” Chiaro explained. “This superposition state encodes the probability distribution. For the quantum computer, preparing this superposition state is accomplished by applying a sequence of tens of control pulses to each qubit in a matter of microseconds. We can prepare and then sample from this distribution by measuring the qubits a million times in 200 seconds.”

“For classical computers, it is much more difficult to compute the outcome of these operations because it requires computing the probability of being in any one of the 2^53 possible states, where the 53 comes from the number of qubits — the exponential scaling is why people are interested in quantum computing to begin with,” Foxen said. “This is done by matrix multiplication, which is expensive for classical computers as the matrices become large.”
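Foxen’s point about matrix multiplication and exponential scaling is easy to see in a naive state-vector simulation. The sketch below is my own generic illustration (ordinary Hadamard gates rather than Sycamore’s gate set): each added qubit doubles the number of complex amplitudes that have to be stored and updated, which is why 53 qubits defeats this brute-force approach,

```python
# Naive state-vector simulation: memory and work grow as 2**n.
# My own generic illustration; the actual classical comparison used far more
# sophisticated algorithms than this brute-force approach.
import numpy as np

def apply_single_qubit_gate(state, gate, target, n):
    """Apply a 2x2 gate to the `target` qubit of an n-qubit state vector."""
    psi = state.reshape([2] * n)                        # one axis per qubit
    psi = np.tensordot(gate, psi, axes=([1], [target])) # contract gate with that axis
    psi = np.moveaxis(psi, 0, target)                   # put the qubit axis back in place
    return psi.reshape(-1)

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # Hadamard gate

for n in (10, 18, 24):
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0                                      # start in |00...0>
    for q in range(n):                                  # one layer of gates
        state = apply_single_qubit_gate(state, H, q, n)
    print(f"n = {n:2d} qubits -> {2 ** n:>10,} amplitudes, {state.nbytes / 1e6:,.1f} MB")

# For Sycamore's 53 qubits, the state vector alone needs 2**53 complex numbers:
print(f"n = 53 qubits -> {2 ** 53 * 16 / 1e15:.0f} petabytes just to store the state")
```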

According to the new paper, the researchers used a method called cross-entropy benchmarking to compare the quantum circuit’s output (a “bitstring”) to its “corresponding ideal probability computed via simulation on a classical computer” to ascertain that the quantum computer was working correctly.
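For readers who want the gist of cross-entropy benchmarking, here is a toy version of the linear XEB score used in this kind of experiment: simulate the ideal circuit classically, look up the ideal probability of each bitstring the hardware actually returned, and compute F = (2^n) * mean(P(x)) - 1, which lands near 1 for a faithful device sampling a random circuit’s (Porter-Thomas-like) output distribution and near 0 for uniform noise. The ‘ideal’ distribution and the samples below are synthetic stand-ins, not Sycamore data,

```python
# Toy linear cross-entropy benchmarking (XEB); illustrative only, not Google's analysis code.
import numpy as np

rng = np.random.default_rng(0)
n = 10                      # small toy register (Sycamore used 53)
dim = 2 ** n

# Stand-in for the ideal output distribution of a random circuit
# (random-circuit probabilities follow a Porter-Thomas-like, i.e. exponential, distribution).
ideal_probs = rng.exponential(scale=1.0, size=dim)
ideal_probs /= ideal_probs.sum()

def xeb_fidelity(samples, probs, dim):
    """Linear XEB: F = dim * mean(ideal probability of the observed bitstrings) - 1."""
    return dim * probs[samples].mean() - 1.0

# A 'perfect' device samples from the ideal distribution;
# a fully depolarized device returns uniformly random bitstrings.
perfect = rng.choice(dim, size=100_000, p=ideal_probs)
noisy = rng.integers(0, dim, size=100_000)

print("F_XEB (ideal sampler): ", round(xeb_fidelity(perfect, ideal_probs, dim), 3))  # ~1
print("F_XEB (uniform noise): ", round(xeb_fidelity(noisy, ideal_probs, dim), 3))    # ~0
```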

“We made a lot of design choices in the development of our processor that are really advantageous,” said Chiaro. Among these advantages, he said, are the ability to experimentally tune the parameters of the individual qubits as well as their interactions.

While the experiment was chosen as a proof-of-concept for the computer, the research has resulted in a very real and valuable tool: a certified random number generator. Useful in a variety of fields, random numbers can ensure that encrypted keys can’t be guessed, or that a sample from a larger population is truly representative, leading to optimal solutions for complex problems and more robust machine learning applications. The speed with which the quantum circuit can produce its randomized bit string is so great that there is no time to analyze and “cheat” the system.

“Quantum mechanical states do things that go beyond our day-to-day experience and so have the potential to provide capabilities and application that would otherwise be unattainable,” commented Joe Incandela, UC Santa Barbara’s vice chancellor for research. “The team has demonstrated the ability to reliably create and repeatedly sample complicated quantum states involving 53 entangled elements to carry out an exercise that would take millennia to do with a classical supercomputer. This is a major accomplishment. We are at the threshold of a new era of knowledge acquisition.”

Looking ahead

With an achievement like “quantum supremacy,” it’s tempting to think that the UC Santa Barbara/Google researchers will plant their flag and rest easy. But for Foxen, Chiaro, Martinis and the rest of the UCSB/Google AI Quantum group, this is just the beginning.

“It’s kind of a continuous improvement mindset,” Foxen said. “There are always projects in the works.” In the near term, further improvements to these “noisy” qubits may enable the simulation of interesting phenomena in quantum mechanics, such as thermalization, or the vast amount of possibility in the realms of materials and chemistry.

In the long term, however, the scientists are always looking to improve coherence times, or, at the other end, to detect and fix errors, which would take many additional qubits per qubit being checked. These efforts have been running parallel to the design and build of the quantum computer itself, and ensure the researchers have a lot of work before hitting their next milestone.

“It’s been an honor and a pleasure to be associated with this team,” Chiaro said. “It’s a great collection of strong technical contributors with great leadership and the whole team really synergizes well.”

Here’s a link to and a citation for the paper,

Quantum supremacy using a programmable superconducting processor by Frank Arute, Kunal Arya, Ryan Babbush, Dave Bacon, Joseph C. Bardin, Rami Barends, Rupak Biswas, Sergio Boixo, Fernando G. S. L. Brandao, David A. Buell, Brian Burkett, Yu Chen, Zijun Chen, Ben Chiaro, Roberto Collins, William Courtney, Andrew Dunsworth, Edward Farhi, Brooks Foxen, Austin Fowler, Craig Gidney, Marissa Giustina, Rob Graff, Keith Guerin, Steve Habegger, Matthew P. Harrigan, Michael J. Hartmann, Alan Ho, Markus Hoffmann, Trent Huang, Travis S. Humble, Sergei V. Isakov, Evan Jeffrey, Zhang Jiang, Dvir Kafri, Kostyantyn Kechedzhi, Julian Kelly, Paul V. Klimov, Sergey Knysh, Alexander Korotkov, Fedor Kostritsa, David Landhuis, Mike Lindmark, Erik Lucero, Dmitry Lyakh, Salvatore Mandrà, Jarrod R. McClean, Matthew McEwen, Anthony Megrant, Xiao Mi, Kristel Michielsen, Masoud Mohseni, Josh Mutus, Ofer Naaman, Matthew Neeley, Charles Neill, Murphy Yuezhen Niu, Eric Ostby, Andre Petukhov, John C. Platt, Chris Quintana, Eleanor G. Rieffel, Pedram Roushan, Nicholas C. Rubin, Daniel Sank, Kevin J. Satzinger, Vadim Smelyanskiy, Kevin J. Sung, Matthew D. Trevithick, Amit Vainsencher, Benjamin Villalonga, Theodore White, Z. Jamie Yao, Ping Yeh, Adam Zalcman, Hartmut Neven & John M. Martinis. Nature volume 574, pages505–510 (2019) DOI: https://doi.org/10.1038/s41586-019-1666-5 Issue Date 24 October 2019

This paper appears to be open access.

Seeing the future with quantum computing

Researchers at the University of Sydney (Australia) have demonstrated the ability to see the ‘quantum future’ according to a Jan. 16, 2017 news item on ScienceDaily,

Scientists at the University of Sydney have demonstrated the ability to “see” the future of quantum systems, and used that knowledge to preempt their demise, in a major achievement that could help bring the strange and powerful world of quantum technology closer to reality.

The applications of quantum-enabled technologies are compelling and already demonstrating significant impacts — especially in the realm of sensing and metrology. And the potential to build exceptionally powerful quantum computers using quantum bits, or qubits, is driving investment from the world’s largest companies.

However a significant obstacle to building reliable quantum technologies has been the randomisation of quantum systems by their environments, or decoherence, which effectively destroys the useful quantum character.

The physicists have taken a technical quantum leap in addressing this, using techniques from big data to predict how quantum systems will change and then preventing the system’s breakdown from occurring.

A Jan. 14, 2017 University of Sydney press release (also on EurekAlert), which originated the news item, expands on the theme,

“Much the way the individual components in mobile phones will eventually fail, so too do quantum systems,” said the paper’s senior author Professor Michael J.  Biercuk.

“But in quantum technology the lifetime is generally measured in fractions of a second, rather than years.”

Professor Biercuk, from the University of Sydney’s School of Physics and a chief investigator at the Australian Research Council’s Centre of Excellence for Engineered Quantum Systems, said his group had demonstrated it was possible to suppress decoherence in a preventive manner. The key was to develop a technique to predict how the system would disintegrate.

Professor Biercuk highlighted the challenges of making predictions in a quantum world: “Humans routinely employ predictive techniques in our daily experience; for instance, when we play tennis we predict where the ball will end up based on observations of the airborne ball,” he said.

“This works because the rules that govern how the ball will move, like gravity, are regular and known.  But what if the rules changed randomly while the ball was on its way to you?  In that case it’s next to impossible to predict the future behavior of that ball.

“And yet this situation is exactly what we had to deal with because the disintegration of quantum systems is random. Moreover, in the quantum realm observation erases quantumness, so our team needed to be able to guess how and when the system would randomly break.

“We effectively needed to swing at the randomly moving tennis ball while blindfolded.”

The team turned to machine learning for help in keeping their quantum systems – qubits realised in trapped atoms – from breaking.

What might look like random behavior actually contained enough information for a computer program to guess how the system would change in the future. It could then predict the future without direct observation, which would otherwise erase the system’s useful characteristics.

The predictions were remarkably accurate, allowing the team to use their guesses preemptively to compensate for the anticipated changes.

Doing this in real time allowed the team to prevent the disintegration of the quantum character, extending the useful lifetime of the qubits.
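The Sydney team’s actual machine-learning protocol is more sophisticated than anything reproduced here, but the basic ‘predict the drift, then correct it before it bites’ idea can be illustrated with a toy model. In the sketch below (entirely my own construction, with a made-up noise process), a simple linear predictor is trained on past measurements of a wandering qubit detuning and the prediction is fed forward as a correction,

```python
# Toy illustration of "predict the drift, then compensate" (an analogy only;
# the published protocol and noise model are more sophisticated).
import numpy as np

rng = np.random.default_rng(1)

# Simulate a slowly wandering qubit detuning (arbitrary units):
# a random walk with a little measurement noise on top.
steps = 400
drift = np.cumsum(rng.normal(0, 0.05, steps))
measured = drift + rng.normal(0, 0.02, steps)

def predict_next(history, order=5):
    """Least-squares linear predictor: guess the next value from the last `order` values."""
    if len(history) <= order:
        return history[-1]
    # Build (past window -> next value) training pairs from the history so far.
    X = np.array([history[i:i + order] for i in range(len(history) - order)])
    y = np.array(history[order:])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.dot(history[-order:], coeffs))

uncorrected_error, corrected_error = [], []
for t in range(50, steps):
    prediction = predict_next(list(measured[:t]))
    # Without feed-forward the qubit sits at the drifted detuning;
    # with it, a correction equal to the predicted drift is pre-applied.
    uncorrected_error.append(abs(drift[t]))
    corrected_error.append(abs(drift[t] - prediction))

print("mean |detuning| without compensation:", round(np.mean(uncorrected_error), 3))
print("mean |detuning| with predictive compensation:", round(np.mean(corrected_error), 3))
```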

“We know that building real quantum technologies will require major advances in our ability to control and stabilise qubits – to make them useful in applications,” Professor Biercuk said.

“Our techniques apply to any qubit, built in any technology, including the special superconducting circuits being used by major corporations.”

“We’re excited to be developing new capabilities that turn quantum systems from novelties into useful technologies. The quantum future is looking better all the time,” Professor Biercuk said.

Here’s a link to and a  citation for the paper,

Prediction and real-time compensation of qubit decoherence via machine learning by Sandeep Mavadia, Virginia Frey, Jarrah Sastrawan, Stephen Dona, & Michael J. Biercuk. Nature Communications 8, Article number: 14106 (2017) doi:10.1038/ncomms14106 Published online: 16 January 2017

This paper is open access.

Connecting chaos and entanglement

Researchers seem to have stumbled across a link between classical and quantum physics. A July 12, 2016 University of California at Santa Barbara (UCSB) news release (also on EurekAlert) by Sonia Fernandez provides a description of both classical and quantum physics, as well as the research that connects the two,

Using a small quantum system consisting of three superconducting qubits, researchers at UC Santa Barbara and Google have uncovered a link between aspects of classical and quantum physics thought to be unrelated: classical chaos and quantum entanglement. Their findings suggest that it would be possible to use controllable quantum systems to investigate certain fundamental aspects of nature.

“It’s kind of surprising because chaos is this totally classical concept — there’s no idea of chaos in a quantum system,” said Charles Neill, a researcher in the UCSB Department of Physics and lead author of a paper that appears in Nature Physics. “Similarly, there’s no concept of entanglement within classical systems. And yet it turns out that chaos and entanglement are really very strongly and clearly related.”

Initiated in the 15th century, classical physics generally examines and describes systems larger than atoms and molecules. It consists of hundreds of years’ worth of study including Newton’s laws of motion, electrodynamics, relativity, thermodynamics as well as chaos theory — the field that studies the behavior of highly sensitive and unpredictable systems. One classic example of chaos theory is the weather, in which a relatively small change in one part of the system is enough to foil predictions — and vacation plans — anywhere on the globe.

At smaller size and length scales in nature, however, such as those involving atoms and photons and their behaviors, classical physics falls short. In the early 20th century quantum physics emerged, with its seemingly counterintuitive and sometimes controversial science, including the notions of superposition (the theory that a particle can be located in several places at once) and entanglement (particles that are deeply linked behave as such despite physical distance from one another).

And so began the continuing search for connections between the two fields.

All systems are fundamentally quantum systems, according [to] Neill, but the means of describing in a quantum sense the chaotic behavior of, say, air molecules in an evacuated room, remains limited.

Imagine taking a balloon full of air molecules, somehow tagging them so you could see them and then releasing them into a room with no air molecules, noted co-author and UCSB/Google researcher Pedram Roushan. One possible outcome is that the air molecules remain clumped together in a little cloud following the same trajectory around the room. And yet, he continued, as we can probably intuit, the molecules will more likely take off in a variety of velocities and directions, bouncing off walls and interacting with each other, resting after the room is sufficiently saturated with them.

“The underlying physics is chaos, essentially,” he said. The molecules coming to rest — at least on the macroscopic level — is the result of thermalization, or of reaching equilibrium after they have achieved uniform saturation within the system. But in the infinitesimal world of quantum physics, there is still little to describe that behavior. The mathematics of quantum mechanics, Roushan said, do not allow for the chaos described by Newtonian laws of motion.

To investigate, the researchers devised an experiment using three quantum bits, the basic computational units of the quantum computer. Unlike classical computer bits, which utilize a binary system of two possible states (e.g., zero/one), a qubit can also use a superposition of both states (zero and one) as a single state. Additionally, multiple qubits can entangle, or link so closely that their measurements will automatically correlate. By manipulating these qubits with electronic pulses, Neill caused them to interact, rotate and evolve in the quantum analog of a highly sensitive classical system.

The result is a map of entanglement entropy of a qubit that, over time, comes to strongly resemble that of classical dynamics — the regions of entanglement in the quantum map resemble the regions of chaos on the classical map. The islands of low entanglement in the quantum map are located in the places of low chaos on the classical map.
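For anyone wondering what ‘entanglement entropy of a qubit’ means operationally: it is the von Neumann entropy of the reduced density matrix you get after tracing out the other qubits. Here is a minimal sketch using generic three-qubit states (a product state and a GHZ state), not the actual states prepared in the UCSB experiment,

```python
# Entanglement entropy of one qubit in a small register (generic illustration,
# not the UCSB three-qubit experiment itself).
import numpy as np

def reduced_density_matrix(state, keep, n):
    """Trace out all qubits except `keep` from an n-qubit pure state."""
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, keep, 0).reshape(2, -1)  # kept qubit vs. everything else
    return psi @ psi.conj().T

def entanglement_entropy(state, keep, n):
    """Von Neumann entropy (in bits) of the kept qubit's reduced state."""
    rho = reduced_density_matrix(state, keep, n)
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

n = 3
# Product state |000>: no entanglement, entropy ~0.
product = np.zeros(2 ** n, dtype=complex)
product[0] = 1.0

# GHZ state (|000> + |111>)/sqrt(2): one qubit is maximally entangled with the rest, entropy ~1 bit.
ghz = np.zeros(2 ** n, dtype=complex)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)

print("entropy of qubit 0, product state:", entanglement_entropy(product, 0, n))
print("entropy of qubit 0, GHZ state:   ", entanglement_entropy(ghz, 0, n))
```

In the experiment this quantity was mapped out over initial conditions and time, which is what produces the entanglement map compared with the classical chaos map.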

“There’s a very clear connection between entanglement and chaos in these two pictures,” said Neill. “And, it turns out that thermalization is the thing that connects chaos and entanglement. It turns out that they are actually the driving forces behind thermalization.

“What we realize is that in almost any quantum system, including on quantum computers, if you just let it evolve and you start to study what happens as a function of time, it’s going to thermalize,” added Neill, referring to the quantum-level equilibration. “And this really ties together the intuition between classical thermalization and chaos and how it occurs in quantum systems that entangle.”

The study’s findings have fundamental implications for quantum computing. At the level of three qubits, the computation is relatively simple, said Roushan, but as researchers push to build increasingly sophisticated and powerful quantum computers that incorporate more qubits to study highly complex problems that are beyond the ability of classical computing — such as those in the realms of machine learning, artificial intelligence, fluid dynamics or chemistry — a quantum processor optimized for such calculations will be a very powerful tool.

“It means we can study things that are completely impossible to study right now, once we get to bigger systems,” said Neill.

Experimental link between quantum entanglement (left) and classical chaos (right) found using a small quantum computer. Photo Credit: Courtesy Image (Courtesy: UCSB)

Here’s a link to and a citation for the paper,

Ergodic dynamics and thermalization in an isolated quantum system by C. Neill, P. Roushan, M. Fang, Y. Chen, M. Kolodrubetz, Z. Chen, A. Megrant, R. Barends, B. Campbell, B. Chiaro, A. Dunsworth, E. Jeffrey, J. Kelly, J. Mutus, P. J. J. O’Malley, C. Quintana, D. Sank, A. Vainsencher, J. Wenner, T. C. White, A. Polkovnikov, & J. M. Martinis. Nature Physics (2016)  doi:10.1038/nphys3830 Published online 11 July 2016

This paper is behind a paywall.

D-Wave passes 1000-qubit barrier

A local (Vancouver, Canada-based) quantum computing company, D-Wave, is making quite a splash lately due to a technical breakthrough. h/t to Speaking up for Canadian Science for the Business in Vancouver article and to Nanotechnology Now for the Harris & Harris Group press release and the Economist article.

A June 22, 2015 article by Tyler Orton for Business in Vancouver describes D-Wave’s latest technical breakthrough,

“This updated processor will allow significantly more complex computational problems to be solved than ever before,” Jeremy Hilton, D-Wave’s vice-president of processor development, wrote in a June 22 [2015] blog entry.

Regular computers use two bits – ones and zeroes – to make calculations, while quantum computers rely on qubits.

Qubits possess a “superposition” that allows them to be one and zero at the same time, meaning they can calculate all possible values in a single operation.

But the algorithm for a full-scale quantum computer requires 8,000 qubits.

A June 23, 2015 Harris & Harris Group press release adds more information about the breakthrough,

Harris & Harris Group, Inc. (Nasdaq: TINY), an investor in transformative companies enabled by disruptive science, notes that its portfolio company, D-Wave Systems, Inc., announced that it has successfully fabricated 1,000 qubit processors that power its quantum computers. D-Wave’s quantum computer runs a quantum annealing algorithm to find the lowest points, corresponding to optimal or near optimal solutions, in a virtual “energy landscape.” Every additional qubit doubles the search space of the processor. At 1,000 qubits, the new processor considers 2^1000 possibilities simultaneously, a search space which is substantially larger than the 2^512 possibilities available to the company’s currently available 512 qubit D-Wave Two. In fact, the new search space contains far more possibilities than there are particles in the observable universe.
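The arithmetic behind that claim is easy to check, since Python handles arbitrarily large integers; taking the commonly quoted rough estimate of 10^80 particles in the observable universe,

```python
# Quick arithmetic check of the press release's claims about search-space size.
states_1000 = 2 ** 1000          # configurations of a 1000-qubit register
states_512 = 2 ** 512            # configurations of the earlier 512-qubit D-Wave Two
particles_in_universe = 10 ** 80 # commonly quoted rough estimate

print("2^1000 has", len(str(states_1000)), "digits")                       # 302 digits, ~1.07e301
print("2^512  has", len(str(states_512)), "digits")                        # 155 digits, ~1.34e154
print("2^1000 / 2^512 = 2^488, which has", len(str(states_1000 // states_512)), "digits")
print("2^1000 exceeds the 10^80 particle estimate:", states_1000 > particles_in_universe)
```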

A June 22, 2015 D-Wave news release, which originated the technical details about the breakthrough found in the Harris & Harris press release, provides more information along with some marketing hype (hyperbole), Note: Links have been removed,

As the only manufacturer of scalable quantum processors, D-Wave breaks new ground with every succeeding generation it develops. The new processors, comprising over 128,000 Josephson tunnel junctions, are believed to be the most complex superconductor integrated circuits ever successfully yielded. They are fabricated in part at D-Wave’s facilities in Palo Alto, CA and at Cypress Semiconductor’s wafer foundry located in Bloomington, Minnesota.

“Temperature, noise, and precision all play a profound role in how well quantum processors solve problems.  Beyond scaling up the technology by doubling the number of qubits, we also achieved key technology advances prioritized around their impact on performance,” said Jeremy Hilton, D-Wave vice president, processor development. “We expect to release benchmarking data that demonstrate new levels of performance later this year.”

The 1000-qubit milestone is the result of intensive research and development by D-Wave and reflects a triumph over a variety of design challenges aimed at enhancing performance and boosting solution quality. Beyond the much larger number of qubits, other significant innovations include:

• Lower Operating Temperature: While the previous generation processor ran at a temperature close to absolute zero, the new processor runs 40% colder. The lower operating temperature enhances the importance of quantum effects, which increases the ability to discriminate the best result from a collection of good candidates.
• Reduced Noise: Through a combination of improved design, architectural enhancements and materials changes, noise levels have been reduced by 50% in comparison to the previous generation. The lower noise environment enhances problem-solving performance while boosting reliability and stability.
• Increased Control Circuitry Precision: In the testing to date, the increased precision coupled with the noise reduction has demonstrated improved precision by up to 40%. To accomplish both while also improving manufacturing yield is a significant achievement.
• Advanced Fabrication: The new processors comprise over 128,000 Josephson junctions (tunnel junctions with superconducting electrodes) in a 6-metal layer planar process with 0.25μm features, believed to be the most complex superconductor integrated circuits ever built.
• New Modes of Use: The new technology expands the boundaries of ways to exploit quantum resources. In addition to performing discrete optimization like its predecessor, firmware and software upgrades will make it easier to use the system for sampling applications.

“Breaking the 1000 qubit barrier marks the culmination of years of research and development by our scientists, engineers and manufacturing team,” said D-Wave CEO Vern Brownell. “It is a critical step toward bringing the promise of quantum computing to bear on some of the most challenging technical, commercial, scientific, and national defense problems that organizations face.”

A June 20, 2015 article in The Economist notes the growing commercial interest in quantum computing and provides good introductory information about the field. The article includes an analysis of various research efforts in Canada (they mention D-Wave), the US, and the UK. These excerpts don’t do justice to the article but will hopefully whet your appetite or provide an overview for anyone with limited time,

A COMPUTER proceeds one step at a time. At any particular moment, each of its bits—the binary digits it adds and subtracts to arrive at its conclusions—has a single, definite value: zero or one. At that moment the machine is in just one state, a particular mixture of zeros and ones. It can therefore perform only one calculation next. This puts a limit on its power. To increase that power, you have to make it work faster.

But bits do not exist in the abstract. Each depends for its reality on the physical state of part of the computer’s processor or memory. And physical states, at the quantum level, are not as clear-cut as classical physics pretends. That leaves engineers a bit of wriggle room. By exploiting certain quantum effects they can create bits, known as qubits, that do not have a definite value, thus overcoming classical computing’s limits.

… The biggest question is what the qubits themselves should be made from.

A qubit needs a physical system with two opposite quantum states, such as the direction of spin of an electron orbiting an atomic nucleus. Several things which can do the job exist, and each has its fans. Some suggest nitrogen atoms trapped in the crystal lattices of diamonds. Calcium ions held in the grip of magnetic fields are another favourite. So are the photons of which light is composed (in this case the qubit would be stored in the plane of polarisation). And quasiparticles, which are vibrations in matter that behave like real subatomic particles, also have a following.

The leading candidate at the moment, though, is to use a superconductor in which the qubit is either the direction of a circulating current, or the presence or absence of an electric charge. Both Google and IBM are banking on this approach. It has the advantage that superconducting qubits can be arranged on semiconductor chips of the sort used in existing computers. That, the two firms think, should make them easier to commercialise.

Google is also collaborating with D-Wave of Vancouver, Canada, which sells what it calls quantum annealers. The field’s practitioners took much convincing that these devices really do exploit the quantum advantage, and in any case they are limited to a narrower set of problems—such as searching for images similar to a reference image. But such searches are just the type of application of interest to Google. In 2013, in collaboration with NASA and USRA, a research consortium, the firm bought a D-Wave machine in order to put it through its paces. Hartmut Neven, director of engineering at Google Research, is guarded about what his team has found, but he believes D-Wave’s approach is best suited to calculations involving fewer qubits, while Dr Martinis and his colleagues build devices with more.

It’s not clear to me if the writers at The Economist were aware of  D-Wave’s latest breakthrough at the time of writing but I think not. In any event, they (The Economist writers) have included a provocative tidbit about quantum encryption,

Documents released by Edward Snowden, a whistleblower, revealed that the Penetrating Hard Targets programme of America’s National Security Agency was actively researching “if, and how, a cryptologically useful quantum computer can be built”. In May IARPA [Intelligence Advanced Research Projects Agency], the American government’s intelligence-research arm, issued a call for partners in its Logical Qubits programme, to make robust, error-free qubits. In April, meanwhile, Tanja Lange and Daniel Bernstein of Eindhoven University of Technology, in the Netherlands, announced PQCRYPTO, a programme to advance and standardise “post-quantum cryptography”. They are concerned that encrypted communications captured now could be subjected to quantum cracking in the future. That means strong pre-emptive encryption is needed immediately.

I encourage you to read the Economist article.

Two final comments. (1) The latest piece, prior to this one, about D-Wave was in a Feb. 6, 2015 posting about what was then new investment in the company. (2) A Canadian effort in the field of quantum cryptography was mentioned in a May 11, 2015 posting (scroll down about 50% of the way) featuring a profile of Raymond Laflamme of the University of Waterloo’s Institute for Quantum Computing, in the context of an announcement about the science media initiative Research2Reality.

Solving an iridescent mystery could lead to quantum transistors

Iridescence has fascinated me (and scores of other people) since early childhood, and it’s interesting to note that scientists seem almost as enchanted as we amateurs are. The latest bit of ‘iridescent’ news comes from the University of Michigan in a Dec. 5, 2014 news item on ScienceDaily,

An odd, iridescent material that’s puzzled physicists for decades turns out to be an exotic state of matter that could open a new path to quantum computers and other next-generation electronics.

Physicists at the University of Michigan have discovered or confirmed several properties of the compound samarium hexaboride that raise hopes for finding the silicon of the quantum era. They say their results also close the case of how to classify the material–a mystery that has been investigated since the late 1960s.

A Dec. 5, 2014 University of Michigan news release, which originated the news item, provides more details about the mystery and the efforts to resolve it,

The researchers provide the first direct evidence that samarium hexaboride, abbreviated SmB6, is a topological insulator. Topological insulators are, to physicists, an exciting class of solids that conduct electricity like a metal across their surface, but block the flow of current like rubber through their interior. They behave in this two-faced way even though their chemical composition is the same throughout.

The U-M scientists used a technique called torque magnetometry to observe tell-tale oscillations in the material’s response to a magnetic field that reveal how electric current moves through it. Their technique also showed that the surface of samarium hexaboride holds rare Dirac electrons, particles with the potential to help researchers overcome one of the biggest hurdles in quantum computing.

These properties are particularly enticing to scientists because SmB6 is considered a strongly correlated material. Its electrons interact more closely with one another than most solids. This helps its interior maintain electricity-blocking behavior.

This deeper understanding of samarium hexaboride raises the possibility that engineers might one day route the flow of electric current in quantum computers like they do on silicon in conventional electronics, said Lu Li, assistant professor of physics in the College of Literature, Science, and the Arts and a co-author of a paper on the findings published in Science.

“Before this, no one had found Dirac electrons in a strongly correlated material,” Li said. “We thought strong correlation would hurt them, but now we know it doesn’t. While I don’t think this material is the answer, now we know that this combination of properties is possible and we can look for other candidates.”

The drawback of samarium hexaboride is that the researchers only observed these behaviors at ultracold temperatures.

Quantum computers use particles like atoms or electrons to perform processing and memory tasks. They could offer dramatic increases in computing power due to their ability to carry out scores of calculations at once. Because they could factor numbers much faster than conventional computers, they would greatly improve computer security.

In quantum computers, “qubits” stand in for the 0s and 1s of conventional computers’ binary code. While a conventional bit can be either a 0 or a 1, a qubit could be both at the same time—only until you measure it, that is. Measuring a quantum system forces it to pick one state, which eliminates its main advantage.

Dirac electrons, named after the English physicist whose equations describe their behavior, straddle the realms of classical and quantum physics, Li said. Working together with other materials, they could be capable of clumping together into a new kind of qubit that would change the properties of a material in a way that could be measured indirectly, without the qubit sensing it. The qubit could remain in both states.

While these applications are intriguing, the researchers are most enthusiastic about the fundamental science they’ve uncovered.

“In the science business you have concepts that tell you it should be this or that and when it’s two things at once, that’s a sign you have something interesting to find,” said Jim Allen, an emeritus professor of physics who studied samarium hexaboride for 30 years. “Mysteries are always intriguing to people who do curiosity-driven research.”

Allen thought for years that samarium hexaboride must be a flawed insulator that behaved like a metal at low temperatures because of defects and impurities, but he couldn’t align that with all of its other properties.

“The prediction several years ago about it being a topological insulator makes a lightbulb go off if you’re an old guy like me and you’ve been living with this stuff your whole life,” Allen said.

In 2010, Kai Sun, assistant professor of physics at U-M, led a group that first posited that SmB6 might be a topological insulator. He and Allen were also involved in seminal U-M experiments led by physics professor Cagliyan Kurdak in 2012 that showed indirectly that the hypothesis was correct.

“But the scientific community is always critical,” Sun said. “They want very strong evidence. We think this experiment finally provides direct proof of our theory.”

Here’s a link to and a citation for the researchers’ latest paper,

Two-dimensional Fermi surfaces in Kondo insulator SmB6 by G. Li, Z. Xiang, F. Yu, T. Asaba, B. Lawson, P. Cai, C. Tinsman, A. Berkley, S. Wolgast, Y. S. Eo, Dae-Jeong Kim, C. Kurdak, J. W. Allen, K. Sun, X. H. Chen, Y. Y. Wang, Z. Fisk, and Lu Li. Science 5 December 2014: Vol. 346 no. 6214 pp. 1208-1212 DOI: 10.1126/science.1250366

This paper is behind a paywall.

IBM weighs in with plans for a 7nm computer chip

On the heels of Intel’s announcement about a deal utilizing their 14nm low-power manufacturing process and speculations about a 10nm computer chip (my July 9, 2014 posting), IBM makes an announcement about a 7nm chip as per this July 10, 2014 news item on Azonano,

IBM today [July 10, 2014] announced it is investing $3 billion over the next 5 years in two broad research and early stage development programs to push the limits of chip technology needed to meet the emerging demands of cloud computing and Big Data systems. These investments will push IBM’s semiconductor innovations from today’s breakthroughs into the advanced technology leadership required for the future.

A very comprehensive July 10, 2014 news release lays out the company’s plans for this $3B investment representing 10% of IBM’s total research budget,

The first research program is aimed at so-called “7 nanometer and beyond” silicon technology that will address serious physical challenges that are threatening current semiconductor scaling techniques and will impede the ability to manufacture such chips. The second is focused on developing alternative technologies for post-silicon era chips using entirely different approaches, which IBM scientists and other experts say are required because of the physical limitations of silicon based semiconductors.

Cloud and big data applications are placing new challenges on systems, just as the underlying chip technology is facing numerous significant physical scaling limits.  Bandwidth to memory, high speed communication and device power consumption are becoming increasingly challenging and critical.

The teams will comprise IBM Research scientists and engineers from Albany and Yorktown, New York; Almaden, California; and Europe. In particular, IBM will be investing significantly in emerging areas of research that are already underway at IBM such as carbon nanoelectronics, silicon photonics, new memory technologies, and architectures that support quantum and cognitive computing. [emphasis mine]

These teams will focus on providing orders of magnitude improvement in system level performance and energy efficient computing. In addition, IBM will continue to invest in the nanosciences and quantum computing–two areas of fundamental science where IBM has remained a pioneer for over three decades.

7 nanometer technology and beyond

IBM Researchers and other semiconductor experts predict that while challenging, semiconductors show promise to scale from today’s 22 nanometers down to 14 and then 10 nanometers in the next several years.  However, scaling to 7 nanometers and perhaps below, by the end of the decade will require significant investment and innovation in semiconductor architectures as well as invention of new tools and techniques for manufacturing.

“The question is not if we will introduce 7 nanometer technology into manufacturing, but rather how, when, and at what cost?” said John Kelly, senior vice president, IBM Research. “IBM engineers and scientists, along with our partners, are well suited for this challenge and are already working on the materials science and device engineering required to meet the demands of the emerging system requirements for cloud, big data, and cognitive systems. This new investment will ensure that we produce the necessary innovations to meet these challenges.”

“Scaling to 7nm and below is a terrific challenge, calling for deep physics competencies in processing nano materials affinities and characteristics. IBM is one of a very few companies who has repeatedly demonstrated this level of science and engineering expertise,” said Richard Doherty, technology research director, The Envisioneering Group.

Bridge to a “Post-Silicon” Era

Silicon transistors, tiny switches that carry information on a chip, have been made smaller year after year, but they are approaching a point of physical limitation. Their increasingly small dimensions, now reaching the nanoscale, will prohibit any gains in performance due to the nature of silicon and the laws of physics. Within a few more generations, classical scaling and shrinkage will no longer yield the sizable benefits of lower power, lower cost and higher speed processors that the industry has become accustomed to.

With virtually all electronic equipment today built on complementary metal–oxide–semiconductor (CMOS) technology, there is an urgent need for new materials and circuit architecture designs compatible with this engineering process as the technology industry nears physical scalability limits of the silicon transistor.

Beyond 7 nanometers, the challenges dramatically increase, requiring a new kind of material to power systems of the future, and new computing platforms to solve problems that are unsolvable or difficult to solve today. Potential alternatives include new materials such as carbon nanotubes, and non-traditional computational approaches such as neuromorphic computing, cognitive computing, machine learning techniques, and the science behind quantum computing.

As the leader in advanced schemes that point beyond traditional silicon-based computing, IBM holds over 500 patents for technologies that will drive advancements at 7nm and beyond silicon — more than twice the nearest competitor. These continued investments will accelerate the invention and introduction into product development for IBM’s highly differentiated computing systems for cloud, and big data analytics.

Several exploratory research breakthroughs that could lead to major advancements in delivering dramatically smaller, faster and more powerful computer chips, include quantum computing, neurosynaptic computing, silicon photonics, carbon nanotubes, III-V technologies, low power transistors and graphene:

Quantum Computing

The most basic piece of information that a typical computer understands is a bit. Much like a light that can be switched on or off, a bit can have only one of two values: “1” or “0.” A qubit, by contrast, can take on the values “1” and “0” at the same time. Described as superposition, this special property of qubits enables quantum computers to weed through millions of solutions all at once, while desktop PCs would have to consider them one at a time.

IBM is a world leader in superconducting qubit-based quantum computing science and is a pioneer in the field of experimental and theoretical quantum information, fields that are still in the category of fundamental science – but one that, in the long term, may allow the solution of problems that are today either impossible or impractical to solve using conventional machines. The team recently demonstrated the first experimental realization of parity check with three superconducting qubits, an essential building block for one type of quantum computer.

Neurosynaptic Computing

Bringing together nanoscience, neuroscience, and supercomputing, IBM and university partners have developed an end-to-end ecosystem including a novel non-von Neumann architecture, a new programming language, as well as applications. This novel technology allows for computing systems that emulate the brain’s computing efficiency, size and power usage. IBM’s long-term goal is to build a neurosynaptic system with ten billion neurons and a hundred trillion synapses, all while consuming only one kilowatt of power and occupying less than two liters of volume.

Silicon Photonics

IBM has been a pioneer in the area of CMOS integrated silicon photonics for over 12 years, a technology that integrates functions for optical communications on a silicon chip, and the IBM team has recently designed and fabricated the world’s first monolithic silicon photonics-based transceiver with wavelength division multiplexing. Such transceivers will use light to transmit data between different components in a computing system at high data rates, low cost, and in an energetically efficient manner.
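
The appeal of wavelength division multiplexing comes down to simple arithmetic: each wavelength is an independent channel on the same fiber, so aggregate bandwidth scales with the channel count. The figures below are illustrative round numbers, not IBM's published transceiver specifications:

```python
# Illustrative WDM arithmetic -- round numbers, not IBM's published specifications.
channels = 4                 # independent wavelengths multiplexed onto one fiber
rate_per_channel_gbps = 25   # data rate carried on each wavelength, in Gb/s

aggregate_gbps = channels * rate_per_channel_gbps
print(f"{channels} wavelengths x {rate_per_channel_gbps} Gb/s = {aggregate_gbps} Gb/s per fiber")
# -> 4 wavelengths x 25 Gb/s = 100 Gb/s per fiber
```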

Silicon nanophotonics takes advantage of pulses of light for communication rather than traditional copper wiring and provides a super highway for large volumes of data to move at rapid speeds between computer chips in servers, large datacenters, and supercomputers, thus alleviating the limitations of congested data traffic and high-cost traditional interconnects.

Businesses are entering a new era of computing that requires systems to process and analyze, in real time, huge volumes of information known as Big Data. Silicon nanophotonics technology addresses Big Data challenges by seamlessly connecting various parts of large systems, whether a few centimeters or a few kilometers apart from each other, and moving terabytes of data via pulses of light through optical fibers.

III-V technologies

IBM researchers have demonstrated the world’s highest transconductance on a self-aligned III-V channel metal-oxide semiconductor (MOS) field-effect transistor (FET) device structure that is compatible with CMOS scaling. These materials and structural innovations are expected to pave the path for technology scaling at 7 nm and beyond. With more than an order of magnitude higher electron mobility than silicon, integrating III-V materials into CMOS enables higher performance at lower power density, allowing for an extension of power/performance scaling to meet the demands of cloud computing and big data systems.
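
Why does higher electron mobility show up as higher transconductance? In the textbook long-channel (square-law) MOSFET model, the saturation transconductance is gm = mu * Cox * (W/L) * Vov, so it scales linearly with mobility. The sketch below uses that simplified model with made-up device parameters purely to illustrate the scaling; it is not IBM device data:

```python
# Textbook long-channel (square-law) MOSFET model with hypothetical parameters --
# illustrates why mobility matters for transconductance; not IBM device data.
def transconductance(mu_cm2_per_vs, cox_f_per_cm2, w_over_l, vov_v):
    """gm = mu * Cox * (W/L) * Vov, in siemens."""
    return mu_cm2_per_vs * cox_f_per_cm2 * w_over_l * vov_v

cox = 2e-6        # F/cm^2, hypothetical gate capacitance per unit area
w_over_l = 10.0   # hypothetical width-to-length ratio
vov = 0.2         # V, hypothetical gate overdrive

gm_si   = transconductance(1400.0,  cox, w_over_l, vov)   # ~bulk silicon electron mobility
gm_inas = transconductance(20000.0, cox, w_over_l, vov)   # ~InAs (a III-V) electron mobility

print(f"gm ratio (III-V / Si) ~ {gm_inas / gm_si:.0f}x")  # ~14x, mirroring the mobility ratio
```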

Carbon Nanotubes

IBM researchers are working in the area of carbon nanotube (CNT) electronics and exploring whether CNTs can replace silicon beyond the 7 nm node. As part of its activities for developing carbon nanotube-based CMOS VLSI circuits, IBM recently demonstrated, for the first time in the world, 2-way CMOS NAND gates using 50 nm gate length carbon nanotube transistors.

IBM also has demonstrated the capability to purify carbon nanotubes to 99.99 percent, the highest (verified) purity demonstrated to date, and transistors at a 10 nm channel length that show no degradation due to scaling; this is unmatched by any other material system to date.

Carbon nanotubes are single atomic sheets of carbon rolled up into a tube. The carbon nanotubes form the core of a transistor device that will work in a fashion similar to the current silicon transistor, but with better performance. They could be used to replace the transistors in chips that power data-crunching servers, high-performing computers and ultrafast smartphones.

Carbon nanotube transistors can operate as excellent switches at molecular dimensions of less than ten nanometers, roughly 10,000 times thinner than a strand of human hair and less than half the size of the leading silicon technology. Comprehensive modeling of the electronic circuits suggests that about a five- to ten-fold improvement in performance compared to silicon circuits is possible.
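
Those size comparisons are easy to sanity check if you take a typical human hair to be roughly 100 micrometres across:

```python
# Quick sanity check on the size comparison (typical values, not precise measurements).
hair_diameter_nm = 100_000   # a human hair is roughly 100 micrometres = 100,000 nm across
cnt_device_nm = 10           # carbon nanotube transistors can switch below ~10 nm

print(hair_diameter_nm / cnt_device_nm)   # -> 10000.0, i.e. about 10,000 times thinner
```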

Graphene

Graphene is pure carbon in the form of a sheet just one atomic layer thick. It is an excellent conductor of heat and electricity, and it is also remarkably strong and flexible. Electrons can move through graphene about ten times faster than through commonly used semiconductor materials such as silicon and silicon germanium. These characteristics offer the possibility of building faster-switching transistors than are possible with conventional semiconductors, particularly for applications in the handheld wireless communications business, where graphene could be a more efficient switch than those currently used.

In 2013, IBM demonstrated the world’s first graphene-based integrated circuit receiver front end for wireless communications. The circuit consisted of a two-stage amplifier and a down-converter operating at 4.3 GHz.

Next Generation Low Power Transistors

In addition to new materials like CNTs, new architectures and innovative device concepts are required to boost future system performance. Power dissipation is a fundamental challenge for nanoelectronic circuits. To picture the problem, consider a leaky water faucet: even after the valve is closed as far as possible, water continues to drip. Today’s transistors behave similarly, with energy constantly “leaking,” that is, being lost or wasted, in the off-state.

A potential alternative to today’s power-hungry silicon field effect transistors is the so-called steep slope device, which could operate at much lower voltage and thus dissipate significantly less power. IBM scientists are researching tunnel field effect transistors (TFETs). In this special type of transistor, the quantum-mechanical effect of band-to-band tunneling is used to drive the current flow through the transistor. TFETs could achieve a 100-fold power reduction over CMOS transistors, so integrating TFETs with CMOS technology could improve low-power integrated circuits.
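
The “steepness” these devices chase has a well-known benchmark: a conventional MOSFET, which relies on thermal injection of carriers over a barrier, cannot switch more sharply than S = ln(10)·kT/q, roughly 60 mV per decade of current at room temperature. Band-to-band tunneling is not bound by that limit, which is the appeal of TFETs. A quick calculation of the limit (standard physics, not IBM data):

```python
import math

# Thermionic limit on the subthreshold swing of a conventional MOSFET:
# S = ln(10) * k * T / q  (volts per decade of drain current).
k = 1.380649e-23     # Boltzmann constant, J/K
q = 1.602176634e-19  # elementary charge, C
T = 300.0            # room temperature, K

swing_mv_per_decade = math.log(10) * k * T / q * 1000
print(f"{swing_mv_per_decade:.1f} mV/decade")   # ~59.5 mV/decade
# TFETs conduct by band-to-band tunneling rather than thermal injection,
# so in principle they can switch with a steeper (sub-60 mV/decade) slope.
```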

Recently, IBM has developed a novel method to integrate III-V nanowires and heterostructures directly on standard silicon substrates and built the first-ever InAs/Si tunnel diodes and TFETs, using InAs as the source and Si as the channel with a wrap-around gate, as steep slope devices for low-power applications.

“In the next ten years computing hardware systems will be fundamentally different as our scientists and engineers push the limits of semiconductor innovations to explore the post-silicon future,” said Tom Rosamilia, senior vice president, IBM Systems and Technology Group. “IBM Research and Development teams are creating breakthrough innovations that will fuel the next era of computing systems.”

IBM’s historic contributions to silicon and semiconductor innovation include the invention and/or first implementation of: the single-cell DRAM, the “Dennard scaling laws” underpinning “Moore’s Law”, chemically amplified photoresists, copper interconnect wiring, silicon-on-insulator, strained silicon engineering, multi-core microprocessors, immersion lithography, high-speed silicon germanium (SiGe), high-k gate dielectrics, embedded DRAM, 3D chip stacking, and air gap insulators.

IBM researchers are also credited with initiating the era of nanoscale devices following the Nobel Prize-winning invention of the scanning tunneling microscope, which enabled nano- and atomic-scale invention and innovation.

IBM will also continue to fund and collaborate with university researchers to explore and develop future technologies for the semiconductor industry. In particular, IBM will continue to support and fund university research through private-public partnerships such as the Nanoelectronics Research Initiative (NRI), the Semiconductor Technology Advanced Research Network (STARnet), and the Global Research Collaboration (GRC) of the Semiconductor Research Corporation.

I highlighted ‘memory systems’ as this brings to mind HP Labs and their major investment in ‘memristive’ technologies noted in my June 26, 2014 posting,

… During a two-hour presentation held a year and a half ago, they laid out how the computer might work, its benefits, and the expectation that about 75 percent of HP Labs personnel would be dedicated to this one project. “At the end, Meg [Whitman, CEO of HP] turned to [Chief Financial Officer] Cathie Lesjak and said, ‘Find them more money,’” says John Sontag, the vice president of systems research at HP, who attended the meeting and is in charge of bringing the Machine to life. “People in Labs see this as a once-in-a-lifetime opportunity.”

The Machine is based on the memristor and other associated technologies.

Getting back to IBM, there’s this analysis of the $3B investment ($600M/year for five years) by Alex Konrad in a July 10, 2014 article for Forbes (Note: A link has been removed),

When IBM … announced a $3 billion commitment to even tinier semiconductor chips that no longer depended on silicon on Wednesday, the big news was that IBM’s putting a lot of money into a future for chips where Moore’s Law no longer applies. But on second glance, the move to spend billions on more experimental ideas like silicon photonics and carbon nanotubes shows that IBM’s finally shifting large portions of its research budget into more ambitious and long-term ideas.

… IBM tells Forbes the $3 billion isn’t additional money being added to its R&D spend, an area where analysts have told Forbes they’d like to see more aggressive cash commitments in the future. IBM will still spend about $6 billion a year on R&D, 6% of revenue. Ten percent of that research budget, however, now has to come from somewhere else to fuel these more ambitious chip projects.

Neal Ungerleider’s July 11, 2014 article for Fast Company focuses on the neuromorphic computing and quantum computing aspects of this $3B initiative (Note: Links have been removed),

The new R&D initiatives fall into two categories: Developing nanotech components for silicon chips for big data and cloud systems, and experimentation with “post-silicon” microchips. This will include research into quantum computers which don’t know binary code, neurosynaptic computers which mimic the behavior of living brains, carbon nanotubes, graphene tools and a variety of other technologies.

IBM’s investment is one of the largest for quantum computing to date; the company is one of the biggest researchers in the field, along with a Canadian company named D-Wave which is partnering with Google and NASA to develop quantum computer systems.

The curious can find D-Wave Systems here. There’s also a January 19, 2012 posting here which discusses D-Wave’s situation at that time.

Final observation, these are fascinating developments especially for the insight they provide into the worries troubling HP Labs, Intel, and IBM as they jockey for position.

ETA July 14, 2014: Dexter Johnson has a July 11, 2014 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) about the IBM announcement, which features some responses he received from IBM officials to his queries,

While this may be a matter of fascinating speculation for investors, the impact on nanotechnology development  is going to be significant. To get a better sense of what it all means, I was able to talk to some of the key figures of IBM’s push in nanotechnology research.

I conducted e-mail interviews with Tze-Chiang (T.C.) Chen, vice president science & technology, IBM Fellow at the Thomas J. Watson Research Center and Wilfried Haensch, senior manager, physics and materials for logic and communications, IBM Research.

Silicon versus Nanomaterials

First, I wanted to get a sense for how long IBM envisioned sticking with silicon and when they expected the company would permanently make the move away from CMOS to alternative nanomaterials. Unfortunately, as expected, I didn’t get solid answers, except for them to say that new manufacturing tools and techniques need to be developed now.

He goes on to ask about carbon nanotubes and graphene. Interestingly, IBM does not have a wide range of electronics applications in mind for graphene. I encourage you to read Dexter’s posting, as he got answers to some very astute and pointed questions.

Graphene, Perimeter Institute, and condensed matter physics

In short, researchers at Canada’s Perimeter Institute are working on theoretical models involving graphene, which could lead to quantum computing. A July 3, 2014 Perimeter Institute news release by Erin Bow (also on EurekAlert) provides some insight into the connections between graphene and condensed matter physics (Note: Bow has included some good basic explanations of graphene, quasiparticles, and more for beginners),

One of the hottest materials in condensed matter research today is graphene.

Graphene had an unlikely start: it began with researchers messing around with pencil marks on paper. Pencil “lead” is actually made of graphite, which is a soft crystal lattice made of nothing but carbon atoms. When pencils deposit that graphite on paper, the lattice is laid down in thin sheets. By pulling that lattice apart into thinner sheets – originally using Scotch tape – researchers discovered that they could make flakes of crystal just one atom thick.

The name for this atom-scale chicken wire is graphene. Those folks with the Scotch tape, Andre Geim and Konstantin Novoselov, won the 2010 Nobel Prize for discovering it. “As a material, it is completely new – not only the thinnest ever but also the strongest,” wrote the Nobel committee. “As a conductor of electricity, it performs as well as copper. As a conductor of heat, it outperforms all other known materials. It is almost completely transparent, yet so dense that not even helium, the smallest gas atom, can pass through it.”

Developing a theoretical model of graphene

Graphene is not just a practical wonder – it’s also a wonderland for theorists. Confined to the two-dimensional surface of the graphene, the electrons behave strangely. All kinds of new phenomena can be seen, and new ideas can be tested. Testing new ideas in graphene is exactly what Perimeter researchers Zlatko Papić and Dmitry (Dima) Abanin set out to do.

“Dima and I started working on graphene a very long time ago,” says Papić. “We first met in 2009 at a conference in Sweden. I was a grad student and Dima was in the first year of his postdoc, I think.”

The two young scientists got to talking about what new physics they might be able to observe in the strange new material when it is exposed to a strong magnetic field.

“We decided we wanted to model the material,” says Papić. They’ve been working on their theoretical model of graphene, on and off, ever since. The two are now both at Perimeter Institute, where Papić is a postdoctoral researcher and Abanin is a faculty member. They are both cross-appointed with the Institute for Quantum Computing (IQC) at the University of Waterloo.

In January 2014, they published a paper in Physical Review Letters presenting new ideas about how to induce a strange but interesting state in graphene – one where it appears as if particles inside it have a fraction of an electron’s charge.

It’s called the fractional quantum Hall effect (FQHE), and it’s head turning. Like the speed of light or Planck’s constant, the charge of the electron is a fixed point in the disorienting quantum universe.

Every system in the universe carries whole multiples of a single electron’s charge. When the FQHE was first discovered in the 1980s, condensed matter physicists quickly worked out that the fractionally charged “particles” inside their semiconductors were actually quasiparticles – that is, emergent collective behaviours of the system that imitate particles.

Graphene is an ideal material in which to study the FQHE. “Because it’s just one atom thick, you have direct access to the surface,” says Papić. “In semiconductors, where FQHE was first observed, the gas of electrons that create this effect are buried deep inside the material. They’re hard to access and manipulate. But with graphene you can imagine manipulating these states much more easily.”

In the January paper, Abanin and Papić reported novel types of FQHE states that could arise in bilayer graphene – that is, in two sheets of graphene laid one on top of another – when it is placed in a strong perpendicular magnetic field. In an earlier work from 2012, they argued that applying an electric field across the surface of bilayer graphene could offer a unique experimental knob to induce transitions between FQHE states. Combining the two effects, they argued, would be an ideal way to look at special FQHE states and the transitions between them.

Once the scientists developed their theory they went to work on some experiments,

Two experimental groups – one in Geneva, involving Abanin, and one at Columbia, involving both Abanin and Papić – have since put the electric field + magnetic field method to good use. The paper by the Columbia group appears in the July 4 issue of Science. A third group, led by Amir Yacoby of Harvard, is doing closely related work.

“We often work hand-in-hand with experimentalists,” says Papić. “One of the reasons I like condensed matter is that often even the most sophisticated, cutting-edge theory stands a good chance of being quickly checked with experiment.”

Inside both the magnetic and electric field, the electrical resistance of the graphene demonstrates the strange behaviour characteristic of the FQHE. Instead of resistance that varies in a smooth curve with voltage, resistance jumps suddenly from one level to another, and then plateaus – a kind of staircase of resistance. Each stair step is a different state of matter, defined by the complex quantum tangle of charges, spins, and other properties inside the graphene.

“The number of states is quite rich,” says Papić. “We’re very interested in bilayer graphene because of the number of states we are detecting and because we have these mechanisms – like tuning the electric field – to study how these states are interrelated, and what happens when the material changes from one state to another.”

For the moment, researchers are particularly interested in the stair steps whose “height” is described by a fraction with an even denominator. That’s because the quasiparticles in that state are expected to have an unusual property.
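
For readers who like numbers: each stair step sits at a quantized Hall resistance R_xy = h/(νe²), where ν is the filling factor. In the fractional regime ν is a fraction such as 1/3, and the sought-after even-denominator states correspond to values like 1/2. A quick calculation with standard constants (my own illustration, not taken from the papers):

```python
from fractions import Fraction

# Quantized Hall resistance: R_xy = h / (nu * e^2), where nu is the filling factor.
h = 6.62607015e-34        # Planck constant, J*s
e = 1.602176634e-19       # elementary charge, C
von_klitzing = h / e**2   # ~25,813 ohms, the nu = 1 plateau

for nu in [Fraction(1, 3), Fraction(2, 5), Fraction(1, 2)]:
    r_xy = von_klitzing / float(nu)
    print(f"nu = {nu}: R_xy ~ {r_xy:,.0f} ohms")
# nu = 1/3: R_xy ~ 77,438 ohms
# nu = 2/5: R_xy ~ 64,532 ohms
# nu = 1/2: R_xy ~ 51,626 ohms   (an even-denominator state)
```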

There are two kinds of particles in our three-dimensional world: fermions (such as electrons), where two identical particles can’t occupy one state, and bosons (such as photons), where two identical particles actually want to occupy one state. In three dimensions, fermions are fermions and bosons are bosons, and never the twain shall meet.

But a sheet of graphene doesn’t have three dimensions – it has two. It’s effectively a tiny two-dimensional universe, and in that universe, new phenomena can occur. For one thing, fermions and bosons can meet halfway – becoming anyons, which can be anywhere in between fermions and bosons. The quasiparticles in these special stair-step states are expected to be anyons.
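
One compact way to state that difference: swapping two identical particles multiplies the quantum state by a phase factor e^(iθ). For bosons θ = 0, for fermions θ = π, and for (Abelian) anyons θ can sit anywhere in between; the non-Abelian anyons mentioned below go further, with the swap acting as a matrix rather than a single phase. A tiny numerical sketch of the Abelian case:

```python
import cmath
import math

# Exchanging two identical particles multiplies the state by exp(i*theta):
# theta = 0 -> bosons, theta = pi -> fermions, anything in between -> (Abelian) anyons.
for name, theta in [("boson", 0.0), ("fermion", math.pi), ("anyon", math.pi / 4)]:
    phase = cmath.exp(1j * theta)
    print(f"{name:7s} exchange phase: {phase.real:+.3f}{phase.imag:+.3f}j")
# boson   exchange phase: +1.000+0.000j
# fermion exchange phase: -1.000+0.000j
# anyon   exchange phase: +0.707+0.707j
```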

In particular, the researchers are hoping these quasiparticles will be non-Abelian anyons, as their theory indicates they should be. That would be exciting because non-Abelian anyons can be used in the making of qubits.

Graphene qubits?

Qubits are to quantum computers what bits are to ordinary computers: both a basic unit of information and the basic piece of equipment that stores that information. Because of their quantum complexity, qubits are more powerful than ordinary bits and their power grows exponentially as more of them are added. A quantum computer of only a hundred qubits can tackle certain problems beyond the reach of even the best non-quantum supercomputers. Or, it could, if someone could find a way to build stable qubits.
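
To put a number on that exponential growth: describing the general state of n qubits takes 2^n complex amplitudes, so by around 50 qubits the bookkeeping already strains classical memory, and 100 qubits is far beyond it.

```python
# The general state of n qubits is described by 2**n complex amplitudes.
for n in [10, 50, 100]:
    print(f"{n:3d} qubits -> {2**n:.3e} amplitudes")
#  10 qubits -> 1.024e+03 amplitudes
#  50 qubits -> 1.126e+15 amplitudes
# 100 qubits -> 1.268e+30 amplitudes
```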

The drive to make qubits is part of the reason why graphene is a hot research area in general, and why even-denominator FQHE states – with their special anyons – are sought after in particular.

“A state with some number of these anyons can be used to represent a qubit,” says Papić. “Our theory says they should be there and the experiments seem to bear that out – certainly the even-denominator FQHE states seem to be there, at least according to the Geneva experiments.”

That’s still a step away from experimental proof that those even-denominator stair-step states actually contain non-Abelian anyons. More work remains, but Papić is optimistic: “It might be easier to prove in graphene than it would be in semiconductors. Everything is happening right at the surface.”

It’s still early, but it looks as if bilayer graphene may be the magic material that allows this kind of qubit to be built. That would be a major mark on the unlikely line between pencil lead and quantum computers.

Here are links for further research,

January PRL paper mentioned above: http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.112.046602

Experimental paper from the Geneva graphene group, including Abanin: http://pubs.acs.org/doi/abs/10.1021/nl5003922

Experimental paper from the Columbia graphene group, including both Abanin and Papić: http://arxiv.org/abs/1403.2112. This paper is featured in the journal Science.

Related experiment on bilayer graphene by Amir Yacoby’s group at Harvard: http://www.sciencemag.org/content/early/2014/05/28/science.1250270

The Nobel Prize press release on graphene, mentioned above: http://www.nobelprize.org/nobel_prizes/physics/laureates/2010/press.html

I recently posted a piece about some research into the ‘scotch-tape technique’ for isolating graphene (June 30, 2014 posting). Amusingly, Geim argued against calling it the ‘scotch-tape’ technique, something I found out only recently.