Tag Archives: quantum computing

Could there be a quantum internet?

We’ve always had limited success with predicting future technologies by examining current ones. For example, the Internet and World Wide Web as we experience them today would have been unthinkable for most people in the 1950s, when computers inhabited entire buildings and satellites were a brand new technology designed for space exploration, not for bouncing communication signals around the planet. That said, this new work on a ‘quantum internet’ from Eindhoven University of Technology is quite intriguing (from a Dec. 15, 2014 news item on Nanowerk),

In the same way as we now connect computers in networks through optical signals, it could also be possible to connect future quantum computers in a ‘quantum internet’. The optical signals would then consist of individual light particles or photons. One prerequisite for a working quantum internet is control of the shape of these photons. Researchers at Eindhoven University of Technology (TU/e) and the FOM foundation [Foundation for Fundamental Research on Matter] have now succeeded for the first time in getting this control within the required short time.

A Dec. 15, 2014 Eindhoven University of Technology (TU/e) press release, which originated the news item, describes one of the problems with a ‘quantum internet’ and the researchers’ solution,

Quantum computers could in principle communicate with each other by exchanging individual photons to create a ‘quantum internet’. The shape of the photons, in other words how their energy is distributed over time, is vital for successful transmission of information. This shape must be symmetric in time, while photons that are emitted by atoms normally have an asymmetric shape. Therefore, this process requires external control in order to create a quantum internet.
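
The asymmetry being described is easy to picture. Here’s a minimal numerical sketch (my own illustration, not the TU/e method): spontaneous emission produces an envelope that rises abruptly and decays exponentially, while efficient reabsorption at a receiving node calls for a time-symmetric shape. The lifetime and the overlap measure below are assumptions chosen purely for illustration,

```python
import numpy as np

# Sketch: compare the naturally asymmetric envelope of spontaneous emission
# with the time-symmetric shape needed for efficient reabsorption.
t = np.linspace(-5, 5, 1001)   # time in units of the emitter lifetime
tau = 1.0                      # spontaneous-emission lifetime (assumed)

natural = np.where(t >= 0, np.exp(-t / tau), 0.0)  # abrupt rise, slow decay
shaped = np.exp(-np.abs(t) / tau)                  # time-symmetric target

def symmetry(p):
    """Overlap of a normalized envelope with its time-reverse (1.0 = symmetric)."""
    p = p / np.trapz(p, t)
    return np.trapz(np.sqrt(p * p[::-1]), t)

print(f"natural emission: {symmetry(natural):.2f}")  # close to 0
print(f"shaped photon:    {symmetry(shaped):.2f}")   # 1.00
```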

Optical cavity

Researchers at TU/e and FOM have succeeded in getting the required degree of control by embedding a quantum dot – a piece of semiconductor material that can transmit photons – into a ‘photonic crystal’, thereby creating an optical cavity. Then the researchers applied a very short electrical pulse to the cavity, which influences how the quantum dot interacts with it, and how the photon is emitted. By varying the strength of this pulse, they were able to control the shape of the transmitted photons.

Within a billionth of a second

The Eindhoven researchers are the first to achieve this, thanks to the use of electrical pulses shorter than a nanosecond, a billionth of a second. This is vital for use in quantum communication, as research leader Andrea Fiore of TU/e explains: “The emission of a photon only lasts for one nanosecond, so if you want to change anything you have to do it within that time. It’s like the shutter of a high-speed camera, which has to be very short if you want to capture something that changes very fast in an image. By controlling the speed at which you send a photon, you can in principle achieve very efficient exchange of photons, which is important for the future quantum internet.”

Here’s a link to and a citation for the paper,

Dynamically controlling the emission of single excitons in photonic crystal cavities by Francesco Pagliano, YongJin Cho, Tian Xia, Frank van Otten, Robert Johne, & Andrea Fiore. Nature Communications 5, Article number: 5786 (2014). DOI: 10.1038/ncomms6786. Published 15 December 2014.

This is an open access paper.

ETA Dec. 16, 2014 at 1230 hours PDT: There is a copy of the Dec. 15, 2014 news release on EurekAlert.

IBM weighs in with plans for a 7nm computer chip

On the heels of Intel’s announcement about a deal utilizing their 14nm low-power manufacturing process and speculations about a 10nm computer chip (my July 9, 2014 posting), IBM makes an announcement about a 7nm chip as per this July 10, 2014 news item on Azonano,

IBM today [July 10, 2014] announced it is investing $3 billion over the next 5 years in two broad research and early stage development programs to push the limits of chip technology needed to meet the emerging demands of cloud computing and Big Data systems. These investments will push IBM’s semiconductor innovations from today’s breakthroughs into the advanced technology leadership required for the future.

A very comprehensive July 10, 2014 news release lays out the company’s plans for this $3B investment, which represents 10% of IBM’s total research budget,

The first research program is aimed at so-called “7 nanometer and beyond” silicon technology that will address serious physical challenges that are threatening current semiconductor scaling techniques and will impede the ability to manufacture such chips. The second is focused on developing alternative technologies for post-silicon era chips using entirely different approaches, which IBM scientists and other experts say are required because of the physical limitations of silicon based semiconductors.

Cloud and big data applications are placing new challenges on systems, just as the underlying chip technology is facing numerous significant physical scaling limits.  Bandwidth to memory, high speed communication and device power consumption are becoming increasingly challenging and critical.

The teams will comprise IBM Research scientists and engineers from Albany and Yorktown, New York; Almaden, California; and Europe. In particular, IBM will be investing significantly in emerging areas of research that are already underway at IBM such as carbon nanoelectronics, silicon photonics, new memory technologies, and architectures that support quantum and cognitive computing. [emphasis mine]

These teams will focus on providing orders of magnitude improvement in system level performance and energy efficient computing. In addition, IBM will continue to invest in the nanosciences and quantum computing–two areas of fundamental science where IBM has remained a pioneer for over three decades.

7 nanometer technology and beyond

IBM researchers and other semiconductor experts predict that, while challenging, semiconductors show promise to scale from today’s 22 nanometers down to 14 and then 10 nanometers in the next several years. However, scaling to 7 nanometers and perhaps below by the end of the decade will require significant investment and innovation in semiconductor architectures, as well as the invention of new tools and techniques for manufacturing.

“The question is not if we will introduce 7 nanometer technology into manufacturing, but rather how, when, and at what cost?” said John Kelly, senior vice president, IBM Research. “IBM engineers and scientists, along with our partners, are well suited for this challenge and are already working on the materials science and device engineering required to meet the demands of the emerging system requirements for cloud, big data, and cognitive systems. This new investment will ensure that we produce the necessary innovations to meet these challenges.”

“Scaling to 7nm and below is a terrific challenge, calling for deep physics competencies in processing nanomaterials’ affinities and characteristics. IBM is one of a very few companies that have repeatedly demonstrated this level of science and engineering expertise,” said Richard Doherty, technology research director, The Envisioneering Group.

Bridge to a “Post-Silicon” Era

Silicon transistors, tiny switches that carry information on a chip, have been made smaller year after year, but they are approaching a point of physical limitation. Their increasingly small dimensions, now reaching the nanoscale, will prohibit any gains in performance due to the nature of silicon and the laws of physics. Within a few more generations, classical scaling and shrinkage will no longer yield the sizable benefits of lower power, lower cost and higher speed processors that the industry has become accustomed to.

With virtually all electronic equipment today built on complementary metal–oxide–semiconductor (CMOS) technology, there is an urgent need for new materials and circuit architecture designs compatible with this engineering process as the technology industry nears physical scalability limits of the silicon transistor.

Beyond 7 nanometers, the challenges dramatically increase, requiring a new kind of material to power systems of the future, and new computing platforms to solve problems that are unsolvable or difficult to solve today. Potential alternatives include new materials such as carbon nanotubes, and non-traditional computational approaches such as neuromorphic computing, cognitive computing, machine learning techniques, and the science behind quantum computing.

As the leader in advanced schemes that point beyond traditional silicon-based computing, IBM holds over 500 patents for technologies that will drive advancements at 7nm and beyond silicon — more than twice the nearest competitor. These continued investments will accelerate the invention and introduction into product development for IBM’s highly differentiated computing systems for cloud, and big data analytics.

Several exploratory research breakthroughs that could lead to major advancements in delivering dramatically smaller, faster and more powerful computer chips include quantum computing, neurosynaptic computing, silicon photonics, carbon nanotubes, III-V technologies, low power transistors, and graphene:

Quantum Computing

The most basic piece of information that a typical computer understands is a bit. Much like a light that can be switched on or off, a bit can have only one of two values: “1” or “0.” A quantum bit, or qubit, can hold a value of 1 or 0 as well as both values at the same time. Described as superposition, this special property of qubits enables quantum computers to weed through millions of solutions all at once, while desktop PCs would have to consider them one at a time.
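
To make that a little more concrete, here’s a minimal sketch (mine, not IBM’s) of a single qubit’s state as a normalized pair of complex amplitudes, with the equal superposition giving 50/50 measurement odds,

```python
import numpy as np

# A classical bit is 0 or 1; a qubit's state is a normalized vector of two
# complex amplitudes, so it can hold a weighted mix of both values at once.
ket0 = np.array([1.0, 0.0], dtype=complex)   # |0>
ket1 = np.array([0.0, 1.0], dtype=complex)   # |1>

qubit = (ket0 + ket1) / np.sqrt(2)           # equal superposition of 0 and 1

# Born rule: measurement probabilities are the squared amplitude magnitudes.
print(np.abs(qubit) ** 2)                    # [0.5 0.5]
```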

IBM is a world leader in superconducting qubit-based quantum computing science and is a pioneer in experimental and theoretical quantum information, a field that is still in the category of fundamental science – but one that, in the long term, may allow the solution of problems that are today either impossible or impractical to solve using conventional machines. The team recently demonstrated the first experimental realization of parity check with three superconducting qubits, an essential building block for one type of quantum computer.

Neurosynaptic Computing

Bringing together nanoscience, neuroscience, and supercomputing, IBM and university partners have developed an end-to-end ecosystem including a novel non-von Neumann architecture, a new programming language, as well as applications. This novel technology allows for computing systems that emulate the brain’s computing efficiency, size and power usage. IBM’s long-term goal is to build a neurosynaptic system with ten billion neurons and a hundred trillion synapses, all while consuming only one kilowatt of power and occupying less than two liters of volume.

Silicon Photonics

IBM has been a pioneer in the area of CMOS integrated silicon photonics for over 12 years, a technology that integrates functions for optical communications on a silicon chip, and the IBM team has recently designed and fabricated the world’s first monolithic silicon photonics based transceiver with wavelength division multiplexing.  Such transceivers will use light to transmit data between different components in a computing system at high data rates, low cost, and in an energetically efficient manner.

Silicon nanophotonics takes advantage of pulses of light for communication rather than traditional copper wiring and provides a super highway for large volumes of data to move at rapid speeds between computer chips in servers, large datacenters, and supercomputers, thus alleviating the limitations of congested data traffic and high-cost traditional interconnects.

Businesses are entering a new era of computing that requires systems to process and analyze, in real-time, huge volumes of information known as Big Data. Silicon nanophotonics technology provides answers to Big Data challenges by seamlessly connecting various parts of large systems, whether a few centimeters or a few kilometers apart from each other, and moving terabytes of data via pulses of light through optical fibers.

III-V technologies

IBM researchers have demonstrated the world’s highest transconductance on a self-aligned III-V channel metal-oxide semiconductor (MOS) field-effect transistor (FET) device structure that is compatible with CMOS scaling. These materials and structural innovations are expected to pave the path for technology scaling at 7nm and beyond. With more than an order of magnitude higher electron mobility than silicon, integrating III-V materials into CMOS enables higher performance at lower power density, allowing for an extension of power/performance scaling to meet the demands of cloud computing and big data systems.

Carbon Nanotubes

IBM Researchers are working in the area of carbon nanotube (CNT) electronics and exploring whether CNTs can replace silicon beyond the 7 nm node.  As part of its activities for developing carbon nanotube based CMOS VLSI circuits, IBM recently demonstrated — for the first time in the world — 2-way CMOS NAND gates using 50 nm gate length carbon nanotube transistors.

IBM also has demonstrated the capability for purifying carbon nanotubes to 99.99 percent, the highest (verified) purities demonstrated to date, and transistors at 10 nm channel length that show no degradation due to scaling–this is unmatched by any other material system to date.

Carbon nanotubes are single atomic sheets of carbon rolled up into a tube. The carbon nanotubes form the core of a transistor device that will work in a fashion similar to the current silicon transistor, but will be better performing. They could be used to replace the transistors in chips that power data-crunching servers, high performing computers and ultra fast smart phones.

Carbon nanotube transistors can operate as excellent switches at molecular dimensions of less than ten nanometers – 10,000 times thinner than a strand of human hair and less than half the size of the leading silicon technology. Comprehensive modeling of the electronic circuits suggests that a five- to ten-fold improvement in performance compared to silicon circuits is possible.

Graphene

Graphene is pure carbon in the form of a sheet one atomic layer thick. It is an excellent conductor of heat and electricity, and it is also remarkably strong and flexible. Electrons can move in graphene about ten times faster than in commonly used semiconductor materials such as silicon and silicon germanium. Its characteristics offer the possibility of building faster-switching transistors than are possible with conventional semiconductors, particularly for applications in the handheld wireless communications business, where it would be a more efficient switch than those currently used.

In 2013, IBM demonstrated the world’s first graphene-based integrated circuit receiver front end for wireless communications. The circuit consisted of a 2-stage amplifier and a down converter operating at 4.3 GHz.

Next Generation Low Power Transistors

In addition to new materials like CNTs, new architectures and innovative device concepts are required to boost future system performance. Power dissipation is a fundamental challenge for nanoelectronic circuits. To explain the challenge, consider a leaky water faucet – even after closing the valve as far as possible, water continues to drip. This is similar to today’s transistor, in that energy constantly “leaks,” or is lost or wasted, in the off-state.

A potential alternative to today’s power-hungry silicon field effect transistors is so-called steep-slope devices. They could operate at much lower voltage and thus dissipate significantly less power. IBM scientists are researching tunnel field effect transistors (TFETs). In this special type of transistor, the quantum-mechanical effect of band-to-band tunneling is used to drive the current flow through the transistor. TFETs could achieve a 100-fold power reduction over CMOS transistors, so integrating TFETs with CMOS technology could improve low-power integrated circuits.
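
To put a number on why “steep slope” matters, here’s a back-of-envelope calculation of my own (standard physical constants; the framing is mine, not the news release’s): a conventional transistor’s off-current can fall at best by a factor of ten for every ~60 mV of gate voltage at room temperature, a limit tunneling devices are not bound by,

```python
import math

# Subthreshold swing limit for a conventional (thermionic) transistor:
# the current changes by 10x per ln(10)*kT/q of gate voltage.
k_B = 1.380649e-23     # Boltzmann constant, J/K
q = 1.602176634e-19    # elementary charge, C
T = 300.0              # room temperature, K

swing = math.log(10) * k_B * T / q
print(f"thermionic limit: {swing * 1e3:.1f} mV/decade")  # ~59.6 mV/decade

# A TFET injects carriers by band-to-band tunneling rather than thermal
# emission over a barrier, so its swing is not tied to kT/q and can be
# steeper, which is the basis of the projected low-voltage operation.
```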

Recently, IBM has developed a novel method to integrate III-V nanowires and heterostructures directly on standard silicon substrates and built the first ever InAs/Si tunnel diodes and TFETs using InAs as source and Si as channel with wrap-around gate as steep slope device for low power consumption applications.

“In the next ten years computing hardware systems will be fundamentally different as our scientists and engineers push the limits of semiconductor innovations to explore the post-silicon future,” said Tom Rosamilia, senior vice president, IBM Systems and Technology Group. “IBM Research and Development teams are creating breakthrough innovations that will fuel the next era of computing systems.”

IBM’s historic contributions to silicon and semiconductor innovation include the invention and/or first implementation of: the single cell DRAM, the “Dennard scaling laws” underpinning “Moore’s Law”, chemically amplified photoresists, copper interconnect wiring, Silicon on Insulator, strained engineering, multi core microprocessors, immersion lithography, high speed silicon germanium (SiGe), High-k gate dielectrics, embedded DRAM, 3D chip stacking, and Air gap insulators.

IBM researchers also are credited with initiating the era of nano devices following the Nobel prize winning invention of the scanning tunneling microscope which enabled nano and atomic scale invention and innovation.

IBM will also continue to fund and collaborate with university researchers to explore and develop the future technologies for the semiconductor industry. In particular, IBM will continue to support and fund university research through private-public partnerships such as the NanoElectronics Research Initiative (NRI), the Semiconductor Technology Advanced Research Network (STARnet), and the Global Research Consortium (GRC) of the Semiconductor Research Corporation.

I highlighted ‘new memory technologies’ as this brings to mind HP Labs and their major investment in ‘memristive’ technologies noted in my June 26, 2014 posting,

… During a two-hour presentation held a year and a half ago, they laid out how the computer might work, its benefits, and the expectation that about 75 percent of HP Labs personnel would be dedicated to this one project. “At the end, Meg [Meg Whitman, CEO of HP] turned to [Chief Financial Officer] Cathie Lesjak and said, ‘Find them more money,’” says John Sontag, the vice president of systems research at HP, who attended the meeting and is in charge of bringing the Machine to life. “People in Labs see this as a once-in-a-lifetime opportunity.”

The Machine is based on the memristor and other associated technologies.

Getting back to IBM, there’s this analysis of the $3B investment ($600M/year for five years) by Alex Konrad in a July 10, 2014 article for Forbes (Note: A link has been removed),

When IBM … announced a $3 billion commitment to even tinier semiconductor chips that no longer depended on silicon on Wednesday, the big news was that IBM’s putting a lot of money into a future for chips where Moore’s Law no longer applies. But on second glance, the move to spend billions on more experimental ideas like silicon photonics and carbon nanotubes shows that IBM’s finally shifting large portions of its research budget into more ambitious and long-term ideas.

… IBM tells Forbes the $3 billion isn’t additional money being added to its R&D spend, an area where analysts have told Forbes they’d like to see more aggressive cash commitments in the future. IBM will still spend about $6 billion a year on R&D, 6% of revenue. Ten percent of that research budget, however, now has to come from somewhere else to fuel these more ambitious chip projects.

Neal Ungerleider’s July 11, 2014 article for Fast Company focuses on the neuromorphic computing and quantum computing aspects of this $3B initiative (Note: Links have been removed),

The new R&D initiatives fall into two categories: Developing nanotech components for silicon chips for big data and cloud systems, and experimentation with “post-silicon” microchips. This will include research into quantum computers which don’t know binary code, neurosynaptic computers which mimic the behavior of living brains, carbon nanotubes, graphene tools and a variety of other technologies.

IBM’s investment is one of the largest for quantum computing to date; the company is one of the biggest researchers in the field, along with a Canadian company named D-Wave which is partnering with Google and NASA to develop quantum computer systems.

The curious can find D-Wave Systems here. There’s also a January 19, 2012 posting here which discusses D-Wave’s situation at that time.

Final observation: these are fascinating developments, especially for the insight they provide into the worries troubling HP Labs, Intel, and IBM as they jockey for position.

ETA July 14, 2014: Dexter Johnson has a July 11, 2014 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) about the IBM announcement, which features some responses he received from IBM officials to his queries,

While this may be a matter of fascinating speculation for investors, the impact on nanotechnology development  is going to be significant. To get a better sense of what it all means, I was able to talk to some of the key figures of IBM’s push in nanotechnology research.

I conducted e-mail interviews with Tze-Chiang (T.C.) Chen, vice president science & technology, IBM Fellow at the Thomas J. Watson Research Center and Wilfried Haensch, senior manager, physics and materials for logic and communications, IBM Research.

Silicon versus Nanomaterials

First, I wanted to get a sense for how long IBM envisioned sticking with silicon and when they expected the company would permanently make the move away from CMOS to alternative nanomaterials. Unfortunately, as expected, I didn’t get solid answers, except for them to say that new manufacturing tools and techniques need to be developed now.

He goes on to ask about carbon nanotubes and graphene. Interestingly, IBM does not have a wide range of electronics applications in mind for graphene. I encourage you to read Dexter’s posting, as he got answers to some very astute and pointed questions.

Graphene, Perimeter Institute, and condensed matter physics

In short, researchers at Canada’s Perimeter Institute are working on theoretical models involving graphene, which could lead to quantum computing. A July 3, 2014 Perimeter Institute news release by Erin Bow (also on EurekAlert) provides some insight into the connections between graphene and condensed matter physics (Note: Bow has included some good basic explanations of graphene, quasiparticles, and more for beginners),

One of the hottest materials in condensed matter research today is graphene.

Graphene had an unlikely start: it began with researchers messing around with pencil marks on paper. Pencil “lead” is actually made of graphite, which is a soft crystal lattice made of nothing but carbon atoms. When pencils deposit that graphite on paper, the lattice is laid down in thin sheets. By pulling that lattice apart into thinner sheets – originally using Scotch tape – researchers discovered that they could make flakes of crystal just one atom thick.

The name for this atom-scale chicken wire is graphene. Those folks with the Scotch tape, Andre Geim and Konstantin Novoselov, won the 2010 Nobel Prize for discovering it. “As a material, it is completely new – not only the thinnest ever but also the strongest,” wrote the Nobel committee. “As a conductor of electricity, it performs as well as copper. As a conductor of heat, it outperforms all other known materials. It is almost completely transparent, yet so dense that not even helium, the smallest gas atom, can pass through it.”

Developing a theoretical model of graphene

Graphene is not just a practical wonder – it’s also a wonderland for theorists. Confined to the two-dimensional surface of the graphene, the electrons behave strangely. All kinds of new phenomena can be seen, and new ideas can be tested. Testing new ideas in graphene is exactly what Perimeter researchers Zlatko Papić and Dmitry (Dima) Abanin set out to do.

“Dima and I started working on graphene a very long time ago,” says Papić. “We first met in 2009 at a conference in Sweden. I was a grad student and Dima was in the first year of his postdoc, I think.”

The two young scientists got to talking about what new physics they might be able to observe in the strange new material when it is exposed to a strong magnetic field.

“We decided we wanted to model the material,” says Papić. They’ve been working on their theoretical model of graphene, on and off, ever since. The two are now both at Perimeter Institute, where Papić is a postdoctoral researcher and Abanin is a faculty member. They are both cross-appointed with the Institute for Quantum Computing (IQC) at the University of Waterloo.

In January 2014, they published a paper in Physical Review Letters presenting new ideas about how to induce a strange but interesting state in graphene – one where it appears as if particles inside it have a fraction of an electron’s charge.

It’s called the fractional quantum Hall effect (FQHE), and it’s head-turning. Like the speed of light or Planck’s constant, the charge of the electron is a fixed point in the disorienting quantum universe.

Every system in the universe carries whole multiples of a single electron’s charge. When the FQHE was first discovered in the 1980s, condensed matter physicists quickly worked out that the fractionally charged “particles” inside their semiconductors were actually quasiparticles – that is, emergent collective behaviours of the system that imitate particles.

Graphene is an ideal material in which to study the FQHE. “Because it’s just one atom thick, you have direct access to the surface,” says Papić. “In semiconductors, where FQHE was first observed, the gas of electrons that create this effect are buried deep inside the material. They’re hard to access and manipulate. But with graphene you can imagine manipulating these states much more easily.”

In the January paper, Abanin and Papić reported novel types of FQHE states that could arise in bilayer graphene – that is, in two sheets of graphene laid one on top of another – when it is placed in a strong perpendicular magnetic field. In an earlier work from 2012, they argued that applying an electric field across the surface of bilayer graphene could offer a unique experimental knob to induce transitions between FQHE states. Combining the two effects, they argued, would be an ideal way to look at special FQHE states and the transitions between them.

Once the scientists had developed their theory, they went to work on some experiments,

Two experimental groups – one in Geneva, involving Abanin, and one at Columbia, involving both Abanin and Papić – have since put the electric field + magnetic field method to good use. The paper by the Columbia group appears in the July 4 issue of Science. A third group, led by Amir Yacoby of Harvard, is doing closely related work.

“We often work hand-in-hand with experimentalists,” says Papić. “One of the reasons I like condensed matter is that often even the most sophisticated, cutting-edge theory stands a good chance of being quickly checked with experiment.”

Inside both the magnetic and electric field, the electrical resistance of the graphene demonstrates the strange behaviour characteristic of the FQHE. Instead of resistance that varies in a smooth curve with voltage, resistance jumps suddenly from one level to another, and then plateaus – a kind of staircase of resistance. Each stair step is a different state of matter, defined by the complex quantum tangle of charges, spins, and other properties inside the graphene.
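
For the curious, the plateau values follow a simple rule: on each stair step the Hall resistance locks to h/(νe²) for some rational filling factor ν. This little sketch (my own illustration, not from the news release) computes a few plateaus, including the even-denominator fractions discussed next,

```python
from fractions import Fraction

h = 6.62607015e-34    # Planck constant, J*s
e = 1.602176634e-19   # elementary charge, C

# Hall resistance on an FQHE plateau: R_xy = h / (nu * e^2).
# 1/2 and 5/2 are even-denominator states of the kind discussed below.
for nu in [Fraction(1, 3), Fraction(2, 5), Fraction(1, 2), Fraction(5, 2)]:
    R_xy = h / (float(nu) * e**2)
    print(f"nu = {nu}: R_xy = {R_xy / 1e3:7.2f} kilo-ohms")
```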

“The number of states is quite rich,” says Papić. “We’re very interested in bilayer graphene because of the number of states we are detecting and because we have these mechanisms – like tuning the electric field – to study how these states are interrelated, and what happens when the material changes from one state to another.”

For the moment, researchers are particularly interested in the stair steps whose “height” is described by a fraction with an even denominator. That’s because the quasiparticles in that state are expected to have an unusual property.

There are two kinds of particles in our three-dimensional world: fermions (such as electrons), where two identical particles can’t occupy one state, and bosons (such as photons), where two identical particles actually want to occupy one state. In three dimensions, fermions are fermions and bosons are bosons, and never the twain shall meet.

But a sheet of graphene doesn’t have three dimensions – it has two. It’s effectively a tiny two-dimensional universe, and in that universe, new phenomena can occur. For one thing, fermions and bosons can meet halfway – becoming anyons, which can be anywhere in between fermions and bosons. The quasiparticles in these special stair-step states are expected to be anyons.

In particular, the researchers are hoping these quasiparticles will be non-Abelian anyons, as their theory indicates they should be. That would be exciting because non-Abelian anyons can be used in the making of qubits.

Graphene qubits?

Qubits are to quantum computers what bits are to ordinary computers: both a basic unit of information and the basic piece of equipment that stores that information. Because of their quantum complexity, qubits are more powerful than ordinary bits and their power grows exponentially as more of them are added. A quantum computer of only a hundred qubits can tackle certain problems beyond the reach of even the best non-quantum supercomputers. Or, it could, if someone could find a way to build stable qubits.
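
The “grows exponentially” claim is easy to make concrete with a rough sketch of my own: simply writing down the general state of n qubits takes 2^n complex amplitudes, which is why a hundred qubits outruns any classical bookkeeping,

```python
# Describing the general state of n qubits classically takes 2**n complex
# amplitudes; at 16 bytes per amplitude, storage alone blows up quickly.
for n in [10, 50, 100]:
    amplitudes = 2 ** n
    print(f"{n:3d} qubits -> 2^{n} = {amplitudes:.3e} amplitudes, "
          f"~{amplitudes * 16:.3e} bytes")
```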

The drive to make qubits is part of the reason why graphene is a hot research area in general, and why even-denominator FQHE states – with their special anyons – are sought after in particular.

“A state with some number of these anyons can be used to represent a qubit,” says Papić. “Our theory says they should be there and the experiments seem to bear that out – certainly the even-denominator FQHE states seem to be there, at least according to the Geneva experiments.”

That’s still a step away from experimental proof that those even-denominator stair-step states actually contain non-Abelian anyons. More work remains, but Papić is optimistic: “It might be easier to prove in graphene than it would be in semiconductors. Everything is happening right at the surface.”

It’s still early, but it looks as if bilayer graphene may be the magic material that allows this kind of qubit to be built. That would be a major mark on the unlikely line between pencil lead and quantum computers.

Here are links for further research,

January PRL paper mentioned above: http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.112.046602

Experimental paper from the Geneva graphene group, including Abanin: http://pubs.acs.org/doi/abs/10.1021/nl5003922

Experimental paper from the Columbia graphene group, including both Abanin and Papić: http://arxiv.org/abs/1403.2112. This paper is featured in the journal Science.

Related experiment on bilayer graphene by Amir Yacoby’s group at Harvard: http://www.sciencemag.org/content/early/2014/05/28/science.1250270

The Nobel Prize press release on graphene, mentioned above: http://www.nobelprize.org/nobel_prizes/physics/laureates/2010/press.html

I recently posted a piece about some research into the ‘scotch-tape technique’ for isolating graphene (June 30, 2014 posting). Amusingly, Geim argued against calling it the ‘scotch-tape technique’, something I found out only recently.

Technion-Israel Institute of Technology and the University of Waterloo (Canada) together at last

A March 18, 2014 University of Waterloo news release describes a new agreement signed at a joint Technion-Israel Institute of Technology-University of Waterloo conference held in Israel.

“As two of the world’s top innovation universities, the University of Waterloo and Technion are natural partners,” said Feridun Hamdullahpur, president and vice-chancellor of the University of Waterloo. “This partnership positions both Waterloo and Technion for accelerated progress in the key areas of quantum information science, nanotechnology, and water. [emphasis mine] These disciplines will help to shape the future of communities, industries, and everyday life.”

The conference to mark the start of the new partnership, and a reciprocal event in Waterloo planned for later in 2014, is funded by a donation to the University of Waterloo from The Gerald Schwartz & Heather Reisman Foundation.

“The agreement between the University of Waterloo and Technion will lead to joint research projects between Israeli and Canadian scientists in areas crucial for making our world a better place,” said Peretz Lavie, president of Technion. “I could not think of a better partner for such projects than the University of Waterloo.”

The new partnership agreement will connect students and faculty from both institutions with global markets through technology transfer and commercialization opportunities with industrial partners in Canada and in Israel.

“This partnership between two global innovation leaders puts in place the conditions to support research breakthroughs and new opportunities for commercialization on an international scale,” said George Dixon, vice-president of research at Waterloo. “University of Waterloo and Technion have a history of research collaboration going back almost 20 years.”

Which one of these items does not fit on the list “quantum information science, nanotechnology, and water?” I pick water. I think they mean water remediation or water desalination or, perhaps, water research.

Given the issues with the lack of potable water in that region, the interest in water is eminently understandable. (My Feb. 24, 2014 posting mentions the situation in the Middle East in the context of nanotechnology-enabled water desalination research at Oman’s Sultan Qaboos University.)

Carbon nanotubes, good vibrations, and quantum computing

Apparently carbon nanotubes can store information within their vibrations and this could have implications for quantum computing, from the Mar. 21, 2013 news release on EurekAlert,

A carbon nanotube that is clamped at both ends can be excited to oscillate. Like a guitar string, it vibrates for an amazingly long time. “One would expect that such a system would be strongly damped, and that the vibration would subside quickly,” says Simon Rips, first author of the publication. “In fact, the string vibrates more than a million times. The information is thus retained up to one second. That is long enough to work with.”

Since such a string oscillates among many physically equivalent states, the physicists resorted to a trick: an electric field in the vicinity of the nanotube ensures that two of these states can be selectively addressed. The information can then be written and read optoelectronically. “Our concept is based on available technology,” says Michael Hartmann, head of the Emmy Noether research group Quantum Optics and Quantum Dynamics at the TU Muenchen. “It could take us a step closer to the realization of a quantum computer.”
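
A quick sanity check on those numbers (my arithmetic, with an assumed resonance frequency): a million ring-down cycles at a megahertz-scale mechanical frequency does indeed work out to a storage time of about one second,

```python
# "Vibrates more than a million times" plus "retained up to one second"
# implies a ~MHz mechanical resonance: storage time ~ cycles / frequency.
cycles = 1e6           # ring-down cycle count from the quote
frequency_hz = 1e6     # assumed ~MHz resonance of a clamped nanotube

print(f"retention ~ {cycles / frequency_hz:.1f} s")  # ~1 s
```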

The research paper can be found here,

Quantum Information Processing with Nanomechanical Qubits
Simon Rips and Michael J. Hartmann,
Physical Review Letters, 110, 120503 (2013) DOI: 10.1103/PhysRevLett.110.120503
Link: http://prl.aps.org/abstract/PRL/v110/i12/e120503

The paper is behind a paywall.

There are two Good Vibrations songs on YouTube, one by the Beach Boys and one by Marky Mark. I decided to go with this Beach Boys version in part due to its technical description at http://youtu.be/NwrKKbaClME,

FIRST TRUE STEREO version with lead vocals properly placed in the mix. I also restored the original full length of the bridge that was edited out of the released version. An official true stereo mix of the vocal version was not made back in 1967. While there are other “stereo” versions posted, for the most part they are “fake” or poor stereo versions. I tried to make the best judicious decision on sound quality, stereo imaging and mastering while maintaining TRUE STEREO integrity given the source parts available. I hope you enjoy it!

The video,

What is a diamond worth?

A couple of diamond-related news items have crossed my path lately, causing me to consider diamonds and their social implications. I’ll start with the news items: according to an April 4, 2012 news item on physorg.com, a quantum computer has been built inside a diamond (from the news item),

Diamonds are forever – or, at least, the effects of this diamond on quantum computing may be. A team that includes scientists from USC has built a quantum computer in a diamond, the first of its kind to include protection against “decoherence” – noise that prevents the computer from functioning properly.

I last mentioned decoherence in my July 21, 2011 posting about a joint (University of British Columbia, University of California at Santa Barbara and the University of Southern California) project on quantum computing.

According to the April 5, 2012 news item by Robert Perkins for the University of Southern California (USC),

The multinational team included USC professor Daniel Lidar and USC postdoctoral researcher Zhihui Wang, as well as researchers from the Delft University of Technology in the Netherlands, Iowa State University and the University of California, Santa Barbara. The findings were published today in Nature.

The team’s diamond quantum computer system featured two quantum bits, or qubits, made of subatomic particles.

As opposed to traditional computer bits, which can encode distinctly either a one or a zero, qubits can encode a one and a zero at the same time. This property, called superposition, along with the ability of quantum states to “tunnel” through energy barriers, some day will allow quantum computers to perform optimization calculations much faster than traditional computers.

Like all diamonds, the diamond used by the researchers has impurities – things other than carbon. The more impurities in a diamond, the less attractive it is as a piece of jewelry because it makes the crystal appear cloudy.

The team, however, utilized the impurities themselves.

A rogue nitrogen nucleus became the first qubit. In a second flaw sat an electron, which became the second qubit. (Though put more accurately, the “spin” of each of these subatomic particles was used as the qubit.)

Electrons are smaller than nuclei and perform computations much more quickly, but they also fall victim more quickly to decoherence. A qubit based on a nucleus, which is large, is much more stable but slower.

“A nucleus has a long decoherence time – in the milliseconds. You can think of it as very sluggish,” said Lidar, who holds appointments at the USC Viterbi School of Engineering and the USC Dornsife College of Letters, Arts and Sciences.

Though solid-state computing systems have existed before, this was the first to incorporate decoherence protection – using microwave pulses to continually switch the direction of the electron spin rotation.

“It’s a little like time travel,” Lidar said, because switching the direction of rotation time-reverses the inconsistencies in motion as the qubits move back to their original position.
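
Here’s a toy numerical illustration of that ‘time travel’ (entirely mine, and far simpler than the actual experiment): an ensemble of spins rotating at slightly different rates drifts out of phase, and reversing the rotation direction for an equal interval winds the phases back,

```python
import numpy as np

rng = np.random.default_rng(0)
rates = 1.0 + 0.05 * rng.standard_normal(1000)  # slightly mismatched spin rates

t_half = 50.0
phases = rates * t_half                         # phases spread out over time
print(f"coherence, no reversal:   {abs(np.mean(np.exp(1j * phases))):.3f}")

# Reversing the rotation direction for an equal second interval unwinds
# each spin's accumulated phase, so the ensemble re-aligns.
phases_echo = rates * t_half - rates * t_half
print(f"coherence, with reversal: {abs(np.mean(np.exp(1j * phases_echo))):.3f}")
```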

Here’s an image I downloaded from the USC webpage hosting Perkins’s news item,

The diamond in the center measures 1 mm x 1 mm. Photo/Courtesy of Delft University of Technology/UC Santa Barbara

I’m not sure what they were trying to illustrate with the image, but I thought it would provide an interesting contrast to the video that follows about the world’s first purely diamond ring,

I first came across this ring in Laura Hibberd’s March 22, 2012 piece for Huffington Post. For anyone who feels compelled to find out more about it, here’s the jeweller’s (Shawish) website.

What with the posting about Neal Stephenson and Diamond Age (aka The Diamond Age Or A Young Lady’s Illustrated Primer; a novel that integrates nanotechnology into a story about the future and ubiquitous diamonds), a quantum computer in a diamond, and this ring, I’ve started to wonder about the role diamonds will have in society. Will they be integrated into everyday objects or will they remain objects of desire? My guess is that the diamonds we create by manipulating carbon atoms will be considered everyday items while the ones formed in the bowels of the earth will retain their status.

Rail system and choreography metaphors in a couple of science articles

If you are going to use a metaphor/analogy when you’re writing about a science topic – because you want to reach beyond an audience that’s expert on the topic you’re covering, because you want to grab attention from an audience that’s inundated with material, or because you want to play (for writers, this can be a form of play [for this writer, anyway]) – I think you need to remain true to your metaphor. I realize that’s a lot tougher than it sounds.

I’ve got examples of the use of metaphors/analogies in two recent pieces of science writing.

First, here’s the title for a Jan. 23, 2012 article by Samantha Chan for The Asian Scientist,

Scientists Build DNA Rail System For Nanomotors, Complete With Tracks & Switches

Then, there’s the text, where the analogy/metaphor of a railway system with tracks and switches is developed further and then abandoned for origami tiles,

Expanding on previous work with engines traveling on straight tracks, a team of researchers at Kyoto University and the University of Oxford have used DNA building blocks to construct a motor capable of navigating a programmable network of tracks with multiple switches.

In this latest effort, the scientists built a network of tracks and switches atop DNA origami tiles, which made it possible for motor molecules to travel along these rail systems.

Sometimes, the material at hand is the issue. ‘DNA origami tiles’ is a term in this field so Chan can’t change it to ‘DNA origami ties’ which would fit with the railway analogy. By the way, the analogy itself comes from (or was influenced by) the title the scientists chose for their published paper in Nature Nanotechnology (it’s behind a paywall),

A DNA-based molecular motor that can navigate a network of tracks

All in all, this was a skillful attempt to get the most out of a metaphor/analogy.

For my second example, I’m using a Jan. 12, 2012 news release by John Sullivan for Princeton University, which was published in a Jan. 12, 2012 news item on Nanowerk. Here’s the headline from Princeton,

Ten-second dance of electrons is step toward exotic new computers

This sets up the text for the first few paragraphs (found in both the Princeton news release and the Nanowerk news item),

In the basement of Hoyt Laboratory at Princeton University, Alexei Tyryshkin clicked a computer mouse and sent a burst of microwaves washing across a silicon crystal suspended in a frozen cylinder of stainless steel.

The waves pulsed like distant music across the crystal and deep within its heart, billions of electrons started spinning to their beat.

Reaching into the silicon crystal and choreographing the dance of 100 billion infinitesimal particles is an impressive achievement on its own, but it is also a stride toward developing the technology for powerful machines known as quantum computers.

Sullivan has written some very appealing text for an audience who may or may not know about quantum computers.

Somebody on Nanowerk changed the headline to this,

Choreographing dance of electrons offers promise in pursuit of quantum computers

Here, the title has been skilfully reworded for an audience that knows more about quantum computers, while retaining the metaphor. Nicely done.

Sullivan’s text goes on to provide a fine explanation of an issue in quantum computing, maintaining coherence, for an audience not expert in quantum computing. The one niggle I do have is a shift in the metaphor,

To understand why it is so hard, imagine circus performers spinning plates on the top of sticks. Now imagine a strong wind blasting across the performance space, upending the plates and sending them crashing to the ground. In the subatomic realm, that wind is magnetism, and much of the effort in the experiment goes to minimizing its effect. By using a magnetically calm material like silicon-28, the researchers are able to keep the electrons spinning together for much longer.

Wasn’t there a way to stay with dance? You could have had dancers spinning props or perhaps the dancers themselves being blown off course and avoided the circus performers. Yes, the circus is more colourful and appealing but, in this instance, I would have worked to maintain the metaphor first introduced, assuming I’d noticed that I’d switched metaphors.

So, I think I can safely say that using metaphors is tougher than it looks.

D-Wave Systems, a Vancouver (Canada) area company gets one step closer to quantum computing

It takes a great deal of nerve to found a startup company for any emerging technology; I’m not sure what it takes to found a startup company that produces quantum computers.

D-Wave Systems: the quantum computing company (based in the Vancouver area) recently announced a demonstration in which an 84-qubit calculation solved for what Dexter Johnson at the Nanoclast blog for the IEEE (Institute of Electrical and Electronics Engineers) called ‘notoriously difficult’ Ramsey numbers.

Here’s a brief description of the demonstration (excerpted from the Jan. 12, 2012 article by Bob Yirka for physorg.com),

In the research at D-Wave, those involved worked to run a just recently discovered quantum algorithm on an actual quantum computer; in this case, to solve for a two-color Ramsey number, R(m,2), where m = 4, 5, 6, 7 and 8. This is also known as the “Party Problem” because its use can be explained by posing a problem experienced by many party planners, i.e., how to invite the minimum number of guests where one group knows a certain number of others, and another group doesn’t, forcing just the right amount of mingling. Because increasing the number of different kinds of guests increases the difficulty of finding the answer, modern computers aren’t able to find R(5,5) much less anything higher. …

Quantum algorithms take advantage of such facilities [ability to take advantage of quantum mechanics capabilities which allow superconducting circuits to recognize 1 or 0 as current traveling in opposite directions or the existence of both states simultaneously] and allow for the execution of “instructions” far faster than conventional computers ever could. In the demonstration by the D-Wave team, the computer solved for a R(8,2) Ramsey number in just 270 milliseconds using 84 qubits, though just 28 of them were used in actual computation as the rest were delegated to correcting errors. Also, for those that are curious, the answer is 8.
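
For anyone who’d like to see what’s being computed, here’s a toy brute-force check of R(m,2) for small m – ordinary classical Python of my own, nothing to do with D-Wave’s quantum algorithm – along with a hint of why the search space gets out of hand,

```python
from itertools import combinations, product

def is_ramsey(n, m):
    """True if every red/blue coloring of K_n has a red m-clique or a blue edge."""
    edges = list(combinations(range(n), 2))
    for colors in product((0, 1), repeat=len(edges)):  # 0 = red, 1 = blue
        if 1 in colors:
            continue              # a blue edge is a blue 2-clique: coloring OK
        if n < m:                 # the all-red coloring needs a red m-clique
            return False
    return True

def ramsey_m_2(m):
    n = 2
    while not is_ramsey(n, m):
        n += 1
    return n

for m in range(3, 7):
    print(f"R({m},2) = {ramsey_m_2(m)}")  # equals m; R(8,2) = 8 likewise

# The space is 2**(n choose 2) colorings, which is why numbers like R(5,5)
# remain beyond classical brute force.
```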

While Yirka goes on to applaud the accomplishment, he notes that it may not be very useful. I think that’s always an issue with the early stages of an emerging technology; it may not prove to have any practical applications now or in the future.

Dexter in his Jan. 12, 2012 blog posting about D-Wave Systems and their recent announcement speaks as someone with lengthy experience dealing with emerging technologies (he provides a little history first [I have removed links from the excerpt, please see the posting for those]),

After erring on the side of caution—if not doubt—when IEEE Spectrum [magazine] cited D-Wave Systems as one of its “Big Losers” two years ago,  it seems that there was a reversal of opinion within this publication back in June of last year when Spectrum covered D-Wave’s first big sale of a quantum computer with an article and then a podcast interview of the company’s CTO.

In the job of covering nanotechnology, one develops—sometimes—a bit more hopeful perspective on the potential of emerging technologies. Basic research that may lead to applications such as quantum computers get more easily pushed up in the development cycle than perhaps they should. So, I have been following the developments of D-Wave for at least the last seven years with a bit more credence than Spectrum had offered the company earlier.

While it may seem that D-Wave is on an irreversible upward technological slope, one problem indicated … is that capital may be beginning to dry up.

If so, it would seem almost ironic that after years of not selling anything and attracting a lot of capital, D-Wave would make a $10-million sale and then not be able to get any more funding.

Here’s an excerpt from an interview that Brian Wang had with Geordie Rose, D-Wave’s Chief Technical Officer, for The Next Big Future blog (mentioned in Dexter’s piece) which brings the conundrum Dexter notes into high relief (from Wang’s Dec. 29, 2011 post),

The next 18 months will be a critical period for Dwave systems [sic]. Raising private money has become far more difficult in the current economic conditions. If Dwave were profitable, then they could IPO. If Dwave were not able to become profitable and IPO and could not raise private capital, then there would be the risk of having to shutdown.

According to Wang’s post, D-Wave managed the feat with the Ramsey number two years ago. There was no mention of what they are currently managing to do with their quantum computer.

This is the piece I mentioned yesterday (Jan. 18, 2012) in my posting about the recently released report, Science and Engineering Indicators 2012, from the US National Science Board (NSB) in the context of the government initiative, Startup America, and what I thought was a failure to address the issue of a startup trying to become profitable.

ETA Jan. 22, 2012: Dexter Johnson, Nanoclast blog at the IEEE (Institute of Electrical and Electronics Engineers), mentions the problem in the different context of a recent US initiative to support startup companies through a public/private partnership consortium called the Advanced Manufacturing Partnership (AMP), from his Jan. 20, 2012 posting,

My concern is that a small company that has spun itself out from a university, developed some advanced prototypes, lined up their market, and picked their management group still need by some estimates somewhere in the neighborhood of $10 to $30 million to scale up to being an industrial manufacturer of a product.

Dexter’s concern is that AMP funds available for disbursement will only support a limited number of companies as they scale up.

This contrasts with the Canadian situation, where almost none of our smaller companies can get sufficient funds to scale up when they most need it, e.g., D-Wave Systems’ current situation.


Environmental decoherence tackled by University of British Columbia and California researchers

The research team at the University of British Columbia (UBC) developed a theory for the prediction and control of environmental decoherence in a complex system (an important step on the way to quantum computing), while researchers at the University of California Santa Barbara (UCSB) performed the experiments that confirmed it. Here’s an explanation of decoherence and its impact on quantum computing from the July 20, 2011 UBC news release,

Quantum mechanics states that matter can be in more than one physical state at the same time – like a coin simultaneously showing heads and tails. In small objects like electrons, physicists have had success in observing and controlling these simultaneous states, called “state superpositions.”

Larger, more complex physical systems appear to be in one consistent physical state because they interact and “entangle” with other objects in their environment. This entanglement makes these complex objects “decay” into a single state – a process called decoherence.

Quantum computing’s potential to be exponentially faster and more powerful than any conventional computer technology depends on switches that are capable of state superposition – that is, being in the “on” and “off” positions at the same time. Until now, all efforts to achieve such superposition with many molecules at once were blocked by decoherence.
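
Here’s a minimal sketch of decoherence in action (my own toy model, vastly simpler than the UBC theory): the ‘superposition-ness’ of a qubit lives in the off-diagonal entries of its density matrix, and entanglement with the environment makes those entries decay away,

```python
import numpy as np

# Start in the superposition (|0> + |1>)/sqrt(2).
psi = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
rho0 = np.outer(psi, psi.conj())   # pure-state density matrix

T2 = 1.0                           # coherence time, arbitrary units (assumed)
for t in [0.0, 0.5, 1.0, 2.0, 5.0]:
    rho = rho0.copy()
    decay = np.exp(-t / T2)        # environmental decoherence factor
    rho[0, 1] *= decay             # off-diagonal terms carry the coherence
    rho[1, 0] *= decay
    print(f"t = {t:3.1f}: coherence = {rho[0, 1].real:.3f}")
# Once t >> T2 only the diagonal survives: a classical 50/50 mixture, the
# single consistent state the news release describes.
```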

The UBC research team, headed by Phil Stamp, developed a theory for predicting and controlling environmental decoherence in the Iron-8 molecule, which is considered a large complex system.

Iron-8 molecule (image provided by UBC)

This next image represents the outcome of decoherence, i.e., the molecule ‘occupies’ only one of the two states, spin up or spin down, rather than a superposition of both,


Decoherence: occupying either the spin up or spin down position (image provided by UBC)

Here’s how the team at UCSB proved the theory experimentally,

In their study, Takahashi [Professor Susumu Takahashi is now at the University of Southern California (USC)] and his colleagues investigated single crystals of molecular magnets. Because of their purity, molecular magnets eliminate the extrinsic decoherence, allowing researchers to calculate intrinsic decoherence precisely.

“For the first time, we’ve been able to predict and control all the environmental decoherence mechanisms in a very complex system – in this case a large magnetic molecule,” said Phil Stamp, University of British Columbia professor of physics and astronomy and director of the Pacific Institute of Theoretical Physics.

Using crystalline molecular magnets allowed researchers to build qubits out of an immense quantity of quantum particles rather than a single quantum object – the way most proto-quantum computers are built at the moment.

I did try to find definitions for extrinsic and intrinsic decoherence; unfortunately, the best I could find is the one provided by USC (from the news item on Nanowerk),

Decoherence in qubit systems falls into two general categories. One is an intrinsic decoherence caused by constituents in the qubit system, and the other is an extrinsic decoherence caused by imperfections of the system – impurities and defects, for example.

I have a conceptual framework of sorts for a ‘qubit system’; I just don’t understand what they mean by ‘system’. I performed an internet search and virtually all of the references I found to intrinsic and extrinsic decoherence cite this news release or, in a few cases, papers written by physicists for other physicists. If anyone could help clarify this question for me, I would much appreciate it.

Leaving extrinsic and intrinsic systems aside, the July 20, 2011 news item on Science Daily provides a little more detail about the experiment,

In the experiment, the California researchers prepared a crystalline array of Iron-8 molecules in a quantum superposition, where the net magnetization of each molecule was simultaneously oriented up and down. The decay of this superposition by decoherence was then observed in time — and the decay was spectacularly slow, behaving exactly as the UBC researchers predicted.

“Magnetic molecules now suddenly appear to have serious potential as candidates for quantum computing hardware,” said Susumu Takahashi, assistant professor of chemistry and physics at the University of Southern California.

Congratulations to all of the researchers involved.

ETA July 22, 2011: I changed the title to correct the grammar.

Siemens, nano, and advertising

The product is called the Simatic IPC227D Nanobox PC and it’s from Siemens. Of course, the ‘nano’ is what caught my attention. For the record, I could find no mention of this being a nanotechnology-enabled product; it appears that this is purely an advertising/marketing ploy. From the May 3, 2011 news item on physorg.com,

The nano-format PC uses new, high-performance Atom [emphasis mine] processors from Intel. These processors consume little energy and generate almost no heat, which is why the computer doesn’t need a fan and can be installed practically anywhere. In its basic configuration, the computer measures only 19 x 10 x 6 centimeters and is completely maintenance-free. Instead of a hard disk, it has temperature-resistant CompactFlash cards with up to eight gigabytes of capacity or solid-state drives (SSDs) of at least 50 gigabytes. What’s more, the BIOS setup data is magnetically stored so that no batteries are needed as a safeguard.

The compact computer is also available for display and operating systems. Known as the Simatic HMI IPC277D Nanopanel PC, this version is embedded with 7-inch, 9-inch, or 12-inch high-resolution industrial touch displays. The displays consume very little power, thanks to LED backlighting that can be dimmed by up to 100 percent.

The Atom processor from Intel is not a single atom processor, this too is an advertising/marketing ploy.

Coincidentally, I came across this news item on Nanowerk, “Single atom stores quantum information,” on the same day. From the news item,

A data memory can hardly be any smaller: researchers working with Gerhard Rempe at the Max Planck Institute of Quantum Optics in Garching have stored quantum information in a single atom. The researchers wrote the quantum state of single photons, i.e. particles of light, into a rubidium atom and read it out again after a certain storage time (“A single-atom quantum memory”). This technique can be used in principle to design powerful quantum computers and to network them with each other across large distances.

I do find it a bit confusing when companies use terms for marketing purposes in ways that could be construed as misleading. Or perhaps it’s only misleading for someone like me, not really scientific but not really ‘general public’ material either.