Anxieties about how much longer we can design and manufacture smaller, faster computer chips are commonplace even as companies continue to announce new, faster, smaller chips. Just before the US National Science Foundation (NSF) issued a press release concerning an essay in the journal Nature on the limits of computation, Intel announced a new microarchitecture for its 14nm chips.
First, there’s Intel. In an Aug. 12, 2014 news item on Azonano, Intel announced its newest microarchitecture optimization,
Intel today disclosed details of its newest microarchitecture that is optimized with Intel’s industry-leading 14nm manufacturing process. Together these technologies will provide high-performance and low-power capabilities to serve a broad array of computing needs and products from the infrastructure of cloud computing and the Internet of Things to personal and mobile computing.
An Aug. 11, 2014 Intel news release, which originated the news item, offers more detail.
The company has made available supporting materials, including videos titled ‘Advancing Moore’s Law in 2014’, ‘Microscopic Mark Bohr: 14nm Explained’, and ‘Intel 14nm Manufacturing Process’, which can be found here. An earlier mention of Intel and its 14nm manufacturing process can be found in my July 9, 2014 posting.
Meanwhile, in a more contemplative mood, Igor Markov of the University of Michigan has written an essay for Nature questioning the limits of computation as per an Aug. 14, 2014 news item on Azonano,
From their origins in the 1940s as sequestered, room-sized machines designed for military and scientific use, computers have made a rapid march into the mainstream, radically transforming industry, commerce, entertainment and governance while shrinking to become ubiquitous handheld portals to the world.
This progress has been driven by the industry’s ability to continually innovate techniques for packing increasing amounts of computational circuitry into smaller and denser microchips. But with miniature computer processors now containing millions of closely packed transistor components of near atomic size, chip designers are facing both engineering and fundamental limits that have become barriers to the continued improvement of computer performance.
Have we reached the limits to computation?
In a review article in this week’s issue of the journal Nature, Igor Markov of the University of Michigan reviews limiting factors in the development of computing systems to help determine what is achievable, identifying “loose” limits and viable opportunities for advancements through the use of emerging technologies. His research for this project was funded in part by the National Science Foundation (NSF).
An Aug. 13, 2014 NSF news release, which originated the news item, describes Markov’s Nature essay in greater detail,
“Just as the second law of thermodynamics was inspired by the discovery of heat engines during the industrial revolution, we are poised to identify fundamental laws that could enunciate the limits of computation in the present information age,” says Sankar Basu, a program director in NSF’s Computer and Information Science and Engineering Directorate. “Markov’s paper revolves around this important intellectual question of our time and briefly touches upon most threads of scientific work leading up to it.”
The article summarizes and examines limitations in the areas of manufacturing and engineering, design and validation, power and heat, time and space, as well as information and computational complexity.
“What are these limits, and are some of them negotiable? On which assumptions are they based? How can they be overcome?” asks Markov. “Given the wealth of knowledge about limits to computation and complicated relations between such limits, it is important to measure both dominant and emerging technologies against them.”
Limits related to materials and manufacturing are immediately perceptible. In a material layer ten atoms thick, the loss of a single atom to imprecise manufacturing changes electrical parameters by ten percent or more. Shrinking designs further at this scale inevitably runs into quantum physics and its associated limits.
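To make the arithmetic behind that ten percent figure concrete, here is a minimal back-of-the-envelope sketch in Python. The ten-atom thickness and one-atom loss come from the essay; the assumption that the parameter scales linearly with thickness is mine, purely for illustration:

```python
# Back-of-the-envelope: sensitivity of an atomically thin layer.
layer_atoms = 10      # layer thickness in atoms (figure from the essay)
missing_atoms = 1     # atoms lost to imprecise manufacturing

# Illustrative assumption: the electrical parameter scales roughly
# linearly with layer thickness, so the relative change is simply:
relative_change = missing_atoms / layer_atoms
print(f"Parameter shift: {relative_change:.0%}")  # -> 10%
```

In reality the dependence can be worse than linear (tunnelling currents, for instance, vary exponentially with thickness), which is one reason the essay says “ten percent or more.”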
Limits related to engineering are dependent upon design decisions, technical abilities and the ability to validate designs. While very real, these limits are difficult to quantify. However, once the premises of a limit are understood, obstacles to improvement can potentially be eliminated. One such breakthrough has been in writing software to automatically find, diagnose and fix bugs in hardware designs.
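That last point, software that automatically finds bugs in hardware designs, can be shown with a toy example: exhaustively checking a one-bit full adder against its specification. Everything here, the function names and the deliberately planted bug, is my own illustrative sketch, not anything from the essay:

```python
# Toy "design validation": exhaustively compare a 1-bit full adder
# implementation against its reference specification.
from itertools import product

def spec(a, b, cin):
    """Reference behaviour: returns (sum, carry)."""
    total = a + b + cin
    return total & 1, total >> 1

def impl(a, b, cin):
    """Design under test, with a deliberately planted bug."""
    s = a ^ b ^ cin
    carry = (a & b) | (a & cin)   # bug: missing the (b & cin) term
    return s, carry

# Try all 8 input combinations and report any mismatch.
for a, b, cin in product((0, 1), repeat=3):
    if spec(a, b, cin) != impl(a, b, cin):
        print(f"Counterexample: a={a} b={b} cin={cin}")
```

Real tools face designs with billions of states, where exhaustive checking is impossible, which is why automated bug-finding counts as a breakthrough.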
Limits related to power and energy have been studied for many years, but only recently have chip designers found ways to improve the energy consumption of processors by temporarily turning off parts of the chip. There are many other clever tricks for saving energy during computation. But moving forward, silicon chips will not maintain the pace of improvement without radical changes. Atomic physics suggests intriguing possibilities but these are far beyond modern engineering capabilities.
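To see why turning off idle parts of a chip helps, here is a rough power-gating estimate; all the numbers are hypothetical, chosen only to show the shape of the calculation:

```python
# Illustrative power-gating estimate (all figures hypothetical).
# Gating a block off removes its leakage power while it sits idle.
active_power_w = 2.0    # block power when in use (dynamic + leakage)
leakage_power_w = 0.5   # block power when idle but still powered on
duty_cycle = 0.30       # fraction of time the block is actually in use

without_gating = duty_cycle * active_power_w + (1 - duty_cycle) * leakage_power_w
with_gating = duty_cycle * active_power_w   # gated block draws ~0 W when idle

print(f"Without gating: {without_gating:.2f} W average")   # 0.95 W
print(f"With gating:    {with_gating:.2f} W average")      # 0.60 W
print(f"Savings: {1 - with_gating / without_gating:.0%}")  # 37%
```

The lower the duty cycle, the bigger the win, which is why mostly-idle blocks are the first candidates for gating.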
Limits relating to time and space can be felt in practice. The speed of light, while a very large number, limits how fast data can travel, and a signal moving through copper wires and silicon transistors can no longer traverse a chip within a single clock cycle. A formula limiting parallel computation in terms of device size, communication speed and the number of available dimensions has been known for more than 20 years, but it has become important only recently, now that transistors are faster than the interconnections between them. This is why alternatives to conventional wires are being developed; in the meantime, mathematical optimization can be used to reduce the length of wires by rearranging transistors and other components.
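The speed-of-light point is easy to check with a few lines of arithmetic. Assuming a 2014-era clock around 3.5 GHz and, generously, that an on-chip signal moves at half the speed of light (both assumptions mine, for illustration):

```python
# How far can a signal travel in one clock cycle?
C = 299_792_458        # speed of light in vacuum, m/s
clock_hz = 3.5e9       # assumed CPU clock, ~3.5 GHz
velocity_factor = 0.5  # optimistic assumption for on-chip propagation

reach_mm = C * velocity_factor / clock_hz * 1000
print(f"Reach per cycle: ~{reach_mm:.0f} mm")  # ~43 mm
```

That looks comfortably larger than a typical die, but real on-chip wires are resistance- and capacitance-limited and run far slower than half the speed of light, which is how a signal ends up unable to cross the chip in a single cycle.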
Several key limits related to information and computational complexity have been reached by modern computers. Some categories of computational tasks are conjectured to be so difficult to solve that no proposed technology, not even quantum computing, promises consistent advantage. But studying each task individually often helps reformulate it for more efficient computation.
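A tiny example shows why some tasks resist every technology: the cost of brute-force search grows exponentially. For the classic travelling-salesman problem, checking every tour of n cities takes (n-1)!/2 evaluations (a standard textbook illustration, not one drawn from the essay):

```python
# Exponential blow-up of brute-force search: tours in a symmetric
# travelling-salesman instance number (n-1)!/2.
import math

for n in (10, 20, 30):
    tours = math.factorial(n - 1) // 2
    print(f"{n} cities -> {tours:.2e} tours")
```

Ten cities are trivial (about 1.8e5 tours); thirty cities already demand roughly 4.4e30, beyond any conceivable hardware, which is why reformulating a task often beats waiting for faster chips.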
When a specific limit is approached and obstructs progress, understanding the assumptions made is key to circumventing it. Chip scaling will continue for the next few years, but each step forward will meet serious obstacles, some too powerful to circumvent.
What about breakthrough technologies? New techniques and materials can be helpful in several ways and can potentially be “game changers” with respect to traditional limits. For example, carbon nanotube transistors provide greater drive strength and can potentially reduce delay, decrease energy consumption and shrink the footprint of an overall circuit. On the other hand, fundamental limits, sometimes not initially anticipated, tend to obstruct new and emerging technologies, so it is important to understand them before promising a new revolution in power, performance and other factors.
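The drive-strength argument can be seen in a first-order gate-delay model, delay ≈ C·V/I: more drive current charges the load capacitance faster. The numbers below are purely illustrative, not measured carbon-nanotube data:

```python
# First-order gate delay: delay ~ C * V / I_drive.
load_c = 1e-15        # load capacitance: 1 fF (illustrative)
supply_v = 0.8        # supply voltage in volts (illustrative)
i_baseline = 50e-6    # drive current of a baseline device, amps
i_cnt = 100e-6        # hypothetical 2x drive strength for a CNT device

delay = lambda i: load_c * supply_v / i
print(f"Baseline delay: {delay(i_baseline) * 1e12:.0f} ps")  # 16 ps
print(f"CNT delay:      {delay(i_cnt) * 1e12:.0f} ps")       # 8 ps
```

Doubling drive current halves the delay in this simple model; the essay’s caution is that such gains must still clear fundamental limits before they amount to a revolution.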
“Understanding these important limits,” says Markov, “will help us to bet on the right new techniques and technologies.”
Here’s a link to and a citation for Markov’s article,
Limits on fundamental limits to computation by Igor L. Markov. Nature 512, 147–154 (14 August 2014). doi:10.1038/nature13570. Published online 13 August 2014.
This paper is behind a paywall but a free preview is available via ReadCube Access.
It’s a fascinating question: what are the limits? It’s one being asked not only with regard to computation but also to medicine, human enhancement, and artificial intelligence, to name just a few areas of endeavour.