In turn, the driving force behind information and communications technology has been Moore's law, which can be understood as the proposition that the number of components packed onto a computer chip will double every two years, implying a sharp fall in the cost and a sharp rise in the capabilities of information technology. But the ability to make transistors ever smaller, at least with current technology, is beginning to run into physical limits. IEEE Spectrum has published a "Special Report: 50 Years of Moore's Law," with a selection of a dozen short articles looking back at Moore's original formulation of the law, how it has developed over time, and the prospects for it continuing. Here are some highlights.
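To get a feel for what a two-year doubling implies, here is a minimal back-of-the-envelope sketch. The 1965 starting year and the starting component count are illustrative assumptions of mine, not figures from the Spectrum report:

```python
# Illustrative sketch of Moore's law: component counts double
# every two years. The starting year and count are assumptions
# chosen for illustration, not data from the Spectrum report.

start_year = 1965   # Moore's original article (assumed baseline)
components = 64     # hypothetical component count on a mid-1960s chip

for year in range(start_year, 2016, 10):
    doublings = (year - start_year) / 2
    count = components * 2 ** doublings
    print(f"{year}: ~{count:,.0f} components per chip")
```

Twenty-five doublings over fifty years multiply the starting count by roughly 33 million, which is the sense in which the growth is exponential.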
It's very hard to get an intuitive sense of the exponential power of Moore's law, but Dan Hutcheson takes a shot at it with a few well-chosen sentences and a figure. He writes:
In 2014, semiconductor production facilities made some 250 billion billion (250 × 10^18) transistors. This was, literally, production on an astronomical scale. Every second of that year, on average, 8 trillion transistors were produced. That figure is about 25 times the number of stars in the Milky Way and some 75 times the number of galaxies in the known universe. The rate of growth has also been extraordinary. More transistors were made in 2014 than in all the years prior to 2011.

Here's a figure from Hutcheson showing the trends of semiconductor output and price over time. Notice that both axes are on logarithmic scales: that is, they rise by powers of 10. The price of a transistor was more than a dollar back in the 1950s; now it's a billionth of a penny.
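As a quick sanity check on Hutcheson's per-second figure, the arithmetic below (my own, using the round numbers from the quote) recovers the roughly 8 trillion transistors per second:

```python
# Check Hutcheson's arithmetic: 250 billion billion transistors
# produced in 2014, averaged over the seconds in a year.

transistors_2014 = 250e18            # 250 x 10^18, from the quote
seconds_per_year = 365 * 24 * 3600   # ~31.5 million seconds

per_second = transistors_2014 / seconds_per_year
print(f"~{per_second:.1e} transistors per second")  # ~7.9e12, about 8 trillion
```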
The engineering project of making the components on a computer chip smaller and smaller is beginning to approach some hard physical limits. What might happen next?
Chris Mack makes the case that Moore's law is not a fact of nature; instead, it is the result of competition among chip-makers, who treated it as the baseline for their technological progress and set their R&D and investment budgets accordingly to keep up the pace. He argues that as technological constraints begin to bind, the next step will be to combine more capabilities on a single chip.
I would argue that nothing about Moore’s Law was inevitable. Instead, it’s a testament to hard work, human ingenuity, and the incentives of a free market. Moore’s prediction may have started out as a fairly simple observation of a young industry. But over time it became an expectation and self-fulfilling prophecy—an ongoing act of creation by engineers and companies that saw the benefits of Moore’s Law and did their best to keep it going, or else risk falling behind the competition. ...
Going forward, innovations in semiconductors will continue, but they won’t systematically lower transistor costs. Instead, progress will be defined by new forms of integration: gathering together disparate capabilities on a single chip to lower the system cost. This might sound a lot like the Moore’s Law 1.0 era, but in this case, we’re not looking at combining different pieces of logic into one, bigger chip. Rather, we’re talking about uniting the non-logic functions that have historically stayed separate from our silicon chips.
An early example of this is the modern cellphone camera, which incorporates an image sensor directly onto a digital signal processor using large vertical lines of copper wiring called through-silicon vias. But other examples will follow. Chip designers have just begun exploring how to integrate microelectromechanical systems, which can be used to make tiny accelerometers, gyroscopes, and even relay logic. The same goes for microfluidic sensors, which can be used to perform biological assays and environmental tests.
Andrew Huang makes the intriguing claim that a slowdown in Moore's law might be useful for other sources of productivity growth. He argues that when the power of information technology is increasing so quickly, the focus is understandably on keeping up with those rapid gains. But if gains in raw information processing slow down, there would be room for more focus on making the devices that use information technology cheaper to produce, easier to use, and more cost-effective.
Jonathan Koomey and Samuel Naffziger point out that computing power has become so cheap that we often aren't using what we've got, which suggests the possibility of efficiency gains in energy use and computer utilization:
Today, most computers run at peak output only a small fraction of the time (a couple of exceptions being high-performance supercomputers and Bitcoin miners). Mobile devices such as smartphones and notebook computers generally operate at their computational peak less than 1 percent of the time based on common industry measurements. Enterprise data servers spend less than 10 percent of the year operating at their peak. Even computers used to provide cloud-based Internet services operate at full blast less than half the time.

Final note: I've written about Moore's law a couple of times previously on this blog, including "Checkerboard Puzzle, Moore's Law, and Growth Prospects" (February 4, 2013) and "Moore's Law: At Least a Little While Longer" (February 18, 2014). Those posts tend to emphasize that Moore's law may still be good for a few more doublings. But at that point, the course of technological progress in information technology, for better or worse, will take some new turns.