Source: http://hackaday.com/2015/09/09/exponential-growth-in-linear-time-the-end-of-moores-law/
Moore’s Law states that the number of transistors on an integrated circuit will double about every two years. This law, coined by Intel and Fairchild founder [Gordon Moore], has been a truism since its introduction in 1965. From the introduction of the Intel 4004 in 1971, through the Pentiums of 1993, to the Skylake processors introduced last month, the law has mostly held true.
The law, however, promises exponential growth in linear time, a promise that is ultimately unsustainable. This is not an article that considers the future roadblocks that will end [Moore]’s observation, but an article that argues the expectations of Moore’s Law have already ended. It ended quietly, sometime around 2005, and we will never again see transistor density, processor performance, graphics capability, and memory density doubling every two years.
Chip Frequency graphed against year of introduction. Source: The Future of Computing Performance (2011)
In 2011, the Committee on Sustaining Growth in Computing Performance of the National Research Council released the report, The Future of Computing Performance: Game Over or Next Level? This report provides an overview of computing performance from the first microprocessors to the latest processors of the day.
Although Moore’s Law applies only to transistors on a chip, this measure aligns very well with other measures of integrated circuit performance. Introduced in 1971, Intel’s 4004 had a maximum clock frequency of about 700 kilohertz. In two years, according to bastardizations of Moore’s Law, this speed would double, and in two years double again. By around 1975 or 1976, so the math goes, processors capable of running at four or five megahertz should appear, and this was the historical precedent: the earliest Motorola 6800 processors, introduced in 1974, ran at 1 MHz. In 1976, RCA introduced the 1802, capable of 5 MHz. In 1979, the Motorola 68000 was introduced, with speed grades of 4, 6, and 8 MHz. Shortly after Intel released the 286 in 1982, the speed was quickly scaled to 12.5 MHz. Despite being completely different architectures with different instruction sets and bus widths, a Moore’s Law of clock speed has existed for a very long time. This law also holds true for performance and even TDP per device.
Number of transistors, performance, clock speed, power, and cores per chip, graphed over time Source: The Future of Computing Performance (2011).
Everything went wrong in 2004. At least, that is the thesis of The Future of Computing Performance. Since 2004, the exponential increases in performance (both floating point and integer), clock frequency, and even power dissipation have leveled off.
One could hope that the results are an anomaly and that computer vendors will soon return to robust annual improvements. However, public roadmaps and private conversations with vendors reveal that single threaded computer-performance gains have entered a new era of modest improvement.
There was never any question that Moore’s Law would end. No one now, or when the law was first penned in 1965, would assume exponential growth could last forever. Whether this exponential growth applies to transistors or, in the interpretation of [Kurzweil] and other futurists, to general computing power was never the question; exponential growth cannot continue indefinitely in linear time.
Continuations of a recent trend
The Future of Computing Performance was written in 2011, and we have another half decade of data to draw from. Has the situation improved in the last five years?
Unfortunately, no. In a survey of Intel Core i7 processors with comparable TDP, the performance from the first i7s to the latest Broadwells shows no change from 2005 through 2015. Whatever happened to Moore’s Law in 2005 is still happening today.
The Future Of Moore’s Law
Even before 2011, when The Future of Computing Performance was published, the high-performance semiconductor companies started gearing up for the end of Moore’s Law. It’s no coincidence that the first multi-core chips made an appearance around the same time TDP, performance, and clock speed took the hard turn to the right seen in the graphs above.
A slowing of Moore’s Law would also show up in the semiconductor business, and this has been the case. In 2014, Intel released a refresh of the 22nm Haswell architecture because of problems spinning up the 14nm Broadwell architecture. Recently, Intel announced they will not introduce the 10nm Cannonlake in 2016 as expected, and will instead introduce the 14nm Kaby Lake. Clearly the number of transistors on a die cannot be doubled every two years.
While the future of Moore’s Law will see the introduction of exotic substrates such as indium gallium arsenide replacing silicon, this much is clear: Moore’s Law is broken, and it has been for a decade. It’s no longer possible for transistor densities to double every two years, and the products of these increased densities – performance and clock speed – will remain relatively stagnant compared to their exponential rise in the 80s and 90s.
There is, however, a saving grace: when [Gordon Moore] first penned his law in 1965, the number of transistors on an integrated circuit was doubling every year. In 1975, [Moore] revised his law to a doubling every two years. Here you have a law where not only the meaning (transistors, performance, or speed) can change, but also the duration. Now, it seems, the doubling period of Moore’s Law has stretched to three years. Until new technologies are created, and chips are no longer made on silicon, this will hold true.