Moore’s Law is a famous observation about integrated circuits, though most people use the term lazily. The correct statement of the law doesn’t just involve the number of transistors on a computer chip doubling within a certain span of time; it’s specifically the most economical number of transistors per chip that doubles. If you set up a multi-billion-dollar fabrication facility and started making chips with only one transistor on them, you’d quickly lose your shirt. Conversely, if it were 1980 and you wanted to fit a trillion transistors on a postage stamp… well, good luck.
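In exponential-growth terms, the doubling reads as N(t) = N₀ · 2^((t − t₀)/T), where T is the doubling period. A minimal sketch, assuming the commonly quoted ~2-year doubling period and the Intel 4004’s 2,300 transistors (1971) as the starting point:

```python
def transistors(year, n0=2300, year0=1971, doubling_years=2.0):
    """Projected (economical) transistor count per chip, assuming a
    fixed doubling period -- starting point is the Intel 4004 (1971)."""
    return n0 * 2 ** ((year - year0) / doubling_years)

# One doubling period after 1971, the model predicts twice the count.
print(round(transistors(1973)))  # -> 4600
```

The specific doubling period has been restated over the years (Moore himself revised it), so treat the 2.0 as a tunable parameter rather than gospel.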
If you search for Moore’s Law plots you’ll invariably find something that looks like this. Notice that the vertical axis is in absolute number of transistors. This annoys me.
Moore’s Law is all about transistor density, since the size of the chip itself can change. Look at the chart above: the original Intel 4004 had an area of 12 mm², while that beast of a 10-core Xeon Westmere-EX was a far larger 512 mm². This too is technological advancement, as ensuring functionality over a larger area is a harder problem (more places for something to go wrong). Now, this probably doesn’t matter too much, as the area spans at most two orders of magnitude whereas the transistor count spans many more, but at least you’d be comparing like with like.
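The density gap between those two chips is easy to work out. A quick sketch; the die areas are the ones quoted above, while the transistor counts (2,300 for the 4004, 2.6 billion for the Westmere-EX) are the approximate figures from Wikipedia’s list:

```python
# Transistor density for the two chips mentioned above.
# Counts are approximate figures from Wikipedia's "Transistor count" page.
chips = {
    # name: (transistor count, die area in mm^2)
    "Intel 4004 (1971)":               (2_300,         12),
    "10-core Xeon Westmere-EX (2011)": (2_600_000_000, 512),
}

for name, (count, area_mm2) in chips.items():
    print(f"{name}: {count / area_mm2:,.0f} transistors/mm^2")
```

That’s roughly 190 transistors/mm² versus about 5 million/mm²: the density ratio is around 26,000×, while the raw-count ratio is over a million, the difference being that ~43× area increase.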
So, using the Wikipedia page “Transistor count”, which records a lot of the relevant information and was used to generate the plot above, I’ve made the following: I don’t see any evidence that transistor densities in recent years have slowed out of whack with the decades-long curve, so that probably isn’t the source of the slowdown in supercomputer performance. My guess is that a lot of effort has gone into squeezing more machinery into smartphone and tablet processors, which necessarily need to be smaller than desktop or laptop parts. That’s why the newest iPhone CPU, the Apple A8, has twice as many transistors as my 2011 desktop part in a much smaller package.
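The “no slowdown” eyeballing can be made slightly more quantitative by fitting a doubling period to density-versus-year data. A rough sketch with illustrative, ballpark numbers (these are NOT the actual dataset, just stand-ins in the right neighborhood of the Wikipedia figures):

```python
import math

# Illustrative (year, transistors/mm^2) points -- assumed ballpark
# values for the sketch, not real measurements.
points = [
    (1971, 190),          # Intel 4004 territory
    (1989, 14_000),
    (2000, 200_000),
    (2011, 5_000_000),    # Westmere-EX territory
]

# Least-squares fit of log2(density) against year. The slope is in
# doublings per year, so its reciprocal is the doubling period.
n = len(points)
xs = [year for year, _ in points]
ys = [math.log2(density) for _, density in points]
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))

print(f"density doubling period ~ {1 / slope:.1f} years")
```

With real data you’d also look at the residuals of the most recent points: a genuine slowdown would show them falling consistently below the fitted line, not just a change in the overall slope.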