The Top500 list of supercomputer performance is clearly on a new, slower trajectory, continuing the trend from previous lists. This is exemplified by Tianhe-2 holding the top spot for three years running, though as can be seen in the “Performance Development” graph below, it’s not unusual for the #1 computer to hold the spot for several lists.
The combined performance of all five hundred supercomputers now stands at 420 PFLOPS, up from 361 PFLOPS in June and 309 PFLOPS last November. Turnover continues to be slow by historical standards: the last system on the Nov 2015 list occupied position 369 on the June 2015 list.
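To put those totals in perspective, here's a quick sketch of the growth rate they imply. The three PFLOPS figures are the ones quoted above; everything else is just arithmetic.

```python
# Implied growth of the aggregate Top500 performance, using the
# totals quoted in the text (all figures in PFLOPS).
nov_2014 = 309
jun_2015 = 361
nov_2015 = 420

yoy = nov_2015 / nov_2014 - 1                 # growth over the full year
per_list = (nov_2015 / nov_2014) ** 0.5 - 1   # average growth per six-month list
print(f"year-over-year: {yoy:.1%}, per list: {per_list:.1%}")
# → year-over-year: 35.9%, per list: 16.6%
```

Roughly 36% a year is healthy growth in most industries, but it's well below the pace the list sustained historically, which is what makes the trajectory change visible in the graph.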
I’ve previously speculated on why this slowdown has occurred and whether it’s permanent or an artifact of something else (the global recession?). Recently, however, some analysis was posted at HPCwire which shows that the problem is not with Moore’s Law but with the financials of large systems.
Here we see that the average performance of an individual CPU core flatlined around 2005–2006, but that this was compensated for by the rise in the number of cores per physical CPU (called a socket here), shown as the red line. The combined performance (blue line) has been smooth sailing.
What we see instead is that the average number of sockets (physical CPUs) per system shifted to a slower growth curve around that time. So CPU performance continues to grow as before (at least for the highly parallelizable tasks typical of supercomputers), but there are financial limits at the system level: things like the number of cabinets, floor space, and, more likely, power.
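The decomposition behind those graphs can be sketched as three multiplied factors. The numbers below are purely illustrative (not taken from the list); the point is only that flat per-core performance can be masked by core counts, while a stall in socket counts shows up directly in the total.

```python
# Sketch: a system's peak FLOPS as the product of the three factors
# the graphs track. All numbers here are illustrative, not real data.
def system_flops(sockets, cores_per_socket, flops_per_core):
    return sockets * cores_per_socket * flops_per_core

# Per-core performance held flat (as it has been since ~2005–2006),
# sockets held constant, cores per socket rising:
old = system_flops(sockets=1000, cores_per_socket=2,  flops_per_core=5e9)
new = system_flops(sockets=1000, cores_per_socket=16, flops_per_core=5e9)
print(new / old)  # → 8.0: an 8x gain from core counts alone
```

Once the socket count (the first factor) stops growing on its old curve, the whole product falls back to whatever the per-socket improvements alone can deliver.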
So it might be the case that the previous growth curve was anomalous as we got a double effect from increasing CPU performance and increasingly large datacenters to house the supercomputers. If the latter has run out of gas, we’re seeing a return to the underlying circuit technology path.