I am a pretty avid follower of technological progress, particularly advances in computer technology (hell, I got a BSc focusing on digital electronics and microfabrication out of pure interest). The overarching trend of this phenomenon is called Moore’s Law, an oft-misunderstood “law.” Simply put, it is an industry-wide, self-set goal to double the complexity (i.e., the number of transistors) of an integrated circuit at a given price point every two or so years. This has held remarkably steady for several decades now.

If we take Moore’s Law as an axiom in our argument, since it is at present a valid observational law, some interesting things happen. For the sake of this discussion, the actual doubling time is largely irrelevant, so long as some doubling does in fact occur. This is because a doubling can be represented as $2^{t/\tau}$, where $\tau$ is the time for one doubling. For simplicity, I’ll normalize $\tau$ (that is, set it equal to 1 in some system of time units).
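As a quick sanity check on the normalization, here is a minimal Python sketch (the function name `complexity` is just for illustration): with $\tau$ set to 1, each unit of time doubles the value.

```python
def complexity(t, tau=1.0):
    """Relative complexity 2^(t/tau) after time t, given doubling time tau."""
    return 2 ** (t / tau)

# With tau normalized to 1, each unit of time is one doubling:
print(complexity(0))   # 1.0
print(complexity(1))   # 2.0
print(complexity(10))  # 1024.0
```

Note that the choice of $\tau$ only rescales the time axis; the shape of the curve is the same.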

When we look at the pattern of the integer powers of 2, we find

$$2^0 = 1, \quad 2^1 = 2, \quad 2^2 = 4, \quad 2^3 = 8, \quad 2^4 = 16, \ \dots$$

After a bit of thinking, you might notice that any given power of 2 is almost equal to the sum of all the preceding powers of 2:

$$2^n \approx \sum_{k=0}^{n-1} 2^k = 2^n - 1,$$

with the difference from the true value being just 1 (e.g., $16 = 1 + 2 + 4 + 8 + 1$). If this observation holds true for all $n$, then we arrive at an interesting result: *Each doubling is equivalent to all previous doublings combined* (if you’ll forgive an error of 1, which for $n \geq 7$ is already less than 1% error).

Since time can be considered continuous over human lengths of time (but is it in fact ultimately continuous or discrete?), we might like to show this result more formally using integrals:

$$\int_{-\infty}^{t} 2^x \, dx = \frac{2^t}{\ln 2}, \qquad \int_{-\infty}^{t+1} 2^x \, dx = \frac{2^{t+1}}{\ln 2}.$$

If we divide the latter by the former, we’ll clearly get a value of 2, as expected.
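The same ratio can be checked numerically. A small Python sketch, using the antiderivative $2^x / \ln 2$ evaluated at the upper limit (the contribution from the lower limit vanishes); the function name `area_up_to` is just for illustration:

```python
import math

def area_up_to(t):
    """Integral of 2^x from -infinity to t: the antiderivative 2^x / ln 2."""
    return 2 ** t / math.log(2)

t = 5.0
ratio = area_up_to(t + 1) / area_up_to(t)
print(ratio)  # 2.0
```

The ratio is exactly 2 regardless of where $t$ is placed, which is just the continuous restatement of the powers-of-2 observation above.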

This effect is quite astounding. Beginning from wherever you set your $t_0$ (say 1971), any computer or computational device you buy will in essence have the aggregate power of all its ancestral devices in a line combined. What are the ramifications of this for computationally intensive projects like SETI, Folding@Home, or the Blue Brain Project (years one and two)? In my own experience, every time I’ve built a new computer over the last decade or so, I’ve given it a run around Folding@Home, and each time it vastly outpaced my previous efforts. What would be the effect if everybody thought, “Ah, what the hell, no sense doing any computational work on it, since my computer in two years will be far, far more useful”? That mindset could conceivably persist until computational performance stops doubling, if it ever were to take hold, which I somewhat doubt. It is interesting to think that all the work volunteers were doing for distributed computing projects just a few years ago is now essentially negligible compared to present work.