Some thoughts on computer hardware and economics

To create a book used to be an extraordinarily expensive task: a man (probably a monk) would sit at a desk and meticulously write out all of the words and embellishments and pictures by hand. Though the capital expense was low, the labor cost was extremely high (in Nate Silver’s book The Signal and the Noise, he puts the going rate for that era at $200 per five pages), and at the end of the process you’d have only a single book. Furthermore, the available market for the book was small, since most people couldn’t read.

The invention of movable type printing by Johannes Gutenberg in the mid-1400s flipped the costs around: Now there was a very high capital cost to set up a printing run. You needed relatively complicated presses and large collections of cast metal type. With such a large fixed cost (the money and time spent just to get an operation set up), printing a single page offered no savings over writing it by hand (if anything, it could be even more expensive). The magic was that the marginal cost to print an additional page was very low: the material cost of the paper, perhaps some additional ink, and another minute or so of labor. Print a second page and you had spread nearly the same total cost over two pages, so each page cost roughly half of what a single one did. Take this to the extreme: print hundreds or thousands of pages and the price of a book would plunge (though the market remained limited by rampant illiteracy). The price of a book dropped by a factor of hundreds, and the number of books published exploded into the millions.
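To put rough numbers on that arithmetic, here is a minimal Python sketch; the fixed and marginal costs are invented for illustration, not historical figures:

    # Average cost per page when a large fixed cost is spread over a print run.
    # FIXED_COST and MARGINAL_COST are invented illustrative numbers.
    FIXED_COST = 1000.0   # hypothetical cost of presses, type, and setup
    MARGINAL_COST = 0.5   # hypothetical paper, ink, and labor per extra page

    def average_cost_per_page(pages: int) -> float:
        """Total cost divided by the number of pages printed."""
        return (FIXED_COST + MARGINAL_COST * pages) / pages

    for n in (1, 2, 10, 100, 1000, 10000):
        print(f"{n:>6} pages: {average_cost_per_page(n):8.2f} per page")

Going from one page to two roughly halves the per-page cost (1000.50 to 500.50 here), and by ten thousand pages the average has sunk to 0.60, close to the marginal cost alone.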

Printing, I think, illustrates the difference between fixed and marginal costs fairly well. An additional factor is that when you create a huge volume of goods (say, printing many thousands of pages), you often develop methods, techniques, and technologies that make the process more efficient and thus generate economies of scale. That is, a large printing house would have a lower per-page cost than a small printing house, though its fixed capital costs would also be greater.

Let’s now switch to computing hardware, particularly the integrated circuits found in processors and memory. The fixed capital costs are astronomical: You need an army of scientists doing research into materials science and solid state physics/chemistry; an army of engineers to design and build the machines that process the wafers and print the circuit dice; colossal buildings with strict environmental controls to house all this equipment; and managers and accountants and supply specialists to keep the whole thing running smoothly. Moreover, you can’t leapfrog the march to more advanced hardware, since you need present-day computers in order to design next-generation computers. To create a single chip from scratch would entail costs in the tens of billions of dollars, at least.

But of course they don’t just make one chip; they print and etch and package hundreds of millions to billions of them that go in everything from your laptop to large servers to smartphones to alarm clocks to Blu-ray players. The huge volume also enables detailed process engineering and management to squeeze inefficiencies and defects out and improve yields.

This is all hidden from the average consumer, or, to pick a particular example, a humble researcher, who only sees that every few years a few hundred dollars buys a new processor that’s much faster than the current one. If our humble researcher had to do all of this work themselves, then any computational science would, in essence, be over before it began. By grafting themselves onto this huge endeavor, however, our humble researcher sees dramatic cost savings brought about by millions of people buying new Xboxes. And the hardware gets dramatically better over time.

Let’s say our humble researcher is in a field like computational chemistry, and that running a simulation takes a whole work day. The code is set to run when they go home and is finished when they return the next morning (only to find that a small mistake in the setup parameters has ruined the whole run). Now step forward seven or eight years, when computers are ten times faster: What used to take a day now happens over a lunch hour; what would take a month running on a cluster now takes just a weekend. Not only does this allow many more simulations to be run, but their scope and resolution can also grow dramatically (so our humble researcher is back where they began in terms of time).
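As a quick sanity check on those figures, a back-of-the-envelope sketch in Python; the flat 10x speedup is the assumption from the text, and the run lengths are the rough ones given above:

    # Back-of-the-envelope check of the "ten times faster" scenario.
    # SPEEDUP is the assumption from the text; durations are rough estimates.
    SPEEDUP = 10.0

    runs = {
        "overnight simulation (one work day)": 8.0,  # hours
        "month-long cluster job": 30 * 24.0,         # about 720 hours
    }

    for name, hours in runs.items():
        print(f"{name}: {hours:.0f} h -> {hours / SPEEDUP:.1f} h")

Eight hours drops to about 48 minutes, comfortably a lunch hour, and a 720-hour month drops to about three days, roughly a long weekend.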

This is the miracle of marginal costs and economies of scale, brought about by massive investments in fixed capital, as applied to science. By piggybacking on the efforts and outlays of the silicon electronics industry, certain branches of science can advance simply by waiting around, and this doesn’t even take into account research and advancements in algorithms, compilers, and programming languages that allow code on the same hardware to run faster. The additional computing power is also quickly gobbled up by more elaborate and advanced projects, maintaining demand and keeping the whole thing moving forward.
