Facing the Intelligence Explosion by Luke Muehlhauser
The concept of the technological singularity, or simply the Singularity, is probably best known in Kurzweil’s formulation, which runs something like this: “technological advance feeds on itself (you use computers to make better computers) and so increases exponentially in cost-performance, and beyond some point still in the future the rate of change is so fast that humans can no longer keep track of it.” This is a hardware-centric version of the Singularity, and if you’re skeptical that technology is growing exponentially now, or that such growth can be projected into the future, you should rate the probability of this scenario as low.
There is another formulation of the Singularity, though, based less on hardware and more on mathematics. If we humans could unlock the algorithms of cognition, we could build a generally intelligent agent in a computer, with the attendant speedup of running on electrical transistors (or perhaps, in the future, optical transistors) rather than biochemical neurons. One thing this intelligence would be good at is optimizing and extending its own source code, and so you’d end up with an intelligence explosion. As I.J. Good put it,
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.
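Good’s feedback loop can be caricatured in a few lines of code. This is purely a toy illustration, not anything from the book: the function name, the capability numbers, and the linear “gain” assumption are all invented here to show why a designer whose design skill scales with its own capability produces faster-than-exponential growth.

```python
# Toy model of an intelligence explosion (illustrative only).
# A machine of capability c designs a successor of capability
# c * (1 + gain * c): the better the designer, the bigger the jump.

def explosion(initial=1.0, gain=0.1, generations=10):
    """Return the capability of each successive machine generation."""
    c = initial
    history = [c]
    for _ in range(generations):
        c = c * (1 + gain * c)  # design skill scales with capability
        history.append(c)
    return history

for gen, cap in enumerate(explosion()):
    print(f"generation {gen}: capability {cap:.2f}")
```

Because the growth factor itself grows with `c`, the curve is super-exponential: each doubling takes fewer generations than the last, which is the qualitative shape of Good’s argument.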
This is the version of the Singularity defended in Facing the Intelligence Explosion, and if somebody were interested in learning what all this Singularity business was about, this is the book I’d recommend they start with. The first half of the book is about rationality in general, covering things like logic, probability theory, and decision theory, and why humans are so bad at them. To me, the core chapter is “Plenty of Room Above Us,” which puts into perspective that the full range of intelligences we’ve encountered, from village idiot to Einstein, is just a small slice on the scale from bacteria to recursively self-improving artificial intelligences.
The latter half of the book is about why AI is such a hard problem: things like values and morality are exceedingly complex to state formally, and if you only get, say, 95% of the way there, you still end up with disaster (what if the machine intelligence is not programmed to take boredom into account, and so just executes the same optimized “fun” operation over and over again?). Concluding the book is the chapter “Engineering Utopia,” which makes the case that despite all these difficulties, a friendly AI is a dream worth working toward.
In roughly the last 10,000 years there has been a remarkable increase and improvement in our knowledge, technology, and culture. More remarkably, all of this has been done with essentially constant levels of brainpower (there simply hasn’t been enough time for evolution to select for greater intelligence). This is why artificial intelligence could be such a break from the past: the difference would be less like that between a modern scientist and a hunter-gatherer, and more like that between a modern scientist and a cricket.
Facing the Intelligence Explosion is also available freely as an audiobook read by the author, under its former title of Facing the Singularity.