When gods play chess

A few months ago I browsed through a book called Beyond Deep Blue: Chess in the Stratosphere. Now, to put things in perspective, in the past ten years I think I’ve played one game of chess, so I’m probably in one of the bottommost tiers of chess capability and knowledge. Nevertheless, I like following the rough trends in the game as a surrogate for advancements in artificial intelligence generally (for example, here’s a recent piece I enjoyed). So I opened the book and read some of the historical bits and tried to follow some of the games…

… which was of course an absolutely hopeless exercise. It takes a certain amount of skill just to tell apart levels farther up the skill ladder than your own. If a master and a grandmaster played a game of chess, I probably couldn’t tell which was which. I doubt I could tell the difference between a good violin player and a great one. A sample of code from a good programmer and a sample from a great programmer would likely be indistinguishable to me. And so on.

Anyways, the book talked about how we’re now at the point where the best human chess players sometimes have no clue why the engines made the moves they did. If the engines were only a bit better than humans, we’d expect the response to be “I doubt I’d have played that move, since it would have taken a fair bit of analysis to see why it was so good, but having studied it after the fact I can understand the reasoning behind it.” No, now we’re at “Fuck if I know.”

Of course, this gap will only widen. Algorithmic advances in computer chess will continue, and you can always hook the engines up to bigger and more powerful hardware. But we’re already past the point where humanity’s best are left scratching their heads, never mind unskilled onlookers like me.

This is probably how it goes for many subfields of AI. At first the performance of the computers will be scoffed at. As it improves, non-experts will be unable to discern the gap between the machines and the top human performers. Then, for a very short time, top-level humans will be on par with the machines. Only a short while thereafter, however, the AI will move from formidable to incomprehensible.

One of my favorite moments in Mass Effect came in the first game, where you speak with the alien god-machine Sovereign. The gist you came away with was: They are machines, and they are alien, and therefore their goals are completely unknown to us. And that was awesome. In later games an explanation was given for their actions, and regardless of whether the reasoning was good (it wasn’t), the very existence of a reason comprehensible to us cheapened it. A superintelligent AI’s goals, unless they arose from human programmers coding our ethical values into it, should be beyond our comprehension.

When the intelligence or skill or ability of another agent grows far enough beyond your own, the experience is not one of awe but of confusion (at the micro level, of course; at the macro level there’s still the awe of watching them consistently win, even if you don’t know how).
