The reasoning robot, Jaynes’ desiderata, and Cox’s Theorem

I’ve been reading Probability Theory: The Logic of Science by E. T. Jaynes, and it’s worth sitting down with for the first two chapters alone. Probabilities are typically introduced as obvious entities in and of themselves, where the chance of something happening is somewhere between 0 and 100%, or between 0 and 1 in normalized units. What I didn’t know before reading the book is that this rule, that probabilities are real numbers ranging from 0 to 1 and increasing monotonically with plausibility, is itself the consequence of more basic postulates, via something called Cox’s theorem, which is what the first two chapters of Jaynes are about.

Probabilities are arrived at by a circuitous route. We start by pretending we know nothing about probabilities, and just want to build a reasoning robot that can express beliefs about propositions and has memory (“Well, Reasoning Robot, given all the facts of the matter, how much do you believe that the defendant is guilty?”). We’d want certain traits for such a robot, which I’ll call Jaynes’ desiderata, and they are as follows (a toy code sketch of the robot appears after the list):

Desideratum     I: Degrees of plausibility are represented by real numbers.
Desideratum   II: Qualitative correspondence with common sense.
Desideratum III: Consistency.

  • IIIa: If a conclusion can be reasoned out in more than one way, then
    every possible way must lead to the same result.
  • IIIb: The robot always takes into account all of the evidence it has
    relevant to a question. It does not arbitrarily ignore some of
    the information, basing its conclusions only on what remains.
    In other words, the robot is completely nonideological.
  • IIIc: The robot always represents equivalent states of knowledge by
    equivalent plausibility assignments. That is, if in two problems
    the robot’s state of knowledge is the same (except perhaps for
    the labeling of the propositions), then it must assign the same
    plausibilities in both.
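
To make the setup concrete, here is a minimal sketch of what such a robot’s interface might look like. This is my own toy illustration, not anything from the book; the names (ReasoningRobot, tell, plausibility) are invented.

```python
# A toy sketch of Jaynes' reasoning robot -- my own illustration, not from
# the book. All names are hypothetical.

class ReasoningRobot:
    """Assigns a real-valued plausibility to propositions, given evidence."""

    def __init__(self):
        self.evidence = []  # memory: every fact the robot has been told

    def tell(self, fact):
        """Desideratum IIIb: no evidence may be ignored, so store all of it."""
        self.evidence.append(fact)

    def plausibility(self, proposition):
        """Desideratum I: return a single real number.

        Desideratum IIIc: the answer may depend only on the robot's state
        of knowledge (self.evidence), not on how propositions are labeled.
        """
        raise NotImplementedError("Cox's theorem pins down what goes here")
```

The point of the sketch is only the shape of the contract: one real number out, all of the stored evidence in.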

(I) comes about because we need to actually build the robot from real physical components. (II) seems a bit odd, but it just means that we will fix the robot so that it doesn’t violate basic facets of Aristotelian logic (for example, modus ponens: if A then B, and A, therefore B). The consistency desideratum (III) is where the robot launches well beyond typical human functioning into outer space.
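Concretely, the “common sense” of (II) covers both the strong syllogisms of deductive logic and their weaker cousins from plausible reasoning, paraphrasing chapter 1 of the book:

```latex
% Strong syllogisms (ordinary deductive logic):
\begin{align*}
  &\text{if } A \Rightarrow B \text{ and } A \text{ is true, then } B \text{ is true};\\
  &\text{if } A \Rightarrow B \text{ and } B \text{ is false, then } A \text{ is false}.
\end{align*}
% Weak syllogisms (plausible reasoning, which the robot must also respect):
\begin{align*}
  &\text{if } A \Rightarrow B \text{ and } B \text{ is true, then } A \text{ becomes more plausible};\\
  &\text{if } A \Rightarrow B \text{ and } A \text{ is false, then } B \text{ becomes less plausible}.
\end{align*}
```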

So here’s the amazing thing: it’s a necessary consequence of the above desiderata that the robot represents degrees of belief by real numbers that increase monotonically from 0 to 1.* That is, probability theory is a consequence of how an ideal reasoning robot would have to function in order to fulfill Jaynes’ desiderata. I’ll show the derivation in a future post (it can get a bit hairy). Moreover, our ideal reasoner turns out to be a Bayesian, and as for that centuries-old battle, I haven’t studied it enough to say much intelligent about it.
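As a teaser for that post, here is the skeleton of the argument. This is my compressed paraphrase of Jaynes’ chapter 2, with the details (domains, regularity conditions) omitted:

```latex
% Write (A|C) for the robot's plausibility of A given C. Consistency forces
% the plausibility of a conjunction to be some function F of the
% plausibilities of its parts:
(AB|C) = F\big[(B|C),\,(A|BC)\big].
% Applying this to (ABC|D) in two different orders (desideratum IIIa) yields
% the associativity functional equation
F\big[F(x,y),\,z\big] = F\big[x,\,F(y,z)\big],
% whose solutions, after a monotonic rescaling p of the plausibilities,
% reduce to the product rule:
p(AB|C) = p(A|BC)\,p(B|C).
% A similar argument relating (A|B) to (\bar{A}|B) forces the sum rule:
p(A|B) + p(\bar{A}|B) = 1,
% with p running from 0 (impossible) to 1 (certain).
```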

The important thing to note is that if you violate any of the desiderata, you are necessarily deviating from how our ideal reasoning robot would operate. The one I’ve most taken note of recently is (IIIb). One thing rationalists will do to avoid committing the argument-from-authority fallacy is to say, “I don’t care that so-and-so is an expert on this subject! Just the arguments, please.” But from the reasoning robot’s point of view, this throws out information about the world (so-and-so is an expert, and so is probably aware of more caveats and hidden assumptions than we are when we hear some argument for or against a proposition), and therefore necessarily falls short of the gold standard of rationality. The robot would include the fact that so-and-so is an expert when updating its beliefs.
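To see the difference in numbers, here is a toy Bayes update, with every probability invented purely for illustration. The robot updates once on the argument itself, and then again on the fact that its source is an expert (treating the two pieces of evidence as conditionally independent, which is itself an assumption):

```python
# Toy Bayes update: does "the speaker is an expert" change our belief?
# All numbers here are invented for illustration only.

def posterior(prior, like_h, like_not_h):
    """P(H | E) via Bayes' rule for a binary hypothesis H."""
    num = like_h * prior
    return num / (num + like_not_h * (1 - prior))

prior = 0.50                     # initial belief in proposition H

# Evidence 1: we hear an argument for H. On its own it is weak evidence:
# P(argument | H) = 0.6 vs P(argument | not H) = 0.4.
p = posterior(prior, 0.6, 0.4)   # -> 0.60

# Evidence 2: the arguer is a domain expert, so the argument is less likely
# to hide an unnoticed flaw. Expertise is further evidence, not a fallacy.
p = posterior(p, 0.8, 0.5)       # -> ~0.71

print(p)
```

On these made-up numbers, the expert’s identity moves the posterior from 0.60 to about 0.71; an ideal robot has no license to throw that shift away.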

Nothing is ever easy.

* Note: There actually is a bit of ambiguity here: one could just as well work with the reciprocals of the classical probabilities, giving real numbers running from 1 to infinity, and a second reciprocal maps back to the usual 0-to-1 scale. The representation is unique only up to that sort of monotonic relabeling.
