Definability of Truth in Probabilistic Logic

This post explores a draft paper by Paul Christiano et al. (06/10/2013 draft), produced at a workshop hosted by the Machine Intelligence Research Institute. My intent is to work through the main argument thoroughly, to make sure I fully understand it. Also, on my first or second reading, I had some worries that the argument seemed to produce something for nothing, possibly violating a “conservation of depth” in mathematics. I am no longer worried about this.

Towards the end of this post, I discuss the non-constructive nature of the proof given, but show that it can at least be modified so as not to rely on the Axiom of Choice.

A one-sentence summary of the paper is: Although a logical theory can’t contain its own Truth predicate, it can contain its own “subjective probability function” which assigns reasonable probabilities to sentences of the theory, including of course statements about the probability function itself.

Or to quote the conclusion of the paper:

Tarski’s result on the undefinability of truth is in some sense an artifact of the infinite precision demanded by reasoning about complete certainty.

1. Setup

One starts with some theory {T} in a countable language {L} that can interpret rational and integer arithmetic (say it extends Peano Arithmetic), and in particular admits Gödel numbering of its sentences. Append to the language another symbol {P} to get a language {L'}, and consider {T} now as an {L'}-theory. For the most part, we will think of {P} as a one-place function symbol taking Gödel numbers of {L'}-sentences to real values, but actually {P} just needs to be a relation between Gödel numbers and pairs of rational numbers (which we think of as saying that the probability of a sentence lies in the specified range).
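
To fix the intended reading of {P}, here is a tiny sketch in Python; everything in it (the Gödel numbers, the sentences, the helper name) is invented for illustration and comes from me, not from the paper.

```python
# Toy illustration only: P read as a relation between Gödel numbers and pairs of
# rationals, "the probability of the sentence numbered n lies in the interval (a, b)".
from fractions import Fraction

# hypothetical Gödel numbering of two L'-sentences
godel = {"0=0": 17, "0=1": 23}

# the relation, written out explicitly for this tiny fragment
P_relation = {
    (godel["0=0"], Fraction(9, 10), Fraction(11, 10)),  # prob of "0=0" in (9/10, 11/10)
    (godel["0=1"], Fraction(-1, 10), Fraction(1, 10)),  # prob of "0=1" in (-1/10, 1/10)
}

def P(n, a, b):
    """Does the theory assert that sentence n has probability in (a, b)?"""
    return (n, a, b) in P_relation

print(P(17, Fraction(9, 10), Fraction(11, 10)))  # True
```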

Now, at the meta-level, think of Borel probability distributions {\mathop{\mathbb P}} over some space of models of {T}, viewed as {L'}-structures (the basic open sets in the model space are determined by satisfaction of particular {L'}-sentences). These induce (finitely additive) probability measures on the set of {L'}-sentences, the measure of a sentence being the measure of the set of models in which it holds. Such a measure will have desirable properties allowing us to reason intelligently at the level of sentences, such as:

  • assigning probability {1} to any theorem of {T},
  • assigning probability {0} to any sentence refuted by {T}, and
  • {\mathop{\mathbb P}(\varphi)=\mathop{\mathbb P}(\varphi\wedge\psi)+\mathop{\mathbb P}(\varphi\wedge\neg\psi)} for all {L'}-sentences {\varphi}, {\psi} (many of which include the for-now-unrelated symbol {P}).

In fact, Christiano et al. show that any function from {L'}-sentences to {[0,1]} satisfying the above is induced by some measure on the model space, and they call such functions coherent probability measures. Note, however, that any coherent {\mathop{\mathbb P}} must be uncomputable, as it assigns probability {1} to the theorems of Peano Arithmetic, and probability {0} to the refuted sentences.
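
As a concrete, if very small, illustration of these conditions, here is a sketch that checks them on an explicitly listed finite fragment. This is purely my own framing; a real coherent distribution is defined on all {L'}-sentences and, as just noted, is uncomputable.

```python
# Check the three coherence conditions on a tiny, explicitly listed fragment.
# Purely illustrative: a real coherent distribution covers every L'-sentence.

theorems = {"0=0"}        # pretend these are the theorems of T we care about
refuted  = {"0=1"}        # ... and the sentences refuted by T

P = {
    "0=0": 1.0,
    "0=1": 0.0,
    "phi": 0.7,
    "phi & psi": 0.4,
    "phi & ~psi": 0.3,
}

def coherent_on_fragment(P):
    ok_theorems = all(P[s] == 1.0 for s in theorems if s in P)
    ok_refuted  = all(P[s] == 0.0 for s in refuted if s in P)
    # the additivity condition P(phi) = P(phi & psi) + P(phi & ~psi),
    # checked for the one pair we bothered to write down
    ok_additive = abs(P["phi"] - (P["phi & psi"] + P["phi & ~psi"])) < 1e-12
    return ok_theorems and ok_refuted and ok_additive

print(coherent_on_fragment(P))  # True
```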

2. Reflection

We’d like to have a coherent {\mathop{\mathbb P}} which says things about the {L'}-sentences which include {P} as if {P=\mathop{\mathbb P}}. If we are thinking of {\mathop{\mathbb P}} as an algorithm (despite the uncomputability mentioned above), perhaps it can compute the probabilities of statements up to arbitrary precision, and so it can determine in finite time whether a probability lies in an open set (the important characteristic of open sets here is that they provide a little wiggle room in either direction).

This is the reflection principle that the authors found to work: at the meta-level, for all rational {a} and {b}, and all {\varphi\in L'},

\displaystyle a < \mathop{\mathbb P}(\varphi) < b \qquad\implies\qquad \mathop{\mathbb P}(a < P(\ulcorner\varphi\urcorner) < b) = 1,

where {\ulcorner\varphi\urcorner} denotes the Gödel number of {\varphi} in {L'}. Intuitively, if {\mathop{\mathbb P}(\varphi)} is actually in {(a,b)}, then {\mathop{\mathbb P}} will surely recognize this about itself. We sometimes call a coherent distribution satisfying this schema reflective.

Of course, we’d like for the statements that {\mathop{\mathbb P}} makes about {P(\ulcorner\varphi\urcorner)} to have some grounding in reality, i.e. something like a converse to the reflection schema. We’ll see shortly that a full converse would be too strong, but actually from the contrapositives of the above schema, one can derive for coherent {\mathop{\mathbb P}} and all {a}, {b}, {\varphi}:

\displaystyle a \leq \mathop{\mathbb P}(\varphi) \leq b \qquad\Longleftarrow\qquad \mathop{\mathbb P}(a\leq P(\ulcorner\varphi\urcorner)\leq b) > 0.
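
(To spell out one direction of that derivation, which is my reconstruction rather than a quotation of the paper: suppose {\mathop{\mathbb P}} is coherent and reflective, {\mathop{\mathbb P}(a\leq P(\ulcorner\varphi\urcorner)\leq b)>0}, and yet, say, {\mathop{\mathbb P}(\varphi)>b}. Pick a rational {b'} with {b<b'<\mathop{\mathbb P}(\varphi)}. Reflection applied to the interval {(b',2)} gives {\mathop{\mathbb P}(b'<P(\ulcorner\varphi\urcorner)<2)=1}, and coherence (which respects provable implication) then forces {\mathop{\mathbb P}(a\leq P(\ulcorner\varphi\urcorner)\leq b)\leq\mathop{\mathbb P}(\neg(b'<P(\ulcorner\varphi\urcorner)<2))=0}, a contradiction. The case {\mathop{\mathbb P}(\varphi)<a} is symmetric, so {a\leq\mathop{\mathbb P}(\varphi)\leq b}.)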

We shall now take a quick look at what reflection means, and see why the converses of the reflection schema can’t hold for coherent {\mathop{\mathbb P}}. Fix rational {0<p\leq1}, and using Gödel’s diagonal lemma, construct an {L'}-sentence {G} which asserts that its own probability is less than {p}. That is, {T} proves

\displaystyle G\ \leftrightarrow\ P(\ulcorner G\urcorner)<p.

Informally, if {p} is small, {G} says “this sentence is probably false”. Assuming {\mathop{\mathbb P}} is reflective:

  • If {\mathop{\mathbb P}(G)>p}, then {\mathop{\mathbb P}} “knows” it, i.e. {\mathop{\mathbb P}(P(\ulcorner G\urcorner)>p)=1}. Since {\mathop{\mathbb P}} is coherent, we have by the definition of {G} that {\mathop{\mathbb P}(\neg G)=1}, and so {\mathop{\mathbb P}(G)=0}, contradiction.
  • Similarly, {\mathop{\mathbb P}(G)} cannot be less than {p}: otherwise {\mathop{\mathbb P}} would “know” that {P(\ulcorner G\urcorner)<p}, and then coherence and the definition of {G} would give {\mathop{\mathbb P}(G)=1}, contradicting {\mathop{\mathbb P}(G)<p\leq1}.
  • Thus the only value {\mathop{\mathbb P}(G)} can take is the boundary point {p}, where {\mathop{\mathbb P}(G)} isn’t actually less than {p}, but {\mathop{\mathbb P}} cannot verify this with finite precision and must simply settle on “maybe I am less than {p}”.
  • The special case where {p=1} is somewhat analogous to the classic liar sentence. By the above analysis, {\mathop{\mathbb P}(G)=p=1}. Here we see that the converse to the reflection schema is inconsistent, for it would derive from {\mathop{\mathbb P}(G)=1} that {\mathop{\mathbb P}(G)<1}.

This state of affairs, in which the modified liar sentence is assigned probability {1}, may not be intuitive, and you may still wonder whether you could deduce a contradiction with a more clever argument (perhaps with {G\iff (\forall\varepsilon>0)(P(\ulcorner G\urcorner)<\varepsilon)}). The main result proved by Christiano et al. is that you cannot: the reflection schema is consistent.
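
As a sanity check on the case analysis above, here is a small Python sketch, entirely my own toy rather than anything from the paper, that encodes the three cases as a map from a candidate value of {\mathop{\mathbb P}(G)} to the value that reflection and coherence would then force, and searches a grid for self-consistent candidates.

```python
# Toy version of the case analysis for G <-> (P(#G) < p): given a candidate value
# x for P(G), reflection lets P "see" whether x lies strictly above or below p,
# and coherence then forces P(G) to 0 or 1 accordingly; only x == p escapes both cases.

def forced_value(x, p):
    if x > p:        # P knows its value exceeds p, hence knows not-G
        return 0.0
    if x < p:        # P knows its value is below p, hence knows G
        return 1.0
    return x         # x == p: reflection cannot certify x < p or x > p, no constraint

p = 0.3
candidates = [i / 100 for i in range(101)]            # a grid containing p
consistent = [x for x in candidates if forced_value(x, p) == x]
print(consistent)    # [0.3]: the boundary value p is the only survivor
```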

3. Consistency

Here’s a statement of the Theorem:

Theorem 1 Let {T} be a consistent theory in a countable language {L}, powerful enough to interpret Peano and rational number arithmetic. Then there exists a coherent probability measure {\mathop{\mathbb P}} satisfying the reflection schema; in particular the {L'}-theory obtained by adding the reflection schema to {T} remains consistent.

At the highest level, the construction of a coherent, reflective {\mathop{\mathbb P}} looks something like this: start with an arbitrary coherent {\mathop{\mathbb P}}, making no attempt whatsoever at reflection, and iteratively replace it with some {\mathop{\mathbb P}^*} from the (non-empty) set of coherent probability measures which treat {\mathop{\mathbb P}} as an acceptable interpretation for {P}. By this, I mean (for all {a}, {b}, {\varphi}):

\displaystyle a < \mathop{\mathbb P}(\varphi) < b \qquad\implies\qquad \mathbb{P}^*(a < P(\ulcorner\varphi\urcorner) < b) = 1.

Call such a {\mathop{\mathbb P}^*} an immediate revision of {\mathop{\mathbb P}}; a coherent distribution is reflective iff it is an immediate revision of itself. The idea is then to construct the sequence above so that a limit can be taken to produce a fixed point.

But this is not really correct. It is not clear that an arbitrary sequence {\mathop{\mathbb P},\mathop{\mathbb P}^*,\mathop{\mathbb P}^{**},\ldots} as above should actually converge, and in fact we’ll see that the “most obvious” such sequence does not.

  • We first show that any real-valued function {\mathop{\mathbb P}} on sentences of {L'} (not just a coherent probability distribution) admits at least one immediate revision. Since {T} is consistent, take any model {\mathbf M\models T}, and interpret {P^{\mathbf M}} as {\mathop{\mathbb P}} (by which we really mean that for every rational {a} and {b} and every {\varphi\in L'}, {a<P^{\mathbf M}(\ulcorner\varphi\urcorner)<b} iff {a<\mathop{\mathbb P}(\varphi)<b}). Since {T} says nothing about the symbol {P}, {(\mathbf M,\mathop{\mathbb P})} remains a model of {T}. One can then define an immediate revision {\mathop{\mathbb P}^*} of {\mathop{\mathbb P}} in a trivial way,

    \displaystyle \mathbb{P}^*(\psi) = \left\{\begin{array}{ll} 1&\mathrm{if}\ (\mathbf M,\mathop{\mathbb P})\models\psi\\ 0&\mathrm{otherwise} \end{array}\right.

    Here we certainly have

    \displaystyle a < \mathop{\mathbb P}(\varphi) < b \qquad\implies\qquad \mathbb{P}^*(a < P(\ulcorner\varphi\urcorner) < b) = 1,

    and moreover {\mathop{\mathbb P}^*} is coherent, because it is induced by the “point mass at the model {(\mathbf M,\mathop{\mathbb P})}” (technically it’s the point mass at the elementary equivalence class of {(\mathbf M,\mathop{\mathbb P})}).

     

  • Thus one could simply define a sequence of immediate revisions {\mathop{\mathbb P}_0,\mathop{\mathbb P}_1,\mathop{\mathbb P}_2,\ldots} with {\mathop{\mathbb P}_0} arbitrary, say {\mathop{\mathbb P}_0\equiv0}, and successors determined as above given some fixed model {\mathbf M}. It is tempting at this point to argue that
    1. Valuations of the countably many {L'}-sentences can be thought of as infinite-length vectors in the Hilbert cube {[0,1]^\omega}, which is compact by Tychonoff’s Theorem.
    2. The set of coherent probability distributions is a closed subset of {[0,1]^\omega}, by the alternate characterization mentioned in section 1; thus {\langle\mathop{\mathbb P}_n\rangle_{n<\omega}} has a subsequence converging to a coherent probability distribution.
    3. The function {f} taking a coherent probability distribution {\mathop{\mathbb P}} to the set of its immediate revisions has an analogue of continuity known as the closed graph property: if {\mathop{\mathbb P}'_n\rightarrow\mathop{\mathbb P}'} and {\mathop{\mathbb P}^*_n\rightarrow\mathop{\mathbb P}^*} (in the product topology, of course) with {\mathop{\mathbb P}'_n\in f(\mathop{\mathbb P}_n^*)} for each {n}, then {\mathop{\mathbb P}'\in f(\mathop{\mathbb P}^*)}. The proof is that for any {a,b,\varphi}, if {a<\mathop{\mathbb P}^*(\varphi)<b}, then {a<\mathop{\mathbb P}_n^*(\varphi)<b} for all large {n}, thus {\mathop{\mathbb P}_n'(a<P(\ulcorner\varphi\urcorner)<b)=1} for all large {n}, so {\mathop{\mathbb P}'(a<P(\ulcorner\varphi\urcorner)<b)=1}.
    4. Furthermore, by compactness of the product, the paired sequence {\langle(\mathop{\mathbb P}_n,\mathop{\mathbb P}_{n+1})\rangle_{n<\omega}} admits a convergent subsequence

      \displaystyle \langle(\mathbb{P}_{n_k},\mathbb{P}_{n_k+1})\rangle_{k<\omega}\rightarrow(\mathbb{P}_{\omega},\mathbb{P}_{\omega+1}).

      By the closed graph property in (3), {\mathop{\mathbb P}_{\omega+1}} is an immediate revision of {\mathop{\mathbb P}_\omega}.

    5. Hopefully {\mathop{\mathbb P}_{\omega+1}=\mathop{\mathbb P}_\omega}. (fingers crossed!)

     

  • After some thought, you may realize that (5) cannot hold (though the reasoning in (1)-(3) will be helpful for the correct proof). This is because the particular sequence {\langle\mathop{\mathbb P}_n\rangle_{n<\omega}} we built in the above bullet point is too trivial: it only takes the extreme values {0} and {1}. Thus if {\mathop{\mathbb P}_{\omega+1}=\mathop{\mathbb P}_{\omega}}, we’d have a {\{0,1\}}-valued coherent and reflective distribution. This is basically the forbidden Truth predicate, and in any case it certainly can’t handle sentences like {G\iff P(\ulcorner G\urcorner)<1/2} from section 2.

Thus something more must go into producing a reflective and coherent distribution. The key to obtaining coherent distributions with values throughout {[0,1]} is taking convex combinations of old distributions. That is, if {\mathop{\mathbb P}'} and {\mathop{\mathbb P}''} are coherent distributions induced by measures {\mu'} and {\mu''} on the model space, then {t\mathop{\mathbb P}' + (1-t)\mathop{\mathbb P}''} is a coherent distribution induced by {t\mu'+(1-t)\mu''} for any {t\in[0,1]}. Moreover, if {\mathop{\mathbb P}'} and {\mathop{\mathbb P}''} are actually immediate revisions of {\mathop{\mathbb P}}, then so is {t\mathop{\mathbb P}'+(1-t)\mathop{\mathbb P}''}, so the set of immediate revisions of a function is convex.
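
Here is a quick toy check of the last claim, again my own illustration with invented names and a single sentence: each immediate revision assigns probability {1} to every relevant interval statement about {P}, and a convex combination of {1}s is still {1}.

```python
# Toy check that a convex mixture of two immediate revisions of P is again an
# immediate revision of P.  "Sentences" are just the interval statements
# "a < P(#phi) < b" that we explicitly list.

def is_immediate_revision(P_star, P, interval_sentences):
    # require P_star to assign 1 to "a < P(#phi) < b" whenever a < P(phi) < b holds
    return all(abs(P_star[(a, phi, b)] - 1.0) < 1e-12
               for (a, phi, b) in interval_sentences
               if a < P[phi] < b)

def mix(P1, P2, t):
    return {s: t * P1[s] + (1 - t) * P2[s] for s in P1}

P = {"phi": 0.6}                                   # the distribution being revised
sentences = [(0.5, "phi", 0.7), (0.0, "phi", 0.5)]

# two revisions: both assign 1 to the interval statement that is actually true of P,
# and they differ on the other one (where no constraint applies)
P1 = {(0.5, "phi", 0.7): 1.0, (0.0, "phi", 0.5): 0.2}
P2 = {(0.5, "phi", 0.7): 1.0, (0.0, "phi", 0.5): 0.8}

mixed = mix(P1, P2, 0.3)
print(is_immediate_revision(P1, P, sentences),
      is_immediate_revision(P2, P, sentences),
      is_immediate_revision(mixed, P, sentences))   # True True True
```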

Since there are many point mass measures in the model space (even if {T} is complete as an {L}-theory, different interpretations of {P} will yield continuum-many distinct elementary equivalence classes among {L'}-structures), convexity guarantees a fairly rich space of immediate revisions from which to pick our sequence, as known measures can be mixed via convex combination to produce new ones. But how shall we handle all this mixing and choosing?

The missing ingredient, which in effect makes all of these choices for us behind the scenes, is the Kakutani-Fan-Glicksberg fixed point theorem.

Theorem 2 (Kakutani, Fan, Glicksberg) Let {S} be a non-empty, compact, convex subset of a locally convex Hausdorff space. Let {f:S\rightarrow2^S} be a set-valued function on {S} which has a closed graph and the property that {f(x)} is non-empty and convex for all {x\in S}. Then the set of fixed points of {f} (meaning {x\in f(x)}) is non-empty and compact.

The KFG theorem is the very last piece of this puzzle. So although the above reasoning failed to produce a fixed point, we have already done all the work of checking that the KFG hypotheses are met:

  • Let {S} be the set of coherent probability distributions, which we have already seen is compact, convex, and non-empty (viewed as a subset of the locally convex space {{\mathbb R}^\omega}).
  • The function {f} mapping a coherent probability distribution to its set of immediate revisions similarly has {f(\mathop{\mathbb P})\subseteq S} convex and non-empty for each {\mathop{\mathbb P}\in S}, and {f} has a closed graph by the argument in step (3) above.

So a coherent and reflective distribution is produced as a fixed point by the KFG theorem.

4. Discussion

As written, the proof is non-constructive in that KFG relies (at least mildly) on the Axiom of Choice, and so proves that a fixed point exists without explicitly exhibiting one. My knee-jerk reaction is that in any separable complete metric space (that is, in any Polish space), this mild reliance on the Axiom of Choice can usually be eliminated: the regularity of the domain should be enough to pin down the arbitrary choices when they must be made, so that one particular fixed point (though there may be many) can be specified by a single formula in ZF set theory.1

Thus one could probably develop a constructive version of the KFG theorem for {[0,1]^\omega}. I put a bit of thought into this (it’s easy for the {1}-dimensional Kakutani fixed point theorem anyway), though I haven’t been terribly motivated to complete it, because it seemed long and tedious for a rather minor improvement: so long as we’re stuck with uncomputability, undefinability just didn’t seem that much worse.

I later realized there’s a much easier (albeit higher-level) way to remove the reliance on Choice here. From whatever set-theoretic universe {\mathbf V} we’re living in, look “inward” to Gödel’s constructible universe {\mathbf L}, which provides a definable well-ordering of itself via some two-variable set-theoretic formula {\varphi_\mathrm{wo}}. Now the KFG theorem (relative to {\mathbf L}) can pick out a single fixed point with some formula involving {\varphi_\mathrm{wo}}. Thus the main theorem (relative to {\mathbf L}) defines via some formula {\psi} an object {\mathop{\mathbb P}} which {\mathbf L} believes to be a coherent, reflective distribution. Step back out of {\mathbf L}, and we are done when we recognize that {\mathbf V} agrees that {\mathop{\mathbb P}} is a coherent, reflective distribution defined by the relativized formula {\psi^{\mathbf L}}. (That is, by standard absoluteness arguments, {\mathbf V} agrees that {\mathop{\mathbb P}} is an {\omega}-sequence of reals satisfying the countably many coherence and reflection axioms. Also, following that absoluteness link, it sounds like I really ought to learn about Shoenfield’s absoluteness theorem to automate these kinds of arguments. I had not heard of that before.)

Is the above of much practical use? Well, probably not!

To dodge the more serious problem of uncomputability, a “subjective probability function” would have to allow for logical uncertainty (perhaps in the sense of Gaifman), in which theorems are not automatically derived from the axioms. For instance, I may believe that the {n}-th digit of {\pi} (say, for {n} equal to Graham’s number) has a {50\%} chance of being odd. A reasonable mathematical theory like ZF clearly either proves or disproves this statement, but (barring the discovery of a very simple rule for calculating digits of {\pi}) there is not enough computation in the universe to find out which.

Hopefully there’s a more practical analogue of this theorem, out there and waiting to be discovered. I’ve already indicated that one ought to be able to frame the process of finding a fixed point as a converging sequence of revisions, with the current proof just relying on the KFG theorem to do this work behind the scenes. Totally speculating here, I wonder whether one might be able to start the process with a “prior” distribution over some finite set of sentences, and alternately update the distribution in two directions:

  1. by increasing the coherence and reflectiveness of the distribution, and
  2. by expanding the set of sentences in the domain.

All the obvious disclaimers apply, including

  • I just made this up without too much thought as to how it would be implemented, and
  • I would expect any limit point produced by such a process to depend rather strongly on the arbitrary choices that would have to be made in a proper specification (e.g. how much of step (1) should we do before switching to step (2) and vice versa?). In particular, if one made obviously wrong choices here, I would not expect any convergence at all.

Still, I think the above is at least a fair model of what a computable reflective reasoning process should look like.
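
To make the speculation slightly more concrete, here is a cartoon of the flavour of process I have in mind, reusing the forced-value map from the sketch in section 2. The update rule, step sizes, and stopping point are all invented; it handles exactly one sentence, so it is at best a toy of step (1).

```python
# Cartoon of step (1): damped updates toward a value of P(G) consistent with
# G <-> (P(#G) < 1/2), with shrinking step sizes so the bouncing around 1/2 dies out.
# Entirely speculative and illustrative; not the paper's construction.

def forced(x, p=0.5):
    if x > p:
        return 0.0   # P would "see" x > p and conclude not-G
    if x < p:
        return 1.0   # P would "see" x < p and conclude G
    return x

x = 0.9                      # an arbitrary "prior" for P(G)
for k in range(10000):
    step = 1.0 / (k + 2)     # shrinking steps, in the style of stochastic approximation
    x = (1 - step) * x + step * forced(x)

print(round(x, 3))           # ~0.5, the reflective value found in section 2
```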

In any case, Christiano et al. have written a nice draft paper demonstrating that a logic can reason probabilistically about its own truth without crashing and burning at the liar’s paradox. In a vast sea of logical impossibility results, this kind of thing is refreshing.

Notes

  1. The classic example here is the Baire Category Theorem, extremely useful for existence proofs in analysis, which states for complete metric spaces that the intersection of countably many open dense sets remains dense (and in particular is non-empty). The BCT is known to be equivalent to the Axiom of Dependent Choice (weaker than the full Axiom of Choice, but not a theorem of ZF), but the BCT can be proved for Polish spaces in ZF alone. The trick: given an enumeration for a countable dense subset of a Polish space, I can construct an enumeration of the rational-radius open balls centered around these points, and I can prove that these balls form a base for the Polish topology. The proof of the BCT relies on choosing a sequence of open balls with ZF-checkable properties; I now have an enumeration (aka well-ordering) to guide my hand.
