26.12.20

Kripke semantics and Tableaux for Intuitionistic Logic

Simply-typed $\lambda$ calculus terms can be seen as encodings of natural deduction proofs in intuitionistic propositional logic (IPC - the C is for calculus; IPL is also common). This is the Curry-Howard correspondence.

So we can prove an IPC formula by finding a suitable term of simply-typed $\lambda$ calculus. Sounds great - but there are other ways to represent proofs, and I wanted to understand better how the structure of these natural deduction proofs relates to the structure of those other proofs. \(\newcommand{\qfalse}[0]{\color{blue} {\bot}}\) \(\newcommand{\qp}[0]{\color{blue} p}\) \(\newcommand{\qq}[0]{\color{blue} q}\) \(\newcommand{\qX}[0]{\color{blue} X}\) \(\newcommand{\qY}[0]{\color{blue} Y}\) \(\newcommand{\qZ}[0]{\color{blue} Z}\) \(\newcommand{\qand}[0]{\color{blue} \wedge}\) \(\newcommand{\qor}[0]{\color{blue} \vee}\) \(\newcommand{\qimplies}[0]{\rightarrow}\) \(\newcommand{\qneg}[1]{{\neg #1}}\) \(\newcommand{\andintro}[0]{\color{blue}{(\wedge{\mathrm{\small INTRO}})}}\) \(\newcommand{\andelim}[0]{\color{blue}{(\wedge{\mathrm{\small ELIM}})}}\) \(\newcommand{\andelimleft}[0]{\color{blue}{(\wedge{\mathrm{\small ELIM/1}})}}\) \(\newcommand{\andelimright}[0]{\color{blue}{(\wedge{\mathrm{\small ELIM/2}})}}\) \(\newcommand{\orintro}[0]{\color{blue}{(\vee{\mathrm{\small INTRO}})}}\) \(\newcommand{\orelim}[0]{\color{blue}{(\vee{\mathrm{\small ELIM}})}}\) \(\newcommand{\implintro}[0]{\color{blue}{(\rightarrow{\mathrm{\small INTRO}})}}\) \(\newcommand{\implelim}[0]{\color{blue}{(\rightarrow{\mathrm{\small ELIM}})}}\) \(\newcommand{\notelim}[0]{\color{blue}{(\neg{\mathrm{\small ELIM}})}}\) \(\newcommand{\botelim}[0]{\color{blue}{(\bot{\mathrm{\small ELIM}})}}\)

As an example, consider the formula $\qneg{\qneg{(\qp \qor \qneg{\qp})}}$. We can take $\qneg{\qX}$ as an abbreviation for $\qX \qimplies \bot$. Is it "obvious" how to find a term of type $((\qp \qor \qneg{\qp}) \qimplies \bot) \qimplies \bot$? Here is a natural deduction proof of it (the names are from the list of intuitionistic natural deduction rules in the post above; I can't be bothered with typesetting a tree). $$\begin{array}{lll} 1. & \qneg{(\qp \qor \qneg{\qp})} & \mathrm{(assumption)} \\ 2. & \qp & \mathrm{(assumption)} \\ 3. & \qp \qor \qneg{\qp} & \mathrm{(via}~\orintro/1~\mathrm{from~2.)} \\ 4. & \bot & \mathrm{(via}~\implelim~\mathrm{apply~1.~to~3.)} \\ 5. & \qneg{\qp} & \mathrm{(via}~\implintro~\mathrm{from~[2.]~to~4.)} \\ 6. & \qp \qor \qneg{\qp} & \mathrm{(via}~\orintro/2~\mathrm{from~5.)} \\ 7. & \bot & \mathrm{(via}~\implelim~\mathrm{apply~1.~to~6.)} \\ 8. & \qneg{\qneg{(\qp \qor \qneg{\qp})}}& \mathrm{(via}~\implintro~\mathrm{from~[1.]~to~7.)} \\ \end{array} $$
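
Under the Curry-Howard reading, such a proof is a program. Here is a sketch of the corresponding term in Haskell (the encoding and the name are mine): falsehood becomes the empty type Void, disjunction becomes Either, and a negation is a function into Void. The comments map the term back to the proof steps above.

```haskell
import Data.Void (Void)

-- A proof of ((p ∨ ¬p) → ⊥) → ⊥, with ⊥ as Void, ∨ as Either, ¬x as x -> Void.
-- k is assumption 1, x is assumption 2;
-- Left x is step 3, k (Left x) is step 4, the inner lambda is step 5,
-- Right (...) is step 6, and the outer application of k is step 7.
notNotExcludedMiddle :: ((Either p (p -> Void)) -> Void) -> Void
notNotExcludedMiddle k = k (Right (\x -> k (Left x)))
```

Note that the term uses its argument k twice, just as the natural deduction proof uses assumption 1 twice; that reuse will show up again in the tableau proof further below.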

Maybe a better question to ask is: Why does this proof look the way it does? How, starting from the formula, can we arrive at such a proof, with steps that are so simple that we can leave them to a machine?

Intuitionistic Tableaux

Kripke semantics and tableaux give us an alternative view of proving IPC formulae. The tableau calculus we discuss here is not very different from sequent calculus: it uses "block tableaux", where each node is a set of (signed) formulae. It will be a refutation calculus - despite proving intuitionistic logic!

The definitions here follow Melvin Fitting's book [1] and Smullyan's unified presentation [2].

In contrast to block tableaux, "analytic tableaux" have one formula per node, and we consider the set of all formulae on a branch instead of looking at a single node. These and connexion methods are out of scope today, just like modal logic or the algebraic view (denotational semantics). The only differences from the aforementioned texts are that instead of using negation as primitive, I will use $\qfalse$ as primitive, and that I make it explicit when a principal $T \qX$ formula has to be preserved in a tableau extension.

[1] Melvin Fitting. Intuitionistic Logic, Model Theory and Forcing [researchgate pdf].

[2] Raymond M. Smullyan. A generalization of intuitionistic and modal logics. https://doi.org/10.1016/S0049-237X(08)71543-X

Syntax for Intuitionistic and Minimal Propositional Logic

Once more, we define the well-formed formulae (wff):

  • a propositional variable $\qp$ is a wff
  • $\qfalse$ is a wff
  • if $\qX$ and $\qY$ are wff, then $\qX \qand \qY$ is a wff.
  • if $\qX$ and $\qY$ are wff, then $\qX \qor \qY$ is a wff.
  • if $\qX$ and $\qY$ are wff, then $\qX \qimplies \qY$ is a wff.

We use $\qq,\qp$ for propositional variables and $\qX, \qY, \qZ$ as metavariables for formulae. The symbol $\qfalse$ stands for "falsehood", "falsity" or "absurd" and is also written as $\bot$. We use $\qfalse$ and $\qimplies$ as primitive (built-in) symbols, and define negation $\qneg{\qX}$ as an abbreviation for $\qX \qimplies \qfalse$. This goes well with the intuitionist notion that "a proof of $\qneg{\qX}$ amounts to a construction that transforms a proof of $\qX$ into a proof of falsehood."
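
As an aside, the grammar transcribes directly into a data type; the following Haskell sketch (the names are mine) also spells out negation as the derived form just described.

```haskell
-- Well-formed formulae, with falsehood and implication as primitives.
data Formula
  = Var String              -- propositional variables p, q, ...
  | Falsum                  -- falsehood
  | And Formula Formula     -- X ∧ Y
  | Or  Formula Formula     -- X ∨ Y
  | Imp Formula Formula     -- X → Y
  deriving (Eq, Show)

-- Negation is an abbreviation: ¬X stands for X → ⊥.
neg :: Formula -> Formula
neg x = Imp x Falsum

-- The example formula from above: ¬¬(p ∨ ¬p).
example :: Formula
example = neg (neg (Var "p" `Or` neg (Var "p")))
```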

Kripke semantics

A (Kripke) model for intuitionistic propositional logic is a triple $\langle \mathcal{G}, \mathcal{R}, \Vdash \rangle$ where $\mathcal{G}$ is a set of states, $\mathcal{R}$ is an accessibility relation between states that is reflexive and transitive, and $\Vdash$ is a relation between states and formulae that satisfies conditions $K_0 - K_4$ below. We use $\Gamma, \Delta$ for elements of $\mathcal{G}$, read $\Gamma \Vdash \qX$ as "$\Gamma$ forces $\qX$" and write $\Gamma^\star$ for "any $\Delta$ with $\Gamma \mathcal{R} \Delta$".

($K_0$) If $\Gamma \Vdash \qX$ then $\Gamma^\star \Vdash \qX$

($K_1$) $\Gamma \Vdash \qX \qand \qY$ iff $\Gamma \Vdash \qX$ and $\Gamma \Vdash \qY$

($K_2$) $\Gamma \Vdash \qX \qor \qY$ iff $\Gamma \Vdash \qX$ or $\Gamma \Vdash \qY$

($K_3$) $\Gamma \Vdash \qX \qimplies \qY$ iff for every $\Gamma^\star$, either $\Gamma^\star \not\Vdash \qX$ or $\Gamma^\star \Vdash \qY$.

($K_4$) $\Gamma \not\Vdash \qfalse$.

A formula is valid in a model if $\Gamma \Vdash \qX$ for all $\Gamma \in \mathcal{G}$. A formula is valid if it is valid in all models. Note that with $K_0$ in mind, we could also have written conditions $K_1$ and $K_2$ differently:

($K_1'$) $\Gamma \Vdash \qX \qand \qY$ iff for every $\Gamma^\star$, $\Gamma^\star \Vdash \qX$ and $\Gamma^\star \Vdash \qY$

($K_2'$) $\Gamma \Vdash \qX \qor \qY$ iff for every $\Gamma^\star$, $\Gamma^\star \Vdash \qX$ or $\Gamma^\star \Vdash \qY$
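
Since negation is only an abbreviation, it does not get a condition of its own. Unfolding $\qneg{\qX}$ as $\qX \qimplies \qfalse$ and applying $K_3$ and $K_4$ gives the familiar clause $$\Gamma \Vdash \qneg{\qX} ~~ \mathrm{iff~for~every~} \Gamma^\star, ~ \Gamma^\star \not\Vdash \qX,$$ because the second disjunct $\Gamma^\star \Vdash \qfalse$ offered by $K_3$ can never hold. A state forces $\qneg{\qX}$ exactly when no reachable state forces $\qX$.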

In the above definition, it is understood that $\Vdash$ is "seeded" with a relation $\Vdash^{0}$ between states and propositional variables $\qp$. This $\Vdash^{0}$ is then extended to a relation between states and whole formulae. It is also common to define states directly as subsets of the "available" propositional variables (the ones forced in that state). Here is an example, with two states $\Gamma = \{\qq\}$ and $\Delta = \{\qp, \qq\}$, where $\Gamma \mathcal{R} \Delta$ (together with the pairs required by reflexivity).

The relation $\Vdash$ takes all reachable states into consideration, and once a state forces $\qq$, i.e. $\Gamma \Vdash \qq$, any subsequent $\Gamma^\star$ must force it, too. "Truth in a state" is rather more dynamic than the absolute "truth" we are used to from classical logic. One can give this a temporal or epistemic interpretation and say that states represent subjective knowledge of true facts, and that this knowledge "grows over time" (as we access subsequent states).

As an exercise: Convince yourself that $\Delta \Vdash \qp \qand \qq$, that $\Gamma \not\Vdash \qp$, and that the formula $\qp \qimplies \qq$ is valid in this model. Note further that $\qp \qor \qneg{\qp}$ is not valid. We have $\Gamma \not\Vdash \qp$, which is very different from saying $\Gamma \Vdash \qneg{\qp}$.
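
If you prefer to let a machine do the convincing, here is a small Haskell sketch of the forcing relation over a finite model (my own toy code; the model encoding and names are mine), instantiated with the two-state example and checking the claims of the exercise.

```haskell
-- Formulae with falsehood and implication as primitives, as defined above.
data Formula = Var String | Falsum | And Formula Formula | Or Formula Formula | Imp Formula Formula
  deriving (Eq, Show)

neg :: Formula -> Formula
neg x = Imp x Falsum

-- A finite Kripke model: the states, the reflexive-transitive accessibility
-- relation (as a function returning all reachable states, including the state
-- itself), and the seeding relation between states and propositional variables.
data Model st = Model
  { states    :: [st]
  , reachable :: st -> [st]
  , forcesVar :: st -> String -> Bool
  }

-- The forcing relation, following K1-K4. Condition K0 is not checked here; it
-- holds for the example because the seeding grows along the accessibility relation.
forces :: Model st -> st -> Formula -> Bool
forces m g (Var p)   = forcesVar m g p
forces _ _ Falsum    = False                                          -- K4
forces m g (And x y) = forces m g x && forces m g y                   -- K1
forces m g (Or  x y) = forces m g x || forces m g y                   -- K2
forces m g (Imp x y) =                                                -- K3
  all (\d -> not (forces m d x) || forces m d y) (reachable m g)

validIn :: Model st -> Formula -> Bool
validIn m x = all (\g -> forces m g x) (states m)

-- The example: Γ = {q}, Δ = {p, q}, with Γ R Δ.
data St = Gamma | Delta deriving (Eq, Show)

exampleModel :: Model St
exampleModel = Model { states = [Gamma, Delta], reachable = reach, forcesVar = vars }
  where
    reach Gamma = [Gamma, Delta]
    reach Delta = [Delta]
    vars Gamma v = v == "q"
    vars Delta v = v `elem` ["p", "q"]

main :: IO ()
main = do
  let p = Var "p"; q = Var "q"
  print (forces exampleModel Delta (And p q))   -- True:  Δ forces p ∧ q
  print (forces exampleModel Gamma p)           -- False: Γ does not force p
  print (validIn exampleModel (Imp p q))        -- True:  p → q is valid in the model
  print (validIn exampleModel (Or p (neg p)))   -- False: p ∨ ¬p fails at Γ
```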

[When Kripke models are used for modal logics, people consider accessibility relations with different properties, leading to different modal logics; there is an explicit symbol $\square$ to talk about accessibility, and our reflexive and (in particular) transitive $\mathcal{R}$ corresponds to S4 modal logic, with the characteristic S4 axiom $\square \qp \rightarrow \square\square \qp$. We will just mention here that intuitionistic logic can be interpreted in S4 modal logic.] \(\newcommand{\qT}[0]{T}\) \(\newcommand{\qF}[0]{F}\) \(\newcommand{\qS}[0]{\mathrm{S}}\)

Kripke semantics for Signed Formulae

On towards a proof system for IPC based on an intuitionistic tableau calculus (or just intuitionistic tableaux). To this end, we introduce signed formulae, using symbols $T$ and $F$ and writing $T\qX, F\qX$ for the signed versions of a formula $\qX$. The forcing relation extends to signed formulae:
  • $\Gamma \Vdash T\qX$ means $\Gamma \Vdash \qX$ and
  • $\Gamma \Vdash F\qX$ means $\Gamma \not\Vdash \qX$.

In classical logic, $T\qX$ would mean $\qX$ is true and $F\qX$ would mean $\qX$ is false (or $\qneg{\qX}$ is true). The way to understand $\Gamma \Vdash F\qX$ non-classically is to read it as "it is not known that $\Gamma \Vdash \qX$." Note that $\Gamma$ is merely a state, not a context or set of formulae - we are using this assertion as a building block to compare proof situations. For example, the assertions $T\qX \qand \qY$ and $F\qX$ cannot both hold in the same state, since forcing $\qX \qand \qY$ forces $\qX$ by $K_1$.

We follow Smullyan's uniform notation, and classify our formulae into different types (whether they are "conjunctive" or "disjunctive" in nature):

  • type $\alpha$ is any signed formula of the shape $T\qX \qand \qY, F\qX \qor \qY, F\qX \qimplies \qY, F\qfalse$.
  • type $\beta$ is any signed formula of the shape $F\qX \qand \qY, T\qX \qor \qY, T\qX \qimplies \qY, T\qfalse$.

Since we added $\qfalse$ as primitive, we need to say something about its components, too. We treat $\qfalse$ like an empty disjunction.

For a signed formula $\alpha$ (respectively $\beta$) we call certain signed subformulae its components $\alpha_1$ and $\alpha_2$ (respectively $\beta_1$ and $\beta_2$), according to this table: $$ \begin{array}{c|c|c} \alpha & \alpha_1 & \alpha_2 \\ \hline T\qX \qand \qY & T\qX & T\qY \\ \hline F\qX \qor \qY & F\qX & F\qY \\ \hline F\qX \qimplies \qY & T\qX & F\qY \\ \hline F \qfalse & F \qfalse & F \qfalse \end{array} \quad \quad \begin{array}{c|c|c} \beta & \beta_1 & \beta_2 \\ \hline F\qX \qand \qY & F\qX & F\qY \\ \hline T\qX \qor \qY & T\qX & T\qY \\ \hline T\qX \qimplies \qY & F\qX & T\qY \\ \hline T \qfalse & T \qfalse & T \qfalse \end{array} $$

Again, for $\qfalse$ the components are not proper subformulae, but this is not going to be a concern.

Let us call $F\qX$ the conjugate of $T\qX$ and vice versa, writing $\overline{T\qX}$ for $F\qX$ and $\overline{F\qX}$ for $T\qX$. Note that the conjugate of a formula of type $\alpha$ is of type $\beta$ and vice versa. In classical logic, a set of signed formulae $\{T\qX_1,\ldots,T\qX_n, F\qY_1, \ldots, F\qY_m\}$ would correspond to the sequent $\qX_1, \ldots, \qX_n \longrightarrow \qY_1, \ldots, \qY_m$ of sequent calculus. The sequent would be provable iff the set of signed formulae is unsatisfiable. The distinction between types $\alpha$ and $\beta$ captures whether a formula can simply be split into its components (conjunctive formulae $\alpha$), or whether the proof needs to branch into two possibilities (disjunctive formulae $\beta$).

For intuitionistic logic, we are going to call all formulae $T \qX$ permanent and all formulae $F \qX$ co-permanent. With all this setup, we can remix our definition of Kripke model and $\Vdash$ into the following conditions on signed formulae $\qX$:

$(U_0(a))$ Either $\Gamma \Vdash \qX$ or $\Gamma \Vdash \overline{\qX}$, but not both.

$(U_0(b))$ If $\Gamma \Vdash \qX$ and $\qX$ is permanent, then $\Gamma^\star \Vdash \qX$.

$(U_1)$ For any permanent $\alpha$, $\Gamma \Vdash \alpha$ iff for every $\Gamma^\star$, $\Gamma^\star \Vdash \alpha_1$ and $\Gamma^\star \Vdash \alpha_2$.

$(U_2)$ For any permanent $\beta$, $\Gamma \Vdash \beta$ iff for every $\Gamma^\star$, $\Gamma^\star \Vdash \beta_1$ or $\Gamma^\star \Vdash \beta_2$.

The behavior on co-permanent $\alpha$ and $\beta$ is determined by $U_1$ and $U_2$ too, by taking $U_0$ into consideration. Observe:

$(U_1')$ For any co-permanent $\alpha$, $\Gamma \Vdash \alpha$ iff for some $\Gamma^\star$, $\Gamma^\star \Vdash \alpha_1$ and $\Gamma^\star \Vdash \alpha_2$.

$(U_2')$ For any co-permanent $\beta$, $\Gamma \Vdash \beta$ iff for some $\Gamma^\star$, $\Gamma^\star \Vdash \beta_1$ or $\Gamma^\star \Vdash \beta_2$.
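
To see why, it helps to unfold one case explicitly, say the co-permanent $\alpha = F\qX \qimplies \qY$ with components $\alpha_1 = T\qX$ and $\alpha_2 = F\qY$: $$ \begin{array}{lll} \Gamma \Vdash F\qX \qimplies \qY & \mathrm{iff~not~} \Gamma \Vdash T\qX \qimplies \qY & (U_0(a)) \\ & \mathrm{iff~not:~for~every~} \Gamma^\star, \Gamma^\star \Vdash F\qX \mathrm{~or~} \Gamma^\star \Vdash T\qY & (U_2) \\ & \mathrm{iff~for~some~} \Gamma^\star, \Gamma^\star \not\Vdash F\qX \mathrm{~and~} \Gamma^\star \not\Vdash T\qY & \\ & \mathrm{iff~for~some~} \Gamma^\star, \Gamma^\star \Vdash T\qX \mathrm{~and~} \Gamma^\star \Vdash F\qY & (U_0(a)) \\ \end{array} $$ which is exactly $(U_1')$ for this $\alpha$. The other cases are similar.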

As an exercise: Go back to the example above, and convince yourself that $\Delta \Vdash T\, \qp \qand \qq$ and $\Gamma \Vdash F\qp$. Check that $T\, \qp \qimplies \qq$ holds in every state, using the conditions $U_\star$ above. Now do the same for $T\, (\qp \qor \qneg{\qp})$, remembering that it stands for $T\, (\qp \qor (\qp \qimplies \qfalse))$. Now, to keep it interesting, what about $\Gamma \Vdash F\, (\qp \qor \qneg{\qp})$? Is $F\, (\qp \qor \qneg{\qp})$ valid?

A final definition: we call a formula special if it is co-permanent but has a permanent component; otherwise it is called ordinary. The only special formula we have is $F\qX \qimplies \qY$, of type $\alpha$. If we had negation as a built-in operator, $F\qneg{\qX}$ would be a second special $\alpha$. Check out Smullyan's paper for predicate logic and other generalizations that require special $\beta$ formulae.
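
The classification is small enough to write down as code. Here is a Haskell sketch (types and names continue my earlier sketches) of the component table, conjugates, permanence and specialness; treating $T\qfalse$ and $F\qfalse$ as having no components is harmless, since the table makes their components just the formulae themselves.

```haskell
data Formula = Var String | Falsum | And Formula Formula | Or Formula Formula | Imp Formula Formula
  deriving (Eq, Show)

-- Signed formulae T X and F X.
data Signed = T Formula | F Formula deriving (Eq, Show)

data Kind = Alpha | Beta deriving (Eq, Show)   -- conjunctive vs. disjunctive

-- Smullyan's component table. Signed variables and signed ⊥ get no components
-- here; in the table above their components would just be the formula itself.
components :: Signed -> Maybe (Kind, Signed, Signed)
components (T (And x y)) = Just (Alpha, T x, T y)
components (F (Or  x y)) = Just (Alpha, F x, F y)
components (F (Imp x y)) = Just (Alpha, T x, F y)
components (F (And x y)) = Just (Beta,  F x, F y)
components (T (Or  x y)) = Just (Beta,  T x, T y)
components (T (Imp x y)) = Just (Beta,  F x, T y)
components _             = Nothing

conjugate :: Signed -> Signed
conjugate (T x) = F x
conjugate (F x) = T x

-- T-signed formulae are permanent, F-signed ones co-permanent.
permanent :: Signed -> Bool
permanent (T _) = True
permanent (F _) = False

-- Special: co-permanent with a permanent component.
-- With our primitives, this is exactly F (X → Y).
special :: Signed -> Bool
special s
  | permanent s = False
  | otherwise   = case components s of
      Just (_, c1, c2) -> permanent c1 || permanent c2
      Nothing          -> False
```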

Tableau Proofs

A tableau proof is like a sequent calculus proof written upside-down: we draw it with the root at the top, growing towards the bottom. Instead of $S \cup \{\qX\}$, we write $S, \qX$. A vertical bar $\vert$ indicates that there are two children. $\qS_P$ means $\{ \qT\qX \vert \qT\qX \in \qS\}$, the permanent part of $\qS$, and $\alpha_P$ means that $\alpha$ is repeated if it is permanent (same for $\beta_P$). This is just to make explicit when a principal $T \qX$ formula (possibly) needs to be preserved; in sequent calculus, this would surface as a structural operation. $$ \begin{array}{ll} (A1) & \mathrm{for~ordinary} ~ \alpha: ~ \LARGE \frac{\qS,~ \alpha }{\qS,~ \alpha_P,~ \alpha_1,~ \alpha_2 } \\ (A2) & \mathrm{for~special} ~ \alpha: ~ \LARGE \frac{\qS,~ \alpha }{\qS_P,~ \alpha_1,~ \alpha_2 } \\ (B1) & \mathrm{for~ordinary} ~ \beta: ~ \LARGE \frac{\qS,~ \beta }{\qS,~ \beta_P,~ \beta_1 ~ \vert ~ \qS,~ \beta_P,~ \beta_2 } \\ \end{array} $$

Now, a configuration is a collection $[~\qS_1, \qS_2, \ldots, \qS_n~]$ of sets of signed formulae. A rule takes a configuration and replaces one $\qS_i$ with its result (or its two results). A tableau is a finite sequence of configurations $C_1, \ldots, C_n$ in which each configuration except the first results from its predecessor by application of a rule.

A set of signed formulae is closed if it contains both $\qT \qX$ and $\qF \qX$ for some formula $\qX$, or if it contains $\qT \qfalse$. A configuration $[~\qS_1, \qS_2, \ldots, \qS_n~]$ is closed if all sets $\qS_i$ are closed. A tableau $C_1, \ldots, C_n$ is closed if some $C_i$ is closed.

A tableau for a set of signed formulae $\qS$ is a tableau $C_1, \ldots, C_n$ with $C_1 = [~\qS~]$. A finite set of signed formulae $\qS$ is inconsistent if some tableau for $\qS$ is closed. Otherwise, $\qS$ is consistent. A formula $\qX$ is a theorem, written $\vdash_I \qX$, if $\{\qF \qX\}$ is inconsistent, and a closed tableau for $\{\qF\qX\}$ is called a proof of $\qX$.

What we have done here is define provability of $\qX$ as inconsistency of $\qF\qX$. This refutation calculus may not be appealing to an intuitionist.
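
To see these definitions in action, here is a naive Haskell sketch of proof search in this calculus (my own toy code, re-declaring what it needs from the previous sketch so the file stands alone). It implements the three rules with the $\qS_P$ and $\beta_P$ bookkeeping and searches for a closed tableau up to a fixed depth bound, so it is a toy search rather than a proper decision procedure.

```haskell
import Data.List (nub)

data Formula = Var String | Falsum | And Formula Formula | Or Formula Formula | Imp Formula Formula
  deriving (Eq, Show)

data Signed = T Formula | F Formula deriving (Eq, Show)
data Kind = Alpha | Beta deriving (Eq, Show)

neg :: Formula -> Formula
neg x = Imp x Falsum

components :: Signed -> Maybe (Kind, Signed, Signed)
components (T (And x y)) = Just (Alpha, T x, T y)
components (F (Or  x y)) = Just (Alpha, F x, F y)
components (F (Imp x y)) = Just (Alpha, T x, F y)   -- the special alpha
components (F (And x y)) = Just (Beta,  F x, F y)
components (T (Or  x y)) = Just (Beta,  T x, T y)
components (T (Imp x y)) = Just (Beta,  F x, T y)
components _             = Nothing

permanent, special :: Signed -> Bool
permanent (T _) = True
permanent (F _) = False
special (F (Imp _ _)) = True
special _             = False

-- A set of signed formulae is closed if it contains T ⊥ or a conjugate pair.
closed :: [Signed] -> Bool
closed s = T Falsum `elem` s || any (\x -> conjugate x `elem` s) s
  where conjugate (T x) = F x
        conjugate (F x) = T x

-- Can this set be completed to a closed sub-tableau within the given depth?
-- (A1) S, α  becomes  S, α_P, α1, α2
-- (A2) S, α  becomes  S_P, α1, α2                      for the special α
-- (B1) S, β  becomes  S, β_P, β1  and  S, β_P, β2, and both results must close.
refutable :: Int -> [Signed] -> Bool
refutable depth s
  | closed s   = True
  | depth <= 0 = False
  | otherwise  = any step [ (x, c) | x <- s, Just c <- [components x] ]
  where
    keep x rest = if permanent x then x : rest else rest
    step (x, (Alpha, c1, c2))
      | special x = refutable (depth - 1) (nub (c1 : c2 : filter permanent rest))
      | otherwise = refutable (depth - 1) (nub (c1 : c2 : keep x rest))
      where rest = filter (/= x) s
    step (x, (Beta, c1, c2)) =
      refutable (depth - 1) (nub (c1 : keep x rest)) &&
      refutable (depth - 1) (nub (c2 : keep x rest))
      where rest = filter (/= x) s

-- X is a theorem if {F X} is inconsistent, i.e. has a closed tableau.
theorem :: Formula -> Bool
theorem x = refutable 10 [F x]

main :: IO ()
main = do
  let p = Var "p"
  print (theorem (neg (neg (p `Or` neg p))))  -- True:  ⊢ ¬¬(p ∨ ¬p)
  print (theorem (p `Or` neg p))              -- False: excluded middle does not come out provable
```

On the first example the search essentially reproduces the tableau shown below; the depth bound is a blunt instrument, and a real prover would use a fair strategy with loop checking instead.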

Let's look at a proof of $\qneg{\qneg{(\qX \qor \qneg{\qX})}}$ in our $\qfalse$-primitive logic. First a compact version, where $\qF \qfalse$ and the immediately closed right-hand sets produced by the $\beta$ steps are left out: $$ \begin{array}{l} [~\{\qF\, \qneg{\qneg{(\qX \qor \qneg{\qX})}}\}~] \\ [~\{\qT\, \qneg{(\qX \qor \qneg{\qX})}\}~] \\ [~\{\qT\, \qneg{(\qX \qor \qneg{\qX})}, \qF\, \qX \qor \qneg{\qX}\}~] \\ [~\{\qT\, \qneg{(\qX \qor \qneg{\qX})}, \qF \qX, \qF\, \qneg{\qX}\}~] \\ [~\{\qT\, \qneg{(\qX \qor \qneg{\qX})}, \qT \qX\}~] \\ [~\{\qT\, \qneg{(\qX \qor \qneg{\qX})}, \qF\, \qX \qor \qneg{\qX}, \qT \qX\}~] \\ [~\{\qT\, \qneg{(\qX \qor \qneg{\qX})}, \qF \qX, \qF\, \qneg{\qX}, \qT \qX\}~] \\ \end{array} $$ The same proof, a bit more verbose: $$ \begin{array}{l} [~\{\qF\, ((\qX \qor (\qX \qimplies \qfalse)) \qimplies \qfalse) \qimplies \qfalse\}~] \\ \alpha (F\qimplies) \\ [~\{\qT\, ((\qX \qor (\qX \qimplies \qfalse)) \qimplies \qfalse), \qF \qfalse\}~] \\ \beta (T\qimplies) \\ [~\{\qT\, ((\qX \qor (\qX \qimplies \qfalse)) \qimplies \qfalse), \qF\, (\qX \qor (\qX \qimplies \qfalse)), \qF \qfalse\}~, \{\ldots, \qT \qfalse\}] \\ \alpha (F\qor)\\ [~\{\qT\, ((\qX \qor (\qX \qimplies \qfalse)) \qimplies \qfalse), \qF \qX, \qF\, (\qX \qimplies \qfalse), \qF \qfalse\}~, \{\ldots, \qT \qfalse\}] \\ \alpha (F\qimplies)\\ [~\{\qT\, ((\qX \qor (\qX \qimplies \qfalse)) \qimplies \qfalse), \qT \qX, \qF \qfalse\}, \{\ldots, \qT \qfalse\}] \\ \beta (T\qimplies)\\ [~\{\qT\, ((\qX \qor (\qX \qimplies \qfalse)) \qimplies \qfalse), \qF\, (\qX \qor (\qX \qimplies \qfalse)), \qT \qX, \qF \qfalse \}, \\ ~ \{\ldots, \qT \qfalse\}, \{\ldots, \qT \qfalse\}] \\ \alpha (F\qor)\\ [~\{\qT\, ((\qX \qor (\qX \qimplies \qfalse)) \qimplies \qfalse), \qF \qX, \qF\, (\qX \qimplies \qfalse), \qT \qX, \qF \qfalse \}, \\ ~ \{\ldots, \qT \qfalse\}, \{\ldots, \qT \qfalse\}] \\ \end{array} $$ The last configuration is closed: its first set contains both $\qT \qX$ and $\qF \qX$, and the other sets contain $\qT \qfalse$.

What is convenient about this?

In applying the rules of the tableau calculus, we take apart the formula we want to prove and deal with its subformulae until we reach a closed configuration. However, it is not an entirely deterministic procedure, in the sense that we still need to pick which of the formulae we want to take apart, and if we pick the wrong one, we may have to backtrack. Automated reasoning! And it should be easy to convert the result into a natural deduction proof (similar to Prawitz's view that a sequent calculus proof can be seen as a description of how to construct a natural deduction proof).

Another thing that is great about the tableaux method is that if, after taking everything apart, you end up with open branches (that cannot be closed), you can read them as models that provide counter-examples. The application of a "special" rule indicates that we transitioned to another state.

What is special about intuitionistic tableaux?

Let's consider that signed formulae $\qF \qX$ correspond to the right-hand side of a sequent (the succedent). So, unlike in Gentzen's LJ, it is quite fine to have multiple formulae mingle on the right-hand side and stay intuitionistic, as long as the right-$\rightarrow$ rule (i.e. the rule for $\qF \qX \qimplies \qY$) keeps only one of them. This has also been explained here:

[3] Valeria de Paiva, Luiz Carlos Pereira. A short note on intuitionistic propositional logic with multiple conclusions [researchgate pdf]

The assertion $\qF \qX \qimplies \qY$ says: "at present, we think we can prove $\qX$ without at the same time proving $\qY$". In considering such a possible proof $\qT \qX$, we need to do some bookkeeping to ensure that anything we prove along the way is kept separate from other formulae $\qF \qZ$ that we labeled as not-yet-known. This is why the special $\alpha$ rule uses $\qS_P$ instead of $\qS$. This looks like the essence of intuitionistic reasoning as far as proof rules are concerned. It also makes intuitionistic tableaux a bit more complicated than classical ones.

Coda

I hope the above sheds some light on the connection between tableau calculi, Kripke semantics and intuitionistic logic. Soundness and completeness proofs can be found in Fitting's book, as well as generalizations to modal and predicate logic. Of course, for stronger logics, carrying over soundness and completeness does not mean we retain decidability; stronger logics require not only taking formulae apart, but also instantiating them, which increases non-determinism and the search space. (Slightly updated 2022-01-22, for clarity.)