What I call type logic is the kind of logic that is directly available out of the box, in particular in Haskell. There are two further kinds of logic: propositional logic, where the types are assumed to have “at most one element”, and classical logic, where one additionally assumes the law of excluded middle.

As we will see, one can go pretty far with type logic! What was really exciting for me is that, for the first time, a computer verified my proofs. What is even more exciting is that everything is written in simple, straightforward Haskell.

If you wonder how a computer might assist mathematicians in verifying proofs, read on!


Note: The code corresponding to this post is on GitHub.

Type Logic in Haskell

Using Haskell, we define three fundamental type constructions which we can interpret as logical connectives.

  • The implication $A \implies B$ is the type a -> b
  • The conjunction $A \land B$ is represented by the product type (a,b)
  • The disjunction $A \lor B$ is represented by the sum type Either a b

We also use the following type aliases, to make the logical statements easier to read:

-- These operator aliases require the TypeOperators language extension.
type a :&: b = (a,b)
type a :+: b = Either a b

Finally, I define for simplicity the logical equivalence as usual: \[ A \iff B := (A \implies B) \land (B\implies A) \]
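
In Haskell, this can be written as one more type alias (a sketch; the operator name is my own choice, not necessarily the one used in the linked code):

type a :<->: b = (a -> b) :&: (b -> a)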

In what follows, the type signature is a proposition, and the function implementation is the proof. For instance, if we want to show that $A \implies A$, we state the proposition as

reflexivity :: a -> a

This is just a type signature for now. The proof that the type is inhabited is an actual implementation. In that case, it is straightforward:

reflexivity x = x

The following logical statements are also easy to show in a similar way:

\begin{align} (A \implies B \implies C) &\iff (B \implies A \implies C)\\ B &\implies A \implies B\\ A \land B &\iff B \land A\\ A \lor B &\iff B \lor A \end{align} where the $\implies$ operator associates to the right, like -> in Haskell.
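
As a hint, the first two propositions are essentially Haskell's flip and const (a sketch; the names below are mine):

swapArgs :: (a -> b -> c) -> b -> a -> c
swapArgs = flip

weakening :: b -> a -> b
weakening = const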

For instance, both sides of the third proposition are proved by:

andCommut :: a :&: b -> b :&: a
andCommut (x,y) = (y,x)

Another example is the modus ponens law which states that

\[ (A \implies B) \land A \implies B \]

The corresponding Haskell proof is simply function application:

modusPonens :: (a -> b) :&: a -> b
modusPonens (f, x) = f x

Associativity and Distributivity

Associativity of $\land$ on the one hand, and of $\lor$ on the other hand are both true in type logic.
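
For instance, one direction of each associativity law is proved as follows (a sketch; the names are mine):

andAssoc :: (a :&: b) :&: c -> a :&: (b :&: c)
andAssoc ((x, y), z) = (x, (y, z))

orAssoc :: (a :+: b) :+: c -> a :+: (b :+: c)
orAssoc (Left (Left x))  = Left x
orAssoc (Left (Right y)) = Right (Left y)
orAssoc (Right z)        = Right (Right z)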

Distributivity between $\land$ and $\lor$ is still valid in type logic. It means that we have:

\[ A \land (B \lor C) \iff (A \land B) \lor (A \land C) \]


\[ A \lor (B \land C) \iff (A \lor B) \land (A \lor C) \]
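
For instance, one direction of the first law is a simple case analysis (a sketch; the name is mine):

distAndOr :: a :&: (b :+: c) -> (a :&: b) :+: (a :&: c)
distAndOr (x, Left y)  = Left (x, y)
distAndOr (x, Right z) = Right (x, z)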



In type logic, there are no truth tables. A proposition is not simply true or false. Since a proposition is just a type, it may have many inhabitants, each of which “proves” that the proposition is true.

There is no way to say that a type is empty. However, one can imagine an empty type $\mathbf{0}$ and define a contradiction as an element of the following type:

\[ \lnot_{\mathbf{0}} A := A \to \mathbf{0} \] where $\mathbf{0}$ is the empty type. As there is no empty type in Haskell, in what follows the type playing the role of $\mathbf{0}$ is arbitrary. I will also simply write $\lnot := \lnot_{\mathbf{0}}$.

The meaning of a contradiction is thus extremely general, as any function from the type $A$ to any other type represents a contradiction.
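
In Haskell, this corresponds to the following alias, used throughout the rest of the post (a minimal sketch; the parameter c plays the role of $\mathbf{0}$):

type Not c a = a -> c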

Note that $\lnot$ is a contravariant functor. In particular, one has the very important “pull back”, or “modus tollens” law:

\[ (A \implies B) \implies \lnot B \implies \lnot A \]

The Haskell proof is just function composition:

contramap :: (a -> b) -> Not c b -> Not c a
contramap f nb = nb . f


Sometimes, we will need the type $\mathbf{0}$ to actually be empty. As this is not possible in Haskell, I use the following instead:

-- A multi-parameter type class; this requires the MultiParamTypeClasses extension.
class Absurd c a where
    absurd :: c -> a

Assuming an Absurd c a constraint simply means that there is a function from c to a. An empty type c is precisely one for which Absurd c a can be instantiated for any type a.

As it turns out, absurdity is only needed for disjunctive inference laws.

Potentially true proposition

In constructive logic, the double negation plays a prominent role. I will denote it as

\[ \sim A := \lnot \lnot A \]

(Yes, I know, it's not a standard notation, as $\sim$ is sometimes used instead of $\lnot$, but please bear with me.)

When $\sim A$ holds, one says that “$A$ is potentially true” (terminology from an expository paper by Andrej Bauer).

I define it as follows in Haskell:

type Pot c a = Not c (Not c a)

An astute reader will have recognized the continuation monad. As a result, we immediately have the following:

\begin{align} A &\implies \sim A\\ (A \implies \sim B) &\implies \sim A \implies \sim B \end{align}

The first proposition is proved as follows, by the implementation of return in the continuation monad:

dn :: a -> Pot c a
dn = flip ($)

In particular, $\sim$ is a functor (in Haskell's sense).
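
The second proposition is essentially bind in the continuation monad; a possible proof (a sketch, with a name of my own):

potBind :: (a -> Pot c b) -> Pot c a -> Pot c b
potBind f pa = \nb -> pa (\x -> f x nb)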

The reductio ad absurdum law states that $\sim A \implies A$. This is not provable in type logic. However, one can easily prove that

\[ \sim \lnot A \implies \lnot A \]

This is proved as follows:

potNot :: Pot c (Not c a) -> Not c a
potNot = contramap dn

As a consequence, we also obtain that $\sim$ is idempotent:

\[ \sim\sim A \implies \sim A \]

It is a direct consequence of the last proposition, and is proved as:

potPot :: Pot c (Pot c a) -> Pot c a
potPot = potNot

Anti-distributivity of Not

These laws are also called de Morgan's laws. In type logic, it is straightforward to prove that:

\[ \lnot A \land \lnot B \iff \lnot (A \lor B) \]

For example, the $\impliedby$ side of the last proposition is stated and proved as

notOr :: Not c (a :+: b) -> Not c a :&: Not c b
notOr n = (n . Left, n . Right)
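
The $\implies$ side is just as short (a sketch; the name is mine):

orNot :: Not c a :&: Not c b -> Not c (a :+: b)
orNot (na, nb) = either na nb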

We can also prove that:

\begin{align} A \land B &\implies \lnot (\lnot A \lor \lnot B) \\ A \lor B &\implies \lnot (\lnot A \land \lnot B) \end{align}
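
For instance, the first of these can be proved as follows (a sketch; the name is mine):

andNotNot :: a :&: b -> Not c (Not c a :+: Not c b)
andNotNot (x, y) = either ($ x) ($ y)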

Finally, we can also prove

\[ \lnot A \lor \lnot B \implies \lnot (A \land B) \] but not the other way around.
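
This direction is again a simple case analysis (a sketch; the name is mine):

notAnd :: Not c a :+: Not c b -> Not c (a :&: b)
notAnd (Left na)  (x, _) = na x
notAnd (Right nb) (_, y) = nb y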

The best alternative I found is this one: \[ \lnot (\sim A \land \sim B) \implies \sim (\lnot A \lor \lnot B) \]

The proof is straightforward, by pullback of the notOr proof above.

notAnd'' :: Not c (Pot c a :&: Pot c b) -> Pot c (Not c a :+: Not c b)
notAnd'' = contramap notOr

Conjunctive Inference

The inference laws are the relations between implications and disjunctions or conjunctions.

For instance, we can prove the following:

\[ A \land B \implies \lnot (A \implies \lnot B) \]

or \[ (A \implies \lnot B) \iff \lnot (A \land B) \]
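
The $\implies$ direction of this last equivalence is essentially uncurrying; I write it as inf1_ since that name is used again at the end of the post (a sketch, assuming this matches the linked code):

inf1_ :: (a -> Not c b) -> Not c (a :&: b)
inf1_ = uncurry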

Disjunctive Inference

Disjunctive inference relates implications and disjunctions.

For instance, consider the following law:

\[ A \lor B \implies (\lnot A \implies B) \]

In order to prove this, we need the absurdity assumption:

disjunctive :: Absurd c b => (a :+: b) -> Not c a -> b
disjunctive (Left x) nx = (absurd . nx) x
disjunctive (Right y) _ = y

Some proofs


Peirce's Law

Peirce's law (pronounced “purse's law”) is the following \[ ((A \implies B) \implies A) \implies A \]

The law is not valid in constructive logic. However, we have the following corresponding law, with the reservation that we need a function from the empty type $\mathbf{0}$ to the type $B$: \[ (\sim(A \implies B) \implies A) \implies \sim A \]

The corresponding Haskell type signature is

peirce :: Absurd c b => (Pot c (a -> b) -> a) -> Pot c a

The proof is in the Haskell code.
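
A possible implementation (my own sketch, which may differ from the one in the linked code):

peirce :: Absurd c b => (Pot c (a -> b) -> a) -> Pot c a
-- Given a refutation na of a, build a "potential" proof of a -> b from it
-- (absurd turns the contradiction na x into a b), feed it to f to obtain an a,
-- and refute that a with na.
peirce f na = na (f (\k -> k (\x -> absurd (na x))))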

Classical logic?

What about classical logic? It is easy to recover it by making the reductio ad absurdum assumption, or double negation assumption, which in mathematical notation is: \[ \sim A \implies A \]

In Haskell this takes the form:

type ReductioT c a = Pot c a -> a

class Reductio c a where
    lem :: ReductioT c a

Let us show for instance the famous contraposition law in classical logic, namely:

\[ (\lnot B \implies \lnot A) \implies (A \implies B) \]

To do this in Haskell we need the assumption that the double negation holds for the type $B$:

contraposition :: (Reductio c b) => (Not c b -> Not c a) -> a -> b
contraposition g a = (lem . contramap g . dn) a

It is also possible to prove that the reductio ad absurdum law is equivalent to the law of excluded middle, which takes the form: \[ A \lor \lnot A \]

Let us show that the law of excluded middle implies the double negation law in Haskell:

fromLem :: (Absurd c a) => a :+: Not c a -> Pot c a -> a
fromLem (Left x) _ = x
fromLem (Right nx) px = absurd (px nx)

Now let us prove the other direction:

toLem' :: (Pot c a -> a) 
    -> (Not c a -> Not c a)
toLem' lm = potNot . contramap lm

toLem :: (Pot c a -> a)
    -> Not c (Not c a :&: a)
toLem = inf1_ . toLem'

To summarise, any statement of classical logic can be proved in Haskell by assuming the double negation assumption, as well as the absurdity assumption for the type c, for all the types involved in the statement.


In conclusion, one can do quite a lot in Haskell! However, we cannot directly use quantifiers (the ubiquitous $\forall$ and $\exists$), because Haskell does not implement dependent types.

Another problem that one quickly stumbles upon is that the proofs are quite hard to read! They are also quite hard to write. The usual solution, for instance in Coq, is to use tactics instead, which makes it a little bit easier.

Whatever the difficulties, I think that the fact that a computer can prove mathematical statements is quite exhilarating!
