# Curry-Howard

The Curry-Howard correspondence is the close relationship between computer programs and mathematical proofs; the correspondence is also known as the Curry-Howard isomorphism, or the formulae-as-types correspondence. The principle has been formulated in a number of different ways, and is now recognised as the independent discovery of the American mathematician and logician Haskell Curry and the logician William Alvin Howard.

The correspondence can be seen at two levels. First, at the level of an analogy, it states that the assertion of the type of value computed by a function is analogous to a logical theorem, and that the program which computes that value is analogous to a proof of that theorem. In theoretical computer science, this is an important underlying principle connecting the adjacent areas of lambda calculus and type theory. It is often stated in the form proofs are programs; an alternate formulation is propositions as types. Second, and more formally, it specifies a formal isomorphism between two mathematical domains: natural deduction, formalised in a particular manner, and the simply-typed lambda calculus, where there is a bijection between, in the first place, proofs and terms, and in the second, proof reduction steps and beta reductions.

There are a number of ways to think about the Curry-Howard correspondence. In one direction, it says that proofs can be compiled into programs. Here proof is limited, certainly, to proofs in constructive logic, typically in a system of intuitionistic logic; and program is meant in the sense of functional programming, so that from the point of view of syntax such programs are expressed in some kind of lambda calculus. Therefore one concrete realisation of the Curry-Howard isomorphism is to study in detail how proofs from intuitionistic logic can be written as lambda terms. (This was Howard's contribution; Curry worked with combinators, which is more like writing everything in a machine language for which the lambda calculus is a high-level language.)

There is a converse direction, relating to the possibility of proof extraction, given the correctness of a program. This is only feasible in a quite rich type theory environment. In fact the development of very rich typed functional programming languages, by theoreticians, has been partly motivated by the wish to bring the Curry-Howard correspondence to a more 'manifest' status.


## Types

Following the lambda calculus, we will use λx.E to denote the function with formal parameter x and body E. When applied to an argument, say a, this function yields E, with every free appearance of x replaced with a. Valid λ-calculus expressions have one of these forms:

1. v (where v is a variable)
2. λv.E (where v is a variable and E is a λ-calculus expression)
3. (E F) (where E and F are λ-calculus expressions)

As is usual, we will abbreviate ((A B) C) as (A B C) and λa.λb.E as λab.E. An expression is said to be closed if it contains no free variables.

Types can depend on type variables, which will be denoted by lowercase Greek letters α, β, and so on. In our notation type variables are implicitly universally quantified, i.e. if a value is of a type that includes type variables, it must be consistent with all possible instantiations of the type variables. We can define the type of an arbitrary expression as follows: a variable, say x, can have any type. If variable x has type α, and expression E has type β, then λx.E has a type denoted as α → β; that is, it is a function which takes values of type α to values of type β. In the expression (E F), if F has type α, then E must have type α → β (that is, it must be a function that expects an argument of type α) and the entire expression has type β.

For example, consider the function K = λab.a. Suppose that a has type α and b has type β. Then λb.a has type β → α, and λab.a has type α → (β → α). We adopt the convention that → associates to the right, so that the type can also be written as α → β → α. This means that the K function takes an argument of type α and returns a function, which, given an argument of type β, returns a value of type α.

Similarly, consider the function B = λabc.(a (b c)). Suppose that c has type γ. Then b must have some type of the form γ → β, and the expression (b c) has the type β. a must have some type of the form β → α, and the expression (a (b c)) has the type α. Then λc.(a (b c)) has the type γ → α; λbc.(a (b c)) has the type (γ → β) → γ → α, and λabc.(a (b c)) has the type (β → α) → (γ → β) → γ → α. One can interpret this as meaning that the B function takes an argument of type (β → α) and an argument of type (γ → β) and returns a function, which, given an argument of type γ, returns a result of type α.
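These two terms can be sketched directly in Python, using lambdas as hypothetical stand-ins for the λ-terms above; Python does not check the types, so the type of each term is recorded only in a comment.

```python
# K = λab.a, of type α → β → α: return the first argument, ignore the second.
K = lambda a: lambda b: a

# B = λabc.(a (b c)), of type (β → α) → (γ → β) → γ → α: composition.
B = lambda a: lambda b: lambda c: a(b(c))

double = lambda n: 2 * n   # an example (β → α), with β = α = int
succ = lambda n: n + 1     # an example (γ → β), with γ = β = int

print(K(3)("ignored"))      # 3
print(B(double)(succ)(5))   # double(succ(5)) = 12
```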

## The type inhabitation problem

It's clear that λ-expressions can have quite complicated types. One might ask whether there is a λ-expression with any given type. The problem of finding a λ-expression with a particular type is called the type inhabitation problem.

The answer turns out to be remarkable: There is a closed λ-expression with a particular type only if the type corresponds to a theorem of logic, when the → symbols are re-interpreted as meaning logical implication.

For example, consider the identity function, λx.x, which takes an argument of type ξ and returns a result of type ξ, and so has type ξ → ξ. ξ → ξ is certainly a theorem of logic: given a formula ξ, one may conclude ξ.

The type of the K function, α → β → α, is also a theorem. If α is known to be true, then one can conclude that if β were true, one would be able to deduce α. This is sometimes known as the reiteration rule. The type of B is (β → α) → (γ → β) → γ → α. One can interpret this similarly as a logical tautology; if (β → α) were known to be true, then if (γ → β) were known to be true, then one could deduce that γ would imply α.

On the other hand, consider α → β, which is not a theorem. Clearly there is no λ-term with this type: it would have to be a function taking an argument of type α and returning a result of some completely unrelated and unconstrained type β. This is impossible, because there is no way to manufacture a value of an arbitrary type out of nothing.

Although there is a function of type β → (α → α) (for example, λba.a, which takes an argument b, ignores the argument, and returns the identity function), there is no function of type (α → α) → β, which, if it existed, would be a function for transforming the identity function into an arbitrary value.
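The inhabited half of this contrast can be run as a hypothetical Python sketch (types noted only in comments, since Python does not check them):

```python
# λba.a, of type β → (α → α): ignore b and return the identity function.
const_id = lambda b: (lambda a: a)

identity = const_id("anything")
print(identity(42))      # 42
print(identity("text"))  # text

# (α → α) → β, by contrast, is uninhabited: given only the identity
# function, there is no way to produce a value of an arbitrary type β.
```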

## Intuitionistic logic

Although it is true that all inhabited types correspond to theorems of logic, the converse is not true. Even if we restrict our attention to logical formulas that contain only the → operator, the so-called implicational fragment of logic, not every classically valid logical formula is an inhabited type. For example, the type ((α → β) → α) → α is uninhabited, even though the corresponding logical formula, known as Peirce's law, is a tautology.

Intuitionistic logic is a weaker system than classical logic. Whereas classical logic concerns itself with formulas that are 'true' in an abstract, Platonic sense, a formula of intuitionistic logic is considered true only if a proof can be constructed for it. The formula α ∨ ¬α exemplifies the difference; it is a theorem of classical logic but not of intuitionistic logic. Although it is 'true' classically, in intuitionistic logic this formula asserts that we can either prove α or we can prove ¬α. Since it isn't always the case that we can do one of these things, this isn't a theorem of intuitionistic logic.

The correspondence between inhabited types and valid logical formulas is exact in both directions if we restrict our attention to theorems of the implicational fragment of intuitionistic logic (this is called a Hilbert algebra).

## Hilbert-style proofs

One simple way to formally characterize intuitionistic logic is as follows. It has two axiom schemas. All formulas of the form

α → β → α

are axioms, as are all formulas of the form

(α → β → γ) → (α → β) → α → γ

The only deduction rule is modus ponens, which states that if we have proved two theorems, one of the form α → β and one of the form α, we may conclude that β is also a theorem. The set of formulas that can be deduced in this fashion is precisely the set of formulas of intuitionistic logic. In particular, Peirce's law is not deducible in this way. (For a proof that Peirce's law is not deducible in this way, see the article on Heyting algebras.)

If we add Peirce's law as a third axiom schema, the system above becomes capable of proving all the theorems in the implicational fragment of classical logic.

## Programs are proofs

A second aspect of the Curry-Howard isomorphism is that a program whose type corresponds to a logical formula is itself analogous to a proof of that formula.

Consider the two following functions of λ-calculus:

K: λxy.x
S: λxyz.(x z (y z))

It turns out that any function can be created by suitable applications of K and S to each other. (See the combinatory logic article for a proof.) For example, the function B defined above is equivalent to (S (K S) K). If a function, such as B, can be expressed as a composition of S and K, the expression can be converted directly into a proof that the type of B, interpreted as a logical formula, is a theorem of intuitionistic logic. Essentially, appearances of K correspond to uses of the first axiom schema, appearances of S correspond to uses of the second axiom schema, and function applications correspond to uses of the modus ponens deduction rule.

The first axiom schema is α → β → α, and this is precisely the type of the K function; similarly the second axiom schema, (α → β → γ) → (α → β) → α → γ, is the type of the S function. Modus ponens says that if we have two expressions, one of type α → β (that is, a function that takes a value of type α and returns a value of type β) and the other of type α (that is, a value of type α), then we should be able to find a value of type β; clearly, we can do this by applying the function to the value, and the result will have type β.
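This triple correspondence can be sketched in runnable form (hypothetical Python stand-ins; the logical readings of the types appear only as comments): K and S embody the two axiom schemas, and plain function application plays the role of modus ponens.

```python
K = lambda x: lambda y: x                     # axiom 1: α → β → α
S = lambda x: lambda y: lambda z: x(z)(y(z))  # axiom 2: (α → β → γ) → (α → β) → α → γ

# Each application below is one use of modus ponens; the result is B,
# of type (β → α) → (γ → β) → γ → α.
B = S(K(S))(K)

double = lambda n: 2 * n
succ = lambda n: n + 1
print(B(double)(succ)(5))   # double(succ(5)) = 12
```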

### Proof of α → α

As a simple example, we will construct a proof of the theorem α → α. This is the type of the identity function I = λx.x. The function ((S K) K) is also equivalent to the identity function. As a description of a proof, this says that we need to first apply S to K. We do this as follows:

α → β → α
(α → β → γ) → (α → β) → α → γ

If we let γ = α in S, we get:

(α → β → α) → (α → β) → α → α

Since the antecedent of this formula (α → β → α) is a known theorem, and is in fact precisely K, we can apply modus ponens and deduce the consequent:

(α → β) → α → α

This is the type of the function (S K). Now we need to apply this function to K. Again, we manipulate the formula so that the antecedent looks like K, this time by substituting (β → α) for β:

(α → β → α) → α → α

Once again we can use modus ponens to detach the consequent:

α → α

and we are done.
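The proof just constructed can also be checked operationally (a hypothetical Python sketch, with types recorded only in comments): ((S K) K) really does behave as the identity function.

```python
K = lambda x: lambda y: x                     # α → β → α
S = lambda x: lambda y: lambda z: x(z)(y(z))  # (α → β → γ) → (α → β) → α → γ

I = S(K)(K)   # of type α → α, by the two modus ponens steps above

print(I(42))       # 42
print(I("hello"))  # hello
```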

In general, the procedure is that whenever the program contains an application of the form (P Q), we should first prove theorems corresponding to the types of P and Q. Since P is being applied to Q, the type of P must have the form α → β and the type of Q must have the form α for some α and β. We can then detach the conclusion, β, via the modus ponens rule.

### Proof of (β → α) → (γ → β) → γ → α

As a more complicated example, let's look at the theorem that corresponds to the B function. The type of B is (β → α) → (γ → β) → γ → α. B is equivalent to (S (K S) K). This is our roadmap for the proof of the theorem (β → α) → (γ → β) → γ → α.

First we need to construct (K S). We make the antecedent of the K axiom look like the S axiom by setting α equal to (α → β → γ) → (α → β) → α → γ, and β equal to δ:

α → β → α
((α → β → γ) → (α → β) → α → γ) → δ → (α → β → γ) → (α → β) → α → γ

Since the antecedent here is just S, we can detach the consequent:

δ → (α → β → γ) → (α → β) → α → γ

This is the theorem that corresponds to the type of (K S). We now apply S to this expression. Taking the type of S,

(α → β → γ) → (α → β) → α → γ

we put α = δ, β = (α → β → γ), and γ = ((α → β) → α → γ), yielding

(δ → (α → β → γ) → (α → β) → α → γ) → (δ → (α → β → γ)) → δ → (α → β) → α → γ

and we then detach the consequent:

(δ → α → β → γ) → δ → (α → β) → α → γ

This is the formula for the type of (S (K S)). A special case of this theorem has δ = (β → γ):

((β → γ) → α → β → γ) → (β → γ) → (α → β) → α → γ

We need to apply this last formula to K. Again, we specialize K, this time by replacing α with (β → γ) and β with α:

α → β → α
(β → γ) → α → (β → γ)

This is the same as the antecedent of the prior formula, so we detach the consequent:

(β → γ) → (α → β) → α → γ

Switching the names of the variables α and γ gives us

(β → α) → (γ → β) → γ → α

which was what we had to prove.
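As with α → α, the roadmap can be checked by running it (a hypothetical Python sketch, types noted only in comments): (S (K S) K) behaves exactly like B = λabc.(a (b c)).

```python
K = lambda x: lambda y: x                     # α → β → α
S = lambda x: lambda y: lambda z: x(z)(y(z))  # (α → β → γ) → (α → β) → α → γ

B = S(K(S))(K)   # of type (β → α) → (γ → β) → γ → α

double = lambda n: 2 * n
succ = lambda n: n + 1
# Both lines compute double(succ(10)) = 22:
print(B(double)(succ)(10))
print(double(succ(10)))
```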

## Sequent calculus

Hilbert-style proofs can be difficult to construct. A more intuitive way to prove theorems of logic is Gentzen's 'sequent calculus'. Sequent calculus proofs correspond to λ-calculus programs in the same way that Hilbert-style proofs correspond to combinator expressions.

The sequent calculus rules for the implicational fragment of intuitionistic logic are:

${{} \over \Gamma, \alpha \vdash\alpha } \qquad {\rm (Axiom)}$
${ \Gamma, \alpha\vdash\beta \over \Gamma\vdash\alpha\rightarrow\beta}\qquad ({\rm Right}\rightarrow)$
${\Gamma\vdash\alpha\qquad\qquad\Sigma, \beta\vdash\gamma \over \Gamma, \Sigma, \alpha\rightarrow\beta \vdash \gamma}\qquad({\rm Left}\rightarrow)$

Γ represents a context, which is a set of hypotheses. $\Gamma\vdash a$ indicates that we can prove a assuming the context Γ. Theorems of logic consist of precisely those formulas t for whose proofs no hypotheses are required; that is, t is a theorem if and only if we can show $\vdash t$. For complete details, see the sequent calculus article.

A proof in sequent calculus is a tree whose root is the theorem we wanted to prove, and whose leaves are instances of the axiom schema; each interior node must be labeled with either 'Right →' or 'Left →', and must contain a formula derived from the child nodes according to the specified rule. For example, a sequent calculus proof of (β → α) → (γ → β) → γ → α goes as follows:

                   b |- b        a |- a
                   -------------------- Left → (Γ = {b}, Σ = {})
g |- g             b, b→a |- a
----------------------------------- Left → (Γ = {g}, Σ = {b→a})
g, b→a, g→b |- a
----------------- Right →
b→a, g→b |- g→a
---------------------- Right →
b→a |- (g→b) → g→a
--------------------------- Right →
|- (b→a) → (g→b) → g→a


Sequent calculus proofs correspond closely to λ-calculus expressions. The assertion $\Gamma\vdash a$ can be interpreted to mean that, given values with the types listed in Γ, we can manufacture a value with type a. An axiom corresponds to the introduction of a new variable with a new, unconstrained type. The 'Right →' rule corresponds to function abstraction:

${ \Gamma, \alpha\vdash\beta \over \Gamma\vdash\alpha\rightarrow\beta}\qquad ({\rm Right}\rightarrow)$

When can we conclude that a context Γ yields a function of type α→β? When Γ, plus a value of type α, contains enough machinery to allow us to manufacture a value of type β.

The 'Left →' rule corresponds to function application:

${ \Gamma \vdash \alpha \qquad \Sigma, \beta \vdash \gamma \over \Gamma, \Sigma, \alpha\rightarrow\beta \vdash\gamma} \qquad ({\rm Left} \rightarrow)$

If we could manufacture a value of type α, and if, given a value of type β we could also manufacture a value of type γ, then a function of type α→β would also allow us to manufacture a value of type γ: First we would manufacture α, then apply the α→β function to it, and then use the resulting β value to manufacture a value of type γ.

The proof of (β → α) → (γ → β) → γ → α above tells us how to manufacture a λ-expression with that type. Reading the tree from the leaves down, the axioms introduce variables: a variable g of type γ, and variables a and b for the hypotheses of types β → α and γ → β. The upper 'Left →' step applies the hypothesis a to a value of type β; the lower 'Left →' step supplies that value as (b g), the hypothesis b applied to g. This gives the expression (a (b g)), of type α. The three 'Right →' steps then abstract the three hypotheses in turn, yielding λa.λb.λg.(a (b g)), which is precisely the B function.
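The λ-expression manufactured by this proof can be run directly (a hypothetical Python sketch; the example instantiation of the type variables is noted in comments):

```python
# λa.λb.λg.(a (b g)), of type (β → α) → (γ → β) → γ → α.
extracted = lambda a: lambda b: lambda g: a(b(g))

to_str = lambda n: str(n)   # γ → β, with γ = int and β = str
length = lambda s: len(s)   # β → α, with β = str and α = int

print(extracted(length)(to_str)(12345))   # len("12345") = 5
```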

## Point of view of category theory

Starting from the point of view that functional programming supports programming languages that are typed and have higher-order functions, the primary content of the "isomorphism" is to identify that amount of structure with that occurring in type theories of intuitionistic logic. Under the influence of category theory there are a number of further heuristic ways of looking at the overall position. These are syntax-lite, one could say; if Curry-Howard were some sort of compiler, they would try to explain what structure is preserved in translation, rather than tell one how to write the program. This therefore is an essentially mathematical, rather than computational approach: if Curry-Howard is to be treated as an isomorphism, what is the preserved structure?

One favourite formulation is propositions as types. This can be rather more confusing than the matching statement proofs as programs: using the idea of category in its very simple form, we can say that if proofs and programs are two kinds of morphisms, then propositions and types are going to be two corresponding kinds of objects. The suggestion is that we have two distinct categories, with some sort of functor (translation) from one to the other, respecting structure. This schematic picture at least allows us to orient ourselves. We suppose that proofs are here being treated as some sort of high-level programming language, and that our functor is acting as a compiler. If we ask "proofs of what?", the answer is "of some propositions"; as proofs carry over to programs, the labelling of propositions becomes that of abstract types. To say we have a functor is to say that modus ponens in some form becomes composition of functional programs, i.e., explicit substitution, operationally speaking.

This is a helpful if schematic guide that applies over a range of situations; that is, we should not expect there to be a single, canonical formulation. A traditional way of looking at it is to expect existentially quantified propositions to lead (in intuitionistic logic) to constructive solutions of the problems posed. This goes back to Kolmogorov, and fits perfectly with the category theory view. With emphasis at the morphism level, we interpret A implies B as a function space type: if A says something exists satisfying P, and B that something exists satisfying Q, then our interest is in methods that construct something of type Q from something of type P (Kolmogorov's interpretation). That this matches up on the functional-program side is the basic identification of Cartesian-closed categories as the models for typed lambda calculus.

In this way the content of the Curry-Howard correspondence can practically be integrated into categorical logic, as foundational explanation of the modelling. Type theories become categories, categories support lambda calculus interpretations; what results is sometimes called the Curry-Howard-Lambek correspondence, in tribute to Joachim Lambek, who pioneered this way of looking at categorical logic. The programming content is perhaps concealed, in the next step of compiling lambda calculus into something lower-level (provided by Curry's combinatory logic). Whether it should so be read is of course another matter. There are other heuristics, such as the universal quantifier interpretation of the function space (as converting one proposition with an existential quantifier to another).

Last updated: 10-12-2005 08:06:09