The formalism of dependent pairs can be used to define subtypes: we just replace the type family F : X → Type by a predicate P : X → Prop.
In Lean, the subtype associated to a predicate can be denoted by (x : X) ×' P x or by {x : X // P x}. Its terms are dependent pairs ⟨x, px⟩ where x is an element of X and px is a proof of the proposition P x.
For instance, we could also define Vec X n as {L : List X // L.length = n}, using the following predicate:
def List.has_length {X : Type} (n : Nat) : List X → Prop :=
fun L ↦ L.length = n
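For instance, here is a term of that subtype for lists of length 2 (the name myPair is illustrative). The proof component is just rfl, since the length of a concrete list computes:
def myPair : {L : List Nat // L.has_length 2} :=
  ⟨[3, 7], rfl⟩  -- the list, together with a proof that its length is 2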
Given a type family F : X → Type, the associated type of dependent functions (also called a Π-type) is denoted by (x : X) → F x. If f : (x : X) → F x, then, given x : X, we get f x : F x (the return type depends on the input parameter x).
def zero_vector : (n : Nat) → Vec Int n
| 0 => Vec.null
| n + 1 => Vec.cons 0 (zero_vector n)
#check zero_vector 42 -- zero_vector 42 : Vec Int 42
Dependent functions generalize functions: if the family F is constant, say F x = Y for all x : X, then the Π-type (x : X) → F x is just the ordinary function type X → Y.
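For a constant family, Lean even regards the Π-type and the ordinary function type as definitionally equal, so the following sanity check is provable by rfl:
example : ((n : Nat) → (fun _ => Int) n) = (Nat → Int) := rfl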
Given a predicate P : X → Prop on a type X, the proposition ∃ x : X, P x is defined inductively as follows.
inductive Exists {X : Type} (P : X → Prop) : Prop
| intro (x : X) (p : P x) : Exists P
This means that, in order to prove that ∃ x : X, P x, you need to construct a term x : X (a witness) and a proof of the proposition P x (the evidence). Note the analogy with subtypes:
inductive Subtype {X : Type} (P : X → Prop) : Type
| intro (x : X) (p : P x) : Subtype P
Also note that ∃ x : X, P x is stronger than saying ¬(∀ x : X, ¬(P x)).
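The implication in the stronger-to-weaker direction is a good first exercise on both quantifiers; here is one possible term-mode proof, using the eliminator Exists.elim:
example {X : Type} {P : X → Prop} (h : ∃ x : X, P x) : ¬(∀ x : X, ¬(P x)) :=
  fun hall => h.elim (fun x px => hall x px)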
Given a predicate P : X → Prop on a type X, the proposition ∀ x : X, P x is the proposition defined by the type of dependent functions (x : X) → P x.
This means that, to prove such a statement, you need to construct a function. So you start your proof with fun x ↦ _ (if you are in term mode) or intro x (if you are in tactic mode).
example : ∀ w : ℂ, ∃ z : ℂ, z ^ 2 = w :=
by { -- ⊢ ∀ w : ℂ, ∃ z : ℂ, z ^ 2 = w
intro w -- w : ℂ ⊢ ∃ z : ℂ, z ^ 2 = w
sorry
}
Note that there might be more than one witness z : ℂ for the property z ^ 2 = w, but that piece of data cannot be recovered from the existential statement itself.
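Here is a complete toy example of this phenomenon: two proofs of the same existential statement, built from different witnesses.
example : ∃ n : Nat, 1 ≤ n := ⟨1, Nat.le_refl 1⟩
example : ∃ n : Nat, 1 ≤ n := ⟨5, by decide⟩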
So far, we have focused on the calculus of predicates. Dependent type theory provides a syntax in which we can express mathematical statements.
theorem FLT {n : Nat} (x y z : Int) :
(n > 2) → x ^ n + y ^ n = z ^ n → x * y * z = 0 :=
sorry
But we can also use it to represent mathematical structures such as groups, rings, or topological spaces. To do this in a programming language such as Lean, it is useful to first have a sense of what a record type is. As a first approximation, you can think of a record type as an inductive type with only one constructor. In Lean, record types are introduced via the keyword structure.
The product of two types, for instance, can be defined as a structure.
structure Prod (X : Type) (Y : Type) : Type where
mk :: (x : X) (y : Y)
The definition as an inductive type uses quite similar syntax:
inductive Prod (X : Type) (Y : Type) : Type where
| mk (x : X) (y : Y) : Prod X Y
While valid, the previous syntax for declaring Prod X Y as a record is not very enlightening.
structure Prod (X : Type) (Y : Type) : Type where
mk :: (x : X) (y : Y)
Try instead:
structure Prod (X : Type) (Y : Type) : Type where
mk :: -- indicating the constructor's name is optional (try it!)
fst : X
snd : Y
Prod is a structure with two fields, named fst and snd.
Record types come equipped with projections to their fields:
#check Prod.fst -- Prod.fst : {X Y : Type} → Prod X Y → X
#check Prod.snd -- Prod.snd : {X Y : Type} → Prod X Y → Y
The name of the field should reflect that: Prod.fst is much more expressive than Prod.x as a name for the first projection.
A convenient feature of these projections is that you can use dot notation.
#check (2, -1) -- (2, -1) : Nat × Int
#check (2, -1).fst -- (2, -1).fst : Nat
#eval (2, -1).fst -- 2
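The projections can also be used to define new functions on pairs; for instance, a swap function for the Prod structure declared above (the name swap' is illustrative):
def Prod.swap' {X Y : Type} (p : Prod X Y) : Prod Y X :=
  ⟨p.snd, p.fst⟩  -- exchange the two components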
In mathematics, a monoid is a triple (M, ⋆, e) consisting of a set M, an associative binary operation ⋆ on M, and an element e ∈ M that is neutral for ⋆ on both sides. Since a monoid is some kind of tuple, it is natural to translate this directly into a record type in Lean. We just have to unpack the information about the carrier set, the operation, the neutral element, and the properties they satisfy into separate fields.
The type of monoids can be introduced as follows in Lean.
structure Monoid : Type 1 where
carrier : Type
op : carrier → carrier → carrier
assoc : ∀ x y z : carrier, op (op x y) z = op x (op y z)
elt : carrier
neutral : ∀ x : carrier, (op elt x = x) ∧ (op x elt = x)
Note how the field op depends on the field carrier, and how the fields assoc and neutral depend on the fields carrier and op (with neutral also depending on elt).
Also, expressions such as ∀ x y z : carrier, op (op x y) z = op x (op y z) (which expresses the associativity property of the operation op) are types.
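In particular, the projection of a proof field really is a proof: given any M : Monoid, the term M.assoc has the associativity statement about M.carrier and M.op as its type.
example (M : Monoid) :
    ∀ x y z : M.carrier, M.op (M.op x y) z = M.op x (M.op y z) :=
  M.assoc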
You can forget about Type 1 😅. If you remove : Type 1, Lean will infer it!
Concretely, how do we construct a monoid? We must supply an element for each field of the Monoid structure.
def NatAddZero : Monoid where
carrier := Nat
op := Nat.add
assoc := Nat.add_assoc
elt := Nat.zero
neutral := fun (n : Nat) ↦ ⟨Nat.zero_add n, Nat.add_zero n⟩
Note the use of the where keyword. This works because Nat.add, Nat.add_assoc, etc. are already contained in Lean's standard library. As shown in the neutral field, the term you need can also be defined directly in place.
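As a sanity check, the fields of NatAddZero reduce definitionally to the values we supplied, so the following proof is just rfl:
example : NatAddZero.elt = Nat.zero := rfl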
You can also use tactic mode to write terms that go into the various fields.
def NatAddZero : Monoid where
carrier := by {exact Nat}
... (omitted)
neutral := by {
intro n
constructor
exact Nat.zero_add n
exact Nat.add_zero n
}
Or, without the where keyword (try it!):
def NatAddZero : Monoid := by { constructor; exact Nat; ... (omitted) }
One could also think of declaring the type of monoids as follows.
structure Monoid : Type 1 where
carrier : Type
op : carrier → carrier → carrier
assoc : ∀ x y z : carrier, op (op x y) z = op x (op y z)
neutral : ∃ elt : carrier, ∀ x : carrier, (op elt x = x) ∧ (op x elt = x)
The issue with this is that it is then unclear how to refer to the neutral element of a monoid, or if it is even possible to do that. In the previous construction, we had for instance NatAddZero.elt = Nat.zero (you can prove that if you want). This is problematic if we want to write the definition of a group: to add something like ∀ x : carrier, ∃ y : carrier, (op y x = elt) ∧ (op x y = elt), we need a term elt to refer to.
There is a theoretical way to get out of the issues above (use a definite description operator). But, for practical purposes, we may as well define the type of monoids as we did, and the type of groups as follows (the Type 1 ascription is again optional here).
structure Group : Type 1 extends Monoid where
inv_map : carrier → carrier
inv_ppty : ∀ x : carrier, (op (inv_map x) x = elt) ∧ (op x (inv_map x) = elt)
Thanks to the extends keyword, there is no need to repeat the fields carrier, op, etc. They are part of the new structure and can be used when adding further fields. This also creates a projection map from Group to Monoid, which "forgets" the new fields.
#check @Group.toMonoid -- Group.toMonoid : Group → Monoid
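To see extends in action, here is a sketch of a group structure on Int, supplied field by field (the inverse property is left as a sorry; the lemma names used come from Lean's core library):
def IntAddGroup : Group where
  carrier := Int
  op := Int.add
  assoc := Int.add_assoc
  elt := 0
  neutral := fun x => ⟨Int.zero_add x, Int.add_zero x⟩
  inv_map := Int.neg
  inv_ppty := sorry  -- exercise: ∀ x, (-x) + x = 0 ∧ x + (-x) = 0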
Exercises:
- Define the projection Group.toMonoid by hand.
- Declare Prod X Y as an inductive type (as opposed to a record type) and define the projection to the first factor by hand.
- Construct a group structure on Int: prove, for instance, the fact that 0 : ℤ is left and right neutral, or the existence of an inverse (the exact? or simp? tactics may be of assistance).

It is possible to use usual mathematical notation in Lean. The quantifier symbols unfold to the definitions that we have given before, and it is a good exercise to convert the types represented below into Σ-types and Π-types (by definition, Sequence ℝ := ℕ → ℝ).
def Sequence.isConvergent (s : Sequence ℝ) : Prop :=
∃ l : ℝ, ∀ ε > 0, ∃ n : ℕ, ∀ m : Nat, m ≥ n → |s m - l| < ε
def Sequence.isStationary (s : Sequence ℝ) : Prop :=
∃ a : ℝ, ∃ n : ℕ, ∀ m : Nat, m ≥ n → s m = a
Another good exercise is to prove the following implication.
theorem stationary_implies_convergent :
∀ (s : ℕ → ℝ), s.isStationary → s.isConvergent := sorry
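One possible opening for that proof (a skeleton only, with the final estimate left as a sorry): the stationary value a serves as the limit, and the same threshold n works for every ε.
example : ∀ (s : ℕ → ℝ), s.isStationary → s.isConvergent := by
  intro s hs
  obtain ⟨a, n, hn⟩ := hs
  refine ⟨a, fun ε hε => ⟨n, fun m hm => ?_⟩⟩
  -- ⊢ |s m - a| < ε  (rewrite with hn m hm, then use hε)
  sorry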
Here are three practice files for you to work on during the rest of the session. I am happy to answer any questions you may have 😊 . Thank you for your attention!