I recently attended the SPLASH-E presentation on Lambdulus: Teaching Lambda Calculus Practically by Jan Sliacky and Petr Maj. It was a very interesting presentation describing the Programming Paradigms (PPA) course at the Czech Technical University. I really think they are onto something!

Much of the presentation focused on the web-based programmer-friendly λ-calculus evaluator, affectionately called Lambdulus. To make the evaluator more programmer-friendly, they extend the untyped λ-calculus with macros and break-points and they use special evaluation rules for reducing macros. The paper was also a very good read and went into a little more detail about the course and their approach to teaching the λ-calculus.

One of the more important features of Lambdulus is that it chooses which type of evaluation is appropriate for the expression at hand. This is particularly important for reducing Church numerals, but it also leads to much cleaner-looking λ-expressions because Lambdulus is careful with how it expands macros. Lambdulus is also careful about how many of the evaluation steps it shows to the programmer.

Macros are named expressions defined using the syntax `NAME := [λ-expression]`. Lambdulus is very careful about when it expands macros. In general, a macro is only expanded if it is applied to some other expression. If a λ-abstraction is applied to a macro, the macro is passed by reference, i.e. the abstracted variable is substituted with the macro's name rather than its definition. This makes the reduced expression look much cleaner.

Lambdulus also supports something they call dynamic macros; currently these are numbers and arithmetic operators. Instead of manually defining infinitely many macros, one for each Church numeral, Lambdulus defines numeric macros dynamically. Reductions in which arithmetic operators are applied to numeric macros are also simplified.
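As a concrete (hypothetical) example in the macro notation above, suppose we define:

```
TRUE := (λ t f . t)
FALSE := (λ t f . f)
NOT := (λ b . b FALSE TRUE)
```

Evaluating `NOT TRUE` expands the macros just enough to reduce to `FALSE`, while passing `TRUE` to a λ-abstraction substitutes the name `TRUE`, not its definition. Similarly, applying `+` to the dynamic numeric macros `2` and `3` reduces directly to the numeric macro `5` rather than to a raw Church numeral.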

Overall I really liked the project and their approach to teaching students how to program in the λ-calculus! They have a clearly defined goal, teaching students how to program in the λ-calculus by treating it as a “real” programming language, and they build everything around that goal. One of the strengths of their approach is realizing that when programming in the λ-calculus, we really want different kinds of reduction for different kinds of λ-expressions. For example, even if we are working in a call-by-value or call-by-name setting, for arithmetic on Church numerals we probably want to be a little more aggressive: do a full normal-order reduction and then contract the result into a numeric macro. This makes the result look a lot cleaner and helps programmers debug their λ-calculus programs, since what is going on during execution is much clearer.

Their evaluator is available at https://lambdulus.github.io/! They paid a lot of attention to making Lambdulus developer-friendly. I didn’t talk about break-points above, but I found those interesting as well! I remember having a lot of trouble with Church encodings when I was learning to program in the λ-calculus, and I really think I could have benefitted from playing around in Lambdulus!

**Background Required:** This post assumes some familiarity with the simply typed $\lambda$-calculus and $\beta$-reduction.

Pure Type Systems (PTS) are a class of explicitly typed $\lambda$-calculi. The most remarkable thing about PTSs is that types, terms, and kinds are all expressed in one syntax, and with only a few simple rules they can express crazy type systems.

The general system is defined as being polymorphic over three sets: a set of sorts $S$, a set of axioms $Ax$, and a set of rules $R$, where $Ax$ contains pairs of sorts and $R$ contains triples of sorts. By selecting various $S, Ax, R$ we can express different $\lambda$-calculi.

The simply typed $\lambda$-calculus (STLC) can be viewed as a Pure Type System, but this system has some interesting (and sometimes annoying) properties. In this post, I will highlight some of these properties.

The typing rules for Pure Type Systems are usually expressed in their most general setting, having rules that can express dependent types and type level computation. These are not necessary to study the STLC, so in what follows we only express the rules necessary to express simple types and expressions.

In the following, the set of sorts is $S = \{\square{},*\}$, $s$ ranges over sorts, $x,y$ and $X,Y$ ranges over variables, and $a,b,c,f$ and $A,B,C,F$ range over terms.

When studying the non-PTS STLC, we usually assume a set of base types. In the PTS version of STLC, we instead assume a base kind $*$ and allow the introduction of type variables as base types of kind $*$.

$\begin{array}{c} \vdash *:\square{} \end{array}\quad(\text{Axiom})$

$\begin{array}{c} \Gamma \vdash A : s \\ \hline \Gamma, x:A \vdash x : A \end{array}\quad(\text{Start})$

$\begin{array}{c} \Gamma \vdash A : B \qquad \Gamma \vdash C : s \\ \hline \Gamma, x:C \vdash A : B \end{array}\quad(\text{Weakening})$

Here the $(\text{Axiom})$ rule can be read as “$*$ is a kind”.

The $(\text{Start})$ rule is used in two ways. Firstly, it allows for typing judgements of the following form, which can be roughly read as “$X$ is a new base type”.

$\Gamma, X:* \vdash X : *$

Secondly, it allows us to introduce variables of base types into the context.

$\Gamma, X:*, x: X \vdash x : X$

The $(\text{Weakening})$ rule allows us to type terms in extended contexts.

The rules so far only allow us to work with types of the form $X:*$. The next rule allows us to work with function types.

$\begin{array}{c} \Gamma \vdash A : * \qquad \Gamma \vdash B : * \\ \hline \Gamma \vdash A \to B : * \end{array}\quad(\text{Product})$

Using the above rule we can get typing judgements such as the following.

$\Gamma, X:*, Y:*, x: X \to Y \vdash x : X \to Y$

The next rules are somewhat standard and allow us to type $\lambda$ abstractions and applications.

$\begin{array}{c} \Gamma,x:A \vdash a : B \qquad \Gamma \vdash A \to B : * \\ \hline \Gamma \vdash \lambda (x:A).a : A \to B \end{array}\quad(\text{Abstraction})$

$\begin{array}{c} \Gamma \vdash f : A \to B \qquad \Gamma \vdash a : A \\ \hline \Gamma \vdash f\,a : B \end{array}\quad(\text{Application})$

The above are all the rules we need to study the PTS version of the simply typed $\lambda$ calculus. The system we presented does not have any type-level abstraction or computation, so some PTS rules were elided.

One of the weirder quirks of the PTS version of STLC is that there are no elimination rules for type variables. This means that the only thing we can type in the empty context is $\vdash * : \square$. Everything else must be typed in a non-empty context.

$\begin{array}{c} X:* \vdash \lambda (x:X).x : X \to X \end{array}$
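To see the rules in action, here is a step-by-step derivation of the judgement above, built from only the rules introduced so far:

1. $\vdash * : \square$, by $(\text{Axiom})$.
2. $X:* \vdash X : *$, by $(\text{Start})$ on 1.
3. $X:* \vdash X \to X : *$, by $(\text{Product})$ on 2 (used twice).
4. $X:*, x:X \vdash x : X$, by $(\text{Start})$ on 2.
5. $X:* \vdash \lambda (x:X).x : X \to X$, by $(\text{Abstraction})$ on 4 and 3.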

Although there are no “closed” terms, we can still define $\beta$-reduction and prove progress and preservation lemmas; they just must be stated in non-empty contexts. This brings us to our second interesting fact.

The progress lemma for the non-PTS STLC is stated as follows.

*Lemma (non-PTS Progress): For any $\Gamma\vdash a : A$, either $a$ is a variable, a $\lambda$-abstraction, or there exists a term $b$ such that $a\to_{\beta}b$.*

The PTS version has an additional case, since sorts are first-class terms and form an additional normal form in a PTS.

*Lemma (Progress): For any $\Gamma\vdash A : B$, either $A$ is a variable, a $\lambda$-abstraction, or a sort, or there exists a term $A'$ such that $A\to_{\beta}A'$.*

This is not specific to STLC, but applies to any PTS: to introduce a binding of the form $x:A$ into the context, we must first show that $A:s$ for some sort $s$.

While the PTS version of STLC has type variables, non-type/non-sort terms are still simply typed. We still do not have any form of polymorphism, type constructors, or dependent types. We also do not have any kind of recursion.

The fact that there are no typable closed terms other than $*$ is somewhat weird, but it stems from the fact that we do not want $\lambda$-abstractions which are polymorphic over types when studying STLC. This makes the PTS version of STLC somewhat uninteresting. However, there are many interesting extensions of it. For example, we can introduce an additional axiom $\vdash Nat:*$, a constant $0$, and functions $succ$ and $pred$ to study PCF in a PTS setting. We can also consider the above system extended with simple inductive types, which we will explore in a future post!

This is the blog version of a talk I did for the PLSE seminars at UofA.

In this post I am going to discuss the problems with adding mutation to an object-oriented programming language with depth-subtyping and propose a type-system which safely supports mutation while keeping a notion of depth-subtyping.

I will avoid formal definitions for this post and instead motivate ideas through examples. Many of these ideas will be familiar to anyone who has programmed in an object-oriented language before. I will also be very loose with terminology; for example, I might say objects are subtypes when I mean their types are subtypes.

Before I can discuss depth-subtyping or even subtyping, I am going to discuss what I mean by typing. For this post I am going to focus on typing for values, rather than typing for program fragments.

So what are values? Values are things such as `true`, `1`, `1.3`, and the object `{x: 2, y: 3}`. This last example is an object with fields `x` and `y` which contain the numbers `2` and `3`.

If you are familiar with object-oriented languages you will recognize the above values as having the types `Bool`, `Int`, `Float`, and `{x: Int, y: Int}`.

For this post, we are going to think of typing as the following:

> The type of a value represents the operations we are allowed to perform on the value.

For example, if we have an `Int`, we are allowed to add it to another `Int`, but we are not allowed to add it to an object:

- `1 + 3` is allowed
- `1 + {x: 2, y: 3}` is **not** allowed

Let’s start with some examples:

- `Nat <: Int`. We often think of natural numbers as being a subtype of integers.
- `{x: Int, y: Int, z: Int} <: {x: Int, y: Int}`. An object with fields `x`, `y`, and `z` is a subtype of an object with fields `x` and `y`.

But what does subtyping mean? Given the above notion of typing:

> We say that `T` is a subtype of `U` (written `T <: U`) if all operations allowed on values of type `U` are allowed on values of type `T`.

Before we discuss depth-subtyping, let’s discuss a contrasting idea: **width-subtyping**. Let’s go back to the last example:

`{x: Int, y: Int, z: Int} <: {x: Int, y: Int}`

Any operations allowed on `{x: Int, y: Int}` are also allowed on `{x: Int, y: Int, z: Int}`; we just ignore the field `z`. This idea is known as width-subtyping: objects which are wider in fields are subtypes.

Most languages which support subtyping support width-subtyping in some way or another. For example, in Java we can extend an object type with more fields.

Depth-subtyping is the idea that if we have subtypes `T <: U`, then objects containing `T`s are subtypes of objects containing `U`s: `{field: T} <: {field: U}`. Contrasted with width-subtyping, instead of going wider in fields, we go deeper in fields.

For example, since we had `Nat <: Int`, we have `PointNat <: PointInt`, where `PointNat` and `PointInt` are defined as:
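In the object-type notation used above, these are presumably:

```
PointNat = {x: Nat, y: Nat}
PointInt = {x: Int, y: Int}
```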

And it can be depth-subtyping all the way down: objects containing `{field: T}` are subtypes of objects containing `{field: U}`, and so on.

Depth-subtyping can be very useful. Consider the following function:
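A sketch of such a function, in the notation of this post, consistent with the description that follows:

```
diff(p: {x: Int, y: Int}): Int =
  p.y - p.x
```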

`diff` takes an object which contains `Int`s in the fields `x` and `y`, subtracts the `x` value from the `y` value, and returns the answer.

Now suppose we have a `PointNat` value `p`.

It is **completely safe** to pass `p` to the function `diff`, and depth-subtyping would allow us to reuse the `diff` function for `PointNat`s. Without depth-subtyping we would have to rewrite the function for `PointNat`s.

The `diff` function only relies on the input object having fields `x` and `y` containing values which can be treated as `Int`s. In general, it is safe to pass a `PointNat` to a function which expects its input to be a `PointInt`… unless the input is being mutated.

To see why mutation is bad for depth-subtyping, we are going to introduce some more types.
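The new types are not spelled out here, but based on how they are used below, presumably `Point3D` extends `PointInt` with a `z` field, and `Container[T]` wraps a single mutable field:

```
Point3D = {x: Int, y: Int, z: Int}
Container[T] = {field: T}
```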

Usual width-subtyping implies that `Point3D <: PointInt`. Consider the following program; we will explain it shortly.

```
1. Point3D p3d = new Point3D(1, 2, 3)
2. PointInt p2d = new PointInt(1, 2)
3. Container[Point3D] c = new Container(p3d)
4. Container[PointInt] c2 = c
5. c2.field := p2d
6. Point3D p = c.field
7. return p.x + p.y + p.z
```

We first create points `p3d` and `p2d`, the first being a 3d-point (`Point3D`) and the second a 2d-point (`PointInt`). Then, we create a 3d-point container, `c`. Next, we use depth-subtyping to cast `c` to a 2d-point container `c2`. Then we mutate the container, storing `p2d` through the reference `c2`. Now we use the original reference `c`, which is treated as a 3d-point container, to read its field as a 3d-point. Finally, we read the fields of the object and sum them up.

If we run this code, it will throw a runtime exception when `p.z` is executed! This is because when we read `c.field`, it actually contains a 2d-point, which does not have a `z` field, so trying to read the non-existent `z` field throws a runtime exception.

Fun fact: if we used `Array` instead of `Container`, Java would compile the above code! Array upcasting in Java was well intentioned, as it allows a form of depth-subtyping for arrays, but it is ultimately unsafe.
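We can see this unsafety concretely in Java, whose arrays are covariant. The classes below are hypothetical stand-ins for the point types above; the bad store compiles, but the runtime rejects it with an `ArrayStoreException`:

```java
class PointInt {
    final int x, y;
    PointInt(int x, int y) { this.x = x; this.y = y; }
}

class Point3D extends PointInt {
    final int z;
    Point3D(int x, int y, int z) { super(x, y); this.z = z; }
}

public class ArrayVariance {
    static String demo() {
        Point3D[] c = { new Point3D(1, 2, 3) };
        // Allowed: Java arrays are covariant, so Point3D[] upcasts to PointInt[].
        PointInt[] c2 = c;
        try {
            c2[0] = new PointInt(1, 2);  // compiles, but fails at runtime
            return "stored";
        } catch (ArrayStoreException e) {
            return "ArrayStoreException";
        }
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```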

So what went wrong? The problem is that the operation “assign a 2d-point”:

- should be allowed for 2d-point containers, but
- should not be allowed for 3d-point containers.

So not all operations allowed for 2d-point containers should be allowed for 3d-point containers, even though 3d-points are subtypes of 2d-points.

This might lead us to believe that we cannot allow depth-subtyping in languages with field mutation. Fortunately, this turns out not to be true! We can recover depth-subtyping, or at least a notion thereof, if we use something which I call **bounded field-typing**.

In bounded field-typing, fields have bounded types such as the following:

`{fld: T..U}`

We call the lower bound `T` the *setter type* of the field `fld`, and the upper bound `U` the *getter type*. Usually, the setter type is a subtype of the getter type, `T <: U`.

If `x` has the above object type, reading the field `fld` produces a value of type `U`; under bounded field-typing, field reads produce the getter type. For mutation, the assignment `x.fld := y` is allowed if `y` has type `T`, i.e. we allow assignments of the setter type.
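Mainstream languages approximate this getter/setter split with use-site variance. In Java, for example, `? extends` gives a read-only (getter) view of a list and `? super` gives a write-only (setter) view; this is only an analogy to bounded field types, not the same mechanism:

```java
import java.util.ArrayList;
import java.util.List;

public class BoundedViews {
    // Getter view: we may read elements at the upper bound Number,
    // but the compiler rejects insertions.
    static int first(List<? extends Number> getterView) {
        // getterView.add(4);  // would not compile
        return getterView.get(0).intValue();
    }

    // Setter view: we may insert Integers, but reads only give Object.
    static void push(List<? super Integer> setterView, int n) {
        setterView.add(n);
    }

    public static void main(String[] args) {
        List<Integer> xs = new ArrayList<>(List.of(1, 2, 3));
        push(xs, 4);
        System.out.println(first(xs) + " " + xs.size());
    }
}
```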

In the bounded setting, we allow depth-subtyping for getter-types:

`S <: U` implies `{fld: T..S} <: {fld: T..U}`.

`{fld: T..U}` allows us to read a `U` from the field `fld`. Since all operations on `U`s are allowed on `S`s, we are allowed to read an `S` from a `{fld: T..S}` and treat it as a `U`.

We also allow depth-subtyping for setter types, but in the other direction, i.e. subtyping is contravariant in the setter type.

`S <: T` implies `{fld: T..U} <: {fld: S..U}`.

Notice how `S` and `T` appear on opposite sides of the `<:` symbol. If we are allowed to write a value which allows the operations of `T` to be performed on it, then it is safe to write a value which allows more operations.

In the bounded setting, the previous types such as `PointInt` and `PointNat` are written with the setter and getter types being equal:
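That is, presumably:

```
PointInt = {x: Int..Int, y: Int..Int}
PointNat = {x: Nat..Nat, y: Nat..Nat}
```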

In the bounded setting, `PointNat` is not a subtype of `PointInt`: since we do **not** have `Int <: Nat`, their setter types are not in the subtyping relationship required for depth-subtyping. However, since `Nat <: Int`, we have:
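Widening the getter types from `Nat` to `Int`:

```
PointNat <: {x: Nat..Int, y: Nat..Int}
```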

Using the same subtyping relationship on the setter types of `PointInt`, we have:
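Narrowing the setter types from `Int` to `Nat`:

```
PointInt <: {x: Nat..Int, y: Nat..Int}
```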

Using this, we can write a function `diff` which works for both `PointInt`s and `PointNat`s.
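A sketch of such a `diff`, taking the common supertype above as its input:

```
diff(p: {x: Nat..Int, y: Nat..Int}): Int =
  p.y - p.x
```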

At this point, you will notice that the above `diff` function is not quite general enough. If we had a value of the following type, we would not be able to use `diff` on it, since even integers are neither subtypes nor supertypes of natural numbers.
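For example, a point of even integers, assuming a hypothetical type `Even` of even integers with `Even <: Int` but with `Even` and `Nat` incomparable:

```
PointEvenInt = {x: Even..Even, y: Even..Even}
```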

For this reason, under bounded field-typing it is very useful to have a bottom type, `⊥`. There is no way to create a value of type `⊥`, but we assume that all operations are allowed on values of type `⊥`. More importantly, `⊥ <: T` for any type `T`. This allows us to write `diff` as the following.
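A sketch, with `⊥` as the setter type so that any object whose `x` and `y` fields can be read as `Int`s is accepted:

```
diff(p: {x: ⊥..Int, y: ⊥..Int}): Int =
  p.y - p.x
```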

Notice that we use `⊥` for the setter types of our input parameter. We can now call the `diff` function with a `PointEvenInt` as its input, or indeed any object, as long as it contains fields `x` and `y` whose values can be treated as `Int`s. So we have effectively recovered what we wanted from depth-subtyping!

Under bounded field-typing, we still have that 3d-points (`Point3D`) are subtypes of 2d-points (`PointInt`), since width-subtyping is still safe. As we discussed in the previous example, 3d-point containers are not 2d-point containers, because they do not allow the operation “assign a 2d-point”. This is now reflected at the type level.

```
1. Point3D p3d = new Point3D(1, 2, 3)
2. PointInt p2d = new PointInt(1, 2)
3. Container[Point3D] c = new Container(p3d)
4. Container[PointInt] c2 = c
5. c2.field := p2d
6. Point3D p = c.field
7. return p.x + p.y + p.z
```

Here the assignment in line 4 is not allowed and is caught by the type system. To make the assignment valid, we can change the container type to a bounded type.

```
1. Point3D p3d = new Point3D(1, 2, 3)
2. PointInt p2d = new PointInt(1, 2)
3. Container[Point3D] c = new Container(p3d)
4. Container[Point3D..PointInt] c2 = c
5. c2.field := p2d
6. Point3D p = c.field
7. return p.x + p.y + p.z
```

But now, the assignment in line 5 is not valid, since it is only safe to assign 3d-points to `c2`. If we change the original container type of `c` to `Container[PointInt]`, then the assignment is allowed.

```
1. Point3D p3d = new Point3D(1, 2, 3)
2. PointInt p2d = new PointInt(1, 2)
3. Container[PointInt] c = new Container(p3d)
4. Container[PointInt] c2 = c
5. c2.field := p2d
6. Point3D p = c.field
7. return p.x + p.y + p.z
```

But now, the field read in line 6 is not allowed, since we are only allowed to assume that the field contains a `PointInt`. No matter what we do, the type system does not allow us to assign a 2d-point and read it back as a 3d-point.

This concludes the post. I hope this shed some light on why the “getter and setter methods” meme in Java exists. I’m going to leave you by saying that bounded field-typing really is safe! I formally proved it to be safe in an extension of the DOT calculus called *κ*Dot (officially kappa-dot, informally kay-dot). You can find the Coq code here, and the accompanying paper, which I presented at the Scala Symposium, here.

A friend pointed out that this post is being discussed on Reddit.

Wanted a Hakyll-generated site which prerenders $\LaTeX$ so that no JavaScript runs on the client. Hacked this together over a few days and it somehow works!

Inline math looks like this: $x+y$, and display math looks like the following. $\prod_{i=1}^{n} p_i + q_i$

The unfortunate part is that I still need some JavaScript on the server side. The blog posts are prerendered using $\KaTeX$, which relies on the `katex` binary that got added to my path when I ran `npm install katex -g`.

The $\KaTeX$ compiler activates if there is a `katex` metadata field. The idea is to enable $\KaTeX$ only when heavy $\LaTeX$ is needed and to use plain pandoc $\LaTeX$ otherwise. The $\KaTeX$ files are somewhat slow to compile, since we spin up a new `katex` process for each $\LaTeX$ expression.

```
--------------------------------------------------------------------------------
{-# LANGUAGE OverloadedStrings #-}
import Hakyll
import Hakyll.Core.Compiler (unsafeCompiler)
import KaTeX.KaTeXify (kaTeXifyIO)

--------------------------------------------------------------------------------
main :: IO ()
main = hakyll $ do
  ...
  match "posts/*" $ do
    route $ setExtension "html"
    compile $ pandocMathCompiler
      >>= loadAndApplyTemplate "templates/post.html" postCtx
      >>= loadAndApplyTemplate "templates/default.html" postCtx
      >>= relativizeUrls
  ...

--------------------------------------------------------------------------------
pandocMathCompiler :: Compiler (Item String)
pandocMathCompiler = do
  identifier <- getUnderlying
  s <- getMetadataField identifier "katex"
  case s of
    Just _ ->
      pandocCompilerWithTransformM
        defaultHakyllReaderOptions defaultHakyllWriterOptions
        (unsafeCompiler . kaTeXifyIO)
    Nothing -> pandocCompiler
```

Most of the magic happens in the `KaTeX.KaTeXify` module. The file ended up being somewhat small, since Pandoc supplies most of the functions needed out of the box. In particular, Pandoc provides the `walkM` function, which walks a Pandoc parse tree bottom-up.

```
module KaTeX.KaTeXify (kaTeXifyIO) where

import System.Process (readCreateProcess, shell)
import Text.Pandoc.Definition (MathType(..), Inline(..), Pandoc, Format(..))
import Text.Pandoc.Readers.HTML (readHtml)
import Text.Pandoc.Options (def)
import Text.Pandoc.Walk (walkM)
import Text.Pandoc.Class (runPure)
import Data.String.Conversions (convertString)

--------------------------------------------------------------------------------
-- Pick the katex CLI invocation based on the math type.
kaTeXCmd :: MathType -> String
kaTeXCmd DisplayMath = "katex --display-mode"
kaTeXCmd _           = "katex"

-- Render a LaTeX string to HTML by piping it through the katex binary.
rawKaTeX :: MathType -> String -> IO String
rawKaTeX mt inner = readCreateProcess (shell $ kaTeXCmd mt) inner

-- Ensure the katex output is parsable HTML before embedding it raw.
parseKaTeX :: String -> Maybe Inline
parseKaTeX str =
  case runPure $ readHtml def (convertString str) of
    Right _ -> Just (RawInline (Format "html") str)
    _       -> Nothing

-- Replace each math inline with prerendered HTML, falling back to the
-- original inline if katex's output does not parse.
kaTeXify :: Inline -> IO Inline
kaTeXify orig@(Math mt str) = do
  s <- fmap parseKaTeX $ rawKaTeX mt str
  case s of
    Just inl -> return inl
    Nothing  -> return orig
kaTeXify x = return x

--------------------------------------------------------------------------------
kaTeXifyIO :: Pandoc -> IO Pandoc
kaTeXifyIO = walkM kaTeXify
```