Reversing normalization for errors

Normalization is a technique used to ensure that any two objects that represent the same thing end up with the same form, aka the normal form: a unique form of an expression.

What is the problem?

Normalization is one of the core concepts in programming languages, and it is everywhere, but it leads to poor error messages. Hopefully reversing normalization can save us.

Normalization is evaluation

Normalization is useful whenever we want to do pretty much anything with an object. If I want to (print 3 + 2 times the message Hello), then I first normalize 3 + 2 to 5 and only then (print 5 times the message Hello).

The normal form of many expressions is known as a value, so the act of producing a value is also known as evaluation. In fact, “running” a program is no different from normalizing it.

Normalization for type checking

Normalization is also useful whenever we want to check if two objects are equal, such as during type checking.

How does TypeScript know that the following code should be allowed?

type Id<A> = A;
const f = (x: number): Id<number> => x;

Well, the function wants to return an Id<number> but it is actually returning a number. This works because the normal form of Id<number> is actually number, and number == number is trivially true.

So by first normalizing the types we reduced the entire checking process to a simple equality. This is the gist of pretty much all fancy-ish type checking.
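
To make the “normalize, then compare” step concrete, here is a toy sketch in OCaml; the ty type, the alias table and the check function are all made up for illustration, this is not how TypeScript actually works.

(* a type alias like Id<A> = A is just a function on types; normalizing
   means expanding aliases until none are left *)
type ty =
  | Number
  | String
  | Apply of string * ty list (* a constructor use, e.g. Id<number> *)

(* type Id<A> = A *)
let definitions = [ ("Id", fun args -> List.hd args) ]

let rec normalize ty =
  match ty with
  | Apply (name, args) -> normalize ((List.assoc name definitions) args)
  | _ -> ty

(* checking is: normalize both sides, then a simple structural equality *)
let check ~expected ~actual = normalize expected = normalize actual

let () = assert (check ~expected:(Apply ("Id", [ Number ])) ~actual:Number)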

But what happens when it fails?

Let’s say you wrote a bug and you’re trying to debug it. You have a hunch of where the issue should be, and as a professional programmer your first reflex is to add a debug message to check.

const f = (x: number) => {
  // checking if f is called with the right value
  console.log(x);
};

After running the program, you find out that x is actually wrong: it should be 13 but it’s 12. The main issue is that f is called all over the codebase. You can check stack traces, but those are hard to read, especially if you have bad async code. It would be really nice if you could just regex search the codebase and find the desired call site, or even better, just ask the language where this value was produced.

Sadly this doesn’t work. Maybe at the call site it was actually called with f(5 + 7) and you only know 12. Also, sadly, the language doesn’t know where a value was created, as only the actual value is known, not how it was produced. As such, good luck.

This actually happens, because types

Another case, likely more realistic and painful, happens at the type level. Let’s check the previous example, but now with a twist: the type constructor is wrong.

// far.away.lib.ts
type Id<A> = string; // wrong

// your.crud.ts
const f = (x: number): Id<number> => x;

The typer will try to do the same thing: it is expecting Id<number> and you returned number, but this time normalizing Id<number> actually produces string, and string == number is false.

This would not be a major problem, but when reporting the error it says string is not a number, which is a reasonable error, except that nowhere in your code is there any mention of string. The issue is that the typer is giving you the diff over the normal form, which may look unrelated to the actual annotations.

Solving the problem

While there are a lot of partial solutions for both issues, I’ve been thinking about this for a while, trying to find a principled solution that always works: hopefully one that can be implemented without too much overhead for a debugging build and that works nicely for the small interpreters inside of typers.

Originally my hope was to just preserve the original form as much as possible. This is okay(ish) for type checking because the forms are small and you don’t need to hold intermediary values, but it still implies preserving a lot of versions of the same type, which is not very efficient.

Reductions as equivalences

Recently I was toying with an idea about rewriting systems: making reductions into actual equivalences. So I played with adding the reverse of each reduction to the system, but most reductions actually erase information, so you don’t know what the “original” term was.

The reductions-as-equivalences thing didn’t go anywhere yet, but the idea of reversing reductions is quite interesting.

Finally, reversing reductions

The key insight here is that if you can reverse all the reductions, then you can just go back to the original term, producing whatever error message you want and showing whatever intermediary step you want.

The only requirement is to track which reductions happened; I will be calling this the reduction’s receipt. Here it will be a special node in my tree, but in an efficient implementation it is likely tracked in a different way. I have a couple of ideas, but that is for another post.

Here I will focus on evaluation of the lambda calculus. While this may look unrelated to typing, the idea is the same; in fact you can think about type constructors as lambdas on the type level. As such, the original example could still be produced.

A reversible LC

Let’s imagine the Lambda Calculus extended with let.

Term (M, N) ::=
| x // var
| x => M // lambda
| M N // apply
| x = N; M // let

It has two reductions, beta and subst.

(x => M) N |-> x = N; M  // beta
x = N; M |-> M{x := N} // subst

Beta will produce lets and subst will eliminate them; this is a more or less traditional system. Here you can notice that both reductions actually lose information. This can be verified because both of the following examples produce the same term, and as such you cannot distinguish between them after normalization.

(x = 2; f x) // you wrote this
(f 2) // output

(x => f x) 2 // but what if you wrote this?
(x = 2; f x)
(f 2) // output

Of course, this is intended, as they share the same normal form, but here the goal is to be able to actually figure out which one you wrote. So let’s extend the system and tweak the reduction rules such that they preserve receipts.

Receipt (R) ::=
| [beta]
| [x = N] // subst
| [x] // var
Term (M, N) ::=
| #R; M // receipt

(x => M) N |-> #[beta]; x = N; M // beta
x = N; M |-> #[x = N]; M{x := #[x]; N} // subst

Now let’s check the original examples using those modified rules.

(x = 2; f x)
(#[x = 2]; f (#[x]; 2)) // actual output
(f 2) // prune receipt

(x => f x) 2 // but what if you wrote this?
(#[beta]; x = 2; f x)
(#[beta]; #[x = 2]; f (#[x]; 2)) // actual output
(f 2) // prune receipt

Notice that they actually produce different results, so normalization is broken. And yes, technically it is, but by pruning the receipts they go back to producing the same normal form. Now all the original information is preserved, and as such those rules can be reversed.

#[beta]; x = N; M |-> (x => M) N  // rev-beta
#[x = N]; M{x := #[x]; N} |-> x = N; M // rev-subst

Mostly, the final goal is to have the following:

eval : (initial : Term) -> (normal : Term);
rev : (normal : Term) -> (initial : Term);

eval_rev_is_id : (initial : Term) -> initial == rev (eval initial);
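
To make the receipt idea concrete, here is a minimal sketch in OCaml of the system above. The concrete representation (a Receipt node in the term, a step / unstep pair) is just one possible choice; this is not a full evaluator.

(* terms of the lambda calculus extended with let and receipts *)
type term =
  | Var of string                (* x *)
  | Lam of string * term         (* x => M *)
  | App of term * term           (* M N *)
  | Let of string * term * term  (* x = N; M *)
  | Receipt of receipt * term    (* #R; M *)

and receipt =
  | Beta                         (* [beta] *)
  | Subst of string * term       (* [x = N] *)
  | Use of string                (* [x] *)

(* substitution that wraps every substituted occurrence with a [x] receipt *)
let rec subst x n m =
  match m with
  | Var y when y = x -> Receipt (Use x, n)
  | Var _ -> m
  | Lam (y, body) when y <> x -> Lam (y, subst x n body)
  | Lam _ -> m
  | App (f, a) -> App (subst x n f, subst x n a)
  | Let (y, v, body) ->
      let body = if y = x then body else subst x n body in
      Let (y, subst x n v, body)
  | Receipt (r, body) -> Receipt (r, subst x n body)

(* one receipt-preserving step at the root of the term *)
let step term =
  match term with
  | App (Lam (x, m), n) -> Some (Receipt (Beta, Let (x, n, m)))  (* beta *)
  | Let (x, n, m) -> Some (Receipt (Subst (x, n), subst x n m))  (* subst *)
  | _ -> None

(* reversing a step: peel the receipt back into the term it came from *)
let unstep term =
  match term with
  | Receipt (Beta, Let (x, n, m)) -> Some (App (Lam (x, m), n))  (* rev-beta *)
  | Receipt (Subst (x, n), m) ->                                 (* rev-subst *)
      (* undo the substitution by turning every [x] receipt back into x *)
      let rec unsubst t =
        match t with
        | Receipt (Use y, _) when y = x -> Var x
        | Receipt (r, body) -> Receipt (r, unsubst body)
        | App (f, a) -> App (unsubst f, unsubst a)
        | Lam (y, body) -> Lam (y, unsubst body)
        | Let (y, v, body) -> Let (y, unsubst v, unsubst body)
        | Var _ -> t
      in
      Some (Let (x, n, unsubst m))
  | _ -> None

Iterating step to a normal form while keeping the receipts, and iterating unstep back, is exactly the eval / rev pair above.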

Typers are interpreters and I gotta go fast

Another idea that I’ve been playing with for a while is that typers are just interpreters. This allows all the techniques developed for evaluating code, such as abstract machines, closure conversion, defunctionalization, bump allocation and JITs, to be used while typing.

This is especially important when thinking about languages that do a lot of computation during equality checking, such as dependently typed languages or TypeScript. But under the right circumstances it may also be quite interesting for speeding up elaboration.

This is half the problem

Reversing reductions doesn’t solve the error messages problem by itself, but it unlocks the best possible results: by having the full term, we can start producing better tools for fancier type systems. Maybe more important than that, it also allows for much faster development of programming languages; hopefully good type errors won’t require almost complete rewrites of codebases: as long as you can add receipts back into the system, you’re good.

I also need to think about the receipts in a slightly more formal way and try harder to find prior research on them; the general trick seems to work but, as always, it’s easy to screw it up. Follow me on Twitter (not X) and Twitch for that.

https://x.com/TheEduardoRFS/
https://www.twitch.tv/EduardoRFS

References

Internalizing computing

The Lambda Calculus is one of the simplest rewriting systems ever made, and while all of its objects are functions, it is still a Turing complete system.

Church Encoding

One of the most natural things to do in the Lambda Calculus is to describe data such as booleans, naturals, pairs and lists. This is often done through Church encoding, but most people don’t seem to be able to mechanically derive those encodings.

Computing Power

The main insight provided here is that a Church encoding is just the internalization of the elimination function for some data.

Every data structure seems to come with a function that is capable of doing every fundamental operation on the data itself. This single function provides all the computing power possible for the data, such as case for booleans and fold for lists.

In fact, it is easy to notice that anything that can describe case can be used as a boolean, such as using the empty list as false and all non-empty lists as true.
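
As a tiny concrete illustration of that claim, here is a sketch in OCaml where lists stand in for booleans; the helper name is made up.

(* lists already come with a "case": empty vs non-empty,
   so [] can play the role of false and any non-empty list of true *)
let case_list l on_nil on_cons =
  match l with
  | [] -> on_nil
  | _ :: _ -> on_cons

let () =
  assert (case_list [] "false" "true" = "false");
  assert (case_list [ 1 ] "false" "true" = "true")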

Booleans

Let’s reinvent the Church encoding for booleans step by step. As mentioned above, to describe some piece of data, internalizing the elimination rule is enough; for booleans this is the case function.

In more concrete terms, the goal is to find a definition of true, false and case that satisfies the following rules:

true === case true;
false === case false;
case true then else === then;
case false then else === else;

A nice property to notice here is that b === case b, so a valid definition is that case === id, which leads to:

case = x => x;
(case true) then else === then;
(case false) then else === else;
// which implies
true then else === then;
false then else === else;

Well, true and false would need to be functions of the shape then => else => _ to meet the rules, but they need to return different values.

Now we have a set of equations that can be solved by applying some algebra.

// assume
true = then => else => ?true;
true then else === then;
// expands
(then => else => ?true) then else === then;
// reduce
?true === then;

// same for false
false = then => else => ?false;
false then else === else;
(then => else => ?false) then else === else;
?false === else;

This leads to the canonical representation of the booleans.

true = then => else => then;
false = then => else => else;
case = x => x;
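
As a quick sanity check, here is the derived encoding written as plain OCaml values; the names ctrue / cfalse / ccase are just illustrative.

(* true picks the first argument, false picks the second, case is identity *)
let ctrue = fun then_ _else -> then_
let cfalse = fun _then else_ -> else_
let ccase b = b

let () =
  assert (ccase ctrue 1 0 = 1);
  assert (ccase cfalse 1 0 = 0)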

Finding types

Another interesting property of the internalized version being the same as the elimination function is that the type of fold n and the type of n will be the same. In fact, a good way to find the type is to start with the type of the elimination function:

// make Nat equal to the type of fold and remove the first parameter
fold : (n : Nat) -> (A : Type) -> A -> (A -> A) -> A;
Nat = (n : Nat) -> (A : Type) -> A -> (A -> A) -> A;
Nat = (A : Type) -> A -> (A -> A) -> A;

// make Bool equal to the type of case and remove the first parameter
case : (b : Bool) -> (A : Type) -> A -> A -> A;
Bool = (b : Bool) -> (A : Type) -> A -> A -> A;
Bool = (A : Type) -> A -> A -> A;

An interesting property is that for most examples of structural recursion, there is no type level recursion.
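
To make that concrete, here are the Church naturals written in OCaml, using a record with a polymorphic field to stand in for (A : Type) -> A -> (A -> A) -> A; the names are illustrative.

(* a natural is its own fold: note the field type has no recursion on nat *)
type nat = { fold : 'a. 'a -> ('a -> 'a) -> 'a }

let zero = { fold = (fun z _s -> z) }
let succ n = { fold = (fun z s -> s (n.fold z s)) }

(* "fold n" and "n" are the same thing, application is all we need *)
let to_int n = n.fold 0 (fun x -> x + 1)

let () = assert (to_int (succ (succ zero)) = 2)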

References

A tale of sum types on Linear F

Linear F is a system similar to System F°, but with the traditional type kind removed, so it is a pure linear lambda calculus with first-class polymorphism.

TLDR

To encode sum types, weakening is required. By carrying the garbage around in a monad you can easily model weakening in Linear F. As such, you can encode sum types in Linear F. Proof: weak.linf

Context

I’ve been playing with linear type systems for a while, and currently I hold the opinion that some form of linear calculus is probably the right solution for a modern functional programming language.

As such I’ve been trying to show that you can do everything in a pure linear calculus, by doing {Church, Scott}-encodings of every interesting primitive present in real languages. Sadly, many of the traditional encodings rely on weakening, which is not directly available in a linear calculus. Due to that, the traditional wisdom is that sum types are not possible in a pure linear calculus.

Explicit Weakening

My hypothesis is that weakening can always be explicitly encoded in a linear system by carrying everything discarded explicitly as a parameter.

The main idea is that any function that relies on weakening can return all the discarded elements together with its output, as a multiplicative product, aka a pair.

Naive Weakening Encoding

The simplest encoding possible is to literally just return a pair, so the affine term x => y => x would be written as x => y => (x, y); the convention here is that the second element of the pair is just garbage and as such should be ignored.

Nicer Weakening Encoding

A typed encoding of the garbage is possible in the presence of existential types: the garbage bag can have the type Garbage === ∃x. x, which can be encoded in Linear F. This makes it so that a function to collect garbage can be written.

An even nicer weakening encoding

A nicer encoding can be done by making a monad for weakening; this makes handling garbage implicit in the monadic context. Weak A === Garbage -> (A, Garbage), so any function doing weakening can have the type A -> Weak B.
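
As a rough illustration of the shape of this monad, here is a sketch in (non-linear) OCaml; OCaml does not enforce linearity and the names are made up, so this only shows the plumbing of threading the garbage.

(* Garbage === ∃x. x, as an existential constructor *)
type garbage = Garbage : 'a -> garbage

(* Weak A === Garbage -> (A, Garbage) *)
type 'a weak = garbage -> 'a * garbage

let return x : _ weak = fun g -> (x, g)

let bind (m : 'a weak) (f : 'a -> 'b weak) : 'b weak =
 fun g ->
  let x, g = m g in
  f x g

(* discarding a value just packs it into the garbage bag *)
let weak x : unit weak = fun g -> ((), Garbage (x, g))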

The perfect weakening encoding

While monadic weakening is nice enough to actually use, an even better option would be an encoding based on algebraic effects, such that the function weak : ∀A. A -[Weak]> () can be used to explicitly weaken anything. Such a function will simply pack the value as Garbage and call the effect handler, which can then decide what to do with that piece of data.

This could be combined with first-class support from the language, as an implicit effect, so that it behaves exactly like an affine system.
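
For a flavor of the effect-based version, here is a rough sketch using OCaml 5 effect handlers; OCaml is not linear and the names (Weak, collect) are assumptions, the point is only that the handler receives the discarded values and decides what to do with them.

open Effect
open Effect.Deep

type garbage = Garbage : 'a -> garbage
type _ Effect.t += Weak : 'a -> unit Effect.t

(* weak : ∀A. A -[Weak]> () *)
let weak x = perform (Weak x)

(* run a computation, collecting everything it discards *)
let collect f =
  let bag = ref [] in
  let result =
    try_with f ()
      {
        effc =
          (fun (type a) (eff : a Effect.t) ->
            match eff with
            | Weak v ->
                Some
                  (fun (k : (a, _) continuation) ->
                    bag := Garbage v :: !bag;
                    continue k ())
            | _ -> None);
      }
  in
  (result, !bag)

let () =
  let result, discarded = collect (fun () -> weak "unused"; 42) in
  assert (result = 42 && List.length discarded = 1)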

Back to Sum Types

Many of the traditional Church encodings for data rely on weakening, such as booleans and sum types, e.g. true === x => y => x. As such those encodings seem to not work in a purely linear setting, but they can be done in an affine setting.

And as shown above, weakening can be done in Linear System F, which is contrary to some beliefs:

[MZZ10] - we cannot encode linear sums in System F° as presented so far

Church encoding of linear types - Unfortunate, but known fact. So, we cannot (at least obviously) simulate A & B using something else.

Linear F + Either

Either is the canonical sum type; if you can describe it you can describe any other sum type, so showing Either is enough to show all sum types.

Example for Either can be found at weak.linf.

Linear F + Bool

But a simpler example that is easier to analyze is booleans:

// let's assume the weakening monad
type Weak A === Garbage -> Pair A Garbage in
let weak : forall A. A -> Weak Unit === _ in
let map : forall A B. Weak A -> (A -> B) -> Weak B === _ in

// on booleans one of the arguments is always weakened
type Bool === forall A. A -> A -> Weak A in
let true : Bool === fun (type A) x y ->
  map (type Unit) (type A)
    // weakens y
    (weak (type A) y)
    // returns x
    (fun unit -> unit (type A) x) in
let false : Bool === fun (type A) x y ->
  map (type Unit) (type A)
    // weakens x
    (weak (type A) x)
    // returns y
    (fun unit -> unit (type A) y) in

// examples
/* because variables cannot appear twice, closures cannot be used
   so the solution is to pass functions, aka CPSify the matching

   A is the shared context between branches
   K is the return type of the matching */
let match_bool : forall A K. Bool -> (A -> K) -> (A -> K) -> A -> Weak K ===
  fun (type A) (type K) b then_ else_ x ->
    b (type K) then_ else_ x in

// slightly more concrete example, assumes integers
let incr_if_true : Bool -> Int -> Weak Int ===
  fun b x ->
    /* because x cannot appear twice,
       we need to do the case on a function */
    b (type Int)
      (fun (x : Int) -> x + 1)
      (fun (x : Int) -> x)
      // actually apply
      x

References

A different level based typer

In this post I will try to propose / share a rank / level based typer which I believe has free generalization. It can be adapted to the core typer present in OCaml (let-ranking) and SML (lambda-ranking) while still following the same mental model.

warning, no soundness guarantees

TLDR

We can make it so that the level in a rank / level based typer only ever increases, and couple the dead region to the generalized region so that generalization is free. That requires an additional pass, which can be done together with parsing for a “true” “free” generalization.

How did I get here

Recently I’ve been studying how types and typers work, including classics like STLC (Simply Typed Lambda Calculus), HM (Damas-Hindley-Milner type system) and System F.

Along the way I implemented many typers and started to understand how they work in theory and in practice (value restriction). Most of them were implemented in OCaml, and naturally I started to look more and more into the OCaml typer, which I already had some intuitive understanding of after so many “type constructor t would escape its scope” errors.

But after reading How OCaml type checker works by Oleg I had an enlightenment on the topic, yet I was left with a strong feeling of “this can be extended even further”. So I tried the natural ideas that came to my mind: using negative levels to encode multiple forall levels and short-circuit instantiation, which seems promising and which I plan to write a post about in the following weeks, and levels that always increase, which is the topic of this week.

I HIGHLY RECOMMEND that you read How OCaml type checker works to understand what I’m talking about and to find any problem in my approach.

What is a level based typer?

The idea is that instead of scanning the context during typing, we use a level to know whether a type variable is still present in the scope, and generalize it when it is not; this is effectively an escape analysis. It was invented / discovered by Didier Rémy and it leads to a more efficient implementation of an HM typer.

Note that Didier Rémy and the literature call levels “ranks”, but the OCaml typer calls them levels, which makes more sense in my head (probably bias), so I will be using levels here.

It is formalized to be equivalent to Algorithm W, which ensures that it generates the most general type, and in a sound manner.

It also exists in two major variations:

  • lambda-ranking, where every lambda introduces a new level and generalizes
  • let-ranking, where every let introduces a new level and generalizes.

Each has its advantages and the idea shown here can easily work with both, but my implementation will focus more on lambda-ranking, as it looks to me like it can be more easily extended.

Generalization

One of the properties of this family of typers is that generalization requires you to go over the type after typing some expression, check whether each variable's level is bigger than the current level, and mark it as quantified; a level bigger than the current level lives in a dead region and never escaped its scope, so it’s not present in the context. Again, escape analysis.

While this is cheap, as types are treated as always being really small, it’s not free and will do O(n) operations, with n being the number of nodes in the type tree.

The dead region

A dead region is a region of code that was already typed and lies “above” the current typing point.

In the current designs all type variables in a dead region are treated as quantified because we know that they never escaped their scope. So they need to actually be elevated to a level where all variables are quantified; essentially it’s a process where we look at a type, check if it’s in a dead region, and if so mark it.

Here I will be proposing that the dead region should be the quantified level, and that any variable in the dead region is treated as quantified.

So any variable outside of the dead region should be treated as a free variable and not duplicated during instantiation, while any variable inside the dead region is a quantified variable and should be duplicated during instantiation.

Moving the line in only one direction

The dead region moves as typing is done, and now the level that marks something to be quantified is the same as the level that delimits the dead region. So my proposal is essentially that the level that marks what is quantified is actually moving.

You can imagine that the level that marks something as generalized is a line where everything below it is not generalized and everything above it is generalized. Currently we move every type that is not at the generalized level individually to above the line; here we will actually be moving the line, so that all types which did not escape their scope are automatically treated as quantified. This makes generalization an O(1) operation, effectively just incrementing an integer.

Creating variables in the future

But this means that creating a variable at the current level doesn’t work as a free variable for its inner expressions; a solution to this is creating the variable at the level that will be in effect after the current typing finishes.

This doesn’t work with the way that we normally do regions, by entering and leaving a region, as the level after typing everything would always be the same level as before typing everything. So instead of entering and leaving, we only enter a region and never leave it.

But this means that before typing, the typer needs to somehow know what the region will be after it finishes its typing; this means that we need to somehow know the future.

A simple solution is to do a pass annotating every AST node that creates a variable with the level expected after typing its content. This pass can actually happen for “free” by doing it during parsing, so that there is no cost of iterating the AST.

In lambda-ranking this means that lambda + application will need to carry an additional level in the AST. In let-ranking only a let is required to carry the additional level.

Moving the line

So during typing of an expression, variables are created at the level expected after the current expression finishes, and after every expression the level is increased, marking that we left the current region.

This also requires that when unifying two types, instead of the resulting level always being the smallest possible one, it will be the largest possible one, so that unifying 'a[1] with 'b[2] results in 'b[2].
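
To summarize the moving line, here is a minimal sketch in OCaml of just the level bookkeeping described above; it is not a full typer and the names are mine.

(* a single global level that only ever increases; everything strictly above
   it sits in the dead region and is therefore already generalized *)
type ty =
  | Var of var ref
  | Arrow of ty * ty

and var =
  | Unbound of int (* the variable's level *)
  | Link of ty

let current_level = ref 0

(* variables are created "in the future": at the level expected after the
   enclosing expression finishes, taken from the annotated AST node *)
let new_var ~level_after = Var (ref (Unbound level_after))

(* free generalization: leaving the region is just bumping the line *)
let generalize () = incr current_level

(* instantiation only copies variables that live in the dead region *)
let is_quantified level = level > !current_level

(* unification keeps the largest level, so 'a[1] with 'b[2] gives level 2 *)
let merge_levels l1 l2 = max l1 l2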

Value Restriction?

Yeah, I don’t know. But I believe the easiest way is to tag types with some crazy big level.

Implementation

The following implementation does lambda-ranking + free generalization as described above. While I’m not so sure the current implementation is sound, I’m hoping the idea described here is.

a_different_level_based_typer.ml

References

A journey into Reason Mobile

This is mostly a report on why simple things aren’t simple, so no TLDR. And also a bit about Reason Mobile.

Context

Last year (2019), when I was still employed, I was looking at a cool piece of tech called Revery, a framework to develop desktop applications using Reason Native and JSX that is super fast, and also it’s not React. It felt really cool; trying some applications like Oni2, the performance was really impressive.

At this time I was still employed and was working with embedded, on a device with 128MB of memory: an armv7hf Linux box with a broken userspace, running Qt and using QML, with a screen that could only do full updates 5 times per second, yes, 5fps. Then I got really curious: would it be possible to use something like Revery for embedded development? Surely this thing can run Revery, right?

I was correct (I always am).

But … OCaml

Normally I would say that a cool feature of Reason is being fully compatible with OCaml, so that you can easily use the tools from the OCaml ecosystem, like the compiler, build systems like Dune, and even packages from opam, to build native applications, aka Reason Native.

This time was a little bit different. See, using the OCaml ecosystem also makes Reason Native suffer from the same problems as the OCaml ecosystem, like missing proper tooling to cross compile and not having great support for Android and iOS.

Yeah, the hardware could easily run it; it’s possible to run Revery with less than 64MB of memory and a potato as a CPU. It will not be exactly efficient on battery, but that was okay for me. But the tooling? There was no tooling.

To make things worse, we also have a new tool called esy, which can consume opam and npm packages while also making a really easy-to-reproduce environment. It is a really cool piece of tech, but how does it work? Yeah, sandboxing, and that completely breaks the previous attempts to cross compile from the OCaml ecosystem, namely opam-cross.

The easy trick

The obvious choice is “caveman cross-compiling”: just emulate the entire environment. Sure, it did work; it took a couple of hours and I got to compile binaries from Linux x86_64 to Linux ARMv7l. There is just a single detail: the reason it took a couple of hours isn’t because the setup of the environment needed any trick, nope, with esy that “just works”. It took a couple of hours because emulating an ISA is one of the slowest things you can ever do if you’re doing it properly, especially emulating a RISC on a CISC, like ARMv7l on x86_64.

But the trick that I was doing is called full system emulation; there is also another trick which uses user-space emulation combined with binfmt to run a chroot (like a Docker container) from one architecture on another. That was a lot better, but probably still 5x slower than natively compiling on my desktop.

Hackish Solution

A couple of months ago, I was not employed anymore and had a lot of spare time, so I tried to properly address that by adding cross compiling support to esy. Yeah, that wasn’t so simple: modeling multiple versions of the same package turned out to be really tricky, and I didn’t have any proper knowledge of package managers. So I made a hackish solution, like really hackish, I don’t even want to tell you how it works, but trust me, it’s a hackish solution.

I called it reason-mobile, a bad name, but the intent was “providing tools to cross compile Reason to mobile, aka Android and iOS”. On that … yeah, I got it to work.

This entire time I was only looking at Android, because it’s what I daily drive … no, iOS wasn’t simpler. But what you need to know now is that it works; in a future post the road to iOS can be discussed. Currently it works.

How to use it?

It’s a hackish solution: you clone the repository, put your project inside the root of the project, and run some magic. There is an example in the README, but the commented script is the following

git clone git@github.com:EduardoRFS/reason-mobile.git
cd reason-mobile/hello-reason

## it will install the host dependencies
esy install

## cursed node magic, don't ask
node ../generate/dist/cli.js android.arm64

## builds all the dependencies for host and target
## it's going to take a while, seriously
esy @android.arm64

## enter the patched esy shell
esy @android.arm64 not-esy-setup $SHELL

Inside this shell you can run the normal commands, like

## it will build for Android ARM64
dune build -x android.arm64

## binary located at
ls -lah $cur__target_dir/default.android.arm64/bin/hello.exe

Supported platforms

  • android.arm64
  • android.x86_64
  • ios.arm64
  • ios.simulator.x86_64
  • linux.musl.x86_64

Ok, so how does it work?

Mostly bad magic, and a lot of hacked-together shell script.

It reads the esy.lock generated by esy, extracts a lot of data using some low level commands from esy like esy ls-build and esy build-plan, duplicates every dependency adding a prefix to it, patches commands like dune build, adds some hand-made patches for broken dependencies, and adds a shell script wrapper to remove OCAMLLIB and OCAMLPATH from the environment, as these are problematic with cross compilation.

Then it generates a bunch of files inside .mocks and a “package.json” for the specific platform, so you can do esy @android.arm64. But that would still leave your environment broken, so it has another hack, esy @android.arm64 not-esy-setup <command>, which will execute commands in the patched environment.

Simple as that

Limitations

I tried all supported platforms from Linux and Mac. I have no idea if it works on Windows; my bet is that it will not, not even on Cygwin, but feel free to try.

And there will be some bugs; if you need help with it, feel free to contact me.

Future and possibilities

I started by talking about Revery; yeah, that also got made to work and will be another post.

We also need a proper solution, integrated into esy, ideally doing a lot of magic.

Maybe Reason React Native Native? You know, RRNN, maybe RNRN. It needs a better name, but it’s also something that I’m looking into.