What is a programming language?

This post is my highly condensed take on Abelson and Sussman’s classic Structure and Interpretation of Computer Programs (SICP) and Krishnamurthi’s Programming Languages: Application and Interpretation (PLAI), which I think do a nice (if incomplete) job of answering the question “what is a programming language?” by implementing several of them.

Just as in the reference texts, the example languages I’ll give are fragments of Lisp, implemented in Racket (a dialect of Lisp in the Scheme family). Lisp syntax is unusual in that it forces the programmer to explicitly write out parse trees for code, e.g. (+ 2 (* 3 4)) instead of 2 + 3 * 4.

The benefit of this is that I don’t have to write a parser just to experiment with languages: code written in Lisp-like syntax can be read literally as a data structure with the Lisp builtin quote (usually abbreviated with a single tick mark ').

My goal in this post is to implement a programming language which can run the following two test programs with minimal syntactic revision.

The first test is a recursively defined factorial function; that is, I want our language to support recursion.

(define (fact n)
  (if (= n 0)
      1
      (* n (fact (- n 1)))))

The second test is Paul Graham’s accumulator function, specified in the appendix to his article Revenge of the Nerds. Graham thinks a good test of a language’s expressiveness is the length of its shortest accumulator implementation, and who am I to argue with him? The idea of an accumulator: one calls (make-accumulator 100), assigning the result to a variable a; then a is a procedure, and evaluating (a 10) once gives 110, evaluating (a 10) again gives 120, and evaluating (a -34.1) at this point gives 85.9. Different accumulator objects produced by make-accumulator should be independent.

(define (make-accumulator n)
  (lambda (x)
    (begin (set! n (+ n x))
           n)))

The above Racket implementation relies on the following three language features, all of which I intend to capture in the toy language of this post:

  • Functions are first-class values: they are valid inputs to and return values from other functions.
  • Mutable data: variable values can be reassigned after they’ve been defined.
  • Lexical variables: I want the ability to both read and write to intermediate scopes.1
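All three features are visible in a Python 3 rendering of the accumulator (my own sketch, using the nonlocal keyword discussed in footnote 1):

```python
def make_accumulator(n):
    def accumulate(x):
        nonlocal n          # write to n in the enclosing (intermediate) scope
        n += x              # mutate the captured variable
        return n
    return accumulate       # functions are first-class return values

a = make_accumulator(100)   # a(10) gives 110, then a(10) gives 120
```

Separate accumulators produced this way are independent, since each call to make_accumulator creates a fresh binding for n.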

With our goal in mind, we start by defining a very simple little programming language which, while Turing complete, isn’t sufficiently powerful to easily do what we want.

1. The Littlest Lambda

Our first language is quite a small piece of Lisp: basically the lambda calculus with some integer arithmetic primitives and a very simple conditional. I’ll call it Little Lambda, as in the popular children’s rhyme “Curry had a little lambda”.

The real lambda calculus achieves true minimalism by dispensing with everything but variable symbols, (single-variable) functions, and applications of functions to arguments. Integers and the rest can be simulated, but it’s impractical. (For one thing, the Church numeral of a number takes linear space to write down, whereas binary expansions take only logarithmic space.)
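To make the space claim concrete, here is a hypothetical Python sketch of Church numerals (not part of the toy language): the numeral for n applies a function n times, so writing it out literally requires n nested applications.

```python
def church(n):
    # Church numeral n: a function applying f to its argument n times.
    if n == 0:
        return lambda f: lambda x: x
    return lambda f: lambda x: f(church(n - 1)(f)(x))

def unchurch(c):
    # Recover an ordinary integer by counting applications of (+1).
    return c(lambda k: k + 1)(0)
```

By contrast, the binary numeral for n takes only about log2(n) digits.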

The syntax of our slightly larger language is specified in the following definition of its expressions (in Backus-Naur form):

<expr> ::= <symbol>
         | <number>
         | (+ <expr> <expr>)
         | (* <expr> <expr>)
         | (if0 <expr> <expr> <expr>)
         | (lambda (<symbol>) <expr>)
         | (<expr> <expr>)

where symbols and numbers are literals in the underlying Racket.

The form if0 is intended as a conditional, which will evaluate a test expression and check whether the result equals 0 to decide which of the other two expressions to evaluate.

As in Racket, a form like (lambda (x) (* x x)) is intended as a single-variable (anonymous) function taking the variable x to its square (compare with the mathematical notation x ↦ x²).

The final form is intended as the application of a function to a value; for now, only numbers and functions will be acceptable values. This could proceed by substituting the value for all occurrences of the function’s variable in its body… but actually you only want to substitute the value for free occurrences of the variable, and this is slightly tricky.

The easy way to sidestep this headache is to evaluate expressions relative to environments to keep track of values which may be needed in the future. An environment is a mapping from variable symbols to values, which we think of as a collection of bindings. The default environment is the empty mapping.2

Variables are bound as function applications are evaluated. Consider the expression

(lambda (x)
  (lambda (x)
    (* x x)))

representing the mathematical function which takes any value to the squaring function; if you apply the above to 3 and the result to 5, you should get 25, not 9.

Rather than mucking around with which x’s ought to be substituted when, it’s easier to bind x to 3 in the first application, bind x to 5 in the second (shadowing the first binding), then evaluate (* x x) in an environment where x is 5.

Usually, though, bindings are useful and aren’t immediately shadowed (just as most substitutions wouldn’t induce variable conflicts).

For example, we didn’t explicitly build multi-variable functions into the language, but they can be simulated via currying. A sum of squares function is represented by the expression

(lambda (x)
  (lambda (y)
    (+ (* x x) (* y y))))

If x is bound to some value and the result (a function of y) is to be used elsewhere, it will be the responsibility of this function object to “remember” the value of x until it is needed.

Explicitly, to evaluate a lambda expression, we package it up with a reference to the environment in which it was evaluated. The technical term for a function which “remembers” values in this way is a closure. To evaluate the application of a closure to some value, extend the closure’s packaged-up environment with a binding of the closure’s parameter symbol to the given value, and evaluate the closure’s body in that environment.

The following Racket code for a Little Lambda interpreter is fairly straightforward. Here environments are implemented as “association lists”, i.e. lists of symbol-value pairs (as heterogeneous length-2 lists), using assoc for lookups. Efficiency could be improved by using hash tables or (if one prefers functional style) self-balancing binary search trees in the style of Haskell’s Data.Map module.

Finally, I use Racket’s pattern matching library to write cleaner code than is possible with the old-school car/cdr selectors, though they are certainly capable of expressing the same ideas.
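To give a sense of the evaluator’s shape, here is a rough Python analogue (a sketch with my own names, not the post’s Racket code; expressions are nested tuples rather than quoted Lisp data):

```python
def lookup(env, sym):
    # env is an association list of (symbol, value) pairs; the most
    # recent binding comes first, which implements shadowing.
    for s, v in env:
        if s == sym:
            return v
    raise NameError(sym)

def evaluate(expr, env):
    if isinstance(expr, str):                    # <symbol>
        return lookup(env, expr)
    if isinstance(expr, (int, float)):           # <number>
        return expr
    op = expr[0]
    if op == '+':
        return evaluate(expr[1], env) + evaluate(expr[2], env)
    if op == '*':
        return evaluate(expr[1], env) * evaluate(expr[2], env)
    if op == 'if0':
        branch = expr[2] if evaluate(expr[1], env) == 0 else expr[3]
        return evaluate(branch, env)
    if op == 'lambda':
        # A closure packages the parameter, body, and defining environment.
        return ('closure', expr[1][0], expr[2], env)
    # Otherwise, an application (<expr> <expr>): evaluate both parts, then
    # evaluate the closure's body in its environment extended by the binding.
    _, param, body, closure_env = evaluate(expr[0], env)
    arg = evaluate(expr[1], env)
    return evaluate(body, [(param, arg)] + closure_env)
```

For example, evaluate(('+', 2, ('*', 3, 4)), []) gives 14, and the double-lambda squaring expression above gives 25 when applied to 3 and then 5.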

One runs a program in Little Lambda by quoting it and calling eval with the empty environment in Racket, e.g.

(eval '(+ 2 (* 3 4)) '())

gives 14.

2. Recursion in Little Lambda

I’ve never really liked the argument that any language with lambda is “obviously” Turing complete just because it contains the lambda calculus. The argument strikes me as incomplete because the semantics of the lambda calculus are actually specified through equivalence rules for lambda forms, not through an evaluation process. Our little language has no tools to tell when two lambda expressions are α- or η-equivalent, so it’s not obvious to me that the language does technically contain the lambda calculus.

Instead, I claim that any of Kleene’s μ-recursive partial functions can be computed with a (curried) lambda expression in Little Lambda (though I essentially use the proof that the lambda calculus is Turing complete).

For this, it’s sufficient to show that one can emulate recursive function definitions in Little Lambda (as this subsumes both primitive recursion and unbounded minimization in the definition of μ-recursive functions). We’ll demonstrate with a definition of the factorial function, though the technique can be used in arbitrary recursive definitions.

The naive (incorrect) approach to this is as follows: symbols are bound to values by function application, so that’s what we’ll use for function definitions. In fact, Racket’s let keyword (which binds a variable to a value in a computation) is just a bit of syntactic sugar for a function application:

(let ([var val]) body)

means roughly the same thing as

((lambda (var) body) val)

And yet typing

(let ([fact (lambda (n)
              (if (= n 0)
                  1
                  (* n (fact (- n 1)))))])
  (fact 100))

into Racket gives an unbound identifier error: the recursive call in the body of fact can’t find a value for fact! (Running the translation in Little Lambda gives a cryptic error message, where assoc fails and returns false, resulting in a Racket error when second tries to get the relevant part of a pair that wasn’t found.) This is because Racket wants to evaluate

(lambda (n)
  (if (= n 0)
      1
      (* n (fact (- n 1)))))

first in the empty environment, and then bind the result to the symbol fact. Obviously, this doesn’t work! The closure created has no binding for the symbol fact. Racket does provide the alternate binding form letrec, which, as the name suggests, is powerful enough to support recursion. We will implement letrec in the next section.

The trick to recovering recursive function definitions without letrec is based on the so-called Y-combinator from the lambda calculus.

Define factorial in “open recursive” style. In Little Lambda, this means we first create a curried function with variables f and n, replacing any mention of fact in the body of the function with f applied to itself, (f f). It looks like this:

(lambda (f)
  (lambda (n)
    (if0 n 1 (* n ((f f) (+ n -1))))))

Now let fact be the result of applying that to itself (either by copy and paste, or with a higher-order function), and then any (f f) in the body does the same thing as fact! The resulting function takes a single argument n, and indeed computes factorials:

(eval '(((lambda (f)
           (lambda (n)
             (if0 n 1 (* n ((f f) (+ n -1))))))
         (lambda (f)
           (lambda (n)
             (if0 n 1 (* n ((f f) (+ n -1)))))))
        5)
      '())

quickly and correctly gives 120.
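The same self-application trick works in any language with first-class functions; for instance, a Python sketch:

```python
# Open-recursive factorial: the "recursive" call goes through (f f).
open_fact = lambda f: lambda n: 1 if n == 0 else n * f(f)(n - 1)

# Tie the knot by applying the function to itself.
fact = open_fact(open_fact)
```

Then fact(5) computes 120 with no recursive definition in sight.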

Tangentially, I want to point out that this “double substitution” trick is almost exactly what Kurt Gödel used to construct self-referential sentences in the proof of his celebrated incompleteness theorems. Basically: once a provability predicate has been defined, one can define a formula in x which says “The result of substituting x into x is unprovable”, and then substitute that formula into itself to get a sentence (with no free variables) asserting its own unprovability.3

Still… we’re kidding ourselves if we consider the above a “minimal syntactic rewrite” of the first test program. We should not have to perform such gymnastics just to achieve recursion!

3. Mutation and recursion

Little Lambda has no way to reassign the value of a variable, so there is no hope of easily writing Graham’s accumulator program in it. And as we just saw, recursion is possible, but awkward.

But we can kill two birds with one stone by implementing set!. In Racket, the code

(letrec ([var val]) body)

is syntactic sugar for something like4

(let ([var 'undefined])
  (begin (set! var val)
         body))
Continuing with the factorial example, Racket evaluates

(letrec ([fact (lambda (n)
                 (if (= n 0)
                     1
                     (* n (fact (- n 1)))))])
  (fact 100))

by:

  1. Binding fact to undefined in some environment env.
  2. Constructing a closure with parameter n, body
    (if (= n 0)
        1
        (* n (fact (- n 1))))

    and environment env.

  3. Changing the binding of fact in env to the closure. Thus the closure’s environment now has a reference to the closure itself, so recursive calls to fact can find the right thing.
  4. Evaluating (fact 100) in the global environment.

One way to handle step (3) is to use mutable environments, e.g. based on hash tables. This is the approach taken in both SICP and Peter Norvig’s elegant article (How to Write a (Lisp) Interpreter (in Python)).

Instead, we follow Krishnamurthi’s functional-style approach in PLAI which doesn’t require mutation in the base language. The trick is to pass the explicit state as a parameter (and additional return value) step-by-step through the evaluation process.

It works something like this:

  1. A store is a mapping from locations, represented by integers, to values. An environment is now a mapping from symbols to locations.
  2. Evaluation takes place relative to an environment and a store. Variables are looked up by composing the symbol-to-location and location-to-value lookups.
  3. The return type of eval is now a pair consisting of a value and a store. When expressions are evaluated in sequence, the store returned by one call to eval is passed as a parameter to the next.
  4. The store being passed around is extended with new location-to-value bindings when function applications are evaluated (at this step, environments are also extended with name-to-location bindings). When set! expressions are evaluated, the appropriate location must be looked up in the current environment, and a new store is created with that location bound to a new value.
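The four points above can be sketched in Python for a tiny fragment with let, begin, and set! (my own names throughout; the real interpreter is written in Racket and covers much more of the language):

```python
def sp_eval(expr, env, store):
    """Store-passing evaluation: return a (value, new_store) pair.
    env maps symbols to locations; store maps locations to values."""
    if isinstance(expr, (int, float)):
        return expr, store
    if isinstance(expr, str):                      # variable reference:
        return store[env[expr]], store             # symbol -> location -> value
    op = expr[0]
    if op == 'let':                                # ('let', [(var, expr), ...], body)
        env2 = dict(env)
        for var, val_expr in expr[1]:
            val, store = sp_eval(val_expr, env, store)
            loc = len(store)                       # fresh location
            env2[var] = loc
            store = {**store, loc: val}
        return sp_eval(expr[2], env2, store)
    if op == 'begin':                              # ('begin', e1, e2, ...)
        val = None
        for sub in expr[1:]:
            val, store = sp_eval(sub, env, store)  # thread the store through
        return val, store
    if op == 'set!':                               # ('set!', var, expr)
        val, store = sp_eval(expr[2], env, store)
        # Build a new store with the variable's location rebound; no mutation.
        return val, {**store, env[expr[1]]: val}
    raise ValueError(expr)
```

Note that set! never mutates anything: it returns a functionally updated copy of the store, which subsequent evaluation steps receive as a parameter.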

As an instructive example, consider evaluation of

(let ([x 2] [y 3])
  (set! x (begin
            (set! y 5)
            y))
  x)
in an empty environment and store.

  1. Evaluating the function application implicit in the let expression starts by finding fresh locations (say, 0 and 1, because the store is empty) for the values 2 and 3. The environment is also extended by bindings of x to location 0 and y to location 1. Henceforth the environment used won’t change, but this store (call it sto1) will be replaced in the two set!s.
  2. The (set! y 5) portion must be evaluated before the (set! x ...) portion. This is done by looking up the location of y in env, finding 1, then creating a store sto2 identical to sto1 except that location 1 is now mapped to the value 5.
  3. Then the second expression (namely y) of (begin (set! y 5) y) is evaluated in sto2, yielding 5 (and sto2).
  4. Now the (set! x ...) can be evaluated relative to sto2. This creates a new sto3 where location 0 is mapped to the value 5.
  5. Finally x is evaluated relative to sto3, returning 5 (and sto3).

One can similarly trace what happens on the level of environments and stores in the letrec/factorial example or others.

I’ve written an interpreter that implements a larger subset of Racket (including set! and letrec) in this store-passing style, and made sure that it did indeed pass the above nested set! test as well as the two test programs set out in the introduction (replacing define with letrec). It’s too long for an embedded Gist, so I created a repository for it on GitHub proper, which includes translations of the test programs.

I don’t have a cute name for that language yet.


  1. Graham points out that in 2002, even the hip, new Python programming language lacked this feature and had to resort to tricks to bring the variable n into local scope to change it. The nonlocal keyword in Python 3 patches this particular limitation. I like Python a lot, but even nonlocal feels a bit like a hack compared to the clean define/set! distinction in Scheme and its descendants.
  2. Additionally, substitution semantics will not be appropriate when we expand the language to allow mutation of variable values, as a value could change between the time substitution would occur and the time the value is actually needed. It would not do to evaluate
    ((lambda (x)
       (begin (set! x 1) x))
     0)

    as either (begin (set! 0 1) 0) or (begin (set! x 1) 0); even if one of those expressions didn’t raise an error, the return value would be 0 rather than 1. So while the notion of an environment is just a shortcut in implementing Little Lambda, it will be a necessity later.

  3. The other, technical, ingredient in the proof is the Gödel numbering which allows one to assign numbers to formulas, so that formulas (and proofs) can be reasoned about within arithmetic. Gödel numbering is somewhat analogous to quotation in Lisp, reading Lisp code [arithmetic formulas] as Lisp data [arithmetic terms, i.e. numbers].
  4. In Racket, letrec actually binds variables to the singleton #<undefined> before resetting them. This is somewhat relevant because certain nonsensical letrec expressions leak this temporary value, for instance Racket evaluates
    (letrec ([x x]) x)

    to #<undefined>. Generally, we prefer our programs to crash ASAP if we try to do anything non-trivial with the value of such an expression. Thus a minimally useful value like #<undefined> is appropriate, whereas the symbol undefined or, say, the number 0 has a greater chance of going unnoticed if accidentally leaked into a computation.
