Haskell one-liners for search

On the road to functional programming enlightenment, I discovered two beautiful one-liners for depth-first and breadth-first search in Haskell. Here they are:

dfs, bfs :: (a -> [a]) -> a -> [a]
dfs succ start = start : concatMap (dfs succ) (succ start)
bfs succ start = concat $ takeWhile (not . null) $ iterate (concatMap succ) [start]

Both functions generate the list of states reachable from a starting state in a finite number of steps. The inputs to dfs and bfs are a “successors” function, which takes a state to the list of its “child” states, and the starting state; dfs lists each branch in full before moving on to the next, while bfs lists the states level by level.
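
For a quick sanity check, here is a hypothetical successor function (the name children and the tree are my own, purely for illustration) describing a complete binary tree on the nodes 1 through 15:

children :: Int -> [Int]
children n = if n >= 8 then [] else [2*n, 2*n + 1]

-- dfs children 1 == [1,2,4,8,9,5,10,11,3,6,12,13,7,14,15]
-- bfs children 1 == [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]

Thanks to laziness, both functions also work on infinite trees, though dfs may then never escape the first infinite branch.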

What consistently impresses me about Haskell is that algorithms can be expressed in code as concisely as in English, if only you can learn the language to the same level of native fluency. As a native English speaker who’s only been writing Haskell casually for a year, I puzzled over the two one-liners in this post for about an hour before getting them right!

What is a programming language?

This post is my highly condensed take on Abelson and Sussman’s classic Structure and Interpretation of Computer Programs (SICP) and Krishnamurthi’s Programming Languages: Application and Interpretation (PLAI), which I think do a nice (if incomplete) job of answering the question “what is a programming language?” by implementing several languages.

Just as in the reference texts, the example languages I’ll give are fragments of Lisp, implemented in Racket (a dialect of Lisp in the Scheme family). Lisp syntax is unusual in that it forces the programmer to explicitly write out parse trees for code, e.g. (+ 2 (* 3 4)) instead of 2 + 3 * 4.

The benefit of this is that I don’t have to write a parser just to experiment with languages: code written in Lisp-like syntax can be read literally as a data structure with the Lisp builtin quote (usually abbreviated with a single tick mark '). For example, '(+ 2 (* 3 4)) doesn’t evaluate to 14 but to a three-element list whose first element is the symbol +.

My goal in this post is to implement a programming language which can run the following two test programs with minimal syntactic revision.

The first test is a recursively defined factorial function; that is, I want our language to support recursion.

(define (fact n)
  (if (= n 0)
      1
      (* n (fact (- n 1)))))

The second test is Paul Graham’s accumulator function, specified in the appendix to his article Revenge of the Nerds. Graham thinks a good test of a language’s expressiveness is the length of its shortest accumulator implementation, and who am I to argue with him? The idea of an accumulator: calling (make-accumulator 100) returns a procedure, say a; evaluating (a 10) once then gives 110, evaluating (a 10) again gives 120, and evaluating (a -34.1) at that point gives 85.9. Different accumulator objects produced by make-accumulator should be independent.

(define (make-accumulator n)
  (lambda (x)
    (begin (set! n (+ n x))
           n)))

The above Racket implementation relies on the following three language features, all of which I intend to capture in the toy language of this post:

  • Functions are first-class values: they are valid inputs to and return values from other functions.
  • Mutable data: variable values can be reassigned after they’ve been defined.
  • Lexical variables: I want the ability to both read and write to intermediate scopes.1
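
As an aside, the same accumulator can be written in Haskell, which makes the reliance on these three features explicit. The sketch below is my own (makeAccumulator is a hypothetical name, not part of the toy language), using an IORef as the mutable cell: the returned closure is a first-class value, the IORef gives mutable data, and the cell is read and written through the enclosing lexical scope.

import Data.IORef

-- Sketch of Graham's accumulator in Haskell: makeAccumulator returns a
-- closure that reads and writes the IORef it captures lexically.
makeAccumulator :: Double -> IO (Double -> IO Double)
makeAccumulator n = do
  ref <- newIORef n        -- one mutable cell per accumulator
  return $ \x -> do
    modifyIORef ref (+ x)  -- mutate the captured variable
    readIORef ref          -- return the new running total

-- ghci> a <- makeAccumulator 100
-- ghci> a 10       -- 110.0
-- ghci> a 10       -- 120.0
-- ghci> a (-34.1)  -- 85.9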

With our goal in mind, we start by defining a very simple little programming language which, while Turing complete, isn’t sufficiently powerful to easily do what we want.

What is a monad?

This post is my attempt to come to an understanding of monads as they are relevant to the Haskell programming language.

Haskell’s use of monads may be the first unreasonably-difficult-seeming aspect of the language that users encounter. There is no shortage of monad tutorials online; I suspect that the only way to learn about monads is to write such a tutorial oneself.

Haskell is purely functional at heart, so tricks of some kind must be used to perform basic imperative-style tasks, including IO, error handling, and mutable state. The monad is Haskell’s mathematically elegant solution to imperative-style programming.

Leading Haskell researcher Simon Peyton Jones has cheekily claimed that Haskell’s organization around monads makes it the “world’s finest imperative programming language”. I won’t comment on this claim, but I will suggest that monads make Haskell the world’s most fascinating imperative programming language.

The focus of this post is the mathematics of monads, though there are plenty of code snippets along the way. I avoid the somewhat complex and opaque IO monad in favor of exploring simpler monads (especially List and Maybe) more completely.
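
As a preview, here is the Maybe monad doing error handling (safeDiv is my own toy example, not from any library): Maybe values represent computations that may fail, and chaining them with >>= short-circuits at the first failure, which is precisely the imperative pattern of bailing out on error.

safeDiv :: Double -> Double -> Maybe Double
safeDiv _ 0 = Nothing      -- division by zero fails
safeDiv x y = Just (x / y)

-- Chaining with (>>=): the first Nothing short-circuits the rest.
-- safeDiv 12 4 >>= safeDiv 9  ==  Just 3.0
-- safeDiv 12 0 >>= safeDiv 9  ==  Nothing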
