Haskell’s use of monads may be the first unreasonably-difficult-seeming aspect of the language which users will encounter. There is no shortage of monad tutorials online; I suspect that the only way to learn about monads is to write such a tutorial oneself.
But Haskell is purely functional at heart, and some kind of trick must be used to perform basic imperative-style tasks, including IO, error handling, and mutable state. The monad is Haskell’s mathematically elegant solution to imperative-style programming.
Leading Haskell researcher Simon Peyton Jones has cheekily claimed that Haskell’s organization around monads makes it the “world’s finest imperative programming language”. I won’t comment on this claim, but I will suggest that monads make Haskell the world’s most fascinating imperative programming language.
The focus of this post is the mathematics of monads, though there are plenty of code snippets along the way. I avoid the somewhat complex and opaque IO monad, in favor of exploring simpler monads (especially Maybe) more completely.
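To fix ideas before the mathematics begins, here is a minimal sketch (names like `safeDiv` and `example` are my own, not from any library) of how Maybe serves as an error-handling monad: a computation that can fail returns `Nothing`, and do-notation chains such computations in an imperative style, short-circuiting on the first failure.

```haskell
-- Division that fails with Nothing instead of throwing on a zero divisor.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- Chain two divisions in do-notation; if either step yields Nothing,
-- the whole block yields Nothing without running the rest.
example :: Int -> Int -> Int -> Maybe Int
example a b c = do
  q <- safeDiv a b
  safeDiv q c

main :: IO ()
main = do
  print (example 100 5 2)  -- Just 10
  print (example 100 0 2)  -- Nothing
```

The short-circuiting here is not special syntax for Maybe; it falls out of the monad's bind operation, which is the kind of structure the rest of this post examines.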