This post investigates the paper *Evolvability* by computer scientist Leslie Valiant, which analyzes evolution and its limits through the lens of machine learning theory.

The first section gives an overview of Valiant’s now-famous Probably Approximately Correct (PAC) learning model; the material is derived from Kearns and Vazirani’s short textbook *An Introduction to Computational Learning Theory*. Valiant’s more recent notion of evolvability (explored in the second section) implies PAC learnability, but not conversely. In particular, the parity functions (i.e. the mod 2 linear functions) are PAC learnable, but provably not evolvable (under Valiant’s or any similar notion of evolvability).
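To make the parity functions concrete, here is a minimal sketch in Python. A parity function is determined by a subset S of the input coordinates and outputs the sum mod 2 (equivalently, the XOR) of the bits indexed by S. The function name and variables below are illustrative, not taken from Valiant's paper.

```python
def parity(S, x):
    """Return the parity (sum mod 2) of the bits of x indexed by S.

    S : a set of coordinate indices
    x : a list of 0/1 bits
    """
    return sum(x[i] for i in S) % 2

# Example: the parity function on coordinates {0, 2} of a 4-bit input.
S = {0, 2}
print(parity(S, [1, 0, 1, 0]))  # 1 XOR 1 = 0
print(parity(S, [1, 1, 0, 0]))  # 1 XOR 0 = 1
```

Each choice of S gives a different parity function, so over n-bit inputs there are 2^n of them; this is the class that turns out to be PAC learnable yet not evolvable.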

Throughout, I’ve included some fun pictures, produced by simple simulations in Python with matplotlib.