Consider the classic problem: How many ways can one make change for one dollar using half-dollars, quarters, dimes, nickels, and pennies? Or more generally, how many ways can one make change for a given amount using arbitrary (positive integer) denominations?

This post chronicles a series of incremental improvements to solutions to this problem. In the first section, we attack the problem with dynamic programming in Python, and we’re able to count the ways to change quite large amounts of money (eventually up to about $100M for the given set of five coins). The second section explores a nice method for deriving closed form solutions, due to Lee Newberg, and the third is a synthesis of the previous two, automating the closed form derivation in the general setting.

**1. From 1 to 100,000,000**

It’s not hard to produce a recursive function definition which computes the answer to the coin problem. To count all the ways of making change, we just need to condition on whether or not at least one coin of the first denomination will be used (and then add these counts together). If one will be used, then we must change the original amount minus the first denomination. If not, then we must change the whole original amount, but with one fewer denomination. In either case, we’ve made progress toward obvious base cases, so the recursion will terminate.

```
def make_change(amt, denoms):
    if amt < 0: return 0
    if amt == 0: return 1
    if not denoms: return 0
    return make_change(amt - denoms[0], denoms) + make_change(amt, denoms[1:])
```

This function is sufficient for changing a dollar as originally described: `make_change(100, (50,25,10,5,1))` quickly evaluates to `292`.

Experimentation shows that this function really starts slowing down around five dollars, which is unacceptable. If `d` is the number of denominations, then the number of recursive calls that `make_change` must make above satisfies a recurrence something like

`f(A, d) = f(A-1, d) + f(A, d-1).`

Note that binomial coefficients satisfy the similar recurrence

`f(A, d) = f(A-1, d) + f(A-1, d-1),`

so that the running time of `make_change` should grow at least as fast (up to adjustment for base cases). In short, for `d=5` as above, the runtime of `make_change` seems to be proportional to `A**4`.

The easiest way to transform the above into an `O(A*d)`-time algorithm is memoization. Essentially, memoization transforms a (pure) function into a lookup table whose entries are evaluated *lazily*, in that any entry is evaluated at most once, and only if that entry is needed.

The coin changing problem has the "optimal substructure" property: a solution to a problem instance of size `(A,d)` can easily (i.e. in constant time) be reconstructed from solutions to smaller subproblems. Since a problem instance of size `(A,d)` has only `O(A*d)` subproblems, the memoized version runs in `O(A*d)` time (and space), assuming constant-time table operations.

The easiest way to handle memoization in Python is to define the higher-order function

```
def memo(f):
    cache = {}
    def _f(*args):
        if args not in cache:
            cache[args] = f(*args)
        return cache[args]
    return _f
```

and to set `make_change = memo(make_change)` (or precede the definition of `make_change` with `@memo`, Python's syntactic sugar for such "decorators").^{1}

By increasing Python's recursion limit (`sys.setrecursionlimit`), we're able to change reasonably large amounts of money before running out of memory. On my modest machine (2GB RAM), this happens a little after $5000 (with 41682501983425001 ways).

Memory usage can be eased somewhat by building the lookup table inside a loop rather than with function calls, but the table is still of size proportional to `A*d`. A better approach taking "only" `O(A)` space is to build a list/array of the number of ways to make `a` cents for each `a < A` and to add the `d` coins in one at a time.^{2} But the issue is still that we run out of memory before time constraints become noticeable (now changing $100K in 10 seconds, but crashing for $1M).
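In code, the one-coin-at-a-time table might look like this sketch (the function name is mine):

```python
def make_change_table(amt, denoms):
    # ways[a] holds the number of ways to make a cents using only the
    # denominations folded in so far; we add the coins in one at a time.
    ways = [1] + [0] * amt
    for coin in denoms:
        for a in range(coin, amt + 1):
            ways[a] += ways[a - coin]
    return ways[amt]
```

This runs in `O(A*d)` time and `O(A)` space, matching the analysis above.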

To really bring down space usage, just note that the recursion only requires bounded look-backs. That is, if `m` is the size of the largest denomination, then to solve the problem at `(A, d)`, we only need to look back as far as `(A-m, d)` (and `(A, d-1)`); the rest of the table can be "forgotten". This brings space usage down to `O(d*m)`, independent of `A`.

The rewrite looks like this.
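The original listing isn't reproduced here; a minimal sketch of the idea, with a `window` of `wsize = m + 1` rows indexed modulo `wsize` (the function name is mine), could look like:

```python
def make_change_window(amt, denoms):
    # window[a % wsize][j] = ways to make a cents from the first j denominations.
    # Only the last wsize = m+1 rows are live at any time, so space is O(d*m).
    d = len(denoms)
    wsize = max(denoms) + 1
    window = [[0 for _ in range(d + 1)] for _ in range(wsize)]
    for a in range(amt + 1):
        row = window[a % wsize]
        row[0] = 1 if a == 0 else 0          # base case: zero denominations
        for j in range(1, d + 1):
            row[j] = row[j - 1]              # ways that skip denomination j
            if a >= denoms[j - 1]:           # ways that use at least one of it
                row[j] += window[(a - denoms[j - 1]) % wsize][j]
    return window[amt % wsize][d]
```

Since every look-back is at most `m` rows deep, the slot read modulo `wsize` always still holds the value it should.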

With the memory problem solved, I ran the above overnight and got data for powers of 10 up to $100M, with `denoms = (50,25,10,5,1)` as usual.^{3}
```
1: 292
10: 801451
100: 6794128501
1K: 66793412685001
10K: 666793341266850001
100K: 6666793334126668500001
1M: 66666793333412666685000001
10M: 666666793333341266666850000001
100M: 6666666793333334126666668500000001
```

Starting at $100, a clear pattern emerges. This was actually the first indication to me that there might be a nice closed form solution to the problem at hand with the specified denominations, at least for whole dollar amounts. Scaling the input by 10 scales the output by roughly 10000, suggesting a quartic relation between the two. Low-degree polynomial interpolation is a fairly straightforward linear algebra problem, and it's one that Wolfram Alpha can already solve, so I felt no need to reinvent the wheel here. Given 4+1 arbitrary data points from the table above, Wolfram Alpha suggests

`(200*n**4 + 380*n**3 + 238*n**2 + 55*n + 3)/3`

for the number of ways to change `n` dollars.

This agrees with the other data points in my table, and with a few other arbitrarily chosen whole dollar amounts (e.g. 9384) I tested against.

At this point, I'm quite confident that I've got the closed form solution in hand. I decided to consult the internet, and I was pleased with what I found.
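For the curious, the interpolation step can also be reproduced in Python with exact rational arithmetic (my own sketch, not part of the original workflow):

```python
from fractions import Fraction

def interpolate(points):
    # Lagrange interpolation: the unique polynomial of degree len(points)-1
    # through the given (x, y) points, returned as a callable.
    def p(x):
        total = Fraction(0)
        for i, (xi, yi) in enumerate(points):
            term = Fraction(yi)
            for xj, _ in points[:i] + points[i + 1:]:
                term *= Fraction(x - xj, xi - xj)
            total += term
        return total
    return p

# 4+1 data points from the table (amounts in dollars)
p = interpolate([(1, 292), (10, 801451), (100, 6794128501),
                 (1000, 66793412685001), (10000, 666793341266850001)])
```

The quartic through these five points then reproduces the remaining rows of the table, e.g. `p(100000)` gives the $100K entry.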

**2. To infinity**

The *really* efficient algorithm is of course to just evaluate a closed form solution. The content for this section is derived from an argument by Lee Newberg in a guest post to Frank Morgan's math chat. In the post, Newberg solves the problem of making change for whole dollars using the denominations 1, 5, 10, 25, 50, *and* the 100-cent coin/bill. In retrospect, this is probably the more reasonable problem to consider (at least for large amounts), but it's not the one I set out to solve, and adapting Newberg's 6-denomination approach to the 5-denomination version at hand forces me to work through the details omitted for brevity in his writeup.

Newberg's solution is organized around generating functions: formal power series whose coefficients relate to whatever problem we're trying to solve. While they are not strictly necessary here, effectively using generating functions seems like a skill worth developing.

The simplest example I know of a generating function in a combinatorial argument is the binomial theorem,

`(1+x)**n = C(n,0) + C(n,1)*x + ... + C(n,n)*x**n.`

Before simplification, each of the `2**n` monomial terms represented in the expansion of `(1+x)**n` corresponds to a way of choosing a subset of `{1, ..., n}` and "tagging" the chosen elements with `x` and the others with `1`. Collecting like terms (corresponding to sets of the same size), we see that binomial coefficients are the same thing as "`n` choose `k`", and insights about one are insights about the other.

In similar fashion, the ways to make `A` cents out of denominations 1, 5, 10, 25, and 50 correspond exactly to the ways to choose a multiple of each denomination such that the sum is `A`. Thus this number is equal to the `A`-th coefficient in

`(1 + x + x**2 + ...) * (1 + x**5 + x**10 + ...) * (1 + x**10 + x**20 + ...) * (1 + x**25 + x**50 + ...) * (1 + x**50 + x**100 + ...).`

So to get this coefficient, we just simplify the above to

`1/((1-x) * (1-x**5) * (1-x**10) * (1-x**25) * (1-x**50)),`

take the `A`-th derivative, evaluate at `0`, and divide by `A!`, right?

Well, it's not actually obvious how to take these higher-order derivatives, but Newberg has a more subtle strategy.

The trick is to break the problem into two pieces:

- How many ways are there to make `j` dollars using only whole dollar amounts of each coin?
- How many ways are there to make `k` dollars using strictly less than one dollar's worth of each coin?

Since the individual denominations each divide 100, answers to the above two questions can be combined (convolved) to get the total number of ways to make whole dollars. Explicitly, if `a_j` is the answer to (1) and `b_k` is the answer to (2), then the number of ways to make `n` whole dollars is

`a_n*b_0 + a_(n-1)*b_1 + ... + a_0*b_n.`

Note, of course, that `b_k = 0` for `k > 4`, which simplifies this combination step considerably. (Equivalent to the above convolution: the product of generating functions for subproblems (1) and (2) is a generating function for the whole problem.)

Subproblem (1) is easy to solve with a generating function. The particulars of the denominations are irrelevant, except that they divide one dollar evenly, so we're asking for the number of ways to partition the set `{1, ..., j}` into 5 contiguous (possibly empty) subsets. This is the `j`-th coefficient of

`1/(1-x)**5.`

We can easily find all derivatives of *this* function. Using

`(j-th derivative of (1-x)**-5) = ((j+4)!/4!) * (1-x)**-(j+5),`

we get `(j+4)!/4!` upon evaluating at `0`. Dividing by `j!`, we get the answer to (1) as

`a_j = (j+4)!/(4! * j!) = C(j+4, 4).`

In general, the number of ways to partition `{1, ..., j}` into `d` contiguous subsets is `C(j+d-1, d-1)`, and this identity can be proved the same way. (Exercise: give a purely combinatorial proof.)
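A quick brute-force sanity check of this identity (my own sketch; `math.comb` assumes Python 3.8+):

```python
from math import comb
from itertools import product

# Partitioning {1, ..., j} into 5 contiguous (possibly empty) subsets is the
# same as writing j as an ordered sum of 5 nonnegative integers; fixing the
# first four parts (each at most j) determines the fifth.
def partitions5(j):
    return sum(1 for parts in product(range(j + 1), repeat=4)
               if sum(parts) <= j)

assert all(partitions5(j) == comb(j + 4, 4) for j in range(12))
```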

Subproblem (2) can also be solved with a generating function... and a computer algebra system. (Alternately/equivalently, it's a small enough problem to brute force with a short script.) The argument goes that `b_k` is the `(100k)`-th coefficient of

`(1 + x + ... + x**99) * (1 + x**5 + ... + x**95) * (1 + x**10 + ... + x**90) * (1 + x**25 + x**50 + x**75) * (1 + x**50).`

Since

`1 + x + ... + x**99 = (1 - x**100)/(1 - x),`

and similarly for the other factors, the product above equals

`(1 - x**100)**5 / ((1-x) * (1-x**5) * (1-x**10) * (1-x**25) * (1-x**50)).`

The main benefit of this rewrite is that the quotient is easier to type into the CAS! Wolfram Alpha unfortunately times out on me, but Mathics can fully expand the 409-degree polynomial, from which we read off `b_0 = 1`, `b_1 = 287`, `b_2 = 985`, `b_3 = 325`, and `b_4 = 2`. (These are the same numbers Newberg got with the 100-cent coin included, which should come as no surprise, because that coin clearly can't contribute here.)
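The expansion can also be reproduced in plain Python by multiplying coefficient lists; this is my own sketch, not the original Mathics session:

```python
def poly_mul(p, q):
    # multiply two polynomials given as coefficient lists
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

# (1 + x + ... + x**99)(1 + x**5 + ... + x**95)...(1 + x**50):
# each factor keeps coin c's contribution strictly under one dollar
poly = [1]
for c in (1, 5, 10, 25, 50):
    poly = poly_mul(poly, [1 if i % c == 0 else 0 for i in range(100)])

b = [poly[100 * k] for k in range(5)]  # coefficients of x**0, x**100, ..., x**400
```

Nothing survives past degree `99 + 95 + 90 + 75 + 50 = 409`, which is why `b_k` vanishes for `k > 4`.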

Finally then, the number of ways to change `n` dollars with our five coins is the convolution of `a` and `b`,

`C(n+4,4) + 287*C(n+3,4) + 985*C(n+2,4) + 325*C(n+1,4) + 2*C(n,4)`

(where `C(m,4) = 0` for `m < 4`). Again relying on Mathics to perform the simplification (Wolfram Alpha apparently doesn't support intermediate function definitions), our final answer is

`(200*n**4 + 380*n**3 + 238*n**2 + 55*n + 3)/3,`

exactly as predicted in section 1.
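As a cross-check, the convolution is easy to evaluate directly in Python (my sketch; `math.comb` requires Python 3.8+, and `(1, 287, 985, 325, 2)` are the subproblem (2) coefficients):

```python
from math import comb

B = (1, 287, 985, 325, 2)  # b_0, ..., b_4 from subproblem (2)

def change_dollars(n):
    # convolution of b with a_j = C(j+4, 4); terms with k > n are zero anyway
    return sum(bk * comb(n - k + 4, 4) for k, bk in enumerate(B) if n >= k)

print(change_dollars(1))  # → 292
```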

**3. And beyond!**

Newberg's approach can be adapted to the general coin changing problem as well. In the general case, any common multiple of the available denominations can take the role of the dollar in the preceding calculation, allowing the derivation of closed form solutions when the number of cents to be changed is divisible by the LCM of the denominations. Further modification of subproblem (2) allows for the derivation of closed form solutions for numbers of cents of a given *residue* mod the LCM. Thus we can in principle solve the general coin changing problem with an algorithm whose running time does not depend on the total amount of money to change.

The trade-off is that "compiling" such a closed form solution becomes quite costly as the number of denominations `d` and their LCM `L` grow: subproblem (2) involves the generation and collection of `O(L**d)` monomials!

The situation can be improved *somewhat* via memoization/dynamic programming. Generation of monomials is incidental to subproblem (2); we really just want to calculate the number of ways to make certain amounts of money using less than `L` worth of each denomination. This suggests the following function definition.

```
@memo
def make_change_bdd(amt, denoms, bound, used):
    if amt < 0 or used >= bound: return 0
    if amt == 0: return 1
    if not denoms: return 0
    return (make_change_bdd(amt, denoms[1:], bound, 0) +
            make_change_bdd(amt - denoms[0], denoms, bound, used + denoms[0]))
```

The parameter `used` keeps track of the value used by the first coin in `denoms`, and is necessary to preserve "optimal substructure". Post memoization, `make_change_bdd(A, denoms, b, 0)` can be easily reconstructed from its `O(A*d*b)` subproblems. We need to evaluate the function for `A` up to `O(L*d)` (the degree of the polynomial in subproblem (2)) and with `b = L`. Thus we can at least solve (2) in `O((L*d)**2)` time.

Putting the pieces together, we might get something like the following. Experimentation confirms that the "compilation" step is still quite costly (e.g. 40 seconds for `denoms = (17,19,23)` at 2.2 GHz), but the "executable" function `changer` produced (and saved via memoization) is as fast as you could ever want.
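The original gist isn't reproduced here, but a compact sketch of the pipeline, with a bottom-up table in place of the memoized `make_change_bdd` and my own `make_changer` name, might look like:

```python
from functools import reduce
from math import comb, gcd

def make_changer(denoms):
    # "Compilation": count, for every amount up to d*(L-1), the ways to make it
    # using strictly less than L cents' worth of each denomination.
    d = len(denoms)
    L = reduce(lambda a, b: a * b // gcd(a, b), denoms)  # lcm of denominations
    top = d * (L - 1)
    bounded = [1] + [0] * top
    for c in denoms:
        new = [0] * (top + 1)
        for used in range(0, L, c):          # value of coin c used: 0, c, ..., < L
            for a in range(used, top + 1):
                new[a] += bounded[a - used]
        bounded = new

    # "Executable": convolve with C(j+d-1, d-1), the count for whole multiples
    # of L (each denomination divides L, so every combo splits uniquely).
    def changer(amt):
        n, r = divmod(amt, L)
        return sum(bounded[k * L + r] * comb(n - k + d - 1, d - 1)
                   for k in range(min(n, (top - r) // L) + 1))
    return changer
```

Here the per-residue closed forms are evaluated on the fly rather than expanded symbolically, but the compile-then-query shape is the same.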

Note that I've cheated slightly by not bothering to make the `changer` functions (that I'm calling closed form solutions) transparent to the user. One possible solution is to define a (rational coefficient) `Polynomial` class with its own `__call__` and `__str__` methods.

**Notes**

1. Python forbids hashing mutable types like `list`, so it's important when using the memoized version to pass `denoms` as a `tuple`.
2. On the topic of using lazily evaluated data structures as memoized functions, a truly gorgeous 3-line Haskell implementation of this one-coin-at-a-time approach is available on Rosetta Code (though it probably still uses `O(A*d)` space). It's basically a more complex version of my favorite ever Haskell one-liner, `fibs = 0 : 1 : zipWith (+) fibs (tail fibs)`, which picks out the Fibonacci sequence as an infinite list defined in terms of two shifted copies of itself; `fibs` can be indexed into (forcing evaluation of an initial segment) in linear time.
3. Annoyingly, `xrange` in 32-bit Python 2.7.3 won't accept arguments larger than `2**31-1`, so I actually had to rewrite the `for` loop as a `while`/counter loop to get the result for $100M (10 billion cents).

Hi, excuse me, could you explain how this works:

`window = [[0 for _ in range(d+1)] for _ in range(wsize)]`

I don't understand this window. Is it like a matrix? Thank you so much

Yes, the window is a matrix, declared as a list of lists. The rows are initialized to each contain `d+1` zeros, i.e. each row equals `[0 for _ in range(d+1)]`, and `wsize` is the number of rows. I've used `_` as the name of the throw-away variable in both comprehensions.

Thank You!

I just came across this problem today. The way we solved it was to use Lagrange interpolation to evaluate the solution polynomial, using the DP values we already pre-computed up to d times the LCM of the denominations. This way evaluation is only quadratic in d, the number of different denominations used.