
I'm going through Stepanov and Rose's From Mathematics to Generic Programming, and on page 48 they present a fast algorithm for computing remainders without using either division or multiplication. Unfortunately, there's not much in terms of proof [1], so this post is to document my understanding of the algorithm.

The algorithm relies on the following lemma: For a,b\in\mathbb{N} with b>0, given t=remainder(a,2b), we have:

\[remainder(a,b)=\begin{cases} t & t < b\\ t-b & t \geq b \end{cases}\]
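For example, take a=23 and b=5: t=remainder(23,10)=3, and since 3<5, remainder(23,5)=3 as well. Now take a=27 and b=5: t=remainder(27,10)=7, and since 7\geq 5, remainder(27,5)=7-5=2.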

To prove this, consider the standard quotient-remainder representation of a divided by b: a=qb+r, with quotient q and remainder r satisfying 0\leq r<b. q can be either even or odd. If it's even, there exists k\in\mathbb{N} such that q=2k, so:

\[a=2kb+r\]

In this case, the remainder of a divided by 2b is r, the same as the remainder of a divided by b; since r<b<2b, it's a valid remainder modulo 2b. If q is odd, we can say that q=2k+1, so:

\[a=(2k+1)b+r=2kb+b+r\]

In this case, the remainder of a divided by 2b is b+r, which is valid because r<b implies b+r<2b. Now it's clear why the lemma is true. Without explicitly distinguishing between even and odd q, it just examines the remainder of a divided by 2b. If this remainder is smaller than b, then q must be even and it's also the remainder of dividing by b. On the other hand, if the remainder is at least b, q must be odd and the remainder is b+r, in which case we subtract b to get r.
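As a quick sanity check (my own addition, not from the book), here's a brute-force verification of the lemma in Python, using the built-in % operator as ground truth:

def check_lemma(limit):
    for a in range(limit):
        for b in range(1, limit):
            t = a % (2 * b)
            # The lemma: remainder(a, b) is t if t < b, and t - b otherwise.
            assert (t if t < b else t - b) == a % b

check_lemma(500)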

Now, the algorithm itself, as Python code [2]:

def fast_remainder(a, b):
    # Base cases: when a < 2b, the remainder can be found directly.
    if a < b: return a
    if a - b < b: return a - b
    # Find the remainder of a divided by 2b, then apply the lemma
    # to recover the remainder of a divided by b.
    r = fast_remainder(a, b + b)
    if r < b: return r
    return r - b

It starts by covering the base cases where a < 2b. Then it recurses to find the remainder of a divided by 2b. This is a curious recursive pattern, as the parameter b grows rather than shrinks! Therefore, it's important to prove that this recursion terminates; if it does, its correctness follows from the lemma.
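A few quick invocations (my own, for illustration) show it agreeing with Python's built-in % operator, including the remainder-of-0 case mentioned in [2]:

assert fast_remainder(23, 5) == 23 % 5   # 3
assert fast_remainder(27, 5) == 27 % 5   # 2
assert fast_remainder(12, 4) == 12 % 4   # 0; a is a multiple of b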

We keep doubling b in every recursive invocation, and the base cases break the recursion once 2b grows beyond a. It will take at most \left\lceil \log_2 (a/b) \right\rceil recursive calls to reach that point. Therefore, the recursion terminates.
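To watch the doubling happen, here's a variant instrumented with a depth parameter (again my own sketch, not from the book):

def traced_remainder(a, b, depth=0):
    # Show the arguments at each level of the recursion.
    print("  " * depth + f"a={a}, b={b}")
    if a < b: return a
    if a - b < b: return a - b
    r = traced_remainder(a, b + b, depth + 1)
    if r < b: return r
    return r - b

Calling traced_remainder(100, 7) prints four levels, with b doubling through 7, 14, 28, 56, before the answer 2 propagates back up.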


[1] Which is a bit disappointing for a book that was written to show the beauty of math to programmers and is full of proofs for other stuff. For this algorithm the authors just mention "It's not obvious where the work is done, but it works" and then provide a single extended example.
[2] This is a slightly adapted version of the algorithm, which also works when a is a multiple of b, i.e. when the remainder is 0.