Horner’s rule: efficient evaluation of polynomials

March 30th, 2010 at 3:10 pm

Here’s a general degree-n polynomial:

$$p(x) = \sum_{i=0}^n a_i x^i = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots + a_n x^n$$

There are several ways to evaluate such a polynomial in a computer program.

The simplest, naive method is to compute each term of the polynomial separately and then add them up. Here’s the Python code for it:

def poly_naive(A, x):
    p = 0
    for i, a in enumerate(A):
        p += (x ** i) * a
    return p

A is an array of coefficients, lowest degree first, from $$a_0$$ to $$a_n$$.

This method is quite inefficient. It requires n additions (since there are n+1 terms to be added) and $$(n^2 + n)/2$$ multiplications (assuming each $$x^i$$ is computed by repeated multiplication).
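To see it in action, here's a quick check of my own (poly_naive is repeated so the snippet runs standalone):

```python
def poly_naive(A, x):
    p = 0
    for i, a in enumerate(A):
        p += (x ** i) * a
    return p

# p(x) = 19 + 7x + 4x^2 + 6x^3, coefficients listed lowest degree first.
# At x = 2: 19 + 14 + 16 + 48 = 97
print(poly_naive([19, 7, 4, 6], 2))  # -> 97
```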

Iterative method

It's obvious that a lot of repeated computation is being done by raising x to successive powers. We can make things much more efficient by simply keeping the previous power of x between iterations. This is the "iterative method":

def poly_iter(A, x):
    p = 0
    xn = 1
    for a in A:
        p += xn * a
        xn *= x
    return p

In this code, xn is the current power of x. We don't need to raise x to a power on each iteration of the loop; a single multiplication suffices. It's easy to see that each evaluation takes 2n multiplications and n additions. The algorithm is now linear instead of quadratic.
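To make the gap concrete, here's a small table (my own addition) of the multiplication counts predicted by the two analyses above:

```python
def mults_naive(n):
    # (n^2 + n) / 2 multiplications, assuming x ** i is computed
    # by repeated multiplication
    return (n * n + n) // 2

def mults_iter(n):
    # 2n multiplications: one for xn * a and one for xn *= x per term
    return 2 * n

for n in (10, 100, 500):
    print(n, mults_naive(n), mults_iter(n))
# -> 10 55 20
#    100 5050 200
#    500 125250 1000
```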

Horner’s rule

It can be further improved, however. Take a look at this polynomial:

$$6 x^3 + 4 x^2 + 7 x + 19$$

It can be rewritten as follows:

$$((6 x + 4) x + 7) x + 19$$
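A quick numeric check of my own, at an arbitrary point x = 2, confirms the two forms agree:

```python
x = 2
expanded = 6 * x**3 + 4 * x**2 + 7 * x + 19
nested = ((6 * x + 4) * x + 7) * x + 19
print(expanded, nested)  # -> 97 97
```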

And in general, we can always rewrite the polynomial

$$a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots + a_n x^n$$

as:

$$a_0 + x (a_1 + x (a_2 + \cdots + x (a_{n-1} + a_n x) \cdots ))$$

This rearrangement is usually called "Horner’s rule". We can write the code to implement it as follows:

def poly_horner(A, x):
    p = A[-1]
    i = len(A) - 2
    while i >= 0:
        p = p * x + A[i]
        i -= 1
    return p

Here we start by assigning $$a_n$$ to p and then successively multiply by x and add the next coefficient. This code requires n multiplications and n additions (I'm ignoring the updates of the loop variable i, as I did in all the other algorithms, where they were implicit in the Python for loop).

While asymptotically similar to the iterative method, Horner’s method has better constants and thus is faster.
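As a sanity check (my own; the for-loop Horner variant here is an equivalent rewrite of the while-loop version above), all three methods agree on integer inputs, where comparison is exact:

```python
import random

def poly_naive(A, x):
    return sum(a * x ** i for i, a in enumerate(A))

def poly_iter(A, x):
    p, xn = 0, 1
    for a in A:
        p += xn * a
        xn *= x
    return p

def poly_horner(A, x):
    p = A[-1]
    for a in reversed(A[:-1]):
        p = p * x + a
    return p

A = [random.randint(-10, 10) for _ in range(50)]
assert poly_naive(A, 3) == poly_iter(A, 3) == poly_horner(A, 3)
print("all three agree")
```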

Curiously, Horner's rule was discovered in the early 19th century, long before the advent of computers. It's obviously useful for manual computation of polynomials as well, for the same reason: it requires fewer operations.

I've timed the three algorithms on a random polynomial of degree 500. The one using Horner's rule is about 5 times faster than the naive approach, and 15% faster than the iterative method.
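For the curious, here's a rough timing sketch along the same lines (my own setup using the standard timeit module; absolute numbers and exact ratios will vary by machine and Python version):

```python
import random
import timeit

def poly_naive(A, x):
    p = 0
    for i, a in enumerate(A):
        p += (x ** i) * a
    return p

def poly_iter(A, x):
    p, xn = 0, 1
    for a in A:
        p += xn * a
        xn *= x
    return p

def poly_horner(A, x):
    p = A[-1]
    for a in reversed(A[:-1]):
        p = p * x + a
    return p

A = [random.random() for _ in range(501)]  # degree-500 polynomial
for f in (poly_naive, poly_iter, poly_horner):
    t = timeit.timeit(lambda f=f: f(A, 1.1), number=100)
    print(f.__name__, t)
```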

Related posts:

  1. Efficient integer exponentiation algorithms
  2. Efficient modular exponentiation algorithms
  3. The rule of 72
  4. Space-efficient list rotation
  5. Mathematical musing

3 Responses to “Horner’s rule: efficient evaluation of polynomials”

  1. Paddy3118 Says:

    Hi Eli,
    Your post spurred me to create this task on Rosetta Code, where you can see Horner's method performed in several languages now: http://rosettacode.org/wiki/Horner%27s_rule_for_polynomial_evaluation

    Thanks, Paddy.

  2. Emmanuel Jacyna Says:

    Nice to see an expansion of something I've read in SICP. I really like Horner's rule; it saves a lot of time, even on my graphics calculator. By the way, I love your posts on SICP; they're informative and do a very good job of explaining things.

    - Emmanuel

  3. SeanVN Says:

    I have used Horner's rule recently. I can't say whether it is more numerically accurate; maybe it is?
