It's a known piece of math folklore that e was "discovered" by Jacob Bernoulli in the 17th century, when he was pondering compound interest, and defined thus [1]:

$$e = \lim_{n \to \infty} \left(1 + \frac{1}{n}\right)^n$$
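To get a feel for this limit, here's a minimal Python sketch of its convergence; the sample values of n are arbitrary picks:

```python
import math

# Evaluate Bernoulli's limit for successively larger n and watch the
# result creep toward e.
for n in [1, 10, 100, 10_000, 1_000_000]:
    print(f"n = {n:>9}: (1 + 1/n)^n = {(1 + 1 / n) ** n:.6f}")
print(f"for comparison, math.e = {math.e:.6f}")
```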
e is extremely important in mathematics for several reasons; one of them is its useful behavior under differentiation and integration; specifically, that:

$$\frac{d}{dx} e^x = e^x$$
In this post I want to present a couple of simple proofs of this fundamental fact.
Proof using the limit definition
As a prerequisite for this proof, let's reorder the original definition of e slightly. If we perform a change of variable replacing n by $\frac{1}{m}$, we get:

$$e = \lim_{m \to 0} \left(1 + m\right)^{\frac{1}{m}} \qquad (1)$$
This equation will become useful a bit later.
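If you'd like to convince yourself numerically that equation (1) holds, here's a small Python sketch; the shrinking values of m are arbitrary picks:

```python
import math

# As m shrinks toward 0, (1 + m)^(1/m) should approach e.
for m in [0.1, 0.01, 1e-4, 1e-6]:
    print(f"m = {m:>8}: (1 + m)^(1/m) = {(1 + m) ** (1 / m):.6f}")
print(f"math.e = {math.e:.6f}")
```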
Let's start our proof by spelling out the definition of a derivative:

$$\frac{d}{dx} e^x = \lim_{h \to 0} \frac{e^{x+h} - e^x}{h}$$
A bit of algebra (using $e^{x+h} = e^x e^h$) and observing that $e^x$ does not depend on h gives us:

$$\frac{d}{dx} e^x = e^x \lim_{h \to 0} \frac{e^h - 1}{h}$$
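Analytically we're not done yet, but numerically the remaining limit already looks suspiciously like 1; a quick Python sketch (the step sizes are arbitrary picks):

```python
import math

# The unresolved limit: (e^h - 1)/h as h -> 0.
for h in [0.1, 0.01, 0.001, 1e-6]:
    print(f"h = {h:>8}: (e^h - 1)/h = {(math.exp(h) - 1) / h:.8f}")
```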
At this point we're stuck; clearly as h approaches 0, both the numerator and denominator approach 0 as well. The way out - as is often the case in such scenarios - is a sneaky change of variable. Recall equation (1) - how could we use it here?
The change of variable we'll use is $m = e^h - 1$, which implies that $h = \ln(1 + m)$. Note that as h approaches zero, so does m. Rewriting our last expression, we get (applying the logarithm power rule in the last step):

$$e^x \lim_{m \to 0} \frac{m}{\ln(1 + m)} = e^x \lim_{m \to 0} \frac{1}{\frac{1}{m} \ln(1 + m)} = e^x \lim_{m \to 0} \frac{1}{\ln\left(1 + m\right)^{\frac{1}{m}}}$$
Equation (1) tells us that as m approaches zero, $(1 + m)^{\frac{1}{m}}$ approaches e. Substituting that into the denominator we get:

$$\frac{d}{dx} e^x = e^x \cdot \frac{1}{\ln e} = e^x$$
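This completes the proof. As a sanity check, we can compare a finite-difference approximation of the derivative against $e^x$ itself; a minimal Python sketch (the step size and sample points are arbitrary picks):

```python
import math

h = 1e-6  # a small but arbitrary step size
for x in [0.0, 1.0, 2.5]:
    # Finite-difference approximation of the derivative at x.
    finite_diff = (math.exp(x + h) - math.exp(x)) / h
    print(f"x = {x}: (e^(x+h) - e^x)/h = {finite_diff:.5f}, "
          f"e^x = {math.exp(x):.5f}")
```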
Proof using power series expansion
It's always fun to prove the same thing in multiple ways; while I'm sure there are many other techniques to find the derivative of $e^x$, one I particularly like for its simplicity uses its power series expansion.
Similarly to the way e itself was defined empirically, one can show that:

$$e^x = \lim_{n \to \infty} \left(1 + \frac{x}{n}\right)^n$$
(For a proof of this equation, see the Appendix)
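Numerically, this limit behaves just as advertised; a small Python sketch at one arbitrary sample point:

```python
import math

x = 1.5  # an arbitrary sample point
for n in [10, 1_000, 100_000]:
    print(f"n = {n:>7}: (1 + x/n)^n = {(1 + x / n) ** n:.6f}")
print(f"math.exp(x)            = {math.exp(x):.6f}")
```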
Let's use the Binomial theorem to open up the parentheses inside the limit:

$$\left(1 + \frac{x}{n}\right)^n = \sum_{k=0}^{n} \binom{n}{k} \left(\frac{x}{n}\right)^k$$
We'll unroll the sum a bit, so it's easier to manipulate algebraically. We can use the standard formula for "n choose k", $\binom{n}{k} = \frac{n!}{k!(n-k)!}$, and get:

$$\left(1 + \frac{x}{n}\right)^n = 1 + n \cdot \frac{x}{n} + \frac{n(n-1)}{2!} \cdot \frac{x^2}{n^2} + \frac{n(n-1)(n-2)}{3!} \cdot \frac{x^3}{n^3} + \cdots$$
Inside the limit, we can simplify each factor n-c (for a constant c) to just n, since compared to infinity c is negligible. This means that coefficients like $\frac{n(n-1)}{2!\,n^2}$ become $\frac{n \cdot n}{2!\,n^2} = \frac{1}{2!}$, and so on. All these powers of n cancel out in the numerator and denominator, and we get:

$$e^x = \lim_{n \to \infty} \left(1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots\right)$$
And since the contents of the limit don't actually depend on n any more, this leaves us with a well-known formula for approximating $e^x$ [2]:

$$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots = \sum_{k=0}^{\infty} \frac{x^k}{k!}$$
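This series converges quickly; here's a minimal Python sketch summing its first few terms at an arbitrary sample point:

```python
import math

x = 2.0  # an arbitrary sample point
partial_sum = 0.0
for k in range(20):
    partial_sum += x ** k / math.factorial(k)
    if k in (2, 5, 19):
        print(f"first {k + 1:>2} terms: {partial_sum:.8f}")
print(f"math.exp(x)   : {math.exp(x):.8f}")
```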
We can finally use this power series expansion to calculate the derivative of $e^x$ quite trivially. Since it's a sum of terms, the derivative is the sum of the derivatives of the terms:

$$\frac{d}{dx} e^x = 0 + 1 + \frac{2x}{2!} + \frac{3x^2}{3!} + \cdots = 1 + x + \frac{x^2}{2!} + \cdots$$
Look at that, we've got $e^x$ back:

$$\frac{d}{dx} e^x = e^x$$
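As a cross-check, we can let a computer algebra system do the term-by-term differentiation; a minimal sketch using sympy, assuming an arbitrary truncation order N. Differentiating the truncated series should reproduce the same series minus its last term:

```python
import sympy as sp

x = sp.symbols('x')
N = 8  # truncation order; an arbitrary choice
series = sum(x ** k / sp.factorial(k) for k in range(N + 1))

# Term-by-term differentiation shifts every term down one slot,
# so the derivative equals the series truncated one term earlier.
deriv = sp.diff(series, x)
shifted = series - x ** N / sp.factorial(N)
print(sp.simplify(deriv - shifted) == 0)  # True
```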
Appendix
Let's see why:

$$e^x = \lim_{n \to \infty} \left(1 + \frac{x}{n}\right)^n$$
We'll start with the limit and will arrive at $e^x$. Using a change of variable $m = \frac{n}{x}$ (so that $n = mx$):

$$\lim_{n \to \infty} \left(1 + \frac{x}{n}\right)^n = \lim_{n \to \infty} \left(1 + \frac{1}{m}\right)^{mx}$$
Given our change of variable, since n approaches infinity, so does m. Therefore, we get:

$$e^x = \lim_{m \to \infty} \left(1 + \frac{1}{m}\right)^{mx}$$
Nothing in the limit depends on x, so that exponent can be seen as applying to the whole limit. And the limit is the definition of e; therefore, we get:

$$\lim_{m \to \infty} \left(1 + \frac{1}{m}\right)^{mx} = \left[\lim_{m \to \infty} \left(1 + \frac{1}{m}\right)^{m}\right]^x = e^x$$
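As with the main result, this identity is easy to probe numerically; a small Python sketch at one arbitrary sample point:

```python
import math

x = 0.7  # an arbitrary sample point
for m in [10, 1_000, 100_000]:
    print(f"m = {m:>7}: (1 + 1/m)^(m*x) = {(1 + 1 / m) ** (m * x):.6f}")
print(f"math.exp(x)              = {math.exp(x):.6f}")
```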
[1] What I love about this definition is that it's entirely empirical. Try to substitute successively larger numbers for n in the equation, and you'll see that the result approaches the value e more and more closely. The limit of this process for an infinite n was called e. Bernoulli did all of this by hand, which is rather tedious. His best estimate was that e is "larger than 2 and a half but smaller than 3".
[2] Another way to get this formula is from the Maclaurin series expansion of $e^x$, but we couldn't use that here since Maclaurin series require derivatives, while we're trying to figure out what the derivative of $e^x$ is.