Gradient descent is a standard tool for iteratively optimizing complex functions within a computer program. Its goal is: given some arbitrary function, find a minimum. For some small subset of functions - those that are convex - there's just a single minimum which also happens to be global. For most realistic functions, there may be many minima, most of them local. Making sure the optimization finds the "best" minimum and doesn't get stuck in sub-optimal minima is out of the scope of this article. Here we'll just be dealing with the core gradient descent algorithm for finding some minimum from a given starting point.

The main premise of gradient descent is: given some current location x in the search space (the domain of the optimized function) we ought to update x for the next step in the direction opposite to the gradient of the function computed at x. But why is this the case? The aim of this article is to explain why, mathematically.

This is also the place for a disclaimer: the examples used throughout the article are trivial, low-dimensional, convex functions. We don't really need an algorithmic procedure to find their global minimum - a quick computation would do, or really just eyeballing the function's plot. In reality we will be dealing with non-linear, 1000-dimensional functions where it's utterly impossible to visualize anything, or solve anything analytically. The approach works just the same there, however.

Building intuition with single-variable functions

The gradient is formally defined for multivariate functions. However, to start building intuition, it's useful to begin with the simplest case - a single-variable function f(x).

In single-variable functions, the simple derivative plays the role of a gradient. So "gradient descent" would really be "derivative descent"; let's see what that means.

As an example, let's take the function f(x)=(x-1)^2. Here's its plot, in red:

Plot of parabola with tangent lines

I marked a couple of points on the plot, in blue, and drew the tangents to the function at these points. Remember, our goal is to find the minimum of the function. To do that, we'll start with a guess for x, and continuously update it to improve our guess based on some computation. How do we know how to update x? The update has only two possible directions: increase x or decrease x. We have to decide which of the two directions to take.

We do that based on the derivative of f(x). The derivative at some point x_0 is defined as the limit [1]:

\[\frac{d}{dx}f(x_0)=\lim_{h \to 0}\frac{f(x_0+h)-f(x_0)}{h}\]

Intuitively, this tells us what happens to f(x) when we add a very small value to x. For example in the plot above, at x_0=3 we have:

\[\begin{align*} \frac{d}{dx}f(3)&=\lim_{h \to 0}\frac{f(3+h)-f(3)}{h} \\                 &=\lim_{h \to 0}\frac{(3+h-1)^2-(3-1)^2}{h} \\                 &=\lim_{h \to 0}\frac{h^2+4h}{h} \\                 &=\lim_{h \to 0}h+4=4 \end{align*}\]

This means that the slope of f(x) at x_0=3 - the value of \frac{df}{dx} there - is 4; for a very small positive change h to x at that point, the value of f(x) will increase by approximately 4h. Therefore, to get closer to the minimum of f(x) we should decrease x_0 a bit.

Let's take another example point, x_0=-1. The derivative there is -4, so if we add a little bit to x_0, f(x) will decrease by approximately 4 times that little bit. So that's exactly what we should do to get closer to the minimum.
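Here's a quick way to verify these numbers numerically. The following is just a sanity-check sketch in Python (not something we need for the math): it approximates the limit definition with a small but finite h at both points.

    def f(x):
        return (x - 1) ** 2

    def numerical_derivative(f, x0, h=1e-6):
        # Finite-difference approximation of the limit definition.
        return (f(x0 + h) - f(x0)) / h

    print(numerical_derivative(f, 3))   # ~4.0: increasing x increases f
    print(numerical_derivative(f, -1))  # ~-4.0: increasing x decreases f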

It turns out that in both cases, we should nudge x_0 in the direction opposite to the derivative at x_0. That's the most basic idea behind gradient descent - the derivative shows us the way to the minimum; or rather, it shows us the way to the maximum and we then go in the opposite direction. Given some initial guess x_0, the next guess will be:

\[x_1=x_0-\eta\frac{d}{dx}f(x_0)\]

Where \eta is what we call a "learning rate", and is constant for each given update. It's the reason why we don't care much about the magnitude of the derivative at x_0, only its direction. In general, it makes sense to keep the learning rate fairly small so we only make a tiny step at a time. This makes sense mathematically, because the derivative at a point is defined as the rate of change of f(x) assuming an infinitesimal change in x. For a large change in x, who knows where we'll end up. It's easy to imagine cases where we'll entirely overshoot the minimum by making too large a step [2].
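To make the update rule concrete, here's a minimal sketch of it in Python for our example f(x)=(x-1)^2. The learning rate, starting point and number of steps are arbitrary choices for illustration, not values prescribed by the math:

    def df(x):
        # Derivative of f(x) = (x - 1)^2.
        return 2 * (x - 1)

    eta = 0.1    # learning rate, chosen arbitrarily for this example
    x = 3.0      # initial guess x_0
    for step in range(50):
        x = x - eta * df(x)   # x_{k+1} = x_k - eta * df/dx(x_k)

    print(x)  # converges close to 1.0, the minimum of (x - 1)^2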

Multivariate functions and directional derivatives

With functions of multiple variables, derivatives become more interesting. We can't just say "the derivative points to where the function is increasing", because... which derivative?

Recall the formal definition of the derivative as the limit for a small step h. When our function has many variables, which one should have the step added? One at a time? All at once? In multivariate calculus, we use partial derivatives as building blocks. Let's use a function of two variables - f(x,y) - as an example throughout this section, and define the partial derivatives w.r.t. x and y at some point (x_0,y_0):

\[\begin{align*} \frac{\partial }{\partial x}f(x_0,y_0)&=\lim_{h \to 0}\frac{f(x_0+h,y_0)-f(x_0,y_0)}{h} \\ \frac{\partial }{\partial y}f(x_0,y_0)&=\lim_{h \to 0}\frac{f(x_0,y_0+h)-f(x_0,y_0)}{h} \end{align*}\]
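In code, each partial derivative is just a finite-difference step along one variable while the other is held fixed. Here's a small sketch of that in Python, using f(x,y)=x^2+y^2 (the function we'll plot shortly) and an arbitrarily chosen sample point:

    def f(x, y):
        return x ** 2 + y ** 2

    def partial_x(f, x0, y0, h=1e-6):
        # Step only along x; y stays fixed.
        return (f(x0 + h, y0) - f(x0, y0)) / h

    def partial_y(f, x0, y0, h=1e-6):
        # Step only along y; x stays fixed.
        return (f(x0, y0 + h) - f(x0, y0)) / h

    print(partial_x(f, 1.0, 2.0))  # ~2.0, matching df/dx = 2x
    print(partial_y(f, 1.0, 2.0))  # ~4.0, matching df/dy = 2y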

When we have a single-variable function f(x), there are really only two directions in which we can move from a given point x_0 - left (decrease x) or right (increase x). With two variables, the number of possible directions is infinite, because we pick a direction to move on a 2D plane. Hopefully this immediately pops up "vectors" in your head, since vectors are the perfect tool to deal with such problems. We can represent the change from the point (x_0,y_0) as the vector \vec{v}=\langle a,b \rangle [3]. The directional derivative of f(x,y) along \vec{v} at (x_0,y_0) is defined as its rate of change in the direction of the vector at that point. Mathematically, it's defined as:

\[\begin{equation} D_{\vec{v}}f(x_0,y_0)=\lim_{h \to 0}\frac{f(x_0+ah,y_0+bh)-f(x_0,y_0)}{h} \tag{1} \end{equation}\]

The partial derivatives w.r.t. x and y can be seen as special cases of this definition. The partial derivative \frac{\partial f}{\partial x} is just the directional derivative in the direction of the x axis. In vector-speak, this is the directional derivative for \vec{v}=\langle a,b \rangle=\widehat{e_x}=\langle 1,0 \rangle, the standard basis vector for x. Just plug a=1,b=0 into (1) to see why. Similarly, the partial derivative \frac{\partial f}{\partial y} is the directional derivative in the direction of the standard basis vector \widehat{e_y}=\langle 0,1 \rangle.
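We can check this in code as well. The following sketch (Python again, same f(x,y)=x^2+y^2 and sample point as above, both arbitrary) approximates equation (1) with a finite h and shows that the standard basis vectors reproduce the partial derivatives:

    def f(x, y):
        return x ** 2 + y ** 2

    def directional_derivative(f, x0, y0, a, b, h=1e-6):
        # Finite-difference version of equation (1); <a, b> is assumed normalized.
        return (f(x0 + a * h, y0 + b * h) - f(x0, y0)) / h

    print(directional_derivative(f, 1.0, 2.0, 1, 0))  # ~2.0, same as df/dx
    print(directional_derivative(f, 1.0, 2.0, 0, 1))  # ~4.0, same as df/dy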

A visual interlude

Functions of two variables f(x,y) are the last frontier for meaningful visualizations, for which we need 3D to plot the value of f for each given x and y. Even in 3D, visualizing gradients is significantly harder than in 2D, and yet we have to try since for anything above two variables all hopes of visualization are lost.

Here's the function f(x,y)=x^2+y^2 plotted in a small range around zero. I drew the standard basis vectors \widehat{x}=\widehat{e_x} and \widehat{y}=\widehat{e_y} [4] and some combination of them \vec{v}.

3D parabola with direction vector markers

I also marked the point on f(x,y) where the vectors are based. The goal is to help us keep in mind how the independent variables x and y change, and how that affects f(x,y). We change x and y by adding some small vector \vec{v} to their current value. The result is "nudging" f(x,y) in the direction of \vec{v}. Remember our goal for this article - find \vec{v} such that this "nudge" gets us closer to a minimum.

Finding directional derivatives using the gradient

As we've seen, the derivative of f(x,y) in the direction of \vec{v} is defined as:

\[D_{\vec{v}}f(x_0,y_0)=\lim_{h \to 0}\frac{f(x_0+ah,y_0+bh)-f(x_0,y_0)}{h}\]

Looking at the 3D plot above, this is how much the value of f(x,y) changes when we add \vec{v} to the vector \langle x_0,y_0 \rangle. But how do we compute it? This limit definition doesn't lend itself to easy analytical manipulation for arbitrary functions. Sure, we could plug (x_0,y_0) and \vec{v} in there and do the computation, but it would be nice to have an easier-to-use formula. Luckily, with the help of the gradient of f(x,y) it becomes much easier.

The gradient is a vector value we compute from a scalar function. It's defined as:

\[\nabla f=\left \langle \frac{\partial f}{\partial x},\frac{\partial f}{\partial y} \right \rangle\]

It turns out that given a vector \vec{v}, the directional derivative D_{\vec{v}}f can be expressed as the following dot product:

\[D_{\vec{v}}f=(\nabla f) \cdot \vec{v}\]

If this looks like a mental leap too big to trust, please read the Appendix section at the bottom. Otherwise, feel free to verify that the two are equivalent with a couple of examples. For instance, try to find the derivative in the direction of \vec{v}=\langle \frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}} \rangle at (x_0,y_0)=(-1.5,0.25). You should get \frac{-2.5}{\sqrt{2}} using both methods.
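Here's that verification spelled out as a short Python sketch, with the gradient of f(x,y)=x^2+y^2 computed by hand as \langle 2x,2y \rangle; both methods should print roughly \frac{-2.5}{\sqrt{2}} \approx -1.7678:

    import math

    def f(x, y):
        return x ** 2 + y ** 2

    x0, y0 = -1.5, 0.25
    a, b = 1 / math.sqrt(2), 1 / math.sqrt(2)   # normalized direction vector v

    # Method 1: dot product of the gradient <2x, 2y> with v.
    via_gradient = (2 * x0) * a + (2 * y0) * b

    # Method 2: finite-difference approximation of the limit definition.
    h = 1e-6
    via_limit = (f(x0 + a * h, y0 + b * h) - f(x0, y0)) / h

    print(via_gradient, via_limit)  # both roughly -1.7678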

Direction of maximal change

We're almost there! Now that we have a relatively simple way of computing any directional derivative from the partial derivatives of a function, we can figure out which direction to take to get the maximal change in the value of f(x,y).

We can rewrite:

\[D_{\vec{v}}f=(\nabla f) \cdot \vec{v}\]

As:

\[D_{\vec{v}}f=\left \| \nabla f \right \| \left \| \vec{v} \right \| cos(\theta)\]

Where \theta is the angle between the two vectors. Now, recall that \vec{v} is normalized so its magnitude is 1. Therefore, we only care about the direction of \vec{v} w.r.t. the gradient. When is this equation maximized? When \theta=0, because then cos(\theta)=1. Since a cosine can never be larger than 1, that's the best we can have.

So \theta=0 gives us the largest positive change in f(x,y). To get \theta=0, \vec{v} has to point in the same direction as the gradient. Similarly, for \theta=180^{\circ} we get cos(\theta)=-1 and therefore the largest negative change in f(x,y). So if we want to decrease f(x,y) the most, \vec{v} has to point in the opposite direction of the gradient.
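Here's a small Python sketch that makes this concrete: it samples unit vectors one degree apart and computes the directional derivative for each via the dot product. The largest value shows up for the direction of the gradient itself, and that value is roughly \left \| \nabla f \right \| (the function and point are the same arbitrary examples used earlier):

    import math

    def grad_f(x, y):
        # Gradient of f(x, y) = x^2 + y^2, computed analytically.
        return (2 * x, 2 * y)

    x0, y0 = -1.5, 0.25
    gx, gy = grad_f(x0, y0)

    best_dir, best_val = None, float("-inf")
    for deg in range(360):                 # unit vectors, one per degree
        theta = math.radians(deg)
        a, b = math.cos(theta), math.sin(theta)
        d = gx * a + gy * b                # directional derivative via dot product
        if d > best_val:
            best_val, best_dir = d, (a, b)

    # best_dir is (approximately) the normalized gradient, and best_val is
    # (approximately) the gradient's magnitude.
    print(best_dir, best_val, math.hypot(gx, gy))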

Gradient descent update for multivariate functions

To sum up, given some starting point (x_0,y_0), to nudge it in the direction of the minimum of f(x,y), we first compute the gradient of f(x,y) at (x_0,y_0). Then, we update (using vector notation):

\[\langle x_1,y_1 \rangle=\langle x_0,y_0 \rangle-\eta \nabla{f(x_0,y_0)}\]

Generalizing to multiple dimensions, let's say we have the function f:\mathbb{R}^n\rightarrow \mathbb{R} taking the n-dimensional vector \vec{x}=(x_1,x_2 \dots ,x_n). We define the gradient update at step k to be:

\[\vec{x}_{(k)}=\vec{x}_{(k-1)} - \eta \nabla{f(\vec{x}_{(k-1)})}\]
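As code, this update is a single vector operation per step. Here's a minimal sketch using Python with NumPy; the function, its hand-coded gradient, the learning rate, the starting point and the iteration count are all arbitrary illustrative choices:

    import numpy as np

    def f(x):
        # Sum of squares; its minimum is the zero vector.
        return np.sum(x ** 2)

    def grad_f(x):
        # Analytic gradient of the sum of squares.
        return 2 * x

    eta = 0.1                        # learning rate, chosen arbitrarily
    x = np.array([3.0, -1.0, 2.5])   # arbitrary starting point in R^3
    for k in range(100):
        x = x - eta * grad_f(x)      # x_(k) = x_(k-1) - eta * grad f(x_(k-1))

    print(x, f(x))  # x ends up close to the zero vector, f(x) close to 0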

Previously, for the single-variable case we said that the derivative points the way to the minimum. Now we can say that while there are many ways to get to the minimum (eventually), the gradient points us to the fastest way down from any given point.

Appendix: directional derivative definition and gradient

This is an optional section for those who don't like taking mathematical statements for granted. Now it's time to prove the equation shown earlier in the article, and on which its main result is based:

\[D_{\vec{v}}f=(\nabla f) \cdot \vec{v}\]

As usual with proofs, it really helps to start by working through an example or two to build up some intuition into why the equation works. Feel free to do that if you'd like, using any function, starting point and direction vector \vec{v}.

Suppose we define a function w(t) as follows:

\[w(t)=f(x,y)\]

Where x=x(t) and y=y(t) defined as:

\[\begin{align*} x(t)&=x_0+at \\ y(t)&=y_0+bt \end{align*}\]

In these definitions, x_0, y_0, a and b are constants, so both x(t) and y(t) are truly functions of a single variable. Using the chain rule, we know that:

\[\frac{dw}{dt}=\frac{\partial f}{\partial x}\frac{dx}{dt}+\frac{\partial f}{\partial y}\frac{dy}{dt}\]

Substituting the derivatives of x(t) and y(t), we get:

\[\frac{dw}{dt}=a\frac{\partial f}{\partial x}+b\frac{\partial f}{\partial y}\]

One more step, the significance of which will become clear shortly. Specifically, the derivative of w(t) at t=0 is:

\[\begin{equation} \frac{d}{dt}w(0)=a\frac{\partial}{\partial x}f(x_0,y_0)+b\frac{\partial}{\partial y}f(x_0,y_0) \tag{2} \end{equation}\]

Now let's see how to compute the derivative of w(t) at t=0 using the formal limit definition:

\[\begin{align*} \frac{d}{dt}w(0)&=\lim_{h \to 0}\frac{w(h)-w(0)}{h} \\                 &=\lim_{h \to 0}\frac{f(x_0+ah,y_0+bh)-f(x_0,y_0)}{h} \end{align*}\]

But the latter is precisely the definition of the directional derivative in equation (1). Therefore, we can say that:

\[\frac{d}{dt}w(0)=D_{\vec{v}}f(x_0,y_0)\]

From this and (2), we get:

\[\frac{d}{dt}w(0)=D_{\vec{v}}f(x_0,y_0)=a\frac{\partial}{\partial x}f(x_0,y_0)+b\frac{\partial}{\partial y}f(x_0,y_0)\]

This derivation is not special to the point (x_0,y_0) - it works just as well for any point where f(x,y) has partial derivatives w.r.t. x and y; therefore, for any point (x,y) where f(x,y) is differentiable:

\[\begin{align*} D_{\vec{v}}f(x,y)&=a\frac{\partial}{\partial x}f(x,y)+b\frac{\partial}{\partial y}f(x,y) \\                      &=\left \langle \frac{\partial f}{\partial x},\frac{\partial f}{\partial y} \right \rangle \cdot \langle a,b \rangle \\                      &=(\nabla f) \cdot \vec{v} \qedhere \end{align*}\]
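As a final sanity check of this result, we can also differentiate w(t) at t=0 numerically and compare it to the right-hand side, using the same example function, point and direction vector as earlier in the article (a Python sketch, nothing more):

    import math

    def f(x, y):
        return x ** 2 + y ** 2

    x0, y0 = -1.5, 0.25
    a, b = 1 / math.sqrt(2), 1 / math.sqrt(2)   # normalized direction vector

    def w(t):
        # w(t) = f(x0 + a*t, y0 + b*t), as defined in the proof.
        return f(x0 + a * t, y0 + b * t)

    h = 1e-6
    lhs = (w(h) - w(0)) / h                 # d/dt w(0), approximated
    rhs = a * (2 * x0) + b * (2 * y0)       # a*df/dx + b*df/dy at (x0, y0)
    print(lhs, rhs)                         # both roughly -1.7678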
[1]The notation \frac{d}{dx}f(x_0) means: the value of the derivative of f w.r.t. x, evaluated at x_0. Another way to say the same would be f{}'(x_0).
[2]That said, in some advanced variations of gradient descent we actually want to probe different areas of the function early on in the process, so a larger step makes sense (remember, realistic functions have many local minima and we want to find the best one). Further along in the optimization process, when we've settled on a general area of the function we want the learning rate to be small so we actually get to the minimum. This approach is called annealing and I'll leave it for some future article.
[3]To avoid tracking vector magnitudes, from now on in the article we'll be dealing with normalized direction vectors. That is, we always assume that \left \| \vec{v}  \right \|=1.
[4]Yes, \widehat{y} is actually going in the opposite direction so it's -\widehat{e_y}, but that really doesn't change anything. It was easier to draw :)