# All Science Fair Projects

## Science Fair Project Encyclopedia for Schools!


Gradient descent is an optimization algorithm for finding a local minimum of a differentiable function. Taking steps proportional to the gradient (or the approximate gradient) of the function at the current point leads towards a local maximum; taking steps proportional to the negative of the gradient leads towards a local minimum, which is the case that gives the method its name.

This algorithm is also known as steepest descent, or the method of steepest descent. It should not be confused with the method of steepest descent for approximating integrals, which shares the same name.

## Description of the method

Gradient descent is based on the observation that if the real-valued function $F(\mathbf{x})$ is defined and differentiable in a neighborhood of a point $\mathbf{a}$, then $F(\mathbf{x})$ increases fastest if one goes from $\mathbf{a}$ in the direction of the gradient of F at $\mathbf{a}$, $\nabla F(\mathbf{a})$. It follows that, if

$\mathbf{b}=\mathbf{a}+\gamma\nabla F(\mathbf{a})$

for γ > 0 a small enough number, then $F(\mathbf{a})\leq F(\mathbf{b})$. With this observation in mind, one starts with a guess $\mathbf{x}_0$ for a local maximum of F, and considers the sequence $\mathbf{x}_0, \mathbf{x}_1, \mathbf{x}_2, \dots$ such that

$\mathbf{x}_{n+1}=\mathbf{x}_n+\gamma \nabla F(\mathbf{x}_n),\ n \ge 0.$

We have $F(\mathbf{x}_0)\le F(\mathbf{x}_1)\le F(\mathbf{x}_2)\le \dots,$ so hopefully the sequence $(\mathbf{x}_n)$ converges to the desired local maximum. Note that the value of the step size γ is allowed to change at every iteration.
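The iteration above can be sketched in a few lines of code. The particular function, starting point, and step size below are illustrative choices for this sketch, not part of the article:

```python
# Gradient ascent on F(x, y) = -(x - 1)^2 - (y + 2)^2, whose unique
# maximum is at (1, -2).  The step size gamma is kept fixed here.

def grad_F(x, y):
    """Analytic gradient of F at (x, y)."""
    return (-2.0 * (x - 1.0), -2.0 * (y + 2.0))

def gradient_ascent(x0, y0, gamma=0.1, steps=100):
    """Iterate x_{n+1} = x_n + gamma * grad F(x_n)."""
    x, y = x0, y0
    for _ in range(steps):
        gx, gy = grad_F(x, y)
        x, y = x + gamma * gx, y + gamma * gy
    return x, y

x, y = gradient_ascent(0.0, 0.0)
print(round(x, 4), round(y, 4))  # prints 1.0 -2.0
```

Each step moves in the direction of the gradient, so F increases monotonically along the sequence, as in the inequality above.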

To illustrate the process, suppose F is defined on the plane and its graph has the shape of a hill. The contour lines of F are the curves along which the value of F is constant. The gradient at a point is perpendicular to the contour line passing through that point, and it points in the direction in which F increases fastest. Following the gradient step by step therefore leads to the top of the hill, that is, to a point where the value of F is locally largest.

To have gradient descent go towards a local minimum instead, one replaces γ with −γ, so that every step is taken in the direction of the negative gradient.
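A minimal one-dimensional descent sketch with the sign flipped in this way (the function and the fixed step size are assumptions of this example):

```python
# Gradient descent: step against the gradient to find a local minimum.
# Minimizes f(x) = (x - 3)^2 + 1, whose minimum is at x = 3.

def df(x):
    """Derivative of f at x."""
    return 2.0 * (x - 3.0)

x = 0.0          # initial guess
gamma = 0.1      # fixed step size
for _ in range(200):
    x = x - gamma * df(x)   # note the minus sign: gamma replaced by -gamma

print(round(x, 4))  # prints 3.0
```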

Note that gradient descent works in spaces of any number of dimensions, even in infinite-dimensional ones.

Two weaknesses of gradient descent are:

1. The algorithm can take many iterations to converge towards a local maximum or minimum if the curvature of the function differs greatly in different directions.
2. Finding the optimal step size γ at every iteration can be time-consuming; conversely, using a fixed γ can yield poor results. The conjugate gradient method is often a better alternative.
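The first weakness can be seen on a hypothetical ill-conditioned function such as f(x, y) = x² + 100y², where the curvature in the y direction is 100 times that in the x direction. A fixed step size small enough to be stable in y makes progress in x very slow, and a slightly larger one diverges (the specific numbers below are this sketch's assumptions):

```python
# Gradient descent on f(x, y) = x^2 + 100*y^2 with a fixed step size.
# The gradient is (2x, 200y), so stability is dictated by the stiff
# y direction while convergence speed is limited by the flat x direction.

def steps_to_converge(gamma, tol=1e-6, max_steps=10000):
    """Count iterations until both coordinates fall below tol."""
    x, y = 1.0, 1.0
    for n in range(max_steps):
        if abs(x) < tol and abs(y) < tol:
            return n
        x, y = x - gamma * 2.0 * x, y - gamma * 200.0 * y
    return max_steps  # did not converge

print(steps_to_converge(0.009))  # stable, but takes hundreds of steps
print(steps_to_converge(0.011))  # too large for the stiff direction: diverges
```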

A more powerful algorithm is the BFGS method, which at every step computes a matrix by which the gradient vector is multiplied, so as to move in a "better" direction. This is combined with a more sophisticated line search algorithm to find the "best" value of γ.
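The idea can be sketched in two dimensions. This is an illustrative toy implementation under this example's own assumptions (the quadratic test function, the Armijo backtracking rule, and the tolerances are not from the article): an approximation H of the inverse Hessian is maintained, each step moves along −H·∇f, a backtracking line search picks γ, and H is updated from the observed change in the gradient.

```python
# Toy 2-D BFGS sketch: minimize f(p) = (p0 - 1)^2 + 10*(p1 - 2)^2.

def f(p):
    return (p[0] - 1.0) ** 2 + 10.0 * (p[1] - 2.0) ** 2

def grad(p):
    return [2.0 * (p[0] - 1.0), 20.0 * (p[1] - 2.0)]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def bfgs(p, steps=50):
    H = [[1.0, 0.0], [0.0, 1.0]]          # initial inverse-Hessian guess
    g = grad(p)
    for _ in range(steps):
        Hg = matvec(H, g)
        d = [-Hg[0], -Hg[1]]              # search direction -H * gradient
        # Backtracking line search for a "good enough" gamma (Armijo rule).
        slope = g[0] * d[0] + g[1] * d[1]
        gamma = 1.0
        while gamma > 1e-12 and \
                f([p[0] + gamma * d[0], p[1] + gamma * d[1]]) \
                > f(p) + 1e-4 * gamma * slope:
            gamma *= 0.5
        s = [gamma * d[0], gamma * d[1]]  # step actually taken
        p = [p[0] + s[0], p[1] + s[1]]
        g_new = grad(p)
        y = [g_new[0] - g[0], g_new[1] - g[1]]
        ys = y[0] * s[0] + y[1] * s[1]
        if ys > 1e-12:
            # BFGS update: H <- (I - r s y^T) H (I - r y s^T) + r s s^T
            r = 1.0 / ys
            A = [[1.0 - r * s[0] * y[0], -r * s[0] * y[1]],
                 [-r * s[1] * y[0], 1.0 - r * s[1] * y[1]]]
            AH = [[A[0][0] * H[0][0] + A[0][1] * H[1][0],
                   A[0][0] * H[0][1] + A[0][1] * H[1][1]],
                  [A[1][0] * H[0][0] + A[1][1] * H[1][0],
                   A[1][0] * H[0][1] + A[1][1] * H[1][1]]]
            H = [[AH[0][0] * A[0][0] + AH[0][1] * A[0][1] + r * s[0] * s[0],
                  AH[0][0] * A[1][0] + AH[0][1] * A[1][1] + r * s[0] * s[1]],
                 [AH[1][0] * A[0][0] + AH[1][1] * A[0][1] + r * s[1] * s[0],
                  AH[1][0] * A[1][0] + AH[1][1] * A[1][1] + r * s[1] * s[1]]]
        g = g_new
    return p

p = bfgs([0.0, 0.0])
print(round(p[0], 3), round(p[1], 3))  # converges toward the minimum at (1, 2)
```

In practice one would not hand-roll this; the point of the sketch is only that multiplying the gradient by an evolving matrix corrects for curvature differences that slow plain gradient descent.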