
# Diagonalizable matrix

In linear algebra, a square matrix A is called diagonalizable if it is similar to a diagonal matrix, i.e. if there exists an invertible matrix P such that $P^{-1}AP$ is a diagonal matrix. If V is a finite-dimensional vector space, then a linear map T : V → V is called diagonalizable if there exists a basis of V with respect to which T is represented by a diagonal matrix. Diagonalization is the process of finding a corresponding diagonal matrix for a diagonalizable matrix or linear map.

Diagonalizable matrices and maps are of interest because diagonal matrices are especially easy to handle: their eigenvalues and eigenvectors are known and one can raise a diagonal matrix to a power by simply raising the diagonal entries to that same power.

The fundamental fact about diagonalizable maps and matrices is expressed by the following:

• An n-by-n matrix A over the field F is diagonalizable if and only if the sum of the dimensions of its eigenspaces is equal to n, which is the case if and only if there exists a basis of Fn consisting of eigenvectors of A. If such a basis has been found, one can form the matrix P having these basis vectors as columns, and $P^{-1}AP$ will be a diagonal matrix. The diagonal entries of this matrix are the eigenvalues of A.
• A linear map T : V → V is diagonalizable if and only if the sum of the dimensions of its eigenspaces is equal to dim(V), which is the case if and only if there exists a basis of V consisting of eigenvectors of T. With respect to such a basis, T will be represented by a diagonal matrix. The diagonal entries of this matrix are the eigenvalues of T.

Another characterization: A matrix or linear map is diagonalizable over the field F if and only if its minimal polynomial is a product of distinct linear factors over F.
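This criterion can be illustrated numerically. The sketch below (in NumPy, with two small hand-picked matrices not taken from the text) contrasts an idempotent matrix, whose minimal polynomial x(x − 1) splits into distinct linear factors, with a nilpotent matrix, whose minimal polynomial x² has a repeated root:

```python
import numpy as np

# A is idempotent (A @ A == A), so x*(x - 1) annihilates it:
# its minimal polynomial is a product of distinct linear factors,
# hence A is diagonalizable (eigenvalues 1 and 0).
A = np.array([[1, 1],
              [0, 0]])
assert np.array_equal(A @ A, A)

# N is nilpotent: N != 0 but N @ N == 0, so its minimal polynomial
# is x**2, which has a repeated root. Hence N is not diagonalizable.
N = np.array([[0, 1],
              [0, 0]])
assert N.any()
assert not (N @ N).any()
```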

The following sufficient (but not necessary) condition is often useful.

• An n-by-n matrix A is diagonalizable over the field F if it has n distinct eigenvalues in F, i.e. if its characteristic polynomial has n distinct roots in F.
• A linear map T : V → V with n = dim(V) is diagonalizable if it has n distinct eigenvalues, i.e. if its characteristic polynomial has n distinct roots in F.

As a rule of thumb, over C almost every matrix is diagonalizable. More precisely: the set of complex n-by-n matrices that are not diagonalizable over C, considered as a subset of Cn×n, is a null set with respect to the Lebesgue measure. One can also say that the diagonalizable matrices form a dense subset with respect to the Zariski topology: the complement lies inside the set where the discriminant of the characteristic polynomial vanishes, which is a hypersurface. Density in the usual (strong) topology given by a norm follows from this as well.

The same is not true over R. As n increases, it becomes (in some sense) less and less likely that a randomly selected real matrix is diagonalizable over R.


## Examples

### How to diagonalize a matrix

Consider a matrix

$A=\begin{bmatrix} 1 & 2 & 0 \\ 0 & 3 & 0 \\ 2 & -4 & 2 \end{bmatrix}$

This matrix has eigenvalues

$\lambda_1 = 3, \quad \lambda_2 = 2, \quad \lambda_3= 1.$

A is thus a 3-by-3 matrix with three distinct eigenvalues, so it is diagonalizable.

If we want to diagonalize A, we need to compute the corresponding eigenvectors. They are

$v_1 = \begin{bmatrix} -1 \\ -1 \\ 2 \end{bmatrix}, \quad v_2 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}, \quad v_3 = \begin{bmatrix} -1 \\ 0 \\ 2 \end{bmatrix}.$

One can easily check that $Av_k = \lambda_k v_k$ for k = 1, 2, 3.

Now, let P be the matrix with these eigenvectors as its columns:

$P= \begin{bmatrix} -1 & 0 & -1 \\ -1 & 0 & 0 \\ 2 & 1 & 2 \end{bmatrix}.$

Then P diagonalizes A, as a simple computation confirms:

$P^{-1}AP = \begin{bmatrix} -1 & 0 & -1 \\ -1 & 0 & 0 \\ 2 & 1 & 2 \end{bmatrix}^{-1} \begin{bmatrix} 1 & 2 & 0 \\ 0 & 3 & 0 \\ 2 & -4 & 2 \end{bmatrix} \begin{bmatrix} -1 & 0 & -1 \\ -1 & 0 & 0 \\ 2 & 1 & 2 \end{bmatrix} = \begin{bmatrix} 3 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1\end{bmatrix}.$

Note that the eigenvalues $\lambda_k$ appear on the diagonal of the resulting matrix.
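The computation above can be checked numerically, for instance with NumPy (a sketch; `np.linalg.eig` returns unit-norm eigenvectors, so its P differs from the one above by column scaling and possibly ordering, but $P^{-1}AP$ is still diagonal):

```python
import numpy as np

A = np.array([[1,  2, 0],
              [0,  3, 0],
              [2, -4, 2]], dtype=float)

# Columns of P are eigenvectors; w holds the matching eigenvalues.
w, P = np.linalg.eig(A)

# P^{-1} A P is the diagonal matrix of eigenvalues.
D = np.linalg.inv(P) @ A @ P
assert np.allclose(D, np.diag(w))
assert np.allclose(sorted(w), [1.0, 2.0, 3.0])
```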

### Matrices that are not diagonalizable

Some real matrices are not diagonalizable over the reals. Consider for instance the matrix

$B = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}.$

The matrix B does not have any real eigenvalues, so there is no real matrix Q such that $Q^{-1}BQ$ is a diagonal matrix. However, we can diagonalize B if we allow complex numbers. Indeed, if we take

$Q = \begin{bmatrix} 1 & \textrm{i} \\ \textrm{i} & 1 \end{bmatrix},$

then $Q^{-1}BQ$ is diagonal.

However, there are also matrices that are not diagonalizable even if complex numbers are used. This happens when the geometric and algebraic multiplicities of an eigenvalue do not coincide. For instance, consider

$C = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}.$

This matrix is not diagonalizable: there is no matrix U such that $U^{-1}CU$ is a diagonal matrix. Indeed, C has one eigenvalue (namely zero), and this eigenvalue has algebraic multiplicity 2 and geometric multiplicity 1.
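The multiplicity argument can be checked symbolically, for example with SymPy (a sketch; `eigenvects` returns (eigenvalue, algebraic multiplicity, eigenspace basis) triples):

```python
from sympy import Matrix

C = Matrix([[0, 1],
            [0, 0]])

# Eigenvalue 0 has algebraic multiplicity 2 but only a
# one-dimensional eigenspace (geometric multiplicity 1).
(eigval, alg_mult, eigvecs), = C.eigenvects()
assert eigval == 0 and alg_mult == 2 and len(eigvecs) == 1

# So C is not diagonalizable, even over the complex numbers.
assert not C.is_diagonalizable()

# By contrast, the matrix B from above is diagonalizable
# over C but not over R.
B = Matrix([[0, 1],
            [-1, 0]])
assert B.is_diagonalizable()
assert not B.is_diagonalizable(reals_only=True)
```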

## An application

Diagonalization can be used to compute the powers of a matrix A efficiently, provided the matrix is diagonalizable. Suppose we have found that

$P^{-1}AP = D$

is a diagonal matrix. Then

$A^k = (PDP^{-1})^k = PD^kP^{-1}$

and the latter is easy to calculate since it only involves the powers of a diagonal matrix.
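As a numerical sketch (in NumPy, reusing the 3-by-3 matrix A diagonalized earlier), the power computed through the diagonal factor agrees with direct matrix exponentiation:

```python
import numpy as np

A = np.array([[1,  2, 0],
              [0,  3, 0],
              [2, -4, 2]], dtype=float)

w, P = np.linalg.eig(A)          # A = P D P^{-1} with D = diag(w)

k = 5
# Powers of the diagonal factor are just entrywise powers of w.
A_k = P @ np.diag(w**k) @ np.linalg.inv(P)

assert np.allclose(A_k, np.linalg.matrix_power(A, k))
```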

For example, consider the following matrix:

$M =\begin{bmatrix}a & b-a \\ 0 &b \end{bmatrix}.$

Calculating the various powers of M reveals a surprising pattern:

$M^2 = \begin{bmatrix}a^2 & b^2-a^2 \\ 0 &b^2 \end{bmatrix},\quad M^3 = \begin{bmatrix}a^3 & b^3-a^3 \\ 0 &b^3 \end{bmatrix},\quad M^4 = \begin{bmatrix}a^4 & b^4-a^4 \\ 0 &b^4 \end{bmatrix},\quad \ldots$

The above phenomenon can be explained by diagonalizing M. To accomplish this, we need a basis of R2 consisting of eigenvectors of M. One such eigenvector basis is given by

$\mathbf{u}=\begin{bmatrix} 1 \\ 0 \end{bmatrix}=\mathbf{e}_1,\quad \mathbf{v}=\begin{bmatrix} 1 \\ 1 \end{bmatrix}=\mathbf{e}_1+\mathbf{e}_2,$

where $\mathbf{e}_i$ denotes the standard basis of R2. The reverse change of basis is given by

$\mathbf{e}_1 = \mathbf{u},\qquad \mathbf{e}_2 = \mathbf{v}-\mathbf{u}.$

Straightforward calculations show that

$M\mathbf{u} = a\mathbf{u},\qquad M\mathbf{v}=b\mathbf{v}.$

Thus, a and b are the eigenvalues corresponding to u and v, respectively. By linearity of matrix multiplication, we have that

$M^n \mathbf{u} = a^n\, \mathbf{u},\qquad M^n \mathbf{v}=b^n\,\mathbf{v}.$

Switching back to the standard basis, we have

$M^n \mathbf{e}_1 = M^n \mathbf{u} = a^n \mathbf{e}_1,$
$M^n \mathbf{e}_2 = M^n (\mathbf{v}-\mathbf{u}) = b^n \mathbf{v} - a^n\mathbf{u} = (b^n-a^n) \mathbf{e}_1+b^n\mathbf{e}_2.$

The preceding relations, expressed in matrix form, are

$M^n = \begin{bmatrix}a^n & b^n-a^n \\ 0 &b^n \end{bmatrix},$

thereby explaining the above phenomenon.
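The closed form can be spot-checked numerically (a sketch with arbitrary sample values a = 2, b = 5):

```python
import numpy as np

a, b, n = 2, 5, 6
M = np.array([[a, b - a],
              [0, b]])

# Direct matrix power should match the closed form derived above.
expected = np.array([[a**n, b**n - a**n],
                     [0,    b**n]])
assert np.array_equal(np.linalg.matrix_power(M, n), expected)
```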