Science Fair Project Encyclopedia
In mathematics, especially in applications of linear algebra to physics, the Einstein notation or Einstein summation convention is a notational convention useful when dealing with coordinate equations or formulas.
According to this convention, when an index variable appears twice in a single term, it implies that we are summing over all of its possible values. In typical applications, these are 1,2,3 (for calculations in Euclidean space), or 0,1,2,3 or 1,2,3,4 (for calculations in Minkowski space), but they can have any range, even (in some applications) an infinite set. Furthermore, abstract index notation uses Einstein notation without requiring any range of values.
In general relativity, the Greek alphabet and the Roman alphabet are used to distinguish whether we are summing over 1,2,3 or over 0,1,2,3 (e.g. Roman i, j, ... for 1,2,3 and Greek μ, ν, ... for 0,1,2,3). As with sign conventions, usage varies in practice: Roman and Greek may be reversed.
Sometimes (as in general relativity), the index is required to appear once as a superscript and once as a subscript; in other applications, all indices are subscripts. See Dual vector space and Tensor product.
In the traditional usage, one has in mind a vector space V with finite dimension n, and a specific basis of V. We can write the basis vectors as e_1, e_2, ..., e_n. Then if v is a vector in V, it has coordinates v_1, ..., v_n relative to this basis.
The basic rule is:
- v = v_i e_i.
In this expression, it is assumed that the term on the right side is to be summed as i goes from 1 to n, because the index i appears twice in that single term. Read this way, the equation is indeed true.
The i is known as a dummy index since the result is not dependent on it; thus we could also write, for example:
- v = v_j e_j.
An index that is not summed over is a free index and should be found in each term of the equation or formula.
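As a hypothetical illustration (not part of the convention itself), the implied sum in v = v_i e_i can be spelled out in a few lines of Python; the function name `expand` and the 0-based index range are assumptions of this sketch:

```python
# Expanding the summation convention explicitly: v = v_i e_i means
# summing coords[i] * basis[i] over every value of the repeated index i.
# Indices run 0..n-1 here rather than 1..n.

def expand(coords, basis):
    """Return the vector with components sum_i coords[i] * basis[i][k]."""
    n = len(coords)
    return [sum(coords[i] * basis[i][k] for i in range(n)) for k in range(n)]

# Standard basis of R^3 and coordinates (2, 5, 7).
e = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
v = expand([2, 5, 7], e)  # reconstructs the vector [2, 5, 7]
```

Renaming the loop variable `i` to `j` changes nothing, which is exactly why the summed index is called a dummy index.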
In contexts where the index must appear once as a subscript and once as a superscript, the basis vectors e_i retain subscripts but the coordinates become v^i with superscripts. Then the basic rule is:
- v = v^i e_i.
The value of the Einstein convention is that it applies to other vector spaces built from V using the tensor product and duality. For example, V ⊗ V, the tensor product of V with itself, has a basis consisting of tensors of the form e_ij := e_i ⊗ e_j. Any tensor T in V ⊗ V can be written as:
- T = T^ij e_ij.
V*, the dual of V, has a basis e^1, e^2, ..., e^n, which obeys the rule:
- e^i(e_j) = δ^i_j.
Here δ is the Kronecker delta, so δ^i_j is 1 if i = j and 0 otherwise.
We have also used a superscript for the dual basis, which fits in with a convention requiring summed indices to appear once as a subscript and once as a superscript. In this case, if L is an element in V*, then:
- L = L_i e^i.
If instead every index is required to be a subscript, then a different letter must be used for the dual basis, say d_i := e^i.
The real purpose of the Einstein notation is for formulas and equations that make no mention of the chosen basis. For example, if L and v are as above, then
- L(v) = L_i v^i,
and this is true for every basis. The next few sections contain further examples of such equations.
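A minimal sketch of this pairing in Python (the helper name `pair` is an assumption of the sketch): evaluating a covector on a vector is just a sum of componentwise products, whatever basis the components were taken in.

```python
# L(v) = L_i v^i: sum the products of the covector components L_i
# with the vector components v^i over the repeated index i.

def pair(L, v):
    """Evaluate the covector with components L on the vector with components v."""
    return sum(L_i * v_i for L_i, v_i in zip(L, v))

value = pair([1, 2, 3], [4, 5, 6])  # 1*4 + 2*5 + 3*6 = 32
```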
Elementary vector algebra and matrix algebra
If V is Euclidean n-space R^n, then there is a standard basis for V, in which e_i is (0,...,0,1,0,...,0), with the 1 in the ith position. Then n-by-n matrices can be thought of as elements of V* ⊗ V. We can also think of vectors in V as column vectors, or n-by-1 matrices; elements of V* are row vectors, or 1-by-n matrices.
If H is a matrix and v is a column vector, then Hv is another column vector. To define w := Hv, we can write:
- w_i := H_ij v_j.
Notice that the free index i appears once in every term, while the dummy index j appears twice in a single term.
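The sum over the dummy index can be made explicit in a short Python sketch (the name `matvec` is an assumption); the free index i survives as the index of the result:

```python
# w_i = H_ij v_j: j is the dummy (summed) index, i is the free index
# that labels the components of the resulting column vector.

def matvec(H, v):
    return [sum(H[i][j] * v[j] for j in range(len(v)))
            for i in range(len(H))]

w = matvec([[1, 2], [3, 4]], [5, 6])  # [1*5 + 2*6, 3*5 + 4*6] = [17, 39]
```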
The distributive law, that H(u + v) = Hu + Hv, can be written:
- H_ij (u_j + v_j) = H_ij u_j + H_ij v_j.
This example also indicates the proof of the distributive law, since the index equation makes direct reference only to certain real numbers, and its validity follows directly from the distributive law for real numbers.
The transpose of a column vector is a row vector with the same components, and the transpose of a matrix is another matrix whose components are given by swapping the indices. Suppose that we're interested in the product of v^T and H^T. If w (a row vector) is this product, then:
- w_j = v_i H_ji.
Thus to say that taking the transpose of a product switches the order of multiplication, we can write:
- H_ji v_i = v_i H_ji.
Again, this is obviously true, by the commutative law for real numbers.
The dot product of two vectors u and v can be written:
- u · v = u_i v_i.
If n = 3, then the cross product w := u × v can be written:
- w_i = ε_ijk u_j v_k.
Here, the Levi-Civita symbol ε satisfies: ε_ijk is 1 if (i,j,k) is an even permutation of (1,2,3), −1 if it is an odd permutation, and 0 if it is not a permutation of (1,2,3) at all.
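The double sum hidden in the cross-product formula can be written out in Python (a sketch; the names `levi_civita` and `cross` are assumptions, and indices run 0, 1, 2 rather than 1, 2, 3):

```python
# Cross product via the Levi-Civita symbol: w_i = e_ijk u_j v_k,
# with both dummy indices j and k summed.

def levi_civita(i, j, k):
    """+1 for even permutations of (0, 1, 2), -1 for odd, 0 otherwise."""
    if (i, j, k) in ((0, 1, 2), (1, 2, 0), (2, 0, 1)):
        return 1
    if (i, j, k) in ((0, 2, 1), (2, 1, 0), (1, 0, 2)):
        return -1
    return 0

def cross(u, v):
    return [sum(levi_civita(i, j, k) * u[j] * v[k]
                for j in range(3) for k in range(3))
            for i in range(3)]

w = cross([1, 0, 0], [0, 1, 0])  # [0, 0, 1], i.e. e1 x e2 = e3
```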
You may have noticed in these examples that we often introduced a vector w that would normally not have to be given a specific name using coordinate-free notation. This vector wouldn't need to be given a specific name using only index notation either, but the translation between the notations is easier to describe by giving it a name.
With no implicit inner product
If you review the above examples, you'll find that all of them through the distributive law make sense if a summed index must appear once as a subscript and once as a superscript. But the examples from the transpose on don't make sense in that case. This is because they implicitly use the standard inner product on Euclidean space, while the earlier examples do not.
In some applications, there is no inner product on V. In these cases, requiring a summed index to appear once as a subscript and once as a superscript can help one avoid errors in calculation, in much the same way as dimensional analysis does. Perhaps more significantly, the inner product may be a primary object of study that shouldn't be suppressed in the notation; this is the case, for example, in general relativity. Then the difference between a subscript and a superscript can be quite significant.
When an inner product is explicitly referred to, its components are often referred to as g_ij. Note that g_ij = g_ji. Then the formula for the dot product becomes:
- u · v = g_ij u^i v^j.
We can also lower the index on u^i by defining:
- u_i := g_ij u^j.
Then we have:
- u · v = u_i v^i.
Note that we have implicitly used g_ij = g_ji here.
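A short Python sketch of these two formulas (the names `dot` and `lower`, and the particular symmetric metric chosen, are assumptions): computing g_ij u^i v^j directly agrees with first lowering the index on u and then contracting with v.

```python
# u . v = g_ij u^i v^j, and lowering an index: u_i = g_ij u^j.
# The metric g is assumed symmetric (g_ij = g_ji).

def dot(g, u, v):
    n = len(u)
    return sum(g[i][j] * u[i] * v[j] for i in range(n) for j in range(n))

def lower(g, u):
    n = len(u)
    return [sum(g[i][j] * u[j] for j in range(n)) for i in range(n)]

g = [[2, 1], [1, 3]]          # a symmetric, non-identity metric
u, v = [1, 2], [3, 4]
s = dot(g, u, v)              # g_ij u^i v^j = 40
t = sum(ui * vi for ui, vi in zip(lower(g, u), v))  # u_i v^i = 40
```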
Similarly, we can raise an index using the corresponding inner product on V*. The coordinates of this inner product are g^ij, which is (as a matrix) the inverse of g_ij. If you raise an index and then lower it (or the other way around), then you get back where you started. If you raise the i in g_ij, then you get δ^i_j, and if you raise the j in δ^i_j, then you get g^ij.
If the chosen basis of V is orthonormal, then g_ij = δ_ij and u_i = u^i. In this case, the formula for the dot product from the previous section may be recovered. But if the basis is not orthonormal, then this will not be true; thus, if you're studying the inner product and can't know ahead of time whether a given basis is orthonormal, you'll need to refer to g_ij explicitly. Furthermore, if the inner product is not positive-definite (as is the case, for example, in special relativity), then g_ij = δ_ij will not be true even if the basis is chosen to be orthonormal, since you will sometimes have −1 instead of 1 when i = j. Thus, raising and lowering indices are important operations in these applications.
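This can be sketched with the Minkowski metric of special relativity (the signature convention −,+,+,+ and the sample components are assumptions of this sketch): the basis is orthonormal, yet lowering an index is not the identity.

```python
# With the Minkowski metric, g_ij is diag(-1, 1, 1, 1) rather than the
# identity even in an orthonormal basis, so lowering an index flips
# the sign of the time component.

eta = [[-1, 0, 0, 0],
       [0, 1, 0, 0],
       [0, 0, 1, 0],
       [0, 0, 0, 1]]

u_up = [2, 1, 0, 0]   # components u^i (time component first)
u_down = [sum(eta[i][j] * u_up[j] for j in range(4)) for i in range(4)]
# u_down == [-2, 1, 0, 0]: u_0 = -u^0, while the spatial components agree
```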
See also Bra-ket notation.
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.