Tensors for undergraduate students

Gerardo Urrutia Sánchez∗1

1 Facultad de Ciencias, Universidad Nacional Autónoma de México, Distrito Federal, C.P. 04510, México

November 2013

Abstract

The paper "Tensors: A guide for undergraduate students", Am. J. Phys. 81, 498 (2013); doi: 10.1119/1.4802811, is an interesting publication aimed at students of physics, but some of its treatments seem unnecessary and redundant, which means physics students can get bored reading it. Here I use more familiar notation to make the material more attractive. The treatment starts from Euclidean geometry and is then generalized to an arbitrary metric space. The operators most important in physics are also considered in the tensor description. Reading this article you will discover many surprises, especially concerning the concepts of transformation and derivatives.

1 Introduction

The topic of tensor calculus is obscure for undergraduate students. I am aware that there is much literature on the subject, but sometimes the concepts are not clear enough. Perhaps it is easier to start from concepts that undergraduate students already relate to and then go on to include more sophisticated ones.

A first idea is to change the notation used to denote a vector with multiple entries:

    A \longrightarrow \left( A^0, A^1, A^2, \ldots, A^N \right) = \{ A^i \}    (1)

This tiny change in notation might seem extravagant and unnatural but, as we shall see, it greatly simplifies the manipulations involved. A common notation that we adopt is the Einstein summation rule. As we will see, we need two types of indices, which we distinguish from each other by locating them either in a lower position, as in A_i, or in an upper position, as in A^i. The Einstein summation rule is:

Rule 0: Whenever an index is repeated twice in a product, once in an upper position and once in a lower position, the index is called a dummy index and summation over it is implied.

    \sum_{i=0}^{N} A^i \hat{e}_i = A^i \hat{e}_i    (2)

    A_{ij} B^j{}_k = A_{i0} B^0{}_k + A_{i1} B^1{}_k + A_{i2} B^2{}_k + \cdots + A_{iN} B^N{}_k    (3)

and

    C^i{}_i = C^0{}_0 + C^1{}_1 + C^2{}_2 + \cdots + C^N{}_N    (4)

∗ [email protected]

An index that is not a dummy index is called a free index. Dummy and free indices follow three rules that, as trivial as they might sound, are so crucial and so continuously used that we had better state them explicitly:

Rule 1: In the case of a multiple sum, different letters must denote the dummy indices. An expression such as A_{ij} B^{ij} implies a double sum over the two indices:

    A_{ij} B^{ij} = A_{0j} B^{0j} + A_{1j} B^{1j} + \cdots + A_{Nj} B^{Nj}
                  = A_{00} B^{00} + A_{01} B^{01} + \cdots + A_{0N} B^{0N}
                  + A_{10} B^{10} + A_{11} B^{11} + \cdots + A_{1N} B^{1N}
                    \vdots
                  + A_{N0} B^{N0} + A_{N1} B^{N1} + \cdots + A_{NN} B^{NN}    (5)

Rule 2: A dummy index may be renamed, at will and as convenience requires, within any single factor (as long as there is no conflict with Rule 1):

    A_{ij} B^i = A_{kj} B^k    (6)

Rule 3: Any free index must appear with the same name and position on each side of an equation; thereafter, if one wishes to rename it, it must be done throughout the entire equation.
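In numerical work the summation rule maps directly onto numpy's einsum, whose subscript strings make Rules 0-3 concrete. The following is a minimal sketch with made-up arrays; note that einsum does not track upper versus lower positions, so that bookkeeping remains the reader's job:

```python
import numpy as np

# Toy data for a 3-component space (indices run over 0, 1, 2).
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 1.0],
              [2.0, 0.0, 1.0]])
B = np.array([[1.0, 0.0, 2.0],
              [1.0, 1.0, 0.0],
              [0.0, 2.0, 1.0]])

# Rule 0: a repeated index is summed; C^i_i is the trace, Eq. (4).
trace = np.einsum('ii->', A)

# Rule 1: A_ij B^ij implies a double sum over both index pairs, Eq. (5).
double_sum = np.einsum('ij,ij->', A, B)

# Rule 3: the free index i survives on both sides of A_ij B^j = C_i.
v = B[:, 0]
C = np.einsum('ij,j->i', A, v)
```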

2 Review of Orthonormal Bases

Let {ê_i ; i = 1, 2, 3} be an orthonormal basis spanning the vectors of the ordinary Euclidean three-dimensional (3D) space. The vectors may be associatively and commutatively summed, there is a unique zero-vector, and each vector has its unique opposite. The vectors may be associatively multiplied by real numbers, an operation that is distributive with respect to both the sum of real numbers and the sum of vectors. Between any two vectors is defined a dot (or scalar) product, a commutative rule that associates a real number to each pair of vectors. This dot-product rule is distributive with respect to a linear combination of vectors and provides a non-negative number whenever a vector is dotted with itself (only the zero-vector dotted with itself gives the number zero). A set of vectors {A_i ; i = 1, 2, \ldots, N} is said to be linearly independent if (note that a sum over i is implied)

    C^i A_i = 0 \;\Rightarrow\; C^i = 0 \quad \forall i    (7)

In general, if N is the maximum allowed number of linearly independent vectors, then the space is said to be N-dimensional and, in this case, N linearly independent vectors are said to form a basis: any vector may be written as a linear combination of the basis,

    V = V^i \hat{e}_i    (8)

In our ordinary 3D space a basis of vectors may always be chosen to be orthonormal, i.e.,

    \hat{e}_i \cdot \hat{e}_j = \delta_{ij}    (9)

where \delta_{ij} is the Kronecker delta, a quantity whose 9 components are equal to 0 if i \neq j and equal to 1 if i = j. Two properties of the dot product are

    A_i = \hat{e}_i \cdot A    (10)

and

    A \cdot B = (A^i \hat{e}_i) \cdot (B^j \hat{e}_j) = (A^i B^j)\,(\hat{e}_i \cdot \hat{e}_j) = (A^i B^j)\,\delta_{ij} = A^i B_i    (11)

As done here, and in several calculations that follow, we shall redundantly collect some factors within parentheses with the aim of clarifying the algebra involved without the need for any other explanation.

In the 3D space, a cross (or vector) product is defined as

    A \times B \equiv \det \begin{pmatrix} \hat{e}_1 & \hat{e}_2 & \hat{e}_3 \\ A^1 & A^2 & A^3 \\ B^1 & B^2 & B^3 \end{pmatrix} = \epsilon_{ijk}\, A^i B^j\, \hat{e}_k    (12)

where \epsilon_{ijk} is the Levi-Civita symbol, a quantity whose 27 components are equal to +1 or −1 according to whether (i, j, k) forms an even or odd permutation of the sequence (1, 2, 3), and equal to 0 otherwise (if two or more indices take on the same value).
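The Levi-Civita symbol and Eq. (12) are easy to check numerically. This minimal numpy sketch builds the 27-component symbol explicitly and verifies that the contraction \epsilon_{ijk} A^i B^j reproduces the familiar cross product:

```python
import numpy as np

# Build the 27-component Levi-Civita symbol eps[i, j, k].
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0   # even permutations of (1, 2, 3)
    eps[i, k, j] = -1.0  # odd permutations

A = np.array([1.0, 2.0, 3.0])
B = np.array([4.0, 5.0, 6.0])

# Eq. (12) in components: (A x B)_k = eps_ijk A^i B^j.
C = np.einsum('ijk,i,j->k', eps, A, B)
```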

A field is a function of the space coordinates x \equiv (x^1, x^2, x^3), and possibly of time as well, and may be a scalar or a vector field depending on whether the function is a scalar or a vector. In this section, we denote the differential operator by \partial_i \equiv \partial / \partial x^i. Given a vector field A = A(x) = A^i(x)\, \hat{e}_i, its divergence is the scalar field given by

    \nabla \cdot A = \partial_i A^i    (13)

and its curl is the vector field given by

    \nabla \times A \equiv \det \begin{pmatrix} \hat{e}_1 & \hat{e}_2 & \hat{e}_3 \\ \partial_1 & \partial_2 & \partial_3 \\ A^1 & A^2 & A^3 \end{pmatrix} = \epsilon_{ijk}\, \partial_i A^j\, \hat{e}_k    (14)

Given a scalar field \varphi(x), its gradient is the vector field defined by

    \nabla \varphi(x) = \hat{e}_i\, \partial_i \varphi    (15)

and its Laplacian is defined as the divergence of its gradient, namely

    \Delta \varphi \equiv \nabla^2 \varphi \equiv \nabla \cdot \nabla \varphi = \partial_i \partial_i \varphi    (16)

Why do we need scalars, vectors, and, in general, tensors? What we need in defining a physical quantity is for it to have some character of objectivity, in the sense that it must not depend on the coordinate system used. For instance, a one-component function of position, such as a temperature field specifying the temperature T at each point P of space, must provide a unique value T when the point P is considered, regardless of the coordinate system used.

Not every quantity specified by a single number is a scalar. For example, in our 3D space whose points are parameterized by a Cartesian coordinate system, all displacements from a given point, such as the origin, will be specified by three quantities (x^1, x^2, x^3). Now, the displacement that in one coordinate system is specified, for instance, as (1, 0, 0), would be specified as (0, 1, 0) in a coordinate system rotated clockwise by \pi/2 around the third axis. Thus, each component of the displacement is indeed a single-component quantity, yet these components depend on the coordinate system used (they are not scalars) and do not have that objectivity character we desire. An objectivity-preserving, single-component quantity must transform in a specific way under a coordinate transformation: it must be an invariant.

Likewise, multi-component quantities must have an objectivity character (and are thereby called vectors and tensors), a circumstance that translates into specific rules about how their components must transform in order to preserve that objectivity. As not all single-component quantities are scalars, similarly, not all multi-component quantities are vectors or tensors.

3 Arbitrary Bases: Duality

We consider a set of basis vectors {ê_i ; i = 1, 2, 3} for which

    g_{ij} \equiv \hat{e}_i \cdot \hat{e}_j \neq \delta_{ij}    (17)

It is appropriate to remark at this point that the (symmetric) matrix [g_{ij}], whose elements are g_{ij}, has a determinant G \equiv \det[g_{ij}] \neq 0. In fact, given {ê_i} as a basis, its vectors are linearly independent, so that

    C^i \hat{e}_i = 0 \;\Rightarrow\; C^i = 0 \quad \forall i    (18)

But this equation implies that

    \left( C^i \hat{e}_i \right) \cdot \hat{e}_j = C^i\, (\hat{e}_i \cdot \hat{e}_j) = C^i g_{ij} = 0 \;\Rightarrow\; C^i = 0 \quad \forall i    (19)

However, a homogeneous linear equation such as C^i g_{ij} = 0 possesses only the trivial solution \left( C^i = 0 \right) if and only if \det[g_{ij}] \neq 0. (Incidentally, this implies that [g_{ij}] is a non-singular matrix; in other words, it admits an inverse.) We shall denote the elements of the inverse matrix by g^{ij}, so that

    [g_{ij}]^{-1} \equiv \left[ g^{ij} \right]    (20)

which is equivalent to writing

    [g_{ij}] \left[ g^{ij} \right] = I    (21)

or

    g_{ij}\, g^{jk} = \delta_i^k    (22)

where I is the identity matrix and the indices in the Kronecker delta have been written according to our Rule 3. Clearly, \left[ g^{ij} \right] is also symmetric, and

    \det\left[ g^{ij} \right] = 1 / \det[g_{ij}] = 1/G    (23)
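A short numpy sketch makes these metric relations tangible; the basis below is a hypothetical non-orthonormal choice, introduced only for illustration:

```python
import numpy as np

# A hypothetical non-orthonormal basis of 3D space: rows of E are e_1, e_2, e_3.
E = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 2.0]])

# Metric g_ij = e_i . e_j, Eq. (17): symmetric, with nonzero determinant G.
g = E @ E.T
G = np.linalg.det(g)

# Inverse metric g^ij, Eq. (20); [g_ij][g^ij] = I, Eq. (21).
g_inv = np.linalg.inv(g)
```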

a result that follows from Eq. (21) and the fact that the determinant of a product of matrices is equal to the product of their determinants. Any vector can be expressed as a linear combination of the basis {ê_i}, so that A = A^i \hat{e}_i. Now

    \hat{e}_i \cdot A = \hat{e}_i \cdot \left( A^j \hat{e}_j \right) = (\hat{e}_i \cdot \hat{e}_j)\, A^j = g_{ij} A^j    (24)

and

    A \cdot B = \left( A^i \hat{e}_i \right) \cdot \left( B^j \hat{e}_j \right) = (\hat{e}_i \cdot \hat{e}_j)\, A^i B^j = g_{ij} A^i B^j    (25)

In conclusion, A_i \equiv \hat{e}_i \cdot A = g_{ij} A^j and A \cdot B = g_{ij} A^i B^j = A_j B^j hold even when the basis is not orthonormal. A natural question to ask is whether there exists some other basis, which we provisionally denote by \{\hat{e}^i\}, such that

    A^i = \hat{e}^i \cdot A    (26)

Although this provisional notation is motivated by our desire to follow Rule 3, we will see that such a basis does exist and is unique. Moreover, if we call \{\hat{e}^i\} the dual basis of the original basis \{\hat{e}_i\}, it turns out that the dual of \{\hat{e}^i\} is simply \{\hat{e}_i\}. Now

    A^i = \hat{e}^i \cdot A = \hat{e}^i \cdot \left( A^j \hat{e}_j \right) = \left( \hat{e}^i \cdot \hat{e}_j \right) A^j    (27)

which holds provided the vectors \{\hat{e}^i\} are solutions of the equation

    \hat{e}^i \cdot \hat{e}_j = \delta^i_j    (28)

If the vectors \{\hat{e}^i\} exist, they must be expressible as a linear combination of the basis \{\hat{e}_i\}. Therefore \hat{e}^i = C^{ik} \hat{e}_k, and the left-hand side of Eq. (28) becomes \left( C^{ik} \hat{e}_k \right) \cdot \hat{e}_j = C^{ik}\, (\hat{e}_k \cdot \hat{e}_j) = C^{ik} g_{kj}, which means

    C^{ik} g_{kj} = \delta^i_j    (29)

Because the matrix [g_{ij}] is non-singular, Eq. (29) has the solution \left[ C^{ik} \right] = [g_{ij}]^{-1} = \left[ g^{ik} \right], which means that the unique solution of Eq. (28) is given by

    \hat{e}^i = g^{ij} \hat{e}_j    (30)

The basis \{\hat{e}^i\} is called the dual basis of \{\hat{e}_i\} and is obtained from the latter by transforming it via the matrix inverse of [\hat{e}_i \cdot \hat{e}_j]. Similarly, the dual basis of the basis \{\hat{e}^i\} is obtained from the latter by transforming it via the matrix inverse of \left[ \hat{e}^i \cdot \hat{e}^j \right]. To understand this, we note that Eqs. (30) and (28) lead to

    \hat{e}^i \cdot \hat{e}^j = \left( g^{ik} \hat{e}_k \right) \cdot \hat{e}^j = g^{ik} \left( \hat{e}_k \cdot \hat{e}^j \right) = g^{ik} \delta_k^j = g^{ij}    (31)

so that

    g^{ij} = \hat{e}^i \cdot \hat{e}^j    (32)

which parallels

    g_{ij} = \hat{e}_i \cdot \hat{e}_j    (33)

whereby \left[ \hat{e}^i \cdot \hat{e}^j \right]^{-1} \equiv \left[ g^{ij} \right]^{-1} = [g_{ij}]. Thus, the vector dual to \hat{e}^i is

    g_{ij} \hat{e}^j = g_{ij} \left( g^{jk} \hat{e}_k \right) = \left( g_{ij} g^{jk} \right) \hat{e}_k = \delta_i^k\, \hat{e}_k = \hat{e}_i    (34)

In other words, if the basis dual to \{\hat{e}_i\} is \{\hat{e}^i\}, then the basis dual to the latter is \{\hat{e}_i\}:

    \{ \hat{e}^i = g^{ij} \hat{e}_j \} \Longleftrightarrow \{ \hat{e}_i = g_{ij} \hat{e}^j \}    (35)

We therefore see that once a non-orthonormal basis is considered, another basis, its dual, naturally emerges. Accordingly, any vector A may then be written as

    A = A^j \hat{e}_j = A_j \hat{e}^j    (36)

where the components A^i can be found by dotting \hat{e}^i with Eq. (36) and using Eq. (28) to get

    A^i = A \cdot \hat{e}^i    (37)

Similarly, the components A_i can be found by dotting \hat{e}_i with Eq. (36) to get

    A_i = A \cdot \hat{e}_i    (38)

The components labeled by an upper index are called the contravariant components of A, and those labeled by a lower index are called the covariant components of A. A relation between the contravariant and covariant components may be readily obtained from the equality A^i \hat{e}_i = A_i \hat{e}^i: dotting it with either \hat{e}_j or \hat{e}^j, and using Eq. (28) together with either Eq. (17) or Eq. (32), gives

    A_i = g_{ij} A^j    (39)

    A^i = g^{ij} A_j    (40)

Now

    A \cdot B = g_{ij} A^i B^j = A_i B^i    (41)
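Raising and lowering indices, and the equivalence of the different forms of the dot product in Eq. (41), can be checked in a few lines. The basis and components below are hypothetical illustrative choices:

```python
import numpy as np

# Hypothetical non-orthonormal basis: rows of E are e_1, e_2, e_3.
E = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 2.0]])
g = E @ E.T                  # g_ij
g_inv = np.linalg.inv(g)     # g^ij

A_up = np.array([1.0, 2.0, -1.0])   # contravariant components A^i
B_up = np.array([0.0, 1.0, 3.0])    # contravariant components B^i

A_dn = g @ A_up              # lowering: A_i = g_ij A^j, Eq. (39)
A_up_back = g_inv @ A_dn     # raising:  A^i = g^ij A_j, Eq. (40)

# A . B three equivalent ways, Eq. (41): with the metric, with mixed
# components, and by assembling the actual vectors A^i e_i in Cartesian form.
dot_metric = np.einsum('ij,i,j->', g, A_up, B_up)
dot_mixed = A_dn @ B_up
dot_cartesian = (A_up @ E) @ (B_up @ E)
```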

We notice here that in our 3D space, a ready way to obtain the dual vectors \hat{e}^i of a given basis \{\hat{e}_i\} is by using the relation

    \hat{e}^i = \frac{1}{V}\, \hat{e}_j \times \hat{e}_k    (42)

where (i, j, k) is a cyclic permutation of (1, 2, 3) and V = \hat{e}_i \cdot (\hat{e}_j \times \hat{e}_k) is the volume of the parallelepiped spanned by the vectors of \{\hat{e}_i\}.

To find the contravariant and covariant components of the cross product of two vectors, let us first show that for any given six vectors, here denoted S_1, S_2, S_3, T_1, T_2, T_3, one has

    (S_1 \cdot S_2 \times S_3)\, (T_1 \cdot T_2 \times T_3) = \det[S_i \cdot T_j]    (43)

Using Eqs. (11) and (12), we see that the left-hand side becomes a product of determinants. Applying Eq. (43) to the basis \{\hat{e}_i\}, we get

    V^2 = (\hat{e}_1 \cdot \hat{e}_2 \times \hat{e}_3)^2 = \det[\hat{e}_i \cdot \hat{e}_j] = \det[g_{ij}] \equiv G    (44)

We note that

    \left( \hat{e}^1 \cdot \hat{e}^2 \times \hat{e}^3 \right)^2 = \det\left[ \hat{e}^i \cdot \hat{e}^j \right] = \det\left[ g^{ij} \right] = 1/G    (45)

due to the fact that \det\left[ g^{ij} \right] = 1/\det[g_{ij}]. Armed with Eqs. (44), (45) and (38)-(40), we are now ready to give an expression for the cross product of two vectors, C = A \times B. For the covariant components, we have

    C_k = C \cdot \hat{e}_k = \left( A \times B \right) \cdot \hat{e}_k = A^i B^j\, (\hat{e}_i \times \hat{e}_j) \cdot \hat{e}_k = V \epsilon_{ijk} A^i B^j = \sqrt{G}\, \epsilon_{ijk} A^i B^j    (46)

and similarly, for the contravariant components, we have

    C^k = C \cdot \hat{e}^k = \left( A \times B \right) \cdot \hat{e}^k = A_i B_j \left( \hat{e}^i \times \hat{e}^j \right) \cdot \hat{e}^k = \frac{1}{\sqrt{G}}\, \epsilon^{ijk} A_i B_j    (47)

where \epsilon^{ijk} = \epsilon_{ijk}, with the indices located so as to be consistent with our Rule 0. In conclusion, we have

    A \times B = \sqrt{G}\, \epsilon_{ijk} A^i B^j\, \hat{e}^k = \frac{1}{\sqrt{G}}\, \epsilon^{ijk} A_i B_j\, \hat{e}_k    (48)
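Eq. (46) can be cross-checked against the ordinary Cartesian cross product. The sketch below uses a hypothetical right-handed, non-orthonormal basis (so that V = \sqrt{G} with positive sign) and compares the metric formula with C_k = (A \times B) \cdot \hat{e}_k computed directly:

```python
import numpy as np

# Hypothetical right-handed basis (det E = 2 > 0): rows of E are e_1, e_2, e_3.
E = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 2.0]])
g = E @ E.T
sqrtG = np.sqrt(np.linalg.det(g))   # sqrt(G) = V, Eq. (44)

eps = np.zeros((3, 3, 3))           # Levi-Civita symbol
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

A_up = np.array([1.0, 2.0, -1.0])   # contravariant components A^i
B_up = np.array([0.0, 1.0, 3.0])    # contravariant components B^j

# Covariant components of C = A x B, Eq. (46): C_k = sqrt(G) eps_ijk A^i B^j.
C_dn = sqrtG * np.einsum('ijk,i,j->k', eps, A_up, B_up)

# Direct check: assemble the vectors in Cartesian form and dot with e_k.
C_cartesian = np.cross(A_up @ E, B_up @ E)
C_dn_check = E @ C_cartesian
```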

4 Changing Bases for Tensors

We now wish to move from a given basis \{\hat{e}_\alpha\} to another basis, denoted \{\hat{e}_{\alpha'}\}. From the requirement that \{\hat{e}_{\alpha'}\} be a basis, it follows that each vector \hat{e}_\alpha of the unprimed basis can be written as a linear combination of the primed basis vectors,

    \hat{e}_\alpha = R^{\beta'}_{\ \alpha}\, \hat{e}_{\beta'}    (49)

and, given \{\hat{e}_\alpha\} as a basis, each vector \hat{e}_{\alpha'} of the primed basis can be written as a linear combination of the unprimed basis vectors,

    \hat{e}_{\alpha'} = R^{\beta}_{\ \alpha'}\, \hat{e}_{\beta}    (50)

Inserting Eq. (50) into Eq. (49), we get \hat{e}_\alpha = R^{\beta'}_{\ \alpha} \left( R^{\gamma}_{\ \beta'}\, \hat{e}_\gamma \right) = \left( R^{\beta'}_{\ \alpha} R^{\gamma}_{\ \beta'} \right) \hat{e}_\gamma, whereby

    R^{\beta'}_{\ \alpha} R^{\gamma}_{\ \beta'} = \delta^\gamma_\alpha \quad \text{or} \quad \left[ R^{\beta'}_{\ \alpha} \right] \left[ R^{\beta}_{\ \alpha'} \right] = I    (51)

The matrix \left[ R^{\beta'}_{\ \alpha} \right] is invertible and its inverse is the matrix \left[ R^{\beta}_{\ \alpha'} \right]. In particular, the determinants \det\left[ R^{\beta'}_{\ \alpha} \right] and \det\left[ R^{\beta}_{\ \alpha'} \right] are nonzero and reciprocal to each other.

Any vector can now be written as

    A \equiv A^\alpha \hat{e}_\alpha \equiv A_\alpha \hat{e}^\alpha \equiv A^{\alpha'} \hat{e}_{\alpha'} \equiv A_{\alpha'} \hat{e}^{\alpha'}    (52)

The metric of the primed system is

    g_{\alpha'\beta'} \equiv \hat{e}_{\alpha'} \cdot \hat{e}_{\beta'} = \left( R^{\gamma}_{\ \alpha'} \hat{e}_\gamma \right) \cdot \left( R^{\omega}_{\ \beta'} \hat{e}_\omega \right) = R^{\gamma}_{\ \alpha'} R^{\omega}_{\ \beta'} \left( \hat{e}_\gamma \cdot \hat{e}_\omega \right)    (53)

or

    g_{\alpha'\beta'} = R^{\gamma}_{\ \alpha'} R^{\omega}_{\ \beta'}\, g_{\gamma\omega}    (54)

and

    g^{\alpha'\beta'} = R^{\alpha'}_{\ \gamma} R^{\beta'}_{\ \omega}\, g^{\gamma\omega}    (55)

Now it is easy to guess that

    A^{\alpha'} = R^{\alpha'}_{\ \beta}\, A^\beta    (56)

or

    A_{\alpha'} = R^{\beta}_{\ \alpha'}\, A_\beta    (57)

For the dual basis,

    \hat{e}^{\alpha'} = R^{\alpha'}_{\ \beta}\, \hat{e}^\beta    (58)

The dot product must also be a scalar (invariant):

    A^{\alpha'} B_{\alpha'} = \left( R^{\alpha'}_{\ \gamma} A^\gamma \right) \left( R^{\omega}_{\ \alpha'} B_\omega \right) = \left( R^{\alpha'}_{\ \gamma} R^{\omega}_{\ \alpha'} \right) A^\gamma B_\omega = \delta^\omega_\gamma\, A^\gamma B_\omega = A^\gamma B_\gamma = A^\alpha B_\alpha    (59)

Thus, when going from one basis to another in our 3D space as dictated by the transformation (50), scalar quantities are, by definition, invariant. Or, if we like, quantities that are invariant under transformation (50) are legitimate one-component physical quantities. On the other hand, the components of a legitimate 3-component physical quantity (a vector) must transform according to either Eq. (56) (contravariant components) or Eq. (57) (covariant components).
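The invariance in Eq. (59) is easy to demonstrate numerically: transform contravariant components with a matrix R and covariant components with its inverse, and the contraction does not change. R below is a hypothetical (randomly generated, invertible) transformation:

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.normal(size=(3, 3))       # hypothetical invertible R^alpha'_beta
R_inv = np.linalg.inv(R)          # plays the role of R^alpha_beta'

A_up = np.array([1.0, 2.0, -1.0])  # A^alpha
B_dn = np.array([0.5, 0.0, 3.0])   # B_alpha

# Contravariant components transform with R, Eq. (56);
# covariant components transform with the inverse, Eq. (57).
A_up_primed = R @ A_up
B_dn_primed = R_inv.T @ B_dn

# The contraction A^alpha B_alpha is invariant, Eq. (59).
scalar = A_up @ B_dn
scalar_primed = A_up_primed @ B_dn_primed
```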

We are then led to define a second-rank tensor as a 9-component quantity T whose covariant, contravariant, and mixed components transform, respectively, as

    T_{\alpha'\beta'} = R^{\gamma}_{\ \alpha'} R^{\omega}_{\ \beta'}\, T_{\gamma\omega}    (60)

    T^{\alpha'\beta'} = R^{\alpha'}_{\ \gamma} R^{\beta'}_{\ \omega}\, T^{\gamma\omega}    (61)

    T_{\alpha'}{}^{\beta'} = R^{\gamma}_{\ \alpha'} R^{\beta'}_{\ \omega}\, T_{\gamma}{}^{\omega}    (62)

Equations (60)-(62) may also be considered as the defining relations of, respectively, a (0, 2)-type, (2, 0)-type, and (1, 1)-type second-rank tensor. Contravariant and covariant vectors are (1, 0)-type and (0, 1)-type first-rank tensors, and scalars are (0, 0)-type zero-rank tensors. Meanwhile, (p, q)-type tensors of rank (p + q) may be easily defined by generalizing the relations (56), (57), and (60)-(62).

Now we have \delta_{\alpha'}{}^{\beta'} = R^{\gamma}_{\ \alpha'} R^{\beta'}_{\ \omega}\, \delta_{\gamma}{}^{\omega}. Sure enough, we find that R^{\gamma}_{\ \alpha'} \left( R^{\beta'}_{\ \omega}\, \delta_{\gamma}{}^{\omega} \right) = R^{\gamma}_{\ \alpha'} R^{\beta'}_{\ \gamma} = \delta_{\alpha'}{}^{\beta'}. This result also shows that the Kronecker delta \delta_\alpha{}^\beta has the same components in all coordinate systems, a property not shared by either \delta_{\alpha\beta} or \delta^{\alpha\beta}. For transformations between orthonormal bases (rotations), the transformation matrix additionally has the property R^{-1} = R^T.

A tensor is symmetric if it is invariant under the exchange of two equal-variance (both upper or both lower) indices, such as T_{\alpha\beta} = T_{\beta\alpha}, whereas a tensor is antisymmetric if it changes sign under the exchange of two equal-variance indices, such as T_{\alpha\beta} = -T_{\beta\alpha}. The importance of these properties is due to the fact that they hold in any coordinate system. If T_{\alpha\beta} = \pm T_{\beta\alpha}, then

    T_{\alpha'\beta'} = R^{\gamma}_{\ \alpha'} R^{\omega}_{\ \beta'}\, T_{\gamma\omega} = \pm R^{\gamma}_{\ \alpha'} R^{\omega}_{\ \beta'}\, T_{\omega\gamma} = \pm R^{\omega}_{\ \beta'} R^{\gamma}_{\ \alpha'}\, T_{\omega\gamma} = \pm T_{\beta'\alpha'}    (63)

and similarly for two contravariant indices. From any tensor with two equal-variance indices, such as T_{\alpha\beta}, we may construct a symmetric and an antisymmetric tensor, namely T_{\alpha\beta} \pm T_{\beta\alpha}. Likewise, any such tensor may be written as a sum of a symmetric and an antisymmetric tensor as T_{\alpha\beta} = \frac{1}{2} \left[ (T_{\alpha\beta} + T_{\beta\alpha}) + (T_{\alpha\beta} - T_{\beta\alpha}) \right]. Symmetry/antisymmetry is not defined for two indices of different variance because in this case the property would not be coordinate-independent.
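The symmetric/antisymmetric decomposition is a two-line computation. A minimal sketch with an arbitrary array of components:

```python
import numpy as np

# Arbitrary second-rank components T_ab.
T = np.array([[1.0, 2.0, 0.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

S = 0.5 * (T + T.T)   # symmetric part:      S_ab =  S_ba
W = 0.5 * (T - T.T)   # antisymmetric part:  W_ab = -W_ba
```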

5 Fields

A field is a function of the space coordinates (and possibly of time as well). To represent an objectivity-preserving physical quantity, the field function must have a tensorial character. Specifically, in our ordinary 3D space, it must be a 3^r-component tensor field of rank r, with r = 0, 1, 2, 3, \ldots: a (0, 0)-type tensor (scalar field), a (1, 0)-type tensor (contravariant vector field), a (0, 1)-type tensor (covariant vector field) or, in general, a (p, q)-type tensor field of rank r = p + q. Let us then consider our ordinary Euclidean 3D space parameterized by a set of arbitrary coordinates, which we shall denote by x \equiv \left( x^1, x^2, x^3 \right). We assume that they are related to the Cartesian coordinates, hereafter denoted by x' \equiv \left( x^{1'}, x^{2'}, x^{3'} \right), by an invertible relation, so that

    x^{\alpha'} = x^{\alpha'}\left( x^1, x^2, x^3 \right) \equiv x^{\alpha'}(x)    (64)

and

    x^{\alpha} = x^{\alpha}\left( x^{1'}, x^{2'}, x^{3'} \right) \equiv x^{\alpha}(x')    (65)

The Jacobian determinants are nonzero:

    J = \det\left[ \frac{\partial x^{\alpha'}}{\partial x^{\beta}} \right] = \det\left[ R^{\alpha'}_{\ \beta} \right] \neq 0    (66)

and

    J^{-1} = \det\left[ \frac{\partial x^{\alpha}}{\partial x^{\beta'}} \right] = \det\left[ R^{\alpha}_{\ \beta'} \right] \neq 0    (67)

where we have set

    \frac{\partial x^{\alpha'}}{\partial x^{\beta}} = R^{\alpha'}_{\ \beta} \quad \text{and} \quad \frac{\partial x^{\alpha}}{\partial x^{\beta'}} = R^{\alpha}_{\ \beta'}    (68)

These are the elements of matrices inverse to each other:

    R^{\alpha}_{\ \beta'} R^{\beta'}_{\ \gamma} = \frac{\partial x^{\alpha}}{\partial x^{\beta'}} \frac{\partial x^{\beta'}}{\partial x^{\gamma}} = \frac{\partial x^{\alpha}}{\partial x^{\gamma}} = \delta^{\alpha}_{\gamma}    (69)

Now we have

    \hat{e}_{\alpha'} = \frac{\partial x^{\beta}}{\partial x^{\alpha'}}\, \hat{e}_{\beta} = R^{\beta}_{\ \alpha'}\, \hat{e}_{\beta}    (70)

and

    dx^{\alpha'} = \frac{\partial x^{\alpha'}}{\partial x^{\beta}}\, dx^{\beta} \equiv R^{\alpha'}_{\ \beta}\, dx^{\beta}    (71)

For ease of notation, from here on consider \partial_\alpha \equiv \partial / \partial x^\alpha, because we now differentiate a tensor:

    \partial_{\beta'} A_{\alpha'} = \partial_{\beta'}\left( R^{\gamma}_{\ \alpha'} A_\gamma \right) = R^{\gamma}_{\ \alpha'}\left( \partial_{\beta'} A_\gamma \right) + A_\gamma\, \partial_{\beta'} R^{\gamma}_{\ \alpha'} = R^{\omega}_{\ \beta'} R^{\gamma}_{\ \alpha'}\, \partial_\omega A_\gamma + A_\gamma\, \partial_{\beta'} R^{\gamma}_{\ \alpha'}    (72)

The second term spoils the tensorial transformation rule: unless the R's are constants, the ordinary derivative of a tensor is not itself a tensor.

For a vector in any basis,

    \partial_\beta A = \partial_\beta \left( A^\gamma \hat{e}_\gamma \right) = \left( \partial_\beta A^\gamma \right) \hat{e}_\gamma + A^\gamma \left( \partial_\beta \hat{e}_\gamma \right)    (73)

The last term, \partial_\beta \hat{e}_\gamma, may be rewritten as a linear combination of the basis vectors to give

    \partial_\beta \hat{e}_\gamma = \left( \partial_\beta \hat{e}_\gamma \right)^\alpha \hat{e}_\alpha = \left[ \hat{e}^\alpha \cdot \left( \partial_\beta \hat{e}_\gamma \right) \right] \hat{e}_\alpha \equiv \Gamma^{\alpha}_{\ \beta\gamma}\, \hat{e}_\alpha    (74)

Then

    \Gamma^{\alpha}_{\ \beta\gamma} \equiv \hat{e}^\alpha \cdot \left( \partial_\beta \hat{e}_\gamma \right)    (75)

When \partial_\beta \hat{e}_\gamma = \partial_\gamma \hat{e}_\beta, we have \Gamma^{\alpha}_{\ \beta\gamma} = \Gamma^{\alpha}_{\ \gamma\beta}, i.e., the connection is symmetric in its lower indices; this is the case in torsion-free spaces, such as those considered here.

We define the covariant derivative

    A^{\alpha}_{\ ;\beta} = \partial_\beta A^\alpha + \Gamma^{\alpha}_{\ \beta\gamma}\, A^\gamma    (76)

and

    A_{\alpha;\beta} = \partial_\beta A_\alpha - \Gamma^{\gamma}_{\ \beta\alpha}\, A_\gamma    (77)

The Christoffel symbols have the following property, which can be derived easily with the tools we have learned (the derivation is omitted to save space):

    \Gamma^{\alpha}_{\ \beta\gamma} = \frac{1}{2} g^{\alpha\omega} \left[ \partial_\beta g_{\omega\gamma} + \partial_\gamma g_{\omega\beta} - \partial_\omega g_{\beta\gamma} \right]    (78)
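Eq. (78) can be exercised on a familiar example, polar coordinates on the plane, where g = diag(1, r^2) and the nonzero symbols are known in closed form (\Gamma^r_{\theta\theta} = -r, \Gamma^\theta_{r\theta} = \Gamma^\theta_{\theta r} = 1/r). The sketch takes the metric derivatives by central differences, which happen to be exact here because the metric entries are at most quadratic in r:

```python
import numpy as np

# Polar coordinates x = (r, theta) on the plane: g_ab = diag(1, r^2).
def metric(r, theta):
    return np.array([[1.0, 0.0], [0.0, r * r]])

def christoffel(r, theta, h=1e-4):
    """Gamma^a_bc from Eq. (78), metric derivatives by central differences."""
    x = np.array([r, theta])
    dg = np.zeros((2, 2, 2))          # dg[c, a, b] = d g_ab / d x^c
    for c in range(2):
        xp, xm = x.copy(), x.copy()
        xp[c] += h
        xm[c] -= h
        dg[c] = (metric(*xp) - metric(*xm)) / (2.0 * h)
    g_inv = np.linalg.inv(metric(r, theta))
    # bracket[w, b, c] = d_b g_wc + d_c g_wb - d_w g_bc
    bracket = dg.transpose(1, 0, 2) + dg.transpose(1, 2, 0) - dg
    return 0.5 * np.einsum('aw,wbc->abc', g_inv, bracket)

Gamma = christoffel(2.0, 0.3)   # indices: 0 = r, 1 = theta
```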

The following property is also met:

    \Gamma_{\alpha\beta\gamma} \equiv g_{\alpha\omega} \Gamma^{\omega}_{\ \beta\gamma} = \frac{1}{2} g_{\alpha\omega} g^{\omega\mu} \left[ \partial_\beta g_{\mu\gamma} + \partial_\gamma g_{\mu\beta} - \partial_\mu g_{\beta\gamma} \right] = \frac{1}{2} \delta_{\alpha}^{\mu} \left[ \partial_\beta g_{\mu\gamma} + \partial_\gamma g_{\mu\beta} - \partial_\mu g_{\beta\gamma} \right]    (79)

which gives

    \Gamma_{\alpha\beta\gamma} = g_{\alpha\omega} \Gamma^{\omega}_{\ \beta\gamma} = \frac{1}{2} \left[ \partial_\beta g_{\alpha\gamma} + \partial_\gamma g_{\alpha\beta} - \partial_\alpha g_{\beta\gamma} \right]    (80)

The covariant derivatives of higher-rank tensors are

    A^{\alpha\beta}_{\ \ ;\gamma} \equiv \partial_\gamma A^{\alpha\beta} + \Gamma^{\alpha}_{\ \gamma\omega}\, A^{\omega\beta} + \Gamma^{\beta}_{\ \gamma\omega}\, A^{\alpha\omega}    (81)

    A_{\alpha\beta;\gamma} \equiv \partial_\gamma A_{\alpha\beta} - \Gamma^{\omega}_{\ \alpha\gamma}\, A_{\omega\beta} - \Gamma^{\omega}_{\ \beta\gamma}\, A_{\alpha\omega}    (82)

and

    A^{\beta}_{\ \alpha;\gamma} \equiv \partial_\gamma A^{\beta}_{\ \alpha} + \Gamma^{\beta}_{\ \gamma\omega}\, A^{\omega}_{\ \alpha} - \Gamma^{\omega}_{\ \alpha\gamma}\, A^{\beta}_{\ \omega}    (83)

The covariant derivative of a product follows the same rules as the ordinary derivative. For instance,

    \left( A^\alpha B^\beta \right)_{;\gamma} = A^{\alpha}_{\ ;\gamma}\, B^\beta + B^{\beta}_{\ ;\gamma}\, A^\alpha    (84)

Another important rule concerns the metric:

    g_{\alpha\beta;\gamma} = 0    (85)

We are then led to define the divergence of a vector in an arbitrary coordinate system as

    \nabla \cdot A \equiv A^{\alpha}_{\ ;\alpha} \equiv \partial_\alpha A^\alpha + \Gamma^{\alpha}_{\ \alpha\gamma}\, A^\gamma    (86)

Notice that the divergence of a vector is defined in terms of its contravariant components. If the covariant components are available, then \nabla \cdot A \equiv A^{\alpha}_{\ ;\alpha} = \left( g^{\alpha\beta} A_\beta \right)_{;\alpha} = g^{\alpha\beta} A_{\beta;\alpha}; that is, we may define the divergence of (the components of) a covariant vector to be the divergence of the associated contravariant (components of the) vector.

Remarkably, to evaluate the divergence of a vector, it is not necessary to compute the Christoffel symbols. In fact, the ones appearing in Eq. (86) are

    \Gamma^{\alpha}_{\ \alpha\gamma} = \frac{1}{2} g^{\alpha\omega}\, \partial_\gamma g_{\alpha\omega}    (87)

where the cancellation between the first and third terms on the right-hand side of Eq. (78) arises upon exchanging the dummy indices \alpha and \omega and taking advantage of the symmetry of the metric tensor. Moreover, for the matrix whose elements are g^{\alpha\omega}, we may write

    \left[ g^{\alpha\omega} \right] = [g_{\alpha\omega}]^{-1} = \left[ \frac{G^{\alpha\omega}}{G} \right]    (88)

where G^{\alpha\omega} is the cofactor of the element g_{\alpha\omega}.

Moreover, from the definition of a determinant, expanding along any row, we also have G = \sum_\beta g_{\alpha\beta} G^{\alpha\beta} (no sum over \alpha), whereby \partial G / \partial g_{\alpha\omega} = G^{\alpha\omega} and g^{\alpha\omega} = G^{\alpha\omega}/G = (1/G)\, \partial G / \partial g_{\alpha\omega}. Then Eq. (87) becomes

    \Gamma^{\alpha}_{\ \alpha\gamma} = \frac{1}{2G}\, \frac{\partial G}{\partial g_{\alpha\omega}}\, \partial_\gamma g_{\alpha\omega} = \frac{1}{2G}\, \partial_\gamma G = \frac{\partial_\gamma \left( \sqrt{G} \right)}{\sqrt{G}}    (89)

which can be inserted into Eq. (86) to give

    \nabla \cdot A = \frac{\partial_\alpha \left( \sqrt{G}\, A^\alpha \right)}{\sqrt{G}}    (90)

and, for the Laplacian of a scalar field,

    \nabla^2 \varphi = \frac{\partial_\alpha \left( \sqrt{G}\, g^{\alpha\beta}\, \partial_\beta \varphi \right)}{\sqrt{G}}    (91)
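Eq. (91) can be sanity-checked in polar coordinates, where \sqrt{G} = r and g^{\alpha\beta} = diag(1, 1/r^2). For the test field \varphi = r^2 (that is, x^2 + y^2 in Cartesian terms) the Laplacian should be 4 everywhere. The sketch below evaluates Eq. (91) with nested central differences:

```python
import numpy as np

# Polar coordinates: sqrt(G) = r, inverse metric g^ab = diag(1, 1/r^2).
def sqrtG(r, theta):
    return r

def g_inv(r, theta):
    return np.array([[1.0, 0.0], [0.0, 1.0 / (r * r)]])

def phi(r, theta):
    return r * r          # x^2 + y^2 in Cartesian terms, so laplacian = 4

def grad_phi(r, theta, h=1e-5):
    # Covariant components d_b phi by central differences.
    return np.array([(phi(r + h, theta) - phi(r - h, theta)) / (2 * h),
                     (phi(r, theta + h) - phi(r, theta - h)) / (2 * h)])

def flux(r, theta):
    # F^a = sqrt(G) g^ab d_b phi, the quantity inside d_a in Eq. (91).
    return sqrtG(r, theta) * g_inv(r, theta) @ grad_phi(r, theta)

def laplacian(r, theta, h=1e-4):
    # Eq. (91): (1/sqrt(G)) d_a F^a.
    dFr = (flux(r + h, theta)[0] - flux(r - h, theta)[0]) / (2 * h)
    dFt = (flux(r, theta + h)[1] - flux(r, theta - h)[1]) / (2 * h)
    return (dFr + dFt) / sqrtG(r, theta)
```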

6 Conclusions

Tensor calculus helps us to perform all operations with vectors in space, and also helps us to fully understand those operations. It uses a comfortable and elegant notation. Tensor calculus is essential for understanding topics of modern physics.

References

[1] F. Battaglia and T. F. George, "Tensors: A guide for undergraduate students", Am. J. Phys. 81, 498 (2013); doi: 10.1119/1.4802811.

[2] B. F. Schutz, Geometrical Methods of Mathematical Physics, Cambridge University Press, first edition (1980).

[3] M. Alcubierre, Introduction to 3+1 Numerical Relativity, Oxford University Press (2008).