Dimension
From Wikipedia, the free encyclopedia

Contents

1  2 1/2D
   1.1  References
      1.1.1  Bibliography
2  Abscissa
   2.1  Etymology
   2.2  In parametric equations
   2.3  Examples
   2.4  See also
   2.5  References
   2.6  External links
3  Cartesian coordinate system
   3.1  History
   3.2  Description
      3.2.1  One dimension
      3.2.2  Two dimensions
      3.2.3  Three dimensions
      3.2.4  Higher dimensions
      3.2.5  Generalizations
   3.3  Notations and conventions
      3.3.1  Quadrants and octants
   3.4  Cartesian formulae for the plane
      3.4.1  Distance between two points
      3.4.2  Euclidean transformations
   3.5  Orientation and handedness
      3.5.1  In two dimensions
      3.5.2  In three dimensions
   3.6  Representing a vector in the standard basis
   3.7  Applications
   3.8  See also
   3.9  Notes
   3.10 References
   3.11 Sources
   3.12 Further reading
   3.13 External links
4  Codimension
   4.1  Definition
   4.2  Additivity of codimension and dimension counting
   4.3  Dual interpretation
   4.4  In geometric topology
   4.5  See also
   4.6  References
5  Complex dimension
   5.1  References
6  Complex network zeta function
   6.1  Definition
   6.2  Properties
   6.3  Values for discrete regular lattices
   6.4  Random graph zeta function
   6.5  References
7  Concentration dimension
   7.1  Definition
   7.2  Examples
   7.3  References
8  Degrees of freedom
   8.1  See also
9  Degrees of freedom (physics and chemistry)
   9.1  Definition
   9.2  Degrees of freedom of gas molecules
   9.3  Independent degrees of freedom
   9.4  Quadratic degrees of freedom
      9.4.1  Quadratic and independent degree of freedom
      9.4.2  Equipartition theorem
   9.5  Generalizations
   9.6  References
10 Dimension
   10.1 In mathematics
      10.1.1 Dimension of a vector space
      10.1.2 Manifolds
      10.1.3 Varieties
      10.1.4 Krull dimension
      10.1.5 Lebesgue covering dimension
      10.1.6 Inductive dimension
      10.1.7 Hausdorff dimension
      10.1.8 Hilbert spaces
   10.2 In physics
      10.2.1 Spatial dimensions
      10.2.2 Time
      10.2.3 Additional dimensions
   10.3 Networks and dimension
   10.4 In literature
   10.5 In philosophy
   10.6 More dimensions
   10.7 See also
      10.7.1 Topics by dimension
   10.8 References
   10.9 Further reading
   10.10 External links
11 Dimension (vector space)
   11.1 Examples
   11.2 Facts
   11.3 Generalizations
      11.3.1 Trace
   11.4 See also
   11.5 Notes
   11.6 References
   11.7 External links
12 Dimension of an algebraic variety
   12.1 Dimension of an affine algebraic set
   12.2 Dimension of a projective algebraic set
   12.3 Computation of the dimension
   12.4 Real dimension
   12.5 See also
   12.6 External links
13 Dimension theory (algebra)
   13.1 Basic results
   13.2 Local rings
      13.2.1 Fundamental theorem
      13.2.2 Consequences of the fundamental theorem
      13.2.3 Nagata’s altitude formula
   13.3 Homological methods
      13.3.1 Regular rings
      13.3.2 Depths
      13.3.3 Koszul complex
      13.3.4 Injective dimension and Tor dimensions
   13.4 Multiplicity theory
   13.5 Dimensions of non-commutative rings
   13.6 See also
   13.7 Notes
   13.8 References
14 Dimensional metrology
   14.1 See also
   14.2 References
   14.3 Notes
   14.4 External links
   14.5 Further reading
15 Eight-dimensional space
   15.1 Geometry
      15.1.1 8-polytope
      15.1.2 7-sphere
      15.1.3 Kissing number problem
   15.2 Octonions
   15.3 Biquaternions
   15.4 References
16 Exterior dimension
   16.1 References
17 Five-dimensional space
   17.1 Physics
   17.2 Five-dimensional geometry
      17.2.1 Polytopes
      17.2.2 Hypersphere
   17.3 References
   17.4 See also
   17.5 Further reading
   17.6 External links
18 Flatland
   18.1 Plot
   18.2 Social elements
   18.3 As a social satire
   18.4 Critical reception
   18.5 Editions in print
   18.6 Adaptations and parodies
      18.6.1 In film
      18.6.2 In literature
      18.6.3 In television
   18.7 See also
   18.8 References
   18.9 External links
      18.9.1 Online and downloadable versions of the text
19 Four-dimensional space
   19.1 History
   19.2 Vectors
   19.3 Orthogonality and vocabulary
   19.4 Geometry
      19.4.1 Hypersphere
   19.5 Cognition
   19.6 Dimensional analogy
      19.6.1 Cross-sections
      19.6.2 Projections
      19.6.3 Shadows
      19.6.4 Bounding volumes
      19.6.5 Visual scope
      19.6.6 Limitations
   19.7 See also
   19.8 References
   19.9 Further reading
   19.10 External links
20 Fourth dimension in art
   20.1 Early influence
   20.2 Dimensionist manifesto
   20.3 Crucifixion (Corpus Hypercubus)
   20.4 Abstract art
   20.5 Other forms of art
   20.6 See also
   20.7 References
   20.8 Sources
   20.9 Further reading
   20.10 External links
21 Gelfand–Kirillov dimension
   21.1 Basic facts
   21.2 In the theory of D-Modules
   21.3 References
   21.4 Further reading
22 Global dimension
   22.1 Examples
   22.2 Alternative characterizations
   22.3 References
23 Interdimensional
   23.1 References
24 Isoperimetric dimension
   24.1 Formal definition
   24.2 Examples
   24.3 Isoperimetric dimension of graphs
   24.4 Consequences of isoperimetry
   24.5 References
25 Kaplan–Yorke conjecture
   25.1 Examples
   25.2 References
26 Kodaira dimension
   26.1 The plurigenera
      26.1.1 Interpretations of the Kodaira dimension
      26.1.2 Application
      26.1.3 Dimension 1
      26.1.4 Dimension 2
      26.1.5 Any dimension
   26.2 General type
   26.3 Application to classification
   26.4 The relationship to Moishezon manifolds
   26.5 Notes
   26.6 References
27 Krull dimension
   27.1 Explanation
   27.2 Krull dimension and schemes
   27.3 Examples
   27.4 Krull dimension of a module
   27.5 Krull dimension for non-commutative rings
   27.6 See also
   27.7 Notes
   27.8 Bibliography
28 Matroid rank
   28.1 Properties and axiomatization
   28.2 Other matroid properties from rank
   28.3 Ranks of special matroids
   28.4 See also
   28.5 References
29 Negative-dimensional space
   29.1 Definition
   29.2 History
   29.3 See also
   29.4 References
   29.5 External links
30 Nonlinear dimensionality reduction
   30.1 Linear methods
   30.2 Uses for NLDR
   30.3 Manifold learning algorithms
      30.3.1 Sammon’s mapping
      30.3.2 Self-organizing map
      30.3.3 Principal curves and manifolds
      30.3.4 Autoencoders
      30.3.5 Gaussian process latent variable models
      30.3.6 Curvilinear component analysis
      30.3.7 Curvilinear distance analysis
      30.3.8 Diffeomorphic dimensionality reduction
      30.3.9 Kernel principal component analysis
      30.3.10 Isomap
      30.3.11 Locally-linear embedding
      30.3.12 Laplacian eigenmaps
      30.3.13 Manifold alignment
      30.3.14 Diffusion maps
      30.3.15 Hessian Locally-Linear Embedding (Hessian LLE)
      30.3.16 Modified Locally-Linear Embedding (MLLE)
      30.3.17 Relational perspective map
      30.3.18 Local tangent space alignment
      30.3.19 Local multidimensional scaling
      30.3.20 Maximum variance unfolding
      30.3.21 Nonlinear PCA
      30.3.22 Data-driven high-dimensional scaling
      30.3.23 Manifold sculpting
      30.3.24 t-distributed stochastic neighbor embedding
      30.3.25 RankVisu
      30.3.26 Topologically constrained isometric embedding
   30.4 Methods based on proximity matrices
   30.5 See also
   30.6 References
   30.7 External links
31 One-dimensional space
   31.1 One-dimensional geometry
      31.1.1 Polytopes
      31.1.2 Hypersphere
   31.2 Coordinate systems in one-dimensional space
   31.3 References
32 Ordinate
   32.1 Examples
   32.2 See also
   32.3 References
   32.4 External links
33 Regular sequence
   33.1 Definitions
   33.2 Examples
   33.3 Applications
   33.4 See also
   33.5 Notes
   33.6 References
34 Relative canonical model
   34.1 References
35 Relative dimension
36 Schauder dimension
   36.1 Definitions
   36.2 Properties
   36.3 Examples
      36.3.1 Relation to Fourier series
      36.3.2 Bases for spaces of operators
   36.4 Unconditionality
   36.5 Schauder bases and duality
   36.6 Related concepts
   36.7 See also
   36.8 Notes
   36.9 References
   36.10 Further reading
37 Seven-dimensional space
   37.1 Geometry
      37.1.1 7-polytope
      37.1.2 6-sphere
   37.2 Applications
      37.2.1 Cross product
      37.2.2 Exotic spheres
   37.3 See also
   37.4 References
   37.5 External links
38 Six-dimensional space
   38.1 Geometry
      38.1.1 6-polytope
      38.1.2 5-sphere
      38.1.3 6-sphere
   38.2 Applications
      38.2.1 Transformations in three dimensions
      38.2.2 Rotations in four dimensions
      38.2.3 Plücker coordinates
      38.2.4 Electromagnetism
      38.2.5 String theory
   38.3 Theoretical background
      38.3.1 Bivectors in four dimensions
      38.3.2 6-vectors
      38.3.3 Complex 3-space
   38.4 Footnotes
   38.5 References
39 Two-dimensional space
   39.1 History
   39.2 In geometry
      39.2.1 Coordinate systems
      39.2.2 Polytopes
      39.2.3 Circle
      39.2.4 Other shapes
   39.3 In linear algebra
      39.3.1 Dot product, angle, and length
   39.4 In calculus
      39.4.1 Gradient
      39.4.2 Line integrals and double integrals
      39.4.3 Fundamental theorem of line integrals
      39.4.4 Green’s theorem
   39.5 In topology
   39.6 In graph theory
   39.7 References
   39.8 See also
40 VC dimension
   40.1 Shattering
   40.2 Uses
   40.3 See also
   40.4 References
41 Zero-dimensional space
   41.1 Definition
   41.2 Properties of spaces with covering dimension zero
   41.3 Notes
   41.4 References
   41.5 Text and image sources, contributors, and licenses
      41.5.1 Text
      41.5.2 Images
      41.5.3 Content license

Chapter 1

2 1/2D
2 1/2D is a conventional way of describing a process for obtaining an object in which the third dimension depends in some way on the object’s transversal cross-section.
The term 2 1/2D is used to distinguish processes such as microextrusion and overmoulding from 3D processes (for example injection moulding, casting or forging) and from 2D processes (for example plastics extrusion, extrusion, drawing or dipping).
The product of a 2 1/2D process can, like any other, be described by means of a transversal cross-section and a longitudinal cross-section, but its transversal cross-section varies, generating a variable longitudinal cross-section that is generally simpler than the transversal one.
In other words, the third dimension, and hence the longitudinal cross-section, can be represented as the projection of the transversal cross-section towards the infinity of the horizon.
An example of 2 1/2D processing is 2 1/2D microextrusion, where, thanks to optimal control of the process, the obtained extrudate, which is generally (but not always) a tube, can have:
• a tapered profile, meaning that the diameters or characteristic dimensions vary independently of each other;
• a bump profile, meaning that the dimensions vary in a dependent way, so that they remain proportional;
• a TIE (Totally Intermittent Extrusion) profile, meaning that at least one of the materials composing the cross-section switches to another, allowing a variation of hardness, radio-opacity or other specific characteristics;
• an ILE (Intermittent Layer Extrusion) profile, meaning that the materials composing the cross-section can vary in proportion between themselves; in the case of a tube, for example, the thickness of one layer can change independently of the others.

1.1 References
1.1.1 Bibliography

• Maccagnan, Simone; Perale, Giuseppe; Capelletti, Tiziano; Maccagnan, Davide (2011), The Next Generation
of Microextrusion Technology, UBM Canon


Chapter 2

Abscissa

[Figure: Illustration of a Cartesian coordinate plane. The first value in each ordered pair is the abscissa of the corresponding point.]

In mathematics, an abscissa (/æbˈsɪs.ə/; plural abscissae or abscissæ or abscissas) is the perpendicular distance of
a point from the vertical axis. Usually this is the horizontal coordinate of a point in a two-dimensional rectangular
Cartesian coordinate system. The term can also refer to the horizontal axis (typically x-axis) of a two-dimensional
graph (because that axis is used to define and measure the horizontal coordinates of points in the space). An ordered
pair consists of two terms—the abscissa (horizontal, usually x) and the ordinate (vertical, usually y)—which define
the location of a point in two-dimensional rectangular space.

(x, y), where x is the abscissa and y is the ordinate.

2.1 Etymology
Though the word “abscissa” (Latin; “linea abscissa”, “a line cut off”) has been used at least since De Practica Geometrie
published in 1220 by Fibonacci (Leonardo of Pisa), its use in its modern sense may be due to Venetian mathematician
Stefano degli Angeli in his work Miscellaneum Hyperbolicum, et Parabolicum of 1659.[1]
In his 1892 work Vorlesungen über Geschichte der Mathematik, Volume 2, ("Lectures on history of mathematics")
German historian of mathematics Moritz Cantor writes
“Wir kennen keine ältere Benutzung des Wortes Abssisse in lateinischen Originalschriften [than degli
Angeli’s]. Vielleicht kommt das Wort in Übersetzungen der Apollonischen Kegelschnitte vor, wo Buch
I Satz 20 von ἀποτεμνομέναις die Rede ist, wofür es kaum ein entsprechenderes lateinisches Wort als
abscissa geben möchte.”[2]
“We know no earlier use of the word abscissa in Latin originals [than degli Angeli’s]. Perhaps the word occurs in translations of the Apollonian conics, where in Book I, Proposition 20, ἀποτεμνομέναις appears, for which there could hardly be a more fitting Latin word than abscissa.”

2.2 In parametric equations
In a somewhat obsolete variant usage, the abscissa of a point may also refer to any number that describes the point’s
location along some path, e.g. the parameter of a parametric equation.[3] Used in this way, the abscissa can be
thought of as a coordinate-geometry analog to the independent variable in a mathematical model or experiment (with
any ordinates filling a role analogous to dependent variables).

2.3 Examples
• For the point (2, 3), 2 is called the abscissa and 3 the ordinate.
• For the point (−1.5, −2.5), −1.5 is called the abscissa and −2.5 the ordinate.

2.4 See also
• Ordinate
• Dependent and independent variables
• Function (mathematics)
• Relation (mathematics)
• Line chart

2.5 References
[1] Dyer, Jason (March 8, 2009). “On the Word ‘Abscissa’”. The Number Warrior. https://numberwarrior.wordpress.com. Retrieved September 10, 2015.
[2] Cantor, Moritz (1900). Vorlesungen über Geschichte der Mathematik, Volume 2. Leipzig: B.G. Teubner. p. 898. Retrieved September 10, 2015.
[3] Hedegaard, Rasmus; Weisstein, Eric W. “Abscissa”. MathWorld – A Wolfram Web Resource. Retrieved July 14, 2013.

This article is based on material taken from the Free On-line Dictionary of Computing prior to 1 November 2008
and incorporated under the “relicensing” terms of the GFDL, version 1.3 or later.

2.6 External links
• The dictionary definition of abscissa at Wiktionary

Chapter 3

Cartesian coordinate system

[Figure: Illustration of a Cartesian coordinate plane. Four points are marked and labeled with their coordinates: (2,3) in green, (−3,1) in red, (−1.5,−2.5) in blue, and the origin (0,0) in purple.]

A Cartesian coordinate system is a coordinate system that specifies each point uniquely in a plane by a pair of
numerical coordinates, which are the signed distances from the point to two fixed perpendicular directed lines,
measured in the same unit of length. Each reference line is called a coordinate axis or just axis of the system, and the
point where they meet is its origin, usually at ordered pair (0, 0). The coordinates can also be defined as the positions
of the perpendicular projections of the point onto the two axes, expressed as signed distances from the origin.
One can use the same principle to specify the position of any point in three-dimensional space by three Cartesian
coordinates, its signed distances to three mutually perpendicular planes (or, equivalently, by its perpendicular projection onto three mutually perpendicular lines). In general, n Cartesian coordinates (an element of real n-space)
specify the point in an n-dimensional Euclidean space for any dimension n. These coordinates are equal, up to sign,
to distances from the point to n mutually perpendicular hyperplanes.

[Figure: Cartesian coordinate system with a circle of radius 2 centered at the origin marked in red. The equation of the circle is (x − a)² + (y − b)² = r², where (a, b) are the coordinates of the center and r is the radius.]

The invention of Cartesian coordinates in the 17th century by René Descartes (Latinized name: Cartesius) revolutionized mathematics by providing the first systematic link between Euclidean geometry and algebra. Using the
Cartesian coordinate system, geometric shapes (such as curves) can be described by Cartesian equations: algebraic
equations involving the coordinates of the points lying on the shape. For example, a circle of radius 2 in a plane may
be described as the set of all points whose coordinates x and y satisfy the equation x² + y² = 4.
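
As a quick numerical illustration (an addition to the article text, using only the Python standard library), points of the form (2 cos t, 2 sin t) lie on this circle, and the equation can be checked directly:

from math import cos, sin, isclose, pi

# Sample points of the form (2 cos t, 2 sin t) and check that x^2 + y^2 = 4.
for k in range(8):
    t = k * pi / 4
    x, y = 2 * cos(t), 2 * sin(t)
    assert isclose(x**2 + y**2, 4.0)
print("all sampled points satisfy x^2 + y^2 = 4")
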
Cartesian coordinates are the foundation of analytic geometry, and provide enlightening geometric interpretations
for many other branches of mathematics, such as linear algebra, complex analysis, differential geometry, multivariate
calculus, group theory and more. A familiar example is the concept of the graph of a function. Cartesian coordinates
are also essential tools for most applied disciplines that deal with geometry, including astronomy, physics, engineering
and many more. They are the most common coordinate system used in computer graphics, computer-aided geometric
design and other geometry-related data processing.

3.1 History
The adjective Cartesian refers to the French mathematician and philosopher René Descartes (who used the name
Cartesius in Latin).
The idea of this system was developed in 1637 in writings by Descartes and independently by Pierre de Fermat,
although Fermat also worked in three dimensions and did not publish the discovery.[1] Both authors used a single axis
in their treatments and have a variable length measured in reference to this axis. The concept of using a pair of axes
was introduced later, after Descartes’ La Géométrie was translated into Latin in 1649 by Frans van Schooten and his
students. These commentators introduced several concepts while trying to clarify the ideas contained in Descartes’
work.[2]
The development of the Cartesian coordinate system would play a fundamental role in the development of the calculus
by Isaac Newton and Gottfried Wilhelm Leibniz.[3]
Nicole Oresme, a 14th-century French cleric and friend of the Dauphin (later King Charles V), used constructions
similar to Cartesian coordinates well before the time of Descartes and Fermat.
Many other coordinate systems have been developed since Descartes, such as the polar coordinates for the plane, and
the spherical and cylindrical coordinates for three-dimensional space.

3.2 Description
3.2.1 One dimension

Main article: Number line
Choosing a Cartesian coordinate system for a one-dimensional space—that is, for a straight line—involves choosing
a point O of the line (the origin), a unit of length, and an orientation for the line. An orientation chooses which
of the two half-lines determined by O is the positive, and which is negative; we then say that the line “is oriented”
(or “points”) from the negative half towards the positive half. Then each point P of the line can be specified by its
distance from O, taken with a + or − sign depending on which half-line contains P.
A line with a chosen Cartesian system is called a number line. Every real number has a unique location on the line.
Conversely, every point on the line can be interpreted as a number in an ordered continuum such as the real numbers.

3.2.2 Two dimensions

Further information: Two-dimensional space
The Cartesian coordinate system in two dimensions (also called a rectangular coordinate system) is defined by an
ordered pair of perpendicular lines (axes), a single unit of length for both axes, and an orientation for each axis. (Early
systems allowed “oblique” axes, that is, axes that did not meet at right angles.) The lines are commonly referred to
as the x- and y-axes where the x-axis is taken to be horizontal and the y-axis is taken to be vertical. The point where
the axes meet is taken as the origin for both, thus turning each axis into a number line. For a given point P, a line is
drawn through P perpendicular to the x-axis to meet it at X and a second line is drawn through P perpendicular to the
y-axis to meet it at Y. The coordinates of P are then X and Y interpreted as numbers x and y on the corresponding
number lines. The coordinates are written as an ordered pair (x, y).
The point where the axes meet is the common origin of the two number lines and is simply called the origin. It is
often labeled O and if so then the axes are called Ox and Oy. A plane with x- and y-axes defined is often referred to
as the Cartesian plane or xy-plane. The value of x is called the x-coordinate or abscissa and the value of y is called
the y-coordinate or ordinate.

The choices of letters come from the original convention, which is to use the latter part of the alphabet to indicate
unknown values. The first part of the alphabet was used to designate known values.
In the Cartesian plane, reference is sometimes made to a unit circle or a unit hyperbola.

3.2.3 Three dimensions

Further information: Three-dimensional space
Choosing a Cartesian coordinate system for a three-dimensional space means choosing an ordered triplet of lines (axes) that are pair-wise perpendicular, have a single unit of length for all three axes and have an orientation for each axis. As in the two-dimensional case, each axis becomes a number line. The coordinates of a point P are obtained by drawing a line through P perpendicular to each coordinate axis, and reading the points where these lines meet the axes as three numbers of these number lines.

[Figure: A three-dimensional Cartesian coordinate system, with origin O and axis lines X, Y and Z, oriented as shown by the arrows. The tick marks on the axes are one length unit apart. The black dot shows the point with coordinates x = 2, y = 3, and z = 4, or (2, 3, 4).]
Alternatively, the coordinates of a point P can also be taken as the (signed) distances from P to the three planes
defined by the three axes. If the axes are named x, y, and z, then the x-coordinate is the distance from the plane
defined by the y and z axes. The distance is to be taken with the + or − sign, depending on which of the two halfspaces separated by that plane contains P. The y and z coordinates can be obtained in the same way from the xz- and
xy-planes respectively.

[Figure: The coordinate surfaces of the Cartesian coordinates (x, y, z). The z-axis is vertical and the x-axis is highlighted in green. Thus, the red plane shows the points with x = 1, the blue plane shows the points with z = 1, and the yellow plane shows the points with y = −1. The three surfaces intersect at the point P (shown as a black sphere) with the Cartesian coordinates (1, −1, 1).]

3.2.4 Higher dimensions

A Euclidean plane with a chosen Cartesian system is called a Cartesian plane. Since Cartesian coordinates are unique and non-ambiguous, the points of a Cartesian plane can be identified with pairs of real numbers; that is, with the Cartesian product R² = R × R, where R is the set of all reals. In the same way, the points of any Euclidean space of dimension n can be identified with the tuples (lists) of n real numbers, that is, with the Cartesian product Rⁿ.

3.2.5 Generalizations

The concept of Cartesian coordinates generalizes to allow axes that are not perpendicular to each other, and/or different units along each axis. In that case, each coordinate is obtained by projecting the point onto one axis along a
direction that is parallel to the other axis (or, in general, to the hyperplane defined by all the other axes). In such
an oblique coordinate system the computations of distances and angles must be modified from that in standard
Cartesian systems, and many standard formulas (such as the Pythagorean formula for the distance) do not hold.

3.3 Notations and conventions
The Cartesian coordinates of a point are usually written in parentheses and separated by commas, as in (10, 5) or
(3, 5, 7). The origin is often labelled with the capital letter O. In analytic geometry, unknown or generic coordinates
are often denoted by the letters x and y on the plane, and x, y, and z in three-dimensional space. This custom comes
from a convention of algebra, which uses letters near the end of the alphabet for unknown values (such as the
coordinates of points in many geometric problems), and letters near the beginning for given quantities.
These conventional names are often used in other domains, such as physics and engineering, although other letters
may be used. For example, in a graph showing how a pressure varies with time, the graph coordinates may be denoted
t and p. Each axis is usually named after the coordinate which is measured along it; so one says the x-axis, the y-axis,
the t-axis, etc.
Another common convention for coordinate naming is to use subscripts, as in x₁, x₂, ..., xₙ for the n coordinates in an n-dimensional space, especially when n is greater than 3 or not specified. Some authors prefer the numbering x₀, x₁, ..., xₙ₋₁. These notations are especially advantageous in computer programming: by storing the coordinates of a point as an array, instead of a record, the subscript can serve to index the coordinates.
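
To illustrate the array convention just described, here is a minimal Python sketch (an illustrative addition; the names PointRecord, p and q are hypothetical) contrasting a record-style point, whose coordinates are named fields, with an array-style point, whose coordinates are reached by subscript:

from dataclasses import dataclass

# Record-style point: each coordinate is a separate named field.
@dataclass
class PointRecord:
    x: float
    y: float
    z: float

# Array-style point: the subscript i indexes the i-th coordinate,
# which works uniformly for any number of dimensions.
p = [2.0, 3.0, 4.0]  # x_0, x_1, x_2 in the 0-based numbering
for i, coordinate in enumerate(p):
    print(f"x_{i} = {coordinate}")

q = PointRecord(x=2.0, y=3.0, z=4.0)
print(q.x, q.y, q.z)
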
In mathematical illustrations of two-dimensional Cartesian systems, the first coordinate (traditionally called the
abscissa) is measured along a horizontal axis, oriented from left to right. The second coordinate (the ordinate) is
then measured along a vertical axis, usually oriented from bottom to top.
However, computer graphics and image processing often use a coordinate system with the y axis oriented downwards
on the computer display. This convention developed in the 1960s (or earlier) from the way that images were originally
stored in display buffers.
For three-dimensional systems, a convention is to portray the xy-plane horizontally, with the z axis added to represent
height (positive up). Furthermore, there is a convention to orient the x-axis toward the viewer, biased either to the
right or left. If a diagram (3D projection or 2D perspective drawing) shows the x and y axis horizontally and vertically,
respectively, then the z axis should be shown pointing “out of the page” towards the viewer or camera. In such a 2D
diagram of a 3D coordinate system, the z axis would appear as a line or ray pointing down and to the left or down
and to the right, depending on the presumed viewer or camera perspective. In any diagram or display, the orientation
of the three axes, as a whole, is arbitrary. However, the orientation of the axes relative to each other should always
comply with the right-hand rule, unless specifically stated otherwise. All laws of physics and math assume this right-handedness, which ensures consistency.
For 3D diagrams, the names “abscissa” and “ordinate” are rarely used for x and y, respectively. When they are, the
z-coordinate is sometimes called the applicate. The words abscissa, ordinate and applicate are sometimes used to
refer to coordinate axes rather than the coordinate values.[4]

3.3.1 Quadrants and octants

Main articles: Octant (solid geometry) and Quadrant (plane geometry)
The axes of a two-dimensional Cartesian system divide the plane into four infinite regions, called quadrants, each
bounded by two half-axes. These are often numbered from 1st to 4th and denoted by Roman numerals: I (where the
signs of the two coordinates are +,+), II (−,+), III (−,−), and IV (+,−). When the axes are drawn according to the
mathematical custom, the numbering goes counter-clockwise starting from the upper right (“north-east”) quadrant.
Similarly, a three-dimensional Cartesian system defines a division of space into eight regions or octants, according to
the signs of the coordinates of the points. The convention used for naming a specific octant is to list its signs, e.g. (+
+ +) or (− + −). The generalization of the quadrant and octant to an arbitrary number of dimensions is the orthant,
and a similar naming system applies.
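The sign-based naming described above is easy to mechanize. The short Python sketch below (a hypothetical helper, not part of any standard library) returns the sign pattern of a point, which identifies its quadrant in 2D or its octant in 3D; points lying on an axis get a 0 in the corresponding slot:

def sign_pattern(point):
    """Return a tuple of '+', '-' or '0' for each coordinate of the point."""
    return tuple('+' if c > 0 else '-' if c < 0 else '0' for c in point)

print(sign_pattern((3, 5)))        # ('+', '+')        -> quadrant I
print(sign_pattern((-2, 4)))       # ('-', '+')        -> quadrant II
print(sign_pattern((1, -2, -7)))   # ('+', '-', '-')   -> one of the eight octants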

3.4 Cartesian formulae for the plane
3.4.1 Distance between two points

The four quadrants of a Cartesian coordinate system.

The Euclidean distance between two points of the plane with Cartesian coordinates (x1 , y1 ) and (x2 , y2 ) is

d = √((x2 − x1)² + (y2 − y1)²).
This is the Cartesian version of Pythagoras’s theorem. In three-dimensional space, the distance between points
(x1 , y1 , z1 ) and (x2 , y2 , z2 ) is

d = √((x2 − x1)² + (y2 − y1)² + (z2 − z1)²),

which can be obtained by two consecutive applications of Pythagoras’ theorem.
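As a minimal sketch of these formulas, the following Python function (written here only for illustration) computes the Euclidean distance between two points with any number of coordinates, which reduces to the two formulas above for the plane and for space:

import math

def euclidean_distance(p, q):
    """Distance between two points given as equal-length coordinate sequences."""
    return math.sqrt(sum((qi - pi) ** 2 for pi, qi in zip(p, q)))

print(euclidean_distance((1, 2), (4, 6)))        # 5.0 in the plane
print(euclidean_distance((0, 0, 0), (1, 2, 2)))  # 3.0 in space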

3.4.2 Euclidean transformations

The Euclidean transformations or Euclidean motions are the (bijective) mappings of points of the Euclidean plane to
themselves which preserve distances between points. There are four types of these mappings (also called isometries):
translations, rotations, reflections and glide reflections.[5]
Translation
Translating a set of points of the plane, preserving the distances and directions between them, is equivalent to adding
a fixed pair of numbers (a, b) to the Cartesian coordinates of every point in the set. That is, if the original coordinates
of a point are (x, y), after the translation they will be

(x′ , y ′ ) = (x + a, y + b).
Rotation
To rotate a figure counterclockwise around the origin by some angle θ is equivalent to replacing every point with
coordinates (x,y) by the point with coordinates (x',y'), where

x′ = x cos θ − y sin θ
y ′ = x sin θ + y cos θ.
Thus: (x′ , y ′ ) = ((x cos θ − y sin θ ), (x sin θ + y cos θ )).
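A small Python sketch of the rotation formula (the function name is ours, not from any library) shows the replacement of (x, y) by (x′, y′); rotating (1, 0) by 90° should give (0, 1):

import math

def rotate(x, y, theta):
    """Rotate the point (x, y) counterclockwise about the origin by angle theta (radians)."""
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

print(rotate(1.0, 0.0, math.pi / 2))  # approximately (0.0, 1.0)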
Reflection
If (x, y) are the Cartesian coordinates of a point, then (−x, y) are the coordinates of its reflection across the second
coordinate axis (the Y-axis), as if that line were a mirror. Likewise, (x, −y) are the coordinates of its reflection across
the first coordinate axis (the X-axis). In more generality, reflection across a line through the origin making an angle
θ with the x-axis, is equivalent to replacing every point with coordinates (x, y) by the point with coordinates (x′,y′),
where

x′ = x cos 2θ + y sin 2θ
y ′ = x sin 2θ − y cos 2θ.
Thus: (x′ , y ′ ) = ((x cos 2θ + y sin 2θ ), (x sin 2θ − y cos 2θ )).
Glide reflection
A glide reflection is the composition of a reflection across a line followed by a translation in the direction of that
line. It can be seen that the order of these operations does not matter (the translation can come first, followed by the
reflection).
General matrix form of the transformations
These Euclidean transformations of the plane can all be described in a uniform way by using matrices. The result
(x′ , y ′ ) of applying a Euclidean transformation to a point (x, y) is given by the formula

(x′ , y ′ ) = (x, y)A + b
where A is a 2×2 orthogonal matrix and b = (b1 , b2 ) is an arbitrary ordered pair of numbers;[6] that is,
x′ = xA11 + yA21 + b1
y ′ = xA12 + yA22 + b2 ,
where

A = [ A11  A12 ]
    [ A21  A22 ].

[Note the use of row vectors for point coordinates and that the matrix is written on the right.]


To be orthogonal, the matrix A must have orthogonal rows, each of Euclidean length one, that is,

A11 A21 + A12 A22 = 0
and

A11² + A12² = A21² + A22² = 1.
This is equivalent to saying that A times its transpose must be the identity matrix. If these conditions do not hold, the
formula describes a more general affine transformation of the plane provided that the determinant of A is not zero.
The formula defines a translation if and only if A is the identity matrix. The transformation is a rotation around some
point if and only if A is a rotation matrix, meaning that

A11 A22 − A21 A12 = 1.
A reflection or glide reflection is obtained when,

A11 A22 − A21 A12 = −1.
Assuming that translation is not used, transformations can be combined by simply multiplying the associated transformation matrices.
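The following Python sketch illustrates the row-vector convention used above with a plain 2×2 rotation matrix; it checks the orthogonality conditions and the determinant test for a rotation. It is only an illustration of the formulas, under the stated convention that the point is a row vector multiplied on the right by A, and the helper names are ours:

import math

def rotation_matrix(theta):
    # Rows of A for a counterclockwise rotation by theta, row-vector convention.
    return [[math.cos(theta),  math.sin(theta)],
            [-math.sin(theta), math.cos(theta)]]

def apply(point, A, b=(0.0, 0.0)):
    """Compute (x', y') = (x, y) A + b with A given by rows."""
    x, y = point
    return (x * A[0][0] + y * A[1][0] + b[0],
            x * A[0][1] + y * A[1][1] + b[1])

A = rotation_matrix(math.pi / 2)
# Orthogonality: the rows are perpendicular and of unit length.
assert abs(A[0][0] * A[1][0] + A[0][1] * A[1][1]) < 1e-12
assert abs(A[0][0] ** 2 + A[0][1] ** 2 - 1) < 1e-12
# Determinant +1 marks a rotation (it would be -1 for a reflection).
assert abs(A[0][0] * A[1][1] - A[1][0] * A[0][1] - 1) < 1e-12
print(apply((1.0, 0.0), A))  # approximately (0.0, 1.0)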

Affine transformation
Another way to represent coordinate transformations in Cartesian coordinates is through affine transformations. In
affine transformations an extra dimension is added and all points are given a value of 1 for this extra dimension. The
advantage of doing this is that point translations can be specified in the final column of matrix A. In this way, all of
the Euclidean transformations become expressible as matrix multiplications applied to point vectors. The affine transformation is given
by:

[ A11  A21  b1 ]   [ x ]   [ x′ ]
[ A12  A22  b2 ] · [ y ] = [ y′ ]
[  0    0    1 ]   [ 1 ]   [ 1  ]

[Note the matrix A from above was transposed. The matrix is on the left and column vectors for point coordinates are used.]


Using affine transformations, multiple different Euclidean transformations, including translation, can be combined by
simply multiplying the corresponding matrices.
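As a sketch of this homogeneous-coordinate bookkeeping (NumPy is used purely for matrix multiplication; the helper names are ours), a rotation and a translation can each be packed into a 3×3 matrix and composed by a single matrix product, following the column-vector, matrix-on-the-left convention described above:

import numpy as np

def affine_rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    # 3x3 homogeneous matrix for a rotation about the origin (column-vector convention).
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def affine_translation(a, b):
    # The translation by (a, b) lives in the final column.
    return np.array([[1.0, 0.0, a],
                     [0.0, 1.0, b],
                     [0.0, 0.0, 1.0]])

# Rotate by 90 degrees, then translate by (2, 3): compose by multiplying matrices.
M = affine_translation(2.0, 3.0) @ affine_rotation(np.pi / 2)
p = np.array([1.0, 0.0, 1.0])   # the point (1, 0) in homogeneous coordinates
print(M @ p)                    # approximately [2, 4, 1], i.e. the point (2, 4)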

Scaling
An example of an affine transformation which is not a Euclidean motion is given by scaling. To make a figure larger
or smaller is equivalent to multiplying the Cartesian coordinates of every point by the same positive number m. If (x,
y) are the coordinates of a point on the original figure, the corresponding point on the scaled figure has coordinates

(x′ , y ′ ) = (mx, my).
If m is greater than 1, the figure becomes larger; if m is between 0 and 1, it becomes smaller.


Shearing
A shearing transformation will push the top of a square sideways to form a parallelogram. Horizontal shearing is
defined by:

(x′ , y ′ ) = (x + ys, y)
Shearing can also be applied vertically:

(x′ , y ′ ) = (x, xs + y)
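A short Python sketch of scaling and horizontal shearing as coordinate maps (illustrative helper functions, not library code):

def scale(x, y, m):
    """Uniform scaling about the origin by the positive factor m."""
    return (m * x, m * y)

def shear_horizontal(x, y, s):
    """Horizontal shear: x is displaced in proportion to y."""
    return (x + y * s, y)

print(scale(2.0, 3.0, 1.5))             # (3.0, 4.5)
print(shear_horizontal(1.0, 2.0, 0.5))  # (2.0, 2.0) -- the top of a square slides sideways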

3.5 Orientation and handedness
Main article: Orientation (mathematics)
See also: right-hand rule and Axes conventions

3.5.1 In two dimensions

The right hand rule.

Fixing or choosing the x-axis determines the y-axis up to direction. Namely, the y-axis is necessarily the perpendicular
to the x-axis through the point marked 0 on the x-axis. But there is a choice of which of the two half lines on
the perpendicular to designate as positive and which as negative. Each of these two choices determines a different
orientation (also called handedness) of the Cartesian plane.


The usual way of orienting the axes, with the positive x-axis pointing right and the positive y-axis pointing up (and
the x-axis being the “first” and the y-axis the “second” axis) is considered the positive or standard orientation, also
called the right-handed orientation.
A commonly used mnemonic for defining the positive orientation is the right hand rule. Placing a somewhat closed
right hand on the plane with the thumb pointing up, the fingers point from the x-axis to the y-axis, in a positively
oriented coordinate system.
The other way of orienting the axes is following the left hand rule, placing the left hand on the plane with the thumb
pointing up.

3D Cartesian Coordinate Handedness

When the thumb points away from the origin along an axis, toward the positive direction, the curl of the fingers indicates
a positive rotation about that axis.
Regardless of the rule used to orient the axes, rotating the coordinate system will preserve the orientation. Swapping
any two axes, or reversing the direction of one axis, reverses the orientation; doing either twice (or doing both once) leaves the orientation unchanged.

3.5.2 In three dimensions

Once the x- and y-axes are specified, they determine the line along which the z-axis should lie, but there are two
possible directions on this line. The two possible coordinate systems which result are called 'right-handed' and 'left-handed'. The standard orientation, where the xy-plane is horizontal and the z-axis points up (and the x- and the y-axis
form a positively oriented two-dimensional coordinate system in the xy-plane if observed from above the xy-plane)
is called right-handed or positive.
The name derives from the right-hand rule. If the index finger of the right hand is pointed forward, the middle finger
bent inward at a right angle to it, and the thumb placed at a right angle to both, the three fingers indicate the relative
directions of the x-, y-, and z-axes in a right-handed system. The thumb indicates the x-axis, the index finger the
y-axis and the middle finger the z-axis. Conversely, if the same is done with the left hand, a left-handed system
results.
Figure 7 depicts a left and a right-handed coordinate system. Because a three-dimensional object is represented on
the two-dimensional screen, distortion and ambiguity result. The axis pointing downward (and to the right) is also
meant to point towards the observer, whereas the “middle” axis is meant to point away from the observer. The red
circle is parallel to the horizontal xy-plane and indicates rotation from the x-axis to the y-axis (in both cases). Hence
the red arrow passes in front of the z-axis.
Fig. 7 – The left-handed orientation is shown on the left, and the right-handed on the right.

Fig. 8 – The right-handed Cartesian coordinate system indicating the coordinate planes: the xy plane (z = 0), the xz plane (y = 0) and the yz plane (x = 0), together with the x-, y- and z-axes and the origin.

Figure 8 is another attempt at depicting a right-handed coordinate system. Again, there is an ambiguity caused
by projecting the three-dimensional coordinate system into the plane. Many observers see Figure 8 as “flipping in
and out” between a convex cube and a concave “corner”. This corresponds to the two possible orientations of the
coordinate system. Seeing the figure as convex gives a left-handed coordinate system. Thus the “correct” way to view
Figure 8 is to imagine the x-axis as pointing towards the observer and thus seeing a concave corner.

3.6 Representing a vector in the standard basis
A point in space in a Cartesian coordinate system may also be represented by a position vector, which can be thought
of as an arrow pointing from the origin of the coordinate system to the point.[7] If the coordinates represent spatial
positions (displacements), it is common to represent the vector from the origin to the point of interest as r . In two
dimensions, the vector from the origin to the point with Cartesian coordinates (x, y) can be written as:

r = xi + yj
where i = (1, 0) and j = (0, 1) are unit vectors in the direction of the x-axis and y-axis respectively, generally
referred to as the standard basis (in some application areas these may also be referred to as versors). Similarly, in
three dimensions, the vector from the origin to the point with Cartesian coordinates (x, y, z) can be written as:[8]

r = xi + yj + zk

where k = (0, 0, 1) is the unit vector in the direction of the z-axis.
There is no natural interpretation of multiplying vectors to obtain another vector that works in all dimensions; however,
there is a way to use complex numbers to provide such a multiplication. In a two-dimensional Cartesian plane, identify
the point with coordinates (x, y) with the complex number z = x + iy. Here, i is the imaginary unit and is identified
with the point with coordinates (0, 1), so it is not the unit vector in the direction of the x-axis. Since the complex
numbers can be multiplied giving another complex number, this identification provides a means to “multiply” vectors.
In a three-dimensional Cartesian space, a similar identification can be made with a subset of the quaternions.
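The identification of (x, y) with the complex number x + iy can be tried directly in Python, whose built-in complex type uses j for the imaginary unit; multiplying by a unit complex number rotates the corresponding vector. This is only a small illustrative check, not a general vector product:

import cmath

# The point (1, 0) as a complex number, and a unit complex number of argument 90 degrees.
z = complex(1, 0)
w = cmath.exp(1j * cmath.pi / 2)

print(z * w)       # approximately 1j, i.e. the point (0, 1): a 90-degree rotation
print(abs(z * w))  # lengths multiply, so the length stays 1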

3.7 Applications
Cartesian coordinates are an abstraction with a multitude of possible applications in the real world. However,
three constructive steps are involved in superimposing coordinates on a problem application: 1) units of distance
must be decided, defining the spatial size represented by the numbers used as coordinates; 2) an origin must be
assigned to a specific spatial location or landmark; and 3) the orientation of the axes must be defined using available
directional cues for (n − 1) of the n axes.
Consider as an example superimposing 3D Cartesian coordinates over all points on the Earth (i.e. geospatial 3D).
What units make sense? Kilometers are a good choice, since the original definition of the kilometer was geospatial, with 10,000 km equalling the surface distance from the Equator to the North Pole. Where to place the origin? Based
on symmetry, the gravitational center of the Earth suggests a natural landmark (which can be sensed via satellite orbits). Finally, how to orient X, Y and Z axis directions? The axis of Earth’s spin provides a natural direction strongly
associated with “up vs. down”, so positive Z can adopt the direction from geocenter to North Pole. A location on
the Equator is needed to define the X-axis, and the Prime Meridian stands out as a reference direction, so the X-axis
takes the direction from geocenter out to [ 0 degrees longitude, 0 degrees latitude ]. Note that with 3 dimensions,
and two perpendicular axes directions pinned down for X and Z, the Y-axis is determined by the first two choices. In
order to obey the right hand rule, the Y-axis must point out from the geocenter to [ 90 degrees longitude, 0 degrees
latitude ]. So what are the geocentric coordinates of the Empire State Building in New York City? Using [ longitude
= −73.985656, latitude = 40.748433 ], Earth radius = 40,000/2π km, and transforming from spherical to Cartesian
coordinates, you can estimate the geocentric coordinates of the Empire State Building, [ x, y, z ] = [ 1330.53 km,
–4635.75 km, 4155.46 km ]. GPS navigation relies on such geocentric coordinates.
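A sketch of the spherical-to-Cartesian conversion used in this example is given below. It treats the Earth as a perfect sphere of radius 40,000/2π km, exactly as in the text; real geodetic systems such as WGS84 use an ellipsoidal model, so actual GPS coordinates would differ slightly, and the function name is ours:

import math

def geocentric_from_lat_lon(lat_deg, lon_deg, radius_km=40000.0 / (2 * math.pi)):
    """Spherical-Earth conversion from latitude/longitude (degrees) to geocentric x, y, z (km)."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    x = radius_km * math.cos(lat) * math.cos(lon)
    y = radius_km * math.cos(lat) * math.sin(lon)
    z = radius_km * math.sin(lat)
    return x, y, z

# Empire State Building, as in the text: roughly (1330.53, -4635.75, 4155.46) km.
print(geocentric_from_lat_lon(40.748433, -73.985656))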
In engineering projects, agreement on the definition of coordinates is a crucial foundation. One cannot assume that
coordinates come predefined for a novel application, so knowledge of how to erect a coordinate system where there
is none is essential to applying René Descartes’ ingenious thinking.


While spatial applications employ identical units along all axes, in business and scientific applications each axis may have different
units of measurement associated with it (such as kilograms, seconds, pounds, etc.). Although four- and higher-dimensional spaces are difficult to visualize, the algebra of Cartesian coordinates can be extended relatively easily
to four or more variables, so that certain calculations involving many variables can be done. (This sort of algebraic
extension is what is used to define the geometry of higher-dimensional spaces.) Conversely, it is often helpful to use
the geometry of Cartesian coordinates in two or three dimensions to visualize algebraic relationships between two or
three of many non-spatial variables.
The graph of a function or relation is the set of all points satisfying that function or relation. For a function of one
variable, f, the set of all points (x, y) where y = f(x) is the graph of the function f. For a function g of two variables,
the set of all points (x, y, z), where z = g(x, y) is the graph of the function g. A sketch of the graph of such a function
or relation would consist of all the salient parts of the function or relation which would include its relative extrema,
its concavity and points of inflection, any points of discontinuity and its end behavior. All of these terms are more
fully defined in calculus. Such graphs are useful in calculus to understand the nature and behavior of a function or
relation.

3.8 See also
• Horizontal and vertical
• Jones diagram, which plots four variables rather than two.
• Orthogonal coordinates
• Polar coordinate system
• Spherical coordinate system

3.9 Notes
3.10 References
[1] “Analytic geometry”. Encyclopædia Britannica (Encyclopædia Britannica Online ed.). 2008.
[2] Burton 2011, p. 374
[3] A Tour of the Calculus, David Berlinski
[4] Springer online reference Encyclopedia of Mathematics
[5] Smart 1998, Chap. 2
[6] Brannan, Esplen & Gray 1998, p. 49
[7] Brannan, Esplen & Gray 1998, Appendix 2, pp. 377–382
[8] David J. Griffiths (1999). Introduction to Electrodynamics. Prentice Hall. ISBN 0-13-805326-X.

3.11 Sources
• Brannan, David A.; Esplen, Matthew F.; Gray, Jeremy J. (1998), Geometry, Cambridge: Cambridge University
Press, ISBN 0-521-59787-0
• Burton, David M. (2011), The History of Mathematics/An Introduction (7th ed.), New York: McGraw-Hill,
ISBN 978-0-07-338315-6
• Smart, James R. (1998), Modern Geometries (5th ed.), Pacific Grove: Brooks/Cole, ISBN 0-534-35188-3


3.12 Further reading
• Descartes, René (2001). Discourse on Method, Optics, Geometry, and Meteorology. Trans. by Paul J. Oscamp
(Revised ed.). Indianapolis, IN: Hackett Publishing. ISBN 0-87220-567-3. OCLC 488633510.
• Korn GA, Korn TM (1961). Mathematical Handbook for Scientists and Engineers (1st ed.). New York:
McGraw-Hill. pp. 55–79. LCCN 59-14456. OCLC 19959906.
• Margenau H, Murphy GM (1956). The Mathematics of Physics and Chemistry. New York: D. van Nostrand.
LCCN 55-10911.
• Moon P, Spencer DE (1988). “Rectangular Coordinates (x, y, z)". Field Theory Handbook, Including Coordinate Systems, Differential Equations, and Their Solutions (corrected 2nd, 3rd print ed.). New York: Springer-Verlag. pp. 9–11 (Table 1.01). ISBN 978-0-387-18430-2.
• Morse PM, Feshbach H (1953). Methods of Theoretical Physics, Part I. New York: McGraw-Hill. ISBN
0-07-043316-X. LCCN 52-11515.
• Sauer R, Szabó I (1967). Mathematische Hilfsmittel des Ingenieurs. New York: Springer Verlag. LCCN 67-25285.

3.13 External links
• Cartesian Coordinate System
• Printable Cartesian Coordinates
• Cartesian coordinates at PlanetMath.org.
• MathWorld description of Cartesian coordinates
• Coordinate Converter – converts between polar, Cartesian and spherical coordinates
• Coordinates of a point Interactive tool to explore coordinates of a point
• open source JavaScript class for 2D/3D Cartesian coordinate system manipulation

Chapter 4

Codimension
In mathematics, codimension is a basic geometric idea that applies to subspaces in vector spaces, to submanifolds in
manifolds, and to suitable subsets of algebraic varieties.
The dual concept is relative dimension.

4.1 Definition
Codimension is a relative concept: it is only defined for one object inside another. There is no “codimension of a
vector space (in isolation)”, only the codimension of a vector subspace.
If W is a linear subspace of a finite-dimensional vector space V, then the codimension of W in V is the difference
between the dimensions:

codim(W ) = dim(V ) − dim(W ).
It is the complement of the dimension of W, in that, with the dimension of W, it adds up to the dimension of the
ambient space V:

dim(W ) + codim(W ) = dim(V ).
Similarly, if N is a submanifold or subvariety in M, then the codimension of N in M is

codim(N ) = dim(M ) − dim(N ).
Just as the dimension of a submanifold is the dimension of the tangent bundle (the number of dimensions that you
can move on the submanifold), the codimension is the dimension of the normal bundle (the number of dimensions
you can move off the submanifold).
More generally, if W is a linear subspace of a (possibly infinite dimensional) vector space V then the codimension of
W in V is the dimension (possibly infinite) of the quotient space V/W, which is more abstractly known as the cokernel
of the inclusion. For finite-dimensional vector spaces, this agrees with the previous definition

codim(W ) = dim(V /W ) = dim coker(W → V ) = dim(V ) − dim(W ),
and is dual to the relative dimension as the dimension of the kernel.
Finite-codimensional subspaces of infinite-dimensional spaces are often useful in the study of topological vector
spaces.
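For finite-dimensional subspaces given by spanning vectors, the codimension can be computed numerically as dim(V) minus the rank of the spanning set. The NumPy sketch below is only an illustration of the definition, with an arbitrary example subspace and a helper name of our own choosing:

import numpy as np

def codimension(spanning_vectors, ambient_dim):
    """codim(W) = dim(V) - dim(W), with dim(W) taken as the rank of a spanning set of W."""
    W = np.array(spanning_vectors, dtype=float)
    return ambient_dim - np.linalg.matrix_rank(W)

# A plane spanned by two independent vectors inside R^3 has codimension 1.
print(codimension([[1, 0, 0], [0, 1, 1]], ambient_dim=3))  # 1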

4.2 Additivity of codimension and dimension counting
The fundamental property of codimension lies in its relation to intersection: if W 1 has codimension k1 , and W 2 has
codimension k2 , then if U is their intersection with codimension j we have
max (k1 , k2 ) ≤ j ≤ k1 + k2 .
In fact j may take any integer value in this range. This statement is more perspicuous than the translation in terms of
dimensions, because the RHS is just the sum of the codimensions. In words
codimensions (at most) add.
If the subspaces or submanifolds intersect transversally (which occurs generically), codimensions add
exactly.
This statement is called dimension counting, particularly in intersection theory.

4.3 Dual interpretation
In terms of the dual space, it is quite evident why codimensions add. The subspaces can be defined by the vanishing of
a certain number of linear functionals, which, if we take them to be linearly independent, number exactly the codimension.
Therefore we see that U is defined by taking the union of the sets of linear functionals defining the Wᵢ. That union may
introduce some degree of linear dependence: the possible values of j express that dependence, with the RHS sum being
the case where there is no dependence. This definition of codimension in terms of the number of functions needed
to cut out a subspace extends to situations in which both the ambient space and subspace are infinite dimensional.
In other language, which is basic for any kind of intersection theory, we are taking the union of a certain number of
constraints. We have two phenomena to look out for:
1. the two sets of constraints may not be independent;
2. the two sets of constraints may not be compatible.
The first of these is often expressed as the principle of counting constraints: if we have a number N of parameters
to adjust (i.e. we have N degrees of freedom), and a constraint means we have to 'consume' a parameter to satisfy
it, then the codimension of the solution set is at most the number of constraints. We do not expect to be able to find
a solution if the predicted codimension, i.e. the number of independent constraints, exceeds N (in the linear algebra
case, there is always a trivial, null vector solution, which is therefore discounted).
The second is a matter of geometry, on the model of parallel lines; it is something that can be discussed for linear
problems by methods of linear algebra, and for non-linear problems in projective space, over the complex number
field.

4.4 In geometric topology
Codimension also has some clear meaning in geometric topology: on a manifold, codimension 1 is the dimension of
topological disconnection by a submanifold, while codimension 2 is the dimension of ramification and knot theory.
In fact, the theory of high-dimensional manifolds, which starts in dimension 5 and above, can alternatively be said to
start in codimension 3, because higher codimensions avoid the phenomenon of knots. Since surgery theory requires
working up to the middle dimension, once one is in dimension 5, the middle dimension has codimension greater than
2, and hence one avoids knots.
This quip is not vacuous: the study of embeddings in codimension 2 is knot theory, and difficult, while the study of
embeddings in codimension 3 or more is amenable to the tools of high-dimensional geometric topology, and hence
considerably easier.


4.5 See also
• glossary of differential geometry and topology

4.6 References
• Hazewinkel, Michiel, ed. (2001), “Codimension”, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4

Chapter 5

Complex dimension
In mathematics, complex dimension usually refers to the dimension of a complex manifold M, or a complex algebraic
variety V.[1] If the complex dimension is d, the real dimension will be 2d.[2] That is, the smooth manifold M has
dimension 2d; and away from any singular point V will also be a smooth manifold of dimension 2d.
However, for a real algebraic variety (that is a variety defined by equations with real coefficients), its dimension refers
commonly to its complex dimension, and its real dimension refers to the maximum of the dimensions of the manifolds
contained in the set of its real points. The real dimension is not greater than the dimension, and equals it if the variety
is irreducible and has real points that are nonsingular. For example, the equation x² + y² + z² = 0 defines a variety of
(complex) dimension 2 (a surface), but of real dimension 0 — it has only one real point, (0, 0, 0), which is singular.[3]
The same points apply to codimension. For example a smooth complex hypersurface in complex projective space of
dimension n will be a manifold of dimension 2(n − 1). A complex hyperplane does not separate a complex projective
space into two components, because it has real codimension 2.

5.1 References
[1] Cavagnaro, Catherine; Haight, William T., II (2001), Dictionary of Classical and Theoretical Mathematics, CRC Press, p.
22, ISBN 9781584880509.
[2] Marsden, Jerrold E.; Ratiu, Tudor S. (1999), Introduction to Mechanics and Symmetry: A Basic Exposition of Classical
Mechanical Systems, Texts in Applied Mathematics 17, Springer, p. 152, ISBN 9780387986432.
[3] Bates, Daniel J.; Hauenstein, Jonathan D.; Sommese, Andrew J.; Wampler, Charles W. (2013), Numerically Solving Polynomial Systems with Bertini, Software, Environments, and Tools 25, SIAM, p. 225, ISBN 9781611972702.


Chapter 6

Complex network zeta function
Different definitions have been given for the dimension of a complex network or graph. For example, metric dimension
is defined in terms of the resolving set for a graph. Dimension has also been defined based on the box covering
method applied to graphs.[1] Here we describe the definition based on the complex network zeta function.[2] This
generalises the definition based on the scaling property of the volume with distance.[3] The best definition depends
on the application.

6.1 Definition
One usually thinks of dimension for a set which is dense, like the points on a line, for example. Dimension makes
sense in a discrete setting, like for graphs, only in the large system limit, as the size tends to infinity. For example, in
Statistical Mechanics, one considers discrete points which are located on regular lattices of different dimensions. Such
studies have been extended to arbitrary networks, and it is interesting to consider how the definition of dimension can
be extended to cover these cases. A very simple and obvious way to extend the definition of dimension to arbitrary
large networks is to consider how the volume (number of nodes within a given distance from a specified node) scales
as the distance (shortest path connecting two nodes in the graph) is increased. For many systems arising in physics,
this is indeed a useful approach. This definition of dimension could be put on a strong mathematical foundation,
similar to the definition of Hausdorff dimension for continuous systems. The mathematically robust definition uses
the concept of a zeta function for a graph. The complex network zeta function and the graph surface function were
introduced to characterize large graphs. They have also been applied to study patterns in Language Analysis. In this
section we will briefly review the definition of the functions and discuss further some of their properties which follow
from the definition.
We denote by rij the distance from node i to node j , i.e., the length of the shortest path connecting the first node to
the second node. rij is ∞ if there is no path from node i to node j . With this definition, the nodes of the complex
network become points in a metric space.[2] Simple generalisations of this definition can be studied, e.g., we could
consider weighted edges. The graph surface function, S(r) , is defined as the number of nodes which are exactly at a
distance r from a given node, averaged over all nodes of the network. The complex network zeta function ζG (α) is
defined as

ζG (α) := (1/N) ∑_i ∑_{j≠i} rij^(−α) ,

where N is the graph size, measured by the number of nodes. When α is zero all nodes contribute equally to the sum
in the previous equation. This means that ζG (0) is N − 1 , and it diverges when N → ∞ . When the exponent α
tends to infinity, the sum gets contributions only from the nearest neighbours of a node. The other terms tend to zero.
Thus, ζG (α) tends to the average degree < k > for the graph as α → ∞ .

⟨k⟩ = lim_{α→∞} ζG (α).


The need for taking an average over all nodes can be avoided by using the concept of supremum over nodes, which
makes the concept much easier to apply for formally infinite graphs.[4] The definition can be expressed as a weighted
sum over the node distances. This gives the Dirichlet series relation

ζG (α) = ∑_r S(r)/r^α .

This definition has been used in the shortcut model to study several processes and their dependence on dimension.
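The definition can be evaluated directly on a small graph. The Python sketch below computes all-pairs shortest-path distances by breadth-first search and then sums rij^(−α) as in the formula above; the example graph and helper names are ours, chosen only for illustration, and pairs with no connecting path (infinite distance) are simply skipped since they contribute nothing for α > 0:

from collections import deque

def bfs_distances(adjacency, source):
    """Shortest-path (hop-count) distances from source in an unweighted graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for neighbour in adjacency[node]:
            if neighbour not in dist:
                dist[neighbour] = dist[node] + 1
                queue.append(neighbour)
    return dist

def network_zeta(adjacency, alpha):
    """zeta_G(alpha) = (1/N) * sum over i of sum over j != i of r_ij ** (-alpha)."""
    nodes = list(adjacency)
    total = 0.0
    for i in nodes:
        for j, r in bfs_distances(adjacency, i).items():
            if j != i:
                total += r ** (-alpha)
    return total / len(nodes)

# A 4-cycle: every node has two neighbours at distance 1 and one node at distance 2.
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(network_zeta(cycle4, 0.0))   # 3.0, i.e. N - 1
print(network_zeta(cycle4, 50.0))  # close to 2.0, the average degree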

6.2 Properties
ζG (α) is a decreasing function of α , ζG (α1 ) > ζG (α2 ) , if α1 < α2 . If the average degree of the nodes (the mean
coordination number for the graph) is finite, then there is exactly one value of α , αtransition , at which the complex
network zeta function transitions from being infinite to being finite. This has been defined as the dimension of the
complex network. If we add more edges to an existing graph, the distances between nodes will decrease. This results
in an increase in the value of the complex network zeta function, since S(r) will get pulled inward. If the new links
connect remote parts of the system, i.e., if the distances change by amounts which do not remain finite as the graph
size N → ∞ , then the dimension tends to increase. For regular discrete d-dimensional lattices Zd with distance
defined using the L1 norm

‖n‖₁ = |n1 | + · · · + |nd |,
the transition occurs at α = d . The definition of dimension using the complex network zeta function satisfies
properties like monotonicity (a subset has a lower or the same dimension as its containing set), stability (a union of
sets has the maximum dimension of the component sets forming the union) and Lipschitz invariance,[5] provided
the operations involved change the distances between nodes only by finite amounts as the graph size N goes to ∞ .
Algorithms to calculate the complex network zeta function have been presented.[6]

6.3 Values for discrete regular lattices
For a one-dimensional regular lattice the graph surface function S1 (r) is exactly two for all values of r (there are two
nearest neighbours, two next-nearest neighbours, and so on). Thus, the complex network zeta function ζG (α) is equal
to 2ζ(α) , where ζ(α) is the usual Riemann zeta function. By choosing a given axis of the lattice and summing over
cross-sections for the allowed range of distances along the chosen axis the recursion relation below can be derived

Sd+1 (r) = 2 + Sd (r) + 2 ∑_{i=1}^{r−1} Sd (i).

From combinatorics the surface function for a regular lattice can be written[7] as

Sd (r) = ∑_{i=0}^{d−1} (−1)^i 2^(d−i) C(d, i) C(d + r − i − 1, d − i − 1),

where C(n, k) denotes the binomial coefficient.

The following expression for the sum of positive integers raised to a given power k will be useful to calculate the
surface function for higher values of d :

∑_{i=1}^{r} i^k = r^(k+1)/(k + 1) + r^k/2 + ∑_{j=1}^{⌊(k+1)/2⌋} (−1)^(j+1) 2ζ(2j) k! r^(k+1−2j) / ((2π)^(2j) (k + 1 − 2j)!).

Another formula for the sum of positive integers raised to a given power k is

∑_{k=1}^{n} C(n + 1, k) ∑_{i=1}^{r} i^k = (r + 1)((r + 1)^n − 1).

Sd (r) → O(2^d r^(d−1) /Γ(d)) as r → ∞ .
The Complex network zeta function for some lattices is given below.
d = 1 : ζG (α) = 2ζ(α)
d = 2 : ζG (α) = 4ζ(α − 1)
d = 3 : ζG (α) = 4ζ(α − 2) + 2ζ(α)
d = 4 : ζG (α) = (8/3)ζ(α − 3) + (16/3)ζ(α − 1)

r → ∞ : ζG (α) = 2^d ζ(α − d + 1)/Γ(d) (for α near the transition point.)

6.4 Random graph zeta function
Random graphs are networks having some number N of vertices, in which each pair is connected with probability p
, or else the pair is disconnected. Random graphs have a diameter of two with probability approaching one, in the
infinite limit ( N → ∞ ). To see this, consider two nodes A and B . For any node C different from A or B , the
probability that C is not simultaneously connected to both A and B is (1 − p²). Thus, the probability that none of
the N − 2 nodes provides a path of length 2 between nodes A and B is (1 − p²)^(N−2). This goes to zero as the system
size goes to infinity, and hence most random graphs have their nodes connected by paths of length at most 2 . Also,
the mean vertex degree will be p(N − 1) . For large random graphs almost all nodes are at a distance of one or two
from any given node, S(1) is p(N − 1) , S(2) is (N − 1)(1 − p) , and the graph zeta function is
ζG (α) = p(N − 1) + (N − 1)(1 − p) 2^(−α) .

6.5 References
[1] K.-I. Goh, G. Salvi, B. Kahng and D. Kim, Phys. Rev. Lett. 96, 018701 (2006).
[2] O. Shanker (2007). “Graph Zeta Function and Dimension of Complex Network”. Modern Physics Letters B 21 (11):
639–644. Bibcode:2007MPLB...21..639S. doi:10.1142/S0217984907013146.
[3] O. Shanker (2007). “Defining Dimension of a Complex Network”. Modern Physics Letters B 21 (6): 321–326. Bibcode:2007MPLB...21..321S.
doi:10.1142/S0217984907012773.
[4] O. Shanker (2010). “Complex Network Dimension and Path Counts”. Theoretical Computer Science 411 (26–28): 2454–
2458. doi:10.1016/j.tcs.2010.02.013.
[5] K. Falconer, Fractal Geometry: Mathematical Foundations and Applications, Wiley, second edition, 2003

[6] O. Shanker, (2008). “Algorithms for Fractal Dimension Calculation”. Modern Physics Letters B 22 (7): 459–466. Bibcode:2008MPLB...22..459S
doi:10.1142/S0217984908015048.
[7] O. Shanker (2008). “Sharp dimension transition in a shortcut model”. J. Phys. A: Math. Theor. 41 (28): 285001.
Bibcode:2008JPhA...41B5001S. doi:10.1088/1751-8113/41/28/285001.

Chapter 7

Concentration dimension
In mathematics — specifically, in probability theory — the concentration dimension of a Banach space-valued
random variable is a numerical measure of how “spread out” the random variable is compared to the norm on the
space.

7.1 Definition
Let (B, || ||) be a Banach space and let X be a Gaussian random variable taking values in B. That is, for every linear
functional ℓ in the dual space B∗ , the real-valued random variable ⟨ℓ, X⟩ has a normal distribution. Define

σ(X) = sup { √(E[⟨ℓ, X⟩²]) : ℓ ∈ B∗ , ∥ℓ∥ ≤ 1 }.

Then the concentration dimension d(X) of X is defined by

d(X) = E[∥X∥²] / σ(X)².

7.2 Examples
• If B is n-dimensional Euclidean space Rn with its usual Euclidean norm, and X is a standard Gaussian random
variable, then σ(X) = 1 and E[∥X∥²] = n, so d(X) = n.
• If B is Rn with the supremum norm, then σ(X) = 1 but E[∥X∥²] (and hence d(X)) is of the order of log(n).
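These two examples can be checked numerically by Monte Carlo. The NumPy sketch below estimates E[∥X∥²] by sampling a standard Gaussian vector and uses σ(X) = 1, which holds in both cases; it is only a rough numerical illustration of the definition, with arbitrary sample sizes:

import numpy as np

def estimate_concentration_dimension(n, norm, samples=20000, seed=0):
    """Monte Carlo estimate of d(X) = E[||X||^2] / sigma(X)^2, assuming sigma(X) = 1."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((samples, n))
    return np.mean(norm(X) ** 2)

n = 100
euclidean = lambda X: np.linalg.norm(X, axis=1)
supremum = lambda X: np.max(np.abs(X), axis=1)

print(estimate_concentration_dimension(n, euclidean))  # close to n = 100
print(estimate_concentration_dimension(n, supremum))   # of the order of log(n), much smaller than n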

7.3 References
• Ledoux, Michel; Talagrand, Michel (1991), Probability in Banach spaces: Isoperimetry and processes, Ergebnisse der Mathematik und ihrer Grenzgebiete 23, Berlin: Springer-Verlag, p. 237, doi:10.1007/978-3-64220212-4, ISBN 3-540-52013-9, MR 1102015.
• Pisier, Gilles (1989), The volume of convex bodies and Banach space geometry, Cambridge Tracts in Mathematics 94, Cambridge University Press, Cambridge, pp. 42–43, doi:10.1017/CBO9780511662454, ISBN
0-521-36465-5, MR 1036275.


Chapter 8

Degrees of freedom
In many scientific fields, the degrees of freedom of a system is the number of parameters of the system that may
vary independently. For example, a point in the plane has two degrees of freedom for translation: its two coordinates;
a non-infinitesimal object on the plane might have additional degrees of freedom related to its orientation.
In mathematics, this notion is formalized as the dimension of a manifold or an algebraic variety. When degrees of
freedom is used instead of dimension, this usually means that the manifold or variety that models the system is only
implicitly defined. See:
• Degrees of freedom (mechanics), number of independent motions that are allowed to the body or, in case of
a mechanism made of several bodies, number of possible independent relative motions between the pieces of
the mechanism
• Degrees of freedom (physics and chemistry), a term used in explaining dependence on parameters, or the
dimensions of a phase space
• Degrees of freedom (statistics), the number of values in the final calculation of a statistic that are free to vary
• Degrees of freedom problem, the problem of controlling motor movement given abundant degrees of freedom

8.1 See also
• Six degrees of freedom


Chapter 9

Degrees of freedom (physics and
chemistry)
This article is about physics and chemistry. For other fields, see Degrees of freedom.
In physics, a degree of freedom is an independent physical parameter in the formal description of the state of a
physical system. The set of all dimensions of a system is known as a phase space, and degrees of freedom are
sometimes referred to as its dimensions.

9.1 Definition
A degree of freedom of a physical system refers to a (typically real) parameter that is necessary to characterize the
state of a physical system.
Consider a point particle that is free to move in three dimensions. The location of any particle in three-dimensional
space can be specified by three position coordinates: x, y, and z. The direction and speed at which a particle moves
can be described in terms of three velocity components, e.g. vx, vy, and vz. If the time evolution of the system is
deterministic, where the state at one instant uniquely determines its past and future position and velocity as a function
of time, such a system will have six degrees of freedom. If the motion of the particle is constrained to a lower number
of dimensions – if, for example, the particle must move along a wire or on a fixed surface – then the system will have
fewer than six degrees of freedom. On the other hand, a system with an extended object that may rotate or vibrate
can have more than six degrees of freedom. A force on the particle that depends only upon time and the particle’s
position and velocity fits this description.
In mechanics, a point particle's state at any given time can be described with position and velocity coordinates in the
Lagrangian formalism, or with position and momentum coordinates in the Hamiltonian formalism.
Similarly, in statistical mechanics, a degree of freedom is a single scalar number describing the microstate of a
system.[1] The specification of all microstates of a system is a point in the system’s phase space.
A degree of freedom may be any useful property that is not dependent on other variables. For example, in the 3D
ideal chain model, two angles are necessary to describe each monomer’s orientation.
In statistical mechanics and thermodynamics, it is often useful to specify quadratic degrees of freedom. These are
degrees of freedom that contribute in a quadratic way to the energy of the system. They are also variables that
contribute quadratically to the Hamiltonian.

9.2 Degrees of freedom of gas molecules
In three-dimensional space, three degrees of freedom are associated with the movement of a particle. A diatomic
gas molecule thus has 6 degrees of freedom. This set may be decomposed in terms of translations, rotations, and
vibrations of the molecule. The center of mass motion of the entire molecule accounts for 3 degrees of freedom.

Different ways of visualizing the 6 degrees of freedom of a diatomic molecule. (CM: center of mass of the system, T: translational motion, R: rotational motion, V: vibrational motion.)

In addition, the molecule has two rotational degrees of motion and one vibrational mode. The rotations occur around the
two axes perpendicular to the line between the two atoms. The rotation around the atom–atom bond is not a physical
rotation. This yields, for a diatomic molecule, a decomposition of:

3N = 6 = 3 + 2 + 1.
For a general (non-linear) molecule with N > 2 atoms, all 3 rotational degrees of freedom are considered, resulting
in the decomposition:
3N = 3 + 3 + (3N − 6)
which means that an N-atom molecule has 3N − 6 vibrational degrees of freedom for N > 2. In special cases, such
as adsorbed large molecules, the rotational degrees of freedom can be limited to only one.[2]
As defined above one can also count degrees of freedom using the minimum number of coordinates required to
specify a position. This is done as follows:
1. For a single particle we need 2 coordinates in a 2-D plane to specify its position and 3 coordinates in 3-D space.
Thus its degree of freedom in a 3-D space is 3.


2. For a body consisting of 2 particles (ex. a diatomic molecule) in a 3-D space with constant distance between
them (let’s say d) we can show (below) its degrees of freedom to be 5.
Let’s say one particle in this body has coordinate (x1 , y1 , z1 ) and the other has coordinate (x2 , y2 , z2 ) with z2 unknown.
Application of the formula for distance between two coordinates

d = √((x2 − x1)² + (y2 − y1)² + (z2 − z1)²)

results in one equation with one unknown, in which we can solve for z2 . One of x1 , x2 , y1 , y2 , z1 , or z2 can be
unknown.
Contrary to the classical equipartition theorem, at room temperature, the vibrational motion of molecules typically
makes negligible contributions to the heat capacity. This is because these degrees of freedom are frozen because
the spacing between the energy eigenvalues exceeds the energy corresponding to ambient temperatures (kBT). In the
following table such degrees of freedom are disregarded because of their low effect on total energy. However, at very
high temperatures they cannot be neglected.

9.3 Independent degrees of freedom
The set of degrees of freedom X1 , ... , XN of a system is independent if the energy associated with the set can be
written in the following form:

E = ∑_{i=1}^{N} Ei (Xi ),

where Eᵢ is a function of the sole variable Xᵢ.
example: if X1 and X2 are two degrees of freedom, and E is the associated energy:
• If E = X1⁴ + X2⁴ , then the two degrees of freedom are independent.
• If E = X1⁴ + X1 X2 + X2⁴ , then the two degrees of freedom are not independent. The term
involving the product of X1 and X2 is a coupling term, that describes an interaction between the
two degrees of freedom.
For i from 1 to N, the value of the ith degree of freedom Xᵢ is distributed according to the Boltzmann distribution.
Its probability density function is the following:

pi (Xi ) = e^(−Ei /(kB T)) / ∫ dXi e^(−Ei /(kB T))

In this section, and throughout the article the brackets ⟨⟩ denote the mean of the quantity they enclose.
The internal energy of the system is the sum of the average energies associated with each of the degrees of freedom:

⟨E⟩ = ∑_{i=1}^{N} ⟨Ei ⟩.

9.4 Quadratic degrees of freedom
A degree of freedom Xᵢ is quadratic if the energy terms associated with this degree of freedom can be written as

E = αi Xi² + βi Xi Y
where Y is a linear combination of other quadratic degrees of freedom.
example: if X1 and X2 are two degrees of freedom, and E is the associated energy:
• If E = X1⁴ + X1³X2 + X2⁴ , then the two degrees of freedom are not independent and non-quadratic.
• If E = X1⁴ + X2⁴ , then the two degrees of freedom are independent and non-quadratic.
• If E = X1² + X1 X2 + 2X2² , then the two degrees of freedom are not independent but are quadratic.
• If E = X1² + 2X2² , then the two degrees of freedom are independent and quadratic.
For example, in Newtonian mechanics, the dynamics of a system of quadratic degrees of freedom are controlled by
a set of homogeneous linear differential equations with constant coefficients.

9.4.1 Quadratic and independent degree of freedom

X1 , ... , XN are quadratic and independent degrees of freedom if the energy associated with a microstate of the
system they represent can be written as:

E = ∑_{i=1}^{N} αi Xi²

9.4.2 Equipartition theorem

In the classical limit of statistical mechanics, at thermodynamic equilibrium, the internal energy of a system of N
quadratic and independent degrees of freedom is:

U = ⟨E⟩ = N kB T / 2

Here, the mean energy associated with a degree of freedom is:



⟨Ei ⟩ = ∫ dXi αi Xi² pi (Xi )
      = [ ∫ dXi αi Xi² e^(−αi Xi²/(kB T)) ] / [ ∫ dXi e^(−αi Xi²/(kB T)) ]
      = (kB T/2) · [ ∫ dx x² e^(−x²/2) ] / [ ∫ dx e^(−x²/2) ]
      = kB T/2

Since the degrees of freedom are independent, the internal energy of the system is equal to the sum of the mean
energy associated with each degree of freedom, which demonstrates the result.
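The result can be illustrated numerically: for a quadratic degree of freedom E = α X², the Boltzmann distribution above is a Gaussian with variance kB T / (2α), so the sampled mean of α X² should come out near kB T / 2. The parameter values in the sketch below are arbitrary, chosen only to make the numbers manageable:

import numpy as np

k_B = 1.380649e-23   # J/K
T = 300.0            # K
alpha = 2.5e20       # arbitrary stiffness (illustrative value, not from the text)

rng = np.random.default_rng(1)
# The Boltzmann distribution for E = alpha * X^2 is Gaussian with variance k_B * T / (2 * alpha).
X = rng.normal(0.0, np.sqrt(k_B * T / (2.0 * alpha)), size=200000)

print(np.mean(alpha * X ** 2))  # close to k_B * T / 2
print(k_B * T / 2)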

9.5 Generalizations
The description of a system’s state as a point in its phase space, although mathematically convenient, is thought to be
fundamentally inaccurate. In quantum mechanics, the motion degrees of freedom are superseded with the concept
of the wave function, and operators which correspond to other degrees of freedom have discrete spectra. For example,
the intrinsic angular momentum operator (which corresponds to the rotational freedom) for an electron or photon has
only two eigenvalues. This discreteness becomes apparent when the action has an order of magnitude of the Planck
constant, and individual degrees of freedom can be distinguished.


9.6 References
[1] Reif, F. (2009). Fundamentals of Statistical and Thermal Physics. Long Grove, IL: Waveland Press, Inc. p. 51. ISBN
1-57766-612-7.
[2] Thomas Waldmann, Jens Klein, Harry E. Hoster, R. Jürgen Behm (2012), “Stabilization of Large Adsorbates by Rotational
Entropy: A Time-Resolved Variable-Temperature STM Study” (in German), ChemPhysChem: pp. n/a–n/a, doi:10.1002/cphc.201200531

Chapter 10

Dimension
This article is about dimensions of space. For the dimension of a quantity, see Dimensional analysis. For other uses,
see Dimension (disambiguation).
In physics and mathematics, the dimension of a mathematical space (or object) is informally defined as the minimum

From left to right: the square, the cube and the tesseract. The two-dimensional (2d) square is bounded by one-dimensional (1d) lines;
the three-dimensional (3d) cube by two-dimensional areas; and the four-dimensional (4d) tesseract by three-dimensional volumes.
For display on a two-dimensional surface such as a screen, the 3d cube and 4d tesseract require projection.

The first four spatial dimensions.

number of coordinates needed to specify any point within it.[1][2] Thus a line has a dimension of one because only
one coordinate is needed to specify a point on it – for example, the point at 5 on a number line. A surface such as a
plane or the surface of a cylinder or sphere has a dimension of two because two coordinates are needed to specify a
point on it – for example, both a latitude and longitude are required to locate a point on the surface of a sphere. The
inside of a cube, a cylinder or a sphere is three-dimensional because three coordinates are needed to locate a point
within these spaces.

In classical mechanics, space and time are different categories and refer to absolute space and time. That conception
of the world is a four-dimensional space but not the one that was found necessary to describe electromagnetism. The
four dimensions of spacetime consist of events that are not absolutely defined spatially and temporally, but rather are
known relative to the motion of an observer. Minkowski space first approximates the universe without gravity; the
pseudo-Riemannian manifolds of general relativity describe spacetime with matter and gravity. Ten dimensions are
used to describe string theory, and the state-space of quantum mechanics is an infinite-dimensional function space.
The concept of dimension is not restricted to physical objects. High-dimensional spaces frequently occur in mathematics and the sciences. They may be parameter spaces or configuration spaces such as in Lagrangian or Hamiltonian
mechanics; these are abstract spaces, independent of the physical space we live in.

10.1 In mathematics
In mathematics, the dimension of an object is an intrinsic property independent of the space in which the object is
embedded. For example, a point on the unit circle in the plane can be specified by two Cartesian coordinates, but
a single polar coordinate (the angle) would be sufficient, so the circle is 1-dimensional even though it exists in the
2-dimensional plane. This intrinsic notion of dimension is one of the chief ways the mathematical notion of dimension
differs from its common usages.
The dimension of Euclidean n-space En is n. When trying to generalize to other types of spaces, one is faced with
the question “what makes En n-dimensional?" One answer is that to cover a fixed ball in En by small balls of radius ε,
one needs on the order of ε^(−n) such small balls. This observation leads to the definition of the Minkowski dimension
and its more sophisticated variant, the Hausdorff dimension, but there are also other answers to that question. For
example, the boundary of a ball in En looks locally like En−1 and this leads to the notion of the inductive dimension.
While these notions agree on En , they turn out to be different when one looks at more general spaces.
A tesseract is an example of a four-dimensional object. Whereas outside mathematics the use of the term “dimension”
is as in: “A tesseract has four dimensions", mathematicians usually express this as: “The tesseract has dimension 4",
or: “The dimension of the tesseract is 4”.
Although the notion of higher dimensions goes back to René Descartes, substantial development of a higher-dimensional
geometry only began in the 19th century, via the work of Arthur Cayley, William Rowan Hamilton, Ludwig Schläfli
and Bernhard Riemann. Riemann’s 1854 Habilitationsschrift, Schläfli’s 1852 Theorie der vielfachen Kontinuität,
Hamilton’s 1843 discovery of the quaternions and the construction of the Cayley algebra marked the beginning of
higher-dimensional geometry.
The rest of this section examines some of the more important mathematical definitions of the dimensions.

10.1.1 Dimension of a vector space

Main article: Dimension (vector space)
The dimension of a vector space is the number of vectors in any basis for the space, i.e. the number of coordinates
necessary to specify any vector. This notion of dimension (the cardinality of a basis) is often referred to as the Hamel
dimension or algebraic dimension to distinguish it from other notions of dimension.

10.1.2 Manifolds

A connected topological manifold is locally homeomorphic to Euclidean n-space, and the number n is called the
manifold’s dimension. One can show that this yields a uniquely defined dimension for every connected topological
manifold.
For connected differentiable manifolds, the dimension is also the dimension of the tangent vector space at any point.
In geometric topology, the theory of manifolds is characterized by the way dimensions 1 and 2 are relatively elementary; the high-dimensional cases n > 4 are simplified by having extra space in which to “work”; and the cases
n = 3 and 4 are in some senses the most difficult. This state of affairs was highly marked in the various cases of the
Poincaré conjecture, where four different proof methods are applied.

10.1.3 Varieties

Main article: Dimension of an algebraic variety
The dimension of an algebraic variety may be defined in various equivalent ways. The most intuitive way is probably
the dimension of the tangent space at any regular point. Another intuitive way is to define the dimension as the number
of hyperplanes that are needed in order to have an intersection with the variety that is reduced to a finite number of
points (dimension zero). This definition is based on the fact that the intersection of a variety with a hyperplane reduces
the dimension by one unless the hyperplane contains the variety.
An algebraic set being a finite union of algebraic varieties, its dimension is the maximum of the dimensions of its
components. It is equal to the maximal length of the chains V0 ⊊ V1 ⊊ . . . ⊊ Vd of sub-varieties of the given
algebraic set (the length of such a chain is the number of " ⊊ ").
Each variety can be considered as an algebraic stack, and its dimension as variety agrees with its dimension as stack.
There are however many stacks which do not correspond to varieties, and some of these have negative dimension.
Specifically, if V is a variety of dimension m and G is an algebraic group of dimension n acting on V, then the quotient
stack [V/G] has dimension m−n.[3]

10.1.4 Krull dimension

Main article: Krull dimension
The Krull dimension of a commutative ring is the maximal length of chains of prime ideals in it, a chain of length n
being a sequence P0 ⊊ P1 ⊊ . . . ⊊ Pn of prime ideals related by inclusion. It is strongly related to the dimension
of an algebraic variety, because of the natural correspondence between sub-varieties and prime ideals of the ring of
the polynomials on the variety.
For an algebra over a field, the dimension as vector space is finite if and only if its Krull dimension is 0.

10.1.5 Lebesgue covering dimension

Main article: Lebesgue covering dimension
For any normal topological space X, the Lebesgue covering dimension of X is defined to be n if n is the smallest integer
for which the following holds: any open cover has an open refinement (a second open cover where each element is a
subset of an element in the first cover) such that no point is included in more than n + 1 elements. In this case dim
X = n. For X a manifold, this coincides with the dimension mentioned above. If no such integer n exists, then the
dimension of X is said to be infinite, and one writes dim X = ∞. Moreover, X has dimension −1, i.e. dim X = −1 if
and only if X is empty. This definition of covering dimension can be extended from the class of normal spaces to all
Tychonoff spaces merely by replacing the term “open” in the definition by the term "functionally open".

10.1.6 Inductive dimension

Main article: Inductive dimension
An inductive definition of dimension can be created as follows. Consider a discrete set of points (such as a finite
collection of points) to be 0-dimensional. By dragging a 0-dimensional object in some direction, one obtains a 1-dimensional object. By dragging a 1-dimensional object in a new direction, one obtains a 2-dimensional object. In
general one obtains an (n + 1)-dimensional object by dragging an n-dimensional object in a new direction.
The inductive dimension of a topological space may refer to the small inductive dimension or the large inductive
dimension, and is based on the analogy that (n + 1)-dimensional balls have n-dimensional boundaries, permitting an
inductive definition based on the dimension of the boundaries of open sets.

10.1.7 Hausdorff dimension

Main article: Hausdorff dimension
For structurally complicated sets, especially fractals, the Hausdorff dimension is useful. The Hausdorff dimension is
defined for all metric spaces and, unlike the dimensions considered above, can also attain non-integer real values.[4]
The box dimension or Minkowski dimension is a variant of the same idea. In general, there exist more definitions of
fractal dimensions that work for highly irregular sets and attain non-integer positive real values. Fractals have been
found useful to describe many natural objects and phenomena.[5][6]
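The box (Minkowski) dimension mentioned above can be estimated numerically by counting occupied boxes at several scales and fitting log N(ε) against log(1/ε). The Python sketch below applies this to points on a straight segment, for which the estimate should come out near 1; it is a rough illustration under our own choice of scales, not a robust fractal-dimension estimator:

import numpy as np

def box_counting_dimension(points, epsilons):
    """Estimate the box dimension of a 2D point set from box counts at several scales."""
    counts = []
    for eps in epsilons:
        boxes = {tuple(np.floor(p / eps).astype(int)) for p in points}
        counts.append(len(boxes))
    # Slope of log N(eps) versus log(1/eps).
    slope, _ = np.polyfit(np.log(1.0 / np.array(epsilons)), np.log(counts), 1)
    return slope

t = np.linspace(0.0, 1.0, 5000)
segment = np.column_stack([t, 0.5 * t])   # points on a line segment in the plane
print(box_counting_dimension(segment, [0.1, 0.05, 0.02, 0.01]))  # close to 1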

10.1.8 Hilbert spaces

Every Hilbert space admits an orthonormal basis, and any two such bases for a particular space have the same
cardinality. This cardinality is called the dimension of the Hilbert space. This dimension is finite if and only if
the space’s Hamel dimension is finite, and in this case the above dimensions coincide.

10.2 In physics
10.2.1 Spatial dimensions

Classical physics theories describe three physical dimensions: from a particular point in space, the basic directions in
which we can move are up/down, left/right, and forward/backward. Movement in any other direction can be expressed
in terms of just these three. Moving down is the same as moving up a negative distance. Moving diagonally upward
and forward is just as the name of the direction implies; i.e., moving in a linear combination of up and forward.
In its simplest form: a line describes one dimension, a plane describes two dimensions, and a cube describes three
dimensions. (See Space and Cartesian coordinate system.)

10.2.2 Time

A temporal dimension is a dimension of time. Time is often referred to as the "fourth dimension" for this reason,
but that is not to imply that it is a spatial dimension. A temporal dimension is one way to measure physical change.
It is perceived differently from the three spatial dimensions in that there is only one of it, and that we cannot move
freely in time but subjectively move in one direction.
The equations used in physics to model reality do not treat time in the same way that humans commonly perceive it.
The equations of classical mechanics are symmetric with respect to time, and equations of quantum mechanics are
typically symmetric if both time and other quantities (such as charge and parity) are reversed. In these models, the
perception of time flowing in one direction is an artifact of the laws of thermodynamics (we perceive time as flowing
in the direction of increasing entropy).
The best-known treatment of time as a dimension is Poincaré and Einstein's special relativity (and extended to general
relativity), which treats perceived space and time as components of a four-dimensional manifold, known as spacetime,
and in the special, flat case as Minkowski space.

10.2.3 Additional dimensions

In physics, three dimensions of space and one of time is the accepted norm. However, there are theories that attempt
to unify the four fundamental forces by introducing more dimensions. Most notably, superstring theory requires 10
spacetime dimensions, and originates from a more fundamental 11-dimensional theory tentatively called M-theory
which subsumes five previously distinct superstring theories. To date, no experimental or observational evidence is
available to confirm the existence of these extra dimensions. If extra dimensions exist, they must be hidden from us by
some physical mechanism. One well-studied possibility is that the extra dimensions may be “curled up” at such tiny
scales as to be effectively invisible to current experiments. Limits on the size and other properties of extra dimensions
are set by particle experiments such as those at the Large Hadron Collider.[7]


At the level of quantum field theory, Kaluza–Klein theory unifies gravity with gauge interactions, based on the realization that gravity propagating in small, compact extra dimensions is equivalent to gauge interactions at long distances.
In particular when the geometry of the extra dimensions is trivial, it reproduces electromagnetism. However at sufficiently high energies or short distances, this setup still suffers from the same pathologies that famously obstruct direct
attempts to describe quantum gravity. Therefore, these models still require a UV completion, of the kind that string
theory is intended to provide. Thus Kaluza-Klein theory may be considered either as an incomplete description on
its own, or as a subset of string theory model building.
In addition to small and curled up extra dimensions, there may be extra dimensions that instead aren't apparent
because the matter associated with our visible universe is localized on a (3 + 1)-dimensional subspace. Thus the extra
dimensions need not be small and compact but may be large extra dimensions. D-branes are dynamical extended
objects of various dimensionalities predicted by string theory that could play this role. They have the property that
open string excitations, which are associated with gauge interactions, are confined to the brane by their endpoints,
whereas the closed strings that mediate the gravitational interaction are free to propagate into the whole spacetime, or
“the bulk”. This could be related to why gravity is exponentially weaker than the other forces, as it effectively dilutes
itself as it propagates into a higher-dimensional volume.
Some aspects of brane physics have been applied to cosmology. For example, brane gas cosmology[8][9] attempts to
explain why there are three dimensions of space using topological and thermodynamic considerations. According to
this idea it would be because three is the largest number of spatial dimensions where strings can generically intersect.
If initially there are lots of windings of strings around compact dimensions, space could only expand to macroscopic
sizes once these windings are eliminated, which requires oppositely wound strings to find each other and annihilate.
But strings can only find each other to annihilate at a meaningful rate in three dimensions, so it follows that only three
dimensions of space are allowed to grow large given this kind of initial configuration.
Extra dimensions are said to be universal if all fields are equally free to propagate within them.

10.3 Networks and dimension
Some complex networks are characterized by fractal dimensions.[10] The concept of dimension can be generalized to
include networks embedded in space.[11] The dimension characterizes their spatial constraints.

10.4 In literature
Main article: Fourth dimension in literature
Science fiction texts often mention the concept of “dimension” when referring to parallel or alternate universes or other
imagined planes of existence. This usage is derived from the idea that to travel to parallel/alternate universes/planes of
existence one must travel in a direction/dimension besides the standard ones. In effect, the other universes/planes are
just a small distance away from our own, but the distance is in a fourth (or higher) spatial (or non-spatial) dimension,
not the standard ones.
One of the most heralded science fiction stories regarding true geometric dimensionality, and often recommended as a starting point for those beginning to investigate such matters, is the 1884 novella Flatland by Edwin A. Abbott.
Isaac Asimov, in his foreword to the Signet Classics 1984 edition, described Flatland as “The best introduction one
can find into the manner of perceiving dimensions.”
The idea of other dimensions was incorporated into many early science fiction stories, appearing prominently, for
example, in Miles J. Breuer's The Appendix and the Spectacles (1928) and Murray Leinster's The Fifth-Dimension
Catapult (1931); and appeared irregularly in science fiction by the 1940s. Classic stories involving other dimensions
include Robert A. Heinlein's —And He Built a Crooked House (1941), in which a California architect designs a house
based on a three-dimensional projection of a tesseract; and Alan E. Nourse's Tiger by the Tail and The Universe
Between (both 1951). Another reference is Madeleine L'Engle's novel A Wrinkle In Time (1962), which uses the fifth
dimension as a way for “tesseracting the universe” or “folding” space in order to move across it quickly. The fourth
and fifth dimensions were also a key component of the book The Boy Who Reversed Himself by William Sleator.


10.5 In philosophy
Immanuel Kant, in 1783, wrote: “That everywhere space (which is not itself the boundary of another space) has three
dimensions and that space in general cannot have more dimensions is based on the proposition that not more than
three lines can intersect at right angles in one point. This proposition cannot at all be shown from concepts, but rests
immediately on intuition and indeed on pure intuition a priori because it is apodictically (demonstrably) certain.”[12]
“Space has Four Dimensions” is a short story published in 1846 by German philosopher and experimental psychologist
Gustav Fechner under the pseudonym “Dr. Mises”. The protagonist in the tale is a shadow who is aware of and able
to communicate with other shadows, but who is trapped on a two-dimensional surface. According to Fechner, this
“shadow-man” would conceive of the third dimension as being one of time.[13] The story bears a strong similarity to
the "Allegory of the Cave" presented in Plato's The Republic (c. 380 BC).
Simon Newcomb wrote an article for the Bulletin of the American Mathematical Society in 1898 entitled “The Philosophy of Hyperspace”.[14] Linda Dalrymple Henderson coined the term “hyperspace philosophy”, used to describe
writing that uses higher dimensions to explore metaphysical themes, in her 1983 thesis about the fourth dimension
in early-twentieth-century art.[15] Examples of “hyperspace philosophers” include Charles Howard Hinton, the first
writer, in 1888, to use the word “tesseract";[16] and the Russian esotericist P. D. Ouspensky.

10.6 More dimensions
• Degrees of freedom in mechanics / physics and chemistry / statistics

10.7 See also
10.7.1 Topics by dimension

Zero
• Point
• Zero-dimensional space
• Integer
One
• Line
• Graph (combinatorics)
• Real number
Two
• Complex number
• Cartesian coordinate system
• List of uniform tilings
• Surface
Three
• Platonic solid
• Stereoscopy (3-D imaging)

• Euler angles
• 3-manifold
• Knots

Four
• Spacetime
• Fourth spatial dimension
• Convex regular 4-polytope
• Quaternion
• 4-manifold
• Fourth dimension in art
• Fourth dimension in literature
Higher dimensions
in mathematics
• Octonion
• Vector space
• Manifold
• Calabi–Yau spaces
• Curse of dimensionality
in physics
• Kaluza–Klein theory
• String theory
• M-theory
Infinite
• Hilbert space
• Function space

10.8 References
[1] “Curious About Astronomy”. Curious.astro.cornell.edu. Retrieved 2014-03-03.
[2] “MathWorld: Dimension”. Mathworld.wolfram.com. 2014-02-27. Retrieved 2014-03-03.
[3] Fantechi, Barbara (2001), “Stacks for everybody” (PDF), European Congress of Mathematics Volume I, Progr. Math. 201,
Birkhäuser, pp. 349–359
[4] Fractal Dimension, Boston University Department of Mathematics and Statistics
[5] Bunde, Armin; Havlin, Shlomo, eds. (1991). Fractals and Disordered Systems. Springer.
[6] Bunde, Armin; Havlin, Shlomo, eds. (1994). Fractals in Science. Springer.
[7] CMS Collaboration, “Search for Microscopic Black Hole Signatures at the Large Hadron Collider” (arxiv.org)
[8] Brandenberger, R., Vafa, C., Superstrings in the early universe


[9] Scott Watson, Brane Gas Cosmology (pdf).
[10] Song, Chaoming; Havlin, Shlomo; Makse, Hernán A. (2005). “Self-similarity of complex networks”. Nature 433 (7024).
arXiv:cond-mat/0503078v1. Bibcode:2005Natur.433..392S. doi:10.1038/nature03248.
[11] Daqing, Li; Kosmidis, Kosmas; Bunde, Armin; Havlin, Shlomo (2011). “Dimension of spatially embedded networks”.
Nature Physics 7 (6). Bibcode:2011NatPh...7..481D. doi:10.1038/nphys1932.
[12] Prolegomena, § 12
[13] Banchoff, Thomas F. (1990). “From Flatland to Hypergraphics: Interacting with Higher Dimensions”. Interdisciplinary
Science Reviews 15 (4): 364. doi:10.1179/030801890789797239.
[14] Newcomb, Simon (1898). “The Philosophy of Hyperspace”. Bulletin of the American Mathematical Society 4 (5): 187.
doi:10.1090/S0002-9904-1898-00478-0.
[15] Kruger, Runette (2007). “Art in the Fourth Dimension: Giving Form to Form – The Abstract Paintings of Piet Mondrian”
(PDF). Spaces of Utopia: an Electronic Journal (5): 11.
[16] Pickover, Clifford A. (2009), “Tesseract”, The Math Book: From Pythagoras to the 57th Dimension, 250 Milestones in the
History of Mathematics, Sterling Publishing Company, Inc., p. 282, ISBN 9781402757969.

10.9 Further reading
• Katta G Murty, “Systems of Simultaneous Linear Equations” (Chapter 1 of Computational and Algorithmic Linear Algebra and n-Dimensional Geometry, World Scientific Publishing: 2014, ISBN 978-981-4366-62-5).
• Edwin A. Abbott, Flatland: A Romance of Many Dimensions (1884) (Public domain: Online version with
ASCII approximation of illustrations at Project Gutenberg).
• Thomas Banchoff, Beyond the Third Dimension: Geometry, Computer Graphics, and Higher Dimensions, Second Edition, W. H. Freeman and Company: 1996.
• Clifford A. Pickover, Surfing through Hyperspace: Understanding Higher Universes in Six Easy Lessons, Oxford
University Press: 1999.
• Rudy Rucker, The Fourth Dimension, Houghton-Mifflin: 1984.
• Kaku, Michio (1994). Hyperspace, a Scientific Odyssey Through the 10th Dimension. Oxford University Press.
ISBN 0-19-286189-1.
• Krauss, Lawrence M. (2005). Hiding in the Mirror. Viking Press. ISBN 0670033952.

10.10 External links
• Copeland, Ed (2009). “Extra Dimensions”. Sixty Symbols. Brady Haran for the University of Nottingham.

Chapter 11

Dimension (vector space)
In mathematics, the dimension of a vector space V is the cardinality (i.e. the number of vectors) of a basis of V over
its base field.[1][note 1]
For every vector space there exists a basis,[note 2] and all bases of a vector space have equal cardinality;[note 3]
as a result, the dimension of a vector space is uniquely defined. We say V is finite-dimensional if the dimension of
V is finite, and infinite-dimensional if its dimension is infinite.
The dimension of the vector space V over the field F can be written as dimF(V) or as [V : F], read “dimension of V
over F". When F can be inferred from context, dim(V) is typically written.

11.1 Examples
The vector space R3 has

\left\{ \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \right\}

as a basis, and therefore we have dimR(R3 ) = 3. More generally, dimR(Rn ) = n, and even more generally, dimF(F n )
= n for any field F.
The complex numbers C are both a real and complex vector space; we have dimR(C) = 2 and dimC(C) = 1. So the
dimension depends on the base field.
The only vector space with dimension 0 is {0}, the vector space consisting only of its zero element.
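
A brief sketch (not taken from the source): over R, the dimension of the span of finitely many vectors can be computed as the rank of the matrix having them as columns, e.g. with NumPy.

import numpy as np

vectors = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [1.0, 1.0, 0.0]]).T    # three vectors in R^3; the third is dependent on the first two
print(np.linalg.matrix_rank(vectors))       # 2: they span only a plane
print(np.linalg.matrix_rank(np.eye(3)))     # 3 = dimR(R3), matching the basis above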

11.2 Facts
If W is a linear subspace of V, then dim(W) ≤ dim(V).
To show that two finite-dimensional vector spaces are equal, one often uses the following criterion: if V is a finite-dimensional vector space and W is a linear subspace of V with dim(W) = dim(V), then W = V.
Rn has the standard basis {e1 , ..., en}, where ei is the i-th column of the corresponding identity matrix. Therefore
Rn has dimension n.
Any two vector spaces over F having the same dimension are isomorphic. Any bijective map between their bases
can be uniquely extended to a bijective linear map between the vector spaces. If B is some set, a vector space with
dimension |B| over F can be constructed as follows: take the set F (B) of all functions f : B → F such that f(b) = 0
for all but finitely many b in B. These functions can be added and multiplied with elements of F, and we obtain the
desired F-vector space.
An important result about dimensions is given by the rank–nullity theorem for linear maps.
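
A small numeric illustration of the rank–nullity theorem (added here, not from the source; it assumes NumPy and SciPy are available):

import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 1.0, 1.0]])   # a linear map R^4 -> R^3; the third row equals the first plus the second
rank = np.linalg.matrix_rank(A)        # dimension of the image
nullity = null_space(A).shape[1]       # dimension of the kernel
print(rank, nullity, rank + nullity)   # 2 2 4: rank + nullity = dimension of the domain R^4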

If F/K is a field extension, then F is in particular a vector space over K. Furthermore, every F-vector space V is also
a K-vector space. The dimensions are related by the formula
dimK(V) = dimK(F) dimF(V).
In particular, every complex vector space of dimension n is a real vector space of dimension 2n.
Some simple formulae relate the dimension of a vector space with the cardinality of the base field and the cardinality
of the space itself. If V is a vector space over a field F then, denoting the dimension of V by dim V, we have:
If dim V is finite, then |V| = |F|dim V .
If dim V is infinite, then |V| = max(|F|, dim V).
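
A one-line check of the finite case (added here, not from the source), for the field F = GF(2) and dim V = 3:

from itertools import product

F = (0, 1)                        # representatives of the field GF(2)
V = list(product(F, repeat=3))    # all vectors of F^3
print(len(V), 2 ** 3)             # 8 8: |V| = |F|^(dim V)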

11.3 Generalizations
One can see a vector space as a particular case of a matroid, and in the latter there is a well-defined notion of dimension.
The length of a module and the rank of an abelian group both have several properties similar to the dimension of
vector spaces.
The Krull dimension of a commutative ring, named after Wolfgang Krull (1899–1971), is defined to be the maximal
number of strict inclusions in an increasing chain of prime ideals in the ring.

11.3.1 Trace

See also: Trace (linear algebra)
The dimension of a vector space may alternatively be characterized as the trace of the identity operator. For instance,
tr id_{R^2} = tr \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = 1 + 1 = 2. This appears to be a circular definition, but it allows useful generalizations.
Firstly, it allows one to define a notion of dimension when one has a trace but no natural sense of basis. For example,
one may have an algebra A with maps η : K → A (the inclusion of scalars, called the unit) and a map ϵ : A → K
(corresponding to trace, called the counit). The composition ϵ ◦ η : K → K is a scalar (being a linear operator on a 1-dimensional space) that corresponds to the “trace of the identity”, and gives a notion of dimension for an abstract algebra. In practice, in bialgebras one requires that this map be the identity, which can be obtained by normalizing the counit by dividing by the dimension (ϵ := (1/n) tr), so in these cases the normalizing constant corresponds to the dimension.
Alternatively, one may be able to take the trace of operators on an infinite-dimensional space; in this case a (finite)
trace is defined, even though no (finite) dimension exists, and gives a notion of “dimension of the operator”. These
fall under the rubric of "trace class operators” on a Hilbert space, or more generally nuclear operators on a Banach
space.
A subtler generalization is to consider the trace of a family of operators as a kind of “twisted” dimension. This
occurs significantly in representation theory, where the character of a representation is the trace of the representation,
hence a scalar-valued function on a group χ : G → K, whose value on the identity 1 ∈ G is the dimension of the
representation, as a representation sends the identity in the group to the identity matrix: χ(1G ) = tr IV = dim V.
One can view the other values χ(g) of the character as “twisted” dimensions, and find analogs or generalizations of
statements about dimensions to statements about characters or representations. A sophisticated example of this occurs
in the theory of monstrous moonshine: the j-invariant is the graded dimension of an infinite-dimensional graded
representation of the Monster group, and replacing the dimension with the character gives the McKay–Thompson
series for each element of the Monster group.[2]
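
As a small added illustration (not from the source), take the 2-dimensional real representation of Z/3 by rotations of the plane; the character at the identity is the dimension, 2, while the other character values play the role of “twisted” dimensions:

import numpy as np

def rho(k):
    """Rotation of the plane by 120*k degrees: the 2-dimensional real representation of Z/3."""
    t = 2 * np.pi * k / 3
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

print([round(float(np.trace(rho(k))), 6) for k in range(3)])   # [2.0, -1.0, -1.0]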

11.4 See also
• Basis (linear algebra)
• Topological dimension, also called Lebesgue covering dimension

• Fractal dimension
• Krull dimension
• Matroid rank
• Rank (linear algebra)

11.5 Notes
[note 1] It is sometimes called Hamel dimension or algebraic dimension to distinguish it from other types of dimension.
[note 2] if one assumes the axiom of choice
[note 3] see dimension theorem for vector spaces

11.6 References
[1] Itskov, Mikhail (2009). Tensor Algebra and Tensor Analysis for Engineers: With Applications to Continuum Mechanics.
Springer. p. 4. ISBN 978-3-540-93906-1.
[2] Gannon, Terry (2006), Moonshine beyond the Monster: The Bridge Connecting Algebra, Modular Forms and Physics, ISBN
0-521-83531-3

11.7 External links
• MIT Linear Algebra Lecture on Independence, Basis, and Dimension by Gilbert Strang at MIT OpenCourseWare

Chapter 12

Dimension of an algebraic variety
In mathematics and specifically in algebraic geometry, the dimension of an algebraic variety may be defined in various
equivalent ways.
Some of these definitions are geometric in nature, while others are purely algebraic and rely on commutative algebra. Some are restricted to algebraic varieties while others apply also to any algebraic set. Some are intrinsic, i.e. independent of any embedding of the variety into an affine or projective space, while others are related to such an
embedding.

12.1 Dimension of an affine algebraic set
Let K be a field, and L ⊇ K be an algebraically closed extension. An affine algebraic set V is the set of the common
zeros in Ln of the elements of an ideal I in a polynomial ring R = K[x1 , . . . , xn ]. Let A=R/I be the algebra of the
polynomials over V. The dimension of V is any of the following integers. It does not change if K is enlarged, if L is
replaced by another algebraically closed extension of K and if I is replaced by another ideal having the same zeros
(that is, having the same radical). The dimension is also independent of the choice of coordinates; in other words, it does not change if the xi are replaced by linearly independent linear combinations of them. The dimension of V is
• The maximal length d of the chains V0 ⊂ V1 ⊂ . . . ⊂ Vd of distinct nonempty subvarieties.
This definition generalizes a property of the dimension of a Euclidean space or a vector space. It is thus probably the
definition that gives the easiest intuitive description of the notion.
• The Krull dimension of A.
This is the transcription of the preceding definition in the language of commutative algebra, the Krull dimension
being the maximal length of the chains p0 ⊂ p1 ⊂ . . . ⊂ pd of prime ideals of A.
• The maximal Krull dimension of the local rings at the points of V.
This definition shows that the dimension is a local property.
• If V is a variety, the Krull dimension of the local ring at any regular point of V.
This shows that the dimension is constant on a variety.
• The maximal dimension of the tangent vector spaces at the non-singular points of V.
This relates the dimension of a variety to that of a differentiable manifold. More precisely, if V is defined over the reals, then the set of its real regular points is a differentiable manifold that has the same dimension as a variety and as a manifold.
• If V is a variety, the dimension of the tangent vector space at any non-singular point of V.

This is the algebraic analogue to the fact that a connected manifold has a constant dimension.
• The number of hyperplanes or hypersurfaces in general position which are needed to have an intersection with
V which is reduced to a nonzero finite number of points.
This definition is not intrinsic as it applies only to algebraic sets that are explicitly embedded in an affine or projective
space.
• The maximal length of a regular sequence in A.
This is the algebraic translation of the preceding definition.
• The difference between n and the maximal length of the regular sequences contained in I.
This is the algebraic translation of the fact that the intersection of n − d hypersurfaces is, in general, an algebraic set of dimension d.
• The degree of the Hilbert polynomial of A.
• The degree of the denominator of the Hilbert series of A.
This allows one to compute, through a Gröbner basis computation, the dimension of the algebraic set defined by a given system of polynomial equations.
• If I is a prime ideal (i.e. V is an algebraic variety), the transcendence degree over K of the field of fractions of
A.
This makes it easy to prove that the dimension is invariant under birational equivalence.

12.2 Dimension of a projective algebraic set
Let V be a projective algebraic set defined as the set of the common zeros of a homogeneous ideal I in a polynomial
ring R = K[x0 , x1 , . . . , xn ] over a field K, and let A=R/I be the graded algebra of the polynomials over V.
All the definitions of the previous section apply, with the change that, when A or I appear explicitly in the definition,
the value of the dimension must be reduced by one. For example, the dimension of V is one less than the Krull
dimension of A.

12.3 Computation of the dimension
Given a system of polynomial equations, it may be difficult to compute the dimension of the algebraic set that it
defines.
Without further information on the system, there is only one practical method, which consists of computing a Gröbner basis and deducing the degree of the denominator of the Hilbert series of the ideal generated by the equations.
The second step, which is usually the fastest, may be accelerated in the following way: Firstly, the Gröbner basis is
replaced by the list of its leading monomials (this is already done for the computation of the Hilbert series). Then
each monomial x_1^{e_1} · · · x_n^{e_n} is replaced by the product of the variables occurring in it, x_1^{min(e_1,1)} · · · x_n^{min(e_n,1)}. Then the
dimension is the maximal size of a subset S of the variables, such that none of these products of variables depends
only on the variables in S.
This algorithm is implemented in several computer algebra systems. For example in Maple, this is the function
Groebner[HilbertDimension].
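
The combinatorial criterion just described lends itself to a short script. Below is a minimal sketch (not the author's code) using SymPy; affine_dimension is an illustrative name, and it assumes SymPy's groebner and LM functions behave as documented.

from itertools import combinations
from sympy import symbols, groebner, LM

def affine_dimension(polys, gens):
    """Dimension of the affine algebraic set defined by `polys`, via the
    combinatorial criterion on the leading monomials of a Groebner basis."""
    G = groebner(polys, *gens)                              # lex order by default
    lead_vars = [LM(g, *gens).free_symbols for g in G.exprs]
    # The dimension is the largest size of a set S of variables such that
    # no leading monomial involves only variables of S.
    for size in range(len(gens), -1, -1):
        for S in map(set, combinations(gens, size)):
            if all(not v <= S for v in lead_vars):
                return size
    return -1  # the ideal is the whole ring: the algebraic set is empty

x, y, z = symbols("x y z")
print(affine_dimension([x**2 + y**2 - z], (x, y, z)))   # a surface in affine 3-space: prints 2
print(affine_dimension([x - 1, y - 2], (x, y, z)))      # a line in affine 3-space: prints 1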


12.4 Real dimension
See also: Complex dimension
The dimension of a set of real points, typically a semialgebraic set, is the dimension of its Zariski closure. For
an algebraic set defined over the reals (that is defined by polynomials with real coefficients), it may occur that the
dimension of the set of its real points differs from its dimension. For example, the algebraic surface of equation
x² + y² + z² = 0 is an algebraic variety of dimension two, which has only one real point (0, 0, 0), and thus has real dimension zero.
The real dimension is more difficult to compute than the algebraic dimension, and, to date, there is no available
software to compute it.

12.5 See also
• Dimension theory (algebra)

12.6 External links
• Hazewinkel, Michiel, ed. (2001), “Algebraic function”, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4

Chapter 13

Dimension theory (algebra)
In mathematics, dimension theory is a branch of commutative algebra studying the notion of the dimension of a
commutative ring, and by extension that of a scheme.
The theory is much simpler for an affine ring; i.e., an integral domain that is a finitely generated algebra over a
field. By Noether’s normalization lemma, the Krull dimension of such a ring is the transcendence degree over the
base field and the theory runs in parallel with the counterpart in algebraic geometry; cf. Dimension of an algebraic
variety. The general theory tends to be less geometrical; in particular, very little is known for non-noetherian rings. (Kaplansky's Commutative Rings gives a good account of the non-noetherian case.) Today, a standard approach
is essentially that of Bourbaki and EGA, which makes essential use of graded modules and, among other things,
emphasizes the role of multiplicities, the generalization of the degree of a projective variety. In this approach, Krull’s
principal ideal theorem appears as a corollary.
Throughout the article, dim denotes Krull dimension of a ring and ht the height of a prime ideal (i.e., the Krull
dimension of the localization at that prime ideal.) Rings are assumed to be commutative except in the last section on
dimensions of non-commutative rings.

13.1 Basic results
Let R be a noetherian ring or valuation ring. Then

dim R[x] = dim R + 1.
If R is noetherian, this follows from the fundamental theorem below (in particular, Krull’s principal ideal theorem.)
But it is also a consequence of the more precise result. For any prime ideal p in R,
ht(pR[x]) = ht(p) .
ht(q) = ht(p) + 1 for any prime ideal q ⊋ pR[x] in R[x] that contracts to p .
This can be shown within basic ring theory (cf. Kaplansky, Commutative Rings). In particular, it says that in each fiber of Spec R[x] → Spec R, one cannot have a chain of prime ideals of length ≥ 2.
Since an artinian ring (e.g., a field) has dimension zero, by induction, one gets the formula: for an artinian ring R,

dim R[x1 , . . . , xn ] = n.

13.2 Local rings
13.2.1 Fundamental theorem

Let (R, m) be a noetherian local ring and I an m-primary ideal (i.e., it sits between some power of m and m). Let F(t) be the Poincaré series of the associated graded ring gr_I R = ⊕_{n≥0} I^n/I^{n+1}. That is,

F(t) = \sum_{n \ge 0} \ell(I^n/I^{n+1}) \, t^n

where ℓ refers to the length of a module (over the artinian ring (gr_I R)_0 = R/I). If x_1, . . . , x_s generate I, then their images in I/I^2 have degree 1 and generate gr_I R as an R/I-algebra. By the Hilbert–Serre theorem, F is a rational function with exactly one pole, at t = 1, of order d ≤ s. Since

(1-t)^{-d} = \sum_{j \ge 0} \binom{d-1+j}{d-1} t^j,

writing (1-t)^d F(t) = \sum_{k=0}^{N} a_k t^k, we find that the coefficient of t^n in F(t) = (1-t)^d F(t) \cdot (1-t)^{-d} is of the form

\sum_{k=0}^{N} a_k \binom{d-1+n-k}{d-1} = \left. (1-t)^d F(t) \right|_{t=1} \frac{n^{d-1}}{(d-1)!} + O(n^{d-2}).

That is to say, for n sufficiently large, ℓ(I^n/I^{n+1}) is a polynomial P in n of degree d − 1. P is called the Hilbert polynomial of gr_I R.
We set d(R) = d. We also set δ(R) to be the minimum number of elements of R that can generate an m-primary ideal of R. Our ambition is to prove the fundamental theorem:

δ(R) = d(R) = dim R
Since we can take s to be δ(R) , we already have δ(R) ≥ d(R) from the above. Next we prove d(R) ≥ dim R by
induction on d(R) . Let p0 ⊊ · · · ⊊ pm be a chain of prime ideals in R. Let D = R/p0 and x a nonzero nonunit
element in D. Since x is not a zero-divisor, we have the exact sequence

0 → D \xrightarrow{x} D → D/xD → 0
The degree bound of the Hilbert-Samuel polynomial now implies that d(D) > d(D/xD) ≥ d(R/p1 ) . (This
essentially follows from the Artin-Rees lemma; see Hilbert-Samuel function for the statement and the proof.) In
R/p1 , the chain pi becomes a chain of length m − 1 and so, by inductive hypothesis and again by the degree
estimate,

m − 1 ≤ dim(R/p1 ) ≤ d(R/p1 ) ≤ d(D) − 1 ≤ d(R) − 1
The claim follows. It now remains to show dim R ≥ δ(R). More precisely, we shall show:
Lemma: R contains elements x1 , . . . , xs such that, for any i, any prime ideal containing (x1 , . . . , xi )
has height ≥ i .
(Notice: (x1 , . . . , xs ) is then m-primary.) The proof is omitted; it appears, for example, in Atiyah–MacDonald, but it can also be worked out directly, the idea being to use prime avoidance.
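
As a small numerical illustration of the degree claim above (an added example, not from the source): for R the localization of k[x_1, . . . , x_d] at m = (x_1, . . . , x_d) and I = m, the length ℓ(m^n/m^{n+1}) is the number of monomials of degree n in d variables, a polynomial in n of degree d − 1, consistent with d(R) = dim R = d.

from math import comb

d = 3
# number of degree-n monomials in d variables = C(n + d - 1, d - 1), quadratic growth for d = 3
print([comb(n + d - 1, d - 1) for n in range(6)])   # [1, 3, 6, 10, 15, 21]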

13.2.2 Consequences of the fundamental theorem

Let (R, m) be a noetherian local ring and put k = R/m . Then

• dim R ≤ dimk m/m2 , since a basis of m/m2 lifts to a generating set of m by Nakayama. If the equality holds,
then R is called a regular local ring.
• dim R̂ = dim R, since gr R = gr R̂.
• (Krull’s principal ideal theorem) The height of the ideal generated by elements x1 , . . . , xs in a noetherian ring
is at most s. Conversely, a prime ideal of height s can be generated by s elements. (Proof: Let p be a prime
ideal minimal over such an ideal. Then s ≥ dim Rp = ht p . The converse was shown in the course of the
proof of the fundamental theorem.)

Theorem — If A → B is a morphism of noetherian local rings, then
dim B/mA B ≥ dim B − dim A. [1]
The equality holds if A → B is flat or more generally if it has the going-down property.
Proof: Let x1 , . . . , xn generate an mA-primary ideal and y1 , . . . , ym be such that their images generate an mB/mAB-primary ideal. Then mB^s ⊂ (y1 , . . . , ym ) + mA B for some s. Raising both sides to higher powers, we see some power of mB is contained in (y1 , . . . , ym , x1 , . . . , xn ); i.e., the latter ideal is mB-primary; thus, m + n ≥ dim B.
The equality is a straightforward application of the going-down property. □
Proposition — If R is a noetherian ring, then
dim R + 1 = dim R[x] = dim R[[x]]
Proof: If p0 ⊊ p1 ⊊ · · · ⊊ pn are a chain of prime ideals in R, then pi R[x] are a chain of prime ideals in R[x] while
pn R[x] is not a maximal ideal. Thus, dim R + 1 ≤ dim R[x] . For the reverse inequality, let m be a maximal ideal of
R[x] and p = R ∩ m . Clearly, R[x]m = Rp [x]m . Since R[x]m /pRp R[x]m = (Rp /pRp )[x]m is then a localization
of a principal ideal domain and has dimension at most one, we get 1 + dim R ≥ 1 + dim Rp ≥ dim R[x]m by the
previous inequality. Since m is arbitrary, it follows 1 + dim R ≥ dim R[x] . □

13.2.3 Nagata’s altitude formula

Theorem — Let R ⊂ R′ be integral domains, p′ ⊂ R′ be a prime ideal and p = R ∩ p′. If R is a Noetherian ring, then

dim R′_{p′} + tr.deg_{R/p} R′/p′ ≤ dim R_p + tr.deg_R R′

where the equality holds if either (a) R is universally catenary and R′ is a finitely generated R-algebra or (b) R′ is a polynomial ring over R.
Proof:[2] First suppose R′ is a polynomial ring. By induction on the number of variables, it is enough to consider the case R′ = R[x]. Since R′ is flat over R,

dim R′_{p′} = dim R_p + dim (κ(p) ⊗_R R′)_{p′}.

By Noether’s normalization lemma, the second term on the right side is:

dim κ(p) ⊗_R R′ − dim κ(p) ⊗_R R′/p′ = 1 − tr.deg_{κ(p)} κ(p′) = tr.deg_R R′ − tr.deg κ(p′).

Next, suppose R′ is generated by a single element; thus, R′ = R[x]/I. If I = 0, then we are already done. Suppose not. Then R′ is algebraic over R and so tr.deg_R R′ = 0. Since R is a subring of R′, I ∩ R = 0 and so ht I = dim R[x]_I = dim Q(R)[x]_I = 1 − tr.deg_{Q(R)} κ(I) = 1, since κ(I) = Q(R′) is algebraic over Q(R). Let p′ᶜ denote the pre-image in R[x] of p′. Then, as κ(p′ᶜ) = κ(p′), by the polynomial case,

ht p′ = ht p′ᶜ/I ≤ ht p′ᶜ − ht I = dim R_p − tr.deg_{κ(p)} κ(p′).

Here, note that the inequality is an equality if R′ is catenary. Finally, working with a chain of prime ideals, it is straightforward to reduce the general case to the above case. □

13.3 Homological methods
13.3.1 Regular rings

Let R be a noetherian ring. The projective dimension of a finite R-module M is the shortest length of a projective resolution of M (possibly infinite) and is denoted by pd_R M. We set gl.dim R = sup{ pd_R M | M is a finite module }; it is called the global dimension of R.
Assume R is local with residue field k.
Lemma — pdR k = gl. dim R (possibly infinite).
Proof: We claim: for any finite R-module M,

pd_R M ≤ n ⇔ Tor^R_{n+1}(M, k) = 0.

By dimension shifting (cf. the proof of the theorem of Serre below), it is enough to prove this for n = 0. But then, by the local criterion for flatness, Tor^R_1(M, k) = 0 ⇒ M flat ⇒ M free ⇒ pd_R(M) ≤ 0. Now,

gl.dim R ≤ n ⇒ pd_R k ≤ n ⇒ Tor^R_{n+1}(−, k) = 0 ⇒ pd_R − ≤ n ⇒ gl.dim R ≤ n,
completing the proof. □
Remark: The proof also shows that pdR K = pdR M − 1 if M is not free and K is the kernel of some surjection
from a free module to M.
Lemma — Let R1 = R/f R , f a non-zerodivisor of R. If f is a non-zerodivisor on M, then

pdR M ≥ pdR1 (M ⊗ R1 )
Proof: If pdR M = 0 , then M is R-free and thus M ⊗ R1 is R1 -free. Next suppose pdR M > 0 . Then we have:
pdR K = pdR M − 1 as in the remark above. Thus, by induction, it is enough to consider the case pdR M = 1 .
Then there is a projective resolution: 0 → P1 → P0 → M → 0 , which gives:

Tor^R_1(M, R_1) → P_1 ⊗ R_1 → P_0 ⊗ R_1 → M ⊗ R_1 → 0

But Tor^R_1(M, R_1) = {}_f M = { m ∈ M | fm = 0 } = 0. Hence, pd_{R_1}(M ⊗ R_1) is at most 1. □
Theorem of Serre — R regular ⇔ gl. dim R < ∞ ⇔ gl. dim R = dim R.
Proof:[3] If R is regular, we can write k = R/(f_1, . . . , f_n), with f_1, . . . , f_n a regular system of parameters. An exact sequence 0 → M \xrightarrow{f} M → M_1 → 0 of finite modules, with f in the maximal ideal and pd_R M < ∞, gives us:

0 = Tor^R_{i+1}(M, k) → Tor^R_{i+1}(M_1, k) → Tor^R_i(M, k) \xrightarrow{f} Tor^R_i(M, k),    i ≥ pd_R M.

But f here is zero since it kills k. Thus, Tor^R_{i+1}(M_1, k) ≃ Tor^R_i(M, k) and consequently pd_R M_1 = 1 + pd_R M. Using this, we get:

pd_R k = 1 + pd_R(R/(f_1, . . . , f_{n−1})) = · · · = n.
The proof of the converse is by induction on dim R . We begin with the inductive step. Set R1 = R/f1 R , f1 among
a system of parameters. To show R is regular, it is enough to show R1 is regular. But, since dim R1 < dim R , by
inductive hypothesis and the preceding lemma with M = m ,

gl. dim R < ∞ ⇒ gl. dim R1 = pdR1 k ≤ pdR1 m/f1 m < ∞ ⇒ R1 regular .


The basic step remains. Suppose dim R = 0 . We claim gl. dim R = 0 if it is finite. (This would imply that R is a
semisimple local ring; i.e., a field.) If that is not the case, then there is some finite module M with 0 < pdR M < ∞
and thus in fact we can find M with pdR M = 1 . By Nakayama’s lemma, there is a surjection F → M from a free
module F to M whose kernel K is contained in mF . Since dim R = 0 , the maximal ideal m is an associated prime
of R; i.e., m = ann(s) for some nonzero s in R. Since K ⊂ mF , sK = 0 . Since K is not zero and is free, this
implies s = 0 , which is absurd. □
Corollary — A regular local ring is a unique factorization domain.
Proof: Let R be a regular local ring. Then gr R ≃ k[x1 , . . . , xd ] , which is an integrally closed domain. It is a standard
algebra exercise to show this implies that R is an integrally closed domain. Now, we need to show every divisorial ideal
is principal; i.e., the divisor class group of R vanishes. But, according to Bourbaki, Algèbre commutative, chapitre 7,
§. 4. Corollary 2 to Proposition 16, a divisorial ideal is principal if it admits a finite free resolution, which is indeed
the case by the theorem. □
Theorem — Let R be a ring. Then gl. dim R[x1 , . . . , xn ] = gl. dim R + n .

13.3.2 Depths

Let R be a ring and M a module over it. A sequence of elements x1 , . . . , xn in R is called an M-regular sequence if
x1 is not a zero-divisor on M and xi is not a zero divisor on M /(x1 , . . . , xi−1 )M for each i = 2, . . . , n . A priori, it
is not obvious whether any permutation of a regular sequence is still regular (see the section below for some positive
answer.)
Let R be a local Noetherian ring with maximal ideal m and put k = R/m . Then, by definition, the depth of a finite R-module M is the supremum of the lengths of all M-regular sequences in m . For example, we have depth M = 0 ⇔ m
consists of zerodivisors on M ⇔ m is associated with M. By induction, we find
depth M ≤ dim R/p
for any associated primes p of M. In particular, depth M ≤ dim M . If the equality holds for M = R, R is called a
Cohen–Macaulay ring.
Example: A regular Noetherian local ring is Cohen–Macaulay (since a regular system of parameters is an R-regular
sequence.)
In general, a Noetherian ring is called a Cohen–Macaulay ring if the localizations at all maximal ideals are Cohen–
Macaulay. We note that a Cohen–Macaulay ring is universally catenary. This implies for example that a polynomial
ring k[x1 , . . . , xd ] is universally catenary since it is regular and thus Cohen–Macaulay.
Proposition (Rees) — Let M be a finite R-module. Then depth M = sup{ n | Ext^i_R(k, M) = 0, i < n }.
More generally, for any finite R-module N whose support is exactly {m},

depth M = sup{ n | Ext^i_R(N, M) = 0, i < n }
Proof: We first prove by induction on n the following statement (*): for every R-module M and every M-regular sequence x_1, . . . , x_n in m,

Ext^n_R(N, M) ≃ Hom_R(N, M/(x_1, . . . , x_n)M).    (*)

The basic step n = 0 is trivial. Next, by the inductive hypothesis, Ext^{n−1}_R(N, M) ≃ Hom_R(N, M/(x_1, . . . , x_{n−1})M). But the latter is zero since the annihilator of N contains some power of x_n. Thus, from the exact sequence 0 → M \xrightarrow{x_1} M → M_1 → 0 and the fact that x_1 kills N, using the inductive hypothesis again, we get

Ext^n_R(N, M) ≃ Ext^{n−1}_R(N, M/x_1 M) ≃ Hom_R(N, M/(x_1, . . . , x_n)M),

proving (*). Now, if n < depth M, then we can find an M-regular sequence of length more than n and so by (*) we see Ext^n_R(N, M) = 0. It remains to show Ext^n_R(N, M) ≠ 0 if n = depth M. By (*) we can assume n = 0. Then


m is associated with M; thus m is in the support of M. On the other hand, m ∈ Supp(N ). It follows by linear algebra
that there is a nonzero homomorphism from N to M modulo m ; hence, one from N to M by Nakayama’s lemma. □
The Auslander–Buchsbaum formula relates depth and projective dimension.
Theorem — Let M be a finite module over a noetherian local ring R. If pdR M < ∞ , then

pdR M + depth M = depth R.
Proof: We argue by induction on pd_R M, the basic case (i.e., M free) being trivial. By Nakayama’s lemma, we have the exact sequence 0 → K \xrightarrow{f} F → M → 0 where F is free and the image of f is contained in mF. Since pd_R K = pd_R M − 1, what we need to show is depth K = depth M + 1. Since f kills k, the exact sequence yields, for any i,

Ext^i_R(k, F) → Ext^i_R(k, M) → Ext^{i+1}_R(k, K) → 0.

Note the left-most term is zero if i < depth R. If i < depth K − 1, then since depth K ≤ depth R by the inductive hypothesis, we see Ext^i_R(k, M) = 0. If i = depth K − 1, then Ext^{i+1}_R(k, K) ≠ 0 and it must be Ext^i_R(k, M) ≠ 0.

As a matter of notation, for any R-module M, we let

Γ_m(M) = { s ∈ M | supp(s) ⊂ {m} } = { s ∈ M | m^j s = 0 for some j }.

One sees without difficulty that Γ_m is a left-exact functor; we then let H^j_m = R^j Γ_m be its j-th right derived functor, called the local cohomology of R. Since Γ_m(M) = \varinjlim Hom_R(R/m^j, M), via abstract nonsense,

H^i_m(M) = \varinjlim Ext^i_R(R/m^j, M)

This observation proves the first part of the theorem below.
Theorem (Grothendieck) — Let M be a finite R-module. Then
1. depth M = sup{ n | H^i_m(M) = 0, i < n }.

2. H^i_m(M) = 0 for i > dim M, and H^i_m(M) ≠ 0 if i = dim M.

3. If R is complete and d its Krull dimension and if E is the injective hull of k, then

Hom_R(H^d_m(−), E)

is representable (the representing object is sometimes called the canonical module, especially if R is Cohen–Macaulay.)
Proof: 1. is already noted (except to show the nonvanishing at the degree equal to the depth of M; use induction
to see this) and 3. is a general fact by abstract nonsense. 2. is a consequence of an explicit computation of a local
cohomology by means of Koszul complexes (see below). □

13.3.3 Koszul complex

Main article: Koszul complex
Let R be a ring and x an element in it. We form the chain complex K(x) given by K(x)i = R for i = 0, 1 and
K(x)i = 0 for any other i with the differential


d : K(x)_1 → K(x)_0,   r ↦ xr.
For any R-module M, we then get the complex K(x, M ) = K(x)⊗R M with the differential d⊗1 and let H∗ (x, M ) =
H∗ (K(x, M )) be its homology. Note:

H0 (x, M ) = M /xM
H1 (x, M ) = x M = {m ∈ M |xm = 0}
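
For a quick added illustration (not from the source), take R = Z, x = 2 and M = Z/4Z; since 2 is a zero-divisor on M, both homology groups are nonzero:

H_0(2, \mathbb{Z}/4\mathbb{Z}) = (\mathbb{Z}/4\mathbb{Z})/2(\mathbb{Z}/4\mathbb{Z}) \cong \mathbb{Z}/2\mathbb{Z}, \qquad H_1(2, \mathbb{Z}/4\mathbb{Z}) = \{\bar 0, \bar 2\} \cong \mathbb{Z}/2\mathbb{Z}.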
More generally, given a finite sequence x1 , . . . , xn of elements in a ring R, we form the tensor product of complexes:

K(x1 , . . . , xn ) = K(x1 ) ⊗ · · · ⊗ K(xn )
and let H∗ (x1 , . . . , xn , M ) = H∗ (K(x1 , . . . , xn , M )) be its homology. As before,

H0 (x, M ) = M /(x1 , . . . , xn )M
Hn (x, M ) = AnnM ((x1 , . . . , xn ))
We now have the homological characterization of a regular sequence.
Theorem — Suppose R is Noetherian, M is a finite module over R and xi are in the Jacobson radical of R. Then the
following are equivalent
(i) x is an M-regular sequence.
(ii) Hi (x, M ) = 0, i ≥ 1 .
(iii) H1 (x, M ) = 0 .
Corollary — The sequence xi is M-regular if and only if any of its permutations is so.
Corollary — If x_1, . . . , x_n is an M-regular sequence, then x_1^j, . . . , x_n^j is also an M-regular sequence for each positive integer j.
A Koszul complex is a powerful computational tool. For instance, it follows from the theorem and the corollary that

H^i_m(M) ≃ \varinjlim H^i(K(x_1^j, . . . , x_n^j; M))
(Here, one uses the self-duality of a Koszul complex; see Proposition 17.15. of Eisenbud, Commutative Algebra with
a View Toward Algebraic Geometry.)
Another instance would be
Theorem — Assume R is local. Let

s = dim_k m/m²,

the dimension of the Zariski tangent space (often called the embedding dimension of R). Then

\binom{s}{i} ≤ dim_k Tor^R_i(k, k).
Remark: The theorem can be used to give a second quick proof of Serre’s theorem, that R is regular if and only
if it has finite global dimension. Indeed, by the above theorem, Tor^R_s(k, k) ≠ 0 and thus gl.dim R ≥ s. On
the other hand, as gl. dim R = pdR k , the Auslander–Buchsbaum formula gives gl. dim R = dim R . Hence,
dim R ≤ s ≤ gl. dim R = dim R .
We next use a Koszul homology to define and study complete intersection rings. Let R be a Noetherian local ring.
By definition, the first deviation of R is the vector space dimension

ϵ1 (R) = dimk H1 (x)
where x = (x1 , . . . , xd ) is a system of parameters. By definition, R is a complete intersection ring if dim R + ϵ1 (R)
is the dimension of the tangent space. (See Hartshorne for a geometric meaning.)
Theorem — R is a complete intersection ring if and only if its Koszul algebra is an exterior algebra.

13.3.4 Injective dimension and Tor dimensions

Let R be a ring. The injective dimension of an R-module M denoted by idR M is defined just like a projective
dimension: it is the minimal length of an injective resolution of M. Let ModR be the category of R-modules.
Theorem — For any ring R,
gl.dim R = sup{ id_R M | M ∈ Mod_R } = inf{ n | Ext^i_R(M, N) = 0, i > n, M, N ∈ Mod_R }
Proof: Suppose gl.dim R ≤ n. Let M be an R-module and consider a resolution

0 → M → I_0 \xrightarrow{φ_0} I_1 → · · · → I_{n−1} \xrightarrow{φ_{n−1}} N → 0

where the I_i are injective modules. For any ideal I,

Ext^1_R(R/I, N) ≃ Ext^2_R(R/I, ker(φ_{n−1})) ≃ · · · ≃ Ext^{n+1}_R(R/I, M),

which is zero since Ext^{n+1}_R(R/I, −) is computed via a projective resolution of R/I of length at most n. Thus, by Baer’s criterion, N is injective. We conclude that sup{ id_R M | M } ≤ n. Essentially by reversing the arrows, one can also prove the implication in the other way. □
The theorem suggests that we consider a sort of a dual of a global dimension:

w.gl.dim R = inf{ n | Tor^R_i(M, N) = 0, i > n, M, N ∈ Mod_R }
It was originally called the weak global dimension of R but today it is more commonly called the Tor dimension of R.
Remark: for any ring R, w. gl. dim R ≤ gl. dim R .
Proposition — A ring has weak global dimension zero if and only if it is von Neumann regular.

13.4 Multiplicity theory
See also: intersection theory

13.5 Dimensions of non-commutative rings
Let A be a graded algebra over a field k. If V is a finite-dimensional generating subspace of A, then we let f(n) = dim_k V^n and then put

gk(A) = \limsup_{n \to \infty} \frac{\log f(n)}{\log n}.

It is called the Gelfand–Kirillov dimension of A. It is easy to show gk(A) is independent of a choice of V.
Example: If A is finite-dimensional, then gk(A) = 0. If A is an affine ring, then gk(A) = Krull dimension of A.
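
A quick numeric sanity check of this definition (an added sketch, not from the source): for the polynomial ring A = k[x_1, . . . , x_d] with V spanned by 1, x_1, . . . , x_d, one has dim_k V^n = C(n + d, d), so log f(n)/log n tends to d, recovering gk(A) = d, the Krull dimension.

from math import comb, log

d = 3
for n in (10, 100, 1000, 10000):
    f = comb(n + d, d)              # dim_k V^n for the polynomial ring in d variables
    print(n, log(f) / log(n))       # tends (slowly) to 3 as n grows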
Bernstein’s inequality — See
See also: Goldie dimension, Krull–Gabriel dimension.

13.6 See also
• Bass number
• Perfect complex
• amplitude

13.7 Notes
[1] Eisenbud, Theorem 10.10
[2] Matsumura, Theorem 15.5.
[3] Weibel 1994, Theorem 4.4.16

13.8 References
• Bruns, Winfried; Herzog, Jürgen (1993), Cohen-Macaulay rings, Cambridge Studies in Advanced Mathematics
39, Cambridge University Press, ISBN 978-0-521-41068-7, MR 1251956
• Part II of Eisenbud, David (1995), Commutative algebra. With a view toward algebraic geometry, Graduate
Texts in Mathematics 150, New York: Springer-Verlag, ISBN 0-387-94268-8, MR 1322960.
• Chapter 10 of Atiyah, Michael Francis; Macdonald, I.G. (1969), Introduction to Commutative Algebra, Westview Press, ISBN 978-0-201-40751-8.
• Kaplansky, Irving, Commutative rings, Allyn and Bacon, 1970.
• H. Matsumura Commutative ring theory. Translated from the Japanese by M. Reid. Second edition. Cambridge
Studies in Advanced Mathematics, 8.
• Serre, Jean-Pierre (1975), Algèbre locale. Multiplicités, Cours au Collège de France, 1957–1958, rédigé par
Pierre Gabriel. Troisième édition, 1975. Lecture Notes in Mathematics (in French) 11, Berlin, New York:
Springer-Verlag
• Weibel, Charles A. (1995). An Introduction to Homological Algebra. Cambridge University Press.

Chapter 14

Dimensional metrology
Dimensional Metrology is the science of calibrating and using physical measurement equipment to quantify the
physical size of or distance from any given object. Inspection is a critical step in product development and quality
control. Dimensional Metrology requires the use of a variety of physical scales to determine dimension, with the
most accurate of these being holographic etalons or laser interferometers. The realization of dimension using these
accurate scale technologies is the end goal of dimensional metrologists.
Early Mesopotamian and Egyptian metrologists created a set of measurement standards based on body measures such as fingers, palms, hands, feet, cubits, and paces, and on agricultural measures such as feet, yards, paces, fathoms, rods, cords, perches, stadia, miles, and degrees of the Earth’s circumference. Early Egyptian rulers, based on units of fingers, palms, and feet and laid out on inscription grids that incorporated standards of measure as canons of proportion, were made commensurate with Mesopotamian standards based on fingers, hands, and feet, so that four palms or three hands equaled one foot and ten hands equaled one meter. These standards, which were used to measure and define property such as buildings and fields, were adopted by the Greeks, Romans, and Persians as legal standards and became the basis of European standards of measure. They were also used to relate length to area with units such as the khet, setat, and aroura; area to volume with units such as the artaba; and space to time with units such as the Egyptian minute of march, the itrw (which recorded an hour’s travel on a river), and the day’s sail. Specialized units for carpenters, masons, and other craftsmen, such as the remen, were worked into a system of unit fractions that allowed calculations utilizing analytic geometry. Carpenters and surveyors were some of the first dimensional inspectors.
Modern measurement equipment includes hand tools, CMMs (coordinate-measuring machines), machine vision systems, laser trackers, and optical comparators. For hand tools, see caliper and micrometer. A CMM is based on CNC technology to automate the measurement of Cartesian coordinates using a touch probe, a contact scanning probe, or a non-contact sensor. Optical comparators are used when physically touching the part is undesirable; they can now build 3D models of a scanned part and its internal passages using x-ray technology. Furthermore, optical 3D (laser) scanners are becoming more and more common. Using a light-sensitive detector (e.g., a digital camera) and a light source (laser, line projector), the triangulation principle is employed to generate 3D data, which is evaluated in order to compare the measurements against nominal geometries.
Data is collected in or compared to a print. A print is a blueprint illustrating crucial features. Prints can be hand
drawn or automatically generated by a CAD model.

14.1 See also
• Mechanical Engineering
Industrial metrology for manufacturing quality and for standards-room/calibration activities is called the first-principles method of deriving the actual value of a measurement. The fields of in-process gauging, in-cycle gauging, and post-process gauging are excluded from this topic.
The equipment generally used for manufacturing quality depends on the industry, but it is broadly centered on mechanical engineering, in particular automotive, aerospace, machine tools, and other precision parts suitable for instrumentation. It is broadly listed below:
1. Basic hand-held instruments such as the vernier caliper, digital caliper, and micrometer.

14.2 References
14.3 Notes
14.4 External links
• An example of Industrial Metrology equipment.

14.5 Further reading
• Doiron, T. (2007). “20 °C—A Short History of the Standard Reference Temperature for Industrial Dimensional Measurements” (PDF). Journal of Research of the National Institute of Standards and Technology (National Institute of Standards and Technology) 112 (1): 1. doi:10.6028/jres.112.001.

Chapter 15

Eight-dimensional space
In mathematics, a sequence of n real numbers can be understood as a location in n-dimensional space. When n = 8,
the set of all such locations is called 8-dimensional space. Often such spaces are studied as vector spaces, without
any notion of distance. Eight-dimensional Euclidean space is eight-dimensional space equipped with a Euclidean
metric, which is defined by the dot product.
More generally the term may refer to an eight-dimensional vector space over any field, such as an eight-dimensional
complex vector space, which has 16 real dimensions. It may also refer to an eight-dimensional manifold such as an
8-sphere, or a variety of other geometric constructions.

15.1 Geometry
15.1.1 8-polytope

Main article: 8-polytope
A polytope in eight dimensions is called an 8-polytope. The most studied are the regular polytopes, of which there are only three in eight dimensions: the 8-simplex, 8-cube, and 8-orthoplex. A broader family are the uniform 8-polytopes, constructed from fundamental symmetry domains of reflection, each domain defined by a Coxeter group. Each uniform polytope is defined by a ringed Coxeter–Dynkin diagram. The 8-demicube is a unique polytope from the D8 family, and the 4_21, 2_41, and 1_42 polytopes are from the E8 family.

15.1.2 7-sphere

The 7-sphere or hypersphere in eight dimensions is the seven-dimensional surface equidistant from a point, e.g. the
origin. It has symbol S7 , with formal definition for the 7-sphere with radius r of
S^7 = { x ∈ R^8 : ‖x‖ = r }.
The volume of the space bounded by this 7-sphere is

V_8 = \frac{\pi^4}{24} r^8

which is approximately 4.05871 r^8, or 0.01585 of the volume of the 8-cube that contains the 7-sphere.
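
A quick numeric check of these figures (added here, not from the source), using the general n-ball volume formula V_n(r) = π^{n/2} r^n / Γ(n/2 + 1):

from math import pi, gamma

n, r = 8, 1.0
ball = pi**(n / 2) / gamma(n / 2 + 1) * r**n   # volume enclosed by the 7-sphere
cube = (2 * r)**n                              # circumscribing 8-cube of side 2r

print(ball)         # ~4.05871  (= pi^4 / 24 for r = 1)
print(ball / cube)  # ~0.01585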

15.1.3 Kissing number problem

Main article: Kissing number problem

The kissing number problem has been solved in eight dimensions, thanks to the existence of the 421 polytope and its
associated lattice. The kissing number in eight dimensions is 240.

15.2 Octonions
Main article: Octonion
The octonions are a normed division algebra over the real numbers, the largest such algebra. Mathematically they
can be specified by 8-tuplets of real numbers, so form an 8-dimensional vector space over the reals, with addition of
vectors being the addition in the algebra. A normed algebra is one with a product that satisfies

∥xy∥ ≤ ∥x∥∥y∥
for all x and y in the algebra. A normed division algebra additionally must be finite-dimensional, and have the
property that every non-zero vector has a unique multiplicative inverse. Hurwitz’s theorem prohibits such a structure
from existing in dimensions other than 1, 2, 4, or 8.
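
The multiplication can be made concrete with a short script. The following is a small self-contained sketch (not from the source) of octonion arithmetic via the Cayley–Dickson doubling formula (a, b)(c, d) = (ac − d*b, da + bc*); it checks numerically that the norm is multiplicative (so in particular ‖xy‖ ≤ ‖x‖‖y‖ holds). All function names here are illustrative.

import random

def conj(x):
    if isinstance(x, float):
        return x                       # real numbers are self-conjugate
    a, b = x
    return (conj(a), neg(b))

def neg(x):
    if isinstance(x, float):
        return -x
    a, b = x
    return (neg(a), neg(b))

def add(x, y):
    if isinstance(x, float):
        return x + y
    return (add(x[0], y[0]), add(x[1], y[1]))

def mul(x, y):
    if isinstance(x, float):
        return x * y
    a, b = x
    c, d = y
    # Cayley-Dickson doubling: (a, b)(c, d) = (ac - conj(d)b, da + b conj(c))
    return (add(mul(a, c), neg(mul(conj(d), b))),
            add(mul(d, a), mul(b, conj(c))))

def from_list(v):
    # pack 2^k real coordinates into nested pairs (reals -> complex -> quaternions -> octonions)
    if len(v) == 1:
        return float(v[0])
    h = len(v) // 2
    return (from_list(v[:h]), from_list(v[h:]))

def norm2(x):
    if isinstance(x, float):
        return x * x
    return norm2(x[0]) + norm2(x[1])   # squared norm = sum of squares of the 8 coordinates

x = from_list([random.uniform(-1, 1) for _ in range(8)])
y = from_list([random.uniform(-1, 1) for _ in range(8)])
print(norm2(mul(x, y)), norm2(x) * norm2(y))   # equal up to rounding error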

15.3 Biquaternions
The complexified quaternions C ⊗ H , or "biquaternions,” are an eight-dimensional algebra dating to William Rowan
Hamilton's work in the 1850s. This algebra is equivalent (that is, isomorphic) to the Clifford algebra Cℓ2 (C) and the
Pauli algebra. It has also been proposed as a practical or pedagogical tool for doing calculations in special relativity,
and in that context goes by the name Algebra of physical space (not to be confused with the Spacetime algebra, which
is 16-dimensional.)

15.4 References
• H.S.M. Coxeter:
• H.S.M. Coxeter, Regular Polytopes, 3rd Edition, Dover New York, 1973
• Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, ISBN 978-0-471-01003-6
• (Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380-407, MR
2,10]
• (Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, [Math. Zeit. 188 (1985) 559-591]
• (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45]
• Table of the Highest Kissing Numbers Presently Known maintained by Gabriele Nebe and Neil Sloane (lower
bounds)
• Conway, John Horton; Smith, Derek A. (2003), On Quaternions and Octonions: Their Geometry, Arithmetic,
and Symmetry, A. K. Peters, Ltd., ISBN 1-56881-134-9. (Review).
• Duplij, Steven; Siegel, Warren; Bagger, Jonathan, eds. (2005), Concise Encyclopedia of Supersymmetry And
Noncommutative Structures in Mathematics and Physics, Berlin, New York: Springer, ISBN 978-1-4020-13386 (Second printing)

Chapter 16

Exterior dimension
In geometry, exterior dimension is a type of dimension that can be used to characterize fat fractals.
A fat fractal is a Cantor set with Lebesgue measure (an extension of the classical notions of length and area to more
complicated sets) greater than 0.
The Cantor set, sometimes also called the Cantor comb or no-middle-third set (Cullen 1968, pp. 78–81), is given
by taking the interval [0, 1], removing the open middle third, removing the middle third of each of the two remaining
pieces, and continuing this procedure ad infinitum. It is therefore the set of points in the interval whose ternary
expansions do not contain 1.
Iterating the process 1 -> 101, 0 -> 000 starting with 1 gives the sequence 1, 101, 101000101, 101000101000000000101000101,
.... The sequence of binary bits thus produced is therefore 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0,
0, 1, 0, 1, 0, ... (Sloane’s A088917) whose nth term is D(n, n) = P_n(3) (mod 3), where D(n, n) is a (central) Delannoy
number and Pn(x) is a Legendre polynomial (E. W. Weisstein, Apr. 9, 2006).
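
A tiny script (added here, not from the source) reproduces the substitution and the resulting bit sequence:

def cantor_bits(iterations):
    """Apply the substitution 1 -> 101, 0 -> 000 starting from '1'."""
    s = "1"
    for _ in range(iterations):
        s = "".join("101" if c == "1" else "000" for c in s)
    return s

print(cantor_bits(2))              # 101000101
print(", ".join(cantor_bits(3)))   # 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, ... (the sequence above)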

16.1 References
• Cullen, H. F. (1968). Introduction to General Topology. Heath.


Chapter 17

Five-dimensional space
This article is about the hypothetical extra dimension. For the musical group, see The 5th Dimension.
(Figure: a perspective projection 3D to 2D of a stereographic projection 4D to 3D of a Schlegel diagram 5D to 4D of the 5-cube, or penteract)

Five-dimensional space refers to a hypothetical extra dimension beyond the usual three spatial dimensions and the fourth dimension of time in relativity physics.[1] It is an abstraction which occurs frequently in mathematics, where it is a legitimate construct. In physics and mathematics, a sequence of N numbers can be understood to represent a
location in an N-dimensional space. Whether or not the real universe in which we live is somehow five-dimensional
is a topic of debate.

17.1 Physics
Much of the early work on five-dimensional space was an attempt to develop a theory that unifies the four
fundamental forces in nature: strong and weak nuclear forces, gravity and electromagnetism. German mathematician
Theodor Kaluza and Swedish physicist Oskar Klein independently developed the Kaluza–Klein theory in 1921, which
used the fifth dimension to unify gravity with electromagnetic force. Although their approaches were later found to
be at least partially inaccurate, their concept provided the basis for further research over the past century.[1]
To explain why this dimension would not be directly observable, Klein suggested that the fifth dimension would be
rolled up into a tiny, compact loop on the order of 10−33 centimeters.[1] Under his reasoning, he envisioned light
as a disturbance caused by rippling in the higher dimension just beyond human perception, similar to how fish in
a pond can only see shadows of ripples across the surface of the water caused by raindrops.[2] As such, while not
detectable, it would provide a means of unification. Kaluza-Klein theory experienced a revival in the 1970s due to
the emergence of superstring theory and supergravity: the concept that reality is composed of vibrating strands of
energy, a postulate only mathematically viable in ten dimensions or more. Superstring theory then evolved into a more
generalized approach known as M-theory. M-theory suggested a potentially observable extra dimension in addition
to the ten essential dimensions which would allow for the existence of superstrings. The other 10 dimensions are
compacted, or “rolled up”, to a size below the subatomic level.[1][2] Kaluza–Klein theory today is seen as essentially
a gauge theory, with the gauge being the circle group.
The fifth dimension is difficult to directly observe, though the Large Hadron Collider provides an opportunity for
indirect evidence of its existence.[1] Physicists theorize that collisions of subatomic particles in turn produce new
particles as a result of the collision, including a graviton that escapes from the fourth dimension, or brane, leaking
off into a five-dimensional bulk.[3] M-theory would explain the weakness of gravity relative to the other fundamental
forces of nature, as can be seen, for example, when using a magnet to lift a pin off a table — the magnet is able to
overcome the gravitational pull of the entire earth with ease.[1]
Mathematical approaches were developed in the early 20th century that viewed the fifth dimension as a theoretical
construct. These theories make reference to Hilbert space, a mathematical concept that postulates an infinite number
of mathematical dimensions to allow for a limitless number of quantum states. Einstein, Bergmann and Bargmann
later tried to extend the four-dimensional spacetime of general relativity into an extra physical dimension to incorporate electromagnetism, though they were unsuccessful.[1] In their 1938 paper, Einstein and Bergmann were among
the first to introduce the modern viewpoint that a four-dimensional theory, which coincides with Einstein-Maxwell
theory at long distances, is derived from a five-dimensional theory with complete symmetry in all five dimensions.
They suggested that electromagnetism resulted from a gravitational field that is “polarized” in the fifth dimension.[4]
The main novelty of Einstein and Bergmann was to seriously consider the fifth dimension as a physical entity, rather
than an excuse to combine the metric tensor and the electromagnetic potential. But they then reneged, modifying the
theory to break the five-dimensional symmetry. Their reasoning, as suggested by Edward Witten, was that the more
symmetric version of the theory predicted the existence of a new long range field, one that was both massless and
scalar, which would have required a fundamental modification to Einstein’s theory of general relativity.[5] Minkowski
space and Maxwell’s equations in vacuum can be embedded in a five-dimensional Riemann curvature tensor.
In 1993, the physicist Gerard 't Hooft put forward the holographic principle, which explains that the information
about an extra dimension is visible as a curvature in a spacetime with one fewer dimension. For example, holograms
are three-dimensional pictures placed on a two-dimensional surface, which gives the image a curvature when the
observer moves. Similarly, in general relativity, the fourth dimension is manifested in observable three dimensions
as the curvature path of a moving infinitesimal (test) particle. 't Hooft has speculated that the fifth dimension is really
the spacetime fabric.

17.2 Five-dimensional geometry
According to Klein’s definition, “a geometry is the study of the invariant properties of a spacetime, under transformations within itself.” Therefore, the geometry of the 5th dimension studies the invariant properties of such space-time, as we move within it, expressed in formal equations.[6]

17.2.1 Polytopes
Main article: 5-polytope
In five or more dimensions, only three regular polytopes exist. In five dimensions, they are:
1. The 5-simplex of the simplex family, with 6 vertices, 15 edges, 20 faces (each an equilateral triangle), 15 cells
(each a regular tetrahedron), and 6 hypercells (each a 5-cell).
2. The 5-cube of the hypercube family, with 32 vertices, 80 edges, 80 faces (each a square), 40 cells (each a
cube), and 10 hypercells (each a tesseract).
3. The 5-orthoplex of the cross polytope family, with 10 vertices, 40 edges, 80 faces (each a triangle), 80 cells
(each a tetrahedron), and 32 hypercells (each a 5-cell).
A fourth polytope, a demihypercube, can be constructed as an alternation of the 5-cube, and is called a 5-demicube,
with half the vertices (16), bounded by alternating 5-cell and 16-cell hypercells.
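The counts above follow from standard closed-form expressions for the three families (not quoted in the text itself); a quick check, with illustrative function names:

```python
from math import comb

def simplex_faces(n, k):
    """k-dimensional faces of the n-simplex: C(n + 1, k + 1)."""
    return comb(n + 1, k + 1)

def cube_faces(n, k):
    """k-dimensional faces of the n-cube: C(n, k) * 2**(n - k)."""
    return comb(n, k) * 2 ** (n - k)

def orthoplex_faces(n, k):
    """k-dimensional faces of the n-orthoplex: 2**(k + 1) * C(n, k + 1)."""
    return 2 ** (k + 1) * comb(n, k + 1)

for name, f in (("5-simplex", simplex_faces), ("5-cube", cube_faces), ("5-orthoplex", orthoplex_faces)):
    print(name, [f(5, k) for k in range(5)])
# 5-simplex   [6, 15, 20, 15, 6]
# 5-cube      [32, 80, 80, 40, 10]
# 5-orthoplex [10, 40, 80, 80, 32]
```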

17.2.2 Hypersphere
A hypersphere in 5-space (also called a 4-sphere due to its surface being 4-dimensional) consists of the set of all
points in 5-space at a fixed distance r from a central point P. The hypervolume enclosed by this hypersurface is:

V = 8π²r⁵/15
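This is the n = 5 case of the general n-ball volume formula V_n(r) = π^(n/2) r^n / Γ(n/2 + 1), a standard result not derived here; a quick numerical check:

```python
from math import gamma, pi

def ball_volume(n, r):
    """Volume of the n-dimensional ball of radius r: pi**(n/2) * r**n / Gamma(n/2 + 1)."""
    return pi ** (n / 2) * r ** n / gamma(n / 2 + 1)

r = 2.0
print(ball_volume(5, r))           # hypervolume enclosed by the 4-sphere of radius r
print(8 * pi ** 2 * r ** 5 / 15)   # the same value, from the formula quoted above
```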

17.3 References
[1] Paul Halpern (April 3, 2014). “How Many Dimensions Does the Universe Really Have”. Public Broadcasting Service.
Retrieved September 12, 2015.
[2] Ouellette, Jennifer (March 6, 2011). “Black Holes on a String in the Fifth Dimension”. Discovery News. Retrieved September 12, 2015.
[3] Boyle, Alan (June 6, 2006). “Physicists probe fifth dimension”. NBC news. Retrieved September 12, 2015.
[4] Einstein, Albert; Bergmann, Peter (1938). “On A Generalization Of Kaluza’s Theory Of Electricity”. Annals of Mathematics 39: 683.
[5] Witten, Edward (January 31, 2014). “A Note On Einstein, Bergmann, and the Fifth Dimension”. Institute for Advanced
Study. Retrieved September 12, 2015.
[6] Sancho, Luis (October 4, 2011). Absolute Relativity: The 5th dimension (abridged). p. 442.

17.4 See also
• 5-manifold
• Hypersphere
• List of regular 5-polytopes


17.5 Further reading
• Wesson, Paul S. (1999). Space-Time-Matter, Modern Kaluza-Klein Theory. Singapore: World Scientific. ISBN
981-02-3588-7.
• Wesson, Paul S. (2006). Five-Dimensional Physics: Classical and Quantum Consequences of Kaluza-Klein
Cosmology. Singapore: World Scientific. ISBN 981-256-661-9.
• Weyl, Hermann, Raum, Zeit, Materie, 1918. 5 edns. to 1922 ed. with notes by Jürgen Ehlers, 1980. trans. 4th
edn. Henry Brose, 1922 Space Time Matter, Methuen, rept. 1952 Dover. ISBN 0-486-60267-2.

17.6 External links
• Anaglyph of a five dimensional hypercube in hyper perspective

Chapter 18

Flatland
This article is about the novella. For other uses, see Flatland (disambiguation).
Flatland: A Romance of Many Dimensions is an 1884 satirical novella by the English schoolmaster Edwin Abbott
Abbott.
Writing pseudonymously as “A Square”,[1] the book used the fictional two-dimensional world of Flatland to comment
on the hierarchy of Victorian culture, but the novella’s more enduring contribution is its examination of dimensions.[2]
Several films have been made from the story, including the feature film Flatland (2007). Other efforts have been
short or experimental films, including one narrated by Dudley Moore and the short films Flatland: The Movie (2007)
and Flatland 2: Sphereland (2012) starring Martin Sheen and Kristen Bell.[3]

18.1 Plot
The story describes a two-dimensional world occupied by geometric figures, whereof women are simple line-segments,
while men are polygons with various numbers of sides. The narrator is a square named A Square, a member of the
caste of gentlemen and professionals, who guides the readers through some of the implications of life in two dimensions. The first half of the story goes through the practicalities of existing in a two-dimensional universe as well as a
history leading up to the year 1999 on the eve of the 3rd Millennium.
On New Year’s Eve, the Square dreams about a visit to a one-dimensional world (Lineland) inhabited by “lustrous
points”, in which he attempts to convince the realm’s monarch of a second dimension; but is unable to do so. In the
end, the monarch of Lineland tries to kill A Square rather than tolerate his nonsense any further.
Following this vision, he is himself visited by a three-dimensional sphere named A Sphere, which he cannot comprehend until he sees Spaceland (a tridimensional world) for himself. This Sphere visits Flatland at the turn of each
millennium to introduce a new apostle to the idea of a third dimension in the hopes of eventually educating the population of Flatland. From the safety of Spaceland, they are able to observe the leaders of Flatland secretly acknowledging
the existence of the sphere and prescribing the silencing of anyone found preaching the truth of Spaceland and the
third dimension. After this proclamation is made, many witnesses are massacred or imprisoned (according to caste),
including A Square’s brother, B.
After the Square’s mind is opened to new dimensions, he tries to convince the Sphere of the theoretical possibility of
the existence of a fourth (and fifth, and sixth ...) spatial dimension; but the Sphere returns his student to Flatland in
disgrace.
The Square then has a dream in which the Sphere visits him again, this time to introduce him to Pointland, whereof
the point (sole inhabitant, monarch, and universe in one) perceives any communication as a thought originating in his
own mind (cf. Solipsism):
“You see,” said my Teacher, “how little your words have done. So far as the Monarch understands them at all, he accepts them as his own – for he cannot conceive of any other except himself – and plumes himself upon the variety of Its Thought as an instance of creative Power. Let us leave this God of Pointland to the ignorant fruition of his omnipresence and omniscience: nothing that you or I can do can rescue him from his self-satisfaction.”[4]
— the Sphere

Illustration of a simple house in Flatland.

The Square recognizes the identity of the ignorance of the monarchs of Pointland and Lineland with his own (and
the Sphere’s) previous ignorance of the existence of higher dimensions. Once returned to Flatland, the Square cannot
convince anyone of Spaceland’s existence, especially after official decrees are announced that anyone preaching the
existence of three dimensions will be imprisoned (or executed, depending on caste). Eventually the Square himself is
imprisoned for just this reason, with only occasional contact with his brother who is imprisoned in the same facility.
He does not manage to convince his brother, even after all they have both seen. Seven years after being imprisoned,
A Square writes out the book Flatland in the form of a memoir, hoping to preserve it for a future generation
that can see beyond their two-dimensional existence.

18.2 Social elements
Men are portrayed as polygons whose social status is determined by their regularity and the number of their sides,
with a Circle considered the “perfect” shape. On the other hand, females consist only of lines and are required by law to sound a “peace-cry” as they walk, lest they be mistaken face-to-face for a point. The Square evinces accounts of cases where women have accidentally or deliberately stabbed men to death, as evidence of the need for separate doors for women and men in buildings.
In the world of Flatland, classes are distinguished by the “Art of Hearing”, the “Art of Feeling”, and the “Art of Sight
Recognition”. Classes can be distinguished by the sound of one’s voice, but the lower classes have more developed
vocal organs, enabling them to feign the voice of a Polygon or even a Circle. Feeling, practised by the lower classes
and women, determines the configuration of a person by feeling one of its angles. The “Art of Sight Recognition”,
practised by the upper classes, is aided by “Fog”, which allows an observer to determine the depth of an object.
With this, polygons with sharp angles relative to the observer will fade more rapidly than polygons with more gradual
angles. Colour of any kind is banned in Flatland after Isosceles workers painted themselves to impersonate noble
Polygons. The Square describes these events, and the ensuing class war at length.
The population of Flatland can “evolve” through the “Law of Nature”, which states: “a male child shall have one more
side than his father, so that each generation shall rise (as a rule) one step in the scale of development and nobility.
Thus the son of a Square is a Pentagon, the son of a Pentagon, a Hexagon; and so on”.
This rule does not apply when dealing with Isosceles Triangles (Soldiers and Workmen) with only two congruent sides.
The smallest angle of an Isosceles Triangle gains thirty arc minutes (half a degree) each generation. Additionally,
the rule does not seem to apply to many-sided Polygons. For example, the sons of several hundred-sided Polygons
will often develop fifty or more sides more than their parents. Furthermore, the angle of an Isosceles Triangle or the
number of sides of a (regular) Polygon may be altered during life by deeds or surgical adjustments.
An Equilateral Triangle is a member of the craftsman class. Squares and Pentagons are the “gentlemen” class, as
doctors, lawyers, and other professions. Hexagons are the lowest rank of nobility, all the way up to (near) Circles,
who make up the priest class. The higher-order Polygons have much less of a chance of producing sons, preventing
Flatland from being overcrowded with noblemen.
Regular Polygons are considered in isolation until chapter seven of the book, when the issue of irregularity, or physical deformity, is taken up. In a two-dimensional world a regular polygon can be identified by a single
angle and/or vertex. In order to maintain social cohesion, irregularity is to be abhorred, with moral irregularity and
criminality cited, “by some” (in the book), as inevitable additional deformities, a sentiment with which the Square
concurs. If the error of deviation is above a stated amount, the irregular Polygon faces euthanasia; if below, he
becomes the lowest rank of civil servant. An irregular Polygon is not destroyed at birth, but allowed to develop to
see if the irregularity can be “cured” or reduced. If the deformity remains, the irregular is “painlessly and mercifully
consumed.”[5]

18.3 As a social satire
In Flatland Abbott describes a society rigidly divided into classes. Social ascent is the main aspiration of its inhabitants, apparently granted to everyone but strictly controlled by the top of the hierarchy. Freedom is despised and
the laws are cruel. Innovators are imprisoned or suppressed. Members of lower classes who are intellectually valuable, and potential leaders of riots, are either killed or promoted to the higher classes. Every attempt at change is considered dangerous and harmful. This world, like ours, is not prepared to receive “Revelations from another world”.
The satirical part is mainly concentrated in the first part of the book, “This World”, which describes Flatland. The
main points of interest are the Victorian concept of women’s roles in society and in the class-based hierarchy of
men.[6] Abbott has been accused of misogyny due to his portrait of women in Flatland. In his Preface to the Second
and Revised Edition, 1884, he answers such critics by stating that the Square:

was writing as a Historian, he has identified himself (perhaps too closely) with the views generally
adopted by Flatland and (as he has been informed) even by Spaceland, Historians; in whose pages (until
very recent times) the destinies of Women and of the masses of mankind have seldom been deemed
worthy of mention and never of careful consideration.
— the Editor


18.4 Critical reception
Although Flatland was not ignored when it was published,[7] it did not achieve great success. In the entry on Edwin
Abbott in the Dictionary of National Biography, Flatland is not even mentioned.[2]
The book was discovered again after Albert Einstein's general theory of relativity was published, which introduced the
concept of a fourth dimension. Flatland was mentioned in a letter entitled “Euclid, Newton and Einstein” published
in Nature on February 12, 1920. In this letter Abbott is depicted, in a sense, as a prophet due to his intuition of the
importance of time to explain certain phenomena:[2][8]
Some thirty or more years ago a little jeu d'esprit was written by Dr. Edwin Abbott entitled Flatland.
At the time of its publication it did not attract as much attention as it deserved... If there is motion of
our three-dimensional space relative to the fourth dimension, all the changes we experience and assign
to the flow of time will be due simply to this movement, the whole of the future as well as the past always
existing in the fourth dimension.
— from a “Letter to the Editor” by William Garnett. in Nature on February 12, 1920.

The Oxford Dictionary of National Biography now contains a reference to Flatland.

18.5 Editions in print
• Flatland (5th edition, 1963), 1983 reprint with foreword by Isaac Asimov, HarperCollins, ISBN 0-06-463573-2
• bound together back-to-back with Dionys Burger's Sphereland (1994), HarperCollins, ISBN 0-06-273276-5
• The Annotated Flatland (2002), coauthor Ian Stewart, Perseus Publishing, ISBN 0-7382-0541-9
• Signet Classics edition (2005), ISBN 0-451-52976-6
• Oxford University Press (2006), ISBN 0-19-280598-3
• Dover Publications thrift edition (2007), ISBN 0-486-27263-X
• CreateSpace edition (2008), ISBN 1-4404-1778-4
• FLATLAND - A Romance of Many Dimensions (The Distinguished Chiron Edition 2015, Chiron Academic
Press), ISBN 978-9187751165 (reproduction of the 1884 edition)

18.6 Adaptations and parodies
Numerous imitations or sequels to Flatland have been written, and multiple other works have alluded to it. Examples
include:

18.6.1 In film
Flatland (1965), an animated short film based on the novella, was directed by Eric Martin and based on an idea by
John Hubley.[9][10][11]
Flatland (2007), a 98-minute animated independent feature film version directed by Ladd Ehlinger Jr,[12] updates the
satire from Victorian England to the modern-day United States.[12]
Flatland: The Movie (2007), by Dano Johnson and Jeffrey Travis,[13] is a 34-minute animated educational film voice
acted by Martin Sheen, Kristen Bell, Michael York, and Tony Hale.[14] Its sequel was Flatland 2: Sphereland (2012),
inspired by the novel Sphereland by Dionys Burger and starring Kristen Bell, Danny Pudi, Michael York, Tony Hale,
Danica McKellar, and Kate Mulgrew.[15][16][17]


18.6.2 In literature
VAS: An Opera in Flatland is a novel of biotechnology by Steve Tomasula with art and design by Stephen Farrell.
It is an adaptation of Edwin Abbott’s 1884 novel Flatland: A Romance of Many Dimensions. It uses Abbott’s
characters Square and Circle and the flat, two-dimensional world in which they live to critique contemporary society
during the rise of genetic engineering and other body manipulations.[18] The text demonstrates a strong correlation
between biology and art: “Utilizing a wide and historical sweep of representations of the body, from pedigree charts
to genetic sequences, this hybrid novel recounts how differing ways of imagining the body generate differing stories
of knowledge, power, history, gender, politics, art, and, of course, the literature of who we are. It is the intersection
of one tidy family’s life with the broader times in which they live.”[19]
Novels inspired by Flatland include An Episode of Flatland: Or How a Plane Folk Discovered the Third Dimension by Charles Howard Hinton (1907), Sphereland by Dionys Burger (1965), The Planiverse by A. K. Dewdney (1984), Flatterland by Ian Stewart (2001), and Spaceland by Rudy Rucker (2002). Short stories inspired by Flatland include "The Dot and the Line: A Romance in Lower Mathematics" by Norton Juster (1963), “The Incredible Umbrella” by Marvin Kaye (1980), and “Message Found in a Copy of Flatland" by Rudy Rucker (1983).
Physicists and science popularizers Carl Sagan and Stephen Hawking have both commented on and postulated about
the effects of Flatland. Sagan recreates the thought experiment as a set-up to discussing the possibilities of higher
dimensions of the physical universe in both the book and television series Cosmos,[20] whereas Dr. Hawking notes
the impossibility of life in two-dimensional space, as any inhabitants would necessarily be unable to digest their own
food.[21]

18.6.3 In television
Flatland features in The Big Bang Theory episode “The Psychic Vortex”,[22] when Sheldon Cooper declares it one of
his favorite imaginary places to visit.[23]
It also features in the Futurama episode “2-D Blacktop”, when Professor Farnsworth’s adventures in drag racing lead
to a foray of drifting in and out of inter-dimensional spaces.[24]

18.7 See also
• The Planiverse (1984), book by A.K. Dewdney
• Animal Farm (1945), novella by George Orwell
• Blind men and an elephant, Indian parable
• Fourth dimension in literature
• Dimension
• Sphere-world
• Triangle and Robert (1999-2007 webcomic)
• The Dot and the Line (1963 book)
• "—And He Built a Crooked House—" (1941 short story)
• Dimension-bending video games:
• Super Paper Mario (2007)
• Crush (2007)
• Echochrome (2008)
• Lost in Shadow (2010)
• Fez (2012)
• The Legend of Zelda: A Link Between Worlds (2013)
• The Bridge (video game) (2013)
• Miegakure (in development)


18.8 References
• Tuck, Donald H. (1974). The Encyclopedia of Science Fiction and Fantasy. Chicago: Advent. p. 1. ISBN
0-911682-20-1.
[1] Abbott, Edwin A. (1884). Flatland: A Romance of Many Dimensions. New York: Dover Thrift Edition (1992 unabridged).
p. ii.
[2] Stewart, Ian (2008). The Annotated Flatland: A Romance of Many Dimensions. New York: Basic Books. pp. xiii. ISBN
0-465-01123-3.
[3] “Review of Flatland: The Movie and Flatland 2: Sphereland". Science News.
[4] Abbott, Edwin A. (1884) Flatland, Part II, § 20.—How the Sphere encouraged me in a Vision, p 92
[5] Abbott, Edwin A. (1952) [1884], Flatland: A Romance of Many Dimensions (6th ed.), New York: Dover, p. 31, ISBN
0-486-20001-9
[6] Stewart, Ian (2008). The Annotated Flatland: A Romance of Many Dimensions. New York: Basic Books. pp. xvii. ISBN
0-465-01123-3.
[7] “Flatland Reviews”. Retrieved 2011-04-02.
[8] “Flatland Reviews - Nature, February 1920”. Retrieved 2011-04-02.
[9] Flatland at the Internet Movie Database
[10] “DER Documentary: Flatland”. Retrieved 11 October 2012.
[11] “Flatland Animation: The project”. Retrieved 11 October 2012.
[12] "Flatland the Film". Retrieved 2007-01-14.
[13] "Flatland: The Movie". Retrieved 2007-01-14.
[14] “IMDB Flatland: The Movie”.
[15] "Flatland 2: Sphereland".
[16] Flatland 2: Sphereland at the Internet Movie Database
[17] GeekDad.com Review of Flatland: The Movie and Flatland 2: Sphereland
[18] Vanderborg, Susan (Fall 2008). “Of 'Men and Mutations’: The Art of Reproduction in Flatland”. Journal of Artistic Books (24): 4-11.
[19] Tomasula, Steve. “VAS”.
[20] Tremlin, Todd (2006). Minds and Gods: The Cognitive Foundations of Religion. USA: Oxford University Press. p. 91.
ISBN 978-0199739011.
[21] Gott, J. Richard (2001-05-21). Time Travel in Einstein’s Universe: The Physical Possibilities of Travel through Time. USA:
Houghton Mifflin Company. p. 61. ISBN 978-0395955635.
[22] VanDerWerff, Todd. “The Big Bang Theory: “The Psychic Vortex"". A.V. Club. Retrieved 2014-03-14.
[23] “Flatland Featured on The Big Bang Theory on CBS Television”. Giant Screen Cinema Association. Retrieved 2014-03-14.
[24] DeNitto, Nick. “Futurama Invades Flatland”. Stage Buddy. Retrieved 2014-03-14.

18.9 External links
• “Sci-Fri Bookclub”—recording of National Public Radio discussion of Flatland, featuring mathematician Ian
Stewart (Sept. 21, 2012)


18.9.1 Online and downloadable versions of the text
eBooks
• Flatland, a Romance of Many Dimensions (first edition) on Wikisource
• Flatland, a Romance of Many Dimensions (second edition) on Wikisource

• Flatland at Project Gutenberg, text, no illustrations

• Flatland at Project Gutenberg, with ASCII illustrations
• Flatland, digitized copy of the first edition from the Internet Archive
• Flatland (Second Edition), Revised with original illustrations (HTML format, one page)
• Flatland (Fifth Edition), Revised, with original illustrations (HTML format, one chapter per page)
• Flatland (Fifth Edition), Revised, with original illustrations (PDF format, all pages, with LaTeX source on
github)
• Flatland (illustrated version) on Manybooks
• Flatland on Open Library at the Internet Archive
Recording
• Flatland audio book (mp3 format from Librivox)

Chapter 19

Four-dimensional space

3D projection of a tesseract undergoing a simple rotation in four dimensional space.

In mathematics, four-dimensional space (“4D”) is a geometric space with four dimensions. It typically is more specifically four-dimensional Euclidean space, generalizing the rules of three-dimensional Euclidean space. It has been studied by mathematicians and philosophers for over two centuries, both for its own interest and for the insights it offered into mathematics and related fields.
Algebraically, it is generated by applying the rules of vectors and coordinate geometry to a space with four dimensions.
In particular a vector with four elements (a 4-tuple) can be used to represent a position in four-dimensional space.
The space is a Euclidean space, so has a metric and norm, and so all directions are treated as the same: the additional
dimension is indistinguishable from the other three.
In modern physics, space and time are unified in a four-dimensional Minkowski continuum called spacetime, whose
metric treats the time dimension differently from the three spatial dimensions (see below for the definition of the
Minkowski metric/pairing). Spacetime is not a Euclidean space.

19.1 History
See also: n-dimensional space § History
Lagrange wrote in his Mécanique analytique (published 1788, based on work done around 1755) that mechanics can be
viewed as operating in a four-dimensional space — three dimensions of space, and one of time.[1] In 1827 Möbius
realized that a fourth dimension would allow a three-dimensional form to be rotated onto its mirror-image,[2] and by
1853 Ludwig Schläfli had discovered many polytopes in higher dimensions, although his work was not published until
after his death.[3] Higher dimensions were soon put on firm footing by Bernhard Riemann's 1854 Habilitationsschrift,
Über die Hypothesen welche der Geometrie zu Grunde liegen, in which he considered a “point” to be any sequence of
coordinates (x1 , ..., xn). The possibility of geometry in higher dimensions, including four dimensions in particular,
was thus established.
An arithmetic of four dimensions called quaternions was defined by William Rowan Hamilton in 1843. This associative
algebra was the source of the science of vector analysis in three dimensions as recounted in A History of Vector Analysis. Soon after, tessarines and coquaternions were introduced as other four-dimensional algebras over R.
One of the first major expositors of the fourth dimension was Charles Howard Hinton, starting in 1880 with his essay
What is the Fourth Dimension?, published in the Dublin University Magazine.[4] He coined the terms tesseract, ana
and kata in his book A New Era of Thought, and introduced a method for visualising the fourth dimension using cubes
in the book Fourth Dimension.[5][6] In 1886 Victor Schlegel described[7] his method of visualizing four-dimensional
objects with Schlegel diagrams.
In 1908, Hermann Minkowski presented a paper[8] consolidating the role of time as the fourth dimension of spacetime,
the basis for Einstein’s theories of special and general relativity.[9] But the geometry of spacetime, being non-Euclidean, is profoundly different from that popularised by Hinton. The study of Minkowski space required new
mathematics quite different from that of four-dimensional Euclidean space, and so developed along quite different
lines. This separation was less clear in the popular imagination, with works of fiction and philosophy blurring the
distinction, so in 1973 H. S. M. Coxeter felt compelled to write:

Little, if anything, is gained by representing the fourth Euclidean dimension as time. In fact, this idea,
so attractively developed by H. G. Wells in The Time Machine, has led such authors as John William
Dunne (An Experiment with Time) into a serious misconception of the theory of Relativity. Minkowski’s
geometry of space-time is not Euclidean, and consequently has no connection with the present investigation.
— H. S. M. Coxeter, Regular Polytopes[10]

19.2 Vectors
Mathematically four-dimensional space is simply a space with four spatial dimensions, that is a space that needs four
parameters to specify a point in it. For example, a general point might have position vector a, equal to

a = (a1, a2, a3, a4).
This can be written in terms of the four standard basis vectors (e1, e2, e3, e4), given by

e1 = (1, 0, 0, 0), e2 = (0, 1, 0, 0), e3 = (0, 0, 1, 0), e4 = (0, 0, 0, 1),
so the general vector a is

a = a1 e1 + a2 e2 + a3 e3 + a4 e4 .
Vectors add, subtract and scale as in three dimensions.
The dot product of Euclidean three-dimensional space generalizes to four dimensions as

a · b = a1 b1 + a2 b2 + a3 b3 + a4 b4 .
It can be used to calculate the norm or length of a vector,

|a| = √(a · a) = √(a1² + a2² + a3² + a4²),

and calculate or define the angle between two vectors as

θ = arccos( (a · b) / (|a| |b|) ).

Minkowski spacetime is four-dimensional space with geometry defined by a nondegenerate pairing different from the
dot product:

a · b = a1 b1 + a2 b2 + a3 b3 − a4 b4 .
As an example, the distance squared between the points (0,0,0,0) and (1,1,1,0) is 3 in both the Euclidean and
Minkowskian 4-spaces, while the distance squared between (0,0,0,0) and (1,1,1,1) is 4 in Euclidean space and 2
in Minkowski space; increasing b4 actually decreases the metric distance. This leads to many of the well known
apparent “paradoxes” of relativity.
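The sketch below spells these definitions out and reproduces the squared distances 3, 4 and 2 quoted above; the helper names (dot4, minkowski, and so on) are illustrative, not from any particular library:

```python
from math import acos, sqrt

def dot4(a, b):
    """Euclidean dot product in four dimensions."""
    return sum(x * y for x, y in zip(a, b))

def norm4(a):
    """Euclidean norm |a| = sqrt(a . a)."""
    return sqrt(dot4(a, a))

def angle4(a, b):
    """Angle between two 4-vectors, theta = arccos((a . b) / (|a| |b|))."""
    return acos(dot4(a, b) / (norm4(a) * norm4(b)))

def minkowski(a, b):
    """The pairing a1*b1 + a2*b2 + a3*b3 - a4*b4 used for spacetime."""
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2] - a[3] * b[3]

p, q = (1, 1, 1, 0), (1, 1, 1, 1)
print(dot4(p, p), dot4(q, q))            # 3 and 4: Euclidean squared distances from the origin
print(minkowski(p, p), minkowski(q, q))  # 3 and 2: increasing the fourth entry lowered the value
```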
The cross product is not defined in four dimensions. Instead the exterior product is used for some applications, and
is defined as follows:

a ∧ b = (a1 b2 − a2 b1 )e12 + (a1 b3 − a3 b1 )e13 + (a1 b4 − a4 b1 )e14 + (a2 b3 − a3 b2 )e23
+(a2 b4 − a4 b2 )e24 + (a3 b4 − a4 b3 )e34 .
This is bivector valued, with bivectors in four dimensions forming a six-dimensional linear space with basis (e12 , e13 ,
e14 , e23 , e24 , e34 ). They can be used to generate rotations in four dimensions.
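Equivalently, the six bivector components are the quantities (a_i b_j − a_j b_i) for i < j. A minimal sketch (the dictionary-of-components representation is just an illustrative choice):

```python
from itertools import combinations

def wedge4(a, b):
    """Bivector components of a wedge b on the basis e_ij (i < j) in four dimensions."""
    return {f"e{i + 1}{j + 1}": a[i] * b[j] - a[j] * b[i] for i, j in combinations(range(4), 2)}

print(wedge4((1, 0, 0, 0), (0, 1, 0, 0)))   # only the e12 component is nonzero
```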


19.3 Orthogonality and vocabulary
In the familiar 3-dimensional space in which we live there are three coordinate axes — usually labeled x, y, and z
— with each axis orthogonal (i.e. perpendicular) to the other two. The six cardinal directions in this space can be
called up, down, east, west, north, and south. Positions along these axes can be called altitude, longitude, and latitude.
Lengths measured along these axes can be called height, width, and depth.
Comparatively, 4-dimensional space has an extra coordinate axis, orthogonal to the other three, which is usually
labeled w. To describe the two additional cardinal directions, Charles Howard Hinton coined the terms ana and kata,
from the Greek words meaning “up toward” and “down from”, respectively. A position along the w axis can be called
spissitude, as coined by Henry More.

19.4 Geometry
See also: Rotations in 4-dimensional Euclidean space
The geometry of 4-dimensional space is much more complex than that of 3-dimensional space, due to the extra degree
of freedom.
Just as in 3 dimensions there are polyhedra made of two dimensional polygons, in 4 dimensions there are 4-polytopes
made of polyhedra. In 3 dimensions there are 5 regular polyhedra known as the Platonic solids. In 4 dimensions
there are 6 convex regular 4-polytopes, the analogues of the Platonic solids. Relaxing the conditions for regularity
generates a further 58 convex uniform 4-polytopes, analogous to the 13 semi-regular Archimedean solids in three
dimensions. Relaxing the conditions for convexity generates a further 10 nonconvex regular 4-polytopes.
In 3 dimensions, a circle may be extruded to form a cylinder. In 4 dimensions, there are several different cylinder-like objects. A sphere may be extruded to obtain a spherical cylinder (a cylinder with spherical “caps”, known as a spherinder), and a cylinder may be extruded to obtain a cylindrical prism (a cubinder). The Cartesian product of two circles may be taken to obtain a duocylinder. All three can “roll” in 4-dimensional space, each with its own properties.
In 3 dimensions, curves can form knots but surfaces cannot (unless they are self-intersecting). In 4 dimensions, however, knots made using curves can be trivially untied by displacing them in the fourth direction, but 2-dimensional surfaces can form non-trivial, non-self-intersecting knots in 4-dimensional space.[11] Because these surfaces are 2-dimensional, they can form much more complex knots than strings in 3-dimensional space can. The Klein bottle is an example of such a knotted surface. Another such surface is the real projective plane.

19.4.1 Hypersphere
Main article: Hypersphere
The set of points in Euclidean 4-space having the same distance R from a fixed point P0 forms a hypersurface known
as a 3-sphere. The hyper-volume of the enclosed space is:

V = π²R⁴/2
This is part of the Friedmann–Lemaître–Robertson–Walker metric in general relativity, where R is replaced by the function R(t), with t meaning the cosmological age of the universe. Growing or shrinking R with time means an expanding or collapsing universe, depending on the mass density inside.[12]

19.5 Cognition
Research using virtual reality finds that humans, in spite of living in a three-dimensional world, can, without special practice, make spatial judgments based on the length of, and angle between, line segments embedded in four-dimensional
space.[13] The researchers noted that “the participants in our study had minimal practice in these tasks, and it remains
an open question whether it is possible to obtain more sustainable, definitive, and richer 4D representations with


Stereographic projection of a Clifford torus: the set of points (cos(a), sin(a), cos(b), sin(b)), which is a subset of the 3-sphere.

increased perceptual experience in 4D virtual environments.”[13] In another study,[14] the ability of humans to orient
themselves in 2D, 3D and 4D mazes has been tested. Each maze consisted of four path segments of random length, connected by orthogonal random bends, but without branches or loops (i.e. actually labyrinths). The graphical
interface was based on John McIntosh’s free 4D Maze game.[15] The participating persons had to navigate through
the path and finally estimate the linear direction back to the starting point. The researchers found that some of the
participants were able to mentally integrate their path after some practice in 4D (the lower-dimensional cases were
for comparison and for the participants to learn the method).

19.6 Dimensional analogy
To understand the nature of four-dimensional space, a device called dimensional analogy is commonly employed. Dimensional analogy is the study of how (n − 1) dimensions relate to n dimensions, and then inferring how n dimensions
would relate to (n + 1) dimensions.[16]
Dimensional analogy was used by Edwin Abbott Abbott in the book Flatland, which narrates a story about a square
that lives in a two-dimensional world, like the surface of a piece of paper. From the perspective of this square, a three-dimensional being has seemingly god-like powers, such as the ability to remove objects from a safe without breaking it open (by moving them across the third dimension), to see everything that from the two-dimensional perspective is enclosed behind walls, and to remain completely invisible by standing a few inches away in the third dimension.

A net of a tesseract
By applying dimensional analogy, one can infer that a four-dimensional being would be capable of similar feats from
our three-dimensional perspective. Rudy Rucker illustrates this in his novel Spaceland, in which the protagonist
encounters four-dimensional beings who demonstrate such powers.

19.6.1 Cross-sections
As a three-dimensional object passes through a two-dimensional plane, a two-dimensional being would only see a
cross-section of the three-dimensional object. For example, if a spherical balloon passed through a sheet of paper, a
being on the paper would see first a single point, then a circle gradually growing larger, then smaller again until it shrank
to a point and then disappeared. Similarly, if a four-dimensional object passed through three dimensions, we would
see a three-dimensional cross-section of the four-dimensional object—for example, a hypersphere would appear first
as a point, then as a growing sphere, with the sphere then shrinking to a single point and then disappearing.[17] This
means of visualizing aspects of the fourth dimension was used in the novel Flatland and also in several works of
Charles Howard Hinton.[18]
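The growing-and-shrinking sphere can be made quantitative. Assuming the hypersphere of radius R is centred on the three-dimensional space it passes through (a simplifying assumption of mine), the visible cross-section at offset w along the fourth axis is a sphere of radius √(R² − w²):

```python
from math import sqrt

def cross_section_radius(R, w):
    """Radius of the 3D sphere seen when a hypersphere of radius R is sliced at offset w (|w| <= R)."""
    return sqrt(max(R * R - w * w, 0.0))

R = 1.0
for w in (-1.0, -0.5, 0.0, 0.5, 1.0):
    print(w, round(cross_section_radius(R, w), 3))   # 0.0, 0.866, 1.0, 0.866, 0.0
```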

19.6.2 Projections
A useful application of dimensional analogy in visualizing the fourth dimension is in projection. A projection is a way
of representing an n-dimensional object in n − 1 dimensions. For instance, computer screens are two-dimensional,
and all the photographs of three-dimensional people, places and things are represented in two dimensions by projecting
the objects onto a flat surface. When this is done, depth is removed and replaced with indirect information. The retina
of the eye is also a two-dimensional array of receptors but the brain is able to perceive the nature of three-dimensional
objects by inference from indirect information (such as shading, foreshortening, binocular vision, etc.). Artists often
use perspective to give an illusion of three-dimensional depth to two-dimensional pictures.


Similarly, objects in the fourth dimension can be mathematically projected to the familiar 3 dimensions, where they
can be more conveniently examined. In this case, the 'retina' of the four-dimensional eye is a three-dimensional
array of receptors. A hypothetical being with such an eye would perceive the nature of four-dimensional objects by
inferring four-dimensional depth from indirect information in the three-dimensional images in its retina.
The perspective projection of three-dimensional objects into the retina of the eye introduces artifacts such as foreshortening, which the brain interprets as depth in the third dimension. In the same way, perspective projection
from four dimensions produces similar foreshortening effects. By applying dimensional analogy, one may infer four-dimensional “depth” from these effects.
As an illustration of this principle, the following sequence of images compares various views of the 3-dimensional
cube with analogous projections of the 4-dimensional tesseract into three-dimensional space.
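A minimal sketch of one such projection, under the simplifying assumption of an 'eye' on the w-axis looking toward the w = 0 hyperplane (the exact projections used for the published images are not specified here):

```python
from itertools import product

def project_4d_to_3d(p, viewer_distance=3.0):
    """Perspective-project a 4D point onto w = 0, with the eye at w = viewer_distance on the w-axis."""
    x, y, z, w = p
    scale = viewer_distance / (viewer_distance - w)   # points nearer the eye appear larger
    return (scale * x, scale * y, scale * z)

# The 16 vertices of a tesseract with coordinates (±1, ±1, ±1, ±1):
for v in product((-1, 1), repeat=4):
    print(v, "->", tuple(round(c, 3) for c in project_4d_to_3d(v)))
# The eight w = +1 vertices land on a larger cube than the eight w = -1 vertices,
# giving the familiar "cube within a cube" picture of the tesseract.
```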

19.6.3 Shadows
A concept closely related to projection is the casting of shadows.

If a light is shone on a three-dimensional object, a two-dimensional shadow is cast. By dimensional analogy, light shone on a two-dimensional object in a two-dimensional world would cast a one-dimensional shadow, and light on a one-dimensional object in a one-dimensional world would cast a zero-dimensional shadow, that is, a point of non-light. Going the other way, one may infer that light shone on a four-dimensional object in a four-dimensional world would cast a three-dimensional shadow.
If the wireframe of a cube is lit from above, the resulting shadow is a square within a square with the corresponding
corners connected. Similarly, if the wireframe of a tesseract were lit from “above” (in the fourth dimension), its
shadow would be that of a three-dimensional cube within another three-dimensional cube. (Note that, technically,
the visual representation shown here is actually a two-dimensional image of the three-dimensional shadow of the
four-dimensional wireframe figure.)

19.6.4 Bounding volumes
Dimensional analogy also helps in inferring basic properties of objects in higher dimensions. For example, two-dimensional objects are bounded by one-dimensional boundaries: a square is bounded by four edges. Three-dimensional
objects are bounded by two-dimensional surfaces: a cube is bounded by 6 square faces. By applying dimensional
analogy, one may infer that a four-dimensional cube, known as a tesseract, is bounded by three-dimensional volumes.
And indeed, this is the case: mathematics shows that the tesseract is bounded by 8 cubes. Knowing this is key to understanding how to interpret a three-dimensional projection of the tesseract. The boundaries of the tesseract project
to volumes in the image, not merely two-dimensional surfaces.

19.6.5 Visual scope
Being three-dimensional, we are only able to see the world with our eyes in two dimensions. A four-dimensional
being would be able to see the world in three dimensions. For example, it would be able to see all six sides of an
opaque box simultaneously, and in fact, what is inside the box at the same time, just as we can see the interior of a
square on a piece of paper. It would be able to see all points in 3-dimensional space simultaneously, including the
inner structure of solid objects and things obscured from our three-dimensional viewpoint. Our brains receive images
in two dimensions and use reasoning to help us “picture” three-dimensional objects.

19.6.6 Limitations
Reasoning by analogy from familiar lower dimensions can be an excellent intuitive guide, but care must be exercised not to accept results that have not been more rigorously tested. For example, consider the formulas for the circumference of a circle, C = 2πr, and the surface area of a sphere, A = 4πr². One might be tempted to suppose that the surface volume of a hypersphere is V = 6πr³, or perhaps V = 8πr³, but either of these would be wrong. The correct formula is V = 2π²r³.[10]
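The correct value is the n = 4 case of the general surface-area formula S = 2π^(n/2) r^(n−1) / Γ(n/2) for the sphere bounding the n-ball, a standard result not derived above; a quick check:

```python
from math import gamma, pi

def sphere_surface(n, r):
    """Surface 'area' of the (n - 1)-sphere bounding the n-ball: 2 * pi**(n/2) * r**(n-1) / Gamma(n/2)."""
    return 2 * pi ** (n / 2) * r ** (n - 1) / gamma(n / 2)

r = 1.0
print(sphere_surface(2, r))   # 2*pi*r        (circumference of a circle)
print(sphere_surface(3, r))   # 4*pi*r**2     (surface area of a sphere)
print(sphere_surface(4, r))   # 2*pi**2*r**3  (the hypersphere value, not 6*pi*r**3 or 8*pi*r**3)
```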

19.7 See also
• Euclidean space
• Euclidean geometry
• 4-manifold
• Exotic R4
• Fourth dimension in art
• Dimension
• Four-dimensionalism
• Fifth dimension
• Sixth dimension
• 4-polytope
• Polytope


• List of geometry topics
• Block Theory of the Universe
• Flatland, a book by Edwin A. Abbott about two- and three-dimensional spaces, to understand the concept of
four dimensions
• Sphereland, an unofficial sequel to Flatland
• Charles Howard Hinton
• Dimensions, a set of films about two-, three- and four-dimensional polytopes
• List of four-dimensional games

19.8 References
[1] Bell, E.T. (1937). Men of Mathematics, Simon and Schuster, p. 154.
[2] Coxeter, H. S. M. (1973). Regular Polytopes, Dover Publications, Inc., p. 141.
[3] Coxeter, H. S. M. (1973). Regular Polytopes, Dover Publications, Inc., pp. 142–143.
[4] Rudolf v.B. Rucker, editor Speculations on the Fourth Dimension: Selected Writings of Charles H. Hinton, p. vii, Dover
Publications Inc., 1980 ISBN 0-486-23916-0
[5] Hinton, Charles Howard (1904). Fourth Dimension. ISBN 1-5645-9708-3.
[6] Gardner, Martin (1975). Mathematical Carnival. Knopf Publishing. pp. 42, 52–53. ISBN 0-394-49406-7.
[7] Victor Schlegel (1886) Ueber Projectionsmodelle der regelmässigen vier-dimensionalen Körper, Waren
[8] Minkowski, Hermann (1909), "Raum und Zeit", Physikalische Zeitschrift 10: 75–88
• Various English translations on Wikisource: Space and Time
[9] C Møller (1952). The Theory of Relativity. Oxford UK: Clarendon Press. p. 93. ISBN 0-19-851256-2.
[10] Coxeter, H. S. M. (1973). Regular Polytopes, Dover Publications, Inc., p. 119.
[11] J. Scott Carter, Masahico Saito Knotted Surfaces and Their Diagrams
[12] Ray d'Inverno (1992), Introducing Einstein’s Relativity, Clarendon Press, chp. 22.8 Geometry of 3-spaces of constant curvature, p.319ff, ISBN 0-19-859653-7
[13] Ambinder MS, Wang RF, Crowell JA, Francis GK, Brinkmann P. (2009). Human four-dimensional spatial intuition in
virtual reality. Psychon Bull Rev. 16(5):818-23. doi:10.3758/PBR.16.5.818 PMID 19815783 online supplementary material
[14] Aflalo TN, Graziano MS (2008). Four-Dimensional Spatial Reasoning in Humans. Journal of Experimental Psychology:
Human Perception and Performance 34(5):1066-1077. doi:10.1037/0096-1523.34.5.1066 Preprint
[15] John McIntosh’s four dimensional maze game. Free software
[16] Michio Kaku (1994). Hyperspace: A Scientific Odyssey Through Parallel Universes, Time Warps, and the Tenth Dimension,
Part I, chapter 3, The Man Who “Saw” the Fourth Dimension (about tesseracts in years 1870–1910). ISBN 0-19-286189-1.
[17] Rucker, Rudy (1984), The Fourth Dimension /A Guided Tour of the Higher Universes, Houghton Mifflin, p. 18, ISBN
0-395-39388-4
[18] In particular, Hinton, Charles Howard (1904). Fourth Dimension. pp. 11–14. ISBN 1-5645-9708-3.


19.9 Further reading
• Andrew Forsyth (1930) Geometry of Four Dimensions, link from Internet Archive.
• Gamow, George (1988). One Two Three . . . Infinity: Facts and Speculations of Science (3rd ed.). Courier
Dover Publications. p. 68. ISBN 0-486-25664-2., Extract of page 68
• E. H. Neville (1921) The Fourth Dimension, Cambridge University Press, link from University of Michigan
Historical Math Collection.

19.10 External links
• “Dimensions” videos, showing several different ways to visualize four dimensional objects
• Science News article summarizing the “Dimensions” videos, with clips
• Garrett Jones’ tetraspace page
• Flatland: a Romance of Many Dimensions (second edition)
• TeV scale gravity, mirror universe, and ... dinosaurs Article from Acta Physica Polonica B by Z.K. Silagadze.
• Exploring Hyperspace with the Geometric Product
• 4D Euclidean space
• 4D Building Blocks - Interactive game to explore 4D space
• 4DNav - A small tool to view a 4D space as four 3D spaces, using the ADSODA algorithm
• MagicCube 4D A 4-dimensional analog of traditional Rubik’s Cube.
• Frame-by-frame animations of 4D - 3D analogies

Chapter 20

Fourth dimension in art
New possibilities opened up by the concept of four-dimensional space (and difficulties involved in trying to visualize
it) helped inspire many modern artists in the first half of the twentieth century. Early Cubists, Surrealists, Futurists,
and abstract artists took ideas from higher-dimensional mathematics and used them to radically advance their work.[1]

20.1 Early influence
Further information: Proto-Cubism and Mathematics and art
French mathematician Maurice Princet was known as “le mathématicien du cubisme” (“the mathematician of cubism”).[2]
An associate of the School of Paris, a group of avant-gardists including Pablo Picasso, Guillaume Apollinaire, Max
Jacob, Jean Metzinger, and Marcel Duchamp, Princet is credited with introducing the work of Henri Poincaré and
the concept of the "fourth dimension" to the cubists at the Bateau-Lavoir in the late 1900s.[3]
Princet introduced Picasso to Esprit Jouffret's Traité élémentaire de géométrie à quatre dimensions (Elementary Treatise on the Geometry of Four Dimensions, 1903),[4] a popularization of Poincaré's Science and Hypothesis in which
Jouffret described hypercubes and other complex polyhedra in four dimensions and projected them onto the two-dimensional page. Picasso’s Portrait of Daniel-Henry Kahnweiler in 1910 was an important work for the artist, who
spent many months shaping it.[5] The portrait bears similarities to Jouffret’s work and shows a distinct movement
away from the Proto-Cubist fauvism displayed in Les Demoiselles d'Avignon, to a more considered analysis of space
and form.[6]
Early cubist Max Weber wrote an article entitled “In The Fourth Dimension from a Plastic Point of View”, for Alfred
Stieglitz's July 1910 issue of Camera Work. In the piece, Weber states, “In plastic art, I believe, there is a fourth
dimension which may be described as the consciousness of a great and overwhelming sense of space-magnitude in
all directions at one time, and is brought into existence through the three known measurements.”[7]
Another influence on the School of Paris was that of Jean Metzinger and Albert Gleizes, both painters and theoreticians. The first major treatise written on the subject of Cubism was their 1912 collaboration Du “Cubisme”, which
says that:
“If we wished to relate the space of the [Cubist] painters to geometry, we should have to refer it
to the non-Euclidian mathematicians; we should have to study, at some length, certain of Riemann’s
theorems.”[8]
In a review of the 1913 Armory Show for the Philadelphia Inquirer, the influence of the fourth dimension on avant-garde painting was discussed; the paper’s art critic described how the artists employed “... harmonic use of what may arbitrarily be called volume”.[9]

20.2 Dimensionist manifesto
In 1936 in Paris, Charles Tamkó Sirató published his Manifeste Dimensioniste,[10] which described how the Dimensionist tendency has led to:
1. Literature leaving the line and entering the plane.
2. Painting leaving the plane and entering space.
3. Sculpture stepping out of closed, immobile forms.
4. …The artistic conquest of four-dimensional space, which to date has been completely art-free.

An illustration from Jouffret’s Traité élémentaire de géométrie à quatre dimensions. The book, which influenced Picasso, was given to him by Princet.

Picasso’s Portrait of Daniel-Henry Kahnweiler, 1910, The Art Institute of Chicago

The manifesto was signed by many prominent modern artists worldwide. Hans Arp, Francis Picabia, Kandinsky,
Robert Delaunay and Marcel Duchamp amongst others added their names in Paris, then a short while later it was endorsed by artists abroad including László Moholy-Nagy, Joan Miró, David Kakabadze, Alexander Calder, and Ben Nicholson.[10]

Jean Metzinger, 1912-1913, L'Oiseau bleu (The Blue Bird), oil on canvas, 230 x 196 cm, Musée d'Art Moderne de la Ville de Paris

20.3 Crucifixion (Corpus Hypercubus)
In 1953, the surrealist Salvador Dalí proclaimed his intention to paint “an explosive, nuclear and hypercubic” crucifixion scene.[11][12] He said that, “This picture will be the great metaphysical work of my summer”.[13] Completed
the next year, Crucifixion (Corpus Hypercubus) depicts Jesus Christ upon the net of a hypercube, also known as a
tesseract. The unfolding of a tesseract into eight cubes is analogous to unfolding the sides of a cube into six squares.
The Metropolitan Museum of Art describes the painting as a “new interpretation of an oft-depicted subject ... [showing] Christ’s spiritual triumph over corporeal harm.”[14]

20.4 Abstract art
Some of Piet Mondrian's (1872–1944) abstractions and his practice of Neoplasticism are said to be rooted in his
view of a utopian universe, with perpendiculars visually extending into another dimension.[15]

20.5 Other forms of art
Main article: Fourth dimension in literature
The Fourth dimension has been the subject of numerous fictional stories.[16]

20.6 See also
• De Stijl
• Five-dimensional space
• Four-dimensional space
• Octacube

20.7 References
[1] Henderson, Linda Dalrymple. “Overview of The Fourth Dimension And Non-Euclidean Geometry In Modern Art, Revised
Edition". MIT Press. Retrieved 24 March 2013.
[2] Décimo, Marc (2007). Maurice Princet, Le Mathématicien du Cubisme (in French). Paris: Éditions L'Echoppe. ISBN
2-84068-191-9.
[3] Miller, Arthur I. (2001). Einstein, Picasso: space, time, and beauty that causes havoc (Print). New York: Basic Books. p.
101. ISBN 0-465-01859-9.
[4] Jouffret, Esprit (1903). Traité élémentaire de géométrie à quatre dimensions et introduction à la géométrie à n dimensions
(in French). Paris: Gauthier-Villars. OCLC 1445172. Retrieved 2008-02-06.
[5] Robbin, Tony (2006). Shadows of Reality: The Fourth Dimension in Relativity, Cubism, and Modern Thought (Print). New
Haven: Yale University Press. p. 28. ISBN 9780300110395.
[6] Robbin, Tony (2006). Shadows of Reality: The Fourth Dimension in Relativity, Cubism, and Modern Thought (Print). New
Haven: Yale University Press. pp. 28–30. ISBN 9780300110395.
[7] Weber, Max (1910). “In The Fourth Dimension from a Plastic Point of View”. Camera Work 31 (July 1910).
[8] Gleizes, Albert; Metzinger, Jean (1912). Du “Cubisme” (Edition Figuière). Paris.
[9] Oja, Carol J. (2000). Making Music Modern: New York in the 1920s. Oxford University Press, USA. p. 84. ISBN 9780195162578.
[10] Sirató, Charles Tamkó (1936). “Dimensionist Manifesto” (PDF). Paris. Retrieved 24 March 2013.
[11] Dalí, Salvador; Gómez de la Serna, Ramón (2001) [1988]. Dali. Secaucus, NJ: Wellfleet Press. p. 41. ISBN 1555213421.
[12] “Salvador Dalí (1904–1989)". SpanishArts. 2013. Retrieved 24 March 2013.
[13] “Crucifixion ('Corpus Hypercubus’), 1954”. Dalí gallery website. Retrieved 25 March 2013.
[14] "Crucifixion (Corpus Hypercubus)". The Metropolitan Museum of Art. Retrieved 24 March 2013.


[15] Kruger, Runette (Summer 2007). “Art in the Fourth Dimension: Giving Form to Form – The Abstract Paintings of Piet
Mondrian” (PDF) (5). Spaces of Utopia: An Electronic Journal. pp. 23–35. ISSN 1646-4729.
[16] Clair, Bryan (16 September 2002). “Spirits, Art, and the Fourth Dimension”. Strange Horizons. Retrieved 25 March
2012.

20.8 Sources
• Clair, Bryan (16 September 2002). “Spirits, Art, and the Fourth Dimension”. Strange Horizons. Retrieved 25
March 2012.
• Dalí, Salvador; Gómez de la Serna, Ramón (2001) [1988]. Dali (Print). Secaucus, NJ: Wellfleet Press. ISBN
1555213421.
• Décimo, Marc (2007). Maurice Princet, Le Mathématicien du Cubisme (in French). Paris: Éditions L'Echoppe.
ISBN 2-84068-191-9.
• Gleizes, Albert; Metzinger, Jean (1912). Du “Cubisme” (Edition Figuière). Paris.
• Henderson, Linda Dalrymple. “Overview of The Fourth Dimension And Non-Euclidean Geometry In Modern
Art, Revised Edition". MIT Press. Retrieved 24 March 2013.
• Jouffret, Esprit (1903). Traité élémentaire de géométrie à quatre dimensions et introduction à la géométrie à n
dimensions (in French). Paris: Gauthier-Villars. OCLC 1445172. Retrieved 2008-02-06.
• Kruger, Runette (Summer 2007). “Art in the Fourth Dimension: Giving Form to Form – The Abstract Paintings of Piet Mondrian” (PDF) (5). Spaces of Utopia: An Electronic Journal. pp. 23–35. ISSN 1646-4729.
Retrieved 25 March 2012.
• Miller, Arthur I. (2001). Einstein, Picasso: space, time, and beauty that causes havoc (Print). New York: Basic
Books. ISBN 0-465-01859-9.
• Oja, Carol J. (2000). Making Music Modern: New York in the 1920s. Oxford University Press, USA. p. 84. ISBN 9780195162578.
• Robbin, Tony (2006). Shadows of Reality: The Fourth Dimension in Relativity, Cubism, and Modern Thought
(Print). New Haven: Yale University Press. pp. 28–30. ISBN 9780300110395.
• Sirató, Charles Tamkó (1936). “Dimensionist Manifesto” (PDF). Paris. Retrieved 24 March 2013.
• Weber, Max (1910). “In The Fourth Dimension from a Plastic Point of View”. Camera Work 31 (July 1910).

20.9 Further reading
• Henderson, Linda Dalrymple (2013). The Fourth Dimension And Non-Euclidean Geometry In Modern Art
(Revised ed.). Cambridge, Massachusetts: The MIT Press. ISBN 0262582449.
• Henderson, Linda Dalrymple (1998). Duchamp in Context: Science and Technology in the Large Glass and
Related Works. Princeton, N.J: Princeton University Press. ISBN 9780691123868.

20.10 External links
• Talk at the Dali museum on his 4th dimension art

Dalí's 1954 painting Crucifixion (Corpus Hypercubus)


Chapter 21

Gelfand–Kirillov dimension
In algebra, the Gelfand–Kirillov dimension (or GK dimension) of a right module M over a k-algebra A is:

\[
\operatorname{GKdim} = \sup_{V,\,M_0} \; \limsup_{n\to\infty} \, \log_n \dim_k M_0 V^n
\]

where the sup is taken over all finite-dimensional subspaces V ⊂ A and M0 ⊂ M .
An algebra is said to have polynomial growth if its Gelfand–Kirillov dimension is finite.

21.1 Basic facts
• The Gelfand–Kirillov dimension of a finitely generated commutative algebra A over a field is the Krull dimension of A (or equivalently the transcendence degree of the field of fractions of A over the base field).
• In particular, the GK dimension of the polynomial ring k[x1, ..., xn] is n; this case is illustrated by the growth computation sketched after this list.
• (Warfield) For any real number r ≥ 2, there exists a finitely generated algebra whose GK dimension is r.[1]
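The polynomial-ring case can be checked directly from the definition: with V spanned by 1, x1, ..., xn and M0 spanned by 1, the subspace M0 V^N consists of all polynomials of degree at most N, whose dimension is the binomial coefficient C(N + n, n) ≈ N^n/n!, so log_N dim_k M0 V^N → n. A minimal numerical sketch of this growth rate (plain Python; the function name is illustrative only):

from math import comb, log

def gk_growth_exponent(n_vars, N):
    # For A = k[x_1, ..., x_n], V = span{1, x_1, ..., x_n} and M_0 = span{1},
    # dim_k M_0 V^N is the number of monomials of degree <= N, i.e. C(N + n, n).
    dim = comb(N + n_vars, n_vars)
    return log(dim) / log(N)

for n_vars in (1, 2, 3):
    # The exponent slowly approaches n_vars as N grows, matching GKdim = n.
    print(n_vars, [round(gk_growth_exponent(n_vars, N), 3) for N in (10, 1000, 100000)])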

21.2 In the theory of D-modules
Given a right module M over the Weyl algebra A_n, the Gelfand–Kirillov dimension of M over the Weyl algebra coincides with the dimension of M, which is by definition the degree of the Hilbert polynomial of M. This makes it possible to prove additivity of the Gelfand–Kirillov dimension in short exact sequences and, ultimately, Bernstein's inequality, which states that the dimension of M must be at least n. This leads to the definition of holonomic D-modules as those with the minimal dimension n, and these modules play a great role in the geometric Langlands program.

21.3 References
[1] Artin 1999, Theorem VI.2.1.

• Smith, S. Paul; Zhang, James J. (1998). “A remark on Gelfand–Kirillov dimension” (PDF). Proceedings of the
American Mathematical Society 126 (2): 349–352.
• Coutinho, S. C. (1995). A Primer of Algebraic D-modules. Cambridge University Press.

21.4 Further reading
• Artin, Michael (1999). “Noncommutative Rings” (PDF). Chapter VI.


Chapter 22

Global dimension
In ring theory and homological algebra, the global dimension (or global homological dimension; sometimes just called homological dimension) of a ring A, denoted gl dim A, is a non-negative integer or infinity which is a homological invariant of the ring. It is defined to be the supremum of the set of projective dimensions of all A-modules.
Global dimension is an important technical notion in the dimension theory of Noetherian rings. By a theorem of
Jean-Pierre Serre, global dimension can be used to characterize within the class of commutative Noetherian local
rings those rings which are regular. Their global dimension coincides with the Krull dimension, whose definition is
module-theoretic.
When the ring A is noncommutative, one initially has to consider two versions of this notion, right global dimension
that arises from consideration of the right A-modules, and left global dimension that arises from consideration of the
left A-modules. For an arbitrary ring A the right and left global dimensions may differ. However, if A is a Noetherian
ring, both of these dimensions turn out to be equal to weak global dimension, whose definition is left-right symmetric.
Therefore, for noncommutative Noetherian rings, these two versions coincide and one is justified in talking about the
global dimension.

22.1 Examples
Let A = K[x1,...,xn] be the ring of polynomials in n variables over a field K. Then the global dimension of A is equal
to n. This statement goes back to David Hilbert's foundational work on homological properties of polynomial rings,
see Hilbert’s syzygy theorem. More generally, if R is a Noetherian ring of finite global dimension k and A = R[x] is
a ring of polynomials in one variable over R then the global dimension of A is equal to k + 1.
The first Weyl algebra A1 is a noncommutative Noetherian domain of global dimension one.
A ring has global dimension zero if and only if it is semisimple. The global dimension of a ring A is less than or
equal to one if and only if A is hereditary. In particular, a commutative principal ideal domain which is not a field
has global dimension one.
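As an illustrative check of the polynomial-ring case (a standard computation, included here only as an example): over A = K[x, y] the residue field K = A/(x, y) has the Koszul resolution

\[
0 \longrightarrow A \xrightarrow{\;\begin{pmatrix} -y \\ x \end{pmatrix}\;} A^{2} \xrightarrow{\;(x \;\; y)\;} A \longrightarrow A/(x, y) \longrightarrow 0 ,
\]

a minimal free resolution of length 2, so the projective dimension of K over A is 2, matching the value gl dim K[x, y] = 2 given by Hilbert's syzygy theorem.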

22.2 Alternative characterizations
The right global dimension of a ring A can be alternatively defined as:
• the supremum of the set of projective dimensions of all cyclic right A-modules;
• the supremum of the set of projective dimensions of all finite right A-modules;
• the supremum of the injective dimensions of all right A-modules;
• when A is a commutative Noetherian local ring with maximal ideal m, the projective dimension of the residue
field A/m.

The left global dimension of A has analogous characterizations obtained by replacing “right” with “left” in the above
list.
Serre proved that a commutative Noetherian local ring A is regular if and only if it has finite global dimension, in
which case the global dimension coincides with the Krull dimension of A. This theorem opened the door to application
of homological methods to commutative algebra.

22.3 References
• Eisenbud, David (1999), Commutative Algebra with a View Toward Algebraic Geometry, Graduate Texts in
Mathematics 150 (3rd ed.), Springer-Verlag, ISBN 0-387-94268-8.
• Kaplansky, Irving (1972), Fields and Rings, Chicago Lectures in Mathematics (2nd ed.), University Of Chicago
Press, ISBN 0-226-42451-0, Zbl 1001.16500
• Matsumura, Hideyuki (1989), Commutative Ring Theory, Cambridge Studies in Advanced Mathematics 8,
Cambridge University Press, ISBN 0-521-36764-6.
• McConnell, J. C.; Robson, J. C.; Small, Lance W. (2001), Noncommutative Noetherian Rings (Revised ed.), Graduate Studies in Mathematics 30, American Mathematical Society, ISBN 0-8218-2169-5.

Chapter 23

Interdimensional
Interdimensional may refer to:
• Interdimensional hypothesis
• Interdimensional doorway
• Interdimensional travel
• Interdimensional being
• Interdimensional (song by Darkest Horizon) [1]

23.1 References
[1] http://www.darkesthorizon.com


Chapter 24

Isoperimetric dimension
In mathematics, the isoperimetric dimension of a manifold is a notion of dimension that tries to capture how the
large-scale behavior of the manifold resembles that of a Euclidean space (unlike the topological dimension or the
Hausdorff dimension which compare different local behaviors against those of the Euclidean space).
In the Euclidean space, the isoperimetric inequality says that of all bodies with the same volume, the ball has the
smallest surface area. In other manifolds it is usually very difficult to find the precise body minimizing the surface
area, and this is not what the isoperimetric dimension is about. The question we will ask is, what is approximately the
minimal surface area, whatever the body realizing it might be.

24.1 Formal definition
We say that a differentiable manifold M satisfies a d-dimensional isoperimetric inequality if for every open set D in M with smooth boundary one has

area(∂D) ≥ C vol(D)^((d−1)/d).
The notations vol and area refer to the regular notions of volume and surface area on the manifold, or more precisely, if the manifold has n topological dimensions then vol refers to n-dimensional volume and area refers to (n − 1)-dimensional volume. C here refers to some constant, which does not depend on D (it may depend on the manifold and on d).
The isoperimetric dimension of M is the supremum of all values of d such that M satisfies a d-dimensional isoperimetric inequality.

24.2 Examples
A d-dimensional Euclidean space has isoperimetric dimension d. This is the well known isoperimetric problem —
as discussed above, for the Euclidean space the constant C is known precisely since the minimum is achieved for the
ball.
An infinite cylinder (i.e. a product of the circle and the line) has topological dimension 2 but isoperimetric dimension
1. Indeed, multiplying any manifold with a compact manifold does not change the isoperimetric dimension (it only
changes the value of the constant C). Any compact manifold has isoperimetric dimension 0.
It is also possible for the isoperimetric dimension to be larger than the topological dimension. The simplest example
is the infinite jungle gym, which has topological dimension 2 and isoperimetric dimension 3.
The hyperbolic plane has topological dimension 2 and isoperimetric dimension infinity. In fact the hyperbolic plane
has positive Cheeger constant. This means that it satisfies the inequality

area (∂D) ≥ C vol (D),
which obviously implies infinite isoperimetric dimension.

24.3 Isoperimetric dimension of graphs
Main article: Expander graph
The isoperimetric dimension of graphs can be defined in a similar fashion. A precise definition is given in Chung’s
survey.[1] Area and volume are measured by set sizes. For every subset A of the graph G one defines ∂A as the set of
vertices in G \ A with a neighbor in A. A d-dimensional isoperimetric inequality is now defined by

|∂A| ≥ C (min (|A|, |G \ A|))

(d−1)/d

.

The graph analogs of all the examples above hold, but the definition is slightly different in order to avoid every finite graph having isoperimetric dimension 0: in the above formula the volume of A is replaced by min(|A|, |G \ A|) (see Chung's survey, section 7).
The isoperimetric dimension of a d-dimensional grid is d. In general, the isoperimetric dimension is preserved by
quasi-isometries, both by quasi-isometries between manifolds, between graphs, and even by quasi-isometries carrying
manifolds to graphs, with the respective definitions. In rough terms, this means that a graph “mimicking” a given
manifold (as the grid mimics the Euclidean space) would have the same isoperimetric dimension as the manifold. An
infinite complete binary tree has isoperimetric dimension ∞.
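A quick numerical illustration of the grid case (a rough sketch in plain Python; the box-shaped test sets and the helper name are only for illustration, and boxes are not claimed to be the minimizing sets): for boxes A = {0, ..., L−1}^d in the grid Z^d the vertex boundary has size 2dL^(d−1), so the ratio |∂A| / |A|^((d−1)/d) stays bounded away from 0, consistent with isoperimetric dimension d.

from itertools import product

def box_boundary_size(L, d):
    # Vertex boundary of the box A = {0, ..., L-1}^d in the grid Z^d:
    # vertices outside A that have a grid neighbor inside A.
    box = set(product(range(L), repeat=d))
    boundary = set()
    for v in box:
        for i in range(d):
            for step in (-1, 1):
                w = list(v)
                w[i] += step
                w = tuple(w)
                if w not in box:
                    boundary.add(w)
    return len(boundary)

for d in (1, 2, 3):
    for L in (4, 8, 16):
        ratio = box_boundary_size(L, d) / (L ** d) ** ((d - 1) / d)
        print(d, L, round(ratio, 2))   # equals 2*d for these boxes, so it stays bounded below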

24.4 Consequences of isoperimetry
A simple integration over r (or sum in the case of graphs) shows that a d-dimensional isoperimetric inequality implies
a d-dimensional volume growth, namely

vol B(x, r) ≥ C r^d
where B(x,r) denotes the ball of radius r around the point x in the Riemannian distance or in the graph distance.
In general, the opposite is not true, i.e. even uniformly exponential volume growth does not imply any kind of
isoperimetric inequality. A simple example can be had by taking the graph Z (i.e. all the integers with edges between
n and n + 1) and connecting to the vertex n a complete binary tree of height |n|. Both properties (exponential growth
and 0 isoperimetric dimension) are easy to verify.
An interesting exception is the case of groups. It turns out that a group with polynomial growth of order d has
isoperimetric dimension d. This holds both for the case of Lie groups and for the Cayley graph of a finitely generated
group.
A theorem of Varopoulos connects the isoperimetric dimension of a graph to the rate of escape of random walk on
the graph. The result states
Varopoulos’ theorem: If G is a graph satisfying a d-dimensional isoperimetric inequality then

p_n(x, y) ≤ C n^(−d/2)
where p_n(x, y) is the probability that a random walk on G starting from x will be at y after n steps, and C is some
constant.
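As an illustrative sanity check of the d = 2 case (a Monte Carlo sketch, not a proof; the trial count and step counts are arbitrary choices): for the 2-dimensional grid the bound predicts p_n(x, x) ≤ C n^(−1), and simulating simple random walk on Z^2 shows n · p_n(0, 0) staying bounded (about 2/π for even n).

import random

def return_probability(n_steps, trials=50_000, seed=0):
    # Monte Carlo estimate of p_n(0, 0) for simple random walk on Z^2.
    rng = random.Random(seed)
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    hits = 0
    for _ in range(trials):
        x = y = 0
        for _ in range(n_steps):
            dx, dy = rng.choice(moves)
            x += dx
            y += dy
        hits += (x == 0 and y == 0)
    return hits / trials

for n in (10, 20, 40, 80):        # even step counts (odd ones return with probability 0)
    print(n, round(n * return_probability(n), 3))   # stays roughly constant, i.e. p_n <= C / n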


24.5 References
[1] Chung, Fan. “Discrete Isoperimetric Inequalities” (PDF).

• Isaac Chavel, Isoperimetric Inequalities: Differential geometric and analytic perspectives, Cambridge University Press, Cambridge, UK (2001), ISBN 0-521-80267-9
Discusses the topic in the context of manifolds, no mention of graphs.
• N. Th. Varopoulos, Isoperimetric inequalities and Markov chains, J. Funct. Anal. 63:2 (1985), 215–239.
• Thierry Coulhon and Laurent Saloff-Coste, Isopérimétrie pour les groupes et les variétés, Rev. Mat. Iberoamericana 9:2 (1993), 293–314.
This paper contains the result that on groups of polynomial growth, volume growth and isoperimetric
inequalities are equivalent. In French.
• Fan Chung, Discrete Isoperimetric Inequalities. Surveys in Differential Geometry IX, International Press, (2004),
53–82. http://math.ucsd.edu/~{}fan/wp/iso.pdf.
This paper contains a precise definition of the isoperimetric dimension of a graph, and establishes many
of its properties.

Chapter 25

Kaplan–Yorke conjecture
In applied mathematics, the Kaplan–Yorke conjecture concerns the dimension of an attractor, using Lyapunov
exponents.[1][2] By arranging the Lyapunov exponents in order from largest to smallest λ1 ≥ λ2 ≥ · · · ≥ λn , let j
be the index for which

\[
\sum_{i=1}^{j} \lambda_i > 0
\qquad\text{and}\qquad
\sum_{i=1}^{j+1} \lambda_i < 0 .
\]

Then the conjecture is that the dimension of the attractor is
\[
D = j + \frac{\sum_{i=1}^{j} \lambda_i}{|\lambda_{j+1}|} .
\]

25.1 Examples
Especially for chaotic systems, the Kaplan–Yorke conjecture is a useful tool in order to determine the fractal dimension
of the corresponding attractor.[3]
• The Hénon map with parameters a = 1.4 and b = 0.3 has the ordered Lyapunov exponents λ1 = 0.603 and λ2 = −2.34. In this case, we find j = 1 and the dimension formula reduces to
\[
D = j + \frac{\lambda_1}{|\lambda_2|} = 1 + \frac{0.603}{|{-2.34}|} = 1.26 .
\]

• The Lorenz system shows chaotic behavior at the parameter values σ = 16, ρ = 45.92 and β = 4.0. The resulting Lyapunov exponents are {2.16, 0.00, −32.4}. Noting that j = 2, we find
\[
D = 2 + \frac{2.16 + 0.00}{|{-32.4}|} = 2.07 .
\]
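A minimal sketch of this computation in Python (the function name is illustrative; the Lyapunov exponents are assumed to be given, e.g. estimated numerically as in [3]):

def kaplan_yorke_dimension(exponents):
    # Lyapunov exponents are given in decreasing order lambda_1 >= ... >= lambda_n.
    partial = 0.0
    j = 0
    # j is the largest index whose partial sum of exponents is still positive.
    while j < len(exponents) and partial + exponents[j] > 0:
        partial += exponents[j]
        j += 1
    if j == len(exponents):     # all partial sums positive: the formula does not apply
        return float(j)
    return j + partial / abs(exponents[j])

print(kaplan_yorke_dimension([0.603, -2.34]))       # Henon map: ~1.26
print(kaplan_yorke_dimension([2.16, 0.00, -32.4]))  # Lorenz system: ~2.07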

25.2 References
[1] J. Kaplan and J. Yorke, “Chaotic behavior of multidimensional difference equations,” in: Functional Differential Equations
and the Approximation of Fixed Points, Lecture Notes in Mathematics, vol. 730, H.O. Peitgen and H.O. Walther, eds.
(Springer, Berlin), p. 228.
[2] P. Frederickson, J. Kaplan, E. Yorke and J. Yorke, “The Lyapunov Dimension of Strange Attractors,” J. Diff. Eqs. 49
(1983) 185.
[3] A. Wolf, J. B. Swift, H. L. Swinney and J. A. Vastano, “Determining Lyapunov Exponents from a Time Series,” Physica D 16 (1985), pp. 285–317.

Chapter 26

Kodaira dimension
In algebraic geometry, the Kodaira dimension κ(X) (or canonical dimension) measures the size of the canonical
model of a projective variety X.
Igor Shafarevich introduced an important numerical invariant of surfaces with the notation κ in the seminar Shafarevich
1965. Shigeru Iitaka (1970) extended it and defined the Kodaira dimension for higher dimensional varieties (under
the name of canonical dimension), and later named it after Kunihiko Kodaira in Iitaka (1971).

26.1 The plurigenera
The canonical bundle of a smooth algebraic variety X of dimension n over a field is the line bundle of n-forms,

\[
K_X = \bigwedge^{n} \Omega^1_X ,
\]

which is the nth exterior power of the cotangent bundle of X. For an integer d, the dth tensor power of KX is again a
line bundle. For d ≥ 0, the vector space of global sections H^0(X, K_X^d) has the remarkable property that it is a birational
invariant of smooth projective varieties X. That is, this vector space is canonically identified with the corresponding
space for any smooth projective variety which is isomorphic to X outside lower-dimensional subsets.
For d ≥ 0, the dth plurigenus of X is defined as the dimension of the vector space of global sections of K_X^d:

\[
P_d = h^0(X, K_X^d) = \dim H^0(X, K_X^d) .
\]

The plurigenera are important birational invariants of an algebraic variety. In particular, the simplest way to prove
that a variety is not rational (that is, not birational to projective space) is to show that some plurigenus Pd with d > 0
is not zero. If the space of sections of K_X^d is nonzero, then there is a natural rational map from X to the projective space

\[
\mathbf{P}\bigl(H^0(X, K_X^d)\bigr) = \mathbf{P}^{P_d - 1} ,
\]

called the d-canonical map. The canonical ring R(KX) of a variety X is the graded ring

\[
R(K_X) := \bigoplus_{d \ge 0} H^0(X, K_X^d) .
\]

Also see geometric genus and arithmetic genus.
The Kodaira dimension of X is defined to be −∞ if the plurigenera Pd are zero for all d > 0; otherwise, it is the
minimum κ such that Pd/dκ is bounded. The Kodaira dimension of an n-dimensional variety is either −∞ or an integer
in the range from 0 to n.
26.1.1 Interpretations of the Kodaira dimension

The following integers are equal if they are non-negative. A good reference is Lazarsfeld (2004), Theorem 2.1.33.
• The dimension of the Proj construction Proj R(KX) (this variety is called the canonical model of X; it only
depends on the birational equivalence class of X).
• The dimension of the image of the d-canonical mapping for all positive multiples d of some positive integer
d0 .
• The transcendence degree of the canonical ring R(KX), minus one, i.e. t − 1, where t is the number of algebraically independent generators one can find.
• The rate of growth of the plurigenera: that is, the smallest number κ such that Pd/dκ is bounded. In Big O
notation, it is the minimal κ such that Pd = O(dκ ).
When one of these numbers is undefined or negative, then all of them are. In this case, the Kodaira dimension is said
to be negative or to be −∞. Some historical references define it to be −1, but then the formula κ(X × Y) = κ(X) +
κ(Y) does not always hold, and the statement of the Iitaka conjecture becomes more complicated. For example, the
Kodaira dimension of P1 × X is −∞ for all varieties X.

26.1.2 Application

The Kodaira dimension gives a useful rough division of all algebraic varieties into several classes.
Varieties with low Kodaira dimension can be considered special, while varieties of maximal Kodaira dimension are
said to be of general type.
Geometrically, there is a very rough correspondence between Kodaira dimension and curvature: negative Kodaira
dimension corresponds to positive curvature, zero Kodaira dimension corresponds to flatness, and maximum Kodaira
dimension (general type) corresponds to negative curvature.
The specialness of varieties of low Kodaira dimension is analogous to the specialness of Riemannian manifolds of
positive curvature (and general type corresponds to the genericity of non-positive curvature); see classical theorems,
especially on Pinched sectional curvature and Positive curvature.
These statements are made more precise below.

26.1.3 Dimension 1

Smooth projective curves are discretely classified by genus, which can be any natural number g = 0, 1, ....
By “discretely classified”, we mean that for a given genus, there is a connected, irreducible moduli space of curves of
that genus.
The Kodaira dimension of a curve X is:
• κ = −∞: genus 0 (the projective line P1 ): KX is not effective, Pd = 0 for all d > 0.
• κ = 0: genus 1 (elliptic curves): KX is a trivial bundle, Pd = 1 for all d ≥ 0.
• κ = 1: genus g ≥ 2: KX is ample, Pd = (2d−1)(g−1) for all d ≥ 2.
Compare with the Uniformization theorem for surfaces (real surfaces, since a complex curve has real dimension
2): Kodaira dimension −∞ corresponds to positive curvature, Kodaira dimension 0 corresponds to flatness, Kodaira
dimension 1 corresponds to negative curvature. Note that most algebraic curves are of general type: in the moduli
space of curves, two connected components correspond to curves not of general type, while all the other components
correspond to curves of general type. Further, the space of curves of genus 0 is a point, the space of curves of genus
1 has (complex) dimension 1, and the space of curves of genus g ≥ 2 has dimension 3g−3.
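For the case κ = 1 above, the linear growth of the plurigenera can be checked with the Riemann–Roch theorem (a standard computation, sketched here only as an illustration): for a smooth curve X of genus g ≥ 2 and d ≥ 2, the divisor dK_X has degree d(2g − 2) > 2g − 2, so the correction term in Riemann–Roch vanishes and

\[
P_d = h^0(X, dK_X) = \deg(dK_X) - g + 1 = d(2g - 2) - g + 1 = (2d - 1)(g - 1) ,
\]

which grows linearly in d; the smallest κ with P_d/d^κ bounded is therefore κ = 1.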

26.1.4 Dimension 2

The Enriques–Kodaira classification classifies algebraic surfaces: coarsely by Kodaira dimension, then in more detail
within a given Kodaira dimension. To give some simple examples: the product P1 × X has Kodaira dimension −∞
for any curve X; the product of two curves of genus 1 (an abelian surface) has Kodaira dimension 0; the product of
a curve of genus 1 with a curve of genus at least 2 (an elliptic surface) has Kodaira dimension 1; and the product of
two curves of genus at least 2 has Kodaira dimension 2 and hence is of general type.

For a surface X of general type, the image of the d-canonical map is birational to X if d ≥ 5.

26.1.5 Any dimension

Rational varieties (varieties birational to projective space) have Kodaira dimension −∞. Abelian varieties (the compact
complex tori that are projective) have Kodaira dimension zero. More generally, Calabi–Yau manifolds (in dimension
1, elliptic curves; in dimension 2, abelian surfaces, K3 surfaces, and quotients of those varieties by finite groups) have
Kodaira dimension zero (corresponding to admitting Ricci flat metrics).
Any variety in characteristic zero that is covered by rational curves (nonconstant maps from P1 ), called a uniruled
variety, has Kodaira dimension −∞. Conversely, the main conjectures of minimal model theory (notably the abundance conjecture) would imply that every variety of Kodaira dimension −∞ is uniruled. This converse is known for
varieties of dimension at most 3.
Siu (2002) proved the invariance of plurigenera under deformations for all smooth complex projective varieties. In
particular, the Kodaira dimension does not change when the complex structure of the manifold is changed continuously.

A fibration of normal projective varieties X → Y means a surjective morphism with connected fibers.
For a 3-fold X of general type, the image of the d-canonical map is birational to X if d ≥ 61.[1]

26.2 General type
A variety of general type X is one of maximal Kodaira dimension (Kodaira dimension equal to its dimension):

κ(X) = dim X.
Equivalent conditions are that the line bundle KX is big, or that the d-canonical map is generically injective (that is,
a birational map to its image) for d sufficiently large.
For example, a variety with ample canonical bundle is of general type.
In some sense, most algebraic varieties are of general type. For example, a smooth hypersurface of degree d in the n-dimensional projective space is of general type if and only if d > n + 1. So we can say that most smooth hypersurfaces
in projective space are of general type.
Varieties of general type seem too complicated to classify explicitly, even for surfaces. Nonetheless, there are some
strong positive results about varieties of general type. For example, Bombieri showed in 1973 that the d-canonical map
of any complex surface of general type is birational for every d ≥ 5. More generally, Hacon-McKernan, Takayama,
and Tsuji showed in 2006 that for every positive integer n, there is a constant c(n) such that the d-canonical map of
any complex n-dimensional variety of general type is birational when d ≥ c(n).
The birational automorphism group of a variety of general type is finite.


26.3 Application to classification
Let X be a variety of nonnegative Kodaira dimension over a field of characteristic zero, and let B be the canonical
model of X, B = Proj R(X, KX); the dimension of B is equal to the Kodaira dimension of X. There is a natural rational
map X ⇢ B; any morphism obtained from it by blowing up X and B is called the Iitaka fibration. The minimal
model and abundance conjectures would imply that the general fiber of the Iitaka fibration can be arranged to be a
Calabi-Yau variety, which in particular has Kodaira dimension zero. Moreover, there is an effective Q-divisor Δ on B
(not unique) such that the pair (B, Δ) is klt, KB + Δ is ample, and the canonical ring of X is the same as the canonical
ring of (B, Δ) in degrees a multiple of some d > 0.[2] In this sense, X is decomposed into a family of varieties of
Kodaira dimension zero over a base (B, Δ) of general type. (Note that the variety B by itself need not be of general
type. For example, there are surfaces of Kodaira dimension 1 for which the Iitaka fibration is an elliptic fibration over
P1 .)
Given the conjectures mentioned, the classification of algebraic varieties would largely reduce to the cases of Kodaira
dimension −∞, 0 and general type. For Kodaira dimension −∞ and 0, there are some approaches to classification.
The minimal model and abundance conjectures would imply that every variety of Kodaira dimension −∞ is uniruled,
and it is known that every uniruled variety in characteristic zero is birational to a Fano fiber space. The minimal model
and abundance conjectures would imply that every variety of Kodaira dimension 0 is birational to a Calabi-Yau variety
with terminal singularities.
The Iitaka conjecture states that the Kodaira dimension of a fibration is at least the sum of the Kodaira dimension of
the base and the Kodaira dimension of a general fiber; see Mori (1987) for a survey. The Iitaka conjecture helped to
inspire the development of minimal model theory in the 1970s and 1980s. It is now known in many cases, and would
follow in general from the minimal model and abundance conjectures.

26.4 The relationship to Moishezon manifolds
Nakamura and Ueno proved the following additivity formula for complex manifolds (Ueno (1975)). Although the
base space is not required to be algebraic, the assumption that all the fibers are isomorphic is very special. Even with
this assumption, the formula can fail when the fiber is not Moishezon.
Let π: V → W be an analytic fiber bundle of compact complex manifolds, meaning that π is locally a
product (and so all fibers are isomorphic as complex manifolds). Suppose that the fiber F is a Moishezon
manifold. Then
κ(V ) = κ(F ) + κ(W ).

26.5 Notes
[1] J. A. Chen and M. Chen, Explicit birational geometry of 3-folds and 4-folds of general type III, Theorem 1.4.
[2] O. Fujino and S. Mori, J. Diff. Geom. 56 (2000), 167-188. Theorems 5.2 and 5.4.

26.6 References
• Chen, Jungkai A.; Chen, Meng (2013), Explicit birational geometry of 3-folds and 4-folds of general type, III,
arXiv:1302.0374, Bibcode:2013arXiv1302.0374M
• Dolgachev, I. (2001), “Kodaira dimension”, in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer,
ISBN 978-1-55608-010-4
• Fujino, Osamu; Mori, Shigefumi (2000), “A canonical bundle formula”, Journal of Differential Geometry 56
(1): 167–188, MR 1863025
• Iitaka, Shigeru (1970), “On D-dimensions of algebraic varieties”, Proc. Japan Acad. 46: 487–489, doi:10.3792/pja/1195520260,
MR 0285532


• Iitaka, Shigeru (1971), “On D-dimensions of algebraic varieties.”, J. Math. Soc. Japan 23: 356–373, doi:10.2969/jmsj/02320356,
MR 0285531
• Lazarsfeld, Robert (2004), Positivity in algebraic geometry 1, Berlin: Springer-Verlag, ISBN 3-540-22533-1,
MR 2095471
• Mori, Shigefumi (1987), “Classification of higher-dimensional varieties”, Algebraic geometry (Bowdoin, 1985),
Proceedings of Symposia in Pure Mathematics, 46, Part 1, American Mathematical Society, pp. 269–331, MR
0927961
• Shafarevich, Igor R.; Averbuh, B. G.; Vaĭnberg, Ju. R.; Zhizhchenko, A. B.; Manin, Ju. I.; Moĭshezon, B. G.;
Tjurina, G. N.; Tjurin, A. N. (1965), “Algebraic surfaces”, Akademiya Nauk SSSR. Trudy Matematicheskogo
Instituta imeni V. A. Steklova 75: 1–215, ISSN 0371-9685, MR 0190143, Zbl 0154.21001
• Siu, Y.-T. (2002), “Extension of twisted pluricanonical sections with plurisubharmonic weight and invariance
of semi-positively twisted plurigenera for manifolds not necessarily of general type”, Complex geometry (Gottingen, 2000), Berlin: Springer-Verlag, pp. 223–277, MR 1922108
• Ueno, Kenji (1975), Classification theory of algebraic varieties and compact complex spaces, Lecture Notes in
Mathematics 439, Springer-Verlag, MR 0506253

Chapter 27

Krull dimension
In commutative algebra, the Krull dimension of a commutative ring R, named after Wolfgang Krull, is the supremum
of the lengths of all chains of prime ideals. The Krull dimension need not be finite even for a Noetherian ring. More
generally the Krull dimension can be defined for modules over possibly non-commutative rings as the deviation of
the poset of submodules.
The Krull dimension has been introduced to provide an algebraic definition of the dimension of an algebraic variety:
the dimension of the affine variety defined by an ideal I in a polynomial ring R is the Krull dimension of R/I.
A field k has Krull dimension 0; more generally, k[x1 , ..., xn] has Krull dimension n. A principal ideal domain that
is not a field has Krull dimension 1. A local ring has Krull dimension 0 if and only if every element of its maximal
ideal is nilpotent.

27.1 Explanation
We say that a chain of prime ideals of the form p0 ⊊ p1 ⊊ . . . ⊊ pn has length n. That is, the length is the number
of strict inclusions, not the number of primes; these differ by 1. We define the Krull dimension of R to be the
supremum of the lengths of all chains of prime ideals in R .
Given a prime p in R, we define the height of p , written ht(p) , to be the supremum of the lengths of all chains of
prime ideals contained in p , meaning that p0 ⊊ p1 ⊊ . . . ⊊ pn ⊆ p .[1] In other words, the height of p is the Krull
dimension of the localization of R at p . A prime ideal has height zero if and only if it is a minimal prime ideal. The
Krull dimension of a ring is the supremum of the heights of all maximal ideals, or those of all prime ideals.
In a Noetherian ring, every prime ideal has finite height. Nonetheless, Nagata gave an example of a Noetherian ring
of infinite Krull dimension.[2] A ring is called catenary if any inclusion p ⊂ q of prime ideals can be extended to a
maximal chain of prime ideals between p and q , and any two maximal chains between p and q have the same length.
A ring is called universally catenary if any finitely generated algebra over it is catenary. Nagata gave an example of
a Noetherian ring which is not catenary.[3]
In a Noetherian ring, Krull’s height theorem says that the height of an ideal generated by n elements is no greater than
n.
More generally, the height of an ideal I is the infimum of the heights of all prime ideals containing I. In the language
of algebraic geometry, this is the codimension of the subvariety of Spec( R ) corresponding to I.[4]
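As a concrete illustration of these definitions (a standard example, included for convenience): in the polynomial ring k[x, y] one has the chain of prime ideals

\[
(0) \subsetneq (x) \subsetneq (x, y) ,
\]

where (0) is prime because k[x, y] is an integral domain, (x) is prime because k[x, y]/(x) ≅ k[y] is a domain, and (x, y) is maximal. The chain has length 2, so dim k[x, y] ≥ 2 (in fact equality holds, as noted in the examples below); the height of (x) is 1 and the height of (x, y) is 2.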

27.2 Krull dimension and schemes
It follows readily from the definition of the spectrum of a ring Spec(R), the space of prime ideals of R equipped with
the Zariski topology, that the Krull dimension of R is equal to the dimension of its spectrum as a topological space,
meaning the supremum of the lengths of all chains of irreducible closed subsets. This follows immediately from the
Galois connection between ideals of R and closed subsets of Spec(R) and the observation that, by the definition of
Spec(R), each prime ideal p of R corresponds to a generic point of the closed subset associated to p by the Galois
connection.

27.3 Examples
• The dimension of a polynomial ring over a field k[x1 , ..., xn] is the number of variables n. In the language of
algebraic geometry, this says that the affine space of dimension n over a field has dimension n, as expected.
In general, if R is a Noetherian ring of dimension n, then the dimension of R[x] is n + 1. If the Noetherian
hypothesis is dropped, then R[x] can have dimension anywhere between n + 1 and 2n + 1.
• The ring of integers Z has dimension 1. More generally, any principal ideal domain that is not a field has
dimension 1.
• An integral domain is a field if and only if its Krull dimension is zero. Dedekind domains that are not fields
(for example, discrete valuation rings) have dimension one.
• The Krull dimension of the zero ring is typically defined to be either −∞ or −1 . The zero ring is the only
ring with a negative dimension.
• A ring is Artinian if and only if it is Noetherian and its Krull dimension is ≤0.
• An integral extension of a ring has the same dimension as the ring does.
• Let R be an algebra over a field k that is an integral domain. Then the Krull dimension of R is less than or equal
to the transcendence degree of the field of fractions of R over k.[5] The equality holds if R is finitely generated
as an algebra (for instance, by the Noether normalization lemma).
• Let R be a Noetherian ring, I an ideal, and gr_I(R) = ⊕_{k≥0} I^k/I^{k+1} the associated graded ring (geometers call it the ring of the normal cone of I). Then dim gr_I(R) is the supremum of the heights of maximal ideals of R containing I.[6]

• A commutative Noetherian ring of Krull dimension zero is a direct product of a finite number (possibly one)
of local rings of Krull dimension zero.
• A Noetherian local ring is called a Cohen–Macaulay ring if its dimension is equal to its depth. A regular local
ring is an example of such a ring.
• A Noetherian integral domain is a unique factorization domain if and only if every height 1 prime ideal is
principal.[7]
• For a commutative Noetherian ring the three following conditions are equivalent: being a reduced ring of Krull
dimension zero, being a field or a direct product of a finite number of fields, being von Neumann regular.

27.4 Krull dimension of a module
If R is a commutative ring, and M is an R-module, we define the Krull dimension of M to be the Krull dimension of
the quotient of R making M a faithful module. That is, we define it by the formula:

dim_R M := dim(R / Ann_R(M))
where AnnR(M), the annihilator, is the kernel of the natural map R → EndR(M) of R into the ring of R-linear
endomorphisms of M.
In the language of schemes, finitely generated modules are interpreted as coherent sheaves, or generalized finite rank
vector bundles.
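For example (a standard computation, included for illustration): the Z-module Z/nZ with n ≠ 0 has annihilator (n), while Z itself is a faithful Z-module, so

\[
\dim_{\mathbf Z} \mathbf Z/n\mathbf Z = \dim \mathbf Z/(n) = 0 , \qquad \dim_{\mathbf Z} \mathbf Z = \dim \mathbf Z = 1 .
\]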


27.5 Krull dimension for non-commutative rings
The Krull dimension of a module over a possibly non-commutative ring is defined as the deviation of the poset of
submodules ordered by inclusion. For commutative Noetherian rings, this is the same as the definition using chains
of prime ideals.[8] The two definitions can be different for commutative rings which are not Noetherian.

27.6 See also
• Dimension theory (algebra)
• Regular local ring
• Hilbert function
• Krull’s principal ideal theorem
• Gelfand–Kirillov dimension
• Homological conjectures in commutative algebra

27.7 Notes
[1] Matsumura, Hideyuki: “Commutative Ring Theory”, page 30–31, 1989
[2] Eisenbud, D. Commutative Algebra (1995). Springer, Berlin. Exercise 9.6.
[3] Matsumura, H. Commutative Algebra (1970). Benjamin, New York. Example 14.E.
[4] Matsumura, Hideyuki: “Commutative Ring Theory”, page 30–31, 1989
[5] http://mathoverflow.net/questions/79959/krull-dimension-transcendence-degree
[6] Eisenbud 2004, Exercise 13.8
[7] Hartshorne, Robin: “Algebraic Geometry”, page 7, 1977
[8] McConnell, J.C. and Robson, J.C. Noncommutative Noetherian Rings (2001). Amer. Math. Soc., Providence. Corollary
6.4.8.

27.8 Bibliography
• Irving Kaplansky, Commutative rings (revised ed.), University of Chicago Press, 1974, ISBN 0-226-42454-5.
Page 32.
• L.A. Bokhut'; I.V. L'vov; V.K. Kharchenko (1991). “I. Noncommutative rings”. In Kostrikin, A.I.; Shafarevich,
I.R. Algebra II. Encyclopaedia of Mathematical Sciences 18. Springer-Verlag. ISBN 3-540-18177-6. Sect.4.7.
• Eisenbud, David (1995), Commutative algebra with a view toward algebraic geometry, Graduate Texts in Mathematics 150, Berlin, New York: Springer-Verlag, ISBN 978-0-387-94268-1, MR 1322960
• Hartshorne, Robin (1977), Algebraic Geometry, Graduate Texts in Mathematics 52, New York: SpringerVerlag, ISBN 978-0-387-90244-9, MR 0463157
• Matsumura, Hideyuki (1989), Commutative Ring Theory, Cambridge Studies in Advanced Mathematics (2nd
ed.), Cambridge University Press, ISBN 978-0-521-36764-6

Chapter 28

Matroid rank
In the mathematical theory of matroids, the rank of a matroid is the maximum size of an independent set in the
matroid. The rank of a subset S of elements of the matroid is, similarly, the maximum size of an independent subset
of S, and the rank function of the matroid maps sets of elements to their ranks.
The rank function is one of the fundamental concepts of matroid theory via which matroids may be axiomatized.
The rank functions of matroids form an important subclass of the submodular set functions, and the rank functions of
the matroids defined from certain other types of mathematical object such as undirected graphs, matrices, and field
extensions are important within the study of those objects.

28.1 Properties and axiomatization
The rank function of a matroid obeys the following properties.
• The value of the rank function is always a non-negative integer.
• For any two subsets A and B of E , r(A ∪ B) + r(A ∩ B) ≤ r(A) + r(B) . That is, the rank is a submodular
function.
• For any set A and element x , r(A) ≤ r(A ∪ {x}) ≤ r(A) + 1 . From the first of these two inequalities it
follows more generally that, if A ⊂ B ⊂ E , then r(A) ≤ r(B) ≤ r(E) . That is, the rank is a monotonic
function.
These properties may be used as axioms to characterize the rank function of matroids: every integer-valued submodular function on the subsets of a finite set that obeys the inequalities r(A) ≤ r(A ∪ {x}) ≤ r(A) + 1 for all A and
x is the rank function of a matroid.[1][2]
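A small computational illustration (a sketch in plain Python; the example graph, the helper names, and the brute-force property check are only for illustration): for the graphic matroid of a finite graph G = (V, E) discussed below, the rank of an edge subset S is |V| minus the number of connected components of (V, S), and the submodularity and unit-increase properties above can be verified exhaustively on a small example.

from itertools import combinations

def rank(edges, vertices):
    # Rank of an edge subset in the graphic matroid:
    # |V| minus the number of connected components of (V, edges), via union-find.
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    components = len(vertices)
    for u, w in edges:
        ru, rw = find(u), find(w)
        if ru != rw:
            parent[ru] = rw
            components -= 1
    return len(vertices) - components

V = {1, 2, 3, 4}
E = [(1, 2), (2, 3), (3, 1), (3, 4)]   # a triangle plus a pendant edge

subsets = [frozenset(s) for k in range(len(E) + 1) for s in combinations(E, k)]
r = {S: rank(S, V) for S in subsets}

# Submodularity: r(A ∪ B) + r(A ∩ B) <= r(A) + r(B) for all A, B.
assert all(r[A | B] + r[A & B] <= r[A] + r[B] for A in subsets for B in subsets)
# Unit-increase property: r(A) <= r(A ∪ {x}) <= r(A) + 1.
assert all(r[A] <= r[A | {x}] <= r[A] + 1 for A in subsets for x in E)
print(r[frozenset(E)])   # rank of the whole matroid: 3 (a spanning tree on 4 vertices)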

28.2 Other matroid properties from rank
The rank function may be used to determine the other important properties of a matroid:
• A set is independent if and only if its rank equals its cardinality, and dependent if and only if it has greater
cardinality than rank.[3]
• A nonempty set is a circuit if its cardinality equals one plus its rank and every subset formed by removing one element from the set has equal rank.[3]
• A set is a basis if its rank equals both its cardinality and the rank of the matroid.[3]
• A set is closed if it is maximal for its rank, in the sense that there does not exist another element that can be
added to it while maintaining the same rank.
• The difference |A| − r(A) is called the nullity or corank of the subset A . It is the minimum number of
elements that must be removed from A to obtain an independent set.[4]

28.3 Ranks of special matroids
In graph theory, the circuit rank (or cyclomatic number) of a graph is the corank of the associated graphic matroid;
it measures the minimum number of edges that must be removed from the graph to make the remaining edges form
a forest.[5] Several authors have studied the parameterized complexity of graph algorithms parameterized by this
number.[6][7]
In linear algebra, the rank of a linear matroid defined by linear independence from the columns of a matrix is the
rank of the matrix,[8] and it is also the dimension of the vector space spanned by the columns.
In abstract algebra, the rank of a matroid defined from sets of elements in a field extension L/K by algebraic independence is known as the transcendence degree.[9]

28.4 See also
• Rank oracle

28.5 References
[1] Shikare, M. M.; Waphare, B. N. (2004), Combinatorial Optimization, Alpha Science Int'l Ltd., p. 155, ISBN 9788173195600.
[2] Welsh, D. J. A. (2010), Matroid Theory, Courier Dover Publications, p. 8, ISBN 9780486474397.
[3] Oxley (2006), p. 25.
[4] Oxley (2006), p. 34.
[5] Berge, Claude (2001), “Cyclomatic number”, The Theory of Graphs, Courier Dover Publications, pp. 27–30, ISBN
9780486419756.
[6] Coppersmith, Don; Vishkin, Uzi (1985), “Solving NP-hard problems in 'almost trees’: Vertex cover”, Discrete Applied
Mathematics 10 (1): 27–45, doi:10.1016/0166-218X(85)90057-5, Zbl 0573.68017.
[7] Fiala, Jiří; Kloks, Ton; Kratochvíl, Jan (2001), “Fixed-parameter complexity of λ-labelings”, Discrete Applied Mathematics
113 (1): 59–72, doi:10.1016/S0166-218X(00)00387-5, Zbl 0982.05085.
[8] Oxley, James G. (2006), Matroid Theory, Oxford Graduate Texts in Mathematics 3, Oxford University Press, p. 81, ISBN
9780199202508.
[9] Lindström, B. (1988), “Matroids, algebraic and non-algebraic”, Algebraic, extremal and metric combinatorics, 1986 (Montreal, PQ, 1986), London Math. Soc. Lecture Note Ser. 131, Cambridge: Cambridge Univ. Press, pp. 166–174, MR
1052666.

Chapter 29

Negative-dimensional space
In topology, a discipline within mathematics, a negative-dimensional space is an extension of the usual notion of
space allowing for negative dimensions.[1]

29.1 Definition
Suppose that M_{t_0} is a compact space of Hausdorff dimension t_0, which is an element of a scale of compact spaces embedded in each other and parametrized by t (0 < t < ∞). Such scales are considered equivalent with respect to M_{t_0} if the compact spaces constituting them coincide for t ≥ t_0. It is said that the compact space M_{t_0} is the hole in this equivalent set of scales, and −t_0 is the negative dimension of the corresponding equivalence class.[2]

29.2 History
By the 1940s, the science of topology had developed and studied a thorough basic theory of topological spaces of
positive dimension. Motivated by computations, and to some extent aesthetics, topologists searched for mathematical
frameworks that extended our notion of space to allow for negative dimensions. Such dimensions, as well as the
fourth and higher dimensions, are hard to imagine since we are not able to directly observe them. It wasn’t until the
1960s that a special topological framework was constructed—the category of spectra. A spectrum is a generalization
of space that allows for negative dimensions. The concept of negative-dimensional spaces is applied, for example, to
analyze linguistic statistics.[3]

29.3 See also
• Cone (topology)
• Equidimensionality
• Join (topology)
• Suspension/desuspension
• Spectrum (topology)

29.4 References
[1] Wolcott, Luke; McTernan, Elizabeth (2012). “Imagining Negative-Dimensional Space” (PDF). In Bosch, Robert; McKenna,
Douglas; Sarhangi, Reza. Proceedings of Bridges 2012: Mathematics, Music, Art, Architecture, Culture. Phoenix, Arizona,
USA: Tessellations Publishing. pp. 637–642. ISBN 978-1-938664-00-7. ISSN 1099-6702. Retrieved 25 June 2015.


[2] Maslov, V.P. “General Notion of a Topological Space of Negative Dimension and Quantization of Its Density”. springer.com.
Retrieved 2015-06-23.
[3] Maslov, V.P. “Negative Dimension in General and Asymptotic Topology”. arxiv.org. Retrieved 2015-06-25.

29.5 External links
• Отрицательная асимптотическая топологическая размерность, новый конденсат и их связь с квантованным законом Ципфа (in Russian). For a translation into English, see Maslov, V.P. (November 2006). “Negative asymptotic topological dimension, a new condensate, and their relation to the quantized Zipf law”. Mathematical Notes 80 (5–6): 806–813. Retrieved 30 June 2015.

Chapter 30

Nonlinear dimensionality reduction
High-dimensional data, meaning data that requires more than two or three dimensions to represent, can be difficult to
interpret. One approach to simplification is to assume that the data of interest lie on an embedded non-linear manifold
within the higher-dimensional space. If the manifold is of low enough dimension, the data can be visualised in the
low-dimensional space.

Top-left: a 3D dataset of 1000 points in a spiraling band (a.k.a. the Swiss roll) with a rectangular hole in the middle. Top-right:
the original 2D manifold used to generate the 3D dataset. Bottom left and right: 2D recoveries of the manifold respectively using the
LLE and Hessian LLE algorithms as implemented by the Modular Data Processing toolkit.

Below is a summary of some of the important algorithms from the history of manifold learning and nonlinear
dimensionality reduction (NLDR).[1] Many of these non-linear dimensionality reduction methods are related to
the linear methods listed below. Non-linear methods can be broadly classified into two groups: those that provide a
mapping (either from the high-dimensional space to the low-dimensional embedding or vice versa), and those that
just give a visualisation. In the context of machine learning, mapping methods may be viewed as a preliminary feature
extraction step, after which pattern recognition algorithms are applied. Typically those that just give a visualisation
are based on proximity data – that is, distance measurements.

30.1 Linear methods
• Independent component analysis (ICA).
• Principal component analysis (PCA) (also called Karhunen–Loève transform — KLT).
• Singular value decomposition (SVD).
• Factor analysis.

30.2 Uses for NLDR
Consider a dataset represented as a matrix (or a database table), such that each row represents a set of attributes (or
features or dimensions) that describe a particular instance of something. If the number of attributes is large, then the
space of unique possible rows is exponentially large. Thus, the larger the dimensionality, the more difficult it becomes
to sample the space. This causes many problems. Algorithms that operate on high-dimensional data tend to have a
very high time complexity. Many machine learning algorithms, for example, struggle with high-dimensional data.
This has become known as the curse of dimensionality. Reducing data into fewer dimensions often makes analysis
algorithms more efficient, and can help machine learning algorithms make more accurate predictions.
Humans often have difficulty comprehending data in many dimensions. Thus, reducing data to a small number of
dimensions is useful for visualization purposes.

Plot of the two-dimensional points that result from using an NLDR algorithm. In this case, Manifold Sculpting was used to reduce the data into just two dimensions (rotation and scale).

The reduced-dimensional representations of data are often referred to as “intrinsic variables”. This description implies
that these are the values from which the data was produced. For example, consider a dataset that contains images of
a letter 'A', which has been scaled and rotated by varying amounts. Each image has 32x32 pixels. Each image can be
represented as a vector of 1024 pixel values. Each row is a sample on a two-dimensional manifold in 1024-dimensional
space (a Hamming space). The intrinsic dimensionality is two, because two variables (rotation and scale) were varied
in order to produce the data. Information about the shape or look of a letter 'A' is not part of the intrinsic variables
because it is the same in every instance. Nonlinear dimensionality reduction will discard the correlated information
(the letter 'A') and recover only the varying information (rotation and scale). The image to the right shows sample
images from this dataset (to save space, not all input images are shown), and a plot of the two-dimensional points
that results from using a NLDR algorithm (in this case, Manifold Sculpting was used) to reduce the data into just two
dimensions.


When PCA (a linear dimensionality reduction algorithm) is used to reduce this same dataset into two dimensions, the resulting values are not so well organized.

By comparison, if PCA (a linear dimensionality reduction algorithm) is used to reduce this same dataset into two
dimensions, the resulting values are not so well organized. This demonstrates that the high-dimensional vectors (each
representing a letter 'A') that sample this manifold vary in a non-linear manner.
It should be apparent, therefore, that NLDR has several applications in the field of computer vision. For example,
consider a robot that uses a camera to navigate in a closed static environment. The images obtained by that camera
can be considered to be samples on a manifold in high-dimensional space, and the intrinsic variables of that manifold
will represent the robot’s position and orientation. This utility is not limited to robots. Dynamical systems, a more
general class of systems, which includes robots, are defined in terms of a manifold. Active research in NLDR seeks to
unfold the observation manifolds associated with dynamical systems to develop techniques for modeling such systems
and enable them to operate autonomously.[2]

30.3 Manifold learning algorithms
Some of the more prominent manifold learning algorithms are listed below (in approximately chronological order).
An algorithm may learn an internal model of the data, which can be used to map points unavailable at training time
into the embedding in a process often called out-of-sample extension.

30.3.1 Sammon’s mapping

Sammon’s mapping is one of the first and most popular NLDR techniques.

Approximation of a principal curve by one-dimensional SOM (a broken line with red squares, 20 nodes). The first principal component is presented by a blue straight line. Data points are the small grey circles. For PCA, the Fraction of variance unexplained in
this example is 23.23%, for SOM it is 6.86%.[3]

30.3.2 Self-organizing map

The self-organizing map (SOM, also called Kohonen map) and its probabilistic variant generative topographic mapping (GTM) use a point representation in the embedded space to form a latent variable model based on a non-linear
mapping from the embedded space to the high-dimensional space.[4] These techniques are related to work on density
networks, which also are based around the same probabilistic model.

30.3.3 Principal curves and manifolds

Principal curves and manifolds give the natural geometric framework for nonlinear dimensionality reduction and
extend the geometric interpretation of PCA by explicitly constructing an embedded manifold, and by encoding using standard geometric projection onto the manifold. This approach was proposed by Trevor Hastie in his thesis
(1984)[8] and developed further by many authors.[9] How to define the “simplicity” of the manifold is problem-dependent; however, it is commonly measured by the intrinsic dimensionality and/or the smoothness of the manifold.
Usually, the principal manifold is defined as a solution to an optimization problem. The objective function includes
a quality of data approximation and some penalty terms for the bending of the manifold. The popular initial approximations are generated by linear PCA, Kohonen’s SOM or autoencoders. The elastic map method provides the
expectation-maximization algorithm for principal manifold learning with minimization of quadratic energy functional
at the “maximization” step.


Application of principal curves: Nonlinear quality of life index.[5] Points represent data of 171 UN countries in 4-dimensional
space formed by the values of 4 indicators: gross product per capita, life expectancy, infant mortality, tuberculosis incidence. Different
forms and colors correspond to various geographical locations. Red bold line represents the principal curve, approximating the
dataset. This principal curve was produced by the method of elastic map. Software is available for free non-commercial use.[6][7]

30.3.4 Autoencoders

An autoencoder is a feed-forward neural network which is trained to approximate the identity function. That is, it is
trained to map from a vector of values to the same vector. When used for dimensionality reduction purposes, one of
the hidden layers in the network is limited to contain only a small number of network units. Thus, the network must
learn to encode the vector into a small number of dimensions and then decode it back into the original space. Thus,
the first half of the network is a model which maps from high to low-dimensional space, and the second half maps
from low to high-dimensional space. Although the idea of autoencoders is quite old, training of deep autoencoders has
only recently become possible through the use of restricted Boltzmann machines and stacked denoising autoencoders.
Related to autoencoders is the NeuroScale algorithm, which uses stress functions inspired by multidimensional scaling
and Sammon mappings (see below) to learn a non-linear mapping from the high-dimensional to the embedded space.
The mappings in NeuroScale are based on radial basis function networks.
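A minimal sketch of the bottleneck idea described above (using PyTorch purely as an illustration; the layer sizes, training settings, and random placeholder data are arbitrary choices, not any particular published architecture):

import torch
from torch import nn

# Encoder maps 1024-dimensional inputs down to a 2-dimensional bottleneck;
# the decoder maps back, and the network is trained to reproduce its input.
encoder = nn.Sequential(nn.Linear(1024, 64), nn.ReLU(), nn.Linear(64, 2))
decoder = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1024))
model = nn.Sequential(encoder, decoder)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

X = torch.randn(256, 1024)           # placeholder data, e.g. flattened images
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(X), X)      # reconstruction error
    loss.backward()
    optimizer.step()

embedding = encoder(X).detach()      # the learned 2-D representation

The first half of the network (the encoder) is the map from the high-dimensional to the low-dimensional space, and the second half (the decoder) is the map back, as described above.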

30.3.5 Gaussian process latent variable models

Gaussian process latent variable models (GPLVM)[10] are probabilistic dimensionality reduction methods that use
Gaussian Processes (GPs) to find a lower dimensional non-linear embedding of high dimensional data. They are an
extension of the Probabilistic formulation of PCA. The model is defined probabilistically and the latent variables are
then marginalized and parameters are obtained by maximizing the likelihood. Like kernel PCA they use a kernel
function to form a non-linear mapping (in the form of a Gaussian process). However, in the GPLVM the mapping is from the embedded (latent) space to the data space (like density networks and GTM), whereas in kernel PCA it is in
the opposite direction. It was originally proposed for visualization of high dimensional data but has been extended to
construct a shared manifold model between two observation spaces.

30.3.6 Curvilinear component analysis

Curvilinear component analysis (CCA)[11] looks for the configuration of points in the output space that preserves original distances as much as possible while focusing on small distances in the output space (in contrast to Sammon’s mapping, which focuses on small distances in the original space).
It should be noticed that CCA, as an iterative learning algorithm, actually starts with a focus on large distances (like the Sammon algorithm), then gradually changes focus to small distances. The small-distance information will overwrite the large-distance information if compromises between the two have to be made.
The stress function of CCA is related to a sum of right Bregman divergences.[12]

30.3.7 Curvilinear distance analysis

CDA[11] trains a self-organizing neural network to fit the manifold and seeks to preserve geodesic distances in its
embedding. It is based on Curvilinear Component Analysis (which extended Sammon’s mapping), but uses geodesic
distances instead.

30.3.8 Diffeomorphic dimensionality reduction

Diffeomorphic Dimensionality Reduction or Diffeomap[13] learns a smooth diffeomorphic mapping which transports
the data onto a lower-dimensional linear subspace. The method solves for a smooth time-indexed vector field such that flows along the field that start at the data points end at a lower-dimensional linear subspace, thereby attempting to preserve pairwise differences under both the forward and inverse mappings.

30.3.9 Kernel principal component analysis

Perhaps the most widely used algorithm for manifold learning is kernel PCA.[14] It is a combination of Principal
component analysis and the kernel trick. PCA begins by computing the covariance matrix of the m × n matrix X
\[
C = \frac{1}{m} \sum_{i=1}^{m} x_i x_i^{\mathsf T} .
\]

It then projects the data onto the first k eigenvectors of that matrix. By comparison, KPCA begins by computing the
covariance matrix of the data after being transformed into a higher-dimensional space,
\[
C = \frac{1}{m} \sum_{i=1}^{m} \Phi(x_i)\,\Phi(x_i)^{\mathsf T} .
\]

It then projects the transformed data onto the first k eigenvectors of that matrix, just like PCA. It uses the kernel trick
to factor away much of the computation, such that the entire process can be performed without actually computing
Φ(x) . Of course Φ must be chosen such that it has a known corresponding kernel. Unfortunately, it is not trivial to
find a good kernel for a given problem, so KPCA does not yield good results with some problems when using standard
kernels. For example, it is known to perform poorly with these kernels on the Swiss roll manifold. However, one can
view certain other methods that perform well in such settings (e.g., Laplacian Eigenmaps, LLE) as special cases of
kernel PCA by constructing a data-dependent kernel matrix.[15]
KPCA has an internal model, so it can be used to map points onto its embedding that were not available at training
time.
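A compact sketch of the computation just described, using the Gaussian (RBF) kernel (NumPy only; the kernel choice, its width gamma, and the function name are illustrative assumptions rather than a reference implementation):

import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    # Kernel PCA with the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2).
    # Works on the m x m kernel matrix instead of the feature-space covariance.
    m = X.shape[0]
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-gamma * sq_dists)

    # Center the kernel matrix (the implicit feature vectors get zero mean).
    one = np.full((m, m), 1.0 / m)
    K = K - one @ K - K @ one + one @ K @ one

    # Eigendecomposition; eigh returns eigenvalues in ascending order.
    eigvals, eigvecs = np.linalg.eigh(K)
    idx = np.argsort(eigvals)[::-1][:n_components]

    # Projections of the training points onto the feature-space principal
    # components are sqrt(lambda_k) * v_k.
    return eigvecs[:, idx] * np.sqrt(np.maximum(eigvals[idx], 0.0))

X = np.random.default_rng(0).normal(size=(100, 3))   # placeholder data
Y = kernel_pca(X, n_components=2)                     # 100 x 2 embedding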

30.3.10 Isomap

Isomap[16] is a combination of the Floyd–Warshall algorithm with classic Multidimensional Scaling. Classic Multidimensional Scaling (MDS) takes a matrix of pair-wise distances between all points, and computes a position for
each point. Isomap assumes that the pair-wise distances are only known between neighboring points, and uses the

30.3. MANIFOLD LEARNING ALGORITHMS

117

Floyd–Warshall algorithm to compute the pair-wise distances between all other points. This effectively estimates the
full matrix of pair-wise geodesic distances between all of the points. Isomap then uses classic MDS to compute the
reduced-dimensional positions of all the points.
Landmark-Isomap is a variant of this algorithm that uses landmarks to increase speed, at the cost of some accuracy.
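For illustration, a minimal usage sketch with a common open-source implementation (scikit-learn; the toy dataset, the neighbor count, and the target dimension are arbitrary choices):

from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

X, _ = make_swiss_roll(n_samples=1000, random_state=0)   # 3-D Swiss roll
# Geodesic distances are estimated through a 10-nearest-neighbor graph,
# then classical MDS recovers a 2-D embedding.
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
print(embedding.shape)   # (1000, 2)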

30.3.11 Locally-linear embedding

Locally-Linear Embedding (LLE)[17] was presented at approximately the same time as Isomap. It has several advantages over Isomap, including faster optimization when implemented to take advantage of sparse matrix algorithms,
and better results with many problems. LLE also begins by finding a set of the nearest neighbors of each point. It
then computes a set of weights for each point that best describe the point as a linear combination of its neighbors.
Finally, it uses an eigenvector-based optimization technique to find the low-dimensional embedding of points, such
that each point is still described with the same linear combination of its neighbors. LLE tends to handle non-uniform
sample densities poorly because there is no fixed unit to prevent the weights from drifting as various regions differ in
sample densities. LLE has no internal model.
LLE computes the barycentric coordinates of a point Xi based on its neighbors Xj. The original point is reconstructed
by a linear combination, given by the weight matrix Wij, of its neighbors. The reconstruction error is given by the
cost function E(W).

E(W) = \sum_i \Big| X_i - \sum_j W_{ij} X_j \Big|^2

The weights Wij refer to the amount of contribution the point Xj has while reconstructing the point Xi. The cost
function is minimized under two constraints: (a) Each data point Xi is reconstructed only from its neighbors, thus
enforcing Wij to be zero if point Xj is not a neighbor of the point Xi and (b) The sum of every row of the weight
matrix equals 1.


\sum_j W_{ij} = 1

The original data points are collected in a D dimensional space and the goal of the algorithm is to reduce the dimensionality to d such that D ≫ d. The same weights Wij that reconstruct the ith data point in the D dimensional
space will be used to reconstruct the same point in the lower d dimensional space. A neighborhood preserving map is
created based on this idea. Each point Xᵢ in the D dimensional space is mapped onto a point Yᵢ in the d dimensional
space by minimizing the cost function

C(Y) = \sum_i \Big| Y_i - \sum_j W_{ij} Y_j \Big|^2

In this cost function, unlike the previous one, the weights Wij are kept fixed and the minimization is done on the points Yᵢ to optimize the coordinates. This minimization problem can be solved by solving a sparse N × N eigenvalue problem (N being the number of data points), whose bottom d nonzero eigenvectors provide an orthogonal
set of coordinates. Generally the data points are reconstructed from K nearest neighbors, as measured by Euclidean
distance. For such an implementation the algorithm has only one free parameter K, which can be chosen by cross
validation.
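Since the weight-fitting and embedding steps above are somewhat intricate, a ready-made implementation is often used in practice. The following is a small usage sketch with scikit-learn; the Swiss-roll toy data and the particular values of K and d are illustrative assumptions.

from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

X, _ = make_swiss_roll(n_samples=1500)        # toy data sampled from a 2-D manifold in 3-D
# n_neighbors plays the role of K and n_components the role of d discussed above.
Y = LocallyLinearEmbedding(n_neighbors=12, n_components=2).fit_transform(X)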

30.3.12 Laplacian eigenmaps

Laplacian Eigenmaps[18] uses spectral techniques to perform dimensionality reduction. This technique relies on the
basic assumption that the data lies in a low-dimensional manifold in a high-dimensional space.[19] This algorithm
cannot embed out of sample points, but techniques based on Reproducing kernel Hilbert space regularization exist
for adding this capability.[20] Such techniques can be applied to other nonlinear dimensionality reduction algorithms
as well.


Traditional techniques like principal component analysis do not consider the intrinsic geometry of the data. Laplacian
eigenmaps builds a graph from neighborhood information of the data set. Each data point serves as a node on the
graph and connectivity between nodes is governed by the proximity of neighboring points (using e.g. the k-nearest
neighbor algorithm). The graph thus generated can be considered as a discrete approximation of the low-dimensional
manifold in the high-dimensional space. Minimization of a cost function based on the graph ensures that points close
to each other on the manifold are mapped close to each other in the low-dimensional space, preserving local distances.
The eigenfunctions of the Laplace–Beltrami operator on the manifold serve as the embedding dimensions, since under
mild conditions this operator has a countable spectrum that is a basis for square integrable functions on the manifold
(compare to Fourier series on the unit circle manifold). Attempts to place Laplacian eigenmaps on solid theoretical
ground have met with some success, as under certain nonrestrictive assumptions, the graph Laplacian matrix has
been shown to converge to the Laplace–Beltrami operator as the number of points goes to infinity.[21] Matlab code
for Laplacian Eigenmaps is available online,[22] and Belkin's PhD thesis can be found at Ohio State University.[23]
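The core construction (neighborhood graph, graph Laplacian, bottom eigenvectors) can be sketched in a few lines of Python. The use of a simple unweighted k-nearest-neighbor adjacency instead of a heat-kernel weighting, and all parameter values, are illustrative assumptions.

import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

def laplacian_eigenmaps(X, n_neighbors=10, n_components=2):
    # Symmetric k-nearest-neighbor adjacency matrix (0/1 weights, for simplicity).
    W = kneighbors_graph(X, n_neighbors, mode='connectivity').toarray()
    W = np.maximum(W, W.T)
    D = np.diag(W.sum(axis=1))                 # degree matrix
    L = D - W                                  # (unnormalized) graph Laplacian
    # Solve the generalized eigenproblem L y = lambda D y and discard the
    # trivial constant eigenvector belonging to eigenvalue 0.
    vals, vecs = eigh(L, D)
    return vecs[:, 1:n_components + 1]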
In classification applications, low dimension manifolds can be used to model data classes which can be defined from
sets of observed instances. Each observed instance can be described by two independent factors termed ’content’ and
’style’, where ’content’ is the invariant factor related to the essence of the class and ’style’ expresses variations in that
class between instances.[24] Unfortunately, Laplacian Eigenmaps may fail to produce a coherent representation of a
class of interest when training data consist of instances varying significantly in terms of style.[25] In the case of classes
which are represented by multivariate sequences, Structural Laplacian Eigenmaps has been proposed to overcome
this issue by adding additional constraints within the Laplacian Eigenmaps neighborhood information graph to better
reflect the intrinsic structure of the class.[26] More specifically, the graph is used to encode both the sequential structure of the multivariate sequences and, to minimise stylistic variations, proximity between data points of different
sequences or even within a sequence, if it contains repetitions. Using dynamic time warping, proximity is detected
by finding correspondences between and within sections of the multivariate sequences that exhibit high similarity.
Experiments conducted on vision-based activity recognition, object orientation classification and human 3D pose
recovery applications have demonstrated the added value of Structural Laplacian Eigenmaps when dealing with multivariate sequence data.[26] An extension of Structural Laplacian Eigenmaps, Generalized Laplacian Eigenmaps, led to
the generation of manifolds where one of the dimensions specifically represents variations in style. This has proved
particularly valuable in applications such as tracking of the human articulated body and silhouette extraction.[27]

30.3.13 Manifold alignment

Manifold alignment takes advantage of the assumption that disparate data sets produced by similar generating processes will share a similar underlying manifold representation. By learning projections from each original space to
the shared manifold, correspondences are recovered and knowledge from one domain can be transferred to another.
Most manifold alignment techniques consider only two data sets, but the concept extends to arbitrarily many initial
data sets.[28]

30.3.14 Diffusion maps

Diffusion maps leverages the relationship between heat diffusion and a random walk (Markov Chain); an analogy is
drawn between the diffusion operator on a manifold and a Markov transition matrix operating on functions defined
on the graph whose nodes were sampled from the manifold.[29] In particular let a data set be represented by X =
[x1 , x2 , . . . , xn ] ∈ Ω ⊂ RD . The underlying assumption of diffusion maps is that the data, although high-dimensional, lies on a low-dimensional manifold of dimension d. X represents the data set and µ represents the distribution of the data points on X. In addition, let us define a kernel which represents some notion of affinity of the points in X. The kernel k has the following properties:[30]

k(x, y) = k(y, x) for all x, y (k is symmetric), and
k(x, y) ≥ 0 for all x, y (k is positivity preserving).


Thus one can think of the individual data points as the nodes of a graph, with the kernel k defining some sort of affinity on that graph. The graph is symmetric by construction since the kernel is symmetric. It is easy to see here that from the tuple {X, k} one can construct a reversible Markov chain. This technique is fairly popular in a variety of fields and is known as the graph Laplacian construction.
The graph K = (X,E) can be constructed for example using a Gaussian kernel.
K_{ij} = \begin{cases} e^{-\|x_i - x_j\|_2^2 / \sigma^2} & \text{if } x_i \sim x_j \\ 0 & \text{otherwise} \end{cases}

In the above equation, xi ∼ xj denotes that xi is a nearest neighbor of xj . Properly, geodesic distance should be used to measure distances on the manifold. Since the exact structure of the manifold is not available, the geodesic distance is approximated by Euclidean distances between nearest neighbors only. The choice of σ modulates our notion of proximity, in the sense that if ∥xi − xj ∥2 ≫ σ then Kij ≈ 0, while if ∥xi − xj ∥2 ≪ σ then Kij ≈ 1. The former means that very little diffusion has taken place, while the latter implies that the diffusion process is nearly complete. Different strategies for choosing σ can be found in [31]. For K to faithfully represent a Markov matrix, it has to be normalized by the corresponding degree matrix D:
P = D−1 K.
P now represents a Markov chain. P(xi , xj ) is the probability of transitioning from xi to xj in one time step. Similarly, the probability of transitioning from xi to xj in t time steps is given by P t (xi , xj ). Here P t is the matrix P multiplied by itself t times. The Markov matrix P thus constitutes some notion of the local geometry of the data set X. The major difference between diffusion maps and principal component analysis is that only local features of the data are considered in diffusion maps, as opposed to taking correlations of the entire data set.
K defines a random walk on the data set, which means that the kernel captures some local geometry of the data set. The Markov chain defines fast and slow directions of propagation, based on the values taken by the kernel, and as one propagates the walk forward in time, the local geometry information aggregates in the same way as local transitions (defined by differential equations) of the dynamical system.[30] The concept of diffusion arises from the definition of a family of diffusion distances { Dt } t∈N ,

D_t^2(x, y) = \| p_t(x, \cdot) - p_t(y, \cdot) \|^2 .

For a given value of t, Dt defines a distance between any two points of the data set. This means that the value of Dt (x, y) will be small if there are many paths that connect x to y, and vice versa. The quantity Dt (x, y) involves summing over all paths of length t, as a result of which Dt is far more robust to noise in the data than the geodesic distance. Dt takes into account all the relations between the points x and y while calculating the distance, and serves as a better notion of proximity than plain Euclidean distance or even geodesic distance.
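The construction described in this subsection can be condensed into a short Python sketch. The dense Gaussian kernel, the bandwidth sigma, the diffusion time t and the variable names are illustrative assumptions, and no special handling of numerical issues is attempted.

import numpy as np

def diffusion_maps(X, sigma=1.0, n_components=2, t=1):
    # Gaussian affinity kernel on all pairs of points.
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / sigma ** 2)
    # Row-normalize by the degree matrix: P = D^{-1} K is a Markov transition matrix.
    P = K / K.sum(axis=1, keepdims=True)
    # P is not symmetric, so use a general eigensolver and sort by eigenvalue.
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    # Drop the trivial eigenvalue 1 (constant eigenvector) and scale by lambda^t.
    return vecs[:, 1:n_components + 1] * (vals[1:n_components + 1] ** t)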

30.3.15 Hessian Locally-Linear Embedding (Hessian LLE)

Like LLE, Hessian LLE[32] is also based on sparse matrix techniques. It tends to yield results of a much higher quality than LLE. Unfortunately, it has a very high computational cost, so it is not well-suited for heavily sampled manifolds. It has no internal model.

30.3.16 Modified Locally-Linear Embedding (MLLE)

Modified LLE (MLLE)[33] is another LLE variant which uses multiple weights in each neighborhood to address the
local weight matrix conditioning problem which leads to distortions in LLE maps. MLLE produces robust projections
similar to Hessian LLE, but without the significant additional computational cost.
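Both Hessian LLE (previous subsection) and MLLE are available as variants of the LLE implementation in scikit-learn; the snippet below is a usage sketch, with toy data and parameter values chosen purely for illustration.

from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

X, _ = make_swiss_roll(n_samples=1500)
# method='hessian' selects Hessian LLE; method='modified' selects MLLE.
Y_hlle = LocallyLinearEmbedding(n_neighbors=12, n_components=2, method='hessian').fit_transform(X)
Y_mlle = LocallyLinearEmbedding(n_neighbors=12, n_components=2, method='modified').fit_transform(X)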

30.3.17 Relational perspective map

Relational perspective map is a multidimensional scaling algorithm. The algorithm finds a configuration of data points
on a manifold by simulating a multi-particle dynamic system on a closed manifold, where data points are mapped to particles and distances (or dissimilarity) between data points represent a repulsive force. As the manifold gradually
grows in size the multi-particle system cools down gradually and converges to a configuration that reflects the distance
information of the data points.
Relational perspective map was inspired by a physical model in which positively charged particles move freely on the
surface of a ball. Guided by the Coulomb force between particles, the minimal energy configuration of the particles
will reflect the strength of repulsive forces between the particles.
The relational perspective map was introduced in [34]. The algorithm first used the flat torus as the image manifold; it has since been extended (in the software VisuMap) to use other types of closed manifolds, like the sphere, projective space, and Klein bottle, as image manifolds.

30.3.18 Local tangent space alignment

Main article: Local tangent space alignment
LTSA[35] is based on the intuition that when a manifold is correctly unfolded, all of the tangent hyperplanes to the
manifold will become aligned. It begins by computing the k-nearest neighbors of every point. It computes the tangent
space at every point by computing the first d principal components in each local neighborhood. It then optimizes to
find an embedding that aligns the tangent spaces.

30.3.19 Local multidimensional scaling

Local Multidimensional Scaling[36] performs multidimensional scaling in local regions, and then uses convex optimization to fit all the pieces together.

30.3.20 Maximum variance unfolding

Maximum Variance Unfolding was formerly known as Semidefinite Embedding. The intuition for this algorithm is
that when a manifold is properly unfolded, the variance over the points is maximized. This algorithm also begins
by finding the k-nearest neighbors of every point. It then seeks to solve the problem of maximizing the distance
between all non-neighboring points, constrained such that the distances between neighboring points are preserved.
The primary contribution of this algorithm is a technique for casting this problem as a semidefinite programming
problem. Unfortunately, semidefinite programming solvers have a high computational cost. The Landmark–MVU
variant of this algorithm uses landmarks to increase speed with some cost to accuracy. It has no model.
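A direct, if inefficient, way to see the semidefinite programming formulation is to write it down with a generic SDP modelling tool. The sketch below uses CVXPY and is only practical for very small data sets; the neighborhood size and all names are illustrative assumptions.

import numpy as np
import cvxpy as cp
from sklearn.neighbors import kneighbors_graph

def mvu(X, n_neighbors=5, n_components=2):
    n = X.shape[0]
    G = kneighbors_graph(X, n_neighbors, mode='connectivity').toarray()
    G = np.maximum(G, G.T)                                   # symmetric neighbor relation
    D2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)

    K = cp.Variable((n, n), PSD=True)                        # Gram matrix of the unfolded points
    constraints = [cp.sum(K) == 0]                           # center the embedding
    for i in range(n):
        for j in range(i + 1, n):
            if G[i, j]:                                      # preserve distances between neighbors
                constraints.append(K[i, i] - 2 * K[i, j] + K[j, j] == D2[i, j])
    # Maximizing the trace of K maximizes the total variance of the embedding.
    cp.Problem(cp.Maximize(cp.trace(K)), constraints).solve()

    vals, vecs = np.linalg.eigh(K.value)
    idx = np.argsort(vals)[::-1][:n_components]
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))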

30.3.21 Nonlinear PCA

Nonlinear PCA[37] (NLPCA) uses backpropagation to train a multi-layer perceptron to fit to a manifold. Unlike
typical MLP training, which only updates the weights, NLPCA updates both the weights and the inputs. That is, both
the weights and inputs are treated as latent values. After training, the latent inputs are a low-dimensional representation of the observed vectors, and the MLP maps from that low-dimensional representation to the high-dimensional
observation space.
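The idea of treating both the network weights and the inputs as latent values can be sketched directly in PyTorch: the latent codes are declared as trainable parameters and optimized jointly with a small decoder network. The toy data, network size, and training schedule below are illustrative assumptions, not the original NLPCA implementation.

import torch
from torch import nn

X = torch.randn(200, 10)                              # toy observations (200 points in 10-D)
n, D, d = X.shape[0], X.shape[1], 2

latent = nn.Parameter(0.1 * torch.randn(n, d))        # the 'inputs', treated as trainable latent values
decoder = nn.Sequential(nn.Linear(d, 32), nn.Tanh(), nn.Linear(32, D))

opt = torch.optim.Adam([latent, *decoder.parameters()], lr=1e-2)
for step in range(2000):
    opt.zero_grad()
    loss = ((decoder(latent) - X) ** 2).mean()        # reconstruction error
    loss.backward()
    opt.step()
# After training, `latent` holds the low-dimensional representation and `decoder`
# maps it back to the high-dimensional observation space, as described above.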

30.3.22 Data-driven high-dimensional scaling

Data-Driven High Dimensional Scaling (DD-HDS)[38] is closely related to Sammon’s mapping and curvilinear component analysis except that (1) it simultaneously penalizes false neighborhoods and tears by focusing on small distances
in both the original and output space, and that (2) it accounts for the concentration of measure phenomenon by adapting the
weighting function to the distance distribution.

30.3.23 Manifold sculpting

Manifold Sculpting[39] uses graduated optimization to find an embedding. Like other algorithms, it computes the
k-nearest neighbors and tries to seek an embedding that preserves relationships in local neighborhoods. It slowly scales variance out of higher dimensions, while simultaneously adjusting points in lower dimensions to preserve those
relationships. If the rate of scaling is small, it can find very precise embeddings. It boasts higher empirical accuracy
than other algorithms on several problems. It can also be used to refine the results from other manifold learning
algorithms. It struggles to unfold some manifolds, however, unless a very slow scaling rate is used. It has no model.

30.3.24 t-distributed stochastic neighbor embedding

t-distributed stochastic neighbor embedding (t-SNE) [40] is widely used. It is one of a family of stochastic neighbor
embedding methods.
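A minimal usage sketch with the scikit-learn implementation is given below; the toy data set and the perplexity value are illustrative assumptions.

from sklearn.datasets import make_swiss_roll
from sklearn.manifold import TSNE

X, _ = make_swiss_roll(n_samples=1000)                    # toy 3-D manifold data
Y = TSNE(n_components=2, perplexity=30.0).fit_transform(X)
print(Y.shape)                                            # (1000, 2)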

30.3.25 RankVisu

RankVisu[41] is designed to preserve rank of neighborhood rather than distance. RankVisu is especially useful on
difficult tasks (when the preservation of distance cannot be achieved satisfactorily). Indeed, the rank of neighborhood
is less informative than distance (ranks can be deduced from distances but distances cannot be deduced from ranks)
and its preservation is thus easier.

30.3.26 Topologically constrained isometric embedding

Topologically Constrained Isometric Embedding (TCIE)[42] is an algorithm based on approximating geodesic distances after filtering geodesics inconsistent with the Euclidean metric. Aimed at correcting the distortions caused when Isomap is used to map intrinsically non-convex data, TCIE uses weighted least-squares MDS in order to obtain a more
accurate mapping. The TCIE algorithm first detects possible boundary points in the data, and during computation of
the geodesic length marks inconsistent geodesics, to be given a small weight in the weighted Stress majorization that
follows.

30.4 Methods based on proximity matrices
A method based on proximity matrices is one where the data is presented to the algorithm in the form of a similarity
matrix or a distance matrix. These methods all fall under the broader class of metric multidimensional scaling.
The variations tend to be differences in how the proximity data is computed; for example, Isomap, locally linear
embeddings, maximum variance unfolding, and Sammon mapping (which is not in fact a mapping) are examples of
metric multidimensional scaling methods.

30.5 See also
• Discriminant analysis
• Elastic map[43]
• Feature learning
• Growing self-organizing map (GSOM)
• Multilinear subspace learning (MSL)
• Pairwise distance methods
• Self-organizing map (SOM)

30.6 References
[1] John A. Lee, Michel Verleysen, Nonlinear Dimensionality Reduction, Springer, 2007.


[2] Gashler, M. and Martinez, T., Temporal Nonlinear Dimensionality Reduction, In Proceedings of the International Joint
Conference on Neural Networks IJCNN'11, pp. 1959–1966, 2011
[3] The illustration is prepared using free software: E.M. Mirkes, Principal Component Analysis and Self-Organizing Maps:
applet. University of Leicester, 2011
[4] Yin, Hujun; Learning Nonlinear Principal Manifolds by Self-Organising Maps, in A.N. Gorban, B. Kégl, D.C. Wunsch,
and A. Zinovyev (Eds.), Principal Manifolds for Data Visualization and Dimension Reduction, Lecture Notes in Computer
Science and Engineering (LNCSE), vol. 58, Berlin, Germany: Springer, 2007, Ch. 3, pp. 68–95. ISBN 978-3-540-73749-0
[5] A. N. Gorban, A. Zinovyev, Principal manifolds and graphs in practice: from molecular biology to dynamical systems,
International Journal of Neural Systems, Vol. 20, No. 3 (2010) 219–232.
[6] A. Zinovyev, ViDaExpert - Multidimensional Data Visualization Tool (free for non-commercial use). Institut Curie, Paris.
[7] A. Zinovyev, ViDaExpert overview, IHES (Institut des Hautes Études Scientifiques), Bures-Sur-Yvette, Île-de-France.
[8] T. Hastie, Principal Curves and Surfaces, Ph.D Dissertation, Stanford Linear Accelerator Center, Stanford University,
Stanford, California, US, November 1984.
[9] A.N. Gorban, B. Kégl, D.C. Wunsch, A. Zinovyev (Eds.), Principal Manifolds for Data Visualisation and Dimension
Reduction, Lecture Notes in Computer Science and Engineering (LNCSE), Vol. 58, Springer, Berlin – Heidelberg – New
York, 2007. ISBN 978-3-540-73749-0
[10] N. Lawrence, Probabilistic Non-linear Principal Component Analysis with Gaussian Process Latent Variable Models, Journal of Machine Learning Research 6(Nov):1783–1816, 2005.
[11] P. Demartines and J. Hérault, Curvilinear Component Analysis: A Self-Organizing Neural Network for Nonlinear Mapping
of Data Sets, IEEE Transactions on Neural Networks, Vol. 8(1), 1997, pp. 148–154
[12] Jigang Sun, Malcolm Crowe, and Colin Fyfe, Curvilinear component analysis and Bregman divergences, In European
Symposium on Artificial Neural Networks (Esann), pages 81–86. d-side publications, 2010
[13] Christian Walder and Bernhard Schölkopf, Diffeomorphic Dimensionality Reduction, Advances in Neural Information
Processing Systems 22, 2009, pp. 1713–1720, MIT Press
[14] B. Schölkopf, A. Smola, K.-R. Müller, Nonlinear Component Analysis as a Kernel Eigenvalue Problem. Neural Computation 10(5):1299-1319, 1998, MIT Press Cambridge, MA, USA, doi:10.1162/089976698300017467
[15] Jihun Ham, Daniel D. Lee, Sebastian Mika, Bernhard Schölkopf. A kernel view of the dimensionality reduction of
manifolds. Proceedings of the 21st International Conference on Machine Learning, Banff, Canada, 2004. doi:10.1145/
1015330.1015417
[16] J. B. Tenenbaum, V. de Silva, J. C. Langford, A Global Geometric Framework for Nonlinear Dimensionality Reduction,
Science 290, (2000), 2319–2323.
[17] S. T. Roweis and L. K. Saul, Nonlinear Dimensionality Reduction by Locally Linear Embedding, Science Vol 290, 22
December 2000, 2323–2326.
[18] Mikhail Belkin and Partha Niyogi, Laplacian Eigenmaps and Spectral Techniques for Embedding and Clustering, Advances
in Neural Information Processing Systems 14, 2001, p. 586–691, MIT Press
[19] Mikhail Belkin Problems of Learning on Manifolds, PhD Thesis, Department of Mathematics, The University Of Chicago,
August 2003
[20] Bengio et al. “Out-of-Sample Extensions for LLE, Isomap, MDS, Eigenmaps, and Spectral Clustering” in Advances in
Neural Information Processing Systems (2004)
[21] Mikhail Belkin Problems of Learning on Manifolds, PhD Thesis, Department of Mathematics, The University Of Chicago,
August 2003
[22] Ohio-state.edu
[23] Ohio-state.edu
[24] J. Tenenbaum and W. Freeman, Separating style and content with bilinear models, Neural Computation, vol. 12, 2000.
[25] M. Lewandowski, J. Martinez-del Rincon, D. Makris, and J.-C. Nebel, Temporal extension of laplacian eigenmaps for
unsupervised dimensionality reduction of time series, Proceedings of the International Conference on Pattern Recognition
(ICPR), 2010


[26] M. Lewandowski, D. Makris, S.A. Velastin and J.-C. Nebel, Structural Laplacian Eigenmaps for Modeling Sets of Multivariate Sequences, IEEE Transactions on Cybernetics, 44(6): 936-949, 2014
[27] J. Martinez-del-Rincon, M. Lewandowski, J.-C. Nebel and D. Makris, Generalized Laplacian Eigenmaps for Modeling and
Tracking Human Motions, IEEE Transactions on Cybernetics, 44(9), pp 1646-1660, 2014
[28] Wang, Chang; Mahadevan, Sridhar (July 2008). Manifold Alignment using Procrustes Analysis (PDF). The 25th International Conference on Machine Learning. pp. 1120–1127.
[29] Diffusion Maps and Geometric Harmonics, Stephane Lafon, PhD Thesis, Yale University, May 2004
[30] Diffusion Maps, Ronald R. Coifman and Stephane Lafon,: Science, 19 June 2006
[31] B. Bah, “Diffusion Maps: Applications and Analysis”, Masters Thesis, University of Oxford
[32] D. Donoho and C. Grimes, “Hessian eigenmaps: Locally linear embedding techniques for high-dimensional data” Proc
Natl Acad Sci U S A. 2003 May 13; 100(10): 5591–5596
[33] Z. Zhang and J. Wang, “MLLE: Modified Locally Linear Embedding Using Multiple Weights” http://citeseerx.ist.psu.edu/
viewdoc/summary?doi=10.1.1.70.382
[34] James X. Li, Visualizing high-dimensional data with relational perspective map, Information Visualization (2004) 3, 49–59
[35] Zhang, Zhenyue; Hongyuan Zha (2005). “Principal Manifolds and Nonlinear Dimension Reduction via Local Tangent
Space Alignment”. SIAM Journal on Scientific Computing 26 (1): 313–338. doi:10.1137/s1064827502419154.
[36] J Venna and S Kaski, Local multidimensional scaling, Neural Networks, 2006
[37] Scholz, M. Kaplan, F. Guy, C. L. Kopka, J. Selbig, J., Non-linear PCA: a missing data approach, In Bioinformatics, Vol.
21, Number 20, pp. 3887–3895, Oxford University Press, 2005
[38] S. Lespinats, M. Verleysen, A. Giron, B. Fertil, DD-HDS: a tool for visualization and exploration of high-dimensional data,
IEEE Transactions on Neural Networks 18 (5) (2007) 1265–1279.
[39] Gashler, M. and Ventura, D. and Martinez, T., Iterative Non-linear Dimensionality Reduction with Manifold Sculpting, In
Platt, J.C. and Koller, D. and Singer, Y. and Roweis, S., editor, Advances in Neural Information Processing Systems 20,
pp. 513–520, MIT Press, Cambridge, MA, 2008
[40] van der Maaten, L.J.P.; Hinton, G.E. (Nov 2008). “Visualizing High-Dimensional Data Using t-SNE” (PDF). Journal of
Machine Learning Research 9: 2579–2605.
[41] Lespinats S., Fertil B., Villemain P. and Herault J., Rankvisu: Mapping from the neighbourhood network, Neurocomputing,
vol. 72 (13–15), pp. 2964–2978, 2009.
[42] Rosman G., Bronstein M. M., Bronstein A. M. and Kimmel R., Nonlinear Dimensionality Reduction by Topologically
Constrained Isometric Embedding, International Journal of Computer Vision, Volume 89, Number 1, 56–68, 2010
[43] Elastic maps

30.7 External links
• Isomap
• Generative Topographic Mapping
• Mike Tipping’s Thesis
• Gaussian Process Latent Variable Model
• Locally Linear Embedding
• Relational Perspective Map
• Waffles is an open source C++ library containing implementations of LLE, Manifold Sculpting, and some other
manifold learning algorithms.
• Efficient Dimensionality Reduction Toolkit homepage


• DD-HDS homepage
• RankVisu homepage
• Short review of Diffusion Maps
• Nonlinear PCA by autoencoder neural networks

Chapter 31

One-dimensional space

The number line

In physics and mathematics, a sequence of n numbers can be understood as a location in n-dimensional space. When
n = 1, the set of all such locations is called a one-dimensional space. An example of a one-dimensional space is the
number line, where the position of each point on it can be described by a single number.[1]

31.1 One-dimensional geometry
31.1.1 Polytopes

The only regular polytope in one dimension is the line segment, with the Schläfli symbol { }.

31.1.2 Hypersphere

The hypersphere in 1 dimension is a pair of points,[2] sometimes called a 0-sphere as its surface is zero-dimensional.
Its length is

L = 2r
where r is the radius.

31.2 Coordinate systems in one-dimensional space
Main article: Coordinate system
The most popular coordinate systems are the number line and the angle.
• Number line
• Angle

31.3 References
[1] Гущин, Д. Д. "Пространство как математическое понятие" (in Russian). fmclass.ru. Retrieved 2015-06-06.
[2] Gibilisco, Stan (1983). Understanding Einstein’s Theories of Relativity: Man’s New Perspective on the Cosmos. TAB Books.
p. 89.

Chapter 32

Ordinate

Illustration of a Cartesian coordinate plane. The second value in each ordered pair is the ordinate of the corresponding point.

In mathematics, ordinate most often refers to that element of an ordered pair which is plotted on the vertical axis of a
two-dimensional Cartesian coordinate system, as opposed to the abscissa. The term can also refer to the vertical axis
(typically y-axis) of a two-dimensional graph (because that axis is used to define and measure the vertical coordinates of points in the space). An ordered pair consists of two terms—the abscissa (horizontal, usually x) and the ordinate
(vertical, usually y)—which define the location of a point in two-dimensional rectangular space.[1]

( \overbrace{x}^{\text{abscissa}} ,\ \overbrace{y}^{\text{ordinate}} )

32.1 Examples
• For the point (−3, 1) plotted in the illustration above, −3 is called the abscissa and 1, the ordinate.

32.2 See also
• Abscissa
• Ordered pair
• Cartesian coordinate conventions
• Dependent and independent variables
• Function (mathematics)
• Relation (mathematics)
• Line chart
• The dictionary definition of ordinate at Wiktionary

32.3 References
[1] Weisstein, Eric W. “Ordinate”. MathWorld--A Wolfram Web Resource. Retrieved 14 July 2013.

32.4 External links
• Earliest Known Uses of Some of the Words of Mathematics (O)
This article is based on material taken from the Free On-line Dictionary of Computing prior to 1 November 2008
and incorporated under the “relicensing” terms of the GFDL, version 1.3 or later.

Chapter 33

Regular sequence
For regular Cauchy sequence, see Cauchy sequence#In constructive mathematics.
In commutative algebra, a regular sequence is a sequence of elements of a commutative ring which are as independent
as possible, in a precise sense. This is the algebraic analogue of the geometric notion of a complete intersection.

33.1 Definitions
For a commutative ring R and an R-module M, an element r in R is called a non-zero-divisor on M if r m = 0
implies m = 0 for m in M. An M-regular sequence is a sequence
r1 , ..., rd in R
such that ri is a non-zero-divisor on M/(r1 , ..., ri−₁)M for i = 1, ..., d.[1] Some authors also require that M/(r1 , ...,
rd)M is not zero. Intuitively, to say that r1 , ..., rd is an M-regular sequence means that these elements “cut M down”
as much as possible, when we pass successively from M to M/(r1 )M, to M/(r1 , r2 )M, and so on.
An R-regular sequence is called simply a regular sequence. That is, r1 , ..., rd is a regular sequence if r1 is a non-zero-divisor in R, r2 is a non-zero-divisor in the ring R/(r1 ), and so on. In geometric language, if X is an affine scheme
and r1 , ..., rd is a regular sequence in the ring of regular functions on X, then we say that the closed subscheme {r1 =0,
..., rd=0} ⊂ X is a complete intersection subscheme of X.
For example, x, y(1-x), z(1-x) is a regular sequence in the polynomial ring C[x, y, z], while y(1-x), z(1-x), x is not a regular sequence: already at the second step, y·z(1-x) = z·y(1-x) is zero in C[x, y, z]/(y(1-x)) while y is not, so z(1-x) is a zero-divisor on that quotient. But if R is a Noetherian local ring and the elements ri are in the maximal ideal, or if R is a graded
ring and the ri are homogeneous of positive degree, then any permutation of a regular sequence is a regular sequence.
Let R be a Noetherian ring, I an ideal in R, and M a finitely generated R-module. The depth of I on M, written
depthR(I, M) or just depth(I, M), is the supremum of the lengths of all M-regular sequences of elements of I. When
R is a Noetherian local ring and M is a finitely generated R-module, the depth of M, written depthR(M) or just
depth(M), means depthR(m, M); that is, it is the supremum of the lengths of all M-regular sequences in the maximal
ideal m of R. In particular, the depth of a Noetherian local ring R means the depth of R as an R-module. That is, the
depth of R is the maximum length of a regular sequence in the maximal ideal.
For a Noetherian local ring R, the depth of the zero module is ∞,[2] whereas the depth of a nonzero finitely generated
R-module M is at most the Krull dimension of M (also called the dimension of the support of M).[3]

33.2 Examples
• For a prime number p, the local ring Z₍p₎ is the subring of the rational numbers consisting of fractions whose
denominator is not a multiple of p. The element p is a non-zero-divisor in Z₍p₎, and the quotient ring of Z₍p₎
by the ideal generated by p is the field Z/(p). Therefore p cannot be extended to a longer regular sequence in
the maximal ideal (p), and in fact the local ring Z₍p₎ has depth 1.

• For any field k, the elements x1 , ..., xn in the polynomial ring A = k[x1 , ..., xn] form a regular sequence. It
follows that the localization R of A at the maximal ideal m = (x1 , ..., xn) has depth at least n. In fact, R has
depth equal to n; that is, there is no regular sequence in the maximal ideal of length greater than n.
• More generally, let R be a regular local ring with maximal ideal m. Then any elements r1 , ..., rd of m which
map to a basis for m/m2 as an R/m-vector space form a regular sequence.
An important case is when the depth of a local ring R is equal to its Krull dimension: R is then said to be Cohen–Macaulay. The three examples shown are all Cohen–Macaulay rings. Similarly, a finitely generated R-module M is
said to be Cohen-Macaulay if its depth equals its dimension.

33.3 Applications
• If r1 , ..., rd is a regular sequence in a ring R, then the Koszul complex is an explicit free resolution of R/(r1 ,
..., rd) as an R-module, of the form:
0 \to R^{\binom{d}{d}} \to \cdots \to R^{\binom{d}{1}} \to R \to R/(r_1, \ldots, r_d) \to 0
In the special case where R is the polynomial ring k[r1 , ..., rd], this gives a resolution of k as an R-module.
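As a concrete illustration (a standard special case, not taken from the text above), for a regular sequence r1 , r2 of length two the Koszul complex is

\[ 0 \longrightarrow R \xrightarrow{\,c \,\mapsto\, (-r_2 c,\; r_1 c)\,} R^2 \xrightarrow{\,(a,b) \,\mapsto\, a r_1 + b r_2\,} R \longrightarrow R/(r_1, r_2) \longrightarrow 0 , \]

and exactness in the middle is exactly the statement that r2 is a non-zero-divisor on R/(r1 ).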
• If I is an ideal generated by a regular sequence in a ring R, then the associated graded ring
\bigoplus_{j \ge 0} I^j / I^{j+1}
is isomorphic to the polynomial ring (R/I)[x1 , ..., xd]. In geometric terms, it follows that a local complete intersection
subscheme Y of any scheme X has a normal bundle which is a vector bundle, even though Y may be singular.

33.4 See also
• Complete intersection ring
• Koszul complex
• Depth (ring theory)
• Cohen-Macaulay ring

33.5 Notes
[1] N. Bourbaki. Algèbre. Chapitre 10. Algèbre Homologique. Springer-Verlag (2006). X.9.6.
[2] A. Grothendieck. EGA IV, Part 1. Publications Mathématiques de l'IHÉS 20 (1964), 259 pp. 0.16.4.5.
[3] N. Bourbaki. Algèbre Commutative. Chapitre 10. Springer-Verlag (2007). Th. X.4.2.

33.6 References
• Bourbaki, Nicolas (2006), Algèbre. Chapitre 10. Algèbre Homologique, Berlin, New York: Springer-Verlag,
doi:10.1007/978-3-540-34493-3, ISBN 978-3-540-34492-6, MR 2327161
• Bourbaki, Nicolas (2007), Algèbre Commutative. Chapitre 10, Berlin, New York: Springer-Verlag, doi:10.1007/978-3-540-34395-0, ISBN 978-3-540-34394-3, MR 2333539
• Winfried Bruns; Jürgen Herzog, Cohen-Macaulay rings. Cambridge Studies in Advanced Mathematics, 39.
Cambridge University Press, Cambridge, 1993. xii+403 pp. ISBN 0-521-41068-1


• David Eisenbud, Commutative Algebra with a View Toward Algebraic Geometry. Springer Graduate Texts in
Mathematics, no. 150. ISBN 0-387-94268-8
• Grothendieck, Alexander (1964), "Éléments de géometrie algébrique IV. Première partie”, Publications Mathématiques de l'Institut des Hautes Études Scientifiques 20: 1–259, MR 0173675

Chapter 34

Relative canonical model
In mathematics, the relative canonical model of a singular variety X is a particular canonical variety that maps to
X , which simplifies the structure. The precise definition is:
If f : Y → X is a resolution, define the adjunction sequence to be the sequence of subsheaves f∗ωY⊗n ; if ωX is invertible, then f∗ωY⊗n = In ωX⊗n , where In is the higher adjunction ideal. Problem: is ⊕n f∗ωY⊗n finitely generated? If this is true, then Proj ⊕n f∗ωY⊗n → X is called the relative canonical model of Y , or the canonical blow-up of X.[1]
Some basic properties were as follows: The relative canonical model was independent of the choice of resolution.
Some integer multiple r of the canonical divisor of the relative canonical model was Cartier and the number of
exceptional components where this agrees with the same multiple of the canonical divisor of Y is also independent of
the choice of Y. When it equals the number of components of Y it was called crepant.[1] It was not known whether
relative canonical models were Cohen–Macaulay.
Because the relative canonical model is independent of Y , most authors simplify the terminology, referring to it as
the relative canonical model of X rather than either the relative canonical model of Y or the canonical blow-up of
X . The class of varieties that are relative canonical models have canonical singularities. Since that time in the 1970s
other mathematicians solved affirmatively the problem of whether they are Cohen–Macaulay. The minimal model
program started by Shigefumi Mori proved that the sheaf in the definition always is finitely generated and therefore
that relative canonical models always exist.

34.1 References
[1] M. Reid, Canonical 3-folds (courtesy copy), proceedings of the Angiers 'Journees de Geometrie Algebrique' 1979


Chapter 35

Relative dimension
In mathematics, specifically linear algebra and geometry, relative dimension is the dual notion to codimension.
In linear algebra, given a quotient map V → Q , the difference dim V − dim Q is the relative dimension; this equals
the dimension of the kernel.
In fiber bundles, the relative dimension of the map is the dimension of the fiber.
More abstractly, the codimension of a map is the dimension of the cokernel, while the relative dimension of a map is
the dimension of the kernel.
These are dual in that the inclusion of a subspace V → W of codimension k dualizes to yield a quotient map
W ∗ → V ∗ of relative dimension k, and conversely.
The additivity of codimension under intersection corresponds to the additivity of relative dimension in a fiber product.
Just as codimension is mostly used for injective maps, relative dimension is mostly used for surjective maps.


Chapter 36

Schauder dimension
In mathematics, a Schauder basis or countable basis is similar to the usual (Hamel) basis of a vector space; the
difference is that Hamel bases use linear combinations that are finite sums, while for Schauder bases they may be
infinite sums. This makes Schauder bases more suitable for the analysis of infinite-dimensional topological vector
spaces including Banach spaces.
Schauder bases were described by Juliusz Schauder in 1927,[1][2] although such bases were discussed earlier. For
example, the Haar basis was given in 1909, and G. Faber discussed in 1910 a basis for continuous functions on an
interval, sometimes called a Faber–Schauder system.[3]

36.1 Definitions
Let V denote a Banach space over the field F. A Schauder basis is a sequence {bn} of elements of V such that for
every element v ∈ V there exists a unique sequence {αn} of scalars in F so that

v = \sum_{n=0}^{\infty} \alpha_n b_n ,

where the convergence is understood with respect to the norm topology, i.e.,


\lim_{n \to \infty} \Big\| v - \sum_{k=0}^{n} \alpha_k b_k \Big\|_V = 0 .

Schauder bases can also be defined analogously in a general topological vector space. As opposed to a Hamel basis,
the elements of the basis must be ordered since the series may not converge unconditionally.
A Schauder basis {bn}n ≥ ₀ is said to be normalized when all the basis vectors have norm 1 in the Banach space V.
A sequence {xn}n ≥ ₀ in V is a basic sequence if it is a Schauder basis of its closed linear span.
Two Schauder bases, {bn} in V and {cn} in W, are said to be equivalent if there exist two constants c > 0 and C
such that for every integer N ≥ 0 and all sequences {αn} of scalars,
c \, \Big\| \sum_{k=0}^{N} \alpha_k b_k \Big\|_V \;\le\; \Big\| \sum_{k=0}^{N} \alpha_k c_k \Big\|_W \;\le\; C \, \Big\| \sum_{k=0}^{N} \alpha_k b_k \Big\|_V .

A family of vectors in V is total if its linear span (the set of finite linear combinations) is dense in V. If V is a Hilbert
space, an orthogonal basis is a total subset B of V such that elements in B are nonzero and pairwise orthogonal.
Further, when each element in B has norm 1, then B is an orthonormal basis of V.

36.2 Properties
Let {bn} be a Schauder basis of a Banach space V over F = R or C. It follows from the Banach–Steinhaus theorem
that the linear mappings {Pn} defined by

v = \sum_{k=0}^{\infty} \alpha_k b_k \;\xrightarrow{\;P_n\;}\; P_n(v) = \sum_{k=0}^{n} \alpha_k b_k

are uniformly bounded by some constant C. When C = 1, the basis is called a monotone basis. The maps {Pn} are
the basis projections.
Let {b*n} denote the coordinate functionals, where b*n assigns to every vector v in V the coordinate αn of v in the
above expansion. Each b*n is a bounded linear functional on V. Indeed, for every vector v in V,

|b∗n (v)| ∥bn ∥V = |αn | ∥bn ∥V = ∥αn bn ∥V = ∥Pn (v) − Pn−1 (v)∥V ≤ 2C∥v∥V .
These functionals {b*n} are called biorthogonal functionals associated to the basis {bn}. When the basis {bn} is
normalized, the coordinate functionals {b*n} have norm ≤ 2C in the continuous dual V ′ of V.
A Banach space with a Schauder basis is necessarily separable, but the converse is false, as described below. Since
every vector v in a Banach space V with a Schauder basis is the limit of Pn(v), with Pn of finite rank and uniformly
bounded, such a space V satisfies the bounded approximation property.
A theorem attributed to Mazur[4] asserts that every infinite-dimensional Banach space V contains a basic sequence,
i.e., there is an infinite-dimensional subspace of V that has a Schauder basis. The basis problem is the question
asked by Banach, whether every separable Banach space has a Schauder basis. This was negatively answered by Per
Enflo who constructed a separable Banach space failing the approximation property, thus a space without a Schauder
basis.[5]

36.3 Examples
The standard unit vector bases of c0 , and of ℓp for 1 ≤ p < ∞, are monotone Schauder bases. In this unit vector
basis {bn}, the vector bn in V = c0 or in V = ℓp is the scalar sequence {bn, j }j where all coordinates bn, j are 0,
except the nth coordinate:

b_n = \{ b_{n,j} \}_{j=0}^{\infty} \in V , \quad b_{n,j} = \delta_{n,j} ,
where δn, j is the Kronecker delta. The space ℓ∞ is not separable, and therefore has no Schauder basis.
Every orthonormal basis in a separable Hilbert space is a Schauder basis. Every countable orthonormal basis is
equivalent to the standard unit vector basis in ℓ2 .
The Haar system is an example of a basis for Lp ([0, 1]), when 1 ≤ p < ∞.[2] When 1 < p < ∞, another example is the
trigonometric system defined below. The Banach space C([0, 1]) of continuous functions on the interval [0, 1], with
the supremum norm, admits a Schauder basis. The Faber–Schauder system is the most commonly used Schauder
basis for C([0, 1]).[3][6]
Several bases for classical spaces were discovered before Banach’s book appeared (Banach (1932)), but some other
cases remained open for a long time. For example, the question of whether the disk algebra A(D) has a Schauder
basis remained open for more than forty years, until Bočkarev showed in 1974 that a basis constructed from the
Franklin system exists in A(D).[7] One can also prove that the periodic Franklin system[8] is a basis for a Banach
space Ar isomorphic to A(D).[9] This space Ar consists of all complex continuous functions on the unit circle T
whose conjugate function is also continuous. The Franklin system is another Schauder basis for C([0, 1]),[10] and it is
a Schauder basis in Lp ([0, 1]) when 1 ≤ p < ∞.[11] Systems derived from the Franklin system give bases in the space
C 1 ([0, 1]2 ) of differentiable functions on the unit square.[12] The existence of a Schauder basis in C 1 ([0, 1]2 ) was a
question from Banach’s book.[13]

36.3.1 Relation to Fourier series

Let {xn} be, in the real case, the sequence of functions
{1, cos(x), sin(x), cos(2x), sin(2x), cos(3x), sin(3x), . . .}
or, in the complex case,
\{ 1, e^{ix}, e^{-ix}, e^{2ix}, e^{-2ix}, e^{3ix}, e^{-3ix}, \ldots \} .
The sequence {xn} is called the trigonometric system. It is a Schauder basis for the space Lp ([0, 2π]) for any p such
that 1 < p < ∞. For p = 2, this is the content of the Riesz–Fischer theorem, and for p ≠ 2, it is a consequence of the
boundedness on the space Lp ([0, 2π]) of the Hilbert transform on the circle. It follows from this boundedness that
the projections PN defined by
f : x \mapsto \sum_{k=-\infty}^{+\infty} c_k e^{ikx} \;\xrightarrow{\;P_N\;}\; P_N f : x \mapsto \sum_{k=-N}^{N} c_k e^{ikx}
are uniformly bounded on Lp ([0, 2π]) when 1 < p < ∞. This family of maps {PN} is equicontinuous and tends to
the identity on the dense subset consisting of trigonometric polynomials. It follows that PN f tends to f in Lp -norm
for every f ∈ Lp ([0, 2π]). In other words, {xn} is a Schauder basis of Lp ([0, 2π]).[14]
However, the set {xn} is not a Schauder basis for L1 ([0, 2π]). This means that there are functions in L1 whose
Fourier series does not converge in the L1 norm, or equivalently, that the projections PN are not uniformly bounded
in L1 -norm. Also, the set {xn} is not a Schauder basis for C([0, 2π]).

36.3.2 Bases for spaces of operators

The space K(ℓ2 ) of compact operators on the Hilbert space ℓ2 has a Schauder basis. For every x, y in ℓ2 , let x ⊗ y
denote the rank one operator v ∈ ℓ2 → <v, x> y. If {en }n ≥ ₁ is the standard orthonormal basis of ℓ2 , a basis for
K(ℓ2 ) is given by the sequence[15]
e1 ⊗ e1 , e1 ⊗ e2 , e2 ⊗ e2 , e2 ⊗ e1 , . . . ,
e1 ⊗ en , e2 ⊗ en , . . . , en ⊗ en , en ⊗ en−1 , . . . , en ⊗ e1 , . . .
For every n, the sequence consisting of the n2 first vectors in this basis is a suitable ordering of the family {ej ⊗ ek},
for 1 ≤ j, k ≤ n.
The preceding result can be generalized: a Banach space X with a basis has the approximation property, so the space
K(X) of compact operators on X is isometrically isomorphic[16] to the injective tensor product
X' \widehat{\otimes}_{\varepsilon} X \simeq K(X) .
If X is a Banach space with a Schauder basis {en }n ≥ ₁ such that the biorthogonal functionals are a basis of the dual,
that is to say, a Banach space with a shrinking basis, then the space K(X) admits a basis formed by the rank one
operators e *j ⊗ ek : v → e *j (v) ek, with the same ordering as before.[15] This applies in particular to every reflexive
Banach space X with a Schauder basis.
On the other hand, the space B(ℓ2 ) has no basis, since it is non-separable. Moreover, B(ℓ2 ) does not have the
approximation property.[17]

36.4 Unconditionality

A Schauder basis {bn} is unconditional if whenever the series ∑ αn bn converges, it converges unconditionally. For
a Schauder basis {bn}, this is equivalent to the existence of a constant C such that


\Big\| \sum_{k=0}^{n} \varepsilon_k \alpha_k b_k \Big\|_V \;\le\; C \, \Big\| \sum_{k=0}^{n} \alpha_k b_k \Big\|_V

for all integers n, all scalar coefficients {αk} and all signs εk = ± 1. Unconditionality is an important property since it
allows one to forget about the order of summation. A Schauder basis is symmetric if it is unconditional and uniformly
equivalent to all its permutations: there exists a constant C such that for every integer n, every permutation π of the
integers {0, 1, …, n} , all scalar coefficients {αk} and all signs {εk},
\Big\| \sum_{k=0}^{n} \varepsilon_k \alpha_k b_{\pi(k)} \Big\|_V \;\le\; C \, \Big\| \sum_{k=0}^{n} \alpha_k b_k \Big\|_V .

The standard bases of the sequence spaces c0 and ℓp for 1 ≤ p < ∞, as well as every orthonormal basis in a Hilbert
space, are unconditional. These bases are also symmetric.
The trigonometric system is not an unconditional basis in Lp , except for p = 2.
The Haar system is an unconditional basis in Lp for any 1 < p < ∞. The space L1 ([0, 1]) has no unconditional basis.[18]
A natural question is whether every infinite-dimensional Banach space has an infinite-dimensional subspace with an
unconditional basis. This was solved negatively by Timothy Gowers and Bernard Maurey in 1992.[19]

36.5 Schauder bases and duality
A basis {en}n≥₀ of a Banach space X is boundedly complete if for every sequence {an}n≥₀ of scalars such that the
partial sums

V_n = \sum_{k=0}^{n} a_k e_k

are bounded in X, the sequence {Vn} converges in X. The unit vector basis for ℓp , 1 ≤ p < ∞, is boundedly complete.
However, the unit vector basis is not boundedly complete in c0 . Indeed, if an = 1 for every n, then

\| V_n \|_{c_0} = \max_{0 \le k \le n} |a_k| = 1

for every n, but the sequence {Vn} is not convergent in c0 , since ||Vn₊₁ − Vn|| = 1 for every n.
A space X with a boundedly complete basis {en}n≥₀ is isomorphic to a dual space, namely, the space X is isomorphic
to the dual of the closed linear span in the dual X ′ of the biorthogonal functionals associated to the basis {en}.[20]
A basis {en}n≥₀ of X is shrinking if for every bounded linear functional f on X, the sequence of non-negative
numbers

φn = sup{|f (x)| : x ∈ Fn , ∥x∥ ≤ 1}
tends to 0 when n → ∞, where Fn is the linear span of the basis vectors em for m ≥ n. The unit vector basis for ℓp , 1
< p < ∞, or for c0 , is shrinking. It is not shrinking in ℓ1 : if f is the bounded linear functional on ℓ1 given by

f : x = \{ x_n \} \in \ell^1 \;\mapsto\; \sum_{n=0}^{\infty} x_n ,

then φn ≥ f(en) = 1 for every n.


A basis {en }n ≥ ₀ of X is shrinking if and only if the biorthogonal functionals {e*n }n ≥ ₀ form a basis of the dual
X ′.[21]
Robert C. James characterized reflexivity in Banach spaces with basis: the space X with a Schauder basis is reflexive if and only if the basis is both shrinking and boundedly complete.[22] James also proved that a space with an
unconditional basis is non-reflexive if and only if it contains a subspace isomorphic to c0 or ℓ1 .[23]

36.6 Related concepts
A Hamel basis is a subset B of a vector space V such that every element v ∈ V can uniquely be written as

v = \sum_{b \in B} \alpha_b b

with αb ∈ F, with the extra condition that the set

{b ∈ B | αb ̸= 0}
is finite. This property makes a Hamel basis unwieldy for infinite-dimensional Banach spaces, since a Hamel basis for an infinite-dimensional Banach space must be uncountable. (Every finite-dimensional subspace of an infinite-dimensional Banach space X has empty interior and is nowhere dense in X. It then follows from the Baire category theorem that a countable union of these finite-dimensional subspaces cannot serve as a basis.[24])

36.7 See also
• Generalized Fourier series
• Orthogonal polynomials
• Haar wavelet
• Banach space

36.8 Notes
[1] see Schauder (1927).
[2] Schauder, Juliusz (1928), “Eine Eigenschaft des Haarschen Orthogonalsystems”, Mathematische Zeitschrift 28: 317–320.
[3] Faber, Georg (1910), "Über die Orthogonalfunktionen des Herrn Haar”, Deutsche Math.-Ver (in German) 19: 104–
112. ISSN 0012-0456; http://www-gdz.sub.uni-goettingen.de/cgi-bin/digbib.cgi?PPN37721857X ; http://resolver.sub.
uni-goettingen.de/purl?GDZPPN002122553
[4] for an early published proof, see p. 157, C.3 in Bessaga, C. and Pełczyński, A. (1958), “On bases and unconditional
convergence of series in Banach spaces”, Studia Math. 17: 151–164. In the first lines of this article, Bessaga and Pełczyński
write that Mazur’s result appears without proof in Banach’s book —to be precise, on p. 238— but they do not provide a
reference containing a proof.
[5] Enflo, Per (July 1973). “A counterexample to the approximation problem in Banach spaces”. Acta Mathematica 130 (1):
309–317. doi:10.1007/BF02392270.
[6] see pp. 48–49 in Schauder (1927). Schauder defines there a general model for this system, of which the Faber–Schauder
system used today is a special case.
[7] see Bočkarev, S. V. (1974), “Existence of a basis in the space of functions analytic in the disc, and some properties of
Franklin’s system”, (in Russian) Mat. Sb. (N.S.) 95(137): 3–18, 159. Translated in Math. USSR-Sb. 24 (1974), 1–16.
The question is in Banach’s book, Banach (1932) p. 238, §3.


[8] See p. 161, III.D.20 in Wojtaszczyk (1991).
[9] See p. 192, III.E.17 in Wojtaszczyk (1991).
[10] Franklin, Philip (1928), “A set of continuous orthogonal functions”, Math. Ann. 100: 522–529.
[11] see p. 164, III.D.26 in Wojtaszczyk (1991).
[12] see Ciesielski, Z. (1969), “A construction of basis in C 1 (I 2 )", Studia Math. 33: 243–247, and Schonefeld, Steven (1969),
“Schauder bases in spaces of differentiable functions”, Bull. Amer. Math. Soc. 75: 586–590.
[13] see p. 238, §3 in Banach (1932).
[14] see p. 40, II.B.11 in Wojtaszczyk (1991).
[15] see Proposition 4.25, p. 88 in Ryan (2002).
[16] see Corollary 4.13, p. 80 in Ryan (2002).
[17] see Szankowski, Andrzej (1981), "B(H) does not have the approximation property”, Acta Math. 147: 89–108.
[18] see p. 24 in Lindenstrauss & Tzafriri (1977).
[19] Gowers, W. Timothy; Maurey, Bernard (6 May 1992). “The unconditional basic sequence problem”. arXiv:math/9205204.
[20] see p. 9 in Lindenstrauss & Tzafriri (1977).
[21] see p. 8 in Lindenstrauss & Tzafriri (1977).
[22] see James, Robert. C. (1950), “Bases and reflexivity of Banach spaces”, Ann. of Math. (2) 52: 518–527. See also
Lindenstrauss & Tzafriri (1977) p. 9.
[23] see James, Robert C. (1950), “Bases and reflexivity of Banach spaces”, Ann. of Math. (2) 52: 518–527. See also p. 23 in
Lindenstrauss & Tzafriri (1977).
[24] Carothers, N. L. (2005), A short course on Banach space theory, Cambridge University Press ISBN 0-521-60372-2

This article incorporates material from Countable basis on PlanetMath, which is licensed under the Creative Commons
Attribution/Share-Alike License.

36.9 References
• Schauder, Juliusz (1927), “Zur Theorie stetiger Abbildungen in Funktionalraumen”, Mathematische Zeitschrift
(in German) 26: 47–65, doi:10.1007/BF01475440.
• Banach, Stefan (1932), Théorie des opérations linéaires, Monografie Matematyczne 1, Warszawa: Subwencji
Funduszu Kultury Narodowej, Zbl 0005.20901.
• Lindenstrauss, Joram; Tzafriri, Lior (1977), Classical Banach Spaces I, Sequence Spaces, Ergebnisse der Mathematik und ihrer Grenzgebiete 92, Berlin: Springer-Verlag, ISBN 3-540-08072-4.
• Ryan, Raymond A. (2002), Introduction to Tensor Products of Banach Spaces, Springer Monographs in Mathematics, London: Springer-Verlag, pp. xiv+225, ISBN 1-85233-437-1.
• Schaefer, Helmut H. (1971), Topological vector spaces, Graduate Texts in Mathematics 3, New York: SpringerVerlag, pp. xi+294, ISBN 0-387-98726-6.
• Wojtaszczyk, Przemysław (1991), Banach spaces for analysts, Cambridge Studies in Advanced Mathematics
25, Cambridge: Cambridge University Press, pp. xiv+382, ISBN 0-521-35618-0.
• Golubov, B.I. (2001), “Faber–Schauder system”, in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer,
ISBN 978-1-55608-010-4 .
• Heil, Christopher E. (1997). “A basis theory primer” (PDF)..
• Franklin system. B.I. Golubov (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.
org/index.php?title=Franklin_system&oldid=16655


36.10 Further reading
• Kufner, Alois (2013), Function spaces, De Gruyter Series in Nonlinear Analysis and Applications 14, Prague: Academia Publishing House of the Czechoslovak Academy of Sciences, de Gruyter

Chapter 37

Seven-dimensional space
In mathematics, a sequence of n real numbers can be understood as a location in n-dimensional space. When n = 7,
the set of all such locations is called 7-dimensional space. Often such a space is studied as a vector space, without
any notion of distance. Seven-dimensional Euclidean space is seven-dimensional space equipped with a Euclidean
metric, which is defined by the dot product.
More generally, the term may refer to a seven-dimensional vector space over any field, such as a seven-dimensional
complex vector space, which has 14 real dimensions. It may also refer to a seven-dimensional manifold such as a
7-sphere, or a variety of other geometric constructions.
Seven-dimensional spaces have a number of special properties, many of them related to the octonions. An especially
distinctive property is that a cross product can be defined only in three or seven dimensions. This is related to Hurwitz’s
theorem, which prohibits the existence of algebraic structures like the quaternions and octonions in dimensions other
than 2, 4, and 8. The first exotic spheres ever discovered were seven-dimensional.

37.1 Geometry
37.1.1 7-polytope

Main article: Uniform 7-polytope
A polytope in seven dimensions is called a 7-polytope. The most studied are the regular polytopes, of which there
are only three in seven dimensions: the 7-simplex, 7-cube, and 7-orthoplex. A wider family are the uniform 7-polytopes, constructed from fundamental symmetry domains of reflection, each domain defined by a Coxeter group. Each uniform polytope is defined by a ringed Coxeter–Dynkin diagram. The 7-demicube is a unique polytope from the D7 family, and the 321 , 231 , and 132 polytopes come from the E7 family.

37.1.2 6-sphere

The 6-sphere or hypersphere in seven-dimensional Euclidean space is the six-dimensional surface equidistant from a
point, e.g. the origin. It has symbol S6 , with formal definition for the 6-sphere with radius r of
S^6 = \{ x \in \mathbb{R}^7 : \| x \| = r \} .
The volume of the space bounded by this 6-sphere is

V_7 = \frac{16 \pi^3}{105} r^7

which is 4.72477 × r7 , or 0.0369 of the 7-cube that contains the 6-sphere.
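The numerical constants quoted here can be checked with a few lines of Python (an illustrative verification, not part of the original article):

import math

r = 1.0
V7 = 16 * math.pi ** 3 / 105 * r ** 7        # volume enclosed by the 6-sphere of radius r
print(round(V7, 5))                           # 4.72477
print(round(V7 / (2 * r) ** 7, 4))            # 0.0369, the fraction of the circumscribing 7-cube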

37.2 Applications
37.2.1 Cross product

Main article: Seven-dimensional cross product
As mentioned above, a cross product in seven dimensions analogous to the usual three can be defined, and in fact a
cross product can only be defined in three and seven dimensions.

37.2.2 Exotic spheres

Main article: Exotic sphere
In 1956, John Milnor constructed an exotic sphere in 7 dimensions and showed that there are at least 7 differentiable
structures on the 7-sphere. In 1963 he showed that the exact number of such structures is 28.

37.3 See also
• Euclidean geometry
• List of geometry topics
• List of regular polytopes

37.4 References
• H.S.M. Coxeter: Regular Polytopes. Dover, 1973
• J.W. Milnor: On manifolds homeomorphic to the 7-sphere. Annals of Mathematics 64, 1956

37.5 External links
• Hazewinkel, Michiel, ed. (2001), “Euclidean geometry”, Encyclopedia of Mathematics, Springer, ISBN 9781-55608-010-4

Chapter 38

Six-dimensional space
Six-dimensional space is any space that has six dimensions, that is, six degrees of freedom, and that needs six pieces
of data, or coordinates, to specify a location in this space. There are an infinite number of these, but those of most interest are simpler ones that model some aspect of the environment. Of particular interest is six-dimensional Euclidean
space, in which 6-polytopes and the 5-sphere are constructed. Six-dimensional elliptical space and hyperbolic spaces
are also studied, with constant positive and negative curvature.
Formally, six-dimensional Euclidean space, ℝ6 , is generated by considering all real 6-tuples as 6-vectors in this space.
As such it has the properties of all Euclidean spaces, so it is linear, has a metric and a full set of vector operations.
In particular the dot product between two 6-vectors is readily defined, and can be used to calculate the metric. 6 × 6
matrices can be used to describe transformations such as rotations that keep the origin fixed.
More generally, any space that can be described locally with six coordinates, not necessarily Euclidean ones, is six-dimensional. One example is the surface of the 6-sphere, S⁶. This is the set of all points in seven-dimensional
Euclidean space ℝ⁷ that are equidistant from the origin. This constraint reduces the number of coordinates needed to
describe a point on the 6-sphere by one, so it has six dimensions. Such non-Euclidean spaces are far more common
than Euclidean spaces, and in six dimensions they have far more applications.

38.1 Geometry

38.1.1 6-polytope

Main article: 6-polytope
A polytope in six dimensions is called a 6-polytope. The most studied are the regular polytopes, of which there are only three in six dimensions: the 6-simplex, 6-cube, and 6-orthoplex. A wider family are the uniform 6-polytopes, constructed from fundamental symmetry domains of reflection, each domain defined by a Coxeter group. Each uniform polytope is defined by a ringed Coxeter–Dynkin diagram. The 6-demicube is a unique polytope from the D₆ family, and the 2₂₁ and 1₂₂ polytopes belong to the E₆ family.

38.1.2 5-sphere

The 5-sphere, or hypersphere in six dimensions, is the five-dimensional surface equidistant from a point. It has symbol S⁵, and the equation for the 5-sphere of radius r, centred at the origin, is

S⁵ = { x ∈ ℝ⁶ : ‖x‖ = r }.

The volume of six-dimensional space bounded by this 5-sphere is

V₆ = (π³/6) r⁶

which is approximately 5.16771 r⁶, or about 0.0807 of the volume of the smallest 6-cube that contains the 5-sphere.

38.1.3 6-sphere

The 6-sphere, or hypersphere in seven dimensions, is the six-dimensional surface equidistant from a point. It has symbol S⁶, and the equation for the 6-sphere of radius r, centred at the origin, is

S⁶ = { x ∈ ℝ⁷ : ‖x‖ = r }.

The volume of the space bounded by this 6-sphere is

V₇ = (16π³/105) r⁷

which is approximately 4.72477 r⁷, or about 0.0369 of the volume of the smallest 7-cube that contains the 6-sphere.

38.2 Applications

38.2.1 Transformations in three dimensions
In three-dimensional space a general rigid transformation has six degrees of freedom: three translations along the three coordinate axes and three from the rotation group SO(3). Often these transformations are handled separately, as they have very different geometrical structures, but there are ways of dealing with them that treat them as a single six-dimensional object.

Homogeneous coordinates
Main article: Homogeneous coordinates
Using four-dimensional homogeneous coordinates it is possible to describe a general rigid transformation with a single 4 × 4 matrix. This matrix has six degrees of freedom, which can be identified with, for example, the six elements of the matrix above the main diagonal, as all others are determined by these.
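For illustration, here is a minimal sketch assuming Python with NumPy (not part of the article; rigid_transform is a name invented here). The six degrees of freedom are supplied as an axis–angle rotation (three numbers) plus a translation (three numbers), packed into a single 4 × 4 homogeneous matrix:

import numpy as np

def rigid_transform(axis, angle, translation):
    # Rodrigues' formula for the 3x3 rotation, then embed it in a 4x4
    # homogeneous matrix together with the translation column.
    k = np.asarray(axis, dtype=float)
    k = k / np.linalg.norm(k)
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    R = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = translation
    return T

# Six parameters in total: a rotation axis and angle (3) plus a translation (3).
T = rigid_transform([0, 0, 1], np.pi / 2, [1, 2, 3])
p = np.array([1.0, 0.0, 0.0, 1.0])   # the point (1, 0, 0) in homogeneous coordinates
print(T @ p)                          # -> approximately [1, 3, 3, 1]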

Screw theory
Main article: Screw theory
In screw theory angular and linear velocity are combined into one six-dimensional object, called a twist. A similar
object called a wrench combines forces and torques in six dimensions. These can be treated as six-dimensional vectors
that transform linearly when changing frame of reference. Finite translations and rotations cannot be treated as vectors in this way, but they are related to a twist by exponentiation.
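A minimal sketch of that exponentiation, assuming Python with NumPy and SciPy (the variable names w, v and xi are chosen here for illustration): the twist is written as a 4 × 4 matrix and its matrix exponential is a rigid transformation.

import numpy as np
from scipy.linalg import expm

def hat(w):
    # 3x3 skew-symmetric matrix of a 3-vector.
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

# A twist packs angular velocity w and linear velocity v into one six-dimensional object.
w = np.array([0.0, 0.0, np.pi / 2])   # rotate about the z axis
v = np.array([1.0, 0.0, 0.0])         # translate along the x axis
xi = np.zeros((4, 4))                 # 4x4 matrix form of the twist
xi[:3, :3] = hat(w)
xi[:3, 3] = v

T = expm(xi)                          # exponentiation gives a rigid transformation
assert np.allclose(T[:3, :3] @ T[:3, :3].T, np.eye(3))   # rotation part is orthogonal
assert np.allclose(T[3], [0, 0, 0, 1])                   # bottom row of a rigid transform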

Phase space
Main article: Phase space
Phase space is a space made up of the position and momentum of a particle, which can be plotted together in a phase
diagram to highlight the relationship between the quantities. A general particle moving in three dimensions has a
phase space with six dimensions, too many to plot but they can be analysed mathematically.[1]

[Figure: Phase portrait of the Van der Pol oscillator]

38.2.2 Rotations in four dimensions

Main article: Rotations in 4-dimensional Euclidean space
The rotation group in four dimensions, SO(4), has six degrees of freedom. This can be seen by considering the 4 ×
4 matrix that represents a rotation: as it is an orthogonal matrix the matrix is determined, up to a change in sign, by
e.g. the six elements above the main diagonal. But this group is not linear, and it has a more complex structure than
other applications seen so far.
Another way of looking at this group is with quaternion multiplication. Every rotation in four dimensions can be
achieved by multiplying by a pair of unit quaternions, one before and one after the vector. These quaternions are unique, up to a change in sign for both of them, and generate all rotations when used this way, so the product of their groups, S³ × S³, is a double cover of SO(4), which must have six dimensions.
Although the space we live in is considered three-dimensional, there are practical applications for four-dimensional
space. Quaternions, one of the ways to describe rotations in three dimensions, consist of a four-dimensional space.
Rotations between quaternions, for interpolation for example, take place in four dimensions. Spacetime, which has
three space dimensions and one time dimension is also four-dimensional, though with a different structure to Euclidean
space.
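The quaternion-pair description can be checked numerically with a short sketch (added here for illustration, assuming Python with NumPy; qmul is a helper name chosen for this example): the map x ↦ p x q for unit quaternions p and q gives an orthogonal 4 × 4 matrix with determinant +1, i.e. an element of SO(4).

import numpy as np

def qmul(p, q):
    # Hamilton product of quaternions stored as [w, x, y, z].
    w1, v1 = p[0], p[1:]
    w2, v2 = q[0], q[1:]
    return np.concatenate(([w1 * w2 - v1 @ v2],
                           w1 * v2 + w2 * v1 + np.cross(v1, v2)))

rng = np.random.default_rng(1)
p = rng.standard_normal(4); p /= np.linalg.norm(p)   # unit quaternion
q = rng.standard_normal(4); q /= np.linalg.norm(q)   # unit quaternion

# Build the 4x4 matrix of the map x -> p x q by applying it to each basis quaternion.
M = np.column_stack([qmul(qmul(p, e), q) for e in np.eye(4)])
assert np.allclose(M.T @ M, np.eye(4))    # orthogonal: lengths are preserved
assert np.isclose(np.linalg.det(M), 1.0)  # determinant +1, so a rotation in SO(4)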

38.2.3 Plücker coordinates

Main article: Plücker coordinates
Plücker coordinates are a way of representing lines in three dimensions using six homogeneous coordinates. As
homogeneous coordinates they have only five degrees of freedom, corresponding to the five degrees of freedom of
a general line, but they are treated as 6-vectors for some purposes. For example, the check for the intersection of two lines is a 6-dimensional dot product between two sets of Plücker coordinates, one of which has exchanged its
displacement and moment parts.
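A small sketch of that intersection test, assuming Python with NumPy (the helper names plucker and reciprocal are chosen here): the "reciprocal product" exchanges displacement and moment parts, and it vanishes exactly when the two lines are coplanar.

import numpy as np

def plucker(p1, p2):
    # Line through p1 and p2: direction d = p2 - p1 and moment m = p1 x p2.
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    return p2 - p1, np.cross(p1, p2)

def reciprocal(line_a, line_b):
    # 6-dimensional dot product with displacement and moment parts exchanged.
    da, ma = line_a
    db, mb = line_b
    return da @ mb + db @ ma

x_axis   = plucker([0, 0, 0], [1, 0, 0])
crossing = plucker([1, 0, 0], [1, 1, 0])   # meets the x-axis at (1, 0, 0)
skew     = plucker([0, 0, 1], [0, 1, 1])   # never meets the x-axis

print(reciprocal(x_axis, crossing))  #  0.0 -> the lines are coplanar (they intersect)
print(reciprocal(x_axis, skew))      # -1.0 -> the lines are skew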

38.2.4 Electromagnetism

In electromagnetism, the electromagnetic field is generally thought of as being made of two things, the electric field
and magnetic field. They are both three-dimensional vector fields, related to each other by Maxwell’s equations.
A second approach is to combine them in a single object, the six-dimensional electromagnetic tensor, a tensor or
bivector-valued representation of the electromagnetic field. Using this, Maxwell’s equations can be condensed from
four equations into a particularly compact single equation:

∂F = J
where F is the bivector form of the electromagnetic tensor, J is the four-current and ∂ is a suitable differential
operator.[2]
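The six-component nature of the field tensor can be made concrete with a short sketch (an illustration added here, assuming Python with NumPy; the signs shown are one common convention and depend on the metric signature used): the tensor is a 4 × 4 antisymmetric matrix, so its six independent entries are exactly the three components of E together with the three components of B.

import numpy as np

E = np.array([1.0, 2.0, 3.0])   # electric field components
B = np.array([0.5, 0.0, 1.5])   # magnetic field components

# One common sign convention for the field tensor; the exact signs vary between
# texts, but the antisymmetric structure (and hence the count of six) does not.
F = np.array([[0,    -E[0], -E[1], -E[2]],
              [E[0],  0,    -B[2],  B[1]],
              [E[1],  B[2],  0,    -B[0]],
              [E[2], -B[1],  B[0],  0   ]])

assert np.allclose(F, -F.T)               # antisymmetric
print(F[np.triu_indices(4, k=1)])         # the 6 independent components: E and B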

38.2.5 String theory
In physics string theory is an attempt to describe general relativity and quantum mechanics with a single mathematical
model. Although it is an attempt to model our universe it takes place in a space with more dimensions than the four of
space-time that we are familiar with. In particular a number of string theories take place in a ten-dimensional space,
adding an extra six dimensions. These extra dimensions are required by the theory, but as they cannot be observed
are thought to be quite different, perhaps compactified to form a six-dimensional space with a particular geometry
too small to be observable.
Since 1997, another string theory has come to light that works in six dimensions. Little string theories are non-gravitational string theories in five and six dimensions that arise when considering limits of ten-dimensional string
theory.[3]

38.3 Theoretical background

38.3.1 Bivectors in four dimensions
A number of the above applications can be related to each other algebraically by considering the real, six-dimensional bivectors in four dimensions. These can be written Λ²ℝ⁴ for the set of bivectors in Euclidean space or Λ²ℝ³,¹ for the set of bivectors in spacetime. The Plücker coordinates are bivectors in ℝ⁴ while the electromagnetic tensor discussed in the previous section is a bivector in ℝ³,¹. Bivectors can be used to generate rotations in either ℝ⁴ or ℝ³,¹ through the exponential map (e.g. applying the exponential map of all bivectors in Λ²ℝ⁴ generates all rotations in ℝ⁴). They can also be related to general transformations in three dimensions through homogeneous coordinates, which can be thought of as modified rotations in ℝ⁴.
The bivectors arise from sums of all possible wedge products between pairs of 4-vectors. They therefore have C(4, 2) = 6 components, and can be written most generally as

B = B₁₂e₁₂ + B₁₃e₁₃ + B₁₄e₁₄ + B₂₃e₂₃ + B₂₄e₂₄ + B₃₄e₃₄
They are the first bivectors that cannot all be generated by products of pairs of vectors. Those that can are simple
bivectors and the rotations they generate are simple rotations. Other rotations in four dimensions are double and
isoclinic rotations and correspond to non-simple bivectors that cannot be generated by a single wedge product.[4]
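A short sketch of this distinction, assuming Python with NumPy (wedge and is_simple are names chosen here): the wedge product of two 4-vectors always yields a simple bivector, which can be detected by the Plücker-type relation B₁₂B₃₄ − B₁₃B₂₄ + B₁₄B₂₃ = 0, while e₁∧e₂ + e₃∧e₄ fails it and so cannot be a single wedge product.

import numpy as np
from itertools import combinations

def wedge(a, b):
    # The six components B_ij = a_i b_j - a_j b_i of a ^ b in R^4 (i < j).
    return {(i, j): a[i] * b[j] - a[j] * b[i] for i, j in combinations(range(4), 2)}

def is_simple(B, tol=1e-12):
    # A bivector in four dimensions is a single wedge product exactly when
    # B12*B34 - B13*B24 + B14*B23 = 0.
    return abs(B[(0, 1)] * B[(2, 3)] - B[(0, 2)] * B[(1, 3)] + B[(0, 3)] * B[(1, 2)]) < tol

rng = np.random.default_rng(2)
a, b = rng.standard_normal(4), rng.standard_normal(4)
print(is_simple(wedge(a, b)))        # True: any a ^ b is simple

# e1 ^ e2 + e3 ^ e4 is the standard example of a non-simple bivector.
B = {pair: 0.0 for pair in combinations(range(4), 2)}
B[(0, 1)] = 1.0
B[(2, 3)] = 1.0
print(is_simple(B))                  # False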

38.3.2 6-vectors
[Figure: Calabi–Yau manifold (3D projection)]

6-vectors are simply the vectors of six-dimensional Euclidean space. Like other such vectors they are linear, and can be added, subtracted and scaled as in other dimensions. Rather than using letters of the alphabet, higher dimensions usually use suffixes to designate dimensions, so a general six-dimensional vector can be written a = (a₁, a₂, a₃, a₄, a₅, a₆). Written like this the six basis vectors are (1, 0, 0, 0, 0, 0), (0, 1, 0, 0, 0, 0), (0, 0, 1, 0, 0, 0), (0, 0, 0, 1, 0, 0), (0, 0, 0, 0, 1, 0) and (0, 0, 0, 0, 0, 1).
Of the vector operators the cross product cannot be used in six dimensions; instead the wedge product of two 6-vectors
results in a bivector with 15 dimensions. The dot product of two vectors is

a · b = a₁b₁ + a₂b₂ + a₃b₃ + a₄b₄ + a₅b₅ + a₆b₆.
It can be used to find the angle between two vectors and the norm,

|a| = √(a · a) = √(a₁² + a₂² + a₃² + a₄² + a₅² + a₆²).

This can be used for example to calculate the diagonal of a 6-cube; with one corner at the origin, edges aligned to the
axes and side length 1 the opposite corner could be at (1, 1, 1, 1, 1, 1), the norm of which is


√(1 + 1 + 1 + 1 + 1 + 1) = √6 ≈ 2.4495,

which is the length of the vector and so of the diagonal of the 6-cube.
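The same arithmetic as a tiny sketch in Python with NumPy (added here purely for illustration):

import numpy as np

a = np.ones(6)                       # the corner (1, 1, 1, 1, 1, 1) of a unit 6-cube
b = np.array([1., 2., 3., 4., 5., 6.])

print(a @ b)                         # dot product: 21.0
print(np.linalg.norm(a))             # diagonal of the unit 6-cube: sqrt(6) ~ 2.4495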

38.3.3 Complex 3-space
The complex plane C has two real dimensions, so C³ is a six-dimensional space. William Rowan Hamilton identified this space in 1853[5] as the bivectors of his biquaternions. He had introduced vectors as the 3-dimensional parts of quaternions, so when the tensor product C ⊗ H formed the biquaternions, the complex 3-dimensional part consisted of bivectors.
The exponential map takes bivectors to the unit sphere of the biquaternion algebra, which is isomorphic to the Lorentz
group. Hence, as Ronald Shaw and Graham Bowtell[6] have noted, bivectors are logarithms of Lorentz transformations. Generally vector analysis is confined to three dimensions, but in Vector Analysis (1901) the six-dimensional
space of bivectors was used by J. W. Gibbs and E. B. Wilson.[7]
In the differential geometry of complex manifolds some six-dimensional spaces arise as algebraic manifolds. Examples include the quintic threefold and the Barth-Nieto quintic. According to Piergiorgio Odifreddi, the classification
of complex three-dimensional manifolds “was one of the spectacular results obtained by the Japanese school of geometry of Heisuke Hironaka, Shing Tung Yau, and Shigefumi Mori. For this work they were awarded the Fields
Medal in 1970, 1983, and 1990, respectively.”[8]

38.4 Footnotes
[1] Arthur Beiser (1969). Perspectives of Modern Physics. McGraw-Hill.
[2] Lounesto (2001), pp. 109–110
[3] Aharony (2000)
[4] Lounesto (2001), pp. 86-89
[5] William Rowan Hamilton (1853) Lectures on Quaternions, p 665, Royal Irish Academy, link from Cornell University
Historical Mathematics Collection
[6] Ronald Shaw and Graham Bowtell (1969) “The Bivector Logarithm of a Lorentz Transformation”, Quarterly Journal of
Mathematics 20:497–503
[7] Edwin Bidwell Wilson (1901) Vector Analysis, pages 426 to 436, “Harmonic Vibrations and Bivectors”
[8] Piergiorgio Odifreddi (2004) The Mathematical Century, page 82, Princeton University Press ISBN 0-691-09294-X

38.5 References
• Lounesto, Pertti (2001). Clifford Algebras and Spinors. Cambridge: Cambridge University Press. ISBN 978-0-521-00551-7.
• Aharony, Ofer (2000). “A Brief Review of ‘Little String Theories’”. Classical and Quantum Gravity 17 (5): 929. arXiv:hep-th/9911147. Bibcode:2000CQGra..17..929A. doi:10.1088/0264-9381/17/5/302.

Chapter 39

Two-dimensional space

[Figure: Bi-dimensional Cartesian coordinate system, showing the points (2, 3), (−3, 1), (0, 0) and (−1.5, −2.5).]

This page is about 2-dimensional Euclidean space. For the general theory of 2D objects, see Surface.
In physics and mathematics, two-dimensional space or bi-dimensional space is a geometric model of the planar
projection of the physical universe. The two dimensions are commonly called length and width. Both directions lie
in the same plane.
A sequence of n real numbers can be understood as a location in n-dimensional space. When n = 2, the set of all such
locations is called two-dimensional space or bi-dimensional space, and usually is thought of as a Euclidean space.

39.1 History
Books I through IV and VI of Euclid’s Elements dealt with two-dimensional geometry, developing such notions as
similarity of shapes, the Pythagorean theorem (Proposition 47), equality of angles and areas, parallelism, the sum of
the angles in a triangle, and the three cases in which triangles are “equal” (have the same area), among many other
topics.
Later, the plane was described in a so-called Cartesian coordinate system, a coordinate system that specifies each point
uniquely in a plane by a pair of numerical coordinates, which are the signed distances from the point to two fixed
perpendicular directed lines, measured in the same unit of length. Each reference line is called a coordinate axis or
just axis of the system, and the point where they meet is its origin, usually at ordered pair (0, 0). The coordinates can
also be defined as the positions of the perpendicular projections of the point onto the two axes, expressed as signed
distances from the origin.
The idea of this system was developed in 1637 in writings by Descartes and independently by Pierre de Fermat,
although Fermat also worked in three dimensions, and did not publish the discovery.[1] Both authors used a single axis in their treatments and a variable length measured in reference to this axis. The concept of using a pair of
axes was introduced later, after Descartes’ La Géométrie was translated into Latin in 1649 by Frans van Schooten and
his students. These commentators introduced several concepts while trying to clarify the ideas contained in Descartes’
work.[2]
Later, the plane was thought of as a field, where any two points could be multiplied and, except for 0, divided. This
was known as the complex plane. The complex plane is sometimes called the Argand plane because it is used in
Argand diagrams. These are named after Jean-Robert Argand (1768–1822), although they were first described by
Norwegian-Danish land surveyor and mathematician Caspar Wessel (1745–1818).[3] Argand diagrams are frequently
used to plot the positions of the poles and zeroes of a function in the complex plane.

39.2 In geometry
See also: Euclidean geometry

39.2.1 Coordinate systems

Main article: Coordinate system
In mathematics, analytic geometry (also called Cartesian geometry) describes every point in two-dimensional space
by means of two coordinates. Two perpendicular coordinate axes are given which cross each other at the origin. They
are usually labeled x and y. Relative to these axes, the position of any point in two-dimensional space is given by an
ordered pair of real numbers, each number giving the distance of that point from the origin measured along the given
axis, which is equal to the distance of that point from the other axis.
Another widely used coordinate system is the polar coordinate system, which specifies a point in terms of its distance
from the origin and its angle relative to a rightward reference ray.
• Cartesian coordinate system
• Polar coordinate system
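A small sketch of the conversion between the two systems, assuming Python (the helper names to_polar and to_cartesian are chosen here for illustration):

import math

def to_polar(x, y):
    # Distance from the origin and angle from the rightward reference ray.
    return math.hypot(x, y), math.atan2(y, x)

def to_cartesian(r, theta):
    return r * math.cos(theta), r * math.sin(theta)

print(to_polar(2, 3))                  # (~3.606, ~0.983 rad)
print(to_cartesian(*to_polar(2, 3)))   # back to (2.0, 3.0) up to rounding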

39.2.2 Polytopes

Main article: Polygon
In two dimensions, there are infinitely many polytopes: the polygons. The first few regular ones are shown below:
Convex
The Schläfli symbol {p} represents a regular p-gon.
Degenerate (spherical)
The regular henagon {1} and regular digon {2} can be considered degenerate regular polygons. They can exist
nondegenerately in non-Euclidean spaces like on a 2-sphere or a 2-torus.
Non-convex
There exist infinitely many non-convex regular polytopes in two dimensions, whose Schläfli symbols consist of rational
numbers {n/m}. They are called star polygons and share the same vertex arrangements of the convex regular polygons.
In general, for any natural number n, there are n-pointed non-convex regular polygonal stars with Schläfli symbols
{n/m} for all m such that m < n/2 (strictly speaking {n/m} = {n/(n − m)}) and m and n are coprime.

39.2.3 Circle

Main article: Circle
The hypersphere in 2 dimensions is a circle, sometimes called a 1-sphere (S¹) because it is a one-dimensional manifold. In a Euclidean plane, it has length (circumference) 2πr and the area of its interior is

A = πr²
where r is the radius.

39.2.4 Other shapes

Main article: List of two-dimensional geometric shapes
There are an infinitude of other curved shapes in two dimensions, notably including the conic sections: the ellipse,
the parabola, and the hyperbola.

39.3 In linear algebra
Another mathematical way of viewing two-dimensional space is found in linear algebra, where the idea of independence is crucial. The plane has two dimensions because the length of a rectangle is independent of its width. In the
technical language of linear algebra, the plane is two-dimensional because every point in the plane can be described
by a linear combination of two independent vectors.

39.3.1 Dot product, angle, and length

Main article: Dot product
The dot product of two vectors A = [A1 , A2 ] and B = [B1 , B2 ] is defined as:[4]

A · B = A1 B1 + A2 B2
A vector can be pictured as an arrow. Its magnitude is its length, and its direction is the direction the arrow points.
The magnitude of a vector A is denoted by ∥A∥ . In this viewpoint, the dot product of two Euclidean vectors A and
B is defined by[5]

A · B = ∥A∥ ∥B∥ cos θ,
where θ is the angle between A and B.
The dot product of a vector A by itself is

A · A = ∥A∥2 ,
which gives

‖A‖ = √(A · A),

the formula for the Euclidean length of the vector.
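As a small worked example (a sketch in Python with NumPy, added here for illustration), the two definitions of the dot product can be combined to recover the angle between two vectors:

import numpy as np

A = np.array([3.0, 4.0])
B = np.array([4.0, 3.0])

dot = A @ B                           # A1*B1 + A2*B2 = 24
theta = np.arccos(dot / (np.linalg.norm(A) * np.linalg.norm(B)))
print(np.degrees(theta))              # ~16.26 degrees between A and B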

39.4 In calculus
39.4.1 Gradient

In a rectangular coordinate system, the gradient is given by

∇f = (∂f/∂x) i + (∂f/∂y) j
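For illustration (a sketch assuming Python with SymPy, not part of the article), the gradient of a concrete scalar field can be computed symbolically component by component:

import sympy as sp

x, y = sp.symbols('x y')
f = x**2 * y + sp.sin(y)

# Gradient as the vector of partial derivatives (df/dx, df/dy).
grad_f = sp.Matrix([sp.diff(f, x), sp.diff(f, y)])
print(grad_f)    # Matrix([[2*x*y], [x**2 + cos(y)]])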

39.4.2 Line integrals and double integrals

For some scalar field f : U ⊆ ℝ² → ℝ, the line integral along a piecewise smooth curve C ⊂ U is defined as

∫_C f ds = ∫_a^b f(r(t)) |r′(t)| dt,

where r: [a, b] → C is an arbitrary bijective parametrization of the curve C such that r(a) and r(b) give the endpoints of C and a < b.
For a vector field F : U ⊆ ℝ² → ℝ², the line integral along a piecewise smooth curve C ⊂ U, in the direction of r, is defined as

∫_C F(r) · dr = ∫_a^b F(r(t)) · r′(t) dt,

where · is the dot product and r: [a, b] → C is a bijective parametrization of the curve C such that r(a) and r(b) give the endpoints of C.
A double integral refers to an integral within a region D in ℝ² of a function f(x, y), and is usually written as

∫∫_D f(x, y) dx dy.
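A minimal numerical sketch of the scalar line integral, assuming Python with NumPy and SciPy (the example curve and integrand are chosen here): for f(x, y) = x² + y² on the unit circle the integrand is identically 1, so the integral equals the circle's length 2π.

import numpy as np
from scipy.integrate import quad

# f(x, y) = x^2 + y^2 along r(t) = (cos t, sin t), 0 <= t <= 2*pi.
def integrand(t):
    x, y = np.cos(t), np.sin(t)
    speed = 1.0                        # |r'(t)| = sqrt(sin^2 t + cos^2 t) = 1
    return (x**2 + y**2) * speed

value, _ = quad(integrand, 0.0, 2.0 * np.pi)
print(value, 2.0 * np.pi)              # both ~6.2832: f is 1 on the curve, length 2*pi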

39.4.3 Fundamental theorem of line integrals

Main article: Fundamental theorem of line integrals
The fundamental theorem of line integrals says that a line integral through a gradient field can be evaluated by evaluating the original scalar field at the endpoints of the curve.
Let φ : U ⊆ ℝ² → ℝ. Then

φ(q) − φ(p) = ∫_{γ[p,q]} ∇φ(r) · dr.

39.4.4 Green’s theorem

Main article: Green’s theorem
Let C be a positively oriented, piecewise smooth, simple closed curve in a plane, and let D be the region bounded by
C. If L and M are functions of (x, y) defined on an open region containing D and have continuous partial derivatives
there, then[6][7]
∮_C (L dx + M dy) = ∫∫_D ( ∂M/∂x − ∂L/∂y ) dx dy

where the path of integration along C is counterclockwise.
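A quick numerical check of the theorem (a sketch assuming Python with NumPy and SciPy; the choice L = −y, M = x and the unit disk are this example's assumptions): both sides then equal 2π.

import numpy as np
from scipy.integrate import quad

# With L = -y and M = x, dM/dx - dL/dy = 2, so the double integral over the
# unit disk D is 2 * area(D) = 2*pi.  The boundary C is the unit circle.
def boundary_integrand(t):
    x, y = np.cos(t), np.sin(t)
    dx, dy = -np.sin(t), np.cos(t)
    return -y * dx + x * dy            # L dx + M dy along r(t) = (cos t, sin t)

lhs, _ = quad(boundary_integrand, 0.0, 2.0 * np.pi)
print(lhs, 2.0 * np.pi)                # both ~6.2832, as Green's theorem predicts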

39.5 In topology
In topology, the plane is characterized as being the unique contractible 2-manifold.
Its dimension is characterized by the fact that removing a point from the plane leaves a space that is connected, but
not simply connected.

39.6 In graph theory
In graph theory, a planar graph is a graph that can be embedded in the plane, i.e., it can be drawn on the plane in
such a way that its edges intersect only at their endpoints. In other words, it can be drawn in such a way that no edges
cross each other.[8] Such a drawing is called a plane graph or planar embedding of the graph. A plane graph can
be defined as a planar graph with a mapping from every node to a point on a plane, and from every edge to a plane
curve on that plane, such that the extreme points of each curve are the points mapped from its end nodes, and all
curves are disjoint except on their extreme points.
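For a concrete check (a sketch assuming Python with the NetworkX library, which is not mentioned in the article), the complete graph K₄ is planar while K₅ is not:

import networkx as nx

# check_planarity returns a flag and, when the graph is planar, an embedding.
is_planar_k4, embedding = nx.check_planarity(nx.complete_graph(4))
is_planar_k5, _ = nx.check_planarity(nx.complete_graph(5))
print(is_planar_k4, is_planar_k5)   # True False: K5 cannot be drawn without crossings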

39.7 References
[1] “Analytic geometry”. Encyclopædia Britannica (Encyclopædia Britannica Online ed.). 2008.
[2] Burton 2011, p. 374
[3] Wessel’s memoir was presented to the Danish Academy in 1797; Argand’s paper was published in 1806. (Whittaker &
Watson, 1927, p. 9)
[4] S. Lipschutz, M. Lipson (2009). Linear Algebra (Schaum’s Outlines) (4th ed.). McGraw Hill. ISBN 978-0-07-154352-1.
[5] M.R. Spiegel, S. Lipschutz, D. Spellman (2009). Vector Analysis (Schaum’s Outlines) (2nd ed.). McGraw Hill. ISBN
978-0-07-161545-7.
[6] Mathematical methods for physics and engineering, K.F. Riley, M.P. Hobson, S.J. Bence, Cambridge University Press,
2010, ISBN 978-0-521-86153-3
[7] Vector Analysis (2nd Edition), M.R. Spiegel, S. Lipschutz, D. Spellman, Schaum’s Outlines, McGraw Hill (USA), 2009,
ISBN 978-0-07-161545-7
[8] Trudeau, Richard J. (1993). Introduction to Graph Theory (Corrected, enlarged republication. ed.). New York: Dover
Pub. p. 64. ISBN 978-0-486-67870-2. Retrieved 8 August 2012. Thus a planar graph, when drawn on a flat surface,
either has no edge-crossings or can be redrawn without them.

39.8 See also
• Three-dimensional space
• Two-dimensional graph


Chapter 40

VC dimension
In statistical learning theory, or sometimes computational learning theory, the VC dimension (for Vapnik–Chervonenkis
dimension) is a measure of the capacity (complexity, expressive power, richness, or flexibility) of a statistical classification algorithm, defined as the cardinality of the largest set of points that the algorithm can shatter. It is a core
concept in Vapnik–Chervonenkis theory, and was originally defined by Vladimir Vapnik and Alexey Chervonenkis.
Informally, the capacity of a classification model is related to how complicated it can be. For example, consider the
thresholding of a high-degree polynomial: if the polynomial evaluates above zero, that point is classified as positive,
otherwise as negative. A high-degree polynomial can be wiggly, so it can fit a given set of training points well. But
one can expect that the classifier will make errors on other points, because it is too wiggly. Such a polynomial has a
high capacity. A much simpler alternative is to threshold a linear function. This function may not fit the training set
well, because it has a low capacity. This notion of capacity is made rigorous below.

40.1 Shattering
A classification model f with some parameter vector θ is said to shatter a set of data points (x1 , x2 , . . . , xn ) if, for
all assignments of labels to those points, there exists a θ such that the model f makes no errors when evaluating that
set of data points.
The VC dimension of a model f is the maximum number of points that can be arranged so that f shatters them.
More formally, it is h′ where h′ is the maximum h such that some data point set of cardinality h can be shattered by
f.
For example, consider a straight line as the classification model: the model used by a perceptron. The line should
separate positive data points from negative data points. There exist sets of 3 points that can indeed be shattered
using this model (any 3 points that are not collinear can be shattered). However, no set of 4 points can be shattered:
by Radon’s theorem, any four points can be partitioned into two subsets with intersecting convex hulls, so it is not
possible to separate one of these two subsets from the other. Thus, the VC dimension of this particular classifier
is 3. It is important to remember that while one can choose any arrangement of points, the arrangement of those points cannot change when attempting to shatter it for a given label assignment: to shatter a set of three points, all 2³ = 8 possible label assignments must be realisable.
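The 2D perceptron example can be verified by brute force (a sketch added here, assuming Python with NumPy and SciPy; separable and shattered are names chosen for this illustration): each labelling is tested for linear separability with a small feasibility linear program, and a point set is shattered when every labelling is separable.

import itertools
import numpy as np
from scipy.optimize import linprog

def separable(points, labels):
    # Feasibility LP: find (w1, w2, b) with y_i * (w . x_i + b) >= 1 for all i.
    # Any strictly separating line can be rescaled to satisfy this margin.
    A = np.array([[-y * x[0], -y * x[1], -y] for x, y in zip(points, labels)])
    b = -np.ones(len(points))
    res = linprog(c=[0, 0, 0], A_ub=A, b_ub=b, bounds=[(None, None)] * 3)
    return res.success

def shattered(points):
    # Shattered means every assignment of +/-1 labels is linearly separable.
    return all(separable(points, labels)
               for labels in itertools.product([-1, 1], repeat=len(points)))

print(shattered([(0, 0), (1, 0), (0, 1)]))           # True: 3 non-collinear points
print(shattered([(0, 0), (1, 0), (0, 1), (1, 1)]))   # False: the XOR labelling fails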

40.2 Uses
The VC dimension has utility in statistical learning theory, because it can predict a probabilistic upper bound on the
test error of a classification model.
Vapnik[1] proved that, with probability 1 − η, the test error (on data drawn i.i.d. from the same distribution as the training set) satisfies

error_test ≤ error_training + √( ( h(log(2N/h) + 1) − log(η/4) ) / N ),

where h is the VC dimension of the classification model, 0 ≤ η ≤ 1, and N is the size of the training set (restriction: this formula is valid when h ≪ N). Similar complexity bounds can be derived using Rademacher complexity, but Rademacher complexity can sometimes provide more insight than VC dimension calculations into statistical methods such as those using kernels. The generalization of the VC dimension for multi-valued functions is the Natarajan dimension.
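As a small worked example of the bound (added here for illustration; the use of natural logarithms is an assumption about the intended base), with h = 10, N = 10,000 and η = 0.05 the capacity term is roughly 0.1:

import math

h, N, eta = 10, 10_000, 0.05
capacity_term = math.sqrt((h * (math.log(2 * N / h) + 1) - math.log(eta / 4)) / N)
print(capacity_term)   # ~0.095: the bound on the gap between training and test error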
In computational geometry, VC dimension is one of the critical parameters in the size of ε-nets, which determines
the complexity of approximation algorithms based on them; range sets without finite VC dimension may not have
finite ε-nets at all.

40.3 See also
• Sauer–Shelah lemma, a bound on the number of sets in a set system in terms of the VC dimension
• Karpinski-Macintyre theorem, a bound on the VC dimension of general Pfaffian formulas

40.4 References
[1] Vapnik, Vladimir. The nature of statistical learning theory. springer, 2000.

• Andrew Moore’s VC dimension tutorial
• Vapnik, Vladimir. “The nature of statistical learning theory”. springer, 2000.
• V. Vapnik and A. Chervonenkis. “On the uniform convergence of relative frequencies of events to their probabilities.” Theory of Probability and its Applications, 16(2):264–280, 1971.
• A. Blumer, A. Ehrenfeucht, D. Haussler, and M. K. Warmuth. “Learnability and the Vapnik–Chervonenkis
dimension.” Journal of the ACM, 36(4):929–965, 1989.
• Christopher Burges Tutorial on SVMs for Pattern Recognition (containing information also for VC dimension)
• Bernard Chazelle. “The Discrepancy Method.”
• B.K. Natarajan. “On Learning sets and functions.” Machine Learning, 4, 67-97, 1989.

Chapter 41

Zero-dimensional space
This article is about zero dimension in topology. For several kinds of zero space in algebra, see zero object (algebra).
In mathematics, a zero-dimensional topological space (or nildimensional) is a topological space that has dimension
zero with respect to one of several inequivalent notions of assigning a dimension to a given topological space.[1][2] An
illustration of a nildimensional space is a point.[3]

41.1 Definition
Specifically:
• A topological space is zero-dimensional with respect to the Lebesgue covering dimension if every finite open
cover of the space has a finite refinement which is a cover of the space by open sets such that any point in the
space is contained in exactly one open set of this refinement.
• A topological space is zero-dimensional with respect to the small inductive dimension if it has a base consisting
of clopen sets.
The two notions above agree for separable, metrisable spaces.

41.2 Properties of spaces with covering dimension zero
• A zero-dimensional Hausdorff space is necessarily totally disconnected, but the converse fails. However, a
locally compact Hausdorff space is zero-dimensional if and only if it is totally disconnected. (See (Arhangel’skii
2008, Proposition 3.1.7, p.136) for the non-trivial direction.)
• Zero-dimensional Polish spaces are a particularly convenient setting for descriptive set theory. Examples of
such spaces include the Cantor space and Baire space.
• Hausdorff zero-dimensional spaces are precisely the subspaces of topological powers 2I where 2 = {0, 1} is
given the discrete topology. Such a space is sometimes called a Cantor cube. If I is countably infinite, 2I is
the Cantor space.

41.3 Notes
• Arhangel’skii, Alexander; Tkachenko, Mikhail (2008), Topological Groups and Related Structures, Atlantis
Studies in Mathematics, Vol. 1, Atlantis Press, ISBN 90-78677-06-6
• Engelking, Ryszard (1977). General Topology. PWN, Warsaw.
• Willard, Stephen (2004). General Topology. Dover Publications. ISBN 0-486-43479-6.

41.4 References
[1] “zero dimensional”. planetmath.org. Retrieved 2015-06-06.
[2] Hazewinkel, Michiel (1989). Encyclopaedia of Mathematics, Volume 3. Kluwer Academic Publishers. p. 190.
[3] Wolcott, Luke; McTernan, Elizabeth (2012). “Imagining Negative-Dimensional Space” (PDF). In Bosch, Robert; McKenna,
Douglas; Sarhangi, Reza. Proceedings of Bridges 2012: Mathematics, Music, Art, Architecture, Culture. Phoenix, Arizona,
USA: Tessellations Publishing. pp. 637–642. ISBN 978-1-938664-00-7. ISSN 1099-6702. Retrieved 10 July 2015.


Marsupilamov, MystBot, Addbot, Lightbot, Legobot, Luckas-bot, TaBOT-zerem, KamikazeBot, Clare4004, Onzie9, Junior Wrangler,
EmausBot, ZéroBot, AvicAWB, D.Lazard, Danramras, BTotaro, MaximalIdeal, Solomon7968, ChrisGualtieri, Mark viking, Ozgunner,
AJeiEM and Anonymous: 15
• Matroid rank Source: https://en.wikipedia.org/wiki/Matroid_rank?oldid=675336702 Contributors: Mack2, David Eppstein and Zroe1029
• Negative-dimensional space Source: https://en.wikipedia.org/wiki/Negative-dimensional_space?oldid=673554250 Contributors: Michael
Hardy, Ogerard, Lamro, Northernhenge and Anonymous: 1
• Nonlinear dimensionality reduction Source: https://en.wikipedia.org/wiki/Nonlinear_dimensionality_reduction?oldid=668904482 Contributors: Edward, Michael Hardy, Kku, MartinHarper, Charles Matthews, Guaka, Catbar, T0m, Lawrennd, Mukerjee, Andreas Kaufmann, Forderud, Oleg Alexandrov, Firsfron, Jfr26, Male1979, Qwertyus, Rjwilmsi, Gmelli, Mathbot, Choess, Gussisaurio, Kri, Wavelength, Gaius Cornelius, Kilianw, SmackBot, Mcld, Memming, Yannick Copin, TNeloms, Shorespirit, Morgaladh, AnAj, Coffee2theorems,
Dfalcantara, A3nm, STBot, Salih, Trondarild, Jacobsn, Pjoef, Fragrant, Melcombe, Headlessplatter, Agor153, 1ForTheMoney, JamesXinzhiLi, Arbitrarily0, Yobot, Ziyuang, FrescoBot, X7q, MGA73bot, Rc3002, Ciuncun, John of Reading, Ambarish.jash, Ida Shaw,
Aria802, Phrank36, Zephyrus Tavvier, Forefinger2, Tatome, WikiMSL, Laughsinthestocks, Kaijiang ustc, BG19bot, Tommyboucher,
Sunjigang1965, ChrisGualtieri, Ljuvela, OhGodItsSoAmazing, Monkbot, Siddhantmanocha, Jcnebel and Anonymous: 64
• One-dimensional space Source: https://en.wikipedia.org/wiki/One-dimensional_space?oldid=673381431 Contributors: Mporter, Tomruen, Wtmitchell, Red Slash, Sardanaphalus, RDBury, R'n'B, JohnBlackburne, Lamro, Bobathon71, Addbot, Amirobot, JackieBot,
Xqbot, HRoestBot, Double sharp, 4, Hhhippo, ZéroBot, Rocketrod1960, ClueBot NG, Lanthanum-138, BG19bot, IluvatarBot, Vanished user lt94ma34le12, JYBot, Jjbernardiscool, TCMemoire, Cranberry Products, KasparBot and Anonymous: 9
• Ordinate Source: https://en.wikipedia.org/wiki/Ordinate?oldid=667025598 Contributors: Zundark, Matusz, Michael Hardy, Grape~enwiki,
Hyacinth, Optim, Gandalf61, Edcolins, Musiphil, Oleg Alexandrov, Linas, Polyparadigm, Maxal, Allens, SmackBot, BiT, Octahedron80,
R'n'B, LizardJr8, Käptn Weltall, Addbot, Theraven502, Twri, Smallman12q, Double sharp, Solomonfromfinland, Stringybark, Dearhumanity, Aupif, ClueBot NG, Wwjd2012, Mguggis, Bryanrutherford0 and Anonymous: 19
• Regular sequence Source: https://en.wikipedia.org/wiki/Regular_sequence?oldid=597641160 Contributors: Charles Matthews, Berjoh,
Crust, Oleg Alexandrov, R.e.b., The Rambling Man, Typometer, Arcfrk, Jsbeder, Arbitrarily0, D.Lazard, BTotaro, BG19bot, Recursive
mindloop and Anonymous: 8
• Relative canonical model Source: https://en.wikipedia.org/wiki/Relative_canonical_model?oldid=492228791 Contributors: Michael
Hardy, Charles Matthews, Arthur Rubin, SmackBot, CBM, Pjoef, Favonian, Yobot, Createangelos, FrescoBot, Deltahedron and Anonymous: 5
• Relative dimension Source: https://en.wikipedia.org/wiki/Relative_dimension?oldid=634231621 Contributors: Jitse Niesen, Nbarth,
CBM, Jj137, David Eppstein, R'n'B, Philip Trueman, Brad7777 and Anonymous: 1
• Schauder dimension Source: https://en.wikipedia.org/wiki/Schauder_basis?oldid=671861507 Contributors: AxelBoldt, Michael Hardy,
TakuyaMurata, Charles Matthews, Jitse Niesen, Aetheling, Tobias Bergemann, Giftlite, BenFrantzDale, Lethe, Rich Farmbrough, Sam
Derbyshire, Linas, Rjwilmsi, R.e.b., Bgwhite, Wavelength, Henning Makholm, Ulner, Michael Kinyon, JHunterJ, Headbomb, Forgetfulfunctor, Vanish2, Jay Gatsby, Gamesou, AlleborgoBot, SieBot, Mild Bill Hiccup, Brews ohare, Addbot, DOI bot, LaaknorBot, Yobot,
Citation bot, Bdmy, Constructive editor, Trappist the monk, 777sms, RomDem, Chricho, BG19bot, Anaxagore~enwiki, Mgkrupa and
Anonymous: 11
• Seven-dimensional space Source: https://en.wikipedia.org/wiki/Seven-dimensional_space?oldid=655918492 Contributors: Dominus,
Mporter, Tomruen, Algebraist, Wavelength, Arthur Rubin, Colonies Chris, Tamfang, Sammy1339, Titus III, Pentascape, ShelfSkewed,
Myasuda, R'n'B, JohnBlackburne, Jdcrutch, JL-Bot, Niceguyedc, Muhandes, TimothyRias, Addbot, Favonian, Amirobot, AnomieBOT,
Xqbot, Foobarnix, Double sharp, 4, Jfmantis, GoingBatty, Chricho, ClueBot NG, Lanthanum-138, BG19bot, Itc editor2, KHEname,
Loraof and Anonymous: 6
• Six-dimensional space Source: https://en.wikipedia.org/wiki/Six-dimensional_space?oldid=674547053 Contributors: Michael Hardy,
Ixfd64, Mporter, Tomruen, Chris Howard, Rgdboer, Jheald, Rjwilmsi, R.e.b., Srleffler, Wavelength, Arthur Rubin, Allens, SmackBot, Colonies Chris, Tamfang, Salamurai, Newone, Eastlaw, CRGreathouse, Myasuda, Cydebot, Mato, D.H, JohnBlackburne, Lamro,
YohanN7, Mild Bill Hiccup, Robert Skyhawk, Muhandes, Brews ohare, Cenarium, TimothyRias, WikHead, Addbot, Jkasd, Yobot, Xqbot,
, Jschnur, Double sharp, Sumone10154, 4, GoingBatty, Solarra, Alborzagros, ClueBot NG, Bibcode Bot, Liefs13, Solomon7968,
Sixdimensiondesign, OCCullens, Sahanded and Anonymous: 21
• Two-dimensional space Source: https://en.wikipedia.org/wiki/Two-dimensional_space?oldid=682235655 Contributors: Dino, Bevo,
Mporter, Kri, Wavelength, RDBury, Incnis Mrsi, Kjkjava, Newone, JAnDbot, Magioladitis, R'n'B, JohnBlackburne, Dmcq, Mitch Ames,
Addbot, Luckas-bot, Yobot, AnomieBOT, Xqbot, GrouchoBot, Frosted14, Gire 3pich2005, Double sharp, 4, EmausBot, Jpvandijk,
Tijfo098, Rezabot, மதனாஹரன், Microextruders, IluvatarBot, JYBot, Aymankamelwiki, Saehry, Brirush, Suelru, Monmonmon098,
Loraof and Anonymous: 15
• VC dimension Source: https://en.wikipedia.org/wiki/VC_dimension?oldid=682803578 Contributors: The Anome, Mark Durst, Michael
Hardy, Dcljr, LouI, BAxelrod, Hike395, Charles Matthews, Dcoetzee, Dmytro, Pgan002, APH, Gene s, Icairns, Neko-chan, Teorth,
Rajah, Woohookitty, Sujith, Anindya, MithrandirMage, Crasshopper, Rofti, Repied, David s graff, CBM, David Eppstein, Stassa, LordAnubisBOT, Nadiatalent, Ubermammal, Melcombe, Dtunkelang, Tsourakakis, Ost316, Addbot, Download, Luckas-bot, Tanoshimi,
AnomieBOT, D'ohBot, MGA73bot, MondalorBot, Dinamik-bot, Pbasista, Detian, Vrmpx, Chire, ClueBot NG, FritzFasbinder, Jochen
Burghardt, Fedelis4198, Velvel2 and Anonymous: 36
• Zero-dimensional space Source: https://en.wikipedia.org/wiki/Zero-dimensional_space?oldid=676641292 Contributors: The Anome,
Dominus, Tobias Bergemann, Mporter, Paul August, Vipul, Linas, YurikBot, Trovatore, Kompik, Arthur Rubin, That Guy, From That
Show!, SmackBot, Incnis Mrsi, Melchoir, Vina-iwbot~enwiki, Cesium 133, Stotr~enwiki, P2005t, Dp462090, Cydebot, Ntsimp, R'n'B,
Squad51, Trumpet marietta 45750, Anonymous Dissident, Plclark, Lamro, SieBot, Mojoworker, Mitch Ames, Addbot, Numbo3-bot,
Luckas-bot, Yobot, 4th-otaku, Citation bot, IVAN3MAN, Dr. John D. McCarthy, Drusus 0, D.Lazard, ClueBot NG, Helpful Pixie Bot,
ChrisGualtieri, Mark viking, Jjbernardiscool, Noyster and Anonymous: 10

41.5.2 Images

• File:120-cell_graph_H4.svg Source: https://upload.wikimedia.org/wikipedia/commons/c/c1/120-cell_graph_H4.svg License: Public
domain Contributors: Own work Original artist: Tomruen
• File:24-cell_graph.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/8a/24-cell_graph.svg License: Public domain Contributors: I (Tom Ruen (talk)) created this work entirely by myself. Original artist: Tom Ruen (talk)
• File:2_41_t0_E8.svg Source: https://upload.wikimedia.org/wikipedia/commons/2/28/2_41_t0_E8.svg License: Public domain Contributors: Own work Original artist: self
• File:3D_Cartesian_Coodinate_Handedness.jpg Source: https://upload.wikimedia.org/wikipedia/commons/b/b2/3D_Cartesian_Coodinate_
Handedness.jpg License: CC BY-SA 3.0 Contributors: Own work Original artist: Primalshell
• File:4-cube_t0.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/67/4-cube_t0.svg License: Public domain Contributors:
Own work Original artist: self
• File:4-cube_t3.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/4c/4-cube_t3.svg License: Public domain Contributors:
Own work Original artist: self
• File:4-simplex_t0.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b9/4-simplex_t0.svg License: Public domain Contributors: Own work Original artist: Tomruen
• File:4_21_t0_E8.svg Source: https://upload.wikimedia.org/wikipedia/commons/e/ef/4_21_t0_E8.svg License: Public domain Contributors: Own work Original artist: Tomruen
• File:5-cube_t0.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b1/5-cube_t0.svg License: Public domain Contributors:
Own work Original artist: self
• File:5-cube_t4.svg Source: https://upload.wikimedia.org/wikipedia/commons/7/76/5-cube_t4.svg License: Public domain Contributors:
Own work Original artist: self
• File:5-demicube_t0_D5.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/d8/5-demicube_t0_D5.svg License: Public
domain Contributors: Own work Original artist: User:Tomruen
• File:5-simplex_t0.svg Source: https://upload.wikimedia.org/wikipedia/commons/c/c2/5-simplex_t0.svg License: Public domain Contributors: Own work Original artist: Tomruen
• File:6-cube_t0.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/dc/6-cube_t0.svg License: Public domain Contributors:
Own work Original artist: self
• File:6-cube_t5.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/a3/6-cube_t5.svg License: Public domain Contributors:
Own work Original artist: self
• File:6-demicube_t0_D6.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/4a/6-demicube_t0_D6.svg License: Public
domain Contributors: Own work Original artist: User:Tomruen
• File:6-simplex_t0.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/d5/6-simplex_t0.svg License: Public domain Contributors: Own work Original artist: Tomruen
• File:600-cell_graph_H4.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/f4/600-cell_graph_H4.svg License: Public
domain Contributors: Own work Original artist: Tomruen
• File:7-cube_t0.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/bb/7-cube_t0.svg License: Public domain Contributors:
Own work Original artist: self
• File:7-cube_t6.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/f1/7-cube_t6.svg License: Public domain Contributors:
Own work Original artist: self
• File:7-demicube_t0_D7.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/ae/7-demicube_t0_D7.svg License: Public
domain Contributors: Own work Original artist: User:Tomruen
• File:7-simplex_t0.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/d1/7-simplex_t0.svg License: Public domain Contributors: Own work Original artist: Tomruen
• File:8-cell-simple.gif Source: https://upload.wikimedia.org/wikipedia/commons/5/55/8-cell-simple.gif License: Public domain Contributors: Transferred from en.wikipedia to Commons. Original artist: JasonHise at English Wikipedia
• File:8-cube_t0.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b9/8-cube_t0.svg License: Public domain Contributors:
Own work Original artist: self
• File:8-cube_t7.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/3d/8-cube_t7.svg License: Public domain Contributors:
Own work Original artist: self
• File:8-demicube_t0_D7.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/11/8-demicube_t0_D7.svg License: Public
domain Contributors: Own work Original artist: User:Tomruen
• File:8-simplex_t0.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/18/8-simplex_t0.svg License: Public domain Contributors: Own work Original artist: Tomruen
• File:CDel_3.png Source: https://upload.wikimedia.org/wikipedia/commons/c/c3/CDel_3.png License: Public domain Contributors: Own
work Original artist: User:Tomruen
• File:CDel_3a.png Source: https://upload.wikimedia.org/wikipedia/commons/5/56/CDel_3a.png License: Public domain Contributors:
Own work Original artist: User:Tomruen
• File:CDel_4.png Source: https://upload.wikimedia.org/wikipedia/commons/8/8c/CDel_4.png License: Public domain Contributors: Own
work Original artist: User:Tomruen
• File:CDel_5.png Source: https://upload.wikimedia.org/wikipedia/commons/1/16/CDel_5.png License: Public domain Contributors: Own
work Original artist: User:Tomruen
• File:CDel_branch.png Source: https://upload.wikimedia.org/wikipedia/commons/4/43/CDel_branch.png License: Public domain Contributors: Own work Original artist: User:Tomruen
• File:CDel_branch_01lr.png Source: https://upload.wikimedia.org/wikipedia/commons/d/df/CDel_branch_01lr.png License: Public domain Contributors: Own work Original artist: User:Tomruen
• File:CDel_node.png Source: https://upload.wikimedia.org/wikipedia/commons/5/5e/CDel_node.png License: Public domain Contributors: Own work Original artist: User:Tomruen
• File:CDel_node_1.png Source: https://upload.wikimedia.org/wikipedia/commons/b/bd/CDel_node_1.png License: Public domain Contributors: Own work Original artist: User:Tomruen
• File:CDel_nodea.png Source: https://upload.wikimedia.org/wikipedia/commons/c/c7/CDel_nodea.png License: Public domain Contributors: Own work Original artist: User:Tomruen
• File:CDel_nodea_1.png Source: https://upload.wikimedia.org/wikipedia/commons/a/aa/CDel_nodea_1.png License: Public domain
Contributors: Own work Original artist: User:Tomruen
• File:CDel_nodes_10ru.png Source: https://upload.wikimedia.org/wikipedia/commons/f/fc/CDel_nodes_10ru.png License: Public domain Contributors: Own work Original artist: Tomruen
• File:CDel_split2.png Source: https://upload.wikimedia.org/wikipedia/commons/3/32/CDel_split2.png License: Public domain Contributors: Own work Original artist: User:Tomruen
• File:CIRCLE_1.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/1d/CIRCLE_1.svg License: CC-BY-SA-3.0 Contributors: Own work Original artist: en:User:Optimager
• File:Calabi-Yau.png Source: https://upload.wikimedia.org/wikipedia/commons/d/d4/Calabi-Yau.png License: CC BY-SA 2.5 Contributors: own work by Lunch
http://en.wikipedia.org/wiki/Image:Calabi-Yau.png (english Wikipedia) Original artist: Lunch

• File:Cartesian-coordinate-system-with-circle.svg Source: https://upload.wikimedia.org/wikipedia/commons/2/2e/Cartesian-coordinate-system-with-circle.svg License: CC-BY-SA-3.0 Contributors: ? Original artist: User 345Kai on en.wikipedia
• File:Cartesian-coordinate-system.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/0e/Cartesian-coordinate-system.
svg License: Public domain Contributors: Made by K. Bolino (Kbolino), based upon earlier versions. Original artist: K. Bolino
• File:Cartesian_coordinate_surfaces.png Source: https://upload.wikimedia.org/wikipedia/commons/9/94/Cartesian_coordinate_surfaces.
png License: CC BY 3.0 Contributors: Own work Original artist: WillowW
• File:Cartesian_coordinate_system_handedness.svg Source: https://upload.wikimedia.org/wikipedia/commons/e/e2/Cartesian_coordinate_
system_handedness.svg License: CC-BY-SA-3.0 Contributors: ? Original artist: ?
• File:Cartesian_coordinates_2D.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/1a/Cartesian_coordinates_2D.svg License: CC-BY-SA-3.0 Contributors: ? Original artist: ?
• File:Clifford-torus.gif Source: https://upload.wikimedia.org/wikipedia/commons/6/6f/Clifford-torus.gif License: CC0 Contributors:
Created using Maya and Macromedia Fireworks. Original artist: Jason Hise
• File:Commons-logo.svg Source: https://upload.wikimedia.org/wikipedia/en/4/4a/Commons-logo.svg License: ? Contributors: ? Original artist: ?
• File:Coord_Angle.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/61/Coord_Angle.svg License: Public domain Contributors: Own work Original artist: Andeggs
• File:Coord_Circular.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/04/Coord_Circular.svg License: Public domain
Contributors: Own work Original artist: Andeggs
• File:Coord_LatLong.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b3/Coord_LatLong.svg License: Public domain
Contributors:
• Spherical_Coordinates_(Colatitude,_Longitude).svg Original artist: Spherical_Coordinates_(Colatitude,_Longitude).svg: Inductiveload
• File:Coord_NumberLine.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/a8/Coord_NumberLine.svg License: Public
domain Contributors: Own work Original artist: Andeggs
• File:Coord_XY.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/49/Coord_XY.svg License: Public domain Contributors: Own work Original artist: Andeggs
• File:Coord_XYZ.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/64/Coord_XYZ.svg License: Public domain Contributors: Own work Original artist: Andeggs
• File:Coord_system_CA_0.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/69/Coord_system_CA_0.svg License: Public domain Contributors: Own work Original artist: Jorge Stolfi
• File:Crystal_Clear_app_3d.png Source: https://upload.wikimedia.org/wikipedia/commons/e/e4/Crystal_Clear_app_3d.png License:
LGPL Contributors: All Crystal Clear icons were posted by the author as LGPL on kde-look; Original artist: Everaldo Coelho and
YellowIcon;
• File:Cube-edge-first.png Source: https://upload.wikimedia.org/wikipedia/commons/3/38/Cube-edge-first.png License: CC BY 3.0
Contributors: Own work Original artist: Tetracube
• File:Cube-face-first.png Source: https://upload.wikimedia.org/wikipedia/commons/9/9a/Cube-face-first.png License: CC BY 3.0 Contributors: Own work Original artist: Tetracube
• File:Cube-vertex-first.png Source: https://upload.wikimedia.org/wikipedia/commons/5/51/Cube-vertex-first.png License: CC BY 3.0
Contributors: Own work Original artist: Tetracube
• File:Cylindrical_Coordinates.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b7/Cylindrical_Coordinates.svg License:
Public domain Contributors: Own work Original artist: Inductiveload
• File:Dali_Crucifixion_hypercube.jpg Source: https://upload.wikimedia.org/wikipedia/en/0/09/Dali_Crucifixion_hypercube.jpg License: Fair use Contributors: ? Original artist: ?
• File:Degrees_of_freedom_(diatomic_molecule).png Source: https://upload.wikimedia.org/wikipedia/commons/4/43/Degrees_of_freedom_%28diatomic_molecule%29.png License: Public domain Contributors: ? Original artist: ?
• File:Digon.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/f1/Digon.svg License: Public domain Contributors: Own
work Original artist: DanPMK
• File:Dimension_levels.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/45/Dimension_levels.svg License: CC BY-SA
3.0 Contributors: Own work Original artist: NerdBoy1392
• File:Disambig_gray.svg Source: https://upload.wikimedia.org/wikipedia/en/5/5f/Disambig_gray.svg License: Cc-by-sa-3.0 Contributors: ? Original artist: ?
• File:E-to-the-i-pi.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/35/E-to-the-i-pi.svg License: CC BY 2.5 Contributors: ? Original artist: ?
• File:Edit-clear.svg Source: https://upload.wikimedia.org/wikipedia/en/f/f2/Edit-clear.svg License: Public domain Contributors: The
Tango! Desktop Project. Original artist:
The people from the Tango! project. And according to the meta-data in the file, specifically: “Andreas Nilsson, and Jakub Steiner (although
minimally).”
• File:FWF_Samuel_Monnier_détail.jpg Source: https://upload.wikimedia.org/wikipedia/commons/9/99/FWF_Samuel_Monnier_d%C3%A9tail.jpg License: CC BY-SA 3.0 Contributors: Own work (low res file) Original artist: Samuel Monnier
• File:Gosset_1_42_polytope_petrie.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/32/Gosset_1_42_polytope_petrie.
svg License: Public domain Contributors: Transferred from en.wikipedia Original artist: Tomruen at en.wikipedia
• File:Henagon.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/fd/Henagon.svg License: Public domain Contributors:
Own work Original artist: DanPMK
• File:Houghton_EC85_Ab264_884f_-_Flatland,_cover.jpg Source: https://upload.wikimedia.org/wikipedia/commons/d/d7/Houghton_EC85_Ab264_884f_-_Flatland%2C_cover.jpg License: Public domain Contributors: *EC85 Ab264 884f Houghton Library, Harvard University Original artist: Edwin Abbott Abbott (author)
• File:Houghton_EC85_Ab264_884f_-_Flatland,_men_and_women_doors.jpg Source: https://upload.wikimedia.org/wikipedia/commons/1/1a/Houghton_EC85_Ab264_884f_-_Flatland%2C_men_and_women_doors.jpg License: Public domain Contributors: *EC85 Ab264 884f Houghton Library, Harvard University Original artist: Edwin Abbott Abbott (author)
• File:Jean_Metzinger,_1912-1913,_L'Oiseau_bleu,_(The_Blue_Bird)_oil_on_canvas,_230_x_196_cm,_Musée_d'Art_Moderne_de_la_Ville_de_Paris..jpg Source: https://upload.wikimedia.org/wikipedia/en/4/4e/Jean_Metzinger%2C_1912-1913%2C_L%27Oiseau_bleu%2C_%28The_Blue_Bird%29_oil_on_canvas%2C_230_x_196_cm%2C_Mus%C3%A9e_d%27Art_Moderne_de_la_Ville_de_Paris..jpg License: ? Contributors: Artsy Original artist: Jean Metzinger
• File:Jouffret.gif Source: https://upload.wikimedia.org/wikipedia/commons/3/3b/Jouffret.gif License: Public domain Contributors: http://historical.library.cornell.edu/cgi-bin/cul.math/docviewer?did=04810001&seq=&view=50&frames=0&pagenum=153 Original artist: Esprit Jouffret
• File:Lebesgue_Icon.svg Source: https://upload.wikimedia.org/wikipedia/commons/c/c9/Lebesgue_Icon.svg License: Public domain
Contributors: w:Image:Lebesgue_Icon.svg Original artist: w:User:James pic
• File:Letters_pca.png Source: https://upload.wikimedia.org/wikipedia/en/e/e1/Letters_pca.png License: PD Contributors: ? Original
artist: ?
• File:Limitcycle.jpg Source: https://upload.wikimedia.org/wikipedia/commons/a/a1/Limitcycle.jpg License: CC-BY-SA-3.0 Contributors: ? Original artist: ?
• File:Lle_hlle_swissroll.png Source: https://upload.wikimedia.org/wikipedia/commons/f/fd/Lle_hlle_swissroll.png License: CC BY 3.0
Contributors: Generated using the Modular Data Processing toolkit and matplotlib. Original artist: Olivier Grisel
• File:Nldr.jpg Source: https://upload.wikimedia.org/wikipedia/en/9/9c/Nldr.jpg License: PD Contributors: ? Original artist: ?
• File:Number-line.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/93/Number-line.svg License: CC0 Contributors: Own
work Original artist: Hakunamenta
• File:POV-Ray-Dodecahedron.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/a4/Dodecahedron.svg License: CC-BY-SA-3.0 Contributors: Vectorisation of Image:Dodecahedron.jpg Original artist: User:DTR
• File:Penteract_projected.png Source: https://upload.wikimedia.org/wikipedia/commons/5/51/Penteract_projected.png License: CC
BY 2.5 Contributors: Own work Original artist: Claudio Rocchini
• File:Picasso_Portrait_of_Daniel-Henry_Kahnweiler_1910.jpg Source: https://upload.wikimedia.org/wikipedia/en/6/68/Picasso_Portrait_of_Daniel-Henry_Kahnweiler_1910.jpg License: ? Contributors: Website of the Art Institute of Chicago Original artist: Pablo Picasso
• File:Question_book-new.svg Source: https://upload.wikimedia.org/wikipedia/en/9/99/Question_book-new.svg License: Cc-by-sa-3.0
Contributors:
Created from scratch in Adobe Illustrator. Based on Image:Question book.png created by User:Equazcion Original artist:
Tkgd2007
• File:Rechte-hand-regel.jpg Source: https://upload.wikimedia.org/wikipedia/commons/7/79/Rechte-hand-regel.jpg License: CC-BY-SA-3.0 Contributors: Own work Original artist: Abdull
• File:Regular_decagon.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/8c/Regular_decagon.svg License: Public domain Contributors: ? Original artist: ?
• File:Regular_dodecagon.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/8d/Regular_dodecagon.svg License: Public
domain Contributors: ? Original artist: ?
• File:Regular_enneadecagon.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/ad/Regular_enneadecagon.svg License:
Public domain Contributors: ? Original artist: ?
• File:Regular_hendecagon.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/83/Regular_hendecagon.svg License: Public domain Contributors: ? Original artist: ?
• File:Regular_heptadecagon.svg Source: https://upload.wikimedia.org/wikipedia/commons/e/ef/Regular_heptadecagon.svg License: Public domain Contributors: Own work User:Gustavb/regular_polygon.pl Original artist: Gustavb
• File:Regular_heptagon.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/93/Regular_heptagon.svg License: Public domain Contributors: ? Original artist: ?
• File:Regular_hexadecagon.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/9d/Regular_hexadecagon.svg License: Public domain Contributors: ? Original artist: ?
• File:Regular_hexagon.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/41/Regular_hexagon.svg License: Public domain Contributors: ? Original artist: ?
• File:Regular_icosagon.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/60/Regular_icosagon.svg License: Public domain Contributors: ? Original artist: ?
• File:Regular_nonagon.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/dd/Regular_nonagon.svg License: Public domain Contributors: Created by Gustavb using User:Gustavb/regular_polygon.pl. Original artist: Gustavb
• File:Regular_octadecagon.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b7/Regular_octadecagon.svg License: Public domain Contributors: ? Original artist: ?
• File:Regular_octagon.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/66/Regular_octagon.svg License: Public domain Contributors: Created by Gustavb using User:Gustavb/regular_polygon.pl. Original artist: Gustavb
• File:Regular_pentadecagon.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/03/Regular_pentadecagon.svg License:
Public domain Contributors: ? Original artist: ?
• File:Regular_pentagon.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/39/Regular_pentagon.svg License: Public domain Contributors: ? Original artist: ?
• File:Regular_quadrilateral.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/f7/Regular_quadrilateral.svg License: Public domain Contributors: ? Original artist: ?
• File:Regular_tetradecagon.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b8/Regular_tetradecagon.svg License: Public domain Contributors: ? Original artist: ?
• File:Regular_triangle.svg Source: https://upload.wikimedia.org/wikipedia/commons/e/ec/Regular_triangle.svg License: Public domain
Contributors: ? Original artist: ?
• File:Regular_tridecagon.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/09/Regular_tridecagon.svg License: Public
domain Contributors: ? Original artist: ?
• File:Right_hand_cartesian.svg Source: https://upload.wikimedia.org/wikipedia/commons/7/7f/Right_hand_cartesian.svg License: CC-BY-SA-3.0 Contributors: ? Original artist: ?
• File:Rubik's_cube_v3.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b6/Rubik%27s_cube_v3.svg License: CC-BY-SA-3.0 Contributors: Image:Rubik's cube v2.svg Original artist: User:Booyabazooka, User:Meph666 modified by User:Niabot
• File:SOMsPCA.PNG Source: https://upload.wikimedia.org/wikipedia/commons/b/bb/SOMsPCA.PNG License: CC BY-SA 3.0 Contributors: Own work Original artist: Agor153
• File:Schlegel_wireframe_8-cell.png Source: https://upload.wikimedia.org/wikipedia/commons/a/a2/Schlegel_wireframe_8-cell.png
License: CC BY-SA 3.0 Contributors: Transferred from en.wikipedia; transferred to Commons by User:Jalo using CommonsHelper.
Original artist: Original uploader was Tomruen at en.wikipedia
• File:Science-symbol-2.svg Source: https://upload.wikimedia.org/wikipedia/commons/7/75/Science-symbol-2.svg License: CC BY 3.0
Contributors: en:Image:Science-symbol2.png Original artist: en:User:AllyUnion, User:Stannered
• File:Science.jpg Source: https://upload.wikimedia.org/wikipedia/commons/5/54/Science.jpg License: Public domain Contributors: ?
Original artist: ?
• File:SlideQualityLife.png Source: https://upload.wikimedia.org/wikipedia/commons/4/48/SlideQualityLife.png License: CC BY 3.0
Contributors: A. N. Gorban, A. Zinovyev, Principal manifolds and graphs in practice: from molecular biology to dynamical systems,
http://arxiv.org/abs/1001.1122 Original artist: A. N. Gorban, A. Zinovyev
• File:Spherical_Coordinates_(Colatitude,_Longitude).svg Source: https://upload.wikimedia.org/wikipedia/commons/5/51/Spherical_Coordinates_%28Colatitude%2C_Longitude%29.svg License: Public domain Contributors: Own work Original artist: Inductiveload
• File:Squarecubetesseract.png Source: https://upload.wikimedia.org/wikipedia/commons/2/25/Squarecubetesseract.png License: CC
BY-SA 3.0 Contributors: self-made based on this (public domain), this (GFDL / CC-BY-SA-3.0) and this (use allowed for any purpose
providing “attribution is given to Robert Webb’s Great Stella software as the creator of this image along with a link to the website:
http://www.software3d.com/Stella.php"). Original artist: Andeggs
• File:Star_polygon_10-3.svg Source: https://upload.wikimedia.org/wikipedia/commons/2/22/Star_polygon_10-3.svg License: Public
domain Contributors: Own work Original artist: Inductiveload
• File:Star_polygon_5-2.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/bc/Star_polygon_5-2.svg License: Public domain Contributors: Own work Original artist: Inductiveload
• File:Star_polygon_7-2.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/42/Star_polygon_7-2.svg License: Public domain Contributors: Own work Original artist: Inductiveload
• File:Star_polygon_7-3.svg Source: https://upload.wikimedia.org/wikipedia/commons/c/c9/Star_polygon_7-3.svg License: Public domain Contributors: Own work Original artist: Inductiveload
• File:Star_polygon_8-3.svg Source: https://upload.wikimedia.org/wikipedia/commons/5/5d/Star_polygon_8-3.svg License: Public domain Contributors: Own work Original artist: Inductiveload
• File:Star_polygon_9-2.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/97/Star_polygon_9-2.svg License: Public domain Contributors: Own work Original artist: Inductiveload
• File:Star_polygon_9-4.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/46/Star_polygon_9-4.svg License: Public domain Contributors: Own work Original artist: Inductiveload
• File:Tesseract-perspective-cell-first.png Source: https://upload.wikimedia.org/wikipedia/commons/f/fd/Tesseract-perspective-cell-first.
png License: CC BY 3.0 Contributors: Own work Original artist: Tetracube
• File:Tesseract-perspective-edge-first.png Source: https://upload.wikimedia.org/wikipedia/commons/9/98/Tesseract-perspective-edge-first.
png License: CC BY 3.0 Contributors: Own work Original artist: Tetracube
• File:Tesseract-perspective-face-first.png Source: https://upload.wikimedia.org/wikipedia/commons/7/7e/Tesseract-perspective-face-first.
png License: CC BY 3.0 Contributors: Own work Original artist: Tetracube
• File:Tesseract-perspective-vertex-first.png Source: https://upload.wikimedia.org/wikipedia/commons/a/a9/Tesseract-perspective-vertex-first.
png License: CC BY 3.0 Contributors: Own work Original artist: Tetracube
• File:Tesseract.gif Source: https://upload.wikimedia.org/wikipedia/commons/5/55/Tesseract.gif License: Public domain Contributors: ?
Original artist: ?
• File:Tesseract_net.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/00/Tesseract_net.svg License: Public domain Contributors: Own work Original artist: de:Benutzer:Byteemoz
• File:Up2_1_32_t0_E7.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/66/Up2_1_32_t0_E7.svg License: Public domain Contributors: Own work Original artist: User:Tomruen
• File:Up2_2_31_t0_E7.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/0e/Up2_2_31_t0_E7.svg License: Public domain Contributors: Own work Original artist: User:Tomruen
• File:Up2_3_21_t0_E7.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/83/Up2_3_21_t0_E7.svg License: Public domain Contributors: Own work Original artist: User:Tomruen
• File:Up_1_22_t0_E6.svg Source: https://upload.wikimedia.org/wikipedia/commons/e/e6/Up_1_22_t0_E6.svg License: Public domain
Contributors: Own work Original artist: User:Tomruen
• File:Up_2_21_t0_E6.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/8e/Up_2_21_t0_E6.svg License: Public domain
Contributors: Own work Original artist: User:Tomruen
• File:VC1.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/d9/VC1.svg License: CC BY-SA 3.0 Contributors: Own work
based on: VC1.png by BAxelrod. Original artist: MithrandirMage
• File:VC2.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b8/VC2.svg License: CC BY-SA 3.0 Contributors: Own work
based on: VC2.png by BAxelrod. Original artist: MithrandirMage
• File:VC3.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b5/VC3.svg License: CC BY-SA 3.0 Contributors: Own work
based on: VC3.png by BAxelrod. Original artist: MithrandirMage
• File:VC4.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/37/VC4.svg License: CC BY-SA 3.0 Contributors: Own work
based on: VC4.png by BAxelrod. Original artist: MithrandirMage
• File:Wiki_letter_w_cropped.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/1c/Wiki_letter_w_cropped.svg License:
CC-BY-SA-3.0 Contributors:
• Wiki_letter_w.svg Original artist: Wiki_letter_w.svg: Jarkko Piiroinen
• File:Wikibooks-logo-en-noslogan.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/df/Wikibooks-logo-en-noslogan.
svg License: CC BY-SA 3.0 Contributors: Own work Original artist: User:Bastique, User:Ramac et al.
• File:Wikiquote-logo.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/fa/Wikiquote-logo.svg License: Public domain
Contributors: ? Original artist: ?
• File:Wikisource-logo.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/4c/Wikisource-logo.svg License: CC BY-SA
3.0 Contributors: Rei-artur Original artist: Nicholas Moreau
• File:Wiktionary-logo-en.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/f8/Wiktionary-logo-en.svg License: Public
domain Contributors: Vector version of Image:Wiktionary-logo-en.png. Original artist: Vectorized by Fvasconcellos (talk · contribs),
based on original logo tossed together by Brion Vibber

41.5.3 Content license

• Creative Commons Attribution-Share Alike 3.0
