Thursday, 16 July 2009

Spinors III: Chirality of Isotropic Subspaces

This is a sequel.

Tensor Product

Tensor product is a very important, very commonplace concept in mathematics. We speak of the tensor product of vector spaces and the tensor product of modules, but the latter will not concern us in this post. The tensor product of vector spaces logically belongs to the realm of linear algebra, but is not covered in any linear algebra course I know. The reason is probably pedagogical: linear algebra is an introductory course, intended for people with little experience in mathematics. On the other hand, tensor product is a rather sophisticated concept in comparison. Sometimes people refer to tensors as multilinear algebra, but I don’t find this to be a natural category. For instance, the theory of bilinear forms is a part of linear algebra, not “bilinear algebra”.

Definition

Fix k a field. Consider V and W finite dimensional vector spaces over k. Define the tensor product of V and W, denoted V (x) W (pardon my ASCII), to be the vector space of bilinear mappings
V* x W* –> k.

basic properties

  • V (x) W is naturally isomorphic to W (x) V
  • V (x) k is naturally isomorphic to V. This is so because V** is naturally isomorphic to V.
  • V (x) {0} is naturally isomorphic to {0}, the 0-dimensional vector space.
  • Consider V, U, W vector spaces. Then (V (+) U) (x) W is naturally isomorphic to V (x) W (+) U (x) W. Here “(+)” denotes direct sum of vector spaces, i.e. V (+) U is the vector space of ordered pairs (v, u) where v is in V and u is in U.
  • (V (x) W)* is naturally isomorphic to V* (x) W*.
  • V* (x) W* is naturally isomorphic to the vector space of bilinear mappings V x W –> k.
  • V* (x) W is naturally isomorphic to Hom(V, W), the vector space of linear operators (=homomorphisms) V –> W.
  • For dim V = 1, Hom(V, V) is also 1-dimensional and it has a special basis consisting of the identity operator. Hence
    Hom(V, V) is naturally isomorphic to k and so is V* (x) V. Thus, for 1-dimensional spaces, the dual vector space is the inverse vector space with respect to tensor product.

tensor product of vectors

Consider v a vector in V, w a vector in W. Then we construct the tensor product of v and w, denoted v (x) w, a vector in V (x) W. By definition, v (x) w is supposed to be a bilinear mapping v (x) w: V* x W* –> k. Consider a in V* and b in W*. We define (v (x) w)(a, b) = a(v) b(w) (take your time to parse this expression).
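
In coordinates, v (x) w is just the outer product of the two coordinate vectors. Here is a minimal numpy sketch of the definition (the variable names are mine):

    import numpy as np

    v = np.array([1.0, 2.0])         # v in V, dim V = 2
    w = np.array([3.0, 5.0, 7.0])    # w in W, dim W = 3

    t = np.outer(v, w)               # the tensor v (x) w as a 2 x 3 array

    a = np.array([1.0, -1.0])        # a in V*, as a coefficient row
    b = np.array([0.0, 2.0, 1.0])    # b in W*

    print(a @ t @ b)                 # (v (x) w)(a, b)
    print((a @ v) * (b @ w))         # a(v) b(w); the two numbers coincide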

basis

Suppose e1 .. en is a basis of V, f1 .. fm a basis of W.

Claim

{ei (x) fj} is a basis of V (x) W. In particular,
dim (V (x) W) = dim V dim W.

(Anti)Symmetric Tensors

Fix a vector space V. Consider the vector space V (x) V. Elements of V (x) V are called tensors of rank 2 over V. We construct a linear operator s: V (x) V –> V (x) V as follows. Consider t an element of
V (x) V. By definition, t is a bilinear mapping V* x V* –> k. We need to describe s(t), also an element of V (x) V hence also a bilinear mapping V* x V* –> k. Consider a, b in V*. We define
s(t)(a, b) = t(b, a).

It is easy to see V (x) V splits into a direct sum of two subspaces: S^2(V) and L^2(V). S^2(V) consists of t in V (x) V such that s(t) = t, i.e. it is the eigenspace of s corresponding to eigenvalue 1. Elements of S^2(V) are called symmetric tensors of rank 2 over V. L^2(V) consists of t in V (x) V such that s(t) = –t, i.e. it is the eigenspace of s corresponding to eigenvalue -1. Elements of L^2(V) are called antisymmetric tensors of rank 2 over V. The direct sum structure of V (x) V follows from the observation that s^2 = 1.
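
In coordinates s is just matrix transposition, so this is the familiar decomposition of a square matrix into its symmetric and antisymmetric parts. A minimal numpy sketch:

    import numpy as np

    t = np.array([[1.0, 2.0], [4.0, 8.0]])   # a rank-2 tensor over V, dim V = 2

    sym_part = (t + t.T) / 2     # lies in S^2(V): invariant under s
    asym_part = (t - t.T) / 2    # lies in L^2(V): s flips its sign

    assert np.allclose(sym_part.T, sym_part)      # s(t) = t
    assert np.allclose(asym_part.T, -asym_part)   # s(t) = -t
    assert np.allclose(sym_part + asym_part, t)   # the direct sum decomposition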

More generally, consider the vector space
T^k(V) := V (x) V (x) … (x) V for k copies of V. Elements of T^k(V) are called tensors of rank k over V. Consider
p a permutation of k elements (i.e. a bijection
{1, 2 … k} –> {1, 2 … k}). We define a linear operator
s_p: T^k(V) –> T^k(V) by the condition
s_p(t)(a_1, a_2 … a_k) = t(a_p(1), a_p(2) … a_p(k)). Here t is an element of T^k(V) and a_1, a_2 … a_k are elements of V*.

We define S^k(V) to be the subspace of T^k(V) consisting of t such that for any permutation of k elements p, s_p(t) = t. Elements of
S^k(V) are called symmetric tensors of rank k over V. Remember that permutations can be divided into odd and even. An odd permutation is the composition of an odd number of permutations which are transpositions of two elements of {1, 2 … k}. An even permutation is the composition of an even number of such transpositions. We define L^k(V) to be the subspace of T^k(V) consisting of such t that for any permutation of k elements p,
s_p(t) = sgn(p) t. Here sgn(p) is +1 for p even and –1 for p odd.
Elements of L^k(V) are called antisymmetric tensors of rank k over V. For k > 2, the direct sum of S^k(V) and L^k(V) is not the entire space T^k(V).

There is a natural projection operator sym: T^k(V) –> S^k(V). Consider t in T^k(V). Then, by definition, sym(t) = S_p s_p(t) / k! Here the sum S_p ranges over all permutations of k elements p. There is also a natural projection operator asym: T^k(V) –> L^k(V). Given t in T^k(V), we define asym(t) = S_p sgn(p) s_p(t) / k!

Claim

Consider v1, v2 … vk elements of V. Then
asym(v1 (x) v2 (x) … (x) vk) is a non-vanishing element of L^k(V) if and only if v1, v2 … vk are linearly independent.
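
If we represent a rank-k tensor as a k-dimensional array, each s_p becomes an axis permutation and asym becomes an average over all permutations. A minimal numpy sketch of the claim (the helper names sgn, asym, tensor are mine, not standard):

    import itertools, math
    from functools import reduce
    import numpy as np

    def sgn(p):
        # sign of a permutation: (-1)^(number of inversions)
        inv = sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))
        return -1 if inv % 2 else 1

    def asym(t):
        # asym(t) = S_p sgn(p) s_p(t) / k!, with s_p acting by permuting axes
        k = t.ndim
        perms = itertools.permutations(range(k))
        return sum(sgn(p) * np.transpose(t, p) for p in perms) / math.factorial(k)

    def tensor(*vectors):
        # v1 (x) v2 (x) ... (x) vk as a k-dimensional array
        return reduce(np.multiply.outer, vectors)

    v1, v2 = np.array([1.0, 0, 0]), np.array([0, 1.0, 0])
    v3 = np.array([1.0, 1, 1])
    print(np.allclose(asym(tensor(v1, v2, v3)), 0))       # False: independent
    print(np.allclose(asym(tensor(v1, v2, v1 + v2)), 0))  # True: dependent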

Suppose dim V = n. Consider e1 … en a basis of V. Consider
N = {1, 2 … n}^k, the set of all ordered k-tuples made of elements of {1, 2 … n}. Clearly #N, the number of elements of N, is n^k. Consider I = (i_1, i_2 … i_k) in N. We define E_I in T^k(V) to be e_i_1 (x) e_i_2 (x) … (x) e_i_k.

Claim

The E_I form a basis of T^k(V). In particular, dim T^k(V) = n^k.

This claim follows from our previous claim about general tensor products.

Define S(n, k) to be the set of all subsets of {1, 2 … n} of size k. Evidently #S(n, k) is the binomial coefficient (n k). Consider
I = {i_1, i_2 … i_k} in S(n, k). We define F_I in L^k(V) to be
asym(e_i_1 (x) e_i_2 (x) … (x) e_i_k). I’m cheating a bit here since this expression depends on the order of i_1 … i_k. However, the only ambiguity is the sign: even permutations don’t change the expression whereas odd permutations change its sign. For our purposes, we can make an arbitrary choice of order/sign for each I in S(n, k).

Claim

The F_I form a basis of L^k(V). In particular, dim L^k(V) = (n k).

Define M(n, k) to be the set of all multisets of size k made of elements of {1, 2 … n}. Multisets are like sets except that each element can appear in a multiset several times. The size of a multiset is defined by counting the elements with multiplicity. We have
#M(n, k) = (n + k – 1 k). Consider
I = {i_1, i_2 … i_k} in M(n, k). Here, i_m may coincide with i_l for some m and l. We define G_I in S^k(V) to be
sym(e_i_1 (x) e_i_2 (x) … (x) e_i_k).

Claim

The G_I form a basis of S^k(V). In particular,
dim S^k(V) = (n + k – 1 k).
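
These dimension formulas are easy to verify numerically: write sym and asym as matrices acting on the n^k-dimensional space T^k(V) and compute their ranks, i.e. the dimensions of their images. A small sketch along the lines above (helper names are mine):

    import itertools
    import math
    import numpy as np

    def sgn(p):
        inv = sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))
        return -1 if inv % 2 else 1

    def projector_rank(n, k, use_sign):
        # matrix of asym (use_sign=True) or sym (use_sign=False) on T^k(V)
        dim = n ** k
        P = np.zeros((dim, dim))
        for col, idx in enumerate(itertools.product(range(n), repeat=k)):
            t = np.zeros((n,) * k)
            t[idx] = 1.0                                  # the basis tensor E_I
            image = sum((sgn(p) if use_sign else 1) * np.transpose(t, p)
                        for p in itertools.permutations(range(k)))
            P[:, col] = image.reshape(dim) / math.factorial(k)
        return np.linalg.matrix_rank(P)

    n, k = 4, 3
    print(projector_rank(n, k, True), math.comb(n, k))           # dim L^k(V) = (n k)
    print(projector_rank(n, k, False), math.comb(n + k - 1, k))  # dim S^k(V) = (n + k - 1 k)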

Vector Space Determinant

The concept of a vector space determinant is standard and widely used, though not under this name and notation.

Definition

Consider a vector space V with dim V = n. Then the determinant of V, denoted Det V, is the vector space L^n(V).

basic properties

  • dim Det V = 1
  • Det (V (+) W) is naturally isomorphic to Det V (x) Det W
  • Suppose W is a subspace of V. Then Det V is naturally isomorphic to Det W (x) Det (V/W).
  • Det (V*) is naturally isomorphic to (Det V)*
  • Consider A: V –> W a linear operator, m a natural number. Then there is a naturally induced operator
    L^m(A): L^m(V) –> L^m(W). In the special case
    dim V = dim W = m we get the operator
    det A: Det V –> Det W. Let us specialize further to the case
    V = W. Then det A: Det V –> Det V is simply multiplication by a constant c in the ground field k (since Det V is
    1-dimensional). c is the conventional determinant of the operator A we all know and love.

Consider v1, v2 … vn elements of V. By a previous claim,
asym(v1 (x) v2 (x) … (x) vn) is a non-vanishing element of Det V if and only if v1, v2 … vn form a basis of V. Also, suppose e1 … en and f1 … fn are two bases of V related by the n x n matrix P, i.e.
(e1 … en) = (f1 … fn) P. Then
asym(e1 (x) … (x) en) = (det P) asym(f1 (x) … (x) fn).
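
This identity is exactly what makes the usual determinant of an operator basis-independent. A quick numerical sketch of it (self-contained, with the same helpers as before; the names are mine):

    import itertools, math
    from functools import reduce
    import numpy as np

    def sgn(p):
        inv = sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))
        return -1 if inv % 2 else 1

    def asym(t):
        perms = itertools.permutations(range(t.ndim))
        return sum(sgn(p) * np.transpose(t, p) for p in perms) / math.factorial(t.ndim)

    def tensor(*vectors):
        return reduce(np.multiply.outer, vectors)

    rng = np.random.default_rng(0)
    n = 3
    F = rng.normal(size=(n, n))   # columns f1 ... fn: a (random) basis
    P = rng.normal(size=(n, n))   # an invertible change-of-basis matrix
    E = F @ P                     # columns e1 ... en: (e1 ... en) = (f1 ... fn) P

    lhs = asym(tensor(*E.T))      # asym(e1 (x) ... (x) en)
    rhs = np.linalg.det(P) * asym(tensor(*F.T))
    print(np.allclose(lhs, rhs))  # True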

Chirality

Consider V a complex quadratic vector space of dimension n. Then Det V contains two special non-vanishing elements that differ by a sign, say w and –w. These elements are constructed as follows. Consider e1 … en an orthonormal basis of V. Then
asym(e1 (x) e2 (x) … (x) en) is a non-vanishing element of Det V. Any two orthonormal bases are related by an orthogonal n x n matrix O. Since O is orthogonal, we have either det O = +1 or det O = –1. Thus asym(e1 (x) e2 (x) … (x) en) can only differ by a sign for different orthonormal bases, yielding w and –w.

Suppose W is an isotropic subspace of V. Any element v of V defines a linear functional v* on V defined by v*(u) = Q(v, u). Here u is an arbitrary element of V and Q is the quadratic form of V. If v and u both belong to W we get v*(u) = Q(v, u) = 0 since W is isotropic. Thus, given v in W, v* is a linear functional vanishing on W. Since it vanishes on W, it determines a linear functional a on V/W. This can be seen as follows. Any u in V/W can be represented by some u’ in V. We can then set a(u) = v*(u’). However, u’ is only defined up to adding an arbitrary element w of W. But it’s OK because v*(u’ + w) = v*(u’) + v*(w) = v*(u’) since the second term vanishes by our previous observation. Thus a is well defined.

For any v in W we constructed a linear functional on V/W i.e. an element of (V/W)*. This yields a linear operator W –> (V/W)*. Now suppose dim V = 2m and W is maximal isotropic, that is,
dim W = m. Then our linear operator is an isomorphism: it is injective (if v* vanishes on V/W then, since it also vanishes on W, v* vanishes on all of V, hence v = 0 by non-degeneracy of Q), and since dim W = m = dim (V/W)*, injectivity implies surjectivity.

In the following, we use the symbol “=” to denote “naturally isomorphic”. Det V = Det W (x) Det (V/W). By the above,
W = (V/W)*. Hence
Det V = Det ((V/W)*) (x) Det (V/W) = (Det (V/W))* (x) Det(V/W) = C. In particular we get a special element in Det V: the element corresponding to 1 in C.

Claim

The special element is either w or –w. This invariant divides maximal isotropic subspaces into two classes (chiralities). Any two maximal isotropic subspaces W and W’ are related by an orthogonal operator O: V –> V. For det O = +1, W and W’ have the same chirality. For
det O = –1, W and W’ have opposite chirality.

Wednesday, 1 July 2009

Autoevolution III

This is a sequel.

As Eli justly remarked, we are already cyborgs. The computer is extending our ability to think and remember. The Internet is extending our ability to communicate and share information. Microscopes, telescopes, night vision etc. are extending our ability to sense.

What is going to change in the future in this respect? Firstly, our technological extensions will become more numerous and powerful. Equally importantly, they will become more synergetic with ourselves. Instead of looking through a telescope, your experience will be that of your eyes being the telescope. Instead of searching in Google using your keyboard, you will do that by merely thinking. Instead of writing things down in Microsoft Outlook, it will just feel like remembering. Instead of doing computations with a calculator, you will do them in your mind without “sweat”. Instead of driving a car, you will become the car (provided you really want to control it instead of just getting from point A to point B; even the latter will usually be unnecessary).

Extension of human “IO” will not be limited to information naturally contained in the physical reality. It will include conventionally encoded information: the kind we store in writing, computers etc. today. The synergy of the human use of such information will lead to the Internet becoming a “virtual reality”: something experienced directly rather than through non-transparent intermediates such as keyboards, mice and displays. “Ordinary” reality will become “augmented reality” as additional layers of information become seamlessly available about “surrounding” physical objects (that is, objects under attention). Imagine looking at a bus and knowing what line it is and what its route is. A possible intermediate form of augmented reality is superposing the additional information layers on the existing senses, e.g. “seeing” a “clickable” line number floating in the air beside the bus.

Physical location of human beings will become much less important. Even now the Internet, cellular phones and video conferences are reducing its importance. In the future this trend will continue much further. The culmination is mind-body separation. One can imagine huge immobile “brains” banked somewhere, operating bodies in distant physical locations and purely “virtual” realms.

Many activities don’t require engaging the “direct” physical reality but allow working with “conventionally encoded” higher abstraction layers. These include

  • “Software” engineering, that is the design of technology operating within these higher abstraction layers
  • Communication
  • Theoretical learning and research
  • “Virtual” Art

Other activities require interaction with “lower abstraction layers”. These include

  • “Hardware” engineering
  • Experimental science
  • Colonization of space
  • “Physical” Art

Eventually the “synergy” of the “virtual IO” (the exchange of “conventionally encoded” information) will make the “Internet” into a shared memory of humankind. It will store actual memories, experiences, thoughts, knowledge etc. which will be downloadable and uploadable. This will result in individuality becoming “fuzzy”, up to the point of almost utter disappearance. The entire population of a given planet will merge into a “multimind”. The multimind will be a continuous array of memories, desires etc. connected by associations as is the mind of an individual today. The mind of a modern individual contains a “stream of consciousness” (“CPU”) continuously traversing the memories (“RAM”), loading them into the short term memory (“cache”), connecting them to IO and transforming them. In an analogous way, the multimind will contain many streams of consciousness processing the shared memory in parallel, each with its own cache and connected to specific IOs at any given moment.

I call this multimind a “Solaris”, to honour Stanislaw Lem. The Solaris is a single entity uniting the whole technosphere, ecosphere and consciousness of an entire planet.

Due to the disappearance of individuality, property will lose its meaning. Birth and death will also become obsolete, except for the birth of new Solarises through space colonization.

It is also possible that the Solaris will contain “quasi-individuals”: tightly bound chunks of memories, desires and ideas. Such
quasi-individuals will dynamically form out of the Solaris and dissolve back into it.

Thursday, 25 June 2009

Ask Squark: Mirror

This is a part of the Ask Squark series.

Thx Assaf for submitting the first question to “Ask Squark”!

Question:

Why does the mirror reverse left and right but not up and down?

Answer:

Firstly, there is a hidden assumption here that the mirror is hung conventionally, i.e. vertically. A horizontal mirror (like in Hotel California) does reverse up and down.

It might seem that a vertical mirror displays some sort of asymmetry: left and right (which are perceived as horizontal directions) are reversed, whereas up and down (the vertical directions) are not reversed. However, let me assure you that there is perfect rotational symmetry with respect to the axis orthogonal to the mirror plane. The apparent paradox is mostly semantic.

Let us remember how an ideal planar mirror works. Real mirrors are not ideal (alas), but that's irrelevant for the discussion.

Suppose the mirror lies in the plane M. Consider a point P in space. The mirror image of P is the point P’ satisfying the following criteria:

  • The line PP’ is orthogonal to M
  • The distance of P to M is equal to the distance of P’ to M

In reality, the mirror is usually reflective from one side only, hence we have to assume P lies in one of the two half-spaces defined by M. However, this is inessential.

A direction in space can be specified by two points: the beginning and the end of a vector. Thus, given a direction v = PQ, the mirror image direction is v’ = P’Q’ where P’ is the mirror image of P and Q’ is the mirror image of Q. It is easy to see that when v is parallel to M, v’ = v but when v is orthogonal to M, v’ = –v.* For example, if M is the plane spanned by the up-down and north-south directions, the mirror preserves up, down, north and south but reverses east and west.
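
In coordinates this is a one-line formula: for a mirror plane M through the origin with unit normal n, the mirror image of a direction v is v - 2 (v . n) n. A minimal numpy sketch matching the example above:

    import numpy as np

    def mirror_image(v, n):
        # reflect the direction v in a plane M with normal n
        n = n / np.linalg.norm(n)
        return v - 2 * np.dot(v, n) * n

    east = np.array([1.0, 0.0, 0.0])    # orthogonal to M
    north = np.array([0.0, 1.0, 0.0])   # parallel to M
    up = np.array([0.0, 0.0, 1.0])      # parallel to M

    print(mirror_image(north, east))    # north is preserved: [0. 1. 0.]
    print(mirror_image(up, east))       # up is preserved: [0. 0. 1.]
    print(mirror_image(east, east))     # east is reversed: [-1. 0. 0.]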

The illustration is courtesy of SurDin.

But what about left and right? These notions are more complicated. While north, south, east, west, up and down are absolute directions, left and right are relative. For instance, if you face another person, her right is your left and vice versa.

Mathematically, we can describe left and right as follows. Consider ordered triples of mutually orthogonal vectors of unit length
(u, v, w). It can be shown that such triples fall into two classes R and L such that

  • A triple in R can be rotated into any other triple in R.
  • A triple in L can be rotated into any other triple in L.
  • A triple in R cannot be rotated into any triple in L.
  • A triple in L cannot be rotated into any triple in R.

Suppose a Cartesian coordinate system S is given. Then any triple (u, v, w) can be decomposed into components with respect to S and represented by a 3 x 3 matrix. Then, for some triples the determinant of this matrix is 1 whereas for others it is –1. These are R and L.

We can derive the following interesting property of R and L. Suppose (u, v, w) is a triple in R (L). Then (-u, v, w), (u, –v, w) and
(u, v, –w) are triples in L (R). It follows that (-u, –v, w), (-u, v, –w) and (u, –v, –w) are triples in R (L). Also, (-u, –v, –w) is a triple in L (R). This property follows, for instance, from the fact that the sign of the determinant is reversed when we reverse the sign of one of the matrix columns.
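
A small numpy sketch of this classification: put u, v, w as the columns of a 3 x 3 matrix and look at the sign of its determinant (which class gets called R is a matter of convention):

    import numpy as np

    def orientation(u, v, w):
        # +1 for one class (say R), -1 for the other (L)
        return np.sign(np.linalg.det(np.column_stack([u, v, w])))

    u = np.array([1.0, 0.0, 0.0])
    v = np.array([0.0, 1.0, 0.0])
    w = np.array([0.0, 0.0, 1.0])

    print(orientation(u, v, w))     #  1.0
    print(orientation(-u, v, w))    # -1.0: one sign flip switches the class
    print(orientation(-u, -v, w))   #  1.0: two sign flips switch it back
    print(orientation(-u, -v, -w))  # -1.0: three sign flips switch it again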

The relation to the familiar notions of right and left is as follows. Consider a person X and (u, v, w) a triple as above. Suppose u is a vector pointing in the direction from X’s legs to her head. Suppose v is a vector pointing in the direction from X’s back to her front. Then w points either to X’s right or to X’s left, according to the class to which (u, v, w) belongs!

Consider (u, v, w) a triple as above. Consider (u’, v’, w’) the mirror image triple. That is, u’ is the mirror image of u, v’ is the mirror image of v and w’ is the mirror image of w. Then, if (u, v, w) is in R then (u’, v’, w’) is in L and vice versa. In this sense, the mirror reverses left and right. For instance, suppose u and v are parallel to M. Then w is orthogonal to M. We get u’ = u, v’ = v, w’ = –w. Thus (u’, v’, w’) = (u, v, –w) belongs to the class opposite to that of
(u, v, w). The general case can be proven e.g. using determinants: if we choose S such that M is parallel to two of the axes, mirror imaging amounts to changing the sign of one of the matrix rows.

The illustration is courtesy of SurDin.

* Remember that two vectors v = AB and w = CD are considered equal when

  • The line AB is parallel to the line CD (or A = B and C = D which means both vectors are equal to the 0 vector).
  • The line AC is parallel to the line BD (or A = C and B = D which means both vectors coincide in the trivial sense).

Also, remember that, by definition, if v = AB then –v = BA.

Thursday, 18 June 2009

Free will, ethics and determinism

Lev has recently brought up the question of free will vs. determinism. I have spent some time thinking about these issues and came up with the ideas I want to lay out here.

The problem stems from the desire to define ethics. What is ethics? From my point of view, ethics is a set of rules describing how ethical a given behaviour is in a given situation. The definition might seem somewhat circular, but let's put this aside. The important aspect is that ethics is something allowing comparison of different choices and marking some as "better" and some as "worse", no matter in what sense.
In order to discuss ethics, we must understand the set of choices available to a given person at a given situation. The fact that choice exists at all relies on the notion of "free will" which, on the surface, contradicts determinism.
What is determinism? Determinism means that given complete knowledge of the initial conditions it is possible to predict what would happen at any time in the future. This is a principle that holds in classical physics. The real world is better described by quantum physics, in which the situation is more complicated, but let's leave this for later.
In practice, such predictions are limited by 3 factors:
  1. Incomplete knowledge of the initial conditions. Indeed it is difficult to know everything about the entire universe.
  2. Incomplete knowledge of the laws of nature. Our model of reality is imperfect, and in my opinion, will always remain so.
  3. Limited information processing power. Even if you know the initial conditions and the laws of nature sufficiently precisely to give a prediction with the accuracy you need, doing so might require a complicated computation which would take lots of "CPU clock ticks" and perhaps lots of "RAM" to complete.
Thus, even if a "bird's-eye view" observer knows precisely what person X is going to do, X doesn't know it. Moreover, I claim it is impossible for X to know it. That is, we have the following principle of self-unpredictability:

An intelligent being X can never have a future-prediction capacity, accounting for the 3 factors above, that would allow her to predict her own behaviour.

If the principle is violated, we would get something like the grandfather paradox: if X knows she is going to do Y, what prevents her from doing Z which is different from Y? The principle also makes sense physically, as far as my intuition goes: to produce better predictions we need a more powerful "brain" which would have to be more complicated and thus more difficult to predict. It's sort of a "bootstrap".
To give an example, suppose X is a human. Certainly X is not able to use her knowledge of biology, chemistry, quantum physics and what else to predict the workings of her own brain. Now, suppose X recruits a computer Y to help her. It might appear that now her task is more realistic. However, by using the computer she made it a part of the system. That is, the required prediction now involves the joint dynamics of X + Y and thus remains out of reach.
It is fascinating to me whether the self-unpredictability principle can be reformulated as a theorem in physics about abstract information-processing systems.
Thus, the space of possible choices of X can be defined to be the space of things X can do as far as X can tell. Ethics would have to operate on this space.

It might appear that the indeterminism of quantum mechanics provides some kind of an alternative solution to the problem. However, I deem it is not so. The "freedom" allowed for by quantum mechanics is purely random. It is not consistent with the sort of choice that involves ethical judgement, which is the sort of choice we are targeting here.
Moreover, consider a person X observed by a superintelligent being Y. Y can predict X's behaviour up to quantum indeterminism. Y knows X is going to do A with probability
pA = 1 - 1e-100 and B with probability pB = 1e-100. Thus B is a highly unlikely choice. However, from the point of view of X both of the choices might be equally legitimate, a priori (before ethical judgement is applied). Moreover, the small probability pB might stem from something like lightning striking X's head, which is an artifact completely irrelevant to the ethical dilemma at hand.

Friday, 12 June 2009

Spinors II: Isotropic Subspaces

This is a sequel.

Definition
Consider V a quadratic space. Consider W a subspace. W is called isotropic when given v, w in W arbitrary, Q(v, w) = 0.

For instance, when k = R and V is positive definite (that is, sn V = dim V) the only isotropic subspace is {0}. If V is Lorentzian (that is, sn V = dim V - 1) we have
1-dimensional isotropic subspaces: null or lightlike lines in physics talk. On the other hand, for
k = C we have:

Proposition
  • Suppose V is a complex quadratic space of dimension 2n. Then the maximal dimension of an isotropic subspace is n.
  • Suppose V is a complex quadratic space of dimension 2n + 1. Then the maximal dimension of an isotropic subspace is n.
It will be useful to introduce the following

Definition
Consider V a quadratic space over an arbitrary field k. Consider W a subspace of V. Define W^ to be {v in V | for any w in W: Q(v, w) = 0}. W^ is called the subspace orthogonal to W.

Several facts about orthogonal subspaces will be handy. Given V, W as above:
  • dim W + dim W^ = dim V
  • W^^ = W
  • Suppose W is isotropic. Then W^ contains W.
Consider V a quadratic space over an arbitrary field k. Consider W an isotropic subspace of V. Denote n = dim V, m = dim W. We have dim W^ = n - m but W^ contains W hence
dim W^ >= dim W i.e. n - m >= m and thus n >= 2m. It is therefore obvious that the maximal dimension of an isotropic subspace can be at most as in the proposition, even for k different from C. It remains to show that for k = C the bound can always be saturated.

Proof of Proposition
  • Suppose V is a complex quadratic space of dimension 2n.
    Consider e_1 ... e_2n a basis such that Q(e_i, e_j) = delta_ij. Here delta_ij is the Kronecker symbol, that is delta_ij = 1 for i = j and delta_ij = 0 for i =/= j. Define
    f_1 = e_1 + i e_2, f_2 = e_3 + i e_4 ... f_n = e_2n-1 + i e_2n. Here i is the imaginary unit, that is i = sqrt(-1). It is easily seen that {f_j} span an n-dimensional isotropic subspace.
  • Suppose V is a complex quadratic space of dimension 2n + 1.
    Consider e_1 ... e_2n, e_2n+1 a basis such that Q(e_i, e_j) = delta_ij. Define
    f_1 = e_1 + i e_2, f_2 = e_3 + i e_4 ... f_n = e_2n-1 + i e_2n. It is easily seen that {f_j} span an n-dimensional isotropic subspace.
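
The isotropy is easy to check numerically. A small numpy sketch for the even-dimensional case; note that Q(u, v) = sum_i u_i v_i here is bilinear, without complex conjugation:

    import numpy as np

    n = 3                        # dim V = 2n = 6
    e = np.eye(2 * n)            # orthonormal basis: Q(e_i, e_j) = delta_ij

    # f_j = e_{2j-1} + i e_{2j} (with 0-based indices: rows 2j and 2j + 1)
    f = np.array([e[2 * j] + 1j * e[2 * j + 1] for j in range(n)])

    gram = f @ f.T               # the matrix of Q(f_j, f_k), no conjugation
    print(np.allclose(gram, 0))  # True: the f_j span an isotropic subspace
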
Definition
Consider V a complex quadratic space. A subspace W of V is called a maximal isotropic subspace when it is of maximal possible dimension. That is:
  • For dim V = 2n, we require dim W = n.
  • For dim V = 2n + 1, we require dim W = n.
Definition
Consider V a quadratic space over the field k. Consider R: V -> V an operator. R is called orthogonal when for any v, w in V we have Q(Rv, Rw) = Q(v, w). The orthogonal operators are the automorphisms of V, that is, isomorphisms of V with itself. The set of all orthogonal operators is denoted O(V).

Proposition
Consider V a complex quadratic space, U and W two maximal isotropic subspaces. Then, there exists R in O(V) s.t. R(U) = W.

This means that all maximal isotropic subspaces of a given space are essentially the same. However, we'll see in the sequels that for dim V = 2n + 1, the space of maximal isotropic subspaces has 1 connected component, whereas for dim V = 2n there are 2 connected components. For now, I won't explain what I mean by "space" and what are "connected components". These are some basic concepts of topology which I hope to explain in the sequels.

Sunday, 7 June 2009

Autoevolution II

This is a sequel.

Genetic engineering is no longer science fiction. We are rapidly approaching the era when we will modify the genetic code of various living organisms and our own genetic code in massive amounts, up to a point when such modifications become a centerpiece of technology.
In 1000 years the impact of these modifications will be so great that our descendants will have little in common with the original homo sapiens sapiens. It is impossible to tell what they will be like precisely (that is, even more impossible than my other ambitions in this series). However, I will try to guess some of their general features.
  • Survival
    Some specimens will be adapted to extreme conditions, such as extreme temperatures, extreme pressures, ionizing radiation, poison etc. In particular, when space colonization commences, "humans" adapted to the respective conditions will be created. That is, we would have Martians, Europans etc. historically originating from Earth humans.
  • 6th sense, 7th sense...
    Some specimens will have sensory perception very different from what we are used to. For instance they might have vision with more colour channels and / or in different areas of the spectrum.
  • Communication
    Speech will be replaced by more advanced modes of communication, perhaps something like a direct mind-to-mind link. This will increase the bandwidth and reduce the error rate considerably. Among other things, this might lead to much more efficient resolution of disagreements up to the point when "irreconcilable differences" become very rare.
  • Intelligence
    Eventually, not only the "body" but also the "mind" will be enhanced. This will start with improved memory and faster thought and end up with capabilities of completely different magnitude. In my opinion, the most critical mental capacity is the ability to hold many things in one's mind at once: a sort of "cache memory". It is the enhancement of this capacity which would lead to the most radical development of intelligence.
  • Specialization
    Different specimens will be adapted to different professions, to an extent much greater than what exists today (up to the point when different professions become virtually different species). In a way, this kind of specialization already exists: division into males and females. However, in the future there will be many more kinds (in ways unrelated to the reproductive cycle).
  • Body-mind separation
    Eventually the brain or mind which carries the information processing function will be separated from the body which carries the input/output functions. It will be possible for a given person (mind) to use a number of different bodies suited for different tasks at different times. Loosely speaking, one will be able to change bodies the way one now changes clothes or cars. Thus some of the traits mentioned above (such as survival in extreme conditions and enhanced senses) will apply to particular bodies rather than particular persons.
At first, there will exist two radically different "technologies":
  • The "conventional" technology we know today, based on semiconductors, fiber optics, lasers etc. At some point this will include some sort of "nanotechnology". The advantage of this technology is that we understand and control it perfectly, since we created it "from the ground up" (except the laws of nature, of course, which are immutable).
  • "Organic" technology employing what we now call "genetic engineering". The advantage of this technology is that it is "more sophisticated" than the ordinary: living organisms do things we yet only dream doing artificially. The disadvantage is the imperfect understanding and control we have over it.
However, eventually they will mix and become one or several technologies descendant from both. These technologies will unite the advantages of both kinds. Thus, the clear distinction between "ourselves" and the "machines" will be erased. At the same time, the distinction between the "technosphere" and the "biosphere" will also be erased. That is, instead of two different environments (the "wild" and our artificially created environment) there will be only one. This new environment will be at least as sophisticated as the ecology existing today in the wild, while being under our conscious control.
As an intermediate stage, we will create much more efficient modes of brain-computer communication. Humans would have computer "coprocessors" wired into their brains and connected to the internet.
There is another essential difference between conventional technology and life. The machines we create are usually "clones" made to resemble a given prototype as much as possible. However, living organisms, even if members of the same species, are always very different from each other (unless, of course, they are the clones of a single ancestor; such groups, however, form only tiny fractions of a given species). It is my suspicion that the second scheme is much more efficient, and we are only bound to the first scheme because of technical limitations. Thus most of the "machines" of the future will resemble living organisms rather than modern machines in this respect.

Friday, 29 May 2009

Spinors I: Clifford Algebra

This post is meant to be the first in a series about spinors, exceptional isomorphisms, twistors and supersymmetry. My interest in an in-depth investigation of spinors was partially inspired by Yasha; in particular I learnt the annihilator approach from him (it will appear in the sequels). I will try to assume little prior knowledge except linear algebra. The emphasis will be on mathematics, at least for a while, so if you're interested in this purely from a physics perspective then you had better already know the physical motivation / applications of this stuff.

The first object we'll need is the Clifford algebra. Fix a field k (in this series it will always be either the set of real numbers R or the set of complex numbers C).

Algebras

I'll start with a quick reminder of what an algebra is. Suppose A is a vector space over k. Suppose further that a mapping m: A x A -> A is given. For convenience's sake, given a, b in A we denote m(a, b) by ab and call m multiplication. A is called a unital associative k-algebra (just k-algebra in the sequel) when the following conditions hold:
  • m is bilinear (i.e. linear in each of the arguments separately). In details, it means that
    Given a, b, c in A: (a + b)c = ac + bc additivity in the 1st argument / left distributivity
    Given x in k and a, b in A: (xa)b = x(ab) homogeneity in the 1st argument
    Given a, b, c in A: c(a + b) = ca + cb additivity in the 2nd argument / right distributivity
    Given x in k and a, b in A: a(xb) = x(ab) homogeneity in the 2nd argument
  • Given a, b, c in A: (ab)c = a(bc) associativity
  • There exists an element "1" in A such that for any a in A: a1 = 1a = a unit
An algebra A is called commutative when given a, b in A we have ab = ba.

Examples
  • Consider V a k-vector space. Consider End(V) the set of all endomorphisms of V, i.e., linear operators V -> V. End(V) is a k-algebra where multiplication corresponds to composition of linear operators. For dim V finite, dim End(V) = (dim V)^2.
  • Consider V a k-vector space, W a subspace. Consider End(V, W) the set of endomorphisms of V leaving W invariant (non-standard notation). That is, given a in End(V, W), w in W we have aw also in W. End(V, W) is an algebra. It is a subalgebra of End(V), that is, a linear subspace closed under multiplication. For dim V finite, in a basis adapted to W these are block upper-triangular matrices, so dim End(V, W) = (dim V)^2 - dim W (dim V - dim W).
  • Fix n a natural number. The set of n x n matrices with coefficients in k forms an algebra: Mat(n, k). It is isomorphic to End(V) for dim V = n. Obviously, dim Mat(n, k) = n^2.
  • Fix n a natural number. The set of upper-triangular n x n matrices with coefficients in k forms an algebra: UT(n, k) (non-standard notation). We have dim UT(n, k) = n (n + 1) / 2.
  • Consider k[x] the set of polynomials with coefficients in k in the variable x. k[x] is a k-algebra. It is infinite-dimensional. It is commutative.
  • Fix n a natural number. We have k^n the set of column vectors of size n with coefficients in k. We can define multiplication in k^n by multiplying each vector entry separately. It makes k^n into an algebra. Obviously dim k^n = n. k^n is commutative.
Ideals

A subset I of A is called a right ideal when the following conditions hold:
  • I is a linear subspace of A
  • Given a in A, b in I, ba is also in I
Consider S an arbitrary subset of A. Denote SA to be the collection of all elements of A of the form

s1 a1 + s2 a2 + ... + sn an

where:
s1, s2 ... sn are elements of S
a1, a2 ... an are elements of A

Claim: SA is a right ideal.
SA is called the right ideal of A generated by S.

A subset I of A is called a left ideal when the following conditions hold:
  • I is a linear subspace of A
  • Given a in A, b in I, ab is also in I
Consider S an arbitrary subset of A. Denote AS to be the collection of all elements of A of the form

a1 s1 + a2 s2 + ... + an sn

where:
s1, s2 ... sn are elements of S
a1, a2 ... an are elements of A

Claim: AS is a left ideal.
AS is called the left ideal of A generated by S.

A subset I of A is called a two-sided ideal when it is simultaneously a left ideal and a right ideal.
Consider S an arbitrary subset of A. Denote ASA to be the collection of all elements of A of the form

a1 s1 b1 + a2 s2 b2 + ... + an sn bn

where:
s1, s2 ... sn are elements of S
a1, a2 ... an are elements of A
b1, b2 ... bn are elements of A

Claim: ASA is a two-sided ideal.
ASA is called the two-sided ideal of A generated by S.

Claim: Suppose A is a commutative algebra. Then a subset of A is a left ideal if and only if it is a right ideal if and only if it is a two-sided ideal.
Thus for a commutative algebra all three notions coincide hence we speak simply of ideals.

Examples
  • Consider V a k-vector space. Consider the algebra End(V). Consider W a subspace of V. Define I = {a in End(V) | Im a lies in W}. I is a right ideal of End(V). Define
    J = {a in End(V) | Ker a contains W}. J is a left ideal in End(V).
  • Fix n a natural number. Consider the algebra Mat(n, k). Fix m <= n another natural number. Define I = {a in Mat(n, k) | the first m rows are zero}. I is a right ideal of Mat(n, k). Define J = {a in Mat(n, k) | the first m columns are zero}. J is a left ideal of Mat(n, k).
  • Fix n a natural number. Consider the algebra UT(n, k). Fix m <= n another natural number. Define J = {a in UT(n, k) | the first m columns are zero}. J is a two-sided ideal of UT(n, k).
  • Consider the algebra k[x]. Consider S a finite subset of k.
    Define I = {p in k[x] | for any a in S: p(a) = 0}. I is an ideal of k[x]. It is generated by the polynomials {x - a} where a traverses elements of S. Now fix a in k and n a natural number. Define
    J = {p in k[x] | for any m natural with m <= n: p^(m)(a) = 0}. Here p^(m) denotes the m-th derivative. J is an ideal of k[x]. It is generated by the single polynomial (x - a)^(n + 1).
  • Fix n a natural number. Consider the algebra k^n. Consider m <= n another natural number. Define I = {v in k^n | the first m entries of v are zero}. I is an ideal.
Quotient Algebra

Consider A an algebra and I a two-sided ideal. Then we may take the vector space quotient A/I. That is, we consider the set of equivalence classes of A under the following equivalence relation: Given a, b in A they are equivalent when a - b is in I.
It is easy to see the operation of multiplication in A defines an operation of multiplication in A/I as well, that is, makes A/I into an algebra in its own right. For this to work, it is crucial that I is a two-sided ideal. A/I is called the quotient algebra of A by I.
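
As a concrete illustration (my example, not from the text): take A = R[x] and I the ideal generated by x^2 + 1. Then A/I is nothing but the complex numbers, and multiplication in the quotient is "multiply, then reduce using x^2 = -1":

    def mul_mod(p, q):
        # multiply a + b x and c + d x in R[x] / (x^2 + 1), where x^2 = -1
        (a, b), (c, d) = p, q
        return (a * c - b * d, a * d + b * c)

    print(mul_mod((0, 1), (0, 1)))  # (-1, 0): x * x = -1, just like i^2 = -1
    print(mul_mod((1, 2), (3, 4)))  # (-5, 10), matching (1 + 2i)(3 + 4i)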

Examples
  • Fix n a natural number. Consider the algebra UT(n, k). Fix m <= n another natural number. Define J = {a in UT(n, k) | the first m columns are zero}. Then UT(n, k) / J is naturally isomorphic to UT(m, k).
  • Consider the algebra k[x]. Consider S a finite subset of k.
    Define I = {p in k[x] | for any a in S: p(a) = 0}. Then k[x] / I is naturally isomorphic to k^n where n is the number of elements of S.
  • Fix n a natural number. Consider the algebra k^n. Consider m <= n another natural number. Define I = {v in k^n | the first m entries of v are zero}. Then k^n / I is naturally isomorphic to k^m.
Generators and Relations

One of the simplest ways to construct an algebra is using generators and relations. This is done as follows. Suppose G is an arbitrary set (possibly infinite). Consider F = k<G>, the algebra of non-commutative polynomials with coefficients in k and variables in G. For G non-empty this algebra is infinite-dimensional. It is also called the free algebra over G.
Now take R an arbitrary subset of F. We have I = FRF a two-sided ideal. We obtain the algebra A = F / I. A is called the algebra generated by G with relations R. In this context, elements of G are called generators and elements of R relations. It is often convenient to define relations using equations. For example, suppose f, g, h are elements of G and x, y, z are elements of k. Then the relation

xf^2 = ygh + zhg

means that the element xf^2 - ygh - zhg of F is in R.

Quadratic Spaces

Consider V a vector space over k. V is called a quadratic space when it is equipped with a symmetric bilinear non-degenerate form Q. A quick reminder of what that means:
  • Q is mapping V x V -> k
  • Q is bilinear (i.e. linear in each of the arguments separately):
    Given u, v, w in V: Q(u + v, w) = Q(u, w) + Q(v, w) additivity in the 1st argument
    Given x in k and u, v in V: Q(xu, v) = xQ(u, v) homogeneity in the 1st argument
    Given u, v, w in V: Q(w, u + v) = Q(w, u) + Q(w, v) additivity in the 2nd argument
    Given x in k and u, v in V: Q(u, xv) = xQ(u, v) homogeneity in the 2nd argument
  • Q is symmetric, that is, given u, v in V: Q(u, v) = Q(v, u)
  • Q is non-degenerate: Suppose u in V is such that for any v in V we have Q(u, v) = 0. Then u = 0.
Two quadratic spaces V, W with corresponding forms Q, R are called isomorphic when there exists a linear mapping i: V -> W such that
  • i is injective: Given u, v in V, i(u) = i(v) implies u = v. Equivalently, Given u in V, i(u) = 0 implies u = 0.
  • i is surjective: Given w in W, there exists v in V such that i(v) = w.
  • i preserves the quadratic structure, that is, given u, v in V: Q(u, v) = R(i(u), i(v))
When the conditions hold, the mapping i is called an isomorphism between V and W. Two isomorphic quadratic spaces are "essentially the same".
In the sequel, we'll only care about finite-dimensional quadratic spaces.

Proposition:
  1. Suppose V, W are quadratic spaces over k = C. Then V is isomorphic to W if and only if
    dim V = dim W.
  2. Suppose V is a quadratic space over k = C of dimension n. Then there exists a basis
    e1, e2 ... en of V such that:
    Q(ei, ei) = 1
    Q(ei, ej) = 0 for i =/= j
Proposition:
  1. Suppose V is a quadratic space over k = R of dimension n. Then there exists a natural number s and a basis of V e1, e2 ... en such that:
    For i <= s: Q(ei, ei) = 1
    For i > s: Q(ei, ei) = -1
    Q(ei, ej) = 0 for i =/= j
    We call s the "s-number" of V and denote it sn V (this is not standard terminology).
  2. (Trivial) Suppose V, W are quadratic spaces over k = R. Then V is isomorphic to W if and only if dim V = dim W, sn V = sn W.
Clifford Algebra

Fix V a vector space. The tensor algebra T(V) is the algebra generated by V with the following relations:
  • Given u, v, w in V with u + v = w, we take u + v = w to be a relation.
  • Given u, v in V, x in k with xu = v we take xu = v to be a relation.
T(V) is infinite-dimensional.

Suppose V is a quadratic space. The Clifford algebra C(V) is the algebra generated by V with the following relations:
  • The relations we used for T(V).
  • Given u, v in V: uv + vu = -2Q(u, v)
Claim: dim C(V) = 2^dim V
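
Here is a minimal computational sketch of this claim (my own construction, using the convention uv + vu = -2Q(u, v) above). Choose an orthonormal basis, so that e_i e_j = -e_j e_i for i =/= j and e_i^2 = -Q(e_i, e_i), and represent the basis monomial e_i_1 ... e_i_m with i_1 < ... < i_m as a sorted tuple of indices. The basis monomials correspond to subsets of {1 ... n}, which gives the claimed dimension 2^n:

    def clifford_mul(a, b, q):
        # product of basis monomials a, b (sorted index tuples) in C(V),
        # where q[i] = Q(e_i, e_i); uses e_i e_j = -e_j e_i and e_i^2 = -q[i]
        sign, seq = 1, list(a)
        for x in b:
            pos = len(seq)
            while pos > 0 and seq[pos - 1] > x:
                pos -= 1
                sign = -sign            # move x left past a larger index
            if pos > 0 and seq[pos - 1] == x:
                seq.pop(pos - 1)        # contract e_x e_x = -q[x]
                sign *= -q[x]
            else:
                seq.insert(pos, x)
        return sign, tuple(seq)

    # dim V = 2, sn V = 2 (q = [1, 1]): C(V) should behave like the quaternions
    q = [1, 1]
    print(clifford_mul((0,), (0,), q))      # (-1, ()): e0^2 = -1
    print(clifford_mul((1,), (0,), q))      # (-1, (0, 1)): e1 e0 = -e0 e1
    print(clifford_mul((0, 1), (0, 1), q))  # (-1, ()): (e0 e1)^2 = -1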

Examples

We use k = R in these examples.
  • Suppose dim V = 0. Then C(V) is isomorphic to R.
  • Suppose dim V = 1, sn V = 1. Then C(V) is isomorphic to C.
  • Suppose dim V = 2, sn V = 2. Then C(V) is isomorphic to the quaternion algebra H.
  • For dim V > 2, C(V) is no longer a division algebra, that is, it doesn't have an inverse for each non-zero element.

Friday, 20 March 2009

Humanity in 1000 years I: autoevolution

It is probably absurd trying to predict what will happen to humanity in 1000 years. It is likely that even the smartest person living in 1000 C.E. would find it difficult to imagine the era we live in now.
Maybe armed with rationalism and the scientific method we are in a slightly better position to do it now. On the other hand, it is possible that the methods of thought or even the very apparatus of the mind will make so much progress that our descendants will create things we would never be able to dream of. In fact, I am (somewhat paradoxically) about to claim precisely that. At any rate, it appears almost inevitable that the increase in the body of human knowledge will lead to incredible changes in human society and the human way of life. Unless, of course, some terrible catastrophe or a new dark age prevents it.
If so, the task of imagining humanity in the year 3000 C.E. appears almost hopeless. Nevertheless, I still think it is worthwhile. Why? Because it is an amusing thought experiment. Because thinking about the future may change the future. Because trying to stretch our ability to predict or analyse to the limit may teach us something. Even if it won't, it is still bound to be fun :-) Let me give it a go, then!

I warn beforehand that my view of the future is somewhat optimistic. I am assuming humanity will not be destroyed by nuclear war, alien invasion, asteroid impact or any other calamity. I am also assuming scientific progress is not going to stop or reverse as a result of such an event. My entire "prediction" is something of a mixture between what I believe will happen and what I hope will happen.

Autoevolution

Charles Darwin taught us that humans are not essentially different from any other animal. Like any other animal, or indeed, any other living creature, we gradually evolved from other species over vast periods of time. The governing principle of that process is natural selection. The principle is so obvious it is almost a tautology: the specimens most adapted to survival are more likely to survive, so each generation is adapted better than the previous one. Add mutations into the mix and we get evolution.

How long does evolution continue? As far as we know it is indefinite. External conditions change, different species compete and, most importantly from my point of view, nature never reaches perfection. There are always improvements to be made.

Improvements? Isn't homo sapiens sapiens perfect? Isn't it the peak of creation?

What on Earth gave us the arrogance to think that? Oh, sorry, I know what it is: evolution ;-)

Homo sapiens sapiens can and should be improved. There's no reason to think we can't be more healthy, more enduring, more intelligent. The problem is, we are different from other animals after all. We change our environment, adapting it to our needs. This process is much faster than the self-adaptation resulting from biological evolution. The result: natural selection, the driving force of biological evolution, is no longer valid for this species.

Not only are we (apparently) no longer becoming better, we are probably becoming worse. Random mutations introduce noise into our genetic code. At the same time, in a modern society (I mean the developed countries) "weak" individuals are not allowed to perish (which is a good thing!) and have no problem spreading their genes. Anyone short of a Nazi would agree that the situation in which each individual of society is protected and able to satisfy her basic needs is a healthy one, from a moral standpoint. The downside is that in the long run the human race faces physical and intellectual degeneration (I recommend the amusing comedy "Idiocracy" on this subject precisely).

Luckily, this threat, created, in a sense, by modern technology, finds its solution in the same source. In recent decades, the field of genetics experienced vast progress. The extent of the progress is such that genetic engineering has become possible. Now, we are only making our first steps in this direction. However, we are discussing a problem that will only become relevant in the very long run and there is little doubt that by that time our ability to manipulate the genetic codes of living beings including ourselves will be perfected.

Thus, genetic engineering of human beings appears to me inevitable in order to avoid degeneration. However, we can and should go beyond this and apply genetic engineering in order to improve ourselves rather than merely preserve ourselves at the same level.

OK, I thought this was going to be one post, but it would take me ages to complete at this rate. So I'm posting the beginning, to be (hopefully) continued...

Friday, 13 March 2009

Regular Polytopes and Tilings

A few random thoughts on regular polytopes and tilings I wanted to share.

2D

Given m, n >= 3, we can try to build a 2-dimensional tiling where m regular n-gons meet at each vertex. The angle of a regular n-gon is alpha = (1 - 2/n) pi, and we have the following three cases (a short enumeration sketch follows the list):
  1. m alpha < 2 pi. This yields a regular polyhedron (a Platonic solid). There are 5 cases like that:
    a. n = 3, m = 3: tetrahedron, a self-dual polyhedron, the 3-dimensional simplex
    b. n = 3, m = 4: octahedron, the 3-dimensional cross-polytope
    c. n = 3, m = 5: icosahedron
    d. n = 4, m = 4: cube, dual to octahedron, the 3-dimensional hypercube
    e. n = 5, m = 3: dodecahedron, dual to icosahedron
    Each of those defines a finite subgroup of SO(3), the 3-dimensional rotation group and of O(3) the 3-dimensional rotation-and-reflection group. These subgroups are, of course, the symmetry groups of the polyhedra.
  2. m alpha = 2 pi. This yields a regular tiling of the Euclidean plane. There are 3 cases like that:
    a. n = 3, m = 6
    b. n = 4, m = 4: a self-dual tiling
    c. n = 6, m = 3: dual to a
    Each of those defines a discrete subgroup of the group of isometries (rotations, translations and reflections) of the Euclidean plane. Alternatively we can use orientation-preserving isometries (rotations and translations only).
  3. m alpha > 2 pi. This yields a regular tiling of the hyperbolic plane. There's an infinite number of cases. Each of them defines a discrete subgroup of SO(2, 1). The latter group has various geometric realizations:
    a. Orientation-preserving isometries of the hyperbolic plane.
    b. Lorentz transformations of special relativity in 3-dimensional spacetime (2 space dimensions and 1 time dimension).
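
Since alpha = (1 - 2/n) pi, the condition m alpha < 2 pi is equivalent to the integer inequality m(n - 2) < 2n, so the whole classification can be enumerated exactly. A short sketch:

    def classify(n, m):
        # compare m alpha with 2 pi via the equivalent integer comparison
        # m (n - 2) vs 2 n, since alpha = (1 - 2/n) pi
        t = m * (n - 2)
        if t < 2 * n:
            return "spherical: a regular polyhedron"
        if t == 2 * n:
            return "a regular tiling of the Euclidean plane"
        return "a regular tiling of the hyperbolic plane"

    for n in range(3, 8):
        for m in range(3, 8):
            print((n, m), classify(n, m))
    # spherical: (3,3) (3,4) (3,5) (4,3) (5,3) -- the 5 Platonic solids
    # Euclidean: (3,6) (4,4) (6,3) -- the 3 regular tilings of the plane
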
3D

Given A, B regular polyhedra, we can try to build a 3-dimensional tiling where #{faces of B} A-polyhedra meet at a B-type vertex. What do I mean by a B-type vertex? Imagine the vertex being in the center O of a B-polyhedron Y. Fix a face F of
Y. F corresponds to an A-polyhedron X of the tiling. The lines passing through O and the vertices of F correspond to edges of X.
This wouldn't work for any A, B. For purely combinatorial reasons, we need

#{faces meeting at a vertex of A} = #{sides of a face of B}

Geometrically, we again have three cases, depending on
  • alpha, the dihedral angle of A, that is, the angle between two adjacent faces.
  • m, the number of faces meeting at a vertex of B.
The three cases are (a small computational sketch follows the list):
  1. m alpha < 2 pi. This yields a regular 4-dimensional polytope (a polychoron). There are 6 cases like that:
    a. A = tetrahedron, B = tetrahedron: pentachoron, a self-dual polychoron. It is the 4-dimensional simplex.
    b. A = tetrahedron, B = octahedron: hexadecachoron. It is the 4-dimensional cross-polytope.
    c. A = tetrahedron, B = icosahedron: hexacosichoron.
    d. A = cube, B = tetrahedron: tesseract, dual to the hexadecachoron. It is the 4-dimensional hypercube.
    e. A = octahedron, B = cube: icositetrachoron, a self-dual polychoron.
    f. A = dodecahedron, B = tetrahedron: hecatonicosachoron, dual to the hexacosichoron.
    Each of those defines a finite subgroup of SO(4), the group of 4-dimensional rotations. It also defines a finite subgroup of O(4), the group of 4-dimensional rotations-and-reflections.
  2. m alpha = 2 pi. This yields a regular tiling of the Euclidean space. There is only 1 case like that: A = cube, B = octahedron. It defines a discrete subgroup of the group of isometries (rotations, translations and reflections) of the Euclidean space, or of the group of orientation-preserving isometries (no reflections).
  3. m alpha > 2 pi. This yields a regular tiling of the 3-dimensional hyperbolic space. There are 4 cases like that:
    a. A = cube, B = icosahedron
    b. A = dodecahedron, B = octahedron: dual to a
    c. A = dodecahedron, B = icosahedron: self-dual
    d. A = icosahedron, B = dodecahedron: self-dual
    Each of those defines a discrete subgroup of SO(3, 1). The latter group has various geometric realizations:
    a. The group of orientation-preserving isometries of the 3-dimensional hyperbolic space.
    b. The group of Lorentz transformations in special relativity.
    c. The group of orientation-preserving conformal transformations of the
    2-sphere.
    Realization b is intriguing since it makes me wonder whether these discrete subgroups appear in any physically-interesting situation.
    Realization c is intriguing for the following reason. Each such transformation has one or two fixed points. Consider the set of fixed points of all transformations belonging to a given discrete subgroup. This is a countable subset of the sphere, invariant under the discrete subgroup (due to conjugation). Clearly it must be either dense everywhere or a sort of fractal, but I don't know which.
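
As promised, here is a small sketch enumerating this classification from the combinatorial condition and the dihedral angles of the Platonic solids (the data table is mine; the angles are the standard closed forms):

    import math

    # for each Platonic solid: (sides of a face, faces at a vertex, dihedral angle)
    solids = {
        "tetrahedron": (3, 3, math.acos(1 / 3)),
        "cube": (4, 3, math.pi / 2),
        "octahedron": (3, 4, math.acos(-1 / 3)),
        "dodecahedron": (5, 3, math.acos(-1 / math.sqrt(5))),
        "icosahedron": (3, 5, math.acos(-math.sqrt(5) / 3)),
    }

    for A, (nA, mA, dihedral) in solids.items():
        for B, (nB, mB, _) in solids.items():
            if mA != nB:               # the combinatorial condition fails
                continue
            total = mB * dihedral      # m alpha, with m = faces at a vertex of B
            if abs(total - 2 * math.pi) < 1e-12:
                kind = "Euclidean tiling"
            elif total < 2 * math.pi:
                kind = "regular polychoron"
            else:
                kind = "hyperbolic tiling"
            print(A, "+", B, "->", kind)
    # 6 polychora, 1 Euclidean tiling (cube + octahedron), 4 hyperbolic tilings
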
4D

This time we take A, B to be regular polychorons. We want to construct a 4-dimensional tiling of A-polychorons which meet at a B-type vertex. The combinatorial compatibility condition is

vertex polyhedron of A = hyperface polyhedron of B

We have 3 geometric cases:
  1. A regular tiling of the 4-sphere, that is, a 5-dimensional regular polytope. There are 3 cases like that:
    a. A = pentachoron, B = pentachoron: the self-dual 5-dimensional simplex.
    b. A = pentachoron, B = hexadecachoron: the 5-dimensional cross-polytope.
    c. A = tesseract, B = pentachoron: the 5-dimensional hypercube, dual to b.
  2. A regular tiling of the 4-dimensional Euclidean space. One example is
    A = tesseract, B = hexadecachoron, which is self-dual.
  3. A regular tiling of the 4-dimensional hyperbolic space.
There are 7 exotic objects (that is, object special to dimension 4) among cases 2-3:
  1. A = pentachoron, B = hexacosichoron
  2. A = hexadecachoron, B = icositetrachoron
  3. A = tesseract, B = hexacosichoron
  4. A = icositetrachoron, B = tesseract: dual to 2
  5. A = hecatonicosachoron, B = pentachoron: dual to 1
  6. A = hecatonicosachoron, B = hexadecachoron: dual to 3
  7. A = hecatonicosachoron, B = hexacosichoron: self-dual
At the moment I'm not sure which of them is a tiling of 4-dimensional Euclidean space and which is a tiling of 4-dimensional hyperbolic space.

Higher dimension

We take A, B to be regular n-dimensional polytopes. We want to construct an n-dimensional tiling of A-polytopes which meet at a B-type vertex. The combinatorial compatibility condition is

n-1-dimensional vertex polytope of A = n-1-dimensional hyperface polytope of B

Once again, we have 3 geometric cases:
  1. A regular tiling of the n-sphere, that is an n+1-dimensional regular polytope. There are 3 cases like that:
    a. A = n-dimensional simplex, B = n-dimensional simplex. This is the self-dual
    n+1-dimensional simplex.
    b. A = n-dimensional hypercube, B = n-dimensional simplex. This is the n+1-dimensional hypercube.
    c. A = n-dimensional simplex, B = n-dimensional cross-polytope. This is the
    n+1-dimensional cross-polytope, dual to the n+1-dimensional hypercube.
  2. A regular tiling of the n-dimensional Euclidean space. There is only 1 case:
    A = n-dimensional hypercube, B = n-dimensional cross-polytope.
  3. A regular tiling of the n-dimensional hyperbolic space. There are none!
Indefinite signature

As we said, we construct n-dimensional tilings out of a pair A, B of n-dimensional polytopes. Now, a polytope is a tiling of the n-1-sphere. What if we take A, B to be tilings of the n-1-dimensional hyperbolic space instead? Logically, we should get a tiling of a space of Lorentzian signature, since the hyperbolic space plays the same role in Minkowski space that the sphere plays in Euclidean space. I'm not sure how such a tiling would look; it appears it would be
self-intersecting. As before, such tilings would come with different curvatures. That is, we should get
  1. Positive curvature: a tiling of de Sitter space.
  2. Zero curvature: a tiling of Minkowski space.
  3. Negative curvature: a tiling of anti-de Sitter space.
If it indeed makes sense, we would also get discrete subgroups of the symmetry groups of the aforementioned spaces. For instance, we might get a discrete subgroup of the symmetry group of Minkowski space: the Poincare group. I wonder whether there exists a physical object with this kind of symmetry group: a sort of relativistic crystal!