tag:blogger.com,1999:blog-68159318359428900992018-03-07T10:35:46.509-08:00SquarkoniumScience Philosophy SocietySquarkhttp://www.blogger.com/profile/08330918300643149836noreply@blogger.comBlogger9125tag:blogger.com,1999:blog-6815931835942890099.post-38331812153872388672009-07-16T15:01:00.001-07:002009-07-20T09:19:41.983-07:00Spinors III: Chirality of Isotropic Subspaces<p><em>This is a <a href="http://squarkonium.blogspot.com/2009/06/spinors-ii-isotropic-subspaces.html">sequel</a>.</em></p> <h1>Tensor Product</h1> <p>Tensor product is a very important, very commonplace concept in mathematics. We speak of tensor product of vector spaces and tensor product of modules, but the latter will not concern us in this post. Tensor product of vector spaces logically belongs to the realm of linear algebra, but is not covered in any linear algebra course I know. The reason is probably pedagogical: linear algebra is an introductory course, intended for people with little experience in mathematics. On the other hand, tensor product is a rather sophisticated concept in comparison. Sometimes people refer to tensors as <em>multilinear algebra</em>, but I don’t find this to be a natural category. For instance, the theory of bilinear forms is a part of linear algebra, not “bilinear algebra”.</p> <h2>Definition</h2> <p>Fix k a field. Consider V and W finite dimensional vector spaces over k. Define the <em>tensor product of V and W</em>, denoted V (x) W (pardon my ASCII), to be the vector space of bilinear mappings <br />V* x W* –> k.</p> <h2>basic properties</h2> <ul> <li>V (x) W is naturally isomorphic to W (x) V </li> <li>V (x) k is naturally isomorphic to V. This is so because V** is naturally isomorphic to V. </li> <li>V (x) {0} is naturally isomorphic to {0}, the 0-dimensional vector space. </li> <li>Consider V, U, W vector spaces. Then (V (+) U) (x) W is naturally isomorphic to V (x) W (+) U (x) W. Here “(+)” denotes direct sum of vector spaces, i.e. 
V (+) U is the vector space of ordered pairs (v, u) where v is in V and u is in U. </li> <li>(V (x) W)* is naturally isomorphic to V* (x) W*. </li> <li>V* (x) W* is naturally isomorphic to the vector space of bilinear mappings V x W –> k. </li> <li>V* (x) W is naturally isomorphic to Hom(V, W), the vector space of linear operators (=homomorphisms) V –> W. </li> <li>For dim V = 1, Hom(V, V) is also 1-dimensional and it has a special basis consisting of the identity operator. Hence <br />Hom(V, V) is naturally isomorphic to k and so is V* (x) V. Thus, for 1-dimensional spaces, the dual vector space is the inverse vector space with respect to tensor product. </li> </ul> <h2>tensor product of vectors</h2> <p>Consider v a vector in V, w a vector in W. Then we construct <br /><em>the tensor product of v and w</em>, denoted v (x) w, a vector in V (x) W. By definition, v (x) w is supposed to be a bilinear mapping <br />v (x) w: V* x W* –> k. Consider <span style="font-family:Symbol;">a</span><span style="font-family:Trebuchet MS;"> in V* and </span><span style="font-family:Symbol;">b</span><span style="font-family:Trebuchet MS;"> in W*. We define <br />(v (x) w)(<span style="font-family:Symbol;">a</span>, <span style="font-family:Symbol;">b</span>) = </span><span style="font-family:Symbol;">a</span><span style="font-family:Trebuchet MS;">(v) <span style="font-family:Symbol;">b</span>(w) (take your time to parse this expression).</span></p> <h2>basis</h2> <p>Suppose e1 … en is a basis of V, f1 … fm a basis of W.</p> <h3>Claim</h3> <p>{ei (x) fj} is a basis of V (x) W. In particular, <br />dim (V (x) W) = dim V dim W.</p> <h1>(Anti)Symmetric Tensors</h1> <p>Fix a vector space V. Consider the vector space V (x) V. Elements of V (x) V are called <em>tensors of rank 2 over V</em>. We construct a linear operator <span style="font-family:Symbol;">s</span>: V (x) V –> V (x) V as follows. Consider t an element of <br />V (x) V. 
By definition, t is a bilinear mapping V* x V* –> k. We need to describe <span style="font-family:Symbol;">s</span><span style="font-family:Trebuchet MS;">(t), also an element of V (x) V hence also a bilinear mapping V* x V* –> k. Consider <span style="font-family:Symbol;">a</span>, <span style="font-family:Symbol;">b</span> in V*. We define <br /><span style="font-family:Symbol;">s</span><span style="font-family:Trebuchet MS;">(t)</span>(<span style="font-family:Symbol;">a</span>, <span style="font-family:Symbol;">b</span>) = t(<span style="font-family:Symbol;">b, <span style="font-family:Symbol;">a</span></span></span><span style="font-family:Trebuchet MS;">).</span></p> <p><span style="font-family:Trebuchet MS;">It is easy to see V (x) V splits into a direct sum of two subspaces: <br />S^2(V) and <span style="font-family:Symbol;">L</span></span><span style="font-family:Trebuchet MS;">^2(V). S^2(V) consists of t in V (x) V such that <span style="font-family:Symbol;">s</span><span style="font-family:Trebuchet MS;">(t)</span> = t, i.e. it is the eigenspace of <span style="font-family:Symbol;">s</span></span><span style="font-family:Trebuchet MS;"> corresponding to eigenvalue 1. Elements of S^2(V) are called <em>symmetric </em>tensors of rank 2 over V. <span style="font-family:Trebuchet MS;"><span style="font-family:Symbol;">L</span></span><span style="font-family:Trebuchet MS;">^2(V) consists of t in V (x) V such that <span style="font-family:Symbol;">s</span><span style="font-family:Trebuchet MS;">(t)</span> = –t, i.e. it is the <span style="font-family:Trebuchet MS;">eigenspace of <span style="font-family:Symbol;">s</span></span><span style="font-family:Trebuchet MS;"> corresponding to eigenvalue -1. Elements of <span style="font-family:Trebuchet MS;"><span style="font-family:Symbol;">L</span></span><span style="font-family:Trebuchet MS;">^2(V)</span> are called <em>antisymmetric</em> tensors of rank 2 over V. 
The direct sum structure of V (x) V follows from the observation that <span style="font-family:Symbol;">s</span>^2 = 1.</span></span></span></p> <p>More generally, consider the vector space <br />T^k(V) := V (x) V (x) … (x) V for k copies of V. Elements of T^k(V) are called <em>tensors of rank k over V</em>. Consider <br />p a permutation of k elements (i.e. a bijection <br />{1, 2 … k} –> {1, 2 … k}). We define a linear operator <br /><span style="font-family:Symbol;">s</span>_p: T^k(V) –> T^k(V) by the condition <br /><span style="font-family:Symbol;">s</span>_p(t)(<span style="font-family:Symbol;">a_</span>1, <span style="font-family:Symbol;">a_</span>2 … <span style="font-family:Symbol;">a_</span>k) = t(<span style="font-family:Symbol;">a_</span>p(1), <span style="font-family:Symbol;">a</span>_p(2) … <span style="font-family:Symbol;">a</span>_p(k)). Here t is an element of T^k(V) and <span style="font-family:Symbol;">a_</span>1, <span style="font-family:Symbol;">a_</span>2 … <span style="font-family:Symbol;">a_</span>k are elements of V*.</p> <p>We define S^k(V) to be the subspace of T^k(V) consisting of t such that for any permutation of k elements p, <span style="font-family:Symbol;">s</span>_p(t) = t. Elements of <br />S^k(V) are called <em>symmetric</em> tensors of rank k over V. Remember that permutations can be divided into <em>odd</em> and <em>even</em>. An odd permutation is the composition of an odd number of permutations which are transpositions of two elements of {1, 2 … k}. An even permutation is the composition of an even number of such transpositions. We define <span style="font-family:Trebuchet MS;"><span style="font-family:Symbol;">L</span></span><span style="font-family:Trebuchet MS;">^k(V) to be the subspace of T^k(V) consisting of t such that for any permutation of k elements p, <br /><span style="font-family:Symbol;">s</span>_p(t) = sgn(p) t. Here sgn(p) is +1 for p even and –1 for p odd. 
</span>Elements of <span style="font-family:Trebuchet MS;"><span style="font-family:Symbol;">L</span></span><span style="font-family:Trebuchet MS;">^k(V)</span> are called <em>antisymmetric</em> tensors of rank k over V. For k > 2, the direct sum of S^k(V) and <span style="font-family:Trebuchet MS;"><span style="font-family:Symbol;">L</span></span><span style="font-family:Trebuchet MS;">^k(V) is not the entire space T^k(V).</span></p> <p>There is a natural projection operator sym: T^k(V) –> S^k(V). Consider t in T^k(V). Then, by definition, sym(t) = <span style="font-family:Trebuchet MS;"><span style="font-family:Symbol;">S</span></span><span style="font-family:Trebuchet MS;">_p <span style="font-family:Symbol;">s</span>_p(t) / k! Here the sum ranges over all permutations of k elements p. There is also a natural projection operator asym: T^k(V) –> <span style="font-family:Trebuchet MS;"><span style="font-family:Symbol;">L</span></span><span style="font-family:Trebuchet MS;">^k(V). Given t in T^k(V), we define asym(t) = <span style="font-family:Trebuchet MS;"><span style="font-family:Symbol;">S</span></span><span style="font-family:Trebuchet MS;">_p sgn(p) <span style="font-family:Symbol;">s</span>_p(t) / k!</span></span></span></p> <h3>Claim</h3> <p><span style="font-family:Trebuchet MS;"><span style="font-family:Trebuchet MS;"><span style="font-family:Trebuchet MS;">Consider v1, v2 … vk elements of V. Then <br />asym(v1 (x) v2 (x) … (x) vk) is a non-vanishing element of <span style="font-family:Trebuchet MS;"><span style="font-family:Symbol;">L</span></span><span style="font-family:Trebuchet MS;">^k(V) </span>if and only if v1, v2 … vk are linearly independent.</span></span></span></p> <p>Suppose dim V = n. Consider e1 … en a basis of V. Consider <br />N = {1, 2 … n}^k, the set of all ordered k-tuples made of elements of {1, 2 … n}. Clearly #N, the number of elements of N, is n^k. Consider I = (i_1, i_2 … i_k) in N. 
We define E_I in T^k(V) to be e_i_1 (x) e_i_2 (x) … (x) e_i_k.</p> <h3>Claim</h3> <p>The E_I form a basis of T^k(V). In particular, dim T^k(V) = n^k.</p> <p>This claim follows from our previous claim about general tensor products.</p> <p>Define S(n, k) to be the set of all subsets of {1, 2 … n} of size k. Evidently #S(n, k) is the binomial coefficient (n k). Consider <br />I = {i_1, i_2 … i_k} in S(n, k). We define F_I in <span style="font-family:Trebuchet MS;"><span style="font-family:Symbol;">L</span></span><span style="font-family:Trebuchet MS;">^k(V) to be <br />asym(e_i_1 (x) e_i_2 (x) … (x) e_i_k). I’m cheating a bit here since this expression depends on the order of i_1 … i_k. However, the only ambiguity is the sign: even permutations don’t change the expression whereas odd permutations change its sign. For our purposes, we can make an arbitrary choice of order/sign for each I in S(n, k).</span></p> <h3>Claim</h3> <p>The F_I form a basis of <span style="font-family:Trebuchet MS;"><span style="font-family:Symbol;">L</span></span><span style="font-family:Trebuchet MS;">^k(V). In particular, dim <span style="font-family:Trebuchet MS;"><span style="font-family:Symbol;">L</span></span><span style="font-family:Trebuchet MS;">^k(V) = (n k).</span></span></p> <p>Define M(n, k) to be the set of all <em>multisets</em> of size k made of elements of {1, 2 … n}. Multisets are like sets except that each element can appear in a multiset several times. The size of a multiset is defined by counting the elements with multiplicity. We have <br />#M(n, k) = (n + k – 1 k). Consider <br />I = {i_1, i_2 … i_k}. Here, i_m may coincide with i_l for some m and l. We define G_I in S^k(V) to be <br />sym(e_i_1 (x) e_i_2 (x) … (x) e_i_k).</p> <h3>Claim</h3> <p>The G_I form a basis of S^k(V). 
In particular, <br />dim S^k(V) = (n + k – 1 k).</p> <h1>Vector Space Determinant</h1> <p>The concept of a vector space determinant is standard and widely used, though not under this name and notation.</p> <h2>Definition</h2> <p>Consider a vector space V with dim V = n. Then, the <em>determinant</em> of V, denoted Det V, is the vector space <span style="font-family:Trebuchet MS;"><span style="font-family:Symbol;">L</span></span><span style="font-family:Trebuchet MS;">^n(V).</span></p> <h2>basic properties</h2> <ul> <li>dim Det V = 1 </li> <li>Det (V (+) W) is naturally isomorphic to Det V (x) Det W </li> <li>Suppose W is a subspace of V. Then Det V is naturally isomorphic to Det W (x) Det (V/W). </li> <li>Det (V*) is naturally isomorphic to (Det V)* </li> <li>Consider A: V –> W a linear operator, m a natural number. Then there is a naturally induced operator <br /><span style="font-family:Trebuchet MS;"><span style="font-family:Symbol;">L</span></span><span style="font-family:Trebuchet MS;">^m(</span>A): <span style="font-family:Trebuchet MS;"><span style="font-family:Symbol;">L</span></span><span style="font-family:Trebuchet MS;">^m(</span>V) –> <span style="font-family:Trebuchet MS;"><span style="font-family:Symbol;">L</span></span><span style="font-family:Trebuchet MS;">^m(</span>W). In the special case <br />dim V = dim W = m we get the operator <br />det A: Det V –> Det W. Let us specialize further to the case <br />V = W. Then det A: Det V –> Det V is simply multiplication by a constant c in the ground field k (since Det V is <br />1-dimensional). 
c is the <em>conventional</em> determinant of the operator A we all know and love. </li> </ul> <p>Consider v1, v2 … vn elements of V. By a previous claim, <br />asym(v1 (x) v2 (x) … (x) vn) is a non-vanishing element of Det V if and only if v1, v2 … vn form a basis of V. Also, suppose e1 … en and f1 … fn are two bases of V related by the n x n matrix P, i.e. <br />(e1 … en) = (f1 … fn) P. Then <br />asym(e1 (x) … (x) en) = (det P) asym(f1 (x) … (x) fn).</p> <h1>Chirality</h1> <p>Consider V a complex quadratic vector space of dimension n. Then Det V contains two special non-vanishing elements that differ by a sign, say <span style="font-family:Trebuchet MS;"><span style="font-family:Symbol;">w</span></span><span style="font-family:Trebuchet MS;"> and –<span style="font-family:Trebuchet MS;"><span style="font-family:Symbol;">w</span></span>. These elements are constructed as follows. Consider e1 … en an orthonormal basis of V. Then <br />asym(e1 (x) e2 (x) … (x) en) is a non-vanishing element of Det V. Any two orthonormal bases are related by an orthogonal n x n matrix O. Since O is orthogonal, we have either det O = +1 or det O = –1. Thus asym(e1 (x) e2 (x) … (x) en) can only differ by a sign for different orthonormal bases, yielding <span style="font-family:Trebuchet MS;"><span style="font-family:Symbol;">w</span></span><span style="font-family:Trebuchet MS;"> and –<span style="font-family:Trebuchet MS;"><span style="font-family:Symbol;">w</span></span>.</span></span></p> <p>Suppose W is an isotropic subspace of V. Any element v of V defines a linear functional v* on V defined by v*(u) = Q(v, u). Here u is an arbitrary element of V and Q is the quadratic form of V. If v and u both belong to W we get v*(u) = Q(v, u) = 0 since W is isotropic. Thus, given v in W, v* is a linear functional vanishing on W. Since it vanishes on W, it determines a linear functional <span style="font-family:Symbol;">a</span> on V/W. This can be seen as follows. 
Any u in V/W can be represented by some u’ in V. We can then set <span style="font-family:Symbol;">a</span>(u) = v*(u’). However, u’ is only defined up to adding an arbitrary element w of W. But it’s OK because v*(u’ + w) = v*(u’) + v*(w) = v*(u’) since the second term vanishes by our previous observation. Thus <span style="font-family:Symbol;">a</span><span style="font-family:Trebuchet MS;"> is well defined.</span></p> <p>For any v in W we constructed a linear functional on V/W i.e. an element of (V/W)*. This yields a linear operator W –> (V/W)*. Now suppose dim V = 2m and W is <em>maximal</em> isotropic, that is, <br />dim W = m. Then our linear operator is an isomorphism.</p> <p>In the following, we use the symbol “=” to denote “naturally isomorphic”. Det V = Det W (x) Det (V/W). By the above, <br />W = (V/W)*. Hence <br />Det V = Det ((V/W)*) (x) Det (V/W) = (Det (V/W))* (x) Det(V/W) = C. In particular we get a special element in Det V: the element corresponding to 1 in C.</p> <h3>Claim</h3> <p>The special element is either <span style="font-family:Trebuchet MS;"><span style="font-family:Symbol;">w</span></span><span style="font-family:Trebuchet MS;"> or –<span style="font-family:Trebuchet MS;"><span style="font-family:Symbol;">w.</span></span></span><span style="font-family:Trebuchet MS;"> This invariant divides maximal isotropic subspaces into two classes (chiralities). Any two maximal isotropic subspaces W and W’ are related by an orthogonal operator O: V –> V. For det O = +1, W and W’ have the same chirality. 
For <br />det O = –1, W and W’ have opposite chirality.</span></p>Squarkhttp://www.blogger.com/profile/08330918300643149836noreply@blogger.com3tag:blogger.com,1999:blog-6815931835942890099.post-50787001036486551632009-07-01T12:00:00.001-07:002009-07-01T12:02:07.292-07:00Autoevolution III<p><em>This is a <a href="http://squarkonium.blogspot.com/2009/06/autoevolution-ii.html">sequel</a>.</em></p> <p>As Eli justly remarked, we are already cyborgs. The computer is extending our ability to think and remember. The Internet is extending our ability to communicate and share information. Microscopes, telescopes, night vision etc. are extending our ability to sense.</p> <p>What is going to change in the future in this respect? Firstly, our technological extensions will become more numerous and powerful. Equally importantly, they will become more <em>synergetic</em> with ourselves. Instead of looking <em>through</em> a telescope, your experience will be that of your eyes <em>being</em> the telescope. Instead of searching in Google using your keyboard, you will do that by merely <em>thinking</em>. Instead of writing things down in Microsoft Outlook, it will just feel like <em>remembering</em>. Instead of doing computations with a calculator, you will do them in your mind without “sweat”. Instead of driving a car, you will <em>become</em> the car (provided you really want to control it instead of just getting from point A to point B; even the latter will usually be unnecessary).</p> <p>Extension of human “IO” will not be limited to information naturally contained in the physical reality. It will include <em>conventionally encoded</em> information: the kind we store in writing, computers etc. today. The synergy of the human use of such information will lead to the Internet becoming a “virtual reality”: something experienced directly rather than through non-transparent intermediates such as keyboards, mice and displays. 
“Ordinary” reality will become “augmented reality” as additional layers of information will be seamlessly available about “surrounding” physical objects (that is, objects under attention). Imagine looking at a bus and <em>knowing</em> what line it is and what its trajectory is. A possible intermediate form of augmented reality is superposing the additional information layers on the existing senses, e.g. “seeing” a “clickable” line number floating in the air beside the bus.</p> <p>Physical location of human beings will become much less important. Even now the Internet, cellular phones and video conferences are reducing its importance. In the future this trend will continue much further. The culmination is mind-body separation. One can imagine huge immobile “brains” banked somewhere, operating bodies in distant physical locations and purely “virtual” realms.</p> <p>Many activities don’t require engaging the “direct” physical reality but allow working with “conventionally encoded” higher abstraction layers. These include</p> <ul> <li>“Software” engineering, that is the design of technology operating within these higher abstraction layers</li> <li>Communication</li> <li>Theoretical learning and research</li> <li>“Virtual” Art</li> </ul> <p>Other activities require interaction with “lower abstraction layers”. These include</p> <ul> <li>“Hardware” engineering</li> <li>Experimental science</li> <li>Colonization of space</li> <li>“Physical” Art</li> </ul> <p>Eventually the “synergy” of the “virtual IO” (the exchange of “conventionally encoded” information) will make the “Internet” into a <em>shared memory of humankind.</em> It will store actual memories, experiences, thoughts, knowledge etc. which will be downloadable and uploadable. This will result in individuality becoming “fuzzy”, up to the point of almost <em>utter disappearance</em>. The entire population of a given planet will merge into a “multimind”. The multimind will be a continuous array of memories, desires etc. 
connected by associations as is the mind of an individual today. The mind of a modern individual contains a “stream of consciousness” (“CPU”) continuously traversing the memories (“RAM”), loading them into the short term memory (“cache”), connecting them to IO and transforming them. In an analogous way, the multimind will contain many streams of consciousness processing the shared memory in parallel, each with its own cache and connected to specific IOs at any given moment.</p> <p>I call this multimind a “Solaris”, to <a href="http://en.wikipedia.org/wiki/Solaris_(novel)">honour Stanislaw Lem</a>. The Solaris is a single entity uniting the whole technosphere, ecosphere and consciousness of an entire planet.</p> <p>Due to the disappearance of the individual, property will lose its meaning. Birth and death will also become obsolete, except for the birth of new Solarises through space colonization.</p> <p>It is also possible that the Solaris will contain “quasi-individuals”: tightly bound chunks of memories, desires and ideas. Such <br />quasi-individuals will dynamically form out of the Solaris and dissolve back into it.</p>Squarkhttp://www.blogger.com/profile/08330918300643149836noreply@blogger.com8tag:blogger.com,1999:blog-6815931835942890099.post-79286617424326770022009-06-25T04:18:00.001-07:002009-06-25T06:51:44.380-07:00Ask Squark: Mirror<p><em>This is a part of the <a href="http://squarkonium.blogspot.com/2009/06/ask-squark.html">Ask Squark</a> series.</em></p> <p>Thanks Assaf for submitting the first question to “Ask Squark”!</p> <p>Question:</p> <p><em>Why does the mirror reverse left and right but not up and down?</em></p> <p>Answer:</p> <p>Firstly, there is a hidden assumption here that the mirror is hung conventionally, i.e. vertically. 
A horizontal mirror (like in <a href="http://www.lyrics007.com/Eagles%20Lyrics/Hotel%20California%20Lyrics.html">Hotel California</a>) <em>does</em> reverse up and down.</p> <p>It might seem that a vertical mirror displays some sort of asymmetry: left and right (which are perceived as horizontal directions) are reversed, whereas up and down (the vertical directions) are not reversed. However, let me assure you that there is perfect rotational symmetry with respect to the axis orthogonal to the mirror plane. The apparent paradox is mostly semantic.</p> <p>Let us remember how an ideal planar mirror works. Real mirrors are not ideal (alas), but this is irrelevant for the discussion.</p> <p>Suppose the mirror lies in the plane <span style="font-family:sym;">M. Consider a point P in space. The mirror image of P is the point P’ satisfying the following criteria:</span></p> <ul> <li>The line PP’ is orthogonal to M</li> <li>The distance of P to M is equal to the distance of P’ to M</li> </ul> <p>In reality, the mirror is usually reflective from one side only, hence we have to assume P lies in one of the two half-spaces defined by M. However, this is inessential.</p> <p>A direction in space can be specified by two points: the beginning and the end of a vector. Thus, given a direction v = PQ, the mirror image direction is v’ = P’Q’ where P’ is the mirror image of P and Q’ is the mirror image of Q. 
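</p>

<p>The two criteria above pin down P’ uniquely, and are easy to turn into a computation. The following sketch is mine, not part of the original answer: it assumes numpy and a mirror plane M passing through the origin, described by a unit normal n; reflecting a point then amounts to subtracting twice its component along n.</p>

```python
import numpy as np

def mirror_image(P, n):
    """Reflect the point P across the plane through the origin
    with unit normal n: subtract twice the normal component."""
    n = n / np.linalg.norm(n)
    return P - 2 * np.dot(P, n) * n

# M = the plane spanned by the up-down and north-south directions;
# n points east (orthogonal to M).
n = np.array([1.0, 0.0, 0.0])
P = np.array([3.0, 1.0, 2.0])
Pp = mirror_image(P, n)  # array([-3., 1., 2.])

# Both defining criteria hold: the segment PP' is parallel to n
# (hence orthogonal to M), and P, P' are equidistant from M.
```

<p>Applying mirror_image to both endpoints of a vector gives the mirror image direction v’ = P’Q’ discussed next.</p>

<p>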
It is easy to see that when v is parallel to M, v’ = v but when v is orthogonal to M, v’ = –v.* For example, if M is the plane spanned by the up-down and north-south directions, the mirror preserves up, down, north and south but reverses east and west.</p><p><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://4.bp.blogspot.com/_cEdPnVvarCc/SkOAi7L3DpI/AAAAAAAAABE/qsc7sOVJul4/s1600-h/SqMir1.JPG"><img src="http://4.bp.blogspot.com/_cEdPnVvarCc/SkOAi7L3DpI/AAAAAAAAABE/qsc7sOVJul4/s200/SqMir1.JPG" border="0" alt="" id="BLOGGER_PHOTO_ID_5351262119737626258" style="cursor: pointer; width: 161px; height: 200px; " /></a></p> <p><i>The illustration is by the courtesy of SurDin.</i></p><p>But what about left and right? These notions are more complicated. While north, south, east, west, up and down are <em>absolute</em> directions, left and right are <em>relative</em>. For instance, if you face another person, her right is your left and vice versa.</p> <p>Mathematically, we can describe left and right as follows. Consider ordered triples of mutually orthogonal vectors of unit length<br />(u, v, w). It can be shown that such triples fall into two classes R and L such that</p> <ul> <li>A triple in R can be rotated into any other triple in R.</li> <li>A triple in L can be rotated into any other triple in L.</li> <li>A triple in R cannot be rotated into any triple in L.</li> <li>A triple in L cannot be rotated into any triple in R.</li> </ul> <p>Suppose a Cartesian coordinate system S is given. Then any triple (u, v, w) can be decomposed into components with respect to S and represented by a 3 x 3 matrix. Then, for some triples the determinant of this matrix is 1 whereas for others it is –1. These are R and L.</p> <p>We can derive the following interesting property of R and L. Suppose (u, v, w) is a triple in R (L). Then (-u, v, w), (u, –v, w) and <br />(u, v, –w) are triples in L (R). 
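</p>

<p>This sign behaviour can be checked directly from the determinant description. A small numerical sketch (my own, assuming numpy; the sample triple is just the standard basis, taken as matrix columns):</p>

```python
import numpy as np

def triple_class(u, v, w):
    """Classify an ordered triple of mutually orthogonal unit vectors
    by the sign of the determinant of the matrix with columns u, v, w."""
    d = np.linalg.det(np.column_stack([u, v, w]))
    return 'R' if d > 0 else 'L'

u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])
w = np.array([0.0, 0.0, 1.0])

print(triple_class(u, v, w))    # R
print(triple_class(u, v, -w))   # L: negating one vector flips the class
print(triple_class(-u, -v, w))  # R: negating two vectors flips it back
```

<p>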
It follows that (-u, –v, w), (-u, v, –w) and (u, –v, –w) are triples in R (L). Also, (-u, –v, –w) is a triple in L (R). This property follows, for instance, from the fact that the sign of the determinant is reversed when we reverse the sign of one of the matrix columns.</p> <p>The relation to the familiar notions of right and left is as follows. Consider a person X and (u, v, w) a triple as above. Suppose u is a vector pointing in the direction from X’s legs to her head. Suppose v is a vector pointing in the direction from X’s back to her front. Then w points either to X’s right or to X’s left, according to the class to which (u, v, w) belongs!</p> <p>Consider (u, v, w) a triple as above. Consider (u’, v’, w’) the mirror image triple. That is, u’ is the mirror image of u, v’ is the mirror image of v and w’ is the mirror image of w. Then, if (u, v, w) is in R then (u’, v’, w’) is in L and vice versa. In this sense, the mirror reverses left and right. For instance, suppose u and v are parallel to M. Then w is orthogonal to M. We get u’ = u, v’ = v, w’ = –w. Thus (u’, v’, w’) = (u, v, –w) belongs to the class opposite to that of <br />(u, v, w). The general case can be proven e.g. 
using determinants: if we choose S such that M is parallel to two of the axes, mirror imaging amounts to changing the sign of one of the matrix rows.</p><p><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="http://3.bp.blogspot.com/_cEdPnVvarCc/SkOAyUpELuI/AAAAAAAAABM/WaECjaZThDc/s1600-h/SqMir2.JPG"><img src="http://3.bp.blogspot.com/_cEdPnVvarCc/SkOAyUpELuI/AAAAAAAAABM/WaECjaZThDc/s200/SqMir2.JPG" border="0" alt="" id="BLOGGER_PHOTO_ID_5351262384269045474" style="cursor: pointer; width: 161px; height: 200px; " /></a></p> <p><span class="Apple-style-span" style="font-style: italic; ">The illustration is by the courtesy of SurDin.</span></p><p>* Remember that two vectors v = AB and w = CD are considered <em>equal</em> when</p> <ul> <li>The line AB is parallel to the line CD (or A = B and C = D which means both vectors are equal to the 0 vector).</li> <li>The line AC is parallel to the line BD (or A = C and B = D which means both vectors coincide in the trivial sense).</li> </ul> <p>Also, remember that, by definition, if v = AB then –v = BA.</p>Squarkhttp://www.blogger.com/profile/08330918300643149836noreply@blogger.com9tag:blogger.com,1999:blog-6815931835942890099.post-79021391917944788452009-06-18T02:43:00.000-07:002009-06-18T03:58:32.311-07:00Free will, ethics and determinismLev has recently <a href="http://levsblog.wordpress.com/2009/06/08/science-and-free-will/">brought up</a> the question of free will vs. determinism. I have spent some time thinking about these issues and came up with the ideas I want to lay out here.<div><br /></div><div>The problem stems from the desire to define ethics. What is ethics? From my point of view, ethics is a set of rules describing how ethical a given behaviour is in a given situation. The definition might seem somewhat circular, but let's put this aside. 
The important aspect is that ethics is something allowing comparison of different choices and marking some as "better" and some as "worse", no matter in what sense.</div><div>In order to discuss ethics, we must understand the set of choices available to a given person in a given situation. The fact that choice exists at all relies on the notion of "free will" which, on the surface, contradicts determinism.</div><div>What is determinism? Determinism means that given complete knowledge of the initial conditions it is possible to predict what would happen at any time in the future. This is a principle that holds in classical physics. The real world is better described by quantum physics, in which the situation is more complicated, but let's leave this for later.<br />In practice, such predictions are limited by 3 factors:</div><div><ol><li>Incomplete knowledge of the initial conditions. Indeed it is difficult to know everything about the entire universe.</li><li>Incomplete knowledge of the laws of nature. Our model of reality is imperfect, and in my opinion, will always remain so.</li><li>Limited information processing power. Even if you know the initial conditions and the laws of nature sufficiently precisely to give a prediction with the accuracy you need, doing so might require a complicated computation which would take lots of "CPU clock ticks" and perhaps lots of "RAM" to complete.</li></ol>Thus, even if a "bird's-eye view" observer knows precisely what person X is going to do, X doesn't know it. Moreover, I claim it is <i>impossible</i> for X to know it. 
That is, we have the following <i>principle of self-unpredictability</i>:</div><div><br /></div><div>An intelligent being X can never have a future-prediction capacity, accounting for the 3 factors above, that would allow her to predict her own behaviour.</div><div><br /></div><div>If the principle is violated, we would get something like the <a href="http://en.wikipedia.org/wiki/Grandfather_paradox">grandfather paradox</a>: if X knows she is going to do Y, what prevents her from doing Z which is different from Y? The principle also makes sense physically, as far as my intuition goes: to produce better predictions we need a more powerful "brain" which would have to be more complicated and thus more difficult to predict. It's sort of a "bootstrap".</div><div>To give an example, suppose X is a human. Certainly X is not able to use her knowledge of biology, chemistry, quantum physics and whatever else to predict the workings of her own brain. Now, suppose X recruits a computer Y to help her. It might appear that now her task is more realistic. However, by using the computer she made it <i>a part of the system</i>. That is, the required prediction now involves the joint dynamics of X + Y and thus remains out of reach.</div><div>It is fascinating to me whether the self-unpredictability principle can be reformulated as a theorem in physics about abstract information-processing systems.</div><div>Thus, the space of possible choices of X can be defined to be the space of things X can do <i>as far as X can tell</i>. Ethics would have to operate on this space.</div><div><br /></div><div>It might appear that the indeterminism of quantum mechanics provides some kind of an alternative solution to the problem. However, I deem it is not so. The "freedom" allowed for by quantum mechanics is purely random. 
It is not consistent with the sort of choice that involves ethical judgement, which is the sort of choice we are targeting here.<br />Moreover, consider a person X observed by a superintelligent being Y. Y can predict X's behaviour up to quantum indeterminism. Y knows X is going to do A with probability<br />pA = 1 - 1e-100 and do B with probability pB = 1e-100. Thus B is a highly unlikely choice. However, from the point of view of X both of the choices might be equally legitimate, a priori (before ethical judgement is applied). Moreover, the small probability pB might stem from something like lightning striking X's head, which is an artifact completely irrelevant to the ethical dilemma at hand.</div>Squarkhttp://www.blogger.com/profile/08330918300643149836noreply@blogger.com9tag:blogger.com,1999:blog-6815931835942890099.post-46230420341335306482009-06-12T06:29:00.000-07:002009-06-12T09:01:05.590-07:00Spinors II: Isotropic Subspaces<i>This is a <a href="http://squarkonium.blogspot.com/2009/05/spinors-i-clifford-algebra.html">sequel</a>.</i><div><br /></div><div><span class="Apple-style-span" style="font-weight: bold; ">Definition</span></div><div><b><span class="Apple-style-span" style="font-weight: normal; "><div><div><div>Consider V a quadratic space. Consider W a subspace. W is called <i>isotropic</i> when given v, w in W arbitrary, Q(v, w) = 0.</div><div><br /></div><div>For instance, when k = R and V is <i>positive definite</i> (that is, sn V = dim V) the only isotropic subspace is {0}. If V is <i>Lorentzian</i> (that is, sn V = dim V - 1) we have<br />1-dimensional isotropic subspaces: <i>null</i> or <i>lightlike</i> lines in physics talk. On the other hand, for<br />k = C we have:</div><div><br /></div><div><b>Proposition</b></div><div><ul><li>Suppose V is a complex quadratic space of dimension 2n. Then the maximal dimension of an isotropic subspace is n.</li><li>Suppose V is a complex quadratic space of dimension 2n + 1. 
Then the maximal dimension of an isotropic subspace is n.</li></ul><b><div><span class="Apple-style-span" style="font-weight: normal;">It will be useful to introduce the following</span></div><div><span class="Apple-style-span" style="font-weight: normal;"><br /></span></div>Definition</b></div><div>Consider V a quadratic space over an arbitrary field k. Consider W a subspace of V. Define W^ to be {v in V | for any w in W: Q(v, w) = 0}. W^ is called the subspace <i>orthogonal</i> to W. </div><div><br /></div><div>Several facts about orthogonal subspaces will be handy. Given V, W as above:</div><div><ul><li>dim W + dim W^ = dim V</li><li>W^^ = W</li><li>Suppose W is isotropic. Then W^ contains W.</li></ul></div><div>Consider V a quadratic space over an arbitrary field k. Consider W an isotropic subspace of V. Denote n = dim V, m = dim W. We have dim W^ = n - m but W^ contains W, hence<br />dim W^ >= dim W, i.e. n - m >= m and thus n >= 2m. It follows that the maximal dimension of an isotropic subspace can be at most as in the proposition, even for k different from C. It remains to show that for k = C the bound can always be saturated.</div><div><b><br /></b><b>Proof of Proposition</b></div><div><ul><li>Suppose V is a complex quadratic space of dimension 2n.<br />Consider e_1 ... e_2n a basis such that Q(e_i, e_j) = delta_ij. Here delta_ij is the <i>Kronecker symbol</i>, that is delta_ij = 1 for i = j and delta_ij = 0 for i =/= j. Define<br />f_1 = e_1 + i e_2, f_2 = e_3 + i e_4 ... f_n = e_2n-1 + i e_2n. Here i is the imaginary unit, that is i = sqrt(-1). It is easily seen that {f_j} span an n-dimensional isotropic subspace.</li><li>Suppose V is a complex quadratic space of dimension 2n + 1.<br />Consider e_1 ... e_2n, e_2n + 1 a basis such that Q(e_i, e_j) = delta_ij. Define<br />f_1 = e_1 + i e_2, f_2 = e_3 + i e_4 ... f_n = e_2n-1 + i e_2n. 
It is easily seen that {f_j} span an n-dimensional isotropic subspace.</li></ul><b><div>Definition</div><div><span class="Apple-style-span" style="font-weight: normal;">Consider V a complex quadratic space. An isotropic subspace W of V is called a <i>maximal isotropic subspace</i> when it is of the maximal possible dimension. That is:</span></div><div><ul><li><span class="Apple-style-span" style="font-weight: normal;">For dim V = 2n, we require dim W = n.</span></li><li><span class="Apple-style-span" style="font-weight: normal;">For dim V = 2n + 1, we require dim W = n.</span></li></ul></div><div>Definition</div></b></div><div>Consider V a quadratic space over the field k. Consider R: V -> V an operator. R is called <i>orthogonal</i> when for any v, w in V we have Q(Rv, Rw) = Q(v, w). The orthogonal operators are the <i>automorphisms</i> of V, that is, isomorphisms of V with itself. The set of all orthogonal operators is denoted O(V).</div><div><br /></div><div><b>Proposition</b></div><div>Consider V a complex quadratic space, U and W two maximal isotropic subspaces. Then, there exists R in O(V) s.t. R(U) = W.</div><div><br /></div><div>This means that all maximal isotropic subspaces of a given space are essentially the same. However, we'll see in the sequels that <i>for dim V = 2n + 1, the space of maximal isotropic subspaces has 1 connected component, whereas for dim V = 2n there are 2 connected components.</i> For now, I won't explain what I mean by "space" and what "connected components" are. 
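As a quick sanity check, the explicit construction in the proof above lends itself to a numerical test. Below is a minimal sketch in Python with numpy (my own illustration, not part of the original argument): take V = C^4 with the standard symmetric bilinear form Q(u, v) = sum_i u_i v_i (note: bilinear, with no complex conjugation), and f_1 = e_1 + i e_2, f_2 = e_3 + i e_4 as in the proof.

```python
import numpy as np

# V = C^4 with the standard symmetric bilinear form Q(u, v) = sum_i u_i v_i.
# Note: Q is bilinear, NOT the Hermitian inner product (no conjugation).
def Q(u, v):
    return np.dot(u, v)

e = np.eye(4, dtype=complex)  # basis with Q(e_i, e_j) = delta_ij

# The construction from the proof: f_j = e_{2j-1} + i e_{2j}
f1 = e[0] + 1j * e[1]
f2 = e[2] + 1j * e[3]

# Q vanishes on every pair, so span{f1, f2} is isotropic
for u in (f1, f2):
    for v in (f1, f2):
        assert abs(Q(u, v)) < 1e-12

# f1 and f2 are linearly independent, so the isotropic subspace
# has dimension 2 = n for dim V = 2n = 4, saturating the bound n >= 2m
assert np.linalg.matrix_rank(np.array([f1, f2])) == 2
```

The same check works in any even dimension by pairing up the basis vectors, and in odd dimension by simply leaving the last basis vector unused.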
These are some basic concepts of topology which I hope to explain in the sequels.</div></div></div></span></b></div>Squarkhttp://www.blogger.com/profile/08330918300643149836noreply@blogger.com25tag:blogger.com,1999:blog-6815931835942890099.post-48661541812860290642009-06-07T01:43:00.000-07:002009-06-07T02:57:34.015-07:00Autoevolution IIThis is a <a href="http://squarkonium.blogspot.com/2009/03/humanity-in-1000-years-i-autoevolution.html">sequel</a>.<div><br /></div><div>Genetic engineering is no longer science fiction. We are rapidly approaching the era when we will modify the genetic code of various living organisms and our own genetic code in massive amounts, up to a point when such modifications become a centerpiece of technology.</div><div>In 1000 years the impact of these modifications will be so great that our descendants will have little in common with the original homo sapiens sapiens. It is impossible to tell what they will be like precisely (that is, even more impossible than my other ambitions in this series). However, I will try to guess some of their general features.</div><div><ul><li><b>Survival<span class="Apple-style-span" style="font-weight: normal;"><br /></span><span class="Apple-style-span" style="font-weight: normal;">Some specimens will be adapted to extreme conditions, such as extreme temperatures, extreme pressures, ionizing radiation, poison etc. In particular, when space colonization commences, "humans" adapted to the respective conditions will be created. That is, we would have Martians, <a href="http://en.wikipedia.org/wiki/Europa_(moon)">Europans</a> etc. historically originating from Earth humans.</span></b></li><li><b>6th sense, 7th sense...<span class="Apple-style-span" style="font-weight: normal;"><br /></span><span class="Apple-style-span" style="font-weight: normal;">Some specimens will have sensory perception very different from what we are used to. 
For instance, they might have vision with more colour channels and/or in different areas of the spectrum.</span></b></li><li><b>Communication<span class="Apple-style-span" style="font-weight: normal;"><br /></span><span class="Apple-style-span" style="font-weight: normal;">Speech will be replaced by more advanced modes of communication, perhaps something like a direct mind-to-mind link. This will increase the bandwidth and reduce the error rate considerably. Among other things, this might lead to much more efficient resolution of disagreements, up to the point when "irreconcilable differences" become very rare.</span></b></li><li><b>Intelligence<span class="Apple-style-span" style="font-weight: normal;"><br /></span><span class="Apple-style-span" style="font-weight: normal;">Eventually, not only the "body" but also the "mind" will be enhanced. This will start with improved memory and faster thought and end up with capabilities of a completely different magnitude. In my opinion, the most critical mental capacity is the ability to hold many things in one's mind at once: a sort of "cache memory". It is the enhancement of this capacity which would lead to the most radical development of intelligence.</span></b></li><li><b>Specialization<br /><span class="Apple-style-span" style="font-weight: normal;">Different specimens will be adapted to different professions, to an extent much greater than what exists today (up to the point when different professions become virtually different species). In a way, this kind of specialization already exists: division into males and females. 
However, in the future there will be many more kinds (in ways unrelated to the reproductive cycle).</span></b></li><li><b>Body-mind separation<span class="Apple-style-span" style="font-weight: normal;"><br /></span><span class="Apple-style-span" style="font-weight: normal;">Eventually the brain or mind which carries the information processing function will be separated from the body which carries the input/output functions. It will be possible for a given person (mind) to use a number of different bodies suited for different tasks at different times. Loosely speaking, one will be able to change bodies the way one now changes clothes or cars. Thus some of the traits mentioned above (such as survival in extreme conditions and enhanced senses) will apply to particular bodies rather than particular persons.</span></b></li></ul>At first, there will exist two radically different "technologies":</div><div><ul><li>The "conventional" technology we know today, based on semiconductors, fiber optics, lasers etc. At some point this will include some sort of "nanotechnology". The advantage of this technology is that we understand and control it perfectly, since we created it "from the ground up" (except the laws of nature, of course, which are immutable).</li><li>"Organic" technology employing what we now call "genetic engineering". The advantage of this technology is that it is "more sophisticated" than the ordinary: living organisms do things we still only dream of doing artificially. The disadvantage is the imperfect understanding and control we have over it.</li></ul>However, eventually they will mix and become one or several technologies descendant from both. These technologies will unite the advantages of both kinds. Thus, the clear distinction between "ourselves" and the "machines" will be erased. At the same time, the distinction between the "technosphere" and the "biosphere" will also be erased. 
That is, instead of two different environments (the "wild" and our artificially created environment) there will be only one. This new environment will be at least as sophisticated as the ecology existing today in the wild, while being under our conscious control.</div><div>As an intermediate stage, we will create much more efficient modes of brain-computer communication. Humans would have computer "coprocessors" wired into their brain and connected to the internet.</div><div>There is another essential difference between conventional technology and life. The machines we create are usually "clones" made to resemble a given prototype as much as possible. However, living organisms, even if members of the same species, are always very different from each other (unless, of course, they are the clones of a single ancestor; such groups, however, form only tiny fractions of a given species). It is my suspicion that the second scheme is much more efficient, and we are only bound to the first scheme because of technical limitations. Thus most of the "machines" of the future will resemble living organisms rather than modern machines in this respect.</div><div><br /></div>Squarkhttp://www.blogger.com/profile/08330918300643149836noreply@blogger.com59tag:blogger.com,1999:blog-6815931835942890099.post-46890468900871773562009-05-29T06:12:00.000-07:002009-05-30T02:58:40.668-07:00Spinors I: Clifford AlgebraThis post is meant to be the first in a series about spinors, exceptional isomorphisms, twistors and supersymmetry. My interest in in-depth investigation of spinors was partially inspired by Yasha; in particular, I learned the annihilator approach from him (it will appear in the sequels). I will try to assume little prior knowledge except linear algebra. 
The emphasis will be on mathematics, at least for a while, so if you're interested in this purely from a physics perspective then you had better already know the physical motivation / applications of this stuff.<div><br /></div><div>The first object we'll need is the <i>Clifford algebra</i>. Fix a field k (in this series it will always be either the set of real numbers R or the set of complex numbers C).<br /><div><br /><div><div><span class="Apple-style-span" style="font-size: x-large;">Algebras</span></div><div><span class="Apple-style-span" style="font-size:6;"><span class="Apple-style-span" style="font-size: 24px;"><br /></span></span></div><div>I'll start with a quick reminder of what an algebra is. Suppose A is a vector space over k. Suppose further that a mapping m: A x A -> A is given. For convenience's sake, given a, b in A we denote m(a, b) by ab and call m <i>multiplication</i>. A is called a <i>unital associative </i><i>k-algebra</i> (just <i>k-algebra</i> in the sequel) when the following conditions hold:</div><div><ul><li>m is <i>bilinear </i>(i.e. linear in each of the arguments separately). 
In detail, this means that<br />Given a, b, c in A: (a + b)c = ac + bc <i>additivity in the 1st argument / left distributivity</i><i><br /><span class="Apple-style-span" style="font-style: normal;">Given x in k and a, b in A: (xa)b = x(ab) <i>homogeneity in the 1st argument</i><br />Given a, b, c in A: c(a + b) = ca + cb <i>additivity in the 2nd argument / right distributivity</i><br />Given x in k and a, b in A: a(xb) = x(ab) <i>homogeneity in the 2nd argument</i></span></i></li><li>Given a, b, c in A: (ab)c = a(bc) <i>associativity</i></li><li>There exists an element "1" in A such that for any a in A: a1 = 1a = a <i>unit</i></li></ul><i><div><span class="Apple-style-span" style="font-style: normal;">An algebra A is called </span>commutative<span class="Apple-style-span" style="font-style: normal;"> when given a, b in A we have ab = ba.</span></div><br /></i><i><div><b><span class="Apple-style-span" style="font-style: normal;">Examples</span></b></div><div><ul><li><span class="Apple-style-span" style="font-style: normal;">Consider V a k-vector space. Consider End(V) the set of all endomorphisms of V, i.e., linear operators V -> V. End(V) is a k-algebra where multiplication corresponds to composition of linear operators. For dim V = n finite, End(V) is finite-dimensional: dim End(V) = n^2.</span></li><li><span class="Apple-style-span" style="font-style: normal;">Consider V a k-vector space, W a subspace. Consider End(V, W) the set of endomorphisms of V leaving W invariant (non-standard notation). That is, given a in End(V, W), w in W we have aw also in W. End(V, W) is an algebra. It is a </span>subalgebra <span class="Apple-style-span" style="font-style: normal;">of End(V), that is, a linear subspace closed under multiplication. For dim V finite, End(V, W) is finite-dimensional as well.</span></li><li><span class="Apple-style-span" style="font-style: normal;">Fix n a natural number. The set of n x n matrices with coefficients in k forms an algebra: Mat(n, k). It is isomorphic to End(V) for dim V = n. 
Obviously, dim Mat(n) = n^2.</span></li><li><span class="Apple-style-span" style="font-style: normal;">Fix n a natural number. The set of upper-triangular n x n matrices with coefficients in k forms an algebra: UT(n, k) (non-standard notation). We have dim UT(n, k) = n (n + 1) / 2.</span></li><li><span class="Apple-style-span" style="font-style: normal;">Consider k[x] the set of polynomials with coefficients in k in the variable x. k[x] is a<br />k-algebra. It is infinite-dimensional. It is </span><span class="Apple-style-span" style=""><span class="Apple-style-span" style="font-style: normal;">commutative</span><span class="Apple-style-span" style="font-style: normal;">.</span></span></li><li><span class="Apple-style-span" style="font-style: normal;">Fix n a natural number. We have k^n the set of column vectors of size n with coefficients in k. We can define multiplication in k^n by multiplying each vector entry separately. It makes k^n into an algebra. Obviously dim k^n = n. k^n is commutative.</span></li></ul></div></i><i><span class="Apple-style-span" style="font-style: normal;"><div><span class="Apple-style-span" style="font-size: large;">Ideals</span></div><div><span class="Apple-style-span" style="font-size:180%;"><span class="Apple-style-span" style="font-size: 18px;"><br /></span></span></div><div><i><span class="Apple-style-span" style="font-style: normal;">A subset I of A is called a </span>right ideal</i> when the following conditions hold:</div></span></i></div><div><ul><li>I is a linear subspace of A</li><li>Given a in A, b in I, ba is also in I</li></ul>Consider S an arbitrary subset of A. Denote SA to be the collection of all elements of A of the form</div><div><br /></div><div><div>s1 a1 + s2 a2 + ... + sn an</div><div><br /></div><div>where:<br />s1, s2 ... sn are elements of S</div><div>a1, a2 ... 
an are elements of A</div><div><br /></div><div><b>Claim:</b> SA is a right ideal.</div><div>SA is called <i>the right ideal of A generated by S.</i></div><div><br /></div><div><i><span class="Apple-style-span" style="font-style: normal; "><div><i><span class="Apple-style-span" style="font-style: normal; ">A subset I of A is called a </span>left ideal</i> when the following conditions hold:</div><div><ul><li>I is a linear subspace of A</li><li>Given a in A, b in I, ab is also in I</li></ul><div><div>Consider S an arbitrary subset of A. Denote AS to be the collection of all elements of A of the form</div><div><br /></div><div><div>a1 s1 + a2 s2 + ... + an sn</div><div><br /></div><div>where:<br />s1, s2 ... sn are elements of S</div><div>a1, a2 ... an are elements of A</div><div><br /></div><div><b>Claim:</b> AS is a left ideal.</div><div>AS is called <i>the left ideal of A generated by S.</i></div><div><i><br /></i></div></div></div></div></span></i></div><div>A subset I of A is called a <i>two-sided ideal</i> when it is simultaneously a left ideal and a right ideal.</div><div>Consider S an arbitrary subset of A. Denote ASA to be the collection of all elements of A of the form</div><div><br /></div><div><div>a1 s1 b1 + a2 s2 b2 + ... + an sn bn</div><div><br /></div><div>where:<br />s1, s2 ... sn are elements of S</div><div>a1, a2 ... an are elements of A</div><div>b1, b2 ... bn are elements of A</div><div><br /></div><div><div><b>Claim:</b> ASA is a two-sided ideal.</div><div>ASA is called <i>the two-sided ideal of A generated by S.</i></div><div><i><br /></i></div><div><b>Claim:</b> Suppose A is a commutative algebra. Then a subset of A is a left ideal if and only if it is a right ideal if and only if it is a two-sided ideal.</div><div>Thus for a commutative algebra all three notions coincide, hence we speak simply of <i>ideals</i>.</div><div><i><br /></i></div><div><b>Examples</b></div><div><ul><li>Consider V a k-vector space. Consider the algebra End(V). 
Consider W a subspace of V. Define I = {a in End(V) | Im a lies in W}. I is a right ideal of End(V). Define<br />J = {a in End(V) | Ker a contains W}. J is a left ideal in End(V).</li><li>Fix n a natural number. Consider the algebra Mat(n, k). Fix m <= n another natural number. Define I = {a in Mat(n, k) | the first m rows are zero}. I is a right ideal of Mat(n, k). Define J = {a in Mat(n, k) | the first m columns are zero}. J is a left ideal of Mat(n, k).</li><li>Fix n a natural number. Consider the algebra UT(n, k). Fix m <= n another natural number. Define J = {a in UT(n, k) | the first m columns are zero}. J is a two-sided ideal of UT(n, k).</li><li>Consider the algebra k[x]. Consider S a finite subset of k.<br />Define I = {p in k[x] | for any a in S: p(a) = 0}. I is an ideal of k[x]. It is generated by the single polynomial which is the product of (x - a) over all a in S. Now fix a in k and n a natural number. Define<br />J = {p in k[x] | for any natural m with 0 <= m <= n: p^(m)(a) = 0}, where p^(m) denotes the m-th derivative of p. J is an ideal of k[x]. It is generated by the single polynomial (x - a)^(n + 1).</li><li>Fix n a natural number. Consider the algebra k^n. Consider m <= n another natural number. Define I = {v in k^n | the first m entries of v are zero}. I is an ideal.</li></ul></div><div><span class="Apple-style-span" style="font-size: large;">Quotient Algebra</span></div><div><span class="Apple-style-span" style="font-size:180%;"><span class="Apple-style-span" style="font-size: 18px;"><br /></span></span></div><div><span class="Apple-style-span" style="font-size:medium;">Consider A an algebra and I a two-sided ideal. Then we may take the vector space quotient A/I. 
That is, we consider the set of equivalence classes of A under the following equivalence relation: Given a, b in A, they are equivalent when a - b is in I.</span></div><div>It is easy to see that the operation of multiplication in A defines an operation of multiplication in A/I as well, that is, makes A/I into an algebra in its own right. For this to work, it is crucial that I is a two-sided ideal. A/I is called <i>the quotient algebra of A by I.</i></div><div><br /></div><div><b>Examples</b></div><div><ul><li>Fix n a natural number. Consider the algebra UT(n, k). Fix m <= n another natural number. Define J = {a in UT(n, k) | the first m columns are zero}. Then UT(n, k) / J is naturally isomorphic to UT(m, k).</li><li>Consider the algebra k[x]. Consider S a finite subset of k.<br />Define I = {p in k[x] | for any a in S: p(a) = 0}. Then k[x] / I is naturally isomorphic to k^n where n is the number of elements of S.</li><li>Fix n a natural number. Consider the algebra k^n. Consider m <= n another natural number. Define I = {v in k^n | the first m entries of v are zero}. Then k^n / I is naturally isomorphic to k^m.</li></ul></div></div></div><div><i><span class="Apple-style-span" style="font-style: normal; "><div><span class="Apple-style-span" style="font-size: large;">Generators and Relations</span></div><div><span class="Apple-style-span" style="font-size:180%;"><span class="Apple-style-span" style="font-size: 18px;"><br /></span></span></div><div>One of the simplest ways to construct an algebra is using generators and relations. This is done as follows. Suppose G is an arbitrary set (possibly infinite). Consider F = k&lt;G&gt; the algebra of <i>non-commutative polynomials</i> with coefficients in k and variables in G. For G non-empty this algebra is infinite-dimensional. It is also called the <i>free algebra over G</i>.</div><div>Now take R an arbitrary subset of F. We have I = FRF a two-sided ideal. We obtain the algebra A = F / I. 
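The k[x] quotient example above can be made concrete: the quotient map k[x] -> k[x] / I is realised by evaluation at the points of S. A small numerical sketch in Python with numpy (my own illustration, not part of the original text), taking k = R and S = {1, 2, 3}:

```python
import numpy as np

# The quotient k[x] / I, with I = {p | p(a) = 0 for all a in S},
# is realised by the evaluation map ev(p) = (p(a_1), ..., p(a_n)):
# an algebra homomorphism onto k^n whose kernel is exactly I.
S = np.array([1.0, 2.0, 3.0])  # a finite subset of k = R

def ev(p):
    """Evaluate a polynomial (coefficient list, highest degree first) on S."""
    return np.polyval(p, S)

p = np.array([1.0, 0.0, -2.0])  # x^2 - 2
q = np.array([2.0, 1.0])        # 2x + 1

# ev respects multiplication: ev(pq) = ev(p) * ev(q) entrywise,
# matching the entrywise product that makes k^n an algebra
assert np.allclose(ev(np.polymul(p, q)), ev(p) * ev(q))

# The ideal I is generated by (x - 1)(x - 2)(x - 3), which ev kills
gen = np.poly(S)  # monic polynomial with roots exactly S
assert np.allclose(ev(gen), 0.0)
```

Surjectivity of ev (hence the isomorphism k[x] / I = k^n) follows from Lagrange interpolation: any n prescribed values can be hit by a polynomial of degree below n.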
A is called <i>the algebra generated by G with relations R</i>. In this context, elements of G are called <i>generators</i> and elements of R <i>relations.</i> It is often convenient to define relations using equations. For example, suppose f, g, h are elements of G and x, y, z are elements of k. Then the relation</div><div><br /></div><div>xf^2 = ygh + zhg</div><div><br /></div><div>means that the element xf^2 - ygh - zhg of F is in R.</div></span></i></div><div><i><span class="Apple-style-span" style="font-style: normal;"><br /></span></i><i><span class="Apple-style-span" style="font-style: normal;"><span class="Apple-style-span" style="font-size: x-large;">Quadratic Spaces</span></span></i></div><div><span class="Apple-style-span" style="font-size:6;"><span class="Apple-style-span" style="font-size: 24px;"><br /></span></span></div><div><span class="Apple-style-span" style="font-size:medium;">Consider V a vector space over k. V is called a </span><i><span class="Apple-style-span" style="font-size:medium;">quadratic space</span></i><span class="Apple-style-span" style="font-size:medium;"> when it is equipped with a </span><i><span class="Apple-style-span" style="font-size:medium;">symmetric bilinear non-degenerate form </span></i><span class="Apple-style-span" style="font-size:medium;">Q. A quick reminder of what that means:</span></div><div><ul><li>Q is a mapping V x V -> k</li><li>Q is <i>bilinear <span class="Apple-style-span" style="font-style: normal; ">(i.e. 
linear in each of the arguments separately):<br />Given u, v, w in V: Q(u + v, w) = Q(u, w) + Q(v, w) <i>additivity in the 1st argument</i><i><br /><span class="Apple-style-span" style="font-style: normal; ">Given x in k and u, v in V: Q(xu, v) = xQ(u, v) <i>homogeneity in the 1st argument</i><br />Given u, v, w in V: Q(w, u + v) = Q(w, u) + Q(w, v) <i>additivity in the 2nd argument</i><i><br /><span class="Apple-style-span" style="font-style: normal; ">Given x in k and u, v in V: Q(u, xv) = xQ(u, v) <i>homogeneity in the 2nd argument</i></span></i></span></i></li><li>Q is <i>symmetric</i>, that is, given u, v in V: Q(u, v) = Q(v, u)</li><li>Q is <i>non-degenerate</i>: Suppose u in V is such that for any v in V we have Q(u, v) = 0. Then u = 0.</li></ul>Two quadratic spaces V, W with corresponding forms Q, R are called <i>isomorphic</i> when there exists a linear mapping i: V -> W such that</div><div><ul><li>i is <i>injective</i>: Given u, v in V, i(u) = i(v) implies u = v. Equivalently, given u in V, i(u) = 0 implies u = 0.</li><li>i is <i>surjective</i>: Given w in W, there exists v in V such that i(v) = w.</li><li>i preserves the quadratic structure, that is, given u, v in V: Q(u, v) = R(i(u), i(v))</li></ul>When these conditions hold, the mapping i is called an <i>isomorphism</i> between V and W. Two isomorphic quadratic spaces are "essentially the same".</div><div>In the sequel, we'll only care about finite-dimensional quadratic spaces.</div><div><br /><b>Proposition:</b> </div><div><ol><li>Suppose V, W are quadratic spaces over k = C. Then V is isomorphic to W if and only if<br />dim V = dim W.</li><li>Suppose V is a quadratic space over k = C of dimension n. Then there exists a basis<br />e1, e2 ... en of V such that:<br />Q(ei, ei) = 1<br />Q(ei, ej) = 0 for i =/= j</li></ol><b>Proposition:</b></div><div><ol><li>Suppose V is a quadratic space over k = R of dimension n. Then there exists a natural number s and a basis of V e1, e2 ... 
en such that:<br />For i <= s: Q(ei, ei) = 1<br />For i > s: Q(ei, ei) = -1<br />Q(ei, ej) = 0 for i =/= j<br />We call s the "s-number" of V and denote it sn V (this is not standard terminology).</li><li>(Trivial) Suppose V, W are quadratic spaces over k = R. Then V is isomorphic to W if and only if dim V = dim W and sn V = sn W.</li></ol><span class="Apple-style-span" style="font-size: x-large;">Clifford Algebra</span></div><div><span class="Apple-style-span" style="font-size:6;"><span class="Apple-style-span" style="font-size: 24px;"><br /></span></span></div><div>Fix V a vector space. The tensor algebra T(V) is the algebra generated by V with the following relations:</div><div><ul><li>Given u, v, w in V with u + v = w, we take u + v = w to be a relation.</li><li>Given u, v in V, x in k with xu = v, we take xu = v to be a relation.</li></ul>T(V) is infinite-dimensional.</div><div><br /></div><div>Suppose V is a quadratic space. The Clifford algebra C(V) is the algebra generated by V with the following relations:</div><div><ul><li>The relations we used for T(V).</li><li>Given u, v in V: uv + vu = -2Q(u, v)</li></ul><span class="Apple-style-span" style="font-size:large;"><div><b>Claim:</b> dim C(V) = 2^dim V</div><div><br /></div><span class="Apple-style-span" style="font-size:large;">Examples</span></span></div><div><br /></div><div>We use k = R in these examples.</div><div><ul><li>Suppose dim V = 0. Then C(V) is isomorphic to R.</li><li>Suppose dim V = 1, sn V = 1. Then C(V) is isomorphic to C.</li><li>Suppose dim V = 2, sn V = 2. 
Then C(V) is isomorphic to the <i>quaternion algebra</i> H.</li><li>For dim V > 2, C(V) is no longer a <i>division algebra</i>, that is, it doesn't have an <i>inverse</i> for each non-zero element.</li></ul></div></div></div></div>Squarkhttp://www.blogger.com/profile/08330918300643149836noreply@blogger.com2tag:blogger.com,1999:blog-6815931835942890099.post-55826053984720032232009-03-20T05:46:00.000-07:002009-05-01T07:12:36.416-07:00Humanity in a 1000 years I: autoevolutionIt is probably absurd trying to predict what will happen with humanity in 1000 years. It is likely that even the smartest person living in 1000 C.E. would find it difficult to imagine the era we live in now.<br />Maybe armed with rationalism and the scientific method we are in a slightly better position to do it now. On the other hand, it is possible that the methods of thought or even the very apparatus of the mind will make so much progress that our descendants will create things we would never be able to dream of. In fact, I am (somewhat paradoxically) about to claim precisely that. At any rate, it appears almost inevitable that the increase in the body of human knowledge will lead to incredible changes in human society and the human way of life. Unless, of course, some terrible catastrophe or a new dark age will prevent it.<br />If so, the task of imagining humanity in the year 3000 C.E. appears almost hopeless. Nevertheless, I still think it is worthwhile. Why? Because it is an amusing thought experiment. Because thinking about the future may change the future. Because trying to stretch our ability to predict or analyse to the limit may teach us something. Even if it won't, it is still bound to be fun :-) Let me give it a go, then!<br /><p>I warn beforehand that my view of the future is somewhat optimistic. I am assuming humanity will not be destroyed by nuclear war, alien invasion, asteroid impact or any other calamity. 
I am also assuming scientific progress is not going to stop or reverse as a result of such an event. My entire "prediction" is something of a mixture between what I believe will happen and what I hope will happen.</p><p><em><span style="font-size:180%;">Autoevolution</span></em></p><p>Charles Darwin taught us that humans are not essentially different from any other animal. Like any other animal, or indeed, any other living creature, we gradually evolved from other species over vast periods of time. The governing principle of that process is natural selection. The principle is so obvious it is almost a tautology: the specimens most adapted to survival are more likely to survive, so each generation is adapted better than the previous one. Add mutations into the mix and we get evolution.</p><p>How long does evolution continue? As far as we know, it continues <i>indefinitely</i>. External conditions change, different species compete and, most importantly from my point of view, nature never reaches perfection. There are always improvements to be made.</p><p>Improvements? Isn't <i>homo sapiens sapiens</i> perfect? Isn't it the peak of creation?</p><p>What on Earth gave us the arrogance to think that? Oh, sorry, I know what it is: evolution ;-)</p><p>Homo sapiens sapiens can and should be improved. There's no reason to think we can't be more healthy, more enduring, more intelligent. The problem is, we <i>are</i> different from other animals after all. We change our environment, adapting it to our needs. This process is much faster than the self-adaptation resulting from biological evolution. The result: natural selection, the driving force of biological evolution, is no longer operative for this species.</p><p>Not only are we (apparently) no longer becoming <i>better</i>, we are probably becoming <i>worse</i>. Random mutations introduce noise into our genetic code. 
At the same time, in modern society (I mean the developed countries), "weak" individuals are not allowed to perish (which is a good thing!) and have no problem spreading their genes. Anyone short of a Nazi would agree that a situation in which each individual in society is protected and able to satisfy her basic needs is a healthy one, from a moral standpoint. The downside is that in the long run the human race faces physical and intellectual degeneration (I recommend the amusing comedy "Idiocracy" on precisely this subject).</p><p>Luckily, this threat, created, in a sense, by modern technology, finds its solution in the same source. In recent decades, the field of <i>genetics</i> has experienced vast progress. The extent of the progress is such that <i>genetic engineering</i> has become possible. Now, we are only taking our first steps in this direction. However, we are discussing a problem that will only become relevant in the very long run, and there is little doubt that by that time our ability to manipulate the genetic codes of living beings, including ourselves, will be perfected.</p><p>Thus, genetic engineering of human beings appears to me inevitable in order to avoid degeneration. However, we can and should go beyond this and apply genetic engineering to <i>improve</i> ourselves rather than merely preserve ourselves at the same level.</p><p>OK, I thought this was going to be one post, but it would take me ages to complete at this rate. So I'm posting the beginning, to be (hopefully) continued...</p>Squarkhttp://www.blogger.com/profile/08330918300643149836noreply@blogger.com9tag:blogger.com,1999:blog-6815931835942890099.post-27341320009588878002009-03-13T06:18:00.000-07:002009-03-19T14:39:13.052-07:00Regular Polytopes and TilingsA few random thoughts on regular polytopes and tilings I wanted to share.<div>
<br /></div><div><span class="Apple-style-span" style="font-size:6;"><span class="Apple-style-span" style="font-size:24px;"><b>2D</b></span></span></div><div>
<br /></div><div>Given m,n >= 3, we can try to build a 2-dimensional tiling where m regular n-gons meet at each vertex. The angle of a regular n-gon is alpha = (1 - 2/n) pi, and we have the following three cases:</div><div><ol><li>m alpha < 2 pi. This yields a regular polyhedron (a <a href="http://en.wikipedia.org/wiki/Platonic_solid">Platonic solid</a>). There are 5 cases like that:
<br />a. n = 3, m = 3: <a href="http://en.wikipedia.org/wiki/Tetrahedron">tetrahedron</a>, a self-dual polyhedron, the 3-dimensional <a href="http://en.wikipedia.org/wiki/Simplex">simplex</a>
<br />b. n = 3, m = 4: <a href="http://en.wikipedia.org/wiki/Octahedron">octahedron</a>, the 3-dimensional <a href="http://en.wikipedia.org/wiki/Cross-polytope">cross-polytope</a>
<br />c. n = 3, m = 5: <a href="http://en.wikipedia.org/wiki/Icosahedron">icosahedron</a>
<br />d. n = 4, m = 4: <a href="http://en.wikipedia.org/wiki/Cube">cube</a>, dual to octahedron, the 3-dimensional <a href="http://en.wikipedia.org/wiki/Hypercube">hypercube</a>
<br />e. n = 5, m = 3: <a href="http://en.wikipedia.org/wiki/Dodecahedron">dodecahedron</a>, dual to icosahedron
<br />Each of those defines a finite subgroup of SO(3), the 3-dimensional rotation group, and of O(3), the 3-dimensional rotation-and-reflection group. These subgroups are, of course, the symmetry groups of the polyhedra.</li><li>m alpha = 2 pi. This yields a <a href="http://en.wikipedia.org/wiki/Tiling_by_regular_polygons">regular tiling</a> of the Euclidean plane. There are 3 cases like that:
<br />a. <a href="http://upload.wikimedia.org/wikipedia/commons/c/c9/Tiling_Regular_3-6_Triangular.svg">n = 3, m = 6</a>
<br />b. <a href="http://upload.wikimedia.org/wikipedia/commons/7/73/Tiling_Regular_4-4_Square.svg">n = 4, m = 4</a>, a self-dual tiling
<br />c. <a href="http://en.wikipedia.org/wiki/File:Tiling_Regular_6-3_Hexagonal.svg">n = 6, m = 3</a>, dual to a
<br />Each of those defines a discrete subgroup of the group of isometries (rotations, translations and reflections) of the Euclidean plane. Alternatively we can use orientation-preserving isometries (rotations and translations only).</li><li>m alpha > 2 pi. This yields a <a href="http://en.wikipedia.org/wiki/Uniform_tilings_in_hyperbolic_plane">regular tiling of the hyperbolic plane</a>. There are infinitely many cases. Each of them defines a discrete subgroup of SO(2, 1). The latter group has various geometric realizations:
<br />a. Orientation-preserving isometries of the hyperbolic plane.
<br />b. <a href="http://en.wikipedia.org/wiki/Lorentz_transformation">Lorentz transformations</a> of special relativity in 3-dimensional spacetime (2 space dimensions and 1 time dimension).</li></ol><span class="Apple-style-span" style="font-size:6;"><span class="Apple-style-span" style="font-size:24px;"><b><span class="Apple-style-span" style=" ;font-size:48px;">3D</span></b></span></span></div><div><span class="Apple-style-span" style="font-size:180%;"><span class="Apple-style-span" style="font-size:18px;"><b>
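</b></span></span></div><div>The case analysis above is easy to automate. Here is a minimal sketch (the helper name classify is mine, not standard terminology); since comparing m alpha with 2 pi reduces to comparing m(n - 2) with 2n, exact integer arithmetic suffices:

```python
def classify(n, m):
    """Classify m regular n-gons meeting at a vertex.

    m * (1 - 2/n) * pi  vs  2*pi  is equivalent to  m*(n-2)  vs  2*n,
    so the test is exact in integer arithmetic.
    """
    lhs, rhs = m * (n - 2), 2 * n
    if lhs < rhs:
        return "spherical"    # a Platonic solid
    if lhs == rhs:
        return "Euclidean"    # a regular tiling of the Euclidean plane
    return "hyperbolic"       # a regular tiling of the hyperbolic plane

# Enumerating small cases recovers the 5 Platonic solids and 3 Euclidean tilings.
platonic = [(n, m) for n in range(3, 8) for m in range(3, 8)
            if classify(n, m) == "spherical"]
euclidean = [(n, m) for n in range(3, 8) for m in range(3, 8)
             if classify(n, m) == "Euclidean"]
```

Everything else in the range comes out hyperbolic, consistent with case 3 being an infinite family.</div><div><span class="Apple-style-span" style="font-size:180%;"><span class="Apple-style-span" style="font-size:18px;"><b>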
<br /></b></span></span></div><div><span class="Apple-style-span" style="font-size:180%;"><span class="Apple-style-span" style="font-size:18px;">Given A, B regular polyhedra, we can try to build a 3-dimensional tiling where #{faces of B} A-polyhedra meet at a B-type vertex. What do I mean by a B-type vertex? Imagine the vertex being in the center O of a B-polyhedron Y. Fix a face F of
<br />Y. F corresponds to an A-polyhedron X of the tiling. The lines passing through O and the vertices of F correspond to edges of X.
<br />This doesn't work for arbitrary A, B. For purely combinatorial reasons, we need
<br /></span></span></div><div><span class="Apple-style-span" style="font-size:180%;"><span class="Apple-style-span" style="font-size:18px;">
<br /></span></span></div><div><span class="Apple-style-span" style="font-size:180%;"><span class="Apple-style-span" style="font-size:18px;">#{faces meeting at a vertex of A} = #{sides of a face of B}</span></span></div><div><span class="Apple-style-span" style="font-size:180%;"><span class="Apple-style-span" style="font-size:18px;">
<br /></span></span></div><div><span class="Apple-style-span" style="font-size:180%;"><span class="Apple-style-span" style="font-size:18px;">Geometrically, we again have three cases, depending on</span></span></div><div><ul><li><span class="Apple-style-span" style=" ;font-size:18px;">alpha, the dihedral angle of A, that is, the angle between two of its adjacent faces.</span></li><li><span class="Apple-style-span" style="font-size:180%;"><span class="Apple-style-span" style="font-size:18px;">m, the number of faces meeting at a vertex of B.</span></span></li></ul><span class="Apple-style-span" style="font-size:180%;"><span class="Apple-style-span" style="font-size:18px;">The three cases are:</span></span></div><div><ol><li><span class="Apple-style-span" style=" ;font-size:18px;">m alpha < 2 pi. This yields a 4-dimensional regular polytope (a <a href="http://en.wikipedia.org/wiki/Convex_regular_4-polytope">polychoron</a>). There are 6 cases like that:
<br />a. A = tetrahedron, B = tetrahedron: <a href="http://en.wikipedia.org/wiki/Pentachoron">pentachoron</a>, a self-dual polychoron. It is the 4-dimensional <a href="http://en.wikipedia.org/wiki/Simplex">simplex</a>.
<br />b. A = tetrahedron, B = octahedron: <a href="http://en.wikipedia.org/wiki/Hexadecachoron">hexadecachoron</a>. It is the 4-dimensional <a href="http://en.wikipedia.org/wiki/Cross-polytope">cross-polytope</a>.
<br />c. A = tetrahedron, B = icosahedron: <a href="http://en.wikipedia.org/wiki/Hexacosichoron">hexacosichoron</a>.
<br />d. A = cube, B = tetrahedron: <a href="http://en.wikipedia.org/wiki/Tesseract">tesseract</a>, dual to the hexadecachoron. It is the 4-dimensional <a href="http://en.wikipedia.org/wiki/Hypercube">hypercube</a>.
<br />e. A = octahedron, B = cube: <a href="http://en.wikipedia.org/wiki/Icositetrachoron">icositetrachoron</a>, a self-dual polychoron.
<br />f. A = dodecahedron, B = tetrahedron: <a href="http://en.wikipedia.org/wiki/Hecatonicosachoron">hecatonicosachoron</a>, dual to the hexacosichoron.
<br />Each of those defines a finite subgroup of SO(4), the group of 4-dimensional rotations. It also defines a finite subgroup of O(4), the group of 4-dimensional rotations-and-reflections.</span></li><li><span class="Apple-style-span" style="font-size:180%;"><span class="Apple-style-span" style=" ;font-size:18px;">m alpha = 2 pi. This yields a regular tiling of the Euclidean space. There is only 1 case like that: A = cube, B = octahedron. It defines a discrete subgroup of the group of isometries (rotations, translations and reflections) of the Euclidean space, or of the group of orientation-preserving isometries (no reflections).</span></span></li><li><span class="Apple-style-span" style="font-size:180%;"><span class="Apple-style-span" style="font-size:18px;">m alpha > 2 pi. This yields a regular tiling of the 3-dimensional <a href="http://en.wikipedia.org/wiki/Hyperbolic_space">hyperbolic space</a>. There are 4 cases like that:
<br />a. A = cube, B = icosahedron
<br />b. <a href="http://en.wikipedia.org/wiki/Order-4_dodecahedral_honeycomb">A = dodecahedron, B = octahedron</a>: dual to a
<br />c. A = dodecahedron, B = icosahedron: self-dual
<br />d. A = icosahedron, B = dodecahedron: self-dual
<br />Each of those defines a discrete subgroup of SO(3, 1). The latter group has various geometric realizations:
<br />a. The group of orientation-preserving isometries of the 3-dimensional hyperbolic space.
<br />b. The group of Lorentz transformations in special relativity.
<br />c. The group of orientation-preserving <a href="http://en.wikipedia.org/wiki/Conformal_map">conformal transformations</a> of the
<br />2-sphere.
<br />Realization b is intriguing since it makes me wonder whether these discrete subgroups appear in any physically-interesting situation.
<br />Realization c is intriguing for the following reason. Each such transformation has one or two <a href="http://en.wikipedia.org/wiki/Fixed_point_(mathematics)">fixed points</a>. Consider the set of fixed points of all transformations belonging to a given discrete subgroup. This is a countable subset of the sphere, invariant under the discrete subgroup (due to conjugation). Clearly it must be either dense everywhere or a sort of fractal, but I don't know which.</span></span></li></ol><span class="Apple-style-span" style="font-size:6;"><span class="Apple-style-span" style="font-size:24px;"><b><span class="Apple-style-span" style=" ;font-size:48px;">4D</span></b></span></span></div><div><span class="Apple-style-span" style="font-size:medium;">
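</span></div><div>An aside on realization c above: the fixed points in question can be computed directly. A Möbius transformation z -> (az + b)/(cz + d) fixes z exactly when c z^2 + (d - a) z - b = 0. A minimal sketch (the helper name is mine; it assumes ad - bc is non-zero):

```python
import cmath

def mobius_fixed_points(a, b, c, d):
    """Fixed points of z -> (a*z + b)/(c*z + d) on the Riemann sphere.

    Fixed points solve c*z**2 + (d - a)*z - b = 0; when c == 0 the
    point at infinity is always fixed.
    """
    if c == 0:
        if a == d:
            return [complex('inf')]            # a translation (or the identity)
        return [complex('inf'), b / (d - a)]   # affine map: one finite fixed point
    disc = cmath.sqrt((d - a) ** 2 + 4 * b * c)
    return [((a - d) + disc) / (2 * c), ((a - d) - disc) / (2 * c)]
```

For example, the inversion z -> 1/z fixes +1 and -1, while a translation fixes only the point at infinity, matching the "one or two fixed points" count above.</div><div><span class="Apple-style-span" style="font-size:medium;">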
<br /></span></div><div>This time we take A, B to be regular polychora. We want to construct a 4-dimensional tiling of A-polychora which meet at a B-type vertex. The combinatorial compatibility condition is</div><div>
<br /></div><div>vertex polyhedron of A = hyperface polyhedron of B</div><div>
<br /></div><div>We have 3 geometric cases:</div><div><ol><li>A regular tiling of the 4-sphere, that is, a 5-dimensional regular polytope. There are 3 cases like that:
<br />a. A = pentachoron, B = pentachoron: the self-dual 5-dimensional simplex.
<br />b. A = pentachoron, B = hexadecachoron: the 5-dimensional cross-polytope.
<br />c. A = tesseract, B = pentachoron: the 5-dimensional hypercube, dual to b.</li><li>A regular tiling of the 4-dimensional Euclidean space. One example is
<br />A = tesseract, B = hexadecachoron, which is self-dual</li><li>A regular tiling of the 4-dimensional hyperbolic space.</li></ol>There are 7 exotic objects (that is, objects special to dimension 4) among cases 2-3:</div><div><ol><li>A = pentachoron, B = hexacosichoron</li><li>A = hexadecachoron, B = icositetrachoron</li><li>A = tesseract, B = hexacosichoron</li><li>A = icositetrachoron, B = tesseract: dual to 2</li><li>A = hecatonicosachoron, B = pentachoron: dual to 1</li><li>A = hecatonicosachoron, B = hexadecachoron: dual to 3</li><li>A = hecatonicosachoron, B = hexacosichoron: self-dual</li></ol>At the moment I'm not sure which of them is a tiling of 4-dimensional Euclidean space and which is a tiling of 4-dimensional hyperbolic space.</div><div>
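<br />The same angle-around-a-ridge test classifies the 3- and 4-dimensional cases once the dihedral angles of the cells are known. A sketch using the standard dihedral-angle values (the helper name and tables are mine; m is read off from B as the number of its faces, respectively cells, meeting along an edge):

```python
from math import acos, pi, sqrt

# Standard dihedral angles of the Platonic solids and the regular polychora.
DIHEDRAL = {
    "tetrahedron":        acos(1 / 3),                   # ~70.53 deg
    "cube":               pi / 2,                        # 90 deg
    "octahedron":         acos(-1 / 3),                  # ~109.47 deg
    "dodecahedron":       acos(-1 / sqrt(5)),            # ~116.57 deg
    "icosahedron":        acos(-sqrt(5) / 3),            # ~138.19 deg
    "pentachoron":        acos(1 / 4),                   # ~75.52 deg
    "tesseract":          pi / 2,                        # 90 deg
    "hexadecachoron":     acos(-1 / 2),                  # 120 deg
    "icositetrachoron":   acos(-1 / 2),                  # 120 deg
    "hecatonicosachoron": acos(-(1 + sqrt(5)) / 4),      # 144 deg
    "hexacosichoron":     acos(-(1 + 3 * sqrt(5)) / 8),  # ~164.48 deg
}

# m for each choice of B: faces at a vertex (3D) / cells at an edge (4D).
M = {
    "tetrahedron": 3, "cube": 3, "octahedron": 4,
    "dodecahedron": 3, "icosahedron": 5,
    "pentachoron": 3, "tesseract": 3, "hexadecachoron": 4,
    "icositetrachoron": 3, "hecatonicosachoron": 3, "hexacosichoron": 5,
}

def classify(A, B):
    """Compare the total angle m * alpha around a ridge with 2*pi.

    Assumes (A, B) already satisfy the combinatorial compatibility condition.
    """
    total = M[B] * DIHEDRAL[A]
    if abs(total - 2 * pi) < 1e-9:
        return "Euclidean"
    return "spherical" if total < 2 * pi else "hyperbolic"
```

For instance classify("cube", "octahedron") returns "Euclidean" (the cubic honeycomb). Running it over the seven exotic pairs marks cases 2 and 4 as Euclidean and the other five as hyperbolic.</div><div>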
<br /></div><div><span class="Apple-style-span" style=" font-weight: bold; font-size:48px;">Higher dimension</span></div><div>
<br /></div><div><span class="Apple-style-span" style="font-size:medium;">We take A, B to be regular n-dimensional polytopes. We want to construct an n-dimensional tiling of A-polytopes which meet at a B-type vertex. The combinatorial compatibility condition is<div>
<br /></div><div>n-1-dimensional vertex polytope of A = n-1-dimensional hyperface polytope of B</div><div>
<br /></div><div>Once again, we have 3 geometric cases:</div><div><ol><li>A regular tiling of the n-sphere, that is an n+1-dimensional regular polytope. There are 3 cases like that:
<br />a. A = n-dimensional simplex, B = n-dimensional simplex. This is the self-dual
<br />n+1-dimensional simplex.
<br />b. A = n-dimensional hypercube, B = n-dimensional simplex. This is the n+1-dimensional hypercube.
<br />c. A = n-dimensional simplex, B = n-dimensional cross-polytope. This is the
<br />n+1-dimensional cross-polytope, dual to the n+1-dimensional hypercube.</li><li>A regular tiling of the n-dimensional Euclidean space. There is only 1 case:
<br />A = n-dimensional hypercube, B = n-dimensional cross-polytope.</li><li>A regular tiling of the n-dimensional hyperbolic space. There are none!</li></ol><span class="Apple-style-span" style=" font-weight: bold; font-size:48px;">Indefinite signature</span></div><div><span class="Apple-style-span" style="font-size:medium;">
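</span></div><div>Before turning to indefinite signature, the "there are none!" claim can be checked mechanically. For n >= 5 the only regular n-polytopes are the simplex, hypercube and cross-polytope, with closed-form dihedral angles arccos(1/n), pi/2 and arccos((2 - n)/n) (standard values). A sketch (the helper names and tables are mine):

```python
from math import acos, pi

# Closed-form dihedral angles of the three families that exist for all n >= 5.
def dihedral(kind, n):
    return {"simplex": acos(1 / n),
            "hypercube": pi / 2,
            "cross-polytope": acos((2 - n) / n)}[kind]

# (n-1)-dimensional vertex polytope and hyperface of each family, and m,
# the number of A-cells around a ridge, read off from B.
VERTEX = {"simplex": "simplex", "hypercube": "simplex",
          "cross-polytope": "cross-polytope"}
FACET = {"simplex": "simplex", "hypercube": "hypercube",
         "cross-polytope": "simplex"}
M = {"simplex": 3, "hypercube": 3, "cross-polytope": 4}

def tilings(n):
    """Classify all combinatorially compatible pairs (A, B) in dimension n."""
    out = {}
    for A in VERTEX:
        for B in FACET:
            if VERTEX[A] != FACET[B]:   # the compatibility condition above
                continue
            total = M[B] * dihedral(A, n)
            if abs(total - 2 * pi) < 1e-9:
                out[(A, B)] = "Euclidean"
            else:
                out[(A, B)] = "spherical" if total < 2 * pi else "hyperbolic"
    return out
```

In every dimension this yields exactly the three spherical cases a, b, c, the single Euclidean hypercubic tiling, and no hyperbolic case: 4 arccos(1/n) approaches 2 pi from below but never reaches it, and no compatible pair has A = cross-polytope.</div><div><span class="Apple-style-span" style="font-size:medium;">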
<br /></span></div><div>As we said, we construct n-dimensional tilings out of a pair A, B of n-dimensional polytopes. Now, a polytope is a tiling of the n-1-sphere. What if we take A, B to be tilings of the n-1-dimensional hyperbolic space instead? Logically, we should get a tiling of a space of Lorentzian signature, since the hyperbolic space plays the same role in Minkowski space that the sphere plays in Euclidean space. I'm not sure how such a tiling would look; it appears it would be
<br />self-intersecting. As before, such tilings would come with different curvatures. That is, we should get
<br /></div>Squarkhttp://www.blogger.com/profile/08330918300643149836noreply@blogger.com2