Virtual physics is the study of the intrinsic perception of computer-generated universes. Algorithmic physics is the study of physical systems by methods developed in formal logic and the computer sciences. Both fields of research may be conceived as two sides of the same constructivistic attempt to re-interpret physical indeterminism and undecidability. In that way, virtual reality can give insight and intuition for algorithmic physics, and vice versa.
I shall first give an outline of virtual physics and algorithmic physics. Then I shall propose a constructive re-interpretation of undecidability in the context of algorithmic physics. Finally, I shall come back to virtual physics, in particular to questions related to interface modeling and ethics. Technical issues are addressed in the appendix.
I had a dream. I was in an old, possibly medieval, castle. I walked through it. At times I had the feeling that there was something ``out there,'' something so inconceivably hidden that it was impossible to recognize. Then suddenly I realized that there was something ``inside the walls:'' another, dual, castle, quite as spacious as the one I was walking in, formed by the inner side of what one would otherwise consider masonry. There was a small opening, and I glanced through; the inside looked like a three-dimensional maze inhabited by dwarfs. The opening closed again.
Computers are exactly such openings; doors of perception to hidden universes. One may ask [1], ``what exactly makes a reality virtual?'' Or conversely, ``what makes a computer-generated universe so real?'' One may also ask, ``where exactly is this `undiscover'd country?' Is it in the circuitry? Is it on the screens, in the interfaces, in the senses, in the mind?'' - These are old questions. They have been addressed with respect to where exactly the mind is located. They can also be applied to the characters in a book or on a movie screen.
To cope with the intrinsic phenomenology of computer-generated universes systematically, we have to develop their ``virtual physics.'' This includes experiments, observations and theoretical models which are intrinsically operational. It is different from an outside description of a process of computation.
Virtual physics is neither a classical discipline of mathematics, logic or the computer sciences, nor can it pretend to be a traditional physical science. Its scope is the intrinsic perception and interpretation of pure syntax.
Is this pure syntax independent of the hardware on which it is implemented? - Yes and no: at first it would seem that within the domain of universal computation, syntactical structures are the same, no matter whether they are implemented on a silicon-based Turing machine, a nerve-based Cellular Automaton, or on a common billiard table. A second glance reveals that it may be possible for a program to back-act on its hardware, very much like a malignant computer virus destroys its host processor by heating it up.
We may explore a countable number of universes of computation by virtual physics, but can we step outside of this syntactical frame formed by universal computation? There is reason to believe that this might be difficult for now. If we extend the domain of universal computation, say, by allowing some form of oracle computation, we risk inconsistency. If we restrict our domain of computation, the resulting worlds will be monotonous. There is still another question related to consistency: Will all the different universes of computation - the tree of mathematical perception - eventually collapse into a single one?
Let me point out that virtual physics is part of a program called endophysics [2,3,4,5,6]. Endophysics, in short, considers observers who are embedded in the very system they observe. Such observers are naturally limited by the methods, devices and procedures which are operational therein. They cannot ``step outside'' [7] of this ``Cartesian prison'' and are therefore bound to self-referential perception. Can one give concrete meaning to this ``boundedness by self-reference?'' Indeed, a research program is proposed here which is capable of the formal evaluation of bounds to self-reference. This program is based on a recursion-theoretic re-formulation of physics. It may result in a paradigm change concerning the perception of indeterminism in physics.
Algorithmic physics considers a physical system as a constructive object; more specifically, as an algorithm, a process of computation. It encompasses algorithmic information theory, computational complexity theory, and the theory of recursive functions and effective computability, in particular undecidability theory in the physical context. The latter fields are in their infancy, while algorithmic information theory and computational complexity theory have attracted considerable attention due to their applicability in statistical physics and chaos theory. Algorithmic physics is based on the assumption that the world is, or at least can suitably be modeled by, a machine; more precisely, that the physical universe can be conceived as a computation and described by a recursive function [9].
Of course, at the present moment, everyone pretending that the universe is a Turing-compatible machine is in a state of sin. There are some features of our current physical worldview which seem to be in total contradiction to an intuitive understanding of ``machine-like'' and ``computation.''
Take, for example, the classical and the quantum mechanical concept of information. Intuitively and classically, a unit of information is context-free. It is independent of what other information is or might be present. A classical bit remains unchanged, no matter by what methods it is inferred. It can be copied, and no doubt about its value remains.
By contrast, quantum information is contextual. I.e., a quantum bit may appear different, depending on the method by which it is inferred (cf. appendix A and [10,11,12,13]). Quantum bits cannot be copied or ``cloned'' [14]. Classical tautologies are not necessarily satisfied in quantum information theory (cf. appendix B and [15,16]).
More generally, quantum systems cannot be trivially ``extended'' and embedded into a classical world [10]. Quantum complementarity states that there are observables which cannot be measured simultaneously with arbitrary accuracy. There exist events which cannot be predicted; in the present dictum, they ``occur at random.''
Even the classical physics of continua operates with entities which are formally random. For instance, almost all elements of the continuum (i.e., with probability one) are Martin-Löf/Solovay/Chaitin-random, a predicate characterizing (on the average) the uncomputability of each individual digit in an expansion. No ``algorithmic compression'' of such random reals is possible, and one would need an infinite amount of storage and time to represent and manipulate them. Classical physics based on classical analysis cannot be implemented on a universal computer.
Classical determinism, stating that all ``causes'' or natural laws are computable, has nothing to say about whether or not initial values corresponding to physical systems should be effectively computable. This is very unsatisfactory, to say the least. Any program p requiring some particular input s can be rewritten into a new program p′ requiring no input (the empty list ∅). This can for instance be realized by coding the input s of p as constants of p′. Likewise, any part of p′ can be externalized as a subprogram s, whose code can then be identified with an input for the new program p. In this sense, the terms effective computation and initial value are interchangeable, and the naming is merely a matter of convention. Therefore, if classical determinism leaves unspecified the computability of initial values serving as input for recursive natural laws, it may as well leave unspecified whether or not ``causes'' or natural laws are computable.
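To illustrate this interchangeability, here is a minimal sketch in Python (the names p, p′ and s follow the text; everything else is illustrative):

    def p(s):
        """A program p requiring some particular input s."""
        return 2 * len(s)              # any effective computation on s

    S = "some initial value"           # the particular input s

    def p_prime():
        """p', requiring no input: s has been coded into it as a constant."""
        return p(S)

    def p_new(s):
        """Conversely, the constant is externalized again as an input."""
        return p(s)

    assert p_prime() == p_new(S)       # both compute the same result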
Therefore, it turns out that celestial mechanics has never been in the realm of any ``reasonable'' Laplacian computing demon but was predestined to become part of chaos theory. Of course, one could endow Laplace's demon with oracle computing power, but then oracle computation would just be another word for Almighty God.
Moreover, recursive analysis states that there are computable functions which have their maximum at an uncomputable argument [10,17]. This may be important for physical variational principles.
All this may tempt one into thinking that physics has ``finally'' come to the conclusion that the world is irrational at heart and therefore cannot be fully modeled by any reasonable formalism. Any statement denying this may be considered unnecessary at best and sheer heresy and nonsense at worst.
Contrary to such sentiments, there is still reason to pretend that the universe is governed by constructive laws, and that it can be described by a finite set of symbols representing these laws. This may be the reason for the ``unreasonable effectiveness of mathematics in the natural sciences'' [18].
With regard to physics, constructivism and classical non-constructivism are paradigms, not facts. After all, we shall never be able to prove whether the world as we perceive it is solely governed by constructive laws. For it is impossible to know all constructive laws and their consequences [19].
A third type of physical undecidability, if it exists, is randomness or, in its weaker form, non-recursivity of a system evolution or of the initial values, et cetera. It cannot be constructively re-interpreted. As will be argued further on, any formal modeling of this third form of physical undecidability necessitates a form of oracle computation which is too strong to be consistent.
Computational complementarity is based upon the observation [20] that an interaction of the experimenter with the observed object - modeled by an exchange of information between algorithmic objects - may induce a transition of the observed object which results in the impossibility of measuring another, complementary, observable; and the same is true vice versa. The observer has a qualifying influence on the measurement result insofar as a particular observable has to be chosen among a class of non-co-measurable observables. But the observer has no quantifying influence on the measurement result insofar as the outcome of a particular measurement is concerned [21].
This can be modeled by finite automata [22,23]. An automaton (Mealy or Moore machine) is a finite deterministic system with input and output capabilities. At any time the automaton is in one state of a finite set of states. The state determines the future input-output behavior of the automaton. If an input is applied, the machine assumes a new state, depending both on the old state and on the input, thereby emitting an output which depends either on the old state and the input (Mealy machine) or only on the new state (Moore machine). Automata experiments are conducted by applying an input sequence and observing the output sequence. The automaton is thereby treated as a black box with known description but unknown initial state. As has already been observed by Moore, it may occur that the automaton undergoes an irreversible state change, i.e., information about the automaton's initial state is lost. A second, later experiment may therefore be affected by the first experiment, and vice versa. Hence, both experiments are incompatible.
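A minimal simulation of such an automaton experiment (a sketch; the Python layout is mine, not from the text) shows the irreversible loss of initial-state information:

    class Mealy:
        """Mealy machine: transition delta(state, input), output lam(state, input)."""
        def __init__(self, delta, lam, state):
            self.delta, self.lam, self.state = delta, lam, state

        def step(self, symbol):
            out = self.lam(self.state, symbol)
            self.state = self.delta(self.state, symbol)   # irreversible update
            return out

    # Example: input i always drives the machine into state i ("measurement");
    # the output is 1 iff the guess i hits the current state.
    delta = lambda s, i: i
    lam   = lambda s, i: int(s == i)

    m = Mealy(delta, lam, state=2)
    print(m.step(1))  # 0: the guess "state 1" misses; the machine is now in state 1
    print(m.step(2))  # 0: this probes the *new* state, not the initial one;
                      # whether the machine started in state 2 or 3 is lost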
Corresponding to any such automaton there is a propositional structure, its so-called automaton logic or partition logic, for which experimental statements from input-output analysis are ordered; cf. appendix C. In Fig. 1, all experimental-logical structures of four-state Mealy automata are drawn.
Similarity and difference between quantum and computational complementarity can be made precise. A systematic investigation reveals that automaton logic is mostly non-Boolean and thus non-classical. Many but not all orthomodular lattices occurring in quantum logic can be realized by the logic of some particular automaton [24,25], but automaton logic is not identical with quantum logic [6,26].
Since any finite state automaton can be simulated by a universal computer, the class of non-Boolean automaton logic - and not classical Boolean logic - corresponds to the natural order of events in (sufficiently complex) computer generated universes. To put it pointedly: if the physical universe is conceived as the product of a universal computation, then complementarity is an inevitable and necessary feature of its intrinsic perception or endophysics. It cannot be avoided. Computational complementarity may serve as a constructive re-interpretation of quantum complementarity.
Unpredictable events ``occurring at random'' may result from the intrinsic description of systems which are computable on a step-by-step basis.
As Gödel himself put it (cf. [27], p. 55), ``I think the theorem of mine which von Neumann refers to is not that on the existence of undecidable propositions or that on the lengths of proofs but rather the fact that a complete epistemological description of a language A cannot be given in the same language A, because the concept of truth of sentences of A cannot be defined in A. It is this theorem which is the true reason for the existence of undecidable propositions in the formal systems containing arithmetic.''
That a system which is computable on a step-by-step basis features uncomputability in forecasting sounds amazing, if not self-contradictory, at first. Yet this can be ``algorithmically proven'' quite easily (cf. appendix D). The method of diagonalization employed in the proof closely resembles Cantor's diagonalization method (based on the ancient liar paradox [28]) which has been applied by Gödel, Turing and others for undecidability proofs in a recursion theoretic setup.
To prove undecidability for a particular physical system, a universal computer such as a universal Turing machine is usually embedded in that system. Then, one (mostly implicitly) applies diagonalization to obtain undecidability. Therefore, any physical realisation of a computer (with potentially infinite memory) is an example of a physical system for which undecidable propositions can be formulated. Rather than consider this further, I shall concentrate on how the method of diagonalization can be applied in quantum information theory.
Diagonalization effectively transforms the classical bit value ``0'' into the value ``1'' and ``1'' into ``0.'' Any information has a physical representation. The corresponding classical bit states can be quantum mechanically expressed by $|0\rangle$ and $|1\rangle$.

Quantum information theory allows a coherent superposition of the classical bit states. Therefore the quantum bit states are

$$ |x,y\rangle = x\,|0\rangle + y\,|1\rangle , \qquad |x|^2 + |y|^2 = 1 . \qquad (1) $$

Consider the operator $\hat{D}$ defined by $\hat{D}\,|0\rangle = |1\rangle$ and $\hat{D}\,|1\rangle = |0\rangle$. In this state basis ($\tau_1$ stands for the Pauli spin operator),

$$ \hat{D} = \tau_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} . \qquad (2) $$

$\hat{D}$ will be called the diagonalization operator, despite the fact that its only nonvanishing components are off-diagonal.

$\hat{D}$ has a fixed point at

$$ |x^{*}\rangle = \left| \tfrac{1}{\sqrt{2}} , \tfrac{1}{\sqrt{2}} \right\rangle = \tfrac{1}{\sqrt{2}} \left( |0\rangle + |1\rangle \right) . \qquad (3) $$

This fixed point is a coherent superposition which coincides with neither of the classical bit states $|0\rangle = |1,0\rangle$ or $|1\rangle = |0,1\rangle$; i.e.,

$$ \hat{D}\,|x^{*}\rangle = |x^{*}\rangle \neq |0\rangle , |1\rangle . \qquad (4) $$
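The fixed point can be checked directly; a minimal numerical sketch (using numpy; not part of the original text):

    import numpy as np

    D = np.array([[0, 1],
                  [1, 0]])                     # diagonalization operator tau_1

    x_star = np.array([1, 1]) / np.sqrt(2)     # coherent superposition of |0>, |1>

    print(np.allclose(D @ x_star, x_star))     # True: |x*> is a fixed point of D
    print(np.linalg.eigvals(D))                # eigenvalues +1 and -1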
Is it possible for a computable system to contain a ``blueprint,'' a complete representation, of itself? This question has been raised by von Neumann in his investigation of self-reproducing automata. With such a ``blueprint'' it should be possible for the automaton to construct replicas of itself [27,31,32].
To avoid confusion, it should be noted that it is never possible for a finite description to contain itself as a proper part. The trick is to employ representations or names of objects, whose code can be smaller than the objects themselves and can indeed be contained in those objects (cf. [32], p. 165).
The sentence constructed in the proof of Gödel's first incompleteness theorem [33], stating its own unprovability, is such an example [27].
Another example is the existence of descriptions p of length |p| whose algorithmic information content $H(p) = |p| + H(|p|) + O(1) = |p| + ||p|| + |||p||| + \cdots + O(1)$ exceeds the length of their code. Intuitively, they can be interpreted as representing algorithmically useful information (e.g., coded in the program length, in the length of the program length, in the length of the length of the program length, and so on) which is not contained in an immediate interpretation of the symbols of the string alone [36].
Kleene's fixed-point theorem of recursive function theory states that for any total recursive function f there exists an index i such that i and f(i) compute the same function; i.e., $\varphi_i = \varphi_{f(i)}$ [31,32]. One application of the fixed-point theorem is the existence of self-reproducing machines and, therefore, the existence of intrinsically representable system ``blueprints'' [6].
This is an indication that it is at least possible to represent all the (finite-size) laws governing the system within the system. A second aspect, which was the motivation for von Neumann to study self-reproduction, is the possibility for living systems to reproduce.
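The recursion-theoretic possibility of a self-contained ``blueprint'' has a familiar programming counterpart, the quine; here is one standard Python form (a sketch, not from the text):

    # The two lines below print an exact copy of themselves; the "blueprint"
    # is the *name* s, whose expansion yields the whole program (these
    # comment lines are not part of the reproduced core).
    s = 's = %r\nprint(s %% s)'
    print(s % s)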
A totally different problem is the question of how, if at all, a system can obtain such a blueprint by mere self-inspection. Two considerations yield the impossibility of such an attempt for the general case. The first one is connected to the recursive unsolvability of the rule inference problem [38,39,40,41]. The second one, which will be discussed below, is connected to the disruptive character of self-measurement [6].
Even without self-reference it is impossible to guess the law governing an effectively computable system. Assume some particular (universal) machine U which is used as a ``guessing device.'' Then there exist total functions which cannot be ``guessed'' or inferred by U. One can also interpret this result in terms of the recursive unsolvability of the halting problem: there is no recursive bound on the time the guesser U has to wait in order to make sure that the guess is correct.
Self-reproduction by self-inspection usually presupposes an unchanging original. In the general case, this is again impossible because of disruptive effects. To put it pointedly: self-measurement exhibits (paradoxical) features strongly resembling complementarity. An idealised self-referential measurement attempts the impossible: on the one hand it pretends to grasp the ``true'' value of an observable, while on the other hand it has to interact with the object to be measured and thereby inevitably changes its state. Integration of the measurement apparatus does not help, because then the observables inseparably refer to the state of the object and the measurement apparatus combined, thereby surrendering the original goal of measurement (i.e., the measurement of the object alone). These considerations apply to quantum as well as to classical physics, with the difference that quantum theory postulates a lower bound on the transfer of action given by Planck's constant $\hbar = h/(2\pi)$. Thus computational complementarity, as described above, serves as a further modelling tool for the principal impossibility of self-reproduction by self-inspection.
Imagine statements of the form, ``fed with program x and input y, my computer will output z,'' or ``at time t the system will be in state xyz.'' In general, such forecasts cannot be computed essentially faster than by explicitly running the system itself; any unbounded computational ``speedup'' of the prediction is blocked.
This blocking of speedup theorems interpretable as forecasts applies even to observers who are outside of the system. It becomes even more dramatic when rephrased in terms of self-referential prediction. The following argument resembles Zeno's paradox of ``Achilles and the Tortoise'' [43]. K. Popper has given a similar account [44], based on what he calls the ``paradox of Tristram Shandy.'' Think of the attempt of a finitely describable ``intelligence'' or computing agent to understand itself completely. It might first try to describe itself by printing its initial description. (It has been argued above that there is nothing wrong with this attempt per se, and that there indeed exist automata which contain the ``blueprint'' of themselves.) But then it has to describe itself printing its initial description. Then it has to describe itself printing itself printing its initial description. Then it has to describe itself printing itself printing itself printing its initial description, and so on ad infinitum. Any reflection about itself ``steers'' the computing agent into a never-ending vicious circle. In a sense, ``in the limit of an infinity of such circles,'' the agent would have completed the task of complete self-comprehension. Yet, for any finite time, this cannot be achieved.
In psychology, the above setup is referred to as the observing ego. In experiments of this kind - e.g., imagine a vase on a table; now imagine you imagining a vase on a table; now imagine you imagining you imagining a vase on a table; now imagine you imagining you imagining you imagining a vase on a table; now imagine you imagining you imagining you imagining you imagining a vase on a table - humans may concentrate on 3-5 levels of iteration.
Quantitatively, one message of undecidability theorems is that, in a very particular sense, one cannot get more wisdom out of a system than one has put in. This can be proven within algorithmic information theory [45,46]. Related to this is the fact that it is shorter to describe a family of objects than to describe particular objects of the family.
There is yet another, qualitative, moral of undecidability theorems. It seems that whenever a system becomes too powerful, it becomes inconsistent. Conversely, any reasonable, i.e., consistent, system must be limited. This is true for formal systems as well as for physical ones.
Let us demonstrate this in the context of classical physics [47]. If one is willing to accept classical, i.e., Hilbert-style analysis, then classical continuum mechanics, and, in fact, any theory based on dense sets, becomes inconsistent.
Continuum theory, and in fact any theory based on dense sets, allows the construction of ``infinity machines,'' which could serve as oracles for the halting problem [48,49,6]. Their construction closely follows Zeno's paradox of Achilles and the Tortoise, by squeezing the time $\tau$ it takes for successive steps of computation in geometric progression:
I.e., the time necessary for the $n$th step becomes $\tau(n) = k^n$ with $0 < k < 1$. The limit of infinite computation is reached in finite physical time, since

$$ \lim_{N \rightarrow \infty} \sum_{n=1}^{N} \tau(n) = \sum_{n=1}^{\infty} k^n = \frac{k}{1-k} < \infty . $$
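A trivial numerical check of the finite total duration (a sketch; k = 1/2 is an arbitrary choice):

    # Partial sums of the step times tau(n) = k**n approach k/(1-k):
    k = 0.5
    total = sum(k**n for n in range(1, 60))
    print(total, k / (1 - k))   # both ~ 1.0: infinitely many steps, finite time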
On such oracle machines it would be possible, for instance, to ``oracle-compute'' the non-recursive limit of Specker's bounded increasing sequence of rational numbers [50] as well as Chaitin's halting ``probability'' $\Omega$ [45]. It would also be possible to prove or disprove Fermat's theorem by a program of a few statements [6], as well as to ``solve'' the halting problem. - Ay, there's the rub! As has been argued before, a ``halting algorithm'' may be used in a diagonalization argument to construct a complete contradiction in classical information theory (cf. Appendix D). The same argument can be used to derive the inconsistency of classical mechanics. Here, the term ``classical'' refers both to physical continuum mechanics, as well as to mathematical non-constructivism.
There is no a priori reason in classical physics to exclude such infinite processes and thus to avoid this inconsistency. One may nevertheless attack the argument both metamathematically and physically. Constructive mathematics denies the existence of objects which cannot be obtained by finite means [51,52]. Finitism in physics excludes the existence of natural entities which correspond to infinities.
A third possibility would be to accept fundamental inconsistencies. How would inconsistencies in the intrinsic phenomenology ``show up'' and be perceived by an observer? At face value, it is taken for granted that phenomena ``have to be'' consistent. Yet there is some reason to suspect that inconsistencies may be perceived as a certain type of ``fuzziness'' or unpredictability [29]. After all, there exist reasonable algorithmic entities such as expert systems or databases which may become inconsistent, yet remain of value for certain applications.
The term physical universe will be used as a synonym for the universe we live and do physics in. The terms virtual reality or computer-generated universe or cyberspace are synonyms for some reality mediated by some computing agent. The term player stands for a conscious observer, who for instance could be thought of as living in the physical universe and who is experiencing the virtual reality via some interface.
An interface connects two universes. It is a means or mode of communication and interaction between two universes. For example, one universe may be our physical universe, while the other universe may be a virtual, computer-generated, reality. In another interpretation, both universes may be identical.
An interface always characterises a distinction between two universes [53]. This distinction may be formed by a cut within one universe. Take the process of observation. It can be modeled by a cut between the observer and the object under observation.
Yet, while in many practical cases the arguments justify the view of the interface as an asymmetric device, it should in principle be perceived and modeled as a symmetric device which allows information to flow between two universes.
Take, for example, a summer meadow. You are observing it. How is the summer meadow observing you? You have consciousness. What consciousness has the summer meadow?
What does it mean for an interface to some virtual reality ``to be safe?'' Will the present safety measures applying to household equipment such as television sets suffice? Will it for instance be enough to ground a data glove or to restrict the light intensity of an eye-phone?
One may state that the more powerful the interface is, the more beneficial or malign the influences of the virtual reality on the actor will be, both from a physical and, much more so, from a psychical point of view. Since human consciousness has the tendency to construct a ``consistent, lucid reality,'' the player's awareness of being in a virtual environment gets lost.
But even if the player using the interface is well aware of this fact, it may be hard to leave a virtual reality. What if the player is not aware? What if the player gets killed in virtual reality?
Certain potentials for misfortune arise if a virtual reality back-reacts in a destructive way. There is no interface design without any interaction between the virtual reality and the senses of the observer. What if the virtual reality or the interface malfunctions, or is the subject of a criminal attempt or of an unpredictable malignancy? This may hurt the observer. Take, as an example, a wrestling experience with a huge spider. One has to make sure that, at least physically, the interface cannot in any way harm the observer. Probably the only general way to do this is to allow for some form of virtual emergency exit: a super-rule push-button providing the observer with an exit from the interface, and thus from the virtual reality, at any particular instant, no matter what. Or, one may use intrinsic means of sensual limitation, very much as endorphins limit certain pains. As a consequence of unhappy occurrences, terrible traumata will occur, posing a challenge both for medicine and jurisdiction.
Much as computer viruses physically destroy hardware, malign virtual realities may destroy actors physically. They may make use of the actor's primary body to harm it via the interface. This virtual backflow [6] is an instance where there is a reference, indirect though it may be, of the virtual reality to its meta-universe [54]. It is also an additional ``opening'' of one universe into the other; an irregular interface, if you like.
Since for safety reasons an interface to some virtual reality will have to be designed to give only limited control to the meta-world of the person's physical existence, the actor's behavior will remain ``psychic'' with respect to the virtual reality.
For the same reason, i.e., the limited access (via the interface) to the meta-world, the intrinsically definable (operationalisable) physics of the virtual reality must necessarily remain non-deterministic, because there will always be factors and information emanating from the interface which have no intrinsic cause.
Take, for example, a virtual reality which is accessible by eyephone and data glove. These interfaces - eyephone and data glove - have an intrinsic representation in the virtual reality; e.g., as two eyes and a hand with five fingers.
Consider the hand if it is not in use. Then its evolution can be completely described by the laws dominating the virtual reality. One may also say that the interface is ``idle.''
Now consider the hand when some player decides to use it. This decision cannot be predicted from within the virtual reality, but depends on outside criteria; e.g., the player coming back from somewhere else et cetera. Furthermore, with the interface ``at work,'' the evolution can no longer be completely described by the laws dominating the virtual reality. When the actor decides to leave the virtual reality, say, because the actor has run out of time, money or is simply hungry, this decision is again intrinsically undecidable.
The setup can be modeled by an infinite deterministic computation (= the computer-generated universe) receiving data input via the interface. Whereas one might be able to formulate a deterministic ``meta''-model of both the infinite computation and the data input on a meta-level, the data input is neither predictable nor controllable from within the infinite computation. Therefore, the data input as seen from within the computer-generated universe appears as a miracle. It may nevertheless be possible to completely describe the interface by changing the level of description to a higher ``meta-description'' level which includes the physical universe of the player.
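A toy rendition of this setup (all names and the update rule are illustrative only): a deterministic intrinsic law, plus an interface channel whose events have no intrinsic cause.

    import random   # stands in for the outside world, invisible from within

    def intrinsic_step(state):
        """Deterministic 'natural law' of the computer-generated universe."""
        return (3 * state + 1) % 1000

    state = 42
    for t in range(100):
        state = intrinsic_step(state)        # predictable, step-by-step
        if random.random() < 0.05:           # interface event, decided outside
            state = random.randrange(1000)   # intrinsically a causeless "miracle"
    print(state)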
This directly translates into Philipp Frank's considerations of so-called ``Lücken in den Naturgesetzen'' (``gaps in the natural laws'') and ``Wunder'' (``miracles''); cf. [19], sections III.12-15 & VI.21. It can also be rephrased in terms of dualistic mind-body models as for instance envisaged by Eccles [55]: in our terminology, the brain as well as other body organs may be interpreted as an interface to the physical universe. The player is interpreted as an ``(intrinsically) immortal soul.'' Take The Doors' ``no one here gets out alive,'' or Godard's ``we are the dead on vacation.''
One can measure, and by counterfactual inference multiply, the three ``observables'' $m_x^1 m_y^2$, $m_y^1 m_x^2$ and $m_z^1 m_z^2$, by which one obtains $+1$. One can also do that by measurement and by counterfactual inference and multiplication of the three ``observables'' $m_x^1 m_x^2$, $m_y^1 m_y^2$ and $m_z^1 m_z^2$, by which one obtains $-1$. In that way, one has obtained either $P = +1$ or $P = -1$. Associate with $P = +1$ the bit state zero and with $P = -1$ the bit state one. Then the bit is either zero or one, depending on the way or context in which it was inferred. This kind of contextuality is deeply rooted in the non-Boolean algebraic structure of quantum propositions. Note also that the above argument relies heavily on counterfactual reasoning, because, for instance, only two of the six observables $m_i^j$ can actually be experimentally determined.
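The two context-dependent product values can be verified on the operator level; a numpy sketch, assuming the standard identification of the $m$'s with Pauli spin operators acting on particles 1 and 2:

    import numpy as np

    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], dtype=complex)

    # Context 1: (sx x sy)(sy x sx)(sz x sz) = +1
    P1 = np.kron(sx, sy) @ np.kron(sy, sx) @ np.kron(sz, sz)
    # Context 2: (sx x sx)(sy x sy)(sz x sz) = -1
    P2 = np.kron(sx, sx) @ np.kron(sy, sy) @ np.kron(sz, sz)

    print(np.allclose(P1, np.eye(4)), np.allclose(P2, -np.eye(4)))  # True True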
F is not valid in three-dimensional (real) Hilbert space $E_3$, provided one identifies the $a$'s, $b$'s, $c$'s and $d$'s with the eleven one-dimensional subspaces of $E_3$ spanned by the rays drawn in Fig. 2 below.
Let the ``or'' operation be represented by the linear span of $S(v)$ and $S(w)$, i.e., $S(v) \vee S(w) = \{ av + bw \mid a, b \in \mathbb{R} \}$.
Let the ``and'' operation be represented by the set-theoretic intersection, i.e., $S(v) \wedge S(w) = S(v) \cap S(w)$.
Let the complement be represented by the orthogonal subspace of $S(v)$, i.e., $\neg S(v) = \{ w \mid v \cdot w = 0 \}$.
Let the ``implication'' relation be represented by $S(v) \rightarrow S(w) \equiv (\neg S(v)) \vee S(w)$.
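These lattice operations can be realized numerically on subspaces represented by orthonormal basis matrices; a sketch via numpy's SVD (the helper names are mine):

    import numpy as np

    def orth(A, tol=1e-10):
        """Orthonormal basis of the column space (the span) of A."""
        U, s, _ = np.linalg.svd(A, full_matrices=False)
        return U[:, s > tol]

    def complement(A, n=3, tol=1e-10):
        """Orthogonal complement of the subspace spanned by A's columns."""
        if A.shape[1] == 0:
            return np.eye(n)
        U, s, _ = np.linalg.svd(A, full_matrices=True)
        s = np.concatenate([s, np.zeros(n - len(s))])
        return U[:, s <= tol]

    def join(A, B):                 # "or": linear span of both subspaces
        return orth(np.hstack([A, B]))

    def meet(A, B, n=3):            # "and": intersection, via the de Morgan dual
        return complement(join(complement(A, n), complement(B, n)), n)

    def implies(A, B, n=3):         # implication: (not A) or B
        return join(complement(A, n), B)

    S1 = np.array([[1.0], [0.0], [0.0]])    # one-dimensional subspaces of E3
    S2 = np.array([[0.0], [1.0], [0.0]])
    print(join(S1, S2).shape[1])    # 2: the spanned plane
    print(meet(S1, S2).shape[1])    # 0: distinct rays meet only in {0}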
Then the expressions (5), …, (20) all evaluate to $E_3$, whereas (21) evaluates to $\neg S(1,0,0) \neq E_3$. Therefore, at least for states lying in the direction $(1,0,0)$ [58], F is not a quantum tautology.
The set of eleven rays can be represented by vectors from the center of a cube to the indicated points [12], as drawn in Fig. 2.
[Fig. 2: the eleven rays, represented by vectors from the center of a cube; picture omitted.]
Consider an automaton with three states 1, 2, 3 and input/output symbols 1, 2, 3, defined by the following transition and output tables:

                          state 1   state 2   state 3
    input function  δ1       1         1         1
    input function  δ2       2         2         2
    input function  δ3       3         3         3
    output function o1       1         0         0
    output function o2       0         1         0
    output function o3       0         0         1
Input of 1, 2 or 3 steers the automaton into the states 1, 2 or 3, respectively. At the same time, the output of the automaton is 1 only if the guess is a ``hit,'' i.e., only if the automaton was in that state. Otherwise the output is 0. Hence, after the measurement, the automaton is in a definite state, but if the guess is no ``hit,'' the information about the initial automaton state is lost. Therefore, the experimenter has to decide before the actual measurement which one of the following hypotheses should be tested (in short-hand notation, `` {1}'' stands for ``the automaton is in state 1'' etc.):
$\{1\} = \neg\{2,3\}$, $\{2\} = \neg\{1,3\}$, $\{3\} = \neg\{1,2\}$. Measurement of any one of these three hypotheses (or its complement) makes measurement of the other two hypotheses impossible.
No input, i.e., the empty input string ∅, identifies all three internal automaton states. This corresponds to the trivial information that the automaton is in some internal state. Input of the symbol 1 (and all input sequences starting with 1) distinguishes between the hypothesis {1} (output ``1'') and the hypothesis {2,3} (output ``0''). Input of the symbol 2 (and all input sequences starting with 2) distinguishes between the hypothesis {2} (output ``1'') and the hypothesis {1,3} (output ``0''). Input of the symbol 3 (and all input sequences starting with 3) distinguishes between the hypothesis {3} (output ``1'') and the hypothesis {1,2} (output ``0''). The intrinsic propositional calculus is thus defined by the partitions [6]
{{1},{2,3}}, {{2},{1,3}} and {{3},{1,2}}.
This lattice is of the ``Chinese lantern'' $MO_3$ form. It is non-distributive, but modular.
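A sketch computing these experimental partitions directly from the automaton tables of this appendix (helper code, not from the text):

    states = (1, 2, 3)
    delta = lambda s, i: i               # transition: input i drives the state to i
    lam   = lambda s, i: int(s == i)     # output: 1 iff the guess i is a "hit"

    def response(s0, word):
        """Output sequence of the automaton started in state s0 on input word."""
        out, s = [], s0
        for i in word:
            out.append(lam(s, i))
            s = delta(s, i)
        return tuple(out)

    def partition(word):
        """Lump states together iff the experiment word cannot separate them."""
        classes = {}
        for s in states:
            classes.setdefault(response(s, word), []).append(s)
        return sorted(classes.values())

    print(partition(()))     # [[1, 2, 3]] -- the empty experiment
    print(partition((1,)))   # [[1], [2, 3]]
    print(partition((2,)))   # [[1, 3], [2]]
    print(partition((3,)))   # [[1, 2], [3]]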
The obtained intrinsic propositional calculus in many ways resembles the lattice obtained from photon polarization experiments or from other incompatible quantum measurements. Consider an experiment measuring photon polarization. Three propositions of the form
``the photon has polarization $p_{\varphi_1}$,'' ``the photon has polarization $p_{\varphi_2}$,'' and ``the photon has polarization $p_{\varphi_3}$'' cannot be measured simultaneously for the angles $\varphi_1 \neq \varphi_2 \neq \varphi_3 \neq \varphi_1 \pmod{\pi}$. An irreversible measurement of one direction of polarization would result in a state preparation, making measurement of the other directions of polarization impossible, and resulting in a propositional calculus of the ``Chinese lantern'' form $MO_3$.
Consider an arbitrary algorithm B(x) whose input is a string of symbols x. Assume that there exists a ``halting algorithm'' HALT which is able to decide whether B terminates on x or not.
Using HALT(B(x)) we shall construct another deterministic computing agent A, which has as input any effective program B and which proceeds as follows: Upon reading the program B as input, A makes a copy of it. This can be readily achieved, since the program B is presented to A in some encoded form #(B), i.e., as a string of symbols. In the next step, the agent uses the code #(B) as input string for B itself; i.e., A forms B(#(B)), henceforth denoted by B(B). The agent now hands B(B) over to its subroutine HALT. Then, A proceeds as follows: if HALT(B(B)) decides that B(B) halts, then the agent A does not halt; this can for instance be realized by an infinite DO-loop; if HALT(B(B)) decides that B(B) does not halt, then A halts.
We shall now confront the agent A with a paradoxical task by choosing A's own code as input for itself. - Notice that B is arbitrary and has not been specified yet. The deterministic agent A is representable by an algorithm with code #(A). Therefore, we are free to substitute A for B.
Assume that classically A is restricted to classical bits of information. Then, whenever A(A) halts, HALT(A(A)) forces A(A) not to halt. Conversely, whenever A(A) does not halt, then HALT(A(A)) steers A(A) into the halting mode. In both cases one arrives at a complete contradiction. In the classical computational base, this contradiction can only be consistently avoided by assuming the nonexistence of A and, since the only nontrivial feature of A is the use of the peculiar halting algorithm HALT, the impossibility of any such halting algorithm.
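In programming terms the argument is the classic impossibility proof for a halting decider; a Python rendition (HALT is a hypothetical stub, which is precisely what the argument refutes):

    def HALT(program_source, input_string):
        """Hypothetical halting oracle: True iff the program halts on the input.
        It cannot actually be implemented -- that is the point of the argument."""
        ...

    def A(program_source):
        """The agent A: does the opposite of what HALT predicts for B(B)."""
        if HALT(program_source, program_source):   # B(B) halts?
            while True:                            # ...then A does not halt
                pass
        return                                     # ...otherwise A halts

    # Feeding A its own code #(A) yields the contradiction: if A(A) halts,
    # HALT forces it into the infinite loop; if it loops, HALT makes it halt.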
Assume an interaction I acting in both universes $S_1$ and $S_2$. Let us use this interaction for measurements. In this way we get two associated intrinsic parameter descriptions $P(S_1,I)$ and $P(S_2,I)$. We shall call the universe $S_2$ approximately closed with respect to $S_1$ and to the interaction I if $S_2$ responds only ``slightly'' to changes in $S_1$. Formally, this situation can be written as $\partial P(S_2,I)/\partial p_i \approx 0$ for all $p_i \in P(S_1,I)$, or just $\partial P(S_2,I)/\partial P(S_1,I) \approx 0$. In the language of cybernetics, this is the same as saying that a system $S_2$ is approximately autonomous with respect to $S_1$ if the effect of its output affects its input only slightly, such that no feedback loop via $S_1$ occurs [59].
Assume again two universes $S_1$ and $S_2$, and two interactions $I_1$ and $I_2$. Assume further that $S_1$ and $S_2$ are approximately closed with respect to one interaction, say $I_1$: $\partial P(S_1,I_1)/\partial P(S_2,I_1) \approx \partial P(S_2,I_1)/\partial P(S_1,I_1) \approx 0$. We shall now spoil the symmetry by requiring that one system, say $S_1$, is sensitive to the interaction $I_2$, whereas $S_2$ is not: $\partial P(S_1,I_2)/\partial P(S_2,I_2) \neq 0 \wedge \partial P(S_2,I_2)/\partial P(S_1,I_2) \approx 0$. Hence, effectively, the almost closed system $S_1$ is a close realization of an Archimedean point from which the system $S_2$ and the interaction $I_1$ can be described; $I_2$ serves merely as a reference interaction. Since observations in $S_1$ will not affect $S_2$ too much, the operational parameter description $P(S_2,I_2)$ will be called quasi-extrinsic. Parameters in $P(S_2,I_2)$ which cannot be measured by $I_1$ in $S_2$ are external, hidden, parameters in $S_2$.
In this context, the extrinsic parameter description might be defined via a limit: a parameter description $P(S_1,I_2)$ from $S_2$ is called extrinsic if $S_1$ and $S_2$ are totally closed with respect to both interactions $I_1$ and $I_2$. Clearly, this is impossible to realize, since there cannot be any exchange between universes without altering the states of both.
In what follows I shall give an example of such a configuration: assume a pool filled with water, which will serve as system $S_1$. Let us assume further an optical instrument recording electromagnetic radiation as part of our system $S_2$, and the interactions $I_1$ and $I_2$, identified with water-wave interactions and the electromagnetic interaction, respectively. Since light does not affect water-wave dynamics appreciably, but changes the state of the optical instrument, a realization of the described model is obtained, with the optical instrument yielding a quasi-extrinsic view of the pool.
Let a universe be represented by the symbols ``()''. Let a cut or interface be represented by some double line symbol such as ``]['' characterizing the two universes it connects.
Then the process of distinction creates a cut within one universe: it creates two new distinct universes, with the interface lying along the cut; i.e., () → (][). In the process of condensation, two formerly distinct universes communicate via the interface, which is again symbolized as a (symmetric) cut; i.e., ()() → (][). If the interface is not symmetric, such as in the above (quasi-) extrinsic setup, then in the process of condensation the two formerly distinct universes communicate via an interface which is now symbolized as an asymmetric cut; i.e., ()() → ()[) or ()() → (](). Also for asymmetric interfaces, the process of distinction is symbolized by an asymmetric cut; i.e., () → ()[) or () → (]().