Let us assume that our world is discretely organized, and that it is governed by constructive, i.e., effectively computable, laws []. By that assumption, there exists a ``blueprint,'' a complete set of rules or laws governing the universe. This is unlike in mathematics, for which Gödel, Tarski, Turing and others proved that no reasonable (i.e., strong enough and consistent) formal system will ever be able to prove all true well-formed statements. Indeed, Chaitin proved that certain mathematical entities are as random as a sequence produced by the tossing of a fair coin [,]. Hence, let us assume that, when it comes to an enumeration of laws and initial values, nature is finitely ``shallow'' while mathematics is infinitely ``deep'' [].
One might think of such a world as an ongoing computation based on a finite-size algorithm; or, equivalently, as the derivation of successive theorems of a formal system. It is very much like a virtual reality, a computer-generated universe.
Any observer who is embedded in such a system [,,] is naturally limited by the methods, devices and procedures which are operational therein. Such an observer cannot ``step outside'' [] of this ``Cartesian prison'' and is therefore bound to self-referential perception. Can one give concrete meaning to this ``boundedness by self-reference?'' Indeed, a research program is proposed here which is capable of the formal evaluation of bounds to self-reference. This program is based on a recursion-theoretic re-formulation of physics. It may result in a paradigm change concerning the perception of indeterminism in physics.
Is it possible for a system to contain a ``blueprint,'' a complete representation, of itself? This question was raised by von Neumann in his investigation of self-reproducing automata. With such a ``blueprint'' it should be possible for the automaton to construct replicas of itself [,,].
To avoid confusion, it should be noted that it is never possible to have a finite description with itself as a proper part. The trick is to employ representations or names of objects, whose code can be smaller than the objects themselves and can indeed be contained in those objects (cf. [], p. 165).
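This trick can be made concrete in a few lines of code. The following sketch is the classical quine construction (not taken from the text): a program that contains a name of itself, the string s, rather than itself as a proper part.

```python
# A quine: the two code lines below print their own source text exactly.
# The program does not contain itself as a proper part; it contains a
# representation (the string s) from which its full text is rebuilt.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Feeding the output back in reproduces it indefinitely, just as von Neumann's self-reproducing automata rebuild themselves from an internal blueprint.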
The undecidable sentence constructed in the proof of Gödel's first incompleteness theorem [], which states its own unprovability, is such an example [].
Another example is the existence of descriptions p of length |p| whose algorithmic information content
H(p) = |p| + H(|p|) + O(1) = |p| + ||p|| + |||p||| + … + O(1)
exceeds the length of their code. Intuitively, they can be interpreted as representing algorithmically useful information (e.g., coded in the program length, in the length of the program length, in the length of the length of the program length, …) which is not contained in an immediate interpretation of the symbols of the string alone [].
This is an indication that it is at least possible to represent all the (finite-size) laws governing the system within the system. A second aspect, which was the motivation for von Neumann to study self-reproduction, is the possibility for living systems to reproduce.
Even without self-reference it is impossible to guess the law governing an effectively computable system. Assume some particular (universal) machine U which is used as a ``guessing device.'' Then there exist total functions which cannot be ``guessed'' or inferred by U. One can also interpret this result in terms of the recursive unsolvability of the halting problem: there is no recursive bound on the time the guesser U has to wait in order to make sure that the guess is correct.
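The diagonal construction behind this fact fits in a few lines. A sketch in Python (the guesser shown is hypothetical; any effective extrapolation strategy can be substituted): given any total guessing function, one builds a total function that differs from every one of its predictions.

```python
# Diagonalization against a "guessing device": for ANY effective strategy
# `guess` that predicts the next value of a sequence from a finite prefix,
# the sequence built below disagrees with every one of its predictions.

def defeat(guess, length):
    """Construct a prefix of a total function on which `guess` always errs."""
    prefix = []
    for _ in range(length):
        prefix.append(guess(prefix) + 1)  # output anything but the prediction
    return prefix

# A hypothetical guesser: "the sequence continues with its last value".
def repeat_last(prefix):
    return prefix[-1] if prefix else 0

seq = defeat(repeat_last, 5)  # every value of seq defies repeat_last
```

The same construction works against any total computable guesser, which is why no single machine U can infer all total functions.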
Computational complementarity is not directly related to diagonalization and is a second, independent source of indeterminism. It is already realizable at an elementary pre-diagonalization level, i.e., without the requirement of computational universality or its arithmetic equivalent. The corresponding machine model is the class of finite automata, characterized by a finite number of states and a finite number of input and output symbols [,].
Computational complementarity is based on the following observation []. Whenever the experimenter is part of the system, any measurement of one particular system feature causes an interaction with the observed object. This interaction may induce a transition of the observed object which makes it impossible to measure another, complementary, observable, and vice versa.
An automaton (Mealy or Moore machine) is a finite deterministic system with input and output capabilities. At any time the automaton is in a state q of a finite set of states Q. The state determines the future input-output behavior of the automaton. If an input is applied, the machine assumes a new state, depending both on the old state and on the input, emitting thereby an output, depending also on the old state and the input (Mealy machine) or depending only on the new state (Moore machine). Automata experiments are conducted by applying an input sequence and observing the output sequence. The automaton is thereby treated as a black box with known description but unknown initial state. As has already been observed by Moore [], it may occur that the automaton undergoes an irreversible state change, i.e., information about the automaton's initial state is lost. A second, later experiment may therefore be affected by the first experiment, and vice versa. Hence, both experiments are incompatible. In this setup, the observer has a qualifying influence on the measurement result insofar as a particular observable has to be chosen among a class of non-co-measurable observables. But the observer has no quantifying influence on the measurement result insofar as the outcome of a particular measurement is concerned. This gives rise to the Copenhagen interpretation of automaton logic.
I shall illustrate this fact by an example. Consider the automaton whose transition diagram is drawn in Fig.
Assume that the initial value of this automaton is unknown. The task is to get information about the initial value by input-output experiments. The automaton is constructed to undergo a transition into the state which corresponds to the input; e.g., input of symbol ``3'' will steer the automaton into state three, no matter in which state it was before. In case of a ``hit,'' it responds with output symbol ``1;'' any failure to guess the state results in the output of symbol ``0.'' Complementarity manifests itself in the following way: one has to make choices between the input of symbols ``1,'' ``2'' and ``3,'' corresponding to definite answers whether the automaton was in state ``1,'' ``2'' or ``3'' (output ``1''), or in state ``2 or 3,'' ``1 or 3'' or ``1 or 2,'' respectively (output ``0''). In the latter case, i.e., whenever the automaton responds with a ``0'' (for failure), one has lost information about the automaton's initial state, since it will have made a transition into the state corresponding to the input symbol.
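The input-output behaviour just described can be simulated directly; a minimal sketch (the function names are mine, not the paper's):

```python
# The three-state automaton of the example: on input i it outputs 1 if it
# currently is in state i (a "hit") and 0 otherwise, and in either case
# makes a transition into state i.

def step(state, inp):
    output = 1 if state == inp else 0
    return inp, output  # the new state equals the input symbol

# "Measure" whether the initial state was 1, on a machine started in state 2:
state, out = step(2, 1)
# out == 0 tells us only "the initial state was 2 or 3"; the machine is now
# in state 1, so the complementary questions "was it 2?" and "was it 3?"
# can no longer be answered by any further experiment.
```

The choice of input symbol thus selects which observable is measured, at the price of destroying the others.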
Since any finite state automaton can be simulated by a universal computer, complementarity is a feature of sufficiently complex deterministic universes as well. To put it pointedly: if the physical universe is conceived as the product of a universal computation, then complementarity is an inevitable and necessary feature of its operational perception. It cannot be avoided. It may serve not only as a constructive re-interpretation of quantum complementarity but as an indication of the principal impossibility of self-reproduction by self-inspection.
Let us, for the moment, disregard the impossibility of finding all laws of nature and assume that such a complete enumeration is presented to us by some oracle. What can we make of that?
Imagine statements of the form, ``fed program x and input y, my computer will output z,'' or
``at time t the system will be in state xyz,'' or, ``on May 2nd of next year there will be sunshine in Vienna; a wind will blow from northwest at 5 km/hour.'' As a consequence of the recursive unsolvability of the halting problem [], such statements are undecidable. Indeed, there exist uncomputable observables even for computable systems whose ``laws'' and ``input parameters'' are completely determined. In particular, no effective computation can predict the behaviour of an arbitrary computable system in any ``reasonable'' (i.e., computable) time. Stated pointedly, in general there does not exist any ``computational shortcut,'' no optimization with respect to time, which would allow a forecast of the ``distant future:'' a ``speedup'' of the calculation is generally impossible.
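In the absence of a computational shortcut, the fastest general prediction is the simulation itself. A sketch under that assumption (the Collatz-style rule is merely a stand-in for a system with no known closed form):

```python
# Predicting the state at time t by brute-force simulation: in general no
# effective speedup over running the t steps themselves exists.

def state_at(step, initial_state, t):
    state = initial_state
    for _ in range(t):  # each step must actually be computed
        state = step(state)
    return state

# A stand-in dynamical law with no known closed-form shortcut:
collatz = lambda n: n // 2 if n % 2 == 0 else 3 * n + 1
final = state_at(collatz, 27, 10)
```

The forecast of the ``distant future'' then takes at least as long as the future itself takes to unfold.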
This blocking of speedup theorems interpretable as forecasts applies even to observers who are outside of the system. It becomes even more dramatic when rephrased in terms of self-referential prediction. The following argument resembles Zeno's paradox of ``Achilles and the Tortoise'' []. K. Popper has given a similar account [], based on what he calls the ``paradox of Tristram Shandy.'' Think of the attempt of a finitely describable ``intelligence'' or computing agent to understand itself completely. It might first try to describe itself by printing its initial description. (It has been argued above that there is nothing wrong with this attempt per se, and that there indeed exist automata which contain the ``blueprint'' of themselves.) But then it has to describe itself printing its initial description. Then it has to describe itself describing itself printing its initial description. Then it has to describe itself describing itself describing itself printing its initial description … ad infinitum. Any reflection about itself ``steers'' the computing agent into a never-ending vicious circle. In a sense, ``in the limit of an infinity of such circles,'' the agent has completed the task of complete self-comprehension. Yet, for any finite time, this cannot be achieved.
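The regress can be made tangible: each level of self-description wraps the previous one, so the description grows without bound and never closes. A sketch (hypothetical naming, illustration only):

```python
# Each reflection wraps the previous description in a new act of describing;
# the text grows at every level, so complete self-description is never
# reached in finitely many steps.

def reflect(description, levels):
    for _ in range(levels):
        description = f"a machine printing {description!r}"
    return description

d3 = reflect("its blueprint", 3)
# len(reflect(x, n)) grows strictly with n: the regress never terminates.
```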
I cannot go too deeply here into the ``mind-boggling'' features of quantum mechanics. But there is a peculiarity which seems rather interesting: quantum mechanics allows for nonlocal correlations without violating causality. That is, two spatially separated events which correspond to an entangled [] state can show a tendency of coordination and adjustment which cannot be explained (locally) classically [,,]. On a purely phenomenological level, this means that if one event happens, then also the other event (e.g., a click in the particle counter) happens, but there is hardly any classical way of explaining these coincidences (apart from nonlocal hidden parameter models) [,,].
Another related, truly mind-boggling feature of quantum mechanics was pointedly stated by Wheeler and is well known as the delayed choice experiment []. There, the future has, expressed in classical terms, a qualifying influence over the past. All this can be well phrased in terms of the standard Copenhagen interpretation of quantum mechanics. And yet, it seems that indeed it is not unreasonable to assume that, in view of quantum mechanics, the structural organization of space-time events has to be re-organized; in addition to the revisions made necessary by relativity theory.
Can we then utilize these nonlocal coincidences and perform faster-than-light signalling? The answer to this question is unknown. It is true that, since the single events which coincide occur unpredictably and cannot be systematically controlled by any action at a distance, no EPR-type faster-than-light telegraph can be built. Shimony's thesis of peaceful coexistence [] between quantum mechanics and relativity theory relates to that quantum feature and states that quantum mechanics violates outcome independence (by coincidences of arbitrary spatially separated events) but conforms to parameter independence (by the impossibility of controlling spatially separated events through some parameter of a measurement apparatus).
Interpreted rigidly, Shimony's principle of peaceful coexistence, however, is not the weakest statement which would catch the overall consistency of phenomena. We may require that even faster-than-light signalling via nonlocal quantum correlations is allowed, provided it is impossible to construct any time paradoxes or other inconsistencies.
I shall give as an example a ``high-tech'' version of Nick Herbert's FLASH device (cf. [] and subsequent refutations []). The device would be capable of circumventing the Heisenberg uncertainty principle, i.e., complementarity, by the introduction of a ``super-observer'' (equivalent to a dualistic model). Let us assume that we want to measure the spin of a single electron in directions x and y ≠ x at the same time. (This is impossible due to complementarity.) In purporting to do so, we proceed by (i) measuring the electron spin in the x-direction, then (ii) copying and (iii) keeping a record of this measurement result, then (iv) reconstructing the wave function, then (v) measuring the spin in the y-direction. Of all the steps taken, only step (iii), keeping a record of the ``first'' measurement of the electron spin in the x-direction, is forbidden. This is because, in order to be able to reconstruct the wave function for the ``second'' measurement in the y-direction, quantum mechanics requires that we eliminate all physical information about the measurement result in the x-direction. Thereby, reversibility and one-to-one computation are enforced [,]. Keeping a record of the ``first'' measurement of the electron spin in the x-direction would clearly contradict this requirement. One possible way of circumventing complementarity, and thereby of escaping the ``Cartesian prison'' of the physical world, would be a dualistic, or more generally hierarchical, model, in which physical information is transferred into a sphere, spiritual or otherwise accessible to a ``super-observer,'' in which the copying and recording of this information has no physical correspondent [].
In this model, the state (wavefunction) of the electron is reconstructed in purely physical terms, but the ``super-observer'' retains a ``secret, obscure'' knowledge of the ``first measurement.'' After the ``second'' measurement of the electron spin in the y-direction, the ``super-observer'' can claim knowledge of the electron spin in the x-direction, but no physical measurement exists to test this claim.
Despite the merely speculative character of the above scenario, it should not be possible to produce any paradoxical situation by, for instance, faster-than-light back-to-the-future signalling (one may still allow non-paradoxical tasks). Indeed, in order to preserve consistency, information may be hidden from intrinsic observers.
Is there a moral to be learned from the various forms of uncertainties to which an intrinsic observer is exposed even in a system which is effectively computable on a step-by-step basis?
I believe, yes. The moral is to maintain a humble open-mindedness with regard to new ideas; very much in the sense of Lakatos []. Which research programs will turn out to be progressive and which degenerative, no contemporary peer knows.
We do not know how ``deep'' The Great Beyond is. Historic examples like electricity show that the Beyond of the past is today's technology. And it is very likely that there is plenty ``undiscover'd country'' waiting for us to be discovered, from whose bourn the traveller will hopefully return wiser and will not lose the name of action.