How real are virtual realities,
how virtual is reality? - Constructive re-interpretation
of physical undecidability

K. Svozil
Institut für Theoretische Physik, Technische Universität Wien
Wiedner Hauptstraße 8-10/136, A-1040 Vienna, Austria


Throughout the ups and downs of scientific world conception there has been a persistent vision of a world comprehensible by human reasoning. In a contemporary, recursion theoretic understanding, the term ``reasoning'' is interpretable as ``constructive'' or, more specifically, ``mechanically computable''. An expression of this statement is the assumption that our universe is generated by the action of some deterministic computing agent; or, stated pointedly, that we are living in a computer-generated universe. Physics then reduces to the investigation of the intrinsic, ``inner view'' of a particular virtual reality which happens to be our universe. In this interpretation, formal logic, mathematics and the computer sciences are just the physical sciences of more general ``virtual'' realities, irrespective of whether they are ``really'' realized or not. We shall study several aspects of this conception, among them the conjecture that randomness in physics can be constructively re-interpreted to correspond to uncomputability and undecidability in mathematics. We shall also attack the non-constructive feature of classical physics by showing its inconsistency. Another concern is the modeling of interfaces, i.e., the means and methods of communication between two universes. On a speculative level, this may give some clue on such notorious questions as the occurrence of ``miracles'' or the ``mind-body problem.''



1  Physical constructivism
    1.1  Virtual Physics
    1.2  Algorithmic physics
2  Three forms of undecidability
    2.1  Copenhagen interpretation of automaton logic
    2.2  Undecidability by diagonalization
        2.2.1  Self-description by self-examination
        2.2.2  Forecast
    2.3  Consistency versus strength - the inconsistency of classical mechanics
3  Interface design
    3.1  Generic interface and notation
    3.2  Symmetry of interface
    3.3  Joy of cyberspace - death in cyberspace
    3.4  Indeterministic interface and miracles
A  Contextuality of quantum information
B  Not all classical tautologies are quantum tautologies
C  Computational complementarity
D  Simple proof of the recursive unsolvability of the halting problem
E  Interface modeling

1  Physical constructivism

Virtual physics is the study of the intrinsic perception of computer-generated universes. Algorithmic physics is the study of physical systems by methods developed in formal logic and the computer sciences. Both fields of research may be conceived as two sides of the same constructivistic attempt to re-interpret physical indeterminism and undecidability. In that way, virtual reality can give insight and intuition for algorithmic physics, and vice versa.

I shall first give an outline of virtual physics and algorithmic physics. Then I shall propose a constructive re-interpretation of undecidability in the context of algorithmic physics. Finally, I shall come back to virtual physics, in particular to questions related to interface modeling and ethics. Technical issues are addressed in the appendix.

1.1  Virtual Physics

Look at some computer. At ``face value'' it is a dull box; nothing spectacular. The real quality of a computer is something else. It is no external place. When you ``enter'' computers with virtual reality interfaces, they are a medium to new universes; they become doors of perception.

I had a dream. I was in an old, possibly medieval, castle. I walked through it. At times I had the feeling that there was something ``out there,'' something so inconceivably hidden that it was impossible to recognize. Then suddenly I realized that there was something ``inside the walls:'' another, dual, castle, quite as spacious as the one I was walking in, formed by the inner side of what one would otherwise consider masonry. There was a small opening, and I glanced through; the inside looked like a three-dimensional maze inhabited by dwarfs. The opening closed again.

Computers are exactly such openings; doors of perception to hidden universes. One may ask [1], ``what exactly makes a reality virtual?'' Or conversely, ``what makes a computer-generated universe so real?'' One may also ask, ``where exactly is this `undiscover'd country?' Is it in the circuitry? Is it on the screens, in the interfaces, in the senses, in the mind?'' - These are old questions. They have been addressed with respect to where exactly the mind is located. They can also be applied to the characters in a book or on a movie screen.

To cope with the intrinsic phenomenology of computer-generated universes systematically, we have to develop their ``virtual physics.'' This includes experiments, observations and theoretical models which are intrinsically operational. It is different from an outside description of a process of computation.

Virtual physics is neither a classical discipline of mathematics, logic or the computer sciences, nor can it pretend to be a traditional physical science. Its scope is the intrinsic perception and interpretation of pure syntax.

Is this pure syntax independent of the hardware on which it is implemented? - Yes and no: at first it would seem that within the domain of universal computation, syntactical structures are the same, no matter whether they are implemented on a silicon-based Turing machine, a nerve-based cellular automaton, or on a common billiard table. A second glance reveals that it may be possible for a program to back-act on its hardware, very much like a malignant computer virus destroys its host processor by heating it up.

We may explore a countable number of universes of computation by virtual physics, but can we step outside of this syntactical frame formed by universal computation? There is reason to believe that this might be difficult for now. If we extend the domain of universal computation, say, by allowing some form of oracle computation, we risk inconsistency. If we restrict our domain of computation, the resulting worlds will be monotonous. There is still another question related to consistency: Will all the different universes of computation - the tree of mathematical perception - eventually collapse into a single one?

Let me point out that virtual physics is part of a program called endophysics [2,3,5,6]. Endophysics, in short, considers observers who are embedded in the very system they observe. Such observers are naturally limited by the methods, devices and procedures which are operational therein. They cannot ``step outside'' [7] of this ``Cartesian prison'' [] and are therefore bound to self-referential perception. Can one give concrete meaning to this ``boundedness by self-reference?'' Indeed, a research program is proposed here which is capable of the formal evaluation of bounds to self-reference. This program is based on a recursion theoretic re-formulation of physics. It may result in a paradigm change concerning the perception of indeterminism in physics.

1.2  Algorithmic physics

Algorithmic physics considers a physical system as a constructive object, more specifically, as an algorithm, a process of computation. It encompasses algorithmic information theory, computational complexity theory, the theory of recursive functions and effective computability, and in particular undecidability theory in the physical context. The latter fields are in their infancy, while algorithmic information theory and computational complexity theory have attracted considerable attention due to their applicability in statistical physics and chaos theory. Algorithmic physics is based on the assumption that it is justified to assume that the world is, or at least can suitably be modeled by, a machine; to be more precise, that the physical universe is conceived as a computation and can be described by a recursive function [9].

Of course, at the present moment, everyone pretending that the universe is a Turing-compatible machine is in a state of sin. There are some features of our current physical worldview which seem to be in total contradiction to an intuitive understanding of ``machine-like'' and ``computation.''

Take, for example, the classical and the quantum mechanical concept of information. Intuitively and classically, a unit of information is context-free. It is independent of what other information is or might be present. A classical bit remains unchanged, no matter by what methods it is inferred. It can be copied. No doubts can be left.

By contrast, quantum information is contextual. I.e., a quantum bit may appear different, depending on the method by which it is inferred (cf. appendix A and [10,11,12,13]). Quantum bits cannot be copied or ``cloned'' [14]. Classical tautologies are not necessarily satisfied in quantum information theory (cf. appendix B and [15,16]).

More generally, quantum systems cannot be trivially ``extended'' and embedded into a classical world [10]. Quantum complementarity states that there are observables which cannot be measured simultaneously with arbitrary accuracy. There exist events which cannot be predicted; in the present dictum, they ``occur at random.''

Even the classical physics of continua operates with entities which are formally random. For instance, almost all elements of the continuum - i.e., with probability one - are Martin-Löf/Solovay/Chaitin-random, a predicate characterizing (on the average) the uncomputability of each individual digit in an expansion. No ``algorithmic compression'' of such random reals is possible, and one would need an infinite amount of storage and time to represent and manipulate them. Classical physics based on classical analysis therefore cannot be implemented on a universal computer.

Classical determinism, stating that all ``causes'' or natural laws are computable, has nothing to say about whether or not initial values corresponding to physical systems should be effectively computable. This is very unsatisfactory, to say the least. Any program p requiring some particular input s can be rewritten into a new program p' requiring no input. This can, for instance, be realized by coding the input s of p as constants of p'. Likewise, any part of p can be externalized as a subprogram s', whose code can then be identified with an input for a new program p''. In this sense, the terms effective computation and initial value are interchangeable and the naming merely a matter of convention. Therefore, if classical determinism leaves unspecified the computability of initial values serving as input for recursive natural laws, it may as well leave unspecified whether or not ``causes'' or natural laws are computable.
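As a minimal sketch of this interchangeability (Python is used purely for illustration; the ``natural law'' p and the initial value s are hypothetical), the input can be coded as a constant of the program:

```python
# Hypothetical 'natural law' p acting on an initial value s.
def p(s):
    return sum(s)

s = [1, 2, 3]  # hypothetical initial value (input)

# p': the same law with the input s coded as a constant of the
# program; p' requires no input at all -- a crude form of partial
# evaluation, turning an initial value into part of the program.
def p_prime():
    return sum([1, 2, 3])

assert p(s) == p_prime()  # both computations agree
```

Conversely, any constant buried in p_prime could be externalized again as an input, which is the sense in which the naming is conventional.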

Therefore, it turns out that celestial mechanics has never been in the realm of any ``reasonable'' Laplacian computing demon but was predestined to become part of chaos theory. Of course, one could endow Laplace's demon with oracle computing power, but then oracle computation would just be another word for Almighty God.

Moreover, recursive analysis states that there are computable functions which have their maximum at an uncomputable argument [10,17]. This may be important for physical variational principles.

All this together may persuade one into thinking that physics has ``finally'' come to the conclusion that the world is irrational at heart and therefore cannot be fully modeled by any reasonable formalism. Any statement denying this may be considered unnecessary at best or sheer heresy and nonsense at worst.

Contrary to such sentiments, there is still reason to pretend that the universe is governed by constructive laws, and that it can be described by a finite set of symbols representing these laws. This may be the reason for the ``unreasonable effectiveness of mathematics in the natural sciences'' [18].

With regard to physics, constructivism and classical non-constructivism are paradigms, not facts. After all, we shall never be able to prove whether the world as we perceive it is solely governed by constructive laws. For it is impossible to know all constructive laws and their consequences [19].

2  Three forms of undecidability

What then is algorithmic physics good for? I have already mentioned algorithmic information theory and complexity theory, but I would like to concentrate on a different topic. There is reason to believe that algorithmic physics in general and recursion theoretic diagonalization in particular is the royal road to a constructive re-interpretation of two different types of physical undecidability: complementarity and unpredictability.

A third type of physical undecidability, if it exists, is randomness or, in its weaker form, the non-recursivity of a system evolution or of the initial values, et cetera. It cannot be constructively re-interpreted. As will be argued further on, any formal modeling of this third form of physical undecidability necessitates a form of oracle computation which is too strong to be consistent.

2.1  Copenhagen interpretation of automaton logic

Computational complementarity is based upon the observation [20] that an interaction of the experimenter with the observed object - modeled by an exchange of information between algorithmic objects - may induce a transition of the observed object which makes it impossible to measure another, complementary, observable; the same is true vice versa. The observer has a qualifying influence on the measurement result insofar as a particular observable has to be chosen from a class of non-co-measurable observables. But the observer has no quantifying influence on the measurement result insofar as the outcome of a particular measurement is concerned [21].

This can be modeled by finite automata [22,23]. An automaton (Mealy or Moore machine) is a finite deterministic system with input and output capabilities. At any time the automaton is in one state of a finite set of states. This state determines the future input-output behavior of the automaton. If an input is applied, the machine assumes a new state, depending both on the old state and on the input, thereby emitting an output which depends on the old state and the input (Mealy machine) or only on the new state (Moore machine). Automaton experiments are conducted by applying an input sequence and observing the output sequence. The automaton is thereby treated as a black box with known description but unknown initial state. As has already been observed by Moore, it may occur that the automaton undergoes an irreversible state change, i.e., information about the automaton's initial state is lost. A second, later experiment may therefore be affected by the first experiment, and vice versa. Hence the two experiments are incompatible.
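A minimal sketch, assuming a hypothetical three-state Mealy machine (not any specific automaton from the literature): input a asks ``is the state 1?,'' input b asks ``is the state 2?,'' and every measurement irreversibly resets the machine to state 1, so the two experiments are incompatible.

```python
# Hypothetical 3-state Mealy automaton: DELTA is the transition table,
# LAMBDA the output table. Every input resets the state to 1, so the
# first experiment destroys the information needed by the second.
DELTA = {(s, i): 1 for s in (1, 2, 3) for i in ('a', 'b')}
LAMBDA = {
    (1, 'a'): 1, (2, 'a'): 0, (3, 'a'): 0,   # 'a' tests "state == 1?"
    (1, 'b'): 0, (2, 'b'): 1, (3, 'b'): 0,   # 'b' tests "state == 2?"
}

def experiment(state, inputs):
    """Apply an input sequence; return (output sequence, final state)."""
    outputs = []
    for symbol in inputs:
        outputs.append(LAMBDA[(state, symbol)])
        state = DELTA[(state, symbol)]
    return outputs, state

# Starting in the unknown states 2 and 3, experiment 'a' outputs 0 in
# both cases and resets the machine: the follow-up 'b' can no longer
# tell the two initial states apart.
assert experiment(2, ['a', 'b']) == experiment(3, ['a', 'b'])
```

Choosing to measure a thus forecloses the complementary measurement b, mirroring the qualifying (but not quantifying) influence of the observer described above.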

Corresponding to any such automaton there is a propositional structure, its so-called automaton logic or partition logic, in which the experimental statements obtained from input-output analysis are ordered; cf. appendix C. In Fig. 1, all experimental-logical structures of four-state Mealy automata are drawn.


Figure 1: The class of non isomorphic Hasse diagrams of the intrinsic propositional calculi of generic Mealy automata of up to four states.

Similarities and differences between quantum and computational complementarity can be made precise. A systematic investigation reveals that automaton logic is mostly non-Boolean and thus non-classical. Many but not all orthomodular lattices occurring in quantum logic can be realized by the logic of some particular automaton [24,25], but automaton logic is not identical with quantum logic [6,26].

Since any finite state automaton can be simulated by a universal computer, the class of non-Boolean automaton logics - and not classical Boolean logic - corresponds to the natural order of events in (sufficiently complex) computer-generated universes. To put it pointedly: if the physical universe is conceived as the product of a universal computation, then complementarity is an inevitable and necessary feature of its intrinsic perception or endophysics. It cannot be avoided. Computational complementarity may serve as a constructive re-interpretation of quantum complementarity.

2.2  Undecidability by diagonalization

Unpredictable events ``occurring at random'' may result from the intrinsic description of systems which are computable on a step-by-step basis. As Gödel himself put it (cf. [27], p. 55),

``I think the theorem of mine which von Neumann refers to is not that on the existence of undecidable propositions or that on the lengths of proofs but rather the fact that a complete epistemological description of a language A cannot be given in the same language A, because the concept of truth of sentences of A cannot be defined in A. It is this theorem which is the true reason for the existence of undecidable propositions in the formal systems containing arithmetic.''

That a system which is computable on a step-by-step basis features uncomputability in forecasting sounds amazing, if not self-contradictory, at first. Yet it can be ``algorithmically proven'' quite easily (cf. appendix D). The method of diagonalization employed in the proof closely resembles Cantor's diagonalization method (based on the ancient liar paradox [28]), which has been applied by Gödel, Turing and others for undecidability proofs in a recursion theoretic setup.
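The core of the diagonal method can be sketched in a few lines; the enumeration below is of course finite and purely illustrative, whereas the real argument runs over an enumeration of all recursive functions:

```python
# Cantor-style diagonalization: given an enumeration f_0, f_1, ... of
# total functions, the diagonal function g(n) = f_n(n) + 1 differs from
# every enumerated function at its own index, and hence cannot itself
# occur anywhere in the enumeration.
fs = [lambda n: 0, lambda n: n, lambda n: n * n, lambda n: 2 ** n]

def g(n):
    return fs[n](n) + 1

# g disagrees with each f_i at argument i.
assert all(g(i) != fs[i](i) for i in range(len(fs)))
```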

To prove undecidability for a particular physical system, a universal computer such as a universal Turing machine is usually embedded in that system. Then one (mostly implicitly) applies diagonalization to obtain undecidability. Therefore, any physical realisation of a computer (with potentially infinite memory) is an example of a physical system for which undecidable propositions can be formulated. Rather than consider this further, I shall concentrate on how the method of diagonalization can be applied in quantum information theory.

Diagonalization effectively transforms the classical bit value ``0'' into ``1'' and ``1'' into ``0.'' Any information has a physical representation. The corresponding classical bit states can be quantum mechanically expressed by |0> and |1>.

Quantum information theory allows a coherent superposition of the classical bit states. Therefore the quantum bit states are

{ |a,b> : |a,b> = a|0> + b|1>,  |a|^2 + |b|^2 = 1,  a,b in C },

with the classical bits serving as (orthonormal) base states; i.e., the quantum bit basis is {|0>, |1>}. The evolution representing diagonalization can be expressed by the unitary operator D̂ as follows: D̂|0> = |1> and D̂|1> = |0>. In this state basis (τ1 stands for the Pauli spin operator),

D̂ = τ1 = ( 0  1 )
          ( 1  0 )
        = |1><0| + |0><1|    .

D̂ will be called the diagonalization operator, despite the fact that its only nonvanishing components are off-diagonal.

D̂ has a fixed point at

|1/√2, 1/√2> = (1/√2)(|0> + |1>),

an equally-weighted coherent superposition of the classical bit states, which does not give rise to inconsistencies [29]. Classical undecidability is recovered if one actually measures in which of the classical bit states |0> and |1> the system is. Every single measurement outcome will be totally undetermined, but there is a 50:50 probability for the fixed point state |1/√2, 1/√2> to be found in either |0> = |1,0> or |1> = |0,1>; i.e.,

|<0 | 1/√2, 1/√2>|^2 = |<1 | 1/√2, 1/√2>|^2 = 1/2.

This gives rise to a different use and interpretation of diagonalization techniques in quantum recursion theory [30].
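A small numerical sketch of the above (NumPy is used here purely for the linear algebra; nothing in it is specific to any particular physical realization):

```python
import numpy as np

# Classical bit states as orthonormal base vectors.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# The diagonalization operator: the Pauli matrix flipping |0> and |1>.
D = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# Its fixed point: the equally-weighted coherent superposition.
fixed = (ket0 + ket1) / np.sqrt(2)
assert np.allclose(D @ fixed, fixed)  # D leaves the superposition unchanged

# Measuring in the classical basis yields either bit value with
# probability 1/2: the single outcome is classically undetermined.
probs = np.abs(fixed) ** 2
assert np.allclose(probs, [0.5, 0.5])
```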

2.2.1  Self-description by self-examination

Is it possible for a computable system to contain a ``blueprint,'' a complete representation, of itself? This question has been raised by von Neumann in his investigation of self-reproducing automata. With such a ``blueprint'' it should be possible for the automaton to construct replicas of itself [27,31,32].

To avoid confusion, it should be noted that it is never possible to have a finite description with itself as proper part. The trick is to employ representations or names of objects, whose code can be smaller than the objects themselves and can indeed be contained in that object (cf. [32], p. 165).

The undecidable sentence constructed in Gödel's first incompleteness theorem [33], which states its own unprovability, is such an example [27].

Another example is the existence of descriptions p of length |p| whose algorithmic information content

H(p) = |p| + H(|p|) + O(1) = |p| + ||p|| + |||p||| + ... + O(1)

exceeds the length of their code. Intuitively, they can be interpreted as representing algorithmically useful information (e.g., coded in the program length, in the length of the program length, in the length of the length of the program length, ...) which is not contained in an immediate interpretation of the symbols of the string alone [36].

Kleene's fixed-point theorem of recursive function theory states that, for any total recursive function f, there exists an index i such that i and f(i) compute the same function; i.e., φi = φf(i) [31,32]. One application of the fixed-point theorem is the existence of self-reproducing machines and, therefore, the existence of intrinsically representable system ``blueprints'' [6].
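A classic concrete instance is a quine: a program which contains a representation of itself from which it reproduces its own complete source text. A minimal Python sketch:

```python
# A minimal quine: the string s is the program's "blueprint" -- a name
# of the program, not the program itself as a proper part. Applying the
# formatting operator to s reproduces the complete source text.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Note that s is a representation (a name) of the program rather than a copy containing itself, in accord with the remark above that no finite description can have itself as a proper part.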

This is an indication that it is at least possible to represent all the (finite-size) laws governing the system within the system. A second aspect, which was the motivation for von Neumann to study self-reproduction, is the possibility for living systems to reproduce.

A totally different problem is the question how, if ever, a system can obtain such a blueprint by mere self-inspection. Two considerations yield the impossibility of such an attempt for the general case. The first one is connected to the recursive unsolvability of the rule inference problem [38,39,40,41]. The second one, which will be discussed below, is connected to the disruptive character of self-measurement [6].

Even without self-reference it is impossible to guess the law governing an effectively computable system. Assume some particular (universal) machine U which is used as a ``guessing device.'' Then there exist total functions which cannot be ``guessed'' or inferred by U. One can also interpret this result in terms of the recursive unsolvability of the halting problem: there is no recursive bound on the time the guesser U has to wait in order to make sure that the guess is correct.

Self-reproduction by self-inspection usually presupposes an unchanging original. In the general case, this is again impossible because of disruptive effects. To put it pointedly: self-measurement exhibits (paradoxical) features strongly resembling complementarity. An idealised self-referential measurement attempts the impossible: on the one hand it pretends to grasp the ``true'' value of an observable, while on the other hand it has to interact with the object to be measured and thereby inevitably changes its state. Integration of the measurement apparatus does not help, because then the observables inseparably refer to the state of the object and the measurement apparatus combined, thereby surrendering the original goal of measurement (i.e., the measurement of the object alone). These considerations apply to quantum as well as to classical physics, with the difference that quantum theory postulates a lower bound on the transfer of action given by Planck's constant ℏ = h/2π. Thus computational complementarity, as described above, serves as a further modelling tool for the principal impossibility of self-reproduction by self-inspection.

2.2.2  Forecast

Let us, for the moment, disregard the impossibility of finding all laws of nature and assume that such a complete enumeration is presented to us by some oracle. What can we make of that?

Imagine statements of the form, ``fed with program x and input y my computer will output z,'' or

``at time t the system will be in state xyz,'' or, ``on May 2nd of next year there will be sunshine in Vienna; a wind will blow from northwest at 5 km/hour.'' As a consequence of the recursive unsolvability of the halting problem [42], such statements are undecidable. Indeed, there exist uncomputable observables even for computable systems whose ``laws'' and ``input parameters'' are completely determined. In particular, no effective computation can predict the behaviour of an arbitrary computable system in any ``reasonable'' (i.e., computable) time. Stated pointedly, in general there does not exist any ``computational shortcut,'' no optimisation with respect to time, which would allow a forecast of the ``distant future.'' - A ``speedup'' of a calculation is generally impossible.
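As an illustration of the absence of a general computational shortcut, consider the Collatz iteration: no closed-form prediction of the number of steps needed to reach 1 is known, so the only general ``forecast'' is to run the system step by step (whether every starting value reaches 1 at all is, of course, an open conjecture):

```python
# Step-by-step simulation of the Collatz map; as far as is known,
# stopping_time(n) can only be obtained by actually running the
# iteration -- there is no known "speedup" or closed form.
def stopping_time(n):
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(stopping_time(27))  # 111 steps, despite the small starting value
```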

This blocking of speedup theorems interpretable as forecasts applies even to observers which are outside of the system. It becomes even more dramatic when rephrased in terms of self-referential prediction. The following argument resembles Zeno's paradox of ``Achilles and the Tortoise'' [43]. K. Popper has given a similar account [44], based on what he calls ``paradox of Tristram Shandy.'' Think of the attempt of a finitely describable ``intelligence'' or computing agent to understand itself completely. It might first try to describe itself by printing its initial description. (It has been argued above that there is nothing wrong with this attempt per se, and that there indeed exist automata which contain the ``blueprint'' of themselves.) But then it has to describe itself printing its initial description. Then it has to describe itself printing its printing its initial description. Then it has to describe itself printing its printing its printing its initial description ad infinitum. Any reflection about itself ``steers'' the computing agent into a never-ending vicious circle. In a sense, ``in the limit of an infinity of such circles,'' the agent has completed the task of complete self-comprehension. Yet, for any finite time, this cannot be achieved.

In psychology, the above setup is referred to as the observing ego. In experiments of this kind - e.g., imagine a vase on a table; now imagine you imagining a vase on a table; now imagine you imagining you imagining a vase on a table; now imagine you imagining you imagining you imagining a vase on a table; now imagine you imagining you imagining you imagining you imagining a vase on a table - humans may concentrate on 3-5 levels of iteration.

2.3  Consistency versus strength - the inconsistency of classical mechanics

Quantitatively, one message of undecidability theorems is that, in a very particular sense, one cannot get more wisdom out of a system than one has put in. This can be proven within algorithmic information theory [45,46]. Related to this is the fact that it is shorter to describe a family of objects than to describe particular objects of the family.

There is yet another, qualitative, moral of undecidability theorems. It seems that whenever a system becomes too powerful, it becomes inconsistent. Conversely, any reasonable, i.e., consistent, system must be limited. This is true for formal systems as well as for physical ones.

Let us demonstrate this in the context of classical physics [47]. If one is willing to accept classical, i.e., Hilbert-style analysis, then classical continuum mechanics, and, in fact, any theory based on dense sets, becomes inconsistent.

Continuum theory, and in fact any theory based on dense sets, allows the construction of ``infinity machines,'' which could serve as oracles for the halting problem [48,49,6]. Their construction closely follows Zeno's paradox of Achilles and the Tortoise by squeezing the time it takes for successive steps of computation into a geometric progression: the time necessary for the n'th step becomes t(n) = k^n, 0 < k < 1. The limit of infinite computation is reached in the finite physical time

lim_{N→∞} ∑_{n=1}^{N} t(n) = lim_{N→∞} ∑_{n=1}^{N} k^n = k/(1-k).
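Numerically, the squeezing is easy to watch (here with the illustrative choice k = 1/2, so that infinitely many steps fit into one unit of physical time):

```python
# Partial sums of the step durations t(n) = k**n, n = 1, 2, ...:
# infinitely many computation steps fit into the finite time k/(1-k).
k = 0.5
partials = [sum(k ** n for n in range(1, N + 1)) for N in (1, 5, 10, 50)]
print(partials)  # approaches k/(1-k) = 1.0 for k = 0.5
assert abs(partials[-1] - k / (1 - k)) < 1e-12
```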

On such oracle machines it would be possible, for instance, to ``oracle-compute'' the non-recursive limit of Specker's bounded increasing sequence of rational numbers [50] as well as Chaitin's halting ``probability'' Ω [45]. It would also be possible to prove or disprove Fermat's theorem by a program of a few statements [6], as well as to ``solve'' the halting problem. - Ay, there's the rub! As has been argued before, a ``halting algorithm'' may be used in a diagonalization argument to construct a complete contradiction in classical information theory (cf. Appendix D). The same argument can be used to derive the inconsistency of classical mechanics. Here, the term ``classical'' refers both to physical continuum mechanics, as well as to mathematical non-constructivism.

There is no a priori reason in classical physics to exclude such infinite processes and thus to avoid this inconsistency. One may nevertheless attack the argument both metamathematically and physically. Constructive mathematics denies the existence of objects which cannot be obtained by finite means [51,52]. Finitism in physics excludes the existence of natural entities which correspond to infinities.

A third possibility would be to accept fundamental inconsistencies. How would inconsistencies in the intrinsic phenomenology ``show up'' and be perceived by an observer? At face value, it is taken for granted that phenomena ``have to be'' consistent. Yet there is some reason to suspect that inconsistencies may be perceived as a certain type of ``fuzziness'' or unpredictability [29]. After all, there exist reasonable algorithmic entities such as expert systems or databases which may become inconsistent, yet remain of value for certain applications.

3  Interface design

The term physical universe will be used as a synonym for the universe we live and do physics in. The terms virtual reality or computer-generated universe or cyberspace are synonyms for some reality mediated by some computing agent. The term player stands for a conscious observer, who for instance could be thought of as living in the physical universe and who is experiencing the virtual reality via some interface.

3.1  Generic interface and notation

An interface connects two universes. It is a means or mode of communication and interaction between two universes. For example, one universe may be our physical universe, while the other universe may be a virtual, computer-generated, reality. In another interpretation, both universes may be identical.

An interface always characterises a distinction between two universes [53]. This distinction may be formed by a cut within one universe. Take the process of observation. It can be modeled by a cut between the observer and the object under observation.

3.2  Symmetry of interface

For an observer in one universe, an interface is an indirect means of probing deeper into the other universe. From a syntactic point of view, the interface enables an exchange of symbols or information between two universes. The interpretation of this exchange is a question of semantics, convention and intent. In most of the cases it will be intentionally clear on which side of the interface the observer is located and on which side the observed object. An example is given in appendix E.

Yet, while in many practical cases the arguments justify the view of the interface as an asymmetric device, it should in principle be perceived and modeled as a symmetric device which allows information to flow between two universes.

Take, for example, a summer meadow. You are observing it. How is the summer meadow observing you? You have consciousness. What consciousness has the summer meadow?

3.3  Joy of cyberspace - death in cyberspace

What does it mean for an interface to some virtual reality ``to be safe?'' Will the present safety measures applying to household equipment such as television sets suffice? Will it for instance be enough to ground a data glove or to restrict the light intensity of an eye-phone?

One may state that the more powerful the interface, the more beneficial or malign the influences of the virtual reality on the actor will be, both from a physical and, much more so, a psychical point of view. Since human consciousness has the tendency to construct a ``consistent, lucid reality,'' the player's awareness of being in a virtual environment gets lost.

But even if the player using the interface is well aware of this fact, it may be hard to leave a virtual reality. What if the player is not aware? What if the player gets killed in virtual reality?

Certain potentials for misfortune arise if a virtual reality back-reacts in a destructive way. There is no interface design without any interaction between the virtual reality and the senses of the observer. What if the virtual reality or the interface malfunctions, or is the subject of a criminal attack or of an unpredictable malignancy? This may hurt the observer. Take, as an example, a wrestling experience with a huge spider. One has to make sure that, at least physically, the interface cannot in any way harm the observer. Probably the only general way to do this is to allow for some form of virtual emergency exit; a super-rule push-button providing the observer with an exit from the interface, and thus from the virtual reality, at any particular instance, no matter what. Or, one may use intrinsic means of sensual limitation, much as endorphins limit certain pains. As a consequence of unhappy occurrences, terrible traumata will occur, posing a challenge both for medicine and jurisprudence.

Much as computer viruses physically destroy hardware, malign virtual realities may destroy actors physically. They may make use of the actor's primary body to harm it via the interface. This virtual backflow [6] is an instance in which the virtual reality refers, indirectly though, to its meta-universe [54]. It is also an additional ``opening'' of one universe into the other; an irregular interface, if you like.

3.4  Indeterministic interface and miracles

Since, for safety reasons, an interface to some virtual reality will have to be designed to give only limited control over the meta-world of the person's physical existence, the actor's behavior will remain ``psychic'' with respect to the virtual reality.

For the same reason, i.e., the limited access (via the interface) to the meta-world, the intrinsically definable (operationalisable) physics of the virtual reality must necessarily remain non-deterministic, because there will always be factors and information emanating from the interface which have no intrinsic cause.

Take, for example, a virtual reality which is accessible by eyephone and data glove. These interfaces - eyephone and data glove - have an intrinsic representation in the virtual reality; e.g., as two eyes and a hand with five fingers.

Consider the hand when it is not in use. Then its evolution can be completely described by the laws dominating the virtual reality. One may also say that the interface is ``idle.''

Now consider the hand when some player decides to use it. This decision cannot be predicted from within the virtual reality, but depends on outside criteria; e.g., the player coming back from somewhere else, et cetera. Furthermore, with the interface ``at work,'' the evolution can no longer be completely described by the laws dominating the virtual reality. When the actor decides to leave the virtual reality, say, because the actor has run out of time or money, or is simply hungry, this decision is again intrinsically undecidable.

The setup can be modeled by an infinite deterministic computation ( = the computer-generated universe) receiving data input via the interface. Whereas one might be able to formulate a deterministic ``meta''-model of both the infinite computation and the data input on a meta-level, the data input is neither predictable nor controllable from within the infinite computation. Therefore, the data input as seen from within the computer-generated universe appears as a miracle. It may nevertheless be possible to completely describe the interface by changing the level of description to a higher ``meta-description'' level which includes the physical universe of the player.
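This setup can be sketched in a few lines of code. The update rule and the input stream below are purely illustrative assumptions; the point is only that the same deterministic law, fed with outside data, becomes intrinsically unpredictable.

```python
# Toy model: a deterministic "universe" whose evolution becomes intrinsically
# unpredictable once an interface injects outside data.

def evolve(state: int, external_input: int = 0) -> int:
    """One deterministic step; the interface contributes external_input."""
    return (3 * state + 1 + external_input) % 97

# Closed run: perfectly predictable from the initial state alone.
closed = [5]
for _ in range(4):
    closed.append(evolve(closed[-1]))

# Run with the interface "at work": the same law, but outside data flows in.
# From within the universe these inputs have no cause -- they look like miracles.
inputs = [0, 7, 0, 2]          # decided by the player in the meta-world
open_run = [5]
for i in inputs:
    open_run.append(evolve(open_run[-1], i))

print(closed)    # reproducible from the intrinsic law alone
print(open_run)  # diverges as soon as a nonzero input arrives
```

On the meta-level, law plus input stream together are again deterministic, in accordance with the remark above.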

This directly translates into Philipp Frank's considerations of so-called ``Lücken in den Naturgesetzen'' (``gaps in the natural laws'') and ``Wunder'' (``miracles''); cf. [19], sections III.12-15 & VI.21. It can also be rephrased in terms of dualistic mind-body models as, for instance, envisaged by Eccles [55]: in our terminology, the brain, as well as other body organs, may be interpreted as an interface to the physical universe. The player is interpreted as an ``(intrinsically) immortal soul.'' Take The Doors' ``no one here gets out alive,'' or Godard's ``we are the dead on vacation.''


The author acknowledges stimulating discussions with Professor Cris Calude, Dr. Günther Krenn, Professor Otto E. Rössler and Dr. Christoph Strnadl. Professor Ernst Specker made available the letter from Professor Kurt Schütte as well as the dissertation of Dr. Erna Clavadetscher-Seeberger. Thanks go also to Professor Rob Clifton for his help with the Schütte rays.


A  Contextuality of quantum information

Assume that in an EPR-type arrangement [56] one wants to measure the product
P = mx1 mx2 my1 my2 mz1 mz2
of the spin components of each of the two associated particles 1 and 2 along the x, y and z-axes. Assume that the operators are normalized such that |mij| = 1, i ∈ {x,y,z}, j ∈ {1,2}. One can determine P, for instance, by measurement and counterfactual inference [12,13] and multiplication of the three ``observables''

mx1 my2, my1 mx2 and mz1 mz2, whereby one obtains +1. One can also do so by measurement and counterfactual inference and multiplication of the three ``observables''

mx1 mx2, my1 my2 and mz1 mz2, whereby one obtains -1. In this way, one obtains either P = 1 or P = -1. Associate with P = 1 the bit state zero and with P = -1 the bit state one. Then the bit is either zero or one, depending on the way, or context, in which it was inferred. This kind of contextuality is deeply rooted in the non-Boolean algebraic structure of quantum propositions. Note also that the above argument relies heavily on counterfactual reasoning, because, for instance, only two of the six observables mij can actually be experimentally determined.
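The classical impossibility behind this contextuality can be checked by brute force: for every noncontextual assignment of values ±1 to the six observables, both context products coincide algebraically (each equals P), so no single assignment can yield +1 for one context and -1 for the other, as quantum mechanics predicts. A minimal sketch:

```python
from itertools import product

# Enumerate all noncontextual value assignments m_ij = +-1 to the six
# observables; both context products always coincide classically.
found = False
for mx1, mx2, my1, my2, mz1, mz2 in product((+1, -1), repeat=6):
    context_a = (mx1 * my2) * (my1 * mx2) * (mz1 * mz2)   # quantum value: +1
    context_b = (mx1 * mx2) * (my1 * my2) * (mz1 * mz2)   # quantum value: -1
    assert context_a == context_b        # each product equals P classically
    if context_a == +1 and context_b == -1:
        found = True
print(found)   # no assignment reproduces both quantum predictions
```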

B  Not all classical tautologies are quantum tautologies

I shall review the shortest example known to date of a classical tautology which is not valid in three-dimensional (real) Hilbert space [].

Consider the propositions

(5) b2    ,
(6) b3    ,
(7) a2 b2    ,
(8) b3    ,
(9) b2    ,
(10) (a1 a2 b3)    ,
(11) a2 b2    ,
(12) (a1 a2 b3)    ,
(13) c1) (b3 d1)    ,
(14) c2) (a1 b1 d1)    ,
(15) b1 d2    ,
(16) b3 d2    ,
(17) c1) [(a1 a2 b3) d3]    ,
(18) c2) (b1 d3)    ,
(19) [(a1 a2 b3) d4]    ,
(20) (a1 b1 d4)    ,
(21) a2) b1    .
The proposition F: [(5) ∧ ⋯ ∧ (20)] → (21) is a classical tautology.

F is not valid in three-dimensional (real) Hilbert space E3, provided one identifies the a's, b's, c's and d's with the following one-dimensional subspaces of E3:

S(1,0,0)     ,
S(0,1,0)     ,
S(0,1,1)     ,
S(1,0,1)     ,
S(1,1,0)     ,
S(1,0,2)     ,
S(2,0,1)     ,
S(-1,1,1)    ,
S(1,-1,1)    ,
S(1,1,-1)    ,
S(1,1,1)     ,
where S(v) = {av | a ∈ ℝ} is the subspace spanned by v.

Let the ``or'' operation be represented by S(v) ∨ S(w) = {av + bw | a,b ∈ ℝ}, the linear span of S(v) and S(w).

Let the ``and'' operation be represented by S(v) ∧ S(w) = S(v) ∩ S(w), the set-theoretic intersection of S(v) and S(w).

Let the complement be represented by S(v)′ = {w | v·w = 0}, the subspace orthogonal to S(v).

Let the ``implication'' relation be represented by

S(v) → S(w) ≡ (S(v))′ ∨ S(w).
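These subspace operations can be sketched numerically. The representation below (orthonormal column bases, with numpy's SVD used for rank detection) is an illustrative choice of implementation, not part of the original construction:

```python
import numpy as np

# A subspace of E3 is represented by a matrix whose columns form an
# orthonormal basis; the empty (3,0) matrix is the zero subspace.

def span(*vectors):
    """Orthonormal basis (columns) for the linear span of the given vectors."""
    a = np.array(vectors, dtype=float).T
    u, s, _ = np.linalg.svd(a)
    rank = int(np.sum(s > 1e-10))
    return u[:, :rank]

def complement(u):
    """Orthocomplement: the subspace of all vectors orthogonal to u."""
    proj = np.eye(u.shape[0]) - u @ u.T   # projector onto the orthocomplement
    return span(*proj.T)

def join(u, v):
    """The "or" operation: linear span of both subspaces."""
    return span(*u.T, *v.T)

def meet(u, v):
    """The "and" operation, computed via De Morgan: (u' or v')'."""
    return complement(join(complement(u), complement(v)))

def implies(u, v):
    """The "implication": u -> v is represented by u' or v."""
    return join(complement(u), v)

a = span([1, 0, 0])
b = span([0, 1, 0])
print(join(a, b).shape[1], meet(a, b).shape[1], complement(a).shape[1])
```

The subspace dimension is read off as the number of basis columns; two distinct rays meet in the zero subspace and join to a plane, as expected.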

Then (5) ∧ ⋯ ∧ (20) = E3, whereas (21) = S(1,0,0) ≠ E3. Therefore, at least for states lying in the direction (1,0,0) [58], F is not a quantum tautology.

The set of eleven rays can be represented by vectors from the center of a cube to the indicated points [12], as drawn in Fig. 2.


Picture Omitted

Figure 2: The eleven rays in the proof of the Kochen-Specker theorem based on the construction of Schütte are obtained by connecting the center of the cube to the black dots on its faces and edges.

C  Computational complementarity

Consider the transition and output tables and the graph of a (3,3,2)-Mealy type automaton drawn in Fig. 3. Let us further assume that, given only one automaton copy, the initial state is unknown to an observer. The goal of the observer is to find this unknown initial state of this automaton by performing input-output experiments on this single automaton.

                    state 1   state 2   state 3
input function d1      1         1         1
input function d2      2         2         2
input function d3      3         3         3
output function o1     1         0         0
output function o2     0         1         0
output function o3     0         0         1


Figure 3: Transition and output tables and figure of a (3,3,2)-automaton of the Mealy type.

Input of 1, 2 or 3 steers the automaton into the states 1, 2 or 3, respectively. At the same time, the output of the automaton is 1 only if the guess is a ``hit,'' i.e., only if the automaton was in that state. Otherwise the output is 0. Hence, after the measurement, the automaton is in a definite state, but if the guess is no ``hit,'' the information about the initial automaton state is lost. Therefore, the experimenter has to decide before the actual measurement which one of the following hypotheses should be tested (in short-hand notation, `` {1}'' stands for ``the automaton is in state 1'' etc.):

{1} = ¬{2,3},  {2} = ¬{1,3},  {3} = ¬{1,2}. Measurement of any one of these three hypotheses (or its complement) makes measurement of the other two hypotheses impossible.
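A minimal simulation of this guessing experiment (the function name `measure` is an illustrative choice): input i drives the automaton into state i, and the output is 1 exactly on a ``hit.''

```python
# One input-output experiment on the automaton of Fig. 3.
def measure(state: int, guess: int):
    """Return (output, new state): output 1 iff the guess hits the state."""
    return (1 if state == guess else 0), guess

# Test hypothesis {1} on an automaton whose (unknown) initial state is 2:
out1, state = measure(2, 1)      # output 0: the initial state was in {2,3},
out2, state = measure(state, 2)  # but the automaton is now in state 1, so
print(out1, out2)                # this second guess reveals nothing about
                                 # the initial state.
```

The first experiment irreversibly overwrites the state, which is exactly why the three hypotheses are mutually exclusive to test.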

No input, i.e., the empty input string λ, identifies all three internal automaton states. This corresponds to the trivial information that the automaton is in some internal state. Input of the symbol 1 (and all sequences of symbols starting with 1) distinguishes between the hypothesis {1} (output ``1'') and the hypothesis {2,3} (output ``0''). Input of the symbol 2 (and all sequences of symbols starting with 2) distinguishes between the hypothesis {2} (output ``1'') and the hypothesis {1,3} (output ``0''). Input of the symbol 3 (and all sequences of symbols starting with 3) distinguishes between the hypothesis {3} (output ``1'') and the hypothesis {1,2} (output ``0''). The intrinsic propositional calculus is thus defined by the partitions [6]

v(λ) = { {1,2,3} }    ,
v(1) = { {1}, {2,3} }    ,
v(2) = { {2}, {1,3} }    ,
v(3) = { {3}, {1,2} }    .
It can be represented by the lattice drawn in Fig. 4.


Figure 4: Lattice MO3 of the intrinsic propositional calculus of a (3,3,2)-automaton of the Mealy type.

This lattice is of the ``Chinese lantern'' MO3 form. It is non-distributive, but modular.
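Non-distributivity can be verified directly from a finite meet/join table for MO3; the string encoding below is an illustrative sketch, with ``0'' and ``1'' the bottom and top elements and p1, p2, p3 (and their complements) the pairwise incomparable middle elements.

```python
# MO3 in miniature: any two distinct middle elements meet in 0 and join to 1.

def meet(a, b):
    if a == b: return a
    if a == "1": return b
    if b == "1": return a
    return "0"

def join(a, b):
    if a == b: return a
    if a == "0": return b
    if b == "0": return a
    return "1"

lhs = meet("p1", join("p2", "p3"))               # p1 and (p2 or p3) = p1
rhs = join(meet("p1", "p2"), meet("p1", "p3"))   # (p1 and p2) or (p1 and p3) = 0
print(lhs, rhs)   # the two sides differ: the distributive law fails
```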

The intrinsic propositional calculus obtained resembles in many ways the lattice obtained from photon polarization experiments or from other incompatible quantum measurements. Consider an experiment measuring photon polarization. Three propositions of the form

``the photon has polarization p(φ1),''
``the photon has polarization p(φ2),''
``the photon has polarization p(φ3)''

cannot be measured simultaneously for the angles

φ1 ≠ φ2 ≠ φ3 ≠ φ1 (mod π). An irreversible measurement of one direction of polarization would result in a state preparation, making measurement of the other directions of polarization impossible, and resulting in a propositional calculus of the ``Chinese lantern'' form MO3.

D  Simple proof of the recursive unsolvability of the halting problem

Assume that there is an algorithmic way to foresee a particular aspect of the future of an arbitrary computation, namely whether or not the computation will terminate. As conceived by Turing [42], this assumption leads to a contradiction; therefore it cannot be valid. The proof follows Cantor's diagonalization argument, which Gödel used analogously to prove the incompleteness of arithmetic.

Consider an arbitrary algorithm B(x) whose input is a string of symbols x. Assume that there exists a ``halting algorithm'' HALT which is able to decide whether B terminates on x or not.

Using HALT(B(x)) we shall construct another deterministic computing agent A, which takes as input any effective program B and proceeds as follows: Upon reading the program B as input, A makes a copy of it. This can be readily achieved, since the program B is presented to A in some encoded form # (B), i.e., as a string of symbols. In the next step, the agent uses the code # (B) as input string for B itself; i.e., A forms B(#(B)), henceforth denoted by B(B). The agent now hands B(B) over to its subroutine HALT and acts on the result: if HALT(B(B)) decides that B(B) halts, then the agent A does not halt; this can, for instance, be realised by an infinite DO-loop. If HALT(B(B)) decides that B(B) does not halt, then A halts.

We shall now confront the agent A with a paradoxical task by choosing A's own code as input for itself. - Notice that B is arbitrary and has not been specified yet. The deterministic agent A is representable by an algorithm with code # (A). Therefore, we are free to substitute A for B.

Assume that, classically, A is restricted to bits of information. Then, whenever A(A) halts, HALT(A(A)) forces A(A) not to halt. Conversely, whenever A(A) does not halt, then HALT(A(A)) steers A(A) into the halting mode. In both cases one arrives at a complete contradiction. In the classical computational base, this contradiction can only be consistently avoided by assuming the nonexistence of A and, since the only nontrivial feature of A is the use of the peculiar halting algorithm HALT, the impossibility of any such halting algorithm.
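The diagonal step can be condensed into a few lines. The verdict of the hypothetical HALT is abstracted into a boolean argument here, so the sketch exhibits only the case analysis, not an actual halting tester:

```python
# The diagonal agent A in miniature: whatever a hypothetical HALT predicts
# about A(A), the agent does the opposite, so no such predictor can exist.

def agent_A(halt_prediction: bool) -> bool:
    """Return True iff A(A) halts, GIVEN HALT's prediction halt_prediction."""
    if halt_prediction:      # HALT says A(A) halts ...
        return False         # ... then A enters an infinite loop (does not halt)
    else:                    # HALT says A(A) does not halt ...
        return True          # ... then A halts immediately

for prediction in (True, False):
    assert agent_A(prediction) != prediction   # every possible verdict is refuted
print("no consistent HALT verdict exists for A(A)")
```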

E  Interface modeling

Let us explicitly construct the description of an asymmetric interface between two universes S1 and S2 [5]. Assume that an intrinsic (or operational, or endo-) parameter description P(S) = {S; p1, ..., pn, ...} with parameters pi contains parameters which could, at least in principle, be measured by devices and processes available in the universe S.

Assume an interaction I acting in both universes S1 and S2. Let us use this interaction for measurements. In this way we get two associated intrinsic parameter descriptions P(S1,I) and P(S2,I). We shall call the universe S2 approximately closed with respect to S1 and to the interaction I if S2 responds only ``slightly'' to changes in S1. Formally, this situation can be written as ∂P(S2,I)/∂pi ≈ 0 for all pi ∈ P(S1,I), or just ∂P(S2,I)/∂P(S1,I) ≈ 0. In the language of cybernetics, this is the same as saying that a system S2 is approximately autonomous with respect to S1 if the effect of its output affects its input only slightly, such that no feedback loop via S1 occurs [59].

Assume again two universes S1 and S2, and two interactions I1 and I2. Assume further that S1 and S2 are approximately closed with respect to one interaction, say I1:

∂P(S1,I1)/∂P(S2,I1) ≈ ∂P(S2,I1)/∂P(S1,I1) ≈ 0. We shall spoil the symmetry now by requiring that one system, say S1, is sensitive to the interaction I2, whereas S2 is not:

∂P(S1,I2)/∂P(S2,I2) ≠ 0, ∂P(S2,I2)/∂P(S1,I2) ≈ 0. Hence, effectively, the almost closed system S1 is a close realization of an Archimedean point, from which the system S2 and the interaction I1 can be described. I2 serves merely as a reference interaction. Since observations from S1 will not affect S2 too much, the operational parameter description P(S2,I2) will be called quasi-extrinsic. Parameters in P(S2,I2) which cannot be measured by I1 in S2 are external, hidden parameters of S2.

In this context, the extrinsic parameter description might be defined via a limit: a parameter description P(S1,I2) from S2 is called extrinsic if S1 and S2 are totally closed with respect to both interactions I1 and I2. Clearly, this is impossible to realize, since there cannot be any exchange between universes without altering the states of both.

In what follows, I shall give an example of such a configuration: assume a pool filled with water, which serves as system S1. Assume further an optical instrument recording electromagnetic radiation as part of system S2, with the interactions I1 and I2 identified with water-wave interactions and electromagnetic interactions, respectively. Since light does not appreciably affect water-wave dynamics, but changes the state of the optical instrument, a realization of the described model is obtained, with the optical instrument yielding a quasi-extrinsic view of the pool.
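This configuration can be sketched as two coupled scalar systems; the linear dynamics and the coupling constants below are illustrative assumptions, chosen only so that the instrument S2 responds to the pool S1 while S1 remains approximately closed:

```python
# Toy realization of the pool/instrument setup with asymmetric coupling.

def run(s1, s2, eps_21, eps_12, steps=50):
    """eps_21: sensitivity of S2 to S1; eps_12: sensitivity of S1 to S2."""
    for _ in range(steps):
        s1, s2 = 0.9 * s1 + eps_12 * s2, 0.9 * s2 + eps_21 * s1
    return s1, s2

base1, base2 = run(1.0, 1.0, eps_21=0.5, eps_12=1e-9)
p1a, p2a = run(2.0, 1.0, eps_21=0.5, eps_12=1e-9)   # perturb the pool S1
p1b, p2b = run(1.0, 2.0, eps_21=0.5, eps_12=1e-9)   # perturb the instrument S2

print(abs(p2a - base2))   # the instrument clearly registers the pool ...
print(abs(p1b - base1))   # ... while the pool stays approximately closed
```

The tiny back-coupling eps_12 plays the role of ∂P(S1,I2)/∂P(S2,I2) ≈ 0 in the formal condition above.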

Let a universe be represented by the symbols ``()''. Let a cut or interface be represented by some double line symbol such as ``]['' characterizing the two universes it connects.

Then the process of distinction creates a cut within one universe, producing two new distinct universes with the interface along the cut; i.e., () → (][). In the process of condensation, two formerly distinct universes communicate via the interface, again symbolized as a (symmetric) cut; i.e., () () → (][). If the interface is not symmetric, as in the above (quasi-) extrinsic setup, then in the process of condensation the two formerly distinct universes communicate via an asymmetric cut; i.e., () () → ()[) or () () → (](). Also for asymmetric interfaces, the process of distinction is symbolized by an asymmetric cut; i.e., () → ()[) or () → (]().
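These rewrite rules can be encoded playfully as string operations (an illustrative sketch; the function names are not from the text):

```python
# A universe is "()"; a cut symbol "][" (symmetric), ")[" or "](" (asymmetric).

def distinction(cut="]["):
    """() -> (cut): a cut within one universe creates two, joined at an interface."""
    return "(" + cut + ")"

def condensation(u="()", v="()", cut="]["):
    """() () -> (cut): two universes start communicating via an interface."""
    assert u == "()" and v == "()"
    return "(" + cut + ")"

print(distinction("]["))           # symmetric distinction:  (][)
print(distinction(")["))           # asymmetric distinction: ()[)
print(condensation("()", "()"))    # symmetric condensation: (][)
```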


B. Wooley, Virtual Worlds (Blackwell, London, 1992; reprinted by Penguin, London, 1993).

R. J. Boskovich, De spacio et tempore, ut a nobis cognoscuntur (Vienna, 1755); English translation in A Theory of Natural Philosophy, ed. by J. M. Child (Open Court, Chicago, 1922; reprinted by MIT press, Cambridge, MA, 1966), p. 203-205.

T. Toffoli, The role of the observer in uniform systems, in Applied General Systems Research, ed. by G. Klir (Plenum Press, New York, London, 1978).

O. E. Rössler, Endophysics, in Real Brains, Artificial Minds, ed. by J. L. Casti and A. Karlquist (North-Holland, New York, 1987), p. 25; Endophysics, Die Welt des inneren Beobachters, ed. by P. Weibel (Merwe Verlag, Berlin, 1992).

K. Svozil, On the setting of scales for space and time in arbitrary quantized media (Lawrence Berkeley Laboratory preprint LBL-16097, May 1983), a revised version of which was published in Il Nuovo Cimento 96B, 127 (1986).

K. Svozil, Randomness and Undecidability in Physics (World Scientific, Singapore, 1993).

Archimedes ( 287-212 b.c.) encountered the mechanical problem of moving a given weight by a given force. According to Plutarch's Marcellus, he `declared that any given weight could be moved by any given force (however small)' and boasted that, `if he were given a place to stand on, he could move the earth' [cited from T. Heath, A History of Greek Mathematics, Volume II (Clarendon Press, Oxford, 1921), p. 18].

O. E. Rössler, private communication.

This is unlike in mathematics, for which Gödel, Tarski, Turing and others proved that no reasonable (i.e., strong enough and consistent) formal system will ever be able to prove all true well-formed statements. Indeed, Chaitin proved that certain mathematical entities are as random as a sequence produced by the tossing of a fair coin [45,46]. Hence, let us assume that, when it comes to an enumeration of laws and initial values, nature is finitely ``shallow'' while mathematics is infinitely ``deep.'' In this context, the terms ``shallow'' and ``deep'' refer to algorithmic information [45] rather than to Bennett's notion of ``logical depth;'' cf. Ch. H. Bennett, Logical Depth and Physical Complexity, in The Universal Turing Machine. A Half-Century Survey, ed. by R. Herken (Kammerer & Unverzagt, Hamburg, 1988). The apparent ``paradox,'' that a complex phenotype originates from low-complex initial values and evolution is not paradoxical at all. Indeed, that the world appears complex by all means does not necessarily mean that its laws have a high algorithmic information content.

E. P. Specker, Dialectica 14, 175 (1960); S. Kochen and E. P. Specker, Journal of Mathematics and Mechanics 17, 59 (1967); reprinted in [50].

M. Redhead, Incompleteness, Nonlocality and Realism (Clarendon Press, Oxford, 1987).

A. Peres, Quantum Theory: Concepts & Methods (Kluwer Academic Publishers, Dordrecht, 1993).

N. D. Mermin, Rev. Mod. Phys. 65, 803 (1993).

N. Herbert, Foundations of Physics 12, 1171 (1982); W. K. Wootters and W. H. Zurek, Nature 299, 802 (1982); P. W. Milonni and M. L. Hardies, Phys. Lett. 92A, 321 (1982); L. Mandel, Nature 304, 188 (1983); R. J. Glauber, Amplifiers, Attenuators and the Quantum Theory of Measurement, in Frontiers in Quantum Optics, ed. by E. R. Pikes and S. Sarkar (Adam Hilger, Bristol, 1986).

S. Kochen and E. P. Specker, Logical Structures arising in quantum theory, in Symposium on the Theory of Models, Proceedings of the 1963 International Symposium at Berkeley (North Holland, Amsterdam, 1965), p. 177-189; reprinted in [50].

S. Kochen and E. P. Specker, The calculus of partial propositional functions, in Proceedings of the 1964 International Congress for Logic, Methodology and Philosophy of Science, Jerusalem (North Holland, Amsterdam, 1965), p. 45-57; reprinted in [50].

G. Kreisel, Synthese 29, 11 (1974).

E. P. Wigner, ``The unreasonable effectiveness of mathematics in the natural sciences'', Richard Courant Lecture delivered at New York University, May 11, 1959 and published in Communications on Pure and Applied Mathematics 13, 1 (1960).

Ph. Frank, Das Kausalgesetz und seine Grenzen (Springer, Vienna 1932).

E. F. Moore, Gedanken-Experiments on Sequential Machines, in Automata Studies, ed. by C. E. Shannon & J. McCarthy (Princeton University Press, Princeton, 1956).

A. Zeilinger, Talk presented at the International Symposium ``The Foundational Debate, Complexity and Constructivity in Mathematics and Physics,'' organized by the Institute Vienna Circle and the Kurt Gödel Society, Vienna, 15-17 September 1994.

J. E. Hopcroft and J. D. Ullman, Introduction to Automata Theory, Languages, and Computation (Addison-Wesley, Reading, MA, 1979).

W. Brauer, Automatentheorie (Teubner, Stuttgart, 1984).

D. Finkelstein and S. R. Finkelstein, International Journal of Theoretical Physics 22, 753 (1983).

A. A. Grib and R. R. Zapatrin, International Journal of Theoretical Physics 29, 113 (1990); ibid 31, 1669 (1992).

M. Schaller and K. Svozil, Il Nuovo Cimento 109 B, 167 (1994).

J. von Neumann, Theory of Self-Reproducing Automata, ed. by A. W. Burks (University of Illinois Press, Urbana, 1966).

The Bible contains a passage which refers to Epimenides, a Cretan living in the capital city of Cnossus: ``One of themselves, a prophet of their own, said, `Cretans are always liars, evil beasts, lazy gluttons.' '' - St. Paul, Epistle to Titus I (12-13). For more details, see A. R. Anderson, St. Paul's epistle to Titus, in The Paradox of the Liar, ed. by R. L. Martin (Yale University Press, New Haven, 1970).

K. Svozil, The consistent use of paradoxa, TU Vienna preprint, May 1994. This observation may be the starting point for the application of quantum computers in inconsistent databases.

K. Svozil, Quantum recursion theory, TU Vienna preprint, November 1994.

H. Rogers, Theory of Recursive Functions and Effective Computability (MacGraw-Hill, New York 1967).

P. Odifreddi, Classical Recursion Theory (North-Holland, Amsterdam, 1989).

K. Gödel, Monatshefte für Mathematik und Physik 38, 173 (1931); English translation in [34] and in Davis, ref. [35].

K. Gödel, Collected Works, Volume I, Publications 1929-1936, ed. by S. Feferman, J. W. Dawson, Jr., St. C. Kleene, G. H. Moore, R. M. Solovay, J. van Heijenoort (Oxford University Press, Oxford, 1986).

M. Davis, The Undecidable (Raven Press, New York, 1965).

Here H(p) is defined [45,46,37] as the length of the smallest program p* (in prefix code) which runs on a universal (Chaitin) computer and outputs p.

M. Li and P. M. B. Vitányi, Kolmogorov Complexity and its Applications, in Handbook of Theoretical Computer Sciences, Algorithms and Complexity, Volume A (Elsevier, Amsterdam and MIT Press, Cambridge, MA., 1990).

E. M. Gold, Information and Control 10, 447 (1967).

D. Angluin and C. H. Smith, Computing Surveys 15, 237 (1983).

M. Li and P. M. B. Vitányi, Journal of Computer and System Science 44, 343 (1992).

L. M. Adleman and M. Blum, Journal of Symbolic Logic 56, 891 (1991).

A. M. Turing, Proc. London Math. Soc. (2), 42, 230 (1936-7), reprinted in [35].

H. D. P. Lee, Zeno of Elea (Cambridge University Press, Cambridge, 1936; reprinted by Adolf M. Hakkert, Amsterdam, 1967).

K. R. Popper, The British Journal for the Philosophy of Science 1, 117, 173 (1950).

G. J. Chaitin, Information, Randomness and Incompleteness, Second edition (World Scientific, Singapore, 1987, 1990); Algorithmic Information Theory (Cambridge University Press, Cambridge, 1987); Information-Theoretic Incompleteness (World Scientific, Singapore, 1992).

C. Calude, Information and Randomness - An Algorithmic Perspective (Springer, Berlin, 1994).

Contemporary physical theories are expressed in terms of continua: time, position, momentum, wave amplitudes, and so on. The very notion of continuum embodies indeterminism insofar as ``almost all'' (i.e., with probability 1) elements of continua are (Martin-Löf/Solovay/Chaitin) random. Physical chaos, if it exists, is the necessary consequence of this fact. In these models, indeterminism is ``put in'' from the very beginning. There is no reasonable machine representation and no conceivable ``explanation'' corresponding to such models. They (together with classical, nonconstructive mathematics) are therefore irrational at heart.

H. Weyl, Philosophy of Mathematics and Natural Science (Princeton University Press, Princeton, 1949).

A. Grünbaum, Modern Science and Zeno's paradoxes, Second edition (Allen and Unwin, London, 1968); Philosophical Problems of Space of Time, Second, enlarged edition (D. Reidel, Dordrecht, 1973).

E. Specker, Selecta (Birkhäuser Verlag, Basel, 1990).

E. Bishop and D. S. Bridges, Constructive Analysis (Springer, Berlin, 1985).

D. Bridges and F. Richman, Varieties of Constructive Mathematics (Cambridge University Press, Cambridge, 1987).

Spencer Brown's ``laws of form'' do not always apply, since they result in classical Boolean logic, which is only a subset of automaton as well as quantum logic.

H. Putnam, Reason, Truth and History (Cambridge University Press, Cambridge, 1981).

J. C. Eccles, The Mind-Brain Problem Revisited: The Microsite Hypothesis, in The Principles of Design and Operation of the Brain, ed. by J. C. Eccles and O. Creutzfeldt (Springer, Berlin, 1990), p. 549.

A. Einstein, B. Podolsky and N. Rosen, Phys. Rev. 47, 777 (1935); J. F. Clauser and A. Shimony, Rep. Prog. Phys. 41, 1881 (1978); J. A. Wheeler and W. H. Zurek, eds., Quantum Theory and Measurement (Princeton University Press, Princeton, 1983); cf. also [12].

K. Schütte, letter to Professor Ernst P. Specker, dated April 22nd, 1965; first published in Erna Clavadetscher-Seeberger, Eine partielle Prädikatenlogik (Dissertation, ETH-Zürich, Zürich, 1983).

R. Clifton, private communication.

F. Varela, The Principles of Biological Autonomy (North Holland, New York, 1980).
