Such irreversibly observed events (whatever the relevance or meaning of these terms may be [1,2,3,4]) are subject to the primary condition of consistency or self-consistency: ``Any particular irreversibly observed event either happens or does not happen, but it cannot both happen and not happen.''
Indeed, so trivial seems the requirement of consistency that David Hilbert polemicised against ``another author'' with the following words [5]: ``...for me, the opinion that the [physical] facts and events themselves can be contradictory is a good example of thoughtlessness.''
Just as in mathematics, inconsistency, i.e., the coexistence of truth and falseness of propositions, is a fatal property of any physical theory. Nevertheless, in a certain very precise sense, quantum mechanics incorporates inconsistencies in a very subtle way, one which assures overall consistency. For instance, a particle wave function or quantum state is said to ``pass'' a double slit through both slits at once, which is classically impossible. (Such considerations may, however, be considered as mere trickery quantum talk, devoid of any operational meaning.) Yet neither a particle wave function nor a quantum state is directly associable with any sort of irreversibly observed event of physical reality. We shall come back to a particular quantum case in the second part of this investigation.
And just as in mathematics and in formal logic, it can be argued that too strong capacities of intrinsic event forecast and intrinsic event control render the system overall inconsistent. This fact may indeed be considered one decisive feature of finite deterministic (``algorithmic'') models [6]. It manifests itself already in the early stages of Cantorian set theory: any claim that it is possible to enumerate the real numbers leads, via the diagonalization method, to an outright contradiction. The only consistent alternative is the acceptance that no such capacity of enumeration exists. Gödel's incompleteness theorem [7] states that any formal system rich enough to include arithmetic and elementary logic cannot be both consistent and complete. Turing's theorem on the recursive unsolvability of the halting problem [8], as well as Chaitin's Ω numbers [9], are formalizations of related limitations in formal logic, the computer sciences and mathematics.
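The diagonal construction at work in these arguments can be sketched directly. The snippet below is a hypothetical illustration, not part of the cited formal results: `enumeration` stands in for any claimed listing of infinite binary sequences, and the anti-diagonal sequence provably differs from every listed one.

```python
# Sketch of Cantor's diagonal argument.

def diagonal(enumeration, n_digits):
    """Return the first n_digits of the anti-diagonal sequence.

    enumeration(i) yields the i-th listed sequence as a function from
    bit position to bit; flipping the i-th bit of the i-th sequence
    yields a sequence that cannot occur anywhere in the list.
    """
    return [1 - enumeration(i)(i) for i in range(n_digits)]

# A toy "enumeration" (assumption for illustration): the i-th listed
# sequence has as its bits the binary digits of the row index i.
enum = lambda i: lambda j: (i >> j) & 1

d = diagonal(enum, 8)
# The anti-diagonal differs from every listed row at the diagonal spot:
assert all(d[i] != enum(i)(i) for i in range(8))
```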
In what follows we shall proceed along very similar lines. We shall first argue that any capacity of total forecast or event control, even in a totally deterministic environment, contradicts the (idealistic) idea that decisions between alternatives are possible; stated differently, that there is free will. Then we shall proceed to possibilities of forecast and event control which are consistent with both free will and the known laws of physics.
It is also clear that some forms of forecast and event control are evidently possible; indeed, this is one of the main achievements of contemporary natural science, and we make everyday use of them, say, by switching on the light. The capacities derived from the standard natural sciences are characterized by a high chance of reproducibility, and therefore do not depend on single events.
We shall concentrate on very general bounds on the capacities of forecast and event control; bounds which are imposed upon them by the requirement of consistency. These considerations are fairly general and do not depend on any particular physical model. They should be valid for all conceivable forms of physical theories: classical, quantum and forthcoming alike.
Indeed, suppose there exists free will. Suppose further that an agent could predict all future events, without exceptions. We shall call this the strong form of forecasting. In this case, the agent could freely decide to counteract in such a way as to invalidate that prediction. Hence, in order to avoid inconsistencies and paradoxes, either free will has to be abandoned, or it has to be accepted that complete prediction is impossible.
Another possibility would be to consider strong forms of forecasting which are, however, not utilized to alter the system. Effectively, this results in the abandonment of free will, amounting to an extrinsic, detached viewpoint. After all, what is knowledge, and what is it good for, if it cannot be applied and put to use?
It should be mentioned that the above argument is of an ancient type [12]. As has already been mentioned, it has been formalized recently in set theory, formal logic and recursive function theory, where it is called the ``diagonalization method.''
In doing this, we are inspired by the recent advances in the foundations of quantum (information) theory. There, due to complementarity and the impossibility of cloning generic states, single events may have important meanings for some observers, although they make no sense at all to other observers. One example of this is quantum cryptography. Many of these events are stochastic and are postulated to satisfy all conceivable statistical laws (correlations are nonclassical, though). In such frameworks, high degrees of reproducibility cannot be guaranteed, although single events may carry valuable information, which can even be distilled and purified.
Thus the requirement of consistency of the phenomena seems to impose rather stringent conditions on forecasting and event control. Similar ideas have already been discussed in the context of time paradoxes in relativity theory (cf. [14] and [15], ``The only solutions to the laws of physics that can occur locally ... are those which are globally self-consistent'').
There remains, however, the possibility that the forecast and control of future events is conceivable for singular events within the statistical bounds. Such occurrences may be ``singular miracles'' which are well accountable within the known laws of physics. They will be called weak forms of forecasting and event control.
It may be argued that, in order to maintain overall consistency, such a framework must not be extendable to any form of strong forecast or event control because, as has been argued before, this would either violate global consistency criteria or necessitate a revision of the known laws of physics.
The relevant laws of statistics (e.g., all recursively enumerable ones) impose rather lax constraints, especially on finite sequences, and do not exclude local, singular, improbable events. For example, a binary sequence such as 11111111111111111111111111111111 is just as probable as the sequences 11100101110101000111000011010101 and 01010101010101010101010101010101, and its occurrence in a test is equally likely, although the ``meaning'' an observer could ascribe to it is rather different. These sequences may be embedded in, and be part of, much longer stochastic sequences. If short finite regular (or ``meaningful'') sequences are padded into long irregular (``meaningless'') ones, the results become statistically indistinguishable, for all practical purposes, from the irregular sequences. Of course, the ``meaning'' of any such sequence may vary with different observers. Some of them may be able to decipher a sequence; others may not.
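This can be made concrete with a toy computation. Under a fair-coin model every specific 32-bit sequence has exactly the same probability, while a crude regularity proxy, here compressed length (an illustrative choice, not a rigorous statistical test), still separates the regular sequence from the irregular one.

```python
import zlib

# The three 32-bit sequences quoted in the text.
seqs = [
    "11111111111111111111111111111111",  # regular
    "11100101110101000111000011010101",  # irregular
    "01010101010101010101010101010101",  # regular
]

# Under a fair coin, each specific sequence has probability 2**-32:
prob = 0.5 ** 32
assert prob == 2.0 ** -32  # identical for all three sequences

# A crude regularity proxy: length after lossless compression.
lengths = {s: len(zlib.compress(s.encode())) for s in seqs}

# The all-ones sequence compresses better than the irregular one,
# even though both are equally probable outcomes of a fair coin.
assert lengths[seqs[0]] < lengths[seqs[1]]
```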
It is quite evident that, by definition, any finite regularity in an otherwise stochastic environment excludes the type of high reproducibility which one has become used to in the natural sciences. Quite on the contrary: single ``meaningful'' events which are hardly reproducible might indicate a new category of phenomena, dual to the usual ``lawful'' and highly predictable ones.
Just as it is perfectly all right to consider the statement ``This statement is true'' to be true, it may be perfectly reasonable to speculate that certain events are forecasted and controlled within the domain of statistical laws. But in order to remain within the statistical laws, any such method cannot be guaranteed to work at all times.
To put it pointedly: it may be perfectly reasonable to become rich, say, by singular forecasts of stock and futures values or of horse races, but such an ability must necessarily be non-extendible, irreproducible and secretive, at least to such an extent that no guarantee of an overall strategy or regularity can be derived from it.
The associated weak forms of forecasting and event control are thus beyond any global statistical significance. Their importance and meaning seems to lie mainly on a very subjective level of singular events. This comes close to one aspect of what Jung imagined as the principle of ``synchronicity'' [16], and is dual to the more reproducible forms one is usually accustomed to.
In the first run of the experiment, no consequence is derived from the agent's capacity beyond the mere recording of the data.
The second run of the experiment is like the first, but the meaning of the forecasts or controlled events is different. They are taken as outcomes of, say, gambling against other individuals (i) with or (ii) without similar capacities, or against (iii) an anonymous ``mechanical'' agent such as a casino or a stock exchange.
As a variant of this experiment, the partners or adversaries of the agent are informed about the agent's intentions.
In the third run of experiments, the experimenter attempts to counteract the agent's capacity. Let us assume the experimenter has total control over the event. If the agent predicts, or attempts to bring about, a certain future event, the experimenter causes the event not to happen, and so on.
It might be interesting to record just how much the agent's capacity is changed by the setup. Such an expectation might be defined from a dichotomic observable

    e(i) = +1 if the i-th forecast or controlled event occurs as stated,
    e(i) = −1 if it does not,

with the associated expectation value

    E = (1/N) Σ_{i=1}^{N} e(i) .
From the first to the second type of experiment it should become more and more unlikely that the agent operates correctly, since his performance is leveled against other agents with more or less the same capacities. The third type of experiment should produce total anticorrelation. Formally, this should result in a decrease of E as compared to the first run of the experiment.
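A minimal simulation of the three runs can make the expected trend explicit. The success rates used here (0.7 when the agent merely records, 0.5 when leveled against comparable agents, 0 under total counteraction) are purely illustrative assumptions.

```python
import random

def expectation(p_correct, n=10_000, seed=1):
    """E = (1/N) * sum e(i), with e(i) = +1 for a correct forecast
    and e(i) = -1 for an incorrect one; each trial succeeds
    independently with probability p_correct (an assumed model)."""
    rng = random.Random(seed)
    return sum(1 if rng.random() < p_correct else -1
               for _ in range(n)) / n

e1 = expectation(0.7)  # first run: mere recording of the data
e2 = expectation(0.5)  # second run: leveled against similar agents
e3 = expectation(0.0)  # third run: total anticorrelation

# E decreases from run to run; total counteraction gives E = -1.
assert e1 > e2 > e3 == -1.0
```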
Another, rather subtle, deviation from the probabilistic laws may be observed if correlated events are considered. Just as in the case of quantum entanglement, it may happen that individual components of correlated systems behave totally at random and exhibit more disorder than the system as a whole [17].
If once again one assumes two dichotomic observables e(A,i) and e(B,i) of a correlated subsystem, then the correlation function

    C = (1/N) Σ_{i=1}^{N} e(A,i) e(B,i)

measures the order present in the system as a whole.
In summary, it can be stated that, although total forecasting and event control are incompatible with free will, more subtle forms of these capacities remain conceivable, even beyond the present laws of physics, at least as long as their effects upon the ``fabric of phenomena'' are consistent. These capacities are characterized by singular events, not by the reproducible patterns which are often encountered under the known laws of physics. Whether or not such capacities exist at all remains an open question. Nevertheless, despite the elusiveness of the phenomenology involved, it appears not unreasonable that the hypothesis might be testable, operationalizable and even put to use in certain contexts.
By coherent superposition, quantum theory manages to implement two classically inconsistent bits of information in one quantum bit. For example, consider the states |+⟩ and |−⟩ associated with the propositions that the spin of an electron in a particular direction is ``up'' or ``down,'' respectively. The coherent superposition of these two states, (|+⟩ + |−⟩)/√2, weights the two classically distinct possibilities equally and at the same time is a perfectly valid quantum state.
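As a small numerical illustration (encoding |+⟩ and |−⟩ as basis vectors is an assumption of this sketch), the equal-weight superposition is normalized and yields each classical outcome with probability 1/2:

```python
import numpy as np

plus = np.array([1.0, 0.0])   # |+> : spin "up"
minus = np.array([0.0, 1.0])  # |-> : spin "down"

# Coherent superposition (|+> + |->)/sqrt(2): a perfectly valid,
# normalized quantum state carrying both classical bits at once.
psi = (plus + minus) / np.sqrt(2)
assert np.isclose(np.linalg.norm(psi), 1.0)

# Born rule: each classical alternative occurs with probability 1/2.
p_up, p_down = np.abs(psi) ** 2
assert np.isclose(p_up, 0.5) and np.isclose(p_down, 0.5)
```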
Based upon this novel feature, we speculate that it may be possible to solve some tasks which are classically intractable or even inconsistent by superposing quantum states in a self-consistent manner. In particular, one could speculate that diagonalization tasks using not-gates may become feasible, although the capacities of agents within such semi-closed time loops may be limited by requirements of (self-)consistency, which translate into bounds set by the unitary quantum time evolution. These quantum consistency requirements, however, may be less restrictive than in the classical case [27,28].
In what follows we shall consider a Mach-Zehnder interferometer, as drawn in Fig. 1, with two input and two output ports [29]. The novel feature of this device is a feedback loop from the future of one output port into the past of an input port. We leave open the question of how (if ever) such a feedback loop into the past can be realized. Indeed, if one dislikes the idea of backwards-in-time communication, one may think of this feedback loop as a channel which, by synchronizing the beams, acts as if a beam from the future enters the input port, while this beam was actually emitted in the past from the output port.
[Figure 1 omitted: the Mach-Zehnder interferometer with two input and two output ports and a feedback loop from an output port into an input port.]
If one merely introduced feedback as in classical electrical engineering, this would violate unitarity, as two input channels would merge into one forward channel, which could not be uniquely reversed. So one needs a feedback coupling that resembles a beam splitter, as in Fig. 1. The operator M generates the effects of the feedback in time. These ``beam splitters'' are figurative; their role is to couple the two incoming channels to the two outgoing channels. The operator G1 represents the ordinary time development in the absence of time feedback. The operator G2 represents an alternative time evolution that can take place and compete with G1 because there is feedback. We want to find the evolution of the system in the presence of the feedback in time that is generated by the operator M. At the beam splitters, the forward amplitude is a, while the reflected amplitude is i b. The beam splitters are shown in Fig. 2.
[Figure 2 omitted: the beam splitters, with forward amplitude a and reflected amplitude i b.]
They perform the unitary transformation

    x′ = a x + i b y ,    (1)
    y′ = i b x + a y ,    (2)
    a² + b² = 1 ,    (3)

on any pair (x, y) of incoming amplitudes. Writing ψ ≡ ψ(t1), ψ1 ≡ ψ1(t1), ψ2 ≡ ψ2(t1), ψ4 ≡ ψ4(t1) for the amplitudes at t1, and ψ3′ ≡ ψ3(t2), ψ4′ ≡ ψ4(t2) for those at t2, the beams evolve between the beam splitters as

    ψ1(t2) = G1 ψ1 ,    (4)
    ψ2(t2) = G2 ψ2 ,    (5)

so that at the beam splitter at t2

    ψ3′ = a G1 ψ1 + i b G2 ψ2 ,    (6)
    ψ4′ = i b G1 ψ1 + a G2 ψ2 ,    (7)

while the feedback channel carries the output ψ4′ back to the input at t1,

    ψ4 = M ψ4′ .    (8)

At the beam splitter at t1,

    ψ1 = a ψ + i b ψ4 ,    (9)
    ψ2 = i b ψ + a ψ4 .    (10)
First, we want to eliminate ψ4 in eqs. (9) and (10), to get equations for ψ1 and ψ2. Then from eq. (6) we can obtain ψ3′. From eqs. (7) and (8),

    ψ4 = M ( i b G1 ψ1 + a G2 ψ2 ) .    (11)
Substituting eq. (11) into eqs. (9) and (10) gives

    ψ1 = a ψ − b² M G1 ψ1 + i a b M G2 ψ2 ,    (12)
    ψ2 = i b ψ + i a b M G1 ψ1 + a² M G2 ψ2 ,    (13)

or, collecting terms,

    (1 + b² M G1) ψ1 − i a b M G2 ψ2 = a ψ ,    (14)
    −i a b M G1 ψ1 + (1 − a² M G2) ψ2 = i b ψ .    (15)
These are two simultaneous equations that we must solve to find ψ1 and ψ2 as functions of ψ. To solve for ψ1, substitute eq. (15) into (14),
    ψ2 = (1 − a² M G2)⁻¹ i b ( ψ + a M G1 ψ1 ) ,    (16)
    (1 + b² M G1) ψ1 + a b² M G2 (1 − a² M G2)⁻¹ ( ψ + a M G1 ψ1 ) = a ψ ,    (17)
    [ (1 + b² M G1) + a² b² M G2 (1 − a² M G2)⁻¹ M G1 ] ψ1 = a [ 1 − b² M G2 (1 − a² M G2)⁻¹ ] ψ .    (18)

Since M G2 commutes with (1 − a² M G2)⁻¹, the bracket on the right-hand side simplifies,

    1 − b² M G2 (1 − a² M G2)⁻¹ = (1 − a² M G2)⁻¹ (1 − M G2) ,    (19)

and likewise the operator acting on ψ1,

    (1 + b² M G1) + a² b² (1 − a² M G2)⁻¹ M G2 M G1
        = (1 − a² M G2)⁻¹ [ (1 − a² M G2)(1 + b² M G1) + a² b² M G2 M G1 ]    (20)
        = (1 − a² M G2)⁻¹ ( 1 + b² M G1 − a² M G2 )    (21)
        = (1 − a² M G2)⁻¹ [ b² (1 + M G1) + a² (1 − M G2) ] ,    (22)

so that

    [ b² (1 + M G1) + a² (1 − M G2) ] ψ1 = a (1 − M G2) ψ .    (23)
Then, using the identity A⁻¹ B⁻¹ = (B A)⁻¹, we finally obtain

    ψ1 = [ b² (1 + M G1) + a² (1 − M G2) ]⁻¹ a (1 − M G2) ψ ,    (24)
    ψ2 = [ b² (1 + M G1) + a² (1 − M G2) ]⁻¹ i b (1 + M G1) ψ .    (25)
Notice that in the denominator term in both of eqs. (24) and (25), a and b have reversed the roles of the operators they apply to. We can finally use eq. (6) to solve for ψ3′ = ψ3(t2),

    ψ3′ = { a² G1 [ b² (1 + M G1) + a² (1 − M G2) ]⁻¹ (1 − M G2)
            − b² G2 [ b² (1 + M G1) + a² (1 − M G2) ]⁻¹ (1 + M G1) } ψ .    (26)
(i) For commuting M, G1 and G2, the denominator becomes

    D = b² (1 + M G1) + a² (1 − M G2) ,

and

    ψ3′ = D⁻¹ [ a² G1 (1 − M G2) − b² G2 (1 + M G1) ] ψ .    (27)
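The closed-form solution can be cross-checked numerically. The sketch below, which assumes scalar (hence commuting) pure phases for M, G1 and G2, solves the self-consistency conditions of the loop as a linear system and compares the result with the reconstructed eqs. (24)-(27):

```python
import numpy as np

rng = np.random.default_rng(0)
# Pure-phase scalar models for the operators (an assumption):
M, G1, G2 = np.exp(1j * rng.uniform(0, 2 * np.pi, 3))
theta = rng.uniform(0, np.pi / 2)
a, b = np.cos(theta), np.sin(theta)   # a**2 + b**2 = 1
psi = 1.0 + 0.0j                      # input amplitude

# Self-consistency of the loop:
#   psi1 = a*psi + i b*psi4,  psi2 = i b*psi + a*psi4,
#   psi4 = M*(i b*G1*psi1 + a*G2*psi2)
# Eliminating psi4 leaves a 2x2 linear system for psi1, psi2.
A = np.array([[1 + b**2 * M * G1, -1j * a * b * M * G2],
              [-1j * a * b * M * G1, 1 - a**2 * M * G2]])
rhs = np.array([a * psi, 1j * b * psi])
psi1, psi2 = np.linalg.solve(A, rhs)

# Closed-form solution, eqs. (24)-(27):
D = b**2 * (1 + M * G1) + a**2 * (1 - M * G2)
assert np.isclose(psi1, a * (1 - M * G2) / D * psi)
assert np.isclose(psi2, 1j * b * (1 + M * G1) / D * psi)
psi3 = a * G1 * psi1 + 1j * b * G2 * psi2
assert np.isclose(psi3,
    (a**2 * G1 * (1 - M * G2) - b**2 * G2 * (1 + M * G1)) / D * psi)
```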
(ii) For a = 1, b = 0: this is the case where there is no feedback. Here

    ψ3′ = G1 ψ .    (28)
(iii) For b = 1, a = 0: this is the case where there is only feedback. Here

    ψ3′ = −G2 ψ .    (29)

(iv) For G1 = G2 = G commuting with M, eq. (27) gives D = 1 + (b² − a²) M G and

    ψ3′ = D⁻¹ G ( a² − b² − M G ) ψ .    (30)
(iv′) If also a² = b² = 1/2, then

    ψ3′ = −M G² ψ .    (31)
(v) If b ≪ 1, which is expected to be the usual case, then the answer only depends on b² = γ; also a² = 1 − b² = 1 − γ. Then to lowest order in γ, the denominator D in eq. (26) becomes

    D = (1 − M G2) + γ M (G1 + G2) ,    (32)

so that

    ψ3′ = D⁻¹ [ (1 − γ) G1 (1 − M G2) − γ G2 (1 + M G1) ] ψ .    (33)
(vi) The case that corresponds to the classical paradox, where an agent shoots his father before he has met the agent's mother, so that the agent can never be born, has an interesting quantum-mechanical resolution. This is the case G1 = 0, where there is a perfect absorber in the beam, so that the system would never get to evolve to time t2. But quantum mechanically there is another path, along G2, the one where the agent does not shoot his father, which has amplitude i b without feedback. The solution in this case is
    ψ3′ = −b² G2 (1 − a² M G2)⁻¹ ψ .    (34)

Modelling the evolution along G2 by the free phase G2 = e^{−iEΔt/ℏ} and the feedback channel by M = e^{iφ} e^{iEΔt/ℏ}, so that

    M G2 = e^{iφ} ,    (35)

eq. (34) becomes

    ψ3′ = −b² e^{−iEΔt/ℏ} (1 − a² e^{iφ})⁻¹ ψ ,    (36)

with squared modulus

    |ψ3′|² = b⁴ (1 − 2 a² cos φ + a⁴)⁻¹ |ψ|² .    (37)
Note that for φ = 0, ψ3′ = −e^{−iEΔt/ℏ} ψ, for any value of b. That means that no matter how small the probability of the agent ever having reached here in the first place, the fact that he is here (a ≠ 1) guarantees that, even though he is certain to have shot his father had he met him (G1 = 0), the agent will nonetheless not have met him! The agent will have taken the other path, with 100% certainty.
How can we understand this result? In our model, with φ = 0, we have G1 = 0 and M G2 = 1. Also, we will assume that b ≪ 1, even though this is not necessary. The various amplitudes are
    ψ1 = 0 ,  ψ2 = (i/b) ψ ,  ψ4 = (i a/b) ψ ,  ψ3′ = −e^{−iEΔt/ℏ} ψ .    (38)
So we see that the two paths of the beam splitter at t1 leading to the path ψ1 cancel out. But of the beam ψ, the fraction a passes through, while of the beam ψ4, only the fraction b leaks through. So the beam ψ4 must have a very large amplitude, which it does, as we can see from (38). In fact it has a much larger amplitude than the original beam. Similarly, in order that |ψ3′| = |ψ|, the beam ψ2 must have a very large amplitude. Thus we see that there is a large current flowing around the system, between ψ2 and ψ4. But doesn't this violate unitarity? The answer is that if both currents were running forward in time, it would. But one of these currents runs forward in time, while the other runs backward in time, and so in this case they do not violate unitarity. This is how our solution is possible.
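These amplitudes can be checked against the closed-form solution. The sketch assumes the scalar model and the reconstructed eqs. (24)-(26), with G1 = 0, M G2 = 1 and a small transmission b chosen for illustration:

```python
import numpy as np

# Grandfather-paradox settings: perfect absorber in arm 1, phi = 0.
b = 0.01
a = np.sqrt(1 - b**2)
G1, G2 = 0.0, 1.0
M = 1.0 / G2                                 # so that M*G2 = 1
psi = 1.0

D = b**2 * (1 + M * G1) + a**2 * (1 - M * G2)   # reduces to b**2
psi1 = a * (1 - M * G2) / D * psi               # paths to psi1 cancel
psi2 = 1j * b * (1 + M * G1) / D * psi          # = (i/b) psi, large
psi4 = M * (1j * b * G1 * psi1 + a * G2 * psi2) # = (i a/b) psi, large
psi3 = a * G1 * psi1 + 1j * b * G2 * psi2       # output amplitude

assert np.isclose(psi1, 0.0)
assert np.isclose(psi2, 1j / b)
assert np.isclose(psi4, 1j * a / b)
assert np.isclose(psi3, -1.0)   # psi3' = -psi, with certainty
```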
So, according to our model, in quantum mechanics, if one could travel into the past, one would only see those alternatives consistent with the world one left. In other words, while one could see the past, one could not change it. No matter how unlikely the events are that could have led to one's present circumstances, once they have actually occurred, they cannot be changed. One's trip would set up resonances that are consistent with the future that has already unfolded.
This also has consequences on the paradoxes of free will. It shows that it is perfectly logical to assume that one has many choices and that one is free to take any one of them. Until a choice is taken, the future is not determined. However, once a choice is taken, it was inevitable. It could not have been otherwise. So, looking backwards, the world is deterministic. However, looking forwards, the future is probabilistic.
The model also has consequences concerning a many-worlds interpretation of quantum theory. The world may appear to keep splitting so far as the future is concerned; however, once a measurement is made, only those histories consistent with that measurement are possible. In other words, with time travel, other alternative worlds do not exist, as once a measurement has been made, they would be impossible to reach from the original one.
Another interesting point comes from examining eq. (37). For small angles φ we see that

    |ψ3′|² ≈ |ψ|² / [ 1 + (a φ / b²)² ] ,    (39)

a resonance of width of order b²/a around φ = 0: the output is strongly suppressed as soon as φ exceeds this width.
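A quick numerical check of this resonance, in the scalar model assumed for eq. (36) (|G2| = 1, so only the ratio b⁴/|1 − a² e^{iφ}|² matters):

```python
import numpy as np

b = 0.1
a = np.sqrt(1 - b**2)
phi = 1e-3

# Exact squared modulus of eq. (36) versus the small-phi expansion:
exact = b**4 / np.abs(1 - a**2 * np.exp(1j * phi))**2
approx = 1.0 / (1.0 + (a * phi / b**2)**2)
assert np.isclose(exact, approx, rtol=1e-3)

# Away from phi = 0 the output is strongly suppressed:
assert b**4 / np.abs(1 - a**2 * np.exp(1j * 0.5))**2 < 1e-3
```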
(vii) Sustainment case: if we require the input and output states to be identical, i.e.,

    ψ3(t2) = ψ(t1) ,

then eq. (27) yields a sustainment condition (for commuting M, G1, G2) of

    a² (G1 − 1)(1 − M G2) = b² (1 + G2)(1 + M G1) .    (40)
Another case is G1 = G2 = 1, a phase shift in the feedback, M = e^{iφ}, and a = b = 1/√2, for which we obtain |ψ3′| = |ψ|. For b = √(1 − a²) = 1/4,

    ψ3′ = (7 − 8 e^{iφ}) (8 − 7 e^{iφ})⁻¹ ψ ,    (41)

which again satisfies |ψ3′| = |ψ|.
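Both special cases can be verified numerically. The sketch below assumes the commuting closed form of eq. (27) with G1 = G2 = 1 and M = e^{iφ}, and confirms that the output modulus equals the input modulus for every phase:

```python
import numpy as np

phi = np.linspace(0, 2 * np.pi, 101)
z = np.exp(1j * phi)            # M = e^{i phi}, G1 = G2 = 1

# a = b = 1/sqrt(2): the output ratio is psi3'/psi = -e^{i phi}.
a2 = b2 = 0.5
ratio = (a2 * (1 - z) - b2 * (1 + z)) / (b2 * (1 + z) + a2 * (1 - z))
assert np.allclose(ratio, -z)
assert np.allclose(np.abs(ratio), 1.0)

# b = 1/4 (a^2 = 15/16): psi3'/psi = (7 - 8 e^{i phi})/(8 - 7 e^{i phi}),
# again unimodular, so |psi3'| = |psi| for any phase phi.
a2, b2 = 15 / 16, 1 / 16
ratio = (a2 * (1 - z) - b2 * (1 + z)) / (b2 * (1 + z) + a2 * (1 - z))
assert np.allclose(ratio, (7 - 8 * z) / (8 - 7 * z))
assert np.allclose(np.abs(ratio), 1.0)
```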
We summarize by stating that the structure of quantum time travel through a Mach-Zehnder device is rich and unexpectedly elaborate. This suggests totally new scenarios for the possibility of free will and the capacities available to an agent acting in such a time loop.