Executive summary: Positions | Budget | Research | Teaching | Degrees Awarded (bar charts; the full statistical data are given in the appendix)
The year 2004 was undoubtedly in many ways a year of change.
As a result of the implementation of the new Austrian university legislation (UG2002), 2004 saw dramatic changes at our University of Technology and, along with it, at our Institute for Theoretical Physics. The newly conferred increased autonomy, welcome as this step might have been, came at the price of a grossly inadequate budget allocation that translated into Draconian cuts of expenditures at all levels. The switch to a new organization and business model of operation, without well-planned implementation and with poorly designed business software, led to a state which could politely be described as less than creative chaos. It placed a heavy burden on our administrative staff, Mrs. Mössmer and Mrs. Unden, who handled this challenge with great fortitude and proficiency.
It is all the more remarkable that despite these adverse circumstances the core mission of our institute, excellence in research and teaching, did not suffer a major setback. The key data summarized in the executive summary attest to the high quality and productivity of the institute: 55 publications in international scientific journals (i.e., 4.5 publications per faculty member per year), a significant presence at international conferences with more than 100 contributions, many of them invited, and the award of 17 academic degrees under the supervision of our faculty members document these efforts impressively. Last but not least, the fact that externally attracted funding exceeded our basic operating budget by a factor of 8 is not only a noteworthy achievement for a theory institute but also testifies to the level of recognition our work enjoys with funding agencies and international peer reviewers. I would like to express my thanks to all staff members for their important contributions and for the enthusiasm that helped keep up this level of productivity.
The year 2004 brought another profound change to our institute. After 36 years of outstanding service as a full professor at this institute and the university at large, Prof. Wolfgang Kummer was promoted to Professor Emeritus effective October 1, 2004. Prof. Kummer, together with the late Prof. Hittmair, formed the nucleus around which the Institute for Theoretical Physics expanded and rose to its current status. The importance of his contributions to the growth and visibility of theoretical physics at our university as well as in Austria can hardly be overrated. The institute wishes him well for his new career as an emeritus and continues to count on his wise counsel and on his further contributions to research and teaching.
This transition brought along yet another change: I was appointed director of our institute by our Dean, Prof. G. Badurek, effective January 1, 2004.
Finally, the layout of our annual report has undergone a long-overdue change. In line with the strong international connections our institute has developed over the years, it is now published in English. Moreover, we have attempted to shift the focus to physics highlights and to relegate the inevitable statistical data to an appendix. I would like to thank Rainer Dirl and Elfriede Mössmer for their efforts to make this happen.
The purpose of this report section is to feature a few research highlights
of the year 2004. It is meant as an ``appetizer'' and is by no means
complete. A complete listing of published and presented research results is
given in the appendix. Interested readers are referred to the web page of the
institute (http://tph.tuwien.ac.at/), where more information can be found.
The research program at our institute is characterized by a remarkable
diversity covering a broad spectrum of topics ranging from
high-energy physics and quantum field theory to atomic and condensed matter
physics. As a focus area, non-linear dynamics of complex systems including
aspects of quantum cryptography and quantum information plays an important
role. Many of the research topics make use of and belong to the subdiscipline
``computational physics''. Keeping the available and accessible computer
infrastructure competitive remains, in view of budgetary constraints, a
constant challenge.
The breadth of activities at our institute provides advanced students as well as young researchers with the opportunity to be exposed to a multitude of state-of-the-art research directions and to receive a broad-based academic training. It is our intention to maintain and further develop our institute as an attractive place of choice for aspiring students and post-docs. The few highlights featured below may convey this message.
Vienna, October 2005 Joachim Burgdörfer
(Head of Institute)
According to our present knowledge there are four fundamental interactions in nature: gravity, electromagnetism, and the weak and strong interactions, with electromagnetism and the weak interaction unified in the electroweak theory. Gravity as well as electromagnetism are macroscopic phenomena, immediately present in our everyday life in the form of falling objects and static electricity. The weak and strong nuclear interactions, on the other hand, become important only on the microscopic, atomic and subatomic level.
Schematic presentation of the fundamental interactions.
The most important aspect of the strong interaction is that it provides stability to the nucleus by overcoming the electric repulsion, whereas the transmutation of neutrons into protons is the best-known weak phenomenon. The aim of fundamental physics may be described as obtaining a deeper understanding of these interactions and, ultimately, finding a unified framework in which the different interactions appear as different aspects of a single, truly fundamental interaction.
To describe the interactions on a more fundamental level, the concepts of relativistic quantum field theory are employed. With the advent of quantum mechanics in the first decades of the 20th century it was realized that the electromagnetic field, including light, is quantized and can be seen as a stream of particles, the photons. This implies that the interaction between matter particles is mediated by the exchange of photons. The concept of relativistic quantum field theory is conceptually simple, unifying a classical field theory with the principles of quantum theory and special relativity.
Within quantum electrodynamics (QED) - a unified quantum theory of Dirac particles (fermions) and photons (bosons) - the forces between fermions are realized by the exchange of massless photons. In addition, QED is characterized by gauge invariance. It turns out that the strong and weak forces can also be formulated in terms of quantized gauge fields. This implies the existence of quantized non-Abelian gauge theories - generalizations of the quantized Maxwell theory containing self-interactions of the gauge bosons. The quantum field theory of the strong interaction is quantum chromodynamics (QCD), which also allows for the formation of strongly bound states. The weak interactions are mediated by the exchange of massive gauge bosons and therefore have a very short range.
The second half of the last century was dominated by the quest for a unified quantum gauge field theory, leading to the Glashow-Weinberg-Salam model, the Standard Model. In the realm of string theories and with the concepts of supersymmetry, gravity may also be included in the unification. An important concept in any quantized field theory is its perturbative realization, with quantum corrections described in terms of Feynman graphs. The figure below contains all one-loop corrections to the propagation of a non-Abelian gauge boson (vacuum polarization). The wavy line represents the gauge field propagator, which describes the free propagation.
Fig. 1: Full propagator in terms of free propagation and self-energy corrections.
The one-loop corrections contain products of propagators, i.e., products of distributions. Since such products are ill-defined, the corresponding Feynman integrals in the momentum representation are divergent for high internal loop momenta, leading to the so-called ultraviolet (UV) divergences. These UV infinities demand a regularization scheme, characterized by cutoffs, in order to make the Feynman integrals meaningful, and a corresponding renormalization program for the definition of physical quantities (physical masses, wave-function renormalization, and renormalized couplings) is needed.
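To see the logarithmic nature of such a UV divergence concretely, here is a minimal numerical sketch (Python; an illustration added to this text, not part of the original report): it integrates a Euclidean one-loop ``bubble'', the integral of 1/(k^2+m^2)^2 over four-momenta with |k| < Λ, and shows that it grows like 2π^2 ln(Λ/m) up to an additive constant. The choice m = 1 is arbitrary.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

m = 1.0

def bubble(Lam):
    # Euclidean integral d^4k / (k^2 + m^2)^2 over |k| < Lam;
    # the 4D angular integration contributes the factor 2*pi^2.
    val, _ = quad(lambda k: 2*np.pi**2 * k**3 / (k**2 + m**2)**2, 0.0, Lam)
    return val

for Lam in (10.0, 100.0, 1000.0):
    # the integral tracks 2*pi^2*ln(Lam/m) up to a constant: a log divergence
    print(Lam, bubble(Lam), 2*np.pi**2*np.log(Lam/m))
\end{verbatim}
No matter how large the cutoff is chosen, the difference between successive cutoffs keeps growing logarithmically; this is precisely what the renormalization program has to absorb into the physical parameters.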
The appearance of the UV singularities is caused by the fact that the interaction vertices are described by local field products if the underlying geometry is commutative. It was suggested very early by Snyder [1], in the pioneering days of quantum field theory, that one could use a noncommutative structure for the space-time coordinates at very small length scales to introduce an effective UV cutoff. This was motivated by the need to control the divergences of quantum loop corrections.
In describing fundamental physics, space and time are unified by the principle of special relativity into a four-dimensional space-time: $x^\mu = (ct, \vec{x})$. Usually one assumes that the $x^\mu$ are ordinary commuting four-dimensional coordinates, leading to the concepts of commutative geometry. In the context of commutative geometry one can discuss the fundamental interactions.
However, there are many hints that the concept of space-time as a differentiable
manifold cannot be extrapolated to the physics at short distances.
Simple heuristic arguments forbid a naive unification of the principles
of General Relativity with local quantum theory: it is impossible
to locate a particle with an arbitrarily small uncertainty. On the other
hand, our understanding of the theories of fundamental interactions
and General Relativity is strongly related to standard commutative
differential geometry. The failure of standard commutative differential
geometry demands a replacement. Following Filk [2], the commuting
space-time coordinates $x^\mu$ of flat space are replaced by Hermitian
operators $\hat{x}^\mu$ respecting, in the simplest case, the algebra
$$[\hat{x}^\mu, \hat{x}^\nu] = i\,\theta^{\mu\nu},$$
with a constant antisymmetric matrix $\theta^{\mu\nu}$.
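One standard realization of such an algebra (not spelled out in the text above) is via the Moyal star product on ordinary functions. The following sketch (Python/sympy; an illustration added here, not taken from the report) checks that already the first-order term of the star product makes two coordinates noncommutative, giving the commutator iθ; for plain coordinates the higher-order terms vanish, so the first-order truncation is exact.
\begin{verbatim}
import sympy as sp

x, y, theta = sp.symbols('x y theta', real=True)

def star(f, g):
    # Moyal star product on the (x, y) plane, truncated at first order:
    # f*g + (i*theta/2) * (df/dx dg/dy - df/dy dg/dx) + O(theta^2)
    return sp.expand(f*g + sp.I*theta/2 *
                     (sp.diff(f, x)*sp.diff(g, y) - sp.diff(f, y)*sp.diff(g, x)))

commutator = star(x, y) - star(y, x)
print(commutator)   # I*theta: the coordinates no longer commute
\end{verbatim}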
The construction of perturbative NCQFT leads to new types of infrared (IR) singularities, which represent a severe obstacle for the renormalization program at higher orders and therefore lead to inconsistencies. The IR singularities are produced by the so-called UV-finite nonplanar one-loop graphs (which are expected to be UV divergent by naive power counting) in U(N) gauge models and also in scalar field theories. This interplay between expected UV divergences and the existence of IR singularities is the so-called UV/IR mixing problem of NCQFT. One has to stress that the usual UV divergences may still be removed by the standard renormalization procedure.
The present research activities are devoted to finding solutions of the UV/IR mixing problem for noncommutative gauge field models. In order to respect the effects of noncommutativity implied by the non-Abelian structure, a consistent treatment requires the use of the BRS quantization procedure even for a U(1) deformed Maxwell theory.
References
Since the groundbreaking work of Einstein, gravitation has been conceived as defining the geometry of spacetime - indeed the very concepts of time and space themselves. Planetary motion, as well as the motion of massless particles (that is to say, light), follows the straightest possible paths in a non-Euclidean geometry.
Fig. 1: Light-cone of an event, representing its causal past and future.
Geometry and curvature are not the only structures that rely on the gravitational field. The causal structure is completely determined by the so-called light-cone, which separates events that can be influenced from those that cannot, thus embodying the principle of a finite maximum speed. In contrast to the usual (quantum) field theories, this structure is no longer fixed, i.e., given a priori, but in Einstein's General Theory becomes a dynamical entity in its own right, responsive to the distribution of the other matter fields, resulting in the curvature of spacetime.
General relativity is a very successful theory. Its predictions range from the deflection of light by massive bodies which distort spacetime (Einstein lensing) to gravitational radiation carrying away energy in the form of ``ripples'' in spacetime (Hulse-Taylor binary pulsar), as well as to the expansion of the universe (microwave background radiation). Still, the geometric theory of gravity also suffers from severe problems, namely the inevitable occurrence of spacetime singularities, as proven by Penrose and Hawking in their famous singularity theorems. Physically this means that spacetime contains regions where the curvature grows without bound. The most prominent examples are the singularities at the ``center'' of black holes, where time itself comes to an end, as well as the so-called initial singularity that occurs at the ``Big Bang'', the beginning of time. Other difficulties arise from the unification of gravity with quantum theory, which governs the atomic and subatomic regime. Although several promising proposals for such a unification have been put forward, like Ashtekar's Loop Quantum Gravity and String Theory, to name just the most prominent ones, many problems have so far remained unresolved. It is therefore useful to focus on these central problematic aspects of gravity.
Spacetime singularities belong to the big stumbling blocks of the classical theory and are therefore usually excluded from the definition of spacetime itself. From the point of view of quantum theory, which considers more than just the classical evolution between a given initial and final three-geometry, they may merely be the sheet-lightning of a change in the topology of spacetime [1]. Due to their strong localization, the concept of distributions (generalized functions) suggests itself as the mathematical structure able to handle these singular regions.
Fig. 2: Geometry of a black hole formed by a collapsing pulse of radiation.
The simplest example of a geometry with distributional curvature may be derived from the image of a cone, taken to be the limit of a hyperbolic shell whose curvature concentrates more and more on the tip. The limit geometry is flat, with all its curvature concentrated in a Dirac delta-function at the location of the tip. In spite of the non-linear structure of general relativity, it is still possible to construct distributional curvature quantities associated with the singular regions, and beyond, of all the known stationary black holes [2]. The discussion of a continuation of the geometry of a black hole beyond its curvature singularity has to transgress the boundaries of ``classical'' distribution theory and make use of the so-called Colombeau algebra of new generalized functions [3], which allows for a systematic multiplication of distributional objects. It is therefore important, both from the quantum as well as from the classical point of view, to get a better understanding of these structures.
Deeper insights into the structure of physical systems have often been achieved by the imposition of symmetries.
Fig. 3: Spherically symmetric black hole.
This usually breaks the problem down into simpler building blocks which ideally allow a complete solution. Gravity is no exception to this rule, since the prototypic black-hole solution, the Schwarzschild geometry (actually the first exact non-trivial solution of the Einstein equations), was found precisely along these lines, i.e., upon imposing spherical symmetry.
It is therefore natural to pursue a similar plan of attack for the quantization of gravity. The corresponding models become gravitational theories in a 1+1 dimensional spacetime coupled to the area of the two-sphere, which becomes a dynamical variable in the reduced theory. As shown by work in our group, in the absence of additional matter all such models turn out to be exactly soluble classically and even allow a background-independent (``exact'') quantization in terms of the so-called first-order formalism, which takes the normalized dyad and its parallel displacement as fundamental variables [4]. Coupling to matter allows the description of scattering within an exactly soluble gravitational sector, thereby leading to the concept of virtual black holes as intermediate states, which hopefully sheds some light on the process of Hawking evaporation of four-dimensional black holes [5]. The richness of the two-dimensional structure also allows the discussion of a supersymmetric extension of the original dilaton model, thereby incorporating fermionic degrees of freedom in a particularly natural form. Here new insights regarding closely related problems in String Theory have been gained [6].
References
Quantum chromodynamics (QCD) is the accepted theory of the strong interactions, responsible for the binding of quarks into hadrons such as protons and neutrons, and for the binding of protons and neutrons into atomic nuclei. The fundamental particles of QCD, the quarks and gluons, carry a new form of charge, which is called color because of its triplet nature in the case of the quarks (e.g., red, green, blue); gluons come in eight different colors, which are composites of color and anticolor charges. However, quarks and gluons have never been observed as free particles. Nevertheless, because quarks also carry electric charge, they can literally be seen as constituents of hadrons by deep inelastic scattering using virtual photons. The higher the energy of the probing photon, the more the quarks appear as particles propagating freely within a hadron. This feature is called ``asymptotic freedom''. It arises from so-called nonabelian gauge field dynamics, with gluons being the excitations of the nonabelian gauge fields, similarly to photons being the excitations of the electromagnetic fields, except that gluons also carry color charges. Asymptotic freedom is well understood, and the Nobel prize was awarded to its main discoverers Gross, Politzer, and Wilczek in 2004.
Much less understood is the phenomenon of ``confinement'', which means that only color-neutral bound states of quarks and gluons exist. This confinement can in fact be broken in a medium whose density significantly exceeds that of nuclear matter. When hadrons overlap so strongly that they lose their individuality, quarks and gluons come into their own as the elementary degrees of freedom. It is conceivable that such conditions are realized in the cores of certain neutron stars.
Moreover, lattice gauge theory simulations have demonstrated that deconfinement also occurs at small baryon densities for temperatures above approximately 2×10^12 Kelvin, corresponding to mean energies of about 200 MeV. According to the Big Bang model of the early universe, such temperatures prevailed during the first few microseconds after the Big Bang, as shown in Fig. 1.
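As a quick consistency check of these numbers, one can convert the quoted deconfinement temperature into a mean thermal energy via Boltzmann's constant (a minimal sketch in Python, added here for illustration; the numerical value of k_B is the standard one):
\begin{verbatim}
# Convert the deconfinement temperature to a thermal energy scale: k_B * T
kB = 8.617333262e-11   # Boltzmann constant in MeV per Kelvin
T = 2.0e12             # deconfinement temperature in Kelvin (quoted above)
print(kB * T)          # ~172 MeV, i.e. mean energies of order 200 MeV
\end{verbatim}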
Fig. 1: Thermal history of the Universe from the time when it was filled by a quark-gluon plasma until now.
At present, experiments are being carried out at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory, where a tiny fireball with temperatures above the deconfinement temperature can be produced and the resulting ``quark-gluon plasma'' [1] can be investigated. Starting in 2007, similar experiments at even larger energies, and thus higher temperatures, will be carried out at the European collider center CERN in Geneva. There is now ample evidence for the generation of a new state of matter in these experiments, although much remains to be understood.
One recent problem is the surprisingly fast apparent thermalization of the quark-gluon plasma. This seems to be much faster than can be accounted for by calculations of elastic and inelastic scattering events. A possible explanation is that what is being observed experimentally is just early isotropization. The latter could be due to nonabelian variants of plasma instabilities that are familiar from ordinary plasma physics [2]. First results from our group which support this picture have already appeared in the 18 March 2005 issue of Physical Review Letters [3]: Numerical simulations of collective chromomagnetic and -electric fields in an anisotropic quark-gluon plasma show exponential growth of unstable modes which in the nonlinear regime lead to complicated dynamics, eventually leading to fast isotropization of the plasma. Fig. 2 visualizes the color degrees of freedom in collective fields as they evolve from initial fluctuations. The horizontal axis is the spatial direction in which there is momentum-space anisotropy in the plasma. Time flows from bottom to top, with initial conditions (at the bottom) corresponding to random color fluctuations in initially tiny collective fields.
Fig. 2: The time evolution of the color degrees of freedom in the chromomagnetic field associated with instabilities in an anisotropic quark-gluon plasma. The horizontal axis is the spatial direction (z) in which there is a momentum-space anisotropy in the quark-gluon plasma.
In this plot one can see how the initial random fluctuations are swamped by the exponentially growing collective modes, which involve a characteristic wavelength and locally fixed color charges (the amplitudes of the fields are not shown). After these perturbations have grown to the point where nonabelian self-interactions come into play, there is rapid color precession in time (upper half of the plot) and a certain amount of spatial ``abelianization'' (i.e., finite domains of fixed color). The crucial finding, which cannot be read off this plot, is that the exponential growth of these intrinsically nonabelian plasma instabilities continues until the collective fields exert significant backreaction on the plasma constituents, rapidly eliminating their momentum-space anisotropies. This isotropization is much faster than the processes leading to thermalization, which occur somewhat later in the evolution of the fireball created in relativistic heavy-ion collisions.
After local thermalization has taken place, the physics of hot and dense quark-gluon matter can be described by the qualitative phase diagram sketched in Fig. 3.
Fig. 3: Qualitative sketch of the phase diagram of quark-gluon matter as a function of temperature T and quark chemical potential μ. Solid lines denote first-order phase transitions, the dashed line a rapid crossover.
Here T is the temperature in MeV (1 MeV ≈ 10^10 K) and μ is the quark chemical potential characterizing the density of net baryon number. (Nuclear densities correspond to a quark chemical potential of about 308 MeV.) ``SPS, RHIC, and LHC'' mark the regions of this phase diagram accessible to the older CERN experiment SPS, the present RHIC collider in Brookhaven, and the future LHC collider at CERN.
A main activity of our group is the development of improved analytical techniques to calculate the thermodynamical properties of the quark-gluon plasma [4]. One focus is on properties at small μ and high temperatures, which are relevant for relativistic heavy-ion colliders and the physics of the early universe. Another case of interest is high μ and smaller temperatures, which is of relevance to the physics of neutron stars and proto-neutron stars.
At comparatively low temperatures, quark matter is known to form Cooper pairs and turns into a color superconductor [5]. At temperatures just above the superconducting phase, new phenomena also appear, reflecting the fact that quark matter deviates strongly from an ideal Fermi liquid. In particular, there is anomalous behaviour in the low-temperature specific heat, which has been calculated systematically for the first time by our group [6]. This has already found application in revised calculations of the cooling behavior of young neutron stars [7].
References
The names of the fundamental forces are related to their strength. The strong force is much stronger than electromagnetism and is thus able to overcome the repulsive force between objects with the same electric charge (protons or quarks). The weak force is weaker than electromagnetism but still much stronger than gravity. The reason that we notice almost only gravity in everyday life is that macroscopic objects are neutral: they carry no effective color charge and - if at all - only very small electric charges. For gravity there is no negative charge (negative mass), so that all the small gravitational effects add up to something which is strong enough to move galaxies and build black holes. The separate description of the forces is quite accurate by now. This is summarized in the standard model of particle physics.
Fig. 1: Grand unification. i=1: electromagnetism, i=2: weak interaction, i=3: strong interaction.
There is only one particle (the Higgs boson) which is predicted by the standard model and has not yet been found. A measure for the strength of a force is the coupling constant of the corresponding theory. The couplings are, however, not constant, but depend on the energy scale one is dealing with. If one extrapolates their values to high energies, one discovers that the couplings of electromagnetism and of the strong and weak forces almost meet in a single point at a certain energy scale (see Figure 1). This supports the idea that those three forces could be just different aspects of one and the same universal force. There are several theories which try to describe this unification. They are called GUTs, 'grand unified theories'. However, to be really 'grand', such a unification should also include gravity, whose coupling constant is still far weaker at these high energies. A theory that manages to unify all forces, including gravity, is sometimes called a TOE, a ``theory of everything''. String theory is one candidate, and at present actually the only one, for this TOE.
Before explaining roughly what string theory is, let us have a second look at Figure 1, which shows that with 'SUSY' the lines not only almost meet in one point, but meet exactly (within present precision) in one point [1]. 'SUSY' stands for supersymmetry and means that there is an exchange symmetry between fermionic particles (like quarks and electrons) and bosonic ones (like photons and even gravitons, if one includes gravity in the considerations). It does not, however, relate the already known particles to each other; rather, it predicts new supersymmetric partners of the known particles (called, e.g., squarks, selectrons, photinos and gravitinos). So far none of these superparticles has been discovered, but there are many theoretical reasons for believing in supersymmetry, one of them being Figure 1. Supersymmetry is an integral part of string theory, or more precisely of 'superstring theory'. In about two years, the new accelerator LHC (Large Hadron Collider) at CERN will start and try to produce the Higgs boson and the superparticles mentioned above, and will therefore also be a first test for string theory.
Gravity is described by Einstein's General Relativity, which explains the gravitational force as an effect of curved space-time. It is an extremely beautiful, successful and revolutionary theory, but it is classical in the following sense: the gravitational field is smooth, and one can in principle measure arbitrarily small distances. However, the time evolution of the gravitational field is governed by the matter content - or more specifically - by the fields that are described by the Standard Model. The Standard Model, on the other hand, describes quantum fields, i.e., fields consisting of quanta - the particles - whose positions and momenta obey Heisenberg's uncertainty relation. In a macroscopic limit one can still think of the fields as classical smooth fields, and for this reason General Relativity is extremely successful in describing large-scale physics. But to avoid inconsistencies, one needs - in order to treat extreme situations like black holes correctly - to describe the gravitational field as a quantum field as well. There is a standard procedure for turning classical fields into quantum fields. This procedure, called quantization, unfortunately fails for gravity. The reason is that interactions of point particles produce singularities (infinite values, at least in intermediate steps, on the way to computing probabilities of particle collisions). Those singularities can be dealt with in the standard model, but the standard (perturbative) approach fails for 'quantum gravity'.
Fig. 2: Left: point particle interaction. Right: closed string interaction; note the smooth interaction surface.
It is thus reasonable to avoid those singularities from the beginning by treating the elementary objects not as point particles, but as extended objects, which are called strings [2]. In Figure 2 one can see that the collision of two strings - joining into a single one - produces a smooth surface, while the same process for point particles is not smooth and therefore produces singularities. Considering a string instead of a point particle is a simple idea, but it has extremely far-reaching consequences. The first consequence is that a string has more degrees of freedom. It can oscillate in different modes, like a guitar string. The different tones then correspond to different particles, which makes it possible to describe the complete spectrum of particles by one fundamental object! While taking open or closed strings as the starting point apparently leads to different string theories with different particle spectra, the very same string can start as an open one and become a closed one during some scattering processes.
According to an old idea of Kaluza and Klein (KK), it should be possible to describe the other forces, too, in a purely geometrical way, as was done for gravity. Indeed, they managed to produce electromagnetism by starting with five-dimensional gravity and then curling up one dimension on a very small radius, so that gravity effectively becomes four-dimensional. Components of the gravitational field belonging to the fifth dimension then show up as an electromagnetic field. The KK method needs 11 dimensions in order to describe all the fundamental forces, but it never worked out to give the correct matter content. Superstring theory, on the other hand, predicts ten dimensions. Hence one has to curl up six dimensions in order to end up with a four-dimensional observable space-time. In contrast to point particles, strings have the new feature that they can wind around the curled-up dimensions, thus extending the spectrum of physical states. When string theory is compactified on a circle, there is a 'dual' inverse radius for which we obtain exactly the same spectrum of particles, so that the full quantum theory is indistinguishable from the first one. This implies a smallest observable scale, a feature that should be expected from any consistent quantum theory of gravity. Going below that scale would mean that one ends up with something that is actually bigger!
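This radius duality can be made concrete with a small numerical check (a sketch added for illustration, with the string scale α' set to 1 and oscillator and level-matching contributions ignored): the momentum and winding contributions to the closed-string mass spectrum, (n/R)^2 + (wR)^2, produce the same set of values at radius R and at radius 1/R, with the roles of the momentum number n and the winding number w exchanged.
\begin{verbatim}
# T-duality sketch: momentum/winding part of the closed-string spectrum,
# M^2 = (n/R)^2 + (w*R)^2, is invariant under R -> 1/R with n <-> w.
def spectrum(R, nmax=3):
    return sorted((n/R)**2 + (w*R)**2
                  for n in range(-nmax, nmax + 1)
                  for w in range(-nmax, nmax + 1))

R = 0.7
assert all(abs(a - b) < 1e-9
           for a, b in zip(spectrum(R), spectrum(1.0/R)))
print("spectra at radius R and 1/R coincide")
\end{verbatim}
The full string spectrum adds oscillator excitations on top of these terms, but the exchange of n and w still maps it onto itself, which is why the two compactifications are physically indistinguishable.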
This is only one example of a number of dualities connecting string theories that are at first sight completely different. The above radius duality led to the discovery of other extended objects, which are not just strings but can have more dimensions and are called D-branes. They are dynamical objects on which open strings end. Gauge fields, the fields that also appear in the standard model, are restricted to those D-branes, while gravity is diluted because it can spread out into ten dimensions. This would explain the large difference between the values of the coupling constants of the standard model and of gravity, respectively: we are just living on a brane!
The duality mentioned above, relating big and small radii, can be generalized to curved spaces and is then called mirror symmetry. The curled-up six-dimensional spaces have to fulfill certain properties and are mathematically known as Calabi-Yau spaces. A major part of our group's work goes into examining and classifying those Calabi-Yau spaces [3], exploring the consequences of dualities [4], and studying the physical properties of D-branes [5] therein.
References
Suppose we were able to unleash the power of the quantum world in ways which would have been unthinkable only a few years ago. For instance, we could use quantum superposition, the possibility for a quantum bit to contain all conceivable and mutually exclusive classical states in itself. Then, in a single computational step, we could realize the parallel processing of all these classical states, whose number grows exponentially with the number of classical bits involved, through the quantum state evolution of this single state. That is the vision of quantum parallelism, which is one of the driving forces of quantum computing, and at the same time one of the fastest growing areas of research in the last decade or so. These strategies have all been made possible by new techniques capable of producing, manipulating, and detecting single quanta, such as photons, neutrons and electrons.
There are other prospects as well. Quantum processes, and in particular the quantum state evolution in between irreversible measurements, are one-to-one, i.e., reversible. The ``message'' encoded into a quantum state merely gets permuted and transformed such that nothing gets lost. Thus, processes such as state copying or state deletion, which appear so familiar from classical computing, are not allowed in quantum information theory. Copying, for instance, is one-to-two, or one-to-many; deletion is many-to-one. As a consequence, information transmission has to rely on processes which are strictly one-to-one. This elementary, innocent-looking fact of quantum state evolution can be put to practical use in areas such as cryptography, where it is paramount to keep a secret secret, i.e., not to allow potential eavesdroppers to divert, copy, and resubmit messages. Actually, quantum cryptography uses another mind-boggling quantum feature: complementarity, the impossibility of measuring all classical observables of a state at once with arbitrary accuracy. So it is the very scarcity of quantum processes which can be harvested for new technologies. Even potential cryptanalytic techniques - such as man-in-the-middle attacks on quantum cryptography - can be perceived as a challenge to cope with the structure of the quantum world in detail.
The basis of these potentially exciting new technologies is the quantum world and its relation to the performance of classical systems. Already George Boole, one hundred and fifty years ago, mused over issues which have become most important today. He figured out that there are constraints on the joint frequencies of classical events which come from the requirement of consistency.
Suppose someone claims that the chances of rain in Vienna and Budapest are 0.1 in each one of the cities alone, and the joint probability of rainfall in both cities is 0.99. Would such a proposition appear reasonable? Certainly not, for even intuitively it does not make much sense to claim that it rains almost never in one of the cities, yet almost always in both of them. The worrying question remains: which numbers could be considered reasonable and consistent? Surely, the joint probability should not exceed any single probability. This certainly appears to be a necessary condition, but is it a sufficient one? Boole, and much later Bell - already in the quantum mechanical context and with a specific class of experiment in mind - derived constraints on the classical probabilities from the formalization of such considerations. In a way, these bounds originate from the conception that all classical probability distributions are just convex sums of extreme ones, which can be characterized by two-valued measures interpretable as classical truth values. They form a convex polytope bounded by Boole-Bell-type inequalities.
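For the two-city rain example, Boole's consistency requirement already takes a simple closed form; the following minimal check (Python; an illustration added here, not from the report) encodes the elementary bounds max(0, p1 + p2 - 1) <= p12 <= min(p1, p2), which follow from writing joint distributions as convex sums of deterministic truth assignments, exactly the polytope idea described above.
\begin{verbatim}
# Boole-type consistency of a joint probability with its two marginals
def consistent(p1, p2, p12):
    return max(0.0, p1 + p2 - 1.0) <= p12 <= min(p1, p2)

print(consistent(0.1, 0.1, 0.99))  # False: rain "almost always in both"
                                   # contradicts rain being rare in each city
print(consistent(0.1, 0.1, 0.05))  # True: a perfectly reasonable assignment
\end{verbatim}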
Remarkably, quantum probability theory is entirely different from classical probability theory, as it allows statistics of the joint occurrence of events which extend and violate Boole's and Bell's classical constraints. Alas, quantum mechanics does not violate the constraints maximally: the quantum bounds fall just ``in between'' the classical and the maximal bounds.
The question is: how much, exactly and quantitatively, does quantum mechanics violate these bounds? We have shown that numerical as well as analytical bounds on the norms of quantum operators associated with classical Bell-type inequalities can be derived from their maximal eigenvalues. This quantitative method enables detailed predictions of the maximal violations of Bell-type inequalities and generalizes Tsirelson's result 2√2 for the maximal violation of the Clauser-Horne-Shimony-Holt inequality.
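The maximal-eigenvalue method can be illustrated on the CHSH case itself (a minimal sketch in Python/numpy, added here; the observable settings below are the textbook-optimal ones, not taken from the report): diagonalizing the Bell operator A1⊗(B1+B2) + A2⊗(B1-B2) for suitably rotated spin observables reproduces Tsirelson's bound 2√2.
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def obs(theta):
    # spin observable cos(theta)*sigma_z + sin(theta)*sigma_x
    return np.cos(theta)*sz + np.sin(theta)*sx

A1, A2 = obs(0.0), obs(np.pi/2)          # Alice: sigma_z and sigma_x
B1, B2 = obs(np.pi/4), obs(-np.pi/4)     # Bob: rotated by +-45 degrees
bell = np.kron(A1, B1 + B2) + np.kron(A2, B1 - B2)
print(np.linalg.eigvalsh(bell).max())    # 2.8284... = 2*sqrt(2)
\end{verbatim}
The classical bound for the same expression is 2, so the largest eigenvalue directly quantifies the quantum violation.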
We have also developed new protocols for quantum cryptography using interferometers. Thereby, we have considered sets of quantum observables corresponding to eutactic stars. Eutactic stars are systems of vectors which are the lower-dimensional ``shadow'' image, the orthogonal view, of higher-dimensional orthonormal bases. Although these vector systems are not comeasurable, they represent redundant coordinate bases with remarkable properties. One application is quantum secret sharing. The Figure below depicts a typical configuration.
Fig. 1: Quantum cryptography using single-photon sources.
© http://www.epfl.ch
References
In recent years there has been increasing interest in the control and manipulation of atomic wave functions. The engineering of wave functions promises applications in many areas of physics, such as quantum computing [1], the steering of chemical reactions in a preferred direction [2], or the optimization of high harmonic generation [3]. Theoretically, any wave function can be formed as a coherent superposition of energy eigenstates. In practice, however, it is not an easy task to prepare a preselected target state experimentally. Thus there is an increasing demand for protocols to produce any desired designer state starting from states which are experimentally accessible. Recently, a few protocols have been suggested to create and manipulate a wave packet. A Rydberg wave packet is a coherent superposition of highly excited atomic states, localized in phase space [4]. Due to the relatively large time and spatial scales (t ~ n^3 and r ~ n^2) of Rydberg atoms with quantum number n, Rydberg wave packets are among the best explored quantum objects which approximately follow the dynamics of the corresponding classical particle, and they serve as a benchmark for probing the crossover between classical and quantum dynamics. With recent advances in ultra-short pulse generation it has become possible to engineer wave packets using Rydberg atoms [5]. Using such a Rydberg wave packet as the initial state, we have demonstrated several protocols to steer it towards any desired location in phase space [6] or to manipulate its size using a train of short pulses, so-called half-cycle pulses (HCPs) [7].
Fig. 1: Poincaré surface of section for an atom periodically kicked by a train of kicks with ν = 1.095/(2π) and Δp = -0.1. A periodic orbit (blue dashed line) is located at the center (green cross) of the main stable island (red) in the Poincaré surface. The upper frame explains graphically how the periodic orbit can be stabilized.
Our first strategies for wave packet control are thus based on the analysis of the classical dynamics. When a Rydberg atom is subject to a half-cycle pulse (HCP) [8] whose duration is much shorter than the Kepler period of the Rydberg electron, the atom experiences an impulsive momentum transfer or ``kick'' given by
$$\Delta\vec{p} = -\int \vec{F}(t)\,dt, \qquad (2.1)$$
where $\vec{F}(t)$ is the force exerted by the pulse (atomic units). For a periodic train of such kicks with period $T = 1/\nu$ the dynamics is generated by the Hamiltonian
$$H = \frac{\vec{p}^{\,2}}{2} - \frac{1}{r} + z\,\Delta p \sum_{k} \delta(t - kT). \qquad (2.2)$$
Fig. 2: Poincaré surfaces of section for the periodically kicked atom with Δp = -0.1/n_i and (a) ν = 1.095/(2π n_i^3), and (f) ν = 10/(2π n_i^3). (b) is a Husimi distribution of the wave function when the initial Rydberg state (n_i = 50) is subject to a periodic train of 14 kicks with the same parameters as for (a). (c), (d) and (e) show snapshots of the time development when the wave function in (b) is subject to a chirped train of pulses whose frequency is accelerated up to the value used in (f).
A more interesting and challenging task is to steer the wave packet towards any other location. When the frequency of the periodic kicks is increased, the position of the stable islands in classical phase space is shifted towards small position coordinates q, i.e., towards the nucleus. In the analogy of a tennis ball being hit against a wall, the player must move closer to the wall when trying to hit the ball with the same strength but at a higher frequency. Figure 2 (f) shows a Poincaré surface for a higher frequency ν = 10/(2π n_i^3), and the shift of the positions of the islands (both red and green ones) compared to Fig. 2 (a) can be clearly seen. This observation can be profitably exploited to steer a wave packet along the q-axis. When the frequency of a train of pulses is adiabatically increased (``chirped''), the island gradually shifts its position towards the nucleus. Correspondingly, the wave packet initially localized inside the main island [the red one in Fig. 2 (a)] remains trapped inside the island and moves together with it. Figures 2 (b) to (e) show snapshots of the wave packet evolving in time. The wave packet is steered gradually towards the nucleus, and at the end, when the frequency reaches the value ν = 10/(2π n_i^3), the wave packet is localized exactly at the classical stable island (depicted in red) in Fig. 2 (f). The wave packet subject to a chirped train of pulses is thus confirmed to follow the adiabatic change of the phase space structure. Alternatively, the position of a wave packet along the momentum (p) axis can be shifted by adiabatically modulating the kick strength instead of the frequency. With increasing kick strength the islands become less and less stable and their sizes shrink, just as it becomes more difficult to keep hitting a ball against a wall periodically as the hitting power is increased. Therefore, the kick-strength modulation not only shifts the position of the wave packet but also changes its size by trimming off its edge. This technique can be applied to the creation of a minimum-uncertainty wave packet [7]. The protocols for shaping and steering wave packets with unprecedented control developed in our group are currently being implemented experimentally [11].
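The classical side of this analysis requires very little machinery. The following toy sketch (Python/scipy; added here for illustration, with a softened 1D Coulomb potential and arbitrary parameter values of my choosing, not the scaled units of Fig. 1) propagates the motion between kicks, applies the impulsive momentum transfer once per period, and records the stroboscopic (q, p) points from which a Poincaré surface of section like the one in Fig. 1 is assembled:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

a, dp, T = 0.1, -0.01, 10.0   # softening length, kick strength, kick period

def free_motion(t, y):
    # between kicks: 1D motion in the softened potential -1/sqrt(q^2 + a^2)
    q, p = y
    return [p, -q / (q*q + a*a)**1.5]

def poincare_points(q0, p0, nkicks=200):
    pts, y = [], [q0, p0]
    for _ in range(nkicks):
        y = list(solve_ivp(free_motion, (0.0, T), y,
                           rtol=1e-8, atol=1e-10).y[:, -1])
        y[1] += dp                 # the impulsive "kick" of Eq. (2.1)
        pts.append(tuple(y))       # one stroboscopic (q, p) point per period
    return pts

print(poincare_points(1.0, 0.0, nkicks=3))
\end{verbatim}
Plotting such points for many initial conditions reveals the stable islands; chirping ν or Δp slowly from kick to kick reproduces the adiabatic steering described above.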
References
Since the first working laser device was built by Maiman in 1960, the progress in laser technology has been tremendous. The intensity of lasers has been increased by many orders of magnitude. Intensities presently reach well above 10^20 W/cm^2, where plasma effects as well as relativistic effects are important. In the near future, laser intensities may even reach the critical field strength for directly producing electron-positron pairs.
At the same time, the length of the shortest pulses has decreased by more than 10 orders of magnitude (Fig. 1). While the first lasers had a pulse length of some 100 μs, very short pulses can nowadays be produced through mode-locking. In 1990, Zewail et al. [1] managed to generate pulses as short as several femtoseconds, which meant that snapshots of chemical reactions could be taken directly. This opened up the field of femtochemistry.
Fig. 1: Decrease of the pulse duration as a function of time.
To take time-resolved pictures of atomic processes, even shorter pulses, down to the attosecond regime, are needed. Such short laser pulses can be produced by the process of high harmonic generation (HHG): the time-dependent field of a strong femtosecond laser may ionize an atom and accelerate the electron in one direction. As the field changes direction, the electron may be accelerated back and emit radiation by ``bremsstrahlung'' as it hits the nucleus. The frequency of this radiation may be hundreds of times higher than that of the driving field. By filtering out a narrow region of the highest frequencies produced, pulses as short as some hundreds of attoseconds can be generated.
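The classical core of this mechanism fits in a few lines. The sketch below (Python; an illustration added here, not from the report) releases an electron at rest at a field phase phi, follows its motion in E(t) = E0 cos(t) (atomic units, ω = 1, E0 chosen arbitrarily), and scans the birth phase for the maximal kinetic energy at the first return to the ion. The result is the well-known classical cutoff of about 3.17 ponderomotive energies, consistent with return photon energies far above the driving frequency.
\begin{verbatim}
import numpy as np

E0 = 0.1                 # field amplitude (arbitrary illustrative value)
Up = E0**2 / 4.0         # ponderomotive energy for omega = 1

def first_return_energy(phi, n=40000):
    # electron born at rest at x = 0 at phase phi; Newton: x'' = -E0*cos(t)
    t = np.linspace(phi + 1e-6, phi + 4*np.pi, n)
    v = -E0 * (np.sin(t) - np.sin(phi))
    x = E0 * (np.cos(t) - np.cos(phi)) + E0 * np.sin(phi) * (t - phi)
    hits = np.where(x[1:] * x[:-1] < 0)[0]   # sign change: back at the ion
    return 0.5 * v[hits[0]]**2 if len(hits) else 0.0

energies = [first_return_energy(p) for p in np.linspace(0, 2*np.pi, 720)]
print(max(energies) / Up)    # ~3.17: the classical high-harmonic cutoff
\end{verbatim}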
The possibility of driving an atom with a femtosecond laser, as well as the use of high harmonic generation to produce the shortest pulses presently available, challenges our current understanding of the processes taking place in an atom driven by an ultrashort electric field. Two different regimes can be distinguished: the multiphoton regime (high frequency and low intensity) and the tunneling regime (low frequency and high intensity). In the multiphoton regime many experimental (see, for example, [2]) and theoretical studies have been performed, which have led to a fairly complete understanding of the physical processes involved. In the tunneling regime, on the other hand, recent experiments with linearly polarized lasers have revealed novel and previously unexplained structures in the momentum distribution of the photoionized electrons of rare gases. The so-called ``double-hump'' structure in the longitudinal momentum distribution has been identified with a rescattering process for double ionization [3] and with the interaction between the electron and the core for single ionization [4].
We study the hydrogen atom driven by a linearly polarized laser field both classically and quantum mechanically. For the first approach we employ the classical trajectory Monte Carlo (CTMC) method including tunnel effects (CTMC-T): the electron is allowed to tunnel through the potential barrier whenever it reaches the outer turning point. Alternatively, the time-dependent Schrödinger equation is solved numerically by means of the generalized pseudo-spectral method. The process of detecting an electron of momentum $\vec{k}$ can then be viewed as a projection of the wave function onto the Coulomb wave functions.
Fig. 2: Doubly differential momentum distribution in the multiphoton regime. I = 1.5×10^14 W/cm^2, τ = 0.5 fs.
New insights can be gained from doubly differential (k_z, k_r) momentum distributions. In the multiphoton regime (Fig. 2) the above-threshold ionization peaks appear as semicircles of fixed energy. The radius of each semicircle corresponds to an energy given by U_j = E_0 + jω, where E_0 is the ground-state energy of the atom and j the number of photons absorbed. In the tunneling regime (Fig. 3) the isoenergy circles are strongly distorted in the low-momentum region, where novel structures near k_z = k_r = 0 appear. The latter represent Ramsauer-Townsend minima [5] in the angular distribution, experimentally observed for the first time in laser-atom interactions.
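The positions of these semicircles follow directly from energy conservation. A minimal numerical illustration (added here; the 800 nm photon energy is my assumption, not stated in the report) lists the momentum radii k_j = sqrt(2(E_0 + jω)) of the first few above-threshold-ionization rings of hydrogen:
\begin{verbatim}
import numpy as np

E0 = -0.5          # hydrogen ground-state energy (atomic units)
w = 0.057          # photon energy for ~800 nm light (assumed value)

jmin = int(np.ceil(-E0 / w))       # minimum photon number for ionization
for j in range(jmin, jmin + 4):
    Uj = E0 + j*w                  # electron energy after absorbing j photons
    print(j, np.sqrt(2*Uj))        # momentum radius of the j-th ATI semicircle
\end{verbatim}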
Fig. 3: Doubly differential momentum distributions. T = 2 fs, τ = 20 fs, (a) I = 5×10^13 W/cm^2, (b) I = 1×10^14 W/cm^2, (c) I = 2×10^14 W/cm^2.
References
Fig. 1: (a) Illustration of the conventional tight-binding discretization employed in the Recursive Green's Function Method for transport through a circular quantum dot with infinite leads. Our modular approach, as illustrated in (b), leads to increased efficiency in the numerical calculations.
However interesting they may be, these parameter ranges are difficult to handle from a computational point of view. This is because in the ``semi-classical regime'' of small de Broglie wavelengths λ_D, as well as in the ``quantum Hall regime'' of high magnetic fields B, the proper description of the transport process requires a large number of basis functions. As a result, the theoretical models which are presently employed eventually become computationally unfeasible or numerically unstable.
Fig. 2: (a) Retracing electron-hole trajectory in a normal-conducting (N) square billiard with a superconducting (S) lead attached (an ``Andreev billiard''). (b) Three bound electron-hole wavefunction densities which clearly show signatures of the classical retracing property.
To learn more about the classical-to-quantum correspondence of Andreev billiards, it is instructive to study the bound states in these billiards and the form of their wavefunctions. The wavefunctions indeed feature an electron and a hole component that in most cases closely resemble each other (see Fig. 2b), in analogy to the classical picture of retracing electron-hole orbits. To obtain these quantum results numerically, we calculated the scattering states for the billiard with a normal-conducting lead and coupled them to construct the Andreev states [12]. We compare the quantum mechanical solutions with a semiclassical Bohr-Sommerfeld (BS) quantization of periodic orbits and propose an extension of the BS approximation which is well suited to describe Andreev billiards with hard-wall as well as soft-wall boundaries. The underlying classical periodic electron-hole orbits are directly identified in terms of pronounced density enhancements engraved in the quantum wavefunctions of the Andreev states [12]. Additionally, we find states which feature very different wavefunctions for electron and hole, indicating the breakdown of the retracing approximation. Work on the inclusion of a disorder potential in the Andreev billiard is in progress.
Fig. 3: (a) Vacuum tube featuring random emission of electrons. (b) Quantum billiard with tunable shutters and a disorder potential (colored area). By tuning the opening w of the shutters, deviations of the Fano factor from the universal limit F = 1/4 can be investigated. (c) Fano factor F as a function of the shutter opening w, as calculated numerically. The curves shown correspond to strong (filled and open squares), medium (filled and open circles), weak (filled triangles), and no (open triangles) disorder potential. A decrease from the ``random value'' F = 1/4 for small shutter openings w to F = 0 for wide shutter openings w is clearly visible. The inset depicts the theoretical prediction based on a quasiclassical simulation. Note the good agreement with the numerical data from the full quantum simulation.
In analogy to systems which have been studied experimentally [14], we numerically investigate shot noise in cavities with tunable openings that allow us to vary the dwell time τ_D [15]. To simulate chaotic dynamics, we add a tunable random disorder potential inside the cavity (see Fig. 3b). A remarkable result we find is that for small shutter openings the Fano factor F is always very close to the universal limit (F = 1/4), independent of the strength of the disorder potential (see Fig. 3c). In particular for vanishing disorder, where chaotic dynamics in the cavity is entirely absent, this finding is surprising. We argue that diffraction at the lead openings [16] is the dominant source of shot noise. To quantify this conjecture, we develop a quasi-classical transport model for shot-noise suppression which extends previous models [17,18] and agrees with the numerical data (see inset of Fig. 3c).
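The universal value F = 1/4 quoted above can be recovered from random-matrix theory in a few lines (a sketch added for illustration, not the group's actual quantum simulation): for a chaotic cavity with two ideal leads the transmission eigenvalues follow the bimodal distribution P(T) = 1/(π sqrt(T(1-T))), and the Fano factor F = <T(1-T)>/<T> then evaluates to 1/4.
\begin{verbatim}
import numpy as np

# Sample transmission eigenvalues from P(T) = 1/(pi*sqrt(T*(1-T))) by
# inverse-CDF sampling: T = sin^2(pi*u/2) with u uniform on (0,1).
rng = np.random.default_rng(0)
T = np.sin(0.5 * np.pi * rng.random(1_000_000))**2

fano = np.mean(T * (1.0 - T)) / np.mean(T)
print(fano)    # ~0.25: the universal Fano factor of a chaotic cavity
\end{verbatim}
Analytically, <T> = 1/2 and <T(1-T)> = 1/8 for this distribution, which gives F = 1/4 exactly; the Monte Carlo estimate simply confirms the sampling.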
Fig. 4: (a) Billiards with a hard-wall vs. a soft-wall profile. (b) ``Trapped'' trajectory in a soft-wall billiard with a mixed classical phase space, and the density of the corresponding quantum wavefunction (a so-called ``GWB scar'').
As was pointed out previously [19], trapped trajectories lead to quasi-bound states in the corresponding quantum transport problem and appear as isolated resonances in the conductance. By analyzing the wave function probability density and the Husimi distribution at the resonance energies, we find remarkable similarities between the classical and quantum phase space structures [20]. This enables us to classify resonant scattering states associated with regular, trapped and unstable periodic classical trajectories.
References
Soft matter physics has become a rapidly developing branch of condensed matter physics. This is certainly due to the fact that soft matter plays an important role not only in our daily life, but also in many technological applications.
Despite the fundamental role that soft matter plays in our lives, systematic investigations of its properties remained out of reach for many decades, owing to the intrinsic complexity of these systems. Only in recent years have special experimental techniques, in combination with new theoretical concepts, brought along - in a fruitful cooperation among soft condensed matter scientists - a deeper insight into the intriguing phenomena of these systems. Since typical soft matter particles are mesoscopic in size, they can be investigated with experimental methods that are much simpler to handle than those for atomic systems: information is obtained directly in real space, and particles can be moved in space nearly arbitrarily with optical tweezers (for an overview see [1]).
The key problem a theoretician faces when dealing with soft matter is the huge number of degrees of freedom that characterize the particles. Typical soft matter particles (e.g., dendrimers or microgels) are, in turn, complex aggregates of several thousands of atoms or molecules, which leaves definitely no hope of describing such a system in full detail within the framework of statistical mechanics. Luckily, coarse-graining methods have turned out to be a very attractive tool for deriving effective interactions between two soft particles [2]: by suitably averaging over the many thousands of degrees of freedom of the constituent particles, one arrives at effective potentials that typically depend on the coordinates of the centers of mass of the two interacting aggregates. In contrast to atomic systems, these effective potentials diverge only weakly at the origin or even remain finite at short distances: this reflects the fact that - as a consequence of their loose internal structure - these aggregates are allowed to overlap, to mutually penetrate, or even to intertwine when being compressed. These particular features lead, in turn, to unexpected and surprising effects both in the structural properties and in the phase behaviour of such systems. Some of these effects have been studied in the present project.
During the past year we have focused in particular on ionic microgels. These are mesoscopically sized, covalently cross-linked polymer networks, with diameters σ in the range between 10 nm and 1 μm. Most microgels are based on poly(N-isopropylacrylamide) (PNIPAM) or related co-polymers that are cross-linked during emulsion polymerization, a process that can produce remarkably uniform particles. When the polymer chains comprising the microgels carry ionic groups on their backbones, the latter dissociate upon dissolution in an aqueous solvent, leading to charged, or ionic, microgels. Active interest in polyelectrolyte gels (to which microgels belong as a subgroup) persists to date, due to their ability to absorb large amounts of water and to act as superabsorbers or drug-delivery systems.
An effective potential, Φ_eff(r), where r is the center-to-center distance of two microgel particles, can be derived within the framework of linear-response theory [3]: Φ_eff(r) can be split into a bare interaction between two uniformly charged spheres and a contribution induced by the counterions. This effective potential remains finite at the origin and depends on the net microgel charge Z, the dielectric constant ε of the solvent, the counterion density n_c, and the counterion valency z. In addition, steric repulsions (due to the overlap between the monomer units of two interacting microgels) can be included in a simple model based on standard Flory-Huggins theory.
Based on this effective two-body potential we can now determine the phase diagram of the system [4, 5]. For the fluid phase we have used a standard liquid-state theory, namely the thermodynamically self-consistent Rogers-Young (RY) scheme [6], which provides information both on the structure and on the thermodynamic properties. For the solid phases we have applied an Einstein model with a Hamiltonian H_0 (characterized by a spring constant) which serves as a reference state for the true Hamiltonian, H_eff: the Gibbs-Bogoljubov inequality provides (via minimization with respect to the stiffness of the spring and the cell geometry) a least upper bound for the free energy. The set of possible candidate structures for the solid phases was fixed in a preceding step with the help of a genetic algorithm [7], where the following structures were proposed for the density range considered: fcc, bcc, hexagonal, bco, and trigonal.
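The logic of the Gibbs-Bogoljubov step can be illustrated in one dimension (a toy sketch added here, with an arbitrary quartic ``true'' potential of my choosing rather than the actual microgel Hamiltonian): one evaluates the variational bound F <= F_0 + <H - H_0>_0 for a harmonic, Einstein-like reference and minimizes over the spring constant.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

beta = 1.0   # inverse temperature

def gb_bound(kappa):
    # Reference H0 = kappa*x^2/2 (configurational part); "true" V(x) = x^4.
    # Gaussian averages: <x^2>_0 = 1/(beta*kappa), <x^4>_0 = 3*<x^2>_0^2.
    x2 = 1.0 / (beta * kappa)
    F0 = -0.5 / beta * np.log(2.0 * np.pi / (beta * kappa))
    return F0 + 3.0 * x2**2 - 0.5 * kappa * x2   # F0 + <V - kappa*x^2/2>_0

res = minimize_scalar(gb_bound, bounds=(0.1, 50.0), method='bounded')
print(res.x, res.fun)   # optimal spring constant and least upper bound on F
\end{verbatim}
In the actual crystal calculation the same minimization runs over the spring stiffness and the cell geometry of each candidate lattice, and the structure with the lowest bound wins.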
The first indications of a very peculiar phase behaviour are found from a closer analysis of the structure factor S(k) (Figure 1): as the density ρ becomes larger, the height of the main peak first increases (as expected), but then drops suddenly at ρσ^3 ~ 2. This anomalous behaviour is a clear indication that re-entrant melting is to be expected.
Fig. 1: Anomalous behaviour of the structure factor S(k) for microgels with charge Z = 250 and size σ = 100 nm for increasing density (as indicated in the inset).
Fig. 2: Phase diagram (Z vs. ρ) of ionic microgels of size σ = 100 nm. For the fluid phase the RY approach has been used.
The phase diagram of ionic microgels (Z vs. ρ) for σ = 100 nm is shown in Figure 2. For Z < 200 the system remains fluid over the entire density range considered. As we increase the charge, we encounter re-entrant melting: at densities roughly below the overlap value, ρ*, the system freezes into the fcc lattice, which undergoes a structural phase transformation into a bcc structure at higher densities. Upon further compression the system remelts, i.e., the disordered structure becomes energetically more favourable. For charges larger than ~400 the re-entrant melting scenario repeats itself, but the stable crystal lattices are no longer cubic: instead, the system crystallizes into unusual, strongly asymmetric structures with a small number of nearest neighbours, such as hexagonal, bco, and trigonal lattices.
These findings demonstrate that soft interactions offer an unexpectedly rich variety of new physical phenomena and that the conventional views on crystallization gained from hard potentials have to be revisited thoroughly. It is anticipated that not only the equilibrium but also the dynamical behaviour of ionic microgel solutions will be highly unusual, opening the way to a wealth of possibilities to manipulate the rheological behaviour of microgel solutions and possibly leading to interesting technological applications.
References
Liquid state theories establish a link between the microscopic properties of a liquid (in terms of its pair potential) and its structural and thermodynamic properties. Statistical mechanics provides the versatile formalism to determine the relevant relations. In practice, however, these expressions cannot be applied directly, not even for the simplest non-ideal system, i.e., hard spheres: they become intractably complex and therefore require simplifying assumptions. These simplifications lead to approximate schemes that can be derived systematically (e.g., via graph-theoretical considerations) [1] from the exact expressions for partition sums and related quantities. One thus arrives at so-called closure relations to the Ornstein-Zernike equation, which relate the total and the direct correlation functions that describe the structural properties of the system. In the early years of liquid state theory, well-known conventional schemes such as the mean spherical approximation (MSA), the Percus-Yevick (PY) approximation, and the hypernetted chain (HNC) approximation were derived.
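For reference, the Ornstein-Zernike (OZ) relation and the closures mentioned above take the following standard textbook forms, with total correlation function h(r) = g(r) - 1, indirect correlation function γ(r) = h(r) - c(r), and β = 1/kBT:

\[
h(r) = c(r) + \rho \int d^3 r' \, c(|\vec r - \vec r\,'|)\, h(r') \qquad \text{(OZ)},
\]
\[
g(r) = \exp[-\beta \Phi(r) + \gamma(r)] \quad \text{(HNC)}, \qquad
g(r) = e^{-\beta \Phi(r)} \left[ 1 + \gamma(r) \right] \quad \text{(PY)},
\]
\[
c(r) = -\beta \Phi(r) \;\; (r > \sigma), \qquad g(r) = 0 \;\; (r < \sigma) \quad \text{(MSA)}.
\]

The thermodynamically self-consistent Rogers-Young closure used for the microgel study above interpolates between PY at short and HNC at large distances,

\[
g(r) = e^{-\beta \Phi(r)} \left[ 1 + \frac{e^{f(r)\gamma(r)} - 1}{f(r)} \right], \qquad
f(r) = 1 - e^{-\alpha r},
\]

where the parameter α is tuned such that the virial and compressibility routes yield the same compressibility.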
The simplifying assumptions lead, however, to a serious drawback: if we calculate the thermodynamic properties of a given system, we obtain - as a consequence of the approximate character of the closure relations - results that depend on the thermodynamic route chosen; an exact theory, by contrast, would yield identical data.
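Concretely, the three standard routes read (exact relations for a pair potential Φ(r); the route dependence arises because an approximate g(r) is inserted):

\[
\frac{U}{N} = \frac{3}{2} k_B T + 2 \pi \rho \int_0^\infty dr\, r^2\, \Phi(r)\, g(r)
\quad \text{(energy)},
\]
\[
\frac{\beta P}{\rho} = 1 - \frac{2 \pi \beta \rho}{3} \int_0^\infty dr\, r^3\, \Phi'(r)\, g(r)
\quad \text{(virial)},
\]
\[
\rho k_B T \kappa_T = 1 + 4 \pi \rho \int_0^\infty dr\, r^2 \left[ g(r) - 1 \right] = S(k \to 0)
\quad \text{(compressibility)}.
\]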
These inconsistencies have further consequences: first, the structural data become inaccurate; second, the determination of the phase diagram (in particular the exact location of the phase boundaries as well as a reliable description of the critical region) becomes problematic. Remedies to cope with this problem have been sought in thermodynamically self-consistent schemes, notably the self-consistent Ornstein-Zernike approximation (SCOZA) and the hierarchical reference theory (HRT), which enforce consistency between different thermodynamic routes by construction.
Over many years, our group has accumulated expertise in liquid state theory: this applies both to numerical implementations and to the development of new integral-equation schemes. In the following we briefly report on a few recent contributions.
We have studied the phase diagram of a binary symmetric mixture of two fluids (labeled '1' and '2'): here the potentials between like particles are equal, i.e., Φ11(r) = Φ22(r), while the interaction between unlike particles is fixed by Φ12(r) = αΦ11(r). The parameter α can be identified as the relevant quantity that triggers the phase behaviour of the system (see below). If α < 1, the competition between the liquid-vapour transition and the demixing transition (into a 1-rich and a 2-rich phase) leads to a very intriguing phase behaviour [4]; some aspects are presented below.
The system itself is not as academic as it might seem at first sight: it shows - if we consider for the moment only the topology of the phase diagrams - the same behaviour as one-component systems endowed with an additional internal degree of freedom, such as Heisenberg liquids or fluids of particles carrying a dipole moment. Since the binary symmetric mixture is the simplest representative of this class of systems, it is the obvious candidate for detailed investigations of the phase behaviour.
Fig. 1: Three-dimensional representation of the phase diagram of a binary symmetric mixture for α = 0.69; for details cf. text.

We present in Figure 1 results for α = 0.69. We depict the phase diagram in the three-dimensional {temperature (T) - density (ρ) - concentration (c)} space, along with the three respective two-dimensional projections. Dark blue lines mark isothermal coexistence curves, the red and the orange lines are lines of critical points (i.e., of second-order transitions), bold light blue lines are triple lines, and turquoise lines mark the phase diagram in the absence of an external field, i.e., the so-called equimolar case, in which the difference in the chemical potentials of the two species vanishes.
Despite the simplicity of the model, the phase behaviour of the system is rather complex and therefore represents a nice, instructive example of critical phenomena: among other features we observe critical end points (where a line of critical points terminates on a coexistence surface), triple lines, and tricritical points (where three phases become critical simultaneously). These results are based on an analytic solution of the MSA for a system with hard-core Yukawa interactions and have been confirmed by computer simulations specifically designed to study critical phenomena [5].
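For completeness, the hard-core Yukawa interactions underlying this analytic MSA solution are of the generic form (a sketch of the standard parametrization; σ is the hard-core diameter and z the inverse screening length):

\[
\Phi_{ij}(r) =
\begin{cases}
\infty, & r < \sigma, \\
-\,\epsilon_{ij}\, \sigma\, \dfrac{e^{-z (r - \sigma)}}{r}, & r \ge \sigma,
\end{cases}
\qquad \epsilon_{11} = \epsilon_{22}, \quad \epsilon_{12} = \alpha\, \epsilon_{11}.
\]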
Recently, we have successfully extended the SCOZA scheme to a large variety of systems: for liquids with a repulsive core (as encountered, e.g., in atomic systems) we are now able to consider potentials with arbitrary attractive tails. Comparison with computer simulations has shown that SCOZA does indeed remain accurate close to phase boundaries and in the critical region. Particular attention has recently been devoted to soft systems, i.e., liquids where the pair potential remains finite at the origin or diverges only weakly at short distances; such interactions are typical for soft matter particles (see section 2.3.2). Here, too, we could show that computer simulation data for the structural and thermodynamic properties are reproduced with high accuracy.
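Schematically, SCOZA replaces the fixed MSA amplitude by a state-dependent one (a sketch; the detailed form varies with the system at hand):

\[
c(r) = c_{\rm HS}(r) + K(\rho, T)\, \varphi(r),
\]

where c_HS is the hard-sphere direct correlation function, φ(r) the tail of the potential, and the amplitude K(ρ,T), which in an MSA-type closure would simply be -β, is instead determined by enforcing equality of the compressibility and energy routes; this condition takes the form of a diffusion-type partial differential equation for the excess internal energy, schematically

\[
\frac{\partial}{\partial \beta} \left( \frac{1}{\chi_{\rm red}} \right)
= \rho\, \frac{\partial^2 u}{\partial \rho^2},
\]

with u the excess internal energy per unit volume and χ_red the reduced isothermal compressibility.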
Special emphasis has furthermore been put on closer investigations of the HRT scheme. It is in particular the implementation of HRT that represents a challenge, both from the conceptual and from the numerical point of view. In the effort to localize the phase boundaries and the critical point with high accuracy, states of diverging compressibility have to be identified, which turned out to be a very delicate problem [6].
References
The study of multiply charged ion-solid interactions is of considerable technological importance for the understanding of material damage, surface modification, and plasma-wall interactions. The recent availability of sources for slow highly charged ions (HCI), namely electron cyclotron resonance (ECR) ion sources and electron beam ion sources (EBIS), has led to a flurry of research activities, both experimental and theoretical, in the field of HCI-solid interactions [1-3]. On the most fundamental level, its importance derives from the complex many-body response of the surface electrons to the strong Coulomb perturbation.
From numerous experimental as well as theoretical studies the following scenario of the HCI-surface interaction has emerged: when an HCI approaches a solid surface, one or more electrons are resonantly captured at large distances into high Rydberg states of the projectile. As a result, so-called hollow atoms (ions) are formed, in which the atomic charge cloud transiently resides in shells of large diameter while the core is virtually empty. Direct observation of this short-lived state is complicated by the fact that the ion is always attracted towards the surface by its self-image potential. Consequently, it suffers close collisions upon impact on the surface, and the memory of the hollow atom is all but erased. This problem has motivated the study of interactions of HCI with the internal surfaces of micro- and nanocapillaries as an alternative technique to study above-surface processes (e.g. [4]).

Fig. 1: Schematic picture of a capillary and three types of trajectories.

Metal and insulating capillaries have become available at the Tokyo Metropolitan University, Japan, and at the Hahn-Meitner-Institut Berlin, Germany [5]. The use of capillary targets allows the extraction of hollow atoms into vacuum, so that photons or Auger electrons emitted from them in flight can be observed. Moreover, the energy loss an HCI suffers when passing through a capillary at distances too large for charge transfer to take place can be both measured and calculated.
During the past year we have performed a broad range of simulations to study the interaction of HCI with capillary surfaces in detail. In particular, we have concentrated on the simulation of projectiles that did not change their initial charge state during the interaction (trajectories of type (1) in Fig. 1).
Fig. 2: 2D correlation pattern between the energy loss and the scattering angle of Kr30+ ions passing through a Ni microcapillary at 2.5 eV/amu energy.

The strength of the stopping power strongly increases as the particle approaches the surface. Fig. 2 shows the two-dimensional correlation pattern between the scattering angle and the energy loss for Kr30+ ions transmitted through a Ni capillary. We find a maximum energy loss of about 0.9 eV, which should be detectable.
Our simulation consists of several ingredients bridging processes that occur on vastly different time scales: microscopic charge-up (~10^-15 s), transport of a single ion (~10^-10 s), the time interval between subsequent ions (~10^-1 s), and the approach to dynamical equilibrium (~10^2 s). Initially, Q charges are deposited on the surface, where Q is the initial charge state of the projectile. Due to the finite conductivity of the target material these charges move along the capillary wall or, with a small probability, diffuse into the bulk. Subsequent projectile trajectories are calculated taking into account the electric field of the charges previously deposited on the capillary wall.
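The following heavily simplified sketch illustrates the basic charge-up feedback loop (reduced units; all numerical parameters are illustrative, and the charge mobility along the wall, bulk diffusion, and image interactions of the full simulation are omitted): each ion propagates in the field of the charge patches left by its predecessors and, upon wall impact, adds its own charge to the wall.

import numpy as np

Q = 7                       # projectile charge state (e.g. Ne7+)
R, L = 0.05, 1.0            # capillary radius and length
COUL = 1e-5                 # Coulomb coupling in these reduced units (assumed)
EPS = 5e-3                  # softening length, keeps the explicit Euler stable
wall_charges = []           # (position, charge) pairs deposited on the wall

def field(pos):
    """Electric field at pos from all previously deposited wall charges."""
    E = np.zeros(3)
    for r0, q in wall_charges:
        d = pos - r0
        E += COUL * q * d / (d @ d + EPS**2)**1.5
    return E

def run_ion(theta_in, v0=1.0, dt=1e-3, rng=np.random.default_rng(1)):
    """Propagate one ion; deposit its charge on wall impact (unit mass)."""
    x, y = rng.uniform(-R/2, R/2, size=2)           # random entry point
    pos = np.array([x, y, 0.0])
    vel = v0 * np.array([np.sin(theta_in), 0.0, np.cos(theta_in)])
    for _ in range(20000):                          # hard step cap
        vel += Q * field(pos) * dt                  # a = Q E / m, m = 1
        pos += vel * dt
        if np.hypot(pos[0], pos[1]) >= R:           # wall impact:
            wall_charges.append((pos.copy(), Q))    # microscopic charge-up
            return None
        if pos[2] >= L:                             # transmitted
            return vel
    return None

# send a sequence of ions; as the wall charges up, later ions are deflected
# away from the surface and the transmitted fraction increases
exits = [run_ion(np.radians(3.0)) for _ in range(200)]
print("transmitted fraction:", sum(v is not None for v in exits) / len(exits))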
Fig. 3: Two-dimensional angular distribution of transmitted Ne7+ ions for θin = 0°, 1°, 3°, and 5°.

We were able to reproduce the transmission rates for HCI transmitted through Mylar and SiO2 capillaries at energies ranging from 3 keV to 7 keV.
Another result consistent with experimental findings is the growing difference between the angular distributions of transmitted ions parallel and perpendicular to the plane of incidence with increasing θin. In Fig. 3 we show the two-dimensional distribution of exit angles for θin = 0°, 1°, 3°, and 5°. The distribution normal to the plane of incidence (y-direction) remains almost constant for all angles. Parallel to the plane of incidence (x-direction), a slight widening and a displacement of the peak from the center of the distribution are found. This agrees with experiments showing a small deviation of the centroid of the scattering distribution towards larger deflection angles. For incidence angles larger than 10°, even the formation of double-peak structures could be observed, as in experiment [7].
References
Full Professors

BURGDÖRFER Joachim | |
burg@concord.itp.tuwien.ac.at | Tel. +43-1-58801/13610 |
SCHWEDA Manfred | |
mschweda@tph.tuwien.ac.at | Tel. +43-1-58801/13640 |
Associate Professors
DIRL Rainer | |
rdirl@tph.tuwien.ac.at | Tel. +43-1-58801/13617 |
KAHL Gerhard | |
gkahl@tph.tuwien.ac.at | Tel. +43-1-58801/13632 |
KRAEMMER Ulrike | |
on leave until 24/9/2006 | |
KREUZER Maximilian | |
kreuzer@hep.itp.tuwien.ac.at | Tel. +43-1-58801/13621 |
REBHAN Anton | |
rebhana@tph.tuwien.ac.at | Tel. +43-1-58801/13626 |
SVOZIL Karl | |
svozil@tuwien.ac.at | Tel. +43-1-58801/13614 |
University Assistants

BALASIN Herbert | |
hbalasin@tph.tuwien.ac.at | Tel. +43-1-58801/13624 |
LEMELL Christoph | |
lemell@concord.itp.tuwien.ac.at | Tel. +43-1-58801/13612 |
PERSSON Emil | |
emil@concord.itp.tuwien.ac.at | Tel. +43-1-58801/13630 |
ROHRINGER Nina | |
nina@concord.itp.tuwien.ac.at | Tel. +43-1-58801/13633 |
ROTTER Stefan | |
rotter@concord.itp.tuwien.ac.at | Tel. +43-1-58801/13618 |
YOSHIDA Shuhei | |
shuhei@concord.itp.tuwien.ac.at | Tel. +43-1-58801/13611 |
Administrative Staff
MÖSSMER Elfriede | |
moessmer@pop.tuwien.ac.at | Tel. +43-1-58801/13601 |
UNDEN Roswitha | |
unden@tph.tuwien.ac.at | Tel. +43-1-58801/13602 |
Professors (Emeritus) and Retired Faculty
KASPERKOVITZ Peter | |
kasperko@tph.tuwien.ac.at | Tel. +43-1-58801/13632 |
KUMMER Wolfgang | |
wkummer@tph.tuwien.ac.at | Tel. +43-1-58801/13620 |
NOWOTNY Helmut | |
hnowotny@tph.tuwien.ac.at | Tel. +43-1-58801/13634 |
External Lecturers
MAJEROTTO Walter | Lecturer a |
MARKYTAN Manfred | Lecturer a |
SCHALLER Peter | Lecturer c |
SEKE Josip | Lecturer d |
SIGMAR Dieter | Lecturer b |
SKARKE Harald | Lecturer c |
LANDSTEINER Karl | Lecturer e |
a Institut für Hochenergiephysik der Österr. Akad. d. Wissenschaften
b Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts (USA)
c Bank Austria - Creditanstalt
d Institute for Theoretical Physics, TU Vienna
e Instituto de Física Teórica, Universidad Autónoma de Madrid
Scientific Coworkers

AIGNER Florian | |
aigner@concord.itp.tuwien.ac.at | Tel. +43-1-58801/13655 |
ARBÓ Diego | |
diego@concord.itp.tuwien.ac.at | Tel. +43-1-58801/13630 |
BARNA Imre | |
barna@concord.itp.tuwien.ac.at | Tel. +43-1-58801/13654 |
BERGAMIN Luzi | |
bergamin@tph.tuwien.ac.at | Tel. +43-1-58801/13622 |
BÖHMER Christian | |
boehmer@hep.itp.tuwien.ac.at | Tel. +43-1-58801/13622 |
DEISS Cornelia | |
cornelia@concord.itp.tuwien.ac.at | Tel. +43-1-58801/13655 |
DENK Stefan | |
denk@hep.itp.tuwien.ac.at | Tel. +43-1-58801/13625 |
DIMITRIOU Konstantinos | |
dimi@concord.itp.tuwien.ac.at | Tel. +43-1-58801/13655 |
FERNAUD Maria-José | |
fernaud@tph.tuwien.ac.at | Tel. +43-1-58801/13631 |
GERHOLD Andreas | |
gerhold@hep.itp.tuwien.ac.at | Tel. +43-1-58801/13637 |
GOTTWALD Dieter | |
gottwald@tph.tuwien.ac.at | Tel. +43-1-58801/13631 |
GUTTENBERG Sebastian | |
basti@hep.itp.tuwien.ac.at | Tel. +43-1-58801/13637 |
HÖRNDL Maria | |
maria@concord.itp.tuwien.ac.at | Tel. +43-1-58801/13633 |
LIBISCH Florian | |
florian@concord.itp.tuwien.ac.at | Tel. +43-1-58801/13612 |
MLADEK Bianca | |
mladek@tph.tuwien.ac.at | Tel. +43-1-58801/13631 |
NIGSCH Martin | |
nigsch@cmt.tuwien.ac.at | Tel. +43-1-58801/13616 |
REINER Albert | |
NTNU Trondheim, Norway | |
REINOSA Urko | |
reinosa@hep.itp.tuwien.ac.at | Tel. +43-1-58801/13637 |
SCHEIDEGGER Emanuel | |
esche@hep.itp.tuwien.ac.at | Tel. +43-1-58801/13623 |
SCHÖLL-PASCHINGER Elisabeth | |
elisabeth.schoell-paschinger@univie.ac.at | Tel. +43-1-58801/13631 |
SEKE Josip | |
jseke@tph.tuwien.ac.at | Tel. +43-1-58801/13651 |
TÖKÉSI Karoly | |
tokesi@concord.itp.tuwien.ac.at | Tel. +43-1-58801/13612 |
WICKENHAUSER Marlene | |
wick@concord.itp.tuwien.ac.at | Tel. +43-1-58801/13612 |
WOHLGENANNT Michael | |
miw@hep.itp.tuwien.ac.at | Tel. +43-1-58801/13637 |
ZEINER Peter | |
zeiner@tph.tuwien.ac.at | Tel. +43-1-58801/13616 |
Coworkers who left during 2004 | |
BOZKAYA Hidir | |
Institute for Experimental Physics, TU Vienna | |
IPP Andreas | |
ECT* Trento, Italy | |
KALYUZHNYI Yuri | |
Ukrainian Academy of Sciences, Lviv, Ukraine | |
PITSCHMANN Mario | |
Institute for Experimental Physics, TU Vienna | |
PUTZ Volkmar | |
Institute of Photonics, TU Vienna | |
RIEGLER Erwin | |
Max Planck Institute for Mathematics in the Sciences, Leipzig | |
SCHLESINGER Karl-Georg | |
University of Vienna | |
STRICKLAND Mike | |
University of Helsinki, Finland | |
THEIS Ulrich | |
Friedrich Schiller University, Jena | |
WEINGARTNER Bernhard | |
University of Natural Resources and Applied Life Sciences, Vienna |