
Problems of Quantization of General Relativity

In some sense, there is no problem of "quantization of gravity" at all. The quantization of Newtonian gravity, in particular, is trivial. One can more reasonably speak of a problem of "quantization of relativistic gravity". But even this problem is almost non-existent if it is understood merely as the problem of incorporating relativistic effects.

What is extremely problematic, and essentially impossible, is the quantization of General Relativity (GR).

Problems caused by background independence (the equivalence principle)

More accurately, the creation of a quantum theory which follows the same fundamental, metaphysical principles as GR is close to impossible, because the most important fundamental principle of GR - the non-existence of a background - is in deep conflict with the principles of quantum theory.

Let's look in detail at the main points of conflict between the background independence of GR and quantum principles:

The "problem of time"

The problem is that in quantum theory time - the parameter used in its basic equation, the Schrödinger equation - is the classical Newtonian absolute time. Time in quantum theory is not "what clocks measure" - there is not even an operator for time measurement. Every clock measures this time only approximately. There is even a theorem which states that every clock has a non-zero probability of going backward in time.
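To make the conflict explicit, here is a minimal sketch using standard textbook facts (not specific to any particular quantization program):

\[
i\hbar\,\frac{\partial}{\partial t}\,\psi(t) = \hat{H}\,\psi(t).
\]

Here \(t\) is an external, classical parameter, not an observable: a self-adjoint time operator \(\hat{T}\) with \([\hat{T},\hat{H}] = i\hbar\) would force the spectrum of \(\hat{H}\) to be unbounded from below (the standard Pauli argument), so physical clocks can only approximate this \(t\) - with, as mentioned above, a non-zero probability of running backward.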

No well-defined local conservation laws for energy and momentum

The non-existence of such an absolute time in GR leads to additional problems. If a theory has a preferred time, a symmetry under translations in time, and a Lagrange formalism, then the Noether theorem gives a conservation law for energy. This energy plays a central role in quantum theory, because in the Schrödinger equation the Hamilton operator - the operator which measures this conserved energy - is used.

In GR, this becomes problematic. One can, of course, use arbitrary coordinates, and the equations of the theory remain unchanged. In this sense, translational symmetry is only a particular example: the coordinates \(t' = t + \text{const}\) can be used too. But this is a quite degenerate translational symmetry. As a result, what the Noether theorem gives is also a degenerate version of energy and momentum conservation: we obtain some conserved quantity, but this conserved quantity is zero because of the Einstein equations. They tell us \(G_{\mu\nu} = T^{matter}_{\mu\nu}\) (in suitable units), so that for the "energy-momentum tensor" \(T_{\mu\nu} = T^{matter}_{\mu\nu} - G_{\mu\nu}\) we have, indeed, a conservation law \(\partial_\mu T_{\mu\nu} = 0\), but only trivially, because \(T_{\mu\nu}=0\) itself. So, the energy and momentum conservation laws given by the Noether theorem become useless.

No well-defined Hamilton formalism

The consequence of the degeneration of the energy-momentum tensor is that the Hamilton formalism degenerates too.

This can also be seen directly from the equivalence principle: it does not give a complete evolution equation for all the components of the metric tensor \(g_{\mu\nu}(x,t)\), because a solution modified by a local coordinate transformation, which can leave the initial values as well as the boundary conditions untouched, has to be a valid solution of the same covariant equations too. But that means that the equations of motion do not fix the solution completely - which is the classical part of the hole problem of GR.

The consequence of this incomplete evolution equation for the Hamilton formalism is, if not completely fatal, at least seriously problematic: the Hamilton function itself is simply zero. Some aspects of the Hamilton formalism may, nonetheless, be saved.
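In the canonical (ADM) formulation this degeneration is explicit. A sketch, using the standard lapse \(N\), shift \(N^i\), and constraints \(\mathcal{H}\), \(\mathcal{H}_i\) (notation introduced here only for illustration):

\[
H = \int d^3x \left( N\,\mathcal{H} + N^i\,\mathcal{H}_i \right), \qquad \mathcal{H} \approx 0, \quad \mathcal{H}_i \approx 0,
\]

so that on solutions \(H = 0\) (up to boundary terms in the asymptotically flat case), and a naive Schrödinger equation degenerates into the constraint \(\hat{H}\Psi = 0\), in which no time parameter appears.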

Topological foam

Background independence allows for solutions with non-trivial topology, and such solutions exist. In a quantum theory, given the possibility of tunneling, one cannot restrict the consideration to classical solutions only. So, together with quantum fluctuations, topological fluctuations also have to be expected. And once they are possible, they will, moreover, be typical. Thus, microscopically one would have to expect, as the usual, average configuration, one with a lot of small, microscopic distortions of the topology. This expectation is named "topological foam". How one could handle such a topological foam is completely unclear.

If there were a well-defined background, this would not be a problem - all configurations, whether solutions or quantum fluctuations, would have to have the same topology as the background.

Undefined causality

How could causality be defined if we have a superposition of different metric fields, with different light cones and thus different notions of Einstein causality? A background could provide a notion of causality which does not depend on the gravitational field but is fixed by the background. In a background-independent theory, however, there is no chance to obtain a meaningful notion of causality.

Causality is problematic already in classical GR, given that it allows for solutions with causal loops. These would lead to paradoxes like the grandfather paradox: one travels with a time machine into the past and kills one's own grandfather as a child. After this, one's own existence becomes paradoxical, because one's father will never be born. In classical GR, one can at least try fatalism - everything is predefined anyway, so everything can be predefined in such a way that no paradoxes appear. And without free will of my own, there is no possibility for me to kill my grandfather.

How this fatalistic block world could be compatible with quantum uncertainty is not clear either.

Summary

All the problems mentioned above have a quite simple solution: the acceptance of a background. This background would define preferred coordinates - some natural coordinates of the background, say, Cartesian coordinates for space and an absolute time coordinate if that background were a classical Newtonian background.

Non-renormalizability

Beyond the problems created by background freedom alone, which simply disappear if we add a fixed Newtonian background, there remains only one serious problem. Or, more accurately, a problem which seemed very serious for a long time, until it became much better understood: quantum gravity is non-renormalizable. This problem remains even if we add a background.

But after Wilson's insight into the nature of renormalizability, it became quite easy to understand what happens, as well as what would solve the conceptual problem: a non-renormalizable theory is fine as a large-distance approximation, which becomes invalid below some critical distance.

The hypothesis that our theories are only large-distance approximations, and become invalid below a critical distance, also nicely explains why our other field theories are renormalizable: the point is that for each term of the true fundamental theory one can compute how the term changes if the critical distance - the most important of the free parameters of the theory - changes. All of them change, and this change is what is named "renormalization". But most of the terms become smaller in comparison with others if the critical distance decreases. So, if the critical distance becomes very small, they can simply be ignored as irrelevant. Only a few terms survive. And these terms, which survive, are the renormalizable terms.
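The scaling behind this argument is standard dimensional analysis. A sketch, with \(\ell_c\) the critical distance, \(E\) the energy scale of observation, and \(\Delta_i\) the mass dimension of the term \(\mathcal{O}_i\) (all notation introduced here for illustration, in units \(\hbar = c = 1\)):

\[
\mathcal{L} = \sum_i c_i\, \ell_c^{\,\Delta_i - 4}\, \mathcal{O}_i ,
\]

so a term contributes to low-energy observables roughly in proportion to \(c_i (E\,\ell_c)^{\Delta_i - 4}\): for \(\Delta_i > 4\) the contribution shrinks as the critical distance \(\ell_c\) decreases (these are the non-renormalizable, "irrelevant" terms), while the terms with \(\Delta_i \le 4\) - the renormalizable ones - survive.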

The only exception is gravity. For the gravitational field, as described by GR, there is simply no renormalizable term. All we have there is the term which decreases the least among the non-renormalizable ones - and this is, essentially, the term which defines GR. On the other hand, given that it is not renormalizable, it still decreases faster than all the renormalizable terms. A problem? No, a solution to a problem, namely the problem of explaining why gravity is, on the quantum level, so extremely weak in comparison with all the other forces.
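For gravity this suppression can be estimated directly. Assuming the critical distance is of the order of the Planck length \(\ell_{Pl}\), so that \(M_{Pl} = 1/\ell_{Pl}\) in units \(\hbar = c = 1\):

\[
G \sim \ell_{Pl}^2 = \frac{1}{M_{Pl}^2}, \qquad G\,E^2 = \left(\frac{E}{M_{Pl}}\right)^2 \approx 10^{-38} \quad \text{at } E \sim 1\ \text{GeV},
\]

which is a rough quantitative form of the statement that quantum gravity is extremely weak compared with the other forces.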

What can we conclude from non-renormalizability? It appears that it is not even a problem, but a tool which allows us to explain why gravity is so weak in comparison with the other forces.

But this requires accepting that our theory is not a fundamental truth, but is valid only as a large-distance approximation. Below some critical distance it will be wrong and has to be replaced by a different, more fundamental theory.

And this conceptual acceptance that GR is not fundamental, but only a large-distance approximation which fails below some critical length, is another, independent good reason to doubt that the equivalence principle is a fundamental truth. It is not very plausible that, once we have to replace the theory by a completely different one, the symmetries of the large-distance limit will survive. Let's not forget that the very point of renormalization is that the fundamental theory contains many more terms, and almost all of these terms simply become irrelevant, unobservable at large distances. Only the few renormalizable terms survive. In this situation, it seems quite plausible that the large-distance limit, with far fewer terms, can have a larger symmetry group.

That means that even if non-renormalizability is a different sort of problem, one not solved by introducing a background alone, its solution will, with high probability, also destroy relativistic symmetry, so that it will, with high probability, also be a theory without background independence.

What is wrong with straightforward quantization of gravity?

What we have seen is that for the quantization of gravity there exists a straightforward solution. Let's describe this default solution:

Introducing a background

First, one has to add a background to GR. That means we have to reintroduce classical space and time, as absolute entities not influenced by matter, into the theory. So, we have to fix a preferred system of coordinates in space and time. While there may be many special conditions defining, in various circumstances, special systems of coordinates, there is essentially only one candidate where the condition looks like a reasonable physical equation, so that it can be used as an additional physical equation: the harmonic condition \(\square \mathfrak{x}^{\mu} = \partial_{\nu} (g^{\mu\nu}\sqrt{-g}) = 0\).

The harmonic condition can be easily incorporated into the Euler-Lagrange formalism: given that it is simply the equation for a massless scalar particle applied to the preferred coordinates, one can use the standard Lagrangian for a massless scalar particle for this purpose. This modifies the equations in a minor way, but not more than some massless scalar dark-matter particle would do. Thus, this modification can hardly be incompatible with observation as long as GR itself is viable. The conceptual difference is that, after this, the modified theory has normal, non-degenerate local energy and momentum conservation laws, connected as usual via the Noether theorem with translational symmetry in the preferred coordinates.
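A minimal sketch of how this can be done, treating each preferred coordinate as a scalar field \(X^\alpha\) (the fields \(X^\alpha\) and the constants \(\varepsilon_\alpha\) are notation introduced here only for illustration):

\[
S = S_{GR} + S_{matter} + \sum_{\alpha} \frac{\varepsilon_\alpha}{2} \int d^4x\, \sqrt{-g}\; g^{\mu\nu}\, \partial_\mu X^\alpha\, \partial_\nu X^\alpha .
\]

Variation with respect to \(X^\alpha\) gives \(\partial_\mu(\sqrt{-g}\, g^{\mu\nu} \partial_\nu X^\alpha) = 0\), the massless wave equation; in the gauge \(X^\alpha = x^\alpha\) this is exactly the harmonic condition above, while the variation with respect to the metric adds a contribution to the Einstein equations comparable to that of massless scalar matter.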

The result of this first step is, therefore, a simple and conceptually unproblematic field theory on a Newtonian background.

Let's note also that this approach is well-known: it is named the field-theoretic approach. Its problem is known to be the non-renormalizability of GR.

Regularizing the theory

Given that in this form the quantum field theory nonetheless produces a lot of infinities, one has to regularize it. In principle, any regularization would do the job. One can use, for example, a lattice regularization, which is nothing but what one has to do anyway if one wants to compute approximate GR solutions on a computer - except that here we do not have to care about the lattice approximation giving very accurate results, which essentially simplifies the job.

If one uses harmonic coordinates to define the background, and requires a global, time-like harmonic coordinate as the preferred time (to be used in the Schrödinger equation), one can use the interpretation of the harmonic conditions as continuity and Euler equations for some ether. This suggests a quite natural lattice discretization in the form of an "atomic ether", where the lattice nodes move on the background with the velocity of the ether, and the ether density is defined by the density of the lattice nodes.
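The identification can be illustrated as follows (the definitions of \(\rho\) and \(v^i\) are assumptions made here for the illustration). Setting \(\rho = g^{00}\sqrt{-g}\) and \(\rho v^i = g^{0i}\sqrt{-g}\), the \(\mu = 0\) component of the harmonic condition \(\partial_\nu(g^{\mu\nu}\sqrt{-g}) = 0\) becomes

\[
\partial_t \rho + \partial_i \left( \rho v^i \right) = 0,
\]

a continuity equation for this ether density; the spatial components \(\mu = i\) can, analogously, be read as Euler equations.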

But let's note that these are already particular details. In principle, any regularization, by definition of the word "regularization", gives a regular theory, without infinities. And it is also clear that this regularization has to introduce a cutoff for waves with high momentum. There will be some critical distance such that below it the regularized theory is very different from GR itself. GR itself survives as a large-distance approximation.
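For a lattice regularization the momentum cut is explicit. A sketch, with \(h\) denoting the lattice spacing (an illustrative parameter): only modes with momenta

\[
|k_i| \le \frac{\pi}{h}
\]

can be represented on the lattice, so the critical distance is of the order of \(h\); waves with shorter wavelengths simply do not exist in the regularized theory, while for wavelengths much larger than \(h\) the lattice theory approximates the continuum.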

This approach to field theories in general - to renormalizable as well as non-renormalizable ones - is also well-known: it is the Wilsonian approach to renormalization. Part of this approach is that non-renormalizable theories can nonetheless be understood and used as effective field theories.

So what is wrong with this?

The question is what is wrong with this approach to quantization of gravity.

It is certainly not any conflict with observation. On the classical level, one could think that introducing a Newtonian background could, in principle, cause problems. But there are no such problems. The universe is, on the large scale, homogeneous and spatially flat, thus completely compatible with a flat background. There is no observational evidence for wormholes, causal loops, or similar configurations for which introducing a flat Newtonian background would be problematic. Quantum effects of gravity are observationally almost irrelevant, and the best candidate for the critical distance is the Planck length, which is so small that making observations at such distances is hopeless. So, straightforward quantum gravity has no problem of compatibility with observation.

So, given that the objections against this approach have nothing to do with observational or experimental evidence, they necessarily have a metaphysical character.

What are these metaphysical ideas which lead to the rejection of the straightforward theory of quantum gravity?

No Background-Independence

The metaphysical concept of background-independence is named "relationalism" and goes back to Descartes:

According to Descartes, there is no “space” at all, but only physical objects which can be in touch with each other. The “position” or location, respectively, of an object is only defined by the naming of other physical objects close to it, i.e. the position of a body is the set of those objects to which the body is contiguous. Equally important is the concept of “motion”, which is defined as the change of position. Thus motion is determined by the change of contiguity, i.e. only in relation to other objects. This point of view is denoted as relationalism. (Gaul & Rovelli)

In itself, relationalism is certainly an interesting philosophical hypothesis. But there is also an alternative - the concept of absolute space as proposed by Newton:

According to Newton, “space” exists by itself, independently of the objects in it. Motion of a body can be defined with respect to space alone, irrespectively whether other objects are present. ... according to Newton, space exists independently of objects, whether they are present or not. The location of objects is the part of space that they occupy. This implies that motion can be understood without regard to surrounding objects. Similarly, Newton uses absolute time, leading to a space–time picture which provides an always present fixed background over which physics takes place. Objects can always be localized in space and time with respect to this fixed non-dynamical background. (Gaul & Rovelli)

Now, Newtonian gravity is a theory quite close to the Newtonian concept of absolute space, even if the Galilean invariance of the theory created the difficulty that absolute position is not observable. Nonetheless, Newton's rotating bucket argument showed that at least acceleration is absolute.

By contrast, GR appears to be a fully relational theory, and this is, of course, a remarkable metaphysical property of this theory. So one can understand that many scientists consider this property an essential one, some "deep insight" which has to be preserved at all costs.

But, nonetheless, the alternative of Newtonian absolute space and time is what one needs in quantum theory. Moreover, from a philosophical point of view, it is quite satisfactory too. And one can reasonably argue that relationalism is bad philosophy, because it is essentially based on positivism: since we cannot observe absolute positions, or absolute time, they do not exist. This positivistic philosophy has been found to be invalid; Popper's critical rationalism is a clearly preferable, much more consistent philosophy of science.

The Equivalence Principle only an approximation

The equivalence principle is essentially another formulation of background independence, or another aspect of it. Its metaphysical character is even more obvious. The equivalence principle is, obviously, motivated by positivistic philosophy: if we are unable to observe, at present, any difference between two configurations, then no such difference really exists, and these configurations have to be completely equivalent in reality.

Common sense tells us there are two possible reasons why we cannot see differences. One possibility is that there really are no differences. But the other possibility is that our current theory is only a rough approximation, and that this rough approximation simply does not show us the differences, because they are too small to be visible at this level of approximation.

A theory about gravity distorting clocks and rulers instead of a theory of Space and Time

The spacetime interpretation presents the gravitational field as a really fundamental entity, which describes such fundamental objects as space and time, and does this in a quite non-trivial way, namely as an indivisible union, the spacetime, which is, moreover, curved.

All these claims of fundamentality would become invalid if GR turned out to be only an approximation of some other, more fundamental theory which becomes relevant at small distances. This would be a contradiction in itself - a theory which defines the very meaning of the word "distance" becoming invalid below a critical distance; a theory which defines space and time becoming invalid for sufficiently small parts of space and time. So it is clear that if GR is only an approximation, one had better forget all this talk about space, time and curved spacetime and, instead, start to talk about less fundamental things like rulers and clocks, which may be influenced by the gravitational field.

That GR gives fundamental insights into space and time is also incompatible with the reintroduction of the background, because if there exists such a background, it is clearly this background which defines what is space and what is time. The gravitational field is, then, something which influences and distorts rulers and clocks, but not something which defines space and time.

The switch from a theory about space and time to a theory about the behavior of clocks and rulers can be considered not that important, given that the philosophy of time used in GR is "time is what clocks show". But this shift may be much more important than it seems, not so much for philosophical as for psychological reasons.

It is one thing to have a theory of space and time, moreover one which has strange, counterintuitive features, like a union of space and time named spacetime, which is influenced by matter, curved, and even has non-trivial topology. If you are able to understand and work with such a theory, you already look like a magician, with deep insights into very fundamental things.

It is another thing to have a theory of an ether, something introduced into physics in 1637, with properties which are not at all magical but similar to those of ordinary stones, whose only remarkable property is that it somehow distorts clocks and rulers. Being able to understand and work with such a theory is nothing remarkable, and those who manage to do it cannot claim a higher status than that of a watchmaker, a technician doing a rather boring job.

Of course, a preference of scientists for looking like a magician instead of a watchmaker would not provide a valid scientific argument. Probably they would deny having such a preference, and reject this consideration as defamatory. But the point is not at all to present scientists as con artists who want to present themselves as magicians, but the much more innocent preference for a magical world, a world incompatible with common sense. Such a preference for a magical world did not exist before the scientific revolution of 1905. At that time, the world of physics was not magical at all, but the boring world of the watchmaker. This changed in 1905, with relativistic spacetime and quantum mysticism entering the scene. And this mystical character of the world of physics is now something scientists are aware of when they decide to study physics instead of, say, biology or geology, where no conflicts with common sense have to be expected. So it seems quite natural to expect that those who decide to study physics today are fascinated rather than repelled by a world full of magic and in conflict with common sense.

Literature