Problems of Quantization of General Relativity

In some sense, there does not exist a problem of "quantization of gravity". The quantization of Newtonian gravity is, in particular, a triviality. One can more reasonably talk about a problem of "quantization of relativistic gravity". But, in fact, this problem is also an almost non-existent one, if understood as the problem of incorporating relativistic effects.

What is extremely problematic, and essentially impossible, is the quantization of General Relativity (GR). More accurately, the creation of a quantum theory which follows the same fundamental, metaphysical principles as GR is close to impossible, because the most important fundamental principle of GR - the non-existence of a background - is in deep conflict with the principles of quantum theory.

Problems caused by background independence (the equivalence principle)

Let's look in detail at the main parts of the conflict between the background independence of GR and quantum principles:

The "problem of time"

The problem is that in quantum theory time - the parameter used in its basic equation, the Schrödinger equation - is the classical Newtonian absolute time. Time in quantum theory is not "what clocks measure" - there is not even an operator for time measurement. Every measurement of time is, at best, an approximation. There is even a theorem which tells us that, for every clock, there is a probability that it goes backward in time.
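A standard way to substantiate the non-existence of a time operator is Pauli's argument (a textbook sketch, not part of the text above): a self-adjoint operator canonically conjugate to the Hamiltonian would force the energy spectrum to be unbounded from below.

```latex
% Suppose a self-adjoint time operator T with [T, H] = i (units \hbar = 1).
% Since [T, H] is a c-number, the Baker-Campbell-Hausdorff series terminates:
e^{-i\epsilon T}\, H \, e^{i\epsilon T} = H + \epsilon
% Conjugation by e^{i\epsilon T} thus shifts the whole spectrum of H by an
% arbitrary real \epsilon, contradicting a spectrum bounded from below.
% Hence no such T exists for a realistic Hamiltonian.
```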

What is technically named the "problem of time" is something more complicated, and related to the problem of defining some replacement for the Hamilton formalism (see below), but the conflict between the completely different concepts of time in GR and QT is nonetheless behind it too.

No well-defined local conservation laws for energy and momentum

The non-existence of such an absolute time in GR leads to additional problems. If a theory has a preferred time and a symmetry of translations in time, and if it has a Lagrange formalism, then the Noether theorem gives a conservation law for energy. This energy plays a central role in quantum theory, because the Schrödinger equation uses the Hamilton operator - the operator which measures this conserved energy.

In GR, this becomes problematic. One can, of course, use arbitrary coordinates, and the equations of the theory remain unchanged. In this sense, translational symmetry is only a particular example: one can use the coordinate \(t' = t + \text{const}\) too. But this is a quite degenerate translational symmetry. As a result, what the Noether theorem gives is also a degenerate version of energy and momentum conservation: we obtain some conserved quantity, but this conserved quantity is zero, because of the Einstein equations. They tell us \(G_{\mu\nu} = T^{\text{matter}}_{\mu\nu}\), so that for the "energy-momentum tensor" \(T_{\mu\nu} = T^{\text{matter}}_{\mu\nu} - G_{\mu\nu}\) we have, indeed, a conservation law \(\partial_\mu T_{\mu\nu} = 0\), but simply because \(T_{\mu\nu}=0\). So, the energy and momentum conservation laws given by the Noether theorem become useless.
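For comparison, in a field theory on a fixed background the Noether theorem applied to time translations gives a non-trivial result (a standard textbook formula, not from the text above):

```latex
% Canonical energy-momentum tensor of a Lagrangian density L(\phi, \partial\phi):
T^{\mu}{}_{\nu} = \frac{\partial \mathcal{L}}{\partial(\partial_\mu \phi)}\,\partial_\nu \phi
                 - \delta^{\mu}_{\nu}\,\mathcal{L}
% It is conserved on solutions, \partial_\mu T^{\mu}{}_{\nu} = 0, and the energy
E = \int T^{0}{}_{0}\; d^3x
% is a non-trivial conserved quantity - unlike in GR, where the analogous
% Noether quantity vanishes identically by the Einstein equations.
```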

No well-defined Hamilton formalism

The consequence of the degeneration of the energy-momentum tensor is that the Hamilton formalism degenerates too.

This can also be seen directly from the equivalence principle: it does not give a complete evolution equation for all the components of the metric tensor \(g_{\mu\nu}(x,t)\), because a local transformation of the coordinates, which can leave the initial values as well as the boundary conditions untouched, must also be a valid solution of the same covariant equations. But that means that the equations of motion do not fix the solution completely - which is the classical part of the hole problem of GR.
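The hole problem mentioned here can be sketched as follows (a standard formulation):

```latex
% Let \phi be a diffeomorphism which is the identity outside a region H
% (the "hole") lying to the future of the initial value surface \Sigma.
% If g_{\mu\nu} solves the covariant field equations, so does the pullback
g'_{\mu\nu} = (\phi^{*} g)_{\mu\nu}
% Both solutions agree on \Sigma (same initial values) and outside H,
% but differ inside H: the initial data do not fix the solution there.
```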

The result of this incomplete evolution equation for the Hamilton formalism is, if not completely fatal, at least sufficiently problematic: the Hamilton function itself is simply zero. Some aspects of the Hamilton formalism may nonetheless be saved.
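In the canonical (ADM) formulation - a standard result, not derived in the text above - this degeneration appears explicitly: the Hamiltonian is a sum of constraints multiplied by the freely choosable lapse and shift functions, and therefore vanishes on solutions.

```latex
% ADM Hamiltonian of GR: lapse N and shift N^i act as Lagrange multipliers
H = \int \left( N\,\mathcal{H}_\perp + N^i\,\mathcal{H}_i \right) d^3x
% Variation with respect to N and N^i yields the constraints
\mathcal{H}_\perp \approx 0, \qquad \mathcal{H}_i \approx 0,
% so that on solutions the Hamilton function itself vanishes: H \approx 0.
```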

Topological foam

Background independence allows for solutions with non-trivial topology, and such solutions exist. In a quantum situation, given the possibility of quantum tunneling, one cannot restrict the consideration to classical solutions only. So, with quantum fluctuations, topological fluctuations have to be expected too. Once they are possible, they will even be typical. Thus, microscopically one would have to expect, as the usual, average configuration, one with a lot of small, microscopic distortions of the topology. This expectation is named "topological foam". How one could handle such a topological foam is completely unclear.

If there were a well-defined background, this would not be a problem - all configurations, as solutions as well as quantum oscillations, would have to have the same topology as the background.

Undefined causality

How could causality be defined if we have a superposition of different metric fields, with different light cones and thus different notions of Einstein causality? A background could provide a notion of causality which does not depend on the gravitational field but is fixed by the background. But in a background-independent theory there is no chance to obtain some meaningful notion of causality.

Causality is problematic already in classical GR, given that it allows for solutions with causal loops. This would lead to paradoxes like the grandfather paradox, with the possibility that one travels with a time machine into the past and kills one's own grandfather as a child. After this, one's own existence becomes paradoxical, because the father will never be born. In classical GR, one can at least try fatalism - everything is predefined anyway, so that everything can be predefined in such a way that no paradoxes will appear. And without free will of my own there will be no possibility for me to kill my grandfather.

How this fatalistic block world may be compatible with quantum uncertainty is not clear either.


All the problems mentioned above have a quite simple solution: the acceptance of a background. This background would define preferred coordinates - some natural coordinates of the background, say, Cartesian coordinates for space and an absolute time coordinate if that background were a classical Newtonian background.


Beyond the problems created by background-freedom alone, which simply disappear if we add a fixed Newtonian background, there is only one serious problem. Or, more accurately, a problem which seemed to be very serious for a long time, until it was understood in a much better way: quantum gravity is non-renormalizable. This problem remains even if we add a background.

But, after Wilson's insight into the nature of renormalizability, it became quite easy to understand what happens, as well as what would solve the conceptual problem. Namely, a non-renormalizable theory is fine as a large distance approximation, which becomes invalid below some critical distance.

The hypothesis that our theories are only large distance approximations, and become invalid below a critical distance, also nicely explains why our other field theories are renormalizable: the point is that for each term of the true fundamental theory one can compute how the term changes if the critical distance - the most important of the free parameters of the theory - changes. All of them change, and this change is what is named "renormalization". But most of the terms become smaller in comparison with others if the critical distance decreases. So, if the critical distance becomes very small, they can simply be ignored as irrelevant. Only a few terms survive. And the terms which survive are the renormalizable terms.
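The power counting behind this can be sketched as follows (a textbook estimate; the notation \(\Lambda\), \(c_i\), \(\mathcal{O}_i\) is not from the text above):

```latex
% Effective Lagrangian with cutoff \Lambda = 1/a (a = critical distance):
\mathcal{L}_{\text{eff}} = \sum_i \frac{c_i}{\Lambda^{d_i - 4}}\, \mathcal{O}_i
% where the operator \mathcal{O}_i has mass dimension d_i.
% At energies E \ll \Lambda, the term \mathcal{O}_i contributes at relative order
(E/\Lambda)^{\,d_i - 4}
% d_i \le 4: renormalizable terms, which survive in the large distance limit;
% d_i > 4: non-renormalizable terms, suppressed by powers of E/\Lambda.
```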

The only exception is gravity. For the gravitational field, as described by GR, there is simply no renormalizable term. All we have there is the term which decreases less fast than all the others. And this is, essentially, the term which defines GR. On the other hand, given that it is not renormalizable, it decreases faster than all the renormalizable terms. A problem? No, a solution to a problem, namely the problem of explaining why gravity is, on the quantum level, so extremely weak in comparison with all the other forces.
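The same estimate gives numbers for the weakness of gravity (a rough sketch, assuming the critical length is of the order of the Planck length):

```latex
% The Einstein-Hilbert term carries a dimensionful coupling, Newton's constant:
\mathcal{L}_{\text{EH}} = \frac{1}{16\pi G}\,\sqrt{-g}\,R,
\qquad G \sim 1/M_{\text{Pl}}^2
% At energy E the gravitational interaction is therefore suppressed by
G E^2 \sim (E/M_{\text{Pl}})^2
% e.g. for E of the order of 1 GeV and M_{Pl} of the order of 10^{19} GeV this
% gives a suppression of order 10^{-38}, in rough agreement with the observed
% weakness of gravity compared with the other forces.
```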

What can we conclude from non-renormalizability? It appears that it is not even a problem, but a tool which allows us to explain why gravity is so weak in comparison with the other forces.

But this requires accepting that our theory is not a fundamental truth, but only valid as a large distance approximation. Below some critical distance, it will be wrong and will have to be replaced by a different, more fundamental theory.

And this conceptual acceptance that GR is not fundamental, but only a large distance approximation which fails below some critical length, is another, independent good reason to doubt that the equivalence principle is a fundamental truth. It is not very plausible that, once we have to replace the theory by a completely different one, the symmetries of the large distance limit will survive. Let's not forget that the very point of the renormalization is that the fundamental theory contains a lot more terms, and almost all of these terms simply become irrelevant, unobservable at large distances. Only the few renormalizable terms survive. In this situation, it seems quite plausible that the large distance limit can have, with far fewer terms, a larger symmetry group.

That means that even if non-renormalizability is a different sort of problem, one not solved by introducing a background alone, the solution of this problem will, with high probability, also destroy relativistic symmetry, so that the solution will also, with high probability, be a theory without background independence.