Lubos Motl (lumo) claims on his blog "The Reference Frame" that Bohmian mechanics is incompatible with loop corrections.

If one reads lumo, one has to ignore the low-level polemics, which are inappropriate in a scientific discussion. They would not be allowed here, and answering them here seems useless too. Nonetheless, even such presentations may contain valid arguments which are worth discussing.

Recovering relativistic symmetry

A first one is the general problem of how a theory which is not Lorentz-covariant can give, in some approximation or limit, a theory which is Lorentz-covariant in its observable predictions.

Quote: The correct way to argue is that the generic theory in the Bohmian class contains infinitely many Lorentz-violating effects and they have no reason to vanish. So the probability that all of them cancel and produce the prediction of Lorentz-invariant phenomena – which are observed – is \(1/\infty^\infty\). It is zero for all actual purposes.

So, the question is: what are the reasons for the Lorentz-violating effects to vanish?

The first one is that in dBB theory there is a quantum equilibrium. An equilibrium is, plausibly, a state with higher symmetry than the general state. So, while dBB theory outside quantum equilibrium allows superluminal information transfer, dBB theory in quantum equilibrium is equivalent to quantum theory and, in particular, no longer allows superluminal information transfer. What is the mechanism which suppresses the non-equilibrium states? It is a variant of the usual mechanism which leads to equilibrium in thermodynamics. This has been shown by Valentini in his "sub-quantum H-theorem", which works analogously to Boltzmann's H-theorem. Once quantum equilibrium has been reached, the predictions of dBB theory are equivalent to those of the corresponding quantum theory, and mainly defined by the symmetries of the corresponding classical theory.
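For concreteness, here is a minimal sketch of the dynamics referred to above: the dBB guidance equation, in units \(\hbar = m = 1\), reads \(v(x) = \operatorname{Im}(\psi'(x)/\psi(x))\). The code below (the function name is my own) evaluates it for a plane wave, where the Bohmian velocity is exactly the wave number k, and for a real standing wave, where the particles are at rest.

```python
import numpy as np

# dBB guidance equation in one dimension, units hbar = m = 1:
#   v(x) = Im( psi'(x) / psi(x) )
def guidance_velocity(psi, dpsi_dx, x):
    """Bohmian velocity field at position x for wave function psi."""
    return np.imag(dpsi_dx(x) / psi(x))

k = 2.5
# Plane wave psi = exp(i k x): the Bohmian velocity equals k everywhere.
v_plane = guidance_velocity(lambda x: np.exp(1j * k * x),
                            lambda x: 1j * k * np.exp(1j * k * x), 0.7)

# Real standing wave psi = cos(k x): the velocity vanishes everywhere.
v_standing = guidance_velocity(lambda x: np.cos(k * x) + 0j,
                               lambda x: -k * np.sin(k * x) + 0j, 0.7)

print(v_plane, v_standing)   # -> 2.5 0.0
```

In quantum equilibrium the ensemble is distributed as \(|\psi|^2\), and the equivariance of this distribution under the velocity field above is what the equivalence to the quantum predictions rests on.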

The next important reason for Lorentz-violating effects to vanish is large-distance universality. Assume we have a microscopic theory, say some complex atomic model of a crystal. The problem is to derive the approximation of this theory for large distances. What was intuitively understood long ago is much better understood now, after Wilson improved our understanding of renormalization and effective field theories: most of the microscopic details have essentially no effect at all at large distances. What matters at large distances are only the lowest-order terms; higher-order terms are suppressed at large distances by much higher suppression factors. What remains are, essentially, only the renormalizable terms. So, very different microscopic theories may end up with very similar, and in the limit identical, large-distance approximations. The whole infinity of variants simply vanishes, reduced to a few parameters of a few renormalizable theories. Non-renormalizable terms have a chance to remain observable only if there are no similar renormalizable terms - as in the case of gravity. But even in this case, they will be highly suppressed. This is the Wilsonian explanation of why gravity is so weak in comparison with the other forces.

In some cases, this is already sufficient to establish relativistic symmetry. For example, if we have a microscopic theory which is described by a single field, say the density, and the equation which remains in the large distance limit is the standard wave equation for sound waves:

\[(\partial_t^2 - c^2(\partial_x^2+\partial_y^2+\partial_z^2)) \phi(x,y,z,t) = 0. \]

with some constant \(c\), the speed of sound in this medium. Now, what is the symmetry group of this wave equation? It is the symmetry group of special relativity, the Poincaré group. But even if not, the situation is already much less problematic - we no longer have an infinity, but only a few terms, distinguished by such remarkable properties as renormalizability.
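As a quick check of this claim, one can verify directly that the wave operator is invariant under a boost built with the wave speed \(c\):

\[ x' = \gamma\,(x - vt), \qquad t' = \gamma\left(t - \frac{v}{c^2}\,x\right), \qquad \gamma = \left(1 - \frac{v^2}{c^2}\right)^{-1/2}, \]

so that, by the chain rule, \(\partial_t = \gamma(\partial_{t'} - v\,\partial_{x'})\) and \(\partial_x = \gamma(\partial_{x'} - \frac{v}{c^2}\,\partial_{t'})\). Inserting these into the wave operator, the mixed terms \(\mp 2\gamma^2 v\,\partial_{t'}\partial_{x'}\) cancel, and the factors \(\gamma^2(1 - v^2/c^2) = 1\) leave

\[ \partial_t^2 - c^2\,\partial_x^2 = \partial_{t'}^2 - c^2\,\partial_{x'}^2. \]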

And now some particular properties of the particular model may become important. It may contain some symmetry by construction, or some property of the model may allow one to use some other symmetry to prove relativistic symmetry. In this derivation of the Einstein Equivalence Principle, the symmetry used is the "action equals reaction" symmetry, which one gets for free with the Lagrange formalism. What is necessary to transform the "action equals reaction" symmetry into the EEP is a particular property of the ether model - its universality: the ether does not interact with anything else, so all the fields we observe have to be fields which describe properties of the ether itself.

Is there an equivalence proof?

Lumo suggests that there cannot be an equivalence between dBB theory and quantum theory, with the following remark:

Quote: In Hardy's paradox, any local realist theory predicts the probability of a certain combined outcome to be P=0 while experiments and quantum mechanics say P=1/16. Could you please show us the calculation in Bohmian mechanics that reproduces P=1/16?

This point about Hardy's paradox is a good argument against "local" (better: Einstein-causal) realist theories. But dBB theory is not "local". And if one wants to compute the result in dBB theory, there is a simple way: one uses the equivalence theorem between dBB theory and quantum theory and then performs standard quantum computations.

Unfortunately, lumo thinks that the equivalence theorems are simply wrong. The equivalence has been proven in a peer-reviewed, widely cited paper (Bohm, D.: A suggested interpretation of the quantum theory in terms of "hidden" variables, Phys. Rev. 85, 166-193, 1952). If it contained errors, one would expect a published refutation as well. But the request for a published refutation lumo answers with

Quote: There are almost certainly no professional physics journal articles about comparisons of QFT with a Bohm theory because Bohm theories don't belong to professional particle physics (and they don't belong to professional condensed matter physics and similar things, either!).

and, instead, refers to his personal website for such a proof. Hm ...

Ok, I will not start to characterize what this type of behavior is reminiscent of; the discussion in this forum should be restricted to the scientific content. One could ignore such non-peer-reviewed arguments completely, and some of them will indeed not be considered here (like the one presented in his posting "All realistic "interpretations" falsified by low heat capacities" - if two theories are proven to be equivalent on the microscopic level, and someone claims to see different results in macroscopic variants, ...).

How quantum field theory fits into dBB theory

On the other hand, let's consider here the main point made in the post itself. It is claimed that dBB theory

Quote: ... actually has a serious problem with all conceptually new effects by which quantum field theory differs from non-relativistic quantum mechanics. It isn't compatible with the particle creation and annihilation. The calculations of renormalization can't be embedded into Bohmian mechanics in any way.

Of course, the equivalence proof, even if it is quite simple, works only if there is a dBB theory. So, one first has to construct one. This is not always trivial. The simple, standard way to construct a dBB-like theory works only for a general but fixed configuration space Q and a Hamiltonian which is quadratic in the momentum variables, \(H = \frac12 p^2+V(q)\).

So, at first look, lumo seems to have a point if he considers quantum field theory, with variable particle numbers and a relativistic energy of the particles which is not quadratic in the momentum variables. Unfortunately for this argument, pilot wave theorists are not obliged to choose particle positions as the configuration space.

In principle, they can try any maximal algebra of commuting observables as the configuration space of their dBB proposal. And instead of the particles, which vary in their number, there is a much better candidate for defining the dBB configuration - the field itself. Let's look at the simplest example of a relativistic quantum field theory, a simple scalar field. Here we have the action

\[ S = \int \left(\frac12 \eta^{\mu\nu}\partial_\mu \varphi \partial_\nu \varphi -\frac12 m^2\varphi^2\right) d^4x\]

which gives the momentum variables \(\pi(x) = \partial_t \varphi(x)\) and the Hamiltonian

\[ H = \int \left(\frac12 \pi^2 + \frac12 (\nabla\varphi)^2 + \frac12 m^2\varphi^2\right) d^3 x\]

which is of the required form quadratic in the momentum variables.
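The Legendre transform behind this can be sketched per field mode. Assuming a single mode with frequency \(\omega\) (a toy reduction of my own, not the full field theory), a short sympy check confirms that the Hamiltonian comes out quadratic in the momentum:

```python
import sympy as sp

# One field mode as a toy model: L = phidot^2/2 - omega^2 phi^2/2.
phi, phidot, omega = sp.symbols('phi phidot omega', real=True)
L = sp.Rational(1, 2) * phidot**2 - sp.Rational(1, 2) * omega**2 * phi**2

pi = sp.diff(L, phidot)           # canonical momentum: pi = phidot
H = sp.expand(pi * phidot - L)    # Legendre transform

# H = phidot^2/2 + omega^2 phi^2/2; with pi = phidot this is
# quadratic in the momentum, as the construction requires.
print(H)
```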

Of course, this would be a field theory, which shares with quantum field theory all the problems related to an infinite number of degrees of freedom. What to do? The same as what is done in QFT: one regularizes the theory. A simple way to do this is lattice regularization. So, we approximate the field by its values at lattice points at some distance h, with a large enough number of lattice nodes and periodic boundary conditions, so that their number remains finite. Then, for each lattice node n, we have a field value \(\varphi_n\), which defines the configuration, the corresponding momentum \(\pi_n = \partial_t \varphi_n\), and the energy defined by

\[ H = \sum_n \frac12 \pi_n^2 + V(\varphi)\]

where \(V(\cdot)\) depends only on the configuration variables \(\varphi_n\). This fits completely into the classical case of a fixed finite number of degrees of freedom with an energy quadratic in the momentum variables. So, the standard equivalence proof works, and the resulting dBB theory is equivalent to the corresponding quantum lattice theory.
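The lattice energy above is easy to write down explicitly. The sketch below (my own illustrative code: one spatial dimension, periodic boundaries, nearest-neighbor gradient) evaluates H for a single Fourier mode with switched-off momenta and checks it against the standard lattice dispersion relation \(\omega_k^2 = (2/h)^2\sin^2(kh/2) + m^2\):

```python
import numpy as np

def lattice_hamiltonian(phi, pi, h, m):
    """H = sum_n [ pi_n^2/2 + ((phi_{n+1}-phi_n)/h)^2/2 + m^2 phi_n^2/2 ]
    with periodic boundary conditions (np.roll wraps the last node around)."""
    grad = (np.roll(phi, -1) - phi) / h
    return np.sum(0.5 * pi**2 + 0.5 * grad**2 + 0.5 * m**2 * phi**2)

N, h, m, A = 16, 0.5, 1.3, 0.7
k = 2 * np.pi * 3 / (N * h)          # an allowed lattice momentum
n = np.arange(N)
phi = A * np.cos(k * n * h)          # single-mode field configuration
pi = np.zeros(N)                     # momenta switched off

H = lattice_hamiltonian(phi, pi, h, m)
omega_k2 = (2 / h * np.sin(k * h / 2))**2 + m**2
print(H, N * A**2 / 4 * omega_k2)    # the two values agree
```

The corresponding dBB theory then guides the configuration \(\varphi_n\) on this finite-dimensional configuration space by the phase of the wave functional, exactly as in the particle case.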

Of course, this does not mean that a relativistic multi-particle theory with variable particle number has to fail. For such a theory, the configuration space Q has to contain parts with different particle numbers. Some proponents of dBB theory develop these directions. I don't think this is a good idea - but this is my personal opinion.

About renormalization

What about the claim that the calculations of renormalization cannot be done? We have already done the most important first step: to define a regularization, one where we have a theory that is well-defined in every sense. And for this regularization, we have constructed the corresponding dBB theory, in a way that makes its equivalence to the quantum lattice theory a triviality.

One can object that we have considered here only one regularization scheme - lattice regularization. Other regularization schemes may have advantages; in particular, it may be much easier to compute some integrals using, say, dimensional regularization. But can one make an argument against dBB theory that it has not yet provided a dBB variant for quantum theory in a \(4-\varepsilon\)-dimensional space? I doubt it. If renormalization is a reasonable method, the results should not depend on the particular regularization scheme used. So, for renormalization in the dBB context, it seems sufficient to have an equivalent dBB theory for one variant of a regularization of QFT.

Lumo has also raised another problem - that of fermion fields. This is indeed a difficult problem - but one that has, nonetheless, already been solved. The solution will be left to a separate post.