Posts: 85
Threads: 1
Joined: Jun 2016
Reputation: 0
(06-06-2016, 08:35 AM)Schmelzer Wrote: No, they used no inequality at all, but computed what quantum theory predicts. The result was something greater than 2, thus violated the Bell-CHSH inequality, which would hold in an Einstein-causal realistic theory.
The whole point is that QT is not equivalent to any Einstein-causal realistic theory. So, of course, if they compute the QT prediction, they use the QT rules and not some inequality which holds for Einstein-causal realistic theories.
Of course, nothing can violate mathematical proofs, which is a triviality. But you can have theories which violate the assumptions of the Bell-CHSH theorem, and thus can violate the resulting inequalities.
Do you agree that the expectation terms are independent of each other in the QM calculation in the Wikipedia entry? If so, then they are using the inequality with the bound of 4, as I proved by simple mathematical inspection. If you don't agree, then you need to demonstrate the dependency between the expectation terms. You can't, because it is impossible.
Posts: 209
Threads: 29
Joined: Dec 2015
Reputation: 0
It makes no sense to make claims about what somebody is using. Of course, the calculation does not contradict the Fermat theorem, but it does not follow that they use it. Of course, their computation does not violate the inequality with bound 5820428505, which can also be easily proven, but this does not mean that they use it.
Moreover, it does not matter at all what they use to compute the QT prediction. If they want to use some Chinese supercomputer for this purpose, fine, no problem. There is no need for this, of course; they simply use standard QT rules, which allow one to compute exact numbers for the expectation values.
As well, it is completely irrelevant if these terms are somehow dependent or independent. You could as well ask me if they are blue or red. As long as you do not doubt that the computation gives the correct QT predictions, the only relevant question is what QT predicts for S. If it predicts, for some particular choices of the preparation procedure and the values of a, b, a' and b', the result \(S = 2\sqrt{2}\), this is all we need.
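For concreteness, the QT prediction mentioned here can be reproduced in a few lines. This is a minimal sketch, assuming the standard singlet-state calculation with spin measurements in the x-z plane (the variable names and angle choices are mine, not from the thread):

```python
import numpy as np

# Pauli matrices; spin measurement along angle theta in the x-z plane
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

def spin(theta):
    return np.cos(theta) * sz + np.sin(theta) * sx

# Singlet state (|01> - |10>)/sqrt(2) in the basis |00>, |01>, |10>, |11>
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(a, b):
    """QT expectation value <psi| sigma(a) (x) sigma(b) |psi> for angles a, b."""
    return np.real(psi.conj() @ np.kron(spin(a), spin(b)) @ psi)

# Standard CHSH angles; for the singlet, E(a, b) = -cos(a - b)
a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # |S| = 2*sqrt(2) ~ 2.828, above the Bell-CHSH bound of 2
```

With these angles the four expectation values are each \(\mp\tfrac{1}{\sqrt{2}}\), and the combination reaches magnitude \(2\sqrt{2}\), which is the Tsirelson value quoted above.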
Posts: 46
Threads: 0
Joined: Jun 2016
Reputation: 0
06-06-2016, 11:53 AM
(This post was last modified: 06-06-2016, 04:36 PM by gill1109. Edit Reason: minor correction)
It seems that FrediFizzx does not realise the need for some insight into statistics in order to understand exactly what it means to test the Bell-CHSH inequality by experiment. It is important to make a distinction between empirical (observed) averages and theoretical mean values (expectation values).
Below I will write O(a, b), where "O" stands for "observed", for a correlation computed from a finite amount of data obtained from an experiment, and E(a, b), where "E" stands for "expected", for a theoretical correlation derived in some theory.
In a typical CHSH experiment we end up with four data sets, one for each pair of settings (a, b), (a', b), (a, b'), (a', b'). Each data set consists of a number of pairs of outcomes ±1. The four data sets can have different sizes. We compute four empirical correlations, each the average of the product of the outcomes, one for each of the four setting pairs: O(a, b), O(a, b'), O(a', b) and O(a', b').
Now we compute the CHSH quantity O(a, b) − O(a, b') + O(a', b) + O(a', b').
Given what has been said so far, it is obvious that the result can lie anywhere between −4 and +4. Here is the result of a tiny experiment leading to a CHSH value of +4, with one pair of observations for each setting pair:
a, b: +1, +1
a, b': +1, −1
a', b: +1, +1
a', b': +1, +1
A huge experiment can also deliver CHSH = +4; for instance, an experiment in which the outcome pairs just listed are duplicated a billion times, once for each of the setting pairs.
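The tiny experiment just described can be checked mechanically. A sketch (the data layout is my own; the outcome pairs are exactly those listed above):

```python
# One observed outcome pair (+/-1 for Alice, +/-1 for Bob) per setting pair,
# exactly as listed in the post.
data = {
    ("a", "b"):   [(+1, +1)],
    ("a", "b'"):  [(+1, -1)],
    ("a'", "b"):  [(+1, +1)],
    ("a'", "b'"): [(+1, +1)],
}

def O(setting):
    """Observed correlation: average of the product of the outcomes."""
    products = [x * y for x, y in data[setting]]
    return sum(products) / len(products)

CHSH = O(("a", "b")) - O(("a", "b'")) + O(("a'", "b")) + O(("a'", "b'"))
print(CHSH)  # 4.0, the algebraic maximum for four independent averages
```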
Now consider an experiment in which the following actually happens, lots of times: Alice and Bob each choose a setting, a or a' and b or b' respectively; independently of those choices, "Nature" chooses a value lambda drawn at random according to a probability distribution rho over some set; Alice observes outcome A(a, lambda) or A(a', lambda) and Bob observes outcome B(b, lambda) or B(b', lambda). Here, I assume that the functions A and B take values ±1. The functions A and B and the probability distribution rho remain the same for all trials (one trial = one pair of settings and one pair of outcomes). Now we compute four averages of products O(a, b) etc., each computed on the appropriate subset of trials (the ones with the corresponding setting pair), and take a look at the CHSH quantity O(a, b) − O(a, b') + O(a', b) + O(a', b').
Obviously, the result can lie anywhere between −4 and +4!
But if the experiment is large, then with large probability the four observed correlations will be close to the four theoretical correlations, which according to simple probability theory are \(E(a, b) = \int A(a,\lambda) B(b,\lambda) \rho(\lambda)\, d\lambda\). By the usual simple algebra, E(a, b) − E(a, b') + E(a', b) + E(a', b') lies between −2 and +2. So if the experiment is large enough, the probability that O(a, b) − O(a, b') + O(a', b) + O(a', b') lies substantially above +2 or below −2 is negligible.
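This convergence is easy to see in a Monte Carlo sketch of exactly this setup. The particular outcome functions A, B and the uniform distribution for lambda below are my own illustrative choices, not anything from the thread:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative local deterministic model: outcomes fixed by the setting
# angle and the shared hidden variable lambda.
def A(a, lam):
    return np.sign(np.cos(a - lam))

def B(b, lam):
    return -np.sign(np.cos(b - lam))

# One large sub-experiment per setting pair; lambda is drawn anew each
# trial from the same distribution rho (uniform on [0, 2*pi)).
N = 100_000
angles = {"a": 0.0, "a'": np.pi / 2, "b": np.pi / 4, "b'": 3 * np.pi / 4}

def O(x, y):
    lam = rng.uniform(0.0, 2 * np.pi, N)
    return np.mean(A(angles[x], lam) * B(angles[y], lam))

S = O("a", "b") - O("a", "b'") + O("a'", "b") + O("a'", "b'")
print(S)  # close to -2: this particular model sits at the edge of [-2, +2]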
Experimenters do not observe E(a, b) − E(a, b') + E(a', b) + E(a', b').
They observe and publish a realised value of O(a, b) − O(a, b') + O(a', b) + O(a', b').
They do some statistics (calculate error bars, i.e. standard deviations, or whatever) in order to show that the value of O(a, b) − O(a, b') + O(a', b) + O(a', b') which they observed lies so far above +2 as to discredit a theory according to which E(a, b) − E(a, b') + E(a', b) + E(a', b') is less than or equal to +2.
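The error-bar step can be sketched as follows. This is a generic recipe, not the procedure of any particular experiment, and the simulated counts are made up: each observed correlation gets a standard error from its sample of products, and the four errors combine in quadrature for the CHSH quantity.

```python
import numpy as np

rng = np.random.default_rng(0)

def correlation_with_se(products):
    """Mean of +/-1 products and its standard error."""
    products = np.asarray(products, dtype=float)
    n = len(products)
    return products.mean(), products.std(ddof=1) / np.sqrt(n)

# Simulated products of outcomes for the four setting pairs, with
# P(product = +1) chosen arbitrarily for illustration.
n = 10_000
samples = [rng.choice([1.0, -1.0], size=n, p=[p, 1 - p])
           for p in (0.85, 0.15, 0.85, 0.85)]

stats = [correlation_with_se(s) for s in samples]
S = stats[0][0] - stats[1][0] + stats[2][0] + stats[3][0]
se_S = np.sqrt(sum(se ** 2 for _, se in stats))
print(S, "+/-", se_S)  # observed CHSH value with its combined error bar
```

A value of S several combined standard errors above +2 is then the published "violation".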
The classic approach to CHSH outlined here depends on a rather restrictive understanding of a hidden variables theory: for each new trial, lambda is drawn anew with the same probability distribution rho. Moreover, the functions A and B are assumed to remain the same throughout the experiment. It is possible to relax these assumptions.
By the way, one might like to think of the hidden variable lambda as being carried by the particles. This would seem to exclude local hidden variables theories in which some new randomness also occurs in the two measurement devices. But there is no reason why we shouldn't add to lambda, as further components of a vector, hidden variables thought of as belonging to the measurement devices as well. And these different components needn't be statistically independent of one another.
Posts: 85
Threads: 1
Joined: Jun 2016
Reputation: 0
(06-06-2016, 09:03 AM)Schmelzer Wrote: It makes no sense to make claims about what somebody is using. Of course, the calculation does not contradict the Fermat theorem, but it does not follow that they use it. Of course, their computation does not violate the inequality with bound 5820428505, which can also be easily proven, but this does not mean that they use it.
Moreover, it does not matter at all what they use to compute the QT prediction. If they want to use some Chinese supercomputer for this purpose, fine, no problem. There is no need for this, of course; they simply use standard QT rules, which allow one to compute exact numbers for the expectation values.
As well, it is completely irrelevant if these terms are somehow dependent or independent. You could as well ask me if they are blue or red. As long as you do not doubt that the computation gives the correct QT predictions, the only relevant question is what QT predicts for S. If it predicts, for some particular choices of the preparation procedure and the values of a, b, a' and b', the result \(S = 2\sqrt{2}\), this is all we need.
If you look carefully, you will see in the Wikipedia article that they compare the result with the inequality with the bound of 2, when they should compare it with the inequality with the bound of 4. So the dependency does matter for mathematical correctness. Here is another way to look at the mistake Bell made:
http://libertesphilosophica.info/blog/wp.../Fatal.pdf
Now that Bell's theory has been totally demolished via simple mathematical reasoning using CHSH, it is time to start a new thread. Again, note that no mention of LHV models was needed to show that Bell was wrong. He simply didn't realize that nothing could violate his inequalities, thereby making his conclusions invalid.
Posts: 209
Threads: 29
Joined: Dec 2015
Reputation: 0
The linked paper shows only that Joy Christian has not understood why Bell can do this. It is an application of the EPR argument.
Posts: 85
Threads: 1
Joined: Jun 2016
Reputation: 0
(06-06-2016, 08:27 PM)Schmelzer Wrote: The linked paper shows only that Joy Christian has not understood why Bell can do this. It is an application of the EPR argument.
Nonsense. Please demonstrate how \(\langle A_k(a)B_k(b) + A_k(a)B_k(b') + A_k(a')B_k(b) - A_k(a')B_k(b')\rangle\) could represent something that is actually physical.
Posts: 209
Threads: 29
Joined: Dec 2015
Reputation: 0
06-07-2016, 04:59 AM
(This post was last modified: 06-07-2016, 05:03 AM by Schmelzer.)
(06-07-2016, 12:00 AM)FrediFizzx Wrote: (06-06-2016, 08:27 PM)Schmelzer Wrote: The linked paper shows only that Joy Christian has not understood why Bell can do this. It is an application of the EPR argument.
Nonsense. Please demonstrate how \(\langle A_k(a)B_k(b) + A_k(a)B_k(b') + A_k(a')B_k(b) - A_k(a')B_k(b')\rangle\) could represent something that is actually physical.
If Alice measures in direction a, and Bob too, then they get the 100% correlated result. But Alice's decision to measure a in no way disturbs Bob's measurement, which is the Einstein causality assumption. So, following the EPR criterion of reality, the value measured by Bob has to be well-defined by the initial state already before the measurement is done. This predefined result is \(A(a,\lambda)\).
Once these functions objectively have to exist, and predefine the measurement results, we can use them to compute \(E(a,b)\) as
\[E(a,b) = \int A(a,\lambda)B(b,\lambda)\rho(\lambda)\,d\lambda\] and then we can measure them. As usual in statistical experiments, the result may differ a lot for a small number of trials, but in the long run, for a large enough number, the observed \(E(a,b)\) will be quite close to the one predicted by the theory, if the theory is correct.
But once these functions objectively have to exist, you can compute in theory even sums which you cannot measure, because the theory itself has to be mathematically consistent.
So, the whole expression is not measured in a single experiment. But different experiments are used to define the different parts. It is a theoretical consideration which shows that in an Einstein-causal theory this is possible and gives the same results.
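For completeness, here is the standard one-line algebra behind the bound of 2 for the expression under discussion (this step is textbook material, not specific to this thread). Since \(B(b,\lambda), B(b',\lambda) \in \{-1,+1\}\), one of the two brackets below vanishes and the other equals \(\pm 2\), so for every \(\lambda\):
\[
A(a,\lambda)\bigl[B(b,\lambda) + B(b',\lambda)\bigr] + A(a',\lambda)\bigl[B(b,\lambda) - B(b',\lambda)\bigr] = \pm 2 .
\]
Integrating against \(\rho(\lambda)\) then gives \(|E(a,b) + E(a,b') + E(a',b) - E(a',b')| \le 2\), which is exactly where the bound of 2, rather than 4, comes from once the four terms are functions of the same \(\lambda\).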
Posts: 85
Threads: 1
Joined: Jun 2016
Reputation: 0
(06-07-2016, 04:59 AM)Schmelzer Wrote: (06-07-2016, 12:00 AM)FrediFizzx Wrote: (06-06-2016, 08:27 PM)Schmelzer Wrote: The linked paper shows only that Joy Christian has not understood why Bell can do this. It is an application of the EPR argument.
Nonsense. Please demonstrate how \(\langle A_k(a)B_k(b) + A_k(a)B_k(b') + A_k(a')B_k(b) - A_k(a')B_k(b')\rangle\) could represent something that is actually physical.
If Alice measures in direction a, and Bob too, then they get the 100% correlated result. But Alice's decision to measure a in no way disturbs Bob's measurement, which is the Einstein causality assumption. So, following the EPR criterion of reality, the value measured by Bob has to be well-defined by the initial state already before the measurement is done. This predefined result is \(A(a,\lambda)\).
Once these functions objectively have to exist, and predefine the measurement results, we can use them to compute \(E(a,b)\) as
\[E(a,b) = \int A(a,\lambda)B(b,\lambda)\rho(\lambda)\,d\lambda\] and then we can measure them. As usual in statistical experiments, the result may differ a lot for a small number of trials, but in the long run, for a large enough number, the observed \(E(a,b)\) will be quite close to the one predicted by the theory, if the theory is correct.
But once these functions objectively have to exist, you can compute in theory even sums which you cannot measure, because the theory itself has to be mathematically consistent.
So, the whole expression is not measured in a single experiment. But different experiments are used to define the different parts. It is a theoretical consideration which shows that in an Einstein-causal theory this is possible and gives the same results.
Yes, that is the standard "party line" on how to try to justify flawed mathematics and flawed physical reasoning. If you want to continue believing that, then there is probably nothing I can do to convince you otherwise. But you did highlight the exact flaw: "So, the whole expression is not measured in a single experiment." So excuse me if I don't find your argument at all convincing physically. I think we are finished here. Lurkers can decide for themselves.
Posts: 209
Threads: 29
Joined: Dec 2015
Reputation: 0
Naming something a flaw does not make it a flaw.
Posts: 101
Threads: 0
Joined: Jun 2016
Reputation: 1
What about time? Why would it not be a hidden variable, given that it is not a real quantity independent of spacetime? Spacetime dependence would then dictate that the time parameter demands specification of a measure space, and once that is done one cannot avoid including the parameter as part of the measure.
The conventional way to do measurements of Bell-Aspect quantum correlations is described by Richard Gill: "The experiment is about *counting*. There are two measurement devices which have binary settings ('1' or '2'). There is a binary outcome ('+' or '−'). At the end of the experiment you have 16 counts: so many times you saw outcome '++' and the setting was '11', so many times '+−' and the setting was '12', ... so many times '−−' and the setting was '22'. Quantum mechanics predicts the probabilities of outcomes given settings, e.g. Prob('+−' | '21'). The physicist's correlation is just the probability of equal outcomes minus the probability of different outcomes." *
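In code, the physicist's correlation described in that quote is just a ratio of counts. A sketch (the count numbers below are made up purely for illustration):

```python
def correlation(counts):
    """P(equal outcomes) - P(different outcomes), estimated from counts
    for one setting pair."""
    equal = counts["++"] + counts["--"]
    different = counts["+-"] + counts["-+"]
    return (equal - different) / (equal + different)

# Hypothetical counts for one setting pair, e.g. setting '21'
counts_21 = {"++": 40, "--": 35, "+-": 15, "-+": 10}
print(correlation(counts_21))  # (75 - 25) / 100 = 0.5
```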
Now, if Bell-Aspect is a simple discrete counting function as Richard claims, and pairwise measures have 16 possible binary outcomes, let's try using the ultimate discrete counting device: prime numbers.
Because all odd primes are congruent (mod 2), and unity is not considered a prime, let's take the ordered sequence of the first 8 primes on the natural line: 3, 5, 7, 11, 13, 17, 19, 23
This is the densest sequence of primes > 2 of cardinality 8, with the interesting properties that all pairs are congruent (mod 2), regardless of order, and 23 is the least prime that is not a member of a twin pair.
If we take quantum correlations as signed terms, as Richard describes, then every term has a corresponding negative value, for 16 unique counts. That's too many, though: we should have only 4 each (+) and (−) values, for 4 pairwise outcomes. Look:
We find exactly four primes in the sequence with the Sophie Germain property (P and 2P + 1 are both prime): {3, 5, 11, 23}. Let's call this set (+), and call (−) the remaining set {7, 13, 17, 19}.
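The prime bookkeeping in this paragraph is easy to verify mechanically (a quick arithmetic check only, not an endorsement of the surrounding argument):

```python
def is_prime(n):
    """Trial-division primality test, fine for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

primes = [3, 5, 7, 11, 13, 17, 19, 23]  # the first 8 odd primes

# Sophie Germain property: p and 2p + 1 both prime
plus_set = [p for p in primes if is_prime(2 * p + 1)]
minus_set = [p for p in primes if not is_prime(2 * p + 1)]
print(plus_set, minus_set)  # [3, 5, 11, 23] [7, 13, 17, 19]

# 23 is the only one of the eight with no twin (neither p-2 nor p+2 prime)
no_twin = [p for p in primes if not (is_prime(p - 2) or is_prime(p + 2))]
print(no_twin)  # [23]
```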
The one property that these combined sets have, that the conventional 4 × 4 correlation matrix does not, is reversibility. Here is why:
{+1, +1, +1, +1} and {−1, −1, −1, −1}, with an upper bound (CHSH bound) of 2, cannot describe an evolution of particle interactions (what EPR would call hidden variables) that a wave function captures by a symmetry of motion implying reversibility of states.
We can see this clearly by writing the delta between each term of the ordered sequences:

(+)   3    5    11   23
      +    +    +    −
Δ     4    8    6    4
      −    −    −    +
(−)   7    13   17   19

So the positive set generates {3+4, 5+8, 11+6, 23−4}, and the negative set generates {7−4, 13−8, 17−6, 19+4}.
So while the fundamental linear assumption of the 4 × 4 matrix is a terminating order of + + + − or − − − + (+3 − 1 or −3 + 1), our continuous function is compelled to be reversible to the origin.
The delta terms all cancel, so we haven't lost any consistency with the natural integer sequence (to which, in fact, recursion is a native property). We've gained additional information, though, of a time parameter that is sensitive to initial condition: is the origin positive or negative?
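The arithmetic of the delta construction can be checked directly (the signs are read off the table above; the check itself is mine):

```python
plus_set = [3, 5, 11, 23]
minus_set = [7, 13, 17, 19]
deltas = [+4, +8, +6, -4]  # signed deltas between the ordered sequences

# Applying the deltas maps one set onto the other, and subtracting
# them maps back: the construction is reversible term by term.
forward = [p + d for p, d in zip(plus_set, deltas)]
backward = [m - d for m, d in zip(minus_set, deltas)]
print(forward)   # [7, 13, 17, 19]
print(backward)  # [3, 5, 11, 23]
```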
As we demonstrated regarding the algebraic closure of the complex plane, the double-zero origin of C demands a positive-parity starting condition: the 2 + 1 condition gives us the prime 3. The counting function still holds when measuring quantum correlations; however, the three delta terms are hidden variables (one term is redundant).
Karl Hess and Walter Philipp introduced the time parameter ("timelike correlated parameter") to the Bell inequality, with the same result: "An example of outcomes for the products of equation 3.1, \(A(a,\lambda)A(b,\lambda) + A(a,\lambda)A(c,\lambda) - A(b,\lambda)A(c,\lambda) \leq +1\), that violate Bell's inequality is +1 for the terms with the plus sign and −1 for the last term with the minus sign. Because −(−1) = +1, the sum of the three terms is then +3, which indeed violates the Bell inequality Eq. (3.1)." **
In terms of computability, recursion native to the real line is equivalent to time reversibility native to general-relativistic cosmology, and dependent on initial condition.
* Gill, Paul Snively's blog, 2015
** Karl Hess, Einstein was Right p.47
