## Introduction to quantum computing with Q# – Part 12, Bell’s inequality

After a short multi-part detour into the world of quantum cryptography, in this part 12 of the series, we are going to return to some of the foundational concepts of quantum mechanics, and look at the programmatic verification of Bell’s inequality.

### Where were we last time?

In part 5 we looked in detail at the phenomenon of entanglement. In that blog post, which I recommend you have a look at again before proceeding here, we introduced the famous EPR thought experiment and discussed how Bohr and others responded to this paradox. We also explored the problem of non-locality which appears to be a consequence of EPR. In fact, for Albert Einstein it was the key argument proving that quantum mechanics is incomplete. Einstein famously objected to any violation of local realism and therefore suggested that quantum mechanics is incomplete. Namely, he suggested to seek for a theory and ontology based on the “hidden variables” concept, which the mathematical formalism of quantum mechanics doesn’t take into account, and which, in his view, pre-determined the quantum correlations between the particles making up an EPR pair.

Finally, we briefly mentioned that John S. Bell, in his 1964 paper “On the Einstein Podolsky Rosen Paradox”, formulated a test that could be used to determine whether Einstein was correct in his search for a “hidden variables” approach, or whether nature itself prohibits such theories. And this is exactly where we will pick up in this post.

### Bell’s theorem

Bell introduced a mathematical framework for inequalities that will be satisfied if one relies on the existence of hidden variables when analyzing an entangled state (later generalized by Clauser, Horne, Shimony, and Holt). He derived his inequality for the singlet state – one of the Bell states we discussed in this series already:

$$\ket{11} \rightarrow \frac{1}{\sqrt{2}}(\ket{01} - \ket{10}) = \ket{\Psi^-}$$

Bell imagined the following setup, based on Bohm’s variant of the EPR experiment, which we already covered in part 5. Two entangled particles in a singlet state $\ket{\Psi^-}$ fly off in opposite directions, where they hit detectors (e.g. Stern-Gerlach spin measuring apparatuses) configured to measure at angles $\theta_A$ and $\theta_B$. Obviously, if $\theta_A = \theta_B$, then the original conditions from Bohm’s EPR experiment are recovered and the measured values $A(\theta_A)$ and $B(\theta_B)$ are always perfectly correlated – opposite to each other (because we started with $\ket{\Psi^-}$). However, if the two detectors are at different angles, Bell realized that the quantum mechanically predicted correlation between the obtained measurements can be written as:

$$\braket{A(\theta_A)B(\theta_B)} = -\cos(\theta_A - \theta_B)$$

He then proceeded to derive an inequality that is violated by the predictions of quantum mechanics, but not by hidden variable theories assuming locality.

The original inequality that Bell derived was as follows:

$$|P(\vec{a},\vec{b}) - P(\vec{a},\vec{c})| - P(\vec{b},\vec{c}) \leq 1$$

where $P(\vec{x},\vec{y})$ refers to the average value of the product of the spins of the measured particles (qubits), $\vec{a}$ and $\vec{b}$ are unit vectors corresponding to the two detectors and $\vec{c}$ is any other unit vector. A step-by-step derivation of the inequality is beyond our scope here, and there exist various excellent resources for that already, including Bell’s original paper and the briefly mentioned CHSH paper. John Preskill has a very accessible example using classical coins in his lecture notes, and Maccone builds upon this to provide a simple proof himself. It is also worth noting that Bell’s inequality has since been generalized to apply to a whole class of problems, and is available in numerous variants, hence it is often referred to in the plural: inequalities.

That said, the above mentioned $\braket{A(\theta_A)B(\theta_B)}$ correlation is important, because it tells us that:

• $P(\vec{a},\vec{b}) = -\cos(\theta_a - \theta_b)$
• $P(\vec{a},\vec{c}) = -\cos(\theta_a - \theta_c)$
• $P(\vec{b},\vec{c}) = -\cos(\theta_b - \theta_c)$

We know that cosine is a periodic function, with its values oscillating between $1$ and $-1$, so it is relatively easy to deduce from the above that there are some angles for which Bell’s inequality is not violated, and some for which the violation is maximal. Namely, when:

• $\theta_a = 0^\circ$
• $\theta_b = \frac{\pi}{3} = 60^\circ$
• $\theta_c = \frac{2\pi}{3} = 120^\circ$

Then:

• $P(\vec{a},\vec{b}) = -\cos(60^\circ) = -0.5$
• $P(\vec{a},\vec{c}) = -\cos(120^\circ) = 0.5$
• $P(\vec{b},\vec{c}) = -\cos(60^\circ) = -0.5$

And substituting these values into the inequality:

$$|-0.5 - 0.5| - (-0.5) = 1.5 \nleq 1$$

Which, of course, is a flagrant violation of Bell’s inequality. Accordingly, Bell’s theorem states:

No physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics.

Blanchard writes in The Message of Quantum Science:

Standard quantum mechanics is the only probabilistic theory where the probabilities are postulated ab initio and are not a consequence of hidden deterministic processes at a deeper level. Such processes are called hidden variables and reflect the ignorance of the observer. However, the majority of physicists today believe that the probabilities in quantum mechanics are not attributable to the ignorance of hypothetical hidden variables, but are of a fundamentally different nature. This is corroborated by Bell’s inequalities, which hold in any theory with (local) hidden variables, and which were experimentally found to be violated.

Bell’s theorem turned out to be a spectacular success and its predictions were experimentally confirmed numerous times. Among the first to empirically confirm the violation of Bell’s inequalities was Alain Aspect at the École supérieure d’optique in Orsay, the results of which were published in 1982. Aspect wrote:

(…) our experiment yields the strongest violation of Bell’s inequalities ever achieved, and excellent agreement with quantum mechanics. Since it is a straightforward transposition of the ideal Einstein-Podolsky-Rosen-Bohm scheme, the experimental procedure is very simple, and needs no auxiliary measurements as in previous experiments with single-channel polarizers. We are thus led to the rejection of realistic local theories if we accept the assumption that there is no bias in the detected samples: Experiments support this natural assumption.

Aspect’s experiment left two major loopholes, which have since been (mostly) closed by other experimental setups that followed. At this point it is clear – with profound consequences – that any local realist theory, as championed by Einstein, is wrong. This also has dramatic epistemological consequences for the very nature of reality. Elegance and Enigma dedicates a chapter to surveying various physicists about the implications of Bell’s theorem. Tim Maudlin wrote there:

Assuming we can accept what we seem to see, namely, that every experiment has a unique outcome (contrary to the many-worlds view) and that the correlations between experiments performed at spacelike separation violate Bell’s inequality, then we can conclude that nature is nonlocal. That is, in some way certain events at spacelike separation are physically connected to each other. Einstein’s dream of a perfectly local physics, in which the occurrences in any small region of the spacetime depend only on what happens in that region, cannot be fulfilled. It is an open question what the implications of this fact are for the relativistic account of spacetime.

A lot of authors point to Bell’s theorem as a direct confirmation of non-locality in the fabric of nature, and David Wallace even hinted at superluminal interactions. He wrote:

What the violations of Bell’s inequalities seem to tell us is that the dynamics of the microworld allows interactions that are faster than light (or slower than light but backward in time, I guess, if that really means anything). If the only interactions in the world are subluminal, Bell’s inequalities would be satisfied; they’re not, so systems can interact superluminally.

However, the view adopted in this post series, namely the reality-without-realism concept, is a lot more subtle, and more in line with the spirit of Bohr’s Copenhagen interpretation – Bohr, too, advocated locality, albeit understood much differently than Einstein did. This is nicely expressed by David Mermin, who, referring to Asher Peres’s famous statement “unperformed tests have no outcomes”, provides the following viewpoint:

So for me, nonlocality is too unsubtle a conclusion to draw from the violation of Bell inequalities. My preference is for conclusions that focus on the impropriety of seeking explanations for what might have happened but didn’t. Evolution has hard-wired us to demand such explanations, since it was crucial for our ancestors to anticipate the consequences of all possible contingencies in their (classical) struggles.

Finally, this view is corroborated by David Griffiths, who provided a very on-point and rather sharp-witted summary of Bell’s theorem and the EPR paradox:

It is a curious twist of fate that the EPR paradox, which assumed locality in order to prove realism, led finally to the demise of locality and left the issue of realism undecided – the outcome (as Bell put it) Einstein would have liked least. Most physicists today consider that if they can’t have local realism, there’s not much point in realism at all, and for this reason nonlocal hidden variable theories occupy a rather peripheral niche.

### Bell’s inequalities in Q#

While it took almost two decades until Bell’s original idea was verified experimentally, we can now easily test Bell’s inequalities using a quantum computer. Just as there are many variants of these inequalities, there are a number of ways we could approach this problem in Q#.

As we already discussed, quantum mechanics predicts that maximal violations of Bell’s inequalities will happen for the following angles:

• $\theta = \frac{\pi}{3} = 60^\circ$ angle between $\vec{a}$ and $\vec{b}$
• $\phi = \frac{\pi}{3} = 60^\circ$ angle between $\vec{b}$ and $\vec{c}$
• $(\theta + \phi) = \frac{2\pi}{3} = 120^\circ$ angle between $\vec{a}$ and $\vec{c}$

A very simple quantum computational model to validate Bell’s inequalities was summarized by Diego Garcia-Martin and German Sierra from Universidad Autonoma de Madrid in their paper Five Experimental Tests on the 5-qubit IBM Quantum Computer. We will use their approach here, with some small modifications. We can measure $P(\vec{a},\vec{b})$, $P(\vec{a},\vec{c})$ and $P(\vec{b},\vec{c})$ by running 3 separate quantum circuits, one for each of the cases.

We shall begin with the singlet state $\ket{\Psi^-}$. We already know from the earlier posts that we can obtain it from a pair of qubits in the state $\ket{11}$ by running the $H$ gate on the first qubit, and then a $CNOT$ gate over both of them. Thus, given qubits $q1, q2$, the following Q# operation creates $\ket{\Psi^-}$:
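A minimal sketch of such a preparation operation could look like this (the operation name is my own choice for illustration):

```qsharp
// Sketch of the singlet state preparation described above;
// the name PrepareSingletState is assumed for illustration.
operation PrepareSingletState(q1 : Qubit, q2 : Qubit) : Unit {
    // bring the freshly allocated |00⟩ pair to |11⟩
    X(q1);
    X(q2);

    // H on the first qubit followed by CNOT yields (|01⟩ - |10⟩)/√2
    H(q1);
    CNOT(q1, q2);
}
```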

We know that we need to create an experimental setup that will cater for the three different cases, corresponding to $P(\vec{a},\vec{b})$, $P(\vec{a},\vec{c})$ and $P(\vec{b},\vec{c})$. Therefore, we will define three separate Q# operations for them.

The first operation will be called $BellsInequalityAB$, will correspond to $P(\vec{a},\vec{b})$ and will use $\theta = \frac{\pi}{3} = 60^\circ$ angle between $\vec{a}$ and $\vec{b}$. The circuit representing the operation is shown below.

We already wrote the Q# code for the initial part of the circuit – the setting up of the $\ket{\Psi^-}$ state. What follows is a transformation for measurement purposes. Since Q# doesn’t support measurement in an arbitrary basis, but instead only allows measurements in one of the Pauli bases, we will apply an $R_z$ rotation of $\frac{\pi}{3}$ radians. Additionally, the circuit defines an $H$ gate followed by a standard basis $PauliZ$ measurement to achieve a measurement along the X axis, but that can be expressed in the code by a single operation – a measurement in the $PauliX$ basis – instead. The Q# code for the circuit is shown next.
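A sketch of the operation, following the description above, could look like this (which qubit receives the $R_z$ rotation is my assumption; the required `open` statements are included for completeness):

```qsharp
open Microsoft.Quantum.Math;
open Microsoft.Quantum.Measurement;

operation BellsInequalityAB() : (Result, Result) {
    using ((q1, q2) = (Qubit(), Qubit())) {
        // prepare the singlet state (|01⟩ - |10⟩)/√2
        X(q1);
        X(q2);
        H(q1);
        CNOT(q1, q2);

        // rotate the measurement basis of the second qubit by θ = π/3
        Rz(PI() / 3.0, q2);

        // measure both qubits in the Pauli X basis, resetting them afterwards
        return (MResetX(q1), MResetX(q2));
    }
}
```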

The return value is a tuple of two classical bits – a pair of $00$, $10$, $01$ or $11$. The Pauli X basis measurement is done using the $MResetX$ built-in operation, which also resets the measured qubit to $\ket{0}$ afterwards.

Our second Bell’s inequality operation will be called $BellsInequalityAC$, will correspond to $P(\vec{a},\vec{c})$ and will use $\theta = \frac{2\pi}{3} = 120^\circ$ angle between $\vec{a}$ and $\vec{c}$. The circuit is presented next.

The Q# code is almost identical to the previous case – with the only exception being the wider angle.
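Under the same assumptions as before, a sketch of this second operation could look as follows:

```qsharp
open Microsoft.Quantum.Math;
open Microsoft.Quantum.Measurement;

operation BellsInequalityAC() : (Result, Result) {
    using ((q1, q2) = (Qubit(), Qubit())) {
        // prepare the singlet state (|01⟩ - |10⟩)/√2
        X(q1);
        X(q2);
        H(q1);
        CNOT(q1, q2);

        // the only difference from the previous case: a wider
        // rotation of 2π/3, the angle between a and c
        Rz(2.0 * PI() / 3.0, q2);

        return (MResetX(q1), MResetX(q2));
    }
}
```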

The final circuit will allow us to measure the results for $P(\vec{b},\vec{c})$ and – as should be apparent at this point – will depend on the $\theta = \frac{\pi}{3} = 60^\circ$ angle between $\vec{b}$ and $\vec{c}$. The circuit is shown next – it is, unsurprisingly, similar to the previous two variants, with the notable difference of the $R_z$ rotation gate now being applied to both qubits.

The corresponding Q# code is:
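A sketch, with the rotation angles assigned to the qubits as I understand the description (this assignment is my assumption):

```qsharp
open Microsoft.Quantum.Math;
open Microsoft.Quantum.Measurement;

operation BellsInequalityBC() : (Result, Result) {
    using ((q1, q2) = (Qubit(), Qubit())) {
        // prepare the singlet state (|01⟩ - |10⟩)/√2
        X(q1);
        X(q2);
        H(q1);
        CNOT(q1, q2);

        // rotate both measurement bases: π/3 for b and 2π/3 for c,
        // leaving a relative angle of π/3 = 60° between them
        Rz(PI() / 3.0, q1);
        Rz(2.0 * PI() / 3.0, q2);

        return (MResetX(q1), MResetX(q2));
    }
}
```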

At this point we have all the necessary Q# operations ready – what we are still missing is the code to orchestrate their execution using a reasonable sample size, as well as some code that will output the results in an accessible format. We will, as usual, set up a small self-contained Q# program for that purpose. The entry point is shown below:
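A sketch of such an entry point, assuming a helper $Run$ operation that executes one of the three Bell operations repeatedly and returns the corresponding $P$ value (the names here are my assumptions):

```qsharp
open Microsoft.Quantum.Math;

@EntryPoint()
operation Start() : Unit {
    // Run is a helper (shown later) that samples one of the three
    // Bell operations and returns the measured correlation P
    let pab = Run("P(a,b)", BellsInequalityAB);
    let pac = Run("P(a,c)", BellsInequalityAC);
    let pbc = Run("P(b,c)", BellsInequalityBC);

    // |P(a,b) - P(a,c)| - P(b,c) ≤ 1 must hold for any
    // local hidden variable theory
    let bell = AbsD(pab - pac) - pbc;
    Message($"|P(a,b) - P(a,c)| - P(b,c) = {bell}");
}
```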

This snippet hints at the existence of a helper $Run$ operation, which we are yet to see, and which takes two input parameters – a string identifier of the operation for visualization purposes, as well as a delegate pointing to one of our three Bell inequality component operations. Such a setup will allow us to reuse the output/presentation code for each of the cases. One other interesting tidbit to mention is that, to make the final calculation for Bell’s inequality, we make use of the built-in $AbsD$ function, which is part of the $Microsoft.Quantum.Math$ namespace and returns the absolute value of a double-precision floating-point number.

The $Run$ operation is shown next, completing our code.
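A sketch of $Run$, following the description below of how the counts and the final $P$ value are computed ($IntAsDouble$ comes from the $Microsoft.Quantum.Convert$ namespace; the exact bookkeeping details are my assumptions):

```qsharp
open Microsoft.Quantum.Convert;

operation Run(name : String, op : (Unit => (Result, Result))) : Double {
    let runs = 4096;
    // running totals for the four possible outcomes: 00, 01, 10, 11
    mutable counts = [0, 0, 0, 0];

    for (i in 1 .. runs) {
        let (r1, r2) = op();
        // map the pair of measured bits onto an array index 0..3
        let index = (r1 == One ? 2 | 0) + (r2 == One ? 1 | 0);
        set counts w/= index <- counts[index] + 1;
    }

    // P = P(00) + P(11) - P(01) - P(10)
    let p = IntAsDouble(counts[0] + counts[3] - counts[1] - counts[2])
        / IntAsDouble(runs);
    Message($"{name} = {p}");
    return p;
}
```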

Ultimately, since we did all the heavy Q# lifting already, this should be straightforward to read. The main purpose here is to provide a standardized way of executing the other three operations and outputting their results. The sample size is fixed at 4096 runs. Since all three Bell’s inequality operations we wrote earlier return a tuple corresponding to the two measured classical bits, we use a 4-element mutable array to keep a running total of the obtained combinations. The final $P(\vec{x},\vec{y})$ is calculated as the sum of the probabilities of measuring $00$ or $11$, minus the probabilities of measuring $01$ or $10$.

### Experimental results

When running our program now, we should see results similar to the ones below.

This is very encouraging – because it aligns perfectly with the predictions of quantum mechanics. Remember, that we expected the maximum violation to be at 1.5, because:

• $P(\vec{a},\vec{b}) = -\cos(60^\circ) = -0.5$
• $P(\vec{a},\vec{c}) = -\cos(120^\circ) = 0.5$
• $P(\vec{b},\vec{c}) = -\cos(60^\circ) = -0.5$

The experimentally obtained results for a trial run of 4096 repetitions are:

• $P(\vec{a},\vec{b}) = -0.491943359375$
• $P(\vec{a},\vec{c}) = 0.498291015625$
• $P(\vec{b},\vec{c}) = -0.508056640625$

This fully confirms the nonlocal nature of quantum phenomena, and excludes the possibility of local hidden variable theories. Of course at this point we are only running this in a local simulator, not on real quantum hardware, but it is nevertheless excellent to see that we did not make any mistakes in our reasoning. In fact, taking a step back, and considering how difficult it was to experimentally confirm Bell’s theorem, it is quite mind-boggling that we can simply write fewer than 100 lines of Q# and quickly get the answer to a question that troubled some of the greatest minds in physics in the first half of the 20th century. Moreover, very soon we’ll be able to run these in Azure on actual quantum hardware too.

At this point it’s worth mentioning that other quantum cloud providers have already allowed similar Bell’s inequality verification to be done on their computers. The aforementioned paper from Garcia-Martin and Sierra documents their findings from running the tests on the 5-qubit IBM hardware. Their experimental results were:

• $P(\vec{a},\vec{b}) = −0.392±0.014$
• $P(\vec{a},\vec{c}) = 0.401±0.009$
• $P(\vec{b},\vec{c}) = −0.389±0.012$

This still produces the inequality violation, albeit not as large as on the Q# local simulator:

$$1.182 ± 0.020 \nleq 1$$

Their conclusion at the time (back in 2018) was that the volatility and inefficiency of the quantum hardware were still too high for scientific use cases:

Overall, the results obtained in these experiments, although moderately good in most cases, are still far from optimum.

However, the fidelity of quantum hardware has dramatically increased in recent years, so I am confident that we will see real-life problem solving and useful physical experiments carried out on quantum computers in the cloud very soon. I am excited and curious to soon be able to see the results on Azure Quantum.

### Summary

Bell’s theorem is one of the most puzzling aspects of the nature of reality – Henry Stapp even pronounced it to be “the most profound discovery in science”. It has forced physicists to effectively choose between giving up either locality or realism, or reaching for exotic theories of the Bohmian or Everettian type. With regards to quantum computing, I continuously marvel at the fact that we can use it to experimentally verify these problems without much effort – and doing that in Q# is a true pleasure.

This concludes part 12 of this series – last part in 2020 – and we’ll continue after New Year’s! Happy Holidays!