Once the quantum computational system grows from a single to multiple qubits, we arrive at a concept of a compound quantum system. Greg Jaeger, in his excellent study of entanglement, reminds us of a mathematical framework in which such systems are embedded:

The Hilbert space of a compound quantum system is the tensor product of the Hilbert spaces of the subsystems. The pure-state space for a system of $N$ two-level systems is the Hilbert space $\mathcal{H}^{(N)} = \mathbb{C}^2\otimes\mathbb{C}^2\otimes…\otimes\mathbb{C}^2$.

From the quantum information theory point of view, then, when dealing with multiple qubits, the overall state of the system is described not by a sum, but by a tensor product of all the individual qubit constituents (or, to be more formal, subsystems), denoted below by $\ket{\psi_1}$ through $\ket{\psi_n}$.

$$\ket{\psi} = \ket{\psi_1}\otimes\ket{\psi_2}\otimes\dots\otimes\ket{\psi_n}$$

As established in the earlier parts of this series, quantum gates are represented by $2^n \times 2^n$ unitary matrices, where $n$ stands for the number of qubits the gate acts on. This definition naturally implies that while single-qubit gate matrices are $2 \times 2$ in size, two-qubit gates are going to be $4 \times 4$ matrices, acting on a 4-dimensional abstract complex vector space, or more specifically, after Jaeger, a Hilbert space. The tensor product qualification is an important one, because that is what gives rise to the $2^n$ dimensions. For example, for single qubits, the computational basis consists of two basis unit vectors

$$\ket{0} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$$

$$\ket{1} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$$

Applying the tensor product to them, we get the four-dimensional standard basis vectors for two-qubit systems

$$\ket{00} = \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} \ket{01} = \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix}$$

$$\ket{10} = \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix} \ket{11} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}$$
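These tensor products are easy to verify numerically. Below is a quick plain-Python sketch (deliberately independent of the Q# samples used in this series) that builds the four two-qubit basis vectors with a small Kronecker-product helper:

```python
def kron(a, b):
    """Kronecker (tensor) product of two column vectors stored as flat lists."""
    return [x * y for x in a for y in b]

ket0 = [1, 0]  # |0>
ket1 = [0, 1]  # |1>

# The four two-qubit standard basis vectors
print(kron(ket0, ket0))  # |00> -> [1, 0, 0, 0]
print(kron(ket0, ket1))  # |01> -> [0, 1, 0, 0]
print(kron(ket1, ket0))  # |10> -> [0, 0, 1, 0]
print(kron(ket1, ket1))  # |11> -> [0, 0, 0, 1]
```

The position of the single $1$ in each product vector is exactly the binary value of the ket label, which is why the $\ket{00} \dots \ket{11}$ ordering above lines up with the standard basis of $\mathbb{C}^4$.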

We can extrapolate these calculations to larger qubit registers using the exact same mathematics. At the time of writing, the largest available quantum computer, constructed by Google, consists of 53 qubits. Describing a complete state of such a system requires a $2^{53}$-dimensional Hilbert space, and $2^{53}$ is a really large number – $9,007,199,254,740,992$. And that’s not even the end – in quantum mechanics in general, in the case of continuous variables, the mathematical structure is actually infinite-dimensional.

Of course such rigorous mathematical formalism – contrary to classical physics – makes it impossible to directly infer any structure of reality from the quantum mechanical equations, something that Einstein had lots of trouble with. Arkady Plotnitsky summarizes this situation in The Principles of Quantum Theory, From Planck’s Quanta to the Higgs Boson:

His [Heisenberg’s] new theory offers the possibility of predicting, in general probabilistically, the outcomes of quantum experiments, at the cost of abandoning the physical description or representation, however idealized, of the ultimate objects and processes considered. This cost was unacceptable to some, even to most, beginning with Einstein (…). The nature of the mathematics used, that of the infinite-dimensional Hilbert spaces over complex numbers, already makes it difficult to establish realist representations of physical processes.

A similar sentiment – completely detaching this strange, complicated mathematical form of expression from the underlying reality, something that, by the way, falls in line with the “spirit of Copenhagen” (which we briefly mentioned in an earlier part of this series) – has also been expressed by David Mermin in his contribution to Elegance and Enigma:

Quantum states, in other words, are bookkeeping tools that enable one to calculate, from a knowledge of the initial preparation and the fields acting on the system, the probability of the outcomes of the measurements on that system. (…) The Copenhagen view fits quantum computation so well that I am persuaded that quantum states are, even in the broader physical contexts, calculational tools, invented and used by physicists to enable them to predict correlations among their perceptions.

Leaving the philosophical debates about the nature of reality aside for now, let’s consider the following two-qubit system, where the two qubits are in states $\ket{\psi_1}$ and $\ket{\psi_2}$, and the probability amplitudes are $\alpha_1$, $\beta_1$ and $\alpha_2$, $\beta_2$ respectively. As thoroughly analyzed in the earlier parts of this series, each qubit can be written as a linear combination of the two basis states.

$$\ket{\psi_1} = \alpha_1\ket{0} + \beta_1\ket{1}$$

$$\ket{\psi_2} = \alpha_2\ket{0} + \beta_2\ket{1}$$

The state of this two qubit system is then expressed by the tensor product of $\ket{\psi_1}$ and $\ket{\psi_2}$:

$$\ket{\psi_1}\otimes\ket{\psi_2} = \alpha_1\alpha_2\ket{0}\ket{0} + \alpha_1\beta_2\ket{0}\ket{1}\\ + \beta_1\alpha_2\ket{1}\ket{0} + \beta_1\beta_2\ket{1}\ket{1}$$

Additionally, in the Dirac bra-ket notation, it is customary to merge neighbouring kets into a single ket, to make the entire equation more succinct and readable:

$$\ket{\psi_1}\otimes\ket{\psi_2} = \alpha_1\alpha_2\ket{00} + \alpha_1\beta_2\ket{01}\\ + \beta_1\alpha_2\ket{10} + \beta_1\beta_2\ket{11}$$
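This expansion is easy to check numerically – a quick plain-Python sketch (independent of the Q# samples in this series), with hypothetical amplitude values chosen purely for illustration:

```python
def kron(a, b):
    """Tensor product of two column vectors stored as flat lists."""
    return [x * y for x in a for y in b]

# Hypothetical states: |psi1> = 0.6|0> + 0.8|1>, |psi2> = 1|0> + 0|1>
psi1 = [0.6, 0.8]
psi2 = [1.0, 0.0]

# Amplitudes of |00>, |01>, |10>, |11> in the product state
print(kron(psi1, psi2))  # [0.6, 0.0, 0.8, 0.0]
```

Each amplitude in the product state is the product of one amplitude from each subsystem, exactly as in the equation above.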

The above equation is also important because we will return to it when we look at entanglement. When working with multi-qubit registers, we can always combine single-qubit unitary transformations to produce a multi-qubit transformation (gate), by expressing those transformations as tensor products of single-qubit transformations. Following this rule, $X \otimes Z$ is the same as $X \otimes I$ followed by $I \otimes Z$, where $I$ is the identity gate.
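The decomposition rule above can be confirmed directly on the matrices. A quick plain-Python sketch (again, independent of this post's Q# samples):

```python
def kron(A, B):
    """Kronecker (tensor) product of two matrices stored as nested lists."""
    return [[x * y for x in row_a for y in row_b]
            for row_a in A for row_b in B]

def matmul(A, B):
    """Plain matrix multiplication."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

X = [[0, 1], [1, 0]]
Z = [[1, 0], [0, -1]]
I = [[1, 0], [0, 1]]

# (X (x) I) followed by (I (x) Z) is the same transformation as X (x) Z
print(matmul(kron(I, Z), kron(X, I)) == kron(X, Z))  # True
```

Note that matrix products are read right to left, so "apply $X \otimes I$, then $I \otimes Z$" is written as $(I \otimes Z)(X \otimes I)$; in this particular case the two factors also commute.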

The opposite, however, is not always possible – not every multi-qubit transformation can be decomposed into a set of single-qubit ones. The reason behind this is, again, an embodiment of the weirdness of quantum phenomena – some of the subsystems (qubits) may be entangled, and their states can then no longer be expressed individually. As mentioned, however, for now we are going to steer away from the depths of entanglement, as we are going to dedicate the entire next post to it.

By far the most fundamental and most commonly used two-qubit gate is the CNOT gate. CNOT stands for “controlled NOT”, which means it acts like the single-qubit NOT ($X$) gate, but in a conditional way, where the condition spans a second qubit. The gate is also often referred to as a “controlled bit flip”. CNOT treats the first qubit as a so-called “control qubit” and the second one as a “target qubit”. Upon application of CNOT, the value of the control qubit doesn’t change, while the target qubit may change conditionally – depending on the value of the control qubit. A partial counterpart to the logic behind CNOT in classical computing is the XOR operation, the so-called exclusive OR, because the change in the target qubit value upon measurement corresponds to XOR logic.

For two classical bit inputs, XOR returns $0$ when the two inputs are both $0$ or both $1$, and $1$ when the two input values differ – one is a $0$ and the other a $1$. Similarly, if we leave aside the quantum mechanical concept of superposition and the complexities arising from it, and imagine that a qubit can only be in one of the two basis states $\ket{0}$ and $\ket{1}$ (thus behaving like a classical bit), we can summarize the effect of the CNOT gate accordingly – when the control qubit is $\ket{0}$, the target qubit doesn’t change its value. However, when the control qubit is $\ket{1}$, the value of the target is flipped – from $\ket{0}$ to $\ket{1}$ and vice versa.

$$\ket{00}\rightarrow\ket{00}$$

$$\ket{01}\rightarrow\ket{01}$$

$$\ket{10}\rightarrow\ket{11}$$

$$\ket{11}\rightarrow\ket{10}$$

CNOT is mathematically expressed using the following matrix:

$$CNOT=\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix}$$
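A quick way to double-check these matrix semantics is to multiply the CNOT matrix with each basis vector. The plain-Python sketch below (separate from the Q# samples in this series) does exactly that:

```python
def matvec(M, v):
    """Multiply a matrix (nested lists) by a column vector (flat list)."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

CNOT = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 0]]

labels = ["00", "01", "10", "11"]
for i, label in enumerate(labels):
    vec = [1 if k == i else 0 for k in range(4)]   # the basis vector |label>
    out = matvec(CNOT, vec)
    print(f"|{label}> -> |{labels[out.index(1)]}>")
```

The loop prints the four transformations listed above: $\ket{00}$ and $\ket{01}$ are untouched, while $\ket{10}$ and $\ket{11}$ exchange.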

The CNOT gate is unitary and is its own inverse and thus can be applied twice in a row to reverse its result. In other words, given an initial quantum state $\ket{\psi}$, the following relation holds:

$$CNOT(CNOT\ket{\psi}) = \ket{\psi}$$

There is an easy way to prove this. Let’s say we start in an initial quantum system state $\ket{\varphi_1,\varphi_2}$, where the control qubit is in the state $\varphi_1$ and the target qubit is in the state $\varphi_2$. Applying the CNOT gate once will transform the system state to $\ket{\varphi_1,\varphi_1\oplus\varphi_2}$. Applying CNOT again will further transform that to $\ket{\varphi_1,\varphi_1\oplus(\varphi_1\oplus\varphi_2)}$. Since XOR is commutative and associative, we can regroup and replace $\varphi_1\oplus\varphi_1$ with $0$ (recall that XOR returns $0$ when both inputs are the same), which leads to $\ket{\varphi_1,0\oplus\varphi_2}$, and that is equivalent to the initial state $\ket{\varphi_1,\varphi_2}$.
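The same conclusion follows directly from the matrix representation: squaring the CNOT matrix yields the identity. A minimal plain-Python check:

```python
def matmul(A, B):
    """Plain matrix multiplication over nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

CNOT = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 0]]
I4 = [[1 if i == j else 0 for j in range(4)] for i in range(4)]

print(matmul(CNOT, CNOT) == I4)  # True -> CNOT is its own inverse
```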

At this point, it’s going to be beneficial to abandon purely theoretical deliberations and look at some Q# code. A word of qualification is in order though. In the three earlier posts of this series, we used a particular construct for our Q# programs – more specifically, a hybrid C# + Q# model, where C# provided a shell driver application, and Q# contained the quantum logic invoked from that application. Since then, the QDK, in one of its recent updates, shipped a new feature – standalone Q# command line applications. This feature allows us to use pure Q# programs, making the entire journey through quantum coding much more straightforward, as it eliminates the extra baggage of having to deal with a separate programming language. I blogged about this feature in detail on the Q# community blog recently, so I will skip the introduction to it here – I recommend, however, that you have a look there.

A full example of a simple Q# program that uses CNOT (from the $Microsoft.Quantum.Intrinsic$ namespace) against two qubits, and then measures the qubits to verify the results, is shown below:

```qsharp
namespace Samples {
    open Microsoft.Quantum.Intrinsic;
    open Microsoft.Quantum.Measurement;

    @EntryPoint()
    operation Start() : Unit {
        Message("CNOT");
        ControlledNotSample(false, false); // |00>
        ControlledNotSample(false, true);  // |01>
        ControlledNotSample(true, false);  // |10>
        ControlledNotSample(true, true);   // |11>
    }

    operation ControlledNotSample(controlInitialState : Bool, targetInitialState : Bool) : Unit {
        using ((control, target) = (Qubit(), Qubit())) {
            PrepareQubitState(control, controlInitialState);
            PrepareQubitState(target, targetInitialState);

            CNOT(control, target);

            let resultControl = MResetZ(control);
            let resultTarget = MResetZ(target);
            Message("|" + (controlInitialState ? "1" | "0") + (targetInitialState ? "1" | "0") +
                ">==>|" + (resultControl == One ? "1" | "0") + (resultTarget == One ? "1" | "0") + ">");
        }
    }

    operation PrepareQubitState(qubit : Qubit, initialState : Bool) : Unit is Adj {
        if (initialState) {
            X(qubit);
        }
    }
}
```

In this sample, as mentioned, we only deal with qubits in the two orthogonal basis states $\ket{0}$ and $\ket{1}$. We always borrow (“allocate”) two qubits, and use them to apply the CNOT gate. The sample runs four times – for the four different starting states $\ket{00}$, $\ket{01}$, $\ket{10}$ and $\ket{11}$. The $Message$ function prints the output to the console, and we keep track of the state transformations. What we should see is the following:

```
CNOT
|00>==>|00>
|01>==>|01>
|10>==>|11>
|11>==>|10>
```

This is of course as expected – the states $\ket{00}$ and $\ket{01}$ are left intact, while the states $\ket{10}$ and $\ket{11}$ result in the change in the target qubit value, along the lines of the classical XOR logic.

At this point it is also worth mentioning that the notion of a “control qubit” and a “target qubit” is only an approximation of the real behavior of CNOT in quantum computing, and while helpful for high-level understanding, it should not be taken too literally. As was discussed in part 1 of this series, measurements in quantum computing are basis dependent. The situation is the same with the effects of the CNOT gate. So far we have only discussed its behavior in the standard computational basis ($\ket{0},\ket{1}$); however, we could use the gate in a different basis, say, the Hadamard basis ($\ket{+},\ket{-}$). In the Hadamard basis, the notions of control and target qubits would appear to be effectively flipped, as it is the second qubit that remains unchanged, and the first one that – conditionally – changes its state. The state transformations (without going into the mathematical proof) look as follows:

$$\ket{++}\rightarrow\ket{++}$$

$$\ket{+-}\rightarrow\ket{--}$$

$$\ket{-+}\rightarrow\ket{-+}$$

$$\ket{--}\rightarrow\ket{+-}$$
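Although we skip the formal proof here, the role reversal can still be verified numerically: conjugating the CNOT matrix with $H \otimes H$ produces a CNOT in which control and target are exchanged. A plain-Python sketch (rounding is used to absorb floating-point noise):

```python
import math

def kron(A, B):
    return [[x * y for x in row_a for y in row_b]
            for row_a in A for row_b in B]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

h = 1 / math.sqrt(2)
H = [[h, h], [h, -h]]  # the Hadamard gate
CNOT = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]

# Express CNOT in the Hadamard basis: (H (x) H) CNOT (H (x) H)
HH = kron(H, H)
conjugated = matmul(HH, matmul(CNOT, HH))
rounded = [[round(x, 10) for x in row] for row in conjugated]

# CNOT with the roles of control and target exchanged
REVERSED_CNOT = [[1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0]]
print(rounded == REVERSED_CNOT)  # True
```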

In classical computation theory, we often speak of the so-called “universal gates” (e.g. OR, AND and NOT) – a set of gates that can be arranged to represent any logical algorithm, without involving any other gates. Such arrangement would likely not be optimal from the performance standpoint, but it would be functionally complete. In fact, a single classical gate, the NAND gate, on its own is already a universal gate – as any boolean logic function can be represented by a combination of NAND gates. In quantum computing, the concept of gate universality is a little more challenging. Nielsen and Chuang define quantum universality as follows:

A set of gates is said to be universal for quantum computation if any unitary operation may be approximated to arbitrary accuracy by a quantum circuit involving only those gates.

This definition contains an important qualification – “approximated to arbitrary accuracy”. Quantum computing, due to its inherently different nature compared to classical boolean algebra, does not allow for a perfect set of universal gates. The main reason behind this is that any unitary transformation can become a gate acting on a qubit. This is explained in a very logical and intuitive way by Chris Bernhardt:

As we have seen there are infinitely many possible gates that can act on just one qubit. If we take a finite number of gates and connect them in a finite number of ways, we will end up with a finite number of circuits. So, it is just not possible to have a finite number of gates that generates an infinite number of circuits.

Despite this sobering assessment, Nielsen and Chuang do identify a set of universal quantum gates – universal according to their quantum definition, not according to the classical universality definition. Those gates are the $H$ gate, the phase gate, the ${\pi/8}$ gate and the $CNOT$ gate; the rationale is that, in principle, any unitary transformation can be decomposed into a series of single-qubit and CNOT gates. This emphasizes yet again the critical importance of the CNOT gate in quantum algorithm building. We will soon see how one multi-qubit gate can actually be replaced by a series of CNOTs.

An intuitive way of thinking about CNOT is that it is made up of the $I$ matrix (top left corner) and the $X$ matrix (bottom right corner). This observation is very helpful, as it allows us to naturally generalize the behavior of CNOT into a common class of “controlled” qubit transformations, where a given transformation is applied conditionally, only when the control qubit is $\ket{1}$. If we follow this generalization path, we can take any generic single-qubit transformation $U$ such that:

$$U=\begin{bmatrix} U_{11} & U_{12} \\ U_{21} & U_{22} \end{bmatrix}$$

We can then use the observation we already made about the composition of the CNOT gate, and combine the $I$ gate and $U$, thus creating a hypothetical controlled $U$ two-qubit gate (“$CU$”), which is shown below.

$$CU=\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & U_{11} & U_{12} \\ 0 & 0 & U_{21} & U_{22} \end{bmatrix}$$

We can now classify the effects of this $CU$ gate in the following way:

$$\ket{00}\rightarrow\ket{00}$$

$$\ket{01}\rightarrow\ket{01}$$

$$\ket{10}\rightarrow\ket{1} \otimes U\ket{0}$$

$$\ket{11}\rightarrow\ket{1} \otimes U\ket{1}$$
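Following that recipe, building a controlled version of an arbitrary $2 \times 2$ matrix is mechanical: keep the identity block in the top-left corner and place $U$ in the bottom-right one. A small, hypothetical Python helper illustrating the construction (not an API of any library):

```python
def controlled(U):
    """Embed a 2x2 matrix U into a 4x4 controlled-U matrix."""
    CU = [[1, 0, 0, 0],
          [0, 1, 0, 0],
          [0, 0, 0, 0],
          [0, 0, 0, 0]]
    for i in range(2):
        for j in range(2):
            CU[2 + i][2 + j] = U[i][j]
    return CU

X = [[0, 1], [1, 0]]
print(controlled(X))  # reproduces the CNOT matrix

Z = [[1, 0], [0, -1]]
print(controlled(Z))  # the controlled-Z (CZ) matrix
```

Feeding $X$ into the helper recovers CNOT exactly, which confirms the block-structure observation made above.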

Such generalizations are common in the quantum computing literature – for example, Bob Sutor in his recent book Dancing with Qubits provides exhaustive information about constructing $CY$, $CZ$ or even $CR^Z_\phi$ gates. Q# does not intrinsically implement those gates; however, it does provide a general-purpose mechanism for creating controlled unitary transformations via the $Controlled$ functor. A simple example of achieving CNOT functionality via the $Controlled$ functor and a single-qubit $X$ gate in Q# is shown below:

```qsharp
namespace Samples {
    open Microsoft.Quantum.Intrinsic;
    open Microsoft.Quantum.Measurement;

    @EntryPoint()
    operation Start() : Unit {
        Message("CNOT via functor");
        ControlledNotSampleFunctor(false, false); // |00>
        ControlledNotSampleFunctor(false, true);  // |01>
        ControlledNotSampleFunctor(true, false);  // |10>
        ControlledNotSampleFunctor(true, true);   // |11>
    }

    operation ControlledNotSampleFunctor(controlInitialState : Bool, targetInitialState : Bool) : Unit {
        using ((control, target) = (Qubit(), Qubit())) {
            PrepareQubitState(control, controlInitialState);
            PrepareQubitState(target, targetInitialState);

            Controlled X([control], target);

            let resultControl = MResetZ(control);
            let resultTarget = MResetZ(target);
            Message("|" + (controlInitialState ? "1" | "0") + (targetInitialState ? "1" | "0") +
                ">==>|" + (resultControl == One ? "1" | "0") + (resultTarget == One ? "1" | "0") + ">");
        }
    }

    operation PrepareQubitState(qubit : Qubit, initialState : Bool) : Unit is Adj {
        if (initialState) {
            X(qubit);
        }
    }
}
```

The output of this code is identical to the earlier one when we used the built-in $CNOT$ gate.

```
CNOT via functor
|00>==>|00>
|01>==>|01>
|10>==>|11>
|11>==>|10>
```

Another useful two-qubit gate is the SWAP gate. As the name suggests, it can be used to swap the states of two qubits. This is a feature that can be very useful in quantum algorithms, especially in the absence of copying (we have not touched on that subject yet, but we will return to it later in this series). The SWAP gate is described by the following matrix:

$$SWAP=\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

SWAP state transformations are summarized below:

$$\ket{00}\rightarrow\ket{00}$$

$$\ket{01}\rightarrow\ket{10}$$

$$\ket{10}\rightarrow\ket{01}$$

$$\ket{11}\rightarrow\ket{11}$$
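As with CNOT, these transformations follow directly from multiplying the SWAP matrix with the basis vectors; a plain-Python spot check:

```python
def matvec(M, v):
    """Multiply a matrix (nested lists) by a column vector (flat list)."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

SWAP = [[1, 0, 0, 0],
        [0, 0, 1, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 1]]

labels = ["00", "01", "10", "11"]
for i, label in enumerate(labels):
    vec = [1 if k == i else 0 for k in range(4)]
    out = matvec(SWAP, vec)
    print(f"|{label}> -> |{labels[out.index(1)]}>")
```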

Just like the CNOT gate, the SWAP operation in Q# can be found in the $Microsoft.Quantum.Intrinsic$ namespace and, not surprisingly, takes two qubits as arguments. An example of using the SWAP gate in Q# is shown in the snippet below.

```qsharp
namespace Samples {
    open Microsoft.Quantum.Intrinsic;
    open Microsoft.Quantum.Measurement;

    @EntryPoint()
    operation Start() : Unit {
        Message("SWAP");
        SwapSample(false, false); // |00>
        SwapSample(false, true);  // |01>
        SwapSample(true, false);  // |10>
        SwapSample(true, true);   // |11>
    }

    operation SwapSample(firstState : Bool, secondState : Bool) : Unit {
        using ((first, second) = (Qubit(), Qubit())) {
            PrepareQubitState(first, firstState);
            PrepareQubitState(second, secondState);

            SWAP(first, second);

            let resultFirst = MResetZ(first);
            let resultSecond = MResetZ(second);
            Message("|" + (firstState ? "1" | "0") + (secondState ? "1" | "0") +
                ">==>|" + (resultFirst == One ? "1" | "0") + (resultSecond == One ? "1" | "0") + ">");
        }
    }

    operation PrepareQubitState(qubit : Qubit, initialState : Bool) : Unit is Adj {
        if (initialState) {
            X(qubit);
        }
    }
}
```

The code is actually identical to our earlier CNOT sample – the only exception being that we (no pun intended) swapped CNOT for SWAP. The output is in line with our expectations:

```
SWAP
|00>==>|00>
|01>==>|10>
|10>==>|01>
|11>==>|11>
```

Fittingly, given the earlier discussion of CNOT universality, the SWAP gate can be accurately reconstructed using a series of three CNOT transformations, applied in the order shown in the circuit below.

While such a composition may not appear very intuitive at first, it is actually very easy to prove mathematically. I generally encourage everyone to try these proofs as often as possible, to develop an intuition for the state transformations connected to the most common gate types. In this case, we can start the proof by writing the initial state $\ket{\psi_1}$ of the compound two-qubit system as follows:

$$\ket{\psi_1} = \ket{a} \otimes \ket{b}$$

In $\ket{\psi_1}$, $\ket{a}$ and $\ket{b}$ are in the computational basis, so either $\ket{0}$ or $\ket{1}$. After applying the first CNOT we end up with $\ket{\psi_2}$:

$$\ket{\psi_2} = \ket{a} \otimes \ket{a \oplus b}$$

Following with the second CNOT, this time with the control and target qubits reversed, we end up with $\ket{\psi_3}$. Notice that the reversal causes the second qubit – now acting as the control – to keep its state intact.

$$\ket{\psi_3} = \ket{a \oplus (a \oplus b)} \otimes \ket{a \oplus b}$$

Due to associativity of XOR, $\ket{\psi_3}$ can then be reduced to:

$$\ket{\psi_3} = \ket{b} \otimes \ket{a \oplus b}$$

Finally, applying the final CNOT (reversing the control qubit again) leads us to state $\ket{\psi_4}$:

$$\ket{\psi_4} = \ket{b} \otimes \ket{(a \oplus b) \oplus b}$$

And just like before, this can be reduced to:

$$\ket{\psi_4} = \ket{b} \otimes \ket{a}$$

If we compare $\ket{\psi_1}$ and $\ket{\psi_4}$, we can easily see that the qubit states have been swapped. Of course we do not need to rely only on the algebraic proof – with Q# at our disposal, it is fairly easy to verify that this particular sequence of CNOTs is equivalent to a SWAP. This is shown in the snippet below.

```qsharp
namespace Samples {
    open Microsoft.Quantum.Intrinsic;
    open Microsoft.Quantum.Measurement;

    @EntryPoint()
    operation Start() : Unit {
        Message("SWAP with CNOTs");
        SwapSampleWithCnots(false, false); // |00>
        SwapSampleWithCnots(false, true);  // |01>
        SwapSampleWithCnots(true, false);  // |10>
        SwapSampleWithCnots(true, true);   // |11>
    }

    operation SwapSampleWithCnots(firstState : Bool, secondState : Bool) : Unit {
        using ((first, second) = (Qubit(), Qubit())) {
            PrepareQubitState(first, firstState);
            PrepareQubitState(second, secondState);

            CNOT(first, second);
            CNOT(second, first);
            CNOT(first, second);

            let resultFirst = MResetZ(first);
            let resultSecond = MResetZ(second);
            Message("|" + (firstState ? "1" | "0") + (secondState ? "1" | "0") +
                ">==>|" + (resultFirst == One ? "1" | "0") + (resultSecond == One ? "1" | "0") + ">");
        }
    }

    operation PrepareQubitState(qubit : Qubit, initialState : Bool) : Unit is Adj {
        if (initialState) {
            X(qubit);
        }
    }
}
```

It prints identical result to using the built-in SWAP gate:

```
SWAP with CNOTs
|00>==>|00>
|01>==>|10>
|10>==>|01>
|11>==>|11>
```
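The same equivalence can also be confirmed on the matrix level: multiplying the three CNOT matrices (the middle one with control and target exchanged) yields exactly the SWAP matrix. A short plain-Python check:

```python
def matmul(A, B):
    """Plain matrix multiplication over nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

CNOT_12 = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]  # first qubit controls
CNOT_21 = [[1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0]]  # second qubit controls
SWAP = [[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]]

# Apply CNOT_12, then CNOT_21, then CNOT_12 (matrix products read right to left)
print(matmul(CNOT_12, matmul(CNOT_21, CNOT_12)) == SWAP)  # True
```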

The Toffoli gate, also known as the CCNOT gate, is a three-qubit gate that extrapolates the CNOT gate logic onto three qubits. Logically, it follows similar XOR rules – the first two qubits are both treated as control qubits, while the third plays the role of the target qubit. The first experimental realization of the gate in a trapped ion quantum computer was done in 2008 by a group at the University of Innsbruck.

Since the CCNOT gate operates on three qubits, the matrix used to express it is $8 \times 8$ ($2^3 = 8$) in size:

$$CCNOT=\begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \end{bmatrix}$$

Similarly to what we did when looking at the CNOT gate, if we set aside the superposition states and only focus on the two computational basis states $\ket{0}$ and $\ket{1}$, the effects of the gate can be summarized in the following way:

$$\ket{000}\rightarrow\ket{000}$$

$$\ket{001}\rightarrow\ket{001}$$

$$\ket{010}\rightarrow\ket{010}$$

$$\ket{011}\rightarrow\ket{011}$$

$$\ket{100}\rightarrow\ket{100}$$

$$\ket{101}\rightarrow\ket{101}$$

$$\ket{110}\rightarrow\ket{111}$$

$$\ket{111}\rightarrow\ket{110}$$

In other words, the gate swaps the states $\ket{110}$ and $\ket{111}$ with each other and leaves all other states unchanged. In Q#, just like the CNOT, SWAP and other basic gates, CCNOT is part of the $Microsoft.Quantum.Intrinsic$ namespace and it actually happens to be the only built-in three-qubit gate in the language. However, other similar gates can always be manually constructed using the aforementioned $Controlled$ functor.

```qsharp
namespace Samples {
    open Microsoft.Quantum.Intrinsic;
    open Microsoft.Quantum.Measurement;

    @EntryPoint()
    operation Start() : Unit {
        Message("Toffoli");
        ToffoliSample(false, false, false); // |000>
        ToffoliSample(false, false, true);  // |001>
        ToffoliSample(false, true, false);  // |010>
        ToffoliSample(false, true, true);   // |011>
        ToffoliSample(true, false, false);  // |100>
        ToffoliSample(true, false, true);   // |101>
        ToffoliSample(true, true, false);   // |110>
        ToffoliSample(true, true, true);    // |111>
    }

    operation ToffoliSample(control1InitialState : Bool, control2InitialState : Bool, targetInitialState : Bool) : Unit {
        using ((control1, control2, target) = (Qubit(), Qubit(), Qubit())) {
            PrepareQubitState(control1, control1InitialState);
            PrepareQubitState(control2, control2InitialState);
            PrepareQubitState(target, targetInitialState);

            CCNOT(control1, control2, target);

            let resultControl1 = MResetZ(control1);
            let resultControl2 = MResetZ(control2);
            let resultTarget = MResetZ(target);
            Message("|" + (control1InitialState ? "1" | "0") + (control2InitialState ? "1" | "0") + (targetInitialState ? "1" | "0") +
                ">==>|" + (resultControl1 == One ? "1" | "0") + (resultControl2 == One ? "1" | "0") + (resultTarget == One ? "1" | "0") + ">");
        }
    }

    operation PrepareQubitState(qubit : Qubit, initialState : Bool) : Unit is Adj {
        if (initialState) {
            X(qubit);
        }
    }
}
```

The output, of course, aligns perfectly with what we expected.

```
Toffoli
|000>==>|000>
|001>==>|001>
|010>==>|010>
|011>==>|011>
|100>==>|100>
|101>==>|101>
|110>==>|111>
|111>==>|110>
```
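Algebraically, the $8 \times 8$ CCNOT matrix tells the same story. A plain-Python enumeration of all eight basis states (again a numerical sketch, separate from the Q# sample):

```python
def matvec(M, v):
    """Multiply a matrix (nested lists) by a column vector (flat list)."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# CCNOT: the 8x8 identity with the last two rows swapped
CCNOT = [[1 if i == j else 0 for j in range(8)] for i in range(8)]
CCNOT[6], CCNOT[7] = CCNOT[7], CCNOT[6]

for i in range(8):
    vec = [1 if k == i else 0 for k in range(8)]
    out = matvec(CCNOT, vec)
    print(f"|{i:03b}> -> |{out.index(1):03b}>")
```

Only the last two basis states, $\ket{110}$ and $\ket{111}$, exchange – everything else maps to itself.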

In this post we looked at the mathematics behind multi-qubit gates, explored the CNOT, CCNOT and SWAP gates and identified a whole class of generic controlled gates. CNOT is definitely a very interesting gate from the algebraic perspective, and, as one of the universal quantum gates, is fundamentally important to quantum information theory and quantum computing. However, since we explicitly stated at the beginning of this post that we would ignore the superposition state of the control qubit, the CNOT behavior that we have seen so far is logical, deterministic and fully classical. Such a context is useful to introduce the gate, but it also obfuscates the real reason why the gate is at the heart of quantum computing – the capability of the CNOT gate to entangle two qubits. The next post in this series will be dedicated to entanglement, as we continue to dive deeper into the CNOT gate and the world of quantum computing.

The feature is built exactly according to the methodology and using the approach I described in my last post. It allows editors using OmniSharp to ask the OmniSharp language server to perform Roslyn-based semantic classification of the code (or a subset of the code – a span). OmniSharp then provides all the necessary classification information – the types of classified spans and their locations – allowing editors to do accurate, sophisticated, compiler-driven syntax highlighting.

This new revamped semantic highlighting feature shipped in OmniSharp 1.34.15 in March this year, as a */v2/highlight* endpoint of OmniSharp. We already had a */highlight* endpoint for semantic highlighting, but the new one offers new data structures and supports additional features, such as static modifiers, which justified the new version.

Some OmniSharp clients, such as omnisharp-vim, already used the old highlighting endpoint, but the C# extension for VS Code never implemented it. In fact, there was no support for semantic highlighting in VS Code at all until version 1.42.0, when the *SemanticTokensProvider* API was finally introduced.

Instead, up to that point, syntax highlighting in the extension (in all VS Code extensions, for that matter) was provided using a dedicated TextMate grammar, which is a set of regular expressions corresponding to various language features. This approach was far from ideal and caused a lot of problems – after all, it had no semantic understanding of the code, so the regular expressions were usually only close approximations of what needed to be highlighted. Additionally, the grammar had to be continually updated and maintained as the C# syntax and its features evolved – which, admittedly, we have not been most diligent about. This was actually the reason why, recently, syntax highlighting in the C# extension for VS Code would break completely with, for example, async enumerables.

The new semantic highlighting endpoint was adopted by the C# extension for VS Code, as an experimental feature, in release 1.22.0, from May this year. "Experimental" means in this case that it is an opt-in feature, and has to be enabled using the following setting:

"csharp.semanticHighlighting.enabled": true, "editor.semanticHighlighting.enabled": true

*csharp.semanticHighlighting.enabled* controls enabling semantic highlighting with OmniSharp, and defaults to *false*. *editor.semanticHighlighting.enabled* controls enabling the semantic highlighting API in VS Code altogether, and defaults to *true*; however, I wanted to explicitly mention it here, since if it were set to *false* for some reason, the C# setting would have no effect.

Semantic highlighting is an exciting feature that improves the quality of syntax highlighting when working with C# in VS Code. I encourage you to enable this experimental setting and provide bug reports and feedback in the extension repo or the OmniSharp repo.

In the post I go over the various aspects of the feature, discuss the new *@EntryPoint()* attribute that was introduced into Q# and dive deeper into some implementation details. Enjoy the article!

Roslyn exposes a static *Classifier* service, which can be used to ask the compiler to semantically classify the spans contained in a given document or in a semantic model (or part of it). The API has existed since Roslyn 1.0 and is part of the workspace layer of Roslyn – the *Microsoft.CodeAnalysis.CSharp.Workspaces* NuGet package. Under the hood it is underpinned by an internal language service, *ISyntaxClassificationService*.

*Classifier* exposes two public methods, which, as mentioned briefly, operate on a document or a semantic model level. In either case, you'd need to initialize a Roslyn workspace (most often an MSBuild based workspace) to be able to work with the API – even if you want to classify a standalone, loose piece of C#; in that case a dummy workspace is necessary.

```csharp
public static class Classifier
{
    public static async Task<IEnumerable<ClassifiedSpan>> GetClassifiedSpansAsync(
        Document document,
        TextSpan textSpan,
        CancellationToken cancellationToken = default);

    public static IEnumerable<ClassifiedSpan> GetClassifiedSpans(
        SemanticModel semanticModel,
        TextSpan textSpan,
        Workspace workspace,
        CancellationToken cancellationToken = default);
}
```

Looking at the API, you'd likely wonder why one method is async, but not the other. The reason why the document-based method is async, and the semantic model-based one isn't, is because the first one will need to internally obtain the semantic model from the document, which in itself is an async operation. Once the semantic model is available, there is no async work left to do, hence the second method doesn't need to be asynchronous anymore.

How do we use the *Classifier*? Let's imagine we'd like to classify the following simple piece of code:

```csharp
using System;

public class MyClass
{
    public static void MyMethod(int value)
    {
    }
}
```

We have already mentioned that a workspace is necessary to work with the classification API, and the quickest way to get one is to use *AdhocWorkspace* and the default *MefHostServices*. They will contain the necessary internal compiler services that the classifier requires. For simplicity of the demo, we will hardcode our input code into a local variable – in normal use cases you'd be reading it from disk or from some client/user request. If you are dealing with a full C# solution, instead of standalone C# code to classify, the more appropriate choice over *AdhocWorkspace* would be *MSBuildWorkspace*.

```csharp
var code = @"using System;

            public class MyClass
            {
                public static void MyMethod(int value)
                {
                }
            }";
var host = MefHostServices.Create(MefHostServices.DefaultAssemblies);
var workspace = new AdhocWorkspace(host);
```

Once you have the workspace, you'd need to produce a *Document* or a *SemanticModel* representing our piece of code to classify. Let's first look at the semantic model approach, as it's – in my opinion – a bit less work.

```csharp
var sourceText = SourceText.From(code);
var syntaxTree = CSharpSyntaxTree.ParseText(sourceText);
var compilation = CSharpCompilation.Create("Dummy")
    .AddReferences(MetadataReference.CreateFromFile(typeof(object).Assembly.Location))
    .AddSyntaxTrees(syntaxTree);
var semanticModel = compilation.GetSemanticModel(syntaxTree);
var classifiedSpans = Classifier.GetClassifiedSpans(semanticModel, new TextSpan(0, code.Length), workspace);

foreach (var classifiedSpan in classifiedSpans)
{
    var position = sourceText.Lines.GetLinePositionSpan(classifiedSpan.TextSpan);
    Console.WriteLine($"{sourceText.ToString(classifiedSpan.TextSpan)} - {classifiedSpan.ClassificationType} - {position.Start}:{position.End}");
}
```

The first thing to do is to grab the *SourceText* representing our string-based C# code. *SourceText* can then be fed into the syntax tree parser, producing a C# syntax tree. At this point we are halfway there, but we still need to initialize the compilation, as the semantic model is a product of the compilation pipeline. When you do that, you need to make sure all the metadata references needed for the code to compile are available – in our case only mscorlib is needed (*typeof(object).Assembly*). Finally, we can find the semantic model for a syntax tree by querying the newly created compilation.

Next, we can call the classifier, and pass in our semantic model, and the text span corresponding to the piece of code we want to classify. We use *new TextSpan(0, code.Length)*, which simply means the entire code will be classified; however it is also possible to tweak the *TextSpan* so that position is offset and length is shorter, and thus only part of the code would be submitted for classification – it all depends on the use cases.

At the end we print all the results, which should show us a nice set of classification info:

```
using - keyword - 0,0:0,5
System - namespace name - 0,6:0,12
; - punctuation - 0,12:0,13
public - keyword - 2,12:2,18
class - keyword - 2,19:2,24
MyClass - class name - 2,25:2,32
{ - punctuation - 3,12:3,13
public - keyword - 4,16:4,22
static - keyword - 4,23:4,29
void - keyword - 4,30:4,34
MyMethod - method name - 4,35:4,43
MyMethod - static symbol - 4,35:4,43
( - punctuation - 4,43:4,44
int - keyword - 4,44:4,47
value - parameter name - 4,48:4,53
) - punctuation - 4,53:4,54
{ - punctuation - 5,16:5,17
} - punctuation - 6,16:6,17
} - punctuation - 7,12:7,13
```

For the sake of completeness, let's also show how the code would look if we were to use the document-based API. In order to add a document to a workspace, we also need to create a project to hold that document. Overall, there are several ways of achieving that – one example is shown below. All the rest of the code (dealing with *sourceText* or displaying the classified spans) is the same as before.

```csharp
var sourceText = SourceText.From(code);
var projectInfo = ProjectInfo.Create(ProjectId.CreateNewId(), VersionStamp.Create(), "MyProject", "MyProject", LanguageNames.CSharp)
    .WithMetadataReferences(new[] { MetadataReference.CreateFromFile(typeof(object).Assembly.Location) });
var project = workspace.AddProject(projectInfo);
var document = workspace.AddDocument(project.Id, "MyFile.cs", sourceText);
var classifiedSpans = await Classifier.GetClassifiedSpansAsync(document, new TextSpan(0, code.Length));

foreach (var classifiedSpan in classifiedSpans)
{
    var position = sourceText.Lines.GetLinePositionSpan(classifiedSpan.TextSpan);
    Console.WriteLine($"{sourceText.ToString(classifiedSpan.TextSpan)} - {classifiedSpan.ClassificationType} - {position.Start}:{position.End}");
}
```

In this approach, we do not need to manually create a *Compilation*, because it will be implicitly created for us based on the *Project* we set up. Overall, there is really very little difference between the two APIs. Typically, when working with structured solutions and *MSBuildWorkspace*, you'd already be dealing with documents anyway and the code from the second sample would be more natural to use, while when working with standalone C# classification based on *AdhocWorkspace*, the first example would probably be less tedious.

The most obvious use case is syntax highlighting. Using the semantic classifier and the power of the compiler provides an extremely reliable and advanced way of highlighting code, taking all aspects and language features into account – especially when the typical alternative is static, regular-expression-based highlighting. This approach is now used in the highlighting features of OmniSharp.

One final thing about classification is that if you look closely at the results we produced, there is one strange thing going on. *MyMethod* at positions *4,35:4,43* is actually classified twice:

```
MyMethod - method name - 4,35:4,43
MyMethod - static symbol - 4,35:4,43
```

once as *method name* and once as *static symbol*. The second classification is a so-called “additive classification”. At the moment Roslyn only uses static symbols for additive classification, but that might change in the future. This information allows, for example, additional highlighting to be applied to static symbols (e.g. making them bold).

You can always exclude additive classifications from the result set by checking against the *ClassificationTypeNames.AdditiveTypeNames* collection:

```csharp
var filteredClassifiedSpans = classifiedSpans.Where(s => !ClassificationTypeNames.AdditiveTypeNames.Contains(s.ClassificationType));
```

In fact, this is what both OmniSharp and Visual Studio do.

You can find the source code for this blog post on GitHub.


Let's start our today's journey by looking at the famous Pauli spin matrices, as they are central to quantum computational transformations.

Spin is a property of each elementary particle and represents one of the two types of angular momentum found in quantum mechanics – intrinsic angular momentum. The value of the spin is a discrete number and, in general, we can divide particles into two families – those with half-integer spin, called fermions, and those with integer spin, called bosons. Their roles in nature are fundamentally different, with fermions making up what we consider ordinary matter (quarks or electrons), and bosons (photons, gluons or, for example, the Higgs boson) acting as force carriers. This is described by the so-called spin-statistics theorem, and is still one of the more puzzling concepts in quantum mechanics.

Electrons have a spin projection equal to $\frac{1}{2}$ (spin up) or $-\frac{1}{2}$ (spin down), which also earns them the very useful designation of being the simplest possible quantum system. If you recall the discussions around the concept of a qubit in part 1 of this series, you should immediately recognize that the nature of spin, with its two orthogonal basis states and two-component vector representation, maps perfectly onto what we defined as the prerequisites for a qubit.

In 1927, in a breakthrough step in the development of quantum mechanics, Wolfgang Pauli formalized the mathematics behind the non-relativistic theory of spin and discovered the Pauli matrices as representations of the spin operators. Overall, Pauli's contributions to modern quantum theory were immense, spearheaded of course by the famous Pauli exclusion principle, formulated in 1925, when he was only 25 years old, and for which he ultimately received the Nobel prize in physics in 1945, after a nomination from Albert Einstein. In 1928, Pauli was appointed Professor of Theoretical Physics at (my nearby) ETH Zürich, where, apart from a brief period in the USA during the Second World War, he went on to have an illustrious scientific career and become a towering figure in the history of this prestigious university – one that has produced 21 Nobel prize winners, Albert Einstein himself among them.

In 1939, Pauli's assistant at ETH, Markus Fierz, in his habilitation degree, formulated the aforementioned spin-statistics theorem, which was later refined further by Pauli himself. It is therefore of no surprise that today, the ETH Institute for Theoretical Physics, which was established upon Pauli's arrival at the university, is located at the Wolfgang-Pauli-Strasse 27 in Zürich.

In quantum mechanics, the three famous Pauli matrices are denoted by the Greek letter $\sigma$, and they are shown below.

$$\sigma_x=\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$$

$$\sigma_y=\begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix}$$

$$\sigma_z=\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$$

The presence of $i$, the imaginary number ($i = \sqrt{-1}$), in $\sigma_y$, is dictated by the mathematics of quantum mechanics. The state of a closed quantum system is a vector in a complex vector space and complex numbers are made up of real and imaginary components.
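Since these are just small matrices, their defining properties are easy to check numerically. Below is a quick plain-Python sketch (an illustration added here, not part of the original Q# samples) verifying that each Pauli matrix is Hermitian and unitary:

```python
def matmul(a, b):
    # 2x2 matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(a):
    # conjugate transpose
    return [[a[j][i].conjugate() for j in range(2)] for i in range(2)]

I = [[1, 0], [0, 1]]
sigma_x = [[0, 1], [1, 0]]
sigma_y = [[0, -1j], [1j, 0]]
sigma_z = [[1, 0], [0, -1]]

for sigma in (sigma_x, sigma_y, sigma_z):
    assert dagger(sigma) == sigma              # Hermitian: equal to its own dagger
    assert matmul(dagger(sigma), sigma) == I   # unitary: the dagger is the inverse
```

The same two helpers can be reused to check any other single qubit gate.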

Pauli matrices are of fundamental importance in quantum computing too, as they represent three of the most useful single qubit gates, and will be our main focus in this part 3 of the series.

Similarly to classical computer science, where gates can be used to assemble circuits, in quantum computing a sequence of quantum gates is usually referred to as a “quantum circuit”. The primary role of circuits is that they allow us to visualize computation models as a sequence of bit (or, in our case, qubit) operations. The analogy between classical and quantum computing works reasonably well in this case, but a word of caution is necessary. In classical computing, the gates typically correspond (sometimes in a slightly idealized form) to physical electronic devices. That is not necessarily the case in quantum computing. Depending on the implementation model of the physical quantum computer, a gate may really have a physical meaning – for example in nuclear magnetic resonance quantum computers – or it might be a purely logical concept, enacted on a stationary particle via a series of pulses – for example in trapped-ion quantum computer architectures.

In this part 3 of the series, we are going to focus on single qubit gates only – we'll be looking at multi qubit gates next time. In classical computing, single bit gates – at least in boolean circuits – can only exist in two variants: the identity gate, which leaves the bit value intact, and the $NOT$ gate, which flips the bit value. In other words, if we input $0$ into an identity gate, $0$ comes out, and if we input $1$, then $1$ comes out. Conversely, for the $NOT$ gate, an input of $0$ produces an output of $1$ and vice versa.

In quantum computing, however, as we already mentioned in previous parts, we can come up with infinitely many single qubit gates, as there are infinitely many ways of manipulating the quantum system that the qubit represents. The reason is that quantum transformations are mathematically described by matrices – more specifically, $2^n \times 2^n$ unitary matrices, where $n$ stands for the number of qubits the gate acts on – and for the single qubit case, we can construct an infinite number of unique $2 \times 2$ unitary matrices.

Single qubit quantum gates are often referred to as *elementary quantum gates*. The most general equation we can write for a quantum gate is the transformation of one quantum state $\ket{\psi}$ into another quantum state $\ket{\varphi}$ by a unitary transformation:

$$U\ket{\psi} = \ket{\varphi}$$

Quantum circuits can be visualized in a similar way to classical circuits, with simple diagrams representing ordered operations on qubits. My favorite basic tool for quantum circuits is called Bono, and it can be cloned from GitHub and run locally; however, there are many other useful circuit building tools, one of the most popular – albeit a little overwhelming at first – being Quirk.

An example is shown below:

In the circuit above, we start with a single qubit in a basis state $\ket{0}$, and apply the Hadamard gate that we discussed last time around – aptly marked by the H symbol. What follows is the act of measurement, which is denoted by its own measurement symbol. The line type is also of significance – a single line represents a qubit, while a double line represents a classical bit – after all, after the measurement we deal with a classical 0 or 1 value only.

The simplest possible quantum gate is the $2 \times 2$ identity gate, and it semantically corresponds to the behavior of the identity gate in classical computing. Given that it is two dimensional, in linear algebra it is often denoted $I_2$, while in quantum mechanics it is sometimes written as $1$. In quantum computing, however, we normally just use the letter $I$ to represent the identity gate. Mathematically it can be expressed using the matrix below:

$$I=\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$$

We can also say that the linear transformation behind the identity matrix is the identity function – a function that always returns the same value as its argument. We can write it in the following way:

$$I\ket{\psi} = \ket{\psi}$$

Another important property of the identity matrix is that if we multiply a matrix $U$ by itself and get $I$ as a result, we know that $U$ is also its own inverse.

Let's see what would happen when we apply $I$ to a qubit:

$$I\ket{\varphi} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}(\alpha\ket{0} + \beta\ket{1}) = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} \alpha \\ \beta \end{bmatrix} \\ = \alpha\ket{0} + \beta\ket{1}$$

As expected, nothing really happens – when the identity gate is applied, the qubit state is left unchanged.

In quantum computing, when representing Pauli matrices, it is common to skip the quantum mechanical $\sigma$ notation and refer to the matrices (and thus, the quantum gates they represent) using the letters $X$, $Y$ and $Z$ only, which gives us:

$$X=\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$$

$$Y=\begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix}$$

$$Z=\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$$

As already mentioned in part 1 of this series, we are going to try to avoid imaginary and complex numbers wherever we can (it won't always be possible) – and in this case we can simplify $Y$, so that the relationship between $Y$ and $\sigma_y$ is defined as:

$$Y=\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}$$

$$\sigma_y = iY$$

And that's the approach we will take here.

Each of the matrices can also be written in Dirac notation, as a sum of outer products of the basis vectors. This might seem somewhat confusing at first but quickly becomes very intuitive:

$$X=\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} = \ket{1}\bra{0} + \ket{0}\bra{1}$$

$$Y=\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} = \ket{1}\bra{0} - \ket{0}\bra{1}$$

$$Z=\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} = \ket{0}\bra{0} - \ket{1}\bra{1}$$
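These identities can be verified with a few lines of plain Python – a sketch with assumed helper functions (`outer`, `add`, `sub`), checking the $X$ and $Z$ decompositions above:

```python
def outer(ket, bra):
    # |ket><bra| as a 2x2 matrix (bra entries are conjugated)
    return [[ket[i] * bra[j].conjugate() for j in range(2)] for i in range(2)]

def add(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def sub(a, b):
    return [[a[i][j] - b[i][j] for j in range(2)] for i in range(2)]

ket0 = [1, 0]
ket1 = [0, 1]

X = [[0, 1], [1, 0]]
Z = [[1, 0], [0, -1]]

assert add(outer(ket1, ket0), outer(ket0, ket1)) == X   # |1><0| + |0><1|
assert sub(outer(ket0, ket0), outer(ket1, ket1)) == Z   # |0><0| - |1><1|
```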

We shall now dive deeper into the mathematical consequences of applying each of the three Pauli gates to such a quantum system, starting with $X$.

The Pauli $X$ gate is often referred to as the **bit flip gate** because it ends up swapping the probability amplitudes $\alpha$ and $\beta$ with each other. If the qubit is already in a known state $\ket{0}$ or $\ket{1}$ – in other words, one of the amplitudes $\alpha$ or $\beta$ (and thus the corresponding probability) is zero – then the gate acts like the $NOT$ gate in classical computing, flipping the value to the opposite one.

Since we already know that we can write the state of our qubit as a linear combination of the basis vectors, with $\alpha$ and $\beta$ being the probability amplitudes:

$$\ket{\varphi} = \alpha\ket{0} + \beta\ket{1}$$

we can now express the $X$ gate acting upon a qubit using simple algebra:

$$X\ket{\varphi} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}(\alpha\ket{0} + \beta\ket{1}) = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} \alpha \\ \beta \end{bmatrix} \\ = \beta\ket{0} + \alpha\ket{1}$$

All of this, especially the theoretical symmetry to the classical $NOT$ gate, makes the Pauli $X$ gate one of the most important and easy to understand quantum gates.

On the other hand, the Pauli $Z$ gate cannot really be conceptually mapped to a classical computing gate, making it a bit more difficult to comprehend. The mathematics of it is not very complicated though, as shown below:

$$Z\ket{\varphi} = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}(\alpha\ket{0} + \beta\ket{1}) = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}\begin{bmatrix} \alpha \\ \beta \end{bmatrix} \\ = \alpha\ket{0} - \beta\ket{1}$$

In other words, we started with a state of $\alpha\ket{0} + \beta\ket{1}$ and ended with a state of $\alpha\ket{0} - \beta\ket{1}$ – the only difference being the sign. The result here may at first glance be a little confusing, especially as we already extensively discussed that the probability amplitudes can be used to obtain the actual classical probabilities of receiving a state $\ket{0}$ or $\ket{1}$ using the Born rule – by squaring the modulus of the amplitude $|\alpha|^2$ or $|\beta|^2$. And naturally, the change of the sign we encountered when applying the Pauli $Z$ gate has no impact on the classical probabilities. The $Z$ gate is thus referred to as the **phase flip gate**, as it flips the **phase** of the qubit, leaving the actual classical probabilities intact. A simple way of remembering the effect of the $Z$ gate is that it leaves $\ket{0}$ intact and multiplies $\ket{1}$ by $-1$. While this has no impact on probabilities of reading a classical 0 or 1 out of the qubit, it has some creative application scenarios in quantum algorithms.

Finally, the $Y$ gate is both a **bit flip gate** and a **phase flip gate** at the same time – it swaps the amplitudes and flips the sign; for our simplified real $Y$, it acts exactly like a $Z$ gate followed by an $X$ gate.

$$Y\ket{\varphi} = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}(\alpha\ket{0} + \beta\ket{1}) = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} \alpha \\ \beta \end{bmatrix} \\ = -\beta\ket{0} + \alpha\ket{1}$$
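To make the three behaviors concrete, here is a small plain-Python sketch (the amplitude values 0.6 and 0.8 are arbitrary, chosen only because they are normalized) applying the matrices above to an amplitude vector:

```python
def apply(m, v):
    # multiply a 2x2 matrix by a 2-component amplitude vector
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

X = [[0, 1], [1, 0]]
Z = [[1, 0], [0, -1]]
Y = [[0, -1], [1, 0]]   # the simplified real Y used throughout this post

alpha, beta = 0.6, 0.8  # arbitrary normalized amplitudes: 0.36 + 0.64 = 1
assert apply(X, [alpha, beta]) == [beta, alpha]    # bit flip: amplitudes swapped
assert apply(Z, [alpha, beta]) == [alpha, -beta]   # phase flip: sign of beta
assert apply(Y, [alpha, beta]) == [-beta, alpha]   # both effects at once
```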

The three Pauli gates are each their own inverse, meaning the following holds (for the simplified real $Y$, this is true only up to a global phase, since $Y^2 = -I$; for the true $\sigma_y$ it holds exactly):

$$X^2 = Y^2 = Z^2 = I$$

In practice, it means that applying the same gate twice in a row undoes the effect of the first transformation. Additionally, Pauli gates are closely related to the Hadamard gate that we already discussed last time. We will not expand the calculations here (although I would encourage you to do so to practice some linear algebra) – but the following relations hold:

$$X = HZH$$

$$Y = -HYH$$

$$Z = HXH$$
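These relations, together with the self-inverse property, can be verified numerically. The sketch below (plain Python, added here as an illustration) uses the full complex $\sigma_y$ for $Y$, so that $Y^2 = I$ holds exactly:

```python
from math import sqrt

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def close(a, b, eps=1e-12):
    # compare matrices entry-wise, tolerating float rounding
    return all(abs(a[i][j] - b[i][j]) < eps for i in range(2) for j in range(2))

def neg(a):
    return [[-x for x in row] for row in a]

h = 1 / sqrt(2)
H = [[h, h], [h, -h]]
I2 = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]   # full complex sigma_y
Z = [[1, 0], [0, -1]]

for P in (X, Y, Z):
    assert close(matmul(P, P), I2)             # each Pauli gate is its own inverse

assert close(matmul(H, matmul(Z, H)), X)       # X = HZH
assert close(matmul(H, matmul(X, H)), Z)       # Z = HXH
assert close(matmul(H, matmul(Y, H)), neg(Y))  # Y = -HYH
```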

Pauli gates and the identity gate have fundamental meaning in quantum information theory in the sense that they can be used as building blocks to make up any other transformation. We can algebraically express any quantum computational linear transformation in the two dimensional complex vector space as a product of a complex unit and a linear combination of the three Pauli matrices and the identity matrix. It means that we can always find $\alpha$, $\beta$, $\gamma$ and $\delta$, as well as $\theta$, to represent any unitary transformation $U$:

$$U = e^{i\theta}(\alpha{I} + \beta{X} + \gamma{Y} + \delta{Z})$$

where $\alpha$ is a real number, $\beta$, $\gamma$ and $\delta$ are complex numbers, $\theta$ is larger than or equal to $0$ and smaller than $2\pi$, and $e$ stands for Euler's number.
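As a quick illustration of this idea – using the standard trace inner product, a technique assumed here rather than taken from the text above – we can expand the Hadamard gate in the Pauli basis and recover $H = (X + Z)/\sqrt{2}$:

```python
from math import sqrt

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(a):
    return a[0][0] + a[1][1]

h = 1 / sqrt(2)
H = [[h, h], [h, -h]]
I2 = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]
Z = [[1, 0], [0, -1]]

# The Hermitian Paulis (plus I) are orthogonal under the trace inner product,
# with Tr(P*P) = 2, so the coefficient of U along P is Tr(P*U)/2.
coeffs = {name: trace(matmul(P, H)) / 2
          for name, P in [("I", I2), ("X", X), ("Y", Y), ("Z", Z)]}

# H = (X + Z)/sqrt(2): only the X and Z coefficients are non-zero
assert abs(coeffs["I"]) < 1e-12 and abs(coeffs["Y"]) < 1e-12
assert abs(coeffs["X"] - h) < 1e-12 and abs(coeffs["Z"] - h) < 1e-12
```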

We've spent quite some time looking at the theory, but now we are ready to go back to Q# programming. All three Pauli gates are available as operations in the $Microsoft.Quantum.Intrinsic$ namespace. These are:

- $X (qubit : Qubit)$
- $Y (qubit : Qubit)$
- $Z (qubit : Qubit)$

The identity gate is also available – mainly for completeness, but also because it is sometimes useful when an algorithm requires a no-effect action to be performed on a qubit. It is probably of no surprise to anyone that its signature is $I (qubit : Qubit)$.

A sample Q# identity operation is shown below.

```qsharp
operation Identity(count : Int) : Int {
    mutable resultsTotal = 0;
    using (qubit = Qubit()) {
        for (idx in 1..count) {
            I(qubit);
            let result = MResetZ(qubit);
            set resultsTotal += result == One ? 1 | 0;
        }
        return resultsTotal;
    }
}
```

The operation, similarly to the previous examples in earlier parts of this series, takes an integer input indicating the number of times the measurement should be repeated. A single qubit is allocated, and then in a loop the $I$ gate is applied and a regular measurement along the Z axis (in the computational basis) is performed. The qubit is then reset to its original state, as mandated by Q# (the $MResetZ$ operation guarantees the reset). We keep track of the number of ones received by keeping a running total of the measurements. Naturally, the number of iterations minus the number of ones gives us the number of zeros obtained in the measurement.

Below is the C# driver used to orchestrate this Q# operation. We will use the same driver for all the snippets that follow, so I will not repeat it again (only the invoked operation name would differ). The quantum operation is run 1000 times to give us a better statistical result set.

```csharp
class Driver
{
    static async Task Main(string[] args)
    {
        using var qsim = new QuantumSimulator();
        var iterations = 1000;
        Console.WriteLine($"Running qubit measurement {iterations} times.");

        var results = await Identity.Run(qsim, iterations);
        Console.WriteLine($"Received {results} ones.");
        Console.WriteLine($"Received {iterations - results} zeros.");
    }
}
```

We should get the following output when running this program:

```
Running qubit measurement 1000 times.
Received 0 ones.
Received 1000 zeros.
```

This is of course hardly surprising. The default state of the qubit is $\ket{0}$, which, when measured, is guaranteed to produce a classical bit 0. On top of that, applying the $I$ gate doesn't have any effect, other than the fact that the qubit was indeed acted upon – so the 100% rate of receiving 0s is quite expected.

We will now turn our attention to the bit flip. The Q# code is identical to the one above, except we will use the $X$ operation.

```qsharp
operation Bitflip(count : Int) : Int {
    mutable resultsTotal = 0;
    using (qubit = Qubit()) {
        for (idx in 1..count) {
            X(qubit);
            let result = MResetZ(qubit);
            set resultsTotal += result == One ? 1 | 0;
        }
        return resultsTotal;
    }
}
```

Running this, using the same type of C# driver, would produce the following result:

```
Running qubit measurement 1000 times.
Received 1000 ones.
Received 0 zeros.
```

This is also very much in line with our expectations. The default state of the qubit is $\ket{0}$, which the $X$ gate flipped to $\ket{1}$ – and that, when measured, produces a bit value of 1 100% of the time.

The next thing worth experimenting with is to try to chain several gates together. For example, we already made a claim – not supported by any algebraic calculation though – that the following relation holds:

$$X = HZH$$

We could verify this easily using Q#. Our operation would look as follows:

```qsharp
operation HZH(count : Int) : Int {
    mutable resultsTotal = 0;
    using (qubit = Qubit()) {
        for (idx in 1..count) {
            H(qubit);
            Z(qubit);
            H(qubit);
            let result = MResetZ(qubit);
            set resultsTotal += result == One ? 1 | 0;
        }
        return resultsTotal;
    }
}
```

This code is yet again similar to the previous snippets, except this time we apply a chain of gates on the same qubit – $H – Z – H$. Running this code, with the same type of C# driver as before, produces the following result:

```
Running qubit measurement 1000 times.
Received 1000 ones.
Received 0 zeros.
```

The result is identical to running the bit flip $X$ gate, so we have really managed to experimentally verify that $X = HZH$.

As we already mentioned, there are infinitely many possibilities for constructing single qubit gates. For the purpose of this series, the 5 we already discussed – Hadamard gate $H$, Pauli gates $X$, $Y$, $Z$ and identity gate $I$ – are most important, and we will be using them repeatedly in the next parts. However, there are several other common and interesting gates, most importantly the so-called rotation gates.

The three rotation gates $R_x$, $R_y$ and $R_z$ are all generalizations of the Pauli $X$, $Y$ and $Z$ gates. The names of these gates come from the fact that all single qubit transformations can be visualized as rotations around the x, y and z axes of the so-called Bloch sphere. The Bloch sphere, named after another physicist connected to ETH in Zürich, Felix Bloch, is commonly used to represent graphically (geometrically) the state of a two-dimensional quantum system, such as a qubit.

$R_x$ , $R_y$ and $R_z$ are often used as part of quantum algorithms and can be utilized, for example, to prove Bell's theorem. They are represented by the matrices below:

$$R_x=\begin{bmatrix} \cos(\frac{\varphi}{2}) & -i\sin(\frac{\varphi}{2}) \\ -i\sin(\frac{\varphi}{2}) & \cos(\frac{\varphi}{2}) \end{bmatrix}$$

$$R_y=\begin{bmatrix} \cos(\frac{\varphi}{2}) & -\sin(\frac{\varphi}{2}) \\ \sin(\frac{\varphi}{2}) & \cos(\frac{\varphi}{2}) \end{bmatrix}$$

$$R_z=\begin{bmatrix} e^{-i\frac{\varphi}{2}} & 0 \\ 0 & e^{i\frac{\varphi}{2}} \end{bmatrix}$$

We mentioned that they are generalizations of the Pauli gates, and looking at the matrices closely, the relation between the rotation gates and the Pauli gates should become a little more apparent. For example, the $R_z$ gate becomes the $I$ gate when $\varphi=0$, and it becomes the $Z$ gate – up to a global phase – when $\varphi=\pi$.
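We can check that claim numerically, assuming the standard convention $R_z(\varphi) = \mathrm{diag}(e^{-i\varphi/2}, e^{i\varphi/2})$ – a quick plain-Python sketch:

```python
from cmath import exp, pi

def rz(phi):
    # standard convention: R_z(phi) = diag(e^{-i*phi/2}, e^{+i*phi/2})
    return [[exp(-1j * phi / 2), 0], [0, exp(1j * phi / 2)]]

def close(a, b, eps=1e-12):
    return all(abs(a[i][j] - b[i][j]) < eps for i in range(2) for j in range(2))

I2 = [[1, 0], [0, 1]]
Z = [[1, 0], [0, -1]]

assert close(rz(0), I2)                                     # R_z(0) = I
# R_z(pi) = diag(-i, i) = -i * Z, i.e. Z up to the global phase -i
assert close([[1j * v for v in row] for row in rz(pi)], Z)
```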

Similarly to Pauli gates, all three of these rotation gates are available in Q# via the $Rx (theta : Double, qubit : Qubit)$, $Ry (theta : Double, qubit : Qubit)$ and $Rz (theta : Double, qubit : Qubit)$ operations in the $Microsoft.Quantum.Intrinsic$ namespace, where $theta$ represents the desired rotation angle in radians.

Let's try invoking $R_x$ with a rotation angle of 45.0 (note that Q# expects the angle in radians, so this is 45 radians, not 45 degrees) and see the effects:

```qsharp
operation Rx45(count : Int) : Int {
    mutable resultsTotal = 0;
    using (qubit = Qubit()) {
        for (idx in 1..count) {
            Rx(45.0, qubit);
            let result = MResetZ(qubit);
            set resultsTotal += result == One ? 1 | 0;
        }
        return resultsTotal;
    }
}
```

Executing this code produces the following result:

```
Running qubit measurement 1000 times.
Received 251 ones.
Received 749 zeros.
```

So a rotation by 45 radians around the X axis distributes the probabilities of obtaining one or zero roughly 0.24-0.76 (the expected probability of measuring 1 is $\sin^2(\frac{45}{2}) \approx 0.237$) – when measuring in the computational basis along the Z axis.
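The arithmetic behind that result can be sketched in a couple of lines – assuming, as the Q# API documents, that $Rx$ takes its angle in radians, the probability of measuring 1 after applying $R_x(\theta)$ to $\ket{0}$ is $\sin^2(\theta/2)$:

```python
from math import sin

theta = 45.0                        # the literal passed to Rx in the Q# snippet
p_one = sin(theta / 2) ** 2         # P(measuring 1) starting from |0>
assert abs(p_one - 0.237) < 0.005   # ~237 ones expected per 1000 shots
```

The observed 251 ones out of 1000 runs is statistically consistent with this expectation.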

In this post, we discussed in depth several important quantum computing gates – Pauli gates $X$, $Y$, $Z$, as well as the identity gate $I$. In addition to that, we looked at the rotation gates too, as generalizations of the Pauli gates. In the previous post in this series, we already had a look at the Hadamard gate $H$.

All of this is still quite basic in terms of what we can do at the Q# code level, but we are slowly building up the necessary knowledge and amassing building blocks that will be extremely helpful when putting together quantum algorithms.

Equipped with this knowledge, we are ready to have a look at multi qubit gates and the algebraic foundations behind them – which we will do in the next part. We are also going to look at one of the more bizarre quantum phenomena – entanglement. After that, we will be ready to start putting it all to good use by exploring some quantum algorithms.

Support for decompilation was one of the most requested features for OmniSharp, and we are thrilled to finally be able to ship it. It is based on the wonderful open source .NET decompiler, ILSpy – the same one that powers the decompilation features of Visual Studio. In fact, decompilation support in OmniSharp is very much modelled after how things work in Visual Studio.

Big words of gratitude are also due to the ILSpy maintainers – Christoph and Siegfried – who pushed my lazy self throughout the process to finally get things done and helped us during the implementation phase.

The feature, like many other OmniSharp features, is opt-in for the time being, meaning it is disabled by default. If you'd like to enable it, you need to add the relevant OmniSharp configuration option:

- to enable it globally, use the *%USERPROFILE%/.omnisharp/omnisharp.json* file
- to enable it for a specific project only, use an *omnisharp.json* at the root of your workspace (typically at the root of the repository, next to the solution file)

If the file doesn't exist, just create it. Then the following setting is needed inside:

```json
{
    "RoslynExtensionsOptions": {
        "enableDecompilationSupport": true
    }
}
```

This is currently the only way of enabling decompilation – in the future, the C# Extension for VS Code will also get its own extension-level setting, that you could just add into the VS Code settings directly without having to deal with *omnisharp.json*.

Once enabled, decompilation support is a drop-in replacement for navigating to metadata. In other words, when you try to go to the definition of a symbol that is not part of your source, but rather is externally imported from a DLL (normally a NuGet package reference or a direct DLL reference), instead of showing you the basic metadata view with just member names and signatures, we'll attempt to perform decompilation instead, and drop you into the relevant declaration location.

This is illustrated in the animation below – with an example of some APIs from the Newtonsoft.Json NuGet package reference.

There are some known limitations at the moment around reference assemblies – those are currently not resolved to the implementation assemblies. This is, I believe, on par with Visual Studio behavior though, which also doesn't do that (at least not always).

Hope you will find the feature useful!


Last time around, we briefly mentioned that when a qubit is in superposition, “it is both 0 and 1 at the same time”. This is a rather simplistic and not necessarily accurate description, but one that is commonly used in introductory texts and popular science articles, as it manages to convey the weirdness of conceptualizing quantum states.

We have a lot more room here, however, so we have the luxury of emphasising that such a statement is really stripped of any mathematical or physical substance. Thankfully, we have already looked a little at the mathematics behind the qubit, so we are now well positioned to describe superposition more accurately. Namely, we know that the quantum state of a qubit is a **linear combination** of $\ket{0}$ and $\ket{1}$.

In other words, given a qubit state that we already discussed:

$$\ket{\varphi} = \alpha\ket{0} + \beta\ket{1}$$

we can say that a qubit is in a superposition when both amplitudes $\alpha$ and $\beta$ (and thus both probabilities $|\alpha|^2$ and $|\beta|^2$) are non-zero.
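As a tiny numeric illustration of this definition (an added sketch, not from the original text), consider the equal superposition, where both amplitudes are $1/\sqrt{2}$:

```python
from math import sqrt, isclose

# The equal superposition (|0> + |1>)/sqrt(2): both amplitudes are non-zero,
# and the Born rule gives the measurement probabilities |alpha|^2 and |beta|^2.
alpha, beta = 1 / sqrt(2), 1 / sqrt(2)
p0, p1 = abs(alpha) ** 2, abs(beta) ** 2

assert isclose(p0, 0.5) and isclose(p1, 0.5)
assert isclose(p0 + p1, 1.0)   # the normalization constraint
```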

The ontological aspect of the superposition, and in a broader sense, the entire quantum theory, is a lot more blurry and depends on the epistemology we would choose to subscribe to.

In 1935, Austrian physicist Erwin Schrödinger published one of the most famous papers in the history of physics, “Die gegenwärtige Situation in der Quantenmechanik” (English translation is available here), in which he proposed a thought experiment around …a cat.

We'll let Schrödinger himself explain the experiment:

“A cat is penned up in a steel chamber, along with the following diabolical device (which must be secured against direct interference by the cat): in a Geiger counter there is a tiny bit of radioactive substance, so small that perhaps in the course of one hour one of the atoms decays, but also, with equal probability, perhaps none; if it happens, the counter tube discharges and through a relay releases a hammer which shatters a small flask of hydrocyanic acid. If one has left this entire system to itself for an hour, one would say that the cat still lives if meanwhile no atom has decayed. The first atomic decay would have poisoned it. The $\psi(x,t)$ function for the entire system would express this by having in it the living and the dead cat (pardon the expression) mixed or smeared out in equal parts.”

The cat has since gone on to establish itself as one of the symbols of quantum theory and has gained a firm position in popular culture. There have been countless references to the dead-and-alive-at-the-same-time cat in movies, books, video games and many other areas of our society. And just so that you see that physicists can have a good sense of humor too, have a quick look at one of the most popular quantum mechanics undergraduate textbooks – “Introduction to Quantum Mechanics” by David Griffiths – which has a cat that is alive on the front cover, and a cat that is dead on the back cover.

There are plenty of competing interpretations of quantum mechanics, which can be used to explain or at least interpret the paradox, the most prevalent of which is the “Copenhagen interpretation”, championed by Niels Bohr and several other physicists that were close to him, such as Werner Heisenberg. In the view adopted in the “Copenhagen interpretation”, the superposition or, for that matter, the wave function describing a quantum object, does not describe the nature of reality in any way, nor are we allowed to reason about the reality behind a quantum object until it's actually measured. In other words, it is impossible to disassociate the reality responsible for the quantum phenomena from the measurement itself, as we can only observe trace effects of quantum objects on the measurement apparatus and only infer their existence that way. This is fundamentally different from classical macro-scale physics, where systems, their state and behavior can be independently observed.

Within that view, superposition is a purely mathematical concept describing relations between various probability amplitudes of finding a particle at a given position or in a given quantum state. Thus, superposition is merely a tool that allows us to express our probabilistic expectations for the measurements. Within that thought framework, the cat paradox is no longer a paradox.

As Arkady Plotnitsky puts it in his excellent publication, with the elegance far superior to the parlance of this blog post:

“if (…) quantum states are seen (…), as defined by the $\psi$-function, strictly as mathematical entities encoding and enabling expectation catalogues concerning the outcome of possible experiments – rather than describing the behavior of quantum systems, in particular between experiments – Schrödinger's thought experiment presents no problem”.

Now that we have established (or rather, in the spirit of Copenhagen, stepped around) the ontological basics, let's look at how we can put our qubit in a superposition and what are the quantum computational consequences of doing so. Of course superposition is one of the critical aspects of quantum computing – without it, the qubit could only represent the two binary states of *0* and *1*, which would strip it of any possible advantages over classical computing.

In quantum mechanics, arbitrary transformations of the quantum state are not possible. Instead, time evolution of a quantum state is always represented by **linear unitary transformations of the associated vector space**. This is taken as an axiom, and is one of the postulates of quantum mechanics. The transformations are represented by matrices, and for a given transformation matrix $U$ to be unitary, its Hermitian adjoint $U^\dagger$ (a generalized version of the conjugate transpose $U^*$ from linear algebra) must be its inverse. In other words, $U$ must satisfy the condition:

$$U^\dagger U = I$$

where *I* is the identity matrix. Algebraically speaking, in order to transform the state of a qubit into a **uniformly distributed superposition**, we need to linearly transform it using the Hadamard transformation, represented by the Hadamard matrix, named after the French mathematician Jacques Hadamard.

As we already mentioned, quantum transformations are described by matrices – more specifically, $2^n * 2^n$ sized unitary matrices, where $n$ stands for the number of qubits the gate acts on.

In quantum computing, quantum state transformations are represented – similarly to the analogous concept from classical computing – by computational gates. We will look in detail at various gates and circuits in the next post in this series; for now we will just say that the Hadamard gate acts on a single qubit, hence its corresponding matrix is 2×2 in size. The Hadamard matrix *H* is shown below:

$$H=\begin{bmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} &-\frac{1}{\sqrt{2}} \end{bmatrix}$$

Since $\frac{1}{\sqrt{2}}$ appears in each matrix element, we know from linear algebra that we can factor it out and simplify:

$$H=\frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 &-1 \end{bmatrix}$$

Our Hadamard matrix is not just unitary, it is also self-adjoint (Hermitian) – it is equal to its own conjugate transpose. In other words:

$$H = H^\dagger$$

This means that the following holds:

$$HH^\dagger = HH = I$$
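These two properties can be verified numerically. Here is a quick NumPy sketch (our own addition, not from the original post):

```python
import numpy as np

# The Hadamard matrix from the text.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# Hermitian (self-adjoint): H equals its own conjugate transpose.
assert np.allclose(H, H.conj().T)

# Unitary: H† H = I, which for a Hermitian H reduces to H H = I.
assert np.allclose(H @ H, np.eye(2))
print("H is Hermitian and unitary")
```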

We are starting to get sidetracked by linear algebra again, so let's go back to our qubit. Suppose we start with a qubit prepared as $\ket{0}$. At this point we know its classical bit value would be *0*, because $\alpha = 1$ and $\beta = 0$, so the probability of measuring a *0* is 100%. We now apply the Hadamard gate by multiplying our ket by the Hadamard matrix.

$$H(\ket{0}) = \begin{bmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} &-\frac{1}{\sqrt{2}} \end{bmatrix}\begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{bmatrix} = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ 0 \end{bmatrix} + \frac{1}{\sqrt{2}}\begin{bmatrix} 0 \\ 1 \end{bmatrix}$$

Similarly, for $\ket{1}$:

$$H(\ket{1}) = \begin{bmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} &-\frac{1}{\sqrt{2}} \end{bmatrix}\begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} \end{bmatrix} = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ 0 \end{bmatrix} - \frac{1}{\sqrt{2}}\begin{bmatrix} 0 \\ 1 \end{bmatrix}$$

The above representation is usually more readable and understandable when beginning to work with quantum computing, especially without deeper background in physics. That said, we can also write this in a more compact Dirac notation:

$$H(\ket{0}) = \frac{1}{\sqrt{2}}\ket{0} + \frac{1}{\sqrt{2}}\ket{1} = \frac{1}{\sqrt{2}}(\ket{0}+\ket{1})$$

$$H(\ket{1}) = \frac{1}{\sqrt{2}}\ket{0} - \frac{1}{\sqrt{2}}\ket{1} = \frac{1}{\sqrt{2}}(\ket{0}-\ket{1})$$

The difference between the two cases (starting the process with $\ket{0}$ vs $\ket{1}$) is the amplitude sign (+ vs -), but notice that in order to obtain classical probabilities we need to square the amplitudes anyway, and therefore we are **equally likely (50%)** to get $\ket{0}$ and $\ket{1}$ in both cases. In other words, the Hadamard gate has created a uniformly distributed superposition for us in both situations.
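The same calculation can be replayed numerically. A small NumPy sketch (our own, not from the post) applies H to both basis states and squares the resulting amplitudes:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

for name, state in [("H|0>", H @ ket0), ("H|1>", H @ ket1)]:
    probs = np.abs(state) ** 2  # Born rule: squared amplitudes
    print(name, "-> probabilities", probs)

# Both states yield 50/50, despite the differing amplitude signs.
```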

Another interesting property of the Hadamard gate is that, like all quantum gates, it is reversible. That is not surprising, since gates are unitary operators, which means that the property is guaranteed by the underlying mathematics. Thus, applying the Hadamard gate again returns us to the state we started from. This is a property that does not apply to all classical computing gates – for example, AND or XOR are not reversible.

In mathematical terms, we can verify it in the following way (going back to $\ket{0}$):

$$\begin{bmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} &-\frac{1}{\sqrt{2}} \end{bmatrix}\begin{bmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$$

and (going back to $\ket{1}$):

$$\begin{bmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} &-\frac{1}{\sqrt{2}} \end{bmatrix}\begin{bmatrix} \frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$$

After a long and winding road, we have finally arrived at the happy place that allows us to write some Q# code. We can use the example from the previous blog post as the starting point – remember that we prepared a qubit there, and then measured its value. As a reminder, all of that is shown again below (including the related C# driver code).

```qsharp
open Microsoft.Quantum.Canon;
open Microsoft.Quantum.Intrinsic;
open Microsoft.Quantum.Measurement;

operation MeasureQubits(count : Int) : Int {
    mutable resultsTotal = 0;
    using (qubit = Qubit()) {
        for (idx in 1..count) {
            let result = MResetZ(qubit);
            set resultsTotal += result == One ? 1 | 0;
        }
        return resultsTotal;
    }
}
```

```csharp
class Driver
{
    static async Task Main(string[] args)
    {
        using var qsim = new QuantumSimulator();
        var repeats = 1000;
        Console.WriteLine($"Running qubit measurement {repeats} times.");
        var results = await MeasureQubits.Run(qsim, repeats);
        Console.WriteLine($"Received {results} ones.");
        Console.WriteLine($"Received {repeats - results} zeros.");
    }
}
```

The code prepares a single qubit and measures it in the Pauli Z basis, repeating this a given number of times (1000) and keeping track of the total tally of 0s and 1s measured. As we already determined last time, such a setup will yield 100% 0s, as the qubit is in the base state $\ket{0}$, and when we measure it in the Pauli Z basis we are guaranteed to get a *0*.

In order to create a uniformly distributed superposition, we can sneak the Hadamard gate, represented by the aptly named *H()* operation in Q#, right into the exact same code snippet – just before we measure the qubit. This is shown next.

```qsharp
open Microsoft.Quantum.Canon;
open Microsoft.Quantum.Intrinsic;
open Microsoft.Quantum.Measurement;

operation MeasureQubits(count : Int) : Int {
    mutable resultsTotal = 0;
    using (qubit = Qubit()) {
        for (idx in 1..count) {
            H(qubit);
            let result = MResetZ(qubit);
            set resultsTotal += result == One ? 1 | 0;
        }
        return resultsTotal;
    }
}
```

If we now run this code using the same C# driver code as we had last time, we should see a result close to this:

```
Running qubit measurement 1000 times.
Received 498 ones.
Received 502 zeros.
```

We can pause here for a moment, and marvel at our astonishing achievement. Sure, this is running on a quantum simulator only, but should this Q# code be deployed to a real Q# compatible quantum computer (which, hopefully, will be possible soon using Azure Quantum), it would, in principle, work the same way. So let's squint our eyes for a moment and imagine that it is indeed a real quantum computer that we are interacting with. What we have tapped into here, is true, nature-guaranteed randomness. The bits are fully random, with 50% chance of being 0 or 1, and those probabilities are guaranteed by the laws of quantum mechanics and the underlying mathematics we already looked at.
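For intuition, the statistics of this experiment can also be reproduced classically with a quick Monte Carlo sketch (pseudo-random, so unlike the quantum version it is only *simulated* randomness; the seed and variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(seed=42)  # seeded for reproducibility

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
state = H @ np.array([1.0, 0.0])      # H|0>
p_one = np.abs(state[1]) ** 2         # Born rule: probability of measuring 1

shots = 1000
ones = int(rng.binomial(shots, p_one))  # sample 1000 measurement outcomes
print(f"Running qubit measurement {shots} times.")
print(f"Received {ones} ones.")
print(f"Received {shots - ones} zeros.")
```

With `p_one` equal to 0.5, the tallies land close to a 50/50 split, mirroring the simulator output above.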

The true randomness right at the heart of our reality might be very difficult to make peace with. In a 1926 letter to Max Born, Albert Einstein, who couldn't agree with the probabilistic core of quantum mechanics or Heisenberg's uncertainty principle, and was convinced the ultimate nature of reality is fundamentally deterministic, wrote:

“The theory produces a good deal but hardly brings us closer to the secret of the Old One. I am at all events convinced that He does not play dice.” (org. “Die Theorie liefert viel, aber dem Geheimnis des Alten bringt sie uns kaum näher. Jedenfalls bin ich überzeugt, daß der nicht würfelt.”).

We can close off this post by doing something useful, for a change. Let's leverage quantum superposition to build a true random number generator in Q#. At this point, this should not be very surprising to anyone, given we just managed to generate random bits. As all computing students know, once we have enough bits we can form numbers out of them – for example 16-, 32- or 64-bit integers.

The final piece of our Q# code does just that: it generates an array of 32 random bits, which we can use to construct a truly random number.

```qsharp
operation RandomNumberGenerator() : Bool[] {
    mutable randomBits = new Bool[32];
    for (idx in 0..31) {
        using (qubit = Qubit()) {
            H(qubit);
            let result = MResetZ(qubit);
            set randomBits w/= idx <- result == One;
        }
    }
    return randomBits;
}
```

The code is very similar to the code we used earlier, and the Hadamard gate is of course at the center of all the interesting action, guaranteeing a uniformly distributed superposition. It is worth briefly mentioning the somewhat esoteric syntax used for inserting elements into an array: *set array w/= idx <- value*. To be more precise, this is in fact a copy-and-update expression, since arrays in Q# are immutable. Therefore, this is really not the most efficient way of keeping track of our random 32 bits, but it was a good occasion to introduce this useful language construct.

Finally, the C# driver to run the code and then convert the random bits into a 32-bit unsigned integer looks like this:

```csharp
class Driver
{
    static async Task Main(string[] args)
    {
        using var qsim = new QuantumSimulator();
        var randomBits = await RandomNumberGenerator.Run(qsim);
        var bitString = string.Join("", randomBits.Select(x => x ? 1 : 0));
        Console.WriteLine($"Generated random bit string: {bitString}");
        Console.WriteLine($"Generated random uint32: {Convert.ToUInt32(bitString, 2)}");
    }
}
```

The output of this code would be something like:

```
Generated random bit string: 10011111001000010110000101100110
Generated random uint32: 2669764966
```
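The bits-to-integer conversion at the heart of the driver is plain base-2 parsing. A Python sketch of the same pipeline (with a pseudo-random coin standing in for the measured qubits, and an illustrative seed of our own choosing):

```python
import random

random.seed(7)  # illustrative seed, not part of the original program

# 32 fair coin flips stand in for the 32 measured qubits.
random_bits = [random.random() < 0.5 for _ in range(32)]

bit_string = "".join("1" if b else "0" for b in random_bits)
number = int(bit_string, 2)  # same conversion as Convert.ToUInt32(bitString, 2)
print(f"Generated random bit string: {bit_string}")
print(f"Generated random uint32: {number}")
```

Any 32-bit string parsed this way lands in the unsigned 32-bit range, 0 to 2^32 - 1.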

And there we have it – we have achieved something useful with a quantum computer. In fact, if we think about it, it is something no classical computer can ever claim – a true random number, mathematically guaranteed and underpinned by the basic laws of nature.

In this part 2 of the blog post series we had a look at the epistemology, mathematics and Q# code behind superposition. We discussed the algebraic meaning of quantum transformations and also briefly touched upon the concept of quantum gates. Finally, we managed to unsettle Albert Einstein a bit and created our first somewhat useful program in Q#.

In the next post in this series we will be looking in more details at quantum gates and attempt to discuss the important ones.

While it's still early days, we want OmniSharp users to be able to work with .NET 5 as soon as possible, so we have just added support for .NET 5 in OmniSharp.

Let me quickly walk you through what you need to do to take advantage of that.

The magic number is 1.34.15 – that's the build of OmniSharp that supports .NET 5, and it was published on 25 March. If you are consuming OmniSharp in an editor other than VS Code, you need to ensure that you use at least that version.

If you are in VS Code, with the C# Extension installed, you could – more conveniently than juggling the OmniSharp builds – install the pre-release 1.21.16 of the C# Extension, which can be found here. You'd need to download the VSIX and follow the simple instructions.

That pre-release build 1.21.16 already contains OmniSharp 1.34.15. This build should ship as stable into the VS Code Marketplace at some point next week (starting on Monday 30 March).

In principle, to support .NET 5, we need two things:

- the latest *NuGet.** packages (*5.6.0*) – to be able to correctly handle the *net5.0* TFM (introduced in preview 2)
- *MsBuild 16.5.0*

When OmniSharp starts it needs to choose the most appropriate MsBuild to use. It will use one of the two options:

- use its own bundled minimal MsBuild (which, in this case, supports .NET 5 already)
- use the global MsBuild from your global Mono installation (if you are on *nix and have Mono >= 6.4.0) or your global Visual Studio installation (if you are on Windows and have VS >= 16.3)

If you are in the situation where the first option takes effect (so you have no global Mono or no global VS), then the aforementioned OmniSharp 1.34.15 guarantees that you will have .NET 5 preview support.

If you are in the second category (so you do have global Mono or global VS), then even if you have OmniSharp 1.34.15 / VS Code Extension 1.21.16, an outdated global Mono or VS will prevent you from enjoying support for .NET 5. In those cases you'd need to upgrade:

- global Mono to at least 6.10.0, which is the preview version at the time of writing
- Visual Studio to at least 16.6P2

Additionally, though this is generally not recommended, if you are on a *nix system, you can disable the usage of global Mono completely by adding the following VS Code setting:

"omnisharp.useGlobalMono": "never"

This will actually force OmniSharp to be launched only with its bundled minimal MsBuild and its own bundled minimal Mono runtime.

In the future we will raise the minimum versions of the discovered global Mono (as mentioned, currently >= 6.4.0) and global VS (currently >= 16.3) to align with .NET 5 support, however at the moment it is too early to do so. Hope this helps.

Today, we are at the dawn of the quantum computing age, a multidisciplinary field that sits at the intersection of quantum physics, computer science, mathematics and chemistry and may revolutionize the world of computing and software engineering.

In this post I am starting a new series that will, through the lens of a .NET developer, introduce the basics of quantum computing – using examples in Q#.

Historically, quantum computing dates back to the early 1980s, when Paul Benioff theorized a Turing machine that was a microscopic quantum mechanical model. Roughly at the same time, completely independently in the Soviet Union (on the wrong side of the Berlin Wall, as we said when I was a kid on that wrong side), the brilliant Russian physicist Yuri Manin also proposed the concept of building a quantum computer.

Soon after, in 1981, Richard Feynman delivered his famous lecture Simulating Physics with Computers at the 1st MIT Physics and Computation conference. In the lecture, Feynman argued that in order to truly simulate nature, which at its core is quantum mechanical, we should not use classical computing, but quantum computing. Given Feynman's profile, this observation – seemingly obvious in hindsight – is widely regarded as the moment that jump-started the field of quantum computing.

What followed was plenty of progress in terms of theory and algorithms, but the field always lacked the hardware to make it all reality. Only in recent years have we finally reached the point at which quantum computers are no longer theoretical devices. Quantum hardware is getting more and more powerful, stable and accessible to the masses. Since 2016, IBM Q Experience has provided access to quantum computers via the cloud (the smallest one for free!), while AWS Braket and Azure Quantum are both offering public preview programs with the goal of launching soon. Smaller companies and startups are also disrupting the field; for example AQT, an Innsbruck-based company, offers access to ion-trap quantum computers in the cloud too.

Additionally, a lot is happening around bringing the quantum experience closer to regular software developers. For example, Microsoft is currently building Q#, a high level programming language specifically tailored for quantum programming, IBM founded Qiskit, a Python framework for quantum computing, while Google AI Quantum Team started Cirq, another Python framework for quantum development.

All these efforts lead to what I like to call the **democratization** of quantum computing. You no longer have to be a theoretical physicist, a PhD researcher, or work for a large company with massive R&D resources to be able to access and program quantum computers – and that process will only continue to speed up.

It is impossible to talk about quantum computing without a little background on quantum mechanics. Contrary to general relativity, which we owe entirely to the brilliance of Albert Einstein, quantum mechanics famously had many pioneers who contributed to its formation.

However, if we had to single out one man that played the pivotal role in the history of quantum mechanics, I'd say it was Werner Heisenberg, who published the foundations of what became modern quantum mechanics in his 1925 paper Über quantentheoretische Umdeutung kinematischer und mechanischer Beziehungen. Following the paper, the theory was further refined, contributed to and developed by Heisenberg himself and many other brilliant physicists – Paul Dirac, Max Born, Pascual Jordan, Wolfgang Pauli and others (including especially profound contributions, sort of against his will, from Erwin Schrödinger, but that's a story for a separate day).

Heisenberg realized that it is impossible to use the paradigms of macro-scale physics to correctly describe the behavior of subatomic particles. The genius of Heisenberg was that he abandoned the approach at the very core of classical physics – describing the nature of reality (in this case particles) using idealized mathematical models and a realism based on deterministic results. Instead, he introduced a new matrix-based mathematical formalism, built on an algebraic, probabilistic approach to predicting the outcome of experiments. He called that “new kinematics” and a “calculus of observable quantities”. In other words, he realized that nature is random (no pun intended) by nature, took the equations of classical physics and mathematically reinterpreted them. The departure from the idealized, deterministic approach of classical physics was a profound, brilliant step, especially considering that he introduced noncommutativity, which didn't exist in theoretical physics at the time. Heisenberg realized that at the quantum level, observable properties such as momentum and position should not commute.

As Arkady Plotnitsky put it in his excellent book Epistemology and Probability, “this invention of new variables brought physics, mathematics and philosophy into a new (as against classical physics) type of relationships. Each of these steps just mentioned is remarkable enough. Jointly, they constitute a truly extraordinary accomplishment, rivalling or in any event standing its ground against those of Newton and Einstein”.

Throughout this series we will continue to look back at some of the historical context of the development of quantum mechanics and the radical epistemological challenges it posed.

While the field of quantum computing (and quantum information theory) is an offspring of quantum mechanics, to program a quantum computer and to use some of the high-level languages and frameworks we mentioned earlier, it is not absolutely necessary to be fluent in quantum mechanics.

Of course, familiarity with quantum mechanics is going to be advantageous at the theoretical level – to be able to grasp, for example, the concept of spin. Knowledge of QM may also be necessary to formulate and solve real-life problems with quantum computers. But strictly speaking, many of the cornerstones of quantum mechanics, like solving the time-dependent Schrödinger equation, do not really play much of a role in quantum computing. So if you are not feeling too comfortable with quantum mechanics, take careful steps forward and see how it feels; there may be more unfamiliarity and weirdness – but that shouldn't discourage you from attempting to learn quantum computing.

What is needed to get started, though, is a decent understanding of linear algebra.

Before we jump into the mathematics, let's discuss a little Q# and how it fits into the picture. As mentioned earlier, there are various ways of writing programs for quantum computers, many of which revolve around Python. That said, I am personally really excited about Q#, and for a number of reasons.

First of all, I am a .NET developer, and in that sense, Q# is a natural fit. The Quantum Development Kit for Q# is actually built on top of the .NET Core SDK, which makes the whole experience very familiar and intuitive for developers used to that toolchain. Q# programs are compiled and executed from the *dotnet* CLI, the libraries are distributed using the NuGet package manager, and the project file is a standard *csproj* project file used for C# or F# development, with a custom SDK defined in it. That integration level is very similar to IL projects, which we already discussed on this blog.

Secondly, it's really appealing to have a language specifically designed for quantum computing. This allows a lot of quantum-specific concepts – such as adjoints – to fit naturally into the language, instead of feeling like a bolt-on to a general-purpose language. Syntactically, Q# looks like a mix of C# and F#, and therefore familiarity with those languages will make entry into Q# easier.

Finally, QDK has extensions for the editors/IDEs known from the .NET world – my recommendation is to use VS Code as it is lightweight and cross platform, but if you prefer, there is an extension for Visual Studio too.

Generally speaking, the Q# programming model allows you to write code for quantum hardware in a way where the quantum hardware is treated as a coprocessor (much like a GPU is). In that sense, your main program can be C# or even Python based, and for given operations you'd call into your Q# code to execute a given set of instructions on a quantum device (or a simulator, when running locally). The QDK takes care of the interoperability between the host program (C#, Python) and the Q# code itself.

To get started, we will need to install the Quantum Development Kit from Microsoft Research on our machines. The linked page contains instructions for installing the QDK as well as the necessary project templates and the editor extensions. I recommend that you pause for a moment here, and go ahead and set up the QDK now.

When starting to learn quantum computing, a decent place to begin is the notion of a qubit: just as in classical computers everything is based on bits, in quantum computing computations are carried out by qubit manipulation. The main difference between classical bits and qubits is that qubits, instead of taking only one of the two discrete (binary) values, can also be in a superposition state. We'll explain the notion of superposition in a lot more detail in the next post, so bear with me, but for now we can say that when in superposition, they are both 0 and 1 at the same time.

There is really no way to reason about qubits without discussing their mathematical representation and using some (hopefully not too complicated) linear algebra, so let's have a look.

The state of a single qubit is described by a single vector $\begin{bmatrix} \alpha \cr \beta \end{bmatrix}$ in a two-dimensional Hilbert space. More generally, we can say that a qubit is a quantum system in which we can select **two linearly independent states** representing 0 and 1, and which can be modeled using a **two-dimensional complex vector space**.

While hardware design is out of scope for this series, in terms of physical realization qubits can be implemented using, for example, electron spins or photon polarizations.

When dealing with qubits, we have to reason about them in terms of the mathematical concept of a basis. There are always infinitely many bases to choose from (as long as the two distinguished states chosen to represent 0 and 1 are orthonormal); in quantum computing, the two basic unit vectors $\begin{bmatrix} 1 \cr 0 \end{bmatrix}$ and $\begin{bmatrix} 0 \cr 1 \end{bmatrix}$ form the so-called **computational basis**.

In addition, we know from linear algebra that vectors can be written as linear combinations of basis vectors. As such, the qubit state $\ket{\varphi}$ can always be described as:

$$\ket{\varphi} = \alpha\begin{bmatrix} 1 \\ 0 \end{bmatrix} + \beta\begin{bmatrix} 0 \\ 1 \end{bmatrix}$$

In quantum mechanics, $\alpha$ and $\beta$ would be complex numbers, since we are really dealing with a two-dimensional complex-valued vector here, but for the simplicity of this discussion, we can assume they are real numbers for now.

In the Dirac notation, which is prevalent in quantum mechanics, we can express $\begin{bmatrix} 1 \cr 0 \end{bmatrix}$ and $\begin{bmatrix} 0 \cr 1 \end{bmatrix}$ as $\ket{0}$ and $\ket{1}$, respectively.

$$\ket{\varphi} = \alpha\ket{0} + \beta\ket{1}$$

The main reason to use the Dirac notation is that, aside from being quite succinct, it is also independent of the basis chosen.

One of the fundamental strengths of quantum computers lies in the fact that a qubit may be in a superposition state, and we can use that fact to our advantage in our algorithms. However, as soon as it is measured (in a certain basis, of course), its state (value) always collapses to one of the two basis states, either $\ket{0}$ or $\ket{1}$. This also leads us to another weird aspect of quantum mechanics – measurement of a quantum state will change that quantum state.

A word of caution here. The choice of basis is fundamentally important – superposition is basis-dependent; in other words, a state is always in superposition with respect to certain bases and not in superposition with respect to others. The same notion applies to measurement: when a state is measured in certain bases it will produce deterministic results, while in others it will produce random results.

However, we digressed a bit, so let's go back to our qubit. We refer to $\alpha$ and $\beta$ as **probability amplitudes**. We can relate amplitudes to the actual classical probability of obtaining the state $\ket{0}$ or $\ket{1}$ using the Born rule; the amplitudes must satisfy the normalization condition:

$$|\alpha|^2 + |\beta|^2 = 1$$

The classical probability of collapsing to $\ket{0}$ is therefore $|\alpha|^2$, and conversely, the probability of collapsing to $\ket{1}$ is $|\beta|^2$. This is actually one of the axioms of quantum mechanics. The rule itself is not derived from anything, it is instead given, based on experimental evidence only.
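As a worked example (the amplitudes here are our own illustrative pick, not from the post), take an unevenly weighted state with real amplitudes $\alpha = 0.6$ and $\beta = 0.8$:

```python
# Illustrative amplitudes for a normalized, unevenly weighted qubit state.
alpha, beta = 0.6, 0.8

# Born rule: squared magnitudes give the classical collapse probabilities.
p0 = abs(alpha) ** 2  # probability of collapsing to |0> -> 0.36
p1 = abs(beta) ** 2   # probability of collapsing to |1> -> 0.64

# Normalization must hold: |alpha|^2 + |beta|^2 = 1.
assert abs(p0 + p1 - 1) < 1e-9
print(f"P(|0>) = {p0:.2f}, P(|1>) = {p1:.2f}")
```

So a 0.6/0.8 amplitude split yields a 36%/64% measurement split, not 60%/80% – the squaring matters.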

The final conclusion we can draw here is the following – until we measure it, a qubit can be in one of infinitely many superposition states, but we can only ever extract one classical bit out of it upon measurement. We are going to be discussing various qubit transformations in the upcoming posts.

Earlier in this post, we mentioned the steps needed to install the QDK. We can now start with our first program.

The simplest way to do so is to use the *dotnet* CLI. The command below creates a new C# command line application with a Q# component. The name of the program is inferred from the name of the current folder.

```shell
dotnet new console -lang "Q#"
```

As previously discussed, the C# program acts as a “host application” here, while we can at any point yield to the Q# part of our application to execute any quantum operation. The template generates a default program that is not particularly exciting; here is how it should look if everything worked correctly. C#:

```csharp
using System;
using Microsoft.Quantum.Simulation.Core;
using Microsoft.Quantum.Simulation.Simulators;

namespace QubitExample
{
    class Driver
    {
        static void Main(string[] args)
        {
            using (var qsim = new QuantumSimulator())
            {
                HelloQ.Run(qsim).Wait();
            }
        }
    }
}
```

Q#

```qsharp
namespace QubitExample {
    open Microsoft.Quantum.Canon;
    open Microsoft.Quantum.Intrinsic;

    operation HelloQ() : Unit {
        Message("Hello quantum world!");
    }
}
```

You can run the program using the dotnet CLI with the regular *dotnet run* command, and it should print:

Hello quantum world!

Let's adapt this out-of-the-box template to something more useful that will allow us to check some of the statements we made about qubit behavior. Our first interaction with qubits will be building a small program that will allow us to allocate some qubits, measure their values and print the results.

At the Q# code level, we will change the quantum operation result from *Unit* (which is semantically equivalent to *void* in C#) to an *Int*, since we will want some data to flow back to us. We allocate a qubit with a *using* statement; you can allocate multiple qubits at once if you need to, but in our case we will stick to single-qubit operations. A newly allocated qubit is by convention automatically initialized to the $\ket{0}$ state. Once the qubit is used and is no longer needed, it must be reset back to the $\ket{0}$ state and safely released.

As mentioned, to extract a classical bit out of a qubit, we must measure it. In Q# we can measure using the *Measure* operation, specifying the basis we want to use. In our case, we are interested in measuring the qubit in the computational basis, also known as the Pauli-Z basis.

The sample code is shown below:

namespace QubitExample {
    open Microsoft.Quantum.Canon;
    open Microsoft.Quantum.Intrinsic;
    open Microsoft.Quantum.Measurement;

    operation MeasureQubits(count : Int) : Int {
        mutable resultsTotal = 0;
        using (qubit = Qubit()) {
            // Q# ranges are inclusive, so 1..count gives exactly count iterations
            for (idx in 1..count) {
                let result = Measure([PauliZ], [qubit]);
                set resultsTotal += result == One ? 1 | 0;
                Reset(qubit);
            }
            return resultsTotal;
        }
    }
}

As input we pass in an integer representing the number of times we should run a measurement on a fresh qubit (a new qubit for the first run, and a reset qubit for subsequent runs). We then keep a running total of the results. If at the end *resultsTotal = 0*, it means we only got zeros; if *resultsTotal = count*, it means we only got ones; anything in between means the measurements were random.
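Setting the quantum mechanics aside for a moment, the counting scheme itself is easy to sanity-check. Here is a small, purely classical Python sketch of the same loop, where a hypothetical `measure(p_one)` helper stands in for the qubit measurement and returns a `One` (1) with a fixed probability:

```python
import random

def measure(p_one):
    # Classical stand-in for a single qubit measurement:
    # returns 1 with probability p_one, otherwise 0.
    return 1 if random.random() < p_one else 0

def measure_qubits(count, p_one):
    # Mirrors the Q# loop: run `count` measurements on a fresh qubit
    # and keep a running total of the One results.
    results_total = 0
    for _ in range(count):
        results_total += measure(p_one)
    return results_total

# A qubit in |0>, measured in the Z basis, never yields One:
print(measure_qubits(100, 0.0))  # 0
# A 50/50 measurement lands somewhere around count / 2:
print(measure_qubits(100, 0.5))
```

The three cases described above fall out directly: `0` means only zeros, `count` means only ones, and anything in between means randomness.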

Our updated C# code (including small tweaks to reduce nesting and make it more C# 8 friendly) to invoke this looks as follows:

static async Task Main(string[] args)
{
    using var qsim = new QuantumSimulator();
    var repeats = 100;

    Console.WriteLine($"Running qubit measurement {repeats} times.");
    var results = await MeasureQubits.Run(qsim, repeats);

    Console.WriteLine($"Received {results} ones.");
    Console.WriteLine($"Received {repeats - results} zeros.");
}

We can now run this code and see what happens. The result is below:

Received 0 ones.
Received 100 zeros.

We got 100 zeros in 100 attempts, which is quite encouraging. Remember that we said that newly initialized qubits are in the $\ket{0}$ state, and the output of the program agrees. We also never did anything to put the qubit into a superposition (this will be covered in the next post in this series), so no randomness should occur either.

We can verify one other claim at this point. We said that we can measure the qubit in various bases, and choosing the basis is critical for getting a deterministic or probabilistic value. To check that, let's measure in a different basis – for example Pauli-X.

It's a small change in our code – just replace *PauliZ* with *PauliX* in our *Measure* invocation.

operation MeasureQubits(count : Int) : Int {
    mutable resultsTotal = 0;
    using (qubit = Qubit()) {
        for (idx in 1..count) {
            let result = Measure([PauliX], [qubit]);
            set resultsTotal += result == One ? 1 | 0;
            Reset(qubit);
        }
        return resultsTotal;
    }
}

When we run our program, we should see something like this:

Received 52 ones.
Received 48 zeros.

The distribution is not exactly even, because the sample size is small, but the pattern is clear. This aligns with our earlier statement that **a quantum state is always in superposition with respect to certain bases and not in superposition with respect to others**. This is quite profound, and we'll discuss superposition extensively next time.
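The roughly 50/50 split is exactly what the Born rule predicts for $\ket{0}$ measured in the Pauli-X basis, since $\ket{0} = \frac{1}{\sqrt{2}}(\ket{+} + \ket{-})$. As a quick sanity check outside the simulator, here is a small Python sketch (standard library only, with hypothetical helper names) that computes the measurement probabilities of $\ket{0}$ in both bases:

```python
from math import sqrt

# |0> as a 2-dimensional complex vector
zero = [1 + 0j, 0j]

# Computational (Pauli-Z) basis vectors: |0>, |1>
z_basis = [[1 + 0j, 0j], [0j, 1 + 0j]]

# Pauli-X basis vectors: |+>, |->
x_basis = [[1 / sqrt(2) + 0j, 1 / sqrt(2) + 0j],
           [1 / sqrt(2) + 0j, -1 / sqrt(2) + 0j]]

def probabilities(state, basis):
    # Born rule: P(outcome) = |<basis_vector|state>|^2
    return [abs(sum(b_i.conjugate() * s_i for b_i, s_i in zip(b, state))) ** 2
            for b in basis]

# Deterministic in the Z basis: probabilities 1 and 0
print(probabilities(zero, z_basis))
# Random in the X basis: probabilities ~0.5 and ~0.5
print(probabilities(zero, x_basis))
```

The same state yields a deterministic outcome in one basis and a coin flip in the other, which is what our two simulator runs showed.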

Before we finish for today, one additional note. The Q# code we wrote is actually unnecessarily verbose. The language and its core library ships with a ton of shortcuts and utilities that make quantum code succinct and pleasant. In our sample, we can actually collapse the measure in standard basis (Pauli-Z) and the reset operations into a single one – *MResetZ*.

The updated code is shown below.

operation MeasureQubits(count : Int) : Int {
    mutable resultsTotal = 0;
    using (qubit = Qubit()) {
        for (idx in 1..count) {
            let result = MResetZ(qubit);
            set resultsTotal += result == One ? 1 | 0;
        }
        return resultsTotal;
    }
}

In this blog post we looked at the historical background of quantum mechanics and discussed how we currently find ourselves at a breakthrough point, with a booming landscape of quantum hardware and software solutions, democratizing access to quantum computing.

We had a look at how to get started with QDK and Q# and explored the mathematical notion of a qubit. Finally, we had a look at some basic qubit measurement behavior using Q# and the quantum simulator.

In the next post in this series we will explore the mathematics and Q# code related to superposition.

I've recently had some conversations with folks about the different ways of registering the MVC framework in ASP.NET Core, and it occurred to me that this is not necessarily obvious to everyone. That's because there are a few ways of doing it, so let's quickly run through them.

In .NET Core 2.x and earlier, you could register the MVC framework in the ASP.NET Core dependency injection container in two ways:

- *services.AddMvc()*
- *services.AddMvcCore()*

In ASP.NET Core 3.x, aside from those two approaches, there are three **additional** ways:

- *services.AddControllers()*
- *services.AddControllersWithViews()*
- *services.AddRazorPages()*

Let's discuss the differences between them, starting from the most “lightweight” to the most “heavyweight”.

*AddMvcCore()* registers all the core services required for the MVC application to work at all. We do not need to list them all, but pretty much everything related to the controller invocation pipeline gets activated there. These are low(er)-level services that only get customized when you are doing something quite complex or unusual (e.g. building a CMS). Some examples are: the controller activation services, the MVC options pipeline, application model provider infrastructure, action constraints, the filter pipeline, model binder infrastructure, action result executors and a few more.

At the same time, the initialized framework configuration is completely “bare bones”. It is functional from the perspective of being able to handle an incoming HTTP call, but it is missing several core features. For example, model validation via data annotations is not activated, and the same goes for authorization.

In this set-up, you are in control of (or, if you will, you are responsible for) what is plugged in and used at runtime. In other words, if you need anything beyond the most basic framework feature, you have to add it manually. In fact, in .NET Core 2.x and earlier, not even JSON support was there; this has now changed and the *System.Text.Json* formatter is actually already included in the call to *AddMvcCore()*.

For example:

// pick what you need
services.AddMvcCore()
    .AddDataAnnotations() // for model validation
    .AddApiExplorer();    // for Swagger

This should be the default choice for you if you really want to bootstrap the minimal amount of things at runtime and only activate the individual features you actually use.

*AddControllers()* was introduced in ASP.NET Core 3.0 as a mechanism to simplify the manual setup needed when calling the lightweight *AddMvcCore()*.

What you get with *AddControllers()* is:

- everything that *AddMvcCore()* does
- authorization services – needed for authorization policies, filters and other authorization components to work
- API explorer – required if you want to build dynamic API documentation, generate Swagger/OpenAPI files
- data annotations – needed for model validation with attributes and *IValidatableObject* to work
- formatter mappings – needed for content negotiation to work
- CORS

In other words, what you can expect from *AddControllers()* is that it would give you the most comfortable setup for API development. None of the view services are registered here so you don't “drag” any of the Razor related baggage with you. What's worth noting is that the name itself – *AddControllers()* – sort of blurs the line between the ASP.NET Core and the MVC framework, as it doesn't really tell you at first glance that you are activating the MVC framework.

This should be the default choice for you if you are developing an API and want to quickly and reliably bootstrap the framework.

// ready for API development
services.AddControllers();

*AddControllersWithViews()* is the one you should pick if you are building a “classic” MVC site, just like we have been doing for years – with controllers and Razor views. It will end up activating:

- everything that *AddControllers()* does
- views functionality – explicitly registers the Razor view engine
- cache tag helper

This should be the default choice for you if you do not need the new Razor Pages functionality – you are either building an MVC website exactly as it was built in the classic .NET Framework MVC and in earlier versions of ASP.NET Core MVC, or you are migrating an older site.

// ready for "classic" MVC website development
// and at the same time ready for API development
services.AddControllersWithViews();

*AddRazorPages()* is intended to serve as a bootstrapping helper for working with the new Razor Pages feature. Under the hood, it ends up activating the following:

- all the core Razor pages features
- everything that *AddMvcCore()* does – this is a bit surprising at first glance
- authorization services – needed for authorization policies, filters and other authorization components to work
- data annotations – needed for model validation to work
- cache tag helper

The fact that it ends up activating *AddMvcCore()* is an internal implementation detail, since the Razor Pages pipeline relies on a lot of the core MVC infrastructure under the hood. As a side effect, it means that when calling *AddRazorPages()* you are sort of ready to do API endpoints too. This may change in the future, so I wouldn't base strategic decisions on it. In other words, even though we could now say:

// ready for Razor Pages development
// and at the same time quite ready for API development
services.AddRazorPages();

if you ever need to host an API and Razor Pages in the same project, I'd recommend making these activations explicit, so that you don't get surprised in the future when something changes internally:

// ready for Razor Pages development
// ready for API development
services.AddRazorPages();
services.AddControllers();

Of course, *AddRazorPages()* should be your default choice if you plan to work with Razor Pages.

Finally, we have *AddMvc()*, which simply registers the entire kitchen sink of all the features. It gives you:

- everything that *AddControllersWithViews()* does
- everything that *AddRazorPages()* does

// ready for everything
services.AddMvc();

While I'd imagine you know what you are trying to build – if you ever have any doubts about the direction your project will evolve in, or if you are afraid that some MVC feature might be missing (or indeed, if you already ran into a missing one), calling *AddMvc()* is the safest bet to resolve any of those worries or issues.