Let's start today's journey by looking at the famous Pauli spin matrices, as they are central to quantum computational transformations.

Spin is a property of every elementary particle and represents one of the two types of angular momentum found in quantum mechanics – intrinsic angular momentum. The value of the spin is discrete and, in general, we can divide particles into two families – those with half-integer spin, called fermions, and those with integer spin, called bosons. Their roles in nature are fundamentally different: fermions make up what we consider ordinary matter (quarks or electrons), while bosons (photons, gluons or, for example, the Higgs boson) act as force carriers. This is described by the so-called spin-statistics theorem, and it remains one of the more puzzling concepts in quantum mechanics.

Electrons are spin-$\frac{1}{2}$ particles, with a spin projection of $+\frac{1}{2}$ (spin up) or $-\frac{1}{2}$ (spin down), which also earns them the very useful designation of being the simplest possible quantum system. If you recall the discussions around the concept of a qubit in part 1 of this series, you should immediately recognize that the nature of spin, with its two orthogonal basis states and two-component vector representation, maps perfectly onto what we defined as the prerequisites for a qubit.

In 1927, in a breakthrough step in the development of quantum mechanics, Wolfgang Pauli formalized the mathematics behind the non-relativistic theory of spin and discovered the Pauli matrices as representations of the spin operators. Overall, Pauli's contributions to modern quantum theory were immense, spearheaded of course by the famous Pauli exclusion principle, formulated in 1925, when he was only 25 years old, and for which he ultimately received the Nobel prize in physics in 1945, after a nomination from Albert Einstein. In 1928, Pauli was appointed a Professor of Theoretical Physics at (my nearby) ETH Zürich, where, after a brief time in the USA during the Second World War, he went on to have an illustrious scientific career and become a towering figure in the history of this prestigious university that has produced 21 Nobel prize winners – one of them being Albert Einstein himself.

In 1939, Pauli's assistant at ETH, Markus Fierz, in his habilitation degree, formulated the aforementioned spin-statistics theorem, which was later refined further by Pauli himself. It is therefore of no surprise that today, the ETH Institute for Theoretical Physics, which was established upon Pauli's arrival at the university, is located at the Wolfgang-Pauli-Strasse 27 in Zürich.

In quantum mechanics, the three famous Pauli matrices are denoted by the Greek letter $\sigma$, and they are shown below.

$$\sigma_x=\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$$

$$\sigma_y=\begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix}$$

$$\sigma_z=\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$$

The presence of $i$, the imaginary number ($i = \sqrt{-1}$), in $\sigma_y$, is dictated by the mathematics of quantum mechanics. The state of a closed quantum system is a vector in a complex vector space and complex numbers are made up of real and imaginary components.

Pauli matrices are of fundamental importance in quantum computing too, as they represent three of the most useful single qubit gates, and will be our main focus in this part 3 of the series.
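Before moving on, it is worth convincing ourselves numerically that the Pauli matrices really are unitary and Hermitian. Below is a quick sanity check – a sketch in plain Python (rather than the Q# used later in this series), with the 2×2 matrices represented as nested lists:

```python
# Pauli matrices as plain 2x2 nested lists (Python's built-in complex numbers).
X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]
Z = [[1, 0], [0, -1]]
I = [[1, 0], [0, 1]]

def matmul(a, b):
    # 2x2 matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def dagger(a):
    # Hermitian adjoint: transpose and complex-conjugate every entry
    return [[complex(a[j][i]).conjugate() for j in range(2)] for i in range(2)]

for m in (X, Y, Z):
    assert matmul(dagger(m), m) == I  # unitary: M†M = I
    assert dagger(m) == m             # Hermitian: M† = M
```

Each matrix is its own Hermitian adjoint, so unitarity here reduces to the self-inverse property we will meet again below.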

Similarly to classical computer science, where gates can be used to assemble circuits, in quantum computing, a sequence of quantum gates is usually referred to as a “quantum circuit”. The primary role of circuits is that they allow us to visualize the computation models as a sequence of bit (or, in our case, qubit) operations. The analogy between classical and quantum computing works reasonably well in this case, but a word of caution is necessary. In classical computing, the gates typically correspond (sometimes in a slightly idealized form) to physical electronic devices. That is not necessarily the case in quantum computing. Depending on the implementation model of the physical quantum computer, a gate may really have a physical meaning – for example in nuclear magnetic resonance quantum computers – or it might be a purely logical concept, enacted on a stationary particle via a series of pulses – for example in trapped ion quantum computer architectures.

In this part 3 of the series, we are going to focus on single qubit gates only – we'll be looking at multi qubit gates next time. In classical computing, single bit gates – at least in boolean circuits – can only exist in two variants: the identity gate, which leaves the bit value intact, and the $NOT$ gate, which flips the bit value. In other words, if we input $0$ into an identity gate, $0$ would come out, and if we input $1$, then $1$ would come out. Conversely, for the $NOT$ gate, an input of $0$ produces an output of $1$ and vice versa.

In quantum computing, however, as we already mentioned in previous parts, we can come up with infinitely many single qubit gates, as there are infinitely many ways of manipulating the quantum system that the qubit represents. The reason is that quantum transformations are mathematically described by matrices – more specifically, $2^n \times 2^n$ sized unitary matrices, where $n$ stands for the number of qubits the gate acts on – and for the single qubit case, we can construct an infinite number of unique $2 \times 2$ unitary matrices.

Single qubit quantum gates are often referred to as *elementary quantum gates*. The most general equation we can write here is the transformation of one quantum state $\psi$ into another quantum state $\varphi$ by a unitary transformation:

$$U\ket{\psi} = \ket{\varphi}$$

Quantum circuits can be visualized in a similar way to how classical circuits are visualized, with simple diagrams representing ordered operations on qubits. My favorite basic tool for quantum circuits is called Bono, and it can be cloned from GitHub and run locally; however, there are many other useful circuit building tools, one of the most popular – albeit a little overwhelming at first – being Quirk.

An example is shown below:

In the circuit above, we start with a single qubit in a basis state $\ket{0}$, and apply the Hadamard gate that we discussed last time around – aptly marked by the H symbol. What follows is the act of measurement, which is denoted by its own measurement symbol. The line type is also of significance – a single line represents a qubit, while a double line represents a classical bit – after all, after the measurement we deal with a classical 0 or 1 value only.
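For readers who like to see the numbers, the circuit above – prepare $\ket{0}$, apply $H$, then measure – can be imitated with a few lines of Python (a toy simulation under ideal, noiseless assumptions, not the Q# we will use later):

```python
import random

# |0> as a 2-component state vector, then the Hadamard gate, then measurement.
H = [[2 ** -0.5, 2 ** -0.5], [2 ** -0.5, -(2 ** -0.5)]]
state = [1.0, 0.0]  # the basis state |0>

# Applying a gate is matrix-vector multiplication.
state = [sum(H[i][j] * state[j] for j in range(2)) for i in range(2)]

# Born rule: the probability of reading 1 is the squared modulus of the amplitude.
p_one = abs(state[1]) ** 2

# Sample 1000 "shots" - the classical bits we would read off the measurement line.
ones = sum(random.random() < p_one for _ in range(1000))
print(f"measured 1 in {ones} out of 1000 shots")  # close to 500
```

The double (classical) wire after the measurement corresponds to the sampled 0/1 values in the last step.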

The simplest possible quantum gate is the 2 x 2 identity gate, and it semantically corresponds to the behavior of the identity gate in classical computing. Given that it is two-dimensional, in linear algebra it is often denoted by $I_2$, while in quantum mechanics it is sometimes written as $\mathbb{1}$. In quantum computing, however, we normally just use the letter $I$ to represent the identity gate. Mathematically it can be expressed using the matrix below:

$$I=\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$$

We can also say that the linear transformation behind the identity is an identity function – a function that always returns the same value as was used as its argument. We can write it in the following way:

$$I\ket{\psi} = \ket{\psi}$$

Another important property of the identity matrix is that if we multiply a matrix $U$ by itself and get $I$ as a result, we know that $U$ is also its own inverse.

Let's see what would happen when we apply $I$ to a qubit:

$$I\ket{\varphi} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}(\alpha\ket{0} + \beta\ket{1}) = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} \alpha \\ \beta \end{bmatrix} \\ = \alpha\ket{0} + \beta\ket{1}$$

As expected, nothing really happens – when the identity gate is applied, the qubit state is left unchanged.

In quantum computing, when representing Pauli matrices, it is common to skip the quantum mechanical $\sigma$ notation and refer to the matrices (and thus, the quantum gates they represent) using the letters $X$, $Y$ and $Z$ only, which gives us:

$$X=\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$$

$$Y=\begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix}$$

$$Z=\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$$

As already mentioned in part 1 of this series, we are going to try to avoid imaginary and complex numbers wherever we can (it won't always be possible) – and in this case we can simplify $Y$. The relationship between this simplified $Y$ and $\sigma_y$ can be defined as:

$$Y=\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}$$

$$\sigma_y = iY$$

And that's the approach we will take here.
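We can quickly double check this relationship numerically – multiplying every entry of the simplified real matrix by $i$ should recover the quantum mechanical $\sigma_y$. A small sketch in plain Python:

```python
# sigma_y versus the simplified, real-valued Y used in this series.
sigma_y = [[0, -1j], [1j, 0]]
Y = [[0, -1], [1, 0]]

# Multiply every entry of Y by the imaginary unit i.
iY = [[1j * entry for entry in row] for row in Y]
assert iY == sigma_y  # the two matrices differ only by the factor i
```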

Each of the matrices can also be written in Dirac notation, as a sum of outer products of the basis vectors. This might seem somewhat confusing at first but quickly becomes very intuitive:

$$X=\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} = \ket{1}\bra{0} + \ket{0}\bra{1}$$

$$Y=\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} = \ket{1}\bra{0} - \ket{0}\bra{1}$$

$$Z=\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} = \ket{0}\bra{0} - \ket{1}\bra{1}$$
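To see why the outer product notation works, we can assemble the matrices entry by entry – $\ket{a}\bra{b}$ is just a 2×2 matrix with a single non-zero entry at row $a$, column $b$. A small illustrative sketch in Python:

```python
ket0, ket1 = [1, 0], [0, 1]

def outer(a, b):
    # |a><b| for real vectors: entry (i, j) is a[i] * b[j]
    return [[a[i] * b[j] for j in range(2)] for i in range(2)]

def add(a, b, sign=1):
    # entry-wise sum a + sign * b
    return [[a[i][j] + sign * b[i][j] for j in range(2)] for i in range(2)]

X = add(outer(ket1, ket0), outer(ket0, ket1))
Y = add(outer(ket1, ket0), outer(ket0, ket1), sign=-1)
Z = add(outer(ket0, ket0), outer(ket1, ket1), sign=-1)

assert X == [[0, 1], [1, 0]]
assert Y == [[0, -1], [1, 0]]
assert Z == [[1, 0], [0, -1]]
```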

We shall now dive deeper into the mathematical consequences of applying each of the three Pauli gates to such a quantum system, starting with $X$.

The Pauli $X$ gate is often referred to as the **bit flip gate** because it ends up swapping the probability amplitudes $\alpha$ and $\beta$ with each other. If the qubit is already in a known state $\ket{0}$ or $\ket{1}$ – in other words, we know that one of the probability amplitudes $\alpha$ or $\beta$, and thus the corresponding probability, is zero – then the gate acts like the $NOT$ gate in classical computing, flipping the value to the opposite one.

Since we already know that we can write the state of our qubit as a linear combination of the basis vectors, with $\alpha$ and $\beta$ being the probability amplitudes:

$$\ket{\varphi} = \alpha\ket{0} + \beta\ket{1}$$

we can now express the $X$ gate acting upon a qubit using simple algebra:

$$X\ket{\varphi} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}(\alpha\ket{0} + \beta\ket{1}) = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} \alpha \\ \beta \end{bmatrix} \\ = \beta\ket{0} + \alpha\ket{1}$$

All of this, especially the theoretical symmetry to the classical $NOT$ gate, makes the Pauli $X$ gate one of the most important and easy to understand quantum gates.
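The amplitude swap is easy to verify with a couple of lines of Python (using example amplitudes – any normalized pair behaves the same way):

```python
# Apply the bit flip gate X to alpha|0> + beta|1>.
X = [[0, 1], [1, 0]]
alpha, beta = 0.6, 0.8  # example amplitudes: 0.36 + 0.64 = 1

state = [alpha, beta]
flipped = [sum(X[i][j] * state[j] for j in range(2)) for i in range(2)]
assert flipped == [beta, alpha]  # the amplitudes have swapped places
```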

On the other hand, the Pauli $Z$ gate cannot really be conceptually mapped to a classical computing gate, making it a bit more difficult to comprehend. The mathematics of it is not very complicated though, as shown below:

$$Z\ket{\varphi} = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}(\alpha\ket{0} + \beta\ket{1}) = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}\begin{bmatrix} \alpha \\ \beta \end{bmatrix} \\ = \alpha\ket{0} - \beta\ket{1}$$

In other words, we started with the state $\alpha\ket{0} + \beta\ket{1}$ and ended with the state $\alpha\ket{0} - \beta\ket{1}$ – the only difference being the sign. The result may at first glance be a little confusing, especially as we already discussed extensively that the probability amplitudes can be turned into the actual classical probabilities of obtaining the state $\ket{0}$ or $\ket{1}$ using the Born rule – by squaring the modulus of the amplitude, $|\alpha|^2$ or $|\beta|^2$. Naturally, the sign change we encountered when applying the Pauli $Z$ gate has no impact on those classical probabilities. The $Z$ gate is thus referred to as the **phase flip gate**, as it flips the **phase** of the qubit while leaving the actual classical probabilities intact. A simple way of remembering the effect of the $Z$ gate is that it leaves $\ket{0}$ intact and multiplies $\ket{1}$ by $-1$. While this has no impact on the probability of reading a classical 0 or 1 out of the qubit, it has some creative application scenarios in quantum algorithms.
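We can confirm in Python that the sign flip leaves the Born rule probabilities untouched:

```python
# Apply the phase flip gate Z to alpha|0> + beta|1>.
Z = [[1, 0], [0, -1]]
alpha, beta = 0.6, 0.8  # example amplitudes
state = [alpha, beta]

flipped = [sum(Z[i][j] * state[j] for j in range(2)) for i in range(2)]
assert flipped == [alpha, -beta]  # only the sign of beta changed

# The classical probabilities |amplitude|^2 are identical before and after.
assert [abs(a) ** 2 for a in flipped] == [abs(a) ** 2 for a in state]
```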

Finally, the $Y$ gate is both a **bit flip gate** and a **phase flip gate** at the same time – it swaps the two amplitudes and flips a sign; in our simplified real form, the amplitude that ends up attached to $\ket{0}$ gets multiplied by $-1$.

$$Y\ket{\varphi} = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}(\alpha\ket{0} + \beta\ket{1}) = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} \alpha \\ \beta \end{bmatrix} \\ = -\beta\ket{0} + \alpha\ket{1}$$

The three Pauli gates are each their own inverse – keeping in mind that for our simplified real $Y$ this holds only up to a global phase, since for it $Y^2 = -I$. For the Pauli matrices proper, the following holds:

$$X^2 = Y^2 = Z^2 = I$$

In practice, it means that applying the same gate twice in a row undoes the effect of the first transformation. Additionally, Pauli gates are closely related to the Hadamard gate that we already discussed last time. We will not expand the calculations here (although I would encourage you to do so to practice some linear algebra) – but the following relations are true:

$$X = HZH$$

$$Y = -HYH$$

$$Z = HXH$$
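Instead of expanding the algebra by hand, we can let Python confirm both the self-inverse property and the Hadamard relations (using the complex $\sigma_y$ for the $Y^2 = I$ check):

```python
s = 2 ** -0.5  # 1/sqrt(2)
H = [[s, s], [s, -s]]
I = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]
Z = [[1, 0], [0, -1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def approx_eq(a, b, eps=1e-12):
    # entry-wise comparison with a floating point tolerance
    return all(abs(a[i][j] - b[i][j]) < eps for i in range(2) for j in range(2))

# X^2 = Y^2 = Z^2 = I
for m in (X, Y, Z):
    assert matmul(m, m) == I

# X = HZH and Z = HXH
assert approx_eq(matmul(H, matmul(Z, H)), X)
assert approx_eq(matmul(H, matmul(X, H)), Z)
```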

Pauli gates and the identity gate have a fundamental meaning in quantum information theory, in the sense that they can be used as building blocks to make up any other single qubit transformation. We can algebraically express any quantum computational linear transformation in the two-dimensional complex vector space as a product of a complex unit and a linear combination of the three Pauli matrices and the identity matrix. It means that we can always find $\alpha$, $\beta$, $\gamma$ and $\delta$, as well as $\theta$, to represent any unitary transformation $U$:

$$U = e^{\theta{i}}(\alpha{I} + \beta{X} + \gamma{Y} + \delta{Z})$$

where $\alpha$, $\beta$, $\gamma$ and $\delta$ are in general complex numbers, $\theta$ is a real number satisfying $0 \le \theta < 2\pi$, and $e$ stands for Euler's number.
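Since $\{I, X, Y, Z\}$ forms a basis of the 2×2 matrices, the coefficients can be read off with the trace inner product, $c_A = \operatorname{tr}(AU)/2$. As an illustrative sketch (using the complex $\sigma_y$), here is the Hadamard matrix expanded in that basis:

```python
s = 2 ** -0.5
H = [[s, s], [s, -s]]
basis = {
    "I": [[1, 0], [0, 1]],
    "X": [[0, 1], [1, 0]],
    "Y": [[0, -1j], [1j, 0]],
    "Z": [[1, 0], [0, -1]],
}

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def trace(a):
    return a[0][0] + a[1][1]

# Coefficient of each basis matrix: tr(A * H) / 2.
coeffs = {name: trace(matmul(m, H)) / 2 for name, m in basis.items()}
# H turns out to have no I or Y component; its X and Z coefficients are both 1/sqrt(2).
```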

We've spent quite some time looking at the theory, but now we are ready to go back to Q# programming. All three Pauli gates are available as operations in the $Microsoft.Quantum.Intrinsic$ namespace. These are:

- $X (qubit : Qubit)$
- $Y (qubit : Qubit)$
- $Z (qubit : Qubit)$

The identity gate is also available – mainly for completeness, but also because it is sometimes useful when an algorithm requires a no-effect action to be performed on a qubit. It is probably of no surprise to anyone that its signature is $I (qubit : Qubit)$.

A sample Q# identity operation is shown below.

```qsharp
operation Identity(count : Int) : Int {
    mutable resultsTotal = 0;
    using (qubit = Qubit()) {
        for (idx in 1..count) {
            I(qubit);
            let result = MResetZ(qubit);
            set resultsTotal += result == One ? 1 | 0;
        }
        return resultsTotal;
    }
}
```

The operation, similarly to the previous examples in earlier parts of this series, takes an integer input indicating the number of times the procedure should be repeated. A single qubit is allocated, and then, within the loop, the $I$ gate is applied and a regular measurement along the Z axis (in the computational basis) is performed. The qubit is then reset to its original state, as mandated by Q# (the $MResetZ$ operation guarantees the reset). We keep track of the number of ones received by keeping a running total of the measurements. Naturally, the number of iterations minus the number of ones gives us the number of zeros obtained in the measurement.

Below is the C# driver used to orchestrate this Q# operation. We will use this driver for all the snippets that follow, so I will not repeat it again (only the invoked operation name will differ). The quantum operation is run 1000 times to give us a better statistical result set.

```csharp
class Driver
{
    static async Task Main(string[] args)
    {
        using var qsim = new QuantumSimulator();
        var iterations = 1000;
        Console.WriteLine($"Running qubit measurement {iterations} times.");
        var results = await Identity.Run(qsim, iterations);
        Console.WriteLine($"Received {results} ones.");
        Console.WriteLine($"Received {iterations - results} zeros.");
    }
}
```

We should get the following output when running this program:

```
Running qubit measurement 1000 times.
Received 0 ones.
Received 1000 zeros.
```

This is of course hardly surprising. The default state of the qubit is $\ket{0}$, which, when measured, is guaranteed to produce the classical bit 0. On top of that, applying the $I$ gate has no effect, other than the fact that the qubit was indeed acted upon – so the 100% rate of receiving 0s is quite obvious.

We will now turn our attention to the bit flip. The Q# code is identical to the one above, except that we will use the $X$ operation.

```qsharp
operation Bitflip(count : Int) : Int {
    mutable resultsTotal = 0;
    using (qubit = Qubit()) {
        for (idx in 1..count) {
            X(qubit);
            let result = MResetZ(qubit);
            set resultsTotal += result == One ? 1 | 0;
        }
        return resultsTotal;
    }
}
```

Running this, using the same type of C# driver, would produce the following result:

```
Running qubit measurement 1000 times.
Received 1000 ones.
Received 0 zeros.
```

This is also very much in line with our expectations. The default state of the qubit is $\ket{0}$, which the $X$ gate flipped to $\ket{1}$ – and that, when measured, produces the bit value 1 100% of the time.

The next thing worth experimenting with is to try to chain several gates together. For example, we already made a claim – not supported by any algebraic calculation though – that the following relation holds:

$$X = HZH$$

We could verify this easily using Q#. Our operation would look as follows:

```qsharp
operation HZH(count : Int) : Int {
    mutable resultsTotal = 0;
    using (qubit = Qubit()) {
        for (idx in 1..count) {
            H(qubit);
            Z(qubit);
            H(qubit);
            let result = MResetZ(qubit);
            set resultsTotal += result == One ? 1 | 0;
        }
        return resultsTotal;
    }
}
```

This code is yet again similar to the previous snippets, except that this time we apply a chain of gates – $H$, $Z$, $H$ – to the same qubit. Running this code, with the same type of C# driver as before, produces the following result:

```
Running qubit measurement 1000 times.
Received 1000 ones.
Received 0 zeros.
```

The result is identical to running the bit flip $X$ gate, so we have experimentally corroborated that $X = HZH$.

As we already mentioned, there are infinitely many possibilities for constructing single qubit gates. For the purpose of this series, the five we already discussed – the Hadamard gate $H$, the Pauli gates $X$, $Y$ and $Z$, and the identity gate $I$ – are the most important, and we will be using them repeatedly in the next parts. However, there are several other common and interesting gates, most importantly the so-called rotation gates.

The three rotation gates $R_x$, $R_y$ and $R_z$ are all generalizations of the Pauli $X$, $Y$ and $Z$ gates. The names of these gates come from the fact that all single qubit transformations can be visualized as rotations around the x, y and z axes of the so-called Bloch sphere. The Bloch sphere, named after another physicist connected to ETH Zürich, Felix Bloch, is commonly used to represent graphically (geometrically) the state of a two-dimensional quantum system, such as a qubit.

$R_x$, $R_y$ and $R_z$ are often used as parts of quantum algorithms and can be utilized, for example, to demonstrate the violation of Bell inequalities. They are represented by the matrices below:

$$R_x=\begin{bmatrix} \cos(\frac{\varphi}{2}) & -i\sin(\frac{\varphi}{2}) \\ -i\sin(\frac{\varphi}{2}) & \cos(\frac{\varphi}{2}) \end{bmatrix}$$

$$R_y=\begin{bmatrix} \cos(\frac{\varphi}{2}) & -\sin(\frac{\varphi}{2}) \\ \sin(\frac{\varphi}{2}) & \cos(\frac{\varphi}{2}) \end{bmatrix}$$

$$R_z=\begin{bmatrix} e^{-i\frac{\varphi}{2}} & 0 \\ 0 & e^{i\frac{\varphi}{2}} \end{bmatrix}$$

We mentioned that they are generalizations of the Pauli gates, and looking at the matrices closely, the relation between the rotation gates and Pauli gates should become a little bit more apparent. For example, the $R_z$ gate becomes the $I$ gate when $\varphi=0$, and it becomes the $Z$ gate – up to a global phase of $-i$, which has no observable effect – when $\varphi=\pi$.
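These limiting cases are easy to check numerically – a sketch of the rotation matrices as Python functions, following the conventions above:

```python
import cmath
import math

def Rx(phi):
    c, s = math.cos(phi / 2), math.sin(phi / 2)
    return [[c, -1j * s], [-1j * s, c]]

def Rz(phi):
    return [[cmath.exp(-1j * phi / 2), 0], [0, cmath.exp(1j * phi / 2)]]

def approx_eq(a, b, eps=1e-12):
    return all(abs(a[i][j] - b[i][j]) < eps for i in range(2) for j in range(2))

I = [[1, 0], [0, 1]]
Z = [[1, 0], [0, -1]]

assert approx_eq(Rz(0), I)                 # phi = 0 gives the identity
assert approx_eq(Rx(0), I)
minus_i_Z = [[-1j * e for e in row] for row in Z]
assert approx_eq(Rz(math.pi), minus_i_Z)   # phi = pi gives Z up to a global phase of -i
```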

Similarly to the Pauli gates, all three rotation gates are available in Q# via the $Rx (theta : Double, qubit : Qubit)$, $Ry (theta : Double, qubit : Qubit)$ and $Rz (theta : Double, qubit : Qubit)$ operations in the $Microsoft.Quantum.Intrinsic$ namespace, where $theta$ represents the desired rotation angle in radians.

Let's try invoking $R_x$ with a rotation angle of 45.0 and see the effects (note that since the angle is expressed in radians, this is a rotation by 45 radians, not 45 degrees):

```qsharp
operation Rx45(count : Int) : Int {
    mutable resultsTotal = 0;
    using (qubit = Qubit()) {
        for (idx in 1..count) {
            Rx(45.0, qubit);
            let result = MResetZ(qubit);
            set resultsTotal += result == One ? 1 | 0;
        }
        return resultsTotal;
    }
}
```

Executing this code produces the following result:

```
Running qubit measurement 1000 times.
Received 251 ones.
Received 749 zeros.
```

So a rotation by an angle of 45.0 radians around the X axis distributes the probabilities for obtaining one or zero roughly 0.25-0.75 – when measuring in the computational basis along the Z axis. The expected probability of measuring 1 is $\sin^2(\frac{45}{2}) \approx 0.24$, matching the result above. Had we wanted an actual 45° rotation, we would pass $PI() / 4.0$ (from the $Microsoft.Quantum.Math$ namespace) instead, for an expected probability of $\sin^2(\frac{\pi}{8}) \approx 0.15$.
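As a cross-check, the probability of measuring 1 after applying $R_x(\theta)$ to $\ket{0}$ is $\sin^2(\theta/2)$ – remembering that Q#'s rotation operations take the angle in radians:

```python
import math

def p_one(theta):
    # Probability of reading 1 after Rx(theta) applied to |0>: sin^2(theta / 2)
    return math.sin(theta / 2) ** 2

# The angle 45.0 passed to Rx above is 45 radians, giving roughly a 0.24 chance
# of reading a 1 - consistent with the measured 251/1000.
forty_five_radians = p_one(45.0)         # about 0.24
forty_five_degrees = p_one(math.pi / 4)  # about 0.15 for a true 45-degree rotation
```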

In this post, we discussed in depth several important quantum computing gates – Pauli gates $X$, $Y$, $Z$, as well as the identity gate $I$. In addition to that, we looked at the rotation gates too, as generalizations of the Pauli gates. In the previous post in this series, we already had a look at the Hadamard gate $H$.

All of this is still quite basic in terms of what we can do at the Q# code level, but we are slowly building up the necessary knowledge and amassing building blocks that will be extremely helpful when putting together quantum algorithms.

Equipped with this knowledge, we are ready to have a look at multi qubit gates and the algebraic foundations behind them – which we will do in the next part. We are also going to look at one of the more bizarre quantum phenomena – entanglement. After that, we will be ready to start putting it all to good use by exploring some quantum algorithms.

Support for decompilation was one of the most requested features for OmniSharp, so we are thrilled to finally be able to include it. It is based on the wonderful open source .NET decompiler, ILSpy – the same one that powers the decompilation features in Visual Studio. In fact, decompilation support in OmniSharp is very much modelled after how things work in Visual Studio.

Big thanks are also due to the ILSpy maintainers – Christoph and Siegfried – who pushed my lazy self throughout the process to get things finally done and helped us during the implementation phase.

The feature, like many other OmniSharp features, is opt-in for the time being, meaning it is disabled by default. If you'd like to enable it, you need to add the relevant OmniSharp configuration option:

- to enable it globally, use the *%USERPROFILE%/.omnisharp/omnisharp.json* file
- to enable it for a specific project only, use an *omnisharp.json* at the root of your workspace (typically at the root of the repository, next to the solution file)

If the file doesn't exist, just create it. Then the following setting is needed inside:

```json
{
    "RoslynExtensionsOptions": {
        "enableDecompilationSupport": true
    }
}
```

This is currently the only way of enabling decompilation – in the future, the C# Extension for VS Code will also get its own extension-level setting that you can add into the VS Code settings directly, without having to deal with *omnisharp.json*.

Once enabled, decompilation support is a drop-in replacement for navigating to metadata. In other words, when you try to go to the definition of a symbol that is not part of your source, but rather is externally imported from a DLL (normally a NuGet package reference or a direct DLL reference), instead of showing you the basic metadata view with just member names and signatures, we'll attempt to perform decompilation instead, and drop you into the relevant declaration location.

This is illustrated on the animation below – with an example of some APIs from the Newtonsoft.Json NuGet package reference.

There are some known limitations at the moment around reference assemblies – those are currently not resolved to the implementation assemblies. This is, I believe, on par with Visual Studio behavior though, which also doesn't do that (at least not always).

Hope you will find the feature useful!


Last time around, we briefly mentioned that when a qubit is in superposition, “it is both 0 and 1 at the same time”. This is a rather simplistic and not necessarily accurate description, but one that is commonly used in introductory texts and popular science articles, as it manages to convey the weirdness of conceptualizing quantum states.

Since we have a lot of room here, however, we have the luxury of emphasizing that such a statement is really stripped of any mathematical or physical substance. Thankfully, we already looked a little at the mathematics behind the qubit, so we are now well positioned to describe superposition more accurately. Namely, we know that the quantum state of a qubit is a **linear combination** of $\ket{0}$ and $\ket{1}$.

In other words, given the qubit state that we already discussed:

$$\ket{\varphi} = \alpha\ket{0} + \beta\ket{1}$$

we can say that a qubit is in a superposition when both amplitudes $\alpha$ and $\beta$ (and thus both probabilities $|\alpha|^2$ and $|\beta|^2$) are non-zero.

The ontological aspect of the superposition, and in a broader sense, the entire quantum theory, is a lot more blurry and depends on the epistemology we would choose to subscribe to.

In 1935, Austrian physicist Erwin Schrödinger published one of the most famous papers in the history of physics, “Die gegenwärtige Situation in der Quantenmechanik” (English translation is available here), in which he proposed a thought experiment around …a cat.

We'll let Schrödinger himself explain the experiment:

“A cat is penned up in a steel chamber, along with the following diabolical device (which must be secured against direct interference by the cat): in a Geiger counter there is a tiny bit of radioactive substance, so small that perhaps in the course of one hour one of the atoms decays, but also, with equal probability, perhaps none; if it happens, the counter tube discharges and through a relay releases a hammer which shatters a small flask of hydrocyanic acid. If one has left this entire system to itself for an hour, one would say that the cat still lives if meanwhile no atom has decayed. The first atomic decay would have poisoned it. The $\psi(x,t)$ function for the entire system would express this by having in it the living and the dead cat (pardon the expression) mixed or smeared out in equal parts.”

The cat has since gone on to establish itself as one of the symbols of quantum theory and has gained a firm position in popular culture. There have been countless references to the dead-and-alive-at-the-same-time cat in movies, books, video games and many other areas of our society. And just so that you see that physicists can have a good sense of humor too, have a quick look at one of the most popular quantum mechanics undergraduate textbooks – “Introduction to Quantum Mechanics” by David Griffiths – which has a cat that is alive on the front cover, and a cat that is dead on the back cover.

There are plenty of competing interpretations of quantum mechanics, which can be used to explain or at least interpret the paradox, the most prevalent of which is the “Copenhagen interpretation”, championed by Niels Bohr and several other physicists that were close to him, such as Werner Heisenberg. In the view adopted in the “Copenhagen interpretation”, the superposition or, for that matter, the wave function describing a quantum object, does not describe the nature of reality in any way, nor are we allowed to reason about the reality behind a quantum object until it's actually measured. In other words, it is impossible to dissociate the reality responsible for the quantum phenomena from the measurement itself, as we can only observe trace effects of quantum objects on the measurement apparatus and only infer their existence that way. This is fundamentally different from classical macro-scale physics, where systems, their state and behavior can be independently observed.

Within that view, superposition is a purely mathematical concept describing relations between various probability amplitudes of finding a particle at a given position or in a given quantum state. Thus, superposition is merely a tool that allows us to express our probabilistic expectations for the measurements. Within that thought framework, the cat paradox is no longer a paradox.

As Arkady Plotnitsky puts it in his excellent publication, with the elegance far superior to the parlance of this blog post:

“if (…) quantum states are seen (…), as defined by the $\psi$-function, strictly as mathematical entities encoding and enabling expectation catalogues concerning the outcome of possible experiments – rather than describing the behavior of quantum systems, in particular between experiments – Schrödinger's thought experiment presents no problem”.

Now that we have established (or rather, in the spirit of Copenhagen, stepped around) the ontological basics, let's look at how we can put our qubit in a superposition and what are the quantum computational consequences of doing so. Of course superposition is one of the critical aspects of quantum computing – without it, the qubit could only represent the two binary states of *0* and *1*, which would strip it of any possible advantages over classical computing.

In quantum mechanics, arbitrary transformations of the quantum state are not possible. Instead, time evolution of a quantum state is always represented by **linear unitary transformations of the associated vector space**. This is taken as an axiom, and is one of the postulates of quantum mechanics. The transformations are represented by matrices, and in order for a given transformation matrix $U$ to be unitary, its Hermitian adjoint $U^\dagger$ (a generalized version of the conjugate transpose $U^*$ from linear algebra) must be its inverse. In other words, $U$ must satisfy the condition:

$$U^\dagger U = I$$

where *I* is the identity matrix. In order to transform the state of a qubit into a **uniformly distributed superposition**, we linearly transform it using the Hadamard transformation, represented by the Hadamard matrix, named after the French mathematician Jacques Hadamard.

As we already mentioned, quantum transformations are described by matrices – more specifically, $2^n \times 2^n$ sized unitary matrices, where $n$ stands for the number of qubits the gate acts on.

In quantum computing, quantum state transformations are represented – similarly to the analogous concept from classical computing – by computational gates. We will look in detail at various gates and circuits in the next post in this series; for now we will just say that the Hadamard gate acts on a single qubit, hence its corresponding matrix size is 2×2. The Hadamard matrix *H* is shown below:

$$H=\begin{bmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \end{bmatrix}$$

Since $\frac{1}{\sqrt{2}}$ appears in every matrix element, we can factor it out:

$$H=\frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$$

Our Hadamard matrix is not just unitary, it is also self-adjoint (Hermitian) – it is equal to its own conjugate transpose. In other words:

$$H = H^\dagger$$

This means that the following holds:

$$HH^\dagger = HH = I$$
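This self-inverse property is again something we can confirm with a few lines of Python:

```python
s = 2 ** -0.5  # 1/sqrt(2)
H = [[s, s], [s, -s]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

HH = matmul(H, H)
# HH should be the identity matrix, up to floating point rounding.
assert all(abs(HH[i][j] - (1 if i == j else 0)) < 1e-12 for i in range(2) for j in range(2))
```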

We are starting to get side tracked by linear algebra again, so let's go back to our qubit. Suppose we start with a qubit prepared in $\ket{0}$. At this point we know its classical bit value would be *0*, because $\alpha = 1$ and $\beta = 0$, so the probability of measuring a *0* is 100%. We now apply the Hadamard gate by multiplying our ket by the Hadamard matrix.

$$H(\ket{0}) = \begin{bmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \end{bmatrix}\begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{bmatrix} = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ 0 \end{bmatrix} + \frac{1}{\sqrt{2}}\begin{bmatrix} 0 \\ 1 \end{bmatrix}$$

Similarly, for $\ket{1}$:

$$H(\ket{1}) = \begin{bmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} &-\frac{1}{\sqrt{2}} \end{bmatrix}\begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} \end{bmatrix} = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ 0 \end{bmatrix} - \frac{1}{\sqrt{2}}\begin{bmatrix} 0 \\ 1 \end{bmatrix}

$$

The above representation is usually more readable and understandable when beginning to work with quantum computing, especially without deeper background in physics. That said, we can also write this in a more compact Dirac notation:

$$H(\ket{0}) = \frac{1}{\sqrt{2}}\ket{0} + \frac{1}{\sqrt{2}}\ket{1} = \frac{1}{\sqrt{2}}(\ket{0}+\ket{1})

$$

$$H(\ket{1}) = \frac{1}{\sqrt{2}}\ket{0} - \frac{1}{\sqrt{2}}\ket{1} = \frac{1}{\sqrt{2}}(\ket{0}-\ket{1})

$$

The difference between the two cases (starting the process with $\ket{0}$ vs $\ket{1}$) is the sign of one amplitude (+ vs -), but notice that in order to obtain classical probabilities we need to square the amplitudes anyway, and therefore we are **equally likely (50%)** to get $\ket{0}$ or $\ket{1}$ in both cases. In other words, the Hadamard gate has created a uniformly distributed superposition for us in both situations.
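For readers who like to double-check the arithmetic, the matrix-vector products above can be reproduced with a few lines of plain Python (the helper names here are purely illustrative):

```python
import math

s = 1 / math.sqrt(2)
H = [[s, s],
     [s, -s]]

def apply(gate, state):
    # Standard 2x2 matrix times 2-element column vector multiplication
    return [sum(gate[i][j] * state[j] for j in range(2)) for i in range(2)]

ket0, ket1 = [1, 0], [0, 1]

h0 = apply(H, ket0)  # amplitudes [1/sqrt(2), 1/sqrt(2)]
h1 = apply(H, ket1)  # amplitudes [1/sqrt(2), -1/sqrt(2)]

# Squaring the amplitudes removes the sign difference:
# both states yield the same 50/50 measurement distribution
probs0 = [a ** 2 for a in h0]
probs1 = [a ** 2 for a in h1]
print(probs0)  # approximately [0.5, 0.5]
print(probs1)  # approximately [0.5, 0.5]
```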

Another interesting property of the Hadamard gate is that, like all quantum gates, it is reversible. That is not surprising, since gates are unitary operators, which means this property is guaranteed by the underlying mathematics. Thus, applying the Hadamard gate again returns us to the state we started from. This is a property that does not apply to all classical computing gates – for example, AND or XOR are not reversible.

In mathematical terms, we can verify this as follows (going back to $\ket{0}$):

$$\begin{bmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} &-\frac{1}{\sqrt{2}} \end{bmatrix}\begin{bmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$$

and (going back to $\ket{1}$):

$$\begin{bmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} &-\frac{1}{\sqrt{2}} \end{bmatrix}\begin{bmatrix} \frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$$

After a long and winding road, we have finally arrived at the happy place that allows us to write some Q# code. We can use the example from the previous blog post as the starting point – remember that we prepared a qubit there and then measured its value. As a reminder, all of that is shown again below (including the related C# driver code).

```qsharp
open Microsoft.Quantum.Canon;
open Microsoft.Quantum.Intrinsic;
open Microsoft.Quantum.Measurement;

operation MeasureQubits(count : Int) : Int {
    mutable resultsTotal = 0;
    using (qubit = Qubit()) {
        for (idx in 1..count) {
            let result = MResetZ(qubit);
            set resultsTotal += result == One ? 1 | 0;
        }
        return resultsTotal;
    }
}
```

```csharp
class Driver
{
    static async Task Main(string[] args)
    {
        using var qsim = new QuantumSimulator();
        var repeats = 1000;
        Console.WriteLine($"Running qubit measurement {repeats} times.");
        var results = await MeasureQubits.Run(qsim, repeats);
        Console.WriteLine($"Received {results} ones.");
        Console.WriteLine($"Received {repeats - results} zeros.");
    }
}
```

The code prepares a single qubit and measures it in the Pauli-Z basis, repeating this a given number of times (1000) and keeping a running tally of the 0s and 1s measured. As we already determined last time, such a setup will yield 100% 0s, since the qubit is in the basis state $\ket{0}$, and when we measure it in the Pauli-Z basis we are guaranteed to get a *0*.

In order to create the uniformly distributed superposition, we can sneak the Hadamard gate – represented by the aptly named *H()* operation in Q# – into the exact same code snippet, right before we measure the qubit. This is shown next.

```qsharp
open Microsoft.Quantum.Canon;
open Microsoft.Quantum.Intrinsic;
open Microsoft.Quantum.Measurement;

operation MeasureQubits(count : Int) : Int {
    mutable resultsTotal = 0;
    using (qubit = Qubit()) {
        for (idx in 1..count) {
            H(qubit);
            let result = MResetZ(qubit);
            set resultsTotal += result == One ? 1 | 0;
        }
        return resultsTotal;
    }
}
```

If we now run this code using the same C# driver code as we had last time, we should see result that is close to this:

```
Running qubit measurement 1000 times.
Received 498 ones.
Received 502 zeros.
```

We can pause here for a moment, and marvel at our astonishing achievement. Sure, this is running on a quantum simulator only, but should this Q# code be deployed to a real Q# compatible quantum computer (which, hopefully, will be possible soon using Azure Quantum), it would, in principle, work the same way. So let's squint our eyes for a moment and imagine that it is indeed a real quantum computer that we are interacting with. What we have tapped into here, is true, nature-guaranteed randomness. The bits are fully random, with 50% chance of being 0 or 1, and those probabilities are guaranteed by the laws of quantum mechanics and the underlying mathematics we already looked at.
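As a sanity check of those statistics, we can mimic the experiment classically. The sketch below (plain Python, a stand-in for the simulator, not real quantum randomness) draws 1000 samples where each shot returns a one with probability $|\frac{1}{\sqrt{2}}|^2 = 0.5$:

```python
import random

def run_shots(shots, p_one=0.5, seed=42):
    # Each simulated "measurement" yields One with probability p_one
    rng = random.Random(seed)
    ones = sum(1 for _ in range(shots) if rng.random() < p_one)
    return ones, shots - ones

ones, zeros = run_shots(1000)
print(f"Received {ones} ones.")
print(f"Received {zeros} zeros.")
```

The counts hover around 500/500, just like the simulator output – but this randomness is merely pseudo-random; only a quantum device gives the nature-guaranteed kind.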

The true randomness right at the heart of our reality might be very difficult to make peace with. In a 1926 letter to Max Born, Albert Einstein, who couldn't agree with the probabilistic core of quantum mechanics or Heisenberg's uncertainty principle, and was convinced the ultimate nature of reality is fundamentally deterministic, wrote:

“The theory produces a good deal but hardly brings us closer to the secret of the Old One. I am at all events convinced that He does not play dice.” (org. “Die Theorie liefert viel, aber dem Geheimnis des Alten bringt sie uns kaum näher. Jedenfalls bin ich überzeugt, daß der nicht würfelt.”).

We can close off this post by doing something useful, for a change. Let's leverage quantum superposition to build a true random number generator in Q#. At this point it should not be very surprising to anyone, given we just managed to generate random bits. As all computing students know, once we have enough bits we can form numbers out of them – such as, for example, 16-, 32- or 64-bit integers.

The final piece of our Q# code does just that, it generates an array of 32 random bits, which we can use to construct a truly random number.

```qsharp
operation RandomNumberGenerator() : Bool[] {
    mutable randomBits = new Bool[32];
    for (idx in 0..31) {
        using (qubit = Qubit()) {
            H(qubit);
            let result = MResetZ(qubit);
            set randomBits w/= idx <- result == One;
        }
    }
    return randomBits;
}
```

The code is very similar to the code we used earlier, and the Hadamard gate is of course at the center of all the interesting action, guaranteeing a uniformly distributed superposition. It is worth briefly mentioning the somewhat esoteric syntax used for updating elements of an array: *set array w/= idx <- value*. To be more precise, this is in fact a copy-and-update expression, since arrays in Q# are immutable. Therefore, this is really not the most efficient way of keeping track of our 32 random bits, but it was a good occasion to introduce this useful language construct.

Finally, the C# driver to run the code and then convert the random bits into a 32-bit unsigned integer looks like this:

```csharp
class Driver
{
    static async Task Main(string[] args)
    {
        using var qsim = new QuantumSimulator();
        var randomBits = await RandomNumberGenerator.Run(qsim);
        var bitString = string.Join("", randomBits.Select(x => x ? 1 : 0));
        Console.WriteLine($"Generated random bit string: {bitString}");
        Console.WriteLine($"Generated random uint32: {Convert.ToUInt32(bitString, 2)}");
    }
}
```

The output of this code would be something like:

```
Generated random bit string: 10011111001000010110000101100110
Generated random uint32: 2669764966
```
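The bit-string-to-integer step the C# driver performs with *Convert.ToUInt32(bitString, 2)* is plain base-2 parsing. For illustration, here is the same idea sketched in Python, with a short, made-up bit array for brevity:

```python
bits = [True, False, False, True, True, True, True, True]  # example bits only

# Join the bits into a binary string, most significant bit first...
bit_string = "".join("1" if b else "0" for b in bits)

# ...and parse it as a base-2 integer
value = int(bit_string, 2)
print(bit_string, value)  # 10011111 159
```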

And there we have it – we have achieved something useful with a quantum computer. In fact, if we think about it, it is something no classical computer can ever claim – a true random number, mathematically guaranteed and underpinned by the basic laws of nature.

In this part 2 of the blog post series we had a look at the epistemology, mathematics and Q# code behind superposition. We discussed the algebraic meaning of quantum transformations and also briefly touched upon the concept of quantum gates. Finally, we managed to unsettle Albert Einstein a bit and created our first somewhat useful program in Q#.

In the next post in this series we will look in more detail at quantum gates and discuss the important ones.

While it's still early days, we want OmniSharp users to be able to work with .NET 5 as soon as possible, so we have just added support for .NET 5 in OmniSharp.

Let me quickly walk you through what you need to do to take advantage of that.

The magic number is 1.34.15 – that's the build of OmniSharp that supports .NET 5, and it was published on 25 March. If you are consuming OmniSharp in an editor other than VS Code, you need to ensure that you use at least that version.

If you are in VS Code, with the C# Extension installed, you could – more conveniently than juggling the OmniSharp builds – install the pre-release 1.21.16 of the C# Extension, which can be found here. You'd need to download the VSIX and follow the simple instructions.

That pre-release build 1.21.16 already contains OmniSharp 1.34.15. This build should ship as stable to the VS Code Marketplace at some point next week (starting on Monday 30 March).

In principle, to support .NET 5, we need two things:

- the latest NuGet.* packages (*5.6.0*) – to be able to correctly handle the *net5.0* TFM (introduced in preview 2)
- *MsBuild 16.5.0*

When OmniSharp starts it needs to choose the most appropriate MsBuild to use. It will use one of the two options:

- use its own bundled minimal MsBuild (which, in this case, supports .NET 5 already)
- use the global MsBuild from your global Mono installation (if you are on *nix and have Mono >= 6.4.0) or from your global Visual Studio installation (if you are on Windows and have VS >= 16.3)

If you are in the situation where the first option takes effect (so you have no global Mono or no global VS), then the aforementioned OmniSharp 1.34.15 guarantees that you will have .NET 5 preview support.

If you are in the second category (so you do have global Mono or global VS), then even if you have OmniSharp 1.34.15 / VS Code Extension 1.21.16, an outdated global Mono or VS will prevent you from enjoying support for .NET 5. In those cases you'd need to upgrade:

- global Mono to at least 6.10.0, which is the preview version at the time of writing
- Visual Studio to at least 16.6P2

Additionally, though this is generally not recommended, if you are on a *nix system, you can disable the usage of global Mono completely by adding the following VS Code setting:

"omnisharp.useGlobalMono": "never"

This will actually force OmniSharp to be launched only with its bundled minimal MsBuild and its own bundled minimal Mono runtime.

In the future we will raise the minimum versions of the discovered global Mono (as mentioned, currently >= 6.4.0) and global VS (currently >= 16.3) to align with .NET 5 support, however at the moment it is too early to do so. Hope this helps.

Today, we are at the dawn of the quantum computing age, a multidisciplinary field that sits at the intersection of quantum physics, computer science, mathematics and chemistry and may revolutionize the world of computing and software engineering.

In this post I am starting a new series that will, through the lens of a .NET developer, introduce the basics of quantum computing – using examples in Q#.

Historically, quantum computing dates back to the early 1980s, when Paul Benioff theorized a Turing machine that was a microscopic quantum mechanical model. Roughly at the same time, completely independently in the Soviet Union (on the wrong side of the Berlin Wall, as we said when I was a kid on that wrong side), the brilliant Russian physicist Yuri Manin also proposed the concept of building a quantum computer.

Soon after, in 1981, Richard Feynman delivered his famous lecture Simulating Physics with Computers at the 1st MIT Physics and Computation conference. In the lecture Feynman argued that in order to truly simulate nature, which at its core is quantum mechanical, we should not use classical computing, but quantum computing. Given Feynman's profile, this, in hindsight seemingly obvious observation, is widely regarded as the moment that jump started the area of quantum computing.

What followed was plenty of progress in terms of theory and algorithms, but the field always lacked the hardware to make it all reality. Only in recent years have we finally reached the point at which quantum computers are no longer theoretical devices. Quantum hardware is getting more and more powerful, stable and accessible to the masses. Since 2016, IBM Q Experience has provided access to quantum computers via the cloud (the smallest one for free!), while AWS Braket and Azure Quantum are both offering public preview programs with the goal of launching soon. Smaller companies and startups are also disrupting the field; for example AQT, an Innsbruck based company, offers access to ion-trap quantum computers in the cloud too.

Additionally, a lot is happening around bringing the quantum experience closer to regular software developers. For example, Microsoft is currently building Q#, a high level programming language specifically tailored for quantum programming, IBM founded Qiskit, a Python framework for quantum computing, while Google AI Quantum Team started Cirq, another Python framework for quantum development.

All these efforts lead to what I like to call, **democratization** of quantum computing. You no longer have to be a theoretical physicist, a PhD researcher or work for a large company with massive R&D resources to be able to access and program quantum computers – and that process will only continue to speed up.

It is impossible to talk about quantum computing without a little background on quantum mechanics. Contrary to general relativity, which we owe entirely to the brilliance of Albert Einstein, quantum mechanics, as is commonly known, has many pioneers who contributed to its formation.

However, if we had to single out one man who played the pivotal role in the history of quantum mechanics, I'd say it was Werner Heisenberg, who published the foundations of what became modern quantum mechanics in his 1925 paper Über quantentheoretische Umdeutung kinematischer und mechanischer Beziehungen. Following the paper, the theory was further refined, contributed to and developed by Heisenberg himself and many other brilliant physicists – Paul Dirac, Max Born, Pascual Jordan, Wolfgang Pauli and others (including especially profound contributions, sort of against his will, from Erwin Schrödinger, but that's a story for a separate day).

Heisenberg realized that it is impossible to use the paradigms of macro-scale physics to correctly describe the behavior of subatomic particles. The genius of Heisenberg was that he abandoned the approach that was at the very core of classical physics – describing the nature of reality (in this case particles) using idealized mathematical models and a realism based on deterministic results. Instead, he introduced a new matrix-based mathematical formalism, built on an algebraic, probabilistic approach for predicting the outcome of experiments. He called that "new kinematics" and a "calculus of observable quantities". In other words, he realized that nature is random (no pun intended) by nature, took the equations of classical physics and mathematically reinterpreted them. The departure from the idealized, deterministic approach of classical physics was a profound, brilliant step, especially considering that he introduced noncommutativity, which didn't exist in theoretical physics at the time. Heisenberg realized that at the quantum level, observable properties such as momentum and position should not commute.

As Arkady Plotnitsky put it in his excellent book Epistemology and Probability, “this invention of new variables brought physics, mathematics and philosophy into a new (as against classical physics) type of relationships. Each of these steps just mentioned is remarkable enough. Jointly, they constitute a truly extraordinary accomplishment, rivalling or in any event standing its ground against those of Newton and Einstein”.

Throughout this series we will continue to look back at some of the historical context of the development of quantum mechanics and the radical epistemological challenges it posed.

While the field of quantum computing (and quantum information theory) is an offspring of quantum mechanics, it is not absolutely necessary to be fluent in quantum mechanics to program a quantum computer and to use some of the high level languages and frameworks we mentioned earlier.

Of course, familiarity with quantum mechanics is going to be advantageous at the theoretical level – to be able to grasp, for example, the spin concept. Knowledge of QM may also be necessary for you to formulate and solve real life problems with quantum computers. But strictly speaking, many of the cornerstones of quantum mechanics, like solving the time-dependent Schrödinger equation, do not really play much of a role in quantum computing. So if you are not feeling too comfortable with quantum mechanics, take careful steps forward and see how it feels; there may be more unfamiliarity and weirdness – but it shouldn't discourage you from attempting to learn quantum computing.

What is needed to get started, though, is a decent understanding of linear algebra.

Before we jump into the mathematics, let's discuss a little Q# and how it fits into the picture. As mentioned earlier, there are various ways of writing programs for quantum computers, many of which revolve around Python. That said, I am personally really excited about Q#, and for a number of reasons.

First of all, I am personally a .NET developer, and in that sense, Q# is a natural fit. The Quantum Development Kit for Q# is actually built on top of the .NET Core SDK, which makes the whole experience very familiar and intuitive for developers used to that toolchain. Q# programs are compiled and executed from the *dotnet* CLI, the libraries are distributed using the NuGet package manager and the project file is a standard *csproj* project file used for C# or F# development, with a custom SDK defined in it. That integration level is very similar to IL projects, which we already discussed on this blog.

Secondly, it's really appealing to have a language specifically designed for the quantum computing experience. This allows a lot of quantum specific concepts – such as, for example, adjoints – to fit naturally into the language, instead of feeling like a bolt-on to a general purpose language. Syntactically, Q# looks like a mix of C# and F#, and therefore familiarity with those languages will make entry into Q# easier.

Finally, QDK has extensions for the editors/IDEs known from the .NET world – my recommendation is to use VS Code as it is lightweight and cross platform, but if you prefer, there is an extension for Visual Studio too.

Generally speaking, the Q# programming model allows you to write code for quantum hardware in a way where the quantum hardware is treated as a coprocessor (much like a GPU is). In that sense, your main program can be C# or even Python based, and for given operations you'd call into your Q# code to execute a given set of instructions on a quantum device (or a simulator, when running locally). The QDK takes care of the interoperability between the host program (C#, Python) and the Q# code itself.

To get started, we will need to install the Quantum Development Kit from Microsoft Research on our machines. The linked page contains instructions for installing the QDK as well as the necessary project templates and the editor extensions. I recommend that you pause for a moment here, and go ahead and set up the QDK now.

When starting to learn quantum computing, a decent place to begin is the notion of a qubit: just as in classical computers everything is based on bits, in quantum computing computations are carried out by qubit manipulation. The main difference between classical bits and qubits is that qubits, instead of only taking one of the two discrete (binary) values, can also be in a superposition state. We'll explain the notion of superposition in a lot more detail in the next post, so bear with me, but for now we can say that when in superposition, they are both 0 and 1 at the same time.

There is really no way to reason about qubits without discussing their mathematical representation and using some (hopefully not too complicated) linear algebra, so let's have a look.

The state of a single qubit is described by a single vector $\begin{bmatrix} \alpha \cr \beta \end{bmatrix}$ in a two dimensional Hilbert space. More generally, we can say that a qubit is a quantum system in which we can select **two linearly independent states** representing 0 and 1, and which can be modeled using a **two dimensional complex vector space**.

While hardware design is out of scope for this series, in terms of physical implementation qubits could be realized using, for example, electron spins or photon polarizations.

When dealing with qubits, we have to reason about them in terms of the mathematical concept of a basis. There are always infinitely many bases to choose from (as long as the two distinguished chosen states of 0 and 1 are orthonormal). In quantum computing, the two basic unit vectors $\begin{bmatrix} 1 \cr 0 \end{bmatrix}$ and $\begin{bmatrix} 0 \cr 1 \end{bmatrix}$ form the so-called **computational basis**.

In addition, we know from linear algebra that vectors can be written as linear combinations of basis vectors. As such, a qubit state $\ket{\varphi}$ can always be described as:

$$\ket{\varphi} = \alpha\begin{bmatrix} 1 \\ 0 \end{bmatrix} + \beta\begin{bmatrix} 0 \\ 1 \end{bmatrix}$$

In quantum mechanics, $\alpha$ and $\beta$ would be complex numbers, since we are really dealing with a two-dimensional complex-valued vector here, but for the simplicity of this discussion, we can assume those are real numbers for now.

In the Dirac notation, which is prevalent in quantum mechanics, we can express $\begin{bmatrix} 1 \cr 0 \end{bmatrix}$ and $\begin{bmatrix} 0 \cr 1 \end{bmatrix}$ as $\ket{0}$ and $\ket{1}$, respectively.

$$\ket{\varphi} = \alpha\ket{0} + \beta\ket{1}$$

The main reason to use the Dirac notation is that, aside from being quite succinct, it is also independent of the basis chosen.

One of the fundamental strengths of quantum computers lies in the fact that a qubit may be in a superposition state, and we can use that fact to our advantage in our algorithms. However, as soon as it is measured (in a certain basis, of course), its state (value) always collapses to one of the two basis states, either $\ket{0}$ or $\ket{1}$. This also leads us to another weird aspect of quantum mechanics – measurement of a quantum state will change that quantum state.

A word of caution here. The choice of basis is fundamentally important – superposition is basis-dependent; in other words, a state is always in superposition with respect to some bases and not others. The same notion applies to measurement: when a state is measured in certain bases it will produce deterministic results, while in others it will produce random results.

However, we digressed a bit, so let's go back to our qubit. We refer to $\alpha$ and $\beta$ as **probability amplitudes**. We can relate the amplitudes to the actual classical probability of obtaining the state $\ket{0}$ or $\ket{1}$ using the Born rule, with the amplitudes satisfying the normalization condition:

$$|\alpha|^2 + |\beta|^2 = 1$$

The classical probability of collapsing to $\ket{0}$ is therefore $|\alpha|^2$, and conversely, the probability of collapsing to $\ket{1}$ is $|\beta|^2$. This is actually one of the axioms of quantum mechanics. The rule itself is not derived from anything, it is instead given, based on experimental evidence only.
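A tiny numeric sketch of the rule (plain Python, real-valued amplitudes as assumed above; the helper name is made up for illustration):

```python
import math

def born_probabilities(alpha, beta):
    # The amplitudes must satisfy the normalization condition
    assert abs(alpha ** 2 + beta ** 2 - 1) < 1e-9, "state is not normalized"
    # P(|0>) = |alpha|^2 and P(|1>) = |beta|^2
    return alpha ** 2, beta ** 2

# Example state: alpha = sqrt(3)/2, beta = 1/2
p0, p1 = born_probabilities(math.sqrt(3) / 2, 1 / 2)
print(p0, p1)  # approximately 0.75 and 0.25
```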

The final conclusion we can draw here is the following – until we measure it, the qubit state can be in one of infinitely many various superposition states, but we can only ever extract one classical bit out of it upon measurement. We are going to be discussing various qubit transformations in the upcoming posts.

Earlier in this post, we have mentioned the steps needed to install the QDK. We can now start with our first program.

The simplest way to do so is to use the *dotnet* CLI. The command below creates a new C# command line application with a Q# component. The name of the program is inferred from the name of the current folder.

dotnet new console -lang "Q#"

As previously discussed, the C# program acts as a "host application" here, while we can at any point yield to the Q# part of our application to execute a quantum operation. The template generates a default program that is not particularly exciting; here is how it should look if everything worked correctly. C#:

```csharp
using System;
using Microsoft.Quantum.Simulation.Core;
using Microsoft.Quantum.Simulation.Simulators;

namespace QubitExample
{
    class Driver
    {
        static void Main(string[] args)
        {
            using (var qsim = new QuantumSimulator())
            {
                HelloQ.Run(qsim).Wait();
            }
        }
    }
}
```

Q#

```qsharp
namespace QubitExample {
    open Microsoft.Quantum.Canon;
    open Microsoft.Quantum.Intrinsic;

    operation HelloQ() : Unit {
        Message("Hello quantum world!");
    }
}
```

You can run the program using the dotnet CLI with the regular *dotnet run* command, and it should print:

Hello quantum world!

Let's adapt this out-of-the-box template into something more useful that will allow us to check some of the statements we made about qubit behavior. Our first interaction with qubits will be building a small program that allocates some qubits, measures their values and prints the results.

At the Q# code level, we will change the quantum operation result from *Unit* (which is semantically equivalent to *void* in C#) to an *Int*, since we will want some data to flow back to us. We allocate a qubit with a *using* statement; you can allocate multiple qubits at once if you need to, but in our case we will stick to single qubit operations. A newly allocated qubit is, by convention, automatically initialized to the $\ket{0}$ state. Once the qubit is no longer needed, it must be reset back to the $\ket{0}$ state and safely released.

As mentioned, to extract the classical bit out of a qubit, we must measure it. In Q# we can measure using the *Measure* operation, specifying the basis we want to use. In our case, we are interested in measuring the qubit in the computational basis, known as the Pauli-Z basis.

The sample code is shown below:

```qsharp
namespace QubitExample {
    open Microsoft.Quantum.Canon;
    open Microsoft.Quantum.Intrinsic;
    open Microsoft.Quantum.Measurement;

    operation MeasureQubits(count : Int) : Int {
        mutable resultsTotal = 0;
        using (qubit = Qubit()) {
            for (idx in 1..count) {
                let result = Measure([PauliZ], [qubit]);
                set resultsTotal += result == One ? 1 | 0;
                Reset(qubit);
            }
            return resultsTotal;
        }
    }
}
```

As input we pass an integer representing the number of times we should run a measurement on a fresh qubit (a new qubit for the first run, and then a reset qubit for subsequent runs). We then keep a running total of the results. If at the end *resultsTotal = 0*, it means we only got zeros; if *resultsTotal = count*, it means we only got ones; and anything in between means the measurements were random.

Our updated C# code (including small tweaks to reduce nesting and make it more C# 8 friendly) to invoke this looks as follows:

```csharp
static async Task Main(string[] args)
{
    using var qsim = new QuantumSimulator();
    var repeats = 100;
    Console.WriteLine($"Running qubit measurement {repeats} times.");
    var results = await MeasureQubits.Run(qsim, repeats);
    Console.WriteLine($"Received {results} ones.");
    Console.WriteLine($"Received {repeats - results} zeros.");
}
```

We can now run this code and see what happens. The result is below:

```
Received 0 ones.
Received 100 zeros.
```

We got 100 zeros in 100 attempts, which is quite encouraging. Remember that we said that newly initialized qubits are in the $\ket{0}$ state, and the output of the program seems to agree. We also never did anything to put the qubit into a superposition (this will be covered in the next post in this series), so no randomness should occur either.

We can verify one other claim at this point. We said that we can measure the qubit in various bases, and choosing the basis is critical for getting a deterministic or probabilistic value. To check that, let's measure in a different basis – for example Pauli-X.

It's a small change in our code – just replace *PauliZ* with *PauliX* in our *Measure* invocation.

```qsharp
operation MeasureQubits(count : Int) : Int {
    mutable resultsTotal = 0;
    using (qubit = Qubit()) {
        for (idx in 1..count) {
            let result = Measure([PauliX], [qubit]);
            set resultsTotal += result == One ? 1 | 0;
            Reset(qubit);
        }
        return resultsTotal;
    }
}
```

When we run our program, we should see something like this:

```
Received 52 ones.
Received 48 zeros.
```

The distribution is not ideal, because the sample size is small, but the pattern is pretty clear. This aligns with our earlier statement that **a quantum state is always in superposition with respect to certain bases and not in superposition to others**. This is quite profound, and we'll discuss superposition extensively next time.
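The mathematics behind this behavior can be sketched quickly: rewriting $\ket{0}$ in the Pauli-X eigenbasis $\{\ket{+}, \ket{-}\}$, where $\ket{+} = \frac{1}{\sqrt{2}}(\ket{0}+\ket{1})$ and $\ket{-} = \frac{1}{\sqrt{2}}(\ket{0}-\ket{1})$, gives equal amplitudes for both outcomes. A small plain-Python illustration:

```python
import math

s = 1 / math.sqrt(2)
plus = [s, s]    # |+> written in the computational basis
minus = [s, -s]  # |-> written in the computational basis
ket0 = [1, 0]

def inner(a, b):
    # A real-valued inner product is sufficient here
    return sum(x * y for x, y in zip(a, b))

# Amplitudes of |0> with respect to the X basis
amp_plus = inner(plus, ket0)    # 1/sqrt(2)
amp_minus = inner(minus, ket0)  # 1/sqrt(2)

# Born rule: both outcomes are (approximately) 50% likely
print(amp_plus ** 2, amp_minus ** 2)
```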

Before we finish for today, one additional note. The Q# code we wrote is actually unnecessarily verbose. The language and its core library ship with a ton of shortcuts and utilities that make quantum code succinct and pleasant. In our sample, we can actually collapse the measurement in the standard basis (Pauli-Z) and the reset operation into a single one – *MResetZ*.

The updated code is shown below.

```qsharp
operation MeasureQubits(count : Int) : Int {
    mutable resultsTotal = 0;
    using (qubit = Qubit()) {
        for (idx in 1..count) {
            let result = MResetZ(qubit);
            set resultsTotal += result == One ? 1 | 0;
        }
        return resultsTotal;
    }
}
```

In this blog post we looked at the historical background of quantum mechanics and discussed how we currently find ourselves at a breakthrough point, with a booming landscape of quantum hardware and software solutions, democratizing access to quantum computing.

We had a look at how to get started with QDK and Q# and explored the mathematical notion of a qubit. Finally, we had a look at some basic qubit measurement behavior using Q# and the quantum simulator.

In the next post in this series we will explore the mathematics and Q# code related to superposition.

I've recently had some conversations with folks about that, and it occurred to me that this is not necessarily all that obvious to everyone. That's because there are a few ways of doing that, so let's quickly run through them.

In .NET Core 2.x and earlier, you could register the MVC framework in the ASP.NET Core dependency injection container in two ways:

- *services.AddMvc()*
- *services.AddMvcCore()*

In ASP.NET Core 3.x, aside from those two approaches, there are three **additional** ways:

- *services.AddControllers()*
- *services.AddControllersWithViews()*
- *services.AddRazorPages()*

Let's discuss the differences between them, starting from the most “lightweight” to the most “heavyweight”.

*AddMvcCore()* registers all the core services required for the MVC application to work at all. We do not need to list them all, but pretty much everything related to the controller invocation pipeline gets activated there. These are low(er) level services, which only get customized when you are doing something quite complex or unusual (e.g. building a CMS). Some examples of them are: the controller activation services, the MVC options pipeline, application model provider infrastructure, action constraints, the filter pipeline, model binder infrastructure, action result executors and a few more.

At the same time, the initialized framework configuration is completely “bare bones”. It is functional from the perspective of being able to handle an incoming HTTP call, but it is missing several core features. For example, model validation via data annotations is not activated, and the same goes for authorization.

In this set-up, you are in control of (or, if you will, you are responsible for) what is plugged in and used at runtime. In other words, if you need anything beyond the most basic framework feature, you have to add it manually. In fact, in .NET Core 2.x and earlier, not even JSON support was there; this has now changed and the *System.Text.Json* formatter is actually already included in the call to *AddMvcCore()*.

For example:

// pick what you need
services.AddMvcCore()
    .AddDataAnnotations() // for model validation
    .AddApiExplorer();    // for Swagger

This should be the default choice for you if you want to bootstrap the minimal amount of things at runtime and only activate the individual features you actually use.

*AddControllers()* was introduced in ASP.NET Core 3.0 as a mechanism to simplify the manual setup otherwise needed when calling the lightweight *AddMvcCore()*.

What you get with *AddControllers()* is:

- everything that *AddMvcCore()* does
- authorization services – needed for authorization policies, filters and other authorization components to work
- API explorer – required if you want to build dynamic API documentation, generate Swagger/OpenAPI files
- data annotations – needed for model validation with attributes and *IValidateableObject* to work
- formatter mappings – needed for content negotiation to work
- CORS

In other words, what you can expect from *AddControllers()* is that it would give you the most comfortable setup for API development. None of the view services are registered here so you don't “drag” any of the Razor related baggage with you. What's worth noting is that the name itself – *AddControllers()* – sort of blurs the line between the ASP.NET Core and the MVC framework, as it doesn't really tell you at first glance that you are activating the MVC framework.

This should be the default choice for you if you are developing an API and want to quickly and reliably bootstrap the framework.

// ready for API development
services.AddControllers();

*AddControllersWithViews()* is the one you should pick if you are building a “classic” MVC site, just like we have been doing for years – with controllers and Razor views. It will end up activating:

- everything that *AddControllers()* does
- views functionality – explicitly registers the Razor view engine
- cache tag helper

This should be the default choice for you if you do not need the new Razor Pages functionality – either you are building the MVC website exactly how it was built in the old desktop framework MVC and in earlier versions of ASP.NET Core MVC, or you are migrating an older site.

// ready for "classic" MVC website development
// and at the same time ready for API development
services.AddControllersWithViews();

*AddRazorPages()* is intended to serve as a bootstrapping helper for working with the new Razor Pages feature. Under the hood, it ends up activating the following:

- all the core Razor Pages features
- everything that *AddMvcCore()* does – this is a bit surprising at first glance
- authorization services – needed for authorization policies, filters and other authorization components to work
- data annotations – needed for model validation to work
- cache tag helper

The fact that it ends up activating *AddMvcCore()* is an internal implementation detail, since the Razor Pages pipeline relies on a lot of the core MVC infrastructure under the hood. As a side effect, it means that when calling *AddRazorPages()* you are sort of ready to do API endpoints too. This may change in the future, and therefore I wouldn't make strategic decisions based on that. In other words, even though we could now say:

// ready for Razor Pages development
// and at the same time quite ready for API development
services.AddRazorPages();

if you ever need to host an API and Razor Pages in the same project, I'd recommend making these activations explicit, so that you don't get surprised in the future when something changes internally:

// ready for Razor Pages development
// ready for API development
services.AddRazorPages();
services.AddControllers();

Of course, *AddRazorPages()* should be your default choice if you plan to work with Razor Pages.

Finally, we have *AddMvc()*, which simply registers the entire kitchen sink of all the features. It gives you:

- everything that *AddControllersWithViews()* does
- everything that *AddRazorPages()* does

// ready for everything
services.AddMvc();

I'd imagine you know what you are trying to build – but if you ever have any doubts about the direction in which your project will evolve, or if you are afraid that some MVC feature might be missing (or in fact, if you have already run into a missing feature), calling *AddMvc()* is the safest bet to resolve any of those worries or issues.

In this post I wanted to show you a few such lesser-known features.

Typically, tweaking different OmniSharp features or settings happens via OmniSharp options, and the linked document describes in detail the process that OmniSharp uses to discover its configuration. To not make this article unnecessarily complicated, we can summarize the whole thing with the following:

- for global settings, use the *%USERPROFILE%/.omnisharp/omnisharp.json* file
- for project-specific settings, use *omnisharp.json* at the root of your workspace (typically at the root of the repository, next to the solution file)

If the file doesn't exist, just create it. The file is normally respected in real time too, so any changes should be reflected without having to restart the editor.

Specifically in the C# extension for VS Code, for some settings there is also a special shorthand syntax that allows you to enable certain features directly through **VS Code settings**; VS Code then feeds these into OmniSharp at startup (more on that later).

As a warm up, I wanted to remind everyone that both Roslyn analyzers and *.editorconfig* files are supported in OmniSharp; they just have to be enabled via the config, as they are switched off by default.

I blogged about both of those earlier – analyzers post, editorconfig post – you can follow these links to learn more about these features. Here we will only mention that they can be enabled by adding the following settings to your *omnisharp.json* file:

{
    "RoslynExtensionsOptions": {
        "enableAnalyzersSupport": true
    },
    "FormattingOptions": {
        "enableEditorConfigSupport": true
    }
}

As I briefly mentioned, some of the OmniSharp features have a shortcut directly via the standard VS Code settings. That is in fact the case for both of the above, and as such, you can also enable them that way:

"omnisharp.enableEditorConfigSupport": true,
"omnisharp.enableRoslynAnalyzers": true

The advantage of using *omnisharp.json*, however, is that you could commit that file to source control and other developers working on your project would get OmniSharp to behave the same way as on your machine.

Just like Visual Studio, OmniSharp has support for more sophisticated symbol renaming – renaming a symbol can propagate to comments or strings, and renaming a method symbol can also rename its overloads.

The configuration options are as follows:

{
    "RenameOptions": {
        "RenameInComments": true,
        "RenameOverloads": true,
        "RenameInStrings": true
    }
}

The practical consequences are the following.

1) Comments

using System;

namespace ConsoleApplication
{
    /// <summary>
    /// This program performs an important work and calls Bar.
    /// </summary>
    public class Program
    {
        static void Bar() {}
    }
}

Renaming the *Bar()* method in the snippet above would propagate the rename into the comment that sits over the *Program* class.

2) Overloads

public class Foo
{
    public void DoStuff() {}
    public void DoStuff(int foo) {}
    public void DoStuff(int foo, int bar) {}
}

Renaming any of the *DoStuff* overloads from the example above would rename all of its “siblings” too. The rename is of course done over the semantic model, and all usages of all the overloads are properly adjusted too.

3) Strings

namespace ConsoleApplication
{
    public class Bar
    {
        public static string Name = "Bar";
    }
}

Here, renaming the class *Bar* would also rename the *"Bar"* string literal assigned to *Name*.

When running code formatting, OmniSharp can automatically organize your imports. This doesn't mean it would remove unused usings – there is a separate code fix for that, and this feature operates on raw syntax trees only – but they would get **sorted**.

You can enable it the following way via *omnisharp.json*:

{
    "FormattingOptions": {
        "OrganizeImports": true
    }
}

With this switched on, as soon as you invoke formatting in the editor, the imports will be organized in a way that *System* directives will be placed first, and all other ones will be alphabetically sorted afterwards.
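As an illustration (the non-*System* namespaces here are just made-up examples), a sketch of what the sorting does to a file's using directives:

```csharp
// before invoking formatting:
using Newtonsoft.Json;
using AutoMapper;
using System.Text;
using System;

// after formatting with OrganizeImports enabled –
// System directives first, everything else alphabetically:
using System;
using System.Text;
using AutoMapper;
using Newtonsoft.Json;
```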

At the moment the feature itself is not customizable any further – meaning you cannot select the rules that are used for import organization. In the future we will add the possibility to specify spacing between groups and whether System directives should come first or not – those are the “knobs” exposed by the Roslyn compiler.

For some background, this import organization feature was actually added to Roslyn public API surface to be consumed in the dotnet format tool, and we benefited from that in OmniSharp.

This is a neat one that solves quite an annoying default behavior. It impacts how code actions generate code and consists of two parts:

- controlling the order in which newly generated members should be added to a type
- defining whether newly generated properties should be auto-properties or have bodies that throw *NotImplementedException*

The annoying behavior – in my book – is our default for newly generated properties, which are always created as throwing properties; I personally think that in the overwhelming majority of cases users want auto-properties instead. However, since the Visual Studio default is also to generate throwing properties, we decided to keep that as a consistent default in OmniSharp too.

That said, the settings look as follows; in each case you can pick one of the two values:

{
    "ImplementTypeOptions": {
        "PropertyGenerationBehavior": "PreferAutoProperties|PreferThrowingProperties",
        "InsertionBehavior": "AtTheEnd|WithOtherMembersOfTheSameKind"
    }
}

We already discussed *PropertyGenerationBehavior* briefly. The second setting, *InsertionBehavior*, allows you to specify that new members are generated *AtTheEnd* of the type – the default behavior, *WithOtherMembersOfTheSameKind*, groups them by member kind (e.g. properties together).
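To make the property generation difference concrete, here is a sketch – the *INamed* interface and the two classes are hypothetical, just to show what the "implement interface" code action emits under each *PropertyGenerationBehavior* value:

```csharp
using System;

public interface INamed
{
    string Name { get; }
}

// with "PreferThrowingProperties" (the default), the code action emits:
public class PersonThrowing : INamed
{
    public string Name => throw new NotImplementedException();
}

// with "PreferAutoProperties", it emits an auto-property instead:
public class PersonAuto : INamed
{
    public string Name { get; }
}
```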

OmniSharp tries to have reasonable, stable default settings, and doesn't attempt to throw lots of complexity at you from the beginning. However, it has quite a few knobs and settings that may not be generally known, that can help make your C# experience a bit more pleasant.

If you are interested in more “insider” OmniSharp tips and tricks let me know and we could cook up another post or two in the future.

While there are various categories of EditorConfig conventions that you can use, in this post, I will show you how to enforce the formatting conventions (IDE0055) at build time.

The reason why the EditorConfig rule violations are not respected at build time is that, at the moment, the rules are defined in the *Microsoft.CodeAnalysis.CSharp.Features* assembly, which is only embedded into the editor and not known to the compiler outside of the IDE context. The rules also have lots of dependencies (e.g. Workspace-related ones) that prevent them from being shipped in the typical analyzer format – which would address the problem, as analyzers participate in the build both in the IDE and on the command line.

This is of course quite a shame, because the biggest value proposition of using an EditorConfig file in the first place is to share the settings across all contributors, as well as making sure the project style settings are respected during all types of build runs.

When using EditorConfig to configure the code style in your C# project, you can define rules belonging to three categories (actually four, because there is also the “default/basic” EditorConfig indentation stuff):

- formatting conventions
- language conventions
- naming conventions

Unfortunately at the moment, for the latter two we cannot do much. But there is already a solution for build-time enforcement for the formatting conventions.
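To make the three categories concrete, here is a minimal *.editorconfig* sketch – the specific rule choices below are just illustrative examples, not recommendations:

```ini
root = true

[*.cs]
# "default/basic" EditorConfig settings
indent_style = space
indent_size = 4

# a formatting convention (surfaced as IDE0055)
csharp_new_line_before_open_brace = all

# a language convention
csharp_style_var_for_built_in_types = true:suggestion

# a naming convention (a complete naming rule also needs
# matching symbol and style definitions)
dotnet_naming_rule.interfaces_begin_with_i.severity = suggestion
```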

The formatting conventions all generate the same diagnostic ID – IDE0055 – and, it turns out, there is a prototype port of all those EditorConfig formatting-specific rules to the analyzer infrastructure. The Roslyn team has created a “fix formatting” analyzer that does the same thing as the logic embedded into *Microsoft.CodeAnalysis.CSharp.Features*, and deals with IDE0055.

There is no stable version of the package yet, so in order to take advantage of this, you need to pull in the package from the Roslyn CI feed. The necessary feed definition for your *NuGet.config* is:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="roslyn" value="https://dotnet.myget.org/F/roslyn/api/v3/index.json" />
  </packageSources>
</configuration>

And the NuGet package reference, to add to your project file:

<PackageReference Include="Microsoft.CodeAnalysis.CSharp.CodeStyle" Version="3.6.0-beta1-20071-02">
  <PrivateAssets>all</PrivateAssets>
  <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
</PackageReference>

With this setup, the formatting rules are enforced via the analyzer infrastructure and you can benefit from them the same way as you'd do from regular analyzers.

The Roslyn team has been working really hard on making EditorConfig a first class citizen in the compiler. That probably warrants its own blog post – which I will try to come back to in the future. However, to keep this brief, the support for EditorConfig in the compiler is the critical aspect of making this functionality work – the compiler needs to read and understand the EditorConfig file, and then configure its analyzers accordingly.

What is even better is that, starting with Visual Studio 16.3, you can also define diagnostic severities in the EditorConfig file, and they'd be respected by the compiler. That applies to all diagnostics, including the formatting-specific *IDE0055*. The syntax is as follows:

dotnet_diagnostic.IDE0055.severity = error

This now replaces the old school *.ruleset* files and can be used as a single point of configuration for both Microsoft and 3rd party analyzer diagnostic severities, as well as the place to define EditorConfig conventions.
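As a sketch of that single point of configuration, the same `dotnet_diagnostic` syntax works for any analyzer rule ID – the third-party rule ID below is hypothetical, purely for illustration:

```ini
# an IDE formatting rule
dotnet_diagnostic.IDE0055.severity = error

# a Microsoft code analysis rule
dotnet_diagnostic.CA1062.severity = warning

# silencing a hypothetical third-party analyzer rule
dotnet_diagnostic.XYZ0001.severity = none
```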

Such diagnostic severity control via EditorConfig is not supported in VS Code + OmniSharp yet, but we are working on getting it in there very soon.

Hope this post helps and happy Editorconf-ing!

The IL code is then just-in-time (not always, but let’s keep things simple) compiled by the CLR/CoreCLR to machine code that can be run on the CPU. What I wanted to show you today is that with the new *Microsoft.NET.Sdk.IL* project SDK, it is actually quite easy to create and build projects in pure IL.

Let’s have a look.

The new project system for .NET allows usage of custom project SDKs to support building a wide array of application types. You add an SDK to your project at the top of the project file, for example, the standard one for libraries and console apps is *Microsoft.NET.Sdk*.

<Project Sdk="Microsoft.NET.Sdk">

There are quite a few other project SDKs – ASP.NET Core uses *Microsoft.NET.Sdk.Web*, .NET Core workers use *Microsoft.NET.Sdk.Worker* and so on. For the most part these things do not really concern the developers, as they are set from the project template and then never change.

One interesting project SDK that is not widely known is *Microsoft.NET.Sdk.IL*. It allows us to write .NET code (libraries, applications) using pure IL, instead of a higher level language. This can then be built via the dotnet CLI, using ILASM – whether for desktop FX, .NET Core or .NET Standard.

The SDK is actually not published to the public NuGet feed – therefore, to take advantage of it, you need to add the custom CoreCLR NuGet feed to your *NuGet.config*.

https://dotnet.myget.org/feed/dotnet-core/package/nuget/Microsoft.NET.Sdk.IL

This is the old feed, but it’s where we can find the “old” version of the SDK that was built for .NET Core 3.0. On the new feed, there is currently only the latest build available, which is already adapted towards .NET 5.0 (or .NET Core 5.0, the name is apparently not set yet).

Once the feed is added, you could also add the following *global.json* file, indicating that the SDK of this particular version should be used:

{
  "msbuild-sdks": {
    "Microsoft.NET.Sdk.IL": "3.0.0-preview-27318-01"
  }
}

Equipped with such a setup, we can create our project. The extension for IL projects is *ilproj*, but structurally it is very similar to *csproj*.

In my case it looks like the following:

<Project Sdk="Microsoft.NET.Sdk.IL">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
    <MicrosoftNetCoreIlasmPackageVersion>3.0.0-preview-27318-01</MicrosoftNetCoreIlasmPackageVersion>
    <IncludePath Condition="'$(TargetFramework)' == 'netstandard2.0'">include\netstandard</IncludePath>
    <IlasmFlags>$(IlasmFlags) -INCLUDE=$(IncludePath)</IlasmFlags>
  </PropertyGroup>
</Project>

We are specifying here the reference to the *Microsoft.NET.Sdk.IL* SDK again, as well as the version of the ILASM package, which should be the same as the SDK itself.

I will be building for .NET Standard 2.0 only, but there is nothing preventing you from targeting other frameworks. For each of the supported frameworks, you could create an include folder in order to include certain common references. In my case, I will include the reference to the CorLib. To do that I create the path *include\netstandard\coreassembly.h* with the following contents:

#define CORE_ASSEMBLY "System.Runtime"

.assembly extern CORE_ASSEMBLY
{
  .publickeytoken = (B0 3F 5F 7F 11 D5 0A 3A )
  .ver 4:0:0:0
}

At this point, the only thing left is to write some IL code. I therefore create a *File.il* and first need to pull in the core lib include, as well as define the assembly and module. This is the bare minimum that is needed.

#include "coreassembly.h"

.assembly SampleIL
{
  .ver 1:0:0:0
}
.module SampleIL.dll

The rest will be my regular code – since this is a library not a console app, I do not need to create an entry point. I will, however, create a *Hello* type, with a *World()* method (sounds particularly exciting, doesn’t it?).

.class public auto ansi beforefieldinit Hello extends [CORE_ASSEMBLY]System.Object
{
  .method public hidebysig static string World() cil managed
  {
    ldstr "Hello World!"
    ret
  }
}

With this code in place, we can now build our program using a regular *dotnet build {path-to-my.ilproj}* command. The output should look more or less like this:

Z:\Documents\dev\il-sample>dotnet build SampleIL\SampleIL.ilproj
Microsoft (R) Build Engine version 16.4.0+e901037fe for .NET Core
Copyright (C) Microsoft Corporation. All rights reserved.

  Restore completed in 27.2 ms for Z:\Documents\dev\il-sample\SampleIL\SampleIL.ilproj.
  SampleIL -> Z:\Documents\dev\il-sample\SampleIL\bin\Debug\netstandard2.0\SampleIL.dll

Build succeeded.
    0 Warning(s)
    0 Error(s)

Time Elapsed 00:00:00.93

At that point we have a valid DLL that we can load and use from any project compatible with .NET Standard 2.0. In this case, at the C# surface level, the usage would simply be:

class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine(Hello.World());
    }
}

Well, obviously, you can let your imagination be the limit. The most apparent use case is to do all kinds of micro-optimizations, especially if you feel like the compiler is not doing enough. That said, you could also do some completely absurd things with it too – after all it’s raw IL.

For example, you could remove the inheritance from *System.Object*:

.class public auto ansi beforefieldinit Hello
{
  .method public hidebysig static string World() cil managed
  {
    ldstr "Hello World!"
    ret
  }
}

and add an ILASM switch to disable automatic inheritance to our *ilproj*:

<IlasmFlags>$(IlasmFlags) -INCLUDE=$(IncludePath) -noautoinherit</IlasmFlags>

This DLL will now build correctly, and we have managed to achieve the impossible – a successfully emitted DLL containing a type that doesn’t inherit from *System.Object*, which is normally forbidden. As thrilling as this is, it is, unfortunately, equally useless, since any attempt to load and use this DLL will result in a type load exception.

We could also, given the fact that we are in IL, do some other things that are forbidden in C# – for example, overload by return type. I could add a second *World()* method to our *Hello* type, differing from our previous one by return type only:

.class public auto ansi beforefieldinit Hello extends [CORE_ASSEMBLY]System.Object
{
  .method public hidebysig static string World() cil managed
  {
    ldstr "Hello World!"
    ret
  }

  .method public hidebysig static int32 World() cil managed
  {
    ldc.i4 42
    ret
  }
}

We now have two *Hello.World()* methods, one returning a *string* and one returning an *int*. This is impossible to do in C# and the compiler would not allow it, but it’s allowed in IL, since you have to specify the return type when you call a method there.
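To see why this is unambiguous in IL, note that a call instruction spells out the full signature, including the return type – a sketch of calling each overload:

```
// the return type is part of the call instruction,
// so the two overloads are always distinguishable
call string Hello::World()
call int32 Hello::World()
```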

So this DLL builds, and a quick peek via decompiler reveals that indeed our plot worked and we have a DLL that really is impossible to produce with C# code and the Roslyn compiler:

using System;

public class Hello : Object
{
    public static String World()
    {
        return "Hello World!";
    }

    public static Int32 World()
    {
        return 42;
    }
}

To be fair, again, when we try to use it from a C# program, and invoke *Hello.World()* there, the compiler will not know which method to select and complain:

CS0121 The call is ambiguous between the following methods or properties: 'Hello.World()' and 'Hello.World()'

What is interesting is that it would work with reflection, for example, the following code compiles and executes just fine:

var intOverload = typeof(Hello).
    GetMethods(System.Reflection.BindingFlags.Public | System.Reflection.BindingFlags.Static).
    First(m => m.Name == "World" && m.ReturnType == typeof(int));

var stringOverload = typeof(Hello).
    GetMethods(System.Reflection.BindingFlags.Public | System.Reflection.BindingFlags.Static).
    First(m => m.Name == "World" && m.ReturnType == typeof(string));

Console.WriteLine(intOverload.Invoke(null, null));
Console.WriteLine(stringOverload.Invoke(null, null));

These are all little useless hacks and quirks, but hopefully they illustrate the really cool aspect of being able to just write raw IL. The code from this article is available on Github. I hope it will help you get started building IL projects.

For “real world” use cases I recommend you check out System.Runtime.CompilerServices.Unsafe, which is built using the *Microsoft.NET.Sdk.IL* SDK. The library can be used to do a bunch of unsafe operations like unsafe casts – in a way that could not be expressed in C#.

Let's have a look.

Via the Microsoft.Extensions.Configuration package, .NET Core has a really rich and flexible application configuration model.

While the technique discussed here is in no way a replacement for .NET Core configuration constructs, for simple, low-level cases it might be interesting to have a look at a little known feature called *RuntimeHostConfigurationOption*.

In .NET Core, as well as in .NET Framework 4.6+, there is an AppContext type, which was designed as a mechanism to provide switch functionality for library authors. The type is not very widely used, and if you ever interacted with *AppContext*, chances are you have done so to access its BaseDirectory property. It is by far its most popular API – it returns the path of the base directory that the assembly resolver uses to probe for assemblies. In fact, a quick Github search reveals over 12k usages of it.
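For completeness, a minimal sketch of that most common usage:

```csharp
using System;

class Probe
{
    static void Main()
    {
        // prints the base directory the assembly resolver uses
        // to probe for assemblies – typically the app's output folder
        Console.WriteLine(AppContext.BaseDirectory);
    }
}
```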

Aside from that, *AppContext* also comes with a static private data dictionary, which you can access via the *GetData(string name)* method. What is particularly interesting is that while there is a public method to get data out of *AppContext*, there is no (or so it seems) way to get data into it. This is something we will explore now, and this is where runtime host configuration options come in.

As it turns out, you can simply add a *RuntimeHostConfigurationOption* entry (or entries) to your project file (csproj), and the .NET Core runtime will automatically feed them into the runtime *AppContext* data dictionary.

For example, consider the following project file:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <RuntimeHostConfigurationOption Include="abc" Value="123" />
    <RuntimeHostConfigurationOption Include="foo" Value="bar" />
  </ItemGroup>
</Project>

When this program executes, you can read the settings from the data dictionary by doing the following:

class Program
{
    static void Main(string[] args)
    {
        var config1 = AppContext.GetData("abc");
        var config2 = AppContext.GetData("foo");

        // prints 123
        Console.WriteLine(config1);

        // prints bar
        Console.WriteLine(config2);
    }
}

The way it works in practice is that the values are not magically compiled into the assembly; instead the dotnet SDK, at build time, using its *GenerateRuntimeConfigurationFiles* MSBuild task, copies them into a special *${project}.runtimeconfig.json* file, which is published into the output folder. When you then execute your app, it is automatically loaded by the runtime.

The file has a simple JSON format; in my case it looks like this, and you can find the runtime host configuration options under *configProperties*:

{
  "runtimeOptions": {
    "tfm": "netcoreapp3.0",
    "framework": {
      "name": "Microsoft.NETCore.App",
      "version": "3.0.0"
    },
    "configProperties": {
      "abc": 123,
      "foo": "bar"
    }
  }
}

While this doesn't really replace the richness of the configuration system, it provides a nice no-frills alternative for setting some low level flags and knobs for your code.

Can you programmatically write data into the *AppContext* data dictionary?

This is actually quite an interesting story. If you look at the implementation of *AppContext* in the CoreFX repository, you will find out that it indeed has a SetData method. However, it is hidden from the reference assembly, meaning you can't use it in your code, since it won't compile.

But, knowing that it's there at runtime – it's missing just at compile time – you could always pick it up with reflection.

typeof(AppContext).GetMethod("SetData", BindingFlags.Public | BindingFlags.Static)
    .Invoke(null, new object[] { "hello", "world" });

var hello = AppContext.GetData("hello");

// prints world
Console.WriteLine(hello);

This is quite baffling at first – the method is not available at compile time, but available at runtime – but we do compile our code against reference assemblies and run it against implementation assemblies, so these things can and do happen.

This is of course an interesting piece of trivia, but not really the recommended usage. It is, however, possible to write to the *AppContext* data dictionary in a typed way, and that's via the good old *AppDomain* static type (even though there are no app domains in .NET Core).

AppDomain.CurrentDomain.SetData("secret", "hooray");

var secret = AppContext.GetData("secret");
// prints hooray
Console.WriteLine(secret);

var secret2 = AppDomain.CurrentDomain.GetData("secret");
// also prints hooray
Console.WriteLine(secret2);

As it turns out, the data dictionary of *AppContext* is actually backing the *SetData*/*GetData* of *AppDomain.CurrentDomain*.

There is not really a lot to see there, but if you are interested in the source code for this blog post, it's, as always, available on Github.
