C# semantic classification with Roslyn

A while ago, I blogged about using Roslyn's completion service. In today's post, I wanted to continue looking at some of the excellent compiler features that can be utilized to build IDE-like features in your projects. This time, we will look at how to do semantic classification of the code using Roslyn.

Using the classifier

Roslyn exposes a static Classifier service, which can be used to ask the compiler to semantically classify the spans contained in a given document or in a semantic model (or part of either). The API has existed since Roslyn 1.0 and is part of the workspace layer of Roslyn – the Microsoft.CodeAnalysis.CSharp.Workspaces NuGet package. Under the hood, it is backed by an internal language service, ISyntaxClassificationService.

Classifier exposes two public methods, which, as mentioned briefly, operate at the document or the semantic model level. In either case, you need to initialize a Roslyn workspace (most often an MSBuild-based workspace) to be able to work with the API – even if you want to classify a standalone, loose piece of C#; in that case a dummy workspace is necessary.

Looking at the API, you'd likely wonder why one method is async but the other isn't. The document-based method is async because it needs to internally obtain the semantic model from the document, which is itself an async operation. Once the semantic model is available, there is no async work left to do, so the semantic model-based method doesn't need to be asynchronous.

How do we use the Classifier? Let's imagine we'd like to classify the following simple piece of code:
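The exact snippet is not essential – any small piece of C# will do. For the rest of this post, assume something along these lines (the snippet itself is my own; note that MyMethod is deliberately declared static, which will matter later):

```csharp
// The code to classify, kept as a plain string - we will hand it
// to Roslyn ourselves. The static method will come up again when
// we discuss additive classifications.
var code = @"using System;

namespace ClassificationDemo
{
    public class Program
    {
        public static void MyMethod(int value)
        {
            Console.WriteLine(value);
        }
    }
}";
```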

We have already mentioned that a workspace is necessary to use the classifier, and the quickest way to create one is to use AdhocWorkspace with the default MefHostServices. Those services contain the internal compiler services that the classifier requires. For simplicity of the demo, we will hardcode our input code into a local variable – in normal use cases you'd be reading it from disk or from some client/user request. If you are dealing with a full C# solution, instead of standalone C# code to classify, the more appropriate choice over AdhocWorkspace would be MSBuildWorkspace.
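A minimal sketch of that setup, assuming the Microsoft.CodeAnalysis.CSharp.Workspaces package is referenced (the variable names are my own):

```csharp
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.Host.Mef;

// The default MEF host services carry the internal language
// services (classification included) that the Classifier relies on.
var host = MefHostServices.Create(MefHostServices.DefaultAssemblies);
var workspace = new AdhocWorkspace(host);
```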

Once you have the workspace, you need to produce a Document or a SemanticModel representing the code to classify. Let's first look at the semantic model approach, as it's – in my opinion – a bit less work.

The first thing to do is to grab the SourceText representing our string-based C# code. The SourceText can then be fed into the syntax tree parser, producing a C# syntax tree. At this point we are halfway there, but we still need to initialize the compilation, as the semantic model is a product of the compilation pipeline. When you do that, you need to make sure all the metadata references needed for the code to compile are available – in our case only mscorlib is needed (referenced via typeof(object).Assembly). Finally, we can obtain the semantic model for the syntax tree by querying the newly created compilation.
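Put together, those steps might look like this (a sketch, assuming the code variable from earlier):

```csharp
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.Text;

// 1. Wrap the raw string in a SourceText.
var sourceText = SourceText.From(code);

// 2. Parse the SourceText into a C# syntax tree.
var syntaxTree = CSharpSyntaxTree.ParseText(sourceText);

// 3. Create a compilation, adding the metadata references the code
//    needs - here only the assembly containing System.Object.
var compilation = CSharpCompilation.Create("ClassificationDemo")
    .AddReferences(MetadataReference.CreateFromFile(typeof(object).Assembly.Location))
    .AddSyntaxTrees(syntaxTree);

// 4. Query the compilation for the semantic model of our tree.
var semanticModel = compilation.GetSemanticModel(syntaxTree);
```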

Next, we can call the classifier, passing in our semantic model and the text span corresponding to the piece of code we want to classify. We use new TextSpan(0, code.Length), which simply means the entire code will be classified; however, it is also possible to tweak the TextSpan so that the start position is offset and the length is shorter, so that only part of the code is submitted for classification – it all depends on the use case.
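The call itself could look like this (assuming the workspace, sourceText and semanticModel created earlier):

```csharp
using System;
using Microsoft.CodeAnalysis.Classification;
using Microsoft.CodeAnalysis.Text;

// Classify the entire text; narrow the TextSpan to classify only a part.
var classifiedSpans = Classifier.GetClassifiedSpans(
    semanticModel,
    new TextSpan(0, code.Length),
    workspace);

foreach (var span in classifiedSpans)
{
    Console.WriteLine($"{span.ClassificationType}: '{sourceText.ToString(span.TextSpan)}'");
}
```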

At the end we print all the results, which should show us a nice set of classification info:
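An abridged, illustrative excerpt of what gets printed (the exact spans and ordering depend on the input snippet):

```
keyword: 'using'
keyword: 'namespace'
keyword: 'public'
keyword: 'class'
class name: 'Program'
keyword: 'static'
method name: 'MyMethod'
static symbol: 'MyMethod'
...
```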

For the sake of completeness, let's also show what the code would look like if we were to go over the document-based API. In order to add a document to a workspace, we also need to create a project to hold that document. There are several ways of achieving that – one example is shown below. All the rest of the code (dealing with sourceText or displaying the classified spans) is the same as before.
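One way to set this up with AdhocWorkspace (a sketch; the project and document names are arbitrary):

```csharp
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.Classification;
using Microsoft.CodeAnalysis.Text;

// A document must belong to a project, so create one first,
// with the same mscorlib reference we used before.
var projectInfo = ProjectInfo.Create(
        ProjectId.CreateNewId(),
        VersionStamp.Create(),
        name: "ClassificationDemo",
        assemblyName: "ClassificationDemo",
        language: LanguageNames.CSharp)
    .WithMetadataReferences(new[]
    {
        MetadataReference.CreateFromFile(typeof(object).Assembly.Location)
    });

var project = workspace.AddProject(projectInfo);
var document = workspace.AddDocument(project.Id, "Program.cs", SourceText.From(code));

// The async overload obtains the semantic model from the document
// internally - hence the await.
var classifiedSpans = await Classifier.GetClassifiedSpansAsync(
    document,
    new TextSpan(0, code.Length));
```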

In this approach, we do not need to manually create a Compilation, because one will be implicitly created for us based on the Project we set up. Overall, there is really very little difference between the two APIs. When working with structured solutions and MSBuildWorkspace, you'd typically already be dealing with documents anyway, and the code from the second sample would be more natural to use; when working with standalone C# classification based on AdhocWorkspace, the first example would probably be less tedious.

Why do you need the classification?

The most obvious use case is to provide syntax highlighting. Using the semantic classifier and the power of the compiler provides an extremely reliable and advanced way of highlighting code, taking all the aspects and language features into account – especially when the typical alternative is static, regular-expression-based highlighting. This approach is now used in the highlighting features of OmniSharp.

One final thing about classification is that if you look closely at the results we produced, there is one strange thing going on. MyMethod at positions 4,35:4,43 is actually classified twice:
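In the printed output, the relevant lines look like this (positions elided; they depend on the snippet's layout):

```
method name: 'MyMethod'
static symbol: 'MyMethod'
```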

once as a method name and once as a static symbol. The second classification is a so-called “additive classification”. At the moment, Roslyn only uses the static symbol classification additively, but that might change in the future. This information allows, for example, additional highlighting to be applied to static symbols (e.g. making them bold).

You can always exclude additive classifications from the result set, by checking against the ClassificationTypeNames.AdditiveTypeNames collection:
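A sketch of that filtering, assuming the classifiedSpans sequence obtained earlier:

```csharp
using System.Linq;
using Microsoft.CodeAnalysis.Classification;

// AdditiveTypeNames currently contains only the static symbol
// classification; filtering against it leaves one classification
// per span.
var withoutAdditive = classifiedSpans
    .Where(span => !ClassificationTypeNames.AdditiveTypeNames.Contains(span.ClassificationType));
```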

In fact, this is what we do in OmniSharp, and it is what Visual Studio does as well.

You can find the source code for this blog post on GitHub.