mu88 Developer Blog

Migrating from .NET Core 3.1 to .NET 6 - Part 1

Together with my colleagues at Swiss Post, I’m working on a .NET Solution with about 20 projects based on .NET Core 3.1. It has a pretty common architecture:

  • Data Access Layer based on Entity Framework Core 3.1
  • Business Logic based on .NET Core 3.1
  • Web API based on ASP.NET Core 3.1

The front-facing client is written in Angular and for our tests we’re using xUnit and the fabulous Fluent Assertions library. Give them a star if you appreciate their work as much as I do!

I’ve spent the last few days migrating our code base to .NET 6, and I want to share some of the experiences I gathered and the hurdles I cleared.

Where and how to start?

.NET 6 allows referencing projects that target older .NET (Core) versions, e. g. .NET Core 3.1, but not the other way round.
Therefore I started the migration from the outside (ASP.NET Core) to the inside (EF Core). That approach ensured that I had a compiling Solution at all times. But I have to admit that the unit and integration tests were not usable during the transition due to runtime assembly conflicts.

The steps for each and every project were pretty much the same:

  • Upgrade the Target Framework Moniker within *.csproj to net6.0
  • Upgrade all NuGet packages
  • Remove unnecessary NuGet packages (always mind the Pathfinder rule — a.k.a. the Boy Scout rule — right?)
  • Fix build warnings related to breaking API changes, obsolete members, etc.
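For the first step, the change in each *.csproj boils down to a single line (the package reference shown is just an illustrative example of the second step, not taken from the actual Solution):

```xml
<PropertyGroup>
  <!-- before: <TargetFramework>netcoreapp3.1</TargetFramework> -->
  <TargetFramework>net6.0</TargetFramework>
</PropertyGroup>

<ItemGroup>
  <!-- NuGet packages get upgraded alongside the Target Framework Moniker -->
  <PackageReference Include="Microsoft.EntityFrameworkCore" Version="6.0.0" />
</ItemGroup>
```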

Doing this for all of our 20 projects took me about one day. On the one hand, that feels pretty quick for such a major upgrade. But considering the obstacles I encountered, I’d say I could have been twice as fast 😉

And if you’re asking yourself: did this guy even consider using the .NET Upgrade Assistant? Yes, I did! Before starting with all the manual work, I tried to migrate our Solution with that neat tool.
But I probably used it in the wrong way, because I always ended up with either a partially migrated project (e. g. missing/invalid NuGet packages) or no changes at all (just sticking to .NET Core 3.1).

Working with ASP.NET Core 6

Migrating to ASP.NET Core’s new version went almost seamlessly. I had some build warnings related to now-nullable properties (e. g. HttpResponseMessage.Headers.Location or HttpContext.User.Identity), but fixing them was just a matter of seconds.

There are so many heated discussions about whether to use the new Minimal APIs or not. I can just say that I didn’t introduce them: not because I don’t like them, but because we have a well-working set of controllers which I didn’t want to refactor.

Working with EF Core 6

Upgrading EF Core was more… well, I’d say challenging 😎 compared to ASP.NET Core.

Use entities instead of IDs

We had some sort of data seeding on DbContext creation to provide certain test data to our tests. Unfortunately, the data seeder made heavy use of object IDs when referring to other objects. Just consider the following, rather simplified example:

public class Author
{
    public int Id { get; set; }
    // some stuff
}

public class Book
{
    public Author Author { get; set; }

    public int AuthorId { get; set; }
    // more stuff
}

Within the data seeder, there was code which looked similar to:

var author = new Author { Id = 1 };
var book = new Book { AuthorId = 1 };

This works as long as the author entity also gets persisted with an ID of 1. I didn’t analyze it in depth, but that was no longer the case after the upgrade to EF Core 6.
So when running the tests, most of them failed due to some sort of foreign key violation. The fix was pretty simple:

var author = new Author();
var book = new Book { Author = author };

But finding all the relevant spots took me several hours of boring “fix and try”.

So here comes a piece of advice to my future self: do not rely on object IDs or even manage them on your own when using EF Core! They are an implementation detail and should be considered evil.
Actually I’m following that approach in my private projects by declaring all ID-related properties as private members.


Nullable Reference Types

Microsoft took the chance and embraced Nullable Reference Types (imho one of THE BEST C# features EVER) with EF Core as well.

Therefore, some common APIs like DbContext.FindAsync<T>() now return ValueTask<T?>, requiring the caller to do proper null checking. In many cases that was valuable feedback, because it was indeed shining a spotlight on potential null references.
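A sketch of what call sites looked like after the upgrade (entity and variable names are illustrative, not our actual code):

```csharp
// FindAsync<T>() now returns ValueTask<T?>, so the compiler forces
// the caller to deal with a possible null result:
Author? author = await dbContext.FindAsync<Author>(authorId);
if (author is null)
{
    throw new InvalidOperationException($"Author {authorId} not found");
}
// From here on, the compiler treats 'author' as non-null.
```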

SQLite for in-memory testing

Some of our tests run against an in-memory SQLite database. On test session start, we were calling DbContext.Database.Migrate(). Our main database in production is SQL Server, so all EF Core Migrations were tailored to that particular DBMS.
But after upgrading, migrating the database within the tests failed with an exception saying that varchar(max) is not supported. Well, that’s right: varchar(max) is indeed a SQL Server feature and not supported in SQLite.

Fortunately I came across this GitHub issue which showed me the trick: calling DbContext.Database.EnsureCreated() instead of DbContext.Database.Migrate() helps. Otherwise I’d have to create dedicated EF Core Migrations for SQLite.
But honestly, I didn’t check why this worked in the past.
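The resulting test setup can be sketched like this, assuming EF Core’s provider checks like IsSqlite() — this is my reading of the workaround, not the verbatim code:

```csharp
if (dbContext.Database.IsSqlite())
{
    // builds the schema directly from the current model - no Migrations involved
    dbContext.Database.EnsureCreated();
}
else
{
    // production-like path: apply the SQL Server Migrations
    dbContext.Database.Migrate();
}
```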


That’s it for today! In the next part, I will share some broader experiences, e. g. on new language features and IDEs.

Take care and thanks for reading 👋🏻

Leveraging the power of ReSharper Templates

Among .NET developers, the JetBrains tool ReSharper (or R#) is considered some kind of silver bullet when it comes to code analysis and refactoring. However, I’ve often seen developers who are not familiar with ReSharper Code Templates. That’s why I want to give a brief introduction to this topic.

Basically, Code Templates are little code snippets that can be used in different scopes of coding. JetBrains calls them Live Templates, Surround Templates and File Templates. Let’s take a look at them!

Live Templates

Live Templates are snippets that can be inserted while coding in a file. These can be if statements or foreach loops: just start typing if, hit Enter and ReSharper will come up with a little workflow guiding us through the different parts, e. g. the condition.

Out of the box, there are 170+ templates that come with ReSharper. But we can define our own templates via the Templates Explorer (Extensions → ReSharper → Tools → Templates Explorer… in Visual Studio). For example, I have a small template named xunitasync that looks like this:
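The template itself is shown as a screenshot in the original post; its body might look something like this — reconstructed from the parameters described below, so treat it as an approximation:

```
[Fact]
public async Task $TestName$()
{
    var testee = new $TestType$();
    $END$
}
```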

When typing xunitasync within a class, this template will create an asynchronous xUnit test body for me. Looking more closely at the template code, we notice the two strings $TestName$ and $END$. These are Template Parameters and they are context-aware. In the simplest case, they are just plain strings that ReSharper will ask us to enter when using the template. For example, $TestName$ is just the name of the test we want to use. $END$ is the location where the caret will be placed after the template has been applied. In my case, I want to start writing the Arrange part of the test.

By context-aware, I mean that such parameters can be more than just plain strings. For example, a parameter can be configured so that whenever it is applied, IntelliSense will pop up and request a type. This happens in the example for the parameter $TestType$. JetBrains calls these Template Macros and there’s a bunch of them (see the ReSharper docs).

I have plenty of those live templates, e. g. for creating To-do items or Get-Only C# Properties.

Surround Templates

The second interesting option is Surround Templates. They can be used to surround a selected piece of code with another piece of code. Let’s look at the following template:
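The screenshot isn’t reproduced here; based on the description that follows, the template body presumably looks like:

```
var stopwatch = System.Diagnostics.Stopwatch.StartNew();
$SELECTION$
stopwatch.Stop();
$END$
```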

We are already familiar with the $END$ parameter. The $SELECTION$ parameter represents the piece of code that is selected. Before this code, a Stopwatch instance stopwatch is created and started. After the selected code, stopwatch is stopped. I use this frequently as the most trivial form of performance measurement.

With this template, the following code…

…becomes…
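Since the before/after screenshots are missing in this rendering, here is a hypothetical example (ProcessOrders() is an illustrative stand-in for the selected code):

```csharp
using System;
using System.Diagnostics;

// hypothetical method whose runtime we want to measure
static void ProcessOrders()
{
    // stand-in for the real work
}

// "before" was a plain call: ProcessOrders();
// "after" applying the surround template to the selected call:
var stopwatch = Stopwatch.StartNew();
ProcessOrders();
stopwatch.Stop();
Console.WriteLine($"Elapsed: {stopwatch.Elapsed}");
```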

File Templates

I don’t use them as much as Live or Surround Templates, but File Templates are also very handy in some situations. For example, when working with MSTest, each test class has to be decorated with the [TestClass] attribute. I have a File Template that creates an MSTest file for me and it looks like this:
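The screenshot is not reproduced here; the essence of such a File Template is roughly the following (parameter names other than $Namespace$ are my guess):

```
namespace $Namespace$
{
    [TestClass]
    public class $ClassName$
    {
        [TestMethod]
        public void $TestName$()
        {
            $END$
        }
    }
}
```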

Again, there are some parameters which are pretty obvious: $Namespace$ stands for the namespace and it will be retrieved automatically by a macro. So I never have to take care of this manually; it is determined by the file’s location within the Solution.

Postfix Templates

It is not an exaggeration to say that I like these the most. With Postfix Templates, I can write new MyService().var and the little suffix .var will create a variable. Or we can use .foreach to create a foreach loop.
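To illustrate (MyService is a hypothetical class):

```
// typing the expression with the .var suffix…
new MyService().var

// …expands to a variable declaration:
var myService = new MyService();
```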

Unfortunately, we cannot create Postfix Templates through the Templates Explorer yet. But I hope JetBrains will make this possible in the future 😃.


This was a short introduction to the concept of ReSharper Templates. There is a ton more to say, but I hope this is enough to spark your curiosity. Personally, I can say that ReSharper Templates have revolutionized the way I code. I can keep my focus on what I want to write, not how. And the extensibility gives me a way to be more productive in small but repetitious coding tasks.

If you need more information, I’d recommend the excellent documentation. There is a list of all the available macros, GIFs to see the Templates in action and much more.

Last but not least, I’d be curious whether you’ve created a custom template and for what.

Thank you for reading and take care!

Utilizing Docker when testing performance enhancements

With the beginning of 2021, I started my new job as a Full Stack Developer for Swiss Post. My first task was to improve the performance of a long-running ASP.NET Core Web API endpoint. As usual, I was writing unit and integration tests to ensure that the new code does what it is supposed to do.

When I was finished with coding, I wanted to measure how the enhancements would hold up against a larger, more realistic dataset. I decided to figure out how I could make use of Docker containers for that.

For local development, we’re using SQL Server LocalDB. Since Microsoft shipped WSL 2, Linux Docker containers are super fast on Windows 10, so I switched over to SQL Server for Linux. I created a container with the following command:

docker run --name sqlserver2017 -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=yourStrong(!)Password' -p 1433:1433 -d mcr.microsoft.com/mssql/server:2017-latest

Now I applied our database schema using an Entity Framework Core Migration. This resulted in a complete but empty database. I created a snapshot of my current data state so that I could always come back to my starting point. I did so by calling:

docker commit sqlserver2017 mu88/db:initialized

This command created a new Docker Image mu88/db:initialized by taking a snapshot from the container sqlserver2017.

Thankfully, my wise colleagues had already created some Web API endpoints to seed test data. So all I had to do was a little HTTP POST and wait a couple of minutes to generate my test data (let’s say for 50 customers). After that, I was ready to test my new code. Since the mentioned long-running process does some data mutation, I created another snapshot:

docker commit sqlserver2017 mu88/db:test_50_customers

Now I could trigger the long-running process and extract some relevant metrics after it was finished. The numbers were not conclusive, so I decided to do another test with 500 customers.

For this, I had to stop and remove the currently running container and go back to my starting point mu88/db:initialized:

docker stop sqlserver2017
docker rm sqlserver2017
docker run --name sqlserver2017 -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=yourStrong(!)Password' -p 1433:1433 -d mu88/db:initialized

Please note the very last line: I started a new container using my snapshot image. Now I did another HTTP POST to generate 500 customers and created another snapshot:

docker commit sqlserver2017 mu88/db:test_500_customers

After another performance test, the metrics looked good, but we immediately found another code spot that screamed for optimization. For the testing, I could again rely on my snapshots by calling:

docker stop sqlserver2017
docker rm sqlserver2017
docker run --name sqlserver2017 -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=yourStrong(!)Password' -p 1433:1433 -d mu88/db:test_50_customers


docker stop sqlserver2017
docker rm sqlserver2017
docker run --name sqlserver2017 -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=yourStrong(!)Password' -p 1433:1433 -d mu88/db:test_500_customers

With every round of “code & measure”, that approach became more and more valuable because I could rely on a set of data snapshots.
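The recurring stop/rm/run dance can be wrapped in a tiny helper script — a sketch assuming a POSIX shell; the function takes a runner command so it can be dry-run with echo (none of this is from the original post):

```shell
# reset_db RUNNER IMAGE - recreate the sqlserver2017 container from a snapshot image.
# Pass "echo" as RUNNER for a dry run that only prints the commands.
reset_db() {
  runner="$1"
  image="$2"
  $runner docker stop sqlserver2017
  $runner docker rm sqlserver2017
  $runner docker run --name sqlserver2017 \
    -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=yourStrong(!)Password' \
    -p 1433:1433 -d "$image"
}

# dry run: print what would be executed
reset_db echo mu88/db:test_50_customers
```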

For me, this was a super interesting lesson in how container technologies can help when it comes to testing.

I hope this was interesting for you. Thanks for reading!

Is .NET Core cool enough to cool a Raspberry Pi? - Part 2

In the last post, I mainly described how to set up the software for the Raspberry Pi Fan Controller. In this part, I will focus on the hardware and on bringing everything together.

Bring it all together

During development, I could easily test my app by using the Inversion of Control pattern and utilizing Dependency Injection to inject a fake temperature provider and fan controller. At a certain point, I was ready to test it on the Raspi.

At first, I was determined to deploy the app via Docker. But after some time, I was not sure whether a sudo command executed from within a Docker container would be forwarded to the OS (remember the temperature measurement). So I decided to ship it as a self-contained executable. This can be done as follows:
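The command is missing from this rendering of the post; a self-contained publish for the Raspi’s ARM processor presumably looks something like (output folder is illustrative):

```
dotnet publish -c Release -r linux-arm --self-contained true -o ./publish
```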

The following command copies the build results to the Raspi:
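The command itself is not reproduced here; host name and target path in this sketch are illustrative:

```
scp -r ./publish pi@raspberry:/home/pi/RaspiFanController
```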

On the Raspi, we have to allow the app to be executed:
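Presumably via chmod; the binary name and path are illustrative:

```
chmod +x /home/pi/RaspiFanController/RaspiFanController
```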

And finally, start the app using sudo. This is important because otherwise, reading the temperature doesn’t work.
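Again with an illustrative path, that would be something like:

```
sudo /home/pi/RaspiFanController/RaspiFanController
```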

There were some firewall/reverse proxy issues in my case, but that would go beyond the scope of this post. In the end, I could successfully access the app via http://raspberry:5000/cool and it was showing the current temperature.


This was definitely the hardest part of this project for me. But several other blog posts like the following gave me the necessary information about which components I had to buy and how to connect them:

Finally, I bought:

  • Breadboard
  • Red LED
  • Transistor BC 337
  • Resistor 680 Ω for the Transistor
  • Resistor 1 kΩ for the LED
  • Jumper wires

Because I was afraid to somehow destroy the fan, I made a first test with my controller software and the LED:

After a successful test, I switched over and used the fan:

And it was working! So I had no more excuses to solder everything and do the final assembly:

Register the app as a service

Now that everything was working fine, I wanted to register my little app as a service. This will ensure that the controller automatically gets started after a reboot.

For this, I had to create a service unit configuration file on the Raspi:
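The file name is illustrative; any unit path under /etc/systemd/system works:

```
sudo nano /etc/systemd/system/RaspiFanController.service
```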

It has the following content:
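The unit file isn’t reproduced in this rendering; a minimal sketch of such a systemd unit (paths and names are illustrative):

```
[Unit]
Description=Raspi Fan Controller

[Service]
WorkingDirectory=/home/pi/RaspiFanController
ExecStart=/home/pi/RaspiFanController/RaspiFanController
Restart=always
# run as root so that reading the temperature works (see above)
User=root

[Install]
WantedBy=multi-user.target
```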

With the following commands, the service will be created:
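Presumably the usual systemd commands (a sketch, assuming the unit file name from above):

```
sudo systemctl daemon-reload
sudo systemctl enable RaspiFanController.service
sudo systemctl start RaspiFanController.service
```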

Now the app will start on every reboot.


Because of recent developments in the .NET ecosystem, I was able to write a controller for a Linux device like a Raspberry Pi. I could leverage all the new features my favorite platform provides:

  • Cross-platform
  • Worker Services
  • ASP.NET Core Blazor Server

In theory, this app could easily be ported to another OS like Windows 10 IoT Core or a device like an Arduino - if the necessary parts like temperature retrieval are available.

For me, this was another great experience of what the modern development world can look like: serve every platform with the tools of your choice.

Thank you for reading!

Is .NET Core cool enough to cool a Raspberry Pi? - Part 1

A couple of weeks ago, I bought a new toy: a Raspberry Pi 4 Model B. I wanted to set up my own DNS server, but mainly, I wanted to get in contact with this new platform. Since I had already read other blog posts saying that the Raspi gets quite warm under normal conditions, I ordered a case with a built-in fan as well.
The shipment arrived and I assembled everything curiously. First impression: wow, the ramp-up time to assemble and install is super short! Second impression: the built-in fan is a bit loud… and my spouse thought so as well 😉 So the Raspi had to live its first days in the kitchen.

In the following days, I found several blog posts describing how to build a small electric circuit and a bit of software to control the fan. The hardware part was new to me anyway. But for the software, the blog authors were mostly using Python. Since my heart beats for .NET and C#, I was intrigued by the idea of using my favorite technologies. And I found the .NET Core IoT Libraries - a NuGet package provided by Microsoft to build applications for devices like the Raspi. This package was the missing piece of the puzzle: how to control the hardware. Now I was on fire and decided to build a fan controller based on Blazor Server and that NuGet package.

All the code can be found in my GitHub repo Raspi Fan Controller. Let’s focus on the main parts:

  • Temperature provider
  • Temperature controller
  • Fan controller
  • Frontend

Temperature provider

To control the temperature, we need to measure it, right? Fortunately, the Raspi’s OS Raspbian comes with a built-in command to retrieve the current temperature:
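The command isn’t shown in this rendering of the post; on Raspbian, the built-in tool for this is vcgencmd:

```
vcgencmd measure_temp
```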

It returns a text like temp=39.0°C. The class Logic\RaspiTemperatureProvider does a little bit of RegEx to parse the current temperature and unit into a tuple (39.0, "C").
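A sketch of how such parsing could look — not the exact code from the repo, just an illustration of the RegEx idea:

```csharp
using System;
using System.Globalization;
using System.Text.RegularExpressions;

// parse e. g. "temp=39.0°C" into a (value, unit) tuple like (39.0, "C")
static (double Value, string Unit) ParseTemperature(string input)
{
    var match = Regex.Match(input, @"temp=([0-9.]+).([A-Z])");
    if (!match.Success)
    {
        throw new FormatException($"Unexpected input: {input}");
    }

    return (double.Parse(match.Groups[1].Value, CultureInfo.InvariantCulture),
            match.Groups[2].Value);
}

Console.WriteLine(ParseTemperature("temp=39.0°C"));
```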

After retrieving the current temperature, we can act on it.

Temperature controller

The temperature controller Logic\RaspiTemperatureController is nothing but a while loop. It regularly checks the current temperature, turns on the fan if an upper threshold is reached and turns it off again when the temperature falls below a lower threshold.
This loop is async: since the temperature controller is started by a .NET Core Worker Service, the while loop’s exit condition is the CancellationToken provided by the ASP.NET Core environment.
Between two loop runs, there is a sleep time implemented via Task.Delay().
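Conceptually, the loop looks something like this sketch (member names are illustrative, not the actual repo code):

```csharp
public async Task StartAsync(CancellationToken cancellationToken)
{
    while (!cancellationToken.IsCancellationRequested)
    {
        var (temperature, _) = _temperatureProvider.GetTemperature();

        if (temperature >= _upperThreshold)
        {
            _fanController.TurnOn();
        }
        else if (temperature <= _lowerThreshold)
        {
            _fanController.TurnOff();
        }

        // sleep between two loop runs
        await Task.Delay(_sleepTime, cancellationToken);
    }
}
```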

Fan controller

This is the place where we really access the hardware. The Raspi has so-called General Purpose Input/Output (GPIO) pins - physical pins on its board that can be used for custom extensions. The NuGet package .NET Core IoT Libraries abstracts these pins and allows us to set them in a very easy way. Take a look into Logic\RaspiFanController:

To turn on the fan, GPIO pin 17 is set to a high value. And that’s it.
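With the System.Device.Gpio API from the .NET IoT libraries, that boils down to something like the following sketch (not the verbatim repo code, and it obviously needs the real hardware to run):

```csharp
using System.Device.Gpio;

using var controller = new GpioController();
controller.OpenPin(17, PinMode.Output);
controller.Write(17, PinValue.High); // high = fan on
controller.Write(17, PinValue.Low);  // low = fan off
```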


Frontend

The user interface is a single Razor page providing the necessary information like the current temperature and the thresholds. This information is read from the temperature controller.

In the next part, I will describe how to bring everything together.