19 Mar 2021
Among .NET developers, the JetBrains tool ReSharper (or R#) is something of a silver bullet when it comes to code analysis and refactoring. However, I've often seen developers who are not familiar with ReSharper Code Templates. That's why I want to give a brief introduction to the topic.
Basically, Code Templates are little code snippets that can be used in different scopes of coding. JetBrains calls them Live Templates, Surround Templates and File Templates. Let’s take a look at them!
Live Templates are snippets that can be inserted while coding in a file. This can be if statements or foreach loops: just start typing, hit Enter, and ReSharper will come up with a little workflow guiding us through the different parts, e.g. the condition.
Out of the box, ReSharper ships with 170+ templates. But we can define our own templates via the Templates Explorer (Extensions → ReSharper → Tools → Templates Explorer… in Visual Studio). For example, I have a small template named xunitasync that looks like this:
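As a sketch of what such a template can look like (the exact body is my reconstruction; it uses the template parameters discussed below):

```csharp
[Fact]
public async Task $TestName$()
{
    // Arrange
    var testee = new $TestType$();
    $END$
}
```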
When typing xunitasync within a class, this template will create an asynchronous xUnit test body for me. When looking closer at the template code, we notice the two strings $TestName$ and $END$. These are Template Parameters, and they are context-aware. In the simplest case, they are just plain strings that ReSharper asks us to enter when using the template. For example, $TestName$ is just the name of the test we want to use. $END$ is the location where the caret will be placed after the template has been applied. In my case, I want to start writing the Arrange part of the test.
By context-aware, I mean that such parameters can be more than just plain strings. For example, a parameter can be configured so that whenever it is applied, IntelliSense pops up and a type is requested. This happens in the example for the parameter $TestType$. JetBrains calls these Template Macros, and there is a bunch of them (see here).
I have plenty of those Live Templates, e.g. for creating To-do items or get-only C# properties.
The second interesting option are Surround Templates. They can be used to surround a selected piece of code with another piece of code. Let’s look at the following template:
We are already familiar with the $END$ parameter. The $SELECTION$ parameter represents the piece of code that is selected. Before this code, a Stopwatch is created and started; after the selected code, the Stopwatch is stopped. I use this frequently as the most trivial form of performance measurement.
With this template, a selected piece of code gets wrapped in Stopwatch calls with just a few keystrokes.
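Such a surround template could be defined roughly like this (a sketch; the elapsed-time output is my own addition):

```csharp
var stopwatch = System.Diagnostics.Stopwatch.StartNew();
$SELECTION$
stopwatch.Stop();
Console.WriteLine($"Elapsed: {stopwatch.ElapsedMilliseconds} ms");
$END$
```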
I tend to use them not as much as Live or Surround Templates, but File Templates are also very handy in some situations. For example, when working with MSTest, each test class has to be decorated with the [TestClass] attribute. I have a File Template that creates an MSTest file for me, and it looks like this:
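A sketch of such a File Template (the $ClassName$ parameter is my assumption; $Namespace$ is discussed below):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace $Namespace$
{
    [TestClass]
    public class $ClassName$
    {
        $END$
    }
}
```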
Again, there are some parameters which are pretty obvious: $Namespace$ stands for the namespace, and it is retrieved automatically by a macro. So I never have to take care of this manually; it is determined by the file location within the solution.
It is not an exaggeration to say that I like Postfix Templates the most. With Postfix Templates, I can write new MyService().var and the little suffix .var will create a variable. Or we can use .foreach to create a foreach loop.
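As an illustration (MyService is a placeholder type):

```csharp
// Typing this and choosing the .var postfix template…
new MyService().var

// …expands into a variable declaration:
var myService = new MyService();
```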
Unfortunately, we cannot create Postfix Templates through the Templates Explorer yet. But I hope JetBrains will make this possible in the future 😃.
This was a short introduction to the concept of ReSharper Templates. There is a ton more to say, but I hope this is enough to spark your curiosity. Personally, I can say that ReSharper Templates have revolutionized the way I code. I can keep my focus on what I want to write, not how. And the extensibility gives me a way to be more productive in small but repetitive coding tasks.
If you need more information, I’d recommend the excellent documentation. There is a list of all the available macros, GIFs to see the Templates in action and much more.
Last but not least, I’d be curious whether you’ve created a custom template and for what.
Thank you for reading and take care!
12 Feb 2021
With the beginning of 2021, I started my new job as a Full Stack Developer for Swiss Post. My first task was to improve the performance of a long-running ASP.NET Core Web API endpoint. As usual, I wrote unit and integration tests to ensure that the new code does what it is supposed to do.
When the coding was finished, I wanted to measure how the enhancements perform on a larger, more realistic dataset. I decided to figure out how Docker containers could help me with that.
For local development, we’re using SQL Server LocalDB. Since Microsoft shipped WSL 2, Linux Docker Containers are super fast on Windows 10 and I switched over to SQL Server for Linux. I created a container with the following command:
docker run --name sqlserver2017 -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=yourStrong(!)Password' -p 1433:1433 -d mcr.microsoft.com/mssql/server:2017-latest
Now I applied our database schema using an Entity Framework Core Migration. This resulted in a complete but empty database. I created a snapshot of my current data state so that I could always come back to my starting point. I did so by calling:
docker commit sqlserver2017 mu88/db:initialized
This command created a new Docker image mu88/db:initialized by taking a snapshot of the container sqlserver2017.
Thankfully, my wise colleagues had already created some Web API endpoints to seed test data. So all I had to do was a little HTTP POST and wait a couple of minutes to generate my test data (let's say for 50 customers). After that, I was ready to test my new code. Since the mentioned long-running process does some data mutation, I created another snapshot:
docker commit sqlserver2017 mu88/db:test_50_customers
Now I could trigger the long-running process and extract some relevant metrics after it had finished. The numbers were not conclusive, so I decided to do another test with 500 customers.
For this, I had to stop and remove the currently running container and go back to my starting point:
docker stop sqlserver2017
docker rm sqlserver2017
docker run --name sqlserver2017 -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=yourStrong(!)Password' -p 1433:1433 -d mu88/db:initialized
Please note the very last line: I started a new container using my snapshot image. Now I did another HTTP POST to generate 500 customers and created another snapshot:
docker commit sqlserver2017 mu88/db:test_500_customers
After another performance test, the metrics looked good, but we immediately found another spot in the code that cried out for optimization. For the testing, I could rely on my snapshots by calling:
docker stop sqlserver2017
docker rm sqlserver2017
docker run --name sqlserver2017 -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=yourStrong(!)Password' -p 1433:1433 -d mu88/db:test_50_customers
docker stop sqlserver2017
docker rm sqlserver2017
docker run --name sqlserver2017 -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=yourStrong(!)Password' -p 1433:1433 -d mu88/db:test_500_customers
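This recurring stop/remove/run cycle could be wrapped in a small helper script (the script itself is my own sketch, not part of the original workflow):

```shell
#!/bin/sh
# reset-db.sh <snapshot-image> - recreate the SQL Server container from a snapshot image
set -e
docker stop sqlserver2017 || true
docker rm sqlserver2017 || true
docker run --name sqlserver2017 -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=yourStrong(!)Password' -p 1433:1433 -d "$1"
```

Called e.g. as ./reset-db.sh mu88/db:test_50_customers before each measurement run.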
With every round of “code & measure”, that approach became more and more valuable because I could rely on a set of data snapshots.
For me, this was a super interesting lesson in how container technologies can help when it comes to testing.
I hope this was interesting for you. Thanks for reading!
24 Apr 2020
In the last post, I mainly described how to set up the software for the Raspberry Pi Fan Controller. In this part, I will focus on the hardware part and bringing everything together.
Bring it all together
During the development, I could easily test my app by using the Inverse of Control pattern and utilizing Dependency Injection to inject a fake temperature provider and fan controller. At a certain point, I was ready to test it on the Raspi.
At first, I planned to deploy the app via Docker. But after some time, I was not sure whether a sudo command executed from within a Docker container would be forwarded to the OS (remember the temperature measurement). So I decided to ship the app as a self-contained executable. This can be done as follows:
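Assuming .NET Core 3.1 and the 32-bit ARM runtime of the Raspi, the publish command might look like this:

```shell
# Produce a self-contained build that does not require an installed .NET runtime
dotnet publish -c Release -r linux-arm --self-contained true
```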
The following command copies the build results to the Raspi:
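For example via scp (host name and paths are assumptions on my side):

```shell
scp -r ./bin/Release/netcoreapp3.1/linux-arm/publish/* pi@raspberry:/home/pi/RaspiFanController
```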
On the Raspi, we have to allow the app to be executed:
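Assuming the executable landed in /home/pi/RaspiFanController:

```shell
chmod +x /home/pi/RaspiFanController/RaspiFanController
```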
And finally, start the app using sudo. This is important because otherwise, reading the temperature doesn't work.
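With the path assumed above, that is simply:

```shell
sudo /home/pi/RaspiFanController/RaspiFanController
```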
There were some firewall/reverse proxy issues in my case, but that would be beyond this post. In the end, I could successfully access the app via http://raspberry:5000/cool and it was showing the current temperature.
This was definitely the hardest part of the project for me. But several other blog posts gave me the necessary information about which components I had to buy and how to connect them.
Finally, I bought:
- Red LED
- Transistor BC 337
- Resistor 680 Ω for the Transistor
- Resistor 1 kΩ for the LED
- Jumper wires
Because I was afraid to somehow destroy the fan, I made a first test with my controller software and the LED:
After a successful test, I switched over and used the fan:
And it was working! So I had no more excuses to solder everything and do the final assembly:
Register the app as a service
Now that everything was working fine, I wanted to register my little app as a service. This will ensure that the controller automatically gets started after a reboot.
For this, I had to create a service unit configuration file on the Raspi with the following content:
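A minimal sketch of such a unit file (paths and the description are assumptions based on the deployment above):

```ini
[Unit]
Description=Raspi Fan Controller
After=network.target

[Service]
WorkingDirectory=/home/pi/RaspiFanController
ExecStart=/home/pi/RaspiFanController/RaspiFanController
Restart=always
User=root

[Install]
WantedBy=multi-user.target
```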
With the following commands, the service will be created:
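Assuming the unit file is named fancontroller.service, registering it might look like this:

```shell
sudo cp fancontroller.service /etc/systemd/system/fancontroller.service
sudo systemctl daemon-reload
sudo systemctl enable fancontroller.service
sudo systemctl start fancontroller.service
```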
Now the app will start on every reboot.
Because of recent developments in the .NET ecosystem, I was able to write a controller for a Linux device like a Raspberry Pi. I could leverage all the new features my favorite platform provides:
- Worker Services
- ASP.NET Core Blazor Server
In theory, this app could be easily ported to any other OS like Windows 10 IoT Core or device like Arduino - if the necessary parts like temperature retrieval are available.
For me, this was another great experience of what the modern development world can look like: serve every platform with the tools of your choice.
Thank you for reading!
24 Apr 2020
A couple of weeks ago, I bought a new toy: a Raspberry Pi 4 Model B. I wanted to set up my own DNS server, but mainly, I wanted to get in contact with this new platform. Since I had already read other blog posts saying that the Raspi gets quite warm under normal conditions, I ordered a case with a built-in fan as well.
The shipment arrived and I curiously assembled everything. First impression: wow, the time to assemble and install is super short! Second impression: the built-in fan is a bit loud… and my spouse thought so as well. So the Raspi had to live its first days in the kitchen.
In the days after, I found several blog posts describing how to build a small electric circuit and a bit of software to control the fan. The hardware part was new to me anyway. But for the software, the blog authors were mostly using Python. Since my heart beats for .NET and C#, I was intrigued by the idea of using my favorite technologies. And I found the .NET Core IoT Libraries - a NuGet package provided by Microsoft to build applications for devices like the Raspi. This package was the missing piece of my puzzle: how to control the hardware. Now I was on fire and decided to build a fan controller based on Blazor Server and this NuGet package.
All the code can be found in my GitHub repo Raspi Fan Controller. Let's focus on the main parts:
- Temperature provider
- Temperature controller
- Fan controller
To control the temperature, we need to measure it, right? Fortunately, the Raspi's OS Raspbian comes with a built-in command to retrieve the current temperature:
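On a Raspberry Pi, this is typically the vcgencmd tool:

```shell
vcgencmd measure_temp
```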
It returns a text like temp=39.0°C. The class Logic\RaspiTemperatureProvider does a little bit of RegEx to parse the current temperature and unit into a tuple.
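A sketch of that parsing (method name and the exact pattern are my assumptions, mirroring the temp=39.0°C format):

```csharp
using System.Globalization;
using System.Text.RegularExpressions;

public static (double Temperature, string Unit) ParseTemperature(string commandOutput)
{
    // commandOutput looks like "temp=39.0°C"
    var match = Regex.Match(commandOutput, @"temp=(\d+(?:\.\d+)?)(.+)");
    var temperature = double.Parse(match.Groups[1].Value, CultureInfo.InvariantCulture);

    return (temperature, match.Groups[2].Value);
}
```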
After retrieving the current temperature, we can act on it. The temperature controller Logic\RaspiTemperatureController is nothing but a while loop: it regularly checks the current temperature, turns on the fan if an upper threshold is reached, and turns it off if a lower threshold is reached.
This loop is async: since the temperature controller is started by a .NET Core Worker Service, the while loop's exit condition is the CancellationToken provided by the ASP.NET Core environment. In between two loop runs, there is a sleep time.
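A sketch of that loop (member names and the threshold fields are my assumptions, not the exact code from the repo):

```csharp
public async Task StartControllingAsync(CancellationToken cancellationToken)
{
    // Exit the loop as soon as the Worker Service shuts down
    while (!cancellationToken.IsCancellationRequested)
    {
        var (temperature, _) = await _temperatureProvider.GetTemperatureAsync();

        if (temperature >= _upperThreshold)
            _fanController.TurnOn();
        else if (temperature <= _lowerThreshold)
            _fanController.TurnOff();

        // Sleep time between two loop runs
        await Task.Delay(_sleepTime, cancellationToken);
    }
}
```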
This is the place where we really access the hardware. The Raspi has so-called General Purpose Input/Output (GPIO) pins - physical pins on its board that can be used for custom extensions. The NuGet package .NET Core IoT Libraries abstracts them away and allows us to set these pins in a very easy way. Take a look into the fan controller: to turn on the fan, GPIO pin 17 is set to a high value. And that's it.
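A sketch of such a fan controller using the System.Device.Gpio API from the .NET IoT libraries (class and member names are my assumptions):

```csharp
using System.Device.Gpio;

public class RaspiFanController
{
    private const int FanGpioPin = 17;
    private readonly GpioController _gpioController = new GpioController();

    public RaspiFanController()
    {
        _gpioController.OpenPin(FanGpioPin, PinMode.Output);
    }

    // A high value lets current flow through the transistor and spins up the fan
    public void TurnOn() => _gpioController.Write(FanGpioPin, PinValue.High);

    public void TurnOff() => _gpioController.Write(FanGpioPin, PinValue.Low);
}
```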
The user interface is a single Razor page providing the necessary information like the current temperature and the thresholds. This information is read from the temperature controller.
In the next part, I will describe how to bring everything together.
07 Feb 2020
During the last couple of months, I was doing a major refactoring of the dependency injection infrastructure of the product I'm building with my colleagues. The application relies heavily on the Service Locator pattern. To improve testability, a refactoring pattern evolved that other people might find useful.
Let’s start with an example showing the initial situation:
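The example is a sketch on my side (all type names besides CarFactory and ServiceLocator are assumptions):

```csharp
public class CarFactory
{
    public Car BuildCar()
    {
        // All dependencies are resolved via the global static service locator
        var engine = ServiceLocator.Resolve<IEngine>();
        var chassis = ServiceLocator.Resolve<IChassis>();

        return new Car(engine, chassis);
    }
}
```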
The component to refactor is CarFactory. As you can see, a global static ServiceLocator is used to obtain the engine and chassis instances building up the car to construct. Writing a unit test for this class can be cumbersome because you have to configure the global service locator. Furthermore, the ServiceLocator obscures the usage of dependencies like the engine and chassis from the outside.
The pure idea of dependency injection would teach us to refactor the code to something like this:
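Sketched with the same assumed types, the refactored class would look like this:

```csharp
public class CarFactory
{
    private readonly IEngine _engine;
    private readonly IChassis _chassis;

    // All dependencies are now visible in the public API
    public CarFactory(IEngine engine, IChassis chassis)
    {
        _engine = engine;
        _chassis = chassis;
    }

    public Car BuildCar() => new Car(_engine, _chassis);
}
```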
Now we’re requesting the necessary dependencies via constructor injection. For unit testing, this is a perfect situation, because now we can inject mocks that mimic the required behavior and everything works fine.
But since we’re not using the service locator anymore, somebody has to provide the necessary dependencies within the production code.
Sure, we could use a composition root and a dependency injection container. But depending on the circumstances (size of application, amount of time, etc.), this can become a very hard piece of work or even almost impossible.
Instead of using constructor injection, we could set up an integration test with a differently configured service locator. But whenever possible, I tend to favour unit over integration tests because they are usually faster and have a narrower scope.
So basically, there are two seemingly competing demands:
- Don’t change the public API in order to keep the production code as untouched as possible.
- Increase the testability.
And this is how I tended to consolidate the two demands:
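Again as a sketch with the assumed types:

```csharp
public class CarFactory
{
    private readonly IEngine _engine;
    private readonly IChassis _chassis;

    // The public API stays untouched: a parameterless constructor…
    public CarFactory()
        : this(ServiceLocator.Resolve<IEngine>(), ServiceLocator.Resolve<IChassis>())
    {
    }

    // …while the dependency-taking constructor is private
    private CarFactory(IEngine engine, IChassis chassis)
    {
        _engine = engine;
        _chassis = chassis;
    }

    public Car BuildCar() => new Car(_engine, _chassis);
}
```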
As you can see, the approach is pretty close to the former one using constructor injection. The difference lies in the two constructors: we still have the constructor specifying all the necessary dependencies, but it is declared private. The public constructor still defines no parameters. However, it calls the private constructor and resolves the necessary dependencies using the ServiceLocator. This way, nothing changes in terms of the component's public API and behavior.
But then what is the added value in terms of unit testing? Unlike the C# compiler, .NET allows the use of private constructors via reflection (see here). This enables us to call the private constructor from a unit test.
Doing so manually for each and every unit test would be a pain. Fortunately, there are packages like AutoMocker for Moq that take care of this. Using that package, our test looks like this:
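A sketch of such a test, assuming Moq.AutoMock and xUnit (the test name and assertion are my own):

```csharp
using Moq.AutoMock;
using Xunit;

public class CarFactoryTests
{
    [Fact]
    public void BuildCar_CreatesCarFromResolvedDependencies()
    {
        // AutoMocker creates mocks for IEngine and IChassis automatically
        var mocker = new AutoMocker();

        // enablePrivate: true allows calling the private constructor via reflection
        var testee = mocker.CreateInstance<CarFactory>(enablePrivate: true);

        var car = testee.BuildCar();

        Assert.NotNull(car);
    }
}
```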
Using this refactoring technique enabled me to write unit tests for a whole bunch of components of our application.
But it is important to keep one thing in mind: a private constructor is not marked as private just for fun. There are reasons why the component's creator chose it that way. Furthermore, we're bypassing the compiler via reflection - usually not the best idea.
So this technique is more like medicine: use it only in small doses or preferably not at all. Whenever possible, go for dependency injection all the way.
Happy coding and thank you for reading!