mu88 Developer Blog

Dependency injection and legacy code

Over the last couple of months, I have been doing a major refactoring of the dependency injection infrastructure of the product I build with my colleagues. The application relies heavily on the service locator pattern. To improve its testability, a refactoring pattern evolved that other people might find useful.

Let’s start with an example showing the initial situation:
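A minimal sketch of such a component might look like this (the exact ServiceLocator API and the Car, IEngine and IChassis types are assumptions for illustration):

```csharp
public class CarFactory
{
    public Car BuildCar()
    {
        // Dependencies are pulled from a global static service locator -
        // invisible from the outside and hard to replace in a unit test.
        IEngine engine = ServiceLocator.Resolve<IEngine>();
        IChassis chassis = ServiceLocator.Resolve<IChassis>();

        return new Car(engine, chassis);
    }
}
```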

The component to refactor is CarFactory. As you can see, a global static ServiceLocator is used to obtain the engine and chassis instances used to build the car. Writing a unit test for this class can be cumbersome because you have to consider the global service locator. Furthermore, the ServiceLocator obscures the use of further dependencies like IEngine and IChassis.

The pure idea of dependency injection would teach us to refactor the code to something like this:
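A sketch of the refactored class, under the same assumed types:

```csharp
public class CarFactory
{
    private readonly IEngine _engine;
    private readonly IChassis _chassis;

    // All dependencies are now visible in the public API
    // and can be replaced by mocks in a unit test.
    public CarFactory(IEngine engine, IChassis chassis)
    {
        _engine = engine;
        _chassis = chassis;
    }

    public Car BuildCar() => new Car(_engine, _chassis);
}
```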

Now we’re requesting the necessary dependencies via constructor injection. For unit testing, this is a perfect situation, because now we can inject mocks that mimic the required behavior and everything works fine.

But since we’re not using the service locator anymore, somebody has to provide the necessary dependencies within the production code.
Sure, we could use a composition root and a dependency injection container. But depending on the circumstances (size of application, amount of time, etc.), this can become a very hard piece of work or even almost impossible.
Instead of using constructor injection, we could set up an integration test with a differently configured service locator. But whenever possible, I tend to favour unit over integration tests because they are usually faster and have a narrower scope.

So basically, there are two seemingly competing demands:

  • Don’t change the public API in order to keep the production code as untouched as possible.
  • Increase the testability.

And this is how I tended to consolidate the two demands:
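A sketch of the combined approach, again using the assumed ServiceLocator API:

```csharp
public class CarFactory
{
    private readonly IEngine _engine;
    private readonly IChassis _chassis;

    // The public API stays unchanged: the parameterless constructor
    // still resolves its dependencies via the service locator.
    public CarFactory()
        : this(ServiceLocator.Resolve<IEngine>(),
               ServiceLocator.Resolve<IChassis>())
    {
    }

    // Only reachable via reflection, e.g. from a unit test.
    private CarFactory(IEngine engine, IChassis chassis)
    {
        _engine = engine;
        _chassis = chassis;
    }

    public Car BuildCar() => new Car(_engine, _chassis);
}
```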

As you can see, the approach is pretty close to the former one using constructor injection. The difference lies in the two constructors: we still have the constructor specifying all the necessary dependencies, but it is declared private.
The public constructor still defines no parameters. However, it is calling the private constructor and resolves the necessary dependencies using the ServiceLocator. This way, nothing changes in terms of the component’s public API and behavior.

But then what is the added value in terms of unit testing? Unlike the C# compiler, .NET allows the use of private constructors via reflection (see here). This enables us to call the private constructor from a unit test.
Doing so manually for each and every unit test would be a pain. Fortunately, there are packages like AutoMocker for Moq that take away the pain. Using that package, our test looks like this:
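A sketch of such a test, assuming xUnit and the Moq.AutoMock package (the CreateInstance overload with enablePrivate permits non-public constructors):

```csharp
using Moq.AutoMock;
using Xunit;

public class CarFactoryTests
{
    [Fact]
    public void BuildCar_CreatesCar()
    {
        // AutoMocker creates a mock for every constructor parameter;
        // 'enablePrivate: true' permits calling the private constructor.
        var mocker = new AutoMocker();
        var carFactory = mocker.CreateInstance<CarFactory>(enablePrivate: true);

        var car = carFactory.BuildCar();

        Assert.NotNull(car);
    }
}
```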

Using this refactoring technique enabled me to write unit tests for a whole bunch of components of our application.

But it is important to keep one thing in mind: a private constructor is not marked as private just for fun. There are reasons why the component’s creator chose it that way. Furthermore, we’re bypassing the compiler via reflection, which is usually not the best idea :wink:
So this technique is more like medicine: use it only in small doses or preferably not at all. Whenever possible, go for dependency injection all the way.

Happy coding and thank you for reading!

Localizing texts in a server-side Blazor app

Like so many of us, I’m playing around with the different varieties of Blazor. Recently, I ported an application based on Angular 2 and Electron to Blazor Server and Electron.NET. In doing so, I came across the topic of translating the app.

Technically, my goal was not just translating the app into a specific language; I wanted to do it properly. Maybe you’ve stumbled across the terms i18n and l10n. Those cryptic acronyms stand for internationalization and localization. The first one simply means: I modify my code in a way that it is capable of handling different languages. The second one means: now let’s introduce the translations for the different languages. So i18n can be considered the basis of l10n, and it allows you to add new languages without changing the code consuming them.
Do you wonder where these strange names come from? It is pretty simple: take VS Code or Notepad++, paste the words internationalization and localization into it, and count the number of characters between i and n and between l and n, respectively: 18 and 10.

But how to apply i18n and l10n to Blazor Server? At the time of porting my app (a couple of months ago), there was almost no information about that topic, so I had to find my own solution.

Since Blazor Server is an ASP.NET Core application, I’ve written an injectable component called CustomTranslator:
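A minimal sketch of the component (the method name GetTranslation is my choice; IStringLocalizer<T> is ASP.NET Core’s built-in localizer):

```csharp
using Microsoft.Extensions.Localization;

public interface ICustomTranslator
{
    string GetTranslation(string text);
}

public class CustomTranslator : ICustomTranslator
{
    // IStringLocalizer<T> comes from ASP.NET Core's DI container
    private IStringLocalizer<CustomTranslator> Localizer { get; }

    public CustomTranslator(IStringLocalizer<CustomTranslator> localizer)
    {
        Localizer = localizer;
    }

    // Look up the translation for the given key
    public string GetTranslation(string text) => Localizer[text];
}
```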

As you can see, the interface ICustomTranslator accepts a string and returns the translation. The implementation CustomTranslator does a lookup in a property called Localizer. This component is a built-in functionality of ASP.NET Core (see here) and it comes from the Dependency Injection container.

To use the custom translation service, it has to be registered within Startup.cs:
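A sketch of the relevant parts of Startup.cs, matching the steps described below:

```csharp
using System.Globalization;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Localization;
using Microsoft.Extensions.DependencyInjection;

public partial class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Look for all translations in the resource folder "Resources"
        services.AddLocalization(options => options.ResourcesPath = "Resources");

        // Register the custom translation component as a singleton
        services.AddSingleton<ICustomTranslator, CustomTranslator>();

        // ... remaining Blazor Server registrations
    }

    public void Configure(IApplicationBuilder app)
    {
        // All languages the application will support
        var supportedCultures = new[] { new CultureInfo("de"), new CultureInfo("en") };

        app.UseRequestLocalization(new RequestLocalizationOptions
        {
            // German (de) is the default language
            DefaultRequestCulture = new RequestCulture("de"),
            SupportedCultures = supportedCultures,
            SupportedUICultures = supportedCultures
        });

        // ... remaining Blazor Server middleware
    }
}
```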

Let’s go through this step by step. Within ConfigureServices(), basic localization is enabled and we’re telling ASP.NET Core to look for all translations in the resource folder Resources. Next, the custom translation component is registered within the Dependency Injection container as a singleton.
The array supportedCultures within Configure() defines all the languages the application will support. The following line configures the application to support the requested languages and defines that the default language of my application is German (de).

Now we can go to any Razor page and the Dependency Injection container will provide the custom translation component:
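For example, at the top of Index.razor (the property name Translator is my choice):

```razor
@inject ICustomTranslator Translator
```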

By using the following code, the translator can be consumed:
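For example, assuming a method like GetTranslation(string) on ICustomTranslator and the resource key SomeString:

```razor
<h1>@Translator.GetTranslation("SomeString")</h1>
```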

Of course, we could also consume the translation component in any other class provided by the Dependency Injection container:
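A hypothetical example class resolving the translator via constructor injection (GreetingService and GetTranslation are illustrative names):

```csharp
public class GreetingService
{
    private readonly ICustomTranslator _translator;

    // The DI container injects the registered singleton
    public GreetingService(ICustomTranslator translator)
    {
        _translator = translator;
    }

    public string GetGreeting() => _translator.GetTranslation("SomeString");
}
```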

And that’s all! Well, almost :wink: The code would work, but it wouldn’t do anything because there are no translations yet. By creating the two resource files Resources\CustomTranslator.en.resx (English) and Resources\CustomTranslator.de.resx (German) and adding the key SomeString with an appropriate translation, the mission is completed. When running the app, the UI will be localized in German.

If I’d like to add support for French, all I would have to do is:

  • Create a file Resources\CustomTranslator.fr.resx containing all the translations.
  • Add new CultureInfo("fr") to the array supportedCultures in Startup.Configure().

As you’ve hopefully seen, doing i18n and l10n with Blazor Server is not difficult at all. For people familiar with ASP.NET Core, it should be super straightforward.
For me, the biggest challenge was understanding that the folder name Resources and the file names CustomTranslator.<language>.resx have to match an exact pattern; otherwise, they won’t be recognized.

Thank you for reading!

Profiling a .NET Core 3.0 Console App running in Docker for Windows with dotTrace

Recently, I was asked to profile a .NET Console App running in Docker for Windows. I’m a big fan of the JetBrains tools for .NET: ReSharper, dotPeek, dotTrace - they are all part of my toolbelt. Since I’ve never profiled a Docker container with dotTrace, this post shall illustrate how to do this.

First of all, we need some code to profile. The complete code can be downloaded from this GitHub repo. But basically, it is nothing more than this:
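A sketch of the essential part (see the GitHub repo for the real code):

```csharp
using System;
using System.Threading;

public static class Program
{
    public static void Main()
    {
        while (true)
        {
            DoSomeWork();
            Thread.Sleep(1000); // wait one second
        }
    }

    private static void DoSomeWork()
    {
        // Generate and print 100 random numbers
        var random = new Random();
        for (var i = 0; i < 100; i++)
        {
            Console.WriteLine(random.Next());
        }
    }
}
```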

As we can see, every second, 100 random numbers are generated and printed to the console.

Furthermore, a Dockerfile is needed and it looks like this:
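A sketch matching the description below (the publish path and executable name are assumptions; the real file is in the repo):

```dockerfile
FROM mcr.microsoft.com/windows/servercore:1903
COPY bin/Release/netcoreapp3.0/publish/ /app/
ENTRYPOINT ["C:/app/TestWithDocker.exe"]
```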

We’re pulling the base image mcr.microsoft.com/windows/servercore:1903, adding the compiled application and setting it as the ENTRYPOINT.

Before building the Docker image, the application has to be published using the dotnet CLI:

dotnet publish -c Release

Afterwards, we can build the Docker image:

docker build -t test-with-docker .

Before running and profiling the container, please make sure that you have dotTrace installed and, if you haven’t done so yet, familiarize yourself with the how-to Starting Remote Profiling Session. In summary, it says the following:

  • Unzip RemoteAgent.zip to the environment to profile (in our case the Docker container).
  • Start dotTrace and connect to the Remote Agent URL. By default, the Remote Agent uses port 9100.
  • Attach to the application.

So let’s do this step by step. First, we start the Docker container and map the container port 9100 to its local counterpart:

docker run -d -p 9100:9100 --name test test-with-docker

To copy the unzipped Remote Agent, the following command has to be executed:

docker cp RemoteAgent/. test:/RemoteAgent

This copies all the content from the host’s folder RemoteAgent to the container’s folder RemoteAgent. In case this command fails, saying that you cannot copy content while a container is running: this seems to be a Windows/Hyper-V limitation. We can work around it by stopping the container, copying the content and finally starting it again:

docker stop test
docker cp RemoteAgent/. test:/RemoteAgent
docker start test

Now the Remote Agent is there, but it has to be started:

docker exec -d test RemoteAgent/RemoteAgent.exe

Finally, we can connect to our application using dotTrace. As Remote Agent URL, we use net.tcp://localhost:9100/RemoteAgent. This accesses the local port 9100 of my machine which is mapped to port 9100 of the Docker container where the Remote Agent is up and running. Now we can attach dotTrace to TestWithDocker.exe and collect snapshots as usual.

As you can see in the following screenshot, everything works as usual when profiling an application and we find our method DoSomeWork():

Dive into your Android device's network traffic

Recently, I came across the challenge of analyzing the network traffic of my Android smartphone. I wanted to know which HTTP requests a specific app executes when the user triggers a UI action. When analyzing an app on a Windows PC, my silver bullet is Fiddler. So I was curious how to do the same with my smartphone.

The general approach is to make Fiddler the smartphone’s network proxy. For this, both PC and smartphone have to be in the same network.

At first, we need an installation of Fiddler. This web debugging proxy has tons of options, but comes with a well-defined out-of-the-box setting. After starting the application, we immediately see the incoming and outgoing traffic from the browser or mail client.

This works because Fiddler registers itself as a local proxy running on port 8888. The port can be changed in the options: Tools -> Options -> Connections -> Fiddler listens on port.

While we’re here, we enable the option Allow remote computers to connect. This will allow the Android phone to use Fiddler as its proxy.

That’s it about configuring Fiddler. Finally, we can set up the Android device to use the proxy. To do so, open the settings of the WiFi to use and expand the Advanced options.

Now we have to enter the following values:

  • Proxy = Manual
  • Proxy hostname = IP address of your PC where Fiddler is running (e.g. 192.168.178.53)
  • Proxy port = Port on your PC on which Fiddler is listening (e.g. 8888)

To check whether it works, we can simply try to open an arbitrary website in the browser. Now we should see all the network traffic within Fiddler. If it doesn’t work, check your firewall to see whether incoming traffic to Fiddler’s port is allowed.

Build and debug Qooxdoo with Visual Studio Code

Because of my job, I got in touch with the JavaScript framework Qooxdoo. I had never heard of it before, since the web development solar system seems to rotate around Angular, React and all the other big frameworks.

The process of building is pretty straightforward: you simply have to run a Python script, usually called generate.py. That does all the ceremony of bundling, etc. and gives you the final application.

When it comes to debugging the application, you can put on your usual “web developer tool belt”: open the HTML in your favorite browser and use its Developer Tools.

For me as a Visual Studio loving C# developer, it is still odd to have three different components to build and debug my app:

  • IDE to develop
  • Console to build
  • Browser tools to debug

Of course, these are primarily prejudices. I have read too many articles not to know that it is much easier today. So I wanted to combine all three steps into one environment, which is Visual Studio Code.

I’ll skip the first step, since it is not worth mentioning that I can write JavaScript code in VS Code :grin: Let’s focus on the second step, which is building the application. According to the Microsoft documentation, I’ve set up a .vscode\tasks.json with the following content:

{
    "version": "2.0.0",
    "tasks": [
        {
            "label": "Build HelloWorld",
            "type": "shell",
            "command": "${workspaceFolder}/generate.py",
            "group": {
                "kind": "build",
                "isDefault": true
            }
        }
    ]
}

It creates a default build task called Build HelloWorld (which is my sample application) that simply calls the Python generator script.

Lastly, there is the step of debugging my built application right from VS Code. Again, the Microsoft documentation was very helpful. To debug an application, I had to set up the file .vscode\launch.json in the following way:

{
    "version": "0.2.0",
    "configurations": [
        {
            "type": "chrome",
            "request": "launch",
            "name": "Launch HelloWorld",
            "file": "${workspaceFolder}/source/index.html"
        }
    ]
}

This creates the launch profile Launch HelloWorld which launches Google Chrome with my application. Furthermore, the IDE gets attached to the browser and I can set breakpoints in VS Code.

For me, this is a pretty convenient setup which reduces a C# developer’s anxiety about working with JavaScript code :wink: If you want a jump start, you can use my sample application, which is available on GitHub.