mu88 Developer Blog

Profiling a .NET Core 3.0 Console App running in Docker for Windows with dotTrace

Recently, I was asked to profile a .NET Console App running in Docker for Windows. I’m a big fan of the JetBrains tools for .NET: ReSharper, dotPeek, dotTrace - they are all part of my tool belt. Since I’d never profiled a Docker container with dotTrace before, this post illustrates how to do it.

First of all, we need some code to profile. The complete code can be downloaded from this GitHub repo. But basically, it is nothing more than this:
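The original listing is not reproduced here, but based on the description, a minimal sketch of the console app could look like this (class layout and member names are assumptions; only `DoSomeWork()` and the assembly name `TestWithDocker` appear in the post):

```csharp
using System;
using System.Threading;

namespace TestWithDocker
{
    public static class Program
    {
        private static readonly Random Random = new Random();

        public static void Main()
        {
            while (true)
            {
                DoSomeWork();
                Thread.Sleep(1000); // wait a second before the next round
            }
        }

        private static void DoSomeWork()
        {
            // generate and print 100 random numbers
            for (var i = 0; i < 100; i++)
            {
                Console.WriteLine(Random.Next());
            }
        }
    }
}
```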

As we can see, 100 random numbers are generated and printed to the console every second.

Furthermore, a Dockerfile is needed and it looks like this:
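The listing is missing here; a minimal sketch, assuming the default `dotnet publish` output path for a .NET Core 3.0 Windows container, might be:

```dockerfile
# pull the .NET Core 3.0 runtime as base image
FROM mcr.microsoft.com/dotnet/core/runtime:3.0

# add the compiled application (publish path is an assumption)
COPY bin/Release/netcoreapp3.0/publish/ App/
WORKDIR /App

# set the application as ENTRYPOINT
ENTRYPOINT ["TestWithDocker.exe"]
```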

We’re pulling the base image, adding the compiled application and setting it as the ENTRYPOINT.

Before building the Docker image, the application has to be built using the dotnet CLI:

dotnet publish -c Release

Afterwards, we can build the Docker image:

docker build -t test-with-docker .

Before running and profiling the container, please make sure that you have dotTrace installed and, if you haven’t already, familiarize yourself with the how-to Starting Remote Profiling Session. In summary, it says the following:

  • Unzip the Remote Agent to the environment to profile (in our case, the Docker container).
  • Start dotTrace and connect to the Remote Agent URL. By default, the Remote Agent uses port 9100.
  • Attach to the application.

So let’s do this step by step. First, we start the Docker container and map the container port 9100 to its local counterpart:

docker run -d -p 9100:9100 --name test test-with-docker

To copy the unzipped Remote Agent, the following command has to be executed:

docker cp RemoteAgent/. test:/RemoteAgent

This copies all the content from the host’s folder RemoteAgent to the container’s folder RemoteAgent. In case this command fails, saying that you cannot copy content while a container is running: this seems to be a Windows/Hyper-V limitation. We can work around it by stopping the container, copying the content and finally starting it again:

docker stop test
docker cp RemoteAgent/. test:/RemoteAgent
docker start test

Now the Remote Agent is there, but it has to be started:

docker exec -d test RemoteAgent/RemoteAgent.exe

Finally, we can connect to our application using dotTrace. As the Remote Agent URL, we use net.tcp://localhost:9100/RemoteAgent. This accesses local port 9100 of my machine, which is mapped to port 9100 of the Docker container where the Remote Agent is up and running. Now we can attach dotTrace to TestWithDocker.exe and collect snapshots as usual.

As you can see in the following screenshot, everything works as usual when profiling an application and we find our method DoSomeWork():

Dive into your Android's device network traffic

Recently, I came across the challenge of analyzing the network traffic of my Android smartphone. I wanted to know which HTTP requests a specific app executes when the user triggers a UI action. When analyzing an app on a Windows PC, my silver bullet is Fiddler. So I was curious how to do the same with my smartphone.

The general approach is to make Fiddler the smartphone’s network proxy. For this, both PC and smartphone have to be in the same network.

At first, we need an installation of Fiddler. This web debugging proxy has tons of options, but comes with a well-defined out-of-the-box setting. After starting the application, we immediately see the incoming and outgoing traffic from the browser or mail client.

This works because Fiddler registers itself as a local proxy running on port 8888. The port can be changed in the options: Tools -> Options -> Connections -> Fiddler listens on port

While we’re here, we enable the option Allow remote computers to connect. This will allow the Android phone to use Fiddler as its proxy.

That’s it about configuring Fiddler. Finally, we can set up the Android device to use the proxy. To do so, open the settings of the WiFi to use and expand the Advanced options.

Now we have to enter the following values:

  • Proxy = Manual
  • Proxy hostname = IP address of your PC where Fiddler is running
  • Proxy port = Port on your PC on which Fiddler is listening (e.g. 8888)

To check whether it works, simply open an arbitrary website in the smartphone’s browser. We should now see all the network traffic within Fiddler. If it doesn’t work, check your firewall to make sure that incoming traffic to Fiddler’s port is allowed.
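The proxy can also be verified from another machine in the same network via the command line; a quick sketch (the IP address is a placeholder for your PC):

```shell
# route a request through Fiddler on the PC
# (replace 192.168.0.10 with your PC's actual IP address)
curl -x http://192.168.0.10:8888 http://example.com
```

If the request shows up in Fiddler’s session list, the proxy is reachable from the network.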

Build and debug Qooxdoo with Visual Studio Code

Because of my job, I got in touch with the JavaScript framework Qooxdoo. I had never heard of it before, since the web development solar system seems to revolve around Angular, React and all the other big frameworks.

The process of building is pretty straightforward: you simply have to run a Python script (the Qooxdoo generator). It does all the ceremony of bundling, etc. and gives you the final application.

When it comes to debugging the application, you can put on your usual “web developer tool belt”: open the HTML in your favorite browser and use its Developer Tools.

For me as a Visual Studio loving C# developer, it is still odd to have three different components to build and debug my app:

  • IDE to develop
  • Console to build
  • Browser tools to debug

Of course, these are primarily prejudices. I’ve read too many articles not to know that it is much easier today. So I wanted to combine all three steps into one environment, which is Visual Studio Code.

I’ll skip the first step, since it is not worth mentioning that I can write JavaScript code in VS Code :grin: Let’s focus on the second step, which is building the application. According to the Microsoft documentation, I’ve set up a .vscode\tasks.json with the following content:

    {
        "version": "2.0.0",
        "tasks": [
            {
                "label": "Build HelloWorld",
                "type": "shell",
                "command": "${workspaceFolder}/",
                "group": {
                    "kind": "build",
                    "isDefault": true
                }
            }
        ]
    }
It creates a default build task called Build HelloWorld (which is my sample application) that simply calls the Python generator script.

Lastly, there is the step of debugging my built application right from VS Code. Again, the Microsoft documentation was very helpful. To debug an application, I had to set up the file .vscode\launch.json in the following way:

    {
        "version": "0.2.0",
        "configurations": [
            {
                "type": "chrome",
                "request": "launch",
                "name": "Launch HelloWorld",
                "file": "${workspaceFolder}/source/index.html"
            }
        ]
    }
This creates the launch profile Launch HelloWorld which launches Google Chrome with my application. Furthermore, the IDE gets attached to the browser and I can set breakpoints in VS Code.

For me, this is a pretty convenient setup which reduces a C# developer’s anxiety about working with JavaScript code :wink: If you want a jump start, you can use my sample application, which is available on GitHub.

Map Service Search

Almost a year ago, I gave a talk at the FME User Conference in Vancouver with the title “Hey dude, where is my Workspace?” This talk focused on the question of how to keep track of all the FME Workspaces flying around all over the enterprise. In my case, this challenge was tremendously simplified by a tool which allowed searching through the metadata of Workspaces hosted on several instances of FME Server.
Since then, I was curious whether it would be possible to adapt the approach and apply it to our ArcGIS Server farm. With the growing demand for Map Services (HTTP services serving map content) and the correspondingly growing infrastructure, it gets harder to stay on top of things and answer questions like:

  • Which Map Services use a specific dataset?
  • Which ArcGIS Server instance hosts the Map Service named XYZ?
  • etc.

The idea is born

For quite some time I’ve known that you can see all the used workspaces of a Map Service in ArcGIS Server Manager. But until a couple of weeks ago, I had never asked myself the question: WHERE DO THEY COME FROM?
With the help of Chrome’s Developer Tools, I discovered that ArcGIS Server Manager uses the ArcGIS Server Administrator API for this; more precisely, it calls https://<<Name of ArcGIS Server>>/arcgis/admin/services/<<Name of Map Service>>.MapServer/iteminfo/manifest/manifest.json. This JSON object contains all the desired information.

From theory to practice

Based on this discovery, I’ve built a web crawler which determines all Map Services of an ArcGIS Server, retrieves their corresponding manifest.json and extracts the desired information. I’m a big fan of the FME platform, so my hammer for this nail was an FME Workspace. Since it is all about a couple of HTTP requests, the Workspace is quite simple. The process provides the following information:

  • Name of the Map Service
  • Environment of the ArcGIS Server (e. g. Staging or Production)
  • Type of the ArcGIS Server (e. g. Intranet or Internet)
  • All used datasources with the following details:
    • Type of the datasource (e. g. SDE or File Geodatabase)
    • Name of the datasource (e. g. name of the Oracle or SQL Server instance)
    • Name of the dataset (name of the Feature Class, table, file, etc.)
    • Authentication mode (e. g. Operating System Authentication or Database Authentication)
    • Username used to access the datasource (only in case of an Enterprise Geodatabase)

Now I had my information, but where to store it? Based on previous experience with Elasticsearch, I decided to use this product as my back end and store the data as a search index. Working with Elasticsearch means communicating with its REST API to create and modify the search index - just another HttpCaller transformer in the FME Workspace.
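As a sketch of that REST communication (the index name, document ID and field names are assumptions derived from the attribute list above, not the actual schema), indexing one crawled Map Service could look like this:

```shell
# index one crawled Map Service document
# (index name "mapservices" and all field names are assumed)
curl -X PUT "localhost:9200/mapservices/_doc/1" \
  -H "Content-Type: application/json" -d'
{
  "name": "XYZ",
  "environment": "Production",
  "type": "Intranet",
  "datasources": [
    {
      "type": "SDE",
      "instance": "ORA_PROD",
      "dataset": "TREES",
      "authenticationMode": "Database Authentication",
      "username": "GIS_USER"
    }
  ]
}'
```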

The last step was to build a small, lightweight client to access the collected information. To be honest, I have to admit that my comfort zone is the back end - I like to design, develop, test and stress REST APIs and stuff like that, but when it comes to client development, I always feel like a child on its first day of school. So I did a lot of research, checked out all the new frameworks, but finally ended up with a combination of AngularJS and Bootstrap.
The client is pretty simple: it communicates with Elasticsearch through the REST API. In the bootstrap phase, all the distinct values for the type and environment of an ArcGIS Server are read through Elasticsearch Aggregations. This enables filtering for specific types and environments with two drop-down lists.
After performing a search, the user is presented with all Map Services matching the search criteria. In the detail view of a search result, all datasources are listed and the ones matching the search term are highlighted.
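Reading those distinct values can be done with a terms aggregation; a sketch of such a request (index and field names are assumptions matching the indexing example above):

```shell
# fetch all distinct ArcGIS Server types and environments
# (index name "mapservices" and field names are assumed)
curl -X GET "localhost:9200/mapservices/_search" \
  -H "Content-Type: application/json" -d'
{
  "size": 0,
  "aggs": {
    "types": { "terms": { "field": "type.keyword" } },
    "environments": { "terms": { "field": "environment.keyword" } }
  }
}'
```

`"size": 0` suppresses the search hits themselves, so only the aggregated distinct values are returned for the two drop-down lists.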

Time for a demo

And that’s all! For those curious to see and use the app, I’ve created a small demo:
Start exploring the sample data by entering the search term ‘tree’!

The source code as well as the FME Workspace to create the Elasticsearch index are available on GitHub. Feel free to use it - I’d appreciate it if this little tool also helped others facing the same challenges.

Thanks for reading!