Choosing a Windows container OS

One of the most important decisions to make when containerizing your Windows application is which container OS to use.

To understand this decision, you need to understand what actually varies:

  1. From customer to customer, the operating system support channel will change between LTSC and SAC.
  2. From application to application, the technology requirements will change.

Let's have a closer look …

Customer to Customer (LTSC or SAC)

Different customers will have different server operating systems and support agreements in place. The container host will be a Windows Server 2016 machine; however, there are two categories of support channels to consider. Each of these options corresponds to the possible builds of the container host you will find at a customer. Your container OS must be compatible with the customer's infrastructure. (The infrastructure may very well be your own in some cases.)

  • LTSC
    • The Long-Term Servicing Channel is for the non-early adopters and non-innovators.
    • Updates only include patches and security updates.
    • Release cadence is every 2 to 3 years.
    • Supported for 5 years.
    • Examples: Windows Server 2016, Windows Server 2019
  • SAC (Semi-Annual Channel)
    • Release cadence is twice per year.
    • For early adopters and innovators.
    • The OS title often reflects the month and year of the release (Server Core 1709, Server Core 1803).
    • Supported for 18 months in mainstream production.
    • Examples: Windows Server 1709 and Windows Server 1803.

This chart will help you quickly make the correct decision regarding the support channel your container OS should use.
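To make the channel decision concrete: it ultimately surfaces as the tag of the base image you pull. The tags below are an illustrative sketch, assuming the official Server Core repository:

```dockerfile
# LTSC host: pin to the long-term servicing tag (supported for 5 years)
FROM microsoft/windowsservercore:ltsc2016

# SAC host: pin to a semi-annual tag (supported for 18 months)
# FROM microsoft/windowsservercore:1709
```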


Application to Application (NanoServer or ServerCore)

You always want to use the smallest possible container OS that supports your application. Microsoft's container OS ships in two flavors:

  1. ServerCore
    1. Supports most server roles.
    2. Image size is between 2 and 5 GB, depending on the host OS support channel you target.
    3. Best fit for lift-and-shift brownfield applications.
  2. NanoServer
    1. Supports a limited set of server roles.
    2. Has a limited .NET runtime.
    3. Built-in support for the .NET Core runtime.
    4. Version 5.0 is apparently also supported.
    5. Best fit for greenfield applications.

Tag naming convention

Look at the naming convention in the images below and see how it maps to the decision tree you would use. In this example, for a WebForms application, you will see the following naming convention.


The container you choose will be microsoft/, but at least now you understand how you got there. At the root of our decision tree is the customer's support channel, next the OS flavor, and finally the core runtime requirements of your application.

Always remember you are dealing with a layered file system, so by convention the tag description should reflect what is in its root. All images on Docker Hub for the Microsoft stack follow this convention.
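As a concrete reading of this convention, the tag used later in this series breaks down as follows (the annotations are mine):

```
microsoft/aspnet:4.7.1-windowsservercore-ltsc2016
|                |     |                 |
|                |     |                 +-- support channel (LTSC, Windows Server 2016)
|                |     +-- container OS flavor (Server Core)
|                +-- .NET Framework runtime version
+-- registry and image name (ASP.NET enabled on IIS)
```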


As an architect, I am always looking to capture these technical decisions in a fluent programming model, and nothing prevents us from doing just that.

It is possible to capture such decisions in a layer, which in turn allows us to add some useful utilities to that layer. These utilities are typically used by all the images we create; examples include URL Rewrite.

The result is that you only have a handful of images to maintain, which makes container patching really easy.
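As a sketch of what such a shared layer could look like (the downstream image name is hypothetical, and the Chocolatey package name for URL Rewrite is an assumption):

```dockerfile
# escape=`
# Hypothetical shared base layer that locks in the support-channel and OS decisions
FROM microsoft/aspnet:4.7.1-windowsservercore-ltsc2016
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop';"]

# Bake in the utilities all downstream images need, e.g. URL Rewrite
# (the package name 'urlrewrite' is an assumption)
RUN iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1')); `
    choco install urlrewrite -y

# Downstream application images then simply start with:
# FROM mycompany/aspnet-base:1.0
```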



What is Docker and why will it change everything!

So what is all this Docker hype about? Software is multi-faceted: getting it working in your dev environment is one thing, but getting it into production presents many complicated challenges for DevOps teams. Your software will always run in your labs against a set of external assumptions. This could be the operating system, a message queue, a database, or even dependencies on the Java and .NET runtimes. The fact is that these components external to your software are often different in production, and all too often we see software fail because the external components no longer integrate correctly in the production environment.

In addition to this, we have to create rather complex installers to navigate the differences between the production environment and the test lab. Sometimes your application will need to run on more than one server. Many enterprises need to scale up rapidly, both vertically and horizontally, to deal with user demand for resources. Over time these environments will need to be upgraded, maintained and analysed for problem solving. All of these are the traditional challenges of a typical development operations department.

So how does Docker change all of this? In its simplest form, Docker allows you to modularize your application into units along logical machine boundaries. (I will discuss container-driven design later.) The granularity of your module, or Docker service, is an instance of your software AND the actual logical operating system with all its dependencies.

Docker containers spin up in seconds, and their memory footprint is negligible, and the promise is that your software always runs exactly the same way in production as in the lab.

Because Docker containers are relatively small, it is possible to host thousands of them on Docker Swarm clusters potentially containing hundreds of nodes.
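To sketch what that looks like in practice (service name and replica counts are purely illustrative):

```shell
# Turn the current host into a one-node Swarm manager
docker swarm init

# Run 50 replicas of a service across the cluster
docker service create --name webapp --replicas 50 -p 80:80 wedocode/webformsapp:latest

# Scale up when demand spikes
docker service scale webapp=200
```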

Docker provides an entire ecosystem for container distribution and management, which aligns seamlessly with these DevOps challenges.

The reason I believe this technology will change everything is that both sides ultimately win: the vendor and the customer. Cost of ownership is radically reduced for the vendor, while the customer enjoys a superior security posture, better server density and a guarantee that the software will function correctly. For me this is the most compelling reason to learn Docker and get involved.

I am going to get deeper into these topics in future posts, with some very technical articles on container orchestration engines such as Service Fabric, Kubernetes and Swarm. I will also demonstrate how to set up a cluster in Azure, create a load balancer, configure reverse-proxy containers to isolate all transport security concerns, and then integrate this with Docker's service discovery to distribute load around the cluster. Fun times … stay tuned … I will be doing all of this with the Microsoft stack.


Modernizing legacy applications with Docker – Part 1


Today I will discuss how to modernize your WebForms application to run as a Docker service. Once you have done this, it will be possible to run your application in the cloud and have it orchestrated by Swarm, Kubernetes or even Service Fabric. This will allow your application to run at scale and be more reliable and available.

So today you are going to learn how to lift and shift that legacy application into a container. Visual Studio provides built-in support for this today, directly in the IDE (watch here); however, your situation is often very different and unique, and your dependencies will need to be described in a manner that is probably a lot more fine-grained. Although the sales pitch claims it can deal with brownfields, you will often need to do much more plumbing to support them. You will probably need specific server roles, or a bootstrap to install certificates and other settings that change from customer to customer.

This tutorial will show you how to express these fine-grained concerns, and provide a more detailed and controlled approach.

Potential show stoppers

If your legacy application uses the following technologies, you will not be able to port it as is, and you will need to make code changes or even design changes to gain compatibility.

  • MSMQ (no support for this technology on the Server Core 2016 container OS)
  • MSDTC (no support for this across all container OS flavors, as far as I understand)

I talk more about these technology problems and the workarounds in this article.

What you will need to get going

  1. Windows 10 with the Creators Update or Windows Server 2016 (check it out here)
  2. Download and install Docker for Windows (download it here)
  3. Visual Studio 2015 or later (download it here)

For this tutorial I will be using the following Docker version:

Docker CE about box.

If you have a later version, you need not worry; the commands I will be using are very basic and will still work.

Developers and testers will use Docker CE. Docker CE provides a GUI that simplifies certain development tasks. CE only runs on Windows 10. You can also use Docker EE if you are following this tutorial on Windows Server 2016.


Let's get going

For the purpose of this tutorial I am going to use the default WebForms template. I have created the web application and it now looks like this …

Default webforms template.

The application dependencies are as follows:

  • Operating system
  • IIS
  • Your software

Note: a legacy WebForms application uses an assembly called System.Web, which couples it to the IIS server role. Modern .NET Core applications do not have this hard-coupled dependency, and hence the requirements can change from scenario to scenario.

Creating the image

Much like a class is the blueprint of an object, the image is the blueprint of the container. Create a Dockerfile that will contain the commands to build the image. The Dockerfile is also going to be the target of the docker build command.

Dockerfile

  • A Dockerfile is nothing more than a text file named "Dockerfile" without an extension.

Simply create the Dockerfile in the root directory of the application.

  1. Right-click on the project and select Add -> New Item.
  2. Select a text file and click the Add button.
  3. Rename the file in the Solution Explorer (F2) and remove the ".txt" extension.

You will now have this:

Dockerfile created and we are ready to begin!


An image is defined using a set of Docker commands. The "FROM" command is often the first command in the Dockerfile. This command allows your image to inherit from definitions provided by the software vendor.

Choosing the correct base is probably the most important decision.

Docker uses a layered file system. In this example we will need to base our image on an official image from Microsoft. The image can be found here on Docker Hub. This image was built from other official images, and its Dockerfile can be seen here. By following each "FROM" statement, the base image can be traced, which is ultimately a flavor of container OS.

Windows Server Core is an OS specifically designed to run inside a container and is released as an official image regularly on Docker Hub. It is important to keep the image referenced in the "FROM" statement up to date.



You need to choose a container OS based on two important aspects; read about it here.

Adding layers

The image will have an additional layer added for each Dockerfile command expressed in the file below.

# escape=`

FROM microsoft/aspnet:4.7.1-windowsservercore-ltsc2016

# Set the shell to PowerShell and stop on errors
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]

# Copy the published content into the wwwroot folder
ADD ./bin/release/publish /inetpub/wwwroot

# Install Chocolatey and NuGet, then install WebConfigTransformRunner so that we can apply configuration transformations dynamically
RUN Set-ExecutionPolicy Bypass; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1')); `
    choco install nuget.commandline --allow-empty-checksums -y; `
    nuget install WebConfigTransformRunner -Version

# Set the entry point
ENTRYPOINT C:\ServiceMonitor.exe w3svc

On line 1, the backtick (`) is declared as the escape character, which is used for line continuation; the default is the backslash (\) if this line is missing. In Windows, whenever a path is expressed it would otherwise need to be escaped. Great for Linux users, but not good for Windows, so be sure not to leave out this special comment.

On line 3 we make use of the FROM command to choose an environment based on .NET Framework 4.7.1, which runs on Server Core 2016 LTSC. Subsequent container OS updates will use the same tag, so be aware that your build server may pull down a newer image. You can specify a build-specific tag, which puts you firmly in control of when you update your container OS.

At line 6 we tell Docker that we wish to use PowerShell as the shell, using the SHELL command. This allows us later to run PowerShell inside the container as we build it.

The ADD command on line 9 copies the files into the container. The first argument is the relative path on the build machine, and the second argument is the destination folder in the container. In this case the target directory already exists because we inherit it from the base image; ADD will force creation and copy recursively from the source into the target even if the target directory does not yet exist. A similar command exists called COPY. The main difference is that ADD can also reference a source URL.
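A quick sketch of the difference (the URL is purely illustrative):

```dockerfile
# COPY only accepts paths inside the build context
COPY ./bin/release/publish /inetpub/wwwroot

# ADD additionally accepts a source URL (illustrative)
ADD https://example.com/tools/utility.zip C:/tools/utility.zip
```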

The RUN command will execute the PowerShell script in its first argument. The RUN command is capable of executing a PowerShell script consisting of many commands. Remember this, because it is not good to add excessive layers by adding a separate RUN command for each modification. Also consider moving such commands to a bootstrap file. Strictly speaking, a container should only do one thing, and it should be as small as possible; exceptions to this rule apply when the brownfield component requires special handling. To demonstrate the RUN command, we are sneaking in some utility software that we will use in part 2.
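For example, writing each command as its own RUN adds a layer per command, while chaining them produces a single layer (the commands mirror the Dockerfile above; the cleanup path is illustrative):

```dockerfile
# escape=`
# Three commands, three layers:
RUN choco install nuget.commandline --allow-empty-checksums -y
RUN nuget install WebConfigTransformRunner
RUN Remove-Item C:/temp -Recurse -Force

# The same work in one layer:
RUN choco install nuget.commandline --allow-empty-checksums -y; `
    nuget install WebConfigTransformRunner; `
    Remove-Item C:/temp -Recurse -Force
```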

The ENTRYPOINT command is used to start the container. This basically makes the container behave the same as the wrapped application: when Docker starts the container, it uses the defined entry point, which starts the application. Conversely, should the process fail, the container will fail. This is closely related to the CMD command; here is a good article explaining more about this.
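A short sketch of how the two relate (ServiceMonitor and the w3svc service name come from the Dockerfile above; "otherSvc" is hypothetical):

```dockerfile
# ENTRYPOINT fixes the executable; CMD supplies default, overridable arguments
ENTRYPOINT ["C:\\ServiceMonitor.exe"]
CMD ["w3svc"]

# docker run <image>            -> ServiceMonitor.exe w3svc
# docker run <image> otherSvc   -> ServiceMonitor.exe otherSvc
```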

And that's it, you're done. All that is left is to build your first image and run it. So be sure to save, and then let's continue.

Building the image

  1. In Visual Studio, right-click on the project and select Publish.
  2. Select the folder option and accept the defaults.
  3. The published website is now correctly aligned with the Dockerfile.
  4. Open PowerShell with administrator rights and change directory to the project folder.
  5. Then run the following command:

Do not forget the "." underlined in red. This is the second argument of the docker build command and indicates the build context, which here is our current directory. The -t argument indicates registryname/imagename:tag; in my case the registry on Docker Hub is wedocode and my image name is webformsapp. I did not need to specify the latest tag, as this is the default. Obviously it would be good practice to insert your version number and any other relevant information.
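Putting that together, the command run from the project folder looks like this (wedocode/webformsapp is the registry and image name described above):

```shell
docker build -t wedocode/webformsapp .
```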

Docker will now start pulling down all the base layers from the Microsoft registry. Docker then decompresses the layers and stores them in the local image repository. A hash is calculated for each layer, and Docker will not re-download the same layer, even if the tag is different.

Run your first container

You can now execute docker images from the shell; this will list the images as follows (yellow block):

docker images
docker run -d -p 8903:80 --rm --hostname webapp --name container wedocode/webformsapp:latest

List the docker images and instantiate the container.

Next, type the docker run command shown in the green box. The -d parameter runs the container in detached mode. The -p parameter maps a host port to a container port; it is only relevant when traffic comes in from outside the Docker host, where the external port maps to the container. The --rm flag tells Docker to remove the container on shutdown. The --hostname parameter gives the container a hostname; only the Docker host can resolve container host names (see the red box). The --name parameter specifies the container name. Docker commands can use the container name or the Docker ID as a reference.


I have pushed the image to my Docker registry on Docker Hub. You can pull the image from here to run it.

The source code can be cloned or downloaded here.

Part 2

We have also installed some tools into our image, such as the web-config transform runner. We will be using this in part 2 as we create a container bootstrapper to do some configuration at start time.