Modernizing legacy applications with Docker – Part 1


Today I will discuss how to modernize your Web Forms application to run as a Docker service. Once you have done this it will be possible to run your application in the cloud and have it orchestrated by Swarm, Kubernetes or even Service Fabric. This will allow your application to run at scale and be more reliable and available.

So today you are going to learn how to lift and shift that legacy application into a container. Visual Studio provides built-in support for this today directly in the IDE (Watch here), however your situation is often very different and unique, and your dependencies will need to be described in a manner that is probably a lot more fine-grained. Although the sales pitch claims it can deal with brownfield applications, you will often need to do much more plumbing to support them. You will probably need specific server roles, or a bootstrap to install certificates and other settings that change from customer to customer.

This tutorial will show you how to express these fine-grained concerns, and provide a more detailed and controlled approach.

Potential show stoppers

If your legacy application uses the following technology you will not be able to port it as is, and you will need to make code changes or even design changes to gain compatibility.

  • MSMQ    (No support for this technology on the Server Core 2016 container OS)
  • MSDTC  (No support for this on any container OS as far as I understand)

I talk more about these technology problems and the workarounds in this article.

What you will need to get going

  1. Windows 10 with the Creators Update or Windows Server 2016 (Check it out here)
  2. Download and install Docker for Windows (Download it here)
  3. Visual Studio 2015 or later. (Download it here)

For this tutorial I will be using the following Docker version:

Docker CE about box.

If you have a later version, you need not worry; the commands I will be using are very basic and will still work.

Developers and testers will use Docker CE. Docker CE provides a GUI that simplifies certain development tasks. CE only runs on Windows 10. You can also use Docker EE if you are following this tutorial on Windows Server 2016.


Let's get going

For the purpose of this tutorial I am going to be using the default Web Forms template. I have created the web application and it now looks like this …

Default webforms template.

The application dependencies are as follows:

  • Operating system
  • IIS
  • Your software

Note: a legacy Web Forms application uses an assembly called System.Web, which couples it to the IIS server role. Modern .NET Core applications do not have this hard-coupled dependency, and hence the requirements can change from scenario to scenario.

Creating the image

Much like a class is the blueprint of an object, so the image is the blueprint of the container. Create a Dockerfile that will contain the commands to build the image. The Dockerfile is also going to be the target of the docker build command.

Docker file

  • A Dockerfile is nothing more than a text file named “Dockerfile” without an extension.

Simply create the Dockerfile in the root directory of the application.

  1. Right click on the project and select Add->New Item
  2. Select a text file, click add button
  3. Rename the file in the Solution Explorer (F2), and remove the “.txt” extension
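If you prefer the command line, the same empty file can be created from a PowerShell prompt in the project root; this is just a sketch equivalent to the steps above:

```powershell
# Create an empty file named "Dockerfile" (no extension) in the current directory
New-Item -Path .\Dockerfile -ItemType File
```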

You will now have this:

Dockerfile created and we're ready to begin!


An image is defined using a set of Docker commands. The “FROM” command is often the first command in the Dockerfile. This command allows your image to inherit from definitions provided by the software vendor.

Choosing the correct base is probably the most important decision.

Docker uses a layered file system. In this example we will base our image on an official image from Microsoft. The image can be found here on Docker Hub. This image was built from other official images, and its Dockerfile can be seen here. By following each “FROM” statement the base image can be traced, which is ultimately a flavor of container OS.

Windows Server Core is an OS specifically designed to run inside a container and is released as an official image regularly on Docker Hub. It is important to keep the image referenced in the “FROM” statement up to date.



You need to choose a container OS based on two important aspects; read about it here.

Adding layers

The image will have additional layers added for each docker file command expressed in the file below.

# escape=`

FROM microsoft/aspnet:4.7.1-windowsservercore-ltsc2016

#Set the shell and indicate we want to use powershell
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference ='SilentlyContinue';"]

# We can copy the published content into the wwwroot folder.
ADD ./bin/release/publish /inetpub/wwwroot

# Let's execute PowerShell commands to install Chocolatey and NuGet, then install the WebConfigTransformRunner, so that we can apply configuration transformations dynamically.
RUN	Set-ExecutionPolicy Bypass; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'));`
	choco install nuget.commandline --allow-empty-checksums -y;`
	nuget install WebConfigTransformRunner

# Set the entry point,
ENTRYPOINT C:\ServiceMonitor.exe w3svc

On line 1 the back tick (`) is declared as the escape character, which is used for line continuation; the default is the back slash (\) if this line is missing. In Windows, wherever a path is expressed the back slash would then need to be escaped. Not good for Windows, but it works great for Linux users. So be sure not to leave out this special comment.

On line 3 we make use of the FROM command to choose an environment based on .NET Framework 4.7.1, which will run on Server Core 2016 LTSC. Subsequent container OS updates will use the same tag, so be aware that your build server may pull down a newer image. You can specify a build-specific tag, which puts you firmly in control as to when you will update your container OS.
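As a sketch, you can pin the base image by digest rather than by floating tag; the digest value below is a placeholder, and the real value for a given build can be copied from Docker Hub:

```dockerfile
# Pin the base image by digest instead of the floating tag (placeholder digest)
FROM microsoft/aspnet@sha256:<digest-from-docker-hub>
```

With a digest pin, the base image never changes until you update the Dockerfile yourself.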

At line 6 we tell Docker that we wish to use PowerShell as the shell, using the SHELL command. This allows us later to run PowerShell inside the container as we build it.

The ADD command on line 9 copies the files into the container. The first argument is the relative path on the build machine, and the second argument is the destination folder in the container. In this case the target directory already exists because we inherit it from the base image. ADD will force creation of the target and copy from the source recursively even if the target directory does not yet exist. A similar command exists called COPY. The main difference is that ADD can also reference a source URL.
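A minimal sketch of the difference; the URL below is hypothetical and only illustrates the extra capability of ADD:

```dockerfile
# COPY only accepts local files and directories from the build context
COPY ./bin/release/publish /inetpub/wwwroot

# ADD can additionally fetch from a remote URL (hypothetical URL)
ADD https://example.com/tools/utility.zip /tools/
```

In practice COPY is preferred for local files because its behavior is simpler and more predictable.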

The RUN command will execute the PowerShell script in its first argument. The RUN command is capable of executing a PowerShell script consisting of many commands. Remember this, because it is not good to add excessive layers by adding multiple RUN commands for each modification. Also consider moving such commands to a bootstrap file. Strictly speaking a container should only do one thing, and it should be as small as possible. Exceptions to this rule apply when the brownfield component requires special handling. To demonstrate the RUN command, we are sneaking in some utility software that we will use in part 2.

The ENTRYPOINT command is used to start the container. This basically makes the container behave the same as the wrapped application. When Docker starts the container, it uses the defined entry point, which starts the application. Inversely, should the process fail, the container will fail. This is similar to the CMD command; here is a good article explaining the difference.
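A minimal sketch of how the two interact, using the same ServiceMonitor process as our Dockerfile above:

```dockerfile
# ENTRYPOINT fixes the executable that always runs;
# CMD supplies default arguments that can be overridden at `docker run` time.
ENTRYPOINT ["C:\\ServiceMonitor.exe"]
CMD ["w3svc"]
```

Here `docker run myimage` monitors the w3svc service, while `docker run myimage someothersvc` swaps only the argument, keeping the entry point fixed.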

And that's it, you're done. All that is left is to build your first image and run it. So be sure to save, and then let's continue.

Building the image

  1. In visual studio, right click on the project and select publish.
  2. Select the folder options and accept the defaults.
  3. The published website is now correctly aligned with dockerfile.
  4. Open PowerShell with administrator rights and change directory to the project folder.
  5. Then run the following command:
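Using the registry name (wedocode) and image name (webformsapp) from this tutorial, the build command looks like this; substitute your own registry and image name:

```
docker build -t wedocode/webformsapp .
```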

Do not forget the final “.” in the command; this is the second argument of the docker build command and indicates the build context, which is relative to our current position. The -t argument indicates the registryname/imagename:tag; in my case the registry on Docker Hub is wedocode and my image name is webformsapp. I did not need to specify the latest tag, as this is the default. Obviously it would be good practice to insert your version number and any other relevant information.

Docker will now start pulling down all the base layers from the Microsoft registry. Docker then decompresses the layers and stores them in the local image repository. A hash is calculated for each layer, and Docker will not re-download the same layer, even if the tag is different.

Run your first container

You can now execute docker images from the shell, which will list the images. Then instantiate the container with the docker run command:

docker images
docker run -d -p 8903:80 --rm --hostname webapp --name container wedocode/webformsapp:latest

Next you will type the docker run command. Parameter -d runs the container in detached mode. Parameter -p maps host port to container port (only relevant when coming in from outside the Docker host; the external port maps to the container). --rm tells Docker to remove the container on shut down. The --hostname parameter gives the container a hostname; only the Docker host can resolve container host names. The --name parameter specifies the container name. Docker commands can use the container name or the container ID as a reference.
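To check that the container came up, a couple of basic commands help. The container name below matches the run command in this tutorial; note that on this generation of Windows containers you typically browse to the container IP from the Docker host, while external machines use the mapped host port:

```
docker ps
docker inspect --format "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}" container
```

docker ps should list the container as running, and the inspect command prints the container IP that the Docker host can browse to directly.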


I have pushed the image to my Docker registry on Docker Hub. You can pull the image from here to run it.

The source code can be cloned or downloaded here.

Part 2

We have also installed some tools into our image, such as the web.config transform runner. We will be using this in part 2 as we create a container bootstrapper to do some configuration at start time.






About The Author

Passionate about technology and software architecture. Husband and father. Chief Architect at Assima. Director of research and development for Assima South Africa.
