Choosing a Windows container OS

One of the most important decisions to make when containerizing your Windows application is which container OS to use.

To understand this decision, you need to understand what is essentially changing:

  1. From customer to customer, the operating system support options will change between LTSC and SAC.
  2. From application to application, the technology will change.

Let's have a closer look …

Customer to Customer (LTSC or SAC)

Different customers will have different server operating systems and support agreements in place. The container host will be a Windows Server 2016 machine; however, there are two categories of support channels to consider. Each of these options corresponds to the possible builds of the container host you will find at a customer. Your container OS must be compatible with the customer's infrastructure (which may very well be your own in some cases).

  • LTSC
    • The Long-Term Servicing Channel is for the non-early adopters and non-innovators.
    • Updates only include patches and security updates.
    • Release cadence is every 2 to 3 years.
    • Supported for 5 years.
    • Examples: Windows Server 2016, Windows Server 2019
  • SAC
    • Release cadence is twice per year.
    • For early adopters and innovators.
    • OS title often reflects the month and year of the release (ServerCore 1709, ServerCore 1803).
    • Supported for 18 months in mainstream production.
    • Examples: Windows Server 1709 and Windows Server 1803.

This chart will help you quickly make the correct decision regarding the support channel your container OS should be using.


Application to Application (NanoServer or ServerCore)

You always want to use the smallest possible container OS that supports your application. Microsoft's container OS ships in two flavors (a Dockerfile sketch follows this list):

  1. ServerCore
    1. Supports most* server roles.
    2. Image size is between 2 and 5 GB, depending on the host OS support channel you target.
    3. Best fit for lift-and-shift brownfield applications.
  2. NanoServer
    1. Supports a limited set of server roles.
    2. Has a limited .NET runtime.
    3. Built-in support for the .NET Core runtime.
    4. ASP.NET 5 (now ASP.NET Core) is apparently also supported.
    5. Best fit for greenfield applications.
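In practice this decision surfaces as the base image in your Dockerfile's FROM statement. Here is a minimal sketch; the NanoServer tag is illustrative, so check Docker Hub for the tags that are current when you build:

# Brownfield lift-and-shift (full .NET Framework on IIS): ServerCore base
FROM microsoft/aspnet:4.7.1-windowsservercore-ltsc2016

# ... or, for a greenfield .NET Core service, a NanoServer base (illustrative tag):
# FROM microsoft/dotnet:2.0-runtime-nanoserver-1709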

Tag naming convention

Look at the naming convention in the images below and see how it maps onto the decision tree you would use. In this example, for an ASP.NET Web Forms application, you will see the following naming convention.


The container image you choose will be microsoft/aspnet:4.7.1-windowsservercore-ltsc2016, but at least now you understand how you got there. At the root of our decision tree is the customer's support channel, next is the OS flavor, and finally the core runtime requirements of your application.

Always remember you are dealing with a layered file system, so by convention the tag description should reflect what is in its root. All images on Docker Hub for the Microsoft stack follow this convention.
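Breaking the tag from the example above into its parts, my reading of the convention is:

  • microsoft/aspnet — the registry and image name
  • 4.7.1 — the .NET Framework runtime version
  • windowsservercore — the container OS flavor
  • ltsc2016 — the support channel build of the host OS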

Tip

As an architect I am always looking to lock down technical decisions like these in a fluent programming model, and nothing prevents us from doing just that here.

It is possible to capture such decisions in a layer, which in turn allows us to add some useful utilities to that layer. These utilities are typically used by all the images we will create; examples include URL Rewrite.

The result is that you only have a handful of images to maintain, making container server patching really easy.
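As a sketch, such a shared utility layer might look like the Dockerfile below. The image name wedocode/aspnet-base and the Chocolatey urlrewrite package are my own assumptions, not a prescribed recipe:

# escape=`
FROM microsoft/aspnet:4.7.1-windowsservercore-ltsc2016
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop';"]

# Bake the utilities every image needs into one shared layer, e.g. URL Rewrite.
RUN Set-ExecutionPolicy Bypass; `
    iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1')); `
    choco install urlrewrite -y

Application images then simply inherit the decision and the utilities with FROM wedocode/aspnet-base:<tag>, and patching the base image patches everything built on it.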


What is Docker and why will it change everything!

So what is all this Docker hype about? Software is multi-faceted, and getting it working in your dev environment is one thing, but getting it into production presents many complicated challenges for DevOps teams. Your software always runs in your lab against a set of external assumptions. This could be the operating system, a message queue, a database, or even dependencies on the Java and .NET runtimes. The fact is that these components external to your software are often different in production, and all too often we see software failures because the external components no longer integrate correctly in the production environment.

In addition to this, we have to create rather complex installers to navigate the differences between the production environment and the test lab. Sometimes your application will need to run on more than one server. Many enterprises need to scale up rapidly, both vertically and horizontally, to deal with user demand for resources. Over time these environments will need to be upgraded, maintained, and analysed for problem solving. All of these are the traditional challenges of a typical development operations department.

So how does Docker change all of this? In its simplest form, Docker allows you to modularize your application into units along logical machine boundaries. (I will discuss container-driven design later.) The granularity of your module, or Docker service, is an instance of your software AND the logical operating system with all its dependencies.

Docker containers spin up in seconds and their memory footprint is negligible, but the promise is that your software always runs exactly the same way in production as in the lab.

Because Docker containers are relatively small, it is possible to host thousands of them on Docker Swarm clusters potentially containing hundreds of nodes.

Docker provides an entire ecosystem for container distribution and management, which aligns seamlessly with these DevOps challenges.

The reason I believe this technology will change everything is that both sides ultimately win: the vendor and the customer. Cost of ownership is radically reduced for the vendor, while the customer enjoys a superior security posture, better server density, and a guarantee that the software will function correctly. For me this is the most compelling reason to learn Docker and get involved.

I am going to get more into these things in future posts, with some very technical articles on container orchestration engines such as Service Fabric, Kubernetes, and Swarm. I will also demonstrate how to set up a cluster in Azure, create a load balancer, configure reverse-proxy containers to isolate all transport security concerns, and then integrate this with Docker's service discovery to distribute load around the cluster. Fun times … stay tuned … I will be doing all of this with the Microsoft stack.


Modernizing legacy applications with Docker – Part 1

Overview

Today I will discuss how to modernize your ASP.NET Web Forms application to run as a Docker service. Once you have done this, it will be possible to run your application in the cloud and have it orchestrated by Swarm, Kubernetes, or even Service Fabric. This will allow your application to run at scale and be more reliable and available.

So today you are going to learn how to lift and shift that legacy application into a container. Visual Studio provides built-in support for this today directly in the IDE (watch here); however, your situation is often very different and unique, and your dependencies will need to be described in a manner that is probably a lot more fine-grained. Although the sales pitch claims it can deal with brownfield applications, you will often need to do much more plumbing to support them. You will probably need specific server roles, or a bootstrap to install certificates and other settings that change from customer to customer.

This tutorial will show you how to express these fine-grained concerns and provide a more detailed and controlled approach.

Potential show stoppers

If your legacy application uses the following technologies, you will not be able to port it as is, and you will need to make code changes or even design changes to gain compatibility.

  • MSMQ (no support for this technology on the Server Core 2016 container OS)
  • MSDTC (no support for this on any container OS, as far as I understand)

I talk more about these technology problems and the workarounds in this article.

What you will need to get going

  1. Windows 10 with the Creators Update, or Windows Server 2016 (check it out here)
  2. Download and install Docker for Windows (download it here)
  3. Visual Studio 2015 or later (download it here)

For this tutorial I will be using the following Docker version:

Docker CE About box.

If you have a later version, you need not worry; the commands I will be using are very basic and will still work.

Developers and testers will use Docker CE. Docker CE provides a GUI that simplifies certain development tasks, and it only runs on Windows 10. You can also use Docker EE if you are following this tutorial on Windows Server 2016.


Let's get going

For the purpose of this tutorial I am going to be using the default Web Forms template. I have created the web application and it now looks like this …

Default Web Forms template.

The application dependencies are as follows:

  • Operating system
  • IIS
  • Your software

Note: a legacy Web Forms application uses an assembly called System.Web, which couples it to the IIS server role. Modern ASP.NET Core applications do not have this hard-coupled dependency, and hence the requirements can change from scenario to scenario.

Creating the image

Much like a class is the blueprint of an object, so the image is the blueprint of the container. Create a Dockerfile that will contain the commands to build the image. The Dockerfile is also the target of the docker build command.

Dockerfile

  • A Dockerfile is nothing more than a text file named “Dockerfile” without an extension.

Simply create the Dockerfile in the root directory of the application.

  1. Right-click on the project and select Add -> New Item.
  2. Select a text file and click the Add button.
  3. Rename the file in Solution Explorer (F2) and remove the “.txt” extension.

You will now have this:

Dockerfile created and we're ready to begin!


An image is defined using a set of Dockerfile commands. The “FROM” command is often the first command in the Dockerfile. This command allows your image to inherit from definitions provided by the software vendor.

Choosing the correct base is probably the most important decision.

Docker uses a layered file system. In this example we will base our image on an official image from Microsoft. The image can be found here on Docker Hub. This image was built from other official images, and its Dockerfile can be seen here. By following each “FROM” statement, the base image can be traced, which is ultimately a flavor of container OS.

Windows Server Core is an OS specifically designed to run inside a container and is released as an official image regularly on Docker Hub. It is important to keep the image referenced in the “FROM” statement up to date.
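Tracing the chain for the image used later in this tutorial looks roughly like this; I reconstructed it from the Dockerfiles on Docker Hub at the time of writing, so treat it as indicative:

microsoft/aspnet:4.7.1-windowsservercore-ltsc2016
  FROM microsoft/dotnet-framework:4.7.1-windowsservercore-ltsc2016
    FROM microsoft/windowsservercore:ltsc2016   <-- the container OS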


You need to choose a container OS based on two important aspects; read about them here.

Adding layers

The image will have an additional layer added for each Dockerfile command expressed in the file below.

# escape=`

FROM microsoft/aspnet:4.7.1-windowsservercore-ltsc2016

# Set the shell and indicate we want to use PowerShell
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference ='SilentlyContinue';"]

# We can copy the published content into the wwwroot folder.
ADD ./bin/release/publish ./inetpub/wwwroot

# Let's execute PowerShell commands to install Chocolatey and NuGet, then install WebConfigTransformRunner so that we can apply configuration transformations dynamically.
RUN	Set-ExecutionPolicy Bypass; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'));`
	choco install nuget.commandline --allow-empty-checksums -y;`
	nuget install WebConfigTransformRunner -Version 1.0.0.1

# Set the entry point.
ENTRYPOINT C:\ServiceMonitor.exe w3svc

On line 1, the backtick (`) sets the escape character, which is also used for line continuation; the default is the backslash (\) if this line is missing. In Windows, every path is expressed with backslashes and would then need escaping. The default works great for Linux users, but not for Windows, so be sure not to leave out this special comment.

On line 3 we use the FROM command to choose an ASP.NET environment based on Framework 4.7.1, which will run on Server Core 2016 LTSC. Subsequent container OS updates will reuse the same tag, so be aware that your build server may pull down a newer image. You can specify a build-specific tag, which puts you firmly in control of when you update your container OS.
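If you want to be pinned to an exact build, you can also reference the image by digest instead of by tag. The digest below is a hypothetical placeholder, not a real value:

# Pin by tag: the tag floats forward as Microsoft patches the image.
FROM microsoft/aspnet:4.7.1-windowsservercore-ltsc2016

# Pin by digest: this never changes (replace <digest> with a real sha256 value).
# FROM microsoft/aspnet@sha256:<digest>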

On line 6 we tell Docker that we wish to use PowerShell as the shell, using the SHELL command. This allows us to run PowerShell inside the container as we build it.

The ADD command on line 9 copies the files into the container. The first argument is the relative path on the build machine, and the second argument is the destination folder in the container. In this case the target directory already exists because we inherit from the ASP.NET image; ADD will force creation and copy the source recursively into the target even if the target directory does not yet exist. A similar command exists called COPY. The main difference is that ADD can also reference a source URL.
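To make the difference concrete, both of the following are valid, but only ADD accepts a remote source; the URL here is purely illustrative:

# COPY: local build-context sources only.
COPY ./bin/release/publish ./inetpub/wwwroot

# ADD: can also fetch from a URL (illustrative address).
# ADD https://example.com/tools/utility.zip C:/temp/utility.zip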

The RUN command executes the PowerShell script in its first argument and is capable of executing a script consisting of many commands. Remember this, because it is not good to add excessive layers by adding a separate RUN command for each modification. Also consider moving such commands to a bootstrap file. Strictly speaking, a container should only do one thing, and it should be as small as possible; exceptions to this rule apply when a brownfield component requires special handling. To demonstrate the RUN command, we are sneaking in some utility software that we will use in Part 2.

The ENTRYPOINT command is used to start the container. This basically makes the container behave the same as the wrapped application: when Docker starts the container, it uses the defined entry point, which starts the application. Conversely, should the process fail, the container will fail. Essentially this is very close to the CMD command; here is a good article explaining more about this.
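For reference, the Dockerfile above uses the shell form of ENTRYPOINT. The equivalent exec form, which skips the intermediate shell, would be written like this (my rewrite; both should behave the same for this image):

ENTRYPOINT ["C:\\ServiceMonitor.exe", "w3svc"]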

And that's it, you're done. All that is left is to build your first image and run it, so be sure to save and then let's continue.

Building the image

  1. In Visual Studio, right-click on the project and select Publish.
  2. Select the folder options and accept the defaults.
  3. The published website is now correctly aligned with the Dockerfile.
  4. Open PowerShell with administrator rights and change directory to the project folder.
  5. Then run the following command:
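The full command, using the wedocode registry and webformsapp image name from this tutorial, is:

docker build -t wedocode/webformsapp .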

Do not forget the trailing “.” (underlined in red); this is the second argument of the docker build command and indicates the build context, which is relative to our current position. The -t argument indicates the registryname/imagename:tag; in my case the registry on Docker Hub is wedocode and my image name is webformsapp. I did not need to specify the latest tag, as this is the default. Obviously it would be good practice to insert your version number and any other root information.

Docker will now start pulling down all the base layers from the Microsoft registry. Docker then decompresses the layers and stores them in the local image repository. A hash is calculated for each layer, and Docker will not re-download the same layer, even if the tag is different.

Run your first container

You can now execute docker images from the shell; this will list the images as follows (yellow block):

docker images
docker run -d -p 8903:80 --rm --hostname webapp --name container wedocode/webformsapp:latest

List the Docker images and instantiate the container.

Next you type the docker run command shown in the green box. The -d parameter runs the container in detached mode. The -p parameter maps a host port to a container port; this is only relevant when traffic comes in from outside the Docker host, where the external port maps to the container. --rm tells Docker to remove the container on shutdown. The --hostname parameter gives the container a hostname; only the Docker host can resolve container hostnames (see the red box). The --name parameter specifies the container name. Docker commands can use the container name or container ID as a reference.
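To verify the container, something like the following should work. One caveat, as an assumption on my part: on some older Windows Docker builds the localhost NAT mapping cannot be reached from the host itself, in which case browse to the container IP instead:

# List the running containers.
docker ps

# Find the container's IP on the default Windows "nat" network.
docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" container

# From another machine, browse to http://<dockerhost>:8903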

Materials

I have pushed the image to my registry on Docker Hub. You can pull the image from here to run it.

The source code can be cloned or downloaded here.

Part 2

We have also installed some tools into our image, such as the web.config transform runner. We will be using this in Part 2 as we create a container bootstrapper to do some configuration at start time.


Decomposing systems with The Method

The Method

The Method is the name of a service-orientated software design methodology. Its primary value proposition is that it demonstrates a mechanical approach to software design, leading to repeatable outcomes. The artifact of this process is a system decomposition that maximizes operational flexibility, satisfies the runtime constraints, and contains change at the component boundary.

Invented by Juval Lowy and practiced today by IDesign, it is indeed a remarkable system design methodology. It resolves many of the issues we face in modern software production. After completing the courses from IDesign (his company), you are left asking yourself how you ever built software without this knowledge.

Some of the practical highlights include:

  • A notation specification for showing all the constraints of a modern SOA.
  • Calling out your components based on a method of seeking out business volatility, while at the same time preserving a good level of granularity.
  • Strict rules to categorize your components into Resource Access, Managers, Engines, and Utilities.
  • A strict set of interaction rules between component categories (a layered architecture).
  • A strict set of rules for the responsibility of each component.
  • Delineating the encapsulation of the primary sequence, or use-case orchestration, from other activities.

On the downside, for people like myself in Africa, his courses cost the same as a small house once airfares and accommodation are taken into account.

That said, it is worth every cent and will probably save you loads of money in the long run. In my view you cannot put a price tag on his courses.

I have had the privilege of working with Juval and spending some time with him both in Johannesburg and San Jose. Activities included a trip to the Cradle of Humankind and the Lion Park; however, the subject was always software 🙂

When I was attending the Project Design Master Class in California, he invited me to his farm.

The clarity of his thought resonates practically with every aspect of his life. His farm is a system: it is off-grid, and there are designated components that manage every aspect. At heart he is an engineer; he is passionate about technology and truly gifted.

If you can get onto one of his courses it will be an experience.


Why developers should not write features.

At least not in the context of the system …

Now I know this is going to generate many comments, and many will not agree with me, but allow me to explain why.

The reasons for my statement are many, but the one that comes to mind first is that feature specifications tend to be incorrect. They are also changing constantly, as in reality they try to capture a dynamic real-world problem in a static snapshot. The feature specification only presents a synopsis of the user interaction and the desired outcome: the user will do x in some way using y to achieve z.

Let’s suppose the outcome of our solution architecture was a system that resembled a car. The car had components like seats, wheels, engine and gearbox.

Your feature specification would come in the form of a user story that looks something like this:

Some <persona> will do some <action>  so that they can achieve some <goal>.

<Johnny> needs a vehicle so he can <commute> to <get to work>.

<Sandra> needs a vehicle so that she can <transport goods> to <deliver the goods to the market>.

<Bronwyn> must be able to <park> the vehicle so that it <is not obstructing traffic>.

… and so your feature specifications continue forever …

Given 3 developers and a sprint, you would land up with the following components:

Commute

Parking

Goods Transport


In the physical world, one would not design a car based on a feature spec that says the user needs to travel from home to work. In fact, if there was a feature spec for a car based on user stories, and we built each story, we would have a huge number of components, and many of them would be variations of the same thing. In the context of feature-driven development methodologies, how is it ever possible to call out the gearbox and engine components? Can you imagine how many variations of engine and gearbox you would have if each developer created similar components for parking, reversing, driving …

A feature specification also lacks the detail of the non-functional characteristics of the system. What are the transaction boundaries, security contexts, process boundaries, machine boundaries, error handling, and availability/durability/scaling strategies?

Sadly, in most cases system plumbing is implemented at a one-to-one ratio with the feature of the day. This means your system begins to explode with components, as each developer derives their own strategy to support their particular feature in isolation. When the feature changes, the change tends to ripple through the entire system, often injecting loads of risk into the production pipeline.

Scrum (which I have nothing against) unfortunately tends to put features on the story board as opposed to components. Developers tend to sit around the table and pick which features they would like to build. This substitutes the act of architecture and system design with the act of “give me the spec and I will build parking”. The problem is that this code stays in the system and adds to the technical debt of the project life cycle.

Much like anything we build in the physical world, a feature ONLY surfaces as the product of integrating the underlying components. An architect will compile the list of required components based on the nature of the overarching problem, not the feature specifications. The developers should build the components and then finally integrate the parts into working software that meets the specification, under the guidance of the architect.

It is for this reason that I say developers should not build features in the context of the system; they should focus on building the gearboxes and the engines and forget about driving, parking, and reversing. It is the job of the architect to ensure that the composition of components can meet all requirements based on the nature of the business. The composition should also capture the business volatility so as to contain change at the component level.

All of this is much easier said than done, and it is for this reason that so many software systems become non-maintainable and are due for that inevitable rewrite. The fact is that writing a user story or feature is easier than considering the real nature of the problem and encapsulating that which changes. A skilled software architect, if supported by the business, can literally save the business millions of R&D dollars.

It is for the very nature of change that system design is so important. A well-designed system is easy to maintain and lower in risk, and its cost of ownership is radically reduced.

More on architecture to follow soon …

Please be sure to leave your rant below. Thanks for reading 🙂