December 2017

Volume 32 Number 12

[Containers]

Modernize a .NET App with Docker and Windows Server Containers

By Sean Iannuzzi

By now you’ve most likely heard of Docker, Docker containers and, with the introduction of Windows Server 2016, integrated Windows Server Containers. I truly believe that within a few years Docker containers will become the standard for how Web sites, applications and other systems run, as opposed to relying on virtual machines (VMs) to support applications. Using Docker has enabled scalability, isolation and security while also ensuring that applications and systems are configured properly with little support from a deployment standpoint. Compared to the complexity of setting up a VM and configuring all the required features, the simplicity of the Docker setup is very beneficial. Just as physical machine requests were gradually phased out in favor of VMs, it’s more likely than not that Docker will begin to replace the need for VMs in the next few years, if not sooner.

In this article I’ll focus on how I leveraged a container approach, using Windows Server 2016, file sharing and socket communications with Windows Server Containers, to modernize several .NET applications. I’ll provide details on how I used Windows PowerShell to create a Docker image and share files and sockets between a host Windows Server 2016 system and a Windows Server Container. It’s likely that many of your applications have common functionality such as this that you’d need to enable to ensure your .NET application can be ported to a Docker container. Many of the features I review aren’t Windows Server Container-specific and could be leveraged for any applications that have similar functionality.  

The Business Challenge: A CPU-, Memory- and Disk-Intensive .NET App

My business challenge was to modernize several existing .NET and C++ console applications responsible for handling large volumes of data, involving very heavy CPU-, memory- and disk-intensive processing. I needed to expose these console applications in a more traditional Web model, migrating the system from a single-user setup to one that supports multiple users. Given how the applications were set up and the volume of data being processed, I didn’t want to manage multiple copies of the data or executables across VMs.

As part of this business challenge, I needed to determine how I could best scale out these applications, as well as minimize network latency and file management across the network. Performance of the applications was critical, and any use of network sharing, file sharing or other distributed processing would significantly impact their performance. Therefore, in order for this business challenge to be considered successful, I needed to provide a scalable model that also yielded a high level of performance (with regard to CPU, memory and disk IO), without having to maintain multiple copies of my data. As with most projects, the timeline to deliver a newly modernized and scalable version of these applications was very limited, eliminating the possibility of a complete redesign.

Important Features: File Sharing, Socket Connections and .NET in Docker

For my particular applications, I considered several options prior to landing on using Docker and, more specifically, on Windows Server Containers. As part of my evaluation, I had three very specific technical challenges to prove out in order to migrate the applications successfully to Docker: 

  • Running a traditional .NET app in Docker.
  • Leveraging file sharing between the host system and my Docker container.
  • Enabling socket communication between the host and the Docker container.

I’ll show you in detail how to overcome these technical challenges and how to implement the concepts with Docker and Windows Server Containers running on Windows Server 2016. The concepts themselves are just the beginning when considering how many of your .NET applications could potentially be migrated to Docker or Windows Server Containers. The examples I’ll review can be applied or expanded more broadly to address various application features, which in turn can provide your applications with a more modernized deployment.

Application Performance: 8GB RAM, 10TB of File Processing

Before I dive too deeply into the options and concepts I considered, I want to provide a little more detail on the applications and systems that I moved to Docker containers. First and foremost, the applications are rather unique in the type of work they perform, and they’re very CPU-, memory- and disk-intensive. Moreover, the speed at which the applications perform is critical to the success of the system.

My applications, primarily designed for a single user, perform very complex calculations on data files and were built with a combination of C++ and the .NET Framework. To give you an idea of the performance challenges of my system, it takes approximately 8GB of RAM per user to perform calculations on data files that are upward of 10TB in size, and the system requires pre-allocated memory and extremely fast disk speeds to process those large volumes of data in seconds. The system also uses socket connections for invocation and notification from the requestor. As the applications and systems evolved, I found I needed a quick way to scale the system and support multi-user processing. I expect many of you can think of similar applications that might benefit from being moved into a container.

Solution Options: Reengineer, Auto-Scale, Docker

The technical challenges I faced involved evaluating the different ways I might achieve my goals. I considered three options.

  1. Reengineering: One option was to reengineer the entire application suite. This would surely work, but given the size and complexity of my system, I needed a solution that would introduce less risk and not take as long to complete. Waiting a year or even several months to redesign the system wasn’t acceptable. However, it was still important to evaluate this option in the event it turned out to be a reasonable solution.
  2. Auto-Scaling: Another option was to evaluate how I could leverage VMs and auto-scaling. This would definitely be quicker than rewriting the application and would lessen the overall risk. However, it would add a lot of overhead because of the time it takes to allocate a VM, especially a VM with 10TB of storage. Even though I could find workarounds, such as keeping standby instances and handling the provisioning and de-provisioning of the servers via an additional layer or application, it still didn’t seem like the best approach. This option was definitely moving me in the right direction, though, because it didn’t involve reengineering the entire application and would let me deploy multiple executables per VM and scale out the VMs automatically. I decided to continue my search for a simpler implementation model using a more modern technological approach.
  3. Docker Container: The last option I considered was to use Docker, with interoperability between the host system and the Docker containers. Using Docker containers would allow me to scale the system as needed without having to reengineer the entire system. This approach would lessen the risks involved with reengineering the application, provide a level of isolation for security purposes, and allow me to implement these updates quickly while still providing the level of scale I needed.

Deploying .NET Apps with Docker

The main issue I had with the Docker option was that the application was written in .NET and C++, and I had concerns that it couldn’t run in Docker directly. As soon as I began researching how to migrate my .NET/C++ apps to Docker, I learned that doing so would require an upgrade or redesign. Keeping in mind that my approach had to be quick, I began to learn more about Windows Server 2016 and its fully integrated Windows Server Containers. By leveraging Windows Server Containers, I was hoping I could leave the application as is and deploy all dependencies, along with the other required setup, in my container. The initial technical challenge I encountered was that traditional Docker containers for .NET apps require .NET Core, while my application was written with the .NET Framework and C++. Of course, I could’ve upgraded the application to .NET Core, but this would’ve involved a significant effort, and I was trying to deploy a solution as quickly as possible with the least amount of risk. I was also trying to ensure that the solution included the ability to scale, along with a level of isolation and security for my application.

Although the use of Windows Server Containers was beginning to look very promising, I still needed to test a number of different concepts, such as file sharing and socket connections, that you might also find very useful. While much of what I describe is unique to my particular setup, the options and concepts are not, and they can be leveraged for other systems that need this type of migration or scale without having to redesign or rewrite the application. Of course, this approach doesn’t replace an application redesign, but it does buy time for a team to redesign the application if that’s the desired direction. As part of that redesign, the team can reengineer the application to be Docker-compatible or Docker-enabled.

In the next few sections I’ll describe:

  1. How I set up the Windows Server 2016 VM to support Windows Server Containers.
  2. How I created my Docker image using PowerShell.
  3. The Docker file based on Windows Server Core.
  4. How to enable advanced file sharing between the host and the container.
  5. How to enable a socket listener from the host and the container.

Windows Server 2016 and Containers

To get started I deployed a Windows Server 2016 VM and enabled the appropriate features, such as .NET Framework, IIS and containers, as shown in Figure 1.

Figure 1 Enabling the .NET Framework and Container Services

Please note that in order to build this type of solution you must have the .NET Framework installed.

After installing all of the required features, I validated each of them accordingly. To make sure Docker was running properly I ran the PowerShell command docker --version. I then verified that the Windows Docker Engine Service was also running by running (Get-Service "Docker").Status from PowerShell. As a final step, I performed a docker pull of the Windows Server Core Docker image from dockr.ly/2i7pDSn. After the pull completed, I verified that the Docker image was available by running the command docker images.

Once I installed the Windows Container Services and set up the environment with my base Docker image, I was ready to begin working with my .NET console application.

.NET App Setup

I started with a very basic console application using the .NET Framework 4.6.1. The application really didn’t do much other than take an argument and display a response. Before going too far with the full migration to a Windows container, I wanted to make sure that the required functionality was going to work as intended. However, there were a number of steps I needed to take before I could run the application in a container on Windows Server 2016.
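The article doesn’t include the console application’s source, but a minimal sketch of such an entry point might look like the following. The take-an-argument-and-display-a-response behavior is all the later docker run commands rely on; everything else here is an assumption:

using System;

namespace MyConsoleApplication
{
  class Program
  {
    // Minimal hypothetical sketch of the app described above: it simply
    // takes an argument and displays a response.
    static void Main(string[] args)
    {
      if (args.Length == 0)
      {
        Console.WriteLine("Usage: myconsoleapplication <message>");
        return;
      }
      Console.WriteLine("Received: " + args[0]);
    }
  }
}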

The first step was to create a reusable “build” PowerShell script that would build the application and create a Docker image on Windows Server 2016. To accomplish this task I wrote two functions, one to perform the msbuild and another to create the actual Docker image, as shown in Figure 2.

Figure 2 PowerShell Functions to Build the Application and Create a Docker Image

Set-StrictMode -Version Latest
$ErrorActionPreference="Stop"
$ProgressPreference="SilentlyContinue"
# Docker image name for the application
$ImageName="myconsoleapplication"
function Invoke-MSBuild ([string]$MSBuildPath, [string]$MSBuildParameters) {
  Invoke-Expression "$MSBuildPath $MSBuildParameters"
}
function Invoke-Docker-Build ([string]$ImageName, [string]$ImagePath,
  [string]$DockerBuildArgs = "") {
  echo "docker build -t $ImageName $ImagePath $DockerBuildArgs"
  Invoke-Expression "docker build -t $ImageName $ImagePath $DockerBuildArgs"
}

The next step in the script was to execute these two functions, passing in all required parameters:

Invoke-MSBuild -MSBuildPath "MSBuild.exe" -MSBuildParameters
  ".\myconsoleapplication.csproj /p:OutputPath=.\publish /p:Configuration=Release"
Invoke-Docker-Build -ImageName $ImageName -ImagePath "."

With the build script in hand, all that remained was to create my Docker file, and my console application would be enabled for Windows Server Containers running on Windows Server 2016. Note that from a development standpoint, it can be helpful when testing the build process to use the Visual Studio command prompt, which includes MSBuild in your path. As part of the preliminary setup I had installed the base Docker image, Windows Server Core, which had all of the base features I needed to run my application. In the Docker file, I told Docker to use this image and to use my published application, "myconsoleapplication.exe," as the entry point:

FROM microsoft/windowsservercore
ADD publish/ /
ENTRYPOINT myconsoleapplication.exe

The entry point will be the Main function in the console application.

Final Build and Deployment to Windows Server 2016

Once I had a complete .NET console application that was enabled for Windows Server Containers, I was ready to deploy my application. An easy way I found to do this for testing was to simply copy the application folder to the VM. Once I copied the application to the server, I executed the PowerShell script to build the application. I navigated to the source directory and then ran the ./build command from PowerShell.

The output of the build script should look similar to the result shown in Figure 3.

Figure 3 Output of the Build Script

docker build -t myconsoleapplication .
Sending build context to Docker daemon  6.058MB
Step 1/3 : FROM microsoft/windowsservercore
 ---> 2cddde20d95d
Step 2/3 : ADD publish/ /
 ---> 452c4b42caa5
Removing intermediate container cafb387a3634
Step 3/3 : ENTRYPOINT myconsoleapplication.exe
 ---> Running in a128ff044ef3
 ---> 4c7dce888b36
Removing intermediate container a128ff044ef3
Successfully built 4c7dce888b36
Successfully tagged myconsoleapplication:latest

To confirm that my Docker image was created successfully, I ran the docker images command again and could see the new image, as shown in Figure 4.

Figure 4 Console Application As a Docker Image

Testing the Windows Server Container Console App

The very last step I took before getting into some very specific features was to test my application to make sure that it would, in fact, run within a Windows Server Container. To do so I ran the following command:

docker run --rm -it myconsoleapplication
  ".NET Framework App Running in Windows Container"

As expected, the application output the argument passed to it in the console window.

That took care of the basics for deploying, configuring and setting up a .NET app that can run in a Windows Server Container. At this point, you might be thinking of many existing applications you could potentially move to a Windows Server Container. However, there are still a few key features I found very helpful—such as file sharing and socket communication—that you might also find useful. In the next section I’ll delve a little more into these features and how to leverage them in your own applications.

Docker Container: Enabling Advanced File Features

Like many console applications, yours may have a fair number of files that are being leveraged for different reasons; whether it’s for logging or processing or something else, the use of files might be intensive. For my particular set of applications, I was reading in very large files and didn’t want to copy the files to every container. I also wanted to do my best to optimize disk I/O, and using a shared folder on the network—a file server—introduced too much latency and impacted performance when trying to read in such large files. Furthermore, I didn’t want to create multiple versions of my application with various configurations, ports and directories, as this would be a maintenance nightmare. As a result, I started evaluating how I could share my files on my host system running the Docker service and then access those files from within my container. What I found is that this is extremely easy, and it doesn’t matter if you’re using Windows Server Containers or running a Docker container in Linux. Docker has full support for this type of functionality. In fact, what was most beneficial to my setup was that as long as I mounted a drive in the container to correlate with the internal directories, I didn’t even need to modify my application. The only change I made was to have a parameter set my path when I ran the Docker container instead of reading it from a configuration file.

I was able to keep all of the file processing and paths intact because they were all relative paths and within a main directory. That meant I didn’t have to change the core logic of my application, which in turn alleviated much of the risk I might have engendered by changing my .NET console application.
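The path-from-a-parameter change mentioned earlier isn’t shown in the article. A hedged sketch of how it might look follows; the --data-root switch name and the fallback values are invented for illustration, not the article’s actual code:

using System;
using System.Configuration; // requires a reference to System.Configuration
using System.Linq;

static class PathResolver
{
  // Hypothetical sketch; the article doesn't show this change. The
  // "--data-root=" switch is an invented name, not the article's parameter.
  public static string ResolveBasePath(string[] args)
  {
    // Prefer a path supplied on the docker run command line...
    string fromArgs = args
      .Where(a => a.StartsWith("--data-root=", StringComparison.OrdinalIgnoreCase))
      .Select(a => a.Substring("--data-root=".Length))
      .FirstOrDefault();
    if (!string.IsNullOrWhiteSpace(fromArgs))
      return fromArgs;

    // ...otherwise fall back to the configured default.
    return ConfigurationManager.AppSettings["BasePath"] ?? @"C:\containertmp";
  }
}

Because the rest of the application builds relative paths off this one base directory, swapping the source of the base path is the only change needed to run the same binary on the host or in a container.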

To test this functionality, I added a basic file IO process to my console application by inserting the following code:

using (StreamWriter sw = File.AppendText(@"C:\containertmp\testfile.txt"))
{
  // C:\containertmp is the path as seen inside the container; it's mapped
  // to a folder on the host via the docker run -v argument shown below.
  sw.WriteLine(DateTime.Now.ToString() + " - " + args[0]);
}

I then redeployed my solution to the Windows Server 2016 VM. This also required rebuilding the image by running the ./build PowerShell script.

The last step to enable this functionality was to create a directory on the host that I needed to expose to the Docker container. In my case, I just created a folder called hostcontainershare. The key to doing this was how I mounted this folder from the Windows Server Host System to the Docker container. Surprisingly, this is extremely easy to accomplish by passing in the following argument to the docker run command:

-v [source directory or path]:[container directory or path]

This argument accepts a source and a target. In my case, I passed in my local Windows Server host directory first, followed by how I wanted it mounted inside the container. Here’s the entire docker run command:

docker run --rm -it -v c:\hostcontainershare:c:\containertmp myconsoleapplication
  ".NET Framework App Writing to Host Folder" 1

There are various ways to accomplish this functionality both in Windows Server Containers and Docker containers, but for my .NET console application, I found this method very simple and easy to implement. An illustration of how this is set up is shown in Figure 5.

Figure 5 Host and Windows Server Container File Operations Overview

The result of the docker run command was a file written to my host directory from within my Docker container, as shown in Figure 6.

Figure 6 Example of Write Access from the Docker Container to the Windows Server 2016 Host

Enabling this functionality provided significant advantages for my application because of what it does with very large files. I didn’t need to duplicate my files across all of the containers, and as long as I have fast local or solid-state drives on the host, the file processing is much faster than using a shared folder, network drive or other non-local storage. The benefits of using this technique for traditional console applications are countless.

With successful file sharing, I had one last feature to conquer—socket connections, which I’ll discuss in the next section.

Docker Container: Enabling Advanced Socket Features

One of the main features I needed to prove out was being able to communicate from a host socket connection to an internal container socket connection. Again, much of this functionality can be leveraged in both Windows Server Containers and Docker containers because the setup is controlled via command-line arguments that specify how the Docker container is running and what ports are being exposed.

To support this functionality, I created client and server socket applications that would establish a connection from the client appli­cation running on Windows Server to a server-side application listener running as a Windows Server Container. I also added into my application the code necessary to listen on a specific socket and then respond in the console with the data and bytes received.

I leveraged Microsoft’s socket examples from Asynchronous Client Socket Example at bit.ly/2gDKYz2 and Asynchronous Server Socket Example at bit.ly/2i8VUbK for the base code segments I integrated into my application.
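Those samples are callback-based and fairly long, so here’s a condensed, synchronous sketch of the shape of the listener running inside the container. It uses TcpListener instead of the samples’ raw asynchronous Socket calls, and the port (50020) matches the one discussed later; treat it as an illustration, not the article’s code:

using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

static class ServerSketch
{
  // Condensed sketch; the linked Microsoft samples use the asynchronous
  // Socket APIs, while TcpListener and port 50020 are simplifications here.
  public static void Listen()
  {
    var listener = new TcpListener(IPAddress.Any, 50020);
    listener.Start();
    Console.WriteLine("Listening on port 50020...");

    using (TcpClient client = listener.AcceptTcpClient())
    using (NetworkStream stream = client.GetStream())
    {
      var buffer = new byte[1024];
      int bytesRead = stream.Read(buffer, 0, buffer.Length);

      // Respond in the console with the data and bytes received,
      // as described above.
      Console.WriteLine("Received {0} bytes: {1}", bytesRead,
        Encoding.ASCII.GetString(buffer, 0, bytesRead));
    }
    listener.Stop();
  }
}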

I did make a few changes to the server-side code to help retrieve the IP address of the container, so that when using the client socket application I’d be able to provide the assigned IP address. I was able to obtain the NAT details of the container by running the following command:

docker network inspect nat

I also ran various lookups to retrieve the IP address of the container, but to make it easy to debug and troubleshoot I added in a loop that retrieved all of the IP addresses and then wrote them out to the console window:

// Resolve this container's host entry and print every assigned address
// (Dns.GetHostEntry is what the referenced Microsoft sample uses, too).
IPHostEntry ipHostInfo = Dns.GetHostEntry(Dns.GetHostName());
foreach (var info in ipHostInfo.AddressList)
{
  Console.WriteLine("\nIP: " + info);
}

I also set the listener to the specific port I was testing for my socket connection. I once again deployed my application to the Windows Server 2016 VM and copied my client application to the server in order to test the connectivity. By default, no custom ports are exposed by the container, and the container won’t allow a TCP socket connection. To enable this functionality I needed to give Docker the appropriate run arguments, similar to what was needed to share a folder.

In my case, I wanted to connect to port 50020 from the host running my client application to the .NET console application running within my Windows Server Container. Figure 7 illustrates how the application is set up.

Figure 7 Client to Windows Server Container Host Socket Communication

Once everything was set up and configured, I needed to tell Docker that I wanted to expose certain ports from my container to the host machine. To enable this behavior I specified the following argument to the docker run command:

-p [host port]:[container port]

You can expose multiple ports by repeating this argument for each one, for example -p 50020:50020 -p 50019:50019, and so forth. By running my container and exposing the ports, I was ready to test that I had a connection from the Windows Server Container console application to my client running on the Windows Server 2016 VM.

The complete command I used to run the Windows Server Container was:

docker run --rm -it -p 50020:50020 -v c:\hostcontainershare:c:\containertmp myconsoleapplication
  ".NET Framework App Listening on Socket" 2

Once I launched the Windows Server Container running the console application, I was ready to start my client application. The container console application showed me the current IP address of the container and confirmed that it was listening on the socket I specified. All I needed to do next was launch my client application and pass in the container IP address for it to connect to. As shown in Figure 8, the client application connected to the IP address displayed by the container console application and sent a small set of data over the socket. Success!

Figure 8 Client Socket Application Sending Data to Container Console Application
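
The client application isn’t shown in the article either; a minimal hedged sketch, assuming the container’s IP address is passed as the first argument and that the listener uses port 50020, might look like this:

using System;
using System.Net.Sockets;
using System.Text;

static class ClientSketch
{
  // Hypothetical client; the article's client is based on the linked
  // Microsoft async sample. The IP-as-first-argument convention and the
  // port are assumptions for illustration.
  static void Main(string[] args)
  {
    using (var client = new TcpClient(args[0], 50020))
    using (NetworkStream stream = client.GetStream())
    {
      byte[] data = Encoding.ASCII.GetBytes("Test message from the host");
      stream.Write(data, 0, data.Length);
      Console.WriteLine("Sent {0} bytes to {1}", data.Length, args[0]);
    }
  }
}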

Wrapping Up

Given the nature of the applications I was running, I needed several specific features to be available with Docker. When I learned that Windows Server Containers would let me run my .NET console application, I was fairly optimistic that I’d be able to access files on the host and enable socket communication between the host system and my Docker container. What impressed me most was the ability to share folders and files while also exposing the sockets and ports specific to my applications. With Windows Server 2016, the integration of Windows Server Containers is extremely smooth, with very little configuration or orchestration required to deploy Windows containers. For any .NET app you’re planning to migrate to Docker, I definitely recommend using Windows Server Containers and exposing features of Docker as needed to ensure your application runs as expected.

As with most resource sharing, security must always be considered and reviewed: sharing files or opening ports between a host system and a container has to be handled with care so you don’t introduce a vulnerability. With my application, I was able to provide a high level of scalability while also modernizing certain components overall. The application can now be deployed into a more scalable setup using Docker Swarm or other scaling models, limited only by cost and the level of the hardware. As a bonus, this solution bought me much-needed time to evaluate whether a redesign was required or whether this approach could be the permanent solution. With many of the features shown in this article, hopefully you can begin your own migration and modernize your .NET applications.


Sean Iannuzzi has been in the technology industry for more than 20 years and has played a pivotal role in bridging the gap between technology and business visions for a plethora of social networking, Big Data, database solutions, cloud computing, e-commerce and financial applications of today. Iannuzzi has experience with more than 50 unique technology platforms, has achieved more than a dozen technical awards/certifications and specializes in driving technology direction and solutions to help achieve business objectives.

Thanks to the following Microsoft technical expert for reviewing this article: Jesse Squire

