September 2015

Volume 30 Number 9

Azure Insider - Creating Unified, Heroku-Style Workflows Across Cloud Platforms

By Bruno Terkaly | September 2015

A few years ago, Microsoft Azure was just getting off the ground and there was little media coverage. Over the last few years that has changed dramatically. The engineering teams at Microsoft and the community at large have contributed significantly. With this installment of Azure Insider, the series moves to a more customer-centric, case-study view of the world.

For this first Azure case study, I’ve spoken to Gabriel Monroy. He saw an opportunity and quickly developed it into a technology and a start-up company called Deis. His company was quickly acquired, and Monroy became CTO of the new company. When I first met Monroy and started working with his technology at a hackathon, I told him, “It won’t be long before you’re acquired.” This was back in January 2015. Not more than a few months later, his fledgling company was acquired by Engine Yard.

Mother of Invention

There continues to be an explosion of distributed computing platforms that span both on-premises and public clouds. These platforms are powered by container-enabled OSes such as Linux and Windows, Docker containers, and cluster-enabled versions of Linux such as CoreOS.

Most successful open source projects are born out of need. Imagine you’re an architect helping the financial community set up large clusters of virtual machines to support development, testing and production. Before long, you realize you keep solving the same problem over and over.

That’s just what happened to Monroy, who was doing Linux development for the financial community back in 2005 and 2006. He was leveraging some of the earliest technologies around containerization, probably at the same time Solomon Hykes started hacking to create Docker. Much of Monroy’s work, in fact, ended up in Docker.

Around that time many companies were struggling with the same need to streamline the development/testing/production pipeline. The ideal was to get to the stage of continuous integration—that elevated state that gets your software to the users in an automated and timely fashion.

Companies wanted a repeatable process, but there were few or no tools. Companies also wanted developer self-service. They didn’t want developers held back by a lack of hardware or the tyranny of IT operations. Developers didn’t want to pull in ops just to iterate on a new idea or project.

So, instead, developers started to work in the nefarious world of shadow IT, secretly provisioning infrastructure and freeing themselves from dependence on others. Developers also wanted to be able to operate on any public cloud, whether Amazon Web Services, DigitalOcean, Google or Azure. They also wanted to run on bare metal in their own datacenters, if necessary.

Opportunity Knocks

Back in late 2007 and early 2008, Heroku offered a new approach to distributed computing, focusing on Ruby developers who wanted a single environment to develop, test and deploy applications. Developers wanted to focus on their applications, not on the underlying infrastructure. They wanted a single command-line interface to the underlying platform that would let them focus on just the app and its data. They didn’t want to worry about availability, downtime, disaster recovery, deployment, production, scaling up and down as needed, version control, and all those typical issues. At the same time, they didn’t want to depend on outside IT administrators to support their workload. That’s when Monroy first saw opportunity.

A number of related technologies were coming together that triggered an entrepreneurial spark in Monroy’s mind. He could enable Heroku-style developer workflows on multiple cloud platforms. Before diving into all the technologies that enabled Monroy’s idea, here’s a look at this idyllic world where developers can enjoy Heroku-style workflows on virtually any public cloud with Deis.

The following code installs the Deis platform. This assumes there’s a cluster of CoreOS Linux machines with which to work (hosted on-premises or in the cloud):

# Install the Deis platform components onto the cluster
$ deisctl install platform
# Start the Deis platform services
$ deisctl start platform
# Register a user account with the Deis controller
$ deis register https://deis.example.com

With the exception of a few commands relating to logins and SSH keys, the development team is ready to leverage Deis and start deploying applications. Once Deis is installed, developers can deploy applications to development, then test and move them to production with just a handful of commands.
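
Those remaining steps are quick. Here’s a minimal sketch using the Deis CLI, assuming the controller from the previous listing at deis.example.com:

# Log in with the account created by deis register
$ deis login https://deis.example.com
# Upload an SSH public key to authorize Git pushes
$ deis keys:add ~/.ssh/id_rsa.pub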

Enabling Technologies

Other technologies were coming to fruition at the same time, which helped Deis flourish, as shown in Figure 1.

Figure 1 Converging Technologies that Support Deis

Containerization is a critical technology present in modern server-side OSes. It has been in Linux for some time. While it’s not currently present in Windows Server, it should be soon. The concept of containerization is that you take a host OS and partition it along multiple dimensions: memory, CPU and disk. You can break one physical computer running one physical OS into multiple containers. Each container is segregated, so applications run isolated from each other while sharing the same host OS.

This improves hardware utilization because containers can run in parallel without affecting one another. Linux Containers (LXC) isolate CPU, memory, file I/O and network resources. LXC also includes namespaces, which isolate the application from the OS by giving it its own process tree, network access, user IDs and file system.
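
To make that concrete, here’s a minimal sketch of driving LXC directly from the shell (the container name web is arbitrary, and the available templates vary by distribution):

# Create a container from the Ubuntu template and start it in the background
$ sudo lxc-create -t ubuntu -n web
$ sudo lxc-start -n web -d
# Open a shell inside the container's isolated process tree and file system
$ sudo lxc-attach -n web
# List containers and their state
$ sudo lxc-ls --fancy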

Monroy had been leveraging LXC in the early days, even before it was a fundamental part of Docker. Then Docker came along and democratized containerization by standardizing it across Linux distributions. The real breakthrough came when Hykes created a central repository of Docker images. This opened up an ecosystem of publicly available containers other developers could reuse at will. There are more than 14,000 images available at registry.hub.docker.com.

You can find almost every conceivable application pattern to accelerate your next project. You can even make your own images available through this registry. So if you want to use Nginx or Kafka in your application, you don’t need to worry about downloading and installing software, configuring system settings and generally having to know the idiosyncrasies of each individual application. Deis curates your applications as Docker images, and then distributes them across your cluster as Docker containers. It’s easy to compose your own application within a container by leveraging a Docker file:

# Start from the latest CentOS base image
FROM centos:latest
# Copy the contents of the current folder into the image
COPY . /app
# Run subsequent commands from /app
WORKDIR /app
# Serve the folder over HTTP on port 5000 with Python's built-in server
CMD python -m SimpleHTTPServer 5000
# Document that the container listens on port 5000
EXPOSE 5000
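
With that Docker file in place, a quick local test is a two-step affair. Here’s a minimal sketch (the image tag simple-http is just an example):

# Build an image from the Docker file in the current folder
$ docker build -t simple-http .
# Run it in the background, mapping container port 5000 to the host
$ docker run -d -p 5000:5000 simple-http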

Once you’ve defined your Docker files and provisioned Deis on your cluster, application deployment and management become simpler and more powerful. Combine this with the Git source code repository and you get the best of both worlds: you can version both the application source code and the infrastructure (the Docker container) itself.

This style of application development and deployment is repeatable and predictable, and it greatly accelerates the ability to move between development, testing and production. A simple Git push deploys the application to dev, test or production:

# Assume the current folder contains the application and its Docker files
# Create the app on the cluster; this adds a "deis" Git remote
$ deis create
$ git add .
$ git commit -m "notes by a developer"
$ git push deis master
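
Once the push completes, day-to-day management stays in the same CLI. Here’s a minimal sketch of a few common follow-on commands (the web process type and the variable name are illustrative):

# Scale the web process type to four containers across the cluster
$ deis scale web=4
# Inject configuration without rebuilding the image
$ deis config:set WORKERS=8
# Tail the aggregated application logs
$ deis logs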

Back to Heroku

Monroy noticed Heroku was making huge inroads in the developer community because it greatly simplified application deployment, execution and management, mostly for Ruby and Node.js apps.

Developers typically sit at a hosted command prompt that lets them perform virtually all aspects of application development, infrastructure provisioning and scaling tasks. Monroy found that brilliant: a single place for a developer to get all his work done, minimizing the plethora of development tools.

A lot of the management headaches of running a cluster were automated—backup and restore, configuring DNS, defining a load balancer, monitoring disk usage, managing users, platform logging and monitoring, and so on. Adding nodes is a simple case of modifying a URL in a cloud-config file through a command-line interface. Perhaps the most important aspect of running a cluster is to make it self-healing, such that failover and disaster recovery are automatically included.
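
For example, adding a CoreOS node typically comes down to booting it with a cloud-config file that points at the cluster’s discovery URL (more on CoreOS in a moment). Here’s a minimal sketch, with a placeholder token:

#cloud-config
coreos:
  etcd:
    # Nodes booted with the same discovery URL join the same cluster
    discovery: https://discovery.etcd.io/<token>
    addr: $private_ipv4:4001
    peer-addr: $private_ipv4:7001
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start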

CoreOS

While Heroku had the funding and the time to build its custom platform, Monroy needed an off-the-shelf solution for managing clusters that included pieces such as load balancing, monitoring and logging, failover, and so on. At about the same time, a Linux distribution known as CoreOS was gaining developer mindshare.

CoreOS brought the perfect mix of technologies to help facilitate the world Monroy envisioned. CoreOS is an open source Linux OS designed for cluster deployments. It focuses on automation, deployment, security, reliability and scalability. These were precisely the characteristics with which Heroku attracted developers.

CoreOS did indeed provide the silver bullet for which Monroy was looking. CoreOS isn’t an ordinary Linux-based OS. Interestingly, it’s even pioneering its own alternative to Docker containers with the Rocket (rkt) container runtime.

The innovation CoreOS brings to the table is significant. Monroy and his team were most interested in etcd, fleet and flannel. Etcd is a distributed key value store that provides a reliable way to store data across a cluster of machines. It gracefully handles leader elections during network partitions and tolerates machine failure, including failure of the master. The cluster configuration lives in this key value store and is intelligently distributed throughout the cluster. An easy-to-use API lets you change a configuration value, which is then automatically replicated to the other nodes in the cluster.
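
Here’s a minimal sketch of that behavior using etcd’s v2 HTTP API (the key name is illustrative):

# Write a key; etcd replicates the value across the cluster
$ curl -L http://127.0.0.1:4001/v2/keys/deis/example -XPUT -d value="hello"
# Read it back from any node in the cluster
$ curl -L http://127.0.0.1:4001/v2/keys/deis/example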

Flannel provides a virtual network that gives a subnet to each host for use with the container runtime. This gives each container a unique, routable IP within the cluster and dramatically reduces port-mapping complexity.

Fleet helps you think of your cluster as a single init system, freeing you from having to worry about the individual machines where your containers are running. Fleet automatically guarantees your container will be running somewhere on the cluster. So if the machine fails or needs updating, the fleet software will automatically move your workload to a qualified machine in the cluster.
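
In practice, you interact with fleet through systemd-style unit files and the fleetctl CLI. Here’s a minimal sketch (web.service is a hypothetical unit describing a container):

# Submit a unit file, then let fleet schedule it somewhere on the cluster
$ fleetctl submit web.service
$ fleetctl start web.service
# See which machine the unit landed on
$ fleetctl list-units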

Notice in Figure 2 that you can send a put request to the cluster to tell fleet the desired state of a particular service. That’s the underlying plumbing that makes up fleet services. Fleet issues the request on your behalf, freeing you from having to worry about the details of the cluster.

Figure 2 Conceptual Diagram of CoreOS

Monroy now had everything he needed to build Deis: a standardized containerization model from Docker and a cluster-aware implementation of Linux called CoreOS. He could now offer Heroku-style development for the masses, not just those who could afford the hefty fees charged by Salesforce.com, the company that now offers Heroku as a service.

Deis has three basic components: a control plane, a Docker registry and a data plane. These all work in concert. It starts with the developer using Git to push a new release, which might include both source code for the application and a Docker build file.

This new build, along with the existing configuration, results in a new release, which is then pushed up to the Docker registry. The scheduler running in the data plane then pulls the released images into dev, test or production.

At this stage, the containers are managed both by CoreOS and by Deis, providing fault tolerance, scalability and the other Platform-as-a-Service features. Also in the data plane is the router, which takes requests submitted by application users and routes them to the appropriate container. Figure 3 depicts these technologies working together.

Figure 3 Deis Architecture

Wrapping Up

Some of the most successful open source projects out there don’t reinvent the wheel. They take preexisting components, bring them together under a single umbrella and apply technology in a unique way. Monroy and the team at Deis did just that. They harnessed the power of Docker containers, CoreOS and Heroku-style workflows. The team could implement Deis not only on Azure, but also on other public clouds and even on-premises.

In the next installment of Azure Insider case studies, I’ll examine Docker. What is Docker? How did it become a billion-dollar company in just a few years, and how is it changing the way applications are developed, tested and rolled into production?


Bruno Terkaly is a principal software engineer at Microsoft with the objective of enabling development of industry-leading applications and services across devices. He’s responsible for driving the top cloud and mobile opportunities across the United States and beyond from a technology-enablement perspective. He helps partners bring their applications to market by providing architectural guidance and deep technical engagement during the ISV’s evaluation, development and deployment. Terkaly also works closely with the cloud and mobile engineering groups, providing feedback and influencing the roadmap.

Thanks to the following technical expert for reviewing this article: Gabriel Monroy (Engine Yard)