
Dock Tales, Or: How I Learned to Stop Worrying and Love the Container

Posted by Sohrab Hosseini on 23 March 2015

docker, tech, docktales

Docker has just turned two, and it has been a heck of a two years. Looking back, describing its rise as meteoric almost seems like a misnomer, since there is no sign of it slowing down.

[Image: Moby Dock & Friends]

Here at Deloitte Platform Engineering, we have been watching the rise of this technology with great interest. There has been a lot of hype around Docker, and every man and his dog has been containerising anything they can get their hands on. But once you cut through all that, there are compelling reasons to use containers, and we have done so in multiple client engagements to create predictable and manageable solutions. Before we get into that reasoning, it helps to first describe what Docker, and containers in general, are. This is the first in a series of posts focusing on Docker and its ecosystem.

Containers

Shipping containers were a milestone for the shipping industry. Before them, cargo was sorted and loaded manually by dock labourers, also known as dockers. This was a slow and error-prone process. The 1950s saw the introduction of intermodal containers with standardised dimensions. This meant the same cranes could be used to load and unload cargo at a much faster rate, regardless of what was inside. Ships and trucks were built to carry these containers to all corners of the globe. It no longer mattered if you were shipping boxes of matchsticks in the same load as gunpowder; each was in its own container, sealed and isolated.

[Image: An intermodal container (attribution: KMJ at the German language Wikipedia)]

Software containers work in a similar way and solve a similar problem. For a software application to behave consistently, a specific set of conditions must be met in its runtime environment: the correct version of the operating system, specific libraries, or an available application server, for instance. Furthermore, these requirements need to be met across multiple runtimes, such as production and non-production servers, or the nodes of a cluster.

DevOps’ answer to this problem is immutable infrastructure. You should only need to describe your runtime once, and an automated process should ensure that the description is met on every deployment server. This is the niche that has so far been filled by the likes of Chef and Puppet. They approach the problem by constantly monitoring each server and ensuring that it aligns with the desired state. These tools have brought us most of the way, but I believe containers are what will take us the rest of the way.

Picture a container as its namesake, a box. Regardless of what is happening on the outside, the interior of the container remains the same. This guarantees that once I configure the inside of a container to my requirements, it remains the same regardless of where the container is running.

This turns the deployment process on its head. The application is now “deployed” at build time, into a container. This means that the majority of deployment issues are resolved before the container ever leaves the developer’s PC. Similarly, the actual deployment to a server becomes nothing more than moving the container to that server and running it.
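In Docker terms, that workflow looks roughly like this (a sketch only; the image name, registry and port are placeholders rather than anything from a real engagement):

    # On the developer's machine: bake the application into an image
    docker build -t myorg/myapp:1.0 .

    # Publish the image to a registry so other machines can fetch it
    docker push myorg/myapp:1.0

    # On any target server: "deployment" is just pulling and running the same container
    docker pull myorg/myapp:1.0
    docker run -d -p 8080:8080 myorg/myapp:1.0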

This technology has made some of my fondest excuses obsolete. I can no longer utter the words “but this works on my machine”. I can no longer watch with sadistic delight as an Ops guy struggles to deploy a new application I’ve thrown over the fence to him. Given all this, there would have been riots in the streets if containers did not also make developers’ lives easier. Total control over the environment where the application is deployed means a shorter time to production, not to mention greater confidence in production deployments.

Containers vs. Virtual Machines

Containers are essentially a virtualisation technology. The likes of VMware and VirtualBox virtualise the hardware; containers push this virtualisation a level higher, to the operating system. This essentially means that all containers share the same operating system, but each is given an isolated workspace to run in.

The more we describe containers, the more they sound like virtual machines. But the differences are quite significant, and they are why containers may succeed where VMs failed in the application deployment space.

[Image: Traditional Deployment vs. Virtual Machine vs. Container]

In contrast to VMs, containers are:

  • Faster: There is no guest OS to wait on while it boots; a container’s boot time is almost your application’s start-up time (see the quick check after this list).
  • Smaller: Container images (especially Docker images - see next section) are much smaller than VM images.
  • More efficient: There is no memory or processor overhead from running a whole second guest OS.
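One quick way to see this sharing in action (a minimal sketch, assuming a Linux host with Docker installed and the busybox image available): because there is no guest OS in between, a container reports the host’s own kernel, and it starts in a fraction of a second.

    # Kernel version as reported on the host...
    uname -r

    # ...and as seen from inside a container - the same kernel, shared with the host
    docker run --rm busybox uname -r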

Another point of comparison is security. I would be the first person to tell you that containers are not as secure as VMs. If you want to see how a virus behaves on a machine, for the love of god, use a VM! That is not the value proposition of containers. Sure, they are more secure than running applications directly on the physical machine, but they will never be as secure as a VM. A later instalment of this series will concentrate on Docker security.

Docker

Docker started off as a proprietary product, powering dotCloud’s PaaS. In March 2013 it was released as an open-source project and gained so much momentum that dotCloud refocused all of its effort on Docker’s development, going as far as renaming themselves to Docker, Inc.

At the age of two, with its latest version at 1.5, Docker would normally be considered, in this industry, an immature product not yet ready for production. And yet it has risen to popularity at an astounding rate, with one of the most active communities we have seen in a while. A lot of big guys, like Google, Red Hat and even Microsoft, are putting their weight behind it.

This may sound curious, since Docker has not introduced anything fundamentally new. Containers have been around for a while; LXC on Linux, Solaris Zones, FreeBSD Jails and others all predate Docker. So why did Docker take off the way it did?

The existing container technologies had one large (and almost crippling) problem: a high barrier to entry. Unless you really knew your way around a kernel, working with containers was akin to pulling teeth from the proverbial chicken. That is, until Docker. Suddenly, containers became usable to the average developer. This was thanks to some simple but powerful features:

  • a mechanism to describe containers through a plain-text file with a handful of simple commands (the Dockerfile),
  • an easy-to-use command line to build and manage containers,
  • a Git-like vocabulary to track changes to a container, and
  • cacheable layers (via UnionFS) to reduce build times.

In my humble opinion, the Dockerfile was the killer app of the lot. Through it, you can codify your infrastructure into a reusable asset and, at the same time, put it under source and version control.
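To give a flavour, here is a minimal sketch of a Dockerfile (the base image, package and file names are purely illustrative); each instruction produces a cacheable layer, so steps that have not changed are skipped on subsequent builds:

    # Start from a known base image
    FROM ubuntu:14.04

    # Install the web server (this layer is cached until the command changes)
    RUN apt-get update && apt-get install -y nginx

    # Copy the application content into the image
    COPY index.html /usr/share/nginx/html/

    # Document the port the application listens on
    EXPOSE 80

    # The command to run when the container starts
    CMD ["nginx", "-g", "daemon off;"]

A single docker build command turns this file into an image that runs the same way on a laptop as it does on a server.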

As useful as containers may be, Docker showed that if you make your tools easy to use and fast to run, your rate of adoption will skyrocket. This commodification (it is not a real tech article if we are not dropping in enough -ification words) has made it possible for anyone with the smallest inclination and a very short time commitment to learn enough about containers to become dangerous.

Stay tuned for more.

 

If you like what you read, join our team as we seek to solve wicked problems within Complex Programs, Process Engineering, Integration, Cloud Platforms, DevOps & more!

 

Have a look at our open positions at Deloitte. You can search and see which ones we have in Cloud & Engineering.

 

Have more enquiries? Reach out to our Talent Team directly and they will be able to support you best.
