Chances are you’ve heard something about Docker in recent months. Straight from the whale’s mouth,
Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications. Consisting of Docker Engine, a portable, lightweight runtime and packaging tool, and Docker Hub, a cloud service for sharing applications and automating workflows, Docker enables apps to be quickly assembled from components and eliminates the friction between development, QA, and production environments. As a result, IT can ship faster and run the same app, unchanged, on laptops, data center VMs, and any cloud.
If we think about what Docker is, it fits in nicely with a few things. First, the DevOps model: if apps can be quickly assembled from components, they can be quickly integrated and tested. Docker also doesn’t care how you’re running the environment beneath it, so an app can be moved quickly and easily between Docker environments, from someone’s laptop for testing to an enterprise-grade production deployment. The notion of abstracting the app from the OS underneath, and allowing it to be shared, probably reminds us of something else, too.
A VM is Like a Container, Right?
Well, sort of. The fundamentals are quite similar. Instead of running a hypervisor like ESXi, we’re running the Docker Engine on top of the hardware. A virtual machine is a very specific configuration: it runs its own full operating system, plus whatever other components the application inside it needs. The key here is that the operating system is not shared between virtual machines; each virtual machine must have its own operating system installed. A Docker container, by contrast, holds an app (or multiple apps) and the binaries and libraries the app needs to run, while sharing the host operating system’s kernel. Think of the efficiency of the environment if I can have every component of an application running on top of the same operating system. When I put all of the components of my app into a container, that container acts as a logical construct, ensuring segregation of apps and workloads, and allowing me to run a multi-tenant environment with multiple apps without worrying about them stepping on each other. Here’s my attempt to visually represent it, sort of like I did for VMware and Clustered Data ONTAP:
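To make "packing the app and its libraries into a container" concrete, here’s a minimal sketch of a Dockerfile. The base image, app file, and image name are all hypothetical, just to show the shape of it:

```dockerfile
# Hypothetical Dockerfile: pack a small Python web app and its
# library dependencies into a single portable image.
FROM python:2.7
# Copy the app itself into the image (hypothetical file)
COPY app.py /opt/app/
# Install the libraries the app needs to run
RUN pip install flask
# Port the app listens on inside the container
EXPOSE 5000
CMD ["python", "/opt/app/app.py"]
```

Something like `docker build -t myapp .` followed by `docker run -d myapp` would build and launch it, and several copies can run side by side on the same host kernel without stepping on each other.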
How Do I Pack the Ship?
Docker, like OpenStack, can be run on bare metal, or on top of an operating system. As with OpenStack, there can be benefits to not running on bare metal. Wait, what? That’s right, this virtualization person said sometimes we may not want to do things bare metal. CoreOS is a lightweight, powerful Linux distribution designed with Docker in mind. In fact, all applications that run on CoreOS run inside Docker containers. The theory behind this is to provide a minimal Linux distribution with the applications abstracted away from it. CoreOS also has great features such as its FastPatch update technology, which uses an active-passive partition scheme. This means the OS is updated as a single unit onto the passive partition, and if things go horribly wrong, I can quickly revert to the working version of my OS. Scott Lowe has a great introduction to CoreOS and its features.
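As a sketch of how you steer that update behavior, here’s a fragment of a CoreOS cloud-config file. The strategy value shown is one CoreOS documents; treat the rest of the file as assumed context:

```yaml
#cloud-config
coreos:
  update:
    # etcd-lock: a node takes a cluster-wide lock before rebooting into
    # the freshly updated partition, so only one machine reboots at a time
    reboot-strategy: etcd-lock
```

Other documented strategies include `reboot` (reboot as soon as the update lands), `best-effort`, and `off`.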
When Do I Get On the Boat?
So, when do I hop on the container ship? Well, that depends. Docker, when coupled with something like CoreOS, is ideal for scale-out application architectures. CoreOS’ fleet allows Docker containers to be deployed anywhere in a cluster, and also adds a layer of fault tolerance. Fleet will maintain whatever number of copies of a component you need, creating new ones in the event of a hardware failure. This means my application gets recovery from component loss as a built-in feature. If you’re looking at redesigning a legacy application, or implementing a new application that would benefit from a distributed architecture, it may be time to visit the shipyard and take a look at what Docker can do for you.
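As a sketch of what that looks like in practice, here’s a hypothetical fleet unit file (`myapp@.service`) that runs a containerized app and, in its `[X-Fleet]` section, tells fleet to keep instances on separate machines. The image and names are made up for illustration:

```ini
[Unit]
Description=My containerized app (instance %i)

[Service]
# Clean up any stale container, then run the (hypothetical) image;
# fleet reschedules the unit elsewhere if this machine fails
ExecStartPre=-/usr/bin/docker kill myapp-%i
ExecStartPre=-/usr/bin/docker rm myapp-%i
ExecStart=/usr/bin/docker run --name myapp-%i myorg/myapp
ExecStop=/usr/bin/docker stop myapp-%i

[X-Fleet]
# Never schedule two instances of this app on the same machine
Conflicts=myapp@*.service
```

Starting `myapp@1.service`, `myapp@2.service`, and `myapp@3.service` with `fleetctl start` would ask fleet to keep three copies running somewhere in the cluster.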
Whatever you put inside your containers, there will inevitably be a use case for Docker you stumble upon. It’s worth doing some research now, so you know how to pack the containers when they show up. One great resource is the #vBrownBag talk on Docker and VMware with Chris Sears. Another great #vBrownBag talk, from the OpenStack Summit in Atlanta, is a Docker and OpenStack demo by Docker’s Eric Windisch. I’ve also got 20 more days of blogging left in #vDM30in30, so you may even see some more getting-started information from yours truly!
Song of the Day – Clean Bandit – Rather Be ft. Jess Glynne
Melissa is an Independent Technology Analyst & Content Creator, focused on IT infrastructure and information security. She is a VMware Certified Design Expert (VCDX-236) and has spent her career focused on the full IT infrastructure stack.