
Abstract Thoughts on Abstraction: OpenStack and Hypervisors

We keep hearing about OpenStack, but what exactly are we talking about?  It’s not an apples-to-apples comparison to anything else in technology, which can make it difficult for some to understand.  OpenStack is one of those sort of nebulous (see what I did there?) things many people know about, but not everyone may understand.  Or people may focus on a subset of projects and features and not examine the whole picture, which, by the way, is perfectly okay.  There are also a number of answers depending on the use case you’re implementing for, but in the simplest terms, OpenStack is open source software for creating clouds.

According to the OpenStack Project,

“OpenStack software controls large pools of compute, storage, and networking resources throughout a datacenter, managed through a dashboard or via the OpenStack API. OpenStack works with popular enterprise and open source technologies making it ideal for heterogeneous infrastructure.”

So let’s pick this definition from the OpenStack Project itself apart a little bit.  Fair warning, things may get a little weird; it’s time for a bit of a thought experiment.

(Continuing with my theme of OpenStack Summit in Paris this week)

First, let’s start with the hardware layer, which is, and always will be, arguably one of my favorites.  We’ve got a “storage array” and “servers”.  Storage is the easy part.  We could use anything here according to OpenStack, but since I’m most well versed in NetApp Clustered Data ONTAP, that’s going to be our storage foundation.  Now, since I can use any protocol with NetApp, I’m going to be using this cluster for Swift, Cinder, and soon Manila.  It works for all of them, so we’re covered here.  Storage is done.  Like I said, this is the easy part.
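For a rough idea of what consuming that backend looks like from the API side, here’s a minimal sketch using the openstacksdk Python library.  The cloud name “mycloud” and the volume type “netapp-cdot” are made-up placeholders for this illustration, not anything from a real environment.

```python
# Minimal sketch with openstacksdk (pip install openstacksdk).
# Assumes a cloud named "mycloud" in clouds.yaml and a Cinder volume type
# "netapp-cdot" mapped to the NetApp backend -- both names are illustrative.
import openstack

conn = openstack.connect(cloud="mycloud")

# Create a 10 GB block volume; Cinder picks the backend (here, presumably
# the NetApp cluster) based on the volume type.
volume = conn.block_storage.create_volume(
    name="demo-vol",
    size=10,
    volume_type="netapp-cdot",
)
print(volume.id, volume.status)
```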

Now, for our servers.  How many?  How big?  Do I want some pools with massive amounts of CPU and memory available from only a few hosts, and others spread across many more?  Many of the thoughts we have here when designing a cloud infrastructure are the same ones we have when designing the virtualized infrastructures we’ve grown to hold so dear.
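One place those sizing decisions surface in OpenStack is flavors (host aggregates are the other half of that story).  Here’s a hedged sketch, again with openstacksdk; the flavor names and sizes are invented for illustration, and it assumes admin credentials in a clouds.yaml entry called “mycloud”.

```python
# Sketch: flavors expressing a "few big" pool and a "many small" pool.
# Names and sizes are made up; requires admin credentials ("mycloud").
import openstack

conn = openstack.connect(cloud="mycloud")

big = conn.compute.create_flavor(name="pool-large", ram=65536, vcpus=16, disk=80)
small = conn.compute.create_flavor(name="pool-small", ram=4096, vcpus=2, disk=20)

for flavor in (big, small):
    print(flavor.name, flavor.vcpus, "vCPUs,", flavor.ram, "MB RAM")
```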

What’s running on my servers?  I could, yes, install OpenStack natively, but OpenStack lacks some features I feel I need in an enterprise environment.  What happens if I have a hardware failure?  Do I really want my controller instances to go down and stay down?  Or what if I want vMotion-type capabilities to protect some of my more critical instances?  I’m going to run with a hypervisor.

Now, which hypervisor?  Well, that’s going to depend on a lot of things: what skill set I have in house, what applications I want to run, and which hypervisors they are supported on.

Hypervisors are in an interesting spot right now.  There are multiple vendors, each with their own unique feature sets and use cases.  Some people have moved from one hypervisor to another, only to go back again, while others are now running multiple hypervisors in their production environments.

So, is OpenStack really a hypervisor for hypervisors?  Hear me out…hypervisors are all about abstracting the operating system from the physical components and making them portable, making them resilient.  If I’m running OpenStack with multiple hypervisors, aren’t I doing the same thing?  Using OpenStack as the automation and orchestration framework on top of the hypervisor?  Is OpenStack the final abstraction layer we need on our way to the cloud, and software defined everything?

Kenneth Hui makes a really interesting comparison between cloud and virtualization in his OpenStack HA presentation, and he has a point.  Who really cares about my instance?  There’s nothing there, anyway; it is all about the volumes attached to it.  As long as my volumes are on a shared storage device, I can attach them to whatever instance I need them on.  Blow up an instance on one hypervisor, and create it on another.  This gives me even more portability going from public cloud to private cloud and back again, or perhaps even between, or within, my own data centers.  The OpenStack projects really are new abstraction layers for their respective technologies: Cinder (block storage), Swift (object storage), Glance (image service), and Trove (database service), for example.
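To make that concrete, here’s a rough sketch of the pattern: boot an instance, attach the volume, throw the instance away, boot another one, and attach the same volume.  All of the IDs are placeholders, and the exact keyword argument for the volume attachment can vary between openstacksdk releases, so treat this as an illustration rather than a recipe.

```python
# Sketch of "instances are disposable, volumes are what matter".
# Assumes a clouds.yaml entry "mycloud"; every ID below is a placeholder.
import openstack

conn = openstack.connect(cloud="mycloud")

VOLUME_ID = "REPLACE-WITH-VOLUME-UUID"
IMAGE_ID = "REPLACE-WITH-IMAGE-UUID"
FLAVOR_ID = "REPLACE-WITH-FLAVOR-UUID"
NETWORK_ID = "REPLACE-WITH-NETWORK-UUID"


def boot_with_volume(name):
    """Boot an instance and attach the shared Cinder volume to it."""
    server = conn.compute.create_server(
        name=name,
        image_id=IMAGE_ID,
        flavor_id=FLAVOR_ID,
        networks=[{"uuid": NETWORK_ID}],
    )
    server = conn.compute.wait_for_server(server)
    # The attachment attribute name may differ slightly between SDK releases.
    conn.compute.create_volume_attachment(server, volume_id=VOLUME_ID)
    return server


# "Blow up an instance... and create it on another": the data follows the volume.
first = boot_with_volume("app-gen1")
conn.compute.delete_server(first)       # instance gone, volume survives
conn.compute.wait_for_delete(first)
second = boot_with_volume("app-gen2")   # same volume, brand new instance
print("volume", VOLUME_ID, "now attached to", second.name)
```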

The hypervisor will become less important, as long as we can get the features we need from whichever one we choose to use.  The OpenStack advantage is that it provides a common set of APIs and a powerful framework to run all of the underlying data center and cloud components. Got Hyper-V? No problem, run OpenStack. Got vSphere? No problem, run OpenStack. Got KVM? Sure, go for it.  A combination of them?  Have at it, using OpenStack as a tool to cross hypervisors as needed.
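And that really is the point of the common API: the call is the same no matter what’s underneath.  A quick sketch, assuming admin credentials in a clouds.yaml entry called “mycloud”, that simply asks Nova what each compute node is running:

```python
# Sketch: the same Nova API serves whichever hypervisors sit underneath.
# Assumes admin credentials in a clouds.yaml entry called "mycloud".
import openstack

conn = openstack.connect(cloud="mycloud")

# Each compute node reports its hypervisor type (e.g. QEMU/KVM, Hyper-V,
# VMware vCenter); the API call is identical either way.
for hv in conn.compute.hypervisors(details=True):
    print(hv.name, "->", hv.hypervisor_type)
```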

This new layer of abstraction is key for data center mobility, and it is helping us define the policy-based data center of the future.  Policy-based infrastructure is the future, but that will be another post this month.

Song of the Day – Everclear – Santa Monica
