I’ve always felt that storage is one of the fundamental building blocks of virtualization. The VMware environment I worked with sucked up the most storage of any group or application, even with storage efficiencies like deduplication in place. When something went wrong with the environment’s storage, it quickly impacted everything running on it, grinding virtual machines to a halt. This was also one of the main reasons I decided to work for a storage company. OpenStack, cloud, and the orchestration and automation that come with them are the natural evolution of virtualization. We’ve gotten to the point where it doesn’t make sense to spend all of our precious time managing our virtualized environments, and we can’t if we want to continue to provide value to our customers and the businesses we support. Automation and orchestration are key pieces of the DevOps movement, and they allow us to get back to innovating instead of simply managing. Storage is still central to all of this, and the basic principles we learned with virtualization carry over with this evolution.
Storage is one of the main building blocks of most corporate data centers, which shows the importance it will have in OpenStack. We have Cinder for block-based storage (Cinder…block? Get it? Clearly the creators of OpenStack do), Swift for object-based storage (we want our storage to be fast, after all), and, debuting as an official project in Juno, Manila for file sharing. Let’s take a closer look at these building blocks of OpenStack.
First, let’s look at Cinder. Cinder is where the block storage volumes for our OpenStack Nova instances reside. Here’s where it gets interesting. While you certainly could go completely whitebox and use local storage, that may not provide the availability you need in an enterprise environment. Many storage vendors do in fact integrate with Cinder, and OpenStack publishes a Cinder Support Matrix listing vendors, arrays, and the release in which they were first supported.
In addition to providing supportability, a shared storage array often lets organizations leverage their existing investments to get started, which can be key in the early days of OpenStack in an environment. A hypervisor can be installed on some spare servers, and volumes can be carved up from existing storage arrays to get an environment up and running fairly quickly. One of the great things about OpenStack is that you don’t necessarily have to run out and buy a whole new dedicated infrastructure to get started. Perhaps you don’t have the fastest servers lying around, but proving the use case for OpenStack on them may be enough to open the gate for some new toys…I mean new equipment.
When an instance is created, it has something called an ephemeral volume, which is where the core operating system is stored. Usually, the instance is based on one of the five default “flavors”. A flavor is basically a virtual hardware configuration for an instance, and the size of the ephemeral volume is defined in the flavor. Think of an ephemeral volume as the C:\ drive of a Windows server: often only the operating system lives there, with additional drives (Cinder volumes) created for applications or data. The ephemeral volume can run from shared storage, but the default is to run from the host where the instance is created. If an instance is destroyed, the ephemeral volume is destroyed with it. A Cinder volume, by contrast, is a persistent volume that can be kept after the instance is destroyed. For example, you may decide to destroy an instance and deploy a new one instead of upgrading it; if that instance hosted a database, the database would live on a Cinder volume that survives the instance being destroyed and re-created.
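To make the flavor idea concrete, here is a small sketch of the classic m1.* defaults as plain data. The sizes shown are the historical defaults (real deployments frequently customize or replace them), and the helper function is purely illustrative, not part of any OpenStack client:

```python
# A toy illustration of the classic default Nova flavors. The sizes shown
# are the historical m1.* defaults; deployments often customize these.
DEFAULT_FLAVORS = {
    "m1.tiny":   {"vcpus": 1, "ram_mb": 512,   "root_disk_gb": 1},
    "m1.small":  {"vcpus": 1, "ram_mb": 2048,  "root_disk_gb": 20},
    "m1.medium": {"vcpus": 2, "ram_mb": 4096,  "root_disk_gb": 40},
    "m1.large":  {"vcpus": 4, "ram_mb": 8192,  "root_disk_gb": 80},
    "m1.xlarge": {"vcpus": 8, "ram_mb": 16384, "root_disk_gb": 160},
}

def ephemeral_size_gb(flavor_name):
    """The flavor dictates the size of the instance's ephemeral root volume."""
    return DEFAULT_FLAVORS[flavor_name]["root_disk_gb"]
```

So an instance built from m1.small gets a 20 GB ephemeral root volume, and anything persistent belongs on a Cinder volume instead.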
Cinder, of course, is only one part of an OpenStack implementation. Cinder talks to Nova (compute), the storage controller, and the hypervisor. Storage platforms have a unique Cinder driver that allows for integration with the storage array, such as the ability to use array-level snapshots and clones. All of this communication happens through APIs. Using a Cinder volume is a two-step process: before it can be attached to a Nova instance, it first must be created. This can be done through the Horizon dashboard in any web browser, with a Python client at the command line, or directly against the Cinder API. Keystone (the OpenStack identity service) also plays a role in Cinder volume creation: the requesting user’s credentials are validated and role authorization confirmed before the volume is created. After the volume has been created, the user can view its details in the Horizon dashboard.
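The flow above can be sketched as a toy model. This is not the real Keystone or Cinder API (the class names and fields here are invented stand-ins), just a minimal simulation of the sequence: authenticate first, create the volume, then attach it:

```python
# A toy sketch (NOT the real OpenStack APIs) of the workflow described above:
# Keystone validates credentials, then the volume is created, then attached.

class FakeKeystone:
    """Stand-in for the identity service: credentials are checked first."""
    def __init__(self, users):
        self.users = users  # username -> password

    def authenticate(self, name, password):
        if self.users.get(name) != password:
            raise PermissionError("Keystone rejected the credentials")
        return {"user": name, "token": "fake-token"}

class FakeCinder:
    """Stand-in for the block storage service."""
    def __init__(self):
        self.volumes = {}
        self.next_id = 1

    def create(self, token, size_gb):
        vol_id = "vol-%d" % self.next_id
        self.next_id += 1
        self.volumes[vol_id] = {"size_gb": size_gb, "status": "available"}
        return vol_id

    def attach(self, token, vol_id, instance_id):
        vol = self.volumes[vol_id]
        if vol["status"] != "available":
            raise RuntimeError("volume is not available")
        vol["status"] = "in-use"
        vol["attached_to"] = instance_id

keystone = FakeKeystone({"demo": "secret"})
cinder = FakeCinder()
cred = keystone.authenticate("demo", "secret")      # identity check
vol_id = cinder.create(cred["token"], size_gb=10)   # step 1: create
cinder.attach(cred["token"], vol_id, "instance-1")  # step 2: attach
```

The key point the model captures is the ordering: a volume exists in an "available" state on its own, and attaching it to an instance is a separate, later operation.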
The Cinder driver is a little bit like VMware’s VAAI, in the sense that it allows integration between OpenStack’s Cinder and the storage array being utilized. For example, in the Juno release, a NetApp FlexVol’s capacity is now reported individually to the Cinder scheduler, and Cinder decides which pool the volume is created in. With more SDS (Software-Defined Storage) access to the underlying hardware, Cinder is able to tie directly into the features and advantages the storage can offer.
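The idea behind that pool-aware placement can be sketched simply. The real Cinder scheduler uses a chain of filters and weighers; this hedged stand-in just picks the pool with the most reported free capacity:

```python
# A simplified sketch of pool-aware scheduling: each backend pool (e.g. a
# NetApp FlexVol) reports its free capacity, and the scheduler places the
# new volume accordingly. The real Cinder scheduler uses filters and
# weighers; this toy version simply prefers the most free space.

def pick_pool(pools, requested_gb):
    """pools: dict mapping pool name -> free capacity in GB."""
    candidates = {name: free for name, free in pools.items()
                  if free >= requested_gb}
    if not candidates:
        raise RuntimeError("no pool has enough free capacity")
    return max(candidates, key=candidates.get)

# Hypothetical pool names and capacities, for illustration only.
pools = {"flexvol_a": 500, "flexvol_b": 120, "flexvol_c": 900}
chosen = pick_pool(pools, 200)
```

Because each pool reports individually, a nearly full FlexVol no longer hides behind an aggregate capacity number.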
After a Cinder volume has been created, it can be attached to a Nova instance over either iSCSI or NFS. While Cinder is referred to as block storage, it can use NFS when it is deployed on top of a hypervisor; the hypervisor then presents the Cinder volumes as block storage to the instance. This simplifies storage layer management by eliminating the need for LUNs. The Nova REST API invokes the Cinder API (after authentication, of course), allowing the Cinder volume to be attached to the Nova instance. Nova creates the actual connection using the information it has received from Cinder.
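A rough sketch of that handoff: the connection details Cinder returns include a volume type that tells Nova which protocol to speak. The field names below mirror, but greatly simplify, what real Cinder drivers return, and the example values are hypothetical:

```python
# A simplified sketch of how Nova might interpret the connection details
# Cinder hands back. The "driver_volume_type" field indicates the protocol;
# field names mirror, but simplify, real Cinder driver output.

def describe_connection(connection_info):
    kind = connection_info["driver_volume_type"]
    data = connection_info["data"]
    if kind == "iscsi":
        return "iSCSI target %s on portal %s" % (
            data["target_iqn"], data["target_portal"])
    if kind == "nfs":
        return "NFS export %s (presented to the guest as a block device)" % (
            data["export"])
    raise ValueError("unsupported volume type: %s" % kind)

# Hypothetical iSCSI connection info, for illustration only.
info = {"driver_volume_type": "iscsi",
        "data": {"target_iqn": "iqn.2010-10.org.openstack:vol-1",
                 "target_portal": "192.0.2.10:3260"}}
```

Either way, the instance just sees a block device; the protocol choice stays below the guest.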
Today, with the Juno release of OpenStack becoming available, Cinder, along with the other projects, has been further enhanced with a number of new features. Volume replication, consistency groups (and snapshots of consistency groups), and volume pools are all now supported. These are important features for enterprise availability of Cinder volumes and the Nova instances they serve. The full details are in the Cinder release notes for Juno.
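Consistency groups are worth a quick illustration of why they matter: every volume in the group is snapshotted at the same point in time, so a multi-volume application (say, a database with separate data and log volumes) can be restored to one consistent moment. This is a toy model, not the real Cinder consistency group API:

```python
# A toy model (not the real Cinder API) of a consistency group snapshot:
# all volumes in the group share a single point-in-time marker, so they
# can be restored together to one consistent moment.
import time

def snapshot_consistency_group(group_volumes):
    """Return per-volume snapshots that all share one timestamp."""
    taken_at = time.time()  # one consistent point in time for the whole group
    return [{"volume": vol, "taken_at": taken_at} for vol in group_volumes]

# Hypothetical database volumes, for illustration only.
snaps = snapshot_consistency_group(["db-data", "db-logs"])
```

Snapshotting the data and log volumes independently, even seconds apart, could leave the restored database internally inconsistent; the group guarantees one shared moment.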
One of the biggest obstacles OpenStack will have to overcome is the lack of some enterprise features, such as high availability and live migration of instances. With every release the gap continues to close, and OpenStack becomes more viable in the enterprise space. Hypervisor-specific flavors of OpenStack, such as Red Hat Enterprise Linux OpenStack Platform and VMware Integrated OpenStack, help to add these features and provide the extra layer enterprises need to begin utilizing OpenStack today.