Many of us remember when virtualization was in its infancy, when we didn’t have features like high availability to protect against hardware failures, or live migration to move a virtual server off its host for maintenance. Back then, the benefits were all about reduced power and cooling requirements and using less space in our data centers. It was also about abstracting the server from the hardware layer to protect against hardware failures. This brought unique challenges too, like virtual server sprawl, since spinning up a new server no longer required buying hardware, and older guest operating systems (Windows NT, I’m looking at you) hanging around well past their prime.
(OpenStack Summit finishes up in Paris…next stop, Vancouver!)
Cloud, and more specifically OpenStack, is still in its infancy. We have fewer options for features like high availability to protect against hardware failures, or live migration to move a virtual server off its host for maintenance. Sure, there are ways to achieve these things, and projects working to implement them, but we don’t have them out of the box today. Without some of the key features we’re so used to in our virtualized environments, it is hard to really transition to an environment like OpenStack in its purest form.
So where do we turn to get the enterprise features we desire? How do we design an enterprise-ready OpenStack environment? How do we manage and maintain it, given the rapid release cycle, with new features and bug fixes constantly coming out? Another driver of virtualization was the ability to consolidate our virtual machine management, allowing fewer admins to manage bigger environments.
Unless we’re an organization with a huge engineering staff, which few enterprises have in this economic environment, it is near impossible to run this pure OpenStack environment on commodity hardware. As we evolve into OpenStack architects and administrators, a huge shift will have to take place in the way we do things. We’re having to come out of our shells and talk with other community members to figure out who’s doing the newest things and who’s contributing to what. We’re having to stay ahead of the curve so we’re ready to adapt to new projects and changes, enabling the customers in our organizations to go further, faster. As cool as OpenStack may be, it’s nothing unless it can provide a competitive advantage to our businesses.
One way to manage this shift is to leverage some of the technologies we’re already familiar with under the covers. We can use distributions like Red Hat Enterprise Linux OpenStack Platform or SUSE Cloud to power our cloud, giving us the ability to update all of our controllers easily and to run them active/active, providing the enterprise reliability our core OpenStack components need. We can leverage enterprise storage systems to get features like deduplication and flash acceleration, and to provide always-on storage with NetApp Clustered Data ONTAP for our Cinder volumes. We can use enterprise storage arrays like NetApp E-Series to power our Swift environments instead of racks and racks of storage servers, eliminating the need for replica copies.
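As a rough sketch of what that looks like in practice, here is a minimal cinder.conf backend stanza for the NetApp unified driver pointed at a Clustered Data ONTAP system over NFS. The hostname, Vserver name, and credentials below are placeholders I made up for illustration; check your distribution’s documentation for the exact options your release supports:

    [DEFAULT]
    enabled_backends = netapp-cdot

    [netapp-cdot]
    # NetApp unified driver, targeting clustered Data ONTAP over NFS
    volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
    netapp_storage_family = ontap_cluster
    netapp_storage_protocol = nfs
    netapp_server_hostname = cluster-mgmt.example.com  # placeholder
    netapp_login = admin                               # placeholder
    netapp_password = secret                           # placeholder
    netapp_vserver = cinder-svm                        # placeholder
    nfs_shares_config = /etc/cinder/nfs_shares
    volume_backend_name = netapp-cdot

On the Swift side, the reason an array like E-Series can eliminate replica copies is that the array’s own RAID or Dynamic Disk Pools provide the durability Swift would otherwise get from keeping three copies, so you could build your rings with a replica count of one instead of three, e.g. swift-ring-builder object.builder create 10 1 1.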
The commitment from these and other enterprise vendors is a sign that OpenStack is starting to grow up. Through drivers, distributions, documentation, or any combination of those, enterprises are enabled to begin small-scale deployments using their existing hardware in a supportable fashion. It reminds us of a time back in the early days of virtualization when not every hardware vendor supported the new technology. As more vendors came on board, virtualization gained the ability to grow without being locked into a specific hardware specification. Years ago, we talked about the “new” virtualization tools like VMware ESX, and later Microsoft Hyper-V, the way we talk about things like OpenStack (and CloudStack) today. Eventually, they reached the next stage of maturity and were regarded as reliable, even preferred, deployment platforms. We are on the cusp of major change again with OpenStack.
The biggest stumbling block to cloud in our data centers may not be the technology, but the people and the processes. An organization could hire a large engineering staff to deploy OpenStack on commodity hardware, but if its customers don’t understand how to consume the services, there really isn’t a point. OpenStack and the cloud merely serve as a platform to enable innovation in organizations.
We can’t automate anything if we don’t have a process in place, whether it’s deploying a development environment or putting systems into production. OpenStack can naturally enable DevOps, but only if an organization is ready to make the move.
The evolution of cloud is the evolution of virtualization all over again. The question is, who will be ready for it? Who will make the big bets at the right time and grow a wildly successful business? Time will tell. In the meantime, if you feel strongly that cloud, and more specifically OpenStack, is the way of the future, there are many resources to get you going. Make sure you check out a great session from the OpenStack Summit earlier this week, Building a Cloud Career in OpenStack.
Song of the Day – The Wallflowers – One Headlight
Melissa is an Independent Technology Analyst & Content Creator, focused on IT infrastructure and information security. She is a VMware Certified Design Expert (VCDX-236) and has spent her career focused on the full IT infrastructure stack.