When it comes to setting up a new VMware vSphere environment, there are many things that must be considered. While it is not hard to install VMware products, making a few decisions up front can make things a lot easier later on. One key task in this process is to install ESXi, or the hypervisor itself. In fact, one of the first things you will do is to install ESXi, since you need a place to install the VMware vCenter Server Appliance.
Beyond the how of installing ESXi, you will have to ask yourself, “Where will I install ESXi?” Now, let’s take a look at the various options we have when it comes to installing the ESXi component of a VMware vSphere environment.
Here’s the deal: when we talk about installation locations for ESXi, we often refer to them as boot locations. I am going to explain the ones I have commonly encountered, along with some of the pros and cons of each.
Install ESXi on Local Disk (Boot ESXi From Local Disk)
When I say local disk, I mean hard disk drives (HDDs) sitting in a drive bay in the server. This was something I did almost all of the time, about ten years ago. Mirroring a pair of drives protects against a single drive failure. The downside is that those drives and bays are expensive real estate, and quite honestly, there are better things to do with them on current versions of ESXi, such as vSphere Flash Read Cache, or host cache for swapping (not that you would ever be in a situation where you were swapping, right?). This is especially true of blade servers, which may only have two valuable drive bays. You could also boot from an SSD in a drive bay, of course, but those are even more expensive than HDDs, so I’m not going to dwell on them. If you are considering booting from an SSD, you may want to take a look at your budget and see if the money you would spend on those boot SSDs could go toward something else, or keep the SSDs for caching and boot from SD cards instead.
Install ESXi on a SD Card or USB Stick (Boot ESXi From SD Card/USB Stick)
Booting from SD cards is one of my favorite methods of booting ESXi. These days SD cards are plenty big and plenty cheap, but this wasn’t the case years ago, so this boot method wasn’t as common, even though it was supported. You can mirror SD cards just like you would HDDs, and keep those drive bays available for other things. I’ve seen far fewer SD card failures in hosts than HDD failures (this is just my anecdotal evidence, of course; your experience may be the opposite). The only downside is that it can require some additional configuration. The ESXi scratch partition needs to be placed on persistent storage, or you’re going to see lots of annoying error messages. Don’t worry, there are lots of easy ways to set this up, but you’re going to need some sort of persistent storage, like a VMFS or NFS volume, to put it on, which of course you would have anyway for your VMs to live on. For information on how to do this, see VMware KB 1033696 – Creating a persistent scratch location for ESXi 4.x/5.x/6.x.
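As a rough sketch of what that KB walks through, configuring a persistent scratch location from the ESXi Shell looks something like this (the datastore name “datastore1” and hostname “esx01” are placeholders; use your own):

```shell
# Run from the ESXi Shell or over SSH on the host.
# "datastore1" and "esx01" are placeholder names.

# Create a unique scratch directory on persistent storage (VMFS or NFS)
mkdir /vmfs/volumes/datastore1/.locker-esx01

# Point the host at it; the change takes effect after a reboot
esxcli system settings advanced set \
    -o /ScratchConfig/ConfiguredScratchLocation \
    -s /vmfs/volumes/datastore1/.locker-esx01

# Verify the configured value
esxcli system settings advanced list \
    -o /ScratchConfig/ConfiguredScratchLocation
```

Give each host its own directory; two hosts sharing one scratch location will step on each other’s logs.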
Install ESXi on a SAN LUN (Boot ESXi from SAN)
Alternatively, you can boot your ESXi hosts from a SAN LUN. Once you configure your host to boot from SAN, which you would do in whatever hardware vendor management system you are using, you install ESXi as you normally would. The only downside is that it requires extra up-front configuration, since you need to set up LUNs and zoning. However, you can use iSCSI for boot from SAN, which removes the need for zoning. One thing I often see cited as a negative of boot from SAN is that it creates a dependency on an external storage array. While this is true, most often this is also the array hosting your VMs, so if you completely lose your array, you have much bigger problems than booting your hosts, since you can’t run your VMs anyway. While this is something to consider, I don’t think it negatively impacts the decision to boot from SAN.
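Once a host is up, you can sanity-check which device it actually booted from; a quick look from the ESXi Shell (assuming ESXi 5.x or later) might be:

```shell
# Show the device ESXi booted from; works whether the host
# booted from local disk, SD/USB, or a SAN LUN
esxcli system boot device get

# The boot LUN also shows up in the regular device list,
# alongside any other SAN devices presented to the host
esxcli storage core device list
```

This is handy for confirming that the host really came up on the intended boot LUN rather than a leftover local install.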
Boot ESXi From Air (Use vSphere Auto Deploy for ESXi)
While Auto Deploy is more of an install method than a boot method, it is still worth mentioning, specifically stateless caching. vSphere Auto Deploy is essentially a fancy way to PXE boot your ESXi hosts. There’s much more to it, but it uses a series of rules to deploy ESXi, and then configures it with Host Profiles or other customizations to your desired specifications. This, however, depends heavily on the underlying infrastructure being up and running. With stateless caching, you still allocate a boot disk for ESXi to protect against one of these component failures: even if some infrastructure component fails, your host will boot from the locally cached image instead of the Auto Deploy image. There is also a stateful install method, which uses Auto Deploy for the initial deployment of an ESXi host, after which it boots from disk like a normal install. In vSphere 6.5, a graphical user interface was added for Auto Deploy, reducing complexity. This method requires the most up-front work, but it is really a set it and forget it model.
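To give a feel for those rules, here is a minimal PowerCLI sketch of an Auto Deploy setup; the depot path, image profile, host profile, cluster, and rule names are all placeholder assumptions, and it presumes you are already connected to a vCenter with Auto Deploy enabled:

```powershell
# Assumes PowerCLI connected to vCenter (Connect-VIServer)
# and the Auto Deploy service running. All names are placeholders.

# Make an ESXi offline bundle available as an image source
Add-EsxSoftwareDepot C:\depots\ESXi-offline-bundle.zip

# Create a rule: hosts matching the pattern get this image profile,
# host profile, and cluster
New-DeployRule -Name "ProdHosts" `
    -Item "ESXi-6.5.0-standard", "ProdHostProfile", "ProdCluster" `
    -Pattern "vendor=Dell Inc."

# Activate the rule so new PXE-booting hosts pick it up
Add-DeployRule -DeployRule "ProdHosts"
```

The pattern is what makes this scale: any new host from that vendor that PXE boots against Auto Deploy gets the same image and configuration with no per-host work.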
There are many different ways for your ESXi hosts to boot, and many different places to install ESXi. The important thing is to pick a method that matches your budget and your use case. For example, if you have a very small environment, it may just make sense to boot from an SD card, but with a large environment you may want to set up Auto Deploy to handle your installation and boot of ESXi. Now that the boot location has been determined, ESXi needs to be installed. Stay tuned for a post on different methods of installing ESXi.