
#NAS4LIFE

I’m a NAS girl when it comes down to it.  Before I did the whole SaaDJ (Storage as a Day Job) thing, I was a VMware architect and administrator with a huge environment.  My previous knowledge of storage basically came from what I saw on the VMware side of things.  Please note, the following are my PERSONAL experiences and preferences.


(Sometimes I feel like the NAS portion of my environment was like driving one of these)

My environment was a pretty good mix of SAN and NAS, with most of the new stuff going in as NAS, and by NAS I mean NetApp controllers serving NFS.  In fact, the magic of VMware and NFS was the main driver for me becoming a NetApp SE; I had a fantastic experience with the product.

Let me first talk about my most hated experience with SAN.  The thing that happened 9 times out of 10 when I was on call.  LUNs filling up.  Yuck.  I have shivers down my spine just thinking of it.  Someone would thin provision VMs on the LUN and keep deploying because, hey look, there’s space; then the app would get turned over and within a week, boom.  Donezo.  The LUN would fill up, the VMs would pause, and nothing would be able to boot up.  Then I’d have to get downtime for the VMs that were actually working to shut them down, so I could migrate to another LUN.  I hated this.
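
To put made-up numbers on it: picture a 300GB LUN with fifteen VMs on it, each with a 40GB thin provisioned disk.  That’s 600GB promised against 300GB of real space, so everything looks fine right up until those disks start filling in, and then the datastore is out of room with zero warning to whoever is still deploying.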

On the other side, with NFS, I could have my volumes grow automatically when they hit a certain threshold.  Never had to deal with this issue there.  It was glorious.
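
On the NetApp side, that’s just the volume autosize setting.  A rough sketch of what it looks like in clustered Data ONTAP (the vserver and volume names here are made up, and the exact options vary by release, so check your version):

    ::> volume autosize -vserver vs1 -volume vmware_ds01 -mode grow -maximum-size 4TB -grow-threshold-percent 85

Once the volume crosses that threshold it quietly grows itself, up to the cap, instead of pausing every VM on the datastore.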

Now, what about storage queue depth?  Cringe.  Yeah, I said it, and I also realize things are somewhat better in this arena now, but it still caused me many, many nightmares.  One of the earliest lessons I learned in my early VMware days (around ESX 3.0, with a smattering of 2.5.4) is that VMware hosts do NOT like their storage messed with.  Ever.  They will totally throw a fit and make you miserable until you fix it.

I had several issues where the SAN storage array was hammered by something else, and it ticked off my sweet little VMware hosts.  Things started queuing, and we began to cry, because we knew it would be a long night (or a long couple days), which it was.  Probably one of the most frustrating things was having our environment impacted by someone else’s rogue application.
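
If you’ve never had to chase this down, the quick and dirty way to watch it from the host is esxtop (this is from memory, so treat the field names as approximate):

    esxtop    (press 'd' for the disk adapter view, or 'u' for the device view)

Keep an eye on AQLEN (the adapter queue depth), QUED (commands sitting in the queue), and DAVG/cmd versus KAVG/cmd.  When QUED is nonzero and KAVG starts climbing, the host is queuing I/O instead of handing it to the array, and that’s when the long nights start.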

Guess what, no queue depth to worry about with NFS datastores…and with Clustered Data ONTAP I can set quality of service, so I can make sure those rogue apps don’t impact me.  That’s one of my favorite features.  No more getting impacted by all the rogue developers who didn’t do anything all year and are now trying to cram all their testing into December!  Yes, that happened too.  Again, shared SAN, and my hosts got mad.
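
For the curious, the Clustered Data ONTAP side of that is a QoS policy group.  Roughly, with hypothetical names and the 8.2-era CLI (double-check against your release):

    ::> qos policy-group create -policy-group dev_limit -vserver vs1 -max-throughput 1000iops
    ::> volume modify -vserver vs1 -volume dev_ds01 -qos-policy-group dev_limit

Cap the developers’ volume like that, and the December testing frenzy stays in its own lane instead of starving everyone else’s I/O.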

What about when the deployment team wanted to deploy a HUGE VM?  Now I know in VMFS-5 this problem went away, but back in the day our LUNs were 300GB with a 2MB block size, which meant the biggest virtual disk I could have was 512GB…even if I deployed a bigger LUN.  So the solution to that lovely issue was to get all sorts of fun approvals to deploy a non-standard LUN, then format it with a bigger block size.  Which took a while…because you know, zoning.  People weren’t happy about that.
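
For reference, on VMFS-3 the block size you picked at format time capped the size of any single file (read: VMDK) on that datastore, no matter how big the LUN was.  Roughly:

    1MB block size -> 256GB max file
    2MB block size -> 512GB max file
    4MB block size -> 1TB max file
    8MB block size -> 2TB max file

(Give or take a little overhead on each limit.  VMFS-5 moved to a single 1MB block size and dropped the per-block-size file cap, which is why the problem went away.)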

Or you know, I could just direct them to the new cluster backed by NFS volumes.  NFS doesn’t really care how big your file is.  I can’t remember the reason anymore, but there was a reason they had to stay in that specific SAN-based ESX cluster.  It was a bummer.

Those are just a couple of reasons I love NFS and would use it over anything LUN based any day of the week.  No zoning, no worrying about Fibre Channel switches…because when those puppies were out of ports, we all had a bad time (yeah yeah, I know, FCoE and Nexus, but I’ll still take my NFS, thanks).  No HBAs.  I can just fill up my host with 10 gig Ethernet…I can use my existing IP network if I want to, and really, It. Just. Works.

I suppose I need to be fair though.  I did experience an NFS storage issue once.  Someone unplugged the 10 gig connections from the NAS in the data center.  Oops.

Guillaume

Tuesday 19th of August 2014

NFS wasn't always "fun" with VMware. Before ESXi 5.x, there were some performance issues due to how it was implemented by VMware.

But I also think that NFS storage will grow a lot compared to iSCSI or FC environments (no more SCSI reservation issues \o/)

However, you still have to be careful to get correct performance, especially if you don't choose a storage vendor but a DIY NAS:
* Linux's NFS stack tends not to be that stable, for instance
* You have to start enough server threads to handle the load
* You have to worry about which FS you use to store your VMs, and deal with the related bugs (i.e. a full XFS partition creates a deadlock that impacts all data accesses, even on other partitions -- most would use ZFS anyway, I guess)
* You could have performance issues if you only have one NFS connection between your host and the NAS
* And everything that happens when you share the data with the storage network: QoS, (CEE), ...

Never used NetApp NAS, but I was quite displeased by their management interfaces... many different ones, many licenses (performance tracking is one of them, and quite expensive), ...

Josh

Sunday 22nd of June 2014

I only had to deploy NFS once to realize that all the time I spent implementing block storage was for naught.