Using Ansible to Configure NetApp ONTAP Systems

Recently, I had a need to configure some ONTAP Select systems.  I worked at NetApp for over five years, so I have configured my share of both ONTAP and SolidFire systems.  After I configured one manually, I really didn’t want to do it again.  I found a great getting started with Ansible series on thePub, and wrote my first Ansible playbooks to configure various components of my NetApp.

Other than the occasional line of PowerCLI, I do not really share much about development or coding on this blog.  Why?  Simple.  It isn’t really my thing, and I don’t think I am very good at it.  The driver behind me finally automating something is usually that I’m just plain sick of doing it manually.

Why Ansible and NetApp?

I was on the phone sharing my screen with a couple of colleagues yesterday, and one of them asked me why I was using NetApp and Ansible together.

This is a great question that really made me think.  First and foremost, I didn’t want to configure ONTAP manually anymore.  Secondly, I wanted to try something new.

I am the farthest thing from a developer you have ever met.  I studied Electrical Engineering in college, and really only had to deal with a little bit of C++ and Assembly language.  

Sure, I use PowerShell for a bunch of things when it comes to VMware vSphere, but only because I have built that code up over the years as I needed it.

I also began to use Terraform to build VMs for testing in my lab this year, but I’m not a developer, and I don’t like coding most of the time.  I figured I would give Ansible a try since I had never used it and had already broadened my horizons with Terraform.

Surprisingly, I really did like Ansible, so I continued to use it for configuring my NetApp devices.  I like it so much that I started looking at configuring some of my vSphere hosts with it, but I already have such a vast library of PowerShell for that, I did not really want to spend the time figuring it out.

Are NetApp and Ansible Hard to Use Together?

I went from knowing nothing about Ansible to two fully configured and functional ONTAP Select systems in about four hours.  

I thought this was pretty good, though you may think otherwise.  I also already knew how to configure a NetApp, so it was just a matter of figuring out how to do what I wanted with Ansible, not learning how to configure a NetApp from nothing.

I found Ansible much easier to use and understand compared to Terraform for example.  I know they are two different things, but they were my two forays into automation this year so they are really the only things I can compare besides PowerShell/PowerCLI which I have been using for a long time.

I can’t say enough about the guide on NetApp’s thePub.io for getting started with Ansible and NetApp.  That is what I used to get started, and it is what these examples are based on.  It has everything you need!

Understanding How Ansible Works

Before you get started, there are a couple of things you should know to save you some time.

First and foremost, you will need a Linux machine to run Ansible; there isn’t a native Windows option.  I simply deployed a Photon OS VM and installed Ansible on it.

Even if you aren’t the best with Linux, trust me, you can handle this.
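
Installing Ansible itself is quick.  The exact steps vary by distribution, but on my VM I used pip, something along these lines (the na_ontap modules also need the netapp-lib Python package installed alongside Ansible in order to talk to ONTAP):

pip install ansible netapp-lib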

Ansible playbooks are written in YAML, so about the only thing the syntax really cares about is how things are indented.  You don’t have to worry about semicolons or braces or anything like that.
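
Here is what that indentation pattern looks like for a single task, trimmed out of the playbook below just for illustration:

- name: Create Aggregates        # a task is a list item, so it starts with a dash
  na_ontap_aggregate:            # the module name is indented two spaces
    state: present               # module parameters are indented two more
    name: aggr1

Get the indentation wrong and Ansible will refuse to parse the file, but that is about the extent of the syntax rules you need to memorize.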

I wrote my playbooks using Notepad++ on Windows, then used WinSCP to transfer the files to my Linux VM.  There’s no fancy GitHub or anything like that being used here.

The syntax for running Ansible Playbooks is also easy:

ansible-playbook playbookname.yml
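
A couple of optional flags are worth knowing about before you point a playbook at a real system.  You can have Ansible validate the YAML without executing anything, or add verbosity to see more detail about each task:

ansible-playbook playbookname.yml --syntax-check
ansible-playbook playbookname.yml -vvv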

I’m going to put a couple of my example playbooks here.  Are they a bit of a mess?  Anyone who is actually good at Ansible may tell me yes, but they work and have gotten my systems configured, which is all I really needed.  I’m not going for style points or best practices here, just working systems.

Ansible and NetApp Example 1 – From a Blank ONTAP System to iSCSI

Since I really decided I wanted to suffer, I configured my ONTAP system for iSCSI use.  I have nothing against iSCSI, but I do personally prefer NFS.

This playbook completely configures your NetApp, from creating aggregates, LIFs, and an SVM to adding your iSCSI initiators to your igroup.

In each of my playbooks, I direct Ansible to a variables file.  This is because I want to re-use the code to configure multiple systems very similarly.

In this case, my variables file is called ONTAPvars.yml and contains the following:

hostname: "X.X.X.X"
username: "admin"
password: "Password!"
vserver: NetApp_iSCSI
iscsilifaddress: X.X.X.X

This is pretty simple and self-explanatory, but it allows me to simply change this file to build more systems, which is great.  Make sure to change all of these variables to fit your environment.  X.X.X.X represents an IP address.
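
If you would rather not edit the playbook’s vars_files path for every system, you can also feed a different variables file in at run time as extra vars, which take precedence over the file referenced in the playbook.  A quick sketch, assuming a second (hypothetical) variables file named ONTAPvars-site2.yml:

ansible-playbook playbookname.yml -e "@ONTAPvars-site2.yml"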

Now, for the playbook itself, which I called ONTAP.yml:


---
- hosts: localhost
  gather_facts: false
  vars:
    login: &login
      hostname: "{{ hostname }}"
      username: "{{ username }}"
      password: "{{ password }}"
      https: true
      validate_certs: false
  vars_files: /home/playbooks/ONTAP1/ONTAPvars.yml
  tasks:
    - name: Assign unowned disks
      na_ontap_disks:
        node: ontap-select-2
        hostname: "{{ hostname }}"
        username: "{{ username }}"
        password: "{{ password }}"
        <<: *login
    - name: Create Aggregates
      na_ontap_aggregate:
        state: present
        service_state: online
        name: aggr1
        disk_count: 1
        hostname: "{{ hostname }}"
        username: "{{ username }}"
        password: "{{ password }}"
        <<: *login
    - name: Create SVM
      na_ontap_svm:
        state: present
        name: "{{ vserver }}"
        root_volume: vol1
        root_volume_aggregate: aggr1
        root_volume_security_style: mixed
        hostname: "{{ hostname }}"
        username: "{{ username }}"
        password: "{{ password }}"
        <<: *login
    - name: Create interface
      na_ontap_interface:
        state: present
        interface_name: iscsi-705
        home_port: e0c
        home_node: ontap-select-2
        role: data
        protocols: iscsi
        admin_status: up
        firewall_policy: mgmt
        address: "{{ iscsilifaddress }}"
        netmask: 255.255.255.0
        vserver: "{{ vserver }}"
        hostname: "{{ hostname }}"
        username: "{{ username }}"
        password: "{{ password }}"
        <<: *login
    - name: Create iscsi service
      na_ontap_iscsi:
        state: present
        service_state: started
        vserver: "{{ vserver }}"
        hostname: "{{ hostname }}"
        username: "{{ username }}"
        password: "{{ password }}"   
        <<: *login
    - name: Create iSCSI Igroup
      na_ontap_igroup:
        state: present
        name: iGroupName
        initiator_group_type: iscsi
        ostype: vmware
        initiators: iqn,iqn,iqn
        vserver: "{{ vserver }}"
        hostname: "{{ hostname }}"
        username: "{{ username }}"
        password: "{{ password }}"
        <<: *login
    - name: Create FlexVol
      na_ontap_volume:
        state: present
        name: iSCSIVol
        is_infinite: False
        aggregate_name: aggr1
        size: 150
        size_unit: gb
        junction_path: /iSCSIVol
        vserver: "{{ vserver }}"
        hostname: "{{ hostname }}"
        username: "{{ username }}"
        password: "{{ password }}"  
        <<: *login    

I think the name field of each task explains what it does pretty well.
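
One pattern worth calling out: the login: &login block at the top of the playbook is a YAML anchor, and the <<: *login line in each task merges those connection settings (hostname, username, password, https, validate_certs) into that task.  Because keys listed explicitly in a task win over merged ones, the per-task hostname, username, and password lines are technically redundant, and each task could be trimmed to something like this (a sketch of the same Create Aggregates task, with no change in behavior):

    - name: Create Aggregates
      na_ontap_aggregate:
        <<: *login
        state: present
        service_state: online
        name: aggr1
        disk_count: 1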

Remember, this is my first ever Ansible playbook.  There are a bunch of things I could have done better, like:

  • I hard-coded the node name.  No clue why.  That was silly.
  • I hard-coded the volume name, because I wanted it to be the same on every system.  You could easily use a variable for this if you wanted different volume names per system.
  • In the Create iSCSI Igroup task, I also hard-coded the igroup name and the initiators because I wanted them to be the same on all of my ONTAP systems.  Be sure to paste your own initiators in that task after initiators: (or move them into the variables file, as in the sketch after this list).
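
If you do want the initiators in the variables file instead, a minimal sketch would be to add a line like this to ONTAPvars.yml (the IQNs here are made-up placeholders, and the variable name is just my choice):

initiators: "iqn.1998-01.com.vmware:esxi-01,iqn.1998-01.com.vmware:esxi-02"

Then reference it in the Create iSCSI Igroup task with initiators: "{{ initiators }}", the same comma-separated format the task already uses.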

Ansible and NetApp Example 2 – From a Blank ONTAP System to NFS

In this example, I build out a complete ONTAP system, but this time with NFS.

The basics are the same as the first example, with a couple of changes for NFS, such as:

  • Creating the export policy
  • Creating a management interface for the NFS SVM
  • Creating a vserver admin account for the NFS SVM

I also went much heavier on the variables this time to make it easier to deploy.  Some things I didn’t add to variables since I always deploy them the same way, but those are easily modified.

In this case, my variables file is called ONTAP-master-2-vars.yml and contains the following:

hostname: X.X.X.X
username: admin
password: Password!
vserver: NetApp_NFS2
node: ONTAPSelectD-1
nfslifaddress: X.X.X.X
nfsmgmtaddress: X.X.X.X
nfsvolname: NFSVol2
junction: /NFSVol2
vsadminpassword: Password!

Where X.X.X.X are IP addresses.

Next is the Ansible playbook.  I named this playbook ONTAP-master-2.yml.


---
- hosts: localhost
  gather_facts: false
  vars:
    login: &login
      hostname: "{{ hostname }}"
      username: "{{ username }}"
      password: "{{ password }}"
      https: true
      validate_certs: false
  vars_files: /home/playbooks/Node2/ONTAP-master-2-vars.yml
  tasks:
    - name: Assign unowned disks
      na_ontap_disks:
        node: "{{ node }}"
        hostname: "{{ hostname }}"
        username: "{{ username }}"
        password: "{{ password }}"
        <<: *login
    - name: Create Aggregates
      na_ontap_aggregate:
        state: present
        service_state: online
        name: aggr1
        disk_count: 1
        hostname: "{{ hostname }}"
        username: "{{ username }}"
        password: "{{ password }}"
        <<: *login
    - name: Create NFS SVM
      na_ontap_svm:
        state: present
        name: "{{ vserver }}"
        root_volume: vol_nfs_root
        root_volume_aggregate: aggr1
        root_volume_security_style: mixed
        hostname: "{{ hostname }}"
        username: "{{ username }}"
        password: "{{ password }}"
        <<: *login
    - name: Create interface for NFS
      na_ontap_interface:
        state: present
        interface_name: NFS-704
        home_port: e0b
        home_node: "{{ node }}"
        role: data
        protocols: nfs
        admin_status: up
        firewall_policy: data
        address: "{{ nfslifaddress }}"
        netmask: 255.255.255.0
        vserver: "{{ vserver }}"
        hostname: "{{ hostname }}"
        username: "{{ username }}"
        password: "{{ password }}"
        <<: *login
    - name: Create interface for NFS Mgmt
      na_ontap_interface:
        state: present
        interface_name: NFS-mgmt
        home_port: e0a
        home_node: "{{ node }}"
        role: data
        protocols: none
        admin_status: up
        firewall_policy: mgmt
        address: "{{ nfsmgmtaddress }}"
        netmask: 255.255.255.0
        vserver: "{{ vserver }}"
        hostname: "{{ hostname }}"
        username: "{{ username }}"
        password: "{{ password }}"
        <<: *login 
    - name: create vsadmin for NFS 
      na_ontap_user:
        state: present
        name: vsadmin
        applications: ssh
        authentication_method: password
        set_password: "{{ vsadminpassword }}"
        lock_user: True
        role_name: vsadmin
        vserver: "{{ vserver }}"
        hostname: "{{ hostname }}"
        username: "{{ username }}"
        password: "{{ password }}"
        <<: *login 
    - name: Create NFS service
      na_ontap_nfs:
        state: present
        service_state: started
        nfsv3: enabled
        vserver: "{{ vserver }}"
        hostname: "{{ hostname }}"
        username: "{{ username }}"
        password: "{{ password }}"   
        <<: *login
    - name: Create ExportPolicy vSphere
      na_ontap_export_policy_rule:
        state: present
        name: vSphere_All
        vserver: "{{ vserver }}"
        client_match: 0.0.0.0/0
        ro_rule: any
        rw_rule: any
        protocol: nfs
        super_user_security: any
        allow_suid: true
        hostname: "{{ hostname }}"
        username: "{{ username }}"
        password: "{{ password }}"
        <<: *login   
    - name: Create FlexVol for NFS
      na_ontap_volume:
        state: present
        name: "{{ nfsvolname }}"
        is_infinite: False
        aggregate_name: aggr1
        size: 30
        size_unit: gb
        junction_path: "{{ junction }}"
        policy: vSphere_All
        vserver: "{{ vserver }}"
        hostname: "{{ hostname }}"
        username: "{{ username }}"
        password: "{{ password }}"  
        <<: *login       

What Happens When You Run An Ansible NetApp Playbook?

After you run your Ansible playbook against your NetApp, you should see something like this:

[Image: Ansible NetApp config playbook output]

If one of your tasks fails, you can just fix it and run your playbook again, which makes Ansible code pretty simple to debug and work with.

Ansible’s error output will also usually point you at exactly what you did wrong.
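
One more flag that comes in handy when a playbook dies partway through: ansible-playbook can resume from a specific task instead of starting over from the top.  For example, using one of the task names from the playbooks above:

ansible-playbook ONTAP.yml --start-at-task "Create FlexVol"

Since rerunning the whole playbook also works fine, as mentioned above, this just saves a little time.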

Coming Soon: NetApp and Ansible…

I have more to do with my ONTAP Select systems, so stay tuned for more of my terrible Ansible playbooks, like one to configure an NFS SVM.  My point in sharing this is that Ansible is actually an easy and accessible way to configure your NetApp ONTAP systems in an automated manner, even if you are not a programmer.