Create New NFS Project on Tegile

A Project on the Tegile array applies permissions and policies to a single volume or a group of volumes, which means changes made at the Project level propagate to the volumes that live inside that Project. If new IP addresses need read/write and root access to all the volumes, that can be handled at the Project level instead of modifying each export individually. However, you still have the ability to make changes at the individual volume level if that’s required.

In this setup, I’ll create a new project to host my Windows workloads in VMware. I’ll create a volume for Windows 2012 Operating System files and allow all the hosts on my NFS network read/write and root access to this volume.

1. Login to the web interface of the Tegile array
tegileproject020415-step1
2. Click on “Data”
tegileproject020415-step2
3. Click on the Pool that will host this new project
tegileproject020415-step3
4. In the “Project” window, click “Add Project”
tegileproject020415-step4
5. Enter the name of the Project, choose the Purpose, and select “NFS” for access type. Click “Next”
tegileproject020415-step5
6. Enter the Share Name, enter the number of mount points (more can be added later), and enter any Share limits or reservations. Click “Next”
tegileproject020415-step6
7. Set “NFS Sharing” to “on”. Set “Access Mode” to “Read-Write”, set “Access Type” to “IP” and enter the individual IP addresses or the subnet that will have access to this share. Check the box for “Root Access” then click “Add”. Repeat for each IP/Subnet then click “Next”
tegileproject020415-step7
8. Set your snapshot policy (if required). This can be configured at a later time as well. Click “Next”
tegileproject020415-step8
9. Review your settings and click “Finish”
tegileproject020415-step9
10. Click on the newly created Project and then you will see the volume share name and the mountpoint
tegileproject020415-step10

At this point, we just need to mount this new volume on our ESXi hosts. This can be done manually through the vSphere client on each host, or much faster through PowerCLI.

In the example above, the name of our volume is “2012_OS”, the path is our share mountpoint (/export/PDX_Windows/2012_OS), and the IP of the Tegile is 192.168.1.15. You’ll need to define the hostname as it appears in vCenter. To mount to a single host, we can use the following PowerCLI command:

New-Datastore -NFS -VMHost "Hostname" -Name "2012_OS" -Path /export/PDX_Windows/2012_OS -NfsHost 192.168.1.15

To mount to an entire cluster, we can use this command after defining the name of the cluster as it appears in vCenter:

Get-Cluster "ClusterName" | Get-VMHost | New-Datastore -NFS -Name "2012_OS" -Path /export/PDX_Windows/2012_OS -NfsHost 192.168.1.15
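
To confirm the datastore mounted everywhere before placing VMs on it, a quick check like the sketch below lists the new datastore as each host sees it (same cluster and datastore names as above; this is just a convenience check, not part of the Tegile workflow):

# Verify the new NFS datastore is visible on every host in the cluster
foreach ($esx in Get-Cluster "ClusterName" | Get-VMHost) {
    Get-Datastore -Name "2012_OS" -VMHost $esx |
        Select-Object @{N="Host";E={$esx.Name}}, Name, FreeSpaceGB
}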

Provision New Floating IP on Tegile

As I begin the process of reconfiguring my Tegile from a test/lab array into a production array, I thought it would be a great opportunity to document more of the setup and provisioning steps involved in administering the array. In our environment we are using 10GbE without configuring LACP on the switches, letting the Tegile handle network availability. Obviously, every environment is different; this is just the approach we took for this array.

These steps walk you through the process of provisioning an additional VLAN on the 10GbE interfaces and then creating a floating IP address that is owned by the node running the disk pool.

1. Login to the non-shared management IP of each HA Node
2. Login as “admin” with the correct password
tegileIP012715-step2
3. Click on the “Settings” tab and then click “Network”
tegileIP012715-step3
4. Under “Network Settings” on the left column, click on “Interface”
tegileIP012715-step4
5. Under “Physical Network Interfaces”, click on one of the 10GbE interfaces (named ixgbe2 and ixgbe3 on this array). Click the “+” to add a VLAN
tegileIP012715-step5
6. Enter the name of the VLAN (following the naming convention below) and the VLAN number, then click “OK”.
a. Our naming convention is protocol + interface number + _ + VLAN number. We are using cifs on interface “ixgbe3” and the VLAN is 100
tegileIP012715-step6
7. Click “OK” to this message about saving the config
tegileIP012715-step7
8. Repeat step 5 for the other 10GbE interface, changing the name to reflect the number of that interface.
tegileIP012715-step8
9. Click “Save” to bring these new VLANs online
tegileIP012715-step9
a. Notice that the state changes to “up” after saving
tegileIP012715-step9a
10. Now we need to assign an IP address to these interfaces. We are not using LACP, so under “IP Groups” click the “Add IP Group” button
tegileIP012715-step10
11. Click the arrow next to “Network Properties”. Enter the name, check the boxes next to the newly created VLANs we added to each interface, then enter the IP address and netmask for this new subnet. Click “OK”
a. The naming convention is ipmp + _ + protocol + filer node number. IPMP is “IP Multipathing”, cifs is the protocol, and this is node “A”, which is the first node
tegileIP012715-step11a
b. Click “OK” to this message about saving the config
tegileIP012715-step11b
12. Now we see the IPMP group has been created, but isn’t up.
tegileIP012715-step12
13. Click the “Save” button at the bottom
tegileIP012715-step13
14. Click “OK” for confirmation
tegileIP012715-step14
15. Now we can see that the interface is up
tegileIP012715-step15
16. Repeat these steps on the other node of the HA pair, changing the IP Group name to “ipmp_cifs2” and choosing a different IP address
tegileIP012715-step16
17. Back on the primary node, click on “Settings” then “HA”
tegileIP012715-step17
18. On the active resource group (we only have one, which is “Resource Group A”), click “Add Floating IP”
tegileIP012715-step18
19. Enter the shared IP address and netmask (this is a unique IP, different from either of the IP addresses entered earlier), then choose the IP Groups we created on each node. Click “OK”
tegileIP012715-step19
20. Now we have a new IP address that will be used by whichever node owns the Resource Group
tegileIP012715-step20

The steps are pretty straightforward, but can be confusing in the beginning. Our local SE from Tegile walked us through this config when we were evaluating, but it was important for me to know how to do these things on my own.

Tegile Array Replication and Restore

These days most of my replication is handled at the VM level by software designed for virtualization. While that is the case for most of my environment, I still have a few non-virtualized workloads that run on shared storage and need to be replicated in the event of a disaster at my primary location. This process was never too complex back in my days of working with NetApp, and as I continue exploring the Tegile I’m happy to say that it’s just as easy through the GUI.

Documenting this process for my non-virtual workloads would be a little difficult, so I’ve decided to demonstrate it using an NFS datastore containing a few virtual machines. The first half of this guide covers setting up the replication relationship and replicating the data. The second half is the process to actually restore that data and make it usable at your DR site.


1. Login to the web interface of the Tegile that is the replication source
2. Click on “Settings” then “App-Aware”
tegiledr111214-step2
3. Click on “Zebi Replication” on the left column
tegiledr111214-step3
4. Under the tab “Replication Target” click the “Add” button (This is adding the DR Tegile as the target array)
tegiledr111214-step4
5. Enter the name or IP of the array (the shared management IP address) and the username/password, then click “Add”. (Optionally, you can specify a port range for replication, which we won’t be doing for this documentation.)
tegiledr111214-step5
6. Once it has been successfully added it will appear in the “Replication Target” list
tegiledr111214-step6
7. Login to the web interface of the DR target Tegile, click on “Settings” then “App-Aware”, choose “Zebi Replication” on the left column and then click on “Replication Source” tab. You should see your other array listed here (The IP address will be the “management” IPs of each controller, not the shared management IP for both arrays)
tegiledr111214-step7
8. Back on the Primary Tegile (Replication source) click on “Data”
tegiledr111214-step8
9. Click on the disk pool and then the project that will be replicated
tegiledr111214-step9
10. For this documentation I’ve created a Project named “NFS_Replication” with a volume named DR_Windows with 4 VMs inside. Click on the project that will be replicated and click on the “Edit” button
tegiledr111214-step10
11. Click on “Replication” on the left column
tegiledr111214-step11
12. Click the “Add Replication” button
tegiledr111214-step12

a. Select the Target System and click “Next”
tegiledr111214-step12a
b. Select the “Target Pool” and enter a name for the “Replication Project”. Click “Next”
tegiledr111214-step12b
c. Choose which options are required and which volumes will be replicated. (This test only has one volume, DR_Windows, but you can include or exclude any volumes that exist in this project.) We’ll choose quiesce, which will perform a VMware snapshot to put the OS in a consistent state. Click “Next”
tegiledr111214-step12c
d. Choose your schedule (manual or automatic), frequency, and how many additional snapshots (restore points) will be saved on the target array. For this example we’ll do daily replication that happens at 10:49 am and we’ll keep 14 snapshots. Click “Finish”
tegiledr111214-step12d
e. Once it’s all set up, you’ll see your target array, the target pool, and the target project
tegiledr111214-step12e

13. I have 4 VMs in that datastore (DR-Test01-04). Once the scheduled time hits, we can see that snapshots are taken, then removed, for each of the VMs in that datastore (see the quick PowerCLI check after these steps).
tegiledr111214-step13
14. On the DR target array, we can see we now have snapshots available for this project. (There are two because I initiated a manual replication sync for testing first)
tegiledr111214-step14

a. To manually kick off a replica snapshot, on the source array, find the project, click on “Replication” and then click the “Play” button that says “Replicate”
tegiledr111214-step14a
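
As a quick sanity check after a quiesced replication run, the PowerCLI one-liner below (assuming the DR_Windows datastore name from this example) lists any VMware snapshots still attached to VMs in that datastore; after a successful run it should return nothing:

# List any leftover VMware snapshots on VMs in the replicated datastore
Get-VM -Datastore (Get-Datastore -Name "DR_Windows") | Get-Snapshot | Select-Object VM, Name, Created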


That is how simple it is to set up replication. Now let’s imagine we need to spin up those replicated VMs in this volume. Here is how we do that.


1. On the DR target array, click on Data, select the pool, then click on “Replica (1)” to view the replica project
tegilerest111214-step1
2. Click the “Edit” button for the NFS volume
tegilerest111214-step2
3. Click on “Snapshots” and find the snapshot you want to bring live (We’ll choose the latest version). Click the “Clone” button
tegilerest111214-step3

a. Cloning the snapshot will allow us to create a new project and NFS volume from this snapshot and spin up these VMs in DR. By doing a clone, replication can continue running uninterrupted if you are just testing rather than dealing with an actual DR event.

4. Enter a name for the new Project (DR_NFS_Replication for this writing) and a name for the mount point (/export/DR_NFS_Replication for this writing) and click “Clone”
tegilerest111214-step4
5. If successful, you’ll receive this message about the new project being created. Click “OK”
tegilerest111214-step5
6. Close the window for “Share Configuration” and click on “Local (1)” under “Projects”
tegilerest111214-step6
7. Click on the “DR_NFS_Replication” project then view the Mountpoint of the Share (/export/DR_NFS_Replication/DR_Windows). Note the “c” before the share name, which denotes it was cloned from another project
tegilerest111214-step7
8. Click the “Edit” button for the project and then click on “Sharing”
tegilerest111214-step8
9. This is where you will add the IP addresses or range of IPs that need read/write and root access to the shares in this project. The IP addresses/ranges will carry over from the source array. Our IP range is the same in DR as our lab so we’ll leave this alone.
tegilerest111214-step9
10. Connect to your DR vCenter server or ESXi hosts. Click on the host, then “Configuration”, then “Storage”
tegilerest111214-step10
11. Click “Add Storage” towards the top right
tegilerest111214-step11
12. Choose “Network File System” and click “Next”
tegilerest111214-step12
13. Enter the NFS IP address of the DR Tegile, enter the folder path (/export/DR_NFS_Replication/DR_Windows) and then enter the name of the Datastore (DR_Windows). Click “Next”
tegilerest111214-step13
14. Review the summary info and click “Finish”
tegilerest111214-step14
15. Repeat for each host that needs access to this datastore. Afterwards, right click the datastore and click “Browse Datastore”
tegilerest111214-step15
16. Inside you’ll see the 4 VMs that were located here before. Open each folder, right-click the VM’s .vmx file, and choose “Add to inventory”
tegilerest111214-step16
17. Enter the name and location for the VM and click “Next”
tegilerest111214-step17
18. Choose the Cluster or host and click “Next”
tegilerest111214-step18
19. Review the settings and click “Finish.” Repeat for each VM that needs to be added.
tegilerest111214-step19
20. Power on all the VMs and now you can run any validation tests or bring these VMs live in a DR event
tegilerest111214-step20


Obviously, the process of mounting the datastore in your DR vCenter Server and re-adding the VMs one by one would be time-consuming and tedious. When developing your DR plans, having this process scripted ahead of time (easy enough in something like PowerCLI) on the vCenter side of things would ease that burden. From the standpoint of the Tegile, this process is fairly intuitive and simple to set up. One of the things I love is that by default the data you are bringing live on the DR site is a clone, and replication continues running without being affected.
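
As a starting point, here is a minimal PowerCLI sketch of what that script could look like. It reuses the mountpoint and datastore name from this example, while the DR Tegile NFS IP, the cluster name “DR-Cluster”, and the choice to register everything to the first host are placeholders; there is no error handling, so treat it as a sketch rather than a finished runbook:

# Mount the cloned NFS export on every host in the DR cluster
$nfsIp   = "192.168.2.15"                              # NFS IP of the DR Tegile (placeholder - not given in the post)
$nfsPath = "/export/DR_NFS_Replication/DR_Windows"     # mountpoint of the cloned share
$dsName  = "DR_Windows"
$drHosts = Get-Cluster "DR-Cluster" | Get-VMHost
$drHosts | New-Datastore -Nfs -Name $dsName -Path $nfsPath -NfsHost $nfsIp

# Browse the datastore and register each .vmx file with the first DR host
$ds = Get-Datastore -Name $dsName
New-PSDrive -Name ds -PSProvider VimDatastore -Root "\" -Location $ds | Out-Null
Get-ChildItem -Path ds:\ -Recurse | Where-Object { $_.Name -like "*.vmx" } | ForEach-Object {
    # DatastoreFullPath looks like "[DR_Windows] DR-Test01/DR-Test01.vmx"
    New-VM -VMFilePath $_.DatastoreFullPath -VMHost ($drHosts | Select-Object -First 1)
}
Remove-PSDrive -Name ds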

Performing Tegile System Upgrade

One of the more nerve-racking tasks as a storage admin is performing upgrades to your storage arrays. Through the years I’ve done a few upgrades to my NetApps, and even when following directions on how to do it, I still worry that something isn’t going to go right and I’ll be left restoring a lot of data. I don’t exactly love working in the CLI either, which adds to the nerves.

While working with this Tegile HA2400 we had a software update available (2.1.2.4.140802 to 2.1.2.5.140925), and it seemed like a great time to document the process. Software updates are done through the web interface in just a few clicks on each controller. With failovers that allow for minimal interruption, I was able to perform this upgrade towards the end of the working day and we never had any application interruptions.

Below are the steps to perform this system upgrade.

1. If running in Active/Passive, login to the web interface of the passive Zebi node (default credentials are admin/tegile)
tegileupg110514-step1
2. Verify that it’s the passive node by viewing the available pools. If there are no pools running on this node, you will only see “Zebi System” as the pool name
tegileupg110514-step2
3. Click on “Settings” then “Administration”
tegileupg110514-step3
4. On the left side, click “System Upgrade”
tegileupg110514-step4
5. Click the link for “Check for Upgrades”
tegileupg110514-step5
6. If there are any available updates, they will appear next to “Update Available”
tegileupg110514-step6
7. Click the “Upgrade” button, click “Upgrade Local” and then click “OK” to confirm upgrading to the latest version
tegileupg110514-step7a
tegileupg110514-step7b
8. The installation will begin and show the status of the tasks it is performing followed by a notification that the node is rebooting.
tegileupg110514-step8a
tegileupg110514-step8b
9. After the node has rebooted, log back in to the web interface
tegileupg110514-step9
10. Click on the Node name in the top right corner to verify the new version is running
tegileupg110514-step10
11. Click the Flag icon in the top right and then the “ACK” button for the upgrade events that are generated.
tegileupg110514-step11
12. Click on the Node name again and then click “Go to peer node” (this will open a new tab to connect to the other node in the cluster)
tegileupg110514-step12
13. Click on “Settings” and then “HA”
tegileupg110514-step13
14. Click “Switch Over All Resources” and click “OK” to confirm
tegileupg110514-step14
tegileupg110514-step14b
15. Once you receive this message on controller A, all resources have been migrated
tegileupg110514-step15
16. Click on “Settings” then “Administration”
tegileupg110514-step16
17. Click on “System Upgrade” and then ensure that the “Update Available” version matches the version applied to the partner node
tegileupg110514-step17
18. Click the “Upgrade” button, then click “Upgrade Local” (note that it recognizes the peer has already been upgraded) and click “OK”
tegileupg110514-step18a
tegileupg110514-step18b
19. The current task status will display just as before, and then you’ll be notified once the node is rebooting
tegileupg110514-step19
20. After the reboot, log back in to the web interface and click the node in the top right corner to verify the version
tegileupg110514-step20
21. Click on “Settings” and then “HA”
tegileupg110514-step21
22. After the last upgrade, all the resources sitting on Controller B moved back to Controller A and now Controller B shows standby
tegileupg110514-step22

That is all there is to it. The whole process from start to finish was under 15 minutes (probably closer to 10 if I hadn’t been screenshotting the whole process). The steps for an active/active setup would be essentially the same, but you would move all the resources off one controller and onto the other prior to performing the first upgrade. Interestingly, despite not having auto failback enabled (Settings -> HA -> Advanced Options), after the upgrade completed all the resources that were on controller B moved back to controller A. During the next upgrade I will see if that happens again or if it was just a fluke this time around. I might even do that upgrade with a heavier load on the box just to see what happens.

Tegile NFS Datastore Management in vCenter

As the primary VMware and storage admin, I try to minimize the number of tools I have to use to accomplish my tasks. When it comes to provisioning and managing volumes for VMware, I prefer to do it all from within the vSphere client if possible. The VSC console for my NetApp filers has saved a lot of time over the years, but as we continue to explore our Tegile array we can see what their software has to offer.

My last post was about registering the Tegile plugin with vCenter to have this functionality available in the vSphere client. This post goes into the basic administration of NFS volumes from within the vSphere client.

Prerequisites:
1. Credentials to the Tegile web interface (default is admin/tegile)
2. Registered the Tegile plugin on your vCenter server (see the “Register vCenter Server on Tegile” section below for those steps)

Steps:
1. Login to the vSphere thick client then click on “Home” and choose “Tegile Management” under “Solutions and Applications”
tegilenfs092214-step1
2. Proceed through any security warnings and login to the Tegile interface
tegilenfs092214-step2
3. On the left you’ll see a list of all the datastores on the Tegile that have been mounted on the ESXi hosts in this vCenter. Towards the bottom, click on “Add Datastore”
tegilenfs092214-step3
4. Enter the following information and click “Create”

a. Name: Name of the datastore
b. Type: Whether block or file based (SAN or NAS)
c. Protocol: NFS, iSCSI
d. Quota: Check this box to set a max size of the volume
e. ESX/ESXi Server (Version): Check the hosts that this datastore will be provisioned to
f. Pool: The disk pool for this datastore (if multiple are available)
g. Project: The project that this datastore will be associated with
h. Purpose: The type of workload hosted on this datastore (important for block size assignment)
i. Zebi Floating IP Address: The IP each ESXi host will connect to
tegilenfs092214-step4i

5. Once the operation is complete, click “OK”
tegilenfs092214-step5
6. The new datastore has been created and mounted and appears in the list of Zebi datastores
tegilenfs092214-step6
7. Click the “More Details” button for the newly created datastore to see all the details of this volume
tegilenfs092214-step7
8. In order to resize this volume, click the “resize” button
tegilenfs092214-step8

a. Check the box for “New Share Quota” and enter the new size and press “Submit”
tegilenfs092214-step8a

9. This view will refresh and the new size will be reflected
tegilenfs092214-step9
10. I have moved a virtual machine into this datastore to test the snapshot function with quiesce enabled. Click the “Snapshot” button for the datastore
tegilenfs092214-step10
11. Enter the name of the snapshot, change “Quiesce” to “on” and click “Create”
tegilenfs092214-step11
12. You’ll receive a message that snapshot creation has been triggered. Click “OK”
tegilenfs092214-step12

a. A new task will be created to snapshot all VMs that are in that datastore
tegilenfs092214-step12a

13. Once the task to remove the virtual machine snapshot completes, click the “Refresh” button on the snapshot screen to see the new snapshot
tegilenfs092214-step13
14. To delete the snapshot, check the box next to the snapshot and press the “Delete” button
tegilenfs092214-step14

a. Click “Yes” to confirm deletion
tegilenfs092214-step14a
b. After this box disappears the snapshot is deleted
tegilenfs092214-step14b

i. *UPDATED 10/9/14* There was a bug in version 2.1.2.4.140802 of the Zebi software that prevented the confirmation box from going away after the snapshot deletion completed. Clicking “No” would allow you to return to the snapshot list without any errors. In version 2.1.2.5.140925 this has been fixed, and the confirmation box now disappears after the snapshot deletion completes.

Those are the basic functions you can perform from within the plugin. In a future release I would like to see the ability to create full snapshot schedules from the plugin. Since I am the one who is responsible for VMware and storage in our environment, it’s simple for me to create the schedule on the web interface of the Tegile array, but that is not always the case. Another function I would like to see is mounting existing datastores on new hosts without having to go through the “Add Storage” process in vCenter for each host.
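
In the meantime, mounting an existing datastore on additional hosts is easy enough to handle in PowerCLI instead of clicking through “Add Storage” on each host. Here is a minimal sketch, reusing the “2012_OS” export from the NFS Project section earlier in this post; the cluster name is a placeholder:

# Mount an existing Tegile NFS export on any host in the cluster that doesn't already have it
$dsName  = "2012_OS"
$nfsPath = "/export/PDX_Windows/2012_OS"
$nfsIp   = "192.168.1.15"
foreach ($esx in Get-Cluster "ClusterName" | Get-VMHost) {
    if (-not (Get-Datastore -Name $dsName -VMHost $esx -ErrorAction SilentlyContinue)) {
        New-Datastore -Nfs -VMHost $esx -Name $dsName -Path $nfsPath -NfsHost $nfsIp
    }
}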

I’m confident the functionality will get there and I’ll continue to build my list of feature requests for the Tegile team.

Register vCenter Server on Tegile

After 7 years of NetApp administration and implementation, I have started looking for a new storage vendor that can “do it all” like NetApp has been able to do. Protocol support is a big deal in each of the environments I’ve worked in, but performance (IOPS and low latency) are two things my existing NetApps haven’t been able to provide. The idea of adding capacity just to add performance is an antiquated way of thinking, and NetApp just hasn’t been able to keep up with the evolving storage market.

I am starting a short series on Tegile setup and administration. Tegile came to us a couple of months ago and has impressed us from the very first conversation and all throughout our sizing and implementation. The box is simple to set up and administer, and its performance is crushing our current NetApp.

This guide walks you through connecting the Tegile array to your vCenter server, installing the NFS VAAI Plugin, and setting the Tegile recommended values on the ESXi hosts. Once this is completed, you’ll be able to provision new volumes, resize existing volumes, create VM-aware storage snapshots as well as view storage performance of your VMs all from within the vSphere client.

Prerequisites:
1. Admin credentials to the Tegile and vCenter server
2. Dedicated service account in vCenter (I created an account called “ZebiAdmin”)
3. Root password for the ESXi hosts (required to set recommended values)


Steps:
1. Connect to the web interface of the storage array and login with Admin credentials

a. Default username: admin
b. Default password: tegile

vctegile091614-step1
2. Click on “Settings” then choose “App-Aware”
vctegile091614-step2
3. Click “Add vCenter/ESXi Host” towards the bottom
vctegile091614-step3
4. Enter the following information:

a. Host Name/IP address: Host name or IP of the vCenter server
b. Username: User account with admin access to vCenter
c. Password: Password for user account
d. Enable Quiesce: This needs to be checked if quiescing will be used at all (a VMware snapshot is taken during the storage snapshot process for OS consistency). Can be toggled per snapshot job

vctegile091614-step4d
5. Click “Test” to see if the connection is successful. If it is, the “Save” button will turn solid blue and can be clicked
vctegile091614-step5
6. Click “OK” to confirm enabling of quiesce on VMware
vctegile091614-step6
7. Once saved, click the green “Register” button to add the Tegile plugin to vCenter
vctegile091614-step7
8. Once the registration is successful, click “OK”
vctegile091614-step8
9. Login to the vSphere thick client (not the web client). Click the “Home” button then click on “Tegile Management” under “Solutions and Applications” (Click yes to proceed through any certificate warnings)
vctegile091614-step9
10. Login to the Tegile web interface (Likely the same username and password as in step 1)
vctegile091614-step1
11. In this interface you’ll see a list of Datastores on the Tegile that are mounted on your ESXi hosts as well as real-time stats of your array, datastores, and VMs.
vctegile091614-step11
vctegile091614-step11-2
12. Click on “ESX Settings”
vctegile091614-step12
13. Select all the ESXi hosts and then click the Green Arrow icon to install/upgrade the VAAI NFS plugin on these hosts
vctegile091614-step13
14. After the install completes (may take 2-3 minutes), click the “Configuration” button for each host
vctegile091614-step14
15. Login to the ESXi host (likely “root” credentials)
vctegile091614-step15

a. Click “Yes” to enable SSH on this host if it isn’t already enabled
vctegile091614-step15a

16. NFS.MaxQueueDepth should be set to “32” and the rules for iSCSI and FC can be installed in this location. Click “Save” to enable these changes

17. After the NFS VAAI plugin has been installed and settings saved, reboot the host. Repeat for each host in vCenter.

a. The settings changes are immediate, but the NFS VAAI plugin requires a host reboot


The process is simple and straightforward. This same process on the NetApp requires the Virtual Storage Console plugin to be installed and configured on a separate server, then registered on the vCenter side with much more configuration. Also, installing the NetApp NFS VAAI plugin on the hosts is done through vCenter Update Manager, and the plugin has to be downloaded separately from the NetApp support site. That being said, the Tegile solution is lacking some of the polish that NetApp provides. I would like to see the recommended values set on all ESXi hosts at once, as opposed to one host at a time. In addition, I’d like Tegile to change the NFS.MaxVolumes default value from 8 to something much higher, as NetApp does (256).
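
Until the plugin can push recommended values to every host at once, something like the PowerCLI sketch below will apply the two NFS advanced settings mentioned above across a whole cluster. The cluster name is a placeholder, 32 is the NFS.MaxQueueDepth value from step 16, and 256 for NFS.MaxVolumes reflects my own preference rather than an official Tegile recommendation; the VAAI plugin install itself still has to go through the Tegile plugin.

# Apply the NFS advanced settings discussed above to every host in a cluster
$settings = @{ "NFS.MaxQueueDepth" = 32; "NFS.MaxVolumes" = 256 }
foreach ($esx in Get-Cluster "ClusterName" | Get-VMHost) {
    foreach ($name in $settings.Keys) {
        Get-AdvancedSetting -Entity $esx -Name $name |
            Set-AdvancedSetting -Value $settings[$name] -Confirm:$false
    }
}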