Veeam v9 – New Feature Announcements

While the need for backups hasn’t changed, how you use those backups has, and so has the speed at which you’re expected to recover data. As the cost of downtime continues to grow, restoring an entire server just to recover one file or a handful of files won’t cut it. Your backups need to complete quickly and restore even faster.

The improvements in Veeam v9 do just that. Veeam has been introducing faster ways to back up and restore (while limiting the impact on production virtual machines during backups) for years, and v9 is no exception. There are a few new features I want to touch on that address pain points I’ve experienced in my own environments.

1. Backups from SnapMirror/SnapVault Destinations
As a former NetApp admin, I love the idea of minimizing the effect of backups on my virtual machines. By enabling backups from SnapMirror destinations, you can get your VMs offsite using the built-in software on your NetApp array, and then create off-SAN backups that aren’t limited by the SnapMirror retention schedule your space constraints dictate.

2. Direct NFS Backup Mode
Direct SAN access has been in Veeam Backup & Replication forever, but backing up VMs on NFS datastores was a different story: a proxied connection through an ESXi host was required. In v9, Veeam’s engineers wrote a brand new NFS client that connects directly to your NFS volumes and backs up VMs without the additional host impact, latency, or speed constraints.

3. Per-VM Backup File Chains
As the size of your backup job grows, managing that single backup file becomes painful. As your backup repository begins to fill up, you’re left migrating the entire backup file to a new repository. With per-VM backup file chains, one job can still be created for all of your virtual machines, but each VM gets its own file chain. This feature is especially useful with the next feature I’ll talk about.

4. Scale-out Backup Repository
Backup repository management has always been one of the largest pain points when managing Veeam backup jobs. In my first Veeam setup I was limited to 2TB LUNs on my backup server and had to create 8 of them to store my backups. Because backup jobs couldn’t span repositories, I was creating individual jobs tied to specific repositories and then rebalancing as repositories began to fill. The Scale-out Backup Repository feature allows a virtual backup repository to be created on top of your existing physical repositories. Now fewer jobs need to be created, and you’re able to take advantage of all the space in each repository. Thanks to Luca Dell’Oca for clarifying that maintenance mode and evacuation are also supported: if a repository needs to be taken down (due to SAN maintenance, for example), it can be placed in maintenance mode and excluded from the scale-out repository while that work is performed.
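
If you’d rather script the configuration, the Veeam v9 PowerShell snap-in exposes cmdlets for this as well. Below is a minimal sketch that groups two existing repositories into a single scale-out repository; the repository and scale-out repository names are made up, and exact cmdlet and parameter names can vary between Veeam PowerShell versions, so verify against your own install before relying on it.

Add-PSSnapin VeeamPSSnapIn

# Made-up names - substitute the simple repositories you already have
$extents = Get-VBRBackupRepository -Name "Repo01", "Repo02"

# Create the scale-out repository on top of the existing repositories
Add-VBRScaleOutBackupRepository -Name "SOBR01" -Extent $extents -PolicyType DataLocality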

For me, these are the big features I’m happy to see in Veeam v9. There are additional features such as explorers for Oracle, Active Directory (support for AD-integrated DNS and GPO restores!), SQL Server, and SharePoint. The entire list of new features can be found at the link below.

Click here for all the feature announcements.


Create New NFS Project on Tegile

The basis of a Project on the Tegile array is applying permissions and policies to a single volume or group of volumes. This means that changes made at the Project level can propagate to the volumes that live inside that project. If new IP addresses need to be added for read/write and root access for all the volumes, that can be handled at the Project-level instead of having to modify each export. However, you still have the ability to make changes at the individual volume level if that’s required.

In this setup, I’ll create a new project to host my Windows workloads in VMware. I’ll create a volume for Windows 2012 operating system files and allow all the hosts on my NFS network read/write and root access to this volume.

1. Log in to the web interface of the Tegile array
tegileproject020415-step1
2. Click on “Data”
tegileproject020415-step2
3. Click on the Pool that will host this new project
tegileproject020415-step3
4. In the “Project” window, click “Add Project”
tegileproject020415-step4
5. Enter the name of the Project, choose the Purpose, and select “NFS” for the access type. Click “Next”
tegileproject020415-step5
6. Enter the Share Name, enter the number of mount points (more can be added later), and enter any Share limits or reservations. Click “Next”
tegileproject020415-step6
7. Set “NFS Sharing” to “on”. Set “Access Mode” to “Read-Write”, set “Access Type” to “IP” and enter the individual IP addresses or the subnet that will have access to this share. Check the box for “Root Access” then click “Add”. Repeat for each IP/Subnet then click “Next”
tegileproject020415-step7
8. Set your snapshot policy (if required). This can be configured at a later time as well. Click “Next”
tegileproject020415-step8
9. Review your settings and click “Finish”
tegileproject020415-step9
10. Click on the newly created Project and then you will see the volume share name and the mountpoint
tegileproject020415-step10

At this point, we just need to mount this new volume on our ESXi hosts. That can be done manually through the vSphere client on each host, or much faster through PowerCLI.

In the example above, the name of our Volume is “2012_OS”, the path is our share mountpoint (/export/PDX_Windows/2012_OS) and the IP of the Tegile is 192.168.1.15. You’ll need to define the hostname as it appears in vCenter. To mount to a single host we can use the following PowerCLI command:

New-Datastore -NFS -VMHost "Hostname" -Name "2012_OS" -Path /export/PDX_Windows/2012_OS -NfsHost 192.168.1.15

To mount to an entire cluster, we can use this command after defining the name of the cluster as it appears in vCenter:

Get-Cluster "ClusterName" | Get-VMHost | New-Datastore -NFS -Name "2012_OS" -Path /export/PDX_Windows/2012_OS -NfsHost 192.168.1.15
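
Once that completes, it’s worth confirming that every host in the cluster actually sees the new datastore. Here’s a quick check, a sketch that reuses the same cluster and datastore names as above:

# Report whether each host in the cluster has the 2012_OS datastore mounted
Get-Cluster "ClusterName" | Get-VMHost | ForEach-Object {
    $mounted = [bool](Get-Datastore -VMHost $_ -Name "2012_OS" -ErrorAction SilentlyContinue)
    "{0}: 2012_OS mounted = {1}" -f $_.Name, $mounted
}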

Add NFS Datastore to Cluster via PowerCLI

I have been digging into more and more PowerCLI over the last month or so, exploring faster ways to accomplish common tasks. Using the NetApp VSC plugin inside vCenter I can provision a brand new NFS datastore to an entire cluster in just a few clicks, but there is no built-in way to do the same when mounting an existing datastore. The script below is a simple way to mount an existing NFS datastore on every host in a named cluster.

# Cluster, datastore name, export path, and NFS server IP for the mount
$ClusterName = "ProdCluster"
$DatastoreName = "VM_Win2003_NA5"
$DatastorePath = "/vol/VM_Win2003_NA5"
$NfsHost = "192.168.1.5"

# Mount the NFS export on every host in the cluster
Get-Cluster $ClusterName | Get-VMHost | New-Datastore -NFS -Name $DatastoreName -Path $DatastorePath -NfsHost $NfsHost

Or, when mounting multiple datastores in the same script, you can replace the variables with the actual values:

Get-Cluster "ProdCluster" | Get-VMHost | New-Datastore -NFS -Name "VM_Win2003_NA5" -Path "/vol/VM_Win2003_NA5" -NfsHost 192.168.1.5
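
If there are several existing datastores to bring into the cluster at once, a small loop saves repeating that one-liner. Here’s a sketch; the second datastore name and path are made-up examples:

# Map of datastore names to their NFS export paths (the second entry is a made-up example)
$NfsHost = "192.168.1.5"
$Datastores = @{
    "VM_Win2003_NA5" = "/vol/VM_Win2003_NA5"
    "VM_Win2008_NA5" = "/vol/VM_Win2008_NA5"
}

# Mount each export on every host in the cluster
$VMHosts = Get-Cluster "ProdCluster" | Get-VMHost
foreach ($Name in $Datastores.Keys) {
    $VMHosts | New-Datastore -NFS -Name $Name -Path $Datastores[$Name] -NfsHost $NfsHost
}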

The next step here will be running this script from vCO and passing the variables directly from vCO. Maybe one day I’ll have the time to figure out just how to do that…


Tegile Array Replication and Restore

These days most of my replication is handled at the VM level by software designed for virtualization. While that covers most of my environment, I still have a few non-virtualized workloads running on shared storage that need to be replicated in the event of a disaster at my primary location. This process was never too complex in my days of working with NetApp, and as I continue exploring the Tegile, I’m happy to say it’s just as easy through the GUI.

Documenting this process for my non-virtual workloads would be a little difficult, so I’ve decided to document it using an NFS datastore containing a few virtual machines. The first half of this guide covers setting up the replication relationship and replicating the data. The second half covers restoring that data and making it usable at your DR site.

 

1. Log in to the web interface of the Tegile array that is the replication source
2. Click on “Settings” then “App-Aware”
tegiledr111214-step2
3. Click on “Zebi Replication” on the left column
tegiledr111214-step3
4. Under the tab “Replication Target” click the “Add” button (This is adding the DR Tegile as the target array)
tegiledr111214-step4
5. Enter the name or IP of the array (the shared management IP address) and the username/password (optionally you can specify a port range for replication, which we won’t be doing here), then click “Add”
tegiledr111214-step5
6. Once it has been successfully added it will appear in the “Replication Target” list
tegiledr111214-step6
7. Log in to the web interface of the DR target Tegile, click “Settings” then “App-Aware”, choose “Zebi Replication” in the left column, and then click the “Replication Source” tab. You should see the other array listed here (the IP addresses shown are the “management” IPs of each controller, not the shared management IP of the array)
tegiledr111214-step7
8. Back on the Primary Tegile (Replication source) click on “Data”
tegiledr111214-step8
9. Click on the disk pool, then the project that will be replicated
tegiledr111214-step9
10. For this documentation I’ve created a Project named “NFS_Replication” with a volume named DR_Windows containing 4 VMs. Click on the project that will be replicated and click the “Edit” button
tegiledr111214-step10
11. Click on “Replication” on the left column
tegiledr111214-step11
12. Click the “Add Replication” button
tegiledr111214-step12

a. Select the Target System and click “Next”
tegiledr111214-step12a
b. Select the “Target Pool” and enter a name for the “Replication Project”. Click “Next”
tegiledr111214-step12b
c. Choose which options are required and which volumes will be replicated (this test only has one volume, DR_Windows, but you can include or exclude any volumes that exist in this project). We’ll choose quiesce, which performs a VMware snapshot to put the guest OS in a consistent state. Click “Next”
tegiledr111214-step12c
d. Choose your schedule (manual or automatic), frequency, and how many additional snapshots (restore points) will be saved on the target array. For this example we’ll do daily replication that happens at 10:49 am and we’ll keep 14 snapshots. Click “Finish”
tegiledr111214-step12d
e. Once it’s all set up, you’ll see your target array, the target pool, and the target project
tegiledr111214-step12e

13. I have 4 VMs in that datastore (DR-Test01-04). Once the time hits, we can see that snapshots are taken, then removed, for each of the VMs in that datastore.
tegiledr111214-step13
14. On the DR target array, we can see we now have snapshots available for this project. (There are two because I initiated a manual replication sync first for testing)
tegiledr111214-step14

a. To manually kick off a replica snapshot, on the source array, find the project, click on “Replication” and then click the “Play” button that says “Replicate”
tegiledr111214-step14a

 

That is how simple it is to set up replication. Now let’s imagine we need to spin up the replicated VMs in this volume. Here is how we do that.

 

1. On the DR target array, click on Data, select the pool, then click on “Replica (1)” to view the replica project
tegilerest111214-step1
2. Click the “Edit” button for the NFS volume
tegilerest111214-step2
3. Click on “Snapshots” and find the snapshot you want to bring live (We’ll choose the latest version). Click the “Clone” button
tegilerest111214-step3

a. Cloning the snapshot allows us to create a new project and NFS volume from this snapshot and spin up these VMs in DR. Because it’s a clone, replication from the source continues unaffected, which is especially useful when you’re testing DR rather than responding to an actual event.

4. Enter a name for the new Project (DR_NFS_Replication for this writing) and a name for the mount point (/export/DR_NFS_Replication for this writing) and click “Clone”
tegilerest111214-step4
5. If successful, you’ll receive this message about the new project being created. Click “OK”
tegilerest111214-step5
6. Close the window for “Share Configuration” and click on “Local (1)” under “Projects”
tegilerest111214-step6
7. Click on the “DR_NFS_Replication” project then view the Mountpoint of the Share (/export/DR_NFS_Replication/DR_Windows). Note the “c” before the share name, which denotes it was cloned from another project
tegilerest111214-step7
8. Click the “Edit” button for the project and then click on “Sharing”
tegilerest111214-step8
9. This is where you will add the IP addresses or range of IPs that need read/write and root access to the shares in this project. The IP addresses/ranges will carry over from the source array. Our IP range is the same in DR as our lab so we’ll leave this alone.
tegilerest111214-step9
10. Connect to your DR vCenter server or ESXi hosts. Click on the host, then “Configuration”, then “Storage”
tegilerest111214-step10
11. Click “Add Storage” towards the top right
tegilerest111214-step11
12. Choose “Network File System” and click “Next”
tegilerest111214-step12
13. Enter the NFS IP address of the DR Tegile, enter the folder path (/export/DR_NFS_Replication/DR_Windows) and then enter the name of the Datastore (DR_Windows). Click “Next”
tegilerest111214-step13
14. Review the summary info and click “Finish”
tegilerest111214-step14
15. Repeat for each host that needs access to this datastore. Afterwards, right click the datastore and click “Browse Datastore”
tegilerest111214-step15
16. Inside you’ll see the 4 VMs that were located here before. Open each folder, right-click the VM’s .vmx file, and choose “Add to inventory”
tegilerest111214-step16
17. Enter the name and location for the VM and click “Next”
tegilerest111214-step17
18. Choose the Cluster or host and click “Next”
tegilerest111214-step18
19. Review the settings and click “Finish.” Repeat for each VM that needs to be added.
tegilerest111214-step19
20. Power on all the VMs and now you can run any validation tests or bring these VMs live in a DR event
tegilerest111214-step20

 

Obviously, mounting the datastore in your DR vCenter Server and re-adding the VMs one by one would be time-consuming and tedious. When developing your DR plans, having this process scripted ahead of time on the vCenter side of things (easy enough in something like PowerCLI) would ease that burden. From the Tegile’s standpoint, the process is intuitive and simple to set up. One of the things I love is that, by default, the data you bring live at the DR site is a clone, so replication keeps running unaffected.
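
As a starting point for that script, here’s a rough PowerCLI sketch of the vCenter side of this restore: mount the cloned export on every host in a DR cluster, then register each .vmx file found in it. The cluster name and the NFS IP of the DR Tegile are placeholders; the datastore name and export path match the example above. Treat it as a sketch, not a finished runbook.

# Placeholders: DR cluster name and the NFS IP of the DR Tegile
$Cluster = Get-Cluster "DR-Cluster"
$NfsIP   = "192.168.1.15"

# Mount the cloned replica export on every host in the DR cluster
$Cluster | Get-VMHost | New-Datastore -NFS -Name "DR_Windows" -Path "/export/DR_NFS_Replication/DR_Windows" -NfsHost $NfsIP

# Register every .vmx file found in the datastore against the first host in the cluster
$VMHost    = $Cluster | Get-VMHost | Select-Object -First 1
$Datastore = Get-Datastore "DR_Windows"
Get-ChildItem -Path $Datastore.DatastoreBrowserPath -Recurse |
    Where-Object { $_.Name -like "*.vmx" } |
    ForEach-Object { New-VM -VMHost $VMHost -VMFilePath $_.DatastoreFullPath }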


Tegile NFS Datastore Management in vCenter

As the primary VMware and storage admin, I try to minimize the number of tools I have to use to accomplish my tasks. When it comes to provisioning and managing volumes for VMware, I prefer to do it all from within the vSphere client if possible. The VSC console for my NetApp filers has saved a lot of time over the years, and as we continue to explore our Tegile array we can see what their software has to offer.

My last post was about registering the Tegile plugin with vCenter to have this functionality available in the vSphere client. This post goes into the basic administration of NFS volumes from within the vSphere client.

Prerequisites:
1. Credentials to the Tegile web interface (default is admin/tegile)
2. The Tegile plugin registered on your vCenter server. Click here for those steps.

Steps:
1. Log in to the vSphere thick client, then click “Home” and choose “Tegile Management” under “Solutions and Applications”
tegilenfs092214-step1
2. Proceed through any security warnings and log in to the Tegile interface
tegilenfs092214-step2
3. On the left you’ll see a list of all the datastores on the Tegile that have been mounted on the ESXi hosts in this vCenter. Towards the bottom, click on “Add Datastore”
tegilenfs092214-step3
4. Enter the following information and click “Create”

a. Name: Name of the datastore
b. Type: Whether block or file based (SAN or NAS)
c. Protocol: NFS, iSCSI
d. Quota: Check this box to set a max size of the volume
e. ESX/ESXi Server (Version): Check the hosts that this datastore will be provisioned to
f. Pool: The disk pool for this datastore (if multiple are available)
g. Project: The project that this datastore will be associated with
h. Purpose: The type of workload hosted on this datastore (important for block size assignment)
i. Zebi Floating IP Address: The IP each ESXi host will connect to
tegilenfs092214-step4i

5. Once the operation is complete, click “OK”
tegilenfs092214-step5
6. The new datastore has been created and mounted and appears in the list of Zebi datastores
tegilenfs092214-step6
7. Click the “More Details” button for the newly created datastore to see all the details of this volume
tegilenfs092214-step7
8. In order to resize this volume, click the “resize” button
tegilenfs092214-step8

a. Check the box for “New Share Quota”, enter the new size, and press “Submit”
tegilenfs092214-step8a

9. This view will refresh and the new size will be reflected
tegilenfs092214-step9
10. I have moved a virtual machine into this datastore to test the snapshot function with quiesce enabled. Click the “Snapshot” button for the datastore (a rough PowerCLI equivalent of the quiesce step is sketched after these steps)
tegilenfs092214-step10
11. Enter the name of the snapshot, change “Quiesce” to “on” and click “Create”
tegilenfs092214-step11
12. You’ll receive a message that snapshot creation has been triggered. Click “OK”
tegilenfs092214-step12

a. A new task will be created to snapshot all VMs that are in that datastore
tegilenfs092214-step12a

13. Once the task to remove the virtual machine snapshot completes, click the “Refresh” button on the snapshot screen to see the new snapshot
tegilenfs092214-step13
14. To delete the snapshot, check the box next to the snapshot and press the “Delete” button
tegilenfs092214-step14

a. Click “Yes” to confirm deletion
tegilenfs092214-step14a
b. After this box disappears the snapshot is deleted
tegilenfs092214-step14b

i. *UPDATED 10/9/14* There was a bug in version 2.1.2.4.140802 of the Zebi software that prevented the confirmation box from going away after the snapshot deletion completed. Clicking “No” would return you to the snapshot list without any errors. In version 2.1.2.5.140925 this has been fixed and the confirmation box now disappears once the snapshot deletion completes.
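
For reference, the quiesce option used in steps 10-13 is roughly equivalent to taking a quiesced VMware snapshot of every VM on the datastore and then removing it once the array snapshot exists. Here’s a PowerCLI approximation of that behavior; it is not the plugin’s actual mechanism, and the datastore name is a placeholder:

# Take a quiesced VMware snapshot of every VM on the datastore
$VMs   = Get-Datastore "DatastoreName" | Get-VM
$Snaps = $VMs | New-Snapshot -Name "tegile-quiesce" -Quiesce

# (The plugin would trigger the array-side snapshot at this point)

# Remove the temporary VMware snapshots once the array snapshot completes
$Snaps | Remove-Snapshot -Confirm:$false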

Those are the basic functions you can perform from within the plugin. In a future release I would like to see the ability to create full snapshot schedules from the plugin. Since I am the one responsible for both VMware and storage in our environment, it’s simple for me to create the schedule in the Tegile web interface, but that won’t be the case everywhere. Another function I would like to see is mounting existing datastores on new hosts without having to go through the “Add Storage” process in vCenter for each host.

I’m confident the functionality will get there and I’ll continue to build my list of feature requests for the Tegile team.
