Thoughts in the Airport

Traveling is one of my least favorite things. I have never done well on flights: waiting to take off, sitting still for hours, feeling trapped. That trapped feeling is worse when I'm stuck in a middle or window seat. If I'm not on the aisle, I don't want to fly.

This time, however, it's different. Sure, I've been to tech conferences before. I've been to a handful of VMworlds, Cisco Live, and a few smaller conferences as well. But Storage Field Day? This is my first time being selected as a delegate at a Tech Field Day event. As I sit in the airport I'm nervous for a completely different reason.

Tech Field Day events are filled with companies presenting their latest and greatest products and solutions. This is an event that skips the marketing and gets into the nitty gritty. The delegates (11 of us this time around) get to ask questions of the people who built these products and have a vast knowledge of their inner workings. Viewers watching the live stream can have their questions relayed to the presenters via Twitter and the #SFD8 hashtag so they gain a better understanding as well.

So why the nerves? I'll be sitting alongside storage experts such as Howard Marks and Ray Lucchesi, who run the GreyBeards on Storage podcast (which I subscribe to), and Scott D. Lowe, an author, blogger, former CIO, and someone who is well known and well respected in the industry. It just so happens that Howard and Scott have done a combined 36 Tech Field Day events. Alex Galbraith, Viper V.K, Jon Klaus, Dan Frith, Mark May, Enrico Signoretti, and Jarett Kulm round out the delegates, and they are all well known and respected as well. For a first-timer like me, it doesn't get much more intimidating.

That's just the delegates; I haven't even mentioned the presenters. We'll be on-site at Coho Data, Pure Storage, and Cohesity. With a recent IPO, I'm curious to see what Pure will be showcasing, and with their first GA release, I'm interested in hearing more about Cohesity and where their product stands.

Violin Memory, Intel, INFINIDAT, Nimble Storage, Nexgen, Qumulo and Primary Data will also be presenting. With so little coming from Violin lately I'm curious what they've been up to (besides declaring that disk is dead). I'm also interested in where Nimble is headed. With most of their competitors offering all-flash solutions, Nimble is one of the last few hybrid-only vendors. Have they thrown a bunch of SSDs on their arrays and called it "All Flash" (a la NetApp), or are they working on something new?

As I sit at PDX waiting for my flight to arrive from Denver, my nerves are about adding value to this event: asking good questions, offering the perspective of a customer who has been responsible for deploying and administering storage over the last 7 years, and holding my own alongside these storage industry experts without letting myself get intimidated. This is out of my comfort zone, but I'm up for the challenge.


Storage Field Day Here I Come!

Storage has been a component of my job for most of my IT career. It's something I've enjoyed, but not something I've had the time to focus on. Coming from smaller organizations, I've been responsible for almost everything in the environment, which rarely gave me the opportunity to become an expert in any one technology.

A few years ago the company I worked for was going through a storage refresh, and I was tasked with evaluating our existing storage platform and determining our needs going forward. I spent time with nearly every major storage vendor there was, going into depth on every aspect I could in order to determine the "best choice." In the end I gained an understanding of storage that I never had before, and it became a passion for me.

All that being said, I am both honored and humbled to be selected as a delegate for Storage Field Day 8! Tech Field Day events are something I've watched over the last couple of years, and I have become a huge fan. These events give viewers a chance to learn about the latest technologies and ask questions of the presenters. They are about getting past the marketing and into the details, and they're a great opportunity to educate yourself on the different products being presented.

Don’t miss all the presentations for Storage Field Day 8 on October 21-23. I am particularly interested in hearing more about what Coho Data and Cohesity are doing, but I’m looking forward to all the presentations.


Track Datastore Add & Removes With PowerCLI

While working with the data protection team at my job I was asked if there was any way to track new datastores being added to a vSphere cluster. When new LUNs are allocated to our vSphere clusters, the data protection team isn’t always made aware ahead of time. Normally this isn’t a big deal, but in our case we have a product that requires access to specified datastores for backups. In order to maintain access to these virtual machines for backup purposes, we need to be notified when new datastores are added.

As I sat and thought about how I could accomplish this task I came up with a couple of ideas, but figured a scheduled task running a PowerCLI/PowerShell script would be the easiest to implement. In this script we connect to the vCenter server, get all the datastores in the cluster, write a date-stamped file each day, then compare the current and previous day's datastore files and write the differences to a third file that lists only the datastores that have been added or removed.

I've broken the script down so I can explain each section, making it easy to understand. Back before I had any knowledge of PowerShell/PowerCLI, modifying a script to fit my environment without understanding what was happening at each step was time-consuming and frustrating.

1. This is where we define the name of the vCenter instance we’ll be connecting to and the name of the cluster we’re interested in.

$vCenter = "LabvCenter.domain.com"
$Cluster = "LabCluster"

2. This is where we define the output location for our datastores and difference file. I chose to drop it into a folder named for the cluster, but that can be removed.

$filePath = "C:\test\" + $Cluster + "\"

3. This is where we connect to vCenter and then wait 15 seconds, which prevents subsequent commands from running before any security warnings have been displayed.

Connect-VIserver $vCenter
Start-Sleep -s 15
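
If the sleep is only there to wait out certificate warnings, an alternative (just a sketch; it assumes a PowerCLI version that supports the InvalidCertificateAction setting, 5.1 or later if memory serves) is to tell PowerCLI to ignore invalid certificates before connecting:

#Optional: suppress invalid certificate warnings so the connection doesn't stall (assumes PowerCLI 5.1+)
Set-PowerCLIConfiguration -InvalidCertificateAction Ignore -Confirm:$false
Connect-VIserver $vCenter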

4. This gathers all the datastores in the cluster and excludes any datastore whose name ends in "-local". The wildcard is important because the local datastores are named "servername" plus "-local"; without the wildcard, all of the local datastores would still be included, since no datastore is named exactly "-local".

$Datastores = Get-Cluster -Name $Cluster | Get-Datastore | Where {$_.Name -notlike "*-local"}

5. I prefer the format of 2-digit month, 2-digit day, 2-digit year. This gets the current date from the system running the script and converts it to that format (051415, for example). It builds the same date strings for yesterday and two days ago as well.

$today = (Get-Date).ToString("MMddyy")
$yesterday = (Get-Date).AddDays(-1).ToString("MMddyy")
$2DaysAgo = (Get-Date).AddDays(-2).ToString("MMddyy")

6. This sets the file name and location for the output from 2 days ago, and if that file exists, it is removed. Rather than keep an output file from every day until I manually remove them, this process seemed better. I chose to delete the file from 2 days ago, as opposed to deleting yesterday's file right after the comparison runs, so that if we see a huge change in the difference file we can manually compare the two most recent files to track down the error.

$2DayOldFile = $filepath + $Cluster + $2DaysAgo + ".txt"
If (Test-Path $2DayOldFile){Remove-Item $2DayOldFile}
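
If you'd rather keep a longer history instead of deleting the 2-day-old file, a small variation (just a sketch, with the 7-day window being an arbitrary choice) is to prune anything older than a set number of days:

#Alternative cleanup: remove datastore output files older than 7 days
Get-ChildItem -Path $filePath -Filter ($Cluster + "*.txt") |
   Where {$_.LastWriteTime -lt (Get-Date).AddDays(-7)} |
   Remove-Item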

7. This builds the file paths and names for today's and yesterday's output files: the file path defined at the top, plus the cluster name, plus the date, with .txt on the end.

$CurrentFile = $filePath + $Cluster + $today + ".txt"
$YesterdaysFile = $filePath + $Cluster + $yesterday + ".txt"

8. Here we are exporting all the datastores from Step 4 by name and outputting to the file name/path defined in Step 7.

$Datastores | Select Name | Out-File $CurrentFile

9. This is where we set the name and path for the difference file that will track the datastore add/remove.

$DifferenceFile = $filePath + "Datastore-Changes" + ".txt"

10. This reads in the content of today's and yesterday's files.

$YesterdaysContent = Get-Content $YesterdaysFile
$CurrentContent = Get-Content $CurrentFile

11. Here we are comparing the content we just read in step 10.

$Compare = Compare-Object $YesterdaysContent $CurrentContent

12. By default, "Compare-Object" marks each difference with a side indicator of <= or =>, depending on which input the difference came from. Rather than having to remember which file was read first to work out whether a datastore was added or removed, we rewrite those values: if a datastore existed yesterday but is missing today it is labeled "Removed", and if a datastore didn't exist yesterday but does today it is labeled "Added".

$compare | foreach {
if ($_.sideindicator -eq '<=')
{$_.sideindicator = "Removed"}

if ($_.sideindicator -eq '=>')
{$_.sideindicator = "Added"}
}

13. This takes the results from Step 11, with the formatting from Step 12, and changes the column names. The column holding the compared objects is normally called "InputObject" and the indicator column is called "SideIndicator". Maybe that's fine, but I prefer something a little easier to read, so "InputObject" becomes "Datastore" (with the current date appended to the header) and "SideIndicator" becomes "Added or Removed". Once that is done, we output the results to the path and name defined in Step 9. The reason we include the current date in the "Datastore" column is that we use "-Append" with the "Out-File" command: each run adds a dated entry of the changes to the bottom of the existing (or new) output file. We aren't overwriting the same file every day, just adding to it, so if you forget to check the file for a few days you won't lose that data.

$Compare |
select @{l='Datastore' + ' - ' + (Get-Date);e={$_.InputObject}},@{l='Added or Removed';e={$_.SideIndicator}} |
Out-File -Append $DifferenceFile

Now that we know what this thing does, let's see it in action. I ran the script over 3 days and this is how the output files look. We can see that on 05-14-15 we added Lab-Datastore-10, which didn't exist on 05-13-15. Then on 05-15-15 we removed Lab-Datastore-03 and added -11 and -12.
image

When running the script I commented out the removal of the 2 day old file so we could compare manually. Now we have an output file created (Datastore-Changes.txt) that should show the differences.
image

Inside Datastore-Changes.txt we see that on 5/14 the datastore “Lab-Datastore-10” was added and on 5/15 we lost Lab-Datastore-03, but added 11 and 12.

image

We can delete this file at any time and the next time the script runs a brand new file will be created. This means there is no dependency on the file already existing for the script to run, and we don't have to keep a list of every datastore add/remove for all eternity. Now you just need to save the script and schedule it to run using Windows Task Scheduler.
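
Scheduling can also be done from PowerShell instead of the Task Scheduler GUI. This is a minimal sketch, assuming Windows Server 2012 or later and a made-up script path of C:\Scripts\DatastoreChanges.ps1 (adjust both to fit your environment):

#Create a daily scheduled task that runs the script (path and run time are examples only)
$action = New-ScheduledTaskAction -Execute "powershell.exe" -Argument '-NoProfile -ExecutionPolicy Bypass -File "C:\Scripts\DatastoreChanges.ps1"'
$trigger = New-ScheduledTaskTrigger -Daily -At 6am
Register-ScheduledTask -TaskName "Track Datastore Changes" -Action $action -Trigger $trigger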

Below is the full script with comments.

#Define the vCenter Server and Cluster
$vCenter = "LabvCenter.domain.com"
$Cluster = "LabCluster"

#Set the path location for the output files
$filePath = "C:\test\" + $Cluster + "\"

#Connect to the vCenter Server and sleep for 15 seconds (necessary for security warnings)
Connect-VIserver $vCenter
Start-Sleep -s 15

#Get a list of all the datastores
$Datastores = Get-Cluster -Name $Cluster | Get-Datastore | Where {$_.Name -notlike "*-local"}

#Get the current date in the correct format
$today = (Get-Date).ToString("MMddyy")
$yesterday = (Get-Date).AddDays(-1).ToString("MMddyy")
$2DaysAgo = (Get-Date).AddDays(-2).ToString("MMddyy")

#Delete the output from 2 days ago (Remove this section if you want to keep the history)
$2DayOldFile = $filepath + $Cluster + $2DaysAgo + ".txt"
If (Test-Path $2DayOldFile){Remove-Item $2DayOldFile}

#Set the filename to include today's date
$CurrentFile = $filePath + $Cluster + $today + ".txt"
$YesterdaysFile = $filePath + $Cluster + $yesterday + ".txt"

#Export those datastores to a TXT file
$Datastores | Select Name | Out-File $CurrentFile

#Set file name & path for difference file
$DifferenceFile = $filePath + "Datastore-Changes" + ".txt"

#Get the content for yesterday and today's files
$YesterdaysContent = Get-Content $YesterdaysFile
$CurrentContent = Get-Content $CurrentFile

#Compare yesterday's and today's files
$Compare = Compare-Object $YesterdaysContent $CurrentContent

#Change the source/target column to "Removed" and "Added"
$compare | foreach { 
      if ($_.sideindicator -eq '<=')
        {$_.sideindicator = "Removed"}

      if ($_.sideindicator -eq '=>')
        {$_.sideindicator = "Added"}
     }

#Change the column name output to "Datastore + Date" and "Added or Removed" then output to file
 $Compare | 
   select @{l='Datastore' + ' - ' + (Get-Date);e={$_.InputObject}},@{l='Added or Removed';e={$_.SideIndicator}} |
   Out-File -Append $DifferenceFile

Create New NFS Project on Tegile

The basis of a Project on the Tegile array is applying permissions and policies to a single volume or group of volumes. This means that changes made at the Project level can propagate to the volumes that live inside that project. If new IP addresses need to be added for read/write and root access for all the volumes, that can be handled at the Project-level instead of having to modify each export. However, you still have the ability to make changes at the individual volume level if that’s required.

In this setup, I'll create a new project to host my Windows workloads in VMware. I'll create a volume for Windows 2012 Operating System files and allow all the hosts on my NFS network read/write and root access to this volume.

1. Login to the web interface of the Tegile array
tegileproject020415-step1
2. Click on “Data”
tegileproject020415-step2
3. Click on the Pool that will host this new project
tegileproject020415-step3
4. In the “Project” window, click “Add Project”
tegileproject020415-step4
5. Enter the name of the Project, choose the Purpose, and select "NFS" for the access type. Click "Next"
tegileproject020415-step5
6. Enter the Share Name, enter the number of mount points (more can be added later), and enter any Share limits or reservations. Click “Next”
tegileproject020415-step6
7. Set “NFS Sharing” to “on”. Set “Access Mode” to “Read-Write”, set “Access Type” to “IP” and enter the individual IP addresses or the subnet that will have access to this share. Check the box for “Root Access” then click “Add”. Repeat for each IP/Subnet then click “Next”
tegileproject020415-step7
8. Set your snapshot policy (if required). This can be configured at a later time as well. Click “Next”
tegileproject020415-step8
9. Review your settings and click “Finish”
tegileproject020415-step9
10. Click on the newly created Project and then you will see the volume share name and the mountpoint
tegileproject020415-step10

At this point, we just need to mount this new volume on our ESXi hosts, which can be done manually through the vSphere client on each host or through PowerCLI, which is a lot faster.

In the example above, the name of our Volume is “2012_OS”, the path is our share mountpoint (/export/PDX_Windows/2012_OS) and the IP of the Tegile is 192.168.1.15. You’ll need to define the hostname as it appears in vCenter. To mount to a single host we can use the following PowerCLI command:

New-Datastore -NFS -VMHost "Hostname" -Name "2012_OS" -Path /export/PDX_Windows/2012_OS -NfsHost 192.168.1.15

To mount to an entire cluster, we can use this command after defining the name of the cluster as it appears in vCenter:

Get-Cluster "ClusterName" | Get-VMHost | New-Datastore -NFS -Name "2012_OS" -Path /export/PDX_Windows/2012_OS -NfsHost 192.168.1.15
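
To double-check the result (a quick sketch using the same example names), you can ask PowerCLI which hosts now see the new datastore:

#List the hosts that have the new datastore mounted
Get-VMHost -Datastore (Get-Datastore -Name "2012_OS") | Select Name, ConnectionState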

Provision New Floating IP on Tegile

As I begin the process of reconfiguring my Tegile from a test/lab array into a production array, I thought it would be a great opportunity to document more of the setup and provisioning steps involved in administering the array. In our environment we are using 10GbE without configuring LACP on the switches, letting the Tegile handle the network availability. Obviously every environment is different; this is just the approach we took for this array.

These steps walk you through the process of provisioning an additional VLAN on the 10GbE interfaces and then creating a floating IP address that is owned by the node running the disk pool.

1. Login to the non-shared management IP of each HA Node
2. Login as “admin” with the correct password
tegileIP012715-step2
3. Click on the “Settings” tab and then click “Network”
tegileIP012715-step3
4. Under “Network Settings” on the left column, click on “Interface”
tegileIP012715-step4
5. Under "Physical Network Interfaces", click on one of the 10GbE interfaces (named ixgbe2 and ixgbe3 on this array). Click the "+" to add a VLAN
tegileIP012715-step5
6. Enter the name of the VLAN following the guidelines below and the VLAN number and click “OK”.
a. Our naming convention is protocol + interface number + _ + VLAN number. We are using cifs on interface “ixgbe3” and the VLAN is 100
tegileIP012715-step6
7. Click “OK” to this message about saving the config
tegileIP012715-step7
8. Repeat step 5 for the other 10GbE interface, changing the name to reflect the number of the other interface.
tegileIP012715-step8
9. Click "Save" to bring these new VLANs online
tegileIP012715-step9
    a. Notice that the state changes to “up” after saving
tegileIP012715-step9a
10. Now we need to assign an IP address to these interfaces. We are not using LACP, so under “IP Groups” click the “Add IP Group” button
tegileIP012715-step10
11. Click the arrow next to "Network Properties". Enter the name, check the boxes next to the newly created VLANs we added to each interface, then enter the IP address and subnet for this new network. Click "OK"
     a. The naming convention is ipmp + _ + protocol + filer node number. IPMP is "IP Multipathing", cifs is the protocol, and this is node "A", which is the first node
tegileIP012715-step11a
     b. Click “OK” to this message about saving the config
tegileIP012715-step11b
12. Now we see the IPMP group has been created, but isn’t up.
tegileIP012715-step12
13. Click the "Save" button at the bottom
tegileIP012715-step13
14. Click “OK” for confirmation
tegileIP012715-step14
15. Now we can see that the interface is up
tegileIP012715-step15
16. Repeat these steps on the other node of the HA pair, changing the IP Group name to "ipmp_cifs2" and choosing a different IP address
tegileIP012715-step16
17. Back on the primary node, click on “Settings” then “HA”
tegileIP012715-step17
18. On the active resource group (we only have one, which is "Resource Group A") click "Add Floating IP"
tegileIP012715-step18
19. Enter the shared IP address and netmask (this is a unique IP and different than either of the IP addresses entered earlier) then choose the IP Groups we created on each node. Click “OK”
tegileIP012715-step19
20. Now we have a new IP address that will be used by whichever node owns the Resource Group
tegileIP012715-step20

The steps are pretty straightforward, but can be confusing in the beginning. Our local SE from Tegile walked us through this config when we were evaluating, but it was important for me to know how to do these things on my own.


Restore Files & AD Objects From NetApp & Veeam v8

With the release of Veeam Backup & Replication v8 we can restore directly from NetApp Snapshots. Whether it’s an entire VM, individual files, or just some objects in Active Directory, you can do it all from the Veeam console. For a guide on installing and configuring Veeam v8 with NetApp storage, click here

We’ll be testing the restore of individual files and some Active Directory objects for this blog post. In this scenario we have a couple Domain Controllers (2008 R2) and a couple of member servers with some files that we’ll delete. We also have an OU with a couple users, a member server, and a group.

Each of these VMs sits on one of two volumes, Win_2008 and Win_2012. If you click on "Storage Infrastructure" in the Veeam Backup and Replication console and then expand your NetApp storage, you'll see a list of all the volumes available and their snapshots.
veeamrest120114-part1

1. I've taken a snapshot of these volumes in NetApp System Manager. To list these snaps, refresh the volume by right-clicking on the volume and choosing "Rescan volume", or right-click on the storage array and choose "Rescan Storage" (since we have 2 volumes to refresh, we'll rescan storage).
veeamrest120114-step1
2. A new window will popup showing the progress
veeamrest120114-step2
3. Once completed, we now see the new snapshot I created called “Pre-delete”
veeamrest120114-step3
4. I'm going to delete a file sitting on the desktop of the server "Lab2008" (on the Win_2008 datastore) and of "Lab2012" (on the Win_2012 datastore).
veeamrest120114-step4a
veeamrest120114-step4b
5. And let's also delete the OU "Delete Test", which contains a couple of test users, a group they are a part of, and the VM "Lab2008"
veeamrest120114-step5
6. Now that those files and OU\objects have been deleted, let's go back to the Veeam console and see what we can recover. We'll start with the files for the "Lab2012" VM.
7. Expand the "Win_2012" datastore in the "Storage Infrastructure" view and click on the name of the snapshot created earlier; we see the "Lab2012" VM
veeamrest120114-step7
8. We right-click on “Lab2012”, hover over “Restore guest files” and then choose “Microsoft Windows”
veeamrest120114-step8
9. Under the “File Level Restore” screen, click “Customize” in the bottom right corner
veeamrest120114-step9
10. As long as you’re restoring to a vCenter/Host that’s already been added to Veeam, choose the host, resource pool (if any) and folder. Click “OK” then click “Next”
veeamrest120114-step10
11. Enter a reason for the restore and click “Next”
veeamrest120114-step11
12. Click “Finish”
veeamrest120114-step12
13. The restore session will open and mount the snapshot/VM to the chosen host
veeamrest120114-step13
14. In vCenter, we see these 2 tasks of creating a datastore and registering the virtual machine.
veeamrest120114-step14
15. On the host, we see a new powered off VM with the name of “Lab2012” followed by a GUID.
veeamrest120114-step15
16. Back at the Veeam console, the Backup Browser window appears and we can browse to the location of the deleted file
veeamrest120114-step16
17. From here, we can copy the file to our local machine or restore it directly to the Virtual Machine. Right click on the file and choose “Restore” then “Overwrite”
veeamrest120114-step17
18. We’ll pick “Use the following account” and choose my Lab Domain credentials and click “OK”
veeamrest120114-step18
19. The restore process will start and you’ll see this output if you click “Show Details”
veeamrest120114-step19
20. Logging back in to “Lab2012” we can see the file has been restored
veeamrest120114-step20
21. Close the “Restoring files” window in the Veeam console and the “Backup Browser” window. After they’re closed, the VM will be unregistered on the host and the datastore will be unmounted.
22. I’m doing a restore from “Lab2008” but this time I will just copy the file to my local computer instead of restoring to the guest VM. After browsing the datastore snapshots and choosing “Restore Guest Files”, we’ll browse the directory structure, locate the file, right-click and choose “Copy To”
veeamrest120114-step22
23. A window will pop up to choose the folder location on your machine and whether to preserve permissions and ownership. Then click “OK”
veeamrest120114-step23
24. Now in the root of the C: drive we have the “Lab2008-txt” file
veeamrest120114-step24
25. Let’s look at the “Lab2008” VM now. It was in that OU we deleted and after rebooting it and trying to login we receive the message “The security database on the server does not have a computer account for this workstation trust relationship”. We can fix that.
veeamrest120114-step25
26. Back in the Veeam console and the “Pre-delete” snapshot for the “Win_2008” datastore, we’ll locate the “Lab-DC01” VM. Right click on the VM, hover over “Restore application items” and then click “Microsoft Active Directory objects”
veeamrest120114-step26
27. Our host settings are saved from the last restore we did, so click “Next”
veeamrest120114-step27
28. Enter a restore reason and click “Next”
veeamrest120114-step28
29. Review the summary and click “Finish”
veeamrest120114-step29
30. The Veeam Explorer for Microsoft Active Directory window will appear
veeamrest120114-step30
31. Then the VM will be mounted in vCenter
veeamrest120114-step31
32. Once the Veeam Explorer window for AD opens, you’ll be able to browse your Domain object. We’ll expand the “LabOU” object where we see “Delete Test” with the same 2 test users, “Lab2008” server and the group those users belong to.
veeamrest120114-step32
33. Right click the “Delete Test” OU and choose “Restore container to LabDC.local”
veeamrest120114-step33
34. Enter the credentials for the account with access to add objects to the domain and click “OK”
veeamrest120114-step34
35. You’ll see the progress of the restore and then the summary of how many objects were restored
veeamrest120114-step35

(In order for this to work your Veeam server will need network access to the live domain controller)

36. If we refresh the screen for Active Directory Users and Computers on "Lab-DC01" we'll see the OU is back with all of its objects
veeamrest120114-step36
37. In the properties for the users, we can see that group membership was retained. The group “Email Group” is located in another OU and that membership was restored as well
veeamrest120114-step37
38. And now when we try to login to “Lab2008” with domain credentials it works with no issues.

 

How fast can this restore happen? From the time I opened the Veeam console until the OU reported as restored took 3 minutes and 34 seconds. In an emergency where someone accidentally deletes an entire OU, a user account, a server, or anything else, it can all be restored in under 5 minutes without the need to reset any passwords, and everything will work without anyone ever noticing. Veeam is awesome and just keeps getting better and better.


Veeam v8 Install With NetApp Config

Veeam has released v8 of its Backup & Replication software. As a long-time Veeam user, this is a release I have been waiting for. Previously, Veeam had released support for storage snapshots on HP storage arrays, but with my environments being primarily NetApp over the last few years I wasn't able to take advantage. Now in v8, we can restore and back up directly from snapshots. This speeds up the process and limits the impact on the virtual machines in the environment.

This guide walks you through a brand new installation of Veeam Backup & Replication v8 on Server 2012 and how to add your NetApp storage array as an object to browse existing snapshots. This is a high-level guide and in the future I’ll do a more in-depth backup/restore from Storage. For my guide on installing Veeam v7 with Windows 2012 R2 Data Deduplication, click here.

If you're not interested in a custom SQL Express installation as well, pick up the guide at step 15. Steps 1-14 show how to install SQL Express to the secondary drive to prevent growing databases from affecting the main OS partition.

Prerequisites:

1. Dedicated server for installing Veeam
2. License file for Veeam (copied out to the server)
3. Latest version of Veeam v8 downloaded and mounted on the server (the installer is in an .ISO)
4. A service account for running the Veeam services (Optional, but my preferred method)
5. Username/password with admin rights to vCenter
6. Username/password for NetApp array (for this post I’ll be using the ‘root’ account)

Steps:

1. Right click the DVD drive and click “Open”
veeamv8111714-step1
2. Navigate to Redistr -> x64. Locate SQLEXPRx64.exe, right click and choose “Run as administrator”
veeamv8111714-step2
3. Click “Yes” to run the installer if prompted
veeamv8111714-step3
4. Under the “Installation” section, click “New SQL Server stand-alone installation”
veeamv8111714-step4
5. Click the check box for “I accept the license terms” and decide if you want to send feature usage data to Microsoft then click “Next”
veeamv8111714-step5
6. Ensure the check box for “Include SQL Server product updates” is checked and click “Next”
veeamv8111714-step6
7. Updates and setup files will install…
veeamv8111714-step7
8. Choose the features to install (Database Engine Services is the only thing required). Choose the install directory (I always choose the secondary drive of the machine) and click "Next"
veeamv8111714-step8
9. Choose a name for the instance or leave as default (SQLExpress), choose the instance root directory (secondary drive again) and click “Next”
veeamv8111714-step9
10. Enter a service account for running the SQL DB engine (or leave it as local system) and click “Next”
veeamv8111714-step10
11. Choose “Mixed mode” for the authentication type then enter a password for the “sa” account (Immediately save this password somewhere). Choose the groups/users that will be SQL Server administrators
veeamv8111714-step11a

a. By default, only users/groups added here will have access to the Veeam console. If you don't want to grant permissions to the SQL instance, you can grant access to these users/groups for the Veeam database after it has been created

12. Click on the “Data Directories” tab and ensure all the directories are pointing to the secondary drive and click “Next”
veeamv8111714-step12
13. Choose whether to send error reports and click “Next” and the installation will begin
veeamv8111714-step13
14. Once the installation completes, click “Close”
veeamv8111714-step14
15. Close the “SQL Server Installation Center” window. Navigate back to the root of the DVD drive. Right click on “Setup.exe” and choose “Run as administrator”
veeamv8111714-step15
16. Click “Yes” to run the installer if prompted
veeamv8111714-step16
17. Click “install” for “Veeam Backup & Replication”
veeamv8111714-step17
18. Click “Next”
veeamv8111714-step18
19. Read and accept the license terms and click “Next”
veeamv8111714-step19
20. Click “Browse” and locate your license file then click “Next”
veeamv8111714-step20
21. Choose the features to install and the install directory then click “Next”

a. To install to a different location (like a secondary drive), the folders need to be created ahead of time
veeamv8111714-step21

22. If any features are missing, click “Install”
veeamv8111714-step22
23. Once the system configuration check passes, click “Next”
veeamv8111714-step23
24. Review the default configuration and if no changes need to be made, click “Install”
veeamv8111714-step24
25. Once the install completes, click “Finish”
veeamv8111714-step25
26. Close the setup window and restart the server
27. After the server finishes rebooting, login and view the services to ensure the Veeam and SQL services that are “Automatic” have started
veeamv8111714-step27
28. Open “Veeam Backup & Replication”
veeamv8111714-step28
29. Click “Managed servers” on the left side and then click “VMware vSphere”
veeamv8111714-step29
30. Enter the name or IP of the vCenter Server and click “Next”
veeamv8111714-step30
31. Click the “Add” button and then enter the username/password of an account with permissions on the vCenter server. Click “OK” then click “Next”
veeamv8111714-step31
32. Click “Finish”
veeamv8111714-step32
33. To add your NetApp storage systems to Veeam, click on “Storage Infrastructure” and then click the “Add Storage” button
veeamv8111714-step33
34. Click “NetApp Data ONTAP”
veeamv8111714-step34
35. Enter the Name or IP of the storage system and click “Next”
veeamv8111714-step35
36. Click “Add” to add credentials to connect to the NetApp then choose the protocol and port. Click “Next”
veeamv8111714-step36
37. If the name/IP and credentials work, click finish and discovery of VMs and LUNs/Volumes will begin.
veeamv8111714-step37
38. Once storage and VMs have been discovered, click “Close”
veeamv8111714-step38
39. In the “Storage Infrastructure” view, expand “NetApp”, then the storage system. Choose a volume with virtual machines and current volume snapshots. Expand the volume, choose a snapshot and see what VMs are inside.
veeamv8111714-step39

a. From this view you can delete existing snapshots, create new storage snapshots, and rescan the volume for new snapshots. At the VM-level, you can instantly recover the VM from snapshot, restore guest-OS files, and even restore objects from Active Directory, Exchange, SQL or SharePoint.
veeamv8111714-step39a

40. Click on "Backup & Replication" then expand "Backups" and click on "Storage snapshots." You'll see a list of all the volumes that have snapshots, which VMs are in those snapshots, and how many restore points are available.
veeamv8111714-step40

These are the basics of installing Veeam v8 and connecting it to your vCenter Server and NetApp storage. The process is incredibly simple and, like everything else from Veeam, it just works. In the future I intend to add more restore scenarios such as application item recovery and VM recovery from storage snapshots.
