Thoughts in the Airport

Traveling is one of my least favorite things. I have never done well on flights: waiting to take off, sitting still for hours, feeling trapped. That trapped feeling is worse when I’m stuck in a middle or window seat. If I’m not on the aisle, I don’t want to fly.

This time, however, it’s different. Sure, I’ve been to tech conferences before: a handful of VMworlds, Cisco Live, and a few smaller conferences as well. But Storage Field Day? This is my first time being selected as a delegate at a Tech Field Day event. As I sit in the airport, I’m nervous for a completely different reason.

Tech Field Day events are filled with companies presenting their latest and greatest products and solutions. These events skip the marketing and get into the nitty-gritty. The delegates (11 of us this time around) get to ask questions of the people who built these products and have a vast knowledge of their inner workings. Viewers watching the live stream can have their questions relayed to the presenters via Twitter and the #SFD8 hashtag, so they gain a better understanding as well.

So why the nerves? I’ll be sitting alongside storage experts such as Howard Marks and Ray Lucchesi, who run the GreyBeards on Storage podcast (which I subscribe to), and Scott D. Lowe, an author, a blogger, a former CIO, and someone who is well known and well respected in the industry. It just so happens that Howard and Scott have done a combined 36 Tech Field Day events. Alex Galbraith, Viper V.K, Jon Klaus, Dan Frith, Mark May, Enrico Signoretti, and Jarett Kulm round out the delegates, all of them well known and respected as well. For a first-timer like me, it doesn’t get much more intimidating.

And that’s just the delegates; I haven’t even mentioned the presenters. We’ll be on-site at Coho Data, Pure Storage, and Cohesity. With a recent IPO, I’ll be curious to see what Pure will be showcasing, and with their first GA release, I’m interested in hearing more about Cohesity and where their product is.

Violin Memory, Intel, INFINIDAT, Nimble Storage, Nexgen, Qumulo, and Primary Data will also be presenting. With so little coming from Violin lately, I’m curious what they’ve been up to (besides declaring that disk is dead). I’m also interested in where Nimble is. With most of their competitors offering all-flash solutions, Nimble is one of the last few hybrid-only vendors. Have they thrown a bunch of SSDs on their arrays and called it “All Flash” (a la NetApp), or are they working on something new?

As I sit at PDX waiting for my flight to arrive from Denver, my nerves are about adding value to this event: asking good questions, offering the perspective of a customer who has been responsible for deploying and administering storage over the last 7 years, and holding my own alongside these storage industry experts without letting myself get intimidated. This is out of my comfort zone, but I’m up for the challenge.

Storage Field Day Here I Come!

Storage has been a component of my job for most of my IT career. It’s something I’ve enjoyed, but never something I’ve had the time to focus on. Coming from smaller organizations, I’ve been responsible for almost everything in the environment, which rarely gave me an opportunity to become an expert in any one technology.

A few years ago the company I worked for was going through a storage refresh, and I was tasked with evaluating our existing storage platform and determining our needs going forward. I spent time with nearly every major storage vendor there was, going into depth on every aspect I could in order to determine the “best choice.” In the end I gained an understanding of storage that I never had before, and it became a passion for me.

All that being said, I am both honored and humbled to be selected as a delegate for Storage Field Day 8! Tech Field Day events are something I’ve watched over the last couple of years, and I have become a huge fan. These events give viewers a chance to learn and ask questions about the latest technologies straight from the presenters. They are about getting past the marketing and into the details, and they’re a great opportunity to educate yourself on the different products being presented.

Don’t miss all the presentations for Storage Field Day 8 on October 21-23. I am particularly interested in hearing more about what Coho Data and Cohesity are doing, but I’m looking forward to all the presentations.

Track Datastore Add & Removes With PowerCLI

While working with the data protection team at my job I was asked if there was any way to track new datastores being added to a vSphere cluster. When new LUNs are allocated to our vSphere clusters, the data protection team isn’t always made aware ahead of time. Normally this isn’t a big deal, but in our case we have a product that requires access to specified datastores for backups. In order to maintain access to these virtual machines for backup purposes, we need to be notified when new datastores are added.

As I sat and thought about how I could accomplish this task, I came up with a couple of ideas, but figured a scheduled task running PowerCLI/PowerShell would be the easiest to implement. The script connects to the vCenter server, gets all the datastores in the cluster, writes a date-stamped file each day, then compares the current and previous day’s datastore files and writes the result to a third file that displays only the datastores that have been added or removed.

I’ve broken the script down so I can explain each section, making it easy to understand. Before I had any knowledge of PowerShell/PowerCLI, modifying something to fit my environment when I didn’t understand what was happening at each step was time consuming and frustrating.

1. This is where we define the name of the vCenter instance we’ll be connecting to and the name of the cluster we’re interested in.

$vCenter = "LabvCenter.domain.com"
$Cluster = "LabCluster"

2. This is where we define the output location for our datastores and difference file. I chose to drop it into a folder named for the cluster, but that can be removed.

$filePath = "C:\test\" + $Cluster + "\"

3. This is where we connect to vCenter and then immediately wait 15 seconds, which can fix issues with commands running before security warnings are displayed.

Connect-VIserver $vCenter
Start-Sleep -s 15
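
Depending on your PowerCLI version, you may be able to suppress those certificate warnings entirely instead of sleeping; this one-liner is worth trying before relying on the delay:

Set-PowerCLIConfiguration -InvalidCertificateAction Ignore -Confirm:$false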

4. This will gather all the datastores in the cluster and exclude any datastore whose name contains “-local”. The wildcard is important because the local datastores are named “servername” plus “-local”; without the wildcard, all of the local datastores would be included, because no datastore is named exactly “-local”.

$Datastores = Get-Cluster -Name $Cluster | Get-Datastore | Where {$_.Name -notlike "*-local"}

5. I prefer the format of 2-digit month, 2-digit day, 2-digit year. This will get the current date of the system running the script, then convert it to that format (051415, for example).

$today = (Get-Date).ToString("MMddyy")
$yesterday = (Get-Date).AddDays(-1).ToString("MMddyy")
$2DaysAgo = (Get-Date).AddDays(-2).ToString("MMddyy")

6. This will set the file name and location for the output from 2 days ago. If that file exists, it will be removed. Rather than keep an output file from every day until I manually remove them, this process seemed better. I chose to delete the file from 2 days ago, as opposed to deleting yesterday’s file right after the comparison runs, so that if we see a huge change in the difference file we can manually compare the two files to find the error.

$2DayOldFile = $filepath + $Cluster + $2DaysAgo + ".txt"
If (Test-Path $2DayOldFile){Remove-Item $2DayOldFile}

7. This will set the file path and name to the file path defined at the top, plus the cluster name, plus the date, with .txt added to the end.

$CurrentFile = $filePath + $Cluster + $today + ".txt"
$YesterdaysFile = $filePath + $Cluster + $yesterday + ".txt"

8. Here we are exporting all the datastores from Step 4 by name and outputting to the file name/path defined in Step 7.

$Datastores | Select Name | Out-File $CurrentFile

9. This is where we set the name and path for the difference file that will track the datastore add/remove.

$DifferenceFile = $filePath + "Datastore-Changes" + ".txt"

10. This will read the content of today’s and yesterday’s files.

$YesterdaysContent = Get-Content $YesterdaysFile
$CurrentContent = Get-Content $CurrentFile

11. Here we are comparing the content we just read in step 10.

$Compare = Compare-Object $YesterdaysContent $CurrentContent

12. The standard output of “Compare-Object” shows differences with a side indicator of <= or =>, depending on which file the difference exists in. Rather than remember which file was read first to determine whether a datastore was added or removed, we change the indicator values. If a datastore existed yesterday but is missing today, it is labeled “Removed”. If a datastore didn’t exist yesterday but does today, it is labeled “Added”.

$compare | foreach {
if ($_.sideindicator -eq '<=')
{$_.sideindicator = "Removed"}

if ($_.sideindicator -eq '=>')
{$_.sideindicator = "Added"}
}

13. This will take the results from Step 11, with the formatting from Step 12, and change the column names. The list of objects compared is normally named “InputObject”, and “Added or Removed” is normally “SideIndicator”. Maybe that’s fine, but I prefer something a little easier to read. I’ve renamed “InputObject” to “Datastore” (with the current date added) and “SideIndicator” to “Added or Removed”. Once that is done, we output to the path and name defined in Step 9. The reason we include the current date in the “Datastore” column is that we use “-Append” with the “Out-File” command, which adds a dated entry of the changes to the bottom of the existing (or new) output file. We aren’t overwriting the same file every day, just adding to it, so if you forget to check the file for a few days you won’t lose that data.

$Compare |
select @{l='Datastore' + ' - ' + (Get-Date);e={$_.InputObject}},@{l='Added or Removed';e={$_.SideIndicator}} |
Out-File -Append $DifferenceFile

Now that we know what this thing does, let’s see it in action. I ran the script over 3 days, and this is how the output file is displayed. We can see that on 05-14-15 we added Lab-Datastore-10, which didn’t exist on 05-13-15. Then on 05-15-15 we removed Lab-Datastore-03 and added -11 and -12.
image

When running the script I commented out the removal of the 2-day-old file so we could compare manually. Now we have an output file created (Datastore-Changes.txt) that should show the differences.
image

Inside Datastore-Changes.txt we see that on 5/14 the datastore “Lab-Datastore-10” was added, and on 5/15 we lost Lab-Datastore-03 but added -11 and -12.

image
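
For reference, here’s a mock-up of roughly what Datastore-Changes.txt ends up looking like, using the datastore names from this example (the exact timestamps and spacing will vary):

Datastore - 05/14/2015 06:00:00          Added or Removed
-------------------------------          ----------------
Lab-Datastore-10                         Added

Datastore - 05/15/2015 06:00:00          Added or Removed
-------------------------------          ----------------
Lab-Datastore-03                         Removed
Lab-Datastore-11                         Added
Lab-Datastore-12                         Added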

We can delete this file at any time, and the next time the script runs we’ll create a brand new file. This means there is no dependency on the file already existing for the script to run, and we aren’t required to keep a long list of all the datastore adds/removes for all eternity. Now you’ll just need to save the script and schedule it to run using Windows Task Scheduler.
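
If you haven’t scheduled a PowerShell script before, a one-liner like this will create a daily task (the task name, script path, and run time here are placeholders for your own):

schtasks /Create /TN "Track-DatastoreChanges" /TR "powershell.exe -ExecutionPolicy Bypass -File C:\Scripts\Track-DatastoreChanges.ps1" /SC DAILY /ST 06:00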

Below is the full script with comments.

#Define the vCenter Server and Cluster
$vCenter = "LabvCenter.domain.com"
$Cluster = "LabCluster"

#Set the path location for the output files
$filePath = "C:\test\" + $Cluster + "\"

#Connect to the vCenter Server and sleep for 15 seconds (necessary for security warnings)
Connect-VIserver $vCenter
Start-Sleep -s 15

#Get a list of all the datastores
$Datastores = Get-Cluster -Name $Cluster | Get-Datastore | Where {$_.Name -notlike "*-local"}

#Get the current date in the correct format
$today = (Get-Date).ToString("MMddyy")
$yesterday = (Get-Date).AddDays(-1).ToString("MMddyy")
$2DaysAgo = (Get-Date).AddDays(-2).ToString("MMddyy")

#Delete the output from 2 days ago (Remove this section if you want to keep the history)
$2DayOldFile = $filepath + $Cluster + $2DaysAgo + ".txt"
If (Test-Path $2DayOldFile){Remove-Item $2DayOldFile}

#Set the filename to include today's date
$CurrentFile = $filePath + $Cluster + $today + ".txt"
$YesterdaysFile = $filePath + $Cluster + $yesterday + ".txt"

#Export those datastores to a TXT file
$Datastores | Select Name | Out-File $CurrentFile

#Set file name & path for difference file
$DifferenceFile = $filePath + "Datastore-Changes" + ".txt"

#Get the content for yesterday and today's files
$YesterdaysContent = Get-Content $YesterdaysFile
$CurrentContent = Get-Content $CurrentFile

#Compare yesterday's and today's files
$Compare = Compare-Object $YesterdaysContent $CurrentContent

#Change the source/target column to "Removed" and "Added"
$compare | foreach { 
      if ($_.sideindicator -eq '<=')
        {$_.sideindicator = "Removed"}

      if ($_.sideindicator -eq '=>')
        {$_.sideindicator = "Added"}
     }

#Change the column name output to "Datastore + Date" and "Added or Removed" then output to file
 $Compare | 
   select @{l='Datastore' + ' - ' + (Get-Date);e={$_.InputObject}},@{l='Added or Removed';e={$_.SideIndicator}} |
   Out-File -Append $DifferenceFile

Create New NFS Project on Tegile

The basis of a Project on the Tegile array is applying permissions and policies to a single volume or a group of volumes. This means that changes made at the Project level propagate to the volumes that live inside that Project. If new IP addresses need to be given read/write and root access to all the volumes, that can be handled at the Project level instead of having to modify each export. However, you still have the ability to make changes at the individual volume level if that’s required.

In this setup, I’ll create a new Project to host my Windows workloads in VMware. I’ll create a volume for Windows 2012 Operating System files and allow all the hosts on my NFS network read/write and root access to this volume.

1. Log in to the web interface of the Tegile array
tegileproject020415-step1
2. Click on “Data”
tegileproject020415-step2
3. Click on the Pool that will host this new project
tegileproject020415-step3
4. In the “Project” window, click “Add Project”
tegileproject020415-step4
5. Enter the name of the Project, choose the Purpose, and select “NFS” for access type. Click “Next”
tegileproject020415-step5
6. Enter the Share Name, enter the number of mount points (more can be added later), and enter any Share limits or reservations. Click “Next”
tegileproject020415-step6
7. Set “NFS Sharing” to “on”. Set “Access Mode” to “Read-Write”, set “Access Type” to “IP” and enter the individual IP addresses or the subnet that will have access to this share. Check the box for “Root Access” then click “Add”. Repeat for each IP/Subnet then click “Next”
tegileproject020415-step7
8. Set your snapshot policy (if required). This can be configured at a later time as well. Click “Next”
tegileproject020415-step8
9. Review your settings and click “Finish”
tegileproject020415-step9
10. Click on the newly created Project and then you will see the volume share name and the mountpoint
tegileproject020415-step10

At this point, we just need to mount this new volume on our ESXi hosts, which can be done manually through the vSphere client on each host or through PowerCLI, which is a lot faster.

In the example above, the name of our volume is “2012_OS”, the path is our share mountpoint (/export/PDX_Windows/2012_OS), and the IP of the Tegile is 192.168.1.15. You’ll need to define the hostname as it appears in vCenter. To mount to a single host we can use the following PowerCLI command:

New-Datastore -NFS -VMHost "Hostname" -Name "2012_OS" -Path /export/PDX_Windows/2012_OS -NfsHost 192.168.1.15

To mount to an entire cluster, we can use this command after defining the name of the cluster as it appears in vCenter:

Get-Cluster "ClusterName" | Get-VMHost | New-Datastore -NFS -Name "2012_OS" -Path /export/PDX_Windows/2012_OS -NfsHost 192.168.1.15
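
As a quick sanity check that the datastore mounted on every host in the cluster (same example names as above), you can run the following; any host missing the mount will return a “not found” error:

Get-Cluster "ClusterName" | Get-VMHost | Get-Datastore -Name "2012_OS"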

Provision New Floating IP on Tegile

As I begin the process of reconfiguring my Tegile from a test/lab array into a production array, I thought it would be a great opportunity to document more of the setup and provisioning steps involved in administering the array. In our environment we are using 10GbE without configuring LACP on the switches, letting the Tegile handle network availability. Obviously every environment is different; this is just the approach we took for this array.

These steps walk you through provisioning an additional VLAN on the 10GbE interfaces and then creating a floating IP address that is owned by the node running the disk pool.

1. Log in to the non-shared management IP of each HA node
2. Log in as “admin” with the correct password
tegileIP012715-step2
3. Click on the “Settings” tab and then click “Network”
tegileIP012715-step3
4. Under “Network Settings” on the left column, click on “Interface”
tegileIP012715-step4
5. Under “Physical Network Interfaces”, click on one of the 10GbE interfaces (named ixgbe2 and ixgbe3 on this array). Click the “+” to add a VLAN
tegileIP012715-step5
6. Enter the name of the VLAN (following the naming convention below) and the VLAN number, then click “OK”.
a. Our naming convention is protocol + interface number + “_” + VLAN number. We are using cifs on interface “ixgbe3” and the VLAN is 100, so this one is named “cifs3_100”
tegileIP012715-step6
7. Click “OK” to this message about saving the config
tegileIP012715-step7
8. Repeat steps 5 and 6 for the other 10GbE interface, changing the name to reflect the number of the other interface.
tegileIP012715-step8
9. Click “Save” to bring these new VLANs online
tegileIP012715-step9
a. Notice that the state changes to “up” after saving
tegileIP012715-step9a
10. Now we need to assign an IP address to these interfaces. Since we are not using LACP, under “IP Groups” click the “Add IP Group” button
tegileIP012715-step10
11. Click the arrow next to “Network Properties”. Enter the name, check the boxes next to the newly created VLANs we added to each interface, then enter the IP address and subnet for this new network. Click “OK”
a. The naming convention is “ipmp” + “_” + protocol + filer node number. IPMP is “IP Multipathing”, cifs is the protocol, and this is node “A”, the first node, so the name is “ipmp_cifs1”
tegileIP012715-step11a
b. Click “OK” to this message about saving the config
tegileIP012715-step11b
12. Now we see the IPMP group has been created, but isn’t up.
tegileIP012715-step12
13. Click the “Save” button at the bottom
tegileIP012715-step13
14. Click “OK” for confirmation
tegileIP012715-step14
15. Now we can see that the interface is up
tegileIP012715-step15
16. Repeat these steps on the other node of the HA pair, changing the IP Group name to “ipmp_cifs2” and choosing a different IP address
tegileIP012715-step16
17. Back on the primary node, click on “Settings” then “HA”
tegileIP012715-step17
18. On the active resource group (we only have one, “Resource Group A”), click “Add Floating IP”
tegileIP012715-step18
19. Enter the shared IP address and netmask (this is a unique IP, different from either of the addresses entered earlier), then choose the IP Groups we created on each node. Click “OK”
tegileIP012715-step19
20. Now we have a new IP address that will be used by whichever node owns the Resource Group
tegileIP012715-step20

The steps are pretty straightforward, but can be confusing in the beginning. Our local SE from Tegile walked us through this config when we were evaluating, but it was important for me to know how to do these things on my own.

Change IP of vCSA

Changing the IP address of my vCenter Server is not something I had ever had to do before, but that changed this week. In my quest to separate networks into more logical groupings, instead of everything living on the same subnet, I had to change the IP address of my vCenter Server Appliance to place it on a new network along with the hosts it was managing. There is, apparently, a right way and a wrong way to do this.

I logged into the vCSA web interface (vCenterIP:5480), clicked on the “Network” tab and then clicked on “Address”, assuming this would be the correct place. I changed the IP address, clicked “Save Settings”, and rebooted the appliance.

changeip012315-step1

Yeah…that wasn’t right. As I watched the appliance boot from the console, I saw a lot of errors being thrown as services tried to reach the old address and failed. So I shut down (not rebooted) the vCSA and tried a different method. This is a pretty simple process, but in case you’re looking for the right way of doing it, this is what worked for me.

Once the appliance is powered off, right click and choose “Edit Settings”
changeip012315-step2

Click the “Options” tab then choose “Properties” under “vApp Options”
changeip012315-step3

Enter the new IP address, gateway, and any other information that is changing. If you’re moving it to a new portgroup, update that now as well and click “OK”
changeip012315-step4

Once the changes have been made, power on the appliance and you should see the new addresses being referenced during start up.
changeip012315-step5

And now that start up is complete, we see the new IPs listed for managing the appliance and you should be able to connect on the new IP.
changeip012315-step6

Like I said, this is a very simple process. Once the vCSA was running, my hosts were notified of the change and were still in their cluster. Nothing bad happened and the lab continued to function as expected.

Add NFS Datastore to Cluster via PowerCLI

I have been digging into more and more PowerCLI over the last month or so, exploring faster ways to accomplish common tasks. Using the NetApp VSC plugin inside vCenter I can provision a brand new NFS datastore to an entire cluster in just a few clicks, but there is no built-in way to do this for mounting an existing datastore. The script below is just a simple way to mount an NFS datastore to a named cluster.

$ClusterName = "ProdCluster"
$DatastoreName = "VM_Win2003_NA5"
$DatastorePath = "/vol/VM_Win2003_NA5"
$NfsHost = "192.168.1.5"
get-cluster $ClusterName | get-vmhost | New-Datastore -NFS -Name $DatastoreName -Path $DatastorePath -NfsHost $NfsHost

Or you can replace each variable with its actual value, which is handy when mounting multiple datastores in the same script.

get-cluster "ProdCluster" | get-vmhost | New-Datastore -NFS -Name "VM_Win2003_NA5" -Path "/vol/VM_Win2003_NA5" -NfsHost 192.168.1.5

The next step here will be running this script from vCO, passing the variables in directly. Maybe one day I’ll have the time to figure out just how to do that…
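
As a stepping stone toward that, the script can be parameterized so the values are passed in rather than hard-coded. This is just a sketch, and the file name and parameter names are my own:

param(
    [string]$ClusterName,
    [string]$DatastoreName,
    [string]$DatastorePath,
    [string]$NfsHost
)
#Mount the NFS export on every host in the named cluster
Get-Cluster $ClusterName | Get-VMHost | New-Datastore -NFS -Name $DatastoreName -Path $DatastorePath -NfsHost $NfsHost

Saved as something like Mount-NfsDatastore.ps1, it could be called this way:

.\Mount-NfsDatastore.ps1 -ClusterName "ProdCluster" -DatastoreName "VM_Win2003_NA5" -DatastorePath "/vol/VM_Win2003_NA5" -NfsHost "192.168.1.5"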

Invalid Virtual Machine Configuration

When a Snapshot of a VM is created and one of the disks is removed prior to removal of the Snapshot, the error “Invalid Virtual Machine Configuration” will appear when attempting to delete that snapshot. This will also prevent any additional snapshots from being created.

In our situation, a snapshot was taken by the NetApp Virtual Storage Console plugin during a scheduled backup job. At the time of snapshot removal, an Oracle load test was being performed on the same storage system. This caused excessive latency and prevented the snapshot from being removed. Follow the steps below to fix the issue.

1. Locate the Virtual machine in vCenter that is throwing this error and select it
2. Click on the “Summary” tab for the VM
invalidVM121214-step2
3. Under “Storage”, right-click on the OS data drive and click “Browse datastore”
invalidVM121214-step3
4. Locate the Folder for this Virtual Machine and open it
5. Locate the file <vmname>.vmsd
invalidVM121214-step5
6. Right-click on the .vmsd file and choose “Rename”
invalidVM121214-step6
7. Change the name to <vmname>.vmsd.old
invalidVM121214-step7
8. Right click on the Virtual Machine, hover over “Snapshot” then choose “Take Snapshot”
invalidVM121214-step8
9. Enter a name and ensure both boxes are unchecked and click “OK”
invalidVM121214-step9
10. The VM snapshot might fail with the message “A general system error occurred”; this is normal.
11. Right click on the Virtual Machine, hover over “Snapshot”, then choose “Snapshot Manager”
invalidVM121214-step11
12. The previous snapshot that was there will be gone, but the recent snapshot will remain (this is normal). Click the snapshot, then click “Delete All” and “Yes” to confirm the delete
invalidVM121214-step12
13. Try taking a new snapshot and ensure it works
14. As a matter of cleanup, ensure that you delete the <vmname>.vmsd.old file once you’re finished. No need to leave stale files lying around
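
If you’d rather handle the snapshot portion (steps 8 through 12) from PowerCLI instead of the vSphere client, a rough equivalent is below; the VM name is a placeholder, and the .vmsd rename still needs to happen in the datastore browser first:

#Get the affected VM by name
$vm = Get-VM -Name "ProblemVM"
#Take a throwaway snapshot (no memory dump, no quiescing) to rebuild the snapshot database
New-Snapshot -VM $vm -Name "Cleanup" -Memory:$false -Quiesce:$false
#Remove all snapshots on the VM, the equivalent of "Delete All" in Snapshot Manager
Get-Snapshot -VM $vm | Remove-Snapshot -Confirm:$false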

Why VMUG?

As I think back on my career and where I began, I’m reminded of the people in my professional life who have helped me along the way. My first job was at a nationwide company, primarily doing desktop support for the local office as well as some of the remote locations. My boss, Jason, was someone I related to, and we had a great working relationship. I could ask for help on anything and he would stop what he was doing and make time to show me the right way to do it, not just do it for me. While I reported to him, he treated me as an equal.

My next job was for a startup and I was the lone IT guy. No matter the task, whether IT or not, I did it. It was a job where I had ownership over everything. The attitude that the success of our company rested on my shoulders was something that drove me to work harder each day. My boss, Ron, was a great person that I could freely talk to about everything. We brainstormed together how this business would run, where it would go, and how we’d get there. There was a mutual respect.

At a later job I met one of my favorite bosses and one of my better friends. While we both struggled with the company itself, we made the best of our situation and worked hard for each other. He trusted me and believed in me. He saw a lot in me, and throughout the years he has pushed me to do more and to be better than I was, not just in my career, but in my life as well.

My favorite jobs all had one thing in common: one person who made me better, who kept me learning, and who pushed and encouraged me. The person I bounced all of my (sometimes ridiculous) ideas off of, who listened to every one of them. When you’re building your career, that is what you need. You need the help of people around you to build you up.

The point of this post isn’t to talk about how important your boss or good co-workers are. Sometimes you don’t have the benefit of a boss or co-workers who CAN or WILL teach you. Sometimes you’re a one-man IT shop and all you have is yourself and Google. When you don’t have that support at your job, that’s when you need a good community.

Why are VMUG and this community so important? When I started working with VMware, I learned how to edit some virtual machine settings and then made a ton of assumptions about how things worked, because I never had the time to learn and didn’t always have the benefit of someone to ask. Each and every day I was just trying to keep my head above water, not growing in my career. Those were the days I wished I had a community I could go to and ask all the questions I had.

With so many VMware products and so many configuration options, every member adds value to VMUG. Being active in VMUG isn’t just for “experts”. Sharing the knowledge you’ve gained in YOUR environment can help someone in theirs. When you share your struggles, the community is there for you. When someone else shares their struggles, you can be there for them.

I have recently joined as a leader for the VMUG in Portland, Oregon. The vision I have for us is a community that actively works to help each other succeed. We can ask questions, share ideas, or just talk over beers during a happy hour. I didn’t want to be a VMUG leader because I’m an expert; far from it. I wanted to be a leader because I want to see us succeed. I want each and every VMUG member to know they have a place to turn whenever they need help. The only way we can be successful is if our members are active and talk to each other.

The more events you come to, the more connections you’ll make, the larger your community will grow, and the better this VMUG will be. Everyone has value, regardless of how big or how small an environment they support, their skill level, years of experience, certifications, or any other factor. The VMware User Group is nothing without its users.

We are all in this together and your VMUG community is on your side.

Restore Files & AD Objects From NetApp & Veeam v8

With the release of Veeam Backup & Replication v8, we can restore directly from NetApp Snapshots. Whether it’s an entire VM, individual files, or just some objects in Active Directory, you can do it all from the Veeam console. For a guide on installing and configuring Veeam v8 with NetApp storage, click here.

We’ll be testing the restore of individual files and some Active Directory objects for this blog post. In this scenario we have a couple of Domain Controllers (2008 R2) and a couple of member servers with some files that we’ll delete. We also have an OU containing a couple of users, a member server, and a group.

Each of these VMs sits on one of two volumes, Win_2008 and Win_2012. If you click on “Storage Infrastructure” in the Veeam Backup and Replication console, then expand your NetApp storage, you’ll see a list of all the volumes available and their snapshots.
veeamrest120114-part1

1. I’ve taken a snapshot of these volumes in NetApp System Manager. To list these snaps, refresh the volume by right-clicking on it and choosing “Rescan volume”, or right-click on the storage array and choose “Rescan Storage” (since we have 2 volumes to refresh, we’ll rescan storage).
veeamrest120114-step1
2. A new window will popup showing the progress
veeamrest120114-step2
3. Once completed, we now see the new snapshot I created called “Pre-delete”
veeamrest120114-step3
4. I’m going to delete a file sitting on the desktop of both “Lab2008” (on the Win_2008 datastore) and “Lab2012” (on the Win_2012 datastore).
veeamrest120114-step4a
veeamrest120114-step4b
5. And let’s also delete the OU “Delete Test”, which contains a couple of test users, a group they are a part of, and the VM “Lab2008”
veeamrest120114-step5
6. Now that those files and OU\objects have been deleted, let’s go back to the Veeam console and see what we can recover. We’ll start with the files for the “Lab2012” VM.
7. Expanding the “Win_2012” datastore in the “Storage Infrastructure” view and clicking on the name of the snapshot I created earlier, we see the “Lab2012” VM
veeamrest120114-step7
8. We right-click on “Lab2012”, hover over “Restore guest files” and then choose “Microsoft Windows”
veeamrest120114-step8
9. Under the “File Level Restore” screen, click “Customize” in the bottom right corner
veeamrest120114-step9
10. As long as you’re restoring to a vCenter/Host that’s already been added to Veeam, choose the host, resource pool (if any) and folder. Click “OK” then click “Next”
veeamrest120114-step10
11. Enter a reason for the restore and click “Next”
veeamrest120114-step11
12. Click “Finish”
veeamrest120114-step12
13. The restore session will open and mount the snapshot/VM to the chosen host
veeamrest120114-step13
14. In vCenter, we see 2 tasks: creating a datastore and registering the virtual machine.
veeamrest120114-step14
15. On the host, we see a new powered off VM with the name of “Lab2012” followed by a GUID.
veeamrest120114-step15
16. Back at the Veeam console, the Backup Browser window appears and we can browse to the location of the deleted file
veeamrest120114-step16
17. From here, we can copy the file to our local machine or restore it directly to the Virtual Machine. Right click on the file and choose “Restore” then “Overwrite”
veeamrest120114-step17
18. We’ll pick “Use the following account” and choose my Lab Domain credentials and click “OK”
veeamrest120114-step18
19. The restore process will start and you’ll see this output if you click “Show Details”
veeamrest120114-step19
20. Logging back in to “Lab2012” we can see the file has been restored
veeamrest120114-step20
21. Close the “Restoring files” window in the Veeam console and the “Backup Browser” window. After they’re closed, the VM will be unregistered from the host and the datastore will be unmounted.
22. I’m doing a restore from “Lab2008”, but this time I’ll just copy the file to my local computer instead of restoring it to the guest VM. After browsing the datastore snapshots and choosing “Restore guest files”, we’ll browse the directory structure, locate the file, right-click and choose “Copy To”
veeamrest120114-step22
23. A window will pop up to choose the folder location on your machine and whether to preserve permissions and ownership. Then click “OK”
veeamrest120114-step23
24. Now in the root of the C: drive we have the “Lab2008-txt” file
veeamrest120114-step24
25. Let’s look at the “Lab2008” VM now. It was in the OU we deleted, and after rebooting it and trying to log in we receive the message “The security database on the server does not have a computer account for this workstation trust relationship”. We can fix that.
veeamrest120114-step25
26. Back in the Veeam console, in the “Pre-delete” snapshot for the “Win_2008” datastore, we’ll locate the “Lab-DC01” VM. Right-click on the VM, hover over “Restore application items” and then click “Microsoft Active Directory objects”
veeamrest120114-step26
27. Our host settings are saved from the last restore we did, so click “Next”
veeamrest120114-step27
28. Enter a restore reason and click “Next”
veeamrest120114-step28
29. Review the summary and click “Finish”
veeamrest120114-step29
30. The Veeam Explorer for Microsoft Active Directory window will appear
veeamrest120114-step30
31. Then the VM will be mounted in vCenter
veeamrest120114-step31
32. Once the Veeam Explorer for Active Directory window opens, you’ll be able to browse your domain object. We’ll expand the “LabOU” object, where we see “Delete Test” with the same 2 test users, the “Lab2008” server, and the group those users belong to.
veeamrest120114-step32
33. Right click the “Delete Test” OU and choose “Restore container to LabDC.local”
veeamrest120114-step33
34. Enter the credentials for the account with access to add objects to the domain and click “OK”
veeamrest120114-step34
35. You’ll see the progress of the restore and then the summary of how many objects were restored
veeamrest120114-step35

(In order for this to work your Veeam server will need network access to the live domain controller)

36. If we refresh Active Directory Users and Computers on “Lab-DC01”, we’ll see the OU is back with all of its objects
veeamrest120114-step36
37. In the properties for the users, we can see that group membership was retained. The group “Email Group” is located in another OU, and that membership was restored as well
veeamrest120114-step37
38. And now when we try to log in to “Lab2008” with domain credentials, it works with no issues.

How fast can this restore happen? From the time I opened the Veeam console until the OU reported as restored took 3 minutes and 34 seconds. In an emergency where someone accidentally deletes an entire OU, a user account, a server, or anything else, it can all be restored in under 5 minutes, without the need to reset any passwords, and everything will work without anyone ever noticing. Veeam is awesome and just keeps getting better and better.