ESXi Host Patching with PowerShell & Update Manager

Update Manager certainly makes host patching simple, but leaves a few things to be desired. How many times have you attempted to update a host in Update Manager only to have the host never enter maintenance mode because of a DRS rule, VMware tools installation or local ISO mapped to a VM? I wanted to find a way to check for all these things as I’m performing the patching process and be able to accomplish it at the cluster level.

For the script itself I have broken it down into the different sections along with screenshots of what you’ll see when running the script. It makes it a little busy to follow along with for this entry, but hopefully it makes sense. At the bottom of the page I have the whole script put together to make it easier to copy and run it on your own.

Let’s dig into the script.

1. While you can manually define the vCenter server in the script, I prefer being prompted as I have multiple vCenter servers that I work with. The multiple lines and color emphasis was for a customer that would forget to enter the vCenter name and instead enter the ESXi host name.

Write-Host "Enter the FQDN of the " -NoNewline
Write-Host "[vCenter Server]" -ForegroundColor Red -NoNewline
Write-Host " to connect to: " -NoNewline
$vCenterFqdn = Read-Host
Connect-viserver $vCenterFqdn

2. Here we’re going to list all the clusters. I use this menu system all the time now in my PowerShell scripts to make it easier to make selections instead of having to remember and manually enter the name of an object. This is getting all the clusters then converting the number selection that’s entered into the cluster name.

$global:i=0
Get-Cluster | Sort Name | Select @{Name="Number";Expression={$global:i++;$global:i}},Name -OutVariable menu | format-table -AutoSize
$clusterNum = Read-Host "Select the number of the Cluster to be patched"
$clusterName = $menu | where {$_.Number -eq $clusterNum}


3. Now that we have the cluster we’re going to work with we search for DRS rules. Specifically, we’re looking for “Must Run” rules. This will prevent a VM from moving to another host. While every environment is different and they have “must run” rules for a variety of reasons, I’m comfortable disabling this during patch events. If there are any rules we’re going to list the rule names in the PowerShell console and give you the option to disable or not.

a. Remember, this is only looking at “Must Run” DRS rules for the entire cluster, not for an individual host. If you’re patching, odds are you’ll be doing the entire cluster anyway so I didn’t break this down on a host-by-host basis.

$drsRules = Get-Cluster $($clusterName.Name) | Get-DrsVMHostRule | Where {$_.Type -eq "MustRunOn"} | Where {$_.Enabled -eq $True}
IF ($drsRules.Count -gt "0"){Write-Host "The following rules may prevent a host from entering Maintenance mode:" -foreground "Yellow"; $drsRules.Name; $disableRules = Read-Host "Press Y to disable these rules. Anything else to continue";
IF ($disableRules -eq "Y"){Write-Host "Disabling DRS Rules..." -foreground "Yellow";
foreach ($name in $drsRules){Set-DrsVMHostRule -rule $name -enabled:$false}} ELSE {Write-Host "Skipping disabling of DRS Rules. Continuing..." -foreground "Yellow"}} ELSE {Write-Host "No "Must Run" Rules in $($clusterName.Name). Continuing..." -foreground "Yellow"}

In the picture I have the name of the DRS rule highlighted (the VM name was in the rule so it’s been obscured).


4. Now that we’ve decided what to do with our DRS rules, we can get down to selecting the baseline. This script can be used for both patching and for Upgrades. There is a check later on in the script that will skip the “Staging” step and go right to remediation if it’s an upgrade. Once again, we’re using that menu selection function to display all upgrades/baselines and let us choose the one to use.

$global:i=0
Get-Baseline | Select Name | Sort Name | Select @{Name="Number";Expression={$global:i++;$global:i}},Name -OutVariable menu | format-table -AutoSize
$baselineNum = Read-Host "Select the number of the Baseline to be attached"
$baselineName = $menu | where {$_.Number -eq $baselineNum}
Write-Host "Attaching $($baselineName.Name) Baseline to $($clusterName.Name)..." -Foreground "Yellow"
$baseline = Get-Baseline $baselineName.Name
Attach-Baseline -Baseline $baseline -Entity $clusterName.Name


5. Here’s where we’re going to complicate things a bit. I have 2 loops in this script. Loop number 1 is for checking if a host has any patches available. We’ll check a selected host against the attached baseline, if there are no available updates/upgrades then we report that in the PowerShell console and return to the host selection screen. The second loop is when a selected host has been patched we return to the host selection screen to choose the next one in the list.

DO
{
DO
{

6. Now that we’ve opened up our loop, we can start with selecting a host in the cluster. Once again, menu selection, this time we’re getting all the hosts in the chosen cluster and we’re displaying the host name, build, esxi version, and state. This makes it easier to know what hosts have been patched, which ones are still left, and what hosts are already in maintenance mode. In a larger environment you may forget what host name you were working on so seeing if a host was in maintenance mode and ready to be upgrade may be beneficial.

$global:i=0
Get-Cluster $clusterName.Name | Get-VMhost | Sort Name | Select @{Name="Number";Expression={$global:i++;$global:i}},Name,Build,Version,State -OutVariable menu | format-table -AutoSize
$hostNum = Read-Host "Select the number of the Host to be patched"
$hostName = $menu | where {$_.Number -eq $hostNum}


7. With our first host chosen we’re going to scan its inventory to see what patches it currently has installed.

Write-Host "Scanning $($hostName.Name) patch inventory..." -foreground "Yellow"
Scan-Inventory -Entity $hostName.Name


8. Now that we’ve scanned it, we’re going to check it for compliance. If there are patches available, we’ll move on to the next step to see if there are any VMs with ISO or Vmware tools installations. If there aren’t any patches, we’re reporting that and then sending us back to the host selection screen.

a. As a note, the second ‘}’ after the “Write-Host ‘Host is out of date” command is to close the second loop from step 5.

Write-Host "Scanning $($hostName.Name) for patch compliance..." -foreground "Yellow"
$compliance = Get-Compliance $hostName.Name
IF ($compliance.Status -eq "Compliant"){Write-Host "No available patches for $($hostName.Name). Choose a different host" -foreground "Red"}ELSE{Write-Host "Host is out of date" -foreground "Yellow"}}
UNTIL ($compliance.Status -ne "Compliant")


9. Now that we have some patches to apply, we check for active VMware tools installations. We perform the lookup for VMs with tools installer mounted then we perform a count on that output. If there are more than 0, we list all the VMs. Now that you see all the VMs, you can press ‘Y’ to force the unmount and continue or you can ignore it and hope the VMs move.

a. The unmount command works most of the time, but on some Linux OS’s I’ve run into issues with it. Just keep that in mind

$vmtools = Get-VMHost $hostName.Name | Get-VM | Where {$_.ExtensionData.RunTime.ToolsInstallerMounted -eq "True"} | Get-View
IF ($vmtools.Count -gt "0"){Write-Host "The following VMs on $($hostName.Name) have VMTools Installer Mounted:";
$vmtools.Name;
$unmountTools = Read-Host "Press "Y" to unmount VMTools and continue. Anything else to skip VMTools unmounting";
IF ($unmountTools -eq "Y") {Write-Host "Unmounting VMTools on VMs..." -foreground "Yellow"; foreach ($vm in $vmtools) {$vm.UnmountToolsInstaller()}}ELSE{Write-Host "Skipping VMTools unmounting..." -foreground "Yellow"}}ELSE{Write-Host "No VMs found with VMTools Installer mounted. Continuing..." -foreground "Yellow"}


10. With all our VMware tools installations killed, we move on to ISOs. ISO’s that are stored in shared datastores won’t have an issue moving, but if ISOs have been mounted directly to a VM through a console window those can cause a hang up. Again, you know your environment better than me so use your best judgement when picking what to do.

$mountedCDdrives = Get-VMHost $hostName.Name | Get-VM | Where { $_ | Get-CdDrive | Where { $_.ConnectionState.Connected -eq "True" } }
IF ($mountedCDdrives.Count -gt "0"){Write-Host "The following VMs on $($hostName.Name) have mounted CD Drives:";
$mountedCDdrives.Name;
$unmountDrives = Read-Host "Press "Y" to unmount these ISOs and continue. Anything else to skip ISO unmounting";
IF ($unmountDrives -eq "Y") {Write-Host "Unmounting ISOs on VMs..." -foreground "Yellow"; foreach ($vm in $mountedCDdrives) {Get-VM $vm | Get-CDDrive | Set-CDDrive -NoMedia -Confirm:$False}}ELSE{Write-Host "Skipping ISO unmounting..." -foreground "Yellow"}}ELSE{Write-Host "No VMs found with ISOs mounted. Continuing..." -foreground "Yellow"}


11. Now we check if the host is in maintenance mode. This check isn’t required and we could just try to put a host in maintenance mode that’s already in maintenance mode without any errors, I just prefer to have this called out so people know that the host will be placed in maintenance mode. Also, if you don’t want to confirm and just want the host to automatically go into maintenance mode, you can remove the “Read-Host “Press Enter to place $($hostName.Name in Maintenance mode”;” section and it will automatically place the host in maintenance mode.

$hostState = Get-VMHost $hostname.Name
IF ($hostState.State -eq "Maintenance"){Write-Host "$($hostName.Name) is already in maintenance mode. Continuing to patch Staging/Remediation" -foreground "Yellow"}ELSE{Read-Host "Press Enter to place $($hostName.Name) in Maintenance mode"; Start-Sleep 7; Write-Host "Enabling Maintenance mode for $($hostName.Name). This may take a while..." -foreground "Yellow"; Set-VMHost $hostName.Name -State "Maintenance"}


12. This was an interesting issue I ran into. I had a customer running ESXi 6.0 with PernixData installed which wasn’t compatible with ESXi 6.5 which we were upgrading to. When we attempted to upgrade we’d fail because the PernixData VIB was present. I threw this check in to see if this VIB existed on their hosts and to remove it before proceeding. I also added a second placeholder VIB name in case you have multiple VIBs to remove you can just replace the name with the appropriate VIB name and even add additional VIBs with another -OR $_.ID -eq “vibname”

$esxcli = Get-esxcli -vmhost $hostName.Name
$vibCheck = $esxcli.software.vib.list() | Where {($_.ID -eq "PernixData_bootbank_pernixcore-vSphere6.0.0_3.5.0.2-39793" -OR $_.ID -eq "Other_vib_name_xxxxxx")}
IF ($vibCheck.Count -gt "0"){Write-Host "Incompatible VIB found. Removing from host..." -foreground "Yellow"; foreach ($a in $vibCheck){$esxcli.software.vib.remove($null, $true, $false, $true, $a.Name)}}ELSE{Write-Host "No known incompatible VIBs found. Continuing..." -foreground "Green"}


13. And, of course, if removing a VIB we need to reboot so now we throw this reboot check in there as well. If there were no VIBs found in Step 12, this will be ignored. Otherwise, we prompt for reboot, enter the reboot command, check for the host to enter the NotResponding state and report on the state until it responds in vCenter and returns to Maintenance state.

IF ($vibCheck.Count -gt "0" -AND $baseline.BaselineType -eq "Upgrade"){Read-Host "VIBs were removed from host. Press enter to reboot host before attempting upgrade";Restart-VMhost $hostName.Name -confirm:$false}ELSE{$skip = "1"; Write-Host ""}
IF ($skip -ne "1"){
Write-Host "$($hostName.Name) is going to reboot..." -foreground "Yellow"
do {
Start-Sleep 3
$hostState = (get-vmhost $hostName.Name).ConnectionState
}
while ($hostState -ne "NotResponding")
Write-Host "$($hostName.Name) is currently down..." -foreground "Yellow"

#Wait for server to reboot
do {
Start-Sleep 5
$hostState = (get-vmhost $hostName.Name).ConnectionState
Write-Host "Waiting for $($hostName.Name) to finish rebooting..." -foreground "Yellow"
}
while ($hostState -ne "Maintenance")
Write-Host "$($hostName.Name) is back up..." -foreground "Yellow"}ELSE{Write-Host ""}

14. Now that all that work is done, we can start staging patches. If this is a patch baseline we run stage command. If it’s an upgrade baseline, we’ll skip this step

IF ($baseline.BaselineType -eq "Upgrade"){Write-Host "$($baseline.Name) is an Upgrade Baseline. Skipping to remediation..." -foreground "Yellow"}ELSE{Write-Host "Staging patches to $($hostName.Name) in Cluster $($clusterName.Name)..." -foreground "Yellow"; Stage-Patch -entity $hostName.Name -baseline $baseline}


15. Once patches have been staged (or upgrades ready to push) it’s time for remediation. We prompt that the host will reboot on its own once the patch has completed and we set a few advanced options. These are the defaults, but can still be environment specific so check to make sure this is what you want to use.

Write-Host "Remediating patches on $($hostName.Name) in Cluster $($clusterName.Name). Host will reboot when complete" -foreground "Yellow"
Remediate-Inventory -Entity $hostName.Name -Baseline $baseline -HostFailureAction Retry -HostNumberofRetries 2 -HostRetryDelaySeconds 120 -HostDisableMediaDevices $true -ClusterDisableDistributedPowerManagement $true -ClusterDisableHighAvailability $true -confirm:$false -ErrorAction SilentlyContinue


At the top of our PowerShell window we get the percentage of completion for our task. It’s not very accurate as it stays at 30% then goes to 92% when it’s nearly complete.

16. Once the host has been rebooted and comes back online we want to see the current status of that host to ensure updates were successful. We are comparing the build number we grabbed before we started patching against the build number after the reboot. If they are the same, something didn’t work and we need to check into it. Otherwise, we do nothing.

Write-Host "Retrieving Host build status..." -foreground "Yellow"
$hostBuild = Get-VMHost $hostName.Name
IF ($hostBuild.Build -eq $hostState.Build){Write-Host "Patch/Upgrade was not applied. Check status in vCenter and re-run the script. Exiting..." -foreground "Red";$error;Start-Sleep 20;break}ELSE{}

17. Now that the host was patched, we show a list of all the hosts in that cluster along with their build, version, and state. This gives us a full view of the cluster so we can see if there are any hosts left to be patched and then we exit maintenance mode for this host.

Get-Cluster $clusterName.Name | Get-VMhost | Select Name,Build,Version,State | Sort Name | format-table -autosize
Write-Host "Exiting Maintenance mode for Host $($hostName.Name)..." -foreground "Yellow"
Get-VMHost $hostName.Name | Set-VMHost -State Connected


18. Based on that list will determine the answer to our next question. We are being prompted to re-enable the DRS rules we previously disabled (if any). If any rules were chosen to be disabled we captured that in a variable in step 3. We can choose to re-enable just those disabled rules by pressing ‘Y’ or if there are other hosts left to patch we just press any other key to continue.

IF ($disableRules -eq "Y") {$enableRules = Read-Host "If Cluster patching is complete press "Y" to re-enable DRS rules. Anything else to continue";
IF ($enableRules -eq "Y") {Write-Host "Re-enabling DRS Must Run rules" -foreground "Yellow"; 
foreach ($name in $drsRules){Set-DrsVMHostRule -rule $name -enabled:$true}} ELSE {
Write-Host "DRS Rules not being re-enabled. Continuing..." -foreground "Yellow"}} ELSE {}


19. In this last question we’re just displaying the output from our last host patched and prompting the user to quit patching or go back to step 6 and pick the next host in the cluster to patch.

$answer = Read-Host "$($hostname.Name) patched in Cluster $($clusterName.Name). Press "1" to re-run the script. Anything else to exit"


20. Finally, to close out the first loop, we have the following lines. In step 19 we have the variable $answer which asks the user to enter ‘1’ to re-run the script and pick another host. This line at the bottom is saying until the user enters something other than 1, keep performing that loop. If anything else is entered, the script exits. Answering “1” will start the script over from Step 6. We will perform another “Get-Cluster | Get-VMHost” on the chosen cluster and retrieve the current build and state information for each of the hosts and display the updated results. As you can see from the screenshot below, vmm-04 is no in a Connected state with a Build number of 9298722,

}
UNTIL ($answer -ne "1")


Below is the script all put together to copy and test. Like all scripts pulled from the internet, make sure you test them in a lab/isolated environment until you can ensure proper functionality.

Write-Host "Enter the FQDN of the " -NoNewline
Write-Host "[vCenter Server]" -ForegroundColor Red -NoNewline
Write-Host " to connect to: " -NoNewline
$vCenterFqdn = Read-Host
Connect-viserver $vCenterFqdn

$global:i=0
Get-Cluster | Sort Name | Select @{Name="Number";Expression={$global:i++;$global:i}},Name -OutVariable menu | format-table -AutoSize
$clusterNum = Read-Host "Select the number of the Cluster to be patched"
$clusterName = $menu | where {$_.Number -eq $clusterNum}

$drsRules = Get-Cluster $($clusterName.Name) | Get-DrsVMHostRule | Where {$_.Type -eq "MustRunOn"} | Where {$_.Enabled -eq $True}
IF ($drsRules.Count -gt "0"){Write-Host "The following rules may prevent a host from entering Maintenance mode:" -foreground "Yellow"; $drsRules.Name; $disableRules = Read-Host "Press Y to disable these rules. Anything else to continue";
IF ($disableRules -eq "Y"){Write-Host "Disabling DRS Rules..." -foreground "Yellow";
foreach ($name in $drsRules){Set-DrsVMHostRule -rule $name -enabled:$false}} ELSE {Write-Host "Skipping disabling of DRS Rules. Continuing..." -foreground "Yellow"}} ELSE {Write-Host "No "Must Run" Rules in $($clusterName.Name). Continuing..." -foreground "Yellow"}

$global:i=0
Get-Baseline | Select Name | Sort Name | Select @{Name="Number";Expression={$global:i++;$global:i}},Name -OutVariable menu | format-table -AutoSize
$baselineNum = Read-Host "Select the number of the Baseline to be attached"
$baselineName = $menu | where {$_.Number -eq $baselineNum}
Write-Host "Attaching $($baselineName.Name) Baseline to $($clusterName.Name)..." -Foreground "Yellow"
$baseline = Get-Baseline $baselineName.Name
Attach-Baseline -Baseline $baseline -Entity $clusterName.Name

DO
{
DO
{
$global:i=0
Get-Cluster $clusterName.Name | Get-VMhost | Sort Name | Select @{Name="Number";Expression={$global:i++;$global:i}},Name,Build,Version,State -OutVariable menu | format-table -AutoSize
$hostNum = Read-Host "Select the number of the Host to be patched"
$hostName = $menu | where {$_.Number -eq $hostNum}

Write-Host "Scanning $($hostName.Name) patch inventory..." -foreground "Yellow"
Scan-Inventory -Entity $hostName.Name

Write-Host "Scanning $($hostName.Name) for patch compliance..." -foreground "Yellow"
$compliance = Get-Compliance $hostName.Name 
IF ($compliance.Status -eq "Compliant"){Write-Host "No available patches for $($hostName.Name). Choose a different host" -foreground "Red"}ELSE{Write-Host "Host is out of date" -foreground "Yellow"}}
UNTIL ($compliance.Status -ne "Compliant")

$vmtools = Get-VMHost $hostName.Name | Get-VM | Where {$_.ExtensionData.RunTime.ToolsInstallerMounted -eq "True"} | Get-View
IF ($vmtools.Count -gt "0"){Write-Host "The following VMs on $($hostName.Name) have VMTools Installer Mounted:";
$vmtools.Name;
$unmountTools = Read-Host "Press "Y" to unmount VMTools and continue. Anything else to skip VMTools unmounting";
IF ($unmountTools -eq "Y") {Write-Host "Unmounting VMTools on VMs..." -foreground "Yellow"; foreach ($vm in $vmtools) {$vm.UnmountToolsInstaller()}}ELSE{Write-Host "Skipping VMTools unmounting..." -foreground "Yellow"}}ELSE{Write-Host "No VMs found with VMTools Installer mounted. Continuing..." -foreground "Yellow"}

$mountedCDdrives = Get-VMHost $hostName.Name | Get-VM | Where { $_ | Get-CdDrive | Where { $_.ConnectionState.Connected -eq "True" } }
IF ($mountedCDdrives.Count -gt "0"){Write-Host "The following VMs on $($hostName.Name) have mounted CD Drives:";
$mountedCDdrives.Name;
$unmountDrives = Read-Host "Press "Y" to unmount these ISOs and continue. Anything else to skip ISO unmounting";
IF ($unmountDrives -eq "Y") {Write-Host "Unmounting ISOs on VMs..." -foreground "Yellow"; foreach ($vm in $mountedCDdrives) {Get-VM $vm | Get-CDDrive | Set-CDDrive -NoMedia -Confirm:$False}}ELSE{Write-Host "Skipping ISO unmounting..." -foreground "Yellow"}}ELSE{Write-Host "No VMs found with ISOs mounted. Continuing..." -foreground "Yellow"}

$hostState = Get-VMHost $hostname.Name
IF ($hostState.State -eq "Maintenance"){Write-Host "$($hostName.Name) is already in maintenance mode. Continuing to patch Staging/Remediation" -foreground "Yellow"}ELSE{
#Read-Host "Press Enter to place $($hostName.Name) in Maintenance mode"; Start-Sleep 7; Write-Host "Enabling Maintenance mode for $($hostName.Name). This may take a while..." -foreground "Yellow"; Set-VMHost $hostName.Name -State "Maintenance"}
Write-Host "Enabling Maintenance mode for $($hostName.Name). This may take a while..." -foreground "Yellow"; ; Start-Sleep 7; Set-VMHost $hostName.Name -State "Maintenance"}

$esxcli = Get-esxcli -vmhost $hostName.Name
$vibCheck = $esxcli.software.vib.list() | Where {($_.ID -eq "PernixData_bootbank_pernixcore-vSphere6.0.0_3.5.0.2-39793" -OR $_.ID -eq "Other_vib_name_xxxxxx")}
IF ($vibCheck.Count -gt "0"){Write-Host "Incompatible VIB found. Removing from host..." -foreground "Yellow"; foreach ($a in $vibCheck){$esxcli.software.vib.remove($null, $true, $false, $true, $a.Name)}}ELSE{Write-Host "No known incompatible VIBs found. Continuing..." -foreground "Green"}

IF ($vibCheck.Count -gt "0" -AND $baseline.BaselineType -eq "Upgrade"){Read-Host "VIBs were removed from host. Press enter to reboot host before attempting upgrade";Restart-VMhost $hostName.Name -confirm:$false}ELSE{$skip = "1"; Write-Host ""}
IF ($skip -ne "1"){
Write-Host "$($hostName.Name) is going to reboot..." -foreground "Yellow"
do {
Start-Sleep 3
$hostState = (get-vmhost $hostName.Name).ConnectionState
}
while ($hostState -ne "NotResponding")
Write-Host "$($hostName.Name) is currently down..." -foreground "Yellow"

#Wait for server to reboot
do {
Start-Sleep 5
$hostState = (get-vmhost $hostName.Name).ConnectionState
Write-Host "Waiting for $($hostName.Name) to finish rebooting..." -foreground "Yellow"
}
while ($hostState -ne "Maintenance")
Write-Host "$($hostName.Name) is back up..." -foreground "Yellow"}ELSE{Write-Host ""}

IF ($baseline.BaselineType -eq "Upgrade"){Write-Host "$($baseline.Name) is an Upgrade Baseline. Skipping to remediation..." -foreground "Yellow"}ELSE{Write-Host "Staging patches to $($hostName.Name) in Cluster $($clusterName.Name)..." -foreground "Yellow"; Stage-Patch -entity $hostName.Name -baseline $baseline}

Write-Host "Remediating patches on $($hostName.Name) in Cluster $($clusterName.Name). Host will reboot when complete" -foreground "Yellow"
Remediate-Inventory -Entity $hostName.Name -Baseline $baseline -HostFailureAction Retry -HostNumberofRetries 2 -HostRetryDelaySeconds 120 -HostDisableMediaDevices $true -ClusterDisableDistributedPowerManagement $true -ClusterDisableHighAvailability $true -confirm:$false -ErrorAction SilentlyContinue

Write-Host "Retrieving Host build status..." -foreground "Yellow"
$hostBuild = Get-VMHost $hostName.Name
IF ($hostBuild.Build -eq $hostState.Build){Write-Host "Patch/Upgrade was not applied. Check status in vCenter and re-run the script. Exiting..." -foreground "Red";$error;Start-Sleep 20;break}ELSE{}

Get-Cluster $clusterName.Name | Get-VMhost | Select Name,Build,Version,State | Sort Name | format-table -autosize
Write-Host "Exiting Maintenance mode for Host $($hostName.Name)..." -foreground "Yellow"
Get-VMHost $hostName.Name | Set-VMHost -State Connected

IF ($disableRules -eq "Y") {$enableRules = Read-Host "If Cluster patching is complete press "Y" to re-enable DRS rules. Anything else to continue";
IF ($enableRules -eq "Y") {Write-Host "Re-enabling DRS Must Run rules" -foreground "Yellow"; 
foreach ($name in $drsRules){Set-DrsVMHostRule -rule $name -enabled:$true}} ELSE {
Write-Host "DRS Rules not being re-enabled. Continuing..." -foreground "Yellow"}} ELSE {}

$answer = Read-Host "$($hostname.Name) patched in Cluster $($clusterName.Name). Press "1" to re-run the script. Anything else to exit"

}
UNTIL ($answer -ne "1")

Create vCenter 5.5 Upgrade Baseline

I have a preference to do brand new installs of ESXi for each new release. With new releases there are new options, new features, and caveats with existing functionality. This means the migration process takes longer, but it helps ensure that I’m applying current best practices each and every time instead of applying upgrades to a flawed design.

In some instances this isn’t a concern and we can use vCenter with Update Manager to upgrade hosts to the latest version of ESXi and preserve our current configuration (name, IP, storage, etc). I use this process when remotely upgrading Hosts in my colo facility without having console access to the physical servers.

This is a step by step guide to creating an upgrade baseline to upgrade an existing ESXi Host (5.0 for this writing) to 5.5 and begin the upgrade process on a Host.

Prerequisites:

1. Existing host running 5.0 or 5.1 connected to vCenter Server 5.5
2. vCenter Server 5.5 with Update Manager installed
3. Downloaded .ISO of ESXi 5.5

Steps:

1. Using the vSphere thick client (not web client), connect to the vCenter server and click the “Home” button followed by “Update Manager” under “Solutions and Applications”
UPG052714-step1
2. Click on “ESXi Images” tab
UPG052714-step2
3. Click the link for “import ESXi Image” to wards the top right corner
UPG052714-step3
4. Click “Browse” and locate the .ISO of ESXi, click “Open” then click “Next”
UPG052714-step4

  • a. If you receive a security warning, click the check box to install the certificate and click “Ignore”
  • b. The ISO should upload. When completed click “Next”
    UPG052714-step4b

5. Enter the name of this upgrade baseline identifying the version in the name or description then click “Finish”
UPG052714-step5
6. Click the “Home” button followed by “Hosts and Clusters”
UPG052714-step6
7. Click on the Host to be upgraded and then click the “Update Manager” tab
UPG052714-step7
8. Click the “Attach” link towards the top right corner
UPG052714-step8
9. Place a check in just the upgrade baseline created and then click “Attach”
UPG052714-step9
10. Click the “Remediate” button towards the lower right corner

  • a. Confirm Upgrade baselines and the ESXi 5.5 baseline are selected then click “Next”
    UPG052714-step10a
  • b. Accept the license agreement and click “Next”
  • c. Leave “Remove installed third-party software” unchecked and click “Next”
  • d. Leave the schedule as “Immediate” and click “Next”
  • e. Since this host is not in a cluster, choose “Power off virtual machines” and click “Next” (THIS WILL POWER OFF ANY VMS THAT ARE ON THAT HOST)
    UPG052714-step10e
  • f. Click the “Finish” button

11. This process takes awhile and you’ll lose access to the server while it is remediating. If you have access to the console during this time, it is a good idea to have it open and watch the progress.

Once the upgrade is complete the Host will be available within vCenter and will be running ESXi 5.5. Once completed, make sure you double-check your settings (time, network, DNS) to ensure all your settings are still there. Also, take this time to attach your patches baseline and get the latest patches applied to this Host.

vCenter 5.5 Update Manager Install with SQL Mirroring

When I first started at my current job we were a company with a few standalone SQL Servers. There were development and production instances on both SQL 2005 and 2008. This isn’t a problem, but we lacked any kind of High Availability for these databases. One of the first projects I took on was creating a SQL 2012 Failover Cluster. The setup was relatively painless and it provided us the ability to patch SQL hosts without having to take down any of the applications that depending on it. The drawback was every time I did a cluster failover vCenter Update Manager would stop working and the service needed to be restarted. A minor annoyance, but something that always bothered me.

To alleviate this (and with available SQL licenses), I implemented a new SQL 2012 Mirrored instance and while I was building our brand new ESXi 5.5 environment it was the perfect time to move the vCenter Update Manager database to SQL mirroring. While I don’t have a blog post about how to setup SQL Mirroring (but I do have the process documented), this shows the process of provisioning the databases on the Principle and the Mirror and the commands to mirror the database with automatic failover (with a Witness server). In the future I hope to blog about the setup of SQL Mirroring.

 

Prerequisites:

  1. Have vCenter 5.5 already installed and running
  2. Download the ISO for vCenter 5.5 from VMware which will need to be mounted on the server that will host vCenter Update Manager (VUM).
  3. Have an additional Disk drive added to the destination server hosting Update manager because I prefer leaving the OS drive for the OS and all programs are installed on the secondary data disk.
  4. 3 Servers with SQL installed and configured for mirroring (Principle, Mirror, Witness).
  5. Install the 64-bit SQL 10 Native Client from the SQL 2008 install .ISO (sqlncli.msi) on the server hosting VUM.
  6. A domain user account to run the VUM service and connect to SQL (domain\vupdatemanager for this writing)

 

SQL Mirroring Configuration:

  1. Connect to the principle SQL server (SQLMir-01 for this writing)
  2. Expand Security and Logins. Right click “Logins” and click “New Login”
    DB051314-step2
  3. Enter the login name for the Update Manager Active Directory account, choose “Windows Authentication”
    1. Change the “Default database” to “msdb” and click “OK”
      DB051314-step3a
    2. Click on “User Mapping” and place a check next to “msdb” then under “Database role membership” place a check next to “db_owner”
      DB051314-step3b
  4. Right click on “Databases” and choose “New Database”
    DB051314-step4

    1. Enter the database name
      DB051314-step4a

      1. Click the “…” button next to “Owner” and browse for the login we just created, place a check mark for it and click “OK” and “OK”
        DB051314-step4ai
    2. Click the “Options” link on the left side and ensure that Recovery Model is set to “Full” and Compatibility level is set to “SQL Server 2012 (110)” then click “OK”
      DB051314-step4b
  5. Right click on the newly created database and go to “Tasks” followed by “Back Up”
    DB051314-step5

    1. Name the backup file and note the location of the backup file and click “OK”
      DB051314-step5a
    2. Navigate to that Location and copy the backup
      DB051314-step5b
    3. Paste this file on to the Mirror Server
      DB051314-step5c
  6. Connect to the Mirror SQL Server (SQLMir-02 for this writing) and create the Update Manager account just like in Step 3 on that server as well (Do not create the database)
  7. Right click on “Databases” and choose “Restore Database”
    DB051314-step7

    1. Click “Device” for the source, then click the “…” button, click the “Add” button and it locate the .BAK file. Click on it and click “OK”, then “OK” again.
      DB051314-step7a
    2. Click the “Options” link on the left side and change “Recovery state” to “RESTORE WITH NORECOVERY” then click “OK”
      DB051314-step7b
  8. On the Mirror SQL server (SQLMir-02), click on “New Query” and run the following command: (This is creating the connection for the Mirror to allow mirroring from the Principle)
    DB051314-step8

    1. ALTER DATABASE vCenterUpdateManager
      SET PARTNER = 'TCP://SQLMIR01-Mirror.domain.com:5022'
  9. Back on the primary SQL server, click on “New Query” and run the following commands:
    1. ALTER DATABASE vCenterUpdateManager
      SET PARTNER = 'TCP://SQLMIR01-Mirror.domain.com:5022'
      GO
      ALTER DATABASE vCenterUpdateManager
      SET WITNESS = 'TCP://SQLWIT01-Mirror.domain.com:5022'

The SQL Servers (Principle, Mirror, Witness) have multiple network connections (Production, Mirror, and Backup). A DNS entry was created for their Mirror network IPs to allow them to communicate over a non-routable network to minimize latency. Mirroring would work if I set the string to “TCP://SQLMir-01.domain.com:5022” if a private network isn’t available.

 

vCenter Update Manager Install/Config:

  1. Login to the server as the user account that will connecting to vCenter/update manager database (domain\vupdatemanager for this writing)
  2. Create a 32bit ODBC connection to the SQL database
    a. Navigate to C:\Windows\SysWOW64 and open “odbcad32.exe”
    VUM051314-step2a
    b. Click the “System DSN” tab then click the “Add” button
    VUM051314-step2b
    c. Scroll to the bottom and choose “SQL Server Native Client 10.0” and click “Finish”
    VUM051314-step2c
    d. Enter the name of the connection and find the SQL Server\Instance and click “Next”
    VUM051314-step2d
    e. Choose “With Integrated Windows authentication” and click “Next”
    f. Change the default database to the Update Manager Database then set the Mirror Server as the SQL Server Name\Instance. Click “Next”
    VUM051314-step2f
    g. Click “Finish” then click “Test Data Source”. If test is successful, click “OK” then “OK” again and again
    VUM051314-step2g
  3. After the ISO has been mounted on the virtual machine, open “Computer” and open the CD
  4. If the installer doesn’t automatically open, locate the “autorun” application and double-click it.
  5. At the installer screen, choose “vSphere Update Manager” under the “VMware vCenter Support Tools” section. Then click “Install”
    VUM051314-step5
    a. Choose the appropriate language and click “OK”
    b. Click “Next” to begin the install process
    c. Accept the license agreement and click “Next”
    d. Leave the box for “Download updates from default sources” checked and click “Next”
    VUM051314-step5d
    e. Enter the FQDN or IP of the vCenter server to be connected to as well as the username/password for the account you’re currently logged in as (I’ve made this account an Administrator in vCenter at the Datacenter level)
    VUM051314-step5ei
    f. Choose “Use an existing supported database” and then choose the DSN connection created in step 2 and click “Next”
    VUM051314-step5f
    g. Click “Next” to confirm the database information and click “OK” to ignore the warning about Full recovery
    h. Choose the IP address and note the ports being used then click “Next”
    VUM051314-step5h
    i. Change the Install directory from C: to D: and then click “Next”
    VUM051314-step5i
    j. Click “Install”
    k. Click “Finish”
  6. After installation completes, press the Start button, Administrative Tools, then Services
    VUM051314-step6
    a. Locate the “VMware vSphere Update Manager Service”, right click and choose “Properties”
    VUM051314-step6a
    b. Click the “Log On” tab and click the “This account” button then enter the login information for the domain account used for update manager then click “Apply”
    VUM051314-step6b
    c. Click “OK” for the dialog box about granting log on as a service rights
    d. After the new service account has been applied, click the “General” tab then click the “Stop” button. Once the service has stopped, hit the “Start” button. Then click “OK”
  7. Open up the vSphere client (not the web interface) and login to the vCenter server
    a. Click the “Home” button
    VUM051314-step7a
    b. Click the “Update Manager” button under “Solutions and Applications”
    VUM051314-step7b
    c. Click on the “Baselines and Groups” tab
    VUM051314-step7c
    d. Click the “Create” link towards the top right corner under “Compliance View”
    VUM051314-step7d
    e. Select “Host Baseline Group” and give it a name (“All Patches” for this example). Click “Next”
    VUM051314-step7e
    f. Click “Next” through “Upgrades” page
    g. Select both Critical and Non-critical patches and click “Next”
    VUM051314-step7g
    h. Click “Next” through the “Extensions” page
    i. Review the settings and click “Finish”
    VUM051314-step7i
  8. Click the “Home” button again then choose “Hosts and Clusters”
    VUM051314-step8
    a. (For this writing, we’ll attach the baseline group to the Datacenter, but I usually apply this at the cluster level)
    b. Click on the Datacenter then click on the “Update Manager” tab
    VUM051314-step8b
    c. Click the “Attach” link towards the top right corner
    VUM051314-step8c
    d. Under “Baseline Groups” choose the name of the Baseline group created and click “Attach”
    VUM051314-step8d
    e. Once attached, all the Hosts will display under “All Groups and Independent Baselines”. Click the “scan” button towards the top right corner
    VUM051314-step8e
    f. Click the “Scan” button on the pop up box
    VUM051314-step8f
    g. Once scanning is completed, click the “Stage” button towards the bottom right corner
    VUM051314-step8g
    h. Ensure both Critical and Non-critical patches are selected as well as the host and click “Next”
    VUM051314-step8h
    i. Click “Next” after reviewing the patches to be applied
    VUM051314-step8i
    j. Then click “Finish” (All patches that can be staged will be placed on the host, some that can’t be staged will be loaded once you choose “Remediate”)
    k. Once staged, click the “Remediate” button towards the bottom right corner
    l. Click the baseline group created earlier then click “Next”
    VUM051314-step8l
    m. Review the patches and click “Next”
    VUM051314-step8m
    n. Choose “Immediately” for the remediation time and click “Next”
    VUM051314-step8n
    o. Choose your VM power state options (In a multi-host cluster choosing “Do Not Change VM Power State” will cause VMs to be vMotioned to another host when entering maintenance mode)
    VUM051314-step8o
    p. Click Finish (This will cause the Host to enter maintenance mode, apply patches, and reboot if necessary)
  9. After the host finishes rebooting we’ll see the new build number
    VUM051314-step9

Applying baselines at the cluster level will help to ensure all your hosts are running the same builds/patches and help prevent version mismatch issues. I prefer to created one baseline for all my hosts that includes any required extensions. In my environment we run NetApp storage which requires a host component to take advantage of VAAI. By adding this into my required patching I make sure all my hosts are able to take advantage of this.