View Host Allocation by Cluster in PowerCLI

Viewing resource allocation at the cluster level is something we don’t do enough. In the past I would look at a specific host to see if memory or CPUs were over-allocated, or I would gather stats when budgeting for the next year, but I rarely saved most of that information. One of the bigger mistakes I made was looking at the cluster as a whole: grabbing the total RAM and CPU for the cluster, comparing that against the total RAM and CPU allocated to the VMs on that cluster, and assuming the average was the number to base my calculations on. What that doesn’t take into account are things like DRS rules where certain VMs are pinned to a host or separated from each other, or DRS being disabled altogether.

This started me down a path of creating a report to show what the current utilization was for each one of my clusters and then breaking that down to the hosts in each cluster so I could get an idea of how well my VMs were spread across a cluster.

I have a decent number of clusters to work with, so the first thing we’ll do is get all the clusters in vCenter and sort them by name. I prefer a name sort so they appear in the order I’m used to seeing them in vCenter.

$allClusters = Get-Cluster | Sort Name

Now we’ll open our ForEach loop over all the clusters. We assign the loop to a variable so we can see all the output once the script has completed. Then we’ll create an empty array.

$clusterOutput = ForEach ($cluster in $allClusters) {
$report = @()

We’ll get all the hosts in each cluster one cluster at a time and open another ForEach loop for each one of those hosts as well.

$allHosts = $cluster | Get-VMHost | Sort name
ForEach ($vmHost in $allHosts) {

We’re going to get all the VMs on each host now. I’m only concerned about powered on VMs, but depending on your environment you may want to omit the PowerState clause. Once we get all the VMs on a host, we want to calculate how much Memory is allocated and how many CPUs are allocated. The “Measure-Object -sum” will add those numbers together for us and we’ll call that number in the report.

$vms = $vmHost | Get-VM | Where {$_.PowerState -eq "PoweredOn"}
$vmMemSum = $vms.memoryGB | Measure-Object -sum
$vmCpuSum = $vms.NumCpu | Measure-Object -sum

Now that we have the total VM memory and CPU allocated for the host, we want to see the ratio of CPUs allocated to available on the host. We use the $ratio variable to capture this value, then use PowerShell math to divide the number of vCPUs allocated to the VMs by the number of pCPUs available on the host. We then round that number to 2 decimal places.

$ratio = [math]::round($vmCpuSum.sum/$vmhost.NumCpu,2)

With all the numbers captured we can start creating the table view by defining the column names. Host name, Host State, Host memory, VM Memory, Host CPUs, VM CPUs, and VM CPU to Host CPU value are what we’re interested in.

$row = "" | Select VMHost, State, "Host Memory", "VM Memory", "Host CPU", "VM CPU", "vCPU per pCPU"

To populate this table we use $row.<Column Name> and give it the value using =. Because of the ForEach loop we’re repeating this for every single VM Host in a cluster.

$row.VMhost = $vmhost.Name
$row.State = $vmhost.ConnectionState
$row."Host Memory" = [math]::round($vmhost.MemoryTotalGB,2)
$row."VM Memory" = [math]::round($vmMemSum.sum,2)
$row."Host CPU" = $vmhost.NumCpu
$row."VM CPU" = $vmCpuSum.sum
$row."vCPU per pCPU" = $ratio

Once that has been completed, we add the row to our array and then close the ForEach loop for the hosts.

$report += $row}

At this point we have a completed table view of the resource allocation. Since we’ll be running this in PowerShell, we need to display the name of the cluster before each report, otherwise you might not immediately recognize which cluster is being referenced. Use “Write-Output” instead of “Write-Host” so this displays in the correct order in the output. When using Write-Output, a variable inside the string needs to be wrapped in $( ), otherwise the variable name will be displayed instead of its value.

Write-Output "$($cluster.Name) Resource Allocation"

In order to have this display per cluster we’ll call the cluster output here and then close the ForEach loop on the clusters.

$report | Format-Table -Autosize}

We can then display the output using the $clusterOutput variable we created earlier.

$clusterOutput

This is what the output will look like:

Instead of just displaying this in the console you could export this to CSV to save it for reference. Below is the full script.

$allClusters = Get-Cluster | Sort Name
$clusterOutput = ForEach ($cluster in $allClusters) {
$report = @()
$allHosts = $cluster | Get-VMHost | Sort Name
ForEach ($vmhost in $allHosts) {
$vms = $vmhost | Get-VM | Where {$_.PowerState -eq "PoweredOn"}
$vmMemSum = $vms.memoryGB | Measure-Object -sum
$vmCpuSum = $vms.NumCpu | Measure-Object -sum
$ratio = [math]::round($vmCpuSum.sum/$vmhost.NumCpu,2)
$row = "" | Select VMHost, State, "Host Memory", "VM Memory", "Host CPU", "VM CPU", "vCPU per pCPU"
$row.VMhost = $vmhost.Name
$row.State = $vmhost.ConnectionState
$row."Host Memory" = [math]::round($vmhost.MemoryTotalGB,2)
$row."VM Memory" = [math]::round($vmMemSum.sum,2)
$row."Host CPU" = $vmhost.NumCpu
$row."VM CPU" = $vmCpuSum.sum
$row."vCPU per pCPU" = $ratio
$report += $row}
Write-Output "$($cluster.Name) Resource Allocation"
$report | Format-Table -Autosize}
$clusterOutput
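The CSV export mentioned above could be done by tagging each row with its cluster name and exporting the combined rows instead of piping to Format-Table. Below is a sketch of that variant (not part of the original script, and the output path is hypothetical — adjust for your environment):

```powershell
# Sketch: collect every host row across all clusters, then write one CSV.
$fullReport = @()
ForEach ($cluster in Get-Cluster | Sort Name) {
    ForEach ($vmHost in ($cluster | Get-VMHost | Sort Name)) {
        $vms = $vmHost | Get-VM | Where {$_.PowerState -eq "PoweredOn"}
        $row = "" | Select Cluster, VMHost, "Host Memory", "VM Memory", "Host CPU", "VM CPU"
        $row.Cluster = $cluster.Name
        $row.VMHost = $vmHost.Name
        $row."Host Memory" = [math]::round($vmHost.MemoryTotalGB,2)
        $row."VM Memory" = [math]::round(($vms.MemoryGB | Measure-Object -Sum).Sum,2)
        $row."Host CPU" = $vmHost.NumCpu
        $row."VM CPU" = ($vms.NumCpu | Measure-Object -Sum).Sum
        $fullReport += $row
    }
}
$fullReport | Export-Csv "C:\clusterAllocation.csv" -NoTypeInformation   # hypothetical path
```

With the cluster name as its own column, a single CSV covers every cluster and still sorts cleanly in Excel.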

Table View of Datastores Mounted in a Cluster

I really wish I knew more about PowerShell, and even the correct terminology, because then maybe this wouldn’t have been so difficult to figure out. I spent two days trying to get this to work properly, and it wasn’t until I was writing this post with the garbage version of this script that I discovered a more efficient way of doing things. Not knowing what the functions I normally use are called made it difficult to google and discover alternate methods to accomplish what I was trying to do.

With that out of the way, let’s talk about what I’m trying to accomplish. A while ago I was working with a customer that wanted to manage all their standard portgroups with PowerShell/PowerCLI. I had the thought back then that a table view would be useful: all the portgroups in a cluster as columns, all the hosts in that cluster as rows, with X’s marking which hosts had which portgroups. It seemed like a good idea, but I had no idea how to accomplish it at the time. Here we are a year later, and the idea popped up at work again, this time to see which datastores were mounted on which hosts. With a lot more PowerShell’ing under my belt, I thought I was up to the task.

Normally I would accomplish something like this using a PowerShell array, like so:

$report = @()
$row = "" | Select Hostname, Datastore01, Datastore02
$row.Hostname = $hostname
$row.Datastore01 = $datastore01.name
$row.Datastore02 = $datastore02.name
$report += $row

This would have been the ideal solution, except I needed to pass a dynamic number of datastores per cluster and I wouldn’t know the names of those datastores ahead of time. The goal of any script I write is re-use, given that I have multiple clusters and multiple vCenters to manage. What I wasn’t able to figure out with this approach was how to pass an array of datastore names on the “$row = “” | Select Hostname, Datastore01…” line. No matter what I did I couldn’t make it work. This led me down another, very inefficient path. What I didn’t realize at the time was that I could accomplish the same thing with “New-Object PSObject” and “Add-Member”.

I got this to work, but it would only record the values of the first (or last) host, depending on how I added values. That brought me to the point of creating a blank array, then creating a second array whose values I would update for each host and add to the initial array. This felt sloppy and inefficient because I was repeating lines in the script to create the second array, and it felt like it could be done better. Then I thought about doing the same thing but using a count: create a new object on the first pass, then add entries to it on every pass after that. A few tests later and I had it figured out.
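For reference, Select-Object does accept an array of property names, so the dynamic-column version can be sketched like this (an aside, not the approach used in the script below, and untested against the original environment):

```powershell
# Build the column list dynamically: "Hostname" plus one column per datastore.
$columns = @("Hostname") + ($datastores | ForEach-Object {$_.Name})
$row = "" | Select $columns
$row.Hostname = $vmhost.Name
```

The Add-Member approach below works just as well; this sketch is simply the piece I couldn’t find at the time.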

Now for the script.

1. Here we define the cluster name, gather all the datastores in that cluster, and then get all the hosts in the cluster as well. I added the check for NFS type because that is what I use in my environment and it eliminates any local datastores that may be present on a host from appearing in the cluster check.

$cluster = "ClusterName"
$datastores = Get-Cluster $cluster | Get-Datastore | Where {$_.Type -eq "NFS"} | Sort Name
$allHosts = Get-Cluster $cluster | Get-VMHost

2. After that we open a ForEach loop. Inside the loop we use $count++ to track how many times the loop has run. Since $count isn’t defined anywhere before the loop, it starts out empty; $count++ increases it by one on each pass, so it holds 1 on the first run. (If you re-run the script in the same session, reset it first with $count = 0.)

ForEach ($vmhost in $allHosts) {
$count++

3. The next lines create our object and populate some of the data. New-Object PSObject creates a blank object. We reference this blank object and use “Add-Member” to add a new column named “HostName”, with the current ESXi host’s name as the value. Then we open another ForEach loop to add a column for each of the datastore names.

$report = New-Object PSObject
$report | Add-Member -MemberType NoteProperty -Name "HostName" -Value $vmhost.Name
ForEach ($ds in $datastores) {

4. If the host doesn’t have the datastore present we leave the value blank, but if the datastore is present we mark it with an “X”. We create a new variable ($getDS in this case) and check for the datastore. Adding “-ErrorAction SilentlyContinue” lets the script run without displaying errors if the datastore is missing, while still capturing the data. The “IF (!$getDS)” checks whether the $getDS variable is empty. Once the host has been checked for that datastore, we perform another “Add-Member” to add the datastore as a column, with a value indicating whether the datastore is present or not.

$getDS = $vmhost | Get-Datastore $ds.Name -ErrorAction SilentlyContinue
IF (!$getDS) {$present = " "} ELSE {$present = "X"}
$report | Add-Member -MemberType NoteProperty -Name $ds.Name -Value $present}

5. At this point we have collected data on only one of our hosts. If we ended the loop here, all we’d do is overwrite the data we just wrote, over and over, until we finished with all the hosts. This is where we use that $count++ from step 2. If $count equals 1 (the first run), we create a new object called $newReport based on $report, which contains the data from one host. On each subsequent pass $count is greater than 1, $report is rebuilt with the next host’s data, and we append $report to the $newReport array.

IF ($count -eq "1"){$newReport = New-Object PSObject $report} ELSE {[array]$newReport += $report}}
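As an aside, the count check can be avoided entirely by initializing $newReport as an empty array before the host loop and always appending. A sketch of that variant (not the approach used in this post):

```powershell
# Alternative sketch: initialize once before the loop, always append.
$newReport = @()
ForEach ($vmhost in $allHosts) {
    # ... build $report for this host exactly as above ...
    $newReport += $report
}
```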

6. Now that all the data is combined, we can view it by running $newReport | Format-Table. This gives us the view below, and we can see that a few datastores are not present on some of our hosts.

$newReport | Sort Hostname | Format-Table

a. This data can also be exported to a CSV file when there are more datastores than can be displayed in your PowerShell console.

$newReport | Export-CSV "C:\datastoreReport.csv" -NoTypeInformation

Below is the full code for this script. You can even wrap this in another ForEach loop over every cluster to see them all at once, but if you do that you’ll have to clear out the $report object by doing “$report = ”” and reset your count variable to 0 by doing “$count = 0” as soon as you open the ForEach cluster loop.

$cluster = "ClusterName"
$datastores = Get-Cluster $cluster | Get-Datastore | Where {$_.Type -eq "NFS"} | Sort Name
$allHosts = Get-Cluster $cluster | Get-VMHost
ForEach ($vmhost in $allHosts) {
$count++
$report = New-Object PSObject
$report | Add-Member -MemberType NoteProperty -Name "HostName" -Value $vmhost.Name
ForEach ($ds in $datastores) {
$getDS = $vmhost | Get-Datastore $ds.Name -ErrorAction SilentlyContinue
IF (!$getDS) {$present = " "} ELSE {$present = "X"}
$report | Add-Member -MemberType NoteProperty -Name $ds.Name -Value $present}
IF ($count -eq "1"){$newReport = New-Object PSObject $report} ELSE {[array]$newReport += $report}}

$newReport | Sort HostName | Format-Table
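The cluster-wrapped variant mentioned above might look like the following sketch; it resets $count per cluster and captures the output so each cluster’s table prints in order (untested, so treat it as a starting point rather than a finished script):

```powershell
# Sketch: repeat the datastore report for every cluster in vCenter.
$output = ForEach ($cluster in Get-Cluster | Sort Name) {
    $count = 0   # reset the run counter for each cluster
    $datastores = $cluster | Get-Datastore | Where {$_.Type -eq "NFS"} | Sort Name
    ForEach ($vmhost in ($cluster | Get-VMHost)) {
        $count++
        $report = New-Object PSObject
        $report | Add-Member -MemberType NoteProperty -Name "HostName" -Value $vmhost.Name
        ForEach ($ds in $datastores) {
            $getDS = $vmhost | Get-Datastore $ds.Name -ErrorAction SilentlyContinue
            IF (!$getDS) {$present = " "} ELSE {$present = "X"}
            $report | Add-Member -MemberType NoteProperty -Name $ds.Name -Value $present}
        IF ($count -eq 1) {$newReport = New-Object PSObject $report} ELSE {[array]$newReport += $report}}
    Write-Output "$($cluster.Name) Datastores"
    $newReport | Sort HostName | Format-Table}
$output
```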

ESXi Host Patching with PowerShell & Update Manager

Update Manager certainly makes host patching simple, but it leaves a few things to be desired. How many times have you attempted to update a host in Update Manager only to have the host never enter maintenance mode because of a DRS rule, a VMware Tools installation, or a local ISO mapped to a VM? I wanted to find a way to check for all these things as I’m performing the patching process and be able to accomplish it at the cluster level.

For the script itself I have broken it down into the different sections along with screenshots of what you’ll see when running the script. It makes it a little busy to follow along with for this entry, but hopefully it makes sense. At the bottom of the page I have the whole script put together to make it easier to copy and run it on your own.

Let’s dig into the script.

1. While you can manually define the vCenter server in the script, I prefer being prompted since I have multiple vCenter servers that I work with. The multiple lines and color emphasis were for a customer that would forget to enter the vCenter name and instead enter the ESXi host name.

Write-Host "Enter the FQDN of the " -NoNewline
Write-Host "[vCenter Server]" -ForegroundColor Red -NoNewline
Write-Host " to connect to: " -NoNewline
$vCenterFqdn = Read-Host
Connect-viserver $vCenterFqdn

2. Here we’re going to list all the clusters. I use this menu system all the time now in my PowerShell scripts to make it easier to make selections instead of having to remember and manually enter the name of an object. This is getting all the clusters then converting the number selection that’s entered into the cluster name.

$global:i=0
Get-Cluster | Sort Name | Select @{Name="Number";Expression={$global:i++;$global:i}},Name -OutVariable menu | format-table -AutoSize
$clusterNum = Read-Host "Select the number of the Cluster to be patched"
$clusterName = $menu | where {$_.Number -eq $clusterNum}


3. Now that we have the cluster we’re going to work with we search for DRS rules. Specifically, we’re looking for “Must Run” rules. This will prevent a VM from moving to another host. While every environment is different and they have “must run” rules for a variety of reasons, I’m comfortable disabling this during patch events. If there are any rules we’re going to list the rule names in the PowerShell console and give you the option to disable or not.

a. Remember, this is only looking at “Must Run” DRS rules for the entire cluster, not for an individual host. If you’re patching, odds are you’ll be doing the entire cluster anyway so I didn’t break this down on a host-by-host basis.

$drsRules = Get-Cluster $($clusterName.Name) | Get-DrsVMHostRule | Where {$_.Type -eq "MustRunOn"} | Where {$_.Enabled -eq $True}
IF ($drsRules.Count -gt "0"){Write-Host "The following rules may prevent a host from entering Maintenance mode:" -foreground "Yellow"; $drsRules.Name; $disableRules = Read-Host "Press Y to disable these rules. Anything else to continue";
IF ($disableRules -eq "Y"){Write-Host "Disabling DRS Rules..." -foreground "Yellow";
foreach ($name in $drsRules){Set-DrsVMHostRule -rule $name -enabled:$false}} ELSE {Write-Host "Skipping disabling of DRS Rules. Continuing..." -foreground "Yellow"}} ELSE {Write-Host "No 'Must Run' Rules in $($clusterName.Name). Continuing..." -foreground "Yellow"}

In the picture I have the name of the DRS rule highlighted (the VM name was in the rule so it’s been obscured).


4. Now that we’ve decided what to do with our DRS rules, we can get down to selecting the baseline. This script can be used both for patching and for upgrades; there is a check later in the script that will skip the “Staging” step and go straight to remediation if it’s an upgrade. Once again, we’re using that menu selection function to display all upgrade/patch baselines and let us choose the one to use.

$global:i=0
Get-Baseline | Select Name | Sort Name | Select @{Name="Number";Expression={$global:i++;$global:i}},Name -OutVariable menu | format-table -AutoSize
$baselineNum = Read-Host "Select the number of the Baseline to be attached"
$baselineName = $menu | where {$_.Number -eq $baselineNum}
Write-Host "Attaching $($baselineName.Name) Baseline to $($clusterName.Name)..." -Foreground "Yellow"
$baseline = Get-Baseline $baselineName.Name
Attach-Baseline -Baseline $baseline -Entity $clusterName.Name


5. Here’s where we’re going to complicate things a bit. I have two loops in this script. Loop number one checks whether a host has any patches available: we check a selected host against the attached baseline, and if there are no available updates/upgrades we report that in the PowerShell console and return to the host selection screen. The second loop returns us to the host selection screen after a selected host has been patched, so we can choose the next one in the list.

DO
{
DO
{

6. Now that we’ve opened up our loop, we can start by selecting a host in the cluster. Once again using the menu selection, we get all the hosts in the chosen cluster and display the host name, build, ESXi version, and state. This makes it easier to know which hosts have been patched, which ones are still left, and which hosts are already in maintenance mode. In a larger environment you may forget which host you were working on, so seeing that a host is already in maintenance mode and ready to be upgraded can be helpful.

$global:i=0
Get-Cluster $clusterName.Name | Get-VMhost | Sort Name | Select @{Name="Number";Expression={$global:i++;$global:i}},Name,Build,Version,State -OutVariable menu | format-table -AutoSize
$hostNum = Read-Host "Select the number of the Host to be patched"
$hostName = $menu | where {$_.Number -eq $hostNum}


7. With our first host chosen we’re going to scan its inventory to see what patches it currently has installed.

Write-Host "Scanning $($hostName.Name) patch inventory..." -foreground "Yellow"
Scan-Inventory -Entity $hostName.Name


8. Now that we’ve scanned it, we check it for compliance. If there are patches available, we move on to the next step to see if any VMs have ISOs mounted or VMware Tools installations running. If there aren’t any patches, we report that and return to the host selection screen.

a. As a note, the second ‘}’ after the “Write-Host ‘Host is out of date’” command closes the second loop from step 5.

Write-Host "Scanning $($hostName.Name) for patch compliance..." -foreground "Yellow"
$compliance = Get-Compliance $hostName.Name
IF ($compliance.Status -eq "Compliant"){Write-Host "No available patches for $($hostName.Name). Choose a different host" -foreground "Red"}ELSE{Write-Host "Host is out of date" -foreground "Yellow"}}
UNTIL ($compliance.Status -ne "Compliant")


9. Now that we have some patches to apply, we check for active VMware Tools installations. We look up VMs with the Tools installer mounted, then count that output. If there are more than 0, we list all the VMs. Once you see the list, you can press ‘Y’ to force the unmount and continue, or you can ignore it and hope the VMs move.

a. The unmount command works most of the time, but I’ve run into issues with it on some Linux OSes. Just keep that in mind.

$vmtools = Get-VMHost $hostName.Name | Get-VM | Where {$_.ExtensionData.RunTime.ToolsInstallerMounted -eq "True"} | Get-View
IF ($vmtools.Count -gt "0"){Write-Host "The following VMs on $($hostName.Name) have VMTools Installer Mounted:";
$vmtools.Name;
$unmountTools = Read-Host "Press 'Y' to unmount VMTools and continue. Anything else to skip VMTools unmounting";
IF ($unmountTools -eq "Y") {Write-Host "Unmounting VMTools on VMs..." -foreground "Yellow"; foreach ($vm in $vmtools) {$vm.UnmountToolsInstaller()}}ELSE{Write-Host "Skipping VMTools unmounting..." -foreground "Yellow"}}ELSE{Write-Host "No VMs found with VMTools Installer mounted. Continuing..." -foreground "Yellow"}


10. With all our VMware Tools installations killed, we move on to ISOs. ISOs stored on shared datastores won’t have an issue moving, but ISOs mounted directly to a VM through a console window can cause a hang-up. Again, you know your environment better than I do, so use your best judgement when picking what to do.

$mountedCDdrives = Get-VMHost $hostName.Name | Get-VM | Where { $_ | Get-CdDrive | Where { $_.ConnectionState.Connected -eq "True" } }
IF ($mountedCDdrives.Count -gt "0"){Write-Host "The following VMs on $($hostName.Name) have mounted CD Drives:";
$mountedCDdrives.Name;
$unmountDrives = Read-Host "Press 'Y' to unmount these ISOs and continue. Anything else to skip ISO unmounting";
IF ($unmountDrives -eq "Y") {Write-Host "Unmounting ISOs on VMs..." -foreground "Yellow"; foreach ($vm in $mountedCDdrives) {Get-VM $vm | Get-CDDrive | Set-CDDrive -NoMedia -Confirm:$False}}ELSE{Write-Host "Skipping ISO unmounting..." -foreground "Yellow"}}ELSE{Write-Host "No VMs found with ISOs mounted. Continuing..." -foreground "Yellow"}


11. Now we check whether the host is in maintenance mode. This check isn’t required; we could put a host that’s already in maintenance mode into maintenance mode without any errors. I just prefer to have it called out so people know the host will be placed in maintenance mode. Also, if you don’t want to confirm and just want the host to go into maintenance mode automatically, you can remove the “Read-Host “Press Enter to place $($hostName.Name) in Maintenance mode”;” section and the host will be placed in maintenance mode without a prompt.

$hostState = Get-VMHost $hostname.Name
IF ($hostState.State -eq "Maintenance"){Write-Host "$($hostName.Name) is already in maintenance mode. Continuing to patch Staging/Remediation" -foreground "Yellow"}ELSE{Read-Host "Press Enter to place $($hostName.Name) in Maintenance mode"; Start-Sleep 7; Write-Host "Enabling Maintenance mode for $($hostName.Name). This may take a while..." -foreground "Yellow"; Set-VMHost $hostName.Name -State "Maintenance"}


12. This was an interesting issue I ran into. I had a customer running ESXi 6.0 with PernixData installed, which wasn’t compatible with the ESXi 6.5 we were upgrading to. When we attempted the upgrade, it would fail because the PernixData VIB was present. I threw this check in to see if that VIB exists on a host and to remove it before proceeding. I also added a second placeholder VIB name in case you have multiple VIBs to remove; replace the names with the appropriate VIB names, and you can add more VIBs with another -OR $_.ID -eq "vibname".

$esxcli = Get-esxcli -vmhost $hostName.Name
$vibCheck = $esxcli.software.vib.list() | Where {($_.ID -eq "PernixData_bootbank_pernixcore-vSphere6.0.0_3.5.0.2-39793" -OR $_.ID -eq "Other_vib_name_xxxxxx")}
IF ($vibCheck.Count -gt "0"){Write-Host "Incompatible VIB found. Removing from host..." -foreground "Yellow"; foreach ($a in $vibCheck){$esxcli.software.vib.remove($null, $true, $false, $true, $a.Name)}}ELSE{Write-Host "No known incompatible VIBs found. Continuing..." -foreground "Green"}


13. And, of course, if we removed a VIB we need to reboot, so we throw this reboot check in as well. If no VIBs were found in step 12, this is skipped. Otherwise, we prompt for a reboot, issue the reboot command, wait for the host to enter the NotResponding state, and report on the state until the host responds in vCenter and returns to the Maintenance state.

IF ($vibCheck.Count -gt "0" -AND $baseline.BaselineType -eq "Upgrade"){Read-Host "VIBs were removed from host. Press enter to reboot host before attempting upgrade";Restart-VMhost $hostName.Name -confirm:$false}ELSE{$skip = "1"; Write-Host ""}
IF ($skip -ne "1"){
Write-Host "$($hostName.Name) is going to reboot..." -foreground "Yellow"
do {
Start-Sleep 3
# Use a separate variable here so $hostState (captured earlier) keeps the
# pre-patch host object for the build comparison in step 16.
$connState = (Get-VMHost $hostName.Name).ConnectionState
}
while ($connState -ne "NotResponding")
Write-Host "$($hostName.Name) is currently down..." -foreground "Yellow"

#Wait for server to reboot
do {
Start-Sleep 5
$connState = (Get-VMHost $hostName.Name).ConnectionState
Write-Host "Waiting for $($hostName.Name) to finish rebooting..." -foreground "Yellow"
}
while ($connState -ne "Maintenance")
Write-Host "$($hostName.Name) is back up..." -foreground "Yellow"}ELSE{Write-Host ""}

14. Now that all that work is done, we can start staging patches. If this is a patch baseline, we run the stage command; if it’s an upgrade baseline, we skip this step.

IF ($baseline.BaselineType -eq "Upgrade"){Write-Host "$($baseline.Name) is an Upgrade Baseline. Skipping to remediation..." -foreground "Yellow"}ELSE{Write-Host "Staging patches to $($hostName.Name) in Cluster $($clusterName.Name)..." -foreground "Yellow"; Stage-Patch -entity $hostName.Name -baseline $baseline}


15. Once patches have been staged (or the upgrade is ready to push), it’s time for remediation. We note that the host will reboot on its own once patching is complete, and we set a few advanced options. These are the defaults, but they can still be environment-specific, so check that they are what you want to use.

Write-Host "Remediating patches on $($hostName.Name) in Cluster $($clusterName.Name). Host will reboot when complete" -foreground "Yellow"
Remediate-Inventory -Entity $hostName.Name -Baseline $baseline -HostFailureAction Retry -HostNumberofRetries 2 -HostRetryDelaySeconds 120 -HostDisableMediaDevices $true -ClusterDisableDistributedPowerManagement $true -ClusterDisableHighAvailability $true -confirm:$false -ErrorAction SilentlyContinue


At the top of our PowerShell window we get the percentage of completion for our task. It’s not very accurate as it stays at 30% then goes to 92% when it’s nearly complete.
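If you want a better progress readout than the built-in indicator, one option (a sketch; it assumes a second PowerCLI session connected to the same vCenter) is to poll the running vCenter tasks with Get-Task:

```powershell
# From a second session: list running vCenter tasks with their reported progress.
Get-Task | Where {$_.State -eq "Running"} | Select Name, PercentComplete, StartTime | Format-Table -AutoSize
```

The percentage reported here comes from vCenter itself, so it has the same jumpiness as the console indicator, but it lets you watch the remediation without tying up the patching session.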

16. Once the host has been rebooted and comes back online, we want to check the host’s status to ensure the updates were successful. We compare the build number we grabbed before patching against the build number after the reboot. If they are the same, something didn’t work and we need to look into it. Otherwise, we do nothing.

Write-Host "Retrieving Host build status..." -foreground "Yellow"
$hostBuild = Get-VMHost $hostName.Name
IF ($hostBuild.Build -eq $hostState.Build){Write-Host "Patch/Upgrade was not applied. Check status in vCenter and re-run the script. Exiting..." -foreground "Red";$error;Start-Sleep 20;break}ELSE{}

17. Now that the host has been patched, we show a list of all the hosts in that cluster along with their build, version, and state. This gives us a full view of the cluster so we can see whether any hosts are left to be patched, and then we exit maintenance mode for this host.

Get-Cluster $clusterName.Name | Get-VMhost | Select Name,Build,Version,State | Sort Name | format-table -autosize
Write-Host "Exiting Maintenance mode for Host $($hostName.Name)..." -foreground "Yellow"
Get-VMHost $hostName.Name | Set-VMHost -State Connected


18. That list determines the answer to our next question: we are prompted to re-enable the DRS rules we previously disabled (if any). If any rules were disabled, we captured that in a variable in step 3. We can re-enable just those rules by pressing ‘Y’, or, if there are hosts left to patch, press any other key to continue.

IF ($disableRules -eq "Y") {$enableRules = Read-Host "If Cluster patching is complete press 'Y' to re-enable DRS rules. Anything else to continue";
IF ($enableRules -eq "Y") {Write-Host "Re-enabling DRS Must Run rules" -foreground "Yellow"; 
foreach ($name in $drsRules){Set-DrsVMHostRule -rule $name -enabled:$true}} ELSE {
Write-Host "DRS Rules not being re-enabled. Continuing..." -foreground "Yellow"}} ELSE {}


19. For the last question, we display the output from our last patched host and prompt the user either to quit patching or to go back to step 6 and pick the next host in the cluster to patch.

$answer = Read-Host "$($hostname.Name) patched in Cluster $($clusterName.Name). Press '1' to re-run the script. Anything else to exit"


20. Finally, to close out the first loop, we have the following lines. In step 19 the $answer variable asks the user to enter ‘1’ to re-run the script and pick another host. The line at the bottom says: until the user enters something other than 1, keep performing the loop. If anything else is entered, the script exits. Answering “1” starts the script over from step 6: we perform another “Get-Cluster | Get-VMHost” on the chosen cluster, retrieve the current build and state information for each host, and display the updated results. As you can see from the screenshot below, vmm-04 is now in a Connected state with a build number of 9298722.

}
UNTIL ($answer -ne "1")


Below is the script all put together to copy and test. Like all scripts pulled from the internet, make sure you test them in a lab/isolated environment until you can ensure proper functionality.

Write-Host "Enter the FQDN of the " -NoNewline
Write-Host "[vCenter Server]" -ForegroundColor Red -NoNewline
Write-Host " to connect to: " -NoNewline
$vCenterFqdn = Read-Host
Connect-viserver $vCenterFqdn

$global:i=0
Get-Cluster | Sort Name | Select @{Name="Number";Expression={$global:i++;$global:i}},Name -OutVariable menu | format-table -AutoSize
$clusterNum = Read-Host "Select the number of the Cluster to be patched"
$clusterName = $menu | where {$_.Number -eq $clusterNum}

$drsRules = Get-Cluster $($clusterName.Name) | Get-DrsVMHostRule | Where {$_.Type -eq "MustRunOn"} | Where {$_.Enabled -eq $True}
IF ($drsRules.Count -gt "0"){Write-Host "The following rules may prevent a host from entering Maintenance mode:" -foreground "Yellow"; $drsRules.Name; $disableRules = Read-Host "Press Y to disable these rules. Anything else to continue";
IF ($disableRules -eq "Y"){Write-Host "Disabling DRS Rules..." -foreground "Yellow";
foreach ($name in $drsRules){Set-DrsVMHostRule -rule $name -enabled:$false}} ELSE {Write-Host "Skipping disabling of DRS Rules. Continuing..." -foreground "Yellow"}} ELSE {Write-Host "No 'Must Run' Rules in $($clusterName.Name). Continuing..." -foreground "Yellow"}

$global:i=0
Get-Baseline | Select Name | Sort Name | Select @{Name="Number";Expression={$global:i++;$global:i}},Name -OutVariable menu | format-table -AutoSize
$baselineNum = Read-Host "Select the number of the Baseline to be attached"
$baselineName = $menu | where {$_.Number -eq $baselineNum}
Write-Host "Attaching $($baselineName.Name) Baseline to $($clusterName.Name)..." -Foreground "Yellow"
$baseline = Get-Baseline $baselineName.Name
Attach-Baseline -Baseline $baseline -Entity $clusterName.Name

DO
{
DO
{
$global:i=0
Get-Cluster $clusterName.Name | Get-VMhost | Sort Name | Select @{Name="Number";Expression={$global:i++;$global:i}},Name,Build,Version,State -OutVariable menu | format-table -AutoSize
$hostNum = Read-Host "Select the number of the Host to be patched"
$hostName = $menu | where {$_.Number -eq $hostNum}

Write-Host "Scanning $($hostName.Name) patch inventory..." -foreground "Yellow"
Scan-Inventory -Entity $hostName.Name

Write-Host "Scanning $($hostName.Name) for patch compliance..." -foreground "Yellow"
$compliance = Get-Compliance $hostName.Name 
IF ($compliance.Status -eq "Compliant"){Write-Host "No available patches for $($hostName.Name). Choose a different host" -foreground "Red"}ELSE{Write-Host "Host is out of date" -foreground "Yellow"}}
UNTIL ($compliance.Status -ne "Compliant")

$vmtools = Get-VMHost $hostName.Name | Get-VM | Where {$_.ExtensionData.Runtime.ToolsInstallerMounted -eq $true} | Get-View
IF ($vmtools.Count -gt 0){
Write-Host "The following VMs on $($hostName.Name) have VMTools Installer Mounted:"
$vmtools.Name
$unmountTools = Read-Host "Press Y to unmount VMTools and continue. Anything else to skip VMTools unmounting"
IF ($unmountTools -eq "Y"){Write-Host "Unmounting VMTools on VMs..." -foreground "Yellow"; foreach ($vm in $vmtools){$vm.UnmountToolsInstaller()}} ELSE {Write-Host "Skipping VMTools unmounting..." -foreground "Yellow"}
} ELSE {Write-Host "No VMs found with VMTools Installer mounted. Continuing..." -foreground "Yellow"}

$mountedCDdrives = Get-VMHost $hostName.Name | Get-VM | Where {$_ | Get-CDDrive | Where {$_.ConnectionState.Connected -eq $true}}
IF ($mountedCDdrives.Count -gt 0){
Write-Host "The following VMs on $($hostName.Name) have mounted CD Drives:"
$mountedCDdrives.Name
$unmountDrives = Read-Host "Press Y to unmount these ISOs and continue. Anything else to skip ISO unmounting"
IF ($unmountDrives -eq "Y"){Write-Host "Unmounting ISOs on VMs..." -foreground "Yellow"; foreach ($vm in $mountedCDdrives){Get-VM $vm | Get-CDDrive | Set-CDDrive -NoMedia -Confirm:$False}} ELSE {Write-Host "Skipping ISO unmounting..." -foreground "Yellow"}
} ELSE {Write-Host "No VMs found with ISOs mounted. Continuing..." -foreground "Yellow"}

$hostState = Get-VMHost $hostname.Name
IF ($hostState.State -eq "Maintenance"){Write-Host "$($hostName.Name) is already in maintenance mode. Continuing to patch Staging/Remediation" -foreground "Yellow"}ELSE{
#Read-Host "Press Enter to place $($hostName.Name) in Maintenance mode"; Start-Sleep 7; Write-Host "Enabling Maintenance mode for $($hostName.Name). This may take a while..." -foreground "Yellow"; Set-VMHost $hostName.Name -State "Maintenance"}
Write-Host "Enabling Maintenance mode for $($hostName.Name). This may take a while..." -foreground "Yellow"; Start-Sleep 7; Set-VMHost $hostName.Name -State "Maintenance"}

$esxcli = Get-esxcli -vmhost $hostName.Name
$vibCheck = $esxcli.software.vib.list() | Where {($_.ID -eq "PernixData_bootbank_pernixcore-vSphere6.0.0_3.5.0.2-39793" -OR $_.ID -eq "Other_vib_name_xxxxxx")}
IF ($vibCheck.Count -gt 0){Write-Host "Incompatible VIB found. Removing from host..." -foreground "Yellow"; foreach ($vib in $vibCheck){$esxcli.software.vib.remove($null, $true, $false, $true, $vib.Name)}} ELSE {Write-Host "No known incompatible VIBs found. Continuing..." -foreground "Green"}

#Reset $skip so a value left over from a previously patched host doesn't bypass the reboot wait
IF ($vibCheck.Count -gt 0 -AND $baseline.BaselineType -eq "Upgrade"){$skip = "0"; Read-Host "VIBs were removed from host. Press enter to reboot host before attempting upgrade"; Restart-VMHost $hostName.Name -confirm:$false} ELSE {$skip = "1"; Write-Host ""}
IF ($skip -ne "1"){
Write-Host "$($hostName.Name) is going to reboot..." -foreground "Yellow"
do {
Start-Sleep 3
#Use a separate variable so the pre-patch host object in $hostState isn't overwritten
$connState = (Get-VMHost $hostName.Name).ConnectionState
}
while ($connState -ne "NotResponding")
Write-Host "$($hostName.Name) is currently down..." -foreground "Yellow"

#Wait for server to reboot
do {
Start-Sleep 5
$connState = (Get-VMHost $hostName.Name).ConnectionState
Write-Host "Waiting for $($hostName.Name) to finish rebooting..." -foreground "Yellow"
}
while ($connState -ne "Maintenance")
Write-Host "$($hostName.Name) is back up..." -foreground "Yellow"}ELSE{Write-Host ""}

IF ($baseline.BaselineType -eq "Upgrade"){Write-Host "$($baseline.Name) is an Upgrade Baseline. Skipping to remediation..." -foreground "Yellow"}ELSE{Write-Host "Staging patches to $($hostName.Name) in Cluster $($clusterName.Name)..." -foreground "Yellow"; Stage-Patch -entity $hostName.Name -baseline $baseline}

Write-Host "Remediating patches on $($hostName.Name) in Cluster $($clusterName.Name). Host will reboot when complete" -foreground "Yellow"
Remediate-Inventory -Entity $hostName.Name -Baseline $baseline -HostFailureAction Retry -HostNumberofRetries 2 -HostRetryDelaySeconds 120 -HostDisableMediaDevices $true -ClusterDisableDistributedPowerManagement $true -ClusterDisableHighAvailability $true -confirm:$false -ErrorAction SilentlyContinue

Write-Host "Retrieving Host build status..." -foreground "Yellow"
$hostBuild = Get-VMHost $hostName.Name
IF ($hostBuild.Build -eq $hostState.Build){Write-Host "Patch/Upgrade was not applied. Check status in vCenter and re-run the script. Exiting..." -foreground "Red"; $error; Start-Sleep 20; break}

Get-Cluster $clusterName.Name | Get-VMhost | Select Name,Build,Version,State | Sort Name | format-table -autosize
Write-Host "Exiting Maintenance mode for Host $($hostName.Name)..." -foreground "Yellow"
Get-VMHost $hostName.Name | Set-VMHost -State Connected

IF ($disableRules -eq "Y"){
$enableRules = Read-Host "If Cluster patching is complete press Y to re-enable DRS rules. Anything else to continue"
IF ($enableRules -eq "Y"){
Write-Host "Re-enabling DRS Must Run rules" -foreground "Yellow"
foreach ($rule in $drsRules){Set-DrsVMHostRule -Rule $rule -Enabled:$true}
} ELSE {Write-Host "DRS Rules not being re-enabled. Continuing..." -foreground "Yellow"}
}

$answer = Read-Host "$($hostName.Name) patched in Cluster $($clusterName.Name). Press 1 to re-run the script. Anything else to exit"

}
UNTIL ($answer -ne "1")