Category Archives: Windows Server 2016

Shutdown Storage Spaces Direct (S2D) or Azure Stack HCI Hyper-Converged cluster safely

Yes, we build clustered solutions to keep uptime as high as possible, but sometimes a planned or unplanned power outage, or maintenance work on the power lines, simply forces us to shut down our cluster – and in that situation we want to do it safely.

When we talk about Storage Spaces Direct (S2D) on Windows Server 2016 / 2019 / 2022 in a hyper-converged scenario (where Hyper-V virtualization and storage live inside the same system), it is very important to shut such a system down properly so we do not end up in problematic situations where data corruption or other issues could emerge. Because of that, Microsoft has a great article about how to safely and properly shut down a node in an S2D configuration.

I would like to share with you a concept that can help you get the whole cluster turned off safely.

The scenario consists of a 2-node S2D solution, a standalone Hyper-V host (on which I run the file share witness for S2D) and PRTG, which monitors an APC UPS 2200 via SNMP:

So first of all we need to get the battery capacity by sending an SNMP query to the APC network management card – this is the value we will monitor, and based on its current value we will trigger actions.

Then we need to prepare notification templates in which we define the PowerShell scripts to be executed.
I am using three scripts:
– The first script gracefully stops the storage services and puts S2D Cluster N2 into maintenance mode (all roles are drained to S2D Cluster N1); after that it shuts down S2D Cluster N2.
– The second script triggers a shutdown of all virtual machines on S2D Cluster N1 and after 180 seconds shuts down S2D Cluster N1.
– The third script shuts down the third, standalone Hyper-V host.

With the Execute Program action on our notification template we define which script the template should use, plus the username and password used only to execute the script on the local machine (PRTG) – the credentials for the PowerShell remoting that does the actual shutdown work can be saved securely in a separate file, so you do not need to enter plain-text credentials for the hosts anywhere.

After that we need to configure the triggers – at which battery capacity each script will be executed – and in my case I decided to set it up like this:

  • When the battery is at 65%, turn off S2D Cluster N2 (drain the roles (VMs and cluster service roles) to S2D Cluster N1, put the node into maintenance mode, then shut down the physical node S2D Cluster N2).
  • When the battery is at 45%, turn off S2D Cluster N1 by first shutting down all VMs, then waiting 180 seconds for the shutdown to complete, and then shutting down the physical S2D Cluster N1.
  • When the battery is at 15%, turn off the standalone Hyper-V host – where our witness and PRTG VMs are running.

If we check the scriptblocks inside our scripts:

Shutdown-N2.ps1 (the script that, in my case, will run first):

In the first part of the script we need to set up the credentials that will be used for PowerShell remoting.
You could do this by simply entering the username and password into the script (please do not do that!). PowerShell allows you to do it much more securely – please read this article about securely saving an encrypted password in a separate file.
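A minimal sketch of that approach (the file path and account name here are just placeholders – adjust them to your environment):

# One-time step, run as the account that will later execute the notification scripts:
# the password is encrypted with DPAPI, so only that account on that machine can decrypt it.
Read-Host -Prompt 'Enter password' -AsSecureString | ConvertFrom-SecureString | Out-File 'C:\Scripts\shutdown-cred.txt'

# Inside each shutdown script: rebuild the credential object from the encrypted file.
$username = 'DOMAIN\shutdown-svc'
$securePassword = Get-Content 'C:\Scripts\shutdown-cred.txt' | ConvertTo-SecureString
$credential = New-Object System.Management.Automation.PSCredential ($username, $securePassword)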

Invoke-Command -ComputerName S2D-N2 -Credential $credential -ScriptBlock {
    $nodename = 'S2D-N2'
    # Drain all roles to the other node and pause this one
    Suspend-ClusterNode -Name $nodename -Drain -Wait
    # Put this node's storage into maintenance mode
    Get-StorageFaultDomain -Type StorageScaleUnit | Where-Object {$_.FriendlyName -eq $nodename} | Enable-StorageMaintenanceMode
    # Stop the cluster service on this node and shut the machine down
    Stop-ClusterNode -Name $nodename
    Start-Sleep -Seconds 10
    Stop-Computer -Force
}

Shutdown-N1.ps1 (the second script that will be executed – this will turn off the VMs and finally S2D Cluster N1):

Invoke-Command -ComputerName S2D-N1 -Credential $credential -ScriptBlock {
    # Stop all VMs in parallel (-AsJob returns immediately, hence the fixed wait below)
    Get-VM | Stop-VM -Force -AsJob
    Start-Sleep -Seconds 180
    Stop-Computer -Force
}

Shutdown-HyperV.ps1 (the third script, which will turn off the standalone Hyper-V host):

Invoke-Command -ComputerName StandaloneHyperV -Credential $credential -ScriptBlock {
    Stop-Computer -Force
}

So the shutdown sequence will be:
– when the power goes out and PRTG learns from the UPS that the battery capacity is under 65%:
S2D Cluster N2 will be gracefully stopped (by draining its roles, putting it into maintenance mode and shutting it down after that)
– when the battery is under 45%:
S2D Cluster N1 will be gracefully stopped (by shutting down all VMs and finally shutting down the host)
– when the battery capacity is under 15%:
our standalone host (where PRTG and the File Share Witness needed by the S2D cluster are running) will be shut down.

The procedure to turn the system back on is the following:
– First we turn on the standalone host (and the File Share Witness VM).
Please do not turn on the PRTG server until the UPS battery capacity is back over 65% (because PRTG will trigger the shutdown procedures again if the capacity is below 65%).
– Once you have checked that the standalone host has network connectivity and the File Share Witness VM is working and reachable, we can proceed by turning on S2D Cluster N1.
– When S2D Cluster N1 is up we can turn on the VMs (as the witness is there and N1 is fully functional you are able to start your production VMs – there will be more data to resync, so if you have time it is better to wait for N2 to come back online and be taken out of maintenance mode first).
– We can now turn on S2D Cluster N2, and when it comes back online we need to bring it back into a fully functional cluster member state by executing this script:

$ClusterNodeName = 'S2D-N2'
# Start the cluster service, take the storage out of maintenance mode and resume the node
Start-ClusterNode -Name $ClusterNodeName
Get-StorageFaultDomain -Type StorageScaleUnit | Where-Object {$_.FriendlyName -eq $ClusterNodeName} | Disable-StorageMaintenanceMode
Resume-ClusterNode -Name $ClusterNodeName -Failback Immediate

After executing the script you can check the progress of the storage re-synchronization with the Get-StorageJob PowerShell cmdlet.
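Along with Get-StorageJob I also like to glance at the virtual disks themselves – something along these lines (just a quick check, not part of the recovery script):

# Show running repair/resync jobs and the current health of the virtual disks
Get-StorageJob
Get-VirtualDisk | Select-Object FriendlyName, OperationalStatus, HealthStatus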

When UPS battery capacity reaches over 65% you can turn on your PRTG monitoring system again.


Mass/bulk TimeToLive update on Windows Server DNS (primary zones)

Just in case someone needs to bulk/mass update (for example) the TimeToLive parameter on all A (CNAME, MX, TXT…) records in all primary zones on a Windows Server 2016/2019 DNS server …

$allzones = Get-DnsServerZone | Where-Object -Property ZoneType -EQ -Value "Primary"
foreach ($allzone in $allzones) {
    # -Name "@" limits this to records at the zone apex; drop -Name to cover every A record in the zone
    $olds = Get-DnsServerResourceRecord -ZoneName $allzone.ZoneName -Name "@" -RRType A
    foreach ($old in $olds) {
        # Clone the record, change only the TTL and write it back
        $new = $old.Clone()
        $new.TimeToLive = [System.TimeSpan]::FromMinutes(1)
        Set-DnsServerResourceRecord -OldInputObject $old -NewInputObject $new -ZoneName $allzone.ZoneName -PassThru
    }
}
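If you also want to cover the other record types mentioned above (CNAME, MX, TXT …), the same pattern can simply be wrapped in an extra loop – a sketch, adjust the type list and the TTL to your needs:

$types = 'A', 'CNAME', 'MX', 'TXT'
foreach ($allzone in (Get-DnsServerZone | Where-Object -Property ZoneType -EQ -Value "Primary")) {
    foreach ($type in $types) {
        foreach ($old in (Get-DnsServerResourceRecord -ZoneName $allzone.ZoneName -RRType $type)) {
            $new = $old.Clone()
            $new.TimeToLive = [System.TimeSpan]::FromMinutes(1)
            Set-DnsServerResourceRecord -OldInputObject $old -NewInputObject $new -ZoneName $allzone.ZoneName -PassThru
        }
    }
}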

“Poor man” monitoring of creation/enablement and addition and removal to/from security group of an account in Active Directory (part 2)

The next step is to monitor the addition and/or removal of a user to/from a security group – in this example I will show an alert being triggered when a user is added to the Domain Admins security group.
The script is slightly modified so that it captures the user who added another user to a security group, the user that was added, and the group the user was added to.

# Grab the newest "a member was added to a security-enabled global group" event (ID 4728)
$EventMessage = Get-WinEvent -FilterHashtable @{LogName='Security';ID=4728} -MaxEvents 1 | Format-List TimeCreated, Message
$EventMessageString = $EventMessage | Out-String
# Pull the relevant blocks of the message text: who did it, who was added, and to which group
$EventMessageAccountNameTextAdmin = $EventMessageString | Select-String -Pattern "Subject:\s+\S+\s+\S+\s+\S+\s+\S+\s+\S+\s+\S+\s+\S+\s+\S+\s+\S+" -AllMatches | Select-Object -ExpandProperty Matches | Select-Object -ExpandProperty Value
$EventMessageAccountNameTextUser = $EventMessageString | Select-String -Pattern "Member:\s+\S+\s+\S+\s+\S+\s+\S+\s+\S+\s+\S+\s+\S+\s+\S+" -AllMatches | Select-Object -ExpandProperty Matches | Select-Object -ExpandProperty Value
$EventMessageAccountNameTextGroup = $EventMessageString | Select-String -Pattern "Group:\s+\S+\s+\S+\s+\S+\s+\S+\s+\S+\s+\S+\s+\S+\s+" -AllMatches | Select-Object -ExpandProperty Matches | Select-Object -ExpandProperty Value
$EmailTo = "me@domain.com"
$EmailFrom = "alert@domain.com"
$Subject = "User added to a security group in Active Directory!"
$Body = "User was added to a group by: `n $EventMessageAccountNameTextAdmin `n `n `n User that was added to the security group: `n $EventMessageAccountNameTextUser `n `n `n Security group the user was added to: `n $EventMessageAccountNameTextGroup"
$SMTPServer = "YourSMTPServer"
$SMTPMessage = New-Object System.Net.Mail.MailMessage($EmailFrom,$EmailTo,$Subject,$Body)
$SMTPClient = New-Object Net.Mail.SmtpClient($SMTPServer, 25)
$SMTPClient.Send($SMTPMessage)
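Since the goal is to also cover removals, a near-identical script can watch event ID 4729 (a member was removed from a security-enabled global group) – only the filter and the wording of the e-mail need to change, e.g.:

# Same approach as above, but for the removal event
$EventMessage = Get-WinEvent -FilterHashtable @{LogName='Security';ID=4729} -MaxEvents 1 | Format-List TimeCreated, Message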

I have created a new Task Scheduler task that now calls the new script, which I have named SecurityGroup.ps1.

All the other stuff is configured in the same way as written in my previous post.


Demystifying SMB 3.x multichannel – part 9 – VM1 (Debian Linux 10) on Host1 to VM2 (Windows Server 2019) on Host2

I finally managed to make it work … 🙂 So this time we are trying to establish multichannel between Debian Linux with Samba 4.9.5-Debian and Windows Server 2019 (which prefers the SMB 3.1.1 dialect). Each VM sits on a separate Hyper-V host and has 4 virtual network adapters connected. I entered the hostnames of the VMs in the hosts files on both Windows and Linux, as I am not running any DNS server in the test network.
So I added 4 entries on each machine. On Debian I created a simple example smb.conf to make it work:

[global]
workgroup = WORKGROUP
interfaces = eth0, eth1, eth2, eth3
bind interfaces only = Yes
vfs objects = recycle aio_pthread
aio read size = 1
aio write size = 1
strict locking = No
use sendfile = no
server multi channel support = yes
server string = samba server
security = USER
encrypt passwords = yes
smb passwd file = /etc/samba/smbpasswd
guest ok = yes

[storage]
comment = Storage
path = /var/samba
writeable = yes
public = no

As you can see in the video, the Linux smbstatus command in the terminal gives me information similar to what the Get-SmbMultichannelConnection PowerShell cmdlet shows on Windows. I can clearly see how the servers are connected to each other over the SMB protocol.
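On the Windows side that check is simply (run on the machine that has the SMB session open):

# List the SMB connections and the multichannel connections behind them
Get-SmbConnection
Get-SmbMultichannelConnection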

As you can see in the video, the Windows machine from which I am copying data to Linux uses all four network adapters, but we only get about 2 gigabits of throughput. On the Linux side only two NICs are utilized. I was not able to make it use all four adapters (as the machines did in the previous part, in the Windows-VM-to-Windows-VM scenario). Still, I mainly wanted to demonstrate that the concept also works in a mixed environment with Windows and Linux.

(Mass) Modifying SOA record values by using Set-DnsServerResourceRecord

Today I wanted to update all serial numbers (to make sure they are written in the YYYYMMDD00 format) on the primary DNS zones of my Windows Server 2019 DNS server.

This is the script for that mass change – with small adjustments the same pattern can be used to modify other DNS record parameters as well.

$allzones = Get-DnsServerZone | Where-Object -Property ZoneType -EQ -Value "Primary"
foreach ($allzone in $allzones) {
    # Get the SOA record of the zone, clone it, change the serial number and write it back
    $old = Get-DnsServerResourceRecord -ZoneName $allzone.ZoneName -Name "@" -RRType Soa
    $new = $old.Clone()
    $new.RecordData.SerialNumber = 2019080400
    Set-DnsServerResourceRecord -OldInputObject $old -NewInputObject $new -ZoneName $allzone.ZoneName -PassThru
}
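If you want the serial to follow today's date automatically instead of being hard-coded, something like this could replace the fixed number (a sketch; DNS serials are 32-bit unsigned integers, so YYYYMMDD00 still fits):

# Build a YYYYMMDD00 serial from today's date, e.g. 2019080400
$serial = [uint32]((Get-Date -Format 'yyyyMMdd') + '00')
$new.RecordData.SerialNumber = $serial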

Demystifying SMB 3.x multichannel – part 8 – VM1 on Hyper-V host 1 to VM2 on Hyper-V host 2 – 4 NICs in each VM

We are upgrading the configuration from the previous part (7) by adding additional virtual network adapters to both VMs (so each will have 4).
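Adding the extra adapters is a one-liner per VM – a sketch, assuming a VM named VM1 and the Team01 virtual switch used elsewhere in this series (run on each Hyper-V host for its own VM):

# Add three more virtual NICs so the VM ends up with four
2..4 | ForEach-Object {
    Add-VMNetworkAdapter -VMName 'VM1' -SwitchName 'Team01' -Name "NIC0$_"
}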

*** When I "hot added" the network cards you can see that the throughput was bad (we should probably wait a couple of seconds or minutes for reconfiguration after new network adapters are added) – so at the 56th second I pause the video to reboot the VMs and at the 58th second I resume recording after the reboot. You can see that after the reboot everything works great and we get the maximum speed out of the 4 physical NICs in each of our Hyper-V hosts.

Demystifying SMB 3.x multichannel – part 7 – VM1 on Hyper-V host 1 to VM2 on Hyper-V host 2 – single NIC in each VM

From physical we are now moving to virtual – so I have created a small demo with two VMs on two separate Hyper-V hosts (each connected to the same switch with 4 physical NICs). Each VM has only one virtual network adapter.

As we can see, we get 1 gigabit of throughput from the first to the second VM. We can also see the utilization of the physical NICs on our Hyper-V hosts (the transfer uses only one NIC).

Demystifying SMB 3.x multichannel – part 6 – Hyper-V server to Hyper-V server example with Switch embedded teaming (Windows server 2016/2019 only) in VMSwitch with multiple (4) adapters on host

Finally we are approaching the solution that gives us great bandwidth by utilizing all four network adapters – we are still using Switch Embedded Teaming to team the physical interfaces directly when creating the Hyper-V virtual switch, but this time with a slightly different command in PowerShell.

New-VMSwitch -Name Team01 -EnableEmbeddedTeaming $true -AllowManagementOS $false -NetAdapterName NIC1,NIC2,NIC3,NIC4

!Warning! When you execute this command you will be left without connectivity, so I suggest preparing the following commands as well and executing them all in one go. After creating a virtual switch consisting of our four physical NICs combined with the embedded teaming feature, we are ready to give our Hyper-V host its management network adapters.

Add-VMNetworkAdapter -ManagementOS -SwitchName Team01 -Name MGMT01
Add-VMNetworkAdapter -ManagementOS -SwitchName Team01 -Name MGMT02
Add-VMNetworkAdapter -ManagementOS -SwitchName Team01 -Name MGMT03
Add-VMNetworkAdapter -ManagementOS -SwitchName Team01 -Name MGMT04
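To double-check the result, listing the management adapters and the switch team should show all four virtual adapters and all four physical NICs (a quick verification sketch):

Get-VMNetworkAdapter -ManagementOS | Select-Object Name, SwitchName
Get-VMSwitchTeam -Name 'Team01'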

Finally we are ready to test copying of files between our two Hyper-V hosts.

As you can see, with teaming configured through the new Switch Embedded Teaming functionality in the Hyper-V virtual switch, and with four adapters created for the management OS (the host), we get the same results as in part 1 of this series – when we were using just our 4 physical NICs without any additional configuration.

Demystifying SMB 3.x multichannel – part 5 – Hyper-V server to Hyper-V server example with Switch embedded teaming (Windows server 2016/2019 only) in VMSwitch with single virtual network card

In this article we are covering a new concept of teaming interfaces called Switch Embedded Teaming (or SET). It is used when you have the Hyper-V role installed, as it is only available in conjunction with the virtual switch – so if you are using a physical server for other roles you should still stick to the "classic" NIC Teaming (NetLbfo) that has been available since Windows Server 2012.

Since SET is available I am using it – and I have also reconfigured some "old-fashioned" designs to it.

Quoting original documentation:

SET is an alternative NIC Teaming solution that you can use in environments that include Hyper-V and the Software Defined Networking (SDN) stack in Windows Server 2016/2019. SET integrates some NIC Teaming functionality into the Hyper-V Virtual Switch.

A virtual switch with Switch Embedded Teaming enabled uses switch-independent mode and dynamic load distribution by default – you can change that in PowerShell.
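For example, to switch the load-balancing algorithm to Hyper-V port mode (assuming the switch is called Team01, as in the examples below), something along these lines should do it:

Set-VMSwitchTeam -Name 'Team01' -LoadBalancingAlgorithm HyperVPort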

The next very important thing is that SET preserves RDMA functionality, so you can use the two in conjunction.

There is another great piece of documentation about the "classic" teaming solution in Windows and Switch Embedded Teaming located here. I have copied the comparison table for a quick look at the features.

[Comparison table: NIC Teaming (LBFO) vs. Switch Embedded Teaming]
* from documentation: https://gallery.technet.microsoft.com/Windows-Server-2016-839cb607

So in the next video we are using the New-VMSwitch cmdlet to create a virtual switch with the embedded teaming parameter:

New-VMSwitch -Name Team01 -EnableEmbeddedTeaming $true -AllowManagementOS $true -NetAdapterName NIC1,NIC2,NIC3,NIC4

With this we get a configuration similar to part 3 – so only one virtual network adapter for our Hyper-V host.

And just to make sure … let's check the default load balancing and teaming mode configuration created by the cmdlet we just ran, using the Get-VMSwitchTeam cmdlet:

[Screenshot: Get-VMSwitchTeam output]
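In PowerShell that check looks something like this (a sketch using the Team01 switch name from above):

Get-VMSwitchTeam -Name 'Team01' | Format-List Name, TeamingMode, LoadBalancingAlgorithm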

As you can see, we also get the same result as in part 3 – only 1 gigabit of throughput between Hyper-V server 1 and Hyper-V server 2.

Demystifying SMB 3.x multichannel – part 4 – Hyper-V server to Hyper-V server example with windows teaming tool (server manager / powershell) and VMSwitch with multiple virtual network cards

We are pushing it forward – in the previous example (part 3) we created the virtual switch simply by using Hyper-V Manager (or PowerShell) with no extra configuration – the result was that when copying from server to server we got only 1 gigabit of throughput.

Now we are trying to upgrade the scenario using PowerShell (you can only do this with PowerShell or with System Center Virtual Machine Manager (which underneath also uses PowerShell :)) – we are again going to create a virtual switch, but this time we are going to assign more than just one virtual network adapter to the host operating system (our Hyper-V host):

So by doing:

New-VMSwitch -Name Team01 -AllowManagementOS $false -NetAdapterName Team01

We simply create a virtual switch that does not have the checkbox "Allow management operating system to share this network adapter" (mentioned in the previous part) checked, so no virtual network adapter is created. !Warning! If you run only this cmdlet you will cut yourself off from your Hyper-V host – so it is better to prepare the second part as well and run everything together. We will continue with the Add-VMNetworkAdapter cmdlet:

Add-VMNetworkAdapter -ManagementOS -SwitchName Team01 -Name MGMT01
Add-VMNetworkAdapter -ManagementOS -SwitchName Team01 -Name MGMT02
Add-VMNetworkAdapter -ManagementOS -SwitchName Team01 -Name MGMT03
Add-VMNetworkAdapter -ManagementOS -SwitchName Team01 -Name MGMT04

These cmdlets will create 4 virtual adapters for your Hyper-V host to use (yes, you can also use VLANs with these network adapters).
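If you want to tag one of those management adapters with a VLAN, it can be done like this (a sketch – the VLAN ID 10 is just an example):

Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName 'MGMT01' -Access -VlanId 10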

As can be seen in the video, we get better results than with a single virtual network adapter, but still no more than 2 gigabits of bandwidth – and it is not stable.