
Windows Server 2025 – Storage Spaces Direct (S2D) Campus Cluster – part 1 – Preparation and deployment

And finally it is here! The update for Windows Server 2025 that we have been waiting for, which allows us to create a so-called Campus Cluster. You can read more about it in the official Microsoft announcement, here.

My demo setup looks like this – we have a site called Ljubljana with two racks, Rack 1 and Rack 2, and in each rack we have two nodes (N1 and N2 in Rack 1, N3 and N4 in Rack 2).

These four servers are domain joined (there is an extra VM called DC running Windows Server 2025 with Active Directory installed). They are connected to the “LAN” network of the site with subnet 10.10.10.0/24, and each node has two additional network cards: one connected to a network I call Storage network 1 with subnet 10.11.1.0/24 and one connected to Storage network 2 with subnet 10.11.2.0/24. Storage networks should be RDMA-enabled (10, 25, 40, 100 Gbps), low-latency, high-bandwidth networks – and don’t forget to enable jumbo frames on them (if possible – it should be :)).
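A minimal sketch for enabling jumbo frames via PowerShell – the adapter names and the supported value are assumptions from my lab, so check what your NICs expose first:

# Adapter names "Storage1" / "Storage2" are hypothetical – adjust to your setup
Get-NetAdapterAdvancedProperty -Name "Storage1","Storage2" -RegistryKeyword "*JumboPacket"
Set-NetAdapterAdvancedProperty -Name "Storage1","Storage2" -RegistryKeyword "*JumboPacket" -RegistryValue 9014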

It looks like this:


To be able to set up a hyper-converged infrastructure (HCI) with Storage Spaces Direct you need the Failover Clustering feature and the Hyper-V role installed on each server. I am doing it via PowerShell:

(These cmdlets will require a reboot, as Hyper-V is installed…)

# One way to install the Hyper-V role, via the optional features mechanism
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All

# On Windows Server you can install the Hyper-V role and management tools this way instead
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools

# Install the Failover Clustering feature and its management tools
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

When the prerequisites are met, I run a cmdlet to form a cluster C1 with nodes N1-N4, with a static IP address in the “LAN” segment and no storage (as I do not have any at this moment):

New-Cluster -Name C1 -Node N1, N2, N3, N4 -StaticAddress 10.10.10.10 -NoStorage
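A quick sanity check once the cluster is formed – purely a verification step, not required:

# Confirm the cluster exists and all four nodes are up
Get-Cluster C1
Get-ClusterNode -Cluster C1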


Before enabling Storage Spaces Direct you need to define the site, the racks, and the nodes in those racks. I have formed a typical site, Ljubljana (the Slovenian capital city); in that site I created two racks, virtually in two datacenters, DC1 and DC2; I put those racks in site Ljubljana and then added the nodes to the racks.

You can do it by using PowerShell cmdlets:

New-ClusterFaultDomain -Name Ljubljana -FaultDomainType Site
New-ClusterFaultDomain -Name DC1 -FaultDomainType Rack
New-ClusterFaultDomain -Name DC2 -FaultDomainType Rack
Set-ClusterFaultDomain -Name DC1 -FaultDomain Ljubljana
Set-ClusterFaultDomain -Name DC2 -FaultDomain Ljubljana
Set-ClusterFaultDomain -Name N1 -FaultDomain DC1
Set-ClusterFaultDomain -Name N2 -FaultDomain DC1
Set-ClusterFaultDomain -Name N3 -FaultDomain DC2
Set-ClusterFaultDomain -Name N4 -FaultDomain DC2

At the end you should have something like this if you check with the PowerShell cmdlet Get-ClusterFaultDomain:

When this is done you can proceed with enabling Storage Spaces Direct. You will first be asked if you want to perform this action (as every time you have enabled it until now), but a couple of seconds later you will be prompted again, because the system now understands that we have a site, racks, and nodes in different racks.

The prompt will inform you that it will set rack fault tolerance on the S2D pool, which is normally recommended on setups with multiple racks – continue with Y (Yes).
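For reference, the enabling step itself is the same cmdlet as in previous releases – a minimal run from one of the nodes:

# Enable S2D; it claims the eligible disks on all nodes and creates the pool
Enable-ClusterStorageSpacesDirect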

In a couple of seconds you can already observe the newly created Cluster Pool 1, which consists of all disks from all nodes (in my case 32 disks, as every node has 8 disks dedicated to Storage Spaces Direct).

As per the official documentation, you should update the storage pool by using the PowerShell cmdlet:

Get-StoragePool S2D* | Update-StoragePool

You will be prompted with the information that the storage pool will be upgraded to the latest version and that this is an irreversible action – proceed with Y (Yes).

Check that the version is now 29 by using:

(Get-CimInstance -Namespace root/microsoft/windows/storage -ClassName MSFT_StoragePool -Filter 'IsPrimordial = false').CimInstanceProperties['Version'].Value

Now you can proceed with creating the volumes – 4-copy mirror (for the most important VMs) or 2-copy mirror (for less important workloads).

I modified the official examples a bit for fixed and thin provisioned 2-way or 4-way mirror volumes, and I used these one-liners:

New-Volume -FriendlyName "FourCopyVolumeFixed" -StoragePoolFriendlyName S2D* -FileSystem CSVFS_ReFS -Size 20GB -ResiliencySettingName Mirror -PhysicalDiskRedundancy 3 -ProvisioningType Fixed -NumberOfDataCopies 4

New-Volume -FriendlyName "FourCopyVolumeThin" -StoragePoolFriendlyName S2D* -FileSystem CSVFS_ReFS -Size 20GB -ResiliencySettingName Mirror -PhysicalDiskRedundancy 3 -ProvisioningType Thin -NumberOfDataCopies 4

New-Volume -FriendlyName "TwoCopyVolumeFixed" -StoragePoolFriendlyName S2D* -FileSystem CSVFS_ReFS -Size 20GB -ResiliencySettingName Mirror -PhysicalDiskRedundancy 1 -ProvisioningType Fixed

New-Volume -FriendlyName "TwoCopyVolumeThin" -StoragePoolFriendlyName S2D* -FileSystem CSVFS_ReFS -Size 20GB -ResiliencySettingName Mirror -PhysicalDiskRedundancy 1 -ProvisioningType Thin
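To double-check what was created, something along these lines should do (just a quick sketch using the standard Storage module properties):

# List the new virtual disks with their resiliency and provisioning settings
Get-VirtualDisk | Select-Object FriendlyName, ResiliencySettingName, NumberOfDataCopies, PhysicalDiskRedundancy, ProvisioningType, Size | Format-Table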

In the next episode I will put some workloads on these nodes and simulate failures to see how the Campus Cluster handles them.


Windows Server 2025 – Stretched cluster with S2D

As we are expecting the final release of Windows Server 2025, we can take a look at the stretched cluster capability in combination with Storage Spaces Direct (S2D).

In this quick demo I have set up a 4-node S2D stretched cluster between two Active Directory Sites and Services subnets (the Failover Cluster uses the subnets defined in AD DS to determine in which site the nodes reside).

In my case I have the Default-First-Site-Name (Ljubljana) and another site called Portoroz. In Ljubljana our nodes are in segment 10.5.1.0/24 and in Portoroz in 10.5.2.0/24.
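If the subnets are not yet registered in AD Sites and Services, a minimal sketch with the ActiveDirectory module could look like this (the site names are from my demo; the sites themselves must already exist):

# Map each subnet to its site so the cluster can place the nodes correctly
New-ADReplicationSubnet -Name "10.5.1.0/24" -Site "Ljubljana"
New-ADReplicationSubnet -Name "10.5.2.0/24" -Site "Portoroz"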

We have two virtual machines located in a dedicated (stretched) VLAN (5), so they are in the same subnet no matter on which node they are running.

The two VMs have IPs 10.5.5.111 (vm1) and 10.5.5.112 (vm2), and I am live migrating them to the remote nodes (sitebn1 / sitebn2).

We have two options on how to deploy and use such a scenario – we can use it in active/active mode, so VMs and CSVs live on both sites and the CSVs are (synchronously) replicated to the other pair of nodes. There is also the possibility to use it in active/passive mode, by simply replicating CSVs from site 1 to site 2.

For demo purposes I have reduced the time the cluster waits before starting its processes to re-establish availability of failed resources from 240 seconds (4 minutes) to 10 seconds.
The value of 240 seconds has been there since Windows Server 2016 and the introduction of so-called compute resiliency, which allowed these 4 minutes for nodes to return from, let's say, a network outage or something like that. You can reduce this timer by using PowerShell:

(Get-Cluster).ResiliencyDefaultPeriod = ValueInSeconds
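In my demo that was:

# Wait only 10 seconds before recovering resources from an unresponsive node
(Get-Cluster).ResiliencyDefaultPeriod = 10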

In the video you can see that I first live migrate the VM to site 2 and afterwards turn off the nodes in site 2. In around 15 seconds VM2 is available again, this time on the CSV (volume02) that was brought online in site 1.


I think this concept can be interesting for many customers and I am really looking forward to the final release of Windows Server 2025! Good job, Microsoft!

Shutdown Storage Spaces Direct (S2D) or Azure Stack HCI Hyper-Converged cluster safely

Yes, we are building clustered solutions to keep uptime as high as possible, but sometimes there is a planned or unplanned electrical outage or maintenance work on the power lines, when we are simply forced to shut down our cluster – and in that situation we want to do it safely.

When we talk about Storage Spaces Direct (S2D) on Windows Server 2016 / 2019 / 2022 in a hyper-converged scenario (where Hyper-V virtualization and storage are inside the same system), it is very important to shut such a system down properly, so as not to end up in problematic situations where data corruption or other issues could emerge. Because of that, Microsoft has a great article about how to safely and properly shut down a node in an S2D configuration.

I would like to share with you a concept that could help you get the whole cluster safely turned off.

The scenario consists of a 2-node S2D solution, a standalone Hyper-V host (on which I run the file share witness for S2D) and PRTG, which monitors an APC UPS 2200 via SNMP:

So first of all we need to get the battery capacity information via an SNMP query to the APC network management card – this will be the value we monitor, and based on the current value we will trigger some actions.

Then we need to prepare notification templates where we define the PowerShell scripts to be executed.
I am using three scripts (shown below):
– The first script will make a graceful stop of the storage services and put S2D Cluster N2 in maintenance mode (all roles will be drained to S2D Cluster N1); after that it will shut down S2D Cluster N2
– The second script will trigger a shutdown of all virtual machines on S2D Cluster N1 and after 180 seconds it will shut down S2D Cluster N1
– The third script will shut down the third (standalone) Hyper-V host

With the Execute Program action on our notification template we define which script the template should use, and the username and password that will be used only to execute the script on the local machine (PRTG) – the credentials for PowerShell remoting that will do the shutdown jobs can be saved securely and separately, so you do not need to enter plain-text credentials for accessing the hosts anywhere.

After that we need to configure the triggers – when the scripts will be executed, based on the battery capacity. In my case I decided to set it up like this:

  • When the battery is at 65%, turn off S2D Cluster N2 (drain the roles (VMs and cluster service roles) to S2D Cluster N1, put the node in maintenance mode, shut down the physical node S2D Cluster N2).
  • When the battery is at 45%, turn off S2D Cluster N1 by first shutting down all VMs, then waiting 180 seconds for the shutdown to complete, and then shutting down the physical S2D Cluster N1.
  • When the battery is at 15%, turn off the standalone Hyper-V host – where our witness and PRTG VMs are running.

If we check the scriptblocks inside our scripts:

Shutdown-N2.ps1 (the script that, in my case, will run first):

In the first part of the script we need to set up the credentials that will be used for PowerShell remoting.
You could do this by simply entering the username and password into the script (please do not do that! PowerShell allows you to do it far more securely – please read this article about securely saving an encrypted password in a separate file).
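A minimal sketch of that approach – the account name and file path are hypothetical, and the encrypted file must have been created beforehand on the same machine and under the same user that runs the script:

# Password saved earlier with:
#   Read-Host -AsSecureString | ConvertFrom-SecureString | Out-File C:\Scripts\password.txt
$username = 'DEMO\s2d-shutdown'   # hypothetical service account
$securePassword = Get-Content 'C:\Scripts\password.txt' | ConvertTo-SecureString
$credential = New-Object System.Management.Automation.PSCredential ($username, $securePassword)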

Invoke-Command -ComputerName S2D-N2 -Credential $credential -ScriptBlock {
$nodename = 'S2D-N2'
# Drain all roles to the other node and wait for the drain to finish
Suspend-ClusterNode -Name $nodename -Drain -Wait
# Put the node's storage into maintenance mode for a graceful stop
Get-StorageFaultDomain -Type StorageScaleUnit | Where-Object {$_.FriendlyName -eq $nodename} | Enable-StorageMaintenanceMode
# Stop the cluster service on the node, then shut the machine down
Stop-ClusterNode -Name $nodename
Start-Sleep -Seconds 10
Stop-Computer -Force
}

Shutdown-N1.ps1 (the second script that will be executed – this will turn off the VMs and finally S2D Cluster N1):

Invoke-Command -ComputerName S2D-N1 -Credential $credential -ScriptBlock {
# Ask all VMs to shut down in parallel
Get-VM | Stop-VM -Force -AsJob
# Give the guests time to shut down before powering off the host
Start-Sleep -Seconds 180
Stop-Computer -Force
}

Shutdown-HyperV.ps1 (the third script, which will turn off the standalone Hyper-V host):

Invoke-Command -ComputerName StandaloneHyperV -Credential $credential -ScriptBlock {
# Nothing left to drain here – just power off the host
Stop-Computer -Force
}

So the shutdown sequence will be:
– when the electricity goes out and PRTG learns, by querying the UPS, that the battery capacity is under 65%:
S2D Cluster N2 will be gracefully stopped (by draining its roles, putting it in maintenance mode, and shutting it down after that)
– when the battery is under 45%:
S2D Cluster N1 will be gracefully stopped (by shutting down all VMs and finally shutting down)
– when the battery capacity is under 15%:
our standalone host (where the PRTG and File Share Witness VMs (needed for the S2D cluster) are running) will be shut down.

The procedure to turn the system back on is the following:
– First we turn on the standalone host (and the File Share Witness VM).
Please do not turn on the PRTG server until the UPS battery capacity is over 65% (because PRTG will trigger the procedures again if the capacity is below 65%).
– Once you have checked that the standalone host has network connectivity and the File Share Witness VM is working and has connectivity too, we can proceed further by turning on S2D Cluster N1.
– When S2D Cluster N1 is up we can turn on the VMs (as the witness is there and N1 is fully functional you are able to start your production VMs – but there will be more data to resync, so if you have time it is better to wait for N2 to come back online and be taken out of maintenance mode).
– We can now turn on S2D Cluster N2, and when it comes back online we need to bring it back into a fully functional cluster member state by executing the script:

$ClusterNodeName = 'S2D-N2'
# Start the cluster service on the node
Start-ClusterNode -Name $ClusterNodeName
# Take the node's storage out of maintenance mode
Get-StorageFaultDomain -Type StorageScaleUnit | Where-Object {$_.FriendlyName -eq $ClusterNodeName} | Disable-StorageMaintenanceMode
# Resume the node and fail its roles back immediately
Resume-ClusterNode -Name $ClusterNodeName -Failback Immediate

After executing the script you can check the progress of the storage re-synchronization with the PowerShell cmdlet Get-StorageJob.

When the UPS battery capacity gets back over 65%, you can turn on your PRTG monitoring system again.

How to monitor storage replication after a Storage Spaces Direct node reboot (e.g. after updates)

Hi!

I have a two-node Storage Spaces Direct scenario, and after updating and rebooting one of the nodes in the cluster I need to wait for the storage operations to complete (yes, I am updating this scenario manually :)).

If you want to check the progress of this synchronization / repair of Storage Spaces, just drop this into PowerShell on one of the nodes:

Get-StorageJob | Select Name,IsBackgroundTask,ElapsedTime,JobState,PercentComplete,@{label="BytesProcessed (GB)";expression={$_.BytesProcessed/1GB}},@{label="Total Size (GB)";expression={$_.BytesTotal/1GB}} | ft

You should get something like this…

Remember – if you have Storage Spaces Direct in a two-node scenario you SHOULD WAIT for this job to complete – if you reboot the second node too soon your CSV will go offline! So keep calm and PowerShell! 🙂

System x3650 M5 – M5210 controller JBOD mode – invalid arguments

Recently I wanted to use the disks connected to an M5210 RAID controller as JBOD…
My controller also has the additional cache and battery pack, and in this configuration you get the error “The requested command has invalid arguments.” when trying to apply the change to JBOD…
Well… if you physically remove the cache from your controller…
…and reboot… you will be able to enable JBOD mode…
…and finally, in Windows Disk Management, you can see your disks…

An important NOTE for those who would like to test the Storage Spaces Direct technology that is coming with Windows Server 2016 (currently TP5)!

This will not work, as the BUS TYPE is still RAID, and that is not supported – it should be SAS. The setup will fail at the Enable-ClusterStorageSpacesDirect step.
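You can check the bus type that each disk reports with a quick one-liner (just a sketch using the standard Storage module properties):

# BusType must show SAS (or SATA behind a simple SAS HBA), not RAID
Get-PhysicalDisk | Select-Object FriendlyName, BusType, MediaType, Size | Format-Table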


The only solution is to buy a simple HBA, as written here.

“…All SATA and SAS devices must be attached to a SAS Host Bus Adapter (HBA). This HBA must be a “simple” HBA, which means the devices shows as SAS devices in Windows Server…”