
Demystifying SMB 3.x multichannel – part 3 – Hyper-V server to Hyper-V server example with the Windows teaming tool (Server Manager / PowerShell) and a VMSwitch on top

As I told you at the beginning of this series, I am a big fan of Hyper-V – I have been implementing it since 2008 (when nobody believed it would ever become a serious virtualization platform :)). In Windows Server 2012 / 2012 R2 the most common way of setting up your Hyper-V networking was to team your NICs using the Windows-provided teaming tool and then create a VMSwitch on top of the team – using Hyper-V Manager or PowerShell – with the checkbox Allow management operating system to share this network adapter enabled. After this process you ended up with a new virtual NIC called, for example, vEthernet (Team01).
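For reference, here is what that second step looks like in PowerShell – a minimal sketch, assuming the team is named Team01 (naming the switch the same as the team gives you the vEthernet (Team01) adapter mentioned above):

# Create an external Hyper-V switch on top of the teamed interface;
# -AllowManagementOS $true is the PowerShell equivalent of the
# "Allow management operating system to share this network adapter" checkbox.
New-VMSwitch -Name "Team01" -NetAdapterName "Team01" -AllowManagementOS $true

# Verify – the management OS should now have a vEthernet (Team01) adapter
Get-VMNetworkAdapter -ManagementOS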

Like in the previous scenario (part 2), we get 1 gigabit of throughput when copying files from server to server. And yes, if there were a third server we would probably start using the next NIC, so we would have 2 gigabits of traffic from server 1 – 1 gigabit to server 2 and 1 gigabit to server 3 – but still just a gigabit to each of them.

In this video you can see that we are upgrading the previous scenario (teamed NICs) by enabling a Hyper-V Virtual Switch (External type) using Hyper-V Manager – you could also do that with PowerShell by following the documentation.

Demystifying SMB 3.x multichannel – part 2 – server to server example with the Windows teaming tool (Server Manager / PowerShell)

As you probably saw in my previous post, if you leave your cards just as they are – connected to the switch – SMB multichannel kicks in when you start copying something to another machine that also has multiple NICs. But what happens in a server-to-server scenario when you team your interfaces using the teaming that is built into Windows – the one you can configure with Server Manager (and of course with PowerShell)?
Well, when you team your interfaces you get a new interface (you will see an adapter named Microsoft Network Adapter Multiplexor).
In a server-to-server scenario this means that each side has only one logical NIC, which reduces your copy speed to the speed of a single NIC in the team.
As you can also see in PowerShell, using the Get-SmbMultichannelConnection cmdlet, we have just one session.
Yes, if there were a third server we would probably start using the next NIC, so we would have 2 gigabits of traffic from server 1 – 1 gigabit to server 2 and 1 gigabit to server 3 – but still just a gigabit to each of them.
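If you want to check this on your own machines, a quick way (just a sketch) is to count the entries the cmdlet returns:

# With a single teamed interface there should be exactly one entry here;
# with four independent NICs (as in part 1) you would see four.
Get-SmbMultichannelConnection | Measure-Object | Select-Object Count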

Just a quick remark: you can create the teamed interface using Server Manager or using PowerShell – more information about creating a teamed interface can be found here.
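If you prefer PowerShell, a minimal sketch (adapter and team names are illustrative; switch-independent teaming with dynamic load balancing is just one common choice):

# Team four gigabit NICs into a single logical interface
New-NetLbfoTeam -Name "Team01" -TeamMembers "NIC1","NIC2","NIC3","NIC4" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Verify the team and its members
Get-NetLbfoTeam
Get-NetLbfoTeamMember -Team "Team01"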

Demystifying SMB 3.x multichannel – part 1 – quick introduction

I am a big fan of the SMB 3.x multichannel feature that Microsoft implemented for the first time in Windows Server 2012. As I am also a big fan of Hyper-V and I want my hosts to be able to copy files between them (ISOs, VHDXs …) as fast as possible, I wanted to create this short series of articles about the multichannel feature. I was really happy when I saw that Mr. Linus Sebastian posted the video Quadruple Your Network Speed for $100 with SMB 3.0 Multichannel!, so I decided to create a small series of videos to look at the advantages of using it in a production environment and why you would want to.

For this test I used 2x Dell R730xd, each with 2 CPUs (Xeon E5-2620) and a Dell Intel I350 quad-port Gigabit Ethernet adapter, plus a MikroTik CRS226-24G-2S+ switch.

For those not familiar with SMB 3.x multichannel I would like to point out an (old) article by Mr. Jose Barreto: https://blogs.technet.microsoft.com/josebda/2012/06/28/the-basics-of-smb-multichannel-a-feature-of-windows-server-2012-and-smb-3-0/

So in this first part I would like to show how SMB 3.x multichannel works (I am putting the x there because the SMB version changes with each release of Windows Server – and client too; see the table at point 4).
In this demo I will be using Windows Server 2019, which uses SMB dialect 3.1.1. You can check the dialect your servers / clients are using by running the following cmdlet in PowerShell: Get-SmbConnection
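For example, to see the negotiated dialect per connection (just a sketch using the properties the cmdlet exposes):

# One row per SMB connection, including the negotiated dialect
Get-SmbConnection | Format-Table ServerName,ShareName,Dialect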

[screenshot: Get-SmbConnection output showing the SMB dialect]

In the following video you can see the first example – two servers, each connected to the switch with 4 NICs – without any extra configuration (no static IP addresses configured). You can see that when we copy files from server 1 to server 2 we utilise all 4 NICs on server 1 and all 4 NICs on server 2 – this can also be clearly seen on the switch. To get information on how your server utilizes SMB 3.x multichannel you can use the PowerShell cmdlet: Get-SmbMultichannelConnection
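A sketch of what to look for in its output (column names as exposed by the cmdlet):

# One row per NIC pair in use – with 4 NICs on each side you should see
# four entries; the RSS / RDMA columns explain why each path was chosen
Get-SmbMultichannelConnection | Format-Table ServerName,ClientIpAddress,ServerIpAddress,ClientRSSCapable,ClientRdmaCapable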


Reconfigure Hyper-V replica replication interval

I like the Hyper-V Replica feature – but sometimes, if you are configuring it quickly, you might fail to set the right replication interval (by default 5 minutes). You can change the interval in PowerShell, so for example if you have configured your replication to happen every 5 minutes and you want to change that to 30 seconds, you can do it with this cmdlet (this one will change all current replicas to 30 seconds – you can also do it for an individual replication):

Get-VMReplication | Set-VMReplication -ReplicationFrequencySec 30
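For a single virtual machine, a variant of the same cmdlet (the VM name is illustrative; note that Hyper-V Replica only accepts 30, 300 or 900 seconds):

# Change the interval for just one replicated VM
Set-VMReplication -VMName "VM01" -ReplicationFrequencySec 30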


S2D 2.0 (Windows Server 2019) – nested mirror and nested mirror-accelerated parity – how to expand a tier / volume?

As a big fan of S2D since Windows Server 2016, with more than 15 implementations of various systems and configurations behind me, I wanted to help you out with resizing volumes that use the new nested mirror and nested mirror-accelerated parity resiliency (supported only on 2-node S2D clusters on Windows Server 2019).

First of all – to keep it short – Microsoft did a great job addressing the possibility of two simultaneous failures in two-node S2D scenarios. You have two resiliency mechanisms available. One gives you more performance but takes a lot of space – nested mirror, where all data is written 4 times (2 times on one node and 2 times on the other), which gives you ~25% of usable space. The other is nested mirror-accelerated parity, where you combine nested mirror with nested parity and are able to achieve around 40-45% of usable space – but yes, the parity tier is more compute-intensive, as redundancy must be calculated, so it reduces performance (tests so far do not show a dramatic impact).

You can read more about these two options, released with Windows Server 2019, in the official documentation on Microsoft's website.

I was playing a bit with the configuration of some volumes using these two new options, and I decided to create a nested mirror-accelerated parity volume, which can be done with PowerShell (for now that is the only option).
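For reference, a sketch of the creation steps based on that documentation (the pool name pattern and HDD media type are assumptions for this sketch; the 1 TB / 7 TB tier split matches the volume described below):

# Tier templates for 2-node nested resiliency
New-StorageTier -StoragePoolFriendlyName S2D* -FriendlyName NestedMirror -ResiliencySettingName Mirror -MediaType HDD -NumberOfDataCopies 4
New-StorageTier -StoragePoolFriendlyName S2D* -FriendlyName NestedParity -ResiliencySettingName Parity -MediaType HDD -NumberOfDataCopies 2 -PhysicalDiskRedundancy 1 -NumberOfGroups 1 -FaultDomainAwareness StorageScaleUnit -ColumnIsolation PhysicalDisk

# The volume itself – 1 TB of nested mirror + 7 TB of nested parity
New-Volume -StoragePoolFriendlyName S2D* -FriendlyName NestedMirrorAcceleratedParity -FileSystem CSVFS_ReFS -StorageTierFriendlyNames NestedMirror,NestedParity -StorageTierSizes 1TB,7TB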

I created the tier templates as described in the MS documentation (mentioned earlier) and ended up with a result that looks like this:

Get-Volume -FriendlyName NestedMirrorAcceleratedParity | ft

Underneath you can see the two tiers that are the fundamental parts of this volume.

Get-StorageTier | ft FriendlyName,TierClass,ResiliencySettingName,FaultDomainRedundancy,Size,FootprintOnPool

So if you want to extend your volume, you must first extend each tier (or only one of them) – in this case NestedMirrorAcceleratedParity-NestedParity and/or NestedMirrorAcceleratedParity-NestedMirror.

You can do it with a single cmdlet – for example, for the parity tier:

Resize-StorageTier -FriendlyName NestedMirrorAcceleratedParity-NestedParity -Size 6TB
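And the same cmdlet works for the mirror tier if you want to grow both (the 2 TB here is purely illustrative):

# Grow the nested mirror part of the volume as well
Resize-StorageTier -FriendlyName NestedMirrorAcceleratedParity-NestedMirror -Size 2TB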

*The system will not allow you to exceed the total pool capacity – for example, my pool has 24 TB of space, and nested resiliency has a pretty big footprint on the pool, so I created a volume from 1 TB of nested mirror (for speed) and 7 TB of nested parity (for capacity) – S2D / ReFS will take care of dynamic hot/cold data placement.
[screenshot: tier sizes and footprint on the pool]

As you can see, the sum of FootprintOnPool in TB stays under my total capacity, and the system does not allow me to make the tiers any bigger. You can also see the storage efficiency I get from the physical disks after using these two resiliency mechanisms.
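If you want to sanity-check the numbers yourself, a rough sketch (the efficiency figures are the approximate ones quoted above, so this is only an estimate):

# Rough footprint estimate for this volume:
#   1 TB nested mirror at ~25% efficiency -> ~4 TB on the pool
#   7 TB nested parity at ~40% efficiency -> ~17.5 TB on the pool
#   ~21.5 TB in total, which still fits in the 24 TB pool
Get-StorageTier | Measure-Object -Property FootprintOnPool -Sum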

[screenshot: storage tier efficiency]

After resizing one or both tiers, you can query your virtual disk for the supported size it can be extended to.

Get-VirtualDisk -FriendlyName NestedMirrorAcceleratedParity | Get-Disk | Get-Partition | Where Type -eq Basic | Get-PartitionSupportedSize

After receiving the SizeMax value you can expand your virtual disk's partition (in my case from 6 TB to 8 TB, of which 1 TB is nested mirror and 7 TB nested parity):

Get-VirtualDisk -FriendlyName NestedMirrorAcceleratedParity | Get-Disk | Get-Partition | ? Type -eq Basic | Resize-Partition -Size 8796076224512
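Instead of hard-coding the byte count, you can also feed the SizeMax value straight back in – a small sketch of the same step:

# Grow the partition to the maximum size the resized tiers allow
$part = Get-VirtualDisk -FriendlyName NestedMirrorAcceleratedParity | Get-Disk | Get-Partition | ? Type -eq Basic
$part | Resize-Partition -Size ($part | Get-PartitionSupportedSize).SizeMax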

As a result you can see the disk that was resized from 6 TB to 8 TB in Windows Admin Center – which I encourage you to try and start using if you are jumping into the Microsoft software-defined storage / networking journey!

[screenshot: Windows Admin Center showing the resized volume]