Demystifying SMB 3.x multichannel – part 5 – Hyper-V server to Hyper-V server example with Switch Embedded Teaming (Windows Server 2016/2019 only) in VMSwitch with a single virtual network card

In this article we are covering a new way of teaming interfaces called Switch Embedded Teaming (SET). It is used when you have the Hyper-V role installed, as it is only available in conjunction with the Virtual Switch – so if you are using a physical server for some other role, you should stick to "classic" NIC Teaming (NetLbfo), which has been available since Windows Server 2012.

Since SET became available I have been using it – and I have also reconfigured some "old-fashioned" designs to it.

Quoting original documentation:

SET is an alternative NIC Teaming solution that you can use in environments that include Hyper-V and the Software Defined Networking (SDN) stack in Windows Server 2016/2019. SET integrates some NIC Teaming functionality into the Hyper-V Virtual Switch.

A Virtual Switch with Switch Embedded Teaming enabled uses Switch Independent mode and Dynamic load distribution by default – you can change the load distribution in PowerShell.
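As a quick illustration (the switch name Team01 is an assumption, matching the example later in this post), you can inspect and change the load-balancing algorithm of a SET switch like this:

```powershell
# Inspect the current teaming mode and load-balancing algorithm of the SET switch
Get-VMSwitchTeam -Name Team01

# Change load distribution from the default Dynamic to Hyper-V Port
Set-VMSwitchTeam -Name Team01 -LoadBalancingAlgorithm HyperVPort
```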

The next very important thing is that SET preserves RDMA functionality, so you can use the two in conjunction.

There is another great piece of documentation about the "classic" teaming solution in Windows and Switch Embedded Teaming located here. I have copied the comparison table for a quick look at the features.

lbfo vs set
* from documentation: https://gallery.technet.microsoft.com/Windows-Server-2016-839cb607

So in the next video we are using the New-VMSwitch cmdlet to create a Virtual Switch with the embedded teaming parameter:

New-VMSwitch -Name Team01 -EnableEmbeddedTeaming $true -AllowManagementOS $true -NetAdapterName NIC1,NIC2,NIC3,NIC4

Out of this we get a similar configuration as in part 3 – only one virtual network card for our Hyper-V host.

And just to make sure … let's check the default load balancing and teaming mode set by the cmdlet we just fired, using the Get-VMSwitchTeam cmdlet:

teaming

As you can see, we get the same result as in part 3 – only 1 gigabit of throughput between Hyper-V server 1 and Hyper-V server 2.

Demystifying SMB 3.x multichannel – part 4 – Hyper-V server to Hyper-V server example with Windows teaming tool (Server Manager / PowerShell) and VMSwitch with multiple virtual network cards

We are pushing forward – in the previous example (part 3) we made a virtual switch simply by using Hyper-V Manager (or PowerShell) with no extra configuration – the result was that when copying from server to server we got only 1 gigabit of throughput.

Now we are trying to upgrade the scenario by using PowerShell (you can only do this with PowerShell or with System Center Virtual Machine Manager, which under the hood also uses PowerShell :)) – we are going to create a Virtual Switch, but then we are going to assign more than just one virtual network card to the host operating system (our Hyper-V host):

So by doing:

New-VMSwitch -Name Team01 -AllowManagementOS $false -NetAdapterName Team01

We simply create a virtual switch that does not have the checkbox "Allow management operating system to share this network adapter" (mentioned in the previous part) checked, so no virtual network card is created. Warning! If you run only this cmdlet you will cut yourself off from your Hyper-V host – so it is better to prepare the second part as well and run it all together. We continue with the Add-VMNetworkAdapter cmdlet:

Add-VMNetworkAdapter -ManagementOS -SwitchName Team01 -Name MGMT01
Add-VMNetworkAdapter -ManagementOS -SwitchName Team01 -Name MGMT02
Add-VMNetworkAdapter -ManagementOS -SwitchName Team01 -Name MGMT03
Add-VMNetworkAdapter -ManagementOS -SwitchName Team01 -Name MGMT04
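Since running the cmdlets one by one risks losing connectivity between the first and second step, here is a sketch of the whole thing as a single paste (same names as above):

```powershell
# Create the switch without a management vNIC, then immediately add four of them
New-VMSwitch -Name Team01 -AllowManagementOS $false -NetAdapterName Team01
1..4 | ForEach-Object { Add-VMNetworkAdapter -ManagementOS -SwitchName Team01 -Name ("MGMT{0:D2}" -f $_) }
```

The loop produces the same MGMT01–MGMT04 adapters as the four individual cmdlets.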

These cmdlets will create 4 virtual adapters for your Hyper-V host to use (yes, you can also use VLANs with these network adapters).
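For example – with VLAN ID 100 as a placeholder – a VLAN can be assigned to one of these host adapters like this:

```powershell
# Put the MGMT01 host adapter into access mode on VLAN 100 (the VLAN ID is just an example)
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName MGMT01 -Access -VlanId 100
```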

As can be seen in the video, we get better results than with a single virtual network adapter, but still no more than 2 gigabits of bandwidth – and it is not stable.

Demystifying SMB 3.x multichannel – part 3 – Hyper-V server to Hyper-V server example with Windows teaming tool (Server Manager / PowerShell) and VMSwitch on top

As I told you at the beginning of this series, I am a big fan of Hyper-V – I have been implementing it since 2008 (when nobody believed it would ever become a serious virtualization platform :)). In Windows Server 2012 / 2012 R2 the most common way of setting up your Hyper-V networking was to team your NICs using the Windows-provided tool and then create a VMSwitch on top of it – using Hyper-V Manager or PowerShell and the checkbox "Allow management operating system to share this network adapter". After this process you ended up with a new virtual NIC called, for example, vEthernet (Team01).

Like in the previous scenario (part 2) we get 1 gigabit of speed when copying files from server to server. And yes, if there was a third server we would probably start using the next NIC, so we would have 2 gigabits of traffic from server 1 – 1 gigabit to server 2 and 1 gigabit to server 3 – but still just a gigabit to each of them.

In this video you can see that we are upgrading the previous scenario (teamed NICs) by enabling a Hyper-V Virtual Switch (External type) using Hyper-V Manager – you could also do that with PowerShell by following the documentation.
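A minimal PowerShell sketch of the same step (the switch and team names are assumptions):

```powershell
# Create an External virtual switch on top of the existing Windows team,
# sharing it with the management OS (this creates the vEthernet adapter)
New-VMSwitch -Name "Team01" -NetAdapterName "Team01" -AllowManagementOS $true
```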

Demystifying SMB 3.x multichannel – part 2 – server to server example with Windows teaming tool (Server Manager / PowerShell)

As you probably saw in my previous post – if you leave your cards just as they are, connected to the switch, SMB multichannel kicks in when you start to copy something to another machine that also has multiple NICs … But what happens in a server-to-server scenario when you team your interfaces using the teaming that is included in Windows – the one you can configure through Server Manager (and of course through PowerShell)?
Well, when you team your interfaces you get a new interface (you will see an interface with Microsoft Network Adapter Multiplexor).
In a server-to-server scenario it means that you have only one NIC, which reduces your copying speed to the speed of a single card in the team.
As you can also see in PowerShell, using the Get-SmbMultichannelConnection cmdlet, we have just one session.
Yes, if there was a third server we would probably start using the next NIC, so we would have 2 gigabits of traffic from server 1 – 1 gigabit to server 2 and 1 gigabit to server 3 – but still just a gigabit to each of them.

Just a quick remark … you can create a teaming interface using Server Manager or PowerShell – more information about creating a teamed interface can be found here.

Demystifying SMB 3.x multichannel – part 1 – quick introduction

I am a big fan of the SMB 3.x multichannel feature that Microsoft implemented for the first time in Windows Server 2012. As I am also a big fan of Hyper-V and I want my hosts to be able to copy files between them (ISOs, VHDXs …) as fast as possible, I wanted to create this short series of articles about the multichannel feature. I was really happy when I saw Mr. Linus Sebastian post the video Quadruple Your Network Speed for $100 with SMB 3.0 Multichannel!, so I decided to create a small series of videos to also see what the advantages of using it in a production environment are, and why.

For this test I used 2x Dell R730xd with 2 CPUs (Xeon E5-2620) each, Dell Intel I350 quad-port gigabit Ethernet adapters, and a MikroTik CRS226-24G-2S+ switch.

For those not familiar with SMB 3.x multichannel I would like to point out an (old) article by Mr. Jose Barreto: https://blogs.technet.microsoft.com/josebda/2012/06/28/the-basics-of-smb-multichannel-a-feature-of-windows-server-2012-and-smb-3-0/

So in this first part I would like to show how SMB 3.x multichannel works (I am putting the x there as SMB versions change (see the table at point 4 of the linked article) with each release of Windows Server – and client too!).
In this demo I will be using Windows Server 2019, which uses SMB dialect 3.1.1. You can check the dialect your servers / clients are using with the following PowerShell cmdlet: Get-SmbConnection
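For example, to see the negotiated dialect per connection (a hedged sketch using the SmbShare module):

```powershell
# Show which SMB dialect each active connection negotiated
Get-SmbConnection | Select-Object ServerName, ShareName, Dialect
```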

smb dialect

In the following video you can see the first example – two servers, each connected with 4 NICs to the switch – without any extra configuration (there are no IP addresses configured), and you can see that when we copy files from server 1 to server 2 we utilise all 4 NICs on server 1 and all 4 NICs on server 2 – this can also be clearly seen on the switch. To see how your server utilises SMB 3.x multichannel you can use the PowerShell cmdlet: Get-SmbMultichannelConnection
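A quick sketch of checking this during a copy (run on either server):

```powershell
# One row per SMB multichannel connection - with 4 usable NICs you should see 4 entries
Get-SmbMultichannelConnection

# Also list interfaces that were considered but not selected for multichannel
Get-SmbMultichannelConnection -IncludeNotSelected
```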

 

Exchange 2010 to Exchange 2016 mailbox move useful Powershell cmdlets

It is time to migrate the last Exchange 2010 servers as they are reaching end of support soon …
As I am doing these migrations I just wanted to put some PowerShell cmdlets into this blog post that can be useful when doing it.

If you want to speed up things a bit:

New-MoveRequest -Identity "xxx@xxx.si" -TargetDatabase "DBEX1601" -Priority Emergency
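Building on the single-mailbox move above, a hedged sketch for queuing moves for a whole database at once (the database and batch names are placeholders):

```powershell
# Queue move requests for every mailbox still sitting on an Exchange 2010 database
Get-Mailbox -Database "DB2010" -ResultSize Unlimited |
    New-MoveRequest -TargetDatabase "DBEX1601" -BatchName "EX2010toEX2016"
```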

If you want to check status of your requests:

Get-MoveRequest

If you want to remove completed move requests you can do:

Get-MoveRequest -MoveStatus Completed | Remove-MoveRequest

If you want to get more information about the moves you can try:

Get-MoveRequest | Get-MoveRequestStatistics | Sort-Object PercentComplete -Descending

 

Reconfigure Hyper-V replica replication interval

I like the Hyper-V Replica feature – but sometimes, if you are configuring it quickly, you might fail to set the right replication interval (by default 5 minutes). It is possible to change the interval in PowerShell – for example, if you have configured your replication to happen every 5 minutes and you want to change that to 30 seconds, you can do it with this cmdlet (this one will change all current replicas to 30 seconds – you can also do it for an individual replication):

Get-VMReplication | Set-VMReplication -ReplicationFrequencySec 30
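And for a single VM only (the VM name is a placeholder) – note that the allowed interval values are 30, 300 and 900 seconds:

```powershell
# Change the replication interval for one VM instead of all replicas
Set-VMReplication -VMName "SRV01" -ReplicationFrequencySec 30
```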

 

 

Sending SMS using Infobip service and MikroTik tool / fetch feature

My recent article on enhancing the Netwatch feature in MikroTik was created as a prerequisite for a simple alerting solution with e-mail / SMS notification channels.

I am using the Infobip SMS platform and they have a clear and simple (nicely documented) API for sending SMS messages (I was able to make it work from PowerShell – documented here).

I was searching a bit and I saw that MikroTik changed the way the fetch tool works when we need to send header fields – as can be read here, here and in the official documentation here.

The working command – tested on a MikroTik with RouterOS 6.44.3 (June 2019) – is:

/tool fetch http-method=post mode=https http-header-field="content-type:application/json,Authorization:Basic key23123832" http-data="{ \"from\":\"MyMonitoring\", \"to\":[\"386xxyyyzzz\"], \"text\":\"HOST x.x.x.x DOWN\"}" url=https://api.infobip.com/sms/1/text/single

So – the important thing to point out is the way you provide http-header-field:
http-header-field="content-type:application/json,Authorization:Basic key23123832"
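For comparison, a hedged PowerShell equivalent of the same call (the API key and phone number are placeholders, as above):

```powershell
# Same Infobip request from PowerShell using Invoke-RestMethod
$headers = @{
    "Authorization" = "Basic key23123832"
    "Content-Type"  = "application/json"
}
$body = '{ "from":"MyMonitoring", "to":["386xxyyyzzz"], "text":"HOST x.x.x.x DOWN" }'
Invoke-RestMethod -Uri "https://api.infobip.com/sms/1/text/single" -Method Post -Headers $headers -Body $body
```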

Hope it helps!

MikroTik – Netwatch enhanced (updated June 2019)

With MikroTik one can create an excellent e-mail / SMS alerting system for when a host goes down or comes back up.
In Tools there is the Netwatch feature – but it has one disadvantage – it triggers the "up" or "down" commands / scripts immediately, and sometimes one missed ping does not mean that the host is permanently offline / online. Because of that I have written a script that extends the ping checks (in my example for another 10 seconds after the first ping fails (triggered by Netwatch)) and only after being absolutely sure that the host is offline or online triggers an event – an e-mail message or an SMS message (using an SMS gateway – covered in this article).

As you can see, the scripts can successfully handle the events when a host goes offline and when it comes online again:
example

The scripts can also handle "flapping host" behaviour (a host going down and coming back up in less than 10 seconds):

example2

What do you need to set up such a monitoring system:
1. Tools / Netwatch – create two entries for the same host:
example 3

2. System / Scripts – you will need to create two scripts – for the down and up events:
scripts

on-down – script:
:log error message="Host x.x.x.x is down! Disabling Netwatch host down monitoring – taking over with script on-down – checking reachability of host x.x.x.x each second for ten seconds!"
:tool netwatch disable numbers=0
:local countup value=0
:while (($countup < 10) && ([:ping address=x.x.x.x interval=1 count=1]=0)) do={:set countup value=($countup+1); :delay 1000ms; :log error message="Host x.x.x.x is offline. Check number: $countup" };
:if ($countup < 10) do={:log warning message="Host x.x.x.x online again in less than ten seconds/checks – up on check number: $countup. Enabling netwatch."; :tool netwatch enable numbers=0; :tool e-mail send to=my.email@gmail.com subject="Host Up after $countup" body="Host Up after $countup"; } else={:log error message="After ten seconds/checks host x.x.x.x is still offline – probably there is a major issue/outage – sending e-mail/SMS."; :tool e-mail send to=my.email@gmail.com subject="Host x.x.x.x Down" body="Host x.x.x.x Down! Host x.x.x.x Down!";}
:tool netwatch enable numbers=1

on-up – script:
:log warning message="Host x.x.x.x is up! Disabling Netwatch host up monitoring – taking over with script on-up – checking reachability of host x.x.x.x each second for ten seconds!"
:tool netwatch disable numbers=1
:local countdown value=0
:while (($countdown < 10) && ([:ping address=x.x.x.x interval=1 count=1]=1)) do={:set countdown value=($countdown+1); :delay 1000ms; :log warning message="Host x.x.x.x is online. Check number: $countdown" };
:if ($countdown < 10) do={:log error message="Host x.x.x.x offline again in less than ten seconds/checks – down on check number: $countdown. Enabling netwatch monitoring."; :tool netwatch enable numbers=1; :tool e-mail send to=luka@manojlovic.net subject="Host x.x.x.x Down after $countdown" body="Host x.x.x.x Down after $countdown"; } else={:log warning message="After ten seconds/checks host x.x.x.x is still online – probably everything is ok – sending e-mail/SMS."; :tool e-mail send to=luka@manojlovic.net subject="Host x.x.x.x Up" body="Host x.x.x.x Up! Up!";}
:tool netwatch enable numbers=0

 

S2D 2.0 (Windows server 2019) – Nested mirror and Nested mirror accelerated parity – how to expand a tier / volume?

As a big fan of S2D since Windows Server 2016, with more than 15 implementations of various systems and configurations, I wanted to help you out with resizing the new nested mirror and nested mirror-accelerated parity volumes (a feature supported only on 2-node S2D on Windows Server 2019).

First of all – to keep it short – Microsoft did a great job addressing the possibility of two simultaneous failures in two-node S2D scenarios. You have two resiliency mechanisms available: nested mirror, which gives you more performance but takes a lot of space (all the data is written 4 times – 2 times on one node and 2 times on the other), giving you ~25% usable space; and nested mirror-accelerated parity, where you combine nested mirror with parity so you can achieve around 40-45% usable space – but yes, the parity tier is more compute intensive, as redundancy must be calculated, so it reduces performance (tests so far do not show a dramatic impact).

You can read more about these two options, released with Windows Server 2019, in the official documentation on the Microsoft website.

I was playing a bit with the configuration of some volumes using these two new options, and I decided to create a nested mirror-accelerated parity volume, which can be done using PowerShell (for now that is the only option).

I created the tier templates as described in the MS documentation (mentioned earlier) and finished with a result that looks like this:
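For reference, a hedged sketch of the tier templates along the lines of the MS documentation (the pool name and media type are assumptions for my setup):

```powershell
# Nested two-way mirror tier: 4 data copies in total (2 per node)
New-StorageTier -StoragePoolFriendlyName "S2D*" -FriendlyName NestedMirror `
    -ResiliencySettingName Mirror -MediaType HDD -NumberOfDataCopies 4

# Nested parity tier: parity within each node, mirrored between nodes
New-StorageTier -StoragePoolFriendlyName "S2D*" -FriendlyName NestedParity `
    -ResiliencySettingName Parity -MediaType HDD -NumberOfDataCopies 2 `
    -PhysicalDiskRedundancy 1 -NumberOfGroups 1 `
    -FaultDomainAwareness StorageScaleUnit -ColumnIsolation PhysicalDisk
```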

Get-Volume -FriendlyName NestedMirrorAcceleratedParity | ft

ps2

Underneath you can see that there are two tiers that are the fundamental parts of this volume.

Get-StorageTier | ft FriendlyName,TierClass,ResiliencySettingName,FaultDomainRedundancy,Size,FootprintOnPool
ps1

So if you want to extend your volume you must first extend every tier (or only one of them) – in this case: NestedMirrorAcceleratedParity-NestedParity and/or NestedMirrorAcceleratedParity-NestedMirror.

You can do it with a cmdlet – for example, for the parity tier:

Resize-StorageTier -FriendlyName NestedMirrorAcceleratedParity-NestedParity -Size 6TB

*The system will not allow you to go over the total pool capacity – for example, my pool has 24 TB of space, and my nested mirror and parity resiliency have a pretty big footprint on the pool, so what I created was a volume made of 1 TB of nested mirror (for speed) and 7 TB of parity (for capacity) – S2D / ReFS will take care of dynamic hot/cold data placement.
s2dsize

As you can see, the sum of FootprintOnPool in TB is under my total capacity, and the system does not allow me to make the tiers any bigger. You can also see the storage efficiency that I get from the physical disks after using these two resiliency mechanisms.

tier1

After resizing one or both tiers, you can query your virtual disk for the supported size it can be extended to.

Get-VirtualDisk -FriendlyName NestedMirrorAcceleratedParity | Get-Disk | Get-Partition | Where Type -eq Basic | Get-PartitionSupportedSize
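Putting it together – a hedged sketch that resizes the partition straight to the maximum supported size:

```powershell
# Find the basic partition of the virtual disk and grow it to the reported maximum
$partition = Get-VirtualDisk -FriendlyName NestedMirrorAcceleratedParity |
    Get-Disk | Get-Partition | Where-Object Type -eq Basic
$maxSize = ($partition | Get-PartitionSupportedSize).SizeMax
$partition | Resize-Partition -Size $maxSize
```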

After receiving the max size you can expand your virtual disk's partition (in my case from 6 TB to 8 TB, of which 1 TB is nested mirror and 7 TB nested parity):

Get-VirtualDisk -FriendlyName NestedMirrorAcceleratedParity | Get-Disk | Get-Partition | ? Type -eq Basic | Resize-Partition -Size 8796076224512

As a result you can see the disk that was resized from 6 TB to 8 TB in Windows Admin Center – which I encourage you to try and start using if you are jumping into the Microsoft software-defined storage / networking journey!

admin center