Category Archives: Hyper-V

Demystifying SMB 3.x multichannel – part 9 – VM1 (Debian Linux 10) on Host1 to VM2 (Windows Server 2019) on Host2

I finally managed to make it work … 🙂 This time we are trying to establish multichannel between Debian Linux with Samba 4.9.5-Debian and Windows Server 2019 (which prefers the SMB 3.1.1 dialect). Each VM runs on a separate Hyper-V host and has 4 virtual network adapters connected. Because I am not running any DNS server in the test network, I entered the VM hostnames in the hosts files on both Windows and Linux.
So I added 4 entries on each machine. On Debian I created a simple smb.conf to make it work:

[global]
workgroup = WORKGROUP
interfaces = eth0, eth1, eth2, eth3
bind interfaces only = Yes
vfs objects = recycle aio_pthread
aio read size = 1
aio write size = 1
strict locking = No
use sendfile = no
server multi channel support = yes
server string = samba server
security = USER
encrypt passwords = yes
smb passwd file = /etc/samba/smbpasswd
guest ok = yes

[storage]
comment = Storage
path = /var/samba
writeable = yes
public = no

As you can see in the video, the Linux smbstatus command in the terminal gives me information similar to the Get-SmbMultichannelConnection PowerShell cmdlet on Windows. I can clearly see how the two servers are connected to each other over the SMB protocol.
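
On the Windows side the check looks roughly like this (a minimal sketch – run it on the machine that has the SMB session open, while a file copy is in progress):

# List open SMB sessions and the individual multichannel connections behind them
Get-SmbConnection
Get-SmbMultichannelConnection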

As you can see in the video, the Windows machine from which I am copying data to Linux utilizes all four network adapters, but we are getting only 2 gigabit of throughput. On the Linux side only two NICs are utilized. I was not able to make it work with all four adapters (as the machines did in the previous part, in the Windows VM to Windows VM scenario). Still, I just wanted to demonstrate that the concept also works in a mixed environment with Windows and Linux.

Demystifying SMB 3.x multichannel – part 8 – VM1 on Hyper-V host 1 to VM2 on Hyper-V host 2 – 4 NICs in each VM

We are upgrading the configuration from the previous part (7) by adding additional virtual network adapters to both VMs (so each will have 4).
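
If you prefer PowerShell over Hyper-V Manager for this step, adding an extra vNIC to a VM looks roughly like this (a sketch – the VM, switch and adapter names are examples, not the ones from the video):

# Add one more virtual network adapter to the VM and connect it to the external switch
Add-VMNetworkAdapter -VMName "VM1" -SwitchName "Team01" -Name "NIC2"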

*** When I “hot added” the network cards you can see that the throughput was bad (we probably should have waited a couple of seconds or minutes for reconfiguration after the new network adapters were added) – so at second 56 I pause the video for a reboot of the VMs and at second 58 I resume recording after the reboot. You can see that after the reboot everything works great and we are getting the maximum speed out of the 4 physical NICs in each of our Hyper-V hosts.

Demystifying SMB 3.x multichannel – part 7 – VM1 on Hyper-V host 1 to VM2 on Hyper-V host 2 – single NIC in each VM

From physical we are now moving to virtual – so I have created a small demo with two VMs on two separate Hyper-V hosts (each host connected to the same switch with 4 physical NICs). Each VM has only one virtual network adapter.

As we can see, we are getting 1 gigabit of throughput from the first to the second VM. We can also see the utilization of the physical NICs on our Hyper-V hosts (the transfer is using only one NIC).

Demystifying SMB 3.x multichannel – part 6 – Hyper-V server to Hyper-V server example with Switch embedded teaming (Windows server 2016/2019 only) in VMSwitch with multiple (4) adapters on host

Finally we are approaching the solution that gives us great bandwidth by utilizing all four network adapters – we are still using Switch embedded teaming to team the physical interfaces directly when creating the Hyper-V Virtual Switch, but this time with a slightly different command in PowerShell.

New-VMSwitch -Name Team01 -EnableEmbeddedTeaming $true -AllowManagementOS $false -NetAdapterName NIC1,NIC2,NIC3,NIC4

!Warning! When you execute this command you will lose connectivity, so I suggest preparing the following commands as well and executing them right after it. So, after creating a Virtual Switch consisting of our four physical NICs combined with the embedded teaming feature, we are ready to give our Hyper-V host its management network cards.

Add-VMNetworkAdapter -ManagementOS -SwitchName Team01 -Name MGMT01
Add-VMNetworkAdapter -ManagementOS -SwitchName Team01 -Name MGMT02
Add-VMNetworkAdapter -ManagementOS -SwitchName Team01 -Name MGMT03
Add-VMNetworkAdapter -ManagementOS -SwitchName Team01 -Name MGMT04
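
A quick sanity check after the commands above (a sketch, using the names from this post) to confirm the switch exists and the four management vNICs are attached to it:

# Show the SET switch and the management-OS virtual adapters connected to it
Get-VMSwitch -Name Team01
Get-VMNetworkAdapter -ManagementOS | Select-Object Name, SwitchName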

Finally we are ready to test copying of files between our two Hyper-V hosts.

As you can see, with teaming configured through the new Switch embedded teaming functionality in the Hyper-V Virtual Switch, and with four adapters created for the management OS (the host), we are getting the same results as in part 1 of this series – when we were using just our 4 physical NICs without any additional configuration.

Demystifying SMB 3.x multichannel – part 5 – Hyper-V server to Hyper-V server example with Switch embedded teaming (Windows server 2016/2019 only) in VMSwitch with single virtual network card

In this article we are covering a new concept of teaming interfaces called Switch embedded teaming (or SET). It is used when you have the Hyper-V role installed, as it is only available in conjunction with the Virtual Switch – so if you are using a physical server for some other roles you should still stick to the “classical” NIC teaming (NetLbfo) that has been available since Windows Server 2012.

Since SET is available I am using it – and I have also reconfigured some “old fashioned” configurations.

Quoting original documentation:

SET is an alternative NIC Teaming solution that you can use in environments that include Hyper-V and the Software Defined Networking (SDN) stack in Windows Server 2016/2019. SET integrates some NIC Teaming functionality into the Hyper-V Virtual Switch.

A Virtual Switch with Switch embedded teaming enabled uses Switch independent mode and Dynamic load distribution by default – you can change that in PowerShell.
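
For example, switching the load distribution to Hyper-V port mode would look roughly like this (a sketch – it assumes a SET switch named Team01, like the one created later in this post):

# Change the load-balancing algorithm of an existing SET switch from Dynamic to HyperVPort
Set-VMSwitchTeam -Name Team01 -LoadBalancingAlgorithm HyperVPort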

The next very important thing is that SET preserves RDMA functionality, so you can use the two together.

There is another great piece of documentation about the “classic” teaming solution in Windows and Switch embedded teaming located here. I have copied the comparison table for a quick look at the features.

[Comparison table: LBFO vs SET]
* from documentation: https://gallery.technet.microsoft.com/Windows-Server-2016-839cb607

So in the next video we are using the New-VMSwitch cmdlet to create a Virtual Switch with the embedded teaming parameter:

New-VMSwitch -Name Team01 -EnableEmbeddedTeaming $true -AllowManagementOS $true -NetAdapterName NIC1,NIC2,NIC3,NIC4

This gives us a configuration similar to part 3 – only one virtual network card for our Hyper-V host.

And just to make sure … let’s check the default load balancing and teaming mode set by the cmdlet we just fired, using the Get-VMSwitchTeam cmdlet:

[Screenshot: Get-VMSwitchTeam output]

As you can see, we are also getting the same result as in part 3 – only 1 gigabit of throughput between Hyper-V server 1 and Hyper-V server 2.

Demystifying SMB 3.x multichannel – part 4 – Hyper-V server to Hyper-V server example with windows teaming tool (server manager / powershell) and VMSwitch with multiple virtual network cards

We are pushing it forward – in the previous example (part 3) we made the virtual switch by simply using Hyper-V Manager (or PowerShell) with no extra configuration – the result was that when copying from server to server we got only 1 gigabit of throughput.

Now we are trying to upgrade the scenario by using PowerShell (you can only do this with PowerShell or with System Center Virtual Machine Manager, which under the hood also uses PowerShell :)) – we are going to create a Virtual Switch and then assign more than just one virtual network card to the host operating system (our Hyper-V host):

So by doing:

New-VMSwitch -Name Team01 -AllowManagementOS $false -NetAdapterName Team01

We simply create a virtual switch that does not have the checkbox mentioned in the previous part (“Allow management operating system to share this network adapter”) checked, so no virtual network card is created. !Warning! If you run only this cmdlet you will cut yourself off from your Hyper-V host – so it is better to prepare the second part as well and run it all together. We will continue by using the Add-VMNetworkAdapter cmdlet:

Add-VMNetworkAdapter -ManagementOS -SwitchName Team01 -Name MGMT01
Add-VMNetworkAdapter -ManagementOS -SwitchName Team01 -Name MGMT02
Add-VMNetworkAdapter -ManagementOS -SwitchName Team01 -Name MGMT03
Add-VMNetworkAdapter -ManagementOS -SwitchName Team01 -Name MGMT04

These cmdlets will create 4 virtual adapters for your Hyper-V host to use (yes, you can also use VLANs with these network adapters).
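
If you want to put one of these management adapters into a VLAN, it would look roughly like this (a sketch – the adapter name comes from the commands above, VLAN ID 10 is just an example):

# Tag the MGMT01 management-OS adapter with VLAN 10 (access mode)
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName MGMT01 -Access -VlanId 10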

As can be seen in the video, we are getting better results than with a single virtual network adapter, but still no more than 2 gigabit of bandwidth – and it is not stable.

Demystifying SMB 3.x multichannel – part 3 – Hyper-V server to Hyper-V server example with windows teaming tool (server manager / powershell) and VMSwitch on top

As I told you at the beginning of this series, I am a big fan of Hyper-V – I have been implementing it since 2008 (when nobody believed it would ever become a serious virtualization platform :)). In Windows server 2012 / 2012 R2 the most common way of setting up your Hyper-V networking was to team your NICs with the Windows-provided tool and then create a VMSwitch on top of it – using Hyper-V Manager or PowerShell and checking the checkbox “Allow management operating system to share this network adapter”. After this process you ended up with a new virtual NIC called, for example, vEthernet (Team01).

Like in the previous scenario (part 2), we have 1 gigabit of speed when copying files from server to server. And yes, if there was a third server we would probably start using the next NIC, so we would have 2 gigabit of traffic from server 1 – 1 gigabit to server 2 and 1 gigabit to server 3 – but still just a gigabit to each of them.

In this video you can see that we are upgrading the previous scenario (teamed NICs) by enabling a Hyper-V Virtual Switch (External type) using Hyper-V Manager – you could also do that with PowerShell following the documentation; a rough sketch is shown below.
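
Something along these lines (a sketch only – the NIC, team and switch names are examples, not taken from the video):

# Team the four physical NICs with the "classic" LBFO teaming, then build an external VMSwitch on top of the team
New-NetLbfoTeam -Name Team01 -TeamMembers NIC1,NIC2,NIC3,NIC4 -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
New-VMSwitch -Name VSwitch01 -NetAdapterName Team01 -AllowManagementOS $true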

Reconfigure Hyper-V replica replication interval

I like the Hyper-V replica feature – but sometimes, when you configure it quickly, you might fail to set the right replication interval (5 minutes by default). PowerShell makes it possible to change the interval, so if, for example, you have configured your replication to happen every 5 minutes and you want to change that to 30 seconds, you can do it with this cmdlet (this one will change all current replicas to 30 seconds – you can also do it for an individual replication):

Get-VMReplication | Set-VMReplication -ReplicationFrequencySec 30
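
And for a single VM only, it would look roughly like this (a sketch – “vm01” is an example name; valid values are 30, 300 and 900 seconds):

# Change the replication interval of one VM's replica to 30 seconds
Set-VMReplication -VMName "vm01" -ReplicationFrequencySec 30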


Add-VMNetworkAdapterExtendedAcl – allow only specific traffic to a VM and allow all outgoing traffic from a VM on Windows server 2016 – Hyper-V

Block all traffic to a VM:
Add-VMNetworkAdapterExtendedAcl -VMName "vm01" -Action "Deny" -Direction "Inbound" -Weight 10
Allow (for example) TCP 80 (HTTP) and TCP 443 (HTTPS) to a VM:
Add-VMNetworkAdapterExtendedAcl -VMName "vm01" -Action "Allow" -Direction "Inbound" -LocalPort 80 -Protocol "TCP" -Weight 11
Add-VMNetworkAdapterExtendedAcl -VMName "vm01" -Action "Allow" -Direction "Inbound" -LocalPort 443 -Protocol "TCP" -Weight 12
 
Allow any TCP and UDP from VM to ANY port and ANY address:
Add-VMNetworkAdapterExtendedAcl -VMName "vm01" -Action Allow -Direction Outbound -RemotePort Any -Protocol tcp -Weight 100 -IdleSessionTimeout 3600 -Stateful $True
Add-VMNetworkAdapterExtendedAcl -VMName "vm01" -Action Allow -Direction Outbound -RemotePort Any -Protocol udp -Weight 101 -IdleSessionTimeout 3600 -Stateful $True
 
Want to start over? Remove all ACLs:
Get-VMNetworkAdapterExtendedAcl -VMName "vm01" | Remove-VMNetworkAdapterExtendedAcl

How to monitor storage replication after a Storage spaces direct node reboot (e.g. after updates)

Hi!

I have a two-node Storage spaces direct scenario, and after updating and rebooting one of the nodes in the cluster I need to wait for the storage operations to complete (yes, I am updating this scenario manually :)).

If you want to check the progress of this synchronization / repair of Storage spaces, just drop this into PowerShell on one of the nodes:

Get-StorageJob | Select Name,IsBackgroundTask,ElapsedTime,JobState,PercentComplete,@{label="BytesProcessed (GB)";expression={$_.BytesProcessed/1GB}},@{label="Total Size (GB)";expression={$_.BytesTotal/1GB}} | ft

You should get something like this…

Remember – if you have Storage spaces direct in a two-node scenario you SHOULD WAIT for this job to complete – if you reboot the second node too soon your CSV will go offline! So keep calm and PowerShell! 🙂
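
If you want to script the waiting instead of re-running the command by hand, a minimal sketch could look like this (run it on one of the S2D nodes; the 60-second interval is just an example):

# Poll the storage jobs and wait until none of them is still running
while (Get-StorageJob | Where-Object { $_.JobState -ne 'Completed' }) {
    Get-StorageJob | Format-Table Name, JobState, PercentComplete
    Start-Sleep -Seconds 60
}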