Creating a NIC team

There are two ways to invoke the New Team dialog box:

  • Select the Tasks menu in the Teams tile and then select New Team, or
  • Right-click on an available adapter in the Network Adapters tab and select the Add to new team item. Multi-select works here: you can select multiple adapters, right-click on one, select Add to new team, and they will all be pre-marked in the New Team dialog box.

Both of these will cause the New Team dialog box to pop up.

[Figure: New Team dialog box]

When the New Team dialog box pops up, there are two actions that MUST be taken before the team can be created:

  • A Team name must be provided, and
  • One or more adapters must be selected to be members of the team

Optionally, the administrator may select the Additional properties item and configure the teaming mode, load distribution mode, and the name of the first (primary) team interface.

[Figure: New Team dialog box with Additional properties expanded]

In Additional properties the Load distribution mode drop-down provides only two options: Address Hash and Hyper-V Port. The Address Hash option in the UI is equivalent to the TransportPorts option in Windows PowerShell. To select other address-hashing algorithms, use Windows PowerShell, as in the sketch below.
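A rough sketch of selecting the other address-hashing algorithms with Set-NetLbfoTeam (the team name matches the examples below; the values shown are the standard Windows Server 2012 options):

# Hash on source/destination IP addresses only
Set-NetLbfoTeam -Name Team1 -LoadBalancingAlgorithm IPAddresses

# Hash on MAC addresses only
Set-NetLbfoTeam -Name Team1 -LoadBalancingAlgorithm MacAddresses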

This is also where administrators who want a Standby adapter can set one. Selecting the Standby adapter drop-down lists the team members, and the administrator can designate one of them as the Standby adapter. A Standby adapter is not used by the team unless and until another member of the team fails. Standby adapters are only permitted in Switch Independent mode; changing the team to any Switch Dependent mode will cause all members to become active members.
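A standby adapter can also be designated from Windows PowerShell; a minimal sketch, assuming a team member named NIC2:

# Mark NIC2 as the standby member of its team
Set-NetLbfoTeamMember -Name NIC2 -AdministrativeMode Standby

# Return NIC2 to active duty later
Set-NetLbfoTeamMember -Name NIC2 -AdministrativeMode Active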

When the team name, the team members, and optionally any additional properties (including the primary team interface name or standby adapter) have been set to the administrator’s choices, clicking the OK button creates the team. Team creation may take several seconds, and the NICs that are becoming team members will lose communication for a very short time.

Teams can also be created through Windows PowerShell. The equivalent of what these figures have shown is:

New-NetLbfoTeam Team1 NIC1,NIC2

Teams can also be created with custom advanced properties, for example:

New-NetLbfoTeam Team1 NIC1,NIC2 -TeamingMode LACP -LoadBalancingAlgorithm HyperVPort

If the team is being created in a VM, you MUST follow the instructions to allow guest teaming as described in a previous post (NIC teaming on Virtual Machines).

Checking the status of a team

Whenever the NIC Teaming UI is active, the current status of all NICs in the team, the status of the team, and the status of the server are shown. In the picture below, in the Network Adapters tab of the Adapters and Interfaces tile, NIC 3 shows as faulted. The reason given is Media Disconnected (i.e., the cable is unplugged). This causes the team, Team1, to show a Warning, as it is still operational but degraded. If all the NICs in the team were faulted it would show Fault instead of Warning. The server, DONST-R710, now shows Warning. If the team were not operational, the server indication would be Fault. This makes it easy to scan the list of servers to see if there are any problems.

[Figure: NIC Teaming UI showing a faulted team member and a degraded team]
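The same status information is available from Windows PowerShell; a quick sketch, assuming the team from the earlier example:

# Overall team status
Get-NetLbfoTeam -Name Team1

# Per-member status, which would show the faulted NIC in the scenario above
Get-NetLbfoTeamMember -Team Team1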

     

    Cheers,


    Marcos Nogueira
    http://blog.marcosnogueira.org
    Twitter: @mdnoga

    Update on System Center 2012 SP1 Update Rollup 2 (UR2) for Virtual Machine Manager

    Yesterday Microsoft added Update Rollup 2 for System Center 2012 SP1 Virtual Machine Manager to the System Center 2012 SP1 Update Rollup 2 release.

    The master KB article for System Center 2012 SP1 Update Rollup 2 has been updated to reflect the summary of issues and installation instructions for Update Rollup 2 for System Center 2012 SP1 – VMM. You can refer to it here.

    Important actions for Update Rollup 2 for System Center 2012 SP1 – Virtual Machine Manager

    In order to install the Update Rollup 2 package for System Center 2012 SP1 – Virtual Machine Manager, you will need to uninstall the Update Rollup 1 for System Center 2012 SP1 – Virtual Machine Manager package from your system.

    – If you downloaded the Update Rollup 2 package for System Center 2012 SP1 Virtual Machine Manager from the Microsoft Update Catalog and installed Update Rollup 2 without uninstalling Update Rollup 1, you should uninstall the Update Rollup 2 package for Virtual Machine Manager and then uninstall Update Rollup 1 for System Center 2012 SP1 – Virtual Machine Manager via Control Panel.

    – If you are using WSUS to update System Center 2012 SP1 – Virtual Machine Manager and you have already installed Update Rollup 1 for System Center 2012 SP1 – Virtual Machine Manager, you will not receive the Update Rollup 2 notification until Update Rollup 1 is uninstalled.

    Why is this necessary?

    When Update Rollup 2 is applied to a system running System Center 2012 SP1 Virtual Machine Manager with UR1, the installer does not patch files correctly. This is caused by the way UR1 was packaged. The product fixes in UR1 are correct; it is the packaging of UR1 that causes this issue. If you do not need UR2, you can continue to operate with UR1. However, if you choose to stay on Update Rollup 1 for System Center 2012 SP1 Virtual Machine Manager and a later Update Rollup is released that you need to implement, you will still need to remove Update Rollup 1 first.

    Cheers,


    Marcos Nogueira
    http://blog.marcosnogueira.org
    Twitter: @mdnoga

    The components of the NIC Teaming Management UI

    The NIC Teaming management UI consists of 3 primary windows (tiles):

    • The Servers tile
    • The Teams tile
    • The Adapters and Interfaces tile

    [Figure: The three tiles of the NIC Teaming management UI]

    The Adapters and Interfaces tile is shared by two tabs:

    • The Network Adapters tab
    • The Team Interfaces tab

    Each tile or tab has a set of columns that can be shown or hidden. The column chooser menus are made visible by right-clicking on any column header. (For illustrative purposes the screen shot in the picture below shows a column chooser in every tile. Only one column chooser can be active at a time.)

    Contents of any tile may be sorted by any column. To sort by a particular column, left-click on the column title. In the picture below the Servers tile is sorted by Server Name; the indication is the little triangle in the Name column title in the Servers tile.

    [Figure: Servers tile sorted by name, with column chooser menus shown]

    Each tile also has a Tasks dropdown menu and a right-click context menu. The Tasks menu can be opened by clicking on the Tasks box at the top right corner of the tile, and then any available task in the list can be selected. The right-click context menus are activated by right-clicking in the tile. The menu options will vary based on context. (For illustrative purposes the screen shot in the picture below shows all the Tasks menus and a right-click menu in every tile. Only one right-click menu or Tasks menu can be active at any time.)

    [Figure: Tasks menus and right-click context menus in each tile]

    The picture below shows the Tasks menu and right-click menu for the Team Interfaces tab.

    [Figure: Tasks menu and right-click menu for the Team Interfaces tab]

     

    Cheers,


    Marcos Nogueira
    http://blog.marcosnogueira.org
    Twitter: @mdnoga

    Windows Server 2012 NIC Teaming tools for troubleshooting

    NIC Teaming and the administration tools in Windows Server 2012 are powerful tools that can be misused or misconfigured, and may cause loss of connectivity if the administrator isn’t careful. Here are some common issues:

    Using VLANs

    VLANs are another powerful tool. There are a few rules for using VLANs that will help to make the combination of VLANs and NIC Teaming a very positive experience.

    1) Anytime you have NIC Teaming enabled, the physical switch ports the host is connected to should be set to trunk (promiscuous) mode. The physical switch should pass all traffic to the host for filtering.

    2) Anytime you have NIC Teaming enabled, you must not set VLAN filters on the NICs using the NICs advanced properties settings. Let the teaming software or the Hyper-V switch (if present) do the filtering.
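    As a hedged sketch (the adapter names are placeholders and the advanced-property display name varies by driver), you can check for and clear per-NIC VLAN filters from Windows PowerShell:

    # Show any driver-level VLAN ID filter set on the physical team members
    Get-NetAdapterAdvancedProperty -Name "NIC1","NIC2" -DisplayName "VLAN ID" -ErrorAction SilentlyContinue

    # Clear the filter (0 = no VLAN filtering on the NIC itself)
    Set-NetAdapter -Name "NIC1" -VlanID 0
    Set-NetAdapter -Name "NIC2" -VlanID 0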

    VLANs in a Hyper-V host

    1) In a Hyper-V host, VLANs should be configured only in the Hyper-V switch, not in the NIC Teaming software. Configuring team interfaces with VLANs can easily lead to VMs that are unable to communicate on the network due to conflicts with VLANs assigned in the Hyper-V switch.
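    For example, a minimal sketch (the VM name and VLAN ID are placeholders) of assigning the VLAN on the Hyper-V switch port rather than on a team interface:

    # Put the VM's switch port in access mode on VLAN 12; the team interface stays untagged
    Set-VMNetworkAdapterVlan -VMName "VM01" -Access -VlanId 12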

    VLANs in a Hyper-V VM

    1) The preferred method of supporting multiple VLANs in a VM is to provide the VM multiple ports on the Hyper-V switch and associate each port with a VLAN. Never team these ports in the VM as it will certainly cause communication problems.

    2) If the VM has multiple SR-IOV VFs make sure they are on the same VLAN before teaming them in the VM. It’s easily possible to configure the different VFs to be on different VLANs and, like in the previous case, it will certainly cause communication problems.

    3) The only safe way to use VLANs with NIC Teaming in a guest is to team Hyper-V ports that are

    a. Each connected to a different Hyper-V switch, and

    b. Each configured to be associated with the same VLAN (or all associated with untagged traffic only).

    c. If you must have more than one VLAN exposed into a guest OS, consider renaming the ports in the guest to indicate which VLAN each one carries. E.g., if the first port is associated with VLAN 12 and the second port is associated with VLAN 48, rename one interface to vEthernetVLAN12 and the other to vEthernetVLAN48. (Renaming interfaces is easy using the Windows PowerShell Rename-NetAdapter cmdlet, as in the sketch below, or by going to the Network Connections panel in the guest and renaming the interfaces.)
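    A small sketch of the renaming step, run inside the guest (the interface names are the hypothetical ones from the example above):

    # Rename the guest interfaces so the VLAN each one carries is obvious
    Rename-NetAdapter -Name "vEthernet" -NewName "vEthernetVLAN12"
    Rename-NetAdapter -Name "vEthernet 2" -NewName "vEthernetVLAN48"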

    Interactions with other teaming solutions

    Some users will want to use other NIC teaming solutions for a variety of reasons. This can be done but there are some risks that the system administrator should be aware of.

    1. If the system administrator attempts to put a NIC into a 3rd party team that is presently part of a Microsoft NIC Teaming team, the system will become unstable and communications may be lost completely.

    2. If the system administrator attempts to put a NIC into a Microsoft NIC Teaming team that is presently part of a 3rd party teaming solution team the system will become unstable and communications may be lost completely.

    As a result it is STRONGLY RECOMMENDED that no system administrator ever run two teaming solutions at the same time on the same server. The teaming solutions are unaware of each other’s existence resulting in potentially serious problems.

    In the event that an administrator violates these guidelines and gets into the situation described above the following steps may solve the problem.

    1. Reboot the server. Forcibly power-off the server if necessary to get it to reboot.

    2. When the server has rebooted run this Windows PowerShell cmdlet:

    Get-NetLbfoTeam | Remove-NetLbfoTeam

    3. Use the 3rd party teaming solution’s administration tools and remove all instances of the 3rd party teams.

    4. Reboot the server again.

    Microsoft continues its longstanding policy of not supporting 3rd party teaming solutions. If a user chooses to run a 3rd party teaming solution and then encounters networking problems, the customer should call their teaming solution provider for support. If the issue is reproducible without the 3rd party teaming solution in place, please report the problem to Microsoft.

    Disabling and Enabling with Windows PowerShell

    The most common reason for a team to not be passing traffic is that the team interface is disabled. We’ve seen a number of cases where attempts to use the power of Windows PowerShell have resulted in unintended consequences. For example, the sequence:

    Disable-NetAdapter *

    Enable-NetAdapter *

    does not enable all the network adapters that it disabled. This is because disabling all the underlying physical member NICs causes the team interface to be removed and no longer show up in Get-NetAdapter. Thus Enable-NetAdapter * will not enable the team NIC, since that adapter has been removed. It will, however, enable the member NICs, which will then cause the team interface to show up again. The team interface will still be in a “disabled” state since you have not enabled it. Enabling the team interface will cause traffic to begin to flow again.
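    A minimal recovery sketch, assuming a team named Team1 with members NIC1 and NIC2:

    # Re-enable the physical members first; the team interface reappears but remains disabled
    Enable-NetAdapter -Name "NIC1","NIC2"

    # Then enable the team interface itself so traffic begins to flow again
    Enable-NetAdapter -Name "Team1"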

    Cheers,


    Marcos Nogueira
    http://blog.marcosnogueira.org
    Twitter: @mdnoga

    NIC teaming on Virtual Machines

    NIC Teaming in a VM only applies to VM-NICs connected to external switches. VM-NICs connected to internal or private switches will show as disconnected when they are in a team.

    [Figure: NIC Teaming in a VM with VM-NICs connected to external Hyper-V switches]

    NIC teaming in Windows Server 2012 may also be deployed in a VM. This allows a VM to have virtual NICs (synthetic NICs) connected to more than one Hyper-V switch and still maintain connectivity even if the physical NIC under one switch gets disconnected. This is particularly important when working with Single Root I/O Virtualization (SR-IOV) because SR-IOV traffic doesn’t go through the Hyper-V switch and thus cannot be protected by a team in or under the Hyper-V host. With the VM-teaming option an administrator can set up two Hyper-V switches, each connected to its own SR-IOV-capable NIC.

    · Each VM can have a virtual function (VF) from one or both SR-IOV NICs and, in the event of a NIC disconnect, fail-over from the primary VF to the back-up adapter (VF).

    · Alternately, the VM may have a VF from one NIC and a non-VF VM-NIC connected to another switch. If the NIC associated with the VF gets disconnected, the traffic can fail-over to the other switch without loss of connectivity.

    [Figure: VM teaming across two Hyper-V switches backed by SR-IOV-capable NICs]

    Note: Because fail-over between NICs in a VM might result in traffic being sent with the MAC address of the other VM-NIC, each Hyper-V switch port associated with a VM that is using NIC Teaming must be set to allow teaming. There are two ways to enable NIC Teaming in the VM:

    1) In the Hyper-V Manager, in the settings for the VM, select the VM’s NIC and the Advanced Settings item, then enable the checkbox for NIC Teaming in the VM.

    [Figure: Enabling NIC Teaming in the VM NIC’s advanced settings in Hyper-V Manager]

    2) Run the following Windows PowerShell cmdlet in the host with elevated (Administrator) privileges.

    Set-VMNetworkAdapter -VMName <VMname> -AllowTeaming On

    Teams created in a VM can only run in the Switch Independent configuration with Address Hash distribution mode (or one of the specific address-hashing modes). Only teams where each of the team members is connected to a different external Hyper-V switch are supported.
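    Inside the guest, team creation looks the same as on the host; a sketch, assuming the two synthetic NICs are named NIC1 and NIC2:

    # Guest teams must be Switch Independent with an address-hash load distribution
    New-NetLbfoTeam -Name "GuestTeam" -TeamMembers "NIC1","NIC2" `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts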

    Teaming in the VM does not affect Live Migration. The same rules exist for Live Migration whether or not NIC teaming is present in the VM.

    No teaming of Hyper-V ports in the Host Partition

    Hyper-V virtual NICs exposed in the host partition (vNICs) must not be placed in a team. Teaming of virtual NICs (vNICs) inside the host partition is not supported in any configuration or combination. Attempts to team vNICs may result in a complete loss of communication if network failures occur.

    Feature compatibilities

    NIC teaming is compatible with all networking capabilities in Windows Server 2012 with five exceptions: SR-IOV, RDMA, Native host Quality of Service, TCP Chimney, and 802.1X Authentication.

    · For SR-IOV and RDMA, data is delivered directly to the NIC without passing it through the networking stack (in the host OS in the case of virtualization). Therefore, it is not possible for the team to look at or redirect the data to another path in the team.

    · When QoS policies are set on a native or host system and those policies invoke minimum bandwidth limitations, the overall throughput through a NIC team will be less than it would be without the bandwidth policies in place.

    · TCP Chimney is not supported with NIC teaming in Windows Server 2012 since TCP Chimney has the entire networking stack offloaded to the NIC.

    · 802.1X Authentication should not be used with NIC Teaming and some switches will not permit configuration of both 802.1X Authentication and NIC Teaming on the same port.

    The following summarizes compatibility by feature:

    • Datacenter bridging (DCB): Works independent of NIC Teaming, so it is supported if the team members support it.

    • IPsec Task Offload (IPsecTO): Supported if all team members support it.

    • Large Send Offload (LSO): Supported if all team members support it.

    • Receive side coalescing (RSC): Supported in hosts if any of the team members support it. Not supported through Hyper-V switches.

    • Receive side scaling (RSS): NIC teaming supports RSS in the host. The Windows Server 2012 TCP/IP stack programs the RSS information directly to the team members.

    • Receive-side checksum offloads (IPv4, IPv6, TCP): Supported if any of the team members support it.

    • Remote Direct Memory Access (RDMA): Since RDMA data bypasses the Windows Server 2012 protocol stack, team members will not also support RDMA.

    • Single root I/O virtualization (SR-IOV): Since SR-IOV data bypasses the host OS stack, NICs exposing the SR-IOV feature will no longer expose the feature while a member of a team. Teams can be created in VMs to team SR-IOV virtual functions (VFs).

    • TCP Chimney Offload: Not supported through a Windows Server 2012 team.

    • Transmit-side checksum offloads (IPv4, IPv6, TCP): Supported if all team members support it.

    • Virtual Machine Queues (VMQ): Supported when teaming is installed under the Hyper-V switch.

    • QoS in host/native OSs: Use of minimum bandwidth policies will degrade throughput through a team.

    • Virtual Machine QoS (VM-QoS): VM-QoS is affected by the load distribution algorithm used by NIC Teaming. For best results use Hyper-V Port load distribution mode.

    • 802.1X authentication: Not compatible with many switches. Should not be used with NIC Teaming.

    NIC Teaming and Virtual Machine Queues (VMQs)

    VMQ and NIC Teaming work well together; VMQ should be enabled anytime Hyper-V is enabled. Depending on the switch configuration mode and the load distribution algorithm, NIC teaming will either present VMQ capabilities to the Hyper-V switch that show the number of queues available to be the smallest number of queues supported by any adapter in the team (Min-queues mode) or the total number of queues available across all team members (Sum-of-Queues mode). Specifically,

    · If the team is in Switch-Independent teaming mode and the Load Distribution is set to Hyper-V Port mode, then the number of queues reported is the sum of all the queues available from the team members (Sum-of-Queues mode);

    · Otherwise the number of queues reported is the smallest number of queues supported by any member of the team (Min-Queues mode).

    Here’s why.

    · When the team is in switch independent/Hyper-V Port mode the inbound traffic for a VM will always arrive on the same team member. The host can predict which member will receive the traffic for a particular VM so NIC Teaming can be more thoughtful about which VMQ Queues to allocate on a particular team member. NIC Teaming, working with the Hyper-V switch, will set the VMQ for a VM on exactly one team member and know that inbound traffic will hit that queue.

    · When the team is in any switch dependent mode (static teaming or LACP teaming), the switch that the team is connected to controls the inbound traffic distribution. The host’s NIC Teaming software can’t predict which team member will get the inbound traffic for a VM and it may be that the switch distributes the traffic for a VM across all team members. As a result the NIC Teaming software, working with the Hyper-V switch, programs a queue for the VM on every team member, not just one team member.

    · When the team is in switch-independent mode and is using an address hash load distribution algorithm, the inbound traffic will always come in on one NIC (the primary team member) – all of it on just one team member. Since other team members aren’t dealing with inbound traffic they get programmed with the same queues as the primary member so that if the primary member fails any other team member can be used to pick up the inbound traffic and the queues are already in place.
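    To see how queues end up programmed on each member, Get-NetAdapterVmq can be used; a quick sketch with placeholder NIC names:

    # Shows per-NIC VMQ state, including the base processor and number of receive queues
    Get-NetAdapterVmq -Name "NIC1","NIC2"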

    There are a few settings that will help the system perform even better.

    Each NIC has, in its advanced properties, values for *RssBaseProcNumber and *MaxRssProcessors.

    · Ideally each NIC should have the *RssBaseProcNumber set to an even number greater than or equal to two (2). This is because the first physical processor, Core 0 (logical processors 0 and 1), typically does most of the system processing so the network processing should be steered away from this physical processor. (Some machine architectures don’t have two logical processors per physical processor so for such machines the base processor should be greater than or equal to 1. If in doubt assume your host is using a 2 logical processor per physical processor architecture.)

    · If the team is in Sum-of-Queues mode the team members’ processors should be, to the extent possible, non-overlapping. For example, in a 4-core host (8 logical processors) with a team of two 10 Gbps NICs, you could set the first one to use a base processor of 2 and to use 4 cores; the second would be set to use base processor 6 and use 2 cores (see the sketch after this list).

    · If the team is in Min-Queues mode the processor sets used by the team members must be identical.
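    A hedged sketch of the Sum-of-Queues example above, using Set-NetAdapterRss instead of editing the NICs’ advanced properties directly (NIC names and processor numbers are illustrative):

    # First member: start at logical processor 2 and use 4 RSS processors
    Set-NetAdapterRss -Name "NIC1" -BaseProcessorNumber 2 -MaxProcessors 4

    # Second member: start at logical processor 6 and use 2 RSS processors (non-overlapping set)
    Set-NetAdapterRss -Name "NIC2" -BaseProcessorNumber 6 -MaxProcessors 2

    # In Min-Queues mode, give every member the identical processor set instead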

    Cheers,


    Marcos Nogueira
    http://blog.marcosnogueira.org
    Twitter: @mdnoga