Overview of NIC Teaming (LBFO) in Windows Server 2012

NIC teaming, also known as Load Balancing/Failover (LBFO), allows multiple network adapters to be placed into a team for the purposes of

· bandwidth aggregation, and/or

· traffic failover to maintain connectivity in the event of a network component failure.

This feature has long been available from NIC vendors but until now NIC teaming has not been included with Windows Server.

The following sections address:

· NIC teaming architecture

· Bandwidth aggregation (also known as load balancing) mechanisms

· Failover algorithms

· NIC feature support – stateless task offloads and more complex NIC functionality

· A detailed walkthrough of how to use the NIC Teaming management tools

NIC teaming is available in all editions of Windows Server 2012, in both Server Core and Full Server installations. NIC teaming is not available in Windows 8; however, the NIC Teaming user interface and the NIC Teaming Windows PowerShell cmdlets can both run on Windows 8, so a Windows 8 PC can be used to manage teaming on one or more Windows Server 2012 hosts.

Existing architectures for NIC teaming

Today virtually all NIC teaming solutions on the market have an architecture similar to that shown in Figure 1.


Figure 1 – Standard NIC teaming solution architecture and Microsoft vocabulary

One or more physical NICs are connected into the NIC teaming solution common core, which then presents one or more virtual adapters (team NICs [tNICs] or team interfaces) to the operating system. There are a variety of algorithms that distribute outbound traffic between the NICs.

The only reason to create multiple team interfaces is to logically divide inbound traffic by virtual LAN (VLAN). This allows a host to be connected to different VLANs at the same time. When a team is connected to a Hyper-V switch, all VLAN segregation should be done in the Hyper-V switch rather than in the NIC Teaming software.

Configurations for NIC Teaming

There are two basic configurations for NIC Teaming.

Switch-independent teaming. This configuration does not require the switch to participate in the teaming. Because in switch-independent mode the switch does not know that the network adapters are part of a team in the host, the adapters may be connected to different switches. Switch-independent modes of operation do not require that the team members connect to different switches; they merely make it possible.

    • Active/Standby teaming: Some administrators prefer not to take advantage of the bandwidth aggregation capabilities of NIC Teaming. These administrators use one NIC for traffic (active) and hold one NIC in reserve (standby) to come into action if the active NIC fails. To use this mode, set the team to switch-independent teaming. Active/Standby is not required for fault tolerance; fault tolerance is always present whenever there are at least two network adapters in a team.

Switch-dependent teaming. This configuration requires the switch to participate in the teaming. Switch-dependent teaming requires all members of the team to be connected to the same physical switch.

There are two modes of operation for switch-dependent teaming:

Generic or static teaming (IEEE 802.3ad). This mode requires configuration on both the switch and the host to identify which links form the team. Since this is a statically configured solution, there is no additional protocol to assist the switch and the host in identifying incorrectly plugged cables or other errors that could cause the team to fail to perform. This mode is typically supported by server-class switches.

Dynamic teaming (IEEE 802.1AX, LACP). This mode is also commonly referred to as IEEE 802.3ad because it was developed in the IEEE 802.3ad committee before being published as IEEE 802.1AX. IEEE 802.1AX works by using the Link Aggregation Control Protocol (LACP) to dynamically identify links that are connected between the host and a given switch. This enables the automatic creation of a team and, in theory but rarely in practice, the expansion and reduction of a team simply by the transmission or receipt of LACP packets from the peer entity. Typical server-class switches support IEEE 802.1AX, but most require the network operator to administratively enable LACP on the port.

Both of these modes allow both inbound and outbound traffic to approach the practical limits of the aggregated bandwidth because the pool of team members is seen as a single pipe.
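If you want to experiment with these configurations from Windows PowerShell, the following is a minimal sketch using the NIC Teaming (NetLbfo) cmdlets mentioned earlier. The team and adapter names are placeholders, not values taken from any particular environment.

    # Sketch: a switch-independent team with one member held in standby (Active/Standby).
    New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent
    Set-NetLbfoTeamMember -Name "NIC2" -Team "Team1" -AdministrativeMode Standby

    # Sketch: a switch-dependent (LACP) team; the switch ports must have LACP
    # administratively enabled by the network operator.
    New-NetLbfoTeam -Name "Team2" -TeamMembers "NIC3","NIC4" -TeamingMode Lacp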

Algorithms for traffic distribution

Outbound traffic can be distributed among the available links in many ways. One rule that guides any distribution algorithm is to try to keep all packets associated with a single flow (TCP-stream) on a single network adapter. This rule minimizes performance degradation caused by reassembling out-of-order TCP segments.

NIC teaming in Windows Server 2012 supports the following traffic distribution algorithms:

Hyper-V switch port. Since VMs have independent MAC addresses, the VM’s MAC address or the port it’s connected to on the Hyper-V switch can be the basis for dividing traffic. There is an advantage in using this scheme in virtualization. Because the adjacent switch always sees a particular MAC address on one and only one connected port, the switch will distribute the ingress load (the traffic from the switch to the host) on multiple links based on the destination MAC (VM MAC) address. This is particularly useful when Virtual Machine Queues (VMQs) are used as a queue can be placed on the specific NIC where the traffic is expected to arrive. However, if the host has only a few VMs, this mode may not be granular enough to get a well-balanced distribution. This mode will also always limit a single VM (i.e., the traffic from a single switch port) to the bandwidth available on a single interface. Windows Server 2012 uses the Hyper-V Switch Port as the identifier rather than the source MAC address as, in some instances, a VM may be using more than one MAC address on a switch port.

Address Hashing. This algorithm creates a hash based on address components of the packet and then assigns packets that have that hash value to one of the available adapters. Usually this mechanism alone is sufficient to create a reasonable balance across the available adapters.

The components that can be specified as inputs to the hashing function include the following:

  • Source and destination MAC addresses
  • Source and destination IP addresses
  • Source and destination TCP ports, combined with the source and destination IP addresses (4-tuple hash)

The TCP ports hash creates the most granular distribution of traffic streams resulting in smaller streams that can be independently moved between members. However, it cannot be used for traffic that is not TCP or UDP-based or where the TCP and UDP ports are hidden from the stack, such as IPsec-protected traffic. In these cases, the hash automatically falls back to the IP address hash or, if the traffic is not IP traffic, to the MAC address hash.
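As a rough sketch, the distribution algorithm is selected with the -LoadBalancingAlgorithm parameter of the NetLbfo cmdlets; the names below are placeholders, and the values reflect the Windows Server 2012 options (TransportPorts, IPAddresses, MacAddresses, HyperVPort).

    # Sketch: create a team that hashes on TCP/UDP ports and IP addresses (4-tuple hash).
    New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts

    # Sketch: switch an existing team to Hyper-V port distribution for use under a Hyper-V switch.
    Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm HyperVPort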

Interactions between Configurations and Load distribution algorithms

Switch Independent configuration / Address Hash distribution

This configuration will send packets using all active team members, distributing the load through the use of the selected level of address hashing (defaults to using TCP ports and IP addresses to seed the hash function).

Because a given IP address can only be associated with a single MAC address for routing purposes, this mode receives inbound traffic on only one team member (the primary member). This means that the inbound traffic cannot exceed the bandwidth of one team member no matter how much is being sent.

This mode is best used for:

a) Native mode teaming where switch diversity is a concern;

b) Active/Standby mode teams; and

c) Teaming in a VM.

It is also good for:

d) Servers running workloads that are outbound-heavy and inbound-light (e.g., IIS).

Switch Independent configuration / Hyper-V Port distribution

This configuration will send packets using all active team members, distributing the load based on the Hyper-V switch port number. Each Hyper-V port will be limited to not more than one team member’s bandwidth because the port is affinitized to exactly one team member at any point in time.

Because each VM (Hyper-V port) is associated with a single team member, this mode receives inbound traffic for the VM on the same team member the VM’s outbound traffic uses. This also allows maximum use of Virtual Machine Queues (VMQs) for better performance overall.

This mode is best used for teaming under the Hyper-V switch when

a) The number of VMs well-exceeds the number of team members; and

b) A restriction of a VM to not greater than one NIC’s bandwidth is acceptable

Switch Dependent configuration / Address Hash distribution

This configuration will send packets using all active team members, distributing the load through the use of the selected level of address hashing (defaults to a 4-tuple hash).

Like in all switch dependent configurations, the switch determines how to distribute the inbound traffic among the team members. The switch is expected to do a reasonable job of distributing the traffic across the team members but it has complete independence to determine how it does so.

Best used for:

a) Native teaming for maximum performance when switch diversity is not required; or

b) Teaming under the Hyper-V switch when an individual VM needs to be able to transmit at rates in excess of what one team member can deliver.

Switch Dependent configuration / Hyper-V Port distribution

This configuration will send packets using all active team members, distributing the load based on the Hyper-V switch port number. Each Hyper-V port will be limited to not more than one team member’s bandwidth because the port is affinitized to exactly one team member at any point in time.

Like in all switch dependent configurations, the switch determines how to distribute the inbound traffic among the team members. The switch is expected to do a reasonable job of distributing the traffic across the team members but it has complete independence to determine how it does so.

Best used when:

a) Hyper-V teaming when the number of VMs on the switch well exceeds the number of team members;

b) Policy calls for switch-dependent (e.g., LACP) teams; and

c) The restriction of a VM to not greater than one NIC’s bandwidth is acceptable.

Cheers,


Marcos Nogueira
http://blog.marcosnogueira.org
Twitter:  @mdnoga

Update for SCVMM 2012 SP1 on Windows Server 2012

If you are using System Center 2012 Virtual Machine Manager Service Pack 1 (SCVMM) to manage storage and assign LUNs, there is a collection of updates that may improve this functionality. These updates are for Windows Server 2012 and need to be applied on hosts, VMM servers, and any other system that will interact with storage.

These updates are offered via Windows Update (see KB2785094) but you can also download a standalone installer here: http://www.microsoft.com/en-us/download/details.aspx?id=36259.

Cheers,


Marcos Nogueira
http://blog.marcosnogueira.org
Twitter: @mdnoga

General Methodology for Troubleshooting Hyper-V Replica

Hyper-V Replica connectivity issues between Primary and Replica servers

Symptom:  Hyper-V Replica functionality is disrupted and the Hyper-V VMMS\Admin log reports general network connectivity errors between the Primary and Replica server
  1. Verify the Replica server is booted and running.
  2. Check network connectivity and name resolution functionality between the Primary and Replica server by executing ping and nslookup tests.  If the ping test fails, resolve network connectivity issues.  If name resolution fails, check DNS
  3. Ensure the Replica server is listening on the Replica Server Port.  This can be accomplished by running a netstat -ano command on the Replica server after verifying the appropriate firewall rule has been enabled or the custom firewall rule has been configured to allow Inbound communications on the configured port
  4. Inspect the System Event Log on the Primary and Replica servers to determine if there is any failure condition associated with network functionality
  5. Run the Hyper-V Best Practice Analyzer (BPA) and inspect the report for any configuration or operational issues
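The checks above can also be run from Windows PowerShell. This is a hedged sketch with placeholder names; the firewall rule display name and the default port 80 (Kerberos/HTTP) are my assumptions about a default configuration.

    # From the Primary server: connectivity and name resolution to the Replica server.
    Test-Connection -ComputerName "ReplicaServer01"
    Resolve-DnsName -Name "ReplicaServer01"

    # On the Replica server: confirm the listener port and the Replica configuration.
    Get-NetTCPConnection -LocalPort 80 -State Listen
    Get-VMReplicationServer
    Get-NetFirewallRule -DisplayName "Hyper-V Replica HTTP Listener (TCP-In)"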

Configuring a virtual machine for replication

Symptom:  Configuring a virtual machine for replication fails.
  1. Verify the Replica server is booted and running.
  2. Check network connectivity and name resolution functionality between the Primary and Replica server by executing ping and nslookup tests.  If the ping test fails, resolve network connectivity issues.  If name resolution fails, check DNS
  3. Ensure the Replica server is listening on the Replica Server Port and the Authentication Type is configured correctly.
  4. If the Replica server configuration matches the parameters entered in the Enable Replication wizard, verify the firewall on the Replica server has been configured to allow Inbound communications on the Replica Server Port
  5. Inspect the System Event Log on the Primary and Replica servers to determine if there is any failure condition associated with network functionality
  6. Inspect the Hyper-V VMMS\Admin Log for any events related to network connectivity on both the Primary and Replica servers
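For reference, a minimal Windows PowerShell sketch of enabling replication is shown below; the server names, storage path, and firewall rule display name are placeholders/assumptions and should be matched to your environment.

    # On the Replica server: accept replication over Kerberos (HTTP, port 80).
    Set-VMReplicationServer -ReplicationEnabled $true -AllowedAuthenticationType Kerberos `
        -ReplicationAllowedFromAnyServer $true -DefaultStorageLocation "D:\Replicas"
    Enable-NetFirewallRule -DisplayName "Hyper-V Replica HTTP Listener (TCP-In)"

    # On the Primary server: enable replication for the virtual machine.
    Enable-VMReplication -VMName "VM01" -ReplicaServerName "ReplicaServer01" `
        -ReplicaServerPort 80 -AuthenticationType Kerberos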

Virtual machine Planned Failover process

A virtual machine Planned Failover process is a planned event where a running virtual machine on the Primary server is moved to a designated Replica server.

Symptom:  The "Check that virtual machine is turned off" prerequisite test fails.
  1. Ensure the virtual machine has been shut down prior to executing a Planned Failover to a Replica server
Symptom:  The "Check configuration for allowing reverse replication" test fails.
  1. Ensure the Primary server has also been configured as a Replica server.  The assumption is that if a Planned Failover is executed to a Replica server, the virtual machine will use the Primary server as the new Replica server.  This configuration in the virtual machine is included as part of the Planned Failover process
Symptom:  The "Send un-replicated data to Replica server" step fails.
  1. Verify network connectivity to the Replica server using the procedures outlined in the Hyper-V Replica connectivity issues between Primary and Replica servers section
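A rough Windows PowerShell sketch of the Planned Failover workflow (placeholder names, assuming reverse replication back to the original Primary) is:

    # On the Primary server: shut the VM down, then send any un-replicated changes.
    Stop-VM -Name "VM01"
    Start-VMFailover -VMName "VM01" -Prepare

    # On the Replica server: complete the failover, reverse replication, and start the VM.
    Start-VMFailover -VMName "VM01"
    Set-VMReplication -VMName "VM01" -Reverse
    Start-VM -Name "VM01"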

Configuring a virtual machine for Reverse Replication

Symptom:  Reverse Replication configuration for a virtual machine results in a failure.
  1. Verify network connectivity to the Hyper-V server being used as a Replica server using the procedures outlined in the Hyper-V Replica connectivity issues between Primary and Replica servers section

Initial Replication (IR) for a virtual machine

Symptom:  Initial Replication (IR) for a virtual machine fails.
  1. Verify network connectivity to the Replica server using the procedures outlined in the Hyper-V Replica connectivity issues between Primary and Replica servers section
  2. Ensure the protocol configuration between the Primary and Replica server match
  3. Verify the Primary server is authorized to replicate with the Replica server; this includes verifying the Security Tags match
  4. Ensure the Authentication method matches between the Primary and Replica server
  5. If there is an error on the Replica server indicating there is insufficient storage space, verify there is sufficient storage space available on the drive hosting the virtual machine replica file(s).  If there is insufficient storage space, add additional storage space
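If you need to retry or monitor initial replication from Windows PowerShell, a minimal sketch (placeholder VM name) is:

    # Start (or restart) initial replication for the virtual machine.
    Start-VMInitialReplication -VMName "VM01"

    # Check replication state, health, and configuration.
    Measure-VMReplication -VMName "VM01"
    Get-VMReplication -VMName "VM01"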

Delta Replication (DR) for a virtual machine

Symptom:  Delta Replication (DR) for a virtual machine fails
  1. Verify network connectivity to the Replica server using the procedures outlined in the Hyper-V Replica connectivity issues between Primary and Replica servers section
  2. Ensure the protocol configuration between the Primary and Replica server match
  3. Verify the Primary server is authorized to replicate with the Replica server
  4. Ensure the Authentication method matches between the Primary and Replica server
  5. Check for any error(s) on the Replica server indicating there is insufficient storage space available to host the virtual machine replica files
  6. Check for any error(s) on the Replica server indicating the virtual machine files could not be located
Symptom:  Application-consistent replicas are not generated by the Primary server and replicated to the Replica server
  1. Verify the virtual machine has been configured to replicate application-consistent replicas to the Replica server
  2. Verify the Integration Services version of the Guest matches what is installed in the Host (if there is a mismatch, a Warning message will be registered in the Hyper-V-Integration Admin log)
  3. Check the virtual machine Integration Services and verify the Backup (Volume snapshot) integration component is enabled in the Guest
  4. Review the system event log in the Guest and determine if there are any errors pertaining to the Volume Shadow Copy Service (VSS)
  5. Test VSS in the Guest by executing a backup of the operating system
  6. Execute a backup on the Hyper-V host and verify the Guest can be backed up
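A quick way to perform steps 3 and 4 above is sketched below; the VM name is a placeholder and the wildcard match on the Backup integration service name is an assumption.

    # On the Hyper-V host: confirm the Backup (Volume snapshot) integration service is enabled.
    Get-VMIntegrationService -VMName "VM01" | Where-Object { $_.Name -like "Backup*" }

    # Inside the guest: list VSS writers and look for any in a failed state.
    vssadmin list writers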

Replication Broker issues

Symptom:  When enabling a virtual machine for replication, a connection to the Client Access Point (CAP) being used by the Hyper-V Replica Broker in the cluster cannot be made.
  1. Ensure all the resources supporting the Hyper-V Replica Broker are Online in the cluster.  If there are any failures for the resources in the group, troubleshoot the failures using standard Failover Cluster troubleshooting procedures
  2. Move the resource group containing the Hyper-V Replica Broker to another node in the cluster and attempt to enable replication for a virtual machine using the Client Access Point for the Hyper-V Replica Broker
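These cluster checks can be scripted as in the sketch below; the group name "Hyper-V Replica Broker" and the node name are placeholders and should match the names used in your cluster.

    # Confirm every resource supporting the Broker is Online.
    Get-ClusterGroup -Name "Hyper-V Replica Broker" | Get-ClusterResource

    # Move the Broker group to another node and retry enabling replication.
    Move-ClusterGroup -Name "Hyper-V Replica Broker" -Node "Node2"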

Guest IP functionality

Symptom:  After initiating a Failover for a virtual machine, the configured Failover TCP/IP settings for the virtual machine in the Replica server are not implemented and a connection to the virtual machine cannot be made.
  1. Ensure the Integration Components in the virtual machine have been updated.  This problem could occur in down-level operating systems running in a virtual machine on a Windows Server 2012 Hyper-V server
  2. Check the Hyper-V-Integration\Admin event log for an Event ID: 4010 Warning message reporting a problem with the Hyper-V Data Exchange functionality for the affected virtual machine.  Additionally, an Event ID: 4132 Error message will be recorded indicating a problem applying IP settings to a network adapter in that virtual machine
  3. Update the Integration Components in the virtual machine
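To locate the events mentioned in step 2, something like the following sketch can be used; the event log channel name is my assumption for the Hyper-V-Integration Admin log.

    # Look for the Data Exchange warning (4010) and the IP-settings error (4132).
    Get-WinEvent -LogName "Microsoft-Windows-Hyper-V-Integration-Admin" |
        Where-Object { $_.Id -in 4010, 4132 }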

VLANs with Hyper-V Network Virtualization

Isolating different departments’ virtual machines can be a challenge on a shared network. When entire networks of virtual machines must be isolated, the challenge becomes even greater. Traditionally, VLANs have been used to isolate networks, but VLANs are very complex to manage on a large scale. The following are the primary drawbacks of VLANs:

· Cumbersome reconfiguration of production switches is required whenever virtual machines or isolation boundaries must be moved. Moreover, frequent reconfigurations of the physical network to add or modify VLANs increases the risk of an outage.

· VLANs have limited scalability because typical switches support no more than 1,000 VLAN IDs (with a maximum of 4,095).

· VLANs cannot span multiple subnets, which limits the number of nodes in a single VLAN and restricts the placement of virtual machines based on physical location.

In addition to these drawbacks, virtual machine IP address assignment presents other key issues when organizations move to the cloud:

· Required renumbering of service workloads.

· Policies that are tied to IP addresses.

· Physical locations that determine virtual machine IP addresses.

· Topological dependency of virtual machine deployment and traffic isolation.

The IP address is the fundamental address that is used for layer-3 network communication because most network traffic is TCP/IP. Unfortunately, when moving to the cloud, the addresses must be changed to accommodate the physical and topological restrictions of the datacenter. Renumbering IP addresses is cumbersome because all associated policies that are based on IP addresses must also be updated.

The physical layout of a datacenter influences the permissible potential IP addresses for virtual machines that run on a specific server or blade that is connected to a specific rack in the datacenter. A virtual machine provisioned and placed in the datacenter must adhere to the choices and restrictions regarding its IP address. The typical result is that datacenter administrators assign IP addresses to the virtual machines and force virtual machine owners to adjust all the policies that were based on the original IP address. This renumbering overhead is so high that many enterprises choose to deploy only new services into the cloud and leave legacy applications unchanged.

To solve these problems, Windows Server 2012 introduces Hyper-V Network Virtualization, a new feature that enables you to isolate network traffic from different business units or customers on a shared infrastructure, without having to use VLANs. Network Virtualization also lets you move virtual machines as needed within your virtual infrastructure while preserving their virtual network assignments. You can even use Network Virtualization to transparently integrate these private networks into a pre-existing infrastructure on another site.

Hyper-V Network Virtualization extends the concept of server virtualization to permit multiple virtual networks, potentially with overlapping IP addresses, to be deployed on the same physical network. With Network Virtualization, you can set policies that isolate traffic in a dedicated virtual network independently of the physical infrastructure. The following figure illustrates how you can use Network Virtualization to isolate network traffic that belongs to two different customers. In the figure, a Blue virtual machine and a Yellow virtual machine are hosted on a single physical network, or even on the same physical server. However, because they belong to separate Blue and Yellow virtual networks, the virtual machines cannot communicate with each other even if the customers assign these virtual machines IP addresses from the same address space.

[Figure: Blue and Yellow virtual machines isolated in separate virtual networks on shared physical infrastructure]

To virtualize the network, Hyper-V Network Virtualization uses the following elements:

· Two IP addresses for each virtual machine.

· Generic Routing Encapsulation (GRE).

· IP address rewrite.

· Policy management server.

IP addresses

Each virtual machine is assigned two IP addresses:

· Customer Address (CA) is the IP address that the customer assigns based on the customer’s own intranet infrastructure. This address lets the customer exchange network traffic with the virtual machine as if it had not been moved to a public or private cloud. The CA is visible to the virtual machine and reachable by the customer.

· Provider Address (PA) is the IP address that the host assigns based on the host’s physical network infrastructure. The PA appears in the packets on the wire exchanged with the Hyper-V server hosting the virtual machine. The PA is visible on the physical network, but not to the virtual machine.

The layer of CAs is consistent with the customer’s network topology, which is virtualized and decoupled from the underlying physical network addresses, as implemented by the layer of PAs. With Network Virtualization, any virtual machine workload can be executed without modification on any Windows Server 2012 Hyper-V server within any physical subnet, if Hyper-V servers have the appropriate policy settings that can map between the two addresses.

This approach provides many benefits, including cross-subnet live migration, customer virtual machines running IPv4 while the host provider runs an IPv6 datacenter or vice-versa, and using IP address ranges that overlap between customers. But perhaps the biggest advantage of having separate CAs and PAs is that it lets customers move their virtual machines to the cloud with minimal reconfiguration.

Generic Routing Encapsulation

GRE is a tunneling protocol (defined by RFC 2784 and RFC 2890) that encapsulates various network layer protocols inside virtual point-to-point links over an Internet Protocol network. Hyper-V Network Virtualization in Windows Server 2012 uses GRE IP packets to map the virtual network to the physical network. The GRE IP packet contains the following information:

· One customer address per virtual machine.

· One provider address per host that all virtual machines on the host share.

· A Tenant Network ID embedded in the GRE header Key field.

· Full MAC header.

The following figure illustrates GRE in a Network Virtualization environment.

[Figure: GRE in a Network Virtualization environment]

IP Address Rewrite

Hyper-V Network Virtualization uses IP Address Rewrite to map the CA to the PA. Each virtual machine CA is mapped to a unique host PA. This information is sent in regular TCP/IP packets on the wire. With IP Address Rewrite, there is little need to upgrade existing network adapters, switches, and network appliances, and it is immediately and incrementally deployable today with little impact on performance. The following figure illustrates the IP Address Rewrite process.

[Figure: The IP Address Rewrite process]

Policy management server

The setting and maintenance of Network Virtualization capabilities require using a policy management server, which may be integrated into the management tools used to manage virtual machines.

 

 

Network Virtualization example

Blue Corp and Yellow Corp are two companies that want to move their Microsoft SQL Server infrastructures into the cloud, but they want to maintain their current IP addressing. Thanks to the new Network Virtualization feature of Hyper-V in Windows Server 2012, the hosting provider is able to accommodate this request, as shown in the following figure.

[Figure: Blue Corp and Yellow Corp workloads hosted on the provider’s shared infrastructure]

Before moving to the hosting provider’s shared cloud service:

· Blue Corp ran a SQL Server instance (named SQL) at the IP address 10.1.1.1 and a web server (named WEB) at the IP address 10.1.1.2, which uses its SQL server for database transactions.

· Yellow Corp ran a SQL Server instance, also named SQL and assigned the IP address 10.1.1.1, and a web server, also named WEB and also at the IP address 10.1.1.2, which uses its SQL server for database transactions.

Both Blue Corp and Yellow Corp move their respective SQL and WEB servers to the same hosting provider’s shared IaaS service where they run the SQL virtual machines in Hyper-V Host 1 and the WEB virtual machines in Hyper-V Host 2. All virtual machines maintain their original intranet IP addresses (their CAs):

· CAs of Blue Corp virtual machines: SQL is 10.1.1.1, WEB is 10.1.1.2.

· CAs of Yellow Corp virtual machines: SQL is 10.1.1.1, WEB is 10.1.1.2.

Both companies are assigned the following PAs by their hosting provider when the virtual machines are provisioned:

· PAs of Blue Corp virtual machines: SQL is 192.168.1.10, WEB is 192.168.1.12.

· PAs of Yellow Corp virtual machines: SQL is 192.168.1.11, WEB is 192.168.1.13.

The hosting provider creates policy settings that consist of an isolation group for Yellow Corp that maps the CAs of the Yellow Corp virtual machines to their assigned PAs, and a separate isolation group for Blue Corp that maps the CAs of the Blue Corp virtual machines to their assigned PAs. The provider applies these policy settings to both Hyper-V Host 1 and Hyper-V Host 2.
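As a hedged illustration only, the Blue Corp portion of such a policy could be expressed with the Windows Server 2012 Network Virtualization cmdlets roughly as below. The virtual subnet ID and MAC addresses are invented for the example, "TranslationMethodNat" is assumed to correspond to IP Rewrite, and a complete configuration would also require provider addresses and customer routes.

    # CA-to-PA lookup records for the Blue Corp virtual machines (applied on both hosts).
    New-NetVirtualizationLookupRecord -CustomerAddress "10.1.1.2" -ProviderAddress "192.168.1.12" `
        -VirtualSubnetID 5001 -MACAddress "101010101102" -Rule "TranslationMethodNat" -VMName "BlueWEB"
    New-NetVirtualizationLookupRecord -CustomerAddress "10.1.1.1" -ProviderAddress "192.168.1.10" `
        -VirtualSubnetID 5001 -MACAddress "101010101101" -Rule "TranslationMethodNat" -VMName "BlueSQL"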

When the Blue Corp WEB virtual machine on Hyper-V Host 2 queries its SQL server at 10.1.1.1, the following occurs:

· Hyper-V Host 2, based on its policy settings, translates the addresses in the packet from:
Source: 10.1.1.2 (the CA of Blue Corp WEB)
Destination: 10.1.1.1 (the CA of Blue Corp SQL)
to
Source: 192.168.1.12 (the PA for Blue Corp WEB)
Destination: 192.168.1.10 (the PA for Blue Corp SQL)

· When the packet is received at Hyper-V Host 1, based on its policy settings, Network Virtualization translates the addresses in the packet from:
Source: 192.168.1.12 (the PA for Blue Corp WEB)
Destination: 192.168.1.10 (the PA for Blue Corp SQL)
back to
Source: 10.1.1.2 (the CA of Blue Corp WEB)
Destination: 10.1.1.1 (the CA of Blue Corp SQL) before delivering the packet to the Blue Corp SQL virtual machine.

When the Blue Corp SQL virtual machine on Hyper-V Host 1 responds to the query, the following happens:

· Hyper-V Host 1, based on its policy settings, translates the addresses in the packet from:
Source: 10.1.1.1 (the CA of Blue Corp SQL)
Destination: 10.1.1.2 (the CA of Blue Corp WEB)
to
Source: 192.168.1.10 (the PA for Blue Corp SQL)
Destination: 192.168.1.12 (the PA for Blue Corp WEB)

· When Hyper-V Host 2 receives the packet, based on its policy settings, Network Virtualization translates the addresses in the packet from:
Source: 192.168.1.10 (the PA for Blue Corp SQL)
Destination: 192.168.1.12 (the PA for Blue Corp WEB)
to
Source: 10.1.1.1 (the CA of Blue Corp SQL)
Destination: 10.1.1.2 (the CA of Blue Corp WEB) before delivering the packet to the Blue Corp WEB virtual machine.

A similar process for traffic between the Yellow Corp WEB and SQL virtual machines uses the settings in the Yellow Corp isolation group. With Network Virtualization, Yellow Corp and Blue Corp virtual machines interact as if they were on their original intranets, but they are never in communication with each other, even though they are using the same IP addresses. The separate addresses (CAs and PAs), the policy settings of the Hyper-V hosts, and the address translation between CA and PA for inbound and outbound virtual machine traffic, all act to isolate these two sets of servers from each other.

Setting and maintaining Network Virtualization capabilities requires the use of a policy management server, which may be integrated into tools used to manage virtual machines.

Two techniques are used to virtualize the IP address of the virtual machine. The preceding example with Blue Corp and Yellow Corp shows IP Rewrite, which modifies the CA IP address of the virtual machine’s packets before they are transferred on the physical network. IP Rewrite can provide better performance because it is compatible with existing Windows networking offload technologies such as VMQs.

The second IP virtualization technique is GRE Encapsulation (RFC 2784). With GRE Encapsulation, all virtual machine packets are encapsulated with a new header before being sent on the wire. GRE Encapsulation provides better network scalability because all virtual machines on a specific host can share the same PA IP address. Reducing the number of PAs means that the load on the network infrastructure associated with learning these addresses (IP and MAC) is greatly reduced.

Requirements

Network Virtualization requires Windows Server 2012 and the Hyper-V server role.

Summary

With Network Virtualization, you now can isolate network traffic from different business units or customers on a shared infrastructure, without having to use VLANs. Network Virtualization also lets you move virtual machines as needed within your virtual infrastructure while preserving their virtual network assignments. Finally, you can use Network Virtualization to transparently integrate these private networks into a pre-existing infrastructure on another site.

Network Virtualization benefits include:

· Tenant network migration to the cloud with minimum reconfiguration or effect on isolation. Customers can keep their internal IP addresses while they move workloads onto shared IaaS clouds, minimizing the configuration changes needed for IP addresses, DNS names, security policies, and virtual machine configurations. In software-defined, policy-based datacenter networks, network traffic isolation does not depend on VLANs, but is enforced within Hyper-V hosts, based on multitenant isolation policies. Network administrators can still use VLANs for traffic management of the physical infrastructure if the topology is primarily static.

· Tenant virtual machine deployment anywhere in the datacenter. Services and workloads can be placed or migrated to any server in the datacenter while keeping their IP addresses, without being limited to physical IP subnet hierarchy or VLAN configurations.

· Simplified network and improved server/network resource use. The rigidity of VLANs, along with the dependency of virtual machine placement on physical network infrastructure, results in overprovisioning and underuse. By breaking this dependency, Network Virtualization increases the flexibility of virtual machine workload placement, thus simplifying network management and improving server and network resource use. Server workload placement is simplified because migration and placement of workloads are independent of the underlying physical network configurations. Server administrators can focus on managing services and servers, while network administrators can focus on overall network infrastructure and traffic management.

· Works with today’s hardware (servers, switches, appliances) to maximize performance. Network Virtualization can be deployed in today’s datacenter, and yet is compatible with emerging datacenter “flat network” technologies, such as TRILL (Transparent Interconnection of Lots of Links), an IETF standard architecture intended to expand Ethernet topologies.

· Full management through Windows PowerShell and WMI. You can use Windows PowerShell to script and automate administrative tasks easily. Windows Server 2012 includes Windows PowerShell cmdlets for Network Virtualization that let you build command-line tools or automated scripts for configuring, monitoring, and troubleshooting network isolation policies.

 

Cheers,


Marcos Nogueira
http://blog.marcosnogueira.org
Twitter: @mdnoga

Hyper-V Failover Cluster as a Primary or Replica Server

Failover Clustering has proven its value in making virtualized workloads highly available.  We saw this in Windows Server 2008 using Quick Migration and then in Windows Server 2008 R2 with the addition of Live Migration. Failover Clustering can also play an important role as a Replica Cluster.  To accommodate this, a new role has been added in Failover Clustering called the Hyper-V Replica Broker.  A new resource type, Virtual Machine Replication Broker, was added to support this new Role.

Failover Replication Broker Architecture

The Hyper-V Replica Broker runs in a Replica cluster and provides a Replica server name, or connection point (also known as a Client Access Point (CAP)), for initial virtual machine placement when contacted by a Primary server. After a virtual machine is initially replicated to the Replica cluster, the Hyper-V Replica Broker provides the virtual machine to Replica server (cluster node) mapping to ensure the Primary server can replicate data for the virtual machine to the correct node in the cluster, in support of mobility scenarios on the Replica side (e.g., Live/Quick Migration or Storage Migration).

The Hyper-V Replica Broker is used to configure the replication settings for all nodes in the cluster.  On standalone Hyper-V servers, Hyper-V Manager is used to configure replication settings; in the Replica cluster, Failover Cluster Manager is used instead.  Through the Hyper-V Replica Broker role, replication settings are applied across the entire cluster.
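For reference, a hedged Windows PowerShell sketch of creating the Broker with the Failover Clustering cmdlets is shown below; the role name and static address are placeholders.

    # Create the connection point (CAP) and the Broker resource, then bring the group online.
    Add-ClusterServerRole -Name "Replica-Broker" -StaticAddress 192.168.1.50
    Add-ClusterResource -Name "Virtual Machine Replication Broker" `
        -Type "Virtual Machine Replication Broker" -Group "Replica-Broker"
    Add-ClusterResourceDependency "Virtual Machine Replication Broker" "Replica-Broker"
    Start-ClusterGroup -Name "Replica-Broker"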

[Screenshot: Hyper-V Replica Broker role in Failover Cluster Manager]

 The replication settings are the same as those for standalone Hyper-V servers.

[Screenshot: Replication configuration settings]

 Network Considerations for Hyper-V Replica Scenarios

There are scenarios where the Replica server, or Replica cluster, will reside at a Disaster Recovery (DR) site located across a WAN link, and the DR site uses a completely different network-addressing scheme than the Primary site.  In this configuration, when virtual machines are failed over to the DR site, a new IP configuration will be needed for each network configured in the virtual machine.  To accommodate this scenario, there is built-in functionality in Hyper-V Replica where a virtual machine’s network settings can be modified to include configuration information for a different network at a DR site.  To take advantage of this, the Hyper-V Administrator must modify the network configuration for each replicated virtual machine on the Replica server.  If connectivity to networks at the replica site is required, the settings for all networks a virtual machine is connected to must be modified.  The Hyper-V Administrator can provide both IPv4 and IPv6 configuration information for a virtual machine.  The Failover TCP/IP setting, which is available after replication is enabled for the virtual machine, is used to provide the network configuration information in the virtual machine.

[Screenshot: Failover TCP/IP settings for a replicated virtual machine]

The addressing information provided is used when a Failover action (Planned Failover or Failover) is executed.  The configuration of the Guest virtual machine IP settings in this manner only applies to Synthetic Network Adapters and not Legacy Network Adapters.  The operating system running in the Guest virtual machine must be one of the following: Windows Server 2012, Windows Server 2008 R2, Windows Server 2008, Windows Server 2003 SP2 (or higher), Windows 7, Windows Vista SP2 (or higher), or Windows XP SP2 (or higher).  The latest Windows Server 2012 Integration Services must be installed in the virtual machine.
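The same Failover TCP/IP settings can be applied from Windows PowerShell; the sketch below uses placeholder names and addresses.

    # On the Replica server: set the failover IPv4 configuration for the VM's synthetic adapter.
    Get-VMNetworkAdapter -VMName "VM01" |
        Set-VMNetworkAdapterFailoverConfiguration -IPv4Address 192.168.50.10 `
            -IPv4SubnetMask 255.255.255.0 -IPv4DefaultGateway 192.168.50.1 `
            -IPv4PreferredDNSServer 192.168.50.5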

The information is reflected in the virtual machine configuration file located on the Replica server.

[Screenshot: Failover TCP/IP information in the virtual machine configuration on the Replica server]