Load Balancer and Availability Set with multiple VMs

When it comes to best practices for setting up multiple virtual machines behind a load balancer and in an availability set, the information out there is either outdated or hard to find.

What is the scenario? Imagine that you need to set up a few VMs that share configuration and some files between them. How could you do it?

After a few searches on the web, I came across the IIS and Azure Files blog post. The post is dated October 2015, and as you know, Azure changes at a very fast pace, so my first thought was: is this still applicable? After a few tests in my test environment, I found that it is. Surprisingly! So, if you follow all the steps in the post, you can configure your environment.

In my case, there was a specific requirement that made this approach inapplicable: my workloads required low latency. So, I went searching again for how I could achieve this, and then I found the solution on GitHub! Microsoft published a template where the only thing you need to do is fill in the blanks. THANK YOU!

This is the template that I'm referring to: 201-vmss-win-iis-app-ssl.

Solution overview and deployed resources

This template will create the following Azure resources (a deployment sketch follows the list):

  1. A VNet with two subnets. The VNet and subnet IP prefixes are defined in the variables section, i.e. appVnetPrefix, appVnetSubnet1Prefix & appVnetSubnet2Prefix respectively. Set these accordingly.
  2. An NSG to allow HTTP, HTTPS and RDP access to the VMSSs. The NSG is assigned to the subnets.
  3. Two NICs, two Public IPs and two VMSSs with Windows Server 2012 R2:
    3.1) The first VMSS is used for hosting the website and the second VMSS is used for hosting the services (WebAPI/WCF, etc.)
    3.2) The VMSSs are load balanced with Azure load balancers. The load balancers are configured to allow RDP access via port ranges.
    3.3) The VMSSs are configured to auto scale based on CPU usage. The scaled-out instances are automatically configured with Windows features, application deployment packages, SSL certificates, and the necessary IIS sites and SSL bindings.
  4. The 1st VMSS is deployed with a pfx certificate installed in the specified certificate store. The source of the certificate is stored in an Azure Key Vault
  5. The DSC script configures various Windows features like IIS/Web role, IIS management service and tools, .NET Framework 4.5, custom logging, request monitoring, HTTP tracing, Windows auth, application initialization, etc.
  6. DSC downloads Web Deploy 3.6 & URL Rewrite 2.0 and installs the modules
  7. DSC downloads an application deployment package from an Azure Storage account and installs it in the default website
  8. DSC finds the certificate in the local store and creates a 443 binding
  9. DSC creates the necessary rules, so any incoming HTTP traffic gets automatically redirected to the corresponding HTTPS endpoints
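If you want to try it, here is a minimal sketch of deploying the quickstart template with Azure PowerShell. The resource group name, location, and parameters file are placeholders, and the template URI assumes the azure-quickstart-templates repository layout at the time of writing:

# A minimal sketch (AzureRM module; names, location, and parameter values
# are placeholders): deploy the 201-vmss-win-iis-app-ssl quickstart template.
New-AzureRmResourceGroup -Name "vmss-iis-rg" -Location "EastUS"
New-AzureRmResourceGroupDeployment -ResourceGroupName "vmss-iis-rg" `
    -TemplateUri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/201-vmss-win-iis-app-ssl/azuredeploy.json" `
    -TemplateParameterFile ".\azuredeploy.parameters.json"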

The following resources are deployed as part of the solution

A VNet with two subnets

The VNet and the subnet IP prefixes are defined in the variables section, i.e. appVnetPrefix, appVnetSubnet1Prefix & appVnetSubnet2Prefix respectively. Set these accordingly.

  • NSG to define the security rules – it defines the rules for HTTP, HTTPS and RDP access to the VMSSs. The NSG is assigned to the subnets
  • Two NICs, two Public IPs and two VMSSs with Windows Server 2012 R2
  • Two Azure load balancers one each for the VMSSs
  • Storage accounts for the VMSSs as well as for the artifacts

Prerequisites

  1. You should have a custom domain ready, and point the custom domain to the FQDN of the first public IP (the public IP of the web load balancer)
  2. SSL certificate: you should have a valid SSL certificate, either purchased from a CA or self-signed
  3. Create an Azure Key Vault and upload the certificate to the Key Vault. Currently, Azure Key Vault supports certificates in pfx format. If your certificates are not in pfx format, import them into a Windows cert store on a local machine and then export them to pfx format with the embedded private key and root certificate.
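As a sketch of step 3, here is one way to wrap a pfx as the base64-encoded JSON secret that VM certificate deployment expects (vault name, file path, secret name, and password are placeholders):

# A minimal sketch (AzureRM module; vault name, path, and password are placeholders).
$pfxBytes = Get-Content "C:\certs\mysite.pfx" -Encoding Byte
$pfxEncoded = [System.Convert]::ToBase64String($pfxBytes)
$json = @"
{ "data": "$pfxEncoded", "dataType": "pfx", "password": "MyPfxPassword" }
"@
$jsonEncoded = [System.Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($json))
$secret = ConvertTo-SecureString -String $jsonEncoded -AsPlainText -Force
Set-AzureKeyVaultSecret -VaultName "myVault" -Name "mySiteCert" -SecretValue $secret

Remember to create the Key Vault with the -EnabledForDeployment flag (New-AzureRmKeyVault), so that the compute resource provider can retrieve the secret during provisioning.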

 

Cheers,

Marcos Nogueira
Azure MVP
azurecentric.com
Twitter: @mdnoga

 

Managed disks on Azure

You have probably already seen this when creating a virtual machine on Azure. After you enter the basic information, like the name of the VM, and choose its size, it comes time to define and configure the settings of that VM. One of the first options is the use of managed disks.

But what are managed disks? How do they work? What are the implications of using managed disks?

So, first things first: managed disks allow you to abstract away the storage accounts used by the disks of your virtual machine. When you select that you want to use managed disks, you don't have to set up or choose the storage account where those disks will be stored.

When you don't want to use managed disks, you have to select the storage account yourself.

With managed disks, you only have to specify the size of the disk, and Azure manages it for you. You don't have to worry about storage account limits, and you gain higher scalability: you can create up to 10,000 disks per region per subscription.
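As a quick sketch, here is what "only specify the size" looks like with the AzureRM PowerShell module (resource group, disk name, and location are placeholders; newer module versions may use -SkuName instead of -AccountType):

# A minimal sketch: create an empty managed disk without touching any storage account.
$diskConfig = New-AzureRmDiskConfig -Location "EastUS" -CreateOption Empty -DiskSizeGB 128 -AccountType StandardLRS
New-AzureRmDisk -ResourceGroupName "myRG" -DiskName "myDataDisk" -Disk $diskConfig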

Managed disks also increase the resilience of your availability sets, by making sure that the disks of different VMs in the set belong to storage units on different fault domains. In my experience, when you create storage accounts yourself, there is no guarantee that they will be on different fault domains. In that scenario, even if you use availability sets during the setup, you don't avoid a single point of failure.

But if you are thinking that you would rather use storage accounts to control access to the VHDs: with managed disks you can use RBAC as well, to assign the permissions for a managed disk to one or more users. In this scenario, you manage access disk by disk, not for the entire storage account. That means more granular access control. You can, for example, prevent a user from copying the VHD while still letting them use the virtual machine.
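A hedged sketch of what that per-disk permissioning can look like (user, names, and subscription ID are placeholders):

# A minimal sketch: scope a role assignment to a single managed disk, not a storage account.
New-AzureRmRoleAssignment -SignInName "user@contoso.com" -RoleDefinitionName "Reader" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/myRG/providers/Microsoft.Compute/disks/myDataDisk"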

The integration with Azure Backup is great. You can use the Azure Backup service with managed disks to create a backup job that will ease your VM restoration. Note, though, that managed disks only support locally redundant storage (LRS) as a replication option, which means three copies of the VHD within the region.
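For illustration, a minimal sketch of enabling backup for a VM with managed disks, assuming an existing Recovery Services vault and its default policy (all names are placeholders):

# A minimal sketch (AzureRM modules; names are placeholders).
$vault = Get-AzureRmRecoveryServicesVault -Name "myVault" -ResourceGroupName "myRG"
Set-AzureRmRecoveryServicesVaultContext -Vault $vault
$policy = Get-AzureRmRecoveryServicesBackupProtectionPolicy -Name "DefaultPolicy"
Enable-AzureRmRecoveryServicesBackupProtection -Policy $policy -Name "myVM" -ResourceGroupName "myRG"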

To sum up, here are the benefits of managed disks:

  • Simple and scalable VM deployment
  • Better reliability for Availability Sets
  • Granular Access control
  • Azure Backup service support

Cheers,

Marcos Nogueira
Azure MVP

azurecentric.com
Twitter: @mdnoga

How to extend Azure Service Fabric to on-premises?

You can deploy a Service Fabric cluster on any physical or virtual machine running the Windows Server operating system, including ones residing in your on-premises datacenters. You use the standalone Windows Server package available for download from the Azure online documentation website (see here).

To deploy an on-premises Service Fabric cluster, perform the following steps:

  1. Plan for your cluster infrastructure. You must consider the resiliency of the hardware and network components and the physical security of their location because in this scenario, you can no longer rely on the resiliency and security features built in to the Azure platform.
  2. Prepare the Windows Server computers to satisfy the prerequisites. Each computer must be running Windows Server 2012 or Windows Server 2012 R2, have at least 2 GB of RAM, and have the Microsoft .NET Framework 4.5.1 or later installed. Additionally, ensure that the Remote Registry service is running.
  3. Determine the initial cluster size. At a minimum, the cluster must have three nodes, with one node per physical or virtual machine. Adjust the number of nodes according to your projected workload.
  4. Determine the number of fault domains and upgrade domains. The assignment of fault domains should reflect the underlying physical infrastructure and the level of its resiliency to a single component failure. Typically, you consider a single physical rack to constitute a single fault domain. The assignment of upgrade domains should reflect the number of nodes that you plan to take down during application or cluster upgrades.
  5. Download the Service Fabric standalone package for Windows Server.
  6. Define the cluster configuration. Specify the cluster settings before provisioning. To do this, modify the ClusterConfig.json file included in the Service Fabric standalone package for Windows Server. The file has several sections, including:
    o    NodeTypes. Node types allow you to divide cluster nodes into groups, each represented by a node type, and assign common properties to the nodes within each group. These properties specify endpoint ports, placement constraints, or capacities.
    o    Nodes. Nodes determine the settings of individual cluster nodes, such as the node name, IP address, fault domain, or upgrade domain.
  7. Run the create cluster script. After you modify the ClusterConfig.json file to match your requirements and preferences, run the CreateServiceFabricCluster.ps1 Windows PowerShell script and reference the .json file as its parameter (see the sketch after this list). You can run the script on any computer with connectivity to all the cluster nodes.
  8. Connect to the cluster. To manage the cluster, connect to it via http://IP_Address_of_a_node:19080/Explorer/index.htm, where IP_Address_of_a_node is the IP address of any of the cluster nodes.
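For steps 7 and 8, a minimal sketch (paths and the node IP are placeholders; run it from the folder where you extracted the standalone package):

# A minimal sketch: create the cluster from the edited configuration file.
# -AcceptEULA skips the interactive license prompt.
.\CreateServiceFabricCluster.ps1 -ClusterConfigFilePath .\ClusterConfig.json -AcceptEULA

# Then manage the cluster from a browser via Service Fabric Explorer, e.g.:
# http://10.0.0.4:19080/Explorer/index.htm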

 

Cheers,

Marcos Nogueira
azurecentric.com
Twitter: @mdnoga

Virtual Machine Scalability in Azure – Part 1

You can provide automatic scalability for Azure VMs by using Azure VM scale sets. A VM scale set consists of a group of automatically provisioned Windows or Linux virtual machines that share identical configuration and deliver the same functionality to support a service or application. With a VM scale set, the number of virtual machines can increase or decrease, adjusting dynamically to changes in demand for its workload. The workload should be stateless to efficiently handle deprovisioning of VMs when scaling in. To implement autoscaling, you should leverage capabilities of the Microsoft.Insights resource provider.

VM scale sets integrate with Azure load balancers to efficiently handle dynamic distribution of network traffic across multiple virtual machines. They also support network address translation (NAT) rules, allowing for connectivity to individual virtual machines in the same scale set. VMs in the same scale set are automatically distributed across five fault domains and five update domains.

From a storage perspective, you can configure VM scale sets with both managed and unmanaged disks. Using managed disks offers additional scalability benefits. In particular, with managed disks, when using an Azure Marketplace image to provision a VM scale set, you can scale out up to 1000 VMs. With unmanaged disks, the upper limit is 100 VMs per scale set.

When using custom images, managed disks allow you to scale out up to 100 VMs. With unmanaged disks, you should limit your deployment to 20 VMs. You can increase this number to 40 if you set the overprovision property of the VM scale set to false. This way, you ensure that the aggregate Input/Output Operations Per Second (IOPS) of virtual disks in the VM scale set stays below the 20,000 IOPS limit of a single Standard Azure storage account.

Note: VM scale sets are available only when using the Azure Resource Manager deployment model.

This solution differs from the classic VM horizontal scaling approach, which required you to preprovision each VM that you wanted to bring online, to accommodate increased demand.

Implementing VM scale sets

To provision a VM scale set, you can use the Azure portal, Azure PowerShell, the Azure Command-Line Interface (Azure CLI), or Azure Resource Manager templates. When using templates, you should reference the Microsoft.Compute/virtualMachineScaleSets resource type. This resource type implements a number of properties (a PowerShell sketch follows the list), including:

  • sku. The size (name), tier, and capacity (the number of virtual machine instances that the scale set will provision) of the VM scale set.
  • virtualMachineProfile. The disk, operating system, and network settings of the virtual machines in the scale set.
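If you prefer PowerShell to templates, here is a hedged sketch of the equivalent provisioning flow with the AzureRM module (all names, the credentials, and the subnet ID are placeholders, and the resource group and virtual network are assumed to already exist):

# A minimal sketch: provision a small Windows VM scale set.
$ipConfig = New-AzureRmVmssIpConfig -Name "ipconfig1" -SubnetId $subnetId

$vmss = New-AzureRmVmssConfig -Location "EastUS" -SkuCapacity 2 `
    -SkuName "Standard_DS1_v2" -UpgradePolicyMode Automatic
Set-AzureRmVmssOsProfile -VirtualMachineScaleSet $vmss -ComputerNamePrefix "web" `
    -AdminUsername "azureadmin" -AdminPassword $adminPassword
Set-AzureRmVmssStorageProfile -VirtualMachineScaleSet $vmss `
    -ImageReferencePublisher "MicrosoftWindowsServer" -ImageReferenceOffer "WindowsServer" `
    -ImageReferenceSku "2012-R2-Datacenter" -ImageReferenceVersion "latest" `
    -OsDiskCreateOption FromImage
Add-AzureRmVmssNetworkInterfaceConfiguration -VirtualMachineScaleSet $vmss `
    -Name "nicconfig1" -Primary $true -IPConfiguration $ipConfig

New-AzureRmVmss -ResourceGroupName "myRG" -VMScaleSetName "webScaleSet" -VirtualMachineScaleSet $vmss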

To configure Autoscale, reference the Microsoft.Insights/autoscaleSettings resource type in an Azure Resource Manager template. Some of the more relevant properties that this resource type implements include the following (a PowerShell sketch follows the list):

  • metricName. The name of the performance metric that determines whether to trigger horizontal scaling (for example, Percentage CPU).
  • metricResourceUri. The resource identifier designating the virtual machine scale set to monitor.
  • timeGrain. The frequency with which performance metrics are collected (between 1 minute and 12 hours).
  • statistic. The method of calculating aggregate metrics from multiple virtual machines (Average, Minimum, Maximum).
  • timeWindow. The range of time for metrics calculation (between 5 minutes and 12 hours).
  • timeAggregation. The method of calculating aggregate metrics over time (Average, Minimum, Maximum, Last, Total, Count).
  • threshold. The value that triggers the scale action. For example, if you set it to 50 when using the Percentage CPU metricName, the number of virtual machines in the set would increase when CPU usage exceeds 50 percent (specifics depend on other parameters, such as statistic, timeWindow, and timeAggregation).
  • operator. The criterion that determines the method of comparing collected metrics and the threshold (Equals, NotEquals, GreaterThan, GreaterThanOrEqual, LessThan, LessThanOrEqual).
  • direction. The type of horizontal scaling invoked as the result of reaching the threshold (increase or decrease, representing, respectively, scaling out or scaling in).
  • value. The number of virtual machines added to or removed from the scale set (one or more).
  • cooldown. The amount of time to wait after the most recent scaling event before the next action occurs (from 1 minute to 1 week).
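These settings map fairly directly onto the AzureRM.Insights cmdlets. A hedged sketch that scales out by one VM when average CPU exceeds 50 percent over a five-minute window (the IDs and names are placeholders):

# A minimal sketch (AzureRM.Insights; IDs and names are placeholders).
$vmssId = "/subscriptions/<subscription-id>/resourceGroups/myRG" +
    "/providers/Microsoft.Compute/virtualMachineScaleSets/webScaleSet"

$rule = New-AzureRmAutoscaleRule -MetricName "Percentage CPU" -MetricResourceId $vmssId `
    -Operator GreaterThan -MetricStatistic Average -Threshold 50 `
    -TimeGrain ([TimeSpan]::FromMinutes(1)) -TimeWindow ([TimeSpan]::FromMinutes(5)) `
    -ScaleActionCooldown ([TimeSpan]::FromMinutes(5)) `
    -ScaleActionDirection Increase -ScaleActionValue 1

$asProfile = New-AzureRmAutoscaleProfile -Name "cpuProfile" -DefaultCapacity 2 `
    -MinimumCapacity 2 -MaximumCapacity 10 -Rule $rule

Add-AzureRmAutoscaleSetting -Location "EastUS" -Name "webAutoscale" `
    -ResourceGroup "myRG" -TargetResourceId $vmssId -AutoscaleProfile $asProfile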

To understand how Azure scales virtual machines, I recommend watching this video from Mark Russinovich about Virtual Machine Scale Sets.

Virtual Machines High Availability on Azure

In general, you want your Azure virtual machine environment to be resilient to hardware failures and maintenance events that might occur occasionally within the Azure infrastructure. The primary mechanism provided by the Azure platform that helps you accomplish this objective is the availability set feature.

Availability sets are designed to gracefully handle two types of events that might result in downtime of individual Azure virtual machines.

  • Planned outages. These outages occur because of planned system maintenance events that require a temporary virtual machine downtime. In particular, while most Azure platform updates are transparent to platform as a service (PaaS) and IaaS infrastructure, some of them might involve reboots of Hyper-V hosts. To accommodate such types of events, Azure implements update domains.
  • Unplanned outages. These outages can negatively affect availability of individual virtual machines in an unexpected way, and potentially for longer than the time frame of a planned Hyper-V host restart. While the Azure platform is designed to be highly resilient, there might be cases where a hardware failure results in virtual machine downtime. In Azure, unplanned outage events are mitigated by using fault domains.

Understanding availability sets

To provide resiliency for your IaaS-based solutions, you should group two or more virtual machines providing the same functionality in an availability set. An availability set is a logical grouping of two or more virtual machines. By assigning virtual machines to the same availability set, you automatically distribute them across separate fault domains and separate update domains.

Update domains

An availability set consists of up to 20 update domains (you have the ability to increase this number from its default of 5). Each update domain represents a set of physical hosts that Azure Service Fabric can update and reboot at the same time without affecting overall availability of virtual machines grouped in the same availability set.

 

When you assign more than five virtual machines to the same availability set (assuming the default settings), the sixth virtual machine is placed into the same update domain as the first virtual machine, the seventh in the same update domain as the second virtual machine, and so on. During planned maintenance, only hosts in one of these five update domains are rebooted concurrently, while hosts in the other four remain online.

Fault domains

Fault domains define a group of Hyper-V hosts that, due to their placement, could be affected by a localized failure (such as servers installed in a rack serviced by the same power source or networking switches). Azure Service Fabric distributes virtual machines (VMs) in the same availability set across either two (with Azure classic deployment) or up to three (when using Azure Resource Manager) fault domains.

By placing application servers, such as web or database servers, in function-based availability sets and then using load balancing or an additional failover mechanism, you can protect each service and enable traffic to be continuously served by at least one instance of each service.

Configuring availability sets

Availability set configuration is mostly governed by the Azure Service Fabric, and, beyond the initial setup and VM assignment, does not require user interaction. To add one or more virtual machines to an availability set, simply assign the same availability set on their Settings blade. The portal also allows you to create a new availability set by offering it as one of its Azure Marketplace components in the Compute category.

When you create an availability set, you must specify the following settings:

  • Name. A unique sequence of up to 80 characters, starting with either a letter or a number, followed by letters, numbers, underscores, dashes, or periods, and ending with a letter, a digit, or an underscore.
  • Resource Group. A resource group into which you must deploy the Azure VMs that will become part of the availability set.
  • Location. The Azure region that is hosting the VMs which will be part of the availability set.
  • Fault domains. The number of fault domains (up to three) associated with the availability set.
  • Update domains. The number of update domains (up to 20) associated with the availability set.
  • Managed. An indication that the availability set will host VMs that use managed disks. For more information about managed disks, see the "Managed disks on Azure" section above.

Azure PowerShell provides an alternative approach to managing availability sets. The following cmdlets handle creating, modifying, and removing availability sets, respectively (a short usage sketch follows):

New-AzureRmAvailabilitySet
Set-AzureRmAvailabilitySet
Remove-AzureRmAvailabilitySet
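For example, a minimal sketch of creating an availability set and placing a new VM in it at provisioning time (names are placeholders; remember that an existing VM cannot be added to an availability set afterwards):

# A minimal sketch (AzureRM module; names are placeholders).
$avSet = New-AzureRmAvailabilitySet -ResourceGroupName "myRG" -Name "webAvSet" `
    -Location "EastUS" -PlatformFaultDomainCount 3 -PlatformUpdateDomainCount 5

$vmConfig = New-AzureRmVMConfig -VMName "webVM1" -VMSize "Standard_DS1_v2" `
    -AvailabilitySetId $avSet.Id
# ...add OS, image, and network settings to $vmConfig, then:
# New-AzureRmVM -ResourceGroupName "myRG" -Location "EastUS" -VM $vmConfig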

Considerations for virtual machine availability

When configuring availability sets for Azure virtual machines:

  • Configure two or more virtual machines in an availability set for redundancy. The primary purpose of an availability set is to provide resiliency to failure of a single virtual machine. If you do not use multiple virtual machines in an availability set, you gain no benefit from the availability set. In addition, for Internet-facing virtual machines to qualify for 99.95% external connectivity Service Level Agreement (SLA), they must be part of the same availability set (with two or more VMs per set).
    Note: It is critical to understand that it is not possible to add an existing Azure virtual machine to an availability set. You need to specify that a virtual machine will be part of an availability set when you provision the VM.
  • Configure each application tier as a separate availability set. As long as virtual machines in your deployment provide the same functionality, such as a web service or a database management system, you should configure them as part of the same availability set to ensure that at least one VM in each tier is always available.
  • Wherever applicable, combine load balancing with availability sets. You can implement an Azure load balancer in conjunction with an availability set to distribute incoming connections among its virtual machines, as long as the application running on them supports such a configuration. In addition to distributing incoming connections, a load balancer is capable of detecting a virtual machine or application failure and redirecting network traffic to other nodes in the availability set.